\begin{document}
\title{
Many-player entangled state solutions in game theory problems}
\author{Sudhakar Yarlagadda}
\affiliation{CAMCS and TCMP, Saha Institute of Nuclear Physics, Calcutta, India}
\date{\today}
\begin{abstract}
{We propose a non-classical multi-player entangled state which eliminates
the need for communication, yet can solve problems (that require
coordination) better than classical approaches.
For the entangled state,
we propose a Slater determinant of all allowed states of a filled band
in a condensed matter system --
the integer quantum Hall state at filling factor 1.
Such a state
gives the best solution (i.e., best Nash equilibrium)
for some classical stochastic problems where classical solutions
are far from ideal.}
\end{abstract}
\maketitle
\section{INTRODUCTION}
Game theoretical problems dealing with conflict of interest have been
tackled in the recent past with quantum approaches \cite{landsburg}.
It is hoped that
quantum game theory, by exploiting quantum mechanics, would
produce significantly improved solutions.
Of particular interest is how many-particle quantum entanglement
can be harnessed to provide better strategies (in games)
compared to the classical solutions.
Entanglement, which provides correlations between
remote particles, can equip the players with a coordinated
set of actions depending on the state of the particle that they
privately observe.
Thus when players cannot communicate by classical channels, they can still
arrive at an optimal strategy.
Quantum non-locality was first demonstrated by Bell with his famous
inequalities \cite{bell}. Quantum theory predicts correlations among
outcomes of distant measurements which cannot be explained using only
local variables. It has been demonstrated that two photons are correlated
over large distances (of the order of 10 km) thereby violating Bell's
inequalities \cite{tittel}. Thus we have a verification of the basic
assumption of quantum information and computation that quantum systems can be
entangled over large distances and times.
In the past quantum entanglement has been incorporated
in classical two-party games such as
the prisoner's dilemma by Eisert {\em et al.} \cite{eisert}, the battle of
the sexes by Marinatto and Weber \cite{weber}, etc. These authors demonstrated
how optimal solutions can be achieved using entanglement.
The purpose of the present work is to propose many-particle entangled states
and show how they can be used to obtain improved/optimal solutions for
classical problems requiring coordinated action by the players.
\section{Many-particle entangled state}
In condensed matter systems
one frequently encounters bands filled with fermions.
Based on Pauli's exclusion principle,
the ground state of any $N$-state
band filled completely by $N$ spinless-fermions is
a Slater determinant of the complete set of
$N$ single-particle eigenstates of the band.
Such a Slater determinant
is an antisymmetric linear superposition of $N!$ $N$-particle
eigenstates.
Here {\em we consider a case of a degenerate filled band -- the integer quantum Hall state at
filling factor 1}.
In our quantum Hall state,
electrons are chosen to be confined to the $xy$-plane and subjected to
a perpendicular magnetic field. On choosing a symmetric gauge vector
potential, $\vec{A} = 0.5 B (y \widehat {x} -x \widehat{y})$,
the degenerate single-particle wavefunctions
for the lowest Landau level (LLL) are given by:
\begin{eqnarray}
\phi_m (z) \equiv |m \rangle =
\frac{1}{(2 \pi l_0^2 2 ^m m!)^{\frac{1}{2}}} \left ( \frac{z}{l_0} \right )^m
e^{-|z|^2/4l_0^2} ,
\end{eqnarray}
where $z=x-iy$ is the electron position in the complex plane, $m$ is the orbital
angular momentum, and $l_0 \equiv \sqrt{\hbar c /e B}$ is the magnetic length.
The area occupied by the electron in state $|m \rangle$ is
\begin{eqnarray}
\langle m | \pi r^2 |m \rangle = 2(m+1)\pi l_0^2 .
\end{eqnarray}
Thus the LLL can accommodate only $N_e$ electrons given by
\begin{eqnarray}
N_e = (M+1) = \frac{A}{2 \pi l_0^2} ,
\end{eqnarray}
where $A$ is the area of the
system and
$M$ is the largest allowed angular momentum for area $A$.
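As a numerical cross-check (a sketch, not part of the derivation; units are chosen so that the magnetic length $l_0 = 1$), the expectation value $\langle m | \pi r^2 | m \rangle = 2(m+1)\pi l_0^2$ can be verified by directly integrating $|\phi_m|^2$:

```python
import math

def area_expectation(m, dr=1e-3, r_max=15.0):
    """Numerically integrate <m| pi r^2 |m> for the LLL orbital phi_m,
    in units where the magnetic length l_0 = 1."""
    norm = 2.0 * math.pi * 2.0**m * math.factorial(m)
    total = 0.0
    steps = int(r_max / dr)
    for i in range(1, steps):
        r = i * dr
        prob_density = r**(2 * m) * math.exp(-r * r / 2.0) / norm  # |phi_m(r)|^2
        total += math.pi * r * r * prob_density * (2.0 * math.pi * r) * dr
    return total

# the occupied area grows in steps of 2*pi*l_0^2: <m| pi r^2 |m> = 2(m+1) pi
for m in range(5):
    assert abs(area_expectation(m) - 2.0 * (m + 1) * math.pi) < 1e-2
```

Each successive orbital thus occupies an additional ring of area $2\pi l_0^2$, which is what limits the LLL to $N_e = A/(2\pi l_0^2)$ electrons.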
The many-electron system is described by the Hamiltonian
\begin{eqnarray}
H = &&
\sum_j \frac{1}{2 m_e} \left [ -i \hbar \nabla_j - \frac{e}{c}
\vec{A}_j \right ] ^2 + \sum_{j < k} \frac{e^2}{|z_j - z_k |}
\nonumber \\
&&
+
g \mu_B \sum_j
\vec{B} \cdot \vec{s}_j .
\end{eqnarray}
Thus when the LLL (with the lowest Zeeman energy) is completely filled with
$N_e$ electrons (i.e., when LLL is at filling factor $\nu =1$),
the many-particle wavefunction $\Psi (z_1, z_2, ....,z_{N_e})$
is given by the Slater determinant
\begin{eqnarray}
\left| \begin{array}{cccc}
\phi_0(z_1) & \phi_0 (z_2) & \ldots & \phi_0 (z_{N_e}) \\
\phi_1(z_1) & \phi_1 (z_2) & \ldots & \phi_1 (z_{N_e}) \\
\vdots & \vdots & \vdots & \vdots \\
\phi_{N_e -1}(z_1) & \phi_{N_e -1} (z_2) & \ldots & \phi_{N_e -1} (z_{N_e})
\end{array} \right| .
\end{eqnarray}
The
many-particle wavefunction $\Psi (z_1, z_2, ....,z_{N_e})$
for $N_e$ particles
can be
expressed as follows:
\begin{eqnarray}
\Psi (z_1, z_2, ....,z_{N_e}) =
\psi (z_1, z_2, ....,z_{N_e})
e^{-\sum_{l=1}^{N_e}|z_l|^2/4l_0^2} ,
\label{wf1}
\end{eqnarray}
where
\begin{eqnarray}
\!\!\!\!\!\!
\psi (z_1, z_2, ....,z_{N_e}) =
&&
\!\!\!\!\!\!
\prod_{1 \le j < k \le N_e} (z_j -z_k )
\nonumber \\
=
&&
\!\!\!\!\!\!
\sum_{\sigma \in S_{N_e}} {\rm sgn}(\sigma ) z_1^{\sigma(1)-1} ...
z_{N_e}^{\sigma(N_e)-1} ,
\label{wf2}
\end{eqnarray}
where $S_{N_e}$ denotes the set of permutations of $\{1,2, ..., N_e \}$
and ${\rm sgn} (\sigma)$ denotes the signature of the permutation $\sigma$.
Thus we see that $\psi (z_1, z_2, ....,z_{N_e}) $ is a linear
superposition of $N_e!$ states (all with the same probability
of being observed)
and each state
$ z_1^{\sigma(1)-1} ... z_{N_e}^{\sigma(N_e)-1} $
distributes the angular momenta $0, 1, 2, ..., N_e -1$ among the
$N_e$ fermionic particles (at positions $z_1, z_2, ..., z_{N_e}$) in a
distinct way (with no two particles having the same angular momentum)!
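This permutation structure of the Vandermonde product can be checked symbolically for small $N_e$; the following sketch expands $\prod_{j<k}(z_j - z_k)$ and confirms that exactly $N_e!$ monomials survive, their exponent tuples being precisely the permutations of $\{0, 1, ..., N_e - 1\}$, each with coefficient $\pm 1$:

```python
from itertools import combinations, permutations

def vandermonde_terms(n):
    """Expand prod_{j<k} (z_j - z_k) as a dict: exponent tuple -> coefficient."""
    poly = {(0,) * n: 1}
    for j, k in combinations(range(n), 2):
        new = {}
        for exp, coeff in poly.items():
            for idx, sign in ((j, 1), (k, -1)):  # z_j term and -z_k term
                e = list(exp)
                e[idx] += 1
                key = tuple(e)
                new[key] = new.get(key, 0) + sign * coeff
        poly = {key: c for key, c in new.items() if c != 0}
    return poly

for n in (2, 3, 4):
    terms = vandermonde_terms(n)
    # surviving exponent tuples are exactly the n! permutations of {0,...,n-1}
    assert set(terms) == set(permutations(range(n)))
    # every permutation monomial appears with equal weight (coefficient +/- 1)
    assert all(abs(c) == 1 for c in terms.values())
```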
Thus if the
many-particle wavefunction $\Psi (z_1, z_2, ....,z_{N_e})$
is measured for angular momentum of its particles (using for instance
a Stern-Gerlach type of set-up) at positions
$z_1, z_2, ..., z_{N_e}$, then one of the $N_e!$ permutations
of the angular momentum from the set $\{ 0, 1, 2, ..., N_e -1 \}$
will be measured with probability $1/(N_e !)$.
The above fact can be exploited in a game-theoretic context as described
in the next section.
Here it should be pointed out
that, although an antisymmetric wavefunction obtained solely from
Pauli's exclusion principle is in general not an entangled state \cite{shi,peres},
the Coulomb interactions actually produce the same antisymmetric
wavefunction even when the
fermionic nature of the particles is ignored,
i.e., even if the particles are treated as distinguishable classical particles.
Furthermore, for the situation where the g-factor is zero (which can
be achieved in gallium arsenide heterostructures using pressure),
Coulomb interaction energy is minimized when the real space wave
function is antisymmetric and given by Eq. (\ref{wf1}) while the
spin wavefunction is symmetric (with the total spin being maximized
and equal to $N_{e}/2$). This is clearly an entangled state based on correlation effects.
This situation is very similar to that of the electronic wavefunction
in a half-filled degenerate sub-shell in an atom (such as the five electrons
in the 3d sub-shell of ${\rm Mn}^{2+}$), where Hund's rule dictates that the
wavefunction be antisymmetric in real space and symmetric in spin space.
In general, for the quantum Hall situation (at filling factor 1) where one
has at least two species of fermionic
particles with all the particles having
the same charge, spin, and
single particle energy ($\hbar \omega_c/2 - 0.5 g \mu_B B$),
one again obtains [for total number of particles
$N= N_e = A/(2 \pi l_0^2$)] the same many-body
wavefunction [given by Eq. (\ref{wf1})]
which now is certainly entangled due to correlation effects
produced by Coulomb interactions.
Lastly, we would like to add that the above considerations
for minimum Coulomb interaction energy are certainly valid when the
repulsive interaction
is given by a short-range
Dirac-delta function, in which case the interaction energy is zero.
\section{Quantum solutions to classical problems}
In this section we will pose a
couple of
classical problems and
show that entanglement not only significantly improves the solution,
in fact, it also produces the best possible solution.
\subsection{Kolkata restaurant problem}
We will first examine the Kolkata restaurant problem (KRP) \cite{bkc1}
which is a variant of the Minority Game Problem \cite{minority}.
In the KRP (in its minimal form) there are $N$ restaurants
(with $N \rightarrow \infty$) that can
each accommodate only one person and there are $N$ agents to be accommodated.
Each of the $N$ agents adopts a stochastic strategy independently
of the others.
Suppose that, on any day,
each of the $N$ agents chooses randomly one of
the $N$ restaurants; if $m$ ($> 1$) agents show up at any restaurant,
then only one of them (picked randomly) will be served and the remaining $m-1$
go without a meal. It is also understood that each agent can choose only
one restaurant and no more. Then the probability $f$ that a person gets
a meal (or a restaurant gets a customer) on any day follows from
the probability $P(m)$ that any given restaurant gets chosen by $m$ agents,
with
\begin{eqnarray}
P(m)
=
\frac{N!}{(N-m)! m!}p^m (1-p)^{N-m}
=
\frac{\exp(-1)}{m!} ,
\end{eqnarray}
where $p=1/N$ is the probability of choosing any given restaurant; the last
equality (the Poisson limit) holds for $N \rightarrow \infty$.
Hence, the fraction of restaurants that get chosen on any day
is given by
\begin{eqnarray}
f= 1-P(0) = 1- \exp(-1) \approx 0.63 .
\end{eqnarray}
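The value $f = 1 - 1/e \approx 0.63$ is easily reproduced by a direct Monte Carlo simulation of the minimal KRP (a sketch; the restaurant count and seed are arbitrary choices):

```python
import random

def occupied_fraction(n, seed=0):
    """One day of the minimal KRP: n agents each pick one of n restaurants
    uniformly at random; return the fraction of restaurants chosen."""
    rng = random.Random(seed)
    chosen = {rng.randrange(n) for _ in range(n)}
    return len(chosen) / n

frac = occupied_fraction(100_000)
# classical prediction: f = 1 - exp(-1) ~ 0.632
assert abs(frac - 0.6321) < 0.01
```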
Now, we extend the above minimal KRP game to get a more efficient
utilization of restaurants by taking advantage of past experience
of the diners. We stipulate that the
$NF_n$ successful diners
on the $n$th day will visit the
same restaurant on all subsequent days as well,
while the remaining $N-NF_n$ unsuccessful agents of the
$n$th day try stochastically any of the
$N$ restaurants on the next day (i.e., $n+1$th day)
and so on until all customers find a restaurant.
The above procedure can be mathematically modeled to yield the
following recurrence relation
\begin{eqnarray}
F_{n+1}=F_{n}+f(1-F_n)^2 ,
\label{F_n}
\end{eqnarray}
where $F_n$ is the fraction of restaurants
occupied on the $n$th day with
$F_1 = f= 1-1/e$.
Upon making a continuum approximation we get
\begin{eqnarray}
\frac{dF}{dn}= f(1-F)^2 ,
\end{eqnarray}
which yields the solution
\begin{eqnarray}
F = 1 - \frac{e}{e^2+(n-1)
(e-1)} .
\end{eqnarray}
The above solution $F$ turns out to be a good approximation to
the solution for $F_n$ in Eq. (\ref{F_n}) (with error less than 5\%)
as can be seen from Fig. \ref{fig1}.
We see that even after 10 iterations less than 90\% of the
restaurants are filled!
\begin{figure}
\caption{\label{fig1} Fraction of occupied restaurants versus day $n$:
the iterates $F_n$ of Eq. (\ref{F_n}) compared with the continuum
approximation $F$.}
\end{figure}
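The agreement between the recurrence and its continuum approximation can be checked numerically (a sketch):

```python
import math

e = math.e
f = 1 - 1 / e

F = f  # F_1 = 1 - 1/e, the day-1 occupation fraction
for n in range(1, 11):
    # continuum solution at day n
    F_cont = 1 - e / (e**2 + (n - 1) * (e - 1))
    assert abs(F - F_cont) / F < 0.05  # error stays below 5%
    if n < 10:
        F = F + f * (1 - F) ** 2  # recurrence: F_{n+1} = F_n + f (1 - F_n)^2

# even after 10 days, fewer than 90% of the restaurants are occupied
assert F < 0.90
```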
We will now investigate how superior quantum solutions can be obtained for the
KRP.
We will introduce quantum mechanics into the problem
by asking the $N$ agents to share
an entangled $N$-particle quantum Hall state at filling factor 1 described
in the previous section [see Eqs. (\ref{wf1}) \& (\ref{wf2})].
We assign to each of the $N$ restaurants
a unique angular momentum picked from the set
$\{ 0, 1, 2, ..., N -1 \}$. We ask each agent to measure the angular momentum
of a randomly chosen particle from the $N$-particle entangled state.
Then, based on the measured angular momentum, the agent goes to the restaurant
that has his/her particular angular momentum assigned to it. In this
approach all the agents get to eat in a restaurant and all the restaurants
get a customer. Thus we see that the prescribed entangled state
always produces restaurant-occupation probability 1
and is thus superior to the classical solution mentioned above!
Furthermore, the probability that an agent picks a restaurant is still $p=1/N$
and hence
all agents are equally likely to go to any restaurant. Thus, even if there
is an accepted-by-all hierarchy amongst the restaurants (in terms of quality
of food with price of all restaurants being the same),
the entangled state produces an equitable (Pareto optimal) solution where
all agents
have the same probability of going to the best restaurant, or the
second-best restaurant, and so on.
{\it Quite importantly, it can be shown that the chosen
entangled quantum strategy
(i.e., the entangled $N$-particle quantum Hall state at filling factor 1)
actually represents the best Nash equilibrium when there is a restaurant
ranking!} (see Appendix A for details).
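The protocol can be emulated classically (a sketch; the measurement outcome is modeled as a uniformly random permutation of the $N$ angular momenta, which is exactly what measuring the $\nu = 1$ state yields):

```python
import random
from collections import Counter

def quantum_krp_assignment(n, rng):
    """Emulate one measurement of the nu=1 entangled state: a uniformly
    random permutation of the angular momenta {0,...,n-1} over n agents;
    momenta[i] is the restaurant assigned to agent i."""
    momenta = list(range(n))
    rng.shuffle(momenta)
    return momenta

rng = random.Random(1)
n = 50

# every restaurant gets exactly one diner, on every run (occupation probability 1)
for _ in range(100):
    assert sorted(quantum_krp_assignment(n, rng)) == list(range(n))

# each agent is (approximately) equally likely to land in any restaurant,
# so a restaurant ranking is shared equitably
counts = Counter(quantum_krp_assignment(n, rng)[0] for _ in range(20000))
assert max(counts.values()) / 20000 < 2.5 / n  # near the uniform value 1/n
```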
\subsection{Kolkata stadium problem}
We will next introduce a variant of the KRP game, which we will call
the Kolkata stadium problem (KSP). In the KSP, there are $NK$ agents
trapped inside a Kolkata stadium that has $K$ exits.
There is a panic situation of a fire or a bomb-scare and all the
agents have to get out quickly
through the $K$ exits each of which has a capacity
of $\alpha N$
with $\alpha \ge 1$. We assume that all $NK$ agents have
equal access to all the exits and that each agent has
enough time to approach only one exit
before being harmed.
The probability $P(m)$ that any exit gets chosen by $m$ agents is given by
the binomial distribution
\begin{eqnarray}
P(m)
=
\frac{(NK)!}{(NK-m)! m!}p^m (1-p)^{NK-m} ,
\end{eqnarray}
where $p=1/K$ is the probability of choosing any gate.
For a capacity of $\alpha N$ for each gate, the cumulative probability
$P=\sum_{m=1}^{\alpha N}P(m)$ that
(on average) a gate is approached by
$\alpha N$ or fewer agents is given in Table \ref{table1}.
\begin{table}
\caption{\label{table1} The calculated values of the cumulative probability
$P$ for a system with $NK$ persons and $K$ gates with a gate capacity of
$\alpha N$.}
\begin{ruledtabular}
\begin{tabular}{|c|c|c|c||c||c|c|c|c|}
\hline
\multicolumn{1}{|c|}{{\bf $\alpha$}}&
\multicolumn{1}{c|}{{\bf $N$}}&
\multicolumn{1}{c|}{{\bf $K$}} &
\multicolumn{1}{c||}{{\bf $P$}} &
\multicolumn{1}{c||}{{\bf }} &
\multicolumn{1}{c|}{{\bf $\alpha $}} &
\multicolumn{1}{c|}{{\bf $N$}}&
\multicolumn{1}{c|}{{\bf $K$}} &
\multicolumn{1}{c|}{{\bf $P$}}
\\
\hline
1 & 100 & 10 & 0.5266 && 1.05 & 100 & 10 & 0.7221 \\
1 & 500 & 10 & 0.5119 && 1.05 & 500 & 10 & 0.8848 \\
1 & 1000 & 10 & 0.5084 & & 1.05 & 1000 & 10 & 0.9531 \\
1 & 10000 & 10 & 0.5027 & & 1.05 & 10000 & 10 & 1.0000 \\
\hline
\hline
1 & 100 & 20 & 0.5266 && 1.1 & 100 & 10 & 0.8652 \\
1 & 500 & 20 & 0.5119 && 1.1 & 500 & 10 & 0.9907 \\
1 & 1000 & 20 & 0.5084 & & 1.1 & 1000 & 10 & 0.9995 \\
1 & 10000 & 20 & 0.5027 & & 1.1 & 10000 & 10 & 1.0000 \\
\hline
\end{tabular}
\end{ruledtabular}
\end{table}
Thus we see that if a gate has the optimal capacity of $N$ (i.e., $\alpha = 1$),
then $P$ is close to $0.5$ and is not affected by the number of gates $K$
(for small $K$) with $ P \rightarrow 0.5$ for $N\rightarrow \infty$.
However, as $\alpha$ increases even slightly above unity, $P$
increases significantly for fixed values of $N$ and $K$.
Furthermore, for fixed values of $\alpha >1$ and $K$ (with $\alpha$ only
slightly larger than 1 and $K$ being small) $P \rightarrow 1$ as $N$ becomes
large.
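The entries of Table \ref{table1} follow from this binomial sum; a sketch that evaluates it in log space (to avoid overflow in the binomial coefficients) reproduces, for instance, the $N=100$, $K=10$ rows:

```python
import math

def gate_success_probability(N, K, alpha):
    """P = sum_{m=1}^{alpha*N} C(NK, m) p^m (1-p)^(NK-m) with p = 1/K,
    evaluated via log-gamma to avoid overflow."""
    NK = N * K
    p = 1.0 / K
    cap = int(alpha * N)
    total = 0.0
    for m in range(1, cap + 1):
        log_pm = (math.lgamma(NK + 1) - math.lgamma(m + 1)
                  - math.lgamma(NK - m + 1)
                  + m * math.log(p) + (NK - m) * math.log(1 - p))
        total += math.exp(log_pm)
    return total

# reproduces Table I: alpha = 1 gives P near 1/2, alpha = 1.05 already much higher
assert abs(gate_success_probability(100, 10, 1.0) - 0.5266) < 0.001
assert abs(gate_success_probability(100, 10, 1.05) - 0.7221) < 0.001
```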
{\em Here it should be mentioned that,
even when $P \rightarrow 1$ on average,
there can be fluctuations in a stampede
situation, with more than $\alpha N$ people approaching a gate
and thus resulting in fatalities}.
Here too in the KSP game, if the $NK$ agents were to use the entangled
$NK$-particle state given by
the quantum Hall effect state at filling factor 1,
then every agent is assured of safe passage. In this situation, since there
are $NK$ angular momenta and only $K$ gates,
the angular momentum $M_i$ measured by an agent $i$ for
his/her particle should be divided
by $K$ and the remainder be taken to give the appropriate gate number
[i.e., gate number = $M_i$ ({\bf mod} $K$)]. Thus entanglement gives safe exit
with probability 1 even when $\alpha =1$!
Furthermore, even if there is an accepted-by-all ranking of the exits
in terms of comfort of passage, our chosen entangled state corresponds
to the best Nash equilibrium!
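The mod-$K$ rule distributes the $NK$ distinct measured angular momenta perfectly evenly over the gates, as a quick sketch confirms:

```python
from collections import Counter

def gate_assignment(N, K):
    """Map each of the NK distinct angular momenta to a gate via M mod K."""
    return Counter(M % K for M in range(N * K))

# with alpha = 1, every gate receives exactly N agents: safe exit with probability 1
for N, K in ((100, 10), (500, 20)):
    loads = gate_assignment(N, K)
    assert all(loads[g] == N for g in range(K))
```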
\section{Conclusions}
In the $N$-agent KRP game, while the number of satisfactory choices
is only $N!$, in sharp contrast the number of possibilities is $N^N$ when
all the restaurants have the same ranking.
Thus, in the classical stochastic approach,
the probability of getting the best solution, where each restaurant is
occupied by exactly one customer, is $N!/N^N \approx \sqrt{2 \pi N}\, e^{-N}$
(by Stirling's approximation), which is vanishingly small.
Even in the KSP case, it can be shown that there is a vanishingly
small probability [=$\sqrt{K/(2\pi N)^{K-1}}$] of providing safe
passage to all when only $N$ people are allowed to exit from each of
the $K$ gates (i.e., when $\alpha=1$).
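Both estimates follow from Stirling's approximation: $N!/N^N \approx \sqrt{2\pi N}\, e^{-N}$ for the KRP, and $(NK)!/[(N!)^K K^{NK}] \approx \sqrt{K/(2\pi N)^{K-1}}$ for an exactly even split in the KSP. A numerical sketch comparing exact integer arithmetic with the asymptotic formulas (the values of $N$ and $K$ below are arbitrary test cases):

```python
import math

# KRP: probability that N random choices hit all N restaurants is N!/N^N
N = 40
exact_krp = math.factorial(N) / N**N
stirling_krp = math.sqrt(2 * math.pi * N) * math.exp(-N)
assert abs(exact_krp - stirling_krp) / exact_krp < 0.01

# KSP: multinomial probability that NK agents split exactly evenly over K gates
N, K = 50, 3
exact_ksp = math.factorial(N * K) / (math.factorial(N) ** K * K ** (N * K))
asymptotic_ksp = math.sqrt(K / (2 * math.pi * N) ** (K - 1))
assert abs(exact_ksp - asymptotic_ksp) / exact_ksp < 0.02
```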
On the other hand,
in this work we showed how quantum entanglement can produce coordinated
action amongst all the $N$ agents, leading to the best possible solution
with probability 1!
Thus quantum entanglement produces a much more desirable scenario
compared to a classical approach at least for the KRP and the KSP games.
As a candidate for entanglement we could have picked any filled
band system (of condensed matter physics)
even in the absence of a magnetic field. For such an
entangled $N$-particle
state, momentum is a good quantum number.
However, only when the Coulomb interaction
is infinitely large compared to the kinetic energy do we have the
many-body ground state given by the antisymmetric wavefunction
satisfying Pauli's exclusion
principle.
Furthermore, the minimum
spacing between
various particle momenta is only $2\pi \hbar /L$ where $L$ is the linear size
of the system
and hence, to unambiguously determine the momentum of a particle,
one is faced with the uncertainty principle
which fixes the uncertainty in the measured momentum
to be at least $\hbar/2L$.
Next, one can also consider
$N$ identical qudits,
each with $N$ possible states.
By producing
an antisymmetric entangled state from these $N$ qudits,
one can obtain better results than classical approaches.
However, physically realizing a
qudit with a large number of states
is a challenging task \cite{qudit}.
Lastly, although it has not been shown that
our many-particle entangled state
(i.e., the quantum Hall effect state at filling factor 1)
will have
long-distance and also long-term correlations, we are
hopeful of such a demonstration in the future.
\appendix
\section{Best Nash equilibrium in the presence of restaurant ranking}
In an $N$-player game, the set of strategies
$(s_1^* , s_2^* , ..., s_N^* )$ represents a {\it Nash equilibrium}
if, for every player $i$, the strategy $s_i^*$ meets the following requirement
for the payoff function $\$$:
\begin{eqnarray}
&&
\$_i(s_1^*, ....,s_{i-1}^* , s_i^* , s_{i+1}^* , ....,s_N^*)
\nonumber
\\
&& ~~
\ge
\$_i(s_1^*, ....,s_{i-1}^* , s_i , s_{i+1}^* , ....,s_N^*) ,
\end{eqnarray}
for every strategy $s_i$ available to $i$.
In order to illustrate the main idea behind exploiting quantum
strategies, we will consider the simple situation of
two restaurants $R_1$ and $R_2$ with utilities $u_1$ and $u_2$, respectively,
as perceived by two diners $D_1$ and $D_2$. Then we can represent the payoff
for the diners by using the bimatrix displayed in
Table \ref{table2} with diner $D_1$ choices
along the rows and those of $D_2$ along the columns.
\begin{table}
\caption{\label{table2} Payoff bimatrix for the two diners choosing between
restaurants $R_1$ and $R_2$: $D_1$'s choices label the rows and $D_2$'s the
columns.}
\begin{tabular}{|c|c|c|}
\multicolumn{1}{c|}{{}}&
\multicolumn{1}{c|}{{\bf $R_1$}}&
\multicolumn{1}{c|}{{\bf $R_2$}}
\\
\hline
\multicolumn{1}{c}{{$R_1$}}&
\multicolumn{1}{|c|}{{\bf $\left ( \frac{u_1}{2},\frac{u_1}{2} \right ) $}}&
\multicolumn{1}{c|}{{\bf ( $u_1$,$u_2$ )}}
\\
\hline
\multicolumn{1}{c}{{$R_2$}}&
\multicolumn{1}{|c|}{{\bf ( $u_2$,$u_1$ )}} &
\multicolumn{1}{c|}{{\bf $ \left ( \frac{u_2}{2},\frac{u_2}{2} \right )$}}
\\
\hline
\end{tabular}
\end{table}
Here we use the formalism developed
in Ref. \onlinecite{weber}.
We assume that diners $D_{1,2}$ have access to the following entangled state:
\begin{eqnarray}
|\psi_{in} \rangle = a|R_1 R_2 \rangle + b | R_2 R_1 \rangle ,
\label{psi_in}
\end{eqnarray}
where the coefficients satisfy the condition
$|a|^2 + |b|^2 =1$.
The corresponding initial density matrix is given by
\begin{eqnarray}
\rho_{in} = \rho_{in} ^{D_1} \otimes \rho_{in} ^{D_2} =
|\psi_{in} \rangle \langle \psi_{in} | .
\end{eqnarray}
We assume that each player can manipulate his/her state (i.e., restaurant) by
either using the identity $I$ or the Pauli flip operator $\sigma_x$,
which is unitary and has the following property:
\begin{eqnarray}
\sigma_x |R_{1,2} \rangle = | R_{2,1} \rangle .
\end{eqnarray}
We further assume that each diner can transform his part
($\rho_{in} ^{D_{1,2}}$)
of the total density matrix $\rho_{in} $ in the following manner:
\begin{eqnarray}
\rho_{fin}^{D_{1,2}} = p_{1,2} I \rho_{in} ^{D_{1,2}} I^{\dagger}
+
(1- p_{1,2}) \sigma_x \rho_{in} ^{D_{1,2}} \sigma_x^{\dagger} ,
\end{eqnarray}
to obtain the final density matrix
\begin{eqnarray}
\rho_{fin} = \rho_{fin}^{D_{1}} \otimes \rho_{fin}^{D_{2}} .
\end{eqnarray}
In order to evaluate the payoff, we define the payoff operator as follows:
\begin{eqnarray}
\!\!\!\!\!\!\!\!\!\!
P_{1,2} = &&
\!\!\!\!\!\!
u_{1,2}|R_1 R_2 \rangle \langle R_1 R_2 |
+ u_{2,1} |R_2 R_1 \rangle \langle R_2 R_1 |
\nonumber
\\
&&
\!\!\!\!\!\!\!
+ 0.5 u_1 |R_1 R_1 \rangle \langle R_1 R_1 |
+ 0.5 u_2 |R_2 R_2 \rangle \langle R_2 R_2 | .
\end{eqnarray}
Then the payoffs obtained using the following expression
\begin{eqnarray}
\$_{1,2} = {\rm Tr} (P_{1,2} \rho_{fin}) ,
\end{eqnarray}
are given by
\begin{eqnarray}
\$_{1}(p_1,p_2) = &&
\!\!\!\!\!\!
0.5 p_1 p_2 (u_1+u_2)
\nonumber
\\
&&
\!\!\!\!\!\!
+ p_1 \left [ 0.5 u_1 |a|^2 + 0.5 u_2 |b|^2 - u_2 |a|^2 - u_1 |b|^2 \right ]
\nonumber
\\
&&
\!\!\!\!\!\!
- 0.5 p_2 \left [ u_1 |b|^2 + u_2 |a|^2 \right ]
\nonumber
\\
&&
\!\!\!\!\!\!
+ u_2 |a|^2 + u_1 |b|^2 ,
\end{eqnarray}
and
\begin{eqnarray}
\$_{2}(p_1 , p_2 ) = &&
\!\!\!\!\!\!
0.5 p_1 p_2 (u_1+u_2)
\nonumber
\\
&&
\!\!\!\!\!\!
- 0.5 p_1 \left [ u_1 |a|^2 + u_2 |b|^2 \right ]
\nonumber
\\
&&
\!\!\!\!\!\!
+ p_2 \left [ 0.5 u_1 |b|^2 + 0.5 u_2 |a|^2 - u_1 |a|^2 - u_2 |b|^2 \right ]
\nonumber
\\
&&
\!\!\!\!\!\!
+ u_1 |a|^2 + u_2 |b|^2 .
\end{eqnarray}
To determine the Nash equilibria, we stipulate that the following
differences be non-negative:
\begin{eqnarray}
\$_{1}(p_1^*,p_2^*)
- \$_{1}(p_1,p_2^*)
=
(p_1^* -p_1)
&&
\!\!\!\!\!\!
\left [
0.5 p_2^* (u_1+u_2)
\right .
\nonumber
\\
&&
\!\!\!\!\!\!
+ u_1 (0.5 |a|^2 - |b|^2 )
\nonumber
\\
&&
\!\!\!\!\!\!
\left .
- u_2 (|a|^2 - 0.5 |b|^2) \right ],
\nonumber
\\
\label{diff1}
\end{eqnarray}
and
\begin{eqnarray}
\$_{2}(p_1^*,p_2^*)
- \$_{2}(p_1,p_2^*)
=
(p_2^* -p_2)
&&
\!\!\!\!\!\!
\left [
0.5 p_1^* (u_1+u_2)
\right .
\nonumber
\\
&&
\!\!\!\!\!\!
+ u_2 (0.5 |a|^2 - |b|^2 )
\nonumber
\\
&&
\!\!\!\!\!\!
\left .
- u_1 (|a|^2 - 0.5 |b|^2) \right ].
\nonumber
\\
\label{diff2}
\end{eqnarray}
Then, from Eqs. (\ref{diff1}) and (\ref{diff2}),
we obtain the three Nash equilibria $(p_1,p_2) = (1,1), (0,0)$,
and $(\bar{p}_1, \bar{p}_2)$
where
\begin{eqnarray}
\!\!\!\!\!
\bar{p}_1 \equiv
- [u_1 (1-3|a|^2)+u_2 (-2+3|a|^2)]/(u_1 +u_2) ,
\end{eqnarray}
and
\begin{eqnarray}
\!\!\!\!\!
\bar{p}_2 \equiv
- [u_1 (-2+3|a|^2)+u_2 (1-3|a|^2)]/(u_1 +u_2) .
\end{eqnarray}
Next, we note that the differences
\begin{eqnarray}
\$_{1}(1,1)
- \$_{1}(0,0) = (u_2 - u_1)(1 - 2 |a|^2 ) ,
\end{eqnarray}
and
\begin{eqnarray}
\$_{2}(1,1)
- \$_{2}(0,0) = -(u_2 - u_1)(1 - 2 |a|^2 ) ,
\end{eqnarray}
are equal in magnitude but opposite in sign. Hence to obtain the same
preferred
Nash equilibrium [among (1,1) and (0,0)] for both the diners $D_1$ and $D_2$,
we take $|a| =1/\sqrt{2}$ which makes the payoff (for both the diners)
the same at both the
equilibrium points, i.e.,
$\$_{1,2}(1,1)
= \$_{1,2}(0,0) = (u_2 + u_1)/2$.
It can easily be verified, for the third Nash equilibrium
strategy, that
$\$_1(\bar{p}_1,\bar{p}_2) =
\$_2(\bar{p}_1,\bar{p}_2) \le 3(u_1 + u_2)/8 < (u_1 +u_2)/2$.
Thus the entangled state
\begin{eqnarray}
|\psi_{in} \rangle = \frac{|R_1 R_2 \rangle - | R_2 R_1 \rangle}{\sqrt{2}} ,
\end{eqnarray}
corresponds to the best Nash equilibrium.
It can also be argued from the symmetry of the payoff
for the two diners, as shown in Table \ref{table2}, that
one expects the best Nash equilibrium to occur when $|a|=1/\sqrt{2}$
in Eq. (\ref{psi_in}).
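As a consistency check (a sketch; the utilities $u_1, u_2$ below are arbitrary), the closed-form payoff $\$_1(p_1,p_2)$ can be compared against a direct evaluation of ${\rm Tr} (P_{1} \rho_{fin})$ as a probabilistic mixture over the four operator choices $(I,I)$, $(I,\sigma_x)$, $(\sigma_x,I)$, $(\sigma_x,\sigma_x)$, and the equilibrium payoffs quoted above can be verified:

```python
def payoff1_closed(p1, p2, a2, u1, u2):
    """$_1(p1, p2) from the text, with a2 = |a|^2 and b2 = |b|^2 = 1 - a2."""
    b2 = 1 - a2
    return (0.5 * p1 * p2 * (u1 + u2)
            + p1 * (0.5 * u1 * a2 + 0.5 * u2 * b2 - u2 * a2 - u1 * b2)
            - 0.5 * p2 * (u1 * b2 + u2 * a2)
            + u2 * a2 + u1 * b2)

def payoff1_direct(p1, p2, a2, u1, u2):
    """Tr(P_1 rho_fin) as a mixture over the four (I / sigma_x) choices.
    E.g. I x sigma_x sends a|R1 R2> + b|R2 R1> to a|R1 R1> + b|R2 R2>."""
    b2 = 1 - a2
    payoff_II = u1 * a2 + u2 * b2              # state a|R1 R2> + b|R2 R1>
    payoff_IX = 0.5 * u1 * a2 + 0.5 * u2 * b2  # state a|R1 R1> + b|R2 R2>
    payoff_XI = 0.5 * u2 * a2 + 0.5 * u1 * b2  # state a|R2 R2> + b|R1 R1>
    payoff_XX = u2 * a2 + u1 * b2              # state a|R2 R1> + b|R1 R2>
    return (p1 * p2 * payoff_II + p1 * (1 - p2) * payoff_IX
            + (1 - p1) * p2 * payoff_XI + (1 - p1) * (1 - p2) * payoff_XX)

u1, u2 = 3.0, 1.0
for p1 in (0.0, 0.3, 1.0):
    for p2 in (0.0, 0.7, 1.0):
        for a2 in (0.2, 0.5, 0.9):
            assert abs(payoff1_closed(p1, p2, a2, u1, u2)
                       - payoff1_direct(p1, p2, a2, u1, u2)) < 1e-12

# at |a|^2 = 1/2 both pure Nash equilibria pay (u1+u2)/2,
# while the mixed equilibrium (1/2, 1/2) pays only 3(u1+u2)/8
assert abs(payoff1_closed(1, 1, 0.5, u1, u2) - (u1 + u2) / 2) < 1e-12
assert abs(payoff1_closed(0, 0, 0.5, u1, u2) - (u1 + u2) / 2) < 1e-12
assert abs(payoff1_closed(0.5, 0.5, 0.5, u1, u2) - 3 * (u1 + u2) / 8) < 1e-12
```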
The above argument can be extended to the case of
$N$ diners and $N$ restaurants
each with a different ranking \cite{sudhakar}
and one can deduce that the best Nash equilibrium strategy corresponds
to the many-particle entangled state (i.e., the integer quantum Hall
state at filling factor 1) chosen by us.
\end{document}
\begin{document}
\title{A New Sequential Optimality Condition of Cardinality-Constrained Optimization Problems and Application}
\author{ Liping Pang \and Menglong Xue \and Na Xu }
\institute{Liping Pang \at
Dalian University of Technology \\
Dalian, China\\
[email protected]
\and
Menglong Xue, Corresponding author \at
Dalian University of Technology \\
Dalian, China\\
[email protected]
\and
Na Xu \at
Liaoning Normal University\\
Dalian, China\\
[email protected]
}
\date{Received: date / Accepted: date}
\maketitle
\begin{abstract}
In this paper, we consider cardinality-constrained optimization problems and propose a new sequential optimality condition for the continuous relaxation reformulation, which has recently become popular. The new condition is stronger than existing results and remains a first-order necessary condition for cardinality-constrained problems without any additional assumptions. Meanwhile, we provide a weaker problem-tailored constraint qualification, which guarantees that points satisfying the new sequential condition are Mordukhovich-type stationary points. On the other hand, we improve the theoretical results for the augmented Lagrangian algorithm. Under the same assumptions as in existing results, we prove that any feasible accumulation point of the iterative sequence generated by the algorithm satisfies the new sequential optimality condition. Furthermore, the algorithm converges to a Mordukhovich-type (essentially strong) stationary point if the problem-tailored constraint qualification is satisfied.
\end{abstract}
\keywords{Sequential optimality condition \and Cardinality constraints \and Augmented Lagrangian algorithm \and Constraint qualification}
\section{Introduction}
In recent years, $cardinality$-$constrained~optimization~problems$ (CCOP) have attracted great attention due to their wide applications in $portfolio$ $selection$ \cite{protfolio_1,protfolio_2,protfolio_3}, $compressed~sensing$ \cite{compress_1}, $statistical~regression$ \cite{statistic_1,statistic_2} and other fields, and a large number of scholars have tried to solve these problems from different perspectives. According to whether a model transformation is carried out, the existing methods can be divided into direct methods and indirect methods. Since CCOP is non-convex and discontinuous, solving it directly is extremely difficult. Therefore, this paper mainly focuses on indirect methods, among which \cite{Schwartz_2016_siam.J} presents a relaxed reformulation with orthogonality constraints obtained by introducing an auxiliary variable $y$.
The paper \cite{Schwartz_2016_siam.J} studied the relationship between the relaxation problem and CCOP, and proved that the two are equivalent in terms of global solutions and feasibility. Compared with CCOP, the relaxation problem has better structural properties, such as continuity and smoothness, which provide more tools to deal with the problem. However, the relaxation problem is an optimization problem with orthogonality constraints, which means it is highly non-convex and difficult to solve. Because of the similarity between the relaxation problem and $mathematical~programs~with~complementarity~constraints$ (MPCC), a natural idea is to directly apply MPCC's rich theory and numerical methods to the relaxation problem. However, this idea is often not feasible. For example, most MPCC constraint qualifications (CQs), such as MPCC-LICQ, cannot be applied directly to the relaxation problem; even when one can be applied, it often leads to stronger conclusions than in the general MPCC setting. Remark 5.7 of \cite{Schwartz_2016_MP} summarizes the differences between the two in detail. This means that we cannot simply treat the relaxation problem as a special case of MPCC, but should instead develop problem-tailored theory and numerical algorithms. In recent years, as many scholars have continued to study this model, a number of results have been obtained.
With the help of the $tightened~nonlinear~program$ of CCOP, denoted by $TNLP(x^{*})$, \v{C}ervinka et al. \cite{Schwartz_2016_MP} extended classic constraint qualifications to CCOP, proposed several CCOP-tailored constraint qualifications (CC-CQs), and discussed the relationships among them. In addition, Kanzow et al. \cite{Schwartz_2021_ALA} adapted the quasi-normality CQ of \cite{Bertsekas_2002} and obtained a form corresponding to CCOP, and \cite{Schwartz_2021_Sequ} proposed a cone-continuity constraint qualification. At the same time, \cite{Schwartz_2016_siam.J} defined first-order stationarity concepts for the relaxation problem, called CC-strong stationarity (CC-S-stationarity) and CC-Mordukhovich stationarity (CC-M-stationarity), where CC-S-stationarity is equivalent to the $Karush$-$Kuhn$-$Tucker$ (KKT) conditions of the relaxation problem and CC-M-stationarity is equivalent to the KKT conditions of $TNLP(x^{*})$; \cite{Ribeiro_2020} provides a weak-type stationarity. It is worth mentioning that, unlike CC-S-stationarity and weak-type stationarity, CC-M-stationarity involves only the original variable $x$, and \cite{Schwartz_2021_ALA} proves that CC-S-stationarity and CC-M-stationarity are equivalent in the original variable space. Consequently, this paper will focus on CC-M-stationarity.
Because of the similarity between the relaxation problem and MPCC, some researchers have tried to apply classic MPCC algorithms to the relaxation problem. \cite{Schwartz_2016_siam.J} and \cite{Schwartz_2018} respectively applied two classic MPCC regularization methods to the relaxation problem, and both obtained stronger convergence results than in the general MPCC case. However, the regularization strategy actually relaxes the relaxation problem further into a sequence of regularized subproblems and obtains a solution of the relaxation problem by solving these subproblems. Can the relaxation problem be solved directly, without further relaxation? This is a question worthy of attention, and in this paper we give an affirmative answer. The main contributions of this paper are as follows:
\begin{enumerate}[$\bullet$]
\item {We propose a new sequential optimality condition: CC-PAM-stationarity. In recent years, the application of sequential optimality conditions to stopping criteria and unified convergence analyses of algorithms has received great attention. In this area, several sequential optimality conditions have been proposed for $nonlinear~programming$ (NLP) \cite{Andreani_2011_AKKT,Andreani_2010_CAKKT,Haeser_2011_SAKKT,Andreani_2019_PAKKT}, and \cite{Andreani_2019_PAKKT} gives the relationships between them. However, there are still very few such results for CCOP. \cite{Ribeiro_2020} establishes a sequential optimality condition, called CC-approximate weak stationarity (CC-AW-stationarity), but this condition is formulated in the $(x,y)$ space. Therefore, Kanzow et al. \cite{Schwartz_2021_Sequ} proposed CC-approximate Mordukhovich stationarity (CC-AM-stationarity), which involves only $x$, and proved that it is equivalent to CC-AW-stationarity. However, for some problems the CC-AM-stationary points are numerous, and these points are often far from the optimal solutions (e.g. Example \ref{exam1}). In order to obtain fewer candidate points for optimality, we propose CC-PAM-stationarity, which is strictly stronger than CC-AM-stationarity, and we show that it is a necessary optimality condition for CCOP without any assumptions.}
\item {We define a new problem-tailored constraint qualification, called CC-PAM-regularity, which is weaker than the CC-AM-regularity proposed in \cite{Schwartz_2021_Sequ}. We prove that any CC-M-stationary point is CC-PAM-stationary and, conversely, that a CC-PAM-stationary point is CC-M-stationary if the CC-PAM-regularity condition is satisfied. In other words, the CC-PAM-regularity condition is a CC-CQ. Borrowing the terminology of \cite{ALGENCAN}, a constraint qualification with this property is called a strict constraint qualification (SCQ). Furthermore, we show that the CC-PAM-regularity condition is the weakest SCQ relative to CC-PAM-stationarity.}
\item{We apply CC-PAM-stationarity to the safeguarded augmented Lagrangian method and further improve its convergence theory. In contrast to the regularization methods, \cite{Schwartz_2021_ALA} and \cite{Schwartz_2021_Sequ} apply the safeguarded augmented Lagrangian method for general NLP directly to the relaxation problem, and \cite{Schwartz_2021_ALA} uses the corresponding solver ALGENCAN \cite{ALGENCAN,ALGENCAN_1,ALGENCAN_2} to solve \emph{portfolio} problems, verifying the advantages of the augmented Lagrangian algorithm over the regularization methods. In addition, the two regularization methods above both require exact KKT points of their subproblems, while the safeguarded augmented Lagrangian method only requires the subproblems to be solved inexactly. Kanzow et al. \cite{Schwartz_2021_Sequ} show that any feasible limit point of the safeguarded augmented Lagrangian method is CC-AM-stationary. We prove that under mild conditions such as semialgebraicity (or the same conditions as in \cite{Schwartz_2021_Sequ}), these points are CC-PAM-stationary, which is strictly stronger than CC-AM-stationary. If, in addition, the CC-PAM-regularity condition holds, they are CC-M-stationary points.}
\end{enumerate}
The paper is organized as follows: we give some basic definitions and preliminary results in Sect. 2, propose a new sequential optimality condition in Sect. 3, and define a new problem-tailored constraint qualification in Sect. 4. The convergence of the safeguarded augmented Lagrangian method is discussed in Sect. 5, and Sect. 6 gives a brief summary.
Notation: $I_{g}(x)=\{i:g_{i}(x)=0,~i=1,\dots,m\}$, $I_0(x)=\{\imath:x_\imath=0,~\imath=1,\dots,n\}$, $I_\pm(x)=\{\imath:x_\imath\neq 0,~\imath=1,\dots,n\}$, $x_+=\max\{x,0\}$; $|\cdot|$ denotes the $l_1$ norm, $\parallel\cdot\parallel$ the Euclidean norm, $\parallel\cdot\parallel_\infty$ the infinity norm, and $\parallel \cdot\parallel_0$ the $l_0$ norm (the number of nonzero elements); $e=(1,\dots,1)^T\in\mathbb{R}^{n}$, $e_i$ is the vector whose $i$th component is 1 and all others are 0, and $x\circ y$ denotes the Hadamard product of $x$ and $y$.
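For readers who implement these quantities, the index sets and norms above can be computed directly; a minimal Python/NumPy sketch (the array values are hypothetical):

```python
import numpy as np

x = np.array([1.5, 0.0, -2.0, 0.0])
y = np.array([0.0, 1.0, 0.0, 1.0])

l0 = np.count_nonzero(x)                          # ||x||_0, number of nonzero entries
I_0 = [i for i in range(len(x)) if x[i] == 0]     # I_0(x), indices of zero components
I_pm = [i for i in range(len(x)) if x[i] != 0]    # I_pm(x), indices of nonzero components
hadamard = x * y                                  # x o y, componentwise product
```

Note that for this particular pair, $x\circ y=0$, which is exactly the complementarity constraint appearing in the relaxation below.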
\section{Preliminaries}
In this paper, we consider the optimization problem
\begin{equation}
\min~ f(x) \quad s.t.\hspace{1em}g(x)\leq 0,
\quad h(x)=0,\quad \parallel x\parallel_{0}\ \leq \kappa,
\label{primary_problem}
\end{equation}
where $\kappa<n$ is an integer, $f: \mathbb{R}^{n}\rightarrow\mathbb{R}$, $g: \mathbb{R}^{n}\rightarrow\mathbb{R}^{m}$, and $h: \mathbb{R}^{n}\rightarrow\mathbb{R}^{p}$ are continuously differentiable, and $\parallel x\parallel_0$ is also called the cardinality of $x$. Problem \eqref{primary_problem} is therefore called a \emph{cardinality-constrained optimization problem} (CCOP). For $x^*\in\mathbb{R}^n$, the tightened NLP problem ($TNLP(x^*)$) of CCOP is defined as
\begin{equation}
\min~ f(x) \quad s.t.\hspace{1em}g(x)\leq 0,
\quad h(x)=0,\quad x_\imath=0,\hspace{0.5em} \imath\in I_{0}(x^{*}).
\label{TNLP}
\end{equation}
The relaxation problem of CCOP is defined as
\begin{equation}
\left\{\begin{aligned}
&\min\quad f(x)\\
&~s.t.\hspace{1em}g(x)\leq 0,\quad h(x)=0,\\
&\qquad x\circ y=0,\quad n-\kappa-e^{T}y\leq 0,\quad y\leq e.
\end{aligned}\right.\label{relax_problem}
\end{equation}
Note that problem \eqref{relax_problem} has one fewer non-negativity constraint than the formulation in \cite{Schwartz_2016_siam.J}. It is shown in \cite{Schwartz_2021_ALA} that this change does not affect the original conclusions and leads to a larger feasible set. In the introduction we mentioned the relationship between CCOP and problem \eqref{relax_problem}; below we give the precise statements.
\begin{proposition}
\textup{\cite{Schwartz_2016_siam.J}}~Let $x\in\mathbb{R}^n$. Then the following statements hold.
\begin{itemize}
\item {$x$ is a feasible point (or a global minimizer) of CCOP if and only if there exists $y\in\mathbb{R}^n$ such that $(x,y)$ is a feasible point (or a global minimizer) of problem \eqref{relax_problem}.}
\item{If $x$ is a local minimizer of CCOP, then there exists $y\in\mathbb{R}^n$ such that $(x,y)$ is a local minimizer of problem \eqref{relax_problem}; conversely, if $(x,y)$ is a local minimizer of problem \eqref{relax_problem} and $\|x\|_0=\kappa$ holds, then $x$ is a local minimizer of CCOP.}
\end{itemize}
\end{proposition}
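The "only if" direction of the first statement is constructive: given $x$ with $\|x\|_0\leq\kappa$, setting $y_\imath=1$ exactly for $\imath\in I_0(x)$ yields a feasible pair $(x,y)$, since then $x\circ y=0$, $e^Ty=n-\|x\|_0\geq n-\kappa$, and $y\leq e$. A minimal sketch (the numeric data are hypothetical):

```python
import numpy as np

def lift_to_relaxation(x, kappa):
    """Given x with ||x||_0 <= kappa, construct y so that (x, y) is
    feasible for the relaxed constraints of the relaxation problem."""
    assert np.count_nonzero(x) <= kappa
    n = len(x)
    y = (x == 0).astype(float)        # y_i = 1 exactly where x_i = 0
    # verify the relaxed constraints: x o y = 0, n - kappa - e^T y <= 0, y <= e
    assert np.all(x * y == 0)
    assert n - kappa - y.sum() <= 0
    assert np.all(y <= 1)
    return y

y = lift_to_relaxation(np.array([2.0, 0.0, 0.0, -1.0]), kappa=2)
```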
There are several stationarity concepts associated with the relaxation problem \eqref{relax_problem}.
\begin{definition}
\cite{Schwartz_2016_siam.J}~Let $(x^{*},y^*)$ be feasible for \eqref{relax_problem}. Then it is called
\begin{enumerate}[(1)]
\item {CC-S-stationary, if there exists $(\lambda,\mu,\gamma)\in \mathbb{R}^m\times\mathbb{R}^p\times\mathbb{R}^n$ such that
\begin{enumerate}[$\bullet$]
\item {$\nabla f(x^{*})+\nabla g(x^{*})\lambda+\nabla h(x^{*})\mu+\gamma= 0$;}
\item {$\lambda_{i}=0,~\forall i\notin I_{g}(x^{*})$; $\gamma_{\imath}=0$ for all $\imath$ such that $y_\imath^*=0$.}
\end{enumerate}}
\item{CC-M-stationary, if there exists $(\lambda,\mu,\gamma)\in \mathbb{R}^m\times\mathbb{R}^p\times\mathbb{R}^n$ such that
\begin{enumerate}[$\bullet$]
\item {$\nabla f(x^{*})+\nabla g(x^{*})\lambda+\nabla h(x^{*})\mu+\gamma= 0$;}
\item {$\lambda_{i}=0,~\forall i\notin I_{g}(x^{*});~\gamma_{\imath}=0,~\forall~ \imath\in I_{\pm}(x^{*})$.}
\end{enumerate}}
\end{enumerate}
\end{definition}
Obviously, CC-M-stationarity is weaker than CC-S-stationarity, but it depends only on the variable $x$, so it can be used as an optimality measure for CCOP. Another important reason to focus on CC-M-stationary points in this paper is the validity of the following result.
\begin{proposition}
\label{prop2}
\textup{\cite{Schwartz_2021_ALA}}~Let $(x,y)$ be feasible for \eqref{relax_problem}. If $(x,y)$ is a CC-M-stationary point, then there exists $z\in\mathbb{R}^n$ such that $(x,z)$ is a CC-S-stationary point.
\end{proposition}
Let us now recall some basic concepts needed for the theoretical analysis \cite{Variational}. The outer limit of a set-valued map $\Theta:~\mathbb{R}^n\rightrightarrows\mathbb{R}^m$ is
\[\limsup_{x\rightarrow x^*}\Theta(x):=\left\{z:\exists x^k\rightarrow x^*,~\exists z^k\rightarrow z,~z^k\in \Theta(x^k)\right\}.\]
For a function $l:\mathbb{R}^n\rightarrow\mathbb{R}$, the (lower) level set $l_\alpha$ is defined as
\[l_\alpha:=\left\{~x\in\mathbb{R}^n:~l(x)\leq \alpha\right\}.\]
If every level set of the function $l$ is bounded, then $l$ is said to be level bounded. Since some of the conclusions of this article are obtained under a semialgebraicity assumption, let us briefly introduce the basic definition and properties of semialgebraic sets and functions. We say a set $C\subseteq\mathbb{R}^n$ is semialgebraic if it can be written as a finite union of sets of the form
\[\left\{x \in \mathbb{R}^{n}: u_{i}(x)=0,~v_{i}(x)<0,~i=1,\ldots, p\right\},\]
where $u_i(x)$, $v_i(x)$ are polynomial functions. A function is called semialgebraic if its graph is a semialgebraic set; in particular, polynomial functions are semialgebraic. Owing to their strong closure properties, semialgebraic functions form a very broad class of functions.
\begin{lemma}
\textup{\cite{Attouch_2010}}~The following properties hold.
\begin{itemize}
\item {A linear combination of finitely many semialgebraic functions is semialgebraic.}
\item {The composition of semialgebraic functions is semialgebraic.}
\item{The generalized inverse of a semialgebraic function is semialgebraic.}
\item{Let $F(x)=\sup\limits_{y\in C}f(x,y)$ and $G(x)=\inf\limits_{y\in C}f(x,y)$. If the set $C$ and the function $f$ are semialgebraic, then both $F$ and $G$ are semialgebraic.}
\end{itemize}
\label{semialge}
\end{lemma}
Semialgebraic functions have another important property: they satisfy the Kurdyka-\L ojasiewicz property.
\begin{definition}[KL property]
\textup{\cite{Attouch_2009}}~We say the function $f$ satisfies the Kurdyka-\L ojasiewicz property if, for any limiting-critical point $x^*$ ($0\in \partial f(x^*)$), there exist $\epsilon,~C>0$, $\theta\in[0,1)$ such that
\begin{equation}
\label{KL}
C\hspace{0.1em}|f(x)-f(x^*)|^\theta\leq\|v\|,\quad \forall \hspace{0.1em}\|x-x^*\|\leq\epsilon,~v\in\partial f(x).
\end{equation}
\end{definition}
If the constraint set of an NLP is denoted by
\[X:=\{x:~g_{i}(x)\leq 0,~i=1,\dots,m;~h_{j}(x)=0,~ j=1,\dots ,p\},\]
then standard MFCQ is defined as follows.
\begin{definition}[MFCQ]
\label{MFCQ}
Let $x\in X$. We say $x$ satisfies the Mangasarian-Fromovitz CQ if the gradient vectors $\nabla h_j(x)~(j=1,\dots,p)$ are linearly independent and there exists $d\in\mathbb{R}^n$ such that
\[\nabla h_j(x)^Td=0,~\forall j=1,\dots,p,\qquad \nabla g_{i}(x)^Td<0,~\forall i\in I_g(x).\]
\end{definition}
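For a concrete (hypothetical) instance, MFCQ can be verified by exhibiting a direction $d$: take $g(x)=x_1^2+x_2^2-1\leq 0$ and $h(x)=x_1-x_2=0$ at the feasible point $x=(1/\sqrt{2},1/\sqrt{2})^T$, where $g$ is active. A minimal sketch:

```python
import numpy as np

# Hypothetical NLP: g(x) = x1^2 + x2^2 - 1 <= 0, h(x) = x1 - x2 = 0,
# checked at the feasible point x = (1/sqrt(2), 1/sqrt(2)), where g is active.
x = np.array([1.0, 1.0]) / np.sqrt(2.0)
grad_g = 2.0 * x                  # gradient of g at x
grad_h = np.array([1.0, -1.0])    # gradient of h at x (nonzero, hence lin. indep.)
d = np.array([-1.0, -1.0])        # candidate MFCQ direction
# MFCQ holds: grad_h^T d = 0 and grad_g^T d < 0
```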
\section{A New Sequential Optimality Condition}
Recently, due to their excellent properties, sequential optimality conditions have become very popular. Although there are many theoretical results for NLP problems, in order to avoid auxiliary variables we do not directly apply the NLP results to problem \eqref{relax_problem}, but instead propose a new problem-tailored sequential optimality condition.
\begin{definition}[CC-PAM-stationary]
Let $x^{*}$ be feasible for CCOP. We say that $x^{*}$ is CC-positive approximate Mordukhovich stationary if there exists a sequence $\{(x^{k},\lambda^{k},\mu^{k},\gamma^{k})\}\subseteq\mathbb{R}^n\times\mathbb{R}_+^m\times\mathbb{R}^p\times\mathbb{R}^n$ such that:
\begin{enumerate}[$(a)$]
\item {$x^{k}\rightarrow x^{*}$, $\nabla f(x^{k})+\nabla g(x^{k})\lambda^{k}+\nabla h(x^{k})\mu^{k}+\gamma^{k}\rightarrow 0$;}
\item {$\lambda_{i}^{k}=0,~\forall i\notin I_{g}(x^{*}),~\gamma_{\imath}^{k}=0,~\forall \imath\in I_{\pm}(x^{*})$;}
\item {$\lambda_{i}^{k}g_{i}(x^{k})>0$, if $\lim\limits_{k}\frac{\lambda_{i}^{k}}{\pi_k}>0$;}
\item {$\mu_{j}^{k}h_{j}(x^{k})>0$, if $\lim\limits_{k}\frac{|\mu_{j}^{k}|}{\pi_k}>0$;}
\item {$\gamma_{\imath}^{k}x_{\imath}^{k}>0$, if $\lim\limits_{k}\frac{|\gamma_{\imath}^{k}|}{\pi_k}>0$;}
\end{enumerate}
where $\pi_k=\parallel (1,\lambda^{k},\mu^{k},\gamma^{k})\parallel_{\infty}$. A sequence satisfying conditions $(a)$-$(e)$ is called a CC-PAM sequence.
\label{CC-PAM}
\end{definition}
The conditions in Definition \ref{CC-PAM} only need to hold for sufficiently large $k$: if there exists $N$ such that conditions $(a)$-$(e)$ are satisfied for all $k\geq N$, then setting $\hat{x}^{k}=x^{N+k}$ yields a CC-PAM sequence. Observe that conditions $(a)$-$(b)$ are the same as in CC-AM-stationarity, which gives the following conclusion.
\begin{proposition}
Let $x^{*}$ be feasible for CCOP. If $x^{*}$ is a CC-PAM-stationary point, then it is a CC-AM-stationary point.
\end{proposition}
The converse of this conclusion does not hold, as the following example shows.
\begin{exam}
\label{exam1}
We consider
\begin{equation}
\label{exam1_pro}
\min_{x\in\mathbb{R}^3}~\frac{1}{2}\left[(x_1-1)^2+(x_2-1)^2\right]\quad s.t.~x_1x_3\leq0,~\|x\|_0\leq 2.
\end{equation}
Obviously, problem \eqref{exam1_pro} has the unique global minimizer $(1,1,0)^T$. Let $x=(a,1,0)^T$, where $0<a<1$. Take
\[x^k=(a,1,\frac{1-a}{k})^T,\quad\lambda^k=k,\quad\gamma^k=(0,0,-ka)^T.\]
It is easy to verify that this sequence satisfies conditions $(a)$-$(b)$, so $x$ is a CC-AM-stationary point; but $x$ is not CC-PAM-stationary, because for any sequence satisfying conditions $(a)$-$(b)$, $\gamma_3^k$ and $x_3^k$ must eventually have different signs, which violates condition $(e)$.
\end{exam}
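The computations in Example \ref{exam1} can be checked numerically; the sketch below (with the hypothetical choice $a=0.5$) confirms that the residual in condition $(a)$ vanishes identically while $\gamma_3^k x_3^k<0$:

```python
import numpy as np

a = 0.5                                    # any 0 < a < 1; x = (a, 1, 0)
for k in [1, 10, 100]:
    xk = np.array([a, 1.0, (1 - a) / k])
    lam = float(k)                         # multiplier for g(x) = x1*x3 <= 0
    gam = np.array([0.0, 0.0, -k * a])
    grad_f = np.array([xk[0] - 1, xk[1] - 1, 0.0])
    grad_g = np.array([xk[2], 0.0, xk[0]])     # gradient of g(x) = x1*x3
    residual = grad_f + lam * grad_g + gam     # quantity in condition (a)
    # gam[2] * xk[2] < 0 for every k, so condition (e) fails
```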
As Example \ref{exam1} shows, CC-AM-stationary points can be numerous, and these points may be far from the optimal solution, while the CC-PAM-stationary points form a smaller set of candidates; that is, CC-PAM-stationarity is strictly stronger than CC-AM-stationarity. The following Theorem \ref{thm1} states that CC-PAM-stationarity is a necessary optimality condition for CCOP without any additional assumptions.
\begin{theorem}
\label{thm1}
Let $x^{*}$ be a local minimizer of CCOP. Then $x^{*}$ is a CC-PAM-stationary point.
\end{theorem}
{\it Proof}
~If $x^{*}$ is a local minimizer of CCOP, then it is also a local minimizer of $TNLP(x^{*})$, so there exists $\epsilon>0$ such that $x^{*}$ is the only global minimizer of the following problem
\begin{equation}
\left\{\begin{aligned}
\min\quad &f(x)+\frac{1}{2}\parallel x-x^{*}\parallel^{2}&\\
~s.t.\hspace{1em}&g(x)\leq 0,\quad h(x)=0,\quad \\
&x_\imath=0,\hspace{0.5em} \imath\in I_{0}(x^{*}), \quad \parallel x-x^{*}\parallel\leq \epsilon.
\end{aligned}\right.\label{local_TNLP}
\end{equation}
Let $p(x)=\parallel h(x)\parallel^2+\parallel g(x)_+\parallel^2+\sum\limits_{\imath\in I_0(x^*)}x_\imath^2$, and define the local penalized problem
\begin{equation}
\left\{\begin{aligned}
\min\quad &f(x)+\frac{1}{2}\parallel x-x^{*}\parallel^{2}+\frac{M_k}{2}p(x)&\\
~s.t.\hspace{1em}&\parallel x-x^{*}\parallel\leq\epsilon,
\end{aligned}\right.\label{local_penality_TNLP}
\end{equation}
where $0<M_k\rightarrow+\infty$. For each $M_k$, the objective function of problem \eqref{local_penality_TNLP} is continuous and the feasible set is compact, so a global minimizer exists, denoted by $x^k$. Moreover, the sequence $\{x^k\}$ is bounded and therefore has a convergent subsequence; for simplicity, assume $x^k\rightarrow\bar{x}$. We now prove that $\bar{x}=x^*$.
Since $x^k$ is the global minimizer of the problem \eqref{local_penality_TNLP}, then
\begin{equation}
f(x^k)+\frac{1}{2}\parallel x^k-x^{*}\parallel^{2}+\frac{M_k}{2}p(x^k)\leq f(x^*).
\label{penality_1}
\end{equation}
Dividing both sides of \eqref{penality_1} by $M_k$ and taking the limit, we obtain $p(\bar{x})\leq0$, so $\bar{x}$ is feasible for the local problem \eqref{local_TNLP}. In addition, from \eqref{penality_1},
\[f(x^k)+\frac{1}{2}\parallel x^k-x^*\parallel^2\leq f(x^*).\]
Letting $k\rightarrow+\infty$ yields
\[f(\bar{x})+\frac{1}{2}\parallel \bar{x}-x^*\parallel^2\leq f(x^*),\]
but since $x^*$ is the unique global minimizer of problem \eqref{local_TNLP}, we must have $\bar{x}=x^*$, that is, $x^k\rightarrow x^*$.
For sufficiently large $k$ we clearly have $\parallel x^k-x^*\parallel<\epsilon$; for simplicity, assume this holds for all $k$. The necessary optimality condition of problem \eqref{local_penality_TNLP} then yields
\[\nabla f(x^k)+\nabla g(x^k)(M_kg(x^k)_+)+\nabla h(x^k)(M_kh(x^k))+\sum_{\imath\in I_0(x^*)}M_kx_{\imath}^ke_\imath=x^*-x^k.\]
And we define
\[\lambda^k=M_kg(x^k)_+,\quad \mu^k=M_kh(x^k),\quad \gamma_{\imath}^k=M_kx_{\imath}^k,\imath\in I_0(x^*),\quad \gamma_{\imath}^k=0,\imath\in I_\pm(x^*),\]
then
\[\nabla f(x^k)+\nabla g(x^k)\lambda^k+\nabla h(x^k)\mu^k+\gamma^k\rightarrow 0.\]
For all $i\notin I_{g}(x^*)$ we have $g_i(x^*)<0$, hence $g_i(x^k)<0$ for $k$ sufficiently large, so $\lambda_{i}^k=0$. Meanwhile, by the definition of $\gamma^k$, obviously
\[\gamma_{\imath}^k=0,\quad\forall\imath\in I_\pm(x^*).\]
If $\lambda_{i}^k > 0$, then since $\lambda_i^k=M_kg_i(x^k)_+$ we have $g_i(x^k)>0$, so
\[\lambda_i^kg_i(x^k)>0.\]
Similarly, if $\mu_{j}^k\neq 0$, then $h_j(x^k)\neq 0$; and if $\gamma_\imath^k\neq 0$, we obtain $x_\imath^k\neq0$. So
\begin{equation*}
\mu_{j}^kh_j(x^k)=M_k(h_j(x^k))^2>0\quad\text{and}\quad\gamma_\imath^kx_\imath^k=M_k(x_\imath^k)^2>0.
\end{equation*}
In summary, $\{(x^k,~\lambda^k,~\mu^k,~\gamma^k)\}$ is a CC-PAM sequence, that is, $x^*$ is a CC-PAM-stationary point.
\qed
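The penalty-multiplier construction in the proof above can be illustrated on a hypothetical one-dimensional instance, $\min~x$ subject to $h(x)=x=0$ with $x^*=0$: the penalized objective is $x+\frac{1}{2}x^2+\frac{M_k}{2}x^2$, its unique minimizer solves $1+x+M_kx=0$, and the multiplier estimates $\mu^k=M_kh(x^k)$ converge to the Lagrange multiplier $-1$. A minimal sketch:

```python
# Toy instance (hypothetical) of the penalty construction:
#   min f(x) = x   s.t.  h(x) = x = 0,   x* = 0.
# Penalized objective: x + (1/2)(x - 0)^2 + (M_k/2) x^2,
# whose unique minimizer solves 1 + x + M_k x = 0.
xs, mus = [], []
for Mk in [10.0, 1e3, 1e5]:
    xk = -1.0 / (1.0 + Mk)       # global minimizer of the penalized problem
    mu_k = Mk * xk               # multiplier estimate mu^k = M_k h(x^k)
    xs.append(xk)
    mus.append(mu_k)
# x^k -> x* = 0 and mu^k -> -1; note mu^k h(x^k) = M_k (x^k)^2 > 0,
# exactly the sign property used in condition (d)
```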
Theorem \ref{thm1} and Example \ref{exam1} show that the CC-PAM-stationary points we propose can serve as candidates for the optimal solution and are better suited as a measure of optimality than CC-AM-stationarity. Another advantage of sequential optimality conditions is that they are independent of any specific algorithm: every theoretical property of CC-PAM-stationary points is inherited by any algorithm that generates them. Therefore, sequential optimality conditions provide a tool for establishing a unified framework for optimality theory.
The converse of Theorem \ref{thm1} does not hold, as the following example shows.
\begin{exam}
Consider a simple geometric problem in the plane: given $z=(1,1)^T$, find the point on the coordinate axes closest to $z$. This problem can be modeled as
\begin{equation}
\label{exam2_pro}
\min~\frac{1}{2}\left[(x_1-1)^2+(x_2-1)^2\right]\quad s.t.~\|x\|_0\leq 1.
\end{equation}
Obviously, $(1,0)^T$ and $(0,1)^T$ are the two global minimizers of the problem. We now show that $x^*=(0,0)^T$ is a CC-PAM-stationary point. Take
\[x^k=(\frac{1}{k+1},\frac{1}{k+1})^T,\quad \gamma^k=(1-\frac{1}{k+1},1-\frac{1}{k+1})^T;\]
it is easy to verify that $\{(x^k,\gamma^k)\}$ satisfies the conditions $(a)$-$(e)$, that is, $x^*=(0,0)^T$ is a CC-PAM-stationary point, but it is not a local minimizer of the problem \eqref{exam2_pro}.
\end{exam}
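A quick numerical check of the sequence above (a sketch; by construction the residual in condition $(a)$ is identically zero, and condition $(e)$ holds):

```python
import numpy as np

# The sequence from the example showing x* = (0,0) is CC-PAM-stationary
for k in [1, 10, 100]:
    xk = np.full(2, 1.0 / (k + 1))
    gam = 1.0 - xk                       # gamma^k
    grad_f = xk - 1.0                    # gradient of f at x^k
    residual = grad_f + gam              # quantity in condition (a)
    sign_ok = bool(np.all(gam * xk > 0)) # condition (e): gamma^k_i * x^k_i > 0
```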
We know that CC-M-stationarity is stronger than CC-AM-stationarity, and the CC-PAM-stationarity we propose is also strictly stronger than CC-AM-stationarity. An interesting question is whether CC-M-stationarity is stronger than CC-PAM-stationarity, and under what conditions the two are equivalent. This issue is addressed in detail in the next section.
\section{A New Constraint Qualification}
For a feasible point $x^*$ of CCOP and $\alpha\geq 0$, $\beta\geq 0$, $x\in\mathbb{R}^n$, we define the set
\begin{equation*}
\Theta(x,\alpha,\beta)=\left\{\nabla g(x)\lambda+\nabla h(x)\mu+\gamma\left|
\begin{aligned}
(\lambda,\mu,\gamma)&\in\mathbb{R}_+^m\times\mathbb{R}^p\times\mathbb{R}^n,\\
\lambda_{i}=0,~\forall i\notin I_g&(x^*),~\gamma_{\imath}=0,~~\forall\imath\in I_\pm(x^*),\\
\lambda_{i}g_i(x)\geq\alpha,~~&\text{if}~~\lambda_{i}~>\beta\parallel(1,\lambda,\mu,\gamma)\parallel_\infty,\\
\mu_{j}h_j(x)\geq\alpha,~~&\text{if}~~|\mu_{j}|>\beta\parallel(1,\lambda,\mu,\gamma)\parallel_\infty,\\
\gamma_{\imath}x_\imath\geq\alpha,~~&\text{if}~~|\gamma_{\imath}|>\beta\parallel(1,\lambda,\mu,\gamma)\parallel_\infty\\
\end{aligned}\right.\right\}.
\end{equation*}
Obviously, if $x^*$ is a CC-M-stationary point, then this can be written as
\begin{equation}
-\nabla f(x^*)\in\Theta(x^*,0,0).
\label{CC_M_reform}
\end{equation}
In addition, CC-PAM-stationarity can be expressed as an outer limit of these sets, and the following conclusion holds.
\begin{lemma}
$x^*$ is CC-PAM-stationary $\iff -\nabla f(x^*)\in\limsup\limits_{
x\rightarrow x^*,
\alpha\downarrow 0,\beta\downarrow 0
}\Theta(x,\alpha,\beta).$
\label{CC-PAM-set}
\end{lemma}
{\it Proof}~~
"$\Rightarrow$" ~If $x^*$ is CC-PAM-stationary, from Definition \ref{CC-PAM}, there is a sequence $\{(x^k,\lambda^k,\mu^k,\gamma^k)\}$ such that the conditions $(a)$-$(e)$ holds. Take
\[\theta^k=\nabla g(x^k)\lambda^k+\nabla h(x^k)\mu^k+\gamma^k,\]
By condition $(a)$ we know $\nabla f(x^k)+\theta^k\rightarrow 0$, hence $\theta^k\rightarrow\theta^*:=-\nabla f(x^*)$.
To prove the conclusion, it suffices to find $\{\alpha_k\}$, $\{\beta_k\}$ such that $\alpha_k\downarrow 0$, $\beta_k\downarrow 0$ and $\theta^k\in\Theta(x^k,\alpha_k,\beta_k)$.
Let $\pi_k=\parallel (1,\lambda^k,\mu^k,\gamma^k)\parallel_\infty$; then the sequence $\left\{\frac{(\lambda^k,\mu^k,\gamma^k)}{\pi_k}\right\}$ has a convergent subsequence, and for simplicity we assume it converges. Let
\begin{equation*}
I=\left\{i\left|\lim\limits_{k\rightarrow\infty}\frac{\lambda_{i}^k}{\pi_k}>0\right.\right\},~~J=\left\{j\left|\lim\limits_{k\rightarrow\infty}\frac{|\mu_{j}^k|}{\pi_k}>0\right.\right\},~~K=\left\{\imath\left|\lim\limits_{k\rightarrow\infty}\frac{|\gamma_{\imath}^k|}{\pi_k}>0\right.\right\},
\end{equation*}
so we can take
\[\alpha_k=\min\left\{(\lambda_{i}^k g_{i}(x^k))_{i\in I},~(\mu_{j}^k h_{j}(x^k))_{j\in J},~(\gamma_{\imath}^k x_\imath^k)_{\imath\in K},~\frac{1}{k}\right\}.\]
Obviously $\alpha_k\rightarrow 0$, and for $k$ sufficiently large we obtain
\[\frac{\lambda_{i}^k}{\pi_k}>\max\left\{(\frac{\lambda_{i}^k}{\pi_k})_{i\notin I},~(\frac{|\mu_{j}^k|}{\pi_k})_{j\notin J},~(\frac{|\gamma_{\imath}^k|}{\pi_k})_{\imath\notin K},~\frac{1}{k}\right\}\quad \forall i\in I.\]
Similar conclusions hold for $\mu$ and $\gamma$. Let
\[\beta_k=\max\left\{(\frac{\lambda_{i}^k}{\pi_k})_{i\notin I},~(\frac{|\mu_{j}^k|}{\pi_k})_{j\notin J},~(\frac{|\gamma_{\imath}^k|}{\pi_k})_{\imath\notin K},~\frac{1}{k}\right\};\]
then $\beta_k\rightarrow 0$. Combining this with the non-negativity of $\{\alpha_k\}$ and $\{\beta_k\}$, we may assume $\alpha_k\downarrow 0$ and $\beta_k\downarrow 0$ (taking a subsequence if necessary), and we obtain
\[\theta^k\in\Theta(x^k,\alpha_k,\beta_k).\]
"$\Leftarrow$"~~By hypothesis, there is $\{x^k\}$, $\{\alpha_k\}$, $\{\beta_k\}$ such that
\begin{equation*}
x^k\rightarrow x^*,~\alpha_k\downarrow 0,~\beta_k\downarrow 0,~\Theta(x^k,\alpha_k,\beta_k)\ni\theta^k\rightarrow-\nabla f(x^*).
\end{equation*}
Therefore, there is $\{(\lambda^k,\mu^k,\gamma^k)\}$ such that
\[\theta^k=\nabla g(x^k)\lambda^k+\nabla h(x^k)\mu^k+\gamma^k,\]
so $\nabla f(x^k)+\theta^k\rightarrow 0$, that is
\[\nabla f(x^k)+\nabla g(x^k)\lambda^k+\nabla h(x^k)\mu^k+\gamma^k\rightarrow 0.\]
Since $\theta^k\in\Theta(x^k,\alpha_k,\beta_k)$, we obtain $\lambda_{i}^k=0~(\forall i\notin I_g(x^*)),~\gamma_{\imath}^k=0~(\forall\imath\in I_\pm(x^*))$. In addition, if $\lim\limits_{k\rightarrow\infty}\frac{|\mu_{j}^k|}{\pi_k}>0$, then $\frac{|\mu_{j}^k|}{\pi_k}>\beta_k$ for all $k$ large enough, which implies
\[\mu_{j}^kh_{j}(x^k)\geq\alpha_k>0,\]
that is, condition $(d)$ is satisfied. Similar conclusions hold for $\lambda$ and $\gamma$, so $x^{*}$ is CC-PAM-stationary.
\qed
Now, we give a new regularity condition.
\begin{definition}[CC-PAM-regularity]
We say a feasible point $x^*$ of CCOP satisfies the CC-PAM-regularity condition, if
\[\limsup\limits_{
x\rightarrow x^*,~
\alpha\downarrow 0,~\beta\downarrow 0
}\Theta(x,\alpha,\beta)\subseteq\Theta(x^*,0,0).\]
\label{CC_PAM_regular}
\end{definition}
\begin{rem}
The CC-PAM-regularity condition is weaker than the CC-AM-regularity condition. Take
\begin{equation*}
\hat{\Theta}(x)=\left\{\nabla g(x)\lambda+\nabla h(x)\mu+\gamma\left|
\begin{aligned}
(\lambda,\mu,\gamma)&\in\mathbb{R}_+^m\times\mathbb{R}^p\times\mathbb{R}^n,\\
\lambda_{i}=0,~\forall i\notin I_g&(x^*),~\gamma_{\imath}=0,~~\forall\imath\in I_\pm(x^*)
\end{aligned}\right.\right\}.
\end{equation*}
Clearly $\hat{\Theta}(x^*)=\Theta(x^*,0,0)$, and $\Theta(x,\alpha,\beta)\subseteq\hat{\Theta}(x)$ for all $\alpha\geq 0$, $\beta\geq 0$ and $x\in\mathbb{R}^n$. Therefore, if the CC-AM-regularity condition holds, that is, $\limsup_{x\rightarrow x^*}\hat{\Theta}(x)\subseteq\hat{\Theta}(x^*)=\Theta(x^*,0,0)$, then $\limsup_{x\rightarrow x^*,~
\alpha\downarrow 0,~\beta\downarrow 0}\Theta(x,\alpha,\beta)\subseteq\Theta(x^*,0,0)$.
\end{rem}
Now we give the relationship between CC-PAM-stationarity and CC-M-stationarity.
\begin{theorem}
\label{PAM->M}
Let $x^*$ be feasible for CCOP. Then the following statements hold.
\begin{enumerate}[$(\romannumeral1)$]
\item {If $x^*$ is a CC-M-stationary point, then it is a CC-PAM-stationary point.}
\item {If $x^*$ is a CC-PAM-stationary point, and CC-PAM-regularity condition holds at $x^*$, then $x^*$ is a CC-M-stationary point.}
\item {If for any continuously differentiable function $f$ the following implication holds:
\[x^*~\text{CC-PAM-}stationary\quad \Longrightarrow\quad x^*~\text{CC-M-}stationary,\]
then CC-PAM-regularity condition holds at $x^*$.}
\end{enumerate}
\end{theorem}
{\it Proof}~$(\romannumeral1)$
~If $x^*$ is a CC-M-stationary point, then
\[-\nabla f(x^*)\in\Theta(x^*,0,0)\subseteq\limsup\limits_{
x\rightarrow x^*,~
\alpha\downarrow 0,~\beta\downarrow 0
}\Theta(x,\alpha,\beta).\]
$(\romannumeral2)$~By Lemma \ref{CC-PAM-set}, we obtain
\[-\nabla f(x^*)\in\limsup\limits_{
x\rightarrow x^*,~
\alpha\downarrow 0,~\beta\downarrow 0
}\Theta(x,\alpha,\beta)\subseteq\Theta(x^*,0,0),\]
so $x^*$ is CC-M-stationary.
$(\romannumeral3)$~Take $\theta^*\in\limsup\limits_{
x\rightarrow x^*,~
\alpha\downarrow 0,~\beta\downarrow 0
}\Theta(x,\alpha,\beta)$ and define $f(x)=-x^T\theta^*$, so that $-\nabla f(x^*)=\theta^*$. By Lemma \ref{CC-PAM-set}, $x^*$ is CC-PAM-stationary for this $f$; by hypothesis, $x^*$ is then CC-M-stationary, i.e., $\theta^*\in \Theta(x^*,0,0)$. Since $\theta^*$ was arbitrary, we conclude
\[\limsup\limits_{
x\rightarrow x^*,~
\alpha\downarrow 0,~\beta\downarrow 0
}\Theta(x,\alpha,\beta)\subseteq \Theta(x^*,0,0).\]
\qed
In Theorem \ref{thm1}, we have proved that any local minimizer of CCOP (or $TNLP(x^*)$) satisfies the sequential optimality condition (CC-PAM-stationarity), and Theorem \ref{PAM->M}$~(\romannumeral2)$ explains
\begin{equation}
\label{SCQ}
\text{CC-PAM}~+~\text{CC-PAM-regularity}\quad\Longrightarrow\quad \text{CC-M},
\end{equation}
in other words, the CC-PAM-regularity condition is a CC-CQ. Following \cite{ALGENCAN}, a constraint qualification satisfying property \eqref{SCQ} is called a strict constraint qualification (SCQ). Conclusion $(\romannumeral3)$ means that the CC-PAM-regularity condition is the weakest SCQ relative to CC-PAM-stationarity. Next, we apply CC-PAM-stationarity and the CC-PAM-regularity condition to strengthen the theoretical results of the augmented Lagrangian method.
\section{Convergence of Safeguarded Augmented Lagrangian Method}
This section discusses the convergence of the safeguarded augmented Lagrangian method applied directly to the relaxation problem \eqref{relax_problem}. Let $\Lambda=(\lambda,\mu,\gamma,\delta,\eta)\in\mathbb{R}_+^m\times\mathbb{R}^p\times\mathbb{R}^n\times\mathbb{R}_+\times\mathbb{R}_+ ^n$ and $\rho>0$; then the augmented Lagrangian function of problem \eqref{relax_problem} can be written as
\begin{equation}
\begin{aligned}
\mathscr{L}(x,&y,\Lambda,\rho)=f(x)+\frac{\rho}{2}\left[\left\|\left(g(x)+\frac{\lambda}{\rho}\right)_{+}\right\|^{2}+\left\| h(x)+\frac{\mu}{\rho}\right\|^{2}+\right.\\
&\left.\left\| x\circ y+\frac{\gamma}{\rho}\right\|^{2}+\left\| \left(n-\kappa-e^{T}y+\frac{\delta}{\rho}\right)_{+}\right\|^{2}+\left\| \left(y-e+\frac{\eta}{\rho}\right)_{+}\right\|^{2}\right].
\end{aligned}
\label{aug_funtion}
\end{equation}
Now we state the safeguarded augmented Lagrangian method \cite{ALGENCAN}.
\begin{alg}
\label{SALM}
Safeguarded Augmented Lagrangian Method (SALM)\\
{\upshape\bfseries Step 1}(Initialization) Given $(x^{0},y^{0})\in\mathbb{R}^{n}\times\mathbb{R}^{n}$, $\lambda_{max}>0$, $\mu_{min}<\mu_{max}$, $\gamma_{min}<\gamma_{max}$, $\delta_{max}>0$, $\eta_{max}>0$, $\tau>1$, $\sigma>1$, $\{\epsilon_k\}\in\mathbb{R}_+$ and $\epsilon_k\downarrow 0$. Choose initial values $\bar{\lambda}^0\in[0,\lambda_{max}]^m$, $\bar{\mu}^0\in[\mu_{min},\mu_{max}]^p$, $\bar{\gamma}^0\in[\gamma_{min},\gamma_{max}]^n$, $\bar{\delta}^0\in[0,\delta_{max}]$, $\bar{\eta}^0\in[0,\eta_{max}]^n$, $\rho_{0}>0$ and set $k=1$.\\
{\upshape\bfseries Step 2}(Update of the iterate) Compute $(x^{k},y^{k})$ as an approximate solution of
\begin{equation}
\min~\mathscr{L}(x,y,\bar{\Lambda}^{k-1},\rho_{k-1})
\label{AL_sub_pro}
\end{equation}
such that
\begin{equation}
\parallel \nabla\mathscr{L}(x^k,y^k,\bar{\Lambda}^{k-1},\rho_{k-1})\parallel\leq\epsilon_k.
\label{nabla->0}
\end{equation}
{\upshape\bfseries Step 3}(Update of the approximate multipliers)
\begin{align}
\lambda^k&=(\rho_{k-1} g(x^k)+\bar{\lambda}^{k-1})_+,\hspace{3em}\mu^k=\rho_{k-1} h(x^k)+\bar{\mu}^{k-1},\label{lamada_update}\\
\gamma^k&=\rho_{k-1} x^k\circ y^k+\bar{\gamma}^{k-1},\hspace{3.5em}\eta^k=(\rho_{k-1} (y^k-e)+\bar{\eta}^{k-1})_+,\\
\delta^k&=(\rho_{k-1} (n-\kappa-e^Ty^k)+\bar{\delta}^{k-1})_+.\quad\ \label{eta_update}
\end{align}
{\upshape\bfseries Step 4} (Update of the penalty parameter) Take
\begin{align*}
U^k&:=\min\left\{-g(x^k),\frac{\bar{\lambda}^{k-1}}{\rho_{k-1}}\right\},~V^k:=\min\left\{-(n-\kappa-e^Ty^k),\frac{\bar{\delta}^{k-1}}{\rho_{k-1}}\right\},\\
R^k&:=\min\left\{-(y^k-e),\frac{\bar{\eta}^{k-1}}{\rho_{k-1}}\right\},
\end{align*}
if $k=1$ or
\begin{equation}
\begin{aligned}
&\max\{\parallel U^{k-1}\parallel,\parallel h(x^{k-1})\parallel,\parallel x^{k-1}\circ y^{k-1}\parallel,\parallel V^{k-1}\parallel,\parallel R^{k-1}\parallel\}\\
&\geq \tau\max\{\parallel U^k\parallel,\parallel h(x^k)\parallel,\parallel x^k\circ y^k\parallel,\parallel V^k\parallel,\parallel R^k\parallel\},
\end{aligned}
\label{rho_bound}
\end{equation}
set $\rho_k=\rho_{k-1}$, otherwise set $\rho_k=\sigma\rho_{k-1}$.\\
{\upshape\bfseries Step 5} (Update of the safeguarded multipliers)
Choose $\bar{\lambda}^k\in[0,\lambda_{max}]^m$, $\bar{\mu}^k\in[\mu_{min},\mu_{max}]^p$, $\bar{\gamma}^k\in[\gamma_{min},\gamma_{max}]^n$, $\bar{\delta}^k\in[0,\delta_{max}]$, $\bar{\eta}^k\in[0,\eta_{max}]^n$.
Set $k\leftarrow k+1$, go to Step 2.
\end{alg}
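To make the steps concrete, the following is a hedged Python sketch of Algorithm \ref{SALM} on a hypothetical toy CCOP instance without $g$ or $h$ constraints, $\min \frac{1}{2}\|x-z\|^2$ s.t. $\|x\|_0\leq 1$ with $z=(1,0.5)^T$. The subproblems \eqref{AL_sub_pro} are solved inexactly with SciPy's BFGS, and Step 5 safeguards by clipping the multiplier estimates onto fixed bounds (one common choice, not the only one):

```python
import numpy as np
from scipy.optimize import minimize

# Toy CCOP (hypothetical): min 1/2*||x - z||^2  s.t.  ||x||_0 <= kappa,
# solved via the relaxation in (x, y) with the augmented Lagrangian.
n, kappa = 2, 1
z = np.array([1.0, 0.5])

def aug_lag(v, gam, delta, eta, rho):
    x, y = v[:n], v[n:]
    pen = (np.sum((x * y + gam / rho) ** 2)
           + max(n - kappa - y.sum() + delta / rho, 0.0) ** 2
           + np.sum(np.maximum(y - 1.0 + eta / rho, 0.0) ** 2))
    return 0.5 * np.sum((x - z) ** 2) + 0.5 * rho * pen

def aug_lag_grad(v, gam, delta, eta, rho):
    x, y = v[:n], v[n:]
    r1 = x * y + gam / rho
    r2 = max(n - kappa - y.sum() + delta / rho, 0.0)
    r3 = np.maximum(y - 1.0 + eta / rho, 0.0)
    gx = (x - z) + rho * r1 * y
    gy = rho * r1 * x - rho * r2 + rho * r3
    return np.concatenate([gx, gy])

gam, delta, eta = np.zeros(n), 0.0, np.zeros(n)  # safeguarded multipliers
rho, tau, sigma = 1.0, 2.0, 10.0
v = np.concatenate([z, np.zeros(n)])             # start at (x, y) = (z, 0)
infeas_old = np.inf
for k in range(25):
    # Step 2: minimize the augmented Lagrangian inexactly
    v = minimize(aug_lag, v, args=(gam, delta, eta, rho),
                 jac=aug_lag_grad, method="BFGS").x
    x, y = v[:n], v[n:]
    # Step 3: multiplier updates
    gam = gam + rho * x * y
    delta = max(delta + rho * (n - kappa - y.sum()), 0.0)
    eta = np.maximum(eta + rho * (y - 1.0), 0.0)
    # Step 4: increase rho unless infeasibility decreased by a factor tau
    infeas = max(np.abs(x * y).max(), n - kappa - y.sum(), (y - 1.0).max())
    if infeas < 1e-10:
        break
    if infeas_old < tau * infeas:
        rho = min(rho * sigma, 1e8)
    infeas_old = infeas
    # Step 5: safeguard by clipping the estimates onto fixed bounds
    gam = np.clip(gam, -1e3, 1e3)
    delta = min(delta, 1e3)
    eta = np.minimum(eta, 1e3)
```

On this instance the iterates approach a feasible point of the relaxation; since $x\circ y\approx 0$ and $e^Ty\geq n-\kappa$ at the limit, the returned $x$ satisfies the cardinality constraint up to numerical tolerance.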
Algorithm \ref{SALM} augments the classic augmented Lagrangian method with safeguarded multipliers, which further improves convergence: the whole-sequence convergence required by the classic methods is relaxed to subsequence convergence \cite{Kanzow_2017}. The update rule for the safeguarded multipliers in Step 5 is not unique; the most popular choice is projection of the multiplier estimates onto the safeguard intervals. In addition, it should be emphasized that no stopping criterion is specified in Algorithm \ref{SALM}; we explore this issue in the subsequent convergence analysis.
Before proceeding to the analysis of convergence, a useful assumption is given.
\begin{assumption}
Assume that $g:\mathbb{R}^n\rightarrow \mathbb{R}^m$ and $h:\mathbb{R}^n\rightarrow \mathbb{R}^p$ in CCOP are semialgebraic functions.
\label{assum1}
\end{assumption}
In Sect. 2 we introduced the basic concepts and properties of semialgebraic functions, explaining that Assumption \ref{assum1} is a relatively mild condition covering a large class of problems. In the subsequent analysis we will see that the semialgebraicity assumption can be further relaxed. Let $p(x,y,\Lambda,\rho)=\frac{1}{\rho}\left[\mathscr{L}(x,y,\Lambda,\rho)-f(x)\right]$; this is the penalty part of \eqref{aug_funtion} (excluding the penalty parameter). Under Assumption \ref{assum1}, the following conclusion is easily obtained from Lemma \ref{semialge}.
\begin{lemma}
If Assumption \ref{assum1} holds, then for any given $\Lambda$, $\rho$, the function $p(x,y,\Lambda,\rho)$ is semialgebraic.
\label{lemma2}
\end{lemma}
Let $\{(x^k,y^k)\}$ be the iterative sequence generated by Algorithm \ref{SALM}. It is known that if $\{x^k\}$ is bounded on a subsequence, then $\{y^k\}$ is bounded on the corresponding subsequence (for a detailed proof, see \cite{Schwartz_2021_ALA}). This property shows that boundedness of the entire iterate sequence follows from boundedness in the $x$ space alone (that is, in the original problem). A sufficient condition is given below.
\begin{lemma}
If $f$ is level bounded, then for any given $\Lambda$, $\rho$, $\mathscr{L}(x,y,\Lambda,\rho)$ is also level bounded.
\label{level_bound}
\end{lemma}
{\it Proof}
~Let
\begin{equation*}
T(y,\delta,\eta,\rho)=\frac{\rho}{2}\left[\left\|\left(n-\kappa-e^{T}y+\frac{\delta}{\rho}\right)_{+}\right\|^{2}+\left\| \left(y-e+\frac{\eta}{\rho}\right)_{+}\right\|^{2}\right].
\end{equation*}
Meanwhile, for any $\alpha\in\mathbb{R}$, let $f_{\alpha}$, $T_{\alpha}$ denote the respective $\alpha$-level sets of $f$ and $T$. By hypothesis, $f_{\alpha}$ is bounded, and for $T(y,\delta,\eta,\rho)$ we have
\[\parallel y\parallel\rightarrow\infty\quad\Longrightarrow\quad T(y,\delta,\eta,\rho)\rightarrow\infty,\]
therefore, $T_{\alpha}$ is also bounded. Since
\[\{(x,y):\mathscr{L}(x,y,\Lambda,\rho)\leq\alpha\}\subseteq f_{\alpha}\times T_{\alpha},\]
it follows that $\mathscr{L}(x,y,\Lambda,\rho)$ is level bounded.
\qed
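The coercivity argument above can be sketched numerically. The following Python snippet is a minimal illustration, not part of the proof: the concrete values of $n$, $\kappa$, $\delta$, $\eta$, $\rho$ and the growth directions are illustrative assumptions, and $e$ denotes the all-ones vector. It evaluates the penalty term $T(y,\delta,\eta,\rho)$ along rays with $\|y\|\rightarrow\infty$ and shows $T$ blowing up, which is exactly why every level set $T_\alpha$ is bounded.

```python
import numpy as np

def T(y, delta, eta, rho, n, kappa):
    """Penalty term T(y, delta, eta, rho) from the proof of the lemma.

    e is the all-ones vector and (.)_+ is the componentwise positive part.
    """
    e = np.ones_like(y)
    pos = lambda v: np.maximum(v, 0.0)
    t1 = pos(n - kappa - e @ y + delta / rho)  # scalar positive-part term
    t2 = pos(y - e + eta / rho)                # vector positive-part term
    return 0.5 * rho * (t1 ** 2 + np.linalg.norm(t2) ** 2)

# Illustrative parameters (assumptions, not taken from the paper).
n, kappa, rho = 5, 2, 1.0
delta, eta = 0.3, 0.3 * np.ones(5)

# Along y = s*(1,...,1) with s -> +inf, the (y - e)_+ term blows up ...
vals = [T(s * np.ones(5), delta, eta, rho, n, kappa) for s in (10.0, 100.0, 1000.0)]
# ... and along s -> -inf, the (n - kappa - e^T y)_+ term blows up,
val_neg = T(-100.0 * np.ones(5), delta, eta, rho, n, kappa)
# so T -> infinity whenever ||y|| -> infinity, i.e. every level set is bounded.
```

Either positive-part term dominates depending on the direction of growth, so no direction in $y$ space escapes the penalty.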
Lemma \ref{level_bound} shows that $\mathscr{L}$ is level bounded uniformly in $\Lambda$ and $\rho$. If the subproblem \eqref{AL_sub_pro} is solved with a descent algorithm, then the sequence generated by Algorithm \ref{SALM} must be bounded. We now discuss the convergence of Algorithm \ref{SALM}.
\begin{theorem}
\label{convergence}
Let $(x^*,y^*)$ be an accumulation point of $\{(x^k,y^k)\}$ generated by Algorithm \ref{SALM} which is feasible for problem \eqref{relax_problem}, and suppose that Assumption \ref{assum1} holds. Then $x^*$ is a CC-PAM-stationary point.
\end{theorem}
{\it Proof}~
For simplicity, we assume, passing to a subsequence if necessary, that $(x^k,y^k)\rightarrow (x^*,y^*)$. By \eqref{nabla->0}, we obtain
\[\nabla f(x^k)+\nabla g(x^k)\lambda^k+\nabla h(x^k)\mu^k+\gamma^k\circ y^k\rightarrow 0.\]
We now split the proof into two cases.\\
$\romannumeral1)~\{\rho_k\}$ is bounded.
If $\{\rho_k\}$ is bounded, then combined with the boundedness of $\{\bar{\Lambda}^k\}$ and \eqref{lamada_update}-\eqref{eta_update}, we know that $\{\Lambda^k\}$ is also bounded. To avoid repeatedly passing to subsequences, we assume $\Lambda^k\rightarrow\Lambda$; then
\[\nabla f(x^*)+\nabla g(x^*)\lambda+\nabla h(x^*)\mu+\gamma\circ y^*=0.\]
Let $\mathcal{I}=\{i:\lambda_{i}>0\}$, $\mathcal{J}=\{j:\mu_{j}\neq 0\}$, $\mathcal{K}=\{\imath:\gamma_\imath y_{\imath}^*\neq 0\}\subseteq I_0(x^*)$. If all three sets are empty, then setting $\hat{x}^k=x^*$, $\hat{\lambda}^k=\hat{\mu}^k=\hat{\gamma}^k=0$ shows that $x^*$ is a CC-PAM-stationary point. Otherwise, if at least one of them is nonempty, then by Lemma 1 of \cite{Andreani_2012} there exist $I\subseteq\mathcal{I}$, $J\subseteq\mathcal{J}$, $K\subseteq\mathcal{K}$ and $(\hat{\lambda}_I,\hat{\mu}_J,\hat{\gamma}_K)$ such that
\begin{align}
&\nabla f(x^*)+\sum_{i\in I}\hat{\lambda}_i\nabla g_i(x^*)+\sum_{j\in J}\hat{\mu}_j\nabla h_j(x^*)+\sum_{\imath\in K}\hat{\gamma}_\imath e_\imath=0\label{hat-nabla};\\
&\hat{\lambda}_i\cdot\lambda_i>0,~i\in I;\label{hat-lambda}\\
&\hat{\mu}_j\cdot\mu_j>0,~j\in J;\label{hat-mu}\\
&\hat{\gamma}_\imath\cdot(\gamma_\imath y_\imath^*)>0,~\imath\in K;\label{hat-gamma}
\end{align}
Moreover, the set of vectors
\[\mathcal{F}=\{\nabla g_{i}(x^*)~(i\in I),~\nabla h_{j}(x^*)~(j\in J),~e_{\imath}~(\imath\in K)\}\]
is linearly independent.
Take
\begin{equation}
\hat{\lambda}_i^k=\begin{cases}
\hat{\lambda}_i & i\in I,\\
0 & \text{otherwise},
\end{cases}\qquad
\hat{\mu}_j^k=\begin{cases}
\hat{\mu}_j & j\in J,\\
0 & \text{otherwise},
\end{cases}\qquad
\hat{\gamma}_\imath^k=\begin{cases}
\hat{\gamma}_\imath & \imath\in K,\\
0 & \text{otherwise}.
\end{cases}
\label{hat}
\end{equation}
The next key problem is to find a sequence $\{\hat{x}^k\}$, $\hat{x}^k\rightarrow x^*$ such that $\{(\hat{x}^k,\hat{\lambda}^k,\hat{\mu}^k,\hat{\gamma}^k)\}$ is a CC-PAM sequence. Let
\[J_+:=\{j:\hat{\mu}_j>0\},~J_-:=\{j:\hat{\mu}_j<0\},~K_+:=\{\imath:\hat{\gamma}_\imath>0\},~K_-:=\{\imath:\hat{\gamma}_\imath<0\},\]
so that obviously $J_+\cup J_-=J$ and $K_+\cup K_-=K$. We define
\begin{align*}
Z:=\{x:g_i(x)\geq 0,&~h_{j_+}(x)\geq 0,~h_{j_-}(x)\leq 0,~x_{\imath_+}\geq 0,~x_{\imath_-}\leq 0,\\
&i\in I,~j_+\in J_+,~j_-\in J_-,~\imath_+\in K_+,~\imath_-\in K_-\}.
\end{align*}
Since the set of vectors $\mathcal{F}$ is linearly independent, $Z$ satisfies the LICQ at $x^*$, and hence also the MFCQ. Thus, there exists $d\in\mathbb{R}^n$ such that
\begin{equation*}
\nabla g_i(x^*)^Td>0,~\nabla h_{j_+}(x^*)^Td>0,~\nabla h_{j_-}(x^*)^Td<0,~e_{\imath_+}^Td>0,~e_{\imath_-}^Td< 0,
\end{equation*}
where $i\in I,~j_+\in J_+,~j_-\in J_-,~\imath_+\in K_+,~\imath_-\in K_-$. For simplicity, we set $\parallel d\parallel=1$, and take
\[\hat{x}^k=x^*+t_{k}d,\]
where $t_k\downarrow 0$; this implies $\hat{x}^k\rightarrow x^*$. By \eqref{hat-nabla} and \eqref{hat}, we have
\[\nabla f(\hat{x}^k)+\nabla g(\hat{x}^k)\hat{\lambda}^k+\nabla h(\hat{x}^k)\hat{\mu}^k+\hat{\gamma}^k\rightarrow 0.\]
For $i\notin I_g(x^*)$, we have $g_i(x^*)<0$, and hence $g_i(x^k)<0$ for $k$ sufficiently large. By \eqref{rho_bound}, we know
\[\parallel U^k\parallel\rightarrow0~\Longrightarrow~\bar{\lambda}_i^{k-1}\rightarrow 0.\]
Furthermore, \eqref{lamada_update} implies
\[\lambda_i^k=(\rho_{k-1} g_i(x^k)+\bar{\lambda}_i^{k-1})_+=0\quad \text{for all $k$ sufficiently large},\]
then $\lambda_i=0$, that is, $i\notin\mathcal{I}$. Hence, by \eqref{hat}, we have
\begin{equation}
\hat{\lambda}_i^k=0,\quad \forall i \notin I_g(x^*).
\label{b-lam-satis}
\end{equation}
For an index $\imath\in I_\pm(x^*)$, we know $x_\imath^*\neq0$ and $y_\imath^*=0$, so $\gamma_\imath y_\imath^*=0$, i.e., $\imath\notin\mathcal{K}$. By \eqref{hat}, we have
\begin{equation}
\hat{\gamma}_\imath^k=0, \quad \imath\in I_\pm(x^*).
\label{b-gamma-satis}
\end{equation}
The following verifies that $\{(\hat{x}^k,\hat{\lambda}^k,\hat{\mu}^k,\hat{\gamma}^k)\}$ satisfies conditions $(c)$-$(e)$ of Definition \ref{CC-PAM}. Set
\begin{align*}
\Gamma:=\min\{\nabla g_i(x^*)^Td,&~\nabla h_{j_+}(x^*)^Td,~-\nabla h_{j_-}(x^*)^Td,~e_{\imath_+}^Td,~-e_{\imath_-}^Td,\\
&i\in I,~j_+\in J_+,~j_-\in J_-,~\imath_+\in K_+,~\imath_-\in K_-\}.
\end{align*}
If $\hat{\lambda}_i^k\neq 0$, then $i\in I$. By \eqref{b-lam-satis}, we know $I\subseteq I_g(x^*)$. This implies
\begin{align*}
g_i(\hat{x}^k)&=g_i(x^*)+\nabla g_i(x^*)^T(\hat{x}^k-x^*)+r(\parallel \hat{x}^k-x^*\parallel)\\
&=t_k\nabla g_i(x^*)^Td+r(t_k),
\end{align*}
where $r(t_k)$ denotes a higher-order infinitesimal of $t_k$, i.e., $r(t_k)/t_k\rightarrow 0$. Dividing both sides of the above formula by $t_k$, for $k$ sufficiently large we have
\[\frac{g_i(\hat{x}^k)}{t_k}=\nabla g_i(x^*)^Td+\frac{r(t_k)}{t_k}\geq\frac{\Gamma}{2}>0,\]
hence, $\hat{\lambda}_i^kg_i(\hat{x}^k)>0$.
If $\hat{\mu}_j^k\neq 0$, then $j\in J$. We discuss only $j\in J_-$ here (the case $j\in J_+$ is analogous). For any $j\in J_-$, when $k$ is sufficiently large, we have
\[\frac{h_j(\hat{x}^k)}{t_k}=\nabla h_j(x^*)^Td+\frac{r(t_k)}{t_k}\leq-\frac{\Gamma}{2}<0,\]
then $\hat{\mu}_j^kh_j(\hat{x}^k)>0$.
Analogously, if $\hat{\gamma}_\imath^k\neq0$, then by \eqref{hat} we know $\imath\in K\subseteq I_0(x^*)$, so $x_\imath^*=0$ and
\begin{align*}
\hat{x}_\imath^k&=x_\imath^*+t_kd_\imath=t_ke_\imath^Td<0,\quad&\imath\in K_-,\\
\hat{x}_\imath^k&=x_\imath^*+t_kd_\imath=t_ke_\imath^Td>0,\quad&\imath\in K_+,
\end{align*}
hence, for any $\hat{\gamma}_\imath^k\neq0$, we obviously have $\hat{\gamma}_\imath^k\hat{x}_\imath^k>0$.
In summary, $\{(\hat{x}^k,\hat{\lambda}^k,\hat{\mu}^k,\hat{\gamma}^k)\}$ is a CC-PAM sequence, and therefore $x^*$ is a CC-PAM-stationary point.\\
$\romannumeral2)~\{\rho_k\}$ is unbounded.
Assume again that $x^k\rightarrow x^*$. By \eqref{nabla->0}, we know
\[\left\|\nabla_{x} \mathscr{L}\left(x^{k}, y^{k}, \bar{\Lambda}^{k-1}, \rho_{k-1}\right)\right\|=\left\|\nabla f(x^k)+\nabla g\left(x^{k}\right) \lambda^{k}+\nabla h\left(x^{k}\right) \mu^{k}+\gamma^k\circ y^{k}\right\| \leqslant \varepsilon_{k}.\]
Set $\tilde{\gamma}^k=\gamma^k \circ y^k$. Since $\varepsilon_k\downarrow 0$, this implies
\[\nabla f(x^k)+\nabla g\left(x^{k}\right) \lambda^{k}+\nabla h\left(x^{k}\right) \mu^{k}+\tilde{\gamma}^k\rightarrow 0.\]
For $i\notin I_g(x^*)$, we have $g_i(x^*)<0$, and $g_i(x^k)<0$ for $k$ sufficiently large. Since $\rho_k\rightarrow\infty$ while $\{\bar{\lambda}^k\}$ is bounded, for all $k$ sufficiently large we have
\[\lambda_{i}^k=(\rho_{k-1} g_i(x^k)+\bar{\lambda}_i^{k-1})_+=0.\]
Take an index $\imath\in I_\pm(x^*)$: we have $x_\imath^*\neq 0$ and $y_\imath^*=0$, so $y_\imath^k\rightarrow 0$. Thus
\begin{align*}
\lim _{k \rightarrow \infty} \gamma_{\imath}^{k} y_{\imath}^{k} &=\lim _{k \rightarrow \infty} \rho_{k-1} x_{\imath}^{k}\left(y_{\imath}^{k}\right)^{2}+\lim _{k \rightarrow \infty}\bar{\gamma}_{\imath}^{k-1} y_{\imath}^{k} \\
&=\lim _{k \rightarrow \infty} \frac{1}{x_{\imath}^{k}} \rho_{k-1}\left(x_{\imath}^{k} y_{\imath}^{k}\right)^{2}.
\end{align*}
We now prove that $\rho_{k-1}\left(x_{\imath}^{k} y_{\imath}^{k}\right)^{2}\rightarrow 0$. For simplicity, we abbreviate $p(x,y,\Lambda,\rho)$ from Lemma \ref{lemma2} as $p(x)$, and define
\begin{equation*}
\bar{p}(x)=\frac{1}{2}\left[\left\|\left(g(x)\right)_+\right\|^2+\left\|h(x)\right\|^2+\left\|x\circ y\right\|^2+\left\|\left(n-\kappa-e^Ty\right)_+\right\|^2+\left\|\left(y-e\right)_+\right\|^2\right].
\end{equation*}
Then
\begin{align*}
\rho_{k-1}\nabla_{(x,y)}p(x^k)&-\rho_{k-1}\nabla_{(x,y)}\bar{p}(x^k)\\
&\leq\left(\begin{array}{c}
\nabla g\left(x^{k}\right) \bar{\lambda}^{k-1}+\nabla h\left(x^{k}\right) \bar{\mu}^{k-1}+\bar{\gamma}^{k-1} \circ y^{k} \\
-\bar{\delta}^{k-1} e+\bar{\eta}^{k-1}+\bar{\gamma}^{k-1} \circ x^{k}
\end{array}\right),
\end{align*}
where the inequality follows from the Lipschitz property of $(\cdot)_+$; for example,
\begin{equation*}
\lambda^k-(\rho_{k-1}g(x^k))_+=(\rho_{k-1}g(x^k)+\bar{\lambda}^{k-1})_+-(\rho_{k-1}g(x^k))_+\leq\bar{\lambda}^{k-1},
\end{equation*}
and the other terms are estimated similarly. Since $\bar{\Lambda}^{k-1}$ is bounded, $g,~h\in C^1$ and $(x^k,y^k)\rightarrow(x^*,y^*)$, there exists $M_1>0$ such that
\[\left\|\rho_{k-1}\nabla_{(x,y)}p(x^k)-\rho_{k-1}\nabla_{(x,y)}\bar{p}(x^k)\right\|\leq M_1.\]
On the other hand, by \eqref{nabla->0}, we know
\begin{align*}
\left\| \rho_{k-1}\nabla_{(x,y)}p(x^k)\right\|&\leq \left\|\nabla \mathscr{L}\left(x^{k}, y^{k}, \bar{\Lambda}^{k-1}, \rho_{k-1}\right)\right\|+\left\| \nabla f(x^k)\right\|\\
&\leq \varepsilon_k+\left\|\nabla f(x^k)\right\|.
\end{align*}
Thus
\begin{align*}
\left\|\rho_{k-1}\nabla_{(x,y)}\bar{p}(x^k)\right\|&=\left\|\rho_{k-1}\nabla_{(x,y)}p(x^k)-(\rho_{k-1}\nabla_{(x,y)}p(x^k)-\rho_{k-1}\nabla_{(x,y)}\bar{p}(x^k))\right\|\\
&\leq\varepsilon_k+\left\|\nabla f(x^k)\right\|+M_1.
\end{align*}
Furthermore, since $\varepsilon_k\downarrow 0$, $f\in C^1$ and $x^k\rightarrow x^*$, there exists $M>0$ such that
\begin{equation}
\left\|\rho_{k-1}\nabla_{(x,y)}\bar{p}(x^k)\right\|\leq M\label{semi1}.
\end{equation}
At the same time, by Assumption \ref{assum1} and Lemma \ref{semialge}, we know that $\bar{p}(x)$ is semialgebraic and hence satisfies the KL (\L{}ojasiewicz) inequality at $x^*$: there exist $C>0$ and $\theta\in[0,1)$ such that (noting $\bar{p}(x^*)=0$ by the feasibility of $(x^*,y^*)$)
\begin{equation}
\left\|\rho_{k-1}\nabla_{(x,y)}\bar{p}(x^k)\right\|\geq C\rho_{k-1}\left\|\bar{p}(x^k)-\bar{p}(x^*)\right\|^\theta=C\rho_{k-1}\left\|\bar{p}(x^k)\right\|^\theta.
\label{semi2}
\end{equation}
By \eqref{semi1} and \eqref{semi2}, we obtain
\begin{align*}
0\leq\rho_{k-1}\left\|x^k\circ y^k\right\|^2\leq 2\rho_{k-1}\bar{p}(x^k)&\leq 2C^{-1}\left\|\rho_{k-1}\nabla_{(x,y)}\bar{p}(x^k)\right\|\left\|\bar{p}(x^k)\right\|^{1-\theta}\\
&\leq 2C^{-1}M\left\|\bar{p}(x^k)\right\|^{1-\theta}\rightarrow 0.
\end{align*}
Thus
\[\lim\limits_{k\rightarrow\infty}\rho_{k-1}x_\imath^k(y_\imath^k)^2=\lim\limits_{k\rightarrow\infty}\frac{1}{x_\imath^k}\rho_{k-1}(x_\imath^ky_\imath^k)^2=0,\]
that is $\lim\limits_{k\rightarrow\infty}\tilde{\gamma}_\imath^k=0$. Set
\begin{equation}
\hat{\gamma}_\imath^k=\begin{cases}
0 & \imath\in I_\pm(x^*),\\
\tilde{\gamma}_\imath^k & \imath\in I_0(x^*).
\end{cases}
\label{hat-gamma2}
\end{equation}
It is easy to see that we still have
\[\nabla f(x^k)+\nabla g\left(x^{k}\right) \lambda^{k}+\nabla h\left(x^{k}\right) \mu^{k}+\hat{\gamma}^k\rightarrow 0.\]
Let $\pi_k:=\left\|\left(1,\lambda^k,\mu^k,\hat{\gamma}^k\right)\right\|_\infty$. If $\{\pi_k\}$ is bounded, the argument of case $\romannumeral1)$ verifies that conditions $(c)$-$(e)$ hold. Now consider the case where $\{\pi_k\}$ is unbounded. If $\lim\limits_{k \rightarrow \infty}\frac{\lambda_{i}^k}{\pi_k}>0$, then
\[\lim_{k \rightarrow \infty}\frac{\lambda_{i}^k}{\pi_k}=\lim_{k \rightarrow \infty}\frac{\rho_{k-1} g_i(x^k)+\bar{\lambda}_i^{k-1}}{\pi_k}=\lim_{k \rightarrow \infty}\frac{\rho_{k-1} g_i(x^k)}{\pi_k}>0.\]
Obviously, we have $g_i(x^k)>0$ for all $k$ sufficiently large, so
\[\lambda_{i}^k g_i(x^k)>0,\quad \text{if}~\lim\limits_{k \rightarrow \infty}\frac{\lambda_{i}^k}{\pi_k}>0.\]
If $\lim\limits_{k \rightarrow \infty}\frac{|\mu_{j}^k|}{\pi_k}>0$, then
\[\lim_{k \rightarrow \infty}\frac{\mu_{j}^k}{\pi_k}=\lim_{k \rightarrow \infty}\frac{\rho_{k-1} h_j(x^k)+\bar{\mu}_j^{k-1}}{\pi_k}=\lim_{k \rightarrow \infty}\frac{\rho_{k-1} h_j(x^k)}{\pi_k}.\]
Observe that, for $k$ sufficiently large, $\mu_{j}^k$ has the same sign as $h_j(x^k)$; this implies
\[\mu_{j}^k h_j(x^k)>0,\quad \text{if}~\lim\limits_{k \rightarrow \infty}\frac{|\mu_{j}^k|}{\pi_k}>0.\]
Similarly, if $\lim\limits_{k \rightarrow \infty}\frac{|\hat{\gamma}_{\imath}^k|}{\pi_k}>0$, by \eqref{hat-gamma2}, we know $\imath\in I_0(x^*)$, then $y_\imath^k\rightarrow y_\imath^*\neq 0$. Meanwhile
\begin{equation*}
\lim_{k \rightarrow \infty}\frac{\hat{\gamma}_{\imath}^k}{\pi_k}=\lim_{k \rightarrow \infty}\frac{\gamma_\imath^k y_\imath^k}{\pi_k}=\lim_{k \rightarrow \infty}\frac{\rho_{k-1} x_\imath^k(y_\imath^k)^2}{\pi_k}.
\end{equation*}
Therefore, when $k$ is sufficiently large, $\hat{\gamma}_\imath^k$ has the same sign as $x_\imath^k$, namely
\[\hat{\gamma}_\imath^k x_\imath^k>0,\quad \text{if}~\lim\limits_{k \rightarrow \infty}\frac{|\hat{\gamma}_{\imath}^k|}{\pi_k}>0.\]
This completes the proof.\qed
As can be seen from the proof of Theorem \ref{convergence}, the ``semialgebraic'' hypothesis essentially serves to ensure that, for any given $\Lambda$ and $\rho$, $p(x,y,\Lambda,\rho)$ has the KL property. Therefore, Assumption \ref{assum1} can be further relaxed to functions definable in an $o$-minimal structure, or even to the assumption that $p(x,y,\Lambda,\rho)$ has the KL property (that is, the same conditions as in \cite{Schwartz_2021_Sequ}), and the conclusion of Theorem \ref{convergence} still holds. There are two reasons why we do not do this. First, $y$ is an artificial variable, so assumptions should not be imposed on the $y$ space; second, Lemma \ref{lemma2} and Lemma \ref{level_bound} show that the introduction of $y$ does not destroy the good properties assumed on the $x$ space.
Theorem \ref{convergence} states that any feasible accumulation point of Algorithm \ref{SALM} is a CC-PAM-stationary point if Assumption \ref{assum1} holds. Moreover, the proof shows that the sequence generated by Algorithm \ref{SALM} is not necessarily itself a CC-PAM sequence. This is not a contradiction, since Definition \ref{CC-PAM} only requires the existence of some corresponding CC-PAM sequence; but it does show that CC-PAM-stationarity is not an appropriate stopping criterion. On the other hand, according to Theorem \ref{PAM->M}, if in addition the CC-PAM regularity condition holds at $x^*$, then it is a CC-M-stationary point. In other words, when the CC-PAM regularity condition holds, CC-M-stationarity itself is a very suitable stopping criterion; \cite{Schwartz_2021_ALA} has verified the effectiveness of this approach, and since this paper mainly emphasizes the theoretical improvement, we do not repeat the experiments. In addition, in conjunction with Proposition \ref{prop2}, it is clear that the following conclusion holds.
\begin{theorem}
Let $(x^*,y^*)$ be an accumulation point of $\{(x^k,y^k)\}$ generated by Algorithm \ref{SALM}, and suppose that Assumption \ref{assum1} holds, that $(x^*,y^*)$ is feasible for the relaxation problem \eqref{relax_problem}, and that the CC-PAM regularity condition holds at $x^*$. Then $(x^*,y^*)$ is a CC-M-stationary point and there exists $z^*\in\mathbb{R}^n$ such that $(x^*,z^*)$ is a CC-S-stationary point.
\end{theorem}
\section{Final Remarks}
In this paper, we study the continuous relaxation of CCOP, which has attracted much attention in recent years. We propose a new sequential optimality condition called CC-PAM-stationarity. In Sect.3, we prove that CC-PAM-stationarity is strictly stronger than CC-AM-stationarity. Moreover, any local minimizer of CCOP is a CC-PAM-stationary point without any additional assumptions. Hence, CC-PAM-stationarity is a better measure of optimality than CC-AM-stationarity. In addition, in Sect.4 we introduced a new constraint qualification called CC-PAM-regularity, which is weaker than CC-AM-regularity, and proved that under the CC-PAM regularity condition every CC-PAM-stationary point is a CC-M-stationary point.
In Sect. 5, we apply the new sequential optimality condition proposed in this paper, CC-PAM-stationarity, to the safeguarded augmented Lagrangian method (Algorithm \ref{SALM}), which further improves the existing theoretical results. We have proved that, under mild conditions such as the KL property, any feasible accumulation point of Algorithm \ref{SALM} is a CC-PAM-stationary point; further, if the CC-PAM regularity condition holds, the algorithm converges to a CC-M-stationary (essentially CC-S-stationary) point. In other words, in this case CC-M-stationarity is the natural termination criterion for Algorithm \ref{SALM}. Meanwhile, we emphasize that if the same assumptions as in the existing results are used, the conclusions of this paper remain valid.
\end{document}
\begin{document}
\title{On the Laxton Group
}
\author{Miho Aoki \and
Masanari Kida
}
\institute{ M. Aoki \at
Department of Mathematics,
Interdisciplinary Faculty of Science and Engineering,
Shimane University,
Matsue, Shimane, 690-8504, Japan \\
\email{[email protected]}
\and
M. Kida \at
Department of Mathematics,
Faculty of Science Division I,
Tokyo University of Science,
1-3 Kagurazaka Shinjuku, Tokyo, 162-8601, Japan \\
\email{}
}
\date{Received: date / Accepted: date}
\maketitle
\begin{abstract}
In 1969, Laxton defined a multiplicative group structure on the set of
rational sequences
satisfying a fixed linear recurrence of degree two.
He also defined some natural subgroups of the group, and determined the
structures of their quotient groups.
Nothing has been known about the structure of the whole Laxton group
or its interpretation.
In this paper, we redefine his group
in a natural way and determine the structure of the whole group,
which clarifies Laxton's results on the quotient groups.
This definition makes it possible to use
the group to prove various properties of such sequences.
\keywords{Laxton Groups \and Linear recurrence sequences \and Quadratic fields}
\subclass{11B37 \and 11B39 \and 11R11}
\end{abstract}
\section{The Laxton group}\label{sec:LG}
Let $P$ and $Q$ be
integers with $Q\ne 0$.
We consider linear recurrence sequences associated to the characteristic
polynomial $f(t):=t^2-Pt+Q$. Namely, they are determined by
$w_{n+2}-Pw_{n+1}+Qw_n=0$ together with initial rational terms $w_0$ and $w_1$.
Let $\theta_1$ and $\theta_2$ be the roots of $f(t)$.
We assume
\[
D:=\mathrm{disc}(f)=P^2-4Q=(\theta_1-\theta_2)^2\ne 0.
\]
We define an equivalence relation $\sim^*$ on the set of the linear recurrence sequences.
For ${\bf w}=(w_n),\ {\bf v}=(v_n)$, we write ${\bf w} \sim^* {\bf v}$ if there
exist $\lambda \in \Q^{\times}$
and $\nu \in \Z$
such that $w_n=\lambda v_{n+\nu}$ for all $n\in \Z$.
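Concretely, $\sim^*$ identifies a sequence with every nonzero rational multiple of every shift of itself. The following Python sketch illustrates this on the Fibonacci recurrence (the choice $P=1$, $Q=-1$, as well as the scaling factor $3$ and shift $2$, are illustrative assumptions):

```python
from fractions import Fraction as F

# Fibonacci recurrence: P = 1, Q = -1, so w_{n+2} = w_{n+1} + w_n.
P, Q = 1, -1

def terms(w0, w1, n):
    """First n terms of a sequence with w_{k+2} - P*w_{k+1} + Q*w_k = 0."""
    w = [w0, w1]
    while len(w) < n:
        w.append(P * w[-1] - Q * w[-2])
    return w

fib = terms(F(0), F(1), 20)              # 0, 1, 1, 2, 3, 5, ...
v = [3 * fib[m + 2] for m in range(18)]  # v_m = 3 * fib_{m+2}: scaled and shifted

# v ~* fib: taking lambda = 1/3 and nu = -2 recovers fib from v,
# so both sequences represent the same class.
```

Since $v_m = 3\,w_{m+2}$, the two sequences lie in one equivalence class even though no term of one equals the corresponding term of the other.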
Laxton \cite{L1} considered the following quotient set using this relation:
\begin{align*}
G^*(f) :=& \{ (w_n)_{n\in \Z} \ |\ w_0,w_1 \in \Q \ \text{with}\ \Lambda (w_1,w_0)\ne 0,\
w_{n+2}-Pw_{n+1}+Qw_n=0 \\
& \text{for any}\ n\in \Z,
\ \text{and there exists $\nu\in \Z$ such that $w_k \in \Z$ for any $k\geq \nu$ } \}/\sim^*,
\end{align*}
where $\Lambda (w_1,w_0):=w_1^2-Pw_0w_1+Qw_0^2$.
Laxton
introduced a product on $G^*(f)$ as follows. For classes ${\mathcal W}$ and ${\mathcal V} \in G^*(f)$,
let $(w_n)$ and $(v_n)$ be representatives of the classes ${\mathcal W}$ and ${\mathcal V}$, respectively.
The product ${\mathcal W} \times {\mathcal V}$ of ${\mathcal W}$ and ${\mathcal V}$ is, then, defined by the class of $(u_n)$ with $u_n=(AC \theta_1^n-BD\theta_2^n)/
(\theta_1-\theta_2)$, where $A=w_1-w_0\theta_2,\ B=w_1-w_0\theta_1, \ C=v_1-v_0 \theta_2$ and $D=v_1-v_0 \theta_1$.
We call $G^*(f)$ the Laxton group.
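As a numerical sanity check of Laxton's product (a sketch only; the parameters $P=1$, $Q=-1$, giving the Fibonacci recurrence, are an illustrative assumption), one can verify that the product sequence $(u_n)$ again satisfies the same recurrence, and that the sequence with $w_0=0$, $w_1=1$ (for which $A=B=1$) acts as the identity:

```python
import math

P, Q = 1, -1                          # Fibonacci recurrence (illustrative choice)
d = math.sqrt(P * P - 4 * Q)
th1, th2 = (P + d) / 2, (P - d) / 2   # roots of t^2 - P*t + Q

def laxton_product(w0, w1, v0, v1, n):
    """n-th term of the Laxton product: u_n = (A*C*th1^n - B*D*th2^n)/(th1 - th2)."""
    A, B = w1 - w0 * th2, w1 - w0 * th1
    C, D = v1 - v0 * th2, v1 - v0 * th1
    return (A * C * th1 ** n - B * D * th2 ** n) / (th1 - th2)

# product of the sequence (0, 1, 1, 2, ...) with the sequence (2, 1, 3, 4, ...):
# since A = B = 1 for the first factor, the product reproduces the second factor.
u = [laxton_product(0, 1, 2, 1, n) for n in range(8)]
```

Here $u_n$ reproduces $2,1,3,4,7,11,\dots$, consistent with the class of the sequence $(0,1,1,2,\dots)$ being the identity of the group.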
Let $p$ be a prime number with $p\nmid Q$.
For ${\bf w}=(w_n)$, if $w_n\in p\Z_{(p)}$ for some $n$, we say
$p$ is a divisor of ${\bf w}$ and write $p|{\bf w}$,
where $\Z_{(p)}$ is the localization of the integers at $p$.
Laxton defined the following subgroups of $G^*(f)$:
\begin{align*}
G^*(f,p)_L := & \{ {\mathscr W} \in G^*(f) \ |\ p |{\bf w} \ \text{for\ all}\ {\bf w}\in {\mathscr W} \},\\
K^*(f,p)_L := & \{ {\mathscr W} \in G^*(f) \ |\ \text{ there exists ${\bf w} \in {\mathscr W} $
for\ which
$w_0,w_1 \in \mathbb Z$ and $p\nmid \Lambda (w_1,w_0)$}\},
\\
H^*(f,p)_L := & \{ {\mathscr W} \in G^*(f) \ |\ {\mathscr W}^n \in G^*(f,p)_L \ \text{for\ some}\ n \in \Z \}.
\end{align*}
Laxton proved the following theorem on the quotient groups.
\begin{theo}[\protect{Laxton \cite[Theorem~3.7]{L1}}]\label{theo:Laxton}
Let $p$ be a prime number with $p\nmid Q$, and $r(p)$ be the rank of the Lucas sequence ${\mathcal F}$
$($see Definitions~\ref{df:Lucas} and ~\ref{df:rank}$)$.
\begin{enumerate}[$(1)$]
\item Assume $p\nmid D$. If $\mathbb Q(\theta_1)\ne \mathbb Q$ and $p$ is inert in $\mathbb Q(\theta_1)$,
then $G^*(f)=H^*(f,p)_L=K^*(f,p)_L$ and
$G^*(f)/G^*(f,p)_L$ is a cyclic group of order $(p+1)/r(p)$.
\label{enum:Laxton1}
\item Assume $p\nmid D$. If $\Q (\theta_1)=\Q$, or $\Q(\theta_1)\ne \Q$ and
$p$ splits in $\Q(\theta_1)$, then $G^*(f)/H^*(f,p)_L$ is an infinite cyclic group,
and $H^*(f,p)_L=K^*(f,p)_L$ and $H^*(f,p)_L/G^*(f,p)_L$ is a cyclic group of order $(p-1)/r(p)$.
\label{enum:Laxton2}
\item If $p|D$ and $p^2\nmid D$, then $G^*(f)=H^*(f,p)_L$ and $K^*(f,p)_L=G^*(f,p)_L$.
Furthermore, if $p\ne 2$, then $G^*(f)/G^*(f,p)_L$ is a cyclic group of order two.
\label{enum:Laxton3}
\end{enumerate}
\end{theo}
Laxton made no assumption such as $p^2 \nmid D$ in Theorem~\ref{theo:Laxton} (\ref{enum:Laxton3}).
Suwa \cite{S} recently pointed out that this assumption is necessary,
gave counterexamples showing that the statement fails when $p^2 \mid D$,
and provided the correct formulation above together with a proof for that case
(Corollary~\ref{cor:main}(\ref{enum:cormain4})).
He also gives in his paper an interpretation of
linear recurrence sequences of degree two from the viewpoint of the theory of group schemes.
Although Laxton studied the structures of the quotient groups of $G^*(f)$, he did not
study the group $G^*(f)$ itself. The aims of this paper are to redefine Laxton's group in
a natural way (Theorems~\ref{theo:main1} and \ref{theo:Vf}) and to give structure theorems for the group itself
and its subgroups.
One of our main results is the following theorem.
\begin{theo}[\protect{Theorems~\ref{theo:G*}, \ref{theo:main2} and \ref{theo:KK*}}]\label{theo:main}
Notation being as above,
put $D=p^sD_0$ with $s\geq 0$ and $p\nmid D_0$.
\begin{enumerate}[$(1)$]
\item If $f(t)$ is irreducible over $\Q$, then we have
\[
G^*(f)\tt \Q(\theta_1)^{\times}/\Q^{\times}\langle \theta_1 \rangle.
\]
If $f(t)$ is reducible over $\Q$, then we have
\[
G^*(f)\tt \Q^{\times} /
\langle \theta_1 \theta_2^{-1} \rangle.
\]
\label{enum:main1}
\item There exists an exact sequence
\[
1 \longrightarrow G^*(f,p)_L \longrightarrow K^*(f,p)_L \underset{ \mathrm{red}_p}{\longrightarrow} G^*_{\F_p}(f) \longrightarrow 1
\]
where $G^*_{\F_p}(f)$ is the group of equivalence classes of linear recurrence sequences over the finite field $\F_p$.
\label{enum:main2}
\item If $f(t)$ is irreducible over $\Q$, then we have
\[
K^*(f,p)_L \tt
\Z_{(p)}[\theta_1]^{\times}/\Z_{(p)}^{\times}\langle \theta_1 \rangle.
\]
If $f(t)$ is reducible over $\Q$, then we have
\[
K^*(f,p)_L \tt \begin{cases}
\Z_{(p)}^{\times}/\langle \theta_1 \theta_2^{-1} \rangle & \text{if}\ p\nmid D,\\
(1+p^{\frac{s}{2}} \Z_{(p)} )/\langle \theta_1 \theta_2^{-1} \rangle & \text{if}\ p|D.
\end{cases}
\]
\label{enum:main3}
\end{enumerate}
\end{theo}
The content of this paper is as follows.
In \S~\ref{sec:LR}, we begin with redefining the group law
on the set of linear recurrence sequences,
which gives a natural interpretation of Laxton's product.
In \S~\ref{sec:GS}, we determine the group structures of the group of the set of linear recurrence
sequences
according to the irreducibility of the characteristic polynomial $f(t)$.
In \S~\ref{sec:EC}, we introduce two relations on the group of linear recurrence sequences,
and determine the group structures of the quotient groups by these
relations. In particular, we can determine that of the Laxton group $G^*(f)$.
In \S~\ref{sec:SubG}, we redefine the subgroups $G^*(f,p)_L,\ K^*(f,p)_L$ and
$H^*(f,p)_L$ for a fixed prime number $p$.
From our new definitions, we see that $G^*(f,p)_L$ is the kernel of the reduction map from $K^*(f,p)_L$
to the group $G^*_{\mathbb F_p}(f)$ of equivalence classes of linear recurrence
sequences over the finite field $\F_p$. Furthermore, we study the relations between these subgroups and the
rank $r(p)$ of the Lucas sequence, which is a classical invariant related to Artin's conjecture on primitive roots.
In \S~\ref{sec:K}, \ref{sec:G/Kp} and \ref{sec:Kp/Gp}, we determine the group structures of $K^*(f,p)_L,\
G^*(f)/K^*(f,p)_L$ and $K^*(f,p)_L/G^*(f,p)_L$, respectively, by using $p$-adic logarithm functions.
The results in these sections yield the theorems of Laxton (Theorem~\ref{theo:Laxton}) and Suwa.
Suwa first gave a proof in the case $p^2 \mid D$ from the viewpoint of the theory of group schemes.
\section{Group laws of Linear recurrence sequences}\label{sec:LR}
Let $R$ be an integral domain. In order to work in a general setting, we consider
sequences
$(w_n)$ in $R$ determined by
\[
w_{n+2}-Pw_{n+1}+Qw_n=0,
\]
for fixed elements $P,Q \in R$ with $Q\in R^{\times}$. Let $\theta_1$ and $\theta_2$ be
the roots of the characteristic polynomial
\[
f(t):=t^2-Pt+Q \ \in R[t]
\]
in an algebraic closure $\overline{k}_R$ of the quotient field $k_R$ of $R$. Put
\[
d:=\mathrm{disc}(f)=P^2-4Q \ \in R.
\]
Define
\[
\mathscr{S}(f,R) := \{ (w_n)_{n\in \Z} \ \vline \ w_0,w_1 \in R,\ w_{n+2}-Pw_{n+1}+Qw_n=0\
\text{ for any }\ n\in \Z \}.
\]
By the assumption $Q\in R^{\times}$, every sequence $(w_n) \in \mathscr{S}(f,R)$ satisfies $w_n \in R$ for all $n\in \Z$.
All sequences in $\mathscr{S}(f,R)$ have the characteristic polynomial $f(t)$, and the set $\mathscr{S}(f,R)$
carries a natural $R$-module structure and is isomorphic to a free $R$-module of rank $2$:
\[
V_f(R) := \left\{ \begin{pmatrix}
a_1\\
a_0\\
\end{pmatrix} \ \vline \ a_1,a_0\in R \right\}
\]
with an isomorphism given by
\begin{equation}\label{eq:LR1}
\phi_R : \ \mathscr{S}(f,R) \tt V_f(R), \quad (w_n) \mapsto \begin{pmatrix}
w_1\\
w_0\\
\end{pmatrix}.
\end{equation}
Furthermore, we define an isomorphism of $R$-modules $\varphi_R$ by
\begin{equation}\label{eq:LR2}
\varphi_R : \ V_f(R) \tt {\mathcal O}_R, \quad \begin{pmatrix}
a_1\\
a_0\\
\end{pmatrix} \mapsto a_1-a_0t \bmod{f(t)},
\end{equation}
where ${\mathcal O}_R:=R[t]/(f(t))$ is the quotient ring of $R[t]$.
We define ring structures on $V_f(R)$ and $\mathscr{S}(f,R) $
via the maps $\phi_R$ and $\varphi_R$ induced from that of ${\mathcal O}_R$.
Put
\[
{\mathcal B}:=\begin{pmatrix}
P & -Q\\
1 & 0\\
\end{pmatrix}.
\]
Since det$({\mathcal B})=Q\in R^{\times}$, we have ${\mathcal B}\in \mathrm{GL}_2(R)$.
There is a natural left action of the group $\langle {\mathcal B} \rangle $ on
$V_f(R)$ because the matrix ${\mathcal B}$ satisfies
\begin{equation}\label{eq:LR3}
\begin{pmatrix}
w_{n+1} \\
w_n \\
\end{pmatrix}={\mathcal B}^n \begin{pmatrix}
w_1\\
w_0\\
\end{pmatrix},
\end{equation}
for any $n \in \mathbb Z$. We define left actions of $\langle {\mathcal B} \rangle$ on
$\mathscr{S}(f,R)$ and ${\mathcal O}_R$ via the maps $\phi_R$ and $\varphi_R$,
respectively.
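Relation \eqref{eq:LR3} says that one application of ${\mathcal B}$ advances the state vector by one step of the recurrence. This can be checked directly; in the following Python sketch, the values $P=3$, $Q=2$ and the seed $(w_0,w_1)=(1,5)$ are illustrative assumptions, not taken from the text:

```python
# One application of B = [[P, -Q], [1, 0]] maps (w_{n+1}, w_n) to (w_{n+2}, w_{n+1}).
P, Q = 3, 2          # f(t) = t^2 - 3t + 2 = (t - 1)(t - 2)
w0, w1 = 1, 5

# unroll the recurrence w_{n+2} = P*w_{n+1} - Q*w_n directly
w = [w0, w1]
for _ in range(10):
    w.append(P * w[-1] - Q * w[-2])

def state(n):
    """Return B^n (w_1, w_0)^T, computed by repeated application of B."""
    v = (w1, w0)
    for _ in range(n):
        v = (P * v[0] - Q * v[1], v[0])   # multiply by B once
    return v
```

Every state vector produced by powers of ${\mathcal B}$ agrees with the directly unrolled recurrence.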
From
\begin{equation}\label{eq:LR4}
\varphi_R \left( {\mathcal B} \begin{pmatrix}
a_1\\
a_0\\
\end{pmatrix} \right) =(Pa_1-Qa_0)-a_1t =(P-t)(a_1-a_0t),
\end{equation}
for any $\begin{pmatrix}
a_1\\
a_0\\
\end{pmatrix} \in V_f(R)$, the action of ${\mathcal B}$ on ${\mathcal O}_R$ is given by
the multiplication by $P-t$. In particular, we get
\begin{equation}\label{eq:LR5}
{\mathcal B} \begin{pmatrix}
a_1\\
a_0\\
\end{pmatrix} *\begin{pmatrix}
b_1\\
b_0\\
\end{pmatrix} =\begin{pmatrix}
a_1\\
a_0\\
\end{pmatrix} *{\mathcal B} \begin{pmatrix}
b_1\\
b_0\\
\end{pmatrix} ={\mathcal B} \left\{
\begin{pmatrix}
a_1\\
a_0\\
\end{pmatrix} *\begin{pmatrix}
b_1\\
b_0\\
\end{pmatrix} \right \},
\end{equation}
for any $\begin{pmatrix}
a_1\\
a_0\\
\end{pmatrix}, \begin{pmatrix}
b_1\\
b_0\\
\end{pmatrix} \in V_f(R)$.
Furthermore, we get
\begin{align}
& {\mathcal B}^n \begin{pmatrix}
a_1\\
a_0\\
\end{pmatrix} =\begin{pmatrix}
P\\
1\\
\end{pmatrix}^n *\begin{pmatrix}
a_1\\
a_0\\
\end{pmatrix}, \label{eq:LR6}
\end{align}
for any $n\in \mathbb Z$.
For a class of $a_1-a_0t$ in ${\mathcal O}_R$,
we have
\begin{equation}\label{eq:LR7}
(a_1-a_0t) (1\ \ -t) =(1\ \ -t)\begin{pmatrix}
a_1 & -a_0 Q\\
a_0 & a_1-a_0P \\
\end{pmatrix}.
\end{equation}
Define the norm of a class of $a_1-a_0t \in {\mathcal O}_R$ by
\begin{align}
N(a_1-a_0t) & :=\mathrm{det} \begin{pmatrix}
a_1 & -a_0 Q\\
a_0 &a_1-a_0P \\
\end{pmatrix} \quad (\in R) \label{eq:LR8} \\
&=a_1^2-Pa_0a_1+Qa_0^2 \notag \\
&=(a_1-\theta_1 a_0)(a_1-\theta_2 a_0). \notag
\end{align}
\begin{rem}\label{rem:lambda}
If $f(t)$ is irreducible over the quotient field $k_R$ of $R$, then $N(a_1-a_0t)\ne 0$ if and only if
$a_0\ne 0$ or $a_1\ne 0$.
\end{rem}
By the definition of the norm and (\ref{eq:LR7}), the norm is multiplicative, namely we have
\[
N((a_1-a_0t)(b_1-b_0t))=N(a_1-a_0t)N(b_1-b_0t),
\]
for any classes of $a_1-a_0t,\ b_1-b_0 t$ of ${\mathcal O}_R$.
From this fact, we see that a class of $a_1-a_0t$ is invertible in ${\mathcal O}_R$ if and only if
$N(a_1-a_0t)\in R^{\times}$.
In particular, since
\[
(P-t)(1\ \ -t)=(1\ \ -t){\mathcal B},
\]
we have $N(P-t)=\mathrm{det}({\mathcal B})=Q \in R^{\times}$,
and hence $P-t$ is invertible in ${\mathcal O}_R$.
Since the norm is multiplicative, (\ref{eq:LR3}) and (\ref{eq:LR4}) yield
\begin{equation}\label{eq:LR9}
N(w_{n+1}-w_nt) =N(P-t)^n N(w_1-w_0t)
=Q^n N(w_1-w_0t),
\end{equation}
for any integer $n$ and any sequence $(w_n) \in \mathscr{S}(f,R)$.
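Both the multiplicativity of the norm and identity \eqref{eq:LR9} can be verified numerically. The following sketch works over $R=\Z$ with the illustrative parameters $P=4$, $Q=3$ (an assumption, not from the text), multiplying classes modulo $f(t)$ by reducing $t^2\equiv Pt-Q$:

```python
P, Q = 4, 3   # f(t) = t^2 - 4t + 3 (illustrative choice)

def norm(a1, a0):
    """N(a1 - a0*t) = a1^2 - P*a0*a1 + Q*a0^2."""
    return a1 * a1 - P * a0 * a1 + Q * a0 * a0

def mult(a, b):
    """(a1 - a0*t)(b1 - b0*t) mod f(t), using the reduction t^2 = P*t - Q."""
    (a1, a0), (b1, b0) = a, b
    return (a1 * b1 - Q * a0 * b0, a0 * b1 + a1 * b0 - P * a0 * b0)

# a sequence with w_0 = 0, w_1 = 1, so that N(w_1 - w_0*t) = 1
w = [0, 1]
for _ in range(6):
    w.append(P * w[-1] - Q * w[-2])
# N(w_{n+1} - w_n*t) = Q^n * N(w_1 - w_0*t) should hold along the sequence.
```

The seed $(w_0,w_1)=(0,1)$ makes $N(w_1-w_0t)=1$, so the norms along the sequence are exactly the powers $Q^n$.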
We can endow the inverse image of ${\mathcal O}_R^{\times}$ under $\varphi_R$:
\[
V_f(R)^{\times} :=\varphi_R^{-1} ({\mathcal O}_R^{\times} )=\left\{ \begin{pmatrix}
a_1\\
a_0\\
\end{pmatrix} \in V_f(R) \ \vline \ \Lambda (a_1,a_0)\in R^{\times} \right\}
\]
where $\Lambda(a_1,a_0):=N(a_1-a_0t)$, and its inverse image under $\phi_R$:
\[
\mathscr{S}(f,R)^{\times} := \phi_R^{-1} (V_f(R)^{\times} )=\left\{ (w_n) \in \mathscr{S}(f,R)
\ \vline \ \Lambda (w_1,w_0) \in R^{\times} \right\}
\]
with the structure of a multiplicative group induced from ${\mathcal O}_R^{\times}$.
If $\begin{pmatrix}
a_1\\
a_0\\
\end{pmatrix}$ and $\begin{pmatrix}
b_1\\
b_0\\
\end{pmatrix}$ are elements of $V_f(R)^{\times}$, then the corresponding product in ${\mathcal O}_R^{\times}$
is
\[
(a_1-a_0 t)(b_1-b_0t)\equiv (a_1b_1-a_0b_0 Q)-(a_0b_1+a_1b_0-Pa_0b_0)t \pmod{f(t)}.
\]
Thus the multiplication in $V_f(R)^{\times}$ is given by
\begin{equation}\label{eq:LR10}
\begin{pmatrix}
a_1\\
a_0\\
\end{pmatrix} * \begin{pmatrix}
b_1\\
b_0\\
\end{pmatrix} =\begin{pmatrix}
a_1b_1-Qa_0b_0\\
a_0b_1+a_1b_0-Pa_0b_0 \\
\end{pmatrix}.
\end{equation}
The identity element is $\begin{pmatrix}
1\\
0\\
\end{pmatrix}$ and $\begin{pmatrix}
a_1 \\
a_0\\
\end{pmatrix}^{-1} =\Lambda (a_1,a_0)^{-1} \begin{pmatrix}
a_1-Pa_0\\
-a_0\\
\end{pmatrix}$.
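The identity and inverse formulas above can be checked over $R=\Q$ with exact rational arithmetic. In the following sketch, $P=1$, $Q=-1$ and the vector $(a_1,a_0)=(2,3)$ are illustrative assumptions:

```python
from fractions import Fraction as F

P, Q = F(1), F(-1)   # illustrative choice: f(t) = t^2 - t - 1 over R = Q

def star(a, b):
    """The product (LR10) on V_f(R)^x."""
    (a1, a0), (b1, b0) = a, b
    return (a1 * b1 - Q * a0 * b0, a0 * b1 + a1 * b0 - P * a0 * b0)

def Lam(a1, a0):
    """Lambda(a1, a0) = a1^2 - P*a0*a1 + Q*a0^2."""
    return a1 * a1 - P * a0 * a1 + Q * a0 * a0

a = (F(2), F(3))
L = Lam(*a)                                  # must be a unit in R = Q
inv = ((a[0] - P * a[1]) / L, -a[1] / L)     # the stated inverse formula
# star(a, inv) should return the identity element (1, 0).
```

Working with `Fraction` keeps the check exact, mirroring the fact that over a field every vector with $\Lambda(a_1,a_0)\neq 0$ is invertible.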
If $\begin{pmatrix}
a_1\\
a_0\\
\end{pmatrix} \in V_f(R)^{\times}$, then we have $ \varphi_R \left( {\mathcal B} \begin{pmatrix}
a_1\\
a_0\\
\end{pmatrix} \right) \in {\mathcal O}_R^{\times}$,
from (\ref{eq:LR4}) and $P-t \in {\mathcal O}_R^{\times}$,
and hence $ {\mathcal B} \begin{pmatrix}
a_1\\
a_0\\
\end{pmatrix} \in V_f(R)^{\times}$.
Therefore, the multiplicative groups $V_f(R)^{\times}$, ${\mathcal O}_R^{\times}$ and
$\mathscr{S}(f,R)^{\times}$ all carry a left action of $\langle {\mathcal B} \rangle$.
The roots $\theta_1$ and $\theta_2$ of $f(t)$ are the eigenvalues of ${\mathcal B}$ and the
corresponding eigenspaces are
$\langle \begin{pmatrix}
\theta_1 \\
1\\
\end{pmatrix}
\rangle$ and $\langle \begin{pmatrix}
\theta_2 \\
1\\
\end{pmatrix}
\rangle$ respectively.
Hence,
if $d\ne 0$, then
${\mathcal B}$ is diagonalized by ${\mathcal P}:= \begin{pmatrix}
\theta_1 & \theta_2 \\
1 & 1\\
\end{pmatrix} \in \mathrm{GL}_2(\overline{k}_R)$ as
\[
{\mathcal P}^{-1} {\mathcal B} {\mathcal P} =\begin{pmatrix}
\theta_1 & 0\\
0 & \theta_2 \\
\end{pmatrix}.
\]
Let ${\bf w}=(w_n)_{n\in \Z} \in \mathscr{S}(f,R)$.
If $d\ne 0$, the general term is given by
\[
w_n=\frac{A\theta_1^n-B\theta_2^n}{\theta_1-\theta_2},
\]
where $A=w_1-w_0\theta_2$ and $B=w_1-w_0\theta_1$.
We have
\[
AB=\Lambda(w_1,w_0) \quad \in R.
\]
\begin{df}\label{df:Lucas}
We call the sequence ${\mathcal F}=({\mathcal F}_n) \in \mathscr{S}(f,R) $ with ${\mathcal F}_0=0$ and ${\mathcal F}_1=1$
{\it the Lucas sequence}, and the sequence ${\mathcal L}=({\mathcal L}_n) \in \mathscr{S}(f,R)$ with
${\mathcal L}_0=2$ and ${\mathcal L}_1=P$
{\it the companion Lucas sequence} after Lucas, who first introduced them in \cite{Lu}.
Their general terms are $\displaystyle{ {\mathcal F}_n=\frac{\theta_1^n -\theta_2^n}{\theta_1-\theta_2}}$ and
${\mathcal L}_n=\theta_1^n+\theta_2^n$.
\end{df}
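For a concrete example (illustrative, not part of the definition), the terms of ${\mathcal F}$ and ${\mathcal L}$ can be generated from the recurrence $w_{n+1}=Pw_n-Qw_{n-1}$ coming from $f(t)=t^2-Pt+Q$; with $P=1$, $Q=-1$ these are the classical Fibonacci and Lucas numbers:

```python
# Terms of the Lucas sequence F and its companion L via the recurrence
# w_{n+1} = P*w_n - Q*w_{n-1}.  The choice P = 1, Q = -1 below is an
# illustrative example giving the classical Fibonacci and Lucas numbers.
def lucas_terms(P, Q, w0, w1, count):
    terms = [w0, w1]
    for _ in range(count - 2):
        terms.append(P * terms[-1] - Q * terms[-2])
    return terms

F = lucas_terms(1, -1, 0, 1, 10)   # F_0 = 0, F_1 = 1
L = lucas_terms(1, -1, 2, 1, 10)   # L_0 = 2, L_1 = P
assert F == [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
assert L == [2, 1, 3, 4, 7, 11, 18, 29, 47, 76]
# The closed forms give L_n = F_{n+1} - Q*F_{n-1} (here Q = -1):
assert all(L[n] == F[n + 1] + F[n - 1] for n in range(1, 9))
```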
Next, we recall a product on the set $\mathscr{S}(f,R)^{\times}$ introduced by Laxton \cite{L1} (see also \cite{B}).
\begin{df}\label{df:LG-law}
Let ${\bf w}=(w_n), {\bf v}=(v_n)\in \mathscr{S}(f,R)^{\times}$. Write
\[
w_n=\frac{A\theta_1^n-B\theta_2^n}{\theta_1-\theta_2}, \quad v_n=\frac{C\theta_1^n-D\theta_2^n}{\theta_1-\theta_2},
\]
where $A=w_1-w_0\theta_2, B=w_1-w_0\theta_1, C=v_1-v_0\theta_2$ and $D=v_1-v_0\theta_1$.
Laxton defined the product ${\bf w}\times {\bf v}={\bf u}=(u_n)$ by
\[
u_n=\frac{AC\theta_1^n-BD\theta_2^n}{\theta_1-\theta_2},
\]
for any $n\in \Z$. In particular, $u_0$ and $u_1$ are given by
$u_0=w_0v_1+w_1v_0-Pv_0w_0,\ u_1=w_1v_1-Qv_0w_0$.
We get ${\bf u}\in \mathscr{S}(f,R)^{\times}$ since
$\Lambda(u_1,u_0)=ABCD=\Lambda(w_1,w_0)\Lambda(v_1,v_0)\in R^{\times}$.
The associativity is trivial, and the identity is the Lucas sequence ${\mathcal F}=({\mathcal F}_n)$.
The inverse element of
\[
{\bf w}=(w_n), \quad w_n=\frac{A\theta_1^n-B\theta_2^n}{\theta_1-\theta_2}
\]
is given by
\[
{\bf w}^{-1}=(u_n), \quad
u_n=\Lambda (w_1,w_0)^{-1}\frac{B\theta_1^n-A\theta_2^n}{\theta_1-\theta_2}.
\]
\end{df}
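A quick numerical check (illustrative only) of the formulas for $u_0$ and $u_1$, and of the fact that the Lucas sequence ${\mathcal F}$ acts as the identity, again with the illustrative parameters $P=1$, $Q=-1$:

```python
# The Laxton product on initial terms: for w = (w_n), v = (v_n) the product
# u = w x v has u_0 = w_0 v_1 + w_1 v_0 - P w_0 v_0 and u_1 = w_1 v_1 - Q w_0 v_0.
# We check (illustrative parameters P = 1, Q = -1) that the Lucas sequence F,
# with (F_0, F_1) = (0, 1), is the identity, and that the product is commutative.
def laxton(P, Q, w, v):
    (w0, w1), (v0, v1) = w, v
    return (w0 * v1 + w1 * v0 - P * w0 * v0, w1 * v1 - Q * w0 * v0)

P, Q = 1, -1
w = (4, 7)                                 # (w_0, w_1)
assert laxton(P, Q, w, (0, 1)) == w        # F is the identity
assert laxton(P, Q, w, (2, 3)) == laxton(P, Q, (2, 3), w)  # commutativity
```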
The multiplicative group structure on $\mathscr{S}(f,R)^{\times}$ defined by Laxton
coincides with one induced from ${\mathcal O}_R^{\times}=(R[t]/(f(t)))^{\times}$ via maps
$\phi_R$ and $\varphi_R$ in (\ref{eq:LR1}) and (\ref{eq:LR2}), respectively.
We get the following theorem.
\begin{theo}\label{theo:main1}
Let $R$ be an integral domain. The group structure of $\mathscr{S}(f,R)^{\times}$ defined by
Laxton coincides with one induced from ${\mathcal O}_R^{\times} =(R[t]/(f(t)))^{\times}$ via the maps
$\phi_R$ and $\varphi_R$:
\begin{align}
&\mathscr{S}(f,R)^{\times} \overset{\sim}{ \underset{\phi_R}{\longrightarrow}} V_f(R)^{\times}
\overset{\sim}{ \underset{\varphi_R}{\longrightarrow}} {\mathcal O}_R^{\times} =(R[t]/(f(t)))^{\times}, \notag
\\
& {\bf w}=(w_n) \ \mapsto \ \begin{pmatrix}
w_1\\
w_0\\
\end{pmatrix} \ \mapsto \ w_1-w_0t. \notag
\end{align}
Furthermore, these group isomorphisms are compatible with the following action of $\langle {\mathcal B} \rangle$:
\begin{enumerate}[$(i)$]
\item For ${\bf w}=(w_n) \in \mathscr{S}(f,R)^{\times}$ and $\nu \in \Z$,
\[
{\mathcal B}^{\nu}. {\bf w} ={\bf v} =(v_n),
\]
where $v_n=w_{n+\nu}$ for any $n\in \Z$.
\label{enum:1maini}
\item For $\begin{pmatrix}
w_1\\
w_0\\
\end{pmatrix} \in V_f(R)^{\times}$ and $\nu \in \Z$,
\[
{\mathcal B}^{\nu}. \begin{pmatrix}
w_1\\
w_0\\
\end{pmatrix}={\mathcal B}^{\nu} \begin{pmatrix}
w_1\\
w_0\\
\end{pmatrix}
\]
(the right-hand side is the ordinary matrix product).
\label{enum:1mainii}
\item For $w_1-w_0t \in {\mathcal O}_R^{\times}$ and $\nu \in \Z$,
\[
{\mathcal B}^{\nu}. (w_1-w_0t) =(P-t)^{\nu} (w_1-w_0t).
\]
\label{enum:1mainiii}
\end{enumerate}
\end{theo}
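As a small consistency check (illustrative, not part of the theorem), descriptions $(ii)$ and $(iii)$ of the action can be compared directly: applying ${\mathcal B}$ to $(w_1,w_0)$ must agree with multiplying $w_1-w_0t$ by $P-t$ and reducing modulo $f(t)$. A Python sketch:

```python
# Check that two descriptions of the shift action agree: the matrix
# B = [[P, -Q], [1, 0]] acting on (w_1, w_0) equals multiplication of
# w_1 - w_0 t by (P - t), reduced modulo f(t) = t^2 - P t + Q.
def act_matrix(P, Q, w1, w0):
    return (P * w1 - Q * w0, w1)          # B (w_1, w_0)^T = (w_2, w_1)

def act_poly(P, Q, w1, w0):
    # (P - t)(w1 - w0 t) = P w1 - (P w0 + w1) t + w0 t^2,
    # then substitute t^2 = P t - Q:
    const = P * w1 - Q * w0
    lin = -(P * w0 + w1) + w0 * P         # coefficient of t, equals -w1
    return (const, -lin)                  # back to the pair (a1, a0)

for (w1, w0) in [(1, 0), (7, 3), (-2, 5)]:
    assert act_matrix(1, -1, w1, w0) == act_poly(1, -1, w1, w0)
```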
\section{Group structures of $\mathscr{S}(f,R)^{\times}$}\label{sec:GS}
In this section, we assume that $R$ is a unique factorization domain, and
study the group structures of
\begin{align}
&\mathscr{S}(f,R)^{\times} \overset{\sim}{ \underset{\phi_R}{\longrightarrow}} V_f(R)^{\times}
\overset{\sim}{ \underset{\varphi_R}{\longrightarrow}} {\mathcal O}_R^{\times} =(R[t]/(f(t)))^{\times}, \label{eq:GS1}
\\
& {\bf w}=(w_n) \ \mapsto \ \begin{pmatrix}
w_1\\
w_0\\
\end{pmatrix} \ \mapsto \ w_1-w_0t \notag
\end{align}
according to the irreducibility of the polynomial $f(t)$.
\begin{theo}\label{theo:Vf}
\begin{enumerate}[$(1)$]
\item If $f(t)$ is irreducible over $R$, then we have an isomorphism of $R$-algebras
\[
\psi_R:\ V_f(R) \tt R[\theta_1], \quad \begin{pmatrix}
a_1\\
a_0\\
\end{pmatrix} \mapsto a_1-a_0\theta_1.
\]
This yields a group isomorphism
\[
\mathscr{S}(f,R)^{\times} \simeq V_f(R)^{\times} \simeq R[\theta_1 ]^{\times}.
\]
\label{enum:Vf1}
\item Assume that $f(t)$ is reducible over $R$, and hence $d=\mathrm{disc}(f)\in R^2$.
\label{enum:Vf2}
\begin{enumerate}[$(i)$]
\item The case where $f(t)$ has no double root in $R$: Let $H_R$ be an $R$-subalgebra of $R\times R$ defined by
\[
H_R:= \{(x,y)\in R\times R \ \mid \ x\equiv y \pmod{ \sqrt{d} } \},
\]
$($if $d\in R^{\times}$, then $H_R=R\times R$ $)$.
We have an isomorphism of $R$-algebras
\[
\psi_R:\ V_f(R) \tt H_R, \quad \begin{pmatrix}
a_1\\
a_0\\
\end{pmatrix} \mapsto (a_1-a_0\theta_1, a_1-a_0\theta_2).
\]
This yields a group isomorphism
\[
\mathscr{S}(f,R)^{\times} \simeq V_f(R)^{\times} \simeq H_R^{\times}=\{ (x,y) \in R^{\times} \times R^{\times} \ \mid \ x\equiv y \pmod{\sqrt{d}} \}.
\]
\label{enum:Vf2i}
\item The case where $f(t) $ has a double root $\theta$ in $R$: We have an isomorphism of $R$-algebras
\[
\psi_R :\ V_f(R) \tt R[\varepsilon]/(\varepsilon^2), \quad \begin{pmatrix}
a_1\\
a_0\\
\end{pmatrix} \mapsto a_1-a_0(\varepsilon+\theta) \bmod{\varepsilon^2}.
\]
This yields a group isomorphism
\[
\mathscr{S}(f,R)^{\times} \simeq V_f(R)^{\times} \simeq (R[\varepsilon]/(\varepsilon^2))^{\times}=\{ x+y\varepsilon \bmod{\varepsilon^2} \mid x\in R^{\times},\ y\in R\}.
\]
\label{enum:Vf2ii}
\end{enumerate}
\end{enumerate}
\end{theo}
\begin{proof}
The assertion (\ref{enum:Vf1}) follows from (\ref{eq:LR2}) and the isomorphism
\[
{\mathcal O}_R=R[t]/(f(t)) \tt R[\theta_1], \quad g(t) \bmod{f(t)} \mapsto g(\theta_1).
\]
Next, we show the assertion (\ref{enum:Vf2}).
In the case (\ref{enum:Vf2i}), we have $\theta_1\ne \theta_2$ in $R$. Consider the following homomorphism of $R$-algebras
\begin{align}
& {\mathcal O}_R=R[t]/(f(t)) \longrightarrow R[t]/(t-\theta_1) \times R[t]/(t-\theta_2) \tt R\times R, \label{eq:GS2}\\
& \qquad g(t) \bmod{f(t)} \ \mapsto (g(t) \bmod{t-\theta_1}, g(t) \bmod{t-\theta_2}) \mapsto
\ (g(\theta_1),g(\theta_2)).\notag
\end{align}
The map is injective, and the image is in the set $H_R$ since $\theta_1 \equiv \theta_2 \pmod{\sqrt{d}}$.
Conversely, we have
\[
g(t) =\frac{1}{\theta_2-\theta_1} \{ y(t-\theta_1)-x(t-\theta_2) \} \quad \in R[t]
\]
for any $(x,y) \in H_R $, and $g(t) \bmod{f(t)}$ maps to $(x,y)$ by the map (\ref{eq:GS2}).
Thus we get the isomorphism
\[
R[t]/(f(t)) \simeq H_R,
\]
and the assertion follows from the isomorphism and (\ref{eq:LR2}).
In the case (\ref{enum:Vf2ii}), we have $\theta_1=\theta_2 $ in $R$. The assertion follows from (\ref{eq:LR2}) and
the following isomorphism of $R$-algebras
\[
{\mathcal O}_R=R[t]/(f(t)) \tt R+R\varepsilon, \quad a_1-a_0t \bmod{f(t)} \mapsto a_1-a_0 (\varepsilon+\theta).
\]
\qed
\end{proof}
\section{Equivalence classes}\label{sec:EC}
Let $R$ be a unique factorization domain, and assume $Q\in R^{\times}$.
In this section, we introduce two relations $\ss$ and $\s$ on the set $\mathscr{S}(f,R) $,
and consider the quotient sets of $\mathscr{S}(f,R)^{\times}$ by these relations.
The group structure of $\mathscr{S}(f,R)^{\times}$ defined in \S~\ref{sec:LR} naturally
induces group structures on the quotient sets.
Note that we have $w_n \in R$ for any $n\in \Z$ by the assumption $Q\in R^{\times}$.
\begin{df}\label{df:eq}
Let ${\bf w}=(w_n),\ {\bf v}=(v_n) \in \mathscr{S}(f,R) $.
\begin{enumerate}[$(1)$]
\item We define ${\bf w} \ss {\bf v}$ if there exists $\lambda \in R^{\times}$ such that
$w_n=\lambda v_n$ for any $n\in \Z$.
\label{enum:dfeq1}
\item We define ${\bf w} \s {\bf v}$ if there exist $\lambda \in R^{\times}$ and
$\nu \in \Z$ such that $w_n =\lambda v_{n+\nu} $ for any $n\in \Z$.
\label{enum:dfeq2}
\end{enumerate}
\end{df}
These relations are equivalence relations on the set $\mathscr{S}(f,R) $.
By the isomorphism (\ref{eq:LR1}), we can introduce the corresponding relations $\ss$ and $\s$ on the set
$V_f(R)$.
Let $\begin{pmatrix}
a_1\\
a_0\\
\end{pmatrix}, \begin{pmatrix}
b_1\\
b_0\\
\end{pmatrix} \in V_f(R)$.
\begin{itemize}
\item[(1)] We have $\begin{pmatrix}
a_1\\
a_0\\
\end{pmatrix} \ss \begin{pmatrix}
b_1\\
b_0\\
\end{pmatrix}$ if there exists $\lambda \in R^{\times}$ such that
$\begin{pmatrix}
a_1\\
a_0\\
\end{pmatrix}=\lambda \begin{pmatrix}
b_1\\
b_0\\
\end{pmatrix}$.
\item[(2)] We have $\begin{pmatrix}
a_1\\
a_0\\
\end{pmatrix} \s \begin{pmatrix}
b_1\\
b_0\\
\end{pmatrix}$ if there exist $\lambda \in R^{\times}$ and $\nu \in \Z$ such that
$\begin{pmatrix}
a_1\\
a_0\\
\end{pmatrix}=\lambda {\mathcal B}^{\nu} \begin{pmatrix}
b_1\\
b_0\\
\end{pmatrix}$.
\end{itemize}
If $\begin{pmatrix}
a_1\\
a_0\\
\end{pmatrix} \s \begin{pmatrix}
b_1\\
b_0\\
\end{pmatrix}$, then there exist $\lambda \in R^{\times}$ and $\nu \in \Z$ such that
$\begin{pmatrix}
a_1\\
a_0\\
\end{pmatrix}=\lambda {\mathcal B}^{\nu} \begin{pmatrix}
b_1\\
b_0\\
\end{pmatrix}$, hence we have from (\ref{eq:LR4})
\begin{align*}
\Lambda (a_1, a_0) &=N(a_1-a_0t)\\
&=\lambda^2 N(P-t)^{\nu} N(b_1-b_0t)\\
&=\lambda^2 Q^{\nu} \Lambda (b_1,b_0).
\end{align*}
We conclude that $\begin{pmatrix}
a_1\\
a_0\\
\end{pmatrix} \in V_f(R)^{\times}$ if and only if $\begin{pmatrix}
b_1\\
b_0\\
\end{pmatrix} \in V_f(R)^{\times}$.
\begin{df}\label{df:LG} Define two quotient sets of $\mathscr{S}(f,R)^{\times}$ using the relations above:
\[
G_R(f):=\mathscr{S}(f,R)^{\times} /\ss, \quad \text{and} \quad G_R^*(f):=\mathscr{S}(f,R)^{\times}/\s.
\]
\end{df}
We define the products on the sets $G_R(f)$ and $G_R^*(f)$ induced by the abelian group $\mathscr{S}(f,R)^{\times} $.
We can see that the products are well-defined as follows. First, recall that the product in $\mathscr{S}(f,R)^{\times} $
is induced by that of ${\mathcal O}_R^{\times}=(R[t]/(f(t)))^{\times}$ via the isomorphisms (\ref{eq:GS1}).
The products on the sets $G_R(f)$ and $G_R^*(f)$ are well-defined because the multiplication by $\lambda \in R^{\times}$ on
$V_f(R)^{\times}$ is equivalent to that on ${\mathcal O}_R^{\times}$, and the action of ${\mathcal B}^{\nu} \ (\nu \in \Z)$
on $V_f(R)^{\times}$ is interpreted as the multiplication of $(P-t)^{\nu}$ on ${\mathcal O}_R^{\times}$ from (\ref{eq:LR4}).
The
identity element of each of $G_R(f) $ and $G_R^*(f)$ is the class of the Lucas sequence ${\mathcal F}$, and the inverse element
of the class $\displaystyle{[{\bf w}],\ {\bf w}=(w_n), \ w_n=\frac{A\theta_1^n-B\theta_2^n}{\theta_1-\theta_2} }$
is given by $\displaystyle{[{\bf w}]^{-1}=[ {\bf u} ], \ {\bf u} =(u_n),\ u_n=\frac{B\theta_1^n-A\theta_2^n}{\theta_1-\theta_2}}$.
\begin{rem}\label{rem:LG}
There exists a natural bijection between $G_{\Q}^*(f)$ and the Laxton group $G^*(f)$ in \S \ref{sec:LG}.
\end{rem}
We denote by $\left[ \begin{array}{c}
a_1\\
a_0\\
\end{array} \right] $ the class of $ V_f(R)^{\times} /\ss$ containing
$\begin{pmatrix}
a_1\\
a_0\\
\end{pmatrix}$.
For $\left[ \begin{array}{c}
0\\
1\\
\end{array} \right] \in V_f(R)^{\times} /\ss$, we have $
\left[ \begin{array}{c}
0\\
1\\
\end{array} \right]^{-1} =\left[ \begin{array}{c}
P\\
1\\
\end{array} \right]$ from (\ref{eq:LR10}) and hence
(\ref{eq:LR6}) yields
the action of ${\mathcal B}$ on $V_f(R)^{\times}/\ss :$
\begin{equation}\label{eq:EC1}
{\mathcal B}^n \left[ \begin{array}{c}
a_1\\
a_0\\
\end{array} \right]=\left[
\begin{array}{c}
0\\
1\\
\end{array} \right]^{-n} * \left[
\begin{array}{c}
a_1\\
a_0\\
\end{array} \right],
\end{equation}
for any $n\in \Z$ and $\left[ \begin{array}{c}
a_1\\
a_0\\
\end{array} \right] \in V_f(R)^{\times}/ \ss$.
We identify the group $G_R(f)$ with $V_f(R)^{\times}/\ss$ and the group $G_R^*(f)$ with
$V_f(R)^{\times}/\s$ via the group isomorphisms induced by (\ref{eq:LR1}):
\begin{align}
& G_R(f) \tt V_f(R)^{\times}/\ss, \quad [(w_n)] \mapsto \left[
\begin{array}{c}
w_1\\
w_0\\
\end{array} \right], \label{eq:EC2}\\
& G_R^*(f) \tt V_f(R)^{\times}/\s, \quad [(w_n)] \mapsto \left[
\begin{array}{c}
w_1\\
w_0\\
\end{array} \right]. \notag
\end{align}
Therefore, we denote a class of $G_R(f)$ or $G_R^*(f)$ by $\left[
\begin{array}{c}
w_1\\
w_0\\
\end{array} \right]$ instead of $[ (w_n) ]$. Consider a natural surjection
\begin{equation}\label{eq:EC3}
\pi :\ G_R(f) \longrightarrow G_R^*(f), \quad \left[
\begin{array}{c}
w_1\\
w_0\\
\end{array} \right] \mapsto \left[
\begin{array}{c}
w_1\\
w_0\\
\end{array} \right]
\end{equation}
and the image of a subgroup $N$ of $G_R(f)$.
\begin{lem}\label{lem:ImPi}
Let $N$ be a subgroup of $G_R(f)$ and $\pi : G_R(f) \to G_R^*(f)$ be the natural surjection.
Then we have a group isomorphism:
\[
N/N\cap {\tiny \left\langle
\left[
\begin{array}{c}
0\\
1\\
\end{array} \right]
\right\rangle } \tt \pi (N), \quad
\left[
\begin{array}{c}
w_1\\
w_0\\
\end{array} \right] \bmod{ N\cap {\tiny \left\langle
\left[
\begin{array}{c}
0\\
1\\
\end{array} \right] \right\rangle} } \mapsto \left[\begin{array}{c}
w_1\\
w_0\\
\end{array} \right].
\]
\end{lem}
\begin{proof}
The kernel of the restriction map
$ \pi \mid_N:\ N \to \pi (N)$ is the subgroup
$N \cap \left\{ {\mathcal B}^{\nu} \left[
\begin{array}{c}
1\\
0\\
\end{array} \right] \ \vline \nu \in \Z \right\}$. Since
$ {\mathcal B}^{\nu} \left[
\begin{array}{c}
1\\
0\\
\end{array} \right] =\left[
\begin{array}{c}
0\\
1\\
\end{array} \right]^{-\nu}$ from (\ref{eq:EC1}), we get the group isomorphism:
\[
N/N\cap {\tiny \left\langle
\left[
\begin{array}{c}
0\\
1\\
\end{array} \right]
\right\rangle } \simeq \pi (N).
\]
\qed
\end{proof}
Note that $\pi (N)$ is the quotient of $N$ by the action of $\langle {\mathcal B} \rangle$.
In particular, we get
\begin{equation}\label{eq:EC4}
G_R(f)/{\tiny \left\langle
\left[
\begin{array}{c}
0\\
1\\
\end{array} \right] \right\rangle }
\overset{\sim}{ \underset{\pi}{\longrightarrow}} G_R^*(f), \quad
\left[
\begin{array}{c}
w_1\\
w_0\\
\end{array} \right] \bmod{ {\tiny \left\langle
\left[
\begin{array}{c}
0\\
1\\
\end{array} \right] \right\rangle } } \mapsto \left[\begin{array}{c}
w_1\\
w_0\\
\end{array} \right].
\end{equation}
Assume that $f$ is irreducible over $R$. We get the following group isomorphism from Theorem~\ref{theo:Vf} (\ref{enum:Vf1}).
\begin{equation}\label{eq:EC5}
\Psi_R:\ G_R(f) \tt R[\theta_1]^{\times}/R^{\times}, \quad \left[ \begin{array}{c}
w_1\\
w_0\\
\end{array} \right] \mapsto w_1-w_0\theta_1 \bmod{R^{\times}}.
\end{equation}
Since $\Psi_R \left( \left[
\begin{array}{c}
0\\
1\\
\end{array} \right] \right)=\theta_1 \bmod{R^{\times}}$, the isomorphism induces the following isomorphism
\begin{equation}\label{eq:EC6}
\Psi_R^*:\ G^*_R(f) \tt R[\theta_1]^{\times}/R^{\times} \langle \theta_1 \rangle, \quad \left[ \begin{array}{c}
w_1\\
w_0\\
\end{array} \right] \mapsto w_1-w_0\theta_1 \bmod{R^{\times} \langle \theta_1 \rangle }.
\end{equation}
Assume that $f$ is reducible over $R$ and $f$ has no double root in $R$. Furthermore, we assume that $R$ is a local ring.
Let $H_R^{\times} =\{ (x,y) \in R^{\times} \times R^{\times} \mid x\equiv y \pmod{\sqrt{d}} \}$ be the group in
Theorem~\ref{theo:Vf}(\ref{enum:Vf2}),(\ref{enum:Vf2i}) (if $d\in R^{\times}$, then $H_R^{\times} =R^{\times} \times R^{\times}$).
We get a group isomorphism
\begin{equation}\label{eq:EC7}
H_R^{\times}/I_R \tt \begin{cases}
R^{\times} & (\text{if}\ d\in R^{\times} ),\\
1+\sqrt{d} \ R & (\text{if}\ d\not\in R^{\times} ),
\end{cases} \quad (x,y) \bmod{I_R} \mapsto xy^{-1}
\end{equation}
where $I_R:= \{ (x,x) \mid x\in R^{\times} \}$ and $1+\sqrt{d} \ R:= \{ 1+\sqrt{d} \ z \mid z\in R \}$. Then we get the following group isomorphism from Theorem~\ref{theo:Vf} (\ref{enum:Vf2}),(\ref{enum:Vf2i}) and
(\ref{eq:EC7}).
\begin{equation}\label{eq:EC8}
\Psi_R:\ G_R(f) \tt
\begin{cases}
R^{\times} & (\text{if}\ d\in R^{\times} ),\\
1+\sqrt{d} \ R & (\text{if}\ d\not\in R^{\times} ),
\end{cases} \quad \left[
\begin{array}{c}
w_1\\
w_0\\
\end{array} \right] \mapsto (w_1-w_0\theta_1)(w_1-w_0 \theta_2)^{-1} .
\end{equation}
Since $\Psi_R \left( \left[ \begin{array}{c}
0\\
1\\
\end{array} \right] \right)=\theta_1 \theta_2^{-1}$, the isomorphism induces the following isomorphism
\begin{align}
\Psi_R^*:\ G_R^*(f) \tt
\begin{cases}
R^{\times}/\langle \theta_1 \theta_2^{-1} \rangle & (\text{if}\ d\in R^{\times} ),\\
(1+\sqrt{d} \ R) /\langle \theta_1 \theta_2^{-1} \rangle &(\text{if}\ d\not\in R^{\times} ),
\end{cases} \label{eq:EC9} \\
\left[
\begin{array}{c}
w_1\\
w_0\\
\end{array} \right] \mapsto (w_1-w_0\theta_1)(w_1-w_0 \theta_2)^{-1} \bmod{\langle \theta_1 \theta_2^{-1} \rangle}. \notag
\end{align}
Assume that $f$ has a double root $\theta$ in $R$.
From Theorem~\ref{theo:Vf} (\ref{enum:Vf2}),(\ref{enum:Vf2ii}) and the following surjective group homomorphism
\[
(R[\varepsilon]/(\varepsilon^2))^{\times} \longrightarrow R, \quad x+y\varepsilon \bmod{\varepsilon^2} \mapsto x^{-1}y,
\]
we get the group isomorphism
\begin{equation}\label{eq:EC10}
\Psi_R : G_R(f) \tt R, \quad
\left[
\begin{array}{c}
w_1\\
w_0\\
\end{array} \right] \mapsto -w_0 (w_1-w_0\theta)^{-1}.
\end{equation}
The surjectivity follows from the fact that $\left[ \begin{array}{c}
1-y\theta \\
-y \\
\end{array} \right] \in G_R(f)$ maps to $y\in R$.
Since $\Psi_R \left( \left[ \begin{array}{c}
0\\
1\\
\end{array} \right] \right)=\theta^{-1}$, the isomorphism induces the
following isomorphism
\begin{equation}\label{eq:EC11}
\Psi_R^* : G_R^*(f) \tt R/\langle \theta^{-1} \rangle, \quad \left[
\begin{array}{c}
w_1\\
w_0\\
\end{array} \right] \mapsto -w_0 (w_1-w_0\theta)^{-1} \bmod{ \langle \theta^{-1} \rangle}.
\end{equation}
In the case $R=\Q$, we denote the discriminant $d$ of $f$ by $D$:
\[
D:= P^2-4Q \ \in \Q^{\times}.
\]
By summarizing the above discussion, we obtain the following theorem.
\begin{theo}\label{theo:G}
Let $p$ be a prime number with $p\nmid Q$.
\begin{enumerate}[$(1)$]
\item If $f(t)$ is irreducible over $\Q$, then we have
\[
G_{\Q}(f)\tt \Q(\theta_1)^{\times}/\Q^{\times}, \quad \left[ \begin{array}{c}
w_1\\
w_0\\
\end{array} \right]
\mapsto w_1-w_0\theta_1 \bmod{\Q^{\times}}.
\]
If $f(t)$ is reducible over $\Q$, then we have
\[
G_{\Q}(f)\tt \Q^{\times}, \quad \left[ \begin{array}{c}
w_1\\
w_0\\
\end{array} \right]
\mapsto (w_1-w_0\theta_1)(w_1-w_0\theta_2)^{-1}.
\]
\label{enum:G1}
\item If $f(t)$ is irreducible over $\Q$, then we have
\[
G_{\Z_{(p)}}(f) \tt \Z_{(p)}[\theta_1]^{\times}/\Z_{(p)}^{\times}, \quad
\left[ \begin{array}{c}
w_1\\
w_0\\
\end{array} \right] \mapsto w_1-w_0\theta_1\bmod{\Z_{(p)}^{\times}}.
\]
If $f(t)$ is reducible over $\Q$, then we have
\[G_{\Z_{(p)}}(f)\tt \begin{cases} \Z_{(p)}^{\times} & \text{if}\ p\nmid D,\\
1+p^{\frac{s}{2}} \Z_{(p)} & \text{if}\ p|D,
\end{cases} \quad \left[ \begin{array}{c}
w_1\\
w_0\\
\end{array} \right]
\mapsto (w_1-w_0\theta_1)(w_1-w_0\theta_2)^{-1}.
\]
\label{enum:G2}
\item
\begin{enumerate}[$(i)$]
\item If $f(t) \bmod{p}$ is irreducible over $\F_p$, then we have
\[
G_{\F_p}(f)\tt \F_p(\theta_1)^{\times}/\F_p^{\times}, \quad
\left[ \begin{array}{c}
w_1\\
w_0\\
\end{array} \right] \mapsto w_1-w_0\theta_1\bmod{\F_p^{\times}}.
\]
\label{enum:G3i}
\item Assume that $f(t)$ is reducible over $\F_p$.
If $p\nmid D$, then we have
\[
G_{\F_p}(f) \tt \F_p^{\times}
, \quad \left[ \begin{array}{c}
w_1\\
w_0\\
\end{array} \right] \mapsto (w_1-w_0\theta_1)(w_1-w_0\theta_2)^{-1}.
\]
If $p|D$, then we have
\[
G_{\F_p}(f) \tt \F_p, \quad \left[ \begin{array}{c}
w_1\\
w_0\\
\end{array} \right] \mapsto -w_0(w_1-w_0\theta)^{-1},
\]
where $\theta$ is the double root of $f(t) \bmod{p}$.
\label{enum:G3ii}
\end{enumerate}
\label{enum:G3}
\end{enumerate}
\end{theo}
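The orders predicted in case $(3)$ can be confirmed by brute force (an illustrative check, not part of the proof): enumerate $V_f(\F_p)^{\times}$ and count classes modulo scalars. A Python sketch for the illustrative Fibonacci case $P=1$, $Q=-1$ (so $D=5$):

```python
# Brute-force computation of |G_{F_p}(f)|: count the classes of
# V_f(F_p)^x = {(a1, a0) : Lambda(a1, a0) != 0 mod p} modulo scalars in F_p^x.
# The order should be p + 1, p - 1 or p according to whether f mod p is
# irreducible, split with distinct roots, or has a double root.
def order_G(P, Q, p):
    classes = set()
    for a1 in range(p):
        for a0 in range(p):
            if (a1 * a1 - P * a1 * a0 + Q * a0 * a0) % p == 0:
                continue
            orbit = frozenset((c * a1 % p, c * a0 % p) for c in range(1, p))
            classes.add(orbit)
    return len(classes)

# Fibonacci case P = 1, Q = -1, D = 5: f is irreducible mod 3 (5 is a
# non-square mod 3), split mod 11 (5 = 4^2 mod 11), double root mod 5.
assert order_G(1, -1, 3) == 4    # 3 + 1
assert order_G(1, -1, 11) == 10  # 11 - 1
assert order_G(1, -1, 5) == 5
```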
We have the following theorem for the group $G_R^*(f)$ as well.
\begin{theo}\label{theo:G*}
Let $p$ be a prime number with $p\nmid Q$.
\begin{enumerate}[$(1)$]
\item If $f(t)$ is irreducible over $\Q$, then we have
\[
G^*_{\Q}(f)\tt \Q(\theta_1)^{\times}/\Q^{\times}\langle \theta_1 \rangle, \quad
\left[ \begin{array}{c}
w_1\\
w_0\\
\end{array} \right] \mapsto w_1-w_0\theta_1 \bmod{\Q^{\times} \langle \theta_1 \rangle}.
\]
If $f(t)$ is reducible over $\Q$, then we have
\[
G_{\Q}^*(f)\tt \Q^{\times} /
\langle \theta_1 \theta_2^{-1} \rangle, \quad
\left[ \begin{array}{c}
w_1\\
w_0\\
\end{array} \right] \mapsto (w_1-w_0\theta_1)(w_1-w_0\theta_2)^{-1} \bmod{\langle \theta_1 \theta_2^{-1} \rangle}.
\]
\label{enum:G*1}
\item If $f(t)$ is irreducible over $\Q$, then we have
\[
G_{\Z_{(p)}}^*(f)\tt \Z_{(p)}[\theta_1]^{\times}/\Z_{(p)}^{\times} \langle \theta_1 \rangle,\quad
\left[ \begin{array}{c}
w_1\\
w_0\\
\end{array} \right] \mapsto w_1-w_0\theta_1 \bmod{ \Z_{(p)}^{\times} \langle \theta_1 \rangle}.
\]
If $f(t)$ is reducible over $\Q$, then we have
\begin{align*}
G_{\Z_{(p)}}^*(f) \tt
\begin{cases}
\Z_{(p)}^{\times}/\langle \theta_1\theta_2^{-1} \rangle & \text{if}\ p\nmid D,\\
(1+p^{\frac{s}{2}}\Z_{(p)})/\langle \theta_1 \theta_2^{-1} \rangle & \text{if}\ p|D,
\end{cases} \\
\quad \left[ \begin{array}{c}
w_1\\
w_0\\
\end{array} \right] \mapsto (w_1-w_0\theta_1)(w_1-w_0\theta_2)^{-1} \bmod{ \langle \theta_1 \theta_2^{-1} \rangle}.
\end{align*}
\label{enum:G*2}
\item
\begin{enumerate}[$(i)$]
\item If $f(t) \bmod{p}$ is irreducible over $\F_p$, then we have
\[
G_{\F_p}^*(f)\tt \F_p(\theta_1)^{\times}/\F_p^{\times} \langle \theta_1 \rangle ,\quad
\left[ \begin{array}{c}
w_1\\
w_0\\
\end{array} \right] \mapsto w_1-w_0 \theta_1 \bmod{\F_p^{\times} \langle \theta_1 \rangle}.
\]
\label{enum:G*3i}
\item Assume that $f(t)$ is reducible over $\F_p$.
If $p\nmid D$, then we have
\[
G_{\F_p}^*(f)\tt \F_p^{\times} /
\langle \theta_1 \theta_2^{-1} \rangle,\quad
\left[ \begin{array}{c}
w_1\\
w_0\\
\end{array} \right] \mapsto (w_1-w_0\theta_1)(w_1-w_0\theta_2)^{-1} \bmod{\langle \theta_1 \theta_2^{-1}\rangle}.
\]
If $p|D$, then we have
\[
G^*_{\F_p}(f) \tt 0.
\]
\label{enum:G*3ii}
\end{enumerate}
\label{enum:G*3}
\end{enumerate}
\end{theo}
\section{Subgroups of $G_{\Q}(f)$ and $G_{\Q}^*(f)$ }\label{sec:SubG}
In this section, we define natural subgroups $G(f,p),\ K(f,p),\ H(f,p)$
of $G_{\Q}(f)$, and $G^*(f,p),\ K^*(f,p),\ H^*(f,p)$ of $G_{\Q}^*(f)$. The definitions of
these groups give natural interpretation of Laxton's subgroups $G^*(f,p)_L,\ K^*(f,p)_L$ and
$H^*(f,p)_L$ in \S~\ref{sec:LG}.
Let $p$ be a prime number with $p\nmid Q$. Consider the projective line over $\F_p$:
\[
\P^1(\F_p) :=(\F_p^2\setminus \{0\})/\sim =\left\{ \left[
\begin{array}{c}
a_1\\
a_0\\
\end{array} \right] \in \F_p^2 \ \vline \ a_0\in \F_p^{\times} \ \text{or} \ a_1\in \F_p^{\times} \right\},
\]
where $\left[
\begin{array}{c}
a_1\\
a_0\\
\end{array} \right]=\left[
\begin{array}{c}
b_1\\
b_0\\
\end{array} \right]$ if and only if there exists $c\in \F_p^{\times}$ such that
$a_0=cb_0,\ a_1=cb_1$. For any class $\left[
\begin{array}{c}
w_1\\
w_0\\
\end{array} \right] \in G_{\Q}(f)$, we can choose the representative
$\begin{pmatrix}
w_1\\
w_0\\
\end{pmatrix}$ so that $w_0,w_1 \in \Z$ and $(w_0,w_1)=1$. Therefore, the reduction map
\begin{equation}\label{eq:SubG1}
\mathrm{red}_p:\ G_{\Q}(f) \longrightarrow \P^1(\F_p), \quad
\left[
\begin{array}{c}
w_1\\
w_0\\
\end{array} \right] \mapsto \left[
\begin{array}{c}
w_1\\
w_0\\
\end{array} \right]
\end{equation}
is well-defined. The group $G_{\F_p}(f) \ \left(\simeq V_f(\F_p)^{\times}/\underset{ \F_p}{\sim} \right)$
is a subset of $\P^1(\F_p)$.
\begin{df}\label{df:GKH}
Define subsets $G(f,p),\ K(f,p)$ and $H(f,p)$ of $G_{\Q}(f)$ by
\begin{align*}
G(f,p) & := \left\{
\left[
\begin{array}{c}
w_1\\
w_0\\
\end{array} \right] \in G_{\Q} (f) \ \vline \ \mathrm{red}_p \left( \left[
\begin{array}{c}
w_1\\
w_0\\
\end{array} \right] \right) = \left[
\begin{array}{c}
1\\
0\\
\end{array} \right] \right\},\\
K(f,p) & := \left\{
\left[
\begin{array}{c}
w_1\\
w_0\\
\end{array} \right] \in G_{\Q} (f) \ \vline \ \mathrm{red}_p \left( \left[
\begin{array}{c}
w_1\\
w_0\\
\end{array} \right] \right) \in G_{\F_p}(f) \right\},\\
H(f,p) & := \left\{
\left[
\begin{array}{c}
w_1\\
w_0\\
\end{array} \right] \in G_{\Q} (f) \ \vline \ \mathrm{red}_p \left( \left[
\begin{array}{c}
w_1\\
w_0\\
\end{array} \right]^n \right) = \left[
\begin{array}{c}
1\\
0\\
\end{array} \right]\ \text{ for some} \ n\in \Z \right\}.
\end{align*}
\end{df}
From the definition of $K(f,p)$, we can see that $K(f,p)$ is a subgroup of $G_{\Q}(f)$.
The reduction map (\ref{eq:SubG1}) induces
the group homomorphism
\begin{equation}\label{eq:SubG2}
\mathrm{red}_p:\ K(f,p) \longrightarrow G_{\F_p}(f), \quad
\left[
\begin{array}{c}
w_1\\
w_0\\
\end{array} \right] \mapsto \left[
\begin{array}{c}
w_1\\
w_0\\
\end{array} \right].
\end{equation}
The set $G(f,p)$ is a subgroup of $K(f,p)$ because $\left[
\begin{array}{c}
1\\
0\\
\end{array} \right]$ is the identity element of $G_{\F_p}(f)$. The set $H(f,p)$ is a subgroup of $G_{\Q}(f)$ since
$G(f,p)$ is a group and $\left[
\begin{array}{c}
w_1\\
w_0\\
\end{array} \right] \in H(f,p)$ if and only if
$
\left[
\begin{array}{c}
w_1\\
w_0\\
\end{array} \right]^n \in G(f,p)$ for some $n\in \Z$. Furthermore, we have $K(f,p) \subset H(f,p)$ since the image of
$K(f,p)$ under the reduction map lies in the finite group $G_{\F_p}(f)$.
We get the following sequence of subgroups.
\begin{equation}\label{eq:SubG3}
G(f,p) \leq K(f,p) \leq H(f,p)\leq G_{\Q}(f).
\end{equation}
By the definition of the subgroups, we have an exact sequence of groups:
\begin{equation}\label{eq:SubG4}
1 \longrightarrow G(f,p) \longrightarrow K(f,p) \underset{ \mathrm{red}_p}{\longrightarrow} G_{\F_p}(f) \longrightarrow 1.
\end{equation}
This exact sequence is an analogue of the elliptic curve exact sequence (\cite[VII, Proposition 2.1]{Si}).
Next, we define the corresponding subgroups of $G_{\Q}^*(f)$ by the natural map $\pi :\ G_{\Q} (f) \to G_{\Q}^*(f)$.
\begin{df}\label{df:GKH*}
Define subsets $G^*(f,p),\ K^*(f,p)$ and $H^*(f,p)$ of $G_{\Q}^*(f)$ by
\begin{align*}
G^*(f,p) & :=\pi (G(f,p)),\\
K^*(f,p) & :=\pi (K(f,p)),\\
H^*(f,p) &:=\pi (H(f,p)).
\end{align*}
\end{df}
These subsets are subgroups of $G_{\Q}^*(f)$ since the map $\pi$ is a group homomorphism and we get the following sequence of
subgroups:
\begin{equation}\label{eq:SubG5}
G^*(f,p) \leq K^*(f,p) \leq H^*(f,p)\leq G^*_{\Q}(f).
\end{equation}
Let $p$ be a prime number, and assume $p\nmid Q$.
For the Lucas sequence ${\mathcal F}=({\mathcal F}_n) \in \mathscr{S}(f,\Q)$,
we have ${\mathcal F}_n \in \Z_{(p)}$ for any $n\in \Z$.
Lucas showed that there exists a positive integer $n$
satisfying $p|{\mathcal F}_n$ in this case
(\cite[\S 24, 25]{Lu}, \cite[Lemma~2, Theorem~12]{C}, \cite[IV.18, IV.19 and p67]{R}).
\begin{df}\label{df:rank}
Assume $p\nmid Q$. We denote {\it the rank} of the Lucas sequence ${\mathcal F}=({\mathcal F}_n) \in \mathscr{S}(f,\Q)$ by
$r(p)$. Namely, it is the smallest positive integer $n$ satisfying $p|{\mathcal F}_n$.
\end{df}
We can easily check $r(2)=2$ if $P$ is even, and $r(2)=3$ if $P$ is odd. If $p\ne 2$, then we know that $r(p)$ divides
$p-\left( \dfrac{D}{p} \right) $ from the results of Lucas, where $\left( \dfrac{*}{*} \right)$ is the Legendre symbol.
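The rank can be computed directly from the recurrence (an illustrative sketch; the parameters below are the Fibonacci example $P=1$, $Q=-1$):

```python
# The rank r(p): the smallest n >= 1 with p | F_n, computed from the
# recurrence F_{n+1} = P*F_n - Q*F_{n-1} working modulo p throughout.
def rank(P, Q, p):
    prev, cur, n = 0, 1, 1      # (F_0, F_1)
    while cur % p != 0:
        prev, cur = cur, (P * cur - Q * prev) % p
        n += 1
    return n

# Fibonacci numbers (P = 1, Q = -1): F = 0, 1, 1, 2, 3, 5, 8, 13, 21, ...
assert rank(1, -1, 2) == 3   # r(2) = 3 since P is odd
assert rank(1, -1, 5) == 5
assert rank(1, -1, 7) == 8   # divides 7 - (5/7) = 8, with (5/7) = -1
```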
\begin{lem}\label{lem:orderF}
Assume $p\nmid Q$. Let $r(p)$ be the rank of the Lucas sequence ${\mathcal F}=({\mathcal F}_n)$. Then
$r(p)$ is equal to the order of
$
\left[ \begin{array}{c}
0\\
1\\
\end{array} \right] \in G_{\F_p}(f)$.
\end{lem}
\begin{proof}
By the definition, $r(p)$ is the smallest positive integer $n$ satisfying
\[
\left[ \begin{array}{c}
1\\
0\\
\end{array} \right] =\left[ \begin{array}{c}
{\mathcal F}_{n+1} \\
{\mathcal F}_n \\
\end{array} \right] ={\mathcal B}^n\left[ \begin{array}{c}
{\mathcal F}_1\\
{\mathcal F}_0\\
\end{array} \right]
={\mathcal B}^n\left[ \begin{array}{c}
1\\
0\\
\end{array} \right]
\]
in $G_{\F_p}(f)$.
Furthermore, we get ${\mathcal B}^n \left[ \begin{array}{c}
1\\
0\\
\end{array} \right] =
\left[ \begin{array}{c}
0\\
1\\
\end{array} \right]^{-n}$ from (\ref{eq:EC1}). Therefore, $r(p)$ is equal to the order of $
\left[ \begin{array}{c}
0\\
1\\
\end{array} \right] $ in $G_{\F_p}(f)$.
\qed
\end{proof}
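The lemma can be checked numerically (illustrative only): iterate the product (\ref{eq:LR10}) with the pair $(0,1)$ modulo $p$ and record when the class returns to the identity. The counts agree with the known ranks of the Fibonacci case $P=1$, $Q=-1$:

```python
# The order of the class [0, 1] in G_{F_p}(f): repeatedly multiply by (0, 1)
# via eq. (LR10) modulo p.  A class [a1, a0] is the identity [1, 0] iff a0 = 0
# (a1 is then automatically nonzero since Lambda != 0).
def order_of_01(P, Q, p):
    a1, a0, n = 0, 1, 1
    while a0 % p != 0:
        a1, a0 = (-Q * a0) % p, (a1 - P * a0) % p
        n += 1
    return n

# Fibonacci case P = 1, Q = -1: the orders match the ranks r(2) = 3,
# r(5) = 5 and r(7) = 8 of the classical Fibonacci numbers.
assert order_of_01(1, -1, 2) == 3
assert order_of_01(1, -1, 5) == 5
assert order_of_01(1, -1, 7) == 8
```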
\begin{lem}\label{lem:G(f,p)}
Let $r(p)$ be the rank of the Lucas sequence ${\mathcal F}=({\mathcal F}_n)$. Then we have
\[
G(f,p) \cap {\tiny \langle \left[
\begin{array}{c}
0\\
1\\
\end{array} \right] \rangle } = {\tiny
\langle \left[
\begin{array}{c}
0\\
1\\
\end{array} \right]^{r(p)} \rangle.}
\]
\end{lem}
\begin{proof}
The assertion follows from Lemma~\ref{lem:orderF}.
\qed
\end{proof}
We see that $\left[ \begin{array}{c}
0\\
1\\
\end{array} \right] \in K(f,p)$ from $\Lambda (0,1) =Q \not\equiv 0 \pmod{p}$,
hence we get the following isomorphisms by Lemmas~\ref{lem:ImPi} and \ref{lem:G(f,p)}.
\begin{align}
G^*(f,p) & \overset{\sim}{ \underset{\pi}{\longleftarrow}}
G(f,p) /{\tiny \left\langle \left[ \begin{array}{c}
0\\
1\\
\end{array} \right]^{r(p)} \right\rangle },
\\
K^*(f,p) & \overset{\sim}{ \underset{\pi}{\longleftarrow}}
K(f,p)/ {\tiny \left\langle \left[ \begin{array}{c}
0\\
1\\
\end{array} \right] \right\rangle }, \notag \\
H^*(f,p) & \overset{\sim}{ \underset{\pi}{\longleftarrow}}
H(f,p)/ {\tiny \left\langle \left[ \begin{array}{c}
0\\
1\\
\end{array} \right] \right\rangle }. \notag
\end{align}
The exact sequence (\ref{eq:SubG4}) yields the exact sequence of groups:
\begin{equation}\label{eq:SubG8}
1 \longrightarrow G^*(f,p) \longrightarrow K^*(f,p) \underset{ \mathrm{red}_p}{\longrightarrow} G^*_{\F_p}(f) \longrightarrow 1.
\end{equation}
Let $G^*(f)$ be the Laxton group and $G^*(f,p)_L, K^*(f,p)_L, H^*(f,p)_L$ be the subgroups
defined in \S~\ref{sec:LG}.
Under the natural isomorphism of groups $\iota: G^*(f) \to G^*_{\Q}(f), \ {\mathcal W} \mapsto {\mathcal W}$, our subgroups $G^*(f,p),\ K^*(f,p)$ and $H^*(f,p)$ in Definition~\ref{df:GKH*} correspond to $G^*(f,p)_L,\ K^*(f,p)_L$ and
$H^*(f,p)_L$, respectively.
\begin{theo}\label{theo:main2}
Let $G^*(f)$ be the Laxton group and $G^*(f,p)_L,\ K^*(f,p)_L,
H^*(f,p)_L$ be the subgroups defined in \S~\ref{sec:LG}. By the natural isomorphism
of groups $\iota: G^*(f) \to G^*_{\Q}(f)$, we have
$\iota (G^*(f,p)_L)=G^*(f,p),\ \iota (K^*(f,p)_L)=K^*(f,p)$ and $\iota (H^*(f,p)_L)=H^*(f,p)$.
Furthermore, we have the following exact sequence of groups:
\[
1 \longrightarrow G^*(f,p) \longrightarrow K^*(f,p) \underset{ \mathrm{red}_p}{\longrightarrow} G^*_{\F_p}(f) \longrightarrow 1,
\]
where the map $\mathrm{red}_p$ is the reduction map defined by {\small $\left[
\begin{array}{c}
w_1\\
w_0\\
\end{array} \right] \mapsto \left[
\begin{array}{c}
w_1\\
w_0\\
\end{array} \right]$,
}
where $\begin{pmatrix}
w_1\\
w_0\\
\end{pmatrix}$ is a representative such that
$w_0,w_1 \in \Z$ and $(w_0,w_1)=1$.
\end{theo}
\begin{proof}
We only give the proof of $\iota (G^*(f,p)_L)=G^*(f,p)$, because the other assertions have already been proved.
Let ${\mathcal W} \in G^*(f,p)_L$.
We can choose ${\bf w} =(w_n) \in {\mathcal W}$ such that $w_0, w_1 \in \Z $ and at least one of them is
not divisible by $p$. Since ${\bf w}$ is
a representative of ${\mathcal W} \in G^*(f,p)_L$, we have $w_n \in p\Z_{(p)}$ for some $n$. For ${\mathcal B}=\begin{pmatrix}
P & -Q\\
1 & 0\\
\end{pmatrix}$, we have
\[
\begin{pmatrix}
w_1\\
w_0\\
\end{pmatrix} ={\mathcal B}^{-n}
\begin{pmatrix}
w_{n+1} \\
w_n\\
\end{pmatrix},
\]
and hence $w_{n+1} \in \Z_{(p)}^{\times}$, since $p\nmid w_0$ or $p\nmid w_1$. We conclude that $\iota
({\mathcal W}) \in G^*(f,p)$, and we have
$\iota (G^*(f,p)_L) \subset G^*(f,p)$.
The opposite inclusion relation $\iota (G^*(f,p)_L) \supset G^*(f,p)$ is obvious.
\qed
\end{proof}
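The matrix identity used in this proof is easy to probe numerically. The following sketch (our own illustration; the parameters $P=1$, $Q=-1$ and the initial values $w_0=2$, $w_1=1$ are an arbitrary choice, giving the Lucas numbers) checks that applying ${\mathcal B}^{-1}$ repeatedly to $(w_{n+1},w_n)^{T}$ recovers $(w_1,w_0)^{T}$, with denominators only involving $Q$:

```python
from fractions import Fraction

P, Q = 1, -1          # illustrative parameters (Lucas numbers), not from the text
w0, w1 = 2, 1
n = 6

# Build the sequence w_{m+1} = P*w_m - Q*w_{m-1}.
w = [w0, w1]
for _ in range(n):
    w.append(P * w[-1] - Q * w[-2])

# B = [[P, -Q], [1, 0]] sends (w_m, w_{m-1}) to (w_{m+1}, w_m);
# its inverse sends (a, b) to (b, (P*b - a)/Q).
def B_inv(v):
    a, b = v
    return (b, Fraction(P * b - a, Q))

v = (Fraction(w[n + 1]), Fraction(w[n]))
for _ in range(n):
    v = B_inv(v)
assert v == (w1, w0)  # B^{-n} recovers the initial column vector
```

Since $\det {\mathcal B}=Q$ is a $p$-unit for $p\nmid Q$, the entries of ${\mathcal B}^{-n}$ lie in $\Z_{(p)}$, which is the point used in the proof.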
\section{Group structures of $K(f,p)$ and $K^*(f,p)$}\label{sec:K}
In this section, we determine the group structures of $K(f,p)$ and $K^*(f,p)$ defined in Definitions~\ref{df:GKH} and
\ref{df:GKH*}.
Let $p$ be a prime number with $p\nmid Q$.
\begin{lem}\label{lem:KK*}
We have the following group isomorphisms.
\begin{enumerate}[$(1)$]
\item
\[
\rho:\ G_{\Z_{(p)}}(f) \overset{\sim}{\longrightarrow}
K(f,p), \quad \left[ \begin{array}{c}
w_1\\
w_0\\
\end{array} \right] \mapsto \left[
\begin{array}{c}
w_1\\
w_0\\
\end{array} \right]
\]
\label{enum:KK*1}
\item
\[
\rho^* :\ G^*_{\Z_{(p)}}(f)
\overset{\sim}{\longrightarrow}
K^*(f,p), \quad
\left[ \begin{array}{c}
w_1\\
w_0\\
\end{array} \right] \bmod{ \langle {\tiny \left[ \begin{array}{c}
0\\
1\\
\end{array} \right] } \rangle } \mapsto
\left[ \begin{array}{c}
w_1\\
w_0\\
\end{array} \right] \bmod{ \langle {\tiny \left[ \begin{array}{c}
0\\
1\\
\end{array} \right] } \rangle }.
\]
\label{enum:KK*2}
\end{enumerate}
\end{lem}
\begin{proof}
(\ref{enum:KK*1})
We only show that $\rho$ is injective because the other part is trivial.
If $\rho \left( \left[ \begin{array}{c}
w_1\\
w_0\\
\end{array} \right] \right)=\left[ \begin{array}{c}
1\\
0\\
\end{array} \right]$ for $\left[ \begin{array}{c}
w_1\\
w_0\\
\end{array} \right] \in G_{\Z_{(p)}}(f)$, then
we get $w_0=0, w_1\in \Q^{\times}$.
Furthermore, we have $w_1\in \Z_{(p)}^{\times}$ since
$\Lambda (w_1,w_0)=w_1^2-Pw_0w_1+Qw_0^2 \in \Z_{(p)}^{\times}$.
We get $\left[ \begin{array}{c}
w_1\\
w_0\\
\end{array} \right] =\left[ \begin{array}{c}
1\\
0\\
\end{array} \right]$ in $G_{\Z_{(p)}}(f)$, and hence $\rho$ is injective.
(\ref{enum:KK*2}) We get the assertion from (\ref{enum:KK*1}) since the kernels of the natural surjection $K(f,p) \to K^*(f,p)$
and $G_{\Z_{(p)} }(f) \to G^*_{\Z_{(p)}} (f)$ are the subgroups generated by $\left[
\begin{array}{c}
0\\
1\\
\end{array} \right]$.
\qed
\end{proof}
By Lemma~\ref{lem:KK*}, Theorems~\ref{theo:G} (\ref{enum:G2} ) and \ref{theo:G*} (\ref{enum:G*2}), we get the following theorem.
\begin{theo}\label{theo:KK*}
Put $D=p^sD_0$ with $s\geq 0,\ p\nmid D_0$.
\begin{enumerate}[$(1)$]
\item If $f(t)$ is irreducible over $\Q$, then we have
\[
K(f,p) \overset{\sim}{\longrightarrow}
\Z_{(p)}[\theta_1]^{\times}/\Z_{(p)}^{\times} , \quad \left[
\begin{array}{c}
w_1\\
w_0\\
\end{array} \right] \mapsto w_1-w_0\theta_1 \bmod{\Z_{(p)}^{\times}},
\]
and
\[
K^*(f,p) \overset{\sim}{\longrightarrow}
\Z_{(p)}[\theta_1]^{\times}/\Z_{(p)}^{\times}\langle \theta_1 \rangle , \quad \left[
\begin{array}{c}
w_1\\
w_0\\
\end{array} \right] \mapsto w_1-w_0\theta_1 \bmod{\Z_{(p)}^{\times}
\langle \theta_1 \rangle}.
\]
\label{enum:theoKK*1}
\item If $f(t)$ is reducible over $\Q$, then we have
\[
K(f,p) \overset{\sim}{\longrightarrow} \begin{cases}
\Z_{(p)}^{\times} & \text{if}\ p\nmid D,\\
1+p^{\frac{s}{2}} \Z_{(p)} & \text{if}\ p|D,
\end{cases}
\quad
\left[
\begin{array}{c}
w_1\\
w_0\\
\end{array} \right] \mapsto (w_1-w_0 \theta_1)(w_1-w_0\theta_2)^{-1},
\]
and
\begin{align*}
K^*(f,p) \overset{\sim}{\longrightarrow} \begin{cases}
\Z_{(p)}^{\times}/\langle \theta_1 \theta_2^{-1} \rangle & \text{if}\ p\nmid D,\\
(1+p^{\frac{s}{2}} \Z_{(p)} )/\langle \theta_1 \theta_2^{-1} \rangle & \text{if}\ p|D,
\end{cases}
\\
\left[
\begin{array}{c}
w_1\\
w_0\\
\end{array} \right] \mapsto (w_1-w_0 \theta_1)(w_1-w_0\theta_2)^{-1} \bmod{\langle \theta_1 \theta_2^{-1} \rangle}.
\end{align*}
\label{enum:theoKK*2}
\end{enumerate}
\end{theo}
\section{ Group structures of $G_{\Q}(f)/K(f,p)$ and $G^*_{\Q}(f)/K^*(f,p)$}\label{sec:G/Kp}
In this section, we determine the structures of $G_{\Q}(f)/K(f,p)$ and $G_{\Q}^*(f)/K^*(f,p)$.
Put $F:=\Q(\theta_1)$ and denote the ring of integers of $F$ by ${\mathcal O}_F$.
Let $p$ be a prime number and ${\mathfrak p}$ be a prime ideal of $F$ above $p$.
Put ${\mathcal O}_{\mathfrak p}:=S_{\mathfrak p}^{-1} {\mathcal O}_F$ and ${\mathcal O}_{(p)}:=
S_p^{-1}{\mathcal O}_F=\bigcap_{ {\mathfrak p} \mid p} {\mathcal O}_{F,{\mathfrak p}} $,
where $S_{\mathfrak p}:={\mathcal O}_F \setminus {\mathfrak p}$ and $S_p:=\Z\setminus p\Z$.
We have ${\mathcal O}_{(p)}^{\times} =\{ \alpha \in F^{\times} \mid v_{\mathfrak p}(\alpha)=0 \ \text{
for any}\ {\mathfrak p}\ \text{above}\ p \}$.
Put $D=ma^2$ with $a\in \N$ and a squarefree integer $m$. We have
$F=\Q(\sqrt{m})$, and if $p\ne 2$, then ${\mathcal O}_{(p)}=\Z_{(p)} [\sqrt{m} ],\ \Z_{(p)}[\theta_1 ]=\Z_{(p)} [\sqrt{D} ]$.
\begin{lem}\label{lem:localring}
If $p^2 \nmid D$, then we have ${\mathcal O}_{(p)}=\Z_{(p)} [\theta_1]$.
\end{lem}
\begin{proof}
If $F=\Q$, then the assertion is trivial. Assume $F\ne \Q$.
In the case $p\ne 2$, the assertion follows from
\[
{\mathcal O}_{(p)}=\Z_{(p)}[\sqrt{m}]=\Z_{(p)} [\sqrt{D} ]=\Z_{(p)}[\theta_1 ].
\]
Assume $p=2$. Since $2^2\nmid D=P^2-4Q$, we have
$2\nmid P$. Furthermore, since $\theta_1=(P\pm \sqrt{D})/2=(P\pm a\sqrt{m})/2 \in {\mathcal O}_F$,
we have
$m\equiv 1\pmod{4}$ and $2\nmid a$, and hence ${\mathcal O}_F=\Z[(1+\sqrt{m})/2 ]$.
We get $(1+\sqrt{m})/2 \in \Z_{(p)} [\theta_1]$ from
\[
\frac{1\pm \sqrt{m}}{2}=\theta_1-\frac{(P-1)\pm (a-1)\sqrt{m}}{2}.
\]
Therefore, we have ${\mathcal O}_{(p)}=\Z_{(p)}[\theta_1]$.
\qed
\end{proof}
\begin{lem}\label{lem:F/ZpQ}
If $f(t)$ is irreducible over $\Q$, then we have
\[
F^{\times}/\Z_{(p)} [\theta_1]^{\times} \Q^{\times} \simeq
\begin{cases}
{\mathcal O}_{(p)}^{\times}/\Z_{(p)}[\theta_1]^{\times} & \text{if $p$ is inert in $F$},\\
\Z \times ( {\mathcal O}_{(p)}^{\times}/\Z_{(p)}[\theta_1]^{\times} )& \text{if $p$ splits in $F$},\\
\Z/2\Z \times ( {\mathcal O}_{(p)}^{\times}/\Z_{(p)}[\theta_1]^{\times}) & \text{if $p$ is ramified in $F$}.
\end{cases}
\]
\end{lem}
\begin{proof}
If $p$ is inert in $F$, then the assertion follows from
\[
F^{\times}/\Z_{(p)}[\theta_1]^{\times} \Q^{\times} ={\mathcal O}_{(p)}^{\times} \langle p\rangle /\Z_{(p)}[\theta_1]^{\times}
\langle p\rangle \simeq {\mathcal O}_{(p)}^{\times} /\Z_{(p)}[\theta_1]^{\times}.
\]
Next, we consider the case $p$ splits in $F$.
Put Gal$(F/\Q)=\langle \sigma \rangle $ and $(p)={\mathfrak p}{\mathfrak p}^{\sigma}$.
Consider a split surjection
\[
{\mathcal V}: F^{\times}/{\Z_{(p)}[\theta_1]}^{\times} \Q^{\times} \to \Z,\quad {\mathcal V}(\alpha)=v_{\mathfrak p}(\alpha^{1-\sigma})
=v_{\mathfrak p}(\alpha)-v_{{\mathfrak p}^{\sigma}}(\alpha).
\]
We have $\alpha \in \mathrm{Ker}\,{\mathcal V}$ if and only if $\alpha=p^n \beta$ for some $n\in \Z$ and $\beta\in
{\mathcal O}_{(p)}^{\times}$.
Therefore, we get
\[
\mathrm{Ker}{\mathcal V}={\mathcal O}_{(p)}^{\times} \langle p\rangle /\Z_{(p)}[\theta_1]^{\times} \Q^{\times}
={\mathcal O}_{(p)}^{\times} \langle p\rangle /\Z_{(p)} [\theta_1]^{\times} \langle p\rangle \simeq
{\mathcal O}_{(p)}^{\times} /\Z_{(p)}[\theta_1]^{\times},
\]
and hence
\[
F^{\times}/\Z_{(p)}[\theta_1]^{\times} \Q^{\times} \simeq \Z\times ({\mathcal O}_{(p)}^{\times}/\Z_{(p)}[\theta_1]^{\times}).
\]
Finally, we consider the case $p$ is ramified in $F$.
Put $(p)={\mathfrak p}^2$, and choose $\pi \in {\mathfrak p}\setminus {\mathfrak p}^2$.
The assertion follows from the isomorphism
\[
{\mathcal W}: F^{\times} \overset{\sim}{\longrightarrow} {\mathcal O}_{(p)}^{\times}
\times \Z,\quad {\mathcal W}(\alpha)=(\pi^{-v_{\mathfrak p}(\alpha)} \alpha, v_{\mathfrak p}(\alpha)),
\]
and
\[
{\mathcal W} (\Z_{(p)}[\theta_1]^{\times} \Q^{\times})
={\mathcal W}(\Z_{(p)}[\theta_1]^{\times} \langle p\rangle )
= \Z_{(p)} [\theta_1]^{\times} \times 2\Z.
\]
\qed
\end{proof}
\begin{lem}\label{lem:O/Zp}
Assume that $f(t)$ is irreducible over $\Q$.
Put $D=p^s D_0$ with $s\geq 0, \ p\nmid D_0$.
\begin{enumerate}[$(1)$]
\item If $p$ is inert in $F$, then we have
\[
{\mathcal O}_{(p)}^{\times}/\Z_{(p)}[\theta_1]^{\times} \simeq \begin{cases}
0 & \text{if $s=0$},\\
\Z/p^{s/2-1}(p+1)\Z & \text{if $s\ne0, \ s\equiv 0\pmod{2}$ and $p\ne 2$}.
\end{cases}
\]
\label{enum:O/Zp1}
\item If $p$ splits in $F$, then we have
\[
{\mathcal O}_{(p)}^{\times} /\Z_{(p)}[\theta_1]^{\times}\simeq \begin{cases}
0 & \text{if $s=0$},\\
\Z/p^{s/2-1}(p-1)\Z & \text{if $s\ne0, \ s\equiv 0\pmod{2}$ and $p\ne 2$}.
\end{cases}
\]
\label{enum:O/Zp2}
\item If $p$ is ramified in $F$, then we have
\begin{align*}
& {\mathcal O}_{(p)}^{\times} /\Z_{(p)}[\theta_1]^{\times} \\
& \simeq \begin{cases}
0 & \text{if $s=1$},\\
\Z/p^{[s/2]}\Z & \text{if $s\ne1, \ s\equiv 1\pmod{2}$ and $p\ne 2,3$},
\\
\Z/p^{[s/2]}\Z \ \text{or} \
\Z/p\Z \times \Z/p^{[s/2]-1}\Z
&
\text{if $s\ne1, \ s\equiv 1\pmod{2}$ and $p=3$.}
\end{cases}
\end{align*}
\label{enum:O/Zp3}
\end{enumerate}
\end{lem}
\begin{proof}
The assertions in the cases $s=0,1$ follow from Lemma~\ref{lem:localring}.
Consider the cases $s\ne 0,1$ and $p\ne 2$.
We have ${\mathcal O}_{(p)}=\Z_{(p)}[\sqrt{m}],\ \Z_{(p)}[\theta_1]=\Z_{(p)}[\sqrt{D}]$ since $p\ne 2$.
Put $k=[s/2] (\geq 1)$. From the following commutative diagram:
\begin{equation*}
\begin{CD}
1 @>>> 1+p^k {\mathcal O}_{(p)} @>>> {\mathcal O}_{(p)}^{\times} @>>> ({\mathcal O}_F/(p^k))^{\times} @>>>1\\
@. @| @AAA @AAA \\
1 @>>> 1+p^k \Z_{(p)}[\sqrt{m}]
@>>> \Z_{(p)} [\theta_1]^{\times} @>>> (\Z/p^k\Z)^{\times} @>>>1, \\
\end{CD}
\end{equation*}
where the middle and right vertical maps are injective,
we get
\begin{equation}\label{eq:O/Zp1}
{\mathcal O}_{(p)}^{\times} /\Z_{(p)}[\theta_1]^{\times} \simeq
({\mathcal O}_F/(p^k))^{\times}/(\Z/p^k\Z)^{\times}.
\end{equation}
(\ref{enum:O/Zp1}) The assertion in the case where $p$ is inert in $F$ follows from the
isomorphisms:
$$
\begin{array}{ccccc}
& & & (id, \frac{1}{p} \mathrm{log}_{(p)}) & \\
({\mathcal O}_F/(p^k))^{\times} & \simeq & ({\mathcal O}_F/(p))^{\times}
\times
(1+(p))/(1+(p^k)) & \overset{\sim}{\longrightarrow} &
({\mathcal O}_F/(p))^{\times} \times {\mathcal O}_F/(p^{k-1})\\
& & & &\\
\cup \mid & & \cup \mid & & \cup \mid\\
& & & (id, \frac{1}{p} \mathrm{log}_p) & \\
(\Z/p^k\Z)^{\times} &\simeq &(\Z/p\Z)^{\times} \times (1+p\Z)/(1+p^k\Z) &
\overset{\sim}{\longrightarrow} & (\Z/p\Z)^{\times} \times \Z/p^{k-1}\Z,
\end{array}
$$
and (\ref{eq:O/Zp1}).
(\ref{enum:O/Zp2}) Consider the case where $p$ splits in $F$.
Put Gal$(F/\Q)=\langle \sigma \rangle$ and $(p)={\mathfrak p}
{\mathfrak p}^{\sigma}$.
Then we have
$$
\begin{array}{rcl}
({\mathcal O}_F/(p^k))^{\times} & \simeq & ({\mathcal O}_F/(p))^{\times}
\times (1+{\mathfrak p})/(1+{\mathfrak p}^k) \times (1+{\mathfrak p}^{\sigma})
/(1+({\mathfrak p}^{\sigma})^k) \\
& &\\
& (id, \mathrm{log}_{\mathfrak p}, \mathrm{log}_{{\mathfrak p}^{\sigma}}) & \\
& \overset{\sim}{\longrightarrow} & ({\mathcal O}_F/(p))^{\times} \times
{\mathfrak p}/{\mathfrak p}^k \times {\mathfrak p}^{\sigma}/({\mathfrak p}^{\sigma}
)^k \\
& &\\
& \simeq & ({\mathcal O}_F/(p))^{\times} \times (p)/(p^k)\\
& &\\
& (id,\frac{1}{p} ) &\\
& \overset{\sim}{\longrightarrow} & ({\mathcal O}_F/(p))^{\times}
\times {\mathcal O}_F/(p^{k-1}),
\end{array}
$$
and hence
$$
\begin{array}{ccc}
({\mathcal O}_F/(p^k))^{\times} & \simeq & ({\mathcal O}_F/(p))^{\times}
\times {\mathcal O}_F/(p^{k-1}) \\
& &\\
\cup \mid & & \cup \mid \\
& &\\
(\Z/p^k\Z)^{\times} & \simeq & (\Z/p\Z)^{\times} \times \Z/p^{k-1}\Z.
\end{array}
$$
The assertion follows from these isomorphisms and (\ref{eq:O/Zp1}).
(\ref{enum:O/Zp3}) Consider the case where $p$ is ramified.
Put $(p)={\mathfrak p}^2$,
\[
\ell:=\begin{cases}
1 & \text{if}\ p\ne 2,3,\\
2 & \text{if}\ p=3,
\end{cases}
\]
and choose $\pi \in {\mathfrak p}^{\ell} \setminus {\mathfrak p}^{\ell+1}$.
We have a commutative diagram:
\begin{equation}\label{eq:O/Zp2}
\begin{CD}
1 @>>> (1+{\mathfrak p}^{\ell})/(1+{\mathfrak p}^{2k})
@>>> ({\mathcal O}_F/{\mathfrak p}^{2k})^{\times} @>>> ({\mathcal O}_F/{\mathfrak p}^{\ell})^{\times} @>>>1\\
@. @AAA @AAA @AAA \\
1 @>>> (1+p \Z)/(1+p^k\Z)
@>>> (\Z/p^k\Z)^{\times} @>>> (\Z/p\Z)^{\times} @>>>1, \\
\end{CD}
\end{equation}
where all the vertical maps are injective, and
\[
({\mathcal O}_F/{\mathfrak p}^{\ell})^{\times} /(\Z/p\Z)^{\times}
\simeq \begin{cases}
0 & \text{if}\ p\ne 2,3,
\\
\Z/p\Z & \text{if}\ p=3.
\end{cases}
\]
Define the injection $\iota :\Z/p^{k-1}\Z \to
{\mathcal O}_F/{\mathfrak p}^{2k-\ell}$
by
\[
\iota (\alpha)=\begin{cases}
\pi \alpha & \text{if}\ p\ne 2,3,\\
\alpha & \text{if}\ p=3.
\end{cases}
\]
We have
\[
\text{Coker} \ \iota \simeq \begin{cases}
\Z/p^k\Z & \text{if}\ p\ne 2,3,\\
\Z/p^{k-1}\Z & \text{if}\ p=3.
\end{cases}
\]
The assertion follows from (\ref{eq:O/Zp1}), (\ref{eq:O/Zp2})
and the following commutative diagram:
$$
\begin{array}{ccccc}
& \mathrm{log}_{\mathfrak p} & & \pi & \\
(1+{\mathfrak p}^{\ell})/
(1+{\mathfrak p}^{2k}) & \simeq & {\mathfrak p}^{\ell}/{\mathfrak p}^{2k}
& \overset{\sim}{\longleftarrow} &
{\mathcal O}_F/{\mathfrak p}^{2k-\ell}\\
& & & &\\
\uparrow & & \uparrow & & \iota \uparrow \\
& \mathrm{log}_p & & p & \\
(1+p\Z)/(1+p^k\Z) &\longrightarrow & p\Z/p^k\Z &
\longleftarrow & \Z/p^{k-1}\Z,
\end{array}
$$
where all the vertical maps are injective.
\qed
\end{proof}
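The order count behind (\ref{eq:O/Zp1}) in the inert case can be checked by brute force. The sketch below is our own illustration, not part of the text: the choice $p=3$, $k=2$, $m=2$ is an assumption with $m$ a non-residue mod $p$ (so $p$ inert) and $m\not\equiv 1 \pmod 4$ (so ${\mathcal O}_F=\Z[\sqrt m]$); it counts units of $\Z[\sqrt m]/(p^k)$, writing elements as $a+b\sqrt m$, and confirms that the index of $(\Z/p^k\Z)^{\times}$ is $p^{k-1}(p+1)$.

```python
p, k, m = 3, 2, 2     # illustrative: 2 is a non-residue mod 3, so 3 is inert
q = p ** k

# a + b*sqrt(m) mod p^k is a unit iff its norm a^2 - m*b^2 is prime to p
units = sum(1 for a in range(q) for b in range(q)
            if (a * a - m * b * b) % p != 0)

# the rational units (Z/p^k Z)^x embedded as b = 0
rational_units = sum(1 for a in range(q) if a % p != 0)

index = units // rational_units
assert index == p ** (k - 1) * (p + 1)   # here 3 * 4 = 12
```

This matches the cyclic group $\Z/p^{k-1}(p+1)\Z$ of assertion (\ref{enum:O/Zp1}); the computation only verifies the order, not cyclicity.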
\begin{cor}\label{cor:G/K}
Put $D=p^sD_0$ with $s\geq 0,\ p\nmid D_0$.
\begin{enumerate}[$(1)$]
\item Assume that $f(t)$ is irreducible over $\Q$.
We have
\[
G_{\Q}(f)/K(f,p) \simeq G_{\Q}^*(f)/K^*(f,p) \simeq F^{\times}
/\Z_{(p)}[\theta_1]^{\times} \Q^{\times}.
\]
\begin{enumerate}[$(i)$]
\item If $p$ is inert in $F$, then
\[
G_{\Q}(f)/K(f,p)
\simeq \begin{cases}
0 & \text{if}\ s=0,\\
\Z/p^{s/2-1}(p+1)\Z & \text{if}\ s\ne 0,s\equiv 0\pmod{2} \
\text{and}\ p\ne 2.
\end{cases}
\]
\label{enum:G/K1i}
\item If $p$ splits in $F$, then
\[
G_{\Q}(f)/K(f,p)
\simeq \begin{cases}
\Z & \text{if}\ s=0,\\
\Z \times \Z/p^{s/2-1}(p-1)\Z & \text{if}\ s\ne 0,s\equiv 0\pmod{2} \
\text{and}\ p\ne 2.
\end{cases}
\]
\label{enum:G/K1ii}
\item If $p$ is ramified in $F$, then
\begin{align*}
& G_{\Q}(f)/K(f,p)
\simeq \\
& \qquad \begin{cases}
\Z/2\Z & \text{if}\ s=1,\\
\Z/2p^{[s/2]}\Z & \text{if}\ s\ne 1,s\equiv 1\pmod{2} \
\text{and}\ p\ne 2,3,\\
\Z/2p^{[s/2]}\Z \ \text{or}\ \Z/2p\Z \times \Z/p^{[s/2]-1}\Z
& \text{if}\ s\ne 1,s\equiv 1\pmod{2} \
\text{and}\ p= 3.
\end{cases}
\end{align*}
\label{enum:G/K1iii}
\end{enumerate}
\label{enum:G/K1}
\item Assume that $f(t)$ is reducible over $\Q$ $($hence $s$ is even$)$.
We have
\begin{align*}
G_{\Q}(f)/K(f,p) & \simeq G_{\Q}^*(f)/K^*(f,p) \\
& \simeq
\begin{cases}
\Q^{\times}/\Z_{(p)}^{\times}\simeq \Z & \text{if}\ s=0,\\
\Q^{\times}/(1+p^{s/2}\Z_{(p)}) \simeq
\Z \times \Z/(p-1)p^{s/2-1} \Z & \text{if}\ s\ne 0\
\text{and}\ p\ne 2.
\end{cases}
\end{align*}
\label{enum:G/K2}
\end{enumerate}
\end{cor}
\begin{proof}
(\ref{enum:G/K1}) The first assertion follows from Theorems~\ref{theo:G},
\ref{theo:G*} and \ref{theo:KK*},
and the others follow from Lemmas~\ref{lem:F/ZpQ} and \ref{lem:O/Zp}.
(\ref{enum:G/K2}) We get from Theorems~\ref{theo:G},
\ref{theo:G*} and \ref{theo:KK*},
\begin{align*}
G_{\Q}(f)/K(f,p) & \simeq G_{\Q}^*(f)/K^*(f,p) \\
& \simeq
\begin{cases}
\Q^{\times}/\Z_{(p)}^{\times} \simeq \Z & \text{if}\ p\nmid D,\\
\Q^{\times}/(1+p^{s/2}\Z_{(p)}) & \text{if}\ p\mid D.\
\end{cases}
\end{align*}
Furthermore, if $p\ne 2$, then we have
\[
\Q^{\times} \simeq \Z \times \Z_{(p)}^{\times} \simeq \Z \times
(\Z/p\Z)^{\times}
\times (1+p\Z_{(p)}),
\]
and
$$
\begin{array}{ccccc}
(1+p\Z_{(p)})/(1+p^{s/2}\Z_{(p)}) & \simeq & (1+p\Z)/(1+p^{s/2}\Z)
& \overset{\sim}{\longrightarrow} & \Z/p^{s/2-1}\Z,\\
& & & \frac{1}{p} \mathrm{log}_p &
\end{array}
$$
and hence we get
\[
\Q^{\times}/(1+p^{s/2}\Z_{(p)}) \simeq
\Z \times \Z/(p-1)p^{s/2-1}\Z.
\]
\qed
\end{proof}
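The last display rests on the fact that, for odd $p$, $(1+p\Z_{(p)})/(1+p^{s/2}\Z_{(p)})$ is cyclic of order $p^{s/2-1}$, generated by the class of $1+p$. A quick numerical check (our own illustration; the values $p=5$, $k=3$ are an arbitrary choice) computes the multiplicative order of $1+p$ modulo $p^k$:

```python
# Multiplicative order of g in (Z/mZ)^x, assuming gcd(g, m) = 1.
def mult_order(g, m):
    x, n = g % m, 1
    while x != 1:
        x = x * g % m
        n += 1
    return n

p, k = 5, 3   # illustrative choice, p odd
# 1+p generates (1+pZ)/(1+p^k Z), a cyclic group of order p^{k-1}
assert mult_order(1 + p, p**k) == p**(k - 1)
```

The congruence $(1+p)^{p^{j}}\equiv 1+p^{j+1} \pmod{p^{j+2}}$ for odd $p$ is what makes the order exactly $p^{k-1}$; the $\frac{1}{p}\mathrm{log}_p$ map in the proof is the abstract form of this statement.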
\section{Group structures of $K(f,p)/G(f,p)$ and $K^*(f,p)/G^*(f,p)$}\label{sec:Kp/Gp}
In this section, we determine the structure of the quotient groups $K(f,p)/G(f,p)$ and $K^*(f,p)/G^*(f,p)$,
where $K(f,p)$ and $G(f,p)$ are defined in Definition~\ref{df:GKH}, and $K^*(f,p)$ and $G^*(f,p)$ are
defined in Definition~\ref{df:GKH*}.
By the results of this section and \S~\ref{sec:G/Kp}, we get Laxton's theorem (Theorem~\ref{theo:Laxton} in \S~\ref{sec:LG}),
and a result proved by Suwa in the case $p^2|D$.
Let $p$ be a prime number with $p\nmid Q$.
From exact sequences (\ref{eq:SubG4}) and (\ref{eq:SubG8}), we get group isomorphisms
\begin{equation}\label{eq:Kp/Gp1}
K(f,p)/G(f,p) \simeq G_{\F_p}(f), \quad K^*(f,p)/G^*(f,p) \simeq G^*_{\F_p} (f).
\end{equation}
By Lemma~\ref{lem:ImPi}, we have
\begin{equation}\label{eq:Kp/Gp2}
G_{\F_p}^*(f) \simeq G_{\F_p}(f)/ {\tiny \left\langle \left[
\begin{array}{c}
0\\
1\\
\end{array} \right] \right\rangle }.
\end{equation}
From (\ref{eq:Kp/Gp1}), (\ref{eq:Kp/Gp2}), Theorems~\ref{theo:G}, \ref{theo:G*} and Lemma~\ref{lem:orderF}, we get the following theorem.
\begin{theo}\label{theo:Gp}
\begin{enumerate}[$(1)$]
\item Assume that $f(t) \bmod{p}$ is irreducible over $\F_p$. We have
\begin{align*}
& K(f,p)/G(f,p) \simeq \F_p(\theta_1)^{\times}/\F_p^{\times} \simeq \Z/(p+1)\Z, \\
& \left[ \begin{array}{c}
w_1\\
w_0\\
\end{array} \right] \bmod{G(f,p)}
\mapsto w_1-w_0\theta_1 \bmod{\F_p^{\times}},
\end{align*}
and
\begin{align*}
& K^*(f,p)/G^*(f,p) \simeq \F_p(\theta_1)^{\times}/\F_p^{\times}\langle \theta_1 \rangle\simeq \Z/ ((p+1)/r(p)) \Z, \\
& \left[ \begin{array}{c}
w_1\\
w_0\\
\end{array} \right] \bmod{G^*(f,p)}
\mapsto w_1-w_0\theta_1 \bmod{\F_p^{\times} \langle \theta_1 \rangle}.
\end{align*}
\label{enum:Gp1}
\item Assume that $f(t) \bmod{p}$ is reducible over $\F_p$.
If $p\nmid D$, then we have
\begin{align*}
& K(f,p)/G(f,p) \simeq \F_p^{\times}\simeq \Z/(p-1)\Z, \\
& \left[ \begin{array}{c}
w_1\\
w_0\\
\end{array} \right] \bmod{G(f,p)}
\mapsto (w_1-w_0\theta_1)(w_1-w_0\theta_2)^{-1},
\end{align*}
and
\begin{align*}
& K^*(f,p)/G^*(f,p) \simeq \F_p^{\times}/\langle \theta_1 \theta_2^{-1} \rangle\simeq \Z/ ((p-1)/r(p)) \Z, \\
& \left[ \begin{array}{c}
w_1\\
w_0\\
\end{array} \right] \bmod{G^*(f,p)}
\mapsto ( w_1-w_0\theta_1)(w_1-w_0\theta_2)^{-1} \bmod{\langle \theta_1\theta_2^{-1} \rangle}.
\end{align*}
If $p|D$, then we have
\[ K(f,p)/G(f,p) \simeq \F_p, \quad \left[ \begin{array}{c}
w_1\\
w_0\\
\end{array} \right] \bmod{G(f,p)} \mapsto -w_0 (w_1-w_0 \theta)^{-1},
\]
where $\theta$ is the double root of $f(t) \bmod{p}$,
and
\[
K^*(f,p)/G^*(f,p)\simeq 0.
\]
\label{enum:Gp2}
\end{enumerate}
\end{theo}
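The orders appearing in this theorem can be probed numerically. The sketch below is our own illustration (the Fibonacci parameters $P=1$, $Q=-1$ and the prime $p=7$ are an arbitrary choice): it computes the rank $r(p)$, i.e.\ the least $n>0$ with $F_n\equiv 0\pmod{p}$, and checks that $r(p)$ divides $p+1$ in the case where $f(t)\bmod p$ is irreducible, equivalently where $D$ is a quadratic non-residue mod $p$.

```python
# Rank r(p): least n > 0 with F_n = 0 (mod p), for the Lucas sequence
# F_0 = 0, F_1 = 1, F_{n+1} = P*F_n - Q*F_{n-1}.
def rank(P, Q, p):
    a, b, n = 0, 1, 1
    while b % p != 0:
        a, b = b, (P * b - Q * a) % p
        n += 1
    return n

P, Q, p = 1, -1, 7                  # Fibonacci numbers; D = P^2 - 4Q = 5
assert pow(5, (p - 1) // 2, p) == p - 1   # 5 is a non-residue mod 7,
                                          # so f(t) mod 7 is irreducible
r = rank(P, Q, p)
assert (p + 1) % r == 0             # r(7) = 8 divides p + 1 = 8
```

This is consistent with $K^*(f,p)/G^*(f,p)$ being cyclic of order $(p+1)/r(p)$ in the irreducible case: here the quotient is trivial.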
Since the groups in (\ref{eq:Kp/Gp1}) are finite, we see that
\begin{align}
\mathrm{Tor}_{\Z}(G_{\Q}(f)/K(f,p))=H(f,p)/K(f,p), \label{eq:Kp/Gp3} \\
\mathrm{Tor}_{\Z}(G^*_{\Q}(f)/K^*(f,p))=H^*(f,p)/K^*(f,p). \notag
\end{align}
By (\ref{eq:Kp/Gp3}), Corollary~\ref{cor:G/K} and Theorem~\ref{theo:Gp}, we see that our results lead to Laxton's
theorem (Theorem~\ref{theo:Laxton}) and Suwa's theorem.
\begin{cor}\label{cor:main}
Let $p$ be a prime number with $p\nmid Q$, and $r(p)$ be the rank of the Lucas sequence ${\mathcal F}$.
\begin{enumerate}[$(1)$]
\item Assume $p\nmid D$. If $\Q(\theta_1)\ne \Q$ and $p$ is inert in $\Q(\theta_1)$,
then $G_{\Q}^*(f)=H^*(f,p)=K^*(f,p)$ and
$G^*(f)/G^*(f,p)$ is a cyclic group of order $(p+1)/r(p)$.
\label{enum:cormain1}
\item Assume $p\nmid D$. If $\Q (\theta_1)=\Q$, or $\Q(\theta_1)\ne \Q$ and
$p$ splits in $\Q(\theta_1)$, then $G_{\Q}^*(f)/H^*(f,p)$ is an infinite cyclic group,
$H^*(f,p)=K^*(f,p)$, and $H^*(f,p)/G^*(f,p)$ is a cyclic group of order $(p-1)/r(p)$.
\label{enum:cormain2}
\item If $p|D$ and $p^2\nmid D$, then $G_{\Q}^*(f)=H^*(f,p)$ and $K^*(f,p)=G^*(f,p)$.
Furthermore, if $p\ne 2$, then $G_{\Q}^*(f)/G^*(f,p)$ is a cyclic group of order two.
\label{enum:cormain3}
\item
Assume $D=p^sD_0$ with $s\geq 2$ and $p\nmid D_0$.
\begin{enumerate}[$(i)$]
\item Assume $s\equiv 1 \pmod{2}$. We have $G_{\Q} ^*(f)=H^*(f,p)$ and $K^*(f,p)=G^*(f,p)$.
Furthermore, if $p\ne 2,3 $, then $G_{\Q}^*(f)/G^*(f,p)$ is a cyclic group of order $2p^{[s/2]}$,
and if $p=3$, then $G_{\Q}^*(f)/G^*(f,p)$ is a cyclic group of order $2p^{[s/2]}$ or a direct product of
two cyclic groups of order $2p$ and $p^{[s/2]-1}$.
\label{eum:cormain4i}
\item Assume $s\equiv 0 \pmod{2}$. If $\Q(\theta_1)\ne \Q$ and $p$ is inert in $\Q(\theta_1)$, then $G_{\Q}^*(f)=H^*(f,p)$ and
$K^*(f,p)=G^*(f,p)$.
Furthermore, if $p\ne 2$, then
$G_{\Q}^*(f)/G^*(f,p)$ is a cyclic group of order $(p+1)p^{s/2-1}$.
\label{eum:cormain4ii}
\item Assume $s\equiv 0 \pmod{2}$. If $\Q(\theta_1)=\Q$, or $\Q(\theta_1)\ne \Q$ and $p$ splits in $\Q(\theta_1)$, then $K^*(f,p)=G^*(f,p)$.
Furthermore, if $p\ne 2$, then $G_{\Q}^*(f)/H^*(f,p)$ is an infinite cyclic group,
and $H^*(f,p)/K^*(f,p)$ is a cyclic group of order $(p-1)p^{s/2-1}$.
\label{eum:cormain4iii}
\end{enumerate}
\label{enum:cormain4}
\end{enumerate}
\end{cor}
\end{document}
\begin{document}
\date{}
\title{Weighted Hardy inequality with higher dimensional singularity
on the boundary}
\noindent {\footnotesize{\bf Abstract.} Let $\O$ be a smooth
bounded domain in ${\mathbb R}^N$ with $N\ge 3$ and let $\Sigma_k$ be a
closed smooth submanifold of $\partial \O$ of dimension $1\le k\le N-2$.
In this paper we study the weighted Hardy inequality with weight
function singular on $\Sigma_k$. In particular we provide
necessary and sufficient conditions for the existence of minimizers.}
\noindent{\footnotesize{{\it Key Words:} Hardy inequality,
extremals, existence, non-existence, Fermi coordinates.}}\\
\section{Introduction}\label{s:i}
Let $\O$ be a smooth bounded domain of ${\mathbb R}^N$, $N\geq2$ and let
$\Sigma_k$ be a smooth closed submanifold of $\partial\O$ with dimension
$0\leq k\leq N-1$. Here $\Sigma_0$ is a single point and
$\Sigma_{N-1}=\partial\O$. For $\l\in{\mathbb R}$, consider the problem of finding
minimizers for the quotient:
\begin{equation}
\label{eq:mpqek} \m_{\l}(\Omega,\Sigma_k):= \inf_{u\in
H^{1}_{0}(\O)} ~\frac{\displaystyle\int_{\O}|\nabla
u|^2p~dx-\l\int_{\O}\delta^{-2}|u|^2\eta~dx}
{\displaystyle\int_{\O}\delta^{-2}|u|^2q~dx}~,
\end{equation}
where $\delta(x):= \textrm{dist}(x,\Sigma_k)$ is the distance function to
$\Sigma_k$ and where the weights $p,q$ and $\eta$ satisfy
\begin{equation}\label{eq:weight} \textrm{$p,q\in C^2(\overline{\O})$,}\qquad
p,q>0\quad\textrm{ in $\overline{\O}$,}\qquad \eta>0\quad\textrm{ in
$\overline{\O}\setminus\Sigma_k$,}\qquad \textrm{ $\eta\in Lip(\overline{\O})$}
\end{equation}
and
\begin{equation}\label{eq:min-pq}
\max_{\Sigma_k}\frac{q}{p}=1,\qquad \textrm{ $\eta=0$}\qquad \textrm{ on $\Sigma_k$}.
\end{equation}
We put
\begin{equation}\label{eq:defIk}
I_{k}=\int_{\Sigma_k}\frac{d\s}{\sqrt{1-\left(q(\s)/p(\s)\right)}},\quad
1\leq k\leq N-1\quad\textrm{ and }\quad I_0=\infty.
\end{equation}
It was shown by Brezis and Marcus in \cite{BM} that there exists $\l^*$ such that
if $\l>\l^*$ then
$\m_{\l}(\Omega,\Sigma_{N-1}) <\frac{1}{4}$ and it is attained, while for
$\l\leq\l^*$, $\m_{\l}(\Omega,\Sigma_{N-1}) =\frac{1}{4}$ and it is not
achieved for any $\l<\l^*$. The critical case
${\l=\l^*}$ was studied by Brezis, Marcus and Shafrir in \cite{BMS}, where they
proved that $ \m_{\l^*}(\Omega,\Sigma_{N-1})$ admits a minimizer if and only if
$I_{N-1}<\infty$. The case where $k=0$ ($\Sigma_0$ is reduced to a point
on the boundary) was treated by the first author in \cite{Fallccm}
and the same conclusions hold true.\\
Here we obtain the following
\begin{Theorem}\label{th:mulpqe} Let $\O$ be a smooth bounded
domain of ${\mathbb R}^N$, $N\geq3$ and let $\Sigma_k\subset\partial\O$ be a closed
submanifold of dimension $k\in[1,N-2]$. Assume that the weight
functions $p,q$ and $\eta$ satisfy \eqref{eq:weight} and
\eqref{eq:min-pq}. Then, there exists
$\l^*=\l^*(p,q,\eta,\O,\Sigma_k)$ such that
$$
\begin{array}{ll}
\displaystyle \m_{\l}(\Omega,\Sigma_k)=\frac{(N-k)^2}{4},\quad\forall\l\leq \l^*,\\
\displaystyle \m_{\l}(\Omega,\Sigma_k)<\frac{(N-k)^2}{4},\quad\forall\l> \l^*.
\end{array}
$$
The infimum $\m_{\l}(\Omega,\Sigma_k)$ is attained if $\l>\l^*$ and it is not attained when $\l< \l^*$.
\end{Theorem}
Concerning the critical case we get
\begin{Theorem}\label{th:crit}
Let $\l^*$ be given by Theorem \ref{th:mulpqe} and consider $I_k$ defined in \eqref{eq:defIk}. Then
$\m_{\l^*}(\Omega,\Sigma_k)$ is achieved if and only if $I_{k}<\infty $.
\end{Theorem}
By choosing $p=q\equiv1$ and $\eta=\delta^2$, we obtain the following consequence of the above theorems.
\begin{Corollary}
Let $\O$ be a smooth bounded domain of ${\mathbb R}^N$, $N\geq3$ and
$\Sigma_k\subset\partial\O$ be a closed submanifold of dimension
$k\in\{1,\cdots,N-2\}$. For $\l\in{\mathbb R}$, put
$$
\nu_\l(\O,\Sigma_k)=\inf_{u\in
H^{1}_{0}(\O)} ~\frac{\displaystyle\int_{\O}|\nabla
u|^2~dx-\l\int_{\O}|u|^2~dx}
{\displaystyle\int_{\O}\delta^{-2}|u|^2~dx}~.
$$
Then, there exists $\bar{\l}=\bar{\l}(\O,\Sigma_k)$ such that
$$
\begin{array}{ll}
\displaystyle \nu_{\l}(\Omega,\Sigma_k)=\frac{(N-k)^2}{4},\quad\forall\l\leq \bar{\l},\\
\displaystyle \nu_{\l}(\Omega,\Sigma_k)<\frac{(N-k)^2}{4},\quad\forall\l> \bar{\l}.
\end{array}
$$
Moreover $\nu_{\l}(\Omega,\Sigma_k) $ is attained if and only if $ \l> \bar{\l}$.
\end{Corollary}
The proofs of the above theorems are mainly based on the
construction of appropriate sharp $H^1$-subsolutions and $H^1$-supersolutions for the
corresponding operator
$$\calL_\l:=-\D
-\frac{(N-k)^2}{4}q\delta^{-2}+\l\delta^{-2}\eta $$
(with $p\equiv 1$).
These super- and sub-solutions are perturbations of an approximate
``virtual'' ground-state for the Hardy constant $ \frac{(N-k)^2}{4}$
near $\Sigma_k$. For that we will consider the \textit{projection
distance} function $\tilde{\delta}$ defined near $\Sigma_k$ as
$$
\tilde \delta(x):=\sqrt{|\mbox{dist}^{\partial\O}(\overline
x,\Sigma_k)|^2+|x-\overline x|^2},
$$
where $\overline x$ is the orthogonal projection of $x$ on $\partial\O$ and $\mathrm{dist}^{\partial\O}(\cdot,\Sigma_k)$
is the geodesic distance to $\Sigma_k$ on $\partial\O$ endowed with the induced metric.
While the distances $\delta$ and $\tilde{\delta}$ are equivalent, $\D\delta$ and $\D\tilde{\delta}$
differ, and $\delta$ does not, in general, provide the right approximate solution for $k\leq N-2$.
Letting $d_{\partial\O}=\textrm{dist}(\cdot,\partial\O)$, we have
$$
\tilde \delta(x)=\sqrt{|\mbox{dist}^{\partial\O}(\overline
x,\Sigma_k)|^2+d_{\partial \O}(x)^2}.
$$
Our approximate virtual ground-state near $\Sigma_k$ then reads
\begin{equation}\label{eq:virtgs} x\mapsto d_{\partial\O}(x)\,\tilde \delta^{
\frac{k-N}{2}}(x). \end{equation} In some appropriate Fermi coordinates
${y}=(y^1,y^2,\dots, y^{N-k}, y^{N-k+1},\dots, y^N)=(\tilde{y},\bar{y})\in{\mathbb R}^{N}$ with $\tilde{y}=(y^1,y^2,\dots, y^{N-k})\in{\mathbb R}^{N-k}$ (see the next section for
a precise definition), the function in \eqref{eq:virtgs} becomes
$$
{y}\mapsto y^1|\tilde{y}|^{\frac{k-N}{2}},
$$
which is the ``virtual'' ground-state for the Hardy constant $ \frac{(N-k)^2}{4}$
in the flat case $\Sigma_k= {\mathbb R}^k$ and $\O= {\mathbb R}^N$. We refer to Section \ref{s:pn} for more details about the construction of the super- and sub-solutions.\\
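For the flat case one can check this directly; the computation below is a standard verification, not carried out in the text. Write $u(y)=y^1|\tilde y|^{\alpha}$ with $\alpha=\frac{k-N}{2}$, and note that $u$ depends only on $\tilde y\in{\mathbb R}^{N-k}$. Then

```latex
\begin{align*}
\Delta u &= \underbrace{(\Delta y^1)}_{=0}|\tilde y|^{\alpha}
   + 2\,\nabla y^1\cdot\nabla |\tilde y|^{\alpha}
   + y^1\,\Delta |\tilde y|^{\alpha}\\
 &= 2\alpha\, y^1|\tilde y|^{\alpha-2}
   + \alpha\bigl(\alpha+N-k-2\bigr)\,y^1|\tilde y|^{\alpha-2}
 = \alpha(\alpha+N-k)\,|\tilde y|^{-2}\,u
 = -\frac{(N-k)^2}{4}\,|\tilde y|^{-2}\,u,
\end{align*}
```

so $-\Delta u = \frac{(N-k)^2}{4}|\tilde y|^{-2}u$ away from $\{\tilde y=0\}$, as expected for the Hardy constant $\frac{(N-k)^2}{4}$.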
The proof of the existence part in {Theorem} \ref{th:crit} is inspired by \cite{BMS}. It amounts to obtaining a uniform control
of a specific minimizing sequence for $ \m_{\l^*}(\Omega,\Sigma_k) $ near $\Sigma_k$ via the $H^1$-supersolution constructed.\\
The existence and non-existence of extremals for
\eqref{eq:mpqek} and related problems were studied in
\cite{AS,CaMuPRSE,CaMuUMI,C,Fall,FaMu,FaMu1,NaC,Na,PT} and some
references therein. We would like to mention that some of the results in this paper might be
of interest in the study of semilinear equations with a Hardy potential singular
at a submanifold of the boundary. We refer to \cite{Fall-ne-sl, BMR1, BMR2},
where existence and nonexistence for semilinear problems were studied via the method
of super/sub-solutions.
\section{Preliminaries and Notations}\label{s:pn}
In this section we collect some notations and conventions we are
going to use throughout the paper.
Let ${\mathcal U}$ be an open subset of ${\mathbb R}^N$, $N\geq 3$, with boundary
$\mathcal{M}:=\partial{\mathcal U}$ a smooth closed hypersurface of ${{\mathbb R}^N}$. Assume that
$\mathcal{M}$ contains a smooth closed submanifold $\Sigma_k$ of dimension
$1\le k\le N-2$. In the following, for $x\in{\mathbb R}^N$, we let $d(x)$ be
the distance function to $\mathcal{M}$ and $\delta(x)$ the distance
function to $\Sigma_k$.
We denote by $N_\mathcal{M}$ the unit normal vector field of $\mathcal{M}$ pointing into ${\mathcal U}$.\\
Given $P\in\Sigma_k$, the tangent
space $T_P \mathcal{M}$ of $\mathcal{M}$ at $P$ splits as
$$
T_P \mathcal{M}=T_P \Sigma_k\oplus N_P \Sigma_k,
$$
where $T_P\Sigma_k$ is the tangent space of $\Sigma_k$ and $N_P\Sigma_k$ stands for the normal space of $\Sigma_k$ in $\mathcal{M}$ at $P$.
We assume that these subspaces are spanned by the bases $\big(E_a\big)_{a=N-k+1,\cdots,N}$ and $\big(E_i\big)_{i=2,\cdots,N-k} $, respectively.
We will assume that $N_\mathcal{M}(P)=E_1$.
A neighborhood of $P$ in $\Sigma_k$ can be parameterized via the map
$$
\bar y\mapsto f^P(\bar y)=\textrm{Exp}^{\Sigma_k}_P\Big( \sum_{a=N-k+1}^{N}y^a E_a\Big),
$$
where $\bar{y}=(y^{N-k+1},\cdots,y^N)$ and where $\textrm{Exp}_P^{\Sigma_k}$
is the exponential map at
$P$ in $\Sigma_k$ endowed with
the metric induced by $\mathcal{M}$. Next we extend $(E_i)_{i=2,\cdots,N-k}$ to an orthonormal frame $(X_i)_{i=2,\cdots,N-k}$ in a neighborhood of $P$.
We can therefore define the parameterization of a neighborhood of $P$ in $\mathcal{M}$ via the mapping
$$
(\breve{y},\bar y)\mapsto h^P_{\mathcal{M}}(\breve{y},\bar y):=\textrm{Exp}^{\mathcal{M}}_{f^P(\bar
y)}\left(\sum_{i=2}^{N-k} y^iX_i\right),
$$
with $ \breve{y}=(y^{2},\cdots,y^{N-k})$,
where $\textrm{Exp}_Q^\mathcal{M}$ is the exponential map at $Q$ in $\mathcal{M}$ endowed with
the metric induced by ${\mathbb R}^N$.
We now have a parameterization of a neighborhood of $P$ in ${\mathbb R}^N$, defined via the above {Fermi coordinates} by the map
$$
y=(y^1,\breve{y},\bar y)\mapsto F^P_{\mathcal{M}}(y^1,\breve{y},\bar y)=h^P_{\mathcal{M}}(\breve{y},\bar y)+y^1 N_\mathcal{M}(h^P_{\mathcal{M}}(\breve{y},\bar y)).
$$
Next we denote by $g$ the metric induced by $F^P_{\mathcal{M}} $ whose components are defined by
$$g_{\a\b}(y)={\langle}\partial_\alpha F^P_{\mathcal{M}}(y),\partial_\beta F^P_{\mathcal{M}}(y){\rangle}.$$
Then we have the following expansions (see for instance \cite{FaMah})
\begin{equation}\label{eq:metexp}
\begin{array}{lll}
g_{11}(y)=1\\
g_{1\b}(y)=0,\quad\quad\quad\quad\quad\quad\textrm{ for } \b=2,\cdots,N\\
g_{\a\b}(y)=\delta_{\a\b}+\calO(|\tilde{y}|),\quad\textrm{ for } \a,\b=2,\cdots,N,
\end{array}
\end{equation}
where $\tilde{y}=(y^1,\breve{y})$ and $\calO(r^m)$ denotes a smooth function in the variable $y$ which is uniformly bounded by
a constant (depending only on $\mathcal{M}$ and $\Sigma_k$) times $r^m$.
In accordance with the above coordinates, we will consider the ``half''-geodesic neighborhood contained in ${\mathcal U}$ around
$\Sigma_k$ of radius $\rho$,
\begin{equation}\label{eq:geodtub}
{\mathcal U}_{\rho}(\Sigma_k) := \{ x \in {\mathcal U}: \quad \tilde{\delta}(x)<\rho \},
\end{equation}
where $\tilde \delta$ is the projection distance function given by
$$
\tilde \delta(x):=\sqrt{|\mbox{dist}^{\mathcal{M}}(\overline
x,\Sigma_k)|^2+|x-\overline x|^2},
$$
where $\overline x$ is the orthogonal projection of $x$ on $\mathcal{M}$ and $\mathrm{dist}^{\mathcal{M}}(\cdot,\Sigma_k)$
is the geodesic distance to $\Sigma_k$ on $\mathcal{M}$ with the induced metric.
Observe that
\begin{equation}\label{eq:tidFptiy}
\tilde \delta(F^P_\mathcal{M}(y))=|\tilde y|,
\end{equation}
where $\tilde y=(y^1,\breve{y})$.
We also
define $\sigma(\overline x)$ to be the orthogonal projection of $\overline x$ on $\Sigma_k$ within $\mathcal{M}$.
Letting
$$
\hat \delta (\overline x):=\mbox{dist}^{\mathcal{M}}(\overline x,\Sigma_k),
$$
one has
$$
\overline x=\textrm{Exp}_{\sigma(\overline x)}^\mathcal{M}(\hat\delta \,\n\hat\delta )\quad \hbox{or
equivalently }\quad \sigma(\overline x)=\textrm{Exp}_{\overline x}^\mathcal{M}(-\hat\delta \,\n\hat\delta ).
$$
Next we observe that
\begin{equation}\label{eq:td-hd}
\tilde{\delta }(x)=\sqrt{\hat{\delta }^2(\bar{x})+d^2(x)}.
\end{equation}
In addition it can be easily checked via the implicit function theorem that there exists a positive constant
$\b_0=\b_0(\Sigma_k,\O)$ such that $\tilde{\delta }\in C^\infty({\mathcal U}_{\b_0}(\Sigma_k))$.
It is clear that for
$\rho$ sufficiently small, there exists a finite number of Lipschitz
open sets $(T_i)_{1\le i\le N_0}$ such that
$$
T_i\cap T_j=\emptyset \quad \hbox{for }\,i\ne j\quad \hbox{and}\quad
{\mathcal U}_\rho(\Sigma_k)=\bigcup_{i=1}^{N_0}\overline{ T_i}.
$$
We may assume that each $T_i$ is chosen, using the above coordinates, so that
$$
T_i=F^{p_i}_{\mathcal{M}}(B^{N-k}_+(0,\rho)\times D_i)\quad\hbox{with }\; p_i\in \Sigma_k,
$$
where the $D_i$'s are Lipschitz disjoint open sets of ${\mathbb R}^k$ such that
$$
\bigcup_{i=1}^{N_0} \overline{f^{p_i} (D_i)}=\Sigma_k.
$$
In the above setting we have
\begin{Lemma} \label{lemddelta} As $\tilde{\delta }\to0$, the following expansions hold
\begin{enumerate}
\item $\delta ^2=\tilde{\delta }^2(1+O(\tilde{\delta }))$,
\item $\nabla \tilde{\delta }\cdot\nabla d=\displaystyle\frac{d}{\tilde{\delta }}$,
\item $|\n\tilde{\delta }|=1+O(\tilde{\delta }),$
\item $\Delta \tilde{\delta }=\frac{N-k-1}{\tilde{\delta}}+O(1)$,
\end{enumerate}
where $O(r^m)$ is a function for which there exists a constant $C=C(\mathcal{M},\Sigma_k)$ such that
$$
|O(r^m)|\leq C r^m.
$$
\end{Lemma}
\noindent{{\bf Proof. }}
\begin{enumerate}
\item Let $P\in \Sigma_k$. With an abuse of notation, we write $x(y)= F^P_\mathcal{M}(y)$ and we set
$$
\vartheta( y):=\frac12\delta ^2 (x({y})).
$$
The function $\vartheta$ is
smooth in a small neighborhood of the origin in ${\mathbb R}^{N}$ and Taylor
expansion yields
\begin{eqnarray}
\vartheta( y)&=&\vartheta(0,\bar{y})+\nabla\vartheta(0,\bar{y})[\tilde y]+\frac12\nabla^2\vartheta(0,\bar{y})[\tilde y,\tilde y]+\calO(\|\tilde y\|^3)\nonumber\\
&=&\label{eq:vartzyb}\frac12\nabla^2\vartheta(0,\bar{y})[\tilde y,\tilde y]+\calO(\|\tilde y\|^3) .
\end{eqnarray}
Here we have used the fact that $x(0,\bar{y} )\in \Sigma_k$, so that $ \delta (x(0,\bar{y}))=0$ and, since $\delta ^2$ attains its minimum on $\Sigma_k$, also $\nabla\vartheta(0,\bar{y})=0$.
We write
$$
\nabla^2\vartheta(0,\bar{y})[\tilde y,\tilde
y]=\sum_{i,l=1}^{N-k}\Lambda_{il}y^iy^l,
$$
with
\begin{eqnarray*}
\Lambda_{il} &:=&\frac{\partial^2 \vartheta}{\partial y^i\partial y^l}/_{ \tilde{y}=0}\\
&=& \frac{\partial}{\partial y^l}\bigg(\frac{\partial }{\partial x^j} \Big(\frac12 \delta ^2(x)\Big)\frac{\partial x^j}{\partial y^i} \bigg)/_{
\tilde{y}=0}\\
&=&\frac{\partial^2}{\partial x^j\partial x^s}\Big(\frac12
\delta ^2\Big)(x)\frac{\partial {x^j}}{\partial y^i}\frac{\partial x^s}{\partial y^l}/_{
\tilde{y}=0}+\frac{\partial }{\partial x^j}\Big(\frac12\delta ^2\Big)(x)\frac{\partial^2x^j}{\partial y^i\partial y^l}/_{
\tilde{y}=0}.
\end{eqnarray*}
Now using the fact that
$$
\frac{\partial x^s}{\partial y^l}/_{ \tilde{y}=0}=g_{ls}=\delta _{ls}\quad
\textrm{and}\quad\frac{\partial }{\partial x^j}(\delta ^2)(x)/_{
\tilde{y}=0}=0,
$$
we obtain
\begin{eqnarray*}
\Lambda_{il} y^i y^l&=&y^i y^s\,\frac{\partial^2}{\partial x^i\partial x^s}\Big(\frac12
\delta ^2\Big)(x)/_{ \tilde{y}=0} \\
&=& |\tilde y|^2,
\end{eqnarray*}
where we have used the fact that the matrix $\left(\frac{\partial^2}{\partial x^i\partial x^s}\Big(\frac12
\delta ^2\Big)(x)/_{ \tilde{y}=0} \right)_{1\leq i,s\leq N}$ is the matrix of the orthogonal projection onto the normal space of $T_{f^P(\bar{y})}\Sigma_k$.
Hence using \eqref{eq:vartzyb}, we get
$$
\delta ^2 (x({y}))=|\tilde y|^2 +\calO(|\tilde y|^3).
$$
This together with \eqref{eq:tidFptiy} proves the first expansion.
\item Thanks to \eqref{eq:tidFptiy} and \eqref{eq:metexp}, we infer that
$$
\nabla \tilde{\delta }\cdot\nabla d(x(y))= \frac{\partial \tilde{\delta }( x(y))}{\partial y^1}=\frac{y^1}{|\tilde{y}|}=\frac{d(x(y))}{\tilde{\delta }(x(y))},
$$
as desired.
\item We observe that
$$
\frac{\partial \tilde{\delta }}{\partial x^\t}\frac{ \partial \tilde{\delta }}{\partial x^\t} (x(y)) =g^{\tau \a}(y)g^{\tau \b}(y)\frac{\partial \tilde{\delta }(x(y))}{\partial y^\a}\frac{\partial \tilde{\delta }(x(y))}{\partial y^\b},
$$
where $(g^{\a\b})_{\a,\b=1,\dots,N} $ is the inverse of the matrix $(g_{\a\b})_{\a,\b=1,\dots,N} $.
Therefore using \eqref{eq:tidFptiy} and \eqref{eq:metexp}, we get the result.
\item Finally, using the expansion of the Laplace--Beltrami operator $\D_g$ (see Lemma 3.3 in \cite{mm}) applied to \eqref{eq:tidFptiy}, we get
the last estimate.
\end{enumerate}
{$\square$}\goodbreak
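In the flat model, where $\mathcal{M}={\mathbb R}^{N-1}\times\{0\}$ and the Fermi coordinates are Euclidean, $\tilde\delta$ reduces to $|\tilde y|$ and $d$ to $y^1$, and items 2--4 of Lemma \ref{lemddelta} hold without error terms. The following symbolic sanity check (with the illustrative sample codimension $N-k=3$, an assumption of the sketch) is only an illustration and not part of the proof.

```python
import sympy as sp

# Flat-model check of items 2-4 of Lemma lemddelta: in R^{N-k} with
# coordinates tilde_y = (y1,...,ym), take tilde_delta = |tilde_y|, d = y1.
m = 3  # sample codimension N-k (illustrative assumption)
ys = sp.symbols('y1:%d' % (m + 1), real=True)
td = sp.sqrt(sum(y**2 for y in ys))  # the projection distance tilde_delta
d = ys[0]                            # the boundary distance d

grad = lambda f: [sp.diff(f, y) for y in ys]
dot = lambda u, v: sum(a*b for a, b in zip(u, v))

# item 2: grad(tilde_delta) . grad(d) = d / tilde_delta
assert sp.simplify(dot(grad(td), grad(d)) - d/td) == 0
# item 3: |grad(tilde_delta)| = 1 (exactly, in the flat model)
assert sp.simplify(dot(grad(td), grad(td)) - 1) == 0
# item 4: Laplacian(tilde_delta) = (m-1)/tilde_delta, with m = N-k
lap = sum(sp.diff(td, y, 2) for y in ys)
assert sp.simplify(lap - (m - 1)/td) == 0
print("flat-model identities verified")
```

In the curved setting the metric expansion \eqref{eq:metexp} perturbs each identity by the stated $O(\tilde\delta)$ and $O(1)$ terms.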
Throughout the remainder of this section -- and only this section -- let $q:\overline{{\mathcal U}} \to {\mathbb R}$
be such that \begin{equation}\label{eq:q} q\in C^2(\overline{{\mathcal U}}),\quad\textrm{ and
}\quad q\leq 1\quad\textrm{ on } \Sigma_k. \end{equation} For $M,a\in{\mathbb R}$, we
consider the function \begin{equation}\label{eq:pert-gst}
W_{a,M,q}(x)=X_a(\tilde{\delta}(x))\,e^{Md(x)}\,d(x)\,\tilde{\delta}(x)^{\alpha(x)},
\end{equation} where
$$
X_a(t)=(-\log(t))^a\quad \textrm{ for }\, 0<t<1 $$
and
$$
\alpha(x)=\frac{k-N}{2}+\frac{N-k}{2}\sqrt{1-q(\s(\bar{x}))+\tilde{\delta }(x)}.
$$
In the above setting, the following useful result holds.
\begin{Lemma}\label{LapFinalExp}
As $\delta \to 0$, we have
\begin{eqnarray*}
\Delta W_{a,M,q}&=& - \frac{(N-k)^2}{4}\,q\,\delta ^{-2} \,W_{a,M,q}-{2\,a\,\sqrt{\tilde\alpha}}\,X_{-1}(\delta )\,\delta ^{-2}\,W_{a,M,q}
\\
&+& {a(a-1)} \,X_{-2}(\delta )\,\delta ^{-2}\,W_{a,M,q}+\frac{h+2M}{d}\,W_{a,M,q}+O(|\log(\delta )|\,\delta ^{-\frac32})\,W_{a,M,q},\nonumber
\end{eqnarray*}
where $\tilde{\alpha}(x)=\frac{(N-k)^2}{4}\left(1- q(\sigma (\overline x))+\tilde{\delta} (x)\right) $ and $h=\Delta d$. Here the lower order term satisfies
$$
|O(r)|\leq C |r|,
$$
where $C$ is a positive constant only depending on $a,M,\Sigma_k,{\mathcal U}$ and $\|q\|_{C^2({\mathcal U})}$.
\end{Lemma}
\noindent{{\bf Proof. }}
We put $s=\frac{(N-k)^2}{4} $.
Let $w=\tilde{\delta}(x)^{\alpha(x)} $; then the following formula can be easily verified:
\begin{equation}\label{eq:1}
\Delta w=w\bigg( \Delta \log(w)+|\nabla\log(w)|^2 \bigg).
\end{equation}
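The identity \eqref{eq:1} holds for any smooth positive function. As a quick symbolic sanity check (not part of the proof), one can verify it with sympy for an arbitrary sample function of two variables:

```python
import sympy as sp

# Symbolic check of the identity (eq:1):  Lap(w) = w( Lap(log w) + |grad(log w)|^2 ),
# here for a sample positive function of two variables (not the w of the proof).
x1, x2 = sp.symbols('x1 x2', real=True)
w = (x1**2 + x2**2 + 1)**3  # any smooth positive function works

def laplacian(f):
    return sp.diff(f, x1, 2) + sp.diff(f, x2, 2)

logw = sp.log(w)
lhs = laplacian(w)
rhs = w * (laplacian(logw) + sp.diff(logw, x1)**2 + sp.diff(logw, x2)**2)

assert sp.simplify(lhs - rhs) == 0
print("identity verified")
```

The identity follows from expanding $\Delta w = \Delta e^{\log w}$ by the chain rule, which is how it is used repeatedly below.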
Since
$$
\log(w)=\alpha\log(\tilde{\delta}),
$$
we get
\begin{equation}\label{eq:2}
\Delta \log(w)=\Delta \alpha\log(\tilde{\delta})+2\nabla\alpha\cdot \nabla
(\log(\tilde{\delta}))+\alpha\Delta \log(\tilde{\delta}).
\end{equation}
We have
\begin{equation}\label{eq:3}
\D\alpha=\D\sqrt{\tilde \alpha}=\sqrt{\tilde
\alpha}\,\left(\frac12 \D\log(\tilde \alpha) +\frac14|\nabla
\log(\tilde \alpha)|^2 \right),
\end{equation}
$$
\nabla\log(\tilde\alpha)=\frac{\nabla\tilde\alpha}{\tilde\alpha}=\frac{-s\nabla(q\circ\sigma)+s\nabla\tilde{\delta}}{\tilde\a}
$$
and using the formula \eqref{eq:1}, we obtain
\begin{eqnarray*}
\D\log(\tilde\alpha)&=&\frac{\D\tilde\alpha}{\tilde\alpha} -\frac{|\nabla\tilde\alpha|^2}{\tilde\alpha^2}\\
&=& \frac{-s\D(q\circ\sigma)+s\D\tilde{\delta}}{\tilde\alpha} -\frac{s^2|\nabla(q\circ\sigma)|^2+s^2|\nabla\tilde{\delta}|^2}
{\tilde\alpha^2}+2s^2\frac{\nabla(q\circ\sigma)\cdot\nabla\tilde{\delta}}{\tilde\alpha^2}.
\end{eqnarray*}
Putting the above in \eqref{eq:3}, we deduce that
\begin{equation}\label{eq:4}
\D\alpha =\frac{1}{2\sqrt{\tilde\alpha}} \bigg( -s\Delta (q\circ\sigma)+s\D\tilde{\delta}-
\frac12\frac{s^2|\nabla(q\circ\sigma)|^2+s^2|\nabla\tilde{\delta}|^2-2s^2\nabla(q\circ\sigma)\cdot\nabla\tilde{\delta}}{\tilde\alpha}\bigg).
\end{equation}
Using Lemma \ref{lemddelta} and
the fact that $q$ is in $C^2(\overline{{\mathcal U}})$,
together with \eqref{eq:4} we get
\begin{equation}\label{eq:5}
\D\alpha= O({\tilde{\delta}^{-\frac32}}).
\end{equation}
On the other hand
$$
\nabla
\alpha=\nabla\sqrt{\tilde\alpha}=\frac12\frac{\nabla\tilde\alpha}{\sqrt{\tilde\alpha}}=-\frac{s}{2\sqrt{\tilde\alpha}}\nabla(q\circ\sigma)+
\frac{s}{2}\frac{\nabla\tilde{\delta}}{\sqrt{\tilde\alpha}},
$$
so that
$$
\nabla \alpha\cdot \nabla
\tilde{\delta}=-\frac{s}{2\sqrt{\tilde\alpha}}\nabla(q\circ\sigma)\cdot
\nabla \tilde{\delta}+
\frac{s}{2}\frac{|\nabla\tilde{\delta}|^2}{\sqrt{\tilde\alpha}}=O(\tilde{\delta }^{-\frac12}),
$$
from which we deduce that
\begin{equation}\label{eq:6}
\nabla\alpha\cdot \nabla\log(\tilde{\delta}) = \frac{1}{\tilde{\delta}} \nabla\alpha\cdot \nabla\tilde{\delta}
=
O(\tilde{\delta }^{-\frac32}).
\end{equation}
By Lemma \ref{lemddelta} we have that
$$
\alpha\D\log(\tilde{\delta})=\alpha\,\frac{N-k-2}{\tilde{\delta}^2}\,(1+O(\tilde{\delta})).
$$
Taking the above estimate together with \eqref{eq:6} and \eqref{eq:5} back into \eqref{eq:2}, we get
\begin{equation}\label{eq:7}
\D\log(w) = \alpha\,\frac{N-k-2}{\tilde{\delta}^2}\,(1+O(\tilde{\delta}))
+O(|\log(\tilde{\delta })|\tilde{\delta }^{-\frac32}).
\end{equation}
We also have
$$
\nabla(\log(w))=\nabla(\alpha \log(\tilde{\delta}))=\alpha
\frac{\nabla\tilde{\delta}}{\tilde{\delta}}+\log(\tilde{\delta})\nabla \alpha
$$
and thus
$$
|\nabla(\log(w))|^2=\frac{\alpha^2}{\tilde{\delta}^2}+\frac{2\alpha\log(\tilde{\delta})}{\tilde{\delta}}\,\nabla\tilde{\delta}\cdot\nabla
\alpha+|\log(\tilde{\delta})|^2|\nabla \alpha|^2=\frac{\alpha^2}{\tilde{\delta}^2}+ O(|\log(\tilde{\delta })|\tilde{\delta }^{-\frac32}).
$$
Putting this together with \eqref{eq:7} in \eqref{eq:1}, we conclude that
\begin{equation}\label{eq:8}
\frac{ \Delta w }{w}=
\alpha\,\frac{N-k-2}{\tilde{\delta}^2}+\frac{\alpha^2}{\tilde{\delta}^2}+O(|\log(\tilde{\delta})|\,\tilde{\delta}^{-\frac32}).
\end{equation}
Now we define the function
$$
v(x):=d(x)\,w(x),
$$
where we recall that $d$ is the distance function to the boundary of ${\mathcal U}$.
It is clear that
\begin{equation}\label{eq:9}
\Delta v= w\Delta d+d\Delta w+2\nabla d\cdot \nabla w.
\end{equation}
Notice that
$$
\nabla w=w\,\nabla
\log(w)=w\,\left(\log(\tilde{\delta})\nabla\alpha+\alpha\frac{\nabla
\tilde{\delta}}{\tilde{\delta}}\right)
$$
and so
\begin{equation}\label{eq:10}
\nabla d\cdot\nabla w=w\,\left(\log(\tilde{\delta})\nabla d
\cdot\nabla\alpha+\frac{\alpha}{\tilde{\delta}}\nabla d\cdot\nabla
\tilde{\delta}\right).
\end{equation}
Recall the second assertion of Lemma \ref{lemddelta}, which we rewrite as
\begin{equation}\label{eq:11}
\nabla d\cdot\nabla \tilde{\delta}=\frac{d}{\tilde{\delta}}.
\end{equation}
Therefore
\begin{equation}\label{eq:12}
\nabla d \cdot\nabla\alpha=\nabla
d\cdot\left(-\frac{s}{2\sqrt{\tilde
\alpha}}\nabla(q\circ\sigma)+\frac{s}{2}\frac{\nabla\tilde{\delta}}{\sqrt{\tilde
\alpha}} \right)=\frac{s}{2\sqrt{\tilde
\alpha}}\frac{d}{\tilde{\delta}}-\frac{s}{2\sqrt{\tilde
\alpha}}\nabla d\cdot\nabla(q\circ\sigma).
\end{equation}
Notice that if $x$ is in a neighborhood of some point $P\in \Sigma_k$ one has
$$
\nabla d\cdot\nabla(q\circ\sigma)(x)=\frac{\partial}{\partial
y^1}q(\sigma(\overline x))=\frac{\partial}{\partial y^1}q( f^P(\overline y))=0.
$$
This with \eqref{eq:12} and \eqref{eq:11} in \eqref{eq:10} gives
\begin{eqnarray}\label{eq:13}
\nabla d\cdot\nabla w&=&w\,\left(O(\tilde{\delta}^{-\frac32}|\log(\tilde{\delta})|)\,d+\frac{\alpha}{\tilde{\delta}^2}\,
d \right)\nonumber \\
&=& v\,\left(O(\tilde{\delta}^{-\frac32}|\log(\tilde{\delta})|)+\frac{\alpha}{\tilde{\delta}^2}\right).
\end{eqnarray}
From \eqref{eq:8}, \eqref{eq:9} and \eqref{eq:13} (recalling the expression of $\a$ above), we immediately get
\begin{eqnarray}\label{eq:14}
\Delta v&=&\left(
\alpha\,\frac{N-k}{\tilde{\delta}^2}+\frac{\alpha^2}{\tilde{\delta}^2}\right)\,v+O(|\log(\tilde{\delta})|\,\tilde{\delta}^{-\frac32})\,v+
\frac{h}{d}\,v \nonumber\\
&=&\left(- \frac{(N-k)^2}{4}
\frac{q(x)}{\tilde{\delta}^2}+O(|\log(\tilde{\delta})|\,\tilde{\delta}^{-\frac32})\right)\,v+
\frac{h}{d}\,v,
\end{eqnarray}
where $h=\Delta d$. Here we have used the fact that $|q(x)-q(\s(\bar{x}))|\leq C \tilde{\delta }(x)$ for $x$ in a neighborhood of $\Sigma_k$.\\
Recall that
$$
W_{a,M,q}(x)=X_a(\tilde{\delta}(x))\,e^{Md(x)}\,v(x), \quad \hbox{ with }\quad
X_a(\tilde{\delta}(x)):=(-\log(\tilde{\delta}(x)))^a,
$$
where $M$ and $a$ are two real numbers. We have
\begin{eqnarray*}
\Delta W_{a,M,q} = X_a(\tilde{\delta})\,\Delta (e^{Md}\,v)+2\nabla X_a(\tilde{\delta})\cdot\nabla (e^{Md}\,v)+e^{Md}\,v\,\Delta X_a(\tilde{\delta})
\end{eqnarray*}
and thus
\begin{equation}\label{eq:15}
\begin{array}{lll}
\Delta W_{a,M,q}
&= &X_a(\tilde{\delta})e^{Md}\,\D
v+X_a(\tilde{\delta}) \Delta (e^{Md})\, v+2X_a(\tilde{\delta})\nabla v\cdot \nabla(e^{Md})\\
&\,\,&+\,2\nabla X_a(\tilde{\delta})\cdot\left( v\,\nabla (e^{Md})+e^{Md}\nabla v\right)+e^{Md}\,v\,\D
X_a(\tilde{\delta}).
\end{array}
\end{equation}
We shall estimate the above expression term by term.\\
First, from \eqref{eq:14} we have
\begin{equation}\label{eq:141}
X_a(\tilde{\delta})e^{Md}\,\Delta v= - \frac{(N-k)^2}{4}
\frac{q}{\tilde{\delta}^2}\, W_{a,M,q} +
\frac{h}{d}\, W_{a,M,q} +O(|\log(\tilde{\delta})|\,\tilde{\delta}^{-\frac32})\, W_{a,M,q}.
\end{equation}
It is plain that
\begin{equation}\label{eq:17}
X_a(\tilde{\delta})\,\Delta (e^{Md})\,v=O(1)
\,W_{a,M,q}.
\end{equation}
It is clear that
\begin{equation}\label{eq:nv}
\nabla v= w\,\nabla d+d\,\nabla w=w\,\nabla
d+d\,\left(\log(\tilde{\delta})\,\nabla\alpha+\alpha \frac{\nabla
\tilde{\delta}}{\tilde{\delta}}\right)\, w.
\end{equation}
From this and \eqref{eq:11} we get
\begin{eqnarray}\label{eq:16}
X_a(\tilde{\delta})\,\nabla v\cdot \nabla(e^{Md}) &=& M\,X_a(\tilde{\delta})\,e^{Md}\,w\left\{ |\nabla d|^2+d\, \left(\log(\tilde{\delta})\,\nabla d\cdot \nabla\alpha+
\frac{\alpha }{\tilde{\delta}}
\nabla\tilde{\delta}\cdot\nabla d\right)\right\}\nonumber \\
&=&M\,X_a(\tilde{\delta})\,e^{Md}\,w\left\{
1+O(|\log(\tilde{\delta})|\,\tilde{\delta}^{-\frac12})\,d+O(\tilde{\delta}^{-1})\,d\right\}\nonumber\\
&=& W_{a,M,q} \,\left\{ \frac{M}{d}+O(|\log(\tilde{\delta})|\,\tilde{\delta}^{-1})\right\}.
\end{eqnarray}
Observe that
$$
\nabla(X_a(\tilde{\delta}))=-a\,\frac{\nabla \tilde{\delta}}{\tilde{\delta}}
X_{a-1}(\tilde{\delta}).
$$
This with \eqref{eq:nv} and \eqref{eq:11} implies that
\begin{equation}\label{eq:18}
\nabla X_a(\tilde{\delta})\cdot\left( v\,\nabla (e^{Md})+e^{Md}\nabla
v\right)=
-\frac{a(\alpha+1)}{\tilde{\delta}^2}\,X_{-1}\,W_{a,M,q}+O(|\log(\tilde{\delta})|\tilde{\delta}^{-\frac32})\,W_{a,M,q}.
\end{equation}
By Lemma \ref{lemddelta}, we have
$$
\D(X_a(\tilde{\delta}))=\frac{a}{\tilde{\delta}^2}X_{a-1}(\tilde{\delta})\{2+k-N+O(\tilde{\delta})\}+\frac{a(a-1)}{\tilde{\delta}^2}X_{a-2}(\tilde{\delta}).
$$
Therefore we obtain
\begin{equation}\label{eq:19}
e^{Md}v \D(X_a(\tilde{\delta}))=\frac{a}{\tilde{\delta}^2} \{2+k-N+O(\tilde{\delta})\}\,X_{-1}\,W_{a,M,q}+ \frac{a(a-1)}{\tilde{\delta}^2}X_{-2} \,W_{a,M,q}.
\end{equation}
Collecting \eqref{eq:141}, \eqref{eq:17}, \eqref{eq:16}, \eqref{eq:18} and \eqref{eq:19} in the expression \eqref{eq:15},
we get,
as $\tilde{\delta }\to 0$,
\begin{eqnarray*}
\Delta W_{a,M,q}&=& - \frac{(N-k)^2}{4}\,q\,\tilde{\delta}^{-2} \,W_{a,M,q}-2\,a\,\sqrt{\tilde{\alpha}}\,X_{-1}(\tilde{\delta })\,\tilde{\delta}^{-2}\,W_{a,M,q}
\\
&+& {a(a-1)} \,X_{-2}(\tilde{\delta })\,\tilde{\delta}^{-2}\,W_{a,M,q}+\frac{h+2M}{d}\,W_{a,M,q}
+O(|\log(\tilde{\delta})|\,\tilde{\delta}^{-\frac32})\,W_{a,M,q}.\nonumber
\end{eqnarray*}
The conclusion of the lemma follows at once from the first assertion of Lemma \ref{lemddelta}.
{$\square$}\goodbreak
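The formula for $\D(X_a(\tilde\delta))$ used in the proof reduces, in the flat model where $\tilde\delta$ is the Euclidean distance $r$ to ${\mathbb R}^k$ in ${\mathbb R}^N$, to a radial computation in ${\mathbb R}^{N-k}$ with no error term. The sketch below checks this symbolically for the illustrative sample values $n=N-k=3$ and $a=-1$ (both assumptions of the sketch, not data from the proof).

```python
import sympy as sp

# Flat-model check: for X_a(r) = (-log r)^a in R^n (0 < r < 1), the radial
# Laplacian  X_a'' + (n-1)/r X_a'  equals
#   (a/r^2) X_{a-1}(r) (2 - n) + (a(a-1)/r^2) X_{a-2}(r),
# matching the 2+k-N coefficient of the lemma with n = N-k.
r = sp.symbols('r', positive=True)
n, a = 3, -1  # illustrative sample values

X = lambda b: (-sp.log(r))**b
lap = sp.diff(X(a), r, 2) + (n - 1)/r * sp.diff(X(a), r)
claimed = a/r**2 * X(a - 1)*(2 - n) + a*(a - 1)/r**2 * X(a - 2)

assert sp.simplify(lap - claimed) == 0
print("radial Laplacian of X_a verified for n=3, a=-1")
```

In the curved setting, Lemma \ref{lemddelta} converts $\Delta\tilde\delta$ and $|\n\tilde\delta|$ into their flat values up to the $O(\tilde\delta)$ correction, which produces the error term in the braces.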
\subsection{Construction of a subsolution}
For $\l\in{\mathbb R}$ and $\eta\in Lip(\overline{{\mathcal U}})$ with $\eta=0$ on $\Sigma_k$, we define the operator
\begin{equation}\label{eq:calL_l}
\mathcal{L}_\l:=
-\Delta -\frac{(N-k)^2}{4}\,q\,\delta ^{-2}+\l\, \eta\,\delta ^{-2},
\end{equation}
where $q$ is as in \eqref{eq:q}.
We have the following lemma.
\begin{Lemma} \label{le:lowerbound}
There exist two positive constants $M_0,\beta_0$ such that for all
$\beta\in\,(0,\beta_0)$ the function
$V_\e:=W_{-1,M_0,q}+W_{0,M_0,q-\e}$ (see \eqref{eq:pert-gst})
satisfies
\begin{equation}\label{eq:subsolution}
\mathcal{L}_\lambda V_\e\le 0 \quad \textrm{ in } {\mathcal U}_\b,\quad\hbox{ for all }\; \e\in[0,1).
\end{equation}
Moreover $V_\e\in H^1({\mathcal U}_\beta)$ for any $\e\in(0,1)$ and in addition
\begin{equation}\label{eq:Iq}
\int_{{\mathcal U}_\b}\frac{V_{0}^2}{\delta ^2}\,dx\geq C \int_{\Sigma_k} \frac{1}{\sqrt{1-q(\sigma)}}\,d\sigma.
\end{equation}
\end{Lemma}
\noindent{{\bf Proof. }} Let $\beta_1$ be a small positive real number so that $d$ is
smooth in ${\mathcal U}_{\b_1}$. We choose
$$
M_0= \max_{x\in \overline{\mathcal U}_{\b_1}}|h(x)|+1.
$$
Using this and Lemma \ref{LapFinalExp}, for some $\b\in(0,\b_1)$, we have
\begin{equation}\label{eq:LaM0}
\mathcal{L}_\lambda W_{-1,M_0,q} \le \left(-2\delta ^{-2} \,X_{-2}+C|\log(\delta )|\,\delta ^{-\frac32}+|\l|\eta \delta ^{-2}\right)\,W_{-1,M_0,q}\quad
\textrm{ in } {\mathcal U}_\b. \end{equation}
Using the
fact that the function $\eta$ vanishes on
$\Sigma_k$ (this implies in particular that $|\eta|\le C \delta$ in
${\mathcal U}_\b$), we have
$$
\mathcal{L}_\l(W_{-1,M_0,q})\le -\delta ^{-2} \,X_{-2}\,W_{-1,M_0,q}= -\delta ^{-2} \,X_{-3}\,W_{0,M_0,q}\quad \textrm{ in }{\mathcal U}_\b,
$$
for $\b$ sufficiently small. Again by Lemma \ref{LapFinalExp}, and
similar arguments as above, we have \begin{equation}\label{eq:LaMqep}
\mathcal{L}_\lambda W_{0,M_0,q-\e} \le C|\log(\delta )|\,\delta ^{-\frac32}\,W_{0,M_0,q-\e}\leq C|\log(\delta )|\,\delta ^{-\frac32}\,W_{0,M_0,q}\quad\textrm{ in }{\mathcal U}_{\b},
\end{equation}
for any $\e\in [0,1)$. Therefore we get
$$
\mathcal{L}_\lambda \left(W_{-1,M_0,q}+W_{0,M_0,q-\e} \right)\leq 0\quad \textrm{ in }{\mathcal U}_{\b},
$$
if $\b$ is small. This proves \eqref{eq:subsolution}.\\
The fact that
$W_{a,M_0,q}\in H^1({\mathcal U}_\beta)$ for any $a<-\frac{1}{2}$, and that $ W_{0,M_0,q-\e}\in H^1({\mathcal U}_\beta) $ for $\e>0$, can be easily checked using polar coordinates
(by assuming without any loss of generality that $M_0=0$ and $q\equiv 1$); we therefore skip it. \\
We now prove the last statement of the lemma.
Using Lemma \ref{lemddelta}, we have
\begin{eqnarray*}
\int_{{\mathcal U}_\b}\frac{V_{0}^2}{\delta ^2}\,dx
&\ge& \int_{{\mathcal U}_\b}\frac{W_{0,M_0,q}^2}{\delta ^2}\,dx\\
&\ge &C\,\int_{{\mathcal U}_\b(\Sigma_k)}d^2(x)\tilde{\delta}(x)^{2\a(x)-2}\,dx\\
&\ge& C\sum_{i=1}^{N_0}\,\int_{T_i}d^2(x)\tilde{\delta}(x)^{2\a(x)-2}\,dx\\
&=&C\sum_{i=1}^{N_0}\,\int_{B^{N-k}_+(0,\b)\times
D_i}(y^1)^2\,|\tilde y|^{2\a(F^{p_i}_\mathcal{M}(y))-2}\,
|{\rm Jac}(F^{p_i}_\mathcal{M})|(y)\,dy\\
&\ge& C\,\sum_{i=1}^{N_0}\,\int_{B^{N-k}_+(0,\b)\times
D_i}(y^1)^2\,|\tilde y|^{k-N-2+(N-k)\sqrt{1-q(f^{p_i}(\bar
y))}}\, \,|\tilde y|^{-\sqrt{|\tilde{y}|}}\,dy.
\end{eqnarray*}
Here we used the fact that $|{\rm Jac}(F^{p_i}_\mathcal{M})|(y)\ge C$. Observe that
$$
|\tilde y|^{-\sqrt{|\tilde{y}|}}\ge C >0
\quad \hbox{as }\, |\tilde y| \to 0.
$$
Using polar coordinates, the above integral becomes
\begin{eqnarray*}
\int_{{\mathcal U}_\b}\frac{V_{0}^2}{\delta ^2}\,dx &\ge&
C\,\sum_{i=1}^{N_0}\int_{D_i}\int_{S^{N-k-1}_+}\left(\frac{y^1}{|\tilde
y|}\right)^2\,d
\theta\int_0^{\b}r^{-1+(N-k)\sqrt{1-q(f^{p_i}(\bar
y))}}\,dr\,d\bar y
\\
&\ge & C\,\sum_{i=1}^{N_0}\int_{D_i}\int_0^{r_{i_1}}r^{-1+(N-k)\sqrt{1-q(f^{p_i}(\bar
y))}}\,dr\,|\textrm{Jac}(f^{p_i})|(\bar y)\,d\bar y.
\end{eqnarray*}
We therefore obtain
\begin{eqnarray*}
\int_{{\mathcal U}_\b}\frac{V_{0}^2}{\delta ^2}\,dx
&\geq & C\,\int_{\Sigma_k}\int_0^{\b}r^{-1+(N-k)\sqrt{1-q(\s)}}\,dr\,d\s\\
&\geq & C\,\int_{\Sigma_k}\frac{1}{\sqrt{1-q(\s)}}\,d\s.
\end{eqnarray*}
This concludes the proof of the lemma.
{$\square$}\goodbreak
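The $1/\sqrt{1-q(\sigma)}$ weight in the last step comes from the elementary integral $\int_0^{\b}r^{-1+\gamma}\,dr=\b^{\gamma}/\gamma$ with $\gamma=(N-k)\sqrt{1-q(\sigma)}$, together with $\b^{\gamma}\ge\b^{N-k}$ for $\b<1$. A symbolic spot check of this computation (the values $N-k=3$, $\b=1/2$, and $t=\sqrt{1-q(\sigma)}=1/4$ are illustrative assumptions, not part of the proof):

```python
import sympy as sp

# The 1/sqrt(1-q) weight comes from  int_0^beta r^(gamma-1) dr = beta^gamma/gamma
# with gamma = (N-k)*sqrt(1-q(sigma)) > 0.
r, t = sp.symbols('r t', positive=True)  # t plays the role of sqrt(1-q(sigma))
nk = 3                                   # sample codimension N-k (assumption)
gamma = nk * t

# The antiderivative of r^(gamma-1) is r^gamma/gamma ...
assert sp.simplify(sp.diff(r**gamma / gamma, r) - r**(gamma - 1)) == 0
# ... and it vanishes as r -> 0+ (checked at the sample value t = 1/4):
t_val = sp.Rational(1, 4)
assert sp.limit((r**gamma / gamma).subs(t, t_val), r, 0, '+') == 0

# Hence the integral equals beta^gamma/gamma >= (beta^(N-k)/(N-k)) / t
# for 0 < beta < 1, since gamma <= N-k; numeric spot check at beta = 1/2:
beta_val = sp.Rational(1, 2)
integral = (beta_val**gamma / gamma).subs(t, t_val)
assert float(integral) > float((beta_val**nk / nk) / t_val)
print("Hardy weight computation verified")
```

The same integral, with the reverse inequality, is what makes condition \eqref{eq:Iql} sufficient for the supersolution of the next subsection to lie in $H^1$.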
\subsection{Construction of a supersolution}
In this subsection we provide a supersolution for the operator $\calL_\l$ defined in \eqref{eq:calL_l}. We prove
\begin{Lemma} \label{le:upperbound}
There exist constants $\beta_0>0$,
$M_{1}<0,$ $M_0>0$ (the constant $M_0$ is as in Lemma \ref{le:lowerbound}) such that for
all $\beta\in\,(0,\beta_0)$ the function $U:=W_{0,M_1,q}-W_{-1,M_0,q}$ is positive in ${\mathcal U}_\b$ and
satisfies
\begin{equation}\label{eq:supsolution}
\mathcal{L}_\lambda U \geq 0 \quad \textrm{ in } {\mathcal U}_\b.
\end{equation}
Moreover $U\in H^1({\mathcal U}_\beta)$
provided
\begin{equation}\label{eq:Iql}
\int_{\Sigma_k} \frac{1}{\sqrt{1-q(\sigma)}}\,d\sigma <+\infty.
\end{equation}
\end{Lemma}
\noindent{{\bf Proof. }}
We consider $\b_1$ as in the beginning of the proof of Lemma \ref{le:lowerbound} and we define
\begin{equation}\label{eq:M1}
M_1=-\frac12\,\max_{x\in\overline{\mathcal U}_{\beta_1}}|h(x)|-1.
\end{equation}
Since $$ U(x)=(e^{M_1 d(x)}-e^{M_0d(x)}X_{-1}(\tilde{\delta }(x)))d(x)\tilde{\delta }(x)^{\a(x)},$$ it follows that $U>0$ in ${\mathcal U}_\b$ for $\b>0$ sufficiently small.
By \eqref{eq:M1} and Lemma \ref{LapFinalExp}, we get
\begin{eqnarray*}
\mathcal{L}_\lambda W_{0,M_1,q} \ge \left(-C|\log(\delta )|\,\delta ^{-\frac32}-|\l|\eta \delta ^{-2}\right)\,W_{0,M_1,q}.
\end{eqnarray*}
Using \eqref{eq:LaM0} we have
$$
\mathcal{L}_\lambda (- W_{-1,M_0,q})\geq
\left(2\delta ^{-2}X_{-2}-C|\log(\delta )|\,\delta ^{-\frac32}-|\l|\eta \delta ^{-2}\right)\, W_{-1,M_0,q}.
$$
Taking the sum of the two above inequalities, and using that $|\eta|\leq C\delta $ in $ {\mathcal U}_{\b}$, we obtain
$$
\mathcal{L}_\lambda U\geq0 \quad\textrm{ in }{\mathcal U}_\b,
$$
which readily gives \eqref{eq:supsolution}.\\
Our next task is to prove that $U\in H^1({\mathcal U}_\b)$ provided \eqref{eq:Iql} holds; to do so, it is enough to show that $W_{0,M_1,q} \in H^1({\mathcal U}_\b)$ under the same assumption.\\
We argue as in the proof of Lemma \ref{le:lowerbound}. We have
\begin{eqnarray*}
\int_{{\mathcal U}_\b}|\nabla W_{0,M_1,q}|^2 &\le & C\int_{{\mathcal U}_\b}d^2(x)\tilde{\delta}(x)^{2\a(x)-2}\,dx\\
&\leq& C\sum_{i=1}^{N_0}\int_{B^{N-k}_+(0,\b)\times
D_i}d^2(F^{p_i}_\mathcal{M}(y))\tilde{\delta}(F^{p_i}_\mathcal{M}(y))^{2\a(F^{p_i}_\mathcal{M}(y))-2}
|{\rm
Jac}(F^{p_i}_\mathcal{M})|(y)dy\\
&\leq&C\sum_{i=1}^{N_0}\,\int_{B^{N-k}_+(0,\b)\times
D_i}(y^1)^2\,|\tilde y|^{2\a(F^{p_i}_\mathcal{M}(y))-2}\,
|{\rm Jac}(F^{p_i}_\mathcal{M})|(y)\,dy\\
&\le& C\,\sum_{i=1}^{N_0}\,\int_{B^{N-k}_+(0,\b)\times
D_i}(y^1)^2\,|\tilde y|^{k-N-2+(N-k)\sqrt{1-q(f^{p_i}(\bar
y))}}\, \,|\tilde y|^{-\sqrt{|\tilde{y}|}}\,dy.
\end{eqnarray*}
Here we used the fact that $|{\rm Jac}(F^{p_i}_\mathcal{M})|(y)\le C$. Note that
$$
|\tilde y|^{-\sqrt{|\tilde{y}|}}\le C
\quad \hbox{as }\, |\tilde y|\to 0.
$$
Using polar coordinates, it follows that
\begin{eqnarray*}
\int_{{\mathcal U}_\b}|\nabla W_{0,M_1,q}|^2
&\le& C\,\sum_{i=1}^{N_0}\int_{D_i}\int_{S^{N-k-1}_+}\left(\frac{y^1}{|\tilde
y|}\right)^2\,d
\theta\int_0^{\b}r^{-1+(N-k)\sqrt{1-q(f^{p_i}(\bar
y))}}\,dr\,d\bar y\\
&\le&
C\, \sum_{i=1}^{N_0}\,\int_{D_i}\frac{1}{\sqrt{1-q(f^{p_i}(\bar
y))}}\,d\bar y.
\end{eqnarray*}
Recalling that $|{\rm Jac}(f^{p_i})|(\bar y)=1+O(|\bar y|)$, we deduce that
\begin{eqnarray*}
\sum_{i=1}^{N_0}\,\int_{D_i}\frac{1}{\sqrt{1-q(f^{p_i}(\bar
y))}}\,d\bar y&\le&
C\sum_{i=1}^{N_0}\,\int_{D_i}\frac{1}{\sqrt{1-q(f^{p_i}(\bar
y))}}\,|{\rm Jac}(f^{p_i})|(\bar y)\,d\bar
y\\
&=&C\int_{\Sigma_k}\frac{1}{\sqrt{1-q(\sigma)}}\,d\sigma.
\end{eqnarray*}
Therefore
\begin{eqnarray*}
\int_{{\mathcal U}_\b}|\nabla W_{0,M_1,q}|^2\,dx
&\le&C\int_{\Sigma_k}\frac{1}{\sqrt{1-q(\sigma)}}\,d\sigma
\end{eqnarray*}
and the lemma follows at once.
\section{Existence of $\l^*$}\label{s:localhardy}
We start with the following local improved Hardy inequality.
\begin{Lemma}\label{lem:loc-hardy}
Let $\O$ be a smooth domain and assume that
$\partial\O$ contains a smooth closed submanifold $\Sigma_k$ of
dimension $1\le k\le N-2$. Assume that $p,q$ and $\eta$ satisfy \eqref{eq:weight} and \eqref{eq:min-pq}.
Then there exist constants $\beta_0>0$ and $c>0$
depending only on $\O, \Sigma_k,q,\eta$ and $p$ such that for all $\beta\in(0,\beta_0)$
the inequality
$$
\int_{\O_\beta}p|\n
u|^2\,dx-\frac{(N-k)^2}{4}\int_{\O_\beta}q\frac{|u|^2}{\delta ^{2}}\,dx\geq
c\int_{\O_\beta}\frac{|u|^2}{ \delta ^{2}|\log(\delta )|^{2} }\,dx
$$
holds for all $ u\in H^1_0({\O_\beta})$.
\end{Lemma}
\noindent{{\bf Proof. }}
We use the notations of Section \ref{s:pn} with ${\mathcal U}=
\O$ and $\mathcal{M}=\partial \O$.\\
Fix $\b_1>0$ small and
\begin{equation}\label{eq:M2fi}
M_2=-\frac12\,\max_{x\in\overline\O_{\beta_1}}(|h(x)|+ |\nabla p\cdot \nabla d |)-1.
\end{equation}
Since $\frac{p}{q}\in C^1(\overline{\O})$, there exists $C>0$ such that
\begin{equation}\label{eq:Lippovq}
\left|\frac{p(x)}{q(x)} - \frac{p(\s(\bar{x}))}{q(\s(\bar{x}))}\right|\leq C\delta (x)\quad\forall x\in \O_{\b},
\end{equation}
for small $\b>0$.
Hence by \eqref{eq:min-pq} there exists a constant $C'>0$ such that
\begin{equation}\label{eq:pgeqq}
p(x)\geq q(x)- C'\delta (x)\quad \forall x\in \O_{\b}.
\end{equation}
Consider $ W_{\frac{1}{2},M_2,1}$ (as in Lemma \ref{LapFinalExp} with $q\equiv1$).
For all $\b>0 $ small, we set
\begin{equation}\label{eq:tiw}
\tilde{w}(x)=W_{\frac{1}{2},M_2,1}(x),\quad \forall x\in\O_\b.
\end{equation}
Notice that $\textrm{div}(p\nabla \tilde{w})=p\Delta \tilde{w}+\nabla p\cdot\n\tilde{w}$.
By Lemma \ref{LapFinalExp}, we have
$$
- \frac{\textrm{div} (p\nabla \tilde{w})}{\tilde{w}}\geq
\frac{(N-k)^2}{4}\,p\delta ^{-2}+\frac{p}{4}\delta ^{-2}X_{-2}(\delta )
+O({|\log(\delta )|\delta ^{-\frac32}})\quad\textrm{ in }\O_\b.
$$
This together with \eqref{eq:pgeqq} yields
$$
- \frac{\textrm{div} (p\nabla \tilde{w})}{\tilde{w}}\geq
\frac{(N-k)^2}{4}\,q\delta ^{-2}+\frac{c_0}{4}\delta ^{-2}X_{-2}(\delta )
+O({|\log(\delta )|\delta ^{-\frac32}})\quad\textrm{ in }\O_\b,
$$
with $c_0=\min_{\overline{\O_{\b_1}}}p>0$.
Therefore
\begin{equation}\label{eq:dwow} - \frac{\textrm{div} (p\nabla \tilde{w})}{\tilde{w}}\geq
\frac{(N-k)^2}{4}\,q\delta ^{-2}+c\,\delta ^{-2}X_{-2}(\delta )\quad\textrm{ in
}\O_{\b},
\end{equation}
for some positive constant $c$ depending only on $\O, \Sigma_k,q,\eta$ and $p$.
Let $u\in C^\infty_c(\O_\b)$ and put
$\psi=\frac{u}{\tilde{w}}$. Then one has $|\n
u|^2=|\tilde{w}\n\psi|^2+|\psi\nabla \tilde{w}|^2+\n(\psi^2)\cdot \tilde{w} \n
\tilde{w}$, and therefore $|\nabla u|^2p=|\tilde{w}\n\psi|^2p+p\n
\tilde{w}\cdot\n(\tilde{w}\psi^2)$. Integrating by parts, we get
$$
\int_{\O_\b}|\n
u|^2p\,dx=\int_{\O_\b}|\tilde{w}\n\psi|^2p\,dx+\int_{\O_\b}\left(-
\frac{\textrm{div}(p\nabla \tilde{w})}{\tilde{w}}\right)u^2\,dx.
$$
Putting \eqref{eq:dwow} in the above equality, we get the result.
We next prove the following result.
\begin{Lemma}\label{lem:Jl1} Let $\O$ be a smooth bounded domain and assume that
$\partial\O$ contains a smooth closed submanifold $\Sigma_k$ of
dimension $1\le k\le N-2$. Assume that \eqref{eq:weight} and \eqref{eq:min-pq} hold.
Then there exists $\l^*=\l^*(\O,\Sigma_k,p,q,\eta)\in{\mathbb R}$ such
that
$$
\begin{array}{cc}
\displaystyle \mu_{\l}(\O,\Sigma_k)=\frac{(N-k)^2}{4}, &
\quad\forall
\l\leq\l^*,
\\
\displaystyle \mu_{\l}(\O,\Sigma_k)<\frac{(N-k)^2}{4}, &
\quad\forall \l>\l^*.
\end{array}
$$
\end{Lemma}
\noindent{{\bf Proof. }} We divide the proof into two steps.\\
\noindent \textbf{Step 1:} We claim that
\begin{equation}\label{eq:supmulambda}\sup\limits_{\l\in{\mathbb R}}\mu_\l(\O,\Sigma_k)\leq
\frac{(N-k)^2}{4}. \end{equation} Indeed, we know that
$\nu_0({\mathbb R}^N_+,{\mathbb R}^k)=\frac{(N-k)^2}{4}$, see \cite{FTT} for instance. Given
$\tau>0$, we let $u_\tau\in C^\infty_c({\mathbb R}^N_+)$ be such that
\begin{equation}\label{eq:estutau}
\int_{{\mathbb R}^N_+}|\n
u_\tau|^2\,dy\leq\left(\frac{(N-k)^2}{4}+\tau\right)\int_{{\mathbb R}^N_+}|\tilde
y|^{-2}u_\tau^2\,dy.
\end{equation}
By \eqref{eq:min-pq}, we can let $\sigma_0\in\Sigma_k$ be such that
$$
q(\sigma_0)=p(\sigma_0).
$$
Now, given $r>0$, we let $\rho_r>0$ be such that for all $x\in
B(\sigma_0,\rho_r)\cap \Omega$
\begin{equation}\label{eq:estq}
p(x)\le (1+r)q(\sigma_0),\quad q(x)\ge (1-r)q(\sigma_0)\quad\textrm{ and }\quad \eta(x)\le r.
\end{equation}
We choose Fermi coordinates near $\sigma_0\in\Sigma_k$ given by the map $F^{\sigma_0}_{\partial\O}$ (as in Section \ref{s:pn}) and we choose
$\e_0>0$ small such that, for all $\e\in(0,\e_0)$,
$$
\Lambda_{\e,\rho,r,\tau}:=F^{\sigma_0}_{\partial\O}(\e\,{\rm
Supp}(u_\tau))\subset\,B(\sigma_0,\rho_r)\cap \Omega
$$
and we define the following test function
$$
v(x)=\e^{\frac{2-N}{2}}u_\tau\left(\e^{-1}(F^{\sigma_0}_{\partial\O})^{-1}(x)\right),
\quad x\in \Lambda_{\e,\rho,r,\tau}.
$$
Clearly, for every $\e\in(0,\e_0)$, we have that $v\in
C^\infty_c(\O)$ and thus by a change of variables, \eqref{eq:estq}
and Lemma \ref{lemddelta}, we have
\begin{eqnarray*}
\mu_\l(\O,\Sigma_k)&\leq&\frac{\displaystyle \int_{\O}p|\nabla v|^2\,dx
+\l\int_{\O}\delta^{-2}\eta v^2\,dx}{\displaystyle
\int_{\O}q(x)\,\delta^{-2}\,v^2\,dx}\\
&\leq&\frac{\displaystyle (1+r)\int_{\Lambda_{\e,\rho,r,\tau}}|\n
v|^2\,dx
}{(1-r)\,\displaystyle
\int_{\Lambda_{\e,\rho,r,\tau}}\delta^{-2}\,v^2\,dx}+\frac{r|\l|}{(1-r)q(\sigma_0)} \\
&\leq&\frac{\displaystyle (1+r)\int_{\Lambda_{\e,\rho,r,\tau}}|\n
v|^2\,dx
}{(1-c r)\,\displaystyle
\int_{\Lambda_{\e,\rho,r,\tau}}\tilde{\delta}^{-2}\,v^2\,dx}+\frac{r|\l|}{(1-r)q(\sigma_0)} \\
&\leq&\frac{(1+r)\,\e^{2-N}\displaystyle
\int_{{\mathbb R}^N_+}\e^{-2}(g^\e)^{ij}\partial_i
u_\tau\,\partial_j u_\tau\,\sqrt{|g^\e|}(y)\,dy
}{(1-cr)\,\displaystyle
\int_{{\mathbb R}^N_+}\e^{2-N}\,|\e\tilde y|^{-2}\,u_\tau^2\,\sqrt{|g^\e|}(y)\,dy}+\frac{cr}{1-r},
\end{eqnarray*}
where $g^\e$ is the scaled metric with components $g^\e_{\a\b}(y)=\e^{-2}\langle\partial_\a F^{\sigma_0}_{\partial\O}(\e y), \partial_\b F^{\sigma_0}_{\partial\O}(\e y)\rangle$
for $\a,\b=1,\dots,N$,
and where we have used the fact that $\tilde{\delta}(F^{\sigma_0}_{\partial\O}(\e y))=|\e\tilde y|$ for
every $y$ in the support of $u_\tau$.
Since the scaled metric $g^\e$ expands as $g^\e=I+O(\e)$ on the support of $u_\tau$, we deduce that
\begin{eqnarray*}
\mu_\l(\O,\Sigma_k) &\le& \frac{1+r}{1-c r}\,\frac{1+c\e}{1-c\e}\,\, \frac{\displaystyle
\int_{{\mathbb R}^N_+}|\nabla u_\tau|^2\,dy }{\displaystyle
\int_{{\mathbb R}^N_+}|\tilde y|^{-2}\,u_\tau^2\,dy}+\frac{cr}{1-r},
\end{eqnarray*}
where $c$ is a positive constant depending only on $\O,p,q,\eta$ and $\Sigma_k$. Hence by \eqref{eq:estutau} we conclude that
\begin{eqnarray*}
\mu_\l(\O,\Sigma_k)
&\le& \frac{1+r}{1-c r}\,\frac{1+c\e}{1-c\e}\, \left( \frac{(N-k)^2}{4}+\tau
\right)+ \frac{cr}{1-r}.
\end{eqnarray*}
Letting $\e\to0$, then $r\to0$ and finally $\tau\to0$, the claim follows.\\
\textbf{Step 2:} We claim that there exists $\tilde{\l}\in{\mathbb R}$ such that
$\mu_{\tilde{\l}}(\O,\Sigma_k)\geq\frac{(N-k)^2}{4}$.\\
Thanks to Lemma \ref{lem:loc-hardy}, the proof uses a standard argument relying on cut-off functions
and integration by parts (see \cite{BM}), which yields
$$
\frac{(N-k)^2}{4}\int_{\O}\delta^{-2}u^2 q\,dx\leq \int_{\O}|\nabla u|^2 p\,dx+C\int_{\O}\delta^{-2}u^2 \eta\,dx\quad\forall u\in C^\infty_c(\O),
$$
for some constant $C>0$. We skip the details. The claim now follows by choosing $\tilde{\l}=-C$.\\
Finally, noticing that $\mu_\l(\O,\Sigma_k)$ is
decreasing in $\l$, we can set
\begin{equation}\label{eq:lsdef} \l^*:=\sup\left\{{\l\in{\mathbb R}}\,:\, \mu_\l(\O,\Sigma_k)=
{\frac{(N-k)^2}{4}}\right\}
\end{equation}
so that $\mu_\l(\O,\Sigma_k)<\frac{(N-k)^2}{4}$ for all $\l>\l^*$.
{$\square$}\goodbreak
\section{Non-existence result}\label{s:ne}
\begin{Lemma}\label{lem:Opm}
Let $\O$ be a smooth bounded domain of ${\mathbb R}^N$, $N\geq 3$, and let $\Sigma_k$ be a
smooth closed submanifold of $\partial\O$ of dimension $k$ with $1\le k\le
N-2$. Then, there exist bounded smooth domains $\O^\pm$ such that $\O^+\subset \O\subset\O^-$
and
$$
\partial{\O^+}\cap \partial\O=\partial{\O^-}\cap \partial\O = \Sigma_k.
$$
\end{Lemma}
\noindent{{\bf Proof. }}
Consider the maps
$$
x\mapsto g^\pm(x):=d_{\partial\O}(x)\pm\frac12\,\delta^2(x),
$$
where $d_{\partial\O}$ is the distance function to $\partial\O$.
For some $\b_1>0$ small, $g^\pm$ are smooth in $\O_{\b_1}$ and, since $|\nabla g^\pm|\geq C>0$ on $\Sigma_k$, the implicit function theorem implies that the sets
$$
\{x\in \O_{\b}\,:\,g^\pm=0 \}
$$
are smooth $(N-1)$-dimensional submanifolds of ${\mathbb R}^N$, for some $\b>0$ small. In addition, by construction, they can be taken to be part of the boundaries of
smooth bounded domains $\O^\pm$ with $\O^+\subset \O\subset\O^-$ and such that
$$
\partial{\O^+}\cap \partial \O=\partial{\O^-}\cap \partial\O = \Sigma_k.
$$
The proof then follows at once.
{$\square$}\goodbreak
Now, we prove the following non-existence result.
\begin{Theorem}\label{th:ne}
Let $\O$ be a smooth bounded domain of ${\mathbb R}^N$ and let $\Sigma_k$ be a
smooth closed submanifold of $\partial\O$ of dimension $k$ with $1\le k\le
N-2$, and let $\l\geq0$. Assume that $p,q$ and $\eta$ satisfy \eqref{eq:weight} and \eqref{eq:min-pq}. Suppose that $u\in H^1_0(\O)\cap C(\O)$ is
a non-negative function
satisfying
\begin{equation}\label{eq:ustf}
-\textrm{div}(p \nabla u)-\frac{(N-k)^2}{4}q\,\delta^{-2}u\geq-\lambda \eta \delta^{-2} u \quad\textrm{in }\O.
\end{equation}
If $\int_{\Sigma_k}\frac1{\sqrt{1-p(\s)/q(\s)}}d\s=+\infty$ then
$u\equiv0$.
\end{Theorem}
\noindent{{\bf Proof. }}
We first assume that $p\equiv1$.
Let $\O^+$ be the set given by Lemma \ref{lem:Opm}. We
will use the notations of Section \ref{s:pn} with ${\mathcal U}=
\O^+$ and $\mathcal{M}=\partial \O^+$. For $\b>0$ small we define
$$\O^+_{\b} := \{ x \in \O^+: \quad
\delta(x)<\b \}.$$
We suppose by contradiction that $u$ does not vanish identically near $\Sigma_k$ and satisfies
\eqref{eq:ustf}, so that $u>0$ in $\O_{\b}$ by the maximum principle, for some $\b>0$ small.\\
Consider the subsolution $V_\e$ defined in Lemma \ref{le:lowerbound}, which satisfies
\begin{equation}\label{eq:lwaneg}
\calL_\l\,V_\e\leq 0\quad\textrm{ in
}\O^+_{\b},\quad\forall \e\in(0,1).
\end{equation}
Notice that $\overline{\partial\O^+_{\b}\cap
\O^+}\subset \O$; thus, for $\b>0$ small, we can choose $R>0$ (independent of $\e$) so
that
$$
R\,V_\e\leq R\,V_0\leq u\quad\textrm{ on }
\overline{\partial\O^+_{\b}\cap \O^+ }
\quad\forall \e\in(0,1).
$$
Again by Lemma \ref{le:lowerbound}, setting $v_\e=R\, {V_\e}-u$, it
turns out that $v^+_\e=\max(v_\e,0)\in
H^1_0(\O^+_{\b})$ because $V_\e=0$ on $\partial
\O^+_{\b}\setminus
\overline{\partial\O^+_{\b}\cap
\O^+}$. Moreover by \eqref{eq:ustf} and
\eqref{eq:lwaneg},
$$
\calL_\l\,v_\e\leq 0\quad\textrm{ in
}\O^+_{\b},\quad\forall \e\in(0,1).
$$
Multiplying the above inequality by $v^+_\e$ and integrating by parts
yields
$$
\int_{\O^+_{\b}}|\n
v^+_\e|^2\,dx-\frac{(N-k)^2}{4}\int_{\O^+_{\b}}\delta^{-2}q|v^+_\e|^2\,dx+
\l\int_{\O^+_{\b}}\eta \delta^{-2}|v^+_\e|^2\,dx
\leq0.
$$
But then Lemma \ref{lem:loc-hardy} implies that $v^+_\e=0$ in
$\O^+_{\b}$ provided $\b$ is small enough, because $|\eta|\leq C\delta$ near $\Sigma_k$. Therefore $u\geq R\, {V_\e}$ for every $\e\in(0,1)$. In
particular $u\geq R\,V_0$. Hence we obtain from Lemma \ref{le:lowerbound} that
$$
\infty>\int_{\O^+_{\b}}\frac{u^2}{\delta^{2}}\geq R^2 \int_{\O^+_{\b}}\frac{V_0^2}{\delta^{2}}\geq\int_{\Sigma_k}\frac1{\sqrt{1-1/q(\s)}}d\s,
$$
which leads to a contradiction. We deduce that $u\equiv0$ in $\O^+_{\b}$. Thus by
the maximum principle $u\equiv0$ in $\O$.\\
For the general case $p\neq 1$, we argue as in \cite{BMS} by setting
\begin{equation}\label{eq:transf}
\tilde{u}=\sqrt{p}\, u.
\end{equation}
This function satisfies
$$
-\Delta \tilde{u}-\frac{(N-k)^2}{4}\frac{q}{p}\,\delta^{-2}\tilde{u}\geq-\lambda \frac{\eta}{p} \delta^{-2}\tilde{u} +\left(-\frac{\Delta p}{2 p} +\frac{|\nabla p|^2}{4 p^2} \right) \tilde{u}\quad\textrm{in }\O.
$$
Hence, since $p\in C^2(\overline{\O})$ and $p>0$ in $\overline{\O}$, we reach the same conclusions as in the case $p\equiv 1$ with $q$ replaced by $q/p$.
{$\square$}\goodbreak
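For completeness, the computation behind the substitution \eqref{eq:transf} can be sketched as follows; it only uses the product rule and the assumptions $p\in C^2(\overline{\O})$, $p>0$:

```latex
% Sketch: with \tilde{u}=\sqrt{p}\,u, i.e. u=p^{-1/2}\tilde{u}, one has
%   p\nabla u = \sqrt{p}\,\nabla\tilde{u}-\tfrac12\,p^{-1/2}\,\tilde{u}\,\nabla p,
% and taking the divergence of both sides gives
\textrm{div}(p\nabla u)
  =\sqrt{p}\,\Delta\tilde{u}
  +\sqrt{p}\left(\frac{|\nabla p|^2}{4p^2}-\frac{\Delta p}{2p}\right)\tilde{u}.
% Dividing \eqref{eq:ustf} by \sqrt{p}>0 then yields the displayed
% differential inequality satisfied by \tilde{u}.
```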
\section{Existence of minimizers for $\m_{\l}(\Omega,\Sigma_k)$}
\begin{Theorem}\label{th:exitslesls}
Let $\O$ be a smooth bounded domain of ${\mathbb R}^N$ and let $\Sigma_k$ be a
smooth closed submanifold of $\partial\O$ of dimension $k$ with $1\le k\le
N-2$. Assume that $p,q$ and $\eta$ satisfy \eqref{eq:weight} and \eqref{eq:min-pq}. Then $\m_{\l}(\Omega,\Sigma_k)$ is achieved for every $\l<\l^*$.
\end{Theorem}
\noindent{{\bf Proof. }}
The proof follows the same argument as in \cite{BM}, taking into account the fact that $\eta=0$ on $\Sigma_k$, so we skip it.
{$\square$}\goodbreak
Next, we prove the existence of minimizers in the critical case $\l=\l^*$.
\begin{Theorem}\label{th:exits-crit}
Let $\O$ be a smooth bounded domain of ${\mathbb R}^N$ and let $\Sigma_k$ be a
smooth closed submanifold of $\partial\O$ of dimension $k$ with $1\le k\le
N-2$. Assume that $p,q$ and $\eta$ satisfy \eqref{eq:weight} and \eqref{eq:min-pq}. If $\displaystyle \int_{\Sigma_k}\frac1{\sqrt{1-p(\s)/q(\s)}}d\s<\infty$ then
$\m_{\l^*}=\m_{\l^*}(\O,\Sigma_k)$ is achieved.
\end{Theorem}
\noindent{{\bf Proof. }}
We first consider the case $p\equiv 1$.\\
Let $\l_n$ be a sequence of real numbers decreasing to $\l^*$. By Theorem \ref{th:exitslesls}, there exist
minimizers $u_n$ for $\mu_{\l_n}=\m_{\l_n}(\Omega,\Sigma_k)$, so that
\begin{equation}\label{eq:u_n}
-\Delta u_n-\mu_{\l_n}\delta^{-2}q u_n= -\l_n \delta^{-2}\eta u_n \quad\textrm{ in }\O.
\end{equation}
We may assume that $u_n\geq 0$ in $\O$. We may also assume that $\|\nabla u_n\|_{L^2(\O)}=1$. Hence $u_n \rightharpoonup u$ in $H^1_0(\O)$
and $u_n\to u$ in $L^2(\O)$ and pointwise.
Let $\O^-\supset\O$ be the set given by Lemma \ref{lem:Opm}. We
will use the notations of Section \ref{s:pn} with ${\mathcal U}=
\O^-$ and $\mathcal{M}=\partial \O^-$. It will be understood that $q$ is extended to a function in $C^2(\overline{\O^-})$.
For $\b>0$ small we define
$$\O^-_{\b} := \{ x \in \O^-: \quad
\delta(x)<\b \}.$$
We have that
$$
\Delta u_n+b_n(x)\, u_n=0\quad\textrm{ in }\O,
$$
with $|b_n|\leq C$ in $\overline{\O\setminus \overline{\O^-_{\frac\b2}}}$ for every integer $n$. Thus by standard elliptic regularity theory,
\begin{equation}\label{eq:unleC}
u_n\leq C \quad\textrm{ in }\overline{\O\setminus \overline{\O^-_{\frac{\b}{2}}}}.
\end{equation}
We consider the supersolution $U$ in Lemma \ref{le:upperbound}. We shall show that there exists a constant $C>0$ such that for all $n\in\mathbb{N}$
\begin{equation}\label{eq:unleCV12}
u_n\leq C U \quad \textrm{ in }\overline{\O^-_\b}.
\end{equation}
Notice that $\overline{\O\cap\partial\O^-_\b}\subset \O^-$; thus by \eqref{eq:unleC}, we can choose $C>0$ so
that for any $n$
$$
u_n\leq C\, U\quad\textrm{ on }
\overline{\O\cap\partial\O^-_\b}.
$$
Again by Lemma \ref{le:upperbound}, setting $v_n=u_n-C\, U$, it
turns out that $v^+_n=\max(v_n,0)\in
H^1_0(\O^-_{\b})$ because $u_n=0$ on $\partial\O\cap \O^-_\b$.
Hence we have
$$
\calL_{\l_n}\,v_n\leq -C(\mu_{\l^*}-\mu_{\l_n})q{U}-C(\l^*-\l_n)\eta {U}\leq 0 \quad\textrm{ in
}\O^-_{\b}\cap\Omega .
$$
Multiplying the above inequality by $v^+_n$ and integrating by parts
yields
$$
\int_{\O^-_{\b}}|\n
v^+_n|^2\,dx-\mu_{\l_n}\int_{\O^-_{\b}}\delta^{-2}q|v^+_n|^2\,dx+
\l_n\int_{\O^-_{\b}}\eta\delta^{-2} |v^+_n|^2\,dx
\leq0.
$$
Hence Lemma \ref{lem:loc-hardy} implies that
$$
C \int_{\O^-_{\b}}\delta^{-2}X_{-2} |v^+_n|^2\,dx+\l_n\int_{\O^-_{\b}}\eta \delta^{-2} |v^+_n|^2\,dx\leq0.
$$
Since $\l_n$ is bounded, we can choose $\b>0$ small (independent of
$n$) such that $v^+_n\equiv0$ on $\O^-_\b$ (recall that $|\eta|\leq
C\delta$).
Thus we obtain \eqref{eq:unleCV12}. \\
Now, since $u_n\to u$ in $L^2(\O)$, we get by the dominated convergence theorem and \eqref{eq:unleCV12} that
$$
\delta^{-1} u_n\to \delta^{-1} u\quad \textrm{ in }L^2(\O).
$$
Since $u_n$ satisfies
$$
1=\int_{\O}|\nabla u_n|^2=\mu_{\l_n}\int_{\O}\delta^{-2} q u_n^2+ {\l_n}\int_{\O}\delta^{-2} \eta u_n^2,
$$
taking the limit, we have $1= \mu_{\l^*}\int_{\O}\delta^{-2} q u^2+ {\l^*}\int_{\O}\delta^{-2} \eta u^2$. Hence $u\neq0$ and it is a minimizer for $\mu_{\l^*}=\frac{(N-k)^2}{4}$.\\
For the general case $p\neq 1$, we can use the same transformation as in \eqref{eq:transf}. So \eqref{eq:unleCV12} holds and the same argument as above carries over.
{$\square$}\goodbreak
\section{Proof of Theorem \ref{th:mulpqe} and Theorem \ref{th:crit}}
\textit{Proof of Theorem \ref{th:mulpqe}:} Combining Lemma \ref{lem:Jl1} and Theorem \ref{th:exitslesls},
it only remains to check the case $\l<\l^*$. But this is an easy consequence of the definitions of $\l^*$ and of $\mu_{\l}(\O,\Sigma_k)$, see \cite[Section 3]{BM}.
{$\square$}\goodbreak
\noindent
\textit{Proof of Theorem \ref{th:crit}:}
Existence is proved in Theorem \ref{th:exits-crit} for $I_k<\infty$. Since the absolute value of
any minimizer for $\mu_{\l}(\O,\Sigma_k)$ is also a minimizer, we can apply Theorem \ref{th:ne}
to infer that $\mu_{\l^*}(\O,\Sigma_k)$ is never achieved as soon as $I_k=\infty$.
{$\square$}\goodbreak
\begin{center}\textbf{Acknowledgments}\end{center}
This work started when the first author was visiting CMM,
Universidad de Chile. He is grateful for their kind hospitality. M.
M. Fall is supported by the Alexander von Humboldt Foundation. F.
Mahmoudi is supported by Fondecyt project no. 1100164 and Fondo
Basal CMM.
\begin{thebibliography}{99}
\footnotesize
\bibitem{AS} Adimurthi and Sandeep K., Existence and non-existence of the first eigenvalue
of the perturbed Hardy-Sobolev operator.
Proc. Roy. Soc. Edinburgh Sect. A 132 (2002), no. 5, 1021-1043.
\bibitem{BMR1} Bandle C., Moroz V., Reichel W., Large solutions to semilinear
elliptic equations with Hardy potential and exponential nonlinearity. Around the
research of Vladimir Maz'ya. II, 1-22, Int. Math. Ser. (N. Y.), 12, Springer, New York, 2010.
\bibitem{BMR2} Bandle C., Moroz V., Reichel W., 'Boundary blowup' type sub-solutions to
semilinear elliptic equations with Hardy
potential. J. Lond. Math. Soc. (2) 77 (2008), no. 2, 503-523.
\bibitem{BM} Brezis H. and Marcus M., Hardy's inequalities revisited.
Dedicated to Ennio De Giorgi.
Ann. Scuola Norm. Sup. Pisa Cl. Sci. (4) 25 (1997), no. 1-2, 217-237.
\bibitem{BMS} Brezis H., Marcus M. and Shafrir I.,
Extremal functions for Hardy's inequality with weight. J. Funct. Anal. 171 (2000), 177-191.
\bibitem{CaMuPRSE} Caldiroli P., Musina R., On a class of 2-dimensional singular
elliptic problems. Proc. Roy. Soc. Edinburgh Sect. A 131 (2001),
479-497.
\bibitem{CaMuUMI} Caldiroli P., Musina R., Stationary states for a
two-dimensional singular Schr\"odinger equation. Boll. Unione Mat.
Ital. Sez. B Artic. Ric. Mat. (8) 4-B (2001), 609-633.
\bibitem{C} Cazacu C., On Hardy inequalities with singularities on the boundary. C. R. Math. Acad. Sci. Paris 349 (2011), no. 5-6, 273-277.
\bibitem{Fall-ne-sl} Fall M. M., Nonexistence of distributional supersolutions of a semilinear elliptic
equation with Hardy potential. To appear in J. Funct. Anal. http://arxiv.org/abs/1105.5886.
\bibitem{Fallccm} Fall M. M., On the Hardy-Poincar\'e inequality with
boundary singularities. Commun. Contemp. Math. 14, 1250019, 2012.
\bibitem{Fall} Fall M. M., A note on Hardy's inequalities with
boundary singularities. Nonlinear Anal. 75 (2012), no. 2, 951-963.
\bibitem{FaMu} Fall M. M., Musina R.,
Hardy-Poincar\'e inequalities with boundary singularities. Proc. Roy. Soc. Edinburgh Sect. A 142, 1-18, 2012.
\bibitem{FaMu1} Fall M. M., Musina R., Sharp nonexistence results for a linear elliptic inequality involving Hardy and Leray potentials.
Journal of Inequalities and Applications, vol. 2011, Article ID 917201, 21 pages, 2011.
doi:10.1155/2011/917201.
\bibitem{FaMah} Fall M. M., Mahmoudi F.,
Hypersurfaces with free boundary and large constant mean curvature: concentration along submanifolds.
Ann. Sc. Norm. Super. Pisa Cl. Sci. (5) 7 (2008), no. 3, 407-446.
\bibitem{FTT} Filippas S., Tertikas A. and
Tidblom J., On the structure of Hardy-Sobolev-Maz'ya inequalities.
J. Eur. Math. Soc. 11 (2009), no. 6, 1165-1185.
\bibitem{mm} Mahmoudi F., Malchiodi A., Concentration on minimal
submanifolds for a singularly perturbed Neumann problem. Adv. in
Math. 209 (2007), 460-525.
\bibitem{NaC} Nazarov A. I., Hardy-Sobolev inequalities in a cone. J. Math.
Sciences 132 (2006), no. 4, 419-427.
\bibitem{Na} Nazarov A. I., Dirichlet and Neumann problems to critical
Emden-Fowler type equations. J. Glob. Optim. 40 (2008), 289-303.
\bibitem{PT} Pinchover Y., Tintarev K.,
Existence of minimizers for Schr\"{o}dinger operators under domain perturbations with application
to Hardy's inequality.
Indiana Univ. Math. J. 54 (2005), 1061-1074.
\end{thebibliography}
\end{document}
\begin{document}
\title[Kawahara equation with boundary memory]
{\bf Asymptotic behavior of Kawahara equation with memory effect}
\author[Capistrano-Filho]{Roberto de A. Capistrano--Filho*}
\thanks{*Corresponding author: [email protected]}
\address{
Departamento de Matem\'atica, Universidade Federal de Pernambuco\\
Cidade Universit\'aria, 50740-545, Recife (PE), Brazil}
\email{[email protected]}
\thanks{Capistrano--Filho was supported by CNPq grants numbers 307808/2021-1, 401003/2022-1, and 200386/2022-0, CAPES grants numbers 88881.311964/2018-01 and 88881.520205/2020-01, and MATHAMSUD 21-MATH-03.}
\author[Chentouf]{Boumedi\`ene Chentouf}
\address{
Faculty of Science, Kuwait University \\
Department of Mathematics, Safat 13060, Kuwait}
\email{[email protected]}
\author[de Jesus]{Isadora Maria de Jesus}
\address{Departamento de Matem\'atica, Universidade Federal de Pernambuco (UFPE), 50740-545, Recife (PE), Brazil and Instituto de Matem\'atica, Universidade Federal de Alagoas (UFAL), Macei\'o (AL), Brazil}
\email{[email protected]; [email protected]}
\subjclass[2020]{Primary: 37L50, 93D15, 93D30. Secondary: 93C20}
\keywords{Kawahara equation; boundary memory term; behavior of solutions; energy decay}
\begin{abstract}
In this work, we are interested in a detailed qualitative analysis of the Kawahara equation, a model with numerous physical motivations such as magneto-acoustic waves in a cold plasma and gravity waves on the surface of a heavy liquid. First, we design a feedback control law which combines a damping component and another one of finite memory type. Then, we prove that the problem is well-posed under a condition involving the feedback gains of the boundary control and the memory kernel. Afterwards, it is shown that the energy associated with this system decays exponentially.
\end{abstract}
\date{\today}
\maketitle
\thispagestyle{empty}
\section{Introduction}
\subsection{Background and literature review}
Water wave models have been studied by many scientists from numerous disciplines such as hydraulic engineering, fluid mechanics, and physics, as well as mathematics. These models are in general hard to derive, and it is a complex task to extract qualitative information on the dynamics of the waves from them. This makes their study interesting and challenging. Recently, appropriate assumptions on the amplitude, wavelength, wave steepness, and so on, have been invoked to investigate asymptotic models for water waves and to understand the full water wave system (see, for instance, \cite{ASL,BLS,Lannes} and the references therein for a rigorous justification of various asymptotic models for surface and internal waves).
As a matter of fact, it has been noticed that water waves can be considered as a free boundary problem of the incompressible, irrotational Euler equation in an appropriate non-dimensional form. This means that there are two non-dimensional parameters $\delta := \frac{h}{\lambda}$ and $\varepsilon := \frac{a}{h}$, where the water depth, the wavelength, and the amplitude of the free surface are respectively denoted by $h, \lambda$ and $a$. On the other hand, the parameter $\mu$, known as the Bond number, measures the importance of gravitational forces compared to surface tension forces. We also note that long waves (also called shallow water waves) are mathematically characterized by the condition $\delta \ll 1$. Obviously, there are several long-wave approximations depending on the relation between $\varepsilon$ and $\delta$.
In view of the above discussion, instead of studying models that do not enjoy good asymptotic properties, one can rescale the parameters mentioned above to find systems that serve as asymptotic models for surface and internal waves, such as the Kawahara model. Precisely, letting $\varepsilon = \delta^4 \ll 1$ and taking $\mu = \frac13 + \nu\varepsilon^{\frac12}$ close to the critical Bond number $\mu = \frac13$, the so-called Kawahara equation is put forward. Such an equation was derived by Hasimoto and Kawahara \cite{Hasimoto1970,Kawahara} and takes the form
\[\pm2 W_t + 3WW_x - \nu W_{xxx} +\frac{1}{45}W_{xxxxx} = 0,\]
or, after re-scaling,
\begin{equation}\label{fda1}
W_{t}+\alpha W_{x}+\beta W_{xxx}-W_{xxxxx}+WW_{x}=0.
\end{equation}
The latter is also seen as the fifth-order KdV equation \cite{Boyd}, or the singularly perturbed KdV equation \cite{Pomeau}. It is a dispersive partial differential equation describing numerous wave phenomena such as magneto-acoustic waves in a cold plasma \cite{Kakutani}, the propagation of long waves in a shallow liquid beneath an ice sheet \cite{Iguchi}, gravity waves on the surface of a heavy liquid \cite{Cui}, etc.
Note that valuable efforts were made in the last decades to understand this model in various research frameworks. For example, numerous works focused on analytical and numerical methods for solving \eqref{fda1}. These methods include the tanh-function method \cite{Berloff}, the extended tanh-function method \cite{Biswas}, the sine-cosine method \cite{Yusufoglu}, the Jacobi elliptic functions method \cite{Hunter}, the direct algebraic method and numerical simulations \cite{Polat}, decomposition methods \cite{Kaya}, as well as the variational iteration and homotopy perturbation methods \cite{Jin}. Another direction is the study of the Kawahara equation from the point of view of control theory, specifically the controllability and stabilization problems \cite{ara}, which is our motivation.
Accordingly, we are interested in a detailed qualitative analysis of the system \eqref{fda1} in a bounded region. More precisely, our primary concern is to analyze the qualitative properties of solutions to the initial-boundary value problem associated with the model \eqref{fda1} posed on a bounded interval under the presence of damping and memory-type controls.
We now present some previous results that dealt with the asymptotic behavior of solutions of the Kawahara model \eqref{fda1} in a bounded domain. One of the first results is due to Silva and Vasconcellos \cite{vasi1,vasi2}, where the authors studied the stabilization of global solutions of the linear Kawahara equation under the effect of a localized damping mechanism. A second endeavor was completed by Capistrano-Filho \textit{et al.} \cite{ara}, where the generalized Kawahara equation in a bounded domain is considered, that is, a more general nonlinearity $W^p \partial_x W$ with $p\in [1,4)$ is taken into account. It is proved that the solutions of the Kawahara system decay exponentially when an internal damping mechanism is applied.
Recently, a new tool for the control properties of the Kawahara operator was proposed in \cite{CaSo}. In that work, the authors treated the so-called overdetermination control problem for the Kawahara equation. Precisely, a boundary control is designed so that the solution to the problem under consideration satisfies an integral condition. Furthermore, when the control acts internally in the system, instead of on the boundary, the authors proved that this integral condition is also satisfied.
We conclude the literature review with three recent works. In \cite{CaVi, boumediene}, the stabilization of the Kawahara equation with a localized time-delayed interior control is considered. Under suitable assumptions on the time-delay coefficients, the authors proved that the solutions of the Kawahara system are exponentially stable. This result is established by means of either the Lyapunov method or a compactness-uniqueness argument. More recently, the Kawahara equation in a bounded interval with a delay term in one of the boundary conditions was studied in \cite{luan}. The authors used two different approaches to prove that the solutions of \eqref{fda1} are exponentially stable under a condition on the length of the spatial domain. We point out that this is only a small sample of the papers concerned with the stabilization problem of the Kawahara equation in a bounded interval; the reader interested in more details on the topic may consult the papers cited above and the references therein.
Let us now present the framework of this article.
\subsection{Problem setting and main results}
Consider the system \eqref{fda1} in a bounded domain $\Omega=(0,\ell)$, where $\ell > 0$ is the spatial length, under the action of the following feedback:
\begin{equation}\label{sis1}
\begin{cases}
\begin{aligned}
\partial_{t} \omega(t,x)+&\alpha \partial_x \omega(t,x) +\beta\partial_x^3 \omega(t,x)- \partial_x^5 \omega(t,x)\\
& + {\omega^p}(t,x) \partial_x \omega(t,x)=0,
\end{aligned} & x\in \Omega,\ t>0, \\
\omega (t,0) =\omega (t,\ell) =0, & t>0, \\
\partial_x \omega(t,0)=\partial_x \omega(t,\ell) =0, & t>0, \\
\partial_x^2 \omega(t,\ell)=\mathcal{F}(t), & t>0,\\
\partial_x^{2}\omega(t,0)=z_0(t), & t\in \mathcal{I},\\
\omega(0,x) =\omega_{0} (x), & x \in\Omega,
\end{cases}
\end{equation}
where $\omega_0$ and $z_0$ are the initial data, and the feedback law is a linear combination of a damping term and a finite memory term given by
\begin{equation}\label{fdl}
\mathcal{F}(t):=\nu_1 \partial_x^2 \omega(t,0)+\nu_2 \int_{t-\tau_2}^{t-\tau_1} \sigma(t-s) \partial_x^2 \omega (s,0) \, ds.
\end{equation}
Here, $\alpha >0$ and $\beta>0$ are physical parameters, $p \in \{1,2\}$, whereas $\nu_1$ and $\nu_2$ are nonzero real numbers. In turn, $0<\tau_1 < \tau_2$ determine the finite memory interval $(t-\tau_2, t-\tau_1)$. Moreover, $\mathcal{I}=( -\tau_2, 0 )$, and the memory kernel is denoted by $\sigma(s)$. Of course, the presence of a memory term is ubiquitous in practice. In particular, memory terms are of great importance in systems control, since such systems are governed by equations in which the past values of one or more of the variables involved play a crucial role. On the other hand, the impact of a memory term on some systems can be deleterious, as it can affect their performance \cite{bc1, bc2, nipi}. Last but not least, we indicate that the memory term arising in the boundary control \eqref{fdl} could reflect the case of a compressible (or incompressible) viscoelastic fluid, which is regarded as the simplest material with memory \cite{afg, da}.
On the other hand, let us note that the energy associated with the full system \eqref{sis1} is given by
\begin{equation}\label{energia}
\mathcal{E}(t)= \int_{\Omega}\omega^2 (t,x)dx + |\nu_2| \int_{\mathcal{M}} s \sigma(s) \left( \int_{\Omega_0} (\partial_x^2 \omega)^2 (t-s \phi,0) \, d\phi \right)ds, \quad t \geq 0.
\end{equation}
Naturally, as we are interested in the behavior of the system \eqref{sis1}, we need to check whether the feedback law given by \eqref{fdl} represents a damping mechanism. In other words, we would like to see if, in the presence of the boundary memory-type feedback law, the energy \eqref{energia} of the system tends to zero with some specific decay rate as $t$ goes to $\infty$. This situation can be summarized in the following question:
\noindent\textbf{Question:} \textit{Does $\mathcal{E}(t)\longrightarrow0$ as $t\to\infty$? If so, is it possible to come up with a decay rate?}
\begin{equation}gin{assumptions}\lambdabel{assK}
The function $\sigma \in \ell^{\infty} (\mathcal{M})$, where $\mathcal{M}:=(\tau_1, \tau_2)$. In turn, we assume that
$$\sigma (s) > 0,\quad \text{a.e. in}\ \mathcal{M}.$$
Moreover, the feedback gains $\nu_1$ and $\nu_2$ together with the memory kernel satisfy
\begin{equation}gin{equation}\lambdabel{ab}
|\nu_1| + |\nu_2| \displaystyle \int_{\mathcal{M}} \sigma(s) \, ds<1.
\end{equation}
\end{assumptions}
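As a simple illustration (a hypothetical example, not taken from the references), an exponential kernel fulfills these hypotheses as soon as the feedback gains are small:

```latex
% Hypothetical example: take \sigma(s)=e^{-s} on \mathcal{M}=(\tau_1,\tau_2).
% Then \sigma\in L^\infty(\mathcal{M}) and \sigma>0 on \mathcal{M}, while
% condition \eqref{ab} becomes
|\nu_1|+|\nu_2|\int_{\tau_1}^{\tau_2} e^{-s}\,ds
  =|\nu_1|+|\nu_2|\left(e^{-\tau_1}-e^{-\tau_2}\right)<1,
% which holds, for instance, whenever |\nu_1|+|\nu_2|<1, since
% 0<e^{-\tau_1}-e^{-\tau_2}<1 for 0<\tau_1<\tau_2.
```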
Some notations, that we will use throughout this manuscript, are presented below:
\begin{equation}gin{itemize}
\item[(i)] We denote by $(\cdot,\cdot)_{\mathbb{R}^{2}}$ the canonical inner product of $\mathbb{R}^{2}$, whereas $\lambdangle\cdot,\cdot\rangle$ denotes the canonical inner product of $\ell^2(\Omegaega)$ whose induced norm is $\|\cdot\|$.
\item[(ii)] For $T>0$, consider the space of solutions
\begin{equation*}
Y_{T}= C(0,T;L^2(\Omega))\cap L^{2}(0,T; H_0^2(\Omega))
\end{equation*}
equipped with the norm
\begin{equation*}
\|v\|_{Y_{T}}^2= \left(\max_{t \in (0,T)}\|v(t,\cdot )\| \right)^2 + \int_{0}^{T}\|v(t,\cdot )\|^{2}_{H_0^2(\Omega)}dt.
\end{equation*}
\item[(iii)] Let $\Omega_0=(0,1)$ and $\mathcal{Q}:=\Omega_0 \times \mathcal{M}$. Then, we consider the spaces
$$H:=L^2 (\Omega) \times L^2 (\mathcal{Q}), \quad \mathcal{H}:=L^2 (\Omega) \times L^2 (\mathcal{I} \times \mathcal{M}),$$
respectively equipped with the following inner products:
$$
\left\{ \begin{array}{l}
\displaystyle \langle(\omega,z),(v,y)\rangle_{H}=\langle \omega,v\rangle +|\nu_2| \int_{\mathcal{M}} \int_{\Omega_0} s \sigma(s) z(\phi,s)y(\phi,s) ~ d\phi ds, \\[4mm]
\displaystyle \langle(\omega,z),(v,y)\rangle_\mathcal{H}=\langle \omega,v\rangle + |\nu_2| \int_{\mathcal{I}} \int_{\mathcal{M}} \sigma(s) z(r,s)y(r,s)\, ds dr.
\end{array}
\right.
$$
\end{itemize}
Subsequently, we can state our first main result:
\begin{theorem}\label{Lyapunov}Under Assumptions \ref{assK} and assuming that the length $\ell$ fulfills the smallness condition
\begin{equation}\label{L}
0< \ell < \pi \sqrt{\frac{3\beta}{\alpha}},
\end{equation}
there exists $r>0$ sufficiently small, such that for every $(\omega_{0}, z_{0})\in H$ with $\|(\omega_{0}, z_{0})\|_{H} < r$, the energy of the system \eqref{sis1}, given by \eqref{energia}, is exponentially stable. In other words, there exist two positive constants $\kappa$ and $\mu$ such that
\begin{equation}\label{exp decay}
\mathcal{E}(t) \leq \kappa \mathcal{E}(0)e^{-2\mu t}, \ t > 0,
\end{equation}
where $\mathcal{E}(t)$ is defined by \eqref{energia}.
\end{theorem}
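To illustrate the smallness condition \eqref{L} (a purely numerical instance, not part of the proof): for $\alpha=\beta=1$, the condition reads $0<\ell<\pi\sqrt{3}\approx 5.44$, while a stronger drift, say $\alpha=4$ with $\beta=1$, shrinks the admissible range to
$$0<\ell<\pi\sqrt{\tfrac{3}{4}}=\tfrac{\pi\sqrt{3}}{2}\approx 2.72.$$
In general, a larger dispersion coefficient $\beta$ enlarges, and a larger drift coefficient $\alpha$ reduces, the set of admissible lengths.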
The proof of this result uses an appropriate Lyapunov function, which requires the condition \eqref{L}. In turn, such a requirement can be relaxed by using a compactness-uniqueness argument \cite{ro} (see \cite{ara,luan,cajesus,vasi1,vasi2}). The proof relies on the following result \cite{luan}:
\begin{lemma}\label{lem2}
Let $\ell>0$ and consider the following assertion: there exist $\zeta \in \mathbb{C}$ and $\omega \in H_0^2(\Omega)\cap H^5(\Omega)$ such that
\begin{equation}
\begin{cases}
\begin{array}{ll}
\zeta \omega(x) +\omega'(x)+\omega'''(x)-\omega'''''(x)=0, & x \in \Omega, \\
\omega(x)=\omega'(x)=\omega''(x)=0, & x \in \{0,\ell\}.
\end{array}
\end{cases}
\label{len}
\end{equation}
If $(\zeta,\omega) \in \mathbb{C} \times H_0^2(\Omega)\cap H^5(\Omega)$ is a solution of \eqref{len}, then
$\omega=0.$
\end{lemma}
We have:
\begin{theorem}\label{comp}
Suppose that assumptions \ref{assK} hold. Moreover, we choose $\ell>0$ so that the problem in Lemma \ref{lem2} has only the trivial solution. Then, there exists $\varrho>0$ such that for every $\left(\omega_{0}, z_{0}\right) \in H$ satisfying
$
\left\|\left(\omega_{0}, z_{0}\right)\right\|_{H} \leq \varrho,
$
the energy \eqref{energia} of the problem \eqref{sis1} decays exponentially.
\end{theorem}
\subsection{Further comments and paper's outline} As mentioned above, the exponential stability result of the system \eqref{sis1} will be established using two different methods. The first one relies on a Lyapunov function and requires an explicit smallness condition on the length $\ell$ of the spatial domain. The second one is obtained via a classical compactness-uniqueness argument, where the critical length phenomenon appears, related to M\"{o}bius transformations (see for instance \cite{luan}). This permits us to answer the question raised in the introduction.
\begin{remarks}
Let us point out some important comments:
\begin{itemize}
\item[$\bullet$] Considering $\nu_2=0$ and $\alpha=0$, the authors in \cite{cajesus} showed the stabilization property for \eqref{sis1} using the compactness-uniqueness argument. Since they removed the drift term $\alpha\partial_{x}\omega$, the critical length phenomenon did not appear.
\item[$\bullet$] The main concern of this work is to deal with the feedback law of memory type as in \eqref{fdl}. In fact, one needs to control this term to ensure the well-posedness and stabilization results.
\item[$\bullet$] Our results are valid for the general nonlinearities $u^{p}\partial_{x}u$, $p\in\{1,2\}$, and can also be extended to combinations such as $c_1u \partial_x u+c_2u^2 \partial_x u$. Note that the decay rate in \eqref{exp decay} depends on the value of $p$, since (see Section 3)
$$\mu<\min\left\{\dfrac{\mu_2 |\nu_2|e^{-\delta\tau_2}\delta}{2(1+\mu_1|\nu_2|)},
\dfrac{\mu_1}{2\ell^2(1+\ell\mu_1)(p+2)}\left[(p+2)(3\pi^2\beta-\alpha \ell^2)-2\pi^2\ell^{2-\frac{p}{2}}r^p\right]\right\}.$$
\end{itemize}
\end{remarks}
We end our introduction with the paper's outline: The work consists of three parts including the Introduction. Section \ref{Sec2} discusses the existence of solutions for the full system \eqref{sis1}. Section \ref{Sec3} is devoted to proving the stabilization results, that is, Theorem \ref{Lyapunov} and Theorem \ref{comp}.
\section{Well-posedness theory}\label{Sec2}
In this section, we analyze the well-posedness of the system \eqref{sis1}. The first and second subsections are devoted to proving the existence of solutions for the linearized (homogeneous and non-homogeneous, respectively) system associated with \eqref{sis1}. The third subsection concerns the well-posedness of the full system \eqref{sis1}.
\subsection{Linear problem}
As in the literature (see for instance the references \cite{xyl} and \cite{np}), the homogeneous linear system associated with \eqref{sis1} can be written as follows:
\begin{equation}\label{sis2.1_coupled}
\left\{
\begin{array}{ll}
\partial_{t} \omega(t,x)+\alpha \partial_x \omega(t,x) +\beta\partial_x^3 \omega(t,x)- \partial_x^5 \omega(t,x) =0, & (t,x) \in \mathbb{R}^{+}\times\Omega,\\
s\partial _tz(t,\phi,s)+\partial_\phi z(t,\phi,s)=0, & (t,\phi,s)\in \mathbb R^+\times\Omega_0\times \mathcal{M},\\
\omega (t,0) =\omega (t,\ell) =\partial_x \omega(t,0)=\partial_x \omega(t,\ell) =0, & t >0,\\
\partial_x^2 \omega(t,\ell)=\nu_1 \partial_x^2 \omega(t,0)+ \nu_2 \displaystyle \int_{\mathcal{M}} \sigma(s) z(t,1,s) \, ds, & t >0,\\
\omega(0,x) =\omega_{0} (x), & x \in \Omega,\\
z(0,\phi,r)=z_0(-\phi r),& (\phi,r)\in \Omega_0\times(0,\tau_2),
\end{array}
\right.
\end{equation}
where $z(t,\phi,s)= \partial_x^2\omega(t - \phi s,0)$ satisfies a transport equation (see \eqref{sis2.1_coupled}$_2$). Letting $\Lambda(t)= \left[\begin{array}{l}
\omega(t,\cdot) \\
z(t,\cdot,\cdot)
\end{array}\right]$ and $\Lambda_{0}= \left[\begin{array}{l}
\omega_{0} \\
z_{0}(-\phi\cdot)
\end{array}\right]$, one can rewrite this system abstractly:
\begin{equation}
\label{sla}
\begin{cases}
\Lambda_t(t)=A\Lambda(t), \quad t>0,\\
\Lambda(0)=\Lambda_0\in H,
\end{cases}
\end{equation}
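For the reader's convenience, let us verify (formally, assuming $\omega$ is smooth) that $z(t,\phi,s)= \partial_x^2\omega(t - \phi s,0)$ indeed solves the transport equation in \eqref{sis2.1_coupled}. Writing $g(\tau):=\partial_x^2\omega(\tau,0)$, we have $z(t,\phi,s)=g(t-\phi s)$, so that
$$s\partial_t z(t,\phi,s)+\partial_\phi z(t,\phi,s)= s\,g'(t-\phi s)-s\,g'(t-\phi s)=0.$$
Moreover, $z(t,0,s)=\partial_x^2\omega(t,0)$ and $z(t,1,s)=\partial_x^2\omega(t-s,0)$, so the boundary condition in \eqref{sis2.1_coupled} recovers exactly the memory-type feedback law \eqref{fdl}.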
where
\begin{equation*}
A= \left[\begin{array}{cc}
-\alpha\partial_x - \beta\partial_x^3 + \partial_x^5 & 0 \\
0 & -\dfrac{1}{s}\partial_{\phi}
\end{array}\right],
\end{equation*}
whose domain is given by
\begin{equation*}\label{DAA}
D(A) :=
\left\lbrace
\begin{aligned}
&(\omega,z)\in H, \\
&\omega\in H^5(\Omega)\cap H_0^2(\Omega), \\
&z \in L^2 \Bigl( \mathcal{M}; H^1(\Omega_0) \Bigr);
\end{aligned}
\left\lvert
\begin{aligned}
&\partial_x^{2}\omega(0)= z(0,\cdot),\\
& \partial_x^{2}\omega(\ell)= \nu_1\partial_x^{2}\omega(0) +\nu_2 \int_{\mathcal{M}} \sigma(s) z(1,s) \, ds
\end{aligned}
\right.
\right\rbrace.
\end{equation*}
The following result ensures the well-posedness of the linear homogeneous system.
\begin{proposition}\label{linear}
Under Assumptions \ref{assK}, we have:
\begin{itemize}
\item[i.] The operator $A$ is densely defined in $H$ and generates a $C_{0}$-semigroup of contractions $e^{t{\scriptstyle A}}$. Thereby, for each $\Lambda_0\in H$, there exists a unique mild solution $\Lambda\in C([0,+\infty),H)$ of the linear system associated with \eqref{sis1}. Moreover, if $\Lambda_0\in D(A)$, then we have a unique classical solution with the regularity $$\Lambda\in C([0,+\infty),D(A))\cap C^1([0,+\infty),H).$$
\item[ii.] Given $\Lambda_{0}=(\omega_{0},z_0(\cdot)) \in H$, the following estimates hold:
\begin{equation}
\label{est1}
\displaystyle \| \partial_x^2 \omega (\cdot,0) \|_{L^2 (0,T)}^2 + \int_0^T \int_{\mathcal{M}} s \sigma(s) z^2 (t,1,s) \, dsdt \leq C \| (\omega_0, z_0(\cdot) )\|_H^2,
\end{equation}
\begin{equation}
\label{est2}
\displaystyle \| \partial_x^2 \omega (\cdot) \|_{L^2 (0,T;L^2 (\Omega) )}^2 \leq C \| (\omega_0, z_0(\cdot) )\|_H^2,
\end{equation}
\begin{equation}
\label{est3}
\displaystyle \| z_0(\cdot) \|_{L^2 (\mathcal{Q})}^2 \leq \| z (T,\cdot,\cdot) \|_{L^2 (\mathcal{Q})}^2 + \displaystyle \int_0^T \int_{\mathcal{M}} \sigma(s) z^2 (t,1,s) \, dsdt,
\end{equation}
and
\begin{equation}
\label{est4}
T\| \omega_0(\cdot) \|^2 \leq \| \omega\|_{L^2 (0,T;L^2(\Omega))}^2 +T\|\partial^2_x \omega(\cdot,0)\|_{L^2 (0,T)}^2.
\end{equation}
\item[iii.] The map $$\mathcal{G}: \Lambda_{0}=(\omega_{0},z_0(\cdot)) \in H \mapsto \Lambda (\cdot) =e^{\cdot {\scriptstyle A}} \Lambda_{0} \in Y_{T} \times C \left( [0,T]; \, L^2 (\mathcal{Q}) \right)$$ is continuous.
\end{itemize}
\end{proposition}
\begin{proof}
\noindent\textbf{Proof of item i.} This part can be proved by using semigroup theory. In fact, note first that for given $\Lambda=(\omega,z)\in D(A),$ it follows from the Cauchy-Schwarz inequality that
\begin{equation}
\label{dcs}
\int_{\mathcal{M}}\sigma(s)z(1,s)ds\leq \left(\int_{\mathcal{M}}\sigma(s)ds\right)^\frac{1}{2}\left(\int_{\mathcal{M}}\sigma(s)(z(1,s))^2ds\right)^\frac{1}{2}.
\end{equation}
Thus, using integration by parts and \eqref{dcs} yields that
\begin{equation}
\label{dissipatividade_de_A}
\begin{split}
\langle A\Lambda, \Lambda\rangle=&\dfrac{1}{2}\left[\left(\nu_1\partial_x^2 \omega(0)+\nu_2\int_{\mathcal{M}}\sigma(s)z(1,s)~ ds\right)^2-\left(\partial_x^2 \omega(0)\right)^2 \right.\\
& \left.-|\nu_2|\int_{\mathcal{M}}\sigma(s)\left(z(1,s)\right)^2~ ds+|\nu_2|\left(\partial_x^2 \omega(0)\right)^2\int_{\mathcal{M}}\sigma(s)~ ds\right]\\
\leq&\dfrac{1}{2}\left[\left(\partial_x^2 \omega(0)\right)^2\left(\nu_1^2-1+|\nu_2|\int_{\mathcal{M}}\sigma(s)~ ds\right)\right.\\
&\left.+2\nu_1\nu_2\left(\partial_x^2 \omega(0)\right)\left(\int_{\mathcal{M}}\sigma(s)z(1,s)~ ds\right)\right.\\
& \left.+\left(\nu_2^2-\dfrac{|\nu_2|}{\|\sqrt{\sigma}\|^2_{L^2(\mathcal{M})}}\right)\left(\int_{\mathcal{M}}\sigma(s)z(1,s)~ ds\right)^2\right]
=\dfrac{1}{2}\langle G X,X \rangle _{\mathbb R^2},
\end{split}
\end{equation}
where $$X=\left(\begin{array}{c}
\partial_x^2 \omega(0)\\
\displaystyle \int_{\mathcal{M}}\sigma(s)z(1,s)~ ds
\end{array}\right)$$
and
$$G=\left(\begin{array}{cc}
\displaystyle \nu_1^2-1+|\nu_2|\displaystyle \int_{\mathcal{M}}\sigma(s)~ ds &\nu_1\nu_2\\
\displaystyle \nu_1\nu_2&\displaystyle \nu_2^2-\dfrac{|\nu_2|}{\|\sqrt{\sigma}\|^2_{L^2(\mathcal{M})}}
\end{array}\right).$$
Due to \eqref{ab}, we have
$$\det G=|\nu_2|\left(\int_{\mathcal{M}}\sigma(s)~ ds\right)^{-1}\left\{\left[1-|\nu_2|\left(\int_{\mathcal{M}}\sigma(s)~ ds\right)\right]^2-\nu_1^2\right\}>0$$
and
$$
\mbox{tr} G\leq |\nu_1|(|\nu_1|-1)-|\nu_1||\nu_2|\left(\int_{\mathcal{M}}\sigma(s)~ ds\right)^{-1}<0,
$$
since $|\nu_1|<1.$ Moreover, it is not difficult to see that $G$ is a negative definite matrix. Putting this information together in \eqref{dissipatividade_de_A}, we conclude that $A$ is dissipative. Analogously, consider the adjoint operator of $A$, given by
$$A^*(v,y)=\left(\alpha\partial_x v+\beta \partial_x^3 v-\partial_x^5 v, \dfrac{1}{s}\partial_\phi y\right),$$ with domain
\begin{equation*}\label{DAAA}
D(A^*) :=
\left\lbrace
\begin{aligned}
&(v,y)\in H, \\
& v\in H^5(\Omega)\cap H_0^2(\Omega), \\
&y \in L^2 \Bigl( \mathcal{M}; H^1(\Omega_0) \Bigr);
\end{aligned}
\left\lvert
\begin{aligned}
& \partial_x^2 v(\ell)=\dfrac{|\nu_2|}{\nu_2}y(1,s),\\
& \partial_x^2 v(0)=\displaystyle \nu_1\partial_x^2 v(\ell) +|\nu_2|\int_{\mathcal{M}}\sigma(s)y(0,s)~ ds
\end{aligned}
\right.
\right\rbrace,
\end{equation*}
we have that for $(v,y)\in D(A^*),$
\begin{equation}
\label{dissipatividade_adjunto}
\begin{split}
\langle A^*(v,y),(v,y)\rangle
& +\left[|\nu_2|^2-|\nu_2|\|\sqrt{\sigma}\|^2_{L^2(\mathcal{M})}\right]\left(\int_{\mathcal{M}}\sigma(s) y(0,s) ds\right)^2\\
=& \dfrac{1}{2}\langle G_*Z,Z\rangle,
\end{split}
\end{equation}
where $$Z=\left(\begin{array}{c} \partial_x^2 v(\ell)\\
\displaystyle \int_{\mathcal{M}}\sigma(s) y(0,s) ds
\end{array}\right)$$ and
$$G_*=\left(\begin{array}{cc}
\nu_1^2-1+|\nu_2|\displaystyle \int_{\mathcal{M}}\sigma(s)~ ds &\nu_1|\nu_2|\\
\nu_1|\nu_2|&\displaystyle \nu_2^2-\dfrac{|\nu_2|}{\|\sqrt{\sigma}\|^2_{L^2(\mathcal{M})}}
\end{array}\right).$$
Again, thanks to the relation \eqref{ab}, we have $\det G_*=\det G>0$ and $\mbox{tr}\, G_*= \mbox{tr}\, G<0$, since $|\nu_1|<1.$ Thus, using the fact that $G_*$ is negative definite in \eqref{dissipatividade_adjunto}, we deduce that $A^*$ is also dissipative, which shows item i.
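For clarity, we recall the elementary criterion used twice above: if $S$ is a real symmetric $2\times 2$ matrix with (real) eigenvalues $\lambda_1,\lambda_2$, then
$$\det S=\lambda_1\lambda_2>0 \quad\text{and}\quad \mbox{tr}\, S=\lambda_1+\lambda_2<0 \ \Longrightarrow\ \lambda_1<0 \ \text{and}\ \lambda_2<0,$$
so that $S$ is negative definite. This is precisely how the sign conditions on $\det G$, $\mbox{tr}\, G$ (and on $\det G_*$, $\mbox{tr}\, G_*$) are exploited.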
\noindent\textbf{Proof of item ii.} First, recall that $e^{tA}$ is a semigroup of contractions and therefore, for each $\Lambda_0=(\omega_0,z_0)\in H,$ the following estimate is valid:
\begin{equation}
\label{contration_semigroup}
\|(\omega(t),z(t,\cdot,\cdot))\|_H^2=\|\omega(t)\|^2+\|z(t,\cdot,\cdot) \|^2_{L^2(\mathcal{Q})}\leq \|\omega_0\|^2+\|z_0(-\cdot) \|^2_{L^2(\mathcal{Q})}, \quad \forall t\in [0,T].
\end{equation}
Moreover, the following inequality holds:
\begin{equation}
\label{eq2.16}
\begin{array}{rcl}
\displaystyle \int_0^T\int_{\mathcal{M}}s\sigma(s)\left[z(t,1,s)\right]^2~dsdt&\leq&\displaystyle \dfrac{\tau_2}{|\nu_2|}\int_{\Omega_0}\int_{\mathcal{M}}|\nu_2| s \sigma(s)\left[z_0^2(-\phi s)\right]dsd\phi\\
&&\displaystyle +\dfrac{\tau_2}{\tau_1|\nu_2|}\int_0^T\int_{\Omega_0}\int_{\mathcal{M}}|\nu_2| s \sigma(s)z^2~ds d\phi dt.
\end{array}
\end{equation}
Indeed, multiplying the second equation of \eqref{sis2.1_coupled} by $\phi \sigma(s)z,$ rearranging the terms, integrating by parts and taking into account that $s \in \mathcal{M}=(\tau_1,\tau_2),$ we have
\begin{equation*}
\begin{split}
\int_0^T\int_{\mathcal{M}}s\sigma(s) \left( z(t,1, s)\right)^2~ds dt
\leq&\dfrac{\tau_2}{|\nu_2|}\int_0^T\int_{\Omega_0}\int_{\mathcal{M}}|\nu_2|\sigma(s) \left( z(t,\phi, s)\right)^2~ds d\phi dt\\
& +\dfrac{\tau_2}{|\nu_2|}\int_{\Omega_0}\int_{\mathcal{M}}\phi |\nu_2|\sigma(s)s\left( z(0,\phi, s)\right)^2~ds d\phi\\
&-\dfrac{\tau_2}{|\nu_2|}\int_{\Omega_0}\int_{\mathcal{M}}|\nu_2|\phi \sigma(s)s\left( z(T,\phi, s)\right)^2~ds d\phi\\
\leq& \dfrac{\tau_2}{\tau_1|\nu_2|}\int_0^T\int_{\Omega_0}\int_{\mathcal{M}}s|\nu_2|\sigma(s) \left( z(t,\phi, s)\right)^2~ds d\phi dt\\
& +\dfrac{\tau_2}{|\nu_2|}\int_{\Omega_0}\int_{\mathcal{M}}\phi |\nu_2|\sigma(s)s\left( z_0(-\phi s)\right)^2~ds d\phi.
\end{split}
\end{equation*}
This proves the estimate \eqref{eq2.16}. As a consequence of \eqref{contration_semigroup}, \eqref{eq2.16} and the facts that $\tau_1\leq s\leq \tau_2$ and $\phi\leq 1$, we also have
\begin{equation}
\label{eq2.15}
\int_0^T\int_{\mathcal{M}} s\sigma(s)\left(z(t,1,s)\right)^2~dsdt\leq \dfrac{\tau_2}{|\nu_2|}\left(\dfrac{T}{\tau_1}+1\right)\left(\|\omega_0\|^2+\| z_0(-\phi s)\|^2_{L^2(\mathcal{Q})}\right).
\end{equation}
Now, we are in a position to prove \eqref{est1}. Multiplying the first equation of \eqref{sis2.1_coupled} by $\omega$, integrating over $[0,T]\times[0,\ell],$ and using the boundary conditions, it follows that
\begin{equation}\label{eq2.23}
\begin{split}
\|\partial_x^2\omega(0)\|_{L^2(0,T)}^2=&\displaystyle \|\omega_0\|^2+\int_0^T\left(\partial_x^2\omega(\ell)\right)^2~dt-\|\omega(T)\|^2\\
\leq& \|\omega_0\|^2+\int_0^T\left(\nu_1\partial_x^2\omega(0)+\nu_2\int_{\mathcal{M}}
\sigma(s)z(\cdot,1, s)ds \right)^2~dt:=I_{1}+I_{2}.
\end{split}
\end{equation}
To estimate the integral $I_{2}$ on the right-hand side of \eqref{eq2.23}, we use Young's inequality together with the Cauchy-Schwarz inequality, to obtain
\begin{equation}
\label{eq2.25}
\begin{split}
I_{2}\leq&\nu_1^2\left(\partial_x^2 \omega(t,0)\right)^2\\
&+2|\nu_1||\nu_2|\left(\partial_x^2 \omega(t,0)\right)\left(\int_{\mathcal{M}}\sigma(s)ds\right)^\frac{1}{2}\left(\int_{\mathcal{M}}\sigma(s)z^2(\cdot,1, s)ds\right)^\frac{1}{2}\\
& +\nu_2^2\left(\left(\int_{\mathcal{M}}\sigma(s)ds\right)^\frac{1}{2}
\left(\int_{\mathcal{M}}\sigma(s)z^2(\cdot,1, s)ds\right)^\frac{1}{2}\right)^2\\
\leq& \left[\nu_1^2+\dfrac{\nu_2^2}{2\theta}\left(\int_{\mathcal{M}}\sigma(s)ds\right)\right]\left(\partial_x^2 \omega(t,0)\right)^2\\
&+\left[2\theta \nu_1^2+\nu_2^2\left(\int_{\mathcal{M}}\sigma(s)ds\right)\right]\left(\int_{\mathcal{M}}\sigma(s)z^2(\cdot,1, s)ds\right).
\end{split}
\end{equation}
Thereafter, inserting \eqref{eq2.25} into \eqref{eq2.23}, we find
\begin{equation}
\label{eq2.26}
\begin{split}
&\left[1-\nu_1^2-\dfrac{\nu_2^2}{2\theta}\left(\int_{\mathcal{M}}\sigma(s)ds\right)\right]\|\partial_x^2\omega(0)\|_{L^2(0,T)}^2\leq \|\omega_0\|^2 \\&+\left[2\theta\nu_1^2+\nu_2^2\left(\int_{\mathcal{M}}\sigma(s)ds\right)\right]\left(\int_0^T\int_{\mathcal{M}}\sigma(s)z^2(\cdot,1, s)dsdt\right).
\end{split}
\end{equation}
Thanks to \eqref{ab}, one can choose $\theta>0$ large enough so that
\begin{equation}\label{w1}
\displaystyle 1-\nu_1^2-\dfrac{\nu_2^2}{2\theta}\left(\int_{\mathcal{M}}\sigma(s)ds\right)>0.
\end{equation}
This, together with \eqref{eq2.26} and \eqref{eq2.15}, yields
\begin{equation}
\label{eq2.28}
\begin{split}
\|\partial_x^2\omega(0)\|_{L^2(0,T)}^2\leq& \displaystyle C\left(\|\omega_0\|^2+\dfrac{1}{\tau_1}\int_0^T\int_{\mathcal{M}}s\sigma(s)z^2(\cdot,1, s)dsdt\right)\\
\leq& C\left(1+\dfrac{\tau_2}{\tau_1|\nu_2|}\left(\dfrac{T}{\tau_1}+1\right)\right)\|\omega_0\|^2+\dfrac{C\tau_2}{\tau_1|\nu_2|}\left(\dfrac{T}{\tau_1}+1\right)\|z_0(-\phi s)\|^2_{L^2(\mathcal{Q})}\\
\leq& \displaystyle C\left(\|\omega_0\|^2+\|z_0(-\phi s)\|^2_{L^2(\mathcal{Q})}\right).
\end{split}
\end{equation}
Clearly, combining \eqref{eq2.15} and \eqref{eq2.28}, we get \eqref{est1}.
Now, let us prove \eqref{est2}. Multiplying the first equation of \eqref{sis2.1_coupled} by $x\omega$, integrating by parts over $(0,T)\times \Omega,$ and isolating the term $\|\partial_x^2 \omega\|_{L^2(0,T;L^2(\Omega))}^2$, we obtain
\begin{equation*}
\begin{split}
\|\partial_x^2 \omega\|_{L^2(0,T;L^2(\Omega))}^2
\leq& \int_{\Omega} \dfrac{x}{5}\omega_0^2(x)dx+\dfrac{\alpha}{5}\| \omega\|_{L^2(0,T;L^2(\Omega))}^2 \\
& +\dfrac{\ell}{5}\left[\nu_1^2+\dfrac{\nu_2^2}{2\varepsilon}\left(\int_{\mathcal{M}}\sigma(s)ds\right)\right]\int_0^T(\partial_x^2\omega(t,0))^2\\
& +\dfrac{\ell}{5}\left[2\varepsilon\nu_1^2+\nu_2^2\left(\int_{\mathcal{M}}\sigma(s)ds\right)\right]\int_0^T\int_{\mathcal{M}}\sigma(s)z^2(t,1,s)dsdt\\
\leq& \dfrac{\ell}{5}\|\omega_0\|^2+\dfrac{\alpha}{5}\| \omega\|_{L^2(0,T;L^2(\Omega))}^2 \\
&+C_1\left[\int_0^T(\partial_x^2\omega(t,0))^2+\int_0^T\int_{\mathcal{M}}\sigma(s)z^2(t,1,s)dsdt\right],
\end{split}
\end{equation*}
where \eqref{eq2.25} is used and
$$ C_1=\max\left\{\dfrac{\ell}{5}\left[\nu_1^2+\dfrac{\nu_2^2}{2\varepsilon}\left(\int_{\mathcal{M}}\sigma(s)ds\right)\right],\dfrac{\ell}{5}\left[2\varepsilon\nu_1^2+\nu_2^2\left(\int_{\mathcal{M}}\sigma(s)ds\right)\right]\right\}.$$ Now, taking into account the fact that $e^{tA}$ is a semigroup of contractions and using \eqref{est1}, we obtain \eqref{est2} with the constant $C=\max\left\{\dfrac{\ell}{5},\dfrac{\alpha}{5}, C_1\right\}$.
Finally, let us show \eqref{est3} and \eqref{est4}, respectively. For \eqref{est3}, multiply the second equation in \eqref{sis2.1_coupled} by $\sigma(s)z$ and integrate by parts over $(0,T)\times\mathcal{Q},$ to obtain
$$
\int_{\Omega_0}\int_{\mathcal{M}}s\sigma(s) z^2(0,\phi, s) ~ds d\phi\leq \int_{\Omega_0}\int_{\mathcal{M}}s\sigma(s) z^2(T,\phi, s) ~ds d\phi +\int_0^T\int_{\mathcal{M}}\sigma(s) z^2(t,1,s) ~dsdt,
$$
showing \eqref{est3}. To prove \eqref{est4}, we multiply the first equation in \eqref{sis2.1_coupled} by $2(T-t)\omega$ and integrate over $[0,T]\times[0,\ell],$ to find
$$T\|\omega_0\|^2\leq T\|\omega\|^2_{L^2(0,T;L^2(\Omegaega))}+T\int_0^T\left(\partial_x^2\omega(0)\right)^2dt,$$
giving \eqref{est4}. Last but not least, it is worth mentioning that the above estimates remain true for solutions stemming from $\Lambda_0\in H,$ giving item ii.
\noindent\textbf{Proof of item iii.} This follows directly from \eqref{est2} and \eqref{contration_semigroup}.
\end{proof}
\subsection{Non-homogeneous problem}
Let us now consider the linear system \eqref{sis2.1_coupled} with a source term $\varphi \in L^1(0,T; L^2(\Omega))$ on the right-hand side of the first equation. As in the previous subsection, the system can be rewritten as follows:
\begin{equation}\label{sis_coupled_nh}
\begin{cases}
\Lambda_t(t)=A\Lambda(t)+(\varphi(t,\cdot),0), \quad t>0,\\
\Lambda(0)=\Lambda_0\in H,
\end{cases}
\end{equation}
where $\Lambda=(\omega,z)$ and $\Lambda_0=(\omega_0,z_0(-\cdot)).$ With this in hand, the following result will be proved.
\begin{theorem}
\label{teor2}
Under Assumptions \ref{assK}, it follows that:
\begin{enumerate}
\item[$(a)$] If $\Lambda_0=(\omega_0,z_0(-\cdot))\in H$ and $\varphi \in L^1(0,T; L^2(\Omega)),$ then there exists a unique mild solution $$\Lambda=(\omega,z)\in Y_{T}\times C([0,T];L^2(\mathcal{Q}))$$ of \eqref{sis_coupled_nh} such that
\begin{equation}
\label{eq2.38} \|(\omega,z)\|_{C([0,T];H)}^2\leq C\left(\|(\omega_0,z_0(-\cdot))\|_H^2+\|\varphi\|_{L^1(0,T; L^2(\Omega))}^2\right),
\end{equation}
and
\begin{equation}
\label{M} \|\omega\|_{Y_{T}}^2\leq C\left(\|(\omega_0,z_0(-\cdot))\|_H^2+\|\varphi\|_{L^1(0,T; L^2(\Omega))}^2\right),
\end{equation}
for some constant $C>0$, which is independent of $\Lambda_0$ and $\varphi.$
\item[$(b)$] Given $$\omega\in Y_{T}=C(0,T;L^2(\Omega))\cap L^2(0,T;H^2_0(\Omega))$$ and $p\in\{1,2\}$, we have $\omega^p\partial_x \omega\in L^1(0,T; L^2(\Omega))$ and the map
\begin{equation}
\label{Theta}\mathcal{F}: \omega\in Y_{T} \mapsto \omega^p\partial_x \omega\in L^1(0,T; L^2(\Omega))
\end{equation}
is continuous.
\end{enumerate}
\end{theorem}
\begin{proof}
\noindent\textbf{Proof of item (a).} Since $A$ is the infinitesimal generator of a semigroup of contractions $e^{tA}$ and $\varphi\in L^1(0,T;L^2(\Omega))$, it follows from semigroup theory that there is a unique mild solution $\Lambda=(\omega,z)\in C([0,T];H)$ of \eqref{sis_coupled_nh} such that
$$\Lambda(t)=e^{tA}\Lambda_0+\int_0^te^{(t-s)A}(\varphi,0)ds,$$
and hence, we get
$$\|(\omega,z)\|_{C([0,T];H)}\leq C\left(\|(\omega_0,z_0(-\cdot))\|_H+\|\varphi\|_{L^1(0,T; L^2(\Omega))}\right).$$
Young's inequality gives
$$ \|(\omega,z)\|_{C([0,T];H)}^2\leq 2C^2\left(\|(\omega_0,z_0(-\cdot))\|_H^2+\|\varphi\|_{L^1(0,T; L^2(\Omega))}^2\right),$$
which proves \eqref{eq2.38}. To complete the proof of item $(a)$, we must verify the validity of \eqref{M}. For this, observe that from \eqref{eq2.38}, we have
\begin{equation}
\label{eq2.39}
\max_{t\in[0,T]}\|\omega\|^2\leq 2C^2\left(\|(\omega_0,z_0(-\cdot))\|_H^2+\|\varphi\|_{L^1(0,T; L^2(\Omega))}^2\right).
\end{equation}
In turn, if we multiply the second equation in \eqref{sis_coupled_nh} by $\phi\sigma(s)z$, integrate over $[0,T]\times [0,1]\times [\tau_1,\tau_2 ]$ and argue as in the proof of \eqref{eq2.16}, we obtain
\begin{equation}
\label{eq2.42}
\begin{array}{l}
\displaystyle\int_0^T\int_{\mathcal{M}} s\sigma(s)\left(z(t,1,s)\right)^2~dsdt\\
\leq\displaystyle \dfrac{\tau_2}{|\nu_2|}\left(\dfrac{T}{\tau_1}+1\right)\left(\|\omega_0\|^2+\| z_0(-\phi s)\|^2_{L^2(\mathcal{Q})}+\|\varphi\|_{L^1(0,T; L^2(\Omega))}^2\right).
\end{array}
\end{equation}
Now, multiplying the first equation in \eqref{sis_coupled_nh} by $\omega$, integrating over $[0,T]\times[0,\ell],$ and thanks to \eqref{eq2.42}, we get
\begin{equation}
\label{eq2.46}
\begin{split}
\|\partial_x^2\omega(0)\|_{L^2(0,T)}^2\leq& \|\omega_0\|^2+\int_0^T\left(\nu_1\partial_x^2\omega(0)+\nu_2\int_{\mathcal{M}}\sigma(s)z(\cdot,1, s)ds \right)^2~dt\\
& +2\left(\max_{t\in [0,T]}\|\omega(t,x)\|\right) \int_0^T\|\varphi(t,x)\|~dt.
\end{split}
\end{equation}
Now, replacing \eqref{eq2.25} in \eqref{eq2.46}, we find
\begin{equation}
\begin{array}{rcl}
\|\partial_x^2\omega(0)\|_{L^2(0,T)}^2&\leq& \displaystyle \|\omega_0\|^2+\left[\nu_1^2+\dfrac{\nu_2^2}{2\theta}
\left(\int_{\mathcal{M}}\sigma(s)ds\right)\right]\int_0^T\left(\partial_x^2 \omega(t,0)\right)^2 dt\\
&&\displaystyle +\left[2 \theta \nu_1^2+\nu_2^2\left(\int_{\mathcal{M}}\sigma(s)ds\right)\right]\left(\int_0^T\int_{\mathcal{M}}\sigma(s)z^2(\cdot,1, s)dsdt\right)\\
&&\displaystyle +2\left(\max_{t\in [0,T]}\|\omega(t,x)\|\right) \int_0^T\|\varphi(t,x)\|~dt.
\end{array}
\end{equation}
Isolating $\|\partial_x^2\omega(0)\|_{L^2(0,T)}^2$ and using Young's inequality for the last term of the right-hand side, we reach
\begin{equation}
\label{eq2.48}
\begin{array}{l}
\displaystyle \left[1-\nu_1^2-\dfrac{\nu_2^2}{2\theta}\left(\int_{\mathcal{M}}\sigma(s)ds\right)\right]\|\partial_x^2\omega(0)\|_{L^2(0,T)}^2\\
\leq \displaystyle \|\omega_0\|^2 +\left[2\theta \nu_1^2+\nu_2^2\left(\int_{\mathcal{M}}\sigma(s)ds\right)\right]
\left(\int_0^T\int_{\mathcal{M}}\sigma(s)z^2(\cdot,1, s)dsdt\right)\\[3mm]
\displaystyle\ \ \ \ +\left(\max_{t\in [0,T]}\|\omega(t,x)\|\right)^2+\|\varphi\|_{L^1(0,T;L^2(\Omega))}^2.
\end{array}
\end{equation}
Thanks to \eqref{ab}, \eqref{w1} and \eqref{eq2.48}, the estimate \eqref{eq2.38} becomes
\begin{equation}\label{eq2.50}
\begin{split}
\|\partial_x^2\omega(0)\|_{L^2(0,T)}^2
\leq& C_1\left(2+C_2+\dfrac{\tau_2}{\tau_1|\nu_2|}\left(\dfrac{T}{\tau_1}+1\right)\right)\|\omega_0\|^2\\
& +C_1\left(\dfrac{\tau_2}{\tau_1|\nu_2|}\left(\dfrac{T}{\tau_1}+1\right)+1+C_2\right)\|z_0(-\phi s)\|^2_{L^2(\mathcal{Q})}\\
& +C_1(1+C_2)\|\varphi\|_{L^1(0,T;L^2(\Omega))}^2\\
\leq& C\left(\|(\omega_0,z_0(-\phi s))\|^2_{H}+\|\varphi\|_{L^1(0,T;L^2(\Omega))}^2\right).
\end{split}
\end{equation}
Now, multiply the equation \eqref{sis_coupled_nh} by $x\omega$, integrate by parts over $(0,T)\times (0,\ell )$, and then perform calculations similar to those of the previous item to get
\begin{equation}
\label{eq2.53}
\begin{split}
&\dfrac{5}{2}\|\partial_x^2 \omega\|_{L^2(0,T;L^2(\Omega))}^2\leq \displaystyle \dfrac{\ell}{2}\|\omega_0\|^2+\dfrac{\alpha T}{2}C\left(\|(\omega_0,z_0(-\phi s))\|_{H}^2+\|\varphi\|^2_{L^1(0,T;L^2(\Omega))}\right)\\
& +\dfrac{\ell}{2}C\left(\|(\omega_0,z_0(-\phi s))\|_{H}^2+\|\varphi\|^2_{L^1(0,T;L^2(\Omega))}\right)+\dfrac{\ell}{2}\|\varphi\|_{L^1(0,T;L^2(\Omega))}^2\\
& +\dfrac{\ell}{2}\left[\nu_1^2+\dfrac{\nu_2^2}{2\varepsilon}\left(\int_{\mathcal{M}}\sigma(s)ds\right)\right]C\left(\|(\omega_0,z_0(-\phi s))\|_{H}^2+\|\varphi\|^2_{L^1(0,T;L^2(\Omega))}\right) \\
& +\dfrac{\ell}{2\tau_1}\left[2\varepsilon\nu_1^2+\nu_2^2\left(\int_{\mathcal{M}}\sigma(s)ds\right)\right]\dfrac{\tau_2}{|\nu_2|}\left(\dfrac{T}{\tau_1}+1\right)\left(\|(\omega_0,z_0(-\phi s))\|_{H}^2+\|\varphi\|^2_{L^1(0,T;L^2(\Omega))}\right),
\end{split}
\end{equation}
where we have used the Cauchy-Schwarz and Young inequalities together with the estimates \eqref{eq2.25}, \eqref{eq2.42}, and \eqref{eq2.50}. Therefore, taking any $\varepsilon>0$ in \eqref{eq2.53}, there exists $C>0$ such that
\begin{equation}
\label{eq2.54}
\displaystyle
\|\omega\|_{L^2(0,T;H^2_0(\Omega))}^2=\|\partial_x^2 \omega\|_{L^2(0,T;L^2(\Omega))}^2\leq C\left(\|(\omega_0,z_0(-\phi s))\|_{H}^2+\|\varphi\|^2_{L^1(0,T;L^2(\Omega))}\right).
\end{equation}
The estimate \eqref{M} follows directly from the estimates \eqref{eq2.39} and \eqref{eq2.54}, and item (a) is achieved.
\noindent\textbf{Proof of item (b).} Given $\omega,v\in Y_{T}$ we have, for $p=1,$ that
\begin{equation}
\label{eq2.55}
\|\omega\partial_x \omega\|_{L^1(0,T;L^2(\Omega))}\leq \int_0^T\|\omega\|_{L^\infty(\Omega)}\|\partial_x \omega\|dt\leq k\int_0^T\|\omega\|_{H^ 2(\Omega)}^2dt\leq k\|\omega\|_{Y_{T}}^2<\infty,
\end{equation}
where $k$ is the positive constant of the Sobolev embedding $H^2(\Omega)\hookrightarrow L^\infty(\Omega)$. Therefore, $\omega\partial_x \omega\in L^1(0,T;L^2(\Omega)),$ for each $\omega\in Y_{T}.$ Thus, using the triangle inequality together with the Cauchy-Schwarz inequality, we get the classical estimate
\begin{equation}
\label{eq2.56}
\|\mathcal{F}(\omega)-\mathcal{F}(v)\|_{L^1(0,T;L^2(\Omega))}\leq k\|\omega-v\|_{Y_{T}}\left(\| \omega\|_{Y_{T}} +\|v\|_{Y_{T}}\right), \quad \text{for any} \ \omega,v \in Y_{T}.
\end{equation}
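For completeness, let us indicate how \eqref{eq2.56} is obtained (a standard computation, with the same embedding constant $k$ as in \eqref{eq2.55}): writing
$$\mathcal{F}(\omega)-\mathcal{F}(v)=\omega\partial_x\omega-v\partial_x v=(\omega-v)\partial_x\omega+v\,\partial_x(\omega-v)$$
and estimating each product as in \eqref{eq2.55}, we get
$$\|\mathcal{F}(\omega)-\mathcal{F}(v)\|_{L^1(0,T;L^2(\Omega))}\leq k\int_0^T\left(\|\omega-v\|_{H^2(\Omega)}\|\omega\|_{H^2(\Omega)}+\|v\|_{H^2(\Omega)}\|\omega-v\|_{H^2(\Omega)}\right)dt,$$
and the Cauchy-Schwarz inequality in $t$ then yields the bound $k\|\omega-v\|_{Y_{T}}\left(\|\omega\|_{Y_{T}}+\|v\|_{Y_{T}}\right)$.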
Therefore, the map $\mathcal{F}$ is continuous with respect to the corresponding topologies. On the other hand, when $p=2,$ we have for $\omega,v\in Y_{T}$ that
\begin{equation}
\label{eq2.58}
\|\mathcal{F}(\omega)\|_{L^1(0,T;L^2(\Omega))}\leq k\|\omega\|_{C(0,T; L^2(\Omega))}\int_{0}^T \|\omega\|_{H^2(\Omega)}^2dt\leq k\|\omega\|_{Y_{T}}^3<+\infty.
\end{equation}
Hence, $\mathcal{F}(\omega)$ is well-defined and, for any $\omega,v$ in $Y_{T}$, we have
\begin{equation}
\label{eq2.59}
\begin{split}
\|\mathcal{F}(\omega)-\mathcal{F}(v)\|_{L^1(0,T;L^2(\Omega))}
\leq & \dfrac{3k}{2}\left(\|\omega\|_{Y_{T}}^2+ \|v\|_{Y_{T}} ^2\right)\|\omega-v\|_{Y_{T}}.
\end{split}
\end{equation}
Thereby, the map $\mathcal{F}$ is continuous for the corresponding topologies.
\end{proof}
\subsection{Nonlinear problem} We are now in a position to prove the main result of the section. Precisely, the next result gives the well-posedness for the full system \eqref{sis1}.
\begin{theorem}
\label{theorem3} Suppose that \eqref{ab} holds. Then, there exist constants $r,C>0$ such that, for every $\Lambda_0=(\omega_0,z_0(-\cdot))\in H$ with $\|\Lambda_0\|^2_H\leq r,$ the problem \eqref{sis1} admits a unique global solution $\omega\in Y_{T},$ which satisfies $\|\omega\|_{Y_{T}}\leq C\|\Lambda_0\|_H.$
\end{theorem}
\begin{proof}
Given $\Lambda_0=(\omega_0,z_0(-\cdot))\in H$ such that $\|\Lambda_0\|^2_H\leq r,$ where $r$ is a positive constant to be chosen, define a mapping $\Upsilon:Y_{T}\rightarrow Y_{T}$ as follows: $\Upsilon(\omega)=y,$ where $y$ is the solution of \eqref{sis_coupled_nh} with source term $\varphi=\omega^p\partial_x \omega=\mathcal{F}(\omega),\ p\in\{1,2\}$. The mapping $\Upsilon$ is well defined thanks to item $(a)$ of Theorem \ref{teor2}; indeed, from \eqref{M} we obtain
$$\|\Upsilon(\omega)\|_{Y_{T}}^2\leq C\left(\|\Lambda_0\|_H^2+\|\mathcal{F}(\omega)\|^2_{L^1(0,T;L^2 (\Omega))}\right).$$
Note that $\Upsilon(\omega)-\Upsilon(v)$ is a solution of \eqref{sis_coupled_nh} with initial condition $\Lambda_0=(0,0)\in H$ and source term $\varphi=\mathcal{F}(\omega)-\mathcal{F} (v).$ It follows from \eqref{M} that
$$\|\Upsilon(\omega)-\Upsilon(v)\|_{Y_{T}}^2\leq C\|\mathcal{F}(\omega)-\mathcal{F}(v)\|_{L^1(0,T;L^2 (\Omega))}^2,$$
where the constant $C>0$ above does not depend on $\Lambda_0$ and $\varphi.$
Now, considering $p=1$, we have from \eqref{eq2.55} that
$$\|\Upsilon(\omega)\|_{Y_{T}}^2\leq C\left(r+k^2\|\omega\|_{Y_{T}}^4\right),\ \forall \omega\in Y_{T},$$
while from \eqref{eq2.56}, we have that
$$\|\Upsilon(\omega)-\Upsilon(v)\|_{Y_{T}}^2\leq Ck^2\left(\|\omega\|_{Y_{T}}+\|v\|_{Y_{T}}\right)^2\|\omega-v\|_{Y_{T}}^2,\ \forall \omega, v \in Y_{T}.$$
Thus, when $\|\omega\|_{Y_{T}}^2\leq R$ and $\|v\|_{Y_{T}}^2\leq R$, we get
\begin{equation}
\label{eq3.31}
\begin{array}{rcl}
\|\Upsilon(\omega)\|_{Y_{T}}^2&\leq&C\left(r+k^2R^2\right),\ \forall \omega\in \mathcal{B},\\
&&\\
\|\Upsilon(\omega)-\Upsilon(v)\|_{Y_{T}}^2&\leq&4Ck^2R \|\omega-v\|_{Y_{T}}^2,\ \forall \omega, v \in \mathcal{B}.
\end{array}
\end{equation}
Next, pick $R=\dfrac{1}{5k^2C}$ and $r=\dfrac{1}{25k^2C^2}$. For $\omega\in\mathcal{B}=\{\omega\in Y_{T}; \|\omega\|_{Y_{T}}^2\leq R\},$ we have that
\begin{equation}
\label{eq3.32}
\begin{array}{rcl}
\|\Upsilon(\omega)\|_{Y_{T}}^2&\leq& R,\ \forall \omega\in\mathcal{B},\\
&&\\
\|\Upsilon(\omega)-\Upsilon(v)\|_{Y_{T}}^2&\leq&\dfrac{4}{5}\|\omega-v\|_{Y_{T}}^2,\ \forall \omega, v \in\mathcal{B}.
\end{array}
\end{equation}
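For the reader's convenience, we add a direct check (a straightforward substitution, not part of the original argument) that the first bound in \eqref{eq3.32} follows from \eqref{eq3.31} with these constants:

```latex
% substituting R = 1/(5k^2 C) and r = 1/(25 k^2 C^2) into C(r + k^2 R^2):
C\left(r+k^{2}R^{2}\right)
  = C\left(\frac{1}{25k^{2}C^{2}}+\frac{k^{2}}{25k^{4}C^{2}}\right)
  = \frac{2}{25k^{2}C}
  \leq \frac{5}{25k^{2}C}
  = R.
```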
On the other hand, when $p=2$, we have from \eqref{eq2.58} that
$$\|\Upsilon(\omega)\|_{Y_{T}}^2\leq C\left(r+k^2\|\omega\|_{Y_{T}}^6\right),\ \forall \omega\in Y_{T}$$
and from \eqref{eq2.59}, we have that
$$\|\Upsilon(\omega)-\Upsilon(v)\|_{Y_{T}}^2\leq C\left(\dfrac{3k}{2}\right)^2\left(\|\omega\|_{Y_{T}}^2+\|v\|_{Y_{T}}^2\right)^2\|\omega-v\|_{Y_{T}}^2,\ \forall \omega, v \in Y_{T}.$$
Thus, when $\|\omega\|_{Y_{T}}^2\leq R$, we get
\begin{equation}
\label{eq3.33}
\begin{array}{rcl}
\|\Upsilon(\omega)\|_{Y_{T}}^2&\leq&C\left(r+k^2R^3\right),\ \forall \omega\in \mathcal{B},\\
&&\\
\|\Upsilon(\omega)-\Upsilon(v)\|_{Y_{T}}^2&\leq&9Ck^2R^2 \|\omega-v\|_{Y_{T}}^2,\ \forall \omega, v \in\mathcal{B}.
\end{array}
\end{equation}
Therefore, taking $R=\dfrac{1}{4k\sqrt{C}}$ and $r=\dfrac{1}{16kC^\frac{3}{2}}$, we obtain
\begin{equation}
\label{eq3.34}
\begin{array}{rcl}
\|\Upsilon(\omega)\|_{Y_{T}}^2&\leq& R,\ \forall \omega\in\mathcal{B},\\
&&\\
\|\Upsilon(\omega)-\Upsilon(v)\|_{Y_{T}}^2&\leq&\dfrac{9}{16}\|\omega-v\|_{Y_{T}}^2,\ \forall \omega, v \in\mathcal{B}.
\end{array}
\end{equation}
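Similarly, these choices can be verified by direct substitution (a computation we add here for completeness):

```latex
% substituting R = 1/(4k\sqrt{C}) and r = 1/(16 k C^{3/2}) into \eqref{eq3.33}:
C\left(r+k^{2}R^{3}\right)
  = C\left(\frac{1}{16kC^{3/2}}+\frac{k^{2}}{64k^{3}C^{3/2}}\right)
  = \frac{5}{64k\sqrt{C}}
  \leq \frac{1}{4k\sqrt{C}} = R,
\qquad
9Ck^{2}R^{2} = \frac{9Ck^{2}}{16k^{2}C} = \frac{9}{16} < 1.
```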
Consequently, due to \eqref{eq3.32} and \eqref{eq3.34}, the restriction of the map $\Upsilon$ to $\mathcal{B}$ is well-defined, and $\Upsilon$ is a contraction on the ball $\mathcal{B}.$ As an application of the Banach fixed point theorem, the map $\Upsilon$ possesses a unique fixed point $\omega,$ which turns out to be the unique solution to problem \eqref{sis1}.
Finally, the solution is global thanks to the dissipation property. Indeed, the energy $\mathcal{E}(t)$ (see \eqref{energia}) of the system \eqref{sis1} satisfies
$$
\mathcal{E}^{\prime}(t) \leq \frac{1}{2}\langle G X, X\rangle_{\mathbb{R}^2} \leq 0,
$$
where $G$ and $X$ are given in Proposition \ref{linear}.
\end{proof}
\section{Exponential stability of solutions}\label{Sec3}
In this section, we will prove the two main results of our work. The first stabilization result will be proved \textit{via} the Lyapunov approach. The second one is obtained by establishing an \textit{observability inequality}, which is proved via a compactness-uniqueness argument.
\subsection{Proof of Theorem \ref{Lyapunov}}
Initially, let us recall that the energy of the system \eqref{sis_coupled_nh}, for $\varphi=\omega^p\partial_x \omega$, with $p\in\{1,2\}$, is defined by
$$\displaystyle \mathcal{E}(t)=\|\Lambda(t)\|_H^2=\|\omega(t)\|^2+\|z(t)\|^2_{L^2(\mathcal{Q})},$$
where $\displaystyle \|z(t)\|^2_{L^2(\mathcal{Q})}=|\nu_2|\int_{\mathcal{M}}s\sigma(s)\int_{0}^{1}z^2(t,\phi,s)d\phi ds.$ Thus, using \eqref{sis_coupled_nh}, we get
\begin{equation}
\label{derivative energi}
\begin{array}{rcl}
\mathcal{E}^{\prime}(t)&=&2\langle\Lambda_t(t),\Lambda(t)\rangle_H=2\langle A\Lambda(t),\Lambda(t)\rangle_H+2\langle(\omega^p\partial_x \omega,0),\Lambda(t)\rangle_H\\
&=&\displaystyle \langle GX,X\rangle_{\mathbb R^2}+2\int_{\Omega} \omega^{p+1}\partial_x \omega dx\\
&=&\displaystyle \langle GX,X\rangle_{\mathbb R^2}+2\dfrac{\omega^{p+2}(\ell)}{p+2}-2\dfrac{\omega^{p+2}(0)}{p+2}=\langle GX,X\rangle_{\mathbb R^2}\leq 0,
\end{array}
\end{equation}
where $G$ and $X$ were given in \eqref{dissipatividade_de_A}. Let us now define a Lyapunov function
$$\Phi(t)=\mathcal{E}(t)+\mu_1E_1(t)+\mu_2 E_2(t), \ t\geq 0,$$
where $E_1(t)$ and $E_2(t)$ are given by
\begin{equation*}
E_1(t)=\int_{\Omega} x\omega^2(x,t)dx \quad \text{and} \quad E_2(t)=|\nu_2|\int_{\Omega_0} \int_{\mathcal{M}}se^{-\delta \phi s}\sigma(s) z^2(t,\phi, s)dsd\phi,
\end{equation*}
$\mu_1$ and $\mu_2$ are positive constants to be determined and $\delta>0$ is an arbitrary constant. Note that
$$ \mu_1 E_1(t)=\mu_1\int_{\Omega} x\omega^2(x,t)dx\leq \ell\mu_1\int_{\Omega} \omega^2(x,t)dx= \ell\mu_1\|\omega\|^2
$$
and
$$\mu_2 E_2(t)\leq \mu_2|\nu_2|\int_{\Omega_0} \int_{\mathcal{M}}s\sigma(s) z^2(t,\phi, s)dsd\phi=\mu_2\| z(t)\|^2_{L^2(\mathcal{Q})}.$$
Consequently,
$$\mu_1E_1(t)+\mu_2E_2(t)\leq \max\{\ell\mu_1,\mu_2\}\mathcal{E}(t)$$
and, therefore
\begin{equation}
\label{eq3.8}
\mathcal{E}(t)\leq \Phi(t)\leq \left(1+\max\{\ell\mu_1,\mu_2\}\right)\mathcal{E}(t).
\end{equation}
Differentiating $E_1(t)$ and $E_2(t)$ using integration by parts and the boundary conditions of \eqref{sis1} and \eqref{sis2.1_coupled}, we get
\begin{equation}
\label{eq3.9}
\begin{array}{rcl}
E_1'(t)&=&\displaystyle \alpha\| \omega\|^2-3\beta \|\partial_x \omega\|^2-5\|\partial_x^2 \omega\|^2+\dfrac{2}{p+2}\int_{\Omega} \omega^{p+2}dx\\
&&\displaystyle+\ell\left[\nu_1^2\left(\partial_x^2\omega(t,0)\right)^2+2\nu_1\nu_2\left(\partial_x^2\omega(t,0)\right)\left(\int_{\mathcal{M}}\sigma(s)z(t,1,s)ds\right)\right.\\
&&\displaystyle\left.+\nu_2^2\left(\int_{\mathcal{M}}\sigma(s)z(t,1,s)ds\right)^2\right]
\end{array}
\end{equation}
and
\begin{equation}
\label{eq3.10}
\begin{array}{rcl}
E_2'(t)&=&\displaystyle - |\nu_2|\int_{\mathcal{M}}e^{-\delta s}\sigma(s)\left( z(t,1,s)\right)^2 ds+|\nu_2|\left(\int_{\mathcal{M}}\sigma(s) ds\right)\left( \partial_x^2\omega(t,0)\right)^2\\
&&\displaystyle -|\nu_2|\int_{\mathcal{M}}\int_{\Omega_0}\delta s e^{-\delta\phi s}\sigma(s) z^2 d\phi ds.
\end{array}
\end{equation}
Thus, for $\Phi(t)=\mathcal{E}(t)+\mu_1E_1(t)+\mu_2E_2(t)$, we find that
\begin{equation*}
\begin{split}
\Phi'(t)+&2\mu \Phi(t)
= \langle GX,X\rangle_{\mathbb R^2}+\alpha\mu_1\| \omega\|^2-3\beta\mu_1 \|\partial_x \omega\|^2-5\mu_1\|\partial_x^2 \omega\|^2+\dfrac{2\mu_1}{p+2}\int_{\Omega} \omega^{p+2}dx\\
&+\ell\mu_1\left[\nu_1^2\left(\partial_x^2\omega(t,0)\right)^2+2\nu_1\nu_2\left(\partial_x^2\omega(t,0)\right)\left(\int_{\mathcal{M}}\sigma(s)z(t,1,s)ds\right)\right.\\
&\left.+\nu_2^2\left(\int_{\mathcal{M}}\sigma(s)z(t,1,s)ds\right)^2\right]\\
& - \mu_2|\nu_2|\int_{\mathcal{M}}e^{-\delta s}\sigma(s)\left( z(t,1,s)\right)^2 ds+\mu_2|\nu_2|\left(\int_{\mathcal{M}}\sigma(s) ds\right)\left( \partial_x^2\omega(t,0)\right)^2\\
& -\mu_2|\nu_2|\int_{\mathcal{M}}\int_{\Omega_0}\delta s e^{-\delta\phi s}\sigma(s) z^2 d\phi ds +2\mu\|\omega(t)\|^2+2\mu \|z(t)\|^2_{L^2(\mathcal{Q})}+2\mu\mu_1\int_{\Omega} x\omega^2(x,t)dx\\
& +2\mu\mu_2 |\nu_2|\int_{\Omega_0} \int_{\mathcal{M}}se^{-\delta \phi s}\sigma(s) z^2(t,\phi, s)dsd\phi.
\end{split}
\end{equation*}
Next, let
$$G_{\mu_1}=\mu_1\ell\left(\begin{array}{cc}
\nu_1^2&\nu_1\nu_2\\
\nu_1\nu_2&\nu_2^2\\
\end{array}\right) , \ G_{\mu_2}=\mu_2\left(\begin{array}{cc}
|\nu_2| \displaystyle \int_{\mathcal{M}}\sigma(s)ds&0\\
0&0\\
\end{array}\right)
$$
and
$$X=\left(\begin{array}{c}
\partial_x^2 \omega(t,0)\\
\displaystyle \int_{\mathcal{M}}\sigma(s)z(t,1,s)ds\\
\end{array}\right).$$ Thus, we have that
\begin{equation*}
\begin{split}
\Phi'(t)+&2\mu \Phi(t)
= \langle (G+G_{\mu_1}+G_{\mu_2})X,X\rangle_{\mathbb R^2}+(\alpha\mu_1+2\mu)\| \omega\|^2-3\beta\mu_1 \|\partial_x \omega\|^2-5\mu_1\|\partial_x^2 \omega\|^2\\
&+\dfrac{2\mu_1}{p+2}\int_{\Omega} \omega^{p+2}dx- \mu_2|\nu_2|\int_{\mathcal{M}}e^{-\delta s}\sigma(s)\left( z(t,1,s)\right)^2 ds\\
& -\mu_2|\nu_2|\int_{\mathcal{M}}\int_{\Omega_0}\delta s e^{-\delta\phi s}\sigma(s) z^2 d\phi ds +2\mu \|z(t)\|^2_{L^2(\mathcal{Q})}+2\mu\mu_1\int_{\Omega} x\omega^2(x,t)dx\\
& +2\mu\mu_2 |\nu_2|\int_{\Omega_0} \int_{\mathcal{M}}se^{-\delta \phi s}\sigma(s) z^2(t,\phi, s)dsd\phi\\
\leq& \langle (G+G_{\mu_1}+G_{\mu_2})X,X\rangle_{\mathbb R^2}+\left(\alpha\mu_1+2\mu(1+\mu_1\ell)\right)\| \omega\|^2-3\beta\mu_1 \|\partial_x \omega\|^2-5\mu_1\|\partial_x^2 \omega\|^2\\
&+\dfrac{2\mu_1}{p+2}\int_{\Omega} \omega^{p+2}dx- \mu_2|\nu_2|e^{-\delta \tau_2}\int_{\mathcal{M}}\sigma(s)\left( z(t,1,s)\right)^2 ds\\
& -\mu_2|\nu_2|e^{-\delta \tau_2}\delta\int_{\mathcal{M}}\int_{\Omega_0} s\sigma(s) z^2 d\phi ds\\
& +2\mu \|z(t)\|^2_{L^2(\mathcal{Q})}+2\mu\mu_2 |\nu_2|\int_{\Omega_0} \int_{\mathcal{M}}s\sigma(s) z^2(t,\phi, s)dsd\phi.
\end{split}
\end{equation*}
Now, observe that $$T(\mu_1,\mu_2):=G+G_{\mu_1}+G_{\mu_2}=G+\mu_1\ell\left(\begin{array}{cc}
\nu_1^2&\nu_1\nu_2\\
\nu_1\nu_2&\nu_2^2\\
\end{array}\right)+\mu_2\left(\begin{array}{cc}
|\nu_2| \int_{\mathcal{M}} \sigma(s)ds&0\\
0&0\\
\end{array}\right)$$
defines a continuous map from $\mathbb R^2$ into the space of square matrices $M_{2\times 2}(\mathbb R)$. Since the determinant and the trace are continuous functions from $M_{2\times 2}(\mathbb R)$ to $\mathbb R,$ the functions $h_1(\mu_1,\mu_2)=\det T(\mu_1,\mu_2)$ and $h_2(\mu_1,\mu_2)=\mbox{tr}\, T(\mu_1,\mu_2)$ are continuous on $\mathbb R^2.$ Therefore, since $h_1(0,0)=\det G>0$ and $h_2(0,0)=\mbox{tr}\, G<0,$ it follows that $h_1(\mu_1,\mu_2)>0$ and $h_2(\mu_1,\mu_2)<0$ for $\mu_1,\mu_2$ small enough. Thereby, $G+G_{\mu_1}+ G_{\mu_2}$ is negative definite for $\mu_1,\mu_2$ small enough.
Moreover, using the Poincar\'e inequality\footnote{$\|\omega\|^2\leq\dfrac{\ell^2}{\pi^2}\|\partial_x \omega\|^2$, for $\omega\in H_0^2(\Omega)$.} we find
\begin{equation}
\label{eq3.16}
\begin{split}
\Phi'(t)+2\mu \Phi(t)
\leq& \left[\dfrac{\ell^2}{\pi^2}\left(\alpha\mu_1+2\mu(1+\mu_1\ell)\right)-3\beta\mu_1\right]\|\partial_x \omega\|^2-5\mu_1\|\partial_x^2 \omega\|^2\\
&+\dfrac{2\mu_1}{p+2}\int_{\Omega} \omega^{p+2}dx- \mu_2|\nu_2|e^{-\delta \tau_2}\int_{\mathcal{M}}\sigma(s)\left( z(t,1,s)\right)^2 ds\\
& +\left(2\mu(1+ \mu_2)-\mu_2|\nu_2|e^{-\delta \tau_2}\delta\right)\|z(t)\|^2_{L^2(\mathcal{Q})}.
\end{split}
\end{equation}
Now, we are going to estimate the integral $$ \dfrac{2\mu_1}{p+2}\int_{\Omega} \omega^{p+2}dx.$$
For this, applying the Cauchy-Schwarz inequality and using the fact that the energy $\mathcal{E}(t)$ of the system is non-increasing, together with the embedding $H_0^1(\Omega)\hookrightarrow L^\infty(\Omega)$, we have, for $\|(\omega_0,z_0)\|_H<r$, that
\begin{equation}
\label{eq3.17}
\begin{array}{rcl}
\displaystyle \dfrac{2\mu_1}{p+2}\int_{\Omega} \omega^{p+2}dx&\leq&\displaystyle \dfrac{2\mu_1}{p+2}\|\omega\|^2_{L^\infty(\Omega)}\int_{\Omega} \omega^pdx\leq \dfrac{2\ell\mu_1}{p+2}\|\partial_x \omega\|^2\int_{\Omega} \omega^pdx\\
&\leq&\displaystyle \dfrac{2\ell\mu_1}{p+2}\|\partial_x \omega\|^2 \ell^{1-\frac{p}{2}}\|\omega\|^p\leq \dfrac{2\ell^{2-\frac{p}{2}}\mu_1}{p+2}\|\partial_x \omega\|^2\|(\omega_0,z_0)\|_H^p\\
&\leq&\displaystyle \dfrac{2\ell^{2-\frac{p}{2}}\mu_1r^p}{p+2}\|\partial_x \omega\|^2.
\end{array}
\end{equation}
Combining \eqref{eq3.17} and \eqref{eq3.16} yields
\begin{equation}
\label{eq3.18}
\begin{split}
\Phi'(t)+2\mu \Phi(t)\leq & \left[\dfrac{\ell^2}{\pi^2}\left(\alpha\mu_1+2\mu(1+\mu_1\ell)\right)-3\beta\mu_1\right]\|\partial_x \omega\|^2-5\mu_1\|\partial_x^2 \omega\|^2\\
&+\dfrac{2\ell^{2-\frac{p}{2}}\mu_1r^p}{p+2}\|\partial_x \omega\|^2- \mu_2|\nu_2|e^{-\delta \tau_2}\int_{\mathcal{M}}\sigma(s)\left( z(t,1,s)\right)^2 ds\\
& +\left(2\mu(1+ \mu_2)-\mu_2|\nu_2|e^{-\delta \tau_2}\delta\right)\|z(t)\|^2_{L^2(\mathcal{Q})}\\
\leq& \left[\dfrac{\ell^2}{\pi^2}\left(\alpha\mu_1+2\mu(1+\mu_1\ell)\right)-3\beta\mu_1+\dfrac{2\ell^{2-\frac{p}{2}}\mu_1r^p}{p+2}\right]\|\partial_x \omega\|^2-5\mu_1\|\partial_x^2 \omega\|^2\\
& +\left(2\mu(1+ \mu_2)-\mu_2|\nu_2|e^{-\delta \tau_2}\delta\right)\|z(t)\|^2_{L^2(\mathcal{Q})}.
\end{split}
\end{equation}
Note that $\Phi'(t)+2\mu \Phi(t)<0$ when $$2\mu(1+ \mu_2)-\mu_2|\nu_2|e^{-\delta \tau_2}\delta<0$$ and $$\dfrac{\ell^2}{\pi^2}\left(\alpha\mu_1+2\mu(1+\mu_1\ell)\right)-3\beta\mu_1
+\dfrac{2\ell^{2-\frac{p}{2}}\mu_1r^p}{p+2}<0,$$
which holds for $\mu>0$ satisfying, respectively
$$\mu<\dfrac{\mu_2 |\nu_2|e^{-\delta\tau_2}\delta}{2(1+\mu_2)}$$
and
$$0<\mu<\dfrac{\mu_1}{2\ell^2(1+\ell\mu_1)(p+2)}\left[(p+2)(3\pi^2\beta-\alpha \ell^2)-2\pi^2\ell^{2-\frac{p}{2}}r^p\right],$$
where we need to take $r>0$ satisfying
$$(p+2)(3\pi^2\beta-\alpha \ell^2)-2\pi^2\ell^{2-\frac{p}{2}}r^p>0$$
or, equivalently, $r>0$ must satisfy
$$r<\left(\dfrac{(p+2)(3\pi^2\beta-\alpha \ell^2)}{2\pi^2\ell^{2-\frac{p}{2}}}\right)^\frac{1}{p}.$$
Thus, for $\mu_1,\mu_2$ small enough and an arbitrary $\delta>0$, taking $$r<\left(\dfrac{(p+2)(3\pi^2\beta-\alpha \ell^2)}{2\pi^2\ell^{2-\frac{p}{2}}}\right)^\frac{1}{p}$$ and $$\mu<\min\left\{\dfrac{\mu_2 |\nu_2|e^{-\delta\tau_2}\delta}{2(1+\mu_2)},
\dfrac{\mu_1}{2\ell^2(1+\ell\mu_1)(p+2)}\left[(p+2)(3\pi^2\beta-\alpha \ell^2)-2\pi^2\ell^{2
-\frac{p}{2}}r^p\right]\right\},$$
we get that
$$\Phi'(t)+2\mu \Phi(t)<0, \ \text{which implies} \ \Phi(t)\leq \Phi(0)e^{-2\mu t}.$$
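The last step is the standard integrating-factor argument, which we spell out for completeness:

```latex
% multiplying the differential inequality by e^{2\mu t}:
\frac{d}{dt}\left(e^{2\mu t}\,\Phi(t)\right)
  = e^{2\mu t}\left(\Phi'(t)+2\mu\,\Phi(t)\right) < 0,
\qquad t\geq 0,
```

so that $e^{2\mu t}\Phi(t)\leq \Phi(0)$ for all $t\geq 0$.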
Lastly, from \eqref{eq3.8}, we get
$$\mathcal{E}(t)\leq \Phi(t)\leq \Phi(0)e^{-2\mu t}\leq (1+\max\{\ell\mu_1,\mu_2\})\mathcal{E}(0)e^{-2\mu t}\leq \kappa \mathcal{E}(0)e^{-2\mu t},$$
for $\kappa\geq 1+\max\{\ell\mu_1,\mu_2\}$, proving the theorem. \qed
\subsection{Proof of Theorem \ref{comp}}
First, we deal with the linear system \eqref{sis2.1_coupled} and claim that the following observability inequality holds
\begin{equation}\label{OI}
\|\omega_{0}\|^2+\| z_{0}\|_{L^2 (\mathcal{Q})}^2 \leq
C \int_{0}^{T}\left((\partial^2_x \omega(t,0))^{2}+\int_{\mathcal{M}} s \sigma(s) z^2 (t,1,s) \, ds \right) d t,
\end{equation}
where $\left(\omega_{0}, z_{0}\right) \in H$ and $(\omega, z)(t)=e^{t A} \left(\omega_{0}, z_{0}\right)$ is the unique solution of \eqref{sis2.1_coupled}. This leads to the exponential stability in $H$ of the solution $(\omega,z)$ to \eqref{sis2.1_coupled}. The proof of this inequality can be obtained by a contradiction argument. Indeed, if \eqref{OI} is not true, then there exists a sequence $\{\left(\omega_{0}^{n}, z_{0}^{n}\right)\}_{n} \subset H$ such that
\begin{equation}\label{unit}
\|\omega_{0}^{n}\|^{2}+\|z_{0}^{n}\|^{2}_{L^2 (\mathcal{Q})}=1
\end{equation}
and
\begin{equation}\label{n=0}
\left\| \partial^2_x \omega^{n}( \cdot, 0)\right\|_{L^{2}(0, T)}^{2}+\int_{0}^{T}\int_{\mathcal{M}} s \sigma(s) \left(z^{n}\right)^2 (t,1,s) \, ds\, dt \rightarrow 0 \text { as } n \rightarrow+\infty,
\end{equation}
where $\left(\omega^{n}, z^{n}\right)(t)=e^{tA} \left(\omega_{0}^{n}, z_{0}^{n}\right)$.
Then, arguing as in \cite{luan}, we can deduce from Proposition \ref{linear} that $\{\omega^{n}\}_{n}$ is convergent in $L^{2}\left(0, T, L^{2}(\Omegaega)\right)$. Moreover, $\{\omega_{0}^{n}\}_{n}$ is a Cauchy sequence in $L^{2}(\Omegaega)$, while $\{z_{0}^{n}\}_n$ is a Cauchy sequence in $L^{2}(\mathcal{Q})$. Thereafter, let $\left(\omega_{0}, z_{0}\right)=\lim _{n \rightarrow \infty}\left(\omega_{0}^{n}, z_{0}^{n}\right) \;\; \mbox{in} \; H$ and hence
$\|\omega_{0}\|^{2}+\|z_{0}\|^{2}_{L^2 (\mathcal{Q})}=1,$ by virtue of \eqref{unit}. Next, take $(\omega,z)=e^{\cdot A} \left(\omega_{0}, z_{0}\right),$ and assume, for the sake of simplicity and without loss of generality, that $\alpha=\beta=1$. This, together with Proposition \ref{linear} and \eqref{n=0}, implies that $\omega$ is a solution of the system
$$
\begin{cases}
\partial_t \omega + \partial_x \omega+\partial^3_x \omega-\partial^5_x \omega=0, & x \in \Omega, t>0, \\
\omega(0, t)=\omega(\ell, t)=\partial_x \omega(\ell, t)=\partial_x \omega(0, t)=\partial^2_x \omega(\ell, t)=\partial^2_x \omega(0, t)=0, & t>0, \\
\omega(x, 0)=\omega_{0}(x), & x \in \Omega,
\end{cases}
$$
with
$\left\|\omega_{0}\right\|_{L^{2}(\Omegaega)}=1 .$
The latter contradicts the result obtained in \cite[Lemma 4.2]{luan}, which states that the above system has only the trivial solution (see also Lemma \ref{lem2}). This proves the observability inequality \eqref{OI}.
Now, let us go back to the original system \eqref{sis1} and use the same arguments as in \cite{ro}. First, we restrict ourselves to the case $p=1$, as the case $p=2$ is similar. Next, consider an initial condition $\left\|\left(\omega_{0}, z_{0}\right)\right\|_H \leq \varrho,$
where $\varrho$ will be fixed later. Then, the solution $\omega$ of \eqref{sis1} can be written as $\omega=\omega_1+\omega_2$, where $\omega_1$ is the solution of \eqref{sis2.1_coupled} with the initial data $\left(\omega_{0}, z_{0}\right)\in H$ and $\omega_2$ is the solution of \eqref{sis_coupled_nh} with null data and right-hand side $\varphi=\omega \partial_x \omega \in L^1(0,T;L^2(\Omega))$, as in Theorem \ref{teor2}. In other words, $\omega_1$ is the solution of
$$\begin{cases}\partial_t \omega_{1}-\partial^5_x \omega_1+ \partial^3_x \omega_{1}+ \partial_x \omega_{1}=0, & x \in \Omega, t>0, \\
\omega_1(t,0)=\omega_1(t,\ell)=\partial_x \omega_1(t,0)=\partial_x \omega_1(t,\ell)=0, & t>0, \\
\partial^2_x \omega_{1}(t,\ell)=\nu_1 \partial^2_x \omega_{1}(t,0)+\nu_2 \displaystyle \int_{t-\tau_2}^{t-\tau_1} \sigma(t-s) \partial_x^2 \omega_1 (s,0) \, ds, & t>0, \\
\partial^2_x \omega_{1}(t,0)=z_{0}(t), & t \in(-\tau_2, 0), \\
\omega_1(0,x)=\omega_{0}(x), & x \in \Omega,\end{cases}$$
and $\omega_{2}$ is the solution of
$$\begin{cases}
\partial_t \omega_{2}-\partial^5_x \omega_2+\partial^3_x \omega_{2}+ \partial_x \omega_{2}=-\omega \partial_x \omega, & x \in \Omega, t>0, \\
\omega_{2}(t,0)=\omega_{2}(t,\ell)=\partial_x \omega_2(t,0)=\partial_x \omega_2(t,\ell)=0, & t>0, \\
\partial^2_x \omega_{2}(t,\ell)=\nu_1 \partial^2_x \omega_{2}(t,0)+\nu_2 \displaystyle \int_{t-\tau_2}^{t-\tau_1} \sigma(t-s) \partial_x^2 \omega_2 (s,0) \, ds, & t>0, \\
\partial^2_x \omega_{2}(t,0)=0, & t \in(-\tau_2, 0), \\
\omega_{2}(0,x)=0, & x \in\Omega.
\end{cases}$$
In light of the exponential stability of the linear system \eqref{sis2.1_coupled} (see the beginning of this subsection) and Theorem \ref{teor2}, we have
\begin{equation}\label{31}
\|(\omega(T), z(T))\|_{H}
\leq \chi \left\|\left(\omega_{0}, z_{0}\right)\right\|_{H}+C\|\omega\|_{L^{2}\left(0, T, H^{2}(\Omegaega)\right)}^{2},
\end{equation}
in which $\chi \in(0,1)$. Subsequently, multiplying \eqref{sis1}$_1$ by $x\omega$ and performing the same computations as for \eqref{eq3.9}, we get
\begin{equation}\label{q1}
\begin{split}
&\int_{\Omega} x \omega^2(T,x) d x+3 \int_{0}^{T}
\int_{\Omega}\left( \partial_x \omega(t,x)\right)^{2} d x d t+5\int_{0}^{T} \int_{\Omega}\left( \partial^2_x\omega(t,x)\right)^{2} d x dt = \\&
\int_{0}^{T} \int_{\Omega} \omega^2(t,x) d x dt+\ell \int_0^T \left(\nu_1\partial_x^{2}\omega(t,0) +\nu_2 \int_{\mathcal{M}} \sigma(s) z(t,1,s) \, ds\right)^2 dt+ \int_{\Omega} x \omega_{0}^2(x) d x \\
&+\frac{2}{3} \int_{0}^{T} \int_{\Omega} \omega^3 (t,x) dxdt.
\end{split}
\end{equation}
On one hand, multiplying the first equation of \eqref{sis1} by $\omega$ and arguing as done for \eqref{est1} (see \eqref{eq2.23}), we get
\begin{equation}\label{q2}
\displaystyle \int_0^T \left(\nu_1\partial_x^{2}\omega(t,0) +\nu_2 \int_{\mathcal{M}} \sigma(s) z(t,1,s) \, ds\right)^2 dt \leq C \| (\omega_0, z_0 )\|_H^2.
\end{equation}
On the other hand, using Gagliardo–Nirenberg and Cauchy-Schwarz inequalities, together with the dissipativity of the system \eqref{sis1}, we deduce that
\begin{equation*}
\int_{0}^{T} \int_{\Omega} \omega^{3} d x d t \leq C(T)\left\|(\omega_{0},z_0)\right\|^{2}_{H} \|\omega\|_{L^{2}\left(0, T ; H^{2}(\Omega)\right)}.
\end{equation*}
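The Young step invoked next can be made explicit (our added computation, using $ab\leq \frac{a^{2}}{2}+\frac{b^{2}}{2}$ with $a=C(T)\|(\omega_0,z_0)\|_H^{2}$ and $b=\|\omega\|_{L^{2}(0,T;H^{2}(\Omega))}$):

```latex
\int_{0}^{T}\int_{\Omega} \omega^{3}\,dx\,dt
  \leq \frac{C(T)^{2}}{2}\left\|(\omega_{0},z_{0})\right\|_{H}^{4}
  + \frac{1}{2}\,\|\omega\|_{L^{2}\left(0,T;H^{2}(\Omega)\right)}^{2}.
```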
Applying Young's inequality to the last estimate and combining the obtained result with \eqref{q1}-\eqref{q2}, we reach
\begin{equation}\label{33}
\|\omega\|_{L^{2}\left(0, T ; H^{2}(\Omega)\right)}^{2}\leq C\left\|\left(\omega_{0}, z_{0}\right)\right\|_{H}^{2} \left(1+\left\|\left(\omega_{0}, z_{0}\right)\right\|_{H}^{2}\right).
\end{equation}
Finally, recalling that $\left\|\left(\omega_{0}, z_{0}\right)\right\|_H \leq \varrho,$ and inserting \eqref{33} into \eqref{31}, we get
$$
\|(\omega(T), z(T))\|_{H} \leq\left\|\left(\omega_{0}, z_{0}\right)\right\|_{H}\left(\chi+C \varrho+C \varrho^{3}\right).
$$
Given $\eta>0$ sufficiently small so that $\chi+\eta<1$, one can choose $\varrho$ small such that $\varrho+\varrho^{3}<\frac{\eta}{C}$, to obtain
$$
\|(\omega(T), z(T))\|_{H} \leq(\chi+\eta)\left\|\left(\omega_{0}, z_{0}\right)\right\|_{H}.
$$
Lastly, using the semigroup property and the fact that $\chi+\eta<1$, we conclude the exponential stability result of Theorem \ref{comp}. \qed
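For the reader, the concluding semigroup step can be sketched as follows (a standard iteration, added here for completeness): applying the last estimate on successive intervals of length $T$,

```latex
\|(\omega(nT), z(nT))\|_{H}
  \leq (\chi+\eta)^{n}\left\|\left(\omega_{0}, z_{0}\right)\right\|_{H}
  = e^{-\gamma n T}\left\|\left(\omega_{0}, z_{0}\right)\right\|_{H},
\qquad
\gamma := \frac{1}{T}\ln\!\left(\frac{1}{\chi+\eta}\right) > 0,
```

which, combined with the boundedness of the solution map on bounded time intervals, yields the exponential decay of $\|(\omega(t),z(t))\|_{H}$.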
\section{Conclusion} This article presented a study of the stability of the Kawahara equation with a boundary-damping control of finite memory type. It is shown that such a control suffices to obtain the desired property, namely, the exponential decay of the system's energy. The proof is based on two different approaches. The first one invokes a Lyapunov functional and provides an estimate of the energy decay. In turn, the second one uses a compactness-uniqueness argument that reduces the issue to a spectral problem.
Finally, we would like to point out that our well-posedness result (see Theorem \ref{theorem3}) is shown for the nonlinearity ${\omega^p} \partial_x \omega$, where $p \in \{1,2\}$. Nevertheless, we believe that, using an interpolation argument, this finding should remain valid for $p \in (1,2)$. The same remark applies to the second stability result (see Theorem \ref{comp}). It is also noteworthy that our first stability result (see Theorem \ref{Lyapunov}) is established for the more general nonlinearity ${\omega^p} \partial_x \omega$, $p \in [1,2]$.
\subsection*{Acknowledgments}
This work is part of the Ph.D. thesis of de Jesus at the Department of Mathematics of the Federal University of Pernambuco and was done while the first author was visiting Virginia Tech. The first author thanks the host institution for its warm hospitality.
\subsection*{Authors' contributions} Capistrano-Filho, Chentouf, and de Jesus contributed equally to conceptualization; formal analysis; investigation; writing--original draft; and writing--review and editing.
\subsection*{Conflict of interest statement} This work does not have any conflicts of interest.
\subsection*{Data availability statement} Data sharing is not applicable to this article as no new data were created or analyzed in this study.
\begin{thebibliography}{90}
\bibitem{ASL} B. Alvarez-Samaniego and D. Lannes, \textit{Large time existence for 3D water-waves and asymptotics}, Invent. Math., 171, 485--541 (2008).
\bibitem{afg} G. Amendola, M. Fabrizio, J. M. Golden, {\em Thermodynamics of Materials with Memory: Theory and Applications}, Springer, New York (2012).
\bibitem{ara} F. D. Araruna, R. A. Capistrano-Filho, and G. G. Doronin, \textit{Energy decay for the modified Kawahara equation posed in a bounded domain}, J. Math. Anal. Appl., 385, 743--756 (2012).
\bibitem {Berloff} N. Berloff and L. Howard, \textit{Solitary and periodic solutions of nonlinear nonintegrable equations}, Studies in Applied Mathematics, 99:1, 1--24 (1997).
\bibitem {Biswas} A. Biswas, \textit{Solitary wave solution for the generalized Kawahara equation}, Applied Mathematical Letters, 22, 208--210 (2009).
\bibitem{BLS}J. L. Bona, D. Lannes, and J.-C. Saut, \textit{Asymptotic models for internal waves}, J. Math. Pures Appl., 9:89, 538--566 (2008).
\bibitem {Boyd}J. P. Boyd, \textit{Weakly non-local solitons for capillary-gravity waves: fifth-degree Korteweg-de Vries equation}, Phys. D, 48, 129--146 (1991).
\bibitem{luan} R. A. Capistrano--Filho, B. Chentouf, and L. S. de Sousa, \textit{Two stability results for the Kawahara equation with a time-delayed boundary control}, Z. Angew. Math. Phys., 74:16 (2023).
\bibitem{cajesus} R. A. Capistrano--Filho and I. M. de Jesus, \textit{Massera's theorems for a higher order dispersive system,} Acta Applicandae Mathematicae, 185:5, 1--25 (2023).
\bibitem{CaVi} R. A. Capistrano--Filho and V. H. Gonzalez Martinez, \textit{Stabilization results for delayed fifth-order KdV-type equation in a bounded domain}, Mathematical Control and Related Fields. doi: 10.3934/mcrf.2023004.
\bibitem{CaSo} R. A. Capistrano--Filho and L. S. de Sousa, \textit{Control results with overdetermination condition for higher order dispersive system}, Journal of Mathematical Analysis and Applications, 506, 1--22 (2022).
\bibitem{bc1} B. Chentouf, \textit{Qualitative analysis of the dynamic for the nonlinear Korteweg-de Vries equation with a boundary memory}, Qualitative Theory of Dynamical Systems, 20:36, 1--29 (2021).
\bibitem{bc2} B. Chentouf, \textit{On the exponential stability of a nonlinear Kuramoto-Sivashinsky-Korteweg-de Vries equation with finite memory}, Mediterranean Journal of Mathematics, vol. 19, no. 11, 22 pages (2022).
\bibitem{boumediene} B. Chentouf, \textit{Well-posedness and exponential stability of the Kawahara equation with a time-delayed localized damping}, Math Meth Appl Sci., 45(16), 10312--10330 (2022).
\bibitem{Cui}S. B. Cui, D. G. Deng, and S. P. Tao, \textit{Global existence of solutions for the Cauchy problem of the Kawahara equation with $\mathit{L}^{\mathit{2}}$ initial data}, Acta Math. Sin. (Engl. Ser.), 22, 1457--1466 (2006).
\bibitem{da} C. M. Dafermos, \textit{Asymptotic stability in viscoelasticity}, Arch. Rational Mech. Anal., 37, 297--308 (1970).
\bibitem{Hasimoto1970} H. Hasimoto, \textit{Water waves}, Kagaku, 40, 401--408 [Japanese] (1970).
\bibitem {Hunter}J. K. Hunter and J. Scheurle, \textit{Existence of perturbed solitary wave solutions to a model equation for water waves}, Physica D, 32, 253--268 (1988).
\bibitem {Iguchi}T. Iguchi, \textit{A long wave approximation for capillary-gravity waves and the Kawahara Equations}, Academia Sinica (New Series), 2:2, 179--220 (2007).
\bibitem {Jin}L. Jin, \emph{Application of variational iteration method and homotopy perturbation method to the modified Kawahara equation}, Mathematical and Computer Modelling, 49, 573--578 (2009).
\bibitem {Kaya}D. Kaya and K. Al-Khaled, \textit{A numerical comparison of a Kawahara equation}, Phys. Lett. A, 363 (5-6), 433--439 (2007).
\bibitem {Kawahara}T. Kawahara, \textit{Oscillatory solitary waves in dispersive media}, J. Phys. Soc. Japan, 33 , 260--264 (1972).
\bibitem {Kakutani}T. Kakutani, \textit{Axially symmetric stagnation-point flow of an electrically conducting fluid under transverse magnetic field}, J. Phys. Soc. Japan, 15, 688--695 (1960).
\bibitem{Lannes} D. Lannes, \textit{The water waves problem. Mathematical analysis and asymptotics}. Mathematical Surveys and Monographs, 188. American Mathematical Society, Providence, RI, xx+321 pp (2013).
\bibitem{np} S. Nicaise and C. Pignotti, \textit{Stability and instability results of the wave equation with a delay term in the boundary or internal feedbacks}, SIAM J. Control Optim., 45, 1561--1585 (2006).
\bibitem{nipi} S. Nicaise and C. Pignotti, \textit{Stabilization of the wave equation with boundary or internal distributed memory}, Diff. Integral Equations, 21, 935--958 (2008).
\bibitem {Polat}N. Polat, D. Kaya, and H.I. Tutalar, \textit{An analytic and numerical solution to a modified Kawahara equation and a convergence analysis of the method}, Appl. Math. Comput., 179, 466--472 (2006).
\bibitem {Pomeau}Y. Pomeau, A. Ramani, and B. Grammaticos, \textit{Structural stability of the Korteweg-de Vries solitons under a singular perturbation}, Physica D, 31, 127--134 (1988).
\bibitem{ro} L. Rosier, \textit{Exact boundary controllability for the Korteweg-de Vries equation on a bounded domain}, ESAIM Control Optim. Calc. Var., 2, 33--55 (1997).
\bibitem {vasi1} C. F. Vasconcellos and P. N. Silva, \textit{Stabilization of the linear Kawahara equation with localized damping}, Asymptotic Analysis, 58, 229--252 (2008).
\bibitem {vasi2} C. F. Vasconcellos and P. N. Silva, \textit{Stabilization of the linear Kawahara equation with localized damping}, Asymptotic Analysis, 66, 119--124 (2010).
\bibitem {xyl} G. Q. Xu, S. P. Yung, and L. K. Li, \textit{Stabilization of wave systems with input delay in the boundary control}, ESAIM Control Optim. Calc. Var., 12, 770--785 (2006).
\bibitem {Yusufoglu} E. Yusufoglu, A. Bekir, and M. Alp, \textit{Periodic and solitary wave solutions of Kawahara and modified Kawahara equations by using Sine-Cosine method}, Chaos, Solitons and Fractals, 37,
1193--1197 (2008).
\end{thebibliography}
\end{document}
\begin{document}
\theoremstyle{plain}
\newtheorem{theorem}{Theorem}
\newtheorem{Cor}{Corollary}
\newtheorem{Con}{Conjecture}
\newtheorem{Main}{Main Theorem}
\newtheorem{lemma}{Lemma}
\newtheorem{proposition}{Proposition}
\newenvironment{proof}{{\bf Proof:} }{
$\Box$
\mbox{}}
\theoremstyle{definition}
\newtheorem{definition}{Definition}[section]
\newtheorem{Note}{Note}
\theoremstyle{example}
\newtheorem{example}{Example}[section]
\theoremstyle{remark}
\newtheorem{remark}{Remark}[section]
\theoremstyle{remark}
\newtheorem{notation}{Notation}
\renewcommand{\thenotation}{}
\errorcontextlines=0
\numberwithin{equation}{section}
\newcommand{\ici}[1]{\stackrel{\circ}{#1}}
\title[The use of soft matrices on soft multisets in an optimal decision process]
{The use of soft matrices on soft multisets in an optimal decision process}
\author[Arzu Erdem, Cigdem Gunduz Aras, Ayse Sonmez, and H\"usey\.{I}n \c{C}akall\i]{Arzu Erdem*, Cigdem Gunduz Aras* , Ayse Sonmez**, and H\"usey\.{I}n \c{C}akall\i\;***\\
*Kocaeli University, Department of Mathematics, Kocaeli, Turkey Phone:(+90262)3032102\\ **Gebze Institute of Technology, Department of Mathematics, Gebze-Kocaeli, Turkey Phone: (+90262)6051389 \\*** Maltepe University, Marmara E\u{g}\.{I}t\.{I}m K\"oy\"u, TR 34857, \.{I}stanbul-Turkey \; \; \; \; \; Phone:(+90216)6261050 ext:2248, \; fax:(+90216)6261113 }
\address{Arzu Erdem, Kocaeli University, Department of Mathematics, Kocaeli, Turkey Phone:(+90 262) 303 2102}
\email{erdem.arzu@@gmail.com}
\address{Cigdem Gunduz Aras, Kocaeli University, Department of Mathematics, Kocaeli, Turkey Phone:(+90 262) 303 2102}
\email{carasgunduz@gmail.com; caras@kocaeli.edu.tr}
\address{Ay\c{s}e S\"onmez, Department of Mathematics, Gebze Institute of Technology, Cayirova Campus 41400 Gebze- Kocaeli, Turkey Phone: (+90 262) 605 1389}
\email{asonmez@gyte.edu.tr; ayse.sonmz@gmail.com}
\address{Huseyin \c{C}akall\i\;Maltepe University, Department of Mathematics, Marmara E\u{g}\.{I}t\.{I}m K\"oy\"u, TR 34857, Maltepe, \.{I}stanbul-Turkey \; \; \; \; \; Phone:(+90 216) 626 1050 ext:2248, \; fax:(+90 216) 626 1113}
\email{hcakalli@maltepe.edu.tr; hcakalli@gmail.com}
\keywords{Soft sets, Soft matrix, Soft multiset, Products of soft matrices on soft multisets,
Soft max-min decision making}
\date{\today}
\maketitle
\begin{abstract}
In this paper, we introduce a concept of a soft matrix on a soft multiset, and investigate how
to use soft matrices to solve decision making problems. An algorithm for a multiple choose selection
problem is also provided. Finally, we
demonstrate an illustrative example to show the decision making steps.
\end{abstract}
\section{Introduction}
One has to choose between alternative actions almost every day and make
decisions, and most decisions involve multiple objectives. For example, the X
Industry, a developer and manufacturer of computer equipment, would like to
manufacture a large quantity of a product if consumer demand for the product
is adequately high. Unfortunately, the technology development is sometimes
difficult, and there are crucial factors such as the target time for reaching
the market, competitors, etc. The Development Group of the X Industry argues
for introducing an untested product: they propose moving up the production
start and putting the device into production before the final tests, meanwhile
launching a high-priced advertising campaign offering the units as available
now. This illustrates a decision-making activity.
When we speak of decision making, one needs to figure out what to do in the
face of difficult circumstances. Molodtsov \cite{Molodtsov} introduced
`Soft Set Theory', which provides parametrization tools for dealing
with this kind of uncertainty and difficulty in the process of decision
making. Maji, Biswas and Roy \cite{Maji} defined the equality of two soft sets,
subsets and supersets of a soft set, the complement of a soft set, the null soft set,
and the absolute soft set, with examples. In that study, the soft binary operations
AND and OR and the operations of union and intersection were also
characterized. Sezgin and Atag\"{u}n \cite{Sezgin} proved that certain De
Morgan's laws hold in soft set theory with respect to different operations on
soft sets. Ali, Feng, Liu, Min and Shabir \cite{Ali} introduced some new
notions such as the restricted intersection, the restricted union, the
restricted difference and the extended intersection of two soft sets. The
relationship among soft sets, soft rough sets and topologies was established
by Li and Xie in \cite{Li}. Based on the novel granulation structures called
soft approximation spaces, soft rough approximations and soft rough sets
were introduced by Feng, Liu, Fotea and Jun in \cite{Feng}. Jiang, Tang, Chen,
Liu and Tang presented an extended fuzzy soft set theory that uses the
concepts of fuzzy description logics to act as the parameters of fuzzy soft
sets. Gunduz and Bayramaov in \cite{Caras} introduced some important properties of fuzzy soft topological spaces. An alternative approach to attribute reduction in multi-valued
information systems under soft set theory was presented by Herawan, Ghazali
and Deris in \cite{Herawan}. Gunduz, Sonmez and Cakalli in \cite{Caras1} introduced soft open and soft closed mappings and the concept of soft homeomorphism. Applications of soft sets to decision-making
problems were studied in \cite{Cagman, Han, Maji2, Mamat, Singh, Zhang}.
When we consider a multiple-objective decision-making problem with $m$
criteria and $n$ alternatives, it is easy to display the decision-making
methodology as a decision table. This representation has several advantages:
matrices, and hence the soft sets represented by them, are easy to store and
manipulate in a computer. Soft matrices, a
matrix representation of soft sets, were introduced in \cite{Basu,Cagman2,Cagman3,Mondal2,Mondal,Vijayabalaji}.
However, some decision-making problems, such as group decision-making
problems, involve several decision makers. As a generalization of Molodtsov's soft
set, the definition of a soft multiset was introduced by Alkhazaleh and
Salleh in \cite{Alkhazaleh}. In that work, they gave basic operations
such as complement, union and intersection with examples; then, in \cite{Alkhazaleh2}, they introduced the definition of a fuzzy soft multiset as a
combination of a soft multiset and a fuzzy set and studied its properties and
operations.

In this paper we develop the soft multiset concept for decision-making problems
and apply the soft matrix concept to group decision-making problems.
\section{Preliminaries}
First of all, we recall some basic concepts and notions that form the
necessary foundations of group decision-making methods.
\begin{definition}
(\cite{Molodtsov}) Let $U$ be an initial universe, $P\left( U\right) $ be
the power set of $U$, $E$ be a set of all parameters and $A\subset E$. A
soft set $\left( f_{A},E\right) $ on the universe $U$ is defined by the set
of ordered pairs
\begin{equation*}
\left( f_{A},E\right) :=\left\{ \left( e,f_{A}\left( e\right) \right) :e\in
E,~f_{A}\left( e\right) \in P\left( U\right) \right\}
\end{equation*}
where $f_{A}:E\rightarrow P\left( U\right) $ such that $f_{A}\left( e\right)
=\emptyset $ if $e\notin A$.
\end{definition}
\begin{example}
Let us consider a soft set $\left( f_{A},E\right) $ which describes the
``color of the shirts'' that Mrs. X is
considering to buy. Suppose that there are five shirts in the universe $
U=\{s_{1},s_{2},s_{3},s_{4},s_{5}\}$ under consideration, and that $
E=\{e_{1},e_{2},e_{3},e_{4}\}$ is a set of decision parameters. Each $
e_{i},$ $i=1,2,3,4,$ denotes the parameter ``white'', ``purple'', ``red'' or
``blue'', respectively. Let $A=\{e_{1},e_{3}\}\subset E$ and
\begin{eqnarray*}
f_{A}\left( e_{1}\right) &=&U, \\
f_{A}(e_{3}) &=&\{s_{2},s_{4}\}.
\end{eqnarray*}
Then we can view the soft set $\left( f_{A},E\right) $ as consisting of the
following collection of approximations:
\begin{equation*}
\left( f_{A},E\right) =\left\{ \left( e_{1},U\right) ,\left(
e_{3},\{s_{2},s_{4}\}\right) \right\} .
\end{equation*}
\end{example}
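As a computational aside, a soft set is simply a map from parameters to subsets of the universe, with the empty set assigned outside $A$. The following Python sketch (illustrative only; the variable names are ours, not part of the formal development) encodes the soft set of this example:

```python
# A soft set (f_A, E) is a map f_A : E -> P(U) with f_A(e) = {} for e not in A.
U = {"s1", "s2", "s3", "s4", "s5"}    # universe of five shirts
E = ["e1", "e2", "e3", "e4"]          # decision parameters
A = {"e1", "e3"}                      # parameters actually used

approximations = {"e1": set(U), "e3": {"s2", "s4"}}

def f_A(e):
    """Approximate value set of parameter e; empty for e outside A."""
    return approximations.get(e, set()) if e in A else set()

soft_set = {e: f_A(e) for e in E}
```

The pairs with nonempty value sets, here $(e_{1},U)$ and $(e_{3},\{s_{2},s_{4}\})$, are exactly the approximations listed above.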
\begin{definition}
(\cite{Alkhazaleh}) Let $\{U_{i}:i\in I\}$ be a collection of universes such
that $\cap _{i\in I}U_{i}=\emptyset $, $\left\{ E_{i}=E_{U_{i}}:i\in
I\right\} $ be a collection of sets of parameters, $E=\prod\limits_{i\in
I}E_{i},$ $U=\prod\limits_{i\in I}P\left( U_{i}\right) ,$ $A\subset E.$ A
pair $(F_{A},E)$ is called a soft multiset over $U$, where $F_{A}$ is a
mapping given by $F_{A}:A\rightarrow U.$
\end{definition}
This paper focuses on the situation in which the universe sets $U_{i}$ and
the parameter sets $E_{i}$ are both finite for each $i\in I$.
\begin{example}
\label{ex2}Suppose that there are three universes $U_{1},U_{2},U_{3}.$ Let
us consider a soft multiset $(F_{A},E)$ which describes the "attractiveness
of houses", "attractiveness of cars"\ and "attractiveness of hotels"\ that
Mrs. X is considering for an accommodation purchase, a transportation purchase,
and a venue to hold a wedding celebration, respectively.
\begin{eqnarray*}
U_{1} &=&\{h_{1},h_{2},h_{3},h_{4},h_{5},h_{6}\} \\
U_{2} &=&\{c_{1},c_{2},c_{3},c_{4},c_{5}\} \\
U_{3} &=&\{v_{1},v_{2},v_{3},v_{4}\} \\
E_{1} &=&E_{U_{1}}=\{e_{11}="expensive",e_{12}="cheap",e_{13}="4\text{ }
bedroom\text{ }flat", \\
e_{14}&=&"3\text{ }bedroom\text{ }and\text{ }terraced
\text{ }house", e_{15} ="located\text{ }in\text{ }the\text{ }heart\text{ }of\text{ }the
\text{ }city"\} \\
E_{2} &=&E_{U_{2}}=\{e_{21}="expensive",e_{22}="cheap",e_{23}="friendly\text{
}technology", \\
e_{24} &=&"better\text{ }performance",e_{25}="luxury",e_{26}="Made\text{ }in
\text{ }Germany"\} \\
E_{3} &=&E_{U_{3}}=\{e_{31}="expensive",e_{32}="cheap",e_{33}="in~\dot{I}
stanbul",
\\
e_{34}&=&"located\text{ }in\text{ }the\text{ }historic\text{ }centre", e_{35} ="neoclassic\text{ }hotel"\} \\
E &=&\prod\limits_{i\in I}E_{i},U=\prod\limits_{i\in I}P\left( U_{i}\right)
\\
A &=&\{a_{1}=\left( e_{11},e_{21},e_{31}\right) ,a_{2}=\left(
e_{11},e_{22},e_{34}\right) ,a_{3}=\left( e_{12},e_{23},e_{35}\right)
,a_{4}=\left( e_{15},e_{24},e_{32}\right) , \\
a_{5} &=&\left( e_{14},e_{23},e_{33}\right) ,a_{6}=\left(
e_{12},e_{25},e_{32}\right) ,a_{7}=\left( e_{13},e_{21},e_{31}\right)
,a_{8}=\left( e_{11},e_{26},e_{32}\right) \}\subset E
\end{eqnarray*}
Suppose that
\begin{eqnarray*}
F_{A}\left( a_{1}\right) &=&\left(
\{h_{3},h_{4},h_{5},h_{6}\},\{c_{1},c_{2},c_{3}\},\{v_{2},v_{3}\}\right) , \\
F_{A}\left( a_{2}\right) &=&\left(
\{h_{3},h_{4},h_{5},h_{6}\},\{c_{4},c_{5}\},\{v_{1},v_{2}\}\right) , \\
F_{A}\left( a_{3}\right) &=&\left( \{h_{1},h_{2}\},\emptyset
,\{v_{2},v_{3}\}\right) , \\
F_{A}\left( a_{4}\right) &=&\left(
U_{1},\{c_{3},c_{4}\},\{v_{1},v_{4}\}\right) , \\
F_{A}\left( a_{5}\right) &=&\left( \{h_{3},h_{4},h_{5}\},\emptyset
,\{v_{2},v_{4}\}\right) , \\
F_{A}\left( a_{6}\right) &=&\left(
\{h_{1},h_{2}\},U_{2},\{v_{1},v_{4}\}\right) , \\
F_{A}\left( a_{7}\right) &=&\left( \emptyset
,\{c_{1},c_{2},c_{3}\},\{v_{2},v_{3}\}\right) , \\
F_{A}\left( a_{8}\right) &=&\left(
\{h_{3},h_{4},h_{5},h_{6}\},\{c_{4},c_{5}\},\{v_{1},v_{4}\}\right) ,
\end{eqnarray*}
Then we can view the soft multiset $(F_{A},E)$ as consisting of the
following collection of approximations
\begin{eqnarray*}
(F_{A},E) &=&\{\left( \left( e_{11},e_{21},e_{31}\right) ,\left(
\{h_{3},h_{4},h_{5},h_{6}\},\{c_{1},c_{2},c_{3}\},\{v_{2},v_{3}\}\right)
\right) , \\
&&\left( \left( e_{11},e_{22},e_{34}\right) ,\left(
\{h_{3},h_{4},h_{5},h_{6}\},\{c_{4},c_{5}\},\{v_{1},v_{2}\}\right) \right) ,
\\
&&\left( \left( e_{12},e_{23},e_{35}\right) ,\left(
\{h_{1},h_{2}\},\emptyset ,\{v_{2},v_{3}\}\right) \right) , \\
&&\left( \left( e_{15},e_{24},e_{32}\right) ,\left(
U_{1},\{c_{3},c_{4}\},\{v_{1},v_{4}\}\right) \right) , \\
&&\left( \left( e_{14},e_{23},e_{33}\right) ,\left(
\{h_{3},h_{4},h_{5}\},\emptyset ,\{v_{2},v_{4}\}\right) \right) , \\
&&\left( \left( e_{12},e_{25},e_{32}\right) ,\left(
\{h_{1},h_{2}\},U_{2},\{v_{1},v_{4}\}\right) \right) , \\
&&\left( \left( e_{13},e_{21},e_{31}\right) ,\left( \emptyset
,\{c_{1},c_{2},c_{3}\},\{v_{2},v_{3}\}\right) \right) , \\
&&\left( \left( e_{11},e_{26},e_{32}\right) ,\left(
\{h_{3},h_{4},h_{5},h_{6}\},\{c_{4},c_{5}\},\{v_{1},v_{4}\}\right) \right) \}
\end{eqnarray*}
\end{example}
\begin{definition}
(\cite{Alkhazaleh}) For any soft multiset $(F_{A},E),$\ a pair $\left(
e_{ij},F_{ij}\right) $ is called a $U_{i}-$ soft multiset part for all $
e_{ij}\in a_{k}$, and $F_{ij}\subset F_{A}\left( A\right) $ is an
approximate value set, where $a_{k}\in A,$ $k=1,2,3,...,r,$ $
i=1,2,...,m_{i}, $ $j=1,2,...,n_{j}.$
\end{definition}
\begin{example}
\label{ex3}Consider the soft multiset given in Example \ref{ex2}. Then,
\begin{eqnarray*}
\left( e_{1j},F_{1j}\right) &=&\{\left(
e_{11},\{h_{3},h_{4},h_{5},h_{6}\}\right) ,\left(
e_{12},\{h_{1},h_{2}\}\right) ,\left( e_{13},\emptyset \right) , \\
&&\left( e_{14},\{h_{3},h_{4},h_{5}\}\right) ,\left( e_{15},U_{1}\right) \}
\end{eqnarray*}
is a $U_{1}-$ soft multiset part of $(F_{A},E).$
\end{example}
\begin{definition}
(i) Let a pair $\left( e_{ij},F_{ij}\right) $\ be a $U_{i}-$ soft multiset
part of soft multiset $(F_{A},E),$ $A_{i}\subset E_{i}.$\ Then a subset of $
U_{i}\times E_{i}$ is called a relation form of $U_{i}-$ soft multiset
part of $(F_{A},E)$ which is uniquely defined by
\begin{equation*}
R_{A_{i}}=\{\left( u_{ij},e_{ij}\right) :e_{ij}\in A_{i},u_{ij}\in
F_{ij}\left( e_{ij}\right) \}
\end{equation*}
The characteristic function $\chi _{R_{A_{i}}}$ is defined by
\begin{equation*}
\chi _{R_{A_{i}}}:U_{i}\times E_{i}\rightarrow \{0,1\},~~\chi
_{R_{A_{i}}}\left( u_{ij},e_{ij}\right) :=\left\{
\begin{array}{c}
1,\left( u_{ij},e_{ij}\right) \in R_{A_{i}} \\
0,\left( u_{ij},e_{ij}\right) \notin R_{A_{i}}
\end{array}
\right.
\end{equation*}
(ii) If $U_{i}=\{u_{i1},u_{i2},...,u_{im_{i}}\},E_{i}=
\{e_{i1},e_{i2},...,e_{in_{i}}\},$ then we call the matrix $\left[ a_{lk}^{i}
\right] =\chi _{R_{A_{i}}}\left( u_{ik},e_{il}\right) ,1\leq k\leq
m_{i},1\leq l\leq n_{i},$ an $m_{i}\times n_{i}$ soft matrix of the $U_{i}-$
soft multiset part of $(F_{A},E).$
\end{definition}
\begin{example}
\label{ex4}Let us consider Example \ref{ex3}. Then
\begin{eqnarray*}
R_{A_{1}} &=&\{\left( h_{3},e_{11}\right) ,\left( h_{4},e_{11}\right)
,\left( h_{5},e_{11}\right) ,\left( h_{6},e_{11}\right) ,\left(
h_{1},e_{12}\right) ,\left( h_{2},e_{12}\right) ,\left( h_{3},e_{14}\right) ,
\\
&&\left( h_{4},e_{14}\right) ,\left( h_{5},e_{14}\right) ,\left(
h_{1},e_{15}\right) ,\left( h_{2},e_{15}\right) ,\left( h_{3},e_{15}\right)
,\left( h_{4},e_{15}\right) ,\left( h_{5},e_{15}\right) ,\left(
h_{6},e_{15}\right) \}
\end{eqnarray*}
Then $R_{A_{1}}$\ is presented by a table as in the following form:
\begin{equation*}
\begin{tabular}{|l|l|l|l|l|l|}
\hline
$R_{A_{1}}$ & $e_{11}$ & $e_{12}$ & $e_{13}$ & $e_{14}$ & $e_{15}$ \\ \hline
$h_{1}$ & $\chi _{R_{A_{1}}}\left( h_{1},e_{11}\right) $ & $\chi
_{R_{A_{1}}}\left( h_{1},e_{12}\right) $ & $\chi _{R_{A_{1}}}\left(
h_{1},e_{13}\right) $ & $\chi _{R_{A_{1}}}\left( h_{1},e_{14}\right) $ & $
\chi _{R_{A_{1}}}\left( h_{1},e_{15}\right) $ \\ \hline
$h_{2}$ & $\chi _{R_{A_{1}}}\left( h_{2},e_{11}\right) $ & $\chi
_{R_{A_{1}}}\left( h_{2},e_{12}\right) $ & $\chi _{R_{A_{1}}}\left(
h_{2},e_{13}\right) $ & $\chi _{R_{A_{1}}}\left( h_{2},e_{14}\right) $ & $
\chi _{R_{A_{1}}}\left( h_{2},e_{15}\right) $ \\ \hline
$h_{3}$ & $\chi _{R_{A_{1}}}\left( h_{3},e_{11}\right) $ & $\chi
_{R_{A_{1}}}\left( h_{3},e_{12}\right) $ & $\chi _{R_{A_{1}}}\left(
h_{3},e_{13}\right) $ & $\chi _{R_{A_{1}}}\left( h_{3},e_{14}\right) $ & $
\chi _{R_{A_{1}}}\left( h_{3},e_{15}\right) $ \\ \hline
$h_{4}$ & $\chi _{R_{A_{1}}}\left( h_{4},e_{11}\right) $ & $\chi
_{R_{A_{1}}}\left( h_{4},e_{12}\right) $ & $\chi _{R_{A_{1}}}\left(
h_{4},e_{13}\right) $ & $\chi _{R_{A_{1}}}\left( h_{4},e_{14}\right) $ & $
\chi _{R_{A_{1}}}\left( h_{4},e_{15}\right) $ \\ \hline
$h_{5}$ & $\chi _{R_{A_{1}}}\left( h_{5},e_{11}\right) $ & $\chi
_{R_{A_{1}}}\left( h_{5},e_{12}\right) $ & $\chi _{R_{A_{1}}}\left(
h_{5},e_{13}\right) $ & $\chi _{R_{A_{1}}}\left( h_{5},e_{14}\right) $ & $
\chi _{R_{A_{1}}}\left( h_{5},e_{15}\right) $ \\ \hline
$h_{6}$ & $\chi _{R_{A_{1}}}\left( h_{6},e_{11}\right) $ & $\chi
_{R_{A_{1}}}\left( h_{6},e_{12}\right) $ & $\chi _{R_{A_{1}}}\left(
h_{6},e_{13}\right) $ & $\chi _{R_{A_{1}}}\left( h_{6},e_{14}\right) $ & $
\chi _{R_{A_{1}}}\left( h_{6},e_{15}\right) $ \\ \hline
\end{tabular}
.
\end{equation*}
Hence the soft matrix $\left[ a_{lk}^{1}\right] $ of the $U_{1}-$ soft multiset
part of $(F_{A},E)$ is written as
\begin{equation*}
\left[ a_{lk}^{1}\right] _{6\times 5}=
\begin{bmatrix}
0 & 1 & 0 & 0 & 1 \\
0 & 1 & 0 & 0 & 1 \\
1 & 0 & 0 & 1 & 1 \\
1 & 0 & 0 & 1 & 1 \\
1 & 0 & 0 & 1 & 1 \\
1 & 0 & 0 & 0 & 1
\end{bmatrix}
_{6\times 5}.
\end{equation*}
\end{example}
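The passage from the relation $R_{A_{1}}$ to the $6\times 5$ soft matrix is mechanical: each entry is the characteristic function evaluated at a (house, parameter) pair. A minimal Python sketch of this construction, using the relation of this example (variable names are illustrative):

```python
houses = ["h1", "h2", "h3", "h4", "h5", "h6"]
params = ["e11", "e12", "e13", "e14", "e15"]

# The relation R_{A_1} of this example, as a set of (house, parameter) pairs.
R_A1 = {("h3", "e11"), ("h4", "e11"), ("h5", "e11"), ("h6", "e11"),
        ("h1", "e12"), ("h2", "e12"),
        ("h3", "e14"), ("h4", "e14"), ("h5", "e14"),
        ("h1", "e15"), ("h2", "e15"), ("h3", "e15"),
        ("h4", "e15"), ("h5", "e15"), ("h6", "e15")}

def chi(u, e):
    """Characteristic function: 1 if (u, e) is in the relation, else 0."""
    return 1 if (u, e) in R_A1 else 0

# 6x5 soft matrix: rows indexed by houses, columns by parameters.
soft_matrix = [[chi(u, e) for e in params] for u in houses]
```

Running this reproduces exactly the $6\times 5$ matrix displayed above.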
\begin{definition}
(\cite{Cagman3}) $\left[ a_{lk}\right] _{m\times n}$ is called a zero soft
matrix, denoted by $[0]$, if $a_{lk}=0$ for all $1\leq l\leq m$ and $1\leq
k\leq n.$
\end{definition}
\section{Soft matrices on soft multisets}
In this section, inspired by the above definitions of soft matrices and soft
multisets, we first define soft matrices on soft multisets and their products,
and then give examples of these concepts.
\begin{definition}
\label{def1}Let $U_{i}=\{u_{i1},u_{i2},...,u_{im_{i}}\}$ be universes, $
E_{i}=\{e_{i1},e_{i2},...,e_{in_{i}}\}$ be parameter sets for each $i\in I$, $
R_{A_{i}}$ be the relation form of the $U_{i}-$ soft multiset part of $(F_{A},E),$
and $\left[ a_{lk}^{i}\right] _{m_{i}\times n_{i}},$ $1\leq k\leq m_{i},1\leq
l\leq n_{i},$ be the soft matrix of the $U_{i}-$ soft multiset part of $(F_{A},E).$
Then
\begin{equation*}
\left[ A_{lk}\right] _{m\times n}=\left[
\begin{tabular}{c|c|c|c}
$\left[ a_{lk}^{1}\right] _{m_{1}\times n_{1}}$ & $\left[ 0\right]
_{m_{1}\times n_{2}}$ & $\cdots $ & $\left[ 0\right] _{m_{1}\times n_{N}}$
\\ \hline
$\left[ 0\right] _{m_{2}\times n_{1}}$ & $\left[ a_{lk}^{2}\right]
_{m_{2}\times n_{2}}$ & $\cdots $ & $\left[ 0\right] _{m_{2}\times n_{N}}$
\\ \hline
$\cdots $ & $\cdots $ & $\ddots $ & $\cdots $ \\ \hline
$\left[ 0\right] _{m_{N}\times n_{1}}$ & $\left[ 0\right] _{m_{N}\times
n_{2}}$ & $\cdots $ & $\left[ a_{lk}^{N}\right] _{m_{N}\times n_{N}}$
\end{tabular}
\right] _{m\times n}
\end{equation*}
is called a soft matrix of $(F_{A},E),$ where $m=m_{1}+m_{2}+\cdots
+m_{N},~n=n_{1}+n_{2}+\cdots +n_{N}.$
\end{definition}
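The definition above stacks the part matrices block-diagonally, padding the off-diagonal blocks with zero soft matrices. The assembly can be sketched in Python as follows (an illustrative implementation, not the authors' code):

```python
def block_diagonal(parts):
    """Assemble the soft matrix of (F_A, E) from the soft matrices of its
    U_i-soft multiset parts, padding off-diagonal blocks with zeros."""
    m = sum(len(p) for p in parts)        # total rows m_1 + ... + m_N
    n = sum(len(p[0]) for p in parts)     # total columns n_1 + ... + n_N
    big = [[0] * n for _ in range(m)]
    r = c = 0
    for p in parts:
        for i, row in enumerate(p):
            for j, v in enumerate(row):
                big[r + i][c + j] = v     # copy the i-th part onto the diagonal
        r += len(p)
        c += len(p[0])
    return big
```

Applied to part matrices of sizes $6\times 5$, $5\times 6$, and $4\times 5$, as in this section, it yields a $15\times 16$ soft matrix.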
\begin{example}
\label{ex6}Let us consider Example \ref{ex3}. The soft matrix $\left[
a_{lk}^{1}\right] $ of the $U_{1}-$ soft multiset part of $(F_{A},E)$ is given
in Example \ref{ex4}. Similarly, we can obtain the soft matrices $\left[
a_{lk}^{2}\right] $ of the $U_{2}-$ and $\left[ a_{lk}^{3}\right] $ of the $U_{3}-$
soft multiset parts of $(F_{A},E)$ as shown below:
\begin{equation*}
\left[ a_{lk}^{2}\right] _{5\times 6}=
\begin{bmatrix}
1 & 0 & 0 & 0 & 1 & 0 \\
1 & 0 & 0 & 0 & 1 & 0 \\
1 & 0 & 0 & 1 & 1 & 0 \\
0 & 1 & 0 & 1 & 1 & 1 \\
0 & 1 & 0 & 0 & 1 & 1
\end{bmatrix}
_{5\times 6},\text{ }\left[ a_{lk}^{3}\right] _{4\times 5}=
\begin{bmatrix}
0 & 1 & 0 & 1 & 0 \\
1 & 0 & 1 & 1 & 1 \\
1 & 0 & 0 & 0 & 1 \\
0 & 1 & 1 & 0 & 0
\end{bmatrix}
_{4\times 5}.
\end{equation*}
Then the soft matrix $\left[ A_{lk}\right] _{m\times n}$ of $(F_{A},E)$ can be
written as
\begin{equation*}
\left[ A_{lk}\right] _{15\times 16}=\left[
\begin{tabular}{c|c|c}
$
\begin{bmatrix}
0 & 1 & 0 & 0 & 1 \\
0 & 1 & 0 & 0 & 1 \\
1 & 0 & 0 & 1 & 1 \\
1 & 0 & 0 & 1 & 1 \\
1 & 0 & 0 & 1 & 1 \\
1 & 0 & 0 & 0 & 1
\end{bmatrix}
_{6\times 5}$ & $
\begin{bmatrix}
0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0
\end{bmatrix}
_{6\times 6}$ & $
\begin{bmatrix}
0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0
\end{bmatrix}
_{6\times 5}$ \\ \hline
$
\begin{bmatrix}
0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0
\end{bmatrix}
_{5\times 5}$ & $
\begin{bmatrix}
1 & 0 & 0 & 0 & 1 & 0 \\
1 & 0 & 0 & 0 & 1 & 0 \\
1 & 0 & 0 & 1 & 1 & 0 \\
0 & 1 & 0 & 1 & 1 & 1 \\
0 & 1 & 0 & 0 & 1 & 1
\end{bmatrix}
_{5\times 6}$ & $
\begin{bmatrix}
0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0
\end{bmatrix}
_{5\times 5}$ \\ \hline
$
\begin{bmatrix}
0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0
\end{bmatrix}
_{4\times 5}$ & $
\begin{bmatrix}
0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0
\end{bmatrix}
_{4\times 6}$ & $
\begin{bmatrix}
0 & 1 & 0 & 1 & 0 \\
1 & 0 & 1 & 1 & 1 \\
1 & 0 & 0 & 0 & 1 \\
0 & 1 & 1 & 0 & 0
\end{bmatrix}
_{4\times 5}$
\end{tabular}
\right] _{15\times 16}
\end{equation*}
\end{example}
We make the following product definitions for soft matrices on soft
multisets, which are adapted from Definitions 7-10 in \cite{Cagman3}.
\begin{definition}
Let $\left[ a_{lk}^{i}\right] _{m_{i}\times n_{i}}$ be the soft matrix of the $
U_{i}-$ soft multiset part of $(F_{A},E)$ and $\left[ b_{lj}^{i}\right]
_{m_{i}\times n_{i}}$ be the soft matrix of the $U_{i}-$ soft multiset part of $
(F_{B},E)$. Then the \textbf{And }product of $\left[ a_{lk}^{i}\right]
_{m_{i}\times n_{i}}$ and $\left[ b_{lj}^{i}\right] _{m_{i}\times n_{i}}$ is
defined by
\begin{equation*}
\left[ a_{lk}^{i}\right] _{m_{i}\times n_{i}}\wedge \left[ b_{lj}^{i}\right]
_{m_{i}\times n_{i}}=\left[ c_{lp}^{i}\right] _{m_{i}\times n_{i}^{2}}
\end{equation*}
where $c_{lp}^{i}=\min \{a_{lk}^{i},b_{lj}^{i}\}$ such that $p=n_{i}\left(
k-1\right) +j.$
\end{definition}
\begin{definition}
Let $\left[ a_{lk}^{i}\right] _{m_{i}\times n_{i}}$ be the soft matrix of the $
U_{i}-$ soft multiset part of $(F_{A},E)$ and $\left[ b_{lj}^{i}\right]
_{m_{i}\times n_{i}}$ be the soft matrix of the $U_{i}-$ soft multiset part of $
(F_{B},E)$. Then the \textbf{Or }product of $\left[ a_{lk}^{i}\right]
_{m_{i}\times n_{i}}$ and $\left[ b_{lj}^{i}\right] _{m_{i}\times n_{i}}$ is
defined by
\begin{equation*}
\left[ a_{lk}^{i}\right] _{m_{i}\times n_{i}}\vee \left[ b_{lj}^{i}\right]
_{m_{i}\times n_{i}}=\left[ c_{lp}^{i}\right] _{m_{i}\times n_{i}^{2}}
\end{equation*}
where $c_{lp}^{i}=\max \{a_{lk}^{i},b_{lj}^{i}\}$ such that $p=n_{i}\left(
k-1\right) +j.$
\end{definition}
\begin{definition}
Let $\left[ a_{lk}^{i}\right] _{m_{i}\times n_{i}}$ be the soft matrix of the $
U_{i}-$ soft multiset part of $(F_{A},E)$ and $\left[ b_{lj}^{i}\right]
_{m_{i}\times n_{i}}$ be the soft matrix of the $U_{i}-$ soft multiset part of $
(F_{B},E)$. Then the \textbf{And-Not }product of $\left[ a_{lk}^{i}\right]
_{m_{i}\times n_{i}}$ and $\left[ b_{lj}^{i}\right] _{m_{i}\times n_{i}}$ is
defined by
\begin{equation*}
\left[ a_{lk}^{i}\right] _{m_{i}\times n_{i}}\barwedge \left[ b_{lj}^{i}
\right] _{m_{i}\times n_{i}}=\left[ c_{lp}^{i}\right] _{m_{i}\times
n_{i}^{2}}
\end{equation*}
where $c_{lp}^{i}=\min \{a_{lk}^{i},1-b_{lj}^{i}\}$ such that $p=n_{i}\left(
k-1\right) +j.$
\end{definition}
\begin{definition}
Let $\left[ a_{lk}^{i}\right] _{m_{i}\times n_{i}}$ be the soft matrix of the $
U_{i}-$ soft multiset part of $(F_{A},E)$ and $\left[ b_{lj}^{i}\right]
_{m_{i}\times n_{i}}$ be the soft matrix of the $U_{i}-$ soft multiset part of $
(F_{B},E)$. Then the \textbf{Or-Not }product of $\left[ a_{lk}^{i}\right]
_{m_{i}\times n_{i}}$ and $\left[ b_{lj}^{i}\right] _{m_{i}\times n_{i}}$ is
defined by
\begin{equation*}
\left[ a_{lk}^{i}\right] _{m_{i}\times n_{i}}\veebar \left[ b_{lj}^{i}\right]
_{m_{i}\times n_{i}}=\left[ c_{lp}^{i}\right] _{m_{i}\times n_{i}^{2}}
\end{equation*}
where $c_{lp}^{i}=\max \{a_{lk}^{i},1-b_{lj}^{i}\}$ such that $p=n_{i}\left(
k-1\right) +j.$
\end{definition}
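All four products share one pattern: the entry in column $p=n_{i}(k-1)+j$ combines column $k$ of the first matrix with column $j$ of the second, row by row, so the result has $n_{i}^{2}$ columns. A compact Python sketch of the four products (illustrative names; it assumes, as in the definitions, that both matrices have the same shape):

```python
def product(a, b, combine):
    """Generic product of two m x n soft matrices of the same U_i part.
    Returns an m x n^2 matrix with entry combine(a[l][k], b[l][j]) in
    column p = n*(k-1) + j (1-based), i.e. k outer, j inner."""
    m, n = len(a), len(a[0])
    return [[combine(a[l][k], b[l][j]) for k in range(n) for j in range(n)]
            for l in range(m)]

and_product     = lambda a, b: product(a, b, min)                       # And
or_product      = lambda a, b: product(a, b, max)                       # Or
and_not_product = lambda a, b: product(a, b, lambda x, y: min(x, 1 - y))  # And-Not
or_not_product  = lambda a, b: product(a, b, lambda x, y: max(x, 1 - y))  # Or-Not
```

For the $6\times 5$ part matrices of the running example, each product is a $6\times 25$ soft matrix.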
\begin{example}
\label{ex5}As in Example \ref{ex2}, Mr. X joins Mrs. X for the accommodation
purchase, the transportation purchase, and the venue to hold a wedding
celebration, respectively. The set of choice parameters for him is
\begin{eqnarray*}
B &=&\{b_{1}=\left( e_{11},e_{25},e_{31}\right) ,b_{2}=\left(
e_{13},e_{22},e_{33}\right) ,b_{3}=\left( e_{12},e_{26},e_{32}\right)
,b_{4}=\left( e_{14},e_{24},e_{34}\right) , \\
b_{5} &=&\left( e_{15},e_{23},e_{35}\right) ,b_{6}=\left(
e_{11},e_{21},e_{31}\right) \}\subset E.
\end{eqnarray*}
Then the soft multiset $(F_{B},E)$ is given by
\begin{eqnarray*}
(F_{B},E) &=&\{\left( \left( e_{11},e_{25},e_{31}\right) ,\left(
U_{1},\{c_{4},c_{5}\},\{v_{1},v_{2},v_{3}\}\right) \right) , \\
&&\left( \left( e_{13},e_{22},e_{33}\right) ,\left(
\{h_{2},h_{3},h_{4},h_{5}\},\{c_{1},c_{2}\},\{v_{2},v_{4}\}\right) \right) ,
\\
&&\left( \left( e_{12},e_{26},e_{32}\right) ,\left( \emptyset
,\{c_{4},c_{5}\},\{v_{4}\}\right) \right) , \\
&&\left( \left( e_{14},e_{24},e_{34}\right) ,\left(
\{h_{1},h_{2},h_{3}\},\{c_{4},c_{5}\},\{v_{2},v_{3}\}\right) \right) , \\
&&\left( \left( e_{15},e_{23},e_{35}\right) ,\left(
\{h_{1},h_{2},h_{5},h_{6}\},\{c_{1},c_{2},c_{3},c_{4}\},\{v_{2}\}\right)
\right) , \\
&&\left( \left( e_{11},e_{21},e_{31}\right) ,\left(
U_{1},\{c_{3},c_{4},c_{5}\},\{v_{1},v_{2},v_{3}\}\right) \right) \},
\end{eqnarray*}
and the soft matrix $\left[ b_{lk}^{1}\right] $ of $U_{1}-$ soft multiset
part of $(F_{B},E)$ is obtained as shown below:
\begin{equation*}
\left[ b_{lk}^{1}\right] _{6\times 5}=
\begin{bmatrix}
1 & 0 & 0 & 1 & 1 \\
1 & 0 & 1 & 1 & 1 \\
1 & 0 & 1 & 1 & 0 \\
1 & 0 & 1 & 0 & 0 \\
1 & 0 & 1 & 0 & 1 \\
1 & 0 & 0 & 0 & 1
\end{bmatrix}
_{6\times 5}.
\end{equation*}
Then the \textbf{And}, \textbf{Or}, \textbf{And-Not}, and \textbf{Or-Not}
products of $\left[ a_{ij}^{1}\right] _{m\times n}$ of the $U_{1}-$ soft
multiset part of $(F_{A},E)$ and $\left[ b_{ik}^{1}\right] _{m\times n}$ of
the $U_{1}-$ soft multiset part of $(F_{B},E)$ are shown as sparsity patterns,
in which squares indicate the entries with value $1$ in each product matrix.
\end{example}
\begin{definition}
\label{def2}Let $\left[ A_{lk}\right] _{m\times n},\left[ B_{lk}\right]
_{m\times n}$ be the soft matrices of soft multisets$\ (F_{A},E)$ and $
\left( F_{B},E\right) ,$ respectively. Then \textbf{And }product of $\left[
A_{lk}\right] _{m\times n}$ and $\left[ B_{lk}\right] _{m\times n}$ is
defined by
\begin{equation*}
\left[ A_{lk}\right] _{m\times n}\wedge \left[ B_{lk}\right] _{m\times n}=
\left[
\begin{tabular}{c|c|c|c}
$\left[ a_{lk}^{1}\right] _{m_{1}\times n_{1}}\wedge \left[ b_{lk}^{1}\right]
_{m_{1}\times n_{1}}$ & $\left[ 0\right] _{m_{1}\times n_{2}^{2}}$ & $\cdots
$ & $\left[ 0\right] _{m_{1}\times n_{N}^{2}}$ \\ \hline
$\left[ 0\right] _{m_{2}\times n_{1}^{2}}$ & $\left[ a_{lk}^{2}\right]
_{m_{2}\times n_{2}}\wedge \left[ b_{lk}^{2}\right] _{m_{2}\times n_{2}}$ & $
\cdots $ & $\left[ 0\right] _{m_{2}\times n_{N}^{2}}$ \\ \hline
$\cdots $ & $\cdots $ & $\ddots $ & $\cdots $ \\ \hline
$\left[ 0\right] _{m_{N}\times n_{1}^{2}}$ & $\left[ 0\right] _{m_{N}\times
n_{2}^{2}}$ & $\cdots $ & $\left[ a_{lk}^{N}\right] _{m_{N}\times
n_{N}}\wedge \left[ b_{lk}^{N}\right] _{m_{N}\times n_{N}}$
\end{tabular}
\right] _{m\times ns},
\end{equation*}
\textbf{Or }product of $\left[ A_{lk}\right] _{m\times n}$ and $\left[ B_{lk}
\right] _{m\times n}$ is defined by
\begin{equation*}
\left[ A_{lk}\right] _{m\times n}\vee \left[ B_{lk}\right] _{m\times n}=
\left[
\begin{tabular}{c|c|c|c}
$\left[ a_{lk}^{1}\right] _{m_{1}\times n_{1}}\vee \left[ b_{lk}^{1}\right]
_{m_{1}\times n_{1}}$ & $\left[ 0\right] _{m_{1}\times n_{2}^{2}}$ & $\cdots
$ & $\left[ 0\right] _{m_{1}\times n_{N}^{2}}$ \\ \hline
$\left[ 0\right] _{m_{2}\times n_{1}^{2}}$ & $\left[ a_{lk}^{2}\right]
_{m_{2}\times n_{2}}\vee \left[ b_{lk}^{2}\right] _{m_{2}\times n_{2}}$ & $
\cdots $ & $\left[ 0\right] _{m_{2}\times n_{N}^{2}}$ \\ \hline
$\cdots $ & $\cdots $ & $\ddots $ & $\cdots $ \\ \hline
$\left[ 0\right] _{m_{N}\times n_{1}^{2}}$ & $\left[ 0\right] _{m_{N}\times
n_{2}^{2}}$ & $\cdots $ & $\left[ a_{lk}^{N}\right] _{m_{N}\times n_{N}}\vee
\left[ b_{lk}^{N}\right] _{m_{N}\times n_{N}}$
\end{tabular}
\right] _{m\times ns},
\end{equation*}
\textbf{And-Not }product of $\left[ A_{lk}\right] _{m\times n}$ and $\left[
B_{lk}\right] _{m\times n}$ is defined by
\begin{equation*}
\left[ A_{lk}\right] _{m\times n}\barwedge \left[ B_{lk}\right] _{m\times n}=
\left[
\begin{tabular}{c|c|c|c}
$\left[ a_{lk}^{1}\right] _{m_{1}\times n_{1}}\barwedge \left[ b_{lk}^{1}
\right] _{m_{1}\times n_{1}}$ & $\left[ 0\right] _{m_{1}\times n_{2}^{2}}$ &
$\cdots $ & $\left[ 0\right] _{m_{1}\times n_{N}^{2}}$ \\ \hline
$\left[ 0\right] _{m_{2}\times n_{1}^{2}}$ & $\left[ a_{lk}^{2}\right]
_{m_{2}\times n_{2}}\barwedge \left[ b_{lk}^{2}\right] _{m_{2}\times n_{2}}$
& $\cdots $ & $\left[ 0\right] _{m_{2}\times n_{N}^{2}}$ \\ \hline
$\cdots $ & $\cdots $ & $\ddots $ & $\cdots $ \\ \hline
$\left[ 0\right] _{m_{N}\times n_{1}^{2}}$ & $\left[ 0\right] _{m_{N}\times
n_{2}^{2}}$ & $\cdots $ & $\left[ a_{lk}^{N}\right] _{m_{N}\times
n_{N}}\barwedge \left[ b_{lk}^{N}\right] _{m_{N}\times n_{N}}$
\end{tabular}
\right] _{m\times ns},
\end{equation*}
\textbf{Or-Not }product of $\left[ A_{lk}\right] _{m\times n}$ and $\left[
B_{lk}\right] _{m\times n}$ is defined by
\begin{equation*}
\left[ A_{lk}\right] _{m\times n}\veebar \left[ B_{lk}\right] _{m\times n}=
\left[
\begin{tabular}{c|c|c|c}
$\left[ a_{lk}^{1}\right] _{m_{1}\times n_{1}}\veebar \left[ b_{lk}^{1}
\right] _{m_{1}\times n_{1}}$ & $\left[ 0\right] _{m_{1}\times n_{2}^{2}}$ &
$\cdots $ & $\left[ 0\right] _{m_{1}\times n_{N}^{2}}$ \\ \hline
$\left[ 0\right] _{m_{2}\times n_{1}^{2}}$ & $\left[ a_{lk}^{2}\right]
_{m_{2}\times n_{2}}\veebar \left[ b_{lk}^{2}\right] _{m_{2}\times n_{2}}$ &
$\cdots $ & $\left[ 0\right] _{m_{2}\times n_{N}^{2}}$ \\ \hline
$\cdots $ & $\cdots $ & $\ddots $ & $\cdots $ \\ \hline
$\left[ 0\right] _{m_{N}\times n_{1}^{2}}$ & $\left[ 0\right] _{m_{N}\times
n_{2}^{2}}$ & $\cdots $ & $\left[ a_{lk}^{N}\right] _{m_{N}\times
n_{N}}\veebar \left[ b_{lk}^{N}\right] _{m_{N}\times n_{N}}$
\end{tabular}
\right] _{m\times ns},
\end{equation*}
where $m=m_{1}+m_{2}+\cdots +m_{N},$ $ns=n_{1}^{2}+n_{2}^{2}+\cdots
+n_{N}^{2}$.
\end{definition}
\begin{example}
\label{ex7}Let us consider Examples \ref{ex6} and \ref{ex5}. The \textbf{And},
\textbf{Or}, \textbf{And-Not}, and \textbf{Or-Not} products of $\left[ A_{lk}
\right] _{15\times 16}$ and $\left[ B_{lk}\right] _{15\times 16}$, the soft
matrices of the soft multisets $(F_{A},E)$ and $\left( F_{B},E\right) $
respectively, are $15\times 86$ soft matrices; their sparsity patterns are
visualized with squares indicating the entries with value $1$ in each product.
\end{example}
\section{Application}
Now we use the algorithm to solve our original problem.
\begin{example}
Suppose that a married couple, Mr. X and Mrs. X, are considering an
accommodation purchase, a transportation purchase, and a venue to hold a wedding
celebration, respectively. The sets of choice parameters for them are given in
Example \ref{ex2} and Example \ref{ex5}. We now select the house(s), car(s)
and hotel(s) based on the sets of the partners' parameters by using the above
algorithm as follows:
\begin{description}
\item[Step 1] First, Mrs. X and Mr. X have to choose the sets of their
parameters, given in Example \ref{ex2} and Example \ref{ex5}, respectively;
\item[Step 2] Then we can write the soft multisets $\left( F_{A},E\right)
,\left( F_{B},E\right) $, given in Example \ref{ex2} and Example \ref{ex5},
respectively;
\item[Step 3] Next we find the soft matrix $\left[ a_{lk}^{i}\right] $ of $
U_{i}$ soft multiset part of $(F_{A},E)$ and the soft matrix $\left[
b_{lk}^{i}\right] $ of $U_{i}$ soft multiset part of $(F_{B},E)$, given in
Example \ref{ex6} and Example \ref{ex5}, respectively;
\item[Step 4] By using Definition \ref{def1}, we construct the soft matrix $
\left[ A_{lk}\right] _{m\times n}$ of the soft multiset $(F_{A},E)$ and the soft
matrix $\left[ B_{lk}\right] _{m\times n}$ of the soft multiset $(F_{B},E);$
\begin{figure}
\caption{Sparsity pattern of the soft matrix of $\left( F_{A},E\right) $}
\end{figure}
\begin{figure}
\caption{Sparsity pattern of the soft matrix of $\left( F_{B},E\right) $}
\end{figure}
\item[Step 5] We construct the \textbf{And} product of $\left[ A_{lk}\right]
_{m\times n}$ and $\left[ B_{lk}\right] _{m\times n}$ by using Definition
\ref{def2}, denoted by $\left[ C_{lj}\right] _{m\times ns}$, as given in Example
\ref{ex7};
\item[Step 6] We find the sets $I_{k}^{\left( i\right) }$ as
\begin{eqnarray*}
I_{1}^{\left( 1\right) } &=&\{1;3;4;5\},I_{2}^{\left( 1\right)
}=\{6;8;9;10\},I_{3}^{\left( 1\right) }=\emptyset ,I_{4}^{\left( 1\right)
}=\{16;18;19;20\},I_{5}^{\left( 1\right) }=\{21;23;24;25\} \\
I_{1}^{\left( 2\right) } &=&\{26;27;28\},I_{2}^{\left( 2\right)
}=\{32;34;35;36;37\},I_{3}^{\left( 2\right) }=\emptyset ,I_{4}^{\left(
2\right) }=\{44;46;47;48;49\}, \\
I_{5}^{\left( 2\right) } &=&\{50;51;52;53;54;55\},I_{6}^{\left( 2\right)
}=\{56;58;59;60;61\} \\
I_{1}^{\left( 3\right) } &=&\{62;64;65;66\},I_{2}^{\left( 3\right)
}=\{67;68;69\},I_{3}^{\left( 3\right) }=\{72;73;74;75;76\}, \\
I_{4}^{\left( 3\right) } &=&\{77;79;80;81\},I_{5}^{\left( 3\right)
}=\{82;84;85;86\}
\end{eqnarray*}
and construct decision function
\begin{eqnarray*}
\left[ w_{l,k}^{\left( 1\right) }\right] &=&
\begin{bmatrix}
0 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 1 \\
0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0
\end{bmatrix}
_{6\times 5},\left[ w_{l,k}^{\left( 2\right) }\right] =
\begin{bmatrix}
0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 1 & 0 & 1 \\
0 & 0 & 0 & 0 & 0 & 0
\end{bmatrix}
_{5\times 6},\left[ w_{l,k}^{\left( 3\right) }\right] =
\begin{bmatrix}
0 & 0 & 0 & 0 & 0 \\
1 & 0 & 0 & 1 & 1 \\
0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0
\end{bmatrix}
_{4\times 5} \\
\left[ v_{l}^{\left( 1\right) }\right] &=&
\begin{bmatrix}
0 \\
\mathbf{1} \\
0 \\
0 \\
0 \\
0
\end{bmatrix}
\begin{array}{c}
\rightarrow h_{1} \\
\rightarrow h_{2} \\
\rightarrow h_{3} \\
\rightarrow h_{4} \\
\rightarrow h_{5} \\
\rightarrow h_{6}
\end{array}
,\left[ v_{l}^{\left( 2\right) }\right] =
\begin{bmatrix}
0 \\
0 \\
0 \\
\mathbf{1} \\
0
\end{bmatrix}
\begin{array}{c}
\rightarrow c_{1} \\
\rightarrow c_{2} \\
\rightarrow c_{3} \\
\rightarrow c_{4} \\
\rightarrow c_{5}
\end{array}
,\left[ v_{l}^{\left( 3\right) }\right] =
\begin{bmatrix}
0 \\
\mathbf{1} \\
0 \\
0
\end{bmatrix}
\begin{array}{c}
\rightarrow v_{1} \\
\rightarrow v_{2} \\
\rightarrow v_{3} \\
\rightarrow v_{4}
\end{array}
\end{eqnarray*}
\item[Step 7] We find the optimum sets $U_{1}=\{h_{2}\}$, $U_{2}=\{c_{4}
\}$, and $U_{3}=\{v_{2}\}$; that is, the couple should choose house $h_{2}$, car $c_{4}$, and venue $v_{2}$.
\end{description}
\end{example}
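The And-product and max-based decision steps of the algorithm can be sketched programmatically. The following is a minimal, single-universe sketch (the helper names \texttt{and\_product} and \texttt{decision} are illustrative, not from the text), using 0-based column indices rather than the 1-based global indices of Step 6:

```python
# Hypothetical sketch of the And-product of two soft matrices and the
# max-based decision function used in Steps 5-7 (single universe, 0-based).
def and_product(A, B):
    """And-product: row l gets entries min(a[l][k], b[l][j]) for all (k, j)."""
    m, n = len(A), len(A[0])
    return [[min(A[l][k], B[l][j]) for k in range(n) for j in range(n)]
            for l in range(m)]

def decision(C, index_sets):
    """w[l][k] = max of row l over columns in I_k (0 if I_k is empty, as for
    the empty index sets in Step 6); v[l] = max_k w[l][k]."""
    w = [[max((C[l][j] for j in I), default=0) for I in index_sets]
         for l in range(len(C))]
    return [max(row) for row in w]

A = [[1, 0], [0, 1]]
B = [[1, 1], [0, 1]]
C = and_product(A, B)                # 2 x 4 soft matrix of the product
v = decision(C, [[0, 1], [2, 3]])    # optimum rows are those with v[l] = 1
```

In the example above, the rows $l$ with $v_l = 1$ form the optimum set, mirroring the selection of $h_2$, $c_4$, and $v_2$ in Step 7.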
\end{document}
\begin{document}
\title{An Interior Point-Proximal Method of Multipliers for Positive Semi-Definite Programming}
\author[*]{Spyridon Pougkakiotis}
\author[*]{Jacek Gondzio}
\affil[*]{School of Mathematics, University of Edinburgh\newline \newline ERGO Technical Report 20--006}
\maketitle
\begin{abstract}
\par In this paper we generalize the Interior Point-Proximal Method of Multipliers (IP-PMM) presented in [\emph{An Interior Point-Proximal Method of Multipliers for Convex Quadratic Programming}, Computational Optimization and Applications, 78, 307--351 (2021)] for the solution of linear positive Semi-Definite Programming (SDP) problems, allowing inexactness in the solution of the associated Newton systems. In particular, we combine an infeasible Interior Point Method (IPM) with the Proximal Method of Multipliers (PMM) and interpret the algorithm (IP-PMM) as a primal-dual regularized IPM, suitable for solving SDP problems. We apply some iterations of an IPM to each sub-problem of the PMM until a satisfactory solution is found. We then update the PMM parameters, form a new IPM neighbourhood, and repeat this process. Given this framework, we prove polynomial complexity of the algorithm, under mild assumptions, and without requiring exact computations for the Newton directions. We furthermore provide a necessary condition for lack of strong duality, which can be used as a basis for constructing detection mechanisms for identifying pathological cases within IP-PMM.
\end{abstract}
\section{Introduction}
\par Positive Semidefinite Programming (SDP) problems have attracted a lot of attention in the literature for more than two decades, and have been used to model a plethora of different problems arising from control theory \cite[Chapter 14]{BalkaWang_BOOK_SPRINGER}, power systems \cite{LavaLow_IEEE_TRAN_POW_SYS}, stochastic optimization \cite{Ben-Tal_Nemi_MATH_OR}, truss optimization \cite{Weld_et_al_STRUCT_MULT_OPT}, and many other application areas (e.g. see \cite{BalkaWang_BOOK_SPRINGER,VandBoyd_APPL_NUM_MATH}). More recently, SDP has been extensively used for building tight convex relaxations of NP-hard combinatorial optimization problems (see \cite[Chapter 12]{BalkaWang_BOOK_SPRINGER}, and the references therein).
\par As a result of the seemingly unlimited applicability of SDP, numerous contributions have been made to optimization techniques
suitable for solving such problems. The most remarkable milestone was achieved by Nesterov and Nemirovskii \cite{NestNemir_BOOK_SIAM}, who designed a polynomially convergent Interior Point Method (IPM) for the class of SDP problems. This led to the development of numerous successful IPM variants for SDP; some of theoretical (e.g. \cite{MizuJarr_MATH_PROG,Zhang_SIAM_J_OPT,ZhouToh_MATH_PROG}) and others of practical nature (e.g. \cite{Bella_Gondz_Porc_arxiv,Bella_Gondz_Porc_MATH_PROG,MOSEK}). While IPMs enjoy fast convergence, in theory and in practice, each IPM iteration requires the solution of a very large-scale linear system, even for small-scale SDP problems. What is worse, such linear systems are inherently ill-conditioned. A viable and successful alternative to IPMs for SDP problems (e.g. see \cite{ZhaoSunToh_SIAM_J_OPT}), which circumvents the issue of ill-conditioning without significantly compromising convergence speed, is based on the so-called Augmented Lagrangian method (ALM), which can be seen as the dual application of the proximal point method (as shown in \cite{ROCK_Math_OR}). The issue with ALMs is that, unlike IPMs, a consistent strategy for tuning the algorithm parameters is not known. Furthermore, polynomial complexity is lost, and is replaced with merely a finite termination. An IPM scheme combined with the Proximal Method of Multipliers (PMM) for solving SDP problems was proposed in \cite{Dehg_Goff_Orban_OMS}, and was interpreted as a primal-dual regularized IPM. The authors established global convergence, and numerically demonstrated the efficiency of the method. However, the latter is not guaranteed to converge to an $\epsilon$-optimal solution in a polynomial number of iterations, or even to find a global optimum in a finite number of steps. Finally, viable alternatives based on proximal splitting methods have been studied in \cite{JiangVander_OPT_ONLINE,Souto_Garcia_Veiga_OPT}. 
Such methods are very efficient and require significantly less computation and memory per iteration, as compared to IPM or ALM. However, as they are first-order methods, their convergence to high accuracy might be slow. Hence, such methodologies are only suitable for finding approximate solutions of low accuracy.
\par In this paper, we are extending the Interior Point-Proximal Method of Multipliers (IP-PMM) presented in \cite{Pougk_Gond_COAP}. In particular, the algorithm in \cite{Pougk_Gond_COAP} was developed for convex quadratic programming problems and assumed that the resulting linear systems are solved exactly. Under this framework, it was proved that IP-PMM converges in a polynomial number of iterations, under mild assumptions, and an infeasibility detection mechanism was established. An important feature of this method is that it provides a reliable tuning for the penalty parameters of the PMM; indeed, the reliability of the algorithm is established numerically in a wide variety of convex problems in \cite{Berga_et_al_NLAA,deSimone_et_al_arxiv,GondPougkPear_arxiv,Pougk_Gond_COAP}. In particular, the IP-PMMs proposed in \cite{Berga_et_al_NLAA,deSimone_et_al_arxiv,GondPougkPear_arxiv} use preconditioned iterative methods for the solution of the resulting linear systems, and are very robust despite the use of inexact Newton directions. In what follows, we develop and analyze an IP-PMM for linear SDP problems, which furthermore allows for inexactness in the solution of the linear systems that have to be solved at every iteration. We show that the method converges polynomially under standard assumptions. Subsequently, we provide a necessary condition for lack of strong duality, which can serve as a basis for constructing implementable detection mechanisms for pathological cases (following the developments in \cite{Pougk_Gond_COAP}). As is verified in \cite{Pougk_Gond_COAP}, IP-PMM is competitive with standard non-regularized IPM schemes, and is significantly more robust. This is because the introduction of regularization prevents severe ill-conditioning and rank deficiency of the associated linear systems solved within standard IPMs, which can hinder their convergence and numerical stability. 
For detailed discussions on the effectiveness of regularization within IPMs, the reader is referred to \cite{AltmanGondzio_OMS,ArmandBenoist_MATH_PROG,PougkGond_JOTA}, and the references therein. A particularly important benefit of using regularization is that the resulting Newton systems can be preconditioned effectively (e.g. see the developments in \cite{Berga_et_al_NLAA,GondPougkPear_arxiv}), allowing for more efficient implementations, with significantly lowered memory requirements. We note that the paper is focused on the theoretical aspects of the method, and an efficient, scalable, and reliable implementation would require a separate study. Nevertheless, the practical effectiveness of IP-PMM (in terms of efficiency, scalability, and robustness) has already been demonstrated for linear, convex quadratic \cite{Berga_et_al_NLAA,GondPougkPear_arxiv,Pougk_Gond_COAP}, and non-linear convex problems \cite{deSimone_et_al_arxiv}.
\par The rest of the paper is organized as follows. In Section \ref{section preliminaries}, we provide some preliminary background and introduce our notation. Then, in Section \ref{section Algorithmic Framework}, we provide the algorithmic framework of the method. In Section \ref{section Polynomial Convergence}, we prove polynomial complexity of the algorithm, and establish its global convergence. In Section \ref{section Infeasible problems}, a necessary condition for lack of strong duality is derived, and we discuss how it can be used to construct an implementable detection mechanism for pathological cases. Finally, we derive some conclusions in Section \ref{section conclusions}.
\section{Preliminaries and Notation} \label{section preliminaries}
\subsection{Primal-Dual Pair of SDP Problems}
\par Let the vector space $\mathcal{S}^n \coloneqq \{B \in \mathbb{R}^{n\times n} \colon B = B^\top \}$ be given, endowed with the inner product $\langle A, B \rangle = \textnormal{Tr}(AB)$, where $\textnormal{Tr}(\cdot)$ denotes the trace of a matrix. In this paper, we consider the following primal-dual pair of linear positive semi-definite programming problems, in the standard form:
\begin{equation} \label{non-regularized primal} \tag{P}
\underset{X \in \mathcal{S}^n}{\text{min}} \ \langle C,X\rangle , \ \ \text{s.t.} \ \mathcal{A}X = b, \ X \in \mathcal{S}^n_{+},
\end{equation}
\begin{equation} \label{non-regularized dual} \tag{D}
\underset{y \in \mathbb{R}^m,\ Z \in \mathcal{S}^n}{\text{max}} \ b^\top y , \ \ \text{s.t.}\ \mathcal{A}^*y + Z = C,\ Z \in \mathcal{S}^n_{+},
\end{equation}
\noindent where $\mathcal{S}^{n}_{+} \coloneqq \{B \in \mathcal{S}^{n} \colon B \succeq 0\}$, $C,X,Z \in \mathcal{S}^{n}$, $b,y \in \mathbb{R}^m$, $\mathcal{A}$ is a linear operator on $\mathcal{S}^n$, $\mathcal{A}^*$ is the adjoint of $\mathcal{A}$, and $X\succeq 0$ denotes that $X$ is positive semi-definite. We note that the norm induced by the inner product $\langle A, B \rangle = \textnormal{Tr}(AB)$ is in fact the \textit{Frobenius norm}, denoted by $\|\cdot\|_{F}$. Furthermore, the adjoint $\mathcal{A}^* \colon \mathbb{R}^m \mapsto \mathcal{S}^n$ is such that $y^\top \mathcal{A}X = \langle \mathcal{A}^*y, X\rangle,\ \ \forall\ y \in \mathbb{R}^m, \ \forall\ X \in \mathcal{S}^n$.
\par For the rest of this paper, except for Section \ref{section Infeasible problems}, we will assume that the linear operator $\mathcal{A}$ is onto and that problems \eqref{non-regularized primal} and \eqref{non-regularized dual} are both strictly feasible (that is, Slater's constraint qualification holds for both problems). It is well-known that under the previous assumptions, the primal-dual pair \eqref{non-regularized primal}--\eqref{non-regularized dual} is guaranteed to have optimal solution for which strong duality holds (see \cite{NestNemir_BOOK_SIAM}). Such a solution can be found by solving the Karush--Kuhn--Tucker (KKT) optimality conditions for \eqref{non-regularized primal}--\eqref{non-regularized dual}, which read as follows:
\begin{equation} \label{non-regularized F.O.C}
\begin{bmatrix}
\mathcal{A}^* y + Z -C\\
\mathcal{A} X - b\\
XZ
\end{bmatrix} = \begin{bmatrix}
0\\
0\\
0
\end{bmatrix},\qquad X,\ Z \in \mathcal{S}^n_+.
\end{equation}
\subsection{A Proximal Method of Multipliers}
\noindent The author in \cite{ROCK_Math_OR} presented for the first time the \textit{Proximal Method of Multipliers} (PMM), in order to solve general convex programming problems. Let us derive this method for the pair \eqref{non-regularized primal}--\eqref{non-regularized dual}. Given an arbitrary starting point $(X_0,y_0) \in \mathcal{S}^n_{+}\times \mathbb{R}^m$, the PMM can be summarized by the following iteration:
\begin{equation} \label{PMM sub-problem}
\begin{split}
X_{k+1} =&\ \underset{X \in \mathcal{S}^n_+}{\arg\min}\bigg\{\langle C,X\rangle - y_k^\top (\mathcal{A}X - b) + \frac{\mu_k}{2}\|X-X_k\|_F^2 + \frac{1}{2\mu_k}\|\mathcal{A}X-b\|_2^2 \bigg\},\\
y_{k+1} = &\ y_k - \frac{1}{\mu_k} (\mathcal{A}X_{k+1} - b),
\end{split}
\end{equation}
\noindent where $\mu_k$ is a positive penalty parameter. The previous iteration admits a unique solution, for all $k$.
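To build intuition for the iteration \eqref{PMM sub-problem}, the following sketch applies the PMM update to a toy equality-constrained problem in which the conic constraint is dropped, so that each sub-problem reduces to a linear system. This is a simplification for illustration only; the actual sub-problems involve the cone $\mathcal{S}^n_+$ and do not admit such a closed-form solve.

```python
import numpy as np

# Hedged sketch: PMM for min c^T x s.t. Ax = b with the cone dropped, so each
# sub-problem minimizes
#   c^T x - y_k^T (Ax - b) + (mu/2)||x - x_k||^2 + (1/(2 mu))||Ax - b||^2
# exactly, via its first-order optimality condition
#   (mu I + A^T A / mu) x = mu x_k + A^T y_k - c + A^T b / mu.
def pmm(A, b, c, mu=1.0, iters=60):
    m, n = A.shape
    x, y = np.zeros(n), np.zeros(m)
    M = mu * np.eye(n) + (A.T @ A) / mu
    for _ in range(iters):
        x = np.linalg.solve(M, mu * x + A.T @ y - c + (A.T @ b) / mu)
        y = y - (A @ x - b) / mu      # multiplier update, as in the iteration
    return x, y

A = np.array([[1.0, 1.0]])
b = np.array([1.0])
c = np.array([1.0, 1.0])              # c lies in range(A^T), so a minimizer exists
x, y = pmm(A, b, c)
```

On this toy instance the iterates approach a feasible point with $A^\top y = c$, illustrating the convergence of the proximal point scheme for a fixed penalty parameter.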
\par We can write \eqref{PMM sub-problem} equivalently by making use of the \textit{maximal monotone} operator $T_{\mathcal{L}} \colon \mathbb{R}^m\times \mathcal{S}^n \rightrightarrows \mathbb{R}^m\times \mathcal{S}^n$ (see \cite{ROCK_Math_OR,ROCK_SIAM_J_CONTROL_OPT}), whose graph is defined as:
\begin{equation} \label{Primal Dual Maximal Monotone Operator}
T_{\mathcal{L}}(X,y) \coloneqq \{(V,u): V \in C - \mathcal{A}^*y + \partial \delta_{S^n_{+}}(X),\ u = \mathcal{A}X-b \},
\end{equation}
\noindent where $\delta_{S^n_{+}}(\cdot)$ is an indicator function defined as:
\begin{equation} \label{Indicator function}
\delta_{S^n_{+}}(X) \coloneqq
\begin{cases}
0, &\quad\text{if } X \in \mathcal{S}^n_+, \\
\infty, &\quad\text{otherwise,} \\
\end{cases}
\end{equation}
\noindent and $\partial(\cdot)$ denotes the sub-differential of a function, hence (from \cite[Corollary 23.5.4]{Rockafellar_BOOK_PRINCETON}):
\begin{equation*}
Z \in \partial \delta_{S^n_{+}}(X) \Leftrightarrow -Z \in \mathcal{S}^n_+,\ \langle X,Z\rangle = 0.
\end{equation*}
\noindent By convention, we have that $\partial \delta_{\mathcal{S}_+^n}(X^*) = \emptyset$ if $X^* \notin \mathcal{S}_+^n$. Given a bounded pair $(X^*,y^*)$ such that $(0,0) \in T_{\mathcal{L}}(X^*,y^*)$, we can retrieve a matrix $Z^* \in \partial \delta_{S^n_{+}}(X^*)$, using which $(X^*,y^*,-Z^*)$ is an optimal solution for \eqref{non-regularized primal}--\eqref{non-regularized dual}. By defining the \textit{proximal operator}:
\begin{equation} \label{Primal Dual Proximal Operator}
\mathcal{P}_k \coloneqq \bigg(I_{n+m} + \frac{1}{\mu_k}T_{\mathcal{L}}\bigg)^{-1},
\end{equation}
\noindent where $I_{n+m}$ is the identity operator of size $n+m$, and describes the direct sum of the identity operators of $\mathcal{S}^n$ and $\mathbb{R}^m$, we can express \eqref{PMM sub-problem} as:
\begin{equation} \label{PMM Operator Subproblem}
(X_{k+1},y_{k+1}) = \mathcal{P}_k(X_k,y_k),
\end{equation}
\noindent and it can be shown that $\mathcal{P}_k$ is single valued and firmly \textit{non-expansive} (see \cite{ROCK_SIAM_J_CONTROL_OPT}).
\subsection{An Infeasible Interior Point Method}
\par In what follows we present a basic infeasible IPM suitable for solving the primal-dual pair \eqref{non-regularized primal}--\eqref{non-regularized dual}. Such methods handle the conic constraints by introducing a suitable logarithmic barrier in the objective (for an extensive study of logarithmic barriers, the reader is referred to \cite{NestNemir_BOOK_SIAM}). At each iteration, we choose a \textit{barrier parameter} $\mu > 0$ and form the logarithmic \textit{barrier primal-dual pair:}
\begin{equation} \label{non-regularized barrier primal}
\underset{X \in \mathcal{S}^n}{\text{min}} \ \langle C,X\rangle - \mu \ln(\det(X)), \ \ \text{s.t.} \ \mathcal{A}X = b,
\end{equation}
\begin{equation} \label{non-regularized barrier dual}
\underset{y \in \mathbb{R}^m,\ Z \in \mathcal{S}^n}{\text{max}} \ b^\top y + \mu \ln(\det(Z)) , \ \ \text{s.t.}\ \mathcal{A}^*y + Z = C.
\end{equation}
\noindent The first-order (barrier) optimality conditions of \eqref{non-regularized barrier primal}--\eqref{non-regularized barrier dual} read as follows:
\begin{equation} \label{non-regularized barrier F.O.C}
\begin{bmatrix}
\mathcal{A}^* y + Z -C\\
\mathcal{A} X - b\\
XZ - \mu I_n
\end{bmatrix} = \begin{bmatrix}
0\\
0\\
0
\end{bmatrix},\qquad X,\ Z \in \mathcal{S}^n_{++},
\end{equation}
\noindent where $\mathcal{S}^n_{++} \coloneqq \{B \in \mathcal{S}^n: B\succ 0\}$. For every chosen value of $\mu$, we want to approximately solve the following non-linear system of equations:
\begin{equation*}
F_{\sigma,\mu}^{IPM}(w) \coloneqq \begin{bmatrix}
\mathcal{A}^* y + Z -C\\
\mathcal{A} X - b\\
XZ - \sigma \mu I_n
\end{bmatrix} = \begin{bmatrix}
0\\
0\\
0
\end{bmatrix},
\end{equation*}
\noindent where, with a slight abuse of notation, we set $w = (X,y,Z)$. Notice that $F_{\sigma,\mu}^{IPM}(w) = 0$ is a perturbed form of the barrier optimality conditions. In particular, $\sigma \in (0,1)$ is a \textit{centering parameter} which determines how fast $\mu$ will be forced to decrease at the next IPM iteration. For $\sigma = 1$ we recover the barrier optimality conditions in \eqref{non-regularized barrier F.O.C}, while for $\sigma = 0$ we recover the optimality conditions in \eqref{non-regularized F.O.C}.
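As a concrete check, the residual $F_{\sigma,\mu}^{IPM}(w)$ is straightforward to evaluate numerically. In the sketch below (illustrative names; the operator $\mathcal{A}$ is represented by a list of symmetric matrices $A_i$ with $(\mathcal{A}X)_i = \langle A_i, X\rangle = \textnormal{Tr}(A_i X)$, as formalized later in the vectorized format), a strictly feasible point on the central path gives a zero residual for $\sigma = 1$:

```python
import numpy as np

# Hedged sketch: evaluate the three blocks of F_{sigma,mu}(w) for a toy SDP.
def ipm_residual(A_list, b, C, X, y, Z, sigma, mu):
    n = X.shape[0]
    dual = sum(y[i] * A_list[i] for i in range(len(A_list))) + Z - C  # A*y + Z - C
    primal = np.array([np.trace(Ai @ X) for Ai in A_list]) - b        # AX - b
    compl_ = X @ Z - sigma * mu * np.eye(n)                           # XZ - sigma*mu*I
    return dual, primal, compl_

# A point on the central path: X = Z = I, y = 1, mu = <X, Z>/n = 1.
A_list = [np.eye(2)]
b = np.array([2.0])          # <I, I> = 2
C = 2 * np.eye(2)            # A*y + Z = C holds with y = 1, Z = I
dual, primal, compl_ = ipm_residual(A_list, b, C, np.eye(2),
                                    np.array([1.0]), np.eye(2),
                                    sigma=1.0, mu=1.0)
```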
\par In the IPM literature it is common to apply the Newton method to approximately solve the system of non-linear equations $F_{\sigma,\mu}^{IPM}(w) = 0$. The Newton method is favored for systems of this form due to the \textit{self-concordance} of the logarithmic barrier (see \cite{NestNemir_BOOK_SIAM}). However, a well-known issue in the literature is that the matrix $XZ$ is not necessarily symmetric. A common approach to tackle this issue is to employ a symmetrization operator $H_P : \mathbb{R}^{n\times n} \mapsto \mathcal{S}^n$, such that $H_P(XZ) = \mu I$ if and only if $XZ = \mu I$, given that $X,\ Z \in \mathcal{S}_+^n$. Following Zhang \cite{Zhang_SIAM_J_OPT}, we employ the following operator:
\begin{equation} \label{Symmetrization operator}
H_P(B) \coloneqq \frac{1}{2}(PBP^{-1} + (PBP^{-1})^\top ),
\end{equation}
\noindent where $P$ is a non-singular matrix. It can be shown that the central path (a key notion used in IPMs--see \cite{NestNemir_BOOK_SIAM}) can be equivalently defined as $H_P(XZ) = \mu I$, for any non-singular $P$. In this paper, we will make use of the choice $P_k = Z_k^{\frac{1}{2}}$. For a plethora of alternative choices, the reader is referred to \cite{Todd_OTP_METH_SOFT}. We should note that the analysis in this paper can be tailored to different symmetrization strategies, and this choice is made for simplicity of exposition. \par At the beginning of the $k$-th iteration, we have $w_k = (X_k, y_k, Z_k)$ and $\mu_k$ available. The latter is defined as $\mu_k = \frac{\langle X_k, Z_k \rangle}{n}$. By substituting the symmetrized complementarity in the last block equation and applying the Newton method, we obtain the following system of equations:
\begin{equation} \label{non-regularized Newton system}
\begin{bmatrix}
0 & \mathcal{A}^* & I_n\\
\mathcal{A} & 0 & 0 \\
\mathcal{E}_k & 0 & \mathcal{F}_k
\end{bmatrix} \begin{bmatrix}
\Delta X\\
\Delta y\\
\Delta Z
\end{bmatrix} = \begin{bmatrix}
C - \mathcal{A}^*y_k - Z_k\\
b - \mathcal{A}X_k\\
\sigma \mu_k I_n - H_{P_k}(X_kZ_k)
\end{bmatrix},
\end{equation}
\noindent where $\mathcal{E}_k \coloneqq \nabla_X H_{P_k}(X_kZ_k)$, and $\mathcal{F}_k \coloneqq \nabla_Z H_{P_k}(X_kZ_k)$.
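The defining properties of the symmetrization \eqref{Symmetrization operator} are easy to verify numerically. The following is a minimal sketch (illustrative only): for any non-singular $P$, the output of $H_P$ is symmetric, and $H_P(\mu I) = \mu I$.

```python
import numpy as np

# Minimal numerical check of Zhang's symmetrization
#   H_P(B) = (P B P^{-1} + (P B P^{-1})^T) / 2.
def H(B, P):
    PBPinv = P @ B @ np.linalg.inv(P)
    return 0.5 * (PBPinv + PBPinv.T)

rng = np.random.default_rng(0)
P = rng.standard_normal((3, 3)) + 3 * np.eye(3)  # generic non-singular P
B = rng.standard_normal((3, 3))                  # e.g. a (non-symmetric) product XZ

S = H(B, P)   # S is symmetric by construction
```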
\subsection{Vectorized Format}
\par In what follows we vectorize the associated operators, in order to work with matrices. In particular, given any matrix $B \in \mathbb{R}^{m \times n}$, we denote its vectorized form as $\bm{B}$, which is a vector of size $mn$, obtained by stacking the columns of $B$, from the first to the last. For the rest of this manuscript, any boldface letter denotes a vectorized matrix. Furthermore, if $\mathcal{A}: \mathcal{S}^n \mapsto \mathbb{R}^m$ is a linear operator, we can define it component-wise as $(\mathcal{A}X)_i \coloneqq \langle A_i , X \rangle$, for $i = 1,\ldots,m$, and any $X \in \mathcal{S}^n$, where $A_i \in \mathcal{S}^n$. Furthermore, the adjoint of this operator, that is $\mathcal{A}^*: \mathbb{R}^m \mapsto \mathcal{S}^n$ is defined as $\mathcal{A}^*y \coloneqq \sum_{i = 1}^{m} y_i A_i$, for all $y \in \mathbb{R}^m$. Using this notation, we can equivalently write \eqref{non-regularized primal}--\eqref{non-regularized dual} in the following form:
\begin{equation} \label{non-regularized vectorized primal}
\underset{X \in \mathcal{S}^n}{\text{min}} \ \langle C,X\rangle , \qquad \text{s.t.} \ \langle A_i, X \rangle = b_i, \quad i = 1,\ldots,m, \qquad \ X \in \mathcal{S}^n_{+},
\end{equation}
\begin{equation} \label{non-regularized vectorized dual}
\underset{y \in \mathbb{R}^m,\ Z \in \mathcal{S}^n}{\text{max}} \ \ b^\top y , \qquad
\text{s.t.} \ \sum_{i=1}^m y_i A_i + Z = C, \qquad \ Z \in \mathcal{S}^n_{+}.
\end{equation}
\noindent The first-order optimality conditions can be re-written as:
\begin{equation*}
\begin{bmatrix}
A^\top y + \bm{Z} - \bm{C} \\
A\bm{X} - b\\
XZ
\end{bmatrix} = \begin{bmatrix}
0\\
0\\
0
\end{bmatrix}, \qquad X,\ Z \in \mathcal{S}^n_+,
\end{equation*}
\noindent where $A^\top = [\bm{A}_1\ \bm{A}_2\ \cdots \ \bm{A}_m]$.
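The vectorized format above can be sketched directly: $\textnormal{vec}(B)$ stacks the columns of $B$, and the $m \times n^2$ matrix $A$ with rows $\bm{A}_i^\top$ satisfies $A\bm{X} = \mathcal{A}X$ component-wise, since $\langle A_i, X\rangle = \bm{A}_i^\top \bm{X}$. A small illustrative check:

```python
import numpy as np

# Sketch of the vectorized operator: rows of A are vec(A_i)^T, so that
# (A X)_i = <A_i, X> = Tr(A_i X) becomes the matrix-vector product A @ vec(X).
def vec(B):
    return B.flatten(order='F')          # column-major stacking of B

A1 = np.array([[1.0, 0.0], [0.0, 2.0]])  # symmetric constraint matrices
A2 = np.array([[0.0, 1.0], [1.0, 0.0]])
A = np.vstack([vec(A1), vec(A2)])        # the m x n^2 matrix A

X = np.array([[3.0, 1.0], [1.0, 5.0]])
lhs = A @ vec(X)                                       # vectorized A X
rhs = np.array([np.trace(A1 @ X), np.trace(A2 @ X)])   # <A_i, X> = Tr(A_i X)
```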
\par Below we summarize any additional notation that is used later in the paper. An iteration of the algorithm is denoted by $k \in \mathbb{N}$. Given an arbitrary matrix $A$ (resp., vector $x$), $A_k$ (resp., $x_k$) denotes that the matrix (resp., vector) depends on the iteration $k$. An optimal solution to the pair \eqref{non-regularized primal}--\eqref{non-regularized dual} will be denoted as $(X^*,y^*,Z^*)$. Optimal solutions of different primal-dual pairs will be denoted using an appropriate subscript, in order to distinguish them (e.g. $(X_r^*,y_r^*,Z_r^*)$ will denote an optimal solution for a PMM sub-problem). Any norm (resp., semi-norm) is denoted by $\| \cdot \|_{\chi}$, where $\chi$ is used to distinguish between different norms (e.g. $\|\cdot\|_2$ denotes the Euclidean norm). Given two matrices $X,\ Y \in \mathcal{S}^n_+$, we write $X \succeq Y$ when $X$ is larger than $Y$ with respect to the Loewner ordering. Given two logical statements $T_1,\ T_2$, the condition $T_1 \wedge T_2$ is true only when both $T_1$ and $T_2$ are true. Given two real-valued positive increasing functions $T(\cdot)$ and $f(\cdot)$, we say that $T(x) = O(f(x))$ (resp., $T(x) = \Omega(f(x))$) if there exist $x_0\geq 0,\ c_1 > 0$, such that $T(x) \leq c_1 f(x)$ (resp., $c_2 > 0$ such that $T(x) \geq c_2 f(x)$), for all $x \geq x_0$. We write $T(x) = \Theta(f(x))$ if and only if $T(x) = O(f(x))$ and $T(x) = \Omega(f(x))$. Finally, let an arbitrary matrix $A$ be given. The maximum (resp., minimum) singular value of $A$ is denoted by $\eta_{\max}(A)$ (resp., $\eta_{\min}(A)$). Similarly, the maximum (resp., minimum) eigenvalue of a square matrix $A$ is denoted by $\nu_{\max}(A)$ (resp., $\nu_{\min}(A)$).
\section{An Interior Point-Proximal Method of Multipliers for SDP} \label{section Algorithmic Framework}
\par In this section we present an inexact extension of IP-PMM presented in \cite{Pougk_Gond_COAP}, suitable for solving problems of the form of \eqref{non-regularized primal}--\eqref{non-regularized dual}. Assume that we have available an estimate $\lambda_k$ for a Lagrange multiplier vector at iteration $k$. Similarly, denote by $\Xi_k \in \mathcal{S}_+^n$ an estimate of a primal solution. As we discuss later, these estimate sequences (i.e. $\{\lambda_k\}, \{\Xi_k\}$) are produced by the algorithm, and represent the dual and primal proximal estimates, respectively. During the $k$-th iteration of the PMM, applied to \eqref{non-regularized primal}, the following proximal penalty function has to be minimized:
\begin{equation} \label{PMM lagrangian}
\begin{split}
\mathcal{L}^{PMM}_{\mu_k} (X;\Xi_k, \lambda_k) \coloneqq \langle C, X \rangle -\lambda_k^\top (\mathcal{A}X - b) + \frac{1}{2\mu_k}\|\mathcal{A}X-b\|_{2}^2 + \frac{\mu_k}{2}\|X-\Xi_k\|_{F}^2,
\end{split}
\end{equation}
\noindent with $\mu_k > 0$ being some non-increasing penalty parameter. Notice that this is equivalent to the iteration (\ref{PMM sub-problem}). We approximately minimize \eqref{PMM lagrangian} by applying one (or a few) iterations of the previously presented infeasible IPM. We alter \eqref{PMM lagrangian} by adding a logarithmic barrier:
\begin{equation} \label{Proximal IPM Penalty}
\mathcal{L}^{IP-PMM}_{\mu_k} (X;\Xi_k, \lambda_k) \coloneqq \mathcal{L}^{PMM}_{\mu_k} (X;\Xi_k, \lambda_k) - \mu_k \ln(\det(X)),
\end{equation}
\noindent and we treat $\mu_k$ as the barrier parameter. In order to form the optimality conditions of this sub-problem, we equate the gradient of $\mathcal{L}^{IP-PMM}_{\mu_k}(\cdot;\Xi_k,\lambda_k)$ to the zero vector, i.e.:
\begin{equation*}
C - \mathcal{A}^* \lambda_k + \frac{1}{\mu_k}\mathcal{A}^*(\mathcal{A}X - b) + \mu_k (X - \Xi_k) - \mu_k X^{-1} = 0.
\end{equation*}
\par Introducing the variables $y = \lambda_k - \frac{1}{\mu_k}(\mathcal{A}X - b)$ and $Z = \mu_k X^{-1}$, yields:
\begin{equation} \label{non-vectorized Proximal IPM FOC}
\begin{split}
\begin{bmatrix}
C - \mathcal{A}^*y - Z + \mu_k(X-\Xi_k)\\
\mathcal{A}X + \mu_k (y - \lambda_k) - b\\
XZ - \mu_k I_n
\end{bmatrix} = \begin{bmatrix}
0\\
0\\
0
\end{bmatrix} \Leftrightarrow \begin{bmatrix}
\bm{C}- A^\top y - \bm{Z} + \mu_k(\bm{X}-\bm{\Xi}_k)\\
A\bm{X}+ \mu_k (y - \lambda_k) - b\\
H_{P_k}(XZ) - \mu_k I_n
\end{bmatrix} = \begin{bmatrix}
0\\
0\\
0
\end{bmatrix},
\end{split}
\end{equation}
\noindent where the second system is obtained by introducing the symmetrization in \eqref{Symmetrization operator}, and by vectorizing the associated matrices and operators.
\par Given an arbitrary vector $b \in \mathbb{R}^m$, and matrix $C \in \mathbb{R}^{n \times n}$, we define the semi-norm:
\begin{equation} \label{semi-norm definition}
\|(b,\bm{C})\|_{\mathcal{S}} \coloneqq \min_{X,y,Z}\bigg\{\|(\bm{X},\bm{Z})\|_2\ :\begin{matrix} A\bm{X} = b, \\ A^\top y + \bm{Z} = \bm{C}\end{matrix}\bigg\}.
\end{equation}
\noindent A similar semi-norm was used before in \cite{MizuJarr_MATH_PROG}, as a way to measure infeasibility for the case of linear programming problems. For a discussion of the properties of the aforementioned semi-norm, as well as how to evaluate it (using an appropriate QR factorization, which can be computed in polynomial time), the reader is referred to \cite[Section 4]{MizuJarr_MATH_PROG}.
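For intuition, since the two constraints in \eqref{semi-norm definition} involve disjoint variables, the minimization decouples: a minimum-norm solution of $A\bm{X} = b$, and a least-squares residual $\bm{Z} = \bm{C} - A^\top y$ over $y$. The following is a hedged toy sketch (illustrative data, and ignoring the symmetry structure of $X$ and $Z$):

```python
import numpy as np

# Toy evaluation of the semi-norm ||(b, vecC)||_S: combine the minimum-norm
# solution of A x = b (via the pseudo-inverse) with the least-squares
# residual of vecC in the range of A^T.
def semi_norm(A, b, vecC):
    vecX = np.linalg.pinv(A) @ b                      # min-norm x with A x = b
    y, *_ = np.linalg.lstsq(A.T, vecC, rcond=None)    # min ||vecC - A^T y||_2
    vecZ = vecC - A.T @ y
    return np.sqrt(vecX @ vecX + vecZ @ vecZ)

A = np.array([[1.0, 0.0, 0.0, 1.0]])    # single constraint <I, X> = b (n = 2)
b = np.array([2.0])
vecC = np.array([1.0, 0.0, 0.0, 1.0])   # C = I lies in range(A^T): Z-part vanishes
val = semi_norm(A, b, vecC)
```

Here the minimizer is $X = I$ and $Z = 0$, so the semi-norm equals $\|\bm{X}\|_2 = \sqrt{2}$; a full implementation would instead use the QR-based evaluation referenced above.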
\paragraph{Starting Point.}
\par Let us define the starting point for IP-PMM. For that, we set $(X_0,Z_0) = \rho(I_n,I_n)$, for some $\rho > 0$. We also set $y_0$ to some arbitrary value (e.g. $y_0 = 0$), and $\mu_0 = \frac{\langle X_0, Z_0 \rangle}{n}$. Using the aforementioned triple, we have:
\begin{equation} \label{starting point}
A\bm{X}_0 = b + \bar{b},\ A^\top y_0 + \bm{Z}_0 = \bm{C} + \bm{\bar{C}},\ \Xi_0 = X_0,\ \lambda_0 = y_0,
\end{equation}
\noindent for some $\bar{b} \in \mathbb{R}^m$, and $\bar{C} \in \mathcal{S}^{n}$.
\paragraph{Neighbourhood.}
\par Below, we describe a neighbourhood in which the iterations of the method should lie. Unlike most path-following methods, we have to define a family of neighbourhoods that depend on the PMM sub-problem parameters.
\par Given \eqref{starting point}, some $\mu_k$, $\lambda_k$, and $\Xi_k$, we define the \textit{regularized set of centers}:
\begin{equation*}
\mathscr{P}_{\mu_k}(\Xi_k,\lambda_k) \coloneqq \big\{(X,y,Z)\in \mathscr{C}_{\mu_k}(\Xi_k,\lambda_k)\ :\ X \in \mathcal{S}^n_{++},\ Z \in \mathcal{S}^n_{++},\ XZ = \mu_k I_n \big\},
\end{equation*}
\begin{equation*}
\mathscr{C}_{\mu_k}(\Xi_k,\lambda_k) \coloneqq \bigg\{(X,y,Z)\ :\quad \begin{matrix}
A\bm{X} + \mu_k (y-\lambda_k) = b + \frac{\mu_k}{\mu_0} \bar{b},\\
A^\top y + \bm{Z} - \mu_k(\bm{X}- \bm{\Xi}_k) = \bm{C} + \frac{\mu_k}{\mu_0}\bm{\bar{C}}
\end{matrix} \bigg\},
\end{equation*}
\noindent where $\bar{b},\ \bar{C}$ are as in \eqref{starting point}. The term set of centers originates from \cite{MizuJarr_MATH_PROG}.
\par We enlarge the previous set, by defining the following set:
\begin{equation*}
\begin{split}
\tilde{\mathscr{C}}_{\mu_k}(\Xi_k,\lambda_k) \coloneqq \Bigg\{(X,y,Z)\ :\quad \begin{matrix}
A\bm{X} + \mu_k(y-\lambda_k) = b + \frac{\mu_k}{\mu_0} (\bar{b}+\tilde{b}_{k}),\\
A^\top y + \bm{Z} - \mu_k (\bm{X}- \bm{\Xi}_k) =\bm{C} + \frac{\mu_k}{\mu_0}(\bm{\bar{C}}+\bm{\tilde{C}}_{k})\\
\|(\tilde{b}_{k},\bm{\tilde{C}}_{k})\|_2 \leq K_N,\ \|(\tilde{b}_{k},\bm{\tilde{C}}_{k})\|_{\mathcal{S}} \leq \gamma_{\mathcal{S}} \rho
\end{matrix} \Bigg\},
\end{split}
\end{equation*}
where $K_N > 0$ is a constant, $\gamma_{\mathcal{S}} \in (0,1)$ and $\rho>0$ is as defined in the starting point. The vector $\tilde{b}_{k}$ and the matrix $\tilde{C}_{k}$ represent the current scaled (by $\frac{\mu_0}{\mu_k}$) infeasibilities, and will vary depending on the iteration $k$. While these can be defined recursively, it is not necessary. Instead it suffices to know that they satisfy the bounds given in the definition of the previous set. In essence, the previous set requires these scaled infeasibilities to be bounded above by some constants, with respect to the 2-norm as well as the semi-norm defined in \eqref{semi-norm definition}. We can now define a family of neighbourhoods:
\begin{equation} \label{Small neighbourhood}
\mathscr{N}_{\mu_k}(\Xi_k,\lambda_k) \coloneqq \bigg\{(X,y,Z) \in \tilde{\mathscr{C}}_{\mu_k}(\Xi_k,\lambda_k)\ :\quad \begin{matrix}\ X \in \mathcal{S}^n_{++},\ Z \in \mathcal{S}^n_{++},\\ \|H_P(XZ) - \mu_k I_n\|_F \leq \gamma_{\mu} \mu_k \end{matrix}\bigg\},
\end{equation}
\noindent where $\gamma_{\mu} \in (0,1)$ is a constant restricting the symmetrized complementarity products. Obviously, the starting point defined in \eqref{starting point} belongs to the neighbourhood $\mathscr{N}_{\mu_0}(\Xi_0,\lambda_0)$, with $(\tilde{b}_{0},\bm{\tilde{C}}_{0}) = (0,0)$. Notice that the neighbourhood depends on the choice of the constants $K_N$, $\gamma_{\mathcal{S}}$, $\gamma_{\mu}$. However, as the neighbourhood also depends on the parameters $\mu_k,\ \lambda_k,\ \Xi_k$, the dependence on the constants is omitted for simplicity of notation.
\paragraph{Newton System.}
\par As discussed earlier, we employ the Newton method for approximately solving a perturbed form of system \eqref{non-vectorized Proximal IPM FOC}, for all $k$. In particular, we perturb \eqref{non-vectorized Proximal IPM FOC} in order to take into consideration the target reduction of the barrier parameter $\mu_k$ (by introducing the centering parameter $\sigma_k$), as well as to incorporate the initial infeasibility, given our starting point in \eqref{starting point}. Specifically, we would like to approximately solve the following system:
\begin{equation} \label{exact non-vectorized Newton System}
\begin{split}
\begin{bmatrix}
-(\mu_k I_n) & \mathcal{A^*} & I_n\\
\mathcal{A} & \mu_k I_m & 0\\
Z_k & 0 &X_k
\end{bmatrix}
\begin{bmatrix}
\Delta X_k\\
\Delta y_k\\
\Delta Z_k
\end{bmatrix}
=
\begin{bmatrix}
(C + \frac{\sigma_k \mu_k}{\mu_0}\bar{C}) - \mathcal{A}^* y_k - Z_k +\sigma_k\mu_k (X_k - \Xi_k)\\
-\mathcal{A}X_k -\sigma_k\mu_k (y_k - \lambda_k)+ (b +\frac{\sigma_k \mu_k}{\mu_0}\bar{b}) \\
-X_kZ_k + \sigma_{k} \mu_k I_n
\end{bmatrix},
\end{split}
\end{equation}
\noindent where $\bar{b},\ \bar{C}$ are as in \eqref{starting point}. We note that we could either first linearize the last block equation of \eqref{non-vectorized Proximal IPM FOC} and then apply the symmetrization defined in \eqref{Symmetrization operator}, or first apply the symmetrization directly to the last block equation of \eqref{non-vectorized Proximal IPM FOC} and then linearize it. Both approaches are equivalent. Hence, following the former approach, we obtain the vectorized Newton system, which must be solved at every iteration of IP-PMM:
\begin{equation} \label{inexact vectorized Newton System}
\begin{split}
\begin{bmatrix}
-(\mu_k I_{n^2}) & A^\top & I_{n^2}\\
A & \mu_k I_m & 0\\
E_k & 0 &F_k
\end{bmatrix}
\begin{bmatrix}
\bm{\Delta X}_k\\
\Delta y_k\\
\bm{\Delta Z}_k
\end{bmatrix}
= \\ \begin{bmatrix}
(\bm{C} + \frac{\sigma_k \mu_k}{\mu_0}\bm{\bar{C}}) -A^\top y_k - \bm{Z}_k +\sigma_k\mu_k (\bm{X}_k - \bm{\Xi}_k)\\
-A \bm{X}_k -\sigma_k\mu_k (y_k - \lambda_k)+ (b +\frac{\sigma_k \mu_k}{\mu_0}\bar{b}) \\
-(Z_k^{\frac{1}{2}}\otimes Z_k^{\frac{1}{2}})\bm{X}_k + \sigma_{k} \mu_k \bm{I}_{n}
\end{bmatrix} + \begin{bmatrix}
\bm{\mathsf{E}}_{d,k}\\
\epsilon_{p,k}\\
\bm{\mathsf{E}}_{\mu,k}
\end{bmatrix},
\end{split}
\end{equation}
\noindent where $E_k = (Z_k^{\frac{1}{2}} \otimes Z_k^{\frac{1}{2}})$, $F_k = \frac{1}{2}\big(Z_k^{\frac{1}{2}}X_k \otimes Z_k^{-\frac{1}{2}} + Z_k^{-\frac{1}{2}} \otimes Z_k^{\frac{1}{2}}X_k \big)$, and $(\mathsf{E}_{d,k},\epsilon_{p,k},\mathsf{E}_{\mu,k})$ models potential errors that occur when solving the symmetrized version of system \eqref{exact non-vectorized Newton System} inexactly (e.g. by using a Krylov subspace method). In order to make sure that the computed direction is accurate enough, we impose the following accuracy conditions:
\begin{equation} \label{Krylov method termination conditions}
\|\bm{\mathsf{E}}_{\mu,k}\|_2 = 0,\qquad \|(\epsilon_{p,k},\bm{\mathsf{E}}_{d,k})\|_2 \leq \frac{\sigma_{\min}}{4\mu_0} K_N \mu_k,\qquad \|(\epsilon_{p,k},\bm{\mathsf{E}}_{d,k})\|_{\mathcal{S}} \leq \frac{\sigma_{\min}}{4\mu_0} \gamma_{\mathcal{S}}\rho \mu_k,
\end{equation}
\noindent where $\sigma_{\min}$ is the minimum allowed value for $\sigma_k$, $K_N,\ \gamma_{\mathcal{S}}$ are constants defined by the neighbourhood in \eqref{Small neighbourhood}, and $\rho$ is defined with the starting point in \eqref{starting point}. Notice that the condition $\|\bm{\mathsf{E}}_{\mu,k}\|_2 = 0$ is imposed without loss of generality, since it can easily be satisfied in practice. For more on this, see the discussion in \cite[Section 3]{ZhouToh_MATH_PROG} and \cite[Lemma 4.1]{Gu_SIAM_J_OPT}. Furthermore, as we will observe in Section \ref{section Polynomial Convergence}, the bound on the error with respect to the semi-norm defined in \eqref{semi-norm definition} is required to ensure polynomial complexity of the method. While this semi-norm is not particularly practical to evaluate (and it is never evaluated in practice; see, e.g., \cite{Berga_et_al_NLAA,deSimone_et_al_arxiv,Pougk_Gond_COAP}), it can be evaluated in polynomial time (see \cite[Section 4]{MizuJarr_MATH_PROG}), and hence does not affect the polynomial nature of the algorithm. The algorithmic scheme of the method is summarized in Algorithm \ref{Algorithm PMM-IPM}.
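For a concrete picture of \eqref{inexact vectorized Newton System}, the sketch below assembles the block matrix for a tiny random instance and solves it directly, in which case the error terms vanish and the accuracy conditions \eqref{Krylov method termination conditions} hold trivially. It assumes column-major vectorization, so that $(A \otimes B)\textnormal{vec}(M) = \textnormal{vec}(BMA^\top)$, and a random matrix standing in for the vectorized constraint matrix $A$:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 3, 2
n2 = n * n

def sqrtm_spd(M):
    w, V = np.linalg.eigh(M)
    return (V * np.sqrt(w)) @ V.T

def rand_spd(n):
    Q = rng.standard_normal((n, n))
    return Q @ Q.T + n * np.eye(n)

X, Z = rand_spd(n), rand_spd(n)
A = rng.standard_normal((m, n2))     # stand-in for the vectorized constraints
mu = np.trace(X @ Z) / n

Zh = sqrtm_spd(Z)
Zmh = np.linalg.inv(Zh)
E = np.kron(Zh, Zh)
F = 0.5 * (np.kron(Zh @ X, Zmh) + np.kron(Zmh, Zh @ X))

# Block matrix of the vectorized Newton system.
M_newton = np.block([
    [-mu * np.eye(n2), A.T,               np.eye(n2)],
    [A,                mu * np.eye(m),    np.zeros((m, n2))],
    [E,                np.zeros((n2, m)), F],
])
rhs = rng.standard_normal(n2 + m + n2)   # stand-in right-hand side
d = np.linalg.solve(M_newton, rhs)
# A direct solve leaves zero inexactness: all error terms of the system vanish.
residual = np.linalg.norm(M_newton @ d - rhs)
assert residual < 1e-8
```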
\begin{algorithm}[!ht]
\caption{Interior Point-Proximal Method of Multipliers}
\label{Algorithm PMM-IPM}
\textbf{Input:} $\mathcal{A}, b, C$, $\text{tol}$.\\
\textbf{Parameters:} $0< \sigma_{\min} \leq \sigma_{\max} \leq 0.5$, $K_N > 0$, $0<\gamma_{\mathcal{S}} < 1,\ 0<\gamma_{\mu} < 1$.\\
\textbf{Starting point:} Set as in \eqref{starting point}.
\begin{algorithmic}
\For {($k= 0,1,2,\cdots$)}
\If {$\Bigg(\bigg(\|A\bm{X}_k - b\|_2 < \text{tol}\bigg) \wedge \bigg(\|\bm{C} - A^\top y_k - \bm{Z}_k\|_2 < \text{tol}\bigg) \wedge \bigg(\frac{\langle X_k, Z_k \rangle}{n} < \text{tol}\bigg)\Bigg)$}
\State \Return $(X_k,y_k,Z_k)$.
\Else
\State Choose $\sigma_k \in [\sigma_{\min},\sigma_{\max}]$ and solve \eqref{inexact vectorized Newton System} so that \eqref{Krylov method termination conditions} holds.
\State Choose $\alpha_k$, as the largest $\alpha \in (0,1]$, s.t. $\mu_k(\alpha) \leq \ (1-0.01 \alpha)\mu_k$, and:
\begin{equation*}
\begin{split}
(X_k + \alpha & \Delta X_k, y_k + \alpha \Delta y_k, Z_k + \alpha \Delta Z_k) \in \ \mathscr{N}_{\mu_k(\alpha)}(\Xi_k,\lambda_k),\\
\text{where }\ \ & \mu_{k}(\alpha) = \frac{\langle X_k + \alpha \Delta X_k,Z_k + \alpha \Delta Z_k\rangle}{n}.
\end{split}
\end{equation*}
\State Set $(X_{k+1},y_{k+1},Z_{k+1}) = (X_k + \alpha_k \Delta X_k, y_k + \alpha_k \Delta y_k, Z_k + \alpha_k \Delta Z_k)$.
\State Set $\mu_{k+1} = \frac{\langle X_{k+1},Z_{k+1}\rangle}{n}$.
\State Let $r_p = A\bm{X}_{k+1} - (b + \frac{\mu_{k+1}}{\mu_0}\bar{b})$, $\bm{R}_d = (\bm{C} + \frac{\mu_{k+1}}{\mu_0}\bm{\bar{C}})- A^\top y_{k+1} -\bm{Z}_{k+1}.$
\If {$\big(\|(r_p,\bm{R}_d)\|_2 \leq K_N \frac{\mu_{k+1}}{\mu_0} \big) \wedge \big(\|(r_p,\bm{R}_d)\|_{\mathcal{S}} \leq \gamma_{\mathcal{S}}\rho \frac{\mu_{k+1}}{\mu_0}\big)$}
\State $(\Xi_{k+1},\lambda_{k+1}) = (X_{k+1},y_{k+1})$.
\Else
\State $(\Xi_{k+1},\lambda_{k+1}) = (\Xi_{k},\lambda_{k})$.
\EndIf
\EndIf
\EndFor
\end{algorithmic}
\end{algorithm}
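The step-length computation in Algorithm \ref{Algorithm PMM-IPM} asks for the largest $\alpha \in (0,1]$ satisfying the sufficient-decrease and neighbourhood conditions; in practice this is typically approximated by backtracking. A minimal sketch, in which the full neighbourhood membership test is replaced by a positive-definiteness check (a simplification made for illustration only):

```python
import numpy as np

def is_pd(M):
    """Positive definiteness via an attempted Cholesky factorization."""
    try:
        np.linalg.cholesky(M)
        return True
    except np.linalg.LinAlgError:
        return False

def choose_alpha(X, Z, dX, dZ, mu, shrink=0.5, alpha_min=1e-10):
    """Backtrack from alpha = 1 until mu(alpha) <= (1 - 0.01*alpha)*mu
    and the new X, Z remain positive definite (a simplified acceptance
    test standing in for the full neighbourhood membership check)."""
    n = X.shape[0]
    alpha = 1.0
    while alpha > alpha_min:
        Xn, Zn = X + alpha * dX, Z + alpha * dZ
        mu_alpha = np.trace(Xn @ Zn) / n
        if mu_alpha <= (1 - 0.01 * alpha) * mu and is_pd(Xn) and is_pd(Zn):
            return alpha
        alpha *= shrink
    return alpha_min

n = 4
X = Z = np.eye(n)            # mu = 1
dX = dZ = -0.5 * np.eye(n)   # a direction reducing complementarity
alpha = choose_alpha(X, Z, dX, dZ, mu=1.0)
assert alpha == 1.0          # full step accepted: mu(1) = 0.25 <= 0.99
```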
\par Algorithm \ref{Algorithm PMM-IPM} deviates from standard IPM schemes due to the solution of a different Newton system, as well as due to the possible updates of the proximal estimates, i.e. $\Xi_k$ and $\lambda_k$. Notice that when these estimates are updated, the neighbourhood in \eqref{Small neighbourhood} changes as well, since it is parametrized by them. Intuitively, when this happens, the algorithm accepts the current iterate as a sufficiently accurate solution to the associated PMM sub-problem. However, as we will see in Section \ref{section Polynomial Convergence}, it is not necessary for these estimates to converge to a primal-dual solution, for Algorithm \ref{Algorithm PMM-IPM} to converge. Instead, it suffices to ensure that these estimates will remain bounded. In light of this, Algorithm \ref{Algorithm PMM-IPM} is not studied as an inner-outer scheme, but rather as a standard IPM scheme. We will return to this point at the end of Section \ref{section Polynomial Convergence}.
\section{Convergence Analysis} \label{section Polynomial Convergence}
\par In this section we prove polynomial complexity of Algorithm \ref{Algorithm PMM-IPM}, and establish its global convergence. The analysis is modeled after that in \cite{Pougk_Gond_COAP}. We make use of the following two standard assumptions, generalizing those employed in \cite{Pougk_Gond_COAP} to the SDP case.
\begin{assumption} \label{Assumption 1}
The problems \eqref{non-regularized primal} and \eqref{non-regularized dual} are strictly feasible, that is, Slater's constraint qualification holds for both problems. Furthermore, there exists an optimal solution $(X^*, y^*, Z^*)$ and a constant $K_* > 0$ independent of $n$ and $m$ such that $\|(\bm{X}^*,y^*,\bm{Z}^*)\|_F \leq K_* \sqrt{n}$.
\end{assumption}
\begin{assumption} \label{Assumption 2}
The vectorized constraint matrix $A$ of \textnormal{\eqref{non-regularized primal}} has full row rank, that is $\textnormal{rank}(A) = m$. Moreover, there exist constants $K_{A,1} > 0$, $K_{A,2} > 0$, $K_{r,1} >0 $, and $\ K_{r,2} > 0$, independent of $n$ and $m$, such that:
\begin{equation*}
\eta_{\min}(A) \geq K_{A,1},\quad \eta_{\max}(A) \leq K_{A,2},\quad \|b\|_{\infty}\leq K_{r,1},\quad \|C\|_2 \leq K_{r,2} \sqrt{n}.
\end{equation*}
\end{assumption}
\begin{Remark} \label{Remark 1 on assumptions}
\par Note that the independence of the previous constants from $n$ and $m$ is assumed for simplicity of exposition. In particular, as long as these constants depend polynomially on $n$ (or $m$), the analysis still holds, simply by altering the worst-case polynomial bound for the number of iterations of the algorithm (given later in Theorem \textnormal{\ref{Theorem complexity}}).
\end{Remark}
\begin{Remark} \label{Remark 2 on assumptions} Assumption \textnormal{\ref{Assumption 1}} is a direct extension of that in \textnormal{\cite[Assumption 1]{Pougk_Gond_COAP}}. Given the positive semi-definiteness of $X^*$ and $Z^*$, it implies that $\textnormal{Tr}(X^*) + \textnormal{Tr}(Z^*) \leq 2 K_* n$ (from equivalence of the norms $\|\cdot\|_1$ and $\|\cdot\|_2$), which is one of the assumptions employed in \textnormal{\cite{Zhang_SIAM_J_OPT,ZhouToh_MATH_PROG}}. Notice that we assume $n > m$, without loss of generality. The theory in this section would hold if $m > n$, simply by replacing $n$ by $m$ in the upper bound of the norm of the optimal solution as well as of the problem data.
\end{Remark}
\par Before proceeding with the convergence analysis, we briefly provide an outline of it, for the convenience of the reader. Firstly, it should be noted that polynomial complexity as well as global convergence of Algorithm \ref{Algorithm PMM-IPM} is proven by induction on the iterations $k$ of the method. To that end, we provide some necessary technical results in Lemmas \ref{Lemma non-expansiveness}--\ref{Lemma tilde point}. Then, in Lemma \ref{Lemma boundedness of x z} we are able to show that the iterates $(X_k,y_k,Z_k)$ of Algorithm \ref{Algorithm PMM-IPM} will remain bounded for all $k$. Subsequently, we provide some additional technical results in Lemmas \ref{Auxiliary Lemma bound on scaled matrices}--\ref{Auxiliary Lemma scaled rhs of third block of Newton system}, which are then used in Lemma \ref{Lemma boundedness Dx Dz}, where we show that the Newton direction computed at every iteration $k$ is also bounded. All the previous are utilized in Lemmas \ref{Lemma step-length-part 1}--\ref{Lemma step-length-part 2}, where we provide a lower bound for the step-length $\alpha_k$ chosen by Algorithm \ref{Algorithm PMM-IPM} at every iteration $k$. Then, $Q$-linear convergence of $\mu_k$ (with $R$-linear convergence of the regularized residuals) is shown in Theorem \ref{Theorem mu convergence}. Polynomial complexity is proven in Theorem \ref{Theorem complexity}, and finally, global convergence is established in Theorem \ref{Theorem convergence for the feasible case}.
\par Let us now use the properties of the proximal operator defined in \eqref{Primal Dual Proximal Operator}.
\begin{lemma} \label{Lemma non-expansiveness}
Given Assumption \textnormal{\ref{Assumption 1}}, and for all $\lambda \in \mathbb{R}^m$, $\Xi \in \mathcal{S}_+^n$ and $0 \leq \mu < \infty$, there exists a unique pair $(X_r^*,y_r^*)$, such that $(X_r^*,y_r^*) = \mathcal{P}(\Xi,\lambda),$ $X_r^* \in \mathcal{S}_+^n$, and
\begin{equation} \label{non-expansiveness property}
\|(\bm{X}_r^*,y_r^*)-(\bm{X}^*,y^*)\|_{2} \leq \|(\bm{\Xi},\lambda)-(\bm{X}^*,y^*)\|_{2},
\end{equation}
\noindent where $\mathcal{P}(\cdot)$ is defined as in \eqref{Primal Dual Proximal Operator}, and $(X^*,y^*)$ is such that $(0,0) \in T_{\mathcal{L}}(X^*,y^*)$.
\end{lemma}
\begin{proof}
\par The claim follows from the developments in \cite[Proposition 1]{ROCK_SIAM_J_CONTROL_OPT}.
\end{proof}
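The non-expansiveness in \eqref{non-expansiveness property} is a general property of resolvents of maximal monotone operators. In the special case of the operator $\partial \delta_{\mathcal{S}_+^n}$, the resolvent reduces to the Frobenius-norm projection onto $\mathcal{S}_+^n$, for which the property can be verified directly. The sketch below illustrates this special case only, not the full operator $\mathcal{P}$ of \eqref{Primal Dual Proximal Operator}:

```python
import numpy as np

def proj_psd(M):
    """Frobenius-norm projection of a symmetric matrix onto the PSD cone:
    zero out the negative eigenvalues."""
    w, V = np.linalg.eigh(M)
    return (V * np.maximum(w, 0.0)) @ V.T

rng = np.random.default_rng(1)
n = 6
for _ in range(100):
    A = rng.standard_normal((n, n)); A = 0.5 * (A + A.T)
    B = rng.standard_normal((n, n)); B = 0.5 * (B + B.T)
    # Non-expansiveness of the projection (the resolvent of the
    # subdifferential of the indicator function of the PSD cone):
    lhs = np.linalg.norm(proj_psd(A) - proj_psd(B), 'fro')
    rhs = np.linalg.norm(A - B, 'fro')
    assert lhs <= rhs + 1e-10
```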
\par In the following lemma, we bound the solution of every PMM sub-problem encountered by Algorithm \ref{Algorithm PMM-IPM}, while establishing bounds for the proximal estimates $\Xi_k$, and $\lambda_k$.
\begin{lemma} \label{Lemma-boundedness of optimal solutions for sub-problems}
Given Assumptions \textnormal{\ref{Assumption 1}, \ref{Assumption 2}}, there exists a triple $(X_{r_k}^*,y_{r_k}^*,Z_{r_k}^*)$, satisfying:
\begin{equation} \label{PMM optimal solution}
\begin{split}
A \bm{X}_{r_k}^* + \mu (y_{r_k}^*-\lambda_k) -b & = 0,\\
-\bm{C} + A^\top y_{r_k}^* + \bm{Z}_{r_k}^* - \mu (\bm{X}_{r_k}^* - \bm{\Xi}_k)& = 0,\\
\langle X_{r_k}^*, Z_{r_k}^*\rangle & = 0,
\end{split}
\end{equation}
\noindent with $X_{r_k}^*, Z_{r_k}^* \in \mathcal{S}^n_+$, and $\|(\bm{X}_{r_k}^*,y_{r_k}^*,\bm{Z}_{r_k}^*)\|_2 = O(\sqrt{n})$, for all $\lambda_k \in \mathbb{R}^m$, $\Xi_k \in \mathcal{S}_+^n$, produced by Algorithm \textnormal{\ref{Algorithm PMM-IPM}}, and any $\mu \in [0,\infty)$. Moreover, $\|(\bm{\Xi}_k,\lambda_k)\|_2 = O(\sqrt{n})$, for all $k \geq 0$.
\end{lemma}
\begin{proof}
\par We prove the claim by induction on the iterates, $k \geq 0$, of Algorithm \ref{Algorithm PMM-IPM}. At iteration $k = 0$, we have that $\lambda_0 = y_0$ and $\Xi_0 = X_0$. But from the construction of the starting point in \eqref{starting point}, we know that $\|(X_0,y_0)\|_2 = O(\sqrt{n})$. Hence, $\|(\Xi_0,\lambda_0)\|_2 = O(\sqrt{n})$ (assuming $n > m$). Invoking Lemma \ref{Lemma non-expansiveness}, there exists a unique pair $(X_{r_0}^*,y_{r_0}^*)$ such that:
$$(X_{r_0}^*,y_{r_0}^*) = \mathcal{P}_0(\Xi_0,\lambda_0),\qquad \|(\bm{X}_{r_0}^*,y_{r_0}^*) - (\bm{X}^*,y^*)\|_{2} \leq \|(\bm{\Xi}_0,\lambda_0)-(\bm{X}^*,y^*)\|_{2},$$
\noindent where $(X^*,y^*,Z^*)$ solves \eqref{non-regularized primal}--\eqref{non-regularized dual}, and from Assumption \ref{Assumption 1}, is such that $\|(\bm{X}^*,y^*,\bm{Z}^*)\|_2 = O(\sqrt{n})$. Using the triangle inequality and combining the latter inequality with our previous observations yields that $\|(\bm{X}_{r_0}^*,y_{r_0}^*)\|_2 = O(\sqrt{n})$. From the definition of the operator in \eqref{PMM Operator Subproblem}, we know that:
\begin{equation*}
-C + \mathcal{A}^* y_{r_0}^* - \mu (X_{r_0}^* - \Xi_0) \ \in \partial \delta_{\mathcal{S}_+^n}(X_{r_0}^*),\qquad
\mathcal{A}X_{r_0}^* + \mu (y_{r_0}^*-\lambda_0) - b \ = 0,
\end{equation*}
\noindent where $\partial(\delta_{\mathcal{S}_+^n}(\cdot))$ is the sub-differential of the indicator function defined in \eqref{Indicator function}. Hence, there must exist $-Z_{r_0}^* \in \partial \delta_{\mathcal{S}_+^n}(X_{r_0}^*)$ (and thus, $Z_{r_0}^* \in \mathcal{S}^n_{+}$, $\langle X_{r_0}^*,Z_{r_0}^* \rangle = 0$), such that:
\[Z_{r_0}^* = C - \mathcal{A}^* y_{r_0}^* + \mu (X_{r_0}^* - \Xi_0),\quad \langle X^*_{r_0},Z^*_{r_0}\rangle = 0,\quad \|\bm{Z}_{r_0}^*\|_2 = O(\sqrt{n}),\]
\noindent where $\|\bm{Z}_{r_0}^*\|_2 = O(\sqrt{n})$ follows from Assumption \ref{Assumption 2}, combined with $\|(\bm{X}^*_{r_0},y^*_{r_0})\|_2 = O(\sqrt{n})$.
\par Let us now assume that at some iteration $k$ of Algorithm \ref{Algorithm PMM-IPM}, we have $\|(\bm{\Xi}_k,\lambda_k)\|_2 = O(\sqrt{n})$. There are two cases for the subsequent iterations:
\begin{itemize}
\item[\textbf{1.}] The proximal estimates are updated, that is $(\Xi_{k+1},\lambda_{k+1}) = (X_{k+1},y_{k+1})$, or
\item[\textbf{2.}] the proximal estimates stay the same, i.e. $(\Xi_{k+1},\lambda_{k+1}) = (\Xi_k,\lambda_k)$.
\end{itemize}
\par \textbf{Case 1.} We know by construction that this occurs only if the following is satisfied:
$$\|(r_p,\bm{R}_d)\|_2 \leq K_N \frac{\mu_{k+1}}{\mu_0},$$
\noindent where $r_p,\ R_d$ are defined in Algorithm \ref{Algorithm PMM-IPM}. However, from the neighbourhood conditions in \eqref{Small neighbourhood}, we know that:
$$\|\big(r_p + \mu_{k+1}(y_{k+1}-\lambda_k), \bm{R}_d + \mu_{k+1}(\bm{X}_{k+1}-\bm{\Xi}_k)\big)\|_2 \leq K_N \frac{\mu_{k+1}}{\mu_0}.$$
\noindent Combining the last two inequalities by applying the triangle inequality, and using the inductive hypothesis ($\|(\bm{\Xi}_k,\lambda_k)\|_2 = O(\sqrt{n})$), yields that
\[ \|(\bm{X}_{k+1},y_{k+1})\|_2 \leq \frac{2K_N}{\mu_0} + \|(\bm{\Xi}_k,\lambda_k)\|_2 = O(\sqrt{n}).\]
\noindent Hence, $\|(\bm{\Xi}_{k+1},\lambda_{k+1})\|_2 = O(\sqrt{n})$. Then, we can invoke Lemma \ref{Lemma non-expansiveness}, with $\lambda = \lambda_{k+1}$, $\Xi = \Xi_{k+1}$ and any $\mu \geq 0$, which gives
$$\|(\bm{X}_{r_{k+1}}^*,y_{r_{k+1}}^*) - (\bm{X}^*,y^*)\|_{2} \leq \|(\bm{\Xi}_{k+1},\lambda_{k+1})-(\bm{X}^*,y^*)\|_{2}.$$
\noindent A simple manipulation shows that $\|(\bm{X}_{r_{k+1}}^*,y_{r_{k+1}}^*)\|_2 = O(\sqrt{n})$. As before, we use \eqref{PMM Operator Subproblem} alongside Assumption \ref{Assumption 2} to show the existence of $-Z_{r_{k+1}}^* \in \partial \delta_{\mathcal{S}^n_{+}}(X_{r_{k+1}}^*)$, such that the triple $(X_{r_{k+1}}^*,y_{r_{k+1}}^*,Z_{r_{k+1}}^*)$ satisfies \eqref{PMM optimal solution} with $\|\bm{Z}_{r_{k+1}}^*\|_2 = O(\sqrt{n})$.
\par \textbf{Case 2.} In this case, we have $(\Xi_{k+1},\lambda_{k+1}) = (\Xi_k,\lambda_k)$, and hence the inductive hypothesis gives us directly that $\|(\bm{\Xi}_{k+1},\lambda_{k+1})\|_2 = O(\sqrt{n})$. As before, there exists a triple $(X_{r_{k+1}}^*,y_{r_{k+1}}^*,Z_{r_{k+1}}^*)$ satisfying \eqref{PMM optimal solution}, with $\|(\bm{X}_{r_{k+1}}^*,y_{r_{k+1}}^*,\bm{Z}_{r_{k+1}}^*)\|_2 = O(\sqrt{n})$.
\end{proof}
\par In the next lemma we define and bound a triple solving a particular parametrized non-linear system of equations, which is then used in Lemma \ref{Lemma boundedness of x z} in order to prove boundedness of the iterates of Algorithm \ref{Algorithm PMM-IPM}.
\begin{lemma} \label{Lemma tilde point}
Given Assumptions \textnormal{\ref{Assumption 1}, \ref{Assumption 2}}, and $(\Xi_k,\lambda_k)$, produced at an arbitrary iteration $k \geq 0$ of Algorithm \textnormal{\ref{Algorithm PMM-IPM}}, and any $\mu \in [0,\infty)$, there exists a triple $(\tilde{X},\tilde{y},\tilde{Z})$ which satisfies the following system of equations:
\begin{equation} \label{tilde point conditions}
\begin{split}
A \bm{\tilde{X}} + \mu \tilde{y} & = b + \bar{b} + \mu \lambda_k + \tilde{b}_k,\\
A^\top \tilde{y} + \bm{\tilde{Z}} - \mu \bm{\tilde{X}} & = \bm{C} + \bm{\bar{C}} - \mu \bm{\Xi}_k + \bm{\tilde{C}}_k,\\
\tilde{X}\tilde{Z} & = \theta I_n,
\end{split}
\end{equation}
\noindent for some arbitrary $\theta > 0$ ($\theta = \Theta(1)$), with $\tilde{X},\ \tilde{Z} \in \mathcal{S}^n_{++}$ and $\|(\bm{\tilde{X}},\tilde{y},\bm{\tilde{Z}})\|_2 = O(\sqrt{n})$, where $\tilde{b}_{k},\ \tilde{C}_{k}$ are defined in \eqref{Small neighbourhood}, while $\bar{b},\ \bar{C}$ are defined with the starting point in \eqref{starting point}. Furthermore, $\nu_{\min}(\tilde{X}) \geq \xi$ and $\nu_{\min}(\tilde{Z}) \geq \xi$, for some positive $\xi = \Theta(1)$.
\end{lemma}
\begin{proof}
\par Let $k \geq 0$ denote an arbitrary iteration of Algorithm \ref{Algorithm PMM-IPM}. Let also $\bar{b},\ \bar{C}$ be as defined in \eqref{starting point}, and $\tilde{b}_{k},\ \tilde{C}_{k}$ as defined in the neighbourhood conditions in \eqref{Small neighbourhood}. Given an arbitrary positive constant $\theta > 0$, we consider the following barrier primal-dual pair:
\begin{equation} \label{tilde non-regularized primal}
\underset{X \in \mathcal{S}^n}{\text{min}} \ \big( \langle C+\bar{C} + \tilde{C}_k,X\rangle -\theta \ln (\det (X)) \big), \ \ \text{s.t.} \ \mathcal{A} X= b + \bar{b} + \tilde{b}_k,
\end{equation}
\begin{equation} \label{tilde non-regularized dual}
\underset{y \in \mathbb{R}^m,Z \in \mathcal{S}^n}{\text{max}} \ \big((b + \bar{b} + \tilde{b}_k)^\top y +\theta \ln(\det(Z))\big), \ \ \text{s.t.}\ \mathcal{A}^*y + Z = C+\bar{C} + \tilde{C}_k.
\end{equation}
\par Let us now define the following triple:
\begin{equation*}
(\hat{X},\hat{y},\hat{Z}) \coloneqq \arg \min_{(X,y,Z)} \big\{\|(\bm{X},\bm{Z})\|_2: A\bm{X} = \tilde{b}_k,\ A^\top y + \bm{Z} = \bm{\tilde{C}}_k \big\}.
\end{equation*}
\noindent From the neighbourhood conditions \eqref{Small neighbourhood}, we know that $\|(\tilde{b}_k,\bm{\tilde{C}}_k)\|_{\mathcal{S}} \leq \gamma_{\mathcal{S}}\rho$, and from the definition of the semi-norm in \eqref{semi-norm definition}, we have that $\|(\bm{\hat{X}},\bm{\hat{Z}})\|_2 \leq \gamma_{\mathcal{S}} \rho$. Using \eqref{semi-norm definition} alongside Assumption \ref{Assumption 2}, we can also show that $\|\hat{y}\|_2 = \Theta(\|(\bm{\hat{X}},\bm{\hat{Z}})\|_2)$. On the other hand, from the definition of the starting point, we have that $(X_0,Z_0) = \rho(I_n,I_n)$. By defining the following auxiliary point:
$$(\bar{X},\bar{y},\bar{Z}) = (X_0,y_0,Z_0) + (\hat{X},\hat{y},\hat{Z}),$$
\noindent we have that $(1 + \gamma_{\mathcal{S}})\rho(I_n,I_n) \succeq (\bar{X},\bar{Z}) \succeq (1-\gamma_{\mathcal{S}})\rho(I_n,I_n)$, that is, the eigenvalues of these matrices are bounded by constants that are independent of the problem under consideration. By construction, the triple $(\bar{X},\bar{y},\bar{Z})$ is a feasible solution for the primal-dual pair in \eqref{tilde non-regularized primal}--\eqref{tilde non-regularized dual}, giving bounded primal and dual objective values, respectively. This, alongside Weierstrass's theorem on a potential function, can be used to show that the solution of problem \eqref{tilde non-regularized primal}--\eqref{tilde non-regularized dual} is bounded. In other words, for any choice of $\theta > 0$, there must exist a bounded triple $(X_s^*,y_s^*,Z_s^*)$ solving \eqref{tilde non-regularized primal}--\eqref{tilde non-regularized dual}, i.e.:
\begin{equation*}
\begin{split}
A\bm{X}_s^* = b + \bar{b} + \tilde{b}_k,\quad A^\top y_s^* + \bm{Z}_s^* = \bm{C} + \bm{\bar{C}} + \bm{\tilde{C}}_k,\quad
X_s^* Z_s^* = \theta I_n,
\end{split}
\end{equation*}
\noindent such that $\nu_{\max}(X_s^*) \leq K_{s^*}$ and $\nu_{\max}(Z_s^*) \leq K_{s^*}$, where $K_{s^*} > 0$ is a positive constant. In turn, combining this with Assumption \ref{Assumption 2} implies that $\|(\bm{X}_s^*,y_s^*,\bm{Z}_s^*)\|_2 = O(\sqrt{n})$.
\par Let us now apply the PMM to \eqref{tilde non-regularized primal}--\eqref{tilde non-regularized dual}, given the estimates $\Xi_k,\ \lambda_k$. We should note at this point that the proximal operator used here is different from that in \eqref{Primal Dual Proximal Operator}, since it is based on a different maximal monotone operator to that in \eqref{Primal Dual Maximal Monotone Operator}. In particular, we associate a single-valued maximal monotone operator to \eqref{tilde non-regularized primal}--\eqref{tilde non-regularized dual}, with graph:
\begin{equation*}
\tilde{T}_{\mathcal{L}}(X,y) \coloneqq \big\{(V,u): V = (C + \bar{C} + \tilde{C}_k) - \mathcal{A}^*y - \theta X^{-1}, u = \mathcal{A}X-(b+\bar{b}+\tilde{b}_k) \big\}.
\end{equation*}
\noindent As before, the proximal operator is defined as $\tilde{\mathcal{P}} \coloneqq (I_{n+m}+ \tilde{T}_{\mathcal{L}})^{-1}$, and is single-valued and non-expansive. Fix any $\mu \in [0,\infty)$ and define the following penalty function:
\begin{equation*}
\begin{split}
\tilde{\mathcal{L}}_{\mu,\theta}(X;\Xi_k,\lambda_k) \coloneqq \ & \langle C + \bar{C} + \tilde{C}_k, X\rangle +
\frac{1}{2}\mu \|X-\Xi_k\|_{F}^2 + \frac{1}{2\mu}\|\mathcal{A}X-(b+\bar{b}+\tilde{b}_k)\|_{2}^2 \\ & - (\lambda_k)^\top (\mathcal{A}X - (b+\bar{b}+\tilde{b}_k))-\theta \ln(\det(X)).
\end{split}
\end{equation*}
\noindent By defining the variables $y = \lambda_k - \frac{1}{\mu}(\mathcal{A}X - (b+\bar{b}+\tilde{b}_k))$ and $Z = \theta X^{-1}$, we can see that the optimality conditions of this PMM sub-problem are exactly those stated in \eqref{tilde point conditions}. Equivalently, we can find a pair $(\tilde{X},\tilde{y})$ such that $(\tilde{X},\tilde{y}) = \tilde{\mathcal{P}}(\Xi_k,\lambda_k)$ and set $\tilde{Z} = \theta \tilde{X}^{-1}$. We can now use the non-expansiveness of $\tilde{\mathcal{P}}$, as in Lemma \ref{Lemma non-expansiveness}, to obtain:
$$\|(\bm{\tilde{X}},\tilde{y})-(\bm{X}_s^*,y_s^*)\|_{2} \leq \|(\bm{\Xi}_k,\lambda_k)-(\bm{X}_s^*,y_s^*)\|_{2}.$$
\noindent But we know, from Lemma \ref{Lemma-boundedness of optimal solutions for sub-problems}, that $\|(\bm{\Xi}_k,\lambda_k)\|_2 = O(\sqrt{n})$, $\forall\ k \geq 0$. Combining this with our previous observations yields that $\|(\bm{\tilde{X}},\tilde{y})\|_2 = O(\sqrt{n})$. Setting $\tilde{Z} = \theta\tilde{X}^{-1}$ gives a triple $(\tilde{X},\tilde{y},\tilde{Z})$ that satisfies \eqref{tilde point conditions}, while $\|(\bm{\tilde{X}},\tilde{y},\bm{\tilde{Z}})\|_2 = O(\sqrt{n})$ (from dual feasibility).
\par To conclude the proof, let us notice that the value of $\tilde{\mathcal{L}}_{\mu,\theta}(X;\Xi_k,\lambda_k)$ will grow unbounded as $\nu_{\min}(X) \rightarrow 0$ or $\nu_{\max}(X) \rightarrow \infty$. Hence, there must exist a constant $\tilde{K} > 0$, such that the minimizer of this function satisfies $\frac{1}{\tilde{K}} \leq \nu_{\min}(\tilde{X}) \leq \nu_{\max}(\tilde{X}) \leq \tilde{K}$. The relation $\tilde{X}\tilde{Z} = \theta I_n$ then implies that $\frac{\theta}{\tilde{K}} \leq \nu_{\min}(\tilde{Z}) \leq \nu_{\max}(\tilde{Z}) \leq \theta \tilde{K}$. Hence, there exists some $\xi = \Theta(1)$ such that $\nu_{\min}(\tilde{X}) \geq \xi$ and $\nu_{\min}(\tilde{Z}) \geq \xi$.
\end{proof}
\noindent In the following lemma, we derive boundedness of the iterates of Algorithm \ref{Algorithm PMM-IPM}.
\begin{lemma} \label{Lemma boundedness of x z}
Given Assumptions \textnormal{\ref{Assumption 1}} and \textnormal{\ref{Assumption 2}}, the iterates $(X_k,y_k,Z_k)$ produced by Algorithm \textnormal{\ref{Algorithm PMM-IPM}}, for all $k \geq 0$, are such that:
$$\textnormal{Tr}(X_k) = O(n),\qquad \textnormal{Tr}(Z_k) = O(n),\qquad \|(\bm{X}_k,y_k,\bm{Z}_k)\|_2 = O(n).$$
\end{lemma}
\begin{proof}
\par Let an iterate $(X_k,y_k,Z_k) \in \mathscr{N}_{\mu_k}(\Xi_k,\lambda_k)$, produced by Algorithm \ref{Algorithm PMM-IPM} during an arbitrary iteration $k \geq 0$, be given. Firstly, we invoke Lemma \ref{Lemma tilde point}, from which we have a triple $(\tilde{X},\tilde{y},\tilde{Z})$ satisfying \eqref{tilde point conditions}, for $\mu = \mu_k$. Similarly, by invoking Lemma \ref{Lemma-boundedness of optimal solutions for sub-problems}, we know that there exists a triple $(X_{r_k}^*,y_{r_k}^*,Z_{r_k}^*)$ satisfying \eqref{PMM optimal solution}, with $\mu = \mu_k$. Consider the following auxiliary point:
\begin{equation} \label{auxiliary triple 1}
\bigg((1-\frac{\mu_k}{\mu_0})X_{r_k}^* + \frac{\mu_k}{\mu_0} \tilde{X} - X_k,\ (1-\frac{\mu_k}{\mu_0})y_{r_k}^* +\frac{\mu_k}{\mu_0} \tilde{y} - y_k,\ (1-\frac{\mu_k}{\mu_0})Z_{r_k}^* + \frac{\mu_k}{\mu_0} \tilde{Z} - Z_k\bigg).
\end{equation}
\noindent Using \eqref{auxiliary triple 1} and \eqref{PMM optimal solution}--\eqref{tilde point conditions} (for $\mu = \mu_k$), one can observe that:
\begin{equation*}
\begin{split}
A\big((1-\frac{\mu_k}{\mu_0})\bm{X}_{r_k}^* + \frac{\mu_k}{\mu_0} \bm{\tilde{X}} - \bm{X}_k\big) + \mu_k \big((1-\frac{\mu_k}{\mu_0})y_{r_k}^* + \frac{\mu_k}{\mu_0} \tilde{y} - y_k\big) = \\
(1-\frac{\mu_k}{\mu_0})(A\bm{X}_{r_k}^* + \mu_k y_{r_k}^*) + \frac{\mu_k}{\mu_0} (A\bm{\tilde{X}}+ \mu_k \tilde{y}) - A\bm{X}_k -\mu_k y_k =\\
(1-\frac{\mu_k}{\mu_0}) (b + \mu_k \lambda_k) + \frac{\mu_k}{\mu_0} (b + \mu_k \lambda_k + \tilde{b}_k +\bar{b}) - A\bm{X}_k - \mu_k y_k =\\
b +\mu_k \lambda_k + \frac{\mu_k}{\mu_0}(\tilde{b}_k+\bar{b}) - A\bm{X}_k - \mu_k y_k = &\ 0,
\end{split}
\end{equation*}
\noindent where the last equality follows from the definition of the neighbourhood $\mathscr{N}_{\mu_k}(\Xi_k,\lambda_k)$. Similarly, one can show that:
\begin{equation*}
-\mu_k \big((1-\frac{\mu_k}{\mu_0})\bm{X}_{r_k}^* + \frac{\mu_k}{\mu_0} \bm{\tilde{X}} - \bm{X}_k\big) + A^\top \big((1-\frac{\mu_k}{\mu_0})y_{r_k}^* +\frac{\mu_k}{\mu_0} \tilde{y} - y_k\big) + \big((1-\frac{\mu_k}{\mu_0})\bm{Z}_{r_k}^* + \frac{\mu_k}{\mu_0} \bm{\tilde{Z}} - \bm{Z}_k\big) = 0.
\end{equation*}
\noindent By combining the previous two relations, we have:
\begin{equation} \label{Lemma boundedness of x,z, relation 1}
\begin{split}
\big((1-\frac{\mu_k}{\mu_0})\bm{X}_{r_k}^* + \frac{\mu_k}{\mu_0} \bm{\tilde{X}} - \bm{X}_k\big)^\top \big((1-\frac{\mu_k}{\mu_0})\bm{Z}_{r_k}^* + \frac{\mu_k}{\mu_0} \bm{\tilde{Z}} - \bm{Z}_k\big) = &\\
\mu_k\big((1-\frac{\mu_k}{\mu_0})\bm{X}_{r_k}^* + \frac{\mu_k}{\mu_0}\bm{\tilde{X}} - \bm{X}_k\big)^\top \big((1-\frac{\mu_k}{\mu_0})\bm{X}_{r_k}^* + \frac{\mu_k}{\mu_0} \bm{\tilde{X}} - \bm{X}_k\big)\ +\\ \mu_k \big((1-\frac{\mu_k}{\mu_0})y_{r_k}^* + \frac{\mu_k}{\mu_0} \tilde{y} - y_k\big)^\top \big((1-\frac{\mu_k}{\mu_0})y_{r_k}^* + \frac{\mu_k}{\mu_0} \tilde{y} - y_k\big) \geq &\ 0.
\end{split}
\end{equation}
\noindent Observe that \eqref{Lemma boundedness of x,z, relation 1} can equivalently be written as:
\begin{equation*}
\begin{split}
\big\langle(1-\frac{\mu_k}{\mu_0})X_{r_k}^* + \frac{\mu_k}{\mu_0} \tilde{X}, Z_k \big\rangle + \big\langle (1-\frac{\mu_k}{\mu_0})Z_{r_k}^* + \frac{\mu_k}{\mu_0}\tilde{Z}, X_k \big\rangle \leq \\
\big\langle(1-\frac{\mu_k}{\mu_0})X_{r_k}^* + \frac{\mu_k}{\mu_0} \tilde{X},(1-\frac{\mu_k}{\mu_0})Z_{r_k}^* + \frac{\mu_k}{\mu_0} \tilde{Z}\big\rangle + \langle X_k, Z_k\rangle.
\end{split}
\end{equation*}
\noindent However, from Lemmas \ref{Lemma-boundedness of optimal solutions for sub-problems} and \ref{Lemma tilde point}, we have that $\tilde{X} \succeq \xi I_n$ and $\tilde{Z} \succeq \xi I_n$, for some positive constant $\xi = \Theta(1)$, $\langle X_{r_k}^*,Z_k \rangle \geq 0$, $\langle Z_{r_k}^*, X_k\rangle \geq 0$, while $\|(X_{r_k}^*,Z_{r_k}^*)\|_F = O(\sqrt{n})$, and $\|(\tilde{X},\tilde{Z})\|_F = O(\sqrt{n})$. Furthermore, by definition we have that $ n \mu_k = \langle X_k,Z_k \rangle$. By combining all the previous, we obtain:
\begin{equation} \label{Lemma boundedness of x,z, relation 2}
\begin{split}
\frac{\mu_k}{\mu_0} \xi \big(\textnormal{Tr}(X_k) + \textnormal{Tr}(Z_k) \big) = \\
\frac{\mu_k}{\mu_0} \xi\big(\langle I_n, X_k\rangle + \langle I_n, Z_k\rangle\big) \leq \\
\big\langle(1-\frac{\mu_k}{\mu_0})X_{r_k}^* + \frac{\mu_k}{\mu_0} \tilde{X}, Z_k \big\rangle + \big\langle(1-\frac{\mu_k}{\mu_0})Z_{r_k}^* + \frac{\mu_k}{\mu_0} \tilde{Z}, X_k\big\rangle \leq \\
\big\langle(1-\frac{\mu_k}{\mu_0})X_{r_k}^* + \frac{\mu_k}{\mu_0} \tilde{X},(1-\frac{\mu_k}{\mu_0})Z_{r_k}^* + \frac{\mu_k}{\mu_0} \tilde{Z}\big\rangle + \langle X_k, Z_k \rangle = \\
\frac{\mu_k}{\mu_0}(1-\frac{\mu_k}{\mu_0}) \langle X_{r_k}^*, \tilde{Z}\rangle + \frac{\mu_k}{\mu_0} (1-\frac{\mu_k}{\mu_0}) \langle\tilde{X}, Z_{r_k}^*\rangle + (\frac{\mu_k}{\mu_0})^2 \langle\tilde{X}, \tilde{Z} \rangle + \langle X_k, Z_k \rangle = &\ O(n \mu_k ),
\end{split}
\end{equation}
\noindent where the first inequality follows since $X_{r_k}^*,\ Z_{r_k}^*,\ \tilde{X},\ \tilde{Z} \in \mathcal{S}^n_+$ and $(\tilde{X},\tilde{Z}) \succeq \xi (I_n,I_n)$. In the penultimate equality we used \eqref{PMM optimal solution} (i.e. $\langle X_{r_k}^*,Z_{r_k}^*\rangle = 0$). Hence, (\ref{Lemma boundedness of x,z, relation 2}) implies that:
$$\textnormal{Tr}(X_k) = O(n), \qquad \textnormal{Tr}(Z_k) = O(n).$$
\noindent From positive definiteness we have that $\|(X_k,Z_k)\|_F = O(n)$. Finally, from the neighbourhood conditions we know that:
$$\bm{C} - A^\top y_k - \bm{Z}_k + \mu_k (\bm{X}_k - \bm{\Xi}_k) + \frac{\mu_k}{\mu_0} (\bm{\tilde{C}}_k + \bm{\bar{C}}) = 0.$$
\noindent All terms above (except for $y_k$) have a 2-norm that is bounded by some quantity that is $O(n)$ (note that $\|(\bm{\bar{C}},\bar{b})\|_2 = O(\sqrt{n})$ using Assumption \ref{Assumption 2} and the definition in \eqref{starting point}). Hence, using again Assumption \ref{Assumption 2} (i.e. $A$ is full rank, with singular values independent of $n$ and $m$) yields that $\|y_k\|_2 = O(n)$, and completes the proof.
\end{proof}
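The orthogonality argument in the proof above (leading to \eqref{Lemma boundedness of x,z, relation 1}) relies only on the two linear identities satisfied by the differences; it can be verified numerically on random data. A small sketch, working directly with vectorized quantities and a random stand-in for $A$:

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 4, 3
n2 = n * n
mu = 0.7
A = rng.standard_normal((m, n2))  # stand-in for the vectorized constraint matrix

# Construct (dX, dy, dZ) satisfying the two identities used in the proof:
#   A dX + mu dy = 0   and   -mu dX + A^T dy + dZ = 0.
dX = rng.standard_normal(n2)
dy = -(A @ dX) / mu
dZ = mu * dX - A.T @ dy

# Then <dX, dZ> = mu(||dX||^2 + ||dy||^2) >= 0, as in the lemma.
inner = dX @ dZ
assert np.isclose(inner, mu * (dX @ dX + dy @ dy))
assert inner >= 0
```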
\par In what follows, we provide Lemmas \ref{Auxiliary Lemma bound on scaled matrices}--\ref{Auxiliary Lemma scaled rhs of third block of Newton system}, which we use to prove boundedness of the Newton direction computed at every iteration of Algorithm \ref{Algorithm PMM-IPM}, in Lemma \ref{Lemma boundedness Dx Dz}.
\begin{lemma} \label{Auxiliary Lemma bound on scaled matrices}
Let $D_k = S_k^{-\frac{1}{2}}F_k = S_k^{\frac{1}{2}}E_k^{-1}$, where $S_k = E_k F_k$, and $E_k,\ F_k$ are defined as in the Newton system in \eqref{inexact vectorized Newton System}. Then, for any $M \in \mathbb{R}^{n\times n}$,
\begin{equation*}
\|D_k^{-T} \bm{M}\|^2_2 \leq \frac{1}{(1-\gamma_{\mu})\mu_k}\|Z_k^{\frac{1}{2}}M Z_k^{\frac{1}{2}}\|_F^2,\quad \|D_k\bm{M}\|_2^2 \leq \frac{1}{(1-\gamma_{\mu})\mu_k} \|X_k^{\frac{1}{2}}M X_k^{\frac{1}{2}}\|_F^2,
\end{equation*}
\noindent where $\gamma_{\mu}$ is defined in \eqref{Small neighbourhood}. Moreover, we have that:
\begin{equation*}
\|D_k^{-T}\|_2^2 \leq \frac{1}{(1-\gamma_{\mu})\mu_k} \|Z_k\|_F^2 = O\bigg(\frac{n^2}{\mu_k}\bigg),\qquad \|D_k\|_2^2 \leq \frac{1}{(1-\gamma_{\mu})\mu_k} \|X_k \|_F^2 = O\bigg( \frac{n^2}{\mu_k}\bigg).
\end{equation*}
\end{lemma}
\begin{proof}
The proof of the first two inequalities follows exactly the developments in \cite[Lemma 5]{ZhouToh_MATH_PROG}. The bound on the 2-norm of the matrix $D_k^{-T}$ follows by choosing $M$ such that $\bm{M}$ is a unit vector attaining the largest singular value of $D_k^{-T}$. Then, $\|D_k^{-T}\bm{M}\|_2^2 = \|D_k^{-T}\|_2^2$. But, we have that:
\begin{equation*}
\begin{split}
\|D_k^{-T}\bm{M}\|_2^2 \leq \ &\frac{1}{(1-\gamma_{\mu})\mu_k} \|Z_k^{\frac{1}{2}}MZ_k^{\frac{1}{2}}\|_F^2 \\
=\ & \frac{1}{(1-\gamma_{\mu})\mu_k}\textnormal{Tr}(Z_k M^\top Z_k M)\\
\leq\ & \frac{1}{(1-\gamma_{\mu})\mu_k} \|Z_k\|_F^2 = O\bigg( \frac{n^2}{\mu_k}\bigg),
\end{split}
\end{equation*}
\noindent where we used the cyclic property of the trace as well as Lemma \ref{Lemma boundedness of x z}. The same reasoning applies to deriving the bound for $\|D_k\|_2^2$.
\end{proof}
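The trace bound used in the last step of the proof above, $\textnormal{Tr}(Z_k M^\top Z_k M) \leq \|Z_k\|_F^2$ whenever $\|\bm{M}\|_2 = \|M\|_F \leq 1$, can be sanity-checked numerically. The following is a minimal sketch (illustrative only; $Z$ and $M$ are randomly generated test data, not quantities from the algorithm):

```python
# Numerical sanity check (not part of the proof): for symmetric positive
# semidefinite Z and any M with ||M||_F = 1, we have
#   Tr(Z M^T Z M) = ||Z^{1/2} M Z^{1/2}||_F^2 <= ||Z||_F^2.
import numpy as np

rng = np.random.default_rng(0)
n = 6

B = rng.standard_normal((n, n))
Z = B @ B.T                        # random symmetric PSD matrix
M = rng.standard_normal((n, n))
M /= np.linalg.norm(M, 'fro')      # normalise so that ||M||_F = 1

lhs = np.trace(Z @ M.T @ Z @ M)    # = ||Z^{1/2} M Z^{1/2}||_F^2 >= 0
rhs = np.linalg.norm(Z, 'fro') ** 2
assert -1e-10 <= lhs <= rhs + 1e-10
```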
\begin{lemma} \label{Auxiliary Lemma scaled third block of Newton system}
Let $D_k$ and $S_k$ be defined as in Lemma \textnormal{\ref{Auxiliary Lemma bound on scaled matrices}}. Then, we have that:
\begin{equation*}
\|D_k^{-T} \bm{\Delta X}_k\|_2^2 + \| D_k \bm{\Delta Z}_k\|_2^2 + 2\langle \Delta X_k, \Delta Z_k \rangle = \|S_k^{-\frac{1}{2}} \bm{R}_{\mu,k}\|_2^2,
\end{equation*}
\noindent where $R_{\mu,k} = \sigma_k \mu_k I_n - Z_k^{\frac{1}{2}} X_k Z_k^{\frac{1}{2}}$. Furthermore,
\begin{equation*}
\|H_{P_k}(\Delta X_k \Delta Z_k) \|_F \leq \frac{\sqrt{\frac{1+\gamma_{\mu}}{1-\gamma_{\mu}}}}{2}\big(\|D_k^{-T}\bm{\Delta X}_k\|^2_2 + \|D_k \bm{\Delta Z}_k\|_2^2 \big),
\end{equation*}
\noindent where $\gamma_{\mu}$ is defined in \eqref{Small neighbourhood}.
\end{lemma}
\begin{proof}
\noindent The equality follows directly by pre-multiplying both sides of the third block equation of the Newton system in \eqref{inexact vectorized Newton System} by $S_k^{-\frac{1}{2}}$ and by then taking the 2-norm (see \cite[Lemma 3.1]{Zhang_SIAM_J_OPT}). For a proof of the inequality, we refer the reader to \cite[Lemma 3.3]{Zhang_SIAM_J_OPT}.
\end{proof}
\begin{lemma} \label{Auxiliary Lemma scaled rhs of third block of Newton system}
Let $S_k$ be as defined in Lemma \textnormal{\ref{Auxiliary Lemma bound on scaled matrices}}, and $R_{\mu,k}$ as defined in Lemma \textnormal{\ref{Auxiliary Lemma scaled third block of Newton system}}. Then,
\begin{equation*}
\|S_k^{-\frac{1}{2}} \bm{R}_{\mu,k}\|_2^2 = O(n \mu_k).
\end{equation*}
\end{lemma}
\begin{proof}
The proof is omitted since it follows exactly the developments in \cite[Lemma 7]{ZhouToh_MATH_PROG}.
\end{proof}
\par We are now ready to derive bounds for the Newton direction computed at every iteration of Algorithm \ref{Algorithm PMM-IPM}.
\begin{lemma} \label{Lemma boundedness Dx Dz}
Given Assumptions \textnormal{\ref{Assumption 1}} and \textnormal{\ref{Assumption 2}}, and the Newton direction $(\Delta X_k, \Delta y_k, \Delta Z_k)$ obtained by solving system \textnormal{\eqref{inexact vectorized Newton System}} during an arbitrary iteration $k \geq 0$ of Algorithm \textnormal{\ref{Algorithm PMM-IPM}}, we have that:
$$\|H_{P_k}(\Delta X_k \Delta Z_k)\|_F = O(n^{4}\mu_k),\qquad \|(\bm{\Delta X}_k,\Delta y_k,\bm{\Delta Z}_k)\|_2 = O(n^{3}).$$
\end{lemma}
\begin{proof}
\par Consider an arbitrary iteration $k$ of Algorithm \ref{Algorithm PMM-IPM}. We invoke Lemmas \ref{Lemma-boundedness of optimal solutions for sub-problems}, \ref{Lemma tilde point}, for $\mu = \sigma_k \mu_k$. That is, there exists a triple $(X_{r_k}^*,y_{r_k}^*,Z_{r_k}^*)$ satisfying \eqref{PMM optimal solution}, and a triple $(\tilde{X},\tilde{y},\tilde{Z})$ satisfying \eqref{tilde point conditions}, for $\mu = \sigma_k \mu_k$. Using the centering parameter $\sigma_k$, define:
\begin{equation} \label{Boundedness dx dz c hat b hat equation}
\begin{split}
\bm{\hat{C}} =& -\bigg(\frac{\sigma_k}{\mu_0} \bm{\bar{C}} - (1-\sigma_k)\big(\bm{X}_k - \bm{\Xi}_k + \frac{\mu_k}{\mu_0}(\bm{\tilde{X}}-\bm{X}_{r_k}^*)\big)+\frac{1}{\mu_k}\bm{\mathsf{E}}_{d,k}\bigg),\\
\hat{b} =& -\bigg(\frac{\sigma_k}{\mu_0} \bar{b} + (1-\sigma_k)\big(y_k - \lambda_k +\frac{\mu_k}{\mu_0}(\tilde{y}-y_{r_k}^*) \big)+ \frac{1}{\mu_k}\epsilon_{p,k}\bigg),
\end{split}
\end{equation}
\noindent where $\bar{b},\ \bar{C},\ \mu_0$ are given by \eqref{starting point} and $\epsilon_{p,k}$, $\mathsf{E}_{d,k}$ model the errors which occur when system \eqref{exact non-vectorized Newton System} is solved inexactly. Notice that these errors are required to satisfy \eqref{Krylov method termination conditions} at every iteration $k$. Using Lemmas \ref{Lemma-boundedness of optimal solutions for sub-problems}, \ref{Lemma tilde point}, \ref{Lemma boundedness of x z}, relation \eqref{Krylov method termination conditions}, and Assumption \ref{Assumption 2}, we know that $\|(\bm{\hat{C}},\hat{b})\|_2 = O(n)$. Then, by applying again Assumption \ref{Assumption 2}, we know that there must exist a matrix $\hat{X} \in \mathbb{R}^{n\times n}$ such that $\mathcal{A}\hat{X} = \hat{b},\ \|\hat{X}\|_F = O(n)$, and by setting $\hat{Z} = \hat{C} + \mu_k \hat{X}$, we have that $\|\hat{Z}\|_F = O(n)$ and:
\begin{equation} \label{Boundedness of Dx,Dz, hat point}
\mathcal{A}\hat{X} = \hat{b},\qquad \hat{Z} - \mu_k \hat{X} = \hat{C}.
\end{equation}
\par Using $(X_{r_k}^*,y_{r_k}^*,Z_{r_k}^*)$, $(\tilde{X},\tilde{y},\tilde{Z})$, as well as the triple $(\hat{X},0,\hat{Z})$, where $(\hat{X},\hat{Z})$ is defined in \eqref{Boundedness of Dx,Dz, hat point}, we define the following auxiliary triple:
\begin{equation} \label{Lemma Dx Dz boundedness, auxiliary triple}
(\bar{X},\bar{y},\bar{Z}) = (\Delta X_k, \Delta y_k, \Delta Z_k) + \frac{\mu_k}{\mu_0} (\tilde{X}, \tilde{y}, \tilde{Z}) - \frac{\mu_k}{\mu_0} (X_{r_k}^*, y_{r_k}^*,Z_{r_k}^*) + \mu_k (\hat{X},0,\hat{Z}).
\end{equation}
\noindent Using \eqref{Lemma Dx Dz boundedness, auxiliary triple}, \eqref{Boundedness dx dz c hat b hat equation}, and the second block equation of \eqref{inexact vectorized Newton System}:
\begin{equation*}
\begin{split}
A\bm{\bar{X}} + \mu_k \bar{y} = &\ (A \bm{\Delta X}_k + \mu_k \Delta y_k) + \frac{\mu_k}{\mu_0}((A\bm{\tilde{X}}+ \mu_k \tilde{y})- (A\bm{X}_{r_k}^*+ \mu_k y_{r_k}^*)) + \mu_k A\bm{\hat{X}}\\
= &\ \big(b + \sigma_k\frac{\mu_k}{\mu_0}\bar{b}-A\bm{X}_k - \sigma_k \mu_k (y_k-\lambda_k) + \epsilon_{p,k}\big) \\ &\ + \frac{\mu_k}{\mu_0}((A\bm{\tilde{X}} + \mu_k \tilde{y})- (A\bm{X}_{r_k}^*+ \mu_k y_{r_k}^*))\\
&\ - \mu_k \big(\sigma_k \frac{\bar{b}}{\mu_0} + (1-\sigma_k)(y_k-\lambda_k)\big) - \frac{\mu_k}{\mu_0}(1-\sigma_k)\mu_k (\tilde{y}-y_{r_k}^*) - \epsilon_{p,k}.
\end{split}
\end{equation*}
\noindent Then, by canceling opposite terms on the right-hand side, and employing \eqref{PMM optimal solution}-\eqref{tilde point conditions} (evaluated at $\mu = \sigma_k \mu_k$ from the definition of $(X_{r_k}^*,y_{r_k}^*,Z_{r_k}^*)$ and $(\tilde{X},\tilde{y},\tilde{Z})$), we have
\begin{equation*}
\begin{split}
A\bm{\bar{X}} + \mu_k \bar{y} = &\ \big(b + \sigma_k\frac{\mu_k}{\mu_0}\bar{b}-A\bm{X}_k - \sigma_k \mu_k (y_k-\lambda_k)\big) + \frac{\mu_k}{\mu_0}(b+\sigma_k\mu_k \lambda_k+\bar{b}+\tilde{b}_k)\\
&\ - \frac{\mu_k}{\mu_0} (\sigma_k \mu_k \lambda_k + b) - \mu_k \big(\sigma_k \frac{\bar{b}}{\mu_0} + (1-\sigma_k)(y_k-\lambda_k)\big)\\
= &\ b + \frac{\mu_k}{\mu_0}(\bar{b}+\tilde{b}_k) - A\bm{X}_k - \mu_k (y_k-\lambda_k)\\
= &\ 0,
\end{split}
\end{equation*}
\noindent where the last equation follows from the neighbourhood conditions (i.e. $(X_k,y_k,Z_k) \in \mathscr{N}_{\mu_k}(\Xi_k,\lambda_k)$). Similarly, we can show that:
$$ A^\top \bar{y} + \bar{Z}-\mu_k \bar{X} = 0.$$
\par The previous two equalities imply that:
\begin{equation} \label{Lemma boundedness Dx Dz, complementarity positivity}
\begin{split}
\langle \bar{X},\bar{Z}\rangle =
\langle \bar{X}, - \mathcal{A}^* \bar{y}+\mu_k \bar{X}\rangle = \mu_k \langle \bar{X}, \bar{X} \rangle + \mu_k \bar{y}^\top \bar{y} \geq 0.
\end{split}
\end{equation}
\noindent On the other hand, using the last block equation of the Newton system (\ref{inexact vectorized Newton System}), we have:
\begin{equation*}
E_k\bm{\bar{X}} + F_k \bm{\bar{Z}} = \bm{R_{\mu,k}}+ \frac{\mu_k}{\mu_0} E_k(\bm{\tilde{X}}-\bm{X}_{r_k}^* + \mu_0 \bm{\hat{X}})+\frac{\mu_k}{\mu_0} F_k(\bm{\tilde{Z}}- \bm{Z}_{r_k}^* + \mu_0 \bm{\hat{Z}}),
\end{equation*}
\noindent where $R_{\mu,k}$ is defined as in Lemma \ref{Auxiliary Lemma scaled third block of Newton system}. Let $S_k$ be defined as in Lemma \ref{Auxiliary Lemma bound on scaled matrices}. By multiplying both sides of the previous equation by $S_k^{-\frac{1}{2}}$, we get:
\begin{equation} \label{Lemma boundedness Dx Dz, relation 1}
D_k^{-T}\bm{\bar{X}} + D_k\bm{\bar{Z}} = S_k^{-\frac{1}{2}}\bm{R_{\mu,k}}+ \frac{\mu_k}{\mu_0} \big(D_k^{-T}(\bm{\tilde{X}}-\bm{X}_{r_k}^* + \mu_0 \bm{\hat{X}}) + D_k(\bm{\tilde{Z}}-\bm{Z}_{r_k}^* + \mu_0 \bm{\hat{Z}})\big).
\end{equation}
\noindent But from (\ref{Lemma boundedness Dx Dz, complementarity positivity}) we know that $\langle\bar{X}, \bar{Z}\rangle \geq 0$ and hence:
\begin{equation*}
\|D_k^{-T}\bm{\bar{X}} + D_k \bm{\bar{Z}}\|_2^2 \geq \|D_k^{-T} \bm{\bar{X}}\|_2^2 + \|D_k \bm{\bar{Z}}\|_2^2.
\end{equation*}
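The inequality above is simply the expansion $\|u+v\|_2^2 = \|u\|_2^2 + \|v\|_2^2 + 2 u^\top v$ with a non-negative cross term ($u^\top v = \langle \bar{X}, \bar{Z}\rangle \geq 0$ here). A minimal numerical illustration (random test vectors, assumed data):

```python
# Minimal check: if <u, v> >= 0, then ||u + v||^2 >= ||u||^2 + ||v||^2,
# since ||u + v||^2 = ||u||^2 + ||v||^2 + 2<u, v>.
import numpy as np

rng = np.random.default_rng(3)
u = rng.standard_normal(10)
v = rng.standard_normal(10)
if u @ v < 0:
    v = -v                         # flip sign so that <u, v> >= 0

lhs = np.linalg.norm(u + v) ** 2
rhs = np.linalg.norm(u) ** 2 + np.linalg.norm(v) ** 2
assert lhs >= rhs - 1e-12
```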
\noindent Combining (\ref{Lemma boundedness Dx Dz, relation 1}) with the previous inequality, gives:
\begin{equation*}
\begin{split}
\|D_k^{-T}\bm{\bar{X}}\|_2^2 \leq \ \bigg\{&\|S_k^{-\frac{1}{2}}\bm{R}_{\mu,k}\|_2 +\frac{\mu_k}{\mu_0} \bigg(\|D_k^{-T}(\bm{\tilde{X}}-\bm{X}_{r_k}^* + \mu_0 \bm{\hat{X}})\|_2 \\ &\ + \|D_k(\bm{\tilde{Z}}-\bm{Z}_{r_k}^* + \mu_0 \bm{\hat{Z}})\|_2\bigg) \bigg\}^2.
\end{split}
\end{equation*}
\par We take square roots, use \eqref{Lemma Dx Dz boundedness, auxiliary triple} and apply the triangle inequality, to get:
\begin{equation} \label{Lemma boundedness Dx Dz, relation 2}
\begin{split}
\|D_k^{-T} \bm{\Delta X}_k \|_2 \leq &\ \|S_k^{-\frac{1}{2}} \bm{R}_{\mu,k}\|_2
+ \frac{\mu_k}{\mu_0}\bigg( 2\|D_k^{-T} (\bm{\tilde{X}}-\bm{X}_{r_k}^* + \mu_0 \bm{\hat{X}})\|_2 \\ &\ +\|D_k(\bm{\tilde{Z}}-\bm{Z}_{r_k}^* + \mu_0 \bm{\hat{Z}})\|_2\bigg).
\end{split}
\end{equation}
\par We now bound the terms on the right-hand side of (\ref{Lemma boundedness Dx Dz, relation 2}). A bound for the first term of the right-hand side is given by Lemma \ref{Auxiliary Lemma scaled rhs of third block of Newton system}, that is:
\[ \|S_k^{-\frac{1}{2}} \bm{R}_{\mu,k}\|_2 = O(n^{\frac{1}{2}}\mu_k^{\frac{1}{2}}).\]
\noindent On the other hand, we have (from Lemma \ref{Auxiliary Lemma bound on scaled matrices}) that
\[\|D_k^{-T}\|_2 = O\Bigg( \frac{n}{\mu_k^{\frac{1}{2}}}\Bigg),\qquad \|D_k\|_2 = O\Bigg( \frac{n}{\mu_k^{\frac{1}{2}}}\Bigg).\]
\noindent Hence, using the previous bounds, as well as Lemmas \ref{Lemma-boundedness of optimal solutions for sub-problems}, \ref{Lemma tilde point}, and \eqref{Boundedness of Dx,Dz, hat point}, we obtain:
\begin{equation*}
\begin{split}
2\frac{\mu_k}{\mu_0}\|D_k^{-T}(\bm{\tilde{X}}-\bm{X}_{r_k}^* + \mu_0 \bm{\hat{X}})\|_2 +\frac{\mu_k}{\mu_0}\|D_k(\bm{\tilde{Z}}-\bm{Z}_{r_k}^* + \mu_0 \bm{\hat{Z}})\|_2 = O\big(n^{2}\mu_k^{\frac{1}{2}}\big).
\end{split}
\end{equation*}
\noindent Combining all the previous bounds yields that $\|D_k^{-T}\bm{\Delta X}_k\|_2 = O(n^2 \mu_k^{\frac{1}{2}})$. One can bound $\|D_k \bm{\Delta Z}_k\|_2$ in the same way. The latter is omitted for ease of presentation.
\par Furthermore, we have that:
$$\|\bm{\Delta X}_k\|_2 = \|D_k D_k^{-T} \bm{\Delta X}_k\|_2 \leq \|D_k\|_2\|D_k^{-T} \bm{\Delta X}_k\|_2 = O(n^{3}).$$
\noindent Similarly, we can show that $\|\bm{\Delta Z}_k \|_2 = O(n^{3})$. From the first block equation of the Newton system in \eqref{inexact vectorized Newton System}, alongside Assumption \ref{Assumption 2}, we can show that $\|\Delta y_k\|_2 = O(n^{3})$.
\par Finally, using the previous bounds, as well as Lemma \ref{Auxiliary Lemma scaled third block of Newton system}, we obtain the desired bound on $\|H_{P_k}(\Delta X_k \Delta Z_k)\|_F$, that is:
\[\|H_{P_k}(\Delta X_k \Delta Z_k)\|_F = O(n^4\mu_k),\]
\noindent which completes the proof.
\end{proof}
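As a side check, the sign identity \eqref{Lemma boundedness Dx Dz, complementarity positivity} used in the proof above can be verified numerically. The sketch below (illustrative only; the operator $A$, the scalar $\mu$, and the test matrix $\bar{X}$ are randomly generated assumptions) constructs a triple satisfying $A\bm{\bar{X}} + \mu \bar{y} = 0$ and $A^\top \bar{y} + \bm{\bar{Z}} - \mu \bm{\bar{X}} = 0$, and confirms that $\langle \bar{X},\bar{Z}\rangle = \mu(\|\bar{X}\|_F^2 + \|\bar{y}\|_2^2) \geq 0$:

```python
# Sketch (illustrative only): verify
#   <Xbar, Zbar> = mu*(||Xbar||_F^2 + ||ybar||_2^2) >= 0
# for a triple satisfying  A*vec(Xbar) + mu*ybar = 0  and
#   A^T*ybar + vec(Zbar) - mu*vec(Xbar) = 0.
import numpy as np

rng = np.random.default_rng(1)
n, m, mu = 5, 3, 0.7

A = rng.standard_normal((m, n * n))        # random linear operator
Xbar = rng.standard_normal((n, n))
Xbar = (Xbar + Xbar.T) / 2                 # symmetric test matrix
x = Xbar.reshape(-1)                       # vec(Xbar)

ybar = -(A @ x) / mu                       # enforces A*vec(Xbar) + mu*ybar = 0
z = mu * x - A.T @ ybar                    # enforces the dual residual equation

inner = x @ z                              # = <Xbar, Zbar>
identity = mu * (x @ x + ybar @ ybar)
assert np.isclose(inner, identity) and inner >= 0
```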
\noindent We can now prove (Lemmas \ref{Lemma step-length-part 1}--\ref{Lemma step-length-part 2}) that at every iteration of Algorithm \ref{Algorithm PMM-IPM} there exists a step-length $\alpha_k > 0$ such that the new iterate satisfies the conditions required by the algorithm. The lower bound on any such step-length will later determine the polynomial complexity of the method. To that end, we adopt the following notation:
\[\big(X_k(\alpha),y_k(\alpha),Z_k(\alpha)\big) \equiv (X_k + \alpha \Delta X_k, y_k + \alpha \Delta y_k, Z_k + \alpha \Delta Z_k).\]
\begin{lemma} \label{Lemma step-length-part 1}
Given Assumptions \textnormal{\ref{Assumption 1}}, \textnormal{\ref{Assumption 2}}, and by letting ${P_k}(\alpha) = Z_k(\alpha)^{\frac{1}{2}}$, there exists a step-length ${\alpha^*} \in (0,1)$, such that for all $\alpha \in [0,{\alpha^*}]$ and for all iterations $k \geq 0$ of Algorithm \textnormal{\ref{Algorithm PMM-IPM}}, the following relations hold:
\begin{equation} \label{Lemma step-length relation 1}
\langle X_k + \alpha \Delta X_k,Z_k + \alpha \Delta Z_k\rangle \geq (1-\alpha(1-\beta_1))\langle X_k,Z_k \rangle,
\end{equation}
\begin{equation} \label{Lemma step-length relation 2}
\|H_{{P_k}(\alpha)}(X_k(\alpha)Z_k(\alpha)) - \mu_k(\alpha) I_n\|_F \leq \gamma_{\mu}\mu_k(\alpha),
\end{equation}
\begin{equation} \label{Lemma step-length relation 3}
\langle X_k + \alpha \Delta X_k, Z_k + \alpha \Delta Z_k \rangle \leq (1-\alpha(1-\beta_2))\langle X_k, Z_k \rangle,
\end{equation}
where, without loss of generality, $\beta_1 = \frac{\sigma_{\min}}{2}$ and $\beta_2 = 0.99$. Moreover, ${\alpha^*} \geq \frac{{\kappa^*}}{n^{4}}$ for all $k\geq 0$, where ${\kappa^*} > 0$ is independent of $n$, $m$.
\end{lemma}
\begin{proof}
\par From Lemma \ref{Lemma boundedness Dx Dz}, there exist constants $K_{\Delta} >0$ and $K_{H\Delta} > 0$, independent of $n$ and $m$, such that:
$$\langle \Delta X_k, \Delta Z_k \rangle = (D_k^{-T} \bm{\Delta X}_k)^\top (D_k \bm{\Delta Z}_k) \leq \|D_k^{-T} \bm{\Delta X}_k\|_2 \|D_k \bm{\Delta Z}_k\|_2 \leq K_{\Delta}^2 n^4 \mu_k,$$
\[ \|H_{P_k}(\Delta X_k \Delta Z_k)\|_F \leq K_{H\Delta} n^4 \mu_k.\]
\noindent From the last block equation of the Newton system \eqref{exact non-vectorized Newton System}, we can show that:
\begin{equation} \label{Lemma step-length equation 1}
\langle Z_k, \Delta X_k\rangle + \langle X_k, \Delta Z_k\rangle = (\sigma_k - 1) \langle X_k, Z_k \rangle.
\end{equation}
\noindent The latter can also be obtained from \eqref{inexact vectorized Newton System}, since we require $\mathsf{E}_{\mu,k}= 0$. Furthermore:
\begin{equation} \label{Lemma step-length equation 2}
H_{P_k}(X_k(\alpha)Z_k(\alpha)) = (1-\alpha)H_{P_k}(X_k Z_k) + \alpha \sigma_k \mu_k I_n +\alpha^2 H_{P_k}(\Delta X_k \Delta Z_k),
\end{equation}
\noindent where $\big(X_k(\alpha),y_k(\alpha),Z_k(\alpha)\big) = (X_k + \alpha\Delta X_k,\ y_k + \alpha\Delta y_k,\ Z_k + \alpha\Delta Z_k)$, as defined before the lemma.
\par We proceed by proving \eqref{Lemma step-length relation 1}. Using \eqref{Lemma step-length equation 1}, we have:
\begin{equation*}
\begin{split}
\langle X_k + \alpha \Delta X_k,Z_k + \alpha \Delta Z_k\rangle - (1-\alpha(1 -\beta_1))\langle X_k, Z_k\rangle = \\
\langle X_k, Z_k\rangle +\alpha (\sigma_k - 1)\langle X_k, Z_k \rangle + \alpha^2 \langle \Delta X_k, \Delta Z_k \rangle - (1-\alpha)\langle X_k, Z_k \rangle -\alpha \beta_1 \langle X_k, Z_k \rangle \geq \\
\alpha (\sigma_k - \beta_1) \langle X_k, Z_k\rangle - \alpha^2 K_{\Delta}^2 n^4 \mu_k \geq \alpha (\frac{\sigma_{\min}}{2})n \mu_k - \alpha^2 K_{\Delta}^2 n^4 \mu_k,
\end{split}
\end{equation*}
\noindent where we set (without loss of generality) $\beta_1 = \frac{\sigma_{\min}}{2}$. The right-most side of the previous inequality is non-negative for every $\alpha$ satisfying:
$$\alpha \leq \frac{\sigma_{\min}}{2 K_{\Delta}^2 n^3}.$$
\par In order to prove \eqref{Lemma step-length relation 2}, we will use \eqref{Lemma step-length equation 2} and the fact that from the neighbourhood conditions we have that $\|H_{P_k}(X_k Z_k) - \mu_k I_n\|_F \leq \gamma_{\mu} \mu_k$. For that, we use the result in \cite[Lemma 4.2]{Zhang_SIAM_J_OPT}, stating that:
\[ \|H_{{P_k}(\alpha)}(X_k(\alpha) Z_k(\alpha)) - \mu_k(\alpha) I_n\|_F \leq \|H_{P_k}(X_k(\alpha) Z_k(\alpha)) - \mu_k(\alpha) I_n\|_F. \]
\noindent By combining all the previous, we have:
\begin{equation*}
\begin{split}
\|H_{{P_k}(\alpha)}(X_k(\alpha) Z_k(\alpha)) - \mu_k(\alpha) I_n\|_F - \gamma_{\mu}\mu_k(\alpha) &\ \leq \\
\|H_{P_k}(X_k(\alpha) Z_k(\alpha)) - \mu_k(\alpha) I_n\|_F - \gamma_{\mu} \mu_k(\alpha) &\ = \\
\|(1-\alpha)(H_{P_k}(X_k Z_k)-\mu_k I_n) + \alpha^2 H_{P_k}(\Delta X_k \Delta Z_k) - \frac{\alpha^2}{n} \langle \Delta X_k, \Delta Z_k \rangle I_n\|_F - \gamma_{\mu} \mu_k(\alpha) &\ \leq \\
(1-\alpha)\|H_{P_k}(X_kZ_k) - \mu_k I_n\|_F + \alpha^2\mu_k \bigg(\frac{K_{\Delta}^2}{n} + K_{H\Delta}\bigg)n^4 &\ \\ - \gamma_{\mu} \bigg((1-\alpha)\mu_k + \alpha\sigma_k \mu_k + \frac{\alpha^2}{n}\langle \Delta X_k, \Delta Z_k \rangle \bigg) &\ \leq\\
-\gamma_{\mu} \alpha \sigma_{\min} \mu_k + \alpha^2 \mu_k \bigg(\frac{2K_{\Delta}^2}{n} + K_{H\Delta} \bigg)n^4,
\end{split}
\end{equation*}
\noindent where we used the neighbourhood conditions in \eqref{Small neighbourhood}, the equality $\mu_k(\alpha) = (1-\alpha)\mu_k + \alpha\sigma_k \mu_k + \frac{\alpha^2}{n} \langle\Delta X_k, \Delta Z_k \rangle$ (which can be derived from \eqref{Lemma step-length equation 1}), and the third block equation of the Newton system \eqref{inexact vectorized Newton System}. The right-most side of the previous chain is non-positive for every $\alpha$ satisfying:
$$\alpha \leq \frac{\sigma_{\min}\gamma_{\mu}}{\big(\frac{2K_{\Delta}^2}{n} + K_{H\Delta}\big) n^4}.$$
\par Finally, to prove (\ref{Lemma step-length relation 3}), we set (without loss of generality) $\beta_2 = 0.99$. We know, from Algorithm \ref{Algorithm PMM-IPM}, that $\sigma_{\max} \leq 0.5$. With the previous two remarks in mind, we have:
\begin{equation*}
\begin{split}
\frac{1}{n}\langle X_k + \alpha \Delta X_k, Z_k + \alpha \Delta Z_k \rangle - (1-0.01\alpha)\mu_k \leq \\
(1-\alpha)\mu_k + \alpha \sigma_k \mu_k + \alpha^2 \frac{K_{\Delta}^2 n^4}{n}\mu_k - (1-0.01 \alpha)\mu_k \leq \\
-0.99\alpha \mu_k + 0.5\alpha \mu_k + \alpha^2 \frac{K_{\Delta}^2 n^4}{n} \mu_k =\\
-0.49\alpha \mu_k +\alpha^2\frac{K_{\Delta}^2 n^4}{n}\mu_k.
\end{split}
\end{equation*}
\noindent The last term will be non-positive for every $\alpha$ satisfying:
$$\alpha \leq \frac{0.49 }{K_{\Delta}^2 n^3}.$$
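The expansion of $\langle X_k(\alpha), Z_k(\alpha)\rangle$ used in the last two arguments can be sanity-checked numerically. A sketch (illustrative only; all matrices and scalars below are randomly generated assumptions, with $\Delta Z$ constructed so that \eqref{Lemma step-length equation 1} holds exactly):

```python
# Numerical check of the expansion
#   <X + a*DX, Z + a*DZ> = (1-a)<X,Z> + a*sigma*<X,Z> + a^2*<DX,DZ>,
# assuming the centrality equation  <Z,DX> + <X,DZ> = (sigma-1)<X,Z>.
import numpy as np

rng = np.random.default_rng(2)
n, sigma, alpha = 4, 0.3, 0.05

def sym(M):
    return (M + M.T) / 2

X, Z, DX = (sym(rng.standard_normal((n, n))) for _ in range(3))
ip = lambda U, V: np.sum(U * V)            # trace inner product <U, V>

# Choose DZ = c*X so that the centrality equation holds exactly.
c = ((sigma - 1) * ip(X, Z) - ip(Z, DX)) / ip(X, X)
DZ = c * X

lhs = ip(X + alpha * DX, Z + alpha * DZ)
rhs = (1 - alpha) * ip(X, Z) + alpha * sigma * ip(X, Z) + alpha**2 * ip(DX, DZ)
assert np.isclose(lhs, rhs)
```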
\par By combining all the previous bounds on the step-length, we have that \eqref{Lemma step-length relation 1}-\eqref{Lemma step-length relation 3} hold for every $\alpha \in (0,\alpha^*)$, where:
\begin{equation} \label{Lemma step-length bound on step-length}
\alpha^* \coloneqq \min\bigg\{ \frac{\sigma_{\min}}{2 K_{\Delta}^2 n^3},\ \frac{\sigma_{\min}\gamma_{\mu}}{\big(\frac{2K_{\Delta}^2}{n} + K_{H\Delta}\big) n^4},\ \frac{0.49 }{K_{\Delta}^2 n^3},\ 1\bigg\}.
\end{equation}
\noindent Since ${\alpha^*} = \Omega\big(\frac{1}{n^{4}}\big)$, we know that there must exist a constant ${\kappa^*} > 0$, independent of $n$, $m$ and of the iteration $k$, such that ${\alpha^*} \geq \frac{\kappa^*}{n^{4}}$, for all $k \geq 0$, and this completes the proof.
\end{proof}
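The behaviour of the step-length bound \eqref{Lemma step-length bound on step-length} can also be explored numerically. In the sketch below the constants $\sigma_{\min}$, $\gamma_{\mu}$, $K_{\Delta}$, $K_{H\Delta}$ are assumed placeholder values (all $O(1)$, not taken from the paper); the minimum of the three bounds then scales like $1/n^4$, matching ${\alpha^*} = \Omega\big(\frac{1}{n^4}\big)$:

```python
# Illustrative only: evaluate the three step-length bounds from the proof
# for assumed O(1) constants, and confirm alpha* * n^4 stays bounded below.
sigma_min, gamma_mu, K_D, K_HD = 0.1, 0.5, 2.0, 3.0

def alpha_star(n):
    b1 = sigma_min / (2 * K_D**2 * n**3)
    b2 = sigma_min * gamma_mu / ((2 * K_D**2 / n + K_HD) * n**4)
    b3 = 0.49 / (K_D**2 * n**3)
    return min(b1, b2, b3, 1.0)

# A valid kappa* for these constants: sigma_min*gamma_mu / (2*K_D^2 + K_HD).
kappa = sigma_min * gamma_mu / (2 * K_D**2 + K_HD)
for n in (10, 100, 1000):
    assert alpha_star(n) * n**4 >= kappa
```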
\begin{lemma} \label{Lemma step-length-part 2}
Given Assumptions \textnormal{\ref{Assumption 1}}, \textnormal{\ref{Assumption 2}}, and by letting ${P_k}(\alpha) = Z_k(\alpha)^{\frac{1}{2}}$, there exists a step-length $\bar{\alpha} \in (0,1)$, with $\bar{\alpha} \geq \frac{\bar{\kappa}}{n^{4}}$, where $\bar{\kappa} > 0$ is independent of $n$, $m$, such that for all $\alpha \in [0,\bar{\alpha}]$ and for all iterations $k \geq 0$ of Algorithm \textnormal{\ref{Algorithm PMM-IPM}}, if $(X_k,y_k,Z_k) \in \mathscr{N}_{\mu_k}(\Xi_k,\lambda_k)$, then letting:
$$(X_{k+1},y_{k+1},Z_{k+1}) = (X_k + \alpha\Delta X_k,y_k + \alpha\Delta y_k, Z_k + \alpha\Delta Z_k),\ \mu_{k+1} = \frac{\langle X_{k+1},Z_{k+1}\rangle}{n},$$
\noindent for any $\alpha \in (0,\bar{\alpha}]$, gives $(X_{k+1},y_{k+1},Z_{k+1}) \in \mathscr{N}_{\mu_{k+1}}(\Xi_{k+1},\lambda_{k+1})$, where $\Xi_k,$ and $\lambda_k$ are updated as in Algorithm \textnormal{\ref{Algorithm PMM-IPM}}.
\end{lemma}
\begin{proof}
\noindent Let $\alpha^*$ be given as in Lemma \ref{Lemma step-length-part 1} (i.e. in \eqref{Lemma step-length bound on step-length}) such that \eqref{Lemma step-length relation 1}--\eqref{Lemma step-length relation 3} are satisfied. We would like to find the maximum $\bar{\alpha} \in (0,\alpha^*)$, such that:
$$(X_k(\alpha),y_k(\alpha),Z_k(\alpha)) \in \mathscr{N}_{\mu_k(\alpha)}(\Xi_k,\lambda_k),\ \text{for all}\ \alpha \in (0,\bar{\alpha}),$$
\noindent where $\mu_k(\alpha) = \frac{\langle X_k(\alpha), Z_k(\alpha) \rangle}{n}$. Let:
\begin{equation} \label{primal infeasibility vector}
\tilde{r}_p(\alpha) = A\bm{X}_k(\alpha)+ \mu_k(\alpha)(y_k(\alpha)-\lambda_k) - \big(b + \frac{\mu_k(\alpha)}{\mu_0}\bar{b}\big),
\end{equation}
\noindent and
\begin{equation}\label{dual infeasibility matrix}
\bm{\tilde{R}}_d(\alpha) = A^\top y_k(\alpha) + \bm{Z}_k(\alpha) - \mu_k(\alpha)(\bm{X}_k(\alpha)- \bm{\Xi}_k) - \big(\bm{C} + \frac{\mu_k(\alpha)}{\mu_0}\bm{\bar{C}}\big).
\end{equation}
\noindent In other words, we need to find the maximum $\bar{\alpha} \in (0,\alpha^*)$, such that:
\begin{equation} \label{Step-length neighbourhood conditions}
\|(\tilde{r}_p(\alpha),\bm{\tilde{R}}_d(\alpha))\|_2 \leq K_N \frac{\mu_k(\alpha)}{\mu_0},\ \ \|(\tilde{r}_p(\alpha),\bm{\tilde{R}}_d(\alpha))\|_{\mathcal{S}} \leq \gamma_{\mathcal{S}} \rho\frac{\mu_k(\alpha)}{\mu_0},\ \text{for all}\ \alpha \in (0,\bar{\alpha}).
\end{equation}
\noindent If the latter two conditions hold, then $(X_k(\alpha),y_k(\alpha),Z_k(\alpha)) \in \mathscr{N}_{\mu_k(\alpha)}(\Xi_k,\lambda_k),\ \text{for all}\ \alpha \in (0,\bar{\alpha})$. Then, if Algorithm \ref{Algorithm PMM-IPM} updates $\Xi_k$, and $\lambda_k$, it does so only when similar conditions (as in \eqref{Step-length neighbourhood conditions}) hold for the new parameters. Indeed, notice that the estimates $\Xi_k$ and $\lambda_k$ are only updated if the last conditional of Algorithm \ref{Algorithm PMM-IPM} is satisfied. But this is equivalent to saying that \eqref{Step-length neighbourhood conditions} is satisfied after setting $\bm{\Xi}_k = \bm{X}_k(\alpha)$ and $\lambda_k = y_k(\alpha)$. On the other hand, if the parameters are not updated, the new iterate lies in the desired neighbourhood because of \eqref{Step-length neighbourhood conditions}, alongside \eqref{Lemma step-length relation 1}--\eqref{Lemma step-length relation 3}.
\par We start by rearranging $\tilde{r}_p(\alpha)$. Specifically, we have that:
\begin{equation*}
\begin{split}
\tilde{r}_p(\alpha) &= \\ A(\bm{X}_k + \alpha \bm{\Delta X}_k) +\big(\mu_k + \alpha(\sigma_k-1)\mu_k +\frac{\alpha^2}{n}\langle\Delta X_k,\Delta Z_k\rangle\big)\big((y_k + \alpha \Delta y_k -\lambda_k)-\frac{\bar{b}}{\mu_0} \big) -b &=\\
\big(A\bm{X}_k +\mu_k (y_k -\lambda_k)-b -\frac{\mu_k}{\mu_0}\bar{b}\big) + \alpha(A \bm{\Delta X}_k + \mu_k \Delta y_k) \\
+\ \big(\alpha(\sigma_k-1)\mu_k + \frac{\alpha^2}{n}\langle\Delta X_k, \Delta Z_k\rangle \big)\big((y_k - \lambda_k + \alpha \Delta y_k) - \frac{\bar{b}}{\mu_0}\big)&=\\
\frac{\mu_k}{\mu_0}\tilde{b}_k + \alpha\bigg(b- A\bm{X}_k - \sigma_k\mu_k\big((y_k-\lambda_k)-\frac{\bar{b}}{\mu_0} \big) + \epsilon_{p,k} + \mu_k \big((y_k-\lambda_k)-\frac{\bar{b}}{\mu_0} \big)\ \\
-\ \mu_k \big((y_k-\lambda_k)-\frac{\bar{b}}{\mu_0} \big) \bigg) + \big(\alpha(\sigma_k-1)\mu_k + \frac{\alpha^2}{n}\langle \Delta X_k, \Delta Z_k\rangle\big)\big((y_k - \lambda_k + \alpha \Delta y_k) - \frac{\bar{b}}{\mu_0}\big),
\end{split}
\end{equation*}
\noindent where we used the definition of $\tilde{b}_k$ in the neighbourhood conditions in \eqref{Small neighbourhood}, and the second block equation in \eqref{inexact vectorized Newton System}. By using again the neighbourhood conditions, and then by canceling the opposite terms in the previous equation, we obtain:
\begin{equation}\label{primal infeasibility formula}
\begin{split}
\tilde{r}_p(\alpha) = &\ (1-\alpha)\frac{\mu_k}{\mu_0}\tilde{b}_k + \alpha \epsilon_{p,k} + \alpha^2(\sigma_k - 1)\mu_k \Delta y_k + \frac{\alpha^2}{n}\langle \Delta X_k, \Delta Z_k\rangle\big(y_k - \lambda_k + \alpha \Delta y_k - \frac{\bar{b}}{\mu_0} \big).
\end{split}
\end{equation}
\noindent Similarly, we can show that:
\begin{equation}\label{dual infeasibility formula}
\bm{\tilde{R}}_d(\alpha) = (1-\alpha)\frac{\mu_k}{\mu_0}\bm{\tilde{C}}_k + \alpha \bm{\mathsf{E}}_{d,k}- \alpha^2(\sigma_k-1)\mu_k \bm{\Delta X}_k - \frac{\alpha^2}{n}\langle \Delta X_k, \Delta Z_k \rangle \big(\bm{X}_k - \bm{\Xi}_k + \alpha \bm{\Delta X}_k + \frac{1}{\mu_0} \bm{\bar{C}}\big).
\end{equation}
\par Recall (Lemma \ref{Lemma boundedness Dx Dz}) that $\langle \Delta X_k, \Delta Z_k \rangle \leq K_{\Delta}^2 n^4 \mu_k$, and define the following quantities
\begin{equation} \label{step-length, auxiliary constants}
\begin{split}
\xi_2 = &\ \mu_k \|(\Delta y_k,\bm{\Delta X}_k)\|_2 + K_{\Delta}^2n^3 \mu_{k}\bigg(\|(y_k - \lambda_k,\bm{X}_k-\bm{\Xi}_k)\|_2\ +\\
&\ \alpha^* \|(\Delta y_k,\bm{\Delta X}_k)\|_2 + \frac{1}{\mu_0}\|(\bar{b},\bm{\bar{C}})\|_2\bigg),\\
\xi_{\mathcal{S}} = &\ \mu_k \|(\Delta y_k,\bm{\Delta X}_k)\|_{\mathcal{S}} + K_{\Delta}^2n^3 \mu_{k}\bigg(\|(y_k - \lambda_k,\bm{X}_k-\bm{\Xi}_k)\|_{\mathcal{S}} \ +\\
&\ \alpha^* \|(\Delta y_k,\bm{\Delta X}_k)\|_{\mathcal{S}} + \frac{1}{\mu_0}\|(\bar{b},\bm{\bar{C}})\|_{\mathcal{S}}\bigg),
\end{split}
\end{equation}
\noindent where $\alpha^*$ is given by \eqref{Lemma step-length bound on step-length}. Using the definition of the starting point in \eqref{starting point}, as well as results in Lemmas \ref{Lemma boundedness of x z}, \ref{Lemma boundedness Dx Dz}, we can observe that $\xi_2 = O(n^{4} \mu_k)$. On the other hand, using Assumption \ref{Assumption 2}, we know that for every pair $(r_1,\bm{R}_2) \in \mathbb{R}^{m+n^2}$ (where $R_2 \in \mathbb{R}^{n \times n}$ is an arbitrary matrix), if $\|(r_1,\bm{R}_2)\|_2 = \Theta(f(n))$, where $f(\cdot)$ is a positive polynomial function of $n$, then $\|(r_1,R_2)\|_{\mathcal{S}} = \Theta(f(n))$. Hence, we have that $\xi_{\mathcal{S}} = O(n^{4}\mu_k)$. Using the quantities in \eqref{step-length, auxiliary constants}, equations \eqref{primal infeasibility formula}, \eqref{dual infeasibility formula}, as well as the neighbourhood conditions, we have that:
\begin{equation*}
\begin{split}
\|(\tilde{r}_p(\alpha),\bm{\tilde{R}}_d(\alpha))\|_2 \leq &\ (1-\alpha)K_N \frac{\mu_k}{\mu_0} + \alpha \mu_k \|(\epsilon_{p,k},\bm{\mathsf{E}}_{d,k})\|_2+ \alpha^2 \mu_k \xi_2,\\
\|(\tilde{r}_p(\alpha),\bm{\tilde{R}}_d(\alpha))\|_{\mathcal{S}} \leq &\ (1-\alpha)\gamma_{\mathcal{S}}\rho \frac{\mu_k}{\mu_0} + \alpha \mu_k \|(\epsilon_{p,k},\bm{\mathsf{E}}_{d,k})\|_{\mathcal{S}} + \alpha^2 \mu_k \xi_{\mathcal{S}},
\end{split}
\end{equation*}
\noindent for all $\alpha \in (0,\alpha^*)$, where $\alpha^*$ is given by \eqref{Lemma step-length bound on step-length} and the error occurring from the inexact solution of \eqref{exact non-vectorized Newton System}, $(\epsilon_{p,k},\mathsf{E}_{d,k})$, satisfies \eqref{Krylov method termination conditions}. From \eqref{Lemma step-length relation 1}, we know that:
$$\mu_k(\alpha) \geq (1-\alpha(1-\beta_1))\mu_k,\ \text{for all}\ \alpha \in (0,\alpha^*).$$
\noindent By combining the last three inequalities, using \eqref{Krylov method termination conditions} and setting $\beta_1 = \frac{\sigma_{\min}}{2}$, we obtain that:
\begin{equation*}
\begin{split}
\|(\tilde{r}_p(\alpha),\bm{\tilde{R}}_d(\alpha))\|_2 \leq \frac{\mu_k(\alpha)}{\mu_0} K_N,\ \text{for all}\ \alpha \in \bigg(0, \min\big\{\alpha^*,\frac{\sigma_{\min} K_N}{4\xi_2 \mu_0}\big\}\bigg].
\end{split}
\end{equation*}
\noindent Similarly,
\begin{equation*}
\begin{split}
\|(\tilde{r}_p(\alpha),\bm{\tilde{R}}_d(\alpha))\|_{\mathcal{S}} \leq \frac{\mu_k(\alpha)}{\mu_0} \gamma_{\mathcal{S}} \rho,\ \text{for all}\ \alpha \in \bigg(0, \min \big\{\alpha^*,\frac{\sigma_{\min} \gamma_{\mathcal{S}} \rho}{4\xi_{\mathcal{S}} \mu_0}\big\}\bigg].
\end{split}
\end{equation*}
\par Hence, we have that:
\begin{equation} \label{Lemma step-length, STEPLENGTH BOUND}
\bar{\alpha} \coloneqq \min \bigg\{\alpha^*,\frac{\sigma_{\min} K_N}{4\xi_2 \mu_0}, \frac{\sigma_{\min} \gamma_{\mathcal{S}} \rho}{4\xi_{\mathcal{S}} \mu_0} \bigg\}.
\end{equation}
\noindent Since $\bar{\alpha} = \Omega\big(\frac{1}{n^{4}}\big)$, we know that there must exist a constant $\bar{\kappa} > 0$, independent of $n$, $m$ and of the iteration $k$, such that $\bar{\alpha} \geq \frac{\bar{\kappa}}{n^{4}}$, for all $k \geq 0$, and this completes the proof.
\end{proof}
\iffalse
\begin{lemma} \label{Lemma step-length}
Given Assumptions \textnormal{\ref{Assumption 1}}, \textnormal{\ref{Assumption 2}}, and by letting ${P_k}(\alpha) = Z_k(\alpha)^{\frac{1}{2}}$, there exists a step-length $\bar{\alpha} \in (0,1)$, such that for all $\alpha \in [0,\bar{\alpha}]$ and for all iterations $k \geq 0$ of Algorithm \textnormal{\ref{Algorithm PMM-IPM}}, the following relations hold:
\begin{equation} \label{Lemma step-length relation 1}
\langle X_k + \alpha \Delta X_k,Z_k + \alpha \Delta Z_k\rangle \geq (1-\alpha(1-\beta_1))\langle X_k,Z_k \rangle,
\end{equation}
\begin{equation} \label{Lemma step-length relation 2}
\|H_{{P_k}(\alpha)}(X_k(\alpha)Z_k(\alpha)) - \mu_k(\alpha)\|_F \leq \gamma_{\mu}\mu_k(\alpha),
\end{equation}
\begin{equation} \label{Lemma step-length relation 3}
\langle X_k + \alpha \Delta X_k, Z_k + \alpha \Delta Z_k \rangle \leq (1-\alpha(1-\beta_2))\langle X_k, Z_k \rangle,
\end{equation}
where, without loss of generality, $\beta_1 = \frac{\sigma_{\min}}{2}$ and $\beta_2 = 0.99$. Moreover, $\bar{\alpha} \geq \frac{\bar{\kappa}}{n^{4}}$ for all $k\geq 0$, where $\bar{\kappa} > 0$ is independent of $n$, $m$, and if $(X_k,y_k,Z_k) \in \mathscr{N}_{\mu_k}(\Xi_k,\lambda_k)$, then letting:
$$(X_{k+1},y_{k+1},Z_{k+1}) = (X_k + \alpha\Delta X_k,y_k + \alpha\Delta y_k, Z_k + \alpha\Delta Z_k),\ \mu_{k+1} = \frac{\langle X_{k+1},Z_{k+1}\rangle}{n},$$
\noindent for any $\alpha \in (0,\bar{\alpha}]$, gives $(X_{k+1},y_{k+1},Z_{k+1}) \in \mathscr{N}_{\mu_{k+1}}(\Xi_{k+1},\lambda_{k+1})$, where $\Xi_k,$ and $\lambda_k$ are updated as in Algorithm \textnormal{\ref{Algorithm PMM-IPM}}.
\end{lemma}
\begin{proof}
\par We proceed by proving the first three inequalities stated in the Lemma. From Lemma \ref{Lemma boundedness Dx Dz}, there exist constants $K_{\Delta} >0$ and $K_{H\Delta} > 0$, independent of $n$ and $m$, such that:
$$\langle \Delta X_k, \Delta Z_k \rangle = (D_k^{-T} \bm{\Delta X}_k)^\top (D_k \bm{\Delta Z}_k) \leq \|D_k^{-T} \bm{\Delta X}_k\|_2 \|D_k \bm{\Delta Z}_k\|_2 \leq K_{\Delta}^2 n^4 \mu_k,$$
\[ \|H_{P_k}(\Delta X_k \Delta Z_k)\|_F \leq K_{H\Delta} n^4 \mu_k.\]
\noindent From the last block equation of the Newton system \eqref{exact non-vectorized Newton System}, we can show that:
\begin{equation} \label{Lemma step-length equation 1}
\langle Z_k, \Delta X_k\rangle + \langle X_k, \Delta Z_k\rangle = (\sigma_k - 1) \langle X_k, Z_k \rangle.
\end{equation}
\noindent The latter can also be obtained from \eqref{inexact vectorized Newton System}, since we require $\mathsf{E}_{\mu,k}= 0$. Furthermore:
\begin{equation} \label{Lemma step-length equation 2}
H_{P_k}(X_k(\alpha)Z_k(\alpha)) = (1-\alpha)H_{P_k}(X_k Z_k) + \alpha \sigma_k \mu_k I_n +\alpha^2 H_{P_k}(\Delta X_k \Delta Z_k),
\end{equation}
\noindent where $(X_k(\alpha),y_k(\alpha),Z_k(\alpha)) = (X_k + \alpha\Delta X_k,y_k + \alpha\Delta y_k, Z_k + \alpha\Delta Z_k)$.
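For completeness, the trace argument behind \eqref{Lemma step-length equation 1} can be sketched as follows; here we assume the centrality block of \eqref{exact non-vectorized Newton System} takes the standard symmetrized form (written below for illustration only):

```latex
% Since H_P(M) = (P M P^{-1} + (P M P^{-1})^\top)/2, taking traces gives
% tr(H_P(M)) = tr(M) for any square matrix M. Applying this to the block
%   H_{P_k}(\Delta X_k Z_k + X_k \Delta Z_k) = \sigma_k \mu_k I_n - H_{P_k}(X_k Z_k)
% yields
\langle Z_k, \Delta X_k\rangle + \langle X_k, \Delta Z_k\rangle
  = n\sigma_k\mu_k - \langle X_k, Z_k\rangle
  = (\sigma_k - 1)\langle X_k, Z_k\rangle,
% where the last step uses n\mu_k = \langle X_k, Z_k\rangle.
```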
\par We proceed by proving (\ref{Lemma step-length relation 1}). Using (\ref{Lemma step-length equation 1}), we have:
\begin{equation*}
\begin{split}
\langle X_k + \alpha \Delta X_k,Z_k + \alpha \Delta Z_k\rangle - (1-\alpha(1 -\beta_1))\langle X_k, Z_k\rangle = \\
\langle X_k, Z_k\rangle +\alpha (\sigma_k - 1)\langle X_k, Z_k \rangle + \alpha^2 \langle \Delta X_k, \Delta Z_k \rangle - (1-\alpha)\langle X_k, Z_k \rangle -\alpha \beta_1 \langle X_k, Z_k \rangle \geq \\
\alpha (\sigma_k - \beta_1) \langle X_k, Z_k\rangle - \alpha^2 K_{\Delta}^2 n^4 \mu_k \geq \alpha (\frac{\sigma_{\min}}{2})n \mu_k - \alpha^2 K_{\Delta}^2 n^4 \mu_k,
\end{split}
\end{equation*}
\noindent where we set (without loss of generality) $\beta_1 = \frac{\sigma_{\min}}{2}$. The right-most side of the previous inequality is non-negative for every $\alpha$ satisfying:
$$\alpha \leq \frac{\sigma_{\min}}{2 K_{\Delta}^2 n^3}.$$
\par In order to prove (\ref{Lemma step-length relation 2}), we will use (\ref{Lemma step-length equation 2}) and the fact that the neighbourhood conditions give $\|H_{P_k}(X_k Z_k) - \mu_k I_n\|_F \leq \gamma_{\mu} \mu_k$. To that end, we use the result in \cite[Lemma 4.2]{Zhang_SIAM_J_OPT}, stating that:
\[ \|H_{{P_k}(\alpha)}(X_k(\alpha) Z_k(\alpha)) - \mu_k(\alpha) I_n\|_F \leq \|H_{P_k}(X_k(\alpha) Z_k(\alpha)) - \mu_k(\alpha) I_n\|_F. \]
\noindent By combining all the previous, we have:
\begin{equation*}
\begin{split}
\|H_{{P_k}(\alpha)}(X_k(\alpha) Z_k(\alpha)) - \mu_k(\alpha) I_n\|_F - \gamma_{\mu}\mu_k(\alpha) &\ \leq \\
\|H_{P_k}(X_k(\alpha) Z_k(\alpha)) - \mu_k(\alpha) I_n\|_F - \gamma_{\mu} \mu_k(\alpha) &\ = \\
\|(1-\alpha)(H_{P_k}(X_k Z_k)-\mu_k I_n) + \alpha^2 H_{P_k}(\Delta X_k \Delta Z_k) - \frac{\alpha^2}{n} \langle \Delta X_k, \Delta Z_k \rangle I_n\|_F - \gamma_{\mu} \mu_k(\alpha) &\ \leq \\
(1-\alpha)\|H_{P_k}(X_kZ_k) - \mu_k I_n\|_F + \alpha^2\mu_k \bigg(\frac{K_{\Delta}^2}{n} + K_{H\Delta}\bigg)n^4 &\ \\ - \gamma_{\mu} \bigg((1-\alpha)\mu_k + \alpha\sigma_k \mu_k + \frac{\alpha^2}{n}\langle \Delta X_k, \Delta Z_k \rangle \bigg) &\ \leq\\
-\gamma_{\mu} \alpha \sigma_{\min} \mu_k + \alpha^2 \mu_k \bigg(\frac{2K_{\Delta}^2}{n} + K_{H\Delta} \bigg)n^4,
\end{split}
\end{equation*}
\noindent where we used the neighbourhood conditions in \eqref{Small neighbourhood}, the equality $\mu_k(\alpha) = (1-\alpha)\mu_k + \alpha\sigma_k \mu_k + \frac{\alpha^2}{n} \langle\Delta X_k, \Delta Z_k \rangle$ (which can be derived from \eqref{Lemma step-length equation 1}), and the third block equation of the Newton system \eqref{inexact vectorized Newton System}. The right-most side of the previous inequality is non-positive for every $\alpha$ satisfying:
$$\alpha \leq \frac{\sigma_{\min}\gamma_{\mu}}{\big(\frac{2K_{\Delta}^2}{n} + K_{H\Delta}\big) n^4}.$$
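The identity for $\mu_k(\alpha)$ used in the previous display can be verified directly from \eqref{Lemma step-length equation 1}:

```latex
\mu_k(\alpha)
  = \frac{1}{n}\langle X_k + \alpha\Delta X_k,\ Z_k + \alpha\Delta Z_k\rangle
  = \frac{1}{n}\Big(\langle X_k, Z_k\rangle
      + \alpha(\sigma_k - 1)\langle X_k, Z_k\rangle
      + \alpha^2\langle \Delta X_k, \Delta Z_k\rangle\Big)
  = (1-\alpha)\mu_k + \alpha\sigma_k\mu_k
      + \frac{\alpha^2}{n}\langle \Delta X_k, \Delta Z_k\rangle.
```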
\par Finally, to prove (\ref{Lemma step-length relation 3}), we set (without loss of generality) $\beta_2 = 0.99$. We know, from Algorithm \ref{Algorithm PMM-IPM}, that $\sigma_{\max} \leq 0.5$. With the previous two remarks in mind, we have:
\begin{equation*}
\begin{split}
\frac{1}{n}\langle X_k + \alpha \Delta X_k, Z_k + \alpha \Delta Z_k \rangle - (1-0.01\alpha)\mu_k \leq \\
(1-\alpha)\mu_k + \alpha \sigma_k \mu_k + \alpha^2 \frac{K_{\Delta}^2 n^4}{n}\mu_k - (1-0.01 \alpha)\mu_k \leq \\
-0.99\alpha \mu_k + 0.5\alpha \mu_k + \alpha^2 \frac{K_{\Delta}^2 n^4}{n} \mu_k =\\
-0.49\alpha \mu_k +\alpha^2\frac{K_{\Delta}^2 n^4}{n}\mu_k.
\end{split}
\end{equation*}
\noindent The last expression is non-positive for every $\alpha$ satisfying:
$$\alpha \leq \frac{0.49 }{K_{\Delta}^2 n^3}.$$
\par By combining all the previous bounds on the step-length, we have that \eqref{Lemma step-length relation 1}-\eqref{Lemma step-length relation 3} hold for every $\alpha \in (0,\alpha^*)$, where:
\begin{equation} \label{Lemma step-length bound on step-length}
\alpha^* \coloneqq \min\bigg\{ \frac{\sigma_{\min}}{2 K_{\Delta}^2 n^3},\ \frac{\sigma_{\min}\gamma_{\mu}}{\big(\frac{2K_{\Delta}^2}{n} + K_{H\Delta}\big) n^4},\ \frac{0.49 }{K_{\Delta}^2 n^3},\ 1\bigg\}.
\end{equation}
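As a quick sanity check on \eqref{Lemma step-length bound on step-length}, the following sketch evaluates $\alpha^*$ and its $O(n^{-4})$ decay. The numeric values assigned to $\sigma_{\min}$, $\gamma_{\mu}$, $K_{\Delta}$ and $K_{H\Delta}$ below are hypothetical placeholders, not quantities derived in the analysis:

```python
def alpha_star(sigma_min, gamma_mu, K_Delta, K_HDelta, n):
    # The four terms of the min in the step-length bound, in the same order
    # as in the displayed formula.
    return min(
        sigma_min / (2 * K_Delta**2 * n**3),
        sigma_min * gamma_mu / ((2 * K_Delta**2 / n + K_HDelta) * n**4),
        0.49 / (K_Delta**2 * n**3),
        1.0,
    )
```

For fixed constants the second term dominates asymptotically, so $\alpha^*$ shrinks like $n^{-4}$, matching the $\Omega(1/n^4)$ worst-case step-length claimed in the lemma.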
\par Next, we would like to find the maximum $\bar{\alpha} \in (0,\alpha^*)$, such that:
$$(X_k(\alpha),y_k(\alpha),Z_k(\alpha)) \in \mathscr{N}_{\mu_k(\alpha)}(\Xi_k,\lambda_k),\ \text{for all}\ \alpha \in (0,\bar{\alpha}),$$
\noindent where $\mu_k(\alpha) = \frac{\langle X_k(\alpha), Z_k(\alpha) \rangle}{n}$. Let:
\begin{equation} \label{primal infeasibility vector}
\tilde{r}_p(\alpha) = A\bm{X}_k(\alpha)+ \mu_k(\alpha)(y_k(\alpha)-\lambda_k) - \big(b + \frac{\mu_k(\alpha)}{\mu_0}\bar{b}\big),
\end{equation}
\noindent and
\begin{equation}\label{dual infeasibility matrix}
\bm{\tilde{R}}_d(\alpha) = A^\top y_k(\alpha) + \bm{Z}_k(\alpha) - \mu_k(\alpha)(\bm{X}_k(\alpha)- \bm{\Xi}_k) - \big(\bm{C} + \frac{\mu_k(\alpha)}{\mu_0}\bm{\bar{C}}\big).
\end{equation}
\noindent In other words, we need to find the maximum $\bar{\alpha} \in (0,\alpha^*)$, such that:
\begin{equation} \label{Step-length neighbourhood conditions}
\|\tilde{r}_p(\alpha),\bm{\tilde{R}}_d(\alpha)\|_2 \leq K_N \frac{\mu_k(\alpha)}{\mu_0},\ \ \|\tilde{r}_p(\alpha),\bm{\tilde{R}}_d(\alpha)\|_{\mathcal{S}} \leq \gamma_{\mathcal{S}} \rho\frac{\mu_k(\alpha)}{\mu_0} ,\ \text{for all}\ \alpha \in (0,\bar{\alpha}).
\end{equation}
\noindent If the latter two conditions hold, then $(X_k(\alpha),y_k(\alpha),Z_k(\alpha)) \in \mathscr{N}_{\mu_k(\alpha)}(\Xi_k,\lambda_k),\ \text{for all}\ \alpha \in (0,\bar{\alpha})$. Algorithm \ref{Algorithm PMM-IPM} updates $\Xi_k$ and $\lambda_k$ only when similar conditions (as in \eqref{Step-length neighbourhood conditions}) hold for the new parameters. If the parameters are not updated, the new iterate lies in the desired neighbourhood because of \eqref{Step-length neighbourhood conditions}, alongside \eqref{Lemma step-length relation 1}--\eqref{Lemma step-length relation 3}.
\par We start by rearranging $\tilde{r}_p(\alpha)$. Specifically, we have that:\begin{equation*}
\begin{split}
\tilde{r}_p(\alpha) &= \\ A(\bm{X}_k + \alpha \bm{\Delta X}_k) +\big(\mu_k + \alpha(\sigma_k-1)\mu_k +\frac{\alpha^2}{n}\langle\Delta X_k,\Delta Z_k\rangle\big)\big((y_k + \alpha \Delta y_k -\lambda_k)-\frac{\bar{b}}{\mu_0} \big) -b &=\\
\big(A\bm{X}_k +\mu_k (y_k -\lambda_k)-b -\frac{\mu_k}{\mu_0}\bar{b}\big) + \alpha(A \bm{\Delta X}_k + \mu_k \Delta y_k) \\
+\ \big(\alpha(\sigma_k-1)\mu_k + \frac{\alpha^2}{n}\langle\Delta X_k, \Delta Z_k\rangle \big)\big((y_k - \lambda_k + \alpha \Delta y_k) - \frac{\bar{b}}{\mu_0}\big)&=\\
\frac{\mu_k}{\mu_0}\tilde{b}_k + \alpha\bigg(b- A\bm{X}_k - \sigma_k\mu_k\big((y_k-\lambda_k)-\frac{\bar{b}}{\mu_0} \big) + \epsilon_{p,k} + \mu_k \big((y_k-\lambda_k)-\frac{\bar{b}}{\mu_0} \big)\ \\
-\ \mu_k \big((y_k-\lambda_k)-\frac{\bar{b}}{\mu_0} \big) \bigg) + \big(\alpha(\sigma_k-1)\mu_k + \frac{\alpha^2}{n}\langle \Delta X_k, \Delta Z_k\rangle\big)\big((y_k - \lambda_k + \alpha \Delta y_k) - \frac{\bar{b}}{\mu_0}\big).
\end{split}
\end{equation*}
\noindent where we used the definition of $\tilde{b}_k$ in the neighbourhood conditions in \eqref{Small neighbourhood}, and the second block equation in \eqref{inexact vectorized Newton System}. Using the neighbourhood conditions once again, and cancelling opposite terms in the previous equation, we obtain:\begin{equation}\label{primal infeasibility formula}
\begin{split}
\tilde{r}_p(\alpha) = &\ (1-\alpha)\frac{\mu_k}{\mu_0}\tilde{b}_k + \alpha \epsilon_{p,k} + \alpha^2(\sigma_k - 1)\mu_k \Delta y_k + \frac{\alpha^2}{n}\langle \Delta X_k, \Delta Z_k\rangle\big(y_k - \lambda_k + \alpha \Delta y_k - \frac{\bar{b}}{\mu_0} \big).
\end{split}
\end{equation}
\noindent Similarly, we can show that:
\begin{equation}\label{dual infeasibility formula}
\bm{\tilde{R}}_d(\alpha) = (1-\alpha)\frac{\mu_k}{\mu_0}\bm{\tilde{C}}_k + \alpha \bm{\mathsf{E}}_{d,k}- \alpha^2(\sigma_k-1)\mu_k \bm{\Delta X}_k - \frac{\alpha^2}{n}\langle \Delta X_k, \Delta Z_k \rangle \big(\bm{X}_k - \bm{\Xi}_k + \alpha \bm{\Delta X}_k + \frac{1}{\mu_0} \bm{\bar{C}}\big).
\end{equation}
\par Recall (Lemma \ref{Lemma boundedness Dx Dz}) that $\langle \Delta X_k, \Delta Z_k \rangle \leq K_{\Delta}^2 n^4 \mu_k$, and define the following quantities:
\begin{equation} \label{step-length, auxiliary constants}
\begin{split}
\xi_2 = &\ \mu_k \|(\Delta y_k,\bm{\Delta X}_k)\|_2 + K_{\Delta}^2n^3 \mu_{k}\bigg(\|(y_k - \lambda_k,\bm{X}_k-\bm{\Xi}_k)\|_2\ +\\
&\ \alpha^* \|(\Delta y_k,\bm{\Delta X}_k)\|_2 + \frac{1}{\mu_0}\|(\bar{b},\bm{\bar{C}})\|_2\bigg),\\
\xi_{\mathcal{S}} = &\ \mu_k \|(\Delta y_k,\bm{\Delta X}_k)\|_{\mathcal{S}} + K_{\Delta}^2n^3 \mu_{k}\bigg(\|(y_k - \lambda_k,\bm{X}_k-\bm{\Xi}_k)\|_{\mathcal{S}} \ +\\
&\ \alpha^* \|(\Delta y_k,\bm{\Delta X}_k)\|_{\mathcal{S}} + \frac{1}{\mu_0}\|(\bar{b},\bm{\bar{C}})\|_{\mathcal{S}}\bigg),
\end{split}
\end{equation}
\noindent where $\alpha^*$ is given by \eqref{Lemma step-length bound on step-length}. Using the definition of the starting point in \eqref{starting point}, as well as the results in Lemmas \ref{Lemma boundedness of x z} and \ref{Lemma boundedness Dx Dz}, we can observe that $\xi_2 = O(n^{4} \mu_k)$. On the other hand, using Assumption \ref{Assumption 2}, we know that for every pair $(r_1,\bm{R}_2) \in \mathbb{R}^{m+n^2}$ (where $R_2 \in \mathbb{R}^{n \times n}$ is an arbitrary matrix), if $\|(r_1,\bm{R}_2)\|_2 = \Theta(f(n))$, where $f(\cdot)$ is a positive polynomial function of $n$, then $\|(r_1,\bm{R}_2)\|_{\mathcal{S}} = \Theta(f(n))$. Hence, we have that $\xi_{\mathcal{S}} = O(n^{4}\mu_k)$. Using the quantities in \eqref{step-length, auxiliary constants}, equations \eqref{primal infeasibility formula} and \eqref{dual infeasibility formula}, as well as the neighbourhood conditions, we have that:
\begin{equation*}
\begin{split}
\|\tilde{r}_p(\alpha),\bm{\tilde{R}}_d(\alpha)\|_2 \leq &\ (1-\alpha)K_N \frac{\mu_k}{\mu_0} + \alpha \mu_k \|(\epsilon_{p,k},\bm{\mathsf{E}}_{d,k})\|_2+ \alpha^2 \mu_k \xi_2,\\
\|\tilde{r}_p(\alpha),\bm{\tilde{R}}_d(\alpha)\|_{\mathcal{S}} \leq &\ (1-\alpha)\gamma_{\mathcal{S}}\rho \frac{\mu_k}{\mu_0} + \alpha \mu_k \|(\epsilon_{p,k},\bm{\mathsf{E}}_{d,k})\|_{\mathcal{S}} + \alpha^2 \mu_k \xi_{\mathcal{S}},
\end{split}
\end{equation*}
\noindent for all $\alpha \in (0,\alpha^*)$, where $\alpha^*$ is given by \eqref{Lemma step-length bound on step-length} and the error occurring from the inexact solution of \eqref{exact non-vectorized Newton System}, $(\epsilon_{p,k},\mathsf{E}_{d,k})$, satisfies \eqref{Krylov method termination conditions}. From \eqref{Lemma step-length relation 1}, we know that:
$$\mu_k(\alpha) \geq (1-\alpha(1-\beta_1))\mu_k,\ \text{for all}\ \alpha \in (0,\alpha^*).$$
\noindent By combining the last three inequalities, using \eqref{Krylov method termination conditions} and setting $\beta_1 = \frac{\sigma_{\min}}{2}$, we obtain that:
\begin{equation*}
\begin{split}
\|\tilde{r}_p(\alpha),\bm{\tilde{R}}_d(\alpha)\|_2 \leq \frac{\mu_k(\alpha)}{\mu_0} K_N,\ \text{for all}\ \alpha \in \bigg(0, \min\big\{\alpha^*,\frac{\sigma_{\min} K_N}{4\xi_2 \mu_0}\big\}\bigg].
\end{split}
\end{equation*}
\noindent Similarly,
\begin{equation*}
\begin{split}
\|\tilde{r}_p(\alpha),\bm{\tilde{R}}_d(\alpha)\|_{\mathcal{S}} \leq \frac{\mu_k(\alpha)}{\mu_0} \gamma_{\mathcal{S}} \rho,\ \text{for all}\ \alpha \in \bigg(0, \min \big\{\alpha^*,\frac{\sigma_{\min} \gamma_{\mathcal{S}} \rho}{4\xi_{\mathcal{S}} \mu_0}\big\}\bigg].
\end{split}
\end{equation*}
\par Hence, we have that:
\begin{equation} \label{Lemma step-length, STEPLENGTH BOUND}
\bar{\alpha} \coloneqq \min \bigg\{\alpha^*,\frac{\sigma_{\min} K_N}{4\xi_2 \mu_0}, \frac{\sigma_{\min} \gamma_{\mathcal{S}} \rho}{4\xi_{\mathcal{S}} \mu_0} \bigg\}.
\end{equation}
\noindent Since $\bar{\alpha} = \Omega\big(\frac{1}{n^{4}}\big)$, we know that there must exist a constant $\bar{\kappa} > 0$, independent of $n$, $m$ and of the iteration $k$, such that $\bar{\alpha} \geq \frac{\bar{\kappa}}{n^{4}}$, for all $k \geq 0$, and this completes the proof.
\end{proof}
\fi
\noindent The following theorem summarizes our results.
\begin{theorem} \label{Theorem mu convergence}
Given Assumptions \textnormal{\ref{Assumption 1}, \ref{Assumption 2}}, the sequence $\{\mu_k\}$ generated by Algorithm \textnormal{\ref{Algorithm PMM-IPM}} converges Q-linearly to zero, and the sequences of regularized residual norms
$$\big\{\|A\bm{X}_k + \mu_k (y_k-\lambda_k) - b-\frac{\mu_k}{\mu_0}\bar{b}\|_2\big\}\ \text{and}\ \big\{\| A^\top y_k + \bm{Z}_k - \mu_k (\bm{X}_k - \bm{\Xi}_k) - \bm{C} - \frac{\mu_k}{\mu_0}\bm{\bar{C}}\|_2\big\}$$
converge R-linearly to zero.
\end{theorem}
\begin{proof}
\noindent From (\ref{Lemma step-length relation 3}) we have that:
$$ \mu_{k+1} \leq (1-0.01\alpha_k)\mu_k,$$
\noindent while, from (\ref{Lemma step-length, STEPLENGTH BOUND}), we know that $\forall\ k \geq 0$, $\exists\ \bar{\alpha} \geq \frac{\bar{\kappa}}{n^4}$ such that $\alpha_k \geq \bar{\alpha}$. Hence, we can easily see that $\mu_k \rightarrow 0$. On the other hand, from the neighbourhood conditions, we know that for all $k \geq 0$:
$$ \bigg\|A\bm{X}_k + \mu_k (y_k-\lambda_k) - b - \frac{\mu_k}{\mu_0}\bar{b}\bigg\|_2 \leq K_N \frac{\mu_k}{\mu_0}$$
\noindent and
$$\bigg\| A^\top y_k + \bm{Z}_k - \mu_k (\bm{X}_k - \bm{\Xi}_k) - \bm{C}- \frac{\mu_k}{\mu_0}\bm{\bar{C}}\bigg\|_2 \leq K_N \frac{\mu_k}{\mu_0}.$$
\noindent Since $\{\mu_k\}$ converges Q-linearly to zero, both residual norms are bounded by a Q-linearly convergent sequence, and hence converge R-linearly to zero. This completes the proof.
\end{proof}
\par The polynomial complexity of Algorithm \ref{Algorithm PMM-IPM} is established in the following theorem.
\begin{theorem} \label{Theorem complexity}
\noindent Let $\varepsilon \in (0,1)$ be a given error tolerance. Choose a starting point for Algorithm \textnormal{\ref{Algorithm PMM-IPM}} as in \eqref{starting point}, such that $\mu_0 \leq \frac{K}{\varepsilon^{\omega}}$ for some positive constants $K,\ \omega$. Given Assumptions \textnormal{\ref{Assumption 1}} and \textnormal{\ref{Assumption 2}}, there exists an index $k_0 \geq 0$ with:
$$k_0 = O\bigg(n^{4}\big|\log \frac{1}{\varepsilon}\big|\bigg)$$
\noindent such that the iterates $\{(X_k,y_k,Z_k)\}$ generated from Algorithm \textnormal{\ref{Algorithm PMM-IPM}} satisfy:
$$\mu_k \leq \varepsilon,\ \ \ \ \text{for all}\ k\geq k_0.$$
\end{theorem}
\begin{proof}
The proof can be found in \cite[Theorem 3.8]{Pougk_Gond_COAP}.
\iffalse
\noindent The proof follows standard developments and is only provided here for completeness. Without loss of generality, we can choose $\sigma_{\max} \leq 0.5$ and then from Lemma \ref{Lemma step-length}, we know that there is a constant $\bar{\kappa}$ independent of $n$ such that $\bar{a} \geq \frac{\bar{\kappa}}{n^9}$, where $\bar{a}$ is the worst-case step-length. Given the latter, we know that the new iterate lies in the neighbourhood $\mathscr{N}_{\mu_{k+1}}(\Xi_{k+1},\lambda_{k+1})$ defined in (\ref{Small neighbourhood}). We also know, from (\ref{Lemma step-length relation 3}), that:
$$\mu_{k+1} \leq (1 - 0.01\bar{a})\mu_k \leq (1-0.01\frac{\bar{\kappa}}{n^9})\mu_k,\ \ \ \ k = 0,1,2,\ldots$$
\noindent By taking logarithms on both sides in the previous inequality, we get:
$$\log (\mu_{k+1}) \leq \log (1 - \frac{\tilde{\kappa}}{n^9}) + \log (\mu_k),$$
\noindent where $\tilde{\kappa} = 0.01\bar{\kappa}$. By applying repeatedly the previous formula, and using the fact that $\mu_0 \leq \frac{C}{\epsilon^{\omega}}$, we have:
$$\log(\mu_k) \leq k \log(1 - \frac{\tilde{\kappa}}{n^9}) + \log(\mu_0) \leq k \log(1 - \frac{\tilde{\kappa}}{n^9}) + \omega \log(\frac{1}{\epsilon}) + \log(C).$$
\noindent We use the fact that: $\log(1+\beta) \leq \beta,\ \ \forall\ \beta > -1$ to get:
$$\log(\mu_k) \leq k(-\frac{\tilde{\kappa}}{n^9}) + \omega \log(\frac{1}{\epsilon}) +\log(C).$$
\noindent Hence, convergence is attained if:
$$k(-\frac{\tilde{\kappa}}{n^9}) + \omega \log(\frac{1}{\epsilon}) +\log(C) \leq \log(\epsilon).$$
\noindent The latter holds for all $k$ satisfying:
$$k \geq K = \frac{n^9}{\tilde{\kappa}}((1+\omega)\log(\frac{1}{\epsilon})+\log(C)),$$
\noindent and completes the proof.
\fi
\end{proof}
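To make the $O\big(n^{4}\log\frac{1}{\varepsilon}\big)$ iteration bound concrete, the sketch below simulates the worst-case contraction $\mu_{k+1} \leq \big(1 - 0.01\,\bar{\kappa}/n^4\big)\mu_k$ guaranteed by the analysis; the values of $\bar{\kappa}$ and $\mu_0$ used here are hypothetical:

```python
def iterations_to_tol(n, mu0, eps, kappa_bar=1.0):
    # Simulate the worst-case per-iteration contraction of the barrier
    # parameter and count iterations until mu_k <= eps.
    rate = 1.0 - 0.01 * kappa_bar / n**4
    k, mu = 0, mu0
    while mu > eps:
        mu *= rate
        k += 1
    return k
```

Doubling $n$ multiplies the iteration count by roughly $2^4 = 16$, while tightening $\varepsilon$ only adds a logarithmic factor, in line with the stated complexity.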
\par Finally, we present the global convergence guarantee of Algorithm \ref{Algorithm PMM-IPM}.
\begin{theorem} \label{Theorem convergence for the feasible case}
Suppose that Algorithm \textnormal{\ref{Algorithm PMM-IPM}} terminates when a limit point is reached. Then, if Assumptions \textnormal{\ref{Assumption 1}} and \textnormal{\ref{Assumption 2}} hold, every limit point of $\{(X_k,y_k,Z_k)\}$ determines a primal-dual solution of the non-regularized pair \textnormal{(\ref{non-regularized primal})--(\ref{non-regularized dual})}.
\end{theorem}
\begin{proof}
\noindent From Theorem \ref{Theorem mu convergence}, we know that $\{\mu_k\} \rightarrow 0$, and hence, there exists a sub-sequence $\mathcal{K} \subseteq \mathbb{N}$, such that:
$$\{ A\bm{X}_k + \mu_k (y_k - \lambda_k) -b -\frac{\mu_k}{\mu_0}\bar{b}\}_{\mathcal{K}} \rightarrow 0,\ \{ A^\top y_k + \bm{Z}_k - \mu_k (\bm{X}_k - \bm{\Xi}_k)-\bm{C}-\frac{\mu_k}{\mu_0}\bm{\bar{C}}\}_{\mathcal{K}} \rightarrow 0.$$
\noindent However, since Assumptions \ref{Assumption 1} and \ref{Assumption 2} hold, we know from Lemma \ref{Lemma boundedness of x z} that $\{(X_k,y_k,Z_k)\}$ is a bounded sequence. Hence, we obtain that:
$$\{ A\bm{X}_k - b\}_{\mathcal{K}} \rightarrow 0,\ \{A^\top y_k +\bm{Z}_k -\bm{C}\}_{\mathcal{K}} \rightarrow 0.$$
\noindent One can readily observe that the limit point of the algorithm satisfies the optimality conditions of \eqref{non-regularized primal}--\eqref{non-regularized dual}, since $\langle X_k, Z_k \rangle \rightarrow 0$ and $X_k,\ Z_k \in \mathcal{S}^n_+$.
\end{proof}
\begin{remark}
As mentioned at the end of Section \textnormal{\ref{section Algorithmic Framework}}, we do not study the conditions under which one can guarantee that $X_k - \Xi_k \rightarrow 0$ and $y_k - \lambda_k \rightarrow 0$, although this could be possible. This is because the method is shown to converge globally even if this is not the case. Indeed, notice that if one were to choose $X_0 = 0$ and $\lambda_0 = 0$, and simply ignore the last conditional statement of Algorithm \textnormal{\ref{Algorithm PMM-IPM}}, the convergence analysis established in this section would still hold. In this case, the method would be interpreted as an interior point-quadratic penalty method, and we could consider the regularization as a diminishing primal-dual Tikhonov regularizer (i.e. a variant of the regularization proposed in \textnormal{\cite{SaundersTomlin_Tech_Rep}}).
\end{remark}
\section{A Sufficient Condition for Strong Duality} \label{section Infeasible problems}
\par We now drop Assumptions \ref{Assumption 1}, \ref{Assumption 2}, in order to analyze the behaviour of the algorithm when solving problems that are strongly (or weakly) infeasible, problems for which strong duality does not hold (weakly feasible), or problems for which the primal or the dual solution is not attained. For a formal definition and a comprehensive study of the previous types of problems we refer the reader to \cite{Liu_Pataki_MATH_PROG}, and the references therein. Below we provide a well-known result, stating that strong duality holds if and only if there exists a KKT point.
\begin{Proposition} \label{Prop. KKT and strong duality}
Let \textnormal{\eqref{non-regularized primal}--\eqref{non-regularized dual}} be given. Then, $\textnormal{val\eqref{non-regularized primal}} \geq \textnormal{val\eqref{non-regularized dual}}$, where $\textnormal{val}(\cdot)$ denotes the optimal objective value of a problem. Moreover, $\textnormal{val\eqref{non-regularized primal}} = \textnormal{val\eqref{non-regularized dual}}$ and $(X^*,y^*,Z^*)$ is an optimal solution for \textnormal{\eqref{non-regularized primal}--\eqref{non-regularized dual}}, if and only if $(X^*,y^*,Z^*)$ satisfies the (KKT) optimality conditions in \eqref{non-regularized F.O.C}.
\end{Proposition}
\begin{proof}
This is a well-known fact, the proof of which can be found in \cite[Proposition 2.1]{ShapSchein_BOOK_SPRINGER}.
\end{proof}
Let us employ the following two premises:
\begin{premise} \label{Premise 1}
During the iterations of Algorithm \textnormal{\ref{Algorithm PMM-IPM}}, the sequences $\{\|y_k - \lambda_k\|_2\}$ and $\{\|X_k - \Xi_k\|_F\}$, remain bounded.
\end{premise}
\begin{premise} \label{Premise 2}
There does not exist a primal-dual triple, satisfying the KKT conditions in \eqref{non-regularized F.O.C} associated with the primal-dual pair \textnormal{(\ref{non-regularized primal})--(\ref{non-regularized dual})}.
\end{premise}
\noindent The following analysis extends the result presented in \cite[Section 4]{Pougk_Gond_COAP}, and is based on the developments in \cite[Sections 10 \& 11]{Dehg_Goff_Orban_OMS}. In what follows, we show that Premises \ref{Premise 1} and \ref{Premise 2} are contradictory. In other words, if Premise \ref{Premise 2} holds (which means that strong duality does not hold for the problem under consideration), then Premise \ref{Premise 1} cannot hold, and hence Premise \ref{Premise 1} is a sufficient condition for strong duality (and its negation is a necessary condition for Premise \ref{Premise 2}). We show that if Premise \ref{Premise 1} holds, then the algorithm converges to an optimal solution. If not, however, it does not necessarily mean that the problem under consideration is infeasible. For example, this could happen if either \eqref{non-regularized primal} or \eqref{non-regularized dual} is strongly infeasible, weakly infeasible, and in some cases even if either of the problems is weakly feasible (e.g. see \cite{Liu_Pataki_MATH_PROG,ShapSchein_BOOK_SPRINGER}). As we discuss later, the knowledge that Premise \ref{Premise 1} does not hold could be useful in detecting pathological problems.
\begin{lemma} \label{Lemma infeasibility bounded Newton}
Given Premise \textnormal{\ref{Premise 1}}, and by assuming that $\langle X_k, Z_k \rangle > \varepsilon$, for some $\varepsilon >0$, for all iterations $k$ of Algorithm \textnormal{\ref{Algorithm PMM-IPM}}, the Newton direction produced by \textnormal{(\ref{inexact vectorized Newton System})} is uniformly bounded by a constant dependent only on $n$ and/or $m$.
\end{lemma}
\begin{proof}
\par The proof is omitted since it follows exactly the developments in \cite[Lemma 10.1]{Dehg_Goff_Orban_OMS}. We note that the regularization terms (blocks (1,1) and (2,2) in the Jacobian matrix in \eqref{inexact vectorized Newton System}) depend on $\mu_k$, which by assumption is bounded away
from zero: $\mu_k \geq \frac{\varepsilon}{n}$.
\end{proof}
\par In the following Lemma, we prove by contradiction that the parameter $\mu_k$ of Algorithm \ref{Algorithm PMM-IPM} converges to zero, given that Premise \ref{Premise 1} holds.
\begin{lemma} \label{Lemma infeasibility mu to zero}
Given Premise \textnormal{\ref{Premise 1}}, and a sequence $(X_k,y_k,Z_k) \in \mathcal{N}_{\mu_k}(\Xi_k,\lambda_k)$ produced by Algorithm \textnormal{\ref{Algorithm PMM-IPM}}, the sequence $\{\mu_k\}$ converges to zero.
\end{lemma}
\begin{proof}
\noindent Assume, for the sake of contradiction, that $\mu_k > \varepsilon > 0$, $\text{for all}\ k \geq 0$. Then, we know (from Lemma \ref{Lemma infeasibility bounded Newton}) that the Newton direction obtained by the algorithm at every iteration, after solving (\ref{inexact vectorized Newton System}), will be uniformly bounded by a constant dependent only on $n$; that is, there exists a positive constant $K^{\dagger}$, such that $\|(\Delta X_k,\Delta y_k,\Delta Z_k)\|_2 \leq K^{\dagger}$. We define $\tilde{r}_p(\alpha)$ and $\bm{\tilde{R}}_d(\alpha)$ as in \eqref{primal infeasibility vector} and \eqref{dual infeasibility matrix}, respectively, for which we know that equalities \eqref{primal infeasibility formula} and \eqref{dual infeasibility formula} hold.
Take any $k \geq 0$ and define the following functions:
\begin{equation*}
\begin{split}
f_1(\alpha) \coloneqq \ &\langle X_k(\alpha), Z_k(\alpha)\rangle - (1 -\alpha(1-\frac{\sigma_{\min}}{2}))\langle X_k, Z_k\rangle,\\
f_2(\alpha) \coloneqq \ & \gamma_{\mu}\mu_k(\alpha) - \|H_{{P_k}(\alpha)}(X_k(\alpha)Z_k(\alpha)) - \mu_k(\alpha) I_n\|_F, \\
f_3(\alpha) \coloneqq \ & (1-0.01\alpha)\langle X_k,Z_k\rangle - \langle X_k(\alpha), Z_k(\alpha)\rangle,\\
g_{2}(\alpha) \coloneqq \ & \frac{\mu_k(\alpha)}{\mu_0}K_N - \|(\tilde{r}_p(\alpha),\bm{\tilde{R}}_d(\alpha))\|_2,
\end{split}
\end{equation*}
\noindent where $\mu_k(\alpha) = \frac{\langle X_k + \alpha \Delta X_k, Z_k + \alpha \Delta Z_k\rangle}{n}$, $(X_k(\alpha),y_k(\alpha),Z_k(\alpha)) = (X_k + \alpha \Delta X_k,y_k + \alpha \Delta y_k,Z_k + \alpha \Delta Z_k)$. We would like to show that there exists $\alpha^* > 0$, such that:
$$f_1(\alpha) \geq 0,\quad f_2(\alpha) \geq 0,\quad f_3(\alpha) \geq 0,\quad g_2(\alpha) \geq 0,\ \text{for all}\ \alpha \in (0,\alpha^*].$$
\noindent These conditions model the requirement that the next iteration of Algorithm \ref{Algorithm PMM-IPM} must lie in the updated neighbourhood $\mathcal{N}_{\mu_{k+1}}(\Xi_k,\lambda_{k})$ (notice however that the restriction with respect to the semi-norm defined in \eqref{semi-norm definition} is not required here, and indeed it cannot be incorporated unless $\textnormal{rank}(A) = m$). Since Algorithm \ref{Algorithm PMM-IPM} updates the parameters $\lambda_k,\ \Xi_k$ only if the selected new iterate belongs to the new neighbourhood, defined using the updated parameters (again, ignoring the restrictions with respect to the semi-norm), it suffices to show that $(X_{k+1},y_{k+1},Z_{k+1}) \in \mathcal{N}_{\mu_{k+1}}(\Xi_k,\lambda_{k})$.
\par Proving the existence of $\alpha^* > 0$, such that each of the aforementioned functions is positive, follows exactly the developments in Lemmas \ref{Lemma step-length-part 1}--\ref{Lemma step-length-part 2}, with the only difference being that the bounds on the directions are not explicitly specified in this case. Using the same methodology as in the aforementioned lemmas, while keeping in mind our assumption that $\langle X_k, Z_k \rangle> \varepsilon$, we can show that:
\begin{equation} \label{infeasibility minimum step-length}
\alpha^* \coloneqq \min \bigg\{1,\ \frac{\sigma_{\min}\varepsilon}{2(K^{\dagger})^2},\ \frac{(1-\gamma_{\mu})\sigma_{\min}\gamma_{\mu}\varepsilon}{2n(K^{\dagger})^2},\ \frac{0.49\varepsilon}{2(K^{\dagger})^2},\ \frac{\sigma_{\min}K_N \varepsilon}{4\mu_0\xi_2} \bigg\},
\end{equation}
\noindent where $\xi_2$ is a bounded constant, defined as in \eqref{step-length, auxiliary constants}, and dependent on $K^{\dagger}$. However, using the inequality:
$$\mu_{k+1} \leq (1-0.01 \alpha)\mu_k,\ \text{for all}\ \alpha \in [0,\alpha^*]$$
\noindent we get that $\mu_k \rightarrow 0$, which contradicts our assumption that $\mu_k > \varepsilon,\ \forall\ k\geq 0$, and completes the proof.
\end{proof}
\noindent Finally, in the following theorem, we derive a necessary condition for the lack of strong duality.
\begin{theorem} \label{Theorem Infeasibility condition}
Suppose that Premise \textnormal{\ref{Premise 2}} holds, i.e. there does not exist a KKT triple for the pair \textnormal{(\ref{non-regularized primal})--(\ref{non-regularized dual})}. Then, Premise \textnormal{\ref{Premise 1}} fails to hold.
\end{theorem}
\begin{proof}
\noindent For the sake of contradiction, suppose that Premise \ref{Premise 1} holds. In Lemma \ref{Lemma infeasibility mu to zero}, we proved that given Premise \ref{Premise 1}, Algorithm \ref{Algorithm PMM-IPM} produces iterates that belong to the neighbourhood (\ref{Small neighbourhood}) and $\mu_k \rightarrow 0$. But from the neighbourhood conditions we can observe that:
$$\|A\bm{X}_k + \mu_k(y_k - \lambda_k) - b - \frac{\mu_k}{\mu_0}\bar{b} \|_2 \leq K_N\frac{\mu_k}{\mu_0},$$
\noindent and
$$\|A^\top y_k + \bm{Z}_k - \mu_k(\bm{X}_k - \bm{\Xi}_k)-\bm{C}-\frac{\mu_k}{\mu_0}\bm{\bar{C}}\|_2 \leq K_N \frac{\mu_k}{\mu_0}.$$
\noindent Hence, we can choose a sub-sequence $\mathcal{K} \subseteq \mathbb{N}$, for which:
$$\{A\bm{X}_k + \mu_k(y_k - \lambda_k) - b - \frac{\mu_k}{\mu_0}\bar{b} \}_{\mathcal{K}} \rightarrow 0,\ \text{and} \ \{A^\top y_k + \bm{Z}_k - \mu_k(\bm{X}_k - \bm{\Xi}_k)-\bm{C}-\frac{\mu_k}{\mu_0}\bm{\bar{C}}\}_{\mathcal{K}} \rightarrow 0.$$
\noindent But since $\|y_k-\lambda_k\|_2$ and $\|X_k - \Xi_k\|_F$ are bounded, while $\mu_k \rightarrow 0$, we have that:
$$\{A\bm{X}_k - b\}_{\mathcal{K}} \rightarrow 0,\ \{\bm{C} - A^\top y_k - \bm{Z}_k\}_{\mathcal{K}} \rightarrow 0,\ \text{and}\ \{\langle X_k, Z_k\rangle\}_{\mathcal{K}} \rightarrow 0.$$
\noindent This contradicts Premise \ref{Premise 2}, i.e. that the pair (\ref{non-regularized primal})--(\ref{non-regularized dual}) does not have a KKT triple, and completes the proof.
\end{proof}
\par In the previous Theorem, we proved that the negation of Premise \ref{Premise 1} is a necessary condition for Premise \ref{Premise 2}. Nevertheless, this does not mean that the condition is also sufficient. In order to obtain a more reliable algorithmic test for lack of strong duality, we have to use the properties of Algorithm \ref{Algorithm PMM-IPM}. In particular, we can notice that if there does not exist a KKT point, then the PMM sub-problems will stop being updated after a finite number of iterations. In that case, we know from Theorem \ref{Theorem Infeasibility condition} that the sequence $\|(\bm{X}_k- \bm{\Xi}_k,y_k -\lambda_k)\|_2$ will grow unbounded. Hence, we can define a maximum number of iterations per PMM sub-problem, say $k_{\dagger} > 0$, as well as a very large constant $K_{\dagger}$. Then, if $\|(\bm{X}_k- \bm{\Xi}_k,y_k -\lambda_k)\|_2 > K_{\dagger}$ and $k_{in} \geq k_{\dagger}$ (where $k_{in}$ counts the number of IPM iterations per PMM sub-problem), the algorithm is terminated with a guess that there does not exist a KKT point for \eqref{non-regularized primal}--\eqref{non-regularized dual}.
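The detection mechanism described above can be sketched as a simple termination test; the threshold values \texttt{K\_dagger} and \texttt{k\_dagger} below are hypothetical tuning choices, not constants from the analysis:

```python
def suspect_no_kkt(prox_distance, k_in, K_dagger=1e8, k_dagger=200):
    # prox_distance: ||(X_k - Xi_k, y_k - lambda_k)||_2
    # k_in: IPM iterations spent on the current PMM sub-problem
    # Flag a likely absence of a KKT point when the proximal distance has
    # grown past a large threshold while the sub-problem keeps stalling.
    return prox_distance > K_dagger and k_in >= k_dagger
```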
\begin{remark}
Let us notice that the analysis in Section \textnormal{\ref{section Polynomial Convergence}} employs the standard assumptions used when analyzing a non-regularized IPM. However, the method could still be useful if these assumptions were not met. Indeed, if for example the constraint matrix was not of full row rank, one could still prove global convergence of the method, using the methodology employed in this section by assuming that Premise \textnormal{\ref{Premise 1}} holds and Premise \textnormal{\ref{Premise 2}} does not (or by following the developments in \textnormal{\cite{Dehg_Goff_Orban_OMS}}). Furthermore, in practice the method would not encounter any numerical issues with the inversion of the Newton system (see \textnormal{\cite{ArmandBenoist_MATH_PROG}}). Nevertheless, showing polynomial complexity in this case is still an open problem. The aim of this work is to show that under the standard Assumptions \textnormal{\ref{Assumption 1}, \ref{Assumption 2}}, Algorithm \ref{Algorithm PMM-IPM} is able to enjoy polynomial complexity, while having to solve better conditioned systems than those solved by standard IPMs at each iteration, thus ensuring better stability (and as a result better robustness and potentially better efficiency).
\end{remark}
\section{Conclusions} \label{section conclusions}
\par
In this paper we developed and analyzed an interior point-proximal method of multipliers, suitable for solving linear positive semi-definite programs, without requiring the exact solution of the associated Newton systems. By generalizing appropriately some previous results on convex quadratic programming, we show that IP-PMM inherits the polynomial complexity of standard non-regularized IPM schemes when applied to SDP problems, under standard assumptions, while having to approximately solve better-conditioned Newton systems, compared to their non-regularized counterparts. Furthermore, we provide a tuning for the proximal penalty parameters based on the well-studied barrier parameter, which can be used to guide any subsequent implementation of the method. Finally, we study the behaviour of the algorithm when applied to problems for which no KKT point exists, and give a necessary condition which can be used to construct detection mechanisms for identifying such pathological cases.
\par
A future research direction would be to construct a robust and efficient implementation of the method, which should utilize some Krylov subspace solver alongside an appropriate preconditioner for the solution of the associated linear systems. Given previous implementations of IP-PMM for other classes of problems appearing in the literature, we expect that the theory can successfully guide the implementation, yielding a competitive, robust, and efficient method.
\end{document}
\begin{document}
\begin{frontmatter}
\title{Hourglass alternative and the finiteness conjecture for the spectral
characteristics of sets of non-negative matrices\tnoteref{rfs}}
\tnotetext[rfs]{Funded by the Russian Science Foundation, Project No.
14-50-00150.}
\author{Victor Kozyakin}
\address{Institute
for Information Transmission Problems\\ Russian Academy of Sciences\\ Bolshoj
Karetny lane 19, Moscow 127994 GSP-4, Russia}
\ead{[email protected]}
\ead[url]{http://www.iitp.ru/en/users/46.htm}
\begin{abstract}
Recently Blondel, Nesterov and Protasov
proved~\cite{BN:SIAMJMAA09,NesPro:SIAMJMAA13} that the finiteness
conjecture holds for the generalized and the lower spectral radii of the
sets of non-negative matrices with independent row/column uncertainty. We
show that this result can be obtained as a simple consequence of the
so-called hourglass alternative used in~\cite{ACDDHK15}, by the author and
his co-authors, to analyze the minimax relations between the spectral radii
of matrix products. Axiomatization of the statements that constitute the
hourglass alternative makes it possible to define a new class of sets of
positive matrices having the finiteness property, which includes the sets
of non-negative matrices with independent row uncertainty. This class of
matrices, supplemented by the zero and identity matrices, forms a semiring
with the Minkowski operations of addition and multiplication of matrix
sets, which gives means to construct new sets of non-negative matrices
possessing the finiteness property for the generalized and the lower
spectral radii.
\end{abstract}
\begin{keyword}Matrix products\sep Non-negative matrices\sep Joint spectral radius\sep Lower spectral
radius\sep Finiteness conjecture
\MSC[2010]15A18\sep 15B48\sep 15A60
\end{keyword}
\end{frontmatter}
\section{Introduction}\label{S:intro}
One of the characteristics that describe the exponential growth rate of
matrix products with factors from a given set of matrices is the so-called joint
or generalized spectral radius. Denote by $\mathcal{M}(N,M)$ the set of all real
$(N\times M)$-matrices. This set of matrices is naturally identified with the
space $\mathbb{R}^{N\times M}$ and therefore, depending on the context, it can be
interpreted as topological, metric or normed space. The \emph{joint spectral
radius}~\cite{RotaStr:IM60} of a set of matrices $\mathscr{A}\subset\mathcal{M}(N,N)$ is
defined as the value of
\begin{equation}\label{E-GSRad0}
\rho(\mathscr{A})=
\adjustlimits\lim_{n\to\infty}\sup_{A_{i}\in\mathscr{A}}\|A_{n}\cdots A_{1}\|^{1/n}
=\adjustlimits\inf_{n\ge 1}\sup_{A_{i}\in\mathscr{A}}\|A_{n}\cdots A_{1}\|^{1/n},
\end{equation}
where $\|\cdot\|$ is a matrix norm on $\mathcal{M}(N,N)$ generated by the
corresponding vector norm on $\mathbb{R}^{N}$. The \emph{generalized spectral
radius}~\cite{DaubLag:LAA92,DaubLag:LAA01} of a bounded set of matrices
$\mathscr{A}\subset\mathcal{M}(N,N)$ is the quantity
\begin{equation}\label{E-GSRad}
\hat{\rho}(\mathscr{A})=
\adjustlimits\lim_{n\to\infty}\sup_{A_{i}\in\mathscr{A}}\rho(A_{n}\cdots A_{1})^{1/n}
=\adjustlimits\sup_{n\ge 1}\sup_{A_{i}\in\mathscr{A}}\rho(A_{n}\cdots A_{1})^{1/n},
\end{equation}
where $\rho(\cdot)$ is the spectral radius of a matrix, i.e. the maximum of
the moduli of its eigenvalues. If the norms of the matrices from the set $\mathscr{A}$ are
uniformly bounded then by the Berger-Wang theorem~\cite{BerWang:LAA92}
$\rho(\mathscr{A})=\hat{\rho}(\mathscr{A})$. In the case when the set of matrices $\mathscr{A}$
is compact (closed and bounded), the suprema over $A_{i}\in\mathscr{A}$ in
\eqref{E-GSRad0} and \eqref{E-GSRad} may be replaced by maxima.
By replacing the suprema over $A_{i}\in\mathscr{A}$ in \eqref{E-GSRad} with infima
(or with minima, in the case of a compact set of matrices) one can obtain the
so-called \emph{joint spectral subradius} or \emph{lower spectral
radius}~\cite{Gurv:LAA95}:
\begin{equation}\label{E-LSRad0}
\check{\rho}(\mathscr{A})=
\adjustlimits\lim_{n\to\infty}\inf_{A_{i}\in\mathscr{A}}\|A_{n}\cdots A_{1}\|^{1/n}
=\adjustlimits\inf_{n\ge 1}\inf_{A_{i}\in\mathscr{A}}\|A_{n}\cdots A_{1}\|^{1/n},
\end{equation}
which also (for an arbitrary, not necessarily bounded, set of matrices) may be
expressed in terms of spectral radii instead of norms:
\begin{equation}\label{E-LSRad}
\check{\rho}(\mathscr{A})=
\adjustlimits\lim_{n\to\infty}\inf_{A_{i}\in\mathscr{A}}\rho(A_{n}\cdots A_{1})^{1/n}
=\adjustlimits\inf_{n\ge 1}\inf_{A_{i}\in\mathscr{A}}\rho(A_{n}\cdots A_{1})^{1/n},
\end{equation}
as was shown in~\cite[Theorem~B1]{Gurv:LAA95} for finite sets $\mathscr{A}$, and
in~\cite[Lemma~1.12]{Theys:PhD05} and~\cite[Theorem~1]{Czornik:LAA05} for
arbitrary sets $\mathscr{A}$.
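Definitions \eqref{E-GSRad} and \eqref{E-LSRad} can be illustrated by brute force for a finite set of matrices. The following minimal Python sketch (ours, not part of the paper; all helper names are our own) enumerates all length-$n$ products of $2\times 2$ matrices and evaluates the maximum and minimum of $\rho(A_{n}\cdots A_{1})^{1/n}$:

```python
# Brute-force sketch of the generalized and lower spectral radii for a
# finite set of 2x2 matrices, directly from the defining formulas.
import cmath
from itertools import product as words

def mat_mul(A, B):
    """Product of two 2x2 matrices represented as nested tuples."""
    return tuple(
        tuple(sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2))
        for i in range(2)
    )

def spectral_radius(A):
    """rho(A) for a 2x2 matrix via the characteristic polynomial."""
    t = A[0][0] + A[1][1]                       # trace
    d = A[0][0] * A[1][1] - A[0][1] * A[1][0]   # determinant
    s = cmath.sqrt(t * t - 4 * d)
    return max(abs((t + s) / 2), abs((t - s) / 2))

def rho_hat_n(mats, n):
    """max over all length-n products of rho(A_n ... A_1)^(1/n)."""
    best = 0.0
    for w in words(mats, repeat=n):
        P = w[0]
        for A in w[1:]:
            P = mat_mul(A, P)   # build A_n ... A_1
        best = max(best, spectral_radius(P))
    return best ** (1.0 / n)

def rho_check_n(mats, n):
    """min over all length-n products of rho(A_n ... A_1)^(1/n)."""
    vals = []
    for w in words(mats, repeat=n):
        P = w[0]
        for A in w[1:]:
            P = mat_mul(A, P)
        vals.append(spectral_radius(P))
    return min(vals) ** (1.0 / n)
```

For commuting diagonal matrices these sequences stabilize immediately, illustrating in a trivial case the finiteness property discussed below.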
The possibility of explicit calculation of the spectral characteristics of
sets of matrices is conventionally associated with the validity of the
\emph{finiteness conjecture} according to which the limit in formulas
\eqref{E-GSRad} and \eqref{E-LSRad} is attained at some finite value of $n$.
For the generalized spectral radius this conjecture was set up by Lagarias
and Wang~\cite{LagWang:LAA95} and subsequently disproved by Bousch and
Mairesse~\cite{BM:JAMS02}. Later on there appeared a few alternative
counterexamples~\cite{BTV:SIAMJMA03,Koz:CDC05:e,Koz:INFOPROC06:e}. However,
all these counterexamples were pure `non-existence' results which provided no
constructive description of the sets of matrices for which the finiteness
conjecture fails. The first explicit counterexample to the finiteness
conjecture was built in~\cite{HMST:AdvMath11}, while general methods for
constructing counterexamples of this type were presented recently
in~\cite{MorSid:JEMS13,JenPoll:ArXiv15}. The lower spectral radius is, in a
sense, a more complex object for analysis than the generalized spectral
radius because it generally does not depend continuously on
$\mathscr{A}$~\cite{Jungers:LAA12,BochiMor:ArXiv13}. However, for the lower
spectral radius, disproving the finiteness conjecture turned out to be even
easier~\cite{BM:JAMS02,CJ:IJAMCS07} than for the generalized spectral radius.
Despite the fact that in general the finiteness conjecture is false, attempts
to discover new classes of matrices for which it still holds continue.
However, it should be borne in mind that the validity of the finiteness
conjecture for some class of matrices provides only a theoretical possibility
to explicitly calculate the related spectral characteristics, because in
practice calculation of the spectral radii $\rho(A_{n}\cdots A_{1})$ for all
possible sets of matrices $A_{1},\ldots, A_{n} \in\mathscr{A}$ may require too much
computing resources, even for relatively small values of $n$. Therefore, from
a practical point of view, the most interesting are the cases when the
finiteness conjecture is valid for small values of $n$.
The finiteness conjecture is obviously satisfied for the sets of commuting
matrices as well as for sets consisting of upper or lower triangular matrices
or of matrices that are isometries in some norm up to a scalar factor (that
is, $\|Ax\|\equiv\lambda_{A}\|x\|$ for some $\lambda_{A}$). It is less
obvious that the finiteness conjecture holds for the class of `symmetric'
bounded sets of matrices characterized by the property that, together with
each matrix, the set also contains the (complex) conjugate
matrix~\cite[Proposition~18]{PW:LAA08}. In particular, this class includes
all the sets of self-adjoint matrices. One of the most interesting classes of
matrices for which the finiteness conjecture is valid, for both the
generalized and the lower spectral radius, is the so-called class of
non-negative matrices with independent row uncertainty~\cite{BN:SIAMJMAA09}.
Note that in all these cases, the generalized spectral radius coincides with
the spectral radius of a single matrix from $\mathscr{A}$ or with the spectral
radius of the product of a pair of such matrices.
In~\cite{JB:LAA08} it was demonstrated that the finiteness conjecture is
valid for any pair of $2\times 2$ binary matrices, i.e. matrices with the
elements $\{0,1\}$, and in~\cite{CGSZ:LAA10} a similar result was proved for
any pair of $2\times 2$ sign-matrices, i.e. matrices with the elements
$\{-1,0,1\}$. Finally,
in~\cite{DHLX:LAA12,LiuXiao:LNCS12,JM:LAA13,Morris:ArXiv11,WangWen:CIS13} it
was shown that the finiteness conjecture holds for any bounded family of
matrices $\mathscr{A}$, whose matrices, except perhaps one, have rank~$1$. There
are also a number of works with less constructive sufficient conditions for
attainability of the generalized spectral radius on a finite product of
matrices, see, e.g., the references in~\cite{Koz:IITP13}.
So, calculating the joint and lower spectral radii is a challenging problem,
and only for exceptional classes of matrices can these characteristics be
found explicitly and expressed by a `closed formula'; see,
e.g.,~\cite{Jungers:09,Koz:IITP13} and the bibliography therein.
Let us outline the structure of the work. In this section, we have presented
a brief overview of the results related to the finiteness conjecture for the
spectral characteristics of matrices. In Section~\ref{S:IRU}, we recall the
definition of the sets of non-negative matrices with independent row
uncertainty, and
then in Theorem~\ref{T:BNP} give a new proof of the related
Blondel-Nesterov-Protasov results on finiteness~\cite{NesPro:SIAMJMAA13}. A
key point of this proof is the so-called hourglass alternative,
Lemma~\ref{L:alternative}, that has been proposed in~\cite{ACDDHK15} to
analyze the minimax relations between the spectral radii of matrix products.
In Section~\ref{S:Hsets}, assertions of Lemma~\ref{L:alternative} are taken
as axioms for the definition of the sets of positive matrices, called
hourglass- or $\mathcal{H}$-sets, satisfying the hourglass alternative. In
Theorem~\ref{T:semiring} we show that the totality of all $\mathcal{H}$-sets of
matrices, supplemented by the zero and the identity matrices, forms a
semiring. This opens up the possibility of constructing new classes of
matrices for which analogues of Theorem~\ref{T:BNP} are true. The main result
of such a kind, Theorem~\ref{T:Hset}, is proved in Section~\ref{S:main}, and
in Corollary~\ref{C1} we show that all the assertions on finiteness of the
spectral characteristics remain valid for the sets of matrices, obtained as a
polynomial Minkowski combination of compact sets of non-negative matrices
with independent row uncertainty. In Section~\ref{S:cosets}, we present a
general fact about the relationship between the spectral characteristics of
sets of matrices and their convex hulls. Concluding remarks are given in
Section~\ref{S:conclude}.
\section{Sets of matrices with independent row uncertainty}\label{S:IRU}
As was noted in Section~\ref{S:intro}, one of the most interesting classes of
matrices for which the finiteness conjecture holds, both for the generalized
and lower spectral radius, is the so-called class of non-negative matrices
with independent row uncertainty~\cite{BN:SIAMJMAA09}. In this section, we
recall the relevant definition and present a new proof of the corresponding
results on finiteness needed to motivate further constructions.
Following~\cite{BN:SIAMJMAA09}, a set of matrices $\mathscr{A}\subset\mathcal{M}(N,M)$ is
called an \emph{independent row uncertainty set} (\emph{IRU-set}) if it
consists of all the matrices
\[
A=\left(\begin{array}{cccc}
a_{11}&a_{12}&\cdots&a_{1M}\\
a_{21}&a_{22}&\cdots&a_{2M}\\
\cdots&\cdots&\cdots&\cdots\\
a_{N1}&a_{N2}&\cdots&a_{NM}
\end{array}\right),
\]
wherein each of the rows $a_{i} = (a_{i1}, a_{i2}, \ldots, a_{iM})$ belongs
to some set of $M$-rows $\mathscr{A}_{i}$, $i=1,2,\ldots,N$. Clearly, any singleton
set of matrices is an IRU-set. An IRU-set of matrices will be called
\emph{positive} if so are all its matrices which is equivalent to positivity
of all the rows constituting the sets $\mathscr{A}_{i}$.
If the set $\mathscr{A}$ is compact, which is equivalent to compactness of each set
of rows $\mathscr{A}_{1},\mathscr{A}_{2},\ldots,\mathscr{A}_{N}$, then the following quantities
are well defined:
\[
\rho_{min}(\mathscr{A}) = \min_{A \in \mathscr{A}} \rho(A), \quad
\rho_{max}(\mathscr{A}) = \max_{A \in \mathscr{A}} \rho(A).
\]
We will use the notation
\[
\hat{\rho}_{n}(\mathscr{A})= \sup_{A_{i}\in\mathscr{A}} \rho(A_{n} \cdots A_{1})^{1/n},\quad
\check{\rho}_{n}(\mathscr{A})= \inf_{A_{i}\in\mathscr{A}} \rho(A_{n} \cdots A_{1})^{1/n}.
\]
As the following theorem shows, the finiteness conjecture is valid for
compact IRU-sets of positive matrices.
\begin{theorem}\label{T:BNP}
Let $\mathscr{A}$ be a compact IRU-set of positive matrices and $\tilde{\mathscr{A}}$ be
a compact set of matrices satisfying the inclusions
$\mathscr{A}\subseteq\tilde{\mathscr{A}}\subseteq\co(\mathscr{A})$, where $\co(\mathscr{A})$ stands
for the convex hull of the set $\mathscr{A}$. Then
\begin{enumerate}[\rm(i)]
\item $\check{\rho}_{n}(\tilde{\mathscr{A}})=\rho_{min}(\mathscr{A})$ for all $n\ge 1$,
and therefore
$\check{\rho}(\tilde{\mathscr{A}})=\rho_{min}(\tilde{\mathscr{A}})=\rho_{min}(\mathscr{A})$;
\item $\hat{\rho}_{n}(\tilde{\mathscr{A}})=\rho_{max}(\mathscr{A})$ for all $n\ge 1$,
and therefore
$\hat{\rho}(\tilde{\mathscr{A}})=\rho_{max}(\tilde{\mathscr{A}})=\rho_{max}(\mathscr{A})$.
\end{enumerate}
\end{theorem}
For the cases $\tilde{\mathscr{A}}=\mathscr{A}$ and $\tilde{\mathscr{A}}=\co(\mathscr{A})$ this
theorem in a somewhat different formulation is proved
in~\cite{NesPro:SIAMJMAA13}. The next example demonstrates that none of the
equalities $\rho_{max}(\tilde{\mathscr{A}})=\rho_{max}(\mathscr{A})$ and
$\rho_{min}(\tilde{\mathscr{A}})=\rho_{min}(\mathscr{A})$ holds for arbitrary sets of
matrices.
\begin{example}\rm
Consider the sets of matrices $\mathscr{A}=\{A_{1},A_{2}\}$ and
$\mathscr{B}=\{B_{1},B_{2}\}$, where
\[
A_{1}=\left(\begin{array}{cc}
0&2\\0&0
\end{array}\right),\quad
A_{2}=\left(\begin{array}{cc}
0&0\\2&0
\end{array}\right),\quad
B_{1}=\left(\begin{array}{cc}
2&0\\0&0
\end{array}\right),\quad
B_{2}=\left(\begin{array}{cc}
0&0\\0&2
\end{array}\right).
\]
Then $\rho_{max}(\mathscr{A})<\rho_{max}(\co(\mathscr{A}))$ and
$\rho_{min}(\mathscr{B})>\rho_{min}(\co(\mathscr{B}))$ because
\begin{align*}
\rho_{max}(\mathscr{A})=\max_{A\in\mathscr{A}}\rho(A)&=0,\quad \rho_{max}(\co(\mathscr{A}))=\max_{A\in\co(\mathscr{A})}\rho(A)\ge
\rho\left(\tfrac{1}{2}(A_{1}+A_{2})\right)=1,\\
\rho_{min}(\mathscr{B})=\min_{B\in\mathscr{B}}\rho(B)&=2,\quad \rho_{min}(\co(\mathscr{B}))=\min_{B\in\co(\mathscr{B})}\rho(B)\le
\rho\left(\tfrac{1}{2}(B_{1}+B_{2})\right)=1.
\end{align*}
\end{example}
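The spectral radii appearing in this example are easy to confirm numerically; the following short sketch (ours, not part of the paper) checks them:

```python
# Verify the spectral radii used in the example above.
import cmath

def spectral_radius(A):
    """rho(A) for a 2x2 matrix via the characteristic polynomial."""
    t = A[0][0] + A[1][1]
    d = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    s = cmath.sqrt(t * t - 4 * d)
    return max(abs((t + s) / 2), abs((t - s) / 2))

A1 = ((0, 2), (0, 0)); A2 = ((0, 0), (2, 0))   # nilpotent: rho = 0
B1 = ((2, 0), (0, 0)); B2 = ((0, 0), (0, 2))   # rho = 2 each
midA = ((0, 1), (1, 0))                        # (A1 + A2)/2, rho = 1
midB = ((1, 0), (0, 1))                        # (B1 + B2)/2, rho = 1
```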
\begin{remark*}\rm
If an IRU-set $\mathscr{A}$ is formed by a set of rows
$\mathscr{A}_{1},\mathscr{A}_{2},\ldots,\mathscr{A}_{N}$, then its convex hull $\co(\mathscr{A})$ is
the IRU-set formed by the set of rows $\co(\mathscr{A}_{1}), \co(\mathscr{A}_{2}),
\ldots, \co(\mathscr{A}_{N})$.
\end{remark*}
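As a numerical illustration of Theorem~\ref{T:BNP} (ours, not part of the paper), one can build a finite IRU-set from two row sets and check by brute force that the minimum and maximum of $\rho(A_{n}\cdots A_{1})^{1/n}$ over all length-$n$ products equal $\rho_{min}(\mathscr{A})$ and $\rho_{max}(\mathscr{A})$ for small $n$; the particular row sets below are an arbitrary positive choice:

```python
# Illustrate the finiteness property for a finite IRU-set of positive
# 2x2 matrices: rho_check_n = rho_min and rho_hat_n = rho_max for all n.
import cmath
from itertools import product as combos

def mat_mul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def rho(A):
    """Spectral radius of a 2x2 matrix."""
    t = A[0][0] + A[1][1]
    d = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    s = cmath.sqrt(t * t - 4 * d)
    return max(abs((t + s) / 2), abs((t - s) / 2))

def iru_set(row_sets):
    """All matrices whose i-th row is chosen independently from row_sets[i]."""
    return [tuple(rows) for rows in combos(*row_sets)]

# A finite (hence compact) IRU-set of positive 2x2 matrices.
A = iru_set([[(1, 2), (2, 1)], [(1, 1), (3, 1)]])
rho_min = min(rho(M) for M in A)
rho_max = max(rho(M) for M in A)

results = {}
for n in (1, 2, 3):
    vals = []
    for w in combos(A, repeat=n):
        P = w[0]
        for M in w[1:]:
            P = mat_mul(M, P)
        vals.append(rho(P) ** (1.0 / n))
    results[n] = (min(vals), max(vals))
```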
\subsection{Hourglass alternative}
To prove Theorem~\ref{T:BNP} we will need some definitions and a number of
supporting facts. For vectors $x,y\in\mathbb{R}^{N}$, we write $x \ge y$ or $x>y$
if all coordinates of the vector $x$ are, respectively, not less than or
strictly greater than the corresponding coordinates of the vector $y$.
Similar notation will be applied to matrices.
In the space $\mathbb{R}^{1}$ of real numbers any two elements $x$ and $y$ are
\emph{comparable}, i.e. either $x\le y$ or $y\le x$. In this case we say that
the space $\mathbb{R}^{1}$ is \emph{linearly ordered}. In the spaces $\mathbb{R}^{N}$
with $N>1$ the situation is quite different. Here there exist infinitely many
pairs of non-comparable elements, and the failure of the inequality $x\ge y$
does not imply the inverse inequality $x\le y$. The existence of
non-comparable elements leads to the fact that if, for some $x$, the system of
linear inequalities
\[
Ax\ge v,\qquad A\in\mathscr{A}\subset\mathcal{M}(N,M)
\]
has no solution, this does not mean that the inverse inequality
$\bar{A}x\le v$ holds for some matrix $\bar{A}\in\mathscr{A}$. Examples of
such sets of matrices $\mathscr{A}$ are easily constructed. However, as
the following lemma shows, for the sets of matrices with independent row
uncertainty all is not so bad, and for linear inequalities an analogue of the
linear ordering of solutions holds.
\begin{lemma}\label{L:alternative}
Let $\mathscr{A}\subset\mathcal{M}(N,M)$ be an IRU-set of matrices and let $\tilde{A}u=v$
for some matrix $\tilde{A}\in\mathscr{A}$ and vectors $u,v$. Then the following
properties hold:
\begin{enumerate}[\rm H1:]
\item either $Au\ge v$ for all $A\in\mathscr{A}$ or there exists a matrix
$\bar{A}\in\mathscr{A}$ such that $\bar{A}u\le v$ and $\bar{A}u\neq v$;
\item either $Au\le v$ for all $A\in\mathscr{A}$ or there exists a matrix
$\bar{A}\in\mathscr{A}$ such that $\bar{A}u\ge v$ and $\bar{A}u\neq v$.
\end{enumerate}
\end{lemma}
Assertions H1 and H2 have a simple geometrical interpretation. Imagine that
the sets $B_{l}=\{x:x\le v\}$ and $B_{u}=\{x:x\ge v\}$ form the lower and
upper bulbs of an hourglass with the neck at the point $v$. Then
Lemma~\ref{L:alternative} asserts that either all the grains $Au$ fill one of
the bulbs (upper or lower), or there remains at least one grain in the other
bulb (lower or upper, respectively). Such an interpretation gives reason to
call Lemma~\ref{L:alternative} the \emph{hourglass alternative}. This
alternative will play a key role in the proof of Theorem~\ref{T:BNP} as well
as in its extension to a new class of matrices. The hourglass alternative has
been proposed by the author in~\cite{ACDDHK15} to analyze the minimax
relations between the spectral radii of matrix products.
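The alternative can also be verified exhaustively for a small IRU-set with integer entries, so that all comparisons are exact. This sketch (ours, not part of the paper) checks assertions H1 and H2 for every choice of $\tilde{A}$ in the set and a given $u>0$:

```python
# Exhaustive check of the hourglass alternative (H1 and H2) for an
# IRU-set of positive integer matrices.
from itertools import product as combos

def iru_set(row_sets):
    """All matrices whose i-th row is chosen independently from row_sets[i]."""
    return [tuple(rows) for rows in combos(*row_sets)]

def mat_vec(A, u):
    return tuple(sum(a * x for a, x in zip(row, u)) for row in A)

def check_hourglass(mats, u):
    """For every Atilde in mats, with v = Atilde u, verify H1 and H2."""
    for At in mats:
        v = mat_vec(At, u)                       # the neck of the hourglass
        images = [mat_vec(A, u) for A in mats]
        # H1: all images >= v, or some image <= v with image != v
        h1 = (all(all(x >= y for x, y in zip(w, v)) for w in images)
              or any(all(x <= y for x, y in zip(w, v)) and w != v
                     for w in images))
        # H2: all images <= v, or some image >= v with image != v
        h2 = (all(all(x <= y for x, y in zip(w, v)) for w in images)
              or any(all(x >= y for x, y in zip(w, v)) and w != v
                     for w in images))
        if not (h1 and h2):
            return False
    return True

mats = iru_set([[(1, 2), (2, 1)], [(1, 1), (3, 1)]])
```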
\begin{proof}[Proof of Lemma~\ref{L:alternative}]
Given an IRU-set of matrices $\mathscr{A}\subset\mathcal{M}(N,M)$, let $\tilde{A}u=v$ for
some matrix $\tilde{A}\in\mathscr{A}$ and vectors $u,v>0$.
We first prove assertion H1. If the inequality $Au\ge v$ holds for all
$A\in\mathscr{A}$ then there is nothing to prove. So let us suppose that for some
matrix $A=(a_{ij})\in\mathscr{A}$ the inequality $Au\ge v$ is not satisfied. Then
representing the vectors $u$ and $v$ in the coordinate form
\[
u=(u_{1},u_{2},\ldots,u_{M})^{\mathsmaller{\mathsf{T}}},\quad
v=(v_{1},v_{2},\ldots,v_{N})^{\mathsmaller{\mathsf{T}}},
\]
we obtain that
\[
a_{i1}u_{1}+a_{i2}u_{2}+\cdots+a_{iM}u_{M}<v_{i}
\]
for some $i\in\{1,2,\ldots,N\}$; without loss of generality we can assume
that $i=1$. In this case, for the matrix
\[
\bar{A}=\left(\begin{array}{cccccc}
a_{11}&a_{12}&\cdots&a_{1M}\\
\tilde{a}_{21}&\tilde{a}_{22}&\cdots&\tilde{a}_{2M}\\
\cdots&\cdots&\cdots&\cdots\\
\tilde{a}_{N1}&\tilde{a}_{N2}&\cdots&\tilde{a}_{NM}
\end{array}\right),
\]
which is obtained from the matrix $\tilde{A}=(\tilde{a}_{ij})$ by changing
the first row to the row
\[
a_{1}=(a_{11},a_{12},\ldots,a_{1M})
\]
and therefore also belongs to $\mathscr{A}$, we have the inequalities
\begin{align*}
a_{11}u_{1}+a_{12}u_{2}+\cdots+a_{1M}u_{M}&<v_{1}\\
\shortintertext{and}
\tilde{a}_{i1}u_{1}+\tilde{a}_{i2}u_{2}+\cdots+\tilde{a}_{iM}u_{M}&=v_{i},\qquad i=2,3,\ldots,N.
\end{align*}
Consequently, $\bar{A}u\le v$ and $\bar{A}u\neq v$, which completes the proof
of assertion~H1.
Assertion~H2 is proved similarly.
\end{proof}
We now show how Lemma~\ref{L:alternative} can be used to analyze the spectral
characteristics of sets of matrices. The \emph{spectral radius} of an
$(N\times N)$-matrix $A$ is defined as the maximal modulus of its eigenvalues
and denoted by $\rho(A)$. The spectral radius depends continuously on the
matrix, and in the case $A>0$ by the Perron-Frobenius
theorem~\cite[Theorem~8.2.2]{HJ2:e} the number $\rho(A)$ is a simple
eigenvalue of the matrix $A$ whereas all the other eigenvalues of $A$ are
strictly less than $\rho(A)$ in modulus. The eigenvector
$v=(v_{1},v_{2},\ldots,v_{N})^{\mathsmaller{\mathsf{T}}}$ corresponding to the eigenvalue
$\rho(A)$ (normalized, for example, by the equality
$v_{1}+v_{2}+\cdots+v_{N}=1$) is uniquely determined and positive.
In the following lemma we consolidate some facts from the theory of positive
matrices which are generally well known, but for which references are spread
across various publications.
\begin{lemma}\label{L:1}
Let $A$ be a non-negative $(N\times N)$-matrix, then the following assertions
hold:
\begin{enumerate}[\rm(i)]
\item if $Au\le\lambda u$ for some vector $u>0$, then $\lambda\ge0$ and
$\rho(A)\le\lambda$;
\item moreover, if, in the conditions of {\rm(i)}, $A>0$ and $Au\neq\lambda u$,
then $\rho(A)<\lambda$;
\item if $Au\ge\lambda u$ for some non-zero vector $u\ge0$ and some number
$\lambda\ge0$, then $\rho(A) \ge\lambda$;
\item moreover, if, in the conditions of {\rm(iii)}, $A>0$ and $Au\neq\lambda u$,
then $\rho(A)>\lambda$.
\end{enumerate}
\end{lemma}
\begin{proof}
As stated in~\cite[Corollary~8.1.29]{HJ2:e}, for any non-negative matrix $A$
and numbers $\alpha,\beta\ge0$, the inequalities
\begin{equation}\label{E:horn29}
\alpha\le \rho(A)\le \beta
\end{equation}
are valid provided that $\alpha u\le Au\le \beta u$ for some $u>0$, from
which assertion (i) immediately follows. Let us prove the three remaining
assertions.
(ii) Let $Au\le\lambda u$ for $u>0$, where $A>0$ and $Au\neq\lambda u$. Then
at least one coordinate of the vector $Au-\lambda u\le 0$ is strictly
negative. Therefore, the condition $A>0$ implies strict negativity of all the
coordinates of the vector $A(Au-\lambda u)$. Then there exists
$\varepsilon>0$ such that $A(Au-\lambda u)\le -\varepsilon u$ and therefore
$A^{2}u=A(Au-\lambda u)+\lambda Au\le (\lambda^{2}-\varepsilon)u$. Then, by
\eqref{E:horn29}, we get $\rho(A^{2})\le\lambda^{2}-\varepsilon$, and thus
$\rho(A)\le\sqrt{\lambda^{2}-\varepsilon}<\lambda$.
(iii) The condition $Au\ge\lambda u$ with a non-zero $u\ge0$ implies
$A^{n}u\ge \lambda^{n}u$ for any $n\ge1$. Then
$\|A^{n}\|\cdot\|u\|\ge\|A^{n}u\|\ge\lambda^{n}\|u\|$, where $\|\cdot\|$ is
any norm monotone with respect to coordinates of a non-negative vector, e.g.
the Euclidean norm or the max-norm. Therefore, $\|A^{n}\|\ge \lambda^{n}$,
and by Gelfand's formula~\cite[Corollary~5.6.14]{HJ2:e}
$\rho(A)=\lim_{n\to\infty}\|A^n\|^{1/n}\ge\lambda$.
(iv) Now let $A>0$ and $Au\neq\lambda u$. Then at least one coordinate of the
vector $Au-\lambda u\ge 0$ is strictly positive. Therefore, the condition
$A>0$ implies strict positivity of all coordinates of the vector
$A(Au-\lambda u)$. Then there exists $\varepsilon>0$ such that $A(Au-\lambda
u)\ge \varepsilon u$ and therefore $A^{2}u=A(Au-\lambda u)+\lambda Au\ge
(\lambda^{2}+\varepsilon)u$. This, by assertion (iii) applied to the matrix
$A^{2}$, implies $\rho(A^{2})\ge\lambda^{2}+\varepsilon$, and thus
$\rho(A)\ge\sqrt{\lambda^{2}+\varepsilon}>\lambda$.
The lemma is proved.
\end{proof}
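Inequality \eqref{E:horn29} is easy to test numerically: for a given $u>0$, the tightest scalars with $\alpha u\le Au\le\beta u$ are the extreme ratios $(Au)_i/u_i$, and $\rho(A)$ must lie between them. A small sketch (ours, not part of the paper):

```python
# Numerical check of alpha <= rho(A) <= beta when alpha*u <= A u <= beta*u.
import cmath

def rho(A):
    """Spectral radius of a 2x2 matrix."""
    t = A[0][0] + A[1][1]
    d = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    s = cmath.sqrt(t * t - 4 * d)
    return max(abs((t + s) / 2), abs((t - s) / 2))

def row_ratio_bounds(A, u):
    """Tightest alpha, beta with alpha*u <= A u <= beta*u for a given u > 0."""
    Au = [sum(a * x for a, x in zip(row, u)) for row in A]
    ratios = [y / x for y, x in zip(Au, u)]
    return min(ratios), max(ratios)

A = ((1, 2), (1, 1))                  # a positive matrix, rho = 1 + sqrt(2)
alpha, beta = row_ratio_bounds(A, (1.0, 1.0))
```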
The proofs of Lemmas~\ref{L:alternative} and \ref{L:1} are borrowed
from~\cite{ACDDHK15} and presented here only for completeness of the
presentation. Lemma~\ref{L:1} resembles Lemma~1
from~\cite{NesPro:SIAMJMAA13}. The next lemma shows that, for IRU-sets of
positive matrices, assertions hold that are in a certain sense converse to
Lemma~\ref{L:1}.
\begin{lemma}\label{L:mainmin}
Let $\mathscr{A}\subset\mathcal{M}(N,N)$ be a compact IRU-set of positive matrices, then
the following assertions hold:
\begin{enumerate}[\rm(i)]
\item if $\tilde{A}\in\mathscr{A}$ is a matrix satisfying $\rho(\tilde{A}) =
\rho_{min}(\mathscr{A})$ and $\tilde{v}$ is its positive eigenvector
corresponding to the eigenvalue $\rho(\tilde{A})$, then $A\tilde{v}\ge
\rho_{min}(\mathscr{A})\tilde{v}$ for all $A\in\mathscr{A}$;
\item if $\tilde{A}\in\mathscr{A}$ is a matrix satisfying $\rho(\tilde{A}) =
\rho_{max}(\mathscr{A})$ and $\tilde{v}$ is its positive eigenvector
corresponding to the eigenvalue $\rho(\tilde{A})$, then $A\tilde{v}\le
\rho_{max}(\mathscr{A})\tilde{v}$ for all $A\in\mathscr{A}$.
\end{enumerate}
\end{lemma}
\begin{proof}
To prove assertion (i) let us note that
$\tilde{A}\tilde{v}=\rho_{min}(\mathscr{A})\tilde{v}$. Then by assertion H1 of
Lemma~\ref{L:alternative} either $A\tilde{v}\ge\rho_{min}(\mathscr{A})\tilde{v}$
for all $A\in\mathscr{A}$ or there exists a matrix $\bar{A}\in\mathscr{A}$ such that
$\bar{A}\tilde{v}\le \rho_{min}(\mathscr{A})\tilde{v}$ and $\bar{A}\tilde{v}\neq
\rho_{min}(\mathscr{A})\tilde{v}$. In the latter case, by Lemma~\ref{L:1} the
inequality $\rho(\bar{A})<\rho_{min}(\mathscr{A})$ would hold, which
contradicts the definition of $\rho_{min}(\mathscr{A})$. Hence, the inequality
$A\tilde{v}\ge\rho_{min}(\mathscr{A})\tilde{v}$ holds for all $A\in\mathscr{A}$.
Assertion (ii) is proved similarly.
\end{proof}
We are now ready to prove Theorem~\ref{T:BNP}.
\subsection{Proof of Theorem~\ref{T:BNP}}
To prove assertion (i) choose a matrix $\tilde{A}\in\mathscr{A}$ for which
$\rho(\tilde{A})=\rho_{min}(\mathscr{A})$ and denote by
$\tilde{v}=(\tilde{v}_{1},\tilde{v}_{2},\ldots,\tilde{v}_{N})^\mathsmaller{\mathsf{T}}$ a
positive eigenvector of $\tilde{A}$ corresponding to the eigenvalue
$\rho(\tilde{A})$. Then by assertion (i) of Lemma~\ref{L:mainmin}
$A\tilde{v}\ge\rho_{min}(\mathscr{A})\tilde{v}$ for all $A\in\mathscr{A}$; since this
inequality is preserved under convex combinations, it holds for all
$A\in\co(\mathscr{A})$ and therefore for all $A\in\tilde{\mathscr{A}}$. Hence $A_{i_{n}}\cdots
A_{i_{1}}\tilde{v}\ge\rho_{min}^{n}(\mathscr{A})\tilde{v}$ for all
$A_{i_{j}}\in\tilde{\mathscr{A}}$. Consequently, by Lemma~\ref{L:1}
$\rho(A_{i_{n}}\cdots A_{i_{1}})\ge\rho_{min}^{n}(\mathscr{A})$ for all
$A_{i_{j}}\in\tilde{\mathscr{A}}$ and therefore
\[
\check{\rho}_{n}(\tilde{\mathscr{A}})\ge\rho_{min}(\mathscr{A}).
\]
On the other hand, since $\tilde{A}\in\mathscr{A}\subseteq\tilde{\mathscr{A}}$ then
clearly
\[
\check{\rho}_{n}(\tilde{\mathscr{A}})\le\rho(\tilde{A}^{n})^{1/n}=\rho_{min}(\mathscr{A}),
\]
which implies $\check{\rho}_{n}(\tilde{\mathscr{A}})=\rho_{min}(\mathscr{A})$ for all
$n\ge 1$. Observing now that
$\check{\rho}_{1}(\tilde{\mathscr{A}})=\rho_{min}(\tilde{\mathscr{A}})$ we complete the
proof of assertion (i).
The proof of assertion (ii) repeats that of assertion (i) verbatim, taking
instead of $\tilde{A}$ a matrix maximizing the spectral radii of matrices
from $\mathscr{A}$, replacing the estimates for $\rho_{min}(\mathscr{A})$ with the
corresponding estimates for $\rho_{max}(\mathscr{A})$, and using assertion (ii) of
Lemma~\ref{L:mainmin} instead of assertion (i).
\section{$\mathcal{H}$-sets of matrices}\label{S:Hsets}
Apart from general properties of positive matrices given in Lemma~\ref{L:1},
the proof of Theorem~\ref{T:BNP} relies only on those properties of IRU-sets
of matrices which were formulated in Lemma~\ref{L:alternative} as statements
H1 and H2 of the hourglass alternative. Therefore, it is natural to
axiomatize the class of matrices satisfying the statements H1 and H2, and to
study its properties.
\subsection{Main definitions}
A set of positive matrices $\mathscr{A}\subset\mathcal{M}(N,M)$ will be called an
\emph{$\mathcal{H}$-set} or \emph{hourglass set} if, whenever the equality
$\tilde{A}u=v$ holds for some matrix $\tilde{A}\in\mathscr{A}$ and vectors
$u,v>0$, assertions H1 and H2 of Lemma~\ref{L:alternative} also hold.
A trivial example of an $\mathcal{H}$-set is a \emph{linearly ordered} set
$\mathscr{A}=\{A_{1}$, $A_{2}$, \ldots, $A_{n}\}$ composed of positive matrices
$A_{i}$ satisfying the inequalities $0<A_{1}<A_{2}<\cdots<A_{n}$. In this
case, for each $u>0$, the vectors $A_{1}u,A_{2}u,\ldots,A_{n}u$ are strictly
positive and linearly ordered, which yields the validity of assertions H1 and
H2 for $\mathscr{A}$. A less trivial and more interesting example of $\mathcal{H}$-sets,
as follows from Lemma~\ref{L:alternative}, is the class of sets of positive
matrices with independent row uncertainty.
Not every set of positive matrices is an $\mathcal{H}$-set. A relevant example
can easily be built from a set $\mathscr{A}=\{A,B\}$ consisting of two $(2\times
2)$-matrices. In this case, for $\mathscr{A}$ to be an $\mathcal{H}$-set, it is necessary
that the vectors $Au$ and $Bu$ be comparable for any vector $u>0$, that is,
either $Au\le Bu$ or $Bu\le Au$. But this fails, for example, in
the case when $AB=P$, where $P$ is any projection on the linear hull of the
vector $(-1,1)$.
Let us describe some general properties of the class of $\mathcal{H}$-sets of
matrices. Introduce the operations of Minkowski addition and multiplication
for sets of matrices:
\[
\mathscr{A}+\mathscr{B}=\{A+B:A\in\mathscr{A},~ B\in\mathscr{B}\},\quad
\mathscr{A}\mathscr{B}=\{AB:A\in\mathscr{A},~ B\in\mathscr{B}\},
\]
and also the operation of multiplication of a set of matrices by a scalar
$t\in\mathbb{R}$:
\[
t\mathscr{A}=\mathscr{A} t=\{tA:A\in\mathscr{A}\}.
\]
Naturally, the operation of addition is \emph{admissible} if and only if the
matrices from the sets $\mathscr{A}$ and $\mathscr{B}$ are of the same size, while the
operation of multiplication is \emph{admissible} if and only if the sizes of
the matrices from sets $\mathscr{A}$ and $\mathscr{B}$ are matched: dimension of the rows
of the matrices from $\mathscr{A}$ is the same as dimension of the columns of the
matrices from $\mathscr{B}$. Problems with matching of sizes do not arise when one
considers sets of square matrices of the same size.
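For finite sets of matrices the Minkowski operations are one-liners; the following sketch (ours, with our own helper names) implements them over matrices stored as hashable nested tuples:

```python
# Minkowski addition and multiplication of finite matrix sets, plus the
# scalar multiple, over matrices stored as nested tuples.
def mat_add(A, B):
    return tuple(tuple(a + b for a, b in zip(ra, rb))
                 for ra, rb in zip(A, B))

def mat_mul(A, B):
    inner, cols = len(B), len(B[0])
    return tuple(
        tuple(sum(A[i][k] * B[k][j] for k in range(inner)) for j in range(cols))
        for i in range(len(A))
    )

def mink_sum(S, T):
    """Minkowski sum {A + B : A in S, B in T} (same matrix sizes)."""
    return {mat_add(A, B) for A in S for B in T}

def mink_prod(S, T):
    """Minkowski product {A B : A in S, B in T} (matched sizes)."""
    return {mat_mul(A, B) for A in S for B in T}

def scale(t, S):
    """Scalar multiple {t A : A in S}."""
    return {tuple(tuple(t * a for a in row) for row in A) for A in S}
```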
In what follows, we will need to make various kinds of limiting transitions
with the matrices from the sets under consideration as well as with the sets
of matrices themselves. In this connection, it is natural to restrict our
considerations to only compact (closed and bounded) sets of matrices. The
totality of all compact sets of positive $(N\times M)$-matrices with
independent row uncertainty will be denoted by $\mathcal{U}(N,M)$. The totality of
all finite\footnote{We will not consider infinite sets since this article is
not a proper place to get into the intricacies of determining the linear
ordering for infinite sets.} positive linearly ordered sets of $(N\times
M)$-matrices will be denoted by $\mathcal{L}(N,M)$. Finally, by $\mathcal{H}(N,M)$ we
denote the set of all compact $\mathcal{H}$-sets of positive $(N\times
M)$-matrices.
\begin{theorem}\label{T:semiring} The following is true:
\begin{enumerate}[\rm(i)]
\item $\mathscr{A}+\mathscr{B}\in\mathcal{H}(N,M)$ if $\mathscr{A},\mathscr{B}\in\mathcal{H}(N,M)$;
\item $\mathscr{A}\mathscr{B}\in\mathcal{H}(N,Q)$ if $\mathscr{A}\in\mathcal{H}(N,M)$ and
$\mathscr{B}\in\mathcal{H}(M,Q)$;
\item $t\mathscr{A}=\mathscr{A} t\in\mathcal{H}(N,M)$ if $t>0$ and $\mathscr{A}\in\mathcal{H}(N,M)$.
\end{enumerate}
\end{theorem}
\begin{proof}
We first prove (i) by showing the validity of assertion H1 for the sum $\mathscr{A}+\mathscr{B}$.
Let, for some matrix $C\in\mathscr{A}+\mathscr{B}$ and vectors $u,v>0$, the equality
$Cu=v$ holds. Then, by definition of the set $\mathscr{A}+\mathscr{B}$, there exist
matrices $\tilde{A}\in\mathscr{A}$ and $\tilde{B}\in\mathscr{B}$ such that
$C=\tilde{A}+\tilde{B}$, and hence $(\tilde{A}+\tilde{B})u=v$. Denote
$\tilde{A}u=v_{1}$ and $\tilde{B}u=v_{2}$; then $v_{1}+v_{2}=v$, where
$v_{1},v_{2}>0$ due to the positivity of the matrices $\tilde{A}$ and
$\tilde{B}$. If
\begin{equation}\label{E:ABgeforall}
Au\ge v_{1},~ Bu\ge v_{2}\quad\text{for all}\quad A\in\mathscr{A},~ B\in\mathscr{B},
\end{equation}
then, for all $A+B\in\mathscr{A}+\mathscr{B}$, the inequality
$(A+B)u\ge v_{1}+v_{2}=v$ will also hold. Thus, in this case assertion H1 is proved.
Now, let \eqref{E:ABgeforall} fail and suppose, to be specific, that the
inequality $Au\ge v_{1}$ fails for at least one matrix $A\in\mathscr{A}$. Then, since
$\mathscr{A}\in\mathcal{H}(N,M)$, assertion H1 for the set of matrices $\mathscr{A}$ implies
the existence of a matrix $\bar{A}\in\mathscr{A}$ such that $\bar{A}u\le v_{1}$ and
$\bar{A}u\neq v_{1}$. In this case the matrix
$\bar{A}+\tilde{B}\in\mathscr{A}+\mathscr{B}$ will satisfy the inequality
$(\bar{A}+\tilde{B})u\le v_{1}+v_{2}=v$, and moreover,
$(\bar{A}+\tilde{B})u\neq v$ since $\tilde{B}u= v_{2}$ while $\bar{A}u\neq
v_{1}$. Thus, assertion H1 for the set $\mathscr{A}+\mathscr{B}$ is also valid in the
case when \eqref{E:ABgeforall} fails.
The proof of assertion H2 for the set $\mathscr{A}+\mathscr{B}$ is similar. Compactness
of the set $\mathscr{A}+\mathscr{B}$ in the case when the sets $\mathscr{A}$ and $\mathscr{B}$ are
compact is evident.
We now prove (ii) by showing the validity of assertion H1 for the product
$\mathscr{A}\mathscr{B}$. Suppose that $Cu=v$ for some matrix $C\in\mathscr{A}\mathscr{B}$ and
vectors $u,v>0$. Then, by definition of the set $\mathscr{A}\mathscr{B}$, there exist
matrices $\tilde{A}\in\mathscr{A}$ and $\tilde{B}\in\mathscr{B}$ such that
$\tilde{A}\tilde{B}u=v$. By denoting $\tilde{B}u=w$ we obtain, due to the
positivity of the matrix $\tilde{B}$ and the vector $u$, that $w>0$ and
$\tilde{A}w=v$. If
\begin{equation}\label{E:ABgeforall1}
Aw\ge v,~ Bu\ge w\quad\text{for all}\quad A\in\mathscr{A},~ B\in\mathscr{B},
\end{equation}
then, due to the positivity of the matrices $A\in\mathscr{A}$ and $B\in\mathscr{B}$, the
inequalities $ABu\ge Aw\ge v$ also hold for all $AB\in\mathscr{A}\mathscr{B}$. Thus, in
this case assertion H1 is proved.
Now suppose that \eqref{E:ABgeforall1} fails and, to be specific, that the inequality
$Bu\ge w$ is violated for at least one matrix $B\in\mathscr{B}$. Then, since
$\mathscr{B}\in\mathcal{H}(N,M)$, assertion H1 for the set of matrices $\mathscr{B}$ implies
the existence of a matrix $\bar{B}\in\mathscr{B}$ such that $\bar{B}u\le w$ and
$\bar{B}u\neq w$. But in this case the matrix
$\tilde{A}\bar{B}\in\mathscr{A}\mathscr{B}$, due to the positivity of the matrix
$\tilde{A}$, will satisfy the inequality $\tilde{A}\bar{B}u\le \tilde{A}w=v$.
Moreover, $\tilde{A}\bar{B}u\neq v$ since
$\bar{B}u\le w$, $\bar{B}u\neq w$, and the matrix $\tilde{A}$ is positive.
Thus, assertion H1 for the set $\mathscr{A}\mathscr{B}$ is also valid in the case when
\eqref{E:ABgeforall1} fails.
The proof of assertion H2 for the set $\mathscr{A}\mathscr{B}$ is similar. Compactness of
the set $\mathscr{A}\mathscr{B}$ in the case when the sets $\mathscr{A}$ and $\mathscr{B}$ are
compact is evident.
The proof of (iii) is also evident.
\end{proof}
\begin{remark*}\rm
The requirement of positivity for the matrices and the vectors $u,v$ in the
definition of $\mathcal{H}$-sets was introduced to ensure the inclusion
$\mathscr{A}\mathscr{B}\in\mathcal{H}(N,Q)$ in Theorem~\ref{T:semiring}, as well as to enable
the further use of Lemma~\ref{L:1} in the analysis of the
spectral properties of the sets of matrices from $\mathcal{H}(N,Q)$.
\end{remark*}
By Theorem~\ref{T:semiring} the totality of sets of square matrices
$\mathcal{H}(N,N)$ is endowed with addition and multiplication operations, but
is itself neither an additive nor a multiplicative group. However, after
adding the zero additive element $\{0\}$ and the multiplicative identity
element $\{I\}$ to $\mathcal{H}(N,N)$, the resulting totality
$\mathcal{H}(N,N)\cup\{0\}\cup\{I\}$ becomes a semiring~\cite{Golan99}.
Theorem~\ref{T:semiring} implies that any finite sum of any finite products
of sets of matrices from $\mathcal{H}(N,N)$ is again a set of matrices from
$\mathcal{H}(N,N)$. Moreover, for any integers $n,d\ge1$, all the polynomial sets
of matrices
\begin{equation}\label{E:poly}
P(\mathscr{A}_{1},\mathscr{A}_{2},\ldots,\mathscr{A}_{n})=
\sum_{k=1}^{d}\sum_{i_{1},i_{2},\ldots,i_{k}\in\{1,2,\ldots,n\}}
p_{i_{1},i_{2},\ldots,i_{k}}\mathscr{A}_{i_{1}}\mathscr{A}_{i_{2}}\cdots\mathscr{A}_{i_{k}},
\end{equation}
where $\mathscr{A}_{1},\mathscr{A}_{2},\ldots,\mathscr{A}_{n}\in\mathcal{H}(N,N)$ and the scalar
coefficients $p_{i_{1},i_{2},\ldots,i_{k}}$ are positive, belong to the set
$\mathcal{H}(N,N)$.
The polynomials \eqref{E:poly} allow one to construct not only the elements
$P(\mathscr{A}_{1},\mathscr{A}_{2},\ldots,\mathscr{A}_{n})$ of the set $\mathcal{H}(N,N)$ but also
the elements of arbitrary sets $\mathcal{H}(N,M)$, by taking the arguments
$\mathscr{A}_{1},\mathscr{A}_{2},\ldots,\mathscr{A}_{n}$ from the sets $\mathcal{H}(N_{i},M_{i})$
with arbitrary matrix sizes $N_{i}\times M_{i}$. One must only ensure that
the products $\mathscr{A}_{i_{1}}\mathscr{A}_{i_{2}}\cdots\mathscr{A}_{i_{k}}$ are
admissible and determine sets of matrices of dimension $N\times M$.
We have presented above two types of non-trivial $\mathcal{H}$-sets of matrices,
the sets of matrices with independent row uncertainty and the linearly
ordered sets of positive matrices. In this connection, let us denote by
$\mathcal{H}_{*}(N,M)$ the totality of all sets of $(N\times M)$-matrices which can
be obtained as admissible finite sums of finite products of the sets of
positive matrices with independent row uncertainty or the sets of linearly
ordered positive matrices. In other words, $\mathcal{H}_{*}(N,M)$ is the totality
of all sets of matrices that can be represented as the values of polynomials
\eqref{E:poly} with the arguments taken from the sets of the matrices
belonging to $\mathcal{U}(N_{i},M_{i})\cup\mathcal{L}(N_{i},M_{i})$.
\begin{question*}
Does equality $\mathcal{H}_{*}(N,M)=\mathcal{H}(N,M)$ hold?
\end{question*}
The answer to this question is probably negative, but we are not aware of any
counterexamples.
\subsection{Closure of the set $\mathcal{H}(N,M)$}
When considering various types of problems related to sets of matrices,
it is desirable to be able to pass to limits. In fact, for our further
purposes we would like to extend some facts about $\mathcal{H}$-sets
of positive matrices to the same kind of sets of matrices, but with
non-negative elements. To achieve this, without going too deep into the
variety of all topologies on spaces of subsets, we confine ourselves to the
description of only one of them, the topology specified by the Hausdorff
metric.
Given some matrix norm $\|\cdot\|$ on $\mathcal{M}(N,M)$, denote by $\mathcal{K}(N,M)$
the totality of all compact subsets of $\mathcal{M}(N,M)$. Then for any two sets of
matrices $\mathscr{A},\mathscr{B}\in\mathcal{K}(N,M)$ the \emph{Hausdorff
metric} is defined as
\[
H(\mathscr{A},\mathscr{B})=
\max\left\{\adjustlimits\sup_{A\in\mathscr{A}}\inf_{B\in\mathscr{B}}\|A-B\|,
~\adjustlimits\sup_{B\in\mathscr{B}}\inf_{A\in\mathscr{A}}\|A-B\|\right\},
\]
in which $\mathcal{K}(N,M)$ becomes a complete metric space. Then $\mathcal{H}(N,M)\subset
\mathcal{K}(N,M)$, equipped with the Hausdorff metric, also becomes a metric space.
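For concreteness, the Hausdorff metric between two finite sets of matrices can be computed directly from the definition. The following Python sketch is purely illustrative and not part of the text; the max-absolute-entry norm is an arbitrary choice of $\|\cdot\|$, and the helper names are ad hoc:

```python
# Hausdorff distance between two finite sets of matrices, following the
# definition above; matrices are tuples of row tuples.

def mat_norm(a):
    # max-absolute-entry norm (an illustrative choice; any norm works)
    return max(abs(x) for row in a for x in row)

def mat_diff(a, b):
    return tuple(tuple(x - y for x, y in zip(ra, rb)) for ra, rb in zip(a, b))

def hausdorff(A, B):
    d = lambda S, T: max(min(mat_norm(mat_diff(s, t)) for t in T) for s in S)
    return max(d(A, B), d(B, A))

I = ((1.0, 0.0), (0.0, 1.0))
J = ((2.0, 0.0), (0.0, 2.0))

# {I} is a subset of {I, J}, so only the second supremum contributes:
assert hausdorff((I,), (I, J)) == 1.0
assert hausdorff((I, J), (I, J)) == 0.0
```

Note that the two suprema in the definition are genuinely asymmetric: here the first one vanishes while the second equals $\|J-I\|=1$.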
As is known, see, e.g.,~\cite[Chapter~E, Proposition~5]{Ok07}, any mapping
$F(\mathscr{A})$ acting from $\mathcal{K}(N,M)$ into itself is continuous in the
Hausdorff metric at some point $\mathscr{A}_{0}$ if and only if it is both upper
and lower semicontinuous. It is also known~\cite[Section~1.3]{BGMO:84:e} that
the mappings
\[
(\mathscr{A},\mathscr{B})\mapsto \mathscr{A}+\mathscr{B},\quad (\mathscr{A},\mathscr{B})\mapsto \mathscr{A}\mathscr{B},\quad
\mathscr{A}\mapsto\mathscr{A}\times\mathscr{A}\times\cdots\times\mathscr{A},\quad
\mathscr{A}\mapsto\co(\mathscr{A}),
\]
where $\mathscr{A}$ and $\mathscr{B}$ are compact sets, are both upper and lower
semicontinuous. Then these mappings are continuous in the Hausdorff metric,
and any polynomial mapping \eqref{E:poly} possesses the same continuity
properties.
Denote by $\overline{\mathcal{H}}(N,M)$ the closure of the set $\mathcal{H}(N,M)$ in the
Hausdorff metric. It is obvious that $\{0\},\{I\}\in\overline{\mathcal{H}}(N,M)$,
and since the Minkowski addition and multiplication of matrix sets are
continuous in the Hausdorff metric, then by Theorem~\ref{T:semiring} the set
$\overline{\mathcal{H}}(N,N)$ is a semiring. However, the answer to the question
when, for some $\mathscr{A}$, the inclusion $\mathscr{A}\in\overline{\mathcal{H}}(N,M)$ holds,
requires further analysis. We restrict ourselves to the description of only
one case where the answer to this question can be given explicitly.
Let $\mathbf{1}$ stand for the matrix (of appropriate size) with all elements equal
to $1$. First note that each set of \emph{linearly ordered non-negative
matrices} $\mathscr{A}=\{A_{1},A_{2},\ldots,A_{n}\}$, that is a set whose matrices
$A_{i}$ satisfy the inequalities $0\le A_{1}\le A_{2}\le \cdots\le A_{n}$, is
a limit point, in the Hausdorff metric, of the family of linearly ordered
sets of positive matrices
\[
\mathscr{A}(\varepsilon)=
\{A_{1}+\varepsilon\mathbf{1},A_{2}+2\varepsilon\mathbf{1},\ldots,A_{n}+n\varepsilon\mathbf{1}\},\qquad \varepsilon>0.
\]
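Indeed, $H(\mathscr{A}(\varepsilon),\mathscr{A})\le n\varepsilon\|\mathbf{1}\|\to0$ as $\varepsilon\to0$, since each $A_{i}+i\varepsilon\mathbf{1}$ lies within $i\varepsilon\|\mathbf{1}\|$ of $A_{i}$. This convergence is easy to check numerically; the following Python sketch (max-entry norm, ad-hoc helper names) is illustrative only:

```python
# Check numerically that A(eps) -> A in the Hausdorff metric as eps -> 0,
# for a small linearly ordered set 0 <= A1 <= A2 of non-negative matrices.

def mat_norm(a):
    return max(abs(x) for row in a for x in row)

def mat_diff(a, b):
    return tuple(tuple(x - y for x, y in zip(ra, rb)) for ra, rb in zip(a, b))

def hausdorff(A, B):
    d = lambda S, T: max(min(mat_norm(mat_diff(s, t)) for t in T) for s in S)
    return max(d(A, B), d(B, A))

def shift(a, t):
    # a + t * (all-ones matrix)
    return tuple(tuple(x + t for x in row) for row in a)

A1 = ((0.0, 1.0), (0.0, 2.0))
A2 = ((1.0, 1.0), (0.0, 2.0))
A = (A1, A2)

for eps in (0.1, 0.01, 0.001):
    A_eps = (shift(A1, eps), shift(A2, 2 * eps))
    # with n = 2 matrices and the max-entry norm, H(A(eps), A) <= 2*eps
    assert hausdorff(A_eps, A) <= 2 * eps + 1e-12
```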
Further, let $\mathscr{A}$ be an IRU-set of non-negative matrices. Then any set of
matrices
\[
\mathscr{A}(\varepsilon)=\mathscr{A}+\varepsilon\mathbf{1}=\left\{A+\varepsilon\mathbf{1}:A\in\mathscr{A}\right\},\qquad \varepsilon>0,
\]
is also an IRU-set, but this time consisting of positive matrices. To verify
this, it suffices to note that if a set of matrices $\mathscr{A}$ is defined by
sets of rows $\mathscr{A}_{i}$ then the set $\mathscr{A}+\varepsilon\mathbf{1}$ will be defined
by the sets of rows
$\mathscr{A}_{i}+\varepsilon\mathbf{1}=\{a+\varepsilon\mathbf{1}:a\in\mathscr{A}_{i}\}$, where $\mathbf{1}$ is
the unit row of appropriate size. Moreover, as in the previous case, this
IRU-set $\mathscr{A}$ of non-negative matrices is a limit point, in the Hausdorff
metric, of the family of positive IRU-sets
$\mathscr{A}(\varepsilon)$, $\varepsilon>0$.
These observations imply the following lemma.
\begin{lemma}\label{L:polynonneg}
The values of any polynomial mapping \eqref{E:poly} with the arguments from
finite linearly ordered sets of non-negative matrices or from IRU-sets of
non-negative matrices belong to the closure in the Hausdorff metric of the
totality of positive $\mathcal{H}$-sets of matrices.
\end{lemma}
\section{Main results}\label{S:main}
In this section we present a generalization of Theorem~\ref{T:BNP} to the
case of $\mathcal{H}$-sets of matrices.
\begin{theorem}\label{T:Hset}
Let $\mathscr{A}\in\overline{\mathcal{H}}(N,N)$, and let $\tilde{\mathscr{A}}$ be a set of
matrices satisfying the inclusions
$\mathscr{A}\subseteq\tilde{\mathscr{A}}\subseteq\co(\mathscr{A})$. Then
\begin{enumerate}[\rm(i)]
\item $\check{\rho}_{n}(\tilde{\mathscr{A}})=\rho_{min}(\mathscr{A})$ for all $n\ge 1$,
and therefore
$\check{\rho}(\tilde{\mathscr{A}})=\rho_{min}(\tilde{\mathscr{A}})=\rho_{min}(\mathscr{A})$;
\item $\hat{\rho}_{n}(\tilde{\mathscr{A}})=\rho_{max}(\mathscr{A})$ for all $n\ge 1$,
and therefore
$\hat{\rho}(\tilde{\mathscr{A}})=\rho_{max}(\tilde{\mathscr{A}})=\rho_{max}(\mathscr{A})$.
\end{enumerate}
\end{theorem}
\begin{proof}
If $\mathscr{A}\in\mathcal{H}(N,N)$ then, by definition, the set $\mathscr{A}$ consists of
positive matrices. Therefore, for $\mathscr{A}\in\mathcal{H}(N,N)$ assertions H1 and H2
of Lemma~\ref{L:alternative} hold, which implies that Lemma~\ref{L:mainmin}
is valid, too. Then the proof of the theorem word for word repeats the proof
of Theorem~\ref{T:BNP}. Thus, we need only consider the case when
$\mathscr{A}\in\overline{\mathcal{H}}(N,N)$ but $\mathscr{A}\not\in\mathcal{H}(N,N)$.
We first prove that, for every $n\ge1$, the following equalities hold:
\begin{equation}\label{E:needed}
\check{\rho}_{n}(\mathscr{A})=\rho_{min}(\mathscr{A}),\quad
\check{\rho}_{n}(\co(\mathscr{A}))=\rho_{min}(\mathscr{A}).
\end{equation}
Since $\mathscr{A}\in\overline{\mathcal{H}}(N,N)$, there exists a sequence of sets of
matrices $\mathscr{A}_{k}\in\mathcal{H}(N,N)$, $k=1,2,\dotsc$, converging to $\mathscr{A}$ in
the Hausdorff metric. Then, as has already been proved, for each $n,k\ge
1$ we have the equalities
\begin{equation}\label{E:needed-k}
\check{\rho}_{n}(\mathscr{A}_{k})=\rho_{min}(\mathscr{A}_{k}),\quad
\check{\rho}_{n}(\co(\mathscr{A}_{k}))=\rho_{min}(\mathscr{A}_{k}),
\end{equation}
and therefore it is natural to try to obtain \eqref{E:needed} by passing to
the limit in \eqref{E:needed-k}. To do this, we recall the following
simplified version of Berge's Maximum Theorem~\cite[Chapter~E,
Section~3]{Ok07}.
\begin{lemma}\label{L:Berge}
Let $X$ and $Y$ be metric spaces, $\Gamma: X\to Y$ be a multivalued mapping
with compact values, and $\varphi$ be a continuous real function on $X\times
Y$. If the mapping $\Gamma$ is continuous, that is both upper and lower
semicontinuous, at a point $x_{0}\in X$ then both functions
$M(x)=\max_{y\in\Gamma(x)}\varphi(x,y)$ and
$m(x)=\min_{y\in\Gamma(x)}\varphi(x,y)$ are also continuous at the point
$x_{0}$.
\end{lemma}
To use this lemma we will treat $\mathcal{M}(N,N)$ as a metric space, and adopt the
following notation:
\begin{alignat*}{2}
X&=\mathcal{K}(N,N),&\qquad Y&=\underbrace{\mathcal{M}(N,N)\times\cdots\times\mathcal{M}(N,N)}_{n~\text{times}},\\
x&=\mathscr{A}\in X,&\qquad
y&=(A_{1},\ldots,A_{n})\in Y,\\
\Gamma(x)&=\mathscr{A}\times\cdots\times\mathscr{A},&\qquad
\varphi(x,y)&\equiv\varphi(y)=\rho(A_{n}\cdots A_{1}).
\end{alignat*}
Here the function $\varphi(x,y)$, which in fact depends only on the argument
$y$, is continuous. The multivalued mapping $\Gamma(x)$, for each
$x=\mathscr{A}\in\mathcal{K}(N,N)$, takes compact values and is also continuous in the
Hausdorff metric, see, e.g.,~\cite[Section~1.3]{BGMO:84:e}. Therefore,
$\min_{y\in\Gamma(x)}\varphi(x,y)=\rho_{min}(\mathscr{A})$, and by
Lemma~\ref{L:Berge} the function $\rho_{min}(\mathscr{A})$ is continuous in
$\mathscr{A}\in\mathcal{K}(N,N)$. Similarly, choosing as $\varphi(x,y)$ the functions of
the form $\varphi(x,y)\equiv\varphi(y)=\rho(A_{n}\cdots A_{1})^{1/n}$ for
various $n\ge 1$, we obtain from Lemma~\ref{L:Berge} continuity of the
functions $\check{\rho}_{n}(\mathscr{A})$ in $\mathscr{A}\in\mathcal{K}(N,N)$ for all $n\ge 1$.
And choosing as $\Gamma(x)$ the multivalued mapping
$\Gamma(x)=\co(\mathscr{A})\times\cdots\times\co(\mathscr{A})$, which also takes compact
values and is continuous in the Hausdorff metric because the mapping
$\mathscr{A}\mapsto\co(\mathscr{A})$ is continuous in this
metric~\cite[Section~1.3]{BGMO:84:e}, we obtain similarly
that the functions $\check{\rho}_{n}(\co(\mathscr{A}))$ are continuous in
$\mathscr{A}\in\mathcal{K}(N,N)$ for every $n\ge 1$.
Thus, we have shown that all the functions in equalities \eqref{E:needed-k}
are continuous in $\mathscr{A}\in\mathcal{K}(N,N)$, from which, by taking the limit as
$\mathscr{A}_{k}\to\mathscr{A}\in\overline{\mathcal{H}}(N,N)$, we obtain \eqref{E:needed}.
Let now $\tilde{\mathscr{A}}$ be a compact set of matrices satisfying the
inclusions $\mathscr{A}\subseteq\tilde{\mathscr{A}}\subseteq\co(\mathscr{A})$. Then, since the
quantities $\check{\rho}_{n}(\cdot)$ are defined as infima over the
corresponding sets, we have the inequalities
\[
\check{\rho}_{n}(\co(\mathscr{A}))\le\check{\rho}_{n}(\tilde{\mathscr{A}})\le
\check{\rho}_{n}(\mathscr{A}).
\]
Therefore, by virtue of the already proven equalities \eqref{E:needed}, for
each $n\ge 1$ we have the equality
\[
\check{\rho}_{n}(\tilde{\mathscr{A}})=\rho_{min}(\mathscr{A}),
\]
and then, due to \eqref{E-LSRad},
$\check{\rho}(\tilde{\mathscr{A}})=\rho_{min}(\mathscr{A})$. Finally, observe that, by
definition, $\check{\rho}_{1}(\tilde{\mathscr{A}})=\rho_{min}(\tilde{\mathscr{A}})$ and
then $\rho_{min}(\tilde{\mathscr{A}})=\rho_{min}(\mathscr{A})$. Assertion (i) of
Theorem~\ref{T:Hset} is completely proved.
The proof of assertion (ii) is similar.
\end{proof}
When applying Theorem~\ref{T:Hset}, one of the first questions that arises
is how to verify the inclusion $\mathscr{A}\in\overline{\mathcal{H}}(N,N)$
for a given set of matrices $\mathscr{A}$. One such case has been described in
Lemma~\ref{L:polynonneg}, which implies the following corollary.
\begin{corollary}\label{C1}
Let $\mathscr{A}$ be a set of matrices obtained as the value of a polynomial
mapping \eqref{E:poly}, whose arguments are finite linearly ordered sets of
non-negative matrices or compact IRU-sets of non-negative matrices. Then for
any compact set of matrices $\tilde{\mathscr{A}}$ satisfying the inclusions
$\mathscr{A}\subseteq\tilde{\mathscr{A}}\subseteq\co(\mathscr{A})$ assertions of
Theorem~\ref{T:Hset} hold.
\end{corollary}
\section{Spectral characteristics of convex hulls of matrix sets}\label{S:cosets}
Theorem~\ref{T:Hset} implies that
\begin{equation}\label{E:JSR-LSR-conv}
\hat{\rho}(\mathscr{A})=\hat{\rho}(\co(\mathscr{A})),\quad
\check{\rho}(\mathscr{A})=\check{\rho}(\co(\mathscr{A}))
\end{equation}
for any set $\mathscr{A}\in\overline{\mathcal{H}}(N,N)$. In fact, it is
known~\cite{Theys:PhD05,Jungers:09} that the first of
equalities~\eqref{E:JSR-LSR-conv} holds for arbitrary (not necessarily
non-negative) sets of matrices $\mathscr{A}\subset\mathcal{M}(N,N)$, which follows from
the obvious observation that
\[
\sup_{A_{i}\in\mathscr{A}}\|A_{n}\cdots
A_{1}\|=\sup_{A_{i}\in\co(\mathscr{A})}\|A_{n}\cdots A_{1}\|
\]
for any norm. The second equality in \eqref{E:JSR-LSR-conv} for general sets
of matrices is not true, as is seen from the example of the set
$\mathscr{A}=\{I,-I\}$, for which $\check{\rho}(\mathscr{A})=1$ while
$\check{\rho}(\co(\mathscr{A}))=0$. In this regard, we note the following general
assertion.
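Before stating it, note that the counterexample above is easy to check mechanically: every product of matrices from $\{I,-I\}$ equals $\pm I$ and has spectral radius $1$, whereas $0=\tfrac{1}{2}\bigl(I+(-I)\bigr)\in\co(\mathscr{A})$ yields products with spectral radius $0$. A brute-force Python sketch for the $2\times2$ case (the helper `rho2` is ad hoc, not from the text):

```python
import itertools
import math

def rho2(a):
    # spectral radius of a 2x2 matrix via the characteristic polynomial
    (p, q), (r, s) = a
    tr, det = p + s, p * s - q * r
    disc = tr * tr - 4 * det
    if disc >= 0:
        root = math.sqrt(disc)
        return max(abs((tr + root) / 2), abs((tr - root) / 2))
    return math.sqrt(det)  # complex conjugate pair: |lambda|^2 = det

def mul(a, b):
    return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

I = ((1.0, 0.0), (0.0, 1.0))
mI = ((-1.0, 0.0), (0.0, -1.0))

# every product of matrices from {I, -I} equals +-I and has spectral radius 1 ...
for n in range(1, 5):
    for factors in itertools.product((I, mI), repeat=n):
        P = I
        for F in factors:
            P = mul(P, F)
        assert rho2(P) == 1.0

# ... while 0 = (I + (-I))/2 lies in co({I, -I}) and has spectral radius 0
zero = tuple(tuple((x + y) / 2 for x, y in zip(ra, rb)) for ra, rb in zip(I, mI))
assert rho2(zero) == 0.0
```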
\begin{theorem}\label{T:conv}
For any bounded set of non-negative matrices $\mathscr{A}\subset\mathcal{M}(N,N)$ the
second of equalities \eqref{E:JSR-LSR-conv} holds.
\end{theorem}
\begin{proof}
We will need some auxiliary facts. Let us take in the
definition~\eqref{E-LSRad0} the norm $\|x\|=\sum_{i}|x_{i}|$ and notice that
in this case $\|x\|=\sum_{i}x_{i}$ for any
$x=(x_{1},x_{2},\ldots,x_{N})^{\mathsmaller{\mathsf{T}}}\ge 0$, which implies that
\begin{equation}\label{E:sumin1}
\left\|\sum_{i} u_{i}\right\|=\sum_{i}\|u_{i}\|
\end{equation}
for any finite set of vectors $u_{i}\ge0$. Notice also that
\begin{equation}\label{E:srbound}
\|Ae\|\ge\rho(A),\quad\text{where}\quad e=(1,1,\ldots,1)^{\mathsmaller{\mathsf{T}}},
\end{equation}
for any matrix $A\ge0$. Indeed, if inequality~\eqref{E:srbound} were not true
then $\|Ae\|<\rho(A)$, which, since $Ae\ge0$, implies that all coordinates of
$Ae$ are less than $\rho(A)$, i.e. $Ae<\rho(A)e$. This leads, by assertion (i) of
Lemma~\ref{L:1}, to the self-contradictory inequality $\rho(A)<\rho(A)$.
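In plain terms, inequality \eqref{E:srbound} says that in the $\ell_{1}$ norm the sum of all entries of a non-negative matrix bounds its spectral radius from above. A quick numeric sanity check in Python for the $2\times2$ case (`rho2` is an ad-hoc helper, not from the text):

```python
import math

def rho2(a):
    # spectral radius of a 2x2 matrix via the quadratic formula
    (p, q), (r, s) = a
    tr, det = p + s, p * s - q * r
    disc = tr * tr - 4 * det
    if disc >= 0:
        root = math.sqrt(disc)
        return max(abs((tr + root) / 2), abs((tr - root) / 2))
    return math.sqrt(det)  # complex pair: |lambda|^2 = det

A = ((1.0, 2.0), (3.0, 4.0))
norm_Ae = sum(sum(row) for row in A)   # ||Ae|| in the l1 norm = sum of all entries
assert abs(rho2(A) - (5 + math.sqrt(33)) / 2) < 1e-12   # rho(A) = (5 + sqrt(33))/2
assert norm_Ae >= rho2(A)                               # 10 >= 5.372...
```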
To prove the equality $\check{\rho}(\co(\mathscr{A}))=\check{\rho}(\mathscr{A})$ let us
observe first that
\begin{equation}\label{E:checkrho}
\check{\rho}(\co(\mathscr{A}))\le \check{\rho}(\mathscr{A}),
\end{equation}
since $\mathscr{A}\subseteq\co(\mathscr{A})$ while due to the definition~\eqref{E-LSRad}
both sides of this inequality are infima of the same expression over
$\co(\mathscr{A})$ and $\mathscr{A}$ respectively.
Now, given $n\ge 1$, let each matrix $A_{i}$, $i=1,2,\ldots,n$, be a
finite convex combination of matrices $A^{(i)}_{j}\in\mathscr{A}$, that is,
$A_{i}=\sum_{j} \mu^{(i)}_{j}A^{(i)}_{j}\in\co(\mathscr{A})$ with some
$\mu^{(i)}_{j}\ge0$ and $\sum_{j} \mu^{(i)}_{j}=1$. Then in view of
\eqref{E:sumin1}
\begin{multline*}
\|A_{n}\cdots A_{1}\|\cdot\|e\|\ge\|A_{n}\cdots A_{1}e\|=
\biggl\|\biggl(\sum_{j_{n}} \mu^{(n)}_{j_{n}}A^{(n)}_{j_{n}}\biggr)\cdots
\biggl(\sum_{j_{1}} \mu^{(1)}_{j_{1}}A^{(1)}_{j_{1}}\biggr)e\biggr\|=\\=
\biggl\|\sum_{j_{n}}\cdots\sum_{j_{1}}\mu^{(n)}_{j_{n}}\cdots \mu^{(1)}_{j_{1}}
A^{(n)}_{j_{n}}\cdots A^{(1)}_{j_{1}}e\biggr\|=
\sum_{j_{n}}\cdots\sum_{j_{1}}\mu^{(n)}_{j_{n}}\cdots \mu^{(1)}_{j_{1}}
\|A^{(n)}_{j_{n}}\cdots A^{(1)}_{j_{1}}e\|.
\end{multline*}
Here $\|e\|=N$, and by~\eqref{E:srbound} $\|A^{(n)}_{j_{n}}\cdots
A^{(1)}_{j_{1}}e\|\ge \rho(A^{(n)}_{j_{n}}\cdots
A^{(1)}_{j_{1}})\ge\check{\rho}_{n}(\mathscr{A})$. Therefore,
\[
N\|A_{n}\cdots A_{1}\|\ge
\biggl(\sum_{j_{n}}\cdots\sum_{j_{1}}\mu^{(n)}_{j_{n}}\cdots \mu^{(1)}_{j_{1}}\biggr)
\check{\rho}_{n}(\mathscr{A}).
\]
Moreover, since $\sum_{j_{n}}\ldots\sum_{j_{1}}\mu^{(n)}_{j_{n}}\cdots
\mu^{(1)}_{j_{1}}=1$ then
\[
\|A_{n}\cdots A_{1}\|\ge \tfrac{1}{N}\,
\check{\rho}_{n}(\mathscr{A}).
\]
Taking in this last inequality the infimum over all $A_{1},\ldots,
A_{n}\in\co(\mathscr{A})$ we obtain the inequalities
\[
\inf_{A_{i}\in\co(\mathscr{A})}\|A_{n}\cdots A_{1}\|\ge \tfrac{1}{N}\,
\check{\rho}_{n}(\mathscr{A}),\qquad n\ge 1,
\]
which we substitute in \eqref{E-LSRad0}:
\[
\check{\rho}(\co(\mathscr{A}))=
\adjustlimits\lim_{n\to\infty}\inf_{A_{i}\in\co(\mathscr{A})}\|A_{n}\cdots A_{1}\|^{1/n}\ge \lim_{n\to\infty}\left(\tfrac{1}{N}\right)^{1/n}
\check{\rho}_{n}(\mathscr{A})^{1/n}=\check{\rho}(\mathscr{A}),
\]
and together with \eqref{E:checkrho} this yields the required equality.
\end{proof}
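The key step of the proof, the inequality $N\|A_{n}\cdots A_{1}\|\ge\check{\rho}_{n}(\mathscr{A})$, can be sanity-checked numerically in the simplest case $n=1$: for any convex combination $A=\mu A_{1}+(1-\mu)A_{2}$ of non-negative matrices one should have $N\|A\|\ge\min\{\rho(A_{1}),\rho(A_{2})\}$. An illustrative Python sketch (for non-negative matrices the operator norm induced by the $\ell_{1}$ vector norm is the maximal column sum; helper names are ad hoc):

```python
import math

def rho2(a):
    # spectral radius of a 2x2 matrix via the quadratic formula
    (p, q), (r, s) = a
    tr, det = p + s, p * s - q * r
    disc = tr * tr - 4 * det
    if disc >= 0:
        root = math.sqrt(disc)
        return max(abs((tr + root) / 2), abs((tr - root) / 2))
    return math.sqrt(det)

def opnorm1(a):
    # l1-induced operator norm = max column sum (for entries >= 0)
    return max(a[0][j] + a[1][j] for j in range(2))

A1 = ((1.0, 2.0), (3.0, 4.0))
A2 = ((2.0, 0.0), (0.0, 3.0))
N = 2
rho_min = min(rho2(A1), rho2(A2))   # = rho(A2) = 3
for k in range(11):
    mu = k / 10
    A = tuple(tuple(mu * x + (1 - mu) * y for x, y in zip(r1, r2))
              for r1, r2 in zip(A1, A2))
    assert N * opnorm1(A) >= rho_min - 1e-12
```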
\section{Concluding remarks}\label{S:conclude}
\subsection{Sets of matrices with independent column uncertainty}
Since the spectral radius of a matrix does not change under transposition,
all the assertions of Theorem~\ref{T:Hset} remain valid
for the sets of matrices taken from the totality of $\mathcal{H}^{\mathsmaller{\mathsf{T}}}$-sets
of matrices, which is obtained by transposing the matrices from $\mathcal{H}$-sets.
In particular, in the course of transposing, the sets of matrices with
independent row uncertainty turn into the so-called \emph{sets of matrices
with independent column uncertainty}~\cite{BN:SIAMJMAA09}.
Note that for the $\mathcal{H}^{\mathsmaller{\mathsf{T}}}$-sets of matrices the hourglass
alternative is, generally speaking, not valid. In this connection the
question naturally arises of further extending the classes of matrices for
which the theorems proved in this article hold.
\subsection{Terminology}
We have borrowed the term `set of matrices with independent row (or column)
uncertainty' from the recent work~\cite{BN:SIAMJMAA09}, although such
sets of matrices have in fact been used for a long time in the theory of
parallel computing and the theory of asynchronous systems.
In~\cite{Prot:MP15} the same sets of matrices got the name \emph{product
families}.
In the special case when each row of a matrix from $\mathscr{A}$ coincides
with the corresponding row of either a predetermined matrix $A$ or the
identity matrix $I$, such matrices are sometimes
called~\cite{KKKK:AiT83:7:e} \emph{mixtures} of the matrices $A$ and $I$; see
also a brief survey in~\cite{Koz:ICDEA04}. An example in which the mixtures
of matrices arise is the so-called linear asynchronous system
$x_{n+1}=A_{n}x_{n}$, wherein at each time step one or more components of the
state vector are updated independently of each other, i.e. each coordinate of
the vector $x_{n+1}$ can take the value of the corresponding coordinate of
$Ax_{n}$ or of $x_{n}$.
\section*{Acknowledgments}
The author is indebted to Eugene Asarin for numerous fruitful discussions and
remarks.
The work was funded by the Russian Science Foundation, Project No.
14-50-00150.
\end{document}
\begin{document}
\title{Lower estimates of random unconditional constants of Walsh-Paley
martingales with values in Banach spaces}
\author{Stefan Geiss \thanks{The author is supported by the DFG
(Ko 962/3-1).} \\
Mathematisches Institut der Friedrich Schiller Universit\"at Jena\\
UHH 17.OG, D O-6900 Jena, Germany}
\maketitle
{\bf Abstract.} For a Banach space $X$ we define $RU\!M\!D_n ( X)$ to be the infimum
of all $c>0$ such that
\begin{displaymath} \left ( AV_{\varepsilon_k =\pm 1} \| \sum_1^n \varepsilon_k (M_k - M_{k-1} )\|_{L_2^X}^2
\right ) ^{1/2} \le c \| M_n \|_{L_2^X} \end{displaymath}
holds for all Walsh-Paley martingales $\{ M_k \}_0^n \subset L_2^X$ with
$M_0 =0$. We relate the asymptotic behaviour of the sequence
$\{ RU\!M\!D_n ( X)\}_{n=1}^{\infty}$ to geometrical properties of the Banach space
$X$ such as K-convexity and superreflexivity.
1980 Mathematics Subject Classification (1985 Revision): 46B10, 46B20
\setcounter{section}{-1}
\section{Introduction}
\setcounter{lemma}{0}
A Banach space $X$ is said to be a UMD-space if for all $1<p<\infty$ there is
a constant $c_p = c_p (X)>0$ such that
\begin{displaymath} \sup_{\varepsilon_k =\pm 1}\| \sum_1^n \varepsilon_k (M_k -M_{k-1} ) \|_{L_p^X}
\le c_p \| \sum_1^n (M_k - M_{k-1} )\|_{L_p^X} \end{displaymath}
for all $n=1,2,...$ and all martingales $\{ M_k \}_0^n \subset L_p^X$ with
values in $X$. It turns out that
this definition is equivalent to the modified one if we replace ``for all
$1<p<\infty$'' by ``for some $1<p<\infty$'', and ``for all martingales'' by ``for all
Walsh-Paley martingales'' (see \cite{Bu} for a survey). Motivated by these
definitions we investigate Banach spaces $X$ by means of the sequences
$\{ RU\!M\!D_n ( X)\}_{n=1}^{\infty}$, where $RU\!M\!D_n ( X) := \inf c$ such that
\begin{displaymath} \left ( AV_{\varepsilon_k =\pm 1} \| \sum_1^n \varepsilon_k (M_k - M_{k-1} )\|_{L_2^X}^2
\right ) ^{1/2} \le c \| M_n \|_{L_2^X} \end{displaymath}
holds for all Walsh-Paley martingales $\{ M_k \}_0^n \subset L_2^X$ with the
starting point $M_0 =0$. ``$RU\!M\!D$'' stands for ``random unconditional
constants of martingale differences''. We consider ``random'' unconditional
constants instead of the usual ones, where $\sup_{\varepsilon =\pm 1}$ is taken in
place of $AV_{\varepsilon =\pm 1}$, since they naturally appear in the lower
estimates we are interested in. These lower estimates are of course lower
estimates for the non-random case, too. The paper is organized in the
following way. Using a technique of Maurey we show that the exponent 2 in the
definition of $RU\!M\!D_n ( X)$ can be replaced by any $1<p<\infty$ (see Theorem 2.4).
Then we observe (see Theorem 3.5)
\begin{displaymath} X \hspace{.1cm} \mbox{is not K-convex} \hspace{1cm} \Longleftrightarrow
\hspace{1cm} RU\!M\!D_n ( X) \asymp n. \end{displaymath}
In the case of superreflexive Banach spaces this turns into
\begin{displaymath} X \hspace{.1cm} \mbox{is not superreflexive} \hspace{1cm} \Longrightarrow
\hspace{1cm} RU\!M\!D_n ( X) \succeq n^{1/2}, \end{displaymath}
and, under the assumption that $X$ is of type 2,
\begin{displaymath} X \hspace{.1cm} \mbox{is not superreflexive} \hspace{1cm} \Longleftrightarrow
\hspace{1cm} RU\!M\!D_n ( X) \asymp n^{1/2} \end{displaymath}
(see Theorems 4.3 and 4.4). Using an example due to Bourgain we see that the
type 2 condition is necessary. In fact, for all $1<p<2<q<\infty$ there is a
superreflexive Banach space $X$ of type $p$ and cotype $q$ such that
$RU\!M\!D_n ( X) \asymp n^{\frac{1}{p} - \frac{1}{q}}$ (see Corollary 5.4).
According to a result of James, a non-superreflexive Banach space $X$ is
characterized by the existence of large ``James trees'' in the unit ball $B_X$
of $X$. We can identify these trees with Walsh-Paley martingales
$\{ M_k \}_0^n$ which only take values in the unit ball $B_X$ and which satisfy
$\inf_{\omega} \| M_k (\omega ) - M_{k-1} (\omega )\| \ge \theta$ for
some fixed
$0<\theta <1$. In this way we can additionally show
that the martingales, which give the lower estimates of our random
unconditional constants, are even James trees (see Theorems 3.5(2) and 4.3).
\section{Preliminaries}
\setcounter{lemma}{0}
The standard notation of Banach space theory is used
(cf.~\cite{L-T}). Throughout this paper ${\rm I\! K}$ stands for the real or complex
scalars. $B_X$ is the closed unit ball of the Banach space $X$, ${\cal L}
(X,Y)$
is the space of all linear and continuous operators from a Banach space $X$
into a Banach space $Y$ equipped with the usual operator norm.
We consider martingales over the probability space
$[\Omega_n ,\mu_n ]$ which is given by $\Omega_n := \{ \omega =(\omega_1
,...,\omega_n ) \in \{ -1,1\} ^n \}$ and
$\mu_n (\omega ) :=\frac{1}{2^n}$ for all $\omega \in \Omega_n $. The minimal
$\sigma$-algebras ${\cal F} _k $, such that the coordinate functionals
$\omega =(\omega_1 ,...,\omega_n ) \rightarrow \omega_i \in {\rm I\! K}$
are measurable for $i=1,...,k$, and ${\cal F}_0 :=\{\emptyset ,\Omega_n \}$
form a natural filtration $\{ {\cal F} _k\}$ on $\Omega_n $.
A martingale $\{ M_k \}_0^n$ with values in a Banach space
$X$ over $[\Omega_n ,\mu_n ]$ with respect to this filtration $\{ {\cal F}_k \}_0^n$
is called a Walsh-Paley martingale. As usual we put $dM_0 := M_0,
dM_k := M_k -M_{k-1}$ for $k\ge 1$ and $M_k^*(\omega ) = \sup_
{0\le l\le k} \| M_l(\omega )\|$. Given a function $M \in L_p^X
(\Omega_n )$ we can set $M_k :={\rm I\! E} (M|{\cal F }_k )$ for
$k=0,...,n$. Consequently, for each $M \in L_p^X (\Omega_n )$ there
is a unique Walsh-Paley martingale $\{ M_k \}_0^n$ with $M_n
=M$. In this paper we consider a further probability space
$[I\! D_n ,P_n ]$ with $I\! D_n =
\{ \varepsilon =(\varepsilon_1 ,...,\varepsilon_n ) \in \{ -1,1\} ^n \}$ and
$P_n (\varepsilon )= \frac{1}{2^n}$ for all $\varepsilon \in I\! D_n $.
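The operator $M_k = {\rm I\! E}(M|{\cal F}_k)$ is simply averaging over the coordinates $\omega_{k+1},\ldots,\omega_n$. The following Python sketch illustrates this for $n=2$; the endpoint $M$ below is an arbitrary illustrative choice, not from the text:

```python
import itertools

n = 2
Omega = list(itertools.product((-1, 1), repeat=n))

def M(w):
    # arbitrary illustrative endpoint on Omega_2 with E(M) = 0
    return w[0] + w[0] * w[1]

def cond_exp(f, k):
    # E(f | F_k): average f over the coordinates omega_{k+1}, ..., omega_n
    def Mk(w):
        tails = list(itertools.product((-1, 1), repeat=n - k))
        return sum(f(w[:k] + t) for t in tails) / len(tails)
    return Mk

M0, M1, M2 = cond_exp(M, 0), cond_exp(M, 1), cond_exp(M, 2)

for w in Omega:
    assert M2(w) == M(w)   # M_n = M
    assert M1(w) == w[0]   # averaging over omega_2 removes the w[0]*w[1] term
    assert M0(w) == 0.0    # M_0 = E(M) = 0
```

The martingale property ${\rm I\! E}(dM_k|{\cal F}_{k-1})=0$ is built into this averaging: averaging $M_k$ over $\omega_k$ reproduces $M_{k-1}$.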
${\rm I\! E}_{\varepsilon, \omega }$ means that we take the expectation with
respect to the product measure $P_n \times \mu_n$. To estimate the
random unconditional constants of Walsh-Paley martingales from
above we use the notion of the type. For $1\le p\le 2$ an
operator $T \in {\cal L }(X,Y)$ is of type $p$ if
\begin{displaymath} \left( {\rm I\! E} _{\varepsilon } \| \sum _k T \varepsilon_k x_k \| ^2 \right)
^{1/2} \le c \left( \sum_k \| x_k \|^p \right)^{1/p} \end{displaymath}
for some constant $c>0$ and all finite sequences $\{x_k \} \subset X$. The
infimum of all possible constants $c>0$ is denoted by $T_p (T)$.
Considering the above inequality for sequences $\{ x_k \}_{k=1}^n \subset X$
of a fixed length $n$ only we obtain the corresponding constant $T_p^n (T)$
which can be defined for each operator $T\in {\cal L }(X,Y)$.
In the case $T=I_X$ is the identity of a Banach space $X$ we write
$T_p (X)$ and $T_p^n (X)$ instead of $T_p (I_X)$ and $T_p^n (I_X)$, and
say ``$X$ is of type $p$'' in place of ``$I_X$ is of type $p$'' (see \cite{T-J} for
more information).
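For the scalar field the type-$2$ inequality holds with constant $1$, since the Rademacher signs are orthogonal: ${\rm I\! E}_{\varepsilon} | \sum_k \varepsilon_k x_k |^2 = \sum_k x_k^2$. This orthogonality can be checked by brute force (an illustrative Python sketch, not from the text):

```python
import itertools

def avg_square(xs):
    # E_eps | sum_k eps_k x_k |^2, averaged over all 2^n sign choices
    total = 0.0
    for eps in itertools.product((-1, 1), repeat=len(xs)):
        total += sum(e * x for e, x in zip(eps, xs)) ** 2
    return total / 2 ** len(xs)

xs = [1.0, -2.5, 3.0, 0.5]
# the cross terms average to zero, leaving the sum of squares (= 16.5 here)
assert abs(avg_square(xs) - sum(x * x for x in xs)) < 1e-9
```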
\pagebreak
\section{Basic definition}
\setcounter{lemma}{0}
Let $T\in {\cal L} (X,Y)$ and $1 \le q<\infty $. Then $RU\!M\!D_n^q( T) = \inf c$,
where the infimum is taken over all $c>0$ such that
\begin{displaymath} \left( {\rm I\! E}_{\varepsilon ,\omega } \| \sum_1^n \varepsilon _k TdM_k
(\omega )\| ^q \right)^{1/q} \le c \left( {\rm I\! E} _{\omega } \| M_n (\omega )\|^q
\right) ^{1/q}\end{displaymath}
holds for all Walsh-Paley martingales $\{ M_k\}_0^n$ with values in $X$ and
$M_0 =0$. In particular, we set \linebreak
$RU\!M\!D_n^q( X) := RU\!M\!D_n^q( I_X )$ for a Banach space $X$ with the identity $I_X$.
It is clear that $RU\!M\!D_n^q( T) \le 2n \| T \|$ since
\begin{displaymath} \| \sum_1^n \varepsilon_k TdM_k \|_{L_q^X} \le \sum_1^n \| T\|
\| dM_k \|_{L_q^X} \le \| T\| \sum_1^n \left ( \| M_k\|_{L_q^X} +
\| M_{k-1}\|_{L_q^X} \right ) \le 2 n \| T\| \| M_n \|_{L_q^X}. \end{displaymath}
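This chain of estimates can be checked on a concrete example: for scalars ($T=I_{{\rm I\! K}}$, $q=2$, $n=2$) and an arbitrarily chosen Walsh-Paley martingale with $M_0=0$, the left-hand side must not exceed $2n\|M_n\|_{L_2}$. An illustrative Python sketch (the endpoint $M$ below is an arbitrary choice, not from the text):

```python
import itertools
import math

n = 2
Omega = list(itertools.product((-1, 1), repeat=n))
Signs = list(itertools.product((-1, 1), repeat=n))

def M(w):
    # arbitrary endpoint with M_0 = E(M) = 0
    return 2 * w[0] + w[0] * w[1]

def cond_exp(k, w):
    # E(M | F_k): average over the remaining coordinates
    tails = list(itertools.product((-1, 1), repeat=n - k))
    return sum(M(w[:k] + t) for t in tails) / len(tails)

def dM(k, w):
    return cond_exp(k, w) - cond_exp(k - 1, w)

lhs2 = sum(sum(eps[k - 1] * dM(k, w) for k in range(1, n + 1)) ** 2
           for eps in Signs for w in Omega) / (len(Signs) * len(Omega))
rhs = 2 * n * math.sqrt(sum(M(w) ** 2 for w in Omega) / len(Omega))
assert math.sqrt(lhs2) <= rhs        # the trivial bound RUMD_n^2 <= 2n
assert abs(lhs2 - 5.0) < 1e-9        # here E|sum eps_k dM_k|^2 = 4 + 1 = 5
```

For scalars and $q=2$ the randomized side is in fact much smaller than the trivial bound, by orthogonality of the signs.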
In the case $X$ is a UMD-space we have $\sup_n RU\!M\!D_n^q( X) < \infty$ whenever
$1<q< \infty$ (the converse seems to be open). $q=1$ yields a ``singularity''
since $RU\!M\!D_n^1( X) \asymp RU\!M\!D_n^1( {\rm I\! K} ) \asymp n$ for any Banach space $X$ (see
Corollary 5.2); therefore we restrict our consideration to $1<q< \infty$. Here
we show that the quantities $RU\!M\!D_n^q( T)$ are equivalent for $1<q<\infty$. In
\cite{Ga}(Thm.4.1) it is stated that $\sup_n RU\!M\!D_n^q( X)<\infty$ iff $\sup_n
RU\!M\!D_n^r( X)<\infty$ for all $1<q,r<\infty$. Using Lemma 2.2, which slightly extends
\cite{M}(Thm.II.1), we prove a more precise result in Theorem 2.4.
Let us start with a general martingale transform. Assuming
$T_1 ,...,T_n \in {\cal L}(X,Y)$ we define
\begin{displaymath} \phi = \phi (T_1 ,...,T_n ) : L_0^X (\Omega_n ) \longrightarrow
L_0^Y (\Omega_n ) \hspace{.1cm} \mbox{by} \hspace{.1cm}
\phi (M)(\omega ):=\sum_1^n T_k dM_k (\omega ),\end{displaymath}
where $M_k = {\rm I\! E} (M|{\cal F}_k )$. The following duality is standard.
\begin{lemma} Let $1<p<\infty$,\hspace{.1cm} $T_1,...,T_n \in {\cal L}(X,Y)$ and
$\phi =\phi (T_1 ,...,T_n ) : L_p^X \longrightarrow L_p^Y $. Then
\begin{displaymath} \phi '(F) = \sum_1^n T_k 'dF_k \hspace{.1cm} \mbox{for all} \hspace{.1cm}
F \in L_{p'}^{Y'}.\end{displaymath}
\end{lemma}
$Proof.$ Using the known formula
\begin{displaymath} <M,M'> = \sum_0^n <dM_k ,{dM'}_k >
\hspace{.1cm} \left ( M \in L_s^Z (\Omega_n ),M'\in L_{s'}^{Z'} (\Omega_n ),1<s<\infty \right ) ,
\end{displaymath}
for $M\in L_p^X (\Omega_n )$ and $F\in L_{p'}^{Y'} (\Omega_n )$ we obtain
\begin{eqnarray*}
<M,\phi 'F> & = & <\sum_1^n T_k dM_k ,F> = \sum_1^n <T_k dM_k ,dF_k > \\
& = & \sum_1^n <dM_k ,T_k 'dF_k > = <M,\sum_1^n T_k 'dF_k >.\hspace{.1cm} \Box
\end{eqnarray*}
Now we recall \cite{M}(Thm.II.1) in a more general form. Although the proof
is the same we repeat some of the details for the convenience of the reader.
\begin{lemma} Let $1<p<r<\infty$, $T_1 ,...,T_n \in {\cal L}(X,Y)$ and
$\phi = \phi (T_1 ,...,T_n )$. Then
\begin{displaymath} \| \phi :L_p^X \rightarrow L_p^Y \| \le \frac{6r^2}{(p-1)(r-1)}
\| \phi :L_r^X \rightarrow L_r^Y \| .\end{displaymath} \end{lemma}
$Proof.$ We define $1<q< \infty$ and $0<\alpha <1$ with $\frac{1}{p} =
\frac{1}{r} +\frac{1}{q} $ and $\frac{p}{r} = 1- \alpha $ and obtain
$\alpha q=(1-\alpha )r=p$. Let $\{ M_k \} _0^n$ be a Walsh-Paley martingale
in $X$ with $dM_1 \neq 0$. We set
\begin{displaymath} ^*\! M_k (\omega ) := M_{k-1}^* (\omega ) +
\sup_{0\le l \le k} \| dM_l (\omega )\| \hspace{1cm} \mbox{for} \hspace{1cm}
k\ge 1 \end{displaymath}
and obtain an ${\cal F} _{k-1}^{\omega }$ -measurable random variable with
\begin{displaymath} 0< {^*\! M}_1(\omega )\le ... \le {^*\! M}_n(\omega ) \hspace{.1cm} \mbox{and} \hspace{.1cm}
M_k^*(\omega ) \le {^*\! M}_k(\omega ) \le 3 M_k^*(\omega ). \end{displaymath}
Using \cite{M}(L.II.B) and Doob's inequality we obtain
\begin{eqnarray*}
\| \phi (M) \|_p & = &
\left( {\rm I\! E} _{\omega }\| \sum_1^n
\frac{T_k dM_k (\omega )}{^*\! M_k^{\alpha } (\omega)} {^*\! M}_k^{\alpha }
(\omega ) \| ^p \right) ^{1/p} \\
& \le & 2 \left( {\rm I\! E} _{\omega } \sup_{1\le k\le n} \| \sum _1^k
\frac{T_l dM_l (\omega )}{^*\! M_l^{\alpha }(\omega )} \| ^p \hspace{0.2cm}
{^*\! M_n^{\alpha p}}(\omega) \right) ^{1/p} \\
& \le & 2 \left( {\rm I\! E} _{\omega } \sup_{1\le k\le n} \| \sum _1^k
\frac{T_l dM_l (\omega )}{^*\! M_l^{\alpha }(\omega )} \| ^r \right) ^{1/r}
\left( {{\rm I\! E} _{\omega }} ^*\! M_n^{\alpha q} (\omega ) \right) ^{1/q} \\
& \le & \frac{2r}{r-1} \left( {\rm I\! E} _{\omega } \| \sum _1^n
\frac{T_k dM_k (\omega )}{^*\! M_k^{\alpha }(\omega )} \| ^r \right) ^{1/r}
\left( {{\rm I\! E} _{\omega }} ^*\! M_n^p (\omega ) \right) ^{1/q} \\
& \le & \frac{2r}{r-1} \| \phi \|_r \left( {\rm I\! E} _{\omega } \| \sum _1^n
\frac{dM_k (\omega )}{^*\! M_k^{\alpha }(\omega )} \| ^r \right) ^{1/r}
\left( {{\rm I\! E} _{\omega }} ^*\! M_n^p (\omega ) \right) ^{1/q}. \\
\end{eqnarray*}
Applying \cite{M}(L.II.A) in the situation $ \| \sum _1^l dM_i (\omega )
\| \le \left( ^*\! M_l (\omega )^{\alpha } \right) ^{1/ {\alpha }} $ yields
\begin{displaymath} \| \sum _1^k \frac{dM_l (\omega )}{^*\! M_l^{\alpha }(\omega )} \| \le
\frac{1/{\alpha }}{1/{\alpha } -1} {^*\! M_k(\omega)}^
{{\alpha }(1/{\alpha } -1)} \le \frac{r}{p} {^*\! M_n(\omega )}^{p/r} \end{displaymath}
such that
\begin{eqnarray*}
\| \phi (M)\|_p & \le & \frac{2r^2}{(r-1)p} \| \phi \|_r \left ( {\rm I\! E}_{\omega }
^* M_n (\omega )^p \right ) ^{1/p}
\le \frac{6r^2}{(r-1)p} \| \phi \|_r \left ( {\rm I\! E}_{\omega } M_n^* (\omega )^p
\right ) ^{1/p} \\
& \le & \frac{6r^2}{(r-1)(p-1)} \| \phi \|_r \| M_n \|_p. \hspace{.1cm} \Box
\end{eqnarray*}
We deduce
\begin{lemma} Let $T_1 ,...,T_n :X\rightarrow L_1^Y ({\rm I\! D}_n )$ and
$\phi := \phi (T_1 ,...,T_n )$. For $i=1,...,n$ assume that \linebreak
$T_i (X) \subseteq \{ f:f=\sum_1^n \varepsilon_k y_k, \hspace{0.2cm} y_k \in Y \}$.
Then
\begin{displaymath} \frac{1}{c_{pr}} \| \phi : L_p^X \rightarrow L_p^{L_p^Y} \| \le
\| \phi : L_r^X \rightarrow L_r^{L_r^Y} \| \le c_{pr}
\| \phi : L_p^X \rightarrow L_p^{L_p^Y} \| \end{displaymath}
for $1<p<r<\infty$, where the constant $c_{pr} >0$ is independent of
$X$, $Y$, $(T_1 ,...,T_n )$, and $n$. \end{lemma}
$Proof.$ The left-hand inequality follows from Lemma 2.2 and
\begin{displaymath} \| \phi : L_p^X \rightarrow L_p^{L_p^Y} \| \le \frac{6r^2}{(p-1)(r-1)}
\| \phi : L_r^X \rightarrow L_r^{L_p^Y} \| \le \frac{6r^2}{(p-1)(r-1)}
\| \phi : L_r^X \rightarrow L_r^{L_r^Y} \| .\end{displaymath}
The right-hand inequality is a consequence of Lemma 2.1, Lemma 2.2 and
\begin{eqnarray*}
\| \phi : L_r^X \rightarrow L_r^{L_r^Y} \| & = &
\| \phi ': L_{r'}^{L_{r'}^{Y'}} \rightarrow L_{r'}^{X'} \| \\
& \le & \frac{{6r'}^2}{(p'-1)(r'-1)}
\| \phi ': L_{p'}^{L_{r'}^{Y'}} \rightarrow L_{p'}^{X'} \| \\
& = & \frac{{6r'}^2}{(p'-1)(r'-1)}
\| \phi : L_p^X \rightarrow L_p^{L_r^Y} \| \\
& \le & \frac{{6r'}^2}{(p'-1)(r'-1)} K_{rp}
\| \phi : L_p^X \rightarrow L_p^{L_p^Y} \|,
\end{eqnarray*}
where we use Kahane's inequality (cf. \cite{L-T} (II.1.e.13)) in the
last step. $\Box$
If we apply Lemma 2.3 in the situation $T_k x:= \varepsilon _k Tx$ and exploit
\begin{displaymath} RU\!M\!D_n^p( T) \le \| \phi (T_1 ,...,T_n ):L_p^X \rightarrow L_p^{L_p^Y} \|
\le 2 RU\!M\!D_n^p( T) \end{displaymath}
then we arrive at
\begin{theorem} Let $1<p<r<\infty$ and $T \in {\cal L }(X,Y)$. Then
\begin{displaymath} \frac{1}{c_{pr}} RU\!M\!D_n^p( T) \le RU\!M\!D_n^r( T) \le c_{pr} RU\!M\!D_n^p( T), \end{displaymath}
where the constant $c_{pr} >0$ is independent of $X$, $Y$, $T$, and $n$.
\end{theorem}
The above consideration justifies
\begin{displaymath} RU\!M\!D_n ( T):=RU\!M\!D_n^2( T) \hspace{.1cm} \mbox{for} \hspace{.1cm} T \in {\cal L }(X,Y) \end{displaymath}
and $RU\!M\!D_n ( X):= RU\!M\!D_n ( I_X )$ for a Banach space $X$.\\
\section{K--convexity}
\setcounter{lemma}{0}
We show that $RU\!M\!D_n ( X) \asymp n$ if and only if $X$ is not K-convex, that is,
if and only if $X$ contains the spaces $l_1^n$ uniformly. To do this some
additional notation is required. For $x_1,...,x_n\in X$ we set
\begin{displaymath} |x_1\wedge ...\wedge x_n|_X:=\sup \{ |\det (\langle x_i,a_j\rangle )
_{i,j=1}^n|:a_1 ,...,a_n \in B_{X'} \}.\end{displaymath}
Furthermore, for fixed $n$ we define the bijection
\begin{displaymath} i: \{ -1,1\} ^n \rightarrow \{1,...,2^n \} \end{displaymath} as
\begin{displaymath} i(\omega )=i(\omega_1,...,\omega_n ):=1+ \frac{1-\omega_n}{2} +
\frac{1-\omega_{n-1}}{2} 2 +...+ \frac{1-\omega_1}{2} 2^{n-1} \end{displaymath}
and the corresponding sets $I_0 := \{ 1,...,2^n\},
I(\omega_1,...,\omega_n ):= \{ i(\omega_1 ,...,\omega_n )\}$,
\begin{displaymath} I(\omega_1,...,\omega_k ):=
\{ i(\omega_1,...,\omega_n): \omega_{k+1} =\pm 1,...,\omega _n =\pm 1\}
\hspace{1cm} \mbox{for} \hspace{1cm} k=1,...,n-1.\end{displaymath}
It is clear that
\begin{displaymath} I(\omega_1,...,\omega_{k-1})=I(\omega_1,..., \omega_{k-1},1)
\cup I(\omega_1,...,\omega_{k-1},-1) \hspace{1cm} \mbox{and} \hspace{1cm}
I_0 = I(1) \cup I(-1). \end{displaymath}
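For instance, in the case $n=2$ the above definition gives
\begin{displaymath} i(1,1)=1, \hspace{0.5cm} i(1,-1)=2, \hspace{0.5cm}
i(-1,1)=3, \hspace{0.5cm} i(-1,-1)=4, \end{displaymath}
such that $I(1)=\{ 1,2\}$, $I(-1)=\{ 3,4\}$, and, for example,
$I(1,-1)=\{ 2\}$.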
Our first lemma is technical.
\begin{lemma} Let $\{ M_k\} _0^n$ be a Walsh-Paley-martingale in $X$ and let
$x_k := M_n(i^{-1} (k))\in X$ for \linebreak
$k=1,...,2^n$. Then, for all $\omega \in
\{ -1,1\} ^n$ and $1\le k\le n$ , there exist natural numbers \linebreak
$1\le r_0\le s_0<r_1 \le s_1 <...<r_k \le s_k \le 2^n$
with
\begin{displaymath} |M_0(\omega )\wedge ...\wedge M_k(\omega )|=\frac{1}{2^k} \left | \frac
{x_{r_0}+...+x_{s_0}}{s_0 -r_0 +1} \wedge ...\wedge \frac
{x_{r_k}+...+x_{s_k}}{s_k -r_k +1}\right | . \end{displaymath} \end{lemma}
$Proof.$ Let us fix $\omega \in \{ -1,1\} ^n.$ Since
$M_l(\omega _1,...,\omega _l) = \frac{1}{2^{n-l}} \sum_
{i\in I(\omega _1,...,\omega _l)} x_i$
we have, for $l=0,...,n-1$,
\begin{displaymath} M_l(\omega _1,...,\omega _l) - \frac{1}{2} M_{l+1}
(\omega _1,...,\omega _{l+1}) =\frac{1}{2^{n-l}} \sum_
{I(\omega _1,...,\omega _l ,-\omega _{l+1})} x_i =
\frac{1}{2\# I(\omega _1,...,\omega _l,
-\omega _{l+1})}\sum_{I(\omega _1,...,\omega _l ,
-\omega _{l+1})} x_i . \end{displaymath}
It is clear that $I(-\omega _1),
I(\omega _1,-\omega _2),
I(\omega _1,\omega _2,-\omega _3),...,
I(\omega _1,...,\omega _{k-1},-\omega _k),
I(\omega _1,...,\omega _k)$ are disjoint, such that we have
\begin{eqnarray*}
|M_0(\omega)\wedge ...\wedge M_k(\omega )| & = &
\left| \left( M_0(\omega)-\frac{1}{2} M_1(\omega) \right) \wedge ...
\wedge \left( M_{k-1}(\omega ) -\frac{1}{2} M_k(\omega ) \right)
\wedge M_k(\omega ) \right| \\ & = &
\frac{1}{2^k} \left | \frac{x_{r_0}+...+x_{s_0}}{s_0 -r_0 +1}
\wedge ...\wedge \frac{x_{r_k}+...+x_{s_k}}{s_k -r_k +1} \right |
\end{eqnarray*}
after some rearrangement. $\Box$
The second lemma we need is a special case of \cite{G}(Thm.1.1).
\begin{lemma} Let $u \in {\cal L} (l_2^n ,X)$ and let $\{ e_1 ,...,e_n \}$
be the unit vector basis of $l_2^n$. Then
\begin{displaymath} |ue_1\wedge ...\wedge ue_n|_X \le \left ( \frac{1}{n!} \right ) ^{1/2}
\pi_2(u)^n ,\end{displaymath}
where $\pi_2(u)$ is the absolutely 2-summing norm of $u$.
\end{lemma}
Now we apply Lemma 3.1 to a special Walsh-Paley-martingale $\{ M_k^1\}_0^n$
with values in $l_1^{{2^n}}$ whose differences $dM_k^1(\omega )$ are closely
related to a discrete version of the Haar functions from $L_1 [0,1]$.
For fixed $n$ this martingale is given by
\begin{displaymath} M_n^1(\omega _1,...,\omega _n) := e_{i(\omega _1,...,\omega _n)}
\hspace{1cm} \mbox{and} \hspace{1cm}
M_k^1 := {\rm I\! E} (M_n^1 |{\cal F}_k ), \end{displaymath}
where $\{ e_1,...,e_{2^n}\} $ stands for the unit vector basis of
$l_1^{{2^n}}.$
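For instance, in the case $n=2$ this definition yields
\begin{displaymath} M_0^1 = \frac{1}{4} (e_1 +e_2 +e_3 +e_4 ), \hspace{0.5cm}
dM_1^1 (\omega ) = \frac{\omega_1}{4} (e_1 +e_2 -e_3 -e_4 ), \hspace{0.5cm}
dM_2^1 (\omega ) = \frac{\omega_2}{2} \left( e_{i(\omega_1 ,1)} -
e_{i(\omega_1 ,-1)} \right), \end{displaymath}
such that $\| M_k^1 (\omega )\| = \| dM_k^1 (\omega )\| =1$ for all $k$ and
$\omega$.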
\begin{lemma} Let $n\ge 1$ be fixed. Then \\
(1) $\| M_k^1 (\omega )\| = \| dM_k^1 (\omega )\| = 1 $ \hspace{.1cm} for
$k=0,...,n$ and all $\omega \in \Omega_n $,\\
(2) $\inf_{\omega } {\rm I\! E}_{\varepsilon } \| \sum_1^n \varepsilon_k dM_k^1(\omega ) \|
\ge \alpha n$ \hspace{.1cm} for some $\alpha >0$ independent of $n$.
\end{lemma}
$Proof.$ (1) is trivial. We consider (2). Lemma 3.1 implies
\begin{displaymath} \frac{1}{2^n} |f_1\wedge ...\wedge f_{n+1}|_{l_1^{n+1}} =
|M_0^1(\omega)\wedge ...\wedge M_n^1(\omega )|_{l_1^{2^n}} \end{displaymath}
for all $\omega \in \{ -1,1\} ^n$, where $\{ f_1,...,f_{n+1}\} $ denotes
the unit vector basis of $l_1^{n+1}$. We can continue to
\begin{eqnarray*}
\frac{1}{2} |f_1\wedge ...\wedge f_{n+1}|_{l_1^{n+1}}^{1/n} & = &
|M_0^1(\omega)\wedge dM_1^1(\omega)\wedge ...\wedge dM_n^1(\omega )|
_{l_1^{2^n}}^{1/n} \\ & \le &
\left( (n+1)\| M_0^1(\omega) \| \hspace{0.3em} |dM_1^1(\omega)\wedge...
\wedge dM_n^1(\omega )| \right) ^{1/n} \\ & \le &
c |dM_1^1(\omega)\wedge...\wedge dM_n^1(\omega )| ^{1/n}.
\end{eqnarray*}
If we define the operator $u_{\omega } : l_2^n \longrightarrow l_1^{2^n}$ by
$u_{\omega } ((\xi_1 ,...,\xi_n )) := \sum_1^n \xi_i dM_i^1 (\omega )$ and use
Lemma 3.2, then we get
\begin{displaymath} |dM_1^1(\omega)\wedge...\wedge dM_n^1(\omega )| ^{1/n} \le
\left( \frac{1}{n!} \right) ^{1/2n} \pi _2 (u_{\omega }). \end{displaymath}
Since the $l_1^{2^n}$ are uniformly of cotype 2 there is a constant $c_1 >0$,
independent of $n$, such that
\begin{displaymath} \pi _2 (u_{\omega }) \le c_1 {\rm I\! E}_{\varepsilon } \| \sum _1^n \varepsilon _k
dM_k^1(\omega ) \| \end{displaymath}
(see \cite{T-J},\cite{M-P}). Summarizing the above estimates yields
\begin{displaymath} |f_1\wedge ...\wedge f_{n+1}|_{l_1^{n+1}}^{1/n} \le 2ce^{1/2} n^{-1/2}
c_1 {\rm I\! E}_{\varepsilon } \| \sum _1^n \varepsilon _k dM_k^1(\omega ) \| \le c_2
n^{-1/2} {\rm I\! E}_{\varepsilon } \| \sum _1^n \varepsilon _k dM_k^1(\omega ) \|. \end{displaymath}
The known estimate
$|f_1\wedge ...\wedge f_{n+1}|^{1/{n+1}} \ge \frac{1}{c_3}(n+1)^{1/2} $
concludes the proof (see, for instance, \cite{G}(Ex.2.7)).\\
\hspace*{\fill} $\Box$ \\
Finally, we need the trivial
\begin{lemma} Let $T \in {\cal L }(X,Y)$. Then $RU\!M\!D_n ( T) \le 2 n^{1/2}
T_2^n (T)$. \end{lemma}
$Proof.$ Using the type 2 inequality for each $\omega \in \Omega_n $ and
integrating yields, for a martingale $\{ M_k \} _0^n$,
\begin{displaymath}
\left( {\rm I\! E} _{\varepsilon ,\omega }\| \sum_1^n \varepsilon_k T dM_k (\omega )\| ^2
\right) ^{1/2} \le
T_2^n(T) \left( \sum _1^n \| dM_k \|_{L_2^X}^2 \right) ^{1/2} \le
2 n^{1/2} T_2^n (T) \| M_n \| _{L_2^X} .\Box
\end{displaymath}
Now we can prove
\begin{theorem} There exists an absolute constant $\alpha >0$ such that for
any Banach space $X$ the following assertions are equivalent.\\
(1) $X$ is not K-convex.\\
(2) For all $\theta >0$ and all $n=1,2,...$ there is a Walsh-Paley martingale
$\{ M_k\}_0^n$ with values in $B_X$,
\begin{displaymath}
\inf_{1\le k\le n} \inf_{\omega } \| dM_k (\omega )\| \ge 1-\theta
\hspace{.1cm} \mbox{and} \hspace{.1cm}
\inf_{\omega } {\rm I\! E}_{\varepsilon } \| \sum_1^n \varepsilon_k dM_k(\omega ) \| \ge \alpha n.
\end{displaymath}
(3) $RU\!M\!D_n ( X)\ge cn$ for $n=1,2,...$ and some constant $c=c(X)>0$.
\end{theorem}
$Proof.$ Taking $\alpha >0$ from Lemma 3.3 the implication
(1) $\Rightarrow$ (2) follows.
(2) $\Rightarrow$ (3) is trivial.\linebreak
(3) $\Rightarrow$ (1): Assuming $X$ to be K-convex, the space $X$ must be of
type $p$ for some $p>1$. Consequently, Lemma 3.4 implies
\begin{displaymath} RU\!M\!D_n ( X)\le 2 n^{1/2} T_2^n(X) \le 2 n^{1/2} n^{1/p - 1/2}
T_p(X) \le 2 n^{1/p} T_p(X). \Box \end{displaymath}
$Remark.$ One can also deduce (3) $\Rightarrow$ (1) from \cite{El} and
\cite{Pa} in a more direct way (we would obtain that $L_2^X (\{ -1,1\}^{{\rm I\! N}} )$
is not K-convex).
\section{Superreflexivity}
\setcounter{lemma}{0}
A Banach space $X$ is superreflexive if each Banach space, which is
finitely representable in $X$, is reflexive. We will see that
$RU\!M\!D_n ( X)\ge cn^{1/2} $ whenever $X$ is not superreflexive and
that the exponent $\frac{1}{2}$ is the best possible in general. This improves
an observation of Aldous and Garling (proofs of \cite{Ga}(Thm.3.2) and
\cite{Al}(Prop.2)) which says that $RU\!M\!D_n ( X) \ge c n^{1/s}$ in the case that
$X$ is of cotype $s$ ($2 \le s < \infty $) and not superreflexive.
We make use of the summation operators
\begin{displaymath} \sigma _n : l_1^{2^n} \longrightarrow l_{\infty }^{2^n} \hspace{1cm}
\mbox{and} \hspace{1cm} \sigma : l_1 \longrightarrow l_{\infty }
\hspace{1cm} \mbox{with} \hspace{1cm}
\{ \xi _k \}_k \longrightarrow \left\{ \sum _{l=1}^k \xi _l \right\} _k ,\end{displaymath}
as well as of
\begin{displaymath} \Phi :C[0,1]' \longrightarrow l_{\infty } ([0,1]) \hspace{1cm} \mbox{with}
\hspace{1cm} \mu \longrightarrow \left\{ t \rightarrow \mu ([0,t]) \right\} .
\end{displaymath}
The operators $\sigma_n$ are an important tool in our situation. Assuming that
$X$ is not superreflexive, according to \cite{J1} for all $n=1,2,...$ there are
factorizations $\sigma_n = B_n A_n $ with $A_n :l_1^{2^n} \rightarrow X$,
$B_n :X \rightarrow l_{\infty }^{2^n}$ and
$\sup _n \| A_n \| \| B_n \| \le 1+\theta\hspace{.1cm} (\theta >0)$. It turns out that
the image-martingale $\{ M_k \}_0^n \subset L_2^X$ of $\{ M_k^1 \}$
($n$ is fixed, $\{ M_k^1 \}$ is defined in the previous section), which is
given by $M_k (\omega ):=A_n M_k^1 (\omega ) \hspace{.3cm} (k=1,...,n)$,
possesses a large random unconditional constant. To see this we set
\begin{displaymath} M_k^{\infty }(\omega ) := \sigma _n M_k^1(\omega ) \hspace{2cm}
\left( \omega \in \Omega_n , k=0,...,n \right) \end{displaymath}
and obtain a martingale $\{ M_k^{\infty }\} _0^n $ with values in
$l_{\infty }^{2^n}$. For $k=1,...,n$ it is easy to check that
\begin{displaymath} dM_k^{\infty }(\omega _1 ,..., \omega _k ) = \omega _k 2^{k-n-1}
(0,...,0,1,2,3,...,2^{n-k},2^{n-k} -1,...,3,2,1,0,0,...,0) \end{displaymath}
where the block $(1,2,3,...,2^{n-k})$ is concentrated on
$I(\omega _1 ,..., \omega _{k-1} ,1)$ and the block $(2^{n-k} -1,...,3,2,1,0)$
is concentrated on $I(\omega _1 ,..., \omega _{k-1} ,-1)$, that is, the vectors
$|dM_k^{\infty} (\omega )|$ correspond to a discrete Schauder
system in $l_{\infty}^{2^n}$. Furthermore, we have
\begin{lemma} Let $n \ge 2$ be a natural number and let $\{ e_i \}$ be the
standard basis of $l_1^{2^n}$. Then there exists a map
$e:\{ -1,1\} ^n \longrightarrow \{ e_1 ,...,e_{2^n} \} \subset l_1^{2^n}$
such that
\begin{displaymath} \mu_n \left\{ \omega : |<dM_k^{\infty }(\omega ),e(\omega )>|
\ge \frac{1}{4} \right\} \ge \frac{1}{2} \hspace{2cm} \mbox{for} \hspace{1cm}
k=1,...,n. \end{displaymath}
\end{lemma}
$Proof.$ First we observe that
\begin{displaymath} \inf \left\{ |<dM_k^{\infty } (\omega _1 ,...,\omega _k ),e_i >| :
i \in I(\omega _1 ,..., \omega _k ,- \omega _k ) \right\} = 2^{k-n-1}
\min (2^{n-k-1} +1, 2^{n-k} - 2^{n-k-1} ) \ge \frac{1}{4} \end{displaymath}
for $1 \le k < n$. Then we use the fact that
\begin{displaymath} \# \left( \bigcap _1^n {\rm supp} \hspace{0.2cm} dM_k^{\infty }(\omega ) \right)=1
\hspace{1cm} \mbox{for all} \hspace{1cm} \omega \in \{ -1,1 \} ^n \end{displaymath}
to define $e(\omega )$ as the $i$-th unit vector, where
\begin{displaymath} \{ i \} = \bigcap _1^n {\rm supp} \hspace{0.2cm} dM_k^{\infty }(\omega ) \subseteq
\bigcap _2^n I(\omega _1 ,..., \omega _{k-1} ). \end{displaymath}
For $1 \le k \le n-2$ we obtain
\begin{eqnarray*}
\mu_n \left\{ \omega : |<dM_k^{\infty }(\omega ),e(\omega )>| \ge
\frac{1}{4} \right\}
& \ge & \mu_n \left\{ (\omega _1 ,..., \omega _n ) : |<dM_k^{\infty }
(\omega _1 ,..., \omega _k ),e(\omega _1 ,..., \omega _n )>| \ge \frac{1}{4} ,
\hspace{0.3cm} \omega _{k+1} = -\omega _k \right\} \\
& \ge & \mu_n \left\{ (\omega _1 ,..., \omega _n ) :
\inf \left\{ |<dM_k^{\infty }(\omega _1 ,..., \omega _k ),e_i>| : i \in
I(\omega _1 ,..., \omega _{k+1} ) \right\} \ge \frac{1}{4} , \hspace{0.3cm}
\omega _{k+1} = -\omega _k \right\} \\
& = & \mu_n \{ \omega _{k+1} = - \omega _k \} = \frac{1}{2} .
\end{eqnarray*}
Since \hspace{.1cm} $|<dM_k^{\infty }(\omega ),e(\omega )>| \ge \frac{1}{4} $ \hspace{.1cm} for all
$\omega$ in the cases $k=n-1$ and $k=n$, the proof is complete. $\Box $
We deduce
\begin{lemma} Let $n\ge 1$ be fixed. Then \\
(1) $ \| M_k^{\infty } (\omega )\| =1$ and $\| dM_l^{\infty } (\omega )\| =
\frac{1}{2}$ \hspace{.1cm} for $k=0,...,n$,\hspace{0.2cm} $l=1,...,n$, and all
$\omega \in \Omega_n $,\\
(2) $\mu_n \left\{ \omega :{\rm I\! E}_{\varepsilon }
\| \sum_1^n \varepsilon_k dM_k^{\infty } (\omega ) \|
\ge \alpha n^{1/2} \right\} >\beta$ \hspace{.1cm} for some $\alpha ,\beta >0$
independent of $n$.
\end{lemma}
$Remark.$ An inequality
${\rm I\! E}_{\varepsilon }\| \sum_1^n \varepsilon_k dM_k^{\infty } (\omega ) \| \ge \alpha n^{1/2}$
cannot hold for all $\omega \in \Omega_n $ since, for example,
\begin{displaymath} \| \sum_1^n \varepsilon_k dM_k^{\infty} (1,1,...,1)\| \le
\| \sum_1^n dM_k^{\infty} (1,1,...,1)\| \le
\| \sigma_n \| \| \sum_1^n dM_k^1 (1,1,...,1)\| \le 2. \end{displaymath}
$Proof$ of Lemma 4.2. Assertion (1) is trivial. We prove (2). For $t>0$ we
consider
\begin{eqnarray*}
\mu_n \left\{ \omega : {\rm I\! E} _{\varepsilon } \| \sum_1^n \varepsilon_k dM_k^{\infty }
(\omega ) \|_{l_{\infty }^{2^n }} > t n^{1/2} \right\} & \ge &
\mu_n \left\{ \omega : \| {\rm I\! E}_{\varepsilon } | \sum_1^n \varepsilon_k
dM_k^{\infty } (\omega ) |\|_{l_{\infty }^{2^n }} > t n^{1/2} \right\} \\
& \ge & \mu_n \left\{ \omega : \frac{1}{c_o} \| \left( \sum_1^n |
dM_k^{\infty } (\omega )|^2 \right) ^{1/2} \|_{l_{\infty }^{2^n }} > t n^{1/2}
\right\} \\
& = & \mu_n \left\{ \omega : \| \sum_1^n | dM_k^{\infty }
(\omega )|^2 \|_{l_{\infty }^{2^n }} > c_o^2 t^2 n \right\}.
\end{eqnarray*}
Denoting the last expression by $p_t$, the previous lemma yields
\begin{eqnarray*}
p_t n + (1 - p_t) c_o^2 t^2 n
&\ge& {\rm I\! E}_{\omega } \| \sum_1^n |dM_k^{\infty }(\omega ) |^2 \| \\
&\ge& {\rm I\! E}_{\omega } <\sum_1^n |dM_k^{\infty }(\omega ) |^2 ,e(\omega )> \\
& = & {\rm I\! E}_{\omega } \sum_1^n |<dM_k^{\infty }(\omega ),e(\omega )>|^2 \\
&\ge& \sum_1^n \frac{1}{16} \mu \left \{ \omega :
|<dM_k^{\infty }(\omega ),e(\omega )>|^2 \ge \frac{1}{16} \right \} \\
&\ge& \frac{n}{32}
\end{eqnarray*}
such that $p_t \ge \frac{1/32 - c_o^2 t^2}{1 - c_o^2 t^2}$ for
$c_o^2 t^2 < 1$. $\Box$
Lemmas 3.3 and 4.2 imply
\begin{theorem} There are $\alpha ,\beta >0$ such that for all
non-superreflexive Banach spaces $X$, for all $\theta >0$, and for all
$n=1,2,...$ there exists a Walsh-Paley martingale $\{ M_k\}_0^n$ with values
in $B_X$,
\begin{displaymath} \inf_{1\le k\le n} \inf_{\omega } \| dM_k (\omega )\| \ge
\frac{1}{2(1+\theta )}, \hspace{.1cm} \mbox{and} \hspace{.1cm}
\mu_n \left\{ \omega :{\rm I\! E}_{\varepsilon } \| \sum_1^n \varepsilon_k dM_k(\omega ) \|
\ge \alpha n^{1/2} \right\} >\beta.\end{displaymath}
\end{theorem}
$Proof.$ We choose factorizations $\sigma_n = B_n A_n$ with
$A_n :l_1^{2^n} \rightarrow X$ , $B_n :X \rightarrow l_{\infty }^{2^n}$,
$\| A_n \| \le 1$ and $\| B_n \| \le 1+\min (1,\theta )$
(see \cite{J1}(Thm.4)). Defining
$M_k(\omega ):= A_n M_k^1(\omega )\in X$ we obtain
$\sup_{0\le k\le n,\omega \in \Omega_n } \| M_k (\omega ) \| \le 1$ from Lemma 3.3
as well as
$\inf_{1\le k\le n,\omega \in \Omega_n } \| dM_k (\omega ) \| \ge
\frac{1}{2(1+\theta )}$ and
\begin{eqnarray*}
\mu_n \left\{ \omega : {\rm I\! E} _{\varepsilon } \| \sum_1^n \varepsilon_k dM_k(\omega )
\|_X > \frac{\alpha }{2} n^{1/2} \right\}
& \ge & \mu_n \left\{ \omega : {\rm I\! E} _{\varepsilon } \| \sum_1^n \varepsilon_k
dM_k(\omega )\|_X > \frac{\alpha }{\| B_n \| } n^{1/2} \right\} \\
& \ge & \mu_n \left\{ \omega : {\rm I\! E} _{\varepsilon } \| \sum_1^n \varepsilon_k dM_k^{\infty }
(\omega )\| _{l_{\infty }^{2^n}} > \alpha n^{1/2} \right\} \ge \beta
\end{eqnarray*}
according to Lemma 4.2. $\Box$
For Banach spaces of type 2 we get
\begin{theorem} For any Banach space $X$ of type 2 the following assertions
are equivalent.\\
(1) $X$ is not superreflexive.\\
(2) $\frac{1}{c} n^{1/2} \le RU\!M\!D_n ( X) \le c n^{1/2}$ \hspace{.1cm} for $n=1,2,...$
and some $c>0$.\\
(3) $\frac{1}{c'} n^{1/2} \le RU\!M\!D_n ( X)$ \hspace{.1cm} for $n=1,2,...$ and some
$c'>0$.\\
\end{theorem}
$Proof.$ $(1) \Rightarrow (2)$ follows from Theorem 4.3 and Lemma 3.4, and
$(2) \Rightarrow (3)$ is trivial.
$(3) \Rightarrow (1)$. We assume $X$ to be superreflexive and find
(\cite{J2}, cf. \cite{P1}(Thm.1.2, Prop.1.2)) $\gamma >0$ and $2\le s<\infty$
such that
\begin{displaymath} \left( \sum_{k\ge 0} \| dM_k\|_{L_2^X }^s \right)^{1/s} \le \gamma
\sup_k \| M_k\|_{L_2^X } \end{displaymath}
for all martingales in $X$. This martingale cotype implies
\begin{eqnarray*}
\left( {\rm I\! E}_{\varepsilon ,\omega } \| \sum_1^n \varepsilon_k dM_k (\omega )\|^2
\right)^{1/2}
& \le & T_2(X) \left( {\rm I\! E}_{\omega } \sum_1^n \| dM_k (\omega )\|^2
\right)^{1/2} \\
& \le & T_2(X) n^{1/2 - 1/s} \gamma \| M_n - M_0\|_{L_2^X}
\end{eqnarray*}
which contradicts $RU\!M\!D_n ( X) \ge \frac{1}{c'} n^{1/2}$.$\Box$
$Remark.$ Corollary 5.4 will demonstrate that the asymptotic
behaviour of $RU\!M\!D_n ( X)$ cannot characterize the superreflexivity of $X$
in the case that $X$ is of type $p$ with $p<2$. Namely, according to
Corollary 5.4, for all $1<p<2<q<\infty$ there is a superreflexive Banach
space $X$ of type $p$ and of cotype $q$ with
$RU\!M\!D_n ( X) \asymp n^{\frac{1}{p} -\frac{1}{q}}$. On the
other hand, if $\frac{1}{p} -\frac{1}{q} \ge \frac{1}{2}$ then we can find
a non-superreflexive Banach space $Y$ such that
$RU\!M\!D_n ( Y) \asymp n^{\frac{1}{p} -\frac{1}{q}}$ (add a non-superreflexive
Banach space of type 2 to $X$).
Finally, we deduce the random unconditional constants of the
operators $\sigma_n$, $\sigma$, and $\Phi$ defined at the beginning of this
section. To this end we need the type 2 property of these operators. From
\cite{J1} and \cite{J3} or \cite{P-X} as well as \cite{X} we know the much
stronger result that $\sigma$ and the usual summation operator from
We want to present a very simple argument for the type 2 property
of the operator $\Phi$ which can be extended to some other ``integral
operators'' from $C[0,1]'$ into $l_{\infty } ([0,1])$.
\begin{lemma} The operator $\Phi :C[0,1]' \rightarrow l_{\infty } ([0,1])$ is
of type 2 with $T_2(\Phi ) \le 2$.
\end{lemma}
$Proof.$ First we deduce the type 2 inequality for Dirac measures. Let
$\lambda_1 ,...,\lambda_n \in {\rm I\! K}$ and $t_1 ,...,t_n \in [0,1]$, where we
assume $0\le t_{k_1} = ... = t_{l_1} < t_{k_2} = ... = t_{l_2} < ... <
t_{k_M} = ... = t_{l_M} \le 1$, and let $\delta_{t_1} ,...,\delta_{t_n}$
denote the corresponding Dirac measures. Then, using Doob's inequality, we obtain
\begin{eqnarray*}
\left ( {\rm I\! E}_{\varepsilon } \| \sum_1^n \Phi \varepsilon_j \lambda_j \delta_{t_j} \|^2
\right ) ^{1/2}
& = & \left ( {\rm I\! E}_{\varepsilon } \sup_t | \sum_{i=1}^M \left ( \sum_{j=k_i}^{l_i} \varepsilon_j
\lambda_j \right ) \delta_{t_{k_i}} ([0,t]) |^2 \right ) ^{1/2} \\
& = & \left ( {\rm I\! E}_{\varepsilon } \sup_{1\le m\le M} | \sum_{i=1}^m \left (
\sum_{j=k_i}^{l_i} \varepsilon_j \lambda_j \right ) |^2 \right ) ^{1/2} \\
& \le & 2 \left ( {\rm I\! E}_{\varepsilon } | \sum_{i=1}^M \left ( \sum_{j=k_i}^{l_i} \varepsilon_j
\lambda_j \right ) |^2 \right ) ^{1/2} \\
& = & 2 \left ( \sum_1^n |\lambda_j |^2 \right ) ^{1/2}.
\end{eqnarray*}
Hence
\begin{displaymath} \left ( {\rm I\! E}_{\varepsilon } \| \sum_1^n \Phi \varepsilon_j \lambda_j \delta_{t_j} \|^2
\right ) ^{1/2} \le 2 \left ( \sum_1^n \| \lambda_j \delta_{t_j} \|^2 \right ) ^{1/2}. \end{displaymath}
In the next step for any $\mu \in C[0,1]'$ we find a sequence of point measures
(finite sums of Dirac-measures) $\{ \mu^m \}_{m=1}^{\infty} \subset C[0,1]'$
such that
$\sup_m \| \mu^m \| \le \| \mu \|$ and $\lim_m \mu^m ([0,t]) = \mu ([0,t])$
for all $t\in [0,1]$ (take, for example,
$\mu^m := \sum_{i=1}^{2^m} \delta_{\frac{i-1}{2^m}} \mu (I_i^m )$
with $I_1^m := [0,\frac{1}{2^m} ]$ and
$I_i^m := (\frac{i-1}{2^m} ,\frac{i}{2^m} ]$ for $i>1$). Now, assuming
$\mu_1 ,...,\mu_n \in C[0,1]'$ we choose for each $\mu_j$ a sequence
$\{ \mu_j^m \}_{m=1}^{\infty}$ of point measures in the above way and obtain
\begin{eqnarray*}
\left ( {\rm I\! E}_{\varepsilon } \| \sum_1^n \Phi \varepsilon_j \mu_j \|^2 \right ) ^{1/2}
& = & \left ( {\rm I\! E}_{\varepsilon } \sup_t |\sum_1^n \varepsilon_j \mu_j ([0,t]) |^2 \right ) ^{1/2}\\
& = & \left ( {\rm I\! E}_{\varepsilon } \sup_t \lim_m |\sum_1^n \varepsilon_j \mu_j^m ([0,t]) |^2
\right ) ^{1/2}\\
& \le & \limsup_m \left ( {\rm I\! E}_{\varepsilon } \sup_t |\sum_1^n \varepsilon_j \mu_j^m ([0,t])
|^2 \right ) ^{1/2}.
\end{eqnarray*}
Using the type 2 inequality for Dirac measures and an extreme point argument
we may continue to
\begin{displaymath} \left ( {\rm I\! E}_{\varepsilon } \| \sum_1^n \Phi \varepsilon_j \mu_j \|^2 \right ) ^{1/2} \le
2 \limsup_m \left ( \sum_1^n \| \mu_j^m \|^2 \right ) ^{1/2} \le
2 \left ( \sum_1^n \| \mu_j \|^2 \right ) ^{1/2}. \Box \end{displaymath}
As a consequence we obtain
\begin{theorem} There is an absolute constant $c>0$ such that for all
$n=1,2,...$
\begin{displaymath} \frac{1}{c} n^{1/2} \le RU\!M\!D_n ( \sigma_n ) \le RU\!M\!D_n ( \sigma ) \le
RU\!M\!D_n ( \Phi ) \le c n^{1/2}. \end{displaymath}
\end{theorem}
$Proof.$ $\frac{1}{c} n^{1/2} \le RU\!M\!D_n ( \sigma_n )$ is a consequence of
Lemmas 3.3 and 4.2. $RU\!M\!D_n ( \sigma_n ) \le RU\!M\!D_n ( \sigma ) \le RU\!M\!D_n ( \Phi )$
is trivial. Finally, Lemma 4.5 and Lemma 3.4 imply $RU\!M\!D_n ( \Phi ) \le
4 n^{1/2} . \Box$
\begin{cor} There is an absolute constant $c>0$ such that for all
$n=1,2,...$
\begin{displaymath} \frac{1}{c} n^{1/2} \le
\left ( {\rm I\! E}_{\varepsilon, \omega} \| \sum_1^n \varepsilon_k dM_k^{\infty } (\omega ) \|^2
\right ) ^{1/2} =
\left ( {\rm I\! E}_{\varepsilon, \omega} \| \sum_1^n \varepsilon_k |dM_k^{\infty } (\omega )|
\hspace{.2cm} \|^2 \right ) ^{1/2} \le c n^{1/2}. \end{displaymath}
\end{cor}
$Proof.$ This immediately follows from Lemma 4.2, Theorem 4.6, and
$dM_k^{\infty } (\omega )= \omega_k |dM_k^{\infty } (\omega )|. \Box$
\section{An example}
\setcounter{lemma}{0}
We consider an example of Bourgain to demonstrate that for all
$0\le \alpha <1$ there is a superreflexive Banach space $X$ with
$RU\!M\!D_n ( X) \asymp n^\alpha$. Moreover, the general principle of this
construction allows us to show that $RU\!M\!D_n^1 ({\rm I\! K} ) \asymp n$ mentioned
in section 2 of this paper. \\
The definitions concerning upper p- and lower-q estimates of a
Banach space as well as the modulus of convexity and smoothness,
which we will use here, can be found in \cite{L-T}.
Let us start with a Banach space $X$ and let us consider the function space
$X_{\Omega_n } := \{ f:\Omega_n \rightarrow X \}$ equipped with some norm
$\| \hspace{.1cm} \| = \| \hspace{.1cm} \|_{X_{\Omega_n }}$. For a fixed $f\in X_{\Omega_n }$ we define
\begin{displaymath} M^f : \Omega_n \rightarrow X_{\Omega_n } \hspace{1cm} \mbox{by} \hspace{1cm}
M^f (\omega ) := f_{\omega } \end{displaymath}
where $f_{\omega } (\omega '):=f(\omega \omega ')$ \hspace{.3cm}
($\omega \omega ' := (\omega_1 \omega_1 ' ,...,\omega_n \omega_n ' )$ for
$\omega =(\omega_1 ,...,\omega_n )$ and $\omega ' =(\omega_1 ' ,...,
\omega_n ' )$). Setting $M_k^f := {\rm I\! E} (M_n^f |{\cal F} _k )$ we obtain a
martingale $\{ M_k^f \}_{k=0}^n$ with values in $X_{\Omega_n }$ generated by the
function $f \in X_{\Omega_n }$. Furthermore, putting $f_n := f$,
\begin{displaymath} f_k (\omega ) := {\rm I\! E} (f | {\cal F} _k )(\omega ) = \frac{1}{2^{n-k}}
\sum_{\omega_{k+1} ' = \pm 1} ... \sum_{\omega_n ' = \pm 1}
f_n (\omega_1 ,...,\omega_k ,\omega_{k+1} ',...,\omega_n '), \end{displaymath}
$df_k := f_k - f_{k-1}$ for $k\ge 1$, and $df_0 = f_0$, we obtain
\begin{displaymath} \left ( \sum_0^n \alpha_k df_k \right ) _{\omega } =
\sum_0^n \alpha_k dM_k^f (\omega ) \hspace{0.5cm} \mbox{for all} \hspace{0.5cm}
\omega \in \Omega_n \hspace{0.5cm} \mbox{and all} \hspace{0.5cm}
\alpha_0 ,..., \alpha_n \in {\rm I\! K} .\end{displaymath}
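For instance, in the case $n=1$ one has $f_0 = \frac{1}{2} (f(1)+f(-1))$,
$M_0^f (\omega ) = f_0$ for all $\omega$, and
\begin{displaymath} dM_1^f (\omega ) = f_{\omega } - f_0 = (df_1 )_{\omega },\end{displaymath}
which illustrates the above identity.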
The following lemma is now evident.
\begin{lemma} Let $f\in X_{\Omega_n }$ and let $\{ M_k^f \}_0^n$ be the
corresponding martingale. If $\| \hspace{.1cm} \| = \| \hspace{.1cm} \|_{X_{\Omega_n }}$ is
translation invariant then
$\| \sum_0^n \alpha_k dM_k^f (\omega ) \| = \| \sum_0^n \alpha_k df_k \|$
for all $\omega \in \Omega_n $ and all $\alpha_0 ,...,\alpha_n \in {\rm I\! K}$.
\end{lemma}
First we deduce
\begin{cor} There exists $c>0$ such that $\frac{n}{c} \le RU\!M\!D_n^1( {\rm I\! K} ) \le
RU\!M\!D_n^1( X) \le cn$ for all $n=1,2,...$ and all Banach spaces $X$.
\end{cor}
$Proof.$ We consider ${\rm I\! K}_{\Omega_n }$ with $\| f\| := \sum_{\omega } |f(\omega )|$
such that ${\rm I\! K}_{\Omega_n } = l_1 (\Omega_n )$. Defining $f \in l_1 (\Omega_n )$ as
$f := \chi _{\{ (1,...,1)\} }$ it follows that $f_{\omega } =
\chi_{\{ \omega \} }$. It is clear that the isometry $I:l_1 (\Omega_n ) \rightarrow
l_1^{2^n}$ with $If_{\omega } := e_{i(\omega )}$ ($e_{i(\omega )}$ is the
$i(\omega )$-th unit vector where $i(\omega )$ is defined as in section 3 of
this paper) transforms the martingale $\{ M_k^f\}$ into the martingale
$\{ M_k^1\}$ from section 3 by $IM_k^f(\omega ) = M_k^1(\omega )$ for all
$\omega \in \Omega_n $. Combining Lemma 5.1 and Lemma 3.3 yields
\begin{displaymath} \inf_{\omega } {\rm I\! E}_{\varepsilon } \| \sum_1^n \varepsilon_k df_k \|_{l_1 (\Omega_n )}
\ge {\alpha}n \hspace{1cm} \mbox{and} \hspace{1cm} \|
f-f_0\|_{l_1 (\Omega_n )} \le 2.
\end{displaymath}
Consequently, $RU\!M\!D_n^1( X) \ge RU\!M\!D_n^1( {\rm I\! K} ) \ge \frac{\alpha}{2} n$. On the other
hand we have $RU\!M\!D_n^1( X) \le 2n$ in general.$\Box$
Now we treat Bourgain's example \cite{Bou}.
\begin{theorem} For all $1<p<q<\infty$ and $n \in {\rm I\! N}$ there exists a
function lattice $X_{pq}^{2n} = {\rm I\! K}_{\Omega_{2n}}$ such that\\
(1) $X_{pq}^{2n}$ has an upper $p$- and a lower $q$-estimate with the
constant 1,\\
(2) there exists a Walsh-Paley martingale $\{ M_k\}_0^{2n}$ with values
in $B_{X_{pq}^{2n}}$ and
\begin{displaymath} \inf_{\omega } {\rm I\! E}_{\varepsilon } \| \sum_1^{2n} \varepsilon_k dM_k (\omega )\|
\ge c(2n)^{\frac{1}{p} -\frac{1}{q} } \end{displaymath}
where $c>0$ is an absolute constant independent of $p$, $q$, and $n$.
\end{theorem}
$Proof.$ In \cite{Bou} (Lemma 3) it is shown that there is a lattice norm
$\| \hspace{.1cm} \| = \| \hspace{.1cm} \|_{{\rm I\! K}_{\Omega_{2n}}}$ on ${\rm I\! K}_{\Omega_{2n}}$ which satisfies (1),
such that there exists a function $\phi \in {\rm I\! K}_{\Omega_{2n}}$ with
\begin{displaymath} \| \phi \| \le \varepsilon^{1-\frac{p}{q} } \hspace{0.5cm}
(\varepsilon = (2n)^{-\frac{1}{p} }) \hspace{1cm} \mbox{and} \hspace{1cm}
\| \left ( \sum_0^{2n} |d\phi_k |^2 \right ) ^{1/2} \| \ge \frac{1}{2} \end{displaymath}
\cite{Bou} (Lemma 4 and the remarks below it; $\varepsilon = (2n)^{-1/p}$ is taken from
the proof of Lemma 4). Since $\| \hspace{.1cm} \|$ is translation invariant Lemma 5.1
implies
\begin{displaymath} \| M_k^{\phi } (\omega ) \| = \| M_k^{\phi }\|_{L_2^{X_{pq}^{2n}}} \le
\| M_n^{\phi }\|_{L_2^{X_{pq}^{2n}}} = \| \phi \|
\le 4 (2n)^{\frac{1}{q} -\frac{1}{p}} \end{displaymath}
and
\begin{displaymath} {\rm I\! E}_{\varepsilon } \| \sum_0^{2n} \varepsilon_k dM_k^{\phi } (\omega ) \| =
{\rm I\! E}_{\varepsilon } \| \sum_0^{2n} \varepsilon_k d{\phi }_k \| \ge
\| {\rm I\! E}_{\varepsilon } | \sum_0^{2n} \varepsilon_k d{\phi }_k | \hspace{0.2cm} \| \ge
\frac{1}{A} \| \left ( \sum_0^{2n} |d{\phi }_k|^2 \right ) ^{1/2} \| \ge \frac{1}{2A}.
\Box \end{displaymath}
As usual, in the following the phrase ``the modulus of convexity (smoothness) of
$X$ is of power type $r$'' stands for ``there is some equivalent norm on $X$
with the modulus of convexity (smoothness) of power type $r$''. Now, similarly
to \cite{Bou} we apply a standard procedure to the above finite-dimensional
result.
\begin{cor} (1) For all $1<p<2<q<\infty$ there is a Banach space $X$
with the modulus of convexity of power type $q$ and the modulus of smoothness
of power type $p$, and a constant $c>0$ such that
\begin{displaymath} \frac{1}{c} n^{\frac{1}{p} - \frac{1}{q}} \le RU\!M\!D_n ( X) \le c
n^{\frac{1}{p} - \frac{1}{q}} \hspace{.5cm} \mbox{for} \hspace{0.5cm}
n=1,2,...\end{displaymath}
(2) There is a Banach space $X$ with the modulus of convexity of power type
$q$ and the modulus of smoothness of power type $p$ for all $1<p<2<q<\infty$,
and $RU\!M\!D_n ( X) \rightarrow_{n \rightarrow \infty} \infty$.
\end{cor}
$Proof.$ For sequences $P=\{ p_n \}$ and $Q=\{ q_n \}$ with
\begin{displaymath} 1<p_1 \le p_2 \le ... \le p_n \le ... <2<... \le q_n \le ... \le q_2
\le q_1 < \infty \end{displaymath}
we set $X_{PQ} := \bigoplus_2 X_{p_n q_n}^{2n}$ and obtain that
$X_{PQ}$ satisfies an upper $p_k$- and a lower $q_k$-estimate for all
$k$. According to a result of Figiel and Johnson (cf. \cite{L-T} (II.1.f.10))
$X_{PQ}$ has the modulus of convexity of power type $q_k$ and the modulus
of smoothness of power type $p_k$ for all $k=1,2,...$. Furthermore,
\cite{P1}(Theorem 2.2) implies
\begin{eqnarray*}
\sup_{\varepsilon_1 \pm 1,...,\varepsilon_n \pm 1} \| \sum_1^n \varepsilon_l dM_l \|_{L_2^X}
& \le & c_{p_k} \left ( \sum_1^n \| dM_l \|_{L_2^X}^{p_k} \right ) ^{1/{p_k}} \\
& \le & c_{p_k} n^{\frac{1}{p_k} - \frac{1}{q_k}}
\left ( \sum_1^n \| dM_l \|_{L_2^X}^{q_k} \right ) ^{1/{q_k}} \\
& \le & c_{p_k} d_{q_k} n^{\frac{1}{p_k} - \frac{1}{q_k}}
\| \sum_1^n dM_l \|_{L_2^X}
\end{eqnarray*}
for all martingales $\{ M_l\}$ with values in $X_{PQ}$ such that
$RU\!M\!D_n ( X_{PQ}) \le c_{p_k} d_{q_k} n^{\frac{1}{p_k} - \frac{1}{q_k}}$.
On the other hand, from Theorem 5.3 we obtain
\begin{displaymath} c(2n)^{\frac{1}{p_n} -\frac{1}{q_n}} \le RU\!M\!D_n ( X_{p_n q_n}^{2n} )\le
RU\!M\!D_n ( X_{PQ} ). \end{displaymath}
Now, setting $p_k \equiv p$ and $q_k \equiv q$ we obtain (1). Choosing the
sequences so that $p_k \rightarrow_{k \rightarrow \infty} 2$,
$q_k \rightarrow_{k \rightarrow \infty} 2$, and
$n^{\frac{1}{p_n} - \frac{1}{q_n}} \rightarrow_{n \rightarrow \infty} \infty$
assertion (2) follows.$\Box$
\end{document}
\begin{document}
\title{A note on scalable frames}
\author[Cahill, Chen
]{Jameson Cahill and Xuemei Chen}
\address{Department of Mathematics, University
of Missouri, Columbia, MO 65211-4100}
\thanks{The first author was supported by
NSF DMS 1008183, NSF ATD 1042701, and AFOSR DGE51: FA9550-11-1-0245}
\email{[email protected]}
\email{[email protected]}
\begin{abstract}
We study the problem of determining whether a given frame is scalable, and when it is, understanding the set of all possible scalings. We show that for most frames this is a relatively simple task in that the frame is either not scalable or is scalable in a unique way, and to find this scaling we just have to solve a linear system. We also provide some insight into the set of all scalings when there is not a unique scaling. In particular, we show that this set is a convex polytope whose vertices correspond to minimal scalings.
\end{abstract}
\maketitle
\section{Introduction}
A collection of vectors $\{\varphi_i\}_{i=1}^n\subseteq\mathbb{C}^d$ is called a \textit{frame} if there are positive numbers $A\leq B<\infty$ such that
$$
A\|x\|^2\leq\sum_{i=1}^n|\langle x,\varphi_i\rangle|^2\leq B\|x\|^2
$$
for every $x$ in $\mathbb{C}^d$. If we have $A=B$ we say the frame is \textit{tight}, and if $A=B=1$ we say it is a \textit{Parseval frame}. Given a frame $\{\varphi_i\}_{i=1}^n$ we define the \textit{frame operator} $S:\mathbb{C}^d\rightarrow\mathbb{C}^d$ by
\begin{equation}\label{FO}
Sx=\sum_{i=1}^n\langle x,\varphi_i\rangle \varphi_i.
\end{equation}
It is easy to see that $S$ is always positive, invertible, and Hermitian. Furthermore, $\{\varphi_i\}_{i=1}^n$ is a Parseval frame if and only if $S=I_d$ (the identity operator on $\mathbb{C}^d$). By a slight abuse of notation, given any set of vectors $\{\varphi_i\}_{i=1}^n\subseteq\mathbb{C}^d$ we will refer to the operator defined in \eqref{FO} as their frame operator, even if they do not form a frame (in this case $S$ will not be invertible, but it will still be positive). If we have that $\|\varphi_i\|=1$ for every $i=1,...,n$ we say it is a \textit{unit norm} frame. For more background on finite frames we refer to the book \cite{book}.
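As a quick numerical illustration of these definitions, one can form the frame operator of a finite frame and test the Parseval property directly. The following Python sketch is our own illustrative code (the function names and the Mercedes-Benz example are not taken from the references):

```python
import numpy as np

def frame_operator(Phi):
    """Frame operator S = sum_i phi_i phi_i^* for the columns phi_i of Phi (d x n)."""
    return Phi @ Phi.conj().T

def is_parseval(Phi, tol=1e-10):
    """A collection of vectors is a Parseval frame exactly when S = I_d."""
    d = Phi.shape[0]
    return np.linalg.norm(frame_operator(Phi) - np.eye(d)) <= tol

# Three equiangular unit vectors in R^2 (the Mercedes-Benz frame),
# scaled by sqrt(2/3), form a standard example of a Parseval frame.
angles = np.array([np.pi / 2, 7 * np.pi / 6, 11 * np.pi / 6])
mb = np.sqrt(2 / 3) * np.vstack([np.cos(angles), np.sin(angles)])
```

Here `is_parseval(mb)` returns `True`, while for an orthonormal basis the frame operator is $I_d$ trivially.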
A frame $\{\varphi_i\}_{i=1}^n$ is said to be \textit{scalable} if there exists a collection of scalars $\{v_i\}_{i=1}^n\subseteq\mathbb{C}$ so that $\{v_i\varphi_i\}_{i=1}^n$ is a Parseval frame. In this case, we call the vector $(|v_1|^2,...,|v_n|^2)\in\mathbb{R}_+^n$ a \textit{scaling} of $\{\varphi_i\}_{i=1}^n$. Scalable frames have been studied previously in \cite{scalable}.
We will work in the space $\mathbb{H}_{d\times d}$ of all $d\times d$ Hermitian matrices. Note that this is a \textbf{real} vector space of dimension $d^2$ (it is not a space over the complex numbers since a Hermitian matrix multiplied by a complex scalar is no longer Hermitian). The inner product on this space is given by $\langle S,T\rangle=\mathrm{Trace}(ST)$ and the norm induced by this inner product is the Frobenius norm, \textit{i.e.}, $\langle S,S\rangle=\|S\|_F^2$.
In what follows we will always consider frames in the complex space $\mathbb{C}^d$, however all of our results hold in the real space $\mathbb{R}^d$ as well. The only difference is in this case we must replace the space $\mathbb{H}_{d\times d}$ with its subspace $\mathbb{S}_{d\times d}$ consisting of all $d \times d$ real symmetric matrices, which is a real vector space of dimension $d(d+1)/2$. Thus, if one replaces $\mathbb{H}_{d\times d}$ with $\mathbb{S}_{d\times d}$ and $d^2$ with $d(d+1)/2$ all of our results will hold for frames $\{\varphi_i\}_{i=1}^n\subseteq\mathbb{R}^d$ and the same proofs will work.
\section{Scaling generic frames}
Consider the mapping from $\mathbb{C}^d$ to $\mathbb{H}_{d\times d}$ given by
$$
x\mapsto xx^*.
$$
Note that $xx^*$ is the rank one projection onto $\mathrm{span}\{x\}$ scaled by $\|x\|^2$; the matrix $xx^*$ is called the \textit{outer product} of $x$ with itself. Also note that if $x=\lambda y$ for $\lambda\in\mathbb{C}$ then $xx^*=(\lambda y)(\lambda y)^*=|\lambda|^2yy^*$.
Given a frame $\{\varphi_i\}_{i=1}^n$, in this setting we have that the frame operator is given by
$$
S=\sum_{i=1}^n\varphi_i\varphi_i^*,
$$
so $\{\varphi_i\}_{i=1}^n$ is scalable if and only if there exists a collection of \textbf{nonnegative} scalars $\{w_i\}_{i=1}^n$ so that
$$
\sum_{i=1}^nw_i\varphi_i\varphi_i^*=I_d,
$$
in this case $\{\sqrt{w_i}\varphi_i\}_{i=1}^n$ is a Parseval frame, and the vector $(w_1,...,w_n)\in\mathbb{R}^n_+$ is the scaling.
Before stating our first theorem we need one more definition. A subset $Q\subseteq\mathbb{R}^n$ is called \textit{generic} if there exists a nonzero polynomial $p(x_1,...,x_n)$ such that $Q^c=\{(x_1,...,x_n)\in\mathbb{R}^n:p(x_1,...,x_n)=0\}$. It is a standard fact that generic sets are open, dense, and of full measure. When we talk about a generic set in $\mathbb{C}^d$ we mean that it is generic when we identify $\mathbb{C}^d$ with $\mathbb{R}^{2d}$.
\begin{theorem}\label{generic}
For a generic choice of vectors $\{\varphi_i\}_{i=1}^{d^2}\subseteq\mathbb{C}^d$ we have that $\mathrm{span}\{\varphi_i\varphi_i^*\}_{i=1}^{d^2}=\mathbb{H}_{d\times d}$.
\end{theorem}
\begin{proof}
First let $\{T_i\}_{i=1}^{d^2}$ be any basis for $\mathbb{H}_{d\times d}$. Since each $T_i$ is Hermitian we can use the spectral theorem to get a decomposition $T_i=\sum_{j=1}^d\lambda_{ij}P_{ij}$ where each $P_{ij}$ is rank 1. So it follows that $\mathrm{span}\{P_{ij}\}=\mathbb{H}_{d\times d}$ and therefore this set contains a basis of $\mathbb{H}_{d\times d}.$ Thus, we have constructed a basis of $\mathbb{H}_{d\times d}$ consisting only of rank 1 matrices.
Now observe that for a given choice of vectors $\{\varphi_i\}_{i=1}^{d^2}$ we have that $\mathrm{span}\{\varphi_i\varphi_i^*\}=\mathbb{H}_{d\times d}$ if and only if the determinant of the frame operator is nonzero (note that we are referring to the frame operator of $\{\varphi_i\varphi_i^*\}_{i=1}^{d^2}$ as an operator on $\mathbb{H}_{d\times d}$, not the frame operator of $\{\varphi_i\}_{i=1}^n$ as an operator on $\mathbb{C}^d$). But the determinant of the frame operator is a polynomial in the real and imaginary parts of the entries of the $\varphi_i$'s, and by the first paragraph we know that there is at least one choice for which this does not vanish, so we can conclude that for a generic choice it does not vanish.
\end{proof}
\begin{corollary}
If $n\leq d^2$ then for a generic choice of vectors $\{\varphi_i\}_{i=1}^n\subseteq\mathbb{C}^d$ we have that $\{\varphi_i\varphi_i^*\}_{i=1}^n$ is linearly independent.
\end{corollary}
Given a frame $\{\varphi_i\}_{i=1}^n\subseteq\mathbb{C}^d$ define the operator $\mathcal{A}:\mathbb{R}^n\rightarrow\mathbb{H}_{d\times d}$ by
$$
\mathcal{A}w=\sum_{i=1}^nw_i\varphi_i\varphi_i^*
$$
where $w=(w_1,...,w_n)^T$. Determining whether $\{\varphi_i\}_{i=1}^n$ is scalable boils down to finding a nonnegative solution to
$$
\mathcal{A}w=I_d.
$$
In the generic case when $\{\varphi_i\varphi_i^*\}_{i=1}^n$ is linearly independent, this system is guaranteed to have either no solution or a unique solution. So if it has no solution, or a solution with a negative entry, we can conclude that this frame is not scalable; if it has a nonnegative solution then it is scalable and this solution gives the unique scalars to use. We summarize this in the following corollary:
\begin{corollary}
Given a frame $\{\varphi_i\}_{i=1}^n\subseteq\mathbb{C}^d$ such that $\{\varphi_i\varphi_i^*\}_{i=1}^n$ is linearly independent in $\mathbb{H}_{d\times d}$, we can determine its scalability by solving the linear system
\begin{equation}\label{system}
\mathcal{A}w=I_d.
\end{equation}
Furthermore, in this case if it is scalable then it is scalable in a unique way.
In particular, if $n\leq d^2$ then with probability 1, determining the scalability of $\{\varphi_i\}_{i=1}^n$ is equivalent to solving the linear system given in \eqref{system}.
\end{corollary}
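Numerically, the test described in this corollary amounts to a single least-squares solve. The following Python sketch is our own illustrative code, not part of the paper's formal development: it vectorizes the outer products into the columns of a matrix, solves \eqref{system}, and accepts the answer only if the residual vanishes and the weights are nonnegative. In the linearly independent case of the corollary this decides scalability; when the outer products are dependent it only inspects the minimum-norm solution.

```python
import numpy as np

def scaling_weights(Phi, tol=1e-10):
    """Try to scale the frame given by the columns of Phi (d x n).

    Returns w with sum_i w_i phi_i phi_i^* = I_d and w_i >= 0 if such a
    solution is found, otherwise None.
    """
    d, n = Phi.shape
    # Column i of A is the vectorized outer product phi_i phi_i^*,
    # so the scaling condition reads A w = vec(I_d).
    A = np.column_stack([np.outer(Phi[:, i], Phi[:, i].conj()).ravel()
                         for i in range(n)])
    b = np.eye(d).ravel()
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    w = w.real  # a valid scaling is real
    if np.linalg.norm(A @ w - b) > tol or np.min(w) < -tol:
        return None  # no nonnegative solution found: not scalable
    return np.clip(w, 0.0, None)
```

For example, an orthonormal basis is scalable with all weights equal to $1$, while the frame $\{e_1, e_1+e_2\}$ in $\mathbb{C}^2$ is not scalable.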
\section{Linearly dependent outer products}
In this section we will address the situation where $\{\varphi_i\varphi_i^*\}_{i=1}^n$ is linearly dependent. The main problem here is that the system $\mathcal{A}w=I_d$ may have many solutions, and possibly none of them are nonnegative. In this section we will find it convenient to assume that $\|\varphi_i\|=1$ for every $i=1,...,n$; note that we lose no generality by making this assumption.
Given a collection of vectors $\{x_i\}_{i=1}^n\subseteq\mathbb{R}^d$ we define their \textit{affine span} as
$$
\mathrm{aff}\{x_i\}_{i=1}^n:=\{\sum_{i=1}^nc_ix_i:\sum_{i=1}^nc_i=1\}
$$
and we say that $\{x_i\}_{i=1}^n$ is \textit{affinely independent} if
$$
x_j\not\in\mathrm{aff}\{x_i\}_{i\neq j}
$$
for every $j=1,...,n$. We also define their \textit{convex hull} as
$$
\mathrm{conv}\{x_i\}_{i=1}^n:=\{\sum_{i=1}^nc_ix_i:c_i\geq 0,\sum_{i=1}^nc_i=1\}.
$$
A set $\mathcal{P}\subseteq\mathbb{R}^d$ is called a \textit{polytope} if it is the convex hull of finitely many points.
\begin{proposition}
Given a collection of unit norm vectors $\{\varphi_i\}_{i=1}^n\subseteq\mathbb{C}^d$ we have that $\{\varphi_i\varphi_i^*\}_{i=1}^n$ is linearly independent if and only if it is affinely independent.
\end{proposition}
\begin{proof}
Clearly linear independence always implies affine independence. So suppose that $\{\varphi_i\varphi_i^*\}_{i=1}^n$ is not linearly independent. Then we have an equation of the form
$$
\varphi_j\varphi_j^*=\sum_{i\neq j}c_i\varphi_i\varphi_i^*
$$
for some $j$. Also note that since $\|\varphi_i\|=1$ it follows that $\langle\varphi_i\varphi_i^*,I_d \rangle=1$ for every $i=1,...,n$. Therefore, we have
\begin{eqnarray*}
1&=&\langle\varphi_j\varphi_j^*,I_d \rangle =\langle\sum_{i\neq j}c_i\varphi_i\varphi_i^*,I_d \rangle \\
&=&\sum_{i\neq j}c_i\langle\varphi_i\varphi_i^*,I_d \rangle =\sum_{i\neq j}c_i.
\end{eqnarray*}
Therefore $\{\varphi_i\varphi_i^*\}_{i=1}^n$ is not affinely independent.
\end{proof}
\begin{proposition}\label{b}
A unit norm frame $\{\varphi_i\}_{i=1}^n\subseteq\mathbb{C}^d$ is scalable if and only if $\frac{1}{d}I_d\in\mathrm{conv}\{\varphi_i\varphi_i^*\}_{i=1}^n$. Furthermore, if $\lambda I_d\in\mathrm{conv}\{\varphi_i\varphi_i^*\}_{i=1}^n$ then $\lambda=\frac{1}{d}$ and if $\sum_{i=1}^nw_i\varphi_i\varphi_i^*=\frac{1}{d}I_d$ then $\sum_{i=1}^nw_i=1$.
\end{proposition}
\begin{proof}
Suppose we have a scaling $w$ so that
$$
I_d=\sum_{i=1}^nw_i\varphi_i\varphi_i^*.
$$
Then
\begin{eqnarray*}
d&=&\langle I_d,I_d \rangle=\langle\sum_{i=1}^nw_i\varphi_i\varphi_i^*,I_d \rangle \\
&=&\sum_{i=1}^nw_i\langle\varphi_i\varphi_i^*,I_d \rangle=\sum_{i=1}^nw_i.
\end{eqnarray*}
Thus, $\sum_{i=1}^n\frac{w_i}{d}=1$ and since $w_i\geq 0$ for every $i=1,...,n$ it follows that $\frac{1}{d}I_d=\sum_{i=1}^n\frac{w_i}{d}\varphi_i\varphi_i^*\in\mathrm{conv}\{\varphi_i\varphi_i^*\}_{i=1}^n$. The converse is obvious.
The furthermore part follows from a similar argument. Suppose $\lambda I_d=\sum_{i=1}^nw_i\varphi_i\varphi_i^*$ with $\sum_{i=1}^nw_i=1$. Then
$$
d\lambda=\langle\lambda I_d,I_d\rangle=\sum_{i=1}^nw_i=1.
$$
Now suppose $\frac{1}{d}I_d=\sum_{i=1}^nw_i\varphi_i\varphi_i^*$. Then
$$
1=\langle\sum_{i=1}^nw_i\varphi_i\varphi_i^*,I_d \rangle=\sum_{i=1}^nw_i.
$$
\end{proof}
The following theorem is known as Carath\'{e}odory's theorem:
\begin{theorem}\label{thm_car}
Given a set of points $\{x_i\}_{i=1}^n\subseteq\mathbb{R}^d$ suppose $y\in\mathrm{conv}\{x_i\}_{i=1}^n$. Then there exists a subset $I\subseteq\{1,...,n\}$ such that $y\in\mathrm{conv}\{x_i\}_{i\in I}$ and $\{x_i\}_{i\in I}$ is affinely independent.
\end{theorem}
\begin{corollary}\label{a}
Suppose $\{\varphi_i\}_{i=1}^n\subseteq\mathbb{C}^d$ is a scalable frame. Then there is a subset $\{\varphi_i\}_{i\in I}$ which is also scalable and $\{\varphi_i\varphi_i^*\}_{i\in I}$ is linearly independent.
\end{corollary}
Given a unit norm frame $\{\varphi_i\}_{i=1}^n\subseteq\mathbb{C}^d$ we define the set
$$
\mathcal{P}(\{\varphi_i\}_{i=1}^n):=\{(w_1,...,w_n):w_i\geq 0,\sum_{i=1}^nw_i\varphi_i\varphi_i^*=\frac{1}{d}I_d\}.
$$
Proposition \ref{b} tells us two things about this set: first we have that $w\in\mathcal{P}(\{\varphi_i\}_{i=1}^n)$ if and only if $d\cdot w$ is a scaling of $\{\varphi_i\}_{i=1}^n$, and second, that $\mathcal{P}(\{\varphi_i\}_{i=1}^n)$ is a (possibly empty) polytope (see, for example, Theorem 1.1 in \cite{polytopes}).
Suppose $\{\varphi_i\}_{i=1}^n\subseteq\mathbb{C}^d$ is a scalable frame, and we are given a scaling $w=(w_1,...,w_n)$. We say the scaling is \textit{minimal} if $\{\varphi_i:w_i>0\}$ has no proper subset which is scalable.
\begin{theorem}\label{main}
Suppose $\{\varphi_i\}_{i=1}^n\subseteq\mathbb{C}^d$ is a scalable, unit norm frame. If $w=(w_1,...,w_n)$ is a minimal scaling then $\{\varphi_i\varphi_i^*:w_i>0\}$ is linearly independent. Furthermore, $\mathcal{P}(\{\varphi_i\}_{i=1}^n)$ is the convex hull of the minimal scalings, \textit{i.e.}, every scaling is a convex combination of minimal scalings.
\end{theorem}
\begin{proof}
The first statement follows directly from Corollary \ref{a}.
We now show that every vertex of $\mathcal{P}(\{\varphi_i\}_{i=1}^n)$ is indeed a minimal scaling. Let $u\in\mathcal{P}(\{\varphi_i\}_{i=1}^n)$ be a vertex and assume to the contrary that $u$ is not minimal. Then there exists a $v\in \mathcal{P}(\{\varphi_i\}_{i=1}^n)$ such that $\mathrm{supp}(v)\subsetneq\mathrm{supp}(u)$. Let $w(t)=v+t(u-v)$, and $t_0=\mathrm{min}\{\frac{v_i}{v_i-u_i}:v_i>u_i\}$. We observe that $t_0>1$ and $w(t_0)_i\geq0$ for every $i$ since $\mathrm{supp}(v)\subsetneq\mathrm{supp}(u)$. This means $w(t_0)\in \mathcal{P}(\{\varphi_i\}_{i=1}^n)$, and $u$ lies on the line segment connecting $v$ and $w(t_0)$, which contradicts the fact that $u$ is a vertex.
Finally we show that every minimal scaling is a vertex of $\mathcal{P}(\{\varphi_i\}_{i=1}^n)$. Suppose we are given a minimal scaling $w$ which is not a vertex of $\mathcal{P}(\{\varphi_i\}_{i=1}^n)$. Then we can write $w$ as a convex combination of vertices, say $w=\sum t_iv_i$, where at least two $t_i$'s are nonzero, without loss of generality say $t_1$ and $t_2$. Since $t_1$ is positive and all the entries of $v_1$ are nonnegative, it follows that $\mathrm{supp}(v_1)\subseteq\mathrm{supp}(w)$. If this containment were proper then $\{\varphi_i:(v_1)_i>0\}$ would be a proper scalable subset of $\{\varphi_i:w_i>0\}$, contradicting minimality; hence $\mathrm{supp}(v_1)=\mathrm{supp}(w)$. But by the first statement $\{\varphi_i\varphi_i^*:w_i>0\}$ is linearly independent, so $\mathcal{P}(\{\varphi_i\}_{i=1}^n)$ contains at most one element with this support, forcing $v_1=w$ and contradicting the assumption that $w$ is not a vertex.
\end{proof}
Theorem \ref{main} reduces the problem of understanding the scalings of the frame $\{\varphi_i\}_{i=1}^n$ to that of finding the vertices of the polytope $\mathcal{P}(\{\varphi_i\}_{i=1}^n)$. Relatively fast algorithms for doing this are known, see \cite{algorithm}.
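For small frames the vertex set can also be found by brute force directly from Theorem \ref{main}: search over supports whose outer products are linearly independent and keep the strictly positive solutions of $\sum_{i\in I}w_i\varphi_i\varphi_i^*=\frac{1}{d}I_d$. The Python sketch below is our own illustration of this idea (the dedicated vertex-enumeration algorithms of \cite{algorithm} are far more efficient):

```python
import itertools
import numpy as np

def minimal_scalings(Phi, tol=1e-9):
    """Vertices of P({phi_i}) for the unit norm frame given by Phi's columns.

    Brute force: a minimal scaling is supported on a set I whose outer
    products are linearly independent and which carries a strictly positive
    solution of sum_{i in I} w_i phi_i phi_i^* = (1/d) I_d.
    """
    d, n = Phi.shape
    b = np.eye(d).ravel() / d
    found = []  # pairs (support, positive weights on that support)
    for k in range(1, n + 1):
        for I in itertools.combinations(range(n), k):
            # supersets of an already-found support cannot be minimal
            if any(set(s) <= set(I) for s, _ in found):
                continue
            A = np.column_stack([np.outer(Phi[:, i], Phi[:, i].conj()).ravel()
                                 for i in I])
            if np.linalg.matrix_rank(A, tol=tol) < k:
                continue  # outer products on I are linearly dependent
            w, *_ = np.linalg.lstsq(A, b, rcond=None)
            w = w.real
            if np.linalg.norm(A @ w - b) <= tol and np.min(w) > tol:
                found.append((I, w))
    vertices = []
    for I, w in found:
        v = np.zeros(n)
        v[list(I)] = w
        vertices.append(v)
    return vertices
```

For the unit norm frame $\{e_1,e_2,(e_1+e_2)/\sqrt{2},(e_1-e_2)/\sqrt{2}\}$ in $\mathbb{C}^2$ this returns exactly the two vertices $(\frac{1}{2},\frac{1}{2},0,0)$ and $(0,0,\frac{1}{2},\frac{1}{2})$.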
\section{When are outer products linearly independent?}
Since most of the results in this paper deal with linear independence of the outer products of subsets of our frame vectors we will address this issue in this section. It would be nice if there were conditions on a frame $\{\varphi_i\}_{i=1}^n$ which could guarantee that the set of outer products $\{\varphi_i\varphi_i^*\}_{i=1}^n$ is linearly independent, or conversely if knowing that $\{\varphi_i\varphi_i^*\}_{i=1}^n$ is linearly independent tells us anything about the frame $\{\varphi_i\}_{i=1}^n$. One obvious condition is that in order for $\{\varphi_i\varphi_i^*\}_{i=1}^n$ to be linearly independent we must have $n\leq d^2$, and when this is satisfied Theorem \ref{generic} tells us that this will usually be the case.
Another condition which is easy to prove is that if $\{\varphi_i\}_{i=1}^n$ is linearly independent then so is $\{\varphi_i\varphi_i^*\}_{i=1}^n$. The converse of this is certainly not true, and since we are usually interested in frames for which $n>d$ this condition is not very useful. The main idea here is that while the frame vectors live in a $d$-dimensional space the outer products live in a $d^2$-dimensional space, so there is much more ``room'' for them to be linearly independent.
Given a frame $\{\varphi_i\}_{i=1}^n$ we define its \textit{spark} to be the size of its smallest linearly dependent subset, more precisely
$$
\mathrm{spark}(\{\varphi_i\}_{i=1}^n):=\mathrm{min}\{|I|:\{\varphi_i\}_{i\in I}\text{ is linearly dependent}\}.
$$
Clearly for a frame $\{\varphi_i\}_{i=1}^n\subseteq\mathbb{C}^d$ we must have that $\mathrm{spark}(\{\varphi_i\}_{i=1}^n)\leq d+1$; if its spark is equal to $d+1$ we say it is \textit{full spark}. For more background on full spark frames see \cite{fullspark}.
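For small frames the spark can be computed by exhaustive search over subsets, as in the following Python sketch (our own illustration; as noted at the end of this section, computing the spark is NP-hard in general, so no efficient method is to be expected):

```python
import itertools
import numpy as np

def spark(Phi, tol=1e-10):
    """Spark of the frame given by the columns of Phi (d x n): the size of
    the smallest linearly dependent subset of columns (n + 1 if none)."""
    d, n = Phi.shape
    for k in range(1, n + 1):
        for idx in itertools.combinations(range(n), k):
            # a k-element subset is dependent iff its matrix has rank < k
            if np.linalg.matrix_rank(Phi[:, idx], tol=tol) < k:
                return k
    return n + 1
```

For instance, for the frame $\{e_1,e_2,e_1+e_2,e_1-e_2\}$ in $\mathbb{C}^2$ this returns $3=d+1$ (full spark), while for the frame $\{e_1,e_2,e_3,e_1+e_2,e_2+e_3\}$ in $\mathbb{C}^3$ it returns $3<d+1$.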
\begin{proposition}\label{spark}
Suppose $\{\varphi_i\}_{i=1}^n\subseteq\mathbb{C}^d$ is a frame with $n\leq 2d-1$. If $\{\varphi_i\}_{i=1}^n$ is full spark then $\{\varphi_i\varphi_i^*\}_{i=1}^n$ is linearly independent.
\end{proposition}
\begin{proof}
Suppose by way of contradiction that $\{\varphi_i\}_{i=1}^n$ is full spark but $\{\varphi_i\varphi_i^*\}_{i=1}^n$ is linearly dependent. Then we can write an equation of the form
\begin{equation*}\label{t}
\sum_{i\in I}a_i\varphi_i\varphi_i^*=\sum_{j\in J}b_j\varphi_j\varphi_j^*
\end{equation*}
with $a_i>0$ for every $i\in I$, $b_j>0$ for every $j\in J$, and $I\cap J=\emptyset$. This implies that
\begin{eqnarray*}
\mathrm{span}(\{\varphi_i\}_{i\in I})&=&\mathrm{Im}(\sum_{i\in I}a_i\varphi_i\varphi_i^*) \\
&=&\mathrm{Im}(\sum_{j\in J}b_j\varphi_j\varphi_j^*)=\mathrm{span}(\{\varphi_j\}_{j\in J}).
\end{eqnarray*}
But since $n\leq 2d-1$ we have either $|I|\leq d-1$ or $|J|\leq d-1$, so this contradicts the fact that $\{\varphi_i\}_{i=1}^n$ is full spark.
\end{proof}
We first remark that the converse of Proposition \ref{spark} is not true:
\begin{example}\label{ex1}
Let $\{e_1,e_2,e_3\}$ be an orthonormal basis for $\mathbb{C}^3$ and consider the frame $\{e_1,e_2,e_3,e_1+e_2,e_2+e_3\}$. Clearly this frame is not full spark and yet it is easy to verify that $\{e_1e_1^*,e_2e_2^*,e_3e_3^*,(e_1+e_2)(e_1+e_2)^*,(e_2+e_3)(e_2+e_3)^*\}$ is linearly independent.
\end{example}
Next we remark that the assumption $n\leq 2d-1$ is necessary:
\begin{example}\label{ex2}
Let $\{e_1,e_2\}$ be an orthonormal basis for $\mathbb{C}^2$ and consider the frame $\{e_1,e_2,e_1+e_2,e_1-e_2\}$. Clearly this frame is full spark but
$$
e_1e_1^*+e_2e_2^*=I_2=\frac{1}{2}((e_1+e_2)(e_1+e_2)^*+(e_1-e_2)(e_1-e_2)^*).
$$
\end{example}
Finally we remark that with only slight modifications the proof of Proposition \ref{spark} can be used to prove the following more general result:
\begin{proposition}\label{spark2}
If $\mathrm{spark}(\{\varphi_i\}_{i=1}^n)\geq s$ then $\mathrm{spark}(\{\varphi_i\varphi_i^*\}_{i=1}^n)\geq 2s-2$.
\end{proposition}
Unfortunately, the converse of Proposition \ref{spark2} is still not true. The main problem here is that given any three vectors such that no one of them is a scalar multiple of another, the corresponding outer products will be linearly independent (we leave the proof of this as an exercise). Therefore it is easy to make examples (such as Example \ref{ex1} above) of frames that have tiny spark, but the corresponding outer products are linearly independent.
We conclude our discussion of spark by remarking that in \cite{fullspark} it is shown that computing the spark of a general frame is NP-hard. Thus, the small amount of insight we gain from Proposition \ref{spark2} is of little practical use.
Another property worth mentioning in this section is known as the \textit{complement property}. A frame $\{\varphi_i\}_{i=1}^n\subseteq\mathbb{C}^d$ has the complement property if for every $I\subseteq\{1,...,n\}$ we have either $\mathrm{span}(\{\varphi_i\}_{i\in I})=\mathbb{C}^d$ or $\mathrm{span}(\{\varphi_i\}_{i\in I^c})=\mathbb{C}^d$. We remark that the complement property is usually discussed for frames in a real vector space, but for our purposes it is fine to discuss it for frames in a complex space. In \cite{phaseless} the complement property was shown to be necessary and sufficient to do phaseless reconstruction in the real case.
If a frame $\{\varphi_i\}_{i=1}^n\subseteq\mathbb{C}^d$ has the complement property then clearly we must have $n\geq 2d-1$ (if not we could partition the frame into two sets each of size at most $d-1$) and that in this case full spark implies the complement property. If $n=2d-1$ then the complement property is equivalent to full spark, but for $n>2d-1$ the complement property is (slightly) weaker. One might ask if the complement property tells us anything about the linear independence of the outer products, or vice versa. Example \ref{ex1} above is an example of a frame which does not have the complement property but the outer products are linearly independent, and Example \ref{ex2} is an example of a frame that does have the complement property but the outer products are linearly dependent. So it seems like the complement property has nothing to do with the linear independence of the outer products.
Given a frame with the complement property we can add any set of vectors to it without losing the complement property. Thus it seems natural to ask whether every frame with the complement property has a subset of size $2d-1$ which is full spark. This also turns out to be not true as the following example shows:
\begin{example}
Consider the frame in Example \ref{ex1} with the vector $e_1+e_3$ added to it. It is not difficult to verify that this frame does have the complement property, but no subset of size 5 is full spark.
\end{example}
We conclude by noting that as in the proof of Proposition \ref{spark}, a set of outer products $\{\varphi_i\varphi_i^*\}_{i=1}^n$ is linearly dependent if and only if we have an equation of the form
$$
\sum_{i\in I}a_i\varphi_i\varphi_i^*=\sum_{j\in J}b_j\varphi_j\varphi_j^*
$$
with $a_i>0$ for every $i\in I$, $b_j>0$ for every $j\in J$, and $I\cap J=\emptyset$. This is equivalent to $\{\varphi_i\}_{i=1}^n$ having two disjoint subsets, namely $\{\varphi_i\}_{i\in I}$ and $\{\varphi_j\}_{j\in J}$, which can be scaled to have the same frame operator. Thus, determining whether $\{\varphi_i\varphi_i^*\}_{i=1}^n$ is linearly independent is equivalent to solving a more difficult scaling problem than the one presented in this paper.
\section*{Acknowledgment}
The authors would like to thank Peter Casazza and Dustin Mixon for insightful conversations during the writing of this paper.
\end{document}
\begin{document}
\maketitle
\baselineskip 18pt
\begin{abstract}{\it
We consider
initial boundary value problems with the homogeneous Neumann
boundary condition. Given an initial value, we establish
the uniqueness in determining a spatially varying coefficient
of the zeroth-order term by a single measurement of Dirichlet data
on an arbitrarily chosen subboundary. The uniqueness holds
in a subdomain where the initial value is positive,
provided that it is sufficiently smooth, where the smoothness is specified
in terms of decay rates of the Fourier coefficients.
The key idea is the reduction to an inverse elliptic problem
and relies on elliptic Carleman estimates.}
\\
{\bf Key words.}
inverse coefficient problem, parabolic equation, uniqueness,
initial boundary value problem, inverse elliptic problem,
Carleman estimate
\\
{\bf AMS subject classifications.} 35R30, 35K15
\end{abstract}
\section{Introduction}
Let $\Omega$ be a bounded domain in $\Bbb R^n$ with $C^2$-boundary
$\partial\Omega$.
Let $\nu=(\nu_1(x),\dots,\nu_n(x))$ denote the outward unit normal vector to
$\partial\Omega$, and set
$$
\partial_k = \frac{\partial}{\partial x_k}, \quad 1\le k\le n, \quad
\partial_t = \frac{\partial}{\partial t}.
$$
We assume
$$
a_{ij}=a_{ji} \in C^2(\overline{\Omega}) \quad \mbox{for all $1\le i,j\le n$},
\quad c_1, c_2 \in C(\overline{\Omega}),
$$
and there exists a constant $\kappa_1>0$ such that
$$
\sum_{i,j=1}^n a_{ij}(x)\xi_i\xi_j
\ge \kappa_1\vert \xi\vert^2 \quad \mbox{for all $x \in \overline{\Omega}$ and
$\xi=(\xi_1,\dots,\xi_n) \in \Bbb R^n$}.
$$
We set
$$
\partial_{\nu_A}v: = \sum_{i,j=1}^n a_{ij}(x)\nu_i(x)\partial_jv(x), \quad
x\in \partial\Omega.
$$
Moreover let
$$
A(x,D)v(x) = - \sum_{i,j=1}^n \partial_i(a_{ij}(x)\partial_jv(x)) \quad
\mbox{with} \quad
\mathcal{D}(A):= \{ v\in H^2(\Omega);\, \partial_{\nu_A} v= 0\quad
\mbox{on $\partial\Omega$}\}. \eqno{(1.1)}
$$
In this article, we consider the following
\\
{\bf Inverse problem.}
\\
{\it
Let
$$\left\{ \begin{array}{rl}
& \partial_tu = -A(x,D)u + c_1(x)u \quad \mbox{in $\Omega\times (0,T)$}, \\
& \partial_{\nu_A} u = 0 \quad\mbox{on $\partial\Omega$}, \\
& u(x,0) = a(x), \quad x\in \Omega,
\end{array}\right.
\eqno{(1.2)}
$$
and
$$\left\{ \begin{array}{rl}
& \partial_tv = -A(x,D)v + c_2(x)v \quad \mbox{in $\Omega\times (0,T)$}, \\
& \partial_{\nu_A} v = 0 \quad\mbox{on $\partial\Omega$}, \\
& v(x,0) = a(x), \quad x\in \Omega.
\end{array}\right.
\eqno{(1.3)}
$$
Let an initial value $a$ be suitably given and
$\gamma \subset \partial\Omega$ be an arbitrarily chosen non-empty connected
relatively open subset of $\partial\Omega$.
Then
$$
\mbox{does $u=v$ on $\gamma \times (0,T)$ imply $c_1=c_2$ in $\Omega$?}
$$
}
This inverse problem has been intensively studied in the literature.
The most general results have been obtained for the case where the time of
observation $t_0$ belongs to the open interval $(0,T)$. In this case, based on the method introduced in
Bukhgeim and Klibanov \cite{BK}, Imanuvilov and Yamamoto \cite{IY98} proved
the uniqueness and the Lipschitz stability in the determination of the
coefficient of the zeroth-order term.
Recently in Imanuvilov and Yamamoto \cite{IY23},
the authors proved a conditional Lipschitz stability
estimate as well as the uniqueness for the case $t_0=T$.
See also Huang, Imanuvilov and Yamamoto \cite{HIY}.
In the case where the observation is taken at the initial moment $t_0=0$,
to the authors' best knowledge, the question of uniqueness for the
inverse problem is open in general.
In the one-dimensional case, results in this direction were
obtained by Suzuki \cite{S} and Suzuki and Murayama \cite{SM}.
Klibanov \cite{Kl} proved the uniqueness in the determination of the
zeroth-order coefficient in the case $a_{ij}=\delta_{ij}$ (the case of
the Laplace operator). There it is also assumed that the observation
subboundary is all of $\partial\Omega$, that is, $\Gamma=\partial\Omega$.
The method proposed in \cite{Kl} is based on an integral transform and
subsequent reduction of the original problem to the problem of determination
of a coefficient of zeroth order term for a hyperbolic equation.
After that, the method in \cite{BK} is applied.
It should be mentioned that the method introduced by
Bukhgeim and Klibanov is based on Carleman estimates, and the Carleman type
estimates for hyperbolic equations are subject to so-called non-trapping
conditions. Therefore both assumptions made in \cite{Kl} are critically
important for the application of this method, except in the one-dimensional case.
In \cite{IY23}, the authors extended the results of \cite{Kl} to the case of
a general second-order hyperbolic equation.
The main purpose of the current work is to remove the non-trapping assumptions
and prove the uniqueness without any geometric constraints on the
observation subboundary $\gamma$.
\\
Henceforth we set
$$
-A_1(x,D) = -A(x,D) + c_1(x), \quad -A_2(x,D) = -A(x,D) + c_2(x)
$$
with the domains $\mathcal{D}(A_1) = \mathcal{D}(A_2) = \mathcal{D}(A)$.
It is known that the spectrum
$\sigma(A_k)$ of $A_k$, $k=1,2$, consists entirely of eigenvalues with
finite multiplicities.
Replacing $u$ by $\widetilde{u}:= e^{Mt}u$ with a suitable constant $M$, it suffices to
assume that there exists a constant $\kappa_2>1$
such that $(A_1u,u)_{L^2(\Omega)} \ge \kappa_2\Vert u\Vert^2_{L^2(\Omega)}$
for $u \in \mathcal{D}(A_1)$
and $(A_2v,v)_{L^2(\Omega)} \ge \kappa_2\Vert v\Vert^2_{L^2(\Omega)}$
for $v\in \mathcal{D}(A_2)$.
Then, setting
$$
\sigma(A_1)= \{\lambda_k\}_{k\in \mathbb{N}}, \quad \sigma(A_2) = \{ \mu_k\}_{k\in \mathbb{N}},
$$
we can number them so that
$$
1 < \lambda_1 < \lambda_2 < \cdots, \qquad 1<\mu_1 < \mu_2 < \cdots.
$$
Let $P_k$ and $Q_k$ be the eigenprojections for $\lambda_k$ and $\mu_k$, $k \in \mathbb{N}$, which are defined by
$$
P_k = \frac{1}{2\pi\sqrt{-1}} \int_{\gamma(\lambda_k)}
(z-A_1)^{-1} dz, \quad
Q_k = \frac{1}{2\pi\sqrt{-1}} \int_{\gamma(\mu_k)}
(z-A_2)^{-1} dz,
$$
where $\gamma(\lambda_k)$ is a circle centered at $\lambda_k$ with sufficiently
small radius such that the disc bounded by $\gamma(\lambda_k)$ does not
contain any points in $\sigma(A_1)\setminus \{\lambda_k\}$, and
$\gamma(\mu_k)$ is a similar sufficiently small circle centered
at $\mu_k$.
Then $P_k:L^2(\Omega) \longrightarrow L^2(\Omega)$ is a bounded linear operator
to a finite dimensional space
and $P_k^2 = P_k$ and $P_kP_{\ell} = 0$ for $k, \ell\in \mathbb{N}$ with
$k \ne \ell$. Then
$P_kL^2(\Omega) = \{ b\in \mathcal{D}(A_1);\, A_1b=\lambda_kb\}$, and we have
$a = \sum_{k=1}^{\infty} P_ka$ in $L^2(\Omega)$ for each
$a \in L^2(\Omega)$ (e.g., Agmon \cite{Ag}, Kato \cite{Ka}).
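Although the paper is purely analytic, the algebra of the eigenprojections $P_k$ stated here can be illustrated numerically. The sketch below is only an illustration: a stand-in symmetric finite-difference matrix plays the role of $A_1$ (none of these names come from the paper), and we check $P_k^2 = P_k$, $P_kP_\ell = 0$ for $k\ne\ell$, and the expansion $a = \sum_k P_ka$.

```python
import numpy as np

# Stand-in for A_1: a symmetric finite-difference Laplacian on (0,1).
# A symmetric matrix has orthonormal eigenvectors, so its eigenprojections
# behave exactly like the P_k of the text (idempotent, mutually orthogonal).
n = 50
h = 1.0 / (n + 1)
A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2

eigvals, eigvecs = np.linalg.eigh(A)          # simple eigenvalues here

# Eigenprojection onto the k-th eigenspace: P_k = v_k v_k^T.
P = [np.outer(eigvecs[:, k], eigvecs[:, k]) for k in range(n)]

a = np.random.default_rng(0).standard_normal(n)

assert np.allclose(P[0] @ P[0], P[0])          # P_k^2 = P_k
assert np.allclose(P[0] @ P[1], 0.0)           # P_k P_l = 0 for k != l
assert np.allclose(sum(Pk @ a for Pk in P), a) # a = sum_k P_k a
```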
Setting $m_k:= \mbox{dim}\, P_kL^2(\Omega)$, we have
$m_k<\infty$, and we call $m_k$ the multiplicity of $\lambda_k$.
Similarly, let $n_k$ denote the multiplicity of $\mu_k$.
\\
Moreover we set $Q:= \Omega \times (0,T)$, and
$$
H^{2,1}(Q):= \{ w\in L^2(Q);\,
w, \, \partial_iw,\, \partial_i\partial_jw,\, \partial_tw \in L^2(Q)
\,\, \mbox{for $1\le i,j\le n$}\}.
$$
Let
$$
\Gamma=\{x\in \gamma;\, \vert a(x)\vert >0\}.
$$
We assume that
$$
\Gamma\ne \emptyset. \eqno{(1.4)}
$$
For $a\in C(\overline{\Omega})$, we set
$$
\Omega_0 := \{ x\in \Omega;\, \vert a(x)\vert > 0\}.
$$
For $\Gamma$, we define
$$
\mbox{$\omega: = \{ x\in \Omega_0;\,$ there exist a point $x_*\in \Gamma$
and}
$$
$$
\mbox{a smooth curve $\ell \in C^\infty[0,1]$ such that
$\ell(\xi) \in \Omega_0$ for $0<\xi\le 1$ and $\ell(0)=x_*$, $\ell(1)=x\}$}.
\eqno{(1.5)}
$$
We remark that the definition (1.5) implies $\ell \setminus \{x_*\}
\subset \omega$.
In (1.5), replacing smooth curves by piecewise
smooth curves, we still have the same definition for $\omega$.
We note that $\omega$ is not necessarily a connected set.
However, if in addition we suppose that
$$
\mbox{$\Gamma$ is a connected subset of $\partial\Omega$}, \eqno{(1.6)}
$$
then one can verify that $\omega \subset \Omega$ is a domain, that is,
a connected open set. Indeed, choosing $x, \widetilde{x}\in \omega$ arbitrarily,
we will show that we can find a piecewise smooth curve $L \subset \omega$
connecting $x$ and $\widetilde{x}$ as follows.
First we can choose smooth curves $\ell, \widetilde{\ell}
\subset \Omega_0 \cup \Gamma$ and points $x_*, \widetilde{x}_* \in \Gamma$
such that $\ell$ connects $x$ and $x_*$, and $\widetilde{\ell}$ connects $\widetilde{x}$ and
$\widetilde{x}_*$. The definition implies that $\ell \setminus \{x_*\},
\widetilde{\ell} \setminus \{ \widetilde{x}_*\} \subset \omega$.
Since $\vert a\vert >0$ on $\Gamma$, we can
find a smooth curve $\widetilde{\gamma} \subset \Omega_0$ connecting $x_*$ and
$\widetilde{x}_*$. Therefore, since $\widetilde{\gamma} \subset \omega$, it follows that
$x$ and $\widetilde{x}$ can be connected by a piecewise smooth curve
$L \subset \omega$ composed of $\ell, \widetilde{\ell}, \widetilde{\gamma}$,
which means that $\omega$ is a connected set.
Moreover, if $x\in \omega$, then any point $\widetilde{x} \in \Omega_0$
which is sufficiently close to $x$ can be connected to some point $\widetilde{x}_*
\in \Gamma$ by a smooth curve in $\Omega_0$.
Therefore, $\omega$ is a connected and
open set, that is, $\omega$ is a domain.
$\blacksquare$
We can understand that $\omega$ is the maximal set such that
every point of $\omega$ can be connected by a curve
in $\Omega_0$ to $\Gamma$.
By (1.4), we note that
$\omega \ne \emptyset$.
\\
{\bf Examples.}
\\
(i) Under condition (1.4), we have $\omega = \Omega$ if $\Omega_0 = \Omega$.
In general, if $\{ x\in \Omega;\, a(x) = 0\}$ has no interior points, then
$\omega = \Omega_0$.
\\
(ii) Assume that (1.4) and (1.6) hold true. Let subdomains $D_1, \ldots, D_m \subset \Omega$ satisfy
$\overline{D_1}, \ldots, \overline{D_m} \subset \Omega$ and
$a=0$ on $\overline{D_k}$ for $1\le k \le m$ and $\vert a\vert > 0$ in
$\Omega \setminus \overline{\bigcup_{k=1}^m D_k}$. Then
$\omega = \Omega \setminus \overline{\bigcup_{k=1}^m D_k}$.
\\
(iii) Assume that (1.4) and (1.6) hold true. Let subdomains $D_1, D_2$ satisfy $\overline{D_1} \subset D_2$,
$\overline{D_2} \subset \Omega$, $a=0$ in $\overline{D_2 \setminus D_1}$ and
$\vert a\vert > 0$ in $D_1 \cup (\Omega \setminus \overline{D_2})$. Then
$\omega = \Omega \setminus \overline{D_2}$. We note that $D_1$ is not included in
$\omega$ although $\vert a\vert > 0$ in $D_1$.
\\
Now we state the main uniqueness result.
\\
{\bf Theorem 1.}
\\
{\it
Let $a\in C(\overline\Omega)$, and let $u,v \in H^{2,1}(Q)$ satisfy (1.2) and (1.3)
respectively with $\partial_tu, \partial_tv \in H^{2,1}(Q)$, and let (1.4) hold true.
Assume
\\
{\bf Condition 1:} there exists a function $\theta \in C[1,\infty)$ satisfying
$$
\lim_{\eta\to\infty} \frac{\theta(\eta)}{\eta^{\frac{2}{3}}} = +\infty
$$
and
$$
\sum_{k=1}^{\infty} e^{\theta(\lambda_k)} \Vert P_ka\Vert^2_{L^2(\Omega)} < \infty
\quad \mbox{or}
\quad \sum_{k=1}^{\infty} e^{\theta(\mu_k)} \Vert Q_ka\Vert^2_{L^2(\Omega)} < \infty.
\eqno{(1.7)}
$$
Then,
$$
u=v \quad \mbox{on $\gamma \times (0,T)$}
$$
implies $c_1=c_2$ on $\overline{\omega}$.
}
As is seen from the proof, without the assumption (1.7), we can prove
at least the coincidence of the eigenvalues of $A_1$ and $A_2$ of
non-vanishing modes:
\\
{\bf Corollary.}
\\
{\it
Let $u=v$ on $\gamma \times (0,T)$. Then
$$
\{ \lambda_k;\, k\in \mathbb{N}, \, P_ka \ne 0 \quad \mbox{in $\Omega$}\}
= \{ \mu_k;\, k\in \mathbb{N}, \, Q_ka \ne 0 \quad \mbox{in $\Omega$}\}
$$
and if $P_ka \ne 0$ in $\Omega$ for $k\in \mathbb{N}$, then
$$
P_ka = Q_ka \quad \mbox{on $\gamma$},
$$
after suitable re-numbering of $k$.
}
The corollary means that $u=v$ on $\gamma \times (0,T)$ implies
that there exists $N_1 \in \mathbb{N} \cup \{\infty\}$ such that we can find
sequences $\{i_k\}_{1\le k\le N_1}, \, \{j_k\}_{1\le k\le N_1}
\subset \mathbb{N}$ satisfying
$$
\left\{ \begin{array}{rl}
& \lambda_{i_k} = \mu_{j_k}, \quad P_{i_k}a \ne 0, \,\,
Q_{j_k}a \ne 0 \quad \mbox{in $\Omega$},\quad
P_{i_k}a = Q_{j_k}a \quad \mbox{on $\gamma$}
\quad \mbox{for $1\le k \le N_1$}, \\
& P_ia = 0 \quad\mbox{in $\Omega$ if $i\not\in \{ i_k\}_{1\le k\le N_1}$},
\quad
Q_ja = 0 \quad\mbox{in $\Omega$ if $j\not\in \{ j_k\}_{1\le k\le N_1}$}.
\end{array}\right.
$$
We remark that even in the case $N_1=\infty$, we may have
$\{ i_k\}_{1\le k \le N_1} \subsetneqq \mathbb{N}$.
\\
{\bf Remark.}
In (1.7), consider a function $\theta(\eta) = \eta^p$.
Theorem 1 asserts the uniqueness
if the initial value $a$ is smooth in the sense of (1.7).
We emphasize that in (1.7), the exponent $p$ need only be
greater than $\frac{2}{3}$. If we assume
the stronger condition $p=1$, that is,
$$
\sum_{k=1}^{\infty} e^{\sigma \lambda_k} \Vert P_ka\Vert^2_{L^2(\Omega)} < \infty \quad \mbox{and}
\quad \sum_{k=1}^{\infty} e^{\sigma \mu_k} \Vert Q_ka\Vert^2_{L^2(\Omega)} < \infty
\eqno{(1.8)}
$$
with some constant $\sigma>0$, then the uniqueness is trivial
because we can extend the solutions $u(\cdot,t)$ and $v(\cdot,t)$
to the time interval $(-\delta, 0)$ with small $\delta > 0$.
Indeed, since $\sum_{k=1}^{\infty} \vert e^{\frac{\sigma}{2} \lambda_k}\vert^2 \Vert P_ka\Vert^2_{L^2(\Omega)}
< \infty$, we can verify that $u(\cdot,t) = \sum_{k=1}^{\infty} e^{-\lambda_kt}P_ka$ in
$L^2(\Omega)$ for $t > -\frac{\sigma}{2}$. Therefore we can extend
$u(\cdot,t)$ to $\left(-\frac{\sigma}{2},\, 0\right)$ in $L^2(\Omega)$ and also
to $(-\delta, 0)$ with sufficiently small $\delta>0$. The extension of
$v(\cdot,t)$ is similarly done.
Therefore, under (1.8), our inverse problem is reduced to the case where
the spatial data of $u,v$ are given at an intermediate time of the whole
time interval under consideration,
which has been already solved in Bukhgeim and Klibanov
\cite{BK}, Imanuvilov and Yamamoto \cite{IY98}, Isakov \cite{Is}.
The condition corresponding to the case $p=\frac{1}{2}$ in (1.7)
appears in the controllability of a parabolic equation.
We know that a function $a(\cdot)$ in $\Omega$ satisfying the condition (1.7)
with $\theta(\eta) = \eta^{\frac{1}{2}}$ and $c_1\equiv 0$
belongs to the reachable set
$$
\{ u(\cdot,0);\, b\in L^2(\Omega),\, h \in L^2(\partial\Omega \times (-\tau,0))\},
$$
where $u$ is the solution to
$$
\left\{ \begin{array}{rl}
& \partial_tu = \Delta u \quad \mbox{in $\Omega \times (-\tau,0)$}, \\
& \partial_{\nu}u = h \quad \mbox{on $\partial\Omega \times (-\tau,0)$},\\
& u(\cdot,-\tau) = b \quad \mbox{in $\Omega$}
\end{array}\right.
$$
(Theorem 2.3 in Russell \cite{R}).
See also (1.9) stated below.
\\
The article is composed of four sections. In Section 2, we show
Carleman estimates for elliptic operators. In Section 3, we prove the
uniqueness for our inverse problem first under a condition:
there exists a constant $\sigma_1>0$ such that
$$
\sum_{k=1}^{\infty} e^{\sigma_1 \lambda_k^{1/2}}\Vert P_ka\Vert^2_{L^2(\Omega)}
+ \sum_{k=1}^{\infty} e^{\sigma_1 \mu_k^{1/2}} \Vert Q_ka\Vert^2_{L^2(\Omega)} < \infty,
\eqno{(1.9)}
$$
and next by proving that (1.7) yields (1.9), we complete
the proof of Theorem 1.
In Section 4, we prove a Carleman estimate used for deriving (1.9) from
(1.7).
\section{Key Carleman estimate}
The proof of Theorem 1 relies essentially on the reduction of our
inverse parabolic problem to an inverse elliptic problem.
After the reduction, we prove the uniqueness by the method developed in
\cite{BK}, \cite{HIY}, \cite{IY98},
and so we need a relevant Carleman estimate
for an elliptic equation.
For the statement of Carleman estimate, we introduce a weight function.
We arbitrarily fix $y \in \omega$. For $y$, we construct a non-empty
domain $\omega_y \subset \Omega$ satisfying
$$
\left\{ \begin{array}{rl}
&\mbox{(i)} \,\,y \in \omega_y, \quad \omega_y \subset \omega. \\
&\mbox{(ii)} \,\,\mbox{$\partial\omega_y$ is of $C^{\infty}$-class.}\\
&\mbox{(iii)} \,\, \mbox{$\partial\omega_y \cap \Gamma$ has interior points
in the topology of $\partial\Omega$.} \\
&\mbox{(iv)} \,\, \vert a(x)\vert > 0 \quad \mbox{for all
$x\in \overline{\omega_y}$}.
\end{array}\right.
\eqno{(2.1)}
$$
Indeed, since $y\in \omega$, by the definition of $\omega$, we can find
$y_*\in \Gamma$ and a smooth curve $\ell \in C^{\infty}[0,1]$ such that
$\ell(1) = y$ and $\ell(0) = y_*$, $\ell(\xi) \in \omega$ for
$0<\xi \le 1$. Then as $\omega_y$, we can choose
a sufficiently thin neighborhood of the curve $\{\ell(\xi);\, 0<\xi\le 1\}$
which is included in $\omega$.
For the proof of Theorem 1, we will show that if $y\in\omega$, then $y\notin \mbox{supp}\, f$. This of course implies that $f=0$ in $\omega$.
First we establish a Carleman estimate in $\omega_y \times (-\tau,\tau)$ with
a constant $\tau > 0$.
We know that there exists a function
$d\in C^2(\overline{\omega_y})$ such that
$$
\vert \nabla d(x)\vert > 0 \quad \mbox{for $x\in \overline{\omega_y}$}, \quad
d(x) > 0 \quad \mbox{for $x\in \omega_y$}, \quad
d(x) = 0 \quad \mbox{for $x\in \partial\omega_y\setminus \Gamma$}.
\eqno{(2.2)}
$$
The existence of such $d$ is proved for example in Imanuvilov \cite{Im}.
See also Fursikov and Imanuvilov \cite{FI}.
For a constant $\tau>0$, we set
$$
{\mathcal Q}_\tau:= \omega_y \times (-\tau, \, \tau),
$$
$\partial_0 := \frac{\partial}{\partial t}$, and
$$
\alpha(x,t) := e^{\lambda(d(x) - \beta t^2)},
\quad (x,t)\in {\mathcal Q}_{\tau} \eqno{(2.3)}
$$
with an arbitrarily chosen constant $\beta > 0$ and sufficiently large
fixed $\lambda > 0$.
Then
\\
{\bf Lemma 2.1 (elliptic Carleman estimate).}
\\
{\it
There exists a constant $s_0 > 0$ such that we can find a constant
$C>0$ such that
\begin{align*}
& \int_{{\mathcal Q}_\tau} \left\{ \frac{1}{s}\sum_{i,j=0}^n \vert \partial_i\partial_jw\vert^2
+ s\vert \partial_tw\vert^2 + s\vert \nabla w\vert^2
+ s^3\vert w\vert^2\right\} e^{2s\alpha} dxdt\\
\le& C\int_{{\mathcal Q}_\tau} \vert \partial_t^2w - A_1w\vert^2 e^{2s\alpha} dxdt
for all $s \ge s_0$ and $w\in H^2_0({\mathcal Q}_\tau)$.
}
\\
Here we recall that $-A_1w = \sum_{i,j=1}^n \partial_i(a_{ij}(x)\partial_jw)
+ c_1(x)w$.
The constants $s_0>0$ and $C>0$ can be chosen uniformly provided that
$\Vert c_1\Vert_{L^{\infty}(\omega)} \le M$ with an arbitrarily fixed constant
$M>0$.
We note that Lemma 2.1 is a Carleman estimate for the elliptic operator
$\partial_t^2 - A_1$. Since $(\nabla \alpha, \, \partial_t\alpha)
= (\nabla d, \, -2\beta t) \ne (0,0)$ on $\overline{{\mathcal Q}_{\tau}}$ by (2.2),
the proof of
the lemma relies directly on integration by parts and is standard, similar for
example to the proof of Lemma 7.1 (p.186) in
Bellassoued and Yamamoto \cite{BY}.
See also H\"ormander \cite{H}, Isakov \cite{Is}, where the estimation
of the second-order derivatives is not included but can be
derived by the a priori estimate for the elliptic boundary value problem.
For the proof of Theorem 1, we further need another Carleman estimate
in $\Omega$ for an elliptic
equation. We can find $\rho\in C^2(\overline{\Omega})$ such that
$$
\rho(x) > 0 \quad \mbox{for $x \in\Omega$}, \quad
\vert \nabla \rho(x) \vert > 0 \quad \mbox{for $x \in\overline{\Omega}$}, \quad
\partial_{\nu_A}\rho(x) \le 0 \quad \mbox{for $x\in \partial\Omega\setminus \gamma$}.
\eqno{(2.4)}
$$
The construction of $\rho$ can be found in Lemma 2.3 in \cite{IY98} for
example.
Moreover, fixing a constant $\lambda>0$ large, we set
$$
\psi(x):= e^{\lambda(\rho(x) - 2\Vert \rho\Vert_{C(\overline{\Omega})})}, \quad
x\in \Omega.
$$
Then
\\
{\bf Lemma 2.2.}
\\
{\it
There exist constants $s_0>0$ and $C>0$ such that
$$
\int_{\Omega} (s^3\vert g\vert^2 + s\vert \nabla g\vert^2)e^{2s\psi(x)} dx
\le C\int_{\Omega} \vert A_2g\vert^2 e^{2s\psi(x)} dx
+ Cs^3\int_{\gamma} (\vert g\vert^2 + \vert \nabla g\vert^2) e^{2s\psi} dS
$$
for all $s>s_0$ and $g \in H^2(\Omega)$ satisfying $\partial_{\nu_A}g=0$
on $\partial\Omega$.
}
We postpone the proof of Lemma 2.2 to Section 4.
\section{Proof of Theorem 1}
We divide the proof into four steps.
In Steps 1-3, we assume the condition (1.9) to prove the
conclusion on uniqueness in Theorem 1.
\\
{\bf First Step.}
\\
We write $u(t):= u(\cdot,t)$ and $v(t):= v(\cdot,t)$ for $t>0$.
We recall that
$$
u(t) = \sum_{k=1}^{\infty} e^{-\lambda_kt}P_ka, \quad
v(t) = \sum_{k=1}^{\infty} e^{-\mu_kt}Q_ka \quad \mbox{in $H^2(\Omega)$ for $t>0$.}
$$
We can choose subsets $\mathbb{N}_1, \mathbb{M}_1 \subset \mathbb{N}$ such that
$$
\mathbb{N}_1:= \{k \in \mathbb{N};\, P_ka \not\equiv 0 \quad \mbox{in $\Omega$}\}, \quad
\mathbb{M}_1:= \{k \in \mathbb{N};\, Q_ka \not\equiv 0 \quad \mbox{in $\Omega$}\}.
\eqno{(3.1)}
$$
We note that $\mathbb{N}_1 = \mathbb{N}$ or $\mathbb{M}_1 = \mathbb{N}$ may happen.
We can renumber the sets $\mathbb{N}_1$ and $\mathbb{M}_1$ as
$$
\mathbb{N}_1 = \{1, \ldots, N_1\}, \quad \mathbb{M}_1=\{ 1, \ldots, M_1\},
$$
where $N_1 = \infty$ or $M_1 = \infty$ may occur.
By $^{\sharp}\mathbb{N}_1$ we mean the cardinal number of the set $\mathbb{N}_1$.
Note that
$$
\lambda_1< \lambda_2 < \cdots < \lambda_{N_1} \quad \mbox{if $^{\sharp}\mathbb{N}_1 < \infty$}, \quad
\lambda_1< \lambda_2 < \cdots \quad \mbox{if $^{\sharp}\mathbb{N}_1 = \infty$}
$$
and
$$
\mu_1< \mu_2 < \cdots < \mu_{M_1} \quad \mbox{if $^{\sharp}\mathbb{M}_1 < \infty$}, \quad
\mu_1< \mu_2 < \cdots \quad \mbox{if $^{\sharp}\mathbb{M}_1 = \infty$}.
$$
Assuming that $u=v$ on $\gamma \times (0,T)$, by the time analyticity
of $u(t)$ and $v(t)$ for $t>0$ (e.g., Pazy \cite{Pa}), we obtain
$$
\sum_{k=1}^{N_1} e^{-\lambda_kt}P_ka = \sum_{k=1}^{M_1} e^{-\mu_kt}Q_ka
\quad \mbox{on $\gamma \times (0,\infty)$}.
\eqno{(3.2)}
$$
We will prove that $\lambda_1 = \mu_1$.
Assume that $\lambda_1 < \mu_1$. Then
$$
P_1a + \sum_{k=2}^{N_1} e^{-(\lambda_k-\lambda_1)t}P_ka
= \sum_{k=1}^{M_1} e^{-(\mu_k-\lambda_1)t}Q_ka
\quad \mbox{on $\gamma \times (0,\infty)$}.
$$
Since $\lambda_k - \lambda_1 > 0$ for $2\le k \le N_1$ and
$\mu_k - \lambda_1 > 0$ for $1\le k \le M_1$, letting $t\to \infty$,
we see that $P_1a=0$ on $\Gamma$.
Therefore
$$
\left\{ \begin{array}{rl}
& (A_1-\lambda_1)P_1a = 0 \quad \mbox{in $\Omega$}, \\
& P_1a\vert_{\Gamma} = 0, \quad \partial_{\nu_A} P_1a\vert_{\partial\Omega} = 0.
\end{array}\right.
$$
The unique continuation for the elliptic equation $A_1P_1a = \lambda_1P_1a$ (see e.g. \cite{H})
yields that
$$
P_1a = 0 \quad \mbox{in $\Omega$}.
$$
This contradicts $1 \in \mathbb{N}_1$.
Thus the inequality $\lambda_1 < \mu_1$ is impossible.
Similarly we can see that the inequality $\lambda_1 > \mu_1$ is impossible.
Therefore $\lambda_1 = \mu_1$ follows.
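The limiting argument used above, extracting the leading coefficient of a Dirichlet series by multiplying with $e^{\lambda_1 t}$ and letting $t\to\infty$, can be checked on toy data; the eigenvalues and coefficients below are invented solely for illustration.

```python
import numpy as np

# Toy Dirichlet series f(t) = sum_k c_k e^{-lambda_k t}: multiplying by
# e^{lambda_1 t} and letting t -> infinity isolates c_1, with error of
# order e^{-(lambda_2 - lambda_1) t}.
lam = np.array([1.0, 2.5, 4.0, 7.3])   # stand-in eigenvalues, lambda_1 smallest
c = np.array([0.8, -1.2, 0.5, 2.0])    # stand-in coefficients ("P_k a" traces)

def f(t):
    return np.sum(c * np.exp(-lam * t))

for t in (5.0, 10.0, 20.0):
    recovered = np.exp(lam[0] * t) * f(t)
    # the remainder decays like e^{-(lambda_2 - lambda_1) t}
    assert abs(recovered - c[0]) < 2.1 * np.exp(-(lam[1] - lam[0]) * t)
```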
By (3.2) and $\lambda_1 = \mu_1$, we have
$$
P_1a - Q_1a
= -\sum_{k=2}^{N_1} e^{-(\lambda_k-\lambda_1)t}P_ka
+ \sum_{k=2}^{M_1} e^{-(\mu_k-\lambda_1)t}Q_ka
\quad \mbox{on $\gamma \times (0,\infty)$}.
$$
Hence, by $\lambda_k-\lambda_1 > 0$ and $\mu_k - \lambda_1 = \mu_k - \mu_1 > 0$ for
all $k \ge 2$, letting $t \to \infty$ we obtain $P_1a = Q_1a$ on $\gamma$.
In view of (3.2), we obtain
$$
\sum_{k=2}^{N_1} e^{-\lambda_kt}P_ka
= \sum_{k=2}^{M_1} e^{-\mu_kt}Q_ka
\quad \mbox{on $\gamma \times (0,\infty)$}.
$$
Repeating the same argument as many times as possible, we reach
$$
N_1 = M_1, \quad \lambda_k = \mu_k, \quad
P_ka = Q_ka \quad \mbox{on $\gamma$ for
$1\le k \le N_1$}. \eqno{(3.3)}
$$
\\
{\bf Second Step.}
We consider two initial boundary value problems for elliptic equations:
$$
\left\{ \begin{array}{rl}
& \partial_t^2w_1 - A_1w_1 = 0 \quad \mbox{in $\Omega\times (0,\tau)$}, \\
& \partial_{\nu_A} w_1 = 0 \quad \mbox{on $\partial\Omega \times (0,\tau)$}, \\
& w_1(x,0) = a(x), \quad \partial_tw_1(x,0) = 0, \quad x\in \Omega
\end{array}\right.
\eqno{(3.4)}
$$
and
$$
\left\{ \begin{array}{rl}
& \partial_t^2w_2 - A_2w_2 = 0 \quad \mbox{in $\Omega\times (0,\tau)$}, \\
& \partial_{\nu_A} w_2 = 0 \quad \mbox{on $\partial\Omega \times (0,\tau)$}, \\
& w_2(x,0) = a(x), \quad \partial_tw_2(x,0) = 0, \quad x\in \Omega.
\end{array}\right.
\eqno{(3.5)}
$$
Since we have the spectral representations by (1.9), we can obtain
$$
e^{-tA_1^{1/2}}a = \sum_{k=1}^{\infty} e^{-\lambda_k^{1/2}t}P_ka, \quad
e^{tA_1^{1/2}}a = \sum_{k=1}^{\infty} e^{\lambda_k^{1/2}t}P_ka \quad
\mbox{in $L^2(\Omega)$ for $t>0$}
$$
and similar representations hold for $e^{\pm tA_2^{1/2}}a$.
Then by the assumption (1.9) on $a$, we see that
$$
w_1(t) = \frac{1}{2}(e^{-tA_1^{1/2}}a + e^{tA_1^{1/2}}a), \quad
w_2(t) = \frac{1}{2}(e^{-tA_2^{1/2}}a + e^{tA_2^{1/2}}a)
$$
belong to $H^2(\Omega)$ for $t \in (0,\tau)$ and satisfy (3.4) and (3.5)
respectively if $\tau>0$ is chosen sufficiently small.
In view of (3.3) and the definition (3.1) of $\mathbb{N}_1$,
the spectral representations imply
\begin{align*}
& w_1(x,t) = \frac{1}{2} \sum_{k=1}^{N_1} (e^{-\lambda_k^{1/2}t}P_ka
+ e^{\lambda_k^{1/2}t}P_ka)
+ \frac{1}{2} \sum_{k\in \mathbb{N} \setminus \{1, \ldots, N_1\}}(e^{-\lambda_k^{1/2}t}P_ka
+ e^{\lambda_k^{1/2}t}P_ka)\\
=& \frac{1}{2} \sum_{k=1}^{N_1} (e^{-\lambda_k^{1/2}t}P_ka
+ e^{\lambda_k^{1/2}t}P_ka) \quad \mbox{in $\Omega\times (0,\tau)$,}
\end{align*}
and
$$
w_2(x,t) = \frac{1}{2} \sum_{k=1}^{N_1} (e^{-\lambda_k^{1/2}t}Q_ka
+ e^{\lambda_k^{1/2}t}Q_ka) \quad \mbox{in $\Omega\times (0,\tau)$}.
$$
Therefore (3.3) yields
$$
w_1 = w_2 \quad \mbox{on $\gamma \times (0,\tau)$.} \eqno{(3.6)}
$$
Now we reduce our inverse problem for the parabolic equations to the one
for elliptic equations for (3.4) and (3.5).
This is the essence of the proof.
\\
{\bf Third Step.}
\\
By (1.9), we can readily verify further regularity $\partial_tw_1, \partial_tw_2
\in H^2(0,\tau;H^2(\Omega))$.
Setting $y:= w_1 - w_2$ and $R:=w_2$ in $\Omega\times (0,\tau)$
and $f:= c_2-c_1$ in $\Omega$, by (3.4)--(3.6) we have
$$
\left\{ \begin{array}{rl}
& \partial_t^2y - A_1y = f(x)R(x,t) \quad \mbox{in $\Omega\times (0,\tau)$}, \\
& \partial_{\nu_A} y = 0 \quad \mbox{on $\partial\Omega \times (0,\tau)$}, \\
& y=0 \quad \mbox{on $\gamma \times (0,\tau)$}, \\
& y(x,0) = \partial_ty(x,0) = 0, \quad x\in \Omega.
\end{array}\right.
\eqno{(3.7)}
$$
Now we will prove that if $(y,f)$ solves problem (3.7), then the fixed point $y$ satisfies $y\notin \mbox{supp}\, f$. Since $y$ was chosen as an arbitrary point of $\omega$, this implies
$$
f=0\quad \mbox{in}\quad \omega.
$$
The argument relies on \cite{IY98}.
We set
$$
\widetilde{y}(x,t) =
\left\{ \begin{array}{rl}
& y(x,t), \quad 0<t<\tau, \\
& y(x,-t), \quad -\tau<t<0
\end{array}\right.
$$
and
$$
\widetilde{R}(x,t) =
\left\{ \begin{array}{rl}
& R(x,t), \quad 0<t<\tau, \\
& R(x,-t), \quad -\tau<t<0.
\end{array}\right.
$$
Then we see that $\widetilde{y} \in H^3(-\tau,\tau;H^2(\omega_y))$ by
$y(\cdot,0) = \partial_ty(\cdot,0) = 0$ in $\omega_y$, and
$$
\left\{ \begin{array}{rl}
& \partial_t^2\widetilde{y} - A_1\widetilde{y} = f(x)\widetilde{R}(x,t)
\quad \mbox{in ${\mathcal Q}_{\tau}:= \omega_y\times (-\tau,\tau)$}, \\
& \widetilde{y}= \partial_{\nu_A} \widetilde{y} = 0 \quad \mbox{on $(\partial\omega_y \cap \Gamma)
\times (-\tau,\tau)$}, \\
& \widetilde{y}(x,0) = \partial_t\widetilde{y}(x,0) = 0, \quad x\in \omega_y.
\end{array}\right.
$$
Setting $\widetilde{z}:= \partial_t\widetilde{y}$, we have $\widetilde{z} \in H^2({\mathcal Q}_{\tau})$ and
$$
\left\{ \begin{array}{rl}
& \partial_t^2\widetilde{z} - A_1\widetilde{z} = f(x)\partial_t\widetilde{R}(x,t)
\quad \mbox{in ${\mathcal Q}_{\tau}$}, \\
& \widetilde{z} = \partial_{\nu_A} \widetilde{z} = 0 \quad \mbox{on $(\partial\omega_y \cap \Gamma)
\times (-\tau,\tau)$}, \\
& \widetilde{z}(x,0) = 0, \quad x\in \omega_y
\end{array}\right.
\eqno{(3.8)}
$$
and
$$
\partial_t\widetilde{z}(x,0) = f(x)a(x), \quad x\in \omega_y. \eqno{(3.9)}
$$
In the weight function $d(x) - \beta t^2$ in Lemma 2.1, for $\tau>0$ we choose
$\beta > 0$ sufficiently large,
so that
$$
\Vert d\Vert_{C(\overline{\omega_y})} - \beta \tau^2 < 0. \eqno{(3.10)}
$$
We choose constants $\delta_1, \delta_2 > 0$ such that
$$
\Vert d\Vert_{C(\overline{\omega_y})} - \beta \tau^2 < 0 < \delta_1 < \delta_2
\quad \mbox{and}\quad d(y)>\delta_2 \eqno{(3.11)}
$$
and we define $\chi \in C^{\infty}(\overline{{\mathcal Q}_{\tau}})$ satisfying
$$
\chi(x,t) =
\left\{ \begin{array}{rl}
& 1, \quad d(x) - \beta t^2 > \delta_2, \\
& 0, \quad d(x) - \beta t^2 < \delta_1.
\end{array}\right.
\eqno{(3.12)}
$$
In particular, $d(y) > \delta_2$ implies
$$
\chi(y,0)=1.
$$
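A cutoff with the two-level property (3.12) can be realized concretely by composing the level value $m = d(x)-\beta t^2$ with a smooth step built from the standard bump function. The sketch below is one generic construction under illustrative values of $\delta_1, \delta_2$; it is not the paper's specific $\chi$.

```python
import numpy as np

# Smooth step: 0 for m <= delta1, 1 for m >= delta2, C^infinity in between,
# built from h(r) = exp(-1/r) for r > 0 (the standard bump construction).
delta1, delta2 = 1.0, 2.0   # illustrative thresholds, as in (3.11)

def smooth_step(m):
    r = (m - delta1) / (delta2 - delta1)
    def h(x):
        # exp(-1/x) for x > 0, and 0 otherwise (np.maximum guards the division)
        return np.where(x > 0, np.exp(-1.0 / np.maximum(x, 1e-300)), 0.0)
    return h(r) / (h(r) + h(1.0 - r))

assert smooth_step(0.5) == 0.0          # below delta1: chi = 0
assert smooth_step(2.5) == 1.0          # above delta2: chi = 1
assert 0.0 < smooth_step(1.5) < 1.0     # smooth transition region
```

Composing this step with $m = d(x)-\beta t^2$ gives a $C^\infty$ function on $\overline{{\mathcal Q}_\tau}$ with exactly the values required in (3.12).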
Setting $z:= \chi \widetilde{z}$ on $\overline{{\mathcal Q}_{\tau}}$, we see that
$$
z = \partial_{\nu_A} z = 0 \quad \mbox{on $\partial\omega_y \times (-\tau, \tau)$}
\eqno{(3.13)}
$$
and
$$
z = \partial_t z = 0 \quad \mbox{on $\omega_y \times \{ \pm \tau\}$}.
\eqno{(3.14)}
$$
Indeed, $(x,t) \in (\partial\omega_y \setminus \Gamma) \times (-\tau, \tau)$
implies
$d(x) - \beta t^2 = -\beta t^2 \le 0 < \delta_1$ by (2.2), and so
the definition (3.12) of $\chi$ yields that
$\chi(x,t) = 0$ in a neighborhood of such $(x,t)$.
For $(x,t) \in (\partial\omega_y \cap \Gamma) \times (-\tau,\tau)$,
by (3.8) we see that
$z(x,t) = \partial_{\nu_A} z(x,t) = 0$, which verifies (3.13).
Moreover, on $\omega_y\times \{ \pm \tau\}$, by (3.11) we have
$$
d(x) - \beta t^2 \le \Vert d\Vert_{C(\overline{\omega_y})} - \beta\tau^2
< 0 < \delta_1,
$$
so that $\chi(x,t) = 0$ in a neighborhood of such $(x,t)$.
Thus (3.14) has been verified.
$\blacksquare$
Consequently, we have shown that $z\in H^2_0({\mathcal Q}_{\tau})$.
Moreover, we can readily obtain
$$
\left\{ \begin{array}{rl}
& \partial_t^2z - A_1z = \chi(\partial_t\widetilde {R})f + R_0(x,t), \quad
(x,t) \in {\mathcal Q}_{\tau}, \\
& z = \vert \nabla z\vert = 0 \quad \mbox{on $\partial {\mathcal Q}_{\tau}$},
\end{array}\right.
\eqno{(3.15)}
$$
where $R_0$ is a linear combination of $\nabla \widetilde{z}$, $\partial_t\widetilde{z}$,
whose coefficients are linear combinations of $\nabla\chi$ and
$\partial_t\chi$. Therefore (3.12) implies
$$
R_0(x,t) \ne 0 \quad \mbox{only if $\delta_1 \le d(x) - \beta t^2
\le \delta_2$}. \eqno{(3.16)}
$$
Therefore we can apply Lemma 2.1 to (3.15) with (3.16):
$$
\int_{{\mathcal Q}_{\tau}} \left( \frac{1}{s}\vert \partial_t^2z\vert^2
+ s\vert \partial_tz\vert^2 \right) e^{2s\alpha} dxdt
\eqno{(3.17)}
$$
\begin{align*}
\le & C\int_{{\mathcal Q}_{\tau}} \vert \chi(\partial_t\widetilde{R})f \vert^2e^{2s\alpha} dxdt
+ C\int_{{\mathcal Q}_{\tau}} \vert R_0(x,t)\vert^2 e^{2s\alpha} dxdt \\
\le & C\int_{{\mathcal Q}_{\tau}}\chi^2 \vert f\vert^2 e^{2s\alpha} dxdt
+ Ce^{2se^{\lambda\delta_2}}
\end{align*}
for all large $s>0$.
On the other hand, since $\partial_tz(\cdot,-\tau) = 0$ in $\omega_y$ by
(3.14), we have
\begin{align*}
& \int_{\omega_y} \vert \partial_tz(x,0)\vert^2 e^{2s\alpha(x,0)} dx
= \int^0_{-\tau} \partial_t\left( \int_{\omega_y} \vert \partial_tz(x,t)\vert^2
e^{2s\alpha(x,t)} dx \right) dt\\
=& \int^0_{-\tau} \int_{\omega_y} \{ 2(\partial_tz)(x,t)\partial_t^2z(x,t)
+ \vert \partial_tz \vert^2 2s(\partial_t\alpha)\} e^{2s\alpha(x,t)} dxdt.
\end{align*}
Since
$$
\vert (\partial_tz)(\partial_t^2z)\vert \le \frac{1}{2}
\left( s\vert \partial_tz\vert^2 + \frac{1}{s}\vert \partial_t^2z\vert^2
\right) \quad \mbox{in $\mathcal{Q}_\tau$},
$$
in terms of (3.17) we obtain
$$
\int_{\omega_y} \vert \partial_tz(x,0)\vert^2 e^{2s\alpha(x,0)} dx
\le C\int_{{\mathcal Q}_{\tau}} \left( \frac{1}{s} \vert \partial_t^2z(x,t)\vert^2
+ s\vert \partial_tz\vert^2 \right) e^{2s\alpha} dxdt
\eqno{(3.18)}
$$
$$
\le C\int_{{\mathcal Q}_{\tau}}\vert \chi\vert^2\vert f\vert^2 e^{2s\alpha}
dxdt + Ce^{2se^{\lambda\delta_2}}
$$
for all large $s>0$.
Moreover, we have
\begin{align*}
& \partial_tz(x,0) = \partial_t(\chi\widetilde{z})(x,0)
= (\partial_t\chi)(x,0)\widetilde{z}(x,0) + \chi(x,0)\partial_t\widetilde{z}(x,0)\\
=& \chi(x,0)f(x)a(x)
\end{align*}
by (3.9) and $\widetilde{z}(x,0) = 0$ for $x \in \omega_y$ in (3.8).
Therefore, in terms of (2.1)-(iv), we obtain
$$
\vert \partial_tz(x,0)\vert \ge C\vert \chi(x,0)f(x)\vert, \quad
x\in \overline{\omega_y}.
$$
Consequently (3.18) implies
$$
\int_{\omega_y} \vert\chi(x,0)\vert^2 \vert f(x) \vert^2 e^{2s\alpha(x,0)} dx
\le C\int_{\mathcal{Q}_{\tau}} \vert \chi f\vert^2 e^{2s\alpha} dxdt
+ Ce^{2se^{\lambda\delta_2}} \eqno{(3.19)}
$$
for all large $s>0$.
Moreover, choosing $\chi$ in (3.12) of the form $\chi(x,t) = \varphi(d(x)-\beta t^2)$
with a nondecreasing function $\varphi$, we have $\chi(x,t) \le \chi(x,0)$ because
$d(x) - \beta t^2 \le d(x)$, and hence
\begin{align*}
&\int_{{\mathcal Q}_{\tau}} \vert \chi f\vert^2 e^{2s\alpha} dxdt
\le \int^{\tau}_{-\tau} \int_{\omega_y} \vert\chi(x,0)f(x)\vert^2 e^{2s\alpha}
dxdt\\
= & \int_{\omega_y} \vert \chi(x,0) f(x)\vert^2 e^{2s\alpha(x,0)}
\left( \int^{\tau}_{-\tau} e^{2s(\alpha(x,t) - \alpha(x,0))} dt\right)dx.
\end{align*}
Here
$$
\int^{\tau}_{-\tau} e^{2s(\alpha(x,t) - \alpha(x,0))} dt
= \int^{\tau}_{-\tau} e^{2se^{\lambda d(x)}(e^{-\lambda\beta t^2} - 1)} dt
\le \int^{\tau}_{-\tau} e^{Cs(e^{-\lambda\beta t^2} - 1)} dt
= o(1)
$$
as $s \to \infty$ by the Lebesgue convergence theorem.
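The dominated-convergence step can also be verified numerically: the integrand is bounded by $1$ and tends to $0$ pointwise for $t\ne 0$, so the integral decays as $s\to\infty$. The constants below are stand-in values chosen only for illustration.

```python
import numpy as np

# I(s) = int_{-tau}^{tau} exp(C*s*(exp(-lambda*beta*t^2) - 1)) dt,
# approximated by a Riemann sum on a fine grid; it shrinks as s grows.
C, lam_beta, tau = 2.0, 1.0, 1.0   # stand-in constants

def I(s, n=200001):
    t = np.linspace(-tau, tau, n)
    g = np.exp(C * s * (np.exp(-lam_beta * t**2) - 1.0))
    return np.sum(g) * (t[1] - t[0])

vals = [I(s) for s in (1.0, 10.0, 100.0, 1000.0)]
assert all(a > b for a, b in zip(vals, vals[1:]))   # decreasing in s
assert vals[-1] < 0.1                               # already small at s = 1000
```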
Hence, (3.19) yields
$$
\int_{\omega_y} \mathbf{v}ert \chi(x,0)f(x)\mathbf{v}ert^2 e^{2s\alpha(x,0)} dx
\le o(1)\int_{\omega_y} \mathbf{v}ert \chi(x,0)f(x)\mathbf{v}ert^2 e^{2s\alpha(x,0)} dx
+ Ce^{2se^{\lambda\delta_2}},
$$
and we can absorb the first term on the right-hand side into the
left-hand side to reach
$$
\int_{\omega_y} \mathbf{v}ert \chi(x,0)f(x)\mathbf{v}ert^2 e^{2s\alpha(x,0)} dx
\le Ce^{2se^{\lambda\delta_2}} \eqno{(3.20)}
$$
for all large $s>0$.
Henceforth we set $B(y,\varepsilon):= \{ x;\, \vert x-y\vert < \varepsilon\}$.
Then, we can choose $\delta_3 > \delta_2$ and
a sufficiently small $\varepsilon > 0$ such that
$B(y,\varepsilon) \subset \omega_y$ and $d(x) \ge \delta_3$ for all
$x\in B(y,\varepsilon)$. This is possible, because $d(y) > \delta_2$ in
(3.11) and $\omega_y$ is an open set including $y$.
We shrink the integration region of the left-hand side of (3.20)
to $B(y,\varepsilon)$ and obtain
$$
\int_{B(y,\varepsilon)} \vert \chi(x,0)f(x)\vert^2 e^{2s\alpha(x,0)} dx
\le Ce^{2se^{\lambda\delta_2}}
$$
for all large $s>0$.
Since $d(x) \ge \delta_3 > \delta_2$ for $x\in B(y,\varepsilon)$, the condition (3.12)
yields $\chi(x,0) = 1$ and $\alpha(x,0) = e^{\lambda d(x)}
\ge e^{\lambda\delta_3}$ for all $x\in B(y,\varepsilon)$. Therefore,
$$
\left( \int_{B(y,\varepsilon)} \vert f(x)\vert^2 dx\right) e^{2se^{\lambda\delta_3}}
\le Ce^{2se^{\lambda\delta_2}},
$$
that is, $\Vert f\Vert^2_{L^2(B(y,\varepsilon))} \le Ce^{-2s(e^{\lambda\delta_3}
- e^{\lambda\delta_2})}$ for all large $s>0$.
Since $\delta_3 > \delta_2$, letting $s\to \infty$,
we see that the right-hand side
tends to $0$, and so $f=0$ in $B(y,\varepsilon)$. Since $y$ is arbitrarily chosen,
we reach $f=c_2-c_1 = 0$ in $\omega$.
Thus the conclusion of Theorem 1 is proved under condition (1.9).
$\blacksquare$
\\
{\bf Fourth Step.}
\\
We will complete the proof of Theorem 1 by demonstrating
that (1.7) implies (1.9).
Without loss of generality, we can assume
$$
\sum_{k=1}^{\infty} e^{\theta(\lambda_k)}\Vert P_ka\Vert^2_{L^2(\Omega)} < \infty.
\eqno{(3.21)}
$$
It suffices to prove that there exists a constant $\sigma_1>0$ such that
$$
\sum_{k=1}^{\infty} e^{\sigma_1\lambda_k^{1/2}}\Vert Q_ka\Vert^2_{L^2(\Omega)} < \infty,
\eqno{(3.22)}
$$
under the assumption that the set of $k\in \mathbb{N}$ such that
$Q_ka\ne 0$ in $\Omega$ is infinite.
For simplicity, we consider the case where
$P_ka \ne 0$ in $\Omega$ for all $k\in \mathbb{N}$. We can argue similarly
in the remaining cases.
Then, by the Corollary, which was already proved in the First Step, we choose
a subset $\mathbb{M}_1 \subset \mathbb{N}$ such that
$$
\{ \lambda_i\}_{i\in\mathbb{N}} = \{ \mu_j\}_{j\in \mathbb{M}_1}, \quad
Q_ja = 0 \quad \mbox{in $\Omega$ for $j\in \mathbb{N} \setminus \mathbb{M}_1$}.
$$
Now it suffices to prove (3.22) in the case where $Q_ka\ne 0$ in
$\Omega$ for all $k\in \mathbb{N}$.
After re-numbering, we can obtain
$$
\lambda_k = \mu_k, \quad P_ka = Q_ka \quad \mbox{on $\gamma$ for all
$k\in \mathbb{N}$}. \eqno{(3.23)}
$$
The trace theorem and the a priori estimate for an elliptic operator
yield
$$
\Vert P_ka\Vert_{H^1(\gamma)} \le C\Vert P_ka\Vert_{H^2(\Omega)}
\le C(\Vert A_1P_ka\Vert_{L^2(\Omega)} + \Vert P_ka\Vert_{L^2(\Omega)})
= C(\lambda_k+1)\Vert P_ka\Vert_{L^2(\Omega)}. \eqno{(3.24)}
$$
Here and henceforth $C>0$ denotes generic constants which are
independent of $s>0$ and $k\in \mathbb{N}$.
Since $A_2Q_k = \lambda_kQ_k$, by (3.23) we apply Lemma 2.2 to have
$$
s^3\int_{\Omega} \vert Q_ka\vert^2 e^{2s\psi} dx
\le C\int_{\Omega} \lambda_k^2\vert Q_ka\vert^2 e^{2s\psi} dx
+ Cs^3\int_{\gamma} (\vert Q_ka\vert^2 + \vert \nabla (Q_ka)\vert^2)
e^{2s\psi} dS \eqno{(3.25)}
$$
$$
\le C\int_{\Omega} \lambda_k^2\vert Q_ka\vert^2 e^{2s\psi} dx
+ Cs^3e^{2sM}\Vert P_ka\Vert^2_{H^1(\gamma)}
$$
for all large $s>0$. Here we set
$M:= \max_{x\in \overline{\gamma}} \psi(x)$.
We choose $s^*>0$ sufficiently large and set
$s_k:= s^*\lambda_k^{\frac{2}{3}}$ for $k\in \mathbb{N}$. Then, using (3.24),
we obtain
\begin{align*}
& ({s^*}^3\lambda_k^2 - C\lambda_k^2)\int_{\Omega} \vert Q_ka\vert^2
e^{2s_k\psi} dx
\le C{s^*}^3\lambda_k^2e^{2s_kM}\Vert P_ka\Vert^2_{H^1(\gamma)}\\
\le& C{s^*}^3\lambda_k^2e^{2s_kM}(\lambda_k+1)^2\Vert P_ka\Vert^2_{L^2(\Omega)}.
\end{align*}
Since $\psi \ge 0$ in $\Omega$ and we can take $s^*>0$ sufficiently large, we see
$$
{s^*}^3\lambda_k^2 \Vert Q_ka\Vert^2_{L^2(\Omega)}
\le C{s^*}^3\lambda_k^4 e^{2s_kM}\Vert P_ka\Vert^2_{L^2(\Omega)},
$$
that is,
$$
\Vert Q_ka\Vert^2_{L^2(\Omega)} \le C\lambda_k^2 e^{C_1\lambda_k^{\frac{2}{3}}}
\Vert P_ka\Vert^2_{L^2(\Omega)},
$$
where we set $C_1:= 2s^*M$. Here we note that $s^*$ and $M$, and so
the constant $C_1$ are independent of $k\in \mathbb{N}$.
Therefore, since we can find a constant $C_2>0$ such that
$\eta^2 e^{C_1\eta^{\frac{2}{3}} + \sigma_1\eta^{\hhalf}}
\le C_2e^{C_2\eta^{\frac{2}{3}}}$ for all $\eta \ge 0$,
we see
$$
\sum_{k=1}^{\infty} e^{\sigma_1\lambda_k^{\hhalf}}\Vert Q_ka\Vert^2_{L^2(\Omega)}
\le C\sum_{k=1}^{\infty} \lambda_k^2e^{C_1\lambda_k^{\frac{2}{3}}+\sigma_1\lambda_k^{\hhalf}}
\Vert P_ka\Vert^2_{L^2(\Omega)}
\le C_2\sum_{k=1}^{\infty} e^{C_2\lambda_k^{\frac{2}{3}}}\Vert P_ka\Vert^2_{L^2(\Omega)}.
$$
Moreover, $\lim_{k\to\infty} \frac{\theta(\lambda_k)}{\lambda_k^{\frac{2}{3}}}
= \infty$ yields that for the constant $C_2>0$ we can choose $N\in \mathbb{N}$
such that $C_2\lambda_k^{\frac{2}{3}} \le \theta(\lambda_k)$ for $k \ge N$.
Consequently,
$$
\sum_{k=N}^{\infty} e^{\sigma_1\lambda_k^{\hhalf}}\Vert Q_ka\Vert^2_{L^2(\Omega)}
\le C\sum_{k=N}^{\infty} e^{\theta(\lambda_k)}\Vert P_ka\Vert^2_{L^2(\Omega)}
< \infty,
$$
and so
$$
\sum_{k=1}^{\infty} e^{\sigma_1\lambda_k^{\hhalf}}\Vert Q_ka\Vert^2_{L^2(\Omega)}
< \infty.
$$
Thus, in view of (3.21), the proof of Theorem 1.1 is complete.
$\blacksquare$
\section{Appendix: Proof of Lemma 2.2}
We can prove the lemma by integration by parts, similarly to
Lemma 7.1 (p.186) in Bellassoued and Yamamoto \cite{BY} for example, but
here we derive it from a Carleman estimate for the parabolic equation
by Imanuvilov \cite{Im}.
We set $Q:= \Omega\times (0,T)$.
We choose $\ell \in C^{\infty}[0,T]$ such that
$$
\left\{ \begin{array}{rl}
& \ell(t) = 1 \quad \mbox{for $\frac{T}{4}\le t\le \frac{3}{4}T$},\\
& \ell(0) = \ell(T) = 0,\\
& \mbox{$\ell$ is strictly increasing on $\left[ 0, \, \frac{T}{4}\right]$
and strictly decreasing on $\left[ \frac{3}{4}T, \,T\right]$}.
\end{array}\right.
\eqno{(4.1)}
$$
In particular, $\ell(t) \le 1$ for $0\le t\le T$. Choosing $\lambda>0$
sufficiently large, we set
$$
\alpha(x,t) := \frac{e^{\lambda\rho(x)} - e^{2\lambda\Vert \rho\Vert_{C(\overline{\Omega})}}}
{\ell(t)}, \quad
\varphi(x,t) := \frac{e^{\lambda\rho(x)}}{\ell(t)}, \quad (x,t)\in \Omega\times (0,T).
Then we know
\\
{\bf Lemma 4.1}
\\
{\it
There exist constants $s_0>0$ and $C>0$ such that
\begin{align*}
& \int_Q (s\varphi\vert \nabla U\vert^2 + s^3\varphi^3\vert U\vert^2)
e^{2s\alpha} dxdt \\
\le& C\int_Q \vert \partial_tU - A_2U\vert^2 e^{2s\alpha} dxdt
+ C\int^T_0\int_{\gamma} (\vert \partial_tU\vert^2 + s\varphi\vert \nabla U\vert^2
+ s^3\varphi^3\vert U\vert^2) e^{2s\alpha} dSdt
\end{align*}
for all $s \ge s_0$ and $U\in H^{2,1}(Q)$ satisfying
$\ppp_{\nu_A} U = 0$ on $\partial\Omega \times (0,T)$.
}
The proof is found in Chae, Imanuvilov and Kim \cite{CIK}.
We apply Lemma 4.1 to $U(x,t):= g(x)$, where $g$ satisfies
$\ppp_{\nu_A} g = 0$ on $\partial\Omega$, to obtain
$$
\int_Q (s\varphi(x,t)\vert \nabla g(x)\vert^2 + s^3\varphi^3(x,t)\vert g(x)\vert^2)
e^{2s\alpha(x,t)} dxdt
\eqno{(4.2)}
$$
$$
\le C\int_Q \vert A_2g\vert^2 e^{2s\alpha(x,t)} dxdt
+ C\int^T_0 \int_{\gamma} (s\varphi(x,t)\vert \nabla g(x)\vert^2
+ s^3\varphi^3(x,t)\vert g(x)\vert^2) e^{2s\alpha} dSdt
for all $s \ge s_0$.
Moreover, in terms of (4.1) and $e^{\lambda\rho(x)}
= e^{2\lambda\Vert \rho\Vert_{C(\overline{\Omega})}}\psi(x)$ for $x\in \Omega$,
we have
$$
\int_Q (s\varphi(x,t)\vert \nabla g(x)\vert^2 + s^3\varphi^3(x,t)\vert g(x)\vert^2)
e^{2s\alpha(x,t)} dxdt
\eqno{(4.3)}
$$
$$
\ge \int^{\frac{3}{4}T}_{\frac{T}{4}} \int_{\Omega}
(se^{\lambda\rho(x)}\vert \nabla g(x)\vert^2
+ s^3e^{3\lambda\rho(x)}\vert g(x)\vert^2)
\exp( 2s(e^{\lambda\rho(x)} - e^{2\lambda\Vert \rho\Vert_{C(\overline\Omega)}}) ) dxdt
$$
$$
\ge C\frac{T}{2}
\int_{\Omega} (s\vert \nabla g(x)\vert^2 + s^3\vert g(x)\vert^2)
e^{2s(e^{2\lambda\Vert\rho\Vert_{C(\overline\Omega)}}\psi(x))} dx
e^{-2se^{2\lambda\Vert \rho\Vert_{C(\overline\Omega)}}}
$$
$$
\ge C\frac{T}{2}
\int_{\Omega} (s\vert \nabla g(x)\vert^2 + s^3\vert g(x)\vert^2)
e^{2s\psi(x)} dx
e^{-2se^{2\lambda\Vert \rho\Vert_{C(\overline\Omega)} }}.
$$
Here $C>0$ depends on $\lambda$ but not on $s>0$.
By $e^{2s\alpha(x,t)} \le 1$ in $Q$, (4.2) and (4.3), we obtain
$$
C\frac{T}{2}\int_{\Omega} (s\vert \nabla g(x)\vert^2 + s^3\vert g(x)\vert^2)
e^{2s\psi(x)} dx e^{-2se^{2\lambda\Vert \rho\Vert_{C(\overline\Omega)}}}
\eqno{(4.4)}
$$
$$
\le C\int_Q \vert A_2g\vert^2 e^{2s\alpha(x,t)} dxdt
+ C\int^T_0 \int_{\gamma} (s\varphi(x,t)\vert \nabla g(x)\vert^2
+ s^3\varphi^3(x,t)\vert g(x)\vert^2)
e^{2s\alpha(x,t)} dSdt.
$$
Since $\sup_{(x,t)\in \gamma \times (0,T)}
\vert (s\varphi)^ke^{2s\alpha(x,t)}\vert
< \infty$ for $k=1,3$ and
$$
e^{2s\psi(x)} = e^{2se^{\lambda(\rho(x) - 2\Vert \rho\Vert_{C(\overline{\Omega})})}}
\ge e^{2se^{-\lambda\Vert \rho\Vert_{C(\overline\Omega)}}},
$$
we can find a constant $C_1 = C_1(\lambda) > 0$ such that
$$
(s\varphi(x,t))^k e^{2s\alpha(x,t)} \le C_1e^{2s\psi(x)}, \quad
(x,t) \in \gamma \times (0,T).
$$
Therefore
\begin{align*}
& \int_{\Omega} (s\vert \nabla g\vert^2 + s^3\vert g\vert^2)
e^{2s\psi(x)} dx \\
\le &Ce^{2se^{2\lambda\Vert \rho\Vert_{C(\overline{\Omega})}}}
\left( \int_{\Omega} \vert A_2g\vert^2 e^{2s\psi} dx
+ \int_{\gamma} (s\vert \nabla g\vert^2 + s^3\vert g\vert^2)
e^{2s\psi} dS\right).
\end{align*}
Combining (4.2)--(4.4), we complete the proof of
Lemma 2.2.
$\blacksquare$
{\bf Acknowledgements.}
The work was supported by Grant-in-Aid for Scientific Research (A) 20H00117
of Japan Society for the Promotion of Science.
\end{document}
\begin{document}
\title{Automatic parametrization and mesh deformation for CFD optimization\\{\sc a technical report}}
{\it Notice: This is a stub of a technical report. It will be extended in future revisions.}
\begin{abstract}
We present an automatic and memory-efficient method of morphing-based parametrization of shapes for CFD optimization. The method is based on Kriging and Radial Basis Function interpolation.
\end{abstract}
\section{Introduction}
In recent years CFD optimization has gained wide recognition as a design technique. With the increased use of adjoint methodologies, shape optimization for fluid flow problems is destined to be a part of industrial-scale product development in the coming years. For a survey of CFD optimization methods we refer the reader to \cite{mohammadi_applied_2001,jameson_aerodynamic_2003}.
In all optimization problems it is vital to construct a parametrization of the design space. In many cases the parametrization of shape is achieved by application of morphing (so called morphing box approach) to the surface geometry. In this paper we present a morphing technique in which Radial Basis Functions are used for point displacements and morphing points are selected through a maximum variance criterion.
\section{Method details}
\subsection{Interpolation}
Let us select a covariance function $\kappa$ that is positive-definite, for example:
\[\kappa(d)=e^{-\frac{d^2}{2\theta^2}}\]
Let us also define the covariance between points of $\mathbb{R}^3$ as $K(x,y)=\kappa(\|x-y\|)$, and extend the definition to sets of $\mathbb{R}^3$ points:
\[K\left((x_1,x_2,\cdots,x_n),(y_1,y_2,\cdots,y_m)\right)=\left[\begin{array}{cccc}
K(x_1,y_1)& K(x_1,y_2)&\cdots&K(x_1,y_m)\\
K(x_2,y_1)& K(x_2,y_2)&\cdots&K(x_2,y_m)\\
\vdots&\vdots&\ddots&\vdots\\
K(x_n,y_1)& K(x_n,y_2)&\cdots&K(x_n,y_m)\\\end{array}\right]\]
Now let us select a set of $m$ points $M$, which we will call morphing nodes. For a fixed $M$ and a vector of displacements $d$ (a matrix of dimension $m\times 3$), we can define a displacement function $\hat{m}:\mathbb{R}^3\rightarrow\mathbb{R}^3$:
\[\hat{m}(x) = d^T K(M,M)^{-1} K(M,x)\]
As the displacement function is defined everywhere in $\mathbb{R}^3$, we can use it for morphing both the surface geometry and the volume mesh. It is important to notice that the smoothness of the displacement depends solely on the smoothness of $\kappa$. For increased control of smoothness, the Mat\'ern function can be chosen.
Additionally, based on the Kriging statistical approach, let us define the posterior variance function $\hat{\sigma^2}:\mathbb{R}^3\rightarrow\mathbb{R}$:
\[\hat{\sigma^2}(x) = \kappa(0) - K(x,M) K(M,M)^{-1} K(M,x)\]
The variance function can be interpreted as a measure of the possible displacement that is ``not parametrized''. It is low in regions where the morphing nodes have high influence and close to $\kappa(0)=1$ in regions where they have virtually no influence.
\subsection{Algorithm}
Let $P$ be the set of all points in the mesh and $S\subset P$ be the points of the surface mesh. The parametrization is constructed by the following point-selection algorithm:
\begin{enumerate}
\item Construct the function $\hat{\sigma^2}$ for $M$.\label{step:loop}
\item Find the point $x\in S$ with the maximal $\hat{\sigma^2}(x)$.
\item If the maximal value of the variance is below a desired level (or the number of morphing nodes is satisfactory), go to step~\ref{step:out}.
\item Add $x$ to $M$ and go to step~\ref{step:loop}.
\item Construct and save matrix $W = K(M,M)^{-1} K(M,P)$\label{step:out}
\end{enumerate}
After this pre-calculation of the parametrization we can calculate the morphing displacement of all the mesh nodes by multiplying a displacement matrix $d$ by the matrix $W$:
\[\hat{m}(P)=d^TW\]
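To make the construction concrete, the following is a minimal Python sketch of the greedy maximum-variance node selection and the morphing step (an illustrative implementation, not the report's code; the Gaussian covariance, the value of $\theta$, and all function names are our assumptions):

```python
import numpy as np

def kappa(d, theta=0.3):
    """Gaussian covariance; kappa(0) = 1."""
    return np.exp(-d**2 / (2 * theta**2))

def K(X, Y, theta=0.3):
    """Covariance matrix between two point sets of shapes (n,3) and (m,3)."""
    d = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=2)
    return kappa(d, theta)

def select_morphing_nodes(S, n_nodes):
    """Greedy max-variance selection of morphing nodes among surface points S."""
    idx = [0]                              # seed with an arbitrary first node
    for _ in range(n_nodes - 1):
        M = S[idx]
        KMM_inv = np.linalg.inv(K(M, M))
        KMS = K(M, S)                      # (m, |S|)
        # posterior variance kappa(0) - K(x,M) K(M,M)^{-1} K(M,x) at every x in S
        var = kappa(0.0) - np.einsum('ij,ji->i', KMS.T @ KMM_inv, KMS)
        idx.append(int(np.argmax(var)))
    return S[idx]

def morph(P, M, d):
    """Displace all mesh points P given displacements d at morphing nodes M."""
    W = np.linalg.solve(K(M, M), K(M, P))  # precomputable (m, |P|) matrix
    return P + (d.T @ W).T
```

By the interpolation property of the RBF system, points located exactly at morphing nodes move by exactly their prescribed displacements, and the variance vanishes there.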
\subsection{Fixing}
In many real-world applications, we need to fix parts of the geometry, which cannot be morphed. The presented method can be easily extended to accommodate this need. For a subset $F\subset\mathbb{R}^3$ of the space, we can define a function $d_F(x)$ which is the distance of a point $x$ from the set $F$ (and zero for points inside the set). Now let us define a function $f$ as:
\[f(x) = \kappa(0)-\kappa(d_F(x))\]
We can construct a new $K$ which will take into account that all points in $F$ are fixed:
\[K(x,y)=\kappa(\|x-y\|)f(x)f(y)\]
The resulting $K$ is still positive-definite. It is important to notice that this construction works for any nonnegative $f$ which is $0$ on the set of points that are to be fixed and tends to $1$ away from it. The function $f$ is constructed from $\kappa$ to preserve the influence radius $\theta$ and the smoothness of $\kappa$ across the whole parametrization.
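As an illustration, the fixing construction can be sketched as follows (a hypothetical helper; `dist_to_F` and the kernel choice are our assumptions, not part of the report):

```python
import numpy as np

def fixed_kernel(kappa, dist_to_F):
    """Covariance modified so that points of the fixed set F cannot move.

    `dist_to_F(x)` returns the distance of x from F (0 inside F).
    f(x) = kappa(0) - kappa(d_F(x)) vanishes on F and tends to kappa(0)=1
    far away from it, so the kernel (and hence any displacement) is zero on F."""
    def f(x):
        return kappa(0.0) - kappa(dist_to_F(x))
    def K(x, y):
        return kappa(np.linalg.norm(x - y)) * f(x) * f(y)
    return K
```

Any point of $F$ then has zero covariance with every other point, so the interpolated displacement field vanishes there identically.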
\section{Applications}
{\it This section will be extended in future revisions}
\section{Summary}
A method for automatic parametrization of shape in CFD optimization cases was presented in detail. The approach can be used for morphing both the surface mesh and the volume mesh, which makes it very useful for real-world optimization problems.
\end{document}
\begin{document}
\title{Imaging of complex-valued tensors for two-dimensional Maxwell's equations}
\begin{abstract}
This paper concerns the imaging of a complex-valued anisotropic tensor $\gamma=\sigma+\i\omega\varepsilon$ from knowledge of several internal magnetic fields $H$, where $H$ satisfies the anisotropic Maxwell system on a bounded domain $X\subset\mathbb{R}^2$ with prescribed boundary conditions on $\partial X$. We show that $\gamma$ can be uniquely reconstructed with a loss of two derivatives from errors in the acquisition of $H$. A minimum number of five {\em well-chosen} functionals guarantees a local reconstruction of $\gamma$ in dimension two. The explicit inversion procedure is presented in several numerical simulations, which demonstrate the influence of the choice of boundary conditions on the stability of the reconstruction. This problem finds applications in the medical imaging modalities Current Density Imaging and Magnetic Resonance Electrical Impedance Tomography.
\end{abstract}
\section{Introduction}
The electrical properties of a biological tissue are characterized by $\gamma = \sigma+\i\omega\varepsilon$, where $\sigma$ and $\varepsilon$ denote the conductivity and the permittivity. These properties, which indicate the condition of tissues, can provide important diagnostic information. Extensive studies have been made to produce medical images of the inside of the human body by applying currents to probe its electrical properties. This technique is called Electrical Impedance Tomography (EIT). This imaging technique displays high contrast between healthy and non-healthy tissues. However, EIT uses boundary measurements of current-voltage data and suffers from very low resolution capabilities. This leads to an inverse boundary value problem (IBVP) called the Calder\'{o}n problem, which is severely ill-posed and unstable \cite{Sylvester1987,Uhlmann2009}. For IBVPs in electrodynamics, we refer the reader to \cite{Caro2009, Kenig2011, Ola1993, Ola1996, Somersalo1992}. Moreover, well-known obstructions show that anisotropic admittivities cannot be uniquely reconstructed from boundary measurements, see \cite{Kohn1984,Uhlmann2009}.
Several recent medical imaging modalities, often called coupled-physics or hybrid imaging modalities, aim to couple a high-contrast modality with a high-resolution modality. These new imaging techniques are two-step processes. In the first step, one uses boundary measurements to reconstruct internal functionals inside the tissue. In the second step, one reconstructs the electrical properties of the tissue from the given internal data, which greatly improves the resolution of quantitative reconstructions. An incomplete list of such techniques includes ultrasound-modulated optical tomography, magnetic resonance electrical impedance tomography, photo-acoustic tomography and transient elastography. We refer the reader to \cite{Ammari2008, Bal2012e, Guo2014, Bal2010, Kuchment2011a, Kuchment2011, Monard2012a, Nachman2009} for more details. For other techniques in inverse problems, such as inverse scattering, we refer the reader to \cite{Kabanikhin2011, Nivikov2005, Takhtadzhan1979}.
In this paper we are interested in the hybrid inverse problem of reconstructing $(\sigma, \varepsilon)$ in the Maxwell's system from the internal magnetic fields $H$. Internal magnetic fields can be obtained using a Magnetic Resonance Imaging (MRI) scanner, see \cite{Ider1997} for details. The explicit reconstructions we propose require that all components of the magnetic field $H$ be measured. This may be challenging in many practical settings as it requires a rotation of the domain being imaged or of the MRI scanner. The reconstruction of $(\sigma, \varepsilon)$ from knowledge of only some components of $H$, ideally only one component for the most practical experimental setup, is open at present. We assume that the above first step is done and we are given the data of the internal magnetic fields in the domain.
In the isotropic case, a reconstruction for the conductivity was given in \cite{Seo2012}. In \cite{Guo2013b}, an explicit reconstruction procedure was derived for an arbitrary anisotropic, complex-valued tensor $\gamma= \sigma+\i\omega\varepsilon$ in the Maxwell's equations in $\mathbb{R}^3$. This explicit reconstruction method requires that some matrices constructed from measurements satisfy appropriate conditions of linear independence. In the present work, we provide numerical simulations in two dimensions to demonstrate the reconstruction procedure for both smooth and rough coefficients.
The rest of the paper is organized as follows. The main results and the reconstruction formulas are presented in Section \ref{main results}. The reconstructibility hypothesis is proved in Section \ref{Fulfilling Hypothesis}. The numerical implementations of the algorithm with synthetic data are shown in Section \ref{num simu}. Section \ref{se:conclu} gives some concluding remarks.
\section{Statements of the main results}\label{main results}
\subsection{Modeling of the problem}
Let $X$ be a simply connected, bounded domain of $\mathbb{R}^2$ with smooth boundary. The smooth anisotropic electric permittivity, conductivity, and the constant isotropic magnetic permeability are respectively described by $\varepsilon(x)$, $\sigma(x)$ and $\mu_0$, where $\varepsilon(x)$, $\sigma(x)$ are tensors and $\mu_0$ is a constant, known, scalar coefficient. We denote $\gamma=\sigma+\i\omega\varepsilon$, where $\omega >0$ is the frequency of the electromagnetic wave. We assume that $\varepsilon(x)$ and $\sigma(x)$ are uniformly bounded from below and above, i.e., there exist constants $\kappa_{\varepsilon},\kappa_{\sigma}>1$ such that for all $\xi\in \mathbb{R}^2$,
\begin{align}\label{positive definite}
\begin{split}
\kappa_{\varepsilon}^{-1}\|\xi\|^2 &\le \xi\cdot\varepsilon\xi \le \kappa_{\varepsilon}\|\xi\|^2 \quad \text{in} \enspace X\\
\kappa_{\sigma}^{-1}\|\xi\|^2 &\le\xi\cdot\sigma\xi \le \kappa_{\sigma}\|\xi\|^2 \quad \text{in} \enspace X.
\end{split}
\end{align}
Let $\bm E=( E^1, E^2)'\in\mathbb{C}^2 $ and $H\in\mathbb{C}$ denote the electric and magnetic fields inside the domain $X$. Thus $\bm E$ and $H$ solve the following time-harmonic Maxwell's equations:
\begin{align}\label{Eq:maxwell}
\left\{\begin{array}{lll}
\nabla\times\bm E+\i\omega\mu_0H =0\\
\bm\nabla\times H-\gamma\bm E=0,
\end{array}\right.
\end{align}
with the boundary condition
\begin{align}\label{Eq:boundary}
\bm\nu\times \bm E:= \nu_1E^2-\nu_2 E^1=f, \quad \text{on} \enspace {\partial X},
\end{align}
where $\bm\nu=(\nu_1,\nu_2)$ is the exterior unit normal vector on the boundary $\partial X$. The standard well-posedness theory for Maxwell's equations \cite{Lions1993} states that given $f\in H^{\frac{1}{2}}(\partial X)$, the equation \eqref{Eq:maxwell}-\eqref{Eq:boundary} admits a unique solution in the Sobolev space $H^1(X)$. In this paper, the notations $\nabla$ and $\bm\nabla$ distinguish between the scalar and vector curl operators:
\begin{align}
\nabla\times\bm E = \frac{\partial E^2}{\partial x_1} - \frac{\partial E^1}{\partial x_2} \quad \text{and} \quad \bm\nabla\times H=(- \frac{\partial H}{\partial x_2}, \frac{\partial H}{\partial x_1})'.
\end{align}
Although \eqref{Eq:maxwell} can be reduced to a scalar Laplace equation for $H$, we treat it as a system. The reconstruction method holds for the full $3$-dimensional case. In this paper, we assume that the conductivity $\sigma$ and the permittivity $\varepsilon$ are independent
of the third component in $\mathbb{R}^3$ and give numerical simulations in two dimensions to validate the reconstruction method.
\subsection{Local reconstructibility condition}
We select $5$ boundary conditions $\{f_i\}_{1\leq i\leq 5}$ such that the corresponding electric and magnetic fields $\{\bm E_i,H_i\}_{1\leq i\leq 5}$ satisfy the Maxwell's equations \eqref{Eq:maxwell}. We assume that, over a sub-domain $X_0\subset X$, the two electric fields $\bm E_1$, $\bm E_2$ satisfy the following positivity condition,
\begin{align}\label{posi condition}
\inf_{x\in X_0} |\det (\bm E_1,\bm E_2)| \ge c_0>0.
\end{align}
Then the $3$ additional solutions $\{\bm E_{2+j}\}_{1\leq j\leq 3}$ can be decomposed as linear combinations in the basis $(\bm E_1,\bm E_2)$,
\begin{align}
\bm E_{2+j}=\lambda^j_1 \bm E_1+\lambda^j_2 \bm E_2, \quad 1\leq j\leq 3,
\label{ln dep}
\end{align}
where the coefficients $\{\lambda^j_1,\lambda^j_2\}_{1\leq j\leq 3}$ can be computed by Cramer's rule as follows:
\begin{align}\label{cramer rule}
\begin{split}
\lambda^j_1 &=\frac{\det(\bm E_{2+j},\bm E_2)}{\det(\bm E_1,\bm E_2)}=\frac{\det(\bm\nabla\times H_{2+j},\bm\nabla\times H_2)}{\det(\bm\nabla\times H_1,\bm\nabla\times H_2)},\\
\lambda^j_2 &=\frac{\det(\bm E_1,\bm E_{2+j})}{\det(\bm E_1,\bm E_2)}=\frac{\det(\bm\nabla\times H_1,\bm\nabla\times H_{2+j})}{\det(\bm\nabla\times H_1,\bm\nabla\times H_2)}.
\end{split}
\end{align}
Thus these coefficients can be computed from the available magnetic fields. The reconstruction procedures will make use of the matrices $Z_j$ defined by
\begin{align}
Z_j=\left[\bm\nabla\times\lambda^j_1 | \bm\nabla\times\lambda^j_2\right],\quad \text{where } \enspace 1\leq j\leq 3.
\label{Y Z}
\end{align}
These matrices are also uniquely determined from the known magnetic fields. Denoting the matrix $H:=[\bm\nabla\times H_1 | \bm\nabla\times H_2]$ and the skew-symmetric matrix $J=\left[\begin{smallmatrix} 0 &-1\\ 1 & 0 \end{smallmatrix}\right]$, we construct three matrices as follows,
\begin{align}\label{constraint matrice}
M_j = (Z_jH^T)^{sym}, \quad 1\leq j\leq 3,
\end{align}
where $A^T$ denotes the transpose of a matrix $A$ and $A^{sym}:= (A + A^T)/2$. The calculations in the following section show that condition \eqref{posi condition} and the linear independence of $\{M_j\}_{1\leq j\leq 3}$ in $S_2(\mathbb{C})$ are sufficient to guarantee local reconstruction of $\gamma$. The required conditions, which allow us to set up our reconstruction formulas, are listed in the following hypotheses. The reconstructions are {\em local} in nature: the reconstruction of $\gamma$ at $x_0\in X$ requires the knowledge of $\{H_j(x)\}_{1\leq j\leq 5}$ for $x$ only in the vicinity of $x_0$.
\begin{hypothesis}\label{2 hypo}
Given Maxwell's equations \eqref{Eq:maxwell} with smooth $\varepsilon$ and $\sigma$ satisfying the uniform ellipticity conditions \eqref{positive definite}, there exists a set of illuminations $\{f_i\}_{1\leq i\leq 5}$ such that the corresponding electric fields $\{\bm E_i\}_{1\leq i\leq 5}$ satisfy the following conditions:
\begin{enumerate}
\item $\inf_{x\in X_0} |\det (\bm E_1,\bm E_2)| \ge c_0>0$ holds on a sub-domain $X_0\subset X$,
\item The matrices $\{M_j\}_{1\leq j\leq 3}$ constructed in \eqref{constraint matrice} are linearly independent in $S_2(\mathbb{C})$ on $X_0$, where $S_2(\mathbb{C})$ denotes the space of $2\times 2$ symmetric matrices.
\end{enumerate}
\end{hypothesis}
\begin{remark}
Note that both conditions in Hypothesis \ref{2 hypo} can be expressed in terms of the measurements $\{H_j\}_j$, and thus can be checked during the experiments. When the above constant $c_0$ is deemed too small, or the matrices $M_j$ are not sufficiently independent, then additional measurements might be required. For the $3$ dimensional case, Hypothesis \ref{2 hypo} holds locally, under some smoothness assumptions on $\gamma$, with $6$ well-chosen boundary conditions. The proof is based on the Runge approximation, see \cite{Guo2013b} for details.
\end{remark}
\subsection{Reconstruction approaches and stability results}
The reconstruction approaches were presented in \cite{Guo2013b} for the $3$-dimensional case. To make this paper self-contained, we briefly present the algorithm for the two-dimensional case.
Denote by $M_2(\mathbb{C})$ the space of $2\times 2$ matrices with inner product $\langle A,B\rangle:= {\text{tr }}(A^*B)$. We assume that Hypothesis \ref{2 hypo} holds over some $X_0\subset X$ with $5$ electric fields $\{\bm E_i\}_{1\leq i\leq 5}$. In particular, the matrices $\{M_j\}_{1\leq j\leq 3}$ constructed in \eqref{constraint matrice} are linearly independent in $S_2(\mathbb{C})$. We will see that the inner products of $(\gamma^{-1})^*$ with all $M_j$ can be calculated from knowledge of $\{H_j\}_{1\leq j\leq 5}$. Then $\gamma$ can be explicitly reconstructed by a least-squares method. The reconstruction formulas can be found in Section \ref{rec approach}. This algorithm leads to a unique and stable reconstruction, and the stability estimate will be given in Section \ref{stablity estimate}.
\subsubsection{Reconstruction algorithms}\label{rec approach}
We apply the curl operator to both sides of \eqref{ln dep}. Using the product rule, we get the following equation,
\begin{align}
\sum_{i=1,2}\left(\lambda^j_i\nabla\times \bm E_i + \bm E_i\cdot\bm\nabla\times\lambda^j_i\right) = \nabla\times\bm E_{2+j}, \quad \text{for} \enspace 1\leq j\leq 3.
\end{align}
Substituting $\bm E_i=\gamma^{-1}\bm\nabla\times H_i$ (from \eqref{Eq:maxwell}) into the above equation and rearranging terms, we obtain
\begin{align}
\sum_{i=1,2} \bm\nabla\times\lambda_i^j\cdot(\gamma^{-1}\bm\nabla\times H_i) = \i\omega\mu_0\Big(\sum_{i=1,2}\lambda^j_iH_i-H_{2+j}\Big).
\end{align}
Recalling the definition of $Z_j$ in \eqref{Y Z}, the above equation leads to
\begin{align}\label{rec sys}
\gamma^{-1}:(Z_jH^T)^{sym}=\i\omega\mu_0\Big(\sum_{i=1,2}\lambda^j_iH_i-H_{2+j}\Big),
\end{align}
where the matrix $H=[\bm\nabla\times H_1 | \bm\nabla\times H_2]$. Note that $M_j = (Z_jH^T)^{sym}$ and the RHS of the above equation are computable from the measurements; thus $\gamma$ can be explicitly reconstructed from \eqref{rec sys} provided that $\{M_j\}_{1\leq j\leq 3}$ are of full rank in $S_2(\mathbb{C})$.
\begin{remark}
The reconstruction formula is local. In practice, we add more measurements and obtain additional $M_j$ such that $\{M_j\}_j$ is of full rank in $S_2(\mathbb{C})$. The system \eqref{rec sys} then becomes overdetermined and $\gamma$ can be reconstructed by solving \eqref{rec sys} with a least-squares method.
\end{remark}
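As an illustration of this least-squares step, the following Python sketch recovers a symmetric $2\times 2$ complex tensor from linear constraints of the form $\gamma^{-1}:M_j=b_j$ (an illustrative reimplementation under our own conventions; the function name, the use of NumPy's `lstsq`, and the plain Frobenius product without conjugation are assumptions):

```python
import numpy as np

def reconstruct_gamma(Ms, bs):
    """Least-squares recovery of a symmetric 2x2 complex tensor gamma from
    linear constraints  inv(gamma) : M_j = b_j  (Frobenius product).

    Ms: list of symmetric 2x2 complex matrices, bs: list of scalars."""
    # Unknowns: the entries (g11, g12, g22) of inv(gamma), which is symmetric,
    # so inv(gamma) : M = g11*M11 + 2*g12*M12 + g22*M22 for symmetric M.
    A = np.array([[M[0, 0], 2 * M[0, 1], M[1, 1]] for M in Ms])
    g, *_ = np.linalg.lstsq(A, np.array(bs), rcond=None)
    gamma_inv = np.array([[g[0], g[1]], [g[1], g[2]]])
    return np.linalg.inv(gamma_inv)
```

With more than three linearly independent constraint matrices, the system is overdetermined and the least-squares solve averages out measurement noise.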
\subsubsection{Uniqueness and stability results}\label{stablity estimate}
The algorithm derived in the above section leads to a unique and stable reconstruction in the sense of the following theorem:
\begin{theorem}\label{stability}
Suppose that Hypothesis \ref{2 hypo} holds over some $X_0\subset X$ for two sets of electric fields $\{\bm E_i\}_{1\leq i\leq 5}$ and $\{\bm E'_i\}_{1\leq i\leq 5}$, which solve the Maxwell's equations \eqref{Eq:maxwell} with the complex tensors $\gamma$ and $\gamma'$ satisfying the uniform ellipticity condition \eqref{positive definite}. Then $\gamma$ can be uniquely reconstructed in $X_0$ with the following stability estimate,
\begin{align}
\|\gamma - \gamma'\|_{W^{s,\infty}(X_0)}\le C \sum_{i=1}^{5} \|H_i-H'_i\|_{W^{s+2,\infty}(X)},
\label{eq:stability}
\end{align}
for any integer $s>0$ and some constant $C=C(s)$.
\end{theorem}
\begin{proof}
The above estimate is straightforward by noticing that two derivatives are taken in the reconstruction procedure for $\gamma$.
\end{proof}
\section{Fulfilling Hypothesis \ref{2 hypo}} \label{Fulfilling Hypothesis}
In this section, we assume that $\gamma$ is a diagonalizable constant tensor. We will take special CGO-like solutions of the Maxwell's equations \eqref{Eq:maxwell} and demonstrate that Hypothesis \ref{2 hypo} can be fulfilled with these solutions. By definition of the curl operator, it suffices to show that
\begin{align}\label{tilde constraint matrice}
\tilde M_j = (\tilde Z_j\tilde H^T)^{sym}, \quad 1\leq j\leq 3,
\end{align}
are linearly independent in $S_2(\mathbb{C})$, where $\tilde Z_j=\left[\nabla\lambda^j_1 | \nabla\lambda^j_2\right]$ and $\tilde H=[\nabla H_1 | \nabla H_2]$. We derive the following Helmholtz-type equation from \eqref{Eq:maxwell},
\begin{align}\label{eq:Helm}
-\nabla\cdot\tilde\gamma^{-1}\nabla H_i + H_i= 0, \quad \text{for} \enspace 1\leq i\leq 5,
\end{align}
where $\tilde\gamma = -\i\omega\mu_0 J^T\gamma J$ admits a decomposition $\tilde\gamma= QQ^T$ with $Q$ invertible. We take special CGO-like solutions of the form
\begin{align}\label{CGO like}
H_i= e^{x\cdot Qu_i},
\end{align}
where the $u_i$ are vectors of unit length. Obviously, the $H_i$ defined in \eqref{CGO like} satisfy \eqref{eq:Helm} and
\begin{align}\label{tilde H}
\tilde H = QU\left[\begin{array}{cc} e^{x\cdot Qu_1} &0\\ 0 & e^{x\cdot Qu_2} \end{array}\right],
\end{align}
where $U=[u_1| u_2]$. Therefore, Hypothesis \ref{2 hypo}.1 can be easily fulfilled by choosing independent unit vectors $u_1= \mathbf e_1$, $u_2=\mathbf e_2$. Using the corresponding additional electric fields $\{\bm E_{2+j}\}_{1\leq j\leq 3}$, Cramer's rule as in \eqref{cramer rule} yields the decompositions
\begin{align*}
\bm E_{2+j}=\lambda^j_1 \bm E_1+\lambda^j_2 \bm E_2, \quad \text{with} \enspace \lambda^j_1=e^{x\cdot Q(u_{2+j}-u_1)}\det(u_{2+j},u_2), \enspace \lambda^j_2=e^{x\cdot Q(u_{2+j}-u_2)}\det(u_1,u_{2+j}).
\end{align*}
Then by the definition of $\tilde Z_j$, we get the following expression,
\begin{align}
\tilde Z_j = Q\Big[\frac{H_{2+j}}{H_1}\det(u_{2+j},u_2)(u_{2+j}-u_1),\frac{H_{2+j}}{H_2}\det(u_1,u_{2+j})(u_{2+j}-u_2)\Big].
\end{align}
Together with \eqref{tilde H}, straightforward calculations lead to
\begin{align}
\tilde Z_j\tilde H^T = H_{2+j}Q[\det(u_{2+j},u_2)(u_{2+j}-u_1),\det(u_1,u_{2+j})(u_{2+j}-u_2)]Q^T.
\end{align}
Using the fact that $u_{2+j}=(u_{2+j}\cdot u_1)u_1+(u_{2+j}\cdot u_2)u_2$, the above equation leads to
\begin{align}
\tilde M_j = H_{2+j}Q \left[\begin{array}{cc} (u_{2+j}\cdot u_1)( (u_{2+j}\cdot u_1)-1) & (u_{2+j}\cdot u_1)(u_{2+j}\cdot u_2)\\ (u_{2+j}\cdot u_1)(u_{2+j}\cdot u_2) & (u_{2+j}\cdot u_2)( (u_{2+j}\cdot u_2)-1) \end{array}\right]Q^T,
\end{align}
where $u_1= \mathbf e_1$, $u_2=\mathbf e_2$. Therefore, it is easy to find $u_{2+j}$ vectors of unit length such that $\tilde M_j$ are linearly independent in $S_2(\mathbb{C})$.
\begin{remark}
To derive local reconstruction formulas for more general tensors (e.g. $\mathcal{C}^{1,\alpha}(X)$), we need local independence conditions of $\{M_j\}_j$ and we need to control the local behavior of solutions by well-chosen boundary conditions. This is done by means of a Runge approximation. For details, we refer the reader to \cite{Guo2012a},\cite{Bal2012} and \cite{Guo2013b}.
\end{remark}
\section{Numerical experiments}\label{num simu}
In this section we present some numerical simulations based on synthetic data to validate the reconstruction algorithms from the previous section.
\subsection{Preliminary}
We decompose $\gamma=\sigma+\i\omega\varepsilon$ into the following form with six unknown coefficients $\{\sigma_i\}_{1\leq i\leq 3}$, $\{\varepsilon_i\}_{1\leq i\leq 3}$, respectively for $\sigma$ and $\varepsilon$,
\begin{align}\label{3coef}
\gamma= \left[\begin{array}{cc} \sigma_1 & \sigma_2\\ \sigma_2 & \sigma_3\end{array}\right]+\i\omega\left[\begin{array}{cc} \varepsilon_1 &\varepsilon_2\\ \varepsilon_2 & \varepsilon_3\end{array}\right],
\end{align}
where each coefficient can be explicitly reconstructed by solving the overdetermined linear system \eqref{rec sys} using a least-squares method.
In the numerical experiments below, we take the domain of reconstruction to be the square $X=[-1,1]^2$ and use the notation $\mathbf{x}= (x,y)$. We use an $(\mathsf{N+1})\times(\mathsf{N+1})$ square grid with $\mathsf{N}=80$, the tensor product of the equi-spaced subdivision $\mathsf{x =-1:h:1}$ with $\mathsf{h=2/N}$. The synthetic data $H$ are generated by solving the Maxwell's equations \eqref{Eq:maxwell} for {\em known} conductivity $\sigma$ and electric permittivity $\varepsilon$, using a finite difference method implemented in {\tt MatLab}. We refer to these data as the ``noiseless'' data. To simulate noisy data, the internal magnetic fields $H$ are perturbed by adding Gaussian random matrices with zero mean. The standard deviations $\alpha$ are chosen to be $0.1\%$ of the average value of $|H|$.
We use the relative $L^2$ error to measure the quality of the reconstructions. This error is defined as the $L^2$-norm of the difference between the reconstructed coefficient and the true coefficient, divided by the $L^2$-norm of the true coefficient. $\mathcal{E}^C_{\sigma_i}$, $\mathcal{E}^N_{\sigma_i}$, $\mathcal{E}^C_{\varepsilon_i}$, $\mathcal{E}^N_{\varepsilon_i}$ with $1\leq i\leq 3$ denote respectively the relative $L^2$ error in the reconstructions from clean and noisy data for $\sigma_i$ and $\varepsilon_i$.
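The error metric above can be implemented directly (a straightforward sketch on grid values):

```python
import numpy as np

def relative_l2_error(f_rec, f_true):
    """Relative L^2 error ||f_rec - f_true||_2 / ||f_true||_2 on the grid."""
    f_rec, f_true = np.asarray(f_rec), np.asarray(f_true)
    return np.linalg.norm(f_rec - f_true) / np.linalg.norm(f_true)

err = relative_l2_error([1.0, 1.0], [1.0, 0.0])   # toy example
```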
\paragraph{Regularization procedure.} We use a total variation (TV) method as the denoising procedure, minimizing the following functional,
\begin{align}
\mathcal{O}(\mathbf{f})= \frac{1}{2}\|\mathbf{f}-\mathbf{f}_{\text{rc}}\|^2_2 +\rho \|\Gamma \mathbf{f}\|_{\text{\scriptsize TV}},
\end{align}
where $\mathbf{f}_{\text{rc}}$ denotes the explicitly reconstructed coefficients of $\sigma$ and $\varepsilon$, and $\Gamma$ denotes a discretized version of the gradient operator. For discontinuous, piecewise constant coefficients we choose the $l^1$-norm as the TV regularization norm; in this case, the minimization problem can be solved using the split Bregman method presented in \cite{Goldstein2009}. To recover smooth coefficients, we instead minimize the following least-squares problem with the $l^2$-norm as the regularization term,
\begin{align}
\mathcal{O}(\mathbf{f})= \frac{1}{2}\|\mathbf{f}-\mathbf{f}_{\text{rc}}\|^2_2 +\rho \|\Gamma \mathbf{f}\|_2^2,
\end{align}
where the Tikhonov regularization functional admits the explicit minimizer $\mathbf{f}=(\mathcal{I}+\rho \Gamma^*\Gamma)^{-1}\mathbf{f}_{\text{rc}}$, with $\mathcal{I}$ the identity. The regularization methods are used when the data are differentiated.
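A minimal sketch of the Tikhonov step in one dimension, assuming a forward-difference gradient for $\Gamma$ (the 2D operator stacks such differences in $x$ and $y$); `f_rc` is a synthetic noisy reconstruction:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Explicit l^2 (Tikhonov) minimizer: f = (I + rho * G^T G)^{-1} f_rc,
# with G a 1D forward-difference gradient operator.
n, rho = 50, 0.1
G = sp.diags([-np.ones(n - 1), np.ones(n - 1)], [0, 1], shape=(n - 1, n))
rng = np.random.default_rng(2)
f_rc = np.sin(np.linspace(0.0, np.pi, n)) + 0.1 * rng.standard_normal(n)
A = (sp.eye(n) + rho * (G.T @ G)).tocsc()
f = spla.spsolve(A, f_rc)   # denoised output
```

By optimality of the minimizer, the gradient energy of the output never exceeds that of the input, which is the smoothing effect exploited above.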
\subsection{Simulation results}
\paragraph{Simulation 1.} In the first experiment, we reconstruct the smooth coefficients $\{\sigma_i, \varepsilon_i\}_{1\leq i\leq 3}$ defined in \eqref{3coef}. The coefficients are given by
\begin{align*}
\left\{\begin{array}{lll}
\sigma_1 = 2+ \sin(\pi x)\sin(\pi y)\\
\sigma_2 = 0.5\sin(2\pi x)\\
\sigma_3 = 1.8+e^{-15(x^2+y^2)} +e^{-15((x-0.6)^2+(y-0.5)^2)} - e^{-15((x+0.4)^2+(y+0.6)^2)}
\end{array}\right.
\end{align*}
and
\begin{align*}
\left\{\begin{array}{lll}
\varepsilon_1 = 2- \sin(\pi x)\sin(\pi y)\\
\varepsilon_2 = 0.5\sin(2\pi y)\\
\varepsilon_3 = 1.8+e^{-12(x^2+y^2)} +e^{-12((x+0.6)^2+(y-0.5)^2)} - e^{-12((x-0.4)^2+(y+0.6)^2)} .
\end{array}\right.
\end{align*}
We performed two sets of reconstructions using clean and noisy synthetic data respectively. The $l^2$-regularization procedure is used in this simulation. For the noisy data, the noise level is $\alpha=0.1\%$. The results of the numerical experiment are shown in Figures \ref{E1sigma} and \ref{E1epsilon}. The relative $L^2$ errors in the reconstructions are $\mathcal{E}^C_{\sigma_1}=0.3\%$, $\mathcal{E}^N_{\sigma_1}=5.1\%$, $\mathcal{E}^C_{\sigma_2}=0.8\%$, $\mathcal{E}^N_{\sigma_2}=33.4\%$, $\mathcal{E}^C_{\sigma_3}=0.2\%$, $\mathcal{E}^N_{\sigma_3}=4.9\%$; $\mathcal{E}^C_{\varepsilon_1}=0.1\%$, $\mathcal{E}^N_{\varepsilon_1}=5.8\%$, $\mathcal{E}^C_{\varepsilon_2}=0.5\%$, $\mathcal{E}^N_{\varepsilon_2}=30.0\%$, $\mathcal{E}^C_{\varepsilon_3}=0.1\%$, $\mathcal{E}^N_{\varepsilon_3}=4.8\%$.
\begin{figure}[htp]
\centering
\subfigure[true $\sigma_1$]{
\includegraphics[width=37mm,height=35mm]{E1rsigma1.eps}
\label{ex1txi}
}
\subfigure[$\sigma_1$ ($\alpha=0\%$)]{
\includegraphics[width=37mm,height=35mm]{E1csigma1.eps}
\label{ex1cxi}
}
\subfigure[$\sigma_1$ ($\alpha=0.1\%$)]{
\includegraphics[width=37mm,height=35mm]{E1nsigma1.eps}
\label{ex1nxi}
}
\subfigure[$\sigma_1$ at \{$y=-0.5$\}]{
\includegraphics[trim=10mm 5mm 10mm 0mm,clip,width=35mm,height=38mm]{E1crosigma1.eps}
\label{ex1rxi}
}
\subfigure[true $\sigma_2$]{
\includegraphics[width=37mm,height=35mm]{E1rsigma2.eps}
\label{ex1ttau}
}
\subfigure[$\sigma_2$ ($\alpha=0\%$)]{
\includegraphics[width=37mm,height=35mm]{E1csigma2.eps}
\label{ex1ctau}
}
\subfigure[$\sigma_2$ ($\alpha=0.1\%$)]{
\includegraphics[width=37mm,height=35mm]{E1nsigma2.eps}
\label{ex1ntau}
}
\subfigure[$\sigma_2$ at \{$y=-0.5$\}]{
\includegraphics[trim=10mm 5mm 10mm 0mm,clip,width=35mm,height=38mm]{E1crosigma2.eps}
\label{ex1rtau}
}
\subfigure[true $\sigma_3$]{
\includegraphics[width=37mm,height=35mm]{E1rsigma3.eps}
\label{ex1tbeta}
}
\subfigure[$\sigma_3$ $(\alpha=0\%)$]{
\includegraphics[width=37mm,height=35mm]{E1csigma3.eps}
\label{ex1cbeta}
}
\subfigure[$\sigma_3$ ($\alpha=0.1\%$)]{
\includegraphics[width=37mm,height=35mm]{E1nsigma3.eps}
\label{ex1nbeta}
}
\subfigure[$\sigma_3$ at $\{y=0\}$]{
\includegraphics[trim=10mm 5mm 10mm 0mm,clip,width=35mm,height=38mm]{E1crosigma3.eps}
\label{ex1rbeta}
}
\caption{$\sigma$ in Simulation 1. \subref{ex1txi}\&\subref{ex1ttau}\&\subref{ex1tbeta}: true values of $(\sigma_1, \sigma_2,\sigma_3)$. \subref{ex1cxi}\&\subref{ex1ctau}\&\subref{ex1cbeta}: reconstructions with noiseless data. \subref{ex1nxi}\&\subref{ex1ntau}\&\subref{ex1nbeta}: reconstructions with noisy data ($\alpha=0.1\%$). \subref{ex1rxi}\&\subref{ex1rtau}\&\subref{ex1rbeta}: cross sections along $\{y=-0.5\}$ for $\sigma_1,\sigma_2$ and along $\{y=0\}$ for $\sigma_3$.}
\label{E1sigma}
\end{figure}
\begin{figure}[htp]
\centering
\subfigure[true $\varepsilon_1$]{
\includegraphics[width=37mm,height=35mm]{E1repsilon1.eps}
\label{ex1te1}
}
\subfigure[$\varepsilon_1$ ($\alpha=0\%$)]{
\includegraphics[width=37mm,height=35mm]{E1cepsilon1.eps}
\label{ex1ce1}
}
\subfigure[$\varepsilon_1$ ($\alpha=0.1\%$)]{
\includegraphics[width=37mm,height=35mm]{E1nepsilon1.eps}
\label{ex1ne1}
}
\subfigure[$\varepsilon_1$ at $\{y=-0.5\}$]{
\includegraphics[trim=10mm 5mm 10mm 0mm,clip,width=35mm,height=38mm]{E1croepsilon1.eps}
\label{ex1re1}
}
\subfigure[true $\varepsilon_2$]{
\includegraphics[width=37mm,height=35mm]{E1repsilon2.eps}
\label{ex1te2}
}
\subfigure[$\varepsilon_2$ ($\alpha=0\%$)]{
\includegraphics[width=37mm,height=35mm]{E1cepsilon2.eps}
\label{ex1ce2}
}
\subfigure[$\varepsilon_2$ ($\alpha=0.1\%$)]{
\includegraphics[width=37mm,height=35mm]{E1nepsilon2.eps}
\label{ex1ne2}
}
\subfigure[$\varepsilon_2$ at $\{y=-0.5\}$]{
\includegraphics[trim=10mm 5mm 10mm 0mm,clip,width=35mm,height=38mm]{E1croepsilon2.eps}
\label{ex1re2}
}
\subfigure[true $\varepsilon_3$]{
\includegraphics[width=37mm,height=35mm]{E1repsilon3.eps}
\label{ex1te3}
}
\subfigure[$\varepsilon_3$ $(\alpha=0\%)$]{
\includegraphics[width=37mm,height=35mm]{E1cepsilon3.eps}
\label{ex1ce3}
}
\subfigure[$\varepsilon_3$ ($\alpha=0.1\%$)]{
\includegraphics[width=37mm,height=35mm]{E1nepsilon3.eps}
\label{ex1ne3}
}
\subfigure[$\varepsilon_3$ at $\{y=0\}$]{
\includegraphics[trim=10mm 5mm 10mm 0mm,clip,width=35mm,height=38mm]{E1croepsilon3.eps}
\label{ex1re3}
}
\caption{$\varepsilon$ in Simulation 1. \subref{ex1te1}\&\subref{ex1te2}\&\subref{ex1te3}: true values of $(\varepsilon_1, \varepsilon_2,\varepsilon_3)$. \subref{ex1ce1}\&\subref{ex1ce2}\&\subref{ex1ce3}: reconstructions with noiseless data. \subref{ex1ne1}\&\subref{ex1ne2}\&\subref{ex1ne3}: reconstructions with noisy data ($\alpha=0.1\%$). \subref{ex1re1}\&\subref{ex1re2}\&\subref{ex1re3}: cross sections along $\{y=-0.5\}$ for $\varepsilon_1,\varepsilon_2$ and along $\{y=0\}$ for $\varepsilon_3$.}
\label{E1epsilon}
\end{figure}
\paragraph{Simulation 2.} In this experiment, we attempt to reconstruct piecewise constant coefficients. Reconstructions with both noiseless and noisy data are performed with $l^1$-regularization using the split Bregman iteration method. The noise level is $\alpha=0.1\%$. The results of the numerical experiment are shown in Figures \ref{E2sigma} and \ref{E2epsilon}. From the figures, we observe that the singularities of the coefficients create minor artifacts on the reconstructions and the error in the reconstruction is larger at the discontinuities than in the rest of the domain. The relative $L^2$ errors in the reconstructions are $\mathcal{E}^C_{\sigma_1}=4.0\%$, $\mathcal{E}^N_{\sigma_1}=17.6\%$, $\mathcal{E}^C_{\sigma_2}=12.8\%$, $\mathcal{E}^N_{\sigma_2}=48.1\%$, $\mathcal{E}^C_{\sigma_3}=4.5\%$, $\mathcal{E}^N_{\sigma_3}=16.5\%$; $\mathcal{E}^C_{\varepsilon_1}=0.1\%$, $\mathcal{E}^N_{\varepsilon_1}=16.3\%$, $\mathcal{E}^C_{\varepsilon_2}=0.5\%$, $\mathcal{E}^N_{\varepsilon_2}=35.2\%$, $\mathcal{E}^C_{\varepsilon_3}=0.1\%$, $\mathcal{E}^N_{\varepsilon_3}=16.2\%$.
\begin{figure}[htp]
\centering
\subfigure[true $\sigma_1$]{
\includegraphics[width=37mm,height=35mm]{E2rsigma1.eps}
\label{ex2ts1}
}
\subfigure[$\sigma_1$ ($\alpha=0\%$)]{
\includegraphics[width=37mm,height=35mm]{E2csigma1.eps}
\label{ex2cs1}
}
\subfigure[$\sigma_1$ ($\alpha=0.1\%$)]{
\includegraphics[width=37mm,height=35mm]{E2nsigma1.eps}
\label{ex2ns1}
}
\subfigure[$\sigma_1$ at $\{y=-0.5\}$]{
\includegraphics[trim=10mm 5mm 10mm 0mm,clip,width=35mm,height=38mm]{E2crosigma1.eps}
\label{ex2rs1}
}
\subfigure[true $\sigma_2$]{
\includegraphics[width=37mm,height=35mm]{E2rsigma2.eps}
\label{ex2ts2}
}
\subfigure[$\sigma_2$ ($\alpha=0\%$)]{
\includegraphics[width=37mm,height=35mm]{E2csigma2.eps}
\label{ex2cs2}
}
\subfigure[$\sigma_2$ ($\alpha=0.1\%$)]{
\includegraphics[width=37mm,height=35mm]{E2nsigma2.eps}
\label{ex2ns2}
}
\subfigure[$\sigma_2$ at $\{y=-0.5\}$]{
\includegraphics[trim=10mm 5mm 10mm 0mm,clip,width=35mm,height=38mm]{E2crosigma2.eps}
\label{ex2rs2}
}
\subfigure[true $\sigma_3$]{
\includegraphics[width=37mm,height=35mm]{E2rsigma3.eps}
\label{ex2ts3}
}
\subfigure[$\sigma_3$ $(\alpha=0\%)$]{
\includegraphics[width=37mm,height=35mm]{E2csigma3.eps}
\label{ex2cs3}
}
\subfigure[$\sigma_3$ ($\alpha=0.1\%$)]{
\includegraphics[width=37mm,height=35mm]{E2nsigma3.eps}
\label{ex2ns3}
}
\subfigure[$\sigma_3$ at $\{y=0\}$]{
\includegraphics[trim=10mm 5mm 10mm 0mm,clip,width=35mm,height=38mm]{E2crosigma3.eps}
\label{ex2rs3}
}
\caption{$\sigma$ in Simulation 2. \subref{ex2ts1}\&\subref{ex2ts2}\&\subref{ex2ts3}: true values of $(\sigma_1, \sigma_2,\sigma_3)$. \subref{ex2cs1}\&\subref{ex2cs2}\&\subref{ex2cs3}: reconstructions with noiseless data. \subref{ex2ns1}\&\subref{ex2ns2}\&\subref{ex2ns3}: reconstructions with noisy data ($\alpha=0.1\%$). \subref{ex2rs1}\&\subref{ex2rs2}\&\subref{ex2rs3}: cross sections along $\{y=-0.5\}$ for $\sigma_1,\sigma_2$ and along $\{y=0\}$ for $\sigma_3$.}
\label{E2sigma}
\end{figure}
\begin{figure}[htp]
\centering
\subfigure[true $\varepsilon_1$]{
\includegraphics[width=37mm,height=35mm]{E2repsilon1.eps}
\label{ex2te1}
}
\subfigure[$\varepsilon_1$ ($\alpha=0\%$)]{
\includegraphics[width=37mm,height=35mm]{E2cepsilon1.eps}
\label{ex2ce1}
}
\subfigure[$\varepsilon_1$ ($\alpha=0.1\%$)]{
\includegraphics[width=37mm,height=35mm]{E2nepsilon1.eps}
\label{ex2ne1}
}
\subfigure[$\varepsilon_1$ at $\{y=-0.5\}$]{
\includegraphics[trim=10mm 5mm 10mm 0mm,clip,width=35mm,height=38mm]{E2croepsilon1.eps}
\label{ex2re1}
}
\subfigure[true $\varepsilon_2$]{
\includegraphics[width=37mm,height=35mm]{E2repsilon2.eps}
\label{ex2te2}
}
\subfigure[$\varepsilon_2$ ($\alpha=0\%$)]{
\includegraphics[width=37mm,height=35mm]{E2cepsilon2.eps}
\label{ex2ce2}
}
\subfigure[$\varepsilon_2$ ($\alpha=0.1\%$)]{
\includegraphics[width=37mm,height=35mm]{E2nepsilon2.eps}
\label{ex2ne2}
}
\subfigure[$\varepsilon_2$ at $\{y=-0.5\}$]{
\includegraphics[trim=10mm 5mm 10mm 0mm,clip,width=35mm,height=38mm]{E2croepsilon2.eps}
\label{ex2re2}
}
\subfigure[true $\varepsilon_3$]{
\includegraphics[width=37mm,height=35mm]{E2repsilon3.eps}
\label{ex2te3}
}
\subfigure[$\varepsilon_3$ $(\alpha=0\%)$]{
\includegraphics[width=37mm,height=35mm]{E2cepsilon3.eps}
\label{ex2ce3}
}
\subfigure[$\varepsilon_3$ ($\alpha=0.1\%$)]{
\includegraphics[width=37mm,height=35mm]{E2nepsilon3.eps}
\label{ex2ne3}
}
\subfigure[$\varepsilon_3$ at $\{y=0\}$]{
\includegraphics[trim=10mm 5mm 10mm 0mm,clip,width=35mm,height=38mm]{E2croepsilon3.eps}
\label{ex2re3}
}
\caption{$\varepsilon$ in Simulation 2. \subref{ex2te1}\&\subref{ex2te2}\&\subref{ex2te3}: true values of $(\varepsilon_1, \varepsilon_2,\varepsilon_3)$. \subref{ex2ce1}\&\subref{ex2ce2}\&\subref{ex2ce3}: reconstructions with noiseless data. \subref{ex2ne1}\&\subref{ex2ne2}\&\subref{ex2ne3}: reconstructions with noisy data ($\alpha=0.1\%$). \subref{ex2re1}\&\subref{ex2re2}\&\subref{ex2re3}: cross sections along $\{y=-0.5\}$ for $\varepsilon_1,\varepsilon_2$ and along $\{y=0\}$ for $\varepsilon_3$.}
\label{E2epsilon}
\end{figure}
\section{Conclusion}\label{se:conclu}
We presented in this paper the reconstruction of $(\sigma, \varepsilon)$ from knowledge of several magnetic fields $H_j$, where the measurements $H_j$ solve Maxwell's equations \eqref{Eq:maxwell} with prescribed illuminations $f=f_j$ on $\partial X$.
The reconstruction algorithms rely heavily on the linear independence of the electric fields and of the families $\{M_j\}_j$ constructed in Hypothesis \ref{2 hypo}. These linear independence conditions can be checked from the available magnetic fields $\{H_j\}_j$, and additional measurements can be added if necessary; this is the approach taken in the numerical simulations. We proved in Section \ref{Fulfilling Hypothesis} that these linear independence conditions can be satisfied by constructing CGO-like solutions for constant tensors. In fact, these conditions can be verified numerically for a large class of illuminations and more general tensors. The numerical simulations illustrate that both smooth and rough coefficients can be well reconstructed, provided the interior magnetic fields $H_j$ are sufficiently accurate. However, the reconstructions are very sensitive to additional noise on the functionals $H_j$. This is consistent with the stability estimate of Theorem \ref{stability}, which loses two derivatives from the measurements to the reconstructed quantities.
\section*{Acknowledgment}
\begin{thebibliography}{10}
\bibitem{Ammari2008}
{\sc H.~Ammari, E.~Bonnetier, Y.~Capdeboscq, M.~Tanter, and M.~Fink}, {\em
{E}lectrical {I}mpedance {T}omography by elastic deformation}, SIAM J. Appl.
Math., 68 (2008), pp.~1557--1573.
\bibitem{Bal2012e}
{\sc G.~Bal, C.~Guo, and F.~Monard}, {\em Linearized internal functionals for
anisotropic conductivities}, Inverse Probl. Imaging, 8(1) (2014), pp.~1--22.
\bibitem{Guo2012a}
\leavevmode\vrule height 2pt depth -1.6pt width 23pt, {\em Inverse anisotropic conductivity from internal current densities}, Inverse Problems, 30(2), 025001, (2014).
\bibitem{Guo2014}
\leavevmode\vrule height 2pt depth -1.6pt width 23pt, {\em Imaging of anisotropic conductivities from current densities in two dimensions},
to appear in SIAM J.Imaging Sciences, (2014).
\bibitem{Bal2010}
{\sc G.~Bal and G.~Uhlmann}, {\em Inverse diffusion theory of photoacoustics},
Inverse Problems, 26(8), 085010, (2010).
\bibitem{Bal2012}
\leavevmode\vrule height 2pt depth -1.6pt width 23pt, {\em Reconstruction of
coefficients in scalar second-order elliptic equations from knowledge of
their solutions}, Communications on Pure and Applied Mathematics 66.10 (2013), 1629-1652.
\bibitem{Caro2009}
{\sc P.~Caro, P.~Ola and M.~Salo}, {\em Inverse boundary value problem for Maxwell equations with
local data}, Comm.PDE., 34(2009), 1452-1464.
\bibitem{Lions1993}
{\sc R.~Dautray and J.-L.~Lions}, {\em Mathematical Analysis and Numerical Methods for Science and Technology}, Vol.~3, Springer-Verlag, Berlin, 2000.
\bibitem{Goldstein2009}
{\sc T.~Goldstein and S.~Osher}, {\em The Split Bregman Method for L1-Regularized Problems}, SIAM Journal on Imaging Sciences, 2 (2009), pp.~323--343.
\bibitem{Guo2013b}
{\sc C.~Guo and G.~Bal}, {\em Reconstruction of complex-valued tensors in the Maxwell system from knowledge of internal magnetic fields}, to appear in Inv. Probl. and Imaging, (2014).
\bibitem{Ider1997}
{\sc Y.~Ider and L.~Muftuler}, {\em Measurement of {AC} magnetic field
distribution using magnetic resonance imaging}, IEEE Transactions on Medical
Imaging, 16 (1997), pp.~617--622.
\bibitem{Kabanikhin2011}
{\sc S.I.~Kabanikhin}, {\em Inverse and Ill-Posed Problems. Theory and Applications.} De Gruyter, Berlin, New York, 2011.
\bibitem{Kenig2011}
{\sc C.E.~Kenig, M.~Salo, and G.~Uhlmann}, {\em Inverse problems for the anisotropic Maxwell equations}, Duke Math.J. 157(2011), 369-419.
\bibitem{Kohn1984}
{\sc R.~Kohn, M.~Vogelius}, {\em Determining conductivity by boundary measurements}, Commun. Pure Appl. Math. 37(1984) 289-98.
\bibitem{Kuchment2011a}
{\sc P.~Kuchment and L.~Kunyansky}, {\em 2{D} and 3{D} reconstructions in
acousto-electric tomography}, Inverse Problems, 27 (2011).
\bibitem{Kuchment2011}
{\sc P.~Kuchment and D.~Steinhauer}, {\em Stabilizing inverse problems by
internal data}, Inverse Problems, 28 (2012).
\bibitem{Monard2012a}
{\sc F.~Monard and G.~Bal}, {\em Inverse anisotropic conductivity from power
densities in dimension $n \ge 3$}, Comm. Partial Differential Equations, 38(7), (2013), pp.~1183-1207.
\bibitem{Nachman2009}
{\sc A.~Nachman, A.~Tamasan, and A.~Timonov}, {\em Recovering the conductivity
from a single measurement of interior data}, Inverse Problems, 25 (2009),
p.~035014.
\bibitem{Nivikov2005}
{\sc R.~Novikov}, {\em The $\bar\partial$-approach to approximate inverse scattering
at fixed energy in three dimensions}, Int. Math. Res. Papers, 6 (2005), pp.~287--349.
\bibitem{Ola1993}
{\sc P.~Ola, L.~P\"{a}iv\"{a}rinta and E.~Somersalo},
{\em An inverse boundary value problem in electromagnetics}, Duke Math. J., 70 (1993), 617-653.
\bibitem{Ola1996}
{\sc P.~Ola and E.~Somersalo},
{\em Electromagnetic inverse problems and generalized Sommerfeld
potentials}, SIAM J.Appl.Math., 56 (1996), 1129-1145.
\bibitem{Seo2012}
{\sc J.~K. Seo, D.-H. Kim, J.~Lee, O.~I. Kwon, S.~Z.~K. Sajib, and E.~J. Woo},
{\em Electrical tissue property imaging using {MRI} at dc and larmor
frequency}, Inverse Problems, 28 (2012), p.~084002.
\bibitem{Somersalo1992}
{\sc E.~Somersalo, D.~Isaacson and M.~Cheney},
{\em A linearized inverse boundary value problem for Maxwell's equations}, J. Comput. Appl. Math., 42 (1992), pp.~123--136.
\bibitem{Sylvester1987}
{\sc J.~Sylvester and G.~Uhlmann},
{\em A global uniqueness theorem for an inverse boundary value problem}, Ann. of Math., 125(1) (1987), pp.~153--169.
\bibitem{Takhtadzhan1979}
{\sc L.A.~Takhtadzhan and L.D.~Faddeev}, {\em The quantum method of the inverse problem and the Heisenberg XYZ model}, Russ. Math. Surv., 34 (1979), pp.~11--68.
\bibitem{Uhlmann2009}
{\sc G.~Uhlmann},
{\em Calder\'{o}n's problem and electrical impedance tomography}, Inverse Problems, 25 (2009), p. 123011.
\end{thebibliography}
\end{document}
\begin{document}
\pagestyle{scrheadings}
\onehalfspacing
\newlength{\fixboxwidth}
\setlength{\fixboxwidth}{\marginparwidth}
\addtolength{\fixboxwidth}{-7pt}
\newcommand{\fix}[1]{\marginpar{\fbox{\parbox{\fixboxwidth}{\raggedright\tiny #1}}}}
\title{Quasi-Monte Carlo methods for integration of functions with dominating mixed smoothness in arbitrary dimension}
\author{Lev Markhasin\thanks{Research of the author was supported by a scholarship of Ernst Ludwig Ehrlich Studienwerk.}\\
\tiny{Friedrich-Schiller-Universit\"at Jena, e-mail: [email protected]}}
\date{\today}
\maketitle
\begin{abstract}
In a celebrated construction, Chen and Skriganov gave explicit examples of point sets achieving the best possible $L_2$-norm of the discrepancy function. We consider the discrepancy function of the Chen-Skriganov point sets in Besov spaces with dominating mixed smoothness and show that they also achieve the best possible rate in this setting. The proof uses a $b$-adic generalization of the Haar system and corresponding characterizations of the Besov space norm. Results for further function spaces and for integration errors are deduced.
\end{abstract}
\noindent{\footnotesize {\it 2010 Mathematics Subject Classification.} Primary 11K06,11K38,42C10,46E35,65C05.\\
{\it Key words and phrases.} discrepancy, Chen-Skriganov point set, dominating mixed smoothness, quasi-Monte Carlo, Haar system, numerical integration.}\\[5mm]
\textit{Acknowledgement:} The author wants to thank Aicke Hinrichs, Hans Triebel and Dmitriy Bilyk.
\section{Introduction}
Let $N$ be some positive integer and $\mathcal{P}$ a point set in the unit cube $[0,1)^d$ with $N$ points. Then the discrepancy function $D_{\mathcal{P}}$ is defined as
\begin{align}
D_{\mathcal{P}}(x) = \frac{1}{N} \sum_{z \in \mathcal{P}} \chi_{[0,x)}(z) - \left| [0,x) \right|.
\end{align}
By $|[0,x)| = x_1 \cdot \ldots \cdot x_d$ we denote the volume of the rectangular box $[0,x) = [0,x_1) \times \ldots \times [0,x_d)$ where $x = (x_1,\ldots,x_d) \in [0,1)^d$ while $\chi_{[0,x)}$ is the characteristic function of $[0,x)$. So the discrepancy function measures the deviation of the number of points of $\mathcal{P}$ in $[0,x)$ from the fair number of points $N |[0,x)|$ which would be achieved by a (practically impossible) perfectly uniform distribution of the points of $\mathcal{P}$, normalized by the total number of points.
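A direct implementation of this definition (a simple sketch; the point set `P` and evaluation point `x` below are arbitrary illustrative inputs):

```python
import numpy as np

def discrepancy(P, x):
    """D_P(x) = (1/N) * #{z in P : z in [0, x)} - vol([0, x)),
    for P an (N, d) array of points in [0,1)^d."""
    P, x = np.asarray(P), np.asarray(x)
    count = np.sum(np.all(P < x, axis=1))   # points in the box [0, x)
    return count / P.shape[0] - np.prod(x)
```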
Usually one is interested in calculating the norm of the discrepancy function in some Banach space of functions on $[0,1)^d$ to which the discrepancy function belongs. A well-known result, proved by Roth in \cite{R54}, concerns the space $L_2([0,1)^d)$: there exists a constant $c_1 > 0$ such that, for any $N \geq 1$, the discrepancy function of any point set $\mathcal{P}$ in $[0,1)^d$ with $N$ points satisfies
\[ \left\| D_{\mathcal{P}} | L_2 \right\| \geq c_1 \, \frac{(\log N)^\frac{d-1}{2}}{N}. \]
The currently best known values for the constant $c_1$ can be found in \cite{HM11}. Furthermore, there exists a constant $c_2 > 0$ such that, for any $N \geq 1$, there exists a point set $\mathcal{P}$ in $[0,1)^d$ with $N$ points that satisfies
\[ \left\| D_{\mathcal{P}} | L_2 \right\| \leq c_2 \, \frac{(\log N)^\frac{d-1}{2}}{N}. \]
This result is known for dimension $2$ from \cite{D56} (Davenport), for dimension $3$ from \cite{R79} (Roth) and for arbitrary dimension from \cite{R80} (Roth). Only Davenport's result was proved by an explicit construction; for higher dimensions, probabilistic methods were used until Chen and Skriganov found explicit constructions for arbitrary dimension in \cite{CS02}. Results for the constant $c_2$ can be found in \cite{FPPS10}.
Both bounds were extended to $L_p$-spaces for any $1 < p < \infty$. In the case of the lower bound the reference is \cite{S77} (Schmidt) while for the upper bound it is \cite{C80} (Chen).
As general references for studies of the discrepancy function we refer to the recent monographs \cite{DP10} and \cite{NW10} as well as \cite{M99}, \cite{KN74} and \cite{B11}.
Until recently, norms other than $L_p$-norms were rarely studied in the context of discrepancy. Triebel started the study of the discrepancy function in other function spaces like Sobolev, Besov and Triebel-Lizorkin spaces with dominating mixed smoothness in \cite{T10b} and \cite{T10a}. In \cite{Hi10} Hinrichs proved sharp upper bounds for the norms in Besov spaces with dominating mixed smoothness in dimension $2$. In \cite{M12a} the author of this work proved these upper bounds for a much larger class of point sets and also for other function spaces with dominating mixed smoothness. Triebel's result is that for all $1 \leq p,q \leq \infty$ and $r \in \mathbb{R}$ satisfying $\frac{1}{p} - 1 < r < \frac{1}{p}$, with $q < \infty$ if $p = 1$ and $q > 1$ if $p = \infty$, there exist constants $c_1, c_2 > 0$ such that, for any $N \geq 2$, the discrepancy function of any point set $\mathcal{P}$ in $[0,1)^d$ with $N$ points satisfies
\[ \left\| D_{\mathcal{P}} | S_{pq}^r B([0,1)^d) \right\| \geq c_1 \, N^{r-1} (\log N)^{\frac{d-1}{q}}, \]
and, for any $N \geq 2$, there exists a point set $\mathcal{P}$ in $[0,1)^d$ with $N$ points such that
\[ \left\| D_{\mathcal{P}} | S_{pq}^r B([0,1)^d) \right\| \leq c_2 \, N^{r-1} (\log N)^{(d-1)(\frac{1}{q} + 1 - r)}. \]
Hinrichs' result closed this gap for $d = 2$; we state it below.
We recall the definitions from \cite{T10a} that are most important for our purposes.
Let $\mathcal{S}(\mathbb{R}^d)$ denote the Schwartz space and $\mathcal{S}'(\mathbb{R}^d)$ the space of tempered distributions on $\mathbb{R}^d$. For $f \in \mathcal{S}'(\mathbb{R}^d)$, we denote by $\mathcal{F}f$ the Fourier transform of $f$. Let $\varphi_0 \in \mathcal{S}(\mathbb{R}^d)$ satisfy $\varphi_0(t) = 1$ for $|t| \leq 1$ and $\varphi_0(t) = 0$ for $|t| > \frac{3}{2}$. Let
\[ \varphi_k(t) = \varphi_0(2^{-k} t) - \varphi_0(2^{-k + 1} t) \]
where $t \in \mathbb{R}, \, k \in \mathbb{N}$ and
\[ \varphi_k(t) = \varphi_{k_1}(t_1) \ldots \varphi_{k_d}(t_d) \]
where $k = (k_1,\ldots,k_d) \in \mathbb{N}_0^d, \, t = (t_1,\ldots,t_d) \in \mathbb{R}^d$.
The functions $\varphi_k$ are a dyadic resolution of unity since
\[ \sum_{k \in \mathbb{N}_0^d} \varphi_k(x) = 1 \]
for all $x \in \mathbb{R}^d$. The functions $\mathcal{F}^{-1}(\varphi_k \mathcal{F} f)$ are entire analytic functions for any $f \in \mathcal{S}'(\mathbb{R}^d)$.
Let $0 < p,q \leq \infty$ and $r \in \mathbb{R}$. The Besov space with dominating mixed smoothness $S_{pq}^r B(\mathbb{R}^d)$ consists of all $f \in \mathcal{S}'(\mathbb{R}^d)$ with finite quasi-norm
\[ \left\| f | S_{pq}^r B(\mathbb{R}^d) \right\| = \left( \sum_{k \in \mathbb{N}_0^d} 2^{r (k_1 + \ldots + k_d) q} \left\| \mathcal{F}^{-1}(\varphi_k \mathcal{F} f) | L_p(\mathbb{R}^d) \right\|^q \right)^{\frac{1}{q}} \]
with the usual modification if $q = \infty$.
Let $0 < p < \infty$, $0 < q \leq \infty$ and $r \in \mathbb{R}$. The Triebel-Lizorkin space with dominating mixed smoothness $S_{pq}^r F(\mathbb{R}^d)$ consists of all $f \in \mathcal{S}'(\mathbb{R}^d)$ with finite quasi-norm
\[ \left\| f | S_{pq}^r F(\mathbb{R}^d) \right\| = \left\| \left( \sum_{k \in \mathbb{N}_0^d} 2^{r (k_1 + \ldots + k_d) q} \left| \mathcal{F}^{-1}(\varphi_k \mathcal{F} f)(\cdot) \right|^q \right)^{\frac{1}{q}} | L_p(\mathbb{R}^d) \right\| \]
with the usual modification if $q = \infty$.
Let $\mathcal{D}([0,1)^d)$ consist of all complex-valued infinitely differentiable functions on $\mathbb{R}^d$ with compact support in the interior of $[0,1)^d$ and let $\mathcal{D}'([0,1)^d)$ be its dual space of all distributions in $[0,1)^d$. The Besov space with dominating mixed smoothness $S_{pq}^r B([0,1)^d)$ consists of all $f \in \mathcal{D}'([0,1)^d)$ with finite quasi-norm
\[ \left\| f | S_{pq}^r B([0,1)^d) \right\| = \inf \left\{ \left\| g | S_{pq}^r B(\mathbb{R}^d) \right\| : \: g \in S_{pq}^r B(\mathbb{R}^d), \: g|_{[0,1)^d} = f \right\}. \]
The Triebel-Lizorkin space with dominating mixed smoothness $S_{pq}^r F([0,1)^d)$ consists of all $f \in \mathcal{D}'([0,1)^d)$ with finite quasi-norm
\[ \left\| f | S_{pq}^r F([0,1)^d) \right\| = \inf \left\{ \left\| g | S_{pq}^r F(\mathbb{R}^d) \right\| : \: g \in S_{pq}^r F(\mathbb{R}^d), \: g|_{[0,1)^d} = f \right\}. \]
The spaces $S_{pq}^r B(\mathbb{R}^d), \, S_{pq}^r F(\mathbb{R}^d), \, S_{pq}^r B([0,1)^d)$ and $S_{pq}^r F([0,1)^d)$ are quasi-Banach spaces. For $1 < p < \infty$ we define the Sobolev space with dominating mixed smoothness as
\[ S_p^r H([0,1)^d) = S_{p2}^r F([0,1)^d). \]
If $r \in \mathbb{N}_0$ then it is denoted by $S_p^r W ([0,1)^d)$ and is called the classical Sobolev space with dominating mixed smoothness. An equivalent norm for $S_p^r W([0,1)^d)$ is
\[ \sum_{\alpha \in \mathbb{N}_0^d; \, 0 \leq \alpha_i \leq r} \left\| D^{\alpha} f | L_p([0,1)^d) \right\|. \]
Also we have
\[ S_p^0 H([0,1)^d) = L_p([0,1)^d). \]
In \cite{Hi10} Hinrichs analyzed the norm of the discrepancy function of point sets of Hammersley type in Besov spaces with dominating mixed smoothness. He proved upper bounds which closed the gap in Triebel's results for the discrepancy in $S_{pq}^r B([0,1)^2)$-spaces. The result from \cite{Hi10} is that for $1 \leq p,q \leq \infty$ and $0 \leq r < \frac{1}{p}$ there is a constant $c > 0$ such that, for any $N \geq 2$, there exists a point set $\mathcal{P}$ in $[0,1)^2$ with $N$ points such that
\[ \left\| D_{\mathcal{P}} | S_{pq}^r B([0,1)^2) \right\| \leq c \, N^{r-1} (\log N)^{\frac{1}{q}}. \]
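As a concrete illustration, a classical Hammersley-type point set in base $2$ can be generated as follows (a sketch only; the generalized Hammersley sets of \cite{M12a} and the Chen-Skriganov sets are more involved constructions):

```python
import numpy as np

def hammersley_2d(N):
    """Base-2 Hammersley-type point set {(n/N, vdc(n)) : n = 0,...,N-1},
    where vdc is the van der Corput radical-inverse in base 2."""
    def vdc(n, base=2):
        q, bk = 0.0, 1.0 / base
        while n > 0:
            n, r = divmod(n, base)   # peel off digits of n in the given base
            q += r * bk
            bk /= base
        return q
    return np.array([(n / N, vdc(n)) for n in range(N)])

points = hammersley_2d(8)
```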
In \cite{M12a} we proved these bounds for generalized Hammersley type point sets in Besov, Triebel-Lizorkin and Sobolev spaces with dominating mixed smoothness. In this note we close the gap in Triebel's result in arbitrary dimension. We use the same constructions that Chen and Skriganov used in \cite{CS02} to prove upper bounds for the $L_2$-discrepancy. The notation mostly follows \cite{DP10}. The main result of this note is
\begin{thm} \label{thm_main}
Let $1 \leq p,q \leq \infty$ and $0 < r < \frac{1}{p}$. Then there exists a constant $c > 0$ such that, for any $N \geq 2$, there exists a point set $\mathcal{P} \subset [0,1)^d$ with $N$ points such that
\[ \left\| D_{\mathcal{P}} | S_{pq}^r B([0,1)^d) \right\| \leq c \, N^{r-1} \, (\log N)^{\frac{d-1}{q}}. \]
\end{thm}
The point sets in the theorem are the Chen-Skriganov point sets; it was conjectured in \cite{Hi10} that they might satisfy the desired upper bound. The restrictions on the parameter $r$ are necessary. The upper bound $r < \frac{1}{p}$ is due to the fact that we need characteristic functions of intervals to belong to $S_{pq}^r B ([0,1)^d)$, cf. the condition given by \cite[Theorem 6.3]{T10a}. The restriction $r \geq 0$ comes from the point sets. In any case, there is a restriction $r > \frac{1}{p} - 1$ since we require $S_{pq}^r B ([0,1)^d)$ to have a $b$-adic Haar basis. The additional restriction $r > 0$ is due to our estimates, which might not be optimal.
In \cite[Remark 6.28]{T10a} and \cite[Proposition 2.3.7]{Hn10} we find the following very practical embeddings. For $0 < p < \infty, \, 0 < q \leq \infty$ and $r \in \mathbb{R}$ we have
\[ S_{p, \min(p,q)}^r B([0,1)^d) \hookrightarrow S_{pq}^r F([0,1)^d) \hookrightarrow S_{p, \max(p,q)}^r B([0,1)^d). \]
For $0 < p_2 \leq q \leq p_1 < \infty$ and $r \in \mathbb{R}$ we have
\[ S_{p_1 q}^r F([0,1)^d) \hookrightarrow S_{qq}^r B([0,1)^d) \hookrightarrow S_{p_2 q}^r F([0,1)^d). \]
Therefore, we can conclude results for the Triebel-Lizorkin and Sobolev spaces.
\begin{thm}
Let $1 \leq p,q \leq \infty$ and $0 < r < \frac{1}{\max(p,q)}$. Then there exist constants $c_1, c_2 > 0$ such that, for any $N \geq 2$, the discrepancy function of any point set $\mathcal{P}$ in $[0,1)^d$ with $N$ points satisfies
\[ \left\| D_{\mathcal{P}} | S_{pq}^r F([0,1)^d) \right\| \geq c_1 \, N^{r-1} (\log N)^{\frac{d-1}{q}}, \]
and there exists a point set $\mathcal{P} \subset [0,1)^d$ with $N$ points such that
\[ \left\| D_{\mathcal{P}} | S_{pq}^r F([0,1)^d) \right\| \leq c_2 \, N^{r-1} \, (\log N)^{\frac{d-1}{q}}. \]
\end{thm}
\begin{cor}
Let $1 \leq p \leq \infty$ and $0 < r < \frac{1}{\max(p,2)}$. Then there exist constants $c_1, c_2 > 0$ such that, for any $N \geq 2$, the discrepancy function of any point set $\mathcal{P}$ in $[0,1)^d$ with $N$ points satisfies
\[ \left\| D_{\mathcal{P}} | S_p^r H([0,1)^d) \right\| \geq c_1 \, N^{r-1} (\log N)^{\frac{d-1}{2}}, \]
and there exists a point set $\mathcal{P} \subset [0,1)^d$ with $N$ points such that
\[ \left\| D_{\mathcal{P}} | S_p^r H([0,1)^d) \right\| \leq c_2 \, N^{r-1} \, (\log N)^{\frac{d-1}{2}}. \]
\end{cor}
The distribution of points in a cube is not just a theoretical concept: it is central to quasi-Monte Carlo methods, since quadrature formulas require well-distributed point sets. The connection between discrepancy and the error of quadrature formulas can be established for many norms. In \cite[Theorem 6.11]{T10a} Triebel gave this connection for Besov spaces with dominating mixed smoothness; using the embeddings we obtain additional results for the Triebel-Lizorkin spaces with dominating mixed smoothness. We define the error of quadrature formulas with $N$ points in some Banach space $M([0,1)^d)$ of functions on $[0,1)^d$ as
\begin{align*}
\err_N(M([0,1)^d)) = \inf_{\{ x_1, \ldots, x_N \} \subset [0,1)^d} \sup_{f \in M^1_0([0,1)^d)} \left| \int_{[0,1)^d} f(x) \, \dint x - \frac{1}{N}\sum_{i = 1}^N f(x_i) \right|
\end{align*}
where by $M^1_0([0,1)^d)$ we mean the subset of the unit ball of $M([0,1)^d)$ with the property that for all $f \in M^1_0([0,1)^d)$ its extension to $[0,1]^d$ vanishes whenever one of the coordinates of the argument is $1$. Then \cite[Theorem 6.11]{T10a} states that for $1 \leq p,q \leq \infty$ and $\frac{1}{p} < r < \frac{1}{p} + 1$ there exist constants $c_1,c_2 > 0$ such that for every integer $N \geq 2$ we have
\begin{multline*}
c_1 \, \inf_{\mathcal{P} \subset [0,1)^d; \, \#\mathcal{P} = N} \left\| D_{\mathcal{P}} | S_{pq}^r B([0,1)^d) \right\| \\
\leq \err_N(S_{p'q'}^{1-r} B([0,1)^d)) \\
\leq c_2 \, \inf_{\mathcal{P} \subset [0,1)^d; \, \#\mathcal{P} = N} \left\| D_{\mathcal{P}} | S_{pq}^r B([0,1)^d) \right\|
\end{multline*}
where
\[ \frac{1}{p} + \frac{1}{p'} = \frac{1}{q} + \frac{1}{q'} = 1. \]
We can thereby conclude results for the integration errors. For more details we refer to \cite{M12b}.
\begin{thm}
\mbox{}
\begin{enumerate}[(i)]
\item Let $1 \leq p,q \leq \infty$, with $q < \infty$ if $p = 1$ and $q > 1$ if $p = \infty$. Let $\frac{1}{p} < r < 1$. Then there exist constants $c_1, C_1 > 0$ such that, for any integer $N \geq 2$, we have
\[ c_1 \, \frac{(\log N)^{\frac{(q-1)(d-1)}{q}}}{N^r} \leq \err_N(S_{pq}^r B) \leq C_1 \, \frac{(\log N)^{\frac{(q-1)(d-1)}{q}}}{N^r}. \]
\item Let $1 \leq p,q < \infty$. Let $\frac{1}{\min(p,q)} < r < 1$. Then there exist constants $c_2, C_2 > 0$ such that, for any integer $N \geq 2$, we have
\[ c_2 \, \frac{(\log N)^{\frac{(q-1)(d-1)}{q}}}{N^r} \leq \err_N(S_{pq}^r F) \leq C_2 \, \frac{(\log N)^{\frac{(q-1)(d-1)}{q}}}{N^r}. \]
\item Let $1 \leq p < \infty$. Let $\frac{1}{\min(p,2)} < r < 1$. Then there exist constants $c_3, C_3 > 0$ such that, for any integer $N \geq 2$, we have
\[ c_3 \, \frac{(\log N)^{\frac{d-1}{2}}}{N^r} \leq \err_N(S_p^r H) \leq C_3 \, \frac{(\log N)^{\frac{d-1}{2}}}{N^r}. \] \label{BTY_quote}
\end{enumerate}
\end{thm}
In the following sections we introduce the constructions of Chen and Skriganov which we will use to prove our result. To do so we will calculate the $b$-adic Haar coefficients of the discrepancy function; we will also introduce the $b$-adic Walsh functions.
\section{The $b$-adic Haar bases}
For some integer $b\geq 2$ a $b$-adic interval of length $b^{-j},\, j\in\mathbb{N}_0$ in $[0,1)$ is an interval of the form
\[ I_{jm} = \big[ b^{-j} m, b^{-j} (m+1) \big)\]
for $m=0,1,\ldots,b^j-1$. For $j \in \mathbb{N}_0$ we divide $I_{jm}$ into $b$ intervals of length $b^{-j - 1}$, i.e.\ $I_{jm}^k = I_{j + 1,bm + k}$, $k=0,\ldots,b - 1$. As an additional notation we put $I_{-1,0}^{-1} = I_{-1,0} = [0,1)$. Let $\mathbb{D}_j = \{0,1,\ldots,b^j-1\}$ and $\mathbb{B}_j = \{1,\ldots,b-1\}$ for $j \in \mathbb{N}_0$, and let $\mathbb{D}_{-1} = \{0\}$ and $\mathbb{B}_{-1} = \{1\}$. The $b$-adic Haar functions $h_{jml}$ have support in $I_{jm}$: for any $j \in \mathbb{N}_0$, $m \in \mathbb{D}_j$, $l \in \mathbb{B}_j$ and any $k=0,\ldots,b-1$ the value of $h_{jml}$ on $I_{jm}^k$ is $e^{\frac{2\pi i}{b}lk}$. We denote the indicator function of $I_{-1,0}$ by $h_{-1,0,1}$. Let $\mathbb{N}_{-1} = \{-1,0,1,2,\ldots\}$. The functions $h_{jml}$, $j \in \mathbb{N}_{-1}$, $m \in \mathbb{D}_j$, $l\in\mathbb{B}_j$, are called the $b$-adic Haar system. Normalized in $L_2([0,1))$, they form the orthonormal $b$-adic Haar basis of $L_2([0,1))$. A proof of this fact can be found in \cite{RW98}.
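Purely as an illustration (not part of the proofs), the one-dimensional un-normalized $b$-adic Haar functions can be evaluated with a short Python sketch; the function names are ours. Since $h_{jml}$ is constant on each child interval $I_{jm}^k$, sampling one point per child suffices to compute exact averages, and for $j \geq 0$ the mean over the support is a sum of $b$-th roots of unity, hence zero.

```python
import cmath

def haar(b, j, m, l, x):
    """Value of the (un-normalized) b-adic Haar function h_{jml} at x in [0,1).

    For j >= 0 the function is supported on I_{jm} = [m b^-j, (m+1) b^-j) and
    takes the value e^{2 pi i l k / b} on the k-th child interval I_{jm}^k.
    For j = -1 it is the indicator function of [0,1).
    """
    if j == -1:
        return 1.0 + 0.0j
    if not (m * b**-j <= x < (m + 1) * b**-j):
        return 0.0 + 0.0j
    k = int(x * b**(j + 1)) - b * m      # index of the child interval containing x
    return cmath.exp(2j * cmath.pi * l * k / b)

def mean_on_support(b, j, m, l):
    """Exact mean of h_{jml} over I_{jm} (j >= 0): one sample per child interval."""
    return sum(haar(b, j, m, l, (b * m + k + 0.5) * b**-(j + 1))
               for k in range(b)) / b
```

For any $j \geq 0$ and $l \in \mathbb{B}_j$ the mean vanishes, which is exactly why $h_{jml}$ is orthogonal to constants.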
For $j = (j_1, \dots, j_d) \in \mathbb{N}_{-1}^d$, $m = (m_1, \ldots, m_d) \in \mathbb{D}_j = \mathbb{D}_{j_1} \times \ldots \times \mathbb{D}_{j_d}$ and $l = (l_1, \ldots, l_d) \in \mathbb{B}_j = \mathbb{B}_{j_1} \times \ldots \times \mathbb{B}_{j_d}$, the Haar function $h_{jml}$ is given as the tensor product $h_{jml}(x) = h_{j_1 m_1 l_1}(x_1) \ldots h_{j_d m_d l_d}(x_d)$ for $x = (x_1, \ldots, x_d) \in [0,1)^d$. We call the sets $I_{jm} = I_{j_1 m_1} \times \ldots \times I_{j_d m_d}$ $b$-adic boxes. For $k = (k_1,\ldots,k_d)$ where $k_i \in \{ 0,\ldots,b - 1 \}$ for $j_i \in \mathbb{N}_0$ and $k_i = -1$ for $j_i = -1$ we put $I_{jm}^k = I_{j_1 m_1}^{k_1} \times \ldots \times I_{j_d m_d}^{k_d}$. The functions $h_{jml}$, $j \in \mathbb{N}_{-1}^d$, $m \in \mathbb{D}_j$, $l\in\mathbb{B}_j$, are called the $d$-dimensional $b$-adic Haar system. Normalized in $L_2([0,1)^d)$, they form the orthonormal $b$-adic Haar basis of $L_2([0,1)^d)$.
For any function $f \in L_2([0,1)^d)$ we have by Parseval's equation
\begin{align}
\|f|L_2\|^2 = \sum_{j\in\mathbb{N}_{-1}^d} b^{\max(0,j_1) + \ldots + \max(0,j_d)} \sum_{m\in \mathbb{D}_j, \, l\in\mathbb{B}_j} |\mu_{jml}|^2
\end{align}
where
\begin{align}
\mu_{jml} = \mu_{jml}(f) = \int_{[0,1)^d} f(x) h_{jml}(x) \, \dint x
\end{align}
are the $b$-adic Haar coefficients of $f$.
In \cite{M12a} we gave an equivalent norm for the Besov spaces with dominating mixed smoothness. Let $j = (j_1, \ldots, j_d) \in \mathbb{N}_{-1}^d$ and let $s = \#\{ i = 1, \ldots, d: \, j_i \neq -1 \}$. Then we can choose a subsequence $(\eta_\nu)_{\nu = 1}^s$ of $(1, \ldots, d)$ such that for all $\nu = 1, \ldots, s$ we have $j_{\eta_{\nu}} \neq -1$ while all other $j_i$ are equal to $-1$. Then we have
\begin{align} \label{eq_quasinorm}
\left\| f | S_{pq}^r B([0,1)^d) \right\| \approx \left( \sum_{j \in \mathbb{N}_{-1}^d} b^{(j_{\eta_1} + \ldots + j_{\eta_s})(r - \frac{1}{p} + 1) q} \left( \sum_{m \in \mathbb{D}_j, \, l \in \mathbb{B}_j} | \mu_{jml}|^p \right)^{\frac{q}{p}} \right)^{\frac{1}{q}}
\end{align}
for every $f \in S_{pq}^r B([0,1)^d)$.
\section{Constructions of Chen and Skriganov}
In this section we will describe the constructions of point sets proposed by Chen and Skriganov. We describe the constructions for those $N = b^n$ where $n$ is divisible by $2d$, i.e. $n =2dw$ for some $w \in \mathbb{N}$. Point sets with an arbitrary number of points can be constructed by the usual method, as described for example in \cite[Section 16.1]{DP10}. We give some notation and definitions.
We begin with the definition of digital nets. Let $d \geq 1$, $b \geq 2$, and $n \geq 0$ be integers. A point set $\mathcal{P} \subset [0,1)^d$ of $b^n$ points is called a net in base $b$ if every $b$-adic box of volume $b^{-n}$ contains exactly one point of $\mathcal{P}$. Usually the term $(0,n,d)$-net is used, but here we simply say net for brevity. Digital nets in base $b$ are special nets in base $b$ that can be constructed using $n \times n$ matrices $C_1, \ldots, C_d$. We describe the digital method to construct digital nets. Let $b$ be a prime number. Let $n \in \mathbb{N}_0$. Let $C_1, \ldots, C_d$ be $n \times n$ matrices with entries from $\mathbb{F}_b$. We generate the net point $x_r = (x_r^1, \ldots, x_r^d)$ with $0 \leq r < b^n$. We expand $r$ in base $b$ as
\[ r = r_0 + r_1 b + \ldots + r_{n-1}b^{n-1} \]
with digits $r_k \in \{ 0, 1, \ldots, b-1 \}$, $0 \leq k \leq n-1$. We put $\bar{r} = (r_0, \ldots, r_{n-1})^{\top} \in \mathbb{F}_b^n$ and $\bar{h}_{r,i} = C_i \, \bar{r} = (h_{r,i,1}, \ldots, h_{r,i,n})^{\top} \in \mathbb{F}_b^n$, $1 \leq i \leq d$. Then we get $x_r^i$ as
\[ x_r^i = \frac{h_{r,i,1}}{b} + \ldots + \frac{h_{r,i,n}}{b^n}. \]
A point set $\left\{ x_0, \ldots, x_{b^n-1} \right\}$ constructed with the digital method is called a digital net in base $b$ with generating matrices $C_1, \ldots, C_d$ if it is a net in base $b$.
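The digital method is entirely elementary modular linear algebra, and can be sketched in a few lines of Python (an illustration only; the function name and the test matrices are ours, not from the literature):

```python
def digital_net(b, n, matrices):
    """Generate the b^n points of the digital method over F_b (b prime).

    `matrices` is a list of d n-by-n matrices (lists of rows) with entries
    in {0,...,b-1}.  Point r has i-th coordinate sum_k h_k b^{-k-1}, where
    (h_0,...,h_{n-1}) = C_i rbar mod b and rbar is the digit vector of r.
    """
    d = len(matrices)
    points = []
    for r in range(b**n):
        rbar = [(r // b**k) % b for k in range(n)]          # digits r_0,...,r_{n-1}
        point = []
        for C in matrices:
            h = [sum(C[row][col] * rbar[col] for col in range(n)) % b
                 for row in range(n)]
            point.append(sum(h[k] * b**-(k + 1) for k in range(n)))
        points.append(tuple(point))
    return points
```

With $d = 1$, $b = 2$, $n = 3$ and the identity matrix this reproduces the set $\{0, 1/8, \ldots, 7/8\}$, so every dyadic interval of length $2^{-3}$ contains exactly one point, i.e.\ the net property holds in this toy case.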
For a digital net $\mathcal{P}$ with generating matrices $C_1,\ldots,C_d$ over $\mathbb{F}_b$ we define
\[ \mathcal{D}_n(C_1,\ldots,C_d) = \left\{ t \in \{ 0, \ldots, b^n - 1 \}^d \, : \, C_1^{\top} \, \bar{t}_1 + \ldots + C_d^{\top} \, \bar{t}_d = 0 \right\} \text{ and } \mathcal{D}_n'(C_1,\ldots,C_d) = \mathcal{D}_n(C_1,\ldots,C_d) \setminus \{ 0 \} \]
where $t = (t_1, \ldots, t_d)$ and for $1 \leq i \leq d$ we denote by $\bar{t}_i$ the $n$-dimensional column vector of $b$-adic digits of $t_i$. Instead of $(0, \ldots, 0)$ we just wrote $0$.
Now let $\alpha \in \mathbb{N}$ with $b$-adic expansion $\alpha = \alpha_0 + \alpha_1 b + \ldots + \alpha_{h - 1} b^{h - 1}$ where $\alpha_{h - 1} \neq 0$. We define the Niederreiter-Rosenbloom-Tsfasman (NRT) weight by $\varrho(\alpha) = h$. Furthermore, we define $\varrho(0) = 0$. We define the Hamming weight $\varkappa(\alpha)$ as the number of non-zero digits $\alpha_{\nu}$, $0 \leq \nu < \varrho(\alpha)$.
For $\alpha = (\alpha_1,\ldots,\alpha_d) \in \mathbb{N}_0^d$, let
\[ \varrho^d(\alpha) = \sum_{i = 1}^d \varrho(\alpha_i) \text{ and } \varkappa^d(\alpha) = \sum_{i = 1}^d \varkappa(\alpha_i). \]
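The NRT and Hamming weights are simple digit counts, as a short Python sketch (illustrative; the function names are ours) makes explicit:

```python
def nrt_weight(alpha, b):
    """NRT weight rho(alpha): number of b-adic digits of alpha (0 for alpha = 0)."""
    rho = 0
    while alpha > 0:
        alpha //= b
        rho += 1
    return rho

def hamming_weight(alpha, b):
    """Hamming weight kappa(alpha): number of non-zero b-adic digits of alpha."""
    kappa = 0
    while alpha > 0:
        kappa += (alpha % b != 0)
        alpha //= b
    return kappa

def nrt_weight_d(alphas, b):
    """rho^d: coordinate-wise sum of NRT weights."""
    return sum(nrt_weight(a, b) for a in alphas)

def hamming_weight_d(alphas, b):
    """kappa^d: coordinate-wise sum of Hamming weights."""
    return sum(hamming_weight(a, b) for a in alphas)
```

For instance, in base $3$ the number $10 = 1 + 0 \cdot 3 + 1 \cdot 9$ has NRT weight $3$ and Hamming weight $2$.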
Now let $b$ be prime and let $n \in \mathbb{N}$. Let $\mathcal{C}$ be some $\mathbb{F}_b$-linear subspace of $\mathbb{F}_b^{dn}$. Then we define the dual space $\mathcal{C}^{\perp}$ relative to the standard inner product in $\mathbb{F}_b^{dn}$ as
\[ \mathcal{C}^{\perp} = \left\{ A \in \mathbb{F}_b^{dn}: \, B \cdot A = 0 \text{ for all } B \in \mathcal{C} \right\}. \]
We have $\dim(\mathcal{C}^{\perp}) = dn - \dim(\mathcal{C})$ and $(\mathcal{C}^{\perp})^{\perp} = \mathcal{C}$.
Now let $a = (a_1, \ldots, a_n) \in \mathbb{F}_b^n$ and
\[ v_n(a) = \begin{cases} 0 & \text{ if } a = 0, \\ \max\left\{ \nu: \, a_{\nu} \neq 0 \right\} & \text{ if } a \neq 0. \end{cases} \]
We define $\varkappa_n(a)$ as the number of indices $1 \leq \nu \leq n$ such that $a_{\nu} \neq 0$. Let $A = (a_1, \ldots, a_d) \in \mathbb{F}_b^{dn}$ with $a_i \in \mathbb{F}_b^n$ for $1 \leq i \leq d$ and let
\[ v_n^d(A) = \sum_{i = 1}^d v_n(a_i) \text{ and } \varkappa_n^d(A) = \sum_{i = 1}^d \varkappa_n(a_i). \]
Now let $\mathcal{C} \neq \left\{ 0 \right\}$. Then the minimum distance of $\mathcal{C}$ is defined as
\[ \delta_n(\mathcal{C}) = \min\left\{ v_n^d(A): \, A \in \mathcal{C} \setminus \left\{ 0 \right\} \right\}. \]
Furthermore, let $\delta_n(\left\{ 0 \right\}) = dn + 1$. Finally we define
\[ \varkappa_n(\mathcal{C}) = \min\left\{ \varkappa_n^d(A): \, A \in \mathcal{C} \setminus \left\{ 0 \right\} \right\}. \]
The weight $\varkappa_n(\mathcal{C})$ is called the Hamming weight of $\mathcal{C}$.
Now we continue with the constructions. So let $w \in \mathbb{N}$ such that $n = 2dw$.
Let
\[ f(z) = f_0 + f_1 z + \ldots + f_{h - 1} z^{h - 1} \]
be a polynomial in $\mathbb{F}_b[z]$. For every $\lambda \in \mathbb{N}$ the $\lambda$-th hyper-derivative is
\[ \partial^{\lambda} f(z) = \sum_{i=0}^{h - 1} \binom{i}{\lambda} f_i z^{i - \lambda}. \]
We use the usual convention that $\binom{i}{\lambda} = 0$ whenever $\lambda > i$; the binomial coefficients are taken modulo $b$.
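The hyper-derivative is again purely combinatorial; the following Python sketch (illustrative, with names of our choosing) computes it on the coefficient level together with polynomial evaluation over $\mathbb{F}_b$:

```python
from math import comb

def hyper_derivative(coeffs, lam, b):
    """lam-th hyper-derivative of f = sum_i coeffs[i] z^i over F_b.

    Returns the coefficient list of partial^lam f: the coefficient of
    z^{i - lam} is C(i, lam) * f_i mod b (and zero when lam > i).
    """
    return [(comb(i, lam) * coeffs[i]) % b for i in range(lam, len(coeffs))]

def eval_poly(coeffs, z, b):
    """Evaluate a polynomial over F_b at z by Horner's rule."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * z + c) % b
    return acc
```

For example, over $\mathbb{F}_5$ the first hyper-derivative of $z^3$ is $\binom{3}{1} z^2 = 3 z^2$, which at $z = 2$ gives $3 \cdot 4 = 12 \equiv 2 \pmod 5$.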
Let $b \geq 2d^2$ be a prime. Then we can choose $2d^2$ distinct elements $\beta_{i,\nu} \in \mathbb{F}_b$ for $1 \leq i \leq d$ and $1 \leq \nu \leq 2d$. For $1 \leq i \leq d$ let
\[ a_i(f) = \left( \left( \partial^{\lambda - 1} f(\beta_{i,\nu}) \right)_{\lambda = 1}^w \right)_{\nu=1}^{2d} \in \mathbb{F}_b^n. \]
We define $\mathcal{C}_n \subset \mathbb{F}_b^{dn}$ as
\[ \mathcal{C}_n = \left\{ A(f) = (a_1(f), \ldots, a_d(f)): \, f \in \mathbb{F}_b[z], \, \dg(f) < n \right\}. \]
Clearly, $\mathcal{C}_n$ has exactly $b^n$ elements. The set of polynomials in $\mathbb{F}_b[z]$ with $\dg(f) < n$ is closed under addition and scalar multiplication over $\mathbb{F}_b$, hence $\mathcal{C}_n$ is an $\mathbb{F}_b$-linear subspace of $\mathbb{F}_b^{dn}$. From \cite[Theorem 16.28]{DP10}, for example, one learns that $\mathcal{C}_n$ has dimension $n$ while its dual space has dimension $dn - n$, and that it satisfies
\begin{align} \label{dualspace}
\varkappa_n(\mathcal{C}_n^{\perp}) \geq 2d + 1 \text{ and } \delta_n(\mathcal{C}_n^{\perp}) \geq n + 1.
\end{align}
Finally we only need to transfer $\mathcal{C}_n$ into the unit cube $[0,1)^d$ as a point set. To do so we define mappings $\Phi_n: \, \mathbb{F}_b^{n} \rightarrow [0,1)$ and $\Phi_n^d: \, \mathbb{F}_b^{dn} \rightarrow [0,1)^d$. For $a = (a_1, \ldots, a_n) \in \mathbb{F}_b^n$ we set
\[ \Phi_n(a) = \frac{a_1}{b} + \ldots + \frac{a_n}{b^n} \]
and for $A = (a_1, \ldots, a_d) \in \mathbb{F}_b^{dn}$, we set
\[ \Phi_n^d(A) = \left( \Phi_n(a_1), \ldots, \Phi_n(a_d) \right). \]
So we are ready to define the point set that proves our main result. The point set of Chen and Skriganov is given by
\[ \mathcal{C}S_n = \Phi_n^d(\mathcal{C}_n) \]
and contains exactly $N = b^n$ points.
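To see the construction in action, here is a toy Python sketch (illustrative only, not the proof apparatus). One assumption is ours: the $\beta_{i,\nu}$ are taken to be consecutive field elements, whereas the construction only requires $2d^2$ distinct elements of $\mathbb{F}_b$. The degenerate case $d = 1$, $b = 2$, $w = 1$ already yields a net:

```python
from math import comb

def chen_skriganov_points(b, d, w):
    """Toy sketch of the Chen-Skriganov point set CS_n for n = 2*d*w, b prime.

    For each polynomial f over F_b with deg(f) < n, the code word A(f) collects
    the hyper-derivatives partial^{lam-1} f(beta_{i,nu}) for lam = 1..w and 2d
    base points beta_{i,nu} per coordinate; Phi_n^d maps the digit vectors into
    [0,1)^d.  The beta_{i,nu} are chosen here as consecutive field elements,
    which is a concrete choice of 2*d*d distinct elements (needs b >= 2*d*d).
    """
    n = 2 * d * w
    assert b >= 2 * d * d
    betas = [[2 * d * i + nu for nu in range(2 * d)] for i in range(d)]

    def hyper_eval(coeffs, lam, beta):
        # value of the lam-th hyper-derivative at beta, modulo b
        return sum(comb(i, lam) * coeffs[i] * pow(beta, i - lam, b)
                   for i in range(lam, len(coeffs))) % b

    points = []
    for code in range(b**n):
        coeffs = [(code // b**i) % b for i in range(n)]      # f_0,...,f_{n-1}
        point = []
        for i in range(d):
            digits = [hyper_eval(coeffs, lam, beta)
                      for beta in betas[i] for lam in range(w)]
            point.append(sum(a * b**-(k + 1) for k, a in enumerate(digits)))
        points.append(tuple(point))
    return points
```

For $d = 1$, $b = 2$, $w = 1$ (so $n = 2$) the code words are $(f(0), f(1))$ over all linear polynomials $f$, and the resulting point set is $\{0, 1/4, 1/2, 3/4\}$, i.e.\ one point in each dyadic interval of length $2^{-2}$.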
From \cite[Theorem 7.14]{DP10} we finally learn that $\mathcal{C}S_n$ is a digital net in base $b$: for every $\mathbb{F}_b$-linear subspace $\mathcal{C}$ of $\mathbb{F}_b^{dn}$ with dimension $n$ and dual space satisfying \eqref{dualspace}, the set $\Phi_n^d(\mathcal{C})$ is a digital net in base $b$ with some generating matrices $C_1, \ldots, C_d$. We will call $\Phi_n^d(\mathcal{C})$ the corresponding digital net. As a final remark of this section we note that we needed $b$ to be large so that $\mathbb{F}_b$ has enough distinct elements; there are also general conditions on how small $b$ can be for a net in base $b$ to exist (see \cite[Chapter 4]{DP10}).
\section{The $b$-adic Walsh functions}
Let $b \geq 2$ be an integer. For some $\alpha \in \mathbb{N}_0$ with $b$-adic expansion $\alpha = \alpha_0 + \alpha_1 b + \ldots + \alpha_{\varrho(\alpha) - 1} b^{\varrho(\alpha) - 1}$ we define the $\alpha$-th $b$-adic Walsh function $\wal_{\alpha}: \, [0,1) \rightarrow \mathbb{C}$, as
\[ \wal_{\alpha}(x) = e^{\frac{2 \pi i}{b} (\alpha_0 x_1 + \alpha_1 x_2 + \ldots + \alpha_{\varrho(\alpha) - 1} x_{\varrho(\alpha)})}, \]
for $x \in [0,1)$ with $b$-adic expansion $x = x_1 b^{-1} + x_2 b^{-2} + \ldots$. The functions $\wal_{\alpha}$, $\alpha \in \mathbb{N}_0$, are called the $b$-adic Walsh system.
For $\alpha = (\alpha^1, \ldots, \alpha^d) \in \mathbb{N}_0^d$ the Walsh function $\wal_{\alpha}$ is given as the tensor product $\wal_{\alpha}(x) = \wal_{\alpha^1}(x_1) \ldots \wal_{\alpha^d}(x_d)$ for $x = (x_1, \ldots, x_d) \in [0,1)^d$. The functions $\wal_{\alpha}$, $\alpha \in \mathbb{N}_0^d$, are called the $d$-dimensional $b$-adic Walsh system.
For $\alpha \in \mathbb{N}$ the function $\wal_{\alpha}$ is constant on $b$-adic intervals $I_{\varrho(\alpha)m}$ for any $m \in \mathbb{D}_{\varrho(\alpha)}$. Further, $\wal_0$ is constant on $[0,1)$ with value $1$. We have
\[ \int_{[0,1)} \wal_{\alpha}(x)\dint x = \begin{cases} 1 & \text{ if } \alpha = 0, \\ 0 & \text{ if } \alpha \neq 0, \end{cases} \]
and for $\alpha, \beta \in \mathbb{N}_0^d$ we have
\[ \int_{[0,1)^d} \wal_{\alpha}(x) \overline{\wal_{\beta}(x)} \dint x = \begin{cases} 1 & \text{ if } \alpha = \beta, \\ 0 & \text{ if } \alpha \neq \beta. \end{cases} \]
The $d$-dimensional $b$-adic Walsh system is an orthonormal basis in $L_2([0,1)^d)$. The proofs of these facts can be found for example in Appendix A of \cite{DP10}.
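The orthonormality can be checked exactly in small cases: since $\wal_{\alpha}$ and $\wal_{\beta}$ are constant on $b$-adic intervals of length $b^{-J}$ once $J \geq \varrho(\alpha), \varrho(\beta)$, the integral equals a finite average over one point per interval. A Python sketch (illustrative; points are represented by their digit vectors to avoid floating-point digit extraction):

```python
import cmath
from itertools import product

def walsh(alpha, xdigits, b):
    """b-adic Walsh function wal_alpha at the point with b-adic digit vector
    xdigits = (x_1, x_2, ...), i.e. x = sum_k x_k b^-k."""
    val = sum((alpha // b**k % b) * xk for k, xk in enumerate(xdigits))
    return cmath.exp(2j * cmath.pi * val / b)

def walsh_inner(alpha, beta, b, J):
    """Exact inner product <wal_alpha, wal_beta> on [0,1): both functions are
    step functions on the b-adic grid of mesh b^-J once J >= rho(alpha) and
    J >= rho(beta), so the grid average equals the integral."""
    total = sum(walsh(alpha, xd, b) * walsh(beta, xd, b).conjugate()
                for xd in product(range(b), repeat=J))
    return total / b**J
```

For $b = 3$ one finds $\langle \wal_2, \wal_2 \rangle = 1$ and $\langle \wal_1, \wal_2 \rangle = 0$, as the orthonormality asserts.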
\section{Calculation of the $b$-adic Haar coefficients}
Before we can compute the Haar coefficients we need some easy lemmas. We omit the proofs since they are straightforward exercises.
\begin{lem} \label{lem_haar_coeff_besov_x}
Let $f(x) = x_1 \cdot \ldots \cdot x_d$ for $x=(x_1,\ldots,x_d) \in [0,1)^d$. Let $j \in \mathbb{N}_{-1}^d, \, m \in \mathbb{D}_j, l \in \mathbb{B}_j$ and let $\mu_{jml}$ be the $b$-adic Haar coefficient of $f$. Then
\[ \mu_{jml} = \frac{b^{-2j_{\eta_1} - \ldots - 2j_{\eta_s} - s}}{2^{d-s}(e^{\frac{2\pi i}{b} l_{\eta_1}} - 1) \cdot \ldots \cdot (e^{\frac{2\pi i}{b} l_{\eta_s}} - 1)}. \]
\end{lem}
\begin{lem} \label{lem_haar_coeff_besov_indicator}
Let $z = (z_1,\ldots,z_d) \in [0,1)^d$ and $g(x) = \chi_{[0,x)}(z)$ for $x = (x_1, \ldots, x_d) \in [0,1)^d$. Let $j \in \mathbb{N}_{-1}^d, \, m \in \mathbb{D}_j, l \in \mathbb{B}_j$ and let $\mu_{jml}$ be the $b$-adic Haar coefficient of $g$. Then $\mu_{jml} = 0$ whenever $z$ is not contained in the interior of the $b$-adic box $I_{jm}$ supporting the functions $h_{jml}$. If $z$ is contained in the interior of $I_{jm}$ then there is a $k = (k_1,\ldots,k_d)$ with $k_i \in \{0,1,\ldots,b-1\}$ if $j_i \neq -1$ or $k_i = -1$ if $j_i = -1$ such that $z$ is contained in $I_{jm}^k$. Then
\begin{multline*}
\mu_{jml} = b^{-j_{\eta_1} - \ldots - j_{\eta_s} - s} \prod_{1 \leq i \leq d; \, j_i = -1}(1-z_i) \times\\
\times \prod_{\nu = 1}^s \left[ (bm_{\eta_{\nu}}+k_{\eta_{\nu}}+1-b^{j_{\eta_{\nu}}+1}z_{\eta_{\nu}}) e^{\frac{2\pi i}{b}k_{\eta_{\nu}} l_{\eta_{\nu}}} + \sum_{r_{\eta_{\nu}} = k_{\eta_{\nu}}+1}^{b-1} e^{\frac{2\pi i}{b}r_{\eta_{\nu}} l_{\eta_{\nu}}} \right].
\end{multline*}
\end{lem}
\begin{lem} \label{lem_lambda_s_minus_1}
Let $\lambda \in \mathbb{N}_0$ and $s \in \mathbb{N}$. Then
\[ \# \left\{ (j_1, \ldots, j_s) \in \mathbb{N}_0^s: \, j_1 + \ldots + j_s = \lambda \right\} \leq (\lambda + 1)^{s-1}. \]
\end{lem}
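The bound in Lemma \ref{lem_lambda_s_minus_1} can be checked by brute force for small parameters (the exact count is the stars-and-bars number $\binom{\lambda + s - 1}{s - 1}$); a Python sketch with names of our choosing:

```python
from math import comb
from itertools import product

def count_compositions(lam, s):
    """Number of tuples (j_1,...,j_s) in N_0^s with j_1 + ... + j_s = lam,
    counted by brute force enumeration."""
    return sum(1 for j in product(range(lam + 1), repeat=s) if sum(j) == lam)
```

For instance $\lambda = 4$, $s = 3$ gives $\binom{6}{2} = 15 \leq (4+1)^2 = 25$.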
We consider the Walsh series expansion of the function $\chi_{[0,y)}$
\begin{align}
\chi_{[0,y)}(x) = \sum_{t = 0}^\infty \hat{\chi}_{[0,y)}(t) \wal_t(x),
\end{align}
where for $t \in \mathbb{N}_0$ with $b$-adic expansion $t = \tau_0 + \tau_1 b + \ldots + \tau_{\varrho(t) - 1} b^{\varrho(t) - 1}$, the $t$-th Walsh coefficient is given by
\[ \hat{\chi}_{[0,y)}(t) = \int_0^1 \chi_{[0,y)}(x) \overline{\wal_t(x)} \dint x = \int_0^y \overline{\wal_t(x)} \dint x. \]
For $t > 0$ we put $t = t' + \tau_{\varrho(t) - 1} b^{\varrho(t) - 1}$.
The following formulas are known as the Fine-Price formulas; they were first proved in \cite{F49} (dyadic case) and \cite{P57} ($b$-adic version). They can often be found in the literature; see e.g. \cite[Lemma 14.8]{DP10} for an easily understandable proof.
\begin{lem} \label{chi_roof_lem}
Let $b \geq 2$ be an integer and $y \in [0,1)$. Then we have
\[ \hat{\chi}_{[0,y)}(0) = y = \frac{1}{2} + \sum_{a = 1}^\infty \sum_{z = 1}^{b-1} \frac{1}{b^a (e^{-\frac{2 \pi i}{b} z} - 1)} \wal_{z b^{a-1}}(y) \]
and for any integer $t > 0$ we have
\begin{multline*}
\hat{\chi}_{[0,y)}(t) = \frac{1}{b^{\varrho(t)}}\left( \frac{1}{1 - \e^{-\frac{2 \pi {\rm i}}{b} \tau_{\varrho(t) - 1}}} \overline{\wal_{t'}(y)} \right. +\\
+ \left( \frac{1}{\e^{-\frac{2 \pi {\rm i}}{b} \tau_{\varrho(t) - 1}} - 1} + \frac{1}{2} \right) \overline{\wal_t(y)} +\\
+ \left. \sum_{a = 1}^\infty \sum_{z = 1}^{b-1} \frac{1}{b^a (\e^{\frac{2 \pi {\rm i}}{b} z} - 1)} \overline{\wal_{z b^{\varrho(t)+a-1} + t}(y)} \right).
\end{multline*}
\end{lem}
The first part of the Lemma is \cite[Lemma A.22]{DP10}, the second is \cite[Lemma 14.8]{DP10}.
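As a numerical sanity check (not part of the proof), the first Fine-Price formula can be verified for points $y$ with finite $b$-adic expansion: for $\alpha = z b^{a-1}$ only the $a$-th digit $y_a$ of $y$ enters, since $\wal_{z b^{a-1}}(y) = e^{\frac{2\pi i}{b} z y_a}$. A Python sketch with a truncation parameter $A$ (the truncation error is of order $b^{-A}$; the function name is ours):

```python
import cmath

def fine_price_zero(ydigits, b, A):
    """Truncated Fine-Price series for chi-hat_{[0,y)}(0) = y:
    1/2 + sum_{a=1}^{A} sum_{z=1}^{b-1}
          wal_{z b^{a-1}}(y) / (b^a (e^{-2 pi i z / b} - 1)),
    where y has the finite b-adic digit vector ydigits = (y_1, y_2, ...)."""
    total = 0.5 + 0j
    for a in range(1, A + 1):
        ya = ydigits[a - 1] if a - 1 < len(ydigits) else 0
        for z in range(1, b):
            total += cmath.exp(2j * cmath.pi * z * ya / b) / (
                b**a * (cmath.exp(-2j * cmath.pi * z / b) - 1))
    return total
```

With $b = 2$ and digits $(1, 0, 1)$, i.e.\ $y = 5/8$, the truncated series returns $0.625$ up to the truncation error, and similarly for $b = 3$ and $y = 5/9$.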
Let $n \in \mathbb{N}_0$. We consider the approximation of $\chi_{[0,y)}$ by the truncated series
\begin{align}
\chi_{[0,y)}^{(n)}(x) = \sum_{t = 0}^{b^n - 1} \hat{\chi}_{[0,y)}(t) \wal_t(x).
\end{align}
Let $N$ be a positive integer. Then, for a point set $\mathcal{P}$ in $[0,1)^d$ with $N$ points, we put
\begin{align}
\Theta_{\mathcal{P}}(y) = \frac{1}{N} \sum_{z \in \mathcal{P}} \chi_{[0,y)}^{(n)}(z) - y_1 \cdot \ldots \cdot y_d.
\end{align}
We define the remainder $R_{\mathcal{P}}$ by the splitting
\begin{align} \label{split}
D_{\mathcal{P}}(y) = \Theta_{\mathcal{P}}(y) + R_{\mathcal{P}}(y).
\end{align}
We now restrict ourselves again to the case where $b$ is prime.
\begin{lem} \label{lem_duality_into_disc}
Let $\{ x_0, \ldots, x_{b^n - 1} \}$ be a digital net in base $b$ generated by the matrices $C_1, \ldots, C_d$. Then for $t \in \{ 0, \ldots, b^n - 1 \}^d$, we have
\[ \sum_{h = 0}^{b^n - 1} \wal_t(x_h) = \begin{cases} b^n & \text{ if } t \in \mathcal{D}_n(C_1, \ldots, C_d), \\ 0 & \text{ otherwise}. \end{cases} \]
\end{lem}
The proof of this fact can be found in \cite[Section 4.4]{DP10}.
\begin{lem} \label{theta_lem}
Let $\mathcal{C}$ be an $\mathbb{F}_b$-linear subspace of $\mathbb{F}_b^{dn}$ of dimension $n$ and let $\mathcal{P} = \Phi_n^d(\mathcal{C})$ denote the corresponding digital net in base $b$ with generating matrices $C_1, \ldots, C_d$. Then
\[ \Theta_{\mathcal{P}}(y) = \sum_{t \in \mathcal{D}_n'(C_1, \ldots, C_d)} \hat{\chi}_{[0,y)}(t). \]
\end{lem}
The proof of this fact is contained in the proof of \cite[Lemma 16.22]{DP10}.
\begin{lem} \label{rest_lem}
There exists a constant $c > 0$ such that, for any $n \in \mathbb{N}_0$, for any $\mathbb{F}_b$-linear subspace $\mathcal{C}$ of $\mathbb{F}_b^{dn}$ of dimension $n$ with dual space $\mathcal{C}^{\perp}$ satisfying $\delta_n(\mathcal{C}^{\perp}) \geq n + 1$ with the corresponding digital net $\mathcal{P} = \Phi_n^d(\mathcal{C})$, and for every $y \in [0,1)^d$, we have
\[ |R_{\mathcal{P}}(y)| \leq c \, b^{-n}. \]
\end{lem}
For a proof of this lemma the interested reader is referred to \cite[Lemma 16.21]{DP10}.
We introduce a common notation: for functions $f, g \in L_2([0,1)^d)$ we write
\[ \left\langle f, g \right\rangle = \int_{[0,1)^d} f \, \bar{g}. \]
\begin{prp} \label{prp_min1}
Let $j = (-1, \ldots, -1), \, m = (0, \ldots, 0), \, l = (1, \ldots, 1)$. Then there exists a constant $c > 0$ independent of $n$ such that
\[ |\mu_{jml}(D_{\mathcal{C}S_n})| \leq c \, b^{-n}. \]
\end{prp}
\begin{proof}
As in \eqref{split} we split $D_{\mathcal{C}S_n}(y) = \Theta_{\mathcal{C}S_n}(y) + R_{\mathcal{C}S_n}(y)$ and we know from Lemma \ref{rest_lem} (since $\mathcal{C}S_n$ is a digital net) that there is a constant $c > 0$ such that $|R_{\mathcal{C}S_n}(y)| \leq c \, b^{-n}$. Using Lemma \ref{theta_lem} we can calculate the Haar coefficient
\[ \mu_{jml}(D_{\mathcal{C}S_n}) = \left\langle \Theta_{\mathcal{C}S_n} + R_{\mathcal{C}S_n} , h_{jml} \right\rangle. \]
To do so we use the fact that $h_{jml} = \wal_{(0, \ldots, 0)}$. Now we consider the one-dimensional case first and from the first part of Lemma \ref{chi_roof_lem} we get
\[ \left\langle \hat{\chi}_{[0,\cdot)}(0),\wal_0 \right\rangle = \frac{1}{2}. \]
Now let $t > 0$. Then by the second part of Lemma \ref{chi_roof_lem} we have
\[ \left\langle \hat{\chi}_{[0,\cdot)}(t),\wal_0 \right\rangle = \begin{cases}
\frac{1}{b^{\varrho(t)}} \frac{1}{1 - e^{-\frac{2 \pi i}{b} \tau_{\varrho(t)-1}}} & t' = 0, \\
0 & t' \neq 0.
\end{cases} \]
This means that we can find a constant $c_1 > 0$ such that for every integer $t \geq 0$ we have
\[ \left| \left\langle \hat{\chi}_{[0,\cdot)}(t),\wal_0 \right\rangle \right| \leq c_1 \, b^{-\varrho(t)} \]
and
\[ \left\langle \hat{\chi}_{[0,\cdot)}(t),\wal_0 \right\rangle = 0 \]
if $t > 0$ and $t' \neq 0$.
Now suppose we have some $t \in \mathcal{D}_n'(C_1, \ldots, C_d)$ such that
\[ \left\langle \hat{\chi}_{[0,\cdot)}(t), \wal_{(0, \ldots, 0)} \right\rangle \neq 0. \]
Then for all $1 \leq i \leq d$ we have
\[ \left\langle \hat{\chi}_{[0,\cdot)}(t_i), \wal_0 \right\rangle \neq 0. \]
Then necessarily $t_i = \tau_i b^{\varrho(t_i)-1}$ for some digit $\tau_i$ (since $t_i' = 0$) or $t_i = 0$ for every $i = 1, \ldots, d$, which means that either $\varkappa(t_i) = 1$ or $\varkappa(t_i) = 0$. In any case we have $\varkappa^d(t) \leq d$, which contradicts $\varkappa_n(\mathcal{C}_n^{\perp}) \geq 2d + 1$ from \eqref{dualspace}. Therefore, for all $t \in \mathcal{D}_n'(C_1, \ldots, C_d)$ we have
\[ \left\langle \hat{\chi}_{[0,\cdot)}(t), \wal_{(0, \ldots, 0)} \right\rangle = 0 \]
and from Lemma \ref{theta_lem} it follows that $\left\langle \Theta_{\mathcal{C}S_n}, \wal_0 \right\rangle = 0$.
Hence we have
\[ |\mu_{jml}(D_{\mathcal{C}S_n})| \leq |\left\langle \Theta_{\mathcal{C}S_n} , \wal_0 \right\rangle| + |\left\langle R_{\mathcal{C}S_n},\wal_0 \right\rangle| \leq c \, b^{-n}. \]
\end{proof}
\begin{lem} \label{lem_scalprod_1}
Let $j \in \mathbb{N}_{-1}$, $m \in \mathbb{D}_j$, $l \in \mathbb{B}_j$ and $\alpha \in \mathbb{N}_0$. Then
\begin{enumerate}[(i)]
\item if $j \in \mathbb{N}_0$, $\varrho(\alpha) = j + 1$ and $\alpha_{j} = l$, then
\[ |\left\langle h_{jml} , \wal_{\alpha} \right\rangle| = b^{-j}, \]
\item if $j = -1$, $m = 0$, $l = 1$ and $\alpha = 0$, then
\[ |\left\langle h_{jml} , \wal_{\alpha} \right\rangle| = 1, \]
\item if $\varrho(\alpha) \neq j + 1$ or $\alpha_{j} \neq l$, then
\[ \left\langle h_{jml} , \wal_{\alpha} \right\rangle = 0. \]
\end{enumerate}
\end{lem}
\begin{proof}
The second claim and the third for $j = -1$ are trivial so let $j \geq 0$. Let $y \in [0,1)$. We expand $\alpha$ and $y$ as
\[ \alpha = \alpha_0 + \alpha_1 b + \ldots + \alpha_{\varrho(\alpha)-1} b^{\varrho(\alpha)-1} \]
and
\[ y = y_1 b^{-1} + y_2 b^{-2} + \ldots. \]
Hence
\[ \wal_{\alpha} (y) = e^{\frac{2 \pi i}{b} (\alpha_0 y_1 + \ldots + \alpha_{\varrho(\alpha)-1} y_{\varrho(\alpha)})}. \]
The function $\wal_{\alpha}$ is constant on the intervals
\[ \big[ b^{-\varrho(\alpha)} \delta , b^{-\varrho(\alpha)} (\delta + 1) \big) \]
for any integer $0 \leq \delta < b^{\varrho(\alpha)}$. The function $h_{jml}$ is constant on the intervals
\[ I_{jm}^k = \big[ b^{-j-1} (bm+k) , b^{-j-1} (bm+k+1) \big) \]
for any integer $0 \leq k < b$. Now suppose that either $j+1 > \varrho(\alpha)$ or $j+1 < \varrho(\alpha)$. This would mean that either
\[ I_{jm} = \big[ b^{-j} m , b^{-j} (m+1) \big) \subseteq \big[ b^{-\varrho(\alpha)} \delta , b^{-\varrho(\alpha)} (\delta + 1) \big) \]
in the first case or
\[ \big[ b^{-\varrho(\alpha)} \delta , b^{-\varrho(\alpha)} (\delta + 1) \big) \subset I_{jm}^k \]
for some $k$ in the second case or in both cases
\[ \big[ b^{-j} m , b^{-j} (m+1) \big) \cap \big[ b^{-\varrho(\alpha)} \delta , b^{-\varrho(\alpha)} (\delta + 1) \big) = \emptyset. \]
In any case
\[ \left\langle h_{jml} , \wal_{\alpha} \right\rangle = 0. \]
So the only relevant case is $j+1 = \varrho(\alpha)$. Then either again
\[ \big[ b^{-j} m , b^{-j} (m+1) \big) \cap \big[ b^{-\varrho(\alpha)} \delta , b^{-\varrho(\alpha)} (\delta + 1) \big) = \emptyset \]
or
\[ \big[ b^{-\varrho(\alpha)} \delta , b^{-\varrho(\alpha)} (\delta + 1) \big) = I_{jm}^k \]
for some $k$. We consider the last possibility.
The value of $h_{jml}$ on $I_{jm}^k$ is $e^{\frac{2 \pi i}{b} l k}$. To calculate the value of $\wal_{\alpha}$ we expand $m$ as
\[ m = m_1 + m_2 b + \ldots + m_j b^{j-1}. \]
Clearly, $0 \leq b m + k < b^{j+1}$. Hence,
\[ b^{-j-1} (b m + k) = m_j b^{-1} + \ldots + m_2 b^{-j+1} + m_1 b^{-j} + k b^{-j-1}. \]
So,
\[ \wal_{\alpha} (b^{-j-1} (b m + k)) = e^{\frac{2 \pi i}{b} (\alpha_0 m_j + \ldots + \alpha_{j-1} m_1 + \alpha_j k)}. \]
Now we can calculate
\begin{align*}
\overline{\left\langle h_{jml} , \wal_{\alpha} \right\rangle} & = \int_{I_{jm}} \overline{h_{jml}(y)} \wal_{\alpha}(y) \dint y\\
& = \sum_{k=0}^{b-1} \int_{I_{jm}^k} \overline{h_{jml}(y)} \wal_{\alpha}(y) \dint y\\
& = b^{-j-1} \sum_{k=0}^{b-1} e^{\frac{2 \pi i}{b} (\alpha_0 m_j + \ldots + \alpha_{j-1} m_1 + (\alpha_j-l) k)}\\
& = b^{-j-1} e^{\frac{2 \pi i}{b} (\alpha_0 m_j + \ldots + \alpha_{j-1} m_1)} \sum_{k=0}^{b-1} e^{\frac{2 \pi i}{b} (\alpha_j-l) k}\\
& = \begin{cases}
b^{-j} e^{\frac{2 \pi i}{b} (\alpha_0 m_j + \ldots + \alpha_{j-1} m_1)} & \alpha_j = l,\\
0 & \alpha_j \neq l
\end{cases}
\end{align*}
and the lemma follows.
\end{proof}
\begin{lem} \label{lem_scalprod_2}
There exists a constant $c > 0$ with the following property. Let $t, \alpha \in \mathbb{N}_0$. Then if $\alpha = t'$ or $\alpha = t + \tau \, b^{\varrho(t) + a - 1}$ for some integer $0 \leq \tau \leq b - 1$ and $a \geq 1$ then
\[ \left| \left\langle \hat{\chi}_{[0,\cdot)}(t) , \wal_{\alpha} \right\rangle \right| \leq c \, b^{-\max(\varrho(t), \varrho(\alpha))}. \]
If $\alpha \neq t'$ and there are no integers $0 \leq \tau \leq b - 1$ and $a \geq 1$ such that $\alpha = t + \tau \, b^{\varrho(t) +a - 1}$ then
\[ \left\langle \hat{\chi}_{[0,\cdot)}(t) , \wal_{\alpha} \right\rangle = 0. \]
\end{lem}
\begin{proof}
We use Lemma \ref{chi_roof_lem}. First let $t > 0$. Suppose that $\alpha = t'$, so $\varrho(\alpha) < \varrho(t)$. Then
\[ \left| \left\langle \hat{\chi}_{[0,\cdot)}(t) , \wal_{\alpha} \right\rangle \right| = \left| \frac{1}{1 - e^{\frac{-2 \pi i}{b} \tau_{\varrho(t) - 1}}} \right| \, b^{-\varrho(t)} \leq c \, b^{-\varrho(t)}. \]
If $\alpha = t$ meaning that $\varrho(\alpha) = \varrho(t)$ then
\[ \left| \left\langle \hat{\chi}_{[0,\cdot)}(t) , \wal_{\alpha} \right\rangle \right| \leq \left| \frac{1}{e^{\frac{-2 \pi i}{b} \tau_{\varrho(t) - 1}} - 1} + \frac{1}{2} \right| \, b^{-\varrho(t)} \leq c \, b^{-\varrho(t)}. \]
Now let $\alpha = t + \tau \, b^{\varrho(t) +a - 1}$ for some $1 \leq \tau \leq b - 1$ and $a \geq 1$. Hence $\varrho(\alpha) = \varrho(t) + a$. Then
\[ \left| \left\langle \hat{\chi}_{[0,\cdot)}(t) , \wal_{\alpha} \right\rangle \right| = \left| \frac{1}{e^{\frac{2 \pi i}{b} \tau} - 1} \right| \, b^{-\varrho(t)} \, b^{-a} \leq c \, b^{-\varrho(\alpha)}. \]
For any other $\alpha$ clearly,
\[ \left\langle \hat{\chi}_{[0,\cdot)}(t) , \wal_{\alpha} \right\rangle = 0. \]
Now we consider the case $t = 0$. Then for $\alpha = 0$ (meaning $\varrho(\alpha) = 0$) we have
\[ \left| \left\langle \hat{\chi}_{[0,\cdot)}(t) , \wal_{\alpha} \right\rangle \right| = \frac{1}{2} \leq c \, b^{-\varrho(\alpha)}. \]
Let $\alpha = \tau \, b^{a - 1}$ for some $1 \leq \tau \leq b - 1$ and $a \geq 1$. Then $\varrho(\alpha) = a$ and
\[ \left| \left\langle \hat{\chi}_{[0,\cdot)}(t) , \wal_{\alpha} \right\rangle \right| = \left| \frac{1}{e^{\frac{2 \pi i}{b} \tau} - 1} \right| \, b^{-a} \leq c \, b^{-\varrho(\alpha)}. \]
For any other $\alpha$ again clearly,
\[ \left\langle \hat{\chi}_{[0,\cdot)}(t) , \wal_{\alpha} \right\rangle = 0. \]
\end{proof}
We now need additional notation. For any function $f \, : \, \mathbb{F}_b^{dn} \longrightarrow \mathbb{C}$ we call $\hat{f}$ given by
\[ \hat{f}(B) = \sum_{A \in \mathbb{F}_b^{dn}} \e^{\frac{2 \pi {\rm i}}{b} A \cdot B} f(A) \]
for $B \in \mathbb{F}_b^{dn}$ the Walsh transform of $f$.
The following two facts can be found in \cite{DP10}. The first lemma is \cite[Lemma 16.9]{DP10} while the second is \cite[(16.3)]{DP10}.
\begin{lem} \label{lem_169dp10}
Let $\mathcal{C}$ and $\mathcal{C}^{\perp}$ be mutually dual $\mathbb{F}_b$-linear subspaces of $\mathbb{F}_b^{dn}$. Then for any function $f \, : \, \mathbb{F}_b^{dn} \longrightarrow \mathbb{C}$ we have
\[ \sum_{A \in \mathcal{C}} f(A) = \frac{\# \mathcal{C}}{b^{dn}} \sum_{B \in \mathcal{C}^{\perp}} \hat{f}(B). \]
\end{lem}
\begin{lem} \label{lem_walshduality}
Let $\mathcal{C}$ and $\mathcal{C}^{\perp}$ be mutually dual $\mathbb{F}_b$-linear subspaces of $\mathbb{F}_b^{dn}$. Let $B \in \mathbb{F}_b^{dn}$. Then we have
\[ \sum_{A \in \mathcal{C}} \e^{\frac{2 \pi {\rm i}}{b} A \cdot B} = \begin{cases} \# \mathcal{C}, & B \in \mathcal{C}^{\perp}, \\ 0, & B \notin \mathcal{C}^{\perp}. \end{cases} \]
\end{lem}
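Both the dimension identity $\dim(\mathcal{C}^{\perp}) = dn - \dim(\mathcal{C})$ and the character-sum dichotomy of Lemma \ref{lem_walshduality} can be checked by brute force on a tiny code; the following Python sketch (illustrative, names of our choosing) uses the one-dimensional repetition code over $\mathbb{F}_3$:

```python
import cmath
from itertools import product

def dual_code(code, b, length):
    """All B in F_b^length orthogonal to every word of `code` with respect to
    the standard inner product modulo b."""
    return [B for B in product(range(b), repeat=length)
            if all(sum(x * y for x, y in zip(A, B)) % b == 0 for A in code)]

def character_sum(code, B, b):
    """sum_{A in C} e^{2 pi i A.B / b}: equals #C when B lies in the dual
    code and 0 otherwise."""
    return sum(cmath.exp(2j * cmath.pi * sum(x * y for x, y in zip(A, B)) / b)
               for A in code)
```

For $\mathcal{C} = \{(0,0,0), (1,1,1), (2,2,2)\} \subset \mathbb{F}_3^3$ the dual has $3^{3-1} = 9$ elements, the character sum is $3$ on dual words and $0$ elsewhere.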
We will introduce some notation now, slightly changed from what can be found in \cite[16.2]{DP10}. Let $0 \leq \gamma_1, \ldots, \gamma_d \leq n$ be integers. We put $\gamma = (\gamma_1, \ldots, \gamma_d)$. Then we write
\[ \mathcal{V}_{\gamma} = \left\{ A \in \mathbb{F}_b^{dn} \, : \, \Phi_n^d(A) \in \prod_{i = 1}^d \big[ 0, b^{-\gamma_i} \big) \right\}. \]
Hence, $\mathcal{V}_{\gamma}$ consists of all $A \in \mathbb{F}_b^{dn}$ such that $a_i = (0, \ldots, 0, a_{i,\gamma_i + 1}, \ldots, a_{i n})$ for all $1 \leq i \leq d$. For all $1 \leq i \leq d$ let $0 \leq \lambda_i \leq \gamma_i$ be integers and let $\lambda = (\lambda_1, \ldots, \lambda_d)$. Then we write $\mathcal{V}_{\gamma, \lambda}$ for the set of all $A \in \mathbb{F}_b^{dn}$ such that $a_i = (0, \ldots, 0, a_{i, \lambda_i + 1}, \ldots, a_{i, \gamma_i - 1}, 0, a_{i, \gamma_i + 1}, \ldots, a_{i n})$. The case $\lambda_i = \gamma_i$ is to be understood in the obvious way as $a_i = (0, \ldots, 0, a_{i,\gamma_i + 1}, \ldots, a_{i n})$. Therefore, $\mathcal{V}_{\gamma}^{\perp}$ consists of all $A \in \mathbb{F}_b^{dn}$ such that $a_i = (a_{i 1}, \ldots, a_{i, \gamma_i}, 0, \ldots, 0)$, and $\mathcal{V}_{\gamma, \lambda}^{\perp}$ consists of all $A \in \mathbb{F}_b^{dn}$ such that $a_i = (a_{i 1}, \ldots, a_{i, \lambda_i}, 0, \ldots, 0, a_{i, \gamma_i}, 0, \ldots, 0)$.
For a subset $V$ of $\mathbb{F}_b^{dn}$ we denote the characteristic function of $V$ by $\chi_V$. The next result is a slight generalization of the corresponding assertion in \cite[Lemma 16.11]{DP10}.
\begin{lem} \label{lem_1611dp10}
Let $\gamma_1, \ldots, \gamma_d, \lambda_1, \ldots, \lambda_d$ be as above and let $|\lambda| = \lambda_1 + \ldots + \lambda_d$. Let $\sigma$ be the number of indices $i$ with $\lambda_i < \gamma_i$. For all $B \in \mathbb{F}_b^{dn}$ we have
\[ \hat{\chi}_{\mathcal{V}_{\gamma, \lambda}}(B) = b^{dn - |\lambda| - \sigma} \chi_{\mathcal{V}_{\gamma, \lambda}^{\perp}}(B). \]
\end{lem}
The following fact is a generalization of \cite[Lemma 16.13]{DP10}.
\begin{lem} \label{1613dp10}
Let $\mathcal{C}$ and $\mathcal{C}^{\perp}$ be mutually dual $\mathbb{F}_b$-linear subspaces of $\mathbb{F}_b^{dn}$. Let $\gamma_1, \ldots, \gamma_d, \lambda_1, \ldots, \lambda_d, \sigma$ be as above. Then we have
\[ \# \left( \mathcal{C} \cap \mathcal{V}_{\gamma, \lambda} \right) = \frac{\# \mathcal{C}}{b^{|\lambda| + \sigma}} \, \# \left( \mathcal{C}^{\perp} \cap \mathcal{V}_{\gamma, \lambda}^{\perp} \right). \]
\end{lem}
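For the convenience of the reader we sketch the argument behind this identity (with the Walsh transform normalized as in Lemma \ref{lem_1611dp10}): by the Poisson summation formula for $\mathbb{F}_b$-linear subspaces, $\sum_{A \in \mathcal{C}} \chi_V(A) = \frac{\# \mathcal{C}}{b^{dn}} \sum_{B \in \mathcal{C}^{\perp}} \hat{\chi}_V(B)$, so that Lemma \ref{lem_1611dp10} yields
\begin{align*}
\# \left( \mathcal{C} \cap \mathcal{V}_{\gamma, \lambda} \right) & = \frac{\# \mathcal{C}}{b^{dn}} \sum_{B \in \mathcal{C}^{\perp}} \hat{\chi}_{\mathcal{V}_{\gamma, \lambda}}(B) = \frac{\# \mathcal{C}}{b^{dn}} \, b^{dn - |\lambda| - \sigma} \sum_{B \in \mathcal{C}^{\perp}} \chi_{\mathcal{V}_{\gamma, \lambda}^{\perp}}(B) \\
& = \frac{\# \mathcal{C}}{b^{|\lambda| + \sigma}} \, \# \left( \mathcal{C}^{\perp} \cap \mathcal{V}_{\gamma, \lambda}^{\perp} \right).
\end{align*}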
\begin{prp} \label{main_est_prp}
Let $\mathcal{C}$ be an $\mathbb{F}_b$-linear subspace of $\mathbb{F}_b^{dn}$ of dimension $n$ with dual space of dimension $dn - n$ satisfying $\delta_n(\mathcal{C}^{\perp}) \geq n + 1$. Let $0 \leq \lambda_i \leq \gamma_i \leq n$ be integers for all $1 \leq i \leq d$ with $|\gamma| \geq n + 1$ and $|\lambda| + d \leq n$. Then we have
\[ \# \left\{ A = (a_1, \ldots, a_d) \in \mathcal{C}^{\perp}: \, v_n(a_i) \leq \gamma_i; \, a_{ik} = 0 \; \forall \, \lambda_i < k < \gamma_i; \, 1 \leq i \leq d \right\} \leq b^d. \]
\end{prp}
\begin{proof}
Let $A \in \mathcal{C}^{\perp}$ with $v_n(a_i) \leq \gamma_i$ and $a_{ik} = 0$ for all $\lambda_i < k < \gamma_i$ and all $1 \leq i \leq d$. Let $\gamma = (\gamma_1, \ldots, \gamma_d)$ and $\lambda = (\lambda_1, \ldots, \lambda_d)$. Then we have $A \in \mathcal{V}_{\gamma, \lambda}^{\perp}$. Let $\sigma$ be the number of indices $i$ with $\lambda_i < \gamma_i$. Analogously to the proof of \cite[Lemma 16.26]{DP10}, using Lemma \ref{1613dp10} we get
\begin{align}
\# & \left\{ A = (a_1, \ldots, a_d) \in \mathcal{C}^{\perp}: \, v_n(a_i) \leq \gamma_i; \, a_{ik} = 0 \; \forall \, \lambda_i < k < \gamma_i; \, 1 \leq i \leq d \right\} \notag \\
& \qquad \leq \# \left( \mathcal{C}^{\perp} \cap \mathcal{V}_{\gamma, \lambda}^{\perp} \right) \notag \\
& \qquad = b^{|\lambda| + \sigma - n} \, \# \left( \mathcal{C} \cap \mathcal{V}_{\gamma, \lambda} \right). \label{multline_cardinality}
\end{align}
Now suppose $A \in \mathcal{V}_{\gamma, \lambda}$. Then for all $1 \leq i \leq d$ we have
\[ \Phi_n(a_i) = \frac{a_{i, \lambda_i + 1}}{b^{\lambda_i + 1}} + \ldots + \frac{a_{i, \gamma_i - 1}}{b^{\gamma_i - 1}} + \frac{a_{i, \gamma_i + 1}}{b^{\gamma_i + 1}} + \ldots + \frac{a_{i n}}{b^n} < \frac{1}{b^{\lambda_i}} \]
in the case where $\lambda_i < \gamma_i$ and
\[ \Phi_n(a_i) = \frac{a_{i, \lambda_i + 1}}{b^{\lambda_i + 1}} + \ldots + \frac{a_{i n}}{b^n} < \frac{1}{b^{\lambda_i}} \]
otherwise. Hence, $\Phi_n^d(A)$ is contained in the $b$-adic interval
\[ \prod_{i = 1}^d \left[ \left. 0, b^{-\lambda_i} \right) \right. \]
of volume $b^{-|\lambda|}$. By \cite[Theorem 7.14]{DP10}, $\Phi_n^d(\mathcal{C})$ is a digital net in base $b$ and therefore contains exactly $b^{n - |\lambda|}$ points in any $b$-adic interval of volume $b^{-|\lambda|}$. Therefore, we have
\[ \# \left( \mathcal{C} \cap \mathcal{V}_{\gamma, \lambda} \right) \leq b^{n - |\lambda|} \]
and the result follows from \eqref{multline_cardinality} since $b^{|\lambda| + \sigma - n} \, b^{n - |\lambda|} = b^{\sigma} \leq b^{d}$.
\end{proof}
\begin{prp} \label{prp_haar_coeff_cs}
There exists a constant $c > 0$ with the following property. Let $\mathcal{C}S_n$ be a Chen-Skriganov type point set with $N = b^n$ points and let $\mu_{jml}$ be the $b$-adic Haar coefficient of the discrepancy function of $\mathcal{C}S_n$ for $j \in \mathbb{N}_{-1}^d, \, m \in \mathbb{D}_j^d$ and $l \in \mathbb{B}_j$. Then
\begin{enumerate}[(i)]
\item if $j = (-1, \ldots, -1)$ then
\[ \left| \mu_{jml} \right| \leq c \, b^{-n}, \] \label{prp_1_cs}
\item if $j \neq (-1, \ldots, -1)$ and $|j| \leq n$ then
\[ \left| \mu_{jml} \right| \leq c \, b^{-|j| - n}, \] \label{prp_2_cs}
\item if $j \neq (-1, \ldots, -1)$ and $|j| > n$ and $j_{\eta_1}, \ldots, j_{\eta_s} < n$ then
\[ \left| \mu_{jml} \right| \leq c \, b^{-|j| - n} \] \label{prp_3_cs}
and
\[ \left| \mu_{jml} \right| \leq c \, b^{-2|j|} \]
for all but $b^n$ coefficients $\mu_{jml}$,
\item if $j \neq (-1, \ldots, -1)$ and $j_{\eta_1} \geq n$ or $\ldots$ or $j_{\eta_s} \geq n$ then
\[ \left| \mu_{jml} \right| \leq c \, b^{-2|j|}. \] \label{prp_4_cs}
\end{enumerate}
\end{prp}
\begin{proof}
Part \eqref{prp_1_cs} is actually Proposition \ref{prp_min1}.
To prove part \eqref{prp_2_cs} we again use the decomposition of $D_{\mathcal{C}S_n}$,
\[ D_{\mathcal{C}S_n} = \Theta_{\mathcal{C}S_n} + R_{\mathcal{C}S_n}. \]
Let $j \in \mathbb{N}_{-1}^d, \, j \neq (-1, \ldots, -1), \, |j| \leq n, \, m \in \mathbb{D}_j^d, \, l \in \mathbb{B}_j$. The Walsh function series of $h_{jml}$ can be given as
\begin{align} \label{h_j_m_l_given_as}
h_{jml} = \sum_{\alpha \in \mathbb{N}_0^d} \left\langle h_{jml} , \wal_{\alpha} \right\rangle \wal_{\alpha}.
\end{align}
By Lemma \ref{rest_lem} we have
\[ \left| \left\langle R_{\mathcal{C}S_n} , h_{jml} \right\rangle \right| \leq c \, b^{-n} |I_{jm}| = c \, b^{-|j| - n}. \]
We recall that
\[ \left\langle h_{jml}, \wal_{\alpha} \right\rangle = \left\langle h_{j_1 m_1 l_1}, \wal_{\alpha_1} \right\rangle \cdot \ldots \cdot \left\langle h_{j_d m_d l_d}, \wal_{\alpha_d} \right\rangle \]
and
\[ \left\langle \hat{\chi}_{[0,y)}(t) , \wal_{\alpha} \right\rangle = \left\langle \hat{\chi}_{[0,y_1)}(t_1) , \wal_{\alpha_1} \right\rangle \cdot \ldots \cdot \left\langle \hat{\chi}_{[0,y_d)}(t_d) , \wal_{\alpha_d} \right\rangle. \]
We will use Lemmas \ref{lem_scalprod_1} and \ref{lem_scalprod_2} on each of the factors. Lemma \ref{lem_scalprod_1} gives us $|\left\langle h_{j_i m_i l_i}, \wal_{\alpha_i} \right\rangle| \leq b^{-j_i}$ if $j_i \neq -1$ for all $i$. For all $\alpha$ with $\varrho(\alpha_i) \neq j_i + 1$ for some $i$ we have $\left\langle h_{j m l}, \wal_{\alpha} \right\rangle = 0$. We also always get $0$ if the leading digit in the $b$-adic expansion of $\alpha_i$ is not $l_i$ for some $i$. In the case $j_i = -1$ we obtain the bound $b^{-j_i}$ after increasing the constant. From Lemma \ref{lem_scalprod_2} we have $|\left\langle \hat{\chi}_{[0,y_i)}(t_i) , \wal_{\alpha_i} \right\rangle| \leq c \, b^{-\max(\varrho(\alpha_i), \varrho(t_i))}$. Inserting Lemma \ref{theta_lem} and \eqref{h_j_m_l_given_as} we get
\begin{align*}
\left| \mu_{jml} (\Theta_{\mathcal{C}S_n}) \right| & = \left| \left\langle \Theta_{\mathcal{C}S_n} , h_{jml} \right\rangle \right| \\
& = \left| \left\langle \sum_{t \in \mathbb{D}_n'(C_1, \ldots, C_d)} \hat{\chi}_{[0,\cdot)}(t) , \sum_{\alpha \in \mathbb{N}_0^d} \left\langle h_{jml} , \wal_{\alpha} \right\rangle \wal_{\alpha} \right\rangle \right| \\
& \leq \sum_{t \in \mathbb{D}_n'(C_1, \ldots, C_d)} \sum_{\alpha \in \mathbb{N}_0^d} \left| \left\langle \hat{\chi}_{[0,\cdot)}(t) , \wal_{\alpha} \right\rangle \right| \left| \left\langle h_{jml}, \wal_{\alpha} \right\rangle \right| \\
& \leq c_1 \, b^{-j_1 - \ldots - j_d} \sum_{t \in \mathbb{D}_n'(C_1, \ldots, C_d)} b^{-\max(j_1, \varrho(t_1)) - \ldots - \max(j_d, \varrho(t_d))}.
\end{align*}
The summation in $\alpha$ disappears due to the following facts. The application of Lemma \ref{lem_scalprod_1} leaves only all such $\alpha$ with $\varrho(\alpha_i) = j_i + 1$ and with $l_i$ as leading digit in the $b$-adic expansion of $\alpha_i$ for all $i$. The application of Lemma \ref{lem_scalprod_2} leaves then at most one $\alpha$ per $t$, namely the one with either $\alpha_i = t_i'$ (if $\varrho(t_i) > j_i + 1$) or $\alpha_i = t_i + l_i \, b^{j_i}$ (if $\varrho(t_i) \leq j_i + 1$) for all $i$. In the cases where there is an $i$ with $\varrho(t_i) > j_i + 1$, it is possible that no $\alpha$ is left in the summation, since we still have the condition on $\alpha_i$ that the leading digit in the $b$-adic expansion is $l_i$, which cannot be guaranteed for $t_i'$.
Our next step is to break the sum above into sums where, for every $t$, every coordinate has either a bigger or a smaller NRT weight than the corresponding coordinate of $j$. Let $0 \leq r \leq d$ be the number of indices $1 \leq i \leq d$ for which the NRT weight is smaller. Without loss of generality we consider for every $r$ only the case where for $1 \leq i \leq r$ we have $\varrho(t_i) \leq j_i$ while for $r + 1 \leq i \leq d$ we have $\varrho(t_i) > j_i$. All the other cases follow from renaming the indices and we will just increase the constant. In this notation we split the sum
\[ \sum_{t \in \mathbb{D}_n'(C_1, \ldots, C_d)} \leq c_2 \, \sum_{r = 0}^d \, \sum_{t \in \mathbb{D}_{n,r}'(C_1, \ldots, C_d)} \]
where by $\mathbb{D}_{n,r}'(C_1, \ldots, C_d)$ we mean the subset of $\mathbb{D}_n'(C_1, \ldots, C_d)$ according to what we explained above (with ordered indices and other cases incorporated into the constant, $r$ coordinates have smaller NRT weight). So we have
\begin{align*}
\left| \mu_{jml} (\Theta_{\mathcal{C}S_n}) \right| & \leq c_3 \, b^{-j_1 - \ldots - j_d} \sum_{r = 0}^d \, \sum_{t \in \mathbb{D}_{n,r}'(C_1, \ldots, C_d)} b^{-j_1 - \ldots - j_r - \varrho(t_{r + 1}) - \ldots - \varrho(t_d)} \\
& = c_3 \, \sum_{r = 0}^d \, b^{-2j_1 - \ldots - 2j_r - j_{r + 1} - \ldots - j_d} \sum_{t \in \mathbb{D}_{n,r}'(C_1, \ldots, C_d)} b^{-\varrho(t_{r + 1}) - \ldots - \varrho(t_d)}.
\end{align*}
Instead of summing over $t$, we can sum over the values of $\varrho(t)$, counting the number of $t$ with $\varrho(t_i) = \gamma_i$ for $1 \leq i \leq d$.
We recall that $\mathcal{C}S_n = \Phi_n^d(\mathcal{C}_n)$. Then we denote
\[ \omega_{\gamma} = \# \left\{ A \in \mathcal{C}_n^{\perp}: \, v_n(a_i) = \gamma_i \, \forall \, i \, \wedge \, a_{ik} = 0 \; \forall \, j_i < k < \gamma_i; \, r + 1 \leq i \leq d \right\} \]
and
\[ \tilde{\omega}_{\gamma} = \# \left\{ A \in \mathcal{C}_n^{\perp}: \, v_n(a_i) \leq \gamma_i \, \forall \, i \, \wedge \, a_{ik} = 0 \; \forall \, j_i < k < \gamma_i; \, r + 1 \leq i \leq d \right\}. \]
Let $\Gamma$ consist of all $\gamma = (\gamma_1, \ldots, \gamma_d)$ such that $0 \leq \gamma_i \leq j_i$ for $1 \leq i \leq r$, $j_i < \gamma_i \leq n$ for $r + 1 \leq i \leq d$ and $|\gamma| \geq n + 1$. Then we have
\[ \left| \mu_{jml} (\Theta_{\mathcal{C}S_n}) \right| \leq c_3 \, \sum_{r = 0}^d \, b^{-2j_1 - \ldots - 2j_r - j_{r+1} - \ldots - j_d} \sum_{\gamma \in \Gamma} b^{-\gamma_{r + 1} - \ldots - \gamma_d} \, \omega_{\gamma}. \]
We can apply Proposition \ref{main_est_prp} with $\lambda_i = \gamma_i, \, 1 \leq i \leq r$ and $\lambda_i = j_i, \, r + 1 \leq i \leq d$. Thereby, we get $\tilde{\omega}_{\gamma} \leq b^d$. An obvious observation is that
\[ \sum_{0 \leq \kappa_i \leq \gamma_i, \, 1 \leq i \leq d} \omega_{\kappa} \leq \tilde{\omega}_{\gamma} \]
with $\kappa = (\kappa_1, \ldots, \kappa_d)$. Recall the notation $\bar{n} = (n, \ldots, n)$. For all $\gamma \in \Gamma$ it holds that $-\gamma_{r + 1} - \ldots - \gamma_d \leq \gamma_1 + \ldots + \gamma_r - n - 1$ so,
\begin{align*}
& \left| \mu_{jml} (\Theta_{\mathcal{C}S_n}) \right| \leq c_3 \, \sum_{r = 0}^d \, b^{-2j_1 - \ldots - 2j_r - j_{r+1} - \ldots - j_d} \sum_{\gamma \in \Gamma} b^{-n - 1 + \gamma_1 + \ldots + \gamma_r} \, \omega_{\gamma} \\
& \leq c_4 \, \sum_{r = 0}^d \, b^{-2j_1 - \ldots - 2j_r - j_{r+1} - \ldots - j_d - n} \sum_{0 \leq \gamma_i \leq j_i, \, 1 \leq i \leq r} b^{\gamma_1 + \ldots + \gamma_r} \sum_{j_i < \gamma_i \leq n, \, r + 1 \leq i \leq d} \omega_{\gamma} \\
& \leq c_4 \, \sum_{r = 0}^d \, b^{-2j_1 - \ldots - 2j_r - j_{r+1} - \ldots - j_d - n} \, \prod_{i = 1}^r \sum_{\kappa_i = 0}^{j_i} b^{\kappa_i} \sum_{j_i < \gamma_i \leq n, \, r + 1 \leq i \leq d} \, \max_{0 \leq \gamma_i \leq j_i, \, 1 \leq i \leq r} \omega_{\gamma} \\
& \leq c_5 \, \sum_{r = 0}^d \, b^{-j_1 - \ldots - j_d - n} \sum_{0 \leq \gamma_i \leq n, \, 1 \leq i \leq d} \omega_{\gamma} \\
& \leq c_6 \, b^{-j_1 - \ldots - j_d - n} \, \tilde{\omega}_{\bar{n}} \\
& \leq c_6 \, b^{-j_1 - \ldots - j_d - n} \, b^d \\
& \leq c_7 \, b^{-|j| - n}.
\end{align*}
For part \eqref{prp_3_cs} let $|j| > n$ and $j_{\eta_1}, \ldots, j_{\eta_s} < n$. We recall that $\mathcal{C}S_n$ contains exactly $N = b^n$ points and that, for fixed $j \in \mathbb{N}_{-1}^d$, the interiors of the $b$-adic intervals $I_{jm}$ are mutually disjoint. There are no more than $b^n$ such $b$-adic intervals which contain a point of $\mathcal{C}S_n$, meaning that all but $b^n$ intervals contain no points at all. This fact combined with Lemma \ref{lem_haar_coeff_besov_x} gives us the second statement of this part. The remaining boxes contain exactly one point of $\mathcal{C}S_n$. So from Lemmas \ref{lem_haar_coeff_besov_x} and \ref{lem_haar_coeff_besov_indicator} we get the first statement of this part.
Finally, let $j_{\eta_1} \geq n$ or $\ldots$ or $j_{\eta_s} \geq n$. Then there is no point of $\mathcal{C}S_n$ which is contained in the interior of the $b$-adic interval $I_{jm}$. Thereby part \eqref{prp_4_cs} follows from Lemma \ref{lem_haar_coeff_besov_x}.
\end{proof}
\section{The proof of the main result}
\begin{proof}[Proof of Theorem \ref{thm_main}]
The point set satisfying the assertion is the Chen-Skriganov type point set $\mathcal{C}S_n$. Let $\mu_{jml}$ be the $b$-adic Haar coefficients of the discrepancy function of $\mathcal{C}S_n$. We write $|j| = j_{\eta_1} + \ldots + j_{\eta_s}$. We have an equivalent quasi-norm on $S_{pq}^r B([0,1)^d)$ in \eqref{eq_quasinorm}, so that the proof of the inequality
\[ \left( \sum_{j \in \mathbb{N}_{-1}^d} b^{|j|(r - \frac{1}{p} + 1) q} \left( \sum_{m \in \mathbb{D}d_j, \, l \in \mathbb{B}_j} |\mu_{jml}|^p \right)^{\frac{q}{p}} \right)^{\frac{1}{q}} \leq C \, b^{n(r-1)} n^{\frac{d-1}{q}} \]
for some constant $C > 0$ establishes the theorem in this case.
To estimate the expression on the left-hand side, we use Minkowski's inequality to split the sum into summands according to the cases of Proposition \ref{prp_haar_coeff_cs}. We denote
\[ \Xi_j = b^{|j|(r - \frac{1}{p} + 1)} \left( \sum_{m \in \mathbb{D}_j^d, \, l \in \mathbb{B}_j} |\mu_{jml}|^p \right)^{\frac{1}{p}} \]
and get
\[ \left( \sum_{j \in \mathbb{N}_{-1}^d} \Xi_j^q \right)^{\frac{1}{q}} \leq \Xi_{(-1, \ldots, -1)} + \sum_{s = 1}^d \left[ \left( \sum_{j \in J_s^1} \Xi_j^q \right)^{\frac{1}{q}} + \left( \sum_{j \in J_s^2} \Xi_j^q \right)^{\frac{1}{q}} + \sum_{i = 1}^s \left( \sum_{j \in J_{s i}^3} \Xi_j^q \right)^{\frac{1}{q}} \right] \]
where $J_s^1$ is the set of all $j \neq (-1, \ldots, -1)$ for which $|j| \leq n$, $J_s^2$ is the set of all $j \neq (-1, \ldots, -1)$ for which $0 \leq j_{\eta_1}, \ldots, j_{\eta_s} \leq n-1$ and $|j| > n$, and $J_{s i}^3$ is the set of all $j$ for which $j_{\eta_i} \geq n$.
We will show that each of the summands above can be bounded by $C \, b^{n(r-1)} n^{\frac{d-1}{q}}$ which finishes the proof.
Part \eqref{prp_1_cs} of Proposition \ref{prp_haar_coeff_cs} gives us for $j = (-1, \ldots, -1), \, m = (0,\ldots,0), \, l = (0,\ldots,0)$
\[ \Xi_j = |\mu_{jml}| \leq c_1 b^{-n} \leq c_2 b^{n(r-1)} n^{\frac{d-1}{q}}. \]
Let now $1 \leq s \leq d$. We will use \eqref{prp_2_cs} in Proposition \ref{prp_haar_coeff_cs} and Lemma \ref{lem_lambda_s_minus_1}. The summation over $l \in \mathbb{B}_j$ can be incorporated into the constant and we recall that $\# \mathbb{D}_j^d = b^{|j|}$. Hence (using the fact that $r < 0$) we have
\begin{align*}
\left( \sum_{j \in J_s^1} \Xi_j^q \right)^{\frac{1}{q}} & \leq c_3 \left( \sum_{j \in J_s^1} b^{|j|(r - \frac{1}{p} + 1) q} \left( \sum_{m \in \mathbb{D}_j^d} b^{(-|j| - n) p} \right)^{\frac{q}{p}} \right)^{\frac{1}{q}} \\
& = c_3 \left( \sum_{j \in J_s^1} b^{(|j| r - n ) q} \right)^{\frac{1}{q}} \\
& \leq c_4 \left( \sum_{\lambda = 0}^n b^{(\lambda r - n) q} (\lambda + 1)^{s - 1} \right)^{\frac{1}{q}} \\
& \leq c_5 \, n^{\frac{s - 1}{q}} \, b^{-n} \left( \sum_{\lambda = 0}^n b^{\lambda r q} \right)^{\frac{1}{q}} \\
& \leq c_6 \, n^{\frac{d - 1}{q}} \, b^{n (r-1)}.
\end{align*}
From \eqref{prp_3_cs} in the same proposition (using the facts that $r - \frac{1}{p} < 0$ and $r - 1 \leq 0$) we have
\begin{align*}
\left( \sum_{j \in J_s^2} \Xi_j^q \right)^{\frac{1}{q}} & \leq c_7 \left( \sum_{j \in J_s^2} b^{|j|(r - \frac{1}{p} + 1) q} \, b^{n \frac{q}{p}} \, b^{(-|j| - n) q} \right)^{\frac{1}{q}} \\
& \quad + c_8 \left( \sum_{j \in J_s^2} b^{|j|(r - \frac{1}{p} + 1)q} \, b^{|j| \frac{q}{p}} \, b^{-2|j| q} \right)^{\frac{1}{q}} \\
& = c_7 \left( \sum_{j \in J_s^2} b^{\left[ |j| (r - \frac{1}{p}) + \frac{n}{p} - n \right] q} \right)^{\frac{1}{q}} \\
& \quad + c_8 \left( \sum_{j \in J_s^2} b^{|j|(r - 1)q} \right)^{\frac{1}{q}}\\
& \leq c_7 \left( \sum_{\lambda = n+1}^{s(n - 1)} (\lambda + 1)^{s-1} b^{\left[ \lambda(r - \frac{1}{p}) + \frac{n}{p} - n \right] q} \right)^{\frac{1}{q}} \\
& \quad + c_8 \left( \sum_{\lambda = n+1}^{s(n - 1)} (\lambda + 1)^{s-1} b^{\lambda (r-1) q} \right)^{\frac{1}{q}} \\
& \leq c_9 \, n^{\frac{s - 1}{q}} b^{\frac{n}{p} - n} \left( \sum_{\lambda = n + 1}^{s(n - 1)} b^{\lambda (r - \frac{1}{p}) q} \right)^{\frac{1}{q}} + c_{10} \, n^{\frac{s - 1}{q}} \left( \sum_{\lambda = n + 1}^{s(n - 1)} b^{\lambda (r - 1) q} \right)^{\frac{1}{q}} \\
& \leq c_{11} \, n^{\frac{s - 1}{q}} b^{\frac{n}{p} - n} \, b^{n (r - \frac{1}{p})} + c_{12} \, n^{\frac{s - 1}{q}} b^{n (r - 1)} \\
& \leq c_{13} \, n^{\frac{d - 1}{q}} \, b^{n(r-1)}.
\end{align*}
Part \eqref{prp_4_cs} in Proposition \ref{prp_haar_coeff_cs} gives us for any $1 \leq i \leq s$
\begin{align*}
\left( \sum_{j \in J_{s i}^3} \Xi_j^q \right)^{\frac{1}{q}} & \leq c_{14} \left( \sum_{j \in J_{s i}^3} b^{|j|(r - \frac{1}{p} + 1) q} \, b^{|j| \frac{q}{p}} \, b^{-2|j| q} \right)^{\frac{1}{q}}\\
& \leq c_{15} \left( \sum_{\lambda = n}^\infty (\lambda + 1)^{s-1} b^{\lambda(r - 1) q} \right)^{\frac{1}{q}}\\
& \leq c_{16} n^{\frac{d-1}{q}} b^{n(r-1)}.
\end{align*}
The cases $p = \infty$ and $q = \infty$ have to be modified in the usual way.
\end{proof}
\addcontentsline{toc}{chapter}{References}
\end{document}
\begin{document}
\title[Rectifiable steady and expanding gradient Ricci solitons]
{Classification of 3-dimensional complete rectifiable steady and expanding gradient Ricci solitons}
\author{Shun Maeta}
\address{Department of Mathematics,
Shimane University, Nishikawatsu 1060 Matsue, 690-8504, Japan.}
\curraddr{}
\email{[email protected]~{\em or}[email protected]}
\thanks{The author is partially supported by the Grant-in-Aid for Young Scientists, No.19K14534, Japan Society for the Promotion of Science.}
\subjclass[2010]{53C21, 53C25, 53C20}
\date{}
\dedicatory{}
\keywords{Gradient Ricci solitons; Rectifiable gradient Ricci solitons; Bryant solitons; Cao-Chen tensor; Perelman's conjecture.}
\commby{}
\begin{abstract}
Let $(M,g,f)$ be a 3-dimensional complete steady gradient Ricci soliton. Assume that $M$ is rectifiable, that is, the potential function can be written as $f=f(r)$, where $r$ is a distance function. Then, we prove that $M$ is isometric to (1) a quotient of $\mathbb{R}^3$, or (2) the Bryant soliton. In particular, we show that any 3-dimensional complete rectifiable steady gradient Ricci soliton with positive Ricci curvature is isometric to the Bryant soliton. Furthermore, we show that any $3$-dimensional complete rectifiable expanding gradient Ricci soliton with positive Ricci curvature is rotationally symmetric.
\end{abstract}
\maketitle
\section{Introduction}\label{intro}
A Riemannian manifold $(M^n,g,f)$ is called a gradient Ricci soliton if there exist a smooth function $f$ on $M$ and a constant $\lambda\in \mathbb{R}$ such that
\begin{equation}\label{RS}
{\rm Ric}+\nabla\nabla f=\lambda g,
\end{equation}
where ${\rm Ric}$ is the Ricci tensor on $M$, and $\nabla\nabla f$ is the Hessian of $f$.
If $\lambda>0$, $\lambda=0$ or $\lambda<0$, then the Ricci soliton is called shrinking, steady or expanding, respectively.
By scaling the metric, we can assume $\lambda=1,0,-1$, respectively.
If the potential function $f$ is constant, then it is called trivial. It is known that any compact gradient steady Ricci soliton is trivial \cite{Hamilton95}.
In this paper, we study complete steady gradient Ricci solitons.
In dimension $2$, they are well understood. In fact, complete steady gradient Ricci solitons have been completely classified (cf. \cite{BM13}). In particular, it has been shown that the only complete steady gradient Ricci soliton with positive curvature is Hamilton's cigar soliton $(\Sigma^2,g=\frac{dx^2+dy^2}{1+x^2+y^2})$ (cf. \cite{Hamilton88}).
However, in dimension $3$, the classification of complete steady gradient Ricci solitons is still open.
The known examples are $(1)$ a quotient of $\mathbb{R}^3$, $(2)$ a quotient of $\Sigma^2\times\mathbb{R}$, where $\Sigma^2$ is Hamilton's cigar soliton, and $(3)$ the Bryant soliton \cite{Bryant}.
In this paper, we consider the following problem.
\begin{problem}\label{prob1}
Classify $3$-dimensional complete steady gradient Ricci solitons.
\end{problem}
There are many results on Problem \ref{prob1}. S. Brendle \cite{Brendle2011} showed that any steady gradient Ricci soliton $M$ such that (a) the scalar curvature is positive and approaches zero at infinity, and (b) there exists an exhaustion of $M$ by bounded domains $\Omega_l$ with
$$\lim_{l\rightarrow +\infty}\int_{\partial\Omega_l}e^{u(R)}\langle \nabla R+\psi(R)\nabla f,\nu\rangle=0,$$
is rotationally symmetric, where
$R$ is the scalar curvature on $M$,
$\psi:(0,1)\rightarrow\mathbb{R}$ is a smooth function such that $\nabla R+\psi(R)\nabla f=0$ on the Bryant soliton,
and
$$u(s)=\log \psi(s)+\int_{\frac{1}{2}}^{s}\left(\frac{3}{2(1-t)}-\frac{1}{(1-t)\psi(t)}\right)dt.$$
Cao and Chen \cite{CC12} showed that a complete locally conformally flat steady gradient Ricci soliton is either flat or isometric to the Bryant soliton. This was improved by Cao, Catino, Chen, Mantegazza and Mazzieri \cite{CCCMM14}, who obtained the same conclusion under the assumption of a divergence-free Bach tensor. Here we remark that the Bach tensor $B$ for 3-dimensional manifolds \cite{CCCMM14} was defined by $B_{ij}=\nabla_kC_{ijk}$, where $C$ is the Cotton tensor $C_{ijk}=\nabla_k A_{ij}-\nabla_j A_{ik},$ and $A$ is the Schouten tensor $A={\rm Ric}-\frac{R}{4}g$.
Finally, G. Catino, P. Mastrolia and D. D. Monticelli relaxed the assumption, that is, they showed that
``any complete steady gradient Ricci soliton with 3-divergence free Cotton tensor ${\rm div}^3(C)=\nabla_i\nabla_j\nabla_kC_{ijk}=0$ is isometric to either a quotient of $\mathbb{R}^3$, or the Bryant soliton" (cf. \cite{CMM17}).
They also showed that ``any complete steady gradient Ricci soliton with
$$\liminf_{s\rightarrow +\infty}\,\frac{1}{s}\int_{B_{s}(O)}R=0,$$
is isometric to either a quotient of $\mathbb{R}^3$, or a quotient of $\Sigma^2\times \mathbb{R}$" (cf. \cite{CMM16}).
One of the greatest works might be Brendle's \cite{Brendle}. He proved Perelman's conjecture \cite{Perelman1}, namely that ``any 3-dimensional complete noncompact $\kappa$-noncollapsed steady gradient Ricci soliton with positive curvature is rotationally symmetric, namely the Bryant soliton".
Estimates of the potential function played an important role in proving all of the results mentioned above (cf. \cite{CC12},~\cite{CCCMM14},~\cite{Brendle},~\cite{CMM16},~\cite{CMM17}). Many studies indicate that the potential function is bounded by functions which depend only on a distance function $r$ on $M$ (cf. \cite{CC12}, \cite{WW13}). In particular, Cao and Chen showed the following result:
\begin{theorem}[\cite{CC12}]
Let $(M,g,f)$ be a complete steady gradient Ricci soliton with positive Ricci curvature $($and Hamilton's identity $R+|\nabla f|^2=c)$.
Assume that the scalar curvature attains its maximum at some origin $O$. Then, there exist some constants $0<c_1\leq \sqrt{c}$ and $c_2>0$ such that the potential function $f$ satisfies the estimates
$$-\sqrt{c}\,r(x)-|f(O)|\leq f(x) \leq -c_1r(x)+c_2.$$
\end{theorem}
Therefore, the potential function of a steady gradient Ricci soliton with positive Ricci curvature might be written as $f=f(r)$, where $r$ is a distance function on $M$. In fact, the potential functions of Hamilton's cigar soliton and the Bryant soliton can be written as $f=f(r)$ (see for example \cite{CM16}).
Therefore, to classify steady gradient Ricci solitons, it is interesting to consider the case that the potential function can be written as $f=f(r)$. The notion was introduced by P. Petersen and W. Wylie \cite{PW09}:
\begin{definition}[\cite{PW09}]
If the potential function of a gradient Ricci soliton $(M,g,f)$ can be written as $f=f(r)$, where $r$ is a distance function on $M$, then $M$ is called a rectifiable gradient Ricci soliton.
\end{definition}
Petersen and Wylie introduced the notion of rectifiable gradient Ricci solitons for {\it shrinking} solitons and gave some rigidity results (cf. \cite{PW09}).
In this paper, we completely classify 3-dimensional complete rectifiable steady gradient Ricci solitons.
\begin{theorem}\label{main}
Any $3$-dimensional complete rectifiable steady gradient Ricci soliton is isometric to $(1)$ a quotient of $\mathbb{R}^3$, or $(2)$ the Bryant soliton.
\end{theorem}
By using the same argument, one can classify expanding solitons.
\begin{theorem}\label{main2}
Any $3$-dimensional complete rectifiable expanding gradient Ricci soliton with positive Ricci curvature is rotationally symmetric.
\end{theorem}
Interestingly, G. Catino and L. Mazzieri \cite{CM16} showed that any $\rho\,(\not=0)$-Einstein soliton is rectifiable.
The $\rho-$Einstein soliton equation is as follows:
For $\rho\in\mathbb{R}$,
\begin{equation}\label{rhoein}
{\rm Ric}+\nabla\nabla f=\rho R g+\lambda g,~~(\lambda\in\mathbb{R}).
\end{equation}
If $\rho=0,$ then it is the gradient Ricci soliton equation.
\begin{remark}
$(1)$ Here we remark that Catino and Mazzieri called the equation \eqref{rhoein} the $\rho$-Einstein soliton equation only when $\rho\not=0$, but in this paper we include the case $\rho=0.$
$(2)$ The first version of this paper appeared on the arXiv in November $2019$. Recently, in October $2020$, Yi Lai constructed the flying wing.
\end{remark}
\section{Preliminaries}
In this section, we recall some notions and basic facts.
The Riemannian curvature tensor is defined by
$$R(X,Y)Z=-\nabla_X\nabla_YZ+\nabla_Y\nabla_XZ+\nabla_{[X,Y]}Z,$$
where $\nabla$ is the Levi-Civita connection on $M$.
The Weyl tensor $W$ and the Cotton tensor $C$ are defined by
\begin{align*}
W_{ijkl}
=&R_{ijkl}-\frac{1}{n-2}(R_{ik}g_{jl}+R_{jl}g_{ik}-R_{il}g_{jk}-R_{jk}g_{il})\\
&+\frac{R}{(n-1)(n-2)}(g_{ik}g_{jl}-g_{il}g_{jk}),
\end{align*}
and
\begin{align*}
C_{ijk}
=&\nabla_k A_{ij}-\nabla_j A_{ik},
\end{align*}
where $A={\rm Ric} -\frac{R}{2(n-1)}g$ is the Schouten tensor, and $R_{ij}={\rm Ric}_{ij}$.
The Cotton tensor is skew-symmetric in the last two indices and totally trace free, that is,
$$C_{ijk} = -C_{ikj}, \quad C_{iik}=C_{iji}=0.$$
As is well known, a Riemannian manifold $(M^n,g)$ is locally conformally flat if and only if
(1) for $n\geq4$, the Weyl tensor vanishes; (2) for $n=3$, the Cotton tensor vanishes.
Moreover, for $n\geq4$, if the Weyl tensor vanishes, then the Cotton tensor vanishes. We also see that for $n=3$, the Weyl tensor always vanishes, but the Cotton tensor does not vanish in general.
To study gradient Ricci solitons, Cao and Chen introduced the tensor $D$ (cf.~\cite{CC12}).
\begin{align*}
D_{ijk}
=&\frac{1}{n-2}(\nabla_k fR_{ij}-\nabla_jfR_{ik})+\frac{1}{(n-1)(n-2)}\nabla_t f (R_{tk}g_{ij}-R_{tj}g_{ik})\\
&-\frac{R}{(n-1)(n-2)}(\nabla_k fg_{ij}-\nabla_jfg_{ik}).
\end{align*}
In this paper, we call it the Cao-Chen tensor. The Cao-Chen tensor has the same symmetry properties as the Cotton tensor, that is,
$$D_{ijk} = -D_{ikj}, \quad D_{iik}=D_{iji}=0.$$
There is a relationship between the Cotton tensor and the Cao-Chen tensor:
$$C_{ijk}+\nabla_tf W_{tijk}=D_{ijk}.$$
Thus, in dimension $3$, we have $D=C$.
\section{Proof of Theorem $\ref{main}$}\label{Proof of main}
In this section, we show Theorem \ref{main}.
\begin{proof}
In general, by B.-L. Chen's theorem \cite{Chen09}, $(M,g,f)$ has nonnegative sectional curvature.
By Hamilton's identity
$$R+|\nabla f|^2=c,$$
for some constant $c$, $(M,g,f)$ has bounded curvature.
Therefore, by Hamilton's strong maximum principle, $(M,g,f)$ is either
(1) flat, or (2) a product $\Sigma^2\times \mathbb{R}$, where $\Sigma^2$ is the cigar steady soliton, or (3) it has positive sectional curvature.
We first consider the case (2).
On $(\Sigma^2\times\mathbb{R},g)$, we adopt global coordinates $s,x,y$ and hence the metric and the potential function take the form
$$g=ds^2+\frac{dx^2+dy^2}{1+x^2+y^2},\quad f(s,x,y)=-\mathrm{loc}og(1+x^2+y^2).$$
Therefore, it is not rectifiable.
We consider the case (3).
By the soliton equation ${\rm Ric} +\nabla\nabla f=0,$ the potential function $f$ is concave.
Thus, $f$ has only one critical point $O$. Set $r(x)=\mathrm{dist}(x,O)$ for $x\in \Omega=M\backslash (C_O\cup\{O\})$, the distance function from the point $O$, where $C_{O}$ is the cut locus of $O$.
Assume that $M$ is rectifiable, namely $f=f(r)$, where $r$ is the distance function. Here we remark that $|\nabla r|=1$ and $\nabla_{\nabla r}\nabla r=0.$ Let $\{e_1=\nabla r,e_2,e_3\}$ be an orthonormal frame on $\Omega$.
We use subscripts $a,b=2,3~(a\not=b)$, and denote $\nabla_1=\nabla_{e_1}$ and $\nabla_a=\nabla_{e_a}$.
Since $f$ is rectifiable, we have
\begin{equation}\label{nf}
\nabla_1f=f'(r)<0,~~\nabla_af=0.
\end{equation}
By Hamilton's identity $R=c-|\nabla f|^2=c-(f'(r))^2$,
\begin{align}
\nabla_a R=&0.\label{n1R}
\end{align}
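Spelled out, \eqref{n1R} follows from the chain rule together with $\nabla_a r = \langle \nabla r, e_a \rangle = 0$ for $a = 2,3$:
\[ \nabla_a R = \nabla_a \left( c - (f'(r))^2 \right) = -2 f'(r) f''(r) \, \nabla_a r = 0. \]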
A direct computation yields that
\begin{align*}
R_{ij}
=&-\nabla_i\nabla_jf\\
=&-f''(r)\nabla_ir\nabla_jr-f'(r)\nabla_i\nabla_jr.
\end{align*}
Since $\nabla_1r=1$, $\nabla_ar=\nabla_1\nabla_1r=0$ and ${\rm Ric}(\nabla f, X)=\frac{1}{2}\langle \nabla R,X\rangle$, we obtain
\begin{align}
R_{11}=&-f''(r),~~R_{1a}=0,~~R_{ab}=-f'(r)\nabla_a\nabla_br.\label{Rij}
\end{align}
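We note that the vanishing of the mixed terms can also be read off from the identity ${\rm Ric}(\nabla f, X)=\frac{1}{2}\langle \nabla R,X\rangle$ together with \eqref{nf} and \eqref{n1R}:
\[ f'(r)\,R_{1a} = {\rm Ric}(\nabla f, e_a) = \frac{1}{2}\,\nabla_a R = 0, \]
and since $f'(r)<0$ on $\Omega$, this forces $R_{1a}=0$.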
We show that the Cao-Chen tensor $D$ vanishes.
We only have to consider the following 5 cases: $D_{11a},D_{1ab},D_{a1a},D_{a1b},D_{aab}~~(a\not=b)$.
\begin{align*}
D_{11a}
=&\nabla_afR_{11}-\nabla_1fR_{1a}+\frac{1}{2}\nabla_1f(R_{1a}g_{11}-R_{11}g_{1a})\\
&-\frac{R}{2}(\nabla_afg_{11}-\nabla_1fg_{1a})\\
=&0,
\end{align*}
where we used \eqref{nf} and \eqref{Rij}.
Similar computations show that
\begin{align*}
D_{1ab}=D_{aab}=0,
\end{align*}
\begin{align*}
D_{a1a}=-\nabla_1f(R_{aa}+\frac{R_{11}}{2}-\frac{R}{2}),
\end{align*}
and
\begin{align*}
D_{a1b}=-\nabla_1fR_{ab}.
\end{align*}
Since the dimension of $M$ is 3, $D_{ijk}=C_{ijk}$. Hence, we also compute $C_{ijk}$.
\begin{align*}
C_{11a}=C_{1ab}=0,
\end{align*}
\begin{align*}
C_{a1a}
=-\nabla_1(R_{aa}-\frac{R}{4}),
\end{align*}
$$C_{a1b}=-\nabla_1R_{ab},$$
and
\begin{align*}
C_{aab}
=&\nabla_bR_{aa}-\nabla_aR_{ab}.
\end{align*}
Since $D_{ijk}=C_{ijk}$, we have
\begin{align}
&\nabla_1(R_{aa}-\frac{R}{4})=\nabla_1f(R_{aa}+\frac{R_{11}}{2}-\frac{R}{2}),\label{key2}\\
&\nabla_1fR_{ab}=\nabla_1R_{ab},\label{key3}\\
&\nabla_bR_{aa}=\nabla_aR_{ab}.\label{key1}
\end{align}
By \eqref{key2}, we obtain
$$\nabla_1(R_{22}+R_{33}-\frac{R}{2})=\nabla_1f(R_{22}+R_{33}+R_{11}-R)=0.$$
Thus we have $\nabla_1(R-2R_{11})=0,$ and hence $R-2R_{11}=h,$ for some function $h$ which is independent of $r$. Moreover, since $\nabla_a(R-2R_{11})=0,$ $h$ is a constant $C$.
From this and \eqref{key2} again, one has
$$\nabla_1(R_{aa}-\frac{R}{4}-\frac{C}{4})=\nabla_1f(R_{aa}-\frac{R}{4}-\frac{C}{4}).$$
Assume that $R_{aa}-\frac{R}{4}-\frac{C}{4}\not=0$ on some open set $\Omega'$. We may assume that $R_{aa}-\frac{R}{4}-\frac{C}{4}>0$ on $\Omega'$ (the same argument yields a contradiction in the other case). We have
$$R_{aa}-\frac{R}{4}-\frac{C}{4}=e^{f+c_a},$$
where $c_a$ is some function which is independent of $r$.
Substituting them into $R=R_{11}+R_{22}+R_{33},$ we obtain
$$e^{f+c_2}+e^{f+c_3}=0,$$
which is a contradiction.
Thus, $R_{aa}-\frac{R}{4}-\frac{C}{4}=0$ on all of $\Omega$.
Therefore, we obtain
\begin{equation}\label{RRaa}
R_{11}=\frac{R-C}{2},~~R_{22}=R_{33}=\frac{R+C}{4},
\end{equation}
and
$$D_{a1a}=0.$$
By \eqref{RRaa} and \eqref{n1R}, one has $\nabla_2R_{22}=\nabla_3R_{22}=\nabla_2R_{33}=\nabla_3R_{33}=0.$ From this and \eqref{key1}, we obtain $\nabla_2R_{23}=\nabla_3R_{23}=0.$
We will show that $R_{23}=0$.
We use a useful formula of Catino-Mastrolia-Monticelli (cf. \cite{CMM17}):
$$R_{kt}C_{kti}=\nabla_k\nabla_tD_{itk}.$$
Take $i=1$. Since $D_{1tk}=0$, the right hand side of the above equation vanishes.
On the other hand, one has $$R_{kt}C_{kt1}=R_{23}(C_{231}+C_{321})=2\nabla_1f(R_{23})^2,$$
where we used $R_{1a}=C_{a1a}=0$, and $C_{ab1}=-C_{a1b}=-D_{a1b}=\nabla_1fR_{ab}$.
Thus, $$\nabla_1f(R_{23})^2=0,$$
hence $R_{23}=0$ and $D_{a1b}=0.$
Therefore, the Cao-Chen tensor vanishes identically on $\Omega$. In particular, $|D|\equiv0$ on $\Omega$. Since every complete Ricci soliton is analytic \cite{Bando}, $|D|\equiv0$ on all of $M$. Therefore, $M$ is rotationally symmetric \cite{CC12}, and hence it is isometric to the Bryant soliton.
\end{proof}
\begin{remark}
To prove $D\equiv0$, we used only the assumption that the Ricci curvature is positive.
\end{remark}
We can consider a similar problem for expanding gradient Ricci solitons. By using the same argument as in the proof of Theorem \ref{main}, we can show Theorem \ref{main2}:
\begin{theorem}
Any $3$-dimensional complete rectifiable expanding gradient Ricci soliton with positive Ricci curvature is rotationally symmetric.
\end{theorem}
In fact, the same argument as in the proof of Theorem \ref{main} shows that $D=C\equiv0$ on $M$, that is, $M$ is locally conformally flat, and the result follows from \cite{CC12}.
\begin{remark}
To classify expanding solitons, Deruelle's work is important \cite{Deruelle16}.
\end{remark}
\end{document}
\begin{document}
\def\vartriangle{\vartriangle}
\def$(Bin(X),$ $ \Box)$ {$(Bin(X),$ $ \Box)$ }
\def\cvb{
\begin{tabular}{c | c c }
$\bullet$ & x & y \\
\hline
x & x & y \\
y & a & y \\
\end{tabular}
}
\def\ccb{
\begin{tabular}{c | c c }
$*$ & x & y \\
\hline
x & x & x \\
y & x & y \\
\end{tabular}
}
\def\vb{
\begin{tabular}{c | c c }
$\bullet$ & x & y \\
\hline
x & x & y \\
y & x & y \\
\end{tabular}
}
\def\BMH{
\begin{tabular}{c|c c c}
$\bullet$ & a & b & c \\
\hline
a & a & b & c \\
b & b & b & c \\
c & c & b & c \\
\end{tabular}
}
\def\Tus{
\begin{tabular}{c | c c }
$\bullet$ & a & b \\
\hline
a & a & b \\
b & b & b \\
\end{tabular}
}
\def\cal{
\begin{tabular}{c | c c }
$\bullet$ & b & c \\
\hline
b & b & c \\
c & b & c \\
\end{tabular}
}
\def\oosa{
\begin{tabular}{c | c c }
$\bullet$ & a & c \\
\hline
a & a & c \\
c & c & c \\
\end{tabular}
}
\def\as{
\begin{tabular}{c|c c c}
$\bullet$ & a & b & c \\
\hline
a & a & b & c \\
b & b & b & c \\
c & c & b & c \\
\end{tabular}
}
\def\BM2{
\begin{tabular}{c|c c c}
$\bullet$ & a & b & c \\
\hline
a & a & b & c \\
b & b & b & b \\
c & c & c & c \\
\end{tabular}
}
\def\bg{
\begin{tabular}{c|c c c}
$\bullet$ & a & b & c \\
\hline
a & a & b & c \\
b & b & b & c \\
c & c & a & c \\
\end{tabular}
}
\def\vf{
\begin{tabular}{c|c c c}
$\bullet$ & a & b & c \\
\hline
a & a & b & c \\
b & b & b & a \\
c & c & c & c \\
\end{tabular}
}
\def\cd{
\begin{tabular}{c|c c c}
$\bullet$ & a & b & c \\
\hline
a & a & b & c \\
b & c & b & b \\
c & c & c & c \\
\end{tabular}
}
\def\xs{
\begin{tabular}{c|c c c}
$\bullet$ & a & b & c \\
\hline
a & a & c & c \\
b & b & b & b \\
c & c & c & c \\
\end{tabular}
}
\def\ta{
\begin{tabular}{c | c c }
$\Box$ & L & R \\
\hline
L & L & R \\
R & R & L \\
\end{tabular}
}
\title{Locally-zero Groupoids and the Center of $Bin(X)$ }
\author{Hiba F. Fayoumi}
\maketitle
\begin{abstract}
In this paper we introduce the notion of the center $ZBin(X)$ in the semigroup
$Bin(X)$ of all binary systems on a set $X$, and show that if $(X,\bullet
)\in ZBin(X)$, then $x\not=y$ implies $\{x,y\}=\{x\bullet y,y\bullet x\}$.
Moreover, we show that a groupoid $(X,\bullet )\in ZBin(X)$ if and only if
it is a locally-zero groupoid.
\end{abstract}
\markboth{\footnotesize \rm Hiba F. Fayoumi }
{\footnotesize \rm Locally-zero Groupoids and the Center of $Bin(X)$ }
\renewcommand{\thefootnote}{}
\footnotetext{\textit{2000 Mathematics Subject Classification.} 20N02.}
\footnotetext{\textit{Key words and phrases.} center, locally-zero, $Bin(X)$.
}
\section{\textbf{Preliminaries} \protect
}
The notion of the semigroup $(Bin(X),$ $\Box )$ was introduced by H. S. Kim
and J. Neggers \textrm{([2])}. Given binary operations \textquotedblleft $
\ast $" and \textquotedblleft $\bullet $" on a set $X$, they defined a
product binary operation \textquotedblleft $\Box $" as follows: $x\Box
y:=(x\ast y)\bullet (y\ast x)$. This in turn yields a binary operation on $
Bin(X)$, the set of all groupoids defined on $X$, turning $(Bin(X),\Box )$
into a semigroup whose identity is the left-zero-semigroup ($x\ast y=x$),
with the right-zero-semigroup playing the role of an analog of negative one.
\textbf{Theorem 1.1\textrm{{([2])}.}} \textsl{The collection $(Bin(X),$ $
\Box )$ of all binary systems (groupoids or algebras) defined on $X$ is a
semigroup, i.e., the operation $\Box $ as defined in general is associative.
Furthermore, the left-zero-semigroup is an identity for this operation.}
\textbf{Example 1.2\textrm{{([2])}.}} Let $(R,+,\cdot ,0,1)$ be a
commutative ring with identity and let $L(R)$ denote the collection of
groupoids $(R,\ast )$ such that for all $x,y\in R$
\begin{equation*}
x\ast y=ax+by+c
\end{equation*}
where $a,b,c\in R$ are fixed constants. We shall consider such groupoids to
be \textit{linear groupoids}. Notice that $a=1,b=c=0$ yields $x\ast y=1\cdot
x=x$, and thus the left-zero-semigroup on $R$\ is a linear groupoid. Now,
suppose that $(R,\ast )$ and $(R,\bullet )$ are linear groupoids where $
x\ast y=ax+by+c$ and $x\bullet y=dx+ey+f$. Then $x\,\Box
\,y=d(ax+by+c)+e(ay+bx+c)+f=(da+eb)x+(db+ea)y+(d+e)c+f$, whence $(R,\Box
)=(R,\ast )\Box (R,\bullet )$ is also a linear groupoid, i.e., $(L(R),\Box )$
is a semigroup with identity.
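The closure computation of Example 1.2 is easy to verify mechanically. The following sketch (our own illustration, not part of the paper) checks it over the ring $\mathbb{Z}/7\mathbb{Z}$ with arbitrarily chosen coefficients:

```python
# Sanity check (illustrative): the product of two linear groupoids over
# Z/7 is again linear, with coefficients (da+eb, db+ea, (d+e)c+f).
P = 7  # a small prime, so Z/P is a commutative ring with identity

def linear(a, b, c):
    """Return the groupoid operation x*y = a x + b y + c over Z/P."""
    return lambda x, y: (a * x + b * y + c) % P

def box(star, bullet):
    """The product operation x [] y = (x*y) . (y*x)."""
    return lambda x, y: bullet(star(x, y), star(y, x))

a, b, c = 2, 3, 5   # arbitrary coefficients for (R, *)
d, e, f = 4, 1, 6   # arbitrary coefficients for (R, .)
lhs = box(linear(a, b, c), linear(d, e, f))
rhs = linear((d*a + e*b) % P, (d*b + e*a) % P, ((d + e)*c + f) % P)
assert all(lhs(x, y) == rhs(x, y) for x in range(P) for y in range(P))
```

The same check passes for any choice of the six coefficients, mirroring the symbolic computation in the text.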
\textbf{Example 1.3 \textrm{{([2])}.}} Suppose that in $Bin(X)$ we
consider
all those groupoids $(X,\ast )$ with the \textit{orientation property}: $
x\ast y\in \{x,y\}$ for all $x$ and $y$. Thus, $x\ast x=x$ as a consequence.
If $(X,\ast )$ and $(X,\bullet )$ both have the orientation property, then
for $x\,\Box \,y=(x\ast y)\bullet (y\ast x)$ we have the possibilities: $
x\ast x=x,\,y\ast y=y,\,x\ast y\in \{x,y\}$ and $y\ast x\in \{x,y\}$, so
that $x\,\Box \,y\in \{x,y\}$. It follows that if $OP(X)$ denotes this
collection of groupoids, then $(OP(X),\Box )$ is a subsemigroup of $(Bin(X),$
$\Box )$ . In a sequence of papers Nebesk\'{y} \textrm{([3, 4, 5])} has
sought to associate with graphs $(V,E)$ groupoids $(V,\ast )$ with various
properties and conversely. He defined a \textit{travel groupoid} $(X,\ast )$
as a groupoid satisfying the axioms: $(u\ast v)\ast u=u$ and $(u\ast v)\ast
v=u$ implies $u=v$. If one adds these two laws to the orientation property,
then $(X,\ast )$ is an OP-travel-groupoid. In this case $u\ast v=v$ implies $
v\ast u=u$, i.e., $uv\in E$ implies $vu\in E$, i.e., the digraph $(X,E)$ is
a (simple) graph if $uu\not\in E$, with $u\ast u=u$. Also, if $u\not=v$,
then $u\ast v=u$ implies $(u\ast v)\ast v=u\ast v=u$ is impossible, whence $
u\ast v=v$ and $uv\in E$, so that $(X,E)$ is a complete (simple) graph.
\section{\textbf{The Center of }$Bin\left( X\right) $\textbf{\ }\protect
}
Let $ZBin(X)$ denote the collection of all groupoids $(X,\bullet )\in
Bin(X)$ such that $(X,\ast )\,\Box \,(X,\bullet )=(X,\bullet )\,\Box
\,(X,\ast )$ for all $(X,\ast )\in Bin(X)$. We call $ZBin(X)$ the
\textit{center} of the semigroup $Bin(X)$.
\noindent \textbf{Proposition 2.1.} \textsl{The left-zero-semigroup and the
right-zero-semigroup on $X$ are both in $ZBin(X)$.}
\noindent \textit{Proof.} Given a groupoid $(X,\ast )$, let $(X,\bullet )$
be a left-zero-semigroup. Then $(x\bullet y)\ast (y\bullet x)=x\ast y=(x\ast
y)\bullet (y\ast x)$ for all $x,y\in X$, proving $(X,\bullet )\in ZBin(X)$.
Similarly, it holds for the right-zero-semigroup. $\blacksquare $
\noindent \textbf{Proposition 2.2.} \textsl{If $(X,\bullet )\in ZBin(X)$,
then $x\bullet x=x$ for all $x\in X$.}
\noindent \textit{Proof.} If $(X,\bullet )\in ZBin(X)$, then $(X,\bullet
)\Box (X,\ast )=(X,\ast )\Box (X,\bullet )$ for all $(X,\ast )\in Bin(X)$.
Let $(X,\ast )\in Bin(X)$ be defined by $x\ast y=a$ for any $x,y\in X$ where $
a\in X$. Then $(x\bullet y)\ast (y\bullet x)=a$ and $(x\ast y)\bullet (y\ast
x)=a\bullet a$ for any $x,y\in X$. Hence we obtain $a\bullet a=a$. If we
change $\left( X,\ast \right) $ in $Bin\left( X\right) $ so that $x\ast y=b$
for every $x,y\in X$ and $b$ is any other element of $X$, then we find that $
a\bullet a=a$ for any $a\in X$. $\blacksquare $
Any set can be well-ordered by the well-ordering principle, and a
well-ordered set is linearly ordered. With this in mind we prove the following.
\noindent \textbf{Theorem 2.3.} \textsl{If $(X,\bullet )\in ZBin(X)$, then $
x\not=y$ implies $\{x,y\}=\{x\bullet y,y\bullet x\}$.}
\noindent \textit{Proof.} Let $(X,<)$ be a linearly ordered set and let $
(X,\ast )\in Bin(X)$ be defined by
\begin{equation*}
x\ast y:=\min \{x,y\},\,\,\,\forall x,y\in X\eqno\left( 1\right)
\end{equation*}
Then we have the following:
\begin{equation*}
(x\ast y)\bullet (y\ast x)=
\begin{cases}
x & \text{ if $x\leq y$} \\
y & \text{otherwise}
\end{cases}
\eqno\left( 2\right)
\end{equation*}
\noindent Similarly, we have
\begin{equation*}
(x\bullet y)\ast (y\bullet x)=\min \{x\bullet y,y\bullet x\}\in \{x\bullet
y,y\bullet x\}\eqno\left( 3\right)
\end{equation*}
If $(X,\bullet )\in ZBin(X)$, then $x<y$ implies $x\in \{x\bullet y,y\bullet
x\}$ for all $x,y\in X$. Similarly, if we define $(X,\ast )\in Bin(X)$ by $
x\ast y:=\max \{x,y\}$, for all $x,y\in X$, then $x<y$ implies $y\in
\{x\bullet y,y\bullet x\}$ for all $x,y\in X$ when $(X,\bullet )\in ZBin(X)$
. In any case, we obtain that if $(X,\bullet )\in ZBin(X)$, then
\begin{equation*}
x,y\in \{x\bullet y,y\bullet x\}\eqno\left( 4\right)
\end{equation*}
We consider 4 cases: (i) $x<y,x\bullet y<y\bullet x$; (ii) $x<y,y\bullet
x<x\bullet y$; (iii) $y<x,x\bullet y<y\bullet x$; (iv) $y<x,y\bullet
x<x\bullet y$. Routine calculations give us the conclusion that $
\{x,y\}=\{x\bullet y,y\bullet x\}$. $\blacksquare $
\noindent \textbf{Proposition 2.4.} \textsl{Let $(X,\bullet )\in ZBin(X)$.
If $x\not=y$ in $X$, then $(\{x,y\},\bullet )$ is either a
left-zero-semigroup or a right-zero-semigroup.}
\noindent \textit{Proof.} Assume that $(X,\bullet )$ is not a
left-zero-semigroup and $x\not=y$ in $X$. Then $(X,\bullet )$ has a subtable:
\begin{equation*}
\begin{tabular}{c|cc}
$\bullet $ & $x$ & $y$ \\ \hline
$x$ & $x$ & $y$ \\
$y$ & $a$ & $y$
\end{tabular}
\end{equation*}
where $a\in \{x,y\}$. Note that $x\bullet x=x,y\bullet y=y$ by Proposition
2.2. Let $(X,\ast )\in Bin(X)$ be such that $(X,\ast )$ has a subtable:
\begin{equation*}
\begin{tabular}{c|cc}
$\ast $ & $x$ & $y$ \\ \hline
$x$ & $x$ & $x$ \\
$y$ & $x$ & $y$
\end{tabular}
\end{equation*}
Since $(X,\bullet )\in ZBin(X)$, we have $(x\ast y)\bullet (y\ast
x)=(x\bullet y)\ast (y\bullet x)$ and hence $x\bullet x=y\ast a$. If $a=x$,
then $x\bullet x=y\ast x=x$. If $a=y$, then $x=x\bullet x=y\ast y=y$, a
contradiction. Hence $(X,\bullet )$ should have a subtable:
\begin{equation*}
\begin{tabular}{c|cc}
$\bullet $ & $x$ & $y$ \\ \hline
$x$ & $x$ & $y$ \\
$y$ & $x$ & $y$
\end{tabular}
\end{equation*}
This means $(X,\bullet )$ should be a right-zero-semigroup. Similarly, if $
(X,\bullet )$ is not a right-zero-semigroup, then it must have a $2\times 2$
table of a left-zero-semigroup. $\blacksquare $
\noindent \textbf{Proposition 2.5.} \textsl{If $(\{x,y\},\bullet )$ is
either a left-zero-semigroup or a right-zero-semigroup for any $x\not=y$ in $
X$, then $(X,\bullet )\in ZBin(X)$.}
\noindent \textit{Proof.} Given $(X,\ast )\in Bin(X)$, let $x\not=y$ in $X$.
Consider $(x\ast y)\bullet (y\ast x)$ and $(x\bullet y)\ast (y\bullet x)$.
If we assume that $(\{x,y\},\bullet )$ is a left-zero-semigroup, then $
(x\ast y)\bullet (y\ast x)=x\ast y=(x\bullet y)\ast (y\bullet x)$.
Similarly, if we assume that $(\{x,y\},\bullet )$ is a right-zero-semigroup,
then $(x\ast y)\bullet (y\ast x)=y\ast x=(x\bullet y)\ast (y\bullet x)$.
Hence $(X,\bullet )\in ZBin(X)$. $\blacksquare $
\noindent \textbf{Example 2.6.} Let $X:=\{a,b,c\}$ with the following table:
\begin{equation*}
\begin{tabular}{r|rrr}
$\bullet $ & $a$ & $b$ & $c$ \\ \hline
$a$ & $a$ & $a$ & $c$ \\
$b$ & $b$ & $b$ & $b$ \\
$c$ & $a$ & $c$ & $c$
\end{tabular}
\end{equation*}
Then $(X,\bullet )$ is neither a left-zero-semigroup nor a
right-zero-semigroup, while it has the following subtables:
\begin{equation*}
\begin{tabular}{r|rr}
$\bullet $ & $a$ & $b$ \\ \hline
$a$ & $a$ & $a$ \\
$b$ & $b$ & $b$
\end{tabular}
\quad
\begin{tabular}{r|rr}
$\bullet $ & $a$ & $c$ \\ \hline
$a$ & $a$ & $c$ \\
$c$ & $a$ & $c$
\end{tabular}
\quad
\begin{tabular}{r|rr}
$\bullet $ & $b$ & $c$ \\ \hline
$b$ & $b$ & $b$ \\
$c$ & $c$ & $c$
\end{tabular}
\end{equation*}
By applying Proposition 2.5, we can see that $(X,\bullet )\in ZBin(X)$.
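The hypothesis of Proposition 2.5 for Example 2.6 can be checked by brute force. A short sketch (our illustration; the letters $a,b,c$ are renamed $0,1,2$):

```python
# Illustrative check (not from the paper): the groupoid of Example 2.6
# is idempotent and every 2-element subtable is a left- or
# right-zero-semigroup, i.e. the hypothesis of Proposition 2.5 holds.
from itertools import combinations

X = [0, 1, 2]  # a, b, c renamed 0, 1, 2
bullet = {(0, 0): 0, (0, 1): 0, (0, 2): 2,
          (1, 0): 1, (1, 1): 1, (1, 2): 1,
          (2, 0): 0, (2, 1): 2, (2, 2): 2}

assert all(bullet[(x, x)] == x for x in X)  # idempotence (Proposition 2.2)
for x, y in combinations(X, 2):
    left_zero = bullet[(x, y)] == x and bullet[(y, x)] == y
    right_zero = bullet[(x, y)] == y and bullet[(y, x)] == x
    assert left_zero or right_zero
```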
\noindent \textbf{Proposition 2.7.} \textsl{Let }$Ab\left( X\right) $\textsl{
\ be the collection of all commutative binary systems on }$X$\textsl{. Then }
$Ab\left( X\right) $\textsl{\ is a right ideal of }$ZBin\left( X\right) $
\textsl{.}
\noindent \textit{Proof.} Let $\left( X,\bullet \right) \in ZBin\left(
X\right) $ and $\left( X,\ast \right) \in Ab\left( X\right) $. Then by
Proposition 2.2, we have $x\Box y=\left( x\ast y\right) \bullet \left( y\ast
x\right) =\left( x\ast y\right) \bullet \left( x\ast y\right) =x\ast y$.
Also, by Proposition 2.2, we get $y\Box x=\left( y\ast x\right) \bullet
\left( x\ast y\right) =\left( y\ast x\right) \bullet \left( y\ast x\right)
=y\ast x$. Therefore, $\left( X,\ast \right) \Box \left( X,\bullet \right)
\in Ab\left( X\right) $ and so $Ab\left( X\right) \Box ZBin\left( X\right)
\subseteq Ab\left( X\right) $. $\blacksquare $
\section{\textbf{Locally-zero Groupoids}\protect
}
A groupoid $(X,\bullet )$ is said to be \textit{locally-zero} if (i) $
x\bullet x=x$ for all $x\in X$; (ii) for any $x\not=y$ in $X$, $
(\{x,y\},\bullet )$ is either a left-zero-semigroup or a
right-zero-semigroup.
Using Propositions 2.2, 2.4 and 2.5 we obtain the following.
\noindent \textbf{Theorem 3.1.} \textsl{A groupoid $(X,\bullet )\in ZBin(X)$
if and only if it is a locally-zero groupoid.}
Given any two distinct elements $x,y\in X$, there exist exactly one
left-zero-semigroup and exactly one right-zero-semigroup on $\{x,y\}$, and so
applying Theorem 3.1 we obtain the following corollary.
\noindent \textbf{Corollary 3.2.} \textsl{If $|X|=n$, there are $2^{\binom{n
}{2}}$ distinct (but not necessarily non-isomorphic) locally-zero groupoids.}
\noindent For example, if $n=3$, there are $2^{3}=8$ such groupoids, i.e.,
\begin{equation*}
\begin{tabular}{c|ccc}
$\bullet $ & $a$ & $b$ & $c$ \\ \hline
$a$ & $a$ & $a$ & $a$ \\
$b$ & $b$ & $b$ & $b$ \\
$c$ & $c$ & $c$ & $c$
\end{tabular}
\ \quad
\begin{tabular}{c|ccc}
$\bullet $ & $a$ & $b$ & $c$ \\ \hline
$a$ & $a$ & $a$ & $a$ \\
$b$ & $b$ & $b$ & $c$ \\
$c$ & $c$ & $b$ & $c$
\end{tabular}
\text{ }\quad
\begin{tabular}{c|ccc}
$\bullet $ & $a$ & $b$ & $c$ \\ \hline
$a$ & $a$ & $a$ & $c$ \\
$b$ & $b$ & $b$ & $b$ \\
$c$ & $a$ & $c$ & $c$
\end{tabular}
\text{ }\quad
\begin{tabular}{c|ccc}
$\bullet $ & $a$ & $b$ & $c$ \\ \hline
$a$ & $a$ & $b$ & $a$ \\
$b$ & $a$ & $b$ & $b$ \\
$c$ & $c$ & $c$ & $c$
\end{tabular}
\end{equation*}
\begin{equation*}
\begin{tabular}{c|ccc}
$\bullet $ & $a$ & $b$ & $c$ \\ \hline
$a$ & $a$ & $b$ & $c$ \\
$b$ & $a$ & $b$ & $c$ \\
$c$ & $a$ & $b$ & $c$
\end{tabular}
\ \quad
\begin{tabular}{c|ccc}
$\bullet $ & $a$ & $b$ & $c$ \\ \hline
$a$ & $a$ & $a$ & $c$ \\
$b$ & $b$ & $b$ & $c$ \\
$c$ & $a$ & $b$ & $c$
\end{tabular}
\ \quad
\begin{tabular}{c|ccc}
$\bullet $ & $a$ & $b$ & $c$ \\ \hline
$a$ & $a$ & $b$ & $a$ \\
$b$ & $a$ & $b$ & $c$ \\
$c$ & $c$ & $b$ & $c$
\end{tabular}
\ \quad
\begin{tabular}{c|ccc}
$\bullet $ & $a$ & $b$ & $c$ \\ \hline
$a$ & $a$ & $b$ & $c$ \\
$b$ & $a$ & $b$ & $b$ \\
$c$ & $a$ & $c$ & $c$
\end{tabular}
\end{equation*}
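The count in Corollary 3.2 can be confirmed by exhaustive enumeration for $n=3$. The following sketch (our own; it simply tries all $3^9$ Cayley tables) recovers $2^{\binom{3}{2}}=8$:

```python
# Enumeration sketch (illustrative): exactly 2**binom(3,2) = 8 groupoids
# on a 3-element set are locally-zero.
from itertools import product, combinations

X = [0, 1, 2]
pairs = [(x, y) for x in X for y in X]

def locally_zero(t):
    """True iff t is idempotent and each pair is a left- or right-zero table."""
    if any(t[(x, x)] != x for x in X):
        return False
    return all(
        (t[(x, y)], t[(y, x)]) in {(x, y), (y, x)}
        for x, y in combinations(X, 2)
    )

count = sum(
    locally_zero(dict(zip(pairs, values)))
    for values in product(X, repeat=9)  # all 3**9 Cayley tables
)
assert count == 2 ** 3  # one binary L/R choice per unordered pair
```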
\noindent \textbf{Corollary 3.3.} \textsl{The collection of all locally-zero
groupoids on $X$ forms a subsemigroup of $(Bin(X),\Box )$.}
\noindent \textit{Proof.} Let $x\not=y$ in $X$. If $(\{x,y\},\bullet )$ is a
left-zero-semigroup and $(\{x,y\},\ast )$ is a right-zero semigroup, then $
x\Box y=(x\bullet y)\ast (y\bullet x)=x\ast y=y$, $y\Box x=(y\bullet x)\ast
(x\bullet y)=y\ast x=x$, i.e., $(\{x,y\},\Box )$ is a right-zero-semigroup.
Similarly, we can prove the other three cases, i.e.,
\begin{equation*}
\begin{tabular}{c|cc}
$\Box $ & $L$ & $R$ \\ \hline
$L$ & $L$ & $R$ \\
$R$ & $R$ & $L$
\end{tabular}
\end{equation*}
where $L$ means the \textquotedblleft left-zero-semigroup" and $R$ means
that \textquotedblleft right-zero-semigroup", proving that the locally-zero
groupoids on $X$ form a subsemigroup of $(Bin(X),\Box )$. $\blacksquare $
Using Corollary 3.3, we can see that $(X,\bullet )\Box (X,\ast )$ belongs to
the center $ZBin\left( X\right) $ of $Bin(X)$ for any $(X,\bullet ),(X,\ast
)\in ZBin(X)$.
\noindent \textbf{Proposition 3.4. }\textsl{Not all locally-zero groupoids
are semigroups.}
\noindent \textit{Proof. }Consider $\left( X,\bullet \right) $ where $
X:=\left\{ a,b,c\right\} $ and \textquotedblleft $\bullet $" is given by the following table:
\begin{equation*}
\begin{tabular}{r|rrr}
$\bullet $ & $a$ & $b$ & $c$ \\ \hline
$a$ & $a$ & $a$ & $c$ \\
$b$ & $b$ & $b$ & $b$ \\
$c$ & $a$ & $c$ & $c$
\end{tabular}
\end{equation*}
Then it is easy to see that $\left( X,\bullet \right) $ is locally-zero.
Consider the subtables:
\begin{equation*}
\begin{tabular}{r|rr}
$\bullet $ & $a$ & $b$ \\ \hline
$a$ & $a$ & $a$ \\
$b$ & $b$ & $b$
\end{tabular}
\quad \text{ }
\begin{tabular}{r|rr}
$\bullet $ & $a$ & $c$ \\ \hline
$a$ & $a$ & $c$ \\
$c$ & $a$ & $c$
\end{tabular}
\quad \text{ }
\begin{tabular}{r|rr}
$\bullet $ & $b$ & $c$ \\ \hline
$b$ & $b$ & $b$ \\
$c$ & $c$ & $c$
\end{tabular}
\end{equation*}
and notice that $\left( \left\{ a,b\right\} ,\bullet \right) $, $\left(
\left\{ a,c\right\} ,\bullet \right) $ and $\left( \left\{ b,c\right\}
,\bullet \right) $ are left-, right- and left-zero-semigroups, respectively.
But $\left( a\bullet b\right) \bullet c=a\bullet c=c,$ while $a\bullet
\left( b\bullet c\right) =a\bullet b=a$. Hence $\left( X,\bullet \right) $
fails to be a semigroup and the result follows. $\blacksquare $
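The counterexample above is easy to double-check numerically. A sketch (ours; $a,b,c$ renamed $0,1,2$):

```python
# Numerical double-check (illustrative) of the counterexample in
# Proposition 3.4: the locally-zero groupoid below is not associative.
bullet = {(0, 0): 0, (0, 1): 0, (0, 2): 2,
          (1, 0): 1, (1, 1): 1, (1, 2): 1,
          (2, 0): 0, (2, 1): 2, (2, 2): 2}  # a, b, c -> 0, 1, 2

lhs = bullet[(bullet[(0, 1)], 2)]  # (a . b) . c
rhs = bullet[(0, bullet[(1, 2)])]  # a . (b . c)
assert lhs == 2 and rhs == 0       # c != a, so associativity fails
```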
\noindent \textbf{Proposition 3.5}. \textsl{Let }$\left( X,\bullet \right) $
\textsl{\ be a locally-zero groupoid. If }$\left( X,\bullet \right) $\textsl{
\ is a semigroup then it is either a left- or a right-zero-semigroup.}
\noindent \textit{Proof.} Suppose that $\left( X,\bullet \right) $ is a
semigroup, then $\left( x\bullet y\right) \bullet z=x\bullet \left( y\bullet
z\right) $ for all $x,y,z\in X$. By Theorem 3.1, $\left( X,\bullet \right) $
$\in ZBin\left( X\right) $, and then by Proposition 2.4, $\left( \left\{
x,y\right\} ,\bullet \right) $ is either a left-zero- or a
right-zero-semigroup for $x\neq y$. In fact, $\left( \left\{ x,z\right\}
,\bullet \right) $ and $\left( \left\{ y,z\right\} ,\bullet \right) $ are
also either left-zero- or right-zero-semigroups for $x\neq z$ and $y\neq z,$
respectively. Assume that $\left( \left\{ x,y\right\} ,\bullet \right) $, $
\left( \left\{ x,z\right\} ,\bullet \right) $ and $\left( \left\{
y,z\right\} ,\bullet \right) $ are left-, right- and left-zero-semigroups,
respectively. Then, $\left( x\bullet y\right) \bullet z=x\bullet z=z$ while $
x\bullet \left( y\bullet z\right) =x\bullet y=x$, a contradiction.
Similarly, we can reach a contradiction if we assume that $\left( \left\{
x,y\right\} ,\bullet \right) $, $\left( \left\{ x,z\right\} ,\bullet \right)
$ and $\left( \left\{ y,z\right\} ,\bullet \right) $ are right-, left- and
right-zero-semigroups, respectively. Now suppose that $\left( \left\{
x,y\right\} ,\bullet \right) $, $\left( \left\{ x,z\right\} ,\bullet \right)
$ and $\left( \left\{ y,z\right\} ,\bullet \right) $ are left-, left- and
right-zero-semigroups, respectively. Then, $\left( y\bullet x\right) \bullet
z=y\bullet z=z$ while $y\bullet \left( x\bullet z\right) =y\bullet x=y$, a
contradiction. Similarly, we can reach a contradiction if we assume that $
\left( \left\{ x,y\right\} ,\bullet \right) $, $\left( \left\{ x,z\right\}
,\bullet \right) $ and $\left( \left\{ y,z\right\} ,\bullet \right) $ are
right-, right- and left-zero-semigroups, respectively. Hence, the only two
other cases are when all three subgroupoids are either all left- or all
right-zero-semigroups. Therefore, $\left( X,\bullet \right) $ is either
left- or right-zero-semigroup. $\blacksquare $
\noindent \textbf{Proposition 3.6. }\textsl{Let }$\left( X,\bullet \right) $
\textsl{\ be a locally-zero groupoid. Then }$\left( X,\bullet \right) \Box
\left( X,\bullet \right) =\left( X,\Box \right) $\textsl{\ is the
left-zero-semigroup on }$X$\textsl{.}
\noindent \textit{Proof. }Suppose that $\left( \left\{ x,y\right\} ,\bullet
\right) $ is the right-zero-semigroup, then $x\Box y=\left( x\bullet
y\right) \bullet \left( y\bullet x\right) =y\bullet x=x.$ On the other hand,
if $\left( \left\{ x,y\right\} ,\bullet \right) $ is the
left-zero-semigroup, then $x\Box y=\left( x\bullet y\right) \bullet \left(
y\bullet x\right) =x\bullet y=x.$ Thus in both cases, $x\Box y=x$ for all $
x,y\in X$, making $\left( X,\Box \right) $ the left-zero-semigroup. $
\blacksquare $
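As a spot check of Proposition 3.6, the following sketch (ours) computes the $\Box$-square of the locally-zero groupoid from Proposition 3.4 (with $a,b,c$ renamed $0,1,2$) and confirms it is the left-zero-semigroup:

```python
# Illustrative verification of Proposition 3.6 on one locally-zero
# groupoid: its []-square is the left-zero-semigroup.
X = [0, 1, 2]
bullet = {(0, 0): 0, (0, 1): 0, (0, 2): 2,
          (1, 0): 1, (1, 1): 1, (1, 2): 1,
          (2, 0): 0, (2, 1): 2, (2, 2): 2}

# x [] y = (x . y) . (y . x)
box = {(x, y): bullet[(bullet[(x, y)], bullet[(y, x)])] for x in X for y in X}
assert all(box[(x, y)] == x for x in X for y in X)  # x [] y = x
```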
{\footnotesize {\ \textsc{\ Hiba Fayoumi, Department of Mathematics,
University of Alabama, Tuscaloosa, AL, 35487-0350, U. S. A.} }}
{\footnotesize \textit{E-mail address}: \texttt{[email protected]} }
\end{document}
\begin{document}
\begin{abstract}
We describe the class of graphs for which all metric spaces with diametrical graphs belonging to this class are ultrametric. It is shown that a metric space \((X, d)\) is ultrametric iff the diametrical graph of the metric \(d_{\varepsilon}(x, y) = \max\{d(x, y), \varepsilon\}\) is either empty or complete multipartite for every \(\varepsilon > 0\). A refinement of the last result is obtained for totally bounded spaces. Moreover, using complete multipartite graphs we characterize the compact ultrametrizable topological spaces. The bounded ultrametric spaces, which are weakly similar to unbounded ones, are also characterized via complete multipartite graphs.
\end{abstract}
\title{Ultrametrics and complete multipartite graphs}
\section{Introduction}
In what follows we write \(\mathbb{R}^{+}\) for the set of all nonnegative real numbers.
\begin{definition}\label{d1.1}
A \textit{semimetric} on a set \(X\) is a function \(d\colon X\times X\rightarrow \mathbb{R}^{+}\) satisfying the following conditions for all \(x\), \(y \in X\):
\begin{enumerate}
\item \label{d1.1:s1} \((d(x,y) = 0) \Leftrightarrow (x=y)\);
\item \label{d1.1:s2} \(d(x,y)=d(y,x)\).
\end{enumerate}
A semimetric space is a pair \((X, d)\) of a set \(X\) and a semimetric \(d\colon X\times X\rightarrow \mathbb{R}^{+}\). A semimetric \(d\colon X\times X\rightarrow \mathbb{R}^{+}\) is called a \emph{metric} if the \emph{triangle inequality}
\[
d(x, y)\leq d(x, z) + d(z, y)
\]
holds for all \(x\), \(y\), \(z \in X\). A metric \(d\colon X\times X\rightarrow \mathbb{R}^{+}\) is an \emph{ultrametric} on \(X\) if we have
\begin{equation}\label{d1.1:e3}
d(x,y) \leq \max \{d(x,z),d(z,y)\}
\end{equation}
for all \(x\), \(y\), \(z \in X\). Inequality~\eqref{d1.1:e3} is often called the \emph{strong triangle inequality}.
\end{definition}
In all ultrametric spaces, each triangle is isosceles with the base being no greater than the legs. The converse statement is also valid: ``If \(X\) is a semimetric space and each triangle in \(X\) is isosceles with the base no greater than the legs, then \(X\) is an ultrametric space''.
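The isosceles property is easy to test numerically. The following sketch (our illustration, not from the paper) uses the ultrametric on nonnegative integers in which the distance between distinct \(x\) and \(y\) is \(2\) raised to the position of their highest differing binary digit:

```python
def d(x, y):
    """An ultrametric on nonnegative integers: 2**(position of the
    highest bit where x and y differ), and 0 when x == y."""
    return 0 if x == y else 2 ** ((x ^ y).bit_length() - 1)

pts = range(16)
# strong triangle inequality
assert all(d(x, y) <= max(d(x, z), d(z, y))
           for x in pts for y in pts for z in pts)
# every triangle is isosceles with base <= legs:
# the two largest sides coincide
for x in pts:
    for y in pts:
        for z in pts:
            a, b, c = sorted([d(x, y), d(y, z), d(x, z)])
            assert b == c
```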
The ultrametric spaces are connected with various areas of investigation in mathematics, physics, linguistics, psychology and computer science. Some properties of ultrametrics have been studied in~\cite{DM2009, DD2010, DP2013SM, GroPAMS1956, Lemin1984RMS39:5, Lemin1984RMS39:1, Lemin1985SMD32:3, Lemin1988, Lemin2003, Qiu2009pNUAA, Qiu2014pNUAA, BS2017, DM2008, DLPS2008TaiA, KS2012, Vau1999TP, Ves1994UMJ, Ibragimov2012, GH1961S, PTAbAppAn2014, Dov2019pNUAA, DP2020pNUAA, DovBBMSSS2020, VauAMM1975, VauTP2003}. The use of trees and tree-like structures gives a natural language for description of ultrametric spaces \cite{Carlsson2010, DLW, Fie, GV2012DAM, HolAMM2001, H04, BH2, Lemin2003, Bestvina2002, DDP2011pNUAA, DP2019PNUAA, DPT2017FPTA, PD2014JMS, DPT2015, Pet2018pNUAA, DP2018pNUAA, DKa2021, Dov2020TaAoG, BS2017, DP2020pNUAA}.
The purpose of the present paper is to show that complete multipartite graphs also provide an adequate description of ultrametric spaces in many cases.
Let \((X, d)\) be a metric space. An \emph{open ball} with a \emph{radius} \(r > 0\) and a \emph{center} \(c \in X\) is the set
\[
B_r(c) = \{x \in X \colon d(c, x) < r\}.
\]
Write \(\mathbf{B}_X = \mathbf{B}_{X, d}\) for the set of all open balls in \((X, d)\).
We define the \emph{distance set} \(D(X)\) of a metric space \((X,d)\) as the range of the metric \(d\colon X\times X\rightarrow \mathbb{R}^{+}\),
\[
D(X) = D(X, d) := \{d(x, y) \colon x, y \in X\}
\]
and write
\[
\diam X := \sup \{d(x, y) \colon x, y \in X\}.
\]
The next notion, basic for our purposes, is that of a graph.
A \textit{simple graph} is a pair \((V,E)\) consisting of a nonempty set \(V\) and a set \(E\) whose elements are unordered pairs \(\{u, v\}\) of different points \(u\), \(v \in V\). For a graph \(G = (V, E)\), the sets \(V=V(G)\) and \(E = E(G)\) are called \textit{the set of vertices} and \textit{the set of edges}, respectively. We say that \(G\) is \emph{empty} if \(E(G) = \varnothing\). A graph \(G\) is \emph{finite} if \(V(G)\) is a finite set, \(|V(G)| < \infty\). A graph \(H\) is, by definition, a \emph{subgraph} of a graph \(G\) if the inclusions \(V(H) \subseteq V(G)\) and \(E(H) \subseteq E(G)\) are valid.
A \emph{path} is a finite nonempty graph \(P\) whose vertices can be numbered so that
\[
V(P) = \{x_0,x_1, \ldots,x_k\},\ k \geqslant 1, \quad \text{and} \quad E(P) = \bigl\{\{x_0, x_1\}, \ldots, \{x_{k-1}, x_k\}\bigr\}.
\]
In this case we say that \(P\) is a path joining \(x_0\) and \(x_k\).
A graph \(G\) is \emph{connected} if for every two distinct \(u\), \(v \in V(G)\) there is a path in \(G\) joining \(u\) and \(v\).
The \emph{complement} \(\overline{G}\) of a graph \(G\) is the graph with \(V(\overline{G}) = V(G)\) and such that
\[
\bigl(\{x, y\} \in E(\overline{G})\bigr) \Leftrightarrow \bigl(\{x, y\} \notin E(G)\bigr)
\]
for all distinct \(x\), \(y \in V(G)\).
The following notion of complete multipartite graph is well-known when the vertex set of the graph is finite (see, for example, \cite[p.~17]{Die2005}). Below we need this concept for graphs having the vertex sets of arbitrary cardinality.
\begin{definition}\label{d1.2}
Let \(G\) be a graph and let \(k \geqslant 2\) be a cardinal number. The graph \(G\) is \emph{complete \(k\)-partite} if the vertex set \(V(G)\) can be partitioned into \(k\) nonvoid, disjoint subsets, or parts, in such a way that no edge has both ends in the same part and any two vertices in different parts are adjacent.
\end{definition}
We shall say that $G$ is a \emph{complete multipartite graph} if there is a cardinal number \(k\) such that $G$ is complete $k$-partite. It is easy to prove that if \(G\) is complete multipartite, then the non-adjacency is an equivalence relation on \(V(G)\) having at least two distinct equivalence classes \cite[p.~177]{Die2005}.
Our next definition is a modification of Definition~2.1 from~\cite{PD2014JMS}.
\begin{definition}\label{d1.3}
Let $(X,d)$ be a nonempty metric space. Denote by \(G_{X,d}\) a graph such that \(V(G_{X,d}) = X\) and, for \(u\), \(v \in V(G_{X,d})\),
\begin{equation}\label{d1.3:e1}
(\{u,v\}\in E(G_{X,d}))\Leftrightarrow (d(u,v)=\diam X \text{ and } u \neq v).
\end{equation}
We call $G_{X,d}$ the \emph{diametrical graph} of \((X, d)\).
\end{definition}
\begin{example}\label{ex1.4}
Let \(X\) be a set with \(|X| \geqslant 2\) and let \(G\) be a nonempty graph with \(V(G) = X\). If we define a mapping \(d \colon X \times X \to \mathbb{R}^{+}\) by
\[
d(x, y) = \begin{cases}
0 & \text{if } x = y,\\
2 & \text{if } \{x, y\} \in E(G),\\
1 & \text{if } \{x, y\} \in E(\overline{G}),
\end{cases}
\]
then \(d\) is a metric on \(X\) and the equality \(G_{X, d} = G\) holds.
\end{example}
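Example~1.4 can be checked mechanically. A small sketch (our own; the four-vertex graph is an arbitrary choice) builds the \(\{0,1,2\}\)-valued mapping, verifies the triangle inequality, and confirms that the diametrical graph recovers \(G\):

```python
# Illustrative verification of Example 1.4 on a small nonempty graph.
from itertools import combinations

X = range(4)
E = {frozenset(p) for p in [(0, 1), (2, 3)]}  # a nonempty graph on X

def d(x, y):
    if x == y:
        return 0
    return 2 if frozenset((x, y)) in E else 1  # 2 on edges, 1 otherwise

# d is a metric: the triangle inequality holds
assert all(d(x, y) <= d(x, z) + d(z, y) for x in X for y in X for z in X)

diam = max(d(x, y) for x in X for y in X)
# the diametrical graph of (X, d) recovers G
diam_edges = {frozenset((x, y)) for x, y in combinations(X, 2)
              if d(x, y) == diam}
assert diam_edges == E
```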
\begin{example}\label{ex1.5}
If \((X, d)\) is an unbounded metric space or \(|X| = 1\) holds, then the diametrical graph \(G_{X,d}\) is empty, \(E(G_{X,d}) = \varnothing\).
\end{example}
\begin{remark}\label{r1.6}
The use of the name diametrical graph for graphs generated by metric spaces according to Definition~\ref{d1.3} is not generally accepted. For example, in~\cite{AA2008IJoMaMS, Mul1980JoGT, WLRL2019DM}, a graph \(H\) is said to be diametrical if \(H\) is connected and, for every \(u \in V(H)\), there is a unique \(v \in V(H)\) such that
\[
d_H(u, v) \geqslant d_H(x, y)
\]
holds for all distinct \(x\), \(y \in V(H)\), where \(d_H(x, y)\) is the minimum length of the paths connecting \(x\) and \(y\) in \(H\). It can be proved that a connected graph \(H = (V, E)\) with \(|V| \geqslant 2\) is diametrical in this sense if and only if the complement \(\overline{G}_{V, d_H}\) of \(G_{V, d_H}\) is complete multipartite and every part of \(\overline{G}_{V, d_H}\) contains exactly two points.
\end{remark}
The paper is organized as follows.
The necessary facts on metrics and ultrametrics are collected in Section~\ref{sec2}. In particular, Proposition~\ref{p2.7} contains a characterization of totally bounded ultrametric spaces which seems to be new.
The main results of the paper are presented in Section~\ref{sec3}. Theorem~\ref{t3.3} completely describes the class of graphs for which every metric space with diametrical graph from this class is ultrametric. In Proposition~\ref{p2.36} it is shown that diametrical graphs of totally bounded ultrametric spaces are complete multipartite with a finite number of parts. In Corollary~\ref{c3.5}, using Proposition~\ref{p2.36}, we find a characterization of ultrametrizable compact topological spaces in terms of complete multipartite graphs. New characterizations of ultrametric spaces and of totally bounded ultrametric spaces are given in Theorems~\ref{t5.18} and \ref{t5.9}, respectively. In Theorems~\ref{t2.25} and \ref{t2.35} we study interrelations between bounded and unbounded ultrametrics. In particular, in Theorem~\ref{t2.35} it is shown that the diametrical graph of a bounded ultrametric space is empty iff this space is weakly similar to an unbounded ultrametric space.
\section{Some facts on metrics and ultrametrics}
\label{sec2}
First of all, we recall a definition of \emph{total boundedness}.
\begin{definition}\label{d2.3}
A metric space \((X, d)\) is totally bounded if, for every \(r > 0\), there is a finite set \(\{B_r(x_1), \ldots, B_r(x_n)\} \subseteq \mathbf{B}_X\) such that
\[
X \subseteq \bigcup_{i = 1}^{n} B_r(x_i).
\]
\end{definition}
An important subclass of totally bounded metric spaces is the class of \emph{compact} metric spaces.
\begin{definition}[Borel--Lebesgue property]\label{d2.5}
A metric space \((X, d)\) is compact if every family \(\mathcal{F} \subseteq \mathbf{B}_X\) satisfying the inclusion
\[
X \subseteq \bigcup_{B \in \mathcal{F}} B
\]
contains a finite subfamily \(\mathcal{F}_0 \subseteq \mathcal{F}\) such that
\[
X \subseteq \bigcup_{B \in \mathcal{F}_0} B.
\]
\end{definition}
The standard definition of compactness is usually formulated as follows: every open cover of a topological space has a finite subcover.
The next proposition seems to be a useful characterization of totally bounded ultrametric spaces.
\begin{proposition}\label{p2.7}
Let \((X, d)\) be a nonempty ultrametric space and let \(\mathbf{B}_{X}^{r_1}\) be a set of all open balls (in \((X, d)\)) having a fixed radius \(r_1 > 0\),
\begin{equation}\label{p2.7:e1}
\mathbf{B}_{X}^{r_1} = \{B_{r_1}(c) \colon c \in X\}.
\end{equation}
Then the following conditions are equivalent:
\begin{enumerate}
\item \label{p2.7:s1} \(\mathbf{B}_{X}^{r_1}\) is finite for every \(r_1 > 0\).
\item \label{p2.7:s2} \((X, d)\) is totally bounded.
\end{enumerate}
\end{proposition}
To prove this proposition, we will use the following lemma.
\begin{lemma}[Corollary~4.5 \cite{DS2021a}]\label{l2.4}
Let \((X, d)\) be an ultrametric space. Then the equivalence
\[
(B_r(x_1) = B_r(x_2)) \Leftrightarrow (B_r(x_1) \cap B_r(x_2) \neq \varnothing)
\]
is valid for every \(r > 0\) and all \(x_1\), \(x_2 \in X\).
\end{lemma}
\begin{proof}[Proof of Proposition~\(\ref{p2.7}\)]
\(\ref{p2.7:s1} \Rightarrow \ref{p2.7:s2}\). The validity of this implication follows directly from Definition~\ref{d2.3}.
\(\ref{p2.7:s2} \Rightarrow \ref{p2.7:s1}\). Suppose \((X, d)\) is totally bounded. Let \(r_1 > 0\) be given. Then there is a finite set \(\{c_1, \ldots, c_n\} \subseteq X\) such that
\begin{equation}\label{p2.7:e2}
X = \bigcup_{i=1}^{n} B_{r_1}(c_i).
\end{equation}
Moreover, without loss of generality, we assume \(B_{r_1}(c_{n_1}) \neq B_{r_1}(c_{n_2})\) for all distinct \(n_1\), \(n_2 \in \{1, \ldots, n\}\). We claim that the equality
\begin{equation}\label{p2.7:e3}
\mathbf{B}_{X}^{r_1} = \{B_{r_1}(c_1), \ldots, B_{r_1}(c_n)\}
\end{equation}
holds. Indeed, the inclusion
\[
\{B_{r_1}(c_1), \ldots, B_{r_1}(c_n)\} \subseteq \mathbf{B}_{X}^{r_1}
\]
follows from \(\{c_1, \ldots, c_n\} \subseteq X\).
To prove the reverse inclusion, consider an arbitrary \(B \in \mathbf{B}_{X}^{r_1}\). Using \eqref{p2.7:e2} we can find \(i \in \{1, \ldots, n\}\) such that \(B \cap B_{r_1}(c_i) \neq \varnothing\), which implies the equality \(B = B_{r_1}(c_i)\) by Lemma~\ref{l2.4}. Thus, we have
\[
B \in \{B_{r_1}(c_1), \ldots, B_{r_1}(c_n)\}
\]
for every \(B \in \mathbf{B}_{X}^{r_1}\). Equality~\eqref{p2.7:e3} follows.
\end{proof}
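The counting argument in the proof of Proposition~\ref{p2.7} can be illustrated numerically. The sketch below (Python; the two-character hierarchical ultrametric is an illustrative choice, not taken from the paper) verifies the dichotomy of Lemma~\ref{l2.4} and counts the distinct open balls of several fixed radii:

```python
from itertools import product

# Small ultrametric space (illustrative, not from the paper):
# points are two-character strings over {0,1,2}; distance is 1 if
# the first characters differ, 1/2 if only the second ones differ.
X = ["".join(p) for p in product("012", repeat=2)]

def d(u, v):
    if u == v:
        return 0.0
    return 1.0 if u[0] != v[0] else 0.5

def open_ball(c, r):
    return frozenset(x for x in X if d(x, c) < r)

# d satisfies the strong triangle inequality on all triples.
assert all(d(a, b) <= max(d(a, c), d(c, b))
           for a in X for b in X for c in X)

for r in (0.25, 0.75, 1.5):
    balls = {open_ball(c, r) for c in X}
    # Lemma: balls of the same radius intersect iff they coincide.
    for B1 in balls:
        for B2 in balls:
            assert bool(B1 & B2) == (B1 == B2)
    print(r, len(balls))  # finitely many distinct balls per radius
```

Raising the radius merges balls, so the number of distinct balls of any fixed radius stays finite, exactly as condition~\ref{p2.7:s1} requires.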
The following constructive description of the distance sets of totally bounded ultrametric spaces can be found in \cite{DS2021a}.
\begin{proposition}\label{p2.11}
The following statements are equivalent for every \(A \subseteq \mathbb{R}^{+}\):
\begin{enumerate}
\item \label{p2.11:s1} There is an infinite totally bounded ultrametric space \((X, d)\) such that \(A\) is the distance set of \((X, d)\).
\item \label{p2.11:s2} There is a strictly decreasing sequence \((x_n)_{n \in \mathbb{N}} \subseteq \mathbb{R}^{+}\) such that
\[
\lim_{n \to \infty} x_n = 0
\]
holds and the equivalence
\[
(x \in A) \Leftrightarrow (x = 0 \text{ or } \exists n \in \mathbb{N} \colon x_n = x)
\]
is valid for every \(x \in \mathbb{R}^{+}\).
\end{enumerate}
\end{proposition}
In the next section of the paper we will also use the concept of weakly similar ultrametric spaces.
\begin{definition}\label{d2.34}
Let \((X, d)\) and \((Y, \rho)\) be nonempty semimetric spaces. A mapping \(\Phi \colon X \to Y\) is a \emph{weak similarity} of \((X, d)\) and \((Y, \rho)\) if \(\Phi\) is bijective and there is a strictly increasing bijection \(\psi \colon D(Y) \to D(X)\) such that the equality
\begin{equation}\label{d2.34:e1}
d(x, y) = \psi\left(\rho\bigl(\Phi(x), \Phi(y)\bigr)\right)
\end{equation}
holds for all \(x\), \(y \in X\).
\end{definition}
If \(\Phi \colon X \to Y\) is a weak similarity and \eqref{d2.34:e1} holds, then we say that \((X, d)\) and \((Y, \rho)\) are \emph{weakly similar}, and \(\psi\) is the \emph{scaling function} of \(\Phi\).
Some questions connected with the weak similarities and their generalizations were studied in \cite{DovBBMSSS2020, DLAMH2020, Dov2019IEJA, DP2013AMH, BDSa2020}. The weak similarities of finite ultrametric and semimetric spaces were also considered in \cite{Pet2018pNUAA, GFMV2020a}.
The following lemma is a reformulation of Proposition~1.5 from~\cite{DP2013AMH} (see also Proposition~2.2 in~\cite{BDHM2007TA}).
\begin{lemma}\label{l2.6}
Let \((X, d)\) and \((Y, \rho)\) be nonempty weakly similar semimetric spaces. Then \(d\) is an ultrametric on \(X\) if and only if \(\rho\) is an ultrametric on \(Y\).
\end{lemma}
\section{When diametrical graphs are complete multipartite}
\label{sec3}
Let us start with a refinement of Theorems~3.1 and 3.2 from \cite{DDP2011pNUAA}.
\begin{theorem}\label{t2.24}
Let \((X, d)\) be an ultrametric space with \(|X| \geqslant 2\). Then the following statements are equivalent:
\begin{enumerate}
\item\label{t2.24:s1} The diametrical graph \(G_{X,d}\) of \((X, d)\) is nonempty.
\item\label{t2.24:s2} The diametrical graph \(G_{X,d}\) is complete multipartite.
\end{enumerate}
Furthermore, if \(G_{X, d}\) is complete multipartite, then every part of \(G_{X, d}\) is an open ball with a center in \(X\) and the radius \(r = \diam X\) and, conversely, every open ball \(B_r(c)\) with \(r = \diam X\) and \(c \in X\) is a part of \(G_{X, d}\).
\end{theorem}
\begin{proof}
The validity of \(\ref{t2.24:s1} \Leftrightarrow \ref{t2.24:s2}\) follows from Theorems~3.1 and 3.2 of paper~\cite{DDP2011pNUAA}.
Let \(G_{X, d}\) be a complete multipartite graph, let \(X_{1}\) be a part of \(G_{X, d}\) and let \(x_{1}\) be a point of \(X_{1}\). We claim that the equality
\begin{equation}\label{t2.24:e1}
X_{1} = B_r(x_{1})
\end{equation}
holds with \(r = \diam X\). Using Example~\ref{ex1.5}, we see that the double inequality \(0 < \diam X < \infty\) holds. Hence, the open ball \(B_r(x_1)\) is correctly defined.
Let \(x_{2}\) be a point of the set \(X \setminus X_1\). Since \(G_{X, d}\) is complete multipartite and \(x_{2} \notin X_{1}\), the membership
\begin{equation}\label{t2.24:e3}
\{x_1, x_2\} \in E(G_{X, d})
\end{equation}
is valid. From \eqref{t2.24:e3} it follows that
\[
d(x_1, x_2) = \diam X = r.
\]
Hence, \(x_2 \in X \setminus B_r(x_1)\). Thus, the inclusion
\begin{equation}\label{t2.24:e2}
X \setminus X_1 \subseteq X \setminus B_r(x_1)
\end{equation}
holds.
Similarly, we can prove the inclusion \(X \setminus B_r(x_1) \subseteq X \setminus X_1\). The last inclusion and \eqref{t2.24:e2} imply equality~\eqref{t2.24:e1}.
Let us consider now an open ball \(B_r(c)\) with \(r = \diam X\) and arbitrary \(c \in X\). Then there is a part \(X_2\) of \(G_{X, d}\) such that \(c \in X_2\). Arguing as in the proof of equality \eqref{t2.24:e1}, we obtain the equality \(X_2 = B_r(c)\).
\end{proof}
Theorem~\ref{t2.24} remains valid for all metric spaces \((X, d)\) satisfying the condition: ``If \(t \in D(X)\) and \(t \neq \diam X\), then the inequality
\begin{equation}\label{e3.4}
2t < \diam X
\end{equation}
holds.'' As Example~\ref{ex1.4} shows, the last condition is sharp in the sense that inequality~\eqref{e3.4} cannot be replaced by inequality \(2t \leqslant \diam X\).
\begin{example}\label{ex3.2}
Let us consider a ``metric'' space \((X, d)\) for which the distance between some points can be infinite, i.e., \(d \colon X \times X \to \mathbb{R}^{+} \cup \{\infty\}\), satisfies the triangle inequality and conditions \ref{d1.1:s1}--\ref{d1.1:s2} from Definition~\ref{d1.1} (see, for example, \cite{BBI2001}). If \((X, d)\) is unbounded, then statements \ref{t2.24:s1}--\ref{t2.24:s2} from Theorem~\ref{t2.24} are equivalent and the set of parts of \(G_{X, d}\) coincides with the set of unbounded open balls
\[
B_{\infty}(c) = \{x \in X \colon d(x, c) < \infty\}, \quad c \in X,
\]
whenever \(G_{X, d}\) is complete multipartite.
\end{example}
The following theorem completely describes the structure of graphs \(H\) for which every metric space \((X, d)\) with \(G_{X, d} = H\) is ultrametric (cf. Remark~\ref{r1.6}).
\begin{theorem}\label{t3.3}
Let \(\Gamma = (V, E)\) be a nonempty graph, \(\overline{\Gamma}\) be the complement of \(\Gamma\) and let \(X\) be the set of vertices of \(\Gamma\), \(X = V(\Gamma)\). Then the following conditions are equivalent:
\begin{enumerate}
\item \label{t3.3:s1} The inequality \(|V(H)| \leqslant 2\) holds for every connected subgraph \(H\) of \(\overline{\Gamma}\).
\item \label{t3.3:s2} For every metric space \((X, d)\) the equality \(G_{X, d} = \Gamma\) implies the ultrametricity of \((X, d)\).
\end{enumerate}
\end{theorem}
\begin{proof}
\(\ref{t3.3:s1} \Rightarrow \ref{t3.3:s2}\). Let \(\Gamma\) satisfy condition~\ref{t3.3:s1} and let \((X, d)\) be a metric space such that
\begin{equation}\label{t3.3:e1}
G_{X, d} = \Gamma.
\end{equation}
If \((X, d)\) is not ultrametric, then there are points \(x\), \(y\), \(z \in X\) satisfying the inequality
\begin{equation}\label{t3.3:e2}
d(x, y) > \max \{d(x, z), d(z, y)\}.
\end{equation}
The inequality \(\diam X \geqslant d(x, y)\), \eqref{t3.3:e1} and \eqref{t3.3:e2} imply
\begin{equation}\label{t3.3:e3}
\{x, z\}, \{z, y\} \in E(\overline{\Gamma}).
\end{equation}
Moreover, from \eqref{t3.3:e2} it follows that the points \(x\), \(y\) and \(z\) are pairwise distinct. Hence, \eqref{t3.3:e3} implies that the graph \(H\) with
\[
V(H) = \{x, y, z\} \quad \text{and} \quad E(H) = \bigl\{\{x, z\}, \{z, y\}\bigr\}
\]
is a connected subgraph of \(\overline{\Gamma}\) for which \(|V(H)| > 2\) holds, contrary to \ref{t3.3:s1}.
\(\ref{t3.3:s2} \Rightarrow \ref{t3.3:s1}\). Let condition~\ref{t3.3:s2} hold. Suppose that there is a connected subgraph \(H\) of the graph \(\overline{\Gamma}\) such that \(|V(H)| \geqslant 3\). Let \(x\), \(y\) and \(z\) be distinct vertices of \(H\). Without loss of generality, we assume
\[
\bigl\{\{x, z\}, \{z, y\}\bigr\} \subseteq E(H).
\]
The cases \(\{x, y\} \in E(\overline{\Gamma})\) and \(\{x, y\} \in E(\Gamma)\) are possible. Suppose \(\{x, y\} \in E(\Gamma)\) holds. Let \(a\) and \(b\) be two distinct points of the interval \((1, 2)\). Then we define a metric \(d\) on \(X = V(\Gamma)\) as
\begin{equation}\label{t3.3:e4}
d(u, v) = \begin{cases}
0 & \text{if } u = v,\\
2 & \text{if } \{u, v\} \in E(\Gamma),\\
a & \text{if } \{u, v\} = \{x, z\},\\
b & \text{if } \{u, v\} = \{z, y\},\\
\frac{a+b}{2} & \text{otherwise}.
\end{cases}
\end{equation}
From \(a\), \(b \in (1, 2)\), \eqref{t3.3:e4} and \(E(\Gamma) \neq \varnothing\) it follows that \((X, d)\) is a metric space with diameter equal to \(2\) and diametrical graph equal to \(\Gamma\). In addition, \(a\), \(b \in (1, 2)\) and \eqref{t3.3:e4} imply
\[
2 = d(x, y) > \max\{d(x, z), d(z, y)\} = \max\{a, b\}.
\]
Similarly, suppose that \(\{x, y\} \in E(\overline{\Gamma})\) holds and \(d \colon X \times X \to \mathbb{R}^{+}\) is defined by~\eqref{t3.3:e4}. Then we have \(G_{X, d} = \Gamma\) as above and, moreover,
\[
d(x, y) = \frac{a+b}{2}, \quad d(x, z) = a, \quad d(z, y) = b,
\]
where the numbers \(a\), \(b\), \(\frac{a+b}{2}\) are pairwise different. Thus, the triangle \(\{x, y, z\}\) is not isosceles in both possible cases. Hence, \((X, d)\) is not ultrametric and satisfies \(G_{X, d} = \Gamma\), contrary to \ref{t3.3:s2}.
\end{proof}
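The two-parameter construction \eqref{t3.3:e4} can be tested on a concrete graph. In the sketch below (Python; the four-vertex graph \(\Gamma\) and the values \(a = 1.2\), \(b = 1.6\) are illustrative choices), the complement of \(\Gamma\) contains the path \(x\)--\(z\)--\(y\) with \(x = 0\), \(z = 2\), \(y = 1\), and the resulting \(d\) is a metric with \(G_{X, d} = \Gamma\) that violates the strong triangle inequality:

```python
from itertools import combinations

# Sample graph Gamma whose complement contains the connected path
# 0--2--1; vertex 3 keeps E(Gamma) nonempty.  The graph and the
# parameters a, b in (1, 2) are illustrative choices.
V = [0, 1, 2, 3]
E_Gamma = {frozenset(e) for e in [(0, 1), (0, 3), (1, 3), (2, 3)]}
a, b = 1.2, 1.6

def d(u, v):
    # The metric of the proof: 2 on edges of Gamma, a and b on the
    # two path edges of the complement, (a+b)/2 otherwise.
    if u == v:
        return 0.0
    e = frozenset((u, v))
    if e in E_Gamma:
        return 2.0
    if e == frozenset((0, 2)):
        return a
    if e == frozenset((2, 1)):
        return b
    return (a + b) / 2

# d is a metric (triangle inequality on all triples) ...
assert all(d(u, w) <= d(u, v) + d(v, w)
           for u in V for v in V for w in V)
# ... with diameter 2 and diametrical graph exactly Gamma ...
diam = max(d(u, v) for u, v in combinations(V, 2))
assert diam == 2.0
assert {e for e in map(frozenset, combinations(V, 2))
        if d(*e) == diam} == E_Gamma
# ... but d is not ultrametric: the triangle {0, 1, 2} is scalene.
assert d(0, 1) > max(d(0, 2), d(2, 1))
print("G_{X,d} = Gamma, metric but not ultrametric")
```

Since all nonzero distances lie in \([1.2, 2]\), the triangle inequality holds automatically (the sum of any two sides is at least \(2.4\)), which is the same mechanism the proof relies on.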
Example~\ref{ex1.4} and Theorems~\ref{t2.24}, \ref{t3.3} imply the following.
\begin{corollary}\label{c3.4}
Let \(\Gamma\) be a graph with \(|V(\Gamma)| \geqslant 2\) and let \(|V(H)| \leqslant 2\) hold for every connected subgraph \(H\) of the complement \(\overline{\Gamma}\) of \(\Gamma\). Then \(\Gamma\) is complete multipartite.
\end{corollary}
\begin{proposition}\label{p2.36}
Let \((X, d)\) be a totally bounded ultrametric space with \(|X| \geqslant 2\). Then there is an integer \(k \geqslant 2\) such that the diametrical graph \(G_{X, d}\) is complete \(k\)-partite.
\end{proposition}
\begin{proof}
Since \(|X| \geqslant 2\) holds and every totally bounded metric space is bounded, we have \(0 < \diam X < \infty\). It follows from Proposition~\ref{p2.11} that the equality \(\diam X = d(x_1, x_2)\) holds for some \(x_1\), \(x_2 \in X\). Hence, by Theorem~\ref{t2.24}, \(G_{X, d}\) is complete multipartite. Consequently, there is a cardinal number \(k\) such that \(G_{X, d}\) is complete \(k\)-partite.
Let \(\{X_i \colon i \in I\}\) be the family of all parts of the diametrical graph \(G_{X, d}\) and let \(r_1 := \diam X\). Then, by Theorem~\ref{t2.24}, we have
\begin{equation}\label{p2.36:e1}
\{X_i \colon i \in I\} \subseteq \mathbf{B}_{X}^{r_1},
\end{equation}
where \(\mathbf{B}_{X}^{r_1}\) is the set of all open balls (in \((X, d)\)) with the radius \(r_1\) and \(\operatorname{card} I = k\). By Proposition~\ref{p2.7}, the set \(\mathbf{B}_{X}^{r_1}\) is finite. Hence, \(k\) is finite by inclusion~\eqref{p2.36:e1}.
\end{proof}
Using Proposition~\ref{p2.36} and Theorem~\ref{t3.3} we obtain the following.
\begin{corollary}\label{c3.6}
Let \((X, d)\) be a totally bounded ultrametric space. If every metric space \((X, \rho)\) satisfying the equality \(G_{X, d} = G_{X, \rho}\) is ultrametric, then \((X, d)\) is finite.
\end{corollary}
To formulate the next corollary, we recall some concepts from General Topology.
\begin{definition}\label{d3.3}
Let \(\tau\) and \(d\) be a topology and, respectively, an ultrametric on a set \(X\). Then \(\tau\) and \(d\) are said to be compatible if \(\mathbf{B}_{X, d}\) is an open base for the topology \(\tau\).
\end{definition}
Definition~\ref{d3.3} means that \(\tau\) and \(d\) are compatible if and only if every \(B \in \mathbf{B}_{X, d}\) belongs to \(\tau\) and every \(A \in \tau\) can be written as the union of a family of elements of \(\mathbf{B}_{X, d}\). If \(X\) admits an ultrametric compatible with \(\tau\), then we say that the topological space \((X, \tau)\) is \emph{ultrametrizable}.
\begin{lemma}\label{t3.4}
Let \((X, \tau)\) be an ultrametrizable nonempty topological space. Then the following conditions are equivalent:
\begin{enumerate}
\item\label{t3.4:s1} The space \((X, \tau)\) is compact.
\item\label{t3.4:s2} The distance set \(D(X, d)\) has the largest element whenever \(d\) is an ultrametric compatible with \(\tau\).
\end{enumerate}
\end{lemma}
This lemma follows directly from Theorem~4.7 of \cite{DS2021a}.
\begin{corollary}\label{c3.5}
Let \((X, \tau)\) be an ultrametrizable topological space with \(\operatorname{card} X \geqslant 2\). Then the following conditions are equivalent:
\begin{enumerate}
\item \label{c3.5:s1} The diametrical graph \(G_{X, d}\) is complete \(k\)-partite with some integer \(k = k(d)\) whenever \(d\) is an ultrametric compatible with \(\tau\).
\item \label{c3.5:s2} The diametrical graph \(G_{X, d}\) is complete multipartite whenever \(d\) is an ultrametric compatible with \(\tau\).
\item \label{c3.5:s3} The topological space \((X, \tau)\) is compact.
\end{enumerate}
\end{corollary}
\begin{proof}
\(\ref{c3.5:s1} \Rightarrow \ref{c3.5:s2}\). This implication is evidently valid.
\(\ref{c3.5:s2} \Rightarrow \ref{c3.5:s3}\). Suppose that \ref{c3.5:s2} holds. Let \(d \colon X \times X \to \mathbb{R}^{+}\) be an ultrametric compatible with \(\tau\). Then, by Theorem~\ref{t2.24}, there are points \(x_1\), \(x_2 \in X\) such that \(d(x_1, x_2) = \diam X\). Hence, the distance set \(D(X, d)\) contains the largest element. This implies the compactness of \((X, \tau)\) by Lemma~\ref{t3.4}.
\(\ref{c3.5:s3} \Rightarrow \ref{c3.5:s1}\). Since every compact ultrametric space is totally bounded, the validity of \(\ref{c3.5:s3} \Rightarrow \ref{c3.5:s1}\) follows from Proposition~\ref{p2.36}.
\end{proof}
\begin{remark}\label{r3.6}
Necessary and sufficient conditions under which topological spaces are ultrametrizable were found by De Groot \cite{GroPAMS1956, GroCM1958}. See also \cite{CLTaiA2020, BMTA2015, KSBLMS2012, BriTP2015, CSJPAA2019, DS2021a} for further results connected with ultrametrizable topologies.
\end{remark}
\begin{example}\label{ex5.15}
Let \(\overline{B}_1(0) = \{x \in \mathbb{Q}_p \colon d_p(x, 0) \leqslant 1\}\) be the unit closed ball in the ultrametric space \((\mathbb{Q}_p, d_p)\) of \(p\)-adic numbers. Then \(\overline{B}_1(0)\) is a compact infinite subset of \((\mathbb{Q}_p, d_p)\) (Theorem~5.1, \cite{Sch1985}). Hence, by Proposition~\ref{p2.36}, the diametrical graph \(G_{\overline{B}_1(0), d_p|_{\overline{B}_1(0) \times \overline{B}_1(0)}}\) is complete \(k\)-partite with some integer \(k \geqslant 2\). Since the ball \(\overline{B}_1(0)\) can be written as a disjoint union of open balls,
\begin{equation}\label{ex5.15:e1}
\overline{B}_1(0) = B_1(0) \cup B_1(1) \cup \ldots \cup B_1(p-1)
\end{equation}
(see, for example, Problem~50 in \cite{Gou1993}), the diametrical graph of \(\overline{B}_1(0)\) is complete \(p\)-partite with the parts \(B_1(i) \in \mathbf{B}_{\mathbb{Q}_p}\), \(i=0\), \(1\), \(\ldots\), \(p-1\), by Theorem~\ref{t2.24}.
\end{example}
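The example admits a finite sanity check: truncating \(\mathbb{Q}_3\) to the residues modulo \(3^k\) (a finite approximation chosen here purely for illustration), the diametrical graph of the resulting finite ultrametric space is complete \(3\)-partite with parts given by the residue classes modulo \(3\), mirroring \eqref{ex5.15:e1}:

```python
# Finite sketch (illustrative): approximate the closed unit ball of
# Q_3 by the residues {0, ..., 3^k - 1} with the truncated 3-adic
# metric d(x, y) = 3^(-v), where v is the 3-adic valuation of x - y.
p, k = 3, 3
X = range(p**k)

def val(n):
    v = 0
    while n % p == 0 and v < k:
        n //= p
        v += 1
    return v

def d(x, y):
    return 0.0 if x == y else float(p) ** (-val(x - y))

diam = max(d(x, y) for x in X for y in X)
assert diam == 1.0

# Edges of the diametrical graph: pairs at distance diam(X) = 1,
# i.e. pairs of points that differ modulo p.
def adjacent(x, y):
    return x != y and d(x, y) == diam

parts = {frozenset(y for y in X if not adjacent(x, y)) for x in X}

# The parts are exactly the open balls B_1(i) = i + pZ, i.e. the
# residue classes modulo p, so the graph is complete p-partite.
assert parts == {frozenset(y for y in X if y % p == i) for i in range(p)}
assert len(parts) == p
print(f"complete {p}-partite with parts of size {p**(k-1)}")
```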
Definition~\ref{d1.3} of the diametrical graph can be generalized in the following way.
Let \((X, d)\) be a metric space with \(|X| \geqslant 2\) and let \(r \in (0, \infty]\). Denote by \(G_{X, d}^{r}\) a graph such that \(V(G_{X, d}^{r}) = X\) and, for \(u\), \(v \in V(G_{X, d}^{r})\),
\begin{equation}\label{e5.21}
\bigl(\{u, v\} \in E(G_{X, d}^{r})\bigr) \Leftrightarrow \bigl(d(u, v) \geqslant r\bigr).
\end{equation}
\begin{remark}\label{r5.18}
It is clear that \eqref{d1.3:e1} and \eqref{e5.21} are equivalent if \(r = \diam X\). Consequently, we have the equality \(G_{X, d}^{r} = G_{X, d}\) for \(r = \diam X\). In particular, the equality \(G_{X, d}^{\infty} = G_{X, d}\) holds if \((X, d)\) is unbounded.
\end{remark}
Now we can give a new characterization of ultrametric spaces.
\begin{theorem}\label{t5.18}
Let \((X, d)\) be a metric space with \(|X| \geqslant 2\). Then the following statements are equivalent:
\begin{enumerate}
\item \label{t5.18:s1} The metric space \((X, d)\) is ultrametric.
\item \label{t5.18:s2} \(G_{X, d}^{r}\) is either empty or complete multipartite for every \(r \in (0, \diam X]\).
\end{enumerate}
\end{theorem}
\begin{proof}
\(\ref{t5.18:s1} \Rightarrow \ref{t5.18:s2}\). Let \((X, d)\) be ultrametric, let \(r \in (0, \diam X]\) and let a function \(\psi_r \colon \mathbb{R}^{+} \to \mathbb{R}^{+}\) be defined as
\begin{equation}\label{t5.18:e1}
\psi_r(t) = \min\{r, t\}, \quad t \in \mathbb{R}^{+}.
\end{equation}
It is easy to prove that the mapping \(\rho_r = \psi_r \circ d\) is an ultrametric on \(X\). From \eqref{t5.18:e1} and \(r \in (0, \diam X] = (0, \diam(X, d)]\) it follows that \(\diam (X, \rho_r) = r\). The last equality and \eqref{e5.21} imply
\begin{equation}\label{t5.18:e4}
G_{X, \rho_r} = G_{X, d}^{r}.
\end{equation}
By Theorem~\ref{t2.24}, the diametrical graph \(G_{X, \rho_r}\) is either empty or complete multipartite. The validity of \(\ref{t5.18:s1} \Rightarrow \ref{t5.18:s2}\) follows.
\(\ref{t5.18:s2} \Rightarrow \ref{t5.18:s1}\). Let \ref{t5.18:s2} hold. Suppose that there are \(x_1\), \(x_2\), \(x_3 \in X\) satisfying
\begin{equation}\label{t5.18:e2}
d(x_1, x_2) > \max\{d(x_1, x_3), d(x_3, x_2)\}.
\end{equation}
Let us consider \(G_{X, d}^{r}\) with \(r = d(x_1, x_2)\). It is clear that \(G_{X, d}^{r}\) is a nonempty graph. Inequality \eqref{t5.18:e2} implies that the points \(x_1\), \(x_2\), \(x_3\) are pairwise distinct. In accordance with \ref{t5.18:s2}, \(G_{X, d}^{r}\) is complete multipartite. Let \(X_i\) be a part of \(G_{X, d}^{r}\) such that \(x_i \in X_i\) holds, \(i = 1\), \(2\), \(3\). By \eqref{e5.21}, we have \(\{x_1, x_2\} \in E(G_{X, d}^{r})\). Hence, \(X_1\) and \(X_2\) are distinct, \(X_1 \neq X_2\). If \(X_1 = X_3\) holds, then from \eqref{e5.21} it follows that
\begin{equation}\label{t5.18:e3}
d(x_2, x_3) \geqslant r = d(x_1, x_2),
\end{equation}
contrary to \eqref{t5.18:e2}. Thus, we have \(X_1 \neq X_3\). Similarly, we obtain \(X_2 \neq X_3\). Hence, \(X_1\), \(X_2\), \(X_3\) are distinct parts of \(G_{X, d}^{r}\). The last statement also implies \eqref{t5.18:e3}, which contradicts \eqref{t5.18:e2}. Thus the strong triangle inequality holds for all \(x_1\), \(x_2\), \(x_3 \in X\). The validity of \(\ref{t5.18:s2} \Rightarrow \ref{t5.18:s1}\) follows.
\end{proof}
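The implication \(\ref{t5.18:s1} \Rightarrow \ref{t5.18:s2}\) can also be observed computationally: for an ultrametric, non-adjacency in \(G_{X, d}^{r}\) (i.e., \(d(u, v) < r\)) is an equivalence relation by the strong triangle inequality, and its classes are the parts. The sketch below uses a prefix ultrametric on binary strings (an illustrative example, not from the paper):

```python
from itertools import product

# Illustrative ultrametric: d(u, v) = 2^(-length of common prefix)
# on binary strings of length 3.
X = ["".join(s) for s in product("01", repeat=3)]

def lcp(u, v):
    n = 0
    while n < len(u) and u[n] == v[n]:
        n += 1
    return n

def d(u, v):
    return 0.0 if u == v else 2.0 ** (-lcp(u, v))

for r in (0.2, 0.3, 0.6, 1.0):  # sample radii in (0, diam X]
    # Non-adjacency in G^r means d(u, v) < r; the strong triangle
    # inequality makes this an equivalence relation, whose classes
    # are the parts of a complete multipartite graph.
    classes = {frozenset(v for v in X if d(u, v) < r) for u in X}
    # The classes partition X ...
    assert sum(len(c) for c in classes) == len(X)
    # ... and every pair taken from different classes is an edge.
    assert all(d(u, v) >= r
               for c1 in classes for c2 in classes if c1 != c2
               for u in c1 for v in c2)
    print(r, len(classes))
```

For a metric that is not ultrametric, transitivity of non-adjacency fails for some \(r\), which is exactly how the reverse implication is proved above.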
For the case of totally bounded ultrametric spaces we have the following refinement of Theorem~\ref{t5.18}.
\begin{theorem}\label{t5.9}
Let \((X, d)\) be a metric space with \(|X| \geqslant 2\). Then \((X, d)\) is totally bounded and ultrametric if and only if \(G_{X, d}^{r}\) is complete \(k\)-partite with an integer \(k = k(r)\) for every \(r \in (0, \diam X]\).
\end{theorem}
\begin{proof}
Suppose that \((X, d)\) is totally bounded and ultrametric. Let \(r \in (0, \diam X]\) and let \(\psi_r \colon \mathbb{R}^{+} \to \mathbb{R}^{+}\) be defined by \eqref{t5.18:e1}
\begin{equation*}
\psi_r(t) = \min\{r, t\}, \quad t \in \mathbb{R}^{+}.
\end{equation*}
Then
\begin{equation}\label{t5.9:e5}
\rho_r = \psi_r \circ d
\end{equation}
is an ultrametric on \(X\). Moreover, \eqref{t5.9:e5} implies that, for every \(c \in X\), we have
\[
\{x \in X \colon d(x, c) < r_0\} = \{x \in X \colon \rho_r(x, c) < r_0\}
\]
whenever \(0 < r_0 \leqslant r\) holds. Thus, the ultrametric spaces \((X, d)\) and \((X, \rho_r)\) have the same open balls of radius at most \(r\). Now, using Definition~\ref{d2.3}, we see that \((X, \rho_r)\) is a totally bounded ultrametric space. By Proposition~\ref{p2.36}, the diametrical graph \(G_{X, \rho_r}\) is complete \(k\)-partite for an integer \(k = k(r)\). As in the proof of Theorem~\ref{t5.18}, we obtain the equality
\begin{equation}\label{t5.9:e9}
G_{X, \rho_r} = G_{X, d}^{r}.
\end{equation}
Hence, \(G_{X, d}^{r}\) is also complete \(k\)-partite with the same \(k\).
Suppose now that, for every \(r \in (0, \diam(X, d)]\), \(G_{X, d}^{r}\) is complete \(k\)-partite with an integer \(k = k(r)\). Using Theorem~\ref{t5.18} we obtain that \((X, d)\) is ultrametric.
Let \(r \in (0, \diam(X, d)]\) be given. Then the space \((X, \rho_r)\) is also ultrametric. Now equality~\eqref{t5.9:e9} and the second part of Theorem~\ref{t2.24} imply that there are points \(x_1\), \(\ldots\), \(x_{k(r)} \in X\) such that
\begin{equation}\label{t5.9:e6}
X \subseteq \bigcup_{i=1}^{k(r)} B_{r^{*}}^{\rho}(x_i)
\end{equation}
where
\begin{equation}\label{t5.9:e7}
r^{*} = \diam (X, \rho_r) \quad \text{and} \quad B_{r^{*}}^{\rho}(x_i) = \{x \in X \colon \rho_r(x, x_i) < r^{*}\}.
\end{equation}
From \eqref{t5.9:e5} and the first equality in \eqref{t5.9:e7} it follows that
\begin{equation}\label{t5.9:e8}
B_{r^{*}}^{\rho}(x_i) \subseteq B_{r}(x_i) = \{x \in X \colon d(x, x_i) < r\}
\end{equation}
for every \(i \in \{1, \ldots, k(r)\}\). Since \(r\) is an arbitrary point of \((0, \diam (X, d)]\), Definition~\ref{d2.3} and formulas \eqref{t5.9:e6}, \eqref{t5.9:e8} imply the total boundedness of \((X, d)\).
\end{proof}
The following result is similar to Theorem~2.2 from~\cite{BDSa2020}, whose proof is based on properties of ultrametric preserving functions. It is interesting to note that the concept of semimetric spaces allows us to avoid using ultrametric preserving functions in the proof below.
\begin{theorem}\label{t2.25}
Let \((X, d)\) be an unbounded ultrametric space, let \(d^{*} \in (0, \infty)\) and \(\rho \colon X \times X \to \mathbb{R}^{+}\) be defined as
\begin{equation}\label{t2.25:e1}
\rho(x, y) = \frac{d^{*} \cdot d(x, y)}{1 + d(x, y)}.
\end{equation}
Then \((X, \rho)\) is a bounded ultrametric space with empty diametrical graph \(G_{X, \rho}\).
Conversely, let \((X, \rho)\) be a bounded ultrametric space with \(|X| \geqslant 2\) and empty \(G_{X, \rho}\). Write \(d^{*} = \diam (X, \rho)\). Then there is an unbounded ultrametric space \((X, d)\) such that~\eqref{t2.25:e1} holds for all \(x\), \(y \in X\).
\end{theorem}
\begin{proof}
It is clear that the mapping \(\rho \colon X \times X \to \mathbb{R}^{+}\), defined by~\eqref{t2.25:e1}, is a semimetric. Let us define a function \(f \colon \mathbb{R}^{+} \to \mathbb{R}^{+}\) as
\begin{equation}\label{t2.25:e2}
f(t) = \frac{d^{*} t}{1+t}
\end{equation}
for all \(t \in \mathbb{R}^{+}\). Since \(f\) is strictly increasing and satisfies the equality \(f(0) = 0\), the identity mapping \(\operatorname{Id} \colon X \to X\) is a weak similarity of \((X, d)\) and \((X, \rho)\). By Lemma~\ref{l2.6}, the semimetric \(\rho \colon X \times X \to \mathbb{R}^{+}\) is an ultrametric. Now from
\begin{equation*}
\lim_{t \to \infty} f(t) = d^{*},
\end{equation*}
we obtain
\[
\rho(x, y) < \lim_{t \to \infty} f(t) = d^{*} = \diam (X, \rho)
\]
for all \(x\), \(y \in X\). Thus, the diametrical graph \(G_{X, \rho}\) is empty.
Conversely, let \((X, \rho)\) be a bounded ultrametric space with \(|X| \geqslant 2\) and empty diametrical graph \(G_{X, \rho}\). Write \(d^{*} = \diam (X, \rho)\). The inequality \(|X| \geqslant 2\) and the boundedness of \((X, \rho)\) imply \(d^{*} \in (0, \infty)\). The function \(g \colon [0, d^{*}) \to \mathbb{R}^{+}\),
\begin{equation}\label{t2.25:e7}
g(s) = \frac{s}{d^{*} - s},
\end{equation}
is strictly increasing and satisfies the equalities
\begin{equation}\label{t2.25:e5}
g(0) = 0 \quad \text{and} \quad \lim_{\substack{s \to d^{*} \\ s \in [0, d^{*})}} g(s) = +\infty.
\end{equation}
Since \(d^{*}\) equals \(\diam (X, \rho)\), there are sequences \((x_n)_{n \in \mathbb{N}} \subseteq X\) and \((y_n)_{n \in \mathbb{N}} \subseteq X\) such that
\begin{equation}\label{t2.25:e8}
\lim_{n \to \infty} \rho(x_n, y_n) = d^{*}.
\end{equation}
In addition, by Theorem~\ref{t2.24}, we have \(\rho(x, y) < d^{*}\) for all \(x\), \(y \in X\). Consequently, the inclusion \(D(X, \rho) \subseteq [0, d^{*})\) holds. Now Lemma~\ref{l2.6} implies that the mapping \(d \colon X \times X \to \mathbb{R}^{+}\) satisfying the equality
\[
d(x, y) = g(\rho(x, y))
\]
for all \(x\), \(y \in X\) is an ultrametric on~\(X\). From the second equality in \eqref{t2.25:e5} and equality~\eqref{t2.25:e8} it follows that \((X, d)\) is unbounded. A direct calculation shows the equalities
\begin{equation}\label{t2.25:e6}
f(g(s)) = s \quad \text{and} \quad g(f(t)) = t
\end{equation}
hold for all \(s \in [0, d^{*})\) and \(t \in [0, +\infty)\), where \(f\) is defined by~\eqref{t2.25:e2}. Now equality \eqref{t2.25:e1} follows from \eqref{t2.25:e6}.
\end{proof}
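The functions \(f\) and \(g\) from the proof can be checked numerically. The sketch below verifies \eqref{t2.25:e6} on sample points and confirms, for a sample unbounded ultrametric on \(\mathbb{N}\) (the choice \(d(m, n) = \max\{m, n\}\) for \(m \neq n\) is ours, for illustration), that \(\rho = f \circ d\) is a bounded ultrametric whose values stay strictly below \(d^{*}\):

```python
# Numerical check of the pair f, g from the proof: f(t) = d* t/(1+t)
# turns an unbounded ultrametric into a bounded one with empty
# diametrical graph, and g(s) = s/(d* - s) is its inverse.
# d* = 1 is an arbitrary sample value.
d_star = 1.0
f = lambda t: d_star * t / (1 + t)
g = lambda s: s / (d_star - s)

# f and g are mutually inverse on their stated domains.
for t in (0.0, 0.5, 3.0, 100.0):
    assert abs(g(f(t)) - t) < 1e-9
for s in (0.0, 0.25, 0.9):
    assert abs(f(g(s)) - s) < 1e-9

# Sample unbounded ultrametric on N (illustrative): d(m, n) =
# max(m, n) for m != n.  f is increasing with f(0) = 0, so
# rho = f o d keeps the strong triangle inequality.
d = lambda m, n: 0.0 if m == n else float(max(m, n))
rho = lambda m, n: f(d(m, n))
N = range(12)
assert all(rho(a, b) <= max(rho(a, c), rho(c, b)) + 1e-12
           for a in N for b in N for c in N)
# rho never attains its supremum d* = diam(X, rho): G_{X,rho} empty.
assert max(rho(a, b) for a in N for b in N) < d_star
print("f, g mutually inverse; rho ultrametric, bounded below d*")
```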
\begin{remark}\label{r2.33}
The condition \(|X| \geqslant 2\) cannot be dropped in the second part of Theorem~\ref{t2.25}. Indeed, if \(|X| = 1\), then, for every metric \(\rho\), the metric space \((X, \rho)\) is bounded and ultrametric with empty diametrical graph \(G_{X, \rho}\) and there are no ultrametrics \(d \colon X \times X \to \mathbb{R}^{+}\) for which \(\diam (X, d) = +\infty\) holds.
\end{remark}
\begin{lemma}\label{c5.36}
Let \((X, d)\) and \((Y, \rho)\) be nonempty weakly similar ultrametric spaces. Then the diametrical graph \(G_{X, d}\) is empty if and only if the diametrical graph \(G_{Y, \rho}\) is empty.
\end{lemma}
\begin{proof}
Let \(\Phi \colon X \to Y\) be a weak similarity of \((X, d)\) and \((Y, \rho)\) with the scaling function \(f \colon D(Y) \to D(X)\). Since \(f\) is bijective and strictly increasing, the set \(D(X)\) has the largest element iff \(D(Y)\) contains the largest element.
To complete the proof it suffices to recall that the largest element of the distance set of a metric space, if such an element exists, coincides with the diameter of the space.
\end{proof}
Using the concept of weak similarity we can give a more compact variant of Theorem~\ref{t2.25}.
\begin{theorem}\label{t2.35}
Let \((X, d)\) be an ultrametric space with \(|X| \geqslant 2\). Then the following statements are equivalent:
\begin{enumerate}
\item\label{t2.35:s1} \((X, d)\) is weakly similar to an unbounded ultrametric space.
\item\label{t2.35:s2} The diametrical graph \(G_{X, d}\) is empty.
\end{enumerate}
\end{theorem}
\begin{proof}
\(\ref{t2.35:s1} \Rightarrow \ref{t2.35:s2}\). Let \((X, d)\) be weakly similar to an unbounded ultrametric space \((Y, \rho)\). Then the diametrical graph \(G_{Y, \rho}\) is empty. Hence, \(G_{X, d}\) is also empty by Lemma~\ref{c5.36}.
\(\ref{t2.35:s2} \Rightarrow \ref{t2.35:s1}\). Suppose that the diametrical graph \(G_{X, d}\) is empty. If \((X, d)\) is unbounded, then \(\ref{t2.35:s1}\) is valid because \((X, d)\) is weakly similar to itself. If \((X, d)\) is bounded, then, by Theorem~\ref{t2.25}, there is an unbounded ultrametric space \((Y, \rho)\) such that
\[
d(x, y) = \diam X \frac{\rho(x, y)}{1 + \rho(x, y)}
\]
for all \(x\), \(y \in X\). It was shown in the proof of Theorem~\ref{t2.25} that \((X, d)\) and \((Y, \rho)\) are weakly similar.
\end{proof}
\end{document}
\begin{document}
\title{Bautin bifurcation in a minimal model of immunoediting}
\begin{abstract} One of the simplest models of
immune surveillance and neoplasia was proposed by Delisi and Rescigno~\cite{Delisi}. Later Liu et al.~\cite{Dan} proved the existence of non-degenerate Takens--Bogdanov bifurcations defining a surface in the whole set of five positive parameters. In this paper we prove the existence of Bautin bifurcations, completing the scenario of possible codimension-two bifurcations that occur in this model. We give an interpretation of our results in terms of the three-phase immunoediting theory: elimination, equilibrium and escape.
\end{abstract}
\noindent\textbf{Key words}: Bautin bifurcation, cancer modeling, immunoediting.
\noindent\textbf{2000 AMS classification:} Primary: 34C23, 34C60; Secondary: 37G15.
\section{Introduction}
Immunoediting conceptualizes the development of cancer in three phases \cite{Kim}. In the first one, formerly known as immune surveillance, the immune system eliminates cancer cells originating from an intrinsic failure of the suppressor mechanisms. When some of the cancer cells are eliminated, an equilibrium between the immune system and the population of cancer cells is achieved, leading to a dormant state. The cancer cells then accumulate genetic and epigenetic alterations in the DNA that generate specific stress-induced antigens. When an imbalance of the cancer population occurs, the escape phase appears, with a fast growth of tumor cells. One of the simplest models of the first stage of the immunoediting framework, based on a previous model of Bell \cite{Bell}, is due to Delisi and Rescigno \cite{Delisi}. They model the populations of cancer cells and lymphocytes as a predator--prey system. The cancer tumor grows in the early stage as a spherical tumor that protects the inner cancer cells; only the cancer cells on the surface of the tumor interact with the lymphocytes. Under proper hypotheses on the balance of the total cancer cells and allometric growth, they propose a model of two ODEs depending on five parameters.
Years later, Liu, Ruan and Zhu \cite{Dan} studied the nonvascularized model of \cite{Delisi} and proved that a Takens--Bogdanov bifurcation of codimension two occurs.
The nonvascularized model of Delisi is
\begin{equation}\label{Delisi0}
\begin{array}{lcll}
\frac{dx}{dt} &=& -\lambda_1 x + \frac{\alpha_1 x y^{2/3}}{1+x}\left(1-\frac{x}{x_c}\right),&\\
\frac{dy}{dt} &=& \lambda_2 y -\frac{\alpha_2 x y^{2/3}}{1+x}
\end{array}
\end{equation}
where $x$ is the number of free lymphocytes that are not bound to cancer cells and $y$ is the total number of cancer cells, in dimensionless variables. The fractional power is the result of assuming an allometric law for the number of cancer cells on the surface of a spherical tumor. Obviously the model is not well suited for $y=0$, which corresponds to the initial tumor cell being a point. In fact, the uniqueness theorem for solutions does not hold for initial conditions of the form $(x_0,0)$.
After the change of variables $\bar{x}=x$, $\bar{y}=y^{1/3}$, we perform the reparametrization
\begin{equation}\label{Repara}
\frac{dt}{d\bar{t}}= 1+x,
\end{equation}
and, dropping the bars, the system becomes the polynomial system
\begin{equation}\label{Delisi}
\begin{array}{lcll}
\frac{dx}{dt}&=&-\lambda_{1} x (1+x) +\alpha_{1} \left(1-\frac{x}{x_{c}}\right)x y^{2}&\\
\frac{dy}{dt}&=&\lambda_{2}(1+x)y-\alpha_{2}x,
\end{array}
\end{equation}
Let $(x_{0},y_{0})$ be a critical point of the system; then
\begin{eqnarray}
y_0 &=&\frac{\alpha_2x_0}{\lambda_2(1+x_0)}\label{y0}\\
\frac{\lambda_1\lambda_2^2}{\alpha_1\alpha_2^2}&=&\frac{x_0^2(1-x_{0}/x_c)}{(1+x_0)^3}\label{x0}
\end{eqnarray}
Therefore the abscissae $x_0$ of the critical points are determined by the roots of the cubic equation (\ref{x0}). In what follows the combinations of parameters
\begin{equation}
\psi\equiv\frac{\lambda_1\lambda_2^2}{\alpha_1\alpha_2^2},\quad \lambda=\frac{\lambda_2}{\lambda_1}
\end{equation}
will be very useful.
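These relations are easy to check symbolically. The sketch below (our own verification using sympy; variable names mirror the paper's notation) eliminates $y$ from the equilibrium equations of (\ref{Delisi}) and recovers the relation (\ref{x0}) defining $\psi$:

```python
import sympy as sp

# Symbols mirror the paper's notation; all quantities are positive
x, y = sp.symbols('x y', positive=True)
l1, l2, a1, a2, xc = sp.symbols('lambda_1 lambda_2 alpha_1 alpha_2 x_c', positive=True)

# Right-hand sides of the polynomial system (Delisi)
f = -l1*x*(1 + x) + a1*(1 - x/xc)*x*y**2
g = l2*(1 + x)*y - a2*x

y0 = sp.solve(g, y)[0]                 # y_0 = alpha_2 x / (lambda_2 (1 + x))
rhs = x**2*(1 - x/xc)/(1 + x)**3       # claimed value of psi at a critical point

# f = 0 at y = y0 is equivalent to lambda_1*lambda_2**2/(alpha_1*alpha_2**2) = rhs:
residual = sp.simplify(f.subs(y, y0).subs(l1, a1*a2**2/l2**2*rhs))
assert residual == 0
```

The substitution of $\lambda_1$ by its value on the critical-point locus makes $f$ vanish identically, confirming (\ref{y0}) and (\ref{x0}).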
In particular the critical points can be described by the catastrophe surface
\begin{equation}\label{cubic}
\Sigma=\{(\psi,x_c,x_0)\mid x_0^2(1-x_0/x_c)-\psi (1+x_0)^3=0\}
\end{equation}
in the space of parameters $\psi$--$x_c$ and abscissa $x_0$. This surface is shown in Figure~\ref{fig:surface}. The plane $x_0=0$ corresponds to the trivial critical point $(0,0)$, which is a saddle. The red line shows a value of the parameters $(\psi,x_c)$ for which there are three critical points (including the trivial one), determined by their abscissae $x_0$. At a point where the surface folds back, the two nontrivial critical points coalesce into a saddle--node. The projection of this folding is given by the discriminant of the cubic,
\begin{equation}\label{discriminant}
\Delta= 4x_c^2-27 (1+x_c)^2 \psi =0,\quad\mbox{or}\quad
\psi= \frac{4x_c^2}{27(1+x_c)^2},
\end{equation}
which defines a curve in the parameter plane $\psi$--$x_c$ where the projection $(\psi,x_c,x_0)\mapsto(\psi,x_c)$ restricted to $\Sigma$ is singular and the catastrophe surface folds back.
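The fold condition can be verified symbolically: after clearing the denominator, the discriminant of the cubic vanishes identically along the curve (\ref{discriminant}). A sketch with sympy (the concrete instance $x_c=3$, $\psi=1/12$, with double root $x_0=1$, is our own illustrative example):

```python
import sympy as sp

x0, psi, xc = sp.symbols('x_0 psi x_c')

# Cubic in x_0 defining the catastrophe surface, with the denominator cleared
cubic = sp.expand(xc*(x0**2*(1 - x0/xc) - psi*(1 + x0)**3))

disc = sp.discriminant(cubic, x0)
psi_fold = 4*xc**2/(27*(1 + xc)**2)     # claimed fold (saddle--node) locus

fold_residual = sp.simplify(disc.subs(psi, psi_fold))
assert fold_residual == 0                # double root along the whole fold curve

# Concrete instance: x_c = 3 gives psi = 1/12 and a double root at x_0 = 1
roots = sp.roots(cubic.subs({xc: 3, psi: sp.Rational(1, 12)}), x0)
```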
\begin{figure}
\caption{The catastrophe surface in coordinates $(\psi, x_c, x_0)$, where $x_0$ is the abscissa of the critical point. For a given value of $(\psi,x_c)$ there are up to two critical points with $x_0>0$, plus the trivial critical point corresponding to $x_0=0$. Notice that there are critical points with $x_0<0$ that are not considered. The folding of the surface projects onto the saddle--node curve given by (\ref{discriminant}).}
\label{fig:surface}
\end{figure}
The rest of the paper is organized as follows. In Section 2 we summarize the results of Liu et al regarding the existence of saddle--node and Takens--Bogdanov bifurcations. In Section 3 we state the main result of this paper, the existence of Bautin bifurcations, and describe it explicitly in terms of a proper parametrization. We give the main idea of the proof; the details are postponed to Appendix A. The global bifurcation diagram is completed numerically with MatCont, using the local diagrams of the Takens--Bogdanov and Bautin bifurcations, as described in Appendix C. In Section 4 we describe the phase portraits derived from the global bifurcation diagram and represented schematically in Figure~\ref{schema}. Finally, in Section 5 we give an interpretation of our results. In Appendix B we analyze the flow at infinity, and in Appendix C we briefly describe how the numerical continuation with MatCont was performed.
\section{Saddle node and Hopf bifurcations}
The following two results summarize the results of Liu et al \cite{Dan}.
\begin{prop}[Liu et al]
The parameter set
$$
SN=\left\{(\lambda_1,\lambda_2,\alpha_1,\alpha_2,x_c)\mid \psi= \frac{4x_c^2}{27(1+x_c)^2},\, \lambda\neq\frac{2(3+x_c)}{3(1+x_c)} \right\}
$$
consists of saddle--node bifurcation points of system (\ref{Delisi}). The local phase portrait consists of two hyperbolic sectors and one parabolic sector.
\end{prop}
Using (\ref{discriminant}) we can obtain an explicit parametrization of the saddle--node curve in the plane $\lambda_1$--$\lambda_2$ for given values of $\alpha_1$, $\alpha_2$ and $x_c$:
$$
\lambda_1 = \frac{1}{3}\left(\frac{4x_c^2 \alpha_1 \alpha_2^2}{\lambda^{2}(1+x_c)^2} \right)^{1/3},\qquad
\lambda_2 = \frac{1}{3}\left(\frac{4x_c^2 \alpha_1 \alpha_2^2\,\lambda}{(1+x_c)^2} \right)^{1/3}.
$$
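Solving $\psi=\lambda_1^3\lambda^2/(\alpha_1\alpha_2^2)=4x_c^2/(27(1+x_c)^2)$ for $\lambda_1$, with $\lambda_2=\lambda\lambda_1$, gives $\lambda_1=\frac{1}{3}\big(4x_c^2\alpha_1\alpha_2^2/(\lambda^2(1+x_c)^2)\big)^{1/3}$ and $\lambda_2=\frac{1}{3}\big(4x_c^2\alpha_1\alpha_2^2\lambda/(1+x_c)^2\big)^{1/3}$. A symbolic sanity check of both identities (our own sketch, sympy notation):

```python
import sympy as sp

lam, a1, a2, xc = sp.symbols('lambda alpha_1 alpha_2 x_c', positive=True)

base = 4*xc**2*a1*a2**2/(1 + xc)**2
lam1 = sp.Rational(1, 3)*(base/lam**2)**sp.Rational(1, 3)
lam2 = sp.Rational(1, 3)*(base*lam)**sp.Rational(1, 3)

# lambda_2/lambda_1 = lambda, and psi recovers the saddle--node locus
ratio = sp.simplify(lam2/lam1 - lam)
psi_check = sp.simplify(lam1*lam2**2/(a1*a2**2) - 4*xc**2/(27*(1 + xc)**2))
assert ratio == 0 and psi_check == 0
```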
Takens-Bogdanov bifurcations are given as follows:
\begin{theorem}[Liu et al]
The parameter set
\begin{equation}
BT=\left\{(\lambda_1,\lambda_2,\alpha_1,\alpha_2,x_c)\mid \psi= \frac{4x_c^2}{27(1+x_c)^2},\, \lambda=\frac{2(3+x_c)}{3(1+x_c)}\right\}
\end{equation}
consists of non-degenerate Takens--Bogdanov bifurcation points of system (\ref{Delisi}).
\end{theorem}
For a choice of parameters in $BT$, the critical point undergoing a BT bifurcation is given by (\ref{y0}) and (\ref{x0}). As a preliminary construction towards proving our main result, we first characterize the locus of Hopf bifurcations.
\begin{prop}\label{1}
The parameter set
$$
H=\{(\lambda_1,\lambda_2,\alpha_1,\alpha_2,x_c)\mid \mbox{(\ref{Hopf}) holds}\}
$$
is the Hopf and symmetric saddle bifurcation surface of system (\ref{Delisi}).
\begin{eqnarray}
0 &=&
(1+x_c)^3\psi\lambda^3 - (\psi x_c^3 +(1-\psi)x_c^2 -5\psi x_c - 3\psi)\lambda^2 +\nonumber\\
&& (x_c^2+4x_c+3)\psi\lambda +(1+x_c)^2(1+x_c\psi)\psi
\label{Hopf}
\end{eqnarray}
\end{prop}
\begin{proof}
Let $f$, $g$ denote the right hand sides in (\ref{Delisi}); we look for a common root of the polynomial equations $f=g=\operatorname{tr}A=0$, where
$A=\frac{\partial(f,g)}{\partial(x,y)}$ is the Jacobian matrix. We compute
$R_1=\mathbb{R}es[\operatorname{tr}A,f,y_0]$ and
$R_2=\mathbb{R}es[\operatorname{tr}A,g,y_0]$, which are polynomials in $x_0$. A necessary condition for $\operatorname{tr}A=f=0$ to have a common root is $R_1=0$, and similarly a necessary condition for $\operatorname{tr}A=g=0$ to have a common root is $R_2=0$. We then compute $R=\mathbb{R}es[R_1,R_2,x_0]$, which is a polynomial in the parameters. A necessary condition for $R_1=R_2=0$ to have a common root is $R=0$. Excluding trivial factors, we end up with (\ref{Hopf}).
\end{proof}
Liu et al \cite{Dan} prove that a non-degenerate Takens--Bogdanov bifurcation occurs for all values of the positive parameters, thus excluding the possibility of a codimension-three degeneracy.
Adam \cite{Adam} gives sufficient conditions for system (\ref{Delisi}) to undergo a Hopf bifurcation, although no explicit computation is carried out. Liu et al describe the Hopf bifurcation locus in terms of quantities involved in the normal form computation, hence not explicitly. The expression in Proposition~\ref{1} gives an explicit parametrization of the locus of Hopf bifurcations in terms of the parameters.
\section{Bautin bifurcation}
We now give the main idea of the computation of the first Lyapunov coefficient for a critical point undergoing a Hopf bifurcation. Let $(x_0,y_0)$ be such a critical point. We shift the critical point to the origin via $x=x_0+\epsilon x_1$, $y=y_0+\epsilon y_1$ and expand in powers of $\epsilon$ in order to collect the homogeneous components of the vector field. We first consider the linear part
\begin{eqnarray}
x_1' &=& a_1x_1+a_2 y_1,\\
y_1' &=& b_1x_1+b_2 y_1
\end{eqnarray}
and perform the linear change of variables $Y_1=b_2 x_1-a_2 y_1$, $Y_2=(a_1 b_2-a_2 b_1)x_1$. Under the hypothesis of complex eigenvalues and positive determinant $a_1b_2-a_2 b_1>0$, the system reduces to the oscillator equation $Y_1'=Y_2$, $Y_2'=-\omega^2 Y_1+2\mu Y_2$, with eigenvalues $\lambda=\mu\pm i\sqrt{\omega^2-\mu^2}$, and the Hopf condition becomes $\mu=0$, $\omega^2= a_1b_2-a_2b_1$. We compute right and left eigenvectors $q_0$, $p_0$ such that $Aq_0=i\omega q_0$, $A^{T}p_0=-i\omega p_0$ and $\langle p_0,q_0\rangle=1$; then $\langle p_0,\bar{q}_0\rangle=0$. Let $Y=z q_0+\bar{z}\bar{q}_0$. Then the whole nonlinear system reduces to (setting $\epsilon=1$)
$z'=\lambda z+ G_2(z,\bar{z})+G_3(z,\bar{z})+\cdots$
and then we compute $\ell_1$ by the formula given in \cite[p.~309--310]{Yuri}.
As shown in the appendix, $\ell_1$ becomes a polynomial in $x_0,y_0,\omega$; after elimination of $\omega^2$ and $\omega^4$, which are the only powers appearing there, and of $y_0$ using (\ref{y0}), a polynomial in $x_0$ of high degree (19) results. The main difficulty is that computing the abscissa $x_0$ of the critical point amounts to solving a cubic polynomial. Therefore we compute the resultant of $\ell_1$ with the cubic polynomial (\ref{cubic}) and eliminate $x_0$. Taking an appropriate factor of this, we then compute its resultant with the Hopf equation (\ref{Hopf}). There are two factors; one of them leads to the solution for $\lambda=\lambda_2/\lambda_1$,
$$
\lambda=\frac{-3+x_c}{3(1+x_c)}
$$
Substituting this value in the Hopf equation (\ref{Hopf}) we solve for $\psi$ in an appropriate factor. We then get the following
\begin{theorem}
The parameter set
\begin{eqnarray}
Bau&=&\left\{(\lambda_1,\lambda_2,\alpha_1,\alpha_2,x_c)\mid \,\psi=\frac{\sqrt{x_c}\left(
(-27+x_c)\sqrt{x_c}+ (9+x_c)^{3/2}
\right)}{27(1+x_c)^2},\right.\nonumber\\
&& \left.\qquad\lambda=\frac{-3+x_c}{3(1+x_c)}\right\}
\end{eqnarray}
are Bautin points of codimension $2$ of system (\ref{Delisi}).
\end{theorem}
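As a quick numerical check of the theorem's formulas (our own sketch): substituting the Bautin values of $\lambda$ and $\psi$ into the Hopf equation (\ref{Hopf}) should give zero for $x_c>3$. The code below evaluates the relative residual in double precision at a few sample values of $x_c$ chosen by us:

```python
import math

def hopf_residual(xc):
    # Bautin values of lambda and psi from the theorem
    lam = (xc - 3.0)/(3.0*(1.0 + xc))
    psi = math.sqrt(xc)*((xc - 27.0)*math.sqrt(xc) + (9.0 + xc)**1.5)/(27.0*(1.0 + xc)**2)
    # Left-hand side of the Hopf locus equation (Hopf)
    H = ((1 + xc)**3*psi*lam**3
         - (psi*xc**3 + (1 - psi)*xc**2 - 5*psi*xc - 3*psi)*lam**2
         + (xc**2 + 4*xc + 3)*psi*lam
         + (1 + xc)**2*(1 + xc*psi)*psi)
    scale = (1 + xc)**2*(1 + xc*psi)*psi
    return abs(H)/scale

for sample in (4.0, 10.0, 2500.0):
    assert hopf_residual(sample) < 1e-9
```

The value $x_c=2500$ matches the parameter used in the numerical continuation of Appendix C.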
\subsection{Bifurcation diagram around a point of Bautin}
The local bifurcation diagram around a Bautin point is shown in Figure \ref{Bautin-fig} (see \cite[p.~313]{Yuri}).
\begin{figure}
\caption{Local diagram of the Bautin bifurcation}
\label{Bautin-fig}
\end{figure}
There are two components $H_{\pm}$ of the Hopf curve, corresponding to the sign of the first Lyapunov coefficient $\ell_1$. Thus, when crossing the component $H_{-}$ from positive values of $\beta_1$ a stable limit cycle appears and, similarly, when crossing the component $H_{+}$ an unstable limit cycle appears. Therefore in the cusp region 3 two limit cycles coexist, the exterior one stable and the interior one unstable, and both collapse along the LPC curve.
\section{Global dynamics}
Figure \ref{schema} shows schematically the bifurcation diagram as computed numerically with MatCont in Figure~\ref{fig:diagram}. Three lines with a fixed value of $\lambda_2$ and varying $\lambda_1$ are shown. We now describe the qualitative phase portrait along these lines. For the upper line $CT$, corresponding to a value of $\lambda_2$ just below the Takens--Bogdanov point $BT$, the dynamics can be described as follows. In passing from a point $C$ to a point $D$ the trivial critical point connects to the saddle point along a heteroclinic orbit; this happens at the point marked $K$. Indeed, a curve of heteroclinic connections is depicted along the points $KK'K''$, although we have not computed it numerically. The transition from $C$ to $D$ through the heteroclinic connection $K$, the further evolution to a limit cycle bifurcating from a homoclinic connection at $P$, and the disappearance of the limit cycle through a Hopf bifurcation ending at $T$, are shown in Figure~\ref{LineCT}. For completeness we have included the flow at infinity as described in Appendix~\ref{Blow-up}. The critical points at infinity $y=\infty$ are shown as blue points. Notice the hyperbolic sector at $x=0$ and the attractor at $x=x_c$.
\begin{figure}
\caption{Schema of the bifurcation diagram; compare Figure~\ref{Bautin-fig}.}
\label{schema}
\end{figure}
\begin{figure}
\caption{Qualitative phase portrait along the line $CKDPAT$ of the bifurcation scheme in Figure \ref{schema}.}
\label{LineCT}
\end{figure}
Similarly, the evolution of the phase portrait along the line $C'T'$ is described in Figure~\ref{LineCprimeTprime}. The evolution along the part $C'K'D'P'A'$ is the same as along $CKDPA$ in Figure~\ref{LineCT}; the difference is the further development of an unstable limit cycle inside the stable limit cycle previously created by the homoclinic bifurcation at $P'=P$, as shown in panel $R'$, and the further disappearance of both limit cycles, as in $T'$, through a limit point of cycles.
\begin{figure}
\caption{Qualitative phase portrait along the line $C'K'D'P'A'T'$ of the bifurcation scheme in Figure \ref{schema}.}
\label{LineCprimeTprime}
\end{figure}
Finally, the evolution along the line $C''T''$ is described as follows. The phase portrait along $C''K''D''$ is the same as along $C'K'D'$. Differently from the previous case, after $D''$ a Hopf bifurcation occurs and an unstable limit cycle appears, as in $A''$; then a second, stable limit cycle originates in a homoclinic bifurcation, leading to the coexistence of two limit cycles as in case~$R'$. The whole evolution along the line $C''T''$ is shown in Figure~\ref{LineCbiprimeTbiprime}, where only the phase portraits different from the previous case are denoted, as $A''$ and $P''$.
Figure~\ref{Coexistence}-(a), (b) shows in detail the evolution along $C'T'$ in the triangular region of coexistence of two limit cycles, with $\lambda_1$ as the $z$-axis. Notice that, as $\lambda_1$ increases, first a limit cycle bifurcates from a homoclinic connection and then the second cycle bifurcates from a Hopf point. Figure~\ref{Coexistence}-(c), (d) shows the evolution along $C''T''$.
\begin{figure}
\caption{Qualitative phase portrait along the line $C''K''D''P''A''T''$ of the bifurcation scheme in Figure \ref{schema}.}
\label{LineCbiprimeTbiprime}
\end{figure}
\begin{figure}
\caption{Coexistence of two limit cycles. Along the line $C'T'$: (a), (b); along the line $C''T''$: (c), (d).}
\label{TwoLC1}
\label{Coexistence}
\end{figure}
\begin{figure}
\caption{Graphs of the coexisting limit cycles of Figure \ref{TwoLC1}.}
\label{TwoLC1-bis}
\end{figure}
\section{Implications of the model on the equilibrium phase of immunoediting}
In what follows we are interested in non-negative values of the parameters and in
$x$ within the range $0<x<x_c$. Since $x'<0$ if $x=x_c$ and $y'<0$ if $y=0$, it follows
that the region $0<x<x_c$, $0<y$ is invariant. This delimits the region of interest (ROI)
of the model.
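The two boundary inequalities can be checked directly on the right-hand sides of (\ref{Delisi}); in the sketch below the parameter values are illustrative assumptions of ours, not values used elsewhere in the paper:

```python
# Invariance of the ROI: x' < 0 on the wall x = x_c, y' < 0 on the wall y = 0.
# Sample (assumed) positive parameter values:
l1, l2, a1, a2, xc = 0.5, 0.3, 1.0, 0.7, 2.0

def fx(x, y):
    return -l1*x*(1 + x) + a1*(1 - x/xc)*x*y**2

def fy(x, y):
    return l2*(1 + x)*y - a2*x

for y in (0.1, 1.0, 5.0):
    assert fx(xc, y) < 0       # the y**2 term vanishes at x = x_c
for x in (0.1, 1.0, 1.9):
    assert fy(x, 0.0) < 0      # y' = -alpha_2 x on y = 0
```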
\begin{prop}[Elimination threshold]
Given $\alpha_1$, $\alpha_2$, $\lambda_2$, $x_c$, there exists $\lambda_1^*$ such that
if $\lambda_1 >\lambda_1^*$, then there exists a curve $y= h(x)$ such that for any initial condition
$(x_0,y_0)$ with $y_0<h(x_0)$ there exists $T>0$ such that $y(T)=0$.
\end{prop}
\begin{proof}
Fix $\alpha_1$, $\alpha_2$, $\lambda_2$ and $x_c$. Since the saddle--node curve is the hyperbola
$\lambda_1\lambda_2^2=\mathrm{const}$ (see Proposition 1), for $\lambda_1$ large enough the unique critical point is the origin,
which is a saddle with the positive $y$ axis as a branch of the unstable manifold. Let us consider the rectangular region within the ROI
$$
R=\{(x,y)\mid 0<x<x_c,\quad 0<y<k\}
$$
We have seen that on the boundary $x=x_c$ we have $x'<0$, and on the boundary
$y=0$ we have $y'<0$. On the upper boundary $y=k$, since $x$ remains bounded,
$y'=\lambda_2y(1+x)-\alpha_2x$ is positive for $k$ large enough. We now
follow the stable manifold $W^s(0,0)$ backwards in time.
A straightforward computation of the stable eigenvalue shows that
a small branch of $W^s(0,0)$ belongs to $R$; since there are no
critical points within $R$, it follows that it must intersect the line
$x=x_c$. It remains to show that the component of $W^s(0,0)$
within the region $0<x<x_c$ can in fact be expressed as the graph of
a function $y=h(x)$. From the first equation,
$x'=-\lambda_1x(1+x)+\alpha_1 x(1-x/x_c)y^2$, since $x$ and $y$ remain bounded and
$\lambda_1$ is large enough, it follows that $x'<0$, and the result
follows.
\end{proof}
The above proposition defines a threshold value of the population of cancer cells $y_c$
given by the intersection of $W^s(0,0)$ and the line $x=x_c$, namely $y_c=h(x_c)$: let $y_0<y_c$ be an initial population of cancer cells; for a given growth parameter $\lambda_2$ and interaction constants $\alpha_{1,2}$ there exists $x_0=h^{-1}(y_0)$ such that for $x_0'>x_0$ the population of cancer cells $y(t)$ with initial condition $(x_0',y_0)$ becomes zero. Geometrically, the horizontal line $y=y_0$ in phase space intersects the graph of the curve $y=h(x)$ at a point $(\bar{x}_0,\bar{y}_0)$, and for an initial population of lymphocytes large enough, $x_0<x_0'<x_c$, the solution with initial condition $(x_0',y_0)$ crosses the line $y=0$ at some finite time $T$, that is, $y(T)=0$. See Figure~\ref{Threshold}.
Notice that the above dynamics occurs in the scaled variables $(x,y)$. The branch of the stable manifold $y=h(x)$ transforms back, in the original variables $(x,\bar{y})$, into a curve $\bar{y}= h(x)^{3}$; however, in the original variables the locus $\bar{y}=0$ does not make sense, for two
reasons: first, the model breaks down because of the hypothesis of
a spherical tumor; second, the system (\ref{Delisi0}) is not Lipschitz at $\bar{y}=0$.
Indeed, one expects non-uniqueness as in the well known example $\bar{y}'=\bar{y}^{2/3}$.
Nevertheless, the threshold curve is still defined in the original variables $(x,\bar{y})$ and,
since the change of variables is $C^1$ outside the singular locus $\bar{y}=0$, the same dynamical behaviour occurs in the non-scaled variables.
\begin{figure}
\caption{Illustration of the elimination threshold}
\label{Threshold}
\end{figure}
According to the immunoediting theory, the relation between tumor cells and the immune system is made up of three phases (commonly known as the three E's of cancer): elimination, equilibrium and escape~\cite{Dun}.
Although not in this terminology, DeLisi and Rescigno \cite{Delisi} describe these phases in terms of regions delimited by the nullclines. For example, the authors mention that within the region $x'<0$, $y'>0$, denoted by $A$ in \cite{Delisi}, solutions eventually escape to $x=x_c$, $y=\infty$. According to the elimination threshold proposition (see Figure~\ref{Threshold}), this is true for initial conditions above the curve $y=h(x)$.
Here we describe the three phases in more detail, according to the regions delimited by the invariant manifolds and basins of attraction. The elimination phase is described by the region below the threshold curve;
the escape (explosive) phase by the basin of attraction of the point at infinity obtained by the compactification of phase space along the $y$ direction (see Appendix~\ref{Blow-up}); and the equilibrium phase by the basins of attraction of either a stable anti-saddle or a stable limit cycle.
The existence of a Bautin bifurcation, together with the numerically continued global bifurcation diagram, implies the existence of a triangular region in the plane of parameters $\lambda_1$--$\lambda_2$, for fixed values of $\alpha_1$, $\alpha_2$ and $x_c$, as shown in Figure~\ref{fig:diagram}. Within this region two limit cycles exist, and the detailed analysis of the phase diagrams along the lines $CT$, $C'T'$ and $C''T''$ in Figure \ref{schema}, explained in the text, leads to the conclusion that the inner limit cycle is unstable and the exterior one is stable. These two limit cycles are shown in Figure~\ref{TwoLC1}; the corresponding plots against time are shown in Figure~\ref{TwoLC1-bis}. This implies that for an initial condition within the interior of the inner limit cycle, the solution tends asymptotically to the values of the stable equilibrium; this corresponds to the equilibrium phase in the immunoediting theory. Meanwhile, for an initial condition just outside the unstable inner cycle, the populations of cancer cells and lymphocytes grow in amplitude and tend towards a periodic state of larger amplitude. This yields a new type of qualitative behaviour predicted by the model.
The escape phase in the immunoediting theory corresponds to the basin of attraction of the point at infinity $x=x_c$, $y=+\infty$. The analysis in Appendix B shows that this point is stable, so there is an open set of initial conditions leading to the escape phase. The basin of attraction of the point at infinity is delimited first by the threshold curve, and secondly by the unstable manifolds of the saddle point with positive coordinates, here denoted $(x_s,y_s)$. The structure of its stable and unstable branches delimits three types of behavior leading to escape. In the first one, for an initial condition $x_0>x_s$ and $y_0$ large enough, there is a transitory evolution with diminishing values of the lymphocyte population $x(t)$, below $x_s$, finally leading to escape. This region is delimited by the unstable branch connecting $(x_s,y_s)$ and the point at infinity and by the stable branch crossing the line $x=x_c$. The second type of evolution leading to escape occurs for an initial condition with a large initial population of lymphocytes $x_0$, with a great diminishing of $x(t)$, namely below $x_a$, the abscissa of the anti-saddle critical point $(x_a,y_a)$, followed by an increase of cancer cells and lymphocytes, finally leading to escape. This kind of solution can be described as a turn around the anti-saddle before escaping. A third, more complex behaviour occurs when the initial condition lies on the boundary of the basin of attraction of a limit cycle. In this situation a small perturbation can lead to oscillations of increasing magnitude and finally to escape.
\appendix
\section{Computation of the first Lyapunov coefficient}
In this section we present the main procedure to compute the first Lyapunov coefficient at a Hopf point.
Let $(x_0,y_0)$ be a critical point. Replacing $x=x_{0}+x_{1}$, $y=y_{0}+y_{1}$ in (\ref{Delisi}),
$$
\begin{array}{lcll}
\frac{dx_{1}}{dt}&=&-\lambda_{1} (x_{0}+x_{1}) (1+x_{0}+x_{1}) +\alpha_{1} \left(1-\frac{x_{0}+x_{1}}{x_{c}}\right)(x_{0}+x_{1})(y_{0}+y_{1})^{2}&\\
\frac{dy_{1}}{dt}&=&\lambda_{2}(y_{0}+y_{1})(1+x_{0}+x_{1})-\alpha_{2}(x_{0}+x_{1}),
\end{array}
$$
and expanding we have
\begin{equation}\label{Delisi2}
\begin{array}{lcll}
x'_{1}&=&a_{0}+a_{1}x_{1}+a_{2}y_{1}+a_{3}x_{1}^{2}+a_{4}y_{1}^{2}+a_{5}x_{1}y_{1}+a_{6}x_{1}y_{1}^{2}+a_{7}x_{1}^{2}y_{1}+a_{8}x_{1}^{2}y_{1}^{2}&\\
y'_{1}&=&b_{0}+b_{1}x_{1}+b_{2}y_{1}+b_{3}x_{1}y_{1},
\end{array}
\end{equation}
where
\begin{eqnarray*}
a_{0}&=&-\lambda_{1}x_{0}(1+x_{0})+\alpha_{1}\left(1-\frac{x_{0}}{x_{c}}\right)x_{0}y_{0}^{2},\\
b_{0}&=&\lambda_{2}y_{0}(1+x_{0})-\alpha_{2}x_{0}.
\end{eqnarray*}
Of course, $a_0=b_0=0$ are the equations for the critical points.
The rest of the coefficients are
$$\begin{array}{rclrcl}
a_{1}&=&-\lambda_{1} (1+2x_{0})+\alpha_{1}\left(1-\frac{2x_{0}}{x_{c}}\right)y_{0}^{2},&
a_{2}&=&2\alpha_{1}x_{0}y_{0}\left(1-\frac{x_{0}}{x_{c}}\right),\\
a_{3}&=&-\lambda_{1}-\frac{\alpha_{1}y_{0}^{2}}{x_{c}},&
a_{4}&=&\alpha_{1} x_{0}\left(1-\frac{x_{0}}{x_{c}}\right),\\
a_{5}&=&2\alpha_{1}y_{0}\left(1-\frac{2x_{0}}{x_{c}}\right),&
a_{6}&=&\alpha_{1}\left(1-\frac{2x_{0}}{x_{c}}\right),\\
a_{7}&=&-2\frac{\alpha_{1}y_{0}}{x_{c}},&
a_{8}&=&-\frac{\alpha_{1}}{x_{c}},\\
b_{1}&=&\lambda_{2}y_{0}-\alpha_{2},&
b_{2}&=&\lambda_{2}(1+x_{0}),\\
b_{3}&=&\lambda_{2}.
\end{array}
$$
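The coefficient table can be checked against a direct Taylor expansion of the shifted right-hand sides; the sketch below (our own verification with sympy, for a few representative coefficients, including the $a_1$ entry that carries $\lambda_1$) does so:

```python
import sympy as sp

x1, y1, x0, y0 = sp.symbols('x_1 y_1 x_0 y_0')
l1, l2, a1, a2, xc = sp.symbols('lambda_1 lambda_2 alpha_1 alpha_2 x_c')

# Shifted right-hand sides, as in (Delisi2)
f = -l1*(x0 + x1)*(1 + x0 + x1) + a1*(1 - (x0 + x1)/xc)*(x0 + x1)*(y0 + y1)**2
g = l2*(y0 + y1)*(1 + x0 + x1) - a2*(x0 + x1)

pf = sp.Poly(sp.expand(f), x1, y1)
pg = sp.Poly(sp.expand(g), x1, y1)

# Selected entries of the coefficient table
A1 = -l1*(1 + 2*x0) + a1*(1 - 2*x0/xc)*y0**2
A5 = 2*a1*y0*(1 - 2*x0/xc)
A8 = -a1/xc
B3 = l2

ok_a1 = sp.simplify(pf.coeff_monomial(x1) - A1) == 0
ok_a5 = sp.simplify(pf.coeff_monomial(x1*y1) - A5) == 0
ok_a8 = sp.simplify(pf.coeff_monomial(x1**2*y1**2) - A8) == 0
ok_b3 = sp.simplify(pg.coeff_monomial(x1*y1) - B3) == 0
assert ok_a1 and ok_a5 and ok_a8 and ok_b3
```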
Consider the linear part $x'=Ax$ where $x=(x_1,y_1)^T$ and
$$A= \begin{pmatrix}
a_{1} & a_{2} \\
b_{1} & b_{2}
\end{pmatrix}
$$
Perform the linear change of coordinates
\begin{equation}\label{Cambio}
Y=Mx,
\end{equation}
where $Y=(Y_1,Y_2)^T$ and
$$
M=
\begin{pmatrix}
b_{2} & -a_{2}\\
a_{1}b_{2}-a_{2}b_{1} & 0
\end{pmatrix}$$
then the linear system is transformed into
$$
Y'=RY,\quad R=\begin{pmatrix} 0 & 1 \\ -\det(A) & \tr(A)
\end{pmatrix}
$$
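That $M$ conjugates $A$ to the companion matrix $R$ can be verified symbolically (a sketch of ours, assuming $a_2\neq 0$ and $\det A\neq 0$ so that $M$ is invertible):

```python
import sympy as sp

a1, a2, b1, b2 = sp.symbols('a_1 a_2 b_1 b_2')

A = sp.Matrix([[a1, a2], [b1, b2]])
M = sp.Matrix([[b2, -a2], [a1*b2 - a2*b1, 0]])

# M A M^{-1} should be the companion matrix with -det(A) and tr(A)
R = sp.simplify(M*A*M.inv())
companion = sp.Matrix([[0, 1], [-A.det(), A.trace()]])
diff = sp.simplify(R - companion)
assert diff == sp.zeros(2, 2)
```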
Then $R$ has the canonical form
$$R=
\begin{pmatrix}
0 & 1\\
-\omega^2 & 2\mu
\end{pmatrix}
$$
where we have set $\omega^2\equiv\det(A)>0$ and $2\mu\equiv\tr(A)$, and supposed $\mu^2-\omega^2<0$, so that the eigenvalues are complex,
$\lambda=\mu \pm i \sqrt{\omega^{2}-\mu^{2}}$.
Let us consider the case where the real part of the eigenvalues is zero ($\mu=0$); then
$$
R_{0}=\begin{pmatrix}
0 & 1\\
-\omega^{2} & 0
\end{pmatrix},
$$
and we want to find vectors $q_{0}$ and $p_{0}$ such that $R_{0}q_{0}=i\omega q_{0}$, $R_0^{T}p_{0}=-i\omega p_{0}$, $\langle p_{0},q_{0}\rangle =1$ and $ \langle p_{0},\bar{q}_{0}\rangle=0$.
We find
$$
q_{0}=\frac{1}{2 i \omega}\begin{pmatrix}
1\\
i \omega
\end{pmatrix}
$$
and
$$p_{0}=\begin{pmatrix}
-i\omega\\
1
\end{pmatrix}.
$$
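These vectors can be checked directly; the sketch below (ours) uses the convention $\langle p,q\rangle=\sum_i \bar{p}_i q_i$, conjugating the first argument:

```python
import sympy as sp

w = sp.symbols('omega', positive=True)

R0 = sp.Matrix([[0, 1], [-w**2, 0]])
q0 = sp.Matrix([1, sp.I*w])/(2*sp.I*w)
p0 = sp.Matrix([-sp.I*w, 1])

def inner(p, q):
    # <p, q> = sum conj(p_i) q_i  (conjugate-first convention)
    return (p.H*q)[0]

assert sp.simplify(R0*q0 - sp.I*w*q0) == sp.zeros(2, 1)     # right eigenvector
assert sp.simplify(R0.T*p0 + sp.I*w*p0) == sp.zeros(2, 1)   # left eigenvector
assert sp.simplify(inner(p0, q0) - 1) == 0                   # normalization
assert sp.simplify(inner(p0, q0.conjugate())) == 0           # bi-orthogonality
```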
Let us transform the complete system (\ref{Delisi2}) at a critical point with purely imaginary eigenvalues $\pm i \omega$,
$$
x'=Ax+H_{2}(x)+H_{3}(x)+\cdots,
$$
by means of the change of variables
(\ref{Cambio}) then
\begin{eqnarray*}
Y'&=&Mx'\\
&=&MAx+MH_{2}(x)+MH_{3}(x)+\cdots,\\
&=&MA\left(M^{-1}Y\right)+MH_{2}\left(M^{-1}Y\right)+MH_{3}\left(M^{-1}Y\right)+\cdots \\
&=&
R\begin{pmatrix}
Y_{1}\\
Y_{2}
\end{pmatrix}+ K_{2}\begin{pmatrix}
Y_{1}\\
Y_{2}
\end{pmatrix}+K_{3}\begin{pmatrix}
Y_{1}\\
Y_{2}
\end{pmatrix}+\cdots
\end{eqnarray*}
where $K_{l}(Y) = MH_{l}\!\left(M^{-1}Y\right)$, for $l=2,3,\ldots$
Now introduce the complex variable $z$ by
$$\begin{pmatrix}
Y_{1}\\
Y_{2}
\end{pmatrix}=z q_{0}+\bar{z}\bar{q_{0}},$$
then the system is reduced to the normal form
\begin{eqnarray*}
z'&=&\lambda z +\langle p_{0},K_{2}(z q_{0}+\bar{z}\bar{q_{0}})\rangle +\cdots \\
&=&\lambda z+ G_{2}(z,\bar{z})+G_{3}(z,\bar{z})+\cdots \\
&=& \lambda z+ \frac{g_{20}}{2}z^2 +g_{11}z\bar{z}+ \frac{g_{02}}{2}\bar{z}^2+\cdots
\end{eqnarray*}
where
$$
G_{l}= \langle p_{0},K_{l}(z q_{0}+\bar{z}\bar{q_{0}})\rangle ,\quad l=2,3\ldots
$$
and $g_{ij} = \frac{1}{i!\,j!}\left.\frac{\partial^{\,i+j} G}{\partial z^{i}\,\partial\bar{z}^{j}}\right|_{z=0}$, where $G=G_{2}+G_{3}+\cdots$. We need the expansion up to third order terms, in particular the third-order coefficient $g_{21}$.
We compute the first Lyapunov coefficient using the formulas (3.18) in \cite{Yuri} for the coefficient $c_1(0)$ of the Poincar\'e normal form and
\begin{equation}\label{l1}
\ell_1(0)= \frac{\mbox{Re}(c_1(0))}{\omega}
\end{equation}
where
$$
c_1(0)=\frac{g_{21}}{2}
+\frac{i}{2\omega}\left(g_{20} g_{11}-2|g_{11}|^{2}-\frac{1}{3}|g_{02}|^{2}\right)
$$
Observe that the change of coordinates (\ref{Cambio}) involves the coordinates of the critical point $(x_0,y_0)$, and so do the coefficients $g_{ij}$. Therefore we have to impose, on the formal expression obtained from (\ref{l1}) and the coefficients $g_{ij}$ up to third order, the restriction of being at a critical point with eigenvalues of zero real part and positive determinant equal to $\omega^2$. We achieve this as follows.
The expression (\ref{l1}) is a polynomial expression $P(x_0,y_0,\lambda_1,\lambda_2,x_c,\alpha_1,\alpha_2)$ depending on $(x_0,y_0)$ and the parameters. First we eliminate $y_0$ using (\ref{y0}), obtaining a polynomial expression in $x_0$ of degree 19 and the parameters, still denoted by $P(x_0,\lambda_1,\lambda_2,x_c,\alpha_1,\alpha_2)$. The abscissa $x_0$ of the critical point satisfies the cubic equation (\ref{x0}), written here as $Q(x_0,\lambda,\psi,x_c)=0$.
Surprisingly, the coefficients of $P$ and $Q$ can be expressed solely in terms of the combinations of parameters $\lambda$, $\psi$ and $x_c$. Next
we eliminate $x_0$ using the resultant
$$
R_1(\lambda,\psi,x_c)=\mathbb{R}es(P(x_0,\lambda,\psi,x_c),Q(x_0,\lambda,\psi,x_c),x_0).
$$
Also the Hopf surface can be expressed in terms of the same combination of parameters as shown in (\ref{Hopf}) as $R_2(\lambda,\psi,x_c)=0$, then we compute
$$
R_3(\lambda,x_c)= \mathbb{R}es(R_1(\lambda,\psi,x_c),R_2(\lambda,\psi,x_c),\psi)
$$
and from a non-trivial factor of $R_3$ we get
\begin{equation}\label{bautin}
\lambda=\frac{-3+x_c}{3(1+x_c)}
\end{equation}
Finally, substituting (\ref{bautin}) in the Hopf surface $R_2(\lambda,\psi,x_c)=0$, we get the nonnegative solution
$$
\psi=\frac{\sqrt{x_c}\left(
(-27+x_c)\sqrt{x_c}+ (9+x_c)^{3/2}
\right)}{27(1+x_c)^2}.
$$
\section{Blow up of infinity\label{Blow-up}}
In order to study solutions that escape to infinity in the direction $y\to\infty$, we perform a blow up of infinity
by the change of variables $(x,y)\mapsto (x,v=x/y)$; a further rescaling of time, $dt/dt'=v^2$, extends system (\ref{Delisi}) up to $v=0$, corresponding to infinity $y=\infty$, $x>0$:
\begin{eqnarray}
\frac{dx}{dt'}&=&-\lambda_1 x(1+x) v^2+\alpha_1 x^3\left(1-\frac{x}{x_c}\right),\nonumber\\
\frac{dv}{dt'} &=& vx\left(\alpha_1 x\left(1-\frac{x}{x_c}\right)-\lambda_2 \right)+\alpha_2 v^2 -v^3(\lambda_1(1+x)+\lambda_2) .\label{infinity}
\end{eqnarray}
We see that $v=0$ becomes invariant and the reduced system at infinity is
$$
\frac{dx}{dt'}= \alpha_1 x^3\left(1-\frac{x}{x_c}\right),
$$
showing that, along $v=0$, $x>0$, the point $x=x_c$ is an attractor.
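A sign check of the reduced flow along $v=0$, obtained by setting $v=0$ in the first equation of (\ref{infinity}), confirms the attractor; the sample values below are illustrative assumptions of ours:

```python
# Reduced flow at infinity: dx/dt' = alpha_1 * x**3 * (1 - x/x_c)
a1, xc = 1.3, 2.0   # assumed sample values

def reduced(x):
    return a1*x**3*(1 - x/xc)

assert reduced(0.5*xc) > 0   # below x_c the flow pushes x up
assert reduced(1.5*xc) < 0   # above x_c it pushes x down: x = x_c attracts
```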
To determine the local phase portrait of system (\ref{infinity}) at the critical point $x=x_c,v=0$, we compute its linearization
$$
A=\begin{pmatrix}
- x_c^2 \alpha_1 & 0\\ 0 & - x_c\lambda_2
\end{pmatrix}
$$
thus $(x_c,v=0)$ is an attractor. The origin $x=v=0$ is also a degenerate critical point with zero linear part. Performing a radial blow-up using polar coordinates $x=r\cos{\theta}$, $v=r\sin{\theta}$,
we get
\begin{eqnarray*}
\frac{dr}{dt}&=& r(-\lambda_1+\alpha_1\cot^2{\theta}-\lambda_2\sin^2{\theta})+\\
&& r^2\left(-\lambda_1\cos^2{\theta}+\alpha_2\sin^3{\theta}-\frac{\alpha_1}{x_c}\cot^2{\theta}-(\lambda_1+\lambda_2)\cos{\theta}\sin^2{\theta}\right)\\
\frac{d\theta}{dt}&=&-\lambda_2\cos{\theta}\sin{\theta}- r\cos{\theta}\sin{\theta}(\lambda_2\cos{\theta}-\alpha_2\sin{\theta})
\end{eqnarray*}
which shows that $r=0$, $0<\theta<\pi/2$ is invariant. Setting $r=0$ we get
$$
\frac{d\theta}{dt}=-\lambda_2\cos{\theta}\sin{\theta}
$$
which is always negative for $0<\theta<\pi/2$. Thus the origin is a degenerate critical point with a hyperbolic sector.
\section{Numerical continuation}
Following \cite{Dan} we take the numerical values
$$
\lambda_1 = 0.01, \quad \lambda_2 = 0.006672, \quad \alpha_1 = 0.297312, \quad \alpha_2 = 0.00318, \quad x_c = 2500
$$
satisfying the conditions of the set $BT$ for a Takens--Bogdanov bifurcation, and the coordinates $x_0=1.9976$, $y_0=0.317619$ for the critical point, according to (\ref{y0}), (\ref{x0}).
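These values can be reproduced from the formulas of Sections 2 and 3. In the sketch below (ours), the double-root abscissa $x_0=2x_c/(3+x_c)$ is obtained by solving the cubic (\ref{x0}) together with its derivative; the tolerances account for the rounding of the quoted parameter values:

```python
import math

# BT seed values from Liu et al, as quoted in the text
l1, l2, a1, a2, xc = 0.01, 0.006672, 0.297312, 0.00318, 2500.0

psi = l1*l2**2/(a1*a2**2)
lam = l2/l1

# psi lies on the fold curve and lambda takes the BT value
assert math.isclose(psi, 4*xc**2/(27*(1 + xc)**2), rel_tol=1e-3)
assert math.isclose(lam, 2*(3 + xc)/(3*(1 + xc)), rel_tol=1e-3)

# Double root of the cubic on the fold, and the corresponding y0 from (y0)
x0 = 2*xc/(3 + xc)
y0 = a2*x0/(l2*(1 + x0))
assert math.isclose(x0, 1.9976, rel_tol=1e-3)
assert math.isclose(y0, 0.317619, rel_tol=1e-3)
```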
Figure~\ref{fig:diagram}~(a)--(b) shows the family of homoclinic connections in phase space originating from the BT critical point. Since continuing the family of homoclinics from the BT point is sometimes difficult (see \cite{Hdaibat2}), for the computation of the initial member of the family we use the homotopy method near the previous values of $\lambda_1$, $\lambda_2$, and then continue forward and backward to ensure that the family originates from the BT point. The curve of homoclinics is shown in Figure~\ref{fig:diagram} as the violet curve. The Bautin point (GH) is detected by continuing the Hopf curve from the BT point.
\begin{figure}
\caption{Numerical continuation of the bifurcation diagram with MatCont. Saddle--node: black; Hopf: green; limit point of cycles: red; symmetric saddles: blue; homoclinic: violet.}
\label{fig:diagram}
\end{figure}
The bifurcation diagram of the DeLisi model is shown in Figure \ref{fig:diagram}~(b). The saddle--node bifurcation curve is shown in black; the green curve corresponds to the Hopf bifurcation, the red curve to the saddle--node bifurcation of periodic orbits (limit point of cycles), and the blue one to symmetric saddles.
\end{document}
\begin{document}
\title{Solvability of Backward Stochastic Differential Equations with
Quadratic Growth}
\author{Revaz Tevzadze}
\date{~}
\maketitle
\begin{center}
{Georgian--American University, Business School, 3, Alleyway II,
\newline
Chavchavadze Ave. 17\,a,
\newline Georgian Technical University, 77 Kostava str., 0175,
\newline Institute of Cybernetics, 5 Euli str., 0186, Tbilisi,
Georgia
\newline
(e-mail: [email protected]) }
\end{center}
\numberwithin{equation}{section}
\begin{abstract}
We prove the existence of the unique solution of a general
Backward Stochastic Differential Equation with quadratic growth
driven by martingales. A version of the comparison theorem is also
proved.
\noindent {\bf Key words and phrases}:{Backward Stochastic
Differential Equation, Contraction principle, BMO-martingale.}
\noindent
{\bf Mathematics Subject Classification (2000)}: 90A09, 60H30, 90C39.
\end{abstract}
\
\section{Introduction}
\
In this paper we prove a general existence and uniqueness result
for Backward Stochastic Differential Equations (BSDEs) with quadratic
growth driven by continuous martingales. Backward stochastic
differential equations were introduced by Bismut \cite{Bs}
for the linear case as equations for the adjoint process in the
stochastic maximum principle. A nonlinear BSDE (with Bellman
generator) was first considered by Chitashvili \cite{Ch}. He
derived the semimartingale BSDE (or SBE), which can be considered
as a stochastic version of the Bellman equation for a stochastic
control problem, and proved the existence and uniqueness of a
solution. The theory of BSDEs driven by Brownian motion was
developed by Pardoux and Peng \cite{PP} for more general
generators. The results of Pardoux and Peng were generalized by
Kobylansky \cite{Kob} and Lepeltier and San Martin \cite{LS} for
generators with quadratic growth. In the work of Hu et al.
\cite{HIm} BMO-martingales were used for BSDEs with quadratic
generators in the Brownian setting, and in \cite{M-T}, \cite{MRT},
\cite{MT7}, \cite{M-S-T22}, \cite{M-T2}, \cite{Mrl} for BSDEs
driven by martingales. The well-posedness of BSDEs with generators
satisfying Lipschitz-type conditions was established by Chitashvili
\cite{Ch}, Buckdahn \cite{B}, and El Karoui and Huang \cite{El-H}.
Here we suggest a new approach yielding the existence
and uniqueness of the solution of a general BSDE with quadratic
growth. In the earlier papers \cite{M-T}, \cite{MRT}, \cite{MT7},
\cite{M-S-T22}, \cite{M-T2}, as well as in Bobrovnytska
and Schweizer \cite{BSc}, particular cases of BSDEs with
quadratic nonlinearities related to the primal and dual problems
of Mathematical Finance were studied. In these works the solutions
were represented as value functions of the corresponding optimization
problems.
The paper is organized as follows. In Section 2 we give some basic
definitions and facts used in what follows. In Section 3 we show
the solvability of the system of BSDEs for sufficiently small
initial condition and further prove the solvability of one
dimensional BSDE for arbitrary bounded initial data. In
Section 4 we prove the comparison theorem, which generalizes the
results of Mania and Schweizer \cite{m-sc}, and apply these results
to the uniqueness of the solution.
\section{Some basic definitions and assumptions}
\
Let $(\Omega,{\cal F}, {\bf F}=({\cal F}_t)_{t\ge 0},P)$ be a filtered
probability space satisfying the usual conditions. We assume that
all local martingales with respect to ${\bf F}$ are continuous.
Here the time horizon $T<\infty$ is a stopping time and ${\cal F}={\cal F}_T$.
Let us
consider the Backward Stochastic Differential Equation (BSDE) of the
form
\begin{eqnarray}\label{eq}
dY_t=-f(t,Y_t,\sigma_t^*Z_t)dK_t-d\langle N\rangle_t g_t+Z_t^*dM_t+dN_t,\\
\label{in}
Y_T=\xi.
\end{eqnarray}
We suppose that
\begin{itemize}
\item $(M_t,t\ge 0)$ is an $R^n$-valued continuous martingale with
cross-variations matrix $\langle M\rangle_t=(\langle M^i,M^j\rangle_t)_{1\le
i,j\le n}$,
\item $(K_t,t\ge 0)$ is a continuous, adapted,
increasing process, such that $\langle
M\rangle_t=\int_0^t\sigma_s\sigma^*_sdK_s$ for some predictable, non
degenerate $n\times n$ matrix $\sigma$,
\item $\xi$ is an ${\cal F}$-measurable,
$R^d$-valued random variable,
\item $f:\Omega\times R^+\times
R^d\times R^{n\times d}\to R^d$ is a stochastic process, such that
for any $(y,z)\in R^d\times R^{n\times d}$ the process
$f(\cdot,\cdot,y,z)$ is predictable,
\item $g:\Omega\times R^+\to
R^{d\times d}$ is a predictable process.
\end{itemize}
The notation $R^{n\times d}$ here denotes the space of $n\times
d$ matrices $C$
with the Euclidean norm $|C|=\sqrt{{\rm tr}(CC^*)}$. For a stochastic process
$X$ and stopping times
$\tau,\;\nu$ such that $\tau\ge\nu$ we write
$X_{\nu,\tau}=X_\tau-X_\nu$.
For all unexplained notations concerning the martingale theory
used below we refer to \cite{J}, \cite{DM} and
\cite{L-Sh}. For {\rm BMO}-martingales see \cite{DdM} or
\cite{Kaz}.
A solution of the BSDE is a triple $(Y,Z,N)$ of stochastic
processes such that (\ref{eq}), (\ref{in}) are satisfied and
\begin{itemize}
\item $Y$ is an adapted $R^d$-valued continuous process,
\item $Z$ is an $R^{n\times d}$-valued predictable process,
\item $N$ is an $R^d$-valued continuous martingale, orthogonal to the
basic martingale $M$.
\end{itemize}
One says that $(f,g,\xi)$ is a generator of BSDE
(\ref{eq}),(\ref{in}).
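For orientation (this reduction is our illustration, not an assumption of the paper): in the classical Brownian setting, where $M=W$ is an $n$-dimensional Brownian motion, $K_t=t$, $\sigma$ is the identity matrix and the filtration is Brownian (so that the orthogonal martingale $N$ vanishes), equations (\ref{eq}), (\ref{in}) reduce to the BSDE studied in \cite{PP},
$$
dY_t=-f(t,Y_t,Z_t)dt+Z_t^*dW_t,\qquad Y_T=\xi.
$$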
We introduce the following spaces:
\begin{itemize}
\item
$L^\infty(R^d)=\{X:\Omega\to R^d, {\cal F}_T-\text{measurable},
||X||_\infty=\underset{\Omega}\operatornamewithlimits{ess\,sup}|X(\omega)|<\infty\}$,
\item
$S^\infty(R^d)=\{\varphi:\Omega\times R^+\to R^d,\; \text{continuous,
adapted},||\varphi||_\infty=
\underset{[[0,T]]}\operatornamewithlimits{ess\,sup}|\varphi(t,\omega)|<\infty\}$,
\item
\begin{eqnarray}\label{sp} \notag {H}^2(R^{n\times
d},\sigma)=&\{\varphi:\Omega\times R^+\to R^{n\times
d},\;\;\text{predictable},\;\;
\\
||\varphi||^2_H=\underset{[[0,T]]}\operatornamewithlimits{ess\,sup}
&E\big(\int_t^T|\sigma_s^*\varphi_s|^2dK_s|{\cal F}_t\big)\equiv
\underset{[[0,T]]}\operatornamewithlimits{ess\,sup} E\big({\rm tr}\langle \varphi\cdot M\rangle_{tT}|{\cal
F}_t\big)<\infty\}, \end{eqnarray}
\item
${\rm BMO}(Q)=\{ N \; R^d-\text{valued}\; Q-\text{martingale}:\;
||N||_Q^2=\underset{[[0,T]]}\operatornamewithlimits{ess\,sup} E^Q({\rm tr}\langle N\rangle_{tT}|{\cal
F}_t)<\infty\}$
\end{itemize}
We also use the notation $|r|_{2,\infty}$
for the norm $||\int_0^Tr_s^2dK_s||_\infty$.
\noindent
The norm of the triple is defined as
$$
||(Y,Z,N)||^2=||Y||^2+||Z||_H^2+||N||_P^2.
$$
Throughout the paper we use the following condition:
A) There exist a constant $\theta$ and predictable processes
$$
\alpha:\Omega\times R^+\to R^d,\;\Gamma:\Omega\times R^+\to
Lin(R^{n\times d},R^d),\;\; r:\Omega\times R^+\to R,
$$
such that $\int_0^T r_sdK_s,\;\int_0^T r_s^2dK_s\in
L^\infty,\;\Gamma(\sigma^{-1})\in H_T^2,$ $|\alpha_t|\le r_t,\;|g_t|\le
\theta^2$ and \begin{eqnarray}\label{est}
|f(t,y_1,z_1)-f(t,y_2,z_2)-\alpha_t(y_1-y_2)-\Gamma_t(z_1-z_2)|\\
\notag \le(r_t|y_1-y_2|+
\theta|z_1-z_2|)(r_t(|y_1|+|y_2|)+\theta(|z_1|+|z_2|)).
\end{eqnarray}
Sometimes we use the more restrictive conditions \begin{itemize}
\item[B1)] $\int_0^T|f(t,0,0)|dK_t+|g_t|\le \theta^2$ for all
$t\in[0,T]$,
\item[B2)] $|f_y(t,y,z)|\le r_t,\; |f_z(t,y,z)|\le r_t+\theta|z|$ for all
$(t,y,z)$,
\item[B3)] $|f_{yy}(t,y,z)|\le r_t^2,\;|f_{yz}(t,y,z)|\le \theta
r_t,\;|f_{zz}(t,y,z)|\le \theta^2$
for all $(t,y,z)$. \end{itemize}
{\bf Remark 1.} Condition A) follows from conditions B1)-B3): using
the notation $\delta y=y_1-y_2,\;\delta z=z_1-z_2$, for
$\alpha_t=f_y(t,0,0), \; \Gamma_t= f_z(t,0,0)$ by the mean value
theorem we have
$$
|f(t,y_1,z_1)-f(t,y_2,z_2)-\alpha_t\delta y-\Gamma_t(\delta z)|
$$
$$
\le|f_y(t,\nu y_1+(1-\nu)y_2,\nu z_1+(1-\nu)z_2)\delta y-f_y(t,0,0)\delta y|
$$
$$
+|f_z(t,\nu y_1+(1-\nu)y_2,\nu z_1+(1-\nu)z_2)(\delta z)
-f_z(t,0,0)(\delta z)|,
$$
for some $\nu\in [0,1].$ Using the mean value theorem again, we obtain
that
$$
|f(t,y_1,z_1)-f(t,y_2,z_2)-\alpha_t\delta y-\Gamma_t(\delta z)|
$$
$$
\le (|\nu y_1+(1-\nu)y_2|\max_{y,z}|f_{yy}(t,y,z)|+|\nu z_1+(1-\nu)z_2
|\max_{y,z}|f_{yz}(t,y,z)|)|\delta y|
$$
$$
+(|\nu y_1+(1-\nu)y_2|\max_{y,z}|f_{yz}(t,y,z)|+|\nu
z_1+(1-\nu)z_2|\max_{y,z}|f_{zz}(t,y,z)|)|\delta z|
$$
$$
\le [r_t^2(|y_1|+|y_2|)+r_t\theta(|z_1|+|z_2|)]|\delta y|
+[r_t\theta(|y_1|+|y_2|)+\theta^2(|z_1|+|z_2|)]|\delta z|
$$
$$
=(r_t|\delta y|+\theta|\delta z|)(r_t(|y_1|+|y_2|)+\theta(|z_1|+|z_2|)).
$$
{\bf Remark 2.} If $d=1$ the operator $\Gamma_t$ is given by an
$n-$dimensional vector $\gamma_t$ such that $\Gamma_t(z)=\gamma_t^*z$.
Thus the inequality in A) can be rewritten as
$$
|f(t,y_1,z_1)-f(t,y_2,z_2)-\alpha_t\delta y-\gamma_t^*\delta z|
$$
$$
\le(r_t|\delta y|+
\theta|\delta z|)(r_t(|y_1|+|y_2|)+\theta(|z_1|+|z_2|)).
$$
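To illustrate conditions B1)-B3), consider the following example of ours: in the case $n=d=1$, take the purely quadratic generator $f(t,y,z)=\frac{\theta^2}{2}z^2$ with $g\equiv 0$, $r_t\equiv 0$ and $0<\theta\le 1$. Then B1) holds since $f(t,0,0)=0$ and $g=0$; B2) holds since $f_y=0\le r_t$ and $|f_z(t,y,z)|=\theta^2|z|\le\theta|z|$; B3) holds since $f_{yy}=f_{yz}=0$ and $|f_{zz}|=\theta^2$. Generators of this type arise in exponential utility maximization (cf. \cite{HIm}).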
The main statement of the paper is the following.
{\bf Theorem 1.} Let $\xi\in L^\infty,\;d=1$ and let conditions B1)-B3) be
satisfied. Then there exists
a unique triple $(Y,Z,N)$, where
$Y\in S^\infty,Z\in H^2, N\in BMO$, that satisfies
equation (\ref{eq}),(\ref{in}).
\section{Existence of the solution}
\
First we prove the existence and uniqueness of the solution for
sufficiently small initial data.
{\bf Proposition 1}. Let $f$ and $g$ satisfy condition $A)$ with
$\alpha=0$ and $\gamma_t=0$. Then for $\xi$ with the norm
$||\xi||_\infty<\frac{1}{32\beta},
\;\beta=8\max(|r|_{2,\infty}^2,\theta^2),$ there exists a unique
solution $(Y,Z,N)$ of BSDE \begin{eqnarray}\label{eqq}
dY_t=(f(t,0,0)-f(t,Y_t,\sigma_t^*Z_t))dK_t+d\langle N\rangle_t
g_t+Z_t^*dM_t+dN_t,\\
\notag
Y_T=\xi,
\end{eqnarray}
with the norm $||(Y,Z,N)||\le R$, where $R$ is a constant satisfying
the inequality
$4||\xi||^2_\infty+\beta^2R^4\le R^2$, namely $R=2\sqrt2||\xi||_\infty.$
Moreover, if $||\xi||_\infty+||\int_0^T|f(s,0,0)|dK_s||_\infty$ is
small enough, then BSDE (\ref{eq}) admits a unique solution.
{\it Proof}. We define the mapping $(Y,Z,N)=F(y,z,n),\;\; n\;\;
\text{is orthogonal to}\;\;\; M,\;\;\;\\(y,z\cdot M+n)\in
S_T^\infty\times BMO(P)$ by the relation \begin{eqnarray}\label{eq1}
\notag dY_t=(f(t,0,0)-f(t,y_t,\sigma_t^*z_t))dK_t+d\langle
n\rangle_tg_t+Z_t^*dM_t+dN_t,\\
Y_T=\xi.
\end{eqnarray}
Using the It\^o formula for $|Y_t|^2$ we obtain that
\begin{eqnarray}\label{ito}
\notag
|Y_t|^2=|\xi|^2+2\int_t^TY_s^*(f(s,y_s,\sigma_s^*z_s)-f(s,0,0))dK_s\\
\notag
+2\int_t^TY_s^*d\langle n\rangle_sg_s-\int_t^T{\rm tr}Z_s^*d\langle M\rangle_sZ_s
-{\rm tr}\langle N\rangle_{tT}-\int_t^TY_s^*Z_s^*dM_s-\int_t^TY_s^*dN_{s}.
\end{eqnarray}
If we take the conditional expectation and use (\ref{sp}) and the
elementary inequality $2ab\le\frac{1}{4}a^2+4b^2$ we get
\begin{eqnarray}\label{itt}
\notag
|Y_t|^2+E(\int_t^T|\sigma_s^*Z_s|^2dK_s+{\rm tr}\langle N\rangle_{tT}|{\cal F}_t)\le
||\xi||^2+\frac{1}{4}||Y||_\infty^2\\
+4E^2(\int_t^T|f(s,y_s,\sigma_s^*z_s)-f(s,0,0)|dK_s+
\int_t^T|g_s|d{\rm tr}\langle n\rangle_s|{\cal F}_t). \end{eqnarray} Thus
using condition A), the identities \begin{eqnarray}\label{idd} {\rm
tr}\langle z\cdot M\rangle_t={\rm tr}\int_0^tz_s^*d\langle M\rangle_sz_s=
\int_0^t{\rm
tr}(z_s^*\sigma_s\sigma_s^*z_s)dK_s=\int_0^t|\sigma_s^*z_s|^2dK_s
\end{eqnarray} and the elementary inequalities \begin{eqnarray}
\notag \frac{1}{2}(||Y||_\infty^2+||Z\cdot M+N||_{\rm BMO}^2)
\le \max(||Y||_\infty^2,||Z\cdot M+N||_{\rm BMO}^2)\\
\notag \le
\underset{[[0,T]]}\operatornamewithlimits{ess\,sup}[|Y_t|^2+E(\int_t^T|\sigma_s^*Z_s|^2dK_s+{\rm tr}\langle
N\rangle_{tT}|{\cal F}_t)] \end{eqnarray}
we obtain from (\ref{itt})
\begin{eqnarray}\label{12} \notag
\frac{1}{4}||Y||_\infty^2+\frac{1}{2}||Z\cdot M+N||_{\rm BMO}^2\le
||\xi||^2\\
\notag
+4\underset{[[0,T]]}\operatornamewithlimits{ess\,sup}
E^2(\int_t^T|f(s,y_s,\sigma_s^*z_s)-f(s,0,0)|dK_s+
\theta^2{\rm tr}\langle n\rangle_{tT}|{\cal F}_t)\\
\notag
\le ||\xi||^2+16\underset{[[0,T]]}\operatornamewithlimits{ess\,sup}
E^2(\int_t^Tr_s^2y_s^2dK_s+\theta^2{\rm tr}\langle z\cdot M+n\rangle_{tT}|{\cal F}_t)\\
\notag
\le ||\xi||^2+16|r|_{2,\infty}^4||y||_\infty^4+16\theta^4||z\cdot
M+n||^4_{\rm BMO}.
\end{eqnarray}
Therefore
\begin{eqnarray*} \begin{gathered}
||Y||_\infty^2+||Z\cdot M+N||_{\rm BMO}^2\le 4||\xi||^2\\
+64|r|_{2,\infty}^4||y||^4_\infty+64\theta^4||z\cdot M+n||_{\rm BMO}^4\\
\le 4||\xi||^2+\beta^2(||y||_\infty^2+||z\cdot M+n||^2_{\rm
BMO})^2,
\end{gathered}
\end{eqnarray*}
where
$\beta=8\max(|r|_{2,\infty}^2,\theta^2)$. We can pick $R$ such that
$$
4||\xi||^2+\beta^2R^4\le R^2
$$
if and only if
$||\xi||_\infty \le\frac{1}{4\beta}$. For instance,
$R=2\sqrt2||\xi||_\infty$ satisfies this quadratic inequality. Therefore
the ball
$$
{\cal B}_R=\{(Y,Z\cdot M+N)\in S^\infty\times{\rm BMO},\;N\bot
M,\;\; ||Y||_\infty^2+||Z\cdot M+N||_{\rm BMO}^2\le R^2\}
$$
is such that
$F({\cal B}_R)\subset{\cal B}_R$.
Similarly for $(y^j,z^j\cdot M+n^j)\in{\cal B}_R,\;j=1,2$ using the
notations
$\delta y=y^1-y^2,\;\delta z=z^1-z^2, \;\delta n=n^1-n^2$ we can show
that
\begin{eqnarray*}
\begin{gathered}
||\delta Y||_\infty^2+||\delta Z\cdot M+\delta N||_{\rm BMO}^2\\
\le 4\underset{[[0,T]]}\operatornamewithlimits{ess\,sup}
E^2\big(\int_t^T|f(s,y_s^1,\sigma_s^*z_s^1)-
f(s,y^2_s,\sigma_s^*z_s^2)|dK_s\\
+\int_t^T|g_s|d{\rm var}({\rm tr}\langle\delta n,n^1+n^2\rangle)_s|{\cal F}_t\big)\\
\le 8\underset{[[0,T]]}\operatornamewithlimits{ess\,sup} E\big(\int_t^T(r_s^2|\delta y_s|^2+
\theta^2|\sigma_s^*\delta z_s|^2)dK_s|{\cal F}_t\big)\\
\times
E\big(\int_t^T(r_s(|y_s^1|+|y_s^2|)
+\theta(|\sigma_s^*z^1_s|+|\sigma_s^*z_s^2|))^2dK_s|{\cal F}_t\big)\\
+\theta^2E({\rm tr}\langle\delta n\rangle_{tT}|{\cal F}_t)
E({\rm tr}\langle n^1+n^2\rangle_{tT}|{\cal F}_t)
\end{gathered}
\end{eqnarray*}
Again using the equalities (\ref{idd})
we can pass to the norms. Thus
\begin{eqnarray*}
\begin{gathered}
||\delta Y||_\infty^2+||\delta Z\cdot M+\delta N||_{\rm BMO}^2\\
\le 8(|r|_{2,\infty}^2||\delta y||^2_\infty+\theta^2||\delta z\cdot M||_{\rm
BMO}^2)\\
\times(|r|_{2,\infty}^2(||y^1||^2_\infty+||y^2||^2_\infty)
+\theta^2(||z^1\cdot M||_P^2+||z^2\cdot M||_P^2))\\
+2\theta^2||\delta n||^2_{\rm BMO}
(||n^1||^2_{\rm BMO}+||n^2||^2_{\rm BMO}).
\end{gathered}
\end{eqnarray*}
Since $||z^1\cdot M||,||z^2\cdot M||\le R,||n^1||,||n^2||\le R$ we get
\begin{eqnarray}\label{144}
\begin{gathered}
||\delta Y||_\infty^2+||\delta Z\cdot M+\delta N||_{\rm BMO}^2\\
\le128\beta^2 R^2(||\delta y||_\infty^2+||\delta z\cdot M||^2_{\rm
BMO})
+4\beta^2R^2||\delta n||^2_{\rm BMO}\\
\le128\beta^2 R^2(||\delta y||_\infty^2+||\delta z\cdot M+\delta n||^2_{\rm
BMO}).
\end{gathered}
\end{eqnarray}
Now we can take $R=2\sqrt2||\xi||_\infty< \frac{1}{8\sqrt 2\beta}$. This
means that $||\xi||_\infty< \frac{1}{32\beta}$ and
$F$ is a contraction on ${\cal B}_R$. By the contraction
principle the mapping $F$ admits a unique fixed point, which is
the solution of (\ref{eqq}). \qed
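The contraction argument can be visualized on a deterministic toy problem (our illustration, not part of the proof): dropping all martingale terms and taking the hypothetical quadratic generator $f(t,y,z)=\frac{\theta}{2}y^2$, the backward integral equation $Y_t=\xi+\int_t^T\frac{\theta}{2}Y_s^2\,ds$ has the explicit solution $Y_t=\xi/(1-\frac{\theta}{2}\xi(T-t))$ for small $\xi$, and the Picard iteration $y^{(k+1)}=F(y^{(k)})$ converges to it, mirroring Proposition 1:

```python
import numpy as np

# Toy deterministic analogue of BSDE (eqq) with all martingale parts dropped
# (an illustration of ours, not the paper's equation):
#     Y_t = xi + \int_t^T (theta/2) Y_s^2 ds.
# For small |xi| the map F(y)(t) = xi + \int_t^T (theta/2) y(s)^2 ds is a
# contraction, and Picard iteration converges to the unique fixed point.
theta, xi, T, n = 1.0, 0.5, 1.0, 4000
t = np.linspace(0.0, T, n + 1)
dt = T / n

y = np.zeros(n + 1)                              # start the iteration at 0
for _ in range(60):
    integrand = 0.5 * theta * y ** 2
    # tail[i] approximates \int_{t_i}^T integrand ds (backward Riemann sum)
    tail = np.cumsum(integrand[::-1])[::-1] * dt
    y = xi + tail

exact = xi / (1.0 - 0.5 * theta * xi * (T - t))  # closed-form solution
assert np.max(np.abs(y - exact)) < 1e-2
```

The grid size and iteration count here are arbitrary choices; the contraction factor is roughly $\theta\,\|y\|_\infty T<1$, so the iterates converge geometrically.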
From now on we suppose that $d=1$.
{\bf Lemma 1}. Let condition A) be satisfied. Then the
generator $(\bar f,\bar g,\bar\xi)$, where
$$
\bar f(t,\bar y,\bar
z)=e^{\int_0^t\alpha_sdK_s}(f(t,e^{-\int_0^t\alpha_sdK_s}
\bar y,e^{-\int_0^t\alpha_sdK_s}\bar z)-f(t,0,0))-\alpha_t\bar
y-\gamma_t^*\bar z,
$$
$$
\bar g_t=e^{-\int_0^t\alpha_sdK_s}g_t\;\;\; {\rm and}\;\;\; \bar
\xi=e^{\int_0^T\alpha_sdK_s}\xi,\;\;
$$
satisfies condition A) with $\alpha=0,\;\gamma=0,\;\bar
r_t=r_te^{||\int_0^T r_sdK_s||_\infty},\; \text{and}\;\bar\theta=\theta
e^{||\int_0^T r_sdK_s||_\infty}$.
Moreover, $(Y,Z,N)$ is a solution of BSDE (\ref{eqq}) if and only
if
$$
(\bar Y_t,\bar Z_t,\bar N_t)=
(e^{\int_0^t\alpha_sdK_s}Y_t,e^{\int_0^t\alpha_sdK_s}Z_t,\int_0^te^{\int_0^s\alpha_udK_u}dN_s)
$$
is a solution w.r.t. measure $d\bar P={\cal E}_T((\gamma\sigma^{-1})\cdot
M)dP$ of BSDE
\begin{eqnarray}\label{beq}
d\bar Y_t=-\bar f(t,\bar Y_t,\sigma_t^*\bar Z_t)dK_t-d\langle \bar N\rangle_t \bar
g_t+
\bar Z_t^*d\bar M_t+d\bar N_t,\\
\notag \bar Y_T=\bar \xi,
\end{eqnarray}
where
$\bar M_t=M_t-\langle(\gamma\sigma^{-1})\cdot M,M\rangle_t$.
{\it Proof}.
Condition A) for $(\bar f,\bar g,\bar\xi)$ is satisfied since by
(\ref{est})
$$
|\bar f(t,\bar y_1,\bar z_1)-\bar f(t,\bar y_2,\bar z_2)|
$$
$$
\le e^{\int_0^t\alpha_sdK_s}(r_t|\delta \bar y|+ \theta|\delta \bar
z|)(r_t(|\bar y_1|+|\bar y_2|)+\theta(|\bar z_1|+|\bar z_2|))
$$
$$
\le(\bar r_t|\delta \bar y|+
\bar \theta|\delta \bar z|)(\bar r_t(|\bar y_1|+|\bar
y_2|)+\bar\theta(|\bar z_1|+|\bar z_2|)).
$$
On the other hand,
using the It\^o formula we have
\begin{eqnarray*}
\begin{gathered}
d\bar
Y_t=e^{\int_0^t\alpha_sdK_s}dY_t+\alpha_te^{\int_0^t\alpha_sdK_s}Y_tdK_t\\
=e^{\int_0^t\alpha_sdK_s}(f(t,0,0)-f(t,Y_t,\sigma_t^*Z_t))dK_t
+e^{\int_0^t\alpha_sdK_s}d\langle N\rangle_tg_t\\
+e^{\int_0^t\alpha_sdK_s}Z_t^*dM_t+e^{\int_0^t\alpha_sdK_s}dN_t+\alpha_t\bar
Y_tdK_t
\end{gathered}
\end{eqnarray*}
Taking into account that
\begin{eqnarray*}
\begin{gathered}
e^{\int_0^t\alpha_sdK_s}(f(t,0,0)-f(t,Y_t,\sigma_t^*Z_t))+\alpha_t\bar Y_t\\
=-\bar f(t,\bar Y_t,\sigma_t^*\bar Z_t)-\gamma_t\sigma_t^*\bar Z_t,\\
e^{\int_0^t\alpha_sdK_s}d\langle N\rangle_tg_t=d\langle\bar
N\rangle_te^{-\int_0^t\alpha_sdK_s}g_t=d\langle\bar N\rangle_t\bar g_t
\end{gathered} \end{eqnarray*} and
$$
\bar Z\cdot M-\int_0^\cdot\gamma_t\sigma_t^*\bar Z_tdK_t=\bar Z\cdot M-
\int_0^\cdot\gamma_t\sigma_t^{-1}\sigma_t\sigma_t^*\bar Z_tdK_t
$$
$$
= \bar Z\cdot M-\int_0^\cdot\gamma_t\sigma_t^{-1}d\langle M\rangle_t\bar Z_t =
\bar Z\cdot M-\langle(\gamma\sigma^{-1})\cdot M,\bar Z\cdot M\rangle =\bar
Z\cdot \bar M
$$
we obtain
$$
d\bar Y_t=-\bar f(t,\bar Y_t,\sigma_t^*\bar Z_t)dK_t-d\langle\bar N\rangle_t\bar
g_t+\bar Z_t^*d\bar M_t+
d\bar N_t.
$$
Here $\bar M$ is a local martingale w.r.t. $\bar P$ by the Girsanov
theorem. \qed
{\bf Corollary 1}. Let $f$ and $g$ satisfy condition A) and
$||\xi||_\infty\le\frac{1}{32\beta}\exp(-2||\int_0^T
r_sdK_s||_\infty)$. Then there exists a solution of (\ref{eqq})
with the norm $||Y||_\infty^2+||Z\cdot\bar M+N||_{\rm BMO(\bar
P)}^2\leq \frac{1}{128\beta^2}.$
{\it Proof}. Indeed, $$||Y||_\infty^2+||Z\cdot\bar M+N||_{\rm BMO(\bar
P)}^2\leq
\left(||\bar Y||_\infty^2+||\bar Z\cdot\bar M+\bar N||_{\rm BMO(\bar
P)}^2\right)\exp(2||\int_0^T r_sdK_s||_\infty)$$
$$\leq 8||\bar\xi||_\infty^2\exp(2||\int_0^T r_sdK_s||_\infty)
\leq 8||\xi||_\infty^2\exp(4||\int_0^T r_sdK_s||_\infty).$$
From $||\xi||_\infty\le\frac{1}{32\beta}\exp(-2||\int_0^T r_sdK_s||_\infty)$ it follows that
$8||\xi||_\infty^2\exp(4||\int_0^T r_sdK_s||_\infty)\leq \frac{1}{128\beta^2}.$
Hence we get $||Y||_\infty^2+||Z\cdot\bar M+N||_{\rm BMO(\bar P)}^2\leq\frac{1}{128\beta^2}.$ \qed
{\bf Corollary 2.} Let the generator $(f,g,\xi)$ satisfy conditions
B1)-B3) and let $(\tilde Y_t,\tilde Z_t,\tilde N_t)$ be a solution
of (\ref{eqq}). Then the BSDE
\begin{eqnarray}\label{heq}
d\hat Y_t=(f(t,\tilde Y_t,\sigma_t^*\tilde Z_t)-
f(t,\hat Y_t+\tilde Y_t,\sigma_t^*\hat Z_t+\sigma_t^*\tilde Z_t))dK_t\\
\notag -d(\langle \hat N\rangle_t+2\langle \tilde N,\hat N\rangle_t)g_t+
\hat Z_t^*dM_t+d\hat N_t,\\
\notag
\hat Y_T=\hat \xi
\end{eqnarray}
satisfies condition A) with $-\hat f(t,y,z)=f(t,\tilde
Y_t,\sigma_t^*\tilde Z_t)- f(t,y+\tilde Y_t,z+\sigma_t^*\tilde Z_t)$,
$\alpha_t=f_y(t,\tilde Y_t,\sigma_t^*\tilde Z_t)$, $\gamma_t=f_z(t,\tilde
Y_t,\sigma_t^*\tilde Z_t)$ and the new probability measure ${\mathcal
E}_T(2g\cdot \tilde N)dP$. Moreover, (\ref{heq}) admits a
unique solution $(\hat Y_t,\hat Z_t,\hat N_t)$ if
$||\hat\xi||_\infty\le\frac{1}{32\beta}\exp(-2||\int_0^T
r_sdK_s||_\infty)$.
{\it Proof}. Using a change of measure, equation (\ref{heq}) reduces to
an equation of type (\ref{eqq}). By the previous corollary we obtain the existence
and uniqueness of the solution. \qed
{\bf Lemma 2}. Let conditions B1)-B3) be satisfied and let the
random variables $\tilde \xi$ and $\hat\xi$ be such that
$\max(||\tilde\xi||_\infty,||\hat\xi||_\infty)\le\frac{1}{32\beta}e^{-2||\int_0^T
r_sdK_s||_\infty}.$ Then there exist solutions of BSDEs
(\ref{heq}) and
\begin{eqnarray}\label{16}
d\tilde Y_t=(f(t,0,0)-
f(t,\tilde Y_t,\sigma_t^*\tilde Z_t))dK_t
-d\langle \tilde N\rangle_tg_t+
\tilde Z_t^*dM_t+d\tilde N_t,\\
\notag
\tilde Y_T=\tilde \xi
\end{eqnarray}
and the triple $(Y,Z,N)=(\tilde Y+\hat Y,\tilde
Z+\hat Z,\tilde N+\hat N)$ satisfies BSDE
\begin{eqnarray}\label{18}
\notag dY_t=(f(t,0,0)- f(t,Y_t,\sigma_t^* Z_t))dK_t -d\langle N\rangle_tg_t+
Z_t^*dM_t+dN_t,\\
\notag
Y_T=\tilde \xi+\hat\xi.
\end{eqnarray}
{\it Proof}. Similarly to Remark 1 we can show
that for $\hat f(t,y,z)= f(t,\tilde Y_t,\sigma_t^*\tilde Z_t)-
f(t,y+\tilde Y_t,\sigma_t^*z+\sigma_t^*\tilde Z_t), \;
\alpha_t=f_y(t,\tilde Y_t,\sigma_t^*\tilde Z_t),\; \gamma_t=f_z(t,\tilde
Y_t,\sigma_t^*\tilde Z_t)$ the estimate
$$
|\hat f(t,y_1,z_1)-\hat f(t,y_2,z_2)-\alpha_t\delta y-\gamma_t^*\delta z|
$$
$$
\le (r_t|\delta y|+\theta|\delta z|)(r_t(|y_1|+|y_2|)+\theta(|z_1|+|z_2|)).
$$
holds.
Now by Lemma 1 and Corollary 2 we obtain the
solvability of both equations (\ref{16}), (\ref{heq}). \qed
{\bf Proposition 2.} Let $f$ and $g$ satisfy conditions B1)-B3) and
$\xi\in L^\infty$. Then BSDE (\ref{eq}) admits a solution
$(Y,Z\cdot M+N)\in S^\infty\times{\rm BMO}$.
{\it Proof}. An arbitrary $\xi\in L^\infty(R)$ can be represented as a sum
$\xi=\sum_{j=1}^m\xi_j$ with
$||\xi_j||_\infty\le\frac{1}{32\beta}\exp(-2||\int_0^T
r_sdK_s||_\infty)$.
Denote by $(Y^j,Z^j,N^j),\;j=1,...,m$ the solution of
\begin{eqnarray}\label{ieq}
\notag
dY^j_t=(f(t,Y^0_t+...+Y^{j-1}_t,\sigma_t^*(Z^0_t+...+Z^{j-1}_t))\\
-f(t,Y^0_t+...+Y^j_t,\sigma_t^*(Z^0_t+...+Z^j_t)))dK_t\\
\notag -d(\langle N^j\rangle_t+2\langle N^j,N^0+...+N^{j-1}\rangle_t)g_t+
Z_t^{j*}dM_t+dN^j_t,\\
\notag Y^j_T=\xi_j,\\
\notag Y^0=0,\;\;Z^0=0,\;\;N^0=0.
\end{eqnarray}
By Corollary 1 we get
$$
||Y^j||_\infty^2+||Z^j\cdot M^j+N^j||_{{\rm BMO}(P^j)}^2\leq
\frac{1}{128\beta^2},
$$
$$
where $dP^j={\cal E}_T(\int_0^\cdot
f_z(s,Y^0_s+...Y_s^{j-1},\sigma_s^*(Z_s^0+...+Z_s^{j-1}))\sigma_s^{-1}dM_s)dP,
$ and $ M^j=M-\langle
f_z(\cdot,Y^0+...+Y^{j-1},\sigma^*(Z^0+...+Z^{j-1}))\sigma^{-1}\cdot
M,M\rangle.$
Using Lemma 2 we get the existence of a solution for BSDE
\begin{eqnarray*}
d\bar Y_t=(f(t,0,0)-f(t,\bar Y_t,\sigma_t^*Z_t))dK_t-d\langle N\rangle_tg_t+
Z_t^*dM_t+dN_t,\\
\notag \bar Y_T=\xi.
\end{eqnarray*}
Since $\int_0^Tf(t,0,0)dK_t$ is bounded we can apply the above
argument with $f$ replaced by $\bar
f(t,y,z)=f(t,y-\int_0^tf(s,0,0)dK_s,z)$ to get the existence of
solution
\begin{eqnarray*}
d\bar Y_t=(f(t,0,0)-f(t,\bar
Y_t-\int_0^tf(s,0,0)dK_s,\sigma_t^*Z_t))dK_t-d\langle N\rangle_tg_t+
Z_t^*dM_t+dN_t,\\
\notag \bar Y_T=\xi+\int_0^Tf(s,0,0)dK_s.
\end{eqnarray*}
Obviously, $Y_t=\bar Y_t-\int_0^tf(s,0,0)dK_s$ is a solution of
BSDE (\ref{eq}), (\ref{in}). \qed
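As an explicit illustration of the existence result (this worked example is ours), take the Brownian case $M=W$, $K_t=t$, $\sigma=1$, $n=d=1$ with a Brownian filtration (so that $N=0$), $g\equiv 0$, and the quadratic generator $f(t,y,z)=\frac{\theta^2}{2}z^2$. If $(Y,Z)$ is a bounded solution, the It\^o formula gives
$$
d\big(e^{\theta^2 Y_t}\big)=e^{\theta^2 Y_t}\Big(\theta^2 dY_t+\frac{\theta^4}{2}Z_t^2dt\Big)
=\theta^2 e^{\theta^2 Y_t}Z_t\,dW_t,
$$
so $e^{\theta^2 Y_t}$ is a martingale and
$$
Y_t=\frac{1}{\theta^2}\ln E\big(e^{\theta^2\xi}|{\cal F}_t\big),
$$
which is bounded for bounded $\xi$; this is the familiar exponential (entropic) representation arising in utility maximization (cf. \cite{HIm}).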
\
\section{A comparison theorem for BSDEs}
\
Let us consider BSDE (\ref{eq}), (\ref{in}) in the case $d=1$.
{\bf Lemma 3.} Let $\xi\in L^\infty$ and assume that
there are positive constants $C(f),C(g)$, an increasing function $\lambda:
R^+\to R^+$,
bounded on all bounded subsets, and a predictable
process $k\in H^2(R,1)$ such that
\begin{equation}\label{2.3}
|f(t,y,z)|\le k^2_t\lambda(|y|)+ C(f)|z|^2,
\end{equation}
\begin{equation}\label{2.4}
|g(t)|\le C(g).
\end{equation}
Then the martingale part of any bounded solution of
(\ref{eq}),(\ref{in}) belongs to the space ${\rm BMO}(P)$.
{\it Proof.} Let $Y$ be a solution of (\ref{eq}), (\ref{in}) such that
for some
constant $C>0$
$$
|Y_t|\le C\;\;\;\text{a.s.}\;\;\;\text{for all}\;\;\; t.
$$
Applying the It\^o formula to
$\exp\{\beta Y_T\}-\exp\{\beta Y_\tau\}$ and using the
boundary condition $Y_T=\xi$ we have
\begin{eqnarray}\label{3.5}
\begin{gathered}
\frac{\beta^2}{2}\int_\tau^Te^{\beta Y_s}Z^*_sd\langle M\rangle_sZ_s+
\frac{\beta^2}{2}\int_\tau^Te^{\beta Y_s}d\langle N\rangle_s\\
-\beta\int_\tau^Te^{\beta Y_s}f(s,Y_s,Z_s)dK_s-
\beta\int_\tau^Te^{\beta Y_s}g(s)d\langle N\rangle_s\\
+\beta\int_\tau^Te^{\beta Y_s}Z_s^*dM_s+
\beta\int_\tau^Te^{\beta Y_s}dN_s=
e^{\beta \xi}-e^{\beta Y_\tau}\le e^{\beta C},
\end{gathered}
\end{eqnarray}
where $\beta$ is a constant yet to be determined.
If $Z\cdot M$ and $N$ are square integrable martingales, then taking
conditional expectations in (\ref{3.5}) we obtain
\begin{eqnarray*} \begin{gathered}
\frac{\beta^2}{2}E\big(\int_\tau^Te^{\beta Y_s}Z^*_sd\langle
M\rangle_sZ_s|F_\tau\big) +\frac{\beta^2}{2}E\big(\int_\tau^Te^{\beta
Y_s}d\langle N\rangle_s|F_\tau\big)
\\
\le e^{\beta C}+\beta E\big(\int_\tau^Te^{\beta Y_s}
|f(s,Y_s,Z_s)|dK_s|F_\tau\big)
+\beta E\big(\int_\tau^Te^{\beta Y_s}|g(s)|d\langle N\rangle_s|F_\tau\big)
\end{gathered}
\end{eqnarray*}
Now if we use the estimates (\ref{2.3}), (\ref{2.4}) we get
\begin{eqnarray*}
\begin{gathered}
\frac{\beta^2}{2}E\big(\int_\tau^Te^{\beta Y_s}Z^*_sd\langle
M\rangle_sZ_s|F_\tau\big)
+\frac{\beta^2}{2}E\big(\int_\tau^Te^{\beta Y_s}d\langle
N\rangle_s|F_\tau\big)\\
\le e^{\beta C}+\beta\lambda(C)E\big(\int_\tau^Te^{\beta
Y_s}k_s^2dK_s|F_\tau\big)\\
+\beta C(f)E\big(\int_\tau^Te^{\beta
Y_s}|\sigma_s^*Z_s|^2dK_s|F_\tau\big)
+\beta E\big(\int_\tau^Te^{\beta Y_s}|g(s)|d\langle N\rangle_s|F_\tau\big)\\
\le e^{\beta C}+\beta\lambda(C)E\big(\int_\tau^Te^{\beta
Y_s}k_s^2dK_s|F_\tau\big)\\
+\beta C(f)E\big(\int_\tau^Te^{\beta Y_s}Z_s^*d\langle
M\rangle_sZ_s|F_\tau\big)
+C(g)\beta E\big(\int_\tau^Te^{\beta Y_s}d\langle N\rangle_s|F_\tau\big).
\end{gathered}
\end{eqnarray*}
Therefore
\begin{eqnarray}\label{3.6}
\begin{gathered}
(\frac{\beta^2}{2}-\beta C(f))E\big(\int_\tau^T
e^{\beta Y_s}Z^*_sd\langle M\rangle_sZ_s|F_\tau\big)+
\\
+(\frac{\beta^2}{2}-\beta C(g))E\big(\int_\tau^T
e^{\beta Y_s}d\langle N\rangle_s|F_\tau\big)\le
\\
\le e^{\beta C}+\beta\lambda(C) E\big(\int_\tau^Te^{\beta Y_s}
k^2_sdK_s|F_\tau\big).
\end{gathered}
\end{eqnarray}
Taking $\beta = 4 \overline C$, where $\overline C=\max(C(f),C(g))$,
from (\ref{3.6}) we have
$$
4\overline C^2[E\big(\int_\tau^T
e^{\beta Y_s}Z^*_sd\langle M\rangle_sZ_s|F_\tau\big)+
E\big(\int_\tau^T
e^{\beta Y_s}d\langle N\rangle_s|F_\tau\big)]\le
$$
$$
\le e^{4 C \overline C}\big(4\overline C\lambda(C)||k||^2_{H}+1\big).
$$
Since $Y\ge -C$, from the latter inequality we finally obtain the
estimate
$$
E\big(\langle Z\cdot M\rangle_{\tau T}|F_\tau\big)+
E\big(\langle N\rangle_{\tau T}|F_\tau\big)\le
$$
\begin{equation}\label{3.7}
\le \frac{e^{8 C\overline C}[4\overline C\lambda(C)||k||^2_{H}+1]}
{4 \overline C^2}
\end{equation}
for any stopping time $\tau$; hence
$Z\cdot M, N\in {\rm BMO}$.
For general $Z\cdot M$ and $N$ we stop at localizing times $\tau_n$ and derive
(\ref{3.7}) with $T$ replaced by $\tau_n$. Letting $n\to\infty$ then
completes the proof. \qed
In what follows we use the following notation. Let $(Y,Z),(\widetilde Y,\widetilde Z)$ be two pairs
of processes and $(f,g,\xi),(\tilde f,\tilde g,\tilde\xi)$ two triples of
generators. Then we denote:
$$
\delta f=f-\tilde f,\;\;\delta g=g-\tilde g,\;\;\delta\xi=\xi-\tilde\xi,
$$
$$
\partial_yf(t,Y_t,\widetilde Y_t,Z_t)\equiv\partial_y f(t)
=\frac{f(t,Y_t,Z_t)-f(t,\widetilde Y_t,Z_t)}{Y_t-\widetilde Y_t}
$$
$$
\text{for all}\;\;j=1,\dots,n,\;\;\partial_{j}f(t,\widetilde Y_t,Z_t,\widetilde
Z_t)\equiv\partial_j f(t)
$$
$$
=\frac{
f(t,\widetilde Y_t,Z^1_t,...,Z^{j-1}_t,Z^j_t,\widetilde Z^{j+1}_t,...,\widetilde Z^n_t)-
f(t,\widetilde Y_t,Z^1_t,...,Z^{j-1}_t,\widetilde Z^j_t,\widetilde Z^{j+1}_t,...,\widetilde
Z^n_t)}
{Z^j_t-\widetilde Z^j_t},
$$
$$
\nabla f(t)=(\partial_{1}f(t),...,\partial_{n}f(t))^*.
$$
Thus we have
\begin{eqnarray}\label{38}
f(t,Y_t,Z_t)-f(t,\widetilde Y_t,\widetilde Z_t)=\partial_yf(t)\delta Y_t+\nabla
f(t)^*\delta Z_t.
\end{eqnarray}
{\bf Theorem 2.} Let $Y$ and $\widetilde Y$ be the bounded solutions of
SBE (\ref{eq}) with generators $(f,g,\xi)$ and $(\tilde f,\tilde
g,\tilde\xi)$ respectively, satisfying the conditions of Lemma 3.
If $\xi\ge \tilde\xi$ (a.s.), $f(t,y,z)\ge \tilde f(t,y,z)$
($\mu^{K}$-a.e.), $g(t)\ge \tilde g(t)$ ($\mu^{\langle N\rangle}$-a.e.) and
$f$ (or $\tilde f$) satisfies the following Lipschitz conditions:
L1) for any $Y,\widetilde Y,Z$
$$
\frac{f(t,Y_t,Z_t)-f(t,\widetilde Y_t,Z_t)}{Y_t-\widetilde Y_t}\in S^\infty,
$$
L2) for any $Z,\widetilde Z\in H^2$ and any bounded process $Y$
$$
(\sigma_t\sigma_t^*)^{-1}\nabla f(t,Y_t,Z_t,\widetilde Z_t)\in H^2(R^n,\sigma),
$$
then $Y_t\ge \widetilde Y_t$ a.s. for all $t\in [0,T]$.
{\it Proof.} Taking the difference of the equations (\ref{eq}),
(\ref{in}) with generators $(f,g,\xi)$ and $(\widetilde f,\widetilde g,\widetilde \xi)$
respectively, we have
$$
Y_t-\widetilde Y_t= Y_0 -\widetilde Y_0
$$
$$
-\int_0^t[f(s,Y_s,Z_s)-f(s,\widetilde Y_s,\widetilde Z_s)]dK_s
$$
$$
-\int_0^t[f(s,\widetilde Y_s,\widetilde Z_s)-\tilde f(s,\widetilde Y_s,\widetilde Z_s)]dK_s-
\int_0^t[g(s)-\tilde g(s)]d\langle N\rangle_s
$$
\begin{equation}
-\int_0^t\widetilde g(s)d(\langle N\rangle_s-\langle \widetilde N\rangle_s)+\int_0^t(Z_s-\widetilde
Z_s)dM_s+N_t-\widetilde N_t.
\end{equation}
Let us define the measure $Q$ by $dQ={\cal E}_T(\Lambda)dP$,
where
$$
\Lambda_t=\int_0^t\nabla f(s)^*(\sigma_s\sigma_s^*)^{-1}dM_s
+\int_0^t\widetilde g(s)d(N_s+\widetilde N_s).
$$
By Lemma 3, $Z,\widetilde Z\in H^2$ and $N$, $\widetilde N$ are BMO-martingales.
Therefore
conditions L1), L2) and (\ref{2.4}) imply that $\Lambda\in {\rm BMO}$ and
hence $Q$ is a
probability measure equivalent to $P$.
Denote by $\bar \Lambda$ the martingale part of $\delta Y=Y-\widetilde Y$, i.e.,
$$
\bar \Lambda=(Z-\widetilde Z)\cdot M+ N-\widetilde N.
$$
Therefore, by Girsanov's Theorem and by (\ref{38}) the process
$$
\delta Y_t+\int_0^t(\partial_yf(s)\delta Y_s+\nabla f(s)^*\delta Z_s)dK_s
$$
$$
+\int_0^t\delta f(s,\widetilde Y_s,\widetilde Z_s)dK_s+\int_0^t\delta g(s)d\langle N\rangle_s
$$
$$
=\delta Y_t+\int_0^t(\partial_yf(s)\delta Y_s+\delta f(s,\widetilde Y_s,\widetilde
Z_s))dK_s
$$
$$
+\int_0^t\nabla f(s)^*(\sigma_s\sigma_s^*)^{-1}d\langle M\rangle_s\delta
Z_s+\int_0^t\delta g(s)d\langle N\rangle_s
$$
$$=-\int_0^t\widetilde g(s)d(\langle
N\rangle_s-\langle \widetilde N\rangle_s)+\int_0^t(Z_s-\widetilde Z_s)dM_s+N_t-\widetilde N_t $$
$$
=\bar \Lambda_t- \langle\Lambda,\bar \Lambda\rangle_t,
$$
is a local martingale under $Q$. Moreover, since by Lemma 3 $\bar
\Lambda\in {\rm BMO}$, Proposition 11 of \cite{DdM} implies that
$$
\bar \Lambda_t-\langle\Lambda,\bar \Lambda\rangle_t\in {\rm BMO}(Q).
$$
Thus, using the martingale property and the boundary conditions
$Y_T=\xi, \widetilde Y_T=\widetilde \xi$ we have
$$
Y_t-\widetilde Y_t=
$$
$$
=E^Q\big(e^{\int_t^T\partial_yf_sdK_s}(\xi-\tilde \xi)
$$
$$
+\int_t^T e^{\int_t^s\partial_yf_udK_u}(f(s,\widetilde Y_s,\widetilde Z_s)-\tilde
f(s,\widetilde Y_s,\widetilde Z_s)) dK_s|F_t\big)
$$
\begin{equation*}
+E^Q\big(\int_t^Te^{\int_t^s\partial_yf_udK_u}(g(s)-\widetilde g(s))d\langle
N\rangle_s|F_t\big),
\end{equation*}
which implies that $Y_t\ge \widetilde Y_t$ a.s. for all $t\in[0,T]$.\qed
{\bf Corollary.} Let condition A) be satisfied. Then, if a
solution of (\ref{eq}), (\ref{in}) exists, it is unique.
The proof of {\bf Theorem 1} now follows from the last corollary
and Proposition 2.
{\bf Remark.} Conditions L1), L2) are satisfied if there is a constant
$C>0$ such that
$$
|f(t,y,z)-f(t,\tilde y,\tilde z)|\le C|y-\tilde y|+C|z-\tilde z|(|z|+|\tilde z|)
$$
and ${\rm tr}(\sigma_t\sigma_t^*)^{-1}\le C$ $\text{for all}\;\;\; y, \tilde y\in
R, \; z, \tilde z\in R^n,\;t\in [0,T].$ Conditions L1), L2) are also
fulfilled if $f(t,y,z)$ satisfies the global Lipschitz condition
and $M\in {\rm BMO}$.
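The ordering asserted by Theorem 2 can also be checked numerically on a deterministic toy equation (our illustration, with a hypothetical Picard solver; none of this is part of the paper's general semimartingale setting): for $Y_t=\xi+\int_t^T\frac{\theta}{2}Y_s^2\,ds$ with terminal values $\xi\ge\tilde\xi\ge 0$, the solutions remain ordered for all $t$:

```python
import numpy as np

# Numerical check of the comparison property on the toy deterministic
# backward equation Y_t = xi + \int_t^T (theta/2) Y_s^2 ds
# (an illustration of ours, not the paper's general setting).
def solve(xi, theta=1.0, T=1.0, n=4000, iters=60):
    """Picard iteration for the backward integral equation."""
    dt = T / n
    y = np.zeros(n + 1)
    for _ in range(iters):
        # backward Riemann sum approximating \int_t^T (theta/2) y^2 ds
        tail = np.cumsum((0.5 * theta * y ** 2)[::-1])[::-1] * dt
        y = xi + tail
    return y

y_big, y_small = solve(0.5), solve(0.3)   # terminal values 0.5 >= 0.3
assert np.all(y_big >= y_small)           # ordering propagates backward
```

The parameter values are arbitrary small choices for which the Picard map is a contraction; larger terminal data would require the full localization machinery of the paper.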
\
{\bf Acknowledgements}
\
I am thankful to M. Mania for useful discussions and remarks.
I would also like to thank the referee for valuable remarks and
suggestions which led to many improvements in the original
version of the paper.
\
\begin{thebibliography}{50}
\bibitem{Bs} Bismut J. M. (1973) Conjugate convex functions in optimal
stochastic
control, {\it J. Math. Anal. Appl.} {\bf 44}, 384--404.
\bibitem{BSc} Bobrovnytska O. and Schweizer M. (2004)
Mean-Variance Hedging and Stochastic Control: Beyond the Brownian
Setting, IEEE Transactions on Automatic Control 49, 396-408.
\bibitem{B} Buckdahn R., (1993) Backward stochastic differential
equation driven by Martingale, Preprint.
\bibitem{Ch}
{Chitashvili R.}, (1983) Martingale ideology in the theory of
controlled stochastic processes,{\it Lect. Notes in Math.},
Springer, Berlin, N. 1021, pp. 73-92.
\bibitem{DM}
{Dellacherie C. and Meyer P. A.}, (1980) {\it Probabilit\'{e}s et
potentiel. Chapitres V a VIII. Th\'{e}orie des martingales.}
Actualit\'{e}s Scientifiques et Industrielles Hermann, Paris.
\bibitem {DdM} {Doleans-Dade C. and Meyer P. A.}, (1979)
In\'{e}galit\'{e}s de
normes avec poids, S\'{e}minaire de Probabilit\'{e}s XIII, Lect.
Notes in Math., Springer,
Berlin, N. 721, pp. 204-215.
\bibitem{El-H}
{El Karoui N., Huang S.J.}, (1997) A general result of existence and
uniqueness of
backward stochastic differential equations.
Pitman Res. Notes Math. Ser.,364, Longman, Harlow, 27-36.
\bibitem{HIm} Hu Y., Imkeller P. and M\"uller M., (2005),
Utility maximization in incomplete markets. Ann. Appl. Probab. 15 , no. 3, 1691-1712.
\bibitem{J}
{Jacod J.}, (1979) {\it Calcul stochastique et probl\`{e}mes
de martingales. Lecture Notes in Math.}, Springer, Berlin, N. 714.
\bibitem{Kaz} {Kazamaki N.}, (1994) {\it Continuous exponential
martingales
and BMO, Lecture Notes in Math.}, Springer, Berlin, N. 1579.
\bibitem{Kob} {Kobylanski M.}, (2000) Backward stochastic differential
equation and
partial differential equations with quadratic growth, The Annals of
Probability,
vol. 28, N2, 558-602.
\bibitem{LS} {Lepeltier J.P. and San Martin J.}, (1998) Existence for
BSDE with
superlinear-quadratic coefficient, Stoch. Stoch. Rep. 63, 227-240.
\bibitem{L-Sh} {Liptser R.Sh. and Shiryaev A.N.}, (1986) {\it
Martingale
theory}, Nauka, Moscow.
\bibitem{m-sc} Mania M. and Schweizer M., (2005) Dynamic exponential
utility
indifference valuation, Ann. Appl. Probab. 15, no. 3, 2113--2143.
\bibitem{M-T} Mania M. and Tevzadze R., (2000) A Semimartingale Bellman
equation and
the variance-optimal martingale measure, {\it Georgian Math. J.}
{\bf 7}, no. 4, 765--792.
\bibitem{MRT} Mania M. and Tevzadze R. (2003) A Unified
Characterization of
$q$-optimal and minimal entropy martingale measures by Semimartingale
Backward Equations, {\it Georgian Math. J.} vol. 10, No. 2, 289-310.
\bibitem{MT7} Mania M. and Tevzadze R. (2003) Backward Stochastic PDE
and
Imperfect Hedging, International Journal of Theoretical and Applied
Finance,
vol. 6, No. 7, 663-692.
\bibitem{M-S-T22} Mania M., Santacroce M. and Tevzadze R. (2003)
A Semimartingale BSDE related to the minimal entropy martingale
measure,
Finance and Stochastics, vol.7, No 3, 385-402.
\bibitem{M-T2} Mania M. and Tevzadze R. (2003) A Semimartingale Bellman
equation
and the variance-optimal martingale measure under general information
flow,
SIAM Journal on Control and Optimization,
Vol. 42, N5, 1703-1726.
\bibitem{M-Tex} Mania M. and Tevzadze R. (2005) An exponential
martingale equation,
From Stochastic Calculus to Mathematical Finance, The Shiryaev
Festschrift, Springer, 507-516.
\bibitem{Mrl} Morlais M.A., Quadratic Backward Stochastic Differential
Equations (BSDEs) Driven by a Continuous Martingale and
Application to the Utility Maximization Problem,
http://hal.ccsd.cnrs.fr/ccsd-00020254/
\bibitem{PP} Pardoux E. and Peng S.G. (1990) Adapted solution of a
backward
stochastic differential equation, {\it Systems Control Lett.} {\bf 14},
55--61.
\end{thebibliography}
\end{document}
\begin{document}
\vbox to 0pt{\vskip-1in
\centerline{[published in \emph{New York Journal of Mathematics} 18 (2012) 261--273]}
\centerline{\url{http://nyjm.albany.edu/j/2012/18-13.html}}
\vss}
\begin{abstract}
We prove a few basic facts about the space of bi-invariant (or left-invariant) total order relations on a torsion-free, nonabelian, nilpotent group~$G$. For instance, we show that the space of bi-invariant orders has no isolated points (so it is a Cantor set if $G$ is countable), and give examples to show that the outer automorphism group of~$G$ does not always act faithfully on this space. Also, it is not difficult to see that the abstract commensurator group of~$G$ has a natural action on the space of left-invariant orders, and we show that this action is faithful. These results are related to recent work of T.\,Koberda that shows the automorphism group of~$G$ acts faithfully on this space.
\end{abstract}
\maketitle
\setcounter{tocdepth}{1}
\tableofcontents
\section{Introduction} \label{IntroSect}
\begin{defn} Let $G$ be an abstract group.
\begin{itemize}
\item A total order~$\prec$ on the elements of~$G$ is:
\begin{itemize}
\item \emph{left-invariant} if $x \prec y \Rightarrow gx \prec gy$, for all $x,y,g \in G$,
and
\item \emph{bi-invariant} if it is both left-invariant and \emph{right-invariant} (which means $x \prec y \Rightarrow xg \prec yg$, for all $x,y,g \in G$).
\end{itemize}
\item The set of all left-invariant orders on~$G$ is denoted $\LO(G)$.
It has a natural topology that makes it into a compact, Hausdorff space: for any $x,y \in G$, we have the basic open set $\{\, {\prec} \in \LO(G) \mid x \prec y \,\}$ (see~\cite{Sikora-TopOrders}).
\item The set of all bi-invariant orders on~$G$ is denoted $\BO(G)$. It is a closed subset of $\LO(G)$.
\item Any group isomorphism $G_1 \stackrel{\cong}{\to} G_2$ induces a bijection $\LO(G_1) \to \LO(G_2)$. Therefore, the automorphism group $\Aut(G)$ acts on $\LO(G)$. (Furthermore, the subset $\BO(G)$ is invariant.)
\end{itemize}
\end{defn}
It is known \cite[Thm.~B, p.~1688]{Navas-DynamicsLO} that if $G$ is a locally nilpotent group, and $G$ is not an abelian group of rank\/~$\le 1$, then the space of left-invariant orders on~$G$ has no isolated points. We prove the same for the space of bi-invariant orders:
\begin{prop} \label{BONoIsolated}
If $G$ is a locally nilpotent group, and $G$ is not an abelian group of rank\/~$\le 1$, then the space of bi-invariant orders on~$G$ has no isolated points.
\end{prop}
We also prove some variants of the following recent result.
\begin{thm}[{T.\,Koberda \cite{Koberda-Faithful}}]
If $G$ is a finitely generated group that is residually torsion-free nilpotent, then the natural action of\/ $\Aut(G)$ on\/ $\LO(G)$ is faithful.
\end{thm}
Our modifications of Koberda's theorem replace the automorphism group of~$G$ with the larger group of abstract commensurators. Before stating these results, we present some background material.
\begin{defn}
Let $G$ be a group.
\begin{itemize}
\item A \emph{commensuration} of~$G$ is an isomorphism $\phi \colon G_1 \to G_2$, where $G_1$ and~$G_2$ are finite-index subgroups of~$G$.
\item Two commensurations $\phi \colon G_1 \to G_2$ and $\phi' \colon G_1' \to G_2'$ are \emph{equivalent} if there exists a finite-index subgroup~$H$ of $G_1 \cap G_1'$, such that $\phi$ and~$\phi'$ have the same restriction to~$H$.
\item The equivalence classes of the commensurations of~$G$ form a group that is denoted $\Comm(G)$. It is called the \emph{abstract commensurator} of~$G$.
\end{itemize}
\end{defn}
In general, there is no natural action of $\Comm(G)$ on $\LO(G)$, but the following observation provides a special case in which we do have such an action:
\begin{lem} \label{CommGActLO}
Let $G$ be a torsion-free, locally nilpotent group.
\begin{enumerate}
\item \label{CommGActLO-bij}
If $H$ is any finite-index subgroup of~$G$, then the natural restriction map\/ $\LO(G) \to \LO(H)$ is a bijection.
\item \label{CommGActLO-act}
Therefore, there is a natural action of\/ $\Comm(G)$ on $\LO(G)$.
\end{enumerate}
\end{lem}
This action allows us to state the following variant of Koberda's theorem that replaces $\Aut(G)$ with $\Comm(G)$, but, unfortunately, requires $G$ to be locally nilpotent, not just residually nilpotent.
\begin{prop} \label{NilpCommFaithful}
If $G$ is a nonabelian, torsion-free, locally nilpotent group, then the action of\/ $\Comm(G)$ on\/ $\LO(G)$ is faithful.
\end{prop}
\begin{rem}
The \namecref{NilpCommFaithful} assumes that $G$ is nonabelian. When $G$ is abelian, \cref{WhenAbelFaithful} shows that the action of $\Comm(G)$ is faithful iff the subgroup~$G^n$ of $n^{\text{th}}$ powers has infinite index in~$G$, for all $n \ge 2$.
\end{rem}
For non-nilpotent groups, $\Comm(G)$ may not act on $\LO(G)$, but it does act on a certain space $\VLO(G)$ that contains $\LO(G)$ \csee{VirtLOSect}. (It is a space of left-invariant orders on finite-index subgroups of~$G$.) This action allows us to state the following generalization of Koberda's Theorem:
\begin{cor} \label{CommMovesLOG}
If $G$ is a nonabelian group that is residually locally torsion-free nilpotent, and $\alpha$ is any nonidentity element of\/ $\Comm(G)$, then there exists ${\prec} \in \LO(G)$, such that ${\prec}^\alpha \neq {\prec}$.
\end{cor}
The following is an immediate consequence:
\begin{cor} \label{CommFaithful}
If $G$ is a nonabelian group that is residually locally torsion-free nilpotent, then the action of\/ $\Comm(G)$ on\/ $\VLO(G)$ is faithful.
\end{cor}
There is a natural action of the outer automorphism group $\Out(G)$ on $\BO(G)$, because every inner automorphism acts trivially on this space.
T.\,Koberda \cite[\S6]{Koberda-Faithful} observed that if $G$ is the fundamental group of the Klein bottle, then this action is not faithful. (Note that this group~$G$ is solvable. In fact, it is polycyclic and metabelian). In \cref{NonFaithfulSect}, we improve this example by exhibiting finitely generated, nilpotent groups for which the action is not faithful. (Like Koberda's, our groups are polycyclic and metabelian.)
\section{Preliminaries} \label{PrelimSect}
\subsection{Preliminaries on nilpotent groups}
\begin{defn}[{}{\cite[p.~85 (1)]{LennoxRobinson-ThyInfSolvGrps}}]
If $G$ is a solvable group, then, by definition, there is a subnormal series
\begin{align*}
G = G_r \triangleright G_{r-1} \triangleright \cdots \triangleright G_1 \triangleright G_0 = \{e\}
, \end{align*}
such that each quotient $G_i/G_{i-1}$ is abelian. The \emph{Hirsch rank} of~$G$ is the sum of the (torsion-free) ranks of these abelian groups. (This is also known as the \emph{torsion-free rank} of~$G$.) More precisely,
$$ \rank G = \sum_{i = 1}^r \dim_{\mathbb{Q}} \bigl( (G_i/G_{i-1}) \otimes \mathbb{Q} \bigr) .$$
It is not difficult to see that this is independent of the choice of the subnormal series.
\end{defn}
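To illustrate the definition, here is a worked example (added for illustration; it is not part of the original text) computing the Hirsch rank of the discrete Heisenberg group.

```latex
As an illustration, the discrete Heisenberg group
$H = \langle x,y,z \mid [x,y]=z,\ \text{$z$ central} \rangle$
admits the subnormal series
$$ \{e\} \;\triangleleft\; \langle z \rangle \;\triangleleft\; \langle y,z \rangle \;\triangleleft\; H , $$
in which each successive quotient is infinite cyclic
(note $\langle y,z \rangle \triangleleft H$ because $[x,y] = z \in \langle y,z \rangle$), so
$$ \rank H = 1 + 1 + 1 = 3 . $$
```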
\begin{notation}
Let $S$ be a subset of a group~$G$.
\@nobreaktrue\nopagebreak
\begin{itemize}
\item As usual, we use $\langle S \rangle$ to denote the smallest subgroup of~$G$ that contains~$S$.
\item We let
$$ \isol{S}_G = \bigset{x \in G }{ \exists m \in \mathbb{Z}^+, x^m \in \langle S \rangle }.$$
When the group~$G$ is clear from the context, we usually omit the subscript, and write merely $\isol{S}$.
\end{itemize}
\end{notation}
\begin{lem}[{}{\cite[2.3.1(i)]{LennoxRobinson-ThyInfSolvGrps}}]
If $G$ is a locally nilpotent group, then $\isol{S}$ is a subgroup of~$G$, for all $S \subseteq G$.
\end{lem}
\begin{rem}
A subgroup~$H$ is said to be \emph{isolated} if $H = \isol{H}$, but we do not need this terminology.
\end{rem}
We provide a proof of the following well-known fact, because we do not have a convenient reference for it.
\begin{lem} \label{Rank(Abelianization)}
If $G$ is a finitely generated, nilpotent group, and $\rank G \ge 2$, then
$$\rank \bigl( G/ [G,G] \bigr) \ge 2 .$$
\end{lem}
\begin{proof}
For every proper subgroup~$H$ of~$G$, such that $\isol{H} = H$, we have $N_G(H) = \isol{N_G(H)}$ \cite[2.3.7]{LennoxRobinson-ThyInfSolvGrps} and $N_G(H) \supsetneq H$ \cite[Cor.~10.3.1, p.~154]{Hall-ThyGrps}. This implies $\rank N_G(H) > \rank H$, so, for any $g \in G$, there is a subnormal series
$$ \isol{g} = G_0 \triangleleft G_1 \triangleleft \cdots \triangleleft G_{s-1} \triangleleft G_s = G ,$$
with $\isol{G_i} = G_i$ for every~$i$. By refining the series, we may assume $1 + \rank G_{s-1} = \rank G$.
Then $G/ G_{s-1}$ is a torsion-free, nilpotent group of rank~1, and is therefore abelian (cf.\ \cite[2.3.9(i)]{LennoxRobinson-ThyInfSolvGrps}), so $[G,G] \subseteq G_{s-1}$. Since $G_{s-1}$ also contains~$g$, we conclude that $\isol{g, [G,G]} \neq G$. This implies $\rank \bigl( G/ [G,G] \bigr) \ge 2$, because $g$ is an arbitrary element of~$G$.
\end{proof}
\subsection{Preliminaries on ordered groups}
\begin{defn}[{}{\cite[pp.~29, 31, and 34]{KopytovMedvedev-ROGrps}}]
Let $\prec$ be a left-invariant order on a group~$G$.
\begin{itemize}
\item A subgroup $C$ of~$G$ is \emph{convex} if, for all $c,c' \in C$, and all $g \in G$, such that $c \prec g \prec c'$, we have $g \in C$.
\item We say that $C_2/C_1$ is a \emph{convex jump} if $C_1$ and~$C_2$ are convex subgroups, and $C_1$ is the maximal convex proper subgroup of~$C_2$.
\item A convex jump $C_2/C_1$ is \emph{Archimedean} if there is a nontrivial homomorphism $\varphi \colon C_2 \to \mathbb{R}$, such that, for all $c,c' \in C_2$, we have $\varphi(c) < \varphi(c') \Rightarrow c \prec c'$. (Since $C_1$ is the maximal convex subgroup of~$C_2$, it is easy to see that this implies $\ker \varphi = C_1$.)
\end{itemize}
\end{defn}
\begin{rem}[{}{\cite[Thm.~2.1.1, p.~31]{KopytovMedvedev-ROGrps}}] \label{SetOfConvexSubgrps}
If $\prec$ is a left-invariant order on a group~$G$, then it is easy to see that the set of convex subgroups is totally ordered under inclusion, and is closed under arbitrary intersections and unions. Therefore, each element~$g$ of~$G$ determines a convex jump $C_2(g)/C_1(g)$, defined by letting
\begin{itemize}
\item $C_2(g)$ be the (unique) smallest convex subgroup of~$G$ that contains~$g$,
and
\item $C_1(g)$ be the (unique) largest convex subgroup of~$G$ that does not contain~$g$.
\end{itemize}
\end{rem}
The following easy observation is well known:
\begin{lem}[{cf.\ \cite[Lem.~5.2.1, p.~132]{KopytovMedvedev-ROGrps}}] \label{ChangeOnQuot}
Let
\begin{itemize}
\item $\prec$ be a left-invariant order on a group~$G$,
\item $C_1$ and~$C_2$ be convex subgroups of~$G$, such that $C_1 \triangleleft C_2$,
\item $\overline{\phantom{x}} \colon C_2 \to C_2/C_1$ be the natural homomorphism,
and
\item $\ll$ be a left-invariant order on the group $C_2/C_1$.
\end{itemize}
Then there is a\/ {\upshape(}unique\/{\upshape)} left-invariant order~$\precstar$ on~$G$, such that, for $g \in G$, we have
$$ g \succstar e \iff
\begin{cases}
\overline{g} \gg \overline{e} & \text{if $g \in C_2$ and $g \notin C_1$}, \\
g \succ e & \text{otherwise}
. \end{cases}
$$
\end{lem}
\begin{defn}
The construction in \cref{ChangeOnQuot} is called \emph{changing $\prec$ on $C_2/C_1$}.
\end{defn}
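A minimal added example of this construction (the lexicographic setup below is our choice, not from the original):

```latex
For instance, let $G = \mathbb{Z}^2$ with the lexicographic order:
$(a,b) \succ (0,0)$ iff $a > 0$, or $a = 0$ and $b > 0$.
Then $C_1 = \{0\} \times \mathbb{Z}$ is a convex subgroup, and changing
$\prec$ on the jump $C_1/\{e\}$ by the reversed order of $\mathbb{Z}$
yields the left-invariant order with
$(a,b) \succstar (0,0)$ iff $a > 0$, or $a = 0$ and $b < 0$.
```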
\begin{lem}[{}{\cite[Thm.~2.4.2, p.~41]{KopytovMedvedev-ROGrps}}] \label{Nilp->Conradian}
If $\prec$ is a left-invariant order on a group~$G$ that is locally nilpotent, then every convex jump is Archimedean.
\end{lem}
\begin{rem}
A left-invariant order is said to be \emph{Conradian} if all of its convex jumps are Archimedean, but we do not need this terminology.
\end{rem}
\begin{lem}[{}{\cite[p.~227]{BludovGlassRhemtulla-JumpsCentral}}] \label{CentralJumps}
If $\prec$ is a bi-invariant order on a group~$G$ that is locally nilpotent, then every convex jump $C_2/C_1$ is central. {\upshape(}This means $[G, C_2] \subseteq C_1$.{\upshape)} Therefore, every convex subgroup of~$G$ is normal.
\end{lem}
\begin{lem}[{}{cf.\ \cite[Prop.~1.7]{Sikora-TopOrders}}] \label{AbelIsol->Rank1}
Let $G$ be a nontrivial, abelian group. If the space of bi-invariant orders on~$G$ has an isolated point, then $\rank G = 1$.
\end{lem}
\begin{thm}[{Rhemtulla \cite[Cor.~3.6.2, p.~66]{KopytovMedvedev-ROGrps}}] \label{ExtendLONilp}
If $G$ is a torsion-free, locally nilpotent group, then any left-invariant order on any subgroup of~$G$ extends to a left-invariant order on all of~$G$.
\end{thm}
\section{Topology of the space of bi-invariant orders} \label{TopBiInvtSect}
In this section, we prove \textbf{\cref{BONoIsolated}}:
\@nobreaktrue\nopagebreak
\begin{quote}
\it If $G$ is a locally nilpotent group, and $G$ is not an abelian group of rank\/~$\le 1$, then the space of bi-invariant orders on~$G$ has no isolated points.
\end{quote}
\begin{proof}[\bf Proof of \cref{BONoIsolated}]
Suppose $\prec$ is an isolated point in the space of bi-invariant orders on~$G$. By definition of the topology on $\BO(G)$, this means there is a finite subset~$S$ of~$G$, for which $\prec$ is the unique bi-invariant order on~$G$ that satisfies $g \succ e$ for all $g \in S$.
If we change $\prec$ on any convex jump $C_i/C_{i-1}$, then the resulting left-invariant order will actually be bi-invariant (since \cref{CentralJumps} tells us that the jump is central). Therefore, the fact that $\prec$ is isolated implies that it has only finitely many convex jumps. (Indeed, every convex jump must be determined by some element of the finite set~$S$.) Thus, we may let
\begin{align} \label{ConvexChain}
G = C_r \supsetneq C_{r-1} \supsetneq \cdots \supsetneq C_1 \supsetneq C_0 = \{e\}
\end{align}
be the chain of convex subgroups. From \cref{CentralJumps}, we know that this is a central series (so $G$ is nilpotent, not just locally nilpotent, as originally assumed). The fact that $\prec$ is isolated also implies that each convex jump $C_i/C_{i-1}$ has an isolated left-invariant order. Then, since the jump is a nontrivial abelian group, \cref{AbelIsol->Rank1} tells us that $\rank (C_i/C_{i-1}) = 1$.
So $\rank G = r$.
Let
\begin{align} \label{UCC}
G = Z_c \triangleright Z_{c-1} \triangleright \cdots \triangleright Z_1 \triangleright Z_0 = \{e\}
\end{align}
be the upper central series of~$G$. (It is defined by letting $Z_i/Z_{i-1}$ be the center of $G/Z_{i-1}$.) Then $c$ is the nilpotence class of~$G$. It is well known that
$c < \rank G$ (for example, this follows from \cref{Rank(Abelianization)}),
which means $c < r$. So there is some~$k$ with $C_k \neq Z_k$, and we may assume $k$~is minimal. Since $\{C_i\}$ is a central series, and $\{Z_i\}$ is the upper central series, we have
$C_k \subseteq Z_k$ \cite[Thm.~10.2.2]{Hall-ThyGrps}.
Therefore, there exists some $z \in Z_k$, such that $z \notin C_k$.
Choose $\ell$ minimal with $z \in C_\ell$. (Note that $k \le \ell-1$.) Since \pref{ConvexChain} and \pref{UCC} are central series, we have $[G, C_{\ell-1}] \subseteq C_{\ell-2}$ and
$$ [G, z] \subseteq [G,Z_k] \subseteq Z_{k-1} = C_{k-1} \subseteq C_{\ell-2} .$$
Therefore $[ G, \langle C_{\ell-1}, z \rangle ] \subseteq C_{\ell-2}$.
Since $\rank ( C_\ell/C_{\ell-1} ) = 1$, we know that $C_\ell / \langle C_{\ell-1}, z \rangle$ is a torsion group, so this implies that $[G, C_\ell] \subseteq C_{\ell-2}$ \cite[2.3.9(vi)]{LennoxRobinson-ThyInfSolvGrps}. This means that $G$ centralizes $C_\ell/ C_{\ell-2}$, so changing $\prec$ on $C_\ell/ C_{\ell-2}$ will result in another bi-invariant order. Since $C_\ell/ C_{\ell-2}$ is an abelian group of rank~$2$, \cref{AbelIsol->Rank1} tells us that it has no isolated order, so we conclude that $\prec$ is not isolated. This is a contradiction.
\end{proof}
\begin{cor} \label{BOCantor}
Let $G$ be a locally nilpotent group that is not an abelian group of rank\/~$\le 1$. If $G$ is countable and torsion-free, then the space of bi-invariant orders on~$G$ is homeomorphic to the Cantor set.
\end{cor}
\section{\texorpdfstring{The action of $\Comm(G)$ on $\LO(G)$ when $G$ is nilpotent}
{The action of Comm(G) on LO(G) when G is nilpotent}} \label{CommGNilp}
Combining the two parts of the following observation yields \textbf{\cref{CommGActLO}}\pref{CommGActLO-bij}.
\begin{obs} \label{LOG->LOH}
Assume $H$ is a subgroup of a torsion-free group~$G$, and let $\eta \colon \LO(G) \to \LO(H)$ be the natural restriction map.
\@nobreaktrue\nopagebreak
\begin{enumerate}
\item \label{LOG->LOH-inj}
If $H$ has finite index in~$G$, then $\eta$ is injective. (To see this, let ${\prec} \in \LO(G)$ and note that if $x \in G$, then there is some $n \in \mathbb{Z}^+$, such that $x^n \in H$. We have $x \succ e \iff x^n \succ e$, so the positive cone of~$\prec$ is determined by its restriction to~$H$. Combine this with the fact that any left-invariant order is determined by its positive cone.)
\item \label{LOG->LOH-surj}
If $G$ is locally nilpotent, then $\eta$ is surjective (see \cref{ExtendLONilp}).
\end{enumerate}
\end{obs}
In the remainder of this section, we prove \cref{CommGKernel}, which contains \textbf{\cref{NilpCommFaithful}}:
\@nobreaktrue\nopagebreak
\begin{quote}
\it If $G$ is a nonabelian, torsion-free, locally nilpotent group, then the action of\/ $\Comm(G)$ on\/ $\LO(G)$ is faithful.
\end{quote}
\begin{notation}
Assume $G$ is a torsion-free, abelian group.
\@nobreaktrue\nopagebreak
\begin{itemize}
\item For $n \in \mathbb{Z}$, we let $G^n = \{\, g^n \mid g \in G \,\}$. This is a subgroup of~$G$ (since $G$ is abelian).
\item For $p/q \in \mathbb{Q}$, such that $G^p$ and~$G^q$ have finite index in~$G$, we define $\tau^{p/q} \in \Comm(G)$ by $\tau^{p/q}(g^q) = g^p$ for $g \in G$.
\end{itemize}
\end{notation}
\begin{prop} \label{CommGKernel}
Let $G$ be a torsion-free, locally nilpotent group.
\begin{enumerate}
\item If $G$ is not abelian, then the action of\/ $\Comm(G)$ on\/ $\LO(G)$ is faithful.
\item If $G$ is abelian, then the kernel of the action is
$$ \bigset{ \tau^{p/q} }{ \begin{matrix}
\text{$p,q \in \mathbb{Z}^+$, such that} \\
\text{$G^p$ and~$G^q$ have finite index in~$G$}
\end{matrix} }
.$$
\end{enumerate}
\end{prop}
\begin{proof}[Proof \rm (cf.\ proof of {\cite[Thm.~4.1]{Koberda-Faithful}})]
For $p,q \in \mathbb{Z}^+$, $g \in G$, and ${\prec} \in \LO(G)$, we have $g^p \succ e \iff g^q \succ e$. Therefore, $\tau^{p/q}$ acts trivially on $\LO(G)$ if it exists.
To complete the proof, we wish to show that the kernel is trivial if $G$ is not abelian, and that every element of the kernel is of the form $\tau^{p/q}$ if $G$ is abelian.
Let $\alpha$ be an element of the kernel. We consider three cases.
\setcounter{case}{0}
\begin{case}
Assume there exists $g \in \dom \alpha$, such that $g^\alpha \notin \isol{g}$.
\end{case}
Let $H = \langle g, g^\alpha \rangle$ and $\overline{H} = H/\isol{[H,H]}_H$. There is a left-invariant order~$\ll$ on the abelian group~$\overline{H}$, such that $\isol{\overline{g}}$ is a convex subgroup. Then $\isol{\overline{g}}/ \isol{\overline{e}}$ is a convex jump. Since $\overline{g^\alpha} \notin \isol{\overline{g}}$ \csee{Rank(Abelianization)}, this implies that $g$ and~$g^\alpha$ determine different convex jumps of~$\ll$. By applying \cref{ExtendLONilp}, we see that there is a left-invariant order~$\prec$ on~$G$, such that $g$ and~$g^{\alpha}$ determine two different convex jumps. Reversing the order on the convex jump containing~$g^\alpha$ yields a second left-invariant order, and it is impossible for $\alpha$ to fix both of these orders. This contradicts the fact that $\alpha$ is in the kernel of the action.
\begin{case} \label{NilpCommFaithfulPf-Abel}
Assume $G$ is abelian, and $g^\alpha \in \isol{g}$, for all $g \in \dom\alpha$.
\end{case}
The assumption means that every element of $\dom \alpha$ is an eigenvector for the action of $\alpha$ on the vector space $G \otimes \mathbb{Q}$. Since $\dom \alpha$ is a subgroup, we know that it is closed under addition as a subset of $G \otimes \mathbb{Q}$. This implies that all of $\dom \alpha$ is in a single eigenspace. Then, since $\dom \alpha$ has finite index, we conclude that $G \otimes \mathbb{Q}$ is a single eigenspace, so there is some $p/q \in \mathbb{Q}$, such that we have $\alpha(v) = (p/q)v$ for all $v \in G \otimes \mathbb{Q}$. In other words, $\alpha(g) = \tau^{p/q}(g)$ for all $g \in G$.
We must have $p/q \in \mathbb{Q}^+$, since $\tau^{p/q} = \alpha$ acts trivially on $\LO(G)$, and $g \succ e \implies g^{-1} \not\succ e$. Also, since $\tau^{p/q} = \alpha \in \Comm(G)$, we know that the domain~$G^q$ and range~$G^p$ of $\tau^{p/q}$ have finite index in~$G$ (assuming, without loss of generality, that $p/q$ is in lowest terms).
\begin{case}
Assume $G$ is nonabelian, and $g^\alpha \in \isol{g}$, for all $g \in \dom \alpha$.
\end{case}
For each nontrivial $g \in \dom \alpha$, the assumption tells us there exists $r(g) \in \mathbb{Q}$, such that $\alpha(g) = g^{r(g)}$.
The eigenvector argument of the preceding case
shows that $r(g) = r(h)$ whenever $g$ commutes with~$h$.
However, for all $g, h \in G$, there is some nontrivial $z \in G$ that commutes with both $g$ and~$h$ (since $G$ is locally nilpotent), so $r(g) = r(z) = r(h)$. Therefore $r(g) = r$ is independent of~$g$.
On the other hand, since $G$ is locally nilpotent, but not abelian, we may choose $g,h \in \dom \alpha$, such that $\langle g,h \rangle$ is nilpotent of class~$2$. This means that $[g,h]$ is a nontrivial element of the center of $\langle g,h \rangle$. Then
$$ [g,h]^r = [g,h]^\alpha = [g^\alpha, h^\alpha] = [g^r, h^r] = [g,h]^{r^2} ,$$
so $r = r^2$. Hence $r = 1$, so $\alpha(g) = g^r = g^1 = g$ for all $g \in G$.
\end{proof}
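The commutator identity at the heart of Case 3, $[g^r,h^r] = [g,h]^{r^2}$ in a group of nilpotence class~2, can be sanity-checked numerically in the discrete Heisenberg group. The coordinate model below — triples $(a,b,c)$ with $(a_1,b_1,c_1)(a_2,b_2,c_2) = (a_1+a_2,\, b_1+b_2,\, c_1+c_2+a_1b_2)$ — is an assumption made for illustration, not part of the paper.

```python
# Sketch: verify [g^r, h^r] = [g,h]^{r^2} in a class-2 nilpotent group,
# using an (assumed) coordinate model of the discrete Heisenberg group.
# Under this multiplication, x=(1,0,0) and y=(0,1,0) satisfy [x,y]=z=(0,0,1).

def mul(g, h):
    a1, b1, c1 = g
    a2, b2, c2 = h
    return (a1 + a2, b1 + b2, c1 + c2 + a1 * b2)

def inv(g):
    a, b, c = g
    return (-a, -b, a * b - c)

def comm(g, h):                      # [g,h] = g^{-1} h^{-1} g h
    return mul(mul(inv(g), inv(h)), mul(g, h))

def power(g, n):
    result, base = (0, 0, 0), (g if n >= 0 else inv(g))
    for _ in range(abs(n)):
        result = mul(result, base)
    return result

x, y, z = (1, 0, 0), (0, 1, 0), (0, 0, 1)
assert comm(x, y) == z               # the defining relation [x,y] = z

# Two arbitrary elements, and the key identity for several integers r:
g, h = mul(x, power(z, 5)), mul(y, power(x, 2))
for r in range(-3, 4):
    assert comm(power(g, r), power(h, r)) == power(comm(g, h), r * r)
```

Since $[g,h]$ is central, the check reduces to the bilinearity of the commutator in class~2, which is exactly the step used in the proof.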
\begin{cor} \label{WhenAbelFaithful}
Assume $G$ is a torsion-free, locally nilpotent group. Then the action of\/ $\Comm(G)$ on\/ $\LO(G)$ is faithful iff either
\begin{enumerate}
\item $G$ is not abelian,
or
\item $G^n$ has infinite index in~$G$ for all $n \ge 2$,
or
\item $G = \{e\}$ is trivial.
\end{enumerate}
\end{cor}
\begin{rem}
The proof of \cite[Thm.~4.1]{Koberda-Faithful} assumes that $G$ is finitely generated, but this was omitted from the statement of the result. (The group~$\mathbb{Q}$ has infinitely many automorphisms~$\tau^{p/q}$, but only two left-invariant orders, so it provides a counterexample to the theorem as stated.)
\end{rem}
\section{The space of virtual left-orders} \label{VirtLOSect}
\begin{defn}
Note that if $H_1$ is a subgroup of a group~$H_2$, then we have a natural restriction map $\LO(H_2) \to \LO(H_1)$. Therefore, we may define the direct limit
$$ \VLO(G) = \lim_{\longrightarrow} \LO(H) ,$$
where the limit is over all finite-index subgroups~$H$ of~$G$. An element of $\VLO(G)$ can be called a \emph{virtual left-invariant order} on~$G$.
\end{defn}
\begin{rems} \
\@nobreaktrue\nopagebreak
\begin{enumerate}
\item There is a natural action of $\Comm(G)$ on $\VLO(G)$.
\item \fullcref{LOG->LOH}{inj} tells us that the inclusion $\LO(G) \hookrightarrow \VLO(G)$ is injective, so we can think of $\LO(G)$ as a subset of $\VLO(G)$.
\end{enumerate}
\end{rems}
We now have the notation to prove \textbf{\cref{CommMovesLOG}}:
\begin{quote} \it
If $G$ is a nonabelian group that is residually locally torsion-free nilpotent, and $\alpha$ is a nonidentity element of\/ $\Comm(G)$, then ${\prec}^\alpha \neq {\prec}$, for some ${\prec} \in \LO(G)$.
\end{quote}
\begin{proof}
Suppose $\alpha$ fixes every element of $\LO(G)$. Since reversing a left-invariant order on any convex jump yields another left-invariant order, it is clear that $\alpha$ must fix every convex jump of every left-invariant order. More precisely,
\begin{align} \label{alphaFixRelConvex}
\begin{matrix}
\text{if $C$ is any convex subgroup of~$G$ (with respect to some}
\\ \text{left-invariant order), then $\alpha^{-1}(C) = C \cap \dom \alpha$.}
\end{matrix}
\end{align}
Choose $g \in \dom \alpha$, such that $g^\alpha \neq g$. Then we may choose a nonabelian, torsion-free, locally nilpotent quotient $G/N$ of~$G$, such that $g^\alpha \notin g N$. Since $G/N$ is torsion-free and locally nilpotent, we know that it has a left-invariant order. We can extend this to a left-invariant order on~$G$ (by choosing any left-invariant order on the subgroup~$N$). Then $N$ is a convex subgroup for this order, so, from \pref{alphaFixRelConvex}, we know that $\alpha$ induces a well-defined $\overline{\alpha} \in \Comm(G/N)$. Then \cref{NilpCommFaithful} tells us there exists ${\ll} \in \LO(G/N)$, such that ${\ll}^{\overline{\alpha}} \neq {\ll}$. Extend $\ll$ to a left-invariant order~$\prec$ on~$G$ (by choosing any left-invariant order on the subgroup~$N$). Then ${\prec}^\alpha \neq \prec$.
\end{proof}
\section{Non-faithful actions on the space of bi-invariant orders} \label{NonFaithfulSect}
In this section, we provide examples of torsion-free, nilpotent groups for which there is a nontrivial commensuration that acts trivially on the space of bi-invariant orders.
\begin{eg}
For $r \in \mathbb{Z}^+$, let
$$G_r = \langle\, x,y,z \mid [x,y] = z^r, [x,z] = [y,z] = e \,\rangle .$$
(Then $G_1$ is the discrete Heisenberg group, and $G_r$ is a finite-index subgroup of it.)
Since $\langle z \rangle = Z(G_r)$, it is easy to see that $z \in \isol{N}$, for every nontrivial, normal subgroup~$N$ of~$G_r$. Hence, if we define an automorphism $\alpha \colon G_r \to G_r$ by
$$ \text{$\alpha(x) = xz$, \ $\alpha(y) = y$, \ and \ $\alpha(z) = z$,} $$
then $\alpha$ acts trivially on $\BO(G_r)$. However, $\alpha$ is outer if $r > 1$ (since $z \notin \langle z^r \rangle = [G_r,G_r]$). Thus, $\Out(G_r)$ does not act faithfully on $\BO(G_r)$ when $r > 1$.
On the other hand, it is easy to see that $\Out(G_1)$ does act faithfully on $\BO(G_1)$. This means that deciding whether $\Out(G)$ acts faithfully is a rather delicate question --- the answer can be different for two groups that are commensurable to each other.
\end{eg}
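A quick computational sanity check of this example (not from the paper): the coordinate model below for $G_r$, with triples $(a,b,c)$ multiplied by $(a_1,b_1,c_1)(a_2,b_2,c_2) = (a_1+a_2,\, b_1+b_2,\, c_1+c_2+r\,a_1b_2)$, is an assumed concrete realization, used to confirm the relation $[x,y]=z^r$ and that $\alpha(x)=xz$ preserves it (because $z$ is central).

```python
# Sketch: an (assumed) coordinate model of G_r = <x,y,z | [x,y]=z^r, z central>.

R = 3   # the parameter r of G_r (illustrative choice)

def mul(g, h):
    a1, b1, c1 = g
    a2, b2, c2 = h
    return (a1 + a2, b1 + b2, c1 + c2 + R * a1 * b2)

def inv(g):
    a, b, c = g
    return (-a, -b, R * a * b - c)

def comm(g, h):                      # [g,h] = g^{-1} h^{-1} g h
    return mul(mul(inv(g), inv(h)), mul(g, h))

x, y, z = (1, 0, 0), (0, 1, 0), (0, 0, 1)

# Defining relations of G_R:
assert comm(x, y) == (0, 0, R)                     # [x,y] = z^R
assert comm(x, z) == (0, 0, 0)
assert comm(y, z) == (0, 0, 0)

# alpha(x) = xz, alpha(y) = y, alpha(z) = z preserves [x,y] = z^R,
# since z is central:
assert comm(mul(x, z), y) == (0, 0, R)
```

The outerness claim for $r > 1$ (that $z \notin \langle z^r \rangle = [G_r,G_r]$) is immediate in these coordinates, where $[G_R,G_R] = \{(0,0,Rk)\}$.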
Here is an example where we get the same answer for all torsion-free, nilpotent groups that are commensurable:
\begin{eg} \label{OutNotFaithfulGoodEg}
Let $G = \mathbb{Z} \ltimes \mathbb{Z}^3$, where $\mathbb{Z}$ acts on $\mathbb{Z}^3$ via the matrix
{\smaller[4]$\begin{bmatrix} 1 & 0 & 0 \\ 1 & 1 & 0 \\ 0 & 1 & 1 \end{bmatrix}$}.
In other words,
$$ G =
\bigl\langle x, y,z,w \mid [x,w] = y, \ [y,w] = z, \ \text{other commutators trivial} \bigr\rangle .$$
Choose $r \in \mathbb{Z} \smallsetminus \{0\}$, and define $\alpha \in \Aut(G)$ by
$$ \text{$x^\alpha = x z^r$, \ $y^\alpha = y$, \ $z^\alpha = z$, \ $w^\alpha = w$.} $$
We claim $\alpha$ is an outer automorphism of~$G$ that acts trivially on $\BO(G)$.
\end{eg}
\begin{proof}
Note that $[x, hw^n] = [x,w^n] \in y^n \langle z \rangle$ for all $h \in \langle x,y,z\rangle$ and all $n \in \mathbb{Z}$. This implies that if $g \in G$, and $[x,g] \neq e$, then $[x,g] \notin \langle z \rangle$. Since $x^\alpha \in x \, \langle z \rangle$, we conclude that $\alpha$ is outer.
Let ${\prec} \in \BO(G)$, and let $C$ be the minimal nontrivial convex subgroup of~$G$. From \cref{CentralJumps}, we know $C$ is a subgroup of $Z(G)$. Since $Z(G) = \langle z \rangle$ has rank one, we conclude that $C = \langle z \rangle$ is the (unique) minimal nontrivial convex subgroup. Since $\alpha$ centralizes both $\langle z \rangle$ and $G/ \langle z \rangle$, this implies that $\alpha$ centralizes every convex jump $C_2/C_1$. Therefore ${\prec}^\alpha = {\prec}$. Since $\prec$ is an arbitrary bi-invariant order, we conclude that $\alpha$ acts trivially on $\BO(G)$.
\end{proof}
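The relations $[x,w]=y$ and $[y,w]=z$ in \cref{OutNotFaithfulGoodEg} can also be checked by direct computation. The sketch below models $G = \mathbb{Z} \ltimes \mathbb{Z}^3$ as pairs $(n,v)$ with the (assumed) multiplication $(n_1,v_1)(n_2,v_2) = (n_1+n_2,\, M^{n_2}v_1 + v_2)$, i.e.\ conjugation by $w$ acts on $\mathbb{Z}^3$ via $M$, and uses the convention $[a,b]=a^{-1}b^{-1}ab$; both conventions are our choices for illustration.

```python
# Sketch: check [x,w] = y and [y,w] = z in Z x| Z^3 with the matrix M
# from the example; M_INV is M^{-1}, computed by hand.

M     = ((1, 0, 0), (1, 1, 0), (0, 1, 1))
M_INV = ((1, 0, 0), (-1, 1, 0), (1, -1, 1))

def apply_mat(A, v):
    return tuple(sum(A[i][j] * v[j] for j in range(3)) for i in range(3))

def mat_pow_apply(n, v):                # compute M^n v for any integer n
    A = M if n >= 0 else M_INV
    for _ in range(abs(n)):
        v = apply_mat(A, v)
    return v

def mul(g, h):
    (n1, v1), (n2, v2) = g, h
    u = mat_pow_apply(n2, v1)
    return (n1 + n2, tuple(u[i] + v2[i] for i in range(3)))

def inv(g):
    n, v = g
    u = mat_pow_apply(-n, v)
    return (-n, tuple(-c for c in u))

def comm(g, h):                          # [g,h] = g^{-1} h^{-1} g h
    return mul(mul(inv(g), inv(h)), mul(g, h))

e1, e2, e3 = (1, 0, 0), (0, 1, 0), (0, 0, 1)
x, y, z, w = (0, e1), (0, e2), (0, e3), (1, (0, 0, 0))

assert comm(x, w) == y                   # [x,w] = y
assert comm(y, w) == z                   # [y,w] = z
assert comm(x, y) == (0, (0, 0, 0))      # other commutators trivial
assert comm(z, w) == (0, (0, 0, 0))      # z is central
```

The last assertion reflects the fact, used in the proof, that $z$ generates the center, so $\alpha$ centralizes $\langle z \rangle$.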
\section{Nilpotent Lie groups and left-invariant orders} \label{LieGrpLO}
It is easy to see that if $\prec$ is a left-order on the abelian group $G = \mathbb{Z}^n$, then there is a nontrivial linear function $\varphi \colon \mathbb{R}^n \to \mathbb{R}$, such that, for all $x,y \in \mathbb{Z}^n$, we have
$$ \varphi(x) < \varphi(y) \implies x \prec y .$$
We will generalize this observation in a natural way to any finitely generated, nilpotent group~$G$, by choosing an appropriate embedding of~$G$ in a connected Lie group \csee{GisLatt,order->homo}.
\subsection{Preliminaries on discrete subgroups of nilpotent Lie groups}
\begin{defn}
A topological space is \emph{1-connected} if it is connected and simply connected.
\end{defn}
\begin{prop}[{}{\cite[Thm.~2.18, p.~40, and Cor.~2, p.~34]{RaghunathanBook}}] \label{GisLatt}
Every finitely generated, torsion-free, nilpotent group is isomorphic to a discrete, cocompact subgroup of a 1-connected, nilpotent Lie group~$\mathbb{G}$. Furthermore, $\mathbb{G}$ is unique up to isomorphism.
\end{prop}
\begin{rem}[{}{\cite[Thm.~2.10, p.~32]{RaghunathanBook}}] \label{discrete->fg}
Conversely, every discrete subgroup of a 1-connected, nilpotent Lie group\/~$\mathbb{G}$ is finitely generated.
\end{rem}
\begin{prop}[{}{\cite[Thm.~2.11, p.~33]{RaghunathanBook}}] \label{MalcevSuperrig}
Suppose
\begin{itemize}
\item $\mathbb{G}_1$ and~$\mathbb{G}_2$ are $1$-connected, nilpotent Lie groups,
\item $G$ is a discrete, cocompact subgroup of\/~$\mathbb{G}_1$,
and
\item $\rho \colon G \to \mathbb{G}_2$ is a homomorphism.
\end{itemize}
Then $\rho$ extends\/ {\upshape(}uniquely\/{\upshape)} to a continuous homomorphism $\widehat\rho \colon \mathbb{G}_1 \to \mathbb{G}_2$.
\end{prop}
\begin{defn}[{}{\cite[Defn.~3.1]{Witte-SyndHulls}}]
Let $G$ be a discrete subgroup of a Lie group~$\mathbb{G}$. A closed, connected subgroup~$\mathbb{H}$ of~$\mathbb{G}$ is a \emph{syndetic hull} of~$G$ if
$G \subseteq \mathbb{H}$
and
$\mathbb{H}/G$ is compact.
\end{defn}
\begin{prop}[{}{cf.\ \cite[Prop.~2.5, p.~31]{RaghunathanBook}}]
If\/ $\mathbb{G}$ is a 1-connected, nilpotent Lie group, then every discrete subgroup of\/~$\mathbb{G}$ has a unique syndetic hull.
\end{prop}
\subsection{Description of left-invariant orders on nilpotent groups}
\begin{prop} \label{order->homo}
Assume
\begin{itemize}
\item $\mathbb{G}$ is a 1-connected, nilpotent Lie group,
\item $G$ is a nontrivial, discrete, cocompact subgroup of\/~$\mathbb{G}$,
and
\item $\prec$ is a left-invariant order on\/~$G$.
\end{itemize}
Then there is a nontrivial, continuous homomorphism $\varphi \colon \mathbb{G} \to \mathbb{R}$, such that, for all $x,y \in G$, we have
$$ \varphi(x) < \varphi(y) \implies x \prec y .$$
Furthermore, $\varphi$ is unique up to multiplication by a positive scalar.
\end{prop}
\begin{proof}
Since $G$ is finitely generated \csee{discrete->fg}, it is easy to see that $G$ has a maximal convex subgroup~$C$ (cf.\ \cref{SetOfConvexSubgrps}), so $G/C$ is a convex jump. Also, since $G$ is nilpotent, we know that every convex jump is Archimedean \csee{Nilp->Conradian}. Therefore, there is a nontrivial homomorphism $\varphi_0 \colon G \to \mathbb{R}$, such that $\varphi_0(x) < \varphi_0(y) \Rightarrow x \prec y$. (Furthermore, this homomorphism is unique up to multiplication by a positive scalar \cite[Prop.~2.2.1, p.~34]{KopytovMedvedev-ROGrps}.) From \cref{MalcevSuperrig}, we know that $\varphi_0$ extends (uniquely) to a continuous homomorphism $\varphi \colon \mathbb{G} \to \mathbb{R}$.
\end{proof}
The results in previous sections were originally obtained by using the following structural description of each left-invariant order on any finitely generated, nilpotent group.
\begin{cor} \label{LieDescripLONilp}
Assume
\begin{itemize}
\item $\mathbb{G}$ is a 1-connected, nilpotent Lie group,
\item $G$ is a discrete, cocompact subgroup of\/~$\mathbb{G}$,
and
\item $\prec$ is a left-invariant order on~$G$.
\end{itemize}
Then there exist:
\begin{itemize}
\item a subnormal series\/ $\mathbb{G} = \mathbb{C}_r \triangleright \mathbb{C}_{r-1} \triangleright \cdots \triangleright \mathbb{C}_1 \triangleright \mathbb{C}_0 = \{e\}$ of closed, connected subgroups of\/~$\mathbb{G}$,
and
\item for each $i \in \{1,\ldots,r\}$, a nontrivial, continuous homomorphism $\varphi_i \colon \mathbb{C}_i \to \mathbb{R}$,
\end{itemize}
such that, for $1 \le i \le r$:
\begin{enumerate}
\item for all $x,y \in G \cap \mathbb{C}_i$, we have $\varphi_i(x) < \varphi_i(y) \implies x \prec y$,
\item \label{LieDescripLONilp-cocpct}
$G \cap \mathbb{C}_i$ is a cocompact subgroup of\/~$\mathbb{C}_i$,
and
\item \label{LieDescripLONilp-intersect}
$G \cap \ker \varphi_i = G \cap \mathbb{C}_{i-1}$.
\end{enumerate}
Furthermore, the subgroups\/ $\mathbb{C}_1,\ldots,\mathbb{C}_r$ are unique, and each homomorphism $\varphi_i$ is unique up to multiplication by a positive scalar.
\end{cor}
\begin{proof}
Let $\varphi \colon \mathbb{G} \to \mathbb{R}$ be the homomorphism provided by \cref{order->homo}, and let $\mathbb{C}$ be the syndetic hull of $G \cap \ker \varphi$. By induction on $\dim \mathbb{G}$, we can apply the \namecref{LieDescripLONilp} to~$\mathbb{C}$, obtaining
\begin{itemize}
\item a chain $\mathbb{C} = \mathbb{C}_{r-1} \triangleright \mathbb{C}_{r-2} \triangleright \cdots \triangleright \mathbb{C}_1 \triangleright \mathbb{C}_0 = \{e\}$ of closed, connected subgroups of~$\mathbb{C}$,
and
\item for each $i \in \{1,\ldots,r-1\}$, a nontrivial, continuous homomorphism $\varphi_i \colon \mathbb{C}_i \to \mathbb{R}$.
\end{itemize}
To complete the construction, let $\mathbb{C}_r = \mathbb{G}$ and $\varphi_r = \varphi$.
\end{proof}
\begin{rems} \label{LieDescripRems} \
\@nobreaktrue\nopagebreak
\begin{enumerate}
\item It is not difficult to show that each quotient $\mathbb{C}_i/\mathbb{C}_{i-1}$ is abelian.
\item \label{LieDescripRems-normal}
In the setting of \cref{LieDescripLONilp}, the order~$\prec$ is bi-invariant iff $\mathbb{C}_i$ and~$\ker \varphi_i$ are normal subgroups of~$\mathbb{G}$, for $1 \le i \le r$.
\item The converse of \cref{LieDescripLONilp} is true: if subgroups $\mathbb{C}_i$ and homomorphisms~$\varphi_i$ are provided that satisfy \pref{LieDescripLONilp-cocpct} and~\pref{LieDescripLONilp-intersect}, then the positive cone of a left-invariant order~$\prec$ can be defined by prescribing:
$$ x \succ e \iff \varphi_i(x) > 0 ,$$
where $i$~is chosen so that $x \in \mathbb{C}_i \smallsetminus \mathbb{C}_{i-1}$.
\end{enumerate}
\end{rems}
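For illustration, a simple instance of \cref{LieDescripLONilp} is the lexicographic order on $G = \mathbb{Z}^2$, viewed as a discrete, cocompact subgroup of $\mathbb{G} = \mathbb{R}^2$: the corresponding data are
$$ \mathbb{R}^2 = \mathbb{C}_2 \triangleright \mathbb{C}_1 = \{0\} \times \mathbb{R} \triangleright \mathbb{C}_0 = \{e\}, \qquad \varphi_2(x,y) = x, \qquad \varphi_1(0,y) = y ,$$
and the prescription in the last remark above recovers the familiar rule that $(x,y) \succ e$ if and only if either $x > 0$, or $x = 0$ and $y > 0$.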
\end{document}
|
\begin{document}
\title{Enhancement to Training of Bidirectional GAN : An Approach to Demystify Tax Fraud}
\author{\IEEEauthorblockN{1\textsuperscript{st} Priya Mehta}
\IEEEauthorblockA{\textit{Welingkar Institute of Management Development and Research} \\
Mumbai, India \\
[email protected]}
\and
\IEEEauthorblockN{2\textsuperscript{nd} Sandeep Kumar}
\IEEEauthorblockA{\textit{Department of Computer Science and Engineering} \\
\textit{ Indian Institute of Technology Hyderabad}\\
Sangareddy, India \\
[email protected]}
\and
\IEEEauthorblockN{3\textsuperscript{rd} Ravi Kumar}
\IEEEauthorblockA{\textit{Department of Computer Science and Engineering} \\
\textit{Indian Institute of Technology Hyderabad}\\
Sangareddy, India \\
[email protected]}
\and
\IEEEauthorblockN{4\textsuperscript{th} Ch. Sobhan Babu}
\IEEEauthorblockA{\textit{Department of Computer Science and Engineering} \\
\textit{Indian Institute of Technology Hyderabad}\\
Sangareddy, India \\
[email protected]}
}
\maketitle
\begin{abstract}
Outlier detection is a challenging activity. Several machine learning techniques have been proposed in the literature for outlier detection. In this article, we propose a new training approach for the bidirectional GAN (BiGAN) to detect outliers. To validate the proposed approach, we train a BiGAN with the proposed training approach to detect taxpayers who are manipulating their tax returns. For each taxpayer, we derive six correlation parameters and three ratio parameters from the tax returns submitted by him/her. We train a BiGAN with the proposed training approach on this nine-dimensional derived ground-truth data set. Next, we generate the latent representation of this data set using the $encoder$ (encode this data set using the $encoder$) and regenerate this data set using the $generator$ (decode back using the $generator$) by giving this latent representation as the input. For each taxpayer, we compute the cosine similarity between his/her ground-truth data and regenerated data. Taxpayers with lower cosine similarity measures are potential return manipulators. We applied our method to analyze the iron and steel taxpayers' data set provided by the Commercial Taxes Department, Government of Telangana, India.
\end{abstract}
\begin{IEEEkeywords}
outlier detection, tax fraud detection, bidirectional GAN, goods and services tax
\end{IEEEkeywords}
\section{Introduction}
\subsection{Outlier Detection}
Outliers are data points that are far from other data points; in other words, they are unusual values in a data set. Detecting outliers is essentially identifying unexpected elements in data sets \cite{chandola}. Outlier identification has several real-world applications, such as fraud detection and intrusion detection. Outlier detection is a challenging activity, and several machine learning techniques have been proposed in the literature for it.
Several methods derived from neural networks have been applied to outlier detection, and Generative Adversarial Networks have emerged as a leading technique. Goodfellow et al. proposed a new approach for training generative models via an adversarial process \cite{godfellow}. This method simultaneously trains two neural network modules: a generator module that captures the probability distribution of the training data and a discriminator module that estimates the probability that a sample came from the training data set rather than being generated by the generator. In this way, the two modules compete against each other; they are adversarial in the game-theoretic sense, playing a zero-sum game. In their original form, GANs have no means of learning the inverse mapping (projecting training data back into the latent space). Jeff Donahue et al. proposed {\it Bidirectional Generative Adversarial Networks} (BiGANs) as a method of learning this inverse mapping \cite{jeff}. Kaplan et al. implemented anomaly detection using BiGAN, treating it as a one-class anomaly detection algorithm \cite{KAPLAN}. Since the generator and discriminator are highly dependent on each other in the training phase, they proposed two different training approaches for BiGAN that add extra training steps to minimize this dependency. They also demonstrated that the proposed approaches increased the performance of BiGAN on anomaly detection tasks.
\subsection{Indirect Tax}
Indirect tax is collected by an intermediary (such as a retailer and manufacturer) from the consumer of goods or services \cite{dani}. The intermediary submits the tax he/she collected to the government by filing tax return forms at regular time intervals. In reality, the intermediary acts as a conduit for the flow of tax from the consumer of the goods or services to the government.
\subsubsection{Goods and Services Tax}
Goods and Services Tax (GST) is a destination-based, multi-stage, comprehensive taxation system. In GST, the tax is levied in an incremental manner at each stage of the supply chain based on the value added to goods/services at that stage. This tax is levied at each stage of the supply chain in such a way that the tax paid on purchases ($\it{Input\,tax}$ or $\it{input\,tax\,credit}$) is given as a set-off for the tax levied on sales ($\it{output\,tax}$ or {\it liability}).
Figure \ref{fig:tax_flow} shows how tax is collected incrementally at each stage of the supply chain. In this example, the manufacturer purchases goods from the supplier for a value of \$100 and pays \$10 as tax at a tax rate of 10\%. The supplier then pays the tax he/she collected to the government. In the next stage of the supply chain, the retailer purchases finished goods from the manufacturer for a value of \$120 and pays \$12 as tax at a tax rate of 10\%. The manufacturer pays $(\$12-\$10=\$2)$ to the government, which is the difference between the tax he paid to the supplier and the tax he collected from the retailer. Finally, the consumer buys it from the retailer for a value of \$150 and pays \$15 as tax at a tax rate of 10\%. So the retailer will pay $(\$15-\$12=\$3)$ to the government. In essence, for every dealer in GST, {\it the tax payable is the difference between output tax and input tax}.
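The incremental collection described above amounts to a simple recurrence: each stage remits its output tax minus the tax already paid upstream (its ITC). The following sketch (our own illustration, using the same numbers as the example, in whole-dollar integer arithmetic) reproduces the three payments:

```python
# Sketch of incremental GST collection along a supply chain.
# Values are whole dollars; the tax rate is given in percent so that
# the arithmetic stays exact in integers.
def net_gst(stage_values, rate_percent=10):
    """Return the tax each stage remits: output tax minus ITC."""
    payments = []
    prev_tax = 0
    for value in stage_values:
        output_tax = value * rate_percent // 100
        payments.append(output_tax - prev_tax)  # output tax - input tax credit
        prev_tax = output_tax
    return payments

# Supplier sells at $100, manufacturer at $120, retailer at $150:
print(net_gst([100, 120, 150]))  # [10, 2, 3] -- total $15, borne by the consumer
```

The payments sum to the full 10\% of the final sale price, which is the defining property of a value-added tax.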
\begin{figure}
\caption{Tax Flow in GST}
\label{fig:tax_flow}
\end{figure}
\subsubsection{Tax evasion}
Taxation and tax evasion go hand in hand. It is a never-ending cat and mouse game. Business dealers manipulate their tax returns to avoid tax and maximize their profits. Tax enforcement officers formulate new rules and regulations to control tax evasion after studying the behavior of known tax evaders who exploit the loopholes in the existing taxation laws. In this game tax evaders always try their best to stay a few steps ahead of the enforcement officers. Hence, it is necessary for the officials to identify the evasion as early as possible and close the loopholes before the techniques become a widespread practice. In this manner, the taxation officers will be able to limit the loss of government revenue due to tax evasion. The following are prominent ways of tax evasion.
\begin{enumerate}
\item The dealer collects tax from the customer at a higher rate and pays it to the government at a lower rate.
\item The dealer does not report all of his/her sales transactions (sales suppression).
\item The dealer shows a lower taxable turnover by wrongly applying the prescribed calculations.
\item The dealer creates fictitious sales transactions in which there is no movement of goods and only invoices are circulated, in order to claim Input Tax Credit (ITC) and evade tax payment. This method is called bill trading.
\end{enumerate}
\subsection{Our Contribution}
In this article, we propose a new training approach for bidirectional GAN (BiGAN) to detect outliers. This is an enhancement to the BiGAN training approach given in \cite{KAPLAN}. To validate the proposed approach, we train a BiGAN with the proposed training approach to detect taxpayers who are manipulating their tax returns in the Goods and Services Taxation system, which came into operation in India in July $2017$.
The Goods and Services Taxation system unified the taxation laws in India. As per this system, dealers are supposed to file tax return statements every month, providing the complete details of the sales and purchases that happened in the corresponding month. The objective of this work is to identify dealers who manipulate their tax return statements to minimize their tax liability. We train a {\it Bidirectional GAN} (BiGAN) using the proposed approach on nine-dimensional ground-truth data derived from tax returns submitted by taxpayers (six correlation parameters and three ratio parameters). Next, we encode the ground-truth data set using the $encoder$ (generate the latent representation) and decode it back using the $generator$ (regenerate the ground-truth data) by giving the latent representation as input. For each taxpayer, we compute the cosine similarity between his/her ground-truth data and regenerated data. Taxpayers with lower cosine similarity measures are potential return manipulators. This idea can be applied in other nations where multi-stage indirect taxation is followed.
The rest of the paper is organized as follows. In Section \ref{pre}, we discuss the relevant previous works. In Section \ref{daused}, we explain the data set used. In Section \ref{ebgan}, we explain our proposed enhancement to BiGAN. In Section \ref{metho}, we give a detailed description of the methodology used in this paper. The results obtained are discussed in Section \ref{exper}.
\section{Related Work}
\label{pre}
Chandola et al. presented several data mining techniques for anomaly detection \cite{chandola}. Daniel de Roux et al. presented a very interesting approach for the detection of fraudulent taxpayers using only unsupervised learning methods \cite{daniel}. Yusuf Sahin et al. worked on credit card fraud detection \cite{yusuf}. They developed classification models based on Artificial Neural Networks (ANN) and Logistic Regression (LR). This study is one of the first in credit card fraud detection to compare the performance of ANN and LR on a real data set. Zhenisbek Assylbekov et al. presented statistical techniques for detecting VAT evasion by Kazakhstani business firms \cite{zhen}. Starting from feature selection, they performed an initial exploratory data analysis using Kohonen self-organizing maps. Hussein et al. described classification-based and clustering-based anomaly detection techniques \cite{hussein}. They applied {\it K-Means} to a refund transaction data set from a telecommunication company, with the intent of identifying fraudulent refunds. González et al. described methods to detect potential false invoice issuers/users based on the information in their tax return statements, using different types of data mining techniques \cite{Gonzlez}. First, clustering algorithms such as SOM and neural gas were used to cluster similar taxpayers. Then decision trees, neural networks, and Bayesian networks were used to identify those features that are related to fraudulent or non-fraudulent conduct. Song Wang et al. introduced the challenges of anomaly detection in traditional networks as well as in next-generation networks, and reviewed the implementation of machine learning for anomaly detection under different network contexts \cite{wang}. Shuhan Yuan et al. used a deep learning approach for fraud detection \cite{yuan}. This method works only with labeled data sets. Jian Chen et al. applied deep learning techniques to credit card fraud detection.
They first used a sparse autoencoder (SAE) to obtain representations of normal transactions and then trained a generative adversarial network (GAN) on these representations. Finally, they combined the SAE and the discriminator of the GAN and applied them to detect whether a transaction is genuine or fraudulent \cite{Chen}. Shenggang Zhang et al. proposed an anomaly detection model based on BiGAN for software defect prediction. Their model not only avoids the class imbalance problem but also uses a semi-supervised training method \cite{zhang}. Raghavendra Chalapathy et al. presented a structured and comprehensive overview of research methods in deep learning-based anomaly detection. Furthermore, they reviewed the adoption of these methods for anomaly detection across various application domains and assessed their effectiveness \cite{raghav}.
\section{Description of the Data Set}
\label{daused}
GSTR-3B is a monthly return that has to be filed by every dealer. It is a simple return in which a summary of Input Tax Credit and outward supplies is declared, and payment of tax is effected by the taxpayer. Table \ref{taxdetials} shows a sample of GST returns data. Each row in this table corresponds to a monthly return filed by a dealer. {\it ITC (Input tax credit)} is the amount of tax paid during purchases of services/goods by the dealer. The {\it output tax} is the amount of tax collected by the dealer during the sales of services/goods. The dealer has to pay the Government the gap between the {\it output tax} and {\it ITC}, i.e., output tax - ITC. The actual database contains much more information, such as return filing data, tax payment method, exempted sales, international exports, and sales on RCM (reverse charge mechanism). Figures \ref{R3B_TO}, \ref{R3B_Tax}, \ref{R3B_ITC}, and \ref{R3b_cash} show the distributions of turnover, liability, input tax credit, and cash payments, respectively.
\begin{figure}
\caption{Distribution of Turnover}
\label{R3B_TO}
\caption{Distribution of Liability}
\label{R3B_Tax}
\end{figure}
\begin{figure}
\caption{Distribution of Input Tax Credit}
\label{R3B_ITC}
\caption{Distribution of Cash Payments}
\label{R3b_cash}
\end{figure}
\begin{table}[ht]
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
S.No.&Firm &Month & Purchases & Sales & ITC & Output Tax\\
\hline
1 &BC& Jan 2019 & 190000 & 210000 & 20200 & 24000\\
\hline
2 &BE& Sep 2021 & 202000 & 270000 & 5200 & 9200\\
\hline
3 &BD& Oct 2021& 400200 & 420000 & 41000 & 43000\\
\hline
\end{tabular}
\end{center}
\caption{GST Returns Data (GSTR-3B)}
\label{taxdetials}
\end{table}
\section{Enhanced BiGAN}
\label{ebgan}
\subsection{Generative Adversarial Network}
A generative adversarial network (GAN) is a recent invention in deep learning designed by Ian Goodfellow and his colleagues \cite{godfellow}. Two neural network modules (generator, discriminator) compete with each other in a simultaneous game. Given a training data set (ground-truth data set), the generator learns to generate new data with the same statistics as the training data set and the discriminator learns to estimate the probability that the given input came from the training data set rather than generated by the generator.
The discriminator's parameters are optimized to minimize the loss function in Equation \ref{do}. Here $DI$ is the discriminator's estimate of the probability that its input came from the ground-truth data set (rather than from the generator) when ground-truth data is given as input, and $DG$ is the same estimate when generated data is given as input. The generator's parameters are optimized to minimize the loss function in Equation \ref{dg}.
\begin{equation}
\label{do}
-\bigl(\log(DI)+\log(1-DG)\bigr)
\end{equation}
\begin{equation}
\label{dg}
-\log(DG)
\end{equation}
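As an illustrative sketch (our own, not code from any particular GAN implementation), the two loss functions in Equations \ref{do} and \ref{dg} can be written directly in terms of the probability estimates $DI$ and $DG$:

```python
import math

# Illustrative implementation of the GAN losses; DI and DG are the
# discriminator's probability estimates described in the text.
def discriminator_loss(DI, DG):
    # -(log(DI) + log(1 - DG)): small when DI -> 1 and DG -> 0
    return -(math.log(DI) + math.log(1 - DG))

def generator_loss(DG):
    # -log(DG): small when the generator fools the discriminator (DG -> 1)
    return -math.log(DG)

# A fairly confident, correct discriminator incurs a small loss:
print(discriminator_loss(0.9, 0.1))  # about 0.211
print(generator_loss(0.5))           # about 0.693
```

In practice these quantities are averaged over a mini-batch; the scalar form above is only meant to make the signs and limits of the two objectives explicit.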
\subsection{Bidirectional GAN}
Bidirectional GAN (BiGAN) is a representation learning method that adds an encoder to the standard GAN architecture.
The encoder takes the ground-truth data set $X$ and outputs a latent representation $E(X)$ of this data set. The generator takes a random sample $z$ and generates a data set $G(z)$. The BiGAN discriminator discriminates not in data space alone ($X$ versus $G(z)$) but jointly in data and latent space: the tuple $(X,E(X))$ versus the tuple $(G(z),z)$.
The discriminator is trained to minimize Equation \ref{bd}, where $DE$ is the discriminator's estimate of the probability that its input came from $(X,E(X))$ rather than $(G(z),z)$ when $(X,E(X))$ is the input, and $DG$ is the same estimate when $(G(z),z)$ is the input. The generator is trained to minimize Equation \ref{bg}, and the encoder is trained to minimize Equation \ref{be}.
\begin{equation}
\label{bd}
-\bigl(\log(DE)+\log(1-DG)\bigr)
\end{equation}
\begin{equation}
\label{bg}
-\log(DG)
\end{equation}
\begin{equation}
\label{be}
-\log(1-DE)
\end{equation}
\subsection{Enhanced Bidirectional GAN}
Kaplan et al. suggested one more cost function to optimize the generator and encoder parameters of BiGAN \cite{KAPLAN}. Let $X$ be the current batch of the input data set and $E(X)$ be the output of the $encoder$ with $X$ as input. Let $G(E(X))$ be the output of the $generator$ with $E(X)$ as input. They suggested updating the $encoder$'s and $generator$'s parameters together to minimize the mean Euclidean distance between $X$ and $G(E(X))$, as in Equation \ref{kapeq}.
\begin{equation}
\label{kapeq}
\mbox{mean of Euclidean distance}\bigl(X, G(E(X))\bigr)
\end{equation}
Cosine similarity is a measure of the similarity between two vectors; mathematically, it is the cosine of the angle between the two vectors in a multi-dimensional space. We propose that increasing the cosine similarity between $X$ and $G(E(X))$ is a better approach than minimizing the Euclidean distance between them. We update the $encoder$'s and $generator$'s parameters together to increase the value of Equation \ref{kapneq}. Figure \ref{simcomp} shows the cosine similarity between the ground-truth data and the regenerated data for different numbers of epochs. The red coloured curve shows the result of the algorithm in \cite{KAPLAN} and the green coloured curve shows the result of the proposed method. Algorithm \ref{alg:one} gives a detailed description of the enhanced BiGAN training procedure.
\begin{equation}
\label{kapneq}
\mbox{mean of cosine similarity}\bigl(X, G(E(X))\bigr)
\end{equation}
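A minimal sketch (our own illustration, independent of the BiGAN itself) contrasts the two objectives in Equations \ref{kapeq} and \ref{kapneq} on a single pair of vectors; note that cosine similarity is scale-invariant while the Euclidean distance is not:

```python
import math

# Reconstruction quality of a row x and its reconstruction g = G(E(x))
# under the two objectives discussed in the text.
def euclidean_distance(x, g):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, g)))

def cosine_similarity(x, g):
    dot = sum(a * b for a, b in zip(x, g))
    norm_x = math.sqrt(sum(a * a for a in x))
    norm_g = math.sqrt(sum(b * b for b in g))
    return dot / (norm_x * norm_g)

# Doubling the reconstruction leaves the cosine similarity unchanged
# (it only measures direction), while the Euclidean distance grows:
x = [1.0, 2.0, 2.0]
g = [2.0, 4.0, 4.0]
print(cosine_similarity(x, g))   # 1.0
print(euclidean_distance(x, g))  # 3.0
```

This scale invariance is one plausible reason a cosine objective can behave differently from a Euclidean one on features of very different magnitudes, such as correlations in $[-1,1]$ next to unbounded ratios.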
\begin{figure}
\caption{Epochs Vs Cosine Similarity Measure}
\label{simcomp}
\end{figure}
\SetKwComment{Comment}{/* }{ */}
\RestyleAlgo{ruled}
\LinesNumbered
\begin{algorithm}
\DontPrintSemicolon
\SetAlgoLined
\SetNoFillComment
\caption{Proposed BiGAN Training}\label{alg:one}
\KwData{Nine-Dimensional Ground-Truth Data}
\KwResult{Trained BiGAN}
\For{ number of training epochs}{
\For{number of batches}{
\tcc{Training Discriminator}
Let $z$ be a random sample from a standard normal distribution\;
$G(z) \gets$ output of $generator$ with $z$ as input\;
Let $X$ be the current batch of the input data set\;
$E(X) \gets$ output of $encoder$ with $X$ as input\;
$DE \gets$ output of $discriminator$ with $(X,E(X))$ as input\;
$DG \gets$ output of $discriminator$ with $(G(z),z)$ as input\;
Update $discriminator's$ parameters to maximize $log(DE)+log(1-DG)$\;
\;
\tcc{Training Generator}
Let $z$ be a random sample from a standard normal distribution\;
$G(z) \gets$ output of $generator$ with $z$ as input\;
$DG \gets$ output of $discriminator$ with $(G(z),z)$ as input\;
Update $generator's$ parameters to maximize $log(DG)$\;
\;
\tcc{Training Encoder}
Let $X$ be the current batch of the input data set\;
$E(X) \gets$ output of $encoder$ with $X$ as input\;
$DE \gets$ output of $discriminator$ with $(X,E(X))$ as input\;
Update $encoder's$ parameters to maximize $log(1-DE)$\;
\;
\tcc{Reducing Mismatch Between Encoder and Generator}
Let $X$ be the current batch of the input data set\;
$E(X) \gets$ output of $encoder$ with $X$ as input\;
$G(E(X)) \gets$ output of $generator$ with $E(X)$ as input\;
Update $encoder's$ and $generator's$ parameters together to increase the cosine similarity between $X$ and $G(E(X))$\;
}
}
\end{algorithm}
\section{Methodology}
\label{metho}
The objective of this work is to train a Bidirectional GAN (BiGAN) using the proposed training approach on nine-dimensional ground-truth data derived from tax returns submitted by taxpayers (six correlation parameters and three ratio parameters), in order to identify malicious dealers who manipulate their tax return statements. We took the data sets explained in Section \ref{daused}, covering July 2017 to March 2022, and derived nine features (parameters) from them.
\begin{itemize}
\item Six are sensitive correlation parameters.
\item Three are ratio parameters.
\end{itemize}
In Subsection \ref{cor}, we explain the six correlation parameters that are used. In Subsection \ref{rat}, we describe the three ratio parameters that are used. In Subsection \ref{bgan}, we give a detailed algorithm.
\subsection{Correlation parameters}
\label{cor}
In the Indian GST system, three types of taxes are collected, {\it viz.}, CGST, SGST, and IGST.
\begin{itemize}
\item{\it CGST:} Central Goods and Services Tax is levied on intrastate transactions and collected by the Central Government of India.
\item{\it SGST:} State/Union Territory Goods and Services Tax, which is also levied on intrastate transactions and collected by the state or union territory Government.
\item{\it IGST:} Integrated Goods and Services Tax is levied on interstate sales. The Central Government keeps half of this amount and passes the rest to the state where the corresponding goods or services are consumed.
\end{itemize}
The six correlation parameters are mentioned in Table \ref{cp}. Total GST liability is the sum of CGST liability, SGST liability, and IGST liability. Total ITC is equal to the sum of SGST ITC, CGST ITC, and IGST ITC.
\begin{table}
\begin{center}
\begin{tabular}{|c|l|}
\hline
S. No. & \hspace{0.25in} The Six Correlation Parameters \\
\hline
1 & Total GST Liability {\it VS} Total Sales Amount\\
\hline
2 & SGST Liability {\it VS} Total GST Liability\\
\hline
3 & SGST paid in cash {\it VS} SGST Liability \\
\hline
4 & SGST paid in cash {\it VS} Total Sales Amount\\
\hline
5 & Total ITC {\it VS} Total Tax Liability\\
\hline
6 & IGST ITC {\it VS} Total ITC\\
\hline
\end{tabular}
\end{center}
\caption{Correlation Parameters}
\label{cp}
\end{table}
\subsection{Ratio parameters}
\label{rat}
\begin{enumerate}
\item{The ratio of $Total\,Sales$ {\it VS} $Total\,Purchases$}: This ratio captures the value addition.
\item{The ratio of $IGST\,ITC$ {\it VS} $Total\,ITC$}: This ratio captures how much of the purchases are shown as interstate purchases or imports compared to the total purchases.
\item{The ratio of $Total\,Tax\,Liability$ {\it VS} $IGST\,ITC$}.
\end{enumerate}
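A hedged sketch of the feature derivation described in Subsections \ref{cor} and \ref{rat}: the record field names below are our own illustrative choices, not the schema of the department's GSTR-3B database, and the correlation parameters would be computed with the usual Pearson formula over a dealer's monthly series.

```python
import math

# Hypothetical field names for a dealer's monthly GSTR-3B records; the
# real schema of the department's database is not reproduced here.
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def ratio_features(months):
    """The three ratio parameters, aggregated over all months."""
    total_sales = sum(m["sales"] for m in months)
    total_purchases = sum(m["purchases"] for m in months)
    igst_itc = sum(m["igst_itc"] for m in months)
    total_itc = sum(m["igst_itc"] + m["cgst_itc"] + m["sgst_itc"] for m in months)
    liability = sum(m["liability"] for m in months)
    return (total_sales / total_purchases,  # value addition
            igst_itc / total_itc,           # interstate share of purchases
            liability / igst_itc)           # liability vs. interstate ITC

months = [
    {"sales": 210000, "purchases": 190000, "liability": 24000,
     "igst_itc": 9000, "cgst_itc": 5600, "sgst_itc": 5600},
    {"sales": 270000, "purchases": 202000, "liability": 9200,
     "igst_itc": 2000, "cgst_itc": 1600, "sgst_itc": 1600},
]
print(ratio_features(months))                     # roughly (1.224, 0.433, 3.018)
print(pearson([1.0, 2.0, 3.0], [2.0, 4.0, 6.0]))  # perfectly correlated series
```

Each of the six correlation parameters would be obtained by applying `pearson` to the appropriate pair of monthly series (e.g., SGST liability versus total GST liability).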
\subsection{Identifying Fraudulent Taxpayers using BiGAN}
\label{bgan}
\begin{algorithm}
\DontPrintSemicolon
\SetAlgoLined
\SetNoFillComment
\caption{Identifying Fraudulent Taxpayers}\label{alg:two}
\KwData{Nine Dimensional Ground-Truth Data (GT)}
\KwResult{Fraudulent Taxpayers}
Train a BiGAN with the ground-truth data set $GT$ using Algorithm \ref{alg:one}\;
$LR \gets$ latent representation of the ground-truth data set $GT$ computed using the $encoder$\;
$RG \gets $ output of the $generator$ with $LR$ as the input \;
$CS \gets $ Cosine similarity between corresponding rows of $GT$ and $RG$\;
$FraudSet \gets $ taxpayers whose similarity score is less than $first~quartile - 1.5 \times IQR$\;
\end{algorithm}
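The flagging rule in the last step of Algorithm \ref{alg:two} is the standard lower outlier fence of a boxplot. A small sketch (with hypothetical similarity scores, not values taken from the experiments):

```python
import statistics

# Boxplot lower-fence rule on (hypothetical) cosine similarity scores:
# a taxpayer is flagged if the score falls below Q1 - 1.5 * IQR.
def flag_outliers(scores):
    q1, _, q3 = statistics.quantiles(scores, n=4, method="inclusive")
    fence = q1 - 1.5 * (q3 - q1)
    return [s for s in scores if s < fence]

scores = [0.95, 0.92, 0.90, 0.88, 0.85, 0.80, 0.75, 0.70, 0.10]
print(flag_outliers(scores))  # only the 0.10 dealer falls below the fence
```

Because the fence adapts to the spread of the scores, no fixed similarity threshold needs to be chosen in advance.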
\section{Experimentation and Results obtained}
\label{exper}
We took the returns data of 1184 iron and steel dealers. We computed the correlation and ratio parameters defined in Subsections \ref{cor} and \ref{rat} for each dealer. Table \ref{table1} gives a snapshot of the parameters created for each dealer. Figures \ref{corpar} and \ref{ratiopar} show the distributions of the correlation and ratio parameters, respectively. Figures \ref{discriminator}, \ref{generator}, and \ref{encoder} give the PyTorch code of the discriminator, generator, and encoder, respectively. After experimentation, we opted for a four-dimensional latent space. Note that the input to the discriminator is thirteen-dimensional (four-dimensional latent-space data plus the nine-dimensional training/ground-truth data).
Figure \ref{discrimierror} shows the discriminator's error at different epochs. Figure \ref{cosineavg} shows the average cosine similarity between corresponding rows in the training data set and the regenerated data set at different epochs. The boxplot in Figure \ref{cosinebox} shows the distribution of cosine similarities between corresponding rows in the training data set and the regenerated data set at the final epoch. The third quartile is 0.8992 and the first quartile is 0.6827. There are nineteen taxpayers whose cosine similarity is less than the $first~quartile - 1.5 \times IQR$. The tax returns of these taxpayers need further investigation by tax officers. The expected evasion by these taxpayers is more than a few hundred million Indian rupees.
\begin{figure}
\caption{Discriminator}
\label{discriminator}
\caption{Generator}
\label{generator}
\caption{Encoder}
\label{encoder}
\end{figure}
\begin{table}[ht]
\begin{adjustbox}{width=\columnwidth,center}
\centering
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|}
\hline
\begin{tabular}[c]{@{}c@{}}S.No\end{tabular} & Corr 1 & Corr 2 & Corr 3 & Corr 4 & Corr 5 & Corr 6 & \begin{tabular}[c]{@{}c@{}}Total Sales\\ /Total Purchases\end{tabular} & \begin{tabular}[c]{@{}c@{}}IGST ITC\\ /Total ITC\end{tabular} & \begin{tabular}[c]{@{}c@{}}Total tax liability\\ /IGST ITC\end{tabular} \\ \hline
1 & 0.9977 & 0.9998 & 0.2159 & 0.1967 & 0.9556 & 0.9988 & 1.0465 & 0.8717 & 1.3272 \\ \hline
2 & 0.9940 & 0.9799 & -0.3371 & -0.2486 & 0.6408 & 0.5539 & 1.1992 & 0.1347 & 7.4129 \\ \hline
3 & 0.9476 & 0.4556 & 0.0017 & 0.1286 & -0.1620 & 0.9606 & 1.6991 & 0.8020 & 2.1824 \\ \hline
\end{tabular}
\end{adjustbox}
\caption{Snapshot of parameters}
\label{table1}
\end{table}
\begin{figure}
\caption{Correlation Parameters}
\label{corpar}
\caption{Ratio Parameters}
\label{ratiopar}
\end{figure}
\begin{figure}
\caption{Discriminator Error}
\label{discrimierror}
\end{figure}
\begin{figure}
\caption{Avg Cosine Measure}
\label{cosineavg}
\caption{Final Cosine Measures}
\label{cosinebox}
\end{figure}
\begin{table}[ht]
\begin{adjustbox}{width=\columnwidth,center}
\centering
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|}
\hline
\begin{tabular}[c]{@{}c@{}}S.No\end{tabular} & Corr 1 & Corr 2 & Corr 3 & Corr 4 & Corr 5 & Corr 6 & \begin{tabular}[c]{@{}c@{}}Total Sales\\ /Total Purchases\end{tabular} & \begin{tabular}[c]{@{}c@{}}IGST ITC\\ /Total ITC\end{tabular} & \begin{tabular}[c]{@{}c@{}}Total tax liability\\ /IGST ITC\end{tabular} \\ \hline
1 & 0.39657727 & 0.91993311 & 0.49645071 & 0.57682353 & 0.80236321 & 0.67371039 & -0.03205805 & 0.44915969 & -0.05412638 \\ \hline
2 & 0.99616234 & 0.89398749 & 0.7674135 & 0.60683971 & 0.97080785 & 0.88260176 & -0.03226673 & 0.36960728 & -0.05415696 \\ \hline
3 & 0.99947796 & 0.98932055 & 0.65413855 & 0.70517677 & 0.35052473 & 0.87797351 & -0.03098378 & 0.60539265 & -0.05411582 \\\hline
\end{tabular}
\end{adjustbox}
\caption{Snapshot of normalized parameters of few genuine dealers}
\label{caseg}
\end{table}
\begin{table}[ht]
\begin{adjustbox}{width=\columnwidth,center}
\centering
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|}
\hline
\begin{tabular}[c]{@{}c@{}}S.No\end{tabular} & Corr 1 & Corr 2 & Corr 3 & Corr 4 & Corr 5 & Corr 6 & \begin{tabular}[c]{@{}c@{}}Total Sales\\ /Total Purchases\end{tabular} & \begin{tabular}[c]{@{}c@{}}IGST ITC\\ /Total ITC\end{tabular} & \begin{tabular}[c]{@{}c@{}}Total tax liability\\ /IGST ITC\end{tabular} \\ \hline
1 & 0.31020196 & 0.71482671 & 0.9993972 & 0.45058559 & 0.95149288 & 0.99453044 & 3.95717384 & -0.56773328 & -0.04284052 \\ \hline
2 & 0.99998812 & 0.99800189 & -0.26364705 & -0.27946686 & 0.49037541 & 0.35609805 & -0.03317666 & -1.06640099 & 7.48629588 \\ \hline
3 & 0.0335644 & -0.1469533 & -0.14223751 & -0.21525574 & 0.65303945 & 0.54761113 & -0.02922851 & -0.84931729 & -0.05217054 \\ \hline
\end{tabular}
\end{adjustbox}
\caption{Snapshot of normalized parameters of few fraudulent dealers}
\label{casef}
\end{table}
\subsection{Case Study}
Table \ref{caseg} gives the normalized features of three genuine dealers. Table \ref{casef} gives the normalized features of three fraudulent dealers. We observed that the feature {\it IGST ITC/total ITC} is positive for most of the genuine dealers and negative for most of the fraudulent dealers. This means the fraudulent dealers are showing most of their purchases as intra-state purchases. The fourth correlation parameter (SGST paid in cash vs total sales) is low for fraudulent dealers and high for genuine dealers. This means the fraudulent dealers are not paying any cash and are using the ITC to set off their liability. Figure \ref{bdd} shows the business details of a few fraudulent taxpayers; amounts are in lakhs of Indian rupees. We can observe that they pay no cash even though they do huge volumes of business.
\begin{figure}
\caption{Business Details}
\label{bdd}
\end{figure}
\section{Conclusion}
\label{con}
In this paper, we enhanced the BiGAN training approach given in \cite{KAPLAN}. The proposed training approach significantly improves the performance and stability of BiGAN.
We analyzed the tax returns of a set of business dealers in the state of Telangana, India, to identify dealers who perform extensive tax evasion. We took the data of 1184 iron and steel dealers and derived a ground-truth data set from their monthly returns. The ground-truth data contains nine parameters (six correlation parameters and three ratio parameters). We used the proposed BiGAN training method to identify fraudulent dealers. First, we trained a BiGAN on the ground-truth data set. Next, we encoded the ground-truth data set using the $encoder$ and decoded it back using the $generator$, giving the latent representation as input. For each taxpayer, we computed the cosine similarity between his/her ground-truth data and the regenerated data. Taxpayers with lower cosine similarity scores are potential return manipulators. We identified nineteen dealers whose cosine similarity score is less than $Q_1 - 1.5\times IQR$, where $Q_1$ is the first quartile and $IQR$ is the interquartile range of the scores. The expected evasion by these taxpayers exceeds a few hundred million Indian rupees.
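In symbols (our notation, not taken from the returns data): writing $\mathbf{x}$ for a taxpayer's nine-parameter ground-truth vector and $\hat{\mathbf{x}}=G(E(\mathbf{x}))$ for its reconstruction by the encoder $E$ and generator $G$, the flagging rule sketched above reads

```latex
\[
\operatorname{sim}(\mathbf{x},\hat{\mathbf{x}})
=\frac{\mathbf{x}\cdot\hat{\mathbf{x}}}{\|\mathbf{x}\|\,\|\hat{\mathbf{x}}\|},
\qquad
\text{flag the taxpayer if }\;
\operatorname{sim}(\mathbf{x},\hat{\mathbf{x}})<Q_1-1.5\,\mathrm{IQR},
\]
```

where $Q_1$ and $\mathrm{IQR}$ are the first quartile and interquartile range of the similarity scores over all 1184 dealers.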
\section*{Acknowledgment}
We express our sincere gratitude to the Telangana state Government, India, for sharing the commercial tax data set, which is used in this work.
\end{document}
\begin{document}
\setlength{\footskip}{50pt}
\title{Uniqueness of ergodic optimization of top Lyapunov exponent for typical matrix cocycles}
\author{Wanshan Lin and Xueting Tian}
\address{Wanshan Lin, School of Mathematical Sciences, Fudan University\\Shanghai 200433, People's Republic of China}
\email{[email protected]}
\address{Xueting Tian, School of Mathematical Sciences, Fudan University\\Shanghai 200433, People's Republic of China}
\email{[email protected]}
\begin{abstract}
In this article, we consider the ergodic optimization of the top Lyapunov exponent. We prove that there is a unique maximising measure of the top Lyapunov exponent for typical matrix cocycles. Using the results we obtain, we prove that in any non-uniquely ergodic minimal dynamical system, the Lyapunov-irregular points are typical for typical matrix cocycles.
\end{abstract}
\keywords{Ergodic optimization, Lyapunov exponent, Residual property}
\subjclass[2020]{37A05; 37B05}
\maketitle
\section{Introduction}
Let $(X,d)$ be a compact metric space, and $T:X \rightarrow X$ be a continuous map. Such a pair $(X,T)$ is called a dynamical system. Let $\mathcal{M}(X)$, $\mathcal{M}(X,T)$, $\mathcal{M}^{e}(X,T)$ denote the spaces of probability measures, $T$-invariant probability measures, and $T$-ergodic probability measures, respectively. Let $\mathbb{Z}$, $\mathbb{N}$, $\mathbb{N^{+}}$ denote the integers, non-negative integers, and positive integers, respectively. Let $C(X)$ denote the space of real continuous functions on $X$ with the norm $\|f\|:=\sup\limits_{x\in X}|f(x)|.$ A sequence $\{f_n\}_{n=1}^{+\infty}$ of functions from $X$ to $\mathbb{R}\cup\{-\infty\}$ will be called \emph{subadditive} if, for any $x\in X$ and any $n,m\in\mathbb{N^+}$, the inequality $f_{n+m}(x)\leq f_n(T^mx)+f_m(x)$ is satisfied. In a Baire space, a set is said to be residual if it contains a dense $G_\delta$ subset.
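As a basic illustration of the definition (our example; it reappears implicitly later, where Birkhoff sums are written $f_n=\sum_{i=0}^{n-1}f\circ T^i$), the Birkhoff sums of any $f\in C(X)$ form a subadditive, in fact additive, sequence:

```latex
\[
f_n(x):=\sum_{i=0}^{n-1} f(T^i x)
\quad\Longrightarrow\quad
f_{n+m}(x)=\sum_{i=0}^{m-1} f(T^i x)+\sum_{i=0}^{n-1} f\bigl(T^i(T^m x)\bigr)
=f_m(x)+f_n(T^m x),
\]
```

so for Birkhoff sums equality holds in the subadditivity inequality.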
For any $f\in C(X)$, we define the \emph{maximum ergodic average} $\beta(f):=\sup\limits_{\mu\in\mathcal{M}(X,T)}\int f\mathrm{d\mu},$ and the set of all \emph{maximising measures} $\mathcal{M}_{max}(f):=\left\{\mu\in\mathcal{M}(X,T)\mid \int f\mathrm{d\mu}=\beta(f)\right\}.$ When $f\in C(X)$, the study of the functional $\beta$ and $\mathcal{M}_{max}(f)$ has been termed \emph{ergodic optimisation of Birkhoff averages}, and has attracted considerable research interest; see \cite{B2018,J2006,J2006ii,M2010} for more information.
In this article, we will pay attention to the ergodic optimization of the top Lyapunov exponent. Let $C(X,GL_d(\mathbb{R}))$ denote the space of all continuous functions $X\to GL_d(\mathbb{R})$. For continuous functions $A,B\in C(X,GL_d(\mathbb{R}))$, we use the metric $\rho(A,B):=\max\limits_{x\in X}\{\|A(x)-B(x)\|+\|A(x)^{-1}-B(x)^{-1}\|\}$, which makes $C(X,GL_d(\mathbb{R}))$ a complete metric space.
For any $A\in C(X,GL_d(\mathbb{R}))$ and any $n\in\mathbb{N^+}$, the \emph{cocycle} $A(n,x)$ is defined as $A(n,x)=A(T^{n-1}x)\cdots A(x)$. Then $\{\log\|A(n,x)\|\}_{n=1}^{+\infty}$ is a subadditive sequence in $C(X)$. For any $\mu\in\mathcal{M}(X,T)$, by the Kingman subadditive ergodic theorem, $\lim\limits_{n\to+\infty}\frac{\log\|A(n,x)\|}{n}$ exists for $\mu$-a.e. $x$ and is called the \emph{top Lyapunov exponent} at $x$. Moreover, there is a $T$-invariant function $\phi(x)$ such that $\phi(x)=\lim\limits_{n\to+\infty}\frac{\log\|A(n,x)\|}{n}$ for $\mu$-a.e. $x$, and $\int \phi(x)\mathrm{d\mu}=\lim\limits_{n\to+\infty}\frac{1}{n}\int \log\|A(n,x)\|\mathrm{d\mu}=\inf\limits_{n\geq1}\frac{1}{n}\int \log\|A(n,x)\|\mathrm{d\mu}$. We define the \emph{maximum ergodic average} of $A$ to be the quantity $$\beta(A):=\sup_{\mu\in\mathcal{M}(X,T)}\inf\limits_{n\geq1}\frac{1}{n}\int \log\|A(n,x)\|\mathrm{d\mu}.$$ We shall say that $\mu\in\mathcal{M}(X,T)$ is a \emph{maximising measure} for $A$ if $\inf\limits_{n\geq1}\frac{1}{n}\int \log\|A(n,x)\|\mathrm{d\mu}=\beta(A)$, and denote the set of all maximising measures by $\mathcal{M}_{max}(A)$.
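As a simple illustration (our example, not from the paper): for a constant cocycle $A(x)\equiv M\in GL_d(\mathbb{R})$, Gelfand's formula gives, for every $x\in X$,

```latex
\[
\lim_{n\to+\infty}\frac{1}{n}\log\|A(n,x)\|
=\lim_{n\to+\infty}\frac{1}{n}\log\|M^n\|
=\log\rho(M),
\]
```

where $\rho(M)$ is the spectral radius of $M$; hence in this case $\beta(A)=\log\rho(M)$ and every invariant measure is maximising, i.e. $\mathcal{M}_{max}(A)=\mathcal{M}(X,T)$.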
Generally, for a subadditive sequence $\{f_n\}_{n=1}^{+\infty}$ of upper semicontinuous functions from $X$ to $\mathbb{R}\cup\{-\infty\}$, we can define the \emph{maximum ergodic average} $$\beta[(f_n)]:=\sup_{\mu\in\mathcal{M}(X,T)}\inf\limits_{n\geq1}\frac{1}{n}\int f_n\mathrm{d\mu},$$ and the set of all \emph{maximising measures} $$\mathcal{M}_{max}[(f_n)]:=\left\{\mu\in\mathcal{M}(X,T)\mid \inf\limits_{n\geq1}\frac{1}{n}\int f_n\mathrm{d\mu}=\beta[(f_n)]\right\}.$$
As for the set $\mathcal{M}_{max}[(f_n)]$ and the quantity $\beta[(f_n)]$, Ian D. Morris proved
\begin{theorem}\cite[Proposition A.5]{M2013}\label{the1.1}
Suppose that $\{f_n\}_{n=1}^{+\infty}$ is a subadditive sequence of upper semicontinuous functions from $X$ to $\mathbb{R}\cup\{-\infty\}$. Then $\mathcal{M}_{max}[(f_n)]$ is compact, convex and nonempty, and the extreme points of $\mathcal{M}_{max}[(f_n)]$ are precisely the ergodic elements of $\mathcal{M}_{max}[(f_n)]$.
\end{theorem}
\begin{theorem}\cite[Theorem A.3]{M2013}\label{the1.2}
Suppose that $\{f_n\}_{n=1}^{+\infty}$ is a subadditive sequence in $C(X)$. Then,
\[
\begin{split}
\beta[(f_n)]&=\sup_{\mu\in\mathcal{M}^{e}(X,T)}\inf\limits_{n\geq1}\frac{1}{n}\int f_n\mathrm{d\mu}=\inf\limits_{n\geq1}\sup_{x\in X}\frac{1}{n}f_n(x)\\&=\inf\limits_{n\geq1}\frac{1}{n}\sup_{\mu\in\mathcal{M}(X,T)}\int f_n\mathrm{d\mu}=\sup_{x\in X}\inf\limits_{n\geq1}\frac{1}{n}f_n(x).
\end{split}
\]
In all but the last of these expressions, the infimum over all $n\geq1$ may be replaced with the limit as $n\to+\infty$ of the same quantity, without altering the value of the expression. Furthermore, every supremum arising in each of the above expressions is attained.
\end{theorem}
Now, suppose that $\Lambda$ is a nonempty and compact subset of $\mathcal{M}(X,T)$. Similarly, for any $f\in C(X)$, any $A\in C(X,GL_d(\mathbb{R}))$, and any subadditive sequence $\{f_n\}_{n=1}^{+\infty}$ of upper semicontinuous functions from $X$ to $\mathbb{R}\cup\{-\infty\}$, we define
\[
\begin{split}
\beta^\Lambda(f)&:=\sup\limits_{\mu\in\Lambda}\int f\mathrm{d\mu},\\
\mathcal{M}^\Lambda_{max}(f)&:=\left\{\mu\in\Lambda\mid \int f\mathrm{d\mu}=\beta^\Lambda(f)\right\},\\
\beta^\Lambda(A)&:=\sup_{\mu\in\Lambda}\inf\limits_{n\geq1}\frac{1}{n}\int \log\|A(n,x)\|\mathrm{d\mu},\\
\mathcal{M}^\Lambda_{max}(A)&:=\left\{\mu\in\Lambda\mid \inf\limits_{n\geq1}\frac{1}{n}\int \log\|A(n,x)\|\mathrm{d\mu}=\beta^\Lambda(A)\right\},\\
\mathcal{U}^\Lambda&:=\left\{A\in C(X,GL_d(\mathbb{R}))\mid \mathcal{M}^\Lambda_{max}(A) \text{ is a singleton}\right\},\\
\beta^\Lambda[(f_n)]&:=\sup_{\mu\in\Lambda}\inf\limits_{n\geq1}\frac{1}{n}\int f_n\mathrm{d\mu},\\
\mathcal{M}^\Lambda_{max}[(f_n)]&:=\left\{\mu\in\Lambda\mid \inf\limits_{n\geq1}\frac{1}{n}\int f_n\mathrm{d\mu}=\beta^\Lambda[(f_n)]\right\}.
\end{split}
\]
From the definitions, if we denote $f_n=\sum\limits_{i=0}^{n-1}f\circ T^i$ and $A=e^fI_d$, then $\beta^\Lambda[(f_n)]=\beta^\Lambda(A)=\beta^\Lambda(f)$ and $\mathcal{M}^\Lambda_{max}[(f_n)]=\mathcal{M}^\Lambda_{max}(A)=\mathcal{M}^\Lambda_{max}(f)$. For any $A\in\mathcal{U}^\Lambda$, suppose that $\mathcal{M}^\Lambda_{max}(A)=\{\mu^\Lambda_A\}$. When $\Lambda=\mathcal{M}(X,T)$, we will omit $\Lambda$. Since $\mu\mapsto\inf\limits_{n\geq1}\frac{1}{n}\int \log\|A(n,x)\|\mathrm{d\mu}$ is an upper semicontinuous function from $\Lambda$ to $\mathbb{R}$, we have that $\mathcal{M}^\Lambda_{max}(A)$ is a nonempty and compact subset of $\mathcal{M}(X,T)$. Similarly, $\mathcal{M}^\Lambda_{max}(f)$ and $\mathcal{M}^\Lambda_{max}[(f_n)]$ are nonempty and compact subsets of $\mathcal{M}(X,T)$.
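The reduction to the scalar case can be checked directly from the definition of the cocycle (a quick verification, ours): for $A=e^fI_d$,

```latex
\[
(e^{f}I_d)(n,x)=e^{f(T^{n-1}x)}\cdots e^{f(x)}\,I_d=e^{f_n(x)}I_d,
\qquad
\log\|(e^{f}I_d)(n,x)\|=f_n(x),
\]
```

since $\|I_d\|=1$, so the maximum ergodic average of the cocycle coincides with that of the Birkhoff sums of $f$.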
As for $\mathcal{U}^\Lambda$, we will prove the following conclusions.
\begin{maintheorem}\label{maintheorem-1}
Suppose that $(X,T)$ is a dynamical system, $\Lambda$ is a nonempty and compact subset of $\mathcal{M}(X,T)$. Then
\begin{enumerate}[(1)]
\item $\mathcal{U}^\Lambda$ is residual in $C(X,GL_d(\mathbb{R}))$; in particular, $\mathcal{U}$ is residual in $C(X,GL_d(\mathbb{R}))$;
\item if $\#\Lambda<+\infty$, then $C(X,GL_d(\mathbb{R}))\setminus\mathcal{U}^\Lambda$ is nowhere dense in $C(X,GL_d(\mathbb{R}))$;
\item if $\overline{\Lambda\cap\mathcal{M}^e(X,T)}=\Lambda$ and $\mathcal{F}$ is a dense subset of $C(X,GL_d(\mathbb{R}))$, then $\bigcup\limits_{A\in\mathcal{F}}\mathcal{M}^\Lambda_{max}(A)$ is dense in $\Lambda$.
\end{enumerate}
\end{maintheorem}
The particular case of Theorem \ref{maintheorem-1} (1) is a generalization of \cite[Theorem 3.2]{J2006} from continuous functions to matrix cocycles. As corollaries, we have
\begin{maincorollary}\label{maincorollary-1}
Suppose that $(X,T)$ is a dynamical system. If $\mathcal{F}$ is a dense subset of $C(X,GL_d(\mathbb{R}))$, then $\mathcal{M}^e(X,T)\cap\bigcup\limits_{A\in\mathcal{F}}\mathcal{M}_{max}(A)$ is dense in $\mathcal{M}^e(X,T)$.
\end{maincorollary}
\begin{maincorollary}\label{maincorollary-2}
Suppose that $(X,T)$ is a dynamical system with $\#\mathcal{M}^e(X,T)<+\infty$. Then $C(X,GL_d(\mathbb{R}))\setminus\mathcal{U}$ is nowhere dense in $C(X,GL_d(\mathbb{R}))$.
\end{maincorollary}
Here is an example such that $\#\mathcal{M}^e(X,T)=+\infty$ and $C(X,GL_d(\mathbb{R}))\setminus\mathcal{U}$ is dense in $C(X,GL_d(\mathbb{R}))$ for some $d\geq1$.
\begin{example}
Suppose that $X=[0,1]$ and $Tx=x$ for any $x\in[0,1]$. Then $\delta_x\in\mathcal{M}^e(X,T)$ for any $x\in[0,1]$. Given $f\in C(X,\mathbb{R}\setminus\{0\})$. For any $n\geq1$, let
\[g_n(x):=
\begin{cases}
\frac{2^n-1}{2^n}\|f\|&\text{if } x\in f^{-1}([\frac{2^n-1}{2^n}\|f\|,\|f\|]),\\
-\frac{2^n-1}{2^n}\|f\|&\text{if } x\in f^{-1}([-\|f\|,-\frac{2^n-1}{2^n}\|f\|]),\\
f(x)&\text{otherwise}.
\end{cases}
\]
Then $\|g_n-f\|\leq\frac{1}{2^n}\|f\|$ and $\#\mathcal{M}_{max}(g_n)>1$ for any $n\geq1$. Hence, $C(X,\mathbb{R}\setminus\{0\})\setminus\mathcal{U}$ is dense in $C(X,\mathbb{R}\setminus\{0\})$.
\end{example}
For any $A\in C(X,GL_d(\mathbb{R}))$, we denote by $$LI_A:=\left\{x\in X\mid \lim_{n\to+\infty}\frac{1}{n}\log\|A(n,x)\|\text{ diverges }\right\}$$ the set of \emph{Lyapunov-irregular} points of $A$. Denote $$\mathcal{R}:=\left\{A\in C(X,GL_d(\mathbb{R}))\mid LI_A\text{ is residual in } X\right\}.$$
As for a minimal dynamical system, we have
\begin{maincorollary}\label{maincorollary-3}
Suppose that $(X,T)$ is a minimal dynamical system with $\#\mathcal{M}^e(X,T)>1$. Then $\mathcal{R}$ is residual in $C(X,GL_d(\mathbb{R}))$. Moreover, if $1<\#\mathcal{M}^e(X,T)<+\infty$, then $C(X,GL_d(\mathbb{R}))\setminus\mathcal{R}$ is nowhere dense in $C(X,GL_d(\mathbb{R}))$.
\end{maincorollary}
The rest of this paper is organized as follows. In Section \ref{sec2}, we introduce some preliminary results that will be used in the proofs. In Section \ref{sec3}, we prove Theorem \ref{maintheorem-1}. In Section \ref{sec4}, we prove Corollaries \ref{maincorollary-1}, \ref{maincorollary-2} and \ref{maincorollary-3}.
\section{Preliminaries}\label{sec2}
In this section, we will introduce some preliminary results that will be used in the proof.
\subsection{Subadditive sequence}
As for a subadditive sequence of upper semicontinuous functions from $X$ to $\mathbb{R}\cup\{-\infty\}$, it was proved in \cite{CFH2008} that
\begin{theorem}\cite[Lemma 2.3]{CFH2008}\label{lem5}
Suppose that $\{\nu_n\}_{n=1}^{+\infty}$ is a sequence in $\mathcal{M}(X)$ and $\{f_n\}_{n=1}^{+\infty}$ is a subadditive sequence of upper semicontinuous functions from $X$ to $\mathbb{R}\cup\{-\infty\}$. We form the new sequence $\{\mu_{n}\}_{n=1}^{+\infty}$ by $\mu_{n}=\frac{1}{n}\sum\limits_{i=0}^{n-1}\nu_n\circ T^{-i}$. Assume that $\mu_{n_i}$ converges to $\mu$ in $\mathcal{M}(X)$ for some subsequence $\{n_i\}$ of natural numbers. Then $\mu\in\mathcal{M}(X,T)$, and moreover $$\limsup_{i\to+\infty}\frac{1}{n_i}\int f_{n_i}\mathrm{d}\nu_{n_i}\leq\lim\limits_{n\to+\infty}\frac{1}{n}\int f_n\mathrm{d\mu}.$$
\end{theorem}
\subsection{Ergodic optimisation of Birkhoff averages}
In the research of ergodic optimisation of Birkhoff averages, it was proved in \cite{J2006ii} that
\begin{theorem}\cite[Theorem 1]{J2006ii}\label{lem8}
For any $\mu\in\mathcal{M}^e(X,T)$, there exists $h\in C(X)$ such that $\mathcal{M}_{max}(h)=\{\mu\}$.
\end{theorem}
\begin{remark}
Note that $\mu\mapsto\inf\limits_{n\geq1}\frac{1}{n}\int \log\|A(n,x)\|\mathrm{d\mu}=\lim\limits_{n\to+\infty}\frac{1}{n}\int \log\|A(n,x)\|\mathrm{d\mu}$ is an affine upper semicontinuous function on $\mathcal{M}(X,T)$. Combining \cite[Lemma 2(ii)]{J2006ii}, \cite[Proposition 1]{J2006ii} and \cite[Proposition 5]{Phe1}, we can obtain a more general result: for any $A\in C(X,GL_d(\mathbb{R}))$ and any $\mu\in\mathcal{M}^e(X,T)$, there is $f\in C(X)$ such that $\mathcal{M}_{max}(e^fA)=\{\mu\}$ and $\beta(e^fA)=0$.
\end{remark}
\subsection{Lyapunov-irregular set}
As for a minimal dynamical system, it was proved in \cite{HLT2021} that
\begin{theorem}\cite[Corollary 1.2]{HLT2021}\label{the2.3}
Suppose that $(X,T)$ is a minimal dynamical system. Given $A\in C(X,GL_d(\mathbb{R}))$, either there is $c\in\mathbb{R}$ such that $\lim\limits_{n\to+\infty}\frac{1}{n}\log\|A(n,x)\|=c$ for any $x\in X$, or $A\in\mathcal{R}$.
\end{theorem}
\section{Proof of Theorem \ref{maintheorem-1}}\label{sec3}
In general, the proof of Theorem \ref{maintheorem-1} (1) is inspired by the proof of \cite[Theorem 3.2]{J2006}, in which Oliver Jenkinson proved an analogous result for $\mathcal{M}_{max}(f)$ with $f\in C(X)$. The proof of Theorem \ref{maintheorem-1} (2) is based on Theorem \ref{maintheorem-1} (1). The key to the proof of Theorem \ref{maintheorem-1} (3) is Theorem \ref{lem8} \cite[Theorem 1]{J2006ii}.
\subsection{Proof of Theorem \ref{maintheorem-1} (1)}
\begin{lemma}\label{lem1}
Suppose that $\Lambda$ is a nonempty and compact subset of $\mathcal{M}(X,T)$ and $\{f_n\}_{n=1}^{+\infty}$ is a subadditive sequence of upper semicontinuous functions from $X$ to $\mathbb{R}\cup\{-\infty\}$. Then
$$\beta^\Lambda[(f_n)]=\inf\limits_{n\geq1}\frac{1}{n}\sup_{\mu\in\Lambda}\int f_n\mathrm{d\mu}
=\lim_{n\to+\infty}\frac{1}{n}\sup_{\mu\in\Lambda}\int f_n\mathrm{d\mu}.$$
\end{lemma}
\begin{proof}
For the first equality, it is enough to show that $\beta^\Lambda[(f_n)]\geq\inf\limits_{n\geq1}\frac{1}{n}\sup\limits_{\mu\in\Lambda}\int f_n\mathrm{d\mu}$. The second equality follows from the fact that $\sup\limits_{\mu\in\Lambda}\int f_{m+n}\mathrm{d\mu}\leq\sup\limits_{\mu\in\Lambda}\int f_m\mathrm{d\mu}+\sup\limits_{\mu\in\Lambda}\int f_n\mathrm{d\mu}$ for any $n\geq1, m\geq1$.
For each $n\geq1$, we choose $\nu_n\in\mathcal{M}^\Lambda_{max}(f_n)$. Since $\Lambda$ is compact, it is closed. Suppose that $\mu$ is a limit point of $\{\nu_{n}\}$; then $\mu\in\Lambda$. By Theorem \ref{lem5}, we have
\[
\begin{split}
\inf_{n\geq1}\frac{1}{n}\int f_n\mathrm{d\mu}&=\lim\limits_{n\to+\infty}\frac{1}{n}\int f_n\mathrm{d\mu}\\&\geq\limsup_{i\to+\infty}\frac{1}{n_i}\int f_{n_i}\mathrm{d}\nu_{n_i}\\&=\limsup_{i\to+\infty}\frac{1}{n_i}\sup_{\nu\in\Lambda}\int f_{n_i}\mathrm{d}\nu\\&=\lim_{i\to+\infty}\frac{1}{i}\sup_{\nu\in\Lambda}\int f_{i}\mathrm{d}\nu\\&\geq\beta^\Lambda[(f_n)].
\end{split}
\]
Hence, $\mu\in\mathcal{M}^\Lambda_{max}[(f_n)]$ and $\beta^\Lambda[(f_n)]\geq\inf\limits_{n\geq1}\frac{1}{n}\sup\limits_{\mu\in\Lambda}\int f_n\mathrm{d\mu}$.
\end{proof}
From the proof of this lemma, we have
\begin{theorem}
Suppose that $\Lambda$ is a nonempty and compact subset of $\mathcal{M}(X,T)$ and $\{f_n\}_{n=1}^{+\infty}$ is a subadditive sequence of upper semicontinuous functions from $X$ to $\mathbb{R}\cup\{-\infty\}$, and choose $\mu_{n}\in\mathcal{M}^\Lambda_{max}(f_n)$ for each $n\geq1$. If $\mu$ is a limit point of $\{\mu_{n}\}$, then $\mu\in\mathcal{M}^\Lambda_{max}[(f_n)]$.
\end{theorem}
Since, for any fixed $n\geq1$, $\sup\limits_{\mu\in\Lambda}\frac{1}{n}\int \log\|A(n,x)\|\mathrm{d\mu}$ is a continuous function from $C(X,GL_d(\mathbb{R}))$ to $\mathbb{R}$, by Lemma \ref{lem1}, $\beta^\Lambda$ is the limit of an everywhere convergent sequence of continuous functions. By \emph{Baire's theorem on functions of first class} \cite[Theorem 7.3]{Oxtoby1980}, if we denote $$\tilde{\mathcal{C}^\Lambda}:=\left\{A\in C(X,GL_d(\mathbb{R}))\mid\beta^\Lambda \text{ is continuous at }A\right\},$$ then $C(X,GL_d(\mathbb{R}))\setminus\tilde{\mathcal{C}^\Lambda}$ is of first category, which means that $\tilde{\mathcal{C}^\Lambda}$ is residual in $C(X,GL_d(\mathbb{R}))$.
\begin{remark}
$\tilde{\mathcal{C}^\Lambda}$ may not be all of $C(X,GL_d(\mathbb{R}))$; see \cite[Corollary 6]{F1997} for an example.
\end{remark}
For any $f\in C(X)$, we define a function $\Gamma(f):C(X,GL_d(\mathbb{R}))\to C(X,GL_d(\mathbb{R}))$ as $\Gamma(f)(A):=e^fA$ for any $A\in C(X,GL_d(\mathbb{R}))$. Then $\Gamma(f)$ is a homeomorphism with $(\Gamma(f))^{-1}=\Gamma(-f)$. Generally, for $n\in\mathbb{Z}$,
\[(\Gamma(f))^n:=
\begin{cases}
\underbrace{\Gamma(f)\circ\cdots\circ\Gamma(f)}_{\text{$n$ terms}}&\text{if } n>0,\\
id&\text{if } n=0,\\
\underbrace{(\Gamma(f))^{-1}\circ\cdots\circ(\Gamma(f))^{-1}}_{\text{$(-n)$ terms}}&\text{if } n<0.
\end{cases}
\]
Then, we have that $(\Gamma(f))^n=\Gamma(nf)$ for any $n\in\mathbb{Z}$.
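The identity $(\Gamma(f))^n=\Gamma(nf)$ can be checked directly from the definition; for instance, for $n=2$ (a one-line verification, ours):

```latex
\[
(\Gamma(f))^2(A)=\Gamma(f)\bigl(e^{f}A\bigr)=e^{f}\,e^{f}A=e^{2f}A=\Gamma(2f)(A),
\]
```

and the general case follows by induction for $n>0$, together with $(\Gamma(f))^{-1}=\Gamma(-f)$ for $n<0$.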
By \cite[Theorem 6.4]{Walters1982}, there exists a countable set of continuous functions $\{\gamma_i\}_{i=1}^{+\infty}$ such that $\|\gamma_i\|=1$ for any $i\in\mathbb{N^+}$ and $$\varrho(\mu_{1},\mu_{2}):=\sum_{i=1}^{+\infty}\frac{|\int \gamma_i\mathrm{d}\mu_{1}-\int \gamma_i\mathrm{d}\mu_{2}|}{2^{i}}$$ defines a metric for the weak*-topology on $\mathcal{M}(X).$
Suppose that $\{\frac{1}{j}\gamma_i\mid i\in\mathbb{N^+},j\in\mathbb{N^+}\}=\{\eta_i\mid i\in\mathbb{N^+}\}$. We define a sequence of subsets $\{\mathcal{C}^\Lambda_n\}_{n=1}^{+\infty}$ of $\tilde{\mathcal{C}^\Lambda}$ by induction: $\mathcal{C}^\Lambda_1:=\bigcap\limits_{i\in\mathbb{Z}}\Gamma(i\eta_1)(\tilde{\mathcal{C}^\Lambda})$, and $\mathcal{C}^\Lambda_{n+1}:=\bigcap\limits_{i\in\mathbb{Z}}\Gamma(i\eta_{n+1})(\mathcal{C}^\Lambda_n)\subset\mathcal{C}^\Lambda_n\subset\tilde{\mathcal{C}^\Lambda}$ for any $n\geq1$. Since $\tilde{\mathcal{C}^\Lambda}$ is residual in $C(X,GL_d(\mathbb{R}))$, we have that $\mathcal{C}^\Lambda_n$ is residual in $C(X,GL_d(\mathbb{R}))$ for any $n\geq1$. Let $$\mathcal{C}^\Lambda:=\bigcap\limits_{n=1}^{+\infty}\mathcal{C}^\Lambda_n;$$ then it can be checked that
\begin{enumerate}[(1)]
\item $\mathcal{C}^\Lambda\subset\tilde{\mathcal{C}^\Lambda}$;
\item $\mathcal{C}^\Lambda$ is residual in $C(X,GL_d(\mathbb{R}))$;
\item $\Gamma(\eta_n)(\mathcal{C}^\Lambda)=\Gamma(-\eta_n)(\mathcal{C}^\Lambda)=\mathcal{C}^\Lambda$ for any $n\geq1$.
\end{enumerate}
\begin{lemma}\label{lem2}
Suppose that $Y$ is a complete metric space and $E\subset Y$ is residual in $Y$. If $F\subset E$ and $F$ is residual in $E$, then $F$ is residual in $Y$.
\end{lemma}
\begin{proof}
Suppose that $E\supset\bigcap\limits_{n=1}^{+\infty}G_n$, where $G_n$ is an open and dense subset of $Y$ for each $n\geq1$. Since $F$ is residual in $E$, we can find a sequence $\{H_n\}_{n=1}^{+\infty}$ such that $F\supset\bigcap\limits_{n=1}^{+\infty}H_n$ and $H_n$ is an open and dense subset of $E$ for each $n\geq1$. Hence, there is a sequence $\{I_n\}_{n=1}^{+\infty}$ such that $H_n=I_n\cap E$, and $I_n$ is an open and dense subset of $Y$ for each $n\geq1$. As a result, $$F\supset\bigcap\limits_{n=1}^{+\infty}H_n=\bigcap\limits_{n=1}^{+\infty}(I_n\cap E)\supset\bigcap\limits_{n=1}^{+\infty}\bigcap\limits_{j=1}^{+\infty}(I_n\cap G_j).$$ Then $F$ is residual in $Y$.
\end{proof}
We denote $$\mathcal{U}^\Lambda(\mathcal{C}^\Lambda):=\left\{A\in \mathcal{C}^\Lambda\mid \mathcal{M}^\Lambda_{max}(A) \text{ is a singleton}\right\}.$$ Then to prove Theorem \ref{maintheorem-1} (1), we only need to prove that $\mathcal{U}^\Lambda(\mathcal{C}^\Lambda)$ is residual in $\mathcal{C}^\Lambda$.
\begin{lemma}\label{lem3}
Suppose that $A\in C(X,GL_d(\mathbb{R}))$ and $\{A_n\}_{n=1}^{+\infty}$ is a sequence in $C(X,GL_d(\mathbb{R}))$. For each $n\geq1$, let $\mu_{n}\in\mathcal{M}^\Lambda_{max}(A_n)$, and let $\mu\in\Lambda$ be a limit point of $\{\mu_{n}\}$. Suppose that
\begin{enumerate}[(1)]
\item $\lim\limits_{n\to+\infty}\rho(A_n,A)=0$;
\item $\lim\limits_{n\to+\infty}\beta^\Lambda(A_n)=\beta^\Lambda(A)$.
\end{enumerate}
Then $\mu\in\mathcal{M}^\Lambda_{max}(A)$.
\end{lemma}
\begin{proof}
Suppose that $\mu=\lim\limits_{k\to+\infty}\mu_{n_k}$. For any $m\geq1$, we have $$\lim_{k\to+\infty}\left|\frac{1}{m}\int\left(\log\|A_{n_k}(m,x)\|-\log\|A(m,x)\|\right)\mathrm{d}\mu_{n_k}\right|=0,$$ and $$\lim_{k\to+\infty}\left|\frac{1}{m}\int \log\|A(m,x)\|\mathrm{d}\mu_{n_k}-\frac{1}{m}\int \log\|A(m,x)\|\mathrm{d}\mu\right|=0.$$ Combining these two estimates, we obtain $$\lim_{k\to+\infty}\frac{1}{m}\int \log\|A_{n_k}(m,x)\|\mathrm{d}\mu_{n_k}=\frac{1}{m}\int \log\|A(m,x)\|\mathrm{d}\mu.$$ Hence, for any $m\geq1$,
\[
\begin{split}
\frac{1}{m}\int \log\|A(m,x)\|\mathrm{d}\mu&=\lim_{k\to+\infty}\frac{1}{m}\int \log\|A_{n_k}(m,x)\|\mathrm{d}\mu_{n_k}\\
&\geq\lim_{k\to+\infty}\inf\limits_{n\geq1}\frac{1}{n}\int \log\|A_{n_k}(n,x)\|\mathrm{d}\mu_{n_k}\\&=\lim_{k\to+\infty}\beta^\Lambda(A_{n_k})\\&=\beta^\Lambda(A).
\end{split}
\]
Then $\inf\limits_{m\geq1}\frac{1}{m}\int \log\|A(m,x)\|\mathrm{d}\mu\geq\beta^\Lambda(A)$ and $\mu\in\mathcal{M}^\Lambda_{max}(A)$.
\end{proof}
Given $f\in C(X)$, for every $\varepsilon>0$ and every $n\geq1$, $$\sup_{\mu\in\Lambda}\left|\frac{1}{n}\int \log\|e^{\varepsilon f}A(n,x)\|\mathrm{d\mu}-\frac{1}{n}\int \log\|A(n,x)\|\mathrm{d\mu}\right|=\varepsilon\sup_{\mu\in\Lambda}\left|\int f\mathrm{d\mu}\right|\leq\varepsilon\|f\|.$$ Hence, we have $\lim\limits_{\varepsilon\to0}\beta^\Lambda(e^{\varepsilon f}A)=\beta^\Lambda(A)$ uniformly for $A\in C(X,GL_d(\mathbb{R}))$.
\begin{definition}
Suppose that $\Lambda$ is a nonempty and compact subset of $\mathcal{M}(X,T)$. For any $f\in C(X)$ and $A\in C(X,GL_d(\mathbb{R}))$, define $$\beta^\Lambda(f\mid A):=\sup_{\mu\in\mathcal{M}^\Lambda_{max}(A)}\int f\mathrm{d}\mu,$$ the relative maximum ergodic average of $f$ given $A$ and $\Lambda$. Define $$\mathcal{M}^\Lambda_{max}(f\mid A):=\left\{\mu\in\mathcal{M}^\Lambda_{max}(A)\mid\int f\mathrm{d}\mu=\beta^\Lambda(f\mid A)\right\}.$$
\end{definition}
Suppose that $C$ and $D$ are nonempty compact subsets of $\mathbb{R}$; then the \emph{Hausdorff metric} $d_H$ is defined by $d_H(C,D)=\max\limits_{a\in C}\min\limits_{b\in D}|a-b|+\max\limits_{c\in D}\min\limits_{d\in C}|c-d|$.
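To fix ideas with this (sum-form) definition, here is a small worked instance (our example): for $C=\{0\}$ and $D=\{0,1\}$,

```latex
\[
\max_{a\in C}\min_{b\in D}|a-b|=0,
\qquad
\max_{c\in D}\min_{d\in C}|c-d|=1,
\qquad
d_H(C,D)=0+1=1.
\]
```

In particular, convergence of a family of compact sets to a singleton $\{\beta\}$ in this metric means exactly that every point of the sets tends to $\beta$.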
\begin{lemma}\label{lem4}
Given $f\in C(X)$ and $A\in C(X,GL_d(\mathbb{R}))$, we have $$\left\{\int f\mathrm{d\mu}\mid\mu\in\mathcal{M}^\Lambda_{max}(e^{\varepsilon f}A)\right\}\longrightarrow\left\{\beta^\Lambda(f\mid A)\right\} \text{ as } \varepsilon\searrow 0,$$ in the sense of the Hausdorff metric.
\end{lemma}
\begin{proof}
Given $f\in C(X)$ and $A\in C(X,GL_d(\mathbb{R}))$. Since $\mathcal{M}^\Lambda_{max}(e^{\varepsilon f}A)$ is a nonempty and compact subset of $\mathcal{M}(X,T)$, the set $\left\{\int f\mathrm{d\mu}\mid\mu\in\mathcal{M}^\Lambda_{max}(e^{\varepsilon f}A)\right\}$ is a nonempty compact subset of $\mathbb{R}$. To prove the lemma, it is enough to show that if $a_\varepsilon\in \left\{\int f\mathrm{d\mu}\mid\mu\in\mathcal{M}^\Lambda_{max}(e^{\varepsilon f}A)\right\}$, then $\lim\limits_{\varepsilon \searrow 0}a_\varepsilon=\beta^\Lambda(f\mid A)$. Writing $a_\varepsilon=\int f\mathrm{d}m_\varepsilon$ for some $m_\varepsilon\in \mathcal{M}^\Lambda_{max}(e^{\varepsilon f}A)$, it is in turn enough to prove that any limit point of $m_\varepsilon$, as $\varepsilon \searrow 0$, belongs to $\mathcal{M}^\Lambda_{max}(f\mid A)$.
Suppose that $\lim\limits_{i\to+\infty}m_{\varepsilon_i}=m$. By Lemma \ref{lem3}, we have that $m\in\mathcal{M}^\Lambda_{max}(A)$. For any fixed $i\geq1$, since $m_{\varepsilon_i}\in \mathcal{M}^\Lambda_{max}(e^{\varepsilon_i f}A)$, we have $$\inf\limits_{n\geq1}\frac{1}{n}\int \log\|A(n,x)\|\mathrm{d}m_{\varepsilon_i}+\varepsilon_i\int f\mathrm{d}m_{\varepsilon_i}\geq\inf\limits_{n\geq1}\frac{1}{n}\int \log\|A(n,x)\|\mathrm{d\mu}+\varepsilon_i\int f\mathrm{d}\mu,$$ for any $\mu\in\Lambda$. For any fixed $\mu\in\mathcal{M}^\Lambda_{max}(A)$, we have $$\inf\limits_{n\geq1}\frac{1}{n}\int \log\|A(n,x)\|\mathrm{d\mu}\geq\inf\limits_{n\geq1}\frac{1}{n}\int \log\|A(n,x)\|\mathrm{d}m_{\varepsilon_i},$$ for any $i\geq1$. Combining these two estimates, we obtain $\varepsilon_i\int f\mathrm{d}m_{\varepsilon_i}\geq\varepsilon_i\int f\mathrm{d}\mu$, i.e.\ $\int f\mathrm{d}m_{\varepsilon_i}\geq\int f\mathrm{d}\mu$ for any $\mu\in\mathcal{M}^\Lambda_{max}(A)$ and any $i\geq1$. Letting $i\to+\infty$, we get $\int f\mathrm{d}m\geq\int f\mathrm{d}\mu$ for any $\mu\in\mathcal{M}^\Lambda_{max}(A)$. Hence, $m\in\mathcal{M}^\Lambda_{max}(f\mid A)$.
\end{proof}
We can now complete the proof of Theorem \ref{maintheorem-1} (1).
\begin{proof}[Proof of Theorem \ref{maintheorem-1} (1).]
For any $A\in\mathcal{C}^\Lambda$ and any $i\geq1$, define $$M^\Lambda_i(A):=\left\{\int \gamma_i\mathrm{d}\mu\mid\mu\in\mathcal{M}^\Lambda_{max}(A)\right\}.$$ Then $A\in\mathcal{U}^\Lambda(\mathcal{C}^\Lambda)$ if and only if $M^\Lambda_i(A)$ is a singleton for every $i\geq1$. Define $$E^\Lambda_{i,j}:=\left\{A\in\mathcal{C}^\Lambda\mid \operatorname{diam}(M^\Lambda_i(A))\geq\frac{1}{j}\right\},$$ where $i\geq1$, $j\geq1$. Then
\[
\begin{split}
\mathcal{C}^\Lambda\setminus\mathcal{U}^\Lambda(\mathcal{C}^\Lambda)&=\left\{A\in\mathcal{C}^\Lambda\mid\operatorname{diam}(M^\Lambda_i(A))>0\text{ for some }i\in\mathbb{N}^+\right\}\\&=\bigcup_{i=1}^{+\infty}\bigcup_{j=1}^{+\infty}\left\{A\in\mathcal{C}^\Lambda\mid\operatorname{diam}(M^\Lambda_i(A))\geq\frac{1}{j}\right\}\\&=\bigcup_{i=1}^{+\infty}\bigcup_{j=1}^{+\infty}E^\Lambda_{i,j}.
\end{split}
\] Moreover, $$\mathcal{U}^\Lambda(\mathcal{C}^\Lambda)=\bigcap_{i=1}^{+\infty}\bigcap_{j=1}^{+\infty}(\mathcal{C}^\Lambda\setminus E^\Lambda_{i,j}).$$
To prove Theorem \ref{maintheorem-1} (1), it is enough to show that each $E^\Lambda_{i,j}$ is closed and has empty interior in $\mathcal{C}^\Lambda$.
To show that $E^\Lambda_{i,j}$ is closed in $\mathcal{C}^\Lambda$, let $\{A_n\}_{n=1}^{+\infty}\subset E^\Lambda_{i,j}$ with $\lim\limits_{n\to+\infty}A_n=A\in\mathcal{C}^\Lambda$. Write $\operatorname{diam}(M^\Lambda_i(A_n))=\int\gamma_i\mathrm{d}\mu_{n}^+-\int\gamma_i\mathrm{d}\mu_{n}^-\geq\frac{1}{j}$ for measures $\mu_{n}^-,\mu_{n}^+\in\mathcal{M}^\Lambda_{max}(A_n)$. Then there exist $\mu^-,\mu^+\in\Lambda$ and subsequences $r_1<r_2<\cdots$, $s_1<s_2<\cdots$ such that $\lim\limits_{n\to+\infty}\mu_{r_n}^-=\mu^-$ and $\lim\limits_{n\to+\infty}\mu_{s_n}^+=\mu^+$. By Lemma \ref{lem3}, $\mu^-,\mu^+\in\mathcal{M}^\Lambda_{max}(A)$. Since $\int\gamma_i\mathrm{d}\mu_{r_n}^-\to\int\gamma_i\mathrm{d}\mu^-$ and $\int\gamma_i\mathrm{d}\mu_{s_n}^+\to\int\gamma_i\mathrm{d}\mu^+$, we have $\int\gamma_i\mathrm{d}\mu^+-\int\gamma_i\mathrm{d}\mu^-\geq\frac{1}{j}$. Hence, $A\in E^\Lambda_{i,j}$ and $E^\Lambda_{i,j}$ is closed.
To show that $E^\Lambda_{i,j}$ has empty interior in $\mathcal{C}^\Lambda$, let $A\in E^\Lambda_{i,j}$ be arbitrary. By Lemma \ref{lem4}, $$M^\Lambda_i(e^{\varepsilon\gamma_i}A)=\left\{\int \gamma_i\mathrm{d}\mu\mid\mu\in\mathcal{M}^\Lambda_{max}(e^{\varepsilon \gamma_i}A)\right\}\longrightarrow\left\{\beta^\Lambda(\gamma_i\mid A)\right\} \text{ as } \varepsilon\searrow 0.$$ In particular, $\operatorname{diam}(M^\Lambda_i(e^{\frac{1}{t}\gamma_i}A))<\frac{1}{j}$ for $t\in\mathbb{N}^+$ sufficiently large. Since $e^{\frac{1}{t}\gamma_i}A\in\mathcal{C}^\Lambda$ for each $t\in\mathbb{N}^+$, we conclude that $E^\Lambda_{i,j}$ has empty interior in $\mathcal{C}^\Lambda$.
\end{proof}
\subsection{Proof of Theorem \ref{maintheorem-1} (2)} When $\#\Lambda=1$, $\mathcal{U}^\Lambda=C(X,GL_d(\mathbb{R}))$. Now suppose that $\#\Lambda=m>1$ and $\Lambda=\{\mu_{1},\mu_{2},\cdots,\mu_{m}\}$. For any $1\leq i\leq m$, the map $A\mapsto\beta(A,\mu_{i}):=\inf\limits_{n\geq1}\frac{1}{n}\int \log\|A(n,x)\|\mathrm{d}\mu_{i}=\lim\limits_{n\to+\infty}\frac{1}{n}\int \log\|A(n,x)\|\mathrm{d}\mu_{i}$ is the pointwise limit of an everywhere convergent sequence of continuous functions. By \emph{Baire's theorem on functions of first class} \cite[Theorem 7.3]{Oxtoby1980}, if we denote $$\tilde{\mathcal{C}^\Lambda_i}:=\left\{A\in C(X,GL_d(\mathbb{R}))\mid\beta(\cdot,\mu_{i}) \text{ is continuous at }A\right\},$$ then $C(X,GL_d(\mathbb{R}))\setminus\tilde{\mathcal{C}^\Lambda_i}$ is of first category, which means that $\tilde{\mathcal{C}^\Lambda_i}$ is residual in $C(X,GL_d(\mathbb{R}))$. Let $\mathcal{G}^\Lambda:=\mathcal{U}^\Lambda\cap\bigcap\limits_{i=1}^{m}\tilde{\mathcal{C}^\Lambda_i}$; then $\mathcal{G}^\Lambda$ is residual in $C(X,GL_d(\mathbb{R}))$.
We can now complete the proof of Theorem \ref{maintheorem-1} (2).
\begin{proof}[Proof of Theorem \ref{maintheorem-1} (2).]
Given $A\in\mathcal{G}^\Lambda$ with $\mathcal{M}_{max}^\Lambda(A)=\{\mu_{l}\}$, there is $k_l\in\mathbb{R}$ satisfying $\beta(A,\mu_{i})<k_l<\beta(A,\mu_{l})$ for any $i\neq l$. Since $A\in\bigcap\limits_{i=1}^{m}\tilde{\mathcal{C}^\Lambda_i}$, for any $1\leq j\leq m$, there is an open subset $U_j$ of $C(X,GL_d(\mathbb{R}))$ such that $A\in U_j$ and $|\beta(B,\mu_{j})-\beta(A,\mu_{j})|<\frac{1}{2}|k_l-\beta(A,\mu_{j})|$ for any $B\in U_j$. Let $U:=\bigcap\limits_{j=1}^{m}U_j$. Then $A\in U$ and $U$ is an open subset of $C(X,GL_d(\mathbb{R}))$. For any $B\in U$, we have $\beta(B,\mu_{i})<k_l<\beta(B,\mu_{l})$ for any $i\neq l$. Hence, $U\subset\mathcal{U}^\Lambda$. Since $\mathcal{G}^\Lambda$ is residual in $C(X,GL_d(\mathbb{R}))$, it follows that $C(X,GL_d(\mathbb{R}))\setminus\mathcal{U}^\Lambda$ is nowhere dense in $C(X,GL_d(\mathbb{R}))$.
\end{proof}
\subsection{Proof of Theorem \ref{maintheorem-1} (3)}
\begin{lemma}\label{lem7}
For any $f\in C(X)$, $e^fI_d\in\tilde{\mathcal{C}^\Lambda}$.
\end{lemma}
\begin{proof}
(1) First, we prove that $I_d\in\tilde{\mathcal{C}^\Lambda}$. Since $\|A(n,x)\|\leq\|A(T^{n-1}x)\|\cdots\|A(x)\|$ and $$\|A(n,x)\|\geq\frac{1}{\|A(n,x)^{-1}\|}=\frac{1}{\|A(x)^{-1}\cdots A(T^{n-1}x)^{-1}\|}\geq\frac{1}{\|A(x)^{-1}\|\cdots\|A(T^{n-1}x)^{-1}\|},$$ we have that $$-\int \log\|A(x)^{-1}\|\mathrm{d}\mu\leq\frac{1}{n}\int \log\|A(n,x)\|\mathrm{d}\mu\leq \int \log\|A(x)\|\mathrm{d}\mu,$$ for any $\mu\in\Lambda$ and any $n\geq1$. Therefore, $$|\beta^\Lambda(A)|\leq\sup_{\mu\in\Lambda}\Big|\inf_{n\geq1}\frac{1}{n}\int \log\|A(n,x)\|\mathrm{d}\mu\Big|\leq\sup_{x\in X}\max\{|\log\|A(x)\||,|\log\|A(x)^{-1}\||\}.$$ Hence, $\beta^\Lambda(A)\to0$ as $A\to I_d$, which means that $I_d\in\tilde{\mathcal{C}^\Lambda}$.
(2) Second, we prove that $e^fI_d\in\tilde{\mathcal{C}^\Lambda}$ for any $f\in C(X)$. Suppose that $A_n\to e^fI_d$; then $e^{-f}A_n\to I_d$. For any $\mu\in\Lambda$ and any $m\geq1$, $$\frac{1}{m}\int \log\|A_n(m,x)\|\mathrm{d}\mu-\frac{1}{m}\int \log\|(e^fI_d)(m,x)\|\mathrm{d}\mu=\frac{1}{m}\int \log\|(e^{-f}A_n)(m,x)\|\mathrm{d}\mu.$$ Arguing as in (1), we have $\beta^\Lambda(A_n)\to\beta^\Lambda(e^fI_d)$ as $A_n\to e^fI_d$, which means that $e^fI_d\in\tilde{\mathcal{C}^\Lambda}$.
\end{proof}
We can now complete the proof of Theorem \ref{maintheorem-1} (3).
\begin{proof}[Proof of Theorem \ref{maintheorem-1} (3).]
Set $F:=\bigcup\limits_{A\in\mathcal{F}}\mathcal{M}^\Lambda_{max}(A)$. Given $\mu\in\mathcal{M}^e(X,T)\cap\Lambda$, by Theorem \ref{lem8}, there exists $h\in C(X)$ such that $\mathcal{M}_{max}(h)=\{\mu\}$. Let $A=e^hI_d$; then $\mathcal{M}^\Lambda_{max}(A)=\{\mu\}$ and $A\in\tilde{\mathcal{C}^\Lambda}\cap\mathcal{U}^\Lambda$ by Lemma \ref{lem7}. Since $\mathcal{F}$ is dense in $C(X,GL_d(\mathbb{R}))$, there exists a sequence $\{A_n\}_{n=1}^{+\infty}$ in $\mathcal{F}$ with $\lim\limits_{n\to+\infty}A_n=A$. Choosing $\mu_{n}\in\mathcal{M}^\Lambda_{max}(A_n)\subset F$ for each $n\geq1$, by Lemma \ref{lem3} we have that $\lim\limits_{n\to+\infty}\mu_{n}=\mu$. Therefore, $F$ is dense in $\Lambda$.
\end{proof}
\section{Proofs of Corollaries \ref{maincorollary-1}, \ref{maincorollary-2} and \ref{maincorollary-3}}\label{sec4}
In this section, we prove Corollaries \ref{maincorollary-1}, \ref{maincorollary-2} and \ref{maincorollary-3}.
\begin{proof}[Proof of Corollary \ref{maincorollary-1}]
Set $F:=\mathcal{M}^e(X,T)\cap\bigcup\limits_{A\in\mathcal{F}}\mathcal{M}_{max}(A)$. Given $\mu\in\mathcal{M}^e(X,T)$, by Theorem \ref{lem8}, there exists $h\in C(X)$ such that $\mathcal{M}_{max}(h)=\{\mu\}$. Let $A=e^hI_d$; then $\mathcal{M}_{max}(A)=\{\mu\}$ and $A\in\tilde{\mathcal{C}}\cap\mathcal{U}$ by Lemma \ref{lem7}. Since $\mathcal{F}$ is dense in $C(X,GL_d(\mathbb{R}))$, there exists a sequence $\{A_n\}_{n=1}^{+\infty}$ in $\mathcal{F}$ with $\lim\limits_{n\to+\infty}A_n=A$. Choosing $\mu_{n}\in\mathcal{M}_{max}(A_n)\subset F$ for each $n\geq1$, by Lemma \ref{lem3} we have that $\lim\limits_{n\to+\infty}\mu_{n}=\mu$. Therefore, $F$ is dense in $\mathcal{M}^e(X,T)$.
\end{proof}
\begin{proof}[Proof of Corollary \ref{maincorollary-2}]
Since $\#\mathcal{M}^e(X,T)<+\infty$, let $\Lambda=\mathcal{M}^e(X,T)$; then $\Lambda$ is a nonempty and compact subset of $\mathcal{M}(X,T)$. By Theorem \ref{maintheorem-1} (2), $C(X,GL_d(\mathbb{R}))\setminus\mathcal{U}^\Lambda$ is nowhere dense in $C(X,GL_d(\mathbb{R}))$. Since $\mathcal{U}^\Lambda=\mathcal{U}$, we have that $C(X,GL_d(\mathbb{R}))\setminus\mathcal{U}$ is nowhere dense in $C(X,GL_d(\mathbb{R}))$.
\end{proof}
\begin{proof}[Proof of Corollary \ref{maincorollary-3}]
When $\#\mathcal{M}^e(X,T)>1$, by Theorem \ref{the2.3}, $\mathcal{U}\subset\mathcal{R}$. By Theorem \ref{maintheorem-1} (1), $\mathcal{R}$ is residual in $C(X,GL_d(\mathbb{R}))$. When $1<\#\mathcal{M}^e(X,T)<+\infty$, by Corollary \ref{maincorollary-2}, $C(X,GL_d(\mathbb{R}))\setminus\mathcal{R}$ is nowhere dense in $C(X,GL_d(\mathbb{R}))$.
\end{proof}
\bigskip
\textbf{Acknowledgements.} X. Tian is supported by the National Natural Science Foundation of China (grant No. 12071082) and in part by the Shanghai Science and Technology Research Program (grant No. 21JC1400700).
\begin{thebibliography}{99}
\bibitem{B2018}
J. Bochi, {\it Ergodic optimization of Birkhoff averages and Lyapunov exponents}, Proceedings of the International Congress of Mathematicians---Rio de Janeiro 2018. Vol. III. Invited lectures, 1825-1846, World Sci. Publ., Hackensack, NJ, 2018.
\bibitem{CFH2008}
Y. Cao, D. Feng and W. Huang, {\it The thermodynamic formalism for sub-additive potentials}, Discrete Contin. Dyn. Syst. 20 (3) (2008), 639-657.
\bibitem{F1997}
A. Furman, {\it On the multiplicative ergodic theorem for uniquely ergodic systems}, Ann. Inst. H. Poincar\'{e} Probab. Statist. 33 (6) (1997), 797-815.
\bibitem{HLT2021}
X. Hou, W. Lin and X. Tian, {\it Ergodic average of typical orbits and typical functions}, Preprint. arXiv:2107.00205v2
\bibitem{J2006}
O. Jenkinson, {\it Ergodic optimization}, Discrete Contin. Dyn. Syst. 15 (1) (2006), 197-224.
\bibitem{J2006ii}
O. Jenkinson, {\it Every ergodic measure is uniquely maximizing}, Discrete Contin. Dyn. Syst. 16 (2) (2006), 383-392.
\bibitem{M2010}
I. D. Morris, {\it Ergodic optimization for generic continuous functions}, Discrete Contin. Dyn. Syst. 27 (1) (2010), 383-388.
\bibitem{M2013}
I. D. Morris, {\it Mather sets for sequences of matrices and applications to the study of joint spectral radii}, Proc. Lond. Math. Soc. 107 (1) (2013), 121-150.
\bibitem{Oxtoby1980}
J. C. Oxtoby, {\it Measure and Category}, 2nd ed., Springer, 1980.
\bibitem{Phe1}
R. R. Phelps, {\it Unique equilibrium states}, Dynamics and randomness (Santiago, 2000), 219-225, Nonlinear Phenom. Complex Systems, 7, Kluwer Acad. Publ., Dordrecht, 2002.
\bibitem{Walters1982}
P. Walters, {\it An Introduction to Ergodic Theory}, Springer-Verlag, 1982.
\end{thebibliography}
\end{document}
\begin{document}
\begin{abstract}
Continuing the study of recent results on the Birkhoff-James orthogonality and the norm attainment of operators, we introduce a property namely the adjusted Bhatia-\v{S}emrl property for operators which is weaker than the Bhatia-\v{S}emrl property. The set of operators with the adjusted Bhatia-\v{S}emrl property is contained in the set of norm attaining ones as it was in the case of the Bhatia-\v{S}emrl property. It is known that the set of operators with the Bhatia-\v{S}emrl property is norm-dense if the domain space $X$ of the operators has the Radon-Nikod\'ym property like finite dimensional spaces, but it is not norm-dense for some classical spaces such as $c_0$, $L_1[0,1]$ and $C[0,1]$. In contrast with the Bhatia-\v{S}emrl property, we show that the set of operators with the adjusted Bhatia-\v{S}emrl property is norm-dense when the domain space is $c_0$ or $L_1[0,1]$. Moreover, we show that the set of functionals having the adjusted Bhatia-\v{S}emrl property on $C[0,1]$ is not norm-dense but such a set is weak-$*$-dense in $C(K)^*$ for any compact Hausdorff $K$.
\end{abstract}
\maketitle
\section{Introduction \& Preliminaries}
The famous Bishop-Phelps theorem states that the set of norm attaining functionals on a Banach space is norm-dense in its dual space \cite{BP}. This allowed many authors to study the set of norm attaining operators, and especially J. Lindenstrauss \cite{L} first showed that there is no vector-valued version of the Bishop-Phelps theorem. However, he also found many pairs of Banach spaces such that the denseness holds, and afterwards it was also discovered that the denseness holds for many pairs of classical spaces like $(L_p[0,1],L_q[0,1])$ and $(C[0,1],L_p[0,1])$ where $p,q\in \mathbb{N}$ (see \cite{B,FP,I,S}).
In 1999, Bhatia and \v{S}emrl \cite{BS} showed that an operator $T$ on a finite dimensional complex Hilbert space is Birkhoff-James orthogonal to another operator $S$ if and only if there exists a norm attaining point $x$ of $T$ such that $Tx$ is Birkhoff-James orthogonal to $Sx$. The Birkhoff-James orthogonality is introduced by G. Birkhoff in \cite{Bir} to consider the concept of orthogonality on linear metric spaces. In general, the characterization of Bhatia and \v{S}emrl is not true for operators between Banach spaces \cite{LS}. Nevertheless, when the domain space is finite dimensional or has some geometric property such as property quasi-$\alpha$ \cite{CK} or the Radon-Nikod\'ym property \cite{K}, the set of operators for which the characterization holds is a norm-dense subset of the norm attaining operators similarly to the case of the aforementioned norm attaining operator theory. However, on some classical spaces with no (or few) extreme points such as $c_0$, $L_1[0,1]$ and $C[0,1]$, the characterization holds for only a few operators, implying that the denseness of such a set of operators does not hold (see \cite{CK,K,KL,PSJ}). This is the main difference between this study and the classical norm attaining operator theory. The main aim of the present paper is to take into account a new class of norm attaining operators which contains (properly) the set of operators with the characterization of Bhatia and \v{S}emrl, such that the denseness of such a set holds for some classical spaces.
For more details, we restart the introduction with the notions. Throughout the paper, $X$ and $Y$ are real Banach spaces. By the sets $S_X$ and $B_X$ we mean the unit sphere and the closed unit ball of a Banach space $X$, respectively, and $X^*$ stands for the topological dual space of $X$. We denote by $\mathcal{L}(X,Y)$ the space of all bounded linear operators from $X$ into $Y$. An operator $T \in \mathcal{L}(X,Y)$ is said to \emph{attain its norm} at $x_0 \in S_X$ if $\|Tx_0\|=\|T\|=\sup \{ \|Tx\| : x\in B_X \}$. The set of norm attaining operators is denoted by $\mathbb{N}A(X,Y)$, and we define the set of norm attaining points of an operator $T \in \mathcal{L}(X,Y)$ by $M_T := \{x \in S_X : \|Tx\|=\|T\|\}$.
We say a vector $x \in X$ is \emph{orthogonal to $y \in X$ in the sense of Birkhoff-James} or \emph{Birkhoff-James orthogonal} to $y$ if $\|x\| \leq \|x+ \lambda y \|$ for any scalar $\lambda$, and it is denoted by $x \perp_B y$. Motivated by the aforementioned result of Bhatia and \v{S}emrl \cite{BS}, the authors introduced in \cite{PSJ} the \emph{Bhatia-\v{S}emrl property} (in short, $\BS$ property) for a norm attaining operator $T \in \mathcal{L}(X,Y)$ that for any $S \in \mathcal{L}(X,Y)$ with $T \perp_B S$, there exists $x_0 \in M_T$ such that $Tx_0 \perp_B Sx_0$. The set of norm attaining operators between $X$ and $Y$ with the $\BS$ property is denoted by $\BS(X,Y)$.
As it is mentioned in the beginning, the set $\mathbb{N}A(X,Y)$ is norm-dense in $\mathcal{L}(X,Y)$ for most of classical Banach spaces $X$ and $Y$, and by definition the set $\BS(X,Y)$ is a subset of $\mathbb{N}A(X,Y)$. Hence, it is quite natural to ask whether the set $\BS(X,Y)$ is norm dense for the same $X$ and $Y$. As we have mentioned, the answer is known to be affirmative under some geometric conditions on $X$ like property quasi-$\alpha$ and Radon-Nikod\'ym property. However, for classical spaces $X=L_1[0,1], C[0,1]$ or $c_0$, it fails \cite{CK,K,KL}. Motivated by these observations, we are willing to consider a suitable subset of $\mathbb{N}A(X,Y)$ which contains $\BS(X,Y)$ and is norm-dense in $\mathcal{L} (X,Y)$ even when $\BS(X,Y)$ is not norm-dense.
In order to define such a suitable subset, we recall the concept of the strong orthogonality of vectors which was recently taken into consideration in \cite{PSJ,SPJ} with a notion of the Birkhoff-James orthogonality. We say that a vector $x \in X$ is \emph{strongly orthogonal to $y \in X$ in the sense of Birkhoff-James} if $\|x\| < \|x+ \lambda y \|$ for any $\lambda \neq 0$, and it is denoted by $x \perp_S y$. By the definition, we see that if $x \perp_S y$, then $x \perp_B y$. This concept is used to characterize the strict convexity of a Banach space which we present below.
\begin{rem}[\mbox{\cite[Theorem 2.4]{SPJ}}]\label{rem:str-cvx}
Let $X$ be a Banach space. Then, $x \perp_S y$ is equivalent to $x \perp_B y$ for every $x,y \in X \setminus \{0\}$ if and only if $X$ is strictly convex.
\end{rem}
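To see the difference concretely when $X$ is not strictly convex, consider (a standard example, included here for illustration) $X=\mathbb{R}^2$ with the supremum norm, $x=(1,1)$ and $y=(0,1)$. Then
\[
\|x+\lambda y\|_\infty=\max\{1,\,|1+\lambda|\}\geq 1=\|x\|_\infty \quad\text{for every }\lambda\in\mathbb{R},
\]
so $x\perp_B y$; on the other hand, $\|x+\lambda y\|_\infty=\|x\|_\infty$ for every $\lambda\in[-2,0]$, so $x$ is not strongly orthogonal to $y$.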
Using the concept of strong orthogonality, we define the main object of this paper.
\begin{definition}
A bounded linear operator $T \in \mathcal{L}(X,Y)$ is said to have the \emph{adjusted Bhatia-\v{S}emrl property} \textup{(}in short, \emph{adjusted $\BS$ property}\textup{)} if it attains its norm and for any $S \in \mathcal{L}(X,Y)$ with $T \perp_S S$, there exists $x_0 \in M_T$ such that $Tx_0 \perp_B Sx_0$. The set of operators with the adjusted $\BS$ property is denoted by $\BSa(X,Y)$.
\end{definition}
It is clear that
$$
\BS(X,Y) \subseteq \BSa(X,Y) \subseteq \mathbb{N}A(X,Y)
$$
for all Banach spaces $X$ and $Y$. We will see later that both inclusions are proper in many cases. In \cite{KL}, the authors characterized the set of operators with the $\BS$ property in terms of other known notions, and we have analogues. All of them are easy consequences of the above inclusions or of the proofs of \cite[Corollary 2.2, Corollary 2.3, Proposition 2.4, Proposition 2.5]{KL}, so we omit their proofs.
\begin{prop}
Let $X$ and $Y$ be Banach spaces.
\begin{enumerate}
\setlength\itemsep{0.3em}
\item[\textup{(a)}] $X$ is reflexive if and only if $\BSa(X,\mathbb{R}) =X^*$.
\item[\textup{(b)}] $T \in \BSa(X,Y)$ if and only if $T$ attains its norm and for any $S \in \mathcal{L}(X,Y)$ with $T \perp_S S$, there exist $x_0 \in S_X$ and $y_0^* \in S_{Y^*}$ such that $y_0^*(Tx_0)=\|T\|$ and $y_0^*(Sx_0)=0$.
\item[\textup{(c)}] If $\BSa(X,Y) = \mathcal{L}(X,Y)$ for every reflexive $X$, then $Y$ is one-dimensional.
\item[\textup{(d)}] $\BSa(X,Y)=\mathcal{L}(X,Y)$ for every $Y$ if and only if $X$ is one-dimensional.
\end{enumerate}
\end{prop}
Our first main result concerns the inclusions for specific spaces, and they can be described as follows.
\begin{theorem}\label{theorem:set}
Let $X$ be a Banach space.
\begin{enumerate}
\setlength\itemsep{0.3em}
\item[\textup{(a)}] If $X=c_0$, then $\BS(X,\mathbb{R}) \subsetneqq \BSa(X,\mathbb{R}) = \mathbb{N}A(X,\mathbb{R})$.
\item[\textup{(b)}] If $X=L_1[0,1]$ or $C[0,1]$, then $\BS(X,\mathbb{R}) \subsetneqq \BSa(X,\mathbb{R}) \subsetneqq \mathbb{N}A(X,\mathbb{R})$.
\item[\textup{(c)}] If $X$ is non-reflexive and $X^*$ is strictly convex, then $\BS(X,\mathbb{R}) = \BSa(X,\mathbb{R}) \subsetneqq \mathcal{L}(X,\mathbb{R})$.
\end{enumerate}
\end{theorem}
Furthermore, we show that even the denseness holds for the spaces $c_0$ and $L_1[0,1]$, whereas there are still many norm attaining operators without the adjusted $\BS$ property. We summarize the main denseness results of the present paper below which distinguishes the adjusted $\BS$ property from the original $\BS$ property. Here, we present only the cases that the range space is the scalar field for simplicity and we refer to the next section for more generalized results on range spaces.
\begin{theorem}\label{theorem:denseness}
Let $X$ be a Banach space.
\begin{enumerate}
\setlength\itemsep{0.3em}
\item[\textup{(a)}] If $X=c_0$, then $\BSa(X,\mathbb{R})$ is norm-dense in $\mathcal{L}(X,\mathbb{R})$.
\item[\textup{(b)}] If $X=L_1[0,1]$, then $\BSa(X,\mathbb{R})$ is norm-dense in $\mathcal{L}(X,\mathbb{R})$.
\item[\textup{(c)}] If $X=L_1[0,1]$, then the set of norm attaining operators without the adjusted $\BS$ property is norm-dense in $\mathcal{L}(X,\mathbb{R})$.
\item[\textup{(d)}] If $X=C[0,1]$, then $\BSa(X,\mathbb{R})$ is weak-$*$-dense in $\mathcal{L}(X,\mathbb{R})$.
\item[\textup{(e)}] If $X=C[0,1]$, then the set of norm attaining operators without the adjusted $\BS$ property is norm-dense in $\mathcal{L}(X,\mathbb{R})$.
\end{enumerate}
\end{theorem}
\section{Main results}
In this section, we provide results as mentioned in Theorems \ref{theorem:set} and \ref{theorem:denseness} of the previous section. We begin with an immediate result that the two sets of the $\BS$ properties are the same. We omit its proof since it follows directly from Remark \ref{rem:str-cvx} and definitions of the $\BS$ property and the adjusted $\BS$ property.
\begin{prop}\label{prop:str-cvx}
Let $X$ and $Y$ be Banach spaces. If $\mathcal{L}(X,Y)$ is strictly convex, then $\BS (X, Y) = \BSa (X, Y)$.
\end{prop}
\begin{cor}
If $X$ is a separable Banach space, then there is an equivalent renorming $\tilde{X}$ of $X$ such that $\BS(\tilde{X},\mathbb{R})=\BSa(\tilde{X},\mathbb{R})$.
\end{cor}
\begin{proof}
According to \cite[II. Theorem 2.6]{DGZ}, there is an equivalent norm $| \cdot |$ on $X$ such that $(X, |\cdot|)^*$ is strictly convex. Thus the conclusion follows by Proposition \ref{prop:str-cvx}.
\end{proof}
From the well known characterization of reflexivity by R.C. James \cite{J2} that $X$ is reflexive if and only if $\mathbb{N}A(X,\mathbb{R})=\mathcal{L}(X,\mathbb{R})$, we have an example where $\BSa(X,\mathbb{R}) \subsetneqq \mathcal{L}(X,\mathbb{R})$.
\begin{example}
For a non-reflexive Banach space $X$ such that $X^*$ is strictly convex such as a suitable renorming of $c_0$, we have
$$
\BS(X,\mathbb{R}) = \BSa(X,\mathbb{R}) \subseteq \mathbb{N}A(X,\mathbb{R})\subsetneqq \mathcal{L}(X,\mathbb{R}).
$$
\end{example}
The converse of Proposition \ref{prop:str-cvx} is not true in the case that $X$ and $Y$ are a reflexive space and a strictly convex space respectively such that $\mathcal{L} (X, Y) = \mathcal{K} (X, Y)$ where $\mathcal{K} (X, Y)$ is the space of compact operators in $\mathcal{L} (X, Y)$. In order to see this, we need the following modification of \cite[Theorem 2.1]{GSP}.
\begin{theorem}\label{rem:str-cvxop}
Let $X$ be a reflexive Banach space and $Y$ be a strictly convex Banach space. If $T,S\in \mathcal{K} (X, Y)$ satisfy $T\perp_B S$, then $T\perp_S S$ or $Sx=0$ for some $x\in M_T$.
\end{theorem}
In \cite[Theorem 2.1]{GSP}, the authors considered the case $X=Y$ with $X$ both reflexive and strictly convex, but their argument can be applied in the setting of Theorem \ref{rem:str-cvxop}.
\begin{theorem}\label{theorem:GSP}
Let $X$ be a reflexive Banach space and $Y$ be a strictly convex Banach space. If $\mathcal{L} (X, Y) = \mathcal{K} (X, Y)$, then $\BS (X, Y) = \BSa (X, Y)$.
\end{theorem}
\begin{proof}
Since we only need to prove $\BSa (X, Y)\subseteq \BS (X, Y)$, we fix $T\in \BSa (X, Y)$ and $S\in \mathcal{L} (X, Y)$ with $T\perp_B S$. From Theorem \ref{rem:str-cvxop}, we have $T\perp_S S$ or $Sx_0=0$ for some $x_0\in M_T$. In the first case, the adjusted $\BS$ property of $T$ yields $x_1\in M_T$ such that $Tx_1\perp_B Sx_1$; in the second case, we may take $x_1=x_0$. In either case, $Tx_1\perp_B Sx_1$ for some $x_1\in M_T$.
\end{proof}
\begin{example}
Let $X$ and $Y$ be Banach spaces. We have that $\BS (X, Y) = \BSa (X, Y)$ whenever
\begin{enumerate}
\setlength\itemsep{0.3em}
\item[\textup{(a)}] $X$ is finite dimensional and $Y$ is strictly convex.
\item[\textup{(b)}] $X$ is reflexive and $Y$ is finite dimensional strictly convex.
\item[\textup{(c)}] (Pitt) $X$ is a closed subspace of $\ell_p$ and $Y$ is a closed subspace of $\ell_r$ with $1 < r< p< \infty$.
\item[\textup{(d)}] (Rosenthal) $X$ is a closed subspace of $L_p (\mu)$ and $Y$ is a closed subspace of $L_r (\nu)$, $1 < r < p < \infty$ and
\begin{itemize}
\setlength\itemsep{0.3em}
\item[\textup{(d1)}] $\mu$ and $\nu$ are atomic (this case is also covered by (c)),
\item[\textup{(d2)}] or $1\leq r<2$ and $\nu$ is atomic,
\item[\textup{(d3)}] or $p>2$ and $\mu$ is atomic.
\end{itemize}
\end{enumerate}
\end{example}
Item (c) can be found in \cite[Theorem~2.1.4]{Albiac-Kalton}, for instance, and item (d) can be found in the paper by H.~Rosenthal \cite[Theorem~A2]{Rosenthal-JFA1969}.
We are now interested in operators defined on classical spaces. To do so, the following result becomes a useful tool of extending the range space from $\mathbb{R}$ to some specific Banach spaces. It contains a concept of \emph{property quasi-$\beta$}, which was first introduced in \cite{AAP1996} as a weakening of \emph{property $\beta$} of Lindenstrauss \cite{L}. We note that $c_0, \ell_\infty$ and finite dimensional spaces with a polyhedral unit ball are examples of Banach spaces satisfying property $\beta$ (hence property quasi-$\beta$) and every closed subspace of $c_0$ has property quasi-$\beta$ (see \cite[Example 3.2]{JMR}).
\begin{prop}\label{prop:beta}
Let $X$ be a Banach space such that $\BSa(X,\mathbb{R})$ is norm-dense in $\mathcal{L}(X,\mathbb{R})$, and let $Y$ be a Banach space with property quasi-$\beta$. Then, $\BSa(X,Y)$ is norm-dense in $\mathcal{L}(X,Y)$.
\end{prop}
\begin{proof}
Since the proof is a minor modification of \cite[Proposition 2.8]{CK}, we only comment on the key observation, adopting the same notation instead of giving full details. As in their proof, for $N=1$, the assumption $B \perp_S C$ implies that $\| B^* y_{\alpha_0}^* + \lambda C^* y_{\alpha_0}^* \| > \| B^* y_{\alpha_0}^* \|$ for every $\lambda \neq 0$, that is, $B^* y_{\alpha_0}^* \perp_S C^* y_{\alpha_0}^*$.
Since $B$ is constructed in such a way that $B^* y_{\alpha_0}^* \in \BSa (X, \mathbb{R})$, there exists $x \in M_{B^* y_{\alpha_0}^*}$ such that $B^* y_{\alpha_0}^* (x) \perp_B C^* y_{\alpha_0}^* (x)$. This completes the proof.
\end{proof}
\subsection{When the domain space is $c_0$.}
In this subsection, we show that $\BSa(c_0,Y)$ can be norm-dense for some Banach space $Y$ while $\BS(c_0,Y) = \{0\}$ for many Banach spaces $Y$ such as strictly convex spaces or spaces with the Radon-Nikod\'ym property \cite[Corollary 3.6]{CK}. Moreover, it is known that $\BS(c_0,c_0)=\{0\}$ \cite[Proposition 3.7]{CK}. In contrast with this situation, we will observe that ${\BSa(c_0,c_0)}$ is norm-dense.
For more general setting, we consider $X$ as an $\ell_1$-predual space and $\phi : \ell_1 \rightarrow X^*$ be an isometric isomorphism. We denote the canonical basis of $X^*$ by $u_n^* = \phi (e_n^*)$, where $(e_n,e_n^*)$ is the canonical biorthogonal system of $c_0$.
Note that the Banach space $c$ of convergent sequences is a typical example of an $\ell_1$-predual space while it is not isometric to $c_0$. Moreover, there exists an $\ell_1$-predual space which is not isomorphic to a $C(K)$-space \cite{BL} and an isomorphic predual of $\ell_1$ (that is, a Banach space whose dual is isomorphic to $\ell_1$) which has the Radon-Nikod\'ym property \cite{BD}.
\begin{theorem}\label{thm:ell1predual}
Let $X$ be an $\ell_1$-predual space. If $x^* \in X^*\setminus \{0\}$ is of the form $\sum_{k=1}^n a_k u_k^*$ with $n \in \mathbb{N}$, then $x^* \in \BSa (X, \mathbb{R})$.
\end{theorem}
\begin{proof}
Let $x^* =\sum_{k=1}^n a_k u_k^*$ for some $n \in \mathbb{N}$ with $a_k \neq 0$ for $k =1,\ldots, n$. It is not difficult to check that $x^* \in \mathrm{NA} (X, \mathbb{R})$ (see, for instance, \cite[Theorem 5.10]{DMRR}). Let $y^* \in X^*$ be such that $x^* \perp_S y^*$. Note that $\|y^* \| = \sum_{k=1}^\infty |\phi^{-1} (y^*) (e_k) | < \infty$.
We claim that
\begin{equation}\label{eq:ell1}
\left| \sum_{k=1}^n \sgn (a_k) b_k \right| < \sum_{k=n+1}^\infty |b_k|,
\end{equation}
where $b_k = \phi^{-1} (y^*) (e_k)$ for each $k \in \mathbb{N}$ and $\sgn$ denotes the sign of a value in $\mathbb{R}$. Suppose that \eqref{eq:ell1} is not true. Then, for $\lambda_0$ such that $0<|\lambda_0| < \min_{1 \leq k \leq n} \frac{|a_k|}{\max\{1,|b_k|\}}$ and $\sgn \lambda_0 = - \sgn(\sum_{k=1}^n \sgn(a_k)b_k)$, we have that
\begin{align*}
\|x^* + \lambda_0 y^*\| &= \sum_{k=1}^\infty \left| \phi^{-1} (x^*) (e_k) + \lambda_0 \phi^{-1} (y^*)(e_k) \right| \\
&= \sum_{k=1}^n \left| |a_k| + \lambda_0 \sgn(a_k) b_k \right| + |\lambda_0| \sum_{k=n+1}^\infty |b_k| \\
&= \sum_{k=1}^n |a_k| - |\lambda_0| \left| \sum_{k=1}^n \sgn(a_k) b_k \right| + |\lambda_0| \sum_{k=n+1}^\infty |b_k| \leq \sum_{k=1}^n |a_k| = \|x^*\|,
\end{align*}
which is a contradiction. Now, consider an element
$$
x = \left( \sgn(a_1), \ldots, \sgn(a_n), c_{n+1}, c_{n+2}, \ldots \right) \in B_{c_{00}},
$$
where $(c_k) \in B_{c_{00}}$ is chosen so that
$$
\sum_{k=n+1}^\infty c_kb_k = - \sum_{k=1}^n \sgn(a_k)b_k.
$$
This is possible since
$$
\frac{\left| \sum_{k=1}^n \sgn (a_k) b_k \right|}{\sum_{k=n+1}^\infty |b_k|} < 1.
$$
Let us say $c_k = 0$ for every $k \geq m+1$ for some $m >n$. Consider $u:= (\phi^*)^{-1} (x) \in X^{**}$.
Set $F_m := \spann \{ u_1^*, \ldots, u_m^* \} \subseteq X^*$ and let $Q_m : X^* \rightarrow F_m$ be the canonical contractive projection. Note that $Q_m^* (F_m^*)$ is naturally identified with a subspace of $X$ as $Q_m$ is weak-$*$ continuous (see \cite[Corollary 4.1]{Gasparis}).
We see that $v:= Q_m^* (u \vert_{F_m} ) \in X$ is an element in $M_{x^*}$ where $u \vert_{F_m}$ is the restriction of $u$ on $F_m$. Indeed, observe that
\begin{align*}
x^* (v) = Q_m (x^*) (u) = x^* (u) = \sum_{k=1}^n a_k u_k^* (u) &= \sum_{k=1}^n a_k \phi (e_k^*) \bigl( (\phi^*)^{-1} (x) \bigr)\\
&= \sum_{k=1}^n a_k e_k^* (x) \\
&= \sum_{k=1}^n |a_k| = \|x^* \|.
\end{align*}
On the other hand,
\begin{align*}
y^* (v) = Q_m (y^*) (u) &= \left(\sum_{k=1}^m b_k u_k^* \right) (\phi^*)^{-1} (x) \\
&= \sum_{k=1}^n b_k \sgn (a_k) + \sum_{k=n+1}^m b_k c_k = 0.
\end{align*}
This shows that $x^* \in \BSa (X, \mathbb{R})$.
\end{proof}
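To make the construction in the proof above concrete, the following numerical sketch (with invented sample data; it is an illustration only, not part of the argument) runs the argument for $X = c_0$ with $x^* = (2,-1,0,\dots)$ and $y^* = (1,\tfrac12,1,-2,0,\dots)$ regarded as elements of $\ell_1$.

```python
# Numerical illustration of the proof of Theorem (thm:ell1predual) for
# X = c_0; the coefficient data a, b below are invented for the example.

def sgn(t):
    return (t > 0) - (t < 0)

a = [2.0, -1.0]                 # coefficients of x* (n = 2, all nonzero)
b = [1.0, 0.5, 1.0, -2.0]       # coefficients of y* in l_1
n = len(a)

# The key inequality (eq:ell1): |sum sgn(a_k) b_k| < sum_{k>n} |b_k|.
head = sum(sgn(a[k]) * b[k] for k in range(n))
tail = sum(abs(b[k]) for k in range(n, len(b)))
assert abs(head) < tail

# Choose a finitely supported tail (c_k) in the unit ball with
# sum c_k b_k = -head; here c = (-0.5, 0) works: 1*(-0.5) + (-2)*0 = -0.5.
c = [-0.5, 0.0]
assert max(abs(t) for t in c) <= 1
assert abs(sum(c[i] * b[n + i] for i in range(len(c))) + head) < 1e-12

# x = (sgn(a_1), ..., sgn(a_n), c_{n+1}, ...) lies in the unit ball of c_0,
# x* attains its norm at x, and y*(x) = 0.
x = [sgn(t) for t in a] + c
assert max(abs(t) for t in x) == 1
assert sum(a[k] * x[k] for k in range(n)) == sum(abs(t) for t in a)  # = ||x*||
assert abs(sum(b[k] * x[k] for k in range(len(b)))) < 1e-12          # y*(x) = 0
print("ok")
```

The check mirrors the proof exactly: the strict inequality \eqref{eq:ell1} is what makes the choice of the tail $(c_k)$ inside the unit ball possible.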
For an $\ell_1$-predual space $X$, it is well known that the set of extreme points of $B_{X^*}$ is $\{ \theta u_n^* : n \in \mathbb{N}, \theta= \pm 1\}$. By combining Theorem \ref{thm:ell1predual} with the Krein-Milman theorem we have the following.
\begin{cor}\label{cor:ell1predual}
Let $X$ be an $\ell_1$-predual space. Then the set $\BSa (X, \mathbb{R})$ is weak-$*$-dense in $X^*$.
\end{cor}
It is well known (and easy to check) that every norm attaining functional on $c_0$ is finitely supported. Hence, Theorem \ref{thm:ell1predual} shows that $\BSa(c_0,\mathbb{R}) = \mathrm{NA}(c_0,\mathbb{R})$. In particular, we have the following result.
\begin{cor}\label{cor:c_0-to-R}
The set $\BSa(c_0,\mathbb{R})$ is norm-dense in $\mathcal{L}(c_0,\mathbb{R})$.
\end{cor}
We finish the present subsection by giving the main consequence of Proposition \ref{prop:beta} combined with Corollary \ref{cor:c_0-to-R}, and by raising a natural open question motivated by the known denseness of norm attaining operators from $c_0$ into a uniformly convex Banach space \cite{Kim2013}.
\begin{cor}
Let $Y$ be a Banach space with property quasi-$\beta$. Then, $\BSa(c_0,Y)$ is norm-dense in $\mathcal{L}(c_0,Y)$. In particular, $\BSa(c_0,c_0)$ is norm-dense in $\mathcal{L}(c_0,c_0)$.
\end{cor}
\begin{question}
For a uniformly convex Banach space $Y$, is $\BSa(c_0,Y)$ norm-dense in $\mathcal{L}(c_0,Y)$?
\end{question}
\subsection{When the domain space is $L_1[0,1]$.}
Similarly to the case of $c_0$ we show that ${\BSa(L_1[0,1],\mathbb{R})}$ is norm-dense in $\mathcal{L}(L_1[0,1],\mathbb{R})$ while it is shown in \cite[Proposition 3.6]{KL} that $\BS(L_1[0,1],\mathbb{R}) = \{0\}$.
\begin{theorem}
The set $\BSa(L_1[0,1],\mathbb{R})$ is norm-dense in $\mathcal{L}(L_1[0,1],\mathbb{R})$.
\end{theorem}
\begin{proof}
We claim that for a fixed $f\in L_\infty[0,1]\setminus\{0\}$ and $\varepsilon>0$ there exists $g\in\BSa(L_1[0,1],\mathbb{R})$ such that $\|g-f\|<\varepsilon$. For a suitable number $0<\delta<\min\{\varepsilon,\|f\|\}$, we may assume that $\sigma(\{s\in[0,1]: f(s)>\|f\|-\delta\})>0$, where $\sigma$ is the Lebesgue measure on $[0,1]$; otherwise, we apply the following proof to $-f$.
Consider a function $g \in L_\infty[0,1]$ defined by
\begin{displaymath}
g(s):=\left\{\begin{array}{@{}cl}
\displaystyle \, \|f\| & \text{if } \|f\|-\delta< f(s) \\
\displaystyle \, f(s) & \text{if } -\|f\| +\delta \leq f(s) \leq \|f\| - \delta\\
\displaystyle \, -\|f\| + \delta \ & \text{if } f(s) <-\|f\| +\delta.
\end{array} \right.
\end{displaymath}
From the construction, we have that
\begin{enumerate}
\setlength\itemsep{0.3em}
\item[\textup{(i)}] $\|f-g\|<\delta$
\item[\textup{(ii)}] $\sigma(\{s \in [0,1] \colon g(s)=\|g\|\})>0$ and
\item[\textup{(iii)}] $|g(t)| \leq \|g\|-\delta$ or $g(t) = \|g\|$ for each $t\in [0,1]$.
\end{enumerate}
To see $g\in\BSa(L_1[0,1],\mathbb{R})$, take $h \in L_\infty[0,1]$ such that $g \perp_S h$.
For $\Phi := \{ s \in [0,1] \colon g(s)=\|g\| \}$, $\Phi^+ :=\{t \in \Phi \colon h(t)>0\}$ and $\Phi^- :=\{t \in \Phi \colon h(t)<0\}$, we see that both $\sigma(\Phi^+)$ and $\sigma(\Phi^-)$ are positive. Indeed, if $\sigma(\Phi^+) =0$, then it follows that $\|g+\lambda h\|\leq \|g\|$ for $0<\lambda<\delta/\|h\|$, and similarly, if $\sigma(\Phi^-) =0$, then $\|g-\lambda h\|\leq \|g\|$ for $0<\lambda<\delta/\|h\|$; either case contradicts $g \perp_S h$.
We now define a function $\varphi \in S_{L_1[0,1]}$ by
\begin{displaymath}
\varphi(t):=\left\{\begin{array}{@{}cl}
\displaystyle \, \frac{H^-}{H^+\sigma(\Phi^-)+H^-\sigma(\Phi^+)} & \text{on } t \in \Phi^+ \\
\displaystyle \, \frac{H^+}{H^+\sigma(\Phi^-)+H^-\sigma(\Phi^+)} & \text{on } t \in \Phi^- \\
\displaystyle \, \phantom{\Big[}0\phantom{\Big]} & \text{otherwise},
\end{array} \right.
\end{displaymath}
where $H^+$ and $H^-$ are given by $H^+ = \int_{\Phi^+} h \,d\sigma >0$ and $H^- = -\int_{\Phi^-} h \,d\sigma >0$. We can easily see that $\int_0^1 g \varphi \,d\sigma =\|g\|$ and $\int_0^1 h \varphi \,d\sigma =0$, which shows that $g$ attains its norm at $\varphi$ and $\int_0^1 g \varphi \,d\sigma \perp_B \int_0^1 h \varphi \,d\sigma$. Hence, $g$ is an element in $\BSa(L_1[0,1],\mathbb{R})$.
\end{proof}
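The weights defining $\varphi$ in the proof above can be sanity-checked in a toy discrete model (a sketch with invented data: four atoms of mass $1/4$ stand in for $[0,1]$, so integrals become weighted sums):

```python
# Discrete sanity check of the weights defining phi: four atoms of mass
# 1/4 replace [0,1]; the sample values of g and h are invented.

w = 0.25                           # mass of each atom
g = [1.0, 1.0, 0.2, -0.3]          # ||g||_inf = 1, attained on Phi = {0, 1}
h = [2.0, -1.0, 0.5, 0.7]          # a "strongly orthogonal" direction

Phi = [i for i, v in enumerate(g) if v == max(abs(t) for t in g)]
Pp = [i for i in Phi if h[i] > 0]  # Phi^+
Pm = [i for i in Phi if h[i] < 0]  # Phi^-

Hp = sum(h[i] * w for i in Pp)     # H^+ = integral of h over Phi^+  (> 0)
Hm = -sum(h[i] * w for i in Pm)    # H^- = -integral of h over Phi^- (> 0)
den = Hp * len(Pm) * w + Hm * len(Pp) * w   # H^+ sigma(Phi^-) + H^- sigma(Phi^+)

phi = [0.0] * len(g)
for i in Pp:
    phi[i] = Hm / den
for i in Pm:
    phi[i] = Hp / den

# phi lies on the unit sphere of the discrete L_1, g attains its norm
# at phi, and h is annihilated by phi -- exactly as in the proof.
assert abs(sum(abs(t) * w for t in phi) - 1) < 1e-12              # ||phi||_1 = 1
assert abs(sum(g[i] * phi[i] * w for i in range(4)) - 1) < 1e-12  # = ||g||
assert abs(sum(h[i] * phi[i] * w for i in range(4))) < 1e-12      # = 0
print("ok")
```

The same algebra works for any choice of $H^+, H^- > 0$ and positive $\sigma(\Phi^\pm)$, which is why the positivity of both sets is the crucial step of the proof.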
\begin{cor}
For a Banach space $Y$ with property quasi-$\beta$, the set $\BSa(L_1[0,1],Y)$ is norm-dense in $\mathcal{L}(L_1[0,1],Y)$.
\end{cor}
The next result reveals not only that the set of norm attaining operators without the adjusted $\BS$ property may be nonempty but also that its portion can be large. This generalizes the previously known fact \cite[Theorem 3.2]{K} that if $Y$ has the Radon-Nikod\'ym property, then the set of norm attaining operators without the $\BS$ property is norm-dense in $\mathcal{L}(L_1[0,1],Y)$.
\begin{theorem}\label{theorem:L_1-RNP}
Let $Y$ be a Banach space with the Radon-Nikod\'ym property. Then, the set of norm attaining operators without the adjusted $\BS$ property is norm-dense in $\mathcal{L}(L_1[0,1],Y)$.
\end{theorem}
\begin{proof}
From \cite[Theorem 5, p. 63]{DU}, we consider an operator $T\in \mathcal{L}(L_1[0,1],Y)$ as a function $f\in L_\infty([0,1],Y)$ isometrically by the identification $Th=\int_0^1 fh~d\sigma$ for all $h \in L_1[0,1]$ where $\sigma$ is the Lebesgue measure.
For a non-zero $f \in L_\infty([0,1],Y)$ and given $0<\varepsilon<\|f\|$, it is enough to show that there exists a norm attaining function $g\in L_\infty([0,1],Y)$ without the adjusted $\BS$ property so that $\|g-f\|<\varepsilon$.
From \cite[Corollary 3, p. 42]{DU}, a function $f$ can be approximated by countably valued functions. Hence, there exists $f_0 = \sum_{i=1}^\infty y_i \chi_{E_i} \in L_\infty([0,1],Y)$ satisfying $\|f_0-f\|<{\varepsilon}/{4}$ and $\|f_0\|=\|f\|$, where $(E_i)_{i\in\mathbb{N}}$ is a sequence of mutually disjoint Lebesgue measurable subsets of $[0,1]$.
Fix $k\in \mathbb{N}$ such that $\|y_k\|>\|f\|-{\varepsilon}/{4}$ and $\sigma(E_k)>0$, and take a Lebesgue measurable subset $E_0\subseteq E_k$ so that $0<\sigma(E_0)<\sigma(E_k)$.
Define
\[
f_1= z_0 \chi_{E_0} + \sum_{i \in \mathbb{N} \setminus \{k\}} z_i \chi_{E_i} + z_k \chi_{E_k \setminus E_0} \in L_\infty([0,1],Y),
\]
where
$$
z_0=\left(\|f\| -\dfrac{\varepsilon}{4}\right) \dfrac{y_k}{\|y_k\|}, \quad z_k=\|f\|\dfrac{y_k}{\|y_k\|} \quad \text{and} \quad z_i=\left(1 -\dfrac{\varepsilon}{4\|f\|}\right)y_i
$$
for each $i \in \mathbb{N} \setminus \{k\}$. From the construction, we have that
\begin{enumerate}
\setlength\itemsep{0.3em}
\item[\textup{(i)}] $\|f-f_1\|<\dfrac{\varepsilon}{2}$,
\item[\textup{(ii)}] $\|z_k\|=\|f\|=\|f_1\|$,
\item[\textup{(iii)}] $z_0 = \left(1 -\dfrac{\varepsilon}{4\|f\|}\right) z_k$ and
\item[\textup{(iv)}] $\|z_i\|\leq\|f\|-\dfrac{\varepsilon}{4}$ for all $i \in \mathbb{N} \setminus \{k\}$.
\end{enumerate}
Put $\eta := \min\left\{ \dfrac{\varepsilon}{4\|f\|}, \dfrac{\sigma(E_0)}{2}\right\}>0$ and fix any $t_0 \in [0,1 - \eta] \cap E_0$ such that $\sigma([t_0,t_0+\delta] \cap E_0)>0$ for any $\delta>0$. Define a new function $g \in L_\infty([0,1],Y)$ by
\begin{displaymath}
g(t):=\left\{\begin{array}{@{}cl}
\displaystyle \, \bigl[(1+t_0)-t\bigr]z_k & \displaystyle \text{if } t \in [t_0,t_0+\eta] \cap E_0, \\
\displaystyle \, f_1(t) & \text{otherwise}.
\end{array} \right.
\end{displaymath}
It is clear that $\|g-f\| \leq \|g-f_1\| + \|f_1-f\| <\varepsilon$ and $\|g\|=\|f\|$. To see that $g$ does not have the adjusted $\BS$ property, consider a function $h \in L_\infty([0,1],Y)$ defined by
\begin{displaymath}
h(t):=\left\{\begin{array}{@{}cl}
\displaystyle \, z_k & \text{if } t \in [t_0,t_0+\eta] \cap E_0 \cap C \\
\displaystyle \, -z_k & \text{if } t \in [t_0,t_0+\eta] \cap E_0 \cap C^c \\
\displaystyle \, g(t) & \text{otherwise},
\end{array} \right.
\end{displaymath}
where $C$ is a nowhere dense closed subset of $[t_0,t_0+\eta] \cap E_0$ satisfying that
$$
0<\sigma([t_0,t_0+\delta]\cap E_0 \cap C) < \sigma([t_0,t_0+\delta] \cap E_0)
$$
for any small $0<\delta<\eta$. For instance, we may take a fat Cantor-type set distributed on $[t_0,t_0+\eta] \cap E_0$.
To show that $g \perp_S h$, we fix any $\lambda \neq 0$ and obtain
\begin{align*}
\|g+\lambda h\| &\geq \operatorname{ess\,sup} \bigl\{\|g(t) + \lambda h(t)\| : t \in [t_0,t_0+\eta] \cap E_0\bigr\} \\
&\geq \max \bigl\{ \|z_k +\lambda z_k\|, \|z_k -\lambda z_k\|\bigr\} \\
&= (1+ |\lambda|) \|z_k\| > \|g\|
\end{align*}
from the construction of $h$. However, $\int_0^1 g\varphi \,d\sigma \not\perp_B \int_0^1 h\varphi \,d\sigma$ for all $\varphi \in M_g$. Indeed, as we know that
\begin{align*}
\|z_k\|=\|f\|=\|g\|&=\left\|\int_0^1 g\varphi \,d\sigma\right\|\\
&\leq \|z_0\|\int_{E_0} |\varphi| \,d\sigma+\sum_{i \in \mathbb{N} \setminus \{k\}} \|z_i\|\int_{E_i} |\varphi| \,d\sigma+\|z_k\|\int_{E_k\setminus E_0} |\varphi| \,d\sigma\\
&\leq \|z_k\|\|\varphi\|\\
&=\|z_k\|,
\end{align*}
the support of $\varphi$ is contained in $E_k \setminus E_0$ almost everywhere. Moreover, the fact that $h(t)=g(t)$ almost everywhere on $E_k \setminus E_0$ leads to $\int_0^1 h\varphi \,d\sigma = \int_0^1 g\varphi \,d\sigma \neq 0$.
\end{proof}
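The fat Cantor-type set invoked in the proof above can be made concrete. The following sketch (the removal parameters are illustrative) builds a Smith-Volterra-Cantor set on $[0,1]$ and checks that, although its remaining intervals shrink to length $0$ (so the limit set is nowhere dense), its Lebesgue measure stays above $1/2$:

```python
# A fat (Smith-Volterra) Cantor set: at step k, remove an open middle
# interval of length 1/4**k from each remaining closed interval.  The
# total removed mass is sum_k 2**(k-1)/4**k = 1/2, so the limit set is
# nowhere dense yet has Lebesgue measure 1/2.

def fat_cantor(steps):
    intervals = [(0.0, 1.0)]
    for k in range(1, steps + 1):
        gap = 1.0 / 4 ** k
        nxt = []
        for a, b in intervals:
            mid = (a + b) / 2
            nxt.append((a, mid - gap / 2))
            nxt.append((mid + gap / 2, b))
        intervals = nxt
    return intervals

ivs = fat_cantor(12)
measure = sum(b - a for a, b in ivs)
# After 12 steps the measure is 1 - (1/2)(1 - 2**-12), slightly above 1/2.
assert 0.5 < measure < 0.51
# Every remaining interval is already very short, reflecting nowhere
# density of the limit set.
assert max(b - a for a, b in ivs) < 1e-3
print("ok")
```

Intersecting such a set with $[t_0,t_0+\eta]\cap E_0$ produces the set $C$ used in the proof: in every small window it occupies positive but not full measure.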
We close the present subsection by raising the following question on denseness of $\BSa(L_1[0,1],Y)$. It is worth mentioning that $\mathrm{NA} (L_1 [0,1], Y)$ is norm-dense in $\mathcal{L}(L_1[0,1],Y)$ when a Banach space $Y$ has the Radon-Nikod\'ym property \cite{uhl}.
\begin{question}
Is $\BSa(L_1[0,1],Y)$ norm-dense in $\mathcal{L}(L_1[0,1],Y)$ for a Banach space $Y$ with the Radon-Nikod\'ym property?
\end{question}
\subsection{When the domain space is $C[0,1]$.}
Recall from \cite[Proposition 3.7]{KL} that
$$
\BS(C[0,1],Y) \subseteq \left\{ T \in \mathrm{NA}(C[0,1],Y) : M_T =\left\{ \pm \chi_{[0,1]}\right\}\right\} \cup \{0\}
$$
for every reflexive strictly convex Banach space $Y$. In particular, this shows that $\BS(C[0,1],\mathbb{R})$ is not norm-dense in $\mathcal{L}(C[0,1],\mathbb{R})$. Similarly, we find that $\BSa(C[0,1],\mathbb{R})$ is not norm-dense, but we show that the same set is weak-$*$-dense even though it is not true for $\BS(C[0,1],\mathbb{R})$.
To deduce these results, we use the classical Riesz representation theorem. Indeed, we consider the space $\mathcal{L}(C[0,1],\mathbb{R})$ isometrically as the space $\mathcal{M}[0,1]$ of all finite regular Borel measures on $[0,1]$ with the norm induced by the total variation $\| \mu \| = |\mu| ([0,1])$. The duality is given by
\[
\mu (f) = \int_{[0,1]} f \, d\mu.
\]
Thus, it is natural to say that a measure $\mu \in \mathcal{M}[0,1]$ is \emph{norm attaining} if there exists $f\in S_{C[0,1]}$ such that $\|\mu\| = |\mu(f)|$.
We first present some sufficient conditions for regular Borel measures on $[0,1]$ to lack the adjusted $\BS$ property.
\begin{prop}\label{example1}A norm attaining regular Borel measure $\mu\in \mathcal{M}[0,1]$ satisfying the following two properties does not have the adjusted $\BS$ property.
\begin{enumerate}
\setlength\itemsep{0.3em}
\item[\textup{(i)}] there exists an interval $A=(\alpha,\beta) \subseteq [0,1]$ such that $F(t)=\mu\bigl((\alpha,t)\bigr)$ on $[\alpha,\beta]$ is a nondecreasing \textup{(}or nonincreasing\textup{)} continuous function which is not constant.
\item[\textup{(ii)}] there exists an interval $C=(\gamma,\delta) \subseteq [0,1]\setminus A$ such that $|\mu|(C)=0.$
\end{enumerate}
\end{prop}
\begin{proof}
We only give a proof for the case that $F(t)$ is nondecreasing, since the other case can be proved by taking $-\mu$.
It is clear that $\mu$ is non-negative on $A$, and there are sequences $(A_i)_{i=1}^\infty$ and $(B_i)_{i=1}^\infty$ of mutually disjoint open intervals in $A$ satisfying $\mu(A_i)=\mu(B_i)={\mu(A)}/{4^i}>0$ for every $i\in \mathbb{N}$ and $A_i\cap B_j=\emptyset$ for every $i,j\in \mathbb{N}$.
Note that $\mu\left( A \setminus \left(\cup_{i\in \mathbb{N}} (A_i\cup B_i)\right)\right)>0$, since $\sum_{i=1}^\infty 2\,{\mu(A)}/{4^i} = 2\mu(A)/3 < \mu(A)$. Define a regular Borel measure $\nu$ as
\begin{align*}
d\nu = \sum_{i=1}^\infty 2^i (\chi_{A_i} - &\chi_{B_i}) d\mu - \chi_{A \setminus \left(\cup_{i\in \mathbb{N}} (A_i\cup B_i)\right)} d\mu \\
&+ \frac{\mu\left( A \setminus \left(\cup_{i\in \mathbb{N}} (A_i\cup B_i)\right)\right)}{\delta-\gamma} \left(\chi_{\left(\gamma,\frac{\gamma+\delta}{2}\right)}d\sigma-\chi_{\left(\frac{\gamma+\delta}{2},\delta\right)} d\sigma \right),
\end{align*}
where $\chi_{\{\cdot\}}$ is the characteristic function and $\sigma$ is the Lebesgue measure.
We first claim that $\mu$ is strongly orthogonal to $\nu$ in the sense of Birkhoff-James. Indeed, for $0<\lambda<1$, take the smallest $n_0\in\mathbb{N}$ so that $\lambda 2^{n_0}> 1$. Then, we have
\begin{align*}
\|\mu+ \lambda \nu\|
&= |\mu+ \lambda \nu|\bigl([0,1] \setminus (A \cup C)\bigr)+|\mu+ \lambda \nu|\left(\cup_{i\in \mathbb{N}} (A_i\cup B_i)\right) \\
&\qquad +|\mu+ \lambda \nu|\left(A \setminus \cup_{i\in \mathbb{N}} (A_i\cup B_i)\right)+|\mu+ \lambda \nu|(C) \\
&=|\mu|([0,1] \setminus A) + (\mu+\lambda\nu)\left(\cup_{i< n_0} (A_i\cup B_i)\right)\\
&\qquad +(\mu+\lambda\nu)\left(\cup_{i\geq n_0} A_i\right)-(\mu+\lambda\nu)\left(\cup_{i\geq n_0} B_i\right)\\
&\qquad\qquad +(1-\lambda)\mu\left( A \setminus \cup_{i\in \mathbb{N}} (A_i\cup B_i)\right)+\lambda\mu\left( A \setminus \cup_{i\in \mathbb{N}} (A_i\cup B_i)\right)\\
&= |\mu|\left([0,1] \setminus \cup_{i\geq n_0} (A_i\cup B_i)\right)+\sum_{i\geq n_0}\lambda 2^i \bigl(\mu(A_i)+\mu(B_i)\bigr) >\|\mu\|.
\end{align*}
For $\lambda<0$, we have
\begin{align*}
\|\mu+ \lambda \nu\|
&= |\mu+ \lambda \nu|\bigl([0,1] \setminus (A \cup C)\bigr)+|\mu+ \lambda \nu|\left(\cup_{i\in \mathbb{N}} (A_i\cup B_i)\right) \\
&\qquad +|\mu+ \lambda \nu|\left(A \setminus \cup_{i\in \mathbb{N}} (A_i\cup B_i)\right)+|\mu+ \lambda \nu|(C) \\
&\geq |\mu|([0,1] \setminus A ) + (\mu+\lambda\nu)\left(\cup_{i\in \mathbb{N}} (A_i\cup B_i)\right)\\
&\qquad +(1+|\lambda|)\mu\left( A \setminus \cup_{i\in \mathbb{N}} (A_i\cup B_i)\right)+|\lambda|\mu\left( A \setminus \cup_{i\in \mathbb{N}} (A_i\cup B_i)\right)\\
&=|\mu|([0,1])+2|\lambda|\mu\left( A \setminus \cup_{i\in \mathbb{N}} (A_i\cup B_i)\right) >\|\mu\|.
\end{align*}
Secondly, we show that $\mu(f)$ is not orthogonal to $\nu(f)$ in the sense of Birkhoff-James for an arbitrary $f\in M_\mu$ by showing that $\nu(f)\neq 0$. Without loss of generality, we assume that $\mu(f)=\|\mu\|$. It is clear that $\int_{A_i}fd\mu=\mu(A_i)=\mu(B_i)=\int_{B_i}fd\mu$ for every $i \in \mathbb{N}$ and $\int_{A \setminus \left(\cup_{i\in \mathbb{N}} (A_i\cup B_i)\right)} f d\mu=\mu\left( A \setminus \left(\cup_{i\in \mathbb{N}} (A_i\cup B_i)\right)\right)$. Hence, we have that
\begin{align*}
\nu(f)
&=\int_A f d \nu +\int _C f d\nu \\
&= \mu\left( A \setminus \left(\cup_{i\in \mathbb{N}} (A_i\cup B_i)\right)\right)\left(-1+ \frac{1}{\delta-\gamma}\left(\int_{\left(\gamma,\frac{\gamma+\delta}{2}\right)} f d\sigma - \int_{\left(\frac{\gamma+\delta}{2},\delta\right)} f d\sigma\right)\right)\\
&<0.
\end{align*}
The last inequality follows from the fact that $\int_{\left(\gamma,\frac{\gamma+\delta}{2}\right)} f d\sigma - \int_{\left(\frac{\gamma+\delta}{2},\delta\right)} f d\sigma=\delta-\gamma$ would force $f$ to be $1$ on $\left(\gamma,\frac{\gamma+\delta}{2}\right)$ and $-1$ on $\left(\frac{\gamma+\delta}{2},\delta\right)$ almost everywhere, which is impossible for a continuous function.
\end{proof}
\begin{prop}\label{example2} A norm attaining regular Borel measure $\mu\in \mathcal{M}[0,1]$ satisfying the following two properties does not have the adjusted $\BS$ property.
\begin{enumerate}
\setlength\itemsep{0.3em}
\item[\textup{(i)}] there exists an infinite subset $A \subseteq [0,1]$ such that $\mu(\{t\})\neq 0$ for each $t\in A$.
\item[\textup{(ii)}] there exists an interval $C=(\gamma,\delta) \subseteq [0,1]\setminus A$ such that $|\mu|(C)=0.$
\end{enumerate}
\end{prop}
\begin{proof}
We only give a proof for the case that there are infinitely many $t\in A$ such that $\mu(\{t\})>0$, since the other case can be proved by taking $-\mu$. Passing to this infinite subset, we assume for convenience that $\mu$ is positive at every point of $A$.
To construct a desired regular Borel measure $\nu$ as in Proposition \ref{example1}, we first pick an element $z\in A$. Since $\|\mu\|<\infty$, there are disjoint subsets $A_1=\{x_i : i\in \mathbb{N}\}$ and $A_2=\{y_i : i\in \mathbb{N}\}$ of $A\setminus\{z\}$ satisfying $\max\{\mu(\{x_i\}),\mu(\{y_i\})\}\leq 1/4^i$ for every $i\in \mathbb{N}$. Define a regular Borel measure $\nu$ as
$$d\nu=\sum_{i=1}^\infty \frac{1}{2^i}d\delta_{x_i}-\sum_{i=1}^\infty \frac{1}{2^i}d\delta_{y_i}-\mu(\{z\})d\delta_z+\frac{\mu(\{z\})}{\delta-\gamma}\left(\chi_{\left(\gamma,\frac{\gamma+\delta}{2}\right)}d\sigma-\chi_{\left(\frac{\gamma+\delta}{2},\delta\right)} d\sigma \right)$$
where $\delta_{\{\cdot\}}$ is the Dirac measure, $\chi_{\{\cdot\}}$ is the characteristic function, and $\sigma$ is the Lebesgue measure.
We claim that $\mu$ is strongly orthogonal to $\nu$ in the sense of Birkhoff-James. Indeed, for $0<\lambda<1$, take a number $n_0\in\mathbb{N}$ so that $\lambda 2^{n_0}> 1$. It is obvious that $\lambda \nu(\{x_i\})=-\lambda\nu(\{y_i\})>\max\{\mu(\{x_i\}),\mu(\{y_i\})\}$ for every $i\geq n_0$. Then, we have
\begin{align*}
\|\mu+ \lambda \nu\|
&= |\mu+ \lambda \nu|\bigl([0,1] \setminus (A_1\cup A_2 \cup \{z\} \cup C)\bigr)+|\mu+ \lambda \nu|(A_1\cup A_2) \\
&\qquad +|\mu+ \lambda \nu|\left(\{z\}\right)+|\mu+ \lambda \nu|(C) \\
&\geq |\mu|\bigl([0,1] \setminus (A_1\cup A_2 \cup \{z\})\bigr) + (\mu+\lambda\nu) \left(\{x_i, y_i : i< n_0\}\right)\\
&\qquad +(\mu+\lambda\nu)\left(\{x_i : i\geq n_0\}\right)-(\mu+\lambda\nu)\left(\{y_i : i\geq n_0\}\right)\\
&\qquad\qquad +(1-\lambda)\mu\left( \{z\}\right)+\lambda\mu\left( \{z\}\right)\\
&= |\mu|\bigl([0,1] \setminus \{y_i : i\geq n_0\}\bigr)+\sum_{i\geq n_0}\left(2\lambda|\nu(\{y_i\})|-\mu(\{y_i\})\right) >\|\mu\|.
\end{align*}
For $\lambda<0$, we have
\begin{align*}
\|\mu+ \lambda \nu\|
&= |\mu+ \lambda \nu|\bigl([0,1] \setminus (A_1\cup A_2 \cup \{z\} \cup C)\bigr)+|\mu+ \lambda \nu|(A_1\cup A_2) \\
&\qquad +|\mu+ \lambda \nu|\left(\{z\}\right)+|\mu+ \lambda \nu|(C) \\
&\geq |\mu|\bigl([0,1] \setminus (A_1\cup A_2 \cup \{z\})\bigr) + (\mu+\lambda\nu)(A_1\cup A_2)\\
&\qquad +(1+|\lambda|)\mu\left(\{z\}\right)+|\lambda|\mu\left(\{z\}\right)\\
&=|\mu|([0,1])+2|\lambda||\mu|(\{z\}) >\|\mu\|.
\end{align*}
We now show that $\mu(f)$ is not orthogonal to $\nu(f)$ in the sense of Birkhoff-James for an arbitrary $f\in M_\mu$ by showing that $\nu(f)\neq 0$ as we did in Proposition \ref{example1}. Arguing similarly, we may assume such $f$ has the value $1$ on $A$. Hence, we have that
\begin{align*}
\nu(f)
&=\int_{A_1\cup A_2} f d\nu +\int_{\{z\}} f d\nu+\int _C f d\nu \\
&= \mu\left( \{z\}\right)\left(-1+ \frac{1}{\delta-\gamma}\left(\int_{\left(\gamma,\frac{\gamma+\delta}{2}\right)} f d\sigma - \int_{\left(\frac{\gamma+\delta}{2},\delta\right)} f d\sigma\right)\right) <0.
\end{align*}
\end{proof}
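The strong orthogonality estimates above can be sanity-checked numerically in a purely atomic toy model (a sketch; all masses below are invented sample data matching the shape of the construction, with $\mu(\{x_i\})=\mu(\{y_i\})=4^{-i}$, $\nu(\{x_i\})=-\nu(\{y_i\})=2^{-i}$, and the interval $C$ carrying $|\nu|$-mass $\mu(\{z\})$):

```python
# Discrete sanity check of mu perp_S nu in Proposition (example2);
# all masses are invented sample data for illustration only.
N = 12
mu_x = [4.0 ** -i for i in range(1, N + 1)]    # mu({x_i})
mu_y = [4.0 ** -i for i in range(1, N + 1)]    # mu({y_i})
mu_z, mu_rest = 0.3, 0.1                       # mu({z}), |mu| off the atoms

nu_x = [2.0 ** -i for i in range(1, N + 1)]    # nu({x_i})
nu_y = [-2.0 ** -i for i in range(1, N + 1)]   # nu({y_i}) = -nu({x_i})

norm_mu = sum(mu_x) + sum(mu_y) + mu_z + mu_rest   # total variation of mu

def norm_mu_plus(lam):
    # Total variation of mu + lam*nu: the atoms x_i, y_i, the atom z
    # (where nu({z}) = -mu({z})), the interval C (where mu vanishes and
    # |nu|(C) = mu({z})), and the remaining mass of mu.
    atoms = sum(abs(a + lam * b) for a, b in zip(mu_x, nu_x))
    atoms += sum(abs(a + lam * b) for a, b in zip(mu_y, nu_y))
    return atoms + abs(mu_z - lam * mu_z) + abs(lam) * mu_z + mu_rest

# The norm strictly increases in every direction lam != 0, as the proof shows.
for lam in [-1.0, -0.5, -0.01, 0.01, 0.5, 1.0]:
    assert norm_mu_plus(lam) > norm_mu
print("ok")
```

For $\lambda>0$ the surplus comes from the tail atoms with $\lambda 2^{i}>1$, and for $\lambda<0$ from the atom $z$ together with the interval $C$, exactly matching the two estimates in the proof.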
From the Bishop-Phelps theorem \cite{BP}, the set of norm attaining functionals is norm-dense for an arbitrary Banach space. Moreover, for a norm attaining functional on $C[0,1]$, it is easy to find another functional which is arbitrarily close to it and satisfies the conditions in Proposition \ref{example1} or \ref{example2}. Hence, the set of norm attaining functionals without the adjusted $\BS$ property is norm-dense. It is worth mentioning that in $\mathcal{L}(c_0,\mathbb{R})$ there are no norm attaining operators without the adjusted $\BS$ property since $\BSa(c_0,\mathbb{R})=\mathrm{NA}(c_0,\mathbb{R})$ as in Theorem \ref{thm:ell1predual}.
\begin{cor}
The set of norm attaining operators without the adjusted $\BS$ property is norm-dense in $\mathcal{L}(C[0,1],\mathbb{R})$.
\end{cor}
On the other hand, using Propositions \ref{example1} and \ref{example2}, we shall see that the denseness of $\BSa(C[0,1],\mathbb{R})$ does not hold. In the proof, we make use of the following decomposition of a regular Borel measure on the real line $\mathbb{R}$.
\begin{lemma}[\mbox{\cite[Theorem 19.57, Ch V]{HS}}]\label{lem:HS}
Let $\mu$ be any regular Borel measure on $\mathbb{R}$. Then $\mu$ can be expressed in exactly one way in the form
\[
\mu = \mu_c + \mu_d,
\]
where $\mu_c$ is a continuous regular Borel measure and $\mu_d$ is a purely discontinuous measure.
\end{lemma}
Here, a continuous measure is a measure whose value is $0$ at every singleton, whereas a purely discontinuous measure is a (possibly infinite) countable linear combination of Dirac measures. Note that the cumulative distribution of a continuous measure is continuous (see \cite[Remark 19.58, Ch V]{HS}).
\begin{theorem}
\label{nondensec}The set $\BSa(C[0,1],\mathbb{R})$ is not norm-dense in $\mathcal{L}(C[0,1],\mathbb{R})$.
\end{theorem}
\begin{proof}
Define a regular Borel measure $\mu$ on $[0,1]$ as
$$
d\mu=\chi_{\left(0,\frac{1}{2}\right)}d\sigma-\chi_{\left(\frac{1}{2},1\right)} d\sigma,
$$
where $\chi_{\{\cdot\}}$ is the characteristic function and $\sigma$ is the Lebesgue measure.
It is clear that $\mu$ vanishes at $\pm\chi_{[0,1]}$. Hence, every norm attaining $\nu\in\mathcal{M}[0,1]$ with $\|\mu-\nu\|<1/2$ does not attain its norm at these characteristic functions. Observe that $f^{-1}((-1,1))$ is a non-empty open set whose total variation $|\nu|\bigl(f^{-1}((-1,1))\bigr)$ with respect to $\nu$ is $0$ for any norm attaining point $f\in S_{C[0,1]}$ of $\nu$ (see the proof of \cite[Proposition 3.7]{KL} for details). Moreover, this implies that $|\nu| (f^{-1}(\{1\})) >0$ and $|\nu|(f^{-1}(\{-1\}))>0$, since otherwise $\nu$ attains its norm at $\pm\chi_{[0,1]}$.
Assume that there exists $\nu \in \BSa (C[0,1], \mathbb{R})$ such that $\| \mu -\nu \| < 1/2$. Fix $f\in S_{C[0,1]}$ such that $\nu(f)=\|\nu\|$ and let $C \subseteq f^{-1}((-1/2,1/2)) \subseteq [0,1]$ be an open interval which satisfies $|\nu|(C)=0$.
According to Lemma \ref{lem:HS}, we can write $\nu^+=\nu_c^++\nu_d^+$ and $\nu^-=\nu_c^-+\nu_d^-$, where $\nu^+$ and $\nu^-$ are the restrictions of $\nu$ to $f^{-1}((1/2,\infty))$ and $f^{-1}((-\infty,-1/2))$, respectively, and $\nu_c^{\pm}$ denotes the continuous part and $\nu_d^{\pm}$ the purely discontinuous part of each. We note that $\nu^{+}$ and $-\nu^{-}$ are positive measures.
If one of the measures $\nu_d^{\pm}$ is nonzero at infinitely many points, we see that $\nu$ does not have the adjusted $\BS$ property by Proposition \ref{example2}, which is a contradiction. Therefore, there are only finitely many discontinuities of $\nu$.
We now suppose that $\nu^{+}_c$ is not $0$, then there is an open interval $A = (\eta_1,\eta_2) \subseteq f^{-1}((1/2,\infty))$ so that $\nu^{+}_c\bigl((\eta_1,t)\bigr) \geq 0$ for every $t \in A$ and $\nu^{+}_c\bigl((\eta_1,\eta_2)\bigr)>0$. Since there are only finitely many discontinuities of $\nu$, we may assume that $\nu^+$ and $\nu^+_c$ are the same on $A$. This shows that the measure $\nu$ satisfies the conditions of Proposition \ref{example1}, which is a contradiction. Similarly, we deduce the same result for the case that $\nu^{-}_c$ is not $0$.
Consequently, we see that the only candidates for a measure $\nu$, with $\| \mu - \nu\| < 1/2$, having the adjusted $\BS$ property are finite linear combinations of Dirac measures. However, for such $\nu$ we have $\|\mu-\nu\|=\|\mu\|+\|\nu\|\geq \|\mu\|= 1$, which contradicts $\|\mu-\nu\|<1/2$.
\end{proof}
Even though the set $\BSa(C[0,1],\mathbb{R})$ is not norm-dense in $\mathcal{L}(C[0,1],\mathbb{R})$, we see that there are a lot of operators with the adjusted $\BS$ property. Indeed, we show that $\BSa(C[0,1],\mathbb{R})$ is weak-$*$-dense.
\begin{prop}
\label{example3} If a measure $\mu\in \mathcal{M}[0,1]$ is a finite linear combination of Dirac measures, then it has the adjusted $\BS$ property.
\end{prop}
\begin{proof} The proof is similar to that of Theorem \ref{thm:ell1predual}, but we give the details for completeness. Let us write $\mu = \sum_{k=1}^n a_k \delta_{x_k}$ with non-zero $a_k$ and elements $x_k\in [0,1]$, where $\delta_{\{\cdot\}}$ is the Dirac measure. Suppose that $\mu \perp_S \nu$ for some $\nu\in \mathcal{M}[0,1]$. We claim first that
$$
\left| \sum_{k=1}^n \sgn (a_k) \nu(\{x_k\}) \right| < |\nu|([0,1]\setminus \{x_k : 1\leq k\leq n\}).
$$
If it is not true, for $\lambda_0\in \mathbb{R}$ such that $0<|\lambda_0| < \min_{1 \leq k \leq n} \frac{|a_k|}{\max\{1,|\nu|(\{x_k\})\}}$ and $\sgn \lambda_0 = - \sgn(\sum_{k=1}^n \sgn (a_k) \nu(\{x_k\}))$, we have that
\begin{align*}
\|\mu + \lambda_0 \nu\| &= \sum_{k=1}^n |a_k + \lambda_0 \nu(\{x_k\})| + |\lambda_0| |\nu|([0,1]\setminus \{x_k : 1\leq k\leq n\}) \\
&= \sum_{k=1}^n \left| |a_k| + \lambda_0 \sgn(a_k) \nu(\{x_k\}) \right| + |\lambda_0| |\nu|([0,1]\setminus \{x_k : 1\leq k\leq n\}) \\
&= \sum_{k=1}^n |a_k| - |\lambda_0| \left| \sum_{k=1}^n \sgn(a_k) \nu(\{x_k\}) \right| + |\lambda_0| |\nu|([0,1]\setminus \{x_k : 1\leq k\leq n\}) \\
&\leq \sum_{k=1}^n |a_k| = \|\mu\|,
\end{align*}
which is a contradiction.
To find $f\in S_{C[0,1]}$ such that $\mu(f)=\|\mu\|$ and $\nu(f)=0$, we take $\varepsilon>0$ such that
$$
\left| \sum_{k=1}^n \sgn (a_k) \nu(\{x_k\}) \right| +4\varepsilon < |\nu|([0,1]\setminus \{x_k : 1\leq k\leq n\}).
$$
From the regularity of $\nu$ there are a compact $K\subseteq [0,1]\setminus \{x_k : 1\leq k\leq n\}$ and mutually disjoint open sets $U_k\subseteq [0,1]\setminus K$ such that $x_k \in U_k$, $\overline{U_k}\subseteq [0,1]\setminus K$,
$$\sum_{k=1}^n |\nu|(U_k)< \sum_{k=1}^n \left|\nu(\{x_k\}) \right| +\varepsilon \text{~and~}|\nu|(K)> |\nu|([0,1]\setminus \{x_k : 1\leq k\leq n\})-\varepsilon.$$
Using Urysohn's Lemma, we construct a continuous non-negative function $h$ in $S_{C[0,1]}$ such that $h \equiv 1$ on $K$ and $h \equiv 0$ on $\cup_{k=1}^n \overline{U_k}$. Similarly, we take non-negative continuous functions $h_k$ in $S_{C[0,1]}$ such that $h_k \equiv 1$ at $x_k$ and $h_k \equiv 0$ on $[0,1]\setminus U_k$.
Now, consider an element $g\in B_{C[0,1]}$ such that $\int_K g d \nu>|\nu|(K)-\varepsilon$, and define $f_\alpha \in M_\mu$ by $f_\alpha =\sum_{k=1}^n \sgn(a_k)h_k + \alpha h g$ for $|\alpha|\leq 1$. It is clear that $\mu(f_\alpha) = \|\mu\|$ for any $|\alpha| \leq 1$. On the other hand, we have that
\begin{align*}
\nu(f_\alpha)
&=\sum_{k=1}^n \int_{\{x_k\}}f_\alpha d\nu +\sum_{k=1}^n \int_{U_k\setminus \{x_k\}}f_\alpha d\nu +\int_{[0,1]\setminus (\cup_{k=1}^n \overline{U_k})}f_\alpha d\nu\\
&= \sum_{k=1}^n \sgn (a_k) \nu(\{x_k\})+\sum_{k=1}^n \int_{U_k\setminus \{x_k\}} \sgn (a_k) h_k d\nu +\alpha\int_{[0,1]\setminus (\cup_{k=1}^n \overline{U_k})} h g d\nu.
\end{align*}
From the inequalities
$$\left|\sum_{k=1}^n \sgn (a_k) \nu(\{x_k\})+\sum_{k=1}^n \int_{U_k\setminus \{x_k\}} \sgn(a_k) h_k d\nu\right| <\left| \sum_{k=1}^n \sgn (a_k) \nu(\{x_k\}) \right| +\varepsilon$$
and
\begin{align*}
\left|\int_{[0,1]\setminus (\cup_{k=1}^n \overline{U_k})} h g d\nu\right|
&\geq \left|\int_{K} hg d\nu\right|-|\nu|\left([0,1]\setminus \left(K\cup\left(\cup_{k=1}^n \overline{U_k}\right)\right)\right)\\
&>|\nu|(K)-2\varepsilon>|\nu|([0,1]\setminus \{x_k : 1\leq k\leq n\})-3\varepsilon,
\end{align*}
we see that there exists $|\alpha|\leq 1$ so that $\nu (f_\alpha)=0$. This shows that $\mu$ has the adjusted $\BS$ property.
\end{proof}
It is well known that the set of extreme points of $B_{\mathcal{M}[0,1]}$ is the set of Dirac measures on $[0,1]$ (see \cite[Lemma 3.42]{FHH}). From Proposition \ref{example3}, we see that all the linear combinations of extreme points are contained in the set $\BSa(C[0,1],\mathbb{R})$, so we have shown the following consequence due to the Krein-Milman theorem.
\begin{cor}\label{cor:C[0,1]}The set $\BSa(C[0,1],\mathbb{R})$ is weak-$*$-dense in $\mathcal{L}(C[0,1],\mathbb{R})$.
\end{cor}
\begin{remark}
Note that the argument used in Proposition \ref{example3} (and in Corollary \ref{cor:C[0,1]}) can be applied to a general $C(K)$-space. Thus, we observe that $\BSa (\ell_\infty, \mathbb{R})$ is weak-$*$-dense in $\mathcal{L} (\ell_\infty, \mathbb{R})$ since $\ell_\infty = C(\beta \mathbb{N})$.
\end{remark}
Since it is known that $\BS(C[0,1],\mathbb{R}) \subseteq \left\{ T \in \mathrm{NA}(C[0,1],\mathbb{R}) : M_T =\left\{ \pm \chi_{[0,1]}\right\}\right\} \cup \{0\}$ (see \cite{KL}), we have $\BS(C[0,1],\mathbb{R}) \subsetneqq \BSa(C[0,1],\mathbb{R})$, and moreover, the set $\BS(C[0,1],\mathbb{R})$ is not weak-$*$-dense in $\mathcal{L}(C[0,1],\mathbb{R})$.
\begin{prop}The set $\BS(C[0,1],\mathbb{R})$ is not weak-$*$-dense in $\mathcal{L}(C[0,1],\mathbb{R})$.
\end{prop}
\begin{proof}
Let $\mu$ be a measure whose norm is $1$ and which vanishes at $\pm\chi_{[0,1]}$, like the one at the beginning of the proof of Theorem \ref{nondensec}. Take $f \in B_{C[0,1]}$ such that $\mu(f)>1/2$.
Observe from \cite[Proposition 3.7]{KL}, recalled at the beginning of the present subsection, that $\BS(C[0,1],\mathbb{R})\cap \left\{\nu\in \mathcal{M}[0,1] : |\nu\left(\chi_{[0,1]}\right)|<1/2\right\}$ is contained in the ball $(1/2)B_{\mathcal{M}[0,1]}$. Hence, the set
\[
\left\{\nu\in \mathcal{M}[0,1]~:~ |\nu\left(\chi_{[0,1]}\right)|<\frac{1}{2},~\nu(f)>\frac{1}{2}\right\}
\]
is a weak-$*$-open set containing $\mu$ which does not intersect $\BS(C[0,1],\mathbb{R})$.
\end{proof}
This notable difference between the set of operators with the $\BS$ property and the set of operators with the adjusted $\BS$ property may lead to the following general question.
\begin{question}
Is $\BSa(X,\mathbb{R})$ weak-$*$-dense in $\mathcal{L}(X,\mathbb{R})$ for every Banach space $X$?
\end{question}
\subsection*{Statements \& Declarations}
\subsection*{Funding}
The first author was supported by the Basic Science Research Program through the National Research Foundation of
Korea (NRF) funded by the Ministry of Education, Science and Technology [NRF-2020R1A2C1A01010377]. The second author was supported by NRF (NRF-2019R1A2C1003857), by the POSTECH Basic Science Research Institute Grant (NRF-2021R1A6A1A10042944) and by a KIAS Individual Grant (MG086601) at the Korea Institute for Advanced Study. The third author was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) [NRF-2020R1C1C1A01012267].
\subsection*{Competing Interests} The authors have no relevant financial or non-financial interests to disclose.
\subsection*{Author Contributions} All authors contributed to the whole part of works together such as the study conception, design and writing. All authors read and approved the final manuscript.
\end{document}
\begin{document}
\title[Verifying a Quantum Superposition in a Micro-optomechanical System]{Creating and Verifying a Quantum Superposition in a Micro-optomechanical System}
\author{Dustin Kleckner$^{1, 2}$, Igor Pikovski$^{1, 3, 4}$, Evan Jeffrey$^3$, Luuk Ament$^5$, Eric Eliel$^3$, Jeroen van den Brink$^{5, 6}$, Dirk Bouwmeester$^{2,3}$}
\address{
$^1$These authors contributed equally to this work.\\
$^2$Physics Department, University of California, Santa Barbara\\
$^3$Huygens Laboratory, Universiteit Leiden\\
$^4$Fachbereich Physik, Freie Universit\"{a}t Berlin\\
$^5$Institute-Lorentz for Theoretical Physics, Universiteit Leiden \\
$^6$Institute for Molecules and Materials, Radboud Universiteit Nijmegen\\
}
\ead{\mailto{[email protected]}, \mailto{[email protected]}}
\begin{abstract}
Micro-optomechanical systems are central to a number of recent proposals for realizing quantum mechanical effects in relatively massive systems.
Here we focus on a particular class of experiments which aim to demonstrate massive quantum superpositions, although the obtained results should be generalizable to similar experiments.
We analyze in detail the effects of finite temperature on the interpretation of the experiment, and obtain a lower bound on the degree of non-classicality of the cantilever.
Although it is possible to measure the quantum decoherence time when starting from finite temperature, an unambiguous demonstration of a quantum superposition requires the mechanical resonator to be in or near the ground state.
This can be achieved by optical cooling of the fundamental mode, which also provides a method to measure the mean phonon number in that mode.
We also calculate the rate of environmentally induced decoherence and estimate the timescale for gravitational collapse mechanisms as proposed by Penrose and Diosi.
In view of recent experimental advances, practical considerations for the realization of the described experiment are discussed.
\end{abstract}
\maketitle
\section{Introduction}
Micro-optomechanical systems have recently attracted significant interest as a potential architecture for observing quantum mechanical effects on scales many orders of magnitude more massive than previous experiments.
Proposals include entangling states of mechanical resonators to each other~\cite{Mancini2003EPJD, Pinard2005EPL, Vitali2007JPA} or cavity fields~\cite{Vitali2007, Paternosto2007PRL}, the creation of entangled photon pairs~\cite{Giovannetti2001}, ground state optical feedback cooling of the fundamental vibrational mode~\cite{Courty2001, WilsonRae2007, Marquardt2007PRL, Bhattacharya2007PRL, Bhattacharya2008PRA}, observation of discrete quantum jumps~\cite{Thompson2008}, quantum state transfer~\cite{Zhang2003PRA} and the creation of massive quantum superpositions or so-called ``Schr\"odinger's cat'' states~\cite{Bose1997PRA, Bose1999PRA, Marshall2003PRL}.
Here we focus on the latter class of experiments, in particular the one described in Marshall et al.~\cite{Marshall2003PRL}.
\begin{figure}[b]
\begin{center}
\includegraphics{figure-1.pdf}
\end{center}
\caption{
A diagram of the experimental setup.
An input pulse is split between the two arms of a Michelson interferometer, labeled A and B, both of which contain high finesse cavities.
One end of the cavity in arm A is a tiny end mirror on a micromechanical cantilever, whose motion is affected by the radiation pressure of light in the cavity.
Each output port of the interferometer is monitored by a single photon detector, and results are analyzed by a computer to calculate the interference visibility.
}
\label{fig-setup}
\end{figure}
The heart of this experiment is a Michelson interferometer with high finesse optical cavities in each of its arms (\fref{fig-setup}).
In one arm the traditional end mirror is replaced with a tiny mirror on a micromechanical cantilever, hereafter referred to as the ``cantilever''.
Under the right conditions, the radiation pressure of a single photon in this arm of the experiment will be enough to excite the cantilever into a distinguishable quantum state.
A single photon incident on the 50-50 beam splitter will form an optical superposition of being in either of the two arms; the coupling between the photon and the cantilever will then entangle their states, putting the cantilever into a superposition as well.
If the photon leaves the interferometer with the cantilever in a distinguishable state, an outside observer could in principle determine which arm the photon took, and so the interference visibility is destroyed.
After a full mechanical period of the cantilever, however, it returns to its original position: if the photon leaves the interferometer at this time, the interference visibility should return provided the cantilever was able to remain in a quantum superposition in the intermediate period.
Alternatively, if the state of the cantilever collapses during this period due to environmentally induced decoherence, measurement by an outside observer or perhaps an exotic mechanism (e.g.~\cite{Karolyhazy1966, Penrose1996, Diosi1989PRA}), the visibility will not return.
In this sense the interference revival constitutes evidence that the cantilever was able to exist in a quantum superposition, and a measurement of its magnitude constitutes a measurement of the quantum decoherence in this time interval.
In a real experiment, however, one must be careful about drawing conclusions from the visibility dynamics as similar results can be obtained from a fully classical argument.
In this work we address the issue of classicality by first calculating the quantum dynamics of the system for both a pure state and a thermal density matrix (\sref{section-quantum}).
We also calculate the Wigner function of the system as a method of determining the transition from the quantum to classical regime (\sref{section-wigner}).
Finally we discuss quantum decoherence mechanisms (\sref{section-decoherence}) and prospects for realization in view of recent experimental results (\sref{section-experiment}).
\section{Quantum Mechanical Description}
\label{section-quantum}
A more detailed analysis of the system begins with the Hamiltonian, given by Law~\cite{Law1995PRA}:
\begin{equation}\label{eq-Hamiltonian}
H = \hbar \omega_a \left[ a^\dagger a + b^\dagger b \right] + \hbar \omega_c \left[ c^\dagger c - \kappa\, a^\dagger a \left(c + c^\dagger\right)\right],
\end{equation}
where $\omega_a$ is the frequency of the optical field, $a^\dagger$/$b^\dagger$ and $a$/$b$ are the photon creation and annihilation operators for arms A and B of the interferometer, $\omega_c$ is the mechanical frequency of the cantilever and $c^\dagger$ and $c$ are the phonon creation and annihilation operators for its fundamental vibrational mode.
The dimensionless opto-mechanical coupling constant $\kappa$ is defined as:
\begin{eqnarray}
\kappa &=& \frac{\omega_a}{L \omega_c} \sqrt{\frac{\hbar}{2 m \omega_c}} \label{eq-Kappa}\\
&=& \frac{\sqrt{2}\, N x_0}{\lambda}, \label{eq-Kappa-2}
\end{eqnarray}
where $m$ is the mass of the cantilever, $L$ is the length of the optical cavity, $N$ is the number of cavity round trips per mechanical period, $\lambda$ is the optical wavelength, and $x_0 = \sqrt{\frac{\hbar}{m \omega_c}}$ is the size of the ground state wavepacket for the cantilever.
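The two expressions for $\kappa$ in eqn.~\eref{eq-Kappa} are equivalent, which can be checked numerically; the following sketch uses purely illustrative (hypothetical) device parameters, not those of an actual experiment.

```python
import math

hbar = 1.054571817e-34  # J s
c = 2.99792458e8        # m/s

# Hypothetical device parameters, for illustration only
m = 1e-12                     # cantilever mass, kg
omega_c = 2 * math.pi * 1e4   # mechanical angular frequency, rad/s
L = 0.01                      # optical cavity length, m
lam = 630e-9                  # optical wavelength, m

omega_a = 2 * math.pi * c / lam        # optical angular frequency
x0 = math.sqrt(hbar / (m * omega_c))   # ground-state wavepacket size
N = math.pi * c / (L * omega_c)        # cavity round trips per mechanical period

kappa_1 = (omega_a / (L * omega_c)) * math.sqrt(hbar / (2 * m * omega_c))
kappa_2 = math.sqrt(2) * N * x0 / lam

print(kappa_1, kappa_2)  # the two forms agree
```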
The Hamiltonian treats the mechanical resonator as completely linear, which should be a valid assumption.
Non-linearities have not been observed in experiments conducted on similar systems, which is expected given that the typical vibration amplitudes are many orders of magnitude smaller than the dimensions of the resonator.
From this we can derive the unitary evolution operator~\cite{Bose1997PRA}:
\begin{eqnarray}\label{eq-unitary}
U(t)&=&\exp\big[-i \omega_a t \left(a^\dagger a + b^\dagger b\right) + i \left(\kappa\, a^\dagger a\right)^2 \left(\omega_c t - \sin \omega_c t\right) \big] \times \nonumber\\
&& \quad \exp\big[ \kappa\, a^\dagger a \left[ \left(1-e^{-i \omega_c t}\right) c^\dagger - \left(1-e^{i \omega_c t}\right)c \right] \big] \exp\big[- i \omega_c\, c^\dagger c\, t \big].
\end{eqnarray}
\subsection{Coherent State}
If we consider a cantilever initially in a coherent state with complex amplitude $\beta$, the total initial state is given by
$\ket{\Psi(0)} = \frac{1}{\sqrt{2}} \left( \ket{0,1}_{n_a, n_b} + \ket{1,0}_{n_a, n_b} \right) \otimes \ket{\beta}_c$.
Under the action of the unitary operator eqn.~\eref{eq-unitary} this unentangled state evolves to:
\begin{eqnarray}
\ket{\Psi(t)}&=&\frac{1}{\sqrt{2}} e^{-i \omega_a t}
\big( \ket{0, 1} \otimes \ket{\beta e^{-i \omega_c t}} \nonumber\\
&& \quad +\, e^{i \kappa^2 (\omega_c t - \sin \omega_c t) +
i \kappa\, \mathrm{Im} [ \beta (1- e^{-i \omega_c t}) ] }
\ket{1, 0} \otimes \ket{\kappa (1- e^{-i \omega_c t}) + \beta e^{-i \omega_c t}} \big) \label{eq-state-1}\\
&=&\frac{1}{\sqrt{2}} e^{-i \omega_a t}
\big( \ket{0, 1} \otimes \ket{\Phi_0(t)} \nonumber\\
&& \quad +\, e^{i \kappa^2 (\omega_c t - \sin \omega_c t) -
i\, \mathrm{Im} [ \Phi_0(t) \Phi_1(t)^*]} \ket{1, 0} \otimes \ket{\Phi_1(t)} \big). \label{eq-state-2}
\end{eqnarray}
Because the cantilever is only displaced if the photon is in arm A, the state of the photon and the state of the cantilever become entangled.
The cantilever then enters a superposition of two different coherent states, with time dependent amplitude $\Phi_0(t)$ when no photon is present and $\Phi_1(t)$ if there is a photon.
After half a mechanical period, the spatial distance between the two cantilever states $\ket{\Phi_0}$ and $\ket{\Phi_1}$ is given by $\delta x = \sqrt{8}\, \kappa x_0$, and the two cantilever states have the lowest overlap, $|\langle\Phi_0|\Phi_1\rangle| = e^{-2 \kappa^2}$.
After a full mechanical period $\ket{\Phi_0}$ and $\ket{\Phi_1}$ are identical again, and so the photon and cantilever are disentangled.
For a proper demonstration of a superposition, we require the overlap between the states to be relatively small during part of the experiment, implying $\kappa \gtrsim 1/\sqrt{2}$.
This is equivalent to stipulating that a measurement of the cantilever state alone is sufficient to determine which path a photon took with a reasonable fidelity.
As will be discussed in \sref{section-experiment}, obtaining this large a value of $\kappa$ poses the most significant barrier to experimental realization.
In practice, the actual quantity measured is the interferometric visibility $v(t)$ as seen by the two single photon detectors.
This visibility is given by twice the absolute value of the off-diagonal elements of the reduced photon density matrix, which in this case is the overlap between the two cantilever states:
\begin{equation}\label{eq-visibility}
v(t) = e^{-\kappa^2 (1-\cos(\omega_c t))}.
\end{equation}
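A few limiting cases of eqn.~\eref{eq-visibility} can be verified directly (a minimal sketch in units where $\omega_c = 1$): full visibility at $t=0$, the minimum $e^{-2\kappa^2}$ (equal to the overlap $|\langle\Phi_0|\Phi_1\rangle|$) after half a period, and the revival after a full period.

```python
import math

def visibility(t, kappa, omega_c=1.0):
    """Interference visibility, v(t) = exp(-kappa^2 (1 - cos(omega_c t)))."""
    return math.exp(-kappa**2 * (1 - math.cos(omega_c * t)))

kappa = 1 / math.sqrt(2)
v0 = visibility(0.0, kappa)              # full visibility at t = 0
v_half = visibility(math.pi, kappa)      # minimum after half a period
v_full = visibility(2 * math.pi, kappa)  # revival after a full period

print(v0, v_half, v_full)
```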
\begin{figure}[!htp]
\begin{center}
\includegraphics{figure-2.pdf}
\end{center}
\caption{Left: The visibility $v(t)$ as a function of time for different values of the opto-mechanical coupling constant, $\kappa$.
Right: The von Neumann entropy $S(t)$ versus the visibility, $v(t)$.}
\label{fig:Vis}
\end{figure}
It exhibits a periodic behavior characterized by a suppression of the interference visibility after half a mechanical period and a revival of perfect visibility after a full period (\fref{fig:Vis}) provided there is no decoherence in the state of the cantilever.
The visibility can be mapped directly to the entanglement between the photon and the cantilever. For a pure bipartite state, we can express the entanglement as the von Neumann entropy of the photon $S(t)$ in terms of the visibility $v(t)$ (\fref{fig:Vis}):
\begin{eqnarray}
S(t)&=&-\textrm{Tr}_{\textrm{ph}} \left( \rho_{\textrm{ph}} \log_2 \rho_{\textrm{ph}} \right) \label{eq-Entropy-1}\\
&=&1+\frac{v(t)}{2} \log_2 \left( \frac{1-v(t)}{1+v(t)} \right) - \frac{1}{2} \log_2 \left( 1-v(t)^2 \right), \label{eq-Entropy-2}
\end{eqnarray}
where $\rho_{\textrm{ph}}$ is the reduced density matrix for the photon.
Since for a pure bipartite system a high von Neumann entropy of one subsystem corresponds to high entanglement between the two subsystems, we conclude that when the initial state is pure, the visibility alone is a good measure for the non-classical behavior of the cantilever.
This is true even in the presence of an arbitrary decoherence mechanism, which will destroy the quantum nature of the system and thus produce a corresponding loss of interference visibility.
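The entropy-visibility relation of eqn.~\eref{eq-Entropy-2} is simply the binary entropy of the eigenvalues $(1 \pm v)/2$ of the reduced photon density matrix; the following sketch checks this identity numerically.

```python
import math

def entropy_from_visibility(v):
    """Von Neumann entropy of the photon, eqn. (eq-Entropy-2)."""
    if v >= 1.0:
        return 0.0
    return 1 + (v / 2) * math.log2((1 - v) / (1 + v)) - 0.5 * math.log2(1 - v**2)

def binary_entropy(p):
    """Shannon entropy of the eigenvalue pair {p, 1 - p}."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# The reduced photon density matrix has eigenvalues (1 +/- v)/2
for v in [0.0, 0.25, 0.5, 0.9]:
    assert abs(entropy_from_visibility(v) - binary_entropy((1 + v) / 2)) < 1e-12

print(entropy_from_visibility(0.0))  # maximal entanglement at zero visibility
```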
\subsection{The cantilever at finite temperatures}
At finite temperatures the exact wavefunction of the cantilever is unknown, so the state is instead described by a density matrix:
\begin{equation}\label{eq-thermal-state}
\rho_c(0) = \frac{\sum_n e^{-E_n/k_B T } \ket{n}\bra{n} }{ \sum_n e^{-E_n/k_B T } } = \frac{1}{\pi \bar{n}} \int d^2\beta\, e^{-|\beta|^2 / \bar{n} } \ket{\beta}\bra{\beta},
\end{equation}
where $\bar{n} = 1/ ( e^{\hbar \omega_c / k_B T} - 1 )$ is the average thermal occupation number of the cantilever's center of mass mode, $\ket{n}$ are energy eigenstates and $\ket{\beta}$ coherent states of the cantilever.
Here we only consider the effects of a thermally excited initial state, i.e. for a cantilever with no dissipation ($Q \to \infty$).
The effects of dissipation and resulting decoherence are discussed in \sref{section-decoherence}.
The evolution of eqn.~\eref{eq-thermal-state} under the action of eqn.~\eref{eq-unitary} yields the visibility:
\begin{equation}\label{eq-visibilityTH}
v(t) = e^{-\kappa^2 (2 \bar{n} +1) (1-\cos (\omega_c t))}.
\end{equation}
At finite temperatures the density matrix represents an average over coherent states with different phases which destroys the interference visibility.
Although there is also a phase shift from the entanglement as discussed earlier, in principle this shift is known and repeatable, while the same is not true for the thermal state.
A good indicator that the visibility no longer captures the quantum behavior is that it becomes independent of $\hbar$ if the initial temperature of the cantilever is high~\cite{Bernad2006PRL}.
This can be seen most easily by noting that in the limit $k_B T \gg \hbar \omega_c$, the mean phonon number is given by $\bar{n} \approx k_B T/\hbar \omega_c - 1/2$. Thus the visibility eqn.~\eref{eq-visibilityTH} can be rewritten as:
\begin{equation}\label{eq-visibility-highT}
v(t) \approx e^{-\frac{k_B T}{m \omega_c^2} \left(\frac{2 N}{\lambda}\right)^2 \left(1 - \cos (\omega_c t)\right)}.
\end{equation}
This is the classically expected result; it differs from the quantum result primarily in that the visibility is always one at zero temperature, because classically the distinguishability of the cantilever state is irrelevant.
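The agreement between eqns.~\eref{eq-visibilityTH} and~\eref{eq-visibility-highT} in the limit $k_B T \gg \hbar \omega_c$ can be checked by comparing the coefficient of $-(1-\cos\omega_c t)$ in the two exponents (a sketch; the parameter values, including $N$, are hypothetical).

```python
import math

hbar, kB = 1.054571817e-34, 1.380649e-23

# Hypothetical parameters, for illustration only
m = 1e-12                     # kg
omega_c = 2 * math.pi * 1e4   # rad/s
lam = 630e-9                  # m
N = 1.5e6                     # round trips per mechanical period (assumed)

x0 = math.sqrt(hbar / (m * omega_c))
kappa = math.sqrt(2) * N * x0 / lam

T = 0.1  # K; here k_B T >> hbar omega_c
nbar = 1 / (math.exp(hbar * omega_c / (kB * T)) - 1)

# coefficient of -(1 - cos(omega_c t)) in the exponent of v(t)
quantum = kappa**2 * (2 * nbar + 1)                          # eqn. (eq-visibilityTH)
classical = (kB * T / (m * omega_c**2)) * (2 * N / lam)**2   # eqn. (eq-visibility-highT)

print(quantum / classical)  # -> 1 in the high-temperature limit
```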
At higher temperatures it is difficult to determine when the cantilever was in a superposition state.
Because the experiment requires averaging over many runs, the quantum distinguishability is masked by the unknown classical phase shifts.
However, after a full mechanical period the net phase shift from any initial state goes to zero and so full visibility should still return in a narrow window whose width scales like $\bar{n}^{-1/2}$.
This leaves open the possibility for measuring quantum collapse mechanisms at higher temperatures if one assumes that the cantilever was in a superposition state.
Provided that the opto-mechanical coupling strength $\kappa$ is relatively well known (e.g., by independently measuring $m$, $\omega_c$, $L$, etc.) and the instantaneous quantum state of the cantilever is regarded as some random coherent state (as should be the case for the weakly mechanically damped systems discussed here) it can be easily determined when a superposition should have been created.
Although eqn.~\eref{eq-visibility-highT} suggests the visibility should always return in the classical case, we note that this can only be true if both the optical \emph{and} mechanical modes are behaving classically.
On the other hand, if we regard only the optical field as quantum we should always expect no interference visibility because the classical cantilever would measure which path the photon took.
Thus the return of visibility at higher temperatures can be used to strongly imply the existence of a quantum superposition when $\kappa \gtrsim 1/\sqrt{2}$, even though the superposition cannot be directly measured by the visibility loss at $t \sim \pi/\omega_c$\footnote{The presence of a ``loophole'' in such a demonstration could be regarded as analogous to experimental tests of Bell's inequalities, where even though it is generally regarded that quantum mechanics has been adequately demonstrated, an unambiguous proof has remained elusive.
In our case, the loophole is caused by the unknown intermediate state resulting from finite temperature.
Even though a weakly damped system should produce something that is very nearly a coherent state at any given instance of time, there is no way to directly show the cantilever is in this state.
}.
Nevertheless, an unambiguous demonstration can be provided if the temperature is low enough such that the visibility loss due to quantum distinguishability is still resolvable.
At finite cantilever temperatures the interferometric visibility becomes a bad measure for the non-classicality of the mirror.
This can be easily seen from the relation between the von Neumann entropy and the visibility, eqn.~\eref{eq-Entropy-2}.
It is valid at arbitrary temperatures, but at $T>0$ the system is in a mixed state and the entropy is only an upper bound for the entanglement of formation~\cite{Nielsen2001Book}.
One thus needs to analyze the non-classicality of the cantilever state by other means.
In the next section we use the integrated negativity of the Wigner function~\cite{Kenfack2004JOptB} to quantify the non-classicality of the cantilever with respect to temperature.
\section{The Wigner Function and the Classical Limit}
\label{section-wigner}
\begin{figure}[!htp]
\begin{center}
\includegraphics[width=6.0 in]{figure-3.pdf}
\end{center}
\caption{
The time evolution of the cantilever's projected Wigner function for $\beta = 0$, $\kappa =2$ and $\hbar = \omega_c = m = 1$.
Regions where the Wigner function is negative, shown in yellow and red, have no classical analogue.
}
\label{fig:Wigner-Ground}
\end{figure}
To study transitions between the quantum and the classical regimes, it is often convenient to refer to quasi-probability distributions, with which quantum mechanics can be formulated in the common classical phase space.
One such distribution was proposed in 1932 by Wigner~\cite{Wigner1932PhysRev} and can be obtained from the density matrix $\rho$:
\begin{equation}\label{Wig}
W(x,p) = \frac{1}{\pi \hbar} \int_{- \infty}^{+\infty} dy\, \bra{x-y} \rho \ket{x+y}\, e^{2ipy/\hbar}.
\end{equation}
It is well known that in the classical limit $\hbar \rightarrow 0 $ the Wigner function tends to a classical probability distribution describing a microstate in phase space~\cite{Hillery1984PhR}.
This can most easily be seen in the case of a single particle moving in a potential $V(x)$.
The time evolution of the Wigner function for this closed system is described by the quantum Liouville equation~\cite{Wigner1932PhysRev, Schleich2001book}
\begin{eqnarray}\label{eq-WignerLiouville}
&&\Big( \frac{\partial}{\partial t} + \frac{p}{m} \frac{\partial}{\partial x} - \frac{dV(x)}{dx} \frac{\partial}{\partial p} \Big) \, W(x, p,t) = \nonumber\\
&& \quad \sum_{k=1}^{\infty} \hbar^{2k} \frac{(-1)^k}{4^k (2k+1)!} \frac{d^{2k+1}V(x)}{dx^{2k+1}}
\frac{\partial^{2k+1}}{\partial p^{2k+1}} W(x,p,t).
\end{eqnarray}
For $\hbar \rightarrow 0 $, the right hand side goes to 0, as long as no derivatives diverge.
In this limit the Wigner function $W(x,p,t)$ thus evolves according to the classical Liouville equation.
However, the quantum nature of $W(x,p,t)$ is also contained in its initial conditions.
In fact, in the special case of a harmonic potential, all non-classical behavior is encoded in the initial conditions of the Wigner function only since the right hand side of eqn.~\eref{eq-WignerLiouville} is always 0.
For $\hbar \rightarrow 0$, however, the initial conditions also become classical and $W(x,p,t)$ can be fully identified with some classical probability density.
If, on the other hand, the Wigner function is negative then no classical interpretation is possible, making it a useful tool to indicate the non-classicality of an arbitrary state.
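The special role of the harmonic potential noted above can be illustrated numerically: under the classical Liouville flow of $V(x) = \frac{1}{2} m \omega^2 x^2$ a coherent-state Wigner function simply rotates rigidly in phase space, reproducing the exact quantum evolution (a sketch in units $\hbar = m = \omega = 1$; the sample points are arbitrary).

```python
import math

def wigner_coherent(x, p, xc, pc):
    """Gaussian Wigner function of a coherent state centered at (xc, pc)."""
    return math.exp(-((x - xc) ** 2 + (p - pc) ** 2)) / math.pi

def flow_back(x, p, t):
    """Run the classical harmonic-oscillator flow backwards by time t."""
    return (x * math.cos(t) - p * math.sin(t), x * math.sin(t) + p * math.cos(t))

xc, pc, t = 2.0, 0.0, 0.7
for x, p in [(0.3, -1.2), (1.8, 0.4), (-0.5, 2.1)]:
    # classical Liouville evolution: transport the initial Wigner value
    xb, pb = flow_back(x, p, t)
    w_classical = wigner_coherent(xb, pb, xc, pc)
    # quantum evolution: the coherent-state center follows the classical orbit
    xct = xc * math.cos(t) + pc * math.sin(t)
    pct = -xc * math.sin(t) + pc * math.cos(t)
    w_quantum = wigner_coherent(x, p, xct, pct)
    assert abs(w_classical - w_quantum) < 1e-12

print("classical Liouville flow matches quantum evolution")
```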
It is thus convenient to quantify the total negativity of the Wigner function~\cite{Kenfack2004JOptB}:
\begin{eqnarray}\label{eq-Negativity}
N &=& \int_{- \infty}^{+\infty}\!\! dx \int_{- \infty}^{+\infty}\!\! dp\, \big\{ |W(x,p)| - W(x,p) \big\} \nonumber\\
&=& \int\!\! dx \int\!\! dp \, |W(x,p)| - 1.
\end{eqnarray}
For the experiment at hand, we compute the cantilever's Wigner function for dimensionless $x$ and $p$, with the photon projected into the superposition state $\ket{0,1} + e^{i \theta} \ket{1,0}$ to avoid destroying the quantum state of the cantilever to which it is entangled.
This projection is equivalent to detecting a single photon at one output of the interferometer, where the phase term in the projection accounts for path length differences in the arms.
Generally speaking, varying $\theta$ shifts the interference peaks but does not modify the Wigner function in a significant way; hereafter we will set it to 0.
The resulting Wigner function of the cantilever indeed shows that the system periodically exists in a highly non-classical state (\fref{fig:Wigner-Ground}).
A calculation of the thermally averaged Wigner function shows that the non-classical features are quickly washed out with increasing initial temperature (\fref{fig:Wigner-Thermal}).
However, as long as part of the Wigner function is negative, the cantilever is clearly in a non-classical superposition state.
The negativity of the Wigner function at half a mechanical round trip decreases rapidly with $\bar{n}$ and is also dependent on $\kappa$ (\fref{fig:Negativity}).
In practice, this implies that $\bar{n}$ must be of order 1 for $\kappa \approx 1$, with somewhat higher values being tolerable for higher $\kappa$.
This analysis confirms our earlier assertion that direct proof of a superposition requires low mean phonon number.
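As a consistency check on the negativity measure of eqn.~\eref{eq-Negativity}, it can be evaluated by brute-force quadrature for a state whose Wigner function is known in closed form; for the $n=1$ Fock state (with $\hbar = m = \omega_c = 1$) one has $W_1(x,p) = \frac{1}{\pi}\left(2(x^2+p^2)-1\right)e^{-(x^2+p^2)}$ and the exact negativity $4e^{-1/2}-2 \approx 0.426$.

```python
import math

def wigner_fock1(x, p):
    """Wigner function of the n = 1 Fock state (hbar = m = omega_c = 1)."""
    r2 = x * x + p * p
    return (2 * r2 - 1) * math.exp(-r2) / math.pi

# brute-force Riemann sum of N = integral(|W| - W) dx dp over phase space
dx = 0.02
grid = [i * dx for i in range(-300, 301)]  # [-6, 6] covers the Gaussian tails
negativity = 0.0
for x in grid:
    for p in grid:
        w = wigner_fock1(x, p)
        negativity += (abs(w) - w) * dx * dx

exact = 4 * math.exp(-0.5) - 2
print(negativity, exact)  # both ~0.426
```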
Finally, we mention that it is also possible to demonstrate the non-classical nature of a mechanical resonator by calculating a measure of entanglement~\cite{Paternosto2007PRL}.
For example, in a related experiment in which two micromechanical systems are coupled to one another with a light field, the entanglement is lost at higher temperatures~\cite{Vitali2007JPA, Vitali2007}
(the larger temperature bound obtained is due to a large amplitude coherent state in the optical mode).
\begin{figure}[!htp]
\begin{center}
\includegraphics[width=6.0 in]{figure-4.pdf}
\end{center}
\caption{
The thermally averaged projected Wigner function of the cantilever at time $t=\pi$ for $\kappa = 1/\sqrt{2}$ and different mean thermal phonon numbers, $\bar{n}$.
($\hbar = \omega_c = m = 1$)
The negative regions of the Wigner function, shown in yellow and red, can be seen to quickly wash out with increasing temperature.
}
\label{fig:Wigner-Thermal}
\end{figure}
\begin{figure}[!htp]
\begin{center}
\includegraphics{figure-5.pdf}
\end{center}
\caption{
Negativity of the projected cantilever state as a function of coupling constant $\kappa$ for several different mean phonon numbers, $\bar{n}$.
The oscillations present when $\bar{n}=0$ are due to a phase shift in the interference terms, which are washed out at higher temperatures.
}
\label{fig:Negativity}
\end{figure}
\section{Decoherence}
\label{section-decoherence}
In addition to ``classical'' phase scrambling caused by the initial thermal motion of the cantilever as discussed above, there are other effects which cause ``quantum'' decoherence of the cantilever.
The signature of this type of decoherence is a reduction of the visibility's revival peak -- this is caused by information loss during a single experimental run.
This is different from the previously discussed effect which is a narrowing of the visibility revival peaks caused by averaging of states in a thermal mixture, where no information is lost.
Thus, to be able to detect a signature of a macroscopic superposition, the timescale on which decoherence occurs should be larger than a single mechanical period.
\subsection{Environmentally Induced Decoherence}
Environmentally induced decoherence is due to the coupling of the system to a finite temperature bath, and results in a finite lifetime for the quantum superposition of the cantilever.
Decoherence happens when the thermal bath measures the state of the cantilever while the photon is in the cavity, introducing a phase shift that can not be compensated for even in principle.
To find the time scale for this mechanism we need to solve the open quantum representation of the system.
This is generally done by coupling the cantilever to an infinite bath of harmonic oscillators and integrating out the environmental degrees of freedom.
In doing so, one obtains a time-local master equation for the density matrix of the system incorporating the influence of the environment.
We start with the Hamiltonian:
\begin{equation}\label{eq-OpenHamilonian}
H = H_{sys} + H_{bath} + H_{int},
\end{equation}
where:
\begin{eqnarray}\label{eq-OpenHamilonian2}
H_{sys} &=& \hbar \omega_a \left[ a^\dagger a + b^\dagger b \right] + \hbar \omega_c \left[c^\dagger c - \kappa\, a^\dagger a \left(c + c^\dagger\right)\right] \nonumber\\
H_{bath} &=& \sum_i \hbar \omega_i\, d^\dagger_i d_i \nonumber\\
H_{int} &=& (c + c^\dagger) \sum_i \lambda_i (d_i + d^\dagger_i ).
\end{eqnarray}
Here $d^\dagger_i$ ($d_i$) are the creation (annihilation) operators of the bath modes, $\omega_i$ is the frequency of each mode and $\lambda_i$ are coupling constants.
Using the Feynman-Vernon influence functional method~\cite{Feynman1963} we can eliminate the bath degrees of freedom.
When the thermal energy of the bath sets the highest energy scale we can use the Born-Markov approximation to obtain a master equation for the density matrix of our system~\cite{Caldeira1983PhysA}:
\begin{equation}\label{eq-Master}
\dot{\rho}(t) = \frac{1}{i\hbar} \left[ \tilde{H}_{sys},\rho(t) \right] -\frac{i \gamma}{\hbar} \left[ x,\left\{ p,\rho(t) \right\} \right] - \frac{D}{\hbar^2} \left[ x,\left[ x,\rho(t) \right] \right],
\end{equation}
where $\tilde{H}_{sys}$ is the system Hamiltonian in eqn.~\eref{eq-Hamiltonian}, renormalized by the interaction of the cantilever with the bath.
$\gamma = \omega_c/ Q $ is the damping coefficient as determined from the mechanical Q~factor and $D=2m\gamma k_B T_b $ is the diffusion coefficient where $T_b$ is the temperature of the bath.
The first term on the right hand side of~\eref{eq-Master} is the unitary part of the evolution with a renormalized frequency.
The other terms are due to the interaction with the environment only and incorporate the dissipation and diffusion of the cantilever.
The equation is valid in the Markovian regime when memory effects in the bath can be neglected; this is satisfied when the coupling to the bath is weak ($Q \gg 1$) and the thermal energy is much higher than the phonon energy ($k_B T_b \gg \hbar \omega_c$).
Both conditions are easily satisfied for realistic devices.
\begin{figure}[!htp]
\begin{center}
\includegraphics[width=6.0 in]{figure-6.pdf}
\end{center}
\caption{
Wigner function of the system in the presence of environmentally induced decoherence for $T_b = T_{EID}/64$, $\kappa = 2$ and $\hbar = \omega_c = m = 1$.
}
\label{fig:Wigner-Decoherence}
\end{figure}
Following Zurek~\cite{Zurek2003RMP}, we note that in the macroscopic regime (to highest order in $\hbar^{-1}$), the master equation is dominated by the diffusion term proportional to $D/\hbar^2$.
Evaluating it in the position basis, one finds the time scale:
\begin{equation}\label{eq-time}
\tau_{\textrm{dec}} = \frac{\hbar^2}{D(\delta x)^2} = \frac{\hbar Q}{16 k_B T_b \kappa^2},
\end{equation}
where $\delta x = \sqrt{8}\, \kappa x_0$, as before.
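The second equality in eqn.~\eref{eq-time} follows by substituting $D = 2 m \gamma k_B T_b$, $\gamma = \omega_c/Q$ and $\delta x = \sqrt{8}\kappa x_0$; a numerical sketch (with hypothetical parameter values) confirms the algebra and gives a feeling for the scale.

```python
import math

hbar, kB = 1.054571817e-34, 1.380649e-23

# Hypothetical parameters, for illustration only
m = 1e-12                     # kg
omega_c = 2 * math.pi * 1e4   # rad/s
Q = 1e5                       # mechanical quality factor
T_b = 1e-3                    # bath temperature, K
kappa = 1.0

gamma = omega_c / Q                    # damping coefficient
D = 2 * m * gamma * kB * T_b           # diffusion coefficient
x0 = math.sqrt(hbar / (m * omega_c))
dx_sup = math.sqrt(8) * kappa * x0     # superposition separation, delta x

tau_direct = hbar**2 / (D * dx_sup**2)               # first form of eqn. (eq-time)
tau_closed = hbar * Q / (16 * kB * T_b * kappa**2)   # second form

print(tau_direct, tau_closed)  # the two forms agree
```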
A calculation of the Wigner function which includes decoherence of the off-diagonal elements with the above dependence shows how the non-classicality of the state is dissipated with time (\fref{fig:Wigner-Decoherence}).
An exact open quantum system analysis of the experimental setup based on eqn.~\eref{eq-Master} has been performed by Bassi et al.~\cite{Bassi2005PRL} and Bern\'{a}d et al.~\cite{Bernad2006PRL}.
The former authors neglect the term proportional to $p$ in eqn.~\eref{eq-Master} and solve the resulting equation for the off-diagonal matrix elements of the reduced photon density matrix.
The latter authors use the full equation.
The results for the decoherence of the revival peaks in those papers are remarkably close to the above estimate, both predicting a coherence time longer by only a factor of $8/3$.
The order of magnitude is thus well captured by \eref{eq-time}.
For an optomechanical system the important parameter is the mechanical quality factor, $Q$.
It is convenient to define a characteristic environmentally induced decoherence temperature:
\begin{equation}\label{eq-TEID}
T_{EID} = \frac{\hbar \omega_c Q}{k_B}.
\end{equation}
With this definition, the decoherence time \eref{eq-time} can be written as $ \tau_{\textrm{dec}}^{-1} = 16 \kappa^2 \omega_c \left(\frac{T_{b}}{T_{EID}}\right) $.
For bath temperatures above $T_{EID}$ the interference revival peak will be drastically reduced in magnitude.
We note that the environmentally induced decoherence rate is dependent only on the bath temperature, $T_b$, not the effective temperature of the cantilever mode, $T$, which can be made different from the bath temperature by optical cooling (see \sref{sec-optical-cooling}).
Since a high-Q resonator is only weakly coupled to the bath, it is sufficient to treat these two temperatures as independent.
\subsection{Gravitationally Induced Quantum Collapse}
To explain the apparent classicality of the macroscopic world, it has been suggested that there may be a quantum state collapse mechanism for large objects, possibly induced by mass.
Several proposals have been made which lead to such a collapse, among them reformulations of quantum mechanics cite{Ghirardi1986PRD, Weinberg1989Annals} and the use of the intrinsic incompatibility between general relativity and quantum mechanics cite{Karolyhazy1966, Penrose1996, Diosi1989PRA}.
Unlike environmentally induced decoherence, which is largely a nuisance in the realization of a massive superposition experiment, measurement of a mass induced collapse would be evidence of new physics and is hence of considerable interest.
Here we review the gravitational collapse model given by Penrose~\cite{Penrose1996}.
Penrose argues that a superposition of a massive object will result in a co-existence of two different space-time geometries which cannot be matched in a coordinate independent way.
Any difference in the causal structure will then generate different time translation operators $\partial / \partial t $ in the respective space-times.
Only an asymptotic identification would be possible; if a local notion of time is required, the failure to identify a single time structure for the two superposed space-times becomes a fundamental obstacle to unitary quantum evolution. Any time translation operator $\partial / \partial t$ in such a superposition of space-times will have an intrinsic error, and hence unitary evolution cannot take place indefinitely.
This will eventually result in a collapse of the superposed state.
To give an order of magnitude estimate for the identification of the two superposed space-times, Penrose uses the Newtonian limit of gravity including the principle of general covariance.
The error is quantified by the difference of free falls (geodesics) throughout both space-times, which turns out to correspond to the gravitational self energy $\Delta E$ of the superposed system, defined as follows:
\begin{eqnarray}
\label{eq-self-energy}
E_{i, j}&=& - G \int\!\!\! \int\!\! d\vec{r_1} d\vec{r_2} \, \frac{\rho_i(\vec{r_1}) \rho_j(\vec{r_2})}{\left|\vec{r_1} - \vec{r_2} \right|},
\\
\label{eq-self-energy-2}
\Delta E&=& 2 E_{1, 2} - E_{1, 1} - E_{2, 2},
\end{eqnarray}
where $\rho_1$ and $\rho_2$ are the mass distributions for the two states in question.
A similar result was obtained by Diosi~\cite{Diosi1989PRA}.
This energy yields a timescale for the decay of a superposition, estimated by $\tau_G \approx \hbar / \Delta E$.
When attempting to apply this to the proposed superposition experiment, it is unclear precisely what form the mass distributions should take (see also \cite{Diosi2007JPA}).
For simplicity we will consider the mass to be evenly distributed over a number of spheres corresponding to atomic nuclei, each with mass $m_1$ and radius $a$, and the superposition states to be separated by a distance $\Delta x$.
The total mass is given by $m$, as before.
If the atomic spacing is much larger than the effective mass radius, the energy due to the interaction between different atomic sites is negligible and the gravitational self-energy is given by:
\begin{equation}\label{eq-DE-spheres}
\Delta E = 2 G m m_1 \left( \frac{6}{5 a} - \frac{1}{\Delta x} \right) \quad \textrm{(for } \Delta x \geq 2 a \textrm{)}.
\end{equation}
If we set the sphere radius to be the approximate size of a nucleus ($a = 10^{-15}$ m) and use the parameters of an ideal optomechanical device ($m = 10^{-12}$ kg, $\omega_c = 2 \pi \times 1$ kHz, $\kappa = 1/\sqrt{2}$ and $m_1 = 4.7 \times 10^{-26}$ kg, the silicon nuclear mass), this results in a timescale of order milliseconds.
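This estimate is easy to reproduce numerically. The sketch below uses the parameters quoted above and neglects the $1/\Delta x$ term, which is valid for $\Delta x \gg a$:

```python
import math

G    = 6.674e-11          # m^3 kg^-1 s^-2
hbar = 1.054571817e-34    # J s

m  = 1e-12    # total mass (kg), ideal device
m1 = 4.7e-26  # silicon nuclear mass (kg)
a  = 1e-15    # nuclear radius (m)

# For Delta_x >> a the 1/Delta_x term in eqn (eq-DE-spheres) is negligible:
delta_E = 2 * G * m * m1 * 6 / (5 * a)
tau_G = hbar / delta_E
print(f"Delta E ~ {delta_E:.1g} J  ->  tau_G ~ {tau_G:.1g} s")
```

The resulting $\tau_G$ is a few tens of milliseconds, consistent with the millisecond-scale estimate in the text.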
Alternatively, one could argue that the effective diameter of the spheres should be the ground-state wavepacket size ($a = x_0/2$).
With the maximum separation of the states ($\Delta x = \sqrt{8} \kappa x_0$), the resulting energy is:
\begin{equation}\label{eq-DE-cantilever}
\Delta E = \frac{G m m_1}{x_0} \left( \frac{24}{5} - \frac{1}{\sqrt{2} \kappa} \right).
\end{equation}
Using the ideal device parameters results in a timescale on the order of 1 second.
In order to practically measure such a collapse mechanism, we require the timescale to be not much larger than a mechanical period so that a significant visibility reduction is present in the first revival peak.
This means it may be possible to measure a mass-induced collapse effect with the proposed experiment, although we note that the collapse timescale given above is intended only to be a rough estimate.
To contrast with previous large superposition experiments, the collapse timescale for interferometry of large molecules like C$_{60}$~\cite{Arndt1999} is calculated to be $10^{10}$ s (using the nuclear radius, $a = 10^{-15}$ m, and assuming comparatively larger separation).
Other demonstrated experiments have similar or larger timescales, meaning a collapse mechanism of this type would have certainly been undetectable in all experiments to date.
\section{Prospects for Experimental Realization}
\label{section-experiment}
\subsection{Optomechanical Devices}
In practice, the experimental realization of a macroscopic quantum superposition is severely technically demanding.
Perhaps the most challenging aspect is achieving sufficient optical quality, which is required to put the cantilever into a distinguishable state via interaction with a single photon, i.e. $\kappa \gtrsim 1/\sqrt{2}$.
Although $\kappa$ can be increased by shortening the optical cavity, this will also reduce the ring-down time, making it extremely unlikely to observe a photon in the revival period.
A reasonable compromise is reached by requiring the optical finesse, $F$, be equal to the required number of round trips per period as given by \eref{eq-Kappa-2}.
In this case the fraction of photons still in the optical cavity after a mechanical period is $e^{-2 \pi}$ ($\approx 0.2$\%), a small number but enough to measure the visibility on a timescale of hours.
This resulting requirement for the finesse has a rather intuitive form:
\begin{equation}\label{eq-Finesse}
F \gtrsim \frac{\lambda}{2 x_0}.
\end{equation}
In order to prevent diffraction from limiting the finesse, the mirror on the cantilever needs a diameter of order 10 microns or larger \cite{Kleckner2006PRL}.
If the mirror is a dielectric Bragg reflector, the conventional choice for achieving very high optical quality, the required finesse is of order $10^6 - 10^7$ given the minimum resulting mass and assuming it is placed on a cantilever with frequency $\sim$ 1 kHz.
Finesses of over $10^6$ have been realized in several experiments with larger, cm size, dielectric mirrors (for example, \cite{Rempe92}), so the primary challenge in using these mirrors is finding a way to micro-fabricate them without degrading their properties.
The current state of the art is $F = 10^4-10^5$, although rapid progress has been made in recent years due to a growing interest in optomechanical systems in general. See \fref{fig-Devices} for a comparison of different devices.
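As a rough numerical check of the finesse requirement \eref{eq-Finesse}, using the ideal-device numbers quoted in this paper ($m = 10^{-12}$ kg, $\omega_c = 2\pi \times 1$ kHz) and visible light:

```python
import math

hbar = 1.054571817e-34  # J s
m = 1e-12                    # kg, minimum mirror mass
omega_c = 2 * math.pi * 1e3  # rad/s
lam = 600e-9                 # m, visible light

x0 = math.sqrt(hbar / (m * omega_c))  # ground-state wavepacket size
F_req = lam / (2 * x0)                # finesse requirement, eqn (eq-Finesse)
print(f"x0 = {x0:.2g} m, required finesse ~ {F_req:.1g}")
```

The required finesse comes out in the $10^6$--$10^7$ range, in agreement with the estimate above for dielectric Bragg mirrors.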
An interesting alternative to the tiny mirror on the cantilever approach is the so called ``membrane in the middle''.
In this case the optomechanical element is a dielectric membrane placed between two high quality mirrors; the cavity detuning induced by motion of the membrane produces a result functionally equivalent to moving an end mirror on a mechanical resonator.
Commercially available silicon nitride membranes have recently been demonstrated in cavities with finesses of over $10^4$ and with remarkably high mechanical quality factors, $Q > 10^7$~\cite{Zwickl2008}.
In theory, this type of system would require a lower finesse to achieve a superposition, as the thickness of the optical element can be an order of magnitude less than a dielectric mirror.
To take advantage of this, however, would require the membranes be micro-fabricated into cantilever or bridge-resonator structures to reduce their total mass, something that has not yet been attempted.
The other important parameter for an optomechanical system is the mechanical quality factor, $Q$, governing the characteristic environmentally induced decoherence temperature $T_{EID}$, as defined in \eref{eq-TEID}.
Optomechanical devices have already been demonstrated for which $T_{EID}$ is experimentally accessible with common cryogenic techniques (\fref{fig-Devices}), although operating the devices in the sub-Kelvin regime is likely to be difficult.
Resonators used in magnetic force resonance microscopy experiments, which have similar mechanical properties, have been cooled to temperatures of around 100 mK, limited by heating due to optical absorption in the readout~\cite{Mamin2001}.
Although the magnitude of this effect should be smaller for high finesse optomechanical systems due to lower absorption and incident light levels, at temperatures of order 1 mK absorption of even single photons should produce non-negligible heating~\cite{Kleckner2006Nature}.
\begin{figure}[!htp]
\begin{center}
\includegraphics{figure-7.pdf}
\end{center}
\caption{ A comparison of optomechanical devices, showing the finesse and size of the ground state wavepacket, $x_0 = \sqrt{\hbar/m \omega_c}$.
All points apart from (j) are based on experimental results.
The shaded area in the upper right corresponds to $\kappa = 1/\sqrt{2}$ for visible light ($\lambda = 600$ nm).
The color of each point corresponds to the characteristic environmentally induced decoherence temperature, $T_{EID} = \hbar \omega_c Q / k_B$.
Many of the devices are the subject of ongoing research, and so the listed parameters should be regarded as approximate. \\
\begin{tabular}{r p{4.5in}}
\textbf{(a)}&A dielectric Bragg reflector (DBR) with $F = 2 \times 10^6$ deposited on a cm size mirror. \\
\textbf{(b)}&Metal deposited on a conventional atomic force microscopy (AFM) cantilever (for example, \cite{Metzger2004}).\\
\textbf{(c)}&A thin silicon cantilever used in magnetic force resonance microscopy (MFRM)~\cite{Mamin2001}.\\
\textbf{(d)}&A Focused Ion Beam milled DBR mirror glued to a commercial AFM cantilever~\cite{Kleckner2006PRL}.\\
\textbf{(e)}&Microtoroidal resonator~\cite{Schliesser2007}. ($\kappa$ is not given by \eref{eq-Kappa-2} because of a different geometry.)\\
\textbf{(f)}&Resonator made of a suspended DBR bridge~\cite{Gigan2006Nature}.\\
\textbf{(g)}&DBR deposited on a silicon bridge resonator~\cite{Arcizet2006Nature}.\\
\textbf{(h)}&A 2 $\mu$m silicon resonator with gold deposited on it~\cite{Favero2007APL}.\\
\textbf{(i)}&Commercial Si$_3$N$_4$ membrane in a high finesse optical cavity~\cite{Zwickl2008}.\\
\textbf{(j)}&Theoretical device with a tiny, high finesse DBR mirror attached to a cantilever similar to those used in MFRM experiments ($m = 10^{-12}$ kg, $\omega_c = 2 \pi \times 500$ Hz, $F = 2 \times 10^6$)\\
\end{tabular}
}
\label{fig-Devices}
\end{figure}
\subsection{Optical Cooling}
\label{sec-optical-cooling}
As stated above, unambiguous observation of a macroscopic quantum superposition is possible only when the cantilever's fundamental mode is in a low phonon quantum number state.
Given that this requires temperatures of less than 1 $\mu$K for kHz resonators, the only way to practically obtain this is optical feedback cooling.
There are two primary forms of optical feedback cooling, referred to as ``active'' and ``passive''.
Active feedback cooling uses the optical cavity to read out the position of the cantilever, and then an electronic feedback loop creates a force on the cantilever (using, e.g., a second intensity modulated laser) to dampen the motion of its fundamental mode.
Because the effective damping force is not subject to thermal fluctuations, this is equivalent to coupling the system to a zero temperature thermal bath, and so the effective temperature of the fundamental mode can be dramatically reduced.
Passive feedback cooling uses the finite ring-down time of the optical cavity to intrinsically produce a similar damping force without the use of an external feedback loop.
Note that neither type of cooling significantly reduces the temperature of the environmental bath, so the environmentally induced decoherence timescale is virtually unaffected by optical cooling.
Both active~\cite{Kleckner2006Nature, Cohadon1999prl, Poggio2007} and passive~\cite{Thompson2008, Schliesser2007, Gigan2006Nature, Arcizet2006Nature, Favero2007APL, Schliesser2006, Corbitt2007prl, Groblacher2007, Corbitt2007arxiv} feedback cooling have been experimentally demonstrated by many groups, in some cases achieving cooling factors of well over 10$^3$.
\begin{figure}
\begin{center}
\includegraphics{figure-8.pdf}
\end{center}
\caption{
Left: Mean phonon number, $\bar{n}$, as a function of power for passive optical feedback cooling. Right: Anti-Stokes/Stokes ratio.
The theoretical model is derived from Marquardt et al.~\cite{Marquardt2007PRL}.
The input optical field strength is given in terms of a dimensionless power, $\alpha = \sqrt{2 \bar{n}_a} \kappa$, where $\bar{n}_a$ is the mean number of photons in the optical cavity.
$\gamma_a$ is the power decay constant for the optical cavity.
Pump photons are detuned from the cavity resonance by $\Delta = -\omega_c$.
When $\bar{n} = 1$, the Anti-Stokes/Stokes ratio decreases to half its low field ($\alpha \to 0$) limit, shown with circles in the right panel.
The ratio, which can be measured in the light leaving the cavity, provides a direct method to determine the effective temperature of the cantilever.
}
\label{fig-Cooling}
\end{figure}
If one operates below the environmentally induced decoherence temperature given above, it is theoretically possible to cool the fundamental mode of the cantilever to near the ground state using either active~\cite{Courty2001} or passive optical feedback cooling~\cite{WilsonRae2007, Marquardt2007PRL, Bhattacharya2007PRL}, although this has yet to be demonstrated experimentally.
Although heating due to optical absorption and the finite linewidth of the drive laser are serious concerns~\cite{Diosi2000PRA}, these do not present fundamental obstacles.
In the limit that the ring-down time is comparable to the mechanical period, as indeed it must be for observing a macroscopic superposition, passive cooling should be more effective.
The equilibrium phonon occupation number of the cantilever as a function of pumping power is shown in \fref{fig-Cooling}; the situation where $N = F$, as discussed above, corresponds to $\omega_c/\gamma_a = 1$.
Conveniently, passive cooling also provides a method to directly measure the phonon number of the cantilever by measuring the ratio of anti-Stokes to Stokes shifted photons in the outgoing cavity field (see also \fref{fig-Cooling})~\cite{WilsonRae2007, Marquardt2007PRL}.
In the limit of low pumping power and minimal cooling, this ratio remains constant, but begins to rapidly decrease when the ground state is approached.
When the ratio is less than half the low power value, the mean phonon number, $\bar{n}$, is less than one, providing a clear indication of ground state cooling.
Because this type of cooling can be easily integrated with the proposed macroscopic superposition experiment, it presents an ideal method for putting the system in a known low phonon number state.
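The halving criterion above can be sketched with the standard weak-coupling sideband-asymmetry formula, in which the anti-Stokes/Stokes ratio is $\bar{n}/(\bar{n}+1)$. (This functional form is an assumption of the sketch, not derived in the text; it is consistent with the behavior described above: the ratio tends to $1$ for a hot mode and equals $1/2$ at $\bar{n}=1$.)

```python
def sideband_ratio(nbar):
    """Anti-Stokes/Stokes ratio in the weak-coupling limit: nbar / (nbar + 1).
    (Standard sideband-thermometry form; an assumed model, not taken from the text.)"""
    return nbar / (nbar + 1.0)

# Hot mode (low pump power, little cooling): ratio near 1.
# At nbar = 1 the ratio is exactly half that limit.
print(sideband_ratio(100.0), sideband_ratio(1.0))
```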
\section{Conclusion}
A detailed analysis of the effects of finite temperature on the proposed massive superposition experiment shows that a fully unambiguous demonstration requires low fundamental mode temperatures, $\bar{n} \lesssim 1$.
Despite this, observation of a revival of the interference visibility can be used to strongly imply the existence of a superposition at higher temperatures, as proposed in \cite{Marshall2003PRL}.
Additionally, the magnitude of the visibility revival provides an opportunity to test environmentally induced decoherence models and possibly measure proposed mass-induced collapse mechanisms.
Although such an experiment is difficult to realize, comparison to several related experiments suggests it should be technologically feasible.
This is greatly aided by growing interest in developing high quality micro-optomechanical devices for a range of applications.
Additionally, recently developed optical feedback cooling techniques can be used to obtain fundamental mode temperatures far lower than are conventionally accessible, possibly even cooling to the ground state.
\section{Acknowledgments}
The authors would like to thank C. Simon and L. Diosi for useful discussions.
I. P. thanks J. Bosse for support.
This work was supported in part by the National Science Foundation (grants PHY-0504825 and PHY05-51164), Marie-Curie EXT-CT-2006-042580 and the Stichting voor Fundamenteel Onderzoek der Materie (FOM).
\bibliography{refs2}
\end{document}
\begin{document}
\title[]{Permutoric Promotion: \\ Gliding Globs, Sliding Stones, and Colliding Coins}
\subjclass[2010]{}
\author[]{Colin Defant}
\address[]{Department of Mathematics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA}
\email{[email protected]}
\author[]{Rachana Madhukara}
\address[]{Department of Mathematics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA}
\email{[email protected]}
\author[]{Hugh Thomas}
\address[]{Lacim, UQAM, Montréal, QC, H3C 3P8 Canada}
\email{thomas.hugh\[email protected]}
\maketitle
\begin{abstract}
The first author recently introduced \emph{toric promotion}, an operator that acts on the labelings of a graph $G$ and serves as a cyclic analogue of Sch\"utzenberger's promotion operator. Toric promotion is defined as the composition of certain \emph{toggle} operators, listed in a natural cyclic order. We consider more general \emph{permutoric promotion} operators, which are defined as compositions of the same toggles, but in permuted orders. We settle a conjecture of the first author by determining the orders of all permutoric promotion operators when $G$ is a path graph. In fact, we completely characterize the orbit structures of these operators, showing that they satisfy the cyclic sieving phenomenon. The first half of our proof requires us to introduce and analyze new \emph{broken promotion} operators, which can be interpreted via globs of liquid gliding on a path graph. For the latter half of our proof, we reformulate the dynamics of permutoric promotion via stones sliding along a cycle graph and coins colliding with each other on a path graph.
\end{abstract}
\section{Introduction}\label{SecIntro}
In his study of the Robinson--Schensted--Knuth correspondence, Sch\"utzenberger \cite{Schutzenberger1, Schutzenberger2, Schutzenberger3} introduced a beautiful bijective operator called \emph{promotion}, which acts on the set of linear extensions of a finite poset. Haiman \cite{Haiman} and Malvenuto--Reutenauer \cite{Malvenuto} found that promotion could be defined as a composition of local \emph{toggle operators} (also called \emph{Bender--Knuth involutions}). There are now several articles connecting promotion to other areas \cite{Ayyer, Edelman, HopkinsRubey, Huang, Petersen, Poznanovic, Rhoades, StanleyPromotion, StrikerWilliams} and generalizing promotion in different directions \cite{Ayyer, Bernstein, DefantPromotionSorting, Dilks, Dilks2, StanleyPromotion}. Promotion is now one of the most extensively studied operators in the field of dynamical algebraic combinatorics.
Following the approach first considered by Malvenuto and Reutenauer \cite{Malvenuto}, we define promotion on labelings of graphs instead of linear extensions of posets. All graphs in this article are assumed to be simple. Let $G=(V,E)$ be a graph with $n$ vertices. A \dfn{labeling} of $G$ is a bijection $V\to \mathbb Z/n\mathbb Z$. We denote the set of labelings of $G$ by $\Lambda_G$. Given distinct $a,b\in\mathbb Z/n\mathbb Z$, let $(a\,\,b)$ be the transposition that swaps $a$ and $b$. For $i\in\mathbb{Z}/n\mathbb{Z}$, the \dfn{toggle} operator $\tau_i\colon \Lambda_G\to\Lambda_G$ is defined by
\[
\tau_i(\sigma)=\begin{cases} (i\,\,i+1)\circ \sigma & \mbox{if } \{\sigma^{-1}(i),\sigma^{-1}(i+1)\}\not\in E; \\ \sigma & \mbox{if } \{\sigma^{-1}(i),\sigma^{-1}(i+1)\}\in E. \end{cases}
\]
In other words, $\tau_i$ swaps the labels $i$ and $i+1$ if those labels are assigned to nonadjacent vertices of $G$, and it does nothing otherwise. Define \dfn{promotion} to be the operator $\Pro\colon\Lambda_G\to\Lambda_G$ given by \[\Pro=\tau_{n-1}\cdots\tau_2\tau_1.\] Here and in the sequel, concatenation of operators represents composition.
A recent trend in algebraic combinatorics aims to find cyclic analogues of more traditional ``linear'' objects (see \cite{AdinCyclic,Develin} and the references therein). The first author recently defined a cyclic analogue of promotion called \dfn{toric promotion} \cite{DefantToric}; this is the operator $\TPro\colon\Lambda_G\to\Lambda_G$ given by \[\TPro=\tau_{n}\tau_{n-1}\cdots\tau_2\tau_1=\tau_n\Pro.\]
The first author proved the following theorem, which reveals that toric promotion has remarkably nice dynamical properties when $G$ is a forest.
\begin{theorem}[\cite{DefantToric}]\label{thm:toric_main}
Let $G$ be a forest with $n\geq 2$ vertices, and let $\sigma\in\Lambda_G$ be a labeling. The orbit of toric promotion containing $\sigma$ has size \[(n-1)\frac{t}{\gcd(t,n)},\] where $t$ is the number of vertices in the connected component of $G$ containing $\sigma^{-1}(1)$. In particular, if $G$ is a tree, then every orbit of $\TPro\colon\Lambda_G\to\Lambda_G$ has size $n-1$.
\end{theorem}
\Cref{thm:toric_main} stands in stark contrast to the wild dynamics of promotion on most forests. For example, even when $G$ is a path graph with $7$ vertices, the order of $\Pro\colon\Lambda_G\to\Lambda_G$ is $3224590642072800$, whereas all orbits of $\TPro\colon\Lambda_G\to\Lambda_G$ have size $6$.
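These small-path dynamics are easy to check by brute force. The following sketch (with labels written $1,\ldots,n$, where $n$ plays the role of $0$ in $\mathbb{Z}/n\mathbb{Z}$) verifies that every orbit of toric promotion on $\mathsf{Path}_4$ has size $3$, as \Cref{thm:toric_main} predicts:

```python
from itertools import permutations

N = 4  # brute-force check on Path_4: every orbit should have size N - 1

def tau(i, sigma):
    """Toggle tau_i on a labeling of Path_N.  Labels are 1..N, with N playing
    the role of 0 in Z/NZ; sigma[v] is the label of vertex v (v = 0..N-1)."""
    j = i % N + 1                        # the label i + 1 in Z/NZ
    u, v = sigma.index(i), sigma.index(j)
    if abs(u - v) == 1:                  # labels on adjacent vertices: do nothing
        return sigma
    s = list(sigma)
    s[u], s[v] = j, i
    return tuple(s)

def toric_promotion(sigma):
    # TPro = tau_N ... tau_2 tau_1; the rightmost factor acts first
    for i in range(1, N + 1):
        sigma = tau(i, sigma)
    return sigma

def orbit_sizes():
    seen, sizes = set(), []
    for sigma in permutations(range(1, N + 1)):
        if sigma in seen:
            continue
        orbit, x = {sigma}, toric_promotion(sigma)
        while x != sigma:
            orbit.add(x)
            x = toric_promotion(x)
        seen |= orbit
        sizes.append(len(orbit))
    return sizes

print(orbit_sizes())  # eight orbits, each of size 3
```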
In \cite{DefantToric}, the first author (taking a suggestion from Tom Roby) proposed studying a generalization of toric promotion in which the toggle operators $\tau_1,\ldots,\tau_n$ can be composed in any order. In what follows, we let $[n]=\{1,\ldots,n\}$.
\begin{definition}
Let $G$ be a graph with $n$ vertices, and let $\pi\colon[n]\to \mathbb Z/n\mathbb Z$ be a bijection. The \dfn{permutoric promotion} operator $\TPro_\pi\colon\Lambda_G\to\Lambda_G$ is defined by \[\TPro_\pi=\tau_{\pi(n)}\cdots\tau_{\pi(2)}\tau_{\pi(1)}.\]
\end{definition}
One would ideally hope to have an extension of \Cref{thm:toric_main} to arbitrary permutoric promotion operators. Unfortunately, trying to completely describe the orbit structure of $\TPro_\pi\colon\Lambda_G\to\Lambda_G$ for arbitrary forests $G$ and arbitrary permutations $\pi$ seems to be very difficult. However, it turns out that we \emph{can} do this when $G$ is a path. To state our main result, we need a bit more terminology.
Let $[k]_q=\frac{1-q^k}{1-q}=1+q+\cdots+q^{k-1}$ and $[k]_q!=[k]_q[k-1]_q\cdots[1]_q$. The \dfn{$q$-binomial coefficient} ${k\brack r}_q$ is the polynomial $\dfrac{[k]_q!}{[r]_q![k-r]_q!}\in\mathbb C[q]$.
Let $X$ be a finite set, and let $f\colon X\to X$ be an invertible map of order $\omega$ (i.e., $\omega$ is the smallest positive integer such that $f^\omega(x)=x$ for all $x\in X$). Let $F(q)\in\mathbb C[q]$ be a polynomial in the variable $q$. Following \cite{CSPDefinition}, we say the triple $(X,f,F(q))$ \dfn{exhibits the cyclic sieving phenomenon} if for every integer $k$, the number of elements of $X$ fixed by $f^k$ is $F(e^{2\pi ik/\omega})$.
Although we view the set $\mathbb Z/n\mathbb Z$ as a ``cyclic'' object, it will often be convenient to identify
$\mathbb Z/n\mathbb Z$ with the ``linear'' set $[n]$ and consider the total ordering of its elements given by $1<2<\cdots<n$. If $\pi\colon[n]\to\mathbb{Z}/n\mathbb{Z}$ is a bijection, then a \dfn{cyclic descent} of $\pi^{-1}$ is an element $i\in\mathbb Z/n\mathbb Z$ such that $\pi^{-1}(i)>\pi^{-1}(i+1)$ (note that $n$ is permitted to be a cyclic descent).
Let $\mathsf{Path}_n$ denote the path graph with $n$ vertices. In \cite{DefantToric}, the first author conjectured (using different language) that for every bijection
$\pi\colon[n]\to\mathbb Z/n\mathbb Z$, the order of $\TPro_\pi\colon\Lambda_{\mathsf{Path}_n}\to\Lambda_{\mathsf{Path}_n}$ is $d(n-d)$, where $d$ is the number of cyclic descents of $\pi^{-1}$. Our main theorem not only proves this conjecture, but also determines the entire orbit structure of permutoric promotion in this case.
\begin{theorem}\label{thm:main}
Let $\pi\colon[n]\to\mathbb{Z}/n\mathbb{Z}$ be a bijection, and let $d$ be the number of cyclic descents of $\pi^{-1}$. The order of the permutoric promotion operator $\TPro_\pi\colon\Lambda_{\mathsf{Path}_n}\to\Lambda_{\mathsf{Path}_n}$ is $d(n-d)$. Moreover, the triple \[\left(\Lambda_{\mathsf{Path}_n},\TPro_\pi,n(d-1)!(n-d-1)![n-d]_{q^d}{n-1\brack d-1}_{q}\right)\] exhibits the cyclic sieving phenomenon.
\end{theorem}
Note that when $d=1$, the sieving polynomial in \Cref{thm:main} is $n(n-2)![n-1]_q$, which agrees with \Cref{thm:toric_main}.
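The order statement can be confirmed by brute force in small cases. The sketch below takes $n=4$ and the bijection $\pi$ with $\pi(1)=1$, $\pi(2)=3$, $\pi(3)=2$, $\pi(4)=4$; then $\pi^{-1}$ has cyclic descents at $2$ and $4$, so $d=2$ and the predicted order is $d(n-d)=4$:

```python
from itertools import permutations
from math import lcm

N = 4
PI = (1, 3, 2, 4)  # apply tau_1 first, then tau_3, tau_2, tau_4 (d = 2 cyclic descents)

def tau(i, sigma):
    # Toggle tau_i on a labeling of Path_N; labels 1..N stand for Z/NZ.
    j = i % N + 1
    u, v = sigma.index(i), sigma.index(j)
    if abs(u - v) == 1:
        return sigma
    s = list(sigma)
    s[u], s[v] = j, i
    return tuple(s)

def tpro(sigma):
    for i in PI:
        sigma = tau(i, sigma)
    return sigma

def operator_order():
    # The order of the operator is the lcm of its orbit sizes.
    order = 1
    for start in permutations(range(1, N + 1)):
        k, x = 1, tpro(start)
        while x != start:
            k, x = k + 1, tpro(x)
        order = lcm(order, k)
    return order

print(operator_order())  # the theorem predicts d(n - d) = 4
```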
Suppose $B$ is a proper subset of $\mathbb Z/n\mathbb Z$. In order to understand permutoric promotion and prove \Cref{thm:main}, we define the \dfn{broken promotion} operator $\Bro_B\colon\Lambda_G\to\Lambda_G$ as follows. Let $B_1,\ldots,B_k$ be the vertex sets of the connected components of the subgraph of $\mathsf{Cycle}_n$ induced by $B$. For $1\leq i\leq k$, let us write $B_i=\{a(i),a(i)+1,\ldots,b(i)\}$, and let $\Bro_{B_i}=\tau_{b(i)}\cdots\tau_{a(i)+1}\tau_{a(i)}$. We then define $\Bro_B=\Bro_{B_1}\cdots\Bro_{B_k}$ (the order does not matter since $\Bro_{B_1},\ldots,\Bro_{B_k}$ commute with each other).
Let $\cyc\colon\Lambda_G\to\Lambda_G$ be the \dfn{cyclic shift} operator defined by $(\cyc(\sigma))(v)=\sigma(v)+1$. In \Cref{sec:broken}, we give a description of the operator $\cyc\Bro_B$ in terms of ``gliding globs'' of liquid. Roughly speaking, some of the labels are immersed in globs of liquid; these globs (and their labels) glide along paths in $G$ in a \emph{jeu de taquin} fashion, and then some of the labels are changed appropriately. We also show that certain indicator functions are \emph{homomesic} for $\cyc\Bro_B$ (see \Cref{prop:homomesy}). In \Cref{sec:Broken_Path}, we specialize to the case when $G=\mathsf{Path}_n$ and establish useful connections between broken promotion and permutoric promotion. The purpose of \Cref{sec:divisibility} is to prove that all of the sizes of the orbits of $\TPro_\pi$ are divisible by $\lcm(d,n-d)$ (where $G=\mathsf{Path}_n$ and $d$ is the number of cyclic descents of $\pi^{-1}$).
In \Cref{sec:Orbit_Structure}, we use this divisibility result to reformulate the analysis of permutoric promotion in terms of ``sliding stones'' and ``colliding coins.'' Roughly speaking, we place some stones on the cycle graph and allow them to slide around as we apply toggle operators. At the same time, we place coins on the path graph and allow them to move around and collide with one another. It turns out that the dynamical properties of permutoric promotion are closely related to those of the \emph{stones diagrams} and \emph{coins diagrams}; this allows us to complete the proof of \Cref{thm:main}. Let $\mathsf{Comp}_d(n)$ be the set of compositions of $n$ into $d$ parts, and define $\Rot_{n,d}\colon\mathsf{Comp}_d(n)\to\mathsf{Comp}_d(n)$ by $\Rot_{n,d}(a_1,a_2,\ldots,a_d)=(a_2,\ldots,a_d,a_1)$. We show how to associate an orbit of $\Rot_{n,d}$ to the dynamics of the coins diagrams by recording how far each coin must travel when passing from one collision to the next. It will turn out that the form of the sieving polynomial in \Cref{thm:main} arises from the fact that the triple $\left(\mathsf{Comp}_d(n),\Rot_{n,d},{n-1 \brack d-1}_q\right)$ exhibits the cyclic sieving phenomenon.
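This last cyclic sieving instance can itself be verified by brute force for small parameters. The sketch below expands the $q$-binomial via the $q$-Pascal recurrence ${k\brack r}_q={k-1\brack r-1}_q+q^r{k-1\brack r}_q$ and compares its values at roots of unity with fixed-point counts (assuming $1<d<n$, so that $\Rot_{n,d}$ has order exactly $d$):

```python
import cmath

def compositions(n, d):
    """All compositions of n into d positive parts."""
    if d == 1:
        return [(n,)]
    return [(a,) + rest
            for a in range(1, n - d + 2)
            for rest in compositions(n - a, d - 1)]

def q_binom(k, r):
    """Coefficient list of the Gaussian binomial [k choose r]_q,
    built from the q-Pascal recurrence."""
    if r == 0 or r == k:
        return [1]
    a, b = q_binom(k - 1, r - 1), q_binom(k - 1, r)
    out = [0] * (r * (k - r) + 1)
    for i, c in enumerate(a):
        out[i] += c
    for i, c in enumerate(b):
        out[i + r] += c          # the q^r [k-1 choose r]_q term
    return out

def check_csp(n, d):
    """Check the cyclic sieving phenomenon for (Comp_d(n), Rot, [n-1 choose d-1]_q).
    Assumes 1 < d < n, so rotation has order exactly d."""
    X = compositions(n, d)
    F = q_binom(n - 1, d - 1)
    for k in range(d):
        fixed = sum(1 for c in X if c == c[k:] + c[:k])  # fixed by Rot^k
        q = cmath.exp(2j * cmath.pi * k / d)
        value = sum(coef * q**exp for exp, coef in enumerate(F))
        assert abs(value - fixed) < 1e-9, (n, d, k)
    return True

print(check_csp(4, 2), check_csp(6, 3), check_csp(7, 3))
```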
In \Cref{sec:orbit_broken}, we apply \Cref{thm:main} to derive the following theorems.
\begin{theorem}\label{thm:broken_main2}
Let $d$ and $n$ be integers such that $1\leq d\leq n-1$. The order of the operator $\cyc\Bro_{\{1,\ldots,d\}}\colon\Lambda_{\mathsf{Path}_n}\to\Lambda_{\mathsf{Path}_n}$ is $(n-d)n$. Moreover, the triple \[\left(\Lambda_{\mathsf{Path}_n},\cyc\Bro_{\{1,\ldots,d\}},(d-1)!(n-d-1)![n]_{q^{n-d}}[n-d]_{q^d}{n-1\brack d-1}_{q}\right)\] exhibits the cyclic sieving phenomenon.
\end{theorem}
For any real number $x$, let $[[x]]$ denote the integer closest to $x$, with the convention that $[[x]]=x-1/2$ if $x-1/2\in \mathbb{Z}$.
\begin{theorem}\label{thm:broken_main}
Let $d$ and $n$ be positive integers such that $1\leq d\leq\left\lfloor n/2\right\rfloor$. For $i\in\mathbb{Z}$, let $s_i=[[in/d]]$, and let $\mathscr R=(\mathbb{Z}/n\mathbb{Z})\setminus\{s_1-1,\ldots,s_d-1\}$. The order of the operator $\cyc\Bro_{\mathscr R}\colon\Lambda_{\mathsf{Path}_n}\to\Lambda_{\mathsf{Path}_n}$ is $dn$. Moreover, the triple \[\left(\Lambda_{\mathsf{Path}_n},\cyc\Bro_{\mathscr R},(d-1)!(n-d-1)![n]_{q^{d}}[n-d]_{q^d}{n-1\brack d-1}_{q}\right)\] exhibits the cyclic sieving phenomenon.
\end{theorem}
While noteworthy on its own, the homomesy result from \Cref{sec:broken} also ends up being useful for proving \Cref{thm:broken_main2,thm:broken_main}.
\section{Basics}\label{sec:basics}
Let $n$ be a positive integer. Given integers $x\leq y$, we let $[x,y]_n$ denote the multiset with elements in $\mathbb{Z}/n\mathbb{Z}$ obtained by reducing the set $\{x,x+1,\ldots,y\}$ modulo $n$. For example, $[3,7]_3$ is the multiset $\{0,0,1,1,2\}$, where the elements are in $\mathbb{Z}/3\mathbb{Z}$.
Given a finite set $X$ and an invertible map $f\colon X\to X$, we let $\mathrm{Orb}_f$ denote the set of orbits of $f$. We will need the following technical lemma concerning the cyclic sieving phenomenon.
\begin{lemma}\label{lem:CSP_technical}
Let $f\colon X\to X$ and $g\colon \widetilde X\to \widetilde X$ be invertible maps, where $X$ and $\widetilde X$ are finite sets. Let $\{k_i^{m_i}:1\leq i\leq \ell\}$ be the multiset of orbit sizes of $f$, where we use superscripts to denote multiplicities. Let $\omega$ be the order of $f$, and let $F(q)\in\mathbb C[q]$ be such that the triple $(X,f,F(q))$ exhibits the cyclic sieving phenomenon. If $N\in\mathbb Z_{>0}$ and $\chi\in\mathbb Q_{>0}$ are such that $\{(Nk_i)^{\chi m_i}:1\leq i\leq \ell\}$ is the multiset of orbit sizes of $g$, then $g$ has order $N\omega$, and the triple $(\widetilde X,g,\chi[N]_{q^\omega}F(q))$ exhibits the cyclic sieving phenomenon.
\end{lemma}
\begin{proof}
It is clear that $g$ has order $N\omega$. Fix an integer $k$. When we evaluate the polynomial $\chi[N]_{q^\omega}F(q)$ at $q=e^{2\pi ik/(N\omega)}$, we obtain $\chi[N]_{e^{2\pi ik/N}}F(e^{2\pi i(k/N)/\omega})$; we want to show that this is the number of elements of $\widetilde X$ that are fixed by $g^k$. If $k$ is not divisible by $N$, then there are no such elements because all orbits of $g$ have sizes divisible by $N$; in this case, we are done because the factor $[N]_{e^{2\pi ik/N}}$ is $0$. Now suppose $k$ is divisible by $N$. Because $(X,f,F(q))$ exhibits the cyclic sieving phenomenon, $F(e^{2\pi i(k/N)/\omega})$ is the number of elements of $X$ fixed by $f^{k/N}$. Therefore, $\chi NF(e^{2\pi i(k/N)/\omega})$ is the number of elements of $\widetilde X$ fixed by $g^k$. This completes the proof because $\chi[N]_{e^{2\pi ik/N}}F(e^{2\pi i(k/N)/\omega})=\chi NF(e^{2\pi i(k/N)/\omega})$.
\end{proof}
We write $\mathsf{Path}_n$ and $\mathsf{Cycle}_n$ for the path with $n$ vertices and the cycle with $n$ vertices, respectively. Identify the vertices of $\mathsf{Cycle}_n$ with $\mathbb Z/n\mathbb Z$ in such a way that they appear in the cyclic order $1,2,\ldots,n$ when read clockwise around the cycle. Let $v_1,\ldots,v_n$ be the vertices of $\mathsf{Path}_n$, listed from left to right (we draw the path horizontally in the plane). As in the introduction, let us fix a graph $G$ and consider the toggle operators $\tau_i$ and the permutoric promotion operators $\TPro_\pi$ on $\Lambda_G$.
If $\mathcal D$ is an acyclic directed graph with vertex set $\mathcal V$, then we can define a partial order $\leq_{\mathcal D}$ on $\mathcal V$ by declaring $v\leq_{\mathcal D} v'$ whenever there is a directed path in $\mathcal D$ from $v$ to $v'$; the resulting poset $(\mathcal V,\leq_{\mathcal D})$ is called the \dfn{transitive closure} of $\mathcal D$.
A \dfn{linear extension} of an $n$-element poset $(P,\leq_P)$ is a word $p_1\cdots p_n$ whose letters are the elements of $P$ (with each element appearing exactly once) such that $i\leq j$ whenever $p_i\leq_P p_j$. Given a bijection $\pi\colon[n]\to \mathbb Z/n\mathbb Z$, we obtain an acyclic orientation $\alpha_\pi$ of $\mathsf{Cycle}_n$ by orienting each edge $\{i,i+1\}$ from $i$ to $i+1$ if and only if $\pi^{-1}(i)<\pi^{-1}(i+1)$. If $\beta$ is any acyclic orientation of $\mathsf{Cycle}_n$, then the linear extensions of its transitive closure $(\mathbb Z/n\mathbb Z,\leq_\beta)$ are precisely the words $\pi(1)\cdots\pi(n)$ such that $\pi\colon[n]\to \mathbb{Z}/n\mathbb{Z}$ is a bijection satisfying $\alpha_\pi=\beta$.
It is well known that any linear extension of a finite poset can be obtained from any other linear extension of the same poset by repeatedly swapping consecutive incomparable elements. If $i,j\in \mathbb Z/n\mathbb Z$ are incomparable in $(\mathbb Z/n\mathbb Z,\leq_\beta)$, then they are not adjacent in $\mathsf{Cycle}_n$, so the toggle operators $\tau_i$ and $\tau_j$ commute. This implies that if $\pi,\pi'\colon[n]\to\mathbb Z/n\mathbb Z$ are such that $\alpha_\pi=\alpha_{\pi'}$, then the expression for $\TPro_{\pi'}$ as a composition of toggle operators can be obtained from the expression for $\TPro_\pi$ by repeatedly swapping consecutive toggle operators that commute with each other, so $\TPro_\pi=\TPro_{\pi'}$. Therefore, given an acyclic orientation $\beta$ of $\mathsf{Cycle}_n$, it makes sense to write $\TPro_\beta$ for the permutoric promotion operator $\TPro_\pi$, where $\pi\colon[n]\to\mathbb Z/n\mathbb Z$ is any bijection such that $\alpha_\pi=\beta$.
A \dfn{source} (respectively, \dfn{sink}) of an acyclic orientation is a vertex of in-degree (respectively, out-degree) $0$. If $u$ is a source (respectively, sink), then we can \dfn{flip} $u$ into a sink (respectively, source) by reversing the orientations of all edges incident to $u$. Two acyclic orientations are \dfn{flip equivalent} if one can be obtained from the other by a sequence of flips.
Let us say two maps $f,g\colon\Lambda_G\to\Lambda_G$ are \dfn{dynamically equivalent} if there is a bijection $\phi\colon\Lambda_G\to\Lambda_G$ such that $f\circ\phi=\phi\circ g$. Note that dynamically equivalent invertible maps have the same orbit structure (that is, they have the same number of orbits of each size).
\begin{lemma}\label{lem:flip_equivalent}
If $\beta$ and $\beta'$ are acyclic orientations of $\mathsf{Cycle}_n$ that have the same number of edges oriented counterclockwise, then $\TPro_\beta$ and $\TPro_{\beta'}$ are dynamically equivalent.
\end{lemma}
\begin{proof}
It is known (see \cite{Develin}) that two acyclic orientations of $\mathsf{Cycle}_n$ have the same number of edges oriented counterclockwise if and only if they are flip equivalent. Therefore, we just need to show that if $\beta$ and $\beta'$ are flip equivalent, then $\TPro_\beta$ and $\TPro_{\beta'}$ are dynamically equivalent. It suffices to prove this in the case when $\beta'$ is obtained from $\beta$ by flipping a source $i$ into a sink. In this case, one can check that $\TPro_\beta\circ\tau_i=\tau_i\circ\TPro_{\beta'}$.
\end{proof}
\begin{lemma}\label{lem:counterclockwise_edges}
Let $\beta$ and $\beta'$ be acyclic orientations of $\mathsf{Cycle}_n$. Let $d$ and $d'$ be the number of edges oriented counterclockwise in $\beta$ and $\beta'$, respectively. If $d=d'$ or $d=n-d'$, then $\TPro_\beta$ and $\TPro_{\beta'}$ are dynamically equivalent.
\end{lemma}
\begin{proof}
If $d=d'$, then we are done by \Cref{lem:flip_equivalent}. Now suppose $d=n-d'$. Define $\phi\colon\Lambda_G\to\Lambda_G$ by $(\phi(\sigma))(v)=n+1-\sigma(v)$. One can readily check that
\begin{equation}\label{eq:confusion_phi}
\phi\circ\tau_i=\tau_{n-i}\circ\phi
\end{equation}
for all $i\in\mathbb{Z}/n\mathbb{Z}$. Let $\pi\colon[n]\to\mathbb{Z}/n\mathbb{Z}$ be a bijection such that $\alpha_\pi=\beta$, and define $\pi'\colon[n]\to\mathbb{Z}/n\mathbb{Z}$ by $\pi'(i)=n-\pi(i)$. We have $\TPro_\pi=\TPro_\beta$. It follows from \eqref{eq:confusion_phi} that $\phi\circ\TPro_{\pi}=\TPro_{\pi'}\circ\phi$. This shows that $\TPro_\pi$ and $\TPro_{\pi'}$ are dynamically equivalent. On the other hand, the number of edges oriented counterclockwise in $\alpha_{\pi'}$ is $n-d$, so it follows from \Cref{lem:flip_equivalent} that $\TPro_{\pi'}$ is dynamically equivalent to $\TPro_{\beta'}$.
\end{proof}
If $\pi\colon[n]\to\mathbb{Z}/n\mathbb{Z}$ is a bijection, then the number of cyclic descents of $\pi^{-1}$ is the same as the number of edges oriented counterclockwise in $\alpha_\pi$. This is why cyclic descents appear in \Cref{thm:main}.
We end this section with a lemma that will allow us to rewrite operators formed as compositions of toggles. We will consider words over the alphabet $\{\tau_1,\ldots,\tau_n\}$ both as words and as permutations of $\Lambda_{G}$. Given such a word $X$, let $X\!\langle i\rangle$ denote the number of occurrences of $\tau_i$ in $X$.
\begin{lemma}\label{lem:suffix}
Let $\beta$ be an acyclic orientation of $\mathsf{Cycle}_n$. Let $Y$ be a word over the alphabet $\{\tau_1,\ldots,\tau_n\}$ in which each letter appears exactly $k$ times. Suppose that for every suffix $X$ of $Y$ and every arrow $a\to b$ in $\beta$, we have $X\!\langle a\rangle-X\!\langle b\rangle\in\{0,1\}$. When viewed as a bijection from $\Lambda_G$ to itself, $Y$ is equal to $\TPro_\beta^k$.
\end{lemma}
\begin{proof}
For each $i\in\mathbb{Z}/n\mathbb{Z}$, consider $k$ formal symbols ${\boldsymbol \tau}_i^{(1)},\ldots,{\boldsymbol \tau}_i^{(k)}$. Let $\mathcal G$ be the group generated by the set ${\bf A}=\left\{{\boldsymbol \tau}_i^{(\ell)}:i\in\mathbb{Z}/n\mathbb{Z}, \ell\in[k]\right\}$ subject to the relations ${\boldsymbol \tau}_{i}^{(\ell)}{\boldsymbol \tau}_{j}^{(m)}={\boldsymbol \tau}_{j}^{(m)}{\boldsymbol \tau}_{i}^{(\ell)}$ whenever $j\not\in\{i-1,i,i+1\}$. Let $\mathcal D$ be the directed graph with vertex set ${\bf A}$ and with arrows defined as follows: for each arrow $a\to b$ in $\beta$, the graph $\mathcal D$ has arrows ${\boldsymbol\tau}_b^{(\ell)}\to{\boldsymbol\tau}_a^{(\ell)}$ for all $\ell\in[k]$ and ${\boldsymbol\tau}_a^{(m)}\to{\boldsymbol\tau}_b^{(m+1)}$ for all $m\in[k-1]$. Let $({\bf A},\leq_{\mathcal D})$ be the transitive closure of $\mathcal D$.
Fix a bijection $\pi\colon[n]\to\mathbb{Z}/n\mathbb{Z}$ such that $\TPro_\pi=\TPro_\beta$, and let $Y'$ be the word $(\tau_{\pi(n)}\cdots\tau_{\pi(1)})^k$ over the alphabet $\{\tau_1,\ldots,\tau_n\}$. Let $Z$ (respectively, $Z'$) be the word over the alphabet $\bf A$ obtained from $Y$ (respectively, $Y'$) by replacing the $\ell$-th occurrence of the letter $\tau_i$ with $\boldsymbol\tau_i^{(\ell)}$. The conditions on $Y$ in the hypothesis of the lemma imply that $Z$ is a linear extension of $({\bf A},\leq_{\mathcal D})$; the word $Y'$ satisfies the same conditions, so $Z'$ is also a linear extension of $({\bf A},\leq_{\mathcal D})$. This means that $Z'$ can be obtained from $Z$ by repeatedly swapping consecutive incomparable elements; each such swap corresponds to one of the relations defining $\mathcal G$. Thus, $Z$ and $Z'$ represent the same element of $\mathcal G$. There is a natural homomorphism from $\mathcal G$ to the group of permutations of $\Lambda_G$ that sends each generator ${\boldsymbol \tau}_i^{(\ell)}$ to $\tau_i$. This homomorphism sends $Z$ and $Z'$ to the permutations of $\Lambda_G$ represented by $Y$ and $Y'$, respectively. Hence, these permutations are the same. This completes the proof because the permutation represented by $Y'$ is $\TPro_\beta^k$.
\end{proof}
\section{Broken Promotion}\label{sec:broken}
In this section, we study the \emph{broken promotion} operators defined in the introduction, describing them in terms of ``gliding globs'' and relating them to permutoric promotion operators.
\subsection{Jeu de Taquin}
As before, let us fix an $n$-vertex graph $G=(V,E)$. Our arguments in this section will require certain \dfn{jeu de taquin} operators defined as follows. For $i_1,i_2\in\mathbb Z/n\mathbb Z$, define
$\jdt_{(i_1,i_2)}\colon\Lambda_G\to\Lambda_G$ by \[\jdt_{(i_1,i_2)}(\sigma)=\begin{cases} (i_1\,\,i_2)\circ \sigma & \mbox{if } \{\sigma^{-1}(i_1),\sigma^{-1}(i_2)\}\in E; \\ \sigma & \mbox{if } \{\sigma^{-1}(i_1),\sigma^{-1}(i_2)\}\not\in E.
\end{cases}\] Thus, $\jdt_{(i_1,i_2)}$ has the effect of trying to ``glide'' the label $i_1$ through the label $i_2$; it succeeds in doing so if and only if those labels are on adjacent vertices of $G$. More generally, if $(i_1,\ldots,i_r)$ is a tuple of distinct elements of $\mathbb Z/n\mathbb Z$, then we define \[\jdt_{(i_1,\ldots,i_r)}=\jdt_{(i_1,i_r)}\jdt_{(i_1,i_{r-1})}\cdots\jdt_{(i_1,i_2)}.\] This operator has the effect of trying to glide $i_1$ through the labels $i_2,\ldots,i_r$ in that order. We will primarily be interested in the case when $(i_1,\ldots,i_r)$ is such that $i_j=i_1+j-1$ for all $1\leq j\leq r$. In this case, $\{i_1,\ldots,i_r\}$ is a cyclic interval $[x,y]_n$, so we simply write $\jdt_{[x,y]_n}$ instead of $\jdt_{(i_1,\ldots,i_r)}$.
\begin{example}
If $n=6$ and $\sigma$ is the labeling shown on the left in \Cref{Fig1}, then $\jdt_{[5,9]_6}(\sigma)=\jdt_{(5,6,1,2,3)}(\sigma)$ is the labeling shown on the right in \Cref{Fig1}.
\end{example}
\begin{figure}
\caption{On the left is a labeling of a $6$-vertex graph, where the label of each vertex is shown next to it in red. On the right is the labeling $\jdt_{[5,9]_6}(\sigma)$.}
\label{Fig1}
\end{figure}
\subsection{Broken Promotion}\label{subsec:Description1}
Suppose $B$ is a proper subset of $\mathbb Z/n\mathbb Z$. Recall the definitions of the \emph{broken promotion} operator $\Bro_B\colon\Lambda_G\to\Lambda_G$ and the \emph{cyclic shift} operator $\cyc\colon\Lambda_G\to\Lambda_G$ from \Cref{SecIntro}. We can explicitly describe the action of $\cyc\Bro_B$ on a labeling $\sigma\in\Lambda_G$ as follows. Let $B_1,\ldots,B_k$ be the vertex sets of the connected components of the subgraph of $\mathsf{Cycle}_n$ induced by $B$. For each $1\leq i\leq k$, let $x_i$ and $y_i$ be such that $B_i=[x_i,y_i-1]_n$, and imagine immersing the label $x_i$ in a glob of liquid. The first step is to apply the jeu de taquin operators $\jdt_{[x_i,y_i]_n}$, imagining that the label $x_i$ carries its glob along with it as it glides. For the second step, increase by $1$ the label of each vertex in $\sigma^{-1}((\mathbb{Z}/n\mathbb{Z})\setminus\bigcup_{\ell=1}^k[x_\ell,y_\ell]_n)$. If $x_i-1\not\in \bigcup_{\ell=1}^k[x_\ell,y_\ell]_n$, this second step will change the label $x_i-1$ into $x_i$, so there will be two copies of the label $x_i$: one in a glob and the other not in a glob. The third and final step is to change each label $x_i$ that is in a glob to the label $y_i+1$.
It might not be obvious at first that the procedure described in the preceding paragraph does in fact compute $\cyc\Bro_B(\mathsf{s}igma)$; however, the verification of this fact is straightforward and can be elucidated through examples.
\begin{example}
Suppose $n=9$ and $G=\mathsf{Path}_9$. Let $B=\{1,3,4,7,9\}\subseteq\mathbb{Z}/9\mathbb{Z}$. The connected components of the subgraph of $\mathsf{Cycle}_9$ induced by $B$ have vertex sets \[B_1=\{3,4\}=[3,4]_9, \quad B_2=\{7\}=[7,7]_9, \quad B_3=\{9,1\}=[9,10]_9.\] Preserving the notation from above, we have $x_1=3$, $y_1=5$, $x_2=7$, $y_2=8$, $x_3=9$, $y_3=11$. Recall that the vertices of $\mathsf{Path}_9$ are $v_1,\ldots,v_9$; let $\sigma\in\Lambda_{\mathsf{Path}_9}$ be the labeling that sends these vertices to $7,1,4,3,5,6,9,2,8$, respectively. \Cref{Fig2} illustrates the three-step procedure for computing $\cyc\Bro_B(\sigma)$, showing that $\cyc\Bro_B(\sigma)$ sends $v_1,\ldots,v_9$ to $9,1,6,4,5,7,2,3,8$, respectively.
$\lozenge$
\end{example}
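The three-step gliding-glob procedure can be implemented directly. The Python sketch below (an illustration; the encoding of labelings as dictionaries is ours) reproduces the computation from the example above on $\mathsf{Path}_9$.

```python
n = 9
EDGES = {frozenset((v, v + 1)) for v in range(1, n)}     # Path_9 on vertices 1..9
m = lambda x: (x - 1) % n + 1                            # representative in {1,...,n}

def jdt_pair(i1, i2, sigma):
    """jdt_{(i1,i2)}: glide label i1 through i2 if they lie on adjacent vertices."""
    inv = {lab: v for v, lab in sigma.items()}
    if frozenset((inv[i1], inv[i2])) not in EDGES:
        return dict(sigma)
    out = dict(sigma)
    out[inv[i1]], out[inv[i2]] = i2, i1
    return out

def cyc_bro(B, sigma):
    """cyc . Bro_B via the three-step gliding-glob procedure (B a proper subset)."""
    sigma = dict(sigma)
    # Connected components B_i = [x_i, y_i - 1]_n of the subgraph of Cycle_n
    # induced by B:
    comps = []
    for x in sorted(B):
        if m(x - 1) in B:
            continue                                     # x does not start a component
        y = x + 1
        while m(y) in B:
            y += 1
        comps.append((x, y))
    union = {m(t) for x, y in comps for t in range(x, y + 1)}
    # Step 1: glide each label x_i through x_i + 1, ..., y_i, tracking its glob.
    glob_vertex = {}
    for x, y in comps:
        for t in range(x + 1, y + 1):
            sigma = jdt_pair(x, m(t), sigma)
        glob_vertex[(x, y)] = next(v for v, lab in sigma.items() if lab == x)
    # Step 2: add 1 to every label outside the union of the intervals [x_i, y_i]_n.
    sigma = {v: (lab if lab in union else m(lab + 1)) for v, lab in sigma.items()}
    # Step 3: the glob that carried x_i receives the label y_i + 1.
    for (x, y), v in glob_vertex.items():
        sigma[v] = m(y + 1)
    return sigma

sigma = dict(zip(range(1, n + 1), [7, 1, 4, 3, 5, 6, 9, 2, 8]))
result = cyc_bro({1, 3, 4, 7, 9}, sigma)
assert [result[v] for v in range(1, n + 1)] == [9, 1, 6, 4, 5, 7, 2, 3, 8]
```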
\begin{figure}
\caption{The three steps for applying $\cyc\Bro_B$, where $B=\{1,3,4,7,9\}\subseteq\mathbb{Z}/9\mathbb{Z}$.}
\label{Fig2}
\end{figure}
\subsection{Broken Promotion for the Complement of an Independent Set}\label{subsec:Description2}
Suppose $1\leq d\leq \left\lfloor n/2\right\rfloor$, and let $\cdots<s_{-1}<s_0<s_1<s_2<\cdots$ be a bi-infinite sequence of integers such that $s_{i+d}=s_i+n$ and $s_{i+1}\geq s_i+2$ for all $i\in\mathbb{Z}$. Then $\mathscr S=\{s_1,\ldots,s_d\}$ is an independent set of size $d$ in $\mathsf{Cycle}_n$. Let $\beta_{\mathscr S}$ be the acyclic orientation of $\mathsf{Cycle}_n$ in which the elements of $\mathscr S$ are sources and all edges not incident to elements of $\mathscr S$ are oriented clockwise. The sinks of $\beta_{\mathscr S}$ are the elements of $\mathscr S-1:=\{s_1-1,\ldots,s_d-1\}$. Let us write $\mathscr R=(\mathbb{Z}/n\mathbb{Z})\setminus(\mathscr S-1)$.
In \Cref{subsec:Description1}, we gave a three-step description of the action of $\cyc\Bro_B$ on a labeling $\sigma$ when $B$ is an arbitrary proper subset of $\mathbb{Z}/n\mathbb{Z}$. This description simplifies when $B$ is $\mathscr R$ because in this case, we have $x_i=s_i$ and $y_i=s_{i+1}-1$, so $\bigcup_{\ell=1}^k[x_\ell,y_\ell]_n$ is all of $\mathbb{Z}/n\mathbb{Z}$ (so the second step in the earlier description has no effect). Hence, we have the following simpler two-step procedure. Immerse each label $s_i$ in a glob of liquid. The first step is to apply the jeu de taquin operators $\jdt_{[s_i,s_{i+1}-1]_n}$ (for $1\leq i\leq d$), imagining that the label $s_i$ carries its glob with it as it glides. The second step is to cyclically rotate the labels in the globs, changing each label $s_i$ to $s_{i+1}$ (modulo $n$).
\begin{example}
Suppose $n=9$ and $d=3$. Let $s_1=3$, $s_2=7$, $s_3=9$. Then $\mathscr S=\{3,7,9\}$, $\mathscr S-1=\{2,6,8\}$, and $\mathscr R=(\mathbb{Z}/9\mathbb{Z})\setminus(\mathscr S-1)=\{1,3,4,5,7,9\}$. The first step in the above two-step procedure for applying $\cyc\Bro_{\mathscr R}$ is to immerse $3$, $7$, and $9$ in globs of liquid and apply $\jdt_{[3,6]_9}$, $\jdt_{[7,8]_9}$, and $\jdt_{[9,11]_9}$. The second step is to cyclically rotate the labels $3,7,9$. This is illustrated in \Cref{Fig3}.
$\lozenge$
\end{example}
\begin{figure}
\caption{The two steps for applying $\cyc\Bro_{\mathscr R}$, where $\mathscr R=\{1,3,4,5,7,9\}\subseteq\mathbb{Z}/9\mathbb{Z}$.}
\label{Fig3}
\end{figure}
\begin{remark}\label{Rem:SameOrder}
Suppose $G=\mathsf{Path}_n$. Neither of the two steps in the above procedure changes the relative order in which the labels in $(\mathbb{Z}/n\mathbb{Z})\setminus\mathscr S$ (i.e., the labels not in the globs) appear from left to right along the path. For example, in \Cref{Fig3}, the labels in $(\mathbb{Z}/n\mathbb{Z})\setminus\mathscr S$ are $1,2,4,5,6,8$. In every step of the procedure, these labels appear in the order $1,4,5,6,2,8$.
$\triangle$
\end{remark}
\subsection{Permutoric Promotion and Broken Promotion}
The following lemma relates the operators $\cyc\Bro_{\mathscr R}$ and $\TPro_\beta$; it will later allow us to use the description of $\cyc\Bro_{\mathscr R}$ in terms of gliding globs given in \Cref{subsec:Description2} to gain a better understanding of permutoric promotion.
\begin{lemma}\label{lem:commutes}
Let $\mathscr{S}$ be a $d$-element independent set in $\mathsf{Cycle}_n$, and let $\mathscr R=(\mathbb Z/n\mathbb Z)\setminus(\mathscr S-1)$. Then \[\cyc\Bro_{\mathscr R}\TPro_{\beta_{\mathscr S}}=\TPro_{\beta_{\mathscr S}}\cyc\Bro_{\mathscr R}.\]
\end{lemma}
\begin{proof}
Preserve the notation from \Cref{subsec:Description2}. Let $\mathscr B=\mathscr R\setminus\mathscr S$. The vertex sets of the connected components of the subgraph of $\mathsf{Cycle}_n$ induced by $\mathscr R$ are $[s_1,s_2-2]_n,[s_2,s_3-2]_n,\ldots,[s_d,s_{d+1}-2]_n$, so \[\Bro_{\mathscr R}=\prod_{i=1}^d\Bro_{[s_i,s_{i+1}-2]_n}=\prod_{i=1}^d(\Bro_{[s_i+1,s_{i+1}-2]_n}\tau_{s_i})=\prod_{i=1}^d\Bro_{[s_i+1,s_{i+1}-2]_n}\prod_{i=1}^d\tau_{s_i}=\Bro_{\mathscr{B}}\Bro_{\mathscr S}.\] A similar computation shows that \[\cyc\Bro_{\mathscr R} \cyc^{-1}=\prod_{i=1}^d\Bro_{[s_i+1,s_{i+1}-1]_n}=\Bro_{\mathscr S-1}\Bro_{\mathscr B}.\] We also have $\cyc\Bro_{\mathscr S-1}\cyc^{-1}=\Bro_{\mathscr S}$. Therefore,
\begin{align*}
\cyc\Bro_{\mathscr R}\Bro_{\mathscr S-1}&=(\cyc\Bro_{\mathscr R} \cyc^{-1})(\cyc\Bro_{\mathscr S-1}\cyc^{-1})\cyc \\
&=\Bro_{\mathscr S-1}(\Bro_{\mathscr B}\Bro_{\mathscr S}) \cyc \\
&=\Bro_{\mathscr S-1}\Bro_{\mathscr R}\cyc.
\end{align*} This shows that \[\cyc\Bro_{\mathscr R}\Bro_{\mathscr S-1}\Bro_{\mathscr R}=\Bro_{\mathscr S-1}\Bro_{\mathscr R}\cyc\Bro_{\mathscr R}.\] The desired result now follows from the observation that $\TPro_{\beta_{\mathscr S}}=\Bro_{\mathscr S-1}\Bro_{\mathscr R}$.
\end{proof}
\subsection{Homomesy}\label{subsec:homomesy}
We end this section with a proposition about broken promotion that will be useful in \Cref{sec:orbit_broken} but that we also believe is interesting in its own right. This proposition concerns the notion of \emph{homomesy}, which Propp and Roby introduced in 2013 \cite{ProppRobyFPSAC,ProppRoby}; it is now one of the central topics in dynamical algebraic combinatorics. Suppose $X$ is a finite set and $f\colon X\to X$ is an invertible map. Let $\mathrm{Orb}_f$ denote the set of orbits of $f$. A \dfn{statistic} on $X$ is a function $\text{stat}\colon X\to\mathbb R$. We say the statistic $\text{stat}$ is \dfn{homomesic} for $f$ with average $a$ if $\frac{1}{|\mathcal O|}\sum_{x\in\mathcal O}\text{stat}(x)=a$ for every orbit $\mathcal O\in\mathrm{Orb}_f$.
\begin{proposition}\label{prop:homomesy}
Suppose $G$ is connected. Let $v$ be a vertex of $G$, and let $i\in\mathbb{Z}/n\mathbb{Z}$. Define $\mathbbm{1}_{v,i}\colon\Lambda_G\to\mathbb R$ by \[\mathbbm{1}_{v,i}(\sigma)=\begin{cases} 1 & \mbox{if } \sigma(v)=i; \\ 0 & \mbox{if } \sigma(v)\neq i. \end{cases}\] If $B\subseteq\mathbb{Z}/n\mathbb{Z}$ and $i-1\not\in B$, then $\mathbbm{1}_{v,i}$ is homomesic for the map $\cyc\Bro_B$ with average $1/n$.
\end{proposition}
\begin{proof}
By symmetry, it suffices to prove the result when $i=1$. We identify $\mathbb{Z}/n\mathbb{Z}$ with $[n]$ and consider the total ordering $1<2<\dots<n$. Given a labeling $\sigma\in\Lambda_G$, we obtain an acyclic orientation $\eta_\sigma$ by orienting each edge $\{x,y\}$ from $x$ to $y$ if and only if $\sigma(x)<\sigma(y)$. Observe that $\eta_{\tau_j(\sigma)}=\eta_\sigma$ for every $j\in[n-1]$ and $\sigma\in\Lambda_G$; since $n=i-1\not\in B$, this implies that $\eta_{\Bro_B(\sigma)}=\eta_\sigma$ for every $\sigma\in\Lambda_G$. It is also straightforward to see that $\eta_{\cyc(\sigma)}$ is obtained from $\eta_\sigma$ by flipping the vertex $(\cyc(\sigma))^{-1}(1)$ from a sink to a source. Therefore, $\eta_{\cyc(\Bro_B(\sigma))}$ is obtained from $\eta_\sigma$ by flipping $(\cyc(\Bro_B(\sigma)))^{-1}(1)$ from a sink to a source.
Let $\mathcal O$ be an orbit of $\cyc\Bro_B$, and fix $\mu_0\in\mathcal O$. Let $\mu_t=(\cyc\Bro_B)^t(\mu_0)$ for all $t\in\mathbb{Z}$. Consider an edge $\{x_0,y_0\}$ in $G$. Let $\cdots<k(0)<k(1)<k(2)<\cdots$ be the integers such that $\mu_{k(j)}^{-1}(1)\in\{x_0,y_0\}$. Without loss of generality, assume $x_0\to y_0$ is an arrow in $\eta_{\mu_{k(0)}}$. According to the previous paragraph, the orientations of $\{x_0,y_0\}$ are different in $\eta_{\mu_{t-1}}$ and $\eta_{\mu_t}$ if and only if $t\in\{\ldots,k(0),k(1),k(2),\ldots\}$. Moreover, we have $\mu_{k(j)}(x_0)=1$ if $x_0\to y_0$ is an arrow in $\eta_{\mu_{k(j)}}$, whereas $\mu_{k(j)}(y_0)=1$ if $y_0\to x_0$ is an arrow in $\eta_{\mu_{k(j)}}$. It follows that for $t\in\mathbb{Z}$, we have $\mu_t(x_0)=1$ if and only if $t=k(j)$ for some even $j$; similarly, $\mu_t(y_0)=1$ if and only if $t=k(j)$ for some odd $j$. This shows that the number of labelings in $\mathcal O$ that send $x_0$ to $1$ is the same as the number of labelings in $\mathcal O$ that send $y_0$ to $1$. Because the edge $\{x_0,y_0\}$ was arbitrary and $G$ is connected, it follows that for any two vertices $x$ and $y$ of $G$, the number of labelings in $\mathcal O$ that send $x$ to $1$ is the same as the number of labelings in $\mathcal O$ that send $y$ to $1$. This implies the desired result.
\end{proof}
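\Cref{prop:homomesy} can also be confirmed by brute force for small cases using the gliding-glob description of $\cyc\Bro_B$ from \Cref{subsec:Description1}. The Python sketch below (an illustration; the choice $G=\mathsf{Path}_5$ and $B=\{1,3,4\}$ is ours) checks that $\mathbbm{1}_{v,i}$ averages to $1/5$ on every orbit for $i\in\{1,3\}$, the labels $i$ with $i-1\notin B$.

```python
from itertools import permutations

n = 5
EDGES = {frozenset((v, v + 1)) for v in range(1, n)}     # Path_5 on vertices 1..5
B = {1, 3, 4}                                            # i - 1 not in B for i in {1, 3}
m = lambda x: (x - 1) % n + 1                            # representative in {1,...,n}

def jdt_pair(i1, i2, sigma):
    """jdt_{(i1,i2)}: glide label i1 through i2 if they lie on adjacent vertices."""
    inv = {lab: v for v, lab in sigma.items()}
    if frozenset((inv[i1], inv[i2])) not in EDGES:
        return dict(sigma)
    out = dict(sigma)
    out[inv[i1]], out[inv[i2]] = i2, i1
    return out

def cyc_bro(B, sigma):
    """cyc . Bro_B via the three-step gliding-glob procedure."""
    sigma = dict(sigma)
    comps = []
    for x in sorted(B):                                  # components B_i = [x_i, y_i - 1]_n
        if m(x - 1) in B:
            continue
        y = x + 1
        while m(y) in B:
            y += 1
        comps.append((x, y))
    union = {m(t) for x, y in comps for t in range(x, y + 1)}
    glob_vertex = {}
    for x, y in comps:                                   # step 1: glide the globs
        for t in range(x + 1, y + 1):
            sigma = jdt_pair(x, m(t), sigma)
        glob_vertex[(x, y)] = next(v for v, lab in sigma.items() if lab == x)
    # Step 2: shift labels outside the union; step 3: relabel the globs.
    sigma = {v: (lab if lab in union else m(lab + 1)) for v, lab in sigma.items()}
    for (x, y), v in glob_vertex.items():
        sigma[v] = m(y + 1)
    return sigma

def step(labs):
    out = cyc_bro(B, dict(zip(range(1, n + 1), labs)))
    return tuple(out[v] for v in range(1, n + 1))

seen = set()
for start in permutations(range(1, n + 1)):
    if start in seen:
        continue
    orbit = [start]
    cur = step(start)
    while cur != start:
        orbit.append(cur)
        cur = step(cur)
    seen.update(orbit)
    for v in range(n):
        for i in (1, 3):                                 # homomesy with average 1/n
            assert n * sum(labs[v] == i for labs in orbit) == len(orbit)
```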
\begin{example}
Suppose $n=5$ and $B=\{1,3,4\}$. \Cref{Fig4} depicts an orbit of $\cyc\Bro_{B}$ for a particular choice of a graph $G$. Select an arbitrary vertex $v$ of $G$. As predicted by \Cref{prop:homomesy}, exactly $1$ of the $5$ labelings in this orbit sends $v$ to $1$, and exactly $1$ of the $5$ labelings in the orbit sends $v$ to $3$.
$\lozenge$
\end{example}
\begin{figure}
\caption{An orbit of $\cyc\Bro_{\{1,3,4\}}$.}
\label{Fig4}
\end{figure}
\section{Broken Promotion on a Path}\label{sec:Broken_Path}
Throughout the rest of the article, we will specialize to the case when $G=\mathsf{Path}_n$ is the path with $n$ vertices.
Suppose $1\leq d\leq\lfloor n/2\rfloor$. In \Cref{sec:broken}, we considered an arbitrary bi-infinite sequence \[\cdots<s_{-1}<s_0<s_1<s_2<\cdots\] satisfying $s_{i+d}=s_i+n$ and $s_{i+1}\geq s_i+2$ for all $i\in\mathbb{Z}$. In this section, we specialize our attention to a particular sequence. We write $[[x]]$ for the integer closest to a real number $x$, with the convention that $[[x]]=x-1/2$ if $x-1/2\in \mathbb{Z}$. For $i\in\mathbb{Z}$, let $s_i=[[in/d]]$. As in \Cref{sec:broken}, we let $\mathscr S$ be the independent set $\{s_1,\ldots,s_d\}$ of $\mathsf{Cycle}_n$ and set $\mathscr R=(\mathbb{Z}/n\mathbb{Z})\setminus(\mathscr S-1)$. Let $\beta=\beta_{\mathscr S}$ be the acyclic orientation of $\mathsf{Cycle}_n$ whose sources are the elements of $\mathscr S$ and whose sinks are the elements of $\mathscr S-1$. Then $\beta$ has exactly $d$ counterclockwise edges.
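The rounding convention $[[\cdot]]$ and the sequence $s_i=[[in/d]]$ are easy to implement exactly; the Python sketch below (a sanity check) verifies the two defining properties $s_{i+d}=s_i+n$ and $s_{i+1}\geq s_i+2$ for a range of small parameters. For instance, $n=9$ and $d=3$ give $\mathscr S=\{3,6,9\}$.

```python
from fractions import Fraction
from math import ceil

def closest_int(x):
    """[[x]]: the integer closest to x, with half-integers rounded down."""
    return ceil(x - Fraction(1, 2))

def s(i, n, d):
    return closest_int(Fraction(i * n, d))

assert [s(i, 9, 3) for i in (1, 2, 3)] == [3, 6, 9]      # here S = {3, 6, 9}
# The defining properties of the bi-infinite sequence (s_i):
for n in range(4, 13):
    for d in range(1, n // 2 + 1):
        for i in range(-5, 6):
            assert s(i + d, n, d) == s(i, n, d) + n
            assert s(i + 1, n, d) >= s(i, n, d) + 2
```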
In what follows, when we consider the size of the intersection of a multiset with a set, we count the elements according to their multiplicity in the multiset. For example, in the next proposition, $|[j-q,j-1]_n\cap(\mathscr S-1)|$ should be interpreted as the number of elements of $[j-q,j-1]_n$, counted with multiplicity, that are also elements of the set $\mathscr S-1$.
\begin{proposition}\label{prop:TProcycBro_R}
Let $\gamma,q,r$ be nonnegative integers such that $0\leq r\leq n-d-1$ and $\gamma n=q(n-d)+r$. Let $J=\{j\in[n]:q-\gamma+1\leq |[j-q,j-1]_n\cap(\mathscr S-1)|\}$. Then $|J|=r$, and $\TPro_\beta^\gamma=\cyc^{-q}\Bro_J(\cyc\Bro_{\mathscr R})^q$.
\end{proposition}
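Before turning to the proof, the counting claim $|J|=r$ in \Cref{prop:TProcycBro_R} can be sanity-checked numerically (a Python sketch; cyclic intervals are treated as multisets by iterating over integer representatives and reducing modulo $n$, as stipulated above):

```python
from fractions import Fraction
from math import ceil

def s(i, n, d):
    return ceil(Fraction(i * n, d) - Fraction(1, 2))     # [[i n / d]]

def check(n, d, gamma):
    # S - 1, stored with residues 0..n-1; Python's % always returns these.
    S_minus_1 = {(s(i, n, d) - 1) % n for i in range(1, d + 1)}
    q, r = divmod(gamma * n, n - d)                      # gamma*n = q(n - d) + r
    J = [j for j in range(1, n + 1)
         if sum(t % n in S_minus_1 for t in range(j - q, j)) >= q - gamma + 1]
    assert len(J) == r, (n, d, gamma)

for n in range(5, 11):
    for d in range(2, n // 2 + 1):
        for gamma in range(1, 5):
            check(n, d, gamma)
```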
\Cref{prop:TProcycBro_R} will be crucial in the next section when we prove that the sizes of the orbits of $\TPro_\beta$ are all divisible by $\lcm(d,n-d)$. Before proving this proposition, we need a technical lemma.
\begin{lemma}\label{Lem:R1}
Let $\gamma,q,r,J$ be as in \Cref{prop:TProcycBro_R}. We have $J\cap(\mathscr S-1)=\emptyset$. Also, if $i\in\mathscr R$ and $i+1\in J$, then $i\in J$.
\end{lemma}
\begin{proof}
Consider some $s_j-1\in\mathscr S-1$. Recall that $s_j=[[jn/d]]$. It is straightforward to check that \[|[s_j-1-q,s_j-2]_n\cap(\mathscr S-1)|=|[s_j-1-q,s_j-1]_n\cap(\mathscr S-1)|-1\leq \frac{(q+1)d}{n}.\] Because $r\leq n-d-1$, we have
\[|[s_j-1-q,s_j-2]_n\cap(\mathscr S-1)|<\frac{(q+1)d}{n}+\frac{n-d-r}{n}=q-\frac{q(n-d)+r}{n}+1=q-\gamma+1,\] so $s_j-1\not\in J$. This proves that $J\cap(\mathscr S-1)=\emptyset$.
Now suppose $i\in\mathscr R=(\mathbb{Z}/n\mathbb{Z})\setminus(\mathscr S-1)$ and $i+1\in J$. We have \[|[i-q,i-1]_n\cap(\mathscr S-1)|\geq |[(i+1)-q,(i+1)-1]_n\cap(\mathscr S-1)|\geq q-\gamma+1,\] so $i\in J$.
\end{proof}
\begin{proof}[Proof of \Cref{prop:TProcycBro_R}]
As in \Cref{lem:suffix}, we will consider words over the alphabet $\{\tau_1,\ldots,\tau_n\}$ both as words and as permutations of $\Lambda_{\mathsf{Path}_n}$. Given such a word $X$, recall that we write $X\!\langle i\rangle$ for the number of occurrences of $\tau_i$ in $X$.
Let $i_1,\ldots,i_{n-d}$ be an ordering of the elements of $\mathscr R$ such that $\Bro_{\mathscr R}=\tau_{i_{n-d}}\cdots\tau_{i_1}$. Consider the word \[W=\tau_{i_{n-d}-(q-1)}\cdots\tau_{i_1-(q-1)}\tau_{i_{n-d}-(q-2)}\cdots\tau_{i_1-(q-2)}\cdots\tau_{i_{n-d}-1}\cdots\tau_{i_1-1}\tau_{i_{n-d}}\cdots\tau_{i_1}.\] For each $i\in\mathbb{Z}/n\mathbb{Z}$, we have $W\!\langle i\rangle=q-|[i,i+q-1]_n\cap(\mathscr S-1)|$. The size of \[[i,i+q-1]_n\cap(\mathscr S-1)=[i,i+q-1]_n\cap\{[[n/d]]-1,[[2n/d]]-1,\ldots,[[dn/d]]-1\}\] must be $\left\lfloor qd/n\right\rfloor$ or $\left\lceil qd/n\right\rceil$. Using the identity $q(n-d)=\gamma n-r$, we find that $W\!\langle i\rangle\in\{\gamma-1,\gamma\}$ for all $i\in\mathbb{Z}/n\mathbb{Z}$. The total number of toggle operators in $W$ is $q(n-d)=\gamma n-r$, so there are exactly $r$ elements $i\in\mathbb{Z}/n\mathbb{Z}$ such that $W\!\langle i\rangle=\gamma-1$. Furthermore, we have $W\!\langle i\rangle=\gamma-1$ if and only if $i+q\in J$. This proves that $|J|=r$ and that $q-|[j-q,j-1]_n\cap(\mathscr S-1)|=\gamma-1$ for every $j\in J$.
It follows from \Cref{Lem:R1} that we can choose the ordering $i_1,\ldots,i_{n-d}$ so that $\Bro_J=\tau_{i_r}\cdots\tau_{i_1}$. For each $k\in\mathbb Z$, we have $\cyc^{-k}\Bro_{\mathscr R}\cyc^k=\tau_{i_{n-d}-k}\cdots\tau_{i_1-k}$. Thus, when we view $W$ as a permutation of $\Lambda_{\mathsf{Path}_n}$, it is equal to \[(\cyc^{-(q-1)}\Bro_{\mathscr R}\cyc^{q-1})\cdots (\cyc^{-1}\Bro_{\mathscr R}\cyc)\Bro_{\mathscr R}=\cyc^{-q}(\cyc\Bro_{\mathscr R})^q.\] When we view the word $W'=\tau_{i_r-q}\cdots\tau_{i_1-q}$ as a permutation, it is equal to $\cyc^{-q}\Bro_J \cyc^q$, so $W'W=\cyc^{-q}\Bro_J (\cyc\Bro_{\mathscr R})^q$. Every toggle operator $\tau_i$ occurs exactly $\gamma$ times in the word $W'W$. Our goal is to prove that the permutation $W'W$ of $\Lambda_{\mathsf{Path}_n}$ is equal to $\TPro_\beta^\gamma$. Setting $Y=W'W$ in \Cref{lem:suffix}, we find that it suffices to show that if $X$ is a suffix of $W'W$ and $a\to b$ is an arrow in $\beta$, then $X\!\langle a\rangle-X\!\langle b\rangle\in\{0,1\}$.
Given $A\subseteq\mathbb{Z}/n\mathbb{Z}$ and $i\in\mathbb{Z}/n\mathbb{Z}$, let \[A(i)=\begin{cases} 1 & \mbox{if } i\in A; \\ 0 & \mbox{if } i\not\in A. \end{cases}\] Let $X$ be a suffix of $W'W$, and write $|X|=k(n-d)+m$ for some nonnegative integers $k$ and $m$ with $0\leq m\leq n-d-1$. Then \[X=\tau_{i_m-k}\cdots\tau_{i_1-k}\tau_{i_{n-d}-(k-1)}\cdots\tau_{i_1-(k-1)}\cdots\tau_{i_{n-d}-1}\cdots\tau_{i_1-1}\tau_{i_{n-d}}\cdots\tau_{i_1}.\] Let $Q=\{i_1,\ldots,i_m\}\subseteq\mathscr R$. Let $a\to b$ be an arrow in $\beta$; we want to show that $X\!\langle a\rangle-X\!\langle b\rangle\in\{0,1\}$. To do this, let us first prove that
\begin{equation}\label{Eq:SQ}
(\mathscr S-1)(\ell)+Q(\ell)-Q(\ell+1)\in \{0,1\}\text{ for all }\ell\in\mathbb{Z}/n\mathbb{Z}.
\end{equation}
Because $(\mathscr S-1)\cap Q=\emptyset$, we must have $(\mathscr S-1)(\ell)+Q(\ell)-Q(\ell+1)\leq 1$. Suppose by way of contradiction that $(\mathscr S-1)(\ell)+Q(\ell)-Q(\ell+1)<0$. Then $(\mathscr S-1)(\ell)=Q(\ell)=0$ and $Q(\ell+1)=1$. This implies that $\ell+1$ is not in the set $\mathscr S$ of sources of $\beta$, so there is an arrow $\ell\to \ell+1$ in $\beta$. Hence, $\ell$ appears before $\ell+1$ in the ordering $i_1,\ldots,i_{n-d}$. Since $\ell+1\in Q=\{i_1,\ldots,i_m\}$, this forces $\ell\in Q$, which is a contradiction.
We can now prove that $X\!\langle a\rangle-X\!\langle b\rangle\in\{0,1\}$; we consider two cases.
\noindent{\bf Case 1.} Suppose $b=a+1$. In this case, $X\!\langle a\rangle=k-|[a,a+k-1]_n\cap(\mathscr S-1)|+Q(a+k)$ and $X\!\langle b\rangle=k-|[a+1,a+k]_n\cap(\mathscr S-1)|+Q(a+k+1)$, so \[X\!\langle a\rangle - X\!\langle b\rangle=-(\mathscr S-1)(a)+(\mathscr S-1)(a+k)+Q(a+k)-Q(a+k+1).\] Because $a\to a+1$ is an arrow in $\beta$, we know that $(\mathscr S-1)(a)=0$. If we set $\ell=a+k$ in \eqref{Eq:SQ}, we find that $X\!\langle a\rangle-X\!\langle b\rangle=(\mathscr S-1)(a+k)+Q(a+k)-Q(a+k+1)\in\{0,1\}$.
\noindent {\bf Case 2.} Suppose $b=a-1$. In this case, $X\!\langle a\rangle=k-|[a,a+k-1]_n\cap(\mathscr S-1)|+Q(a+k)$ and $X\!\langle b\rangle=k-|[a-1,a+k-2]_n\cap(\mathscr S-1)|+Q(a+k-1)$, so \[X\!\langle a\rangle - X\!\langle b\rangle=(\mathscr S-1)(a-1)-(\mathscr S-1)(a+k-1)+Q(a+k)-Q(a+k-1).\]
Because $a\to a-1$ is an arrow in $\beta$, it follows from the definition of $\beta$ that $(\mathscr S-1)(a-1)=1$. If we set $\ell=a+k-1$ in \eqref{Eq:SQ}, we find that $(\mathscr S-1)(a+k-1)+Q(a+k-1)-Q(a+k)\in\{0,1\}$. Therefore, $X\!\langle a\rangle-X\!\langle b\rangle =1-((\mathscr S-1)(a+k-1)+Q(a+k-1)-Q(a+k))\in\{0,1\}$.
\end{proof}
\mathsf{s}ection{Divisibility of Permutoric Promotion Orbit Sizes}\label{sec:divisibility}
Our goal in this section is to prove the following proposition.
\begin{proposition}\label{prop:divisibility}
If $\beta$ is an acyclic orientation of $\mathsf{Cycle}_n$ with $d$ counterclockwise edges, then every orbit of $\TPro_\beta$ has size divisible by $\lcm(d,n-d)$.
\end{proposition}
\Cref{lem:counterclockwise_edges} tells us that it suffices to prove \Cref{prop:divisibility} when $1\leq d\leq\left\lfloor n/2\right\rfloor$. Furthermore, if $d=1$, then $\TPro_\beta$ is dynamically equivalent to the toric promotion operator $\TPro$, so it follows from \Cref{thm:toric_main} (specialized to the case when $G=\mathsf{Path}_n$) that all orbits of $\TPro_\beta$ have size $n-1$. Thus, we may assume in what follows that $2\leq d\leq\left\lfloor n/2\right\rfloor$. By \Cref{lem:counterclockwise_edges}, we only need to prove \Cref{prop:divisibility} for one specific choice of an acyclic orientation $\beta$ with $d$ counterclockwise edges. As in \Cref{sec:Broken_Path}, let $s_i=[[in/d]]$, and let $\mathscr S$ be the independent set $\{s_1,\ldots,s_d\}$ of $\mathsf{Cycle}_n$. Let $\mathscr R=(\mathbb{Z}/n\mathbb{Z})\setminus(\mathscr S-1)$. Let $\beta=\beta_{\mathscr S}$ be the acyclic orientation of $\mathsf{Cycle}_n$ whose sources are the elements of $\mathscr S$ and whose sinks are the elements of $\mathscr S-1$. We will prove that every orbit of $\TPro_\beta$ has size divisible by $\lcm(d,n-d)$.
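Assuming the toggle convention discussed earlier (each $\tau_i$ swaps the labels $i$ and $i+1$ modulo $n$ when they lie on nonadjacent vertices, and does nothing otherwise; this convention is our reading of the introduction, not restated in this excerpt), both the $d=1$ statement and the divisibility claim can be verified by brute force for small $n$. The Python sketch below computes the orbit sizes of $\TPro_\pi$ on $\Lambda_{\mathsf{Path}_5}$ for a bijection $\pi$ with $d=2$ and checks divisibility by $\lcm(2,3)=6$, and also checks that a bijection with $d=1$ yields orbits of size $n-1=4$.

```python
from itertools import permutations
from math import lcm

n = 5
EDGES = {frozenset((v, v + 1)) for v in range(1, n)}     # Path_5 on vertices 1..5

def toggle(i, labs):
    """Assumed convention: tau_i swaps labels i and i+1 (mod n) unless those
    labels occupy adjacent vertices of the path."""
    j = i % n + 1
    pos = {lab: v for v, lab in enumerate(labs)}         # label -> 0-indexed vertex
    if frozenset((pos[i] + 1, pos[j] + 1)) in EDGES:
        return labs
    out = list(labs)
    out[pos[i]], out[pos[j]] = j, i
    return tuple(out)

def orbit_sizes(pi):
    """Orbit sizes of TPro_pi = tau_{pi(n)} ... tau_{pi(1)} on Lambda_{Path_n}."""
    def tpro(labs):
        for i in pi:                                     # tau_{pi(1)} acts first
            labs = toggle(i, labs)
        return labs
    seen, sizes = set(), []
    for start in permutations(range(1, n + 1)):
        if start in seen:
            continue
        orbit = {start}
        cur = tpro(start)
        while cur != start:
            orbit.add(cur)
            cur = tpro(cur)
        seen |= orbit
        sizes.append(len(orbit))
    return sizes

# pi = (1, 2, 3, 4, 5) has d = 1: toric promotion, all orbits of size n - 1.
assert all(sz == n - 1 for sz in orbit_sizes((1, 2, 3, 4, 5)))
# pi = (1, 3, 2, 4, 5) has d = 2 counterclockwise edges in alpha_pi:
assert all(sz % lcm(2, n - 2) == 0 for sz in orbit_sizes((1, 3, 2, 4, 5)))
```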
Fix a labeling $\lambda\in\Lambda_{\mathsf{Path}_n}$. Let $\gamma$ be the size of the orbit of $\TPro_\beta$ containing $\lambda$. Using the division algorithm, we can write $\gamma n=q(n-d)+r$, where $q$ and $r$ are nonnegative integers and $0\leq r\leq n-d-1$. As in \Cref{prop:TProcycBro_R}, let
\[J=\{j\in[n]:q-\gamma+1\leq |[j-q,j-1]_n\cap(\mathscr S-1)|\}.\]
Since \Cref{prop:TProcycBro_R} allows us to rewrite $\TPro_\beta^\gamma$ in terms of the operator $\cyc\Bro_{\mathscr R}$, we will want to consider the orbit of $\lambda$ under $\cyc\Bro_{\mathscr R}$. Thus, we let \[\mathcal M=\{(\cyc\Bro_{\mathscr R})^t(\lambda):t\in\mathbb{Z}\}.\] In \Cref{subsec:Description2}, we described how to compute the action of $\cyc\Bro_{\mathscr R}$ on a labeling via a two-step procedure involving gliding globs. As mentioned in \Cref{Rem:SameOrder}, neither of the two steps in this procedure changes the relative order in which the labels in $(\mathbb{Z}/n\mathbb{Z})\setminus\mathscr S$ appear along the path. Thus, we have the following lemma.
\begin{lemma}\label{Lem:SameOrder}
For every $\mu\in\mathcal M$, the order in which the labels in $(\mathbb{Z}/n\mathbb{Z})\setminus \mathscr S$ appear along the path in $\mu$ is the same as the order in which they appear along the path in $\lambda$.
\end{lemma}
We are now in a position to prove that $\gamma$ is divisible by $n-d$.
\begin{lemma}\label{lem:divisible_n-d}
If $\lambda\in\Lambda_{\mathsf{Path}_n}$ belongs to an orbit of $\TPro_\beta$ of size $\gamma$, then $\gamma$ is divisible by $n-d$.
\end{lemma}
\begin{proof}
Recall that we write $\gamma n=q(n-d)+r$ using the division algorithm. The map $\TPro_\beta^\gamma$
fixes $\lambda$. \Cref{lem:commutes} tells us that $\TPro_\beta$ commutes with $\cyc\Bro_{\mathscr R}$, so $\TPro_\beta^\gamma$ acts as the identity on $\mathcal M$ and thus trivially restricts to a bijection from $\mathcal M$ to itself. Since $\TPro_\beta^\gamma=\cyc^{-q}\Bro_J(\cyc\Bro_{\mathscr R})^q$ by
\Cref{prop:TProcycBro_R}, the map $\cyc^{-q}\Bro_J$ also restricts to a bijection from $\mathcal M$ to itself. Let $u_1,\ldots,u_{n-d}$ be the elements of $(\mathbb{Z}/n\mathbb{Z})\setminus\mathscr S$, listed in the order in which they appear from left to right along the path in $\lambda$. It follows from \Cref{Lem:R1} that there exist integers $\ldots,y_0,y_1,y_2,\ldots$ satisfying $y_{i+d}=y_i+n$ and $s_i\leq y_i\leq s_{i+1}-1$ for all $i$ such that $J=\bigcup_{i=1}^d[s_i,y_i-1]_n$ (viewing $J$ as a subset of $\mathbb{Z}/n\mathbb{Z}$). For each $1\leq i\leq d$, we have that $y_i\not\in J$ and $y_i-1\in J$, so it follows from the definition of $J$ that $|[y_i-q,y_i-1]_n\cap(\mathscr S-1)|<|[y_i-1-q,y_i-2]_n\cap(\mathscr S-1)|$. We deduce that $y_i-1-q\in \mathscr S-1$ for all $1\leq i\leq d$. Therefore,
\begin{equation}\label{eq:yS}
\mathscr S=\{y_1-q,\ldots,y_d-q\}.
\end{equation}
Let \[\zeta=\begin{cases} 0 & \mbox{if } u_1\in\bigcup_{\ell=1}^d[s_\ell,y_\ell]_n; \\ 1 & \mbox{otherwise.} \end{cases}\] Note that, regardless of the value of $\zeta$, the element $u_1+\zeta-1$ cannot be of the form $y_i$ for any integer $i$. Therefore, it follows from \eqref{eq:yS} that
\begin{equation}\label{eq:zeta}
u_1+\zeta-q-1\not\in\mathscr S.
\end{equation}
As mentioned above, $\cyc^{-q}\Bro_J$ restricts to a bijection from $\mathcal M$ to itself; thus, it follows from \Cref{Lem:SameOrder} that the labels in $(\mathbb{Z}/n\mathbb{Z})\setminus\mathscr S$ appear in the order $u_1,\ldots,u_{n-d}$ from left to right along the path in the labeling $\cyc^{-q}\Bro_J(\lambda)$.
Consider applying $\cyc\Bro_J$ to $\lambda$ using the three-step gliding-globs procedure described in \Cref{sec:broken}. We immerse the labels $s_1,\ldots,s_d$ and then apply the jeu de taquin operators $\jdt_{[s_i,y_i]_n}$, imagining that the label $s_i$ carries its glob along with it as it glides. After this initial step, the label $u_1$ will be on some vertex $z$; at this point in time, all of the vertices to the left of $z$ have globs of liquid on them, while $z$ does not. We claim that $\lambda(z)\in\bigcup_{\ell=1}^d[s_\ell,y_\ell]_n$ if and only if $u_1\in\bigcup_{\ell=1}^d[s_\ell,y_\ell]_n$. This is obvious if $\lambda(z)=u_1$. On the other hand, if $\lambda(z)\neq u_1$, then it follows from the definition of the jeu de taquin operators that $\lambda(z)$ and $u_1$ must both be in $\bigcup_{\ell=1}^d[s_\ell,y_\ell]_n$. This proves the claim, which is equivalent to the statement that $\zeta=1$ if and only if $z\in\lambda^{-1}((\mathbb{Z}/n\mathbb{Z})\setminus\bigcup_{\ell=1}^d[s_\ell,y_\ell]_n)$. The second step in the gliding-globs procedure increases by $1$ the label of each vertex in $\lambda^{-1}((\mathbb{Z}/n\mathbb{Z})\setminus\bigcup_{\ell=1}^d[s_\ell,y_\ell]_n)$; therefore, the label of $z$ is $u_1+\zeta$ after the second step. Note that the second step does not move any of the globs of liquid. The third step of the procedure changes the label in each glob of liquid to a label of the form $y_i+1$. It follows that in the labeling $\cyc\Bro_J(\lambda)$, the labels of the vertices to the left of $z$ are all of the form $y_i+1$, and the label of $z$ is $u_1+\zeta$. This means that in the labeling $\cyc^{-q}\Bro_J(\lambda)=\cyc^{-q-1}(\cyc\Bro_J(\lambda))$, the labels of the vertices to the left of $z$ are all of the form $y_i-q$ (i.e., they are in $\mathscr S$ by \eqref{eq:yS}), and the label of $z$ is $u_1+\zeta-q-1$. 
Combining this with \eqref{eq:zeta}, we find that $u_1+\zeta-q-1$ is the label in $(\mathbb{Z}/n\mathbb{Z})\setminus\mathscr S$ that appears farthest to the left in the labeling $\cyc^{-q}\Bro_J(\lambda)$. As mentioned above, $\cyc^{-q}\Bro_J$ sends $\mathcal M$ to itself, so it follows from \Cref{Lem:SameOrder} that the labels in $(\mathbb{Z}/n\mathbb{Z})\setminus\mathscr S$ appear in the order $u_1,\ldots,u_{n-d}$ in $\cyc^{-q}\Bro_J(\lambda)$. Consequently, $u_1+\zeta-q-1=u_1$. This proves that $q$ is congruent to $0$ or $-1$ modulo $n$.
We defined $q$ and $r$ so that $\gamma n=q(n-d)+r$ and $0\leq r\leq n-d-1$. This implies that $r\not\equiv -d\pmod n$. Reading the first equation modulo $n$ yields $r\equiv qd\pmod n$, so $q\not\equiv -1\pmod n$. Therefore, we must have $q\equiv 0\pmod n$ and $r=0$. Writing $q=mn$, we find that $\gamma=m(n-d)$, which completes the proof.
\end{proof}
Finally, we can complete the proof of the main result of this section.
\begin{proof}[Proof of \Cref{prop:divisibility}]
As discussed at the beginning of this section, it suffices to prove that the size of every orbit of $\TPro_\beta$ is divisible by $\lcm(d,n-d)$, where $\beta=\beta_{\mathscr S}$ is the acyclic orientation of $\mathsf{Cycle}_n$ coming from the independent set $\mathscr S$ defined above. As before, let $\lambda\in\Lambda_{\mathsf{Path}_n}$, and let $\gamma$ be the size of the orbit of $\TPro_\beta$ containing $\lambda$. \Cref{lem:divisible_n-d} tells us that $\gamma$ is divisible by $n-d$, so we just need to show that $\gamma$ is also divisible by $d$. Using the division algorithm, we can write $\gamma n=q(n-d)+r$. Since $n-d$ divides $\gamma$, we find that $r=0$ and that $q$ is divisible by $n$. Thus, it follows from \Cref{prop:TProcycBro_R} that the set $J$ is empty and that we can write $\TPro_\beta^\gamma=\cyc^{-q}(\cyc\Bro_{\mathscr R})^q=(\cyc\Bro_{\mathscr R})^q$.
Given a labeling $\sigma\in\Lambda_{\mathsf{Path}_n}$, let $\psi(\sigma)$ be the sequence obtained by reading the labels in $\mathscr S$ in the order in which they appear from left to right along the path in $\sigma$. Recall from \Cref{subsec:Description2} the two-step gliding-globs procedure for computing the action of $\cyc\Bro_{\mathscr R}$. In the first step of this procedure, none of the globs of liquid can glide through each other. In the second step, we simply cyclically permute the $d$ labels in the globs of liquid. This shows that $\psi(\cyc\Bro_{\mathscr R}(\sigma))$ is obtained from $\psi(\sigma)$ by cyclically permuting the labels in $\mathscr S$ in the cyclic order $s_1,\ldots,s_d$. It follows that every orbit of $\cyc\Bro_{\mathscr R}$ has size divisible by $d$. Since $\lambda=\TPro_\beta^\gamma(\lambda)=(\cyc\Bro_{\mathscr R})^q(\lambda)$, we find that $d$ divides $q$. The equation $\gamma n=q(n-d)$ then forces $d(n-d)$ to divide $\gamma n$. Since $\gcd(n,d)$ divides $n-d$, this implies that $d$ divides $\gamma(n/\gcd(n,d))$. But $d$ and $n/\gcd(n,d)$ are coprime, so $d$ divides~$\gamma$.
\end{proof}
\section{Orbit Structure of Permutoric Promotion}\label{sec:Orbit_Structure}
Throughout this section, we continue to fix $G$ to be the path graph $\mathsf{Path}_n$. Our primary goal is to prove \Cref{thm:main}.
\subsection{A Reformulation}\label{subsec:reformulation}
One of the advantages of \Cref{lem:counterclockwise_edges} is that it allows us to work with whichever acyclic orientation $\beta$ is most convenient for our purposes. In \Cref{sec:divisibility}, we chose to work with the acyclic orientation $\beta_{\mathscr S}$ whose sources were the elements of an independent set $\mathscr S$ and whose sinks were the elements of $\mathscr S-1$. However, in this section, we will fix $\beta$ to be the acyclic orientation of $\mathsf{Cycle}_n$ whose unique source is $d$ and whose unique sink is $n$.
The purpose of \Cref{sec:divisibility} was to prove \Cref{prop:divisibility}, which tells us that the sizes of the orbits of $\TPro_\beta$ are all divisible by $\lcm(d,n-d)$. The reason this is necessary is that it allows us to reduce the problem of determining the orbit sizes of $\TPro_\beta$ to the problem of determining the orbit sizes of $\TPro_\beta^d$. The following proposition allows us to rewrite $\TPro_\beta^d$ in a more convenient form.
\begin{proposition}\label{prop:Psi}
We have \[\TPro_\beta^d=\prod_{i=n}^1(\tau_i\tau_{i+1}\cdots\tau_{i+d-1})=(\tau_{n}\tau_{n+1}\cdots\tau_{d+n-1})\cdots(\tau_2\tau_3\cdots\tau_{d+1})(\tau_1\tau_2\cdots\tau_d).\]
\end{proposition}
\begin{proof}
Think of $(\tau_{n}\tau_{n+1}\cdots\tau_{d+n-1})\cdots(\tau_2\tau_3\cdots\tau_{d+1})(\tau_1\tau_2\cdots\tau_d)$ as a word $Y$ over the alphabet $\{\tau_1,\ldots,\tau_n\}$. Note that every letter in this alphabet appears exactly $d$ times in $Y$. By \Cref{lem:suffix}, we just need to show that if $X$ is a suffix of $Y$ and $a\to b$ is an arrow in $\beta$, then $X\!\langle a\rangle-X\!\langle b\rangle\in\{0,1\}$; this is straightforward to check directly.
\end{proof}
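The direct check at the end of this proof is easy to automate. The following Python sketch (ours, with hypothetical function names; not the companion implementation linked in the next section) builds the word $Y$ for given $n$ and $d$, encodes the arrows of the acyclic orientation of $\mathsf{Cycle}_n$ with unique source $d$ and unique sink $n$, and verifies the condition $X\!\langle a\rangle-X\!\langle b\rangle\in\{0,1\}$ for every suffix $X$ of $Y$ and every arrow $a\to b$.

```python
# Verify the suffix condition from the proof for small n and d.
def suffix_check(n, d):
    # The word Y = (tau_n ... tau_{d+n-1}) ... (tau_1 ... tau_d), written
    # left to right, with subscripts reduced into {1, ..., n}.
    Y = [((i + j - 1) % n) + 1 for i in range(n, 0, -1) for j in range(d)]
    # Arrows of the acyclic orientation of Cycle_n with unique source d
    # and unique sink n.
    arrows = ([(i, i + 1) for i in range(d, n)]
              + [(i, i - 1) for i in range(2, d + 1)]
              + [(1, n)])
    # Every letter appears exactly d times in Y.
    assert all(Y.count(a) == d for a in range(1, n + 1))
    for start in range(len(Y) + 1):
        X = Y[start:]  # a suffix of Y
        for a, b in arrows:
            if X.count(a) - X.count(b) not in {0, 1}:
                return False
    return True
```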
\begin{remark}\label{rem:Bro_d}
By combining \Cref{prop:Psi} with the identity $\cyc^{-1}\tau_{i+1}=\tau_{i}\cyc^{-1}$ and the fact that $\cyc^n$ is the identity map, one can readily check that $\TPro_\beta^d=\left(\cyc^{-1}\Bro_{\{1,\ldots,d\}}^{-1}\right)^n$.
$\triangle$
\end{remark}
Define a map $\Phi_{n,d}\colon\Lambda_{\mathsf{Path}_n}\to\Lambda_{\mathsf{Path}_n}$ by \[\Phi_{n,d}=\cyc^d\prod_{i=n-d}^1(\tau_i\tau_{i+1}\cdots\tau_{i+d-1})=\cyc^d(\tau_{n-d}\tau_{n-d+1}\cdots\tau_{n-1})\cdots(\tau_2\tau_3\cdots\tau_{d+1})(\tau_1\tau_2\cdots\tau_d).\] Using the identity $\cyc\tau_i=\tau_{i+1}\cyc$ together with \Cref{prop:Psi}, one can check that
\begin{equation}\label{eq:PhiTPro}
\Phi_{n,d}^{n/\gcd(n,d)}=\TPro_\beta^{\lcm(d,n-d)}.
\end{equation}
\begin{lemma}\label{lem:Phi_Divisible}
Every orbit of $\Phi_{n,d}\colon\Lambda_{\mathsf{Path}_n}\to \Lambda_{\mathsf{Path}_n}$ has size divisible by $n/\gcd(n,d)$.
\end{lemma}
\begin{proof}
Let $\mathsf{FS}(\overline{\mathsf{Path}}_n,\mathsf{Cycle}_n)$ be the graph with vertex set $\Lambda_{\mathsf{Path}_n}$ in which two distinct labelings $\sigma,\sigma'$ are adjacent if and only if there exists $i\in\mathbb Z/n\mathbb Z$ such that $\sigma'=\tau_i(\sigma)$. In the language of the article \cite{DefantFriends}, this is the \emph{friends-and-strangers graph} of $\overline{\mathsf{Path}}_n$ and $\mathsf{Cycle}_n$, where $\overline{\mathsf{Path}}_n$ is the complement of $\mathsf{Path}_n$. For $\sigma\in\Lambda_{\mathsf{Path}_n}$, let $H_\sigma$ be the connected component of $\mathsf{FS}(\overline{\mathsf{Path}}_n,\mathsf{Cycle}_n)$ containing $\sigma$. It follows from Theorem~4.1 and Proposition~4.4 in \cite{DefantFriends} that there is a well-defined action of the group $\langle\cyc\rangle\cong\mathbb{Z}/n\mathbb{Z}$ on the set of connected components of $\mathsf{FS}(\overline{\mathsf{Path}}_n,\mathsf{Cycle}_n)$ given by $\cyc\cdot H_\sigma=H_{\cyc(\sigma)}$; moreover, these results from \cite{DefantFriends} imply that all orbits of this action have size $n$. Note that $H_{\Phi_{n,d}(\sigma)}=\cyc^d\cdot H_\sigma$. If $\Phi_{n,d}^k(\sigma)=\sigma$, then $H_\sigma=H_{\Phi_{n,d}^k(\sigma)}=\cyc^{dk}\cdot H_\sigma$, so $k$ is divisible by $n/\gcd(n,d)$.
\end{proof}
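For small cases, the lemma can also be confirmed by brute force. The sketch below (ours, with hypothetical names) assumes the standard toric-promotion toggle convention: $\tau_i$ swaps the labels $i$ and $i+1$ when the vertices carrying them are non-adjacent in $\mathsf{Path}_n$, and otherwise does nothing. Note that the divisibility conclusion only uses the fact that $\Phi_{n,d}$ is $\cyc^d$ composed with a word of toggles, so the check is insensitive to the exact composition conventions.

```python
# Brute-force check (for small n, d) that every orbit of Phi_{n,d} has size
# divisible by n/gcd(n, d).  Labels are taken in {0, ..., n-1}, and a
# labeling is a tuple lab with lab[v] the label of the v-th path vertex.
from itertools import permutations

def phi(lab, n, d):
    """Apply Phi_{n,d} = cyc^d (tau_{n-d}...tau_{n-1})...(tau_1...tau_d),
    with the rightmost toggle acting first."""
    lab = list(lab)
    word = [i + j for i in range(n - d, 0, -1) for j in range(d)]
    for s in reversed(word):
        a, b = (s - 1) % n, s % n            # tau_s swaps the labels s, s+1
        u, v = lab.index(a), lab.index(b)
        if abs(u - v) != 1:                  # ...only if their vertices are
            lab[u], lab[v] = lab[v], lab[u]  # non-adjacent in Path_n
    return tuple((x + d) % n for x in lab)   # apply cyc^d

def phi_orbit_sizes(n, d):
    sizes, seen = [], set()
    for lab in permutations(range(n)):
        if lab in seen:
            continue
        size, cur = 0, lab
        while True:
            seen.add(cur)
            size += 1
            cur = phi(cur, n, d)
            if cur == lab:
                break
        sizes.append(size)
    return sizes
```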
Let $\mathsf{Comp}_d(n)$ denote the set of compositions of $n$ with $d$ parts (i.e., $d$-tuples of positive integers that sum to $n$). There is a natural \dfn{rotation} operator $\Rot_{n,d}\colon\mathsf{Comp}_d(n)\to\mathsf{Comp}_d(n)$ defined by $\Rot_{n,d}(a_1,a_2,\ldots,a_d)=(a_2,\ldots,a_d,a_1)$.
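The set $\mathsf{Comp}_d(n)$ and the operator $\Rot_{n,d}$ can be modeled directly; the following Python sketch (ours, with hypothetical names) generates the compositions and computes the orbit sizes of $\Rot_{n,d}$.

```python
# A direct model of Comp_d(n) and the rotation operator Rot_{n,d}.
from itertools import product

def compositions(n, d):
    """All d-tuples of positive integers summing to n."""
    return [c for c in product(range(1, n - d + 2), repeat=d) if sum(c) == n]

def rot(c):
    """Rot_{n,d}(a_1, a_2, ..., a_d) = (a_2, ..., a_d, a_1)."""
    return c[1:] + c[:1]

def rot_orbit_sizes(n, d):
    sizes, seen = [], set()
    for c in compositions(n, d):
        if c in seen:
            continue
        size, cur = 0, c
        while True:
            seen.add(cur)
            size += 1
            cur = rot(cur)
            if cur == c:
                break
        sizes.append(size)
    return sizes
```

Every orbit size divides $d$, reflecting the fact that $\Rot_{n,d}$ has order $d$.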
Our goal in the next subsection will be to relate $\Phi_{n,d}$ and $\Rot_{n,d}$ via the following proposition. Recall that we write $\mathrm{Orb}_f$ for the set of orbits of an invertible map $f$.
\begin{proposition}\label{prop:PhiRot}
There is a map $\Omega\colon\mathrm{Orb}_{\Phi_{n,d}}\to\mathrm{Orb}_{\Rot_{n,d}}$ such that $|\Omega(\mathcal O)|=\frac{d}{n}|\mathcal O|$ for every $\mathcal O\in\mathrm{Orb}_{\Phi_{n,d}}$ and $|\Omega^{-1}(\widehat{\mathcal O})|=d!(n-d)!$ for every $\widehat{\mathcal O}\in\mathrm{Orb}_{\Rot_{n,d}}$.
\end{proposition}
Before proceeding to the proof of \Cref{prop:PhiRot}, let us see why it implies \Cref{thm:main}.
\begin{proof}[Proof of \Cref{thm:main} assuming \Cref{prop:PhiRot}]
Let $k_1,\ldots,k_\ell$ be the sizes of the orbits of $\Rot_{n,d}$, and let $m_i$ be the number of orbits of $\Rot_{n,d}$ of size $k_i$. Then $\{k_i^{m_i}:1\leq i\leq \ell\}$ is the multiset of orbit sizes of $\Rot_{n,d}$, where we use superscripts to denote multiplicities. If we assume \Cref{prop:PhiRot}, then we find that the multiset of orbit sizes of $\Phi_{n,d}$ is \[\left\{\left(\frac{n}{d}k_i\right)^{d!(n-d)!m_i}:1\leq i\leq \ell\right\}.\]
It then follows from \eqref{eq:PhiTPro} and \Cref{lem:Phi_Divisible} that the multiset of orbit sizes of $\TPro_\beta^{\lcm(d,n-d)}$ is \[\left\{\left(\frac{\gcd(n,d)}{n}\frac{n}{d}k_i\right)^{(n/\gcd(n,d))d!(n-d)!m_i}:1\leq i\leq \ell\right\},\] and we can then invoke \Cref{prop:divisibility} to see that the multiset of orbit sizes of $\TPro_\beta$ is \[\left\{\left(\lcm(d,n-d)\frac{\gcd(n,d)}{n}\frac{n}{d}k_i\right)^{(1/\lcm(d,n-d))(n/\gcd(n,d))d!(n-d)!m_i}: 1\leq i\leq \ell\right\}\] \[=\left\{\left((n-d)k_i\right)^{n(d-1)!(n-d-1)!m_i}: 1\leq i\leq \ell\right\}.\] Since $\Rot_{n,d}$ has order $d$, this implies that $\TPro_\beta$ has order $d(n-d)$. It is well known \cite{CSPDefinition} that the triple \[\left(\mathsf{Comp}_d(n),\Rot_{n,d},{n-1 \brack d-1}_q\right)\] exhibits the cyclic sieving phenomenon. Hence, \Cref{lem:CSP_technical} (with $f=\Rot_{n,d}$ and $g=\TPro_\beta$) implies that \[\left(\Lambda_{\mathsf{Path}_n},\TPro_\beta,n(d-1)!(n-d-1)![n-d]_{q^d}{n-1\brack d-1}_q\right)\] exhibits the cyclic sieving phenomenon.
\end{proof}
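The cyclic sieving phenomenon invoked at the end of this proof can be verified numerically on small cases. In the sketch below (ours, with hypothetical names), the Gaussian binomial ${m\brack k}_q$ is built from the recurrence ${m\brack k}_q={m-1\brack k-1}_q+q^k{m-1\brack k}_q$, and the number of compositions fixed by $\Rot_{n,d}^k$ is compared against the evaluation of ${n-1\brack d-1}_q$ at $q=\zeta^k$ for a primitive $d$-th root of unity $\zeta$.

```python
# Numerical check of the cyclic sieving phenomenon for
# (Comp_d(n), Rot_{n,d}, qbinom(n-1, d-1)).
from cmath import exp, pi
from itertools import product

def qbinom(m, k):
    """Coefficient list of the Gaussian binomial [m choose k]_q."""
    if k < 0 or k > m:
        return [0]
    if k == 0 or k == m:
        return [1]
    a = qbinom(m - 1, k - 1)             # [m-1 choose k-1]_q
    b = [0] * k + qbinom(m - 1, k)       # q^k * [m-1 choose k]_q
    return [x + y for x, y in zip(a + [0] * (len(b) - len(a)), b)]

def csp_holds(n, d):
    comps = [c for c in product(range(1, n - d + 2), repeat=d) if sum(c) == n]
    poly = qbinom(n - 1, d - 1)
    zeta = exp(2j * pi / d)              # Rot_{n,d} has order d
    for k in range(d):
        # Count the compositions fixed by Rot^k.
        fixed = sum(1 for c in comps if c[k:] + c[:k] == c)
        value = sum(coef * zeta ** (k * j) for j, coef in enumerate(poly))
        if abs(value - fixed) > 1e-9:
            return False
    return True
```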
\subsection{Sliding Stones and Colliding Coins}
Our aim is now to prove \Cref{prop:PhiRot}, which, as we have just seen, implies our main theorem about the orbit structure of permutoric promotion. Code implementing several of the combinatorial constructions described in this section can be found at \url{https://cocalc.com/hrthomas/permutoric-promotion/implementation}.
For each integer $k$, let $\theta_k=\tau_{q+d+1-r}$, where $q$ and $r$ are the unique integers satisfying $k=qd+r$ and $1\leq r\leq d$. Let \[\nu_\ell=\theta_{d\ell}\theta_{d\ell-1}\cdots\theta_{d(\ell-1)+2}\theta_{d(\ell-1)+1}.\] Observe that $\theta_{k+dn}=\theta_k$ for all integers $k$. We have \[\Phi_{n,d}=\cyc^d\theta_{d(n-d)}\cdots\theta_2\theta_1=\cyc^d\nu_{n-d}\cdots\nu_2\nu_1.\] By combining the identity $\cyc\tau_i=\tau_{i+1}\cyc$ with the fact that $\cyc^n$ is the identity map, one can easily verify that
\begin{equation}\label{eq:Phi_Order}
\Phi_{n,d}^{m}=\theta_{md(n-d)}\cdots\theta_2\theta_1=\nu_{m(n-d)}\cdots\nu_2\nu_1
\end{equation}
whenever $m$ is a positive multiple of $n/\gcd(n,d)$.
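For concreteness, the reindexing $k\mapsto\theta_k$ can be computed as follows (a small sketch of ours; the subscript of $\tau$ is taken in $\mathbb{Z}/n\mathbb{Z}$ and represented in $\{1,\ldots,n\}$).

```python
# Compute the subscript of theta_k = tau_{q+d+1-r}, where k = q*d + r
# with 1 <= r <= d.
def theta_subscript(k, n, d):
    q, r = (k - 1) // d, (k - 1) % d + 1  # the unique q, r with k = q*d + r
    return (q + d - r) % n + 1            # q + d + 1 - r, reduced into {1,...,n}
```

As sanity checks, one can confirm the identity $\theta_{d(t-1)+i}=\tau_{t+d-i}$ used below and the periodicity $\theta_{k+dn}=\theta_k$.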
Define a \dfn{state} to be a pair $(\sigma,t)\in\Lambda_{\mathsf{Path}_n}\times\mathbb{Z}$; we call $\sigma$ the \dfn{labeling} of the state and say that the state is at \dfn{time} $t$. A \dfn{timeline} is a bi-infinite sequence $\mathcal T=(\sigma_t,t)_{t\in\mathbb{Z}}$ of states such that $\sigma_t=\nu_t(\sigma_{t-1})$ for all $t\in\mathbb{Z}$. Note that every state belongs to a unique timeline. For $\sigma\in\Lambda_{\mathsf{Path}_n}$, let $\mathcal T_\sigma$ be the unique timeline containing the state $(\sigma,0)$.
Let $v_1,\ldots,v_n$ be the vertices of $\mathsf{Path}_n$, listed from left to right. For each $\ell\in[n]$, let ${{\bf v}}_\ell$ be a formal symbol associated to $v_\ell$; we will call ${{\bf v}}_\ell$ a \dfn{replica}. Let $\mathsf{s}_1,\ldots,\mathsf{s}_d$ be stones of different colors. We define the \dfn{stones diagram} of a state $(\sigma,t)$ as follows. Start with a copy of $\mathsf{Cycle}_n$. Place $\mathsf{s}_1,\ldots,\mathsf{s}_d$ on the vertices $t+d,\ldots,t+1$, respectively. Then place each replica ${{\bf v}}_\ell$ on the vertex $\sigma(v_\ell)$ of $\mathsf{Cycle}_n$; if this vertex already has a stone sitting on it, then we place the replica on top of the stone.
Suppose we have a timeline $\mathcal T=(\sigma_t,t)_{t\in\mathbb{Z}}$. We want to describe how the stones diagrams of the states evolve as we move through the timeline. We will imagine transforming the stones diagram of $(\sigma_{t-1},t-1)$ into that of $(\sigma_t,t)$ via a sequence of $d$ \dfn{small steps}. The $i$-th small step moves $\mathsf{s}_i$ one space clockwise. The labeling $(\theta_{d(t-1)+i}\cdots\theta_{d(t-1)+1})(\sigma_{t-1})$ is obtained from $(\theta_{d(t-1)+i-1}\cdots\theta_{d(t-1)+1})(\sigma_{t-1})$ by applying the toggle operator $\theta_{d(t-1)+i}=\tau_{t+d-i}$. If this operator has no effect (i.e., $(\theta_{d(t-1)+i}\cdots\theta_{d(t-1)+1})(\sigma_{t-1})=(\theta_{d(t-1)+i-1}\cdots\theta_{d(t-1)+1})(\sigma_{t-1})$), then we do not move any of the replicas ${{\bf v}}_1,\ldots,{{\bf v}}_n$ during the $i$-th small step (in this case, the stone $\mathsf{s}_i$ slides from underneath one replica to underneath a different replica). Otherwise, $\theta_{d(t-1)+i}$ has the effect of swapping the labels $t+d-i$ and $t+d-i+1$, so we swap the replicas that were sitting on the vertices $t+d-i$ and $t+d-i+1$ (in this case, the stone $\mathsf{s}_{i}$ carries the replica sitting on it along with it as it slides). \Cref{Fig5} illustrates these small steps for a particular example with $n=8$, $d=3$, and $t=1$.
\begin{figure}
\caption{The $d=3$ small steps transforming the stones diagram of a state at time $0$ into the stones diagram of the next state at time $1$.}
\label{Fig5}
\end{figure}
Now consider $d$ coins of different colors such that the set of colors of the coins is the same as the set of colors of the stones. We define the \dfn{coins diagram} of a state $(\sigma,t)$ as follows. Start with a copy of $\mathsf{Path}_n$. For each $i\in[d]$, there is a replica ${{\bf v}}_\ell$ sitting on the stone $\mathsf{s}_i$ in the stones diagram of $(\sigma,t)$; place the coin with the same color as the stone $\mathsf{s}_i$ on the vertex $v_\ell$ (see \Cref{FigNewA,Fig6}). Note that the set of vertices of $\mathsf{Path}_n$ occupied by coins is $\{\sigma^{-1}(t+1),\ldots,\sigma^{-1}(t+d)\}$.
Consider how the coins diagrams evolve as we move through a timeline $\mathcal T=(\sigma_t,t)_{t\in\mathbb{Z}}$. Let us transform the stones diagram of $(\sigma_{t-1},t-1)$ into that of $(\sigma_t,t)$ via the $d$ small steps described above. Let ${{\bf v}}_\ell$ be the replica sitting on $\mathsf{s}_i$ right before the $i$-th small step, and let ${{\bf v}}_{\ell'}$ be the replica sitting on the vertex one step clockwise from $\mathsf{s}_i$ right before the $i$-th small step. When $\mathsf{s}_i$ moves in the $i$-th small step, it will either carry its replica ${{\bf v}}_\ell$ along with it or slide from underneath ${{\bf v}}_\ell$ to underneath ${{\bf v}}_{\ell'}$; the latter occurs if and only if $\ell'=\ell\pm 1$. In the former case, no coins move during the $i$-th small step; in the latter case, a coin moves from $v_\ell$ to the adjacent vertex $v_{\ell'}$ (which did not have a coin on it right before this small step).
If we watch the coins diagrams evolve as we move through the timeline, then, by the previous paragraph, the coins will move around on $\mathsf{Path}_n$, but they will never move through each other. Therefore, it makes sense to name the coins $\mathsf{c}_1,\ldots,\mathsf{c}_d$ in the order they appear along the path from left to right, and this naming only depends on the timeline (not the specific state in the timeline). Define a \dfn{traffic jam} to be a maximal nonempty collection of coins that occupy a contiguous block of vertices (so the vertices occupied by the coins in a particular traffic jam induce a connected subgraph of $\mathsf{Path}_n$). Note that a traffic jam could have just a single coin. We say a traffic jam \dfn{touches a wall} if it contains a coin that occupies $v_1$ or $v_n$.
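Identifying the traffic jams from the coin positions is a simple grouping computation, sketched below (ours, with hypothetical names).

```python
# Group coin positions into traffic jams.
def traffic_jams(positions):
    """Split coin positions (vertices in {1, ..., n}) into maximal runs of
    consecutive vertices; each run is one traffic jam."""
    jams = []
    for p in sorted(positions):
        if jams and p == jams[-1][-1] + 1:
            jams[-1].append(p)   # extend the current run
        else:
            jams.append([p])     # start a new traffic jam
    return jams

def touches_wall(jam, n):
    """A traffic jam touches a wall if it contains v_1 or v_n."""
    return jam[0] == 1 or jam[-1] == n
```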
At any time, a coin has an idea of the direction in which it expects to move next (our coins are conscious now). Note that this is not necessarily the direction in which it will move next because it may change its mind before it moves. The way that a coin $\mathsf{c}$ decides which direction it expects to move is as follows. Suppose $\mathsf{c}$ currently occupies vertex $v_j$, and suppose the coins in the traffic jam containing $\mathsf{c}$ occupy the vertices $v_r,v_{r+1},\ldots,v_s$. The coin $\mathsf{c}$ looks at the stones diagram and reads ahead in the clockwise direction, starting from the stone of its color, and it determines whether it first sees ${{\bf v}}_{r-1}$ or ${{\bf v}}_{s+1}$. If it first sees ${{\bf v}}_{r-1}$, it expects to move left; if it first sees ${{\bf v}}_{s+1}$, it expects to move right. If $r-1$ is not the index of a replica (because $r=1$), the first replica that $\mathsf{c}$ sees will be ${{\bf v}}_{s+1}$; similarly, if $s+1$ is not the index of a replica (because $s=n$), the first replica $\mathsf{c}$ sees will be ${{\bf v}}_{r-1}$.
\Cref{FigNewA} shows several stones diagrams and coins diagrams. In each coins diagram, an arrow has been placed over each coin to indicate which direction it expects to move.
\begin{lemma}\label{lem:move_expected}
When a coin moves, it moves in the direction that it expects to move. \end{lemma}
\begin{proof}
Suppose $\mathsf{c}$ occupies $v_j$ and is about to move left. The stone of the same color as $\mathsf{c}$ is under the
replica ${{\bf v}}_j$, and the next replica clockwise is ${{\bf v}}_{j-1}$, which has no stone under it. It follows that $\mathsf{c}$ is the leftmost coin in a traffic jam. When $\mathsf{c}$ reads through the stones diagram looking for one of two replicas, it first sees ${{\bf v}}_{j-1}$, so it indeed expects to move left. The analysis for coins moving to the right is the same.
\end{proof}
The following lemma tells us under what circumstances a coin can change its mind about which way it is going to move.
\begin{lemma}
Let $\mathsf{c}$ be a coin, and let $\mathsf{s}$ be the stone with the same color as $\mathsf{c}$. Consider a small step, and let $v_j$ be the vertex occupied by $\mathsf{c}$ right before the small step. Let $v_{r},v_{r+1},\ldots,v_s$ be the vertices occupied by the coins in the traffic jam that contains $\mathsf{c}$ right before the small step. During this small step, $\mathsf{c}$ changes its mind about which direction it expects to move if and only if one of the following situations occurs:
\begin{itemize}
\item The stone $\mathsf{s}$ slides through ${{\bf v}}_{r-1}$ or ${{\bf v}}_{s+1}$ and carries ${{\bf v}}_j$ along with it as it slides (so $\mathsf{c}$ does not move in the coins diagram), and the traffic jam containing $\mathsf{c}$ does not touch a wall (so $1<r\leq s<n$).
\item The coin $\mathsf{c}$ moves, and the traffic jam that contains $\mathsf{c}$ after the small step touches a wall.
\end{itemize}
\end{lemma}
\begin{proof}
First of all, note that $\mathsf{c}$ will not change its mind about which way it is going to move except during a small step when $\mathsf{s}$ moves. Indeed, even though the traffic jam containing $\mathsf{c}$ may change during other small steps, it is straightforward to check that these small steps will not change the direction that $\mathsf{c}$ expects to move.
While $\mathsf{c}$ is in a traffic jam that touches a wall, there is only one way that it can expect to move: away from that wall. Thus, it does not change its mind about which way it is moving before it actually moves, but it does change its mind the moment it arrives in the traffic jam (i.e., when the second bulleted item in the statement of the lemma is satisfied).
Now consider a small step during which $\mathsf{c}$ moves, and suppose the traffic jam that contains $\mathsf{c}$ after the small step does not touch a wall. For simplicity, let us assume that $\mathsf{c}$ expects to move left before this small step. Then during the small step, $\mathsf{c}$ does in fact move left (by \cref{lem:move_expected}). Let us say $\mathsf{c}$ moves from $v_j$ to $v_{j-1}$. Then during this small step, $\mathsf{s}$ slides from underneath the replica ${{\bf v}}_j$ to underneath the replica ${{\bf v}}_{j-1}$. After the small step, the vertices occupied by the coins in the traffic jam containing $\mathsf{c}$ are $v_r,v_{r+1},\ldots,v_{j-1}$ for some $r\in\{2,\ldots,j-1\}$, so $\mathsf{c}$ looks in the stones diagram for either ${{\bf v}}_{r-1}$ or ${{\bf v}}_j$. It will certainly see ${{\bf v}}_{r-1}$ first since ${{\bf v}}_j$ is one step behind $\mathsf{s}$ (in the clockwise order) at this time. Thus, $\mathsf{c}$ still expects to move left after the small step.
Finally, consider the situation from the first bulleted item in the statement of the lemma. Let us again assume for simplicity that $\mathsf{c}$ expects to move left before the small step. Then $\mathsf{s}$ slides through ${{\bf v}}_{r-1}$. After the small step, when $\mathsf{c}$ reads through the stones diagram to determine which direction it expects to move, it again searches for ${{\bf v}}_{r-1}$ and ${{\bf v}}_{s+1}$ (because no coins moved during the small step). It will see ${{\bf v}}_{s+1}$ before ${{\bf v}}_{r-1}$ because ${{\bf v}}_{r-1}$ is now right behind $\mathsf{s}$ in the clockwise order. So $\mathsf{c}$ expects to move right after the small step.
\end{proof}
The importance of understanding the direction in which a coin expects to move is that it will enable us to understand \dfn{collisions}.
There are \dfn{two-coins collisions}, which involve two coins that occupy adjacent vertices of $\mathsf{Path}_n$; there are \dfn{left-wall collisions}, which can occur when $\mathsf{c}_1$ occupies $v_1$; and there are \dfn{right-wall collisions}, which can occur when $\mathsf{c}_d$ occupies $v_n$.
The prototypical examples of collisions are when two non-adjacent coins move to become adjacent or when a coin moves to become adjacent to a wall, but other examples are possible when traffic jams of size greater than one are involved.
The precise definition of a two-coins collision that occurs in a traffic jam that does not touch a wall is as follows. We say coins $\mathsf{c}_i$ and $\mathsf{c}_{i+1}$ are \dfn{butting heads} if they occupy adjacent vertices and $\mathsf{c}_i$ expects to move right while $\mathsf{c}_{i+1}$ expects to move left. We say $\mathsf{c}_{i}$ and $\mathsf{c}_{i+1}$ are involved in a two-coins collision at a small step if they are not butting heads immediately before the small step and they are butting heads immediately after the small step.
This can happen because the two coins were not adjacent prior to the small step, but it can also happen because the two coins were adjacent and one of them changed its mind about the direction it expected to move.
The definition has to be slightly modified in a traffic jam that touches a wall. Consider first the case when a small step occurs during which a coin $\mathsf{c}$ moves so as to join a traffic jam that touches the wall. At the same time, $\mathsf{c}$ changes its mind so that it now expects to move away from the wall that the traffic jam touches. Nonetheless, if there is a coin $\mathsf{c}'$ adjacent to $\mathsf{c}$ after the small step, we still count this as a two-coins collision between $\mathsf{c}$ and $\mathsf{c}'$. (We can imagine that there was a brief instant of time right after $\mathsf{c}$ moved to join the traffic jam but right before it changed its mind about which way it expected to move, thus resulting in $\mathsf{c}$ butting heads with $\mathsf{c}'$ very briefly.)
Similarly, if $\mathsf{c}$ moved onto $v_1$ (respectively, $v_n$) during this small step (so it is in a traffic jam of size $1$ that touches a wall), then we count this as a left-wall (respectively, right-wall) collision.
We now discuss how to define a collision that occurs in the ``interior'' of a traffic jam of size at least $2$ that touches a wall. In such a traffic jam, all the coins always want to move away from the wall, so by the above definition, there would be no collisions within the traffic jam. However, this is not what we want. Instead, suppose we are considering a coin $\mathsf{c}_i$ that occupies $v_j$. Assume the coins in the traffic jam containing $\mathsf{c}_i$ occupy vertices $v_{1},v_2,\ldots,v_k$, where $j<k$. Thus, we are assuming the traffic jam touches the left wall, but symmetric considerations apply if the traffic jam touches the right wall.
The stone with the same color as $\mathsf{c}_i$ carries the replica
${{\bf v}}_{j}$. Suppose there is a small step during which the stone with the same color as $\mathsf{c}_i$ slides through ${{\bf v}}_{k+1}$, carrying ${{\bf v}}_j$ along with it as it slides. Note that $\mathsf{c}_i$ does not move during this small step. In this case, we say $\mathsf{c}_i$ collides with $\mathsf{c}_{i-1}$ (or is involved in a left-wall collision if $i=1$). To explain heuristically why this collision occurs, we can imagine that $\mathsf{c}_i$ has a ``flicker of confusion'' when it sees the stone with its same color slide through ${{\bf v}}_{k+1}$. When it sees this, $\mathsf{c}_i$ ``thinks'' it should change its mind and expect to move left. But then it realizes that it cannot expect to move left because it is in a traffic jam that touches the left wall, so it quickly goes back to expecting to move right. During this brief instant, the collision occurs because $\mathsf{c}_i$ ``thinks'' it should be butting heads with $\mathsf{c}_{i-1}$ (or with the left wall if $i=1$).
We say a collision occurs at time $t$ if it occurs during a small step between times $t-1$ and $t$.
\begin{example}
Suppose $n=6$ and $d=3$. \Cref{FigNewA} shows some stones diagrams and coins diagrams evolving over time. At each stage, the arrow over a coin points in the direction that the coin expects to move. Collisions are indicated in the coins diagrams by stars, and each star is colored to indicate which stone moves in the small step during which the collision occurs. Note that the right-wall collision at time $5$ (marked with a gold star in the first small step after time $4$) occurs because $\mathsf{c}_3$ has a ``flicker of confusion'' when the gold stone $\mathsf{s}_1$ slides through ${{\bf v}}_4$ (carrying ${{\bf v}}_6$ along with it as it slides).
$\lozenge$
\end{example}
\begin{figure}
\caption{The evolution of stones diagrams and coins diagrams over time, with each individual small step illustrated. At each moment, we have drawn an arrow over each coin to indicate which direction it expects to move. Each collision is indicated by a star whose color is the same as that of the stone that moved to cause the collision. Each labeling is depicted in red numbers below the path.}
\label{FigNewA}
\end{figure}
\begin{example}\label{exam:3}
Suppose $n=6$ and $d=3$. \Cref{Fig6} shows the stones diagrams and coins diagrams of a particular timeline at times $0,1,\ldots,17$. For brevity, we have not shown the individual small steps. All of the collisions that occur at time $t$ (i.e., during the small steps between time $t-1$ and time $t$) are indicated in the coins diagram at time $t$. The color of each star can be used to determine the small step during which the corresponding collision occurs. One can check that the states in this timeline are periodic with period $18$.
$\lozenge$
\end{example}
\begin{figure}
\caption{The stones diagrams and coins diagrams of the states in a timeline at times $0,1,\ldots,17$. Here, $n=6$ and $d=3$. The collisions that occur during the small steps between times $t-1$ and $t$ are represented by color-coded stars in the coins diagram at time $t$. Each labeling is depicted by the red numbers below the path. }
\label{Fig6}
\end{figure}
Let $\mathrm{Coll}_{\mathcal T}$ be the set of all collisions that take place in the coins diagrams of the states of the timeline $\mathcal T$. We define a directed graph with vertex set $\mathrm{Coll}_{\mathcal T}$ by drawing an arrow from a collision $\kappa$ to a collision $\kappa'$ whenever there is a coin involved in both $\kappa$ and $\kappa'$ and the collision $\kappa$ occurs before $\kappa'$. Let $(\mathrm{Coll}_{\mathcal T},\leq_{\mathcal T})$ be the transitive closure of this directed graph. Let ${\bf H}_{\mathcal T}$ be the Hasse diagram of $(\mathrm{Coll}_{\mathcal T},\leq_{\mathcal T})$. This Hasse diagram, which will be one of our primary tools, has the shape of a bi-infinite chain link fence (see \Cref{Fig7}). Suppose $\kappa_1\lessdot_{\mathcal T}\kappa_2$ is an edge in ${\bf H}_{\mathcal T}$. Then $\kappa_1$ and $\kappa_2$ are collisions that both use some coin $\mathsf{c}$; we define the \dfn{energy} of this edge, denoted $\mathcal E(\kappa_1\lessdot_{\mathcal T}\kappa_2)$, to be the number of different vertices that $\mathsf{c}$ occupies between these two collisions, including the vertices occupied by $\mathsf{c}$ when the collisions occur. More generally, if $\kappa_1\lessdot_{\mathcal T}\kappa_2\lessdot_{\mathcal T}\cdots\lessdot_{\mathcal T}\kappa_r$ is a saturated chain in ${\bf H}_{\mathcal T}$, then we write $\mathcal E(\kappa_1\lessdot_{\mathcal T}\kappa_2\lessdot_{\mathcal T}\cdots\lessdot_{\mathcal T}\kappa_r)$ for the tuple $(\mathcal E(\kappa_1\lessdot_{\mathcal T}\kappa_2),\ldots,\mathcal E(\kappa_{r-1}\lessdot_{\mathcal T}\kappa_r))$ of energies of the edges in the chain.
\begin{example}\label{exam:4}
If $\mathcal T$ is the timeline containing the states whose stones diagrams and coins diagrams are shown in \Cref{Fig6}, then (a finite part of) ${\bf H}_{\mathcal T}$ is shown in \Cref{Fig7}. Each collision is represented by a color-coded star, and the blue number inside the star is the time when the collision occurs. Each edge is labeled by its energy.
$\lozenge$
\end{example}
\begin{figure}
\caption{A finite part of the Hasse diagram ${\bf H}_{\mathcal T}$ from \Cref{exam:4}. Each collision is represented by a color-coded star, and the blue number inside the star is the time when the collision occurs; each edge is labeled by its energy.}
\label{Fig7}
\end{figure}
A \dfn{diamond} in ${\bf H}_{\mathcal T}$ consists of collisions $\kappa_1,\kappa_2,\kappa_3,\kappa_4$ together with four edges given by cover relations $\kappa_1\lessdot_{\mathcal T}\kappa_2$, $\kappa_1\lessdot_{\mathcal T}\kappa_3$, $\kappa_2\lessdot_{\mathcal T}\kappa_4$, $\kappa_3\lessdot_{\mathcal T}\kappa_4$.
A \dfn{half-diamond} in ${\bf H}_{\mathcal T}$ consists of collisions $\kappa_1',\kappa_2',\kappa_3'$, where $\kappa_1'$ and $\kappa_3'$ are either both left-wall collisions or both right-wall collisions, together with two edges given by cover relations $\kappa_1'\lessdot_{\mathcal T}\kappa_2'$ and $\kappa_2'\lessdot_{\mathcal T}\kappa_3'$.
Our arguments in the next subsection rest on the following three lemmas.
\begin{lemma}\label{lem:half-diamond}
In any half-diamond in the Hasse diagram ${\bf H}_{\mathcal T}$, the two edges have the same energy.
\end{lemma}
\begin{proof}
Fix a half-diamond in ${\bf H}_{\mathcal T}$ with edges given by the cover relations $\kappa_1'\lessdot_{\mathcal T}\kappa_2'\lessdot_{\mathcal T}\kappa_3'$. By symmetry, we may assume $\kappa_1'$ and $\kappa_3'$ are both left-wall collisions. Then $\kappa_1'$ and $\kappa_3'$ occur when $\mathsf{c}_1$ occupies $v_1$. Say $\kappa_2'$ occurs when $\mathsf{c}_1$ occupies $v_m$. Both edges of the half-diamond have energy $m$.
\end{proof}
\begin{lemma}\label{lem:diamond}
In any diamond in the Hasse diagram ${\bf H}_{\mathcal T}$, opposite edges have the same energy.
\end{lemma}
\begin{proof}
Fix a diamond in ${\bf H}_{\mathcal T}$ with edges given by cover relations $\kappa_1\lessdot_{\mathcal T}\kappa_2$, $\kappa_1\lessdot_{\mathcal T}\kappa_3$, $\kappa_2\lessdot_{\mathcal T}\kappa_4$, $\kappa_3\lessdot_{\mathcal T}\kappa_4$. Say the collision $\kappa_1$ involves coins $\mathsf{c}_i$ and $\mathsf{c}_{i+1}$ and takes place when $\mathsf{c}_i$ occupies vertex $v_m$ and $\mathsf{c}_{i+1}$ occupies vertex $v_{m+1}$. Without loss of generality, suppose $\kappa_2$ involves $\mathsf{c}_i$ and $\kappa_3$ involves $\mathsf{c}_{i+1}$. Then $\kappa_2$ occurs when $\mathsf{c}_i$ occupies some vertex $v_k$ with $k\leq m$, and $\kappa_3$ occurs when $\mathsf{c}_{i+1}$ occupies some vertex $v_\ell$ with $\ell\geq m+1$. The collision $\kappa_4$ involves the coins $\mathsf{c}_i$ and $\mathsf{c}_{i+1}$. We claim that $\kappa_4$ occurs when $\mathsf{c}_i$ occupies $v_{k+\ell-m-1}$ and $\mathsf{c}_{i+1}$ occupies $v_{k+\ell-m}$; this will imply that the edges $\kappa_1\lessdot_{\mathcal T}\kappa_2$ and $\kappa_3\lessdot_{\mathcal T} \kappa_4$ both have energy $m-k+1$ and that the edges $\kappa_1\lessdot_{\mathcal T}\kappa_3$ and $\kappa_2\lessdot_{\mathcal T} \kappa_4$ both have energy $\ell-m$.
By symmetry, we may assume that $m-k\leq \ell-m-1$. Let $x,y\in[d]$ be such that $\mathsf{s}_x$ and $\mathsf{s}_y$ are the stones with the same colors as $\mathsf{c}_i$ and $\mathsf{c}_{i+1}$, respectively.
Consider starting at the time when $\kappa_1$ occurs and watching the stones diagrams and coins diagrams evolve as we move forward in time. We will assume that $k<m$, that $x>y$, and that $\mathsf{c}_i$ moves from $v_m$ to $v_{m-1}$ before $\mathsf{c}_{i+1}$ moves from $v_{m+1}$ to $v_{m+2}$; the other cases are similar. Let $t_0$ be the first time after the collision $\kappa_1$ when $\mathsf{c}_i$ moves from $v_m$ to $v_{m-1}$. The coin $\mathsf{c}_i$ will move from $v_m$ to $v_{m-1}$ and then to $v_{m-2}$ and so on until reaching $v_k$; it will then turn around and move back across $v_{k+1},\ldots,v_m$ and then continue on toward $v_{k+\ell-m-1}$. For $j\in\{k,\ldots,k+\ell-m-1\}$, let $\zeta_j$ be the amount of time that $\mathsf{c}_i$ spends on $v_j$ during this trip. The coin $\mathsf{c}_{i+1}$ stays on $v_{m+1}$ for some time after $t_0$; it then moves to the right until reaching $v_{\ell}$, where it turns around and heads back to $v_{k+\ell-m+1}$. For $j'\in\{m+1,\ldots,\ell\}$, let $\xi_{j'}$ be the amount of time after $t_0$ that $\mathsf{c}_{i+1}$ spends on $v_{j'}$ during this trip.
By analyzing the stones diagrams, one can show that $\zeta_j=\xi_{j'}=n-d$ for all $j\in\{k,\ldots,m-1\}$ and all $j'\in\{k+\ell-m+1,\ldots,\ell\}$. Similarly, $\zeta_j=\xi_{j+1}$ for all $j\in\{m,\ldots,k+\ell-m-1\}$. Let $N=(m-k)(n-d)+\sum_{j=m}^{k+\ell-m-1}\zeta_j$. At time $t_0+N$, either the coin $\mathsf{c}_i$ moves from $v_{k+\ell-m-1}$ to $v_{k+\ell-m}$, or the coin $\mathsf{c}_{i+1}$ moves from $v_{k+\ell-m+1}$ to $v_{k+\ell-m}$. It follows from the assumption that $x>y$ that, in fact, $\mathsf{c}_{i+1}$ moves from $v_{k+\ell-m+1}$ to $v_{k+\ell-m}$ at time $t_0+N$. This proves the claim.
\end{proof}
\begin{example}\label{exam:2}
Suppose $n=6$ and $d=3$, and let $\mathcal T$ be the timeline from \Cref{exam:3,exam:4}. Let $\kappa_1,\kappa_3,\kappa_4$ be the collisions that occur at times $6,10,13$, respectively, and let $\kappa_2$ be the two-coins collision at time $8$. In the notation of the proof of \Cref{lem:diamond}, we have $i=2$, $m=3$, $k=2$, $\ell=6$, and $t_0=8$. We have $\zeta_2=\xi_6=3=n-d$, $\zeta_3=\xi_4=1$, and $\zeta_4=\xi_5=1$. Thus, $N=(m-k)(n-d)+\sum_{j=m}^{k+\ell-m-1}\zeta_j=5$. As explained in the proof of \Cref{lem:diamond}, the coin $\mathsf{c}_3$ moves from $v_6$ to $v_5$ at time $t_0+N=13$.
$\lozenge$
\end{example}
\begin{lemma} \label{lem:new}
If the two edges of a half-diamond in the Hasse diagram ${\bf H}_{\mathcal T}$ have energy $m$, then the amount of time between the collisions at the bottom and the top of the half-diamond is $m(n-d)$.
\end{lemma}
\begin{proof}
Without loss of generality, assume the bottom and top collisions in the half-diamond are left-wall collisions.
If $m>1$, the same style of argument as in the proof of \cref{lem:diamond} proves that the total amount of time that $\mathsf{c}_1$ spends on each of the vertices $v_1,\ldots,v_m$ is
exactly $n-d$, which proves the claim.
If $m=1$, we argue as follows.
Let $\mathsf{s}$ be the stone with the same color as $\mathsf{c}_1$. A left-wall collision can only occur during a small step in which $\mathsf{s}$ moves. Moreover, such a small step results in a left-wall collision if and only if, right before the small step occurs, the replica one space clockwise from $\mathsf{s}$ is ${{\bf v}}_{\ell}$, where $\ell$ is the smallest of all the indices of replicas that do not sit on stones at that time (equivalently, the traffic jam containing $\mathsf{c}_1$ has size $\ell-1$). The relative cyclic order of the indices of the replicas that do not sit on stones remains constant over time, so the time between these left-wall collisions is exactly $n-d$.
\end{proof}
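As a quick sanity check, \Cref{lem:new} is consistent with the data shown in \Cref{Fig7} (where $n=6$ and $d=3$): consecutive left-wall collisions occur there at times $2$ and $8$, and the two edges of the half-diamond between them both have energy $m=2$, in agreement with
\[8-2\;=\;6\;=\;2\cdot(6-3)\;=\;m(n-d).\]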
\subsection{The Map $\Omega$}
Equipped with \cref{lem:half-diamond,lem:diamond,lem:new}, we now turn to constructing and analyzing the map $\Omega$ from \Cref{prop:PhiRot}.
For each collision $\kappa\in\mathrm{Coll}_{\mathcal T}$, let $\varphi(\kappa)$ be the collision involving the same set of coins as $\kappa$ that occurs next after $\kappa$. In other words, if $\kappa$ is the bottom element of a diamond (respectively, half-diamond), then $\varphi(\kappa)$ is the top element of that same diamond (respectively, half-diamond). We extend this notation to saturated chains in ${\bf H}_{\mathcal T}$ (including edges) by letting \[\varphi(\kappa_1\lessdot_{\mathcal T}\kappa_2\lessdot_{\mathcal T}\cdots\lessdot_{\mathcal T}\kappa_m)=\varphi(\kappa_1)\lessdot_{\mathcal T}\varphi(\kappa_2)\lessdot_{\mathcal T}\cdots\lessdot_{\mathcal T}\varphi(\kappa_m).\] We define the \dfn{period} of ${\bf H}_{\mathcal T}$ to be the smallest positive integer $p$ such that $e$ and $\varphi^p(e)$ have the same energy for every edge $e$ of ${\bf H}_{\mathcal T}$. A \dfn{transversal} of ${\bf H}_{\mathcal T}$ is a saturated chain $\mathscr T=(\kappa_0\lessdot_{\mathcal T}\kappa_1\lessdot_{\mathcal T}\cdots\lessdot_{\mathcal T}\kappa_d)$ such that $\kappa_0$ is a left-wall collision, $\kappa_d$ is a right-wall collision, and $\kappa_i$ involves the coins $\mathsf{c}_i$ and $\mathsf{c}_{i+1}$ for every $i\in[d-1]$. In other words, a transversal is a saturated chain that moves from left to right across ${\bf H}_{\mathcal T}$. We define the \dfn{energy composition} of $\mathscr T$ to be the tuple $\mathcal{E}(\mathscr T)=(\varepsilon_1,\ldots,\varepsilon_d)$, where $\varepsilon_i$ is the energy of the edge $\kappa_{i-1}\lessdot_{\mathcal T}\kappa_i$; note that $\mathcal{E}(\mathscr T)\in\mathsf{Comp}_d(n)$.
\begin{lemma}\label{lem:ERot}
Let $\mathcal T$ be a timeline, and let $\mathscr T$ be a transversal of ${\bf H}_{\mathcal T}$. Then \[\mathcal{E}(\varphi(\mathscr T))=\Rot_{n,d}(\mathcal{E}(\mathscr T)).\] Moreover, the period of ${\bf H}_{\mathcal T}$ is equal to the size of the orbit of $\Rot_{n,d}$ containing $\mathcal{E}(\mathscr T)$.
\end{lemma}
\begin{proof}
The second statement follows from the first because, by \Cref{lem:half-diamond,lem:diamond}, the energies of all edges in ${\bf H}_{\mathcal T}$ are determined by the energy composition of a single transversal of ${\bf H}_{\mathcal T}$. The first statement is also immediate from \Cref{lem:half-diamond,lem:diamond}.
\end{proof}
\begin{example}
Suppose $n=6$ and $d=3$. Let ${\bf H}_{\mathcal T}$ be the Hasse diagram from \Cref{Fig7}, and let $\mathscr T=(\kappa_0\lessdot_{\mathcal T}\kappa_1\lessdot_{\mathcal T}\kappa_2\lessdot_{\mathcal T}\kappa_3)$ be the transversal consisting of the collisions that occur at times $2,5,6,10$. Then $\mathcal{E}(\mathscr T)=(2,1,3)\in\mathsf{Comp}_3(6)$. The period of ${\bf H}_{\mathcal T}$ is $3$, which is the size of the $\Rot_{6,3}$-orbit containing $(2,1,3)$. The transversal $\varphi(\mathscr T)$ consists of the two collisions that occur at time $8$ together with the collisions at times $13$ and $16$. We have $\mathcal{E}(\varphi(\mathscr T))=(1,3,2)=\Rot_{6,3}(\mathcal{E}(\mathscr T))$. Similarly, $\mathcal{E}(\varphi^2(\mathscr T))=(3,2,1)=\Rot_{6,3}^2(\mathcal{E}(\mathscr T))$.
$\lozenge$
\end{example}
Let $S_r$ be the symmetric group consisting of all permutations of $[r]$. Suppose $v_{i_1},\ldots,v_{i_r}$ is a sequence of distinct vertices of $\mathsf{Path}_n$. We define the \dfn{standardization} of this sequence to be the unique permutation in $S_r$ that has the same relative order as $i_1,\ldots,i_r$ when written in one-line notation. For example, the standardization of $v_3,v_5,v_1,v_6$ is $2314$. Let $\mathcal T=(\sigma_t,t)_{t\in\mathbb{Z}}$ be a timeline. Recall that the stones $\mathsf{s}_1,\ldots,\mathsf{s}_d$ sit on the vertices $t+d,\ldots,t+1$, respectively, in the stones diagram of $(\sigma_t,t)$. Let $\stand_t(\mathcal T)$ be the standardization of the sequence $\sigma_{t}^{-1}(t+d),\ldots,\sigma_{t}^{-1}(t+1)$. Alternatively, $\stand_t(\mathcal T)$ is the permutation $\rho\colon[d]\to[d]$ such that the stone $\mathsf{s}_i$ and the coin $\mathsf{c}_{\rho(i)}$ have the same color for every $i\in[d]$. Let us also define $\overline\stand_t(\mathcal T)$ to be the standardization of $\sigma_{t}^{-1}(1),\ldots,\sigma_{t}^{-1}(t),\sigma_{t}^{-1}(t+d+1),\ldots,\sigma_{t}^{-1}(n)$ (i.e., the standardization of the sequence obtained from $\sigma_{t}^{-1}(1),\sigma_{t}^{-1}(2),\ldots,\sigma_{t}^{-1}(n)$ by deleting $\sigma_{t}^{-1}(i)$ for all $t+1\leq i\leq t+d$). It follows from the analysis of how stones diagrams evolve through a timeline that $\stand_t(\mathcal T)=\stand_{t+1}(\mathcal T)$ and $\overline\stand_t(\mathcal T)=\overline\stand_{t+1}(\mathcal T)$. In other words, $\stand_t(\mathcal T)$ and $\overline\stand_t(\mathcal T)$ only depend on the timeline $\mathcal T$ and not on the time $t$. Thus, it makes sense to drop the subscripts and just write $\stand(\mathcal T)$ and $\overline\stand(\mathcal T)$.
Note that there are $d!$ possibilities for $\stand(\mathcal T)$ and $(n-d)!$ possibilities for $\overline\stand(\mathcal T)$; this will end up being responsible for the appearance of $d!(n-d)!$ in \Cref{prop:PhiRot}.
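For instance, when $n=6$ and $d=3$ (as in \Cref{exam:4}), there are $3!=6$ possibilities for each of the two permutations, so there are
\[d!(n-d)!=3!\cdot 3!=36\]
possible pairs.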
For $k,t\in\mathbb{Z}$, let $\sigma_t^{(k)}=\cyc^{-k}(\sigma_{t+k})$. It follows immediately from the definition of a timeline that the sequence $\mathcal T^{(k)}=(\sigma_t^{(k)},t)_{t\in\mathbb{Z}}$ is also a timeline; that is, $\nu_t(\sigma_{t-1}^{(k)})=\sigma_t^{(k)}$ for all $t\in\mathbb{Z}$. Furthermore, the stones diagram of $(\sigma_t^{(k)},t)$ is obtained from that of $(\sigma_{t+k},t+k)$ by moving all stones and replicas $k$ positions counterclockwise. It follows that the coins diagrams of $(\sigma_t^{(k)},t)$ and $(\sigma_{t+k},t+k)$ are identical. Therefore, if $\kappa$ is a collision in $\mathrm{Coll}_{\mathcal T^{(k)}}$ that occurs at time $t$, then there is a collision $\psi_k(\kappa)\in\mathrm{Coll}_{\mathcal T}$ that occurs at time $t+k$. The resulting map $\psi_k\colon\mathrm{Coll}_{\mathcal T^{(k)}}\to\mathrm{Coll}_{\mathcal T}$ is an isomorphism from $(\mathrm{Coll}_{\mathcal T^{(k)}},\leq_{\mathcal T^{(k)}})$ to $(\mathrm{Coll}_{\mathcal T},\leq_{\mathcal T})$; furthermore, under this isomorphism, corresponding edges of the Hasse diagrams ${\bf H}_{\mathcal T^{(k)}}$ and ${\bf H}_{\mathcal T}$ have the same energy.
Recall that we write $\mathcal T_\sigma$ for the unique timeline containing the state $(\sigma,0)$. It follows from \Cref{lem:ERot} that the energy compositions of the transversals of ${\bf H}_{\mathcal T_\sigma}$ form a single orbit $\widetilde\Omega(\sigma)$ of $\Rot_{n,d}$. If $\mathcal T_\sigma=(\sigma_t,t)_{t\in\mathbb{Z}}$ (so $\sigma_0=\sigma$), then $\Phi_{n,d}(\sigma_0)=\sigma_0^{(n-d)}$, so $\mathcal T_{\Phi_{n,d}(\sigma_0)}=\mathcal T_\sigma^{(n-d)}$. Using the isomorphism $\psi_{n-d}$, we find that $\widetilde\Omega(\sigma_0)=\widetilde\Omega(\Phi_{n,d}(\sigma_0))$. Thus, we obtain a map \[\Omega=\Omega_{n,d}\colon\mathrm{Orb}_{\Phi_{n,d}}\to\mathrm{Orb}_{\Rot_{n,d}}\] that sends the $\Phi_{n,d}$-orbit containing a labeling $\mu$ to $\widetilde\Omega(\mu)$. We will prove that this map satisfies the conditions in \Cref{prop:PhiRot}.
\begin{lemma}\label{lem:depends_orbit}
For any labeling $\sigma\in\Lambda_{\mathsf{Path}_n}$, we have \[\stand(\mathcal T_\sigma)=\stand(\mathcal T_{\Phi_{n,d}(\sigma)})\quad\text{and}\quad\overline\stand(\mathcal T_\sigma)=\overline\stand(\mathcal T_{\Phi_{n,d}(\sigma)}).\] Hence, $\stand(\mathcal T_\sigma)$ and $\overline\stand(\mathcal T_\sigma)$ only depend on the orbit of $\Phi_{n,d}$ containing $\sigma$.
\end{lemma}
\begin{proof}
Let $\mathcal T_\sigma=(\sigma_t,t)_{t\in\mathbb{Z}}$ (so $\sigma_0=\sigma$). Let $\mu=\Phi_{n,d}(\sigma)$. The stones diagram of the state $(\sigma_0^{(n-d)},0)=(\mu,0)$ is obtained from that of $(\sigma_{n-d},n-d)$ by moving all stones and replicas $n-d$ positions counterclockwise, so $\stand(\mathcal T_\sigma)=\stand(\mathcal T_{\Phi_{n,d}(\sigma)})$. Since $(\sigma_{n-d},n-d)$ is in the timeline $\mathcal T_\sigma$ and $(\mu,0)$ is in the timeline $\mathcal T_\mu$, the permutations $\overline\stand(\mathcal T_\sigma)$ and $\overline\stand(\mathcal T_\mu)$ are the standardizations of the sequences $\sigma_{n-d}^{-1}(1),\sigma_{n-d}^{-1}(2),\ldots,\sigma_{n-d}^{-1}(n-d)$ and $\mu^{-1}(d+1),\mu^{-1}(d+2),\ldots,\mu^{-1}(n)$, respectively. But $\mu=\cyc^d(\sigma_{n-d})$, so these sequences are equal.
\end{proof}
For $\rho\in S_r$, let $\rev(\rho)$ be the permutation whose one-line notation is obtained by reversing that of $\rho$. Let $\delta\colon\mathbb{Z}/n\mathbb{Z}\to\mathbb{Z}/n\mathbb{Z}$ be the automorphism of $\mathsf{Cycle}_n$ defined by $\delta(i)=d+1-i$. Given an orbit $\widehat{\mathcal O}\in\mathrm{Orb}_{\Rot_{n,d}}$, let $\rev(\widehat{\mathcal O})\in\mathrm{Orb}_{\Rot_{n,d}}$ be the orbit obtained by reversing all the compositions in $\widehat{\mathcal O}$.
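For example, $\rev(2314)=4132$; and if $\widehat{\mathcal O}$ is the $\Rot_{6,3}$-orbit containing the composition $(2,1,3)$ from \Cref{Fig7}, then
\[\rev(\widehat{\mathcal O})\ni\rev\big((2,1,3)\big)=(3,1,2).\]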
\begin{lemma}\label{lem:reverse}
For every $\sigma\in\Lambda_{\mathsf{Path}_n}$, we have \[\stand(\mathcal T_{\delta\circ\sigma})=\rev(\stand(\mathcal T_\sigma))\quad\text{and}\quad\overline\stand(\mathcal T_{\delta\circ\sigma})=\rev(\overline\stand(\mathcal T_\sigma)).\] Furthermore, $\widetilde\Omega(\delta\circ\sigma)=\rev(\widetilde\Omega(\sigma))$.
\end{lemma}
\begin{proof}
The first statement is immediate from the definitions. To see why the second statement is true, note that we can obtain the coins diagrams of the states in $\mathcal T_{\delta\circ\sigma}$ by ``going backward in time'' through the coins diagrams of the states in $\mathcal T_\sigma$ and permuting the colors of the coins. To be more precise, let us write $\mathcal T_\sigma=(\sigma_t,t)_{t\in\mathbb{Z}}$ and $\mathcal T_{\delta\circ\sigma}=(\sigma_t',t)_{t\in\mathbb{Z}}$ (so $\sigma_0=\sigma$ and $\sigma_0'=\delta\circ\sigma$). Then for every $t\in\mathbb{Z}$, the coins diagram of $(\sigma_t',t)$ is obtained from that of $(\sigma_{-t},-t)$ by permuting the colors of the coins. Let $\mathscr T=(\kappa_0\lessdot_{\mathcal T_\sigma}\cdots\lessdot_{\mathcal T_\sigma}\kappa_d)$ be a transversal of ${\bf H}_{\mathcal T_\sigma}$ with energy composition $\mathcal E(\mathscr T)=(\varepsilon_1,\ldots,\varepsilon_d)$. Then $\widetilde\Omega(\sigma)$ is the orbit of $\Rot_{n,d}$ containing $(\varepsilon_1,\ldots,\varepsilon_d)$. If $\kappa_j$ occurs at time $t_j$ and involves $\mathsf{c}_i$, then there is a collision $\kappa_j'$ in the timeline $\mathcal T_{\delta\circ\sigma}$ that occurs at time $-t_j$ and involves $\mathsf{c}_i$ (though $\mathsf{c}_i$ may have a different color in the coins diagrams of this timeline). In particular, $\kappa_d'$ is a right-wall collision, $\kappa_0'$ is a left-wall collision, and $\kappa_d'\lessdot_{\mathcal T_{\delta\circ\sigma}}\cdots\lessdot_{\mathcal T_{\delta\circ\sigma}}\kappa_0'$ is a saturated chain in ${\bf H}_{\mathcal T_{\delta\circ\sigma}}$. We have $\mathcal E(\kappa_d'\lessdot_{\mathcal T_{\delta\circ\sigma}}\cdots\lessdot_{\mathcal T_{\delta\circ\sigma}}\kappa_0')=(\varepsilon_d,\ldots,\varepsilon_1)$.
Starting with this saturated chain, one can straightforwardly apply \Cref{lem:half-diamond,lem:diamond} to find that there is a transversal $\mathscr T'$ in ${\bf H}_{\mathcal T_{\delta\circ\sigma}}$ with energy composition $\mathcal E(\mathscr T')=(\varepsilon_d,\ldots,\varepsilon_1)$. Thus, $\widetilde\Omega(\delta\circ\sigma)$ is the orbit of $\Rot_{n,d}$ containing $(\varepsilon_d,\ldots,\varepsilon_1)$, which is $\rev(\widetilde\Omega(\sigma))$.
\end{proof}
\begin{lemma}\label{lem:scaling_factor}
For every $\mathcal O\in\mathrm{Orb}_{\Phi_{n,d}}$, we have $|\Omega(\mathcal O)|=\frac{d}{n}|\mathcal O|$.
\end{lemma}
\begin{proof}
Fix $\mathcal O\in\mathrm{Orb}_{\Phi_{n,d}}$, and let $\mathcal T=(\sigma_t,t)_{t\in\mathbb{Z}}$ be a timeline such that $\sigma_0\in\mathcal O$. Consider a transversal $\mathscr T=(\kappa_0\lessdot_{\mathcal T}\kappa_1\lessdot_{\mathcal T}\cdots\lessdot_{\mathcal T}\kappa_d)$ of ${\bf H}_{\mathcal T}$, and let $\mathcal{E}(\mathscr T)=(\varepsilon_1,\ldots,\varepsilon_d)$. Then $\mathcal{E}(\mathscr T)$ is in the orbit $\Omega(\mathcal O)$. Let us define $\varepsilon_k$ for all $k\in\mathbb{Z}$ by declaring $\varepsilon_{k+d}=\varepsilon_k$. Let $t_j$ be the time when the collision $\kappa_j$ occurs.
Consider the stones diagrams. Between times $t_0$ and $t_1$, the stone with the same color as $\mathsf{c}_1$ slides along the cycle carrying ${{\bf v}}_1$ until sliding from underneath ${{\bf v}}_1$ to underneath ${{\bf v}}_2$, which it carries until sliding underneath ${{\bf v}}_3$, and so on until it finally slides underneath ${{\bf v}}_{\varepsilon_1}$. The positions of ${{\bf v}}_1,\ldots,{{\bf v}}_{\varepsilon_1}$ throughout this interval of time are completely determined by the value of $\varepsilon_1$, the permutations $\stand(\mathcal T)$ and $\overline\stand(\mathcal T)$, and the residue of $t_0$ modulo $n$. It follows that $t_1-t_0$ is determined by $\varepsilon_1$, $\stand(\mathcal T)$, $\overline\stand(\mathcal T)$, and the residue of $t_0$ modulo $n$. Between times $t_1$ and $t_2$, the stone with the same color as $\mathsf{c}_2$ slides along the cycle carrying ${{\bf v}}_{\varepsilon_1+1}$ until sliding from underneath ${{\bf v}}_{\varepsilon_1+1}$ to underneath ${{\bf v}}_{\varepsilon_1+2}$, which it carries until sliding underneath ${{\bf v}}_{\varepsilon_1+3}$, and so on until it finally slides underneath ${{\bf v}}_{\varepsilon_1+\varepsilon_2}$. The positions of ${{\bf v}}_{\varepsilon_1+1},\ldots,{{\bf v}}_{\varepsilon_1+\varepsilon_2}$ throughout this interval of time are determined by the pair $(\varepsilon_1,\varepsilon_2)$, the permutations $\stand(\mathcal T)$ and $\overline\stand(\mathcal T)$, and the residue of $t_1$ modulo $n$. Thus, $t_2-t_1$ is determined by $(\varepsilon_1,\varepsilon_2)$, $\stand(\mathcal T)$, $\overline\stand(\mathcal T)$, and the residue of $t_0$ modulo $n$. In general, the values of $t_1-t_0,\ldots,t_d-t_{d-1}$ are determined by the energy composition $(\varepsilon_1,\ldots,\varepsilon_d)$, the permutations $\stand(\mathcal T)$ and $\overline\stand(\mathcal T)$, and the residue of $t_0$ modulo $n$.
Let $p$ be the period of ${\bf H}_{\mathcal T}$. By \Cref{lem:ERot}, $p$ is equal to $|\Omega(\mathcal O)|$, the size of the orbit of $\Rot_{n,d}$ containing the composition $\mathcal{E}(\mathscr T)=(\varepsilon_1,\ldots,\varepsilon_d)$. Hence, $\varepsilon_1+\cdots+\varepsilon_p=\frac{p}{d}(\varepsilon_1+\cdots+\varepsilon_d)=pn/d$. Let $t_j^*$ be the time when the collision $\varphi^p(\kappa_j)$ occurs. Using \Cref{lem:half-diamond,lem:diamond}, we find that the edges in the half-diamond of ${\bf H}_{\mathcal T}$ between $\varphi^{i-1}(\kappa_0)$ and $\varphi^i(\kappa_0)$ both have energy $\varepsilon_i$. Therefore, by \Cref{lem:new}, the time between the collisions $\varphi^{i-1}(\kappa_0)$ and $\varphi^i(\kappa_0)$ is $(n-d)\varepsilon_i$. This shows that $\varphi^p(\kappa_0)$ occurs at time $t_0^*=t_0+(n-d)(\varepsilon_1+\cdots+\varepsilon_p)=t_0+pn(n-d)/d$. Because $p$ is the size of an orbit of $\Rot_{n,d}$, it is divisible by $d/\gcd(n,d)$; this implies that $t_0^*\equiv t_0\pmod{n}$. By the definition of $p$, the transversal $\varphi^p(\mathscr T)$ has the same energy composition $(\varepsilon_1,\ldots,\varepsilon_d)$ as $\mathscr T$. Since the permutations $\stand(\mathcal T)$ and $\overline\stand(\mathcal T)$ only depend on $\mathcal T$, it follows from the preceding paragraph that $t_j^*-t_{j-1}^*=t_j-t_{j-1}$ for all $1\leq j\leq d$; consequently, $t_j^*=t_j+pn(n-d)/d$ for all $0\leq j\leq d$. From this, we deduce that $\sigma_t=\sigma_{t+pn(n-d)/d}$ for all $t\in\mathbb{Z}$. In fact, $pn/d$ is the smallest positive integer $\ell$ such that $\sigma_t=\sigma_{t+\ell (n-d)}$ for all $t\in\mathbb{Z}$ (otherwise, we could reverse this argument to find that the period of ${\bf H}_{\mathcal T}$ is smaller than $p$).
According to \eqref{eq:Phi_Order}, we have $\Phi_{n,d}^{pn/d}=\nu_{pn(n-d)/d}\cdots\nu_2\nu_1$, so $\Phi_{n,d}^{pn/d}(\sigma_0)=\sigma_{pn(n-d)/d}=\sigma_0$. Hence, $|\mathcal O|$ divides $pn/d$. On the other hand, since \Cref{lem:Phi_Divisible} tells us that $|\mathcal O|$ is divisible by $n/\gcd(n,d)$, we can use \eqref{eq:Phi_Order} to find that \[\sigma_{0}=\Phi_{n,d}^{|\mathcal O|}(\sigma_0)=(\nu_{|\mathcal O|(n-d)}\cdots\nu_2\nu_1)(\sigma_0)=\sigma_{|\mathcal O|(n-d)}.\] Since $|\mathcal O|(n-d)$ is divisible by $n$ (by \Cref{lem:Phi_Divisible}), we have $\nu_{t+|\mathcal O|(n-d)}=\nu_t$ for all integers $t$. Consequently, $\sigma_t=\sigma_{t+|\mathcal O|(n-d)}$ for all integers $t$. Appealing to the last sentence in the previous paragraph, we deduce that $|\mathcal O|\geq pn/d=\frac{n}{d}|\Omega(\mathcal O)|$. As $|\mathcal O|$ divides $pn/d$, the proof is complete.
\end{proof}
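To illustrate \Cref{lem:scaling_factor} numerically, take $n=6$ and $d=3$, and let $\mathcal O$ be the $\Phi_{6,3}$-orbit corresponding to the timeline $\mathcal T$ of \Cref{exam:4}. We have seen that $\Omega(\mathcal O)$ is the $\Rot_{6,3}$-orbit containing $(2,1,3)$, which has size $3$, so the lemma yields
\[|\mathcal O|=\frac{n}{d}\,|\Omega(\mathcal O)|=\frac{6}{3}\cdot 3=6,\]
which is consistent with \Cref{lem:Phi_Divisible}, since $|\mathcal O|$ is divisible by $n/\gcd(n,d)=2$.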
Recall that \Cref{lem:depends_orbit} tells us that $\stand(\mathcal T_\sigma)$ and $\overline \stand(\mathcal T_\sigma)$ only depend on the orbit of $\Phi_{n,d}$ containing $\sigma$. In order to complete the proof of \Cref{prop:PhiRot}, we just need to show that $|\Omega^{-1}(\widehat{\mathcal O})|= d!(n-d)!$ for every $\widehat{\mathcal O}\in\mathrm{Orb}_{\Rot_{n,d}}$. We will do this by showing that for each pair of permutations $(\rho,\overline\rho)\in S_d\times S_{n-d}$, there exists a unique orbit $\mathcal O\in\Omega^{-1}(\widehat{\mathcal O})$ such that $\stand(\mathcal T_\sigma)=\rho$ and $\overline\stand(\mathcal T_\sigma)=\overline \rho$ for every $\sigma\in\mathcal O$. We start by proving existence; uniqueness will then follow from a simple counting argument. We encourage the reader to consult \Cref{exam:5} while reading the proof of the next lemma.
\begin{lemma}\label{lem:inequality}
Suppose $\widehat{\mathcal O}\in\mathrm{Orb}_{\Rot_{n,d}}$ and $(\rho,\overline\rho)\in S_d\times S_{n-d}$. There exists an orbit $\mathcal O\in\Omega^{-1}(\widehat{\mathcal O})$ such that $\stand(\mathcal T_\sigma)=\rho$ and $\overline\stand(\mathcal T_\sigma)=\overline \rho$ for every $\sigma\in\mathcal O$.
\end{lemma}
\begin{proof}
If $d=1$, then the result is obvious because $\widehat{\mathcal O}=\mathsf{Comp}_{1}(n)=\{(n)\}$. Therefore, we may assume $d\geq 2$ and proceed by induction on $d$. It follows from \Cref{lem:reverse} that the following two statements are equivalent:
\begin{enumerate}
\item There exists an orbit $\mathcal O\in\Omega^{-1}(\widehat{\mathcal O})$ such that $\stand(\mathcal T_\sigma)=\rho$ and $\overline\stand(\mathcal T_\sigma)=\overline \rho$ for every $\sigma\in\mathcal O$.
\item There exists an orbit $\mathcal O\in\Omega^{-1}(\rev(\widehat{\mathcal O}))$ such that $\stand(\mathcal T_\sigma)=\rev(\rho)$ and $\overline\stand(\mathcal T_\sigma)=\rev(\overline \rho)$ for every $\sigma\in\mathcal O$.
\end{enumerate}
Therefore, we may assume\footnote{This assumption might seem innocuous, but it is actually imperative for our argument. Thus, \cref{lem:reverse} really is quite crucial.} without loss of generality that the number $1$ appears to the left of the number $2$ in the one-line notation of $\rho$ (otherwise, replace $\rho$, $\overline{\rho}$, and $\widehat{\mathcal O}$ by $\rev(\rho)$, $\rev(\overline\rho)$, and $\rev(\widehat{\mathcal O})$, respectively).
Since $d<n$, every composition in $\widehat{\mathcal O}$ has a part that is strictly greater than $1$. Thus, we may choose a composition $(\varepsilon_1,\ldots,\varepsilon_d)\in\widehat{\mathcal O}$ such that $\varepsilon_2\geq 2$. We will also assume for simplicity that $\varepsilon_1\geq 2$; the case when $\varepsilon_1=1$ is similar. Let $\rho'$ be the permutation in $S_{d-1}$ obtained from $\rho$ by deleting the entry $1$ and decreasing the remaining entries by $1$. Let $\overline \rho'$ be the permutation in $S_{n-d-\varepsilon_1+1}$ obtained from $\overline \rho$ by deleting the entries $1,\ldots,\varepsilon_1-1$ and decreasing the remaining entries by $\varepsilon_1-1$. Let $\widehat{\mathcal O}'$ be the orbit of $\Rot_{n-\varepsilon_1,d-1}$ containing $(\varepsilon_2,\ldots,\varepsilon_d)$. By induction, there exists an orbit $\mathcal O'\in\Omega_{n-\varepsilon_1,d-1}^{-1}(\widehat{\mathcal O}')$ such that $\stand(\mathcal T_{\sigma'})=\rho'$ and $\overline\stand(\mathcal T_{\sigma'})=\overline \rho'$ for every $\sigma'\in\mathcal O'$ (the timeline $\mathcal T_{\sigma'}$ is defined with the parameters $n-\varepsilon_1$ and $d-1$ replacing $n$ and $d$).
Fix $\sigma_0'\in\mathcal O'$, and consider the timeline $\mathcal T_{\sigma_0'}=(\sigma_t',t)_{t\in\mathbb{Z}}$ (defined with the parameters $n-\varepsilon_1$ and $d-1$). Let us identify $\mathsf{Path}_{n-\varepsilon_1}$ with the subgraph of $\mathsf{Path}_n$ obtained by deleting the vertices $v_1,\ldots,v_{\varepsilon_1}$. Thus, the leftmost vertex in $\mathsf{Path}_{n-\varepsilon_1}$ is $v_{\varepsilon_1+1}$, and the replicas appearing in the stones diagrams of states in $\mathcal T_{\sigma_0'}$ are ${{\bf v}}_{\varepsilon_1+1},\ldots,{{\bf v}}_n$. Let
$\kappa_1'\lessdot_{\mathcal T_{\sigma_0'}}\cdots\lessdot_{\mathcal T_{\sigma_0'}}\kappa_d'$ be a transversal of ${\bf H}_{\mathcal T_{\sigma_0'}}$ with energy composition $(\varepsilon_2,\ldots,\varepsilon_d)$. Let $k+1$ be the first time after the collision $\kappa_1'$ when the leftmost coin moves. Because $\varepsilon_2\geq 2$, the leftmost coin occupies $v_{\varepsilon_1+1}$ in the coins diagram of $(\sigma_k',k)$ and occupies $v_{\varepsilon_1+2}$ in the coins diagram of $(\sigma_{k+1}',k+1)$. Let $v_\eta$ be the vertex of $\mathsf{Path}_{n-\varepsilon_1}$ such that $\sigma_k'(v_\eta)=d+k\in\mathbb{Z}/(n-\varepsilon_1)\mathbb{Z}$. In the stones diagram of $(\sigma_k',k)$, the replica ${{\bf v}}_\eta$ sits one space clockwise from the consecutive block of stones.
We can construct the stones diagram of a state $(\mu,m)$ with $\mu\in\Lambda_{\mathsf{Path}_n}$ from the stones diagram of $(\sigma_k',k)$ by inserting vertices to replace $\mathsf{Cycle}_{n-\varepsilon_1}$ by $\mathsf{Cycle}_n$, adding one stone, and adding the replicas ${{\bf v}}_1,\ldots,{{\bf v}}_{\varepsilon_1}$. When we do this, we make sure to keep the stones on the consecutive block of vertices $m+d,\ldots,m+1$, and we make sure that the replicas that were sitting on stones remain on stones. We place the replica ${{\bf v}}_{\varepsilon_1}$ on the newly inserted stone. We can also ensure that $\stand(\mu,m)=\rho$ and $\overline\stand(\mu,m)=\overline\rho$, and we can choose $m$ so that ${{\bf v}}_{\eta}$ is on the vertex $m+d+1$. (In fact, these conditions uniquely determine $\mu$ and uniquely determine $m$ modulo $n$.) Let $\sigma_0$ be the unique labeling in $\Lambda_{\mathsf{Path}_n}$ such that the timeline $\mathcal T_{\sigma_0}$ contains the state $(\mu,m)$.
Consider watching the coins diagrams of the states in $\mathcal T_{\sigma_0}$ evolve over time. At time $m$, the coins $\mathsf{c}_1$ and $\mathsf{c}_2$ occupy $v_{\varepsilon_1}$ and $v_{\varepsilon_1+1}$, respectively. At time $m+1$, the coin $\mathsf{c}_2$ moves to $v_{\varepsilon_1+2}$, and $\mathsf{c}_1$ stays on $v_{\varepsilon_1}$ (we are using the fact that $1$ appears to the left of $2$ in $\rho$). This implies (since $\varepsilon_1\geq 2$) that the last collision involving $\mathsf{c}_1$ that occurred at or before time $m$ must have been a two-coins collision involving $\mathsf{c}_1$ and $\mathsf{c}_2$; let us call this collision $\kappa_1$ and say that it occurred at time $m'$. After time $m+1$, $\mathsf{c}_1$ will move leftward until reaching $v_1$, where it will take part in a left-wall collision $\kappa_0^*$. Meanwhile, $\mathsf{c}_2$ will travel rightward until reaching $v_{\varepsilon_1+\varepsilon_2}$, where it will collide with $\mathsf{c}_3$ in a two-coins collision $\kappa_2$. Then $\mathsf{c}_3$ will move rightward until reaching $v_{\varepsilon_1+\varepsilon_2+\varepsilon_3}$, where it will collide with $\mathsf{c}_4$ in a two-coins collision $\kappa_3$, and so on. Eventually, $\mathsf{c}_d$ moves rightward and takes part in a right-wall collision $\kappa_d$. The key observation here is that throughout this process, in the stones diagrams, any stone carrying a replica ${{\bf v}}_\ell$ with $\ell\geq \varepsilon_1+2$ will just slide through any replica ${{\bf v}}_{\ell'}$ with $\ell'\leq\varepsilon_1$. This means that the replicas ${{\bf v}}_1,\ldots,{{\bf v}}_{\varepsilon_1}$ that we inserted when passing from $\mathsf{Cycle}_{n-\varepsilon_1}$ to $\mathsf{Cycle}_n$ will not affect where the collisions $\kappa_2,\kappa_3,\ldots,\kappa_d$ occur. This is why $\kappa_2,\ldots,\kappa_d$ occur at the same places (though possibly at different times) as $\kappa_2',\ldots,\kappa_d'$, respectively.
We find that $\mathcal E(\kappa_1\lessdot_{\mathcal T_{\sigma_0}}\kappa_0^*)=\varepsilon_1$ and $\mathcal E(\kappa_1\lessdot_{\mathcal T_{\sigma_0}}\kappa_2\lessdot_{\mathcal T_{\sigma_0}}\cdots\lessdot_{\mathcal T_{\sigma_0}}\kappa_d)=(\varepsilon_2,\ldots,\varepsilon_d)$. The edge $\kappa_1\lessdot_{\mathcal T_{\sigma_0}}\kappa_0^*$ is the top edge in a half-diamond; let $\kappa_0\lessdot_{\mathcal T_{\sigma_0}}\kappa_1$ be the bottom edge of the same half-diamond. Then \Cref{lem:half-diamond} tells us that $\mathcal E(\kappa_0\lessdot_{\mathcal T_{\sigma_0}}\kappa_1)=\varepsilon_1$, so the transversal $\kappa_0\lessdot_{\mathcal T_{\sigma_0}}\kappa_1\lessdot_{\mathcal T_{\sigma_0}}\cdots\lessdot_{\mathcal T_{\sigma_0}}\kappa_d$ of ${\bf H}_{\mathcal T_{\sigma_0}}$ has energy composition $(\varepsilon_1,\ldots,\varepsilon_d)$. Thus, $\widetilde\Omega(\sigma_0)$ is the orbit of $\Rot_{n,d}$ containing $(\varepsilon_1,\ldots,\varepsilon_d)$. If $\mathcal O$ is the orbit of $\Phi_{n,d}$ containing $\sigma_0$, then $\Omega(\mathcal O)=\widehat{\mathcal O}$. Furthermore, $\mathsf{stand}(\mathcal T_{\sigma_0})=\rho$ and $\overline{\mathsf{stand}}(\mathcal T_{\sigma_0})=\overline\rho$. According to \Cref{lem:depends_orbit}, we have $\mathsf{stand}(\mathcal T_{\sigma})=\rho$ and $\overline{\mathsf{stand}}(\mathcal T_{\sigma})=\overline\rho$ for all $\sigma\in\mathcal O$.
\end{proof}
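The orbit bookkeeping above can be sanity-checked numerically. The following is an illustrative sketch (not from the paper), assuming that $\Rot_{n,d}$ acts on $\mathsf{Comp}_d(n)$ by cyclically rotating the parts of a composition, as the energy compositions of the transversals suggest:

```python
from itertools import combinations
from math import comb

def compositions(n, d):
    # Compositions of n into d positive parts (stars and bars).
    return [tuple(b - a for a, b in zip((0,) + bars, bars + (n,)))
            for bars in combinations(range(1, n), d - 1)]

def rot_orbit(comp):
    # Orbit of a composition under cyclic rotation of its parts.
    return {comp[i:] + comp[:i] for i in range(len(comp))}

n, d = 8, 3
comps = compositions(n, d)
assert len(comps) == comb(n - 1, d - 1)   # |Comp_d(n)| = C(n-1, d-1)

# The orbits of the rotation action partition Comp_d(n).
assert set().union(*(rot_orbit(c) for c in comps)) == set(comps)

# The orbit of (4, 3, 1) from the running example has size 3.
assert rot_orbit((4, 3, 1)) == {(4, 3, 1), (3, 1, 4), (1, 4, 3)}
```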
\begin{figure}
\caption{The stones diagrams and coins diagrams of the states at times $0,1,\ldots,7$ in the timeline $\mathcal T_{\sigma_0'}$.}
\label{Fig8}
\end{figure}
\begin{example}\label{exam:5}
Let us illustrate the proof of \Cref{lem:inequality}. Suppose $n=8$, $d=3$, $\rho=132$, and $\overline \rho=52413$. Let $\widehat{\mathcal O}$ be the orbit of $\Rot_{8,3}$ containing the composition $(\varepsilon_1,\varepsilon_2,\varepsilon_3)=(4,3,1)$. Note that $1$ appears before $2$ in $\rho$ and that $\varepsilon_1\geq 2$. We have $\rho'=21$ and $\overline\rho'=21$.
We can choose $\sigma_0'$ to be the labeling such that the stones diagrams and coins diagrams of the states of $\mathcal T_{\sigma_0'}$ at times $0,1,\ldots,7$ are shown in \Cref{Fig8}. One can check that the states in this timeline are periodic with period $8$. We can choose the transversal $\kappa_1'\lessdot_{\mathcal T_{\sigma_0'}}\kappa_2'\lessdot_{\mathcal T_{\sigma_0'}}\kappa_3'$ so that $\kappa_1'$ is the left-wall collision at time $0$, $\kappa_2'$ is the two-coins collision at time $3$, and $\kappa_3'$ is the right-wall collision at time $5$. We have $k=1$ and $\eta=7$.
\Cref{Fig9} illustrates how we construct the stones diagram of $(\mu,m)$ from that of $(\sigma_1',1)$. In this example, $m=2$. Four vertices were inserted to transform $\mathsf{Cycle}_4$ into $\mathsf{Cycle}_8$, and the vertices were then renamed. Since $\eta=7$, we have placed ${{\bf v}}_7$ on the vertex $m+d+1=6$. Note that the standardization of ${{\bf v}}_4,{{\bf v}}_6,{{\bf v}}_5$ is $132=\rho$ and that the standardization of ${{\bf v}}_8,{{\bf v}}_2,{{\bf v}}_7,{{\bf v}}_1,{{\bf v}}_3$ is $52413=\overline\rho$.
\Cref{Fig10} shows the stones diagrams and coins diagrams of the states in $\mathcal T_{\sigma_0}$ at times $0,\ldots,11$. (The labelings of the states in this timeline are actually periodic with period $40$, but we chose not to draw the diagrams of $40$ states.) The collision $\kappa_1$ involves $\mathsf{c}_1$ and $\mathsf{c}_2$ and occurs at time $0$. Then $\kappa_0^*$ is the left-wall collision at time $9$. The collision $\kappa_2$ involves $\mathsf{c}_2$ and $\mathsf{c}_3$ and occurs at time $6$, while $\kappa_3$ is the right-wall collision at time $11$. Observe that $\mathcal E(\kappa_1\lessdot_{\mathcal T_{\sigma_0}}\kappa_0^*)=4=\varepsilon_1$ and $\mathcal E(\kappa_1\lessdot_{\mathcal T_{\sigma_0}}\kappa_2\lessdot_{\mathcal T_{\sigma_0}}\kappa_3)=(3,1)=(\varepsilon_2,\varepsilon_3)$.
$\lozenge$
\end{example}
\begin{figure}
\caption{On the left is the stones diagram of $(\sigma_1',1)$; on the right is the stones diagram of $(\mu,m)$.}
\label{Fig9}
\end{figure}
\begin{figure}
\caption{The stones diagrams and coins diagrams of the states at times $0,1,\ldots,11$ in the timeline $\mathcal T_{\sigma_0}$.}
\label{Fig10}
\end{figure}
\begin{proof}[Proof of \Cref{prop:PhiRot}]
We know by \Cref{lem:scaling_factor} that $|\Omega(\mathcal O)|=\frac{d}{n}|\mathcal O|$ for every $\mathcal O\in\mathrm{Orb}_{\Phi_{n,d}}$. It follows from \Cref{lem:inequality} that $|\Omega^{-1}(\widehat{\mathcal O})|\geq d!(n-d)!$ for every $\widehat{\mathcal O}\in\mathrm{Orb}_{\Rot_{n,d}}$. Therefore,
\[n!=|\Lambda_{\mathsf{Path}_n}|=\sum_{\mathcal O\in\mathrm{Orb}_{\Phi_{n,d}}}|\mathcal O|=\sum_{\widehat{\mathcal O}\in\mathrm{Orb}_{\Rot_{n,d}}}\sum_{\mathcal O\in\Omega^{-1}(\widehat{\mathcal O})}|\mathcal O|=\sum_{\widehat{\mathcal O}\in\mathrm{Orb}_{\Rot_{n,d}}}|\Omega^{-1}(\widehat{\mathcal O})|\cdot\frac{n}{d}|\widehat{\mathcal O}|\] \[\geq d!(n-d)!\frac{n}{d}\sum_{\widehat{\mathcal O}\in\mathrm{Orb}_{\Rot_{n,d}}}|\widehat{\mathcal O}|=n(d-1)!(n-d)!|\mathsf{Comp}_d(n)|=n(d-1)!(n-d)!\binom{n-1}{d-1}=n!.\] This inequality must actually be an equality, so we must have $|\Omega^{-1}(\widehat{\mathcal O})|=d!(n-d)!$ for every $\widehat{\mathcal O}\in\mathrm{Orb}_{\Rot_{n,d}}$.
\end{proof}
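The closing identity $n(d-1)!(n-d)!\binom{n-1}{d-1}=n!$ in the chain above collapses by expanding the binomial coefficient; a minimal numerical check (our own sketch, not part of the proof):

```python
from math import comb, factorial

def counting_identity(n, d):
    # n * (d-1)! * (n-d)! * C(n-1, d-1), the final product in the proof.
    return n * factorial(d - 1) * factorial(n - d) * comb(n - 1, d - 1)

# The product equals n! for every 1 <= d <= n.
for n in range(1, 11):
    for d in range(1, n + 1):
        assert counting_identity(n, d) == factorial(n)
```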
As discussed at the end of \Cref{subsec:reformulation}, \Cref{prop:PhiRot} implies \Cref{thm:main}.
\section{Orbit Structure of Broken Promotion}\label{sec:orbit_broken}
In this final section, we prove \Cref{thm:broken_main2,thm:broken_main}, which describe the orbit structure of $\cyc\Bro_B$ for particular choices of the subset $B\subseteq\mathbb{Z}/n\mathbb{Z}$.
\begin{proof}[Proof of \Cref{thm:broken_main2}]
Let $\beta$ be the acyclic orientation of $\mathsf{Cycle}_n$ whose unique source is $d$ and whose unique sink is $n$. To ease notation, let $F(q)=n(d-1)!(n-d-1)![n-d]_{q^d}{n-1\brack d-1}_q$. \Cref{thm:main} tells us that $\TPro_\beta$ has order $d(n-d)$ and that the triple $(\Lambda_{\mathsf{Path}_n},\TPro_\beta,F(q))$ exhibits the cyclic sieving phenomenon. Since the sizes of the orbits of $\TPro_\beta$ are all divisible by $d$ (by \Cref{prop:divisibility}), it follows that $\TPro_\beta^{d}$ has order $n-d$ and that the triple $(\Lambda_{\mathsf{Path}_n},\TPro_\beta^{d},F(q))$ also exhibits the cyclic sieving phenomenon.
By \Cref{rem:Bro_d}, we have \[\TPro_\beta^d=\left(\cyc^{-1}\Bro_{\{1,\ldots,d\}}^{-1}\right)^n=(\Bro_{\{1,\ldots,d\}}\cyc)^{-n}=\cyc^{-1}(\cyc\Bro_{\{1,\ldots,d\}})^{-n}\cyc,\] so $\TPro_\beta^d$ and $(\cyc\Bro_{\{1,\ldots,d\}})^{n}$ have the same orbit structure. Consequently, $(\cyc\Bro_{\{1,\ldots,d\}})^{n}$ has order $n-d$, and the triple $(\Lambda_{\mathsf{Path}_n},(\cyc\Bro_{\{1,\ldots,d\}})^{n},F(q))$ satisfies the cyclic sieving phenomenon. It follows immediately from \Cref{prop:homomesy} that the sizes of the orbits of $\cyc\Bro_{\{1,\ldots,d\}}$ are all divisible by $n$. Therefore, $\cyc\Bro_{\{1,\ldots,d\}}$ has order $(n-d)n$, and if $\{k_i^{m_i}:1\leq i\leq \ell\}$ is the multiset of orbit sizes of $(\cyc\Bro_{\{1,\ldots,d\}})^n$, then $\{(nk_i)^{m_i/n}:1\leq i\leq \ell\}$ is the multiset of orbit sizes of $\cyc\Bro_{\{1,\ldots,d\}}$. According to \Cref{lem:CSP_technical}, the triple $(\Lambda_{\mathsf{Path}_n},\cyc\Bro_{\{1,\ldots,d\}},\frac{1}{n}[n]_{q^{n-d}}F(q))$ exhibits the cyclic sieving phenomenon. This completes the proof.
\end{proof}
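As a consistency check on the polynomial $F(q)$: any cyclic sieving polynomial for $\Lambda_{\mathsf{Path}_n}$ must satisfy $F(1)=|\Lambda_{\mathsf{Path}_n}|=n!$. The following sketch (our own, with the Gaussian binomial computed via the Pascal-type recurrence $\binom{m}{k}_q=\binom{m-1}{k-1}_q+q^k\binom{m-1}{k}_q$) verifies this for $(n,d)=(8,3)$:

```python
from math import factorial

def pmul(a, b):
    # Product of polynomials given as integer coefficient lists.
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

def padd(a, b):
    out = [0] * max(len(a), len(b))
    for i, x in enumerate(a):
        out[i] += x
    for j, y in enumerate(b):
        out[j] += y
    return out

def q_int(m, step=1):
    # [m]_{q^step} = 1 + q^step + ... + q^{(m-1)*step}
    out = [0] * ((m - 1) * step + 1)
    for i in range(m):
        out[i * step] = 1
    return out

def q_binom(m, k):
    # Gaussian binomial via the Pascal-type recurrence.
    if k == 0 or k == m:
        return [1]
    return padd(q_binom(m - 1, k - 1), pmul([0] * k + [1], q_binom(m - 1, k)))

def F(n, d):
    # F(q) = n (d-1)! (n-d-1)! [n-d]_{q^d} * qbinom(n-1, d-1)
    scalar = n * factorial(d - 1) * factorial(n - d - 1)
    return [scalar * c for c in pmul(q_int(n - d, step=d), q_binom(n - 1, d - 1))]

n, d = 8, 3
assert sum(F(n, d)) == factorial(n)   # F(1) = n! = 40320
```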
\begin{proof}[Proof of \Cref{thm:broken_main}]
Let $d$, $n$, $s_i$, and $\mathscr R$ be as in the statement of the theorem. Let $\beta$ be the acyclic orientation of $\mathsf{Cycle}_n$ whose sources are the elements of the set $\mathscr S=\{s_1,\ldots,s_d\}$ and whose sinks are the elements of $\mathscr S-1$. Let $F(q)=n(d-1)!(n-d-1)![n-d]_{q^d}{n-1\brack d-1}_q$. \Cref{thm:main} tells us that $\TPro_\beta$ has order $d(n-d)$ and that the triple $(\Lambda_{\mathsf{Path}_n},\TPro_\beta,F(q))$ exhibits the cyclic sieving phenomenon. Since the sizes of the orbits of $\TPro_\beta$ are all divisible by $n-d$ (by \Cref{prop:divisibility}), it follows that $\TPro_\beta^{n-d}$ has order $d$ and that the triple $(\Lambda_{\mathsf{Path}_n},\TPro_\beta^{n-d},F(q))$ also exhibits the cyclic sieving phenomenon. If we set $\gamma=n-d$, $q=n$, and $r=0$ in \Cref{prop:TProcycBro_R}, we find that $\TPro_\beta^{n-d}=(\cyc\Bro_{\mathscr R})^n$. It follows from \Cref{prop:homomesy} that the sizes of the orbits of $\cyc\Bro_{\mathscr R}$ are all divisible by $n$. Therefore, $\cyc\Bro_{\mathscr R}$ has order $dn$, and if $\{k_i^{m_i}:1\leq i\leq \ell\}$ is the multiset of orbit sizes of $(\cyc\Bro_{\mathscr R})^n$, then $\{(nk_i)^{m_i/n}:1\leq i\leq \ell\}$ is the multiset of orbit sizes of $\cyc\Bro_{\mathscr R}$. According to \Cref{lem:CSP_technical}, the triple $(\Lambda_{\mathsf{Path}_n},\cyc\Bro_{\mathscr R},\frac{1}{n}[n]_{q^d}F(q))$ exhibits the cyclic sieving phenomenon.
\end{proof}
\section{Future Directions}
\Cref{thm:toric_main} determines the orbit structure of toric promotion when $G$ is a forest. It is still open to understand the dynamics of toric promotion for other graphs, including cycle graphs.
\Cref{thm:main} determines the orbit structure of any permutoric promotion operator when $G$ is a path. It would be interesting to gain a better understanding of $\TPro_\pi$ when $G$ is another type of tree, even in the special case when $\pi^{-1}$ has $2$ cyclic descents. A natural place to start could be the case when $G$ is obtained from $\mathsf{Path}_{n-1}$ by adding a new vertex that is adjacent to $v_{n-2}$ (i.e., $G$ is the Dynkin diagram of type $D_n$).
\section*{Acknowledgements}
Colin Defant was supported by the National Science Foundation under Award No.\ 2201907 and by a Benjamin Peirce Fellowship at Harvard University. He thanks Tom Roby for initially suggesting the generalization from toric promotion to permutoric promotion. Hugh Thomas was supported by NSERC Discovery Grant RGPIN-2022-03960 and the Canada Research Chairs program, grant number CRC-2021-00120. We thank Nathan Williams for inspiring conversations.
\end{document}
\begin{document}
\thispagestyle{empty}
\begin{center}
\includegraphics[width=0.9\linewidth]{SNS.png}
Classe di SCIENZE\\
Corso di perfezionamento in MATEMATICA\\
XXXIV ciclo\\
Anno Accademico 2021-2022
{\Large $A_r$-stable curves and the Chow ring of $\overline{\cM}_3$ \par}
Tesi di perfezionamento in Matematica \\
MAT/03
\textsc{Candidato}\\ \textit{Michele Pernice}
\textsc{Relatore}\\
\textit{Prof. Angelo Vistoli} (Scuola Normale Superiore)
\end{center}
\emptypage
\pagenumbering{Roman}
\tableofcontents
\emptypage
\chapter*{Introduction}
\addcontentsline{toc}{chapter}{Introduction}
The geometry of the moduli spaces of curves has always been the subject of intensive investigations, because of its manifold implications, for instance in the study of families of curves. One of the main aspects of this investigation is the intersection theory of these spaces, which has both enumerative and geometrical implications. In his groundbreaking paper \cite{Mum}, Mumford introduced the intersection theory with rational coefficients for the moduli spaces of stable curves. He also computed the Chow ring (with rational coefficients) of $\overline{\cM}_2$, the moduli space of stable genus $2$ curves. While the rational Chow ring of $\cM_g$, the moduli space of smooth curves, is known for $2\leq g\leq 9$ (\cite{Mum}, \cite{Fab}, \cite{Iza}, \cite{PenVak}, \cite{CanLar}), the computations in the stable case are much harder. The complete description of the rational Chow ring has been obtained only for genus $2$ by Mumford and for genus $3$ by Faber in \cite{Fab}. In his PhD thesis, Faber also computed the rational Chow ring of $\overline{\cM}_{2,1}$, the moduli space of $1$-pointed stable curves of genus $2$.
Edidin and Graham introduced in \cite{EdGra} the intersection theory of global quotient stacks with integer coefficients. It is a more refined invariant but as expected, the computations for the Chow ring with integral coefficients of the moduli stack of curves are much harder than the ones with rational coefficients. To date, the only complete description for the integral Chow ring of the moduli stack of stable curves is the case of $\overline{\cM}_2$, obtained by Larson in \cite{Lar} and subsequently with a different strategy by Di Lorenzo and Vistoli in \cite{DiLorVis}. It is also worth mentioning the result of Di Lorenzo, Pernice and Vistoli regarding the integral Chow ring of $\overline{\cM}_{2,1}$, see \cite{DiLorPerVis}.
The aim of this thesis is to describe the Chow ring with $\ZZ[1/6]$-coefficients of the moduli stack $\overline{\cM}_3$ of stable genus $3$ curves. This provides a refinement of the result of Faber with a completely independent method. The approach is a generalization of the one used in \cite{DiLorPerVis}: we introduce an Artin stack, which is called the stack of $A_r$-stable curves, where we allow curves with $A_r$-singularities to appear. The idea is to compute the Chow ring of this newly introduced stack in the genus $3$ case and then, using the localization sequence, find a description for the Chow ring of $\overline{\cM}_3$. The stack $\widetilde{\mathcal M}_{g,n}$ introduced in \cite{DiLorPerVis} is contained as an open substack inside our stack. We state the main theorem.
\begin{theorem}
The Chow ring of $\overline{\cM}_3$ with $\ZZ[1/6]$-coefficients is the quotient of the graded polynomial algebra
$$\ZZ[1/6,\lambda_1,\lambda_2,\lambda_3,\delta_{1},\delta_{1,1},\delta_{1,1,1},H]$$
where
\begin{itemize}
\item[] $\lambda_1,\delta_1,H$ have degree $1$,
\item[] $\lambda_2,\delta_{1,1}$ have degree $2$,
\item[] $\lambda_3,\delta_{1,1,1}$ have degree $3$.
\end{itemize}
The quotient ideal is generated by 15 homogeneous relations, where
\begin{itemize}
\item $1$ of them is in codimension $2$,
\item $5$ of them are in codimension $3$,
\item $8$ of them are in codimension $4$,
\item $1$ of them is in codimension $5$.
\end{itemize}
\end{theorem}
At the end of the thesis, we explain how to compare the result of Faber with ours and comment about the information we lose in the process of tensoring with rational coefficients.
\subsection*{Stable $A_r$-curves and the strategy of the proof}
The strategy for the computation is the same used in \cite{DiLorPerVis} for the integral Chow ring of $\overline{\cM}_{2,1}$. Suppose we have a closed immersion of smooth stacks $\cZ \into \cX$ and we know how to compute the Chow rings of $\cZ$ and $\cX \smallsetminus \cZ$. We would like to use the well-known localization sequence
$$
\ch(\cZ) \rightarrow \ch(\cX) \rightarrow \ch(\cX \smallsetminus \cZ) \rightarrow 0
$$
to get the complete description of the Chow ring of $\cX$. To this end, we make use of a patching technique which is at the heart of the Borel-Atiyah-Segal-Quillen localization theorem, which has been used by many authors in the study of equivariant cohomology, equivariant Chow rings and equivariant K-theory. See the introduction of \cite{DiLorVis} for a more detailed discussion.
However, without information regarding the kernel of the pushforward of the closed immersion $\cZ \into \cX$, there is no hope to get the complete description of the Chow ring of $\cX$. If the top Chern class of the normal bundle $N_{\cZ|\cX}$ is a non-zero divisor inside the Chow ring of $\cZ$, we can recover $\ch(\cX)$ from $\ch(\cX \smallsetminus \cZ)$, $\ch(\cZ)$ and some patching information. We will refer to the condition on the top Chern class of the normal bundle as the \emph{gluing condition}. The gluing condition implies that the troublesome kernel is trivial. See \Cref{lem:gluing} for a more detailed statement.
Unfortunately, there is no hope that this condition is verified if $\cZ$ is a Deligne-Mumford separated stack, because in this hypothesis the integral Chow ring is torsion above the dimension of the stack. This follows from Theorem 3.2 of \cite{EdGra}. This is exactly the reason that motivated the authors of \cite{DiLorPerVis} to introduce the stack of cuspidal stable curves, which is not a Deligne-Mumford separated stack because it has some positive-dimensional affine stabilizers. However, introducing cusps is not enough in the case of $\overline{\cM}_3$ to have the gluing condition verified (for the stratification we choose). This motivated us to introduce a generalization of the moduli stack of cuspidal stable curves, allowing curves with $A_r$-singular points to appear in our stack. They are a natural generalization of nodes and cusps, and \'etale locally they are plane singularities described by the equation $y^2=x^{r+1}$.
\subsection*{Future Prospects}
As pointed out in the introduction of \cite{DiLorPerVis}, the limitations of this strategy are not clear. It seems that the more singularities we add, the more it is likely that the gluing condition is verified. However, adding more singularities implies that we have to eventually compute the relations coming from such loci, which can be hard. Moreover, we are left with a difficult problem, namely to find the right stratification for these newly introduced stacks. We hope that this strategy will be useful to study the intersection theory of $\overline{\cM}_{3,1}$ or $\overline{\cM}_{4}$. Moreover, we believe that our approach can be used to obtain a complete description for the integral Chow ring of $\overline{\cM}_3$. We have not verified the gluing condition with integer coefficients because we do not know the integral Chow ring of some of the strata. However, one can try to prove alternative descriptions for these strata, for instance using weighted blowups, and compute their integral Chow ring using these descriptions. See \cite{Ink} for an example.
\subsection*{Outline of the thesis}
\Cref{chap:1} focuses on introducing the moduli stack of $A_r$-stable curves and proving results that are useful for the computations. Specifically, \Cref{sec:1-1} is dedicated to studying the possible involutions (and relative quotients) of the complete local ring of an $A_r$-singularity. In \Cref{sec:1-2}, we define the moduli stack $\widetilde{\mathcal M}_{g,n}^r$ of $n$-pointed $A_r$-stable curves of genus $g$ and prove that it is a smooth global quotient stack. We also prove that it shares some features with the classic stable case, for example the existence of the Hodge bundle and of the contraction morphism (as defined in \cite{Knu}), which is just an open immersion (instead of an isomorphism as in the stable case). \Cref{sec:1-3} focuses on the second important actor of this computation: the hyperelliptic locus inside $\widetilde{\mathcal M}_g^r$. We introduce the moduli stack $\widetilde{\cH}_g^r$ of hyperelliptic $A_r$-stable curves of genus $g$ generalizing the definition for the stable case, prove that it is a smooth stack and that it contains the stack of smooth hyperelliptic curves of genus $g$ as a dense open substack. To do this, we give an alternative description of $\widetilde{\cH}_g^r$ as the moduli stack of cyclic covers of degree $2$ over twisted curves of genus $0$. This description is one of the main reasons why we choose $A_r$-singularities as they appear naturally in the case of ramified branching divisor. We also give a natural morphism $$\eta:\widetilde{\cH}_g^r \rightarrow \widetilde{\mathcal M}_g^r.$$
Finally, \Cref{sec:1-4} is entirely dedicated to proving that the morphism $\eta$ is a closed immersion of algebraic stacks as we expect from the stable case. The proof is long and uses combinatorics of hyperelliptic curves over algebraically closed fields for the injectivity on geometric points, deformation theory for the unramifiedness, and degenerations of families of $A_r$-curves for properness.
In \Cref{chap:2}, we compute the Chow ring of $\widetilde{\mathcal M}_3^7$. We start by introducing the stratification we use for the computations. This includes two closed substacks of codimension $1$, namely the hyperelliptic locus $\widetilde{\cH}_3$ and $\widetilde{\Delta}_1$, which parametrizes curves with at least one separating node. We have the codimension $2$ substack $\widetilde{\Delta}_{1,1}$, which parametrizes curves with at least two separating nodes. Finally, we have the codimension $3$ substack $\widetilde{\Delta}_{1,1,1}$ which parametrizes curves with three separating nodes. For this sequence of closed immersions
$$\widetilde{\Delta}_{1,1,1} \subset \widetilde{\Delta}_{1,1} \subset \widetilde{\Delta}_1$$
the gluing condition is verified even if we add only cusps, i.e. if we consider $A_2$-stable curves. The hyperelliptic locus is the reason why we need to add all the other singularities.
\Cref{sec:H3tilde} focuses on the description of $\widetilde{\cH}_3 \smallsetminus \widetilde{\Delta}_1$ as a global quotient stack and on the computation of its Chow ring. In \Cref{sec:m3tilde-open}, we describe the open complement of the two closed codimension $1$ strata and prove that the description given in \cite{DiLor2} of $\cM_3 \smallsetminus \cH_3$ as an open of the space of quartics in $\PP^2$ with a $\mathrm{GL}_3$-action extends to $\widetilde{\mathcal M}_3^7 \smallsetminus (\widetilde{\cH}_3^7 \cup \widetilde{\Delta}_1)$. Then we compute its Chow ring. Similarly, \Cref{sec:detilde-1}, \Cref{sec:detilde-1-1} and \Cref{sec:detilde-1-1-1} are dedicated to describing respectively $\widetilde{\Delta}_1\smallsetminus \widetilde{\Delta}_{1,1}$, $\widetilde{\Delta}_{1,1}\smallsetminus \widetilde{\Delta}_{1,1,1}$ and $\widetilde{\Delta}_{1,1,1}$ and to computing their Chow rings. Finally, in \Cref{sec:chow-m3tilde} we describe how to glue the information we collected in the previous sections to get the Chow ring of $\widetilde{\mathcal M}_3^7$ and to write an explicit presentation of this ring.
Finally, \Cref{chap:3} revolves around the computations of the Chow ring of $\overline{\cM}_3$. In \Cref{sec:3-1}, we study in detail the moduli stack $\cA_n$ for $n\leq r$, which parametrizes pairs $(C,p)$ where $C$ is an $A_r$-stable curve of some fixed genus and $p$ is an $A_n$-singularity. \Cref{sec:3-2} focuses on finding the generators for the ideal of relations coming from $\widetilde{\mathcal M}_3^7 \smallsetminus \overline{\cM}_3$ while in \Cref{sec:strategy} we compute the explicit description of these generators in the Chow ring of $\widetilde{\mathcal M}_3^7$. Finally, in \Cref{sec:3-4} we put all our relations together and describe the Chow ring of $\overline{\cM}_3$ and we compare our result with Faber's.
We also have added three appendices, where we prove (or cite) some results needed for the computations or some technical lemmas that are probably well known to experts. In Appendix A we describe a variant of the stack of finite flat algebras, namely the stack of finite flat extensions of degree $d$ of finite flat (curvilinear) algebras. In Appendix B we recall some results regarding blowups and pushouts in families and some conditions under which we have functoriality results. In Appendix C we generalize Proposition 4.2 of \cite{EdFul} and as a byproduct we also obtain a general formula for the stratum of $A_n$-singularities restricted to the open of the hyperelliptic locus parametrizing cyclic covers of the projective line. This formula is used several times in \Cref{sec:strategy}.
\pagenumbering{arabic}
\chapter{$A_r$-stable curves and the hyperelliptic locus}\label{chap:1}
This chapter is dedicated to the study of $A_r$-singularities and of $A_r$-stable curves. In the first section we study the possible involutions acting on the complete local ring of an $A_r$-singularity. The main result is \Cref{lem:quotient}, where we describe explicitly the possible quotients. In the second section we introduce the moduli stack $\widetilde{\mathcal M}_g^r$ of $A_r$-stable curves of genus $g$ and prove some standard results known for the (classic) stable case. The third section focuses on the moduli stack $\widetilde{\cH}_g^r$ of hyperelliptic curves and the main theorem proves an alternative description of $\widetilde{\cH}_g^r$ as the moduli stack of cyclic covers over a genus $0$ twisted curve. Finally, the last section is dedicated to proving that the moduli stack of hyperelliptic $A_r$-stable curves embeds as a closed substack inside the moduli stack of $A_r$-stable curves.
\section{$A_r$-singularities and involutions}\label{sec:1-1}
In this section we study the possible involutions acting on a singularity of type $A_r$.
Let $r$ be a non-negative integer and $k$ be an algebraically closed field with characteristic coprime with $r+1$ and $2$. The complete local $k$-algebra $$A_r:=k[[x,y]]/(y^2-x^{r+1})$$
is called an $A_r$-singularity. By definition, $A_0$ is a regular ring.
Suppose we have an involution $\sigma$ of $A_r$. Because the normalization of a noetherian ring is universal among dominant morphisms from normal rings, we know that the involution lifts to the normalization of $A_r$.
\begin{remark}\label{rem:norm}
We recall the description of the normalization of $A_r$:
\begin{itemize}
\item if $r$ is even, then the normalization is the morphism
$$ \iota: A_r \arr k[[t]] $$
defined by the associations $x\mapsto t^2$ and $y\mapsto t^{r+1}$;
\item if $r$ is odd, then the normalization is the morphism
$$ \iota: A_r \arr k[[t]]\oplus k[[t]]$$
defined by the associations $x\mapsto (t,t)$ and $y \mapsto (t^{\frac{r+1}{2}}, -t^{\frac{r+1}{2}})$.
\end{itemize}
In the even case, we are identifying $A_r$ with the subalgebra of $k[[t]]$ of power series in which every term of degree smaller than $r+1$ has even degree. In the odd case, we are identifying $A_r$ with the subalgebra of $k[[t]]\oplus k[[t]]$ of pairs of power series which coincide modulo $t^{\frac{r+1}{2}}$.
\end{remark}
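For $r$ even, the description of $A_r$ inside $k[[t]]$ can be checked on monomials: the exponents attainable in $k[[t^2,t^{r+1}]]$ form the numerical semigroup generated by $2$ and $r+1$. A small illustrative script (our own, not from the text) confirms that, up to a bound, these are exactly the even exponents together with all exponents $\geq r+1$:

```python
def attainable_exponents(r, bound):
    # Exponents of monomials lying in k[[t^2, t^{r+1}]] (r even): the
    # numerical semigroup generated by 2 and r+1, truncated at `bound`.
    reach = {0}
    frontier = [0]
    while frontier:
        e = frontier.pop()
        for g in (2, r + 1):
            if e + g <= bound and e + g not in reach:
                reach.add(e + g)
                frontier.append(e + g)
    return reach

r, bound = 4, 30
# Expected: every even exponent, plus every exponent >= r+1.
expected = {e for e in range(bound + 1) if e % 2 == 0 or e >= r + 1}
assert attainable_exponents(r, bound) == expected
```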
If $\sigma$ is an involution of $k[[t]]$, we know that the differential $d\sigma$ is an involution of $k$ as a vector space over itself, therefore there exists $\xi_{\sigma} \in k$ such that $d\sigma=\xi_{\sigma} \id$ with $\xi_{\sigma}^2=1$.
We define an endomorphism $\phi_{\sigma}$ of $k[[t]]$ by the association $t\mapsto (t+\xi_{\sigma}\sigma(t))/2$.
\begin{lemma}
In the setting above, we have that $\phi_{\sigma}$ is an automorphism and $\sigma':=\phi_{\sigma}^{-1}\sigma\phi_{\sigma}$ is the involution of $k[[t]]$ defined by the association $t\mapsto \xi_{\sigma} t$.
\end{lemma}
\begin{proof}
The fact that $d\phi_{\sigma}=\id$ implies $\phi_{\sigma}$ is an automorphism. The second statement follows from a straightforward computation.
\end{proof}
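The lemma can be illustrated concretely with truncated power series over $\QQ$. In the sketch below (our own; the involution $\sigma(t)=-t/(1+t)$ and the degree-by-degree inversion of $\phi_\sigma$ are illustrative choices), conjugating $\sigma$ by $\phi_\sigma$ indeed yields $t\mapsto \xi_\sigma t$ with $\xi_\sigma=-1$; note that applying the ring map $\phi_\sigma^{-1}\sigma\phi_\sigma$ to $t$ corresponds to the series composition $\phi_\sigma(\sigma(\phi_\sigma^{-1}(t)))$:

```python
from fractions import Fraction as Fr

N = 8  # truncate everything modulo t^(N+1)

def series(*coeffs):
    return [Fr(c) for c in coeffs] + [Fr(0)] * (N + 1 - len(coeffs))

def mul(f, g):
    # Truncated product of power series (index = degree).
    out = [Fr(0)] * (N + 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            if i + j <= N:
                out[i + j] += a * b
    return out

def compose(f, g):
    # f(g(t)) for series with zero constant term.
    out = [Fr(0)] * (N + 1)
    p = list(g)                      # p = g^i, truncated
    for i in range(1, N + 1):
        for j, c in enumerate(p):
            out[j] += f[i] * c
        p = mul(p, g)
    return out

t = series(0, 1)

# sigma(t) = -t/(1+t) = -t + t^2 - t^3 + ... is an involution with xi = -1.
sigma = series(0, *[(-1) ** k for k in range(1, N + 1)])
assert compose(sigma, sigma) == t

xi = Fr(-1)
phi = [(a + xi * b) / 2 for a, b in zip(t, sigma)]

# Build the compositional inverse psi of phi degree by degree.
psi = series(0, 1)
for k in range(2, N + 1):
    psi[k] -= compose(phi, psi)[k]
assert compose(phi, psi) == t

# Applying the ring map phi^{-1} sigma phi to t corresponds to the
# series composition phi(sigma(psi(t))); it should be t -> xi * t.
conj = compose(phi, compose(sigma, psi))
assert conj == series(0, -1)
```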
The idea is to prove the above lemma also for the algebra $A_r$ using the morphism $\phi_{\sigma}$. In fact we prove that in the even case the automorphism $\phi_{\sigma}$ restricts to an automorphism of $A_r$. Similarly, in the odd case we can construct an automorphism of $k[[t]]\oplus k[[t]]$, prove that it restricts to an automorphism of the subalgebra $A_r$ and describe explicitly the conjugation of the involution by this automorphism.
\begin{proposition}\label{prop:descr-inv}
Every non-trivial involution of $A_r$ is one of the following:
\begin{itemize}
\item[$(a)$] if $r$ is even, $\sigma:k[[x,y]]/(y^2-x^{r+1}) \arr k[[x,y]]/(y^2-x^{r+1})$ is defined by the associations $x\mapsto x$ and $y\mapsto -y$;
\item[$(b)$] if $r$ is odd and $r\geq 3$, we get that $\sigma:k[[x,y]]/(y^2-x^{r+1}) \arr k[[x,y]]/(y^2-x^{r+1})$ is defined by one of the following associations:
\begin{itemize}
\item[$(b_1)$] $x\mapsto x$ and $ y \mapsto -y$,
\item[$(b_2)$] $x\mapsto -x$ and $y \mapsto y$,
\item[$(b_3)$] $x\mapsto -x$ and $y \mapsto -y$;
\end{itemize}
\item[$(c)$] if $r=1$, we get that $\sigma:k[[x,y]]/(y^2-x^2) \arr k[[x,y]]/(y^2-x^2)$ is defined by one of the following associations:
\begin{itemize}
\item[$(c_1)$] $x\mapsto x$ and $y\mapsto -y$,
\item[$(c_2)$] $x\mapsto -x$ and $y\mapsto -y$,
\item[$(c_3)$] $x\mapsto y$ and $y\mapsto x$;
\end{itemize}
\end{itemize}
up to conjugation by an automorphism of $A_r$.
\end{proposition}
\begin{proof}
We start with the even case. We identify $\sigma$ with its lifting to the normalization of $A_r$ by abuse of notation. First of all, we know that $\sigma(t)=\xi_{\sigma}tp(t)$ where $p(t)\in k[[t]]$ with $p(0)=1$. Because $\sigma$ is induced by an involution on $A_r$, we have that $\sigma(t)^2=\sigma(t^2) \in A_r$. An easy computation shows that this implies $t^2p(t) \in A_r$.
We see that the images of the two generators of $A_r$ through the morphism $\phi_{\sigma}$ are inside $A_r$:
$$\phi_{\sigma}(t^2)=\frac{t^2+\sigma(t)^2 + 2\xi_{\sigma}t\sigma(t)}{4}= \frac{t^2+\sigma(t^2)+2t^2p(t)}{4} \in A_r$$
and
$$ \phi_{\sigma}(t^{r+1})=t^{r+1}\frac{(1+p(t))^{r+1}}{2^{r+1}}\in t^{r+1}k[[t]] \subset A_r;$$
notice that if we compute the differential of the restriction $\phi_{\sigma}\vert_{A_r}:A_r \arr A_r$ we get an endomorphism of the tangent space of $A_r$ of the form
$$
\begin{pmatrix}
1 & \star \\
0 & 1
\end{pmatrix}
$$
therefore $\phi_{\sigma}\vert_{A_r}$ is an injective morphism with surjective differential between complete noetherian rings, thus it is an automorphism. Finally, if we define $\sigma':=\phi_{\sigma}^{-1}\sigma\phi_{\sigma}$, then $\sigma'\vert_{A_r}=(\phi_{\sigma}^{-1}\vert_{A_r})(\sigma\vert_{A_r})(\phi_{\sigma}\vert_{A_r})$ and we can describe its action on the generators:
\begin{itemize}
\item $\sigma'(x)=\sigma'(t^2)=\xi_{\sigma}^2 t^2 = x$,
\item $\sigma'(y)=\sigma'(t^{r+1})=\xi_{\sigma}^{r+1} t^{r+1} = \xi_{\sigma} y$
\end{itemize}
where we know that $\xi_{\sigma}^2=1$. We can have both $\xi_{\sigma}=1$ and $\xi_{\sigma}=-1$, although the first one is just the identity.
The same idea works for the odd case. We describe the lifting $\Sigma$ of the involution $\sigma$ of $A_r$ to the normalization $k[[t]]\oplus k[[t]]$. We have two possibilities: either the involution $\sigma$ exchanges the two branches or it does not. This translates into the condition that $\Sigma$ exchanges the two connected components of the normalization (or not). Firstly, we consider the case where $\Sigma$ fixes the two connected components of the normalization, therefore we can describe $\Sigma:k[[t]]^{\oplus 2}\arr k[[t]]^{\oplus 2}$ as a matrix of the form
$$
\Sigma=
\begin{pmatrix}
\sigma_1 & 0 \\ 0 & \sigma_2
\end{pmatrix}
$$
where $\sigma_1,\sigma_2$ are involutions of $k[[t]]$. Because $\Sigma$ is induced by an involution of $A_r$, we have that $(\sigma_1(t),\sigma_2(t))\in A_r$, i.e. $\sigma_1(t)\equiv\sigma_2(t) \pmod{t^{\frac{r+1}{2}}}$.
We consider the automorphism $\Phi_{\Sigma}$ of $k[[t]]^{\oplus 2}$ which can be described as a matrix of the form
$$
\Phi_{\Sigma}:=
\begin{pmatrix}
\phi_{\sigma_1} & 0 \\ 0 & \phi_{\sigma_2}
\end{pmatrix};
$$
we have the following equalities:
$$
\Phi_{\Sigma}(t,t)=(\phi_{\sigma_1}(t),\phi_{\sigma_2}(t))=1/2(t+\xi_{\sigma_1}\sigma_1(t),t+\xi_{\sigma_2}\sigma_2(t)) \in A_r
$$
and
$$
\Phi_{\Sigma}(t^{\frac{r+1}{2}},-t^{\frac{r+1}{2}})=
(\phi_{\sigma_1}(t)^{\frac{r+1}{2}},-\phi_{\sigma_2}(t)^{\frac{r+1}{2}}) \in A_r
$$
and again we have that the differential of $\Phi_{\Sigma}\vert_{A_r}$ is of the form
$$
\begin{pmatrix}
1 & \star \\
0 & 1
\end{pmatrix}
$$
therefore $\Phi_{\Sigma}\vert_{A_r}$ is an automorphism of $A_r$. Notice that if $r\geq 3$ we have that $\xi_{\sigma_1}$ and $\xi_{\sigma_2}$ are the same, but if $r=1$ they need not be; nevertheless it is still true that $\Phi_{\Sigma}(t,t) \in A_1$.
Again, we have proved that if the involution of $A_r$ fixes the two branches, then we have only a finite number of involutions up to conjugation. If $r\geq 3$, then we have only the involution described on generators by $x\mapsto \xi x$ and $y\mapsto \xi^{\frac{r+1}{2}} y$ where $\xi^2=1$. By contrast, if $r=1$ we get more possible involutions, as $\xi_{\sigma_1}$ and $\xi_{\sigma_2}$ can be different. Specifically, we get four of them; if we consider their action on the pair of generators $(x,y)$ of $A_r$, we get the following matrices describing the four involutions:
$$
\begin{pmatrix}
1 & 0 \\
0 & 1
\end{pmatrix} ,
\begin{pmatrix}
-1 & 0 \\
0 & -1
\end{pmatrix},
\begin{pmatrix}
0 & 1 \\
1 & 0
\end{pmatrix},
\begin{pmatrix}
0 & -1 \\
-1 & 0
\end{pmatrix};
$$
notice that the third matrix is the conjugate of the fourth one by the automorphism of $A_1$ defined on the pair of generators $(x,y)$ by the following matrix:
$$
\begin{pmatrix}
-1 & 0 \\
0 & 1
\end{pmatrix}.
$$
Lastly, we consider the case when $\Sigma$ exchanges the connected components. Because it is an involution, $\Sigma$ is an automorphism of $k[[t]]^{\oplus 2}$ of the form
$$
\begin{pmatrix}
0 & \tilde{\sigma} \\
\tilde{\sigma}^{-1} & 0
\end{pmatrix}
$$
and because $\Sigma$ is induced by the involution $\sigma$ of $A_r$, we get that $\tilde{\sigma}$ induces an involution of $k[[t]]/(t^{\frac{r+1}{2}})$, i.e. $\tilde{\sigma}^2(t)=t+t^{\frac{r+1}{2}}p(t)$ with $p(t)\in k[[t]]$. Notice that if $r\geq 3$ we get that $\tilde{\sigma}(t)=\xi_{\tilde{\sigma}}tu(t)$ where $u(t)\in k[[t]]$ with $u(0)=1$ and $\xi_{\tilde{\sigma}}^2=1$. On the contrary, if $r=1$ the previous condition is empty, i.e. $\tilde{\sigma}$ can be any automorphism. Let us first consider the case $r\geq 3$. Consider the endomorphism $\Phi_{\Sigma}$ of $k[[t]]^{\oplus 2}$ of the form
$$
\Phi_{\Sigma}:=
\begin{pmatrix}
\phi_1 & 0 \\
0 & \phi_2
\end{pmatrix}
$$
where we define $\phi_1(t)=1/2(t+\xi_{\tilde{\sigma}}\tilde{\sigma}(t))$ and $\phi_2(t)=1/2(t+\xi_{\tilde{\sigma}}\tilde{\sigma}(t)+t^{\frac{r+1}{2}}p(t))$. Again, an easy computation shows that $\Phi_{\Sigma}$ restricts to the algebra $A_r$ and that this restriction is in fact an automorphism. For the case $r=1$, we can simply consider $\Phi_{\Sigma}$ of the form
$$
\Phi_{\Sigma}:=
\begin{pmatrix}
\tilde{\sigma} & 0 \\
0 & \id
\end{pmatrix}
$$
which restricts to an automorphism of $A_r$ as well. As before, if $r\geq 3$ we have that the involution is of the form $x\mapsto \xi x$ and $y \mapsto -\xi^{\frac{r+1}{2}}y$. Instead, if $r=1$ we have the involution described by the association $ x \mapsto x$ and $y \mapsto -y$.
\end{proof}
The previous proposition finally implies the description of the invariants we were looking for. Let us focus on the description of these invariant subalgebras (in the case of a non-trivial involution). We prove the following statement.
\begin{corollary}
Let $\sigma$ be a non-trivial involution of the algebra $A_r$ and let us denote by $A_r^{\sigma}$ the invariant subalgebra and by $i:A_r^{\sigma}\into A_r$ the inclusion. If we refer to the classification proved in \Cref{prop:descr-inv}, we have that
\begin{itemize}
\item[$(a)$] if $r$ is even, we have that $A_r^{\sigma} \simeq A_0$, the inclusion $i$ is faithfully flat and the fixed locus of $\sigma$ has length $r+1$ and it is a Cartier divisor;
\item[$(b)$] if $r:=2k-1$ is odd and $k\geq 2$ we have that
\begin{itemize}
\item[$(b_1)$] $A_r^{\sigma} \simeq A_0$, the inclusion $i$ is faithfully flat, the fixed locus of $\sigma$ has length $r+1$ and it is the support of a Cartier divisor;
\item[$(b_2)$] $A_r^{\sigma} \simeq A_{k-1}$, the inclusion $i$ is faithfully flat, the fixed locus of $\sigma$ has length $2$ and it is the support of a Cartier divisor;
\item[$(b_3)$] $A_r^{\sigma} \simeq A_k$, the inclusion $i$ is not flat, the fixed locus is of length $1$ and it is not the support of a Cartier divisor;
\end{itemize}
\item[$(c)$] if $r=1$, we have that
\begin{itemize}
\item[$(c_1)$] $A_1^{\sigma} \simeq A_0$, the inclusion $i$ is faithfully flat, the fixed locus of $\sigma$ has length $2$ and it is the support of a Cartier divisor;
\item[$(c_2)$] $A_1^{\sigma} \simeq A_1$, the inclusion $i$ is not faithfully flat, the fixed locus of $\sigma$ has length $1$ and it is not the support of a Cartier divisor;
\item[$(c_3)$] $A_1^{\sigma} \simeq A_1$, the inclusion $i$ is faithfully flat and the fixed locus coincides with one of two irreducible components.
\end{itemize}
\end{itemize}
\end{corollary}
\begin{proof}
Let us start with the even case. Thanks to \Cref{prop:descr-inv}, we have that the involution $\sigma$ of $A_r$ is defined by the associations $x\mapsto x$ and $y\mapsto -y$ (up to conjugation by an isomorphism of $A_r$). Therefore it is clear that $A_r^{\sigma} \simeq k[[x]]$ and the quotient morphism is induced by the inclusion $$A_0 \simeq k[[x]] \subset \frac{k[[x,y]]}{(y^2-x^{r+1})} \simeq A_r;$$
the same is true for the odd case when the involution $\sigma$ acts in the same way. In this case the algebra extension (corresponding to the quotient morphism) is faithfully flat, the fixed locus is the support of a Cartier divisor defined by the ideal $(y)$ in $A_r$ and it has length $r+1$.
Now we consider the case $r=2k-1$ with $\sigma$ acting as follows: $\sigma(x)=-x$ and $\sigma(y)=y$. A straightforward computation shows that the invariant algebra $A_r^{\sigma}$ is of the type $A_{k-1}$. To be precise, the inclusion of the invariant subalgebra can be described by the morphism
$$ i: A_{k-1}\simeq\frac{k[[x,y]]}{(y^2-x^{k})} \lhook\joinrel\longrightarrow \frac{k[[x,y]]}{(y^2-x^{2k})}\simeq A_r$$
where $i(x)=x^2$ and $i(y)=y$. In this case we get that $A_r$ is a faithfully flat $A_{k-1}$-algebra and the fixed locus is the support of a Cartier divisor defined by the ideal $(x)$ in $A_r$ and it has length $2$.
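For instance, one can verify directly that $i$ is well defined and that its image consists of invariants: in $A_r$ we have
$$
i(y)^2-i(x)^{k}=y^2-(x^2)^{k}=y^2-x^{2k}=0,
$$
while $\sigma(i(x))=(-x)^2=x^2=i(x)$ and $\sigma(i(y))=y=i(y)$.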
If $\sigma$ is defined by the associations $x\mapsto -x$ and $y \mapsto -y$ and $r=2k-1$, then the invariant subalgebra $A_r^{\sigma}$ is of type $A_k$ and the quotient morphism is defined by the inclusion
$$ i: A_k\simeq \frac{k[[x,y]]}{(y^2-x^{k+1})} \lhook\joinrel\longrightarrow \frac{k[[x,y]]}{(y^2-x^{r+1})}\simeq A_r $$
where $i(x)=x^2$ and $i(y)=xy$. In contrast with the two previous cases, $A_r$ is not a flat $A_k$-algebra and the fixed locus is not (the support of) a Cartier divisor; it is in fact defined by the ideal $(x,y)$ in $A_r$ and it has length $1$.
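As before, a direct computation confirms that $i$ is well defined and $\sigma$-equivariant: in $A_r$ we have
$$
i(y)^2-i(x)^{k+1}=(xy)^2-(x^2)^{k+1}=x^2(y^2-x^{2k})=0,
$$
while $\sigma(i(x))=(-x)^2=i(x)$ and $\sigma(i(y))=(-x)(-y)=i(y)$.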
Finally, we consider the case where $r=1$ and the action of $\sigma$ is described by $x\mapsto y$ and $y \mapsto x$. If we consider the isomorphism
$$ \frac{k[[u,v]]}{(uv)} \rightarrow \frac{k[[x,y]]}{(y^2-x^2)}$$
defined by the associations $u\mapsto y+x$ and $v\mapsto y-x$ we get that the invariant subalgebra is defined by the inclusion
$$ i: A_1\simeq \frac{k[[u,v]]}{(uv)} \lhook\joinrel\longrightarrow \frac{k[[u,v]]}{(uv)}$$
where $i(u)=u$ and $i(v)=v^2$. Notice that in this situation the algebra extension is flat but the fixed locus is not a Cartier divisor, as it is defined by $(v)$, which is a zero divisor in $A_1$.
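The computation behind this identification is short: since $\sigma(u)=\sigma(y+x)=x+y=u$ and $\sigma(v)=\sigma(y-x)=x-y=-v$, and the characteristic is different from $2$, the invariant subalgebra is generated by $u$ and $w:=v^2$, subject to the relation
$$
uw=uv^2=(uv)v=0,
$$
so $A_1^{\sigma}\simeq k[[u,w]]/(uw)\simeq A_1$.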
\end{proof}
\begin{remark}
If $r$ is odd and $r\geq 3$ (case $(b)$), we have that every involution gives a different quotient. The same is not true for the case $r=1$ as we can obtain the nodal singularity in two ways.
\end{remark}
\begin{remark}\label{rem:fix-locus}
Notice that $(c_3)$ is the only case in which the fixed locus is an irreducible component. This situation does not appear in the stack of hyperelliptic curves, as we consider only involutions with finite fixed locus.
\end{remark}
We end this section with a technical lemma which will be useful afterwards.
\begin{lemma}\label{lem:local-node-involution}
Let $(R,m)\hookrightarrow (S,n)$ be a flat extension of noetherian complete local rings over $k$ such that $$S\otimes_R R/m \simeq A_1.$$
Suppose we have an $R$-involution $\sigma$ of $S$ such that $\sigma \otimes R/m$ (seen as an involution of $A_1$) does not exchange the two irreducible components. Then there exists an $R$-isomorphism
$$ S \simeq R[[x,y]]/(xy-t) $$
where $t \in R$, such that $\sigma$ (seen as an involution of the right-hand side of the isomorphism) acts as follows: $\sigma(x)=\xi_1x$ and $\sigma(y)=\xi_2y$ for some $\xi_i \in k$ such that $\xi_i^2=1$ for $i=1,2$. Furthermore, if $\xi_1=-\xi_2$ we have $t=0$.
\end{lemma}
\begin{proof}
For the sake of notation, we still denote by $\sigma$ the involution $\sigma \otimes R/m^{n+1}$. We inductively construct elements $x_n,y_n$ in $S_n:=S\otimes R/m^{n+1}$ and $t_n \in R/m^{n+1}$ such that
\begin{itemize}
\item[1)] $\sigma(x_n)=\xi_1x_n$,
\item[2)] $\sigma(y_n)=\xi_2y_n$,
\item[3)] $x_ny_n=t_n$ in $S_n$;
\end{itemize}
for some $\xi_i \in k$ independent of $n$ such that $\xi_i^2=1$ for $i=1,2$. The case $n=0$ follows from \Cref{prop:descr-inv} and it gives us the $\xi_i$ for $i=1,2$. Suppose we have constructed $(x_n,y_n,t_n)$ with the properties listed above.
Consider two general liftings $x'_{n+1},y'_{n+1}$ in $S_{n+1}$.
We define $x''_{n+1}:=(x'_{n+1}+\xi_1\sigma(x'_{n+1}))/2$ and $y''_{n+1}:=(y'_{n+1}+\xi_2\sigma(y'_{n+1}))/2$. The pair $(x''_{n+1},y''_{n+1})$ clearly satisfies properties 1) and 2). A priori, $x''_{n+1}y''_{n+1}=t_n+h$ with $h$ an element of $S_{n+1}$ whose restriction to $S_n$ is zero. The flatness of the extension implies that
$$ \ker(S_{n+1}\rightarrow S_n)\simeq S \otimes_R (m^{n+1}/m^{n+2})\simeq S_0 \otimes_k (m^{n+1}/m^{n+2})$$ and therefore $$ h = h_0 + x''_{n+1}p(x''_{n+1}) + y''_{n+1}q(y''_{n+1})$$
where all the coefficients of the polynomials $p$ and $q$ (and clearly $h_0$) are in $m^{n+1}/m^{n+2}$. If we define
\begin{itemize}
\item $t_{n+1}:=t_n+h_0$,
\item $x_{n+1}:=x''_{n+1}+q(y''_{n+1})$,
\item $y_{n+1}:=y''_{n+1}+p(x''_{n+1})$,
\end{itemize}
the third condition above is verified, but we have to prove that the first two still hold for $x_{n+1}$ and $y_{n+1}$.
Using the fact that $\sigma(x''_{n+1}y''_{n+1})=\xi_1\xi_2x''_{n+1}y''_{n+1}$, we reduce to analyzing three cases.
If $\xi_1=\xi_2=1$, there is nothing to prove.
If $\xi_1=-\xi_2$, a computation shows that
\begin{itemize}
\item $h_0=0$,
\item $ p\equiv 0$,
\item $q(y''_{n+1})=\tilde{q}(y''^2_{n+1})$
\end{itemize}
for a suitable polynomial $\tilde{q}$ with coefficients in $m^{n+1}/m^{n+2}$.
If $\xi_1=\xi_2=-1$, a computation shows that
\begin{itemize}
\item $ p(x''_{n+1})=\tilde{p}(x''^2_{n+1})$,
\item $q(y''_{n+1})=\tilde{q}(y''^2_{n+1})$
\end{itemize}
for a suitable $\tilde{p},\tilde{q}$ polynomials with coefficients in $m^{n+1}/m^{n+2}$.
Hence $(x_{n+1},y_{n+1},t_{n+1})$ satisfies the conditions 1), 2) and 3),
therefore we have a morphism of flat $R/m^{n+1}$-algebras
$$ (R/m^{n+1})[[x_{n+1},y_{n+1}]]/(x_{n+1}y_{n+1}-t_{n+1}) \longrightarrow S_{n+1}$$
which is an isomorphism modulo $m$, therefore it is an isomorphism. If we pass to the limit we get the result.
\end{proof}
\section{$A_r$-stable curves and moduli stack}\label{sec:1-2}
Fix a nonnegative integer $r$.
\begin{definition} Let $k$ be an algebraically closed field and $C/k$ be a proper reduced connected one-dimensional scheme over $k$. We say that $C$ is an \emph{$A_r$-prestable curve} if it has at worst $A_r$-singularities, i.e. for every $p\in C(k)$, we have an isomorphism
$$ \widehat{\cO}_{C,p} \simeq k[[x,y]]/(y^2-x^{h+1}) $$
with $ 0\leq h\leq r$. Furthermore, we say that $C$ is $A_r$-stable if it is $A_r$-prestable and the dualizing sheaf $\omega_C$ is ample. An $n$-pointed $A_r$-stable curve over $k$ is an $A_r$-prestable curve together with $n$ smooth distinct closed points $p_1,\dots,p_n$ such that $\omega_C(p_1+\dots+p_n)$ is ample.
\end{definition}
\begin{remark}
Notice that an $A_r$-prestable curve is l.c.i.\ by definition, therefore the dualizing complex is in fact a line bundle.
\end{remark}
For the rest of the chapter, we fix a base field $\kappa$ in which all the primes smaller than $r+1$ are invertible. Every time we talk about genus, we mean arithmetic genus, unless otherwise specified.
\begin{remark}\label{rem:genus-count}
Let $C$ be a connected, reduced, one-dimensional, proper scheme over an algebraically closed field. Let $p$ be a rational point which is a singularity of $A_r$-type. We denote by $b:\widetilde{C}\arr C$ the partial normalization at the point $p$ and by $J_b$ the conductor ideal of $b$. Then a straightforward computation shows that \begin{enumerate}
\item if $r=2h$, then $g(C)=g(\widetilde{C})+h$;
\item if $r=2h+1$ and $\widetilde{C}$ is connected, then $g(C)=g(\widetilde{C})+h+1$,
\item if $r=2h+1$ and $\widetilde{C}$ is not connected, then $g(C)=g(\widetilde{C})+h$.
\end{enumerate}
If $\widetilde{C}$ is not connected, we say that $p$ is a separating point. Furthermore, the Noether formula gives us that $b^*\omega_C \simeq \omega_{\widetilde{C}}(J_b^{\vee})$.
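As a quick sanity check of these formulas: if $C$ is an irreducible rational curve with a single cusp ($r=2$, hence $h=1$), the normalization is $\PP^1$ and the first formula gives $g(C)=0+1=1$, in accordance with the arithmetic genus of the cuspidal plane cubic $y^2z=x^3$. Similarly, an irreducible rational curve with a single tacnode ($r=3$, hence $h=1$, with connected normalization) has $g(C)=0+1+1=2$.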
\end{remark}
Let us define the moduli stack we are interested in. Let $g$ be an integer with $g\geq 2$.
We denote by $\widetilde{\mathcal M}_{g}^r$ the category defined in the following way: an object is a proper flat finitely presented morphism $C\arr S$ over $\kappa$ such that every geometric fiber over $S$ is an $A_r$-stable curve of genus $g$. These families are called \emph{$A_r$-stable curves} over $S$. Morphisms are defined in the usual way. This is clearly a fibered category over the category of schemes over $\kappa$.
Fix a positive integer $n$. In the same way, we can define $\widetilde{\mathcal M}_{g,n}^r$ as the fibered category whose objects are the data of $A_r$-stable curves over $S$ with $n$ distinct sections $p_1,\dots,p_n$ such that every geometric fiber over $S$ is an $n$-pointed $A_r$-stable curve. These families are called \emph{$n$-pointed $A_r$-stable curves} over $S$. Morphisms are just morphisms of $n$-pointed curves.
The main result of this section is the description of $\widetilde{\mathcal M}_{g,n}^r$ as a quotient stack. Firstly, we need to prove two results which are classical in the case of ($A_1$)-stable curves.
\begin{proposition}\label{prop:openness}
Let $C \arr S$ be a proper flat finitely presented morphism with $n$ sections $s_i:S \arr C$ for $i=1,\dots,n$. There exists an open subscheme $S' \subseteq S$ with the property that a morphism $T \arr S$ factors through $S'$ if and only if the projection $T\times_{S} C \arr T$, with the sections induced by the $s_{i}$, is an $n$-pointed $A_r$-stable curve.
\end{proposition}
\begin{proof}
It is well known that a small deformation of a curve with $A_h$-singularities for $h\leq r$ still has $A_h$-singularities for $h\leq r$. Hence, after restricting to an open subscheme of $S$ we can assume that $C \arr S$ is an $A_{r}$-prestable curve. By further restricting $S$ we can assume that the sections land in the smooth locus of $C \arr S$, and are disjoint. Then the result follows from openness of ampleness for invertible sheaves.
\end{proof}
The following result is already known for canonically positive Gorenstein curves (see \cite[Theorem B and Theorem C]{Cat}). We extend the proof to the case of $n$-pointed $A_r$-stable curves. As a matter of fact, we also prove Theorem 1.8 of \cite{Knu} for $n$-pointed $A_r$-stable curves of genus $g$.
\begin{proposition}\label{prop:boundedness}
Let $(C,p_1,\dots,p_n)$ be an $n$-pointed $A_r$-stable curve over an algebraically closed field $k/\kappa$. Then
\begin{enumerate}
\item[i)] $\H^1(C,\omega_C(p_1+\dots+p_n)^{\otimes m})=0$ for every $m\geq 2$,
\item[ii)] $\omega_C(p_1+\dots+p_n)^{\otimes m}$ is very ample for every $m\geq 3$.
\item[iii)] $\omega_C(p_1+\dots+p_n)^{\otimes m}$ is normally generated for $m\geq 6$.
\end{enumerate}
\end{proposition}
\begin{proof}
We denote by $\Sigma$ the Cartier divisor $p_1+\dots+p_n$ of $C$. Using duality, i) is equivalent to
$$ \H^0(C,\omega_C^{\otimes (1-m)}(-m\Sigma))=\H^0(C,\omega_C(\Sigma)^{\otimes (1-m)}(-\Sigma))=0$$
for every $m\geq 2$. If $\Gamma$ is an irreducible component of $C$, then we denote by $a_{\Gamma}$ the degree of $\omega_C(\Sigma)\vert_{\Gamma}$, which is a positive integer by the stability condition. Thus,
$$\deg \big(\omega_C(\Sigma)^{\otimes 1-m}(-\Sigma)\big) \leq (1-m)a_{\Gamma}<0$$
for every $m\geq2$. Thus the sections of the line bundle vanish when restricted to every irreducible component, which implies that the line bundle has no non-trivial sections.
Regarding ii), we need to prove that for every pair of closed points $q_1,q_2 \in C$ (possibly $q_1=q_2$) the restriction morphism
$$ \H^0(C,\omega_C(\Sigma)^{\otimes m})\longrightarrow \omega_C(\Sigma)^{\otimes m}\otimes D $$
is surjective, where $D$ is the closed subscheme of $C$ defined by the ideal $I_D:=m_{q_1}m_{q_2}$ ($m_{q_i}$ is the ideal defining $q_i$ for $i=1,2$). This is equivalent to proving that
$$ \H^1(C,I_D\omega_C(\Sigma)^{\otimes m})=0$$
or by duality
$$ \hom_{\cO_C}(I_D, \omega_C(\Sigma)^{\otimes (1-m)}(-\Sigma))=0.$$
The proof is divided into $4$ cases, starting with the situation when both $q_1$ and $q_2$ are smooth.
First of all, notice that, if $\Gamma \subset C$ is an irreducible component and $q$ is a smooth point on $\Gamma$, then
$$\deg\big(\omega_C(\Sigma)^{\otimes (1-m)}(-\Sigma+q)\vert_{\Gamma}\big)<0$$
for $m\geq 3$. Therefore, we know the vanishing result for every pair of smooth points that do not lie in the same irreducible component. However, if $q_1,q_2 \in \Gamma$ with $\Gamma$ an irreducible component, this implies that $$\deg\big(\omega_C(\Sigma)^{\otimes (1-m)}(-\Sigma+q_1+q_2)\vert_{\tilde{\Gamma}}\big)\leq0$$
where equality is possible only for $\tilde{\Gamma}=\Gamma$. Therefore, if $C$ is not irreducible, we get the vanishing result again. Finally, if $C=\Gamma$ is integral, one can prove that
$$\deg\big(\omega_C(\Sigma)^{\otimes (1-m)}(-\Sigma+q_1+q_2)\vert_{\Gamma}\big)<0$$
by simply using the stability condition. Thus we have proved that the morphism associated to the complete linear system of $\omega_C(\Sigma)^{\otimes m}$ separates smooth points.
Suppose now that $q_1$ is singular and $q_2$ is smooth. Let us call $\pi:\tilde{C} \rightarrow C$ the partial normalization of $C$ at $q_1$. We have that (see Lemma 2.1 of \cite{Cat})
$$ \mathop{\underline{\mathrm{Hom}}}\nolimits(m_{q_1},\cO_C) \subset \pi_{*}\cO_{\tilde{C}}$$
therefore
$$\hom_{\cO_C}(I_D, \omega_C(\Sigma)^{\otimes (1-m)}(-\Sigma))\subset \H^0(\tilde{C}, \pi^*\omega_C(\Sigma)^{\otimes (1-m)}(-\Sigma+q_2)).$$
If we denote by $\cF$ the line bundle $\omega_C(\Sigma)^{\otimes (1-m)}(-\Sigma+q_2)$, then clearly the restriction of $\pi^*\cF$ to every irreducible component $\tilde{\Gamma}$ of $\tilde{C}$ has the same degree as the restriction of $\cF$ to the irreducible component $\Gamma=\pi(\tilde{\Gamma})$, because $\pi$ is finite and birational. Again, the restriction $\cF\vert_{\Gamma}$ has negative degree, therefore the vanishing result follows.
Suppose that both $q_1$ and $q_2$ are singular but distinct. Let us call $\pi:\tilde{C} \rightarrow C$ the partial normalization of $C$ at $q_1$ and $q_2$. Then we have that (see Lemma 2.1 of \cite{Cat})
$$ \mathop{\underline{\mathrm{Hom}}}\nolimits(m_{q_1}m_{q_2},\cO_C) \subset \pi_{*}\cO_{\tilde{C}}$$
therefore
$$\hom_{\cO_C}(I_D, \omega_C(\Sigma)^{\otimes (1-m)}(-\Sigma))\subset \H^0(\tilde{C}, \pi^*\omega_C(\Sigma)^{\otimes (1-m)}(-\Sigma)).$$
By the same argument again, we get the vanishing result.
The last case we need to consider is when $q:=q_1=q_2$ is singular. Let $\pi:\tilde{C} \rightarrow C$ be the partial normalization at $q$ and $M$ be the ideal generated by $\pi^{-1}m_q$ in $\tilde{C}$. Then we have that (see Lemma 2.2 of \cite{Cat})
$$ \mathop{\underline{\mathrm{Hom}}}\nolimits(m_q^2,\cO_C) \subset \pi_{*}M^{\vee}$$
which implies
$$\hom_{\cO_C}(I_D, \omega_C(\Sigma)^{\otimes (1-m)}(-\Sigma))\subset \H^0(\tilde{C}, M^{\vee}\otimes\pi^*\omega_C(\Sigma)^{\otimes (1-m)}(-\Sigma)).$$
A local computation shows that $M$ is the (invertible) ideal of definition of a subscheme of length $2$ in $\tilde{C}$. Therefore the same argument used for the case when $q_1$ and $q_2$ are smooth can be applied here to get the vanishing result.
Finally, we prove that if $L:=\omega_C(\Sigma)$ then $L^{\otimes m}$ is normally generated for $m \geq 6$. This is a simplified version of the proof in \cite{Knu}, as the author proved the same statement for $m\geq 3$, which requires more work. For simplicity, we write $\H^0(-)$ and $\H^1(-)$ instead of $\H^0(C,-)$ and $\H^1(C,-)$.
Consider the following commutative diagram for $k\geq 1$
$$
\begin{tikzcd}
\H^0(L^{\otimes 3}) \otimes \H^{0}(L^{\otimes m-3}) \otimes H^0(L^{\otimes km}) \arrow[rr] \arrow[d, "\varphi_1"] & & \H^0(L^{\otimes m})\otimes \H^0(L^{\otimes km}) \arrow[d, "\phi"] \\
\H^0(L^{\otimes m-3}) \otimes \H^0(L^{\otimes km+3}) \arrow[rr, "\varphi_2"] & & \H^0(L^{\otimes (k+1)m});
\end{tikzcd}
$$
we need to prove that $\phi$ is surjective; therefore it is enough to prove that $\varphi_1$ and $\varphi_2$ are surjective. As both $L^{\otimes 3}$ and $L^{\otimes m-3}$ are globally generated because $m\geq 6$, we can use the Generalized Lemma of Castelnuovo (see page 170 of \cite{Knu}) and reduce to proving that $\H^1(L^{\otimes (km-3)})$ and $\H^1(L^{\otimes (km+6-m)})$ are zero for $k\geq 1$. By Grothendieck duality we have
$$\H^1(L^{\otimes km-3})=\H^0(\omega_C \otimes L^{\otimes 3-km}).$$
If we focus on a single irreducible component $\Gamma$ and denote by $a_{\Gamma}$ the quantity $\deg \omega_C(\Sigma)\vert_{\Gamma}$, we have that
$$ \deg \big(\omega_{C} \otimes L^{\otimes(3-km)}\big)\vert_{\Gamma} \leq (4-km)a_{\Gamma} \leq 4-km <0$$
for $k\geq 1$, since $a_{\Gamma}\geq 1$ and $m\geq 6$. The same reasoning proves the vanishing of the other cohomology group.
\end{proof}
\begin{remark}
One can also prove that $\omega_C(\Sigma)^{\otimes m}$ is globally generated for $m\geq 2$. For our purpose, we just need to know that there exists an integer $m$, independent of the field $k$, such that $\omega_C(\Sigma)^{\otimes m}$ is very ample.
\end{remark}
Now we are ready to prove the theorem. To be precise, both the proof of the following theorem and that of \Cref{prop:openness} are adaptations of Theorem 1.3 and Proposition 1.2 of \cite{DiLorPerVis} to the more general case of $A_r$-stable curves.
\begin{theorem}\label{theo:descr-quot}
$\widetilde{\mathcal M}_{g,n}^r$ is a smooth algebraic stack of finite type over $\kappa$. Furthermore, it is a quotient stack: that is, there exists a smooth quasi-projective scheme $X$ with an action of $\mathrm{GL}_N$ for some positive $N$, such that
$ \widetilde{\mathcal M}_{g,n}^r \simeq [X/\mathrm{GL}_N]$. Moreover, it is connected.
\end{theorem}
\begin{proof}
It follows from \Cref{prop:boundedness} that if $(\pi\colon C \arr S,\Sigma_1,\dots,\Sigma_n)$ is an $n$-pointed $A_{r}$-stable curve of genus $g$, then $\pi_{*}\omega_{C/S}(\Sigma_{1}+ \dots + \Sigma_{n})^{\otimes 3}$ is a locally free sheaf of rank $N:=5g-5+3n$, and its formation commutes with base change, because of Grothendieck's base change theorem.
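The rank can be computed directly: by i) of \Cref{prop:boundedness} the first cohomology of $\omega_{C}(\Sigma_{1}+\dots+\Sigma_{n})^{\otimes 3}$ vanishes on each geometric fiber, so Riemann--Roch gives
$$
h^0\big(C,\omega_{C}(\Sigma_{1}+\dots+\Sigma_{n})^{\otimes 3}\big)=3(2g-2+n)+1-g=5g-5+3n.
$$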
Call $X$ the stack over $\kappa$, whose sections over a scheme $S$ consist of an $A_{r}$-stable $n$-pointed curve as above, and an isomorphism $\cO_{S}^{N} \simeq \pi_{*}\omega_{C/S}(\Sigma_{1}+ \dots + \Sigma_{n})^{\otimes 3}$ of sheaves of $\cO_{S}$-modules. Since $\omega_{C/S}(\Sigma_{1}+ \dots + \Sigma_{n})^{\otimes 3}$ is very ample, the automorphism group of an object of $X$ is trivial, and $X$ is equivalent to its functor of isomorphism classes.
Call $H$ the Hilbert scheme of subschemes of $\PP^{N-1}_{\kappa}$ with Hilbert polynomial $P(t)$, and $D \arr H$ the universal family. Call $F$ the fiber product of $n$ copies of $D$ over $H$, and $C \arr F$ the pullback of $D \arr H$ to $F$; there are $n$ tautological sections $s_{1}$, \dots,~$s_{n}\colon F \arr C$. Consider the largest open subscheme $F'$ of $F$ such that the restriction $C'$ of $C$, with the restrictions of the $n$ tautological sections, is an $n$-pointed $A_{r}$-stable curve, as in Proposition~\ref{prop:openness}. Call $Y \subseteq F'$ the open subscheme whose points are those for which the corresponding curve is nondegenerate, $E \arr Y$ the restriction of the universal family, $\Sigma_{1}$, \dots,~$\Sigma_{n} \subseteq E$ the tautological sections. Call $\cO_{E}(1)$ the restriction of $\cO_{\PP^{N-1}_{Y}}(1)$ via the tautological embedding $E \subseteq \PP^{N-1}_{Y}$; there are two sections of the projection $\pic_{E/Y}^{3(2g-2 + n)}\arr Y$ from the Picard scheme parametrizing invertible sheaves of degree $3(2g-2 + n)$, one defined by $\cO_{E}(1)$, the other by $\omega_{E/Y}(\Sigma_{1} + \dots + \Sigma_{n})^{\otimes 3}$; let $Z \subseteq Y$ be the equalizer of these two sections, which is a locally closed subscheme of $Y$.
Then $Z$ is a quasi-projective scheme over $\kappa$ representing the functor sending a scheme $S$ into the isomorphism class of tuples consisting of a $n$-pointed $A_{r}$-stable curve $\pi\colon C \arr S$, together with an isomorphism of $S$-schemes
\[
\PP^{N-1}_{S} \simeq \PP(\pi_{*}\omega_{C/S}(\Sigma_{1} + \dots + \Sigma_{n})^{\otimes 3})\,.
\]
There is an obvious functor $X \arr Z$, associating with an isomorphism $\cO_{S}^{N} \simeq \pi_{*}\omega_{C/S}(\Sigma_{1}+ \dots + \Sigma_{n})^{\otimes 3}$ its projectivization. It is immediate to check that $X \arr Z$ is a $\GG_{\rmm}$-torsor; hence it is representable and affine, and $X$ is a quasi-projective scheme over $\spec \kappa$.
On the other hand, there is an obvious morphism $X \arr \widetilde{\mathcal M}_{g,n}^r$ which forgets the isomorphism $\cO_{S}^{N} \simeq \pi_{*}\omega_{C/S}(\Sigma_{1}+ \dots + \Sigma_{n})^{\otimes 3}$; this is immediately seen to be a $\mathrm{GL}_{N}$-torsor. We deduce that $\widetilde{\mathcal M}_{g,n}^r$ is isomorphic to $[X/\mathrm{GL}_{N}]$. This shows that it is a quotient stack, as in the last statement; this implies that $\widetilde{\mathcal M}_{g,n}^r$ is an algebraic stack of finite type over $\kappa$.
The fact that $\widetilde{\mathcal M}_{g,n}^r$ is smooth follows from the fact that $A_{r}$-prestable curves are unobstructed.
Finally, to check that $\widetilde{\mathcal M}_{g,n}^r$ is connected it is enough to check that the open embedding $\cM_{g,n} \subseteq \widetilde{\mathcal M}_{g,n}^r$ has a dense image, since $\cM_{g,n}$ is well known to be connected. This is equivalent to saying that every $n$-pointed $A_r$-stable curve over an algebraically closed extension $\Omega$ of $\kappa$ has a small deformation that is smooth. Let $(C, p_{1}, \dots, p_{n})$ be a $n$-pointed $A_r$-stable curve; the singularities of $C$ are unobstructed, so we can choose a lifting $\overline{C}\arr \spec \Omega\ds{t}$, with smooth generic fiber. The points $p_{i}$ lift to sections $\spec\Omega\ds{t} \arr \overline{C}$, and then the result follows from Proposition~\ref{prop:openness}.
\end{proof}
\begin{remark}\label{rem: max-sing}
Clearly, we have an open embedding $\widetilde{\mathcal M}_{g,n}^r \subset \widetilde{\mathcal M}_{g,n}^s$ for every $r\leq s$. Notice that $\widetilde{\mathcal M}_{g,n}^r=\widetilde{\mathcal M}_{g,n}^{2g+1}$ for every $r\geq 2g+1$.
\end{remark}
We prove that the usual definition of Hodge bundle extends to our setting. As a consequence we obtain a locally free sheaf $\widetilde{\HH}_{g}$ of rank~$g$ on $\widetilde{\mathcal M}_{g, n}^r$, which is called \emph{Hodge bundle}.
\begin{proposition}\label{prop:hodge-bundle}
Let $\pi\colon C \arr S$ be an $A_r$-stable curve of genus $g$. Then $\pi_{*}{\omega}_{C/S}$ is a locally free sheaf of rank $g$ on $S$, and its formation commutes with base change.
\end{proposition}
\begin{proof}
If $C$ is an $A_r$-stable curve of genus $g$ over a field $k$, the dimension of $\H^{0}(C, \omega_{C/k})$ is $g$; so the result follows from Grauert's theorem when $S$ is reduced. But the versal deformation space of an $A_{r}$-stable curve over a field is smooth, so every $A_{r}$-stable curve comes, \'{e}tale-locally, from an $A_{r}$-stable curve over a reduced scheme, and this proves the result.
\end{proof}
Finally, we define the contraction morphism which will be useful later. First of all, we study the possible contractions over an algebraically closed field. We refer to Definition 1.3 of \cite{Knu} for the definition of contraction.
\begin{remark}\label{rem:contrac}
Let $k$ be an algebraically closed field and $(C,p_1,\dots,p_{n+1})$ an $(n+1)$-pointed $A_r$-stable curve of genus $g\geq 2$. Suppose $(C,p_1,\dots,p_n)$ is not stable. If $r \leq 2$, this implies that $p_{n+1}$ lies in either a rational bridge or a rational tail, see case $(1)$ and case $(2)$ in \Cref{fig:contrac}. If $r \geq 3$, we have another possible situation: the point $p_{n+1}$ lies in an irreducible component of genus $0$ which intersects the rest of the curve in a tacnode, see case $(3)$ in \Cref{fig:contrac}. This is a consequence of \Cref{rem:genus-count}. We call this irreducible component a rational almost-bridge; it is a limit of rational bridges. One can construct a contraction $c$ of an almost-bridge which satisfies all the properties listed in Definition 1.3 of \cite{Knu}. The image of the almost-bridge is an $A_2$-singularity, i.e. a cusp. In \Cref{fig:contrac}, we describe the three possible situations, where $p$ is a smooth point and $q$ is the image of $p$ through $c$.
\begin{figure}
\caption{Contraction morphisms}
\label{fig:contrac}
\end{figure}
\end{remark}
\begin{lemma}\label{lem:knutsen}
Let $(C,p_1,\dots,p_n,p_{n+1})$ be an $(n+1)$-pointed $A_r$-stable curve of genus $g$ over an algebraically closed field $k/\kappa$ such that $2g-2+n>0$. Then the line bundle $\omega_C(p_1+\dots+p_n)$ satisfies the same statements as in \Cref{prop:boundedness}.
\end{lemma}
\begin{proof}
We do not give all the details as again we rely on the results in \cite{Knu}. The idea is to use Lemma 1.6 of \cite{Knu}. If $(C,p_1,\dots,p_n)$ is an $n$-pointed $A_r$-stable curve, then the same proof as in \Cref{prop:boundedness} works. Otherwise, we can contract the irreducible component of genus $0$ which is not stable without the point $p_{n+1}$. Therefore Lemma 1.6 of \cite{Knu} generalizes to our situation and we can conclude.
\end{proof}
\begin{proposition}\label{prop:contrac}
We have a morphism of algebraic stacks
$$ \gamma:\widetilde{\mathcal M}_{g,n+1}^r \longrightarrow \widetilde{\mathcal C}_{g,n}^r$$
where $\widetilde{\mathcal C}_{g,n}^r$ is the universal curve of $\widetilde{\mathcal M}_{g,n}^r$. Furthermore, it is an open immersion and its image is the open locus in $\widetilde{\mathcal C}_{g,n}^r$ parametrizing $n$-pointed $A_r$-stable genus $g$ curves $(C,p_1,\dots,p_n)$ together with a (not necessarily smooth) section $q$ such that $q$ is an $A_h$-singularity for $h\leq 2$.
\end{proposition}
\begin{proof}
We again follow the proof of Proposition 2.1 in \cite{Knu}. Let $$(C/S,p_1,\dots,p_n,p_{n+1})$$ be an object in $\widetilde{\mathcal M}_{g,n+1}^r$ and consider the morphism induced by $\pi_*\omega_{C/S}(p_1+\dots+p_n)^{\otimes 6}$. We know that the line bundle satisfies base change because of \Cref{lem:knutsen}. Furthermore, because it is normally generated, we have that the image of the morphism can be described as the relative $\operatorname{Proj}$ of the graded algebra $$\cA:=\bigoplus_{i \in \NN} \pi_*\omega_{C/S}(p_1+\dots+p_n)^{\otimes 6i}$$ which is flat over $S$ because of the base change property. Therefore we have that $\widetilde{C}:=\operatorname{Proj}_S(\cA) \rightarrow S$ is a flat proper finitely presented morphism and, if we define $q_i$ as the image of $p_i$ through the morphism for $i=1,\dots,n+1$, we have that $(\widetilde{C},q_1,\dots,q_n)$ is an $n$-pointed $A_r$-stable genus $g$ curve. Over an algebraically closed field, a computation shows that the morphism is either the identity, if the $n$-pointed curve $(C,p_1,\dots,p_n)$ is stable, or one of the three contractions described in \Cref{rem:contrac}. Therefore the uniqueness follows from Proposition 2.1 of \cite{Knu}. This implies that $\gamma$ is a monomorphism and in particular representable. It is enough to prove that $\gamma$ is \'etale to conclude that it is an open immersion.
Notice that if we consider a geometric point $(C,p_1,\dots,p_{n+1})$ where there are no almost-bridges, then the morphism $\gamma$ is an isomorphism in a neighborhood of that point. This follows from \Cref{lem:sep-sing} and the result for stable curves. The only non-trivial case is when we have an almost-bridge, which does not depend on the existence of the other sections $p_1,\dots,p_n$. For the sake of clarity, we do the case $n=0$. The general case is analogous.
Let us consider the case when the geometric object $(C,p)$ has an almost-bridge, which is going to be contracted to a cusp through the morphism $\gamma$. Let us denote by $(\widetilde{C},q)$ the image of $(C,p)$ through $\gamma$. We know that $q$ is the image of the almost-bridge and in particular a cusp. As usual, deformation theory tells us that $\widetilde{\mathcal C}_g^r$ is smooth, therefore we can consider a smooth neighborhood $(\spec A,m_q)$ of $(\widetilde{C},q)$ in $\widetilde{\mathcal C}_g^r$ with $A$ a smooth algebra. This implies that we have an $A_r$-stable curve
$$
\begin{tikzcd}
\widetilde{C}_A \arrow[r] & \spec A \arrow[l, "q_A"', bend right]
\end{tikzcd}
$$
with $q_A$ a global section such that $q_A \otimes_A A/m_q = q$. By deformation theory of cusps (see Example 6.2.12 of \cite{TalVis}), we know that the completion of the local ring of $q$ in $\widetilde{C}_A$ is of the form
$$ A[[x,y]]/(y^2-x^3-r_1x-r_2)$$
where $r_1,r_2$ are part of a system of parameters for $A$ because $(\spec A, m_q)$ is versal. Consider now the element $f:=4r_1^3+27r_2^2 \in A$, which parametrizes the locus where the section $q_A$ passes through a singular point, which is either a node or a cusp. Let $q_1:\spec A/f \into \widetilde{C}_A$ be the codimension two closed immersion and denote by $C'_A:={\rm Bl}_{q_1}\widetilde{C}_A$ the blowup. Clearly $C'_A$ is proper and finitely presented over $A$, but it is also flat because $\tor_2^A(A/f^n,N)=0$ for any $A$-module $N$, which implies $\tor_1^A(I_{q_1}^n,N)=0$ for any $A$-module $N$ (we denote by $I_{q_1}$ the defining ideal of $q_1$ in $\widetilde{C}_A$). We want to prove that the geometric fiber of $C_A'$ over $m_q$ is an almost-bridge.
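For the reader's convenience, let us recall where $f$ comes from; this is the standard discriminant computation for a depressed cubic. Setting $h(x):=x^3+r_1x+r_2$, the fiber $y^2=h(x)$ has a singular point exactly when $h$ has a multiple root, i.e. when the discriminant
$$\operatorname{disc}(h)=-4r_1^3-27r_2^2$$
vanishes; a double root of $h$ gives a node, while a triple root (i.e. $r_1=r_2=0$) gives the cusp.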
Notice that the formation of the blowup does not commute with arbitrary base change. However, consider the system of parameters $(r_1,r_2,\dots,r_a)$ with $a:=\dim A=3g-2$. Then if we denote by $J:=(r_2,\dots,r_a)$ we have that
$$ \tor_1^A(A/f^n,A/J)=0$$
for every $n$. This implies that the blowup commutes with the base change for $A/J$ and therefore we can reduce to the case when $A$ is a DVR, $r:=r_1$ is the uniformizer and the completion of the local ring of $q$ in $\widetilde{C}_A$ is of the form
$$ A[[x,y]]/(y^2-x^3-rx).$$ A computation proves that over the special fiber we have
\begin{itemize}
\item $C_A'\otimes_A A/m_q$ is an $A_r$-prestable curve and, if we denote by $p'_A$ the proper transform of $q_A$, the pair $(C_A' \otimes_A A/m_q, p'_A \otimes_A A/m_q)$ is a $1$-pointed $A_r$-stable curve of genus $g$,
\item $C_A'\otimes_A A/m_q \rightarrow \widetilde{C}_A \otimes_A A/m_q$ is the contraction of an almost-bridge,
\item we have an isomorphism $(C_A'\otimes_A A/m_q,p_A'\otimes_A A/m_q) \rightarrow (C,p)$ of schemes over $(\widetilde{C},q)$.
\end{itemize}
Because the nodal case has already been studied in \cite{Knu} and Proposition 2.1 of \cite{Knu} is still true in our setting, the object $(C_A',p_A')$ gives rise to a lifting
$$
\begin{tikzcd}
& \spec A \arrow[d, "{(\widetilde{C}_A,q_A)}"] \arrow[ld, "{(C_A',p_A')}"', dashed] \\
{\widetilde{\mathcal M}_{g,1}^r} \arrow[r, "\Gamma"'] & \widetilde{\mathcal C}_{g}^r.
\end{tikzcd}
$$
which implies that $\Gamma$ is \'etale.
\end{proof}
\section{Moduli stack of hyperelliptic curves}\label{sec:1-3}
Another important actor is the moduli stack of hyperelliptic $A_r$-stable curves. We start by proving a technical lemma.
\begin{lemma}\label{lem:quotient}
Let $n$ be a positive integer coprime with the characteristic of $\kappa$.
Let $X\arr S$ be a finitely presented, proper, flat morphism of schemes over $\kappa$ and let $\boldsymbol{\mu}_{n,S}\arr S$ be the group scheme of $n$-th roots of unity acting on $X$ over $S$. Then there exists a geometric categorical quotient $X/\boldsymbol{\mu}_n\arr S$ which is still flat, proper and finitely presented.
\end{lemma}
\begin{proof}
First of all, we know the existence and also the separatedness (see Corollary 5.4 of \cite{Rydh}) because $\boldsymbol{\mu}_{n,S}$ is a finite locally free group scheme over $S$. Since $\boldsymbol{\mu}_{n}$ is diagonalizable, we know that the formation of the quotient commutes with base change and that flatness is preserved. We also know that if $S$ is locally noetherian, then $X/\boldsymbol{\mu}_{n,S} \arr S$ is locally of finite presentation and proper (see Proposition 4.7 of \cite{Rydh}). Because properness and local finite presentation are local conditions on the target, we can reduce to the case $S=\spec R$ affine. Using now that the morphism $X\arr S$ is of finite presentation, we get that there exists a subalgebra $R_0 \subset R$ of finite type over $\kappa$ (therefore noetherian) such that there exist a proper flat morphism $X_0\arr S_0:=\spec R_0$ and an action of $\boldsymbol{\mu}_{n,S_0}$ on $X_0/S_0$ whose pullback to $S$ is our initial data. Because the formation of the quotient commutes with base change, we may assume $S$ noetherian and we are done.
\end{proof}
\begin{definition}
Let $C$ be an $A_r$-stable curve of genus $g$ over an algebraically closed field. We say that $C$ is hyperelliptic if there exists an involution $\sigma$ of $C$ such that the fixed locus of $\sigma$ is finite and the geometric categorical quotient, which is denoted by $Z$, is a reduced connected nodal curve of genus $0$. We call the pair $(C,\sigma)$ a \emph{hyperelliptic $A_r$-stable curve} and such $\sigma$ is called a \emph{hyperelliptic involution}.
\end{definition}
Finally, we can define $\widetilde{\cH}_g^r$ as the following fibered category: its objects are pairs $(C/S,\sigma)$ where $C/S$ is an $A_r$-stable curve over $S$ and $\sigma$ is an involution of $C$ over $S$ such that $(C_s,\sigma_s)$ is an $A_r$-stable hyperelliptic curve of genus $g$ for every geometric point $s \in S$. These are called \emph{hyperelliptic $A_r$-stable curves over $S$}. A morphism is a morphism of $A_r$-stable curves that commutes with the involutions. We clearly have a morphism of fibered categories
$$\eta:\widetilde{\cH}_g^r \larr \widetilde{\mathcal M}_g^r$$
over $\kappa$ defined by forgetting the involution. This morphism is known to be a closed embedding for $r\leq 1$, where both source and target are smooth algebraic stacks of finite type over $\kappa$.
In the case of $r = 0$ the theory is well-known, so from now on, we can suppose $r\geq 1$. We assume ${\rm char} \, \kappa > r$ and in particular $2$ is invertible.
\begin{remark}
We have a morphism $i:\widetilde{\cH}_g^r \arr \mathit{I}_{\widetilde{\mathcal M}_g^r}$ over $\widetilde{\mathcal M}_g^r$ induced by $\eta$, where $\mathit{I}_{\widetilde{\mathcal M}_g^r}$ is the inertia stack of $\widetilde{\mathcal M}_g^r$. This factors through $\mathit{I}_{\widetilde{\mathcal M}_g^r}[2]$, the two-torsion part of the inertia, which is closed inside the inertia stack. It is fully faithful by definition. This implies that $\widetilde{\cH}_g^r$ is a prestack in groupoids. It is easy to see that it is in fact a stack in groupoids.
\end{remark}
We want to describe $\widetilde{\cH}_g^r$ as a connected component of the $2$-torsion of the inertia. To do so, we need the following lemma.
\begin{lemma}\label{lem:conn-comp}
Let $C\arr S$ be an $A_r$-stable curve over $S$ and $\sigma$ an automorphism of the curve over $S$ such that $\sigma^2=\id$. Then there exists an open and closed subscheme $S'\subset S$ such that the following holds: a morphism $f:T\arr S$ factors through $S'$ if and only if $(C\times_S T, \sigma \times_S T)$ is a hyperelliptic $A_r$-stable curve over $T$.
\end{lemma}
\begin{proof}
Consider $$S':=\{s\in S| (C_s,\sigma_s) \in \widetilde{\cH}_g^r(s) \}\subset S$$
i.e. the subset where the geometric fibers over $S$ are hyperelliptic $A_r$-stable curves of genus $g$. If we prove $S'$ is open and closed, we are done.
Let $Z\rightarrow S$ be the geometric quotient of $C$ by the involution. Because of \Cref{lem:quotient}, $Z\rightarrow S$ is flat, proper and finitely presented, and thus both the dimension and the genus of the geometric fibers over $S$ are locally constant functions on $S$. It remains to prove that the finiteness of the fixed locus is an open and closed condition. Notice that in general the fixed locus is not flat over $S$, therefore this is not trivial.
The openness follows from the fact that the fixed locus of the involution is proper over $S$, and therefore we can use the semicontinuity of the dimension of the fibers.
Regarding the closedness, the fixed locus of a geometric fiber over a point $s\in S$ is positive-dimensional only when there is a projective line in the fiber $C_s$ which intersects the rest of the curve only in separating nodes (because of \Cref{prop:descr-inv}). Therefore the result is a direct consequence of \Cref{lem:local-node-involution}.
\end{proof}
\begin{proposition}\label{prop:open-closed-imm}
The morphism $i:\widetilde{\cH}_g^r \arr \mathit{I}_{\widetilde{\mathcal M}_g^r}[2]$ is an open and closed immersion of algebraic stacks. In particular, $\widetilde{\cH}_g^r$ is a closed substack of $\mathit{I}_{\widetilde{\mathcal M}_g^r}$ and it is locally of finite type (over $\kappa$).
\end{proposition}
\begin{proof}
We first prove that $\widetilde{\cH}_g^r$ is an algebraic stack, and an open and closed substack of $\mathit{I}_{\widetilde{\mathcal M}_g^r}[2]$.
First of all, we need to prove that the diagonal of $\widetilde{\cH}_g^r$ is representable by algebraic spaces. It follows from the following fact: given a morphism of fibered categories $X\arr Y$, we can consider the $2$-commutative diagram
$$
\begin{tikzcd}
X \arrow[d, "\Delta_X"] \arrow[r, "f"] & Y \arrow[d, "\Delta_Y"] \\
X\times X \arrow[r, "{(f,f)}"] & Y\times Y;
\end{tikzcd}
$$
if $f$ is fully faithful, then the diagram is also cartesian.
Secondly, we need to prove that the morphism $i$ is representable by algebraic spaces and that it is an open and closed immersion. Suppose we have a morphism $S\arr \mathit{I}_{\widetilde{\mathcal M}_g^r}$ from a $\kappa$-scheme $S$. Thus we have a cartesian diagram:
$$
\begin{tikzcd}
F \arrow[d] \arrow[r, "i_S"] & S \arrow[d] \\
\widetilde{\cH}_g^r \arrow[r, "i"] & \mathit{I}_{\widetilde{\mathcal M}_g^r}
\end{tikzcd}
$$
where $F$ is equivalent to a category fibered in sets, as $i$ is fully faithful. We can describe $F$ in the following way: for every $\kappa$-scheme $T$, we have $$F(T)=\{ f:T\arr S |(C_S\times_S T,\sigma_S \times_S T) \in \widetilde{\cH}_g^r(T) \}$$
where $(C_S,\sigma_S) \in \mathit{I}_{\widetilde{\mathcal M}_g^r}(S)$ is the object associated to the morphism $S\arr \mathit{I}_{\widetilde{\mathcal M}_g^r}$. By \Cref{lem:conn-comp}, we deduce $F=S'$ and the morphism $i_S$ is an open and closed immersion.
\end{proof}
In the last part of this section we introduce another description of $\widetilde{\cH}_g^r$, useful for understanding the link with the smooth case. We refer to \cite{AbOlVis} for the theory of twisted nodal curves, although we only consider twisted curves with $\mu_2$ as stabilizers and with no markings.
The first description is a way of getting cyclic covers from $A_r$-stable hyperelliptic curves.
\begin{definition}
Let $\cC_g^r$ be the category fibered in groupoids whose objects are morphisms $f:C\rightarrow \cZ$ over some base scheme $S$ such that $C\rightarrow S$ is a family of $A_r$-stable genus $g$ curves, $\cZ\rightarrow S$ is a family of twisted curves of genus $0$ and $f$ is a finite flat morphism of degree $2$ which is generically \'etale. Morphisms are commutative diagrams of the form
$$
\begin{tikzcd}
C \arrow[d, "\phi_C"] \arrow[r, "f"] & \cZ \arrow[d, "\phi_Z"] \arrow[r] & S \arrow[d] \\
C' \arrow[r, "f'"] & \cZ' \arrow[r] & S'.
\end{tikzcd}
$$
\end{definition}
\begin{remark}
The definition implies that the morphism $f$ is \'etale over the stacky locus of $\cZ$. First of all, we can reduce to the case where $S$ is the spectrum of an algebraically closed field. Let $\xi: B\mu_2\hookrightarrow \cZ$ be a stacky node of $\cZ$; thus $f^{-1}(\xi)\rightarrow B\mu_2$ is finite flat of degree two. It is clear that $f^{-1}(\xi)\subset C$ implies that $f^{-1}(\xi)\rightarrow B\mu_2$ is \'etale. It is easy to prove that over a stacky node of $\cZ$ there can only be a separating node of $C$.
\end{remark}
The theory of cyclic covers (see for instance \cite{ArVis}) guarantees the existence of the functor of fibered categories
$$\Gamma:\cC_g^r \longrightarrow \widetilde{\cH}_g^r$$
defined in the following way on objects: if $f:C\rightarrow \cZ$ is finite flat of degree $2$, we can give a $\ZZ/(2)$-grading on $f_*\cO_C$, because it splits as the sum of $\cO_{\cZ}$ and some line bundle $\cL$ on $\cZ$ with a section $\cL^{\otimes 2}\hookrightarrow \cO_{\cZ}$. This grading defines an action of $\mu_2$ on $C$. Everything is relative to a base scheme $S$. The geometric quotient by this action is the coarse moduli space of $\cZ$, which is a genus $0$ curve. The fact that $f$ is generically \'etale implies that the fixed locus is finite.
We prove a general lemma which gives us the uniqueness of an involution once we fix the geometric quotient.
\begin{lemma}\label{lem:unique-inv-quotient}
Suppose $S$ is a $\kappa$-scheme. Let $X/S$ be a finitely presented, proper, flat $S$-scheme and $\sigma_1,\sigma_2$ be two involutions of $X/S$. Consider a geometric quotient $\pi_i:X\rightarrow Y_i$ of the involution $\sigma_i$ for $i=1,2$. If there exists an isomorphism $\psi:Y_1 \rightarrow Y_2$ of $S$-schemes which commutes with the quotient maps, then $\sigma_1=\sigma_2$.
\end{lemma}
\begin{proof}
Let $T$ be an $S$-scheme and $t:T\rightarrow X$ be a $T$-point of $X$. We want to prove that $\sigma_1(t)=\sigma_2(t)$.
Fix $i\in \{1,2\}$ and consider the cartesian diagram
$$
\begin{tikzcd}
X_{\pi_i(t)} \arrow[d] \arrow[r] & X \arrow[d, "\pi_i"] \\
T \arrow[r, "\pi_i(t)"] & Y_i,
\end{tikzcd}
$$
thus there are at most two sections of the morphism $X_{\pi_i(t)}\rightarrow T$, namely $t$ and $\sigma_i(t)$. Using the fact that $\pi_2=\psi\circ \pi_1$, it follows easily that $\sigma_1(t)=\sigma_2(t)$.
\end{proof}
\begin{proposition}\label{prop:cyclic-covers}
The functor $\Gamma:\cC_g^r \rightarrow \widetilde{\cH}_g^r$ is an equivalence of fibered categories.
\end{proposition}
\begin{proof}
We explicitly construct an inverse. Let $(C,\sigma)$ be a hyperelliptic $A_r$-stable genus $g$ curve over $S$. Consider the fixed locus $F_{\sigma}\subset C$ of $\sigma$, which is finite over $S$. \Cref{prop:descr-inv} implies that the defining ideal of $F_{\sigma}$ in $C$ is locally generated by at most $2$ elements and the quotient morphism is not flat exactly in the locus where it is generated by $2$ elements. By the theory of Fitting ideals, we can describe the non-flat locus as the vanishing locus of the first Fitting ideal of $F_{\sigma}$. Let $N_{\sigma}$ be this locus in $C$. \Cref{lem:local-node-involution} implies that $N_{\sigma}$ is also open inside $F_{\sigma}$, therefore $F_{\sigma}\smallsetminus N_{\sigma}$ is closed inside $C$. If we look at the stacky structure on the image of $F_{\sigma}\smallsetminus N_{\sigma}$ through the stacky quotient morphism $C\rightarrow [C/\sigma]$, we know that the stabilizers are isomorphic to $\mu_2$, as the image is contained in the fixed locus. We denote by $[C/\sigma] \rightarrow \cZ$ the rigidification of $[C/\sigma]$ along $F_{\sigma}\smallsetminus N_{\sigma}$. Because we are dealing with linearly reductive stabilizers, we know that the rigidification is functorial and $\cZ\rightarrow S$ is still flat (as well as proper and finitely presented).
We claim that the composition $$C\longrightarrow [C/\sigma] \longrightarrow \cZ$$ is an object of $\cC_g^r$. As the quotient morphism is \'etale away from the fixed locus, the only points where we need to prove flatness are the ones in $F_{\sigma}\smallsetminus N_{\sigma}$, which are fixed points. Because the morphism $C\rightarrow \cZ$ locally at a point $p\in F_{\sigma}\smallsetminus N_{\sigma}$ is the same as the geometric quotient, it follows from \Cref{prop:descr-inv} that it is flat. Thus $C\rightarrow \cZ$ is a finite flat morphism of degree $2$, generically \'etale as $F_{\sigma}$ is finite.
Hence, this construction defines a functor
$$\tau: \widetilde{\cH}_g^r \longrightarrow \cC_g^r$$
where the association on morphisms is defined in the natural way. A direct inspection using \Cref{lem:unique-inv-quotient} shows that $\tau$ and $\Gamma$ are quasi-inverse to each other.
\end{proof}
Using the theory developed in \cite{ArVis}, we know that $\cC_g^r$ is isomorphic to a stack of cyclic covers, namely the datum of $C\rightarrow \cZ$ is equivalent to a triple $(\cZ,\cL,i)$ where $\cZ$ is a twisted nodal curve, $\cL$ is a line bundle over $\cZ$ and $i:\cL^{\otimes 2}\rightarrow \cO_{\cZ}$ is a morphism of $\cO_{\cZ}$-modules. The vanishing locus of $i^{\vee}$ determines a subscheme of $\cZ$ which consists of the branch points of the cyclic cover.
To recover $C$, we consider the sheaf of $\cO_{\cZ}$-algebras $\cA:=\cO_{\cZ}\oplus \cL$ where the algebra structure is defined by the section $i:\cL^{\otimes 2}\hookrightarrow \cO_{\cZ}$. Thus we define $C:=\spec_{\cZ}(\cA)$. Clearly not every triple as above recovers an $A_r$-stable genus $g$ curve $C$. We need to understand the conditions on $\cL$ and $i$ ensuring that $C$ is an $A_r$-stable genus $g$ curve. Because $C\rightarrow S$ is clearly flat, proper and of finite presentation, it is a family of $A_r$-stable genus $g$ curves if and only if the geometric fiber $C_{s}$ is an $A_r$-stable genus $g$ curve for every point $s \in S$. Therefore we can reduce to understanding the conditions on $\cL$ and $i$ over an algebraically closed field $k$.
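For instance, in the classical smooth case one takes $\cZ=\PP^1$, $\cL=\cO_{\PP^1}(-g-1)$ and $i^{\vee}$ a section of $\cO_{\PP^1}(2g+2)$ with distinct roots: then $C=\spec_{\PP^1}(\cO_{\PP^1}\oplus \cL)$ is the smooth hyperelliptic curve given affinely by $y^2=f(x)$ with $\deg f=2g+2$, and indeed
$$\chi(\cL)=\chi(\cO_{\PP^1}(-g-1))=-g.$$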
\begin{lemma}
In the situation above, we have that $C$ is a proper one-dimensional Deligne-Mumford stack over $k$. Furthermore:
\begin{itemize}
\item[(i)] $C$ is a scheme if and only if the morphism $\cZ \rightarrow B\GG_m$ induced by $\cL$ is representable and $i^{\vee}$ does not vanish at the stacky nodes,
\item[(ii)] $C$ is reduced if and only if $i$ is injective,
\item[(iii)] $C$ has arithmetic genus $g$ if and only if $\chi(\cL)=-g$.
\end{itemize}
\end{lemma}
\begin{proof}
Because $C\rightarrow \cZ$ is finite, $C$ is a proper one-dimensional Deligne-Mumford stack. To prove $(i)$, it is enough to prove that $C$ is an algebraic space, or equivalently that it has trivial stabilizers. Because the map $f: C\rightarrow \cZ$ is affine, it is enough to check the fibers over the stacky points of $\cZ$. By the local description of the morphism, it is clear that if $p:B\mu_2 \hookrightarrow \cZ$ is a stacky point of $\cZ$, then the fiber $f^{-1}(p)$ is the quotient of the artinian algebra $k[x]/(x^2-h)$ by $\mu_2$, where $h \in k$. By the usual description of the cyclic cover, we know that $x$ is the local generator of $\cL$ and $h$ is the value of $i^{\vee}$ at the point $p$. The representability of the fiber is then equivalent to $h\neq 0$ and to the $\mu_2$-action being nontrivial on $x$. Thus (i) follows.
As for (ii), notice that $\cZ$ is clearly a Cohen-Macaulay stack and $f$ is finite, flat and representable, therefore $C$ is also Cohen-Macaulay. This implies that $C$ is reduced if and only if it is generically reduced, i.e. the local ring of every irreducible component is a field. It is easy to see that this is equivalent to the fact that $i$ does not vanish at the generic point of any component, i.e. that $i$ is injective.
As far as (iii) is concerned, firstly we prove that $\chi(\cL)\leq 0$ implies that $C$ is connected. Suppose $C$ is the disjoint union of $C_1$ and $C_2$; then the involution has to send at least one irreducible component of $C_1$ to one irreducible component of $C_2$, otherwise the quotient $\cZ$ would not be connected. But this implies that the restriction $f\vert_{C_1}$ is a finite flat representable morphism of degree $1$, therefore an isomorphism. In particular $f$ is a trivial \'etale cover of $\cZ$ and therefore $\chi(\cO_C)=2$, or equivalently $\chi(\cL)=1$. A straightforward computation now shows the equivalence.
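Explicitly, since $f$ is finite we have
$$\chi(\cO_C)=\chi(f_*\cO_C)=\chi(\cO_{\cZ})+\chi(\cL)=1+\chi(\cL),$$
so $C$ has arithmetic genus $g$, i.e. $\chi(\cO_C)=1-g$, if and only if $\chi(\cL)=-g$.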
\end{proof}
\begin{remark}
Notice that $C$ is disconnected if and only if $\chi(\cO_C)=2$. We say that in this case $C$ has genus $-1$ and it is equivalent to $\chi(\cL)=1$.
\end{remark}
Secondly, we have to take care of the $A_r$-prestable condition which makes sense only in the case when $C$ is a scheme. Therefore, we suppose $i^{\vee}$ does not vanish on the stacky points.
The $A_r$-prestable condition is encoded in the section $i^{\vee}\in \H^0(\cZ,\cL^{\otimes -2})$. The local description of the cyclic covers of degree $2$ (see \Cref{prop:descr-inv}) implies the following lemma.
\begin{lemma}
In the situation above, $C$ is an $A_r$-prestable curve if and only if $i^{\vee}$ has the following properties:
\begin{itemize}
\item if it vanishes at a non-stacky node, then $r\geq 3$, $i^{\vee}$ vanishes with order $2$ and, locally at the node, it is not a zero divisor (there is a tacnode over the node),
\item if it vanishes at a smooth point, then it vanishes with order at most $r$.
\end{itemize}
\end{lemma}
Lastly, we want to understand how to describe the stability condition. For this, we need a lemma.
\begin{lemma}
Let $(C,\sigma)$ be an $A_r$-prestable hyperelliptic curve of genus $g\geq 2$ over an algebraically closed field such that the geometric quotient $Z:=C/\sigma$ is integral of genus $0$, i.e. a projective line. Then $(C,\sigma)$ is $A_r$-stable.
Furthermore, suppose instead that $g=1$ and let $p_1,p_2$ be two smooth points such that $\sigma(p_1)=p_2$. Then $(C,p_1,p_2)$ is $A_r$-stable.
\end{lemma}
\begin{proof}
If the curve $C$ is integral, there is nothing to prove. Suppose it is not; then it has two irreducible components $C_1$ and $C_2$ of genus $0$ which are exchanged by the involution, and their intersection $C_1 \cap C_2$ is a disjoint union of $A_{2k+1}$-singularities with $2k+1\leq r$. Let $p_1,\dots,p_h$ be the support of the intersection and $k_1,\dots,k_h$ integers such that $p_i$ is an $A_{2k_i+1}$-singularity for $i=1,\dots,h$. By \Cref{rem:genus-count}, we have that
$$ k_1+\dots+k_h-1=g\geq 2$$
but at the same time \cite[Lemma 1.12]{Cat} implies
$$\deg\omega_C\vert_{C_j}=-2+k_1+\dots+k_h=g-1>0$$
for $j=1,2$.
Suppose now $g=1$. If $C$ is integral, then there is nothing to prove. If $C$ is not integral, again it has two irreducible components of genus $0$ whose intersection is either two nodes or a tacnode, because of \Cref{rem:genus-count} (see the proof of \Cref{lem:genus1} for a more detailed discussion). Then again the statement follows.
\end{proof}
\begin{remark}
In the previous lemma, we can take $p:=p_1=p_2$ to be a fixed smooth point of the hyperelliptic involution. In this case, $C$ has to be integral and therefore $(C,p)$ is $A_r$-stable.
\end{remark}
The stability condition makes sense when $C$ is Gorenstein, and in particular when it is $A_r$-prestable. Therefore suppose we are in the situation where $C$ is an $A_r$-prestable curve. We translate the stability condition on $C$ into a condition on the restrictions of $i$ to the irreducible components of $\cZ$.
Given an irreducible component $\Gamma$ of $\cZ$, we can define the quantity $$g_{\Gamma}:=\frac{n_{\Gamma}}{2}-\deg \cL\vert_{\Gamma}-1$$ where $n_{\Gamma}$ is the number of stacky points of $\Gamma$. It is easy to see that $g_{\Gamma}$ coincides with the arithmetic genus of the preimage $C_{\Gamma}:=f^{-1}(\Gamma)$ when it is connected. The previous lemma implies that $\omega_{C}\vert_{C_{\Gamma}}$ is ample for all components $\Gamma$ such that $g_{\Gamma}\geq 1$.
Let us try to understand the stability condition for $g_{\Gamma}=0$, i.e. $\deg\cL\vert_{\Gamma}=n_{\Gamma}/2-1$. Let $m_{\Gamma}$ be the number of points of intersection of $\Gamma$ with the rest of the curve $\cZ$, or equivalently the number of nodes (stacky or not) lying on the component. Then the stability condition on $C$ implies that $2m_{\Gamma}-n_{\Gamma}\geq 3$, because the fiber of the morphism $C\rightarrow \cZ$ over every non-stacky node of $\cZ$ has length $2$ in $C$ (either two disjoint nodes or a tacnode), while the fiber over a stacky node ($B\mu_2 \hookrightarrow \cZ$) has length $1$.
Suppose now that the curve $C_{\Gamma}$ is disconnected. Then $C_{\Gamma}$ is the disjoint union of two projective lines with an involution that exchanges them. In this case $\cL\vert_{\Gamma}$ is trivial and also $n_{\Gamma}=0$. The stability condition on $C$ is equivalent to $m_{\Gamma}\geq 3$. Notice that $g_{\Gamma}=-1$ if and only if $C_{\Gamma}$ is disconnected, or equivalently it is the trivial \'etale cover of the projective line.
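As a sanity check for the formula above: in the smooth case $\cZ=\PP^1$ with $\cL=\cO_{\PP^1}(-g-1)$ there are no stacky points, so for the unique component $\Gamma=\PP^1$ we get
$$g_{\Gamma}=\frac{0}{2}-(-g-1)-1=g,$$
which is indeed the genus of the associated double cover.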
This motivates the following definition.
\begin{definition}\label{def:hyp-A_r}
Let $\cZ$ be a twisted nodal curve over an algebraically closed field. For every irreducible component $\Gamma$ of $\cZ$, we denote by $n_{\Gamma}$ the number of stacky points of $\Gamma$ and by $m_{\Gamma}$ the number of points of intersection of $\Gamma$ with the rest of the curve. Let $\cL$ be a line bundle on $\cZ$ and $i:\cL^{\otimes 2} \rightarrow \cO_{\cZ}$ be a morphism of $\cO_{\cZ}$-modules. We denote by $g_{\Gamma}$ the quantity $n_{\Gamma}/2-1-\deg\cL\vert_{\Gamma}$.
\begin{itemize}
\item[(a)] We say that $(\cL,i)$ is hyperelliptic if the following are true:
\begin{itemize}
\item[(a1)] the morphism $\cZ \rightarrow B\GG_m$ induced by $\cL$ is representable,
\item[(a2)] $i^{\vee}$ does not vanish restricted to any stacky point.
\end{itemize}
\item[(b)] We say that $(\cL,i)$ is $A_r$-prestable and hyperelliptic of genus $g$ if $(\cL,i)$ is hyperelliptic, $\chi(\cL)=-g$ and the following are true:
\begin{itemize}
\item[(b1)] $i^{\vee}$ does not vanish restricted to any irreducible component of $\cZ$, or equivalently the morphism $i:\cL^{\otimes 2 }\rightarrow \cO_{\cZ}$ is injective,
\item[(b2)] if $p$ is a non-stacky node and $i^{\vee}$ vanishes at $p$, then $r\geq 3$ and the vanishing locus $\VV(i^{\vee})_p$ of $i^{\vee}$ localized at $p$ is a Cartier divisor of length $2$;
\item[(b3)] if $p$ is a smooth point and $i^{\vee}$ vanishes at $p$, then the vanishing locus $\VV(i^{\vee})_p$ of $i^{\vee}$ localized at $p$ has length at most $r+1$.
\end{itemize}
\item[(c)] We say that $(\cL,i)$ is $A_r$-stable and hyperelliptic of genus $g$ if it is $A_r$-prestable and hyperelliptic of genus $g$ and the following are true for every irreducible component $\Gamma$ in $\cZ$:
\begin{itemize}
\item[(c1)] if $g_{\Gamma}=0$ then we have $2m_{\Gamma}-n_{\Gamma}\geq 3$,
\item[(c2)] if $g_{\Gamma}=-1$ then we have $m_{\Gamma}\geq 3$ (note that necessarily $n_{\Gamma}=0$).
\end{itemize}
\end{itemize}
\end{definition}
Let us now define the stack classifying these data. We denote by $\widetilde{\cC}_g^r$ the fibered category defined in the following way: the objects are triples $(\cZ\rightarrow S,\cL,i)$ where $\cZ \rightarrow S$ is a family of twisted curves of genus $0$, $\cL$ is a line bundle on $\cZ$ and $i:\cL^{\otimes 2}\rightarrow \cO_{\cZ}$ is a morphism of $\cO_{\cZ}$-modules such that the restriction $(\cL_s,i_s)$ to the geometric fiber is $A_r$-stable and hyperelliptic of genus $g$ for every point $s \in S$. Morphisms are defined as in \cite{ArVis}. We have proven the following equivalence.
\begin{proposition}\label{prop:descr-hyper}
The fibered category $\widetilde{\cC}_g^r$ is isomorphic to $\cC_g^r$.
\end{proposition}
We can use this description to get the smoothness of $\widetilde{\cH}_g^r$. Firstly, we need to understand which line bundles $\cL$ on $\cZ$ can appear in $\widetilde{\mathcal C}_g^r$.
\begin{lemma}\label{lem:hyp-line-bun}
Let $Z$ be a nodal curve of genus $0$ over an algebraically closed field $k/\kappa$, $\cL$ a line bundle on $Z$ and $s \in \H^0(Z,\cL)$. We consider the following assertions:
\begin{itemize}
\item[$(i)$] $s$ does not vanish identically on any irreducible component of $Z$,
\item[$(ii)$] $\cL$ is globally generated,
\item[$(ii')$] $\deg \cL\vert_{\Gamma}\geq 0$ for every $\Gamma$ irreducible component of $Z$,
\item[$(iii)$] $\H^1(Z,\cL)=0$;
\end{itemize}
then we have $(i)\implies (ii)\iff (ii')\implies (iii)$.
\end{lemma}
\begin{proof}
It is easy to prove that $(i) \implies (ii')$ and $(ii)\implies (ii')$. To prove that $(ii')$ implies $(ii)$ we proceed by induction on the number of irreducible components of $Z$. Let $p$ be a smooth point of $Z$ and $\Gamma_p$ the irreducible component which contains $p$. Then the morphism $$\H^0(\Gamma_p,\cL\vert_{\Gamma_p})\longrightarrow k(p)$$ is clearly surjective because $\deg \cL\vert_{\Gamma_p}\geq 0$ and $\Gamma_p \simeq \PP^1$. Therefore it is enough to extend a section from $\Gamma_p$ to a section over the whole of $Z$.
Because the dual graph of $Z$ is a tree, we know that $Z$ can be obtained by gluing a finite number of genus $0$ curves $Z_i$ to $\Gamma_p$ in such a way that these curves are pairwise disjoint. Hence it is enough to find a section for every $Z_i$ which glues at the point of intersection with $\Gamma_p$. Because each of the $Z_i$'s has fewer irreducible components than $Z$, we are done.
Finally, we need to prove that $(ii')\implies (iii)$. We know there exists a decomposition $Z=Z_1\cup Z_2$ where $Z_1$ and $Z_2$ are nodal genus $0$ curves such that $Z_1\cap Z_2$ has length $1$. Let $i_h:Z_h \hookrightarrow Z$ be the closed embedding for $h=1,2$. Thus we can consider the exact sequence of vector spaces
$$ 0\rightarrow \H^0(\cL)\rightarrow \H^0(i_1^*\cL)\oplus\H^0(i_2^*\cL) \rightarrow k \rightarrow \H^1(\cL) \rightarrow \H^1(i_1^*\cL)\oplus \H^1(i_2^*\cL) \rightarrow 0;$$
it is clear that the morphism $\H^0(i_1^*\cL)\oplus\H^0(i_2^*\cL) \rightarrow k$ is surjective (both $i_1^*\cL$ and $i_2^*\cL$ are globally generated) and therefore $\H^1(Z,\cL)=\H^1(Z_1,i_1^*\cL)\oplus \H^1(Z_2,i_2^*\cL)$. Iterating this argument, we get that
$$\H^1(Z,\cL)=\bigoplus_{\Gamma} \H^1(\Gamma, \cL\vert_{\Gamma})$$
where the sum is indexed over the irreducible components $\Gamma$ of $Z$. Thus it is enough to prove that $\H^1(\Gamma,\cL\vert_{\Gamma})=0$ for every irreducible component $\Gamma$ of $Z$, which follows from $(ii')$.
\end{proof}
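\begin{remark}
The componentwise vanishing used at the end of the proof is the standard fact that, by Serre duality on $\PP^1$,
$$\H^1(\PP^1,\cO(d))\simeq \H^0(\PP^1,\cO(-d-2))^{\vee}=0 \quad \text{for every } d\geq -1;$$
in particular $\deg\cL\vert_{\Gamma}\geq 0$ implies $\H^1(\Gamma,\cL\vert_{\Gamma})=0$.
\end{remark}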
\begin{proposition}\label{prop:smooth-hyp}
The moduli stack $\widetilde{\cH}_g^r$ of $A_r$-stable hyperelliptic curves of genus $g$ is smooth, and the open substack $\cH_g$ parametrizing smooth hyperelliptic curves is dense in $\widetilde{\cH}_g^r$. In particular $\widetilde{\cH}_g^r$ is connected.
\end{proposition}
\begin{proof}
We use the description of $\widetilde{\cH}_g^r$ as $\widetilde{\mathcal C}_g^r$. First of all, let us denote by $\cP$ the moduli stack parametrizing pairs $(\cZ,\cL)$ where $\cZ$ is a twisted curve of genus $0$ and $\cL$ is a line bundle on $\cZ$. Consider the natural morphism $\cP \rightarrow \cM_0^{\rm tw}$ defined by the association $(\cZ,\cL)\mapsto \cZ$, where $\cM_0^{\rm tw}$ is the moduli stack of twisted curves of genus $0$ (see Section 4.1 of \cite{AbGrVis}). Proposition 2.7 of \cite{AbOlVis} implies the formal smoothness of this morphism. We have the smoothness of $\cM_0^{\rm tw}$ thanks to Theorem A.6 of \cite{AbOlVis}. Therefore $\cP$ is formally smooth over the base field $\kappa$.
Let us consider now $\cP_g^0$, the substack of $\cP$ whose geometric objects are pairs $(\cZ,\cL)$ such that $\chi(\cL)=-g$ and $\H^1(\cZ,\cL^{\otimes -2})=0$. To be precise, its objects consist of families of twisted curves $\cZ\rightarrow S$ and a line bundle $\cL$ on $\cZ$ such that $\H^1(\cZ_s,\cL_s^{\otimes -2})=0$ for every $s \in S$. Because the Euler characteristic $\chi$ is locally constant in families of line bundles, the semicontinuity of $\H^1$ implies that $\cP_g^0$ is open inside $\cP$, therefore it is formally smooth over $\kappa$.
We have a morphism
$$ \widetilde{\cH}_g^r\simeq \widetilde{\mathcal C}_g^r \rightarrow \cP$$
defined by the association $(\cZ,\cL,i) \mapsto (\cZ,\cL)$. This factors through $\cP_g^0$ because of \Cref{lem:hyp-line-bun}. Consider the universal object $(\pi:\cZ_{\cP}\rightarrow \cP_g^0,\cL_{\cP})$ over $\cP_g^0$. Then $\cL_{\cP}^{\otimes -2}$ satisfies base change by construction and therefore we have that $\widetilde{\mathcal C}_g^r$ is a substack of $\VV(\pi_*\cL_{\cP}^{\otimes -2})$, the geometric vector bundle over $\cP_g^0$ associated to $\pi_*\cL_{\cP}^{\otimes -2}$. The inclusion $\widetilde{\mathcal C}_g^r\subset \VV(\pi_*\cL_{\cP}^{\otimes -2})$ is an open immersion because of \Cref{prop:openness}, which implies the smoothness of $\widetilde{\mathcal C}_g^r$.
Given a twisted curve $\cZ$ over an algebraically closed field $k$, we can construct a family $\cZ_R\rightarrow \spec R$ of twisted curves, where $R$ is a DVR, such that the special fiber is $\cZ$ and the generic fiber is smooth. The smoothness of the morphism $\VV(\pi_{*}\cL^{\otimes -2})\rightarrow \cM_0^{\rm tw}$ implies that, given a datum $(\cZ,\cL,i) \in \widetilde{\mathcal C}_g^r(k)$ with $k$ an algebraically closed field, we can lift it to $(\cZ_R,\cL_R,i_R) \in \widetilde{\mathcal C}_g^r(R)$, where $R$ is a DVR, such that it restricts to $(\cZ,\cL,i)$ on the special fiber and the generic fiber is isomorphic to $(\PP^1, \cO(-g-1), i)$ with $i^{\vee} \in \H^0(\PP^1,\cO(2g+2))$. Finally, the open substack of $\VV(\pi_*\cL_{\cP}^{\otimes -2})$ parametrizing sections $i^{\vee}$ without multiple roots is dense, therefore we can deform every datum $(\cZ,\cL,i)$ to the datum of a smooth hyperelliptic curve.
\end{proof}
\begin{remark}
In the proof of \Cref{prop:smooth-hyp}, we have used the implication $(i) \implies (iii)$ as in \Cref{lem:hyp-line-bun} for a twisted curve $\cZ$ of genus $0$. In fact, we can apply \Cref{lem:hyp-line-bun} to the coarse moduli space $Z$ of $\cZ$ and get the implication for $\cZ$ because the line bundle $\cL^{\otimes -2}$ descends to $Z$ and the morphism $\cZ\rightarrow Z$ is cohomologically affine.
\end{remark}
\section{$\eta$ is a closed immersion}\label{sec:1-4}
We have a morphism $\eta:\widetilde{\cH}_g^r \arr \widetilde{\mathcal M}_g^r$ between smooth algebraic stacks. If we prove $\eta$ is representable, formally unramified, injective on geometric points and universally closed, then we get that $\eta$ is a closed immersion.
\begin{remark}
The morphism $\eta$ is faithful, as the automorphisms of $(C,\sigma)$ form by definition a subset of those of $C$ over any $\kappa$-scheme $S$. This implies that $\eta$ is representable.
\end{remark}
Firstly, we discuss why $\eta$ is injective on geometric points. We just need to prove that the morphism $\eta$ is full at every geometric point, i.e. every automorphism of an $A_r$-stable curve which is also hyperelliptic has to commute with the involution. This follows directly from the uniqueness of the hyperelliptic involution. Therefore our next goal is to prove that the hyperelliptic involution is unique over an algebraically closed field.
\subsection*{Injectivity on geometric points}
First of all, we use the results of the first section to describe the possible quotients.
\begin{proposition}\label{prop:description-quotient}
Let $k/\kappa$ be an algebraically closed field and $(C,\sigma)$ be a hyperelliptic $A_r$-stable curve of genus $g$. Denote by $Z$ the geometric quotient by the involution and suppose $Z$ is a reduced nodal curve of genus $0$ (with only separating nodes). Furthermore, let $c\in C$ be a closed point, $z \in Z$ be the image of $c$ in the quotient and $C_z$ be the schematic fiber of $z$, which is the spectrum of an artinian $k$-algebra. Then
\begin{itemize}
\item if $z$ is a smooth point, either
\begin{itemize}
\item[(s1)] $C_z$ is disconnected and supported on two smooth points of $C$ (i.e. the quotient morphism is étale at $c$),
\item[(s2)] or $C_z$ is connected and supported on a possibly singular point of $C$ (i.e. the quotient morphism is flat and ramified at $c$);
\end{itemize}
\item if $z$ is a separating node, either
\begin{itemize}
\item[(n1)] $C_z$ is disconnected and supported on two nodes (i.e. the quotient morphism is étale at $c$),
\item[(n2)] or $C_z$ is connected and supported on a tacnode (i.e. the quotient morphism is flat and ramified at $c$),
\item[(n3)] or $C_z$ is connected and supported on a node (i.e. the quotient morphism is ramified at $c$ but not flat).
\end{itemize}
\end{itemize}
Finally, the quotient morphism is finite of generic degree $2$ and ${\rm (n3)}$ is the only case when the length of $C_z$ is not $2$ but $3$.
\end{proposition}
\begin{proof}
As we are dealing with the quotient by an involution, the quotient morphism is either étale at a closed point $c\in C$ or the point $c$ is fixed by the involution. If the point is in the fixed locus, we can pass to the completion and apply \Cref{prop:descr-inv}.
\end{proof}
\begin{remark}
We claim that if $C$ is an $A_r$-prestable curve of genus $g$ and $\sigma$ is any involution, then the geometric quotient $Z$ is automatically an $A_r$-prestable curve. The quotient is still reduced because $C$ is reduced. We have connectedness because of the surjectivity of the quotient morphism. The description of the quotient singularities in the first section implies the claim. Therefore if the quotient $Z$ has genus $0$, then it is a nodal curve
(with only separating nodes).
\end{remark}
\begin{remark}
If $z$ is a node, we can say more about $C_z$. In fact, it is easy to prove that $C_z$ is a separating closed subset, i.e. the partial normalization of $C$ along the support of $C_z$ is not connected. This follows from the fact that every node in $Z$ is separating.
\end{remark}
We start by proving the uniqueness of the hyperelliptic involution for the case of an integral curve. For the remaining part of the section, $k$ is an algebraically closed field over $\kappa$.
\begin{proposition}\label{prop:integral}
Let $C$ be an $A_r$-stable integral curve of genus $g\geq 2$ over $k$ and suppose we are given two hyperelliptic involutions $\sigma_1,\sigma_2$ of $C$. Then $\sigma_1=\sigma_2$.
\end{proposition}
\begin{proof}
First of all, notice that the quotient $Z:=C/\sigma$ is an integral curve with arithmetic genus $0$ over $k$, therefore $Z\simeq \PP^1$. Consider now the following morphism:
$$
\begin{tikzcd}
\phi: C \arrow[r, "f"] & \PP^1 \arrow[r, "i_{g-1}", hook] & \PP^{g-1}
\end{tikzcd}
$$
where $f$ is the quotient morphism and $i_{g-1}$ is the $(g-1)$-uple embedding. It is enough to prove that $\phi^*(\cO_{\PP^{g-1}}(1)) \simeq \omega_C$, as this implies that every hyperelliptic involution comes from the canonical morphism (and therefore is unique).
We denote by $\cL$ the line bundle $\phi^*\cO_{\PP^{g-1}}(1)$. Using Riemann-Roch for integral curves, we get that
$$ h^0(C,\omega_C \otimes \cL^{-1})= h^0(C,\cL) + 1 -g $$
therefore if we prove that $h^0(C,\cL)= g$, we get that $\deg (\omega_C \otimes \cL^{-1})=0$ and that $h^0(\omega_C \otimes \cL^{-1})=1$ which implies $\cL \simeq \omega_C$ (as $C$ is integral). Because $f$ is finite, we have that $$\H^0(C,\cL)=\H^0(\PP^1,i_{g-1}^*\cO_{\PP^{g-1}}(1) \otimes f_*\cO_C) $$
thus using that $f_*\cO_C= \cO_{\PP^1}\oplus \cO_{\PP^1}(-g-1)$ as $f$ is a cyclic cover of degree $2$, we get that
$$ \H^0(C,\cL)= \H^0(\PP^1, \cO_{\PP^1}(g-1))\oplus \H^0(\PP^1, \cO_{\PP^1}(-2))$$
which implies $h^0(C,\cL)=g$.
\end{proof}
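As a consistency check on the splitting $f_*\cO_C= \cO_{\PP^1}\oplus \cO_{\PP^1}(-g-1)$ used above (a verification, not part of the argument), one can compare Euler characteristics: since $f$ is finite,
$$\chi(f_*\cO_C)=\chi(\cO_{\PP^1})+\chi(\cO_{\PP^1}(-g-1))=1+(-g)=1-g=\chi(\cO_C),$$
as expected for a curve $C$ of arithmetic genus $g$.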
Now we deal with the genus $1$ case.
\begin{lemma}\label{lem:genus1}
Let $(C,p_1,p_2)$ be a $2$-pointed $A_r$-stable curve of genus $1$. Then there exists a unique hyperelliptic involution which sends one marked point to the other.
\end{lemma}
\begin{proof}
The proof of this lemma consists of describing all the possible cases. First of all, the condition on the genus implies that the curve is $A_3$-stable, as the arithmetic genus would be too high with more complicated $A_r$-singularities. Clearly,
$$\sum_{\Gamma \subset C} g(\Gamma) \leq g(C)=1$$
where $\Gamma$ varies in the set of irreducible components of $C$. Consider first the case where there exists $\Gamma$ such that $g(\Gamma)=1$; then all the other irreducible components have genus $0$ and all the separating points are nodes. Thanks to the stability condition, it is clear that $C$ is either integral with two smooth marked points (case (a) in \Cref{fig:Genus1}), or it has two irreducible components $\Gamma_1$ and $\Gamma_0$, with $g(\Gamma_1)=1$ and $g(\Gamma_0)=0$, intersecting in a separating node and with the two marked points lying on $\Gamma_0$ (case (b) in \Cref{fig:Genus1}).
Suppose then that every irreducible component $\Gamma$ of $C$ has $g(\Gamma)=0$. In this case, if the curve is not $A_1$-prestable, the only possibility is that there exists a separating tacnode between two genus $0$ curves; the stability condition implies that $C$ consists of two integral curves of genus $0$ intersecting in a tacnode, with the two marked points lying on different components (case (d) in \Cref{fig:Genus1}). Finally, if $C$ is $A_1$-prestable, then $C$ has two irreducible components of genus $0$ intersecting in two points, and the marked points lie on different components (case (c) in \Cref{fig:Genus1}).
\begin{figure}
\caption{$2$-pointed genus $1$ curves}
\label{fig:Genus1}
\end{figure}
In all of these four situations, it is easy to prove existence and uniqueness of the hyperelliptic involution.
\end{proof}
\begin{remark}
In the previous lemma, we can also consider the case when $p:=p_1=p_2$ and $(C,p)$ is a $1$-pointed $A_2$-stable curve. The same result is true.
\end{remark}
We want to treat the case of reducible curves.
\begin{definition}\label{def:subcurve}
Let $(C,\sigma)$ be a hyperelliptic $A_r$-stable curve over an algebraically closed field. A one-equidimensional reduced closed subscheme $\Gamma \subset C$ is called a \emph{subcurve} of $C$. We denote by $\iota_{\Gamma}:\Gamma \rightarrow C$ the closed immersion. If $C'$ is a subcurve of $C$, we denote by $C-C'$ the complementary subcurve of $C'$, i.e. the closure of $C\smallsetminus C'$ in $C$.
\end{definition}
\begin{remark}
A subcurve is just a union of irreducible components of $C$ with the reduced scheme structure. Furthermore, given a one-equidimensional closed subset of $C$, we can always consider the associated subcurve with the reduced scheme structure.
\end{remark}
\begin{lemma}\label{lem:subcurve}
Let $(C,\sigma)$ be a hyperelliptic $A_r$-stable curve of genus $g\geq 2$ over an algebraically closed field and $\Gamma \subset C$ be a subcurve such that $g(\Gamma)\geq 1$. Then $\dim(\Gamma \cap \sigma(\Gamma))=1$.
\end{lemma}
\begin{proof}
Suppose $\dim(\Gamma \cap \sigma(\Gamma))=0$ and consider the quotient morphism $f: C \arr Z:=C/\sigma$. Consider now the schematic image $f(\Gamma) \subset Z$, which is a subcurve of $Z$ as it is reduced and $f$ is finite. If we restrict $f$ to $f(\Gamma)$, we get the following commutative diagram:
$$
\begin{tikzcd}
\Gamma \arrow[r, hook] \arrow[rd] & f^{-1}f(\Gamma) \arrow[r, hook] \arrow[d, "f_{\Gamma}"] & C \arrow[d, "f"] \\
& f(\Gamma) \arrow[r, hook] & Z
\end{tikzcd}
$$
where the square diagram is cartesian. If we consider $U:=f(\Gamma) \smallsetminus f(\Gamma \cap \sigma(\Gamma))$, we get that $f^{-1}(U)$ is a disjoint union of two open subsets and $\sigma$ maps one into the other; therefore the action on $f^{-1}(U)$ is free, so the degree of $f_{\Gamma}$ restricted to $f^{-1}(U)$ is $2$. The condition $\dim(\Gamma \cap \sigma(\Gamma))=0$ assures us that $f^{-1}(U)=(\Gamma \cup \sigma(\Gamma))\smallsetminus (\Gamma \cap \sigma(\Gamma))$ is in fact dense in $f^{-1}f(\Gamma)$. Thus the morphism $f\vert_{\Gamma}:\Gamma \arr f(\Gamma)$ is a finite morphism which is an isomorphism over $\Gamma\smallsetminus \sigma(\Gamma)$, a dense open; therefore it is birational. This implies the following inequality
$$g(Z)\geq g(f(\Gamma))\geq g(\Gamma)\geq 1$$
which is absurd because $g(Z)=0$.
\end{proof}
\begin{remark}
Notice that this lemma implies that the only irreducible components that are not fixed by the hyperelliptic involution have (arithmetic) genus equal to $0$. Therefore, if we have a hyperelliptic involution $\sigma$ of $C$ and $\Gamma$ is an irreducible component of positive genus, we get that $\sigma(\Gamma)=\Gamma$ and $\sigma\vert_{\Gamma}$ is a hyperelliptic involution of $\Gamma$. This is true because we are quotienting by a linearly reductive group, thus the morphism
$$\Gamma/\sigma \rightarrow C/\sigma$$
induced by the closed immersion $\Gamma \subset C$ is still a closed immersion.
\end{remark}
Let $C$ be an $A_r$-stable curve of genus $g\geq 2$. We denote by ${\rm Irr}(C)$ the set whose elements are the irreducible components of $C$. Then every automorphism $\phi$ of $C$ induces a permutation $\tau_{\phi}$ of the set ${\rm Irr}(C)$. First of all, we need to prove that the action on ${\rm Irr}(C)$ is the same for every hyperelliptic involution. Then we prove that the action on the $0$-dimensional locus described by all the intersections between the irreducible components is the same for all hyperelliptic involutions. Finally we see how these two facts imply the uniqueness of the hyperelliptic involution. We denote by $l(Q)$ the length of a $0$-dimensional subscheme $Q\subset C$.
\begin{lemma}\label{lem:exist-decomposition}
Let $(C,\sigma)$ be a hyperelliptic $A_r$-stable curve of genus $g$ over $k$ and suppose there exist two irreducible components $\Gamma_1$ and $\Gamma_2$ of genus $0$ of $C$ such that $\sigma$ sends $\Gamma_1$ to $\Gamma_2$. Let $n$ be the length of $\Gamma_1\cap \Gamma_2$. Then there exist a nonnegative integer $m$ and $m$ disjoint subcurves $D_i \subset C$ such that the following properties hold:
\begin{itemize}
\item[a)] $m+n\geq 3$,
\item[b)] $\sigma(D_i)=D_i$ for every $i=1,\dots,m$,
\item[c)] the length of the subscheme $D_i\cap \Gamma_j$ is $1$ for every $i=1,\dots,m$ and $j=1,2$ and $D_i \cap \Gamma_1 \cap \Gamma_2=\emptyset$,
\item[d)] if we denote by $P_j^i$ the intersection $D_i \cap \Gamma_j$, we have that $(D_i,P_1^i,P_2^i)$ is a $2$-pointed $A_r$-stable curve of genus $g_i>0$ such that $\sigma\vert_{D_i}$ is a hyperelliptic involution of $D_i$ which maps $P_1^i$ to $P_2^i$,
\item[e)] $C=\Gamma_1\cup \Gamma_2 \cup \bigcup_{i=1}^m D_i$.
\end{itemize}
Furthermore, the following equality holds $$g=m+n-1+\sum_{i=1}^m g_i. $$
\end{lemma}
Before proving the lemma, let us explain it more concretely. First of all, the intersections in the lemma are scheme-theoretic, therefore they can be non-reduced. More precisely, the intersections have to be supported at singular points of type $A_{2h-1}$, as the local ring there is not integral. It is easy to prove that the scheme-theoretic intersection at such a point has length exactly $h$. Notice that either $m$ or $n$ can be zero.
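To illustrate the length claim (a sketch in local coordinates, assuming the characteristic is different from $2$): an $A_{2h-1}$-singularity has completed local ring $k[[x,y]]/(y^2-x^{2h})$, whose two branches are cut out by $y-x^h$ and $y+x^h$. Their scheme-theoretic intersection is
$$\spec k[[x,y]]/(y-x^h,\,y+x^h)=\spec k[[x]]/(x^h),$$
which indeed has length $h$.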
\begin{figure}
\caption{Decomposition as in \Cref{lem:exist-decomposition}}
\label{fig:Decomp}
\end{figure}
\begin{proof}
Let $\pi:C\rightarrow Z$ be the quotient morphism and $\Gamma$ the image of $\Gamma_1\cup\Gamma_2$ through $\pi$. Because every node in $Z$ is a separating node, we know that $Z-\Gamma = \bigsqcup_{i=1}^m E_i$, i.e. it is a disjoint union (possibly empty) of $m$ subcurves of $Z$, which are still reduced, connected and of genus $0$. Let $D_i$ be the subcurve of $C$ associated to the closed subset $\pi^{-1}(E_i)$ of $C$. We prove that $D_1,\dots,D_m$ verify the properties listed in the lemma.
Clearly, $D_i\cap D_j=\emptyset$ for every $i\neq j$ by construction. Properties b) and e) are verified by construction as well. Notice that $\Gamma_1\cap\Gamma_2\cap D_i=\emptyset$ for every $i=1,\dots,m$: otherwise, if $p\in \Gamma_1\cap\Gamma_2\cap D_i$ were a closed point, the local ring $\cO_{C,p}$ would have $3$ minimal primes, which cannot occur as the only singularities allowed are of type $A_r$, which have at most $2$ minimal primes. Therefore, if we define $Q_i:=E_i\cap \Gamma$, we have that $\pi^{-1}(Q_i)$ is disconnected (as $Q_i$ does not belong to $\pi(\Gamma_1\cap\Gamma_2)$), and thus we are in situation (n1) of \Cref{prop:description-quotient}. Property c) follows easily from this. Because $C$ is $A_r$-stable, we also get property a).
Regarding property d), we know that $D_i$ is reduced and that $D_i\cap (C-D_i)$ has length $2$, hence it is enough to prove that $D_i$ is connected, and then the statement follows. Suppose by contradiction that $D_i$ is not connected, say $D_i=D_i^1 \sqcup D_i^2$ with $D_i^j$ two subcurves of $C$ for $j=1,2$ such that $\sigma(D_i^1)=D_i^2$. Using \Cref{lem:subcurve} again, we get $g(D_i^j)=0$ for $j=1,2$. This is not possible because of the stability condition on $C$. The genus formula follows from a straightforward computation using \Cref{rem:genus-count}.
\end{proof}
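As a sanity check of the genus formula (an illustrative configuration, not taken from the text): take $m=1$, $n=1$ and $g_1=1$, i.e. $\Gamma_1$ and $\Gamma_2$ meet in one node and a connected genus $1$ subcurve $D_1$ is attached to each $\Gamma_j$ in one node. Counting with the usual formula for nodal curves,
$$g=\sum_{\text{components}} g(\Gamma) + \#\{\text{nodes}\}-\#\{\text{components}\}+1=(0+0+1)+3-3+1=2,$$
which agrees with $g=m+n-1+\sum_{i=1}^m g_i=1+1-1+1=2$.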
Now that we have described the geometric structure of $C$, we can use it to prove that any other hyperelliptic involution has to act in the same way over the set ${\rm Irr}(C)$ of the irreducible components.
\begin{lemma}
Let $C/k$ be an $A_r$-stable curve of genus $g$ and suppose we have a decomposition as in \Cref{lem:exist-decomposition}. Then every hyperelliptic involution $\sigma$ of $C$ is compatible with the decomposition, i.e. we have $\sigma(\Gamma_1)=\Gamma_2$.
\end{lemma}
\begin{proof}
Let $\pi:C\rightarrow Z$ be the quotient morphism. We start with the case $m=0$ and $n:=l(\Gamma_1\cap \Gamma_2)\geq 3$. Suppose by contradiction that $\sigma(\Gamma_j)=\Gamma_j$ for $j=1,2$. Recall that the only singularities that can appear in the intersection $\Gamma_1\cap \Gamma_2$ are of the form $A_{2h-1}$. As the involution does not exchange the two irreducible components, \Cref{prop:descr-inv} implies that the only possible singularities in the intersection are nodes, or tacnodes with nodes as quotients. Because $n\geq 3$, the quotient would have two irreducible components intersecting in at least two nodes, which does not have genus $0$. Therefore $\sigma(\Gamma_1)=\Gamma_2$.
Suppose now $m\geq 1$ and $n\geq 2$. Because $n\geq 2$, the subcurve $\Gamma_1 \cup \Gamma_2$ has positive genus, therefore $\sigma(\Gamma_1\cup\Gamma_2)$ and $\Gamma_1\cup\Gamma_2$ have a common component. Suppose $\sigma(\Gamma_1)\neq \Gamma_2$; thus, without loss of generality, $\sigma(\Gamma_2)=\Gamma_2$. If $\sigma(\Gamma_1)=\Gamma_1$, then $\pi(\Gamma_1)\cap\pi(\Gamma_2)$ contains at least one node because $n\geq 2$, and the subcurve $\pi(\Gamma_1\cup\Gamma_2\cup D_1)$ of $Z$ does not have genus $0$ as $D_1$ is connected. Therefore we can suppose $\sigma(\Gamma_1)\subset D_1$. Then $\sigma(\Gamma_1\cap\Gamma_2)\subset D_1\cap\Gamma_2$, but this implies $l(\Gamma_1\cap\Gamma_2)\leq 1$, which contradicts $n\geq 2$.
Now we consider the case $n=1$ and $m\geq 2$. Again it is easy to prove that we cannot have $\sigma(\Gamma_i)=\Gamma_i$ for $i=1,2$. Without loss of generality we can suppose $\sigma(\Gamma_1)\subset D_1$. However $\sigma(D_2)\neq D_2$ as $D_2\cap \Gamma_1\neq \emptyset$, and $\sigma(D_2)$ and $D_2$ have to share an irreducible component because of \Cref{lem:subcurve}. This implies that $\Gamma_2 \subset \sigma(D_2)$ because $D_2$ is connected. Thus $\sigma(\Gamma_1)\cap\sigma(\Gamma_2)\subset D_1\cap D_2 = \emptyset$, which is absurd. The only possibility is $\sigma(\Gamma_1)=\Gamma_2$.
Finally, we consider the case $n=0$ and $m\geq 3$. As above, we cannot have $\sigma(\Gamma_j)=\Gamma_j$ for $j=1,2$, otherwise the quotient does not have genus $0$. If $\sigma(\Gamma_1)\subset D_1$, then $\sigma(\Gamma_2)$ is contained in only one of the subcurves $\{D_i\}_{i=1,\dots,m}$. Thus at least one of the $D_i$'s, say $D_2$, is stable under the action of the involution (because $m\geq 3$, $D_i$ is connected for every $i=1,\dots,m$ and $\dim D_i \cap \sigma(D_i) = 1$). This is absurd because $\sigma(\Gamma_1)\cap \sigma(D_2) \subset D_1\cap D_2=\emptyset$. This implies $\sigma(\Gamma_1)=\Gamma_2$ and we are done.
\end{proof}
\begin{corollary}\label{cor:action-irrcomp}
If $C/k$ is an $A_r$-stable curve of genus $g\geq 2$ and $\sigma_1$ and $\sigma_2$ are two hyperelliptic involutions, then $\tau_{\sigma_1}=\tau_{\sigma_2}$.
\end{corollary}
Let us study now the action on the intersections between irreducible components.
\begin{lemma}\label{lem:action-intersection}
Let $\Gamma_1$ and $\Gamma_2$ be two irreducible components of $C$ and $p\in \Gamma_1\cap \Gamma_2$ be a closed point. If $\sigma_1$ and $\sigma_2$ are two hyperelliptic involutions of $C$, then $\sigma_1(p)=\sigma_2(p)$.
\end{lemma}
\begin{proof}
Suppose $\sigma$ is a hyperelliptic involution such that $\sigma(\Gamma_1)=\Gamma_2$. Because the quotient of $\Gamma_1\cup\Gamma_2$ by $\sigma$ is irreducible of genus $0$, we have that $\sigma(p)=p$ for every $p \in \Gamma_1\cap\Gamma_2$. Suppose now $\sigma(\Gamma_1)\neq \Gamma_2$ and denote by $\pi:C\rightarrow Z$ the quotient morphism. Clearly $\pi(\Gamma_1\cup\Gamma_2)$ is not irreducible and $\pi(\Gamma_1\cap\Gamma_2)$ is a separating node, therefore $\Gamma_1\cap\Gamma_2$ is supported either on a node, on a tacnode, or on two disjoint nodes exchanged by the involution. In the first two cases the intersection is supported on one point, therefore there is nothing to prove. If the intersection is supported on two nodes, then every involution has to exchange them, because otherwise the quotient would not have genus $0$.
\end{proof}
\begin{remark}\label{rem:part-case}
We can say more about the case $\sigma(\Gamma_1)=\Gamma_2$. In this situation, we claim that every hyperelliptic involution acts trivially on $\Gamma_1\cap \Gamma_2$ scheme-theoretically, not only set-theoretically. In fact, suppose we have a fixed point $p\in \Gamma_1\cap\Gamma_2$. We know that $\Gamma_1$ and $\Gamma_2$ are irreducible components of genus $0$ and the image of $p$ in the quotient by $\sigma$ is a smooth point. Therefore we can consider the local ring $A:=\cO_{C,p}$, whose invariant subalgebra is a DVR denoted by $R$. By flatness, we know that $A=R[y]/(y^2-h)$, where $h$ is an element of $R$ and $\sigma$ is defined by the association $y\mapsto -y$. By hypothesis, $A$ has two minimal primes $\mathfrak{p}$ and $\mathfrak{q}$ such that $\mathfrak{p}\cap \mathfrak{q}=0$. We can consider the morphism
$$A \rightarrow A/\mathfrak{p}\oplus A/\mathfrak{q}$$
which is injective after completion, therefore injective. Because $R$ is a DVR, an easy computation shows that $h$ is a square, thus $A$ is of the form $R[y]/(y^2-r^2)$ with $r\in R$, and the involution is defined by the formula $\sigma(a+by)=a-by$. Clearly the two minimal primes are $(y-r)$ and $(y+r)$, and the intersection $\Gamma_1\cap\Gamma_2$ is contained in the fixed locus of the action because it is defined by the ideal $(y,r)$ (the fixed locus is defined by the ideal $(y)$).
\end{remark}
Finally we can prove the theorem.
\begin{theorem}
Let $C/k$ be an $A_r$-stable curve of genus $g\geq 2$. If $\sigma_1$ and $\sigma_2$ are two hyperelliptic involutions of $C$, then $\sigma_1=\sigma_2$.
\end{theorem}
\begin{proof}
We prove that two hyperelliptic involutions coincide when restricted to each subcurve of a decomposition of $C$.
The integral case is done in \Cref{prop:integral}. Because of \Cref{cor:action-irrcomp} and \Cref{lem:action-intersection}, we know that every hyperelliptic involution $\sigma$ acts in the same way on the set of irreducible components and on the (set-theoretic) intersection on every pair of irreducible components.
If we consider an irreducible component of genus at least $2$, then $\sigma$ restricts to the irreducible component, the restriction is still a hyperelliptic involution, and we can use \Cref{prop:integral} to get the uniqueness restricted to the component. If we restrict $\sigma$ to a component $\Gamma$ of genus $1$, we still have that it is a hyperelliptic involution. If we consider a point $p \in \Gamma\cap (C-\Gamma)$, we get that every other hyperelliptic involution $\sigma'$ has to verify $\sigma'(p)=\sigma(p)$ thanks to \Cref{lem:action-intersection}. Then we can apply \Cref{lem:genus1} to get the uniqueness restricted to the component $\Gamma$.
Finally, if $\Gamma$ has genus $0$, there are only two possibilities, namely $\sigma(\Gamma)=\Gamma$ or $\sigma(\Gamma)\neq \Gamma$. If $\sigma(\Gamma)=\Gamma$, then two different involutions have to coincide, when restricted to $\Gamma$, in at least two points by stability (we have tacnodes or nodes as intersections). This easily implies they coincide on the whole component. If $\sigma(\Gamma)\neq \Gamma$, it is easy to see that every hyperelliptic involution restricted to $\sigma(\Gamma) \cup \Gamma$ is determined by an involution of $\PP^1$ (one can just choose two identifications $\Gamma\simeq \PP^1$ and $\sigma(\Gamma)\simeq \PP^1$). Because $\sigma$ does not fix $\Gamma$, we can use \Cref{lem:exist-decomposition} to get a decomposition of $C$ into subcurves $D_1,\dots,D_m$ with the properties listed in the lemma. The stability condition, i.e. $m+l(\Gamma\cap \sigma(\Gamma))\geq 3$, implies that two hyperelliptic involutions restricted to $\Gamma\cup \sigma(\Gamma)$ are determined by two involutions of $\PP^1$ which coincide on a subscheme of length $m+l(\Gamma\cap\sigma(\Gamma))$, therefore they coincide.
\end{proof}
\begin{remark}
Notice that for the case $m=0$ and $\Gamma\cap\sigma(\Gamma)$ supported on a single point we really need \Cref{rem:part-case}. In fact, for the case of two copies of $\PP^1$ glued along a subscheme of length greater than or equal to $3$ concentrated at a single point, the condition that two hyperelliptic involutions coincide on the set-theoretic intersection is not enough to conclude that they are the same.
\end{remark}
\subsection*{Unramifiedness}
Next we focus on the unramifiedness of the map $\eta$. We prove it using deformation theory of curves and of morphisms of curves.
\begin{remark}\label{rem:deformation}
Let $\cX,\cY$ be two algebraic stacks and let $f:\cY\arr \cX$ be a morphism. Suppose we are given a 2-commutative diagram
$$
\begin{tikzcd}
\spec k\arrow[r, "y"] \arrow[d] & \cY \arrow[d] \\
{\spec k[\varepsilon]} \arrow[r, "x_{\varepsilon}"] & \cX
\end{tikzcd}
$$
with $k$ a field and $k[\varepsilon]$ the ring of dual numbers over $k$. We define $x:=f(y)$ and $\cY_x$ the fiber product of $f$ with the morphism induced by $x$; clearly we have a lifting of $y$ from $\cY$ to $\cY_x$, which is denoted by $y$ by abuse of notation. By standard arguments in deformation theory, we get the following exact sequence of vector spaces over $k$:
$$
\begin{tikzcd}
0 \arrow[r] & T_{\id}\aut_{\cY_x}(y) \arrow[r] & T_{\id}\aut_{\cY}(y) \arrow[r] & T_{\id}\aut_{\cX}(x) \arrow[lld, "\alpha_f(y)" description ] \\
& \pi_0(T_{y}\cY_x) \arrow[r] & \pi_0(T_y\cY) \arrow[r] & \pi_0(T_x\cX)
\end{tikzcd}
$$
where $T_x \cX$ is the groupoid of morphisms $x_{\varepsilon}:\spec k[\varepsilon] \arr \cX$ such that the composition with $\spec k \hookrightarrow \spec k[\varepsilon]$ is exactly $x:\spec k \arr \cX$. By standard notation, we call $T_x\cX$ the tangent space of $\cX$ at $x$.
\end{remark}
If we prove that $\pi_0(T_y\cY_x)=0$, then the morphism $f$ is fully faithful at the level of tangent spaces, and therefore unramified. We prove that this is true for the morphism $\eta$.
First of all, we need to describe the fiber $\widetilde{\cH}_C$ of the morphism $\eta: \widetilde{\cH}_g^r \rightarrow \widetilde{\mathcal M}_g^r$ at a point $C \in \widetilde{\mathcal M}_g^r(k)$, where $k/\kappa$ is an extension of fields with $k$ algebraically closed. As the map $\eta$ is faithful, we know that $\widetilde{\cH}_C$ is equivalent to a set. Given a point $(C,\sigma)\in \widetilde{\cH}_C(k)$, an element in $T_{(C,\sigma)}\widetilde{\cH}_C$ is a pair $(C[\varepsilon],\sigma_{\varepsilon})$, where $C[\varepsilon]$ is the trivial deformation of $C$ and $\sigma_{\varepsilon}:C[\varepsilon]\rightarrow C[\varepsilon]$ is a deformation of $\sigma$. Therefore, we need to prove the uniqueness of the hyperelliptic involution for deformations. To do so, we study the deformations of the quotient map $\pi: C \rightarrow Z:=C/\sigma$.
Let ${\rm Def}^{\rm fix}_{C/Z}$ be the deformation functor associated to the problem of deforming the morphism $\pi:C \rightarrow Z$ with both source and target fixed and ${\rm InfAut}(Z)$ be the deformation functor of infinitesimal automorphisms of $Z$. There is a natural morphism of deformation functors
$$ \alpha:{\rm InfAut}(Z) \longrightarrow {\rm Def}^{\rm fix}_{C/Z}$$
whose restriction to the tangent spaces $d\alpha$ induces a morphism of $k$-vector spaces. Furthermore, we have a map $$\gamma:T_{(C,\sigma)}\widetilde{\cH}_C \longrightarrow {\rm Def}^{\rm fix}_{C/Z} $$ defined by the association $(C[\varepsilon],\sigma_{\varepsilon})\mapsto \pi_{\varepsilon}:C[\varepsilon]\rightarrow Z[\varepsilon]\simeq C[\varepsilon]/\sigma_{\varepsilon}$.
\begin{remark}
It is not completely trivial that the quotient $C[\varepsilon]/\sigma_{\varepsilon}$ is isomorphic to the trivial deformation of $Z$. One can prove it using the fact that the morphism $\pi$ is finite, reducing to the affine case.
\end{remark}
Notice that $\gamma(\sigma_{\varepsilon}) \in \im{d\alpha}$ implies $\sigma_{\varepsilon}=0$ because of \Cref{lem:unique-inv-quotient}. Therefore, it is enough to prove $\im{\gamma}\subset \im{d\alpha}$.
Let us focus on the morphism $\alpha$. The morphism $d\alpha$ can be identified with the map
$$ \hom_{\cO_Z}(\Omega_Z,\cO_Z) \longrightarrow \hom_{\cO_Z}(\Omega_Z,\pi_*\cO_C)$$
induced by applying $\hom_{\cO_Z}(\Omega_Z,-)$ to the natural exact sequence
$$ 0 \rightarrow \cO_Z \rightarrow \pi_*\cO_C \rightarrow L\rightarrow 0 .$$
Clearly, if we have $\hom_{\cO_Z}(\Omega_Z, L)=0$, we get that $d\alpha$ is surjective and we are done. This is not true in general, and we will see why in \Cref{ex:not-involution}. Therefore we need to treat the problem with care, studying the deformation spaces involved.
Let us start with the case of $(C,\sigma)$ where $C$ does not have separating nodes. Thanks to the description in \Cref{prop:description-quotient}, we have that the quotient map $\pi:C \rightarrow Z$ is finite flat of degree $2$. The theory of cyclic covers implies that every deformation of $\pi$ still induces a hyperelliptic involution, meaning that in this case the composition
$$ \bar{\gamma}: T_{(C,\sigma)}\widetilde{\cH}_C \longrightarrow \frac{{\rm Def}^{\rm fix}_{C/Z}}{\im{d\alpha}} $$
is an isomorphism of vector spaces.
However, if $C$ has at least one separating node, the deformations of $\pi$ do not always give a deformation of the involution $\sigma$.
\begin{example}\label{ex:not-involution}
Let $C$ be an $A_1$-stable curve of genus $2$ over $k$ with a separating node $p \in C(k)$. We have a hyperelliptic involution $\sigma$ which fixes the two genus $1$ components, and such that the two points $p_1,p_2$ over the node $p$ in the normalization of $C$ are fixed by the involution as well. Therefore we have a quotient morphism $\pi: C \rightarrow Z$, where $Z$ is a genus $0$ curve with two components meeting in one separating node $q$, the image of $p$. The morphism $\pi$ is finite, and it is flat of degree $2$ when restricted to the open subset $C \smallsetminus p$. On the contrary, locally around the point $p$ it is finite and totally ramified, with $\pi^{-1}(q)$ the spectrum of a length $3$ local artinian $k$-algebra. Clearly, we can deform $\pi$, without deforming source and target, so as to make it unramified over $q$. This deformation cannot correspond to a hyperelliptic involution.
Put another way, we are deforming the two involutions on the two components of $C$ in such a way that the fixed locus moves away from $p_1$ and $p_2$. This implies that we cannot patch them together to get an involution of $C$. To sum up, we need to consider only deformations where the fixed locus of the two involutions does not move.
Now we look at the tangent space. Let $C_1$ and $C_2$ be the two genus $1$ components of $C$ and let $Z_1$ and $Z_2$ be the two components of $Z$ (the reduced schematic images of $C_1$ and $C_2$ respectively). Let $L_i$ be the quotient line bundle of $\cO_{Z_i} \hookrightarrow \pi_*\cO_{C_i}$ for $i=1,2$. An easy computation (see \Cref{lem:decomp-line-bundle}) shows that $$\hom_{\cO_Z}(\Omega_Z,L)=\hom_{\cO_{Z_1}}(\Omega_{Z_1},L_1)\oplus \hom_{\cO_{Z_2}}(\Omega_{Z_2},L_2)$$
and in particular $\hom_{\cO_Z}(\Omega_Z,L)$ is not zero. Nevertheless, if we ask that our deformation of $\pi$ is totally ramified over the separating node, we get exactly the space $\hom_{\cO_{Z_1}}(\Omega_{Z_1},L_1(-p_1))\oplus \hom_{\cO_{Z_2}}(\Omega_{Z_2},L_2(-p_2))$ which is zero. We are going to generalize this computation to our setting.
\end{example}
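The length-$3$ fiber appearing in the example above can be checked by a local computation (a sketch, assuming the characteristic of $k$ is different from $2$): at the separating node we have $\widehat{\cO}_{C,p}\simeq k[[x,y]]/(xy)$, with the involution acting by $x\mapsto -x$, $y\mapsto -y$. The invariant subring is generated by $u=x^2$ and $v=y^2$ (note $xy=0$), so $\widehat{\cO}_{Z,q}\simeq k[[u,v]]/(uv)$ is again a node, and the fiber over $q$ is
$$\pi^{-1}(q)=\spec\big(k[[x,y]]/(xy,\,x^2,\,y^2)\big),$$
which has $k$-basis $\{1,x,y\}$ and hence length $3$, as stated.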
Suppose $\Gamma$ is a subcurve of $Z$; then we define the subcurve $C_{\Gamma}:=\pi^{-1}(\Gamma)_{\rm red}$ of $C$. Let $\pi_{\Gamma}:C_{\Gamma}\rightarrow \Gamma$ be the restriction of $\pi$ to $C_{\Gamma}$ and $L_{\Gamma}$ be the quotient bundle of the natural map $\cO_{\Gamma}\hookrightarrow \pi_{\Gamma,*}\cO_{C_{\Gamma}}$.
The main statement in this section is the following theorem.
\begin{theorem}
In the situation above, the morphism $\bar{\gamma}$ factors through the following inclusion of vector spaces
$$ \bigoplus_{\Gamma \in {\rm Irr}(Z)}\hom_{\cO_{\Gamma}}(\Omega_{\Gamma},L_{\Gamma}(-D_{\Gamma}))\subset \hom_{\cO_Z}(\Omega_{Z},L)$$
where $D_{\Gamma}$ is the Cartier divisor on $\Gamma$ defined as $\sum_{\Gamma' \in {\rm Irr}(Z),\, \Gamma'\neq \Gamma}\Gamma \cap \Gamma'$, or equivalently $\Gamma \cap (Z - \Gamma)$.
\end{theorem}
The theorem above implies the result we need to conclude the study of the unramifiedness of $\eta$.
\begin{corollary}
For every $(C,\sigma) \in \widetilde{\cH}_C$, the morphism $\bar{\gamma}\equiv 0$, which implies that $\im \gamma \subset \im d\alpha$.
\end{corollary}
\begin{proof}
It is enough to prove that for every irreducible component $\Gamma$ of $Z$ we have $\hom_{\cO_{\Gamma}}(\Omega_{\Gamma},L_{\Gamma}(-D_{\Gamma}))=0$. Let us explain why this follows from the stability condition on $C$. Clearly $C_{\Gamma}$ is an $A_r$-prestable hyperelliptic curve, with a $2:1$-morphism over $\Gamma\simeq \PP^1$. Let $n_{\Gamma}$ be the number of nodal points on the component $\Gamma$, or equivalently the degree of $D_{\Gamma}$. Thus $\deg(\Omega_{\PP^1}^{\vee} \otimes L_{\Gamma}(-D_{\Gamma}))=2+(-h_{\Gamma}-1-n_{\Gamma})=1-h_{\Gamma}-n_{\Gamma}$, where $h_{\Gamma}$ is the (arithmetic) genus of $C_{\Gamma}$. The stability condition on $C$ implies that $2h_{\Gamma}-2+2n_{\Gamma}>0$, because the restriction to $C_{\Gamma}$ of the fiber of a node has length at most $2$. Therefore $\deg (\Omega_{\PP^1}^{\vee} \otimes L_{\Gamma}(-D_{\Gamma}))<0$ and we are done. Equivalently: if $h_{\Gamma}>1$, the degree we want to compute is clearly negative. If $h_{\Gamma}=1$, then $C_{\Gamma}$ has at least one point of intersection with $C-C_{\Gamma}$, therefore $n_{\Gamma}>0$. If $h_{\Gamma}=0$, then $C_{\Gamma}$ intersects $C-C_{\Gamma}$ in a subscheme of length at least $3$; but because the restriction to $C_{\Gamma}$ of the fiber of a node of $Z$ has length at most $2$, the support of the intersection contains at least two points, which implies $n_{\Gamma}>1$.
\end{proof}
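In summary, the degree bookkeeping in the proof above can be displayed as
$$\deg\bigl(\Omega_{\PP^1}^{\vee}\otimes L_{\Gamma}(-D_{\Gamma})\bigr)=2-(h_{\Gamma}+1)-n_{\Gamma}=1-h_{\Gamma}-n_{\Gamma}\leq -1,$$
since $h_{\Gamma}$ and $n_{\Gamma}$ are non-negative integers with $2h_{\Gamma}-2+2n_{\Gamma}>0$, hence $h_{\Gamma}+n_{\Gamma}\geq 2$.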
To prove the theorem, we reduce to the case when the map $\pi$ is flat, or equivalently when there are no separating nodes on $C$, and then we prove the statement in that case.
\begin{definition}\label{def:sep-dec}
Let $C$ be an $A_r$-prestable curve and let $\{C_i\}_{i \in I}$ be a set of subcurves of $C$ (see \Cref{def:subcurve} for the definition of subcurves). We say that $\{C_i\}_{i \in I}$ is an $A_1$-separating decomposition of $C$ if the following three properties are satisfied:
\begin{enumerate}
\item $C=\bigcup_{i \in I} C_i$,
\item no $C_i$ has a separating node (for $C_i$ itself),
\item $C_i\cap C_j$ is either empty or a separating node for every $i\neq j$.
\end{enumerate}
\end{definition}
Such a decomposition exists and is unique.
Let $(C,\sigma)$ be a hyperelliptic $A_r$-stable curve and $\{C_i\}_{i \in I}$ be its $A_1$-separating decomposition. As usual, we denote by $\pi:C \rightarrow Z:=C/\sigma$ the quotient morphism. Fix an index $i \in I$. Let $Z_i$ be the schematic image of $C_i$ through $\pi$ and $\pi_i:C_i \rightarrow Z_i$ be the restriction of $\pi$ to $C_i$. Using again the local description in \Cref{prop:description-quotient}, we get that $Z_i$ is reduced (therefore a subcurve of $Z$) and $\pi_i$ is flat. Finally, let $L_i$ be the quotient of the natural map
$$\cO_{Z_i} \hookrightarrow \pi_{i,*}\cO_{C_i}$$
which is a line bundle because $\pi_i$ is flat and finite of degree $2$ (and we are in characteristic different from $2$).
The following lemma generalizes the idea in \Cref{ex:not-involution}.
\begin{lemma}\label{lem:decomp-line-bundle}
In the situation above, we have an isomorphism of coherent sheaves over $Z$
$$ L \simeq \bigoplus_{i \in I} \iota_{Z_i,*}L_i$$
where $\iota_{Z_i}:Z_i \hookrightarrow Z$ is the closed immersion of the subcurve $Z_i$ in $Z$.
\end{lemma}
\begin{proof}
The commutative diagram
$$\begin{tikzcd}
\bigsqcup_{i \in I} C_i \arrow[r, "\bigsqcup \iota_{C_i}"] \arrow[d, "\bigsqcup \pi_i"'] & C \arrow[d, "\pi"] \\
\bigsqcup_{i\in I}Z_i \arrow[r, "\bigsqcup \iota_{Z_i}"] & Z
\end{tikzcd}
$$
induces a commutative diagram at the level of structural sheaves
$$
\begin{tikzcd}
\cO_Z \arrow[r, hook] \arrow[d, hook] & \bigoplus_{i \in I} \iota_{Z_i,*}\cO_{Z_i} \arrow[d, hook] \\
\pi_*\cO_{C} \arrow[r, hook] & \bigoplus_{i \in I}\pi_*\iota_{C_i,*}\cO_{C_i}.
\end{tikzcd}
$$
We want to prove that the map induced between the quotients of the two vertical maps is an isomorphism; indeed, the quotient of the right-hand vertical map is by construction isomorphic to $\bigoplus_{i \in I} \iota_{Z_i,*}L_i$. By the snake lemma, it is enough to prove that the induced morphism between the two horizontal quotients is an isomorphism. This follows from a local computation at the separating nodes.
\end{proof}
\begin{remark}
Because of the fundamental exact sequence for the differentials associated to the immersion $\iota_{Z_i}:Z_i\hookrightarrow Z$, we have that
$$ \hom_{\cO_{Z_i}}(\Omega_{Z_i},L_i)\simeq \hom_{\cO_{Z_i}}(\iota_{Z_i}^*\Omega_{Z},L_i)$$
and therefore the previous lemma implies that
$$ \hom_{\cO_Z}(\Omega_Z,L) \simeq \bigoplus_{i \in I} \hom_{\cO_{Z_i}}(\Omega_{Z_i},L_{i}).$$
It is easy to see that the last isomorphism can be described in terms of deformations: one simply restricts a deformation of $\pi$ (with both source and target fixed) to a deformation of $\pi_i$ (with both source and target fixed).
\end{remark}
\begin{proposition}
The map $\bar{\gamma}$ factors through the inclusion of vector spaces
$$ \bigoplus_{i\in I}\hom_{\cO_{Z_i}}(\Omega_{Z_i},L_i(-D_i)) \subset \hom_{\cO_Z}(\Omega_Z, L) $$
where $D_i$ is the Cartier divisor on $Z_i$ defined as $\sum_{j \in I,\, j\neq i} (Z_i \cap Z_j)$.
\end{proposition}
\begin{proof}
Let us start with a deformation of the hyperelliptic involution $\sigma_{\varepsilon}:C[\varepsilon]\rightarrow C[\varepsilon]$. Let $p:\spec k \hookrightarrow C$ be a separating nodal point; then a local computation around $p$ in $C[\varepsilon]$ shows that the following diagram is commutative
$$\begin{tikzcd}
{\spec k[\varepsilon]} \arrow[r, "{p[\varepsilon]}", hook] \arrow[d, "{p[\varepsilon]}"', hook] & {C[\varepsilon]} \\
{C[\varepsilon]} \arrow[ru, "\sigma_{\varepsilon}"] &
\end{tikzcd}$$
where $p[\varepsilon]$ is the trivial deformation of $p$. This means that a deformation of the hyperelliptic involution (where the curve $C$ is not deformed) cannot move the separating nodes. The same is then true for the quotient morphism $\pi_{\varepsilon}$ (which is associated to the element $\bar{\gamma}(\sigma_{\varepsilon})$): namely, $\pi_{\varepsilon}\circ p[\varepsilon]= q[\varepsilon]$ where $q=\pi(p)$. Furthermore, the same is true for the restriction $\pi_{i,\varepsilon}$ of $\pi_{\varepsilon}$ to $C_i[\varepsilon]$ for every $i \in I$. Therefore, the statement follows from the following fact: given the element $\delta_{i,\varepsilon}$ representing $\pi_{i,\varepsilon}$ in $\hom_{\cO_{Z_i}}(\Omega_{Z_i},L_i)$, the condition $\pi_{i,\varepsilon}\circ p[\varepsilon]= q[\varepsilon]$ translates into the condition $\delta_{i,\varepsilon}(q)=0$, which implies $\delta_{i,\varepsilon} \in \hom_{\cO_{Z_i}}(\Omega_{Z_i},L_i(-q))$.
\end{proof}
\begin{remark}
Notice that the intersection $Z_i \cap Z_j$ is a smooth point on $Z_i$, therefore a Cartier divisor.
\end{remark}
Finally, we have reduced ourselves to studying the case where $C$ has no separating nodes, or equivalently where the quotient morphism $\pi:C \rightarrow Z$ is flat. We have to describe the whole of $\hom_{\cO_Z}(\Omega_{Z},L)$ when $(C,\sigma)$ is a hyperelliptic $A_r$-prestable curve without separating nodes, because every deformation of the morphism $\pi$ gives rise to a deformation of $\sigma$.
Let $\{ Z_i \}_{i \in I}$ be the $A_1$-separating decomposition of $Z$ (equivalently, its decomposition into irreducible components, as $Z$ has genus $0$). We denote by $\pi_i:C_i \rightarrow Z_i$ the restriction of $\pi:C\rightarrow Z$ to $Z_i$, i.e. $C_i:=Z_i\times_Z C$. Again, $L_i$ is the quotient line bundle of the natural morphism $\cO_{Z_i} \hookrightarrow \pi_{i,*}\cO_{C_i}$.
\begin{remark}
Notice that in this situation we are just considering the pullback of $\pi$, while in the case of separating nodes we needed to work with the restriction to the subcurve $C_i$. The reason is that in the previous situation, $\pi^{-1}(Z_i)=\pi^{-1}\pi(C_i)$ was set-theoretically equal to $C_i$, but not schematically. In fact, $\pi^{-1}(Z_i)$ is not reduced: it has embedded components supported on the restrictions of the separating nodes.
\end{remark}
\begin{proposition}
In the situation above,
$$\hom_{\cO_Z}(\Omega_{Z},L) \simeq \bigoplus_{i \in I}\hom_{\cO_{Z_i}}(\Omega_{Z_i},L_i(-D_i))$$
where $D_i$ is the Cartier divisor on $Z_i$ defined as $\sum_{j \in I,\, j \neq i}(Z_i \cap Z_j)$.
\end{proposition}
\begin{proof}
Firstly, we consider the exact sequence
$$ 0 \rightarrow \cO_Z \rightarrow \bigoplus_{i \in I}\iota_{Z_i,*} \cO_{Z_i} \rightarrow \bigoplus_{n \in N(Z)}k(n) \rightarrow 0 $$
where $N(Z)$ is the set of nodal points of $Z$.
Because $L$ is a line bundle ($\pi$ is flat), if we tensor the exact sequence with $L$ and then apply the functor $\hom_{\cO_Z}(\Omega_{Z},-)$, we end up with an exact sequence
$$ 0 \rightarrow \hom_{\cO_Z}(\Omega_{Z},L) \rightarrow \bigoplus_{i \in I}\hom_{\cO_{Z_i}}(\iota_{Z_i}^*\Omega_Z,\iota_{Z_i}^*L) \rightarrow \bigoplus_{n \in N(Z)}\hom_{\cO_Z}(\Omega_Z,k(n)). $$
The flatness of $\pi$ implies that $\iota_{Z_i}^*L \simeq L_i$, and using the fundamental exact sequence of differentials we get
$$\hom_{\cO_{Z_i}}(\iota_{Z_i}^*\Omega_Z,L_i) \simeq \hom_{\cO_{Z_i}}(\Omega_{Z_i},L_i).$$
Notice that $\hom_{\cO_Z}(\Omega_{Z},k(n))=k(n)\,dx^{\vee}\oplus k(n)\,dy^{\vee}$, where $dx,dy$ are the two generators of $\Omega_Z$ locally at the node $n \in Z$. Therefore, an element $f \in \hom_{\cO_Z}(\Omega_{Z},L)$ is the same as an element $\{f_i\} \in \bigoplus_{i \in I}\hom_{\cO_{Z_i}}(\Omega_{Z_i},L_i)$ such that $f_i(n)=0$ for every $n \in N(Z) \cap Z_i$.
\end{proof}
\subsection*{Universal closedness}
This section is dedicated to proving that $\eta$ is universally closed. The valuative criterion tells us that $\eta$ is universally closed if and only if for every diagram
$$
\begin{tikzcd}
\spec K \arrow[r] \arrow[d, hook] & \widetilde{\cH}_g^r \arrow[d, "\eta"] \\
\spec R \arrow[r] & \widetilde{\mathcal M}_g^r
\end{tikzcd}
$$
where $R$ is a noetherian complete DVR with algebraically closed residue field $k$ and field of fractions $K$, there exists a lifting
$$
\begin{tikzcd}
\spec K \arrow[r] \arrow[d, hook] & \widetilde{\cH}_g^r \arrow[d, "\eta"] \\
\spec R \arrow[r] \arrow[ru, dashed] & \widetilde{\mathcal M}_g^r
\end{tikzcd}
$$
which makes everything commute.
This amounts to extending the hyperelliptic involution from the general fiber of a family of $A_r$-stable curves over a DVR to the whole family. The precise statement is the following.
\begin{theorem}\label{theo:univ-closed}
Let $C_R\rightarrow \spec R$ be a family of $A_r$-stable curves over $R$, a noetherian complete DVR with fraction field $K$ and algebraically closed residue field $k$. Suppose there exists a hyperelliptic involution $\sigma_K$ of the general fiber $C_K \rightarrow \spec K$. Then there exists a unique hyperelliptic involution $\sigma_R$ of the whole family $C_R\rightarrow \spec R$ such that $\sigma_R$ restricts to $\sigma_K$ over the general fiber.
\end{theorem}
\begin{remark}
In the ($A_1$-)stable case, \Cref{theo:univ-closed} follows from the finiteness of the inertia, as $\overline{\mathcal{M}}_g$ is a separated (in fact proper) Deligne-Mumford stack. However, if $r\geq 2$, the inertia of $\widetilde{\mathcal M}_g^r$ is not proper. As a matter of fact, $I_{\widetilde{\mathcal M}_g^r}[2]$, i.e. the $2$-torsion of $I_{\widetilde{\mathcal M}_g^r}$, is not finite over $\widetilde{\mathcal M}_g^r$ either. It is clearly quasi-finite, but properness fails, as the following example shows.
Let $k=\CC$ and let $\AA^1$ be the affine line over the complex numbers. Consider the projective line $p_1:\PP^1\times \AA^1\rightarrow \AA^1$ over $\AA^1$ and the line bundle $p_2^*\cO_{\PP^1}(-4)$ (where $p_2:\PP^1\times \AA^1\rightarrow \PP^1$ is the natural projection). Every non-zero section $f\in p_{1,*}p_2^*\cO_{\PP^1}(8)\simeq \H^0(\PP^1,\cO(8)) \otimes_{\CC} \cO_{\AA^1}$ gives rise to a cyclic cover
$$\begin{tikzcd}
C_f \arrow[rd] \arrow[rr, "2:1"] & & \PP^1\times\AA^1 \arrow[ld, "p_1"] \\
& \AA^1 &
\end{tikzcd}$$
such that $C_f\rightarrow \AA^1$ is a family of (arithmetic) genus $3$ curves (see \cite{ArVis}). One can prove that the fibers are all $A_r$-stable if $r\geq 8$. Furthermore, every automorphism of the data $(\PP^1\times \AA^1,\cO(-4),f)$ gives us an automorphism of $C_f$ which commutes with the cyclic cover map. We have already proven that this association is fully faithful, so these are the only possible automorphisms.
Consider the section $f:=x_0^2x_1^2(x_0-x_1)^2(tx_0-x_1)^2 \in \H^0(\PP^1,\cO(8))\otimes \CC[t]$, where $[x_0:x_1]$ are the homogeneous coordinates of the projective line. It is easy to show that the family $C_f$ lives in $\widetilde{\mathcal M}_3^3$. If $t\neq 0,1$, we obtain an $A_1$-stable genus $3$ curve, which can be described as the gluing of two projective lines at four points with the same cross-ratio, namely $t$. Whereas if $t$ is either $0$ or $1$, we obtain a genus $3$ curve which is $A_3$-stable, as two of the four nodes in the generic case collapse into a tacnode.
Let $\phi_t$ be an element of $\mathrm{PGL}_2(k(t))$ defined by the matrix
$$
\begin{bmatrix}
0 & 1 \\
t & 0
\end{bmatrix},
$$
hence an easy computation shows that $\phi_t$ is an involution of the data $(\PP^1_{\CC(t)},\cO(-4),f)$, and therefore of $C_f\otimes_{\AA^1}\spec \CC(t)$. First of all, this involution is not hyperelliptic: the quotient of $C_f\otimes_{\AA^1}\spec \CC(t)$ by it is a genus $1$ curve, geometrically obtained by intersecting two projective lines in two points. Furthermore, it does not have a limit at $t=0$. We really need the hyperelliptic condition for our result to hold.
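For instance, writing the action on homogeneous coordinates as $\phi_t([x_0:x_1])=[x_1:tx_0]$, the computation alluded to above reads
$$\phi_t^2=\begin{bmatrix} t & 0 \\ 0 & t \end{bmatrix}=\id \ \text{in } \mathrm{PGL}_2, \qquad \phi_t^*f=x_1^2(tx_0)^2(x_1-tx_0)^2(tx_1-tx_0)^2=t^4f,$$
so $\phi_t$ is an involution and preserves $f$ up to scalar, hence acts on the data $(\PP^1_{\CC(t)},\cO(-4),f)$.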
\end{remark}
First of all, we can reduce to considering morphisms $\spec K \rightarrow \widetilde{\cH}_g^r$ which land in a dense open of $\widetilde{\cH}_g^r$. Therefore \Cref{prop:smooth-hyp} implies that it is enough to prove the theorem when the family $C_R \rightarrow \spec R$ is generically smooth. In particular, this implies that $C_R$ is a $2$-dimensional normal scheme.
Using the normality of $C_R$ and the properness of the morphism $C_R \rightarrow \spec R$, one can prove that $\sigma_K$ extends uniquely to an open subscheme $U$ of $C_R$ whose complement has codimension $2$. Let us call $\sigma':C_R \dashrightarrow C_R$ the extension of $\sigma_K$ to the open $U$.
\begin{lemma}\label{lem:contract}
In the situation above, suppose that $\sigma'$ does not contract any one-dimensional subscheme of $C_R$ to a point. Then there exists an extension of $\sigma'$ to a regular morphism $\sigma_R:C_R\rightarrow C_R$ which is a hyperelliptic involution.
\end{lemma}
\begin{proof}
Let $U\subset C_R$ be the open subscheme where $\sigma'$ is defined and let $\Theta\subset C_R\times_{\spec R} C_R$ be the closure of the graph $U\hookrightarrow C_R\times_{\spec R} C_R$ of $\sigma'$. We denote by $p_i$ the restriction to $\Theta$ of the $i$-th projection for $i=1,2$. Suppose there exists a one-dimensional irreducible scheme $\Gamma\subset \Theta$ such that $p_2(\Gamma)$ is just a point; then $p_1(\Gamma)$ has to be one-dimensional, but this would imply that $\sigma'$ contracts it. Therefore $p_2$ is quasi-finite and proper, thus finite. We then have a finite birational morphism to a normal variety, hence an isomorphism. It follows that $\sigma'^{-1}$ can be defined over all of $C_R$, i.e. it is a regular morphism. We denote it by $\widetilde{\sigma}$. Because both $\widetilde{\sigma}$ and $\sigma'$ restrict to $\sigma_K$ on the generic fiber, $\widetilde{\sigma}$ is in fact an extension of $\sigma'$ to a regular morphism, and it is an involution. The hyperelliptic property follows from \Cref{prop:open-closed-imm}.
\end{proof}
The previous lemma implies that it is enough to prove that $\sigma'$ does not contract any one-dimensional subscheme of $C_R$. To do this, we use a classical but fundamental fact for smooth hyperelliptic curves.
\begin{lemma}\label{lem:can-com}
Let $C$ be a smooth hyperelliptic curve of genus $g$ over an algebraically closed field. Then the canonical morphism $$\phi_{|\omega_C|}:C\longrightarrow \PP^{g-1}$$
factors through the hyperelliptic quotient.
\end{lemma}
Before continuing with the proof of \Cref{theo:univ-closed}, we recall why the genus $2$ case is special.
\begin{remark} \label{rem:genus-2}
If $g=2$, we need to prove that $\eta$ is an isomorphism. In this situation, given a family $\pi:C\rightarrow S$ of $A_r$-stable curves, there is always a hyperelliptic involution. Indeed, one can consider the morphism $\phi_{|\omega_C^{\otimes 2}|}:C \rightarrow \PP(\pi_*\omega_C^{\otimes 2})$. A straightforward computation proves that $\omega_C^{\otimes 2}$ is globally generated and that the image $Z$ of the morphism associated to $\omega_C^{\otimes 2}$ is a family of genus $0$ curves. In fact, we have a factorization of the morphism $\phi_{|\omega_C^{\otimes 2}|}$
$$
\begin{tikzcd}
C \arrow[rd,"\pi"] \arrow[r, "p"] & Z \arrow[d] \arrow[r, hook] & \PP(\pi_*\omega_C^{\otimes 2}) \arrow[ld] \\
& S &
\end{tikzcd}$$
and one can construct an involution $\sigma$ of $C$ such that the quotient morphism is exactly $p$.
The same strategy cannot work for higher genus, as we know that $\eta$ is not surjective.
\end{remark}
The idea is to use the canonical map to prove that the involution $\sigma'$ does not contract any one-dimensional subscheme of $C_R$, or equivalently any irreducible component of the special fiber $C_k:=C_R\otimes_R k$, where $k$ is the residue field of $R$. Indeed, $\sigma'$ commutes with the canonical map on a dense open, namely the generic fiber, because of \Cref{lem:can-com}. Reducedness and separatedness of $C_R$ imply that they commute wherever both are defined. Let $\sigma'_k$ be the restriction of $\sigma'$ to the special fiber and let $\Gamma$ be an irreducible component of $C_k$. Suppose we have the two following properties:
\begin{itemize}
\item[1)] the open of definition of the canonical morphism of $C_k$ intersects $\Gamma$,
\item[2)] the canonical morphism of $C_k$ does not contract $\Gamma$ to a point;
\end{itemize}
then $\sigma'_k$ does not contract $\Gamma$ to a point, and neither does $\sigma'$.
In the rest of the section, we describe the base point locus of $|\omega_C|$ and we prove that the canonical morphism contracts only a specific type of irreducible components. Then, we prove that $\sigma'$ does not contract these particular components using a variation of the canonical morphism. Thus we can apply \Cref{lem:contract} to get \Cref{theo:univ-closed}.
\begin{proposition}\label{prop:base-point-can}
Let $C/k$ be an $A_r$-stable genus $g$ curve over an algebraically closed field. The canonical map is defined on the complement of the set ${\rm SN}(C)$, which consists of two types of closed points, namely:
\begin{itemize}
\item[$(1)$] $p$ is a separating nodal point;
\item[$(2)$] $p$ belongs to an irreducible component of arithmetic genus $0$ which intersects the rest of the curve in separating nodes.
\end{itemize}
\end{proposition}
\begin{proof}
This follows from \cite[Theorem D]{Cat}.
\end{proof}
\Cref{prop:base-point-can} implies that there may be irreducible components of $C_k$ where the canonical map is not defined. We prove in \Cref{lem:not-poss-comp} that in fact they cannot occur in our situation.
Now we prove two lemmas which partially imply the previous result, but which we need in this particular form. The first one is a characterization of the points of type $(2)$ in \Cref{prop:base-point-can}.
\begin{lemma}\label{lem:char-type2}
Let $C$ be an $A_r$-prestable curve of genus $g\geq 1$ over $k$ and let $p\in C$ be a smooth point. Then $h^0(C,\cO(p))=2$ if and only if $p$ is of type $(2)$ as in \Cref{prop:base-point-can}.
\end{lemma}
\begin{proof}
The \emph{if} part follows from an easy computation. Let us prove the \emph{only if} implication. Let $\Gamma$ be the irreducible component that contains $p$. Then $h^0(\Gamma,\cO(p))\geq h^0(C,\cO(p))$, which implies $h^0(\Gamma,\cO(p))=2$, or equivalently that $\Gamma$ is a projective line. Because $\cO(p)\vert_{\tilde{\Gamma}}\simeq \cO_{\tilde{\Gamma}}$ for every irreducible component $\tilde{\Gamma}$ different from $\Gamma$, the equality $h^0(C,\cO(p))=2$ implies that for every connected subcurve $C'\subset C$ not containing $\Gamma$, the intersection $C'\cap \Gamma$ is a $0$-dimensional subscheme of length at most $1$. This is equivalent to the fact that $\Gamma$ intersects $C-\Gamma$ only in separating nodes.
\end{proof}
The second lemma helps us describe the canonical map of $C$ when restricted to its $A_1$-separating decomposition.
\begin{lemma}\label{lem:sep-decom-hodge}
Let $C/k$ be an $A_r$-stable curve of genus $g$ and let $\{C_i\}_{i \in I}$ be its $A_1$-separating decomposition (see \Cref{def:sep-dec}). Then we have that
$$\H^0(C,\omega_C)=\bigoplus_{i \in I}\H^0(C_i,\omega_{C_i}).$$
\end{lemma}
\begin{proof}
Let $n \in C$ be a separating nodal point and let $C_1$ and $C_2$ be the two subcurves of $C$ such that $C=C_1\cup C_2$ and $C_1\cap C_2=\spec k(n)$. Then we have the exact sequence of coherent sheaves
$$ 0 \rightarrow \omega_C \rightarrow \iota_{1,*}\omega_{C_1}(n)\oplus \iota_{2,*}\omega_{C_2}(n) \rightarrow k(n) \rightarrow 0$$
where $\iota_i:C_i \hookrightarrow C$ is the natural immersion for $i=1,2$. Taking global sections, $h^1(\omega_C)=1$ implies that $\H^0(C,\omega_C)=\H^0(C_1,\omega_{C_1}(n))\oplus\H^0(C_2,\omega_{C_2}(n))$. Finally, we claim that $\H^0(C,\omega_C(p))=\H^0(C,\omega_C)$ for every smooth point $p$ on a connected reduced Gorenstein curve $C$. The global sections of the exact sequence
$$0 \rightarrow \cO_C(-p) \rightarrow \cO_C \rightarrow k(p) \rightarrow 0$$
imply that the claim is equivalent to the vanishing of $\H^0(C,\cO_C(-p))$, which holds because a global section of $\cO_C(-p)$ is a regular function on $C$ vanishing at $p$, hence zero, as $C$ is connected, reduced and proper.
\end{proof}
Let $\{C_i\}_{i \in I}$ be the $A_1$-separating decomposition of $C$ and $g_i$ be the genus of $C_i$. From now on, we suppose that there are no points of type $(2)$ as in \Cref{prop:base-point-can}. In particular, $g_i>0$ for every $i \in I$.
The previous result implies that the composite
$$
\begin{tikzcd}
C_i \arrow[r, "\iota_i", hook] & C \arrow[r, "\phi_{|\omega_C|}", dashed] & \PP^{g-1}
\end{tikzcd}
$$
factors through a map $f:C_i\dashrightarrow \PP^{g_i-1}$ induced by the vector space $\H^0(C_i,\omega_{C_i})$ inside the complete linear system of $\omega_{C_i}(\Sigma_i)$, where $\Sigma_i$ is the Cartier divisor on $C_i$ defined by the intersection of $C_i$ with the curve $C-C_i$, or equivalently by the restriction of the separating nodal points to $C_i$. This follows because the restriction map
$$ \H^0(C,\omega_C) \longrightarrow \H^0(C_i,\omega_{C_i})$$
is surjective as long as there are no points of type $(2)$ (as in \Cref{prop:base-point-can}) on $C_i$. Notice that the rational map $f$ is defined on the open $C_i\smallsetminus \Sigma_i$, and $f\vert_{C_i\smallsetminus \Sigma_i}\equiv \phi_{|\omega_{C_i}|}\vert_{C_i\smallsetminus \Sigma_i}$.
We have two cases to analyze: $C_i$ is either an $A_r$-stable genus $g_i$ curve or only $A_r$-prestable. In the first case, $C_i$ is an $A_r$-stable curve without separating nodes, and therefore the canonical morphism $\phi_{|\omega_{C_i}|}$ is globally defined and finite, because $\omega_{C_i}$ is ample. Therefore the canonical morphism of $C$ does not contract any irreducible component of the subcurve $C_i$.
Suppose now that $C_i$ is not $A_r$-stable but only prestable, and let $\Gamma$ be an irreducible component on which $\omega_{C_i}\vert_{\Gamma}$ is not ample; in particular $g_{\Gamma}\leq 1$. In this case, the canonical morphism is not helpful.
\begin{remark}
Let $C$ be an ($A_1$-)stable genus $2$ curve constructed by gluing two smooth genus $1$ curves at a separating node. One can easily prove that the canonical morphism contracts both components. Similarly, if we have a genus $g$ stable curve constructed by gluing smooth genus $1$ curves at separating nodes, the same is still true.
\end{remark}
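Concretely, in the genus $2$ example one can argue via \Cref{lem:sep-decom-hodge}: we have
$$\H^0(C,\omega_C)\simeq\H^0(C_1,\omega_{C_1})\oplus\H^0(C_2,\omega_{C_2})=\langle\omega_1\rangle\oplus\langle\omega_2\rangle,$$
where each summand is one-dimensional ($\omega_{C_i}\simeq\cO_{C_i}$ since $C_i$ has genus $1$) and $\omega_i$ restricts to zero on the other component. Hence the canonical map $[\omega_1:\omega_2]$ is constantly $[1:0]$ on $C_1$ and constantly $[0:1]$ on $C_2$.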
The idea is to construct a variation of the canonical morphism which does not contract the component $\Gamma$ and still commutes with the involution. The following lemma gives us a possible candidate.
\begin{lemma}\label{lem:var-can}
Let $C_R\rightarrow \spec R$ be a generically smooth family of $A_r$-stable genus $g$ curves over a discrete valuation ring $R$, let $p \in C_k$ be a separating node of the special fiber of the family, and let $C_1$ and $C_2$ be the two subcurves of $C_k$ such that $C_k=C_1\cup C_2$ and $C_1\cap C_2=\{p\}$. Then there exists an integer $m$ such that the following are true:
\begin{itemize}
\item[(i)] $mC_1$ and $mC_2$ are Cartier divisors of $C_R$,
\item[(ii)] (if we denote by $\cO(mC_1)$ and $\cO(mC_2)$ the induced line bundles) we have $\cO(mC_2)\vert_{C_1}=\cO_{C_1}(p)$ and $\cO(mC_2) \vert_{C_2}=\cO_{C_2}(-p)$,
\item[(iii)] the line bundle $\omega_{C_R}(mC_i)$ verifies base change on $\spec R$.
\end{itemize}
\end{lemma}
\begin{proof}
The existence of an integer $m$ verifying (i) follows from the theory of Du Val singularities, as a separating node in a normal surface is \'etale-locally of the form $y^2+x^2+t^m=0$, where $t$ is a uniformizer of the DVR. An \'etale-local computation shows that $\cO(mC_1)\vert_{C_2}=\cO_{C_2}(p)$; and because $\cO(-mC_1-mC_2)$ is, as a line bundle, the pullback from the DVR of the ideal $(t^m)$, we get $\cO(mC_2)\vert_{C_2}=\cO_{C_2}(-p)$.
Because $R$ is reduced, to prove $(iii)$ it is enough to prove that $$\dim_k\H^0(C_k,\omega_{C/R}(mC_i)\vert_{C_k})=g$$
and then use Grauert's theorem.
Without loss of generality, we can suppose $i=1$.
Let us denote by $\cL$ the line bundle $\omega_{C/R}(mC_1)$ restricted to $C_k$. Then we have the usual exact sequence
$$0\rightarrow \H^0(C_k,\cL)\rightarrow \H^0(C_1,\cL\vert_{C_1})\oplus \H^0(C_2,\cL\vert_{C_2}) \rightarrow \cL(p)$$
where we have that $\cL\vert_{C_1}\simeq \omega_{C_1}$ while $\cL\vert_{C_2}\simeq \omega_{C_2}(2p)$. Because $\omega_{C_2}(2p)$ is globally generated at $p$, we get that $$h^0(\cL)=h^0(\omega_{C_1})+h^0(\omega_{C_2}(2p))-1=h^0(\omega_{C_1})+h^0(\omega_{C_2})=g_1+g_2,$$ where $g_i$ is the arithmetic genus of $C_i$ for $i=1,2$. Because $p$ is a separating node, $g_1+g_2=g$ and we are done.
\end{proof}
\begin{remark}
In the situation of \Cref{lem:var-can}, the involution $\sigma'$ commutes with the rational map associated with the complete linear system of $\omega_{C_R}(mC_i)$ for $i=1,2$. This follows from the fact that $\omega_{C_R}(mC_i)$ restricts to the canonical line bundle on the generic fiber.
\end{remark}
If $g_{\Gamma}=1$, then $C_i=\Gamma$ (because $C_i$ is connected). If $g_{\Gamma}=0$, the intersection $\Gamma \cap (C_i-\Gamma)$ has length $2$, with $C_i-\Gamma$ a connected subcurve of $C_i$ (because $g_{C_i}>0$). Let $p$ be a separating node in $\Gamma \cap (C-\Gamma)$. We apply \Cref{lem:var-can} and denote by $C_1$ the subcurve containing $\Gamma$. For the sake of notation, we define $\cL_{\Gamma}:=\omega_{C_R}(mC_2)$.
\begin{proposition}
In the situation above, the morphism $\phi_{\cL_{\Gamma}}$ induced by the complete linear system of $\cL_{\Gamma}$ does not contract $\Gamma$.
\end{proposition}
\begin{proof}
We need to prove that the morphism $\phi_{\cL_{\Gamma}}$ restricted to the special fiber does not contract $\Gamma$. Using the description of $\H^0(C_k,\cL_{\Gamma}\vert_{C_k})$ from \Cref{lem:var-can}, we can give an explicit description of $\phi_{\cL_{\Gamma}}$ in both cases.
If $g_{\Gamma}=1$, then the restriction of $\phi_{\cL_{\Gamma}}$ to $\Gamma$ is the map induced by the vector space $\H^0(\Gamma,\omega_{\Gamma}(2p))$ inside the complete linear system of $\omega_{\Gamma}(2p + (\Sigma_{\Gamma}-p))$, where $\Sigma_{\Gamma}$ is the Cartier divisor on $\Gamma$ associated with the intersection $\Gamma \cap (C-\Gamma)$. We are using that there are no points of type $(2)$ as in \Cref{prop:base-point-can}, which implies that the restriction morphism
$$ \H^0(C_k,\cL_{\Gamma}\vert_{C_k}) \rightarrow \H^0(\Gamma,\cL_{\Gamma}\vert_{\Gamma})$$
is surjective. As before, $\phi_{\cL_{\Gamma}}\vert_{\Gamma}$ is defined everywhere except on the support of the Cartier divisor $\Sigma_{\Gamma}-p$, and it coincides with $\phi_{|\omega_{\Gamma}(2p)|}$ where both are defined. This implies $\Gamma$ is not contracted to a point.
Finally, if $g_{\Gamma}=0$, let $D$ be the Cartier divisor on $\Gamma$ associated with the length-$2$ intersection $\Gamma\cap(C_i-\Gamma)$, and denote by ${\rm res}_D$ one of the following morphisms:
\begin{itemize}
\item ${\rm res}_{q_1}+{\rm res}_{q_2}$ when $D=q_1+q_2$,
\item ${\rm res}_q$ when $D=2q$.
\end{itemize}
We have that $\phi_{\cL_{\Gamma}}$ restricted to $\Gamma$ is the morphism induced by the kernel $V$ of
$$ {\rm res}_{D}(2p+(\Sigma_{\Gamma}-p)):\H^0(\Gamma,\omega_{\Gamma}(2p+(\Sigma_{\Gamma}-p)+D)) \longrightarrow k$$
where the morphism of vector spaces ${\rm res}_{D}(2p+(\Sigma_{\Gamma}-p))$ is the tensor of the morphism ${\rm res}_D$ by the line bundle $\cO(2p+(\Sigma_{\Gamma}-p))$. This is again a consequence of \Cref{lem:not-poss-comp}. As before, one can prove that $\phi_V$ is defined everywhere except in the support of $\Sigma_{\Gamma}-p$ and that it coincides with the morphism associated with the kernel $W$ of
$$ {\rm res}_{D}(2p):\H^0(\Gamma,\omega_{\Gamma}(2p+D)) \longrightarrow k$$
where ${\rm res}_{D}(2p)$ is the tensor of ${\rm res}_D$ by the line bundle $\cO(2p)$. A straightforward computation shows that $\phi_W$ does not contract $\Gamma$.
\end{proof}
We have assumed that there are no points of type $(2)$ as in \Cref{prop:base-point-can}. The following lemma explains why this assumption can be made. We have left it for last because it is the only statement in which we use a result from the $A_1$-stable case that we cannot prove independently in the $A_r$-stable case.
\begin{lemma}\label{lem:not-poss-comp}
Let $C\rightarrow \spec R$ be an $A_r$-stable curve of genus $g$ and suppose the generic fiber is smooth and hyperelliptic. Then, if $C_k$ is the special fiber, there is no point $p\in C_k$ of type $(2)$ as in \Cref{prop:base-point-can}.
\end{lemma}
\begin{proof}
The proof relies on the fact that the stable reduction of $A_n$-singularities is well known, see \cite{Hass} and \cite{CasMarLaz}. Let $C_K$ be the generic fiber; then we know that there exists a family $\widetilde{C}$ such that the special fiber $\widetilde{C}_k$ is stable. The explicit description of the stable reduction implies that if there is a component $\Gamma$ in $C_k$ of genus $0$ intersecting the rest of the curve only in separating nodes, then the same component should appear in the stable reduction. But this is absurd, as the stable limit of a smooth hyperelliptic curve is a stable hyperelliptic curve, which does not have such an irreducible component.
\end{proof}
\chapter{The Chow ring of $\widetilde{\mathcal M}_3^7$}\label{chap:2}
In this chapter we explain the strategy used for the computation of the Chow ring of $\widetilde{\mathcal M}_3^7$. The idea is to use a gluing lemma, whose proof is an exercise in homological algebra.
Let $i:\cZ\hookrightarrow\cX$ be a closed immersion of smooth global quotient stacks over $\kappa$ of codimension $d$ and let $\cU:=\cX\smallsetminus \cZ$ be the open complement and $j:\cU \hookrightarrow \cX$ be the open immersion. It is straightforward to see that the pullback morphism $i^*:\ch(\cX)\rightarrow \ch(\cZ)$ induces a morphism $ \ch(\cU) \rightarrow \ch(\cZ)/(c_d(N_{\cZ|\cX}))$, where $N_{\cZ|\cX}$ is the normal bundle of the closed immersion. This morphism is denoted by $i^*$ by abuse of notation.
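Explicitly, the induced morphism can be constructed as follows; the only inputs are the excision sequence and the self-intersection formula. Given $\alpha \in \ch(\cU)$, choose a lift $\widetilde{\alpha}\in\ch(\cX)$ along the surjection $j^*$. Two lifts differ by an element in the image of $i_*$, and
$$ i^*i_*\beta = c_d(N_{\cZ|\cX})\cdot \beta \quad \text{for every }\beta \in \ch(\cZ),$$
so the class $i^*\widetilde{\alpha}$ is well defined in $\ch(\cZ)/(c_d(N_{\cZ|\cX}))$.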
Therefore, we have the following commutative diagram of rings:
$$
\begin{tikzcd}
\ch(\cX) \arrow[d, "j^*"] \arrow[rr, "i^*"] & & \ch(\cZ) \arrow[d, "q"] \\
\ch(\cU) \arrow[rr, "i^*"] & & \frac{\ch(\cZ)}{(c_d(N_{\cZ|\cX}))}
\end{tikzcd}
$$
where $q$ is just the quotient morphism.
\begin{lemma}\label{lem:gluing}
In the situation above, the induced map
$$\zeta: \ch(\cX)\longrightarrow \ch(\cZ)\times_\frac{\ch(\cZ)}{(c_d(N_{\cZ|\cX}))} \ch(\cU)$$
is surjective and $\ker \zeta= i_* {\rm Ann}(c_d(N_{\cZ|\cX}))$. In particular, if $c_d(N_{\cZ|\cX})$ is a non-zero divisor in $\ch(\cZ)$, then $\zeta$ is an isomorphism.
\end{lemma}
From now on, we refer to the condition \emph{$c_d(N_{\cZ|\cX})$ is not a zero divisor} as the gluing condition.
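For instance, one can verify the lemma directly in the toy example $\cX=[\AA^1/\GG_{\rmm}]$ (with the scaling action) and $\cZ=\cB\GG_{\rmm}$ the origin, so that $d=1$, $\cU=\spec \kappa$ and the normal bundle is the standard representation:
$$ \ch(\cX)=\ZZ[t], \qquad \ch(\cZ)=\ZZ[t], \qquad \ch(\cU)=\ZZ, \qquad c_1(N_{\cZ|\cX})=t.$$
Here $t$ is a non-zero divisor, so the gluing condition holds, and indeed
$$ \ch(\cZ)\times_{\frac{\ch(\cZ)}{(t)}}\ch(\cU)=\{(f,n)\in \ZZ[t]\times \ZZ \mid f(0)=n\}\simeq \ZZ[t]=\ch(\cX).$$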
\begin{remark}
Notice that if $\cZ$ is a separated Deligne-Mumford stack, the gluing condition is never satisfied, because the rational Chow ring of $\cZ$ is isomorphic to the one of its moduli space (see \cite{Vis1} and \cite{EdGra}).
However, there is hope that the positive dimensional stabilizers we have in $\widetilde{\mathcal M}_g^r$ allow the gluing condition to hold. For instance, consider $\widetilde{\mathcal M}_{1,1}^2$, i.e.\ the moduli stack of $1$-pointed genus $1$ curves with at most $A_2$-singularities. We know that
$$ \widetilde{\mathcal M}_{1,1}^2 \simeq [\AA^2/\GG_{\rmm}]$$
therefore its Chow ring is a polynomial ring, which has plenty of non-zero divisors. See \cite{DiLorPerVis} for the description of $\widetilde{\mathcal M}_{1,1}$ as a quotient stack.
\end{remark}
This is the reason why we introduced $\widetilde{\mathcal M}_g^r$, which is a non-separated stack. We are going to compute the Chow ring of $\widetilde{\mathcal M}_3^r$ for $r=7$ using a stratification to which we can apply \Cref{lem:gluing} iteratively.
Now we introduce the stratification and we describe the strata as quotient stacks to compute their Chow rings. For the rest of the chapter, we denote by $\widetilde{\mathcal M}_{g,n}$ the stack $\widetilde{\mathcal M}_{g,n}^{2g+1}$ which is the largest moduli stack of $n$-pointed $A_r$-stable curves of genus $g$ we can consider (see \Cref{rem: max-sing}). We denote by $\widetilde{\mathcal C}_{g,n}$ the universal curve of $\widetilde{\mathcal M}_{g,n}$.
Every Chow ring is considered with $\ZZ[1/6]$-coefficients unless otherwise stated. This assumption is not necessary for some of the statements, but it makes our computations easier. We do not know if the gluing condition still holds with integer coefficients.
First of all, we recall the definitions of some substacks of $\overline{\cM}_3$:
\begin{itemize}
\item $\overline{\Delta}_1$ is the codimension 1 closed substack of $\overline{\cM}_3$ classifying stable curves with a separating node,
\item $\overline{\Delta}_{1,1}$ is the codimension 2 closed substack of $\overline{\cM}_3$ classifying stable curves with two separating nodes,
\item $\overline{\Delta}_{1,1,1}$ is the codimension 3 closed substack of $\overline{\cM}_3$ classifying stable curves with three separating nodes.
\end{itemize}
An easy combinatorial computation shows that we cannot have more than $3$ separating nodes for genus $3$ stable curves. The same substacks can be defined in $\widetilde{\mathcal M}_3^r$ for any $r$, but we need to prove that they are still closed inside $\widetilde{\mathcal M}_3^r$. This is a consequence of \Cref{lem:sep-sing}.
We denote by $\widetilde{\Delta}_{1,1,1}\subset \widetilde{\Delta}_{1,1} \subset \widetilde{\Delta}_1$ the natural generalization of $\overline{\Delta}_{1,1,1} \subset \overline{\Delta}_{1,1}\subset \overline{\Delta}_1$ in $\widetilde{\mathcal M}_3$.
We also consider $\widetilde{\cH}_3^7$ as a stratum of the stratification. The following diagram
$$
\begin{tikzcd}
& & \widetilde{\cH}_3^7 \arrow[rd] & \\
{\widetilde{\Delta}_{1,1,1}} \arrow[r] & {\widetilde{\Delta}_{1,1}} \arrow[r] & \widetilde{\Delta}_1 \arrow[r] & \widetilde{\mathcal M}_3^7
\end{tikzcd}
$$
represents the poset associated to the stratification. As before, we write $\widetilde{\cH}_3$ instead of $\widetilde{\cH}_3^7$.
\begin{comment}
We have already introduced and described $\widetilde{\cH}_3$, the closure of $\overline{\cH}_3$ inside $\widetilde{\mathcal M}_3$. The main result of the previous chapter was a description of $\widetilde{\cH}_3$ as the moduli stack of $A_r$-stable curves plus an involution with finite stabilizers such that the geometric quotient is a nodal genus $0$ curve. In particular, one can construct a morphism
$$ \widetilde{\cH}_3 \longrightarrow \cM_0$$
where we denote by $\cM_0$ the moduli stack parametrizing genus $0$ nodal curves (with no stability condition). Clearly, $\cM_0$ is wild, but in fact $\widetilde{\cH}_3$ factors through a substack of $\cM_0$. Let $e$ be an integer and let $\cM_0^{\leq e}$ be the substack of $\cM_0$ parametrizing genus $0$ nodal curve with at most $e+1$ irreducible components or equivalently with at most $e$ nodes. If $e=0$, $\cM_0^{\leq 0}$ be just denote by $\cM_0^{0}$.
The previous lemma allows us to construct a morphism
$$ \beta: \widetilde{\cH}_g^r \longrightarrow \cM_0^{\leq 2g-3}$$
for every $g\geq 2$ and $r\geq 0$.
Let us denote by $\cM_0^{\geq e}$ the closed substack of $\cM_0^{\leq 2g-3}$ which parametrizes genus $0$ nodal curves with at least $e$ nodes, or $e+1$ irreducible components for $e=0,\dots,2g-3$. We have a succession of closed substack
$$ \cM_0^{\geq 2g-3} \subset \cM_0^{\geq 2g-4} \subset \dots \subset \cM_0^{\geq 1} \subset \cM_0^{\geq 0}=\cM_0^{\leq 2g-3}$$
which can be lifted to $\widetilde{\cH}_g^r$. We denote by $\Xi_n$ the preimage $\beta^{-1}(\cM_0^{\geq n-1})$ for every $n=1,\dots,2g-2$.
Notice that we also have the complement succession of open substacks
$$ \cM_0^0 \subset \cM_0^{\leq 1} \subset \dots \subset \cM_0^{\leq 2g-3}.$$
Finally we have a stratification of $\widetilde{\cH}_g^r$ by closed substacks
$$\Xi_{2g-2}\subset \Xi_{2g-1} \subset \dots \subset \xi_1 \subset \Xi_1=\widetilde{\cH}_g^r$$
where $\Xi_n$ classifies $A_r$-stable hyperelliptic curves such that the geometric quotient has at least $n$ irreducible components.
Finally we are ready to introduce our stratification for the case $g=3$:
$$
\begin{tikzcd}
\widetilde{\mathcal M}_3 & \widetilde{\cH}_3 \arrow[l] & \xi_1 \arrow[l] & \Xi_3 \arrow[l] & \Xi_4 \arrow[l] \\
\widetilde{\Delta}_1 \arrow[u] & \widetilde{\cH}_3\cap \widetilde{\Delta}_1 \arrow[u, dotted] \arrow[l, dotted] & \xi_1\cap \widetilde{\Delta}_1 \arrow[l, dotted] \arrow[u, dotted] & \Xi_3\cap \widetilde{\Delta}_1 \arrow[l, dotted] \arrow[u, dotted] & \Xi_4\cap \widetilde{\Delta}_1 \arrow[l, dotted] \arrow[u, dotted] \\
{\widetilde{\Delta}_{1,1}} \arrow[u] & {\widetilde{\cH}_3\cap \widetilde{\Delta}_{1,1}} \arrow[u, dotted] \arrow[l, dotted] & {\xi_1\cap \widetilde{\Delta}_{1,1}} \arrow[l, dotted] \arrow[u, dotted] & {\Xi_3\cap \widetilde{\Delta}_{1,1}} \arrow[l, dotted] \arrow[u, dotted] & {\Xi_4\cap \widetilde{\Delta}_{1,1}} \arrow[l, dotted] \arrow[u, dotted] \\
{\widetilde{\Delta}_{1,1,1}} \arrow[u] & {\widetilde{\cH}_3\cap \widetilde{\Delta}_{1,1,1}} \arrow[u, dotted] \arrow[l, dotted] & {\xi_1\cap \widetilde{\Delta}_{1,1,1}} \arrow[l, dotted] \arrow[u, dotted] & {\Xi_3\cap \widetilde{\Delta}_{1,1,1}} \arrow[l, dotted] \arrow[u, dotted] & {\Xi_4\cap \widetilde{\Delta}_{1,1,1}} \arrow[l, dotted] \arrow[u, dotted].
\end{tikzcd}
$$
In the previous diagram, the full arrows are the closed immersion of the two stratification we introduced, while the dotted arrows are the natural inclusions due to the intersections of the two stratification.
Clearly we will not need the description for all the strata, for instance one can prove that $\Xi_4\cap \widetilde{\Delta}_{1,1,1}=\emptyset$.
\end{comment}
Now, we describe the strategy used to compute the Chow ring of $\widetilde{\mathcal M}_3$. Our approach focuses first on the computation of the Chow ring of $\widetilde{\mathcal M}_3 \smallsetminus \widetilde{\Delta}_1$. This is the most difficult part as far as computations are concerned. We first compute the Chow ring of $\widetilde{\cH}_3 \smallsetminus \widetilde{\Delta}_1$, which can be done without the gluing lemma. Then we apply the gluing lemma to $\widetilde{\mathcal M}_3 \smallsetminus (\widetilde{\cH}_3 \cup \widetilde{\Delta}_1)$ and $\widetilde{\cH}_3 \smallsetminus \widetilde{\Delta}_1$ to get a description of the Chow ring of $\widetilde{\mathcal M}_3 \smallsetminus \widetilde{\Delta}_1$.
Notice that neither $\widetilde{\Delta}_1$ nor $\widetilde{\Delta}_{1,1}$ is a smooth stack, therefore we cannot apply \Cref{lem:gluing} to them directly. Nevertheless, both $\widetilde{\Delta}_1 \smallsetminus \widetilde{\Delta}_{1,1}$ and $\widetilde{\Delta}_{1,1} \smallsetminus \widetilde{\Delta}_{1,1,1}$ are smooth; therefore we apply \Cref{lem:gluing} to $\widetilde{\Delta}_1 \smallsetminus \widetilde{\Delta}_{1,1}$ and $\widetilde{\mathcal M}_3\smallsetminus \widetilde{\Delta}_1$ to describe the Chow ring of $\widetilde{\mathcal M}_3 \smallsetminus \widetilde{\Delta}_{1,1}$, and then apply it again to $\widetilde{\mathcal M}_3 \smallsetminus \widetilde{\Delta}_{1,1}$ and $\widetilde{\Delta}_{1,1} \smallsetminus\widetilde{\Delta}_{1,1,1}$. Finally, the same procedure allows us to glue in $\widetilde{\Delta}_{1,1,1}$ as well and get the description of the Chow ring of $\widetilde{\mathcal M}_3$.
In this chapter, we describe the following strata and their Chow rings:
\begin{itemize}
\item $\widetilde{\cH}_3\smallsetminus \widetilde{\Delta}_1$,
\item $\widetilde{\mathcal M}_3\smallsetminus (\widetilde{\cH}_3 \cup \widetilde{\Delta}_1)$,
\item $\widetilde{\Delta}_1\smallsetminus \widetilde{\Delta}_{1,1}$,
\item $\widetilde{\Delta}_{1,1}\smallsetminus \widetilde{\Delta}_{1,1,1}$,
\item $\widetilde{\Delta}_{1,1,1}$,
\end{itemize}
and in the last section we give the complete description of the Chow ring of $\widetilde{\mathcal M}_3$.
\section{Chow ring of $\widetilde{\cH}_3 \smallsetminus \widetilde{\Delta}_1$}\label{sec:H3tilde}
In this section we are going to describe $\widetilde{\cH}_3 \smallsetminus \widetilde{\Delta}_1$ and compute its Chow ring.
Recall that we have an isomorphism (see \Cref{prop:descr-hyper}) between $\widetilde{\cH}_3^7$ and $\widetilde{\mathcal C}_3^7$. We are going to describe $\widetilde{\cH}_3\smallsetminus \widetilde{\Delta}_1$ as a subcategory of $\widetilde{\mathcal C}_3^7$ through the isomorphism cited above.
Consider the natural morphism (see the proof of \Cref{prop:smooth-hyp})
$$ \pi_3: \widetilde{\cH}_3 \smallsetminus \widetilde{\Delta}_1 \rightarrow \cP_3$$
where $\cP_3$ is the moduli stack parametrizing pairs $(Z/S,\cL)$ where $Z$ is a twisted curve of genus $0$ and $\cL$ is a line bundle on $Z$. The idea is to find the image of this morphism. First of all, we can restrict to the open substack of $\cP_3$ parametrizing pairs $(Z/S,\cL)$ such that $Z/S$ is an algebraic space, because we are removing $\widetilde{\Delta}_1$. In fact, if there are no separating points, $Z$ coincides with the geometric quotient of the involution (see the proof of \Cref{prop:cyclic-covers}). Moreover, we prove an upper bound on the number of irreducible components of $Z$.
\begin{lemma}\label{lem:max-comp}
Let $(C,\sigma)$ be a hyperelliptic $A_r$-stable curve of genus $g$ over an algebraically closed field and let $Z:=C/\sigma$ be the geometric quotient. If we denote by $v$ the number of irreducible components of $Z$, then we have $v\leq 2g-2$. Furthermore, if $C$ has no separating nodes, we have that $v\leq g-1$.
\end{lemma}
\begin{proof}
Let $\Gamma$ be an irreducible component of $Z$. We denote by $g_{\Gamma}$ the genus of the preimage $C_{\Gamma}$ of $\Gamma$ through the quotient morphism, by $e_{\Gamma}$ the number of nodes lying on $\Gamma$ and by $s_{\Gamma}$ the number of nodes on $\Gamma$ such that the preimage is either two nodes or a tacnode (see \Cref{prop:description-quotient}). We claim that the stability condition on $C$ implies that
$$ 2g_{\Gamma}-2+e_{\Gamma}+s_{\Gamma}>0$$
for every irreducible component $\Gamma$ of $Z$. If $C_{\Gamma}$ is integral, this is clear. Otherwise, $C_{\Gamma}$ is a union of two projective lines meeting in a subscheme of length $n$. In this situation, $s_{\Gamma}=e_{\Gamma}=:m$ and the inequality is equivalent to $n+m\geq 3$, which is the stability condition for the two projective lines. Using the identity $$g=\sum_{\Gamma}(g_{\Gamma}+\frac{s_{\Gamma}}{2}),$$
a simple computation using the claim gives the statement.
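For the first bound, the computation can be spelled out as follows. Since $Z$ is a connected nodal curve of arithmetic genus $0$, it is a tree of its $v$ components, hence it has exactly $v-1$ nodes and $\sum_{\Gamma}e_{\Gamma}=2(v-1)$, every node lying on exactly two components. Summing the claimed inequality (each summand is an integer, hence at least $1$) over the components gives
$$ v \;\le\; \sum_{\Gamma}\big(2g_{\Gamma}-2+e_{\Gamma}+s_{\Gamma}\big) \;=\; 2\sum_{\Gamma}g_{\Gamma}-2v+2(v-1)+\sum_{\Gamma}s_{\Gamma} \;=\; 2g-2,$$
where the last equality is the identity $g=\sum_{\Gamma}(g_{\Gamma}+s_{\Gamma}/2)$.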
Suppose $(C,\sigma)$ is not in $\widetilde{\Delta}_1$.
Notice that the involution $\sigma$ commutes with the canonical morphism, because it does over the open dense substack of $\widetilde{\cH}_g$ parametrizing smooth curves. If we consider the factorization of the canonical morphism
$$ C \rightarrow Z \rightarrow \PP(\H^0(C,\omega_C))=\PP^{g-1}$$
we know that $\cO_Z(1)$ has degree $g-1$, because the quotient morphism $C\rightarrow Z$ has degree $2$ and $\cO_Z(1)$ pulls back to $\omega_C$, which has degree $2g-2$. Since $Z\rightarrow \PP^{g-1}$ is finite, $\cO_Z(1)$ has positive degree on every irreducible component of $Z$; it follows that $v\leq g-1$.
\end{proof}
\begin{remark}
The first inequality is sharp even for ($A_1$-)stable hyperelliptic curves. Let $v:=2m$ be an even number, let $(E_1,e_1),(E_2,e_2)$ be two smooth elliptic curves and let $P_1,\dots,P_{2m-2}$ be $2m-2$ projective lines. We glue $P_{2i-1}$ to $P_{2i}$ at $0$ and $\infty$ for every $i=1,\dots,m-1$ and we glue $P_{2i}$ to $P_{2i+1}$ at $1$ for every $i=1,\dots,m-2$. Finally, we glue $(E_1,e_1)$ to $(P_1,1)$ and $(E_2,e_2)$ to $(P_{2m-2},1)$. It is clear that the resulting curve is an $A_1$-stable hyperelliptic curve of genus $m+1$ and that its geometric quotient has $2m$ components. The odd case can be dealt with similarly.
The same is true for the second inequality. Suppose $v:=2m$ is an even positive integer. Let $(E_1,e_1,f_1),(E_2,e_2,f_2)$ be two $2$-pointed smooth genus $1$ curves and $P_1,\dots,P_{2m-2}$ be $2m-2$ projective lines. As before, we glue $P_{2i-1}$ to $P_{2i}$ at $0$ and $\infty$ for every $i=1,\dots,m-1$ and we glue $P_{2i}$ to $P_{2i+1}$ at $1$ and $-1$ for every $i=1,\dots,m-2$. Finally, we glue $(E_1,e_1,f_1)$ to $(P_1,1,-1)$ and $(E_2,e_2,f_2)$ to $(P_{2m-2},1,-1)$. It is clear that the resulting curve is an $A_1$-stable hyperelliptic curve of genus $2m+1$ and that its geometric quotient has $2m$ components. The odd case can be dealt with similarly.
\end{remark}
\Cref{lem:max-comp} assures us that the geometric quotient $Z$ has at most two irreducible components if $C$ has genus $3$ and no separating nodes. Therefore, the datum $(Z/S,\cL,i)$ is in the image of $\pi_3$ only if the fiber $Z_s$ has at most $2$ irreducible components for every geometric point $s \in S$. Moreover, we know that not all pairs $(\cL,i)$ give us an $A_r$-stable hyperelliptic curve of genus $3$; we need to translate the conditions of \Cref{def:hyp-A_r} into our setting. Because there are no stacky points in $Z$, the conditions in (a) are empty. Furthermore, if $Z$ is integral, there is nothing to prove, as we are considering $r=7$. Therefore we can suppose that $Z$ is the union of two irreducible components $\Gamma_1$ and $\Gamma_2$ intersecting in a single node. Let us call $n_1$ and $n_2$ the degrees of the restrictions of $\cL$ to $\Gamma_1$ and $\Gamma_2$ respectively. Then (b1)+(c1) imply that $n_1\leq -2$ and $n_2\leq -2$. Finally, the condition $\chi(\cL)=-3$ gives us $n_1=n_2=-2$.
We denote by $\cM_0^{\leq 1}$ the moduli stack parametrizing families $Z/S$ of genus $0$ nodal curves with at most $1$ node, and by $\cM_0^{1}$ the closed substack of $\cM_0^{\leq 1}$ parametrizing families $Z/S$ with exactly $1$ node. We follow the notation of \cite{EdFul2}.
Finally, let us denote by $\cP_3'$ the moduli stack parametrizing pairs $(Z/S,\cL)$ where $Z/S$ is a family of genus $0$ nodal curves with at most $1$ node and $\cL$ is a line bundle whose restriction to the geometric fiber over every point $s \in S$ has degree $-4$ if $Z_s$ is integral, or bidegree $(-2,-2)$ if $Z_s$ is reducible.
Therefore $\widetilde{\cH}_3\smallsetminus\widetilde{\Delta}_1$ can be seen as an open substack of the vector bundle $\pi_*\cL^{\otimes -2}$ over $\cP_3'$, where $(\pi:\cZ \rightarrow \cP_3',\cL)$ is the universal object of $\cP_3'$. We can explicitly describe the complement of $\widetilde{\cH}_3 \smallsetminus \widetilde{\Delta}_1$ inside $\VV(\pi_{*}\cL^{\otimes-2})$. The only conditions the section $i$ has to satisfy in this particular case are (b1) and (b2) of \Cref{def:hyp-A_r}. The closed substack of $\VV(\pi_*\cL^{\otimes -2})$ parametrizing sections which do not satisfy (b1) or (b2) can be described as the union of two components. The first one is the zero section of the vector bundle itself. The second component $D$ parametrizes sections $h$ supported on the locus $\cM_0^{\geq 1}$ of reducible genus $0$ curves which vanish at the node $n$ and such that the vanishing locus $\VV(h)_n$ localized at the node is either positive dimensional or has length different from $2$.
\begin{remark}
The $\GG_{\rmm}$-gerbe $\cP_3'\rightarrow \cM_0^{\leq 1}$ is trivial, as there exists a section defined by the association $Z/S \mapsto \omega_{Z/S}^{\otimes 2}$. Therefore, $\cP_3' \simeq \cM_0^{\leq 1}\times \cB\GG_{\rmm}$.
\end{remark}
We have just proved the following description.
\begin{proposition}
The stack $\widetilde{\cH}_3 \smallsetminus \widetilde{\Delta}_1$ is isomorphic to the stack $\VV(\pi_*\omega_{\cZ/\cP_3'}^{\otimes -4})\smallsetminus (D \cup 0)$, where $0$ is the zero section of the vector bundle and $D$ parametrizes sections $h$ supported on $\cM_0^1 \times \cB\GG_{\rmm}$ such that the vanishing locus $\VV(h)$ localized at the node does not have length $2$.
\end{proposition}
Lastly, we state a fact that is well known to experts and very helpful for our computations. It is one of the reasons why we chose to invert $2$ and $3$ in the Chow rings. Consider a representable, finite and flat morphism $f:\cX \rightarrow \cY$ of degree $d$ between quotient stacks and consider the cartesian diagram of $f$ with itself:
$$
\begin{tikzcd}
\cX \times_{\cY} \cX \arrow[r, "p_2"] \arrow[d, "p_1"] & \cX \arrow[d, "f"] \\
\cX \arrow[r, "f"] & \cY.
\end{tikzcd}
$$
\begin{lemma}\label{lem:chow-tor}
In the situation above, the diagram in the category of groups
$$
\begin{tikzcd}
\ch(\cY)[1/d] \arrow[r, "f^*"] & \ch(\cX)[1/d] \arrow[r, "p_1^*"', bend right] \arrow[r, "p_2^*", bend left] & \ch(\cX \times_{\cY} \cX)[1/d]
\end{tikzcd}
$$
is an equalizer.
\end{lemma}
\begin{proof}
This is just an application of the formula $g_*g^*(-)=d\cdot(-)$, which holds for any finite flat representable morphism $g$ of degree $d$.
\end{proof}
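To unwind this: if $f^*\alpha=0$, then $d\alpha=f_*f^*\alpha=0$, so $f^*$ is injective after inverting $d$. Conversely, if $\beta\in\ch(\cX)[1/d]$ satisfies $p_1^*\beta=p_2^*\beta$, then flat base change along the cartesian diagram above gives
$$ f^*\Big(\frac{1}{d}\,f_*\beta\Big)=\frac{1}{d}\,p_{1,*}p_2^*\beta=\frac{1}{d}\,p_{1,*}p_1^*\beta=\beta,$$
so $\beta$ lies in the image of $f^*$, which proves the equalizer property.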
\begin{remark}
We are interested in the following application: if $f:\cX \rightarrow \cY$ is a $G$-torsor where $G$ is a constant group, we have an honest action of $G$ on $\ch(\cX)$, and the lemma tells us that the pullback $f^*$ is an isomorphism between $\ch(\cY)$ and the invariant subring $\ch(\cX)^{G-{\rm inv}}$, provided we invert the order of $G$ in the Chow groups.
\end{remark}
\subsection*{Relation coming from the zero-section}
By a standard argument in intersection theory, we know that
$$ \ch(\VV\smallsetminus 0)=\ch(\cX)/(c_{\rm top}(\VV))$$
for a vector bundle $\VV$ on a stack $\cX$, where $c_{\rm top}(-)$ is the top Chern class.
To compute this Chern class, we first need to describe the Chow ring of $\cP_3'$. In \cite{EdFul2}, the authors compute the Chow ring of $\cM_{0}^{\leq 1}$ with integral coefficients, and their result implies that
$$ \ch(\cP_3')=\ch(\cM_0^{\leq 1}\times \cB\GG_{\rmm})=\ZZ[1/6,c_1,c_2,s]$$
where $c_i:=c_i\Big(\pi_*\omega_{\cZ/\cM_0^{\leq 1}}^{\vee}\Big)$ for $i=1,2$ and $s:=c_1\Big(\pi_*(\cL\otimes\omega_{\cZ/\cM_0^{\leq 1}}^{\otimes -2})\Big)$.
\begin{remark}
We are using the fact that $\ch(\cX\times \cB\GG_{\rmm})\simeq \ch(\cX)\otimes \ch(\cB\GG_{\rmm})$ for a smooth quotient stack $\cX$. This follows from Lemma 2.12 of \cite{Tot}.
\end{remark}
We need to compute $c_{9}(\pi_*\cL^{\otimes -2})$. If we denote by $\cS$ the line bundle on $\cP_3'$ such that $\pi^*(\cS):=\cL\otimes\omega_{\cZ/\cM_0^{\leq 1}}^{\otimes -2}$, then $\cL^{\otimes -2}=\omega_{\cZ}^{\otimes -4}\otimes \pi^*\cS^{\otimes -2}$, and the projection formula gives
$$ \pi_*\cL^{\otimes -2} = \pi_*\omega_{\cZ}^{\otimes -4} \otimes \cS^{\otimes -2}.$$
\begin{proposition}
We have an exact sequence of vector bundles over $\cP_3'$
$$
0\rightarrow \det (\pi_*\omega_{\cZ}^{\vee})\otimes {\rm Sym}^2 \pi_*\omega_{\cZ}^{\vee} \rightarrow {\rm Sym}^4 \pi_*\omega_{\cZ}^{\vee} \rightarrow \pi_*\omega_{\cZ}^{\otimes -4} \rightarrow 0.$$
\end{proposition}
\begin{proof}
Let $S$ be a $\kappa$-scheme and let $(Z/S,L)$ be an object of $\cP_3'(S)$. Consider the $S$-morphism
$$
\begin{tikzcd}
Z \arrow[rd, "\pi"] \arrow[r, "i", hook] & \PP(\cE) \arrow[d, "p"] \\
& S
\end{tikzcd}
$$
induced by the complete linear system of the line bundle $\omega_{Z/S}^{\vee}$, namely $\cE:=(\pi_*\omega_{Z/S}^{\vee})^{\vee}$. Then $i$ is a closed immersion and we have the following facts:
\begin{itemize}
\item $i^*\cO_{\PP(\cE)}(1)=\omega_{Z/S}^{\vee}$,
\item $\cO_{\PP(\cE)}(-Z)\simeq\omega_{\PP(\cE)/S}(1)$;
\end{itemize}
see the proof of Proposition 6 of \cite{EdFul2} for a detailed discussion. Because
$$ \pi_*\omega_{Z/S}^{\otimes -4}=p_*i_*(\omega_{Z/S}^{\otimes -4})=p_*i_*i^*\cO_{\PP(\cE)}(4)$$
we can consider the exact sequence
$$ 0 \rightarrow \cO_{\PP(\cE)}(4-Z) \rightarrow \cO_{\PP(\cE)}(4) \rightarrow i^{*}\cO_{\PP(\cE)}(4) \rightarrow 0. $$
If we push the sequence forward through $p$, it remains exact on every geometric fiber over $S$, because $Z$ is embedded as a conic. Therefore we get
$$ 0\rightarrow p_*(\cO_{\PP(\cE)}(5)\otimes \omega_{\PP(\cE)/S}) \rightarrow p_*\cO_{\PP(\cE)}(4) \rightarrow \pi_*\omega_{Z}^{\otimes -4} \rightarrow 0$$
and using the formula $\omega_{\PP(\cE)/S}=\cO_{\PP(\cE)}(-3) \otimes p^*\det \cE^{\vee}$, we get the statement.
\end{proof}
We have found the first relation in our strata, which is
$$c_9:=\frac{c_{15}(\cS^{\otimes -2}\otimes {\rm Sym}^4 \pi_*\omega_{\cZ}^{\vee} )}{c_6(\cS^{\otimes -2}\otimes\det (\pi_*\omega_{\cZ}^{\vee})\otimes {\rm Sym}^2 \pi_*\omega_{\cZ}^{\vee})}$$
and can be described completely in terms of the Chern classes $c_1,c_2$ of $\pi_*\omega_{\cZ}^{\vee}$ and $s=c_1(\cS)$.
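Here the quotient expression is legitimate because top Chern classes are multiplicative in short exact sequences of vector bundles: for $0\rightarrow A \rightarrow B \rightarrow C \rightarrow 0$ one has
$$ c_{\rm top}(B)=c_{\rm top}(A)\,c_{\rm top}(C),$$
so $c_{15}=c_6\cdot c_9$ and $c_9$ can be computed as the degree-$9$ part of the quotient of the total Chern classes.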
\subsection*{Relations from the locus $D$}
We concentrate now on the locus $D$. First of all, notice that $D$ is contained in the restriction of $\pi_*\cL^{\otimes -2}$ to $\cM_0^{1}\times \cB\GG_{\rmm}$.
\begin{remark}
In \cite{EdFul2}, the authors describe the stack $\cM_0^{\leq 1}$ as the quotient stack $[S/\mathrm{GL}_3]$, where $S$ is an open subscheme of the six-dimensional $\mathrm{GL}_3$-representation of homogeneous forms of degree $2$ in the three variables $x,y,z$. The action can be described as
$$ A.f(x,y,z):=\det(A)f(A^{-1}(x,y,z))$$
for every $A \in \mathrm{GL}_3$ and $f \in S$, and the open subscheme $S$ is the complement of the closed invariant subscheme parametrizing non-reduced forms.
The proof consists in using the relatively very ample line bundle $\omega_{\cZ}^{\vee}$ to describe $\cM_0^{\leq 1}$ as the locus of reduced conics in $\PP^2$ with an action of $\mathrm{GL}_3$. For a more detailed discussion, see \cite{EdFul2}.
In this setting, $\cM_0^1$ corresponds to the closed locus $S^1$ of $S$ parametrizing reducible reduced conics in $\PP^2$. It is easy to see that the action of $\mathrm{GL}_3$ on $S^1$ is transitive, therefore $\cM_0^1 \simeq \cB H$, with $H$ the subgroup of $\mathrm{GL}_3$ defined as the stabilizer of the element $xy \in S^1$. A straightforward computation shows that $H\simeq (\GG_{\rmm}\ltimes \GG_{\rma})^2 \ltimes C_2$, where $C_2$ is the constant group with two elements.
\end{remark}
As we are inverting $2$ in the Chow rings, we can use \Cref{lem:chow-tor} to describe the Chow ring of $\cM_0^1$ as the subring of $\ch(\cB\GG_{\rmm}^2)$ invariant under a specific action of $C_2$. The $\GG_{\rma}$'s do not appear in the computation of the Chow ring thanks to Proposition 2.3 of \cite{MolVis}. We can see that the elements of the form $(t_1,t_2,1)$ of $\GG_{\rmm}^2 \ltimes C_2$ correspond to the matrices in $\mathrm{GL}_3$ of the form
$$
\begin{pmatrix*}
t_1 & 0 & 0 \\
0 & t_2 & 0 \\
0 & 0 & 1
\end{pmatrix*};
$$
the elements of the form $(t_1,t_2,-1)$ correspond to the matrices
$$
\begin{pmatrix*}
0 & t_1 & 0 \\
t_2 & 0 & 0 \\
0 & 0 & 1
\end{pmatrix*}.
$$
It is immediate to see that the action of $C_2$ on $\GG_{\rmm}^2$ can be described as $(-1).(t_1,t_2)=(t_2,t_1)$. Therefore, if we denote by $t_1$ and $t_2$ the generators of $\ch(\cB\GG_{\rmm}^2)$ corresponding to the two copies of $\GG_{\rmm}$, we get
$$ \ch(\cB(\GG_{\rmm}^2\ltimes C_2)) \simeq \ZZ[1/6, t_1+t_2, t_1t_2].$$
A standard computation shows the following result.
\begin{lemma}
If we denote by $i:\cM_0^1 \into \cM_0^{\leq 1}$ the (regular) closed immersion, we have that $i^{*}(c_1)=t_1+t_2$ and $i^{*}(c_2)=t_1t_2$, and therefore $i^{*}$ is surjective. Moreover, we have the equality $[\cM_0^1]=-c_1$ in $\ch(\cM_0^{\leq 1})$.
\end{lemma}
\begin{proof}
The description of $i^{*}(c_i)$ for $i=1,2$ follows from the explicit description of the inclusion
$$\cM_0^1 = [\{xy\}/H] \into [S/\mathrm{GL}_3]=\cM_0^{\leq 1}.$$
Regarding the second part of the statement, it is enough to observe that $\cM_0^1=[S^1/\mathrm{GL}_3] \into [S/\mathrm{GL}_3]$ where $S^1$ is the hypersurface of $S$ described by the vanishing of the determinant of the general conic. A straightforward computation of the $\mathrm{GL}_3$-character associated to the determinant formula shows the result.
\end{proof}
Finally, we focus on $D$. The vector bundle $\pi_*\cL^{\otimes -2}$ (or equivalently $\pi_*\omega_{\cZ}^{\otimes -4} \otimes \cS^{\otimes -2}$) can now be seen as a $9$-dimensional $H$-representation. Specifically, we are looking at sections of $\omega_{\cZ}^{\otimes -4} \otimes \cS^{\otimes -2}$ on the curve $xy=0$, which form a $9$-dimensional vector space $\AA(4,4)$ parametrizing pairs of binary forms of degree $4$ which have to coincide at the point $x=y=0$. Let us denote by $\infty$ the point $x=y=0$, which is the common point of the two components. With this notation, $D$ parametrizes pairs $(f,g)$ such that $f(\infty)=g(\infty)=0$ and either the coefficient of $xz^3$ or the one of $yz^3$ vanishes.
\begin{remark}
This follows from the local description of the $2:1$-cover. In fact, if $\infty$ is the intersection of the two components, we have that \'etale locally the double cover looks like
$$ k[[x,y]]/(xy) \into k[[x,y,t]]/(xy,\,t^2-h(x,y))$$
where $h$ is exactly the section of $\pi_*\omega_{\cZ}^{\otimes -4} \otimes \cS^{\otimes -2}$. Because the fibers over a node of the quotient morphism can only be nodes or tacnodes by \Cref{prop:description-quotient}, we get that $h$ is either a unit (the quotient morphism is \'etale) or of the form $xp(x)+yq(y)$ with $p(0)\neq 0$ and $q(0)\neq 0$.
\end{remark}
The action of $\GG_{\rmm}^2\times \GG_{\rmm}$ (where the second factor is the one whose generator is $s$) on the coefficient of $x^iz^{4-i}$ (respectively $y^iz^{4-i}$) is given by the character $t_1^is^{-2}$ (respectively $t_2^is^{-2}$).
\begin{lemma}
The ideal of relations coming from $D$ in $\VV(\pi_*\cL^{\otimes -2})\vert_{\cM_0^1 \times \cB\GG_{\rmm}}$ is generated by the two classes $2s(4s-(t_1+t_2))$ and $2s(4s^2-2s(t_1+t_2)+t_1t_2)$. Therefore the ideal of relations coming from $D$ in $\cP_3'$ is generated by the two relations $D_1:=2sc_1(c_1-4s)$ and $D_2:=2sc_1(4s^2-2sc_1+c_2)$.
\end{lemma}
\begin{proof}
Because of \Cref{lem:chow-tor}, we can start by computing the ideal of relations in the $\GG_{\rmm}^2$-equivariant setting (i.e.\ forgetting the action of $C_2$) and then take the invariant elements under the action of $C_2$. It is clear that the ideal of relations $I$ in the $\GG_{\rmm}^2$-equivariant setting is of the form $(2s(2s-t_1),2s(2s-t_2))$. Thus the invariant ideal $I^{\rm inv}$ is generated by the elements $2s(4s-(t_1+t_2))$ and $2s(2s-t_1)(2s-t_2)$.
\end{proof}
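For completeness, let us match these generators with the ones in the statement. Expanding, $2s(2s-t_1)(2s-t_2)=2s(4s^2-2s(t_1+t_2)+t_1t_2)$. Pushing forward along $i:\cM_0^1\times\cB\GG_{\rmm}\into \cM_0^{\leq 1}\times\cB\GG_{\rmm}$, with $[\cM_0^1]=-c_1$, $i^*(c_1)=t_1+t_2$ and $i^*(c_2)=t_1t_2$, the projection formula gives
$$ i_*\big(2s(4s-(t_1+t_2))\big)=-2sc_1(4s-c_1)=D_1, \qquad i_*\big(2s(2s-t_1)(2s-t_2)\big)=-2sc_1(4s^2-2sc_1+c_2)=-D_2,$$
which generate the same ideal as $D_1$ and $D_2$.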
As a corollary, we get the Chow ring of $\widetilde{\cH}_3\smallsetminus \widetilde{\Delta}_1$. Before describing it, we want to change generators. We can express $c_1$, $c_2$ and $s$ using the classes $\lambda_1$, $\lambda_2$ and $\xi_1$, where $\lambda_i$ as usual is the $i$-th Chern class of the Hodge bundle $\HH$ and $\xi_1$ is the fundamental class of $\Xi_1$, which is defined by the cartesian diagram
$$
\begin{tikzcd}
\Xi_1 \arrow[d] \arrow[r, hook] & \widetilde{\cH}_3\smallsetminus \widetilde{\Delta}_1 \arrow[d] \\
\cM_0^1\times \cB\GG_{\rmm} \arrow[r, hook] & \cP_3'=\cM_0^{\leq 1}\times \cB\GG_{\rmm}.
\end{tikzcd}
$$
\begin{lemma}\label{lem:lambda-class-H}
In the situation above, we have that $s=(-\xi_1 - \lambda_1)/3$, $c_1=-\xi_1$ and $c_2= \lambda_2 - (\lambda_1^2 - \xi_1^2)/3$. Furthermore, we have the following relation
$$\lambda_3=(\xi_1+\lambda_1)(9\lambda_2+(\xi_1+\lambda_1)(\xi_1-2\lambda_1))/27.$$
\end{lemma}
\begin{proof}
First of all, the relation $\xi_1=-c_1$ is clear from the construction of $\xi_1$, as we have already computed the fundamental class of $\cM_0^1$ in $\ch(\cM_0^{\leq 1})$.
Let $f:C\rightarrow Z$ be the quotient morphism of an object in $\widetilde{\cH}_3\smallsetminus \widetilde{\Delta}_1$ and let $\pi_C:C\rightarrow S$ and $\pi_Z:Z\rightarrow S$ be the two structural morphisms. Grothendieck duality implies that
$$ f_*\omega_{C/S} = \hom_Z(f_*\cO_C, \omega_{Z/S})$$
but because $f$ is finite flat, we know that $f_*\cO_C=\cO_Z \oplus L$, i.e. $f_*\omega_{C/S}=\omega_{Z/S}\oplus (\omega_{Z/S}\otimes L^{\vee})$. Recall that $L\simeq \omega_{Z/S}^{\otimes 2} \otimes \pi_Z^*\cS$ for a line bundle $\cS$ on the base. Therefore if we consider the pushforward through $\pi_Z$, we get
$$ \pi_{C,*}\omega_{C/S}=\pi_{Z,*}(\omega_{Z/S}^{\vee})\otimes \cS^{\vee}$$
and the formulas in the statement follow from simple computations with Chern classes.
\end{proof}
\begin{corollary}\label{cor:chow-hyper}
We have the following isomorphism of rings:
$$\ch(\widetilde{\cH}_3)= \ZZ[1/6,\lambda_1,\lambda_2,\xi_1]/(c_9,D_1,D_2)$$
where $D_1=2\xi_1(\lambda_1+\xi_1)(4\lambda_1+\xi_1)/9$, $D_2:=2\xi_1(\xi_1+\lambda_1)(9\lambda_2+(\xi_1+\lambda_1)^2)/27$ and $c_9$ is a polynomial in degree $9$.
\end{corollary}
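As a sanity check, the stated expressions for $D_1$ and $D_2$ follow from \Cref{lem:lambda-class-H} by direct substitution:
$$ 2sc_1=\frac{2\xi_1(\xi_1+\lambda_1)}{3}, \qquad c_1-4s=-\xi_1+\frac{4(\xi_1+\lambda_1)}{3}=\frac{\xi_1+4\lambda_1}{3}, $$
so that $D_1=2sc_1(c_1-4s)=2\xi_1(\lambda_1+\xi_1)(4\lambda_1+\xi_1)/9$, while
$$ 4s^2-2sc_1+c_2=\frac{4(\xi_1+\lambda_1)^2}{9}-\frac{2\xi_1(\xi_1+\lambda_1)}{3}+\lambda_2-\frac{(\lambda_1-\xi_1)(\lambda_1+\xi_1)}{3}=\frac{9\lambda_2+(\xi_1+\lambda_1)^2}{9}, $$
which gives $D_2=2\xi_1(\xi_1+\lambda_1)(9\lambda_2+(\xi_1+\lambda_1)^2)/27$.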
\begin{remark}
The polynomial $c_9$ has the following form:
\begin{equation*}
\begin{split}
c_9 = & -\frac{16192}{19683}\lambda_1^9 - \frac{23200}{6561}\lambda_1^8\xi_1 - \frac{31040}{6561}\lambda_1^7\xi_1^2 +
\frac{1376}{729}\lambda_1^7\lambda_2 - \frac{320}{6561}\lambda_1^6\xi_1^3 + \\& + \frac{4576}{243}\lambda_1^6\xi_1\lambda_2 +
\frac{30784}{6561}\lambda_1^5\xi_1^4 + \frac{10144}{243}\lambda_1^5\xi_1^2\lambda_2 + \frac{3968}{81}\lambda_1^5\lambda_2^2 +
\frac{16256}{6561}\lambda_1^4\xi_1^5 + \\ & + \frac{15136}{729}\lambda_1^4\xi_1^3\lambda_2 + \frac{992}{27}\lambda_1^4\xi_1\lambda_2^2
- \frac{320}{243}\lambda_1^3\xi_1^6 - \frac{5792}{243}\lambda_1^3\xi_1^4\lambda_2 -
\frac{11072}{81}\lambda_1^3\xi_1^2\lambda_2^2 - \\ & - \frac{7264}{27}\lambda_1^3\lambda_2^3 - \frac{7360}{6561}\lambda_1^2\xi_1^7 -
\frac{5216}{243}\lambda_1^2\xi_1^5\lambda_2 - \frac{11392}{81}\lambda_1^2\xi_1^3\lambda_2^2 -
\frac{2848}{9}\lambda_1^2\xi_1\lambda_2^3 + \\ & + \frac{640}{6561}\lambda_1\xi_1^8 + \frac{1952}{729}\lambda_1\xi_1^6\lambda_2 +
\frac{832}{27}\lambda_1\xi_1^4\lambda_2^2 + \frac{1568}{9}\lambda_1\xi_1^2\lambda_2^3 +384\lambda_1\lambda_2^4 + \\ & +
\frac{2912}{19683}\xi_1^9 + \frac{352}{81}\xi_1^7\lambda_2 + \frac{3808}{81}\xi_1^5\lambda_2^2 +
\frac{5984}{27}\xi_1^3\lambda_2^3 + 384\xi_1\lambda_2^4.
\end{split}
\end{equation*}
\end{remark}
\subsection*{Normal bundle of $\widetilde{\cH}_3\smallsetminus \widetilde{\Delta}_1$ in $\widetilde{\mathcal M}_3 \smallsetminus \widetilde{\Delta}_1$}
We end the section with the computation of the first Chern class of the normal bundle of the closed immersion $\widetilde{\cH}_3\smallsetminus \widetilde{\Delta}_1 \into \widetilde{\mathcal M}_3 \smallsetminus \widetilde{\Delta}_1$. For the sake of notation, we denote the normal bundle by $N_{\cH|\cM}$.
\begin{proposition}
The fundamental class of $\overline{\cH}_3$ in $\ch(\overline{\cM}_3)$ is equal to $9\lambda_1-\delta_0-3\delta_1$.
\end{proposition}
\begin{proof}
This is Theorem 1 of \cite{Est}. It is important to notice that in the computations the author only needs to invert $2$ in the Picard group to get the result.
\end{proof}
\begin{remark}
As in the ($A_1$-)stable case, we denote by $\Delta_0$ the closure of the substack of $\widetilde{\mathcal M}_3$ which parametrizes curves with a non-separating node. Alternatively, we can consider the stack $\Delta$ in the universal curve $\widetilde{\mathcal C}_3$ of $\widetilde{\mathcal M}_3$, defined as the vanishing locus of the first Fitting ideal of $\Omega_{\widetilde{\mathcal C}_3|\widetilde{\mathcal M}_3}$. If we take a connected component $\Sigma \subset \Delta$, one can see that the induced morphism $\Sigma \rightarrow \widetilde{\mathcal M}_3$ is a closed embedding. For a more detailed discussion, see Appendix A of \cite{DiLorVis}.
We denote by $\widetilde{\Delta}$ the image with its natural stacky structure and by $\widetilde{\Delta}_0$ the complement of the inclusion $\widetilde{\Delta}_1 \subset \widetilde{\Delta}$. Thanks to \Cref{lem:sep-sing}, we know that $\widetilde{\Delta}_1\into \widetilde{\Delta}$ is also open, therefore we get that $\widetilde{\Delta}_0$ is a closed substack of $\widetilde{\mathcal M}_3$. We denote by $\delta_0$ its fundamental class in the Chow ring of $\widetilde{\mathcal M}_3$.
\end{remark}
Because $\widetilde{\mathcal M}_3\smallsetminus\overline{\cM}_3$ has codimension $2$, the same formula works in our context. Because $\delta_1$ is defined as the fundamental class of $\widetilde{\Delta}_1$, we just need to compute $\delta_0$ restricted to $\widetilde{\cH}_3\smallsetminus \widetilde{\Delta}_1$ to get the description we want. To do so, we compute the restriction of $\delta_0$ to $\widetilde{\cH}_3 \smallsetminus (\widetilde{\Delta}_1 \cup \Xi_1)$ and to $\Xi_1\smallsetminus \widetilde{\Delta}_1$ and then glue the information together.
First of all, notice that $\widetilde{\cH}_3 \smallsetminus (\widetilde{\Delta}_1 \cup \Xi_1)$ is an open inside a $9$-dimensional representation $V$ of $\mathrm{PGL}_2\times \GG_{\rmm}$, as we are dealing with $2:1$-covers of $\PP^1$. Unwinding the definitions, we get the following result.
\begin{lemma}\label{lem:ar-vis}
The representation $V$ of $\mathrm{PGL}_2 \times \GG_{\rmm}$ above coincides with the one given by Arsie and Vistoli in Corollary 4.6 of \cite{ArVis}.
\end{lemma}
This implies that we can see it as an open inside $[\AA(8)/(\mathrm{GL}_2/\mu_4)]$, where $\AA(8)$ is the vector space of binary forms of degree $8$ and $\mathrm{GL}_2/\mu_4$ acts by the equation $A.f(x)=f(A^{-1}x)$. By the theory developed in \cite{ArVis} (see \Cref{prop:descr-hyper} in our situation), it is clear that the sections $f \in \AA(8)$ describe the branching locus of the quotient morphism. In particular, worse-than-nodal singularities on the $2:1$-cover of the projective line correspond to points on $\PP^1$ where the branching divisor is not \'etale, or equivalently points where $f$ has multiplicity more than $1$. Therefore $\delta_0$ is represented by the closed invariant subscheme of singular forms inside $\AA(8)$. This was already computed by Di Lorenzo (see the first relation in Theorem 6 in \cite{DiLor}), and we have that $ \delta_0=28\tau$ with $$\tau=c_1(\pi_*(\omega_C(-W)^{\otimes 2}))$$
where $W$ is the ramification divisor in $C$. Notice that if $f:C\rightarrow \PP^1$ is the cyclic cover of the projective line, we have that $\cO_C(W)\simeq f^{*}\cL^{\vee}$. A computation using Grothendieck duality gives us that $\tau=-s$.
We have that $\delta_0=as+bc_1$ in $\ch(\widetilde{\cH}_3\smallsetminus \widetilde{\Delta}_1)$ for some coefficients $a,b \in \ZZ[1/6]$.
The computations above imply that if we restrict to the open complement of $\Xi_1 \smallsetminus \widetilde{\Delta}_1$, we get $a=-28$.
\begin{remark}
Notice that although $\mathrm{GL}_2/\mu_4$ is not special, if we invert $2$ in the Chow rings, we have that the pullback morphism
$$ \ch(\cB\mathrm{GL}_2/\mu_4) \longrightarrow \ch(\cB \mathrm{GL}_2)$$
is an isomorphism. Therefore one can do the computations using the maximal torus in $\mathrm{GL}_2$ and apply the formula in \Cref{rem:gener} with $N=8$ and $k=2$ to get the same result.
\end{remark}
The restriction to $\Xi_1\smallsetminus \widetilde{\Delta}_1$ is a bit more complicated, because we have that $\Xi_1 \subset \widetilde{\Delta}_0$. Recall the description
$$ \Xi_1\smallsetminus \widetilde{\Delta}_1 = [\AA(4,4)\smallsetminus D/H]$$
where $\AA(4,4)$ is the vector space of pairs of binary forms $(f(x,z),g(y,z))$ of degree $4$ such that $f(0,1)=g(0,1)$. We define $\Xi_1^0$ to be the open substack of $\Xi_1\smallsetminus \widetilde{\Delta}_1$ consisting of the pairs $(f,g)$ such that $f(0,1)=g(0,1)\neq 0$. Clearly, $D$ does not intersect $\Xi_1^0$.
We do the computations on $\Xi_1^0$ and verify they are enough to determine the coefficient $b$ in the description $\delta_0=-28s+bc_1$.
\begin{remark}
The class $[\Xi_1\smallsetminus \Xi_1^0]$ in $\ch(\Xi_1)$ is equal to $-2s$. In fact, it can be described as the vanishing locus of the coefficient of $z^4$ for the pair $(f,g) \in \AA(4,4)$. Therefore the Picard group (after inverting $2$) of $\Xi_1^0$ is freely generated by $c_1$.
\end{remark}
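The class in the remark can be read off from the characters recalled at the beginning of the section: the coefficient of $z^4$ (common to $f$ and $g$) transforms with the character $t_1^0s^{-2}=s^{-2}$, whose first Chern class is $-2s$; hence
$$ [\Xi_1\smallsetminus \Xi_1^0]=c_1(s^{-2})=-2s. $$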
Let us define a closed substack inside $\Xi_1^0$: we define $\Delta'$ as the locus parametrizing pairs $(f,g)$ such that either $f$ or $g$ is a singular form.
\begin{lemma}
In the setting above, we have the equality
$$ [\Delta']=12c_1$$
in the Picard group of $\Xi_1^0$.
\end{lemma}
\begin{proof}
As a consequence of \Cref{lem:chow-tor}, we can do the $\GG_{\rmm}^2$-equivariant computation of the equivariant class of $\Delta'$. We have that $\Delta'=\Delta_1' \cup \Delta_2'$ where $\Delta_1'$ (respectively $\Delta'_2$) is the substack parametrizing pairs $(f,g)$ such that $f$ (respectively $g$) is a singular form.
We reduce ourselves to computing the class of the locus of singular forms inside $\AA(4)$. The result then follows from a straightforward computation.
\end{proof}
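The straightforward computation mentioned in the proof can be sketched as follows, using the standard facts that the discriminant of a binary quartic is a polynomial of degree $6$ in its coefficients, isobaric of weight $12$. Since the coefficient of $x^iz^{4-i}$ in $f$ carries the character $t_1^is^{-2}$, every monomial of the discriminant of $f$ has character $t_1^{12}s^{-12}$, so
$$ [\Delta_1']=12t_1-12s, \qquad [\Delta_2']=12t_2-12s, \qquad [\Delta']=12(t_1+t_2)-24s. $$
Restricting to $\Xi_1^0$ kills $s$ (after inverting $2$) and identifies $t_1+t_2$ with $c_1$, giving $[\Delta']=12c_1$.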
Now we are ready to compute the restriction of $\delta_0$.
\begin{lemma}
In the situation above, we have
$$ \delta_0\vert_{\Xi_1^0}= -2c_1 + [\Delta'] $$
inside the Chow ring of $\Xi_1^0$.
\end{lemma}
\begin{proof}
Because we are computing the Chern class of a line bundle, we can work up to codimension two, or equivalently we can restrict everything to $\overline{\cM}_3\smallsetminus \overline{\Delta}_1$, which by abuse of notation is denoted by $\cM$. We denote by $\cC$ the universal curve over $\cM$. Consider the closed substack $\Delta$ in $\cC$ of singular points of the morphism $\pi:\cC\rightarrow \cM$ defined by the first Fitting ideal of $\Omega_{\cC/\cM}$. We get a morphism $\pi\vert_{\Delta}:\Delta \rightarrow \cM$ whose image is exactly $\Delta_0$, and the morphism is finite birational. Moreover, $\Delta$ is smooth and its connected components map isomorphically to the irreducible components of $\Delta_0$. Therefore if $\{\Gamma_i\}_{i \in I}$ is the set of irreducible components of $\Delta_0$, we have that
$$\delta_0\vert_{\Xi_1^0}=\sum_{i \in I} c_1(N_{\Gamma_i/\cM})\vert_{\Xi_1^0}$$
because $\Gamma_i$ is a smooth Cartier divisor of $\cM$. For a more detailed discussion, see Appendix A of \cite{DiLorVis}.
Let us look at the geometric points of $\Xi_1^0$. A curve in $\Xi_1^0$ is a $2:1$-cover of a reducible reduced conic, or equivalently it can be described as two genus $1$ curves meeting at a pair of nodes. The two nodes are the fiber over the intersection point of the two components of the conic. Therefore, it is clear that $\Xi_1^0$ is contained in two of the $\Gamma_i$'s, say $\Gamma_1$ and $\Gamma_2$, and intersects only two of the others, say $\Gamma_3$ and $\Gamma_4$, transversally, namely when one of the two genus $1$ curves is singular.
The cycle $\Gamma_3+\Gamma_4$ restricted to $\Xi_1^0$ is exactly the fundamental class of $\Delta'$. It remains to compute $c_1(N_{\Gamma_i|\cM})\vert_{\Xi_1^0}$ for $i=1,2$. Consider the commutative $H$-equivariant diagram
$$
\begin{tikzcd}
p \arrow[r, hook] \arrow[d, Rightarrow, no head] & C \arrow[d, "f"] \\
q \arrow[r, hook] & Z
\end{tikzcd}
$$
where $q$ is the node of the conic and $p$ is one of the two nodes lying over $q$. We know that $f$ is \'etale over the node of $Z$ as we restricted to the open $\Xi_1^0$. Therefore the normal bundle $N_{p|C}$ is equivariantly isomorphic to $N_{q|Z}$ and it is enough to compute the Chern class of $N_{q|Z}$ as a character of $\GG_{\rmm}^2 \rtimes C_2$. A straightforward computation shows that
$$ c_1(N_{q|Z})=c_1$$
where $c_1$ is the $\GG_{\rmm}^2$-character with weight $(1,1)$.
\end{proof}
\begin{corollary}\label{cor:norm-hyper}
The first Chern class of $N_{\cH|\cM}$ is equal to $(2\xi_1-\lambda_1)/3$.
\end{corollary}
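The corollary can be obtained by assembling the computations of this subsection (a sketch): since the Picard group of $\Xi_1^0$ is freely generated by $c_1$, restricting $\delta_0=-28s+bc_1$ to $\Xi_1^0$ and comparing with $\delta_0\vert_{\Xi_1^0}=-2c_1+[\Delta']=(-2+12)c_1$ gives $b=10$. Substituting $s=-(\xi_1+\lambda_1)/3$ and $c_1=-\xi_1$ yields
$$ \delta_0=-28s+10c_1=\frac{28(\xi_1+\lambda_1)}{3}-10\xi_1=\frac{28\lambda_1-2\xi_1}{3}, $$
and since $\delta_1=0$ on our open substack,
$$ c_1(N_{\cH|\cM})=(9\lambda_1-\delta_0-3\delta_1)\big\vert_{\widetilde{\cH}_3\smallsetminus\widetilde{\Delta}_1}=9\lambda_1-\frac{28\lambda_1-2\xi_1}{3}=\frac{2\xi_1-\lambda_1}{3}. $$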
\section{Description of $\widetilde{\mathcal M}_3 \smallsetminus (\widetilde{\Delta}_1\cup \widetilde{\cH}_3)$}\label{sec:m3tilde-open}
We focus now on the open stratum. Recall that for a smooth genus $g$ curve, either the canonical bundle is very ample or the curve is hyperelliptic and the canonical morphism factors through the hyperelliptic quotient. This cannot be true for $A_r$-stable curves, as in $\widetilde{\Delta}_1$ the dualizing line bundle is not globally generated, see \Cref{prop:base-point-can}. Nevertheless, if we remove $\widetilde{\Delta}_1$, we have the same result for genus $3$ curves.
\begin{lemma}
Suppose $C$ is an $A_r$-stable curve of genus $3$ over an algebraically closed field which does not have separating nodes and is not hyperelliptic. Then the canonical morphism (induced by the complete linear system of the dualizing sheaf) is a closed immersion.
\end{lemma}
\begin{proof}
This proof uses the theory developed in \cite{Cat} to deal with most of the cases, analyzing the remaining ones separately.
Firstly, we prove that if $C$ is a $2$-connected $A_r$-stable curve of genus $3$ (i.e. it is in $\widetilde{\mathcal M}_3\smallsetminus \widetilde{\Delta}_1$), then $C$ is hyperelliptic if and only if there exist two smooth points $x,y$ such that $\dim \H^0(C,\cO(x+y))=2$ (notice that this is the definition of hyperelliptic as in \cite{Cat}).
One implication is clear. Suppose there exist two smooth points $x,y$ such that $\dim \H^0(C,\cO(x+y))=2$. Proposition 3.14 of \cite{Cat} gives us two possibilities:
\begin{itemize}
\item[(a)] $x,y$ belong to two different irreducible components $Y_1,Y_2$ of genus $0$ such that every connected component $Z$ of $C-Y_1-Y_2$ intersects $Y_1$ in a node and $Y_2$ in a (different) node,
\item[(b)] $x,y$ belong to an irreducible hyperelliptic curve $Y$ such that every connected component $Z$ of $C-Y$ intersects $Y$ in a Cartier divisor isomorphic to $\cO(x+y)$.
\end{itemize}
Regarding (a), the stability condition implies that the only possibilities are either that there are no other connected components of $C-Y_1-Y_2$, which implies $C$ is hyperelliptic, or that there is only one connected component $Z$ of $C-Y_1-Y_2$, which is of genus $1$. Because $Z$ is of genus $1$ and intersects $Y_1\cup Y_2$ in two points, there exists a unique hyperelliptic involution of $Z$ that exchanges them, see \Cref{lem:genus1}. Therefore again $C$ is hyperelliptic. In case (b), the stability condition implies that the only possibilities are either that $C$ is irreducible, and therefore hyperelliptic, or that $C$ is the union of two genus $1$ curves intersecting in a length $2$ divisor. Again it follows from \Cref{lem:genus1} that $C$ is hyperelliptic.
Now, we focus on another definition given in \cite{Cat}. The author defines $C$ to be strongly connected if there are no pairs of nodes $x,y$ such that $C\smallsetminus \{x,y\}$ is disconnected. Furthermore, the author defines $C$ to be very strongly connected if it is strongly connected and there is no point $p \in C$ such that $C\smallsetminus \{p\}$ is disconnected.
In our situation, a curve $C$ is not very strongly connected if one of the following holds:
\begin{itemize}
\item[(1)] $C$ is the union of two genus $1$ curves meeting at a divisor of length $2$,
\item[(2)] $C$ is the union of two genus $0$ curves meeting in a singularity of type $A_7$,
\item[(3)] $C$ is the union of a genus $0$ and a genus $1$ curve meeting in a singularity of type $A_5$.
\end{itemize}
Case (1) is always hyperelliptic by \Cref{lem:genus1}. An easy computation shows that case (3) is never hyperelliptic and that the canonical morphism identifies $C$ with the union of a cubic and a flex tangent in $\PP^2$. Finally, one can show that in case (2) the canonical morphism restricted to the two components is a closed embedding; therefore it is either a finite flat morphism of degree $2$ onto its image ($C$ hyperelliptic) or a closed immersion globally on $C$.
It remains to prove the statement in the case $C$ is very strongly connected. This is Theorem G of \cite{Cat}.
\end{proof}
\begin{remark}
Notice that this lemma is really specific to genus $3$ curves and is false in genus $4$. Consider a genus $2$ smooth curve $C$ meeting a genus $1$ smooth curve $E$ in two points which do not form a $g^1_2$ on $C$. Then the canonical morphism is $2:1$ restricted to $E$ but birational on $C$.
\end{remark}
The previous lemma implies that the description of $\cM_3 \smallsetminus \cH_3$ proved by Di Lorenzo in Proposition 3.1.3 of \cite{DiLor2} can be generalized in our setting. Specifically, we have the following isomorphism:
$$ \widetilde{\mathcal M}_3 \smallsetminus (\widetilde{\Delta}_1\cup \widetilde{\cH}_3) \simeq [U/\mathrm{GL}_3]$$
where $U$ is an invariant open subscheme inside the space $\AA(3,4)$ of (homogeneous) forms in three coordinates of degree $4$ which is a representation of $\mathrm{GL}_3$ with the action described by the formula $A.f(x):=\det(A) f(A^{-1}x)$. The complement parametrizes forms $f$ such that the induced projective curve $\VV(f)$ in $\PP^2$ is not $A_r$-prestable.
We use the description as a quotient stack to compute its Chow ring. The strategy is similar to the one adopted in \cite{DiLorFulVis} with a new idea to simplify computations.
As usual, we pass to the projectivization of $\AA(3,4)$ which we denote by $\PP^{14}$. We induce an action of $\mathrm{GL}_3$ on $\PP^{14}$ setting $A.[f]=[f(A^{-1}x)]$, and if we denote by $\overline{U}$ the projectivization of $U$, we get
$$\ch_{\mathrm{GL}_3}(U)= \ch_{\mathrm{GL}_3}(\overline{U})/(c_1-h)$$
where $c_i$ is the $i$-th Chern class of the standard representation of $\mathrm{GL}_3$ and $h=c_1(\cO_{\PP^{14}}(1))$ is the hyperplane section of the projective space of ternary forms. The idea is to compute the relations that come from the closed complement of $\overline{U}$ and then set $h=c_1$ to get the Chow ring of $\widetilde{\mathcal M}_3\smallsetminus (\widetilde{\cH}_3\cup \widetilde{\Delta}_1)$ as the quotient of $\ch(\cB\mathrm{GL}_3)$ by these relations.
\begin{remark}
Notice that $\lambda_i:=c_i(\HH)$ where $\HH$ is the Hodge bundle can be identified with the Chern classes of the dual of the standard representation.
\end{remark}
We consider the quotient (stack) of $\PP^{14}\times \PP^{2}$ by the following $\mathrm{GL}_3$-action
$$A.([f],[p]):=([f(A^{-1}x)],[Ap]),$$
and we denote by $Q_4$ the universal quartic over $[\PP^{14}/\mathrm{GL}_3]$, or equivalently the substack of $[\PP^{14}\times \PP^2/\mathrm{GL}_3]$ parametrizing pairs $([f],[p])$ such that $f(p)=0$.
Now, we introduce a slightly more general definition of $A_n$-singularity.
\begin{definition}\label{def:A-sing}
We say that a point $p$ of a curve $C$ is an $A_{\infty}$-singularity if we have an isomorphism
$$ \widehat{\cO}_{C,p}\simeq k[[x,y]]/(y^2).$$
Furthermore, we say that $p$ is an $A$-singularity if it is an $A_n$-singularity for $n$ either a positive integer or $\infty$.
\end{definition}
We describe when an $A_{\infty}$-singularity can occur for plane curves.
\begin{lemma}
A point $p$ of a plane curve $f$ is an $A_{\infty}$-singularity if and only if $p$ lies on a unique irreducible component $g$ of $f$ where $g$ is the square of a smooth plane curve.
\end{lemma}
\begin{proof}
Denote by $A$ the localization of $k[x,y]/(f)$ at the point $p$, which we can suppose corresponds to the maximal ideal $(x,y)$. Because $A$ is an excellent ring and its completion is non-reduced, we get that $A$ is also non-reduced. Let $h$ be a nilpotent element in $A$. Because the square of the nilpotent ideal in the completion is zero, we get that $h^2=0$. Since $k[x,y]$ is a UFD, the claim follows.
\end{proof}
The reason why we introduced $A$-singularities is that they have an explicit description in terms of derivatives of the defining equation for plane curves.
\begin{lemma}\label{lem:A-sing}
A point $p$ on a plane curve defined by $f$ is not an $A$-singularity if and only if both the gradient and the Hessian of $f$ vanish at the point $p$.
\end{lemma}
\begin{proof}
If $p$ is an $A$-singularity of $f$, one can compute the Hessian and gradient (up to a change of coordinates) looking at the complete local ring, so this direction is a trivial computation.
Conversely, if the gradient does not vanish, it is clear that $p$ is a smooth point of $f=0$. Otherwise, if the gradient vanishes but some second derivative is different from zero, we can use the Weierstrass preparation theorem and the completion of the square (as ${\rm char}(\kappa)\neq 2$) to get the result.
\end{proof}
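To illustrate \Cref{lem:A-sing}, consider the ordinary triple point $f=x^3-y^3$ (three distinct concurrent lines): the gradient $(3x^2,-3y^2)$ and the Hessian matrix $\mathrm{diag}(6x,-6y)$ both vanish at the origin, so the origin is not an $A$-singularity. By contrast, for the $A_n$-singularity $f=y^2-x^{n+1}$ the gradient vanishes at the origin but $\partial^2 f/\partial y^2=2\neq 0$, consistently with the lemma.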
Now, we introduce a weaker version of Chow envelopes, which depends on what coefficients we consider for the Chow groups.
\begin{definition}
Let $R$ be a ring. We say that a representable proper morphism $f:X\rightarrow Y$ is an algebraic Chow envelope for $Y$ with coefficients in $R$ if the morphism $f_*:\ch(X)\otimes_{\ZZ}R \rightarrow \ch(Y)\otimes_{\ZZ}R$ is surjective.
\end{definition}
\begin{remark}
Recall the definition of Chow envelope between algebraic stacks as in Definition 3.4 of \cite{DiLorPerVis}. Because they are working with integer coefficients, Proposition 3.5 of \cite{DiLorPerVis} implies that an algebraic Chow envelope is an algebraic Chow envelope for every choice of coefficients.
\end{remark}
From now on, algebraic Chow envelopes with coefficients in $\ZZ[1/6]$ are simply called algebraic Chow envelopes.
Consider now the substack $X\subset Q_4$ parametrizing pairs $(f,p)$ such that $p$ is singular but not an $A$-singularity of $f$. Thanks to the previous lemma, we can describe $X$ as the vanishing locus of the first and second derivatives of $f$ at $p$. By construction, $X$ cannot be an algebraic Chow envelope for the whole complement of $\overline{U}$, because there are quartics $f$ which are squares of smooth conics and they do not appear in $X$. Therefore, we have to add the relations coming from this locus.
We want to study the fibers of the proper morphism $X\rightarrow \PP^{14}$ to prove that it is an algebraic Chow envelope for its image. First, we need to understand how many and what kind of singular points appear on a reduced quartic in $\PP^2$.
\begin{lemma}\label{lem:sing-points}
A reduced quartic in $\PP^2$ has at most $6$ singular points. If it has exactly $5$ singular points, then it is the union of a smooth conic and a reducible reduced one. If it has exactly $6$ singular points, then it is the union of two reducible reduced conics that do not share a component.
\end{lemma}
\begin{proof}
Suppose the quartic $F:=\VV(f)$ is irreducible. Then we can have at most $3$ singular points. In fact, suppose $p_1,\dots,p_4$ are four singular points. Then there exists a conic $Q$ passing through the four points and another smooth point of $F$. Thus $Q \cap F$ would have length at least $9$, which is impossible by Bezout's theorem.
The same reasoning does not apply if $F$ is the union of two smooth conics meeting at four points, which is a possible situation. Nevertheless, if we suppose that $F$ has at least $5$ different singular points, there must exist a conic $Q$ inside $F$; therefore $F=Q\cup Q'$ with $Q'$ another conic, because $F$ is a quartic. It is then clear that the singular points are at most $6$, and one can prove the rest of the statement.
\end{proof}
We denote by $Z_{\{2\}}^{[2]}$ the substack of quartics which are squares of smooth conics and by $\overline{Z}_{\{2\}}^{[2]}$ its closure in $\PP^{14}$. We use the same notation as in \cite{DiLorFulVis}.
Let us denote by $z_2$ the fundamental class of $\overline{Z}_{\{2\}}^{[2]}$ in $\ch_{\mathrm{GL}_3}(\PP^{14})$. We also denote by $\rho$ the morphism $X\rightarrow \PP^{14}$ and by $i_T:T\into \PP^{14}$ the closed complement of $\overline{U}$ in $\PP^{14}$. We are ready for the main proposition.
\begin{proposition}
The ideal generated by $\im{i_{T,*}}$ is equal to the ideal generated by $\im{\rho_{*}}$ and by $z_2$.
\end{proposition}
\begin{proof}
Consider $\PP^5$ the space of conics in $\PP^2$ with the action of $\mathrm{GL}_3$ defined by the formula $A.f(x):=f(A^{-1}x)$ and the equivariant morphism $\beta:\PP^5 \rightarrow \PP^{14}$ defined by the association $f \mapsto f^2$. We are going to prove that
$$ \rho \sqcup \beta : X \sqcup \PP^5 \longrightarrow \PP^{14}$$
is an algebraic Chow envelope for $T$ and then prove that the only generator we need for the image of $\beta_*$ is the fundamental class $\beta_*(1)$, which coincides with $z_2$.
Let $L/\kappa$ be a field extension. First of all, \Cref{lem:sing-points} tells us that a reduced quartic in $\PP^2$ has at most $6$ singular points. Therefore if $f$ is an $L$-point of $\PP^{14}$ which represents a reduced quartic, the fiber $\rho^{-1}(f)\rightarrow \spec L$ is a finite flat morphism of degree at most $6$. As we are inverting $2$ and $3$ in the Chow rings, the only case we need to worry about is when the morphism has degree $5$, i.e. when $f$ is the union of a singular conic and a smooth conic. However, in that situation we have a rational point, namely the section that goes to the intersection of the two lines that form the singular conic. This proves that $\rho$ is an algebraic Chow envelope for the open of reduced curves in $T$.
Consider a $L$-point $f$ of $\PP^{14}$ which represents a non-reduced quartic. Then $f$ is one of the following:
\begin{enumerate}
\item[(1)] $f$ is the product of a double line and a reduced conic that does not contain the line,
\item[(2)] $f$ is the product of a triple line and a different line,
\item[(3)] $f$ is the product of two different double lines,
\item[(4)] $f$ is the fourth power of a line,
\item[(5)] $f$ is a double smooth conic.
\end{enumerate}
For a more detailed discussion, see Section 1 of \cite{DiLorFulVis}.
We are going to prove that in situations (1) to (4), the fiber $\rho^{-1}(f)$ is an algebraic Chow envelope for $\spec L$. In cases (1) and (3), we have that the fiber is finite of degree respectively $2$ and $1$, therefore an algebraic Chow envelope. In cases (2) and (4) the fiber is a line, which is a projective bundle and therefore an algebraic Chow envelope (we do not have to worry about non-reduced structures).
Clearly the fiber in the case (5) is empty, as $f$ has only $A_{\infty}$-singularities as closed points. Therefore we really need the morphism $\beta$ which is an algebraic Chow envelope for case (5).
It remains to prove that the image of $\beta_*$ is generated by $\beta_*(1)$. This follows from the fact that $\beta^{*}(h_{14})=2h_{5}$, where $h_{14}$ (respectively $h_5$) is the hyperplane section of $\PP^{14}$ (respectively $\PP^5$).
\end{proof}
We conclude this section by computing explicitly the relations we need, finally obtaining the Chow ring of $\widetilde{\mathcal M}_3\smallsetminus (\widetilde{\Delta}_1 \cup \widetilde{\cH}_3)$.
The computation for the class $z_2$ can be done using an explicit localization formula. As a matter of fact, the exact computation was already done in Proposition 4.6 of \cite{DiLorFulVis}, although it was not shown as it was not relevant for their computations.
\begin{remark}
After the identification $h=-\lambda_1$, the localization formula gives us
\begin{equation*}
\begin{split}
z_2=-1152\lambda_1^3\lambda_3^2 + 256\lambda_1^2\lambda_2^2\lambda_3 + 5824\lambda_1\lambda_2\lambda_3^2 - 1152\lambda_2^3\lambda_3 - 10976\lambda_3^3.
\end{split}
\end{equation*}
\end{remark}
In order to compute the ideal generated by $\im{\rho_*}$, we introduce a simplified description of $Q_4$ and of $X$. This description is not strictly necessary for this specific computation, but it turns out to be very useful for the computations of the $A_n$-strata in \Cref{chap:3}.
\begin{lemma}
The universal quartic $Q_4$ is naturally isomorphic to the quotient stack $[\PP^{13}/H]$ where $H$ is a parabolic subgroup of $\mathrm{GL}_3$.
\end{lemma}
\begin{proof}
The action of $\mathrm{GL}_3$ is transitive on $\PP^2$, therefore we can consider the subscheme $\PP^{14}\times \{[0:0:1]\}$ in $\PP^{14}\times \PP^2$. If we denote by $H$ the stabilizer of the point $[0:0:1]$ in $\PP^{2}$, we get that
$$[\PP^{14}\times \PP^2/\mathrm{GL}_3]\simeq [\PP^{14}/H].$$
The statement follows from noticing that the equation $f([0:0:1])=0$ determines a hyperplane in $\PP^{14}$.
\end{proof}
\begin{remark}\label{rem:H}
The group $H$ can be described as the subgroup of $\mathrm{GL}_3$ of matrices of the form
$$
\begin{pmatrix}
a & b & 0 \\
c & d & 0 \\
f & g & h \\
\end{pmatrix}.
$$
Notice that the submatrix
$$
\begin{pmatrix}
a & b \\
c & d
\end{pmatrix}
$$
is in $\mathrm{GL}_2$, and in fact we can describe $H$ as $(\mathrm{GL}_2\times \GG_{\rmm})\ltimes \GG_{\rma}^2$.
\end{remark}
This description essentially centers our focus on the point $[0:0:1]$. This implies that the coordinates of the space $\PP^{14}$ can be identified with the coefficients of the Taylor expansion of $f$ at $[0:0:1]$, which is very useful for computations. \Cref{lem:A-sing} implies that $X$ can be described inside $[\PP^{14}/H]$ as a projective subbundle of codimension $6$. Namely, $X$ is the substack described by the equations $$a_{00}=a_{10}=a_{01}=a_{20}=a_{11}=a_{02}=0$$
where $a_{ij}$ is the coefficient of the monomial $x^iy^jz^{4-i-j}$. One can easily verify that this set of equations is invariant under the action of $H$. The fact that $X$ is a projective subbundle implies that we can repeat the exact same strategy adopted in Section 5 of \cite{FulVis} to prove our version of Theorem 5.5 of \cite{FulVis}.
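Concretely, writing $f=\sum_{i+j\leq 4}a_{ij}x^iy^jz^{4-i-j}$ and dehomogenizing at $p=[0:0:1]$ (i.e. setting $z=1$), we get
$$ f(x,y,1)=a_{00}+a_{10}x+a_{01}y+a_{20}x^2+a_{11}xy+a_{02}y^2+\dots $$
so the six equations above express (up to nonzero constants) exactly the vanishing at $p$ of $f$, of its gradient and of its Hessian, as prescribed by \Cref{lem:A-sing}.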
For the sake of notation, we denote by $h_2$ (respectively $h_{14}$) the hyperplane section of $\PP^2$ (respectively $\PP^{14}$).
\begin{proposition}
There exists a unique polynomial $p(h_{14},h_{2})$ with coefficients in $\ch(\cB \mathrm{GL}_3)$ such that
\begin{itemize}
\item $p(h_{14},h_{2})$ represents the fundamental class of $X$ in the $\mathrm{GL}_3$-equivariant Chow ring of $\PP^{14}\times \PP^2$,
\item the degree of $p$ with respect to the variable $h_{14}$ is strictly less than $15$,
\item the degree of $p$ with respect to the variable $h_{2}$ is strictly less than $3$.
\end{itemize}
Furthermore, if $p$ is of the form $$p_2(h_{14})h_2^2+p_1(h_{14})h_2+p_0(h_{14})$$
with $p_0,p_1,p_2 \in \ch_{\mathrm{GL}_3}(\PP^{14})$ and $\deg p_i \leq 14$, we have that $\im{\rho_*}$ is equal to the ideal generated by $p_0$, $p_1$ and $p_2$ in $\ch_{\mathrm{GL}_3}(\PP^{14})$.
\end{proposition}
\begin{proof}
For a detailed proof of the proposition see Section 5 of \cite{FulVis}.
\end{proof}
A computation now yields the description of the Chow ring of $\widetilde{\mathcal M}_3\smallsetminus (\widetilde{\Delta}_1 \cup \widetilde{\cH}_3)$.
\begin{corollary}\label{cor:chow-quart}
We have an isomorphism of rings
$$ \ch\Big(\widetilde{\mathcal M}_3\smallsetminus (\widetilde{\Delta}_1 \cup \widetilde{\cH}_3)\Big) \simeq \ZZ[1/6,\lambda_1,\lambda_2,\lambda_3]/(z_2,p_0,p_1,p_2)$$
where the generators of the ideal can be described as follows:
\begin{itemize}
\item $p_2=12\lambda_1^4 - 44\lambda_1^2\lambda_2 + 92\lambda_1\lambda_3$,
\item $p_1=-14\lambda_1^3\lambda_2 + 2\lambda_1^2\lambda_3 + 48\lambda_1\lambda_2^2 - 96\lambda_2\lambda_3$,
\item $p_0=15\lambda_1^3\lambda_3 - 52\lambda_1\lambda_2\lambda_3 + 112\lambda_3^2$,
\item $z_2=-1152\lambda_1^3\lambda_3^2 + 256\lambda_1^2\lambda_2^2\lambda_3 + 5824\lambda_1\lambda_2\lambda_3^2 - 1152\lambda_2^3\lambda_3 - 10976\lambda_3^3.$
\end{itemize}
\end{corollary}
\begin{remark}
We remark that $z_2$ is not in the ideal generated by the other relations, meaning that it was indeed necessary to introduce the additional stratum of non-reduced curves for the computations.
\end{remark}
\section{Description of $\widetilde{\Delta}_1 \smallsetminus \widetilde{\Delta}_{1,1}$}\label{sec:detilde-1}
To describe $\widetilde{\Delta}_1\smallsetminus \widetilde{\Delta}_{1,1}$, we recall the gluing morphism in the case of stable curves:
$$ \overline{\cM}_{1,1} \times \overline{\cM}_{2,1} \longrightarrow \overline{\Delta}_1,$$
which is an isomorphism away from $\overline{\Delta}_{1,1}$. The same is true in the $A_r$-case.
Thus, we need to describe the preimage of $\widetilde{\Delta}_{1,1}$ through the morphism. Denote by $\widetilde{\Theta}_1\subset \widetilde{\mathcal M}_{2,1}$ the pullback of $\widetilde{\Delta}_1 \subset \widetilde{\mathcal M}_2$ through the morphism $\widetilde{\mathcal M}_{2,1} \rightarrow \widetilde{\mathcal M}_2$ which forgets the section. Thus one can easily prove that
$$ \widetilde{\mathcal M}_{1,1}\times (\widetilde{\mathcal M}_{2,1}\smallsetminus \widetilde{\Theta}_1) \simeq \widetilde{\Delta}_1 \smallsetminus \widetilde{\Delta}_{1,1}.$$
It remains to describe the stack $\widetilde{\mathcal M}_{2,1}\smallsetminus \widetilde{\Theta}_1$. We start by describing $\widetilde{\mathcal C}_2 \smallsetminus \widetilde{\Theta}_1$, the universal curve over $\widetilde{\mathcal M}_2\smallsetminus \widetilde{\Delta}_1$.
\begin{proposition}
We have the following isomorphism
$$ \widetilde{\mathcal C}_2 \smallsetminus \widetilde{\Theta}_1 \simeq [\widetilde{\AA}(6)\smallsetminus 0/B_2],$$
where $B_2$ is the Borel subgroup of lower triangular matrices inside $\mathrm{GL}_2$ and $\widetilde{\AA}(6)$ is a $7$-dimensional $B_2$-representation.
\end{proposition}
\begin{proof}
This is a straightforward generalization of Proposition 3.1 of \cite{DiLorPerVis}. We remove the $0$-section because we cannot allow non-reduced curves to appear (condition (b1) in \Cref{def:hyp-A_r}).
\end{proof}
The representation $\widetilde{\AA}(6)$ can be described as follows: consider the $\mathrm{GL}_2$-representation $\AA(6)$ of binary forms with coordinates $(x_0,x_1)$ of degree $6$ with an action described by the formula
$$ A.f(x_0,x_1):=\det(A)^{2}f(A^{-1}x),$$
and consider also the $1$-dimensional $B_2$-representation $\AA^1$ with an action described by the formula
$$ A.s:=\det(A)a_{22}^{-3}s$$
where
$$
A:=\begin{pmatrix}
a_{11} & a_{12} \\
0 & a_{22}
\end{pmatrix}.
$$
We define $\widetilde{\AA}(6)$ to be the invariant subscheme of $\AA(6)\times \AA^1$ defined by the pairs $(f,s)$ that satisfy the equation $f(0,1)=s^2$. If we use $s$ instead of the coefficient of $x_1^6$ in $f$ as a coordinate, it is clear that $\widetilde{\AA}(6)$ is isomorphic to a $7$-dimensional $B_2$-representation.
The previous proposition combined with \Cref{prop:contrac} gives us the following description of the Chow ring of $\widetilde{\mathcal M}_{2,1}\smallsetminus \widetilde{\Theta}_1$.
\begin{corollary}\label{cor:mtilde_21}
$\widetilde{\mathcal M}_{2,1}\smallsetminus \widetilde{\Theta}_1$ is the quotient by $B_2$ of the complement of the subrepresentation of $\widetilde{\AA}(6)$ described by the equations $s=a_5=a_4=a_3=0$ where $a_i$ is the coefficient of $x_0^{6-i}x_1^i$.
\end{corollary}
\begin{proof}
\Cref{prop:contrac} gives that $\widetilde{\mathcal M}_{2,1}\smallsetminus \widetilde{\Theta}_1$ is isomorphic to the open substack of $\widetilde{\mathcal C}_2 \smallsetminus \widetilde{\Theta}_1$ parametrizing pairs $(C/S,p)$ such that $C$ is an $A_5$-stable genus $2$ curve and $p$ is a section whose geometric fibers over $S$ are $A_n$-singularities for $n\leq 2$. In particular, we need to describe the closed invariant subscheme $D_3$ of $\widetilde{\AA}(6)$ that parametrizes pairs $(C,p)$ such that $p$ is an $A_n$-singularity with $n\geq 3$.
To do so, we make explicit the isomorphism
$$ \widetilde{\mathcal C}_2 \smallsetminus \widetilde{\Theta}_1 \simeq [\widetilde{\AA}(6)\smallsetminus 0/B_2]$$
as described in Proposition 3.1 of \cite{DiLorPerVis}. Given a pair $(f,s) \in \widetilde{\AA}(6)\smallsetminus 0$, we construct a curve $C$ as $\spec_{\PP^1}(\cA)$ where $\cA$ is the finite locally free algebra of rank two over $\PP^1$ defined as $\cO_{\PP^1}\oplus \cO_{\PP^1}(-3)$. The multiplication is induced by the section $f:\cO_{\PP^1}(-6)\into \cO_{\PP^1}$. The section $p$ of $C$ can be defined by viewing $s$ as a section of $\H^0(\cO_{\PP^1}(-3)\vert_{\infty})$, where $\infty \in \PP^1$. This implies that the point $p$ is an $A_n$-singularity for $n\geq 3$ if and only if $\infty$ is a root of multiplicity at least $4$ for the section $f$. The statement follows.
\end{proof}
\Cref{cor:mtilde_21} gives us the description of the Chow ring of $\widetilde{\Delta}_{1}\smallsetminus \widetilde{\Delta}_{1,1}$. We denote by $t$ the generator of the Chow ring of $\widetilde{\mathcal M}_{1,1}$ defined as $t:=c_1(p^*\cO(p))$ for every object $(C,p)\in \widetilde{\mathcal M}_{1,1}$. This is the $\psi$-class of $\widetilde{\mathcal M}_{1,1}$.
\begin{proposition}\label{prop:descr-detilde-1}
We have the following isomorphism
$$ \ch(\widetilde{\Delta}_1 \smallsetminus \widetilde{\Delta}_{1,1}) \simeq \ZZ[1/6,t_0,t_1,t]/(f)$$
where $f \in \ZZ[1/6,t_0,t_1]$ is the polynomial:
$$f=2t_1(t_0+t_1)(t_0-2t_1)(t_0-3t_1).$$
\end{proposition}
\begin{proof}
Because $\widetilde{\mathcal M}_{1,1}$ is a vector bundle over $\cB \GG_{\rmm}$, the morphism
$$ \ch(\widetilde{\mathcal M}_{2,1}\smallsetminus \widetilde{\Theta}_1)\otimes \ch(\cB\GG_{\rmm}) \rightarrow \ch(\widetilde{\Delta}_1 \smallsetminus \widetilde{\Delta}_{1,1})$$
is an isomorphism. Therefore it is enough to describe the Chow ring of $\widetilde{\mathcal M}_{2,1}\smallsetminus \widetilde{\Theta}_1$. The previous corollary gives us that if $T_2$ is the maximal torus of $B_2$ and $t_0,t_1$ are the two generators for the character group of $T_2$, we have that
$$\ch(\widetilde{\mathcal M}_{2,1}\smallsetminus \widetilde{\Theta}_1)\simeq \ZZ[1/6,t_0,t_1]/(f)$$
where $f$ is the fundamental class associated with the vanishing of the coordinates $s,a_5,a_4,a_3$ of $\widetilde{\AA}(6)$. Again, we are using Proposition 2.3 of \cite{MolVis}. The computation in the $\GG_{\rmm}^2$-equivariant setting is straightforward and gives the result.
\end{proof}
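The polynomial $f$ can be double-checked directly from the weights of the $B_2$-action. Restricting to the maximal torus, the formulas above give $a_i$ the weight $2(t_0+t_1)-(6-i)t_0-it_1$ and $s$ the weight $(t_0+t_1)-3t_1=t_0-2t_1$, so the fundamental class of $\{s=a_5=a_4=a_3=0\}$ is the product of the four weights. A Python/sympy sketch (illustrative only):

```python
from sympy import symbols, expand

t0, t1 = symbols('t0 t1')

# Torus weights read off from the actions on A(6) x A^1:
# a_i (coefficient of x0^(6-i) x1^i) has weight 2(t0+t1) - (6-i)*t0 - i*t1,
# and s has weight (t0+t1) - 3*t1 = t0 - 2*t1.
w = lambda i: 2*(t0 + t1) - (6 - i)*t0 - i*t1
weights = [t0 - 2*t1, w(5), w(4), w(3)]   # weights of s, a5, a4, a3

# The class of the linear subspace {s = a5 = a4 = a3 = 0} is the product
# of the weights of the coordinates being set to zero.
f_class = expand(weights[0]*weights[1]*weights[2]*weights[3])
f_paper = expand(2*t1*(t0 + t1)*(t0 - 2*t1)*(t0 - 3*t1))
print(f_class == f_paper)  # True
```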
Once again, we use the results in Appendix A of \cite{DiLorVis} to describe the first Chern class of the normal bundle of $\widetilde{\Delta}_1 \smallsetminus \widetilde{\Delta}_{1,1}$ in $\widetilde{\mathcal M}_3 \smallsetminus \widetilde{\Delta}_{1,1}$.
\begin{proposition}\label{prop:relation-detilde-1}
The closed immersion $\widetilde{\Delta}_1\smallsetminus \widetilde{\Delta}_{1,1} \into \widetilde{\mathcal M}_3 \smallsetminus \widetilde{\Delta}_{1,1}$ is regular and the first Chern class of the normal bundle is equal to $t+t_1$. Moreover, we have the following equalities in the Chow ring of $\widetilde{\Delta}_{1}\smallsetminus \widetilde{\Delta}_{1,1}$:
\begin{itemize}
\item[(1)] $\lambda_1=-t-t_0-t_1$,
\item[(2)] $\lambda_2=t_0t_1+t(t_0+t_1)$,
\item[(3)] $\lambda_3=-t_0t_1t$,
\item[(4)] $[H]\vert_{\widetilde{\Delta}_{1}\smallsetminus \widetilde{\Delta}_{1,1}}=t_0-2t_1$.
\end{itemize}
\end{proposition}
\begin{proof}
The closed immersion is regular because the two stacks are smooth. Because of Appendix A of \cite{DiLorVis}, we know that the normal bundle is the determinant of a vector bundle $N$ of rank $2$ tensored by a $2$-torsion line bundle whose first Chern class vanishes when we invert $2$ in the Chow ring. Moreover, we can describe $N$ in the following way. Suppose $(C/S,p)$ is an object of $(\widetilde{\Delta}_{1}\smallsetminus \widetilde{\Delta}_{1,1})(S)$, where $p$ is the section whose geometric fibers over $S$ are separating nodes. Then $N$ is the normal bundle of $p$ inside $C$, which has rank $2$. If we decompose $C$ as a union of an elliptic curve $(E,e)\in \widetilde{\mathcal M}_{1,1}$ and a $1$-pointed genus $2$ curve $(C_0,p_0) \in \widetilde{\mathcal M}_{2,1}$, it is clear that $N=N_{e/E}\oplus N_{p_0/C_0}$. Therefore it is enough to compute the two line bundles separately.
The first Chern class of the line bundle $N_{e/E}$ is exactly the $\psi$-class of $\widetilde{\mathcal M}_{1,1}$, which is $t$ by definition. Similarly, the first Chern class of $N_{p_0/C_0}$ is the $\psi$-class of $\widetilde{\mathcal M}_{2,1}$. Using the description of $\widetilde{\mathcal C}_2$ as a quotient stack (see the proof of \Cref{cor:mtilde_21}), one can prove that
\begin{itemize}
\item $N_{p_0/C_0}=\cO_{\PP^1}(1)^{\vee}\vert_{p_{\infty}}=E_1$, where $E_1$ is the character of $B_2$ whose Chern class is $t_1$;
\item $\pi_*\omega_{C_0}=E_0^{\vee}\oplus E_1^{\vee}$, where $E_0$ is the character of $B_2$ whose Chern class is $t_0$.
\end{itemize}
We now prove the description of the fundamental class of the hyperelliptic locus. We have that $\widetilde{\cH}_3$ intersects transversely $\widetilde{\Delta}_1\smallsetminus \widetilde{\Delta}_{1,1}$ in the locus where the section of $\widetilde{\mathcal M}_{2,1}$ is a Weierstrass point for the (unique) involution of a genus $2$ curve. The description of $\widetilde{\mathcal C}_2 \smallsetminus \widetilde{\Theta}_1$ as a quotient stack (see the proof of \Cref{cor:mtilde_21}) implies that $[H]\vert_{\widetilde{\Delta}_1\smallsetminus \widetilde{\Delta}_{1,1}}$ is equal to the class of the vanishing locus $s=0$ in $\widetilde{\AA}(6)$, which is easily computed as we know explicitly the action of $B_2$. It remains to describe the $\lambda$-classes; the descriptions follow from the next lemma.
\end{proof}
\begin{lemma}\label{lem:hodge-sep}
Let $C/S$ be an $A_r$-stable curve over a base scheme $S$ and $p$ a separating node (i.e. a section whose geometric fibers over $S$ are separating nodes). Let $b:\widetilde{C}\arr C$ be the blowup of $C$ at $p$ and denote by $\widetilde{\pi}$ the composition $\pi \circ b$. Then we have $$\pi_*\omega_{C/S}\simeq \widetilde{\pi}_*\omega_{\widetilde{C}/S}.$$
\end{lemma}
\begin{proof}
Denote by $D$ the dual of the conductor ideal, see \Cref{lem:blowup}. We know by the Noether formula that $b^*\omega_{C/S}=\omega_{\widetilde{C}/S}(D)$, therefore if we tensor the exact sequence
$$0 \rightarrow \cO_C \rightarrow b_*\cO_{\widetilde{C}} \rightarrow Q \rightarrow 0$$
by $\omega_{C/S}$, we get the following injective morphism
$$ \omega_{C/S} \into b_*\omega_{\widetilde{C}/S}(D). $$
As usual, the smoothness of $\widetilde{\mathcal M}_g^r$ implies that we can apply Grauert's theorem to prove that the line bundles $\omega_{\widetilde{C}/S}$ and $\omega_{\widetilde{C}/S}(D)$ satisfy base change over $S$. Consider now the morphism on global sections
$$ \pi_*\omega_{C/S} \into\widetilde{\pi}_*\omega_{\widetilde{C}/S}(D),$$
and, since both sides satisfy base change, its surjectivity can be checked on the geometric fibers over $S$. The statement for algebraically closed fields has already been proved in \Cref{lem:sep-decom-hodge}.
Finally, given the morphism
$$ \omega_{\widetilde{C}} \into \omega_{\widetilde{C}}(D) $$
we consider the global sections
$$ \widetilde{\pi}_*\omega_{\widetilde{C}/S} \into \widetilde{\pi}_*\omega_{\widetilde{C}/S}(D)$$
and again the surjectivity follows by restricting to the geometric fibers over $S$.
\end{proof}
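By the lemma and the splitting $\pi_*\omega=E_0^{\vee}\oplus E_1^{\vee}$ above, the Hodge bundle restricted to $\widetilde{\Delta}_1\smallsetminus\widetilde{\Delta}_{1,1}$ has Chern roots $-t,-t_0,-t_1$, which recovers the $\lambda$-formulas of \Cref{prop:relation-detilde-1} as signed elementary symmetric polynomials. A short sympy check (an illustration, not part of the proof):

```python
from sympy import symbols, expand

t, t0, t1 = symbols('t t0 t1')

# Chern roots of the rank-3 Hodge bundle restricted to Delta_1 \ Delta_{1,1}:
# three line bundles with first Chern classes -t, -t0, -t1, so that
# lambda_i = e_i(-t, -t0, -t1).
roots = [-t, -t0, -t1]
e1 = sum(roots)
e2 = roots[0]*roots[1] + roots[0]*roots[2] + roots[1]*roots[2]
e3 = roots[0]*roots[1]*roots[2]

assert expand(e1 - (-t - t0 - t1)) == 0          # lambda_1
assert expand(e2 - (t0*t1 + t*(t0 + t1))) == 0   # lambda_2
assert expand(e3 - (-t0*t1*t)) == 0              # lambda_3
```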
\section{Description of $\widetilde{\Delta}_{1,1} \smallsetminus \widetilde{\Delta}_{1,1,1}$}\label{sec:detilde-1-1}
Let us consider the following morphism
$$ \widetilde{\mathcal M}_{1,2} \times \widetilde{\mathcal M}_{1,1} \times \widetilde{\mathcal M}_{1,1} \longrightarrow \widetilde{\Delta}_{1,1}$$
defined by the association $$(C,p_1,p_2),(E_1,e_1),(E_2,e_2) \mapsto E_1 \cup_{e_1\equiv p_1} C \cup_{p_2 \equiv e_2} E_2$$
where the cup symbol represents the gluing of the two curves along the two specified points. The operation is obviously noncommutative.
The preimage of $\widetilde{\Delta}_{1,1,1}$ through the morphism is the product $\widetilde{\mathcal M}_{1,1}\times \widetilde{\mathcal M}_{1,1}\times \widetilde{\mathcal M}_{1,1}$ where $\widetilde{\mathcal M}_{1,1}\subset \widetilde{\mathcal M}_{1,2}$ is the universal section of the natural functor $\widetilde{\mathcal M}_{1,2}\rightarrow \widetilde{\mathcal M}_{1,1}$.
\begin{lemma}\label{lem:detilde-1-1}
The morphism
$$ \pi_2:(\widetilde{\mathcal M}_{1,2}\smallsetminus \widetilde{\mathcal M}_{1,1}) \times \widetilde{\mathcal M}_{1,1} \times \widetilde{\mathcal M}_{1,1} \longrightarrow \widetilde{\Delta}_{1,1}\smallsetminus \widetilde{\Delta}_{1,1,1}$$
described above is a $C_2$-torsor.
\end{lemma}
\begin{proof}
First of all, let us denote by $\cX\rightarrow \cY$ the morphism of algebraic stacks in the statement. It is easy to verify that it is representable.
Consider the fiber product $\cX \times_{\cY} \cX$. We can construct a morphism $\eta: \cX \times C_2 \rightarrow \cX \times_{\cY} \cX$ over $\cX$ using the following $C_2$-action on the objects of $\cX$:
$$ \Big( (C,p_1,p_2), (E_1,e_1), (E_2,e_2) \Big) \mapsto \Big((C,p_2,p_1), (E_2,e_2), (E_1,e_1)\Big)$$
whereas on morphisms it is defined in the natural way. We use the definition of action over a stack as in \cite{Rom}.
It is easy to see that $\eta$ is an isomorphism when restricted to the geometric fibers over $\cX$. This implies that $\cX \times_{\cY} \cX$ is quasi-finite and representable over $\cX$ and the length of the geometric fibers is constant. Because $\cX$ is smooth, we get that $\cX \times_{\cY} \cX$ is flat over $\cX$ and therefore that $\eta$ is an isomorphism.
\end{proof}
Denote by $\widetilde{\mathcal C}_{1,1}$ the universal curve over $\widetilde{\mathcal M}_{1,1}$. It naturally comes with a universal section $\widetilde{\mathcal M}_{1,1}\into \widetilde{\mathcal C}_{1,1}$.
\begin{lemma}\label{lem:mtilde-12}
We have the isomorphism
$$ \widetilde{\mathcal M}_{1,2}\smallsetminus\widetilde{\mathcal M}_{1,1} \simeq \widetilde{\mathcal C}_{1,1}\smallsetminus \widetilde{\mathcal M}_{1,1}$$
and therefore we have the following isomorphism of rings
$$ \ch( \widetilde{\mathcal M}_{1,2}\smallsetminus\widetilde{\mathcal M}_{1,1})\simeq \ZZ[1/6,t]. $$
\end{lemma}
\begin{proof}
The isomorphism is a corollary of \Cref{prop:contrac}. The computation of the Chow ring of $\widetilde{\mathcal C}_{1,1}\smallsetminus \widetilde{\mathcal M}_{1,1}$ is Lemma 3.2 of \cite{DiLorPerVis}.
\end{proof}
\begin{remark}
It is important to notice that $\widetilde{\mathcal M}_{1,2}$ has at most $A_3$-singularities while $\widetilde{\mathcal C}_{1,1}$ has at most cusps, because it is the universal curve of $\widetilde{\mathcal M}_{1,1}$.
\end{remark}
\begin{corollary}
The algebraic stack $\widetilde{\Delta}_{1,1}\smallsetminus \widetilde{\Delta}_{1,1,1}$ is smooth.
\end{corollary}
\Cref{lem:chow-tor} implies that
$$\ch(\widetilde{\Delta}_{1,1}\smallsetminus \widetilde{\Delta}_{1,1,1})\simeq \ZZ[1/6,t,t_1,t_2]^{\rm inv},$$
where the action of $C_2$ is defined on objects by the following association
$$ \Big( (C,p_1,p_2), (E_1,e_1), (E_2,e_2) \Big) \mapsto \Big((C,p_2,p_1), (E_2,e_2), (E_1,e_1)\Big).$$
A computation shows that the involution acts on the Chow ring of the product in the following way
$$ (t,t_1,t_2) \mapsto (t,t_2,t_1)$$
and therefore we have the description we need.
\begin{proposition}
In the situation above, we have the following isomorphism:
$$ \ch(\widetilde{\Delta}_{1,1}\smallsetminus \widetilde{\Delta}_{1,1,1})\simeq \ZZ[1/6,t,c_1,c_2]$$
where $c_1:=t_1+t_2$ and $c_2:=t_1t_2$.
\end{proposition}
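The generators in the proposition are the elementary symmetric functions of $t_1,t_2$; that every swap-invariant class is a polynomial in $t$, $c_1$, $c_2$ is the fundamental theorem of symmetric polynomials. A minimal sympy sketch of the invariance (illustrative only):

```python
from sympy import symbols, expand

t, t1, t2 = symbols('t t1 t2')
c1, c2 = t1 + t2, t1*t2
swap = {t1: t2, t2: t1}  # the involution (t, t1, t2) -> (t, t2, t1)

# c1 and c2 (and t) are fixed by the involution ...
assert expand(c1.subs(swap, simultaneous=True) - c1) == 0
assert expand(c2.subs(swap, simultaneous=True) - c2) == 0
assert t.subs(swap, simultaneous=True) == t

# ... and a sample invariant, t1^2 + t2^2, is a polynomial in c1, c2:
assert expand((t1**2 + t2**2) - (c1**2 - 2*c2)) == 0
```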
It remains to describe the normal bundle of the closed immersion $\widetilde{\Delta}_{1,1}\smallsetminus \widetilde{\Delta}_{1,1,1} \into \widetilde{\mathcal M}_{3}\smallsetminus\widetilde{\Delta}_{1,1,1}$ and the other classes. As usual, we denote by $\delta_1$ the fundamental class of $\widetilde{\Delta}_1$ in $\widetilde{\mathcal M}_3$.
\begin{proposition}\label{prop:relat-detilde-1-1}
We have the following equalities in the Chow ring of $\widetilde{\Delta}_{1,1}\smallsetminus \widetilde{\Delta}_{1,1,1}$:
\begin{itemize}
\item[(1)] $\lambda_1=-t-c_1$,
\item[(2)] $\lambda_2=c_2+tc_1$,
\item[(3)] $\lambda_3=-tc_2$,
\item[(4)] $[H]\vert_{\widetilde{\Delta}_{1,1}\smallsetminus \widetilde{\Delta}_{1,1,1}} = -3t$,
\item[(5)] $\delta_1\vert_{\widetilde{\Delta}_{1,1}\smallsetminus \widetilde{\Delta}_{1,1,1}} = 2t+c_1$.
\end{itemize}
Furthermore, the second Chern class of the normal bundle of the closed immersion $\widetilde{\Delta}_{1,1}\smallsetminus \widetilde{\Delta}_{1,1,1} \into \widetilde{\mathcal M}_{3}\smallsetminus\widetilde{\Delta}_{1,1,1}$ is equal to $c_2+tc_1+t^2$.
\end{proposition}
\begin{proof}
The restriction of the $\lambda$-classes can be computed using \Cref{lem:hodge-sep}. The proof of formula (5) is exactly the same as the computation of the normal bundle of $\widetilde{\Delta}_{1}\smallsetminus \widetilde{\Delta}_{1,1}$ in $\widetilde{\mathcal M}_3 \smallsetminus \widetilde{\Delta}_{1,1}$. The only thing to remark is that the two $\psi$-classes of $\widetilde{\mathcal M}_{1,2}\smallsetminus \widetilde{\mathcal M}_{1,1}$ coincide with the generator $t$ of $\ch(\widetilde{\mathcal M}_{1,2}\smallsetminus \widetilde{\mathcal M}_{1,1})$.
As far as the fundamental class of the hyperelliptic locus is concerned, it is clear that it coincides with the fundamental class of the locus in $\widetilde{\mathcal M}_{1,2}\smallsetminus \widetilde{\mathcal M}_{1,1}$ parametrizing $2$-pointed stable curves of genus $1$ such that the two sections are both fixed points for an involution. This computation can be done using the description of $\widetilde{\mathcal M}_{1,2}\smallsetminus \widetilde{\mathcal M}_{1,1}$ as $\widetilde{\mathcal C}_{1,1}\smallsetminus \widetilde{\mathcal M}_{1,1}$, in particular as in Lemma 3.2 of \cite{DiLorPerVis}. In fact, they proved that $\widetilde{\mathcal C}_{1,1}\smallsetminus \widetilde{\mathcal M}_{1,1}$ is an invariant subscheme $W$ of a $\GG_{\rmm}$-representation $V$ of dimension $4$, where the action can be described as $$t.(x,y,\alpha,\beta)=(t^{-2}x,t^{-3}y,t^{-4}\alpha,t^{-6}\beta)$$
for every $t \in \GG_{\rmm}$ and $(x,y,\alpha,\beta) \in V$. Specifically, $W$ is the hypersurface in $V$ defined by the equation $y^2=x^3+\alpha x+\beta$, which is exactly the dehomogenization of the Weierstrass form of an elliptic curve with a flex at infinity. A straightforward computation shows that the hyperelliptic locus is defined by the equation $y=0$.
Finally, the normal bundle of the closed immersion can be described using the theory developed in Appendix A of \cite{DiLorVis} as the sum
$$(N_{p_1|C}\otimes N_{e_1|E_1})\oplus (N_{p_2|C}\otimes N_{e_2|E_2})$$
for every element $[(C,p_1,p_2), (E_1,e_1), (E_2,e_2)]$ in $\widetilde{\Delta}_{1,1}\smallsetminus \widetilde{\Delta}_{1,1,1}$.
\end{proof}
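Two of the arithmetic steps in the proof can be replayed symbolically (a Python/sympy sketch that illustrates, but does not replace, the geometric arguments): the $\GG_{\rmm}$-invariance of the Weierstrass hypersurface $W$, and the Chern classes of the normal bundle with roots $t+t_1$ and $t+t_2$.

```python
from sympy import symbols, expand

t, t1, t2, u, x, y, a, b = symbols('t t1 t2 u x y alpha beta')

# (i) The hypersurface W: y^2 = x^3 + alpha*x + beta is invariant under the
# G_m-action t.(x, y, alpha, beta) = (t^-2 x, t^-3 y, t^-4 alpha, t^-6 beta):
Fw = y**2 - x**3 - a*x - b
scaled = Fw.subs({x: u**(-2)*x, y: u**(-3)*y, a: u**(-4)*a, b: u**(-6)*b},
                 simultaneous=True)
assert expand(u**6*scaled - Fw) == 0  # Fw scales by u^-6, so {Fw = 0} is invariant

# (ii) The normal bundle (N_{p1|C} x N_{e1|E1}) + (N_{p2|C} x N_{e2|E2})
# has Chern roots t + t1 and t + t2 (both psi-classes of M_{1,2} equal t):
c1, c2 = t1 + t2, t1*t2
assert expand((t + t1) + (t + t2) - (2*t + c1)) == 0        # matches restriction (5)
assert expand((t + t1)*(t + t2) - (c2 + t*c1 + t**2)) == 0  # second Chern class
```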
\section{Description of $\widetilde{\Delta}_{1,1,1}$}\label{sec:detilde-1-1-1}
Finally, to describe the last stratum, we proceed as in the previous section and define
$$c_6:\widetilde{\mathcal M}_{1,1} \times \widetilde{\mathcal M}_{1,1}\times \widetilde{\mathcal M}_{1,1} \rightarrow \widetilde{\Delta}_{1,1,1}$$
which can be described as taking three (possibly singular) elliptic curves $(E_1,e_1)$, $(E_2,e_2)$, $(E_3,e_3)$ and attaching them to a projective line with three distinct points $(\PP^1,0,1,\infty)$ via the identifications $e_1\equiv 0$, $e_2\equiv 1$, $e_3 \equiv \infty$. We denote by $S_3$ the group of permutations of a set with three elements.
\begin{lemma}\label{lem:descr-delta-1-1-1}
The morphism $c_6$ is a $S_3$-torsor.
\end{lemma}
\begin{proof}
The proof of \Cref{lem:detilde-1-1} can be adapted perfectly to this statement.
\end{proof}
The previous lemma implies that we have an action of $S_3$ on $\ch(\widetilde{\mathcal M}_{1,1}^{\times 3})$. Therefore it is clear that
$$ \ch(\widetilde{\Delta}_{1,1,1})=\ZZ[1/6,c_1,c_2,c_3]$$
where $c_i$ is the $i$-th symmetric polynomial in the variables $t_1,t_2,t_3$, which are the generators of the Chow rings of the factors.
\begin{proposition}
We have the following equalities in the Chow ring of $\widetilde{\Delta}_{1,1,1}$:
\begin{itemize}
\item[(1)] $\lambda_1=-c_1$,
\item[(2)] $\lambda_2=c_2$,
\item[(3)] $\lambda_3=-c_3$,
\item[(4)] $[H]\vert_{\widetilde{\Delta}_{1,1,1}}=0$,
\item[(5)] $\delta_1\vert_{\widetilde{\Delta}_{1,1,1}}=c_1$,
\item[(6)] $\delta_{1,1}\vert_{\widetilde{\Delta}_{1,1,1}}=c_2$.
\end{itemize}
Furthermore, the third Chern class of the normal bundle of the closed immersion $\widetilde{\Delta}_{1,1,1}\into \widetilde{\mathcal M}_3$ is equal to $c_3$.
\end{proposition}
\begin{proof}
We can use \Cref{lem:hodge-sep} to get the description of the $\lambda$-classes. To compute $\delta_1$ and $\delta_{1,1}$ and the third Chern class of the normal bundle one can use again the results of Appendix A of \cite{DiLorVis} and adapt the proof of \Cref{prop:relat-detilde-1-1}. Notice that in this case $$N_{\widetilde{\Delta}_{1,1,1}|\widetilde{\mathcal M}_3}=N_{e_1|E_1}\oplus N_{e_2|E_2}\oplus N_{e_3|E_3}.$$ Finally, we know that there are no hyperelliptic curves of genus $3$ with three separating nodes. In fact, any involution restricts to the identity on the projective line, because it fixes three points; but then its fixed locus is not finite, hence the curve is not hyperelliptic.
\end{proof}
\section{The gluing procedure and the Chow ring of $\widetilde{\mathcal M}_3$}\label{sec:chow-m3tilde}
In the last section of this chapter, we explain how to calculate explicitly the Chow ring of $\widetilde{\mathcal M}_3$. It is not clear a priori how to describe the fiber product that appears in \Cref{lem:gluing}.
Let $\cU$, $\cZ$ and $\cX$ be as in \Cref{lem:gluing} and denote by $i:\cZ \into \cX$ the closed immersion and by $j:\cU \into \cX$ the open immersion. Let us set some notation.
\begin{itemize}
\item $ \ch(\cU)$ is generated by the elements $x'_1,\dots,x'_r$ and let $x_1,\dots,x_r$ be some liftings of $x_1',\dots,x_r'$ in $\ch(\cX)$; we denote by $\eta$ the morphism
$$ \ZZ[1/6,X_1,\dots,X_r,Z] \longrightarrow \ch(\cX)$$
where $X_h$ maps to $x_h$ for $h=1,\dots,r$ and $Z$ maps to $[\cZ]$, the fundamental class of $\cZ$; we denote by $\eta_{\cU}$ the composite
$$ \ZZ[1/6,X_1,\dots,X_r] \into \ZZ[1/6,X_1,\dots,X_r,Z] \longrightarrow \ch(\cX)\rightarrow \ch(\cU).$$
Furthermore, we denote by $p_1'(X), \dots, p_n'(X)$ a choice of generators for $\ker(\eta_{\cU})$, where $X:=(X_1,\dots,X_r)$.
\item $ \ch(\cZ)$ is generated by elements $y_1,\dots,y_s \in \ch(\cZ)$; we denote by $a$ the morphism
$$ \ZZ[1/6,Y_1,\dots,Y_s] \longrightarrow \ch(\cZ)$$
where $Y_h$ maps to $y_h$ for $h=1,\dots,s$. Furthermore, we denote by $q_1'(Y), \dots, q_m'(Y)$ a choice of generators for $\ker(a)$, where $Y:=(Y_1,\dots,Y_s)$.
\end{itemize}
Because $a$ is surjective, there exists a morphism
$$ \eta_{\cZ}: \ZZ[1/6,X_1,\dots,X_r,Z] \longrightarrow \ZZ[1/6,Y_1,\dots,Y_s]$$
which is a lifting of the morphism $i^*$, i.e. it makes the following diagram
$$
\begin{tikzcd}
{\ZZ[1/6,X_1,\dots,X_r,Z]} \arrow[d, "\eta"] \arrow[r, "\eta_{\cZ}", dotted] & {\ZZ[1/6,Y_1,\dots,Y_s]} \arrow[d, "a"] \\
\ch(\cX) \arrow[r, "i^*"] & \ch(\cZ)
\end{tikzcd}
$$
commute.
The cartesianity of the diagram in \Cref{lem:gluing} implies the following lemma.
\begin{lemma}\label{lem:gluing-sur}
In the situation above, suppose that the morphism $\eta_{\cZ}$ is surjective. Then $\eta$ is surjective.
\end{lemma}
\begin{proof}
This follows from the cartesianity of the diagram in \Cref{lem:gluing}: $j^*\circ \eta$ is always surjective, and the hypothesis implies that $i^*\circ \eta$ is surjective as well.
\end{proof}
In the hypothesis of the lemma, we can also describe explicitly the relations, i.e. the kernel of $\eta$. First of all, denote by $q_h(X,Z)$ some liftings of $q_h'(Y)$ through the morphism $\eta_{\cZ}$ for $h=1,\dots,m$. A straightforward computation shows that we have $Zq_h(X,Z) \in \ker \eta$ for $h=1,\dots,m$.
We have found our first relations. We refer to relations of this type as liftings of the closed stratum's relations.
Another important set of relations comes from the kernel of $\eta_{\cZ}$. In fact, if $v \in \ker\eta_{\cZ}$, then a simple computation shows that $\eta(Zv)=0$. Therefore if $v_1,\dots,v_l$ are generators of $\ker\eta_{\cZ}$, we get that $Zv_h \in \ker\eta$ for $h=1,\dots,l$.
Finally, the last set of relations are the relations that come from $\cU$, the open stratum. The element $p_h(X)$ in general is not a relation, as its restriction to $\ch(\cZ)$ can fail to vanish. We can however use the following procedure to find a modification of $p_h$ which still vanishes when restricted to the open stratum and lies in the kernel of $\eta$. Recall that we have a well-defined morphism
$$ i^*:\ch(\cU) \longrightarrow \ch(\cZ)/(c_{\rm top}(N_{\cZ|\cX}))$$
which implies that $\eta_{\cZ}(p_h) \in (q'_1,\dots,q'_m,c_{\rm top}(N_{\cZ|\cX}))$. We choose an element $g'_h \in \ZZ[1/6,Y_1,\dots,Y_s]$ such that
$$ \eta_{\cZ}(p_h) + g_h'c_{\rm top}(N_{\cZ|\cX}) \in (q'_1,\dots,q'_m)$$
and consider a lifting $g_h$ of $g'_h$ through the morphism $\eta_{\cZ}$. A straightforward computation shows that $p_h+Zg_h \in \ker \eta$ for every $h=1,\dots,n$.
\begin{proposition}\label{prop:desc-gluing}
In the situation above, $\ker\eta$ is generated by the elements $Zq_1,\dots,Zq_m,Zv_1,\dots,Zv_l,p_1+Zg_1,\dots,p_n+Zg_n$.
\end{proposition}
\begin{proof}
Consider the following commutative diagram
$$
\begin{tikzcd}
{\ZZ[1/6,X_1,\dots,X_r,Z]} \arrow[r, "\eta"] \arrow[d, "\eta_{\cZ}"] & \ch(\cX) \arrow[d, "i^*"] \arrow[r, "j^*"] & \ch(\cU) \arrow[d, "i^*"] \\
{\ZZ[1/6,Y_1,\dots,Y_s]} \arrow[r, "a"] & \ch(\cZ) \arrow[r, "b"] & \ch(\cZ)/(c_{\rm top}(N_{\cZ|\cX}))
\end{tikzcd}
$$
where $b$ is the quotient morphism. Recall that the right square is cartesian. Notice that because $\eta_{\cZ}$ is surjective, all the other morphisms are surjective as well.
We denote by $c$ the top Chern class of the normal bundle $N_{\cZ|\cX}$. Let $p(X)+Zg(X,Z)$ be an element in $\ker \eta$. Because $$(j^*\circ \eta)(p(X)+Zg(X,Z))=0,$$ we have $p \in (p_1',\dots,p_n')$, which implies that there exist $b_1,\dots,b_n \in \ZZ[1/6,X_1,\dots,X_r]$ such that
$$ p = \sum_{h=1}^n b_hp_h . $$
Now we pull the element back to $\cZ$, obtaining
$$ (i^*\circ \eta)(p+Zg)=a\Big(\sum_{h=1}^n \eta_{\cZ}(b_h) \eta_{\cZ}(p_h) + c \eta_{\cZ}(g)\Big)$$
or equivalently
$$\sum_{h=1}^n \eta_{\cZ}(b_h) \eta_{\cZ}(p_h) + c \eta_{\cZ}(g) \in (q_1',\dots, q_m').$$
By construction $\eta_{\cZ}(p_h)\equiv -g_h'c \pmod{(q_1',\dots,q_m')}$ and $\eta_{\cZ}(g_h)=g_h'$, therefore we get
$$ c \eta_{\cZ}\Big(g-\sum_{h=1}^nb_hg_h\Big) \in (q_1',\dots,q_m'). $$
Because $c$ is a non-zero divisor in $\ch(\cZ)$ we have that
$$ \eta_{\cZ}\Big(g-\sum_{h=1}^nb_hg_h\Big) \in (q_1',\dots,q_m')$$
or equivalently
$$g= \sum_{h=1}^n b_hg_h + t$$
with $t \in \ker(a \circ \eta_{\cZ})$. Therefore we have that
$$ p+Zg=\sum_{h=1}^n b_h(p_h+Zg_h)+Zt$$
with $t \in \ker(a \circ \eta_{\cZ})$. One can check easily that $\ker(a \circ \eta_{\cZ})$ is generated by $(v_1,\dots,v_l,q_1,\dots,q_m)$.
\end{proof}
Now, we sketch how to apply this procedure to the first two strata, namely $\widetilde{\cH}_3\smallsetminus \widetilde{\Delta}_{1}$ and $\widetilde{\mathcal M}_3 \smallsetminus (\widetilde{\cH}_3 \cup \widetilde{\Delta}_1)$, to get the Chow ring of $\widetilde{\mathcal M}_3 \smallsetminus \widetilde{\Delta}_1$. This is the most complicated part as far as computations are concerned. The other gluing procedures are left to the curious reader, as they follow the exact same ideas.
In our situation, we have $\cU:=\widetilde{\mathcal M}_{3}\smallsetminus (\widetilde{\cH}_3 \cup \widetilde{\Delta}_1)$ and $\cZ:= \widetilde{\cH}_3 \smallsetminus \widetilde{\Delta}_1$. We know the description of their Chow rings thanks to \Cref{cor:chow-hyper} and \Cref{cor:chow-quart}. Let us look at the generators we need. The Chow ring of $\cU$ is generated by $\lambda_1$, $\lambda_2$ and $\lambda_3$. The Chow ring of $\cZ$ is generated by $\lambda_1$, $\lambda_2$ and $\xi_1$. Therefore the morphism
$$\eta_{\cZ} : \ZZ[1/6,\lambda_1,\lambda_2,\lambda_3, H] \longrightarrow \ZZ[1/6,\lambda_1,\lambda_2,\xi_1]$$
is surjective because $\eta_{\cZ}(H)=(2\xi_1-\lambda_1)/3$. We can also describe $\ker \eta_{\cZ}$, which is generated by any lifting of the description of $\lambda_3$ in $\cZ$ (see \Cref{lem:lambda-class-H}). This gives us our first relation (after multiplying it by $H$, the fundamental class of the hyperelliptic locus). Furthermore, we can consider the ideal of relations in $\widetilde{\cH}_{3}\smallsetminus \widetilde{\Delta}_1$, which is generated by the relations $c_9$, $D_1$ and $D_2$ described in \Cref{cor:chow-hyper}. Therefore we get three more relations.
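Concretely, the surjectivity amounts to solving for $\xi_1$: since $2$ is invertible in $\ZZ[1/6]$, we have $\xi_1=\eta_{\cZ}\big((3H+\lambda_1)/2\big)$. A two-line sympy check of this identity (illustrative only):

```python
from sympy import symbols, expand

l1, H, xi1 = symbols('lambda1 H xi1')

# eta_Z fixes lambda1 and sends H to (2*xi1 - lambda1)/3; applying it to
# (3*H + lambda1)/2 (legal since 2 is invertible in ZZ[1/6]) returns xi1.
preimage = (3*H + l1)/2
assert expand(preimage.subs(H, (2*xi1 - l1)/3) - xi1) == 0
```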
Lastly, we consider the four relations as in \Cref{cor:chow-quart} and compute their image through $\eta_{\cZ}$. The hardest part is to find a description of these elements in terms of the generators of the ideal of relations of $\widetilde{\cH}_3 \smallsetminus \widetilde{\Delta}_1$ and the top Chern class of the normal bundle of the closed immersion. We do not go into the details of the computations, but the main idea is to notice that every monomial of the polynomials we need to describe can be written in terms of the relations and the top Chern class.
We state our theorem, which gives us the description of the Chow ring of $\widetilde{\mathcal M}_3$. We write the explicit relations in \Cref{rem:relations-Mtilde}.
\begin{theorem}
We have the following isomorphism
$$ \ch(\widetilde{\mathcal M}_3)\simeq \ZZ[1/6,\lambda_1,\lambda_2,\lambda_3,H,\delta_1,\delta_{1,1},\delta_{1,1,1}]/I$$
where $I$ is generated by the following relations:
\begin{itemize}
\item $k_h$, which comes from the generator of $\ker i_H^*$, where $i_H: \widetilde{\cH}_3\smallsetminus \widetilde{\Delta}_1 \into \widetilde{\mathcal M}_3\smallsetminus \widetilde{\Delta}_1$;
\item $k_{1}(1)$ and $k_1(2)$, which come from the two generators of $\ker i_{1}^*$ where $i_{1}: \widetilde{\Delta}_1\smallsetminus \widetilde{\Delta}_{1,1} \into \widetilde{\mathcal M}_3\smallsetminus \widetilde{\Delta}_{1,1}$;
\item $k_{1,1}(1)$, $k_{1,1}(2)$ and $k_{1,1}(3)$, which come from the three generators of $\ker i_{1,1}^*$ where $i_{1,1}: \widetilde{\Delta}_{1,1}\smallsetminus \widetilde{\Delta}_{1,1,1} \into \widetilde{\mathcal M}_3\smallsetminus \widetilde{\Delta}_{1,1,1}$;
\item $k_{1,1,1}(1)$, $k_{1,1,1}(2)$, $k_{1,1,1}(3)$ and $k_{1,1,1}(4)$, which come from the four generators of $\ker i_{1,1,1}^*$ where $i_{1,1,1}: \widetilde{\Delta}_{1,1,1} \into \widetilde{\mathcal M}_3$;
\item $m(1)$, $m(2)$, $m(3)$ and $r$, which are the liftings of the generators of the relations of the open stratum $\widetilde{\mathcal M}_3\smallsetminus (\widetilde{\cH}_3 \cup \widetilde{\Delta}_1)$;
\item $h(1)$, $h(2)$ and $h(3)$, which are the liftings of the generators of the relations of the stratum $\widetilde{\cH}_3 \smallsetminus \widetilde{\Delta}_1$;
\item $d_1(1)$, which is the lifting of the generator of the relations of the stratum $\widetilde{\Delta}_1\smallsetminus \widetilde{\Delta}_{1,1}$.
\end{itemize}
Furthermore, $h(2)$, $h(3)$ and $d_1(1)$ are in the ideal generated by the other relations.
\end{theorem}
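As a consistency check, each lifted relation in \Cref{rem:relations-Mtilde} must restrict to zero on the stratum it comes from. The sympy sketch below verifies this for the $k_{1,1,1}$-relations and for $k_{1,1}(2)$, using the restrictions from the propositions above together with the standard fact (an assumption made explicit here, not restated in the text) that the fundamental class of a regularly embedded stratum restricts to the top Chern class of its normal bundle:

```python
from sympy import symbols, expand

t, c1, c2, c3 = symbols('t c1 c2 c3')
H, d1, d11, d111 = symbols('H delta1 delta11 delta111')
l1, l2, l3 = symbols('lambda1 lambda2 lambda3')

# Restrictions to Delta_{1,1,1}; the restriction of delta_{1,1,1} to its own
# stratum is taken to be the top Chern class c3 of its normal bundle
# (self-intersection formula).
to_d111 = {l1: -c1, l2: c2, l3: -c3, H: 0, d1: c1, d11: c2, d111: c3}

# The k_{1,1,1} relations from the remark, each divisible by delta_{1,1,1}:
k1 = l1*d111 + d1*d111
k2 = l2*d111 - d11*d111
k3 = l3*d111 + d111**2
k4 = H*d111
for k in (k1, k2, k3, k4):
    assert expand(k.subs(to_d111, simultaneous=True)) == 0

# k_{1,1}(2) restricts to zero on Delta_{1,1} \ Delta_{1,1,1}, where
# delta_{1,1} restricts to c2 + t*c1 + t^2 (second Chern class of N):
to_d11 = {l1: -t - c1, H: -3*t, d1: 2*t + c1, d11: c2 + t*c1 + t**2}
k112 = 3*l1*d11 + H*d11 + 3*d1*d11
assert expand(k112.subs(to_d11, simultaneous=True)) == 0
```

The same pattern extends to the remaining relations; only the simplest cases are spelled out here.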
\begin{remark}\label{rem:relations-Mtilde}
We write explicitly the relations.
\begin{itemize}
\item[]
\begin{equation*}
\begin{split}
k_h& =\frac{1}{8}\lambda_1^3H + \frac{1}{8}\lambda_1^2H^2 + \frac{1}{4}\lambda_1^2H\delta_1 - \frac{1}{2}\lambda_1\lambda_2H - \frac{1}{8}\lambda_1H^3 +
\frac{7}{8}\lambda_1H\delta_1^2 + \\ & + \frac{3}{2}\lambda_1\delta_1\delta_{1,1} - \frac{1}{2}\lambda_2H^2 + \lambda_3H - \frac{1}{8}H^4 - \frac{1}{4}H^3\delta_1 +
\frac{1}{8}H^2\delta_1^2 + \frac{3}{4}H\delta_1^3 + \\ & + \frac{3}{2}\delta_1^2\delta_{1,1}
\end{split}
\end{equation*}
\item[]
\begin{equation*}
\begin{split}
k_1(1)=\frac{1}{4}\lambda_1^2\delta_1 + \frac{1}{2}\lambda_1H\delta_1 + 2\lambda_1\delta_1^2 + \lambda_2\delta_1 + \frac{1}{4}H^2\delta_1 + H\delta_1^2 + \frac{7}{4}\delta_1^3 -\delta_1\delta_{1,1}
\end{split}
\end{equation*}
\item[]
\begin{equation*}
\begin{split}
k_1(2)& =\frac{1}{4}\lambda_1^3\delta_1 + \frac{1}{2}\lambda_1^2H\delta_1 + \frac{5}{4}\lambda_1^2\delta_1^2 + \frac{1}{4}\lambda_1H^2\delta_1 + \frac{3}{2}\lambda_1H\delta_1^2 +
\frac{7}{4}\lambda_1\delta_1^3 + \\ & + \lambda_1\delta_1\delta_{1,1} - \lambda_1\delta_{1,1,1} + \lambda_3\delta_1+ \frac{1}{4}H^2\delta_1^2 + H\delta_1^3 + \frac{3}{4}\delta_1^4 +
\delta_1^2\delta_{1,1}
\end{split}
\end{equation*}
\item[]
\begin{equation*}
\begin{split}
k_{1,1}(1)=-\lambda_1^2\delta_{1,1} - 2\lambda_1\delta_1\delta_{1,1} - \lambda_2\delta_{1,1} - \delta_1^2\delta_{1,1} + \delta_{1,1}^2
\end{split}
\end{equation*}
\item[]
\begin{equation*}
\begin{split}
k_{1,1}(2)=3\lambda_1\delta_{1,1} + H\delta_{1,1} + 3\delta_1\delta_{1,1}
\end{split}
\end{equation*}
\item[]
\begin{equation*}
\begin{split}
k_{1,1}(3)&=2\lambda_1^3\delta_{1,1} + 5\lambda_1^2\delta_1\delta_{1,1} + \lambda_1\lambda_2\delta_{1,1} + 4\lambda_1\delta_1^2\delta_{1,1} + \lambda_2\delta_1\delta_{1,1} + \\ & + \lambda_2\delta_{1,1,1} + \lambda_3\delta_{1,1} +
\delta_1^3\delta_{1,1}
\end{split}
\end{equation*}
\item[]
\begin{equation*}
\begin{split}
k_{1,1,1}(1)=\lambda_1\delta_{1,1,1} + \delta_1\delta_{1,1,1}
\end{split}
\end{equation*}
\item[]
\begin{equation*}
\begin{split}
k_{1,1,1}(2)=\lambda_2\delta_{1,1,1} - \delta_{1,1}\delta_{1,1,1}
\end{split}
\end{equation*}
\item[]
\begin{equation*}
\begin{split}
k_{1,1,1}(3)=\lambda_3\delta_{1,1,1} + \delta_{1,1,1}^2
\end{split}
\end{equation*}
\item[]
\begin{equation*}
\begin{split}
k_{1,1,1}(4)=H\delta_{1,1,1}
\end{split}
\end{equation*}
\item[]
\begin{equation*}
\begin{split}
m(1)&=12\lambda_1^4 - \frac{7}{3}\lambda_1^3H + 27\lambda_1^3\delta_1 - 44\lambda_1^2\lambda_2 - \frac{706}{9}\lambda_1^2H^2 - \frac{65}{2}\lambda_1^2H\delta_1 + \\ &
+ 84\lambda_1^2\delta_1^2 - 32\lambda_1^2\delta_{1,1} - 38\lambda_1\lambda_2H + 92\lambda_1\lambda_3 - \frac{715}{9}\lambda_1H^3 - \\ & -
\frac{1340}{9}\lambda_1H^2\delta_1 - 25\lambda_1H\delta_1^2 + 69\lambda_1\delta_1^3 - 130\lambda_1\delta_1\delta_{1,1} + 92\lambda_1\delta_{1,1,1} + \\ & +
6\lambda_2H^2 - \frac{46}{3}H^4 - \frac{1205}{18}H^3\delta_1 - \frac{562}{9}H^2\delta_1^2 - \frac{101}{6}H\delta_1^3 -
54\delta_1^2\delta_{1,1}
\end{split}
\end{equation*}
\item[]
\begin{equation*}
\begin{split}
m(2)&=-\frac{55}{18}\lambda_1^4H + \frac{9}{2}\lambda_1^4\delta_1 - 14\lambda_1^3\lambda_2 - \frac{31}{3}\lambda_1^3H^2 - \frac{58}{9}\lambda_1^3H\delta_1 +
69\lambda_1^3\delta_1^2 - \\ & - \frac{272}{3}\lambda_1^3\delta_{1,1} - \frac{173}{9}\lambda_1^2\lambda_2H + 2\lambda_1^2\lambda_3 - \frac{137}{18}\lambda_1^2H^3
- \frac{167}{4}\lambda_1^2H^2\delta_1 + \\ & + \frac{1831}{36}\lambda_1^2H\delta_1^2 + \frac{459}{2}\lambda_1^2\delta_1^3 -
\frac{461}{3}\lambda_1^2\delta_1\delta_{1,1} + 2\lambda_1^2\delta_{1,1,1} + 48\lambda_1\lambda_2^2 + \\ & + \frac{1}{9}\lambda_1\lambda_2H^2 - \frac{1}{3}\lambda_1H^4 -
\frac{605}{18}\lambda_1H^3\delta_1 - \frac{955}{18}\lambda_1H^2\delta_1^2 + 139\lambda_1H\delta_1^3 + \\ & + 291\lambda_1\delta_1^4 -
49\lambda_1\delta_1^2\delta_{1,1} + 48\lambda_2^2H - 96\lambda_2\lambda_3 + \frac{16}{3}\lambda_2H^3 + 48\lambda_2\delta_1\delta_{1,1} - \\ & - 96\lambda_2\delta_{1,1,1}
- \frac{241}{36}H^4\delta_1 - \frac{1111}{36}H^3\delta_1^2 - \frac{63}{4}H^2\delta_1^3 + \frac{367}{4}H\delta_1^4 + 126\delta_1^5
\end{split}
\end{equation*}
\item[]
\begin{equation*}
\begin{split}
m(3)&=\frac{419}{648}\lambda_1^5H - 9\lambda_1^5\delta_1 + \frac{17903}{972}\lambda_1^4H^2 - \frac{19763}{648}\lambda_1^4H\delta_1 -
\frac{285}{4}\lambda_1^4\delta_1^2 + \\ & + \frac{57344}{27}\lambda_1^4\delta_{1,1} - \frac{401}{162}\lambda_1^3\lambda_2H + 15\lambda_1^3\lambda_3 +
\frac{100795}{1944}\lambda_1^3H^3 - \\ & - \frac{16057}{972}\lambda_1^3H^2\delta_1 - \frac{100555}{648}\lambda_1^3H\delta_1^2 -
\frac{861}{4}\lambda_1^3\delta_1^3 + \frac{614635}{54}\lambda_1^3\delta_1\delta_{1,1} + \\ & + 15\lambda_1^3\delta_{1,1,1} - \frac{6433}{81}\lambda_1^2\lambda_2H^2 -
32\lambda_1^2\lambda_2\delta_{1,1} + \frac{11561}{216}\lambda_1^2H^4 + \\ & + \frac{12349}{324}\lambda_1^2H^3\delta_1 +
\frac{559}{12}\lambda_1^2H^2\delta_1^2 - \frac{120883}{324}\lambda_1^2H\delta_1^3 - \frac{1263}{4}\lambda_1^2\delta_1^4 + \\ & +
\frac{198799}{9}\lambda_1^2\delta_1^2\delta_{1,1} - 22\lambda_1\lambda_2^2H - 52\lambda_1\lambda_2\lambda_3 - 151\lambda_1\lambda_2H^3 + \\ & +
54\lambda_1\lambda_2\delta_1\delta_{1,1} - 52\lambda_1\lambda_2\delta_{1,1,1} + \frac{2845}{162}\lambda_1H^5 + \frac{19415}{324}\lambda_1H^4\delta_1 + \\ & +
\frac{59303}{324}\lambda_1H^3\delta_1^2 + \frac{10946}{243}\lambda_1H^2\delta_1^3 - \frac{66367}{162}\lambda_1H\delta_1^4 -
\frac{903}{4}\lambda_1\delta_1^5 + \\ & + 18519\lambda_1\delta_1^3\delta_{1,1} - 22\lambda_2^2H^2 - \frac{1333}{18}\lambda_2H^4 +
86\lambda_2\delta_1^2\delta_{1,1} + 112\lambda_3^2 + \\ & + 112\lambda_3\delta_{1,1,1} - \frac{407}{216}H^6 + \frac{14521}{648}H^5\delta_1 +
\frac{40205}{648}H^4\delta_1^2 + \frac{147917}{972}H^3\delta_1^3 - \\ & - \frac{8563}{243}H^2\delta_1^4 -
\frac{104075}{648}H\delta_1^5 - 63\delta_1^6 + \frac{11377}{2}\delta_1^4\delta_{1,1}
\end{split}
\end{equation*}
\item[]
\begin{equation*}
\begin{split}
h(1)&=\frac{3}{4}\lambda_1^3H + \frac{13}{4}\lambda_1^2H^2 + \frac{9}{4}\lambda_1^2H\delta_1 + \frac{13}{4}\lambda_1H^3 + \frac{13}{2}\lambda_1H^2\delta_1 +
\frac{9}{4}\lambda_1H\delta_1^2 + \\ & + \frac{3}{4}H^4 + \frac{13}{4}H^3\delta_1+ \frac{13}{4}H^2\delta_1^2 + \frac{3}{4}H\delta_1^3
\end{split}
\end{equation*}
\item[]
\begin{equation*}
\begin{split}
r&=-\frac{7}{81}\lambda_1^8H + \frac{247145}{2916}\lambda_1^7H^2 - \frac{39125}{81}\lambda_1^7H\delta_1 - 20800\lambda_1^7\delta_{1,1} - \\ & -
\frac{1286}{243}\lambda_1^6\lambda_2H + \frac{1573727}{8748}\lambda_1^6H^3 -\frac{400579}{162}\lambda_1^6H^2\delta_1 -
\frac{618187}{162}\lambda_1^6H\delta_1^2 + \\ & + \frac{31710800}{81}\lambda_1^6\delta_1\delta_{1,1} - \frac{736943}{729}\lambda_1^5\lambda_2H^2 -
24288\lambda_1^5\lambda_2\delta_{1,1} - \frac{2193853}{8748}\lambda_1^5H^4 - \\ & - \frac{4516739}{729}\lambda_1^5H^3\delta_1 -
\frac{35110427}{2916}\lambda_1^5H^2\delta_1^2 - \frac{3448919}{243}\lambda_1^5H\delta_1^3 + \\ & +
\frac{253855904}{81}\lambda_1^5\delta_1^2\delta_{1,1} - \frac{136}{3}\lambda_1^4\lambda_2^2H - \frac{2054561}{729}\lambda_1^4\lambda_2H^3+ \\ & +
133280\lambda_1^4\lambda_2\delta_1\delta_{1,1} - \frac{2986483}{2916}\lambda_1^4H^5 - \frac{40469125}{4374}\lambda_1^4H^4\delta_1 - \\ & -
\frac{52534855}{2916}\lambda_1^4H^3\delta_1^2- \frac{33507299}{1458}\lambda_1^4H^2\delta_1^3 -
\frac{17079953}{486}\lambda_1^4H\delta_1^4 + \\ & + \frac{246525952}{27}\lambda_1^4\delta_1^3\delta_{1,1} + \frac{26584}{9}\lambda_1^3\lambda_2^2H^2
- 9248\lambda_1^3\lambda_2^2\delta_{1,1} - \frac{364370}{243}\lambda_1^3\lambda_2H^4 + \\ & + 911024\lambda_1^3\lambda_2\delta_1^2\delta_{1,1} -
1152\lambda_1^3\lambda_3^2 - 1152\lambda_1^3\lambda_3\delta_{1,1,1} - \frac{315269}{324}\lambda_1^3H^6 - \\ & -
\frac{6063527}{729}\lambda_1^3H^5\delta_1 - \frac{80352425}{4374}\lambda_1^3H^4\delta_1^2 -
\frac{8710724}{2187}\lambda_1^3H^3\delta_1^3 - \\ & - \frac{26273782}{729}\lambda_1^3H^2\delta_1^4 -
\frac{13515761}{243}\lambda_1^3H\delta_1^5 + \frac{41134658}{3}\lambda_1^3\delta_1^4\delta_{1,1} + 896\lambda_1^2\lambda_2^3H + \\ & +
256\lambda_1^2\lambda_2^2\lambda_3 + \frac{225704}{27}\lambda_1^2\lambda_2^2H^3 - 11680\lambda_1^2\lambda_2^2\delta_1\delta_{1,1} +
256\lambda_1^2\lambda_2^2\delta_{1,1,1} + \\ & + \frac{19484}{9}\lambda_1^2\lambda_2H^5 + 1716232\lambda_1^2\lambda_2\delta_1^3\delta_{1,1} -
\frac{70385}{324}\lambda_1^2H^7 - \frac{716609}{162}\lambda_1^2H^6\delta_1 - \\ & - \frac{3175405}{243}\lambda_1^2H^5\delta_1^2 +
\frac{14917697}{2187}\lambda_1^2H^4\delta_1^3 + \frac{27927485}{1458}\lambda_1^2H^3\delta_1^4 - \\ & -
\frac{24157867}{486}\lambda_1^2H^2\delta_1^5 - \frac{12315655}{243}\lambda_1^2H\delta_1^6 +
11373175\lambda_1^2\delta_1^5\delta_{1,1} + \\ & + 1664\lambda_1\lambda_2^3H^2 - 1152\lambda_1\lambda_2^3\delta_{1,1} +
\frac{192920}{27}\lambda_1\lambda_2^2H^4 + 7328\lambda_1\lambda_2^2\delta_1^2\delta_{1,1} + \\ & + 5824\lambda_1\lambda_2\lambda_3^2+
5824\lambda_1\lambda_2\lambda_3\delta_{1,1,1} + \frac{66191}{27}\lambda_1\lambda_2H^6 + 1353816\lambda_1\lambda_2\delta_1^4\delta_{1,1} + \\ & +
\frac{12985}{108}\lambda_1H^8 - \frac{104593}{81}\lambda_1H^7\delta_1 - \frac{1752295}{324}\lambda_1H^6\delta_1^2 +
\frac{906349}{729}\lambda_1H^5\delta_1^3 + \\ & + \frac{57919459}{2187}\lambda_1H^4\delta_1^4 + \frac{6920350}{729}\lambda_1H^3\delta_1^5
- \frac{53724649}{1458}\lambda_1H^2\delta_1^6 - \\ & - \frac{5743493}{243}\lambda_1H\delta_1^7 + 4958470\lambda_1\delta_1^6\delta_{1,1} -
1152\lambda_2^3\lambda_3 + 768\lambda_2^3H^3 - \\ & - 1152\lambda_2^3\delta_1\delta_{1,1} - 1152\lambda_2^3\delta_{1,1,1} +
\frac{16064}{9}\lambda_2^2H^5 + 9760\lambda_2^2\delta_1^3\delta_{1,1} + \frac{5399}{9}\lambda_2H^7 + \\ & + 391040\lambda_2\delta_1^5\delta_{1,1} -
10976\lambda_3^3 - 10976\lambda_3^2\delta_{1,1,1} + \frac{171}{4}H^9 - \frac{7903}{54}H^8\delta_1 - \\ & -
\frac{304223}{324}H^7\delta_1^2 - \frac{365225}{486}H^6\delta_1^3 + \frac{4136302}{729}H^5\delta_1^4 +
\frac{43734445}{4374}H^4\delta_1^5 - \\ & - \frac{3256102}{2187}H^3\delta_1^6 - \frac{14121601}{1458}H^2\delta_1^7 -
\frac{1042615}{243}H\delta_1^8 + 887989\delta_1^7\delta_{1,1}
\end{split}
\end{equation*}
\end{itemize}
\end{remark}
\chapter{The Chow ring of $\overline{\cM}_3$}\label{chap:3}
This chapter is dedicated to the computation of the Chow ring of $\overline{\cM}_3$ and the comparison with the result of Faber.
The first part focuses on describing the strata of $A_r$-singularities that we eventually remove from $\widetilde{\mathcal M}_3$ to get the Chow ring of $\overline{\cM}_3$.
The second part focuses on the abstract computations, namely on finding the generators of the ideal of relations coming from the closed stratum of singularities of type $A_r$ with $r\geq 2$.
In the third part, we describe how to compute these relations in the Chow ring of $\widetilde{\mathcal M}_3$: the idea is to use the stratification introduced in \Cref{chap:2}. In fact, we compute every relation by restricting it to every stratum and then gluing the information to get an element of the Chow ring of $\widetilde{\mathcal M}_3$.
In the fourth part, we compare our description to the one in \cite{Fab}.
\section{The substack of $A_r$-singularities}\label{sec:3-1}
In this section we describe the closed substack of $\widetilde{\mathcal M}_g^r$ which parametrizes $A_r$-stable curves with at least one singularity of type $A_h$ with $h\geq 2$. We do so by stratifying this closed substack according to the singularities of type $A_h$ for each fixed $h\geq 2$.
Let $g\geq 2$ and $r\geq 1$ be two integers and let $\kappa$ be the base field of characteristic greater than $2g+1$. We recall the sequence of open substacks (see \Cref{rem: max-sing})
$$ \widetilde{\mathcal M}_g^0 \subset \widetilde{\mathcal M}_g^1 \subset \dots \subset \widetilde{\mathcal M}_g^r$$
and we define $\widetilde{\cA}_{\geq n}:=\widetilde{\mathcal M}_g^r\smallsetminus \widetilde{\mathcal M}_g^{n-1}$ for $n=0,\dots,r+1$ setting $\widetilde{\mathcal M}_g^{-1}:=\emptyset$.
We now introduce an alternative to $\widetilde{\cA}_{\geq n}$ which is easier to describe. Suppose $n$ is a positive integer less than or equal to $r$ and let $\cA_{\geq n}$ be the substack of the universal curve $\widetilde{\mathcal C}_g^r$ of $\widetilde{\mathcal M}_g^r$ parametrizing pairs $(C/S,p)$ where $p$ is a section whose geometric fibers over $S$ are $A_{n'}$-singularities with $n\leq n'\leq r$. We give $\cA_{\geq n}$ the structure of a closed substack of $\widetilde{\mathcal C}_g^r$ inductively on $n$. Clearly, if $n=0$ we have $\cA_{\geq 0}=\widetilde{\mathcal C}_g^r$. To define $\cA_{\geq 1}$, we need to find the stack-theoretic structure of the singular locus of the natural morphism $\widetilde{\mathcal C}_g^r \rightarrow \widetilde{\mathcal M}_g^r$. This is standard and can be done by taking the zero locus of the $1$-st Fitting ideal of $\Omega_{\widetilde{\mathcal C}_g^r|\widetilde{\mathcal M}_g^r}$. The morphism $\cA_{\geq 1}\rightarrow \widetilde{\mathcal M}_g^r$ is then finite; it is unramified over the nodes, while it ramifies over the more complicated singularities. Therefore, we can define $\cA_{\geq 2}$ as the substack of $\cA_{\geq 1}$ cut out by the $0$-th Fitting ideal of $\Omega_{\cA_{\geq 1}|\widetilde{\mathcal M}_g^r}$. A local computation shows that $\cA_{\geq 2} \rightarrow \widetilde{\mathcal M}_g^r$ is unramified over the locus of $A_2$-singularities and ramified elsewhere. Inductively, we can iterate this procedure, considering the $0$-th Fitting ideal of $\Omega_{\cA_{\geq n-1}|\widetilde{\mathcal M}_g^r}$ to define $\cA_{\geq n}$.
A local computation shows that the geometric points of $\cA_{\geq n}$ are exactly the pairs $(C,p)$ such that $p$ is an $A_{n'}$-singularity for $n\leq n'\leq r$.
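For instance, in the \'etale local model $C:=\spec k[x,y]/(y^2-x^{n+1})$ of an $A_n$-singularity (only a sketch over a geometric point, assuming the characteristic does not divide $2(n+1)$), the module of differentials has presentation
$$ \cO_C \xrightarrow{\ (-(n+1)x^n,\, 2y)\ } \cO_C^{\oplus 2} \longrightarrow \Omega_{C|k} \longrightarrow 0, $$
so the $1$-st Fitting ideal is $(x^n,y)$ and the singular locus is the non-reduced scheme $\spec k[x]/(x^n)$ supported at the origin. It is this thickening that the successive $0$-th Fitting ideals above detect.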
Let us define $\cA_n:=\cA_{\geq n}\smallsetminus \cA_{\geq n+1}$ for $n=0,\dots,r$, setting $\cA_{\geq r+1}:=\emptyset$. We have a stratification of $\cA_{\geq 2}$
$$ \cA_{r}=\cA_{\geq r} \subset \cA_{\geq r-1} \subset \dots \subset \cA_{\geq 2}$$
where the $\cA_n$'s are the associated locally closed strata for $n=2,\dots, r$.
The first reason we choose to work with $\cA_{\geq n}$ instead of $\widetilde{\cA}_{\geq n}$ is the smoothness of the locally closed substack $\cA_n$ of $\widetilde{\mathcal C}_g^r$.
\begin{proposition}
The stack $\cA_n$ is smooth.
\end{proposition}
\begin{proof}
The proof of Proposition 1.6 of \cite{DiLorPerVis} adapts perfectly to our setting. The only thing to point out is that the \'etale model induced by the deformation theory of the pair $(C,p)$ would be $y^2=x^n+a_{n-2}x^{n-2}+\dots+a_1x+a_0$, thus the restriction to $\cA_n$ is described by the equations $a_{n-2}=\dots=a_1=a_0=0$. The smoothness of $\widetilde{\mathcal M}_g^r$ implies the statement.
\end{proof}
Before going into the details of the even and odd cases, we describe a way of desingularizing an $A_n$-singularity.
\begin{lemma}\label{lem:blowup-an}
Let $(C,p) \in \cA_n(S)$. Then the ideal $I_p$ associated to the section $p$ satisfies the hypotheses of \Cref{lem:blowup}. If we denote by $b:\widetilde{C}\rightarrow C$ the blowup morphism and by $D$ the preimage $b^{-1}(p)$, then $D$ is finite flat of degree $2$ over $S$.
\begin{itemize}
\item If $n=1$, $D$ is a Cartier divisor of $\widetilde{C}$ \'etale of degree $2$ over $S$.
\item If $n\geq 2$, the $0$-th Fitting ideal of $\Omega_{D|S}$ defines a section $q$ of $D\subset \widetilde{C}\rightarrow S$ such that $\widetilde{C}$ is an $A_r$-prestable curve and $q$ is an $A_{n-2}$-singularity of $\widetilde{C}$.
\end{itemize}
\end{lemma}
\begin{proof}
Suppose that the statement is true when $S$ is reduced. Because $\cA_n$ is smooth, we know that up to an \'etale cover of $S$, our object $(C,p)$ is the pullback of an object over a smooth, hence reduced, scheme. All the properties in \Cref{lem:blowup-an} are stable under base change and satisfy \'etale descent, therefore the statement holds for any $S$.
Assume now that $S$ is reduced. To prove the first part of the statement, it is enough to prove that the geometric fiber $\cO_{C_s}/I_s^m$ has constant length over every point $s \in S$. This follows from a computation with the complete local ring at $p_s$. Regarding the rest of the statement, we can restrict to the geometric fibers over $S$ and reduce to the case $S=\spec k$ where $k$ is an algebraically closed field over $\kappa$. The statement then follows from a standard blowup computation.
\end{proof}
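\begin{remark}
A minimal sketch of the blowup computation behind \Cref{lem:blowup-an}, over an algebraically closed field of characteristic different from $2$ and with the convention that an $A_n$-singularity is \'etale-locally $y^2=x^{n+1}$: in the chart $y=xy'$ of the blowup of the origin, the strict transform of $y^2=x^{n+1}$ is
$$ y'^2=x^{n-1}, $$
and $D$ is cut out by $x=0$. For $n\geq 2$ this gives $\spec k[y']/(y'^2)$, a length-$2$ divisor supported at the point $q=(0,0)$, which is an $A_{n-2}$-singularity of the strict transform; for $n=1$ it gives $\spec k[y']/(y'^2-1)$, which is \'etale of degree $2$.
\end{remark}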
Let us start with $(C,p)\in \cA_n(S)$. We can construct a finite sequence of blowups which desingularizes the family $C$ along the section $p$.
Suppose $n$ is even. Applying \Cref{lem:blowup-an} iteratively, we get the successive sequence of blowups over $S$
$$
\begin{tikzcd}
\widetilde{C}_m \arrow[r, "b_m"] & \widetilde{C}_{m-1} \arrow[r, "b_{m-1}"] & \dots \arrow[r, "b_1"] & \widetilde{C}_0:=C
\end{tikzcd}
$$
with sections $q_h:S \rightarrow \widetilde{C}_h$, where $m:=n/2$, the morphism $b_h:\widetilde{C}_h:={\rm Bl}_{q_{h-1}}\widetilde{C}_{h-1}\rightarrow \widetilde{C}_{h-1}$ is the blowup of $\widetilde{C}_{h-1}$ with center $q_{h-1}$ (setting $q_0:=p$) and $q_h$ is the section of $\widetilde{C}_h$ over $q_{h-1}$ as in \Cref{lem:blowup-an}. We have that $\widetilde{C}_m$ is an $A_r$-prestable curve of genus $g-m$ and $q_m$ is a smooth section.
If instead $n:=2m-1$ is odd, the same sequence of blowups gives us a curve $\widetilde{C}_m$ whose arithmetic genus is either $g-m$ or $g-m+1$, depending on whether the geometric fibers of $\widetilde{C}_m$ are connected or not, together with an \'etale Cartier divisor $D$ of degree $2$ over $S$.
\begin{definition}
Let $(C,p)$ be an object of $\cA_n(S)$ with $n=2m$ or $n=2m-1$, for $S$ any scheme. The composition of blowups $b_m\circ b_{m-1} \circ \dots \circ b_{1}$ described above is called the relative $A_n$-desingularization and is denoted by $b_{C,p}$. By abuse of notation, we also refer to the source of the relative $A_n$-desingularization as the relative $A_n$-desingularization. We also denote by $J_b$ the conductor ideal associated with it.
We say that an object $(C/S,p)$ of $\cA_n$ is a separating $A_n$-singularity if the geometric fibers over $S$ of the relative $A_n$-desingularization are not connected.
\end{definition}
\begin{remark}
By construction, the relative $A_n$-desingularization is compatible with base change.
\end{remark}
\begin{lemma}\label{lem:conductor}
Let $(C,p)$ be an object of $\cA_n(S)$ with $n=2m$ or $n=2m-1$ and let $b_{C,p}:\widetilde{C}\rightarrow C$ be the relative $A_n$-desingularization. We have that $J_b$ is flat over $S$ and its formation is compatible with base change over $S$. Furthermore, we have that $J_b=I_{b^{-1}(p)}^m$ as an ideal of $\widetilde{C}$, where $I_{b^{-1}(p)}$ is the ideal associated to the preimage of $p$ through $b$.
\end{lemma}
\begin{proof}
The flatness and compatibility with base change are standard. If $S$ is the spectrum of an algebraically closed field, we know that the equality holds. We can consider the diagram
$$
\begin{tikzcd}
& & b_*I_{b^{-1}(p)}^m \arrow[d] & & \\
0 \arrow[r] & \cO_C \arrow[r, hook] & b_*\cO_{\widetilde{C}} \arrow[r, two heads] & Q \arrow[r] & 0
\end{tikzcd}
$$
and show that the composite morphism $b_*I_{b^{-1}(p)}^m\rightarrow Q$ is the zero map, restricting to the geometric fibers over $S$ (in fact both are finite and flat over $S$). Therefore $I_{b^{-1}(p)}^m$ can be seen also as an ideal of $\cO_{C}$. Because the conductor ideal is the largest ideal of $\cO_{C}$ which is also an ideal of $\cO_{\widetilde{C}}$, we get an inclusion $I_{b^{-1}(p)}^m\subset J_b$; the equality can be checked on the geometric fibers over $S$.
\end{proof}
\begin{remark}\label{rem:stab}
The stability condition for $\widetilde{C}$ can be described using the Noether formula (see Proposition 1.2 of \cite{Cat}). We have that $\omega_{C/S}$ is ample if and only if $\omega_{\widetilde{C}/S}(J_b^{\vee})$ is ample.
\end{remark}
Lastly, we prove that the stack parametrizing separating $A_n$-singularities for a fixed positive integer $n$ is closed inside $\widetilde{\mathcal C}_g^r$.
\begin{lemma}\label{lem:sep-sing}
Let $C\rightarrow \spec R$ be a family of $A_r$-prestable curves over a DVR $R$ and denote by $K$ its fraction field. Suppose there exists a section $s_K$ over the generic point whose image is a separating $A_{r_0}$-singularity (with $r_0\leq r$). Then the section $s_R$ (which is the closure of $s_K$) is still a separating $A_{r_0}$-singularity.
\end{lemma}
\begin{proof}
Because $s_K$ is a separating $A_{r_0}$-singularity, $r_0$ is necessarily odd. Furthermore, we have that the special fiber $s_k:=s_R\otimes_R k$ is an $A_{r_1}$-singularity with $r_1\geq r_0$.
Let us call $I_R$ the ideal associated with the section $s_R$. Because $C/\spec R$ is $A_r$-prestable, we can compute $\dim_L(\cO_C/I_R^h\otimes_R L)$ when $L$ is the algebraic closure of either the fraction field $K$ or the residue field $k$. We have that
$$ \dim_L L[[x,y]]/(y^2-x^n,m^h)=2h-1$$
for every $h\geq 1$ and every $n\geq 2$, where $m=(x,y)$.
Therefore, $\dim_L(\cO_C/I_R^h\otimes_R L)$ is constant on the geometric fibers over $\spec R$, and we get that $\cO_C/I_R^h$ is $R$-flat because $R$ is reduced. Consider now $\widetilde{C}_1:={\rm Bl}_{I_R}C$, which is still $R$-flat, proper and finitely presented thanks to \Cref{lem:blowup}, and commutes with base change. We denote by $b:\widetilde{C}_1 \rightarrow C$ the blowup morphism. A local computation shows that if $r_0\geq 3$ we have $b^{-1}(s_R)_{\rm red}=\spec R$, and thus it defines a section $q_1$ of $\widetilde{C}_1$ which is an $A_{r_0-2}$-singularity at the generic fiber and an $A_{r_1-2}$-singularity at the special fiber. We can therefore iterate this procedure $(r_0+1)/2$ times until we get $\widetilde{C}_{(r_0+1)/2}\rightarrow \spec R$, a flat proper finitely presented morphism whose generic fiber is not geometrically connected. The special fiber is then not geometrically connected either, as it is geometrically reduced. This clearly implies that $r_1=r_0$ and that $s_R$ is a separating section.
\end{proof}
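\begin{remark}
The dimension count used in the proof can be verified directly (a sketch): in $L[[x,y]]/(y^2-x^n)$ every element can be written uniquely as $f(x)+g(x)y$, and since $n\geq 2$ the image of the ideal $m^h$ is generated by $x^h$ and $x^{h-1}y$. A basis of the quotient is therefore given by
$$ 1,\,x,\,\dots,\,x^{h-1},\quad y,\,xy,\,\dots,\,x^{h-2}y, $$
that is $h+(h-1)=2h-1$ elements, independently of $n$.
\end{remark}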
\subsection*{Description of $\cA_n$ for the even case}
Let $n:=2m$ be an even number with $m\geq 1$. Firstly, we study the $A_{2m}$-singularity locally, and then we try to describe everything in families of projective curves.
Let $(C,p)$ be a $1$-dimensional reduced $1$-pointed scheme of finite type over an algebraically closed field where $p$ is an $A_{2m}$-singularity. Consider now the partial normalization at $p$, which gives us a finite birational morphism
$$b:\widetilde{C} \longrightarrow C$$
which is in fact a homeomorphism. This implies that the only thing we need to understand is how the structural sheaf changes through $b$. We have the standard exact sequence
$$ 0 \rightarrow \cO_C \rightarrow \cO_{\widetilde{C}} \rightarrow Q \rightarrow 0$$
where $Q$ is a coherent sheaf on $C$ with support on the point $p$. Consider now the conductor ideal $J_b$ of the morphism $b$, which is both an ideal of $\cO_C$ and of $\cO_{\widetilde{C}}$. Consider the morphism of exact sequences
$$
\begin{tikzcd}
0 \arrow[r] & \cO_C \arrow[r] \arrow[d, two heads] & \cO_{\widetilde{C}} \arrow[r] \arrow[d, two heads] & Q \arrow[r] \arrow[d, Rightarrow, no head] & 0 \\
0 \arrow[r] & \cO_C/J_b \arrow[r] & \cO_{\widetilde{C}}/J_b \arrow[r] & Q \arrow[r] & 0,
\end{tikzcd}
$$
where it is easy to see that the vertical morphism on the right is an isomorphism. Therefore \Cref{lem:cond-diag} and \Cref{lem:conductor} imply that to construct an $A_{2m}$-singularity we need the partial normalization $\widetilde{C}$, the section $q$ and a subalgebra of $\cO_{\widetilde{C}}/m_q^{2m}$. Notice that not every subalgebra works.
\begin{remark}
First of all, a local computation shows that the extension $\cO_{C}/J_b \into \cO_{\widetilde{C}}/m_q^{2m}$ is finite flat of degree $2$. Luckily, the converse is also true: if $B=k[[t]]$ and $I=(t^{2m})$, then the subalgebras $C\subset B/I$ such that $C \into B/I$ is finite flat of degree $2$ are exactly those whose pullback through the projection $B \rightarrow B/I$ is an $A_{2m}$-singularity. This should serve as a motivation for the alternative description of $\cA_{2m}$ we are going to prove.
\end{remark}
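\begin{remark}
A prototypical example (only as an illustration): let $B=k[[t]]$ be the normalization of the monomial $A_{2m}$-singularity $\cO_C=k[[t^2,t^{2m+1}]]\subset k[[t]]$. Here $m_q=(t)$ and the conductor is $J_b=(t^{2m})=m_q^{2m}$, in accordance with \Cref{lem:conductor} since $I_{b^{-1}(p)}=(t^2)$. The extension
$$ \cO_C/J_b \into B/m_q^{2m} $$
has $k$-bases $1,t^2,\dots,t^{2m-2}$ and $1,t,\dots,t^{2m-1}$ respectively, hence it is finite flat of degree $2$.
\end{remark}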
The idea now is to prove that the same picture works for families of curves. The result holds in greater generality, but we describe directly the case of families of $A_r$-prestable curves.
We want to construct an algebraic stack whose objects are triplets $(\widetilde{C}/S,q,A)$ where $(\widetilde{C}/S,q)$ is an $A_r$-prestable $1$-pointed curve of genus $g-m$ with some stability condition (see \Cref{rem:stab}) and $A\subset \cO_{\widetilde{C}}/I_q^{2m}$ is a finite flat extension of degree $2$ of flat $\cO_S$-algebras, where $I_q$ is the ideal sheaf associated to the section $q$.
Firstly, we introduce the stack $\widetilde{\mathcal M}_{h,[l]}^r$ parametrizing $A_r$-prestable 1-pointed curves $(\widetilde{C},q)$ such that $\omega_{\widetilde{C}}(lq)$ is ample. This is not difficult to describe, in fact we have a natural inclusion
$$ \widetilde{\mathcal M}_{h,[l+1]}^r\subset \widetilde{\mathcal M}_{h,[l]}^r$$
which is an equality for $l\geq 2$ if $h\geq 1$ and for $l\geq 3$ if $h=0$. We have that the only curves that live in $\widetilde{\mathcal M}_{h,[l]}^r\smallsetminus \widetilde{\mathcal M}_{h,1}^r$ are curves that have one (irreducible) tail of genus $0$, and the section lands on the tail. We have the following result.
\begin{proposition}\label{prop:desc-str}
In the situation above, if $h\geq 1$ we have an isomorphism
$$ \widetilde{\mathcal M}_{h,[l]}^r \simeq \widetilde{\mathcal M}_{h,1}^r\times [\AA^1/\GG_{\rmm}]$$
for $l\geq 2$. If $h=0$, we have an isomorphism
$$ \widetilde{\mathcal M}_{0,[l]}\simeq \cB(\GG_{\rmm} \rtimes \GG_{\rma})$$
for $l\geq 3$.
\end{proposition}
\begin{proof}
The case $h=0$ is straightforward. Clearly for $h\geq 1$ it is enough to construct the isomorphism for $l=2$.
In the proof of Theorem 2.9 of \cite{DiLorPerVis}, the authors prove the description for $r=2$, but the proof generalizes easily to any $r$. We sketch an alternative proof. First of all, it is easy to see that $\widetilde{\mathcal M}_{h,[l]}^r$ is smooth using deformation theory.
We follow the construction introduced in Section 2.3 of \cite{DiLorPerVis}: consider $\widetilde{\mathcal C}_{h,1}^r$ the universal curve of $\widetilde{\mathcal M}_{h,1}^r$ and define $\cD_{h,1}$ to be the blowup of $\widetilde{\mathcal C}_{h,1}^r \times [\AA^1/\GG_{\rmm}]$ in the center $\widetilde{\mathcal M}_{h,1}^r\times \cB\GG_{\rmm}$, where $\widetilde{\mathcal M}_{h,1}^r\hookrightarrow \widetilde{\mathcal C}_{h,1}^r$ is the universal section. If we denote by $q$ the proper transform of the closed substack
$$\cM_{h,1}^r \times [\AA^1/\GG_{\rmm}] \into \widetilde{\mathcal C}_{h,1}^r \times [\AA^1/\GG_{\rmm}]$$
we get that $(\cD_{h,1}\rightarrow \widetilde{\mathcal M}_{h,1}^r \times [\AA^1/\GG_{\rmm}],q)$ defines a morphism
$$ \varphi:\widetilde{\mathcal M}_{h,1}^r \times [\AA^1/\GG_{\rmm}] \longrightarrow \widetilde{\mathcal M}_{h,[l]}^r$$
which by construction is birational, as it is an isomorphism restricted to $\widetilde{\mathcal M}_{h,1}^r$. Furthermore, we have that it is an isomorphism on geometric points, i.e.
$$ \varphi(k): (\widetilde{\mathcal M}_{h,1}^r \times [\AA^1/\GG_{\rmm}])(k) \longrightarrow (\widetilde{\mathcal M}_{h,[l]}^r)(k) $$
is an equivalence of groupoids for any algebraically closed field $k$ over $\kappa$. See Proposition 2.5 of \cite{DiLorPerVis}. This implies that $\varphi$ is representable by algebraic spaces and quasi-finite.
Suppose that $\varphi$ is separated. Because $\widetilde{\mathcal M}_{h,[l]}^r$ is smooth, we can use the Zariski Main Theorem for algebraic spaces to prove that $\varphi$ is an isomorphism.
To check separatedness, we can use the valuative criterion. Suppose we are given a commutative diagram
$$
\begin{tikzcd}
\spec K \arrow[d] \arrow[r] & \spec R \arrow[d, "y"] \arrow[ld, "x_1"', shift right] \arrow[ld, "x_2", shift left] \\
{\widetilde{\mathcal M}_{h,1}^r \times [\AA^1/\GG_{\rmm}]} \arrow[r, "\varphi"'] & {\widetilde{\mathcal M}_{h,[l]}^r}
\end{tikzcd}
$$
where $R$ is a DVR and $K$ is its fraction field. We need to prove that $x_1\simeq x_2$ as objects of $\widetilde{\mathcal M}_{h,1}^r\times [\AA^1/\GG_{\rmm}]$. First of all, one can prove easily that $$\varphi\vert_{\widetilde{\mathcal M}_{h,1}^r\times \cB\GG_{\rmm}}: \widetilde{\mathcal M}_{h,1}^r\times \cB\GG_{\rmm} \longrightarrow \widetilde{\mathcal M}_{h,[l]}^r$$
is a closed immersion. Because $\varphi\vert_{\widetilde{\mathcal M}_{h,1}^r}$ is an isomorphism, it is enough to prove the statement for the full subcategory $\cM_R$ of $(\widetilde{\mathcal M}_{h,1}^r\times [\AA^1/\GG_{\rmm}])(\spec R)$ of morphisms $x:\spec R \rightarrow \widetilde{\mathcal M}_{h,1}^r\times [\AA^1/\GG_{\rmm}]$ such that $\spec K$ factors through $\widetilde{\mathcal M}_{h,1}^r\into \widetilde{\mathcal M}_{h,1}^r\times [\AA^1/\GG_{\rmm}]$ and $x$ lands in $\widetilde{\mathcal M}_{h,1}^r\times \cB\GG_{\rmm}$ when restricted to the residue field of $R$. Let $\cM_R'$ be the image $\varphi(R)(\cM_R)$ in $\widetilde{\mathcal M}_{h,[l]}^r(R)$. We can define a functor
$$ \phi_R :\cM_R \longrightarrow (\widetilde{\mathcal M}_{h,1}^r \times [\AA^1/\GG_{\rmm}])(\spec R)$$
such that $\phi_R(\varphi(R)(x))\simeq x$ for every $x \in \cM_R$. We construct the morphism $\phi_R$ and leave it to the reader to prove that it is a left inverse of $\varphi(R)$. To do so, we first define the morphism
$$ \cM_R' \longrightarrow [\AA^1/\GG_{\rmm}](\spec R)$$
in the following way. An object of $\cM_R'$ is a $1$-pointed $A_r$-prestable curve $(\widetilde{C}_R,q)$ over $R$ such that the generic fiber is a $1$-pointed $A_r$-stable curve of genus $h$, while the special fiber has a rational tail $\Gamma$ which contains the section and intersects the rest of the curve in a separating node $n$. Because \'etale locally the node $n$ has equation $xy=s$ with $s \in R$, we get a morphism $s:\spec R \rightarrow \AA^1$ which is well-defined up to an invertible element of $R$. This defines an object of $[\AA^1/\GG_{\rmm}](R)$.
Finally, we define a morphism $\widetilde{\mathcal M}_{h,[l]}^r \rightarrow \widetilde{\mathcal M}_{h,1}^r$, which in particular gives us a functor $\cM_R' \rightarrow \widetilde{\mathcal M}_{h,1}^r(\spec R)$, and it is straightforward to prove that it factors through $\cM_R$. Consider the universal curve $\widetilde{\mathcal C}_{h,[l]}^r$ over $\widetilde{\mathcal M}_{h,[l]}^r$ with the section $p$. Then we can consider the morphism induced by the complete linear system of $\omega_{\widetilde{\mathcal C}_{h,[l]}^r|\widetilde{\mathcal M}_{h,[l]}^r}(p)^{\otimes 3}$ and we denote by $\widetilde{\mathcal C}'$ its stacky image. This morphism contracts the tails of genus $0$. One can prove using the results in \cite{Knu} that $\widetilde{\mathcal C}'\rightarrow \widetilde{\mathcal M}_{h,[l]}^r$ defines a morphism of stacks
$$ p: \widetilde{\mathcal M}_{h,[l]}^r \rightarrow \widetilde{\mathcal M}_{h,1}^r$$
such that the composition
$$p \circ \varphi: \widetilde{\mathcal M}_{h,1}^r\times [\AA^1/\GG_{\rmm}] \longrightarrow \widetilde{\mathcal M}_{h,1}^r$$
is the natural projection.
\end{proof}
\begin{remark}
In the proof above, one can prove that the morphism $\phi_R$ is also a right inverse of $\varphi(R)$ restricted to $\cM_R$. This implies that $\varphi$ is proper, and thus there is no need to use the Zariski Main Theorem.
\end{remark}
Finally we are ready to describe $\cA_n$. In Appendix A, we define the stack $\cF_n^c$ which parametrizes pointed finite flat curvilinear algebras of length $n$ and $\cE_{m,d}^c$ which parametrizes finite flat curvilinear extensions of degree $d$ of pointed finite flat curvilinear algebras of degree $m$. We prove that the natural morphism
$$ \cE_{m,d}^c \longrightarrow \cF_{md}^c$$
defined by the association $(A\into B) \mapsto B$ is an affine bundle. See \Cref{lem:descr-affine-bundle}.
Let
$$ \cE_{m,2}^c\longrightarrow \cF_{2m}^c$$
be as above and consider the morphism of stacks
$$ \widetilde{\mathcal M}_{g-m,[2m]}^r\longrightarrow \cF_{2m}^c$$
defined by the association $(\widetilde{C},q) \mapsto \cO_{\widetilde{C}}/I_q^{2m}$ where $I_q$ is the ideal defined by the section $q$.
We denote by $\cA'_{2m}$ the fiber product $\cE_{m,2}^c \times_{\cF_{2m}^c} \widetilde{\mathcal M}_{g-m,[2m]}^r$. By definition, an object of $\cA_{2m}'$ over $S$ is of the form $(\widetilde{C},q, A \subset \cO_{\widetilde{C}}/I_q^{2m})$ where $A \subset \cO_{\widetilde{C}}/I_q^{2m}$ is a finite flat extension of algebras of degree $2$. Given two objects $(\pi_1:\widetilde{C}_1/S,q_1, A_1 \subset \cO_{\widetilde{C}_1}/I_{q_1}^{2m})$ and $(\pi_2:\widetilde{C}_2/S,q_2, A_2 \subset \cO_{\widetilde{C}_2}/I_{q_2}^{2m})$, a morphism over $S$ between them is a pair $(f,\alpha)$ where $f:(\widetilde{C}_1,q_1)\rightarrow (\widetilde{C}_2,q_2)$ is a morphism in $\widetilde{\mathcal M}_{g-m,[2m]}^r(S)$ while $\alpha:A_2 \rightarrow A_1$ is an isomorphism of finite flat algebras over $S$ such that the diagram
$$
\begin{tikzcd}
A_2 \arrow[r, "\alpha"] \arrow[d, hook] & A_1 \arrow[d, hook] \\
\pi_{2,*}(\cO_{\widetilde{C}_2}/I_{q_2}^{2m}) \arrow[r] & \pi_{1,*}(\cO_{\widetilde{C}_1}/I_{q_1}^{2m})
\end{tikzcd}
$$
is commutative.
We want to construct a morphism from $\cA'_{2m}$ to $\cA_{2m}$. Let $S$ be a scheme and $(\widetilde{C},q, A \subset \cO_{\widetilde{C}}/I_q^{2m})$ be an object of $\cA'_{2m}(S)$. We consider the diagram
$$
\begin{tikzcd}
\spec_{\cO_S}(\cO_{\widetilde{C}}/I_q^{2m}) \arrow[d, "2:1"] \arrow[r, hook] & \widetilde{C} \arrow[dd, bend left] \\
\spec_{\cO_S}(A) \arrow[rd] & \\
& S
\end{tikzcd}
$$
and complete it to a pushout square (see \Cref{lem:pushout}), which yields a morphism over $S$:
$$
\begin{tikzcd}
\spec_{\cO_S}(\cO_{\widetilde{C}}/I_q^{2m}) \arrow[d, "2:1"] \arrow[r, hook] & \widetilde{C} \arrow[dd, bend left] \arrow[d, dotted] \\
\spec_{\cO_S}(A) \arrow[rd] \arrow[r, dotted] & C \arrow[d, dotted] \\
& S.
\end{tikzcd}
$$
In this case, $\widetilde{C}$ and $C$ share the same topological space, whereas the structure sheaf $\cO_C$ of the pushout is the fiber product
$$
\begin{tikzcd}
\cO_C:=\cO_{\widetilde{C}}\times_{\cO_{\widetilde{C}}/I_q^{2m}} A \arrow[d, two heads] \arrow[r, hook] & \cO_{\widetilde{C}} \arrow[d, two heads] \\
A \arrow[r, hook] & \cO_{\widetilde{C}}/I_q^{2m};
\end{tikzcd}
$$
we define $I_p:=I_q\vert_{\cO_C}$ which induces a section $p:S \rightarrow C$ of $C\rightarrow S$.
\begin{lemma}
In the situation above, $C/S$ is an $A_r$-stable curve of genus $g$ and $p_s$ is an $A_{2m}$-singularity for every geometric point $s \in S$. Furthermore, the formation of the pushout commutes with base change over $S$.
\end{lemma}
\begin{proof}
The fact that $C/S$ is flat, proper and finitely presented is a consequence of \Cref{lem:pushout}. The same is true for the compatibility with base change over $S$. We only need to check that $C_s$ is an $A_r$-stable curve for every geometric point $s \in S$. Therefore we can assume $S=\spec k$ with $k$ an algebraically closed field over $\kappa$. Connectedness is trivial as the topological space is the same. We need to prove that $p$ is an $A_{2m}$-singularity. Consider the cartesian diagram of local rings
$$
\begin{tikzcd}
{\cO_{C,p}:=\cO_{\widetilde{C},q}\times_{\cO_{\widetilde{C},q}/m_q^{2m}} A} \arrow[d, two heads] \arrow[r, hook] & {\cO_{\widetilde{C},q}} \arrow[d, two heads] \\
A \arrow[r, hook] & {\cO_{\widetilde{C},q}/m_q^{2m}}
\end{tikzcd}
$$
and pass to the completion with respect to $m_p$, the maximal ideal of $\cO_{C,p}$. Because the extensions are finite, we get the following cartesian diagram of rings
$$
\begin{tikzcd}
B \arrow[d, two heads] \arrow[r, hook] & {k[[t]]} \arrow[d, two heads] \\
{k[[t]]/(t^m)} \arrow[r, "\phi_2", hook] & {k[[t]]/(t^{2m})};
\end{tikzcd}
$$
using the description of $\phi_2$ as in \Cref{lem:triv-ext}, it is easy to see that it is defined by the association $t \mapsto t^2$ up to an isomorphism of $k[[t]]/(t^{2m})$. This concludes the proof.
\end{proof}
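The last step of the proof can be made completely explicit. The following computation of the fiber product is a standard verification (not part of the text) identifying $B$ with the complete local ring of an $A_{2m}$-singularity.

```latex
% An element of B is a power series f in k[[t]] whose truncation mod t^{2m}
% lies in the image of phi_2, i.e. f has only even-degree terms in degrees < 2m.
% Hence B is topologically generated by t^2 and t^{2m+1}:
\[
  B \;=\; k[[t]] \times_{k[[t]]/(t^{2m})} k[[t]]/(t^m)
    \;=\; k[[t^2,\,t^{2m+1}]]
    \;\simeq\; k[[x,y]]/(y^2 - x^{2m+1}),
\]
% where x := t^2 and y := t^{2m+1} satisfy y^2 = t^{4m+2} = x^{2m+1}.
% This is exactly the complete local ring of an A_{2m}-singularity
% (for m = 1 it is the cusp y^2 = x^3).
```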
Therefore we have constructed a morphism of algebraic stacks
$$F:\cA'_{2m} \longrightarrow \cA_{2m}$$
defined on objects by the association
$$(\widetilde{C},q,A\subset \cO_{\widetilde{C}}/I_q^{2m}) \mapsto (\widetilde{C}\bigsqcup_{\spec (\cO_{\widetilde{C}}/I_q^{2m})} \spec A, p) $$
and on morphisms in the natural way using the universal property of the pushout.
To construct the inverse, we use the relative $A_n$-desingularization which is compatible with base change. \Cref{lem:conductor} implies that we can define a functor $G:\cA_{2m}\rightarrow \cA_{2m}'$ on objects
$$ (C/S,p) \mapsto (\widetilde{C},q, \cO_C/J_b \subset \cO_{\widetilde{C}}/I_q^{2m}) $$
where $b:\widetilde{C}\rightarrow C$ is the relative $A_{2m}$-desingularization, $J_b$ is the conductor ideal relative to $b$ and $q$ is the smooth section of $\widetilde{C}\rightarrow S$ defined as the vanishing locus of the $0$-th Fitting ideal of $\Omega_{b^{-1}(p)/S}$. It is defined on morphisms in the obvious way.
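As a sanity check (not taken from the text), one can trace the functor $G$ through the simplest even case $m=1$, where $p$ is a cusp:

```latex
% Work with completed local rings.  For an A_2-singularity (the cusp y^2 = x^3)
% we have O_{C,p} = k[[t^2,t^3]] inside its normalization k[[t]].
% The conductor J_b is the largest ideal of k[[t]] contained in k[[t^2,t^3]],
% namely (t^2)k[[t]] = I_q^2, so that
\[
  \cO_C/J_b \;=\; k[[t^2,t^3]]/(t^2,t^3) \;=\; k
  \quad\subset\quad
  k[t]/(t^2) \;=\; \cO_{\widetilde{C}}/I_q^2,
\]
% a finite flat extension of degree 2, as prescribed by the definition of G.
```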
\begin{proposition}\label{prop:descr-an-pari}
The two morphisms $F$ and $G$ are quasi-inverse of each other.
\end{proposition}
\begin{proof}
The statement is a consequence of \Cref{prop:pushout-blowup} and \Cref{prop:blowup-pushout}.
\end{proof}
\begin{corollary}\label{cor:descr-an-pari}
$\cA_{2m}$ is an affine bundle of relative dimension $m-1$ over the stack $\widetilde{\mathcal M}^r_{g-m,[2m]} $ for $m\geq 1$.
\end{corollary}
\begin{proof}
Because $\cA_{2m}'$ is constructed as the fiber product $\cE_{m,2}^c \times_{\cF_{2m}^c} \widetilde{\mathcal M}_{g-m,[2m]}^r$, the statement follows from \Cref{lem:descr-affine-bundle}.
\end{proof}
\subsection{Description of $A_n$ for the odd case}
Let $n:=2m-1$ and let $(C,p)$ be a $1$-dimensional reduced $1$-pointed scheme of finite type over an algebraically closed field such that $p$ is an $A_{2m-1}$-singularity. Consider now the partial normalization in $p$, which gives us a finite birational morphism
$$b:\widetilde{C} \longrightarrow C$$
which is not a homeomorphism, as we know that $b^{-1}(p)$ is a reduced divisor of $\widetilde{C}$ of length $2$. We can use \Cref{lem:cond-diag} to prove that the extension $\cO_C \hookrightarrow b_*\cO_{\widetilde{C}}$ can be constructed by pulling back a subalgebra of $b_*\cO_{\widetilde{C}}/J_b$ through the quotient $b_*\cO_{\widetilde{C}}\rightarrow b_*\cO_{\widetilde{C}}/J_b$, as in the even case. We can describe the subalgebra in the following way.
Consider the divisor $b^{-1}(p)$ which is the disjoint union of two closed points, namely $q_1$ and $q_2$. Then the composition
$$
\begin{tikzcd}
\cO_C/J_b \arrow[r] & b_*\cO_{\widetilde{C}}/J_b=\cO_{mq_1}\oplus \cO_{mq_2} \arrow[r] & \cO_{mq_i}
\end{tikzcd}
$$
is an isomorphism for $i=1,2$, where $\cO_{mq_i}$ is the structure sheaf of the support of the Cartier divisor $mq_i$ for $i=1,2$ and the right map is just the projection. Therefore the subalgebra $\cO_C/J_b$ of $b_*\cO_{\widetilde{C}}/J_b$ is determined by an isomorphism between $\cO_{mq_1}$ and $\cO_{mq_2}$. Recall that \Cref{rem:genus-count} implies that we can have two different situations: either $\widetilde{C}$ is connected and its genus is $g-m$, or it has two connected components of total genus $g-m+1$ and the two points lie in different components.
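The local picture behind this description can be written down explicitly. The following computation is a standard verification (with the identifying isomorphism normalized to the identity by a suitable choice of formal coordinates at $q_1$ and $q_2$), and recovers the expected local equation of an $A_{2m-1}$-singularity.

```latex
% Pulling back the graph of the identification O_{mq_1} ~ O_{mq_2} gives,
% on completed local rings,
%   O_{C,p} = { (f,g) in k[[t]] x k[[t]] : f = g mod t^m }.
% Set x := (t,t) and y := (t^m, -t^m).  Then y^2 = (t^{2m}, t^{2m}) = x^{2m},
% and (since char k != 2) the elements a(x) + b(x)y, i.e. the pairs
% (a + b t^m, a - b t^m), are exactly the pairs with f - g in (t^m).  Hence
\[
  \cO_{C,p} \;\simeq\; k[[x,y]]/(y^2 - x^{2m}),
\]
% the complete local ring of an A_{2m-1}-singularity, whose two branches
% correspond to the two points q_1 and q_2.
```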
\begin{definition}
Let $0\leq i\leq (g-m+1)/2$. We define $\cA_{2m-1}^{i}$ to be the substack of $\cA_{2m-1}$ parametrizing $1$-pointed curves $(C/S,p)$ such that the geometric fibers over $S$ of the relative $A_{2m-1}$-desingularization are the disjoint union of two curves of genus $i$ and $g-m-i+1$.
Furthermore, we define $\cA_{2m-1}^{\rm ns}$ to be the substack of $\cA_{2m-1}$ parametrizing curves $(C/S,p)$ such that the geometric fibers of the relative $A_{2m-1}$-desingularization are connected of genus $g-m$.
\end{definition}
\begin{proposition}
The algebraic stack $\cA_{2m-1}$ is the disjoint union of $\cA_{2m-1}^{\rm ns}$ and $\cA_{2m-1}^i$ for every $0 \leq i\leq (g-m+1)/2$.
\end{proposition}
\begin{proof}
Let $(C/S,p)$ be an object of $\cA_{2m-1}$ and consider the relative $A_n$-desingularization $$b:(\widetilde{C},q)\rightarrow (C,p)$$
over $S$. Because $\widetilde{C}\rightarrow S$ is flat, proper, finitely presented and the fibers over $S$ are geometrically reduced, we have that the number of connected components of the geometric fibers of $\widetilde{C}$ over $S$ is locally constant. See Proposition 15.5.7 of \cite{EGA}. Therefore we have that $\cA_{2m-1}^{\rm ns}$ is open and closed inside $\cA_{2m-1}$. Furthermore, we have that the objects of the complement $\cA_{2m-1}^{\rm s}$ of $\cA_{2m-1}^{\rm ns}$ are pairs $(C/S,p)$ such that the geometric fibers of $\widetilde{C}$ over $S$ have two connected components. Suppose that $S=\spec R$ with $R$ a strictly henselian ring over $\kappa$. We know that $b^{-1}(p)$ is \'etale of degree $2$ over $R$, thus it is the disjoint union of two copies of $\spec R$. We denote by $q$ one of the two sections of $\widetilde{C}\rightarrow \spec R$ and we denote by $C_s^0$ the connected component of the fiber $\widetilde{C}_s$ which contains $q_s$ for every point $s \in \spec R$. Proposition 15.6.5 and Proposition 15.6.8 in \cite{EGA} imply that the set-theoretic union
$$ C_0:=\bigcup_{s \in S} C_s^0$$
is a closed and open subscheme of $\widetilde{C}$, therefore the morphism $C_0\rightarrow S$ is still proper, flat and finitely presented. In particular, the arithmetic genus of the fibers is locally constant. It follows that $\cA_{2m-1}^{\rm s}$ is the disjoint union of the $\cA_{2m-1}^{i}$ for $0\leq i\leq (g-m+1)/2$.
\end{proof}
We start by describing $\cA_{2m-1}^{\rm ns}$. Let $\widetilde{\mathcal M}_{h,2[l]}^r$ be the fibered category in groupoids over the category of schemes whose objects are of the form $(\widetilde{C}/S,q_1,q_2)$ where $\widetilde{C}\rightarrow S$ is a flat, proper, finitely presented morphism of schemes, $q_1$ and $q_2$ are smooth sections, $\widetilde{C}_s$ is an $A_r$-prestable curve of genus $h$ for every geometric point $s \in S$ and $\omega_{\widetilde{C}}(l(q_1+q_2))$ is relatively ample over $S$. The morphisms are defined in the obvious way.
\begin{proposition}
We have an isomorphism of fibered categories
$$\widetilde{\mathcal M}_{h,2[l]}^r\simeq \widetilde{\mathcal M}_{h,2}^r \times [\AA^1/\GG_{\rmm}] \times [\AA^1/\GG_{\rmm}]$$
for $l\geq 2$ and $h\geq 1$. Furthermore, we have
$$ \widetilde{\mathcal M}_{0,2[l]}\simeq \cB\GG_{\rmm} \times [\AA^1/\GG_{\rmm}]$$
for $l\geq 3$.
\end{proposition}
\begin{proof}
The proof is an adaptation of that of \Cref{prop:desc-str}.
\end{proof}
We want to construct an algebraic stack with a morphism over $\widetilde{\mathcal M}_{h,2[m]}^r$ whose fibers parametrize the isomorphisms between the two finite flat $S$-algebras $\cO_{mq_1}$ and $\cO_{mq_2}$.
\begin{remark}
Recall that we have a smooth stack $\cE_{m,d}^c$ (see Appendix A for a detailed discussion) which parametrizes finite flat extensions $A\into B$ of degree $d$ with $B$ curvilinear of length $m$. If $d=1$, the stack $\cE_{m,1}^c$ parametrizes isomorphisms of finite flat algebras of length $m$. We also have a map
$$ \cE_{m,1}^c \longrightarrow \cF_m^c \times \cF_m^c$$
defined on objects by the association $(A\into B) \mapsto (A,B)$, which is a $\GG_{\rmm} \times \GG_{\rma}^{m-2}$-torsor. See Appendix A for a more detailed discussion.
\end{remark}
Consider the morphism
$$\widetilde{\mathcal M}_{g-m,2[m]}^r \longrightarrow \cF_{m}^c \times \cF_m^c$$
defined by the association
$$(\widetilde{C},q_1,q_2)\mapsto (\cO_{\widetilde{C}}/I_{q_1}^m,\cO_{\widetilde{C}}/I_{q_2}^m);$$
and let $I_{2m-1}^{\rm ns}$ be the fiber product $\widetilde{\mathcal M}_{g-m,2[m]}^r \times_{(\cF_{m}^c\times\cF_{m}^c)} \cE_{m,1}^c$. It parametrizes triples $(\widetilde{C},q_1,q_2,\phi)$ where $(\widetilde{C},q_1,q_2)$ is an object of $\widetilde{\mathcal M}_{g-m,2[m]}^r$ and $\phi$ is an isomorphism between $\cO_{mq_1}$ and $\cO_{mq_2}$ as $\cO_S$-algebras which commutes with the sections.
We can construct a morphism
$$ I_{2m-1}^{\rm ns} \longrightarrow \cA_{2m-1}^{\rm ns}$$
in the following way: let $(\widetilde{C},q_1,q_2,\phi) \in I_{2m-1}^{\rm ns}(S)$, then we have the diagram
$$
\begin{tikzcd}
\spec_S(\cO_{mq_1})\bigsqcup \spec_S(\cO_{mq_2}) \arrow[d, "{(\id,\phi)}"] \arrow[r, hook] & \widetilde{C} \\
\spec_S(\cO_{mq_1}) &
\end{tikzcd}
$$
where the morphism $(\id,\phi)$ is \'etale of degree $2$. We denote by $C$ the pushout of the diagram and by $p$ the common image of $q_1$ and $q_2$. We send $(\widetilde{C},q_1,q_2,\phi)$ to $(C,p)$. Notice that because both $q_1$ and $q_2$ are smooth sections, $\cO_{mq_i}$ is the scheme-theoretic support of a Cartier divisor of $\widetilde{C}$, therefore it is flat for $i=1,2$. \Cref{lem:pushout} assures us that this construction is functorial and commutes with base change.
\begin{proposition}\label{prop:descr-an-odd-ns}
The pushout functor
$$F^{\rm ns}: I_{2m-1}^{\rm ns} \longrightarrow \cA_{2m-1}^{\rm ns}$$
is representable finite \'etale of degree $2$.
\end{proposition}
\begin{proof}
It is a direct consequence of both \Cref{prop:pushout-blowup} and \Cref{prop:blowup-pushout}. In fact, the two propositions assure us that the object $(C,p)$ is uniquely determined by the relative $A_{2m-1}$-desingularization $b:\widetilde{C}\rightarrow C$, the fiber $b^{-1}(p)$ and an automorphism of the $m$-thickening of $b^{-1}(p)$ which, restricted to the geometric fibers over $S$, acts exchanging the two points of $b^{-1}(p)$. Because $b^{-1}(p)$ is finite \'etale of degree $2$, it is clear that $F^{\rm ns}$ is a finite \'etale morphism of degree $2$.
\end{proof}
Let $0\leq i\leq (g-m+1)/2$. In the same way, we can define morphisms
$$ \widetilde{\mathcal M}_{i,[m]}\times \widetilde{\mathcal M}_{g-i-m+1,[m]} \longrightarrow \cF_{m}^c \times \cF_m^c$$
defined by the association $$\Big((\widetilde{C}_1,q_1),(\widetilde{C}_{2},q_{2})\Big) \mapsto \Big(\cO_{\widetilde{C}_1}/I_{q_1}^m,\cO_{\widetilde{C}_{2}}/I_{q_{2}}^m\Big)$$
and we denote by $I_{2m-1}^{i}$ the fiber product $$(\widetilde{\mathcal M}_{i,[m]}\times \widetilde{\mathcal M}_{g-m-i+1,[m]})\times_{(\cF_{m}^c\times\cF_{m}^c)} \cE_{m,1}^c.$$
Similarly to the previous case, we can construct a functor
$$ F^i: I_{2m-1}^i \longrightarrow \cA_{2m-1}^i$$
using the pushout construction. Again, we have the following result.
\begin{proposition}\label{prop:descr-an-odd-i}
The functor $F^i$ is an isomorphism for $i \neq (g-m+1)/2$, whereas it is finite \'etale of degree $2$ for $i = (g-m+1)/2$.
\end{proposition}
\begin{proof}
The proof is exactly the same as \Cref{prop:descr-an-odd-ns}.
\end{proof}
\section{The relations from the $A_n$-strata}\label{sec:3-2}
In this section, we are going to describe the image of the pushforward of the closed immersion $\widetilde{\cA}_{\geq 2} \into \widetilde{\mathcal M}_3$ in $\ch(\widetilde{\mathcal M}_3)$. First of all, we explain why we can reduce to studying the stacks $\cA_n$.
We have the following result.
\begin{proposition}
The functor forgetting the section gives us a natural morphism
$$ \cA_{\geq n} \rightarrow \widetilde{\cA}_{\geq n}$$
which is finite and birational, and is surjective at the level of Chow groups.
\end{proposition}
\begin{proof}
It follows from the fact that every $A_r$-stable genus $3$ curve has at most three singularities of type $A_n$ for $n\geq 2$.
\end{proof}
Consider now the proper morphisms
$$ \rho_{\geq n}:\cA_{\geq n} \longrightarrow \widetilde{\mathcal M}_g^r$$
and their restrictions to $\cA_n$
$$ \rho_n :\cA_n \longrightarrow \widetilde{\mathcal M}_g^r\smallsetminus \widetilde{\cA}_{\geq n+1}$$
which is still proper; let $\{f_i\}_{i \in I_n}$ be a set of elements of $\ch(\cA_n)$ indexed by some set $I_n$ such that $\im{\rho_{n,*}}$ is generated by the set $\{\rho_{n,*}(f_i)\}_{i \in I_n}$. We choose a lifting $\tilde{f}_i$ of every $f_i$ to the Chow group of $\cA_{\geq n}$ for every $n=2,\dots,r$ and every $i \in I_n$. We have the following result.
\begin{lemma}\label{lem:strata}
In the setting above, $\im{\rho_{\geq 2,*}}$ is generated by $\{\rho_{\geq n,*}(\tilde{f}_i)\}_{n,\, i \in I_n}$.
\end{lemma}
\begin{proof}
This is a direct consequence of Lemma 3.3 of \cite{DiLorFulVis}.
\end{proof}
The previous lemma implies that we need to focus on finding the generators of the relations coming from the strata $\cA_n$ for $n=2,\dots,r$. Therefore in the remaining part of the section we study the morphism $\rho_n$ and prove that $\rho_n^*$ is surjective at the level of Chow rings for every $n\geq 3$. The same is not true for $n=2$, but we give a geometric description of the generators of the image of $\rho_{2,*}$.
\subsection*{Generators for the image of $\rho_{n,*}$ if $n$ is even}
Recall that we are interested in the case $r=7$ and $g=3$ and therefore the characteristic of $\kappa$ is greater than $7$. We are going to prove that the morphism
$$ \rho_n^*: \ch(\widetilde{\mathcal M}_3^7) \longrightarrow \ch(\cA_n)$$
is surjective for $n=4,6$.
\begin{proposition}\label{prop:rho-6-surj}
The morphism $\rho_6^*$ is surjective.
\end{proposition}
\begin{proof}
We start by considering the isomorphism proved in \Cref{cor:descr-an-pari}. Because in this situation $g=3$ and $m=3$ we have that
$$\cA_6 \simeq [V/\GG_{\rmm} \ltimes \GG_{\rma}]$$
where $V$ is the vector bundle considered in \Cref{lem:descr-affine-bundle}. As a matter of fact, we proved that the following commutative diagram of stacks
$$
\begin{tikzcd}
\cA_6 \arrow[d] \arrow[r] & {\cE_{3,2}^c\simeq [V/G_{6}]} \arrow[d] \\
{\widetilde{\mathcal M}_{0,[3]}\simeq \cB(\GG_{\rmm} \ltimes \GG_{\rma})} \arrow[r, "\cB f"] & \cF_{6}^c:=\cB G_6
\end{tikzcd}
$$
is cartesian. The morphism $\cB f$ can be described as the morphism of classifying stacks induced by the morphism of group schemes
$$ f:\GG_{\rmm} \ltimes \GG_{\rma}\simeq \aut(\PP^1,\infty) \longrightarrow G_6$$
defined by the association $\phi \mapsto \phi\otimes_{\cO_{\PP^1}} \cO_{\PP^1}/m_{\infty}^6$, where $m_{\infty}$ is the maximal ideal of the point $\infty$. For the definition of $G_6$, see \Cref{cor:descr-finite-alg}. A simple computation, using the explicit formula of the action of $G_6$ on $V$, shows that
$$ \cA_6\simeq [V/\GG_{\rmm} \ltimes \GG_{\rma}] \simeq [\AA^1/\GG_{\rmm}]$$
where the action of $\GG_{\rmm}$ on $\AA^1$ has weight $-3$. More explicitly, if we identify $\cO_{\PP^1}/m_{\infty}^6$ with the algebra $\kappa[t]/(t^6)$, an element $\lambda \in \AA^1$ corresponds to the inclusion of algebras $$\kappa[t]/(t^3) \into \kappa[t]/(t^6)$$ defined by the association $t \mapsto t^2+\lambda t^5$.
We want to understand the pullback of the hyperelliptic locus, i.e.\ $\rho_6^*(H)$. It is clear that $\rho_6^{-1}(\widetilde{\cH}_3)$ is the locus in $[\AA^1/\GG_{\rmm}]$ where the involution $t \mapsto -t$ fixes the inclusion $t \mapsto t^2+\lambda t^5$. This implies $\lambda=0$ and therefore $\rho_6^*(H)=-3s$ where $s$ is the generator of $\ch([\AA^1/\GG_{\rmm}])$.
\end{proof}
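The weight $-3$ appearing in the proof can be checked directly on the torus part of $\aut(\PP^1,\infty)$; the following sketch records this computation (we assume, as in the proof, that the $\GG_{\rma}$-part is absorbed in normalizing the intermediate coefficients of the inclusion).

```latex
% An element a of G_m acts on the local coordinate t = 1/x at infinity by
% t -> a^{-1} t.  The subalgebra of k[t]/(t^6) generated by
%   u = t^2 + \lambda t^5
% is therefore sent to the subalgebra generated by
%   a^{-2} t^2 + \lambda a^{-5} t^5 = a^{-2} ( t^2 + \lambda a^{-3} t^5 ),
% i.e. the normal form is preserved and the parameter transforms as
\[
  \lambda \;\longmapsto\; a^{-3}\,\lambda,
\]
% so G_m acts on A^1 with weight -3, and the hyperelliptic locus
% {\lambda = 0} has class -3s in \ch([\AA^1/\GG_{\rmm}]).
```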
\begin{proposition}
The morphism $\rho_4^*$ is surjective.
\end{proposition}
\begin{proof}
Again, \Cref{cor:descr-an-pari} shows that $\cA_{4}$ is an affine bundle over $\widetilde{\mathcal M}_{1,1}\times [\AA^1/\GG_{\rmm}]$ and therefore
$$ \ch(\cA_{4})\simeq \ZZ[1/6,t,s]$$
where $s$ is the generator of the Picard group of $[\AA^1/\GG_{\rmm}]$ and $t$ is the $\psi$-class of $\widetilde{\mathcal M}_{1,1}$ (which is a generator of the Chow ring of $\widetilde{\mathcal M}_{1,1}$). Exactly as it happens for the pinching morphism described in \cite{DiLorPerVis}, we have $\rho_4^*(\delta_1)=s$ (see Lemma 5.9 of \cite{DiLorPerVis}). We now need to compute $\rho_4^*(H)$. Consider the open immersion
$$ \cA_4\vert_{\widetilde{\mathcal M}_{1,1}} \into \cA_4$$
induced by the open immersion $\widetilde{\mathcal M}_{1,1} \into \widetilde{\mathcal M}_{1,1} \times [\AA^1/\GG_{\rmm}]$. We have that
$$ \ch(\cA_4\vert_{\widetilde{\mathcal M}_{1,1}})=\ZZ[1/6,t,s]/(s)$$
therefore it is enough to prove that $\rho_4^*(H)$ restricted to this open is of the form $-2t$ to have the surjectivity of $\rho_4^*$.
We know that $\widetilde{\mathcal M}_{1,1}\simeq [\AA^2/\GG_{\rmm}]$, therefore it is enough to restrict to $\cA_4\vert_{\cB\GG_{\rmm}}$ because the pullback of the closed immersion $\cB\GG_{\rmm} \into \widetilde{\mathcal M}_{1,1}$ is an isomorphism of Chow rings. Similarly to the proof of \Cref{prop:rho-6-surj}, a simple computation shows that $\cA_4\vert_{\cB\GG_{\rmm}}$ is isomorphic to $[\AA^1/\GG_{\rmm}]$ where $\GG_{\rmm}$ acts with weight $-2$. An element $\lambda \in \AA^1$ corresponds to the inclusion of algebras $\kappa[t]/(t^2) \into \kappa[t]/(t^4)$ defined by the association $t\mapsto t^2+\lambda t^3$.
The locus $\widetilde{\cH}_3$ coincides with the locus in $[\AA^1/\GG_{\rmm}]$ described by the equation $\lambda=0$. Therefore the restriction of $\rho_4^*(H)$ to this open substack is $-2t$ and we are done.
\end{proof}
Before studying the morphism $\rho_2$, we need to understand its source. We have that
$$\cA_{2}\simeq \cA_{2}'\simeq \widetilde{\mathcal M}_{2,1} \times [\AA^1/\GG_{\rmm}].$$
Recall that $\widetilde{\mathcal M}_{2,1}$ is an open substack of $\widetilde{\mathcal C}_2$ thanks to \Cref{prop:contrac}. Therefore, the Chow ring of $\widetilde{\mathcal M}_{2,1}$ is a quotient of the one of $\widetilde{\mathcal C}_2$.
\begin{lemma}\label{lem:chow-ring-C2}
The Chow ring of $\widetilde{\mathcal C}_2$ is a quotient of the polynomial ring generated by
\begin{itemize}
\item the $\lambda$-classes $\lambda_1$ and $\lambda_2$ of degree $1$ and $2$ respectively,
\item the $\psi$-class $\psi_1$,
\item two classes $\theta_1$ and $\theta_2$ of degree $1$ and $2$ respectively;
\end{itemize}
furthermore, the ideal of relations is generated by
\begin{itemize}
\item $\lambda_2-\theta_2-\psi_1(\lambda_1-\psi_1)$,
\item $\theta_1(\lambda_1+\theta_1)$,
\item $\theta_2\psi_1$,
\item $\theta_2(\lambda_1+\theta_1-\psi_1)$,
\item a homogeneous polynomial of degree $7$.
\end{itemize}
\end{lemma}
\begin{proof}
We do not describe all the computations in detail. The idea is to use the stratification introduced in Section 4 of \cite{DiLorPerVis}, i.e.
$$\widetilde{\Theta}_2 \subset \widetilde{\Theta}_1 \subset \widetilde{\mathcal C}_2$$
where $\widetilde{\Theta}_1$ is the pullback of $\widetilde{\Delta}_1$ through the morphism $\widetilde{\mathcal C}_2 \rightarrow \widetilde{\mathcal M}_2$ and $\widetilde{\Theta}_2$ is the closed substack of $\widetilde{\mathcal C}_2$ parametrizing pairs $(C,p)$ such that $p$ is a separating node. We denote by $\theta_1$ and $\theta_2$ the fundamental classes of $\widetilde{\Theta}_1$ and $\widetilde{\Theta}_2$. Notice that the only difference with our situation is in the open stratum $\widetilde{\mathcal C}_2 \smallsetminus \widetilde{\Theta}_1$. In fact, the authors of \cite{DiLorPerVis} proved in Proposition 4.1 that
$$\widetilde{\mathcal C}_2^2\smallsetminus \widetilde{\Theta}_1 \simeq [U/B_2]$$
where $U$ is an open subscheme inside a $B_2$-representation $\widetilde{\AA}(6)$.
The same proof generalizes to the case $r=7$ (see \Cref{cor:mtilde_21}) and it gives us that
$$ \widetilde{\mathcal C}_2^7\smallsetminus \widetilde{\Theta}_1 \simeq [(\widetilde{\AA}(6)\smallsetminus 0)/B_2]$$
and therefore the zero section in $\widetilde{\AA}(6)$ gives us a relation of degree $7$. We also have the following isomorphisms:
\begin{itemize}
\item $\widetilde{\Theta}_1 \smallsetminus \widetilde{\Theta}_2 \simeq (\widetilde{\mathcal C}_{1,1}\smallsetminus \widetilde{\mathcal M}_{1,1})\times \widetilde{\mathcal M}_{1,1}$,
\item $\widetilde{\Theta}_2 \simeq \widetilde{\Delta}_1$;
\end{itemize}
thus we have the following descriptions of the Chow rings of the strata:
\begin{itemize}
\item $\ch(\widetilde{\mathcal C}_2\smallsetminus \widetilde{\Theta}_1) \simeq \ZZ[1/6,t_0,t_1]/(f_7)$,
\item $\ch(\widetilde{\Theta}_1 \smallsetminus \widetilde{\Theta}_2
) \simeq \ZZ[1/6,t,s]$,
\item $\ch(\widetilde{\Theta}_2) \simeq \ZZ[1/6,\lambda_1,\lambda_2]$
\end{itemize}
where $f_7$ is a homogeneous polynomial of degree $7$. Finally, one can prove the following identities:
\begin{itemize}
\item $\lambda_1\vert_{\widetilde{\mathcal C}_2\smallsetminus \widetilde{\Theta}_1} = -t_0-t_1$, $\lambda_1\vert_{\widetilde{\Theta}_1\smallsetminus \widetilde{\Theta}_2}=-t-s$;
\item $\lambda_2\vert_{\widetilde{\mathcal C}_2\smallsetminus \widetilde{\Theta}_1}=t_0t_1$, $\lambda_2\vert_{\widetilde{\Theta}_1\smallsetminus \widetilde{\Theta}_2}=st$;
\item $\psi_1\vert_{\widetilde{\mathcal C}_2\smallsetminus \widetilde{\Theta}_1}=t_1$, $\psi_1\vert_{\widetilde{\Theta}_1\smallsetminus \widetilde{\Theta}_2}=t$, $\psi_1\vert_{\widetilde{\Theta}_2}=0$;
\item $\theta_1\vert_{\widetilde{\Theta}_1\smallsetminus \widetilde{\Theta}_2}=t+s$, $\theta_1\vert_{\widetilde{\Theta}_2}=-\lambda_1$;
\item $\theta_2\vert_{\widetilde{\Theta}_2}=\lambda_2$.
\end{itemize}
The result follows from applying the gluing lemma.
\end{proof}
\begin{remark}
Clearly, $\rho_2^*$ cannot be surjective at the level of Chow rings, as this is not true even at the level of Picard groups. In fact, the Picard group of $\widetilde{\mathcal M}_3$ is a free abelian group of rank $3$ while the Picard group of $\widetilde{\mathcal M}_{2,1}\times [\AA^1/\GG_{\rmm}]$ is a free abelian group of rank $4$.
\end{remark}
We are ready for the proposition.
\begin{proposition}\label{prop:gener-rho-2}
The image of the pushforward of
$$\rho_2: \widetilde{\mathcal M}_{2,1} \times [\AA^1/\GG_{\rmm}] \simeq \cA_2 \longrightarrow \widetilde{\mathcal M}_3\smallsetminus \widetilde{\cA}_{\geq 3}$$
is generated by the elements $\rho_{2,*}(1)$, $\rho_{2,*}(s)$ and $\rho_{2,*}(s\theta_1)$, where $s$ is the generator of the Chow ring of $[\AA^1/\GG_{\rmm}]$ and $\theta_1$ is the fundamental class of the locus parametrizing curves with a separating node.
\end{proposition}
\begin{proof}
For this proof, we denote by $\lambda_1$ and $\lambda_2$ the Chern classes of the Hodge bundle of $\widetilde{\mathcal M}_{2,1}$, whereas the $i$-th Chern class of the Hodge bundle of $\widetilde{\mathcal M}_3$ is denoted by $c_i(\HH)$ for $i=1,2,3$.
We need to describe the pullback of the generators of the Chow ring of $\widetilde{\mathcal M}_3$ through $\rho_2$. By construction, it is easy to see that $\rho_2^*(\delta_1)=s+\theta_1$, $\rho_2^*(\delta_{1,1})=\theta_2+s\theta_1$ and $\rho_2^*(\delta_{1,1,1})=s\theta_2$.
Notice that $\rho_2^{-1}(\widetilde{\cH}_3)$ is the closed substack $\widetilde{\mathcal M}_{2,\omega} \times [\AA^1/\GG_{\rmm}]$, where $\widetilde{\mathcal M}_{2,\omega}$ is the closed substack of $\widetilde{\mathcal M}_{2,1}$ which parametrizes pairs $(C,p)$ such that $p$ is fixed by the (unique) involution of $C$. To compute its class, we use the stratification from the proof of \Cref{lem:chow-ring-C2}.
In the open stratum $\widetilde{\mathcal M}_{2,1}\smallsetminus \widetilde{\Theta}_1$, \Cref{prop:relation-detilde-1} implies that the restriction of $[\widetilde{\mathcal M}_{2,\omega}]$ is equal to $-\lambda_1-3 \psi_1$.
In the stratum $\widetilde{\Theta}_1\smallsetminus \widetilde{\Theta}_2$, we have that the restriction is of the form $-3\psi_1$. This implies that $\rho_2^*(H)=-\lambda_1-3\psi_1-\theta_1$.
Finally, to compute the restriction of $c_i(\HH)$ for $i=1,2,3$, we can restrict to the closed substack $\widetilde{\mathcal M}_{2,1}\times \cB\GG_{\rmm} \into \widetilde{\mathcal M}_{2,1} \times [\AA^1/\GG_{\rmm}]$ as the pullback of the closed immersion is clearly an isomorphism because it is the zero section of a vector bundle. The explicit description of the isomorphism $\widetilde{\mathcal M}_{2,1}\times [\AA^1/\GG_{\rmm}] \simeq \cA_2$ (which was constructed in Section 2 of \cite{DiLorPerVis}) implies that the morphism $\rho_2\vert_{\widetilde{\mathcal M}_{2,1}\times \cB\GG_{\rmm}}$ maps an object $(\widetilde{C}/S,q)$ to the object $(C/S,p)$ in the following way: consider the projective bundle $\PP(N_{q}\oplus N_0)$ over $S$, where $N_q$ is the normal bundle of the section $q$ and $N_0$ is the pullback to $S$ of the $1$-dimensional representation of $\GG_{\rmm}$ of weight $1$; we have two natural sections defined by the two subbundles $N_q$ and $N_0$ of $N_q\oplus N_0$, namely $\infty$ and $0$; the object $(C/S,p)$ is defined by gluing $\infty$ with $q$, pinching in $0$ and then setting $p:=0$. A computation identical to the one of Proposition 5.9 of \cite{DiLorPerVis} implies the following formulas:
\begin{itemize}
\item $\rho_2^*(c_1(\HH))=\lambda_1+\psi_1-s$,
\item $\rho_2^*(c_2(\HH))=\lambda_2+\lambda_1(\psi_1-s)$,
\item $\rho_2^*(c_3(\HH))=\lambda_2(\psi_1-s)$.
\end{itemize}
The description of the restrictions of the generators of $\ch(\widetilde{\mathcal M}_3)$ gives us that the image of $\rho_{2,*}$ is the ideal generated by $\rho_{2,*}(s^i)$ for every non-negative integer $i$. Moreover, we have that
$$\rho_2^*(\delta_{1,1,1})=s\theta_2=s\big(\rho_2^*(\delta_{1,1})-s(\rho_2^*(\delta_1)-s)\big),$$
as one checks by substituting $\rho_2^*(\delta_1)=s+\theta_1$ and $\rho_2^*(\delta_{1,1})=\theta_2+s\theta_1$, which yields $s(\theta_2+s\theta_1-s\theta_1)=s\theta_2$; this implies that $\rho_{2,*}(s^i)$ is in the ideal generated by $\rho_{2,*}(1)$, $\rho_{2,*}(s)$ and $\rho_{2,*}(s^2)$ for every $i\geq 3$. Finally, we have that
$$s\theta_1=s(\rho_2^*(\delta_1)-s)$$
therefore we can use $\rho_{2,*}(s\theta_1)$ as a generator together with $\rho_{2,*}(1)$ and $\rho_{2,*}(s)$ instead of $\rho_{2,*}(s^2)$.
\end{proof}
\begin{remark}
Notice that $\rho_{2,*}(s)$ is equal to the fundamental class of the image of the morphism $\widetilde{\mathcal M}_{2,1} \times \cB\GG_{\rmm} \rightarrow \widetilde{\Delta}_1 \into \widetilde{\mathcal M}_{3}$. We denote this closed substack by $\widetilde{\Delta}_1^c$; it parametrizes curves $C$ obtained by gluing a genus $2$ curve with a genus $1$ cuspidal curve in a separating node.
In the same way, $\rho_{2,*}(s\theta_1)$ is equal to the fundamental class of the image of the morphism $\widetilde{\Theta}_1 \times \cB\GG_{\rmm} \into \widetilde{\Delta}_{1,1} \into \widetilde{\mathcal M}_3$. We denote this closed substack by $\widetilde{\Delta}_{1,1}^c$; it parametrizes curves $C$ in $\widetilde{\Delta}_{1,1}$ such that one of the two elliptic tails is cuspidal.
\end{remark}
\subsection*{Generators for the image of $\rho_{n,*}$ if $n$ is odd}
Now we deal with the odd case. This is a bit more convoluted as we have several strata to deal with for every $n$. Let us recall the results we have proven.
First of all, $\cA_{2m-1}$ is the disjoint union of $\cA_{2m-1}^{\rm ns}$ and $\cA_{2m-1}^i$ for $0\leq i\leq (g-m+1)/2$. Because $g=3$, we have the following possibilities:
\begin{itemize}
\item if $m=4$, we have only one component, namely $\cA_7^0$;
\item if $m=3$, we have two components, namely $\cA_5^0$ and $\cA_5^{\rm ns}$;
\item if $m=2$, we have three components, namely $\cA_3^0$, $\cA_3^1$ and $\cA_3^{\rm ns}$.
\end{itemize}
First of all, notice that $\cA_3^0$ is empty, due to the stability condition. Therefore we need to deal with $5$ components.
We start with the case $m=4$.
\begin{proposition}\label{prop:rho-7-surj}
The pullback of the morphism
$$\rho_7:\cA_7 \longrightarrow \widetilde{\mathcal M}_3$$
is surjective.
\end{proposition}
\begin{proof}
The proof is similar to that of \Cref{prop:rho-6-surj}. First of all, we describe the Chow ring of $\cA_7^0$. We apply \Cref{prop:descr-an-odd-i} and \Cref{lem:chow-tor} and we get that
$$\ch(\cA_7^0) \simeq \ch(I_{7}^0)^{\rm inv}$$
where the invariants are taken with respect to the action of $C_2$ induced by the involution defined by the association $$\Big((C_1,p_1),(C_2,p_2),\phi\Big) \mapsto \Big((C_2,p_2),(C_1,p_1),\phi^{-1}\Big).$$
By construction, we have that $I_7^0$ is the fiber product of the diagram
$$
\begin{tikzcd}
& {[E_{4,1}/G_4^{\times 2}]} \arrow[d] \\
\cB(\GG_{\rmm} \ltimes \GG_{\rma})^{\times 2} \arrow[r, "{\cB(f^{\times 2})}"] & \cB (G_4^{\times 2})
\end{tikzcd}
$$
where the morphism $f$ is described in the proof of \Cref{prop:rho-6-surj}. A simple computation shows that
$$ I_7^0 \simeq [\AA^1/\GG_{\rmm}]$$
where $\AA^1$ is the $\GG_{\rmm}$-representation with weight $2$. Furthermore, one can prove that $C_2$ acts trivially on $\GG_{\rmm}$ and acts on $\AA^1$ by the rule $\lambda \mapsto -\lambda$. Therefore it is clear that
$$\ch(I_7^0) \simeq \ZZ[1/6,s]$$
where $s$ is the generator of the Chow ring of $\cB \GG_{\rmm}$, and a simple computation shows that $\rho_7^{*}(H)=2s$.
\end{proof}
We now deal with the case $m=3$. We have to split it in two subcases, namely $\cA_5^0$ and $\cA_5^{\rm ns}$. We denote by $\rho_5^{0}$ and $\rho_5^{\rm ns}$ the restriction of $\rho_5$ to the two connected components $\cA_5^0$ and $\cA_5^{\rm ns}$ respectively.
\begin{proposition}\label{prop:rho-5-0-surj}
The pullback of the morphism
$$\rho_5^{0}:\cA_5^0 \longrightarrow \widetilde{\mathcal M}_3$$
is surjective.
\end{proposition}
\begin{proof}
In this case, \Cref{prop:descr-an-odd-i} tells us that $\cA_5^0$ is isomorphic to $I_5^0$. We have a commutative diagram
$$
\begin{tikzcd}
& {[E_{3,1}/G_3\times G_3]} \arrow[d] \\
{\cB(\GG_{\rmm} \ltimes \GG_{\rma})\times \widetilde{\mathcal M}_{1,1}\times [\AA^1/\GG_{\rmm}]} \arrow[r, "{(\cB f,g)}"] & \cB (G_3\times G_3)
\end{tikzcd}
$$
where the morphism $\cB f:\cB(\GG_{\rmm} \ltimes \GG_{\rma}) \rightarrow \cB G_3$ is the same as in \Cref{prop:rho-7-surj}, whereas we recall that
$$ g:\widetilde{\mathcal M}_{1,[3]} \longrightarrow \cB G_3$$
is defined by the association $(E,e)\mapsto \cO_{E}/m_e^3$. Because the morphism $f$ is injective (and therefore $\cB f$ is representable) and $E_{3,1}$ is a $\GG_{\rmm} \ltimes \GG_{\rma}$-torsor, we have that $\cA_{5}^0 \simeq \widetilde{\mathcal M}_{1,1}\times [\AA^1/\GG_{\rmm}]$. Therefore we have an isomorphism
$$\ch(\cA_5^0) \simeq \ZZ[1/6,t,s]$$
where $t$ (respectively $s$) is the generator of the Chow ring of $\widetilde{\mathcal M}_{1,1}$ (respectively $[\AA^1/\GG_{\rmm}]$). We can describe $\rho_5^0$ in the following way: if we have a geometric point $(E,e)$ in $\widetilde{\mathcal M}_{1,[3]}$, the image is the genus $3$ curve $(C,p)$ obtained by gluing the projective line $\PP^1$ to $E$, identifying $\infty$ with $e$ in an $A_5$-singularity (using the pushout construction as we have defined earlier), and setting $p=e$. Notice that there is a unique way of creating the $A_5$-singularity, up to a unique isomorphism of $(\PP^1,\infty)$.
Clearly, $\rho_5^{0,*}(\delta_1)=s$. However, $\rho_5^{0,*}(H)=0$, therefore we need to understand the pullback of the Chern classes of the Hodge bundle. It is enough to prove that $\rho_5^{0,*}(\lambda_1)=-2t+ns$ for some integer $n$. Therefore we can restrict the computation to the open substack $\widetilde{\mathcal M}_{1,1}\subset \widetilde{\mathcal M}_{1,1}\times [\AA^1/\GG_{\rmm}]$. Moreover, as the closed embedding $\cB\GG_{\rmm} \into \widetilde{\mathcal M}_{1,1}$ is the zero section of a vector bundle over $\cB\GG_{\rmm}$, it is enough to do the computation after restricting everything to $\cB\GG_{\rmm} \into \widetilde{\mathcal M}_{1,1}\times [\AA^1/\GG_{\rmm}]$. Suppose then that we have a cuspidal elliptic curve $(E,e)$ with $e$ a smooth point; its image under $\rho_5^0$ is the $1$-pointed genus $3$ curve $(C,p)$ constructed by gluing the projective line $(\PP^1,\infty)$ and $(E,e)$ (identifying $e$ and $\infty$) in an $A_5$-singularity and setting $p:=e$. We need to understand the $\GG_{\rmm}$-action on the vector space $\H^0(C,\omega_C)$.
Consider the exact sequence
$$ \begin{tikzcd}
0 \arrow[r] & \cO_C \arrow[r] & \cO_{E}\oplus \cO_{\PP^1} \arrow[r] & Q \arrow[r] & 0
\end{tikzcd}$$
and the induced long exact sequence on the global sections
$$
\begin{tikzcd}
0 \arrow[r] & \kappa \arrow[r] & \kappa^{\oplus 2} \arrow[r] & Q \arrow[r] & {\H^1(C,\cO_C)} \arrow[r] & {\H^1(E,\cO_E)} \arrow[r] & 0.
\end{tikzcd}
$$
We know that $\lambda_1=-c_1^{\GG_{\rmm}}(\H^1(C,\cO_C))$ and $c_1^{\GG_{\rmm}}(\H^1(E,\cO_E))=t$. It is enough to describe $c_1^{\GG_{\rmm}}(Q)$. Recall that $Q$ fits into an exact sequence of $\GG_{\rmm}$-representations
$$
\begin{tikzcd}
0 \arrow[r] & \cO_{3\infty}=\kappa[t]/(t^3) \arrow[r] & \cO_{3e}\oplus \cO_{3\infty}=(\kappa[t]/(t^3))^{\oplus 2}\arrow[r] & Q \arrow[r] & 0
\end{tikzcd}
$$
where as usual $\cO_{3e}$ (respectively $\cO_{3\infty}$) is the quotient $\cO_E/m_e^3$ (respectively $\cO_{\PP^1}/m_{\infty}^3)$. Therefore an easy computation shows that $c_1(Q)=-2t$ and thus the restriction of $\lambda_1$ to $\cB\GG_{\rmm}$ is equal to $-2t$.
\end{proof}
Finally, a proof similar to the one of \Cref{prop:rho-7-surj} gives us the following result.
\begin{proposition}
The pullback of the morphism
$$\rho_5^{\rm ns}:\cA_5^{\rm ns} \longrightarrow \widetilde{\mathcal M}_3$$
is surjective.
\end{proposition}
\begin{proof}
We leave it to the reader to check the details. See also \Cref{prop:rho-3-surj} for a similar result in the non-separating case.
\end{proof}
It remains to treat the case $m=2$, i.e.\ the strata classifying tacnodes.
\begin{proposition}\label{prop: rho-3-1-surj}
The pullback of the morphism
$$ \rho_3^{1}: \cA_3^1 \longrightarrow \widetilde{\mathcal M}_3$$
is surjective.
\end{proposition}
\begin{proof}
Thanks to \Cref{prop:descr-an-odd-i}, we can describe $\cA_3^1$ using a $C_2$-action on $I_3^1$, which is a $\GG_{\rmm}$-torsor over the stack $\widetilde{\mathcal M}_{1,[2]}\times \widetilde{\mathcal M}_{1,[2]}$ where $\widetilde{\mathcal M}_{1,[2]}\simeq \widetilde{\mathcal M}_{1,1}\times [\AA^1/\GG_{\rmm}]$. In fact, \Cref{lem:chow-tor} implies that
$$ \ch(\cA_3^1)\simeq \ch(I_3^1)^{\rm inv}.$$
Because the pullback of the closed immersion
$$ \widetilde{\mathcal M}_{1,1} \times \cB\GG_{\rmm} \into \widetilde{\mathcal M}_{1,1} \times [\AA^1/\GG_{\rmm}]$$
is an isomorphism at the level of Chow rings, it is enough to understand the $\GG_{\rmm}$-torsor when restricted to $(\widetilde{\mathcal M}_{1,1}\times \cB\GG_{\rmm})^{\times 2}$. Let us denote by $t_i$ (respectively $s_i$) the generator of the Chow ring of $\widetilde{\mathcal M}_{1,1}$ (respectively $\cB\GG_{\rmm}$) seen as the $i$-th factor of the product for $i=1,2$. Exactly as we have done in \Cref{prop:gener-rho-2}, we can describe the objects in this product as $$\Big((E_1,e_1), (\PP(N_{e_1}\oplus N_{s_1}),0,\infty), (E_2,e_2), (\PP(N_{e_2}\oplus N_{s_2}),0,\infty)\Big).$$ We recall that $N_{e_i}$ is the normal bundle of the section $e_i$ and $N_{s_i}$ is the representation of $\GG_{\rmm}$ (whose generator is $s_i$) with weight $1$ (for $i=1,2$). By construction, the first Chern class of $N_{e}$ is the $\psi$-class associated to an object $(E,e)$ in $\widetilde{\mathcal M}_{1,1}$.
The $\GG_{\rmm}$-torsor comes from identifying the two tangent spaces at $\infty$ of the two projective bundles. A computation shows that its class (namely the first Chern class of the line bundle associated to it) in the Chow ring of the product is of the form $t_1-s_1-t_2+s_2$. Therefore, if we denote by $b_i:=t_i-s_i$ for $i=1,2$, we have the following description of the Chow ring of $I_3^1$:
$$\ch(I_3^1)\simeq \ZZ[1/6,t_1,t_2,b_1,b_2]/(b_1-b_2).$$
Furthermore, the $C_2$-action over $I_3^1$ translates into an action on the Chow ring defined by the association
$$ (t_1,t_2,b_1,b_2) \mapsto (t_2,t_1,b_2,b_1)$$
therefore it is easy to compute the ring of invariants. We have the following result:
$$ \ch(\cA_3^1)\simeq \ZZ[1/6,d_1,d_2,b]$$
where $d_1:=t_1+t_2$, $d_2:=t_1t_2$ and $b:=b_1=b_2$. An easy computation shows that $\rho_{3}^{1,*}(\delta_1)=d_1-2b$ and $\rho_3^{1,*}(\delta_{1,1})=d_2-bd_1+b^2$. Finally, a computation identical to the one in the proof of \Cref{prop:rho-5-0-surj} for the $\lambda$-classes shows that $\rho_{3}^{1,*}(\lambda_1)=b-d_1$. The statement follows.
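For the reader's convenience, the invariant computation is a standard instance of the fundamental theorem of symmetric polynomials: any invariant $f$ is a symmetric polynomial in $t_1,t_2$ with coefficients in $\ZZ[1/6,b]$, hence a polynomial in the elementary symmetric functions
$$ d_1=t_1+t_2, \qquad d_2=t_1t_2 $$
with coefficients in $\ZZ[1/6,b]$; this holds over any base ring, so the inverted $2$ is not even needed here.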
\end{proof}
Finally, we arrive at the last computation of this sequence.
\begin{proposition}\label{prop:rho-3-surj}
The pullback of the morphism
$$\rho_3^{\rm ns}: \cA_3^{\rm ns} \longrightarrow \widetilde{\mathcal M}_3$$
is surjective.
\end{proposition}
\begin{proof}
For this proof, we denote by $\lambda_i$ the Chern classes of the Hodge bundle of $\widetilde{\mathcal M}_{1,2}$, while the Chern classes of the Hodge bundle of $\widetilde{\mathcal M}_3$ are denoted by $c_i(\HH)$.
Thanks to \Cref{prop:descr-an-odd-ns}, we have that the Chow ring of $\cA_3^{\rm ns}$ can be recovered from the Chow ring of the $\GG_{\rmm}$-torsor $I_3^{\rm ns}$ over $\widetilde{\mathcal M}_{1,2}\times [\AA^1/\GG_{\rmm}]\times [\AA^1/\GG_{\rmm}]$.
First of all, we know that $\widetilde{\mathcal M}_{1,2}\simeq \widetilde{\mathcal C}_{1,1}$ thanks to \Cref{prop:contrac}, and that the Chow ring of $\widetilde{\mathcal C}_{1,1}$ is isomorphic to
$$\ZZ[1/6,\lambda_1,\mu_1]/(\mu_1(\lambda_1+\mu_1));$$
see Proposition 3.3 of \cite{DiLorPerVis}. It is important to remark that $\mu_1$ is the fundamental class of the locus in $\widetilde{\mathcal C}_{1,1}$ parametrizing $(E,e_1,e_2)$ such that the two sections coincide.
We need to understand the class of the $\GG_{\rmm}$-torsor $I_3^{\rm ns}$ over $\widetilde{\mathcal M}_{1,2}\times [\AA^1/\GG_{\rmm}] \times [\AA^1/\GG_{\rmm}]$. As usual, we reduce to the closed substack $\widetilde{\mathcal M}_{1,2} \times \cB\GG_{\rmm} \times \cB\GG_{\rmm}$. If we denote by $s_1$ (respectively $s_2$) the generator of the Chow ring of the first $\cB\GG_{\rmm}$ (respectively the second $\cB\GG_{\rmm}$) in the product, we have that the same description we used in \Cref{prop: rho-3-1-surj} for the $\GG_{\rmm}$-torsor applies here and therefore we only need to understand the description of the two $\psi$-classes in $\ch(\widetilde{\mathcal M}_{1,2})$, namely $\psi_1$ and $\psi_2$. We claim that $\psi_1=\psi_2$. Consider the autoequivalence
$$ \widetilde{\mathcal M}_{1,2} \longrightarrow \widetilde{\mathcal M}_{1,2},$$
which is defined by the association $(E,e_1,e_2) \mapsto (E,e_2,e_1)$ (and therefore acts on the Chow ring by exchanging $\psi_1$ and $\psi_2$). It is easy to see that it is isomorphic to the identity functor because of the uniqueness of the involution, see for instance \Cref{lem:genus1}.
This implies that the class associated to the torsor $I_3^{\rm ns}$ is of the form $s_1-s_2$. Because the action of $C_2$ on $I_3^{\rm ns}$ translates into the involution
$$ (\lambda_1,\mu_1,s_1,s_2) \mapsto (\lambda_1,\mu_1,s_2,s_1)$$
of the Chow ring, we finally have
$$ \ch(\cA_3^{\rm ns})\simeq \ZZ[1/6, \lambda_1,\mu_1, s]/(\mu_1(\lambda_1+\mu_1))$$
where $s:=s_1=s_2$. It is easy to see that $\rho_3^{\rm ns,*}(\delta_1)=\mu_1$. Moreover, the same ideas used for the computations of the $\lambda$-classes in \Cref{prop:rho-5-0-surj} give us that $\rho_3^{\rm ns,*}(c_1(\HH))=-s$. Finally, it is enough to prove that $\rho_3^{\rm ns,*}(H)=-12\lambda_1$ modulo the ideal $(\mu_1,s)$, therefore we can restrict our computation to $\widetilde{\mathcal M}_{1,2}\smallsetminus \widetilde{\mathcal M}_{1,1}\subset \widetilde{\mathcal M}_{1,2}\times [\AA^1/\GG_{\rmm}] \times [\AA^1/\GG_{\rmm}]$. Notice that in this situation the $\GG_{\rmm}$-torsor is trivial. Recall that we have the formula $H=9c_1(\HH)-\delta_0-3\delta_1$ by \cite{Est}. Therefore it follows that $\rho_3^{\rm ns,*}(H)=-\delta_0$. To compute $\delta_0$ in $\widetilde{\mathcal M}_{1,2}\smallsetminus \widetilde{\mathcal M}_{1,1}$, we can consider the natural morphism
$$ \widetilde{\mathcal C}_{1,1} \longrightarrow \widetilde{\mathcal M}_{1,1}$$
and use the fact that $\delta_0 = 12\lambda_1$ in $\ch(\widetilde{\mathcal M}_{1,1})$.
\end{proof}
\Cref{lem:strata} implies that the image of $\rho_{\geq 2,*}$ in $\ch(\widetilde{\mathcal M}_3)$ is generated by the following cycles:
\begin{itemize}
\item the fundamental classes of $\widetilde{\Delta}_1^c$ and $\widetilde{\Delta}_{1,1}^c$;
\item the fundamental classes of the images of $\rho_7$, $\rho_5^0$ and $\rho_3^1$, which are closed inside $\widetilde{\mathcal M}_3$ because of \Cref{lem:sep-sing}; by abuse of notation, we denote these closed substacks by $\cA_7$, $\cA_5^0$ and $\cA_3^1$ respectively;
\item the fundamental classes of the closures of the images of $\rho_6$, $\rho_5^{\rm ns}$, $\rho_4$, $\rho_3^{\rm ns}$ and $\rho_2$; by abuse of notation, we denote these closed substacks by $\cA_6$, $\cA_5$, $\cA_4$, $\cA_3$ and $\cA_2$ respectively.
\end{itemize}
\begin{remark}
Notice that $\cA_6$, $\cA_4$ and $\cA_2$ are the stacks we previously denoted by $\widetilde{\cA}_{\geq 6}$, $\widetilde{\cA}_{\geq 4}$ and $\widetilde{\cA}_{\geq 2}$ respectively. Moreover, the stacks $\cA_5^{\rm ns}$ and $\cA_3^{\rm ns}$ are substacks of $\widetilde{\cA}_{\geq 5}$ and $\widetilde{\cA}_{\geq 3}$ respectively.
\end{remark}
\begin{corollary}\label{cor:relations}
The Chow ring of $\overline{\cM}_3$ is the quotient of the Chow ring of $\widetilde{\mathcal M}_3$ by the fundamental classes of $\cA_7$, $\cA_6$, $\cA_5^0$, $\cA_5^{\rm ns}$, $\cA_4$, $\cA_3^1$, $\cA_3^{\rm ns}$, $\cA_2$, $\widetilde{\Delta}_1^c$ and $\widetilde{\Delta}_{1,1}^c$.
\end{corollary}
\section{Explicit description of the relations}\label{sec:strategy}
We illustrate the strategy to compute the explicit description of the relations listed in \Cref{cor:relations}. Suppose we want to compute the fundamental class of a closed substack $X$ of $\widetilde{\mathcal M}_3$. First of all, we need to compute the classes of the restrictions of $X$ to every stratum, namely
$$ X\vert_{\widetilde{\mathcal M}_3\smallsetminus (\widetilde{\cH}_3 \cup \widetilde{\Delta}_1)}, X\vert_{\widetilde{\cH}_3\smallsetminus \widetilde{\Delta}_1}, X\vert_{\widetilde{\Delta}_1\smallsetminus \widetilde{\Delta}_{1,1}}, X\vert_{\widetilde{\Delta}_{1,1}\smallsetminus \widetilde{\Delta}_{1,1,1}}, X\vert_{\widetilde{\Delta}_{1,1,1}}.$$
Once we have the explicit descriptions, we need to patch them together. Let us show how to do it for the first two strata, i.e. $\widetilde{\mathcal M}_3\smallsetminus(\widetilde{\cH}_3 \cup \widetilde{\Delta}_1)$ and $\widetilde{\cH}_3 \smallsetminus \widetilde{\Delta}_1$. Suppose we have the description of $X^q:=X\vert_{\widetilde{\mathcal M}_3 \smallsetminus (\widetilde{\Delta}_{1}\cup \widetilde{\cH}_3)}$ and of $X^h:=X\vert_{\widetilde{\cH}_3 \smallsetminus \widetilde{\Delta}_1}$ in their respective Chow rings. Then we can compute $X\vert_{\widetilde{\mathcal M}_{3}\smallsetminus \widetilde{\Delta}_1}$ using \Cref{lem:gluing}. Suppose we are given
$$ X\vert_{\widetilde{\mathcal M}_3\smallsetminus \widetilde{\Delta}_1}=p + Hq \in \ch(\widetilde{\mathcal M}_3\smallsetminus \widetilde{\Delta}_1)$$
an expression of $X$; we need to determine the two polynomials $p$ and $q$. If we restrict $X$ to $\widetilde{\mathcal M}_3\smallsetminus (\widetilde{\cH}_3\cup \widetilde{\Delta}_1)$, we get that the polynomial $p$ can be chosen to be any lifting of $X^q$. Now if we restrict to $\widetilde{\cH}_3 \smallsetminus \widetilde{\Delta}_1$ we get that
$$ i_h^*p + c_{\rm top}(N_{\cH|\cM})q = X^h$$
where as usual $i_h:\widetilde{\cH}_3 \smallsetminus \widetilde{\Delta}_1 \into \widetilde{\mathcal M}_3 \smallsetminus \widetilde{\Delta}_1$ is the closed immersion of the hyperelliptic stratum and $N_{\cH|\cM}$ is the normal bundle of this immersion. Because of the commutativity of the diagram in \Cref{lem:gluing}, we have that $X^h-i_h^*p$ is in the ideal in $\ch(\widetilde{\cH}_3 \smallsetminus \widetilde{\Delta}_1)$ generated by $c_{\rm top}(N_{\cH|\cM})$. However, the top Chern class is a non-zero divisor, thus we can choose $q$ to be any lifting of $\widetilde{q}$, where $\widetilde{q}$ is an element in $\ch(\widetilde{\cH}_3 \smallsetminus \widetilde{\Delta}_1)$ such that $X^h-i_h^*p=c_{\rm top}(N_{\cH|\cM})\widetilde{q}$. Although we have made many choices, it is easy to see that the presentation of $X\vert_{\widetilde{\mathcal M}_3 \smallsetminus \widetilde{\Delta}_1}$ is unique in the Chow ring of $\widetilde{\mathcal M}_3\smallsetminus \widetilde{\Delta}_1$, i.e. two different presentations differ by a relation in the Chow ring.
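In summary, the patching step amounts to the following recipe: choose any lifting $p$ of $X^q$, find the unique class $\widetilde{q}$ satisfying
$$ X^h-i_h^*p=c_{\rm top}(N_{\cH|\cM})\,\widetilde{q} \in \ch(\widetilde{\cH}_3\smallsetminus \widetilde{\Delta}_1),$$
and choose any lifting $q$ of $\widetilde{q}$; then $X\vert_{\widetilde{\mathcal M}_3\smallsetminus \widetilde{\Delta}_1}=p+Hq$. The uniqueness of $\widetilde{q}$ holds precisely because $c_{\rm top}(N_{\cH|\cM})$ is a non-zero divisor.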
We first show how to apply this strategy to compute the fundamental classes $\delta_{1}^c$ and $\delta_{1,1}^c$ of $\widetilde{\Delta}_1^c$ and $\widetilde{\Delta}_{1,1}^c$ respectively.
\begin{proposition}
We have the following description:
$$ \delta_1^c = 6\big(\delta_1(H+\lambda_1+3\delta_1)^2+4\delta_{1,1}(\lambda_1-H-2\delta_1)+12\delta_{1,1,1}\big)$$
and
$$ \delta_{1,1}^c = 24(\delta_{1,1}(\delta_1+\lambda_1)^2+\delta_{1,1,1}\delta_1)$$
in the Chow ring of $\widetilde{\mathcal M}_3$.
\end{proposition}
\begin{proof}
First of all, we have that $\widetilde{\Delta}_1^c\subset \widetilde{\Delta}_1$, therefore the generic expression of the class $\delta_1^c$ is of the form
$$ \delta_1 p_2 + \delta_{1,1} p_1 + \delta_{1,1,1} p_0$$
where $p_0,p_1,p_2$ are homogeneous polynomials in $\ZZ[1/6,\lambda_1,\lambda_2,\lambda_3, \delta_1, \delta_{1,1},\delta_{1,1,1}, H]$ of degree $0,1,2$ respectively.
We start by restricting $\delta_1^c$ to $\widetilde{\mathcal M}_3\smallsetminus \widetilde{\Delta}_{1,1}$. Here we have the sequence of embeddings
$$ \widetilde{\Delta}_1^c\cap (\widetilde{\Delta}_1 \smallsetminus \widetilde{\Delta}_{1,1}) \into (\widetilde{\Delta}_1 \smallsetminus \widetilde{\Delta}_{1,1}) \into \widetilde{\mathcal M}_3 \smallsetminus \widetilde{\Delta}_{1,1}$$
which implies that $\delta_1^c$ restricted to $\ch(\widetilde{\Delta}_{1}\smallsetminus \widetilde{\Delta}_{1,1})$ is equal to $24t^2(t+t_1)$, where $24t^2$ is the fundamental class of the closed embedding $\widetilde{\Delta}_1^c \into \widetilde{\Delta}_{1}$ (restricted to the open substack $\widetilde{\Delta}_1 \smallsetminus \widetilde{\Delta}_{1,1}$) while $(t+t_1)$ is the first Chern class of the normal bundle of the closed embedding $i_{1}:\widetilde{\Delta}_1 \smallsetminus \widetilde{\Delta}_{1,1} \into \widetilde{\mathcal M}_3 \smallsetminus \widetilde{\Delta}_{1,1}$. Because $t+t_1$ is not a zero divisor in the Chow ring, we have that $i_1^*(p_2)=24t^2$, which implies $p_2=6(H+\lambda_1+3\delta_1)^2$.
Now we have to compute the restriction of $\delta_1^c$ to $\widetilde{\Delta}_{1,1}\smallsetminus \widetilde{\Delta}_{1,1,1}$. This is not trivial, because $\widetilde{\Delta}_1^c$ is contained in $\widetilde{\Delta}_1$ but the closed immersion $\widetilde{\Delta}_{1,1} \smallsetminus \widetilde{\Delta}_{1,1,1} \into \widetilde{\Delta}_{1}\smallsetminus \widetilde{\Delta}_{1,1,1}$ is not regular. As a matter of fact, one can prove that the naive computation fails, i.e. the difference $\delta_1^c-\delta_1p_2$ restricted to $\widetilde{\Delta}_{1,1}\smallsetminus \widetilde{\Delta}_{1,1,1}$ is not divisible by the top Chern class of the normal bundle of $i_{1,1}:\widetilde{\Delta}_{1,1}\smallsetminus \widetilde{\Delta}_{1,1,1}\into \widetilde{\mathcal M}_3 \smallsetminus \widetilde{\Delta}_{1,1,1}$. To do it properly, we can consider the following cartesian diagram
$$
\begin{tikzcd}
{(\widetilde{\mathcal M}_{2,1}\smallsetminus \widetilde{\Theta}_2)\times \cB\GG_{\rmm}} \arrow[r, "\rho_2^c"] & {\widetilde{\mathcal M}_3 \smallsetminus \widetilde{\Delta}_{1,1,1}} \\
{(\widetilde{\mathcal M}_{1,2}\smallsetminus \widetilde{\mathcal M}_{1,1})\times \widetilde{\mathcal M}_{1,1}\times \cB\GG_{\rmm}} \arrow[u, hook, "\mu_1\times \id"] \arrow[r, "\rho_{1,1}"] & {\widetilde{\Delta}_{1,1}\smallsetminus \widetilde{\Delta}_{1,1,1}} \arrow[u, "{i_{1,1}}", hook]
\end{tikzcd}
$$
where
$$\mu_1:(\widetilde{\mathcal M}_{1,2}\smallsetminus \widetilde{\mathcal M}_{1,1})\times \widetilde{\mathcal M}_{1,1}\simeq (\widetilde{\Theta}_1\smallsetminus \widetilde{\Theta}_2) \into \widetilde{\mathcal C}_2\smallsetminus \widetilde{\Theta}_2 \simeq \widetilde{\mathcal M}_{2,1} \smallsetminus \widetilde{\Theta}_2$$
is described in the proof of \Cref{lem:chow-ring-C2} and $\rho_2^c$ is the restriction of $\rho_2$ to the closed substack $\widetilde{\Delta}_1^c$. Notice that $\mu_1\times \id$ is a regular embedding of codimension $1$ whereas $i_{1,1}$ is regular of codimension $2$. Excess intersection theory implies that
$$ \delta_1^c=\rho_{2,*}^c(1), \qquad i_{1,1}^*\delta_1^c=\rho_{(1,1),*}(c_1(\rho_{1,1}^*N_{i_{1,1}}/N_{\mu_1\times \id}));$$
the normal bundle $N_{\mu_1}$ was described in Proposition 4.9 of \cite{DiLorPerVis} while $N_{i_{1,1}}$ was described in \Cref{prop:relat-detilde-1-1} (see also the proof of \Cref{lem:chow-ring-C2}). A computation shows that
$$ c_1(\rho_{1,1}^*N_{i_{1,1}}/N_{\mu_1\times \id})=(t+t_2)$$
where $t$ is the generator of $\ch(\widetilde{\mathcal M}_{1,2}\smallsetminus \widetilde{\mathcal M}_{1,1})$ while $t_2$ is the generator of $\ch(\cB\GG_{\rmm})$. Finally, we need to compute $\rho_{(1,1),*}(t+t_2)$. This can be done by noticing that $\rho_{1,1}$ factors through the morphism described in \Cref{lem:detilde-1-1}, i.e. the diagram
$$
\begin{tikzcd}
{(\widetilde{\mathcal M}_{1,2}\smallsetminus \widetilde{\mathcal M}_{1,1})\times \widetilde{\mathcal M}_{1,1}\times \cB\GG_{\rmm}} \arrow[r, "\tilde{\rho}_{1,1}"] \arrow[rd, "\rho_{1,1}"] & {(\widetilde{\mathcal M}_{1,2}\smallsetminus \widetilde{\mathcal M}_{1,1})\times \widetilde{\mathcal M}_{1,1} \times \widetilde{\mathcal M}_{1,1}} \arrow[d, "\pi_2"] \\
& {\widetilde{\Delta}_{1,1}\smallsetminus \widetilde{\Delta}_{1,1,1}}
\end{tikzcd}
$$
is commutative, where $\tilde{\rho}_{1,1}$ is induced by the zero section $\cB\GG_{\rmm} \into [\AA^2/\GG_{\rmm}]\simeq \widetilde{\mathcal M}_{1,1}$. A simple computation using the fact that $\pi_2$ is a $C_2$-torsor gives us $$\delta_1^c\vert_{\widetilde{\Delta}_{1,1}\smallsetminus \widetilde{\Delta}_{1,1,1}}=24(c_1t^2-2c_2t+c_1^3-3c_1c_2).$$
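The standard facts about $C_2$-torsors used here are the following: if $\pi\colon Y \rightarrow X$ is a $C_2$-torsor and $2$ is invertible, then $\pi^*$ identifies $\ch(X)$ with the ring of invariants $\ch(Y)^{C_2}$ and $\pi_*\pi^*=2\cdot \id$. Combined with the functoriality
$$ \rho_{(1,1),*}=\pi_{2,*}\circ \tilde{\rho}_{1,1,*}$$
coming from the commutative diagram above, this reduces the computation of $\rho_{(1,1),*}(t+t_2)$ to a pushforward along a finite étale cover of degree $2$.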
We can now divide $(\delta_1^c-\delta_1p_2)\vert_{\widetilde{\Delta}_{1,1}\smallsetminus \widetilde{\Delta}_{1,1,1}}$ by $c_2+tc_1+t^2$, which is the second Chern class of the normal bundle of $i_{1,1}$, and get $i_{1,1}^*p_1=24(-2t-3c_1)$. Therefore we have $p_1=24(\lambda_1-H-2\delta_1)$.
Finally, we have to restrict $\delta_1^c$ to $\widetilde{\Delta}_{1,1,1}$. Again we can consider the following diagram
$$
\begin{tikzcd}
{\widetilde{\mathcal M}_{1,1}\times \widetilde{\mathcal M}_{1,1}\times \cB\GG_{\rmm}} \arrow[r, "c_2\times \id"] \arrow[d, "\tilde{\rho}_2", hook] & \widetilde{\Theta}_2 \times \cB\GG_{\rmm} \arrow[r, "\mu_2\times \id", hook] \arrow[d, "\alpha"] & {\widetilde{\mathcal M}_{2,1} \times \cB\GG_{\rmm}} \arrow[d] \\
{\widetilde{\mathcal M}_{1,1}\times \widetilde{\mathcal M}_{1,1}\times \widetilde{\mathcal M}_{1,1}} \arrow[r, "c_6"] & {\widetilde{\Delta}_{1,1,1}} \arrow[r, "{i_{1,1,1}}", hook] & \widetilde{\mathcal M}_3
\end{tikzcd}
$$
where the right square is cartesian, the morphism $\mu_2:\widetilde{\Theta}_2 \into \widetilde{\mathcal C}_2\simeq \widetilde{\mathcal M}_{2,1}$ is the closed immersion described in Section 4 of \cite{DiLorPerVis} (see also the proof of \Cref{lem:chow-ring-C2}), the morphism $c_2:\widetilde{\mathcal M}_{1,1}\times \widetilde{\mathcal M}_{1,1}\rightarrow \widetilde{\Delta}_1\simeq \widetilde{\Theta}_2$ is the gluing morphism (see proof of Lemma 4.8 of \cite{DiLorPerVis}) and $c_6$ is the morphism described in \Cref{lem:descr-delta-1-1-1}.
Excess intersection theory implies that
$$\delta_1^c\vert_{\widetilde{\Delta}_{1,1,1}}= \alpha_*(e_{\mu_2}),$$
where $e_{\mu_2}$ denotes the excess class of the diagram; a computation shows that
$$e_{\mu_2}= \frac{(c_2\times \id)_*\tilde{\rho}_2^*(t_1+t_2)}{2}$$
where $t_i$ is the generator of the $i$-th factor of the product $\widetilde{\mathcal M}_{1,1}^{\times 3}$ for $i=1,2,3$. Therefore
$$\delta_{1}^c\vert_{\widetilde{\Delta}_{1,1,1}}=c_{6,*}(12t_3^2(t_1+t_2))=24(c_1c_2-3c_3)$$
in the Chow ring of $\widetilde{\Delta}_{1,1,1}$.
Putting these computations together, we obtain the stated expression for $\delta_1^c$. The same procedure can be used to compute the class of $\delta_{1,1}^c$.
\end{proof}
\begin{remark}
The relation given by $\delta_1^c$ shows that the generator $\delta_{1,1,1}$ is not needed in $\ch(\overline{\cM}_3)$.
\end{remark}
\subsection*{The fundamental classes of $\cA_5^0$ and $\cA_3^1$}
Now we concentrate on describing two of the strata of separating singularities, namely $\cA_{3}^1$ and $\cA_5^0$. Let us start with $\cA_3^1$.
\begin{proposition}
We have the following description
$$ [\cA_3^1]=\frac{1}{2}(H+\lambda_1+\delta_1)(3H+\lambda_1+\delta_1) $$
in the Chow ring of $\widetilde{\mathcal M}_3$.
\end{proposition}
\begin{proof}
A generic object in $\cA_3^1$ can be described as two genus $1$ curves intersecting in a separating tacnode. We claim that the morphism $\cA_3^1 \rightarrow \widetilde{\mathcal M}_3$ (which is proper thanks to \Cref{lem:sep-sing}) factors through $\Xi_1\subset\widetilde{\cH}_3 \into \widetilde{\mathcal M}_3$. In fact, given an element $((E_1,e_1),(E_2,e_2),\phi)$ in $\cA_3^1$ with $E_1$ and $E_2$ smooth genus $1$ curves, we can consider the hyperelliptic involution of $E_1$ (respectively $E_2$) induced by the complete linear system of $\cO(2e_1)$ (respectively $\cO(2e_2)$). It is easy to see that their differentials commute with the isomorphism $\phi$, and therefore we get an involution of the genus $3$ curve whose quotient is a genus $0$ curve with one node. Because we have considered a generic element and the hyperelliptic locus is closed, we get the claim. In particular, $\cA_3^1\vert_{\widetilde{\mathcal M}_3 \smallsetminus (\widetilde{\cH}_3 \cup \widetilde{\Delta}_1)}$ is zero.
Consider now the description of $\widetilde{\cH}_3\smallsetminus \widetilde{\Delta}_1$. Because $\cA_3^1\smallsetminus \widetilde{\Delta}_1 \into \Xi_1\smallsetminus \widetilde{\Delta}_1 \into \widetilde{\cH}_3 \smallsetminus \widetilde{\Delta}_1$, we have that excess intersection theory gives us the equalities
$$ [\cA_3^1]\vert_{\widetilde{\cH}_3 \smallsetminus \widetilde{\Delta}_1} = c_1(N_{\cH|\cM})[a_3]=\frac{2\xi_1-\lambda_1}{3}[a_3]$$
where $a_3$ is the class of $\cA_3^1$ as a codimension $2$ closed substack in $\widetilde{\cH}_3\smallsetminus \widetilde{\Delta}_1$. Suppose we have an element $(Z/S,L,f)$ in $\widetilde{\cH}_3 \smallsetminus \widetilde{\Delta}_1$; first of all we have that $\cA_3^1\subset \Xi_1$, therefore $Z/S \in \cM_0^{1}$. Furthermore, because we have a separating tacnode between two genus $1$ curves, it is clear that the nodal section $n$ in $Z$ has to be in the branching locus, or equivalently $f(n)=0$. The converse is also true thanks to \Cref{prop:description-quotient}. Due to the description of $\Xi_1$, we have that
$$ a_3 = (-2s)[\Xi_1] = (-2s)(-c_1) $$
and therefore using $\lambda$-classes as generators
$$ [\cA_3^1]\vert_{\widetilde{\cH}_3 \smallsetminus \widetilde{\Delta}_1}=\frac{1}{3}a_3 (2\xi_1-\lambda_1) = \frac{2}{9}(\xi_1+\lambda_1)(2\xi_1-\lambda_1)\xi_1.$$
Let us focus on the restriction to $\widetilde{\Delta}_1 \smallsetminus \widetilde{\Delta}_{1,1} \simeq (\widetilde{\mathcal M}_{2,1}\smallsetminus \widetilde{\Theta}_1) \times \widetilde{\mathcal M}_{1,1}$. It is easy to see that the only geometric objects that are in $\cA_3^1$ are of the form $((C,p),(E,e))$ where $p$ lies in an almost-bridge of the genus $2$ curve $C$, i.e. $p$ is a smooth point in a projective line that intersects the rest of the curve in a tacnode. Recall that we have an open immersion
$$\widetilde{\mathcal M}_{2,1}\smallsetminus \widetilde{\Theta}_1 \simeq \widetilde{\mathcal C}_{2}\smallsetminus \widetilde{\Theta}_1$$
described in \Cref{prop:contrac}. Through this identification, $(C,p)$ corresponds to a pair $(C',p')$ where $C'$ is an ($A_r$-)stable genus $2$ curve and $p'$ is a cuspidal point. Therefore, in the notation of \Cref{cor:mtilde_21}, the locus of separating tacnodes is cut out by the equations $s=a_5=a_4=0$. We get the following expression
$$ [\cA_3^1]\vert_{\widetilde{\Delta}_{1}\smallsetminus\widetilde{\Delta}_{1,1}} = -2t_1(t_0-2t_1)(t_0-3t_1) \in \ch(\widetilde{\Delta}_1 \smallsetminus \widetilde{\Delta}_{1,1}).$$
The same idea works for $\widetilde{\Delta}_{1,1}\smallsetminus \widetilde{\Delta}_{1,1,1}$ (using \Cref{lem:mtilde-12}) and gives us the following description
$$ [\cA_3^1]\vert_{\widetilde{\Delta}_{1,1}\smallsetminus \widetilde{\Delta}_{1,1,1}}= -24t^3 \in \ch(\widetilde{\Delta}_{1,1} \smallsetminus \widetilde{\Delta}_{1,1,1}).$$
Finally, it is clear that $\cA_3^1 \cap \widetilde{\Delta}_{1,1,1} = \emptyset$. Patching these restrictions together as explained at the beginning of the section yields the stated formula.
\end{proof}
Now we focus on the fundamental class of $\cA_5^0$. A generic object in $\cA_5^0$ can be described as a genus $1$ curve and a genus $0$ curve intersecting in an $A_5$-singularity. Notice that this implies that $\cA_5^0 \cap \widetilde{\cH}_3=\emptyset$ because the only possible involution of an $A_5$-singularity has to exchange the two irreducible components (see \Cref{prop:description-quotient}). In the same way, it is easy to prove that $\cA_5^0 \cap \widetilde{\Delta}_{1,1} = \emptyset$.
The intersection of $\cA_5^0$ with $\widetilde{\Delta}_{1}$ is clearly transversal, therefore
$$[\cA_5^0]\vert_{\widetilde{\Delta}_{1}\smallsetminus \widetilde{\Delta}_{1,1}}=[\cA_5^0 \cap \widetilde{\Delta}_1].$$
\begin{lemma}\label{lem:a-5-0}
We have the following equality
$$ [\cA_5^0]\vert_{\widetilde{\Delta}_{1}\smallsetminus \widetilde{\Delta}_{1,1}} = 72(t_0+t_1)^3t_0t_1 - 384(t_0+t_1)(t_0t_1)^2$$
in the Chow ring of $\widetilde{\Delta}_1 \smallsetminus \widetilde{\Delta}_{1,1}$.
\end{lemma}
\begin{proof}
Because every $A_5$-singularity for a curve of genus $2$ is separating, it is enough to compute the fundamental class of the locus of $A_5$-singularities in $\widetilde{\mathcal C}_2 \smallsetminus \widetilde{\Theta}_1$. We know that
$$\widetilde{\mathcal C}_2 \smallsetminus \widetilde{\Theta}_1 \simeq [\widetilde{\AA}(6)\smallsetminus 0/B_2]$$
see \Cref{cor:mtilde_21} for a more detailed discussion. We have that an element $(f,s) \in \widetilde{\AA}(6)$ defines a genus $2$ curve with an $A_5$-singularity if and only if $f \in \AA(6)$ has a root of multiplicity $6$. Because $B_2$ is a special group, it is enough to compute the $T$-equivariant fundamental class of the locus parametrizing sections of $\AA(6)$ which have a root of multiplicity $6$, where $T$ is the maximal torus inside $B_2$. Therefore one can use the formula in \Cref{rem:gener}.
\end{proof}
It remains to describe the restriction of $\cA_5^0$ to $\widetilde{\mathcal M}_3\smallsetminus (\widetilde{\cH}_3 \cup \widetilde{\Delta}_1)$. First of all, we pass to the projective setting. Recall that
$$ \widetilde{\mathcal M}_3\smallsetminus (\widetilde{\cH}_3 \cup \widetilde{\Delta}_1) \simeq [U/\mathrm{GL}_3]$$
where $U$ is an invariant open subset inside the $\mathrm{GL}_3$-representation given by the space of forms of degree $4$ in three variables. Because $U$ does not contain the zero section, we can consider the projectivization $\overline{U}$ in $\PP^{14}$ and we have the isomorphism
$$ \ch_{\mathrm{GL}_3}(U)\simeq \ch_{\mathrm{GL}_3}(\overline{U})/(c_1-h)$$
where $c_1$ is the first Chern class of the standard representation of $\mathrm{GL}_3$. We want to describe the locus $X_5^0$ which parametrizes pairs $(f,p)$ such that $p$ is a separating $A_5$-singularity of $f$; in fact, because a quartic in $\PP^2$ has at most one $A_5$-singularity, we can compute the pushforward through $\pi$ of the fundamental class of $X_5^0$ and then set $h=c_1$ to get the fundamental class of $\cA_5^0$. Notice that a pair $(f,p) \in X_5^0$ can be described as a cubic $g$ and a line $l$ such that $f=gl$ and $p$ is the only intersection point of $g$ and $l$, or equivalently $p$ is a flex of $g$ and $l$ is the flex tangent.
To describe $X_5^0$, we first introduce the locally closed substack $X_2$ of $[\PP^{14}\times \PP^2/\mathrm{GL}_3]$ which parametrizes pairs $(f,p)$ such that $p$ is an $A_r$-singularity of $f$ with $r\geq 2$ (possibly $r=\infty$).
Recall that we have the isomorphism
$$ [\PP^{14}\times \PP^2/\mathrm{GL}_3] \simeq [\PP^{14}/H]$$
where $H$ is the stabilizer subgroup of $\mathrm{GL}_3$ of the point $[0:0:1]$ in $\PP^2$. The isomorphism is a consequence of the transitivity of the action of $\mathrm{GL}_3$ on $\PP^2$. Moreover, $H\simeq (\mathrm{GL}_2\times \GG_{\rmm})\ltimes \GG_{\rma}^2$; see \Cref{rem:H}. We set $G:=(B_2\times \GG_{\rmm})\ltimes \GG_{\rma}^2$, where $B_2 \into \mathrm{GL}_2$ is the Borel subgroup of upper triangular matrices inside $\mathrm{GL}_2$.
Finally, let us denote by $L$ the standard representation of $\GG_{\rmm}$, by $E$ the standard representation of $\mathrm{GL}_2$ and $E_0 \subset E$ the flag induced by the action of $B_2$ on $E$; $E_1$ is the quotient $E/E_0$. We denote by $V_3$ the $H$-representation
$$(L^{\vee} \otimes {\rm Sym}^3E^{\vee}) \oplus {\rm Sym}^4E^{\vee}.$$
\begin{lemma}\label{lem:X-2}
In the setting above, we have the following isomorphism
$$ X_2 \simeq [V_3 \otimes E_0^{\otimes 2} \otimes L^{\otimes 2}/G]$$
of algebraic stacks.
\end{lemma}
\begin{proof}
Thanks to the isomorphism
$$ [\PP^{14}\times \PP^2/\mathrm{GL}_3] \simeq [\PP^{14}/H]$$
we can describe $X_2$ as a substack of the right-hand side. The coordinates of $[\PP^{14}/H]$ are the coefficients of the generic polynomial $p=a_{00}+a_{10}x+ a_{01}y+ \dots +a_{04}y^4$ which is the dehomogenization of the generic quartic $f$ at the point $[0:0:1]$.
First of all, notice that $X_2$ is contained in the complement of the closed substack $X$ parametrizing polynomials $f$ such that its first and second derivatives in $x$ and $y$ vanish at $(0,0):=[0:0:1]$, thanks to \Cref{lem:A-sing}. This is an $H$-equivariant subbundle of codimension $6$ in $[\PP^{14}/H]$ defined by the equations $$a_{00}=a_{10}=a_{01}=a_{20}=a_{11}=a_{02}=0.$$
We denote it simply by $[\PP^8/H]$. Moreover, $X_2$ is contained in the locus parametrizing quartics $f$ which are singular at $(0,0)$. This is an $H$-equivariant subbundle defined by the equations $a_{00}=a_{10}=a_{01}=0$. We denote it simply by $[\PP^{11}/H]$, therefore $X_2 \into [\PP^{11}\smallsetminus \PP^8/H]$. By construction, $\PP^{11}$ can be described as the projectivization of the $H$-representation $V$ defined as
$$ (L^{\otimes -2}\otimes{\rm Sym}^2E^{\vee}) \oplus V_3$$
whereas $\PP^8$ is the projectivization of $V_3$.
Consider an element $p$ in $\PP^{11}$, described as the polynomial $a_{20}x^2+a_{11}xy+a_{02}y^2 + p_3(x,y) +p_4(x,y)$ where $p_3$ and $p_4$ are homogeneous polynomials in $x,y$ of degree respectively $3$ and $4$. Notice that $(0,0)$ is an $A_1$-singularity, i.e. an ordinary node, if and only if $a_{11}^2-4a_{02}a_{20}\neq 0$. In fact, an $A$-singularity (possibly $A_{\infty}$) is a node if and only if it has two distinct tangent lines. Therefore $X_2$ is the locus where $a_{11}^2-4a_{02}a_{20}=0$, and it is enough to describe $\VV(a_{11}^2-4a_{02}a_{20})$ in $[\PP^{11}\smallsetminus \PP^8/H]$.
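To make the criterion explicit (a standard check, available since $2$ is invertible): when $a_{20}\neq 0$, the quadratic term factors over the algebraic closure as
$$ a_{20}x^2+a_{11}xy+a_{02}y^2=a_{20}(x-\alpha y)(x-\beta y)$$
with $\alpha+\beta=-a_{11}/a_{20}$ and $\alpha\beta=a_{02}/a_{20}$, so that
$$(\alpha-\beta)^2=\frac{a_{11}^2-4a_{02}a_{20}}{a_{20}^2};$$
the two tangent lines at the singular point are distinct exactly when the discriminant does not vanish. The degenerate case $a_{20}=0$ can be treated analogously by exchanging the roles of $x$ and $y$.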
We define $W$ as the $H$-representation
$$W:=(L^{\vee}\otimes E^{\vee}) \oplus (L^{\vee} \otimes {\rm Sym}^3E^{\vee}) \oplus {\rm Sym}^4E^{\vee}$$
and we consider the $H$-equivariant closed embedding of $W\into V$ induced by the morphism of $H$-schemes
$$ L^{\vee}\otimes E^{\vee} \into {\rm Sym}^2(L^{\vee} \otimes E^{\vee})\simeq L^{\otimes -2}\otimes {\rm Sym}^2E^{\vee}$$
which is defined by the association $(f_1,f_2) \mapsto (f_1^2,2f_1f_2,f_2^2)$. Now, consider the $\GG_{\rmm}$-action on $W$ defined using weight $1$ on the $2$-dimensional vector space $(L^{\vee}\otimes E^{\vee})$ and weight $2$ on $V_3$. We denote by $\PP_{1,2}(W)$ the quotient stack and by $\PP_2(V_3)\into \PP_{1,2}(W)$ the closed substack induced by the embedding $V_3\into W$. One can prove that the morphism
$$ [\PP_{1,2}(W)\smallsetminus \PP_2(V_3)/H] \into [\PP(V)\smallsetminus \PP(V_3)/H]$$
induced by the closed immersion $W \into V$ is a closed immersion too and its scheme-theoretic image is exactly the locus $\VV(a_{11}^2-4a_{02}a_{20})$. We are considering the action of $H$ on a stack as defined in \cite{Rom}. Finally, because the action of $\mathrm{GL}_2$ over $E^{\vee}$ is transitive, we have the isomorphism
$$ [\PP_{1,2}(W) \smallsetminus \PP_2(V_3)/H] \simeq [\PP_{1,2}(W_0) \smallsetminus \PP_2(V_3)/G]$$
where the $G$-representation $W_0\into W$ is defined as $(L^{\vee}\otimes E_0^{\vee})\oplus V_3$. We want to stress that this is true only if we remove the locus $\PP_2(V_3)$; in fact, the subgroup of stabilizers of $W_0$ in $W$ is equal to $H$ when restricted to $V_3$. Finally, we notice that we have an isomorphism $$\PP_{1,2}(W_0)\smallsetminus \PP_2(V_3)\simeq V_3 \otimes (E_0^{\otimes 2}\otimes L^{\otimes 2})$$ of $G$-schemes.
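For the convenience of the reader, let us verify that the image of $W$ lands in $\VV(a_{11}^2-4a_{02}a_{20})$: by construction, the component of the image in $L^{\otimes -2}\otimes {\rm Sym}^2E^{\vee}$ is of the form $(a_{20},a_{11},a_{02})=(f_1^2,2f_1f_2,f_2^2)$, hence
$$ a_{11}^2-4a_{02}a_{20}=4f_1^2f_2^2-4f_2^2f_1^2=0,$$
i.e. the quadratic term $(f_1x+f_2y)^2$ is the square of a linear form, which is exactly the case of a single (doubled) tangent line.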
\end{proof}
\begin{remark}
By construction, an element $f \in V_3 \otimes E_0^{\otimes 2} \otimes L^{\otimes 2}$ is associated to the curve $y^2=f(x,y)$. Notice that $E^{\vee}$ is the vector space $E_0^{\vee}\oplus E_1^{\vee}$ where $x$ is a generator for $E_1^{\vee}$ and $y$ is a generator for $E_0^{\vee}$. Moreover, $L^{\vee}$ is generated by $z$.
\end{remark}
\begin{corollary}
We have an isomorphism of rings
$$ \ch(X_2)\simeq \ZZ[1/6,t_1,t_2,t_3]$$
where $t_1,t_2,t_3$ are the first Chern classes of the standard representations of the three copies of $\GG_{\rmm}$ in $G$. Specifically, $t_1,t_2,t_3$ are the first Chern classes of $E_1,E_0,L$ respectively.
\end{corollary}
Recall that $X$ is the closed substack of $\PP^{14}\times \PP^2$ which parametrizes pairs $(f,p)$ such that $p$ is a singular point of $f$ but not an $A$-singularity, see \Cref{def:A-sing}.
Thus, we have a closed immersion
$$ i_2\colon X_2 \into [(\PP^{14}\times \PP^2)\smallsetminus X/\mathrm{GL}_3]$$
and we can describe its pullback at the level of Chow ring. We have an isomorphism
$$\ch_{\mathrm{GL}_3}(\PP^{14}\times \PP^2) \simeq \frac{\ZZ[1/6,c_1,c_2,c_3,h_{14},h_{2}]}{(p_2(h_2),p_{14}(h_{14}))}$$
where $c_i$ is the $i$-th Chern class of the standard representation of $\mathrm{GL}_3$, $h_{14}$ (respectively $h_{2}$) is the hyperplane section of $\PP^{14}$ (respectively of $\PP^{2}$) and $p_{14}$ (respectively $p_2$) is a polynomial of degree $15$ (respectively $3$) with coefficients in $\ch(\cB\mathrm{GL}_3)$.
\begin{proposition}\label{prop:i-2}
The closed immersion $i_2$ is the complete intersection in $[(\PP^{14}\times \PP^2)\smallsetminus X/\mathrm{GL}_3]$ defined by equations
$$ a_{00}=a_{10}=a_{01}=a_{11}^2-4a_{20}a_{02}=0$$
whose fundamental class is equal to
$$2(h+k-c_1)(h+4k)((h+3k)^2-(c_1+k)(h+2k)+c_2).$$
Moreover, the morphism $i_2^*$ is defined by the following associations:
\begin{itemize}
\item $i_2^*(k)=-t_3$,
\item $i_2^*(c_1)=t_1+t_2+t_3$,
\item $i_2^*(c_2)=t_1t_2+t_1t_3+t_2t_3$,
\item $i_2^*(c_3)=t_1t_2t_3$,
\item $i_2^*(h)=2(t_2+t_3)$,
\end{itemize}
\end{proposition}
\begin{proof}
It follows from the proof of \Cref{lem:X-2}.
\end{proof}
Because $i_2^*$ is clearly surjective, it is enough to compute the fundamental class of $X_5^0$ as a closed subscheme of $X_2$, choose a lifting through $i_2^*$ and then multiply it by the fundamental class of $X_2$.
We are finally ready to do the computation.
\begin{corollary}
The closed substack $X_5^0$ of $X_2$ is the complete intersection defined by the vanishing of the coefficients of $x^3, x^2y, x^4$ as coordinates of $V_3 \otimes E_0^{\otimes 2} \otimes L^{\otimes 2}$.
\end{corollary}
\begin{proof}
The elements of the representation $V_3 \otimes E_0^{\otimes 2} \otimes L^{\otimes 2}$ are the polynomials of the form $p_3(x,y)+p_4(x,y)$ where $p_3$ (respectively $p_4$) is the homogeneous component of degree $3$ (respectively $4$). If we look at them in $X_2$, they define a polynomial $y^2+p_3(x,y)+p_4(x,y)$. It is clear now that a polynomial of this form is the product of a line and a cubic if and only if the coefficients of $x^3$ and $x^4$ are zero. Moreover, the condition that $(0,0)$ is the only intersection between the line and the cubic is equivalent to asking that the coefficient of $x^2y$ is zero.
\end{proof}
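Let us make the factorization in the proof explicit; we write $b_{ij}$ (respectively $c_{ij}$) for the coefficient of $x^iy^j$ in $p_3$ (respectively $p_4$), a notation used only for this check. If $b_{30}=c_{40}=0$, every monomial of $y^2+p_3+p_4$ is divisible by $y$, therefore
$$ y^2+p_3(x,y)+p_4(x,y)=y\Big(y+\frac{p_3(x,y)}{y}+\frac{p_4(x,y)}{y}\Big)$$
and the line is $\{y=0\}$. Restricting the second factor to $y=0$ gives $b_{21}x^2+c_{31}x^3$, which vanishes only at $x=0$ (for generic $c_{31}$) if and only if $b_{21}=0$, i.e. if and only if the coefficient of $x^2y$ vanishes.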
\begin{remark}
A straightforward computation gives us the fundamental class of $X_5^0$, and the strategy described at the beginning of \Cref{sec:strategy} gives us the description of the fundamental class of $\cA_5^0$ in the Chow ring of $\widetilde{\mathcal M}_3$.
We do not write down the explicit description because it is contained inside the ideal generated by the other relations.
\end{remark}
\subsection*{Fundamental class of $A_n$-singularity}
We finally deal with the computation of the remaining fundamental classes. As usual, our strategy assures us that it is enough to compute the restriction of every fundamental class to every stratum. We do not give detail for every fundamental class. We instead describe the strategy to compute all of them in every stratum and leave the remaining computations to the reader.
We start with the open stratum $\widetilde{\mathcal M}_3 \smallsetminus (\widetilde{\cH}_3 \cup \widetilde{\Delta}_1)$, which is also the most difficult one. Luckily, we have already done all the work we need in the previous case. We adopt the exact same idea we used for the computation of $\cA_5^0$.
\begin{remark}
First of all, we can reduce the computation to the fundamental class of the locus $X_n$ in $(\PP^{14}\times \PP^{2})\smallsetminus X$ parametrizing $(f,p)$ such that $p$ is an $A_h$-singularity for $h\geq n$. As above, $X$ is the closed locus parametrizing $(f,p)$ such that $p$ is a singular point of $f$ but not an $A$-singularity. Consider the morphism
$$\pi: \PP^{14}\times \PP^{2} \rightarrow \PP^{14}$$
and consider the restriction
$$\pi\vert_{X_n}: X_n \longrightarrow\cA_n\smallsetminus (\widetilde{\cH}_3 \cup \widetilde{\Delta}_1);$$
this is finite and birational because generically a curve in $\cA_n$ has only one singular point. Therefore it is enough to compute the $\mathrm{GL}_3$-equivariant class of $X_n$ in $(\PP^{14}\times \PP^{2})\smallsetminus X$ and then compute the pushforward $\pi_*(X_n)$. This is an exercise with Segre classes.
We give the description of the relevant strata in the Chow ring of $\widetilde{\mathcal M}_3$ in \Cref{rem:relations-Mbar}.
\end{remark}
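For instance, writing $\pi$ as the (equivariant) projectivization of a rank $3$ representation of $\mathrm{GL}_3$, the projection formula reduces the computation of $\pi_*$ to the standard identities
$$ \pi_*(1)=\pi_*(k)=0, \qquad \pi_*(k^{2+i})=s_i \quad \text{for } i\geq 0,$$
where $k$ is the hyperplane class of $\PP^2$ and $s_i$ denotes the $i$-th equivariant Segre class of the representation (or of its dual, depending on the chosen conventions); in particular $\pi_*(k^2)=1$.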
\begin{proposition}
In the situation above, we have that
$$[X_n]=C_ni_{2,*}(1) \in \ch_{\mathrm{GL}_3}((\PP^{14}\times \PP^2)\smallsetminus X)$$
where
$$i_{2,*}(1)=2(h+k-c_1)(h+4k)((h+3k)^2-(c_1+k)(h+2k)+c_2)$$
while $C_2=1$ and $C_n=c_3c_4\dots c_{n}$ for $n\geq 3$ where
$$ c_m:= -mc_1+\frac{2m-1}{2}h+(4-m)k $$ for every $3\leq m \leq 7$.
\end{proposition}
\begin{proof}
\Cref{prop:i-2} and \Cref{lem:X-2} imply that it is enough to compute the fundamental class of $X_n$ in $X_2$. Recall that the coordinates of $X_2$ are the coefficients of the polynomial $p_3(x,y)+p_4(x,y)$ where $p_3$ and $p_4$ are homogeneous polynomials in $x,y$ of degree respectively $3$ and $4$. Moreover, if we see such an element in $[\PP^{14}\times \PP^{2}/\mathrm{GL}_3]$, it is represented by the pair $(y^2z^2+p_3(x,y)z+p_4(x,y), [0:0:1])$. Therefore, we need to find the relations between the coefficients of $p_3$ and $p_4$ ensuring that the point $(0,0):=[0:0:1]$ is an $A_h$-singularity for $h \geq n$.
To do so, we apply the Weierstrass preparation theorem. Specifically, we use Algorithm 5.2 in \cite{Ell}, which allows us to write the polynomial $y^2+p_3(x,y) + p_4(x,y)$ in the form $y^2+p(x)y+q(x)$ up to an invertible element in $k[[x,y]]$. The square completion procedure implies that, up to an isomorphism of $k[[x,y]]$, we can reduce to the form $y^2+[q(x)-p(x)^2/4]$. Although $q(x)$ and $p(x)$ are power series, we only need to understand the coefficients of $h(x):=q(x)-p(x)^2/4$ up to degree $8$. Clearly the coefficients of $1$, $x$ and $x^2$ are already zero by construction. In general for $n\geq 3$, if $c_n$ is the coefficient of $x^{n}$ in $h(x)$, we have that $X_n$ is the complete intersection inside $X_2$ of the hypersurfaces $c_i=0$ for $3\leq i \leq n$. We can now use the description of $X_2$ as a quotient stack (see \Cref{lem:X-2}) to compute the fundamental classes.
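For completeness, the square completion is just the identity
$$ y^2+p(x)y+q(x)=\Big(y+\frac{p(x)}{2}\Big)^2+\Big(q(x)-\frac{p(x)^2}{4}\Big),$$
which makes sense as $2$ is invertible; the substitution $y\mapsto y-p(x)/2$ then transforms the equation into $y^2+h(x)$, and $(0,0)$ is an $A_r$-singularity for $r\geq n$ if and only if $h(x)$ vanishes to order at least $n+1$, i.e. if and only if $c_3=\dots=c_n=0$.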
\end{proof}
\begin{remark}
Notice that for $\cA_5$, we also have the contribution of the closed substack $\cA_5^0$, which we need to remove to get the fundamental class of the non-separating locus. The same is not true for $\cA_3$, because $\cA_3^1 \subset \Xi_1$.
\end{remark}
It remains to compute the fundamental class of $\cA_n$ restricted to the other strata. The easiest case is $\widetilde{\Delta}_{1,1,1}$, because clearly there are no $A_n$-singularities for $n\geq 3$. Therefore $\cA_n\vert_{\widetilde{\Delta}_{1,1,1}}=0$ for every $n\geq 3$. Regarding $\cA_2$, it is enough to compute its pullback through the $6:1$-cover described in \Cref{lem:descr-delta-1-1-1}. We get the following result.
\begin{proposition}
The restriction of $\cA_n$ to $\widetilde{\Delta}_{1,1,1}$ is of the form
$$ 24(c_1^2-2c_2) \in \ZZ[1/6,c_1,c_2,c_3]\simeq \ch(\widetilde{\Delta}_{1,1,1})$$
for $n=2$ while it is trivial for $n\geq 3$.
\end{proposition}
As far as $\widetilde{\Delta}_{1,1} \smallsetminus \widetilde{\Delta}_{1,1,1}$ is concerned, we have that $\cA_n \cap \widetilde{\Delta}_{1,1} = \emptyset$ for $n\geq 4$. Moreover, $\cA_3 \cap \widetilde{\Delta}_{1,1}=\emptyset$ because every tacnode is separating in the stratum $\widetilde{\Delta}_{1,1}\smallsetminus \widetilde{\Delta}_{1,1,1}$. Therefore again, we only need to do the computation for $\cA_2$, which is straightforward.
\begin{proposition}
The restriction of $\cA_n$ to $\widetilde{\Delta}_{1,1}\smallsetminus \widetilde{\Delta}_{1,1,1}$ is of the form
$$ 24(t^2+c_1^2-2c_2) \in \ZZ[1/6,c_1,c_2,t]\simeq \ch(\widetilde{\Delta}_{1,1}\smallsetminus \widetilde{\Delta}_{1,1,1})$$
for $n=2$ while it is trivial for $n\geq 3$.
\end{proposition}
We now concentrate on the stratum $\widetilde{\Delta}_{1}\smallsetminus \widetilde{\Delta}_{1,1}$. Recall that we have the isomorphism
$$\widetilde{\Delta}_{1} \smallsetminus \widetilde{\Delta}_{1,1}\simeq (\widetilde{\mathcal M}_{2,1}\smallsetminus \widetilde{\Theta}_1) \times \widetilde{\mathcal M}_{1,1}$$
as in \Cref{prop:descr-detilde-1} and we denote by $t_0,t_1$ the generators of the Chow ring of $\widetilde{\mathcal M}_{2,1}$ and by $t$ the generator of the Chow ring of $\widetilde{\mathcal M}_{1,1}$.
\begin{proposition}
We have the following description of the $A_n$-strata in the Chow ring of $\widetilde{\Delta}_{1}\smallsetminus \widetilde{\Delta}_{1,1}$:
\begin{itemize}
\item the fundamental classes of $\cA_5$, $\cA_6$ and $\cA_7$ are trivial,
\item the fundamental class of $\cA_4$ is equal to $40(t_0+t_1)^2t_0t_1$,
\item the fundamental class of $\cA_3$ is equal to
$-24(t_0+t_1)^3 + 48(t_0+t_1)t_0t_1$,
\item the fundamental class of $\cA_2$ is equal to
$24(t_0+t_1)^2 - 48t_0t_1+24t^2$.
\end{itemize}
\end{proposition}
\begin{proof}
If $n=6,7$ the intersection of $\cA_n$ with $\widetilde{\Delta}_{1}\smallsetminus \widetilde{\Delta}_{1,1}$ is empty. Notice that if $n=5$, it is also trivial as we are interested in non-separating singularities. It remains to compute the case for $n=2,3,4$. It is clear that if $n=3,4$, the factor $\widetilde{\mathcal M}_{1,1}$ of the product does not give a contribution, therefore it is enough to describe the fundamental class of the locus of $A_n$-singularities in $\widetilde{\mathcal M}_{2,1}\smallsetminus \widetilde{\Theta}_1$ for $n=3,4$. We do it exactly as in the proof of \Cref{lem:a-5-0}. The computation for $n=2$ again is straightforward.
\end{proof}
The last part of the computation is the restriction to the hyperelliptic locus.
\begin{remark}\label{rem:sep-tac}
We recall the stratification
$$ \cA_3^1\smallsetminus \widetilde{\Delta}_1 \subset \Xi_1\smallsetminus \widetilde{\Delta}_1 \subset \widetilde{\cH}_3\smallsetminus \widetilde{\Delta}_1$$
defined in the following way: $\Xi_1$ parametrizes triplets $(Z,L,f)$ in $\widetilde{\cH}_3\smallsetminus \widetilde{\Delta}_1$ such that $Z$ is a genus $0$ curve with one node, whereas $\cA_3^1$ parametrizes triplets $(Z,L,f)$ in $\Xi_1$ such that $f$ vanishes at the node. Using the results in \Cref{sec:H3tilde}, we get that
\begin{itemize}
\item $\ch(\widetilde{\cH}_3\smallsetminus (\widetilde{\Delta}_1 \cup \Xi_1)) \simeq \ZZ[1/6,s,c_2]/(f_9)$,
\item $\ch(\Xi_1\smallsetminus (\widetilde{\Delta}_1 \cup \cA_3^1))\simeq \ZZ[1/6,c_1,c_2]$,
\item $\ch(\cA_3^1 \smallsetminus \widetilde{\Delta}_1)\simeq \ZZ[1/6,s]$
\end{itemize}
where $f_9$ is the restriction of the relation $c_9$ to the open stratum. Furthermore, the normal bundle of the closed immersion
$$ \Xi_1\smallsetminus (\widetilde{\Delta}_1\cup \cA_3^1) \into \widetilde{\cH}_3 \smallsetminus (\widetilde{\Delta}_1 \cup \cA_3^1)$$
is equal to $-c_1$ whereas the normal bundle of the closed immersion
$$\cA_3^1 \smallsetminus \widetilde{\Delta}_1 \into \Xi_1\smallsetminus \widetilde{\Delta}_1$$
is equal to $-2s$.
\end{remark}
\Cref{lem:gluing} implies that we can compute the restriction of $\cA_n$ to the hyperelliptic locus using the stratification $$ \cA_3^1\smallsetminus \widetilde{\Delta}_1 \subset \Xi_1\smallsetminus \widetilde{\Delta}_1 \subset \widetilde{\cH}_3\smallsetminus \widetilde{\Delta}_1,$$
i.e. it is enough to compute the restriction of $\cA_n$ to $\cA_3^1\smallsetminus \widetilde{\Delta}_1$, $\Xi_1\smallsetminus (\widetilde{\Delta}_1\cup \cA_3^1)$ and $\widetilde{\cH}_3\smallsetminus (\widetilde{\Delta}_1 \cup \Xi_1)$.
\begin{lemma}
The restriction of $\cA_n$ to $\cA_3^1\smallsetminus \widetilde{\Delta}_1$ is empty if $n\geq 3$ whereas we have the equality $$[\cA_2]\vert_{ \cA_3^1\smallsetminus \widetilde{\Delta}_1}=72s^2$$
in the Chow ring of $\cA_3^1\smallsetminus \widetilde{\Delta}_1$.
Moreover, the restriction of $\cA_n$ to $\Xi_1\smallsetminus (\widetilde{\Delta}_1\cup\cA_3^1)$ is empty if $n\geq 4$ whereas we have the two equalities
\begin{itemize}
\item$ [\cA_2]\vert_{\Xi_1\smallsetminus (\widetilde{\Delta}_1 \cup\cA_3^1)} =24c_1^2-48c_2$
\item$ [\cA_3]\vert_{\Xi_1\smallsetminus (\widetilde{\Delta}_1\cup \cA_3^1)} = 24c_1^3-72c_1c_2$
\end{itemize}
in the Chow ring of $\Xi_1\smallsetminus (\widetilde{\Delta}_1\cup \cA_3^1)$.
\end{lemma}
\begin{proof}
This is an easy exercise and we leave it to the reader.
\end{proof}
\subsubsection*{Restriction to $\widetilde{\cH}_3 \smallsetminus (\Xi_1 \cup \widetilde{\Delta}_1)$}
It remains to compute the restriction of $\cA_n$ to the open stratum $\widetilde{\cH}_3 \smallsetminus (\Xi_1 \cup \widetilde{\Delta}_1)$.
\begin{remark}\label{rem:ar-vis-2}
We know that $\widetilde{\cH}_3 \smallsetminus (\widetilde{\Delta}_1\cup \Xi_1)$ is isomorphic to the quotient stack $[\AA(8)\smallsetminus 0/(\mathrm{GL}_2/\mu_4)]$, where $\AA(8)$ is the space of homogeneous forms in two variables of degree $8$. Moreover, we have the isomorphism $\mathrm{GL}_2/\mu_4 \simeq \mathrm{PGL}_2\times \GG_{\rmm}$. See \Cref{lem:ar-vis} and \cite{ArVis} for a more detailed discussion.
Therefore we have that
$$\ch(\widetilde{\cH}_3\smallsetminus (\widetilde{\Delta}_1\cup \Xi_1))\simeq \ZZ[1/6,s,c_2]$$ where $s$ is the first Chern class of the standard representation of $\GG_{\rmm}$ while $c_2$ is the generator of the Chow ring of $\cB\mathrm{PGL}_2$. Notice that we have a morphism
$$ \mathrm{GL}_2 \longrightarrow \mathrm{PGL}_2 \times \GG_{\rmm}$$
defined by the association $A\mapsto ([A],\det{A}^2)$ which coincides with the natural quotient morphism
$$ q:\mathrm{GL}_2 \longrightarrow \mathrm{GL}_2/\mu_4.$$
We have that $q^*(s)=2d_1$ and $q^*(c_2)=d_2$, where $d_1$ and $d_2$ are the first and second Chern class of the standard representation of $\mathrm{GL}_2$.
\end{remark}
Using the description of $\widetilde{\cH}_3\smallsetminus (\widetilde{\Delta}_1\cup \Xi_1)$ as a quotient stack highlighted in the previous remark, the restriction of $\cA_n$ to $\widetilde{\cH}_3\smallsetminus (\widetilde{\Delta}_1\cup \Xi_1)$ is the locus $H_n$ which parametrizes forms $f \in \AA(8)$ such that $f$ has a root of multiplicity at least $n+1$. Thanks to \Cref{rem:ar-vis-2}, it is enough to compute the fundamental class of $H_n$ after pulling it back through $q$, i.e. to compute the $\mathrm{GL}_2$-equivariant fundamental class of $H_n$. Because $\mathrm{GL}_2$ is a special group, we can reduce to the computation of the $T$-equivariant fundamental class of $H_n$, where $T$ is the torus of diagonal matrices in $\mathrm{GL}_2$. Therefore, we can use the formula in \Cref{rem:gener} to get the explicit description of the $T$-equivariant class of $H_n$.
\section{The Chow ring of $\overline{\cM}_3$ and the comparison with Faber's result}\label{sec:3-4}
We are finally ready to present our description of the Chow ring of $\overline{\cM}_3$.
\begin{theorem}\label{theo:chow-ring-m3bar}
Let $\kappa$ be a base field of characteristic different from $2$, $3$, $5$ and $7$. The Chow ring of $\overline{\cM}_3$ with $\ZZ[1/6]$-coefficients is the quotient of the graded polynomial algebra
$$\ZZ[1/6,\lambda_1,\lambda_2,\lambda_3,\delta_{1},\delta_{1,1},\delta_{1,1,1},H]$$
where
\begin{itemize}
\item[] $\lambda_1,\delta_1,H$ have degree $1$, \item[]$\lambda_2,\delta_{1,1}$ have degree $2$, \item[]$\lambda_3,\delta_{1,1,1}$ have degree $3$.
\end{itemize}
The quotient ideal is generated by 15 homogeneous relations:
\begin{itemize}
\item[] $[\cA_2]$, which is in codimension $2$;
\item[] $[\cA_3], [\cA_3^{1}], \delta_1^c, k_1(1),k_{1,1}(2)$, which are in codimension $3$;
\item[] $[\cA_4], \delta_{1,1}^c, k_{1,1}(1), k_{1,1,1}(1), k_{1,1,1}(4), m(1),k_h,k_1(2)$, which are in codimension $4$;
\item[] $ k_{1,1}(3)$, which is in codimension $5$.
\end{itemize}
\end{theorem}
\begin{remark}\label{rem:relations-Mbar}
We write the relations explicitly.
\begin{itemize}
\item[]
\begin{equation*}
\begin{split}
[\cA_2]=24(\lambda_1^2-2\lambda_2)
\end{split}
\end{equation*}
\item[]
\begin{equation*}
\begin{split}
[\cA_3]&= 36\lambda_1^3 + 10\lambda_1^2H + 21\lambda_1^2\delta_1 - 92\lambda_1\lambda_2 - 4\lambda_1H^2 + 18\lambda_1H\delta_1 + \\ & +72\lambda_1\delta_1^2
+ 88\lambda_1\delta_{1,1} - 20\lambda_2H + 56\lambda_3 - 2H^3 + 9H^2\delta_1 + 54H\delta_1^2 + \\ & + 87\delta_1^3 -
4\delta_1\delta_{1,1}+ 56\delta_{1,1,1}
\end{split}
\end{equation*}
\item[]
\begin{equation*}
\begin{split}
[\cA_3^1]=\frac{H}{2}(\lambda_1+3H+\delta_1)(\lambda_1+H+\delta_1)
\end{split}
\end{equation*}
\item[]
\begin{equation*}
\begin{split}
\delta_1^c&=6(\lambda_1^2\delta_1 + 2\lambda_1H\delta_1 + 6\lambda_1\delta_1^2 + 4\lambda_1\delta_{1,1} + H^2\delta_1 + 6H\delta_1^2 -\\ & - 4H\delta_{1,1}
+ 9\delta_1^3 - 8\delta_1\delta_{1,1} + 12\delta_{1,1,1})
\end{split}
\end{equation*}
\item[]
\begin{equation*}
\begin{split}
k_1(1)=\delta_1\Big(\frac{1}{4}\lambda_1^2 + \frac{1}{2}\lambda_1H + 2\lambda_1\delta_1 + \lambda_2 + \frac{1}{2}H^2 + H\delta_1 + \frac{7}{4}\delta_1^2 -\delta_{1,1}\Big)
\end{split}
\end{equation*}
\item[]
\begin{equation*}
\begin{split}
k_{1,1}(2)=\delta_{1,1}(3\lambda_1 + H + 3\delta_1)
\end{split}
\end{equation*}
\item[]
\begin{equation*}
\begin{split}
[\cA_4]&=36\lambda_1^4 + \frac{1048}{27}\lambda_1^3H + 66\lambda_1^3\delta_1 - 92\lambda_1^2\lambda_2 - \frac{146}{81}\lambda_1^2H^2 +
\frac{517}{9}\lambda_1^2H\delta_1 + \\ & + 207\lambda_1^2\delta_1^2 - 176\lambda_1^2\delta_{1,1} - 84\lambda_1\lambda_2H + 56\lambda_1\lambda_3 +
\frac{16}{81}\lambda_1H^3 +\\ & + \frac{3272}{81}\lambda_1H^2\delta_1 + \frac{1282}{9}\lambda_1H\delta_1^2 + 222\lambda_1\delta_1^3 -
340\lambda_1\delta_1\delta_{1,1} + \\ & + 56\lambda_1\delta_{1,1,1} + 8\lambda_2H^2 + \frac{130}{27}H^4 + \frac{2041}{81}H^3\delta_1 +
\frac{4957}{81}H^2\delta_1^2 + \\ & + \frac{2101}{27}H\delta_1^3 + 45\delta_1^4 - 72\delta_1^2\delta_{1,1}
\end{split}
\end{equation*}
\item[]
\begin{equation*}
\begin{split}
\delta_{1,1}^c=24\big(\delta_{1,1}(\lambda_1+\delta_1)^2+\delta_{1,1,1}\big)
\end{split}
\end{equation*}
\item[]
\begin{equation*}
\begin{split}
k_{1,1}(1)=\delta_{1,1}(\delta_{1,1}-\lambda_2-(\lambda_1+\delta_1)^2)
\end{split}
\end{equation*}
\item[]
\begin{equation*}
\begin{split}
k_{1,1,1}(1)=\delta_{1,1,1}(\lambda_1 + \delta_1)
\end{split}
\end{equation*}
\item[]
\begin{equation*}
\begin{split}
k_{1,1,1}(4)=H\delta_{1,1,1}
\end{split}
\end{equation*}
\item[]
\begin{equation*}
\begin{split}
m(1)&=12\lambda_1^4 - \frac{7}{3}\lambda_1^3H + 27\lambda_1^3\delta_1 - 44\lambda_1^2\lambda_2 - \frac{706}{9}\lambda_1^2H^2 - \frac{65}{2}\lambda_1^2H\delta_1 + \\ &
+ 84\lambda_1^2\delta_1^2 - 32\lambda_1^2\delta_{1,1} - 38\lambda_1\lambda_2H + 92\lambda_1\lambda_3 - \frac{715}{9}\lambda_1H^3 - \\ & -
\frac{1340}{9}\lambda_1H^2\delta_1 - 25\lambda_1H\delta_1^2 + 69\lambda_1\delta_1^3 - 130\lambda_1\delta_1\delta_{1,1} + 92\lambda_1\delta_{1,1,1} + \\ & +
6\lambda_2H^2 - \frac{46}{3}H^4 - \frac{1205}{18}H^3\delta_1 - \frac{562}{9}H^2\delta_1^2 - \frac{101}{6}H\delta_1^3 -
54\delta_1^2\delta_{1,1}
\end{split}
\end{equation*}
\item[]
\begin{equation*}
\begin{split}
k_h&= \frac{1}{8}\lambda_1^3H + \frac{1}{8}\lambda_1^2H^2 + \frac{1}{4}\lambda_1^2H\delta_1 - \frac{1}{2}\lambda_1\lambda_2H - \frac{1}{8}\lambda_1H^3 +
\frac{7}{8}\lambda_1H\delta_1^2 + \\ & + \frac{3}{2}\lambda_1\delta_1\delta_2 + \frac{1}{2}\lambda_2H^2 + \lambda_3H - \frac{1}{8}H^4 - \frac{1}{4}H^3\delta_1 +
\frac{1}{8}H^2\delta_1^2 + \frac{3}{4}H\delta_1^3 + \frac{3}{2}\delta_1^2\delta_2
\end{split}
\end{equation*}
\item[]
\begin{equation*}
\begin{split}
k_1(2)&=\frac{1}{4}\lambda_1^3\delta_1 + \frac{1}{2}\lambda_1^2H\delta_1 + \frac{5}{4}\lambda_1^2\delta_1^2 + \frac{1}{4}\lambda_1H^2\delta_1 + \frac{3}{2}\lambda_1H\delta_1^2 +
\frac{7}{4}\lambda_1\delta_1^3 +\\ & + \lambda_1\delta_1\delta_2 - \lambda_1\delta_3 + \lambda_3\delta_1+ \frac{1}{4}H^2\delta_1^2 + H\delta_1^3 + \frac{3}{4}\delta_1^4 +
\delta_1^2\delta_2
\end{split}
\end{equation*}
\item[]
\begin{equation*}
\begin{split}
k_{1,1}(3)&=2\lambda_1^3\delta_2 + 5\lambda_1^2\delta_1\delta_2 + \lambda_1\lambda_2\delta_2 + 4\lambda_1\delta_1^2\delta_2 + \lambda_2\delta_1\delta_2 + \lambda_2\delta_3 + \lambda_3\delta_2 +\\ & +
\delta_1^3\delta_2
\end{split}
\end{equation*}
\end{itemize}
\end{remark}
\begin{remark}
Notice that the relations $[\cA_2]$ and $\delta_1^c$ give us that $\lambda_2$ and $\delta_{1,1,1}$ can be expressed in terms of the other generators.
\end{remark}
Lastly, we compare our result with that of Faber, namely Theorem 5.2 of \cite{Fab}. Recall that he described the Chow ring of $\overline{\cM}_3$ with rational coefficients as the graded $\QQ$-algebra defined as a quotient of the graded polynomial algebra generated by $\lambda_1$, $\delta_1$, $\delta_0$ and $\kappa_2$. We refer to \cite{Mum} for a geometric description of these cycles. The quotient ideal is generated by three relations in codimension $3$ and six relations in codimension $4$.
First of all, if we invert $7$, the relation $[\cA_3]$ implies that $\lambda_3$ is also unnecessary as a generator. Therefore if we tensor with $\QQ$, our description can be simplified and we end up with exactly $4$ generators, namely $\lambda_1$, $\delta_1$, $\delta_{1,1}$ and $H$, and $9$ relations. Notice that the identity
$$ [H]=9\lambda_1 - 3\delta_1 - \delta_0$$
allows us to easily pass from the generator $H$ to the generator $\delta_0$ that was used in \cite{Fab}. Finally, Table 2 in \cite{Fab} gives us the identity
$$ \delta_{1,1}=-5\lambda_1^2+\frac{\lambda_1\delta_0}{2} +\lambda_1\delta_1+\frac{\delta_1^2}{2}+\frac{\kappa_2}{2}$$
which explains how to pass from the generator $\delta_{1,1}$ to the generator $\kappa_2$ used in \cite{Fab}.
Thus we can construct two morphisms of $\QQ$-algebras:
$$ \phi: \QQ[\lambda_1,H,\delta_1,\delta_{1,1}]\longrightarrow \QQ[\lambda_1, \delta_0,\delta_1,\kappa_2] $$
and
$$ \varphi: \QQ[\lambda_1,\delta_0,\delta_1,\kappa_2] \longrightarrow \QQ[\lambda_1,H,\delta_1,\delta_{1,1}]$$
which are mutually inverse. A computation shows that $\phi$ sends our ideal of relations to the one in \cite{Fab} and $\varphi$ sends the ideal of relations in \cite{Fab} to the one we constructed.
\appendix
\chapter{Moduli stack of finite flat extensions}
In this appendix, we describe the moduli stack of finite flat extensions of curvilinear algebras. This is used in Chapter 3 to describe the stack of $A_n$-singularities.
Let $m$ be a positive integer and $\kappa$ be a base field. We denote by $F\cH_m$ the finite Hilbert stack of a point, i.e. the stack whose objects are pairs $(A,S)$ where $S$ is a scheme and $A$ is a locally free finite $\cO_S$-algebra of degree $m$. We know that $F\cH_m$ is an algebraic stack, which is in fact a quotient stack of an affine scheme of finite type by a smooth algebraic group. For a more detailed treatment see Section 96.13 of \cite{StProj}.
Given another positive integer $d$, we can consider a generalization of the stack $F\cH_m$. Given a morphism $\cX \rightarrow \cY$, one can consider the finite Hilbert stack $F\cH_d(\cX/\cY)$ which parametrizes commutative diagram of the form
$$
\begin{tikzcd}
Z \arrow[r] \arrow[d, "f"] & \cX \arrow[d] \\
S \arrow[r] & \cY
\end{tikzcd}
$$
where $f$ is finite locally free of degree $d$. This can again be proved to be algebraic. If $\cY=\spec k$, we denote it simply by $F\cH_d(\cX)$. For a more detailed treatment see Section 96.12 of \cite{StProj}.
Finally, we define $E\cH_{m,d}$ to be the fibered category in groupoids whose objects over a scheme $S$ are finite locally free extensions of $\cO_S$-algebras $A \into B$ of degree $d$ such that $A$ is a finite locally free $\cO_S$-algebra of degree $m$. Morphisms are defined in the obvious way. Clearly the algebra $B$ is finite locally free of degree $dm$.
\begin{proposition}
The stack $F\cH_{d}(F\cH_m)$ is naturally isomorphic to $E\cH_{m,d}$, therefore $E\cH_{m,d}$ is an algebraic stack.
\end{proposition}
\begin{proof}
The proof follows from unwinding the definitions.
\end{proof}
We want to add the datum of a section of the structural morphism $\cO_S \into B$. This can be done by passing to the universal object of $F\cH_{dm}$.
Let $n$ be a positive integer and let $B_{\rm univ}$ be the universal object of $F\cH_{n}$; consider $\cF_{n}:=\spec_{F\cH_{n}}(B_{\rm univ})$, the generalized spectrum of the universal algebra over $F\cH_{n}$. It parametrizes pairs $(B,q)$ over $S$ where $B \in F\cH_{n}(S)$ and $q:B \rightarrow \cO_S$ is a section of the structural morphism $\cO_S \into B$.
\begin{definition}
We say that a pointed algebra $(A,p) \in \cF_n$ over an algebraically closed field $k$ is linear if $\dim_k m_p/m_p^2 \leq 1$, where $m_p$ is the maximal ideal associated to the section $p$. We say that $(A,p)$ is curvilinear if $(A,p)$ is linear and $\spec A=\{p\}$.
\end{definition}
We consider the closed substack defined by the $1$-st Fitting ideal of $\Omega_{A|\cO_S}$ in $\cF_{n}$. This locus parametrizes non-linear pointed algebras. Therefore we can just consider the open complement, which we denote by $\cF_{n}^{\rm lin}$. We can inductively define closed substacks of $\cF_{n}^{\rm lin}$ in the following way: suppose $\cS_h$ is defined, then we consider $\cS_{h+1}$ to be the closed substack of $\cS_h$ defined by the $0$-th Fitting ideal of $\Omega_{\cS_h|F\cH_{n}}$. We set $\cS_1=\cF_{n}^{\rm lin}$. It is easy to prove that the geometric points of $\cS_h$ are pairs $(A,p)$ such that $A$ localized at $p$ has length at least $h$. Notice that this construction stabilizes at $h=n$ and the geometric points of $\cS_n$ are exactly the curvilinear pointed algebras. Finally, we denote by $\cF_{n}^{c}:=\cS_{n}$ the last stratum. As it is a locally closed substack of an algebraic stack of finite type, it is algebraic and of finite type too.
\begin{lemma}\label{lem:loc-triv-alg}
If $(B,q) \in \cF_{n}^c(S)$ for some scheme $S$, then there exists an \'etale cover $S'\rightarrow S$ such that $B\otimes_S S' \simeq \cO_{S'}[t]/(t^{n})$ and $q\otimes_S S'=q_0\otimes S'$ where
$$q_0:\kappa[t]/(t^{n}) \longrightarrow \kappa$$
is defined by the association $t\mapsto 0$.
\end{lemma}
\begin{proof}
We are going to prove that $\cF_{n}^c$ has only one geometric point (up to isomorphism) and that its tangent space is trivial. The claim then follows by a standard argument in deformation theory.
Suppose then that $S=\spec k$ is the spectrum of an algebraically closed field and $B$ is a finite $k$-algebra (of degree $n$). Because $(B,q)$ is linear, we have that $\dim_k(m_q/m_q^2)\leq 1$. We can then construct a surjective morphism
$$ k[[t]] \longrightarrow B_{m_q}$$
whose kernel is generated by the monomial $t^{n'}$ for some positive integer $n'$. We then have $(B,q)\simeq (k[t]/t^{n'},q_0)$. Because $B$ is local of length $n$, we get that $n'=n$.
Let us study the tangent space of $\cF_n^c$ at the pointed algebra $(k[t]/(t^{n}),q_0)$ where $k/\kappa$ is a field. We have that any deformation $(B_{\varepsilon},q_{\varepsilon})$ of the pair $(k[t]/t^n,q_0)$ is of the form
$$ k[t,\varepsilon]/(p(t,\varepsilon))$$
where $p(t,0)=t^{n}$. Because the section is defined by the association $t\mapsto b\varepsilon$, we have also that $p(b\varepsilon,\varepsilon)=p(0,\varepsilon)=0$. It is easy to see that $(B_{\varepsilon},q_{\varepsilon}) \in \cF^c_{n}$ only if $$p(b\varepsilon,\varepsilon)=p'(b\varepsilon,\varepsilon)=\dots=p^{(n-1)}(b\varepsilon,\varepsilon)=0$$
where the derivatives are done over $k[\varepsilon]/(\varepsilon^2)$, thus $p(t,\varepsilon)=(t-b\varepsilon)^{n}$. The algebra obtained is clearly isomorphic to trivial one.
\end{proof}
Let $G_{n}:=\mathop{\underline{\mathrm{Aut}}}\nolimits\Big(\kappa[t]/(t^{n}),q_0\Big)$ be the automorphism group of the trivial algebra. One can describe $G_{n}$ as the semi-direct product of $\GG_{\rmm}$ and a group $U$ which is isomorphic to an affine space of dimension $n-2$.
\begin{corollary}\label{cor:descr-finite-alg}
We have that $\cF_{n}^c$ is isomorphic to $\cB G_{n}$, the classifying stack of the group $G_{n}$.
\end{corollary}
We denote by $\cE_{m,d}^c$ the fiber product $E\cH_{m,d}\times_{F\cH_{dm}}\cF_{dm}^c$. We get the morphism of algebraic stacks
$$ \cE_{m,d}^c \longrightarrow \cF_{dm}^c$$
defined by the association $(S,A\into B \rightarrow \cO_S) \mapsto (S,B\rightarrow \cO_S)$. Notice that the morphism is faithful, therefore representable by algebraic spaces. Clearly the composite $A\rightarrow \cO_S$ is still a section.
We now study the stack $\cE_{m,d}^c$.
\begin{lemma}\label{lem:triv-ext}
If $(A\into B,q) \in \cE_{m,d}^c(S)$ for some scheme $S$, then there exists an \'etale cover $\pi:S'\rightarrow S$ such that
$$\pi^*\Big(A\into B,q\Big) \simeq \Big(\phi_d:\cO_{S'}[t]/(t^{m})\into \cO_{S'}[t]/(t^{dm}),q_0\otimes_{S}S'\Big)$$
where $\phi_d(t)=t^dp(t)$ with $p(0) \in \cO_{S'}^{\times}$.
\end{lemma}
\begin{proof}
First of all, an easy computation shows that any finite flat extension of pointed algebras of degree $d$ $$\cO_{S}[t]/(t^{m})\into \cO_{S}[t]/(t^{dm})$$
is of the form $\phi_d$ for any scheme $S$. Therefore, it is enough to prove that if $A\into B$ is finite flat of degree $d$, then $B$ curvilinear implies $A$ curvilinear. In analogy with the proof of \Cref{lem:loc-triv-alg}, we prove the statement for $S=\spec k$ and then for $S=\spec k[\varepsilon]/(\varepsilon^2)$, with $k$ an algebraically closed field.
Firstly, suppose $S=\spec k$ with $k$ algebraically closed. By \Cref{lem:loc-triv-alg}, we know that $(B,q)\simeq (k[t]/(t^{dm}),q_0)$. We now need to prove that $A$ is also curvilinear. Clearly $A$ is local because of the going-up property of flatness, and we denote by $m_A$ its maximal ideal.
If we tensor $A\into B$ by $A/m_A$, we get $k\into k[t]/m_Ak[t]$. Flatness implies that the extension $k\into k[t]/(m_Ak[t])$ has degree $d$; therefore, because $k[t]$ is a PID, it is clear that $m_Ak[t]\subset m_q^{d}$ and that the morphism $m_A/m_A^2\rightarrow m_q^d/m_q^{d+1}$ is surjective. Let $a \in m_A$ be an element whose image is $t^d$ modulo $t^{d+1}$. If we now consider $A/a\into B/aB\simeq k[t]/(t^d)$, by flatness it still has length $d$, but this implies $A/a\simeq k$, or equivalently $m_A=(a)$. Therefore $A$ is curvilinear too.
Suppose now $S=\spec k[\varepsilon]/(\varepsilon^2)$.
We know that, given a morphism of schemes $X\rightarrow Y$, we have the exact sequence
$$ {\rm Def}_{X\rightarrow Y} \longrightarrow {\rm Def}_X \oplus {\rm Def}_Y \longrightarrow \ext^1_{\cO_X}(Lf^*{\rm NL}_Y,\cO_X)$$
where ${\rm NL}_Y$ is the naive cotangent complex of $Y$. We want to prove that if $X\rightarrow Y$ is the spectrum of the extension $A\into B$, then the morphism
$$ {\rm Def}_X \longrightarrow \ext^1_{\cO_X}(Lf^*{\rm NL}_Y,\cO_X)$$
is injective. This implies the claim because ${\rm Def}_Y=0$.
We can describe the morphism
$$
{\rm Def}_X \longrightarrow \ext^1_{\cO_X}(Lf^*{\rm NL}_Y,\cO_X)
$$ explicitly using Schlessinger's functors $T^i$. More precisely, it can be described as a morphism
$$T^1(A/k,A)\rightarrow T^1(A/k,B);$$
an easy computation shows that $T^1(A/k,A)\simeq k[t]/(t^{m-1})$ and $T^1(A/k,B)\simeq k[t]/(t^{d(m-1)})$. Through these identifications, the morphism $$T^1(A/k,A)\rightarrow T^1(A/k,B)$$ is exactly the morphism $\phi_d$ defined earlier, namely it is defined by the association $t\mapsto t^dp(t)$ with $p(0)\neq 0$. The injectivity is then straightforward.
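For instance, if the characteristic of $k$ does not divide $m$ (an assumption made only for this sketch), the identifications above can be obtained from the presentation $A=k[t]/(f)$ with $f=t^m$: for any $A$-module $M$ one has $T^1(A/k,M)\simeq M/f'(t)M$. Taking $M=A$ gives $k[t]/(t^{m-1})$, while for $M=B$, where $t$ acts through $\phi_d$, the element $f'(t)$ acts as $m\,\phi_d(t)^{m-1}=m\,t^{d(m-1)}p(t)^{m-1}$, a unit multiple of $t^{d(m-1)}$, giving $k[t]/(t^{d(m-1)})$.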
\end{proof}
Now we are ready to describe the morphism $\cE_{m,d} \rightarrow \cF_{md}$. Let us define $A_0:=\kappa[t]/(t^m)$ and $B_0:=\kappa[t]/(t^{md})$.
Let $E_{m,d}$ be the category fibered in groupoids whose objects are $$\Big(S,(A\into B,q),\phi_A:(A,q)\simeq (A_0\otimes S, q_0 \otimes S), \phi_B: (B,q)\simeq (B_0 \otimes S,q_0\otimes S)\Big)$$ where $(A\into B,q) \in \cE_{m,d}(S)$. The morphisms are defined in the obvious way. It is easy to see that $E_{m,d}$ is in fact fibered in sets. As before, we set $G_m:=\mathop{\underline{\mathrm{Aut}}}\nolimits(A_0,q_0)$ and $G_{dm}:=\mathop{\underline{\mathrm{Aut}}}\nolimits(B_0,q_0)$. Clearly we have a right action of $G_{dm}$ and a left action of $G_m$ on $E_{m,d}$.
\begin{proposition}
We have the following isomorphism of fibered categories
$$ \cE_{m,d} \simeq [E_{m,d}/G_m\times G_{md}]$$
and through this identification the morphism $\cE_{m,d} \rightarrow \cF_{dm}$ is just the projection to the classifying space $\cB G_{md}$.
\end{proposition}
\begin{proof}
It follows from \Cref{lem:triv-ext}.
\end{proof}
\Cref{lem:triv-ext} also tells us how to describe $E_{m,d}$: the map $\phi_d$ is completely determined by $p(t) \in \cO_S[t]/(t^{d(m-1)})$ with $p(0)\in \cO_S^{\times}$. Therefore, we have a morphism
$$(\AA^1\smallsetminus 0)\times \AA^{d(m-1)-1} \longrightarrow E_{m,d}$$
which is easily seen to be an isomorphism. Consider now the subscheme of $E_{m,d}$ defined as $$V:=\{ f \in (\AA^1\smallsetminus 0)\times \AA^{d(m-1)-1}\,|\, a_0=1, \, a_{kd}(f)=0\quad {\rm for }\quad k=1,\dots,m-2\}$$
where $a_{l}(f)$ is the coefficient of $t^{l}$ of the polynomial $f$. Clearly $V$ is an affine space of dimension $(m-1)(d-1)$.
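As a quick check of the dimension count, take $m=2$ and $d=2$: then $p(t)=a_0+a_1t$ with $a_0$ invertible, the range $k=1,\dots,m-2$ is empty, so $V=\{a_0=1\}\simeq \AA^1$, which indeed has dimension $(m-1)(d-1)=1$.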
\begin{lemma}\label{lem:descr-affine-bundle}
In the situation above, we have the isomorphism
$$ V \simeq [E_{m,d}/G_m],$$
in particular the morphism $\cE_{m,d} \rightarrow \cF_{md}$ is an affine bundle of dimension $(m-1)(d-1)$.
\end{lemma}
\begin{proof}
Consider the morphism
$$ G_m \times V \longrightarrow E_{m,d}$$
which is just the restriction of the action of $G_m$ to $V$. A straightforward computation shows that it is an isomorphism. The statement follows.
\end{proof}
\chapter{Pushout and blowups in families}
In this appendix, we discuss two well-known constructions: pushouts and blowups. Specifically, we study these two constructions in families and give conditions ensuring flatness and compatibility with base change. Moreover, we study when the two constructions are inverse to each other.
\begin{lemma}\label{lem:pushout}
Let $S$ be a scheme. Consider three schemes $X$,$Y$ and $Y'$ which are flat over $S$. Suppose we are given $Y\hookrightarrow X$ a closed immersion and $Y\rightarrow Y'$ a finite flat morphism. Then the pushout $Y'\bigsqcup_Y X$ exists in the category of schemes, it is flat over $S$ and it commutes with base change.
Furthermore, if $X$ and $Y'$ are proper and finitely presented schemes over $S$, the same is true for $Y' \bigsqcup_Y X$.
\end{lemma}
\begin{proof}
The existence of the pushout follows from Proposition 37.65.3 of \cite{StProj}. In fact, the proposition tells us that the morphism $Y'\rightarrow Y'\bigsqcup_Y X$ is a closed immersion and $X \rightarrow Y'\bigsqcup_Y X$ is integral, in particular affine. It is easy to prove that it is in fact finite and surjective because $Y\rightarrow Y'$ is finite and surjective. Let us prove that $Y'\bigsqcup_Y X \rightarrow S$ is flat. Because flatness is a local condition and all morphisms are affine, we can reduce to a statement of commutative algebra.
Namely, suppose we are given a commutative ring $R$ and two flat $R$-algebras $A$ and $B$. Let $I$ be an ideal of $A$ such that $A/I$ is $R$-flat and let $B\hookrightarrow A/I$ be a finite flat extension. Then $B\times_{A/I}A$ is $R$-flat. To prove this, we complete the fiber square with the quotients
$$
\begin{tikzcd}
0 \arrow[r] & B\times_{A/I}A \arrow[d, two heads] \arrow[r, hook] & A \arrow[d, two heads] \arrow[r] & Q' \arrow[d] \arrow[r] & 0 \\
0 \arrow[r] & B \arrow[r, hook] & A/I \arrow[r] & Q \arrow[r] & 0
\end{tikzcd}
$$
and we notice that $Q'\rightarrow Q$ is an isomorphism. Because the extension $B\hookrightarrow A/I$ is flat, then $Q$ (thus $Q'$) is $R$-flat and the $R$-flatness of $A$ and $Q'$ implies the flatness of $B\times_{A/I}A$.
Suppose now we have a morphism $T\rightarrow S$. This induces a natural morphism
$$\phi_T:Y'_T \bigsqcup_{Y_T} X_T \rightarrow (Y'\bigsqcup_Y X)_T$$ where by $(-)_T$ we denote the base change $(-)\times_S T$. Because being an isomorphism is a local condition, we can reduce to the following commutative algebra statement. Suppose we have a morphism $R\rightarrow \widetilde{R}$; we can consider the same morphism of exact sequences of $R$-modules as above and tensor it with $\widetilde{R}$. We denote by $\widetilde{(-)}$ the tensor product $(-)\otimes_R \widetilde{R}$. The flatness of $Q$ implies that the commutative diagram
$$
\begin{tikzcd}
0 \arrow[r] & \widetilde{B\times_{A/I}A} \arrow[d, two heads] \arrow[r, hook] & \widetilde{A} \arrow[d, two heads] \arrow[r] & \widetilde{Q'}\arrow[d] \arrow[r] & 0 \\
0 \arrow[r] & \widetilde{B} \arrow[r, hook] & \widetilde{A/I} \arrow[r] & \widetilde{Q} \arrow[r] & 0
\end{tikzcd}
$$
is still a morphism of exact sequences. Because $\widetilde{B} \times_{\widetilde{A/I}} \widetilde{A}$ is also the kernel of $\widetilde{A} \rightarrow \widetilde{Q'}$, we get that $\phi_T$ is an isomorphism.
Finally, using the fact that the pushout is compatible with base change, we can apply the same strategy used in \Cref{lem:quotient} to reduce to the case $S=\spec R_0$ with $R_0$ of finite type over $k$. Thus we can use Proposition 37.65.5 of \cite{StProj} to prove that the pushout (in the situation where $Y\rightarrow Y'$ is flat) preserves the property of being proper and finitely presented.
\end{proof}
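A classical affine example may clarify the construction (it is included only as an illustration). Glue two affine lines along their origins: take $X=\AA^1\sqcup\AA^1$, let $Y=\{0\}\sqcup\{0\}\into X$ be the two origins and let $Y'=\spec k$ be a point, so that $Y\rightarrow Y'$ is finite flat of degree $2$. The pushout is computed by the fiber product of rings
$$ k\times_{k\times k}\big(k[x]\times k[y]\big)\simeq \{(f,g)\mid f(0)=g(0)\}\simeq k[x,y]/(xy),$$
i.e. $Y'\bigsqcup_Y X$ is the union of the two lines glued at a node.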
The second construction we want to analyze is the blow-up.
\begin{lemma}\label{lem:blowup}
Let $X/S$ be a flat, proper and finitely presented morphism of schemes and let $I$ be a sheaf of ideals on $X$ such that $\cO_{X}/I^{m}$ is flat over $S$ for every $m\geq 0$. Denote by ${\rm Bl}_IX\rightarrow X$ the blowup morphism; then ${\rm Bl}_IX \rightarrow S$ is flat, proper and finitely presented and its formation commutes with base change.
\end{lemma}
\begin{proof}
The flatness of $\cO_X/I^m$ implies the flatness of $I^m$, therefore $\oplus_{m\geq 0}I^m$ is clearly flat over $S$. The universal property of the blowup implies that it is enough to check that the formation of the blowup commutes with base change when we restrict to the fiber of a closed point $s \in S$. Therefore it is enough to prove that the inclusion $m_sI^m\subset m_s\cap I^m$ is an equality for every $m\geq 0$, with $m_s$ the ideal associated to the closed point $s$. The failure of that inclusion to be an equality is encoded in $\tor_S^1(\cO_X/I^m,\cO_S/m_S)$, which is trivial due to the $S$-flatness of $\cO_X/I^m$. The rest follows from classical results.
\end{proof}
\begin{remark}
Notice that the flatness of the blowup follows from the flatness of $I^m$ for every $m$ whereas we need the flatness of $\cO_X/I^m$ to have the compatibility with base change.
\end{remark}
\begin{lemma}\label{lem:cond-diag}
Let $R$ be a ring and $A\into B$ be an extension of $R$-algebras. Suppose $I\subset A$ is an ideal of $A$ such that $I=IB$. Then the following commutative diagram
$$
\begin{tikzcd}
A \arrow[r, hook] \arrow[d, two heads] & B \arrow[d, two heads] \\
A/I \arrow[r, hook] & B/I
\end{tikzcd}
$$
is a cartesian diagram of $R$-algebras. Furthermore, suppose we have a cartesian diagram of $R$-algebras
$$
\begin{tikzcd}
A:=\widetilde{A}\times_{B/I}B \arrow[d] \arrow[r, hook] & B \arrow[d] \\
\widetilde{A} \arrow[r, hook] & B/I
\end{tikzcd}
$$
then the morphism $A\rightarrow \widetilde{A}$ is surjective and its kernel coincides (as an $R$-module) with the ideal $I$.
\end{lemma}
\begin{proof}
It follows from a straightforward computation in commutative algebra.
\end{proof}
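A concrete instance of the statement is given by the cuspidal curve (again, only an illustration): let $A:=k[t^2,t^3]\subset B:=k[t]$ and $I:=(t^2,t^3)\subset A$. One checks directly that $I=IB=t^2k[t]$, $A/I\simeq k$ and $B/IB\simeq k[t]/(t^2)$, and indeed
$$ k\times_{k[t]/(t^2)}k[t]=\{f\in k[t]\mid f'(0)=0\}=k[t^2,t^3]=A.$$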
Finally, we prove that the two constructions are inverse to each other.
\begin{proposition}\label{prop:pushout-blowup}
Suppose we are given a diagram
$$
\begin{tikzcd}
\widetilde{D} \arrow[d] \arrow[r, hook] & \widetilde{X} \\
D &
\end{tikzcd}
$$ of proper, flat, finitely presented schemes over $S$ such that $\widetilde{D} \rightarrow D$ is a finite flat morphism and $\widetilde{D}\into \widetilde{X}$ is a closed immersion of an effective Cartier divisor. Consider the pushout $X:=\widetilde{X}\amalg_{\widetilde{D}}D$ as in \Cref{lem:pushout} and denote by $I_D$ the ideal associated with the closed immersion $D\into X$. Then the pair $(X,I_D)$ over $S$ verifies the hypothesis of \Cref{lem:blowup}. Furthermore, if we denote by $(\overline{X},\overline{D})$ the blowup of the pair $(X,D)$, there exists a unique isomorphism $(\widetilde{X},\widetilde{D}) \simeq (\overline{X},\overline{D})$ of pairs over $(X,D)$.
\end{proposition}
\begin{proof}
Consider the pushout diagram over $S$
$$
\begin{tikzcd}
\widetilde{D} \arrow[d] \arrow[r, hook] & \widetilde{X} \arrow[d] \\
D \arrow[r, hook] & X;
\end{tikzcd}
$$
because every morphism is finite and flatness is a local condition, we can restrict ourselves to the affine case, and \Cref{lem:cond-diag} assures us that $I_D=I_{\widetilde{D}}$, and in particular $I_D^n=I_{\widetilde{D}}^n$ for every $n\geq 1$. Because $I_{\widetilde{D}}$ is a flat Cartier divisor, the same is true for its powers.
Regarding the second part of the statement, we know that the uniqueness and existence of the morphism
are assured by the universal property of the blowup. As being an isomorphism is a local property, we can reduce again to the affine case (all the morphisms involved are finite). We have an extension of algebras $A\into B$ with an ideal $I$ of $A$ such that $I=IB$ and $I$ is free of rank $1$ as a $B$-module. Therefore we can describe the Rees algebra as follows
$$R_A(I):=\bigoplus_{n\geq 0}I^n=A\oplus tB[t]$$
because $I$ is free of rank $1$ over $B$. It is immediate to see that the morphism $\spec B \rightarrow \operatorname{Proj}_A(R_A(I))$ is an isomorphism over $\spec A$.
\end{proof}
\begin{proposition}\label{prop:blowup-pushout}
Let $D\into X$ be a closed immersion of proper, flat, finitely presented schemes over $S$ such that the ideal $I_D^n$ is $S$-flat for every $n\geq 1$; consider the blowup $b:\widetilde{X}:={\rm Bl}_DX\rightarrow X$ and denote by $\widetilde{D}$ the proper transform of $D$. Suppose that $\widetilde{D}\rightarrow D$ is finite flat (in particular the morphism $b$ is finite birational). Moreover, suppose that the ideal $I_D$ is contained in the conductor ideal $J_b$ of the morphism $b$. Then there exists a unique isomorphism $\widetilde{X}\amalg_{\widetilde{D}} D\rightarrow X$ which makes everything commute.
\end{proposition}
\begin{proof}
As in the previous proposition, the existence and uniqueness of the morphism are consequences of the universal property of the pushout. Therefore, as all morphisms are finite, we can restrict to the affine case. The fact that the ideal $I_D$ is contained in the conductor ideal implies that we can use \Cref{lem:cond-diag} and conclude. We leave the details to the reader.
\end{proof}
\chapter{Discriminant relations}
In this appendix, we generalize Proposition 4.2 of \cite{EdFul}. We do not need this result in its full generality in our work, only the formulas in \Cref{rem:gener}.
First of all, we set some notation. Everything is considered over a base field $\kappa$. Let $T$ be the $2$-dimensional split torus $\GG_{\rmm}^2$, which embeds in $\mathrm{GL}_2$ as the diagonal matrices, and let $E$ be the standard representation of $\mathrm{GL}_2$. Let $n$ be a positive integer. We denote by $\AA(n)$ the $n$-th symmetric power of the dual representation of $E$ and by $\PP^n$ the projective bundle $\PP(\AA(n))$. We denote by $\xi_n$ the hyperplane section of $\PP^n$. Moreover, we denote by $h_i$ the element of $\ch_T(\PP^n)$ associated to the hyperplane defined by the equation $a_{i,n-i}=0$ for every $i=0,\dots,n$, where $a_{i,n-i}$ is the coordinate of $\PP^n$ associated to the coefficient of $x_0^ix_1^{n-i}$ and $x_0,x_1$ is a $T$-basis for $E^{\vee}$. We have the identity
$$ h_i=\xi_n -(n-i)t_0- it_1$$
where $t_0,t_1$ are the generators of $\ch(\cB T)$ (acting respectively on $x_0$ and $x_1$). Let $\tau \in \ch(\cB T)$ be the element $t_0-t_1$, then the previous identity can be written as
$$ h_i=h_0+i\tau.$$
Notice that we can reduce to the $T$-equivariant setting exactly as the authors do in \cite{EdFul}, because $\mathrm{GL}_2$ is a special group and therefore the morphism
$$ \ch(\cB \mathrm{GL}_2) \longrightarrow \ch(\cB T)$$
is injective.
Let $N$ and $k$ be two positive integers such that $k\leq N$. Inside $\PP^N$, we can define a closed subscheme $\Delta_k$ parametrizing (classes of) homogeneous forms in two variables $x_0,x_1$ which have a root of multiplicity at least $k$.
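For instance, with $N=k=2$ the subscheme $\Delta_2\subset\PP^2$ is the classical discriminant conic: writing a binary quadric as $f=a_{2,0}x_0^2+a_{1,1}x_0x_1+a_{0,2}x_1^2$, it has a double root if and only if $a_{1,1}^2-4a_{2,0}a_{0,2}=0$ (in characteristic different from $2$).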
We want to study the image of the pushforward of the closed immersion
$$ \Delta_k \into \PP^N;$$
we have the description of the Chow ring of $\PP^N$ as the quotient
$$ \ch_{\mathrm{GL}_2}(\PP^N) \simeq \ZZ[c_1,c_2,\xi_N]/(p_N(\xi_N))$$
where $p_N(\xi_N)$ is a monic polynomial in $\xi_N$ of degree $N+1$ with coefficients in $\ch(\cB\mathrm{GL}_2)\simeq \ZZ[c_1,c_2]$. The coefficient of $\xi_N^i$ is the $(N-i)$-th Chern class of the $\mathrm{GL}_2$-representation $\AA(N)$ for $i=0,\dots,N$.
Exactly as was done in \cite{Vis3} and generalized in \cite{EdFul2}, we introduce, for every positive integer $r$ such that $r\leq N/k$, the multiplication morphism
$$ \pi_r: \PP^r \times \PP^{N-kr} \longrightarrow \PP^N$$
defined by the association $(f,g)\mapsto f^kg$.
The $\mathrm{GL}_2$-action on the left hand side is again induced by the symmetric powers of the dual of $E$. Notice that we are not assuming that $N$ is a multiple of $k$. We have an analogue of Proposition 3.3 of \cite{Vis3} or Proposition 4.1 of \cite{EdFul2}.
\begin{proposition}
Suppose that the characteristic of $\kappa$ is greater than $N$, then the disjoint union of the morphisms $\pi_r$ for $1\leq r\leq N/k$ is a Chow envelope for $\Delta_k\into \PP^N$.
\end{proposition}
Therefore it is enough to study the image of the pushforward of $\pi_r$ for $r\leq N/k$. We have that $\pi_r^*(\xi_N)=k\xi_r+\xi_{N-kr}$; therefore, for a fixed $r$, the image of $\pi_{r,*}$ is generated as an ideal by $\pi_{r,*}(\xi_r^m)$ for $0\leq m\leq r$.
\begin{remark}
Fix $r\leq N/k$. We have that
$$ \pi_{r,*}(\xi_r^m) \in \big(\pi_{r,*}(1), \pi_{r,*}(h_0), \dots, \pi_{r,*}(h_0\dots h_{m-1})\big)$$
in $\ch_T(\PP^r)$ for $m\leq r$. In fact, we have
$$ h_0\dots h_{m-1}= \prod_{i=0}^{m-1} (\xi_r-(r-i)t_0-it_1) = \xi_r^m + \sum_{i=0}^{m-1}\alpha_i\xi_r^i $$
with $\alpha_i \in \ch(\cB T)$. Therefore we can prove it by induction on $m$.
\end{remark}
Therefore, it is enough to describe the ideal generated by $\pi_{r,*}(h_0\dots h_m)$ for $1\leq r\leq N/k$ and $-1\leq m \leq r-1$. We define the element associated to $m=-1$ as $\pi_{r,*}(1)$.
Our goal is to prove that the ideal is in fact generated by $\pi_{1,*}(1)$ and $\pi_{1,*}(h_0)$. To do so, we first have to introduce some morphisms.
Let $n$ be an integer and let $\rho_n:(\PP^1)^{\times n} \rightarrow \PP^n$ be the $n$-fold product morphism, which is an $S_n$-torsor, where $S_n$ is the symmetric group on $n$ letters. Furthermore, we denote by $\Delta^n:\PP^1 \into (\PP^1)^{\times n}$ the small diagonal in the $n$-fold product, i.e. the morphism defined by the association $f \mapsto (f,f,\dots,f)$. We denote by $h_i$ the fundamental class of $[\infty]:=[0:1]$ in the Chow ring of the $i$-th factor of the product $(\PP^1)^n$ (and, by pullback, in the Chow ring of the product).
\begin{remark}
Notice that we are using the same notation for two different elements: if we are in the projective space $\PP^n$, $h_i$ is the hyperplane defined by the vanishing of the $(i+1)$-th coordinate of $\PP^n$. On the contrary, if we are in the Chow ring of the product $(\PP^1)^n$, it represents the subvariety defined as the pullback through the $i$-th projection of the closed immersion $\infty \into \PP^1$. Notice that $\rho_{n,*}(h_0\dots h_s)$ is equal to $s! (n-s)! h_0\dots h_s$ for every $s\leq n$.
\end{remark}
We have a commutative diagram of finite morphisms
$$
\begin{tikzcd}
(\PP^1)^r \times (\PP^1)^{N-kr} \arrow[r, "\alpha_r^k"] \arrow[d, "\rho_r \times \rho_{N-kr}"'] & (\PP^1)^N \arrow[d, "\rho_N"] \\
\PP^r \times \PP^{N-kr} \arrow[r, "\pi_r"] & \PP^N
\end{tikzcd}
$$
where $\alpha_r^k= (\Delta^k)^{\times r} \times \id_{(\PP^1)^{N-kr}}$. We can use this diagram to have a concrete description of $\pi_{r,*}(h_0\dots h_m)$. In order to do so, we first need the following lemma to describe the fundamental class of the image of $\alpha_r^k$.
\begin{lemma}\label{lem:k-diag}
We have the following identity
$$ [\Delta^k]= \sum_{j=0}^{k-1} \tau^{k-1-j}\sigma_j^k(h_1,\dots,h_k)$$
in the Chow ring of $(\PP^1)^{\times k}$ for every $k \geq 2$, where $\sigma_j^k(-)$ is the elementary symmetric function in $k$ variables of degree $j$.
\end{lemma}
\begin{proof}
The diagonal $\Delta^k$ is equal to the complete intersection of the hypersurfaces of $(\PP^1)^k$
of equations $x_{0,i}x_{1,i+1}-x_{0,i+1}x_{1,i}$ for $1\leq i \leq k-1$ (we denote by $x_{0,i},x_{1,i}$ the two coordinates of the $i$-th factor of the product). Therefore we have
$$ \Delta^k=\prod_{i=1}^{k-1}(h_i+h_{i+1}+\tau). $$
Notice that in the Chow ring of $(\PP^1)^k$ we have $k$ relations of degree $2$ which can be written as $h_i^2+\tau h_i=0$ for every $i=1,\dots,k$.
The case $k=2$ was already proven in Lemma 3.8 of \cite{Vis3}. We proceed by induction on $k$. We have
$$\Delta^{k+1}=\prod_{i=1}^{k}(h_i+h_{i+1}+\tau)=(h_{k+1}+h_k+\tau) \Delta^k$$
and thus by induction
$$\Delta^{k+1}=\sum_{i=0}^{k-1}h_{k+1} \tau^{k-1-i}\sigma_i^k + \sum_{i=0}^{k-1} \tau^{k-i}\sigma_i^k +\sum_{i=0}^{k-1} h_k \tau^{k-1-i}\sigma_i^k.$$
Recall that we have the relations $$\sigma_j^k(x_1,\dots,x_k)=x_k\sigma_{j-1}^{k-1}(x_1,\dots,x_{k-1}) + \sigma_j^{k-1}(x_1,\dots,x_{k-1})$$ between elementary symmetric functions (with $\sigma_j^k=0$ for $j>k$), therefore we have
$$ \sum_{i=0}^{k-1} \tau^{k-1-i} h_k\sigma_i^k = \sum_{i=0}^{k-1} \tau^{k-1-i}(h_k^2\sigma_{i-1}^{k-1}+h_k\sigma_{i}^{k-1})=\sum_{i=0}^{k-1} \tau^{k-1-i}h_k(-\tau \sigma_{i-1}^{k-1} +\sigma_i^{k-1})$$
where we used the relation $h_k^2+\tau h_k=0$ in the last equalities. Therefore we get
\begin{equation*}
\begin{split}
\Delta^{k+1}&=\sum_{i=0}^{k-1} \tau^{k-1-i}(h_{k+1}\sigma_i^k+h_k\sigma_i^{k-1}) + \sum_{i=0}^{k-1} \tau^{k-i}(\sigma_i^k-h_k\sigma_{i-1}^{k-1})= \\ & =\sum_{i=0}^{k-1}\tau^{k-1-i}(h_{k+1}\sigma_i^k+h_k\sigma_i^{k-1}) + \sum_{i=0}^{k-1} \tau^{k-i}\sigma_i^{k-1}
\end{split}
\end{equation*}
Shifting the index of the last sum, it is easy to get the following identity
$$ \Delta^{k+1}= h_{k+1}\sigma_{k-1}^k+h_k\sigma_{k-1}^{k-1} + \tau^k +\sum_{i=0}^{k-2}\tau^{k-1-i}(h_{k+1}\sigma_i^k+h_k\sigma_i^{k-1}+\sigma_{i+1}^{k-1})$$
and the statement follows from shifting the last sum again and from using the relations between the symmetric functions (notice that $h_k\sigma_{k-1}^{k-1}=\sigma_k^k$).
\end{proof}
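As a consistency check of \Cref{lem:k-diag}, consider the case $k=3$: using the relations $h_i^2=-\tau h_i$ one computes directly
$$[\Delta^3]=(h_1+h_2+\tau)(h_2+h_3+\tau)=\sigma_2^3(h_1,h_2,h_3)+\tau\,\sigma_1^3(h_1,h_2,h_3)+\tau^2,$$
which agrees with the formula $\sum_{j=0}^{2}\tau^{2-j}\sigma_j^3(h_1,h_2,h_3)$.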
\begin{remark}
We denote by $\theta_{m,r}$ the $T$-equivariant closed subvariety of $(\PP^1)^r \times (\PP^1)^{N-kr}$ of the form
$$\theta_{m,r}:=\infty^{m+1} \times (\PP^1)^{r-(m+1)} \times (\PP^1)^{N-kr}$$
induced by the $T$-equivariant closed immersion $\infty \into \PP^1$.
We have that $$(\rho_r \times \rho_{N-kr})_*(\theta_{m,r})=(r-(m+1))! (N-kr)! h_0\dots h_m$$ for every $m\leq r-1$.
\end{remark}
From now on, we set $d:=r-(m+1)\geq 0$. Thanks to the remark and the commutativity of the diagram we constructed, we can compute the pushforward $\rho_{N,*}\alpha_{r,*}^k(\theta_{m,r})$ and then divide it by $d!(N-kr)!$ to get $\pi_{r,*}(h_0\dots h_m)$.
We denote by $\alpha_l^{(k,d)}$ the integer
$$ \alpha_l^{(k,d)}:=\sum_{j_1+\dots+j_d=l}^{0\leq j_s \leq k-1}\binom{k}{j_1}\dots \binom{k}{j_d}$$
and by $\beta_l^{(k,m,r)}$ the integer (because $l\leq d(k-1)$)
$$ \beta_l^{(k,m,r)}:=\frac{(N-(m+1)k-l)!}{(N-kr)! d!}.$$
\begin{lemma}\label{lem:pi-r-1}
We get the following equality
$$ \pi_{r,*}(h_0\dots h_m) = \sum_{l=0}^{d(k-1)} \alpha_l^{(k,d)}\beta_{l}^{(k,m,r)} \tau^{d(k-1)-l} h_0\dots h_{(m+1)k+l-1}$$
in the $T$-equivariant Chow ring of $\PP^N$.
\end{lemma}
\begin{proof}
Thanks to \Cref{lem:k-diag}, we have that
\begin{equation*}
\begin{split}
&\alpha_{r,*}^k(\theta_{m,r})=[\infty^{k(m+1)} \times (\Delta^k)^d \times (\PP^1)^{N-kr}]= \\ & =h_1\dots h_{(m+1)k} \prod_{i=1}^d \Big(\sum_{j=0}^{k-1} \tau^{k-1-j}\sigma_j^{k}(h_{(m+j)k+1},h_{(m+j)k+2},\dots,h_{(m+j+1)k})\Big);
\end{split}
\end{equation*}
we need to take the image through $\rho_{N,*}$ of this element. However, we have that $\rho_{N,*}(h_{i_1} \dots h_{i_s})=\rho_{N,*}(h_1\dots h_s)$ for every $s$-tuple $(i_1,\dots,i_s)$ of distinct indices, because $\rho_N$ is an $S_N$-torsor. Therefore a simple computation shows that $\rho_{N,*}\alpha_{r,*}^k(\theta_{m,r})$ has the following form:
$$\sum_{l=0}^{d(k-1)}\Big(\sum_{j_1+\dots+j_d=l}^{0\leq j_s \leq k-1}\binom{k}{j_1}\dots \binom{k}{j_d} \Big)(N-(m+1)k-l)! \tau^{d(k-1)-l} h_0 \dots h_{(m+1)k+l-1}.$$
The statement follows.
\end{proof}
\begin{remark}\label{rem:gener}
Notice that the expression also makes sense for $m=-1$, and in fact we get a description of $\pi_{r,*}(1)$.
Let us describe the case $r=1$. Clearly we only have $d=1$ (or $m=-1$) and $d=0$ (or $m=0$). If $d=0$, the formula gives us
$$\pi_{1,*}(h_0)=h_0\dots h_{k-1} \in \ch_T(\PP^N);$$
if $d=1$ a simple computation shows
$$ \pi_{1,*}(1)=\sum_{l=0}^{k-1}(k-l)!\binom{k}{l}\binom{N-l}{N-k}\tau^{k-1-l}h_0\dots h_{l-1} \in \ch_T(\PP^N).$$
These two formulas give us the $T$-equivariant classes of these two elements. As a matter of fact, $\pi_{1,*}(1)$ is also a $\mathrm{GL}_2$-equivariant class by definition. As far as $\pi_{1,*}(h_0)$ is concerned, it is clearly not $\mathrm{GL}_2$-equivariant. Nevertheless we can consider
$$\pi_{1,*}(\xi_1)=\pi_{1,*}(h_0)+t_1\pi_{1,*}(1)$$
which is a $\mathrm{GL}_2$-equivariant class.
We can describe $\pi_{1,*}(1)$ geometrically. In fact, it is the fundamental class of the locus of forms $f$ which have a root of multiplicity at least $k$. This locus is strictly related to the locus of $A_k$-singularities in the moduli stack of cyclic covers of degree $2$ of the projective line.
\end{remark}
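As a sanity check, take $k=2$ and specialize \Cref{lem:pi-r-1} to $r=1$, $m=-1$ (so $d=1$): then $\alpha_l^{(2,1)}=\binom{2}{l}$ and $\beta_l^{(2,-1,1)}=(N-l)!/(N-2)!$, hence
$$ \pi_{1,*}(1)=N(N-1)\tau+2(N-1)h_0.$$
Setting the equivariant parameters to zero recovers the classical fact that the binary forms of degree $N$ with a double root form a hypersurface of degree $2(N-1)$ in $\PP^N$.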
We denote by $I$ the ideal generated by the two elements described in the previous remark. First of all, we prove that almost all the pushforwards we need to compute are in this ideal.
\begin{proposition}
We have that
$$ \pi_{r,*}(h_0\dots h_m) \in I $$
for every $1\leq r \leq N/k$ and $0\leq m \leq r-1$.
\end{proposition}
\begin{proof}
\Cref{lem:pi-r-1} implies that it is enough to prove that $(m+1)k+l-1\geq k-1$ for every $l=0,\dots, d(k-1)$ where $d=r-(m+1)$, because it implies that every factor of $\pi_{r,*}(h_0\dots h_m)$ is divisible by $h_0\dots h_{k-1}$. This follows from $m \geq 0$.
\end{proof}
Therefore it only remains to prove that $\pi_{r,*}(1)$ is in the ideal $I$ for $r\geq 2$. To do so, we need to prove some preliminary results.
\begin{proposition}\label{prop:square-power}
We have the following equality
$$ h_0^2 \dots h_{n-1}^2 h_n \dots h_{m-1} = \sum_{s=0}^n (-1)^s s! \binom{n}{s}\binom{m}{s}\tau^s h_0\dots h_{m+n-s-1}$$
in the $T$-equivariant Chow ring of $\PP^N$ for every $n\leq m$.
\end{proposition}
\begin{proof}
Denote by $a_{n,m}$ the left-hand side of the equality. Because we have the identity $h_i=h_j+(j-i)\tau$ in the $T$-equivariant Chow ring of $\PP^N$, we have the following formula
$$ a_{n,m}=a_{n-1,m+1}-(m-n+1)\tau a_{n-1,m}$$
which gives us that $a_{n,m}$ is uniquely determined by the elements $a_{0,j}$ for $j\leq N$. This implies that it is enough to prove that the formula in the statement satisfies the recursive formula above. This follows from a straightforward computation.
\end{proof}
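For instance, for $n=1$ the formula reads
$$ h_0^2h_1\dots h_{m-1}=h_0\dots h_{m}-m\tau\, h_0\dots h_{m-1},$$
which can also be checked directly by substituting $h_0=h_m-m\tau$ in one of the two factors $h_0$.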
Before going forward with our computation, we recall the following combinatorial fact.
\begin{lemma}\label{lem:comb}
For every pair of non-negative integers $k,m\leq N$ we have that
$$\sum_{l=0}^{k-1} (-1)^l \binom{m}{l}\binom{N-l}{k-1-l}=\binom{N-m}{k-1}.$$
\end{lemma}
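As a quick check, for $k=2$ the identity reads
$$\binom{N}{1}-\binom{m}{1}\binom{N-1}{0}=N-m=\binom{N-m}{1}.$$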
We are going to use it to prove the following result.
\begin{proposition}\label{prop:square-h}
For every non-negative integer $ t\leq k-1$, we have the following equality
$$ h_0\dots h_{t-1} \pi_{1,*}(1) = \sum_{f=0}^{k-1} \frac{(N-f-t)!}{(N-k-t)!} \binom{k}{f}\tau^{k-1-f}h_0\dots h_{t+f-1} + I$$
in the $T$-equivariant Chow ring of $\PP^N$. Again, for $t=0$, we end up with the formula for $\pi_{1,*}(1)$.
\end{proposition}
\begin{proof}
The left hand side of the equation in the statement can be written as
\begin{equation*}
\begin{split}
\sum_{l=0}^{t}(k-l)!\binom{k}{l}\binom{N-l}{N-k}\tau^{k-1-l}h_0^2\dots h_{l-1}^2 h_l\dots h_{t-1} + \\ + \sum_{l=t+1}^{k-1}(k-l)!\binom{k}{l}\binom{N-l}{N-k}\tau^{k-1-l}h_0^2\dots h_{t-1}^2 h_t \dots h_{l-1};
\end{split}
\end{equation*}
see \Cref{rem:gener}.
If we apply \Cref{prop:square-power} to the two sums, we get
$$ \sum_{l=0}^t \sum_{s=0}^l (-1)^s\frac{k! (N-l)! t!}{(k-l)! (N-k)! s! (l-s)! (t-s)!} \tau^{k-1-l+s}h_0\dots h_{l+t-s-1}$$
and
$$ \sum_{l=t+1}^{k-1} \sum_{s=0}^t (-1)^s\frac{k! (N-l)! t!}{(k-l)! (N-k)! s! (l-s)! (t-s)!} \tau^{k-1-l+s}h_0\dots h_{l+t-s-1}; $$
if we exchange the sums in each factor and put everything together we end up with
$$ \sum_{s=0}^t \sum_{l=s}^{k-1} (-1)^s\frac{k! (N-l)! t!}{(k-l)! (N-k)! s! (l-s)! (t-s)!} \tau^{k-1-l+s}h_0\dots h_{l+t-s-1}. $$
Shifting the inner sum and setting $f:=l-s$, we get
$$ \sum_{s=0}^t \sum_{f=0}^{k-1-s} (-1)^s\frac{k! (N-s-f)! t!}{(k-s-f)! (N-k)! s! f! (t-s)!} \tau^{k-1-f}h_0\dots h_{t+f-1}.$$
Notice that we can extend the inner sum up to $k-1$ as all the elements we are adding are in the ideal $I$. Therefore we exchange the sums again and get
$$ \sum_{f=0}^{k-1} \frac{k!}{f!} \tau^{k-1-f}h_0\dots h_{t+f-1}\Big( \sum_{s=0}^t (-1)^s \binom{N-s-f}{k-s-f}\binom{t}{s}\Big) + I $$
and we can conclude using \Cref{lem:comb}.
\end{proof}
We state now the last technical lemma.
\begin{lemma}\label{lem:tec}
If we define $\Gamma_t$ to be the element
$$ \frac{(N-t)!}{(N-2k+1)!} \tau^{2(k-1)-t}h_0\dots h_{t-1}$$
in the $T$-equivariant Chow ring of $\PP^N$, we have that $\Gamma_t \in I$ for every $t\leq k-1$.
\end{lemma}
\begin{proof}
We proceed by induction on $m=k-1-t$. The case $m=0$ follows from the previous proposition.
Suppose that $\Gamma_s \in I$ for every $s\geq t+1$. If we consider the element in $I$
$$ \frac{(N-k-t)!}{(N-2k+1)!}\tau^{k-1-t}h_0 \dots h_{t-1} \pi_{1,*}(1)$$
we can apply the previous proposition again and get
$$ \sum_{f=0}^{k-1} \binom{k}{f}\frac{(N-f-t)!}{(N-2k+1)!} \tau^{2(k-1)-(f+t)}h_0\dots h_{t+f-1} \in I$$
which is the same as
$$ \sum_{f=0}^{k-1} \binom{k}{f} \Gamma_{f+t} \in I.$$ The statement follows by induction.
\end{proof}
\begin{remark}\label{rem:nec}
It is important to notice that more is true: the same exact proof shows us that
$$ \Gamma_t \in \binom{2(k-1)-t}{k-1-t} \cdot I$$
for every $t \leq k-1$. This will not be needed, except for the case $t=0$, where it implies in particular that $\Gamma_0 \in 2 \cdot I$.
\end{remark}
Before going to prove the final proposition, we recall the following combinatorial fact.
\begin{lemma}\label{lem:comb2}
We have the following numerical equality
$$ \sum_{j_1+\dots +j_r=l}^{0\leq j_s \leq k-1} \binom{k}{j_1}\dots \binom{k}{j_r} = \binom{rk}{l}$$
for every $l\leq k-1$. In particular in our situation we have $$\alpha_{l}^{k,r}=\binom{rk}{l}$$
for $l\leq k-1$.
\end{lemma}
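This is an instance of the Vandermonde convolution: comparing the coefficients of $x^l$ in $(1+x)^{rk}=\big((1+x)^k\big)^r$ gives the identity without any restriction on the $j_s$, and the constraint $0\leq j_s\leq k-1$ is automatic here since $l\leq k-1$. For instance, for $r=k=2$ and $l=1$ both sides equal $4$.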
Finally, we are ready to prove the last statement.
\begin{proposition}
We have that $\pi_{r,*}(1)$ is contained in the ideal $I$ for $r \geq 2$.
\end{proposition}
\begin{proof}
Notice that
\begin{equation*}
\begin{split}
\pi_{r,*}(1)&=\sum_{l=0}^{r(k-1)} \alpha_l^{(k,r)}\beta_l^{(k,-1,r)}\tau^{r(k-1)-l}h_0\dots h_{l-1}=\\ & =\sum_{l=0}^{k-1} \alpha_l^{(k,r)}\beta_l^{(k,-1,r)}\tau^{r(k-1)-l}h_0\dots h_{l-1} + I
\end{split}
\end{equation*}
therefore we have to study the terms of the sum with $l\leq k-1$. Using \Cref{lem:comb2}, we get the following chain of equalities modulo the ideal $I$:
\begin{equation*}
\begin{split}
\pi_{r,*}(1)&= \sum_{l=0}^{k-1} \binom{rk}{l}\frac{(N-l)!}{(N-rk)!r!}\tau^{r(k-1)-l}h_0\dots h_{l-1}= \\ & =
\sum_{l=0}^{k-1}\binom{rk}{l}\frac{(N-2k+1)!}{(N-rk)!r!}\tau^{(r-2)(k-1)}\Gamma_l;
\end{split}
\end{equation*}
therefore it remains to prove that the coefficient
$$
\binom{rk}{l}\frac{(N-2k+1)!}{(N-rk)!r!}
$$
is an integer for every $r \geq 2$ and any $l\leq k-1$. First of all, we notice that this is the same as
$$\binom{rk}{l}\binom{N-rk+r}{r}\frac{(N-2k+1)!}{(N-rk+r)!}$$
which implies that for $r\geq 3$ and $l\leq k-1$ this is an integer, since then $N-rk+r \leq N-2k+1$ and the last factor is a product of consecutive integers. It remains to prove the statement for $r=2$, i.e.\ to prove that the number
$$ \binom{2k}{l}\frac{N-2k+1}{2}$$
is an integer. Notice that it is clearly true for $l\geq 1$ but not for $l=0$. However, we have that $\Gamma_0 \in 2\cdot I$ by \Cref{rem:nec}, therefore we are done.
\end{proof}
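The integrality argument for $r \geq 3$ can also be confirmed by brute force on small inputs; the Python sketch below (with arbitrarily chosen ranges for $N$, $k$, $r$, $l$) checks the coefficient as an exact fraction.

```python
from fractions import Fraction
from math import comb, factorial

def coeff(N, k, r, l):
    """The coefficient binom(rk, l) * (N-2k+1)! / ((N-rk)! * r!), exactly."""
    return Fraction(comb(r * k, l) * factorial(N - 2 * k + 1),
                    factorial(N - r * k) * factorial(r))

# For r >= 3 and k >= 2 one has N - rk + r <= N - 2k + 1, so the factor
# (N-2k+1)!/(N-rk+r)! is a product of consecutive integers and the whole
# coefficient is an integer; the loop confirms this on small inputs.
for k in range(2, 5):
    for r in range(3, 6):
        for N in range(r * k, r * k + 6):
            for l in range(k):      # l <= k-1
                assert coeff(N, k, r, l).denominator == 1, (N, k, r, l)
```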
\begin{corollary}
The ideal generated by the relations induced by $\Delta^k$ in $\PP^N$ is generated by the two elements $\pi_{1,*}(1)$ and $\pi_{1,*}(h_0)$. See \Cref{rem:gener} for the explicit description.
\end{corollary}
\addcontentsline{toc}{chapter}{Bibliography}
\end{document}
\begin{document}
\title{Non-hexagonal lattices from a two species interacting system}
\begin{abstract}
A two species interacting system motivated by the density functional theory
for triblock copolymers contains long range interaction that affects
the two species differently. In a two species periodic assembly of
discs, the two species appear alternately on a lattice.
A minimal two species periodic assembly is one with the least
energy per lattice cell area.
There is a parameter $b$ in $[0,1]$ and the type of the lattice associated
with a minimal assembly varies depending on $b$. The thresholds are
determined by a number $B=0.1867...$
If $b \in [0, B)$, a minimal assembly is associated with a rectangular
lattice whose ratio of the longer side to the
shorter side is in $(1, \sqrt{3}]$;
if $b \in [B, 1-B]$, a minimal assembly is associated with a square lattice;
if $b \in (1-B, 1]$, a minimal assembly is associated with a
rhombic lattice with an acute angle in $[\frac{\pi}{3}, \frac{\pi}{2})$.
Only when $b=1$ is this rhombic lattice a hexagonal lattice.
None of the other values of $b$ yields a hexagonal lattice,
a sharp contrast to the situation for one species
interacting systems, where hexagonal lattices are ubiquitously observed.
\
\noindent{\bf Key words}. Two species interacting system, triblock copolymer,
two species periodic assembly of discs, rectangular lattice, square lattice,
rhombic lattice, hexagonal lattice, duality property.
\
\noindent {\bf AMS Subject Classifications}. 82B20, 82D60, 92C15, 11F20
\end{abstract}
\section{Introduction}
\setcounter{equation}{0}
From honeycomb to chicken wire fence, from graphene to carbon nanotube,
the hexagonal pattern is ubiquitous in nature.
The honeycomb conjecture states that the hexagonal tiling is the best way
to divide a surface into regions of equal area with the least total perimeter
\cite{hales}. The Fekete problem minimizes an interaction energy of points
on a sphere and
obtains a hexagonal arrangement of minimizing points (with some defects
due to a topological reason) \cite{bendito}.
Against this conventional wisdom,
we present a problem where the hexagonal pattern is {\em generally not}
the most favored structure. Our study is motivated by
Nakazawa and Ohta's theory for triblock copolymer morphology
\cite{nakazawa, rw4}. In an ABC triblock
copolymer a molecule is a subchain of type A monomers connected to a subchain of type B monomers which in turn is connected to a subchain of type C monomers.
Because of the repulsion between the unlike monomers, the different type subchains tend to segregate. However since subchains are chemically bonded in molecules, segregation cannot lead to a macroscopic phase separation; only micro-domains
rich in individual type monomers emerge, forming morphological phases. Bonding of distinct monomer subchains provides an inhibition mechanism in block
copolymers.
The mathematical study of the triblock copolymer problem is still in the early
stage. There are existence theorems about stationary assemblies of
core-shells \cite{rwang}, double bubbles \cite{rw19}, and discs \cite{rwang2},
with the last work being the most relevant to this paper.
Here we treat two of the three monomer types of a triblock copolymer
as species and view the third type as the surrounding environment, dependent on
the two species. This way a triblock copolymer is a two species interacting
system.
The definition of our two species interacting system starts with a
lattice $\Lambda$ on the complex plane generated by two nonzero complex
numbers $\alpha_1$ and $\alpha_2$, with $\operatorname{Im} (\alpha_2/\alpha_1) >0$,
\begin{equation}
\label{Lambda}
\Lambda = \{ j_1 \alpha_1 + j_2 \alpha_2: \ j_1, j_2 \in \mathbb{Z}\}.
\end{equation}
Denote by $D_\alpha$ the parallelogram cell
\begin{equation}
\label{cell}
D_\alpha = \{ t_1 \alpha_1 + t_2 \alpha_2: t_1,t_2 \in (0,1) \}
\end{equation}
associated to the basis $\alpha = (\alpha_1, \alpha_2)$
of the lattice $\Lambda$. The lattice $\Lambda$ defines an equivalence
relation on $\mathbb{C}$ where two complex numbers are equivalent if
their difference is in $\Lambda$. The resulting space of equivalence classes
is denoted $\mathbb{C} / \Lambda$. It can be represented by $D_\alpha$ where
the opposite edges of $D_\alpha$ are identified.
There are two sets of parameters in our model. The first consists of two numbers
$\omega_1$ and $\omega_2$ satisfying
\begin{equation}
0< \omega_1, \ \omega_2 <1, \ \mbox{and} \ \omega_1 + \omega_2 <1.
\label{omega}
\end{equation}
The second is a two by
two symmetric matrix $\gamma$,
\begin{equation}
\label{gamma}
\gamma = \left [ \begin{array}{ll} \gamma_{11} & \gamma_{12}
\\ \gamma_{21} & \gamma_{22} \end{array} \right ],
\ \gamma_{12}=\gamma_{21}.
\end{equation}
Furthermore, in this paper we assume that
\begin{equation}
\gamma_{11} >0, \ \gamma_{22}>0, \ \gamma_{12}\geq 0,
\ \gamma_{11} \gamma_{22} - \gamma_{12}^2 \geq 0.
\label{gamma-1}
\end{equation}
Our model is a variational problem defined on pairs of
$\Lambda$-periodic sets with prescribed average size. More specifically
a pair $(\Omega_1, \Omega_2)$ of two subsets of $\mathbb{C}$ is
admissible if the following conditions hold.
Both $\Omega_1$ and $\Omega_2$ are $\Lambda$-periodic, i.e.
\begin{equation}
\Omega_j + \lambda = \Omega_j, \ \mbox{for all} \ \lambda \in \Lambda,
\ j=1,2; \label{periodicity} \end{equation}
$\Omega_1$ and $\Omega_2$ are disjoint in the sense that
\begin{equation}
|\Omega_1 \cap \Omega_2| = 0;
\label{disjoint}
\end{equation}
the average sizes of $\Omega_1$ and $\Omega_2$
are fixed at $\omega_1$ and $\omega_2 \in (0,1)$ respectively, i.e.
\begin{equation}
\label{areaconstraint}
\frac{|\Omega_j \cap D_\alpha |}{|D_\alpha|} = \omega_j, \ j=1,2.
\end{equation}
In \eqref{disjoint} and \eqref{areaconstraint}, $|\cdot|$ denotes the
two-dimensional
Lebesgue measure.
Although it can be given in terms of $\alpha_1$ and $\alpha_2$,
$|D_\alpha|$ actually depends on the lattice $\Lambda$, not the particular basis $\alpha$,
and therefore we alternatively write it as $|\Lambda|$,
\begin{equation}
\label{latticearea}
|\Lambda|=|D_\alpha| = \operatorname{Im} (\overline{\alpha_1} \alpha_2).
\end{equation}
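The basis-independence of the cell area is elementary to check numerically. The Python sketch below (with an arbitrarily chosen basis) compares $\operatorname{Im}(\overline{\alpha_1}\alpha_2)$ with the cross-product formula for a parallelogram area and confirms that a unimodular change of basis leaves the value unchanged.

```python
# Cell area |D_alpha| = Im(conj(alpha1) * alpha2), checked against the
# cross-product formula and against a change of basis of the same lattice.
a1, a2 = 2 + 1j, 1 + 3j                        # arbitrary basis, Im(a2/a1) > 0
assert (a2 / a1).imag > 0

area = (a1.conjugate() * a2).imag              # Im(conj(alpha1) alpha2)
cross = a1.real * a2.imag - a1.imag * a2.real  # cross product of the two sides
assert abs(area - cross) < 1e-12

# A unimodular change of basis generates the same lattice and the same area:
b1, b2 = a1, a2 + 3 * a1
assert abs((b1.conjugate() * b2).imag - area) < 1e-12
```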
Given an admissible pair $(\Omega_1, \Omega_2)$, let
$\Omega_3=\mathbb{C} \backslash (\Omega_1 \cup \Omega_2)$.
Again $\Lambda$ imposes an equivalence relation on $\Omega_j$ and the
resulting space of equivalence classes is denoted $\Omega_j / \Lambda$,
$j=1,2,3$. Define a functional
${\cal J}_\Lambda$ to be the free energy of $(\Omega_1, \Omega_2)$ on a cell
given by
\begin{equation}
\label{J}
{\cal J}_\Lambda(\Omega_1, \Omega_2) = \frac{1}{2} \sum_{j=1}^3
{\cal P}_{\mathbb{C}/\Lambda} (\Omega_j/\Lambda)
+ \sum_{j,k=1}^2 \frac{\gamma_{jk}}{2}\int_{D_\alpha}
\nabla I_\Lambda(\Omega_j)(\zeta) \cdot \nabla I_\Lambda(\Omega_k)(\zeta)
\, d\zeta.
\end{equation}
In (\ref{J}), ${\cal P}_{\mathbb{C}/\Lambda}(\Omega_j/\Lambda)$, $j=1,2$,
is the perimeter of $\Omega_j/\Lambda$
in $\mathbb{C}/\Lambda$. One can take a representation $D_\alpha$ of
$\mathbb{C}/\Lambda$, with its opposite sides identified, and treat
$\Omega_j \cap D_\alpha$, also with points on opposite sides identified, as
a subset of $D_\alpha$. Then ${\cal P}_{\mathbb{C}/\Lambda} (\Omega_j/\Lambda)$ is
the perimeter of $\Omega_j \cap D_\alpha$. If $\Omega_j \cap D_\alpha$ is bounded
by $C^1$ curves, then the perimeter is just the total length of the curves.
More generally, for a merely measurable $\Lambda$-periodic $\Omega_j$,
\begin{equation}
\label{perimeter}
{\cal P}_{\mathbb{C}/\Lambda} (\Omega_j/\Lambda)
= \sup_g \Big \{ \int_{\Omega_j \cap D_\alpha} \mbox{div} \,g(x) \, dx: \
g \in C^1(\mathbb{C}/\Lambda, \mathbb{R}^2), \ |g(x)| \leq 1
\ \forall x \in \mathbb{C} \Big
\}.
\end{equation}
Here $g \in C^1(\mathbb{C}/\Lambda, \mathbb{R}^2)$ means that $g$
is a continuously differentiable, $\Lambda$-periodic vector field
on $\mathbb{C}$; $|g(x)|$ is the geometric norm of the vector
$g(x) \in \mathbb{R}^2$.
In $\sum_{j=1}^3{\cal P}_{\mathbb{C}/\Lambda} (\Omega_j/\Lambda)$
each boundary curve separating a $\Omega_j/\Lambda$ from a
$\Omega_k/\Lambda$, $j,k=1,2,3$, $j\ne k$,
is counted exactly twice. The constant $\frac{1}{2}$
in the front takes care of the double counting.
The function $I_\Lambda(\Omega_j)$ is the $\Lambda$-periodic
solution of Poisson's equation
\begin{equation}
\label{I}
- \Delta I_\Lambda(\Omega_j)(\zeta) = \chi_{\Omega_j}(\zeta) - \omega_j
\ \mbox{in} \ \mathbb{C},
\ \ \int_{D_\alpha} I_\Lambda(\Omega_j)(\zeta) \, d\zeta =0,
\end{equation}
where $\chi_{\Omega_j}$ is the characteristic function of $\Omega_j$.
Despite the appearance, the functional ${\cal J}_\Lambda$ depends on the
lattice $\Lambda$ instead of the particular basis $\alpha$.
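Note that, by integration by parts on the torus, the interaction integrals satisfy $\int_{D_\alpha} \nabla I_\Lambda(\Omega_j) \cdot \nabla I_\Lambda(\Omega_j)\, d\zeta = \int_{D_\alpha} I_\Lambda(\Omega_j)(\chi_{\Omega_j}-\omega_j)\, d\zeta$. The following Python sketch (NumPy assumed; square lattice, a single species, arbitrary grid size and disc) solves the Poisson equation spectrally and verifies this identity numerically.

```python
import numpy as np

# Solve -Lap I = chi_Omega - omega on the unit torus (square lattice) by FFT,
# normalize the mean of I to zero, and verify the integration-by-parts identity
#   integral |grad I|^2 = integral I (chi_Omega - omega).
n = 128
x = (np.arange(n) + 0.5) / n
X, Y = np.meshgrid(x, x, indexing="ij")
chi = (((X - 0.5) ** 2 + (Y - 0.5) ** 2) <= 0.2 ** 2).astype(float)  # one disc
rhs = chi - chi.mean()                       # chi_Omega - omega, mean zero

k = 2 * np.pi * np.fft.fftfreq(n) * n        # wavenumbers on the unit torus
KX, KY = np.meshgrid(k, k, indexing="ij")
K2 = KX ** 2 + KY ** 2
K2[0, 0] = 1.0                               # dummy value at the zero mode
I_hat = np.fft.fft2(rhs) / K2
I_hat[0, 0] = 0.0                            # enforce mean zero for I
I = np.fft.ifft2(I_hat).real

assert abs(I.mean()) < 1e-10                 # the normalization of I
dirichlet = (K2 * np.abs(I_hat) ** 2).sum() / n ** 4  # |grad I|^2 by Parseval
pairing = (I * rhs).mean()                            # I (chi - omega)
assert abs(dirichlet - pairing) < 1e-8
```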
A stationary point $(\Omega_1,\Omega_2)$ of ${\cal J}_\Lambda$
is a solution to the following equations of a free boundary problem:
\begin{align}
\kappa_{13} + \gamma_{11} I_\Lambda(\Omega_1) + \gamma_{12} I_\Lambda(\Omega_2)
&= \mu_1
\ \ \mbox{on} \ \partial \Omega_1 \cap \partial \Omega_3 \label{euler1}
\\ \kappa_{23} + \gamma_{12} I_\Lambda(\Omega_1) + \gamma_{22} I_\Lambda(\Omega_2)
&= \mu_2
\ \ \mbox{on} \ \partial \Omega_2 \cap \partial \Omega_3
\label{euler2} \\
\kappa_{12} + (\gamma_{11}-\gamma_{12}) I_\Lambda(\Omega_1) +
(\gamma_{12}-\gamma_{22}) I_\Lambda(\Omega_2)
&= \mu_1-\mu_2 \ \ \mbox{on} \
\partial \Omega_1 \cap \partial \Omega_2 \label{euler3} \\
T_{13} + T_{23} + T_{12} &= \vec{0} \ \ \mbox{at} \
\partial \Omega_1 \cap \partial \Omega_2 \cap \partial \Omega_3.
\label{euler4}
\end{align}
In (\ref{euler1})-(\ref{euler3}) $\kappa_{13}$, $\kappa_{23}$, and $\kappa_{12}$
are the curvatures of the curves $\partial \Omega_1 \cap \partial \Omega_3$,
$\partial \Omega_2 \cap \partial \Omega_3$, and
$\partial \Omega_1 \cap \partial \Omega_2$, respectively.
The unknown constants $\mu_1$ and $\mu_2$ are Lagrange multipliers
associated with the constraints \operatorname{e}qref{areaconstraint} for $\Omega_1$ and
$\Omega_2$ respectively.
The three interfaces, $\partial \Omega_1 \cap \partial \Omega_3$,
$\partial \Omega_2 \cap \partial \Omega_3$ and
$\partial \Omega_1 \cap \partial \Omega_2$,
may meet at a common point in $D_\alpha$, which is termed a triple junction point.
In (\ref{euler4}), $T_{13}$, $T_{23}$ and $T_{12}$ are respectively
the unit tangent vectors
of these curves at triple junction points. This equation simply says that at
a triple junction
point the three curves meet at angles of $\frac{2\pi}{3}$.
In this paper, we only consider a special type of $(\Omega_1, \Omega_2)$,
termed two species periodic assemblies of discs, denoted by
$(\Omega_{\alpha,1}, \Omega_{\alpha,2})$, with
\begin{align}
\Omega_{\alpha, 1}& = \bigcup_{\lambda \in \Lambda}
\Big \{ B(\xi, r_1) \cup B(\xi', r_1):
\xi = \frac{3}{4} \alpha_1
+ \frac{1}{4}\alpha_2 + \lambda, \
\xi' = \frac{1}{4} \alpha_1
+ \frac{3}{4} \alpha_2 + \lambda \Big \},
\label{Omegaalpha1} \\
\Omega_{\alpha, 2}& = \bigcup_{\lambda \in \Lambda}
\Big \{ B(\xi, r_2) \cup B(\xi', r_2):
\xi = \frac{1}{4} \alpha_1
+ \frac{1}{4} \alpha_2 + \lambda , \
\xi' = \frac{3}{4} \alpha_1
+ \frac{3}{4} \alpha_2 + \lambda
\Big \}. \label{Omegaalpha2}
\end{align}
In \eqref{Omegaalpha1} and \eqref{Omegaalpha2},
$B(\xi, r_j)$, or $B(\xi',r_j)$, is the closed disc of radius $r_j$ centered
at $\xi$, or $\xi'$ respectively; the $r_j$'s are given by
\begin{equation}
\omega_j = \frac{2\pi r_j^2}{|D_\alpha|}, \ j=1,2.
\label{rj}
\end{equation}
Be aware that $(\Omega_{\alpha,1},\Omega_{\alpha,2})$ defined this way
depends on the basis $\alpha$, not
the lattice $\Lambda$ generated by $\alpha$. One may have two different
bases that generate the same lattice, but they define two distinct
assemblies.
Shifting $(\Omega_{\alpha,1}, \Omega_{\alpha,2})$ does not
change its energy, so
our choice for the centers of the discs
in \eqref{Omegaalpha1} and \eqref{Omegaalpha2} is not the only one.
Another aesthetically pleasing placement is to put the disc centers on
the lattice points and half lattice points; see Figure \ref{f-assemblies}.
Nevertheless we prefer not to have discs on the boundary of the parallelogram
cell $D_\alpha$.
\begin{figure}
\centering
\includegraphics[scale=0.13]{rhombus.png}
\includegraphics[scale=0.13]{shift.png}
\caption{A two species periodic assembly of discs given by
\eqref{Omegaalpha1} and \eqref{Omegaalpha2}, and a shift of the assembly
with disc centers at the lattice points and the half lattice
points.}
\label{f-assemblies}
\end{figure}
A two species periodic assembly $(\Omega_{\alpha,1}, \Omega_{\alpha,2})$
is not a stationary point of the energy
functional ${\cal J}_\Lambda$. However Ren and Wang have shown the existence
of stationary points that are unions of perturbed discs in a bounded
domain with the Neumann boundary condition \cite{rwang2}.
Numerical evidence strongly suggests the existence of
stationary points similar to two species assemblies \cite{wang-ren-zhao}.
In this paper we determine, in terms of $\alpha$,
which $(\Omega_{\alpha,1}, \Omega_{\alpha,2})$ is the most energetically
favored. For this purpose, it is more appropriate to consider the
energy per cell area instead of the energy on a cell. Namely consider
\begin{equation}
\label{tJ}
\widetilde{\cal J}_\Lambda(\Omega_1,\Omega_2)
= \frac{1}{|\Lambda|} {\cal J}_\Lambda(\Omega_1,\Omega_2),
\end{equation}
take $(\Omega_1,\Omega_2)$ to be a two species periodic assembly, and
minimize energy per cell area among all such assemblies
with respect to $\alpha$, i.e.
\begin{equation}
\min_\alpha \Big \{ \widetilde{\cal J}_\Lambda
(\Omega_{\alpha,1}, \Omega_{\alpha,2}):
\ \alpha=(\alpha_1,\alpha_2), \
\alpha_1, \alpha_2 \in \mathbb{C}\backslash\{0\},
\ \operatorname{Im} \frac{\alpha_2}{ \alpha_1} >0,
\ \Lambda \ \mbox{is generated by}
\ \alpha \Big \}.
\label{minimization}
\end{equation}
Several lattices will appear as the most favored structures. They are
illustrated in Figure \ref{f-lattices}. A rectangular
lattice has a basis $\alpha$ whose parallelogram cell $D_\alpha$ is a rectangle.
A square lattice has a square as a parallelogram cell. A rhombic lattice
has a rhombus cell,
i.e. a parallelogram cell whose four sides have the same length. Finally
a hexagonal lattice has a parallelogram cell with
four equal length sides and an angle of $\frac{\pi}{3}$ between two sides.
If we let
\begin{equation}
\tau = \frac{\alpha_2}{\alpha_1},
\end{equation}
then in terms of $\tau$,
$\Lambda$ is rectangular if $\operatorname{Re} \tau =0$, $\Lambda$ is
square if $\tau = i$, $\Lambda$ is rhombic if $|\tau| =1$, and
$\Lambda$ is hexagonal if $\tau = \frac{1}{2} + \frac{\sqrt{3}}{2} i$.
Note that these classes of lattices are not mutually exclusive.
A hexagonal lattice is a rhombic lattice; a square
lattice is both a rectangular lattice and a rhombic lattice.
The reason that a rhombic lattice with a $\frac{\pi}{3}$ angle is termed
a hexagonal lattice comes from its Voronoi cells. At each
lattice point, the Voronoi cell of this lattice point consists of points
in $\mathbb{C}$ that are closer to this lattice point than any
other lattice points.
For the rhombic lattice with a $\frac{\pi}{3}$ angle, the Voronoi
cell at each lattice point is a regular hexagon. With Voronoi cells at
all lattice points, the hexagonal lattice gives rise to a honeycomb pattern.
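The nearest-neighbor structure behind this Voronoi picture is easy to confirm: in the hexagonal lattice generated by $\alpha_1=1$, $\alpha_2=e^{i\pi/3}$, every lattice point has six equidistant nearest neighbors, which is why the Voronoi cell is a regular hexagon; a square lattice has only four. A small Python sketch (illustrative only):

```python
from cmath import exp, pi

# Hexagonal lattice: alpha1 = 1, alpha2 = e^{i pi/3}.
a1, a2 = 1, exp(1j * pi / 3)
hex_pts = [j1 * a1 + j2 * a2 for j1 in range(-3, 4) for j2 in range(-3, 4)]
hex_d = sorted(abs(p) for p in hex_pts if p != 0)
# six equidistant nearest neighbors -> regular hexagonal Voronoi cell
assert len([d for d in hex_d if abs(d - hex_d[0]) < 1e-9]) == 6

# Square lattice: only four nearest neighbors -> square Voronoi cell.
sq_pts = [j1 + j2 * 1j for j1 in range(-3, 4) for j2 in range(-3, 4)]
sq_d = sorted(abs(p) for p in sq_pts if p != 0)
assert len([d for d in sq_d if abs(d - sq_d[0]) < 1e-9]) == 4
```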
The main result of this paper asserts that for a two species
periodic assembly of discs to minimize the energy per cell area,
its associated parallelogram cell is either a
rectangle (including a square) whose ratio of the longer side
to the shorter side lies between $1$ and $\sqrt{3}$, or a rhombus
(including one with a $\frac{\pi}{3}$ acute angle)
whose acute angle is between $\frac{\pi}{3}$ and
$\frac{\pi}{2}$. Any two species
periodic assembly of discs that minimizes the energy per cell area is
called a minimal assembly and its associated lattice
is called a minimal lattice.
The most critical parameter in this problem is $b$ given in terms of
$\omega_j$ and $\gamma_{jk}$ by
\begin{equation}
b = \frac{2 \gamma_{12} \omega_1\omega_2}
{ \gamma_{11} \omega_1^2 + \gamma_{22} \omega_2^2}.
\label{binit}
\end{equation}
Conditions \eqref{omega}, \eqref{gamma}, and \eqref{gamma-1}
on $\omega_j$ and $\gamma_{jk}$ imply that
\begin{equation}
b \in [0,1].
\label{b-range}
\end{equation}
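Indeed, \eqref{b-range} follows from \eqref{gamma-1} and the AM--GM inequality: $2\gamma_{12}\omega_1\omega_2 \leq 2\sqrt{\gamma_{11}\gamma_{22}}\,\omega_1\omega_2 \leq \gamma_{11}\omega_1^2+\gamma_{22}\omega_2^2$. A randomized Python check over admissible parameters (sampling ranges arbitrary):

```python
import random

# b = 2 g12 w1 w2 / (g11 w1^2 + g22 w2^2) lies in [0,1] whenever
# g11, g22 > 0, 0 <= g12 <= sqrt(g11 g22), and 0 < w1, w2 with w1 + w2 < 1.
random.seed(0)
for _ in range(10000):
    g11 = random.uniform(0.01, 5.0)
    g22 = random.uniform(0.01, 5.0)
    g12 = random.uniform(0.0, (g11 * g22) ** 0.5)    # gamma_12 >= 0, det >= 0
    w1 = random.uniform(0.01, 0.49)
    w2 = random.uniform(0.01, min(0.49, 0.99 - w1))  # w1 + w2 < 1
    b = 2 * g12 * w1 * w2 / (g11 * w1 ** 2 + g22 * w2 ** 2)
    assert -1e-12 <= b <= 1 + 1e-12
```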
To ensure the disjoint condition \eqref{disjoint}
for potential minimal assemblies we assume that
$\omega_1$ and $\omega_2$ are sufficiently small. Namely let $\omega_0>0$
be small enough so that if
\begin{equation}
\omega_j < \omega_0, \ \ j=1,2,
\label{omega0}
\end{equation}
and $(\Omega_{\alpha, 1}, \Omega_{\alpha,2})$ is a
two species periodic assembly of discs
whose basis $(\alpha_1,\alpha_2)$ satisfies
\begin{equation}
\label{rectangle-cond}
\operatorname{Re} \tau =0 \ \mbox{and} \ |\tau| \in [1, \sqrt{3}],
\end{equation}
or
\begin{equation}
\label{rhombus-cond}
|\tau| =1 \ \mbox{and} \ \arg \tau \in
\Big [\frac{\pi}{3}, \frac{\pi}{2} \Big ],
\end{equation}
then $(\Omega_{\alpha, 1}, \Omega_{\alpha,2})$ is disjoint in the sense
of \eqref{disjoint}.
The line segment \eqref{rectangle-cond} and the arc \eqref{rhombus-cond}
are illustrated in the first plot of Figure \ref{f-W}.
Now we state our theorem.
\begin{theorem}
Let the parameters $\omega_j$, $j=1,2$, and $\gamma_{jk}$, $j,k=1,2$,
satisfy the conditions \eqref{omega}, \eqref{gamma}, \eqref{gamma-1},
and \eqref{omega0}.
The minimization problem \eqref{minimization} always admits a minimum. Let
$\alpha_\ast=(\alpha_{\ast,1}, \alpha_{\ast,2})$ be a minimum of
\eqref{minimization} and let $\Lambda_\ast$ be the lattice determined by
$\alpha_\ast$.
Then there exists $B = 0.1867...$ such that the following statements hold.
\begin{enumerate}
\item If $b=0$, then $\Lambda_\ast$ is a rectangular lattice
whose ratio of the longer side to the
shorter side is $\sqrt{3}$.
\item If $b \in (0, B)$, then $\Lambda_\ast$ is a rectangular lattice
whose ratio of the longer side to the shorter side is
in $(1, \sqrt{3})$. As $b$ increases from $0$ to $B$, this ratio
decreases from $\sqrt{3}$ to $1$.
\item If $b \in [B, 1-B]$, then $\Lambda_\ast$ is a square lattice.
\item If $b \in (1-B, 1)$, then $\Lambda_\ast$ is a non-square,
non-hexagonal rhombic lattice with an acute angle in
$(\frac{\pi}{3}, \frac{\pi}{2})$. As $b$ increases from
$1-B$ to $1$, this angle decreases from $\frac{\pi}{2}$ to
$\frac{\pi}{3}$.
\item If $b =1$, then $\Lambda_\ast$ is a hexagonal lattice.
\end{enumerate}
\label{t-main}
\end{theorem}
The threshold $B$ is defined precisely in \eqref{B} by two infinite
series, from which one finds its numerical value.
Only in the case $b=1$ is $\widetilde{\cal J}_\Lambda
(\Omega_{\alpha,1}, \Omega_{\alpha,2})$ minimized by a hexagonal lattice. In
all other cases minimal lattices are not hexagonal. As a matter of fact,
our assumption on $\gamma$ in this paper is a bit different from the
conditions for
$\gamma$ in a triblock copolymer. In a triblock copolymer, instead of
\eqref{gamma-1}, $\gamma$ needs to be positive definite.
In \cite{rwang2}, where Ren and Wang found assemblies of perturbed discs
as stationary points, $\gamma_{12}$ is positive.
In terms of $b$, $\gamma$ being positive
definite and $\gamma_{12}>0$ mean that $b \in (0,1)$.
In this paper we include both the $b=0$ case and the $b=1$ case
for good reasons.
The case $b=1$ corresponds to $\gamma_{11} \gamma_{22} - \gamma_{12}^2=0$,
i.e. $\gamma$ has a non-trivial kernel, and
$(-\omega_1, \omega_2)$ is in the kernel of $\gamma$.
This case is actually very special. It is
equivalent to a problem studied by
Chen and Oshita in \cite{chenoshita-2}, a simpler one species analogy of
the two species problem studied here. The motivation of that problem
comes from the study of diblock copolymers where a
molecule is a subchain of type A monomers connected to a subchain of
type B monomers. With one type treated as a species and the other as
the surrounding environment, a diblock copolymer is a one species
interacting system.
The recent years have seen active work on the diblock copolymer problem;
see \cite{rw, rw12, aco,
choksi-peletier, morini-sternberg, goldman-muratov-serfaty}
and the references therein.
Based on a density functional theory of Ohta and Kawasaki
\cite{ok}, the free energy of a diblock copolymer system on a
$\Lambda$-periodic domain is
\begin{equation}
{\cal E}_\Lambda (\Omega) = {\cal P}_{\mathbb{C}/\Lambda}(\Omega/\Lambda)
+ \frac{\gamma_d}{2} \int_{D_\alpha} |\nabla I_\Lambda(\Omega)(\zeta)|^2 \,
d \zeta.
\end{equation}
Here, analogous to the two species problem, $\Omega$ is a $\Lambda$-periodic
subset of $\mathbb{C}$ under the average area constraint
$\frac{|\Omega \cap D_\alpha|}{|D_\alpha|} = \omega$
where $\omega \in (0,1)$ is one of the two given parameters.
The other parameter is
the number $\gamma_d >0$. Now take $\Omega$ to be $\Omega^d_\Lambda$,
the union of discs centered at
$\frac{\alpha_1+\alpha_2}{2} + \lambda$, $\lambda \in \Lambda$,
of radius $\sqrt{\frac{\omega |D_\alpha|}{\pi}}$,
and minimize
the energy per cell area with respect to $\Lambda$:
\begin{equation}
\label{db-E}
\min_\Lambda \frac{1}{|\Lambda|} {\cal E}_\Lambda(\Omega^d_\Lambda).
\end{equation}
This time, unlike in the two species problem, $\Omega^d_\Lambda$ depends on
the lattice $\Lambda$, not the basis $(\alpha_1, \alpha_2)$.
Chen and Oshita showed that \eqref{db-E} is minimized by a hexagonal lattice.
In \cite{sandier-serfaty} Sandier and Serfaty studied the
Ginzburg-Landau problem with magnetic field and arrived at a reduced
energy. Minimization of this energy turns out to be the same as
the minimization problem \eqref{db-E}.
In our two species problem \eqref{minimization},
the condition $b=1$ actually makes
the two species indistinguishable as far as interaction is concerned.
It means that the two species function as one species, hence the equivalence
to the one species problem \eqref{db-E}.
The case $b=0$ is {\em dual} to the $b=1$ case, a point explained below.
It is therefore natural to include both cases.
Our work starts with Lemma \ref{l-size}, which states that for us
to solve \eqref{minimization} it suffices to minimize the energy among
two species periodic assemblies of unit cell area.
Then in Lemma \ref{l-Ff} it is
shown that the latter problem is equivalent to maximizing a function,
\begin{equation}
f_b (z) = b \log \big | \operatorname{Im} (z) \, \eta (z) \big|
+ (1-b) \log \big | \operatorname{Im} \big(\frac{z+1}{2}\big)
\eta \big(\frac{z+1}{2}\big) \big |,
\label{fb-intro}
\end{equation}
with respect to $z$ in the set $\{ z \in \mathbb{C}: \ \operatorname{Im} z >0,
\ |z| \geq 1, \ 0 \leq \operatorname{Re} z \leq 1 \}$. Here
\begin{equation}
\label{eta-intro}
\eta(z) = e^{\frac{\pi}{3} z i}\prod_{n=1}^\infty\big(1- e^{2\pi nz i}\big)^4
\end{equation}
is the fourth power of the Dedekind eta function.
If $b=1$, then $f_b=f_1$ and we are looking at the problem studied by
Chen and Oshita \cite{chenoshita-2}, and Sandier and Serfaty
\cite{sandier-serfaty}. In this case, $f_1$ is maximized in a smaller set,
$\{ z \in \mathbb{C}: \ |z| \geq 1, \ 0 \leq \operatorname{Re} z \leq 1/2 \}$.
Using a maximum principle argument, Chen and Oshita showed that $f_1$
is maximized at $z = \frac{1}{2} + \frac{\sqrt{3}}{2} i$, which
corresponds to the hexagonal lattice. Sandier and Serfaty used a relation
between the Dedekind eta function and the Epstein zeta function, and
a property of the Jacobi theta function to arrive at the same conclusion.
Neither method seems to be applicable to the two species system with
$b \ne 1$. Instead we rely on a duality principle, Lemma \ref{l-dual},
which shows that maximizing $f_b$ is equivalent to maximizing $f_{1-b}$.
This allows us to only consider $b \in [0,1/2]$, and there we are
able to show that $f_b(z)$ attains the maximum on the imaginary
axis above $i$, i.e. $\operatorname{Re} z =0$ and $\operatorname{Im} z \geq 1$.
So we turn to maximize $f_b(y i)$ with respect to $y \geq 1$.
The most technical part of this work, Lemma \ref{l-fb-im}, shows that when
$b =0$, $f_0(y i)$ is maximized at $y=\sqrt{3}$;
when $b \in (0, B)$, $f_b(y i)$ is maximized at some
$y = q_b \in (1, \sqrt{3})$;
when $b \in [B, 1]$, $f_b(y i)$ is maximized at $y=1$.
The theorem then follows readily.
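These maximizer locations can be probed numerically by truncating the product \eqref{eta-intro}. The Python sketch below (NumPy assumed; grid spacing and truncation order are arbitrary choices) locates the maximizer of $f_0(yi)$ near $y=\sqrt{3}$ and, for $b=1$, the maximizer on the arc $|z|=1$, $\arg z \in [\frac{\pi}{3}, \frac{\pi}{2}]$ at the hexagonal point.

```python
import numpy as np

def eta4(z, nterms=40):
    """Truncation of the product defining eta (the fourth power of the
    Dedekind eta function); the tail decays like |e^{2 pi i z}|^nterms."""
    q = np.exp(2j * np.pi * z)
    prod = 1.0 + 0.0j
    for m in range(1, nterms + 1):
        prod *= (1 - q ** m) ** 4
    return np.exp(1j * np.pi * z / 3) * prod

def f(b, z):
    """The function f_b of the text, evaluated via the truncated product."""
    w = (z + 1) / 2
    return (b * np.log(abs(z.imag * eta4(z)))
            + (1 - b) * np.log(abs(w.imag * eta4(w))))

# f_0(yi) on the imaginary axis: maximum near y = sqrt(3)
ys = np.arange(1.0, 2.0, 1e-3)
ymax = ys[int(np.argmax([f(0.0, 1j * y) for y in ys]))]
assert abs(ymax - np.sqrt(3)) < 5e-3

# f_1 on the arc |z| = 1, arg z in [pi/3, pi/2]: maximum at the hexagonal point
ts = np.arange(np.pi / 3, np.pi / 2, 1e-3)
tmax = ts[int(np.argmax([f(1.0, complex(np.cos(t), np.sin(t))) for t in ts]))]
assert abs(tmax - np.pi / 3) < 5e-3
```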
The key step in the proof of Lemma \ref{l-fb-im}
is to establish a monotonicity property
for the ratio of the derivatives of $f_0(y i)$ and $f_1(y i)$ with respect
to $y \in (1, \sqrt{3})$. This piece of argument is placed in the appendix
so a reader who is more interested in the overall strategy of this work
may skip it at the first reading.
\begin{figure}
\centering
\includegraphics[scale=0.12]{rectangle.png}
\includegraphics[scale=0.12]{square.png}
\includegraphics[scale=0.12]{rhombus.png}
\includegraphics[scale=0.15]{hexagon.png}
\caption{Two species periodic assemblies of discs with their associated
lattices.
First row from left to right: a rectangular lattice and a square lattice.
Second row from left to right: a rhombic lattice and a hexagonal lattice.}
\label{f-lattices}
\end{figure}
\section{Derivation of $f_b$}
\setcounter{equation}{0}
The size and the shape of a two species periodic assembly play different
roles in its energy. To separate the two factors write the basis
of a given two species periodic assembly as
$t \alpha = (t\alpha_1, t\alpha_2)$ where $ t \in (0, \infty)$ and
the parallelogram generated by $\alpha = (\alpha_1, \alpha_2)$ has the unit
area, i.e. $|D_\alpha| =1$. This way the assembly is now denoted by
$(\Omega_{t\alpha,1},\Omega_{t\alpha,2}) $, with $t$ measuring the size of
the assembly (note $|D_{t\alpha}| = t^2$), and $D_\alpha$ describing
the shape of the assembly.
The lattice generated by $\alpha$ is denoted $\Lambda$ and the
lattice generated by $t \alpha$ is $t \Lambda$.
\begin{lemma}
Fix $\alpha_1, \alpha_2 \in
\mathbb{C}\backslash \{0\}$, $\operatorname{Im}(\alpha_2/\alpha_1)>0$, and
$|D_\alpha|=1$.
Among all two species periodic assemblies $(\Omega_{t\alpha,1}, \Omega_{t\alpha,2})$,
$t \in (0,\infty)$, the energy per cell area is minimized by the one with
$t = t_{\alpha}$, where
\begin{equation}
\label{t-alpha}
t_\alpha = \Big ( \frac{2 \sqrt{2\pi \omega_1} + 2 \sqrt{2\pi \omega_2} }
{\sum_{j,k=1}^2 \gamma_{jk} \int_{D_\alpha}
\nabla I_{\Lambda}(\Omega_{\alpha,j})(\zeta)
\cdot \nabla I_{\Lambda}(\Omega_{\alpha, k})(\zeta) \, d \zeta } \Big )^{1/3}.
\end{equation}
The energy per cell area of this assembly is
\begin{equation}
\label{energy-per-cell}
\widetilde{\cal J}_{t_\alpha \Lambda}
(\Omega_{t_\alpha \alpha,1}, \Omega_{t_\alpha \alpha,2} ) =
3 \Big ( \sqrt{2\pi \omega_1} + \sqrt{2\pi \omega_2} \Big )^{2/3}
\Big ( \sum_{j,k=1}^2 \frac{\gamma_{jk}}{2} \int_{D_\alpha}
\nabla I_{\Lambda}(\Omega_{\alpha,j})(\zeta)
\cdot \nabla I_{\Lambda}(\Omega_{\alpha, k})(\zeta) \, d \zeta \Big )^{1/3}.
\end{equation}
Consequently minimization of \eqref{minimization} is reduced to minimizing
\begin{equation}
\label{F}
{\cal F}(\alpha) = \sum_{j,k=1}^2 \frac{\gamma_{jk}}{2}
\int_{D_\alpha}
\nabla I_{\Lambda}(\Omega_{\alpha,j})(\zeta)
\cdot \nabla I_{\Lambda}(\Omega_{\alpha, k})(\zeta) \, d \zeta, \ \ |D_\alpha| =1,
\end{equation}
with respect to $\alpha$ of unit cell area.
\label{l-size}
\end{lemma}
\begin{proof}
Between the two lattices, the functions
$I_{t\Lambda}(\Omega_{t\alpha,j})$ and $I_{\Lambda}(\Omega_{\alpha,j})$
are related by
\begin{equation}
\label{scale-I}
I_{t\Lambda}(\Omega_{t\alpha,j})(\chi) = t^2 I_{\Lambda}(\Omega_{\alpha,j})(\zeta),
\ \ t \zeta=\chi, \ \zeta, \chi \in \mathbb{C}
\end{equation}
because of the equation \eqref{I}. Then
\begin{align*}
{\cal J}_{t\Lambda}(\Omega_{t\alpha,1},\Omega_{t \alpha,2} ) & =
t \Big ( 2 \sqrt{2\pi \omega_1} + 2 \sqrt{2\pi \omega_2} \Big )
+ \sum_{j,k=1}^2 \frac{\gamma_{jk}}{2} \int_{D_{t\alpha}}
\nabla I_{t\Lambda}(\Omega_{t\alpha,j}) (\chi) \cdot
\nabla I_{t\Lambda}(\Omega_{t\alpha,k})(\chi) \, d \chi \\
&= t \Big ( 2 \sqrt{2\pi \omega_1} + 2 \sqrt{2\pi \omega_2} \Big )
+ t^4 \sum_{j,k=1}^2 \frac{\gamma_{jk}}{2} \int_{D_\alpha}
\nabla I_{\Lambda}(\Omega_{\alpha,j})(\zeta)
\cdot \nabla I_{\Lambda}(\Omega_{\alpha, k})(\zeta) \, d \zeta.
\end{align*}
The energy per cell area is
\begin{align*}
\nonumber
\widetilde{\cal J}_{t\Lambda}(\Omega_{t\alpha,1}, \Omega_{t \alpha,2}) & =
\Big (\frac{1}{t} \Big )^2 {\cal J}_{t\Lambda} (\Omega_{t\alpha,1}, \Omega_{t\alpha,2}) \\ & =
\frac{1}{t} \Big ( 2 \sqrt{2\pi \omega_1} + 2 \sqrt{2\pi \omega_2} \Big )
+ t^2 \sum_{j,k=1}^2 \frac{\gamma_{jk}}{2} \int_{D_\alpha}
\nabla I_{\Lambda}(\Omega_{\alpha,j})(\zeta)
\cdot \nabla I_{\Lambda}(\Omega_{\alpha, k})(\zeta) \, d \zeta.
\end{align*}
With respect to $t$, the last quantity is minimized at $t = t_\alpha$
given in \eqref{t-alpha}, and the minimum value is given in
\eqref{energy-per-cell}.
Later one needs to minimize the right side of \eqref{energy-per-cell}
with respect to $\alpha$, $|D_\alpha|=1$. This is equivalent to minimizing
${\cal F}(\alpha)$
with respect to $\alpha$, $|D_\alpha|=1$. Once a minimum, say $\alpha_\ast$,
is found, one computes $t_{\alpha_\ast}$ from \eqref{t-alpha} and forms
the assembly $(\Omega_{t_{\alpha_\ast} \alpha_\ast,1}, \Omega_{t_{\alpha_\ast} \alpha_\ast,2})$ with the basis
$t_{\alpha_\ast} \alpha_\ast$. This assembly minimizes \eqref{minimization}.
\end{proof}
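The one-variable minimization in the proof can be double checked numerically: with $A = 2\sqrt{2\pi\omega_1}+2\sqrt{2\pi\omega_2}$ and $D$ standing for the interaction sum $\sum_{j,k}\gamma_{jk}\int_{D_\alpha}\nabla I_\Lambda(\Omega_{\alpha,j})\cdot\nabla I_\Lambda(\Omega_{\alpha,k})\,d\zeta$, the energy per cell area is $A/t + (D/2)t^2$, minimized at $t_\alpha=(A/D)^{1/3}$ with minimum value $3(A/2)^{2/3}(D/2)^{1/3}$. A Python sketch with arbitrary values of $\omega_1$, $\omega_2$, $D$:

```python
import numpy as np

w1, w2 = 0.03, 0.05                       # arbitrary small area fractions
D = 1.7                                   # placeholder for the interaction sum
A = 2 * np.sqrt(2 * np.pi * w1) + 2 * np.sqrt(2 * np.pi * w2)

E = lambda t: A / t + (D / 2) * t ** 2    # energy per cell area as a function of t
t_alpha = (A / D) ** (1 / 3)              # the claimed minimizer
E_min = 3 * (A / 2) ** (2 / 3) * (D / 2) ** (1 / 3)  # the claimed minimum value

ts = np.linspace(0.1, 3.0, 20001)
assert abs(ts[int(np.argmin(E(ts)))] - t_alpha) < 1e-3  # grid argmin matches
assert abs(E(t_alpha) - E_min) < 1e-9                   # closed form matches
```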
Now that the minimization problem \eqref{minimization} is reduced to
minimizing ${\cal F}$, we proceed to simplify ${\cal F}(\alpha)$ to a more
amenable form. To this end, one expresses
the solution of \eqref{I} in terms of the Green's function
$G_\Lambda$ of the $-\Delta$ operator as
\begin{equation}
I_{\Lambda}(\Omega_j)(\zeta) = \int_{\Omega_j \cap D_\alpha}
G_\Lambda(\zeta - \chi) \, d\chi.
\label{I2}
\end{equation}
Here $G_\Lambda$ is the $\Lambda$-periodic solution of
\begin{equation}
- \Delta G_\Lambda (\zeta) = \sum_{\lambda \in \Lambda}
\delta_{\lambda} (\zeta) - \frac{1}{|\Lambda|},
\ \int_{D_\alpha} G_\Lambda(\zeta) \, d\zeta =0.
\label{G}
\end{equation}
In \eqref{G} $\delta_\lambda$ is the delta measure at $\lambda$, and
$D_\alpha$ is a parallelogram cell of $\Lambda$. It is known that
\begin{align}
\nonumber
G_\Lambda(\zeta) & = \frac{|\zeta|^2}{4 |\Lambda|}
- \frac{1}{2\pi} \log
\Big |
\operatorname{e}\Big (\frac{\zeta^2 \overline{\alpha_1}}{4 i |\Lambda| \alpha_1}
-\frac{\zeta}{2\alpha_1} +\frac{\alpha_2}{12\alpha_1} \Big )
\Big (1-\operatorname{e}(\frac{\zeta}{\alpha_1}) \Big ) \\ & \qquad
\prod_{n=1}^\infty\Big ( \big( 1- \operatorname{e}(n\tau + \frac{\zeta}{\alpha_1}) \big )
\big (1- \operatorname{e}(n\tau - \frac{\zeta}{\alpha_1}) \big) \Big )
\Big | \label{Green0}
\end{align}
A simple proof of this fact can be found in \cite{chenoshita-2}.
Throughout this paper one writes
\begin{equation}
\operatorname{e}(z) = e^{2\pi i z}
\label{e}
\end{equation}
and
\begin{equation}
\tau = \frac{\alpha_2}{\alpha_1}.
\label{tau}
\end{equation}
Sometimes one singles out the singularity of $G_\Lambda$ at $0$ and decomposes
$G_\Lambda$ into
\begin{equation}
\label{Green}
G_\Lambda(\zeta) = -\frac{1}{2\pi} \log \frac{2\pi |\zeta|}{\sqrt{|\Lambda|}}
+ \frac{|\zeta|^2}{4 |\Lambda|} + H_\Lambda(\zeta)
\end{equation}
where
\begin{align}
\nonumber
H_\Lambda(\zeta) & = - \frac{1}{2\pi} \log
\Big |
\operatorname{e}\Big (\frac{\zeta^2 \overline{\alpha_1}}{4 i |\Lambda| \alpha_1}
-\frac{\zeta}{2\alpha_1} +\frac{\alpha_2}{12\alpha_1} \Big )
\frac{\sqrt{|\Lambda|}}{2\pi \zeta}
\Big (1-\operatorname{e}(\frac{\zeta}{\alpha_1}) \Big ) \\ & \qquad
\prod_{n=1}^\infty\Big ( \big( 1- \operatorname{e}(n\tau + \frac{\zeta}{\alpha_1}) \big )
\big (1- \operatorname{e}(n\tau - \frac{\zeta}{\alpha_1}) \big) \Big )
\Big | \label{H}
\end{align}
is a harmonic function on $(\mathbb{C} \backslash \Lambda)
\cup \{ 0\} $.
The integral term in \eqref{J} can be written in several different ways:
\begin{align}
\int_{D_\Lambda}
\nabla I_\Lambda(\Omega_j)(\zeta) \cdot \nabla I_\Lambda(\Omega_k)(\zeta)
\, d\zeta & = \int_{\Omega_k} I_\Lambda(\Omega_j)(\zeta) \, d \zeta
= \int_{\Omega_j} I_\Lambda(\Omega_k)(\chi) \, d \chi \nonumber \\
& = \int_{\Omega_k} \int_{\Omega_j} G_\Lambda(\zeta - \chi) \, d\chi d\zeta.
\label{rewrite}
\end{align}
\begin{lemma}
\label{l-FtF}
Minimizing
${\cal F}(\alpha)$ with respect to $\alpha$ of unit cell area
is equivalent to minimizing
\begin{equation}
\label{Ftilde}
\widetilde{\cal F} (\alpha)
= H_\Lambda(0) + G_\Lambda\Big(\frac{\alpha_1+\alpha_2}{2}\Big) +
b \Big( G_\Lambda\Big(\frac{\alpha_1}{2}\Big)
+ G_\Lambda\Big(\frac{\alpha_2}{2}\Big) \Big), \ \ |D_\alpha|=1
\end{equation}
where $\Lambda$ is the lattice generated by $\alpha$ and
\begin{equation}
\label{b}
b= \frac{2 \gamma_{12} r_1^2r_2^2}
{ \gamma_{11} r_1^4 + \gamma_{22} r_2^4}
= \frac{2 \gamma_{12} \omega_1\omega_2}
{ \gamma_{11} \omega_1^2 + \gamma_{22} \omega_2^2}.
\operatorname{e}nd{equation}
\end{lemma}
\begin{proof}
Given a disc $B(\xi_j, r_j)$ one finds
\begin{align}
\label{I-B} \nonumber
I_\Lambda(B(\xi_j,r_j))(\zeta) &=
\left \{ \begin{array}{ll}
- \frac{|\zeta - \xi_j|^2}{4} +\frac{r_j^2}{4} - \frac{r_j^2}{2} \log r_j,
& \mbox{if} \ \ |\zeta - \xi_j|\in [0, r_j], \\
-\frac{r_j^2}{2} \log |\zeta -\xi_j|, & \mbox{if} \ \ |\zeta-\xi_j| > r_j,
\end{array} \right. \\
& \qquad - \frac{r_j^2}{2} \log \frac{2\pi}{\sqrt{|\Lambda|}}
+ \frac{1}{4|\Lambda|} \Big (
\pi r_j^2 |\zeta - \xi_j|^2 + \frac{\pi r_j^4}{2} \Big )
+ \pi r_j^2 H_\Lambda(\zeta - \xi_j)
\end{align}
by \eqref{Green} and the mean value property of the harmonic function
$H_\Lambda$.
Then
\begin{align}
\label{Bj-Bj}
\int_{B(\xi_j, r_j)} \int_{B(\xi_j,r_j)} G_\Lambda(\zeta - \chi)
\, d\chi d\zeta
&= \int_{B(\xi_j, r_j)} I_\Lambda(B(\xi_j,r_j))(\zeta) \, d \zeta \\
&= \pi^2 r_j^4 H_\Lambda(0) + \frac{\pi r_j^4}{8} - \frac{\pi r_j^4}{2}
\log \frac{2\pi r_j}{ \sqrt{|\Lambda|}}
+ \frac{\pi^2 r_j^6}{4|\Lambda|}. \nonumber
\end{align}
When $j\ne k$,
\begin{align}
\label{Bj-Bk}
\int_{B(\xi_k, r_k)} \int_{B(\xi_j,r_j)} G_\Lambda(\zeta - \chi)
\, d\chi d\zeta
&= \int_{B(\xi_k, r_k)} I_\Lambda(B(\xi_j,r_j))(\zeta) \, d \zeta \\
&= \pi^2 r_j^2 r_k^2 G_\Lambda(\xi_j-\xi_k)
+ \frac{\pi^2(r_j^2 r_k^4 + r_j^4 r_k^2)}{8 |\Lambda|}. \nonumber
\end{align}
Since only the role played by the lattice basis $\alpha$ is of interest,
let
\begin{equation}
\label{c}
c_{jj} = \frac{\pi r_j^4}{8} - \frac{\pi r_j^4}{2}
\log \frac{2\pi r_j}{ \sqrt{|\Lambda|}}
+ \frac{\pi^2 r_j^6}{4|\Lambda|}, \ \
c_{jk} = \frac{\pi^2(r_j^2 r_k^4 + r_j^4 r_k^2)}{8 |\Lambda|}, \ j\ne k
\end{equation}
which are independent of $\alpha$ when $|\Lambda|=1$. Then
\begin{align}
\int_{B(\xi_j, r_j)} \int_{B(\xi_j,r_j)} G_\Lambda(\zeta - \chi)
\, d\chi d\zeta
&= \pi^2 r_j^4 H_\Lambda(0) + c_{jj} \\
\int_{B(\xi_k, r_k)} \int_{B(\xi_j,r_j)} G_\Lambda(\zeta - \chi)
\, d\chi d\zeta
&= \pi^2 r_j^2 r_k^2 G_\Lambda(\xi_j-\xi_k) + c_{jk}.
\end{align}
Similarly,
\begin{align}
\int_{B(\xi_j', r_j)} \int_{B(\xi_j',r_j)} G_\Lambda(\zeta - \chi)
\, d\chi d\zeta
&= \pi^2 r_j^4 H_\Lambda(0) + c_{jj} \\
\int_{B(\xi_k', r_k)} \int_{B(\xi_j',r_j)} G_\Lambda(\zeta - \chi)
\, d\chi d\zeta
&= \pi^2 r_j^2 r_k^2 G_\Lambda(\xi_j'-\xi_k') + c_{jk}, \ \ j \ne k \\
\int_{B(\xi_k, r_k)} \int_{B(\xi_j',r_j)} G_\Lambda(\zeta - \chi)
\, d\chi d\zeta
&= \pi^2 r_j^2 r_k^2 G_\Lambda(\xi_j-\xi_k') + c_{jk}',
\ \ j,k=1,2 \label{Bk-Bj'}
\end{align}
where
\begin{equation}
\label{c'}
c_{jk}' = \frac{\pi^2(r_j^2 r_k^4 + r_j^4 r_k^2)}{8 |\Lambda|}, \ \
j,k =1,2.
\end{equation}
Note that in \eqref{Bk-Bj'} and \eqref{c'} $j$ may be equal to $k$.
To complete the computation, note that
\begin{align*}
\int_{\Omega_{\alpha,1}}\int_{\Omega_{\alpha,1}} G_\Lambda(\zeta -\chi) \, d\chi d\zeta
&= \int_{B(\xi_1,r_1) \cup B(\xi_1',r_1)} \int_{B(\xi_1,r_1) \cup B(\xi_1',r_1)}
G_\Lambda(\zeta -\chi) \, d\chi d\zeta \\
&= \int_{B(\xi_1,r_1)} \int_{B(\xi_1,r_1)}
G_\Lambda(\zeta -\chi) \, d\chi d\zeta
+ \int_{B(\xi_1',r_1)} \int_{B(\xi_1',r_1)}
G_\Lambda(\zeta -\chi) \, d\chi d\zeta \\
& \qquad + \int_{B(\xi_1,r_1)} \int_{B(\xi_1',r_1)}
G_\Lambda(\zeta -\chi) \, d\chi d\zeta
+ \int_{B(\xi_1',r_1)} \int_{B(\xi_1,r_1)}
G_\Lambda(\zeta -\chi) \, d\chi d\zeta \\
&= 2(\pi^2 r_1^4 H_\Lambda(0) + c_{11})
+ 2(\pi^2 r_1^4 G_\Lambda(\xi_1-\xi_1') + c_{11}') \\
\int_{\Omega_{\alpha,2}}\int_{\Omega_{\alpha,2}} G_\Lambda(\zeta -\chi) \, d\chi d\zeta
&= 2(\pi^2 r_2^4 H_\Lambda(0) + c_{22})
+ 2(\pi^2 r_2^4 G_\Lambda(\xi_2-\xi_2') + c_{22}') \\
\int_{\Omega_{\alpha,1}}\int_{\Omega_{\alpha,2}} G_\Lambda(\zeta -\chi) \, d\chi d\zeta
&= \pi^2 r_1^2 r_2^2 G_\Lambda(\xi_1-\xi_2) + c_{12}
+ \pi^2 r_1^2 r_2^2 G_\Lambda(\xi_1'-\xi_2') + c_{12} \\
&\qquad + \pi^2 r_1^2 r_2^2 G(\xi_1-\xi_2') + c'_{12} + \pi^2 r_1^2 r_2^2 G(\xi_1'-\xi_2) + c'_{12}
\end{align*}
In accordance with \eqref{Omegaalpha1} and \eqref{Omegaalpha2} choose
\begin{align}
\xi_1 &= \frac{3}{4} \alpha_1 + \frac{1}{4} \alpha_2, \ \xi_1'=\frac{1}{4} \alpha_1 + \frac{3}{4} \alpha_2 \\
\xi_2 &= \frac{1}{4} \alpha_1 + \frac{1}{4} \alpha_2, \ \xi_2'=\frac{3}{4} \alpha_1 + \frac{3}{4} \alpha_2
\end{align}
to derive
\begin{align*}
{\cal F}(\alpha) &= \sum_{j,k=1,2} \frac{\gamma_{jk}}{2}
\int_{\Omega_{\alpha,j}}\int_{\Omega_{\alpha,k}}
G_\Lambda(\zeta -\chi) \, d\chi d\zeta \\
&= \gamma_{11} \Big ( \pi^2 r_1^4 H_\Lambda(0)
+ \pi^2 r_1^4 G_\Lambda(\frac{\alpha_1-\alpha_2}{2}) + c_{11} + c_{11}' \Big ) \\
& \qquad + \gamma_{22} \Big ( \pi^2 r_2^4 H_\Lambda(0)
+ \pi^2 r_2^4 G_\Lambda(\frac{\alpha_1+\alpha_2}{2}) + c_{22} + c_{22}' \Big ) \\
& \qquad + 2 \gamma_{12} \Big ( \pi^2 r_1^2r_2^2
\Big (G_\Lambda(\frac{\alpha_1}{2}) + G_\Lambda(\frac{\alpha_2}{2}) \Big )
+ c_{12} + c_{12}' \Big ) \\
&= (\gamma_{11} \pi^2 r_1^4 + \gamma_{22} \pi^2 r_2^4)
\Big ( H_\Lambda(0) + G_\Lambda(\frac{\alpha_1+\alpha_2}{2}) \Big )
+ 2 \gamma_{12} \pi^2 r_1^2 r_2^2 \Big ( G_\Lambda(\frac{\alpha_1}{2})
+ G_\Lambda(\frac{\alpha_2}{2}) \Big ) \\
& \qquad + \gamma_{11}(c_{11} + c_{11}') + \gamma_{22} (c_{22}+c_{22}')
+ 2 \gamma_{12} (c_{12} + c_{12}') \\
&= (\gamma_{11} \pi^2 r_1^4 + \gamma_{22} \pi^2 r_2^4)
\Big [ H_\Lambda(0) + G_\Lambda(\frac{\alpha_1+\alpha_2}{2}) +
\frac{2 \gamma_{12} r_1^2r_2^2}
{ \gamma_{11} r_1^4 + \gamma_{22} r_2^4}
\Big ( G_\Lambda(\frac{\alpha_1}{2})
+ G_\Lambda(\frac{\alpha_2}{2}) \Big ) \Big ] \\
& \qquad + \gamma_{11}(c_{11} + c_{11}') + \gamma_{22} (c_{22}+c_{22}')
+ 2 \gamma_{12} (c_{12} + c_{12}').
\end{align*}
Here
$G_\Lambda(\frac{\alpha_1+\alpha_2}{2}) = G_\Lambda(\frac{\alpha_1-\alpha_2}{2})$
follows from the $\Lambda$-periodicity of $G_\Lambda$.
\end{proof}
Calculations based on \eqref{H} show
\begin{align}
\label{H0}
H_\Lambda(0) &= - \frac{1}{2\pi} \log \Big | \sqrt{\operatorname{Im} \tau}
\operatorname{e} (\frac{\tau}{12}) \prod_{n=1}^\infty (1- \operatorname{e}(n \tau))^2 \Big | \\
\label{Galpha1alpha2}
G_\Lambda(\frac{\alpha_1}{2}+\frac{\alpha_2}{2}) &=
-\frac{1}{2\pi} \log \Big | \operatorname{e}(-\frac{\tau}{24}) \prod_{n=1}^\infty
\Big (1 + \operatorname{e} \big ((n-\frac{1}{2}) \tau \big ) \Big )^2 \Big |
\\ \label{Galpha1}
G_\Lambda(\frac{\alpha_1}{2}) &= - \frac{1}{2\pi}
\log \Big | 2 \operatorname{e}(\frac{\tau}{12}) \prod_{n=1}^\infty
(1+ \operatorname{e}(n\tau))^2 \Big | \\
\label{Galpha2}
G_\Lambda(\frac{\alpha_2}{2}) &=
-\frac{1}{2\pi} \log \Big | \operatorname{e}(-\frac{\tau}{24}) \prod_{n=1}^\infty
\Big (1- \operatorname{e} \big ((n-\frac{1}{2})\tau \big ) \Big)^2 \Big |.
\end{align}
To derive \eqref{H0} we have used $\frac{1}{|\alpha_1|} = \sqrt{\operatorname{Im} \tau}$,
which follows from $1=|D_\Lambda|= \operatorname{Im} (\overline{\alpha_1} \alpha_2)$.
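As a sanity check, the closed forms above can be compared numerically with the product representation \eqref{Green0} of $G_\Lambda$. The sketch below does this for \eqref{Galpha1}, taking $\alpha_1$ real and $\alpha_2=\alpha_1\tau$ normalized so that $|\Lambda|=1$; the sample $\tau$ and the truncation level are arbitrary choices, and the script is only an illustration, not part of the argument.

```python
import cmath, math

def E(z):
    # e(z) = exp(2*pi*i*z), the notation fixed in eq. (e)
    return cmath.exp(2j * cmath.pi * z)

tau = 0.28 + 1.17j                      # arbitrary point in the upper half plane
a1 = complex(1 / math.sqrt(tau.imag))   # real alpha_1 normalized so |Lambda| = 1
a2 = a1 * tau
assert abs((a1.conjugate() * a2).imag - 1) < 1e-12  # unit cell area

def G(zeta, N=60):
    # G_Lambda from the product formula (Green0), truncated at N factors
    head = (zeta * zeta * a1.conjugate() / (4j * a1)
            - zeta / (2 * a1) + a2 / (12 * a1))
    p = E(head) * (1 - E(zeta / a1))
    for n in range(1, N + 1):
        p *= (1 - E(n * tau + zeta / a1)) * (1 - E(n * tau - zeta / a1))
    return abs(zeta) ** 2 / 4 - math.log(abs(p)) / (2 * math.pi)

# closed form (Galpha1): G(alpha_1/2) = -(1/2pi) log|2 e(tau/12) prod(1+e(n tau))^2|
q = 2 * E(tau / 12)
for n in range(1, 61):
    q *= (1 + E(n * tau)) ** 2
closed = -math.log(abs(q)) / (2 * math.pi)

assert abs(G(a1 / 2) - closed) < 1e-10
```

The two truncated expressions agree to machine precision, since at $\zeta=\alpha_1/2$ each factor $(1-\operatorname{e}(n\tau\pm\frac12))$ equals $(1+\operatorname{e}(n\tau))$ exactly.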
Regarding the four infinite products in \eqref{H0} through \eqref{Galpha2},
one has the following formulas.
\begin{lemma}
\label{l-formulae}
\begin{align*}
\prod_{n=1}^\infty (1- \operatorname{e}(n \tau))
\prod_{n=1}^\infty
\Big (1 + \operatorname{e} \big ((n-\frac{1}{2}) \tau \big ) \Big )
= \prod_{n=1}^\infty \Big ( 1- \operatorname{e}(n \frac{\tau +1}{2}) \Big )
\\
\prod_{n=1}^\infty
\Big (1 + \operatorname{e} \big ((n-\frac{1}{2}) \tau \big ) \Big )
\prod_{n=1}^\infty (1+ \operatorname{e}(n \tau))
\prod_{n=1}^\infty
\Big (1 - \operatorname{e} \big ((n-\frac{1}{2}) \tau \big ) \Big )
= 1.
\end{align*}
\end{lemma}
\begin{proof}
To prove the first formula, rewrite and rearrange the terms as follows.
\begin{align*}
& \prod_{n=1}^\infty (1- \operatorname{e}(n \tau))
\prod_{n=1}^\infty
\Big (1 + \operatorname{e} \big ((n-\frac{1}{2}) \tau \big ) \Big ) \\
& = \prod_{n=1}^\infty \Big ( 1- \operatorname{e}(2n \frac{\tau+1}{2}) \Big )
\prod_{n=1}^\infty \Big ( 1- \operatorname{e}((2n-1) \frac{\tau+1}{2}) \Big ) \\
& = \prod_{n=1}^\infty \Big ( 1- \operatorname{e}(
n \frac{\tau+1}{2}) \Big ).
\end{align*}
For the second formula, consider
\begin{align*}
& \prod_{n=1}^\infty (1-\operatorname{e}(n\tau))
\prod_{n=1}^\infty
\Big (1 + \operatorname{e} \big ((n-\frac{1}{2}) \tau \big ) \Big )
\prod_{n=1}^\infty (1+ \operatorname{e}(n \tau))
\prod_{n=1}^\infty
\Big (1 - \operatorname{e} \big ((n-\frac{1}{2}) \tau \big ) \Big ) \\
& = \prod_{n=1}^\infty (1-\operatorname{e}(2n\tau))
\prod_{n=1}^\infty
\Big (1 - \operatorname{e} \big ((2n-1) \tau \big ) \Big ) \\
& = \prod_{n=1}^\infty (1-\operatorname{e}(n\tau)).
\end{align*}
The second formula follows after one divides out
$\prod_{n=1}^\infty (1-\operatorname{e}(n\tau))$.
\end{proof}
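These two identities are easy to probe numerically by truncating each infinite product; the following sketch, with an arbitrary sample point $\tau$ and truncation level $N$, is only a sanity check, not part of the argument.

```python
import cmath

def E(z):
    # e(z) = exp(2*pi*i*z), the notation fixed in eq. (e)
    return cmath.exp(2j * cmath.pi * z)

def prod(term, N=200):
    # truncated infinite product, prod_{n=1}^{N} term(n)
    p = 1.0 + 0j
    for n in range(1, N + 1):
        p *= term(n)
    return p

tau = 0.37 + 1.21j  # arbitrary sample point in the upper half plane

# first identity of the lemma
lhs1 = prod(lambda n: 1 - E(n * tau)) * prod(lambda n: 1 + E((n - 0.5) * tau))
rhs1 = prod(lambda n: 1 - E(n * (tau + 1) / 2))

# second identity of the lemma
lhs2 = (prod(lambda n: 1 + E((n - 0.5) * tau))
        * prod(lambda n: 1 + E(n * tau))
        * prod(lambda n: 1 - E((n - 0.5) * tau)))

assert abs(lhs1 - rhs1) < 1e-12
assert abs(lhs2 - 1) < 1e-12
```

Since $|\operatorname{e}(n\tau)|$ decays geometrically in $n$, the truncation error is far below the asserted tolerance.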
These identities will allow us to further simplify
$\widetilde{\cal F}(\alpha)$. Let
\begin{equation}
\mathbb{H} = \{ z \in \mathbb{C}: \ \operatorname{Im} z >0 \}
\label{mathH}
\end{equation}
be the upper half of the complex plane. Define
\begin{equation}
f_b (z) = b \log \big | \operatorname{Im} (z) \eta(z) \big|
+ (1-b) \log \big | \operatorname{Im} \big(\frac{z+1}{2}\big)
\eta \big(\frac{z+1}{2}\big) \big |, \ z \in \mathbb{H},
\label{fb}
\end{equation}
where
\begin{equation}
\label{eta}
\eta(z) = \operatorname{e} \Big(\frac{z}{6} \Big) \prod_{n=1}^\infty\big(1- \operatorname{e}(nz)\big)^4.
\end{equation}
One often writes $f_b$ as
\begin{equation}
\label{fb-2}
f_b(z) = b f_1(z) + (1-b) f_0(z),
\end{equation}
where
\begin{equation}
\label{f1f0}
f_1(z) = \log |\operatorname{Im}(z) \eta(z)|, \ \ f_0(z) = \log
|\operatorname{Im}(\frac{z+1}{2}) \eta (\frac{z+1}{2})|.
\end{equation}
\begin{lemma}
\label{l-Ff}
Minimizing $\widetilde{\cal F}(\alpha)$ with respect to
$\alpha$ of unit cell area is equivalent to maximizing
$f_b(z)$ with respect to $z$ in $\mathbb{H}$.
\end{lemma}
\begin{proof}
By the first formula in Lemma \ref{l-formulae},
\begin{align}
H_\Lambda(0) + G_\Lambda(\frac{\alpha_1 + \alpha_2}{2})
& = - \frac{1}{2\pi} \log \Big | \sqrt{\operatorname{Im} \tau}
\operatorname{e}(\frac{\tau}{24}) \prod_{n=1}^\infty
\Big ( 1- \operatorname{e}(n \frac{\tau +1}{2}) \Big )^2
\Big | \nonumber \\ \label{T1}
& = -\frac{1}{4\pi} \log \Big | \operatorname{Im} (\frac{\tau+1}{2})
\eta(\frac{\tau+1}{2}) \Big | - \frac{1}{4\pi} \log 2.
\end{align}
Using both formulas in Lemma \ref{l-formulae}, one deduces
\begin{align}
G_\Lambda(\frac{\alpha_1}{2}) + G_\Lambda(\frac{\alpha_2}{2})
& = - \frac{1}{2\pi} \log \Big | 2
\operatorname{e}(\frac{\tau}{24}) \prod_{n=1}^\infty
( 1 + \operatorname{e}(n \tau) )^2
\prod_{n=1}^\infty
\Big ( 1 - \operatorname{e}((n-\frac{1}{2}) \tau) \Big )^2
\Big | \nonumber \\ \nonumber
& = - \frac{1}{2\pi} \log \Big | 2
\operatorname{e}(\frac{\tau}{24}) \prod_{n=1}^\infty
( 1 - \operatorname{e}(n \tau) )^2
\Big / \prod_{n=1}^\infty
\Big ( 1 - \operatorname{e}(n \frac{\tau+1}{2}) \Big )^2
\Big | \\ \label{T2}
&= -\frac{1}{4\pi} \Big ( \log |\operatorname{Im} (\tau) \eta (\tau)|
- \log \Big |\operatorname{Im} (\frac{\tau+1}{2}) \eta (\frac{\tau+1}{2}) \Big | \Big )
-\frac{1}{4\pi} \log 2.
\end{align}
By \eqref{T1} and \eqref{T2}, $\widetilde{\cal F}(\alpha)$
of \eqref{Ftilde} is reduced to
\begin{equation}
\widetilde{\cal F}(\alpha) = -\frac{1}{4\pi}
\Big (
b \log |\operatorname{Im} (\tau) \eta (\tau)| +
(1-b) \log \Big |\operatorname{Im} (\frac{\tau+1}{2}) \eta (\frac{\tau+1}{2}) \Big |
\Big ) - \frac{1+b}{4\pi} \log 2,
\end{equation}
from which the lemma follows.
\end{proof}
\section{Duality property of $f_b$}
\setcounter{equation}{0}
The function $\eta$ in the definition of $f_b$ satisfies two functional
equations.
\begin{lemma}
\label{l-func-eq}
For all $z \in \mathbb{H}$,
\begin{align}
\label{inv-1} \eta(z+1) & =e^{\frac{2\pi i}{6}}\eta(z), \\
\label{inv-2} \eta\Big (-\frac{1}{z} \Big ) & = -z^2 \eta(z).
\end{align}
\end{lemma}
\begin{proof}
The function $\eta$ in \eqref{eta} is the fourth
power of the Dedekind eta function, which is
\[ \eta_D(z) = \operatorname{e}\Big (\frac{z}{24} \Big)
\prod_{n=1}^\infty\big(1- e^{2\pi i n z}\big), \]
so
\[ \eta(z) = \eta_{D}^4(z), \ \ z \in \mathbb{H}. \]
For the Dedekind eta function, it is known \cite[Chapter 2]{apostol-2} that
\begin{align}
\eta_D(z+1) & = e^{\frac{2\pi i }{24}} \eta_D (z) \\
\eta_D \Big (-\frac{1}{z} \Big) &= \sqrt{-i z} \, \eta_D(z)
\end{align}
where $\sqrt{\cdot }$ stands for the principal branch of the square root.
Raising both identities to the fourth power yields \eqref{inv-1} and
\eqref{inv-2}, since $(\sqrt{-iz})^4 = (-iz)^2 = -z^2$.
\end{proof}
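The functional equations \eqref{inv-1} and \eqref{inv-2} can be spot-checked numerically from the truncated product \eqref{eta}; the sample point and truncation level below are arbitrary, and the check is of course no substitute for the proof.

```python
import cmath

def E(z):
    # e(z) = exp(2*pi*i*z)
    return cmath.exp(2j * cmath.pi * z)

def eta(z, N=200):
    # eta(z) = e(z/6) * prod_{n>=1} (1 - e(nz))^4, truncated at N factors
    p = E(z / 6)
    for n in range(1, N + 1):
        p *= (1 - E(n * z)) ** 4
    return p

z = 0.3 + 0.9j  # arbitrary point in the upper half plane
assert abs(eta(z + 1) - cmath.exp(2j * cmath.pi / 6) * eta(z)) < 1e-10
assert abs(eta(-1 / z) - (-z ** 2) * eta(z)) < 1e-10
```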
These functional equations lead to invariance properties.
\begin{lemma}
\label{l-invariance}
\begin{enumerate}
\item $|\operatorname{Im} (z) \eta(z) |$, and consequently $f_1(z)$,
are invariant under the transforms
\[ z \rightarrow z+1, \ \ \mbox{and} \ \ z \rightarrow -\frac{1}{z}.\]
\item $| \operatorname{Im} \big(\frac{z+1}{2}\big) \eta \big(\frac{z+1}{2}\big) |$,
and consequently $f_0(z)$, are invariant under the transforms
\[ z \rightarrow z+2, \ \ \mbox{and} \ \ z \rightarrow -\frac{1}{z}.\]
\end{enumerate}
\end{lemma}
\begin{proof}
The invariance of
$ | \operatorname{Im} (z) \eta(z) |$ under $z \rightarrow z+1$ and
the invariance of
$| \operatorname{Im} \big(\frac{z+1}{2}\big) \eta \big(\frac{z+1}{2}\big) |$
under $z \rightarrow z+2$ follow from \eqref{inv-1}.
By \eqref{inv-2} it is easy to see that
\begin{equation}
\label{inv-1-2}
| \operatorname{Im}(-\frac{1}{z}) \eta(-\frac{1}{z}) | = |\operatorname{Im} (z) \eta(z) |,
\end{equation}
so $| \operatorname{Im} (z) \eta(z) |$ is invariant under
$z \rightarrow - \frac{1}{z}$.
The invariance of $| \operatorname{Im} (z) \eta(z) |$ under
$z \rightarrow z+1$ implies its invariance under
$z \rightarrow z+k$ for any integer $k$. Now one deduces
\begin{align}
\big | \operatorname{Im} \big(\frac{(-\frac{1}{z})+1}{2}\big)
\eta \big(\frac{(-\frac{1}{z})+1}{2}\big) \big |
&= \big | \operatorname{Im} \big(\frac{z-1}{2z}\big)
\eta \big(\frac{z-1}{2z}\big) \big | \nonumber \\
&= \big | \operatorname{Im} \big(\frac{-z-1}{2z}\big)
\eta \big(\frac{-z-1}{2z}\big) \big | \nonumber \\
&= \big | \operatorname{Im} \big(\frac{2z}{z+1}\big)
\eta \big(\frac{2z}{z+1}\big) \big | \nonumber \\
&= \big | \operatorname{Im} \big(\frac{-2}{z+1}\big)
\eta \big(\frac{-2}{z+1}\big) \big | \nonumber \\
&= \big | \operatorname{Im} \big(\frac{z+1}{2}\big)
\eta \big(\frac{z+1}{2}\big) \big | \label{2-2}
\end{align}
by applying the invariance of
$| \operatorname{Im} (z) \eta(z) |$ under
$z \rightarrow z -1$, $z \rightarrow -\frac{1}{z}$,
$z \rightarrow z-2$, and $z \rightarrow -\frac{1}{z}$ successively.
This proves the invariance of
$| \operatorname{Im} \big(\frac{z+1}{2}\big) \eta \big(\frac{z+1}{2}\big) |$
under $z \rightarrow -\frac{1}{z}$.
\end{proof}
There is another invariance that is not a linear fractional transform:
the reflection about the imaginary axis.
\begin{lemma}
\label{l-invariance-reflection}
Both $ | \operatorname{Im} (z) \eta(z) |$ and
$| \operatorname{Im} \big(\frac{z+1}{2}\big) \eta \big(\frac{z+1}{2}\big) |$,
and consequently $f_1(z)$ and $f_0(z)$, are invariant under
$z \rightarrow - \bar{z}$.
\end{lemma}
\begin{proof}
These follow easily from the infinite product definition \eqref{eta}
of $\eta$.
\end{proof}
The transforms
$z \rightarrow z+1$ and $z \rightarrow -\frac{1}{z}$ generate
the modular group $\Gamma$,
\[ \Gamma = \left \{ z \rightarrow \frac{az + b}{cz +d}: a,b,c,d \in \mathbb{Z},
\ ad - bc =1 \right \}. \]
And $\Gamma$ has
\begin{equation}
\label{gamma-fund}
F_\Gamma=\{ z \in \mathbb{H}: \ |z| >1, \ -1/2 < \operatorname{Re} z < 1/2 \}
\end{equation}
as a fundamental region. This means that every orbit under this
group has one element in
$\overline{F_\Gamma}_{\mathbb{H}}$, the closure of $F_\Gamma$ in $\mathbb{H}$,
and no two points in $F_\Gamma$ belong to the same orbit \cite{apostol-2}.
The transforms $z \rightarrow z+2$ and $z \rightarrow -\frac{1}{z}$
generate a subgroup $\Gamma'$ of $\Gamma$,
\begin{align}
\label{Gamma-2}
\Gamma' & = \Big \{ z \rightarrow \frac{az +b}{cz +d} \in \Gamma:
\ a \equiv d \equiv 1 \mod 2 \ \ \ \mbox{and} \ \
\ b \equiv c \equiv 0 \mod 2, \nonumber \\ & \qquad \mbox{or} \
\ a \equiv d \equiv 0 \mod 2 \ \ \ \mbox{and} \ \
\ b \equiv c \equiv 1 \mod 2
\Big \}.
\end{align}
It is known in number theory that this group has
\begin{equation}
F_{\Gamma'}=\{ z \in \mathbb{H}: \ |z| >1, \ -1 < \operatorname{Re} z < 1 \}
\end{equation}
as a fundamental region \cite{evans_ronald}.
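The parity description \eqref{Gamma-2} can be sanity-checked by multiplying out random words in the two generators of $\Gamma'$, viewed as matrices in $SL(2,\mathbb{Z})$. The sketch below only illustrates one inclusion (words in the generators satisfy the parity condition) and is no substitute for the number-theoretic proof.

```python
import random

# generators of Gamma' as SL(2,Z) matrices:
# z -> z+2 is [[1, 2], [0, 1]], and z -> -1/z is [[0, -1], [1, 0]]
GEN = [[[1, 2], [0, 1]], [[0, -1], [1, 0]]]

def mul(A, B):
    # 2x2 integer matrix product
    return [[A[0][0] * B[0][0] + A[0][1] * B[1][0],
             A[0][0] * B[0][1] + A[0][1] * B[1][1]],
            [A[1][0] * B[0][0] + A[1][1] * B[1][0],
             A[1][0] * B[0][1] + A[1][1] * B[1][1]]]

random.seed(1)
for _ in range(500):
    M = [[1, 0], [0, 1]]
    for _ in range(random.randint(1, 12)):
        M = mul(M, random.choice(GEN))
    a, b, c, d = M[0][0], M[0][1], M[1][0], M[1][1]
    assert a * d - b * c == 1
    # parity condition of (Gamma-2): diagonal odd and off-diagonal even,
    # or diagonal even and off-diagonal odd
    assert (a % 2 == d % 2 == 1 and b % 2 == c % 2 == 0) or \
           (a % 2 == d % 2 == 0 and b % 2 == c % 2 == 1)
```

The check works because, modulo $2$, the two generators reduce to the identity and the swap matrix, which form a closed set under multiplication.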
Denote by ${\cal G}$ the group of diffeomorphisms of
$\mathbb{H}$ generated by
\[ z \rightarrow z+2, \ \ z \rightarrow - \frac{1}{z},
\ \ z \rightarrow - \bar{z}. \]
Note that $\Gamma'$ is a subgroup of ${\cal G}$ but $\Gamma$ is not
a subgroup of ${\cal G}$.
With the group ${\cal G}$ at hand, the maximization of $f_b$ need
not be carried out over all of $\mathbb{H}$, but only over a smaller set which contains at least one element from each orbit of ${\cal G}$. Let
\begin{equation}
\label{W}
W = \{ z \in \mathbb{H}: \ 0 < \operatorname{Re} z <1, \ |z| >1 \}
\end{equation}
and
\begin{equation}
\label{W-closure}
\overline{W}_{\mathbb{H}}
= \{ z \in \mathbb{H}: \ 0 \leq \operatorname{Re} z \leq 1, \ |z| \geq 1 \};
\end{equation}
see Figure \ref{f-W}.
Note that $\overline{W}_{\mathbb{H}}$ is the closure of $W$ in $\mathbb{H}$
so $1 \not \in \overline{W}_{\mathbb{H}}$.
\begin{figure}
\centering
\includegraphics[scale=0.13]{W.png}
\includegraphics[scale=0.13]{mono.png}
\caption{The left plot shows the set $W$. In $\overline{W}_{\mathbb{H}}$,
$f_b$ attains its maximum at a point either on the thick line segment
or on the thick arc. In the right plot, with $b \in (0,B)$,
$f_b$ increases in the directions of the arrows. The dot on the
imaginary axis is $q_b i$.}
\label{f-W}
\end{figure}
\begin{lemma}
\label{l-inv-group}
\begin{enumerate}
\item $| \operatorname{Im} (z) \eta(z)|$ and $f_1(z)$ are
invariant under the group $\Gamma$ and the transform
$z \rightarrow -\bar{z}$.
\item $| \operatorname{Im} \big(\frac{z+1}{2}\big) \eta \big(\frac{z+1}{2}\big) |$,
$f_0(z)$, and $f_b(z)$, for $b \in \mathbb{R}$, are invariant
under the groups $\Gamma'$ and ${\cal G}$.
\item As the group ${\cal G}$ acts on $\mathbb{H}$, each orbit of ${\cal G}$
has at least one element in $\overline{W}_{\mathbb{H}}$.
\end{enumerate}
\end{lemma}
\begin{proof}
Part 1 and part 2 follow from Lemmas \ref{l-invariance} and
\ref{l-invariance-reflection}. Part 3 follows from
$F_{\Gamma'}$ being the fundamental region of $\Gamma'$ and the
transform $z \rightarrow - \bar{z} \in {\cal G}$.
\end{proof}
It is instructive to understand transforms from the viewpoint of bases. Let $\alpha=(\alpha_1, \alpha_2)$
and $\alpha'=(\alpha'_1, \alpha'_2)$ be two bases of unit cell area that define lattices $\Lambda$ and $\Lambda'$, and two-species
periodic assemblies $(\Omega_{\alpha,1}, \Omega_{\alpha,2})$ and $(\Omega_{\alpha',1}, \Omega_{\alpha',2})$.
Set $z = \frac{\alpha_2}{\alpha_1}$ and $z'=\frac{\alpha'_2}{\alpha'_1}$.
If $\alpha$ and $\alpha'$ are related by the transform $z \rightarrow z'=z+1$, then
\[ \frac{\alpha_2'}{\alpha_1'} = \frac{\alpha_1+\alpha_2 }{\alpha_1} \]
and consequently there exists $ \kappa \in \mathbb{C}$ such that
$\alpha'_1 = \kappa \alpha_1$ and $\alpha'_2 = \kappa (\alpha_1+\alpha_2)$.
Since both bases have unit cell area,
\[ 1= \operatorname{Im}(\overline{\alpha'_1} \alpha'_2) = |\kappa|^2 \operatorname{Im}(\overline{\alpha_1} (\alpha_1+\alpha_2))
= |\kappa|^2 \operatorname{Im} (\overline{\alpha_1} \alpha_2) =|\kappa|^2. \]
So there exists $\theta \in [0, 2\pi)$ such that $\kappa = e^{\theta i}$ and
$\alpha'= e^{\theta i} (\alpha_1, \alpha_1+\alpha_2)$. Let $\alpha''=e^{\theta i} \alpha=e^{\theta i}(\alpha_1,\alpha_2)$. Then
$\alpha'$ and $\alpha''$ generate the same lattice $\Lambda'$, and consequently
$\Lambda$ and $\Lambda'$ are isometric in the sense that a parallelogram cell $D_\alpha$
of $\Lambda$ is just a rotation of a parallelogram cell $D_{\alpha''}$ of $\Lambda'$. However
the assemblies $(\Omega_{\alpha,1}, \Omega_{\alpha,2})$ and $(\Omega_{\alpha',1}, \Omega_{\alpha',2})$ are usually quite
different since in general $ {\cal J}_\Lambda(\Omega_{\alpha,1}, \Omega_{\alpha,2}) \ne {\cal J}_{\Lambda'}(\Omega_{\alpha',1}, \Omega_{\alpha',2})$.
The story changes if $\alpha$ and $\alpha'$ are related by the transform $z \rightarrow z'=z+2$. This time not only
are $\Lambda$ and $\Lambda'$ isometric, but also $ {\cal J}_\Lambda(\Omega_{\alpha,1}, \Omega_{\alpha,2}) = {\cal J}_{\Lambda'}(\Omega_{\alpha',1}, \Omega_{\alpha',2})$ by Lemma \ref{l-inv-group}.2.
If $\alpha$ and $\alpha'$ are related by $z \rightarrow z'=-\frac{1}{z}$, one can show that
$(\alpha'_1,\alpha'_2)=e^{\theta i}(-\alpha_2,\alpha_1)$, $\theta \in [0,2\pi)$. Then $\Lambda$ and $\Lambda'$ are
isometric since $D_\alpha$ and $D_{\alpha'}$ differ by a translation and rotation. Moreover
$ {\cal J}_\Lambda(\Omega_{\alpha,1}, \Omega_{\alpha,2}) = {\cal J}_{\Lambda'}(\Omega_{\alpha',1}, \Omega_{\alpha',2})$ by Lemma \ref{l-inv-group}.2.
Finally if $\alpha$ and $\alpha'$ are related by $z \rightarrow z'=-\bar{z}$, then
$(\alpha'_1,\alpha'_2)=e^{\theta i}(\overline{\alpha_1},-\overline{\alpha_2})$, $\theta \in [0,2\pi)$. Therefore
$D_\alpha$ and $D_{\alpha'}$ differ by a mirror reflection, a translation and a rotation, so $\Lambda$ and
$\Lambda'$ are again isometric. By Lemma \ref{l-inv-group}.2,
$ {\cal J}_\Lambda(\Omega_{\alpha,1}, \Omega_{\alpha,2}) = {\cal J}_{\Lambda'}(\Omega_{\alpha',1}, \Omega_{\alpha',2})$.
In summary if $z$ and $z'$ are related by an element $g \in {\cal G}$, i.e. $z'=g z$, then $\Lambda$ and $\Lambda'$ are
isometric and $ {\cal J}_\Lambda(\Omega_{\alpha,1}, \Omega_{\alpha,2}) = {\cal J}_{\Lambda'}(\Omega_{\alpha',1}, \Omega_{\alpha',2})$. For isometric $\Lambda$ and $\Lambda'$, if $\Lambda$ is a rectangular lattice, then $\Lambda'$ is also a rectangular lattice with the same ratio of longer to shorter side; if $\Lambda$ is a rhombic lattice, then $\Lambda'$ is also a rhombic lattice with the same acute angle. Therefore to prove Theorem \ref{t-main}, it suffices to find every maximum of $f_b$ in $\overline{W}_{\mathbb{H}}$
and identify its associated lattice.
In the exceptional case $b=1$, since $f_1$ is invariant under $\Gamma$,
it suffices to maximize $f_1$ in a smaller set
\begin{equation}
\label{U-closure}
\overline{U}_{\mathbb{H}} = \{ z \in \mathbb{H}: \
0 \leq \operatorname{Re} z \leq 1/2, \ |z| \geq 1 \}
\end{equation}
which is the closure of
\begin{equation}
U = \Big \{ z \in \mathbb{C}: \ |z| >1, \ 0< \operatorname{Re} z < \frac{1}{2} \Big \}.
\label{U}
\end{equation}
This fact was used critically in
\cite{chenoshita-2}, but it is not valid if $b \ne 1$.
The approach of this paper works for all $b \in [0,1]$,
giving a different proof for the $b=1$ case as well.
One of the most important properties of $f_b$ is the following
duality relation between $f_b$ and $f_{1-b}$.
\begin{lemma}
\label{l-dual}
Under the transform $z \rightarrow w = \frac{z-1}{z+1}$ of $\mathbb{H}$,
\[ f_1(z) = f_0 (w) \ \ \mbox{and} \ \ f_0(z) = f_1(w), \ \ z \in \mathbb{H} \ \ \mbox{and} \ \
w=\frac{z-1}{z+1} \in \mathbb{H}. \]
Consequently, for all $b \in \mathbb{R}$,
\[ f_b(w) = f_{1-b} (z), \ \ z \in \mathbb{H} \ \ \mbox{and} \ \
w=\frac{z-1}{z+1} \in \mathbb{H}. \]
More generally, if $h: z' \rightarrow w'= \frac{z'-1}{z'+1}$ and
$g_1: z \rightarrow z'$, $g_2: w' \rightarrow w$ are transforms in ${\cal G}$,
then
\[ f_b(w) = f_{1-b} (z), \ \ z \in \mathbb{H} \ \ \mbox{and} \ \
w=g_2 \circ h \circ g_1 (z) \in \mathbb{H}.\]
\end{lemma}
\begin{proof}
The transform $z \rightarrow w'=\frac{z}{z+1}$ is in $\Gamma$, so
\[ f_1(z) = f_1(w') \]
by the invariance of $f_1$ under $\Gamma$. On the other hand substitution shows
\[ f_0(z) = f_0(\frac{w'}{-w'+1})=\log | \operatorname{Im}(\frac{1}{-2w'+2}) \operatorname{e}ta (\frac{1}{-2w'+2})|
= f_1(\frac{1}{-2w'+2}). \]
Now apply another transform $w' \rightarrow w = 2 w'-1$ which is not in $\Gamma$ to
find
\[ f_1(z) = f_1(\frac{w+1}{2})=f_0(w) \]
and
\[ f_0(z) = f_1 (\frac{1}{-w+1}) = f_1(w)\]
where the last equation follows from
the invariance of $f_1$ under $ w \rightarrow \frac{1}{-w+1} \in \Gamma$.
The composition of the two transforms is
$z \rightarrow w'=\frac{z}{z+1} \rightarrow w=2w'-1= \frac{z-1}{z+1}$.
\end{proof}
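The duality relation can be spot-checked numerically from the truncated product \eqref{eta}; the sample point $z$, the value of $b$, and the truncation level below are arbitrary, and the script is only an illustration.

```python
import cmath, math

def E(z):
    # e(z) = exp(2*pi*i*z)
    return cmath.exp(2j * cmath.pi * z)

def f1(z, N=120):
    # f_1(z) = log|Im(z) eta(z)| with the truncated product of eq. (eta)
    p = E(z / 6)
    for n in range(1, N + 1):
        p *= (1 - E(n * z)) ** 4
    return math.log(abs(z.imag * p))

def f0(z, N=120):
    # f_0(z) = f_1((z+1)/2)
    return f1((z + 1) / 2, N)

z = 0.4 + 1.3j          # arbitrary point in the upper half plane
w = (z - 1) / (z + 1)   # the duality transform of the lemma

assert abs(f1(z) - f0(w)) < 1e-10
assert abs(f0(z) - f1(w)) < 1e-10

b = 0.3                 # arbitrary b; then f_b(w) = f_{1-b}(z)
fb_w = b * f1(w) + (1 - b) * f0(w)
f1b_z = (1 - b) * f1(z) + b * f0(z)
assert abs(fb_w - f1b_z) < 1e-10
```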
Although $b$ is supposed to be in $[0,1]$ throughout this paper,
some properties of $f_b$, like Lemma \ref{l-dual},
hold for $b \in \mathbb{R}$. If this is the case we state so explicitly.
Let us write $z=x+yi$ henceforth, and set
\[ X_b(z) = \frac{\partial f_b(z)}{\partial x} = b X_1(z) + (1-b) X_0(z), \ \
Y_b(z) = \frac{\partial f_b(z)}{\partial y}= b Y_1(z) + (1-b) Y_0(z), \]
where
\begin{align}
\label{X1Y1}
X_1(z) & = \frac{\partial}{\partial x} \log |\operatorname{Im}(z) \eta(z)|, \ \
Y_1(z) = \frac{\partial}{\partial y} \log |\operatorname{Im}(z) \eta(z)| \\
X_0(z) & = \frac{\partial}{\partial x} \log |\operatorname{Im}(\frac{z+1}{2})
\eta(\frac{z+1}{2})|, \ \
Y_0(z) = \frac{\partial}{\partial y} \log |\operatorname{Im}(\frac{z+1}{2})
\eta(\frac{z+1}{2})|.
\end{align}
These functions can be written as the following series.
\begin{align}
\label{X1}
X_1(z) &= \sum_{n=1}^\infty \frac{8\pi n \sin 2\pi n x }
{ e^{2\pi ny} + e^{-2\pi ny} - 2 \cos 2\pi n x} \\ \label{Y1}
Y_1(z) &= \frac{1}{y} - \frac{\pi}{3} +
\sum_{n=1}^\infty \frac{-8 \pi n e^{-2\pi n y} + 8\pi n \cos 2\pi n x }
{ e^{2\pi ny} + e^{-2\pi ny} - 2 \cos 2\pi n x} \\ \label{X0}
X_0(z) &= \sum_{n=1}^\infty \frac{4\pi n \sin \pi n (x+1) }
{ e^{\pi ny} + e^{-\pi ny} - 2 \cos \pi n (x+1) } \\ \label{Y0}
Y_0(z) &= \frac{1}{y} - \frac{\pi}{6} +
\sum_{n=1}^\infty \frac{-4 \pi n e^{-\pi n y} + 4\pi n \cos \pi n (x+1) }
{ e^{\pi ny} + e^{-\pi ny} - 2 \cos \pi n (x+1)}.
\end{align}
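As a consistency check, the series \eqref{X1} and \eqref{Y1} can be compared with centered finite differences of $f_1$. In the sketch each series term is multiplied through by $e^{-2\pi n y}$, which leaves its value unchanged but avoids overflow for large $n$; the sample point, step size, and truncation level are arbitrary.

```python
import cmath, math

def E(z):
    return cmath.exp(2j * cmath.pi * z)

def f1(z, N=60):
    # f_1(z) = log|Im(z) eta(z)|, truncated product of eq. (eta)
    p = E(z / 6)
    for n in range(1, N + 1):
        p *= (1 - E(n * z)) ** 4
    return math.log(abs(z.imag * p))

def X1(x, y, N=60):
    # series (X1), each term rescaled by exp(-2*pi*n*y)
    s = 0.0
    for n in range(1, N + 1):
        r = math.exp(-2 * math.pi * n * y)
        den = 1 + r * r - 2 * r * math.cos(2 * math.pi * n * x)
        s += 8 * math.pi * n * r * math.sin(2 * math.pi * n * x) / den
    return s

def Y1(x, y, N=60):
    # series (Y1), same rescaling
    s = 1 / y - math.pi / 3
    for n in range(1, N + 1):
        r = math.exp(-2 * math.pi * n * y)
        den = 1 + r * r - 2 * r * math.cos(2 * math.pi * n * x)
        s += 8 * math.pi * n * r * (math.cos(2 * math.pi * n * x) - r) / den
    return s

x, y, h = 0.3, 1.1, 1e-6
z = complex(x, y)
assert abs(X1(x, y) - (f1(z + h) - f1(z - h)) / (2 * h)) < 1e-6
assert abs(Y1(x, y) - (f1(z + h * 1j) - f1(z - h * 1j)) / (2 * h)) < 1e-6
```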
We end this section with two formulas that relate
$f_b$ on the upper half of the unit circle
to $f_{1-b}$ on the upper half of the imaginary axis.
\begin{lemma}
\label{l-circle-im}
Let the upper half of the unit circle be parametrized by
$u+ i \sqrt{1-u^2}$, $u \in (-1,1)$. Then
$\frac{\sqrt{1-u^2}}{1-u} i$ parametrizes the upper half of the
imaginary axis, and
\begin{align}
\label{line-to-circle-X}
X_b(u + i\sqrt{1-u^2}) &= \frac{\sqrt{1-u^2}}{1-u} Y_{1-b}
\Big(\frac{\sqrt{1-u^2}}{1-u} i \Big) \\ \label{line-to-circle-Y}
Y_b(u + i \sqrt{1-u^2}) &= \frac{-u}{1-u} Y_{1-b}
\Big(\frac{\sqrt{1-u^2}}{1-u} i\Big)
\end{align}
hold for $u \in (-1,1)$.
\end{lemma}
\begin{proof}
Consider the transform in Lemma \ref{l-dual}, $z \rightarrow w = \frac{z-1}{z+1}$. With $z=x+yi$ and
$w=u+vi$,
\begin{equation}
\label{key-transform}
u = \frac{x^2+y^2-1}{(x+1)^2+y^2}, \ \ v=\frac{2y}{(x+1)^2+y^2}.
\end{equation}
Conversely,
\begin{equation}
\label{key-transform-1}
x = \frac{1-u^2-v^2}{(1-u)^2+v^2}, \ \ y=\frac{2v}{(1-u)^2+v^2}.
\end{equation}
Differentiate $f_b(w) = f_{1-b} (z)$ with respect to $u$ and $v$ to find
\begin{align*}
&X_b(w) = X_{1-b}(z) \frac{\partial x}{\partial u} + Y_{1-b}(z)
\frac{\partial y}{\partial u} \\
& Y_b(w) = X_{1-b}(z) \frac{\partial x}{\partial v} + Y_{1-b}(z)
\frac{\partial y}{\partial v}.
\end{align*}
When $w$ is on the unit circle, $z$ is on the imaginary axis. Since $f_{1-b}$
is invariant under the reflection about the imaginary axis,
$X_{1-b}(z) =0$ on the imaginary axis. Also
\begin{equation}
\frac{\partial y}{\partial u}\Big |_{|w|=1} = \frac{\sqrt{1-u^2}}{1-u},
\ \ \frac{\partial y}{\partial v}\Big |_{|w|=1} = \frac{-u}{1-u}
\operatorname{e}nd{equation}
from which the lemma follows.
\end{proof}
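For later use, the restriction of \eqref{key-transform-1} to the unit circle can be made explicit; this short supplementary computation (a routine check, not part of the original argument) confirms the parametrization stated in the lemma. On $|w|=1$ one has $v=\sqrt{1-u^2}$ and $u^2+v^2=1$, so

```latex
\begin{aligned}
(1-u)^2+v^2 &= 2-2u = 2(1-u),\\
x &= \frac{1-u^2-v^2}{2(1-u)} = 0, \qquad
y = \frac{2v}{2(1-u)} = \frac{\sqrt{1-u^2}}{1-u}.
\end{aligned}
```

Thus $z$ is purely imaginary with $\operatorname{Im} z = \frac{\sqrt{1-u^2}}{1-u}$, as used in the proof.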
\section{$f_b$ on imaginary axis}
\setcounter{equation}{0}
The behavior of $f_b$ on the imaginary axis is studied in this section.
Let us record some of the derivatives of $f_1$ and $f_0$ on the
imaginary axis for use here and later.
Let
\begin{equation}
r = e^{-\pi y}.
\label{r}
\end{equation}
Then by \eqref{Y1} and \eqref{Y0},
\begin{align}
Y_1(yi) &= \frac{1}{y} - \frac{\pi}{3} + \sum_{n=1}^\infty
\frac{8\pi n r^{2n}}{1-r^{2n}} \label{Y1-im}\\
\frac{\partial Y_1(yi)}{\partial y}
&= -\frac{1}{y^2} - \sum_{n=1}^\infty
\frac{16\pi^2 n^2 r^{2n}}{(1-r^{2n})^2} \label{Y1'-im}\\
\frac{\partial^2 Y_1(yi)}{\partial y^2}
&= \frac{2}{y^3} + \sum_{n=1}^\infty
\frac{32\pi^3 n^3 (r^{2n}+r^{4n})}{(1-r^{2n})^3}\label{Y1''-im} \\
\frac{\partial^3 Y_1(yi)}{\partial y^3}
&= -\frac{6}{y^4} - \sum_{n=1}^\infty
\frac{64\pi^4 n^4 (r^{2n}+4r^{4n}+r^{6n})}{(1-r^{2n})^4} \label{Y1'''-im}\\
Y_0(yi) &= \frac{1}{y} - \frac{\pi}{6} + \sum_{n=1}^\infty
\frac{4\pi n (-r)^n}{1-(-r)^n} \label{Y0-im} \\
\frac{\partial Y_0(yi)}{\partial y}
&= -\frac{1}{y^2} - \sum_{n=1}^\infty
\frac{4\pi^2 n^2 (-r)^n}{(1 - (-r)^n)^2} \label{Y0'-im} \\
\frac{\partial^2 Y_0(yi)}{\partial y^2}
&= \frac{2}{y^3} + \sum_{n=1}^\infty
\frac{4\pi^3 n^3 \Big ((-r)^n+r^{2n} \Big )}{(1 - (-r)^n)^3}
\label{Y0''-im} \\
\frac{\partial^3 Y_0(yi)}{\partial y^3}
&= -\frac{6}{y^4} - \sum_{n=1}^\infty
\frac{4\pi^4 n^4\Big((-r)^n+4r^{2n} +(-r)^{3n} \Big)}{(1 - (-r)^n)^4}.
\label{Y0'''-im}
\end{align}
\begin{lemma}
\label{l-imaginary}
For all $b \in \mathbb{R}$,
\[ f_b(yi) = f_b \Big ( \frac{i}{y} \Big ), \ \ y>0. \]
Consequently,
\[ Y_b(yi) = \Big (- \frac{1}{y^2} \Big )
Y_b \Big ( \frac{i}{y} \Big ), \ \ y>0. \]
In particular \[ Y_b(i)=0. \]
\end{lemma}
\begin{proof}
Apply the invariance of $f_b(z)$ under $z \rightarrow -\frac{1}{z}$ with
$z=yi$, $y>0$. Then differentiate with respect to $y$ and set $y=1$.
\end{proof}
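The identity $Y_b(i)=0$ is linear in $b$, so it suffices to check $b=0,1$, and the series \eqref{Y1-im} and \eqref{Y0-im} make this easy to confirm numerically. A quick sketch (an independent check, not part of the proof):

```python
import math

def Y1_im(y, terms=50):
    # Series (Y1-im): Y_1(yi) = 1/y - pi/3 + sum 8*pi*n*r^(2n)/(1 - r^(2n)),
    # with r = exp(-pi*y).
    r = math.exp(-math.pi * y)
    s = sum(8 * math.pi * n * r**(2*n) / (1 - r**(2*n)) for n in range(1, terms))
    return 1/y - math.pi/3 + s

def Y0_im(y, terms=50):
    # Series (Y0-im): Y_0(yi) = 1/y - pi/6 + sum 4*pi*n*(-r)^n/(1 - (-r)^n).
    r = math.exp(-math.pi * y)
    s = sum(4 * math.pi * n * (-r)**n / (1 - (-r)**n) for n in range(1, terms))
    return 1/y - math.pi/6 + s

# Lemma l-imaginary: y = 1 is a critical point, i.e. Y_b(i) = 0 for all b.
print(Y1_im(1.0))  # ~ 0
print(Y0_im(1.0))  # ~ 0
```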
\begin{lemma}
\label{l-f1-im}
The function $y \rightarrow f_1(yi)$, $y>0$, has one
critical point at $y=1$. Moreover
\begin{align*}
Y_1(yi) &> 0 \ \mbox{if} \ y \in (0,1) \\
Y_1(yi) &< 0 \ \mbox{if} \ y \in (1,\infty)
\operatorname{e}nd{align*}
\end{lemma}
\begin{proof}
Lemma \ref{l-imaginary} asserts that $y=1$ is a critical point of
$y \rightarrow f_1(y i)$, $y>0$, i.e. $Y_1(i)=0$. Define
\begin{equation}
A(z) = \arg (z \eta(z)) = \arg (z) + \arg (\eta(z)).
\end{equation}
Note
\begin{equation}
\label{log-arg}
\operatorname{Re}(\log(z \eta(z))) = \log |z \eta(z)|, \ \ \operatorname{Im}(\log(z
\eta(z)))=A(z).
\end{equation}
Hence $A$ is a harmonic function. We consider $A(z)$ in
$U$ and its closure $\overline{U}_{\mathbb{H}}$ given in
\eqref{U} and \eqref{U-closure} respectively.
On the imaginary axis, for $y>0$, since $\eta(yi)$ is real and positive,
\begin{equation}
\label{A-im}
A(yi) = \arg(yi) + \arg(\eta(yi)) = \frac{\pi}{2} + 0 =
\frac{\pi}{2}.
\end{equation}
On the line $x=\frac{1}{2}$, $\arg(\eta(z)) = \frac{\pi}{6}$
since $e^{2\pi n (\frac{1}{2} + yi)i}$ is real, and
\begin{equation}
A\Big(\frac{1}{2} + yi\Big) = \arctan (2y) + \frac{\pi}{6}.
\end{equation}
In particular
\begin{equation}
\label{A-x=1/2}
A\Big(\frac{1}{2} + yi\Big) > \frac{\pi}{2} \ \ \mbox{if } y >
\frac{\sqrt{3}}{2}.
\end{equation}
As $y \rightarrow \infty$ in $z=x+yi$,
\begin{equation}
\label{A-infty}
\lim_{y\rightarrow \infty} A(x+yi) = \frac{\pi}{2} + \frac{\pi x}{3} \ \
\mbox{uniformly with respect to } x \in \Big [0, \frac{1}{2} \Big ].
\end{equation}
Now consider $A$ on the unit circle.
By the functional equation \eqref{inv-2}
one has, in polar coordinates $z = r e^{i \theta}$,
\[ \log (r |\eta(r e^{i\theta}) |) = \log (\frac{1}{r} |\eta
(-\frac{1}{r} e^{-i\theta}) | ). \]
By the definition of $\eta$, one sees that $|\eta (- \bar{\zeta})|
= | \eta(\zeta) |$ for all $\zeta \in \mathbb{H}$. Therefore
\[ \log (r |\eta(r e^{i\theta}) |) = \log (\frac{1}{r} |\eta
(\frac{1}{r} e^{i\theta}) | ). \]
Differentiating the last equation with respect to $r$ and setting
$r=1$ afterwards, one derives
\[ \frac{\partial }{\partial r} \Big |_{r=1} \log (r |\eta(r
e^{i\theta}) |) =0. \]
One of the Cauchy-Riemann equations in polar coordinates for $\log (z
\eta(z))$ is
\[ \frac{\partial }{\partial r} \operatorname{Re}(\log (z\eta(z))) =
\frac{1}{r} \frac{\partial}{\partial \theta} \operatorname{Im}(\log(z\eta(z))). \]
By \eqref{log-arg}
\[ \frac{\partial }{\partial \theta} A(e^{i\theta}) =0, \]
namely $A$ is constant on the unit circle. Since
$\eta(i)$ is real and positive, $A(i) = \frac{\pi}{2}$. Hence
\begin{equation}
\label{A-arc}
A(z) =\frac{\pi}{2} \ \ \mbox{if} \ |z|=1 \ \mbox{and} \
\frac{\pi}{3} \leq \arg z \leq \frac{\pi}{2} .
\end{equation}
By \eqref{A-im}, \eqref{A-x=1/2}, \eqref{A-infty},
\eqref{A-arc}, and the maximum principle,
\begin{equation}
A(z) > \frac{\pi}{2}, \ \ z \in U;
\end{equation}
by the Hopf lemma,
\begin{equation}
\frac{\partial}{\partial x} \Big |_{x=0, y>1} A(z) >0.
\end{equation}
By a Cauchy-Riemann equation
\begin{equation}
Y_1(yi) = - \frac{\partial}{\partial x} \Big |_{x=0, y>1} A(z) <0,
\ y \in (1,\infty).
\label{y>1}
\end{equation}
For $y \in (0,1)$, by Lemma \ref{l-imaginary},
\begin{equation}
Y_1(yi) = \Big (-\frac{1}{y^2} \Big )
Y_1\Big (\frac{i}{y} \Big)>0, \ \ y \in (0,1).
\label{y<1}
\end{equation}
This completes the proof.
\end{proof}
\begin{lemma}
\label{l-f0-im}
The function $ y \rightarrow f_0(yi)$, $y>0$,
has three critical points at $\frac{\sqrt{3}}{3}$, $1$, and $\sqrt{3}$.
Moreover
\begin{align*}
Y_0(yi) &> 0 \ \mbox{if} \ y \in (0, \sqrt{3}/3) \\
Y_0(yi) &< 0 \ \mbox{if} \ y \in (\sqrt{3}/3, 1) \\
Y_0(yi) &> 0 \ \mbox{if} \ y \in (1,\sqrt{3}) \\
Y_0(yi) &< 0 \ \mbox{if} \ y \in (\sqrt{3}, \infty).
\end{align*}
\end{lemma}
\begin{proof}
The transform $z \rightarrow \frac{z-1}{2z-1} \in \Gamma$ maps
$\frac{1}{2} + y i$ to $\frac{1}{2} + \frac{i}{4y}$, so
\[ \log | y \eta (\frac{1}{2} + yi) |
= \log | \frac{1}{4y} \eta(\frac{1}{2} + \frac{i}{4y})|.
\]
Differentiation with respect to $y$ shows that
\begin{equation} \label{Y1-1/2-reflect}
Y_1\Big (\frac{1}{2}+ yi \Big )
= - \frac{1}{4y^2} Y_1\Big (\frac{1}{2}+ \frac{i}{4y}\Big ).
\end{equation}
One consequence of \eqref{Y1-1/2-reflect} is that
\begin{equation}
\label{c-1}
Y_0(i)=\frac{1}{2} Y_1\Big (\frac{1}{2}+ \frac{i}{2} \Big ) =0;
\end{equation}
namely that $1$ is a critical point of $y \rightarrow f_0(yi)$.
The combined transform of
$z \rightarrow w=-\frac{1}{z} \in \Gamma$ and $w \rightarrow - \bar{w}$
maps the line $\frac{1}{2} + yi$ to $\frac{2}{4y^2+1} + \frac{4y}{4y^2+1} i$,
the unit circle centered at $1$. The invariance of $f_1$ under
this transform yields
\[ f_1 \Big (\frac{1}{2}+y i \Big)
= f_1 \Big (\frac{2}{4y^2+1}+ \frac{4yi}{4y^2+1} \Big )\]
Differentiation with respect to $y$ shows that
\[ Y_1\Big (\frac{1}{2}+yi \Big) = X_1(\frac{2}{4y^2+1}+ \frac{4yi}{4y^2+1})
\frac{\partial }{\partial y} \Big ( \frac{2}{4y^2+1} \Big )
+ Y_1(\frac{2}{4y^2+1}+ \frac{4yi}{4y^2+1})
\frac{\partial }{\partial y} \Big ( \frac{4y}{4y^2+1} \Big ). \]
By \eqref{X1}
\[ X_1 \Big (\frac{1}{2}+v i \Big) =0, \ \ v>0. \]
Hence, with $y=\frac{\sqrt{3}}{2}$, one deduces
\begin{equation}
\label{c-sqrt3}
Y_0(\sqrt{3}\, i)=\frac{1}{2} Y_1(\frac{1}{2}+ \frac{\sqrt{3}}{2}i) = 0,
\end{equation}
i.e. $\sqrt{3}$ is a critical point of $y \rightarrow f_0(yi)$.
By \eqref{Y1-1/2-reflect}, $\frac{\sqrt{3}}{3}$ is also a critical point
of $y \rightarrow f_0(y i)$.
We now show that
\begin{equation}
\label{Y0-im-4}
Y_0(yi) <0, \ \mbox{if} \ y \in (\sqrt{3}, \infty).
\end{equation}
This fact was established by Chen and Oshita in \cite{chenoshita-2}.
Here we give a
more direct alternative proof.
Consider the expression for $\frac{\partial Y_0(y i)}{\partial y}$ in
\eqref{Y0'-im}. Note that the series
\begin{equation}
\label{series-1}
\sum_{n=1}^\infty \frac{n^2(-r)^n}{(1-(-r)^n)^2}
\end{equation}
is alternating. The only nontrivial property to verify is that the
absolute values of the terms decrease, which follows from the estimate
below.
\begin{align}
\frac{n^2r^n}{(1-(-r)^n)^2} - \frac{(n+1)^2r^{n+1}}{(1-(-r)^{n+1})^2}
& = \frac{(n+1)^2 r^{n+1}}{(1-(-r)^n)^2}
\Big ( \frac{n^2}{(n+1)^2 r} - \frac{(1-(-r)^n)^2}{(1-(-r)^{n+1})^2} \Big )
\nonumber \\ & \geq \frac{(n+1)^2 r^{n+1}}{(1-(-r)^n)^2}
\Big ( \frac{e^{\sqrt{3}\pi}}{4} - \frac{(1+e^{-\sqrt{3} \pi})^2}
{(1-e^{-2\sqrt{3}\,\pi})^2} \Big ) \nonumber \\
& = \frac{(n+1)^2 r^{n+1}}{(1-(-r)^n)^2} \ \times 56.68... >0.
\label{alt-3}
\end{align}
This allows us to estimate $\frac{\partial Y_0(y i)}{\partial y}$ as follows
\begin{align}
\frac{\partial Y_0(y i)}{\partial y} & < -\frac{1}{y^2}
+ \frac{ 4\pi^2 e^{-\pi y}} {(1+e^{-\pi y})^2} \nonumber \\
& < -\frac{1}{y^2} + 4\pi^2 e^{-\pi y} \nonumber \\
& = \frac{1}{y^2} \Big ( -1 + 4\pi^2 y^2 e^{-\pi y} \Big ) \nonumber \\
& \leq \frac{1}{y^2} \Big ( -1 + 4\pi^2 (\sqrt{3})^2 e^{-\pi \sqrt{3}} \Big )
\nonumber \\
& \leq \frac{1}{y^2} \ \times ( - 0.4867...) < 0. \label{Y0'-im-3}
\end{align}
Here to reach the fourth line, one notes that
\[ (-1 + 4\pi^2 y^2 e^{-\pi y})' = 4 \pi^2 e^{-\pi y} y (2- \pi y) <0, \ \
\mbox{if} \ y > \sqrt{3}. \]
Since $Y_0(\sqrt{3} \, i)=0$, \eqref{Y0'-im-3} implies \eqref{Y0-im-4}.
By \eqref{Y0-im-4} and Lemma \ref{l-imaginary}, one deduces
\begin{equation} \label{Y0-im-1}
Y_0(yi) > 0, \ \ \mbox{if} \ \ y \in (0,\frac{\sqrt{3}}{3}).
\end{equation}
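The sign conclusions \eqref{Y0-im-4} and \eqref{Y0-im-1} rest on the negativity of $\frac{\partial Y_0(yi)}{\partial y}$ beyond $\sqrt{3}$, which can also be observed directly from the series \eqref{Y0'-im}. A numerical sketch (an independent check, not part of the proof):

```python
import math

def dY0_im(y, terms=50):
    # Series (Y0'-im): dY_0(yi)/dy = -1/y^2 - sum 4*pi^2*n^2*(-r)^n/(1-(-r)^n)^2,
    # with r = exp(-pi*y).
    r = math.exp(-math.pi * y)
    s = sum(4 * math.pi**2 * n**2 * (-r)**n / (1 - (-r)**n)**2
            for n in range(1, terms))
    return -1 / y**2 - s

# Sample dY_0(yi)/dy on [sqrt(3), ~10]; all values are negative,
# consistent with (Y0'-im-3).
vals = [dY0_im(math.sqrt(3) + 0.1 * k) for k in range(84)]
print(max(vals))  # negative
```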
Next consider $Y_0(yi)$ for $y \in (1, \sqrt{3})$.
By \eqref{Y0''-im}
\begin{equation}
\frac{\partial^2 Y_0(y i)}{\partial y^2}
= \frac{2}{y^3} + \sum_{n=1}^\infty \frac{4 \pi^3 n^3 ((-r)^n +
r^{2n})}{(1- (-r)^n)^3 }, \ \ r=e^{-\pi y}.
\label{Y1''-1/2}
\end{equation}
It turns out that the series
\begin{equation}
\sum_{n=1}^\infty \frac{n^3 (-r)^n} {(1- (-r)^n)^3 }
\label{alt-2}
\end{equation}
which is part of \eqref{Y1''-1/2}
is alternating. To see that
the absolute values of the terms in \eqref{alt-2} decrease, note
\[ \qquad \frac{n^3 r^n} {(1- (-r)^n)^3 }
- \frac{(n+1)^3 r^{n+1}} {(1- (-r)^{n+1})^3 }
= \frac{(n+1)^3 r^{n+1}}{(1-(-r)^n)^3}
\Big [ \frac{n^3}{(n+1)^3 r} - \frac{(1-(-r)^n)^3}{(1-(-r)^{n+1})^3}
\Big ]
\]
and it suffices to show that the quantity in the brackets is positive.
For $y > 1$,
\[ \frac{n^3}{(n+1)^3 r} - \frac{(1-(-r)^n)^3}{(1-(-r)^{n+1})^3}
> \frac{e^{\pi}}{8} -
\frac{(1 + e^{-\pi})^3 }
{ (1- e^{-2\pi})^3}
= 2.8925... - 1.1417... \ > \ 0. \]
An upper bound for \eqref{alt-2} is obtained by keeping two terms
of the series:
\begin{equation}
\sum_{n=1}^\infty \frac{n^3 (-r)^n} {(1- (-r)^n)^3 }
< \frac{-r}{(1+r)^3} + \frac{8 r^2}{(1-r^2)^3}. \label{alt-2-2}
\end{equation}
Then \eqref{Y1''-1/2} becomes
\begin{align}
\frac{\partial^2 Y_0(y i) }{\partial y^2} & <
\frac{2}{y^3} - \frac{4 \pi^3 r}{(1+r)^3}
+ \frac{32\pi^3 r^2}{(1-r^2)^3}
+ 4\pi^3 \sum_{n=1}^\infty \frac{ n^3 r^{2n}}{(1- (-r)^n)^3 } \nonumber \\
& < \frac{2}{y^3} - \frac{4 \pi^3 r}{(1+r)^3}
+ \frac{32\pi^3 r^2}{(1-r^2)^3}
+ \frac{4 \pi^3}{(1-r)^3} \sum_{n=1}^\infty n^3 r^{2n} \nonumber \\
&= \frac{2}{y^3} - \frac{4 \pi^3 r}{(1+r)^3}
+ \frac{32\pi^3 r^2}{(1-r^2)^3} +
\frac{4 \pi^3 r^2 (1+4r^2+r^4) }{(1-r)^3(1-r^2)^4} \nonumber \\
& < \frac{2}{y^3} - \Big [ \frac{4\pi^3}{(1+e^{-\pi})^3} \Big ] r
+ \Big [ \frac{32\pi^3}{(1-e^{-2\pi})^3}
+\frac{4\pi^3 (1+4e^{-2\pi} + e^{-4\pi})}{(1-e^{-\pi})^3
(1-e^{-2\pi})^4}\Big ] r^2 \nonumber \\
&= \frac{2}{y^3} - A_1 r + A_2 r^2 \nonumber \\
&= r \, \kappa(y), \label{Y1''-1/2-2}
\end{align}
where we have used the summation formula
\begin{equation} \label{sum-1} \sum_{n=1}^\infty n^3 t^n =
t \Big ( t \Big [ t \Big ( \frac{1}{1-t} \Big )_t \Big ]_t \Big )_t
= \frac{t(1+4t+t^2)}{(1-t)^4}, \ \ |t|<1
\end{equation}
to reach the third line. The constants $A_1$ and $A_2$ are given by
\begin{equation}
A_1 = \frac{4\pi^3}{(1+e^{-\pi})^3}=109.24... , \ \
A_2 = \frac{32\pi^3}{(1-e^{-2\pi})^3}
+\frac{4\pi^3 (1+4e^{-2\pi} + e^{-4\pi})}{(1-e^{-\pi})^3
(1-e^{-2\pi})^4} = 1141.50...
\label{A1A2}
\end{equation}
and $\kappa$ is
\begin{equation}
\kappa(y) = \frac{2}{y^3 r} - A_1 + A_2 r
= \frac{2e^{\pi y} }{y^3} - A_1 + A_2 e^{-\pi y}.
\label{kappa}
\end{equation}
Regarding $\kappa$, one finds
\begin{align*}
\kappa''(y)
& = e^{\pi y} \big ( 2 \pi^2 y^{-3} -12 \pi y^{-4} + 24 y^{-5} \big )
+ \pi^2 A_2 e^{-\pi y} \\
& = 2 e^{\pi y} y^{-5} \big ( (\pi y - 3)^2 +3 \big )+
\pi^2 A_2 e^{-\pi y} \ > \ 0,
\end{align*}
and
\[ \kappa(1) = -13.63...<0, \ \
\kappa(\sqrt{3}) = -15.47...<0. \]
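The constants $A_1$, $A_2$ and the endpoint values of $\kappa$ are straightforward to reproduce from \eqref{A1A2} and \eqref{kappa}; a numerical sketch (not part of the proof):

```python
import math

pi = math.pi
# Constants A_1 and A_2 from (A1A2).
A1 = 4 * pi**3 / (1 + math.exp(-pi))**3
A2 = (32 * pi**3 / (1 - math.exp(-2*pi))**3
      + 4 * pi**3 * (1 + 4*math.exp(-2*pi) + math.exp(-4*pi))
        / ((1 - math.exp(-pi))**3 * (1 - math.exp(-2*pi))**4))

def kappa(y):
    # kappa(y) = 2 e^{pi y}/y^3 - A_1 + A_2 e^{-pi y}, as in (kappa).
    return 2 * math.exp(pi*y) / y**3 - A1 + A2 * math.exp(-pi*y)

print(A1, A2)                          # compare with (A1A2)
print(kappa(1), kappa(math.sqrt(3)))   # both negative, compare with the text
```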
Hence
\begin{equation}
\kappa(y) <0, \ \ y \in [ 1, \sqrt{3}],
\label{kappa-2}
\end{equation}
and by \eqref{Y1''-1/2-2}
\begin{equation}
\frac{\partial^2 Y_0(y i) }{\partial y^2}
<0, \ \ y \in [1, \sqrt{3}].
\label{Y1''-1/2-3}
\end{equation}
Since $Y_0 (i)
=Y_0(\sqrt{3}\, i)=0$ by \eqref{c-1} and
\eqref{c-sqrt3}, \eqref{Y1''-1/2-3} implies
\begin{equation}
\label{Y0-im-3}
Y_0(yi) > 0, \ \ \mbox{if} \ \ y \in
(1, \sqrt{3}).
\end{equation}
By \eqref{Y1-1/2-reflect}, \eqref{Y0-im-3} implies
\begin{equation}
\label{Y0-im-2}
Y_0(yi) < 0, \ \ \mbox{if} \ \ y \in
\Big (\frac{\sqrt{3}}{3},1 \Big ).
\end{equation}
The lemma follows from \eqref{Y0-im-1}, \eqref{Y0-im-2},
\eqref{Y0-im-3}, and \eqref{Y0-im-4}.
\end{proof}
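The sign pattern of Lemma \ref{l-f0-im} is easy to observe numerically from the series \eqref{Y0-im}, using one sample point in each interval; a sketch (an illustration, not part of the proof):

```python
import math

def Y0_im(y, terms=80):
    # Series (Y0-im): Y_0(yi) = 1/y - pi/6 + sum 4*pi*n*(-r)^n/(1-(-r)^n),
    # with r = exp(-pi*y).
    r = math.exp(-math.pi * y)
    s = sum(4 * math.pi * n * (-r)**n / (1 - (-r)**n) for n in range(1, terms))
    return 1/y - math.pi/6 + s

# One sample point per interval of Lemma l-f0-im.
print(Y0_im(0.4) > 0)   # y in (0, sqrt(3)/3)
print(Y0_im(0.8) < 0)   # y in (sqrt(3)/3, 1)
print(Y0_im(1.2) > 0)   # y in (1, sqrt(3))
print(Y0_im(2.0) < 0)   # y in (sqrt(3), infinity)
```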
For $b \in (0,1)$, the next lemma shows that the shape of $f_b$
is similar to that of $f_0$ if $b$ is small and to that of $f_1$ if $b$ is large.
The borderline value is $B$, given by
\begin{equation}
\label{B}
B = \frac{\frac{\partial Y_0(i)}{\partial y}}
{\frac{\partial Y_0(i)}{\partial y} - \frac{\partial Y_1(i)}{\partial y}}
= \frac{0.2982...}{ 0.2982...- (-1.298...)} = 0.1867...
\end{equation}
The numerical values in \eqref{B}
are computed from the series \eqref{Y1'-im} and \eqref{Y0'-im}.
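The two derivatives, and hence $B$, can be reproduced directly; a numerical sketch (an independent check, not part of the text):

```python
import math

pi = math.pi
r = math.exp(-pi)  # r = e^{-pi y} at y = 1

# (Y1'-im): dY_1(i)/dy = -1 - sum 16*pi^2*n^2*r^(2n)/(1-r^(2n))^2
dY1 = -1 - sum(16*pi**2 * n**2 * r**(2*n) / (1 - r**(2*n))**2
               for n in range(1, 50))
# (Y0'-im): dY_0(i)/dy = -1 - sum 4*pi^2*n^2*(-r)^n/(1-(-r)^n)^2
dY0 = -1 - sum(4*pi**2 * n**2 * (-r)**n / (1 - (-r)**n)**2
               for n in range(1, 50))

B = dY0 / (dY0 - dY1)  # definition (B)
print(dY0, dY1, B)     # compare with the values displayed in (B)
```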
One interpretation of $B$ is that if $b=B$, the second derivative of
$y \rightarrow f_B(y i)$ vanishes at $y=1$, i.e.
\begin{equation}
\label{interp}
\frac{\partial Y_B(i)}{\partial y}=0.
\end{equation}
\begin{lemma}
\label{l-fb-im}
The following properties hold for $y \rightarrow f_b(yi)$, $y \in (0,\infty)$.
\begin{enumerate}
\item When $b \in [0, B)$, the function $y \rightarrow f_b(yi)$, $y>0$,
has exactly three critical points at $\frac{1}{q_b}$, $1$, and
$q_b$, where $q_b \in (1, \sqrt{3}]$. Moreover
\begin{enumerate}
\item $Y_b(yi) >0$ if $ y \in (0, \frac{1}{q_b})$,
\item $Y_b(yi) < 0$ if $ y \in (\frac{1}{q_b},1)$,
\item $Y_b(yi) >0$ if $ y \in (1, q_b)$,
\item $Y_b(yi) < 0$ if $ y \in (q_b,\infty)$.
\end{enumerate}
As $b$ increases from $0$ to $B$,
$q_b$ decreases from $\sqrt{3}$ towards $1$.
\item When $b \in [B, 1]$, the function $y \rightarrow f_b(yi)$, $y>0$,
has only one critical point at $1$, and
\begin{enumerate}
\item $Y_b(yi)>0$ if $y \in (0,1)$,
\item $Y_b(yi)<0$ if $y \in (1,\infty)$.
\end{enumerate}
\end{enumerate}
\end{lemma}
\begin{proof}
The shapes of $f_1$ and $f_0$ are already established in Lemmas
\ref{l-f1-im} and \ref{l-f0-im}.
These lemmas imply that
$y=1$ is a critical point of $y \rightarrow f_b(y i)$, $y>0$, i.e.
\begin{equation}
\label{crit-1}
Y_b(i)=0, \ \ \mbox{for all} \ b \in [0,1],
\end{equation}
and moreover,
\begin{equation}
Y_b(yi)<0, \ \ \mbox{if} \ b \in (0,1] \ \mbox{and} \ y \geq \sqrt{3}.
\end{equation}
To study $Y_b(yi)$ for $y \in (1, \sqrt{3})$ and $b \in (0,1)$, write $Y_b =
b Y_1 + (1-b) Y_0$ as
\begin{equation}
\label{Yb-alt}
Y_b(yi) = b Y_1(yi) \Big ( 1+ \big ( \frac{1-b}{b} \big )
\frac{Y_0(yi)}{Y_1(yi)} \Big ).
\end{equation}
Recall $Y_1(y i) <0$ for $y>1$ from Lemma \ref{l-f1-im}.
Regarding the quotient $\frac{Y_0(yi)}{Y_1(yi)}$,
since $Y_0(i)=Y_1(i)=0$, $\frac{Y_0(i)}{Y_1(i)}$ is understood as the limit
\begin{equation}
\label{LHospital}
\frac{Y_0(i)}{Y_1(i)} = \lim_{y \rightarrow 1} \frac{Y_0(yi)}{Y_1(yi)}
= \frac{\frac{\partial Y_0(i)}{\partial y}}
{\frac{\partial Y_1(i)}{\partial y}}
= \frac{0.2982...}{-1.298... }= -0.2297...<0
\end{equation}
evaluated by L'Hospital's rule.
Since $Y_0(\sqrt{3} \,i) =0$ by Lemma \ref{l-f0-im},
\begin{equation}
\label{quo-sqrt3}
\frac{Y_0(\sqrt{3}\, i)}{Y_1(\sqrt{3}\,i)} =0.
\end{equation}
Lemmas \ref{l-f1-im} and \ref{l-f0-im} also assert that
$Y_1(yi) < 0$ and $Y_0(yi) >0$ if $y \in (1,\sqrt{3})$, so
\begin{equation}
\label{quo-between}
\frac{Y_0(y i)}{Y_1(yi)} < 0, \ \ y \in (1, \sqrt{3}).
\end{equation}
However the most important property of this quotient is its monotonicity
on $(1, \sqrt{3})$, namely
\begin{equation}
\label{quo-mono}
\frac{\partial }{\partial y} \Big ( \frac{Y_0(y i)}{Y_1(yi)} \Big )
> 0, \ \ y \in (1, \sqrt{3}).
\end{equation}
The proof of \eqref{quo-mono} is long and computational; it is given in the
appendix. The first-time reader may wish to skip it.
Return to \eqref{Yb-alt}. Since $Y_1(yi) <0$ on $(1, \infty)$ and
$\frac{1-b}{b} \in (0, \infty)$ when $b \in (0,1)$,
\eqref{quo-mono} implies that
$Y_b(yi)$ can have at most one zero in $(1, \sqrt{3})$
at which $Y_b(y i)$ changes sign. Because of \eqref{quo-sqrt3},
\begin{equation}
1 + \Big ( \frac{1-b}{b} \Big )
\frac{Y_0(\sqrt{3}\, i)}{Y_1(\sqrt{3}\,i)} = 1+0>0, \ \ b \in (0,1).
\end{equation}
Hence $Y_b(y i)$ admits a zero in $(1, \sqrt{3})$ if and only if
\begin{equation}
\label{iff}
1+ \Big ( \frac{1-b}{b} \Big )
\frac{Y_0(i)}{Y_1(i)} <0.
\end{equation}
The condition \eqref{iff} is equivalent to
\begin{equation}
\label{iff-2}
b < B
\end{equation}
by \eqref{LHospital}. We denote this zero in $(1,\sqrt{3})$
of $Y_b(yi)$ by $q_b$ when $ b \in (0, B)$.
It is also clear from \eqref{Yb-alt} and \eqref{quo-mono} that
as $b$ increases from $0$ to $B$, $q_b$ decreases monotonically
from $\sqrt{3}$ towards $1$. This proves parts 1(c), 1(d), and
2(b) of the lemma. The remaining parts follow from
Lemma \ref{l-imaginary}.
\end{proof}
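For illustration, the critical point $q_b$ of part 1 can be located by bisection on $Y_b(yi)=b\,Y_1(yi)+(1-b)\,Y_0(yi)$; a sketch with the arbitrary sample value $b=0.1<B$ (not part of the proof):

```python
import math

def Y1_im(y, terms=60):
    # Series (Y1-im).
    r = math.exp(-math.pi * y)
    return 1/y - math.pi/3 + sum(8*math.pi*n * r**(2*n) / (1 - r**(2*n))
                                 for n in range(1, terms))

def Y0_im(y, terms=60):
    # Series (Y0-im).
    r = math.exp(-math.pi * y)
    return 1/y - math.pi/6 + sum(4*math.pi*n * (-r)**n / (1 - (-r)**n)
                                 for n in range(1, terms))

def Yb_im(b, y):
    return b * Y1_im(y) + (1 - b) * Y0_im(y)

# Y_b(yi) is positive just above y = 1 and negative at y = sqrt(3),
# so bisection brackets the zero q_b in (1, sqrt(3)).
b, lo, hi = 0.1, 1.01, math.sqrt(3)
for _ in range(60):
    mid = (lo + hi) / 2
    if Yb_im(b, mid) > 0:
        lo = mid
    else:
        hi = mid
print(lo)  # q_b, strictly between 1 and sqrt(3)
```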
\section{$f_b$ on upper half plane}
\setcounter{equation}{0}
We start with a study of the singular point $z=1$. Recall the set
$W$ from \eqref{W}.
\begin{lemma}
\label{l-singular}
\[ \limsup_{W \ni z \rightarrow 1} X_b(z) =0. \]
\end{lemma}
\begin{proof}
Note that if $z=x+yi \in W$, then, when $y < 1$,
\begin{equation}
\label{x-range}
0< 1-x < 1 - \sqrt{1-y^2}.
\end{equation}
So $W \ni z \rightarrow 1$ is equivalent to
$z \in W$ and $y \rightarrow 0$.
We first show that
\begin{equation}
\label{X1-singular}
\limsup_{W \ni z \rightarrow 1} X_1(z) \leq 0.
\end{equation}
Namely,
for every $\epsilon >0$ there exists $\delta >0$
such that if $z=x+yi \in W$ and $y < \delta$, then $X_1(z) < \epsilon$.
Recall
\[
X_1(z) = \sum_{n=1}^\infty \frac{8\pi n \sin 2\pi n x }
{ e^{2\pi ny} + e^{-2\pi ny} - 2 \cos 2\pi n x}
\]
from \eqref{X1}.
Separate this infinite sum into two parts according to
whether $n y^2 < \frac{1}{2}$ or $ny^2 \geq \frac{1}{2}$. Write
\begin{align}
a_n(z) &= \frac{8\pi n \sin 2\pi n x }
{ e^{2\pi ny} + e^{-2\pi ny} - 2 \cos 2\pi n x} \\
A(z) &= \sum_{n< 1/(2y^2)} a_n(z) \\
\widetilde{A}(z) &= \sum_{n\geq 1/(2y^2)} a_n(z)
\end{align}
so that
\begin{equation}
\label{AtA}
X_1(z) = A(z) + \widetilde{A}(z).
\end{equation}
Consider the case $n < \frac{1}{2y^2}$.
Since $1-\sqrt{1-y^2} < y^2$ when $y \in (0,1)$, \eqref{x-range} implies
\begin{equation}
\label{x-range-2}
0< 1-x < y^2.
\end{equation}
Then $0< 2 \pi n (1-x) < 2\pi n y^2 < \pi$ and
$\sin 2\pi n x = - \sin 2\pi n (1-x) < 0$. Hence, every term
$a_n(z)$ in $A(z)$ is negative, and consequently
\begin{equation}
\label{A}
A(z) < 0.
\end{equation}
When $n \geq \frac{1}{2y^2}$,
\begin{equation}
\label{estimate}
|a_n(z) | \leq \frac{8\pi n}{e^{2\pi n y} -2}
\leq \frac{1}{e^{\pi ny}}.
\end{equation}
To see the last inequality, note
\[ \frac{8\pi n}{e^{2\pi n y} -2} \leq
\frac{8\pi n}{e^{2\pi n y} -2} \big ( 2n y^2 \big )
= \frac{16 \pi (ny)^2}{e^{2\pi ny}-2}.
\]
There exists $t_0>0$ such that for all $t > t_0$,
$\frac{16 \pi t^2}{e^{2\pi t}-2} \leq \frac{1}{e^{\pi t}}$. Since
$n \geq \frac{1}{2y^2}$, $n y \geq \frac{1}{2y}$. By choosing
$y < \frac{1}{2 t_0}$, we have $n y > t_0$ and the last inequality of
\eqref{estimate} follows.
Then
\begin{equation}
\label{estimate-2}
| \widetilde{A} (z) |
\leq \sum_{n\geq 1/(2y^2)} \frac{1}{e^{\pi n y}}
\leq \frac{e^{-\pi y \left [\frac{1}{2y^2} \right]}}{1- e^{-\pi y}}
\rightarrow 0 \ \ \mbox{as} \ y \rightarrow 0
\end{equation}
where $\left[\frac{1}{2y^2} \right]$ is the integer part of $\frac{1}{2y^2}$.
The claim \eqref{X1-singular} now follows from \eqref{A} and
\eqref{estimate-2}.
Next we claim that
\begin{equation}
\label{X0-singular}
\limsup_{W \ni z \rightarrow 1}X_0(z) \leq 0.
\end{equation}
This is proved by a similar argument whose details are omitted.
By \eqref{X1-singular} and \eqref{X0-singular} we obtain
\begin{equation}
\label{Xb-singular}
\limsup_{W \ni z \rightarrow 1}X_b(z) \leq 0.
\end{equation}
From the series \eqref{X1} and \eqref{X0},
\begin{equation}
\label{Xb-x=1}
X_b(1+yi) =0, \ \ y>0.
\end{equation}
This turns \eqref{Xb-singular} into
\begin{equation}
\label{Xb-singular-2}
\limsup_{W \ni z \rightarrow 1}X_b(z) = 0,
\end{equation}
proving the lemma.
\end{proof}
Recall that for $b \in [0, B)$, the largest of the three critical points
of the function $y \rightarrow f_b(yi)$, $y > 0$, is denoted $q_b$.
This $q_b$ is a maximum and $1 < q_b \leq \sqrt{3}$.
By convention
if $b \in [B, 1]$, we set $q_b=1$ which is the unique critical point
(a maximum) of $y \rightarrow f_b(yi)$, $y > 0$.
If $b \in [B, 1]$, then $1-b \in [0, 1-B]$ and $q_{1-b}$ is defined as above.
The transform $z=x+y i \rightarrow w=u+vi$ in \eqref{key-transform} sends
the point $z=q_{1-b}i$ to $w = \frac{q_{1-b}^2-1 + 2q_{1-b} i}{q_{1-b}^2+1}$.
Define
\begin{equation}
\label{pq}
p_b = \frac{q_{1-b}^2-1}{q_{1-b}^2+1}.
\end{equation}
Then
\begin{equation}
\label{pq-2}
\frac{q_{1-b}^2-1 + 2q_{1-b} i}{q_{1-b}^2+1} = p_b + i \sqrt{1-p_b^2}.
\end{equation}
\begin{lemma}
Let $b \in [0,1-B]$ and $W$ be given in \eqref{W}.
Then $X_b(z) <0$ for all $z \in W$.
\label{l-monotone}
\end{lemma}
\begin{proof}
From \eqref{X1} and \eqref{X0} one deduces
\begin{equation}
\label{X-line-x=0,1}
X_b(yi) =0 \ \ \mbox{and} \ \ X_b(1+yi)=0, \ \ y>0.
\end{equation}
Also
\begin{equation}
\label{X-y=inf}
\lim_{y \rightarrow \infty} X_b(z) = 0
\ \ \mbox{uniformly with respect to} \ x \in\mathbb{R}.
\end{equation}
On the unit circle, we know from
\eqref{line-to-circle-X}
\begin{equation}
\label{line-to-circle-X-2}
X_b(x + i\sqrt{1-x^2}) = \frac{\sqrt{1-x^2}}{1-x}
Y_{1-b}\Big(\frac{\sqrt{1-x^2}}{1-x} \, i \Big), \ \ x \in (-1,1).
\end{equation}
When $b \in [0, 1-B]$, $ 1-b \in [B, 1]$. By Lemma \ref{l-fb-im}.2,
\[ Y_{1-b}\Big(\frac{\sqrt{1-x^2}}{1-x} \, i \Big) <0, \ \ \mbox{if} \ \
x \in (0,1). \]
This shows
\begin{equation}
\label{X-circle}
X_b(x + i\sqrt{1-x^2}) <0, \ \ \mbox{if} \ \ x \in (0,1).
\end{equation}
The lemma follows from
\eqref{X-line-x=0,1}, \eqref{X-y=inf}, \eqref{X-circle},
and Lemma \ref{l-singular} by the maximum principle.
\end{proof}
We are now ready to prove the main theorem.
\begin{proof}[Proof of Theorem \ref{t-main}]
\
\
\noindent Claim 1.
Let $b \in [0, 1-B]$. Then $f_b(z)$ on the upper half plane
is maximized at $q_b i$ and the points in the orbit of $q_b i$
under the group ${\cal G}$.
\
The second plot of Figure \ref{f-W} demonstrates our argument.
By Lemma \ref{l-inv-group}.3 it suffices to consider $f_b$
in $\overline{W}_{\mathbb{H}}$. In $W$,
Lemma \ref{l-monotone} asserts that $f_b$ is strictly decreasing in
the horizontal direction, so it can only attain a maximum
in $\overline{W}_{\mathbb{H}}$
on the part of the unit circle in the first quadrant, i.e.
$\{ w \in \mathbb{C}: \ |w|=1, \ 0<\operatorname{Re} w<1, \ \operatorname{Im} w >0\}$, or on
the part of the imaginary axis above $i$, i.e.
$\{ z \in \mathbb{C}: \ \operatorname{Re} z =0, \ \operatorname{Im} z \geq 1 \}$.
First rule out the unit circle.
By Lemma \ref{l-dual}
\begin{equation}
f_b(w)=f_{1-b}(z), \ z \in \mathbb{H}, \ w=\frac{z-1}{z+1} \in \mathbb{H}.
\label{dual}
\end{equation}
Take $z=yi$ to be on the imaginary axis. Then
\[ w= \frac{y^2-1}{y^2+1} + \frac{2y}{y^2+1} i \]
is on the unit circle. As $z$ moves from $i$ to $\infty$ upward
along the imaginary axis, $w$ moves from
$i$ to $1$ clockwise along the unit circle.
When $b \in [0,1-B]$, $1-b \in [B,1]$.
Since $y \rightarrow f_{1-b}(yi)$
is strictly decreasing for $y \in (1, \infty)$
by Lemma \ref{l-fb-im}.2, $f_b(w)$ is strictly decreasing when
$w$ moves from $i$ to $1$ clockwise along the unit circle. Then
$f_b$ cannot attain a maximum on
$\{ w\in \mathbb{C}: \ |w|=1, \ 0<\operatorname{Re} w<1, \ \operatorname{Im} w>0\}$.
Therefore in $\overline{W}_{\mathbb{H}}$, $f_b$ can only achieve a maximum
on $\{ z \in \mathbb{C}: \ \operatorname{Re} z =0, \ \operatorname{Im} z \geq 1 \}$.
By Lemma \ref{l-fb-im}.1, it does so at $q_b i$. This proves Claim 1.
\
By Lemma
\ref{l-fb-im}.1 and the convention that $q_b=1$ if $b \in [B,1]$,
three possibilities exist for $q_b$ when $b \in [0, 1-B]$.
When $b=0$, $q_b=\sqrt{3}$, which proves part 1 of the theorem.
When $b \in (0,B)$, $q_b \in (1, \sqrt{3})$, which proves part 2 of
the theorem.
When $b\in [B,1-B]$, $q_b =1$, which proves part 3 of the theorem.
\
Now consider the case $b \in (1-B,1]$.
\
\noindent Claim 2.
If $b \in (1-B, 1]$, then $f_b$ on the upper half plane
is maximized at $p_b +i \sqrt{1-p_b^2}$ and the points in its orbit
under the group ${\cal G}$.
\
By Lemma \ref{l-dual}, the duality property, we have
\[ f_b(w) = f_{1-b} (z), \ \ z \in \mathbb{H} \ \ \mbox{and} \ \
w=\frac{z-1}{z+1} \in \mathbb{H} \]
If $w_\ast$ maximizes $f_b$, then $z_\ast = \frac{w_\ast+1}{-w_\ast+1}$
maximizes $f_{1-b}$. Since $b \in (1-B, 1]$, $1-b \in [0, B)$. By
Claim 1, $z_\ast = q_{1-b}i$ or a point in the orbit
of $q_{1-b}i$ under ${\cal G}$.
Under the transform
$w=\frac{z-1}{z+1}$, $z_\ast=q_{1-b}i$ corresponds to
\begin{equation} w_\ast=\frac{q_{1-b}i-1}{q_{1-b}i+1}
= \frac{q_{1-b}^2-1 + 2q_{1-b} i}{q_{1-b}^2+1}
= p_b + i \sqrt{1-p_b^2}
\label{wast}
\end{equation}
by \eqref{pq-2}. This proves Claim 2.
\
When $b\in (1-B,1)$, $q_{1-b} \in (1, \sqrt{3})$ by
Lemma \ref{l-fb-im}.1. Then by \eqref{pq-2}, $p_b+i \sqrt{1-p_b^2}$
identified as a maximum of $f_b$ in Claim 2 is in
$\{ z \in \mathbb{C}: \ |z|=1,
\ \frac{\pi}{3} < \arg z < \frac{\pi}{2} \}$. This proves
part 4 of the theorem. Finally when $b=1$, $q_0=\sqrt{3}$ and
\begin{equation}
\label{b=1}
p_1 + i \sqrt{1-p_1^2}=\frac{1+\sqrt{3} i}{2}
\end{equation}
by \eqref{pq-2}. This proves part 5 of the theorem.
\end{proof}
\appendix
\renewcommand{\theequation}{A.\arabic{equation}}
\setcounter{equation}{0}
\vskip 1cm
\noindent{\Large \bf Appendix}
\
\begin{proof}[Proof of \eqref{quo-mono}]
Here we prove the monotonicity of the function
$y \rightarrow \frac{Y_0(yi)}{Y_1(y i)}$, $y \in (1, \sqrt{3})$, by
showing
\begin{equation}
\label{mono}
\frac{\partial }{\partial y} \Big ( \frac{Y_0(yi)}{Y_1(y i)} \Big )
>0, \ y \in (1, \sqrt{3}).
\end{equation}
Our proof uses the following L'Hospital-like criterion for
monotonicity. See \cite{anderson-vamanamurthy-vuorinen} for more
information on this trick.
\
\noindent Claim.
\begin{equation}
\frac{\partial }{\partial y} \Big ( \frac{Y_0(yi)}{Y_1(y i)} \Big ) >0
\ \mbox{on}
\ (1, \sqrt{3}) \ \mbox{if} \ y \rightarrow
\frac{\frac{\partial Y_0(yi)}{\partial y}}{
\frac{\partial Y_1(y i)}{\partial y}}
\ \mbox{is strictly increasing on} \ (1,\sqrt{3}).
\label{gromov}
\end{equation}
\
Let $y \in (1, \sqrt{3})$. There exist $y_1 \in (1, y)$ and $y_2\in (1,y)$
such that
\begin{align*}
\frac{\partial }{\partial y} \Big ( \frac{Y_0(yi)}{Y_1(y i)} \Big )
& = \frac{\frac{\partial Y_0(yi)}{\partial y}
Y_1(yi) - Y_0(y i) \frac{\partial Y_1(yi)}{\partial y} }{Y_1^2(yi)} \\
& = \frac{\frac{\partial Y_1(yi)}{\partial y} }{Y_1(y i)}
\Big ( \frac{ \frac{\partial Y_0(yi)}{\partial y} }
{ \frac{\partial Y_1(yi)}{\partial y} }
- \frac{Y_0(yi)}{Y_1(yi)}\Big ) \\
& = \frac{\frac{\partial Y_1(yi)}{\partial y} }{Y_1(y i)-Y_1(i)}
\Big ( \frac{ \frac{\partial Y_0(yi)}{\partial y} }
{ \frac{\partial Y_1(yi)}{\partial y} }
- \frac{Y_0(yi)-Y_0(i)}{Y_1(yi)-Y_1(i)}\Big ) \\
& = \frac{\frac{\partial Y_1(yi)}{\partial y} }
{\frac{\partial Y_1(y_1i)}{\partial y} (y-1) }
\Big ( \frac{ \frac{\partial Y_0(yi)}{\partial y} }
{ \frac{\partial Y_1(yi)}{\partial y} }
- \frac{ \frac{\partial Y_0(y_2i)}{\partial y} }
{ \frac{\partial Y_1(y_2i)}{\partial y} }\Big )
\end{align*}
since $Y_1(i)=Y_0(i)=0$.
Because $\frac{\partial Y_1(yi)}{\partial y}$ does not change sign in
$(1, \sqrt{3})$,
\begin{equation}
\label{term1} \frac{\frac{\partial Y_1(yi)}{\partial y} }
{\frac{\partial Y_1(y_1i)}{\partial y} (y-1) } >0.
\end{equation}
Moreover, since
\[ y \rightarrow
\frac{\frac{\partial Y_0(yi)}{\partial y}}{
\frac{\partial Y_1(y i)}{\partial y}} \]
is strictly increasing,
\begin{equation}
\label{term2}
\frac{ \frac{\partial Y_0(yi)}{\partial y} }
{ \frac{\partial Y_1(yi)}{\partial y} }
- \frac{ \frac{\partial Y_0(y_2i)}{\partial y} }
{ \frac{\partial Y_1(y_2i)}{\partial y} } >0.
\end{equation}
The claim then follows from \eqref{term1} and \eqref{term2}.
\
We proceed to show that
\begin{equation}
\frac{\partial }{\partial y} \Big (
\frac{\frac{\partial Y_0(yi)}{\partial y}}{
\frac{\partial Y_1(y i)}{\partial y}} \Big )
= \frac{\frac{\partial^2 Y_0(yi)}{\partial y^2}
\frac{\partial Y_1(yi)}{\partial y} -
\frac{\partial Y_0(yi)}{\partial y}
\frac{\partial^2 Y_1(yi)}{\partial y^2}
}
{ \Big ( \frac{\partial Y_1(y i)}{\partial y} \Big )^2}
>0, \ y \in (1, \sqrt{3}). \label{gromov2}
\end{equation}
Define
\begin{equation} \label{T}
T(y)= \frac{\partial^2 Y_0(yi)}{\partial y^2}
\frac{\partial Y_1(yi)}{\partial y} -
\frac{\partial Y_0(yi)}{\partial y}
\frac{\partial^2 Y_1(yi)}{\partial y^2}.
\end{equation}
By \eqref{Y1'-im}, $\frac{\partial Y_1(yi)}{\partial y} < 0$ on $(0, \infty)$.
Therefore to prove \eqref{gromov2}, it suffices to show
\begin{equation}
T(y) > 0, \ y \in (1, \sqrt{3}).
\label{T>0}
\end{equation}
We divide $(1, \sqrt{3})$ into two intervals: $(1, \beta)$ and
$[\beta, \sqrt{3})$ where $\beta \in (1,\sqrt{3})$ is to be determined.
First consider $T(y)$ on $(1,\beta)$.
Lemma \ref{l-imaginary} asserts that for $j=0,1$,
\[ Y_j(yi) = \Big (- \frac{1}{y^2} \Big ) Y_j\Big (\frac{i}{y} \Big ). \]
Differentiation shows that
\begin{align}
\frac{\partial Y_j(yi)}{\partial y} &= 2 y^{-3}
Y_j\Big (\frac{i}{y} \Big )+ y^{-4}
\frac{\partial Y_j \big (\frac{i}{y} \big )}{\partial y} \\
\frac{\partial^2 Y_j(yi)}{\partial y^2} &=-6 y^{-4}
Y_j\Big (\frac{i}{y} \Big ) - 6 y^{-5}
\frac{\partial Y_j \big (\frac{i}{y} \big )}{\partial y}
- y^{-6} \frac{\partial^2 Y_j \big (\frac{i}{y} \big )}{\partial y^2}.
\label{Y''-ref}
\end{align}
Taking $y=1$ in \eqref{Y''-ref} and using $Y_j(i)=0$, one obtains
\begin{equation}
\label{recursive}
\frac{\partial^2 Y_j(i)}{\partial y^2}
= -3 \frac{\partial Y_j(i)}{\partial y}, \ j=1,0.
\end{equation}
In particular \eqref{recursive} implies that
\begin{equation}
\label{Tat1}
T(1)=0.
\end{equation}
Next consider the derivative of $T$,
\begin{equation} \label{T'}
T'(y) = \frac{\partial^3 Y_0(yi)}{\partial y^3}
\frac{\partial Y_1(yi)}{\partial y} -
\frac{\partial Y_0(yi)}{\partial y}
\frac{\partial^3 Y_1(yi)}{\partial y^3}.
\end{equation}
It is clear from \eqref{Y1'-im} and \eqref{Y1'''-im} that
\begin{equation}
\frac{\partial Y_1(yi)}{\partial y} <0, \ \
\frac{\partial^3 Y_1(yi)}{\partial y^3} <0, \ \ y >0. \label{T12}
\end{equation}
Similar to the argument following \eqref{alt-2}, one finds
the series in \eqref{Y0'-im} to be alternating when $y>1$. Therefore,
\begin{equation}
\frac{\partial Y_0(yi)}{\partial y}
> -\frac{1}{y^2} + \frac{4\pi^2 e^{-\pi y}}{(1+e^{-\pi y})^2}
- \frac{16\pi^2 e^{-2 \pi y}}{(1-e^{-2\pi y})^2}.
\label{Y0'-im-2}
\end{equation}
We will later choose $\beta\in (1,\sqrt{3})$
so that when $y=\beta$, the right side
of \eqref{Y0'-im-2} is positive; namely choose $\beta$ to make
\begin{equation}
-\frac{1}{\beta^2} + \frac{4\pi^2 e^{-\pi \beta}}{(1+e^{-\pi \beta})^2}
- \frac{16\pi^2 e^{-2 \pi \beta}}{(1-e^{-2\pi \beta})^2} > 0.
\label{eta-1}
\end{equation}
Since
\begin{equation}
\label{Y0''-1-sqrt3}
\frac{\partial^2 Y_0(yi)}{\partial y^2} <0, \ \
y \in (1, \sqrt{3})
\end{equation}
by \eqref{Y1''-1/2-3}, the condition
\eqref{eta-1} implies that
\begin{equation}
\frac{\partial Y_0(yi)}{\partial y} >0, \ \ y \in (1,\beta).
\label{T3}
\end{equation}
Regarding $\frac{\partial^3 Y_0(yi)}{\partial y^3}$, write it as
\begin{equation}
\frac{\partial^3 Y_0(yi)}{\partial y^3} = -\frac{6}{y^4}
-4\pi^4 \sum_{n=1}^\infty \frac{n^4 (-r)^n}{(1-(-r)^n)^4}
- 16\pi^4 \sum_{n=1}^\infty \frac{n^4 r^{2n}}{(1-(-r)^n)^4}
- 4\pi^4 \sum_{n=1}^\infty \frac{n^4 (-r)^{3n}}{(1-(-r)^n)^4}
\label{Y0'''-series}
\end{equation}
where
\begin{equation}
\label{r-2} r=e^{-\pi y}
\end{equation}
as before. Clearly, the second series in \eqref{Y0'''-series}
is positive for all
$y>0$. One can also show as before that when $y>1$, the first and the third
series in \eqref{Y0'''-series} are both alternating. Pick three leading
terms from the first series and one term from each of the second and the
third series to form an upper bound:
\begin{align}
\frac{1}{4\pi^4}
\frac{\partial^3 Y_0(yi)}{\partial y^3} & < -\frac{6}{4\pi^4 y^4}
+ \frac{r}{(1+r)^4} - \frac{16 r^2}{(1-r^2)^4}
+ \frac{81r^3}{(1+r^3)^4}
- \frac{4 r^2}{(1+r)^4}
+ \frac{r^3}{(1+r)^4} \nonumber \\
& < -\frac{6}{4\pi^4 y^4}
+ \frac{r}{(1+r)^4} - \frac{16 r^2}{(1-r^2)^4} + 82 r^3
- \frac{4r^2}{(1+r)^4} \nonumber \\
& = -\frac{6}{4\pi^4 y^4} +
\frac{r}{(1-r^2)^4} \Big ( 1- 24 r + 104 r^2 -28 r^3
- 311 r^4 -4 r^5 + 492 r^6 -328 r^8 + 82r^{10} \Big ) \nonumber \\
& < -\frac{6}{4\pi^4 y^4} +
\frac{r}{(1-r^2)^4} \big ( 1- 24 r + 104 r^2 \big ) \nonumber \\
& \leq -\frac{6}{4\pi^4 y^4} +
\frac{1}{(1-e^{-2\pi})^4} \ r \big ( 1- 24 r + 104 r^2 \big ).
\end{align}
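The algebraic combination above can be spot-checked numerically (an illustration only, not part of the proof); note that expanding $82 r^2 (1-r^2)^4$ shows the coefficient of $r^6$ in the polynomial to be $492 = 82\cdot 6$:

```python
# Numerical spot-check of the combined identity
# r/(1+r)^4 - 16 r^2/(1-r^2)^4 + 82 r^3 - 4 r^2/(1+r)^4
#   = r/(1-r^2)^4 * (1 - 24 r + 104 r^2 - 28 r^3 - 311 r^4
#                    - 4 r^5 + 492 r^6 - 328 r^8 + 82 r^10),
# where 492 = 82 * 6 comes from expanding 82 r^2 (1-r^2)^4.

def lhs(r):
    return (r / (1 + r)**4 - 16 * r**2 / (1 - r**2)**4
            + 82 * r**3 - 4 * r**2 / (1 + r)**4)

def rhs(r):
    poly = (1 - 24*r + 104*r**2 - 28*r**3 - 311*r**4
            - 4*r**5 + 492*r**6 - 328*r**8 + 82*r**10)
    return r / (1 - r**2)**4 * poly

checks = [abs(lhs(r) - rhs(r)) for r in (0.01, 0.043, 0.1)]
```

The agreement is to machine precision at each sampled value of $r$.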
Denote the final upper bound by
\begin{equation}
\label{sigma}
\sigma (y) = -\frac{6}{4\pi^4 y^4} +
\frac{1}{(1-e^{-2\pi})^4} \Big ( e^{-\pi y}- 24 e^{-2\pi y} +
104 e^{-3\pi y} \Big ).
\end{equation}
Compute
\begin{align}
\label{sigma'}
\sigma'(y) &= \frac{24}{4 \pi^4 y^5}
+ \frac{\pi e^{-\pi y}}{(1-e^{-2\pi})^4} \Big (-1 + 48 e^{-\pi y}
- 312 e^{-2\pi y} \Big )
\end{align}
and consider the quantity in the parentheses,
\begin{equation}
\phi(y) = -1 + 48 e^{-\pi y} - 312 e^{-2\pi y}.
\end{equation}
Since $\phi'(y) =\pi e^{-\pi y} (- 48 + 624 e^{-\pi y}) < 0$ if
$y>1$,
\begin{equation}
\phi(y) > \phi(\beta), \ \ \mbox{if} \ y \in (1,\beta).
\end{equation}
If one can make $\phi(\beta)>0$, namely if
\begin{equation}
-1 + 48 e^{-\pi \beta} - 312 e^{-2\pi \beta} >0,
\label{eta-2}
\end{equation}
then
\begin{equation}
\phi(y) >0, \ \ y\in (1,\beta).
\end{equation}
Consequently, by \eqref{sigma'}
\begin{equation}
\label{sigma'-2}
\sigma'(y) >0, \ \ y \in (1,\beta)
\end{equation}
and
\begin{equation}
\sigma(y) < \sigma(\beta), \ \ y \in (1,\beta).
\end{equation}
This shows that
\begin{equation}
\frac{\partial^3 Y_0(y i)}{\partial y^3} \leq 4\pi^4 \sigma (\beta),
\ \ y \in (1,\beta).
\end{equation}
If one can choose $\beta$ so that $\sigma(\beta)<0$, namely
\begin{equation}
\label{eta-3}
-\frac{6}{4\pi^4 \beta^4} +
\frac{1}{(1-e^{-2\pi})^4} \Big ( e^{-\pi \beta}- 24 e^{-2\pi \beta} +
104 e^{-3\pi \beta} \Big ) <0,
\end{equation}
then
\begin{equation}
\label{T4}
\frac{\partial^3 Y_0(yi)}{\partial y^3} < 0, \ \ y\in (1,\beta).
\end{equation}
Following \eqref{T12}, \eqref{T3}, and \eqref{T4}, one has that
\begin{equation}
T'(y) > 0, \ \ y \in (1,\beta).
\label{T'-2}
\end{equation}
By \eqref{Tat1}, \eqref{T'-2} implies that
\begin{equation}
\label{small}
T(y) >0, \ \ y \in (1, \beta)
\end{equation}
provided \eqref{eta-1}, \eqref{eta-2}, and \eqref{eta-3} hold.
Next consider $T(y)$ for $y\in [\beta, \sqrt{3})$. Introduce
\begin{align}
d(y) &= Y_0(yi) - Y_1(yi) =
\frac{\pi}{6} - \sum_{k=1}^\infty \frac{4\pi(2k-1)r^{2k-1}}{1+r^{2k-1}}
\label{d} \\
d'(y) &= \sum_{k=1}^\infty \frac{4\pi^2(2k-1)^2r^{2k-1}}{(1+r^{2k-1})^2}
\label{d'} \\
d''(y) &= \sum_{k=1}^\infty \frac{4\pi^3(2k-1)^3(-r^{2k-1}+r^{2(2k-1)})}
{(1+r^{2k-1})^3}. \label{d''}
\end{align}
Then by \eqref{Y1'-im}, \eqref{Y1''-im}, \eqref{d'}, and \eqref{d''},
\begin{align}
T(y) &= d''(y) \frac{\partial Y_1(yi)}{\partial y}
- d'(y) \frac{\partial^2 Y_1(yi)}{\partial y^2} \nonumber \\
& = \sum_{k=1}^\infty
\frac{4\pi^2 (2k-1)^2 r^{2k-1}}{y^2(1+r^{2k-1})^2}
\Big ( \frac{\pi (2k-1) (1-r^{2k-1})}{1+r^{2k-1}} - \frac{2}{y}
\Big ) \nonumber \\
& \qquad + \sum_{k=1}^\infty \sum_{n=1}^\infty
\frac{16 \pi^5 (2k-1)^2 (2n)^2 r^{2n+2k-1}}{(1+r^{2k-1})^2 (1-r^{2n})^2}
\Big ( \frac{(2k-1)(1-r^{2k-1})}{1+r^{2k-1}}
- \frac{2n (1+r^{2n})}{1-r^{2n}}
\Big ) \label{T-2} \\
&= \sum_{k=1}^\infty c_k + \sum_{k=1}^\infty \sum_{n=1}^\infty d_{kn}
\label{T-c-d}
\end{align}
where $c_k$ and $d_{kn}$ are defined by the terms in \eqref{T-2}.
Regarding $c_k$, because, with $y>1$,
\[ \frac{\pi (2k-1) (1-r^{2k-1})}{1+r^{2k-1}} - \frac{2}{y}
\geq \frac{\pi(1-r)}{1+r} -2
> \frac{\pi(1-e^{-\pi})}{1+e^{-\pi}} -2 >0, \]
one has
\begin{equation}
\label{c-2}
c_k >0 \ \ \mbox{for all} \ k.
\end{equation}
The terms $d_{kn}$ have the following property:
\begin{equation}
d_{kn} > 0 \ \ \mbox{if} \ k >n, \ \ \
d_{kn} <0 \ \ \mbox{if} \ k \leq n.
\label{d-2}
\end{equation}
To see \eqref{d-2}, define
\begin{equation}
\rho_j(r) = \frac{j (1+(-r)^j)}{1-(-r)^j}
\end{equation}
so that the quantity in the second parenthesis pair of \eqref{T-2} is
$\rho_{2k-1}(r) - \rho_{2n} (r)$. The claim \eqref{d-2} follows if
$\rho_j(r)$ is increasing with respect to $j$. To this end consider
\begin{equation}
\label{rho}
\rho_{j+1}(r) - \rho_j(r)
= \frac{1-(2j+1) (r+1) (-r)^j + r^{2j+1}}{(1-(-r)^{j+1}) (1-(-r)^j)}.
\end{equation}
This is clearly positive when $j$ is odd since $0<r<1$. When $j$ is even,
denote the numerator in \eqref{rho} by
\[ \mu_j(r) = 1- (2j+1) (r+1) r^j + r^{2j+1}. \]
Then
\[ \mu_j'(r) = (2j+1) r^{j-1} \big (- j -(j+1) r + r^{j+1} \big ). \]
As $0<r<1$,
\[ - j -(j+1) r + r^{j+1} < - j -(j+1) r + r = -j - j r <0. \]
Hence $\mu_j'(r) <0$ for $r \in (0,1)$. Moreover
\[ \mu_j(r) > 1-2(2j+1)r^j, \]
so
\[ \mu_j(e^{-\pi}) > 1- 2(2j+1) e^{-\pi j} > 0\]
for all $j \geq 1$. Therefore $\mu_j(r) >0$ since $r \in (0, e^{-\pi})$
and \eqref{d-2} is proved.
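The monotonicity of $\rho_j(r)$ in $j$ and the positivity of $\mu_j$ at $r=e^{-\pi}$ are also easy to confirm numerically (an illustration only; the proof above is self-contained):

```python
import math

# rho_j(r) = j (1 + (-r)^j) / (1 - (-r)^j); the claim is that it is
# strictly increasing in j for 0 < r < e^{-pi}.
def rho(j, r):
    s = (-r)**j
    return j * (1 + s) / (1 - s)

# mu_j(r) = 1 - (2j+1)(r+1) r^j + r^(2j+1), the numerator of
# rho_{j+1} - rho_j for even j; the proof uses mu_j(e^{-pi}) > 0.
def mu(j, r):
    return 1 - (2*j + 1) * (r + 1) * r**j + r**(2*j + 1)

rmax = math.exp(-math.pi)
mono_ok = all(rho(j + 1, r) > rho(j, r)
              for r in (1e-4, 0.01, 0.999 * rmax)
              for j in range(1, 40))
mu_ok = all(mu(j, rmax) > 0 for j in range(2, 40, 2))
```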
By \eqref{d-2}, drop the positive $d_{kn}$'s to bound the double sum
in \eqref{T-c-d} from below by
\begin{equation}
\label{d-3}
\sum_{k=1}^\infty \sum_{n=1}^\infty d_{kn}
> \sum_{k=1}^\infty d_{kk} + \sum_{k=1}^\infty \sum_{n=k+1}^\infty d_{kn}.
\end{equation}
First consider $\sum_{k=1}^\infty d_{kk}$:
\begin{equation}
\label{dkk}
d_{kk} = \frac{16 \pi^5 (2k-1)^2 (2k)^2 r^{4k-1}}{(1+r^{2k-1})^2 (1-r^{2k})^2}
\Big ( \frac{(2k-1)(1-r^{2k-1})}{1+r^{2k-1}}
- \frac{2k (1+r^{2k})}{1-r^{2k}}
\Big ).
\end{equation}
For $y>1$,
\begin{align*}
(1+r^{2k-1}) (1-r^{2k}) &=
1 + r^{2k-1}(1-r-r^{2k}) \\ & \geq 1 + r^{2k-1}(1-r-r^2) \\
& > 1 + r^{2k-1}(1-e^{-\pi}-e^{-2\pi}) \\
& = 1 + r^{2k-1} \times 0.9549... \ > \ 1.
\end{align*}
Also, when $y>1$, both $ \frac{(4k-2)r^{2k-1}}{1+r^{2k-1}}$ and
$\frac{4k r^{2k}}{1-r^{2k}}$ are decreasing with respect to $k$. Hence
\begin{align*}
\frac{(2k-1)(1-r^{2k-1})}{1+r^{2k-1}}
- \frac{2k (1+r^{2k})}{1-r^{2k}}
& = -1 - \frac{(4k-2)r^{2k-1}}{1+r^{2k-1}} - \frac{4k r^{2k}}{1-r^{2k}} \\
& \geq -1 - \frac{2 r}{1+r} - \frac{4 r^2}{1-r^2} \\
& > -1 - \frac{2 e^{-\pi}}{1+e^{-\pi}} - \frac{4 e^{-2\pi}}{1-e^{-2\pi}} \\
&= - \Big ( \frac{1+e^{-\pi}}{1-e^{-\pi}} \Big ).
\end{align*}
One estimates
\begin{align}
\sum_{k=1}^\infty d_{kk} & > - 16\pi^5 \Big ( \frac{1+e^{-\pi}}{1-e^{-\pi}} \Big )
\sum_{k=1}^\infty (2k-1)^2 (2k)^2 r^{4k-1} \nonumber \\
& = - 64 \pi^5 \Big ( \frac{1+e^{-\pi}}{1-e^{-\pi}} \Big )
\frac{r^3 (1+31r^4+55r^8+9r^{12})}{(1-r^4)^5} \label{dkk-2}
\end{align}
with the help of the summation formula
\begin{equation}
\label{sum-2}
\sum_{k=1}^\infty (2k)^2 (2k-1)^2 r^{4k-1}
= \frac{r^2}{16} \Big ( r \Big ( r^{-1} \Big ( r \Big ( \frac{1}{1-r^4}
\Big )_r \Big )_r \Big )_r \Big )_r
= \frac{4 r^3 (1+31r^4+55r^8+9r^{12})}{(1-r^4)^5}.
\end{equation}
Next consider the double sum on the right of \eqref{d-3}. Dropping
the first term in the second parenthesis pair of \eqref{T-2} one obtains
\begin{equation}
\label{d-4}
\sum_{k=1}^\infty \sum_{n=k+1}^\infty d_{kn} >
\sum_{k=1}^\infty \sum_{n=k+1}^\infty
\frac{16 \pi^5 (2k-1)^2 (2n)^2 r^{2n+2k-1}}{(1+r^{2k-1})^2 (1-r^{2n})^2}
\Big (- \frac{2n (1+r^{2n})}{1-r^{2n}} \Big ).
\end{equation}
For $n \geq k+1$,
\begin{align*}
(1+r^{2k-1}) (1-r^{2n}) &= 1 + r^{2k-1} (1-r^{2n-2k+1}-r^{2n}) \\
& > 1+ r^{2k+2} \ \geq \ 1+ r^{2n}.
\operatorname{e}nd{align*}
Consequently
\begin{align*}
\frac{1+r^{2n}}{(1+r^{2k-1})^2 (1-r^{2n})^3} & <
\frac{1}{(1+r^{2k-1}) (1-r^{2n})^2} \\
& = \frac{1}{1+r^{2k-1} \big ( (1-r^{2n})^2 +r^{2n-2k+1}(-2+r^{2n}) \big )} \\
& \leq \frac{1}{1+r^{2k-1} \big ( (1-r^2)^2 - 2r^3 \big )} \ < \ 1.
\operatorname{e}nd{align*}
Return to \eqref{d-4} to deduce
\begin{align}
\sum_{k=1}^\infty \sum_{n=k+1}^\infty d_{kn} & >
- \sum_{k=1}^\infty \sum_{n=k+1}^\infty 16 \pi^5 (2k-1)^2 (2n)^3 r^{2n+2k-1}
\nonumber \\
& \geq - \sum_{k=1}^\infty \sum_{n=2}^\infty 128 \pi^5 (2k-1)^2 n^3 r^{2n+2k-1}
\nonumber \\
& = - \sum_{k=1}^\infty 128 \pi^5 (2k-1)^2 r^{2k}
\Big (\frac{r^3(8-5r^2+4r^4-r^6)} {(1-r^2)^4} \Big ) \nonumber \\
& \geq - \sum_{k=1}^\infty 128 \pi^5 (2k-1)^2 r^{2k+3}
\Big ( \frac{8}{(1-e^{-2\pi})^4 } \Big ) \nonumber \\
& = - \frac{1024 \pi^5}{(1-e^{-2\pi})^4 }
\Big ( \frac{r^5(1+6r^2+r^4)}{(1-r^2)^3} \Big ). \label{d-6}
\end{align}
We have used the summation formulas
\begin{align}
\sum_{n=2}^\infty n^3 r^{2n-1}
&= \frac{1}{8} \Big ( r \Big ( r \Big( \frac{1}{1-r^2}
\Big )_r \Big )_r \Big )_r - r
= \frac{r^3(8-5r^2+4r^4-r^6)} {(1-r^2)^4} \label{sum-3} \\
\sum_{k=1}^\infty (2k-1)^2 r^{2k+3} &=
r^5 \Big ( r \Big ( \frac{r}{1-r^2} \Big )_r \Big )_r
= \frac{r^5(1+6r^2+r^4)}{(1-r^2)^3} \label{sum-4}
\end{align}
to reach the third line and the last line respectively.
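The closed forms \eqref{sum-2}, \eqref{sum-3}, and \eqref{sum-4} can be checked against truncated partial sums (a numerical sanity check, not part of the proof):

```python
# Spot-check the three closed-form summation formulas used above
# against direct partial sums, truncated at N terms.
N = 200

def check(r):
    s2 = sum((2*k)**2 * (2*k - 1)**2 * r**(4*k - 1) for k in range(1, N))
    f2 = 4 * r**3 * (1 + 31*r**4 + 55*r**8 + 9*r**12) / (1 - r**4)**5

    s3 = sum(n**3 * r**(2*n - 1) for n in range(2, N))
    f3 = r**3 * (8 - 5*r**2 + 4*r**4 - r**6) / (1 - r**2)**4

    s4 = sum((2*k - 1)**2 * r**(2*k + 3) for k in range(1, N))
    f4 = r**5 * (1 + 6*r**2 + r**4) / (1 - r**2)**3
    return max(abs(s2 - f2), abs(s3 - f3), abs(s4 - f4))

errors = [check(r) for r in (0.02, 0.1, 0.3)]
```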
By \eqref{T-c-d}, \eqref{c-2}, \eqref{d-3}, \eqref{dkk-2}, and \eqref{d-6},
taking two terms from $\sum_{k=1}^\infty c_k$ and using $1<y<\sqrt{3}$, we find
\begin{align}
T(y) & \geq \frac{4\pi^2 r}{y^2(1+r)^2}
\Big ( \frac{\pi (1-r)}{1+r} - \frac{2}{y} \Big )
+ \frac{36\pi^2 r^3}{y^2(1+r^3)^2}
\Big ( \frac{3 \pi (1-r^3)}{1+r^3} - \frac{2}{y} \Big ) \nonumber
\\ & \qquad - 64 \pi^5 \Big ( \frac{1+e^{-\pi}}{1-e^{-\pi}} \Big )
\frac{r^3 (1+31r^4+55r^8+9r^{12})}{(1-r^4)^5}
- \frac{1024 \pi^5}{(1-e^{-2\pi})^4 }
\Big ( \frac{r^5(1+6r^2+r^4)}{(1-r^2)^3} \Big ) \nonumber
\\ & > \frac{4\pi^2 r}{y^2(1+r)^2}
\Big ( \frac{\pi (1-r)}{1+r} - \frac{2}{y} \Big )
+ \frac{36\pi^2 r^3}{3(1+e^{-3\pi})^2}
\Big ( \frac{3 \pi (1-e^{-3\pi})}{1+e^{-3\pi}} - 2 \Big ) \nonumber
\\ & \qquad - 64 \pi^5 \Big ( \frac{1+e^{-\pi}}{1-e^{-\pi}} \Big )
\frac{r^3 (1+31e^{-4\pi}+55e^{-8\pi} +9e^{-12\pi})}{(1-e^{-4\pi})^5}
\nonumber \\ & \qquad - \frac{1024 \pi^5}{(1-e^{-2\pi})^4 }
\Big ( \frac{r^3 e^{-2\pi}(1+6e^{-2\pi}+e^{-4\pi})}{(1-e^{-2\pi})^3} \Big )
\nonumber \\ & = \frac{4\pi^2 r}{y^2(1+r)^2}
\Big ( \frac{\pi (1-r)}{1+r} - \frac{2}{y} \Big ) + A r^3 \label{T-3}
\end{align}
where
\begin{align}
A &= \frac{36\pi^2}{3(1+e^{-3\pi})^2}
\Big ( \frac{3 \pi (1-e^{-3\pi})}{1+e^{-3\pi}} - 2 \Big ) \nonumber
\\ & \qquad - 64 \pi^5 \Big ( \frac{1+e^{-\pi}}{1-e^{-\pi}} \Big )
\frac{(1+31e^{-4\pi}+55e^{-8\pi} +9e^{-12\pi})}{(1-e^{-4\pi})^5} \nonumber
\\ & \qquad - \frac{1024 \pi^5}{(1-e^{-2\pi})^4 }
\Big ( \frac{ e^{-2\pi}(1+6e^{-2\pi}+e^{-4\pi})}{(1-e^{-2\pi})^3} \Big )
\nonumber
\\ & = -21,077.61... \label{constant-A}
\end{align}
Continuing from \eqref{T-3}, one has
\begin{align}
T(y) & > \frac{\pi^2 r^2}{(1+r)^3}
\Big ( \Big ( \frac{4\pi}{y^2} - \frac{8}{y^3} \Big ) r^{-1}
- \Big (\frac{4\pi}{y^2} + \frac{8}{y^3} \Big )
+ \frac{A r (1+r)^3}{\pi^2} \Big ).
\end{align}
Bound the last term by
\begin{equation}
\frac{A r (1+r)^3}{\pi^2} \geq \frac{A e^{-\pi \beta}
(1+e^{-\pi \beta})^3}{\pi^2}
\label{boundby}
\end{equation}
and define
\begin{equation}
\label{nu}
\nu(y) = \Big ( \frac{4\pi}{y^2} - \frac{8}{y^3} \Big ) e^{\pi y}
- \Big (\frac{4\pi}{y^2} + \frac{8}{y^3} \Big )
+ \frac{A e^{-\pi \beta} (1+e^{-\pi \beta})^3}{\pi^2}
\end{equation}
so that
\begin{equation}
\label{T-4}
T(y) > \frac{\pi^2 r^2}{(1+r)^3} \nu(y).
\end{equation}
Regarding $\nu(y)$, one finds
\begin{equation}
\label{nu'}
\nu'(y) = e^{\pi y} y^{-3} \Big ( 4\pi^2 \Big (
\sqrt{y} - \frac{2}{\pi \sqrt{y}}\Big )^2 + \frac{8}{y} \Big )
+ \frac{8\pi}{y^3} + \frac{24}{y^4} > 0.
\end{equation}
Then \eqref{T-4} implies
\begin{equation}
\label{T-5}
T(y) > \frac{\pi^2 r^2}{(1+r)^3} \nu(\beta)
\end{equation}
where
\begin{equation}
\label{nu-eta}
\nu(\beta) = \Big ( \frac{4\pi}{\beta^2} - \frac{8}{\beta^3} \Big ) e^{\pi \beta}
- \Big (\frac{4\pi}{\beta^2} + \frac{8}{\beta^3} \Big )
+ \frac{A e^{-\pi \beta} (1+e^{-\pi \beta})^3}{\pi^2}.
\end{equation}
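Numerically, one can also confirm that $\nu$ is strictly increasing by comparing the expanded form of its derivative, $\nu'(y) = e^{\pi y}\big(4\pi^2/y^2 - 16\pi/y^3 + 24/y^4\big) + 8\pi/y^3 + 24/y^4$, against a central difference (an illustration only; the additive constant in $\nu$ plays no role in the derivative):

```python
import math

PI = math.pi

def nu(y, const=0.0):
    # nu from the definition above; the additive constant cancels in nu'.
    return (4*PI/y**2 - 8/y**3) * math.exp(PI*y) - (4*PI/y**2 + 8/y**3) + const

def nu_prime(y):
    # Expanded closed form of the derivative of nu.
    return (math.exp(PI*y) * (4*PI**2/y**2 - 16*PI/y**3 + 24/y**4)
            + 8*PI/y**3 + 24/y**4)

h = 1e-6
ys = [1.05, 1.2, 1.5, 1.7]
fd_err = max(abs((nu(y + h) - nu(y - h)) / (2*h) - nu_prime(y)) / nu_prime(y)
             for y in ys)
increasing = all(nu_prime(y) > 0 for y in ys)
```

Positivity of $\nu'$ is also clear from the closed form, since $4\pi^2 y^2 - 16\pi y + 24$ has negative discriminant.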
Therefore if we can find $\beta$ so that $\nu(\beta)>0$, namely
\begin{equation}
\label{eta-4}
\Big ( \frac{4\pi}{\beta^2} - \frac{8}{\beta^3} \Big ) e^{\pi \beta}
- \Big (\frac{4\pi}{\beta^2} + \frac{8}{\beta^3} \Big )
+ \frac{A e^{-\pi \beta} (1+e^{-\pi \beta})^3}{\pi^2} >0,
\end{equation}
then
\begin{equation}
\label{large}
T(y) >0, \ \ y \in [\beta, \sqrt{3}).
\end{equation}
In summary, to invoke \eqref{small} and \eqref{large} one must choose
$\beta$ so that
\eqref{eta-1}, \eqref{eta-2}, \eqref{eta-3}, and \eqref{eta-4} all hold.
Our choice is
\begin{equation}
\beta = 1.08
\end{equation}
One readily checks the four conditions:
\begin{align}
& -\frac{1}{\beta^2} + \frac{4\pi^2 e^{-\pi \beta}}{(1+e^{-\pi \beta})^2}
- \frac{16\pi^2 e^{-2 \pi \beta}}{(1-e^{-2\pi \beta})^2}
\ \Big | _{\beta =1.08} = 0.2058... >0; \\
& -1 + 48 e^{-\pi \beta} - 312 e^{-2\pi \beta} \ \Big |_{\beta=1.08}
= 0.2608... >0; \\
& -\frac{6}{4\pi^4 \beta^4} +
\frac{1}{(1-e^{-2\pi})^4} \Big ( e^{-\pi \beta}- 24 e^{-2\pi \beta} +
104 e^{-3\pi \beta} \Big )\ \Big |_{\beta=1.08} = -0.0007930... <0; \\
& \Big ( \frac{4\pi}{\beta^2} - \frac{8}{\beta^3} \Big ) e^{\pi \beta}
- \Big (\frac{4\pi}{\beta^2} + \frac{8}{\beta^3} \Big )
+ \frac{A e^{-\pi \beta} (1+e^{-\pi \beta})^3}{\pi^2} \ \Big |_{\beta =1.08}
= 35.20...>0.
\end{align}
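The constant $A$ from \eqref{constant-A} and the four conditions at $\beta = 1.08$ are straightforward to reproduce in floating-point arithmetic (a sanity check, not part of the proof):

```python
import math

PI = math.pi
E = math.exp
beta = 1.08

# The constant A defined above.
A = (36*PI**2 / (3 * (1 + E(-3*PI))**2)
     * (3*PI * (1 - E(-3*PI)) / (1 + E(-3*PI)) - 2)
     - 64*PI**5 * (1 + E(-PI)) / (1 - E(-PI))
     * (1 + 31*E(-4*PI) + 55*E(-8*PI) + 9*E(-12*PI)) / (1 - E(-4*PI))**5
     - 1024*PI**5 / (1 - E(-2*PI))**4
     * E(-2*PI) * (1 + 6*E(-2*PI) + E(-4*PI)) / (1 - E(-2*PI))**3)

# The four conditions evaluated at beta = 1.08.
c1 = (-1/beta**2 + 4*PI**2 * E(-PI*beta) / (1 + E(-PI*beta))**2
      - 16*PI**2 * E(-2*PI*beta) / (1 - E(-2*PI*beta))**2)
c2 = -1 + 48*E(-PI*beta) - 312*E(-2*PI*beta)
c3 = (-6 / (4*PI**4 * beta**4)
      + (E(-PI*beta) - 24*E(-2*PI*beta) + 104*E(-3*PI*beta))
      / (1 - E(-2*PI))**4)
c4 = ((4*PI/beta**2 - 8/beta**3) * E(PI*beta)
      - (4*PI/beta**2 + 8/beta**3)
      + A * E(-PI*beta) * (1 + E(-PI*beta))**3 / PI**2)
```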
The proof of \eqref{quo-mono} is complete.
\end{proof}
\
\noindent {\bf Acknowledgment}. S. Luo and J. Wei are partially
supported by NSERC-RGPIN-2018-03773;
X. Ren is partially supported by NSF grant DMS-1714371.
\end{document}
\begin{document}
\title{Entanglement cost in practical scenarios}
\author{Francesco Buscemi} \email{[email protected]}
\affiliation{Institute for Advanced Research, Nagoya University, Nagoya 464-8601, Japan}
\author{Nilanjana Datta} \email{[email protected]}
\affiliation{Statistical Laboratory, DPMMS, University of Cambridge,
Cambridge CB3 0WB, UK}
\date{\today}
\begin{abstract}
We quantify the one-shot entanglement cost of an arbitrary bipartite state, that is the minimum number of singlets needed by two distant parties to create a single copy of the state up to a finite accuracy, using local operations and classical communication only. This analysis, in contrast to the traditional one, pertains to scenarios of practical relevance, in which resources are finite and transformations can only be achieved approximately. Moreover, it unveils a fundamental relation between two well-known entanglement measures, namely, the Schmidt number and the entanglement of formation. Using this relation, we are able to recover the usual expression of the entanglement cost as a special case.
\end{abstract}
\maketitle
Among quantum information processing tasks, entanglement manipulation, namely, the interconversion between entangled states using only local transformations and classical communication, represents an important primitive. In this scenario, the abstract notion of entanglement becomes a fungible resource ``\emph{as real as energy}''~\cite{horo-rev}. This is one of the reasons for which intensive research has been devoted to the study of entanglement manipulations since the very early stages of Quantum Information Theory, making such an \emph{operational} theory of entanglement one of its biggest successes.
In this context, however, the word `operational' should not be confused with `practical'. Indeed, most results we have at present about entanglement resource theory rely on two unrealistic (and very strong) assumptions:
\noindent
$(i)$ Many independent and identically distributed (i.i.d.) copies of the initial resource (e.g. the initial entangled state) are to be converted into many i.i.d. copies of the target state. This corresponds to assuming the absence of
correlations in the noisy (partially entangled) states which are either produced or consumed by the entanglement manipulation procedure;
\noindent
$(ii)$ The optimal interconversion rate is computed as the asymptotic input/output ratio, in the limit of infinitely many initial and final copies.
\noindent
These two assumptions constitute what is usually called the asymptotic i.i.d. scenario. In order to establish a \emph{truly general} entanglement resource theory, then, one should drop both assumptions $(i)$ and $(ii)$. The highest possible degree of theoretical generality is described by the so-called \emph{one-shot} scenario, in which a single initial state has to be transformed into a single desired final state, up to a finite accuracy. Incidentally, this is indeed the scenario in which experiments are performed, since resources available in nature are typically
finite and correlated, and transformations can only be achieved approximately.
One end of such a generalized entanglement resource theory, namely, one-shot entanglement distillation, was considered by the present authors in~\cite{one-shot-distill}: there we described the case of two distant parties trying to convert, up to some fixed error $\varepsilon$, a finite number of initially shared noisy bipartite entangled states into noiseless entanglement, i.e. singlets, using local operations and classical communication (LOCC) only. In this Letter we completely characterize the other end of the theory, namely, \emph{one-shot entanglement dilution}: here the goal is to utilize a finite amount of initial noiseless entanglement to produce (again, by LOCC and up to some fixed error $\varepsilon$) a single bipartite target state $\rho_{AB}$, which might not be directly available otherwise. In this scenario, entanglement dilution is relevant as the `reverse' of entanglement distillation: it shows that singlets indeed provide a universal resource from which any bipartite state can be obtained by LOCC, quantifying, at the same time, the minimum amount of singlets needed (i.e. the \emph{cost}) to produce a given bipartite state.
Our main result~\cite{extramat} is a formula for the minimum number of singlets necessary for successfully producing a given target state $\rho_{AB}$ up to a finite error $\varepsilon$. We refer to this quantity as the \emph{one-shot entanglement cost} $E_C^{(1)}(\rho_{AB};\varepsilon)$. The formula we derive involves a generalized quantum relative entropy, namely, the relative R\'enyi entropy of order zero~\cite{ohya-petz}, and makes use of a smoothing procedure similar to that introduced in~\cite{renato}. When specialized to the asymptotic i.i.d. scenario, our formula yields the entanglement cost given in terms of the regularized entanglement of formation~\cite{bennett2,ent_cost}. This is in accordance with the claim that one-shot entanglement resource theory is more general than the asymptotic i.i.d. one. Finally, as a by-product of our findings, we are able to prove that two entanglement monotones, namely the entanglement of formation~\cite{bennett2} and the Schmidt number~\cite{terhal-pawel}, which were previously considered to be unrelated, are in fact directly connected, in the sense that the former is recovered from the latter by suitable smoothing and regularization, as explained below.
\emph{Basic concepts.}---In order to clearly state our main results, given in Theorems~\ref{thm_main} and~\ref{thm_main2} below, we first have to introduce some notations and definitions. Throughout the paper, the letter $\mathscr{H}$ denotes a finite dimensional Hilbert space, whereas $\mathfrak{S}(\mathscr{H})$ denotes the set of states (or density operators, i.e. positive operators of unit trace) acting on $\mathscr{H}$. Further, let $\openone$ denote the identity operator acting on $\mathscr{H}$. Given a positive operator $\omega\geqslant 0$, we denote by $\Pi_\omega$ the projector onto its support, and, for a pure state $|\varphi\rangle$, we denote the projector $|\varphi\rangle\langle\varphi|$ simply as $\varphi$. Moreover, given two Hilbert spaces $\mathscr{H}_A$ and $\mathscr{H}_B$, of dimensions $d_A$ and $d_B$ respectively, with two given orthonormal bases $\{|i_A\rangle\}_{i=1}^{d_A}$ and $\{|i_B\rangle\}_{i=1}^{d_B}$, we define the canonical maximally entangled state (MES) in $\mathscr{H}_A\otimes\mathscr{H}_B$ of Schmidt number $M \leqslant\min \{d_A,d_B\}$ to be $|\Psi^+_M\rangle= M^{-1/2} \sum_{i=1}^M |i_A\rangle\otimes |i_B\rangle$.
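As a concrete illustration of the canonical MES (our own sketch, not taken from the paper), one can build $|\Psi^+_M\rangle$ numerically and verify that its reduced state is maximally mixed on the first $M$ basis vectors:

```python
import numpy as np

def canonical_mes(d_A, d_B, M):
    # |Psi_M^+> = M^{-1/2} sum_{i=1}^{M} |i_A> (x) |i_B>, as a vector in C^(dA*dB).
    assert M <= min(d_A, d_B)
    psi = np.zeros(d_A * d_B)
    for i in range(M):
        psi += np.kron(np.eye(d_A)[i], np.eye(d_B)[i])
    return psi / np.sqrt(M)

psi = canonical_mes(3, 3, 2)
P = psi.reshape(3, 3)   # amplitude matrix: P[i, j] = <i_A, j_B | Psi>
rho_A = P @ P.T         # reduced state on A (amplitudes are real here)
```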
Information-theoretical protocols, since Shannon, are usually characterized in terms of suitable entropic quantities. In Quantum Information Theory too, entropic quantities like the von Neumann entropy, the conditional entropy, and the mutual information are often encountered. All these quantities can in fact be derived from the \emph{quantum relative entropy}~\cite{ohya-petz}, which is defined, for a state $\rho$ and an operator $\sigma\geqslant 0$, as \begin{equation} \nonumber S_r(\rho\|\sigma):=\left\{ \begin{split}
&\operatorname{Tr}[\rho\log\rho -\rho\log\sigma],\textrm{ if }\Pi_\rho\leqslant\Pi_\sigma,\\
&+\infty,\textrm{ otherwise}. \end{split} \right. \end{equation} (The logarithm in the above equation and in what follows is taken to base 2.) For example, the von Neumann entropy of a state $\rho$, defined as $S(\rho):=-\operatorname{Tr}[\rho\log\rho]$, can be equivalently written as $S(\rho)=-S_r(\rho\|\openone)$. Our main results are however expressed in terms of an alternative relative entropy, namely, the \emph{relative R\'enyi entropy of order zero}, which, for a state $\rho$ and an operator $\sigma\geqslant 0$, is defined as \begin{equation}\nonumber S_0(\rho\|\sigma):=\left\{ \begin{split}
&-\log\operatorname{Tr}[\Pi_\rho\ \sigma],\textrm{ if }\operatorname{Tr}[\Pi_\rho\Pi_\sigma]\neq 0,\\
&+\infty,\textrm{ otherwise}. \end{split}\right. \end{equation} From these two relative entropies, $S_r$ and $S_0$, we define the corresponding conditional entropy of a given bipartite state $\rho_{AB}$ given a state $\sigma_B$ as \begin{equation}
\label{eq:19}
H_{\star}(\rho_{AB}|\sigma_B):=-S_{\star}(\rho_{AB}\|\openone_A\otimes\sigma_B),
\end{equation}
and the conditional entropy of $\rho_{AB}$ given the subsystem $B$ as
\begin{equation}
\label{eq:22}
H_{\star}(\rho_{AB}|B):=\max_{\sigma_B\in\mathfrak{S}(\mathscr{H}_B)}H_{\star}(\rho_{AB}|\sigma_B),
\end{equation}
for $\star\in\{r,0\}$. It turns out (see e.g. Lemma~6 in~\cite{q1}) that $H_r(\rho_{AB}|B)=H_r(\rho_{AB}|\rho_B)=S(\rho_{AB})-S(\rho_B)$, where $\rho_B=\operatorname{Tr}_A[\rho_{AB}]$, for any given $\rho_{AB}$. However, in general, $H_0(\rho_{AB}|B)\neq H_0(\rho_{AB}|\rho_B)$.
It is also convenient to introduce, for any given decomposition of a bipartite state
$\rho_{AB}$ into a pure-state ensemble $\mathfrak{E}=\{p_i,|\phi^i_{AB}\rangle\}$ such that $\sum_ip_i\phi^i_{AB}=\rho_{AB}$, the tripartite classical-quantum (c-q) state
\begin{equation}
\label{eq:28}
\rho_{RAB}^\mathfrak{E}:=\sum_i p_i|i\rangle\langle i|_R\otimes\phi_{AB}^i,
\end{equation}
where $R$ denotes an auxiliary classical system represented by the fixed orthonormal basis $\{|i_R\rangle\}$. Given a pure-state ensemble $\mathfrak{E}$, let $\rho_A^i:=\operatorname{Tr}_B[\phi_{AB}^i]$, for all $i$.
As noted earlier, in the realistic scenario of finite entanglement resources and imperfect transformations, one is compelled to allow for a non-vanishing error, say $\varepsilon$, in achieving the final desired state. This error $\varepsilon$ manifests itself as a ``smoothing'' of the underlying information-theoretical quantity characterizing the task, which in our case turns out to be a conditional R\'enyi entropy of order zero. This fact leads us to define, in analogy with~\cite{renato}, a smoothing as follows: for any $\varepsilon\geqslant0$ and any pure-state ensemble $\mathfrak{E}=\{p_i,|\phi^i_{AB}\rangle\}$ of $\rho_{AB}$, we define the \emph{c-q--smoothed conditional zero-R\'enyi entropy} of the c-q state $\rho_{RA}^\mathfrak{E}:=\operatorname{Tr}_B[\rho_{RAB}^\mathfrak{E}]=\sum_i p_i|i\rangle\langle i|_R\otimes \rho_A^i$, given $R$, as \begin{equation}\label{eq:2} H_0^\varepsilon(\rho_{RA}^\mathfrak{E}|R):=\min_{\omega_{RA}\in B_{\mathrm{cq}}^{\varepsilon}(\rho_{RA}^\mathfrak{E})}H_0(\omega_{RA}|R), \end{equation} where the minimum is taken over classical-quantum operators belonging to the set $B_{\mathrm{cq}}^{\varepsilon}(\rho_{RA}^\mathfrak{E})$ defined, for any pure-state ensemble $\mathfrak{E}=\{p_i,|\phi^i_{AB}\rangle\}$ of $\rho_{AB}$, as follows: \begin{equation}
\nonumber
B_{\mathrm{cq}}^{\varepsilon}(\rho_{RA}^\mathfrak{E}):=\left\{\omega_{RA}\geqslant
0 \left|
\begin{split}
&\omega_{RA}=\sum_i|i\rangle\langle i|_R\otimes\omega^i_A\\
&\&\ \|\omega_{RA}-\rho_{RA}^\mathfrak{E}\|_1\leqslant\varepsilon
\end{split}
\right.\right\},
\end{equation}
with $\|X\|_1:=\operatorname{Tr}|X|$. The basis $\{|i_R\rangle\}$ used in the above definition is the same as that appearing in eq.~(\ref{eq:28}). Note that operators in $B_{\mathrm{cq}}^{\varepsilon}(\rho_{RA}^\mathfrak{E})$ are actually very close to being density operators, since $1-\varepsilon\leqslant\operatorname{Tr}[\omega_{RA}]\leqslant 1+\varepsilon$, for any $\omega_{RA}\in B_{\mathrm{cq}}^{\varepsilon}(\rho_{RA}^\mathfrak{E})$.
\emph{Main result.}---Two parties, Alice and Bob, share a {\em{single copy}} of a maximally entangled state $|\Psi^+_M\rangle$ of Schmidt number $M$, and wish to convert it into a given bipartite target state $\rho_{AB}$ using an LOCC map $\Lambda$. We refer to the protocol used for this conversion as {\em{one-shot entanglement dilution}}. For the sake of generality, we consider the situation where the final state of the protocol is $\varepsilon$-close to the target state with respect to a suitable distance measure, for any given $\varepsilon\geqslant 0$. As a measure of closeness, we choose here the (squared) fidelity, which is defined, for states $\rho$ and $\sigma$, as $F^2(\rho,\sigma):=\left(\operatorname{Tr}|\sqrt{\rho}\sqrt{\sigma}|\right)^2$. In this way, defining the fidelity of the protocol to be $F^2(\Lambda(\Psi_M^+),\rho_{AB})$, we require $F^2(\Lambda(\Psi_M^+),\rho_{AB})\geqslant 1-\varepsilon$. Further, for any given initial resource $|\Psi^+_M\rangle$ and any given target state $\rho_{AB}$, we denote the optimal fidelity of one-shot entanglement dilution as
\begin{equation}
\nonumber
\mathsf{F}_{\mathrm{dil}}(\rho_{AB},M):=\max_{\Lambda\in\mathrm{LOCC}}F^2(\Lambda(\Psi_M^+),\rho_{AB}).
\end{equation}
\begin{definition}[One-shot entanglement cost] For any given $\rho_{AB}$ and $\varepsilon\geqslant0$, the \emph{one-shot entanglement cost} is defined as follows:
\begin{equation}\nonumber
E_C^{(1)}(\rho_{AB};\varepsilon):=\min_{M\in\mathbb{N}}\left\{\log M: \mathsf{F}_{\mathrm{dil}}(\rho_{AB},M)\geqslant 1-\varepsilon\right\}.
\end{equation}\label{def:ent-cost}
\end{definition}
Notice that, by its very definition, the one-shot entanglement cost $E_C^{(1)}(\rho_{AB};\varepsilon)$ constitutes, for any $\varepsilon\geqslant 0$, an entanglement (weak) monotone, in that it cannot increase under the action of an LOCC map~\cite{footnote1}. As mentioned earlier, the smoothing here emerges naturally from a purely operational consideration, in the sense that it is a natural consequence of the finite accuracy we allow in the protocol. This is in contrast to the approach adopted in Ref.~\cite{mora}, where the smoothing is instead introduced axiomatically.
Our main result is given by the following theorem:
\begin{theo}\label{thm_main} For any given target state $\rho_{AB}$ and any given error parameter $\varepsilon\geqslant 0$, the one-shot entanglement cost under LOCC, corresponding to an error less than or equal to $\varepsilon$, satisfies the following bounds: \begin{equation}
\nonumber
\min_{\mathfrak{E}}H_0^{2\sqrt{\varepsilon}}(\rho_{RA}^\mathfrak{E}|R)\leqslant
E_C^{(1)}(\rho_{AB};\varepsilon)
\leqslant\min_{\mathfrak{E}}H_0^{\varepsilon/2}(\rho_{RA}^\mathfrak{E}|R),
\end{equation}
where the minimum is taken over all pure-state ensemble decompositions $\mathfrak{E}=\{p_i,|\phi^i_{AB}\rangle\}$ of $\rho_{AB}$, and $\rho_{RA}^\mathfrak{E}=\operatorname{Tr}_B[\rho_{RAB}^\mathfrak{E}]$, with $\rho_{RAB}^\mathfrak{E}$ being the tripartite extension of $\rho_{AB}$ defined in~(\ref{eq:28}).
\end{theo}
For any given $\varepsilon\geqslant 0$, Theorem~\ref{thm_main} essentially identifies $\min_{\mathfrak{E}}H_0^{\varepsilon}(\rho_{RA}^\mathfrak{E}|R)$ as the quantity representing the one-shot entanglement cost $E_C^{(1)}(\rho_{AB};\varepsilon)$~\cite{footnote2}.
The theory developed here not only provides a complete characterization of the one-shot entanglement cost, but also yields a simple proof of a fundamental asymptotic result. It is known~\cite{ent_cost} that the asymptotic entanglement cost $E_C(\rho_{AB})$ of preparing a bipartite state $\rho_{AB}$ is equal to the \emph{regularized entanglement of formation}, defined as \begin{equation}
\label{eq:35}
E_F^\infty(\rho_{AB}):=\lim_{n\to\infty}\frac{1}{n}E_F(\rho_{AB}^{\otimes n}),
\end{equation}
where $E_F(\rho_{AB}):=\min_\mathfrak{E}\sum_ip_iS(\rho_A^i)$ denotes the entanglement of formation of the state $\rho_{AB}$ \cite{bennett2}. Applying our main result, Theorem~\ref{thm_main}, to the case of multiple ($n$) copies of the bipartite state $\rho_{AB}$, and taking the asymptotic limit ($n\to\infty$) yields a new proof of the identity $E_C(\rho_{AB})=E_F^\infty(\rho_{AB})$:
\begin{theo}\label{thm_main2}
For any given target state $\rho_{AB}$, the following identity holds:
\begin{equation}\label{eq:thm_main2}
\lim_{\varepsilon\to 0^+}\lim_{n\to\infty}\frac{1}{n} E_C^{(1)}(\rho_{AB}^{\otimes n};\varepsilon)=E_F^\infty(\rho_{AB}).
\end{equation}
\end{theo}
Theorem~\ref{thm_main2}, together with the results in Ref.~\cite{ent_cost}, establishes that the asymptotic entanglement cost is alternatively expressible as the regularized one-shot entanglement cost, in the limit $\varepsilon\to 0^+$.
The theorems stated above emphasize the generality and two-fold relevance of the one-shot analysis: on the one hand, it gives a complete description of realistic scenarios of entanglement dilution; on the other hand, it provides a unified theoretical framework from which previous results can be derived as special cases.
\emph{Discussion.}---In the case of perfect (zero-error) entanglement dilution, corresponding to the case $\varepsilon=0$, Theorem~\ref{thm_main} says that the corresponding one-shot entanglement cost is given by
\begin{equation}\label{eq:zero-err-cost}
E_C^{(1)}(\rho_{AB};0)=\min_{\mathfrak{E}} H_0(\rho_{RA}^\mathfrak{E}|R).
\end{equation}
The above equation can be made more explicit as follows:
\begin{equation}\nonumber
E_C^{(1)}(\rho_{AB};0)=\min_\mathfrak{E}\max_i\log\operatorname{Tr}\left[\pi_{\rho^i_A}\right],
\end{equation}
where, for any given pure-state ensemble decomposition $\mathfrak{E}=\{p_i,|\phi^i_{AB}\rangle\}$ of $\rho_{AB}$, $\rho^i_A:=\operatorname{Tr}_B[\phi^i_{AB}]$. The quantity on the right-hand side of the equation above coincides with the logarithm of the Schmidt number (log-Schmidt number, for short) of the mixed state $\rho_{AB}$, introduced and studied in~\cite{terhal-pawel}. In~\cite{hayashi}, the same quantity was denoted as $E_{sr}(\rho_{AB})$, and was shown to characterize the zero-error entanglement cost $E_C^{(1)}(\rho_{AB};0)$. However, until now, there was a gap in the theory of entanglement dilution, in the sense that it was unclear how these zero-error results could be related to the usual notion of entanglement cost, for which the error vanishes only in the asymptotic limit.
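For instance (a worked special case, added here for illustration): if the target $\rho_{AB}=\phi_{AB}$ is pure, the minimization over ensembles is trivial, and the zero-error cost is the logarithm of the Schmidt rank:

```latex
% Zero-error one-shot cost of a pure state (illustrative special case):
\begin{equation*}
E_C^{(1)}(\phi_{AB};0)=\log\operatorname{rank}\left(\rho_A\right),
\qquad \rho_A=\operatorname{Tr}_B[\phi_{AB}].
\end{equation*}
```

Thus a pure state with three nonzero Schmidt coefficients costs $\log 3$ ebits to prepare exactly, no matter how small two of those coefficients are; this is precisely the discontinuity that smoothing removes.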
The results we presented above show that it is indeed possible to fill such a gap by suitably smoothing the zero-error quantities. In fact, let us introduce a smoothed log-Schmidt number as follows:
\begin{equation}\label{eq:esrsm}
E_{sr}^\varepsilon(\rho_{AB}):=\min_{\omega_{AB}\in C_\varepsilon(\rho_{AB})}E_{sr}(\omega_{AB}),
\end{equation}
where now the smoothing is performed with respect to the compact set of normalized states $C_\varepsilon(\rho_{AB})$ centered at $\rho_{AB}$ defined as: \begin{equation}\nonumber \begin{split}
&C_\varepsilon(\rho_{AB})\\ &:=\left\{\omega_{AB}\in\mathfrak{S}(\mathscr{H}_A\otimes\mathscr{H}_B)\left|F^2(\omega_{AB},\rho_{AB})\geqslant1-\varepsilon\right.\right\}.
\end{split}
\end{equation}
Then, using the arguments given below, one can prove that, for any $\varepsilon\geqslant0$, the identity $E_C^{(1)}(\rho_{AB};\varepsilon)=E_{sr}^\varepsilon(\rho_{AB})$ holds. First, for any $\omega_{AB}\in C_\varepsilon(\rho_{AB})$, $E_{sr}(\omega_{AB})$ singlets can be used to create, with zero error, the state $\omega_{AB}$, which is, by construction, $\varepsilon$-close to $\rho_{AB}$. This proves that $E_C^{(1)}(\rho_{AB};\varepsilon) \leqslant E_{sr}^\varepsilon(\rho_{AB})$. For the other direction, let us assume that $E_C^{(1)}(\rho_{AB};\varepsilon)<E_{sr}^\varepsilon(\rho_{AB})$. Definition~\ref{def:ent-cost} then implies that, with $E_C^{(1)}(\rho_{AB};\varepsilon)$ singlets, it is possible to create a state, say $\tilde\omega_{AB}$, which is $\varepsilon$-close to $\rho_{AB}$. This in turn implies that $\tilde\omega_{AB}\in C_\varepsilon(\rho_{AB})$, with $E_{sr}(\tilde\omega_{AB})\leqslant E_C^{(1)}(\rho_{AB};\varepsilon)<E_{sr}^\varepsilon(\rho_{AB})$, which contradicts the fact that $E_{sr}^\varepsilon(\rho_{AB})$ is defined as a minimum in~(\ref{eq:esrsm}).
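The effect of smoothing is most transparent for pure targets (an illustrative computation of our own, obtained by combining the definition of the one-shot cost with the known expression for the optimal dilution fidelity): if $\phi_{AB}$ has Schmidt coefficients $\lambda_1\geqslant\lambda_2\geqslant\dots$, then

```latex
% Smoothed one-shot cost of a pure state (illustrative):
\begin{equation*}
E_C^{(1)}(\phi_{AB};\varepsilon)
=\log\min\left\{M:\ \sum_{j=1}^{M}\lambda_j\geqslant 1-\varepsilon\right\}.
\end{equation*}
```

For example, with $\lambda=(0.5,0.3,0.2)$ and $\varepsilon=0.25$ the minimum is attained at $M=2$, since $0.5+0.3=0.8\geqslant0.75$; allowing a small error thus reduces the cost from $\log 3$ to $\log 2=1$ ebit.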
We hence obtain the following corollary of Theorem~\ref{thm_main2}:
\begin{coro} For any given state $\rho_{AB}$, the entanglement of formation $E_F(\rho_{AB})$ and the log-Schmidt number $E_{sr}(\rho_{AB})$ are related as follows: \begin{equation}\label{eq:corol}
\lim_{\varepsilon\to 0^+}\lim_{n\to\infty}\frac{1}{n} E_{sr}^\varepsilon(\rho_{AB}^{\otimes n})=E_F^\infty(\rho_{AB}).
\end{equation}
\end{coro}
\emph{Essence of proofs.}---We present here only the main steps of the proofs of the results stated above. The interested reader is referred to~\cite{extramat} for detailed derivations. The proof of Theorem~\ref{thm_main} relies on the following lemma:
\begin{lemma}[\cite{bowen-datta,hayashi}]\label{fidelity}
For any given bipartite state $\rho_{AB}$, the optimal dilution fidelity is given by \begin{equation}
\label{eq:8}
\mathsf{F}_{\mathrm{dil}}(\rho_{AB},M)=\max_\mathfrak{E}\sum_ip_i\sum_{j=1}^M\lambda_j^{(i)},
\end{equation}
where the maximum is over all pure-state decompositions $\mathfrak{E}=\{p_i,|\phi_{AB}^i\rangle\}$ of $\rho_{AB}$, and $\{\lambda_j^{(i)}\}_j$ are the eigenvalues of $\rho^i_A=\operatorname{Tr}_B[\phi_{AB}^i]$, arranged in non-increasing order.
\end{lemma}
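As a concrete illustration of the lemma above (a worked example of our own; the numbers are illustrative): for a pure target state the only pure-state decomposition is the state itself, so the optimal fidelity is simply the sum of the $M$ largest eigenvalues of the reduced state.

```latex
% Illustrative pure target state (not from the original analysis):
% |phi_AB> = sqrt(0.5)|00> + sqrt(0.3)|11> + sqrt(0.2)|22>,
% so the eigenvalues of rho_A are (0.5, 0.3, 0.2).
\begin{align*}
\mathsf{F}_{\mathrm{dil}}(\phi_{AB},1) &= 0.5, \\
\mathsf{F}_{\mathrm{dil}}(\phi_{AB},2) &= 0.5 + 0.3 = 0.8, \\
\mathsf{F}_{\mathrm{dil}}(\phi_{AB},3) &= 0.5 + 0.3 + 0.2 = 1.
\end{align*}
```

In particular, a single shared ebit ($M=2$) already achieves fidelity $0.8$ for this target, while exact preparation requires $M=3$.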
Using this lemma and Definition~\ref{def:ent-cost}, we can prove that $E_C^{(1)}(\rho_{AB};\varepsilon)=\min_{\mathfrak{E}}E^\varepsilon(\mathfrak{E})$, where
\begin{equation}\nonumber
\begin{split}
&E^\varepsilon(\mathfrak{E})\\ &:=\min_{\{\mathfrak{p}i^i_A\}}\leqslantslantft\{\max_i\log\operatorname{Tr}\leqslantslantft[\mathfrak{p}i^i_A\right]\leqslantslantft|\sum_ip_i\operatorname{Tr}\leqslantslantft[\mathfrak{p}i^i_A\rho^i_A\right]\geqslantslant1-\varepsilon\right.\right\},
\end{split}
\end{equation}
where $\{\pi^i_A\}$ is an unconstrained set of projectors, that is, neither necessarily orthogonal nor complete. The proof of Theorem~\ref{thm_main} then reduces to proving that $H_0^{2\sqrt{\varepsilon}}(\rho_{RA}^\mathfrak{E}|R) \leqslant E^\varepsilon(\mathfrak{E})\leqslant H_0^{\varepsilon/2}(\rho_{RA}^\mathfrak{E}|R)$, for any ensemble $\mathfrak{E}$ and any $\varepsilon\geqslant 0$. This is done using standard tools such as convexity arguments and the ``gentle measurement'' lemma~\cite{gentle}.
As regards the asymptotic result of Theorem~\ref{thm_main2}, the starting point is to note that the entanglement of formation itself can be expressed as a conditional entropy $E_F(\rho_{AB})=\min_{\mathfrak{E}}H_r(\rho_{RA}^\mathfrak{E}|R)$, in close analogy with the expression~(\ref{eq:zero-err-cost}) of the zero-error one-shot entanglement cost. Theorem~\ref{thm_main2} then reduces to the identity
\begin{equation}\label{eq:asymp2}
\begin{split}
\lim_{\varepsilon\to 0^+}&\lim_{n\to\infty}\frac{1}{n} \min_{\mathfrak{E}_n}H_0^{\varepsilon}(\rho_{R_nA_n}^{\mathfrak{E}_n}|R_n)\\
=&\lim_{n\to\infty}\frac{1}{n} \min_{\mathfrak{E}_n}H_r(\rho_{R_nA_n}^{\mathfrak{E}_n}|R_n) \equiv E_F^\infty(\rho_{AB}),
\end{split}
\end{equation}
where $\mathfrak{E}_n$ denotes a pure-state ensemble decomposition $\{p_i^n,|\phi^i_{A_nB_n}\rangle\}$ of $\rho_{AB}^{\otimes n}$, such that $\rho_{AB}^{\otimes n}=\sum_ip_i^n \phi^i_{A_nB_n}$, and $\rho_{R_nA_n}^{\mathfrak{E}_n}=\operatorname{Tr}_{\mathscr{H}_B^{\otimes n}}[\rho^{\mathfrak{E}_n}_{R_nA_nB_n}]$, with $\rho_{R_nA_nB_n}^{\mathfrak{E}_n}$ denoting the c-q extension of $\rho_{AB}^{\otimes n}$ as in equation~(\ref{eq:28}). The identity~(\ref{eq:asymp2}) is proved by employing the information spectrum method~\cite{info-spect}, results of~\cite{nila}, and a generalized version of Stein's lemma established in~\cite{brandao-plenio}.
\emph{Acknowledgments}---FB acknowledges support from Japan's MEXT-SCF Program for Improvement of Research Environment for Young Researchers. ND acknowledges support from the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement number 213681. This work was completed when FB was visiting the Statistical Laboratory of the University of Cambridge.
\begin{thebibliography}{99}
\bibitem{horo-rev} R.~Horodecki, P.~Horodecki, M.~Horodecki, and K.~Horodecki, Rev.~Mod.~Phys. {\bf 81}, 865 (2009).
\bibitem{one-shot-distill} F.~Buscemi and N.~Datta, J.~Math.~Phys. {\bf 51}, 102201 (2010).
\bibitem{extramat} For the technical details, we refer to the supplementary materials provided together with the submission.
\bibitem{ohya-petz} See, e.g., M.~Ohya and D.~Petz, \emph{Quantum Entropy and Its Use} (Springer, 1993), and references therein.
\bibitem{renato} R.~Renner, \emph{Security of Quantum Key
Distribution} (PhD thesis, ETH Zurich, 2005).
\bibitem{bennett2} C.~H.~Bennett, D.~P.~DiVincenzo, J.~A.~Smolin, and W.~K.~Wootters,
Phys.~Rev.~A {\bf 54}, 3824 (1996).
\bibitem{ent_cost} P.~M.~Hayden, M.~Horodecki, and B.~M.~Terhal, J.~Phys.~A: Math. Gen. {\bf 34}, 6891 (2001).
\bibitem{terhal-pawel} B.~M.~Terhal and P.~Horodecki, Phys.~Rev.~A {\bf 61}, 040301 (2000).
\bibitem{hayashi} M.~Hayashi, \emph{Quantum Information: an Introduction}
(Springer-Verlag, Berlin, Heidelberg, 2006).
\bibitem{footnote1} If $\Lambda$ is an LOCC transformation, $E_C^{(1)}(\rho_{AB};\varepsilon)\geqslant E_C^{(1)}(\Lambda(\rho_{AB});\varepsilon)$. To see this, note that, by Definition~\ref{def:ent-cost}, there exists an LOCC map $T$ that transforms $E_C^{(1)}(\rho_{AB};\varepsilon)$ singlets into a state $\tilde\rho_{AB}$ which is $\varepsilon$-close to $\rho_{AB}$. Then, by the monotonicity of the fidelity, $\Lambda(\tilde\rho_{AB})$ is also $\varepsilon$-close to $\Lambda(\rho_{AB})$, meaning that the composed LOCC map $\Lambda\circ T$ transforms $E_C^{(1)}(\rho_{AB};\varepsilon)$ singlets into a state which is $\varepsilon$-close to $\Lambda(\rho_{AB})$. This implies that $E_C^{(1)}(\Lambda(\rho_{AB});\varepsilon)\leqslant E_C^{(1)}(\rho_{AB};\varepsilon)$.
\bibitem{mora} C.-E.~Mora, M.~Piani, and H.-J.~Briegel, New~J.~Phys. {\bf 10}, 083027 (2008).
\bibitem{footnote2} The presence of different smoothing parameters in the bounds of Theorem~\ref{thm_main} arises from having defined the dilution error in terms of the fidelity and the c-q smoothing in terms of the trace norm. These definitions have nevertheless been adopted for the sake of overall simplicity.
\bibitem{bowen-datta} G.~Bowen and N.~Datta, arXiv:0704.1957v1 [quant-ph].
\bibitem{q1} F.~Buscemi and N.~Datta, IEEE
Trans. Inform. Theory {\bf 56}, No.3, 1447 (2010).
\bibitem{gentle} A.~Winter, IEEE Trans. Inf. Theory, {\bf{45}}, 2481 (1999); T.~Ogawa and H.~Nagaoka, Proc. of Int. Symp. Inf. Th., ISIT 2002, p.73.
\bibitem{info-spect} S.~Verdu and T.~S.~Han, IEEE Trans. Inf. Theory {\bf 40}, 1147
(1994); T.~Ogawa and H.~Nagaoka, IEEE
Trans. Inform. Theory {\bf 46}, 2428 (2000); M.~Hayashi and H.~Nagaoka, IEEE Trans.
Inf. Th. {\bf 49}, No.7, 1753 (2003); G.~Bowen and N.~Datta, Proc. of Int. Symp. Inf. Th., ISIT 2006, p.45; H.~Nagaoka and M.~Hayashi, IEEE Trans. Inf. Theory {\bf 53}, 534 (2007).
\bibitem{nila} N.~Datta, IEEE Trans.~Inf.~Th. {\bf 55}, No.6, 2816 (2009).
\bibitem{brandao-plenio} F.~G.~S.~L.~Brandao and M.~B.~Plenio,
Comm. Math. Phys. {\bf{295}}, 791 (2010).
\end{thebibliography}
\section{Detailed proof of Theorem~\ref{thm_main}}
For a given bipartite state $\rho_{AB}$, let $\mathfrak{E}=\{p_i,|\phi^i_{AB}\rangle\}$ be an ensemble of pure states such that $\sum_ip_i\phi^i_{AB}=\rho_{AB}$. We introduce the following quantity:
\begin{equation}
\label{eq:1}
\begin{split}
&E^\varepsilon(\mathfrak{E})\\
&:=\min_{\{\pi^i_A\}}\left\{\max_i\log\operatorname{Tr}\left[\pi^i_A\right]\left|\sum_ip_i\operatorname{Tr}\left[\pi^i_A\rho^i_A\right]\geqslant1-\varepsilon\right.\right\},
\end{split}
\end{equation}
where $\rho^i_A:=\operatorname{Tr}_B[\phi^i_{AB}]$, and $\{\pi^i_A\}$ is an
unconstrained set of projectors, that is, neither necessarily orthogonal nor complete.
We first prove the following lemma relating $E^\varepsilon(\mathfrak{E})$ to the c-q--smoothed conditional zero-R\'enyi entropy appearing in the statement of Theorem~\ref{thm_main}:
\begin{lemma}\label{lemma:useful}
For any $\varepsilon\geqslant 0$, and any choice of the pure-state ensemble $\mathfrak{E}$ for
$\rho_{AB}$, the following holds:
\begin{equation}
\label{eq:4}
H_0^{2\sqrt{\varepsilon}}(\rho_{RA}^\mathfrak{E}|R)\leqslant E^\varepsilon(\mathfrak{E})\leqslant H_0^{\varepsilon/2}(\rho_{RA}^\mathfrak{E}|R),
\end{equation}
where $\rho_{RA}^\mathfrak{E}$ is the reduced state obtained from the tripartite extension $\rho_{RAB}^\mathfrak{E}$ defined in~(\ref{eq:28}). $\square$
\end{lemma}
\noindent{\bf Proof}. We first prove the bound
\begin{equation}
\label{eq:17}
H_0^{\varepsilon}(\rho_{RA}^\mathfrak{E}|R)\geqslant E^{2\varepsilon}(\mathfrak{E}).
\end{equation}
Let $\overline\omega_{RA}=\sum_i|i\rangle\langle i|_R\otimes\overline\omega_A^i$ in
$B_{\mathrm{cq}}^\eps(\rho_{RA}^\mathfrak{E})$ be the operator achieving the minimum
in~(\ref{eq:2}). The projection onto its support is given by
$\pi_{{\overline\omega}_{RA}}=\sum_i|i\rangle\langle i|_R\otimes\pi_{\overline\omega_A^i}$. Hence, $\overline\omega_{RA}\in B_{\mathrm{cq}}^\eps(\rho_{RA}^\mathfrak{E})$ yields a set of projectors
$\{\pi_{\overline\omega^i_A}\}$ for which
\begin{equation}
\nonumber
\begin{split}
&\sum_ip_i\operatorname{Tr}[\pi_{\overline\omega^i_A}\rho_A^i]=\operatorname{Tr}[\pi_{\overline\omega_{RA}}\rho_{RA}^\mathfrak{E}]\\
=&\operatorname{Tr}[\pi_{\overline\omega_{RA}}\overline\omega_{RA}]+\operatorname{Tr}[\pi_{\overline\omega_{RA}}(\rho_{RA}^\mathfrak{E}-\overline\omega_{RA})]\\
\geqslant&1-\varepsilon-\varepsilon=1-2\varepsilon.
\end{split}
\end{equation}
In the last line we have made use of the fact that $\overline\omega_{RA}\in B_{\mathrm{cq}}^\eps(\rho^\mathfrak{E}_{RA})$, due to which $\operatorname{Tr}[\overline\omega_{RA}]\geqslant 1-\varepsilon$ and, since $\N{\rho_{RA}^\mathfrak{E}-\overline\omega_{RA}}_1\leqslant\varepsilon$, also $\operatorname{Tr}[\pi_{\overline\omega_{RA}}(\rho_{RA}^\mathfrak{E}-\overline\omega_{RA})]\geqslant-\varepsilon$. This implies that the set of projectors $\{\pi_{\overline\omega^i_A}\}$ satisfies the condition required in definition~(\ref{eq:1}) of $E^{2\varepsilon}(\mathfrak{E})$, hence proving~(\ref{eq:17}).
We now prove the lower bound
\begin{equation}
\label{eq:21}
E^{\varepsilon}(\mathfrak{E})\geqslant H_0^{2\sqrt{\varepsilon}}(\rho_{RA}^\mathfrak{E}|R).
\end{equation}
Let $\left\{\overline\pi^i_A\right\}$ be the set of projectors achieving the minimum in eq.~(\ref{eq:1}). Therefore, $\sum_ip_i\operatorname{Tr}\left[\overline\pi^i_A\rho^i_A\right]\geqslant 1-\varepsilon$. For later convenience, let us set $\varepsilon_i:=1-\operatorname{Tr}\left[\overline\pi^i_A\rho^i_A\right]$, so that $\sum_ip_i\varepsilon_i\leqslant\varepsilon$. Let us define $\overline\omega_A^i:=\overline\pi^i_A\rho_A^i\overline\pi^i_A$ and $\overline\omega_{RA}:=\sum_ip_i|i\rangle\langle i|_R\otimes \overline\omega_A^i$.
The so-called Gentle Measurement Lemma~\cite{gentle} guarantees that $\N{\overline\omega_A^i-\rho_A^i}_1\leqslant2\sqrt{\varepsilon_i}$, for all $i$. Also, by the concavity of $x\mapsto\sqrt{x}$, we have:
\begin{equation}
\label{eq:23}
\begin{split}
\N{\overline\omega_{RA}-\rho_{RA}^\mathfrak{E}}_1&=\sum_ip_i\N{\overline\omega_A^i-\rho_A^i}_1\\
&\leqslant\sum_ip_i2\sqrt{\varepsilon_i}\\
&\leqslant2\sqrt{\sum_ip_i\varepsilon_i}\leqslant2\sqrt{\varepsilon}.
\end{split}
\end{equation}
The above inequalities prove that $\overline\omega_{RA}\in
B_{\mathrm{cq}}^{2\sqrt{\varepsilon}}(\rho_{RA}^\mathfrak{E})$. Moreover, since, by
construction, $\pi_{\overline\omega^i_A}\leqslant\overline\pi^i_A$ for all $i$,
we obtain eq.~(\ref{eq:21}). $\blacksquare$
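The averaging step in the proof above can be checked numerically (illustrative numbers of our own): take two ensemble members with $p_1=p_2=1/2$ and local errors $\varepsilon_1=0.02$, $\varepsilon_2=0.08$, so that $\varepsilon=\sum_ip_i\varepsilon_i=0.05$. Then

```latex
% Numerical check of the concavity bound (illustrative):
\begin{equation*}
\sum_i p_i\, 2\sqrt{\varepsilon_i}
=\sqrt{0.02}+\sqrt{0.08}\approx 0.424
\ \leqslant\ 2\sqrt{0.05}\approx 0.447,
\end{equation*}
```

in agreement with the concavity of $x\mapsto\sqrt{x}$.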
With Lemma~\ref{lemma:useful} in hand, the proof of Theorem~\ref{thm_main} reduces to proving the following identity:
\begin{equation}
\label{eq:30}
E_C^{(1)}(\rho_{AB};\varepsilon)=\min_{\mathfrak{E}}E^\varepsilon(\mathfrak{E}).
\end{equation}
We split the proof of this identity into Lemma~\ref{lemma:3} and
Lemma~\ref{lemma:4} below.
\begin{lemma}[Direct part]\label{lemma:3} For any $\varepsilon\geqslant 0$,
\begin{equation}
\nonumber
E_C^{(1)}(\rho_{AB};\varepsilon)\leqslant\min_{\mathfrak{E}}E^\varepsilon(\mathfrak{E}).
\end{equation}
\end{lemma}
\noindent{\bf Proof}. From Lemma~\ref{fidelity},
\begin{equation}
\label{eq:9}
\mathsf{F}_{\mathrm{dil}}(\rho_{AB},M)=\max_\mathfrak{E}\sum_ip_i\operatorname{Tr}[Q_M^i\ \rho^i_A],
\end{equation}
where, for each $i$, $Q_M^i$ is the projector onto the eigenvectors associated with the $M$ largest eigenvalues of $\rho^i_A$. Let us now fix an ensemble
decomposition $\overline\mathfrak{E}:=\left\{\overline p_i,|\overline \phi^i_{AB}\rangle\right\}$ for $\rho_{AB}$, and choose the integer $\overline M$ such that $\log\overline M=E^\varepsilon\left(\overline\mathfrak{E}\right)$. Then, from definition~(\ref{eq:1}), we know that there
exists a set of projectors $\{\pi^i_A\}$, with $\operatorname{rank}
\pi^i_A\leqslant\overline M$ for all $i$, such that $\sum_i\overline p_i\operatorname{Tr}[\pi^i_A\overline
\rho^i_A]\geqslant1-\varepsilon$. This implies that $\log\overline M$ is an
$\varepsilon$-achievable rate, since
\begin{equation}
\label{eq:10}
\begin{split}
\mathsf{F}_{\mathrm{dil}}(\rho_{AB},\overline M)&\geqslant\sum_i\bar p_i\operatorname{Tr}\left[\overline Q_{\overline M}^i\overline\rho_A^i\right]\\
&\geqslant
\sum_i\bar p_i\operatorname{Tr}\left[\pi^i_A\overline
\rho^i_A\right]\geqslant1-\varepsilon,
\end{split}
\end{equation}
where $\overline Q_{\bar M}^i$ is, for each $i$, the projector onto the $\overline M$ largest
eigenvalues of $\overline\rho_A^i$. The second inequality in~(\ref{eq:10})
is due to the fact that, for any projector $\pi^i_A$ with
$\operatorname{rank}\pi^i_A\leqslant\overline M$, $\operatorname{Tr}[\pi^i_A\overline\rho_A^i]\leqslant\operatorname{Tr}\left[\overline Q_{\overline M}^i\overline\rho_A^i\right]$. Hence $E^\varepsilon\left(\overline\mathfrak{E}\right)$ is itself an $\varepsilon$-achievable rate for any choice of $\overline\mathfrak{E}$, and the statement of the lemma follows. $\blacksquare$
\begin{lemma}[Weak converse]\label{lemma:4} For any $\varepsilon\geqslant 0$,
\begin{equation}
\label{eq:6}
E_C^{(1)}(\rho_{AB};\varepsilon)\geqslant\min_{\mathfrak{E}}E^\varepsilon(\mathfrak{E}).
\end{equation}
\end{lemma}
\noindent{\bf Proof}. Let $\log M$ be an $\varepsilon$-achievable rate. This
is equivalent to saying that $\mathsf{F}_{\mathrm{dil}}(\rho_{AB},M)\geqslant1-\varepsilon$. In the
following, we prove that this implies that
\begin{equation}
\label{eq:11}
\log M\geqslant\min_\mathfrak{E}
E^\varepsilon(\mathfrak{E}).
\end{equation}
Let $\overline\mathfrak{E}:=\{p_i,|\phi^i_{AB}\rangle\}$ be the ensemble decomposition of
$\rho_{AB}$ achieving $\mathsf{F}_{\mathrm{dil}}(\rho_{AB},M)$ in~(\ref{eq:8}), and consider
the Schmidt decomposition of its elements $|\phi^i_{AB}\rangle
=\sum_j\sqrt{\lambda_j^{(i)}} |j_A^{(i)}\rangle|j_B^{(i)}\rangle$, where the Schmidt
coefficients $\{\lambda^{(i)}_j\}_j$ are arranged in non-increasing order for all $i$. The optimal dilution fidelity given by eq.~(\ref{eq:8}) can then be expressed as
\begin{equation}
\label{eq:12}
\mathsf{F}_{\mathrm{dil}}(\rho_{AB},M)=\sum_ip_i\operatorname{Tr}[\overline\omega^i_A]\geqslant1-\varepsilon,
\end{equation}
where
\begin{equation}
\label{eq:13}
\overline\omega^i_A:=\sum_{j=1}^M\lambda_j^{(i)}|j^{(i)}\rangle\langle j^{(i)}|_A.
\end{equation}
We now proceed by observing that
\begin{equation}
\label{eq:14}
\begin{split}
&E^\varepsilon\left(\overline\mathfrak{E}\right)\\
&\leqslant\min_{\{\omega^i_A\}}\left\{\max_i\log\operatorname{Tr}\left[\pi_{\omega^i_A}\right]\left| \begin{split}
&\sum_ip_i\operatorname{Tr}[\omega^i_A]\geqslant1-\varepsilon\\
&\&\ \omega^i_A\leqslant\rho^i_A,\ \forall i\\
\end{split} \right.\right\}.
\end{split}
\end{equation}
This is due to the fact that, for any set of operators $\{\omega^i_A\}$ satisfying both conditions on the right-hand side, the corresponding set of projectors $\{\pi_{\omega^i_A}\}$ satisfies the conditions required in the definition~(\ref{eq:1}) of $E^\varepsilon\left(\overline\mathfrak{E}\right)$. This is because $1-\varepsilon\leqslant \sum_ip_i\operatorname{Tr}\left[\pi_{\omega^i_A}\omega^i_A\right]\leqslant \sum_ip_i\operatorname{Tr}\left[\pi_{\omega^i_A}\rho^i_A\right]$. In particular, the set of subnormalized density operators $\{\overline\omega^i_A\}$ defined by~(\ref{eq:13}) also satisfies both conditions on the right-hand side of eq.~(\ref{eq:14}), since $\overline\omega^i_A\leqslant\rho^i_A$ for all $i$ (by definition), and $\sum_ip_i\operatorname{Tr}[\overline\omega^i_A] \geqslant 1-\varepsilon$ (by eq.~(\ref{eq:12})). We then have:
\begin{equation}
\label{eq:15}
\begin{split}
\min_\mathfrak{E} E^\varepsilon(\mathfrak{E}) \leqslant E^\varepsilon(\bar\mathfrak{E})&\leqslant
\max_i\log\operatorname{Tr}\left[\pi_{\bar\omega^i_A}\right]\\
&\leqslant\log M,
\end{split}
\end{equation}
for any $\varepsilon$-achievable rate $\log M$. $\blacksquare$
\section{Detailed proof of Theorem~\ref{thm_main2}}
\label{asymp}
By defining the asymptotic entanglement cost as
\begin{equation}\label{eq:asym-cost}
E_C(\rho_{AB}):=\lim_{\varepsilon\to 0^+}\lim_{n\to\infty}\frac{1}{n} E_C^{(1)}(\rho_{AB}^{\otimes n};\varepsilon),
\end{equation}
we prove that
\begin{equation}
E_C(\rho_{AB})=E_F^\infty(\rho_{AB}).
\end{equation}
Hence, we also prove indirectly that definition~(\ref{eq:asym-cost}) is equivalent to the alternative definitions of asymptotic entanglement cost proposed in Ref.~\cite{ent_cost}. We split the proof of Theorem~\ref{thm_main2} into Lemma~\ref{lem:asym-conv} and Lemma~\ref{lem:asym-dir} below. In the following, $\mathfrak{E}_n$ denotes a pure-state ensemble decomposition $\{p_i^n,|\phi^i_{A_nB_n}\rangle\}$ of $\rho_{AB}^{\otimes n}$, such that $\rho_{AB}^{\otimes n}=\sum_ip_i^n \phi^i_{A_nB_n}$. Notice that, even though the state $\rho_{AB}^{\otimes n}$ is in product form, the pure states $\{|\phi^i_{A_nB_n}\rangle\}$ in some of its decompositions may well be entangled. For the reader's convenience, we recall that the entanglement of formation $E_F(\rho_{AB}):=\min_{\mathfrak{E}}\sum_ip_iS(\rho^i_A)$ can itself be written as a conditional entropy: $E_F(\rho_{AB})=\min_{\mathfrak{E}}H_r(\rho_{RA}^\mathfrak{E}|R)= \min_{\mathfrak{E}}H_r(\rho_{RA}^\mathfrak{E}|\rho_{R}^\mathfrak{E})$, where $\rho_{RA}^\mathfrak{E}= \operatorname{Tr}_B \rho_{RAB}^\mathfrak{E}$, with $\rho_{RAB}^\mathfrak{E}$ being the tripartite extension of the state $\rho_{AB}$, defined by~(\ref{eq:28}). Hence, $E_F^\infty(\rho_{AB})=\lim_{n\to\infty}\frac{1}{n} \min_{\mathfrak{E}_n}H_r(\rho^{\mathfrak{E}_n}_{R_nA_n}|\rho_{R_n}^{\mathfrak{E}_n})$, where $\rho^{\mathfrak{E}_n}_{R_nA_n}$ is as in eq.~(\ref{eq:28}).
\begin{lemma}\label{lem:asym-conv}
The following holds:
\begin{equation}\label{eq:final}
E_C(\rho_{AB})\geqslant E_F^\infty(\rho_{AB}).\ \square
\end{equation}
\end{lemma}
\noindent{\bf Proof}. We start with the lower bound in Theorem~\ref{thm_main}. For any $\varepsilon\geqslant0$ and any $n\in\mathbb{N}$, this gives
\begin{equation}
\begin{split}
&\frac{1}{n} E_C^{(1)}\left(\rho_{AB}^{\otimes n};\frac{\varepsilon^2}{4}\right)\\
\geqslant&\frac{1}{n}\min_{\mathfrak{E}_n}H_0^\varepsilon\left(\rho^{\mathfrak{E}_n}_{R_nA_n}|R_n\right)\\
=&\frac{1}{n}\min_{\mathfrak{E}_n}\min_{\omega_{R_nA_n}^n\in B_{\mathrm{cq}}^\eps\left(\rho_{R_nA_n}^{\mathfrak{E}_n}\right)}H_0(\omega^n_{R_nA_n}|R_n)\\
=&\frac{1}{n}H_0(\overline\omega^n_{R_nA_n}|R_n)\\
\geqslant& \frac{1}{n}H_r(\overline\omega^n_{R_nA_n}|R_n)\\
=&\frac{1}{n}H_r(\overline\omega^n_{R_nA_n}|\overline\omega_{R_n}^n)\\
\geqslant&\frac{1}{n}\min_{\mathfrak{E}_n}H_r\left(\rho^{\mathfrak{E}_n}_{R_nA_n}\left|\rho_{R_n}^{\mathfrak{E}_n}\right.\right)-O(\varepsilon)-O(1/n),
\end{split}
\end{equation}
\noindent where: in the fourth line, $\overline\omega^n_{R_nA_n}$ is the minimizing operator for the minimizing pure-state ensemble $\overline\mathfrak{E}_n$ of $\rho_{AB}^{\otimes n}$; in the fifth line we use the fact that $H_0(\overline\omega^n_{R_nA_n}|R_n)\geqslant H_r(\overline\omega^n_{R_nA_n}|R_n)$, which follows from the well-known fact that $S_0(\rho\|\sigma)\leqslant S_r(\rho\|\sigma)$~\cite{ohya-petz}; the sixth line follows from Lemma~6 in~\cite{q1}. The last approximation comes from applying Fannes' inequality to $H_r(\overline\omega^n_{R_nA_n}|\overline\omega_{R_n}^n) =S(\overline\omega^n_{R_nA_n})-S(\overline\omega_{R_n}^n)$. Then, by considering the limit $n\to\infty$ followed by $\varepsilon\to 0^+$, we arrive at eq.~(\ref{eq:final}). $\blacksquare$
\begin{lemma}\label{lem:asym-dir}
The following holds:
\begin{equation}\label{eq:to_prove}
E_C(\rho_{AB})\leqslant E_F^\infty(\rho_{AB}).\ \square
\end{equation}
\end{lemma}
\noindent{\bf Proof}. From the upper bound in Theorem~\ref{thm_main} we obtain
\begin{equation}\label{qwerty}
\frac{1}{n} E_C^{(1)}(\rho_{AB}^{\otimes n};2\varepsilon)\leqslant\frac{1}{n}\min_{\mathfrak{E}_n}H_0^{\varepsilon}(\rho^{\mathfrak{E}_n}_{R_nA_n}|R_n),
\end{equation}
where $\rho_{R_nA_n}^{\mathfrak{E}_n}=\operatorname{Tr}_{\mathscr{H}_B^{\otimes n}}[\rho^{\mathfrak{E}_n}_{R_nA_nB_n}]$, with $\rho_{R_nA_nB_n}^{\mathfrak{E}_n}\in\mathfrak{S}\left((\mathscr{H}_R\otimes\mathscr{H}_A\otimes \mathscr{H}_B)^{\otimes n}\right)$ denoting the c-q extension of the state $\rho_{AB}^{\otimes n}$ corresponding to its pure-state ensemble decomposition $\mathfrak{E}_n$. By taking the appropriate limits ($n\to\infty$ followed by $\varepsilon\to 0^+$) on either side of~(\ref{qwerty}) and employing Lemma~\ref{lemma:six} and Lemma~\ref{lemma:seven} below, one arrives at:
\begin{equation}\nonumber
\begin{split} E_C(\rho_{AB})&\leqslant\min_{\mathfrak{E}}\left\{-S_r(\rho_{RA}^\mathfrak{E}\|\rho_R^\mathfrak{E}\otimes\openone_A)\right\}\\
&\equiv \min_\mathfrak{E} H_r(\rho_{RA}^\mathfrak{E}|\rho_R^\mathfrak{E})= E_F(\rho_{AB}).
\end{split}
\end{equation}
Inequality~(\ref{eq:to_prove}) is finally obtained by employing standard blocking arguments, see for example Ref.~\cite{ent_cost}. $\blacksquare$
Before stating and proving Lemma~\ref{lemma:six} and Lemma~\ref{lemma:seven}, we need to recall some definitions and notations extensively used in the Quantum Information Spectrum Method~\cite{info-spect}. A fundamental quantity used in this approach is the \emph{quantum spectral inf-divergence rate}, defined as follows:
\begin{definition}[Spectral inf-divergence rate]
Given a sequence of states $\hat\rho=\{\rho_n\}_{n=1}^\infty$, $\rho_n\in\mathfrak{S}(\mathscr{H}^{\otimes n})$, and a
sequence of positive operators
$\hat\sigma=\{\sigma_n\}_{n=1}^\infty$, where $\sigma_n$ acts on $\mathscr{H}^{\otimes n}$, the \emph{quantum spectral
inf-divergence rate} is defined in terms of the difference
operators $\Delta_n(\gamma) = \rho_n - 2^{n\gamma}\sigma_n$ as
\begin{equation}
\underline{D}(\hat\rho \| \hat\sigma) := \sup \left\{ \gamma :
\liminf_{n\rightarrow \infty} \mathrm{Tr}\left[ \{ \Delta_n(\gamma)\geqslant 0\} \Delta_n(\gamma)\right] = 1 \right\}, \label{udiv}
\end{equation}
where the notation $\{X\geqslant 0\}$, for a self-adjoint operator $X$, is used to indicate the projector onto the subspace where $X\geqslant 0$.
\end{definition}
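For orientation (a standard fact, stated here without proof): when both sequences are i.i.d., $\rho_n=\rho^{\otimes n}$ and $\sigma_n=\sigma^{\otimes n}$, quantum Stein's lemma implies that the spectral inf-divergence rate collapses to the relative entropy,

```latex
% i.i.d. collapse of the spectral inf-divergence rate (standard fact):
\begin{equation*}
\underline{D}(\hat\rho\,\|\,\hat\sigma)=S_r(\rho\|\sigma),
\end{equation*}
```

which is the mechanism by which the one-shot quantities above reduce to entropic expressions in the asymptotic limit.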
\noindent We first note that, by definitions~(\ref{eq:19}) and~(\ref{eq:22}), we have:
\begin{equation}\label{zxcvbn}
\begin{split}
&\min_{\mathfrak{E}_n}H_0^{\varepsilon}(\rho^{\mathfrak{E}_n}_{R_nA_n}|R_n)\\
=&-\max_{\mathfrak{E}_n} \max_{\omega^n_{R_nA_n}\in B_{\mathrm{cq}}^\eps(\rho_{R_nA_n}^{\mathfrak{E}_n})}\min_{\sigma^n_{R_n}}S_0(\omega^n_{R_nA_n}\|\sigma^n_{R_n}\otimes\openone_A^{\otimes n}).
\end{split}
\end{equation}
We then prove the following lemma:
\begin{lemma}\label{lemma:six}
For any bipartite state $\rho_{AB}$, with a pure-state ensemble decomposition
$\mathfrak{E}$, let $\rho_{RAB}^\mathfrak{E}$ denote its c-q extension. Then, using the notation of eq.~(\ref{eq:28}), we have
\begin{align}
&\lim_{\varepsilon\to 0}\lim_{n\to\infty}\left\{- \min_{\mathfrak{E}_n}H_0^{\varepsilon}(\rho^{\mathfrak{E}_n}_{R_nA_n}|R_n) \right\}\nonumber\\
\geqslant&\max_{\mathfrak{E}}\min_{\hat\sigma_R}\underline{D}(\hat\rho_{RA}^{\mathfrak{E}}\|\hat\sigma_R\otimes\hat\openone_A),\label{eq:rhs}
\end{align}
where $\hat\rho_{RA}^{\mathfrak{E}}:=\left\{(\rho_{RA}^{\mathfrak{E}})^{\otimes n}\right\}_{n\geqslant 1}$, $\hat\openone_A:=\{\openone_A^{\otimes n}\}_{n\geqslant 1}$, and $\hat\sigma_R:=\{\sigma_R^n\in\mathfrak{S}(\mathscr{H}_R^{\otimes n})\}_{n\geqslant 1}$.
\end{lemma}
\noindent{\bf Proof}. Let $\bar{\mathfrak{E}}$ be the pure-state ensemble decomposition of $\rho_{AB}$ for which the maximum on the r.h.s. of eq.~(\ref{eq:rhs}) is achieved, and let $\rho_{RA}^{\bar{\mathfrak{E}}}$ be its reduced state. Since $\bar{\mathfrak{E}}$ is fixed, in the following we drop the superscript whenever no confusion arises,
denoting $\rho_{RA}^{\bar{\mathfrak{E}}}$ simply as $\rho_{RA}$.
Note that, for any fixed $\varepsilon> 0$,
\begin{eqnarray}
&& - \min_{\mathfrak{E}_n}H_0^{\varepsilon}(\rho^{\mathfrak{E}_n}_{R_nA_n}|R_n) \nonumber\\
&=&\max_{\mathfrak{E}_n}\max_{\omega_{R_nA_n}^n\in B_{\mathrm{cq}}^{\varepsilon}(\rho_{R_nA_n}^{\mathfrak{E}_n})}\min_{\sigma_{R_n}^n}S_0(\omega_{R_nA_n}^{n}\|\sigma_{R_n}^n\otimes\openone_A^{\otimes n})\nonumber\\
&\geqslant&\max_{\mathfrak{E}}\max_{\omega_{R_nA_n}^n\in B_{\mathrm{cq}}^{\varepsilon}((\rho_{RA}^{\mathfrak{E}})^{\otimes n})}\min_{\sigma_{R_n}^n}S_0(\omega_{R_nA_n}^n\|\sigma_{R_n}^n\otimes\openone_A^{\otimes n})\nonumber\\
&\geqslant&\max_{\omega_{R_nA_n}^n\in B_{\mathrm{cq}}^{\varepsilon}(\rho_{RA}^{\otimes n})}\min_{\sigma_{R_n}^n}S_0(\omega_{R_nA_n}^n\|\sigma_{R_n}^n\otimes\openone_A^{\otimes n}).\label{eq:here2}
\end{eqnarray}
For each $\sigma_{R_n}^n$ and any $\gamma\in\mathbb{R}$, define the projector \begin{equation}
P_n^\gamma\equiv P_n^\gamma(\sigma_{R_n}^n):=\{\rho_{RA}^{\otimes n}- 2^{n\gamma}(\sigma_{R_n}^n\otimes\openone_A^{\otimes n}) \geqslant 0\}.
\end{equation}
Since the operator $\omega^n_{R_nA_n}$ in~(\ref{eq:here2}) is a c-q operator, it is clear that the minimization over $\sigma_{R_n}^n$ in~(\ref{eq:here2}) can be restricted to states diagonal in the basis chosen to represent c-q operators. Consequently, $P_n^\gamma$ also has the same c-q structure.
Next, let us denote by $\hat\rho_{RA}$ the i.i.d. sequence of states $\{\rho_{RA}^{\otimes n}\}_{n\geqslant 1}$. For any sequence $\hat\sigma_R:=\{\sigma_{R_n}^n\}_{n\geqslant 1}$, fix $\delta>0$ and choose $\gamma\equiv\gamma(\hat\sigma_R):= \underline{D}(\hat\rho_{RA}\|\hat\sigma_R\otimes\hat\openone_A) -\delta$. Then it follows from the definition~(\ref{udiv}) that, for $n$ large enough, \begin{equation}
\operatorname{Tr}\left[P_n^\gamma\, \rho_{RA}^{\otimes n}\right]\geqslant 1-\varepsilon^2/4,
\end{equation}
for any $\varepsilon>0$. Further, define $\omega_{R_nA_n}^{n,\gamma}\equiv \omega_{R_nA_n}^{n,\gamma}(\sigma_{R_n}^n):=P_n^\gamma \rho_{RA}^{\otimes n} P_n^\gamma$, which is clearly in $B^{\varepsilon}_{\mathrm{cq}}(\rho_{RA}^{\otimes n})$, due to the Gentle Measurement Lemma~\cite{gentle}.
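For the reader's convenience, we recall one common form of the Gentle Measurement Lemma being invoked here (a sketch; the constants may differ from the version in the cited reference):

```latex
% Sketch (added for convenience): if $0 \leqslant P \leqslant \openone$ is a
% projector and $\operatorname{Tr}[P\rho] \geqslant 1-\delta$, then
\begin{equation*}
\left\| \rho - P\rho P \right\|_{1} \leqslant 2\sqrt{\delta},
\end{equation*}
% so that, with $\delta=\varepsilon^{2}/4$,
\begin{equation*}
\left\| \rho_{RA}^{\otimes n} - \omega^{n,\gamma}_{R_nA_n} \right\|_{1} \leqslant \varepsilon,
\quad\text{i.e.}\quad
\omega^{n,\gamma}_{R_nA_n} \in B^{\varepsilon}_{\mathrm{cq}}(\rho_{RA}^{\otimes n}).
\end{equation*}
```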
Then, using the fact that $\pi_{\omega_{R_nA_n}^{n,\gamma}} \leqslant P_n^\gamma$, and Lemma~2 of~\cite{nila}, we have, for any fixed $\varepsilon>0$,
\begin{eqnarray}
&& \lim_{n\to\infty}\frac{1}{n}\,\{\textrm{r.h.s. of (\ref{eq:here2})}\}\nonumber\\
&\geqslant & \lim_{n\to\infty}\frac{1}{n}\min_{\sigma_{R_n}^n} S_0(\omega^{n,\gamma}_{R_nA_n}\|\sigma_{R_n}^n\otimes\openone_A^{\otimes n})\nonumber\\
&= &\lim_{n\to\infty}\frac{1}{n}\min_{\sigma_{R_n}^n}\left\{-\log\operatorname{Tr}\left[\pi_{\omega^{n,\gamma}_{R_nA_n}}(\sigma_{R_n}^n\otimes\openone_A^{\otimes n})\right]\right\}\nonumber\\
&\geqslant & \lim_{n\to\infty}\frac{1}{n}\min_{\sigma_{R_n}^n}\left\{ -\log\operatorname{Tr}\left[P_n^\gamma(\sigma_{R_n}^n\otimes\openone_A^{\otimes n})\right]\right\}\nonumber\\
&\geqslant &\min_{\hat\sigma_R} \gamma(\hat\sigma_R)\nonumber\\
&=& \min_{\hat\sigma_R}\underline{D}(\hat\rho_{RA}\|\hat\sigma_R\otimes\hat\openone_A)-\delta\nonumber\\
&=& \max_{\mathfrak{E}}\min_{\hat\sigma_R}\underline{D}(\hat\rho_{RA}^{\mathfrak{E}}\|\hat\sigma_R\otimes\hat\openone_A)-\delta.
\end{eqnarray}
Since this holds for arbitrary $\delta>0$, it yields the required inequality~(\ref{eq:rhs}) in the limit $\varepsilon\to 0$. $\blacksquare$
From (\ref{qwerty}), (\ref{zxcvbn}) and Lemma \ref{lemma:six} it follows that
\begin{equation}
E_C(\rho_{AB}) \leqslant - \max_{\mathfrak{E}}\min_{\hat\sigma_R}\underline{D}(\hat\rho_{RA}^\mathfrak{E}\|\hat\sigma_R\otimes\hat\openone_A),
\label{poi}
\end{equation}
with $\hat\rho_{RA}^\mathfrak{E}=\{(\rho_{RA}^\mathfrak{E})^{\otimes n}\}_{n\geqslant 1}$.
Further, from the Generalized Stein's Lemma~\cite{brandao-plenio} and Lemma~4 in~\cite{q1}, the lemma below follows:
\begin{lemma}\label{lemma:seven}
For any given bipartite state $\rho_{RA}$,
\begin{equation}
\min_{\hat\sigma_R}\underline{D}(\hat\rho_{RA}\|\hat\sigma_R\otimes\hat\openone_A)=S_r(\rho_{RA}\|\rho_R\otimes\openone_A),
\end{equation}
where $\hat\rho_{RA}=\{\rho_{RA}^{\otimes n}\}_{n\geqslant 1}$, $\hat\sigma_R:=\{\sigma_{R_n}^n\in\mathfrak{S}(\mathscr{H}_R^{\otimes n})\}_{n\geqslant 1}$, and $\hat\openone_A:=\{\openone_A^{\otimes n}\}_{n\geqslant 1}$.
\end{lemma}
\noindent{\bf Proof}. Consider the family of sets $\mathcal{M}:=\{\mathcal{M}_n\}_{n\geqslant 1}$,
\begin{equation}\label{eq:defsets}
\mathcal{M}_n:=\left\{\sigma_{R_n}^n\otimes\tau_{A_n}^n\in\mathfrak{S}(\mathscr{H}_R^{\otimes n}\otimes\mathscr{H}_A^{\otimes n})\right\},
\end{equation}
such that $\tau_{A_n}^n:=(\openone_A/d_A)^{\otimes n}$. For this family, the Generalized Stein's Lemma~(Proposition III.1 of~\cite{brandao-plenio}) holds.
More precisely, for a given bipartite state $\rho_{RA}$, let us define
\begin{equation}
S_\mathcal{M}^\infty(\rho_{RA}):=\lim_{n\to\infty}\frac{1}{n}S_{\mathcal{M}_n}(\rho_{RA}^{\otimes n}),
\end{equation}
with $S_{\mathcal{M}_n}(\rho_{RA}^{\otimes n}):=\min_{\omega^n_{R_nA_n}\in\mathcal{M}_n}S_r(\rho^{\otimes n}_{RA}\|\omega^n_{R_nA_n})$, and
$\Delta_n(\gamma) = \rho_{RA}^{\otimes n} - 2^{n\gamma}\omega^n_{R_nA_n}$. From the Generalized Stein's Lemma~\cite{brandao-plenio} it follows that, for $\gamma>S^\infty_\mathcal{M}(\rho_{RA})$,
\begin{equation}
\lim_{n\to\infty}\min_{\omega^n_{R_nA_n}\in\mathcal{M}_n}\operatorname{Tr}\left[\{\Delta_n(\gamma)\geqslant 0\}\Delta_n(\gamma)\right]=0,
\end{equation}
implying that
$\min_{\hat\omega_{RA}\in\mathcal{M}}\underline{D}(\hat\rho_{RA}\|\hat\omega_{RA})\leqslant S^\infty_\mathcal{M}(\rho_{RA})$.
On the other hand, for $\gamma<S^\infty_\mathcal{M}(\rho_{RA})$,
\begin{equation}
\label{eq:36}
\lim_{n\to\infty}\min_{\omega^n_{R_nA_n}\in\mathcal{M}_n}\operatorname{Tr}\left[\{\Delta_n(\gamma)\geqslant 0\}\Delta_n(\gamma)\right]=1,
\end{equation}
implying that $\min_{\hat\omega_{RA}\in\mathcal{M}}\underline{D}(\hat\rho_{RA}\|\hat\omega_{RA})\geqslant S^\infty_\mathcal{M}(\rho_{RA})$.
Hence $$\min_{\hat\omega_{RA}\in\mathcal{M}}\underline{D}(\hat\rho_{RA}\|\hat\omega_{RA})= S^\infty_\mathcal{M}(\rho_{RA}).$$
Finally, by noticing that, due to the definition~(\ref{eq:defsets}) of $\mathcal{M}$,
\begin{equation}
\begin{split} \min_{\hat\omega_{RA}\in\mathcal{M}}&\underline{D}(\hat\rho_{RA}\|\hat\omega_{RA})\\
=\min_{\hat\sigma_R}&\underline{D}(\hat\rho_{RA}\|\hat\sigma_R\otimes\hat\openone_A)+\log d_A,
\end{split}
\end{equation}
and that, due to Lemma~4 in~\cite{q1},
\begin{equation}
S^\infty_\mathcal{M}(\rho_{RA})=S_r(\rho_{RA}\|\rho_R\otimes\openone_A)+\log d_A,
\end{equation}
we obtain the statement of the lemma. $\blacksquare$
\end{document}
\begin{document}
\author{Chenlu Zhang}
\address{College of Mathematics and Statistics, Chongqing University,
Chongqing, 401331, China.}
\email{[email protected]}
\author{Huaqiao Wang}
\address{College of Mathematics and Statistics, Chongqing University,
Chongqing, 401331, China.}
\email{[email protected]}
\title[Strong solutions of the LLS equation]
{Global existence of strong solutions to the Landau--Lifshitz--Slonczewski equation}
\thanks{Corresponding author: [email protected]}
\keywords{Landau--Lifshitz--Slonczewski equation, Strong solutions, Existence and uniqueness, Besov space, energy estimates.}
\subjclass[2010]{82D40; 35K10; 35D35.}
\begin{abstract}
In this paper, we focus on the existence of strong solutions for the Cauchy problem of the three-dimensional Landau-Lifshitz-Slonczewski
equation. We construct a new combination of Bourgain space and Lebesgue space where linear and nonlinear estimates can be closed by applying
frequency decomposition and energy methods. Finally, we establish the existence and uniqueness of the global strong solution provided that
the initial data belongs to Besov space $\dot{B}^{\frac{n}{2}}_{\Omega}$.
\end{abstract}
\maketitle
\section{Introduction}\label{Sec1}
The discovery of the spin-transfer torque (STT for short) effect is a milestone in the theory of micromagnetics: it breaks with the traditional way of manipulating magnetic torque by magnetic fields. In the 1970s, Berger \cite{BerL,FreBerL,Hung} discovered that electric currents can drive the motion of magnetic domain walls. In the late 1980s, Slonczewski \cite{SlonJC} showed the existence of interlayer exchange coupling between the two ferromagnetic electrodes of a magnetic tunnel junction. Thereafter, the existence of the STT effect was confirmed; for details see \cite{Kub,San,Tso}. There are many types of STT, including Slonczewski STT, vertical STT, and adiabatic and non-adiabatic STT. In the following, we focus on the adiabatic and non-adiabatic STT.
When the current flows along the film surface in any direction, the adiabatic and non-adiabatic STT effect are expressed by (see \cite{Smi}):
\begin{align*}
\boldsymbol{T}_{ad}&=\theta_{3}\boldsymbol{m}\times\left[\boldsymbol{m}\times(v\cdot\nabla\boldsymbol{m})\right],\\
\boldsymbol{T}_{na}&=\theta_{4}\boldsymbol{m}\times(v\cdot\nabla\boldsymbol{m}),
\end{align*}
where $v$ is the density of the spin-polarized current in the current direction, $\theta_{3}$ and $\theta_{4}$ are
dimensionless constants. $\boldsymbol{T}_{ad}$ denotes the adiabatic STT, which was first introduced by Bazaliy in \cite{Ral}.
$\boldsymbol{T}_{na}$ denotes the non-adiabatic STT, which was proposed by Zhang in \cite{Smi}.
If we only consider the effect of magnetic field, the dynamic behavior of magnetization can be described by the Landau-Lifshitz-Gilbert (LLG for short) equation:
\begin{align}
\frac{\partial{\boldsymbol{m}}}{\partial{t}}=\theta_{1}\boldsymbol{m}\times\boldsymbol{H}_{\rm eff}+\theta_{2}
\left(\boldsymbol{m}\times\frac{\partial{\boldsymbol{m}}}{\partial{t}}\right),\;\; \boldsymbol{m}(0,x)=\boldsymbol{m}_0,\label{1.8}
\end{align}
where $\boldsymbol{m}(t,x):\mathbb{R}\times\mathbb{R}^{n}\rightarrow\mathbb{S}^{2}\subset \mathbb{R}^{3}$ is the magnetic intensity, $\boldsymbol{H}_{\rm eff}$ is the effective field, $\theta_{1}$ is a constant related to the magnetogyric ratio, and $\theta_{2}$ is the Gilbert damping parameter. The first term on the right-hand side of \eqref{1.8} represents the Larmor precession and the second term is the Gilbert damping term.
For magnetic nanowires \cite{Hei,Mye} with in-plane current flow, the current can induce the STT effect. In general, physicists \cite{Han} directly add the STT terms to the right-hand side of \eqref{1.8}; the LLG equation with the STT effect is then expressed by:
\begin{align}
\frac{{\partial} \boldsymbol{m}}{\partial t}&=\theta_{1}(\boldsymbol{m}\times\boldsymbol{H}_{\rm eff})
+\theta_{2}\left(\boldsymbol{m}\times\frac{{\partial}\boldsymbol{m}}{\partial t}\right)+\boldsymbol{T}_{ad}+\boldsymbol{T}_{na},\;\; \boldsymbol{m}(0,x)=\boldsymbol{m}_0. \label{1.12}
\end{align}
In this paper, we only consider the exchange field, i.e., $\boldsymbol{H}_{\rm eff}=\Delta\boldsymbol{m}$. Using the fact that $|\boldsymbol{m}|=1$, adding $\theta_{2}\,\boldsymbol{m}\times \eqref{1.12}$ to \eqref{1.12}, and omitting some coefficients for simplicity, we obtain the Landau-Lifshitz-Slonczewski (LLS for short) equation:
\begin{align}\label{main-eq}
\begin{cases}
{\partial_t}\boldsymbol{m}+(v\cdot\nabla)\boldsymbol{m}
+\boldsymbol{m}\times(v\cdot\nabla)\boldsymbol{m}
=\boldsymbol{m}\times\Delta\boldsymbol{m}
-\varepsilon(\boldsymbol{m}\times\boldsymbol{m}\times\Delta\boldsymbol{m}),\\
\boldsymbol{m}(0,x)=\boldsymbol{m}_{0},
\end{cases}
\end{align}
where $\varepsilon\in(0,1)$ is the Gilbert damping coefficient.
If we discard the STT effect, the LLS equation reduces to the LLG equation. For the case of dimension $n=1$, Zhou-Guo \cite{ZhouG}
obtained the global existence of weak solutions for the LLG equation using the Leray-Schauder fixed point theorem. Later, Zhou-Guo-Tan \cite{ZhouGT} established the global existence and uniqueness of smooth solutions to the Cauchy problem with periodic boundary conditions by the difference method and the energy method. Afterwards, for the nonhomogeneous LLG equation, Ding-Guo-Su \cite{DingGS} considered the existence and uniqueness of local smooth solutions to the Cauchy problem with periodic boundary conditions. In dimension $n=2$, Harpe \cite{Har} obtained the regularity of weak solutions to the initial-boundary value problem in a bounded open domain by using the Ginzburg-Landau approximation method. Furthermore, if the first-order derivative of the initial data is sufficiently small, Carbou-Fabrie \cite{Car} obtained the global existence of regular solutions of the initial-boundary value problem. For weak solutions with finite energy, Chen-Ding-Guo \cite{Chen} showed that the weak solution is regular and unique except for at most finitely many points on a compact two-dimensional manifold without boundary.
In higher dimensions, there have been many results for the LLG equation. Guo-Hong \cite{GuoHo} established the connection between the LLG equation and the harmonic map heat flow, showed the existence of global weak solutions for the Cauchy problem by a penalized approximation method, and proved that the weak solution is regular except for at most finitely many points. Moreover, they obtained global smoothness of solutions when the initial data is sufficiently regular or the initial energy is sufficiently small. Ding-Guo \cite{Ding} built up a special energy inequality and a monotonicity inequality for weak solutions, and then showed that the weak solution is regular on a bounded compact Riemannian manifold. If the initial energy is sufficiently small, Ding-Wang \cite{DingW} proved that short-time smooth solutions must blow up in finite time by an approximation method on an unbounded Riemannian manifold. In recent years, many researchers have noticed the relationship between the LLG equation and the Ginzburg-Landau equation. Melcher \cite{Mel} transformed the LLG equation into the complex Ginzburg-Landau equation by moving frames and established the global existence and uniqueness of smooth solutions to the Cauchy problem in Sobolev spaces. Later, Lin-Lai-Wang \cite{Lin} also proved the global solvability of the Cauchy problem in Morrey spaces. Recently, Guo-Huang \cite{GuoHu} investigated the existence and uniqueness of strong solutions in the critical Besov space by using the stereographic projection and frequency decomposition.
While the well-posedness theory of the LLG equation is by now well established, there are few results on the well-posedness of the LLS equation. In Sobolev spaces, Melcher-Ptashnyk \cite{MelP} studied the existence and uniqueness of global weak solutions for the three-dimensional LLS equation, and also obtained smooth solutions when the initial data is sufficiently small and regular. Obtaining regularity of the solution in Sobolev spaces is very challenging when the initial data lacks regularity, which reflects the difficulties one always encounters in Sobolev spaces. Our goal is to construct strong solutions of the LLS equation, with lower regularity of the initial data, in another suitable space. The Bourgain space was first used by Bourgain \cite{Bour1,Bour2} to systematically study the low regularity theory of the Schr\"{o}dinger equation and the KdV equation. Later, the method was widely applied to the KdV equation, nonlinear wave equations, the Schr\"{o}dinger equation, etc.; see for example \cite{Ken,Kla,Tao}. The advantage of the Bourgain space is that we can lower the regularity of the initial data and use the Littlewood-Paley decomposition to turn differential operators into scalar multiplication in frequency space; that is, we use the frequency decomposition to balance the coefficients instead of reducing the regularity of the space. By Duhamel's principle, the operator $e^{it\Delta}$ appears in the expression of the solution, and we cannot find a suitable estimate for it in the Bourgain space alone, so we look for semigroup estimates of the operator in the anisotropic Lebesgue space. Inspired by Guo-Huang \cite{GuoHu}, who proved the existence of global strong solutions of the LLG equation in the critical Besov space, we apply the frequency decomposition to handle the nonlinear terms of the LLS equation.
Unlike the LLG equation, whose main nonlinear part is only $u(\nabla u)^{2}$, the LLS equation has two major nonlinear parts, $u(\nabla u)^{2}$ and $u^{2}\nabla u$. Since the total numbers of derivatives in the two terms are not equal, choosing the same form of frequency decomposition for both would give either different solution spaces or different coefficients in front of the same solution space; both options lead to a restricted, smaller solution space. Therefore, on the basis of the solution space previously obtained for the term $u(\nabla u)^{2}$, we exploit a new frequency decomposition to deal with the term $u^{2}\nabla u$ and choose a suitable solution space in which the linear and nonlinear estimates close.
First, we transform the LLS equation \eqref{main-eq} into a complex derivative Ginzburg-Landau type equation by using the stereographic projection. Let
\begin{align*}
u=\frac{\boldsymbol{m}_{1}+i\boldsymbol{m}_{2}}{1+\boldsymbol{m}_{3}},
\end{align*}
where $\boldsymbol{m}=(\boldsymbol{m}_{1},\boldsymbol{m}_{2},\boldsymbol{m}_{3})$ is the solution of equation \eqref{main-eq}.
Obviously, the inverse of this projection is
\begin{align}\label{Ucoord}
(\boldsymbol{m}_{1},\boldsymbol{m}_{2},\boldsymbol{m}_{3})
=\left(\frac{u+\bar{u}}{1+|u|^{2}},\frac{-i(u-\bar{u})}{1+|u|^{2}},\frac{1-|u|^{2}}{1+|u|^{2}}\right).
\end{align}
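As a quick sanity check (a numerical sketch of ours, not part of the argument), the inverse projection \eqref{Ucoord} indeed takes values on the unit sphere, $|\boldsymbol{m}|=1$:

```python
import numpy as np

# Numerical check that the inverse stereographic map
# (m1, m2, m3) = ((u+ub)/(1+|u|^2), -i(u-ub)/(1+|u|^2), (1-|u|^2)/(1+|u|^2))
# lands on S^2 for random complex u.
rng = np.random.default_rng(1)
u = rng.standard_normal(100) + 1j * rng.standard_normal(100)
den = 1 + np.abs(u)**2
m1 = (u + u.conj()).real / den
m2 = (-1j * (u - u.conj())).real / den
m3 = (1 - np.abs(u)**2) / den
assert np.allclose(m1**2 + m2**2 + m3**2, 1.0)
```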
Substituting \eqref{Ucoord} into \eqref{main-eq}, we obtain three equations respectively:
\begin{align}
&(1-\bar{u}^{2})A(u,\bar{u})+F(u,\bar{u})=(1-u^{2})\bar{A}(u,\bar{u})+\bar{F}(u,\bar{u}),\label{m1eq}\\
&(1+\bar{u}^{2})A(u,\bar{u})+H(u,\bar{u})=-(1+u^{2})\bar{A}(u,\bar{u})-\bar{H}(u,\bar{u}),\label{m2eq}\\
&\bar{u}A(u,\bar{u})=u\bar{A}(u,\bar{u}),\label{m3eq}
\end{align}
where
\begin{align*}
A(u,\bar{u})=iu_{t}-\left[(-1+\varepsilon i)\Delta u-(-1+i)(v\cdot\nabla)u-\frac{2(-1+\varepsilon i)
\bar{u}(\nabla u)^{2}}{1+|u|^{2}}\right],
\end{align*}
\begin{align*}
F(u,\bar{u})=\frac{2(1+\bar{u}^{2})(1-|u|^2)(v\cdot\nabla)\bar{u}}{1+|u|^{2}},\quad
H(u,\bar{u})=\frac{4(v\cdot\nabla)u(\bar{u}^{2}+|u|^{2})}{1+|u|^{2}}.
\end{align*}
Combining \eqref{m1eq} and \eqref{m2eq}, we have
\begin{align*}
A(u,\bar{u})=-u^{2}\bar{A}(u,\bar{u})+\frac{\bar{F}(u,\bar{u})-F(u,\bar{u})}{2}-\frac{H(u,\bar{u})+\bar{H}(u,\bar{u})}{2}.
\end{align*}
By \eqref{m3eq}, one has
\begin{align*}
A(u,\bar{u})=\frac{\left[\bar{F}(u,\bar{u})-F(u,\bar{u})\right]-\left[H(u,\bar{u})+\bar{H}(u,\bar{u})\right]}{2(1+|u|^{2})}
=\frac{-i\,{\rm Im}\,F-{\rm Re}\,H}{1+|u|^{2}},
\end{align*}
then $u$ solves the following complex derivative Ginzburg-Landau equation:
\begin{align*}
\begin{cases}
\partial_{t}u=-(\varepsilon+i)\Delta u+(1+i)(v\cdot\nabla)u+\frac{2(\varepsilon+i)\bar{u}(\nabla u)^{2}}{1+|u|^{2}}
+\frac{{\rm{Im}}F-i{\rm{Re}}H}{1+|u|^{2}},\\
u(0,x)=u_{0}.
\end{cases}
\end{align*}
Setting $\tilde{t}=-t$, we obtain
\begin{align}\label{deri-Ginz}
\begin{cases}
\partial_{t}u=(\varepsilon+i)\Delta u-(1+i)(v\cdot\nabla)u-\frac{2(\varepsilon+i)\bar{u}(\nabla u)^{2}}{1+|u|^{2}}
-\frac{{\rm{Im}}F-i{\rm{Re}}H}{1+|u|^{2}},\\
u(0,x)=u_{0}.
\end{cases}
\end{align}
Now, we state our main result in the following.
\begin{theorem}\label{main-th}
Assume that $n \geq 3$, $\|v\|_{L^{\infty}_{t,x}}\leq \eta$ and $\eta>0$ is small enough. When the initial data $\boldsymbol{m}_{0}=(\boldsymbol{m}_{01},\boldsymbol{m}_{02},\boldsymbol{m}_{03}) \in \dot{B}^{\frac{n}{2}}_{\Omega}(\mathbb{R}^{n}; \mathbb{S}^{2})$ satisfies $\left\|\frac{\boldsymbol{m}_{01}+i\boldsymbol{m}_{02}}{1+\boldsymbol{m}_{03}}\right\|_{F^{\frac{n}{2}}\cap Z^{\frac{n}{2}}}\leq \eta$, then the LLS equation \eqref{main-eq} has a unique strong solution $\boldsymbol{m}$ in the combination of Bourgain space and Lebesgue space, i.e., $F^{\frac{n}{2}}\cap Z^{\frac{n}{2}}$.
\end{theorem}
Observing the connection between equations \eqref{main-eq} and \eqref{deri-Ginz}, we prove the well-posedness of equation \eqref{deri-Ginz} instead of equation \eqref{main-eq}. Our main strategy is to use the fixed point theorem, for which it is essential to close the linear and nonlinear estimates. According to the structure of the solution, we construct a new solution space $F^{\frac{n}{2}}\cap Z^{\frac{n}{2}}$ to balance the coefficients coming from the linear and nonlinear parts. More precisely, we first close the linear estimates in the space $F^{\frac{n}{2}}\cap Z^{\frac{n}{2}}$ (see Proposition \ref{pro linear-es}) by Lemma \ref{Leb-linear}--Lemma \ref{X-linear}. Then we show that the nonlinear estimates close in the space $F^{\frac{n}{2}}\cap Z^{\frac{n}{2}}$ (see Proposition \ref{pro nonlinear-es}) by using Lemma \ref{Lemma4.3}--Lemma \ref{Lemma4.2}. Combining the contraction estimate \eqref{contractive map} with Propositions \ref{pro linear-es} and \ref{pro nonlinear-es}, which yield that the solution map maps the space into itself, we obtain the well-posedness of equation \eqref{deri-Ginz}.
This paper is arranged as follows. In Section \ref{Sec2}, we introduce our main dyadic function spaces and recall some basic definitions.
In Section \ref{Sec3}, we list some useful lemmas and establish the linear estimate in Proposition \ref{pro linear-es}. In Section \ref{Sec4}, we obtain the nonlinear estimates in Proposition \ref{pro nonlinear-es}. In Section \ref{Sec5}, we give the proof of our main result (see Theorem \ref{main-th}).
\section{Definition and notation}\label{Sec2}
In this section, we recall some definitions and useful notations. For $\Omega \in \mathbb{S}^{2}$, the space $\dot{B}^{\frac{n}{2}}_{\Omega}$ is defined by
\begin{align*}
\dot{B}^{\frac{n}{2}}_{\Omega}:=\dot{B}^{\frac{n}{2}}_{\Omega}(\mathbb{R}^{n};\mathbb{S}^{2})=\{f:\mathbb{R}^{n}\rightarrow\mathbb{R}^{3};~
f-\Omega\in \dot{B}^{\frac{n}{2}}_{2,1},~|f(x)|=1~a.e.\text{ in }\mathbb{R}^{n}\},
\end{align*}
where the space $\dot{B}^{\frac{n}{2}}_{2,1}$ is the standard Besov space.
Let $\eta(|\xi|):\mathbb{R}\rightarrow[0,1]$ be a non-negative, smooth, radially decreasing function, compactly supported in $|\xi|\leq\frac{8}{5}$ and with $\eta\equiv1$ for $|\xi|\leq\frac{5}{4}$.
Let $\chi_{k}(\xi)=\eta(\frac{|\xi|}{2^{k}})-\eta(\frac{|\xi|}{2^{k-1}})$,
$\chi_{\leq k}(\xi)=\eta(\frac{|\xi|}{2^{k}})$, and $\tilde{\chi}_{k}(\xi)=\sum^{9n}_{l=-9n}\chi_{k+l}({\xi})$. We define the homogeneous and inhomogeneous Littlewood-Paley projectors $P_{k}$ and $P_{\leq k}$ on $L^{2}(\mathbb{R}^{n})$ respectively by
\begin{align*}
\widehat{P_{k}u}(\xi)=\chi_{k}(\xi)\widehat{u}(\xi),\quad \widehat{P_{\leq k}u}(\xi)=\chi_{\leq k}(\xi)\widehat{u}(\xi),
\end{align*}
for any $k\in\mathbb{Z}$, and $P_{\geq k}=I-P_{\leq k-1},\;P_{[k_{1},k_{2}]}=\sum_{j=k_{1}}^{k_{2}}{P_{j}}$, where $I$ is the identity operator. Similarly, define $\widehat{\widetilde{P_{k}}u}=\widetilde{\chi_{k}}(\xi)\widehat{u}(\xi)$.
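A discrete one-dimensional sketch of ours (with sharp cutoffs in place of the smooth bump $\eta$, for brevity) of how the dyadic pieces $P_{k}u$ tile frequency space and resum to $u$:

```python
import numpy as np

# Dyadic Littlewood-Paley pieces via FFT: each P_k keeps the frequencies
# |xi| in (2^{k-1}, 2^k]; summing the pieces (plus the zero mode) recovers u.
N = 256
x = np.linspace(0, 2*np.pi, N, endpoint=False)
u = np.cos(3*x) + 0.5*np.sin(17*x)                      # frequencies 3 and 17
uhat = np.fft.fft(u)
xi = np.abs(np.fft.fftfreq(N, d=2*np.pi/N) * 2*np.pi)   # integer |wavenumbers|

def P(k, uhat):
    mask = (xi > 2**(k - 1)) & (xi <= 2**k)             # sharp dyadic annulus
    return np.fft.ifft(uhat * mask).real

recon = sum(P(k, uhat) for k in range(0, 9)) + np.fft.ifft(uhat * (xi == 0)).real
assert np.allclose(recon, u, atol=1e-10)
```

For this $u$, the single piece $P_{2}u$ isolates the frequency-3 mode, since $3\in(2,4]$.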
We define the modulation projector $Q_{k},Q_{\leq k}$ on $L^{2}(\mathbb{R}\times\mathbb{R}^{n})$ by
\begin{align*}
\widehat{Q_{k}u}(\xi,\tau)=\chi_{k}(\tau+|\xi|^{2})\widehat{u}(\xi,\tau),\quad
\widehat{Q_{\leq k}u}(\xi,\tau)=\chi_{\leq k}(\tau+|\xi|^{2})\widehat{u}(\xi,\tau),
\end{align*}
for any $k\in\mathbb{Z}$, and $Q_{\geq k}=I-Q_{\leq k-1},\;Q_{[k_{1},k_{2}]}=\sum_{j=k_{1}}^{k_{2}}{Q_{j}}$.
We define the anisotropic Lebesgue space $L^{p,q}_{\mathbf{e}}(\mathbb{R}\times\mathbb{R}^{n})$, $1\leq p,q<\infty$, by
\begin{align*}
\|f\|_{L^{p,q}_{\mathbf{e}}(\mathbb{R}^{n+1})}=\left(\int_{\mathbb{R}}\left(\int_{H_{\mathbf{e}}
\times\mathbb{R}}|f(\lambda \mathbf{e}+y,t)|^{q}dydt \right)^{\frac{p}{q}}d\lambda \right)^{\frac{1}{p}}.
\end{align*}
Here we decompose $\mathbb{R}^{n}=\lambda \mathbf{e}\oplus H_{\mathbf{e}}$, where $\mathbf{e}\in{\mathbb{S}^{n-1}}$, $\mathbb{S}^{n-1}$
is the unit sphere in $\mathbb{R}^{n}$, and $H_{\mathbf{e}}$ is the hyperplane with normal vector $\mathbf{e}$.
Write $L^{p,q}_{\mathbf{e}_{j}}=L^{p}_{x_{j}}L^{q}_{\bar{x}_{j},t}$, where $x=x_{j}\oplus\bar{x}_{j}$. The symbol $A\lesssim B$ means that $A\leq CB$, where $C$ is a positive constant.
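A discrete sketch of ours (hypothetical grid with unit spacing) of the anisotropic norm $L^{p,q}_{\mathbf{e}_{1}}$: an outer $L^{p}$ norm along $x_{1}$ of the inner $L^{q}$ norm over the transverse variables:

```python
import numpy as np

# Anisotropic mixed norm on a grid: inner L^q over (x_2, t) per x_1 slab,
# then outer L^p over x_1. For p = q it collapses to the usual L^p norm.
rng = np.random.default_rng(0)
f = rng.standard_normal((8, 16, 16))                      # axes: (x_1, x_2, t)

def aniso_norm(f, p, q):
    inner = (np.abs(f)**q).sum(axis=(1, 2))**(1.0/q)      # L^q over (x_2, t)
    return (inner**p).sum()**(1.0/p)                      # L^p over x_1

assert np.isclose(aniso_norm(f, 2, 2), np.sqrt((f**2).sum()))
```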
The Bourgain space $X^{s,b}$ has been used in the low regularity theory for the Cauchy problem of nonlinear dispersive equations. In this paper, we adopt its modulation-homogeneous version as in \cite{Beje,Guo}, and define $X^{0,b,q}$ by
\begin{align*}
\|f\|_{X^{0,b,q}}=\left(\sum_{k\in\mathbb{Z}}2^{kbq}\|Q_{k}f\|_{L^{2}_{t,x}}^{q}\right)^{1/q}.
\end{align*}
If $u(x,t)\in L^{2}(\mathbb{R}^{+}\times\mathbb{R}^{n})$ has spatial frequency in $\{|\xi|\sim2^{k}\}$, we define the main dyadic function spaces by
\begin{align*}
\|u\|_{F_{k}}=&\|u\|_{X^{0,\frac{1}{2},1}_{+}}+\|u\|_{L^{\infty}_{t}L^{2}_{x}}+\|u\|_{L^{2}_{t}L^{\frac{2n}{n-2}}_{x}}\\
&+2^{-(n-1)k/2}\sup_{\mathbf{e}_{i}\in\mathbb{S}^{n-1}}\|u\|_{L^{2,\infty}_{\mathbf{e}_{i}}}
+2^{\frac{k}{2}}\sup_{|j-k|\leq20}\sup_{\mathbf{e}_{i}\in\mathbb{S}^{n-1}}\|P_{j,\mathbf{e}_{i}}u\|_{L^{\infty,2}_{\mathbf{e}_{i}}},\\
\|u\|_{Y_{k}}=&\|u\|_{L^{\infty}_{t}L^{2}_{x}}+\|u\|_{L^{2}_{t}L^{\frac{2n}{n-2}}_{x}}
+2^{-(n-1)k/2}\sup_{\mathbf{e}_{i}\in\mathbb{S}^{n-1}}\|u\|_{L^{2,\infty}_{\mathbf{e}_{i}}}\\
&+2^{-k}\inf_{u=u_{1}+u_{2}}(\|u_{1}\|_{X^{0,1}}+\|u_{2}\|_{X^{0,1}}),\\
\|u\|_{Z_{k}}=&2^{-k}\|u\|_{X^{0,1}},\\
\|u\|_{N_{k}}=&\inf_{u=u_{1}+u_{2}+u_{3}}\left(\|u_{1}\|_{L^{1}_{t}L^{2}_{x}}
+2^{-k/2}\sup_{\mathbf{e}_{i}\in\mathbb{S}^{n-1}}\|u_{2}\|_{L^{1,2}_{\mathbf{e}_{i}}}
+\|u_{2}\|_{X^{0,-\frac{1}{2},1}}\right)+2^{-k}\|u\|_{L^{2}_{t,x}}.
\end{align*}
Obviously, $F_{k}\cap Z_{k}\subset Y_{k}$; we then define some spaces with the following norms:
\begin{align*}
\|u\|_{F^{s}}&=\sum_{k\in{\mathbb{Z}}}2^{ks}\|P_{k}u\|_{F_{k}},\quad \|u\|_{Y^{s}}=\sum_{k\in{\mathbb{Z}}}2^{ks}\|P_{k}u\|_{Y_{k}},\\
\|u\|_{Z^{s}}&=\sum_{k\in{\mathbb{Z}}}2^{ks}\|P_{k}u\|_{Z_{k}},\quad \|u\|_{N^{s}}=\sum_{k\in{\mathbb{Z}}}2^{ks}\|P_{k}u\|_{N_{k}}.
\end{align*}
\section{The linear estimate}\label{Sec3}
In order to prove that the estimates for the solution map close, we need to discuss the properties of the solution spaces. In this section, we estimate the linear parts of equation \eqref{deri-Ginz} and obtain estimates uniform with respect to $\varepsilon$. Inspired by \cite{IonK,IonK1,Keel}, we can resort to some conclusions similar to the Strichartz estimates for the Schr\"{o}dinger equation:
\begin{lemma}\label{semi-stri}
Assume that $n\geq3$. For any $k\in\mathbb{Z}$, we have
\begin{align*}
&\|e^{it\Delta}P_{k}f\|_{L^{2}_{t}L^{\frac{2n}{n-2}}_{x}\cap L^{\infty}_{t}L^{2}_{x}}
\leq C\|P_{k}f\|_{L^{2}_{x}}, \\% The coefficient here is immaterial: the Strichartz estimates only require the semigroup property.
&\sup_{\mathbf{e}_{j}\in\mathbb{S}^{n-1}}\|e^{i t\Delta}P_{k}f\|_{L^{2,\infty}_{\mathbf{e}_{j}}}
\leq C2^{\frac{(n-1)k}{2}}\|P_{k}f\|_{L^{2}_{x}},\\
&\sup_{\mathbf{e}_{j}\in\mathbb{S}^{n-1}}\|e^{i t\Delta}P_{k,\mathbf{e}_{j}}f\|_{L^{\infty,2}_{\mathbf{e}_{j}}}
\leq C2^{-\frac{k}{2}}\|P_{k}f\|_{L^{2}_{x}},
\end{align*}
where the constant $C>0$ is independent of $k$.
\end{lemma}
Notice that the space $F_{k}\cap Z_{k}$ consists of three main spaces: the general Lebesgue space $L_{t}^{p}L_{x}^{q}$, the anisotropic Lebesgue space $L_{\mathbf{e}}^{p,q}$, and $X^{0,b,q}$. We introduce the following lemma, connecting the norm of $X^{0,b,q}$ with the other norms; it is an extension of \cite[Proposition 5.4]{Wang}.
\begin{lemma}\label{semi-Banach}
Assume that $\mathbb{X}$ is an arbitrary space-time Banach space. If for any $f_{0}\in L^{2}_{x}$ and $\tau_{0}\in \mathbb{R}$ one has
\begin{align*}
\|e^{it\tau_{0}}S(t)f_{0}\|_{\mathbb{X}}\leq C(k)\|f_{0}\|_{L^{2}_{x}},
\end{align*}
then for any $k\in \mathbb{Z}$, $f\in L^{2}_{t,x}$,
\begin{align*}
\|P_{k}f\|_{\mathbb{X}}\leq C(k)\|P_{k}f\|_{X^{0,\frac{1}{2},1}},
\end{align*}
where $C(k)$ is a polynomial in $k$.
\end{lemma}
In the following we discuss the linear estimates of $u(t,x)$ in each of the three spaces.
\begin{lemma}[The general Lebesgue space]\label{Leb-linear}
Assume that $n\geq3$, let $u(t,x)$ be the solution of \eqref{deri-Ginz}, and let $J(u(t,x))$ denote the nonlinear term; that is, for any $\varepsilon>0$,
\begin{align*}
u_{t}-(\varepsilon+i)\Delta u=J(u(t,x)),\quad u(0,x)=u_{0}.
\end{align*}
Then for any $\mathbf{e}_{i}\in \mathbb{S}^{n-1}$, we have
\begin{align}
\|P_{k}u\|_{L^{2}_{t}L^{\frac{2n}{n-2}}_{x}\cap L^{\infty}_{t}L^{2}_{x}}
\leq C\left(\|u_{0}\|_{L^{2}_{x}}+\|J(u(t,x))\|_{N_{k}}\right),\label{Leb-linear-eq}
\end{align}
where the constant $C>0$ is independent of $\varepsilon$ and $k$.
\end{lemma}
\begin{proof}
By Duhamel's principle, we obtain
\begin{align*}
u=e^{(\varepsilon+ i)t\Delta}u_{0}+\int_{0}^{t}e^{(\varepsilon+ i)(t-s)\Delta}J(s)ds.
\end{align*}
Firstly, assume that $J(u(t,x))=0$. Using the Fourier transform, one has
\begin{align*}
e^{\varepsilon t\Delta}(e^{i t\Delta}u_{0})
&=\mathscr{F}^{-1}_{\xi}\{e^{-\varepsilon t|\xi|^{2}}\cdot e^{-i t|\xi|^{2}}\hat{u_{0}}\}\\
&=\mathscr{F}^{-1}_{\xi}\{e^{-\varepsilon t|\xi|^{2}}\}\ast e^{i t\Delta}u_{0} \\
&\lesssim (\varepsilon t)^{-\frac{n}{2}}e^{-\frac{|x|^{2}}{4\varepsilon t}} \ast e^{i t\Delta}u_{0},
\end{align*}
then by the $L^{p}$--$L^{p}$ boundedness of the operator $P_{k}$, Young's inequality, and Lemma \ref{semi-stri}, we have
\begin{align*}
\|P_{k}u\|_{L^{2}_{t}L^{\frac{2n}{n-2}}_{x}\cap L^{\infty}_{t}L^{2}_{x}}
&\lesssim \|e^{(\varepsilon+ i)t\Delta}u_{0}\|_{L^{2}_{t}L^{\frac{2n}{n-2}}_{x}\cap L^{\infty}_{t}L^{2}_{x}} \\
&\lesssim\|(\varepsilon t)^{-\frac{n}{2}}e^{-\frac{|x|^{2}}{4\varepsilon t}}\|_{L^{\infty}_{t}L^{1}_{x}}
\|e^{i t\Delta}u_{0}\|_{L^{2}_{t}L^{\frac{2n}{n-2}}_{x}\cap L^{\infty}_{t}L^{2}_{x}} \\
&\lesssim \|u_{0}\|_{L^{2}_{x}}.
\end{align*}
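The last bound relies on the heat kernel having an $L^{1}_{x}$ norm independent of $\varepsilon t$; a quick one-dimensional numerical check of this fact (our sketch, not from the paper), which is what makes the estimate uniform in $\varepsilon$:

```python
import numpy as np

# L^1_x norm of the heat kernel (eps*t)^{-n/2} exp(-|x|^2/(4 eps t)) in n = 1:
# a Riemann sum shows it equals 2*sqrt(pi) regardless of the value of eps*t.
def kernel_L1(eps_t, L=60.0, N=120001):
    x = np.linspace(-L, L, N)
    k = eps_t**(-0.5) * np.exp(-x**2 / (4*eps_t))
    return k.sum() * (x[1] - x[0])

assert np.isclose(kernel_L1(0.1), 2*np.sqrt(np.pi), rtol=1e-6)
assert np.isclose(kernel_L1(5.0), 2*np.sqrt(np.pi), rtol=1e-6)
```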
Secondly, we suppose that $u_{0}=0$. Notice that the norm $N_{k}$ consists of four parts, which are treated separately below. We extend the term $J(u(t,x))$ to a term $\tilde{J}(u(t,x))$ satisfying
\begin{align*}
\|\tilde{J}(u(t,x))\|_{X^{0,-\frac{1}{2},\infty}}\lesssim \|J(u(t,x))\|_{X^{0,-\frac{1}{2},\infty}_{+}},
\end{align*}
and define $\tilde{u}=\mathscr{F}^{-1}_{\tau,\xi}\frac{1}{\tau+|\xi|^{2}+i\varepsilon|\xi|^{2}}\mathscr{F}_{t,x}\tilde{J}(u(t,x))$.
Next, we want to show
\begin{align*}
\|P_{k}u\|_{L^{2}_{t}L^{\frac{2n}{n-2}}_{x}\cap L^{\infty}_{t}L^{2}_{x}}
\lesssim \|J(u(t,x))\|_{X^{0,-\frac{1}{2},1}_{+}}.
\end{align*}
Using Lemma \ref{semi-Banach}, we get
\begin{align}
\begin{split}
\|P_{k}\tilde{u}(t,x)\|_{L^{2}_{t}L^{\frac{2n}{n-2}}_{x}\cap L^{\infty}_{t}L^{2}_{x}}
&\lesssim \|P_{k}\tilde{u}(t,x)\|_{X^{0,\frac{1}{2},1}} \\
&\lesssim \sum_{j\in\mathbb{Z}}2^{\frac{j}{2}}\left\|\mathscr{F}^{-1}_{\tau,\xi}\bigg\{\chi_{j}(\tau+|\xi|^{2})\chi_{k}(\xi)
\frac{1}{\tau+|\xi|^{2}+i\varepsilon|\xi|^{2}}\mathscr{F}_{t,x}\{\tilde{J}(u)\}\bigg\}\right\|_{L^{2}_{t,x}} \\
&\lesssim \sum_{j\in\mathbb{Z}}2^{\frac{j}{2}}\left\|\mathscr{F}^{-1}_{\tau,\xi}\bigg\{\chi_{j}(\tau+|\xi|^{2})
\mathscr{F}_{t,x}\{\tilde{J}(u)\}\bigg\}\right\|_{L^{2}_{t,x}}\cdot2^{-j} \\
&\lesssim \|J(u(t,x))\|_{X^{0,-\frac{1}{2},1}_{+}}. \label{3.2}
\end{split}
\end{align}
Inspired by \cite{Keel}, we can easily show that
\begin{align*}
\|P_{k}u\|_{L^{2}_{t}L^{\frac{2n}{n-2}}_{x}}\lesssim \|J(u(t,x))\|_{L^{1}_{t}L^{2}_{x}}.
\end{align*}
Notice that $(2,\frac{2n}{n-2})\in \Lambda_{0}$ (see \cite{Keel}), and $(q,r)=(2,\frac{2n}{n-2}), (q',r')=(1,2)$ satisfy
\begin{align}
\frac{1}{q}+\frac{n}{r}=\frac{1}{q'}+\frac{n}{r'}-2. \label{3.9}
\end{align}
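For the reader's convenience, note that the pair $(2,\frac{2n}{n-2})$ is Schr\"odinger-admissible in the sense of \cite{Keel}, i.e., it satisfies $\frac{2}{q}+\frac{n}{r}=\frac{n}{2}$ with $q\geq 2$; indeed, since $n\geq3$,
\begin{align*}
\frac{2}{2}+\frac{n}{\frac{2n}{n-2}}=1+\frac{n-2}{2}=\frac{n}{2}.
\end{align*}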
By Theorem 2.1 in \cite[Introduction]{Keel}, we obtain
\begin{align}
&\left\|\int_{0}^{t}e^{i(t-s)\Delta} J(u(s))ds\right\|_{L^{2}_{t}L^{\frac{2n}{n-2}}_{x}}\lesssim \|J(u)\|_{L^{1}_{t}L^{2}_{x}},\label{3.9.1}\\
&\|e^{it\Delta} u(0)\|_{L^{\infty}_{t}L^{2}_{x}}\lesssim \|u(0)\|_{L^{2}},\label{3.9.2}\\
&\left\|\int_{-\infty}^{+\infty}e^{-is\Delta}J(u(s))ds\right\|_{L^{2}}\lesssim \|J(u)\|_{L^{1}_{t}L^{2}_{x}}.\label{3.9.3}
\end{align}
Combining \eqref{3.9.1} with Young's inequality, one has
\begin{align*}
&\left\|\int_{0}^{t}e^{i(t-s)\Delta+\varepsilon(t-s)\Delta} J(u(s))ds\right\|_{L^{2}_{t}L^{\frac{2n}{n-2}}_{x}}\\
&\lesssim \left\|\int_{0}^{t}[\varepsilon(t-s)]^{-\frac{n}{2}}\|e^{-\frac{|x|^{2}}{4\varepsilon(t-s)}}\ast e^{i(t-s)\Delta}J(u(s))\|
_{L^{\frac{2n}{n-2}}_{x}}ds\right\|_{L^{2}_{t}}\\
&\lesssim \left\|\int_{0}^{t}e^{i(t-s)\Delta}J(u(s))ds\right\|_{L^{2}_{t}L^{\frac{2n}{n-2}}_{x}}
\lesssim \|J(u)\|_{L^{1}_{t}L^{2}_{x}}.
\end{align*}
Now we prove that
\begin{align*}
\|P_{k}u\|_{L^{\infty}_{t}L^{2}_{x}}\lesssim \|J(u(t,x))\|_{L^{1}_{t}L^{2}_{x}}.
\end{align*}
Unlike the previous proof, $(\infty,2), (1,2)$ are conjugate indices and do not satisfy the relationship of \eqref{3.9}. Considering \eqref{3.9.2} and \eqref{3.9.3}, employing the Plancherel equality and Bochner's inequality, we find that
\begin{align*}
\|P_{k}u\|_{L^{\infty}_{t}L^{2}_{x}}
&\lesssim \left\|\int_{0}^{t}\|e^{-i(t-s)|\xi|^{2}-\varepsilon(t-s)|\xi|^{2}}\hat{J}(u(s))\|_{L^{\infty}_{t}}ds\right\|_{L^{2}_{\xi}}\\
&\lesssim \left\|\int_{-\infty}^{+\infty}e^{is|\xi|^{2}}\hat{J}(u(s))ds\right\|_{L^{2}_{\xi}}
\|e^{-it|\xi|^{2}-\varepsilon(t-s)|\xi|^{2}}\|_{L^{\infty}_{t,s,\xi}}\\
&\lesssim\|J(u)\|_{L^{1}_{t}L^{2}_{x}}.
\end{align*}
Finally, we show that
\begin{align*}
\|P_{k}u\|_{L^{2}_{t}L^{\frac{2n}{n-2}}_{x}\cap L^{\infty}_{t}L^{2}_{x}}
\lesssim 2^{-\frac{k}{2}}\|J(u(t,x))\|_{L^{1,2}_{\mathbf{e}_{j}}}.
\end{align*}
Similar to the argument of \eqref{3.12} in Lemma \ref{anisLeg-linear}, we can get the above estimate. This completes the proof of \eqref{Leb-linear-eq}.
\end{proof}
\begin{lemma}[The anisotropic Lebesgue space]\label{anisLeg-linear}
Assume that $n\geq3$ and that $u(t,x)$ is the solution of \eqref{deri-Ginz} with nonlinear term $J(u(t,x))$; that is, for any $\varepsilon>0$,
\begin{align*}
u_{t}-(\varepsilon+i)\Delta u=J(u(t,x)),\quad u(0,x)=u_{0},
\end{align*}
Then, for any $\mathbf{e}_{i}\in \mathbb{S}^{n-1}$, we have
\begin{align}
&\|P_{k}u\|_{L^{2,\infty}_{\mathbf{e}_{i}}}
\leq C\big( 2^{k(n-1)/2}\|u_{0}\|_{L^{2}_{x}}+2^{k(n-2)/2}\sup_{\mathbf{e}_{i}\in \mathbb{S}^{n-1}}\|J(u(t,x))\|_{L^{1,2}_{\mathbf{e}_{i}}}\big),\label{3.12}\\
&\|P_{k,\mathbf{e}_{i}}u\|_{L^{\infty,2}_{\mathbf{e}_{i}}}
\leq C\big( 2^{-k/2}\|u_{0}\|_{L^{2}_{x}}+2^{-k}\sup_{\mathbf{e}_{i}\in \mathbb{S}^{n-1}}
\|J(u(t,x))\|_{L^{1,2}_{\mathbf{e}_{i}}}\big),\label{3.13}
\end{align}
where the constant $C>0$ is independent of $\varepsilon$ and $k$.
\end{lemma}
\begin{proof}
First, we show the second inequality \eqref{3.13}. Assuming that the term $J(u(t,x))=0$, using the Plancherel equality, convolution theorem and Young's inequality in convolution form, we get
\begin{align*}
\|P_{k,\mathbf{e}_{i}}u\|_{L^{\infty,2}_{\mathbf{e}_{i}}}
&\lesssim \left\|\mathscr{F}^{-1}_{\xi_{i}}\big\{e^{-(i+\varepsilon)t\xi_{i}^{2}}\mathscr{F}_{x}u_{0}(x)\big\}\right\|_{L^{\infty}_{x_{i}}
L^{2}_{\bar{\xi}_{i}}L^{2}_{t}}\\
&=\left\|\mathscr{F}^{-1}_{\xi_{i}}\big\{\mathscr{F}_{x_{i}}\{e^{(i+\varepsilon)t\partial^{2}_{x_{i}}}\}\cdot
\mathscr{F}_{x_{i}}\{\mathscr{F}_{\bar{x}_{i}}u_{0}(x)\}\big\}\right\|_{L^{\infty}_{x_{i}}L^{2}_{\bar{\xi}_{i}}L^{2}_{t}}\\
&\lesssim\|e^{(i+\varepsilon)t\partial^{2}_{x_{i}}}\|_{L^{2}_{x_{i}}L^{2}_{t}}
\|\mathscr{F}_{\bar{x}_{i}}u_{0}(x)\|_{L^{2}_{x_{i},\bar{\xi}_{i}}}\\
&\lesssim 2^{-\frac{k}{2}}\|u_{0}(x)\|_{L^{2}_{x}}.
\end{align*}
Next we assume that the term $u_{0}=0$ (see \cite[Lemma 2.4]{HanW}). We split the spatial Fourier transform into the $x_{i}$ and $\bar{x}_{i}$ variables, and use the Plancherel equality and the convolution theorem to get
\begin{align*}
\|P_{k,\mathbf{e}_{i}}u\|_{L^{\infty,2}_{\mathbf{e}_{i}}}
&\lesssim \left\|\mathscr{F}^{-1}_{\xi_{i}}\mathscr{F}_{t}\bigg\{\int_{-\infty}^{+\infty}
e^{-i(t-s)\xi_{i}^{2}-\varepsilon(t-s)\xi_{i}^{2}}\mathscr{F}_{x}J(u(s,x))ds\bigg\}\right\|_{L^{\infty}_{x_{i}}
L^{2}_{\bar{\xi}_{i}}L^{2}_{\tau}}\\
&\lesssim \left\|\mathscr{F}^{-1}_{\xi_{i}}\bigg\{\mathscr{F}_{t}\big\{e^{-it\xi_{i}^{2}-\varepsilon t\xi_{i}^{2}}\big\}
\cdot \mathscr{F}_{t}\big\{\mathscr{F}_{x}J(u(s,x))\big\}\bigg\}\right\|_{L^{\infty}_{x_{i}}L^{2}_{\bar{\xi}_{i}}L^{2}_{\tau}}\\
&\lesssim \left\|\mathscr{F}^{-1}_{\tau,\xi_{i}}\bigg\{\frac{1}{i(\tau+\xi_{i}^{2})+\varepsilon \xi_{i}^{2}}
\mathscr{F}_{t}\big\{\mathscr{F}_{y}J(u(s,y))\big\}\bigg\}\right\|_{L^{\infty}_{x_{i}}L^{2}_{\bar{\xi}_{i}}L^{2}_{t}}\\
&\lesssim \left\|\int_{y_{i}}K(\tau,z_{i})\mathscr{F}_{t,\bar{y}_{i}}\big\{J(u(s,y_{i},\bar{y}_{i}))\big\}dy_{i}\right\|
_{L^{\infty}_{x_{i}}L^{2}_{\bar{\xi}_{i}}L^{2}_{\tau}}\\
&\lesssim \|K(\tau,z_{i})\|_{L^{\infty}_{\tau,z_{i}}} \left\|\int_{y_{i}}\mathscr{F}_{t,\bar{y}_{i}}
\big\{J(u(s,y_{i},\bar{y}_{i}))\big\}dy_{i} \right\|_{L^{2}_{\tau,\bar{\xi}_{i}}}\\
&\lesssim \|K(\tau,z_{i})\|_{L^{\infty}_{\tau,z_{i}}} \|J(u(t,x))\|_{L^{1,2}_{\mathbf{e}_{i}}},
\end{align*}
where $z_{i}=x_{i}-y_{i}$ and
\begin{align*}
K(\tau,z_{i}):=\int_{\xi_{i}}e^{iz_{i}\xi_{i}}\frac{1}{i(\tau+\xi_{i}^{2})+\varepsilon \xi_{i}^{2}}d\xi_{i}.
\end{align*}
Since the operator $P_{k,\mathbf{e}_{i}}$ is supported in $\xi_{i}\sim 2^{k+9n}$, we can get
\begin{align*}
\|K(\tau,z_{i})\|_{L^{\infty}_{\tau,z_{i}}}
\lesssim \sup_{\tau\neq0}\left|\int_{\xi_{i}}\frac{1}{|\tau+\xi_{i}^{2}|}d\xi_{i}\right|
+\left|\int_{\xi_{i}}\frac{1}{\xi_{i}^{2}}d\xi_{i}\right|
\lesssim 2^{-k}.
\end{align*}
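Here the second integral is computed directly on the support $\xi_{i}\sim 2^{k}$:
\begin{align*}
\left|\int_{\xi_{i}\sim 2^{k}}\frac{1}{\xi_{i}^{2}}d\xi_{i}\right|\sim 2^{k}\cdot 2^{-2k}=2^{-k}.
\end{align*}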
Then, we obtain \eqref{3.13}.
Next, we prove the inequality \eqref{3.12}. Assuming that the term $J(u(t,x))=0$, using the $L^{p}\to L^{p}$ boundedness of the operator $P_{k}$ and combining Lemma \ref{semi-stri}, we have
\begin{align*}
\|P_{k}u\|_{L^{2,\infty}_{\mathbf{e}_{i}}}
&\lesssim \|e^{(\varepsilon+ i)t\Delta}u_{0}\|_{L^{2,\infty}_{\mathbf{e}_{i}}}
\lesssim\|u_{0}\|_{L^{2}_{x}},
\end{align*}
where the last line is obtained in the similar way as Lemma \ref{Leb-linear}.
We assume that the term $u_{0}=0$, decompose $P_{k}u=\sum_{i=1}^{n}U_{i}$ such that $\mathscr{F}_{x}U_{i}$ is supported in $\{|\xi|\sim2^{k}: \xi_{i}\sim 2^{k}\}$ and decompose the term $u(t,x)$ as follows:
\begin{align*}
u(t,x)&=\int_{\mathbb{R}^{n+1}}\frac{e^{it\tau}e^{ix\xi}}{\tau+|\xi|^{2}+i\varepsilon |\xi|^{2}}\hat{J}(\tau,\xi)d\xi d\tau \\
&=\int_{\mathbb{R}^{n+1}}\frac{e^{it\tau}e^{ix\xi}}{\tau+|\xi|^{2}+i\varepsilon |\xi|^{2}}\hat{J}(\tau,\xi)
(1_{\{-\tau-|\bar{\xi}_{i}|^{2}\sim 2^{2k}\}^{c}}+1_{\{-\tau-|\bar{\xi}_{i}|^{2}\sim 2^{2k},\;|\tau+|\xi|^{2}|\lesssim\varepsilon 2^{2k}\}}\\
&\quad+1_{\{-\tau-|\bar{\xi}_{i}|^{2}\sim 2^{2k},\;|\tau+|\xi|^{2}|\gg\varepsilon 2^{2k}\}})d\xi d\tau \\
&=:u_{1}+u_{2}+u_{3}.
\end{align*}
For the term $u_{1}$, applying the Plancherel equality and the boundedness, we get
\begin{align*}
\|\Delta u_{1}\|_{L^{2}_{t,x}}
\lesssim \left\|\frac{|\xi|^{2}}{\tau+|\xi|^{2}+i\varepsilon|\xi|^{2}}\right\|_{L^{\infty}_{\tau,\xi}}\|J(t,x)\|_{L^{2}_{t,x}}
\lesssim \|J(t,x)\|_{L^{2}_{t,x}},\\
\|\partial_{t} u_{1}\|_{L^{2}_{t,x}}
\lesssim \left\|\frac{i\tau}{\tau+|\xi|^{2}+i\varepsilon|\xi|^{2}}\right\|_{L^{\infty}_{\tau,\xi}}\|J(t,x)\|_{L^{2}_{t,x}}
\lesssim \|J(t,x)\|_{L^{2}_{t,x}}.
\end{align*}
By the Sobolev embedding, we deduce that
\begin{align*}
\|u_{1}\|_{L^{2}_{x_{i}}L^{\infty}_{\bar{x}_{i}}}\lesssim \|u_{1}\|_{W^{2,2}_{x_{i}}W^{2,2}_{\bar{x}_{i}}}
\lesssim \|J(t,x)\|_{L^{2}_{x}},
\end{align*}
\begin{align*}
\|u_{1}\|_{L^{\infty}_{t}} \lesssim \|u_{1}\|_{W^{1,2}_{t}} \lesssim \|J(t,x)\|_{L^{2}_{t}},
\end{align*}
and then by applying Bernstein's inequality we obtain
\begin{align*}
\|u_{1}\|_{L^{2,\infty}_{\mathbf{e}_{i}}}\lesssim \|J(t,x)\|_{L^{2}_{t,x}} \lesssim2^{k/2}\|J(t,x)\|_{L^{1,2}_{\mathbf{e}_{i}}}
\lesssim2^{(n-2)k/2}\|J(t,x)\|_{L^{1,2}_{\mathbf{e}_{i}}}.
\end{align*}
For the term $u_{2}$, using Lemma \ref{semi-stri}, we get
\begin{align*}
\|e^{it\tau_{0}}e^{it\Delta}P_{k}u_{2}(0)\|_{L^{2,\infty}_{\mathbf{e}_{i}}}
\lesssim2^{(n-1)k/2}\|P_{k}u_{2}(0)\|_{L^{2}_{t,x}}.
\end{align*}
By Lemma \ref{semi-Banach} and the Plancherel equality, we have
\begin{align*}
&\|P_{k}u_{2}\|_{L^{2,\infty}_{\mathbf{e}_{i}}}\\
&\lesssim 2^{(n-1)k/2}\|P_{k}u_{2}\|_{X^{0,\frac{1}{2},1}}\\
&\lesssim 2^{(n-1)k/2}\sum_{j\leq \log_{2}{(\varepsilon 2^{2k})}}2^{\frac{j}{2}}
\left\|\int_{\mathbb{R}^{n+1}}\frac{e^{it\tau}e^{ix\xi}}{\tau+|\xi|^{2}+i\varepsilon |\xi|^{2}}\hat{J}(\tau,\xi)
1_{\{-\tau-|\overline{\xi_{i}}|^{2}\sim 2^{2k},\;|\tau+|\xi|^{2}|\lesssim\varepsilon 2^{2k}\}}d\xi d\tau\right\|_{L^{2}_{t,x}} \\
&\lesssim 2^{(n-1)k/2}\sum_{j\leq \log_{2}{(\varepsilon 2^{2k})}}2^{\frac{j}{2}} \left\|\frac{1}{\tau+|\xi|^{2}+i\varepsilon |\xi|^{2}}\right\|_{L^{\infty}_{\tau,\xi}}
\|J(t,x)\|_{L^{2}_{t,x}} \\
&\lesssim 2^{(n-1)k/2}\varepsilon^{\frac{1}{2}}2^{k}\cdot\varepsilon^{-1}2^{-2k}\|J(t,x)\|_{L^{2}_{t,x}}\\
&\lesssim \varepsilon^{-\frac{1}{2}}2^{\frac{(n-3)}{2}k}\|J(t,x)\|_{L^{2}_{t,x}}.
\end{align*}
Letting $\lambda:=\sqrt{-\tau-|\bar{\xi}_{i}|^{2}}$, observe that $\tau+|\xi|^{2}=\xi_{i}^{2}-\lambda^{2}=-(\lambda-\xi_{i})(\lambda+\xi_{i})$; thus, on the support of $u_{2}$, we get
$|\lambda+\xi_{i}|\sim 2^{k}$ and $|\lambda-\xi_{i}|\lesssim \varepsilon 2^{k}$, so that $\xi_{i}$ is confined to an interval of length $\sim\varepsilon 2^{k}$. We use Bernstein's inequality with respect to $x_{i}$ to obtain the desired estimate:
\begin{align*}
\varepsilon^{-\frac{1}{2}}2^{\frac{(n-3)}{2}k}\|J(t,x)\|_{L^{2}_{t,x}}
\lesssim 2^{(n-2)k/2}\|J(t,x)\|_{L^{1,2}_{\mathbf{e}_{i}}}.
\end{align*}
For the term $u_{3}$, by Taylor's expansion: {\small
\begin{align*}
\frac{1}{\tau+|\xi|^{2}+i\varepsilon |\xi|^{2}}
&=\frac{1}{\tau+|\xi|^{2}}+\sum_{k=1}^{\infty}\frac{(-i\varepsilon|\xi|^{2})^{k}}{(2\lambda(\xi_{i}-\lambda))^{k+1}}
+\sum_{k=1}^{\infty}\frac{(-i\varepsilon|\xi|^{2})^{k}}{(2\lambda(\xi_{i}-\lambda))^{k+1}}
\left[\left(1+\frac{\lambda-\xi_{i}}{\lambda+\xi_{i}}\right)^{k+1}-1\right]\\
&=:\varphi_{1}(\tau,\xi)+\varphi_{2}(\tau,\xi)+\varphi_{3}(\tau,\xi),
\end{align*}}
one has
\begin{align*}
u_{3}^{j}=\int_{\mathbb{R}^{n+1}}e^{it\tau}e^{ix\xi}\varphi_{j}(\tau,\xi)\hat{J}(\tau,\xi)1_{\{-\tau-|\bar{\xi}_{i}|^{2}\sim 2^{2k},\;|\tau+|\xi|^{2}|\gg\varepsilon 2^{2k}\}} d\xi d\tau,\,\,j=1,2,3.
\end{align*}
For the term $u_{3}^{1}$, this corresponds to the case $\varepsilon=0$ which has been proved in \cite[Lemma 4.1]{IonK}. For the term $u_{3}^{2}$, we separate $\bar{\xi_{i}}$ and $\xi_{i}$ as much as possible,
\begin{align}
\begin{split}
u_{3}^{2}&=\sum_{k=1}^{\infty}(-i\varepsilon)^{k}\int_{\bar{\xi_{i}}\times\tau}e^{it\tau}e^{i\bar{x_{i}}\bar{\xi}_{i}}(2\lambda)^{-k-1}
1_{-\tau-|\bar{\xi}_{i}|^{2}\sim 2^{2k}}\\
&\quad\int_{\xi_{i}}\frac{e^{ix_{i}\xi_{i}}|\xi|^{2k}1_{|\tau+|\xi|^{2}|\gg\varepsilon2^{2k}}}{(\xi_{i}-\lambda)^{k+1}}
\hat{J}(\xi_{i},\bar{\xi}_{i},\tau)d\xi_{i}d\bar{\xi_{i}}d\tau.\label{3.17}
\end{split}
\end{align}
Let
\begin{align*}
K(y_{i},\bar{\xi_{i}},\tau)=\mathscr{F}^{-1}_{\xi_{i}}\bigg\{|\xi|^{2k}\hat{J}(\xi_{i},\bar{\xi}_{i},\tau)
1_{|\lambda+\xi_{i}|\sim2^{k}}\bigg\},
\end{align*}
thus the integral in \eqref{3.17} with respect to $\xi_{i}$ can be simplified to
\begin{align*}
&\int_{\xi_{i}}\frac{e^{ix_{i}\xi_{i}}1_{|\lambda-\xi_{i}|\gg\varepsilon2^{k}}}{(\xi_{i}-\lambda)^{k+1}}
\mathscr{F}_{y_{i}}\{K(y_{i},\bar{\xi}_{i},\tau)\}d\xi_{i}\\
&=\int_{y_{i}}K(y_{i},\bar{\xi}_{i},\tau)\int_{\xi_{i}}\frac{e^{i(x_{i}-y_{i})\xi_{i}}
1_{|\lambda-\xi_{i}|\gg\varepsilon2^{k}}}{(\xi_{i}-\lambda)^{k+1}}d\xi_{i}dy_{i}\\
&\lesssim \int_{y_{i}}\frac{1}{k}\varepsilon^{-k}2^{-k^{2}}e^{ix_{i}\lambda}K(y_{i},\bar{\xi}_{i},\tau)dy_{i},
\end{align*}
which yields that
\begin{align*}
u_{3}^{2}&\lesssim\sum_{k=1}^{\infty}(-1)^{k}i^{k}\frac{1}{k}\int_{\bar{\xi_{i}}\times\tau}e^{it\tau}e^{i\bar{x_{i}}\bar{\xi_{i}}}
(2\lambda)^{-k-1}1_{-\tau-|\bar{\xi}_{i}|^{2}\sim 2^{2k}} \int_{y_{i}}e^{ix_{i}\lambda}K(y_{i},\bar{\xi_{i}},\tau)dy_{i}d\bar{\xi_{i}}d\tau \\
&=:{\Gamma}_{3}^{2}.
\end{align*}
By the Plancherel equality, Bochner's equality and Lemma \ref{semi-stri}, we obtain
\begin{align}
\begin{split}
\|P_{k}u_{3}^{2}\|_{L^{2,\infty}_{\mathbf{e}_{i}}}
&=\left\|\mathscr{F}^{-1}_{\xi}\bigg\{\chi_{k}(\xi)e^{it|\xi|^{2}}\mathscr{F}_{x}\big\{u_{3}^{2}\cdot e^{-it|\xi|^{2}}\big\}\bigg\}\right\|_{L^{2,\infty}_{\mathbf{e}_{i}}} \\
&\lesssim \|e^{it|\xi|^{2}}\|_{L^{\infty}_{t,\xi}}\|e^{it\Delta}P_{k}\Gamma_{3}^{2}\|_{L^{2,\infty}_{\mathbf{e}_{i}}}
\lesssim 2^{(n-1)k/2}\|\Gamma_{3}^{2}\|_{L^{2}_{t,x}} \\
&\lesssim 2^{(n-1)k/2}2^{-2k^{2}-2k}\left\|1_{-\tau-|\bar{\xi}_{i}|^{2}\sim 2^{2k}}\int_{y_{i}}e^{ix_{i}\lambda}
K(y_{i},\bar{\xi_{i}},\tau)dy_{i}\right\|_{L^{2}_{x_{i}}L^{2}_{\bar{\xi_{i}},\tau}} \\
&\lesssim 2^{(n-1)k/2}2^{-2k^{2}-\frac{3}{2}k}|\xi|^{2k}\|J(y_{i},\bar{\xi_{i}},\tau)\|_{L^{1}_{y_{i}}L^{2}_{\bar{\xi_{i}},\tau}} \\
&\lesssim 2^{(n-2)k/2}\|J(u(t,x))\|_{L^{1,2}_{\mathbf{e}_{i}}}.\label{3.19}
\end{split}
\end{align}
For the term $u_{3}^{3}$, we use the mean value theorem of $\varphi_{3}(\tau,\xi)$ to get
\begin{align*}
|\varphi_{3}(\tau,\xi)|
\lesssim \sum_{k=1}^{\infty} \frac{\varepsilon^{k}|\xi|^{2k}(k+1)}{2^{k+1}\lambda^{k+1}|\xi_{i}-\lambda|^{k+1}}
\left(1+\frac{\lambda-\xi_{i}}{\lambda+\xi_{i}}\theta\right)^{k}\left|\frac{\lambda-\xi_{i}}{\lambda+\xi_{i}}\right|
\lesssim \sum_{k=1}^{\infty}\frac{k+1}{2^{2k}},\,\,\theta\in(0,1).
\end{align*}
Since this series obviously converges, the estimate follows by the same method as for the term $u_{1}$.
\end{proof}
\begin{lemma}[$X^{0,b,q}$ space]\label{X-linear}
Assume that $n\geq3$ and that $u(t,x)$ is the solution of \eqref{deri-Ginz} with nonlinear term $J(u(t,x))$; that is, for any $\varepsilon>0$,
\begin{align*}
u_{t}-(\varepsilon+i)\Delta u=J(u(t,x)),\quad u(0,x)=u_{0}.
\end{align*}
Then we have
\begin{align}
&\|P_{k}u\|_{X^{0,\frac{1}{2},\infty}_{+}}
\leq C\big(\|u_{0}\|_{L^{2}_{x}}+\|P_{k}J(u(t,x))\|_{X^{0,-\frac{1}{2},\infty}_{+}}\big),\label{3.14}\\
&\|P_{k}u\|_{Z_{k}}\leq C\big( \varepsilon^{\frac{1}{2}}\|u_{0}\|_{L^{2}_{x}}+2^{-k}\|J(u(t,x))\|_{L^{2}_{t,x}}\big),\label{3.15}
\end{align}
where the constant $C>0$ is independent of $\varepsilon$ and $k$.
\end{lemma}
\begin{proof}
First, we show the second inequality \eqref{3.15}. By the definition of the norm $Z_{k}$ and the fact that $|\xi|\sim 2^{k}$, one has
\begin{align*}
\|P_{k}u\|_{Z_{k}}
&=2^{-k}\left\|-\mathscr{F}^{-1}_{\xi}\{\varepsilon|\xi|^{2}\chi_{k}(\xi)\hat{u}(\xi)\}
+\mathscr{F}^{-1}_{\xi}\{\chi_{k}(\xi)\hat{J}(u(s,\xi))e^{-(i+\varepsilon)|\xi|^{2}(t-s)}\}\right\|_{L^{2}_{t,x}}\\
&\lesssim \varepsilon 2^{k}\|P_{k}u\|_{L^{2}_{t,x}}+2^{-k}\|P_{k}J(u(t,x))\|_{L^{2}_{t,x}}.
\end{align*}
We estimate the first term by the Plancherel equality and H\"{o}lder's inequality:
\begin{align*}
\varepsilon 2^{k}\|P_{k}u\|_{L^{2}_{t,x}}
&\lesssim \varepsilon 2^{k}\|P_{k}e^{(i+\varepsilon)t\Delta}u_{0}\|_{L^{2}_{t,x}}
+\varepsilon 2^{k} \left\|P_{k}\int_{0}^{t}e^{(i+\varepsilon)(t-s)\Delta}J(u(s))ds\right\|_{L^{2}_{t,x}}\\
&\lesssim \varepsilon 2^{k}\left[\int_{0}^{\infty}\|e^{-(i+\varepsilon)t|\xi|^{2}}\|^{2}_{L^{\infty}_{\xi}}dt\right]^{\frac{1}{2}}
\|P_{k}u_{0}\|_{L^{2}_{x}}\\
&\quad+\varepsilon 2^{k}\left\|\mathscr{F}^{-1}_{\xi}\big\{\chi_{k}(\xi)\sup_{s\in[0,t]}|\hat{J}(s,\xi)|\big\}
\int_{0}^{t}e^{-(i+\varepsilon)(t-s)|\xi|^{2}}ds\right\|_{L^{2}_{t,x}}\\
&\lesssim \varepsilon 2^{k}\varepsilon^{-\frac{1}{2}}2^{-k}\|P_{k}u_{0}\|_{L^{2}_{x}}
+\varepsilon 2^{k}\varepsilon^{-1}2^{-2k}\|P_{k}J(u(t,x))\|_{L^{2}_{t,x}}\\
&\lesssim \varepsilon^{\frac{1}{2}}\|u_{0}\|_{L^{2}_{x}}+2^{-k}\|J(u(t,x))\|_{L^{2}_{t,x}},
\end{align*}
thus we get \eqref{3.15}.
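The two time integrals used in the previous display are elementary; on the support $|\xi|\sim 2^{k}$,
\begin{align*}
\left[\int_{0}^{\infty}e^{-2\varepsilon t|\xi|^{2}}dt\right]^{\frac{1}{2}}=\frac{1}{\sqrt{2\varepsilon}\,|\xi|}\sim \varepsilon^{-\frac{1}{2}}2^{-k},
\qquad
\left|\int_{0}^{t}e^{-(i+\varepsilon)(t-s)|\xi|^{2}}ds\right|\leq \frac{1}{\varepsilon|\xi|^{2}}\sim \varepsilon^{-1}2^{-2k}.
\end{align*}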
Next, we want to show \eqref{3.14}. Assume that $J(u(t,x))=0$. Since the $X^{0,b,q}$ norm involves the Fourier transform in time, we extend $u$ to all $t\in\mathbb{R}$ by setting $\tilde{u}=e^{it\Delta+\varepsilon|t|\Delta}u_{0}$. By the Plancherel equality, we obtain
\begin{align*}
\|P_{k}u\|_{X^{0,\frac{1}{2},\infty}_{+}}
&=\sup_{j\in \mathbb{Z}}2^{\frac{j}{2}}\left\|\chi_{j}(\tau+|\xi|^{2})\chi_{k}(\xi)\hat{u}_{0}(\xi)
\mathscr{F}_{t}\big\{e^{-it|\xi|^{2}-\varepsilon|t||\xi|^{2}}\big\}\right\|_{L^{2}_{\tau,\xi}}\\
&\lesssim \sup_{j\in \mathbb{Z}}2^{\frac{j}{2}}\left\|\chi_{k}(\xi)\hat{u}_{0}(\xi)\left\|\chi_{j}(\tau+|\xi|^{2})
\frac{\varepsilon|\xi|^{2}}{(\varepsilon|\xi|^{2})^{2}+(\tau+|\xi|^{2})^{2}}\right\|_{L^{2}_{\tau}}\right\|_{L^{2}_{\xi}}\\
&\lesssim \|\chi_{k}(\xi)\hat{u}_{0}(\xi)\|_{L^{2}_{\xi}}
=\|u_{0}\|_{L^{2}_{x}}.
\end{align*}
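The last step above holds uniformly in $j$ and $\xi$: setting $a:=\varepsilon|\xi|^{2}$ and $s:=\tau+|\xi|^{2}$, one checks, by distinguishing the cases $2^{j}\leq a$ and $2^{j}\geq a$,
\begin{align*}
2^{j}\int_{|s|\sim 2^{j}}\frac{a^{2}}{(a^{2}+s^{2})^{2}}ds
\lesssim 2^{j}\min\big(2^{j}a^{-2},\,a^{2}2^{-3j}\big)\lesssim 1.
\end{align*}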
We assume that $u_{0}=0$ and we define $\tilde{u}=\mathscr{F}^{-1}_{\tau,\xi}\frac{1}{\tau+|\xi|^{2}+i\varepsilon|\xi|^{2}}\mathscr{F}_{t,x}\tilde{J}(u(t,x))$,
where the term $\tilde{J}(u(t,x))$ is an extension of the term $J(u(t,x))$, chosen to agree with $J(u(t,x))$ for $t\geq0$ and such that
\begin{align*}
\|P_{k}u\|_{X^{0,\frac{1}{2},\infty}_{+}}
&\lesssim \|P_{k}\tilde{u}\|_{X^{0,\frac{1}{2},\infty}}.
\end{align*}
Similar to the argument of \eqref{3.2}, we get \eqref{3.14}. The proof of Lemma \ref{X-linear} is completed.
\end{proof}
\begin{proposition}[Linear estimate]\label{pro linear-es}
Assume that $n\geq3$ and that $u(t,x)$ is the solution of \eqref{deri-Ginz} with nonlinear term $J(u(t,x))$; that is, for any $\varepsilon>0$,
\begin{align*}
u_{t}-(\varepsilon+i)\Delta u=J(u(t,x)),\quad u(0,x)=u_{0}.
\end{align*}
Then we have
\begin{align}
\|u(t,x)\|_{F^{\frac{n}{2}}\cap Z^{\frac{n}{2}}}\leq C\big(\|u_{0}\|_{\dot{B}^{\frac{n}{2}}_{2,1}}+\|J(u(t,x))\|_{N^{\frac{n}{2}}}\big),\label{3.1}
\end{align}
where the constant $C>0$ is independent of $\varepsilon$.
\end{proposition}
\begin{proof}
Combining Lemmas \ref{Leb-linear}--\ref{X-linear}, we can get the desired result of Proposition \ref{pro linear-es}.
\end{proof}
\section{The nonlinear estimate }\label{Sec4}
In this section, we establish some nonlinear estimates, where the nonlinear parts of the complex derivative Ginzburg-Landau equation are given by
\begin{align*}
J(u(t,x))&=-(1+i)(v\cdot\nabla)u-\frac{2(\varepsilon+i)\bar{u}(\nabla u)^{2}}{1+|u|^{2}}
-\frac{{\rm{Im}}F-i{\rm{Re}}H}{1+|u|^{2}}\\
&=:J_{1}(u(t,x))+J_{2}(u(t,x))+J_{3}(u(t,x)),
\end{align*}
where
\begin{align*}
F(u,\bar{u}):=\frac{2(1+\bar{u}^{2})(1-|u|^2)(v\cdot\nabla)u}{1+|u|^{2}},\quad
H(u,\bar{u}):=\frac{4(v\cdot\nabla)u(\bar{u}^{2}+|u|^{2})}{1+|u|^{2}}.
\end{align*}
\begin{lemma}[\cite{Guo}, Lemma 5.4]\label{Qjk-bdd}
$(1)$\;Assume that $\Omega$ is a time-space translation-invariant Banach space. If $j,k\in\mathbb{Z}$ and $j\geq2k-100$, then $Q_{\leq j}P_{k}$ is bounded on $\Omega$ with a bound independent of $j,k$.
$(2)$\;For any $j,k\in\mathbb{Z}$ and $1\leq p\leq\infty$, $Q_{\leq j}P_{k,\mathbf{e}}$ is bounded on $L^{p,2}_{\mathbf{e}}$ with a bound independent of $j,k$.
$(3)$\;For any $j\in\mathbb{Z}$ and $1\leq p\leq\infty$, $Q_{\leq j}$ is bounded on $L^{p}_{t}L^{2}_{x}$ with a bound independent of $j$.
\end{lemma}
We need to estimate the nonlinear term $J(u(t,x))$ in the space $N^{n/2}$, splitting the $N^{n/2}$ norm into two parts, namely
the $L^{2}_{t,x}$ norm and the piecewise-defined norm denoted by $N^{*}_{k}$.
Notice that the main structure of the nonlinear parts is $u^{2}(\nabla u)$ and $u(\nabla u)^{2}$; we then establish the $L^{2}_{t,x}$- and $N^{*}_{k}$-estimates of both terms, respectively.
\begin{lemma}\label{Lemma4.3}
Assume that $n\geq3$, for any $k_{j}\in\mathbb{Z},\;j=1,2,3$, we have
\begin{align}
\sum_{k_{j}}2^{k_{3}(n-2)/2}\left\|P_{k_{3}}\left[\sum_{i=1}^{n}v_{i}\partial_{x_{i}}w(P_{k_{1}}fP_{k_{2}}g)\right]\right\|
_{L^{2}_{t,x}}\leq C\|v\|_{L^{\infty}_{t,x}}\|w\|_{Y^{n/2}} \|f\|_{F^{n/2}} \|g\|_{F^{n/2}},\label{4.3}
\end{align}
where the constant $C>0$ is independent of $\varepsilon$ and $k_{j}$.
\end{lemma}
\begin{proof}
By the symmetry of $k_{1},k_{2}$, we assume that $k_{1}\leq k_{2}$. Then we get
\begin{align*}
&\sum_{k_{j}}2^{k_{3}(n-2)/2}\left\|P_{k_{3}}\left[\sum_{i=1}^{n}v_{i}\partial_{x_{i}}w(P_{k_{1}}fP_{k_{2}}g)\right]\right\|_{L^{2}_{t,x}}\\
&\leq\sum_{k_{j}}2^{k_{3}(n-2)/2}\left\|P_{k_{3}}\left[\sum_{i=1}^{n}v_{i}\partial_{x_{i}}P_{\leq k_{1}+k_{2}-10}w
(P_{k_{1}}fP_{k_{2}}g)\right]\right\|_{L^{2}_{t,x}}\\
&\quad+\sum_{k_{j}}2^{k_{3}(n-2)/2}\left\|P_{k_{3}}\left[\sum_{i=1}^{n}v_{i}\partial_{x_{i}}P_{\geq k_{1}+k_{2}-9}
w(P_{k_{1}}fP_{k_{2}}g)\right]\right\|_{L^{2}_{t,x}}\\
&=:I_1+I_2.
\end{align*}
Firstly, we estimate the term $I_1$. Assume that $k_{3}\geq k_{1}+5$. We use Bernstein's inequality and H\"{o}lder's inequality to get
\begin{align}
\begin{split}
I_1
&\lesssim \sum_{k_{j}}2^{k_{3}(n-2)/2}2^{k_{3}n/2}\left\|P_{k_{3}}\left[\sum_{i=1}^{n}v_{i}\partial_{x_{i}}P_{\leq k_{1}+k_{2}-10}w
(P_{k_{1}}fP_{k_{2}}g)\right]\right\|_{L^{2}_{t}L^{1}_{x}} \\
&\lesssim \sum_{k_{j}}2^{k_{3}(n-2)/2}2^{k_{3}n/2}\left\|P_{k_{3}}\left[\sum_{i=1}^{n}v_{i}\partial_{x_{i}}
P_{\leq k_{1}+k_{2}-10}w\right]\right\|_{L^{\infty}_{t}L^{2}_{x}}
\left\|\widetilde{P_{k_{3}}}(P_{k_{1}}fP_{k_{2}}g)\right\|_{L^{2}_{t,x}}.\label{j>k1+k2}
\end{split}
\end{align}
Notice that the operator $P_{k_{3}}$ acts on both the term $P_{\leq k_{1}+k_{2}-10}w$ and the term $P_{k_{1}}fP_{k_{2}}g$ through their frequency supports, and the operator $\widetilde{P_{k_{3}}}$ has a slightly larger frequency support than the operator $P_{k_{3}}$. Hence the norm can only increase when the enlarged projection is applied to $P_{k_{1}}fP_{k_{2}}g$ (more precisely, the compact Fourier support becomes larger, which we describe in terms of the frequency).
We use convolution properties and Plancherel's equality to obtain
\begin{align*}
&\left\|P_{k_{3}}\left[\sum_{i=1}^{n}v_{i}\partial_{x_{i}}
P_{\leq k_{1}+k_{2}-10}w\right]\right\|_{L^{\infty}_{t}L^{2}_{x}}\\
&\lesssim2^{k_{1}+k_{2}}\|v\|_{L^{\infty}_{t,x}}\left\|\chi_{k_{3}}(\xi)\chi_{\leq k_{1}+k_{2}-10}(\xi)
\hat{w}(\xi)\right\|_{L^{\infty}_{t}L^{2}_{\xi}} \\
&\lesssim2^{k_{1}+k_{2}}\|v\|_{L^{\infty}_{t,x}}\|P_{k_{3}}w\|_{L^{\infty}_{t}L^{2}_{x}}.
\end{align*}
Thus, we have
\begin{align*}
I_1
&\lesssim\sum_{k_{j}}2^{k_{3}n-k_{3}}2^{k_{1}+k_{2}}\|v\|_{L^{\infty}_{t,x}}\|P_{k_{3}}w\|_{L^{\infty}_{t}L^{2}_{x}}
2^{k_{1}(n-2)/2}\|P_{k_{1}}f\|_{L^{2}_{t}L^{\frac{2n}{n-2}}_{x}}\|P_{k_{2}}g\|_{L^{\infty}_{t}L^{2}_{x}}\\
&\lesssim\sum_{k_{j}}2^{(k_{3}-k_{2})(n-1)/2}\|w\|_{Y^{n/2}} \|f\|_{F^{n/2}} \|g\|_{F^{n/2}}.
\end{align*}
Since $k_{3}\geq k_{1}+5$ and $k_{1}\leq k_{2}$, the frequency supports in the product must match, so $|k_{3}-k_{2}|\leq 5$. On the other hand, when $k_{3}\leq k_{1}+4$, we have $k_{3}-k_{2}\leq 4$, and this case can be estimated in a similar way. Therefore, we obtain $I_1\leq C\|v\|_{L^{\infty}_{t,x}}\|w\|_{Y^{n/2}} \|f\|_{F^{n/2}} \|g\|_{F^{n/2}}$.
Secondly, we estimate the term $I_2$. Assuming that $k_{3}\geq k_{1}+5$, using H\"{o}lder's inequality, Bernstein's inequality and convolution properties, it holds that
\begin{align*}
I_2&\lesssim\sum_{k_{j}}2^{k_{3}(n-2)/2}\left\|\sum_{i=1}^{n}v_{i}\partial_{x_{i}}P_{\geq k_{1}+k_{2}-9}
w\right\|_{L^{\infty}_{t,x}}\left\|P_{k_{1}}fP_{k_{2}}g\right\|_{L^{2}_{t,x}} \\
&\lesssim\sum_{k_{j}}2^{k_{3}(n-2)/2}\sum_{i=1}^{n}\|v\|_{L^{\infty}_{t,x}}\|P_{k_{1}+k_{2}}\partial_{x_{i}}w\|_{L^{\infty}_{t,x}}
\|P_{k_{1}}f\|_{L^{2}_{t}L^{\infty}_{x}}\|P_{k_{2}}g\|_{L^{\infty}_{t}L^{2}_{x}} \\
&\lesssim\sum_{k_{j}}2^{k_{3}(n-2)/2}2^{k_{1}+k_{2}}\|v\|_{L^{\infty}_{t,x}}2^{(k_{1}+k_{2})n/2}\|P_{k_{1}+k_{2}}w\|_{L^{\infty}_{t}L^{2}_{x}}\\
&\quad\cdot2^{k_{1}(n-2)/2}\|P_{k_{1}}f\|_{L^{2}_{t}L^{\frac{2n}{n-2}}_{x}}\|P_{k_{2}}g\|_{L^{\infty}_{t}L^{2}_{x}} \\
&\lesssim\sum_{k_{j}}2^{(k_{3}-k_{2})(n/2-1)}\|v\|_{L^{\infty}_{t,x}}\|w\|_{Y^{n/2}} \|f\|_{F^{n/2}} \|g\|_{F^{n/2}},
\end{align*}
where we have used the following fact in the second line:
\begin{align*}
&\sum_{i=1}^{n}\|v_{i}\partial_{x_{i}}P_{\geq k_{1}+k_{2}-9}w\|_{L^{\infty}_{t,x}}\\
&\lesssim\|v\|_{L^{\infty}_{t,x}}\sum_{i=1}^{n}\sum_{j\geq k_{1}+k_{2}-9}
\|\partial_{x_{i}}P_{j}w\|_{L^{\infty}_{t,x}}\\
&\lesssim\|v\|_{L^{\infty}_{t,x}}\sum_{i=1}^{n}\sum_{k_{1}+k_{2}\in\mathbb{Z}}
\|\partial_{x_{i}}P_{k_{1}+k_{2}}w\|_{L^{\infty}_{t,x}}.
\end{align*}
When $k_{3}\leq k_{1}+4$, we have $k_{3}-k_{2}\leq 4$. This case can be estimated in a similar way. Hence, $I_2\leq C\|v\|_{L^{\infty}_{t,x}}\|w\|_{Y^{n/2}} \|f\|_{F^{n/2}} \|g\|_{F^{n/2}}$. The proof of \eqref{4.3} is finished.
\end{proof}
\begin{lemma}\label{Lemma4.4}
Assume that $n\geq3$, for any $k_{j}\in\mathbb{Z},\;j=1,2,3$, we have
\begin{align}
\sum_{k_{j}}2^{k_{3}n/2}\left\|P_{k_{3}}\left[\sum_{i=1}^{n}v_{i}\partial_{x_{i}}w(P_{k_{1}}fP_{k_{2}}g)\right]\right\|
_{N^{*}_{k_{3}}}\leq C\|v\|_{L^{\infty}_{t,x}}\|w\|_{Y^{n/2}} \|f\|_{F^{n/2}\cap Z^{n/2}} \|g\|_{F^{n/2}\cap Z^{n/2}},\label{4.4}
\end{align}
where the constant $C>0$ is independent of $\varepsilon$ and $k_{j}$.
\end{lemma}
\begin{proof}
By the symmetry of $k_{1}, k_{2}$, we assume that $k_{1}\leq k_{2}$. By decomposing the operator $P_{k_{3}}$, we get
\begin{align*}
&\sum_{k_{j}}2^{k_{3}n/2}\left\|P_{k_{3}}\left[\sum_{i=1}^{n}v_{i}\partial_{x_{i}}w(P_{k_{1}}fP_{k_{2}}g)\right]\right\|_{N^{*}_{k_{3}}}\\
&\leq\sum_{k_{j}}2^{k_{3}n/2}\left\|P_{k_{3}}\left[\sum_{i=1}^{n}v_{i}\partial_{x_{i}}P_{\leq k_{1}+k_{2}}w
(P_{k_{1}}fP_{k_{2}}g)\right]\right\|_{N^{*}_{k_{3}}}\\
&\quad+\sum_{k_{j}}2^{k_{3}n/2}\left\|P_{k_{3}}\left[\sum_{i=1}^{n}v_{i}\partial_{x_{i}}P_{\geq k_{1}+k_{2}}
w(P_{k_{1}}fP_{k_{2}}g)\right]\right\|_{N^{*}_{k_{3}}}\\
&=:J_1+J_2.
\end{align*}
We first estimate the term $J_2$ by decomposing the frequency. Assume that $k_{3}\geq k_{1}+20$. In order to estimate more accurately, we further assume that $-k_{2}\leq 10$; in this case the norm on $N^{*}_{k_{3}}$ is $L_{\mathbf{e}_{j}}^{1,2}$. Then, we use H\"{o}lder's inequality and Bernstein's inequality to get{\small
\begin{align*}
&\sum_{k_{j}}2^{k_{3}n/2-k_{3}/2}\left\|P_{k_{3}}\left[\sum_{i=1}^{n}v_{i}\partial_{x_{i}}P_{\geq k_{1}+k_{2}}w
(P_{k_{1}}fP_{k_{2}}g)\right]\right\|_{L_{\mathbf{e}_{j}}^{1,2}} \\
&\lesssim \sum_{k_{j}}2^{k_{3}(n-1)/2+k_{1}+k_{2}}\|v\|_{L^{\infty}_{t,x}}\|P_{k_{1}+k_{2}}wP_{k_{2}}g\|_{L^{2}_{t,x}}
\|P_{k_{1}}f\|_{L_{\mathbf{e}_{j}}^{2,\infty}} \\
&\lesssim\sum_{k_{j}}2^{k_{3}(n-1)/2+k_{1}+k_{2}}\|v\|_{L^{\infty}_{t,x}}2^{(k_{1}+k_{2})(n-1)/2-k_{2}/2+k_{1}(n-1)/2}
\|P_{k_{1}+k_{2}}w\|_{F_{k_{1}+k_{2}}}\|P_{k_{2}}g\|_{F_{k_{2}}}\|P_{k_{1}}f\|_{F_{k_{1}}}\\
&\lesssim \sum_{k_{j}}2^{(k_{3}-k_{2})(n-1)/2-k_{2}/2}
\|v\|_{L^{\infty}_{t,x}}\|P_{k_{1}+k_{2}}w\|_{Y^{n/2}}\|P_{k_{1}}f\|_{F^{n/2}\cap Z^{n/2}}\|P_{k_{2}}g\|_{F^{n/2}\cap Z^{n/2}}.
\end{align*}}
On the other hand, assuming that $-k_{2}\geq 9$ and decomposing the modulation operator, we have
\begin{align*}
J_2&\leq \sum_{k_{j}}2^{k_{3}n/2}\left\|P_{k_{3}}\left[\sum_{i=1}^{n}v_{i}\partial_{x_{i}}P_{\geq k_{1}+k_{2}-9}Q_{\geq k_{1}+k_{2}-10}
w(P_{k_{1}}fP_{k_{2}}g)\right]\right\|_{N^{*}_{k_{3}}}\\
&\quad+\sum_{k_{j}}2^{k_{3}n/2}\left\|P_{k_{3}}\left[\sum_{i=1}^{n}v_{i}\partial_{x_{i}}P_{\geq k_{1}+k_{2}-9}Q_{\leq k_{1}+k_{2}-11}
w(P_{k_{1}}fP_{k_{2}}g)\right]\right\|_{N^{*}_{k_{3}}}\\
&=:J_{21}+J_{22}.
\end{align*}
It is easy to verify that the Littlewood--Paley projectors $P_{k}$ and the modulation projectors $Q_{k}$ commute (both are Fourier multiplier operators), i.e.,
\begin{align*}
P_{k}Q_{k}u(x,t)=Q_{k}P_{k}u(x,t).
\end{align*}
Now, we estimate the term $J_{21}$. Noticing that the norm is $L_{t}^{1}L_{x}^{2}$ on $N^{*}_{k_{3}}$, employing H\"{o}lder's inequality, Minkowski's inequality and Bernstein's inequality, one has {\small
\begin{align*}
J_{21}
&=\sum_{k_{j}}2^{k_{3}n/2}\left\|P_{k_{3}}\left[\sum_{i=1}^{n}v_{i}\partial_{x_{i}}P_{\geq k_{1}+k_{2}-9}Q_{\geq k_{1}+k_{2}-10}
w(P_{k_{1}}fP_{k_{2}}g)\right]\right\|_{L_{t}^{1}L_{x}^{2}}\\
&\lesssim\sum_{k_{j}}2^{k_{3}n/2}\|v\|_{L^{\infty}_{t,x}}\sum_{i=1}^{n}\sum_{k_{1}+k_{2}\in\mathbb{Z}}\|P_{k_{1}+k_{2}}
Q_{\geq k_{1}+k_{2}-10}\partial_{x_{i}}w\|_{_{L_{t}^{2}L_{x}^{\infty}}}
\|P_{k_{1}}f\|_{L_{\mathbf{e}_{j}}^{2,\infty}}\|P_{k_{2}}g\|_{L_{\mathbf{e}_{j}}^{\infty,2}} \\
&\lesssim \sum_{k_{j}}2^{k_{3}n/2}\|v\|_{L^{\infty}_{t,x}}2^{(k_{1}+k_{2})n/2}2^{k_{1}+k_{2}}\sum_{j\geq k_{1}+k_{2}-10}\|Q_{j}(P_{k_{1}+k_{2}}w)\|_{_{L_{t,x}^{2}}}\|P_{k_{1}}f\|_{L_{\mathbf{e}_{j}}^{2,\infty}}
\|P_{k_{2}}g\|_{L_{\mathbf{e}_{j}}^{\infty,2}} \\
&\lesssim \sum_{k_{j}}2^{k_{3}n/2+(k_{1}+k_{2})n/2+k_{1}+k_{2}}\|v\|_{L^{\infty}_{t,x}}
\left[\sum_{j\geq k_{1}+k_{2}-10}\left(2^{j}\|Q_{j}(P_{k_{1}+k_{2}}w)\|_{_{L_{t,x}^{2}}}\right)^{2}\right]^{\frac{1}{2}}\\
&\quad\left[\sum_{j\geq k_{1}+k_{2}-10}2^{-2j}\right]^{\frac{1}{2}}
\|P_{k_{1}}f\|_{L_{\mathbf{e}_{j}}^{2,\infty}}\|P_{k_{2}}g\|_{L_{\mathbf{e}_{j}}^{\infty,2}}\\
&\lesssim\sum_{k_{j}}2^{k_{3}n/2+(k_{1}+k_{2})n/2+k_{1}+k_{2}}\|v\|_{L^{\infty}_{t,x}}2^{-(k_{1}+k_{2})}
\|P_{k_{1}+k_{2}}w\|_{X^{0,1,2}}\|P_{k_{1}}f\|_{L_{\mathbf{e}_{j}}^{2,\infty}}\|P_{k_{2}}g\|_{L_{\mathbf{e}_{j}}^{\infty,2}} \\
&\lesssim \sum_{k_{j}}2^{(k_{3}-k_{2})n/2+(k_{1}+k_{2})/2}
\|v\|_{L^{\infty}_{t,x}}\|P_{k_{1}+k_{2}}w\|_{Y^{n/2}}\|P_{k_{1}}f\|_{F^{n/2}}\|P_{k_{2}}g\|_{F^{n/2}}.
\end{align*}}
\!\!For the term $J_{22}$, divide $P_{k_{1}}fP_{k_{2}}g$ into two parts by modulation projectors and then we obtain {\small
\begin{align*}
J_{22}
&\leq \sum_{k_{j}}2^{k_{3}n/2}\left\|P_{k_{3}}\left[\sum_{i=1}^{n}v_{i}\partial_{x_{i}}P_{\geq k_{1}+k_{2}-9}
Q_{\leq k_{1}+k_{2}-10}wQ_{\geq k_{1}+k_{2}+40}(P_{k_{1}}fP_{k_{2}}g)\right]\right\|_{N^{*}_{k_{3}}}\\
&\quad+\sum_{k_{j}}2^{k_{3}n/2}\left\|P_{k_{3}}\left[\sum_{i=1}^{n}v_{i}\partial_{x_{i}}
P_{\geq k_{1}+k_{2}-9}Q_{\leq k_{1}+k_{2}-11}wQ_{\leq k_{1}+k_{2}-39}(P_{k_{1}}fP_{k_{2}}g)\right]\right\|_{N^{*}_{k_{3}}}\\
&=:J_{221}+J_{222}.
\end{align*}}
Next, we estimate the term $J_{221}$. Similarly, one has {\small
\begin{align*}
J_{221}
&=\sum_{k_{j}}2^{k_{3}n/2}\left\|P_{k_{3}}\left[\sum_{i=1}^{n}v_{i}\partial_{x_{i}}P_{\geq k_{1}+k_{2}-9}
Q_{\leq k_{1}+k_{2}-10}wQ_{\geq k_{1}+k_{2}+40}(P_{k_{1}}fP_{k_{2}}g)\right]\right\|_{X_{+}^{0,-\frac{1}{2},1}} \\
&\lesssim \sum_{k_{j}}2^{k_{3}n/2}\sum_{j\geq k_{1}+k_{2}}2^{-\frac{j}{2}}\left\|P_{k_{3}}\left(\sum_{i=1}^{n}v_{i}
\partial_{x_{i}}P_{\geq k_{1}+k_{2}-9}Q_{\leq k_{1}+k_{2}-10}wQ_{\geq k_{1}+k_{2}+40}
(P_{k_{1}}fP_{k_{2}}g)\right)\right\|_{L_{t,x}^{2}} \\
&\lesssim \sum_{k_{j}}2^{k_{3}n/2}2^{k_{1}+k_{2}}\|v\|_{L^{\infty}_{t,x}}2^{(k_{1}+k_{2})n/2}\|P_{k_{1}+k_{2}}Q_{\leq k_{1}+k_{2}-10}w\|
_{L^{\infty}_{t}L^{2}_{x}}\|P_{k_{1}}fP_{k_{2}}g\|_{L^{2}_{t,x}} \\
&\lesssim \sum_{k_{j}}2^{k_{3}n/2}2^{(k_{1}+k_{2})(n+1)/2}\|v\|_{L^{\infty}_{t,x}}\|P_{k_{1}+k_{2}}w\|_{Y_{k_{1}+k_{2}}}
2^{(n-1)k_{1}/2}\|P_{k_{1}}f\|_{F_{k_{1}}}2^{-k_{2}/2}\|P_{k_{2}}g\|_{F_{k_{2}}} \\
&\lesssim \sum_{k_{j}}2^{(k_{3}-k_{2})n/2}\|v\|_{L^{\infty}_{t,x}}
\|P_{k_{1}+k_{2}}w\|_{Y^{n/2}}\|P_{k_{1}}f\|_{F^{n/2}}\|P_{k_{2}}g\|_{F^{n/2}},
\end{align*}}
\!\!where the boundedness of the operators $Q_{k},\;Q_{\geq k}$ on the space $L^{2}_{t,x}$ follows from the Plancherel theorem, and the operator $Q_{\leq k}$ is bounded on the space $L^{p}_{t}L^{2}_{x}$ by Lemma \ref{Qjk-bdd}. According to the frequency supports of the operators $Q_{\leq k_{1}+k_{2}-10}$ and $Q_{\geq k_{1}+k_{2}+40}$, the restriction $j\geq k_{1}+k_{2}$ is obvious.
For the term $J_{222}$, by the fact that the norm on $N^{*}_{k_{3}}$ is $L^{1,2}_{\mathbf{e}_{j}}$, we have{\small
\begin{align}
\begin{split}
&J_{222}=\sum_{k_{j}}2^{k_{3}(n-1)/2}\left\|P_{k_{3}}\left[\sum_{i=1}^{n}v_{i}\partial_{x_{i}}P_{\geq k_{1}+k_{2}-9}
Q_{\leq k_{1}+k_{2}-10}wQ_{\leq k_{1}+k_{2}+39}(P_{k_{1}}fP_{k_{2}}g)\right]\right\|_{L^{1,2}_{\mathbf{e}_{j}}} \\
&\lesssim \sum_{k_{j}}2^{k_{3}(n-1)/2}\left\|P_{k_{3}}\left[\sum_{i=1}^{n}v_{i}\partial_{x_{i}}P_{\geq k_{1}+k_{2}-9}
Q_{\leq k_{1}+k_{2}-10}w\right]\right\|_{L^{2,\infty}_{\mathbf{e}_{j}}}\left\|\widetilde{P_{k_{3}}}[Q_{\leq k_{1}+k_{2}+39}(P_{k_{1}}fP_{k_{2}}g)]\right\|_{L^{2}_{t,x}} \\
&\lesssim \sum_{k_{j}}2^{k_{3}(n-1)}\|v\|_{L^{\infty}_{t,x}}\left\|P_{k_{3}}\left[\sum_{i=1}^{n}
\partial_{x_{i}}P_{\geq k_{1}+k_{2}-9}Q_{\leq k_{1}+k_{2}-10}w\right]\right\|_{L^{\infty}_{t}L^{2}_{x}}
\|P_{k_{1}}fP_{k_{2}}g\|_{L^{2}_{t,x}} \\
&\lesssim \sum_{k_{j}}2^{k_{3}(n-1)}\|v\|_{L^{\infty}_{t,x}}\|P_{k_{3}}(P_{k_{1}+k_{2}}\partial_{x_{i}}w)\|_{L^{\infty}_{t}L^{2}_{x}}
\|P_{k_{1}}fP_{k_{2}}g\|_{L^{2}_{t,x}} \\
&\lesssim \sum_{k_{j}}2^{k_{3}(n-1)}\|v\|_{L^{\infty}_{t,x}}2^{k_{1}+k_{2}}\|P_{k_{3}}w\|_{L^{\infty}_{t}L^{2}_{x}}
2^{(n-1)k_{1}/2}\|P_{k_{1}}f\|_{{L^{\infty}_{t}L^{\frac{2n}{2n-2}}_{x}}}\|P_{k_{2}}g\|_{{L^{\infty}_{t}L^{2}_{x}}} \\
&\lesssim \sum_{k_{j}}2^{(k_{3}-k_{2})(n/2-1)}\|v\|_{L^{\infty}_{t,x}}
\|P_{k_{3}}w\|_{Y^{n/2}}\|P_{k_{1}}f\|_{F^{n/2}}\|P_{k_{2}}g\|_{F^{n/2}},\label{4.11}
\end{split}
\end{align}}
\!\!where Bernstein's inequality with respect to $\bar{x}_{j}$ has been used in the third line of \eqref{4.11}, and Lemma \ref{Qjk-bdd} together with \eqref{j>k1+k2} in the fourth line.
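Here and below, Bernstein's inequality refers to the standard frequency-localized bound (recorded for the reader's convenience)
\begin{align*}
\|P_{k}f\|_{L^{q}_{x}}\lesssim 2^{kn\left(\frac{1}{p}-\frac{1}{q}\right)}\|P_{k}f\|_{L^{p}_{x}},
\qquad 1\leq p\leq q\leq\infty,
\end{align*}
together with its analogue in the transverse variables $\bar{x}_{j}$, in which case $n$ is replaced by the number of variables involved.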
On the other hand, assume that $k_{3}\leq k_{1}+19$ and repeat the above arguments, with the thresholds in the modulation decomposition adjusted accordingly. Then we obtain the desired result $J_{2}\leq C \|v\|_{L^{\infty}_{t,x}}\|w\|_{Y^{n/2}} \|f\|_{F^{n/2}\cap Z^{n/2}} \|g\|_{F^{n/2}\cap Z^{n/2}}$.
Next, we estimate the term $J_1$. Assume first that $k_{3}\geq k_{1}+20$; in this case the norm on $N^{*}_{k_{3}}$ is $L^{1,2}_{\mathbf{e}_{j}}$, which yields
\begin{align*}
J_1
&=\sum_{k_{j}}2^{k_{3}n/2}2^{-k_{3}/2}\left\|P_{k_{3}}\left[\sum_{i=1}^{n}v_{i}\partial_{x_{i}}P_{\leq k_{1}+k_{2}}w
(P_{k_{1}}fP_{k_{2}}g)\right]\right\|_{L^{1,2}_{\mathbf{e}_{j}}} \\
&\lesssim \sum_{k_{j}}2^{k_{3}n/2}2^{-k_{3}/2}\left\|P_{k_{3}}\left[\sum_{i=1}^{n}v_{i}\partial_{x_{i}}P_{\leq k_{1}+k_{2}}
w\right]\right\|_{L^{2,\infty}_{\mathbf{e}_{j}}}\|\widetilde{P_{k_{3}}}(P_{k_{1}}fP_{k_{2}}g)\|_{L^{2}_{t,x}} \\
&\lesssim \sum_{k_{j}}2^{k_{3}n/2}2^{-k_{3}/2}\|v\|_{L^{\infty}_{t,x}}2^{(n-1)k_{3}/2}\left\|P_{k_{3}}\left[\sum_{i=1}^{n}
\partial_{x_{i}}P_{k_{1}+k_{2}}w\right]\right\|_{L^{\infty}_{t}L^{2}_{x}}
\|P_{k_{1}}fP_{k_{2}}g\|_{L^{2}_{t,x}} \\
&\lesssim \sum_{k_{j}}2^{k_{3}(n-1)}\|v\|_{L^{\infty}_{t,x}}2^{k_{1}+k_{2}}\|P_{k_{3}}w\|_{L^{\infty}_{t}L^{2}_{x}}
2^{(n-1)k_{1}/2}\|P_{k_{1}}f\|_{{L^{\infty}_{t}L^{\frac{2n}{2n-2}}_{x}}}\|P_{k_{2}}g\|_{{L^{\infty}_{t}L^{2}_{x}}} \\
&\lesssim \sum_{k_{j}}2^{(k_{3}-k_{2})(n/2-1)}\|v\|_{L^{\infty}_{t,x}}
\|P_{k_{3}}w\|_{Y^{n/2}}\|P_{k_{1}}f\|_{F^{n/2}}\|P_{k_{2}}g\|_{F^{n/2}}.
\end{align*}
On the other hand, if $k_{3}\leq k_{1}+19$, we repeat the argument with the roles (and hence the spaces) of $P_{k_{1}}f$ and $P_{k_{2}}g$ exchanged. Combining the two cases, we obtain \eqref{4.4}.
\end{proof}
\begin{lemma}\label{Lemma4.1}
Assume that $n\geq3$. Then, for any $k_{j}\in\mathbb{Z}$, $j=1,2,3$, we have
\begin{align}
\sum_{k_{j}}2^{k_{3}(n-2)/2}\left\|P_{k_{3}}\left[w\sum_{i=1}^{n}(\partial_{x_{i}}P_{k_{1}}f \partial_{x_{i}}P_{k_{2}}g)\right]\right\|
_{L^{2}_{t,x}}\leq C\|w\|_{Y^{n/2}} \|f\|_{F^{n/2}} \|g\|_{F^{n/2}},\label{4.1}
\end{align}
where the constant $C>0$ is independent of $\varepsilon$ and $k_{j}$.
\end{lemma}
\begin{proof}
By the symmetry of $k_{1},k_{2}$, assume that $k_{1}\leq k_{2}$. We use the operator $P_{k}$ to divide the frequency of $w$ into two parts:
\begin{align*}
&\sum_{k_{j}}2^{k_{3}(n-2)/2}\left\|P_{k_{3}}\left[w\sum_{i=1}^{n}(\partial_{x_{i}}P_{k_{1}}f \partial_{x_{i}}P_{k_{2}}g)\right]\right\|
_{L^{2}_{t,x}} \\
&\leq \sum_{k_{j}}2^{k_{3}(n-2)/2}\left\|P_{k_{3}}\left[P_{\geq k_{3}-10}w\sum_{i=1}^{n}(\partial_{x_{i}}P_{k_{1}}f \partial_{x_{i}}P_{k_{2}}g)\right]\right\|_{L^{2}_{t,x}} \\
&\quad+\sum_{k_{j}}2^{k_{3}(n-2)/2}\left\|P_{k_{3}}\left[P_{\leq k_{3}-11}w\sum_{i=1}^{n}(\partial_{x_{i}}P_{k_{1}}f \partial_{x_{i}}P_{k_{2}}g)\right]\right\|_{L^{2}_{t,x}}\\
&=:K_1+K_2.
\end{align*}
Now, we estimate the term $K_1$, assuming first that $k_{3}\leq k_{2}+20$. By Bernstein's inequality and convolution properties, one has
\begin{align*}
K_1&\lesssim \sum_{k_{j}}2^{(n-1)k_{3}}\|P_{\geq k_{3}-10}w\|_{L^{\infty}_{t}L^{2}_{x}}\|\sum_{i=1}^{n}(\partial_{x_{i}}P_{k_{1}}f \partial_{x_{i}}P_{k_{2}}g)\|_{L^{2}_{t,x}}\\
&\lesssim \sum_{k_{j}}2^{(n-1)k_{3}}\|P_{k_{3}}w\|_{L^{\infty}_{t}L^{2}_{x}}2^{k_{1}+k_{2}}2^{(n-2)k_{1}/2}
\|P_{k_{1}}f\|_{L^{2}_{t}L^{\frac{2n}{n-2}}_{x}}\|P_{k_{2}}g\|_{L^{\infty}_{t}L^{2}_{x}}\\
&\lesssim \sum_{k_{j}}2^{(k_{3}-k_{2})(n/2-1)}\|w\|_{Y^{n/2}}\|f\|_{F^{n/2}}\|g\|_{F^{n/2}}.
\end{align*}
On the other hand, assuming that $k_{3}\geq k_{2}+21$, the factor $P_{\geq k_{3}-10}w$ carries a frequency larger than the output frequency, so the frequency of $w$ is comparable to $2^{k_{3}}$, and we obtain
\begin{align*}
K_1&\lesssim \sum_{k_{j}}2^{k_{3}(n-2)/2}\|P_{k_{3}}w\|_{L^{\infty}_{t}L^{2}_{x}}\|\sum_{i=1}^{n}(\partial_{x_{i}}P_{k_{1}}f \partial_{x_{i}}P_{k_{2}}g)\|_{L^{2}_{t}L^{\infty}_{x}} \\
&\lesssim \sum_{k_{j}}2^{k_{3}(n-2)/2}\|P_{k_{3}}w\|_{L^{\infty}_{t}L^{2}_{x}}2^{k_{2}n/2}2^{k_{1}+k_{2}}\|P_{k_{1}}f P_{k_{2}}g\|_{L^{2}_{t,x}} \\
&\lesssim \sum_{k_{j}}2^{k_{3}(n-2)/2}\|P_{k_{3}}w\|_{L^{\infty}_{t}L^{2}_{x}}2^{k_{2}n/2}2^{k_{1}+k_{2}}
\|P_{k_{1}}f\|_{L^{2,\infty}_{\mathbf{e}_{j}}}\|P_{k_{2}}g\|_{L^{\infty,2}_{\mathbf{e}_{j}}}\\
&\lesssim \sum_{k_{j}}2^{k_{2}-k_{3}}\|w\|_{Y^{n/2}}\|f\|_{F^{n/2}}\|g\|_{F^{n/2}},
\end{align*}
where, by the convolution theorem, the output frequency in the third line is comparable to $2^{k_{1}}+2^{k_{2}}$, which we bound by $2^{k_{2}}$ since $k_{1}\leq k_{2}$. Hence, we get that $K_{1}\leq C\|w\|_{Y^{n/2}}\|f\|_{F^{n/2}}\|g\|_{F^{n/2}}$.
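The convolution-support fact used here is the standard observation that the Fourier transform turns products into convolutions, so that
\begin{align*}
\operatorname{supp}\mathscr{F}_{x}(P_{k_{1}}f\,P_{k_{2}}g)
\subseteq\operatorname{supp}\mathscr{F}_{x}(P_{k_{1}}f)+\operatorname{supp}\mathscr{F}_{x}(P_{k_{2}}g)
\subseteq\{\xi:|\xi|\lesssim 2^{k_{1}}+2^{k_{2}}\}.
\end{align*}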
Next, we estimate the term $K_2$, assuming first that $k_{3}\leq k_{1}+5$. Using Bernstein's inequality and convolution properties yields that
\begin{align*}
K_2&\lesssim \sum_{k_{j}}2^{k_{3}(n-1)}\|P_{k_{3}}w\|_{L^{\infty}_{t}L^{2}_{x}}\|\widetilde{P_{k_{3}}}(\sum_{i=1}^{n}\partial_{x_{i}}P_{k_{1}}f \partial_{x_{i}}P_{k_{2}}g)\|_{L^{2}_{t,x}} \\
&\lesssim \sum_{k_{j}}2^{k_{3}(n-1)}\|P_{k_{3}}w\|_{L^{\infty}_{t}L^{2}_{x}}2^{k_{1}+k_{2}}2^{(n-2)k_{2}/2}
\|P_{k_{1}}f\|_{L^{\infty}_{t}L^{2}_{x}}\|P_{k_{2}}g\|_{L^{2}_{t}L^{\frac{2n}{n-2}}_{x}}\\
&\lesssim \sum_{k_{j}}2^{(k_{3}-k_{1})(n/2-1)}\|w\|_{Y^{n/2}}\|f\|_{F^{n/2}}\|g\|_{F^{n/2}}.
\end{align*}
On the other hand, assume that $k_{3}\geq k_{1}+6$; since the low-frequency factor $P_{\leq k_{3}-11}w$ cannot produce the output frequency, this yields $|k_{3}-k_{2}|\leq6$. Then, one has
\begin{align*}
K_2&\lesssim \sum_{k_{j}}2^{k_{3}(n-1)}\|P_{k_{3}}w\|_{L^{\infty}_{t}L^{2}_{x}}\|\widetilde{P_{k_{3}}}(\sum_{i=1}^{n}\partial_{x_{i}}P_{k_{1}}f \partial_{x_{i}}P_{k_{2}}g)\|_{L^{2}_{t,x}} \\
&\lesssim \sum_{k_{j}}2^{k_{3}(n-1)}\|P_{k_{3}}w\|_{L^{\infty}_{t}L^{2}_{x}}2^{k_{1}+k_{2}}2^{(n-2)k_{1}/2}
\|P_{k_{1}}f\|_{L^{2}_{t}L^{\frac{2n}{n-2}}_{x}}\|P_{k_{2}}g\|_{L^{\infty}_{t}L^{2}_{x}}\\
&\lesssim \sum_{k_{j}}2^{(k_{3}-k_{2})(n/2-1)}\|w\|_{Y^{n/2}}\|f\|_{F^{n/2}}\|g\|_{F^{n/2}},
\end{align*}
which implies that $K_2\leq C \|w\|_{Y^{n/2}}\|f\|_{F^{n/2}}\|g\|_{F^{n/2}}$. Combining the above estimates of $K_1$ and $K_2$, we get \eqref{4.1}. The proof of Lemma \ref{Lemma4.1} is completed.
\end{proof}
\begin{lemma}\label{Lemma4.2}
Assume that $n\geq3$. Then, for any $k_{j}\in\mathbb{Z}$, $j=1,\ldots,4$, we have
\begin{align}
\sum_{k_{j}}2^{k_{4}n/2}\left\|P_{k_{4}}\left[P_{k_{1}}w\sum_{i=1}^{n}(\partial_{x_{i}}P_{k_{2}}f \partial_{x_{i}}P_{k_{3}}g)\right]\right\|
_{N_{k_{4}}^{*}}\leq C\|w\|_{Y^{n/2}} \|f\|_{F^{n/2}\cap Z^{n/2}} \|g\|_{F^{n/2}\cap Z^{n/2}},\label{4.2}
\end{align}
where the constant $C>0$ is independent of $\varepsilon$ and $k_{j}$.
\end{lemma}
\begin{proof}
By the symmetry of $k_{2}$ and $k_{3}$, we assume that $k_{2}\leq k_{3}$. Notice that when $k_{4}\leq k_{1}+40$, the norm is $L_{t}^{1}L_{x}^{2}$. Utilizing H\"{o}lder's inequality and Bernstein's inequality, one has
\begin{align*}
&\sum_{k_{j}}2^{k_{4}n/2}\left\|P_{k_{4}}\left[P_{k_{1}}w\sum_{i=1}^{n}(\partial_{x_{i}}P_{k_{2}}f \partial_{x_{i}}P_{k_{3}}g)\right]\right\|_{L^{1}_{t}L^{2}_{x}}\\
&\lesssim \sum_{k_{j}}2^{k_{4}n/2}2^{k_{2}+k_{3}}\|P_{k_{1}}wP_{k_{2}}f\|_{L^{2}_{t,x}}\|P_{k_{3}}g\|_{L^{2}_{t}L^{\infty}_{x}}\\
&\lesssim \sum_{k_{j}}2^{k_{4}n/2}2^{k_{2}+k_{3}}\|P_{k_{1}}w\|_{L^{\infty}_{t}L^{2}_{x}}2^{k_{2}(n-2)/2}\|P_{k_{2}}f\|
_{L^{2}_{t}L^{\frac{2n}{n-2}}_{x}}2^{k_{3}(n-2)/2}\|P_{k_{3}}g\|_{L^{2}_{t}L^{\frac{2n}{n-2}}_{x}}\\
&\lesssim \sum_{k_{j}}2^{(k_{4}-k_{1})n/2}\|w\|_{Y^{n/2}}\|f\|_{F^{n/2}}\|g\|_{F^{n/2}}.
\end{align*}
On the other hand, we assume that $k_{4}\geq k_{1}+41$. For a more precise estimate, we split into two cases according to the size of $k_{2}$.
\underline{\bf Case~1.} When $k_{2}\leq k_{1}+20$, the norm is $L^{1,2}_{\mathbf{e}_{i}}$. By employing H\"{o}lder's inequality and Bernstein's inequality again, we obtain
\begin{align*}
&\sum_{k_{j}}2^{k_{4}n/2}2^{-k_{4}/2}\left\|P_{k_{4}}\left[P_{k_{1}}w\sum_{i=1}^{n}(\partial_{x_{i}}P_{k_{2}}f \partial_{x_{i}}P_{k_{3}}g)\right]\right\|_{L^{1,2}_{\mathbf{e}_{i}}}\\
&\lesssim \sum_{k_{j}}\sum_{i=1}^{n}2^{k_{4}(n-1)/2}\|P_{k_{1}}w\partial_{x_{i}}P_{k_{3}}g\|_{L^{2}_{t,x}}
\|\partial_{x_{i}}P_{k_{2}}f\|_{L^{2,\infty}_{\mathbf{e}_{i}}}\\
&\lesssim \sum_{k_{j}}2^{k_{4}(n-1)/2}2^{k_{2}+k_{3}}\|P_{k_{1}}w\|_{L^{2,\infty}_{\mathbf{e}_{i}}}
\|P_{k_{3}}g\|_{L^{\infty,2}_{\mathbf{e}_{i}}}\|P_{k_{2}}f\|_{L^{2,\infty}_{\mathbf{e}_{i}}}\\
&\lesssim \sum_{k_{j}}2^{(k_{4}-k_{3})(n-1)/2+(k_{2}-k_{1})/2}\|w\|_{Y^{n/2}}\|f\|_{F^{n/2}}\|g\|_{F^{n/2}}.
\end{align*}
\underline{\bf Case~2.} When $k_{2}\geq k_{1}+21$, we decompose $P_{k_{2}}fP_{k_{3}}g$ by the modulation projectors $Q_{k}$ to get
\begin{align*}
&\sum_{k_{j}}2^{k_{4}n/2}2^{-k_{4}/2}\left\|P_{k_{4}}\left[P_{k_{1}}w\sum_{i=1}^{n}(\partial_{x_{i}}P_{k_{2}}f \partial_{x_{i}}P_{k_{3}}g)\right]\right\|_{N^{*}_{k_{4}}}\\
&\leq \sum_{k_{j}}2^{k_{4}n/2}2^{-k_{4}/2}\left\|P_{k_{4}}\left[P_{k_{1}}w Q_{\leq k_{2}+k_{3}}\sum_{i=1}^{n}(\partial_{x_{i}}P_{k_{2}}f \partial_{x_{i}}P_{k_{3}}g)\right]\right\|_{N^{*}_{k_{4}}}\\
&\quad+\sum_{k_{j}}2^{k_{4}n/2}2^{-k_{4}/2}\left\|P_{k_{4}}\left[P_{k_{1}}wQ_{\geq k_{2}+k_{3}+1}\sum_{i=1}^{n}(\partial_{x_{i}}P_{k_{2}}f \partial_{x_{i}}P_{k_{3}}g)\right]\right\|_{N^{*}_{k_{4}}}\\
&=:H_1+H_2.
\end{align*}
Now we estimate the terms $H_1$ and $H_2$ one by one. For the term $H_2$, decomposing $P_{k_{1}}w$ by modulation projectors, we get
\begin{align*}
H_2&\leq \sum_{k_{j}}2^{k_{4}n/2}2^{-k_{4}/2}\left\|P_{k_{4}}\left[P_{k_{1}}Q_{\geq k_{2}+k_{3}-10}wQ_{\geq k_{2}+k_{3}+1}\sum_{i=1}^{n}(\partial_{x_{i}}P_{k_{2}}f \partial_{x_{i}}P_{k_{3}}g)\right]\right\|_{N^{*}_{k_{4}}}\\
&\quad+\sum_{k_{j}}2^{k_{4}n/2}2^{-k_{4}/2}\left\|P_{k_{4}}\left[P_{k_{1}}Q_{\leq k_{2}+k_{3}-9}wQ_{\geq k_{2}+k_{3}+1}\sum_{i=1}^{n}(\partial_{x_{i}}P_{k_{2}}f \partial_{x_{i}}P_{k_{3}}g)\right]\right\|_{N^{*}_{k_{4}}}\\
&=:H_{21}+H_{22}.
\end{align*}
For the term $H_{21}$, by the fact that the norm on $N^{*}_{k_{4}}$ is $L_{t}^{1}L_{x}^{2}$, H\"{o}lder's inequality, Minkowski's inequality and Bernstein's inequality, one has
\begin{align*}
H_{21}&\lesssim \sum_{k_{j}}2^{k_{4}n/2}\|P_{k_{1}}Q_{\geq k_{2}+k_{3}-10}w\|_{L^{2}_{t}L^{\infty}_{x}}
\|Q_{\geq k_{2}+k_{3}+1}\sum_{i=1}^{n}(\partial_{x_{i}}P_{k_{2}}f \partial_{x_{i}}P_{k_{3}}g)\|_{L^{2}_{t,x}}\\
&\lesssim \sum_{k_{j}}2^{k_{4}n/2}2^{k_{1}n/2}\sum_{j\geq k_{2}+k_{3}-10}\|Q_{j}(P_{k_{1}}w)\|_{L^{2}_{t,x}}
2^{k_{2}+k_{3}}\|P_{k_{2}}fP_{k_{3}}g\|_{L^{2}_{t,x}}\\
&\lesssim \sum_{k_{j}}2^{k_{4}n/2}2^{k_{1}n/2}2^{-(k_{2}+k_{3})}\|P_{k_{1}}w\|_{X^{0,1,2}}2^{k_{2}+k_{3}}
\|P_{k_{2}}f\|_{L^{2,\infty}_{\mathbf{e}_{i}}}\|P_{k_{3}}g\|_{L^{\infty,2}_{\mathbf{e}_{i}}}\\
&\lesssim \sum_{k_{j}}2^{(k_{4}-k_{3})n/2+(k_{1}-k_{2})}\|w\|_{Y^{n/2}}\|f\|_{F^{n/2}}\|g\|_{F^{n/2}}.
\end{align*}
Note that, by the hypotheses $k_{2}\geq k_{1}+21$ and $k_{4}\geq k_{1}+41$ together with the frequency-matching (convolution support) constraint, we get $|k_{4}-k_{3}|\leq 20$.
For the term $H_{22}$, using the fact that the norm is $X^{0,-1/2,1}_{+}$ on $N^{*}_{k_{4}}$ and Lemma \ref{Qjk-bdd}, it holds that {\small
\begin{align*}
H_{22}
&\lesssim \sum_{k_{j}}2^{k_{4}n/2}\sum_{j\geq k_{2}+k_{3}}2^{-\frac{j}{2}}\|P_{k_{1}}Q_{\leq k_{2}+k_{3}-10}w\|_{L^{\infty}_{t,x}}
\|Q_{\geq k_{2}+k_{3}+1}\sum_{i=1}^{n}(\partial_{x_{i}}P_{k_{2}}f \partial_{x_{i}}P_{k_{3}}g)\|_{L^{2}_{t,x}}\\
&\lesssim \sum_{k_{j}}2^{k_{4}n/2}2^{-(k_{2}+k_{3})/2}2^{k_{1}n/2}\|P_{k_{1}}w\|_{L^{\infty}_{t}L^{2}_{x}}
2^{k_{2}+k_{3}}\|P_{k_{2}}f\|_{L^{2,\infty}_{\mathbf{e}_{i}}}\|P_{k_{3}}g\|_{L^{\infty,2}_{\mathbf{e}_{i}}}\\
&\lesssim \sum_{k_{j}}2^{(k_{4}-k_{3})n/2}\|w\|_{Y^{n/2}}\|f\|_{F^{n/2}}\|g\|_{F^{n/2}},
\end{align*}
where the modulation restriction $j\geq k_{2}+k_{3}$ for $Q_{\leq k_{2}+k_{3}-10}wQ_{\geq k_{2}+k_{3}+1}(P_{k_{2}}f P_{k_{3}}g)$ follows from the supports in the convolution. Combining the above estimates, we conclude that $H_2\leq C\|w\|_{Y^{n/2}}\|f\|_{F^{n/2}}\|g\|_{F^{n/2}}$.
Next, we estimate the term $H_1$. Similarly, we employ the operator $Q_{k}$ for decomposing the frequency, and then we get
\begin{align*}
H_1&\leq \sum_{k_{j}}2^{k_{4}n/2}\left\|P_{k_{4}}\left[P_{k_{1}}w Q_{\leq k_{2}+k_{3}}\sum_{i=1}^{n}(\partial_{x_{i}}P_{k_{2}}
Q_{\geq k_{2}+k_{3}+40}f \partial_{x_{i}}P_{k_{3}}g)\right]\right\|_{N^{*}_{k_{4}}}\\
&\quad+\sum_{k_{j}}2^{k_{4}n/2}\left\|P_{k_{4}}\left[P_{k_{1}}w Q_{\leq k_{2}+k_{3}}\sum_{i=1}^{n}(\partial_{x_{i}}P_{k_{2}}
Q_{\leq k_{2}+k_{3}+39}f \partial_{x_{i}}P_{k_{3}}g)\right]\right\|_{N^{*}_{k_{4}}}\\
&=:H_{11}+H_{12}.
\end{align*}
For the term $H_{11}$, by the fact that the norm on $N^{*}_{k_{4}}$ is $L^{1,2}_{\mathbf{e}_{j}}$, applying Minkowski's inequality and Bernstein's inequality, one has
\begin{align*}
H_{11}&\lesssim \sum_{k_{j}}2^{k_{4}n/2}2^{-k_{4}/2}\left\|P_{k_{4}}\left[P_{k_{1}}w Q_{\leq k_{2}+k_{3}}\sum_{i=1}^{n}
(\partial_{x_{i}}P_{k_{2}}Q_{\geq k_{2}+k_{3}+40}f \partial_{x_{i}}P_{k_{3}}g)\right]\right\|_{L^{1,2}_{\mathbf{e}_{j}}}\\
&\lesssim \sum_{k_{j}}2^{k_{4}(n-1)/2}\|P_{k_{1}}w\|_{L^{\infty}_{t,x}}2^{k_{2}+k_{3}}\|P_{k_{2}}Q_{\geq k_{2}+k_{3}+40}f\|_{L^{2}_{t,x}}\|P_{k_{3}}g\|_{L^{2,\infty}_{\mathbf{e}_{j}}}\\
&\lesssim \sum_{k_{j}}2^{k_{4}(n-1)/2}2^{k_{1}n/2}\|P_{k_{1}}w\|_{L^{\infty}_{t}L^{2}_{x}}2^{k_{2}+k_{3}}\|P_{k_{2}}f\|_{X^{0,1/2,1}}
2^{-(k_{2}+k_{3})/2}\|P_{k_{3}}g\|_{L^{2,\infty}_{\mathbf{e}_{j}}}\\
&\lesssim \sum_{k_{j}}2^{(k_{4}-k_{2})(n-1)/2}\|w\|_{Y^{n/2}}\|f\|_{F^{n/2}}\|g\|_{F^{n/2}}.
\end{align*}
Noticing that
\begin{align*}
-2\nabla\phi\cdot\nabla\psi=(i\partial_{t}+\Delta)\phi\cdot\psi+\phi\cdot(i\partial_{t}+\Delta)\psi
-(i\partial_{t}+\Delta)(\phi\cdot\psi),
\end{align*}
we write $\Theta:=i\partial_{t}+\Delta$ and let
\begin{align*}
\phi=P_{k_{2}}Q_{\leq k_{2}+k_{3}+39}f,\;\; \psi=P_{k_{3}}g.
\end{align*}
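For the reader's convenience, the identity above follows directly from the Leibniz rule:
\begin{align*}
\Theta(\phi\cdot\psi)
&=i\partial_{t}(\phi\psi)+\Delta(\phi\psi)\\
&=(i\partial_{t}\phi)\psi+\phi(i\partial_{t}\psi)+(\Delta\phi)\psi
+2\nabla\phi\cdot\nabla\psi+\phi\Delta\psi\\
&=\Theta\phi\cdot\psi+\phi\cdot\Theta\psi+2\nabla\phi\cdot\nabla\psi,
\end{align*}
and rearranging the terms gives the stated identity.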
Then, the term $H_{12}$ can be split as
\begin{align*}
H_{12}&\leq\sum_{k_{j}}2^{k_{4}n/2}\left\|P_{k_{4}}\left[P_{k_{1}}w Q_{\leq k_{2}+k_{3}}(P_{k_{2}}\Theta Q_{\leq k_{2}+k_{3}+39}f \cdot P_{k_{3}}g)\right]\right\|_{N^{*}_{k_{4}}}\\
&\quad+\sum_{k_{j}}2^{k_{4}n/2}\left\|P_{k_{4}}\left[P_{k_{1}}w Q_{\leq k_{2}+k_{3}}(P_{k_{2}}Q_{\leq k_{2}+k_{3}+39}f \cdot P_{k_{3}}\Theta g)\right]\right\|_{N^{*}_{k_{4}}}\\
&\quad+\sum_{k_{j}}2^{k_{4}n/2}\left\|P_{k_{4}}\left[P_{k_{1}}w Q_{\leq k_{2}+k_{3}}\Theta(P_{k_{2}}Q_{\leq k_{2}+k_{3}+39}f \cdot P_{k_{3}} g)\right]\right\|_{N^{*}_{k_{4}}}\\
&=:H_{121}+H_{122}+H_{123}.
\end{align*}
For the term $H_{121}$, using the fact that the norm on $N^{*}_{k_{4}}$ is $L_{t}^{1}L_{x}^{2}$, Bernstein's inequality and the Plancherel equality, we find that
\begin{align*}
H_{121}&\lesssim \sum_{k_{j}}2^{k_{4}n/2}2^{k_{1}n/2}\|P_{k_{1}}w\|_{L_{t}^{\infty}L_{x}^{2}}\|P_{k_{2}}\Theta Q_{\leq k_{2}+k_{3}+39}f\|_{L_{t,x}^{2}}\|P_{k_{3}}g\|_{L_{t}^{2}L_{x}^{\infty}}.
\end{align*}
By the fact that $P_{k_{2}}\Theta Q_{\leq k_{2}+k_{3}+39}=Q_{\leq k_{2}+k_{3}+39}\Theta P_{k_{2}}$ and Lemma \ref{Qjk-bdd}, we have
\begin{align*}
H_{121}&\lesssim \sum_{k_{j}}2^{k_{4}n/2}2^{k_{1}n/2}\|P_{k_{1}}w\|_{L_{t}^{\infty}L_{x}^{2}}2^{k_{2}}\|P_{k_{2}}f\|_{Z_{k_{2}}}
\|P_{k_{3}}g\|_{L_{t}^{2}L_{x}^{\frac{2n}{n-2}}} \\
&\lesssim \sum_{k_{j}}2^{(k_{4}-k_{3})n/2+(k_{2}-k_{3})}\|w\|_{Y^{n/2}}\|f\|_{Z^{n/2}}\|g\|_{F^{n/2}}.
\end{align*}
For the term $H_{122}$, by the fact that the norm on $N^{*}_{k_{4}}$ is $L^{1,2}_{\mathbf{e}_{j}}$ and applying Minkowski's inequality and Bernstein's inequality, we conclude that
\begin{align*}
H_{122}
&\lesssim \sum_{k_{j}}2^{k_{4}(n-1)/2}\|P_{k_{1}}w\|_{L^{\infty}_{t,x}}\|P_{k_{2}}Q_{\leq k_{2}+k_{3}+39}f\|_{L^{2,\infty}_{\mathbf{e}_{j}}} \|P_{k_{3}}\Theta g\|_{L^{2}_{t,x}} \\
&\lesssim \sum_{k_{j}}2^{k_{4}(n-1)/2}2^{k_{1}n/2}\|P_{k_{1}}w\|_{L^{\infty}_{t}L^{2}_{x}}\|P_{k_{2}}f\|_{L^{2,\infty}_{\mathbf{e}_{j}}}
2^{k_{2}+k_{3}}\|P_{k_{3}}g\|_{X^{0,\frac{1}{2},\infty}}2^{-\frac{j}{2}} \\
&\lesssim \sum_{k_{j}}2^{(k_{4}-k_{3})(n-2)/2}\|w\|_{Y^{n/2}}\|f\|_{F^{n/2}}\|g\|_{F^{n/2}},
\end{align*}
where we have used convolution properties in the second line, and the fact that $|\tau+|\xi|^{2}|\lesssim 2^{k_{2}+k_{3}}$ on the support of the output modulation operator $Q_{\leq k_{2}+k_{3}}$, i.e.,
\begin{align}\label{4.19}
\begin{split}
Q_{\leq k_{2}+k_{3}}\Theta g
&=\mathscr{F}^{-1}_{\tau,\xi}\left\{\chi_{\leq k_{2}+k_{3}}(\tau+|\xi|^{2})\cdot(-\tau-|\xi|^{2})\mathscr{F}g\right\}\\
&\lesssim 2^{k_{2}+k_{3}}Q_{\leq k_{2}+k_{3}}g.
\end{split}
\end{align}
For the term $H_{123}$, we divide the modulation operator $Q_{\leq k_{2}+k_{3}}$ into two parts for a more precise estimate. Hence,
\begin{align*}
H_{123}&\leq \sum_{k_{j}}2^{k_{4}n/2}\left\|P_{k_{4}}\left[P_{k_{1}}w Q_{[k_{1}+k_{4}+100,k_{2}+k_{3}]}\Theta(P_{k_{2}}Q_{\leq k_{2}+k_{3}+39}f \cdot P_{k_{3}} g)\right]\right\|_{N^{*}_{k_{4}}}\\
&\quad+\sum_{k_{j}}2^{k_{4}n/2}\left\|P_{k_{4}}\left[P_{k_{1}}w Q_{\leq k_{1}+k_{4}+99}\Theta(P_{k_{2}}Q_{\leq k_{2}+k_{3}+39}f \cdot P_{k_{3}} g)\right]\right\|_{N^{*}_{k_{4}}}\\
&=:L_{1}+L_{2}.
\end{align*}
By the fact that the norm on $N^{*}_{k_{4}}$ is $L^{1,2}_{\mathbf{e}_{j}}$ and using Lemma \ref{Qjk-bdd}, we can bound the term $L_{2}$ by
\begin{align*}
L_{2}
&\lesssim \sum_{k_{j}}2^{k_{4}(n-1)/2}\|P_{k_{1}}w\|_{L^{2,\infty}_{\mathbf{e}_{j}}}
2^{k_{1}+k_{4}}\|P_{k_{2}}Q_{\leq k_{2}+k_{3}+39}f P_{k_{3}} g\|_{L^{2}_{t,x}} \\
&\lesssim \sum_{k_{j}}2^{k_{4}(n-1)/2}\|P_{k_{1}}w\|_{L^{2,\infty}_{\mathbf{e}_{j}}}2^{k_{1}+k_{4}}
\|P_{k_{2}}f\|_{L^{\infty}_{t}L^{2}_{x}}2^{k_{3}(n-2)/2}\|P_{k_{3}}g\|_{L^{2}_{t}L^{\frac{2n}{n-2}}_{x}}\\
&\lesssim \sum_{k_{j}}2^{(k_{4}-k_{3})(n+1)/2+(k_{1}-k_{3})/2}\|w\|_{Y^{n/2}}\|f\|_{F^{n/2}}\|g\|_{F^{n/2}},
\end{align*}
where we have used the fact that $k_{3}\geq k_{1}+21$. For the term $L_{1}$, we use the modulation operator $Q_{k}$ to divide the term $L_{1}$ into two parts. Thus,
\begin{align*}
L_{1}
&\leq \sum_{k_{j}}\sum_{j_{2}=k_{1}+k_{4}+100}^{k_{2}+k_{3}}2^{k_{4}n/2}\left\|P_{k_{4}}Q_{\leq j_{2}-10}\left[P_{k_{1}}w Q_{j_{2}}\Theta(P_{k_{2}}Q_{\leq k_{2}+k_{3}+39}f \cdot P_{k_{3}} g)\right]\right\|_{N^{*}_{k_{4}}}\\
&\quad+\sum_{k_{j}}\sum_{j_{2}=k_{1}+k_{4}+100}^{k_{2}+k_{3}}2^{k_{4}n/2}\left\|P_{k_{4}}Q_{\geq j_{2}-9}\left[P_{k_{1}}w Q_{j_{2}}\Theta(P_{k_{2}}Q_{\leq k_{2}+k_{3}+39}f \cdot P_{k_{3}} g)\right]\right\|_{N^{*}_{k_{4}}}\\
&=:L_{11}+L_{12}.
\end{align*}
Note that the norm of the term $L_{11}$ on $N^{*}_{k_{4}}$ is $L^{1}_{t}L^{2}_{x}$. Applying
Minkowski's inequality, Bernstein's inequality and the method of \eqref{4.19}, we deduce that{\small
\begin{align*}
L_{11}
&\lesssim \sum_{k_{j}}\sum_{j_{2}=k_{1}+k_{4}+100}^{k_{2}+k_{3}}2^{k_{4}n/2}\|Q_{j_{3}}P_{k_{1}}w\|_{L^{2}_{t}L^{\infty}_{x}}
\|Q_{j_{2}}\Theta(P_{k_{2}}Q_{\leq k_{2}+k_{3}+39}f \cdot P_{k_{3}} g)\|_{L^{2}_{t,x}} \\
&\lesssim \sum_{k_{j}}\sum_{j_{2}=k_{1}+k_{4}+100}^{k_{2}+k_{3}}2^{k_{4}n/2}2^{k_{1}n/2}\sum_{j_{3}\leq j_{2}-10}2^{j_{3}}\|Q_{j_{3}}P_{k_{1}}w\|_{L^{2}_{t,x}}2^{-j_{3}}2^{j_{2}}\|P_{k_{2}}f\|_{L^{\infty}_{t}L^{2}_{x}}
\|P_{k_{3}} g\|_{L^{2}_{t}L^{\infty}_{x}} \\
&\lesssim \sum_{k_{j}}2^{k_{4}n/2}2^{k_{1}n/2}\left[2^{-(k_{1}+k_{4})}-2^{-(k_{2}+k_{3})}\right]\cdot
\left[2^{(k_{1}+k_{4})}-2^{(k_{2}+k_{3})}\right]2^{k_{3}(n-2)/2}\\
&\quad\|P_{k_{1}}w\|_{X^{0,1,2}}\|P_{k_{2}}f\|_{L^{\infty}_{t}L^{2}_{x}}\|P_{k_{3}} g\|_{L^{2}_{t}L^{\frac{2n}{n-2}}_{x}} \\
&\lesssim \sum_{k_{j}}\left[2^{(k_{4}-k_{2})(n-2)/2}+2^{(k_{4}-k_{2})(n+2)/2+2(k_{1}-k_{3})}\right]
\|w\|_{Y^{n/2}}\|f\|_{F^{n/2}}\|g\|_{F^{n/2}},
\end{align*}}
where the projector $Q_{j_{3}}$ records the modulation restriction imposed on $P_{k_{1}}w$ by $Q_{\leq j_{2}-10}$.
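The bracketed factors in the third line arise from summing geometric series in $j_{2}$ (and $j_{3}$); up to absolute constants,
\begin{align*}
\sum_{j_{2}=k_{1}+k_{4}+100}^{k_{2}+k_{3}}2^{-j_{2}}\sim 2^{-(k_{1}+k_{4})},
\qquad
\sum_{j_{2}=k_{1}+k_{4}+100}^{k_{2}+k_{3}}2^{j_{2}}\sim 2^{k_{2}+k_{3}},
\end{align*}
the sums being nonempty only when $k_{1}+k_{4}+100\leq k_{2}+k_{3}$.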
For the term $L_{12}$, using the fact that the norm on $N^{*}_{k_{4}}$ is $X^{0,-\frac{1}{2},1}_{+}$ and Lemma \ref{Qjk-bdd}, we obtain {\small
\begin{align*}
L_{12}
&\lesssim \sum_{k_{j}}\sum_{j_{2}=k_{1}+k_{4}+100}^{k_{2}+k_{3}}2^{k_{4}n/2}\sum_{j_{3}\geq j_{2}-9}2^{-\frac{j_{3}}{2}}
\|P_{k_{1}}w\|_{L^{\infty}_{t,x}}\|Q_{j_{2}}\Theta(P_{k_{2}}Q_{\leq k_{2}+k_{3}+39}f P_{k_{3}} g)\|_{L^{2}_{t,x}}\\
&\lesssim \sum_{k_{j}}\sum_{j_{2}=k_{1}+k_{4}+100}^{k_{2}+k_{3}}2^{k_{4}n/2}2^{\frac{j_{2}}{2}}2^{k_{1}n/2}
\|P_{k_{1}}w\|_{L^{\infty}_{t}L^{2}_{x}}\|P_{k_{2}}f\|_{L^{\infty}_{t}L^{2}_{x}}
\|P_{k_{3}}g\|_{L^{2}_{t}L^{\infty}_{x}}\\
&\lesssim \sum_{k_{j}}2^{k_{4}n/2}2^{k_{1}n/2}2^{k_{3}(n-2)/2}\left[2^{(k_{1}+k_{4})/2}-2^{(k_{2}+k_{3})/2}\right]
\|P_{k_{1}}w\|_{L^{\infty}_{t}L^{2}_{x}}\|P_{k_{2}}f\|_{L^{\infty}_{t}L^{2}_{x}}\|P_{k_{3}}g\|_{L^{2}_{t}L^{\frac{2n}{n-2}}_{x}}\\
&\lesssim \sum_{k_{j}}2^{(k_{4}-k_{2})n/2}(1+2^{k_{4}-k_{3}})\|w\|_{Y^{n/2}}\|f\|_{F^{n/2}}\|g\|_{F^{n/2}},
\end{align*}}
\!\!where we have used the fact that $2^{(k_{1}+k_{4})/2}-2^{(k_{2}+k_{3})/2}\leq 2^{k_{4}}-2^{k_{3}}$ in the last line. Hence, we get $L_{1}\leq C \|w\|_{Y^{n/2}}\|f\|_{F^{n/2}}\|g\|_{F^{n/2}}$. Summing up all the estimates, we obtain \eqref{4.2}.
\end{proof}
\begin{proposition}[Nonlinear estimate]\label{pro nonlinear-es}
Assume that $n\geq3$ and that $u(t,x)$ is a solution of \eqref{deri-Ginz} satisfying $\|u\|_{F^{n/2}\cap Z^{n/2}}\ll1$, where $J(u(t,x))$ denotes the nonlinear term; that is, for any $\varepsilon>0$,
\begin{align*}
u_{t}-(\varepsilon+ai)\Delta u=J(u(t,x)),\quad u(0,x)=u_{0}.
\end{align*}
Then, we have
\begin{align}
\begin{split}
\|J(u(t,x))\|_{N^{n/2}}
&\leq C\bigg\{\|v\|_{L^{\infty}_{t,x}}\|u\|_{F^{n/2}\cap Z^{n/2}}+\frac{1}{1-\|u\|^{2}_{F^{n/2}\cap Z^{n/2}}}\\
&\quad\left[\|u\|^{3}_{F^{n/2}\cap Z^{n/2}}+\|v\|_{L^{\infty}_{t,x}}\|u\|_{F^{n/2}\cap Z^{n/2}}
+\|v\|_{L^{\infty}_{t,x}}\|u\|^{3}_{F^{n/2}\cap Z^{n/2}}\right] \\
&\quad+\frac{1}{(1-\|u\|^{2}_{F^{n/2}\cap Z^{n/2}})^{2}}\|v\|_{L^{\infty}_{t,x}}
\|u\|^{3}_{F^{n/2}\cap Z^{n/2}}\bigg\},\label{4.17}
\end{split}
\end{align}
where the constant $C>0$ is independent of $\varepsilon$.
\end{proposition}
\begin{proof}
By Taylor's expansion and the fact that $F^{n/2}\cap Z^{n/2}\subset Y^{n/2}\subset L^{\infty}_{t,x}$, we have
\begin{align*}
\left\|\frac{1}{1+|u|^{2}}\right\|_{L^{\infty}_{t,x}}
=\left\|\sum_{k=0}^{\infty}(-1)^{k}|u|^{2k}\right\|_{L^{\infty}_{t,x}}
\lesssim \left\|\sum_{k=0}^{\infty}(-1)^{k}|u|^{2k}\right\|_{F^{n/2}\cap Z^{n/2}}
\lesssim \frac{1}{1-\|u\|^{2}_{F^{n/2}\cap Z^{n/2}}}.
\end{align*}
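The last step is the geometric series bound, which holds under the implicit assumption that $F^{n/2}\cap Z^{n/2}$ is multiplicative (an algebra) and uses $\|u\|_{F^{n/2}\cap Z^{n/2}}<1$, guaranteed by the smallness hypothesis:
\begin{align*}
\Big\|\sum_{k\geq0}(-1)^{k}|u|^{2k}\Big\|_{F^{n/2}\cap Z^{n/2}}
\leq\sum_{k\geq0}\big\||u|^{2k}\big\|_{F^{n/2}\cap Z^{n/2}}
\lesssim\sum_{k\geq0}\|u\|^{2k}_{F^{n/2}\cap Z^{n/2}}
=\frac{1}{1-\|u\|^{2}_{F^{n/2}\cap Z^{n/2}}}.
\end{align*}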
Note that $J(u(t,x))=J_{1}(u(t,x))+J_{2}(u(t,x))+J_{3}(u(t,x))$. For the term $J_{1}(u(t,x))$, combining Lemma \ref{Lemma4.3} and Lemma \ref{Lemma4.4}, we find that
\begin{align}
\begin{split}
\|J_{1}(u(t,x))\|_{N^{n/2}}&=\|-(1+i)(v\cdot\nabla)u\|_{N^{n/2}}
\lesssim \|v\|_{L^{\infty}_{t,x}}\left\|\frac{u^{2}\nabla u}{u^{2}}\right\|_{N^{n/2}} \\
&\lesssim \frac{\|v\|_{L^{\infty}_{t,x}}}{\|u\|^{2}_{F^{n/2}\cap Z^{n/2}}}\|u^{2}\nabla u\|_{N^{n/2}} \\
&\lesssim \|v\|_{L^{\infty}_{t,x}}\|u\|_{F^{n/2}\cap Z^{n/2}}.\label{4.14}
\end{split}
\end{align}
For the term $J_{2}(u(t,x))$, by using Lemma \ref{Lemma4.1} and Lemma \ref{Lemma4.2}, one has
\begin{align}
\begin{split}
\|J_{2}(u(t,x))\|_{N^{n/2}}
&\lesssim \frac{1}{1-\|u\|^{2}_{F^{n/2}\cap Z^{n/2}}}\|\bar{u}(\nabla u)^{2}\|_{N^{n/2}}\\
&\lesssim \frac{1}{1-\|u\|^{2}_{F^{n/2}\cap Z^{n/2}}}\|u\|_{Y^{n/2}}\|u\|^{2}_{F^{n/2}\cap Z^{n/2}}\\
&\lesssim \frac{1}{1-\|u\|^{2}_{F^{n/2}\cap Z^{n/2}}}\|u\|^{3}_{F^{n/2}\cap Z^{n/2}}.\label{4.13}
\end{split}
\end{align}
For the term $J_{3}(u(t,x))$, which mainly consists of $F(u,\bar{u})$ and $H(u,\bar{u})$, employing Lemma \ref{Lemma4.3} and Lemma \ref{Lemma4.4}, we obtain
\begin{align}
\begin{split}
\left\|\frac{{\rm{Im}}F}{1+|u|^{2}}\right\|_{N^{n/2}}
&\lesssim\left\|\frac{F}{(1+|u|^{2})^2}\right\|_{N^{n/2}} \\
&\lesssim \left\|\frac{(1+|\bar{u}|^{2})|(1-|u|^{2})(v\cdot\nabla)u|}{(1+|u|^{2})^2}\right\|_{N^{n/2}} \\
&\lesssim \frac{1}{1-\|u\|^{2}_{F^{n/2}\cap Z^{n/2}}}\left\|[(v\cdot\nabla)u+|u|^{2}(v\cdot\nabla)u]\right\|_{N^{n/2}} \\
&\lesssim \frac{1}{1-\|u\|^{2}_{F^{n/2}\cap Z^{n/2}}}\|v\|_{L^{\infty}_{t,x}}
\left(\|u\|_{F^{n/2}\cap Z^{n/2}}+\|u\|^{3}_{F^{n/2}\cap Z^{n/2}}\right),\label{4.15}
\end{split}
\end{align}
and
\begin{align}
\begin{split}
\left\|\frac{{\rm{Re}}H}{1+|u|^{2}}\right\|_{N^{n/2}}
&\lesssim\left\|\frac{H}{(1+|u|^{2})}\right\|_{N^{n/2}} \\
&\lesssim \left\|\frac{4(v\cdot\nabla)u(\bar{u}^{2}+|u|^{2})}{(1+|u|^{2})^2}\right\|_{N^{n/2}} \\
&\lesssim \frac{1}{(1-\|u\|^{2}_{F^{n/2}\cap Z^{n/2}})^{2}}\|v\|_{L^{\infty}_{t,x}}\left\||u|^{2}\nabla u\right\|_{N^{n/2}} \\
&\lesssim \frac{1}{(1-\|u\|^{2}_{F^{n/2}\cap Z^{n/2}})^{2}}\|v\|_{L^{\infty}_{t,x}}\|u\|^{3}_{F^{n/2}\cap Z^{n/2}}.\label{4.16}
\end{split}
\end{align}
Combining the estimates \eqref{4.14}--\eqref{4.16}, we get the desired result \eqref{4.17}.
\end{proof}
\section{The existence and uniqueness of strong solutions}\label{Sec5}
In this section, gathering the arguments of Section 3 and Section 4, we prove the well-posedness of the Ginzburg-Landau equation
\eqref{deri-Ginz} by the contraction mapping theorem, and then we prove the main theorem.
\begin{theorem}\label{th exist}
Assume that $n\geq3$, $0<\varepsilon< 1$, and that the initial data satisfies $\|u_{0}\|_{\dot{B}^{\frac{n}{2}}_{2,1}}\leq \eta$ for some sufficiently small $\eta>0$. If $\|v\|_{L^{\infty}_{t,x}}\leq \eta$, then the equation
\eqref{deri-Ginz} has a unique global solution $u(t,x)$ satisfying
\begin{align*}
\|u\|_{F^{n/2}\cap Z^{n/2}}\leq C\eta,
\end{align*}
where the constant $C>0$ is independent of $\varepsilon$.
\end{theorem}
\begin{proof}
We construct the solution map by Duhamel's principle:
\begin{align*}
\Psi_{u_{0}}(u):=e^{(\varepsilon+i)t\Delta}u_{0}+\int_{0}^{t}e^{(\varepsilon+i)(t-s)\Delta}J(u(s))ds,
\end{align*}
where the initial data $u_{0}$ is given.
First, we prove that the map $\Psi_{u_{0}}$ maps the space $F^{n/2}\cap Z^{n/2}$ into itself, i.e.,
\begin{align*}
\Psi_{u_{0}}:~F^{n/2}\cap Z^{n/2}\to F^{n/2}\cap Z^{n/2}.
\end{align*}
This follows easily from Proposition \ref{pro linear-es} and Proposition \ref{pro nonlinear-es}.
Next, we show that the map $\Psi_{u_{0}}$ is contractive. Suppose that $u_{1}$ and $u_{2}$ are two solutions of \eqref{deri-Ginz}, and they correspond to initial data $u_{0}$. By using Proposition \ref{pro linear-es}, we get
\begin{align*}
\|\Psi_{u_{0}}(u_{1})-\Psi_{u_{0}}(u_{2})\|_{F^{n/2}\cap Z^{n/2}}
\lesssim \|J(u_{1})-J(u_{2})\|_{N^{n/2}}.
\end{align*}
Notice that the term $J(u(t,x))$ consists of three parts; we take the term $J_{2}(u(t,x))$ as an example to prove the contraction property:
\begin{align*}
\|J_{2}(u_{1})-J_{2}(u_{2})\|_{N^{n/2}}
&\lesssim \left\|\frac{\bar{u}_{1}(\nabla u_{1})^{2}}{1+|u_{1}|^{2}}
-\frac{\bar{u}_{2}(\nabla u_{2})^{2}}{1+|u_{2}|^{2}} \right\|_{N^{n/2}}\\
&\lesssim \sup_{u\in\{{u_{1},u_{2}}\}}\frac{1}{1-\|u\|^{2}_{F^{n/2}\cap Z^{n/2}}}\|(\bar{u}_{1}-\bar{u}_{2})(\nabla u)^{2}\|_{N^{n/2}}\\
&\lesssim \sup_{u\in\{{u_{1},u_{2}}\}}\frac{\|u\|^{2}_{F^{n/2}\cap Z^{n/2}}}{1-\|u\|^{2}_{F^{n/2}\cap Z^{n/2}}}
\|u_{1}-u_{2}\|_{F^{n/2}\cap Z^{n/2}}.
\end{align*}
Similarly, we deal with the terms $J_{1}(u(t,x))$ and $J_{3}(u(t,x))$ as
\begin{align*}
\|J_{1}(u_{1})-J_{1}(u_{2})\|_{N^{n/2}}
&\lesssim \|v\|_{L^{\infty}_{t,x}}\|u_{1}-u_{2}\|_{F^{n/2}\cap Z^{n/2}},
\end{align*}
and
\begin{align*}
\|J_{3}(u_{1})-J_{3}(u_{2})\|_{N^{n/2}}
&\lesssim\bigg\{\frac{1}{1-\|u\|^{2}_{F^{n/2}\cap Z^{n/2}}}\left[\|v\|_{L^{\infty}_{t,x}}
+\|v\|_{L^{\infty}_{t,x}}\|u\|^{2}_{F^{n/2}\cap Z^{n/2}}\right]\\
&\quad+\frac{1}{(1-\|u\|^{2}_{F^{n/2}\cap Z^{n/2}})^{2}}\|v\|_{L^{\infty}_{t,x}}
\|u\|^{2}_{F^{n/2}\cap Z^{n/2}}\bigg\}\|u_{1}-u_{2}\|_{F^{n/2}\cap Z^{n/2}}.
\end{align*}
Combining the above estimates with Proposition \ref{pro nonlinear-es}, one has
\begin{align}\label{5.2}
\begin{split}
&\|\Psi_{u_{0}}(u_{1})-\Psi_{u_{0}}(u_{2})\|_{F^{n/2}\cap Z^{n/2}}\\
&\leq C_{1}\sum_{i=1}^{3}\|J_{i}(u_{1})-J_{i}(u_{2})\|_{N^{n/2}}\\
&\leq C_{1}\bigg\{\|v\|_{L^{\infty}_{t,x}}+\frac{1}{1-\|u\|^{2}_{F^{n/2}\cap Z^{n/2}}}\left[\|u\|^{2}_{F^{n/2}\cap Z^{n/2}}+\|v\|_{L^{\infty}_{t,x}}
+\|v\|_{L^{\infty}_{t,x}}\|u\|^{2}_{F^{n/2}\cap Z^{n/2}}\right] \\
&\quad+\frac{1}{(1-\|u\|^{2}_{F^{n/2}\cap Z^{n/2}})^{2}}\|v\|_{L^{\infty}_{t,x}}
\|u\|^{2}_{F^{n/2}\cap Z^{n/2}}\bigg\}\|u_{1}-u_{2}\|_{F^{n/2}\cap Z^{n/2}}.
\end{split}
\end{align}
Applying the relationship between $\|u(t,x)\|_{F^{n/2}\cap Z^{n/2}}$ and $\|u_{0}\|_{\dot{B}^{\frac{n}{2}}_{2,1}}$ (see Proposition \ref{pro linear-es}), we get {\small
\begin{align}\label{5.1}
\begin{split}
&\|u(t,x)\|_{F^{\frac{n}{2}}\cap Z^{\frac{n}{2}}}\\
&\leq C_{2}\|u_{0}\|_{\dot{B}^{\frac{n}{2}}_{2,1}}+C_{1}\|J(u(t,x))\|_{N^{\frac{n}{2}}}\\
&\leq C_{2}\|u_{0}\|_{\dot{B}^{\frac{n}{2}}_{2,1}}+C_{1}\bigg\{\|v\|_{L^{\infty}_{t,x}}+\frac{1}{1-\|u\|^{2}_{F^{n/2}\cap Z^{n/2}}}
\left[\|u\|^{2}_{F^{n/2}\cap Z^{n/2}}+\|v\|_{L^{\infty}_{t,x}}
+\|v\|_{L^{\infty}_{t,x}}\|u\|^{2}_{F^{n/2}\cap Z^{n/2}}\right]\\
&\quad+\frac{1}{(1-\|u\|^{2}_{F^{n/2}\cap Z^{n/2}})^{2}}\|v\|_{L^{\infty}_{t,x}}
\|u\|^{2}_{F^{n/2}\cap Z^{n/2}}\bigg\}\|u\|_{F^{n/2}\cap Z^{n/2}}\\
&=:C_{2}\|u_{0}\|_{\dot{B}^{\frac{n}{2}}_{2,1}}+C_{1}\Gamma\big(\|v\|_{L^{\infty}_{t,x}},\|u\|_{F^{n/2}\cap Z^{n/2}}\big)\|u\|_{F^{n/2}\cap Z^{n/2}}.
\end{split}
\end{align}}
Next, suppose that $\|u\|_{F^{n/2}\cap Z^{n/2}}\leq C_{3}\|u_{0}\|_{\dot{B}^{\frac{n}{2}}_{2,1}}$ and $\|v\|_{L^{\infty}_{t,x}}\leq \sigma_{1}$, where $C_{3}$ satisfies $C_{2}\leq \frac{C_{3}}{4}$. Then we can easily obtain
\begin{align*}
\Gamma\big(\|v\|_{L^{\infty}_{t,x}},\|u\|_{F^{n/2}\cap Z^{n/2}}\big)
\leq \sigma_{1}+\frac{\sigma_{1}+C_{3}^{2}\|u_{0}\|_{\dot{B}^{\frac{n}{2}}_{2,1}}^{2}+\sigma_{1} C_{3}^{2}\|u_{0}\|_{\dot{B}^{\frac{n}{2}}_{2,1}}^{2}}
{1-C_{3}^{2}\|u_{0}\|_{\dot{B}^{\frac{n}{2}}_{2,1}}^{2}}
+\frac{\sigma_{1} C_{3}^{2}\|u_{0}\|_{\dot{B}^{\frac{n}{2}}_{2,1}}^{2}}{\big(1-C_{3}^{2}\|u_{0}\|_{\dot{B}^{\frac{n}{2}}_{2,1}}^{2}\big)^{2}}
=:C_{4},
\end{align*}
Assuming that $\|u_{0}\|_{\dot{B}^{\frac{n}{2}}_{2,1}}\leq \sigma_{2}$, where $\eta:=\min{\{\sigma_{1},\sigma_{2}\}}$ is small enough that $C_{4}$ is small and $C_{1}C_{4}\leq \frac{1}{4}$, we deduce from \eqref{5.1} that
\begin{align*}
\|u(t,x)\|_{F^{\frac{n}{2}}\cap Z^{\frac{n}{2}}}
\leq (C_{2}+C_{1}C_{4}C_{3})\|u_{0}\|_{\dot{B}^{\frac{n}{2}}_{2,1}}
\leq \frac{C_{3}}{2}\|u_{0}\|_{\dot{B}^{\frac{n}{2}}_{2,1}}.
\end{align*}
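The numerical step here uses only the smallness conditions already imposed: since $C_{2}\leq \frac{C_{3}}{4}$ and $C_{1}C_{4}\leq \frac{1}{4}$,
\begin{align*}
C_{2}+C_{1}C_{4}C_{3}\leq \frac{C_{3}}{4}+\frac{C_{3}}{4}=\frac{C_{3}}{2}.
\end{align*}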
Then a bootstrap (continuity) argument implies that the following bound indeed holds:
\begin{align*}
\|u\|_{F^{n/2}\cap Z^{n/2}}\leq C_{3}\|u_{0}\|_{\dot{B}^{\frac{n}{2}}_{2,1}}.
\end{align*}
Finally, gathering the above arguments, when $\|v\|_{L^{\infty}_{t,x}}$ and $\|u_{0}\|_{\dot{B}^{\frac{n}{2}}_{2,1}}$ are bounded by a sufficiently small $\eta$, inequality \eqref{5.2} gives
\begin{align}
\|\Psi_{u_{0}}(u_{1})-\Psi_{u_{0}}(u_{2})\|_{F^{n/2}\cap Z^{n/2}}
\leq \frac{1}{4}\|u_{1}-u_{2}\|_{F^{n/2}\cap Z^{n/2}}.\label{contractive map}
\end{align}
By the contraction mapping theorem, the global well-posedness can be proved. Hence, we complete the proof of Theorem \ref{th exist}.
\end{proof}
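The proof above rests on the contraction estimate \eqref{contractive map} with constant $\frac14$. As a toy numerical illustration of this fixed-point principle (the map below is a hypothetical stand-in for $\Psi_{u_0}$, not the actual solution operator), a self-map with Lipschitz constant $\frac14$ converges geometrically from any starting point:

```python
def iterate_contraction(psi, x0, n):
    """Iterate x_{k+1} = psi(x_k) and return the list of successive iterates."""
    xs = [x0]
    for _ in range(n):
        xs.append(psi(xs[-1]))
    return xs

# Hypothetical stand-in for Psi_{u_0}: an affine map with Lipschitz constant 1/4,
# so |psi(x) - psi(y)| = |x - y|/4, mimicking the bound in the proof.
psi = lambda x: 0.25 * x + 1.0
xs = iterate_contraction(psi, 10.0, 40)
fixed_point = 4.0 / 3.0            # solves x = x/4 + 1
assert abs(xs[-1] - fixed_point) < 1e-12

# Successive differences shrink by the contraction factor 1/4.
diffs = [abs(b - a) for a, b in zip(xs, xs[1:])]
for d0, d1 in zip(diffs, diffs[1:]):
    assert abs(d1 - d0 / 4) < 1e-12
```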
\underline{\bf Proof of Theorem \ref{main-th}}: By the invertibility of the stereographic projection transform, the well-posedness of the strong solution of equation \eqref{main-eq} is equivalent to the well-posedness of equation \eqref{deri-Ginz}. Theorem \ref{main-th} therefore follows from Theorem \ref{th exist} by the above arguments, which completes the proof.
\section*{Acknowledgments}
H. Wang's research is supported by the National Natural Science Foundation of China (Grant No.~11901066), the
Natural Science Foundation of Chongqing (No.~cstc2019jcyj-msxmX0167) and projects No.~2022CDJXY-001, No.~2020CDJQY-A040 supported by the Fundamental Research Funds for the Central Universities.
\end{document}
\begin{document}
\author{Elena Klimenko}
\address{Gettysburg College, Mathematics Department,
300 N. Washington St., CB 402, Gettysburg, PA 17325, USA}
\email{[email protected]}
\author{Natalia Kopteva}
\address{LATP, UMR CNRS 6632, CMI, 39 rue F. Joliot Curie, 13453
Marseille cedex 13, FRANCE}
\email{[email protected]}
\title[The parameter space of two-generator Kleinian groups]
{A two-dimensional slice through the parameter space of
two-generator Kleinian groups}
\begin{abstract}
We describe all real points of the parameter
space of two-generator Kleinian groups with a parabolic generator,
that is, we describe a certain two-dimensional slice through
this space.
In order to do this we
gather together known discreteness criteria for two-generator
groups and present them in the form of conditions on
parameters.
We complete the description by giving discreteness criteria
for groups
generated by a parabolic and a $\pi$-loxodromic element whose
commutator has real trace and present all orbifolds
uniformized by such groups.
\end{abstract}
\keywords{Kleinian group, discrete group, hyperbolic orbifold}
\subjclass{Primary: 30F40; Secondary: 20H10, 22E40, 57M60.}
\thanks{The first author was supported by Gettysburg College
Research and Professional Development Grant, 2005--2006.
The research of the second author was supported by
FP6 Marie Curie IIF Fellowship and carried out at LATP (UMR CNRS 6632)}
\date{\today}
\maketitle
\section{Introduction}
A two-generator subgroup $\Gamma=\langle f,g\rangle$ of $\mathcal{P}SL$
is determined up to conjugacy
by its parameters
$\beta=\beta(f)={\rm tr}^2f-4$, $\beta'=\beta(g)={\rm tr}^2g-4$, and
$\gamma=\gamma(f,g)={\rm tr}[f,g]-2$ whenever $\gamma\not=0$
\cite{GM89}. So the conjugacy class of an ordered pair
$\{f,g\}$ can be identified with a point in the parameter
space ${\mathbb C}^3=\{(\beta,\beta',\gamma)\}$ whenever $\gamma\not=0$.
The subspace $\mathcal K$ of ${\mathbb C}^3$ that corresponds to
the discrete non-elementary groups $\Gamma=\langle f,g\rangle$
is called the {\it parameter space of two-generator Kleinian
groups}.
Note that a two-generator Kleinian group $\Gamma$
can be represented by several points
in $\mathcal K$, since the same group can have different generating pairs.
Among all two-generator subgroups of $\mathcal{P}SL$, we distinguish
the class of $\mathcal{RP}$ {\it groups}
(two-generator groups with real parameters):
$$
\mathcal{RP}=\lbrace\Gamma :\Gamma=\langle f,g\rangle
{\rm \ for\ some\ } f,g\in{\rm PSL}(2,{\mathbb C})
{\rm \ with\ }
(\beta,\beta',\gamma)\in {\mathbb R}^3\rbrace.
$$
The aim of this paper is to completely determine
all points in ${\mathbb C}^3$ that are parameters for
the discrete non-elementary $\mathcal{RP}$ groups with one generator parabolic:
$$
S_\infty=\{(\gamma,\beta):(\beta,0,\gamma)
{\rm \ are\ parameters\ for\ some\ }
\langle f,g\rangle\in\mathcal{DRP}
\},
$$
where $\mathcal{DRP}$ denotes the class of all discrete non-elementary $\mathcal{RP}$ groups.
Geometrically, $S_\infty$ is a two-dimensional slice through the
six-dimensional parameter space~$\mathcal K$.
The slice $S_\infty$ intersects the well-known Riley
slice $(0,0,\gamma)$, $\gamma\in{\mathbb C}$, which consists
of all Kleinian groups generated by two parabolics.
Consider the sequence of slices $\{S_n\}_{n=2}^\infty$, where
$$
S_n=\{(\gamma,\beta):(\beta,-4\sin^2(\pi/n),\gamma)
{\rm \ are\ parameters\ for\ some\ }
\langle f,g\rangle\in\mathcal{DRP}
\}.
$$
The first slice $S_2$ of this sequence
is of great interest in the theory of discrete groups.
This slice consists of all
parameters for discrete $\mathcal{RP}$ groups with an elliptic
generator of order~2 and was investigated in \cite{GGM01}.
It was shown that if $\langle f,g\rangle$ has parameters
$(\beta,\beta',\gamma)$, then there exists a group
$\langle f,h\rangle$ with parameters $(\beta,-4,\gamma)$
such that if $\gamma\not=0,\beta$, then $\langle f,h\rangle$
is discrete whenever $\langle f,g\rangle$ is.
Hence, the slice $S_2$ gives
necessary discreteness conditions for a group with parameters
$(\beta,\beta',\gamma)$, where $\beta$ and $\gamma$ are real.
It follows that every $S_n$ with $n>2$, including $S_\infty$,
is a subset of $S_2$.
Since a parabolic element can be viewed as the limit of a sequence
of primitive elliptic elements of order $n$ as $n\to\infty$,
the following two questions for $\{S_n\}$ and $S_\infty$
naturally arise.
\begin{itemize}
\item[(1)] Is it true that for every point $x\in S_\infty$
there exists a sequence $\{x_k\}_{k=2}^\infty$ with $x_k\in S_k$
that converges to~$x$?
\item[(2)] Is it true that for each $\varepsilon>0$ there exists
$N\in{\mathbb N}$ such that the $\varepsilon$-neighbourhood
of $S_\infty$ contains $S_n$ for all $n>N$?
\end{itemize}
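The heuristic behind these questions is that the elliptic parameter $\beta'=-4\sin^2(\pi/n)$ of the slices $S_n$ tends to the parabolic value $\beta'=0$ as $n\to\infty$; a quick numerical check (an illustrative sketch, not part of the paper's argument):

```python
import math

def beta_prime(n):
    """Parameter beta' = -4 sin^2(pi/n) of a primitive elliptic element of order n."""
    return -4 * math.sin(math.pi / n) ** 2

assert beta_prime(2) == -4.0            # half-turn: the slice S_2
assert abs(beta_prime(10**6)) < 1e-10   # approaches the parabolic value 0
# beta' increases monotonically toward 0 along the sequence of slices S_n.
assert all(beta_prime(n) < beta_prime(n + 1) for n in range(2, 100))
```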
Note that the structure of $S_n$ for $n>2$ is unknown.
We work out $S_\infty$ by splitting the plane $(\gamma,\beta)$
into several parts.
It turns out that $\Gamma=\langle f,g\rangle$
has an invariant plane in one
of the following cases:
(1) $\gamma<0$ and $\beta\leq -4$;
(2) $\gamma>0$ and $\beta\geq -4$.
Such discrete groups
were investigated, for example, in \cite{KS98} and
\cite{GiM91,Kna68,Mat82}, respectively.
If $\gamma<0$ and $\beta>-4$, then $\Gamma$ is truly spatial
(non-elementary and without invariant plane)
and this case is treated in~\cite{KK05}.
We gather these discreteness criteria together
and transform them
into conditions on $\beta$ and $\gamma$ where this was not done before.
So the last case to consider is when $\gamma>0$ and $\beta<-4$.
In this case $\Gamma$ is truly spatial with $f$ $\pi$-loxodromic.
We complete the study of the slice
$S_\infty$ by giving discreteness criteria for
such groups.
The paper is organised as follows.
In Section~2, discreteness criteria are given for
truly spatial $\mathcal{RP}$ groups $\Gamma$
generated by a $\pi$-loxodromic and a parabolic element
(Theorems~\ref{criterion_psl} and~\ref{criterion_par}).
In Section~3,
for each such discrete $\Gamma$ we obtain a presentation and the
Kleinian orbifold $Q(\Gamma)$ (Theorem~\ref{groups}).
Section~4 is devoted to the analysis of the parameter space.
We completely describe the slice $S_\infty$ by
giving explicit formulas for the parameters $\beta$ and $\gamma$.
We also program the obtained formulas in the package Maple 7.0
and
plot a part of $S_\infty$ on the $(\gamma,\beta)$-plane to give an idea
of what it looks like.
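In the same spirit as that Maple computation, sample points of $S_\infty$ can be generated directly from the explicit formulas of Theorem~\ref{criterion_par}; the Python fragment below is an illustrative analogue (not the authors' Maple code), using that for an elliptic half-length $u=i\pi/m$ one has $\cosh^2u=\cos^2(\pi/m)$:

```python
import math

def gamma_of(m):
    """gamma(f,g) = 4 cosh^2(u) for u = i*pi/m, i.e. 4 cos^2(pi/m)."""
    return 4 * math.cos(math.pi / m) ** 2

def beta_even(m, p):
    """Family (1) of Theorem criterion_par (m even): beta = -4 cos^2(pi/p)/gamma - 4."""
    return -4 * math.cos(math.pi / p) ** 2 / gamma_of(m) - 4

def beta_odd(p):
    """Family (2) (m odd): beta = -4 cos^2(pi/p) - 4, independent of gamma."""
    return -4 * math.cos(math.pi / p) ** 2 - 4

points = [(gamma_of(m), beta_even(m, p))
          for m in (4, 6, 8) for p in (3, 4, 5)]
# Every point lies in the quadrant gamma > 0, beta < -4 (f is pi-loxodromic).
assert all(g > 0 and b < -4 for g, b in points)
```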
\section{Discreteness criteria}
Recall that an element $f\in\mathcal{P}SL$ with real $\beta(f)$
is
{\it elliptic}, {\it parabolic}, {\it hyperbolic}, or
{\it $\pi$-loxodromic}
according to whether
$\beta(f)\in[-4,0)$, $\beta(f)=0$, $\beta(f)\in(0,+\infty)$, or
$\beta(f)\in(-\infty,-4)$.
If $\beta(f)\notin[-4,+\infty)$,
then $f$ is called {\it strictly loxodromic}.
An elliptic element $f$ of order $n$ is said to be {\it non-primitive}
if it is a rotation through $2\pi q/n$, where $q$ and $n$ are
coprime ($1<q<n/2$). If $f$ is a rotation through $2\pi/n$,
then it is called {\it primitive}.
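This classification by the real parameter $\beta(f)={\rm tr}^2f-4$ is mechanical enough to encode directly (a sketch whose labels mirror the terminology above):

```python
def element_type(beta):
    """Type of f in PSL(2,C) with real beta(f) = tr^2(f) - 4."""
    if beta < -4:
        return "pi-loxodromic"
    if beta < 0:                  # beta in [-4, 0)
        return "elliptic"
    if beta == 0:
        return "parabolic"
    return "hyperbolic"           # beta in (0, +infinity)

assert element_type(-6) == "pi-loxodromic"
assert element_type(-4) == "elliptic"     # half-turn: tr f = 0
assert element_type(0) == "parabolic"
assert element_type(1) == "hyperbolic"
```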
\begin{theorem}\label{criterion_psl}
Let $f\in\mathcal{P}SL$ be a $\pi$-loxodromic element,
$g\in\mathcal{P}SL$ be a parabolic element,
and let $\Gamma=\langle f,g\rangle$ be a non-elementary
$\mathcal{RP}$ group without invariant plane.
Then
\begin{itemize}
\item[(1)]
there exist unique elements $h_1,h_2\in\mathcal{P}SL$ such that
$h_1^2=fg^{-1}f^{-1}g^{-1}$ and $(h_1g)^2=1$,
$h_2^2=f^{-1}g^{-1}f^2gf^{-1}$ and $(h_2fg^{-1}f^{-1})^2=1$.
\item[(2)]
the group $\Gamma$ is discrete if and only if one of the following
conditions holds:
\begin{itemize}
\item[(i)]
$h_1$ is either a hyperbolic, or parabolic, or primitive elliptic
element of even order $m\geq 4$, and
$h_2$ is either a hyperbolic, or parabolic, or primitive elliptic
element of order $p\geq 3$;
\item[(ii)]
$h_1$ is a primitive elliptic
element of odd order $m\geq 3$, and $h_2h_1$ is either a hyperbolic,
or parabolic, or primitive elliptic element of order $k\geq 3$.
\end{itemize}
\end{itemize}
\end{theorem}
\subsection*{Basic geometric construction}
We will construct a group $\Gamma^*$ that
contains $\Gamma=\langle f,g\rangle$ as a subgroup of finite index.
The idea is to find $\Gamma^*$ so that a fundamental polyhedron for
a discrete $\Gamma^*$ can be easily constructed.
It will be clear from the construction that $\Gamma$ is
commensurable with a reflection group which either coincides
with $\Gamma^*$ or is an index 2 subgroup of $\Gamma^*$.
The construction
presented below will be used throughout Sections~2 and~3
and we shall use the notation introduced here.
Let $f$ and $g$ be as in the statement of Theorem~\ref{criterion_psl}.
Since $\Gamma$ is a non-elementary $\mathcal{RP}$ group without invariant plane,
there exists an invariant plane of $g$, say $\eta$,
which is orthogonal to the axis of $f$ \cite[Theorem~2]{KK02}.
Denote by $M$ the fixed point of $g$ and by $\omega$ the plane
that passes through $M$ and $f$ (we denote elements and their
axes by the same letters when it does not lead to any confusion).
Note that $f$ keeps $\omega$ invariant.
Since $f$ is orthogonal to $\eta$, $\omega$ is also orthogonal
to $\eta$. Let $e$ be the half-turn with the axis $\omega\cap\eta$.
Then $e$ passes through $M$ and is orthogonal to~$f$.
Let $e_f$ and $e_g$ be half-turns such that
\begin{equation}\label{efeg}
f=e_fe {\rm \quad and \quad} g=e_ge.
\end{equation}
Then $e_f$ is orthogonal to $\omega$ and $e_g$ lies in $\eta$.
Let $\tau$ be the plane passing through $e_g$ orthogonally to $\eta$
and let $\sigma=e_f(\tau)$.
The planes $\tau$ and $\omega$ are parallel and $M$ is their common
point on the boundary $\partial{\mathbb H}^3$.
Since $e_f$ is orthogonal to $\omega$, the planes
$\sigma$ and $\omega$ are also parallel
with the common point $e_f(M)$ on $\partial{\mathbb H}^3$.
Since $e_f(M)\not=M$, the planes $\omega$, $\sigma$, and $\tau$
do not have a common point in
$\overline{{\mathbb H}^3}={\mathbb H}^3\cup\partial{\mathbb H}^3$.
Therefore,
there exists a unique plane $\delta$ orthogonal to all $\omega$,
$\sigma$, and $\tau$.
It is clear that $e_f\subset\delta$.
Consider two extensions of $\Gamma$:
$\widetilde\Gamma=\langle f,g,e\rangle$ and
$\Gamma^*=\langle f,g,e,R_\omega\rangle$.
(We denote the reflection in a plane $\kappa$ by $R_\kappa$.)
One can show that $\widetilde\Gamma=\langle e_f,e_g,e\rangle$
and $\Gamma^*=\langle e_f,R_\eta,R_\omega,R_\tau\rangle$.
From (\ref{efeg}), it follows that $\widetilde\Gamma$ contains $\Gamma$
as a subgroup of index at most~2.
Moreover, $\widetilde\Gamma$ is the orientation preserving subgroup of
$\Gamma^*$ and, hence, $\Gamma^*$ contains $\Gamma$ as a subgroup
of finite index. Therefore, $\Gamma$, $\widetilde\Gamma$, and
$\Gamma^*$ are either all discrete, or all non-discrete.
We then concentrate on the group~$\Gamma^*$.
\begin{figure}
\caption{Polyhedron $\mathcal{P}^*$}
\label{fund_poly1}
\end{figure}
Let $\mathcal{P}^*$ be the infinite volume polyhedron bounded by
$\eta$, $\omega$, $\tau$, $\sigma$, and $\delta$. $\mathcal{P}^*$ has five right
dihedral angles (between faces lying in $\eta$ and $\omega$,
$\eta$ and $\tau$, $\delta$ and $\omega$, $\delta$ and $\tau$,
and $\delta$ and $\sigma$).
The plane $\sigma$ may either intersect with, or be parallel to, or
be disjoint from each of $\tau$ and $\eta$.
If $\sigma$ and $\tau$ intersect, then we denote the dihedral angle of
$\mathcal{P}^*$ between them by $2\pi/m$, where $m>2$ is not necessarily an
integer. We keep the notation $2\pi/m$ taking $m=\infty$ and
$m=\overline\infty$ for parallel or disjoint $\sigma$ and $\tau$,
respectively.
Similarly, we denote the ``dihedral angle'' between
$\eta$ and $\sigma$ by $\pi/p$, where $p>2$ is real,
$\infty$, or $\overline\infty$.
(We regard $\overline\infty>\infty>x$, $x/\infty=x/\overline\infty=0$,
$\infty/x=\infty$, $\overline\infty/x=\overline\infty$ for any
positive real $x$.)
$\mathcal{P}^*$ exists in ${\mathbb H}^3$ for all $m>2$ and $p>2$
by~\cite{Vin85}.
In Figure~\ref{fund_poly1}, $\mathcal{P}^*$ is drawn under assumption that
$m<\infty$, $p<\infty$, and
$1/2+1/p+2/m>1$. The shaded triangle shows the hyperbolic plane
orthogonal to $\eta$, $\sigma$, and $\omega$. Note that
this plane is not a face of $\mathcal{P}^*$ and is shown only to underline
the combinatorial structure of $\mathcal{P}^*$. In figures, we do not label
dihedral angles of $\pi/2$ in order to not overload the picture.
Suppose now that $m<\infty$, that is $\sigma$ and $\tau$ intersect.
Let $\xi$ be the plane passing through $e_f$ orthogonally to $\delta$.
Then $\xi$ is orthogonal to $\omega$. One can see that
$\sigma=R_\xi(\tau)$ and
$\xi$ is the bisector of the dihedral angle of $\mathcal{P}^*$ made by
$\tau$ and~$\sigma$.
Let $\mathcal{Q}^*$ be the polyhedron bounded by $\eta$, $\tau$, $\omega$,
$\delta$, and $\xi$. $\mathcal{Q}^*$ has six dihedral angles of $\pi/2$;
the dihedral angle between $\tau$ and $\xi$ is equal to $\pi/m$ with
$2<m<\infty$.
Denote the ``dihedral angle'' between $\eta$ and $\xi$ by $\pi/k$,
where $k>2$ is real, $k=\infty$, or $k=\overline\infty$.
$\mathcal{Q}^*$ exists in ${\mathbb H}^3$ for all $m>2$ and $k>2$
by~\cite{Vin85}.
Note that $R_\xi$ is not necessarily in $\Gamma^*$, but if it is and
if $\Gamma^*$ is discrete, then we will see that $\mathcal{Q}^*$ is a
fundamental polyhedron for~$\Gamma^*$.
In Figure~\ref{fund_poly2}, $\mathcal{Q}^*$ is drawn under assumption that
$1/2+1/k+1/m>1$.
\begin{figure}
\caption{Polyhedron $\mathcal{Q}^*$}
\label{fund_poly2}
\end{figure}
\begin{lemma}\label{hs}
Let $f\in\mathcal{P}SL$ be a $\pi$-loxodromic element,
$g\in\mathcal{P}SL$ be a parabolic element,
and let $\Gamma=\langle f,g\rangle$ be a non-elementary
$\mathcal{RP}$ group without invariant plane.
Then there exist unique elements
$h_1,h_2\in\mathcal{P}SL$ such that
\begin{itemize}
\item[(1)] $h_1^2=fg^{-1}f^{-1}g^{-1}$ and $(h_1g)^2=1$,
\item[(2)] $h_2^2=f^{-1}g^{-1}f^2gf^{-1}$ and $(h_2fg^{-1}f^{-1})^2=1$.
\end{itemize}
Moreover, the elements $h_1$ and $h_2$ are not strictly loxodromic.
\end{lemma}
\begin{proof}
First, note that $R_\sigma=e_fR_\tau e_f$
and $g=R_\tau R_\omega$.
Therefore,
\begin{equation}\label{fprim}
R_\sigma R_\omega= e_f R_\tau e_f R_\omega=e_fR_\tau R_\omega e_f=
e_fge_f=fg^{-1}f^{-1}.
\end{equation}
Let us show that if we take $h_1=R_\xi R_\tau=R_\sigma R_\xi$, then
assertion (1) of the lemma holds. Indeed,
$$
h_1^2=R_\sigma R_\tau=(R_\sigma R_\omega)(R_\omega R_\tau)=fg^{-1}f^{-1}g^{-1}.
$$
Moreover, $h_1g=(R_\xi R_\tau)(R_\tau R_\omega)=R_\xi R_\omega$. Since
$\xi$ and $\omega$ are orthogonal, $(R_\xi R_\omega)^2=1$.
Hence, $(h_1g)^2=1$.
Note also that since $h_1$ is a product of two reflections,
$h_1$ is not strictly loxodromic.
Now let us show that $h_1$ is unique.
The element $fg^{-1}f^{-1}g^{-1}$ is uniquely determined
as an element of $\mathcal{P}SL$.
If $fg^{-1}f^{-1}g^{-1}$ is parabolic, it has only one square
root~$h_1$. Suppose that $fg^{-1}f^{-1}g^{-1}$ is hyperbolic. Then
it has exactly two square roots, one of which is $h_1$ defined above
and the other, denoted $\overline h_1$, is a $\pi$-loxodromic element
with the same axis and translation length as $h_1$. Clearly,
$(\overline h_1g)^2\not=1$.
If $fg^{-1}f^{-1}g^{-1}$ is elliptic, then it also has two square roots
$h_1$ and $\overline h_1$,
both are elliptic elements. The element $\overline h_1$ is elliptic
with the same axis as $h_1$ and with rotation angle $(\pi-2\pi/m)$,
while $h_1$ is a rotation through $2\pi/m$ in the opposite direction.
Again, $(\overline h_1g)^2\not=1$.
Now we take
$$
h_2=R_\eta R_\sigma=(R_\eta R_\tau)(R_\tau R_\sigma)=e_gh_1^{-2}=
efgf^{-1}.
$$
Then
$$
h_2^2=f^{-1}g^{-1}f^2gf^{-1} {\rm\quad and\quad} (fg^{-1}f^{-1}h_2)^2=1.
$$
These two conditions determine
$h_2$ uniquely.
\end{proof}
Note that the elements $h_1, h_2$ defined in Lemma~\ref{hs} determine
combinatorial and metric structures of $\mathcal{P}^*$. For example, if
$h_1$ is elliptic, then its rotation angle is equal to the dihedral angle
of $\mathcal{P}^*$ between $\sigma$ and $\tau$. If $h_2$ is elliptic, then its
rotation angle is equal to the doubled dihedral angle of $\mathcal{P}^*$ between
$\eta$ and $\sigma$. Vice versa, if the metric structure of $\mathcal{P}^*$
is fixed, then the types of elements $h_1$ and $h_2$ can be determined.
The same can be said about $\mathcal{Q}^*$ and the elements $h_1$ and $h_2h_1$.
The element $h_2h_1$ is responsible for the
mutual position of the planes $\eta$ and~$\xi$ (see the proof of
Lemma~\ref{h1h2}).
Lemmas~\ref{h1}--\ref{h1h2} below give some necessary conditions
for discreteness of $\Gamma$ via conditions on elements $h_1$ and $h_2$.
One needs to
keep in mind the connection between these elements and the
polyhedra $\mathcal{P}^*$
and~$\mathcal{Q}^*$.
\begin{lemma}\label{h1}
If $\Gamma$ is discrete, then
$h_1$ is either a hyperbolic, or parabolic, or primitive elliptic
element of order $m\geq 3$.
\end{lemma}
\begin{proof}
The subgroup $H=\langle g,fgf^{-1}\rangle$ of $\Gamma$
keeps $\delta$ invariant and is conjugate to a
subgroup of ${\rm PSL}(2,\mathbb R)$.
Since $\Gamma$ is discrete, $H$ must be discrete.
By \cite{Mat82} or \cite{Bea88},
the group $H$ is discrete if and only if either
(1) $fg^{-1}f^{-1}g^{-1}=h_1^2$
is a hyperbolic, or a parabolic, or a primitive elliptic
element, or
(2) $h_1$ is a primitive elliptic element of odd order
$m$, where $m\geq 3$.
If $h_1^2$ is parabolic or hyperbolic, then $h_1$
is parabolic or hyperbolic, respectively.
If $h_1^2$ is a primitive elliptic element,
then $h_1$ is a primitive elliptic of even order $m\geq 4$.
\end{proof}
\begin{lemma}\label{h2}
If $\Gamma$ is discrete,
then $h_2$ is either a hyperbolic, or parabolic, or primitive elliptic
element of order $p\geq 3$.
\end{lemma}
\begin{proof}
Let $\kappa$ be the plane orthogonal to
$\eta$, $\sigma$, and $\omega$.
The subgroup $H=\langle e,fgf^{-1}\rangle$ of $\widetilde\Gamma$
keeps the plane $\kappa$ invariant and is conjugate to a
subgroup of ${\rm PSL}(2,\mathbb R)$.
By \cite{Mat82},
$H$ is discrete if and only if $h_2=efgf^{-1}$
is either a hyperbolic, or parabolic, or primitive elliptic
element of order $p\geq 3$.
\end{proof}
\begin{lemma}\label{h1h2}
If $\Gamma$ is discrete and $h_1$ is a primitive elliptic element
of odd order, then $h_2h_1$ is either a hyperbolic,
or parabolic, or primitive elliptic element of order $k\geq 3$.
\end{lemma}
\begin{proof}
Recall that
$\Gamma^*=\langle e_f,R_\eta,R_\tau,R_\omega\rangle$.
Since $h_1$ has odd order and $h_1^2\in\Gamma^*$,
$h_1\in\Gamma^*$. Since, moreover,
$h_1=R_\xi R_\tau$,
$e_f=R_\delta R_\xi$, and $R_\tau\in\Gamma^*$,
both $R_\xi$ and $R_\delta$ are also in~$\Gamma^*$.
Further, since the plane $\xi$ is orthogonal to $\omega$, the group
$\langle R_\eta R_\delta, e_f\rangle$
keeps $\omega$ invariant and is conjugate to a
subgroup of ${\rm PSL}(2,\mathbb R)$.
It is clear that $\langle R_\eta R_\delta, e_f\rangle$ is
discrete if and only if $R_\eta R_\xi=h_2h_1$ is a hyperbolic,
parabolic, or primitive elliptic element of order $k\geq 3$ \cite{Mat82}.
\end{proof}
\noindent
{\it Proof of Theorem~\ref{criterion_psl}.}
Lemma~\ref{hs} proves existence and uniqueness of elements $h_1$
and $h_2$. Now we prove part (2) of the theorem.
If $\Gamma$ is discrete then
$h_1$ is either a hyperbolic,
or parabolic, or primitive elliptic element of order $m\geq 3$
by Lemma~\ref{h1}.
We split the discrete groups $\Gamma$ into two families.
The first family consists of those groups for
which $h_1$ is hyperbolic, parabolic, or primitive elliptic
of even order.
By Lemma~\ref{h2}, for these groups $h_2$ is a hyperbolic,
parabolic, or primitive elliptic element.
The second family consists of the discrete groups with
$h_1$ elliptic of odd order.
Then by Lemma~\ref{h1h2}, $h_2h_1$ is a hyperbolic, or parabolic,
or primitive elliptic element of order $k\geq 3$.
(Note that in this case
$h_2$ is necessarily hyperbolic or primitive elliptic.)
So if $\Gamma$ is discrete, then either (2)(i) or
(2)(ii) of Theorem~\ref{criterion_psl} can occur.
Clearly, if neither (2)(i) nor (2)(ii) holds, then $\Gamma$ is
not discrete by Lemmas~\ref{h1}--\ref{h1h2}.
Now prove that each of (2)(i) and (2)(ii) is a sufficient condition
for $\Gamma$ to be discrete.
In each of the two cases we will give a fundamental polyhedron for
$\Gamma^*$ to show, by using the Poincar\'e polyhedron theorem
\cite{EP94}, that
$\Gamma^*$ is discrete.
Suppose that (2)(i) holds. Then since $m$ is even,
the group $G_1$ generated by the side pairing transformations
$R_\eta$, $R_\omega$,
$R_\sigma$, $R_\tau$, and $e_f$
and the polyhedron $\mathcal{P}^*$ satisfy the
Poincar\'e polyhedron theorem, $G_1$ is discrete and $\mathcal{P}^*$
is its fundamental polyhedron.
Obviously, $G_1=\Gamma^*$.
Suppose that (2)(ii) holds. Then the
group $G_2$ generated by the side pairing transformations
$R_\eta$, $R_\omega$,
$R_\xi$, $R_\tau$, and $R_\delta$ and
the polyhedron $\mathcal{Q}^*$ satisfy the Poincar\'e theorem, $G_2$ is discrete,
and $\mathcal{Q}^*$ is its fundamental polyhedron.
In the proof of Lemma~\ref{h1h2} it was shown that, for $m$ odd,
$R_\xi\in\Gamma^*$ and $R_\delta\in\Gamma^*$.
Moreover, $e_f=R_\xi R_\delta$.
Hence, $G_2=\Gamma^*$, so $\Gamma^*$ is discrete.
Theorem~\ref{criterion_psl} is proved.\qed
Our next goal is to compute parameters $(\beta(f),\beta(g),\gamma(f,g))$
for both series of discrete groups listed in Theorem~\ref{criterion_psl}.
If $f\in\mathcal{P}SL$ is a loxodromic element with translation length $d_f$ and
rotation angle $\theta_f$, then
$$
{\rm tr}^2f=4\cosh^2 \frac{d_f+i\theta_f}2
$$
and $\lambda_f=d_f+i\theta_f$ is called the
{\it complex translation length} of~$f$.
Note that if $f$ is hyperbolic then $\theta_f=0$ and ${\rm tr}^2f=4\cosh^2(d_f/2)$.
If $f$ is elliptic then $d_f=0$ and ${\rm tr}^2f=4\cos^2(\theta_f/2)$.
If $f$ is parabolic then ${\rm tr}^2f=4$;
by convention we set $d_f=\theta_f=0$.
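These three special cases follow from $\cosh(i\theta/2)=\cos(\theta/2)$; a quick numerical confirmation (an illustrative sketch):

```python
import cmath
import math

def tr_squared(d, theta):
    """tr^2 f = 4 cosh^2((d + i*theta)/2) for complex translation length d + i*theta."""
    return 4 * cmath.cosh((d + 1j * theta) / 2) ** 2

# hyperbolic: theta = 0  ->  4 cosh^2(d/2), real and > 4
t = tr_squared(1.0, 0.0)
assert abs(t.imag) < 1e-12 and abs(t.real - 4 * math.cosh(0.5) ** 2) < 1e-12

# elliptic: d = 0  ->  4 cos^2(theta/2), real and in [0, 4)
t = tr_squared(0.0, 2 * math.pi / 5)
assert abs(t.imag) < 1e-12 and abs(t.real - 4 * math.cos(math.pi / 5) ** 2) < 1e-12

# parabolic convention: d = theta = 0  ->  tr^2 f = 4
assert tr_squared(0.0, 0.0) == 4
```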
We define the set
$$
\mathcal{U}=\{u:u=i\pi/p
{\rm \ for\ some\ } p\in{\mathbb Z}, p\geq 2\}\cup[0,+\infty).
$$
In other words, the set $\mathcal{U}$ consists of
all complex translation half-lengths
$u=\lambda_f/2$ for hyperbolic,
parabolic, and primitive elliptic elements~$f$.
Furthermore, we define a function
$t:\mathcal{U}\to\{2,3,4,\dots\}\cup\{\infty,\overline\infty\}$
as follows:
$$
t(u)=\left\{
\begin{array}{lll}
p & {\rm if} & u=i\pi/p,\\
\infty & {\rm if} & u=0,\\
\overline\infty & {\rm if} & u\in(0,+\infty).
\end{array}
\right.
$$
Given $u\in\mathcal{U}$ and $f$ with ${\rm tr}^2f=4\cosh^2u$,
$t(u)$ determines the type of $f$ and, moreover, its
order if $f$ is elliptic.
Note also that since we regard $\infty/n=\infty$ and
$\overline\infty/n=\overline\infty$, an expression of the form
$(t(u),n)=1$ with $n>1$ means, in particular, that $t(u)$ is finite.
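The bookkeeping with $\infty$ and $\overline\infty$ can be made concrete (a sketch: $u$ is encoded as a purely imaginary complex number $i\pi/p$ or as a nonnegative real, and a string stands in for the symbol $\overline\infty$):

```python
import math

OVERLINE_INF = "overline-infinity"   # stand-in for the symbol used for disjoint planes

def t(u):
    """t(u) for u in U: order p if u = i*pi/p; infinity if u = 0;
    overline-infinity if u is a positive real half-length (hyperbolic)."""
    if isinstance(u, complex):       # u = i*pi/p, purely imaginary
        return round(math.pi / u.imag)
    if u == 0:
        return math.inf
    return OVERLINE_INF

assert t(1j * math.pi / 7) == 7      # primitive elliptic of order 7
assert t(0.0) == math.inf            # parabolic
assert t(0.3) == OVERLINE_INF        # hyperbolic
```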
\begin{theorem}\label{criterion_par}
Let $f,g\in\mathcal{P}SL$ with $\beta(f)<-4$,
$\beta(g)=0$, and $\gamma(f,g)>0$.
Then $\Gamma=\langle f,g\rangle$
is discrete if and only if one of the following holds:
\begin{enumerate}
\item $\gamma(f,g)=4\cosh^2u$ and $\beta(f)=-4\cosh^2v/\gamma(f,g)-4$,
where $u,v\in\mathcal{U}$ with $t(u)\geq 4$, $(t(u),2)=2$, and $t(v)\geq 3$;
\item $\gamma(f,g)=4\cosh^2u$ and $\beta(f)=-4\cosh^2v-4$, where
$u,v\in\mathcal{U}$ with $t(u)\geq 3$, $(t(u),2)=1$, and $t(v)\geq 3$.
\end{enumerate}
\end{theorem}
\begin{proof}
Obviously, $\beta(f)<-4$ and $\beta(g)=0$ if and only if
$f$ is $\pi$-loxodromic and $g$ is parabolic.
With this choice of $\beta(f)$ and $\beta(g)$,
$\gamma(f,g)>0$ if and only if the group
$\Gamma=\langle f,g\rangle$ is a non-elementary $\mathcal{RP}$ group without
invariant plane~\cite{KK02}.
This means that the hypotheses of Theorem~\ref{criterion_par}
are equivalent
to the hypotheses of Theorem~\ref{criterion_psl}.
Therefore, in order to prove Theorem~\ref{criterion_par} it is
sufficient to
calculate the parameters $\beta(f)$ and $\gamma(f,g)$ for
both families of the discrete groups listed in
Theorem~\ref{criterion_psl}.
Let $\sigma'$ be the image of $\sigma$ under $R_\omega$, that is
$R_{\sigma'}=R_\omega R_\sigma R_\omega$.
Using the identity~(\ref{fprim})
and the fact that $g=R_\tau R_\omega$,
we have
$$
[f,g]=fgf^{-1}g^{-1}=(R_\omega R_\sigma)(R_\omega R_\tau)=
(R_{\sigma'} R_\omega)(R_\omega R_\tau)=R_{\sigma'}R_\tau.
$$
Note that $\sigma'$
and $\tau$ are disjoint and $\delta$ is orthogonal to both of them.
Therefore, $[f,g]$ is a hyperbolic element with the
axis lying in $\delta$ and the translation length $2d$, where $d$ is
the distance between $\sigma'$ and $\tau$.
Hence, since $\gamma(f,g)>0$,
$$\gamma(f,g)={\rm tr}[f,g]-2=+2\cosh d-2.$$
\begin{figure}\label{poly_par}
\end{figure}
Now, using generalised triangles in
the plane $\delta$, it is not difficult to calculate that
$$
\gamma(f,g)=\left\{
\begin{array}{lll}
4\cos^2(\pi/m) & {\rm if} & 3\leq m<\infty,\\
4 & {\rm if} & m=\infty,\\
4\cosh^2(d(\sigma,\tau)/2) & {\rm if} & m=\overline\infty,\\
\end{array}
\right.
$$
where $d(\sigma,\tau)$ is the distance between $\sigma$ and $\tau$ if they are
disjoint.
Hence,
$$\gamma(f,g)=4\cosh^2u,$$
where $u\in\mathcal{U}$, $t(u)=m\geq 3$.
Let us calculate $\beta(f)$.
The element $f$ is $\pi$-loxodromic if and only if
${\rm tr}^2f=4\cosh^2(T+i\pi/2)=-4\sinh^2T$, where $2T$ is the
translation length of $f$. That is,
$$
\beta(f)=-4\sinh^2T-4.
$$
Note that $T$ is the distance between $e$ and $e_f$. It is measured in
$\omega$ and equals $BE$ (see Figure~\ref{poly_par}).
Suppose that we are in case (2)(i) of Theorem~\ref{criterion_psl},
that is $(t(u),2)=2$,
and that
$\sigma$ and $\tau$ intersect.
Recall that $\xi$ is the bisector of the dihedral angle of $\mathcal{P}^*$ made
by $\sigma$ and $\tau$.
Let $\psi$ be the angle that $\xi$ makes with $\eta$. Note that
$\psi=\angle BCE$.
From the link of $D$, we have that
$$
\cos\chi=\frac{\cos(\pi/p)}{\sin(2\pi/m)}=\frac{\cos\psi}{\sin(\pi/m)}
$$
and, therefore,
\begin{equation}\label{psi}
\cos\psi=\frac{\cos(\pi/p)}{2\cos(\pi/m)}.
\end{equation}
Further, from the link of $D$,
\begin{equation}\label{adc}
\cos\angle ADC=
\frac{\cos\psi\cdot\cos(\pi/m)}{\sin\psi\cdot\sin(\pi/m)}.
\end{equation}
From the triangle $ABM$, $\cosh^2 AB=1/\sin^2(\pi/m)$
and, from the quadrilateral $ABCD$,
\begin{equation}\label{bc}
\sinh BC=\frac{\cos \angle ADC}{\sinh AB}.
\end{equation}
Finally, from the triangle $BCE$,
\begin{equation}\label{sinht}
\sinh T=\sinh BE=\sin\psi\cdot \sinh BC.
\end{equation}
Combining (\ref{psi})--(\ref{sinht}), we have that
$$
\sinh^2T=\frac{\cos^2(\pi/p)}{4\cos^2(\pi/m)}=
\frac{\cos^2(\pi/p)}{\gamma(f,g)}.
$$
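The chain (\ref{psi})--(\ref{sinht}) can be sanity-checked numerically; in the sketch below $m$ and $p$ run over sample values with intersecting $\sigma$ and $\tau$, and each intermediate quantity follows the corresponding formula above (with $\cosh^2AB=1/\sin^2(\pi/m)$):

```python
import math

def sinh_T(m, p):
    """Compute sinh T by chaining formulas (psi), (adc), (bc), (sinht)."""
    cos_psi = math.cos(math.pi / p) / (2 * math.cos(math.pi / m))   # (psi)
    sin_psi = math.sqrt(1 - cos_psi ** 2)
    cos_ADC = (cos_psi * math.cos(math.pi / m)) / (sin_psi * math.sin(math.pi / m))  # (adc)
    sinh_AB = math.cos(math.pi / m) / math.sin(math.pi / m)         # cosh^2 AB = 1/sin^2(pi/m)
    sinh_BC = cos_ADC / sinh_AB                                     # (bc)
    return sin_psi * sinh_BC                                        # (sinht)

# Verify sinh^2 T = cos^2(pi/p) / (4 cos^2(pi/m)) for sample even m and p >= 3.
for m in (4, 6):
    for p in (3, 4, 5):
        lhs = sinh_T(m, p) ** 2
        rhs = math.cos(math.pi / p) ** 2 / (4 * math.cos(math.pi / m) ** 2)
        assert abs(lhs - rhs) < 1e-12
```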
Similar calculations can be done for parallel or disjoint $\sigma$
and $\tau$.
Hence,
$\beta(f)=-4\sinh^2T-4=-4\cosh^2v/\gamma(f,g)-4$,
where $v\in\mathcal{U}$, $t(v)\geq 3$.
Now note that in case (2)(ii) of Theorem~\ref{criterion_psl},
the angle $\psi=\angle BCE$ must be of the form $\pi/k$, $k\geq 3$
is an integer, $\infty$, or $\overline\infty$.
Then we need to recompute the formulas (\ref{adc})--(\ref{sinht})
with $\psi=\pi/k$:
$$
\cos\angle ADC=\frac{\cos(\pi/k)\cdot\cos(\pi/m)}
{\sin(\pi/k)\cdot\sin(\pi/m)},
\quad
\sinh BC=\frac{\cos\angle ADC}{\sinh AB}=\frac{\cos(\pi/k)}{\sin(\pi/k)}.
$$
Then
$$
\sinh T=\sin\psi \cdot\sinh BC=\cos(\pi/k).
$$
Hence, $\beta(f)=-4\cosh^2v-4$, where $v\in\mathcal{U}$, $t(v)\geq 3$.
\end{proof}
\section{Orbifolds}
Denote by $\Omega(\Gamma)$ the discontinuity set of a Kleinian group
$\Gamma$.
The {\it Kleinian orbifold}
$Q(\Gamma)=({\mathbb H}^3\cup\Omega(\Gamma))/\Gamma$
is said to be an orientable $3$-orbifold with a complete hyperbolic structure
on its interior ${\mathbb H}^3/\Gamma$ and a conformal structure
on its boundary $\Omega(\Gamma)/\Gamma$.
We need the following (Kleinian) group presentations:
\begin{itemize}
\item
$PH[\infty,m;q]=\langle x,y,s\,|\,
x^\infty=s^2=(xs)^2=(ys)^2=(xyxy^{-1})^m=(y^{-1}xys)^q=1\rangle$,
\item
$P[\infty,m;q]=\langle w,x,y,z\,|\,w^\infty=x^2=y^2=z^2=(wx)^2=(wy)^2=(yz)^2=
(zx)^q=(zw)^m=1\rangle$,
\item
$\mathcal{S}_2[\infty,m;q]=
\langle x,L\,|\,x^\infty=(xLxL^{-1})^m=(xL^2x^{-1}L^{-2})^q=1\rangle$,
\item
$GTet_1[\infty,m;q]=\langle x,y,z\,|\,
x^\infty=y^2=z^\infty=(xy)^m=(yzy^{-1}z^{-1})^q=[x,z]=1\rangle$.
\end{itemize}
Here $m$ and $q$ are integers greater than 1, or $\infty$
or $\overline\infty$ with
the following convention.
If we have a relation of the form $w^n=1$ with $n=\overline\infty$,
then we simply remove the relation $w^n=1$
from the presentation (in fact, this means that the
element $w$ is hyperbolic). Further, if $n=\infty$ and we keep
the relation $w^n=1\sim w^\infty=1$, we get a Kleinian group
presentation where parabolics are indicated.
To get an abstract group presentation, we need to remove all
relations of the form $w^\infty=1$.
\begin{theorem}\label{groups}
Let $\Gamma=\langle f,g\rangle$ be a non-elementary discrete
$\mathcal{RP}$ group
without invariant plane. Let $\beta(f)\in(-\infty,-4)$ and
let $\beta(g)=0$.
Then $\gamma(f,g)=4\cosh^2u$, where $u\in \mathcal{U}$, $t(u)\geq 3$,
and one of the following holds:
\begin{enumerate}
\item If $(t(u),2)=2$ and $\beta(f)=-4\cosh^2v/\gamma(f,g)-4$,
where $v\in \mathcal{U}$, $t(v)\geq 3$, $(t(v),2)=1$,
then $\Gamma$ is isomorphic to $PH[\infty,t(u)/2;t(v)]$.
\item If $(t(u),2)=2$ and $\beta(f)=-4\cosh^2v/\gamma(f,g)-4$,
where $v\in \mathcal{U}$, $t(v)\geq 4$, $(t(v),2)=2$,
then $\Gamma$ is isomorphic to $\mathcal{S}_2[\infty,t(u)/2;t(v)/2]$.
\item If $(t(u),2)=1$ and
$\beta(f)=-4\cosh^2v-4$, where $v\in \mathcal{U}$,
$t(v)\geq 3$, $(t(v),2)=1$,
then $\Gamma$ is isomorphic to $P[\infty,t(u);t(v)]$.
\item If $(t(u),2)=1$ and $\beta(f)=-4\cosh^2v-4$,
where $v\in \mathcal{U}$, $t(v)\geq 4$, $(t(v),2)=2$,
then $\Gamma$ is isomorphic to $GTet_1[\infty,t(u);t(v)/2]$.
\end{enumerate}
\end{theorem}
\begin{proof}
Suppose $(t(u),2)=2$, that is the dihedral angle of
$\mathcal{P}^*$ between $\sigma$ and $\tau$ is $2\pi/m$ with $m$ even,
$\infty$, or $\overline\infty$.
Consider a polyhedron $\widetilde{\mathcal{P}}$ bounded by $\sigma$,
$\tau$, $\sigma'=R_\omega(\sigma)$, $\tau'=R_\omega(\tau)$,
$\eta$, and $\delta$. Applying the Poincar\'e theorem
to $\widetilde{\mathcal{P}}$ and the side pairing transformations
$g$, $g'=R_\sigma R_\omega$, $e$, and $e_f$, one can see that
$\langle g,g',e_f,e\rangle$ is isomorphic to $\widetilde\Gamma$
and has the presentation
$$
\langle f,g,e\,|\, g^\infty=e^2=(ef)^2=(eg)^2=(gfgf^{-1})^{m/2}
=(f^{-1}gfe)^p=1\rangle.
$$
If $p$ is odd, then $e\in\langle f,g\rangle$ and
$\widetilde\Gamma=\Gamma\cong PH[\infty,m/2;p]$.
If $p$ is even, $\infty$, or $\overline\infty$,
then $\widetilde\Gamma$ contains $\Gamma$ as
a subgroup of index~$2$ and has presentation $\mathcal{S}_2[\infty,m/2;p/2]$.
In order to see this, one can apply the Poincar\'e theorem
to a polyhedron $\mathcal{P}$ bounded by $\tau$, $\sigma$, $\tau'$, $\sigma'$,
$\eta$, and $e_f(\eta)$, and side-pairing transformations
$f$, $g$, and $g'=fg^{-1}f^{-1}$.
The proof for $(t(u),2)=1$ is analogous. In this case we need to use
the polyhedron $\mathcal{Q}^*$ as the starting point.
\end{proof}
\begin{figure}
\caption{Orbifolds embedded in ${\mathbb S}^3$}
\label{ins3}
\end{figure}
\begin{figure}
\caption{Orbifolds embedded in Seifert fibred spaces}
\label{insf}
\end{figure}
The orbifolds $Q(\Gamma)$ for the groups described in
Theorem~\ref{groups} can be obtained from corresponding fundamental
polyhedra.
In Figures~\ref{ins3} and~\ref{insf}, we schematically
draw singular sets, cusps, and boundary components of $Q(\Gamma)$
by using fat vertices and fat edges.
Roughly speaking, a fat vertex is either an interior point, or is removed,
or is removed together with its regular neighbourhood,
depending on the indices. A fat edge can be labelled by $\infty$ or
$\overline\infty$. If the index at a fat edge is $\infty$, then the edge
corresponds to a cusp, and if the index is $\overline\infty$,
the edge is removed together with its regular neighbourhood.
For details, see \cite{KK05-1}.
In Figure~\ref{ins3}, orbifolds are embedded in ${\mathbb S}^3$ so that
$\infty$ is a non-singular interior point of $Q(\Gamma)$.
Note that the volume of $Q(PH[\infty,m;q])$ is always infinite
and $Q(P[\infty,m;q])$ is always non-compact.
Let $T(n)$ be a Seifert fibred solid torus obtained from a trivially
fibred solid torus $D^2\times {\mathbb S}^1$ by cutting it along
$D^2\times \{x\}$ for some $x\in {\mathbb S}^1$, rotating one of the
discs through $2\pi/n$, and glueing back together.
Denote by
$\mathcal{S}(n)$ a space obtained by glueing two copies of $T(n)$
along their boundaries fibre to fibre.
Clearly, $\mathcal{S}(n)$ is homeomorphic to ${\mathbb S}^2\times {\mathbb S}^1$
and is $n$-fold covered by trivially fibred ${\mathbb S}^2\times {\mathbb S}^1$.
There are two critical fibres, each of which is $n$ times shorter
than a regular fibre.
In Figure~\ref{insf}(a), orbifolds
are embedded in the Seifert fibred space $\mathcal{S}(2)=T(2)\cup T(2)$.
We draw only the solid torus that contains
singular points (or boundary components). The other fibred torus is
meant to be attached and is not shown.
If $m<\infty$, the orbifold
$Q(\mathcal{S}_2[\infty,m;q])$
is embedded in $\mathcal{S}(2)$
in such
a manner that the axis of order $m$
lies on a critical fibre
of $\mathcal{S}(2)$.
The removed regular fibre gives rise to a cusp.
In Figure~\ref{insf}(b), orbifolds
are embedded in trivially fibred space ${\mathbb S}^2\times{\mathbb S}^1$.
The rank 2 cusp corresponds to the subgroup of
$GTet_1[\infty,m;q]$ generated by $x$ and $z$.
\section{Structure of the slice $S_\infty$}
Recall that
$$
S_\infty=\{(\gamma,\beta):(\beta,0,\gamma)
{\rm \ are\ parameters\ for\ some\ }
\langle f,g\rangle\in\mathcal{DRP}
\},
$$
where $\mathcal{DRP}$ denotes the class of all non-elementary discrete
$\mathcal{RP}$ groups.
To investigate the slice $S_\infty$, we split the plane
$(\gamma,\beta)$ as follows.
\begin{itemize}
\item[1.]
If $\beta=-4$ then by \cite[Theorem~2]{KK02}, the group
$\langle f,g\rangle$ has an invariant plane. We use \cite{GGM01}
to find all discrete groups on the line $\beta=-4$.
\item[2.]
If $\beta>-4$ and $\gamma>0$ then the group $\langle f,g\rangle$
is conjugate to a subgroup of
${\rm PSL}(2,{\mathbb R})$.
More precisely, if $-4<\beta<0$ then
$f$ is elliptic and the axis of $f$ is orthogonal to an invariant
plane of $g$ and if $\beta=0$ then the fixed points of $f$ and $g$
lie in their common invariant plane.
Discreteness criteria in terms of traces of
$f$, $g$, and $fg$ were given in~\cite{Kna68}.
For $\beta>0$, an algorithm to decide whether
$f$ and $g$ generate a discrete group was given in \cite{GiM91}.
\item[3.]
If $\beta>-4$ and $\gamma<0$ then $f$ is elliptic, parabolic,
or hyperbolic and the group $\langle f,g\rangle$ is known to be truly
spatial. Such discrete groups are described in \cite{KK05},
where $\beta$ and $\gamma$ are found explicitly.
\item[4.]
If $\beta<-4$ and $\gamma<0$ then $f$ is $\pi$-loxodromic
and its axis lies in an invariant plane of $g$. This plane
is then invariant under the action of $\langle f,g\rangle$
and $f$ acts as a glide-reflection on
it. A geometrical description of such discrete groups was given
in~\cite{KS98}.
\item[5.]
The case of $\beta<-4$ and $\gamma>0$ was treated in Section~2 of
the present paper.
\end{itemize}
We will obtain explicit formulas
for $\beta$ and $\gamma$ in the cases 2 and 4 above and
completely describe
the structure of the slice $S_\infty$.
We will pay special attention to the subsets of $S_\infty$
corresponding to free groups.
First, we need the following elementary facts.
\begin{lemma}\label{gamma}
If $f,g\in\mathcal{P}SL$ and $g$ is parabolic, then
$$
\gamma(f,g)=({\rm tr}(fg)-{\rm sign}({\rm tr} g)\cdot{\rm tr} f)^2.
$$
\end{lemma}
\begin{proof}
By the Fricke identity, we have
\begin{eqnarray*}
\gamma(f,g)&=&{\rm tr}[f,g]-2\\
&=&{\rm tr}^2f+{\rm tr}^2g+{\rm tr}^2(fg)-{\rm tr} f\cdot {\rm tr} g\cdot {\rm tr}(fg)-4\\
&=&({\rm tr}(fg)-{\rm sign}({\rm tr} g)\cdot{\rm tr} f)^2,
\end{eqnarray*}
since ${\rm tr}^2g=4$.
\end{proof}
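The identity of Lemma~\ref{gamma} can be checked numerically on explicit matrices; in the sketch below the matrices $f$ and $g$ are arbitrary illustrative choices of ours (not from the text), with $g$ parabolic:

```python
# Numerical sanity check of Lemma "gamma": for parabolic g (tr g = +-2),
#   gamma(f, g) = tr[f,g] - 2 = (tr(fg) - sign(tr g) * tr f)^2.
# The matrices f, g are arbitrary illustrative SL(2,R) elements.

def mul(A, B):
    """Product of two 2x2 matrices stored as nested lists."""
    return [[A[0][0]*B[0][0] + A[0][1]*B[1][0],
             A[0][0]*B[0][1] + A[0][1]*B[1][1]],
            [A[1][0]*B[0][0] + A[1][1]*B[1][0],
             A[1][0]*B[0][1] + A[1][1]*B[1][1]]]

def inv(A):
    """Inverse of a 2x2 matrix with determinant 1."""
    return [[A[1][1], -A[0][1]], [-A[1][0], A[0][0]]]

def tr(A):
    return A[0][0] + A[1][1]

f = [[2, 1], [1, 1]]   # det = 1, tr f = 3
g = [[1, 1], [0, 1]]   # parabolic, tr g = 2

commutator = mul(mul(f, g), mul(inv(f), inv(g)))
gamma = tr(commutator) - 2              # gamma(f, g) = tr[f,g] - 2
rhs = (tr(mul(f, g)) - tr(f)) ** 2      # sign(tr g) = +1 for this g
print(gamma, rhs)  # prints: 1 1
```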
\begin{lemma}\label{fgk}
If $f,g\in\mathcal{P}SL$ and ${\rm tr} g=2$, then
$$
{\rm tr}(fg^k)=k({\rm tr}(fg)-{\rm tr} f)+{\rm tr} f.
$$
\end{lemma}
\begin{proof}
By substituting ${\rm tr} g=2$ into the recurrence formula
$${\rm tr}(fg^k)={\rm tr}(fg^{k-1}){\rm tr} g-{\rm tr}(fg^{k-2}),$$
we immediately get the result.
\end{proof}
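The linearity in $k$ asserted by Lemma~\ref{fgk} can be verified on sample matrices (the matrices below are illustrative choices of ours):

```python
# Check of Lemma "fgk": when tr g = 2, tr(f g^k) is linear in k:
#   tr(f g^k) = k*(tr(fg) - tr f) + tr f.
# f and g are arbitrary sample matrices chosen for the demonstration.

def mul(A, B):
    """Product of two 2x2 matrices stored as nested lists."""
    return [[A[0][0]*B[0][0] + A[0][1]*B[1][0],
             A[0][0]*B[0][1] + A[0][1]*B[1][1]],
            [A[1][0]*B[0][0] + A[1][1]*B[1][0],
             A[1][0]*B[0][1] + A[1][1]*B[1][1]]]

def tr(A):
    return A[0][0] + A[1][1]

f = [[2, 1], [1, 1]]   # tr f = 3
g = [[1, 1], [0, 1]]   # tr g = 2

slope = tr(mul(f, g)) - tr(f)   # tr(fg) - tr f
fgk = f
for k in range(1, 6):
    fgk = mul(fgk, g)           # after k right-multiplications, fgk = f g^k
    assert tr(fgk) == k * slope + tr(f)
print("tr(f g^k) = k(tr(fg) - tr f) + tr f holds for k = 1..5")
```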
\begin{remark}\label{non-prim}
Suppose that $f$ is non-primitive elliptic of finite order $n$,
i.e., $\beta(f)=-4\sin^2(q\pi/n)$, where $(q,n)=1$, $1<q<n/2$.
Then there exists an integer $r$ so that $f^r$
is primitive of the same order.
Obviously, $\langle f,g\rangle=\langle f^r,g\rangle$
and $\beta(f^r)=-4\sin^2(\pi/n)$. By \cite{GM94-1},
$\gamma(f^r,g)=(\beta(f^r)/\beta(f))\gamma(f,g)$.
\end{remark}
It is natural to introduce the constant
$$
C(q,n)=\frac{\sin^2(q\pi/n)}{\sin^2(\pi/n)}=
\frac{\beta(f)}{\beta(f^r)}\geq 1
$$
that
plays an important role in the calculation of parameters for groups with
elliptic elements. It is also convenient to regard a parabolic
element $f$ as a limit rotation of order $n=\infty$
and write $0=\beta(f)=-4\sin^2(\pi/n)$ with $C(q,n)=C(1,n)=1$.
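For concreteness, the constant $C(q,n)$ is easy to evaluate numerically; the short sketch below (ours, not from the paper) confirms $C(1,n)=1$ and $C(q,n)\geq 1$ on a few coprime pairs:

```python
# The constant C(q, n) = sin^2(q*pi/n) / sin^2(pi/n) defined above,
# evaluated numerically (an illustrative sketch).
import math

def C(q, n):
    return math.sin(q * math.pi / n) ** 2 / math.sin(math.pi / n) ** 2

# C(1, n) = 1 for every n, and C(q, n) >= 1 whenever 1 <= q < n/2:
assert abs(C(1, 7) - 1.0) < 1e-12
for q, n in [(2, 5), (2, 7), (3, 7), (3, 8)]:
    assert math.gcd(q, n) == 1 and C(q, n) >= 1
print(round(C(2, 5), 6))  # prints: 2.618034 (the square of the golden ratio)
```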
\subsection{$-4\leq\beta\leq 0$}
This means that $f$ is either elliptic or parabolic.
Obviously, if $f$ is elliptic of infinite order, then
$\langle f,g\rangle$ is not discrete. So we assume that
$\beta=-4\sin^2(q\pi/n)$, where $(q,n)=1$ and $1\leq q<n/2$, including
$\beta=0$.
\begin{theorem}\label{ell_rp}
Let $\Gamma=\langle f,g\rangle\subset\mathcal{P}SL$ have parameters
$(\beta,0,\gamma)$ with $\gamma\in{\mathbb R}\backslash \{0\}$.
Let $\beta=-4\sin^2(q\pi/n)$, where $(q,n)=1$ and $1\leq q<n/2$,
including $\beta=0$.
Then $\Gamma$ is discrete if and only if
one of the following holds:
\begin{enumerate}
\item $\gamma=-4C(q,n)\cosh^2u$, where $u\in\mathcal{U}$ and $t(u)\geq 3$;
\item $\gamma=4C(q,n)(\cos(\pi/n)+\cosh u)^2$, where $u\in\mathcal{U}$;
\item $\beta=0$ and $\gamma=4(1+\cos(2\pi/k))^2$,
where $k\geq 3$ is odd.
\end{enumerate}
\end{theorem}
\begin{proof}
Let us prove the theorem for $q=1$; in order to get the result
for $q>1$, we only need to apply Remark~\ref{non-prim}.
If $n=2$ then $\beta=-4$ and, by \cite[Theorem~4.15]{GGM01},
$\Gamma$ is discrete if and only if $\gamma=\pm 4\cosh^2u$,
where $u\in\mathcal{U}$ with $t(u)\geq 3$.
If $2<n\leq\infty$ and $\gamma<0$, then, by
\cite[Corollary 2.5]{KK05}, $\Gamma$ is discrete if and only if
$\gamma=-4\cosh^2u$, where $u\in\mathcal{U}$ and $t(u)\geq 3$.
Assume that $2<n<\infty$ and $\gamma>0$.
In this case $\Gamma$ is conjugate to a subgroup of
${\rm PSL}(2,{\mathbb R})$ and we can
apply Knapp's results \cite{Kna68} to compute $\gamma$.
Conjugate $\Gamma$ so that $\infty$ is the fixed point of $g$.
By replacing, if necessary, $f$ with $f^{-1}$ and $g$ with $g^{-1}$,
we may assume that
$$
f=\left(
\begin{array}{cc}
a& b\\
c &d
\end{array}
\right) \quad {\rm and} \quad
g=\left(
\begin{array}{rr}
-1& \tau\\
0 &-1
\end{array}
\right),
$$
where $ad-bc=1$, $a+d=-2\cos(\pi/n)$ with $n\in{\mathbb Z}$, $b>0$,
and $\tau>0$.
One can show that ${\rm tr}(fg)<2$.
By \cite[Proposition~4.1]{Kna68}, $\Gamma$ is discrete if and only if
${\rm tr}(fg)\leq-2$ or ${\rm tr}(fg)=-2\cos(\pi/k)$, where $k\geq 2$ is an integer,
that is ${\rm tr}(fg)=-2\cosh u$, where $u\in\mathcal{U}$.
Hence, by Lemma~\ref{gamma},
$\gamma=({\rm tr}(fg)+{\rm tr} f)^2=(2\cosh u+2\cos(\pi/n))^2$.
So it remains to consider the case when $n=\infty$ (i.e., $\beta=0$)
and $\gamma>0$.
Again, we normalize $\Gamma$ so that $g$ is as above and
$f=\left(
\begin{array}{rr}
-1& 0\\
-1 &-1
\end{array}
\right)$.
By \cite[Proposition~4.2]{Kna68}, such a group
is discrete if and only if $\tau\geq 4$ or $\tau=2+2\cos(2\pi/k)$
for an integer $k\geq 3$. Since in this case $\gamma=\tau^2$, we have
that $\gamma\geq 16$ or $\gamma=(2+2\cos(2\pi/k))^2$, which can be
written as $\gamma=4(1+\cosh u)^2$, where $u\in\mathcal{U}$, or
$\gamma=4(1+\cos(2\pi/k))^2$ for odd $k\geq 3$.
\end{proof}
\begin{remark}
If $-4\leq\beta\leq 0$ then
$\Gamma$ is discrete and free if and only if
$\beta=0$ and $\gamma\in(-\infty,-4]\cup[16,+\infty)$.
\end{remark}
The parameters from the infinite strip $-4\leq\beta\leq 0$ are
displayed in Figure~\ref{strip1}.
If $\beta=-4\sin^2(q\pi/n)$ is fixed,
then there exist values $\gamma_1(\beta)<0$
and $\gamma_2(\beta)>0$ so that $\Gamma$ is discrete in
the union of two rays
$(-\infty,\gamma_1(\beta)]\cup[\gamma_2(\beta),+\infty)$.
There are only countably many discrete groups
in $(\gamma_1(\beta),\gamma_2(\beta))$ with
accumulation points $\gamma_1(\beta)$
and $\gamma_2(\beta)$.
Moreover, if we denote $\beta_n^q=-4\sin^2(q\pi/n)$, then
$$
\gamma_1(\beta_n^q)<\gamma_1(\beta_n^1)<\gamma_2(\beta_n^1)<
\gamma_2(\beta_n^q)\quad {\rm for\ all\ } 1<q<n/2.
$$
\begin{figure}
\caption{Structure of the strip $-4\leq\beta\leq 0$}
\label{strip1}
\end{figure}
\subsection{$\beta>0$}
In this case $f$ is hyperbolic.
\begin{theorem}[{\cite[Corollary~2.5]{KK05}}]\label{hyp-par_rp}
Let $\Gamma=\langle f,g\rangle\subset\mathcal{P}SL$ have parameters
$(\beta,0,\gamma)$ with $\beta> 0$ and $\gamma<0$.
Then $\Gamma$
is discrete if and only if
$\gamma=-4\cosh^2u$, where $u\in\mathcal{U}$, $t(u)\geq 3$.
\end{theorem}
\begin{remark}
From \cite{KK05}, $\Gamma$ with parameters $(\beta,0,\gamma)$,
where $\beta\geq 0$ and $\gamma<0$
is free if and only if $(\gamma,\beta)$ lies in the region
$$
A=\{(\gamma,\beta):\gamma\leq -4,\beta\geq 0\}.
$$
\end{remark}
\begin{theorem}\label{hyp_fuchsian}
Let $\Gamma=\langle f,g\rangle\subset\mathcal{P}SL$ have parameters
$(\beta,0,\gamma)$ with $\beta>0$ and $\gamma>0$.
Let $k=\displaystyle\left\lceil
\frac{\sqrt{\beta+4}-2}{\sqrt{\gamma}}\right\rceil$.
The group $\Gamma$ is discrete if and only if one of
the following holds:
\begin{enumerate}
\item $\beta=(k\sqrt{\gamma}+2)^2-4$ and
$\gamma=16\cosh^4 u$, where $u\in\mathcal{U}$ and $t(u)\geq 3$;
\item $\beta=(k\sqrt{\gamma}\pm2\cos(q\pi/n))^2-4$
and
$\gamma=4C(q,n)(\cos(\pi/n)+\cosh u)^2$,
where $(q,n)=1$, $1\leq q<n/2$, and $u\in\mathcal{U}$;
\item $\beta=(k\sqrt{\gamma}-2\cosh u)^2-4$ and
$\gamma>4(1+\cosh u)^2$, where $u\geq 0$.
\end{enumerate}
\end{theorem}
\begin{proof}
Since $\gamma>0$, the axis of $f$ lies in
an invariant plane of $g$, so $\Gamma=\langle f,g\rangle$ is
conjugate to a subgroup of ${\rm PSL}(2,{\mathbb R})$.
In \cite{GiM91}, an algorithm for determining whether
such a group is discrete was given.
We will apply this algorithm and calculate
parameters for each discrete group.
Normalize $\Gamma$ so that
$\infty$ is the fixed point of $g$ and $\pm1$ are the fixed points
of~$f$. Then we can write
\begin{equation*}\label{normfg}
f=\left(
\begin{array}{cc}
a& b\\
b &a
\end{array}
\right)
\quad {\rm and}\quad
g=\left(
\begin{array}{cc}
1& \tau\\
0 &1
\end{array}
\right),
\ {\rm where}\ a^2-b^2=1,\ a>1,\ b,\tau\in{\mathbb R}.
\end{equation*}
By replacing $f$ with $f^{-1}$ and $g$ with $g^{-1}$, we may assume
that $b<0$ and $\tau>0$.
Let $k$ be a positive integer such that ${\rm tr}(fg^k)\leq 2$
and ${\rm tr}(fg^\ell)>2$ for all $\ell$ with $0\leq \ell<k$.
By Lemmas~\ref{gamma} and~\ref{fgk}, we have that
$k^2\gamma=k^2({\rm tr}(fg)-{\rm tr} f)^2=({\rm tr}(fg^k)-{\rm tr} f)^2$.
Since ${\rm tr}(fg^k)\leq 2$ and ${\rm tr} f>2$,
\begin{equation}\label{k2g}
{\rm tr} f=k\sqrt{\gamma}+{\rm tr}(fg^k).
\end{equation}
We distinguish three cases:
\noindent
1. ${\rm tr}(fg^k)=2$, that is $fg^k$ is parabolic.
From (\ref{k2g}),
$$
\beta=(k\sqrt{\gamma}+2)^2-4.
$$
By Theorem~\ref{ell_rp},
$\langle fg^k,g\rangle$ and, hence, $\langle f,g\rangle$
is discrete if and only if
\begin{itemize}
\item[] $\gamma=\gamma(fg^k,g)=4(1+\cosh v)^2$, where $v\in\mathcal{U}$, or
\item[] $\gamma=4(1+\cos(2\pi/k))^2$,
where $k\geq 3$ is odd.
\end{itemize}
These expressions can be rearranged and combined as
$\gamma=16\cosh^4u$, where $u\in\mathcal{U}$ and $t(u)\geq 3$.
\noindent
2. $-2<{\rm tr}(fg^k)<2$, that is $fg^k$ is elliptic and
${\rm tr}(fg^k)=\pm 2\cos(q\pi/n)$, where $(q,n)=1$ and $1\leq q<n/2$.
Hence, from (\ref{k2g}),
$$
\beta=(k\sqrt{\gamma}\pm2\cos(q\pi/n))^2-4.
$$
By Theorem~\ref{ell_rp},
$\langle fg^k,g\rangle$ and, hence, $\langle f,g\rangle$
is discrete if and only if
$$
\gamma=4C(q,n)(\cos(\pi/n)+\cosh u)^2,
\text{\quad where } u\in\mathcal{U}.
$$
\noindent
3. ${\rm tr}(fg^k)\leq-2$, that is $fg^k$ is hyperbolic or parabolic
so we can write
${\rm tr}(fg^k)=-2\cosh u$, where $u\geq 0$.
Then
$$
\beta=(k\sqrt{\gamma}-2\cosh u)^2-4.
$$
Consider the group $\langle g^{k-1}f,g\rangle$. The element
$g^{k-1}f$ is hyperbolic with ${\rm tr}(g^{k-1}f)>2$. Therefore,
one can normalize $\langle g^{k-1}f,g\rangle$ so that
the attracting and repelling fixed points of $g^{k-1}f$ are $x_a$
and $x_r$, respectively, and $x_a<x_r$.
Since ${\rm tr}(g^kf)\leq -2$, such a group is discrete and free
by \cite[Case~II]{GiM91}.
So by Lemma~\ref{gamma},
we have that
\begin{eqnarray*}
\gamma=\gamma(fg^{k-1},g)&=&({\rm tr}(fg^k)-{\rm tr}(fg^{k-1}))^2\\
&=&(2\cosh u+2\cosh v)^2,
\end{eqnarray*}
where $v$ is any positive real number.
It remains to compute $k$.
Since ${\rm tr}(fg^k)=2a+b\tau k\leq 2$, we have that
$k\geq(-2a+2)/(b\tau)$. Computing $\gamma=b^2\tau^2$, we get
$b\tau=-\sqrt{\gamma}$.
So $k=\left\lceil\displaystyle\frac{\sqrt{\beta+4}-2}
{\sqrt{\gamma}}\right\rceil$.
\end{proof}
It follows from \cite{GiM91} that $\Gamma$ is free
if and only if $(\gamma,\beta)$ lies in one of the regions
$$
C_k=\{(\gamma,\beta):\gamma\geq 16, ((k-1)\sqrt{\gamma}+2)^2\leq
\beta+4\leq (k\sqrt{\gamma}-2)^2\}, \ k=1,2,3\dots
$$
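For experimentation, membership of a point $(\gamma,\beta)$ in some free region $C_k$ can be sketched as follows (an illustrative script; the function name is our own assumption):

```python
# Illustrative membership test for the free-group regions C_k.
import math

def in_some_C_k(gamma, beta, kmax=100):
    """True if (gamma, beta) lies in C_k for some k = 1..kmax."""
    if gamma < 16:
        return False
    s = math.sqrt(gamma)
    return any(((k - 1) * s + 2) ** 2 <= beta + 4 <= (k * s - 2) ** 2
               for k in range(1, kmax + 1))

# At gamma = 16 each C_k degenerates to the single value beta + 4 = (4k - 2)^2.
assert in_some_C_k(16, 0)        # beta + 4 = 4 = (4*1 - 2)^2: in C_1
assert not in_some_C_k(16, 1)    # beta + 4 = 5: strictly between C_1 and C_2
assert in_some_C_k(25, 5)        # gamma = 25: C_1 is 4 <= beta + 4 <= 9
```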
\subsection{$\beta<-4$}
First, consider $\gamma<0$. In this case the axis of the $\pi$-loxodromic
generator $f$ lies in an invariant plane of $g$ \cite{KK02}, so
$\langle f,g\rangle$ keeps this plane invariant.
\begin{theorem}\label{lox_inv}
Let $\Gamma=\langle f,g\rangle\subset\mathcal{P}SL$ have parameters
$(\beta,0,\gamma)$ with $\beta<-4$ and $\gamma<0$.
Let
$k=\displaystyle\left\lceil\frac{\sqrt{-\beta-4}}{\sqrt{-\gamma}}\right\rceil$.
Then the group $\langle f,g\rangle$ is discrete if and only if
one of the following holds:
\begin{enumerate}
\item $-4(\beta+4)=\big((2k-1)\sqrt{-\gamma}\pm
\sqrt{-\gamma-8(1+\cosh u)}\big)^2$,
where $u\in\mathcal{U}$;
\item $4(\beta+4)=(2k-1)^2\gamma$ and $\gamma=-16\cos^2(\pi/p)$,
where $p\geq 3$ is odd;
\item $\beta=k^2\gamma-4$ and $\gamma=-4\cosh^2u$, where
$u\in\mathcal{U}$ and $t(u)\geq 3$.
\end{enumerate}
\end{theorem}
\begin{proof}
Let $\delta=\{(z,t):{\rm Im}\ z=0\}$ be
the invariant plane of $\Gamma$. Since the axis of
$f$ lies in $\delta$,
we can normalize $\Gamma$ so that
the fixed point of $g$ is $\infty$, the fixed points of
$f$ are $\pm1$, and
$$
f=\left(
\begin{array}{cc}
ai& bi\\
bi &ai
\end{array}
\right),
\quad
g=\left(
\begin{array}{cc}
1& \tau\\
0 &1
\end{array}
\right),
\quad {\rm where}\ b^2-a^2=1,\ a>1,\ b,\tau\in{\mathbb R}.
$$
Further, replacing $f$ with $f^{-1}$ and $g$ with $g^{-1}$,
we can assume that $b<0$ and $\tau>0$.
Since $b$ is negative, $+1$ is the repelling fixed point of $f$ and
$-1$ is attracting.
Let $e$ be the half-turn whose axis passes through the fixed point of $g$
orthogonally to the axis of $f$. That is, $e$ fixes $0$ and $\infty$.
Let $e_f$ and $e_1$ be half-turns such that
$f=ee_f$ and $g=e_1e$. Since $f$ is $\pi$-loxodromic,
the axis of $e_f$ intersects the axis of $f$ (and
the plane $\delta$) orthogonally;
denote the intersection point by $A$.
Further, since $g$ is parabolic and
keeps $\delta$ invariant,
the axis of $e_1$ fixes $\infty$ and lies in the plane~$\delta$.
It is easy to calculate that
$$
e=\left(
\begin{array}{rr}
i& 0\\
0 &-i
\end{array}
\right),\quad
e_f=\left(
\begin{array}{rr}
a& b\\
-b &-a
\end{array}
\right),\quad
e_1=\left(
\begin{array}{rr}
i& \tau\\
0 &-i
\end{array}
\right).
$$
Consider half-turns $e_{k-1}=g^{k-1}e$ and $e_k=g^ke$
such that $A$ lies in the region
bounded by the axes of $e_{k-1}$
and $e_k$ in the plane $\delta$, see Figure~\ref{delta}.
It is easy to calculate that $A=-a/b-j/b$.
Since $e_k$ fixes $\infty$ and $\tau k/2$, we have that
$$
A\in\left\{(z,t):\frac{\tau(k-1)}2<{\rm Re} z\leq \frac{\tau k}2,
\ {\rm Im}\ z=0,\ t>0\right\}.
$$
Hence, we can immediately determine~$k$.
\begin{equation}\label{kbounds}
\frac{\tau(k-1)}2<-\frac ab \leq
\frac{\tau k}2.
\end{equation}
Therefore, since $2a=-i{\rm tr} f=\sqrt{-\beta-4}$ and $b\tau=-\sqrt{-\gamma}$,
$$
k=\left\lceil-\frac{2a}{b\tau}\right\rceil=
\left\lceil\frac{\sqrt{-\beta-4}}{\sqrt{-\gamma}}\right\rceil.
$$
It is easy to see that $\Gamma$ is discrete if and only if
$\widetilde\Gamma=\langle e_f,e_{k-1},e_k\rangle$ is.
Following \cite{KS98},
we give geometric conditions for $\widetilde\Gamma$ to be discrete.
\noindent
Suppose that $A\notin axis(e_k)$;
see Figure~\ref{delta}(a).
By \cite{KS98}, $\widetilde\Gamma$ is discrete if either
(a) the angle $\phi$ between $e_{k-1}$ and $e_f(e_k)$
is of the form $\pi/p$, where $p\geq 2$ is an integer,
$\infty$, or $\overline\infty$; or
(b) $\phi=2\pi/p$, where $p\geq 3$ is odd and the bisector of
$\phi$ passes through $A$.
\noindent
Suppose that $A\in axis(e_k)$;
see Figure~\ref{delta}(b).
By \cite{KS98}, $\widetilde\Gamma$ is discrete if
(c) the angle $\psi$ made by $axis(e_{k-1})$ and
$axis(\tilde e_f)$ is of the form $\pi/p$, $p\geq 3$ is
an integer, $\infty$, or $\overline\infty$, where
$\tilde e_f=e_ke_f$
is the half-turn whose axis passes through $A$ orthogonally to
$axis(e_k)$ in the plane $\delta$.
\begin{figure}
\caption{The invariant plane $\delta$}
\label{delta}
\end{figure}
There are no other discrete groups.
So, we need to calculate the parameters $\beta$ and $\gamma$
in each of the cases (a), (b), and (c).
Assume that we are in case (a) or (b). Then each $g^\ell f=e_\ell e_f$,
$\ell\in{\mathbb Z}$,
is a $\pi$-loxodromic element with translation length $2T_\ell$
and ${\rm tr}(g^\ell f)=\pm 2i\sinh T_\ell$,
where $T_\ell$ is the distance between $e_\ell$ and~$A$.
Moreover, from the matrix representation,
${\rm tr}(g^\ell f)=2ai+b\tau \ell i$.
The inequalities~(\ref{kbounds})
enable us to determine the signs of ${\rm tr}(fg^{k-1})$ and ${\rm tr}(fg^k)$:
$$
{\rm tr}(fg^k)=-2i\sinh T_k\quad {\rm and} \quad
{\rm tr}(fg^{k-1})=+2i\sinh T_{k-1}.
$$
Suppose that $p<\infty$. Simple calculations in the plane~$\delta$
show that
$$
\sinh CD=\frac{1+\cos\phi\cosh(2T_{k-1})}
{\sin\phi\sinh(2T_{k-1})}
$$
and, on the other hand,
$$
\sinh CD=\frac{\sinh T_k+\cos\phi\sinh T_{k-1}}
{\sin\phi\cosh T_{k-1}}.
$$
So, we obtain
$$
2(1+\cos\phi)=4\sinh T_{k-1}\sinh T_k={\rm tr}(fg^{k-1}){\rm tr}(fg^k).
$$
Applying Lemmas~\ref{gamma} and \ref{fgk} and the facts that
${\rm tr} f=i\sqrt{-\beta-4}$ and
${\rm tr}(fg)-{\rm tr} f=b\tau i=-i\sqrt{-\gamma}$, we get
\begin{eqnarray*}
2(1+\cos\phi)&=&
[(k-1)({\rm tr}(fg)-{\rm tr} f)+{\rm tr} f]\cdot[k({\rm tr}(fg)-{\rm tr} f)+{\rm tr} f]\\
&=&k(k-1)({\rm tr}(fg)-{\rm tr} f)^2+(2k-1)\cdot{\rm tr} f\cdot
({\rm tr}(fg)-{\rm tr} f)+{\rm tr}^2 f\\
&=&k(k-1)\gamma+(2k-1)\sqrt{-\beta-4}\sqrt{-\gamma}+\beta+4.
\end{eqnarray*}
Hence,
$-4(\beta+4)=((2k-1)\sqrt{-\gamma}\pm\sqrt{-8(1+\cos\phi)-\gamma})^2$,
where $\phi=\pi/p$, $p\geq 2$ is an integer.
Analogous calculations can be done for $p=\infty$ and $p=\overline\infty$,
and we obtain item (1) of the theorem.
In case (b), in addition,
$T_{k-1}=T_k$. Then ${\rm tr}(fg^k)=-{\rm tr}(fg^{k-1})$ and by
Lemmas~\ref{gamma} and~\ref{fgk} we have
$$
2\sqrt{-\beta-4}=(2k-1)\sqrt{-\gamma}.
$$
Therefore,
$2(1+\cos\phi)=-{\rm tr}^2(fg^k)=(-k\sqrt{-\gamma}+\sqrt{-\beta-4})^2=
-\gamma/4$.
Hence, since $\phi=2\pi/p$, $\gamma=-16\cos^2(\pi/p)$.
Now assume that we are in case (c) and $p<\infty$.
Since in this case $e_ke_f=\tilde e_f$ is an elliptic element of order~$2$,
${\rm tr}(g^kf)=0$. Therefore, since
${\rm tr}(g^kf)=-ki\sqrt{-\gamma}+i\sqrt{-\beta-4}$, we have that
$\beta=k^2\gamma-4$.
Further, since ${\rm tr}(fg^{k-1})=2i\sinh T_{k-1}$ and,
from the plane $\delta$, $\sinh T_{k-1}=\cos\psi$, we have
that
\begin{eqnarray*}
4\cos^2\psi=4\sinh^2T_{k-1}
&=&-((k-1)({\rm tr}(fg)-{\rm tr} f)+{\rm tr} f)^2\\
&=&(-(k-1)\sqrt{-\gamma}+\sqrt{-\beta-4})^2\\
&=&(-(k-1)\sqrt{-\gamma}+k\sqrt{-\gamma})^2\\
&=&-\gamma.
\end{eqnarray*}
Thus, $\gamma=-4\cos^2(\pi/p)$,
where $p\geq 3$ is
an integer. Analogous calculations can be done for $p=\infty$
and $p=\overline\infty$ and we obtain item (3) of the theorem.
\end{proof}
\begin{remark}
If $\beta<-4$ and $\gamma<0$, then $\langle f,g\rangle$
is free if and only if $(\gamma,\beta)$ lies in one of the regions
$D_k$, $k=1,2,3,\dots$, given by
{\setlength\arraycolsep{0pt}
\begin{eqnarray*}
D_k=\{ &&(\gamma,\beta):\gamma\leq -16,\\
&&\frac{((2k-1)\sqrt{-\gamma}-\sqrt{-\gamma-16})^2}{-4}
\geq \beta+4\geq
\frac{((2k-1)\sqrt{-\gamma}+\sqrt{-\gamma-16})^2}{-4}\}.
\end{eqnarray*}
}
\end{remark}
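Membership of a point $(\gamma,\beta)$ in one of the regions $D_k$ can be tested numerically; the sketch below (ours, with an assumed function name) illustrates the pinching of $D_k$ at $\gamma=-16$:

```python
# Illustrative membership test for the regions D_k of the Remark.
import math

def in_some_D_k(gamma, beta, kmax=100):
    """True if (gamma, beta) lies in D_k for some k = 1..kmax."""
    if gamma > -16:
        return False
    s = math.sqrt(-gamma)
    t = math.sqrt(-gamma - 16)
    for k in range(1, kmax + 1):
        lo = -((2 * k - 1) * s + t) ** 2 / 4   # lower bound on beta + 4
        hi = -((2 * k - 1) * s - t) ** 2 / 4   # upper bound on beta + 4
        if lo <= beta + 4 <= hi:
            return True
    return False

# At gamma = -16 each D_k pinches to beta + 4 = -4*(2k - 1)^2.
assert in_some_D_k(-16, -8)      # beta + 4 = -4 = -4*(2*1 - 1)^2: in D_1
assert not in_some_D_k(-16, -9)  # beta + 4 = -5: between D_1 and D_2
assert not in_some_D_k(-15, -8)  # gamma > -16: outside every D_k
```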
When $\gamma>0$, the parameters were described in
Theorem~\ref{criterion_par}.
Here we just note that for $\gamma>0$ and $\beta<0$,
the group $\langle f,g\rangle$ is free
if and only if $(\gamma,\beta)$ lies in the region
$$
B=\{(\gamma,\beta): \gamma\geq 4,\ \beta+4\leq -4/\gamma\}.
$$
\begin{figure}
\caption{The discrete free groups}
\label{free}
\end{figure}
Finally, we are able to draw those subsets of $S_\infty$
that correspond to discrete
free groups. These subsets are shown in Figure~\ref{free}.
The dashed lines $\beta=k^2\gamma-4$ are plotted to show
a certain symmetry of $S_\infty$.
The other discrete groups contain elliptic elements.
Their parameters are represented by
lines, parabolas, hyperbolas, and points accumulating,
as orders of elliptic
elements tend to $\infty$, to the regions of
free groups.
\begin{figure}
\caption{The structure of the slice $S_\infty$}
\label{whole}
\end{figure}
In Figure~\ref{whole}, the whole picture for the slice
$S_\infty$ is shown to give an idea of the structure of~$S_\infty$.
The formulas for $\beta$ and $\gamma$ obtained in Theorems
\ref{criterion_par}, \ref{ell_rp}, \ref{hyp-par_rp},
\ref{hyp_fuchsian}, and \ref{lox_inv},
were implemented in Maple~7.0
for some (sufficiently large) values of the independent variables
$n,q\in{\mathbb Z}$ and $u,v\in\mathcal{U}$, and plotted in the plane
$(\gamma,\beta)$.
The most interesting families of parameters appear when
$\gamma$ and $\beta$ are of the same sign.
For a fixed $k$, the hyperbolas
$$-4(\beta+4)=\left((2k-1)\sqrt{-\gamma}\pm
\sqrt{-\gamma-8(1+\cos(\pi/p))}\right)^2,$$
where $p\geq 2$ is an integer, form a one-parameter family of curves
converging to the boundary of $D_k$ as $p\to\infty$.
Each hyperbola has the asymptotes
$\beta=(k-1)^2\gamma-4k(1+\cos(\pi/p))+4$ and
$\beta=k^2\gamma+4k(1+\cos(\pi/p))-4$, which are obviously
parallel to $\beta=(k-1)^2\gamma-4$ and $\beta=k^2\gamma-4$,
respectively.
\begin{figure}
\caption{The structure of $\Sigma_2$}
\label{sigma2}
\end{figure}
For $\gamma>0$ and $\beta>0$, consider a one-parameter family of
parabolas $\beta_k=(k\sqrt{\gamma}\pm2)^2-4$. Let $\Sigma_k$
be the domain bounded by $\beta_k$:
$$
\Sigma_k=\{(\gamma,\beta): (k\sqrt{\gamma}-2)^2\leq \beta+4
\leq (k\sqrt{\gamma}+2)^2\}.
$$
Within each $\Sigma_k$, the parameters for discrete groups are given by
$$
\left\{
\begin{array}{l}
\beta=(k\sqrt{\gamma}\pm2\cos(q\pi/n))^2-4,\\
\gamma=4C(q,n)(\cos(\pi/n)+\cosh u)^2,
\end{array}
\right.
$$
where $(q,n)=1$, $1\leq q<n/2$, and $u\in\mathcal{U}$.
Note that for $n=2$, we have $\beta=k^2\gamma-4$ and $\gamma=4\cosh^2u$.
As $n\to\infty$, the curves
$\beta=(k\sqrt{\gamma}\pm2\cos(q\pi/n))^2-4$ accumulate to the
boundary of $\Sigma_k$, i.e., to the boundaries of
$C_{k-1}$ and $C_k$ (see Figure~\ref{sigma2} for an example of
$\Sigma_k$ for $k=2$).
\end{document}
\begin{document}
\title{Necessary and sufficient condition for quantum-generated correlations}
\author{Ll. Masanes}
\affiliation{Dept. d'Estructura i Constituents de la Mat\`eria, Univ. Barcelona, 08028. Barcelona, Spain.}
\date{\today}
\begin{abstract}
We present a non-linear inequality that completely characterizes the set of correlation functions obtained from bipartite quantum systems, for the case in which measurements on each subsystem can be chosen between two arbitrary dichotomic observables. This necessary and sufficient condition is the maximal strengthening of Cirel'son's bound.
\end{abstract}
\pacs{03.67.-a, 03.67.Lx}
\maketitle
The Principle of Causality imposes bounds on the correlations between space-like separated events: they have to emerge from interactions that took place in the past. Whether these interactions are classical or quantum makes a difference in these bounds.
It was Bell \cite{Bell} who first noticed that some correlations predicted by Quantum Theory (QT) are in contradiction with Local Variable Theories (LVTs), i.e., classical physics. This has been experimentally confirmed up to some loopholes \cite{Aspect}. A lot of work has been done to characterize the bounds of LVTs' correlations \cite{Fine,Werner,Zukowski,Collins}. These bounds are called Bell inequalities and provide a convenient framework for experimentally invalidating classical physics because, ideally, no assumptions or models for the experiments are necessary; one just measures correlations.
Little is known about the bounds for the correlations obtainable within QT: a necessary condition found by Cirel'son \cite{Cirelson}, some numerical results \cite{Filipp}, some general but partial results \cite{conv}, and a characterization in terms of a convex hull \cite{Werner}. This last description is useful for many purposes, but in some situations it would be better to know the analytic shape of the bounds. This paper contains a complete and simple characterization of the bounds for the correlations attainable within QT for the scenario stated below. On the same footing as Bell inequalities, these bounds provide a good framework for experimentally searching for the existence of {\em superquantum} correlations \cite{Popescu}, without assumptions and models for the experiments.
From a more practical point of view, the sharing of correlations among several parties is a useful resource for tasks like distributed computation \cite{cc} and secret key agreement \cite{cryptography}. It is therefore important to know whether a given set of correlations is achievable with shared classical randomness, with quantum entanglement, or whether it requires some amount of communication. One could say that Bell inequalities tell us impossibilities for Classical Information Theory. In the same way, the result of this work shows limitations for Quantum Information Theory.
The scenario considered in this work consists of two separated parties ---Alice and Bob--- sharing a bipartite system. Alice (Bob) can carry out two possible measurements, $A_0$ and $A_1$ ($B_0$ and $B_1$), with outcomes $1$ and $-1$. No assumption is made on the kind of systems they have; thus, although all observables are dichotomic, we do not restrict to local two-dimensional systems. At space-like separated events Alice and Bob choose and perform one measurement each. With the observed results they can construct the vector of correlation functions:
\begin{equation}
{\bf x} = \begin{pmatrix} \langle A_0 B_0\rangle &\, \langle A_0 B_1\rangle &\, \langle A_1 B_0\rangle &\, \langle A_1 B_1\rangle \end{pmatrix}\ .
\label{correlations} \end{equation}
The set of all possible vectors of correlators is a four-dimensional cube characterized by the eight trivial inequalities:
\begin{equation}
-1 \leq x_k \leq 1 \hspace{1cm} k=1,2,3,4
\label{trivial} \end{equation}
What regions of this set are accessible depends on the theory that is used to describe the state of the compound system as well as the process of measuring it. In what follows two theories are considered.
{\em Local Variable Theories:} It was found in \cite{Fine,Werner} that a vector of correlation functions ${\bf x}$ can be obtained within a LVT if and only if it satisfies the eight inequalities:
\begin{eqnarray}
-2 \leq-x_1 + x_2 + x_3 + x_4 \leq 2 \nonumber \\
-2 \leq x_1 - x_2 + x_3 + x_4 \leq 2 \nonumber \\
-2 \leq x_1 + x_2 - x_3 + x_4 \leq 2 \nonumber \\
-2 \leq x_1 + x_2 + x_3 - x_4 \leq 2 \label{chsh}
\end{eqnarray}
We denote this set by $C$. Because $C$ is defined by linear inequalities it is a convex polytope, namely a four-dimensional octahedron \cite{Werner}. These eight inequalities are equivalent in the sense that each can be transformed into any other by interchanging parties, observables, and outcomes. A representative of them is the well-known CHSH inequality \cite{CHSH}.
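As an illustration (our own sketch, not part of the paper; the function name is an assumption), membership in the octahedron $C$ can be tested directly from the eight CHSH-type inequalities:

```python
# Membership test for the classical polytope C: the eight CHSH-type
# inequalities reduce to |s| <= 2 for the four sign patterns with a
# single minus sign, together with the trivial bounds |x_i| <= 1.
def in_C(x):
    x1, x2, x3, x4 = x
    sums = (-x1 + x2 + x3 + x4,
             x1 - x2 + x3 + x4,
             x1 + x2 - x3 + x4,
             x1 + x2 + x3 - x4)
    return all(abs(xi) <= 1 for xi in x) and all(abs(s) <= 2 for s in sums)

assert in_C((1, 1, 1, 1))        # a deterministic local vertex
assert not in_C((1, 1, 1, -1))   # CHSH value 4 (PR-box correlations): outside C
```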
{\em Quantum Theory} gives the same predictions as LVTs when considering separable states, but in general it goes beyond them. Cirel'son proved that the inequalities (\ref{chsh}) cannot be violated within QT by a value larger than $2\sqrt{2}$ \cite{Cirelson}, for example
\begin{equation}
x_1 + x_2 + x_3 - x_4 \leq 2\sqrt{2}\ .
\label{cir} \end{equation}
The main result of this work corresponds to the following theorem:
{\bf Theorem:} {\em A vector of correlation functions ${\bf x}$ is obtainable within QT if and only if it satisfies the conditions:}
\begin{eqnarray}
-\pi \leq- \arcsin x_1 + \arcsin x_2 + \arcsin x_3 + \arcsin x_4 \leq \pi \nonumber \\
-\pi \leq \arcsin x_1 - \arcsin x_2 + \arcsin x_3 + \arcsin x_4 \leq \pi \nonumber \\
-\pi \leq \arcsin x_1 + \arcsin x_2 - \arcsin x_3 + \arcsin x_4 \leq \pi \nonumber \\
-\pi \leq \arcsin x_1 + \arcsin x_2 + \arcsin x_3 - \arcsin x_4 \leq \pi \label{nonlinear}
\end{eqnarray}
{\em where} $\arcsin x$ {\em is the inverse of the sine function.} The set of points fulfilling (\ref{trivial}) and (\ref{nonlinear}) is denoted by $Q$. In analogy to $C$, these eight inequalities are equivalent.
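A numerical sketch of the Theorem's condition (with an assumed function name, ours): the Cirel'son point lies in $Q$ while the PR-box correlations do not:

```python
# Membership test for the quantum set Q: trivial bounds plus the eight
# arcsine inequalities of the Theorem (single-minus sign patterns).
import math

def in_Q(x, tol=1e-12):
    if any(abs(xi) > 1 for xi in x):
        return False
    a0, a1, a2, a3 = (math.asin(xi) for xi in x)
    sums = (-a0 + a1 + a2 + a3,
             a0 - a1 + a2 + a3,
             a0 + a1 - a2 + a3,
             a0 + a1 + a2 - a3)
    return all(abs(s) <= math.pi + tol for s in sums)

r = 1 / math.sqrt(2)
assert in_Q((r, r, r, -r))       # Cirel'son point: CHSH value 2*sqrt(2), on the boundary
assert not in_Q((1, 1, 1, -1))   # PR-box correlations: arcsine sum 2*pi, outside Q
```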
{\bf Symmetries of $Q$ and s-order.} In what follows it is shown that the sets $C$ and $Q$ have two symmetries that simplify their study. Any permutation of the coordinates $x_i$ leaves the sets of equations (\ref{trivial}), (\ref{chsh}) and (\ref{nonlinear}) unaltered. The same happens when changing the sign of two coordinates. This implies that the property of belonging to $C$ or $Q$ remains unchanged under these transformations. Notice that any point can be transformed into one satisfying
\begin{equation}
x_1 \geq x_2 \geq x_3 \geq |x_4|\ . \label{J}
\end{equation}
Then, it is enough to consider such points. Following the same notation as in \cite{ibm}, a vector is said to be s-ordered if it satisfies (\ref{J}). From now on, unless explicitly mentioned, any vector ${\bf x}$ is supposed to be s-ordered. Almost all the inequalities in (\ref{trivial}), (\ref{chsh}) and (\ref{nonlinear}) are automatically satisfied by such vectors. After discarding the redundant inequalities we conclude that, for any s-ordered ${\bf x}$,
${\bf x} \in C:$
\begin{eqnarray}
x_1\leq 1 \label{c1} \\ x_1+x_2+x_3-x_4 \leq 2 \label{c2}
\end{eqnarray}
${\bf x} \in Q:$
\begin{equation}a
x_1\leq 1 \label{q1} \\ \mbox{asin}\, x_1 + \mbox{asin}\, x_2 + \mbox{asin}\, x_3 - \mbox{asin}\, x_4 \leq \!\mbox{conv}dot\!i \label{q2}
\end{equation}a
In what follows, we prove the Theorem. We have divided the proof into an initial statement of simple facts, six lemmas, and a final argument (Proof of the Theorem). The reader not interested in technical details may skip this part.
Let us start by defining $F$ as the set of points for which at least one of the inequalities in \w{nonlinear} is saturated. All s-ordered points in $F$ saturate inequality \w{q2} and satisfy the rest of \w{nonlinear}. Then by symmetry, all points in $F$ satisfy \w{nonlinear}, which implies $F \subseteq Q$. Because the conditions \w{nonlinear} are well-defined and continuous for all correlation vectors \w{trivial}, the boundary of $Q$ ($\partial Q$) is the union of $F$ and the points saturating at least one of the inequalities in \w{trivial}.
{\bf Lemma 1:} $Q$ is a convex set.
In order to prove this statement let us consider s-ordered points belonging to the hypersurface $\partial Q$. The set of points saturating \w{q1},
\begin{equation}
x_1 = 1\ ,
\label{hp} \end{equation}
is a hyperplane, and therefore convex. The set of points saturating \w{q2} are such that
\begin{equation}
f({\bf x})=\pi \ ,
\label{surf} \end{equation}
where $f({\bf x})= \arcsin x_1 +\arcsin x_2 +\arcsin x_3 -\arcsin x_4$. To prove the convexity of this surface, the gradient and the Hessian matrix of $f({\bf x})$ have to be computed:
\begin{eqnarray}
&& \nabla_i f({\bf x}) = \frac{1}{\cos \gamma_i} \label{grad} \\
&& H({\bf x})_{ij} = \frac{\partial^2 f({\bf x})}{\partial x_i \partial x_j}= \frac{\sin \gamma_i}{\cos^3 \gamma_i} \delta_{ij}
\end{eqnarray}
where $\gamma_1=\arcsin x_1$, $\gamma_2=\arcsin x_2$, $\gamma_3=\arcsin x_3$ and $\gamma_4=-\arcsin x_4$. It is well known that if, for each point ${\bf x}$ satisfying \w{surf}, the matrix $H({\bf x})$ is positive in the subspace orthogonal to $\nabla f({\bf x})$, then the surface \w{surf} is convex. Equation \w{surf} can be written as
\begin{equation}
\gamma_1 +\gamma_2 +\gamma_3 +\gamma_4 = \pi
\label{eq} \end{equation}
where each $\gamma_i$ belongs to the interval $[-\pi/2,\pi/2]$; this implies that at most one eigenvalue of $H({\bf x})$ is negative. When all the eigenvalues are positive there is no problem. Let us suppose without loss of generality that the eigenvalue corresponding to the eigenvector ${\bf v}_4=\begin{pmatrix}0&0&0&1\end{pmatrix}$ is negative. The vector belonging to the subspace orthogonal to the gradient which has maximal overlap with ${\bf v}_4$ is
\begin{equation}
{\bf v}_4 - \frac{{\bf v}_4\cdot\nabla f({\bf x})}{(\nabla f({\bf x}))^2}\, \nabla f({\bf x})\ .
\end{equation}
The expectation value of $H({\bf x})$ in this vector is proportional (with a positive constant) to
\begin{equation}
\sum_{i=1}^3 \frac{\sin \gamma_i}{\cos^5 \gamma_i} -\tan(\gamma_1+\gamma_2+\gamma_3)\left( \sum_{i=1}^3 \frac{1}{\cos^2 \gamma_i} \right)^2
\end{equation}
where we have used \w{eq} to eliminate $\gamma_4$; therefore the sum $\gamma_1+\gamma_2+\gamma_3$ must belong to $[\pi/2,3\pi/2]$. We have checked numerically that this expression is positive for the allowed $\gamma$'s. Now, we have seen that the surfaces \w{hp} and \w{surf} are convex. The fact that the unit vectors normal to \w{hp} and \w{surf} coincide at the intersection of both surfaces, together with the symmetries, guarantees the convexity of $Q$. To see this, let us take the limit $x_1\rightarrow 1$ ($\gamma_1 \rightarrow \pi/2$) in the vector \w{grad}. This limit is proportional to $(1\,0\,0\,0)$, which is the vector normal to \w{hp}. The symmetries imply that the unit vector normal to $\partial Q$ is continuous everywhere. $\Box$
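The numerical check mentioned in the proof can be reproduced with a short script (our own sketch; the margin of $0.1$ keeping the samples away from the singular boundaries is a choice of the sampling, not of the lemma):

```python
import math
import random

random.seed(0)

def expr(g1, g2, g3):
    # expectation value of H restricted to the complement of the gradient
    # (up to a positive constant), with gamma_4 eliminated via the
    # constraint gamma_1 + gamma_2 + gamma_3 + gamma_4 = pi
    s = g1 + g2 + g3
    first = sum(math.sin(g) / math.cos(g)**5 for g in (g1, g2, g3))
    second = math.tan(s) * sum(1/math.cos(g)**2 for g in (g1, g2, g3))**2
    return first - second

# sample the allowed region: each gamma_i in (-pi/2, pi/2) and the sum
# gamma_1 + gamma_2 + gamma_3 in (pi/2, 3*pi/2)
count = 0
for _ in range(50000):
    g = [random.uniform(-math.pi/2 + 0.1, math.pi/2 - 0.1) for _ in range(3)]
    if math.pi/2 + 0.1 < sum(g) < 3*math.pi/2 - 0.1:
        assert expr(*g) > 0
        count += 1
print("positivity verified on", count, "random interior points")
```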
{\bf Lemma 2:} $Q$ is contained in the convex hull of the points saturating the inequalities \w{nonlinear}: $Q\subseteq \mbox{conv} F$.
It is easy to see that any compact set belongs to the convex hull of its boundary \cite{lulu}; in our case $Q \subseteq \mbox{conv} \partial Q$. Using this, we can prove the lemma by showing that $\partial Q$ is contained in the convex hull of $F$: $\partial Q \subseteq \mbox{conv} F$, because this implies that $\mbox{conv} \partial Q \subseteq \mbox{conv} F$, and therefore $Q\subseteq \mbox{conv} F$, which completes the proof. When studying the boundary of $Q$ we have seen that all points in $\partial Q$ are in $F$ or saturate at least one inequality in \w{trivial}. In what follows it is seen that all points in $\partial Q$ which are not in $F$ belong to $\mbox{conv} F$, as we want to show. The points in $Q$ for which $x_1=1$ are the ones satisfying
\begin{eqnarray}
&x_4& \geq -\cos(\arcsin x_2 +\arcsin x_3 ) \label{p1} \\
&x_4& \leq \cos(\arcsin x_2 -\arcsin x_3 ) \label{p2}\ .
\end{eqnarray}
These conditions are obtained by imposing $x_1=1$ in \w{nonlinear}; therefore, points saturating at least one of these inequalities belong to $F$. This set is plotted in the right part of FIG. \ref{fig}. When $x_2$ or $x_3$ is equal to $\pm 1$ the right-hand sides of \w{p1} and \w{p2} coincide, which implies that the boundary of this set is made of points belonging to $F$. Using \cite{lulu} again, it follows that this set is contained in $\mbox{conv} F$, and by symmetry, the whole of $\partial Q$ belongs to $\mbox{conv} F$, which finishes this proof. $\Box$
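The statement that \w{p1} and \w{p2} are what remains of \w{nonlinear} on the slice $x_1=1$ can be spot-checked numerically (our own sketch, not part of the proof):

```python
import math
import random

random.seed(1)

def in_Q_slice(x2, x3, x4):
    # the four double inequalities of (nonlinear), evaluated at x1 = 1
    s = [math.pi/2, math.asin(x2), math.asin(x3), math.asin(x4)]
    signs = [(-1, 1, 1, 1), (1, -1, 1, 1), (1, 1, -1, 1), (1, 1, 1, -1)]
    return all(-math.pi <= sum(e*si for e, si in zip(sg, s)) <= math.pi
               for sg in signs)

def in_slab(x2, x3, x4):
    # conditions (p1) and (p2)
    a2, a3 = math.asin(x2), math.asin(x3)
    return -math.cos(a2 + a3) <= x4 <= math.cos(a2 - a3)

for _ in range(5000):
    x2, x3, x4 = (random.uniform(-1, 1) for _ in range(3))
    assert in_Q_slice(x2, x3, x4) == in_slab(x2, x3, x4)
print("slice of Q and the (p1)-(p2) region agree on 5000 random points")
```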
\begin{figure}
\includegraphics[width=3 cm]{c.eps}
\hspace{.6cm}
\includegraphics[width=3 cm]{q.eps}
\caption{These figures are, respectively, the intersection of $C$ and $Q$ with the set of points satisfying $x_1=1$.}\label{fig}
\end{figure}
{\bf Lemma 3:} The set of all correlation vectors obtainable within QT is convex.
This result was proven in \cite{Werner,conv}. Nevertheless, we include the proof here for the sake of completeness. First of all, recall that any measurement can be written as a projective one by sufficiently enlarging the Hilbert space on which it acts. Then, in the proofs of Lemmas 3 and 4 we only consider projective measurements. Now, our aim is to show that if ${\bf x}$ and ${\bf x}'$ can be obtained within QT, then for any value of $\lambda$ in the range $[0,1]$, the point $\lambda {\bf x} + (1-\lambda) {\bf x}'$ is also a quantum correlation vector. Let us suppose that there exists a pair of observables for Alice ($A_a$ with $a=0,1$), a pair of observables for Bob ($B_b$ with $b=0,1$), and a density matrix $\rho$, for which $\mbox{tr}(A_a \otimes B_b \ \rho)$ yields the components of ${\bf x}$, and similarly for ${\bf x}'$. To obtain the components of $\lambda {\bf x} + (1-\lambda) {\bf x}'$ we can construct the pair of observables for Alice $A_a \oplus A'_a$ (with $a=0,1$), the pair of observables for Bob $B_b \oplus B'_b$ (with $b=0,1$), and the density matrix $\lambda \rho \oplus (1-\lambda) \rho'$, which accomplish our purpose. $\Box$
{\bf Lemma 4:} The set of all correlation vectors obtainable within QT is $\mbox{conv} G$, where $G$ is the set of vectors
\begin{equation}
\begin{pmatrix} \sin\phi_1 &\, \sin\phi_2 &\, \sin\phi_3 &\, -\sin(\phi_1+\phi_2+\phi_3) \end{pmatrix},
\label{generators} \end{equation}
for all values of $\phi_1,\phi_2,\phi_3$.
This result is the characterization in terms of a convex hull given in \cite{Werner}. The proof that we give here is simpler than the original one. To prove this lemma we proceed in two steps. First, we show that all quantum correlation vectors belong to $\mbox{conv} G$. Second, for each value of $\phi_1,\phi_2,\phi_3$, we explicitly provide the quantum states and the observables giving the correlations \w{generators}. Together with Lemma 3, this implies that $\mbox{conv} G$ is contained in the set of quantum correlation vectors. These two steps are sufficient to finish the proof.
First step: because the eigenvalues of $A_a$ are $\pm 1$ and adopting the convention that any operator raised to the zeroth power gives the identity, a generic pair of observables for Alice can be written as $A_a = (A_1 A_0)^a A_0$. Recalling that any $A_a$ is unitary, we have $A_a = P_0^A A_0 + e^{ia\alpha} P_1^A A_0$, with $P_0^A$ and $P_1^A$ being two projectors such that $P_0^A + P_1^A$ is the identity operator on Alice's Hilbert space. The same can be done for Bob's pair of observables $B_b$. The most general correlation vector predictable with QT is
\begin{equation}
\langle A_a B_b \rangle = \sum_{r,s=0,1} e^{i (ra\alpha + sb\beta)}\ \mbox{tr}\! \left[ \rho\, (P_r^A A_0) \!\otimes\! (P_s^B B_0) \right]
\label{expected} \end{equation}
where $\rho$ is a unit-trace positive matrix acting on the tensor product of Alice's and Bob's Hilbert spaces. Let us put $\phi_{rs}=0$ if $\mbox{tr}\! \left[ \rho\, (P_r^A A_0) \!\otimes\! (P_s^B B_0) \right]$ is positive, and $\phi_{rs}=\pi$ if it is negative. Because the expected value \w{expected} is real, it can be written as
\begin{equation}
\sum_{r,s=0,1} \cos(\phi_{rs} + ra\alpha + sb\beta) \left| \mbox{tr}\! \left[ \rho\, (P_r^A A_0) \!\otimes\! (P_s^B B_0) \right] \right|. \nonumber
\end{equation}
It is easy to see from the last expression that a generic vector $\langle A_a B_b \rangle$ can always be written as a convex combination of the vectors
\begin{equation}
\{\langle A_a B_b \rangle = \cos(\phi + a\alpha + b\beta);\ \forall \phi, \alpha, \beta \} \ ,
\label{bombai} \end{equation}
because the positive numbers $\left| \mbox{tr}\! \left[ \rho\, (P_r^A A_0) \!\otimes\! (P_s^B B_0) \right] \right|$ sum up to no more than one, and the null vector belongs to (\ref{bombai}). The relabeling of the angles
\begin{eqnarray}
&\phi_1 =& \frac{\pi}{2} - \phi \\
&\phi_2 =& \frac{\pi}{2} + \phi + \beta \nonumber \\
&\phi_3 =& \frac{\pi}{2} + \phi + \alpha \nonumber
\end{eqnarray}
transforms (\ref{bombai}) into (\ref{generators}); hence both sets are the same. Second step: it can be checked that all vectors in (\ref{bombai}) can be obtained by measuring the two-qubit maximally entangled state
\begin{equation}
\ket{\Phi_{AB}} = \frac{1}{\sqrt{2}} \left( \ket{0_A}\!\ket{0_B} + e^{i\phi} \ket{1_A}\!\ket{1_{B}} \right),
\end{equation}
with the observables
\begin{eqnarray}
&A_a& = e^{ia\alpha}\ket{0_A}\!\bra{1_A} + e^{-ia\alpha}\ket{1_A}\!\bra{0_A} \\
&B_b& = e^{ib\beta}\ket{0_B}\!\bra{1_B} + e^{-ib\beta}\ket{1_B}\!\bra{0_B} \ .
\end{eqnarray}
As we have said before, this concludes the proof of Lemma 4. $\Box$
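As a cross-check of this second step (our own verification, using the standard column-vector representation of the kets above), the state and observables indeed reproduce $\langle A_a B_b \rangle = \cos(\phi + a\alpha + b\beta)$:

```python
import numpy as np

rng = np.random.default_rng(7)

def correlator(phi, alpha, beta, a, b):
    # <Phi| A_a (x) B_b |Phi> for the state and observables above
    ket0, ket1 = np.array([1, 0]), np.array([0, 1])
    Phi = (np.kron(ket0, ket0) + np.exp(1j*phi)*np.kron(ket1, ket1)) / np.sqrt(2)
    A = (np.exp(1j*a*alpha)*np.outer(ket0, ket1)
         + np.exp(-1j*a*alpha)*np.outer(ket1, ket0))
    B = (np.exp(1j*b*beta)*np.outer(ket0, ket1)
         + np.exp(-1j*b*beta)*np.outer(ket1, ket0))
    return (Phi.conj() @ np.kron(A, B) @ Phi).real

for _ in range(100):
    phi, alpha, beta = rng.uniform(0, 2*np.pi, 3)
    for a in (0, 1):
        for b in (0, 1):
            assert np.isclose(correlator(phi, alpha, beta, a, b),
                              np.cos(phi + a*alpha + b*beta))
print("correlators match cos(phi + a*alpha + b*beta)")
```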
{\bf Lemma 5:} $G$ has the same symmetries as $Q$.
The exact meaning of this statement is that the set $G$ remains invariant under any permutation of the coordinates and under a change of sign of any two coordinates. A way to show this fact is to write the definition of $G$ \w{generators} in a different way:
\begin{equation}
\begin{pmatrix} \sin\phi_1 &\, \sin\phi_2 &\, \sin\phi_3 &\, \sin\phi_4 \end{pmatrix}
\label{memo} \end{equation}
where the constraint
\begin{equation}
\phi_1 +\phi_2 +\phi_3 +\phi_4 = \mbox{ multiple of } 2\pi
\label{rel} \end{equation}
must hold. Now, the invariance under permutations becomes manifest. Changing the sign of two coordinates, say $i$ and $j$, is equivalent to the transformation
\begin{eqnarray}
\phi_i \rightarrow \phi_i + \pi \nonumber \\
\phi_j \rightarrow \phi_j + \pi
\end{eqnarray}
which keeps the constraint \w{rel} satisfied. $\Box$
{\bf Lemma 6:} $G$ is contained in $Q$.
The result of Lemma 5 allows us to prove this lemma considering only s-ordered vectors. Then, we just have to check that all s-ordered points in \w{generators} satisfy \w{q2}, that is
\begin{equation}
f(\phi_1)+f(\phi_2)+f(\phi_3)+f(\phi_1+\phi_2+\phi_3) \leq \pi \ ,
\label{laden} \end{equation}
where $f(\phi)=\arcsin\!(\sin \phi)$. This function is continuous, periodic and has constant slope ($\pm 1$) within intervals of length $\pi$. The points at which $f(\phi)$ changes its slope are
\begin{equation}
\frac{\pi}{2} + \mbox{ multiple of } \pi
\label{ext} \end{equation}
In what follows it is found that the maximum of
\begin{equation}
f(\nu_1)+f(\nu_2)+f(\nu_3)+f(\nu_4)
\label{bin} \end{equation}
constrained to
\begin{equation}
\nu_1 +\nu_2 +\nu_3 -\nu_4 =0
\label{cons} \end{equation}
is $\pi$, which proves this lemma. Assume that the maximum is attained for some values of $\nu_1$, $\nu_2$, $\nu_3$ and $\nu_4$. If one $\nu_i$ is not of the form \w{ext}, equation \w{cons} implies that there is another $\nu_j$ which is also not of the form \w{ext}. We can change the value of these two variables inside the range where $f(\nu_i)$ and $f(\nu_j)$ keep their slope constant, while equation \w{cons} still holds. This operation cannot increase the value of \w{bin}, otherwise the initial point would not be a maximum; nor can it decrease \w{bin}, because within the region of constant slope the change is linear, and moving in the opposite direction would then increase \w{bin}. The change of $\nu_i$ and $\nu_j$ can be performed until one of them reaches a value of the form \w{ext}. Once this operation has been performed, if there is still another $\nu_k$ not of the form \w{ext}, the previous procedure can be repeated. All this implies that there is a point with coordinates of the form \w{ext}, satisfying \w{cons}, at which \w{bin} achieves its maximum value. There is only one s-ordered point with coordinates of the form \w{ext}: $\nu_1=\nu_2=\nu_3=\pi/2$ and $\nu_4=-\pi/2$, for which the function \w{bin} gives $\pi$. $\Box$
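The conclusion of this lemma can be confirmed by a brute-force search (our own sketch; random sampling is not exhaustive, but it illustrates that the bound $\pi$ is attained and never exceeded):

```python
import math
import random

random.seed(2)

def f(phi):
    # the continuous piecewise-linear function asin(sin(phi))
    return math.asin(math.sin(phi))

def objective(n1, n2, n3):
    # the quantity (bin), with nu_4 eliminated through the constraint (cons)
    return f(n1) + f(n2) + f(n3) + f(n1 + n2 + n3)

# the claimed maximiser nu_1 = nu_2 = nu_3 = pi/2
best = objective(math.pi/2, math.pi/2, math.pi/2)
print(best)                   # pi

# a random search never beats it
for _ in range(20000):
    nu = [random.uniform(-math.pi, math.pi) for _ in range(3)]
    assert objective(*nu) <= best + 1e-9
print("random search confirms the maximum is pi")
```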
{\bf Proof of the Theorem:} First, let us prove that $\mbox{conv} G \subseteq Q$. The fact that $G \subseteq Q$ (Lemma 6) together with the convexity of $Q$ (Lemma 1) implies that $\mbox{conv} G \subseteq Q$. Second, let us prove that $Q \subseteq \mbox{conv} G$. To accomplish this it suffices to show that $F \subseteq G$, because this implies that $\mbox{conv} F \subseteq \mbox{conv} G$, and Lemma 2 says $Q \subseteq \mbox{conv} F$. Then, the only thing left to prove is that $F \subseteq G$. Considering only s-ordered vectors, it is enough to show that all points saturating \w{q2} ---which are the ones for which \w{eq} holds--- are of the form \w{generators}. This is straightforward after identifying $\phi_1=\gamma_1$, $\phi_2=\gamma_2$, $\phi_3=\gamma_3$ and $\phi_4=-\gamma_4$. Finally, recalling that the set of all correlation vectors obtainable within QT is $\mbox{conv} G$ (Lemma 4), the proof of the theorem is finished. $\Box$
{\bf Final Remarks.} There are many different probability distributions giving the same correlator; this can be seen from the expression relating the two:
\begin{eqnarray}
\langle A B \rangle = &p(A=1,B=1) + p(A=-1,B=-1)& \nonumber \\
-&p(A=1,B=-1) - p(A=-1,B=1)&
\end{eqnarray}
It was proven in \cite{Fine} that if a vector of correlators ${\bf x}$ satisfies \w{chsh}, then all probability distributions associated with ${\bf x}$ are achievable with LVTs. What has been proven in this paper concerning QT is slightly weaker than this. The fact that a vector of correlators ${\bf x}$ satisfies \w{nonlinear} only implies that there exists at least one probability distribution for each correlator in ${\bf x}$ predictable by QT.
All the results obtained in this work could be generalized to $n$ parties. The generalizations of $C$ and $Q$, call them $C_n$ and $Q_n$, were found in \cite{Werner}. While $C_n$ is a polytope, $Q_n$ is a complicated convex set characterized in terms of a convex hull like \w{generators}. Consider now the non-linear map
\begin{equation}
\mu({\bf x})_i = \frac{2}{\pi} \,\arcsin x_i ,
\label{mu} \end{equation}
which is bijective on the set of correlation vectors. This map has the nice property that it transforms a complicated convex set into a simple polytope:
\begin{equation}
\mu(Q_2)=C_2
\label{inch} \end{equation}
The set of generators of $Q_n$ ---being \w{generators} for $n=2$--- found in \cite{Werner} has the same structure for any $n$: each component is the sine of a linear function of $n+1$ variables. This suggests that for any $n$, after performing the map $\mu$, the set of quantum correlation vectors becomes a polytope. Even more, it could be that it becomes the LVTs polytope $C_n$, as happens for $n=2$ \w{inch}. It can be seen that the latter is not true by recalling that, for $n=3$, there are exact contradictions like the one for the GHZ state \cite{GHZ}:
\begin{eqnarray}
&\langle A_0 B_0 C_1 \rangle& = \langle A_0 B_1 C_0 \rangle = \langle A_1 B_0 C_0 \rangle = 1 \nonumber \\
&\langle A_1 B_1 C_1 \rangle& =-1
\label{ghz} \end{eqnarray}
The fact that the map $\mu$ leaves the components equal to $\pm 1$ invariant implies that the image of \w{ghz} under $\mu$ still contains the contradiction; therefore, it cannot belong to the LVTs polytope. When $n>3$, three of the parties can share a GHZ state and obtain again a contradiction. Thus, $\mu(Q_n)$ is not equal to $C_n$ when $n>2$; nevertheless, it is quite probable that $\mu(Q_n)$ is a polytope. A polytope is always characterizable by a set of linear inequalities, which in the worst case can be found numerically. In this case, the sets $Q_n$ could be characterized in a simple way.
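Both observations (that $\mu$ sends the saturated quantum bound onto the CHSH facet of $C$, and that components equal to $\pm 1$ are fixed points of $\mu$, so the GHZ contradiction survives) are easy to check numerically; the following is our own illustration:

```python
import math

def mu(x):
    # the componentwise map mu(x)_i = (2/pi) * asin(x_i)
    return [2/math.pi * math.asin(xi) for xi in x]

# the Tsirelson point saturates the quantum bound (q2) ...
tsirelson = [1/math.sqrt(2)]*3 + [-1/math.sqrt(2)]
image = mu(tsirelson)                       # approximately [0.5, 0.5, 0.5, -0.5]
chsh = image[0] + image[1] + image[2] - image[3]
print(chsh)                                 # approximately 2: the CHSH facet of C

# ... while components equal to +-1 are fixed points of mu, so the
# GHZ-type contradiction for n = 3 is preserved by the map
print(mu([1.0, 1.0, 1.0, -1.0]))            # approximately [1.0, 1.0, 1.0, -1.0]
```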
As a final application let us mention that when a vector of correlators \w{correlations} cannot be obtained by measuring entangled states, communication is necessary. The result of this work can be used to compute the minimal average communication sufficient to obtain any vector of correlators when entanglement is available.
Finally, it is worth mentioning the nice resemblance between the equations defining $C$ and $Q$, which are related by the map $\mu$ \w{mu}. The first set in FIG. \ref{fig} is the image of the second one under $\mu$. Then, one can interpret these figures as pictorial three-dimensional versions of $C$ and $Q$ respectively.
{\bf Acknowledgements:} The author is grateful to A. Ac\'{\i}n, J. I. Latorre, A. Prats for helpful comments and suggestions, and G. Vidal for noticing the appearance of s-order in this problem. This work is financially supported by the projects MCYT FPA2001-3598, GC2001SGR-00065 and the grant 2002FI-00373 UB.
\begin{thebibliography}{99}
\bibitem{Bell} J. S. Bell, Physics 1, 195 (1964).
\bibitem{Aspect} A. Aspect, Nature 398, 189 (1999).
\bibitem{Fine} A. Fine; Phys. Rev. Lett. 48, 291 (1982).
\bibitem{Werner} R. F. Werner, M. M. Wolf; quant-ph/0102024.
\bibitem{Zukowski} M. \.{Z}ukowski, C. Brukner, Phys. Rev. Lett. 88, 210401 (2002).
\bibitem{Collins} D. Collins, N. Gisin; quant-ph/0306129.
\bibitem{Cirelson} B. S. Cirel'son; Lett. Math. Phys. 4, 93 (1980).
\bibitem{Filipp} S. Filipp, K. Svozil; quant-ph/0306092.
\bibitem{conv} I. Pitowsky; quant-ph/0112068.
\bibitem{Popescu} S. Popescu, D. Rohrlich; {\em Found. Phys.} 24, 379 (1994).
\bibitem{cc} R. Cleve, H. Buhrman; Phys. Rev. A 56, 1201 (1997).
\bibitem{cryptography} A. Ekert; Phys. Rev. Lett. 70 661 (1991).
\bibitem{polytopes} G. M. Ziegler; {\em Lectures on Polytopes.} Springer-Verlag.
\bibitem{CHSH} J. Clauser, M. Horne, A. Shimony, R. Holt; Phys. Rev. Lett. 23, 880 (1969).
\bibitem{ibm} C. H. Bennett, J. I. Cirac, M. S. Leifer, D. W. Leung, N. Linden, S. Popescu, G. Vidal; Phys. Rev. A 66, 012305 (2002).
As a side remark let us mention that: if ${\bf x} \in Q(C)$ then all vectors s-majorized by ${\bf x}$ also belong to $Q(C)$.
\bibitem{lulu} {\em Proposition:} $Q$ is a compact set; hence $Q \subseteq \mbox{conv} \partial Q$.
{\em Proof:} Because $Q$ is bounded, any straight line which contains an interior point of $Q$ intersects $\partial Q$ at least twice. Hence, any interior point ${\bf x}\in Q$ can be expressed as a convex combination of the two points where a straight line containing ${\bf x}$ intersects $\partial Q$.
\bibitem{GHZ} D. M. Greenberger, M. Horne, A. Zeilinger; {\em Bell's Theorem...} ed. M. Kafatos, Kluwer, Dordrecht 69 (1989).
\end{thebibliography}
\end{document}
\begin{document}
\title{Dynamical Casimir effect enhanced by decreasing the mirror reflectivity}
\author{Andreson L. C. Rego}
\email{[email protected]}
\affiliation{Instituto de Aplicação Fernando Rodrigues da Silveira, Universidade
do Estado do Rio de Janeiro, 20261-005, Rio de Janeiro, Brazil}
\author{Alessandra N. Braga}
\email{[email protected]}
\affiliation{Instituto de Estudos Costeiros, Universidade Federal do Pará, 68600-000,
Bragança, Brazil}
\author{Jeferson Danilo L. Silva}
\email{[email protected]}
\affiliation{Campus Salinópolis, Universidade Federal do Pará, 68721-000, Salinópolis,
Brazil}
\author{Danilo T. Alves}
\email{[email protected]}
\affiliation{Faculdade de Física, Universidade Federal do Pará, 66075-110, Belém,
Brazil}
\affiliation{Centro de F\'{i}sica, Universidade do Minho, P-4710-057, Braga, Portugal}
\begin{abstract}
In the present paper, we show that a partially reflecting static mirror with time-dependent
properties can produce, via the dynamical Casimir effect in the context of a massless scalar field in $1+1$ dimensions, a larger number of particles than a perfectly reflecting one.
As particular limits, our results recover those found in the literature
for a perfect static mirror imposing a generalized or a usual time-dependent Robin boundary condition.
\end{abstract}
\maketitle
\section{Introduction}
The creation of real particles by excitation of the quantum vacuum by a moving mirror
was predicted by Moore \cite{Moore-1970}, and investigated
in other pioneering works in the 1970s \cite{DeWitt-PhysRep-1975,Fulling-Davies-PRSA-1976,Davies-Fulling-PRSA-1977,Candelas-PRSA-1977}.
Nowadays, this effect is commonly called the dynamical Casimir effect (DCE),
a name adopted by Yablonovitch \cite{Yablonovitch-PRL-1989} and Schwinger \cite{Schwinger-PNAS-1992}, motivated by a certain similarity with another quantum vacuum
effect involving mirrors, the so-called Casimir effect \cite{Casimir-1948}.
There are some excellent reviews on the DCE \cite{Dodonov-JPCS-2009, *Dodonov-PhysScr-2010, *Dalvit-CasimirPhysics-2011,Dodonov-Phys-2020}.
Moreover, a moving mirror is a particular way to excite the quantum vacuum.
An alternative way was proposed by Yablonovitch \cite{Yablonovitch-PRL-1989} and Lozovik \textit{et al.} \cite{Lozovik-PZhETF-1995}, consisting of exciting the vacuum by means of time-varying properties of material media, which can simulate a moving mirror.
Several scientists have focused on the detection of photon creation
from mechanically moving mirrors \cite{Kim-PRL-2006,*Brownell-JPA-2008,*Macri-PRX-2018,*Motazedifard-2018,*Sanz-Quantum-2018,
*DiStefano-PRL-2019,*Qin-PRA-2019,*Butera-PRA-2019}, but the observation remains a challenge \cite{Dodonov-Phys-2020}.
Others have focused on simulating a moving mirror through a motionless
medium whose internal properties vary in time
\cite{Braggio-EPL-2005, *Braggio-JPA-2008, *Braggio-JPCS-2009,
*Dezael-Lambrecht-EPL-2010, Wilson-Nature-2011,Kawakubo-Yamamoto-PRA-2011,
*Faccio-Carusotto-EPL-2011,*Johansson-et-al-PRL-2013,*Lahteenmaki-PNAS-2013, *Motazedifard-2015,*Dodonov-JPA-2015,*Ugalde-PRA-2016,*Braggio-JOpt-2018,*Dodonov-IOP-CS-2019,*Vezzoli-CommPhys-2019,*Schneider-et-al-PRL-2020,Johansson-PRL-2009,*Johansson-et-al-PRA-2010}, with the first observation of photon creation from vacuum reported
by Wilson \textit{et al.}, in Ref. \cite{Wilson-Nature-2011}.
One-dimensional models have played an important role in the investigation
of the DCE.
They were adopted by Moore \cite{Moore-1970}, DeWitt \cite{DeWitt-PhysRep-1975},
Fulling and Davies \cite{Fulling-Davies-PRSA-1976}, and also
in many other works as, for instance, in Refs. \cite{Dodonov-Klimov-PRA-1996,*Alves-Farina-MaiaNeto-JPA-2003,*Alves-Farina-Granhen-PRA-2006,
*Alves-Granhen-PRA-2008, *Alves-Granhen-Lima-PRD-2008, *Alves-Granhen-Silva-Lima-PLA-2010, *Alves-Granhen-Silva-Lima-PRD-2010,*Alves-Granhen-CPC-2014,*Good-PRD-2016,*Good-PRD-2020,*Good-Orlando-ArXiv-2021,
Silva-Farina-PRD-2011,Silva-Braga-Rego-Alves-PRD-2015,
Silva-Braga-Alves-PRD-2016,Silva-Braga-Rego-Alves-PRD-2020}.
In $(1+1)$D, the simulation of a motionless mirror with internal properties varying in time
was proposed by Silva and Farina \cite{Silva-Farina-PRD-2011}, who considered
the quantum vacuum field submitted to a time-dependent Robin boundary condition on a static mirror.
This model is deeply connected with the one underlying
the first experimental observation of photon creation, reported in Ref. \cite{Wilson-Nature-2011}.
The Robin boundary condition interpolates between the well-known Dirichlet and Neumann ones \cite{Mintz-Farina-MaiaNeto-Rodrigues-JPA-2006-I,*Mintz-Farina-MaiaNeto-Rodrigues-JPA-2006-II}.
A generalized Robin boundary condition is one that includes a second-order
time-derivative term of the field; it has also been
considered in the investigation of the DCE
\cite{Fosco-PRD-2013, Rego-Silva-Alves-Farina-PRD-2014}.
One-dimensional models have also had relevance in the investigation of the DCE with semi-transparent mirrors.
Since real mirrors
do not behave as perfectly reflecting ones \cite{Moore-1970},
the DCE with partially reflecting mirrors has been investigated by
several authors (see, for instance, \cite{Jaekel-Reynaud-Quant-Opt-1992,*Barton-Eberlein-AnnPhys-1993,*Lambrecht-Jaekel-Reynaud-PRL-1996,*Lambrecht-Jaekel-Reynaud-EPJD-1998,*Obadia-Parentani-PRD-2001,*Haro-Elizalde-PRL-2006,Barton-Calogeracos-AnnPhys-I-1995,Nicolaevici-CQG-2001,Nicolaevici-PRD-2009,Fosco-Giraldo-Mazzitelli-PRD-2017}).
Dirac $\delta$ potentials for modeling partially reflecting moving mirrors
were considered, for instance,
in Refs. \cite{Barton-Calogeracos-AnnPhys-I-1995,Nicolaevici-CQG-2001,Dalvit-MaiaNeto-PRL-2000,
Nicolaevici-PRD-2009},
and also in the investigation of the static Casimir effect \cite{Castaneda-Guilarte-PRD-2013}.
In the limit of a perfectly reflecting mirror, the $\delta$ model leads to the situation of a Dirichlet boundary condition.
The use of $\delta-\delta^{\prime}$ potentials ($\delta^{\prime}$ is the derivative of the Dirac $\delta$) has also been considered as, for example, in Refs. \cite{Castaneda-Guilarte-PRD-2015,Silva-Braga-Alves-PRD-2016,Braga-Silva-Alves-PRD-2016,Silva-Braga-Rego-Alves-PRD-2020}.
In the limit of a perfectly reflecting mirror, the
$\delta-\delta^{\prime}$ model leads to a situation where
the field obeys the Robin boundary condition on one side,
and the Dirichlet condition on the other side of the mirror \cite{Silva-Braga-Alves-PRD-2016}.
One of the goals of the present paper is to show that transparency can enhance the number of particles created via DCE, when compared to the limiting case of a perfect mirror.
This counterintuitive effect was shown in Ref. \cite{Silva-Braga-Alves-PRD-2016}
(and highlighted by Dodonov in the review in Ref. \cite{Dodonov-Phys-2020}),
in the context of a partially reflecting moving mirror, simulated by a $\delta-\delta^{\prime}$ potential.
In the present paper, we show that even a static partially reflecting mirror, with time-dependent
properties, can produce a larger number of particles than a perfectly reflecting one.
Specifically, we investigate this in the context of a massless scalar field in $(1+1)$D,
with a mirror described by a time-dependent generalized $\delta-\delta^{\prime}$ model.
As particular limits, our results recover those found in the literature for a perfect
static mirror imposing a time-dependent Robin \cite{Silva-Farina-PRD-2011}, or
a generalized time-dependent Robin boundary condition \cite{Rego-Silva-Alves-Farina-PRD-2014}.
This paper is organized as follows.
In Sec. \ref{sec:model}, we present
the Lagrangian density of the model, and obtain the corresponding scattering coefficients.
In Sec. \ref{sec:spectrum}, the spectrum and total rate of created particles are obtained.
In Sec. \ref{sec:application}, we apply our formulas to
a typical oscillatory behavior considered in investigating the DCE.
In Sec. \ref{sec:final-remarks},
we make a brief summary of our results.
\section{\label{sec:model}The Model}
We consider a massless scalar field in $1+1$ dimensions in the presence
of a partially reflecting static mirror with time-dependent material
properties. The mirror is simulated by a $\delta-\delta^{\prime}$
potential at $x=0$ coupled to the field, and the material properties
of the mirror are represented by the coupling parameters. The $\delta$
term is coupled to the field by a time-dependent parameter, $\mu(t)$,
and the $\delta^{\prime}$ one by a time-independent parameter, $\lambda_{0}$.
Moreover, a modification of the kinetic term of the
Lagrangian density is included at $x=0$ (where the mirror is located), namely
\begin{eqnarray}
\mathcal{L} & = & \frac{1}{2}\left[1+2\chi_{0}\delta(x)\right](\partial_{t}\phi)^{2}-\frac{1}{2}(\partial_{x}\phi)^{2}\nonumber \\
& & -[\mu(t)\delta(x)+\lambda_{0}\delta^{\prime}(x)]\phi^{2}(t,x),
\label{eq:model}
\end{eqnarray}
where $\chi_{0}$ is a constant parameter. The modified kinetic term
in Eq. (\ref{eq:model}) gives rise to the second-order time derivative
that appears in the generalized Robin boundary condition (BC) \cite{Fosco-PRD-2013,Rego-Silva-Alves-Farina-PRD-2014}.
The model described by Eq. \eqref{eq:model} generalizes that
of a perfectly reflecting mirror imposing a time-dependent Robin boundary condition on the field, found in Ref. \cite{Silva-Farina-PRD-2011},
and that of a perfectly reflecting mirror imposing a generalized time-dependent Robin BC, found in Ref. \cite{Rego-Silva-Alves-Farina-PRD-2014}.
It also generalizes the semi-transparent time-dependent model
considered in Ref. \cite{Silva-Braga-Rego-Alves-PRD-2020}.
At the end of this section, we clarify our motivations
for choosing $\chi_0$ and $\lambda_0$ constant in time, whereas
$\mu$ is made time-dependent, and also connect this model with some physical situations.
The field equation for this model is given by
\begin{eqnarray}
[1+2\chi_{0}\delta(x)]\partial_{t}^{2}\phi(t,x)-\partial_{x}^{2}\phi(t,x)\nonumber \\
+2[\mu(t)\delta(x)+\lambda_{0}\delta^{\prime}(x)]\phi(t,x) & = & 0,
\label{eq:field-equation}
\end{eqnarray}
which reduces to the massless Klein-Gordon equation
\begin{equation}
\partial_{x}^{2}\phi(t,x)-\partial_{t}^{2}\phi(t,x)=0,\text{ for }x\neq0.
\label{eq:KG-equation}
\end{equation}
A particular case of this model, with $\lambda_{0}=0$, was considered
in Ref. \cite{Fosco-PRD-2013}.
It is convenient to rewrite the field as
\begin{equation}
\phi(t,x)=\Theta(x)\phi_{+}(t,x)+\Theta(-x)\phi_{-}(t,x),
\label{phi-00}
\end{equation}
where $\Theta(x)$ is the Heaviside step function, $\phi_{+}$ and
$\phi_{-}$ are
\begin{equation}
\phi_{+}(t,x)=\varphi_{\text{out}}(t-x)+\psi_{\text{in}}(t+x),
\end{equation}
\begin{equation}
\phi_{-}(t,x)=\varphi_{\text{in}}(t-x)+\psi_{\text{out}}(t+x),
\end{equation}
and the labels ``out'' and ``in'' indicate, respectively, the outgoing and incoming
fields with respect to the mirror. Taking the Fourier
transform, we obtain
\begin{equation}
\phi_{+}(t,x)=\int\frac{\mathrm{d}\omega}{2\pi}\left[\tilde{\varphi}_{\text{out}}(\omega)\text{e}^{i\omega x}+\tilde{\psi}_{\text{in}}(\omega)\text{e}^{-i\omega x}\right]\text{e}^{-i\omega t},
\label{eq:A08}
\end{equation}
\begin{equation}
\phi_{-}(t,x)=\int\frac{\mathrm{d}\omega}{2\pi}\left[\tilde{\varphi}_{\text{in}}(\omega)\text{e}^{i\omega x}+\tilde{\psi}_{\text{out}}(\omega)\text{e}^{-i\omega x}\right]\text{e}^{-i\omega t}.
\label{eq:A09}
\end{equation}
After two successive integrations of Eq. (\ref{eq:field-equation})
across $x=0$, we obtain the following matching conditions
\begin{equation}
\tilde{\varphi}_{\text{out}}(\omega)+\tilde{\psi}_{\text{in}}(\omega)=+\frac{1+\lambda_{0}}{1-\lambda_{0}}[\tilde{\varphi}_{\text{in}}(\omega)+\tilde{\psi}_{\text{out}}(\omega)],
\label{eq:MC1}
\end{equation}
\begin{eqnarray}
\tilde{\varphi}_{\text{out}}(\omega)-\tilde{\psi}_{\text{in}}(\omega) & = & \frac{1-\lambda_{0}}{1+\lambda_{0}}\left[\tilde{\varphi}_{\text{in}}(\omega)-\tilde{\psi}_{\text{out}}(\omega)\right]\nonumber \\
& & +\frac{2i\chi_{0}\omega}{1-\lambda_{0}^{2}}\left[\tilde{\varphi}_{\text{in}}(\omega)+\tilde{\psi}_{\text{out}}(\omega)\right]\nonumber \\
& & -\frac{2i}{\omega(1-\lambda_{0}^{2})}\int\frac{\mathrm{d}\omega^{\prime}}{2\pi}\tilde{\mu}(\omega-\omega^{\prime})\nonumber \\
& & \times\left[\tilde{\varphi}_{\text{in}}(\omega^{\prime})+\tilde{\psi}_{\text{out}}(\omega^{\prime})\right],
\label{eq:MC2}
\end{eqnarray}
where $\tilde{\mu}(\omega)$ is the Fourier transform of $\mu(t)$.
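As a consistency check, not part of the original derivation, one can verify numerically that, in the static case $\mu(t)\rightarrow\mu_{0}$ [where $\tilde{\mu}(\omega-\omega^{\prime})=2\pi\mu_{0}\delta(\omega-\omega^{\prime})$], solving the two matching conditions above for the outgoing amplitudes reproduces the scattering amplitudes $s_{\pm}$ and $r_{\pm}$ given later in the text. A minimal Python sketch, with arbitrary test values for all parameters:

```python
# Static matching conditions: with mu(t) -> mu0, Eqs. (MC1)-(MC2) become two
# linear equations in the outgoing amplitudes (phi_out, psi_out).
def solve_static(phi_in, psi_in, w, mu0, chi0, lam):
    k = (1 + lam) / (1 - lam)
    g = 2j * (chi0 * w**2 - mu0) / (w * (1 - lam**2))
    # MC1: x + psi_in = k*(phi_in + y)       ->  x - k*y = k*phi_in - psi_in
    # MC2: x - psi_in = (1/k)*(phi_in - y)
    #                   + g*(phi_in + y)     ->  x + (1/k - g)*y = (1/k + g)*phi_in + psi_in
    a11, a12, b1 = 1.0, -k, k * phi_in - psi_in
    a21, a22, b2 = 1.0, 1 / k - g, (1 / k + g) * phi_in + psi_in
    det = a11 * a22 - a12 * a21
    x = (b1 * a22 - a12 * b2) / det   # phi_out
    y = (a11 * b2 - b1 * a21) / det   # psi_out
    return x, y

# Arbitrary test parameters (not from the paper).
w, mu0, chi0, lam = 0.7, 1.3, 0.4, 0.3
D = 1j * mu0 - 1j * chi0 * w**2 + w * (1 + lam**2)
s = w * (1 - lam**2) / D
r_plus = -(1j * mu0 - 1j * chi0 * w**2 - 2 * w * lam) / D
r_minus = -(1j * mu0 - 1j * chi0 * w**2 + 2 * w * lam) / D

# Unit incoming amplitudes read off the columns of the scattering matrix.
x1, y1 = solve_static(1, 0, w, mu0, chi0, lam)
x2, y2 = solve_static(0, 1, w, mu0, chi0, lam)
assert abs(x1 - s) < 1e-12 and abs(y1 - r_minus) < 1e-12
assert abs(x2 - r_plus) < 1e-12 and abs(y2 - s) < 1e-12
```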
We shall consider
\begin{equation}
\mu(t)=\mu_{0}[1+\epsilon f(t)],
\label{eq:mu-of-t}
\end{equation}
where $\mu_{0}$ is a constant parameter, $f(t)$ is an arbitrary
bounded function with $|f(t)|\le1$, and $\epsilon\ll1$. Moreover,
the outgoing and incoming fields are grouped in column matrices:
\begin{equation}
\Phi_{\text{out}}(\omega)=\left(\begin{array}{c}
\tilde{\varphi}_{\text{out}}(\omega)\\
\tilde{\psi}_{\text{out}}(\omega)
\end{array}\right),\quad\Phi_{\text{in}}(\omega)=\left(\begin{array}{c}
\tilde{\varphi}_{\text{in}}(\omega)\\
\tilde{\psi}_{\text{in}}(\omega)
\end{array}\right).
\end{equation}
Manipulating Eqs. (\ref{eq:MC1}) and (\ref{eq:MC2}), considering
Eq. (\ref{eq:mu-of-t}) and neglecting the terms $\mathcal{O}(\epsilon^{2})$,
the outgoing fields can be rewritten in terms of the incoming ones,
namely
\begin{equation}
\Phi_{\text{out}}(\omega)=S(\omega)\Phi_{\text{in}}(\omega)+\int\frac{\mathrm{d}\omega^{\prime}}{2\pi}\mathcal{S}(\omega,\omega^{\prime})\Phi_{\text{in}}(\omega^{\prime}),
\label{eq:out-in}
\end{equation}
where $S(\omega)$ is the scattering matrix for the case $\mu(t)\rightarrow\mu_{0}$,
and $\mathcal{S}(\omega,\omega^{\prime})$ is the correction to the
scattering matrix due to the time-dependence of $\mu(t)$. Explicitly,
\begin{equation}
S(\omega)=\left(\begin{array}{cc}
s_{+}(\omega) & r_{+}(\omega)\\
r_{-}(\omega) & s_{-}(\omega)
\end{array}\right),
\label{eq:Scattering-matrix}
\end{equation}
where
\begin{equation}
s_{\pm}(\omega)=\frac{\omega(1-\lambda_{0}^{2})}{i\mu_{0}-i\chi_{0}\omega^{2}+\omega(1+\lambda_{0}^{2})},
\end{equation}
\begin{equation}
r_{\pm}(\omega)=-\frac{i\mu_{0}-i\chi_{0}\omega^{2}\mp2\omega\lambda_{0}}{i\mu_{0}-i\chi_{0}\omega^{2}+\omega(1+\lambda_{0}^{2})},
\label{r-pm}
\end{equation}
are the transmission and reflection coefficients, respectively. The
term $\mathcal{S}(\omega,\omega^{\prime})$ is given by
\begin{equation}
\mathcal{S}(\omega,\omega^{\prime})=-\frac{i\epsilon\mu_{0}\tilde{f}(\omega-\omega^{\prime})\left[J_{2}+S(\omega^{\prime})\right]}{i\mu_{0}-i\chi_{0}\omega^{2}+\omega(1+\lambda_{0}^{2})},
\label{eq:S-full}
\end{equation}
where $\tilde{f}(\omega)$ is the Fourier transform of $f(t)$ and
$J_{2}$ is the $2\times2$ backward identity (exchange) matrix. The scattering
matrix (\ref{eq:Scattering-matrix}) must be real in the temporal
domain, unitary, and analytic for $\mathrm{Im}\,\omega>0$ \cite{Jaekel-Reynaud-Quant-Opt-1992,Lambrecht-Jaekel-Reynaud-PRL-1996},
which is guaranteed if $\mu_{0}$ and $\chi_{0}$ are non-negative.
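Both the unitarity of $S(\omega)$ and its perfect-mirror behavior at large $\mu_{0}$ can be checked numerically. The sketch below is illustrative only and uses arbitrary parameter values:

```python
# Build the static scattering matrix from s_pm and r_pm and check that it is
# unitary, and that mu0 -> infinity gives the perfect-mirror limit s -> 0,
# r_pm -> -1. All parameter values are arbitrary test choices.
def S_matrix(w, mu0, chi0, lam):
    D = 1j * mu0 - 1j * chi0 * w**2 + w * (1 + lam**2)
    s = w * (1 - lam**2) / D
    rp = -(1j * mu0 - 1j * chi0 * w**2 - 2 * w * lam) / D
    rm = -(1j * mu0 - 1j * chi0 * w**2 + 2 * w * lam) / D
    return [[s, rp], [rm, s]]

def is_unitary(M, tol=1e-12):
    # Check (M Mdagger)_{ij} = delta_{ij} for a 2x2 complex matrix.
    for i in range(2):
        for j in range(2):
            elem = sum(M[i][k] * M[j][k].conjugate() for k in range(2))
            target = 1.0 if i == j else 0.0
            if abs(elem - target) > tol:
                return False
    return True

assert is_unitary(S_matrix(0.9, 1.2, 0.5, 0.35))
# Perfect-mirror limit: very large mu0 gives s -> 0 and r_pm -> -1.
M = S_matrix(0.9, 1e9, 0.5, 0.35)
assert abs(M[0][0]) < 1e-6 and abs(M[0][1] + 1) < 1e-6
```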
Particularly, the limits $\mu_{0}\rightarrow\infty$ or $\chi_{0}\rightarrow\infty$
lead to the simpler case of a perfect mirror {[}$s_{\pm}(\omega)\rightarrow0${]}
imposing the Dirichlet BC to the field in both sides of the mirror,
namely
\begin{equation}
\phi(t,0^{+})=\phi(t,0^{-})=0,
\label{eq:Dirichlet-L-R}
\end{equation}
or, equivalently, $r_{\pm}(\omega)\rightarrow-1$, which leads to
$\mathcal{S}(\omega,\omega^{\prime})\rightarrow0$. On the other
hand, the limit $\lambda_{0}\rightarrow-1$ (or $\lambda_{0}\rightarrow1$)
also leads to a perfect mirror, but in this case $\mathcal{S}(\omega,\omega^{\prime})\neq0$,
and it leads to the BCs
\begin{equation}
\phi(t,0^{+})=0,
\label{eq:Dir}
\end{equation}
\begin{equation}
\mu(t)\phi(t,0^{-})+2\partial_{x}\phi(t,0^{-})+\chi_{0}\partial_{t}^{2}\phi(t,0^{-})=0.
\label{eq:time-dependent-generalized-Robin-BC}
\end{equation}
These are identified, respectively, as the Dirichlet BC \eqref{eq:Dir} and the
generalized Robin BC with a time-dependent Robin parameter \eqref{eq:time-dependent-generalized-Robin-BC}.
Therefore, as we shall see, the results for the spectra of created
particles recover, in the appropriate limits, those
found in the literature \cite{Silva-Farina-PRD-2011, Rego-Silva-Alves-Farina-PRD-2014}.
Before continuing, let us make a brief comment about the nomenclature
we are using for the BC given in Eq. \eqref{eq:time-dependent-generalized-Robin-BC}.
In the particular case where $\mu(t)=0$ and $\chi_0=0$, Eq. \eqref{eq:time-dependent-generalized-Robin-BC} reduces to the Neumann BC, $\partial_{x}\phi(t,0^{-})=0$.
For $\mu(t)=\mu_0>0$ and $\chi_0=0$, one has
\begin{equation}
\mu_0\phi(t,0^{-})+2\partial_{x}\phi(t,0^{-})=0,
\label{eq:robin-bc}
\end{equation}
which is a particular case of what is usually called the Robin BC,
although G. Robin seems never to have used this BC (a very interesting
discussion on this subject is found in Ref. \cite{Gustafson-Math-Int-1998}).
For $\mu(t)>0$ and $\chi_0=0$, Eq. \eqref{eq:time-dependent-generalized-Robin-BC} gives
\begin{equation}
\mu(t)\phi(t,0^{-})+2\partial_{x}\phi(t,0^{-})=0,
\label{eq:time-dependent-robin-bc}
\end{equation}
which is a particular case of that called in Ref. \cite{Silva-Farina-PRD-2011} a \textit{time-dependent} Robin BC.
Following the nomenclature adopted in the literature, the full BC given in Eq. \eqref{eq:time-dependent-generalized-Robin-BC} was called in Ref. \cite{Fosco-PRD-2013} a \textit{generalized} Robin BC.
Concluding this section, we discuss our motivations for choosing $\chi_0$ and $\lambda_0$ constant in time, whereas $\mu$ is time dependent, and also
connect the model investigated here with physical situations.
The BC in Eq. \eqref{eq:time-dependent-generalized-Robin-BC} is
related to the first observation of photon creation from vacuum,
which involved a superconducting coplanar waveguide terminated by a
SQUID (superconducting quantum interference device) \cite{Wilson-Nature-2011}.
In this case, the time-dependent parameter $\mu$ is
related to a time-varying effective Josephson energy, whereas
$\chi_0$ is related to the constant capacitance and inductance per unit length
of the superconducting coplanar waveguide \cite{Johansson-PRL-2009, Johansson-et-al-PRA-2010}.
This motivated our choice to investigate a model where $\chi_0$ is a constant,
and $\mu$ is a time-dependent function.
In addition, we chose the parameter $\lambda_0$ as a constant
whose value controls the reflectivity of the object in such a way that
when $\lambda_0=\pm 1$ one recovers a perfectly reflecting object, independently
of the values assumed by $\mu(t)$ and $\chi_0$ [as shown in Eq. \eqref{r-pm}].
Lastly, particular cases of the BC in Eq. \eqref{eq:time-dependent-generalized-Robin-BC}, given in Eqs. \eqref{eq:robin-bc} and \eqref{eq:time-dependent-robin-bc}, are also connected
with another physical situation related to the DCE.
The Robin BC in Eq. \eqref{eq:robin-bc} can be used to describe
a plasma model for a real metal,
with the parameter $\mu_0$ related to the plasma frequency
\cite{Mostepanenko-1985, Silva-Farina-PRD-2011}.
Moreover, when considering a time-dependent parameter $\mu(t)$, as given in Eq.
\eqref{eq:time-dependent-robin-bc}, one has the simulation of a
perfectly reflecting metal with a time-dependent plasma frequency \cite{Silva-Farina-PRD-2011}.
Since Eq. \eqref{eq:time-dependent-robin-bc} is a
particular case of our model given in Eq. \eqref{eq:model},
the Lagrangian proposed here can simulate a partially reflecting metal with a time-dependent plasma frequency.
In the next section, we compute and discuss the spectrum and the total
number of created particles from Eqs. (\ref{eq:out-in}), (\ref{eq:Scattering-matrix})
and (\ref{eq:S-full}).
\section{\label{sec:spectrum}Particle Creation}
Considering vacuum as the initial state of the field, the spectrum
of created particles can be computed by \cite{Lambrecht-Jaekel-Reynaud-PRL-1996}
\begin{equation}
N(\omega)=2\omega\,\mathrm{Tr}\left\langle 0_{\text{in}}\right|\Phi_{\text{out}}(-\omega)\Phi_{\text{in}}^{\mathrm{T}}(\omega)\left|0_{\text{in}}\right\rangle .
\label{eq:N-def}
\end{equation}
Substituting Eq. (\ref{eq:out-in}) into (\ref{eq:N-def}), we obtain
\begin{equation}
N(\omega)=N_{+}(\omega)+N_{-}(\omega),
\label{eq:NmNm}
\end{equation}
where
\begin{equation}
N_{\pm}(\omega)=\frac{\epsilon^{2}\mu_{0}^{2}}{\pi}\int_{0}^{\infty}\frac{\mathrm{d}\omega^{\prime}}{2\pi}\frac{\omega}{\omega^{\prime}}\frac{\mathrm{Re}\left[1+r_{\pm}(-\omega^{\prime})\right]|\tilde{f}(\omega+\omega^{\prime})|^{2}}{\left(\mu_{0}-\chi_{0}\omega^{2}\right)^{2}+\omega^{2}(1+\lambda_{0}^{2})^{2}},
\label{eq:N-pm-0}
\end{equation}
with $N_{+}(\omega)$ and $N_{-}(\omega)$ being the spectra for the
right and left sides of the mirror, respectively.
Manipulating this formula, we obtain
\begin{eqnarray}
N_{\pm}(\omega) & = & \frac{\epsilon^{2}}{\pi}(1\pm\lambda_{0})^{2}(1+\lambda_{0}^{2})\nonumber \\
& & \times\int_{0}^{\infty}\frac{\mathrm{d}\omega^{\prime}}{2\pi}\Upsilon(\omega)\Upsilon(\omega^{\prime})|\tilde{f}(\omega+\omega^{\prime})|^{2},
\label{eq:Npm}
\end{eqnarray}
where
\begin{equation}
\Upsilon(\omega)=\frac{\mu_{0}\omega}{(\mu_{0}-\chi_{0}\omega^{2})^{2}+\omega^{2}(1+\lambda_{0}^{2})^{2}}.
\end{equation}
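The equivalence between the integrand of Eq. (\ref{eq:N-pm-0}), written in terms of $\mathrm{Re}[1+r_{\pm}(-\omega^{\prime})]$, and the factorized form of Eq. (\ref{eq:Npm}) can be verified numerically. The sketch below is not part of the paper's analysis and uses arbitrary parameter values:

```python
# Compare, point by point, the integrand of Eq. (N-pm-0) with the factorized
# integrand of Eq. (Npm) built from Upsilon. sign = +1 (-1) selects r_+ (r_-).
def check(w, wp, mu0, chi0, lam, sign):
    C = 1 + lam**2
    def D2(x):   # squared modulus of the denominator of the amplitudes
        return (mu0 - chi0 * x**2)**2 + x**2 * C**2
    def r(x):    # reflection amplitude r_pm(x)
        num = -(1j * (mu0 - chi0 * x**2) - sign * 2 * x * lam)
        return num / (1j * (mu0 - chi0 * x**2) + x * C)
    def Upsilon(x):
        return mu0 * x / D2(x)
    lhs = mu0**2 * (w / wp) * (1 + r(-wp)).real / D2(w)
    rhs = (1 + sign * lam)**2 * C * Upsilon(w) * Upsilon(wp)
    return abs(lhs - rhs)

# Arbitrary test parameters; the identity holds for any lam != +-1.
assert check(0.6, 1.1, 1.4, 0.3, 0.25, +1) < 1e-12
assert check(0.6, 1.1, 1.4, 0.3, 0.25, -1) < 1e-12
```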
Considering $\mu_{0}\rightarrow\infty$ or $\chi_{0}\rightarrow\infty$,
we have $N_{\pm}(\omega)\rightarrow0$, as expected for a Dirichlet
BC imposed on both sides of the mirror, as mentioned in the previous section
[Eq. (\ref{eq:Dirichlet-L-R})].
We also remark that $N(\omega)$ is symmetric under $\lambda_0\leftrightarrow-\lambda_0$,
\begin{equation}
N(\omega)\big|_{\lambda_0}=N(\omega)\big|_{-\lambda_0}.
\label{eq:N-lambda-0-symmetry}
\end{equation}
From Eq. (\ref{eq:Npm}), we conclude that
\begin{equation}
N_{-}(\omega)=\left(\frac{1-\lambda_{0}}{1+\lambda_{0}}\right)^{2}N_{+}(\omega).
\label{eq:N-N}
\end{equation}
Therefore, the spectra, for each side of the mirror, differ from each
other only by a frequency-independent global factor.
For $\lambda_{0}>0$,
$N_{-}(\omega)$ is smaller than $N_{+}(\omega)$ for all frequencies,
and the opposite occurs for $\lambda_{0}<0$. The spectra are symmetric,
i.e., $N_{-}(\omega)=N_{+}(\omega)$, only if $\lambda_{0}=0$
or in the limit $\lambda_{0}\rightarrow\infty$. Furthermore, from
Eq. (\ref{eq:N-N}), we conclude that there is no particle
creation on one of the sides of the mirror if $\lambda_{0}\rightarrow\pm1$,
specifically $N_{-}(\omega)=0$ if $\lambda_{0}\rightarrow1$,
or $N_{+}(\omega)=0$
if $\lambda_{0}\rightarrow-1$, which is a consequence of the fact
that, in these limits, the field obeys the (time-independent) Dirichlet
BC for one of the sides and the time-dependent generalized Robin BC
on the other side {[}see Eqs. (\ref{eq:Dir}) and (\ref{eq:time-dependent-generalized-Robin-BC}){]}.
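These properties can be illustrated numerically by integrating Eq. (\ref{eq:Npm}) with a sample pulse profile. In the sketch below, the Gaussian form for $|\tilde{f}|^{2}$ and all parameter values are arbitrary choices (with $\epsilon^{2}$ set to 1); it confirms both the ratio of Eq. (\ref{eq:N-N}) and the symmetry of Eq. (\ref{eq:N-lambda-0-symmetry}):

```python
import math

# Riemann-sum evaluation of Eq. (Npm); sign = +1 (-1) gives N_+ (N_-).
# The Gaussian |f~(w + w')|^2 below is an arbitrary test profile.
def N_pm(w, lam, mu0, chi0, sign, w0=2.0, npts=4000, wmax=20.0):
    C = 1 + lam**2
    def Upsilon(x):
        return mu0 * x / ((mu0 - chi0 * x**2)**2 + x**2 * C**2)
    pref = (1 + sign * lam)**2 * C / math.pi   # epsilon^2 set to 1
    dw = wmax / npts
    total = 0.0
    for i in range(npts):
        wp = (i + 0.5) * dw
        f2 = math.exp(-(w + wp - w0)**2)       # arbitrary |f~(w + w')|^2
        total += Upsilon(w) * Upsilon(wp) * f2 * dw / (2 * math.pi)
    return pref * total

w, lam, mu0, chi0 = 0.8, 0.4, 1.5, 0.6
# Eq. (N-N): the sides differ only by a frequency-independent factor.
ratio = N_pm(w, lam, mu0, chi0, -1) / N_pm(w, lam, mu0, chi0, +1)
assert abs(ratio - ((1 - lam) / (1 + lam))**2) < 1e-9
# Eq. (N-lambda-0-symmetry): the total spectrum is even in lambda_0.
total_p = N_pm(w, lam, mu0, chi0, +1) + N_pm(w, lam, mu0, chi0, -1)
total_m = N_pm(w, -lam, mu0, chi0, +1) + N_pm(w, -lam, mu0, chi0, -1)
assert abs(total_p - total_m) < 1e-12
```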
The total number of created particles is obtained by integrating
the spectrum for all frequencies,
\begin{equation}
\mathcal{N}=\int_{0}^{\infty}\mathrm{d}\omega N(\omega).
\label{eq:Total-N}
\end{equation}
From Eq. (\ref{eq:NmNm}), the last equation can be written as $\mathcal{N}=\mathcal{N}_{+}+\mathcal{N}_{-}$,
where $\mathcal{N}_{+}$ and $\mathcal{N}_{-}$ are the total number
of particles for the right and left sides of the mirror, respectively.
From Eq. (\ref{eq:N-N}), it follows that
\begin{equation}
\frac{\mathcal{N}_{-}}{\mathcal{N}_{+}}=\left(\frac{1-\lambda_{0}}{1+\lambda_{0}}\right)^{2}.
\label{eq:N-N-Rate}
\end{equation}
From Eq. \eqref{eq:N-lambda-0-symmetry}, we have
\begin{equation}
\mathcal{N}\big|_{\lambda_0}=\mathcal{N}\big|_{-\lambda_0}.
\label{eq:N-total-lambda-0-symmetry}
\end{equation}
The total energy $\mathcal{E}$, dissipated from the mirror and converted in real particles, is given by
\begin{equation}
\mathcal{E}=\int_{0}^{\infty}\mathrm{d}\omega N(\omega)\omega = \mathcal{E}_{+} + \mathcal{E}_{-},
\label{eq:Total-E}
\end{equation}
where
\begin{equation}
\mathcal{E}_{\pm}=\int_{0}^{\infty}\mathrm{d}\omega N_{\pm}(\omega)\omega.
\label{eq:Total-E-pm}
\end{equation}
The integral in Eq. (\ref{eq:Total-N}) is suitable for numerical
evaluation, and we shall discuss the corresponding results in the following.
\section{\label{sec:application} Application}
From now on, we consider, in Eq. \eqref{eq:mu-of-t},
the time-varying behavior given by
\begin{equation}
f(t)=\cos(\omega_{0}t)\exp(-|t|/\tau),
\label{eq:f}
\end{equation}
where $\tau$ is the time interval during which the oscillations effectively
occur and $\omega_{0}$ is the oscillation frequency.
We also
consider $\omega_{0}\tau\gg1$, the so-called monochromatic
limit \cite{Silva-Braga-Rego-Alves-PRD-2015}.
This is a typical oscillatory behavior considered in investigating
the DCE \cite{Johansson-PRL-2009,Johansson-et-al-PRA-2010, Silva-Farina-PRD-2011, Silva-Braga-Rego-Alves-PRD-2015}.
The Fourier transform of $f(t)$, considering the monochromatic limit,
is given by \cite{Silva-Braga-Rego-Alves-PRD-2015}
\begin{equation}
|\tilde{f}(\omega)|^{2}/\tau=(\pi/2)[\delta(\omega+\omega_{0})+\delta(\omega-\omega_{0})].
\label{eq:monochromatic}
\end{equation}
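This limit can be checked against the exact Fourier transform of Eq. (\ref{eq:f}), a pair of Lorentzians, $\tilde{f}(\omega)=\tau\{[1+(\omega+\omega_{0})^{2}\tau^{2}]^{-1}+[1+(\omega-\omega_{0})^{2}\tau^{2}]^{-1}\}$: for $\omega_{0}\tau\gg1$, the weight $|\tilde{f}(\omega)|^{2}/\tau$ concentrates at $\omega=\pm\omega_{0}$ with total weight $\pi$, as the delta-function form implies. A minimal numerical sketch (illustrative; parameter values arbitrary):

```python
import math

# Exact Fourier transform of f(t) = cos(w0 t) exp(-|t|/tau), using the
# convention f~(w) = int dt f(t) exp(i w t).
def f_tilde(w, w0, tau):
    return tau * (1 / (1 + (w + w0)**2 * tau**2)
                  + 1 / (1 + (w - w0)**2 * tau**2))

w0, tau = 1.0, 200.0            # deep monochromatic regime, w0*tau = 200
lo, hi, n = -3.0, 3.0, 24000    # fine midpoint grid resolving width ~ 1/tau
step = (hi - lo) / n
total = 0.0
for i in range(n):
    w = lo + (i + 0.5) * step
    total += f_tilde(w, w0, tau)**2 / tau * step
# Integrated weight of |f~|^2/tau -> pi, matching (pi/2)[delta + delta].
assert abs(total - math.pi) < 1e-2
```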
Therefore, substituting Eq. (\ref{eq:monochromatic}) into (\ref{eq:Npm})
we obtain the following expression for the spectra
\begin{equation}
\frac{N_{\pm}(\omega)}{\tau}=
\frac{\epsilon^{2}(1\pm\lambda_{0})^{2}(1+\lambda_{0}^{2})}{4\pi}
\Upsilon(\omega)\Upsilon(\omega_{0}-\omega)\Theta(\omega_{0}-\omega).
\label{eq:Spectra}
\end{equation}
The spectrum in Eq. \eqref{eq:Spectra} presents the symmetry
\begin{equation}
N_{\pm}(\omega_{0}/2+\zeta)=N_{\pm}(\omega_{0}/2-\zeta),
\label{eq:Sym-Spec}
\end{equation}
where $|\zeta|<\omega_{0}/2$.
Taking this symmetry into account in \eqref{eq:Total-E-pm}, we have
a proportionality between energy and number of particles, given by
\begin{equation}
\mathcal{E}_{\pm}=\frac{\omega_{0}}{2}\mathcal{N}_{\pm},
\label{eq:Total-E-pm-as-factor-of-N}
\end{equation}
which leads to
\begin{equation}
\mathcal{E}=\frac{\omega_{0}}{2}\mathcal{N}.
\label{eq:Total-E-as-factor-of-N}
\end{equation}
In Fig. \ref{Fig.1}, we show the normalized total rate of the number of created particles, $(2\pi /\epsilon^2\tau)\mathcal{N}$, obtained from
Eqs. \eqref{eq:NmNm}, \eqref{eq:Total-N}, and \eqref{eq:Spectra},
as a function of $\lambda_{0}$ and $\chi_{0}$.
Before we begin our analysis of the general aspects of
Fig. \ref{Fig.1}, let us highlight that
it contains, as particular cases, some results found in the literature \cite{Silva-Farina-PRD-2011,Silva-Braga-Alves-PRD-2016}.
Specifically, the point $(\chi_{0}=0,\lambda_{0}=1)$ corresponds to the result for the normalized particle creation rate
for a perfectly reflecting Robin BC, found in Ref. \cite{Silva-Farina-PRD-2011}.
The dashed line in Fig. \ref{Fig.1} corresponds to the case $\lambda_{0}=1$,
and indicates the result for the normalized particle creation rate for a perfectly reflecting mirror that imposes a generalized time-dependent Robin BC to the field, found in Ref. \cite{Rego-Silva-Alves-Farina-PRD-2014}.
Moreover, the vertical line given by $\chi_{0}=0$
corresponds to the normalized particle creation rate for
the model used in Ref. \cite{Silva-Braga-Rego-Alves-PRD-2020},
when the oscillatory behavior given in Eq. \eqref{eq:f} is considered.
In Fig. \ref{Fig.1}, a point in a lighter region corresponds to a greater
normalized particle creation rate than a point in a darker one.
The dashed line $(\lambda_{0}=1)$ indicates the
particle creation for perfectly reflecting mirrors,
whereas any point not belonging to this line represents the particle creation for
partially reflecting ones.
Outside the dashed line, one can find points lighter than some belonging to it.
This means that partially reflecting static mirrors, with time-varying properties, can produce a larger number of particles than perfectly reflecting ones.
In fact, one can observe a peak of particle creation around $(\chi_{0}\approx 3.5,\lambda_0=0)$.
Moreover, using Eq. \eqref{eq:Total-E-as-factor-of-N},
from Fig. \ref{Fig.1} one can have a direct visualization
of the behavior of $(2\pi /\epsilon^2\tau)\mathcal{E}$,
the normalized rate of total energy.
For a better visualization of where, in the configuration space $\chi_0\times\lambda_0$,
a partially reflecting static mirror produces a larger number of particles than the
corresponding perfectly reflecting one,
in Fig. \ref{Fig.2} we show the ratio $\mathcal{N}/\mathcal{N}|_{\lambda_{0}=1}$.
Moreover, using Eq. \eqref{eq:Total-E-as-factor-of-N},
we can write $\mathcal{N}/\mathcal{N}|_{\lambda_{0}=1}=\mathcal{E}/\mathcal{E}|_{\lambda_{0}=1}$,
so that Fig. \ref{Fig.2} also shows the ratio for the total energy.
Level curves with values $\mathcal{N}/\mathcal{N}|_{\lambda_{0}=1}>1$ indicate
that partially reflecting static mirrors, with a time-varying parameter $\mu(t)$, can produce a larger number of particles than the corresponding perfectly reflecting one.
One can observe greater values of $\mathcal{N}/\mathcal{N}|_{\lambda_{0}=1}$, for instance, around $(\chi_{0}\approx 4,\lambda_0=0)$.
The dashed line (level curve $=1$) indicates perfectly reflecting mirrors. We highlight that
the dotted lines show a family of semi-transparent mirrors generating the same number of particles
as perfect mirrors.
\begin{figure}
\caption{The normalized total number of created particles, $(2\pi /\epsilon^2\tau)\mathcal{N}$, as a function of $\lambda_{0}$ and $\chi_{0}$.}
\label{Fig.1}
\end{figure}
\begin{figure}
\caption{The ratio $\mathcal{N}/\mathcal{N}|_{\lambda_{0}=1}$ as a function of $\lambda_{0}$ and $\chi_{0}$.}
\label{Fig.2}
\end{figure}
In order to interpret these results, we remark that $N_{\pm}(\omega)$, in Eq. \eqref{eq:N-pm-0},
depends on the reflectivity $|r_{\pm}(\omega^{\prime})|$ and also on
the phase $\text{arg}[r_{\pm}(\omega^{\prime})]$ [see Eq. \eqref{r-pm}].
Mirrors with $|r_{\pm}(\omega^{\prime})|=1$
(ideal mirrors, corresponding to the dashed lines in Figs. \ref{Fig.1} and \ref{Fig.2}) have
maximum reflectivity, but not necessarily the
combination of $|r_{\pm}(\omega^{\prime})|=1$ and $\text{arg}[r_{\pm}(\omega^{\prime})]$
that maximizes particle creation.
On the other hand, we show that there are points not belonging to
the dashed lines for which the combination of
$|r_{\pm}(\omega^{\prime})|\neq 1$ and $\text{arg}[r_{\pm}(\omega^{\prime})]$
can produce the same or even a greater number of particles than perfectly reflecting mirrors.
\section{\label{sec:final-remarks} Final Remarks}
The creation of real particles by excitation of the quantum vacuum can be caused,
for instance, by the movement of a mirror \cite{Moore-1970}, or by the time-dependent properties of a static material medium \cite{Yablonovitch-PRL-1989, Lozovik-PZhETF-1995}.
In the context of moving mirrors,
it had already been shown that transparency can enhance the number of created particles from vacuum, when compared to a perfect mirror \cite{Silva-Braga-Alves-PRD-2016}.
In the present paper, we showed that, even in the context of static mirrors
with time-dependent properties, a partially reflecting mirror can produce
a greater number of particles than a perfectly reflecting one.
\begin{acknowledgments}
The authors thank Danilo Pedrelli for valuable discussions,
and also Lucas Queiroz for
the support in improving the figures presented here.
\end{acknowledgments}
\begin{thebibliography}{74}
\makeatletter
\providecommand \@ifxundefined [1]{
\@ifx{#1\undefined}
}
\providecommand \@ifnum [1]{
\ifnum #1\expandafter \@firstoftwo
\else \expandafter \@secondoftwo
\fi
}
\providecommand \@ifx [1]{
\ifx #1\expandafter \@firstoftwo
\else \expandafter \@secondoftwo
\fi
}
\providecommand \natexlab [1]{#1}
\providecommand \enquote [1]{``#1''}
\providecommand \bibnamefont [1]{#1}
\providecommand \bibfnamefont [1]{#1}
\providecommand \citenamefont [1]{#1}
\providecommand \href@noop [0]{\@secondoftwo}
\providecommand \href [0]{\begingroup \@sanitize@url \@href}
\providecommand \@href[1]{\@@startlink{#1}\@@href}
\providecommand \@@href[1]{\endgroup#1\@@endlink}
\providecommand \@sanitize@url [0]{\catcode `\\12\catcode `\$12\catcode
`\&12\catcode `\#12\catcode `\^12\catcode `\_12\catcode `\%12\relax}
\providecommand \@@startlink[1]{}
\providecommand \@@endlink[0]{}
\providecommand \url [0]{\begingroup\@sanitize@url \@url }
\providecommand \@url [1]{\endgroup\@href {#1}{\urlprefix }}
\providecommand \urlprefix [0]{URL }
\providecommand \Eprint [0]{\href }
\providecommand \doibase [0]{https://doi.org/}
\providecommand \selectlanguage [0]{\@gobble}
\providecommand \bibinfo [0]{\@secondoftwo}
\providecommand \bibfield [0]{\@secondoftwo}
\providecommand \translation [1]{[#1]}
\providecommand \BibitemOpen [0]{}
\providecommand \bibitemStop [0]{}
\providecommand \bibitemNoStop [0]{.\EOS\space}
\providecommand \EOS [0]{\spacefactor3000\relax}
\providecommand \BibitemShut [1]{\csname bibitem#1\endcsname}
\let\auto@bib@innerbib\@empty
\bibitem [{\citenamefont {Moore}(1970)}]{Moore-1970}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {G.~T.}\ \bibnamefont
{Moore}},\ }\bibfield {title} {\bibinfo {title} {Quantum theory of the
electromagnetic field in a variable-length one-dimensional cavity},\ }\href
{https://doi.org/10.1063/1.1665432} {\bibfield {journal} {\bibinfo
{journal} {J. Math. Phys. (N.Y.)}\ }\textbf {\bibinfo {volume} {11}},\
\bibinfo {pages} {2679} (\bibinfo {year} {1970})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {DeWitt}(1975)}]{DeWitt-PhysRep-1975}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {B.~S.}\ \bibnamefont
{DeWitt}},\ }\bibfield {title} {\bibinfo {title} {Quantum field theory in
curved spacetime},\ }\href {https://doi.org/10.1016/0370-1573(75)90051-4}
{\bibfield {journal} {\bibinfo {journal} {Phys. Rep.}\ }\textbf {\bibinfo
{volume} {19}},\ \bibinfo {pages} {295} (\bibinfo {year} {1975})}\BibitemShut
{NoStop}
\bibitem [{\citenamefont {Fulling}\ and\ \citenamefont
{Davies}(1976)}]{Fulling-Davies-PRSA-1976}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {S.~A.}\ \bibnamefont
{Fulling}}\ and\ \bibinfo {author} {\bibfnamefont {P.~C.~W.}\ \bibnamefont
{Davies}},\ }\bibfield {title} {\bibinfo {title} {Radiation from a moving
mirror in two dimensional space-time: conformal anomaly},\ }\href
{https://doi.org/10.1098/rspa.1976.0045} {\bibfield {journal} {\bibinfo
{journal} {Proc. R. Soc. A.}\ }\textbf {\bibinfo {volume} {348}},\ \bibinfo
{pages} {393} (\bibinfo {year} {1976})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Davies}\ and\ \citenamefont
{Fulling}(1977)}]{Davies-Fulling-PRSA-1977}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {P.~C.~W.}\
\bibnamefont {Davies}}\ and\ \bibinfo {author} {\bibfnamefont {S.~A.}\
\bibnamefont {Fulling}},\ }\bibfield {title} {\bibinfo {title} {Radiation
from moving mirrors and from black holes},\ }\href
{https://doi.org/10.1098/rspa.1977.0130} {\bibfield {journal} {\bibinfo
{journal} {Proc. R. Soc. A}\ }\textbf {\bibinfo {volume} {356}},\ \bibinfo
{pages} {237} (\bibinfo {year} {1977})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Candelas}\ and\ \citenamefont
{Deutsch}(1977)}]{Candelas-PRSA-1977}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {P.}~\bibnamefont
{Candelas}}\ and\ \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont
{Deutsch}},\ }\bibfield {title} {\bibinfo {title} {On the vacuum stress
induced by uniform acceleration or supporting the ether},\ }\href
{https://doi.org/10.1098/rspa.1977.0057} {\bibfield {journal} {\bibinfo
{journal} {Proc. R. Soc. A}\ }\textbf {\bibinfo {volume} {354}},\ \bibinfo
{pages} {79} (\bibinfo {year} {1977})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Yablonovitch}(1989)}]{Yablonovitch-PRL-1989}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {E.}~\bibnamefont
{Yablonovitch}},\ }\bibfield {title} {\bibinfo {title} {Accelerating
reference frame for electromagnetic waves in a rapidly growing plasma:
{Unruh-Davies-Fulling-DeWitt} radiation and the nonadiabatic {Casimir}
effect},\ }\href {https://doi.org/10.1103/PhysRevLett.62.1742} {\bibfield
{journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo
{volume} {62}},\ \bibinfo {pages} {1742} (\bibinfo {year}
{1989})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Schwinger}(1992)}]{Schwinger-PNAS-1992}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{Schwinger}},\ }\bibfield {title} {\bibinfo {title} {Casimir energy for
dielectrics.},\ }\href {https://doi.org/10.1073/pnas.89.9.4091} {\bibfield
{journal} {\bibinfo {journal} {Proc. Natl. Acad. Sci. U.S.A.}\ }\textbf {\bibinfo {volume}
{89}},\ \bibinfo {pages} {4091} (\bibinfo {year} {1992})}\BibitemShut
{NoStop}
\bibitem [{\citenamefont {Casimir}(1948)}]{Casimir-1948}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {H.~B.~G.}\
\bibnamefont {Casimir}},\ }\bibfield {title} {\bibinfo {title} {On the
attraction between two perfectly conducting plates},\ }\href
{http://www.dwc.knaw.nl/DL/publications/PU00018547.pdf} {\bibfield {journal}
{\bibinfo {journal} {Proc. K. Ned. Akad. Wet.}\ }\textbf {\bibinfo {volume}
{51}},\ \bibinfo {pages} {793} (\bibinfo {year} {1948})}\BibitemShut
{NoStop}
\bibitem [{\citenamefont {Dodonov}(2009)}]{Dodonov-JPCS-2009}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {V.~V.}\ \bibnamefont
{Dodonov}},\ }\bibfield {title} {\bibinfo {title} {Dynamical {Casimir}
effect: {Some} theoretical aspects},\ }\href
{https://doi.org/10.1088/1742-6596/161/1/012027} {\bibfield {journal}
{\bibinfo {journal} {J. Phys. Conf. Ser.}\ }\textbf {\bibinfo {volume}
{161}},\ \bibinfo {pages} {012027} (\bibinfo {year} {2009})}\BibitemShut
{NoStop}
\bibitem [{\citenamefont {Dodonov}(2010)}]{Dodonov-PhysScr-2010}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {V.~V.}\ \bibnamefont
{Dodonov}},\ }\bibfield {title} {\bibinfo {title} {Current status of the
dynamical {Casimir} effect},\ }\href
{https://doi.org/10.1088/0031-8949/82/03/038105} {\bibfield {journal}
{\bibinfo {journal} {Phys. Scr.}\ }\textbf {\bibinfo {volume} {82}},\
\bibinfo {pages} {038105} (\bibinfo {year} {2010})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Dalvit}\ \emph {et~al.}(2011)\citenamefont {Dalvit},
\citenamefont {Neto},\ and\ \citenamefont
{Mazzitelli}}]{Dalvit-CasimirPhysics-2011}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {D.~A.~R.}\
\bibnamefont {Dalvit}}, \bibinfo {author} {\bibfnamefont {P.~A.~M.}\
\bibnamefont {Neto}},\ and\ \bibinfo {author} {\bibfnamefont {F.~D.}\
\bibnamefont {Mazzitelli}},\ }\bibinfo {title} {{Fluctuations, Dissipation
and the Dynamical Casimir Effect}}\ (\bibinfo {publisher} {Springer-Verlag
Berlin Heidelberg},\ \bibinfo {year} {2011})\ Chap.~\bibinfo {chapter} {13},
pp.\ \bibinfo {pages} {419--457},\ \bibinfo {note} {edited by D. A. R.
Dalvit, P. Milonni, D. Roberts, and F. da Rosa}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Dodonov}(2020)}]{Dodonov-Phys-2020}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {V.}~\bibnamefont
{Dodonov}},\ }\bibfield {title} {\bibinfo {title} {Fifty years of the
dynamical {Casimir} effect},\ }\href {https://doi.org/10.3390/physics2010007}
{\bibfield {journal} {\bibinfo {journal} {Physics}\ }\textbf {\bibinfo
{volume} {2}},\ \bibinfo {pages} {67} (\bibinfo {year} {2020})}\BibitemShut
{NoStop}
\bibitem [{\citenamefont {Lozovik}\ \emph {et~al.}(1995)\citenamefont
{Lozovik}, \citenamefont {G.},\ and\ \citenamefont
{Vinogradov}}]{Lozovik-PZhETF-1995}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {Y.~E.}\ \bibnamefont
{Lozovik}}, \bibinfo {author} {\bibfnamefont {T.~V.}\ \bibnamefont {G.}},\
and\ \bibinfo {author} {\bibfnamefont {E.~A.}\ \bibnamefont {Vinogradov}},\
}\bibfield {title} {\bibinfo {title} {{Femtosecond parametric excitation of
electromagnetic field in a cavity}},\ }\href@noop {} {\bibfield {journal}
{\bibinfo {journal} {Pis'ma Zh. \'{E}ksp. Teor. Fiz.}\ }\textbf {\bibinfo
{volume} {61}},\ \bibinfo {pages} {711} (\bibinfo {year} {1995})}\BibitemShut
{NoStop}
\bibitem [{\citenamefont {Kim}\ \emph {et~al.}(2006)\citenamefont {Kim},
\citenamefont {Brownell},\ and\ \citenamefont {Onofrio}}]{Kim-PRL-2006}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {W.-J.}\ \bibnamefont
{Kim}}, \bibinfo {author} {\bibfnamefont {J.~H.}\ \bibnamefont {Brownell}},\
and\ \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Onofrio}},\
}\bibfield {title} {\bibinfo {title} {Detectability of dissipative motion in
quantum vacuum via superradiance},\ }\href
{https://doi.org/10.1103/PhysRevLett.96.200402} {\bibfield {journal}
{\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {96}},\
\bibinfo {pages} {200402} (\bibinfo {year} {2006})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Brownell}\ \emph {et~al.}(2008)\citenamefont
{Brownell}, \citenamefont {Kim},\ and\ \citenamefont
{Onofrio}}]{Brownell-JPA-2008}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.~H.}\ \bibnamefont
{Brownell}}, \bibinfo {author} {\bibfnamefont {W.~J.}\ \bibnamefont {Kim}},\
and\ \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Onofrio}},\
}\bibfield {title} {\bibinfo {title} {{Modelling superradiant amplification
of {Casimir} photons in very low dissipation cavities}},\ }\href
{https://doi.org/10.1088/1751-8113/41/16/164026} {\bibfield {journal}
{\bibinfo {journal} {J. Phys. A Math. Theor.}\ }\textbf {\bibinfo {volume}
{41}},\ \bibinfo {pages} {164026} (\bibinfo {year} {2008})}\BibitemShut
{NoStop}
\bibitem [{\citenamefont {Macr\`{\i}}\ \emph {et~al.}(2018)\citenamefont
{Macr\`{\i}}, \citenamefont {Ridolfo}, \citenamefont {Di~Stefano},
\citenamefont {Kockum}, \citenamefont {Nori},\ and\ \citenamefont
{Savasta}}]{Macri-PRX-2018}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {V.}~\bibnamefont
{Macr\`{\i}}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Ridolfo}},
\bibinfo {author} {\bibfnamefont {O.}~\bibnamefont {Di~Stefano}}, \bibinfo
{author} {\bibfnamefont {A.~F.}\ \bibnamefont {Kockum}}, \bibinfo {author}
{\bibfnamefont {F.}~\bibnamefont {Nori}},\ and\ \bibinfo {author}
{\bibfnamefont {S.}~\bibnamefont {Savasta}},\ }\bibfield {title} {\bibinfo
{title} {Nonperturbative dynamical {Casimir} effect in optomechanical
systems: Vacuum {Casimir-Rabi} splittings},\ }\href
{https://doi.org/10.1103/PhysRevX.8.011031} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. X}\ }\textbf {\bibinfo {volume} {8}},\ \bibinfo {pages}
{011031} (\bibinfo {year} {2018})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Motazedifard}\ \emph {et~al.}(2018)\citenamefont
{Motazedifard}, \citenamefont {Dalafi}, \citenamefont {Naderi},\ and\
\citenamefont {Roknizadeh}}]{Motazedifard-2018}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Motazedifard}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Dalafi}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Naderi}},\
and\ \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Roknizadeh}},\
}\bibfield {title} {\bibinfo {title} {{Controllable generation of photons
and phonons in a coupled {Bose}-{Einstein} condensate-optomechanical cavity
via the parametric dynamical Casimir effect}},\ }\href
{https://doi.org/10.1016/j.aop.2018.07.013} {\bibfield {journal} {\bibinfo
{journal} {Ann. Phys. (N. Y).}\ }\textbf {\bibinfo {volume} {396}},\ \bibinfo
{pages} {202} (\bibinfo {year} {2018})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Sanz}\ \emph {et~al.}(2018)\citenamefont {Sanz},
\citenamefont {Wieczorek}, \citenamefont {Gr{\"{o}}blacher},\ and\
\citenamefont {Solano}}]{Sanz-Quantum-2018}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Sanz}}, \bibinfo {author} {\bibfnamefont {W.}~\bibnamefont {Wieczorek}},
\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Gr{\"{o}}blacher}},\ and\
\bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Solano}},\ }\bibfield
{title} {\bibinfo {title} {{Electro-mechanical Casimir effect}},\ }\href
{https://doi.org/10.22331/q-2018-09-03-91} {\bibfield {journal} {\bibinfo
{journal} {Quantum}\ }\textbf {\bibinfo {volume} {2}},\ \bibinfo {pages} {91}
(\bibinfo {year} {2018})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Di~Stefano}\ \emph {et~al.}(2019)\citenamefont
{Di~Stefano}, \citenamefont {Settineri}, \citenamefont {Macr\`{\i}},
\citenamefont {Ridolfo}, \citenamefont {Stassi}, \citenamefont {Kockum},
\citenamefont {Savasta},\ and\ \citenamefont {Nori}}]{DiStefano-PRL-2019}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {O.}~\bibnamefont
{Di~Stefano}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Settineri}}, \bibinfo {author} {\bibfnamefont {V.}~\bibnamefont
{Macr\`{\i}}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Ridolfo}},
\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Stassi}}, \bibinfo
{author} {\bibfnamefont {A.~F.}\ \bibnamefont {Kockum}}, \bibinfo {author}
{\bibfnamefont {S.}~\bibnamefont {Savasta}},\ and\ \bibinfo {author}
{\bibfnamefont {F.}~\bibnamefont {Nori}},\ }\bibfield {title} {\bibinfo
{title} {Interaction of mechanical oscillators mediated by the exchange of
virtual photon pairs},\ }\href
{https://doi.org/10.1103/PhysRevLett.122.030402} {\bibfield {journal}
{\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {122}},\
\bibinfo {pages} {030402} (\bibinfo {year} {2019})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Qin}\ \emph {et~al.}(2019)\citenamefont {Qin},
\citenamefont {Macr{\`{i}}}, \citenamefont {Miranowicz}, \citenamefont
{Savasta},\ and\ \citenamefont {Nori}}]{Qin-PRA-2019}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {W.}~\bibnamefont
{Qin}}, \bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Macr{\`{i}}}},
\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Miranowicz}}, \bibinfo
{author} {\bibfnamefont {S.}~\bibnamefont {Savasta}},\ and\ \bibinfo {author}
{\bibfnamefont {F.}~\bibnamefont {Nori}},\ }\bibfield {title} {\bibinfo
{title} {{Emission of photon pairs by mechanical stimulation of the squeezed
vacuum}},\ }\href {https://doi.org/10.1103/PhysRevA.100.062501} {\bibfield
{journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume}
{100}},\ \bibinfo {pages} {062501} (\bibinfo {year} {2019})}\BibitemShut
{NoStop}
\bibitem [{\citenamefont {Butera}\ and\ \citenamefont
{Carusotto}(2019)}]{Butera-PRA-2019}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont
{Butera}}\ and\ \bibinfo {author} {\bibfnamefont {I.}~\bibnamefont
{Carusotto}},\ }\bibfield {title} {\bibinfo {title} {{Mechanical
backreaction effect of the dynamical Casimir emission}},\ }\href
{https://doi.org/10.1103/PhysRevA.99.053815} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {99}},\ \bibinfo
{pages} {053815} (\bibinfo {year} {2019})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Braggio}\ \emph {et~al.}(2005)\citenamefont
{Braggio}, \citenamefont {Bressi}, \citenamefont {Carugno}, \citenamefont
{{Del Noce}}, \citenamefont {Galeazzi}, \citenamefont {Lombardi},
\citenamefont {Palmieri}, \citenamefont {Ruoso},\ and\ \citenamefont
{Zanello}}]{Braggio-EPL-2005}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {C.}~\bibnamefont
{Braggio}}, \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Bressi}},
\bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Carugno}}, \bibinfo
{author} {\bibfnamefont {C.}~\bibnamefont {{Del Noce}}}, \bibinfo {author}
{\bibfnamefont {G.}~\bibnamefont {Galeazzi}}, \bibinfo {author}
{\bibfnamefont {A.}~\bibnamefont {Lombardi}}, \bibinfo {author}
{\bibfnamefont {A.}~\bibnamefont {Palmieri}}, \bibinfo {author}
{\bibfnamefont {G.}~\bibnamefont {Ruoso}},\ and\ \bibinfo {author}
{\bibfnamefont {D.}~\bibnamefont {Zanello}},\ }\bibfield {title} {\bibinfo
{title} {A novel experimental approach for the detection of the dynamical
{Casimir} effect},\ }\href {https://doi.org/10.1209/epl/i2005-10048-8}
{\bibfield {journal} {\bibinfo {journal} {Europhys. Lett.}\ }\textbf
{\bibinfo {volume} {70}},\ \bibinfo {pages} {754} (\bibinfo {year}
{2005})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Agnesi}\ \emph {et~al.}(2008)\citenamefont {Agnesi},
\citenamefont {Braggio}, \citenamefont {Bressi}, \citenamefont {Carugno},
\citenamefont {Galeazzi}, \citenamefont {Pirzio}, \citenamefont {Reali},
\citenamefont {Ruoso},\ and\ \citenamefont {Zanello}}]{Braggio-JPA-2008}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Agnesi}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Braggio}},
\bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Bressi}}, \bibinfo
{author} {\bibfnamefont {G.}~\bibnamefont {Carugno}}, \bibinfo {author}
{\bibfnamefont {G.}~\bibnamefont {Galeazzi}}, \bibinfo {author}
{\bibfnamefont {F.}~\bibnamefont {Pirzio}}, \bibinfo {author} {\bibfnamefont
{G.}~\bibnamefont {Reali}}, \bibinfo {author} {\bibfnamefont
{G.}~\bibnamefont {Ruoso}},\ and\ \bibinfo {author} {\bibfnamefont
{D.}~\bibnamefont {Zanello}},\ }\bibfield {title} {\bibinfo {title} {{MIR}
status report: an experiment for the measurement of the dynamical {Casimir}
effect},\ }\href {https://doi.org/10.1088/1751-8113/41/16/164024} {\bibfield
{journal} {\bibinfo {journal} {J. Phys. A}\ }\textbf {\bibinfo {volume}
{41}},\ \bibinfo {pages} {164024} (\bibinfo {year} {2008})}\BibitemShut
{NoStop}
\bibitem [{\citenamefont {Agnesi}\ \emph {et~al.}(2009)\citenamefont {Agnesi},
\citenamefont {Braggio}, \citenamefont {Bressi}, \citenamefont {Carugno},
\citenamefont {Valle}, \citenamefont {Galeazzi}, \citenamefont {Messineo},
\citenamefont {Pirzio}, \citenamefont {Reali}, \citenamefont {Ruoso},
\citenamefont {Scarpa},\ and\ \citenamefont {Zanello}}]{Braggio-JPCS-2009}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Agnesi}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Braggio}},
\bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Bressi}}, \bibinfo
{author} {\bibfnamefont {G.}~\bibnamefont {Carugno}}, \bibinfo {author}
{\bibfnamefont {F.~D.}\ \bibnamefont {Valle}}, \bibinfo {author}
{\bibfnamefont {G.}~\bibnamefont {Galeazzi}}, \bibinfo {author}
{\bibfnamefont {G.}~\bibnamefont {Messineo}}, \bibinfo {author}
{\bibfnamefont {F.}~\bibnamefont {Pirzio}}, \bibinfo {author} {\bibfnamefont
{G.}~\bibnamefont {Reali}}, \bibinfo {author} {\bibfnamefont
{G.}~\bibnamefont {Ruoso}}, \bibinfo {author} {\bibfnamefont
{D.}~\bibnamefont {Scarpa}},\ and\ \bibinfo {author} {\bibfnamefont
{D.}~\bibnamefont {Zanello}},\ }\bibfield {title} {\bibinfo {title} {{MIR}:
{An} experiment for the measurement of the dynamical {Casimir} effect},\
}\href {https://doi.org/10.1088/1742-6596/161/1/012028} {\bibfield {journal}
{\bibinfo {journal} {J. Phys. Conf. Ser.}\ }\textbf {\bibinfo {volume}
{161}},\ \bibinfo {pages} {012028} (\bibinfo {year} {2009})}\BibitemShut
{NoStop}
\bibitem [{\citenamefont {Dezael}\ and\ \citenamefont
{Lambrecht}(2010)}]{Dezael-Lambrecht-EPL-2010}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {F.~X.}\ \bibnamefont
{Dezael}}\ and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Lambrecht}},\ }\bibfield {title} {\bibinfo {title} {Analogue {Casimir}
radiation using an optical parametric oscillator},\ }\href
{https://doi.org/10.1209/0295-5075/89/14001} {\bibfield {journal} {\bibinfo
{journal} {Europhys. Lett.}\ }\textbf {\bibinfo {volume} {89}},\ \bibinfo
{pages} {14001} (\bibinfo {year} {2010})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Wilson}\ \emph {et~al.}(2011)\citenamefont {Wilson},
\citenamefont {Johansson}, \citenamefont {Pourkabirian}, \citenamefont
{Simoen}, \citenamefont {Johansson}, \citenamefont {Duty}, \citenamefont
{Nori},\ and\ \citenamefont {Delsing}}]{Wilson-Nature-2011}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {C.~M.}\ \bibnamefont
{Wilson}}, \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Johansson}},
\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Pourkabirian}}, \bibinfo
{author} {\bibfnamefont {M.}~\bibnamefont {Simoen}}, \bibinfo {author}
{\bibfnamefont {J.~R.}\ \bibnamefont {Johansson}}, \bibinfo {author}
{\bibfnamefont {T.}~\bibnamefont {Duty}}, \bibinfo {author} {\bibfnamefont
{F.}~\bibnamefont {Nori}},\ and\ \bibinfo {author} {\bibfnamefont
{P.}~\bibnamefont {Delsing}},\ }\bibfield {title} {\bibinfo {title}
{Observation of the dynamical {Casimir} effect in a superconducting
circuit},\ }\href {https://doi.org/10.1038/nature10561} {\bibfield {journal}
{\bibinfo {journal} {Nature (London)}\ }\textbf {\bibinfo {volume} {479}},\
\bibinfo {pages} {376} (\bibinfo {year} {2011})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Kawakubo}\ and\ \citenamefont
{Yamamoto}(2011)}]{Kawakubo-Yamamoto-PRA-2011}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {T.}~\bibnamefont
{Kawakubo}}\ and\ \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont
{Yamamoto}},\ }\bibfield {title} {\bibinfo {title} {Photon creation in a
resonant cavity with a nonstationary plasma mirror and its detection with
{Rydberg} atoms},\ }\href {https://doi.org/10.1103/PhysRevA.83.013819}
{\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo
{volume} {83}},\ \bibinfo {pages} {013819} (\bibinfo {year}
{2011})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Faccio}\ and\ \citenamefont
{Carusotto}(2011)}]{Faccio-Carusotto-EPL-2011}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {D.}~\bibnamefont
{Faccio}}\ and\ \bibinfo {author} {\bibfnamefont {I.}~\bibnamefont
{Carusotto}},\ }\bibfield {title} {\bibinfo {title} {Dynamical {Casimir}
effect in optically modulated cavities},\ }\href
{https://doi.org/10.1209/0295-5075/96/24006} {\bibfield {journal} {\bibinfo
{journal} {Europhys. Lett.}\ }\textbf {\bibinfo {volume} {96}},\ \bibinfo
{pages} {24006} (\bibinfo {year} {2011})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Johansson}\ \emph {et~al.}(2013)\citenamefont
{Johansson}, \citenamefont {Johansson}, \citenamefont {Wilson}, \citenamefont
{Delsing},\ and\ \citenamefont {Nori}}]{Johansson-et-al-PRL-2013}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.~R.}\ \bibnamefont
{Johansson}}, \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont
{Johansson}}, \bibinfo {author} {\bibfnamefont {C.~M.}\ \bibnamefont
{Wilson}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Delsing}},\
and\ \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Nori}},\ }\bibfield
{title} {\bibinfo {title} {Nonclassical microwave radiation from the
dynamical {Casimir} effect},\ }\href
{https://doi.org/10.1103/PhysRevA.87.043804} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {87}},\ \bibinfo
{pages} {043804} (\bibinfo {year} {2013})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {L{\"{a}}hteenm{\"{a}}ki}\ \emph
{et~al.}(2013)\citenamefont {L{\"{a}}hteenm{\"{a}}ki}, \citenamefont
{Paraoanu}, \citenamefont {Hassel},\ and\ \citenamefont
{Hakonen}}]{Lahteenmaki-PNAS-2013}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {P.}~\bibnamefont
{L{\"{a}}hteenm{\"{a}}ki}}, \bibinfo {author} {\bibfnamefont {G.~S.}\
\bibnamefont {Paraoanu}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{Hassel}},\ and\ \bibinfo {author} {\bibfnamefont {P.~J.}\ \bibnamefont
{Hakonen}},\ }\bibfield {title} {\bibinfo {title} {Dynamical {Casimir}
effect in a {Josephson} metamaterial},\ }\href
{https://doi.org/10.1073/pnas.1212705110} {\bibfield {journal} {\bibinfo
{journal} {Proc. Natl. Acad. Sci. U.S.A.}\ }\textbf {\bibinfo {volume}
{110}},\ \bibinfo {pages} {4234} (\bibinfo {year} {2013})}\BibitemShut
{NoStop}
\bibitem [{\citenamefont {Motazedifard}\ \emph {et~al.}(2015)\citenamefont
{Motazedifard}, \citenamefont {Naderi},\ and\ \citenamefont
{Roknizadeh}}]{Motazedifard-2015}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Motazedifard}}, \bibinfo {author} {\bibfnamefont {M.~H.}\ \bibnamefont
{Naderi}},\ and\ \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont
{Roknizadeh}},\ }\bibfield {title} {\bibinfo {title} {{Analogue model for
controllable Casimir radiation in a nonlinear cavity with amplitude-modulated
pumping: generation and quantum statistical properties}},\ }\href
{https://doi.org/10.1364/JOSAB.32.001555} {\bibfield {journal} {\bibinfo
{journal} {J. Opt. Soc. Am. B}\ }\textbf {\bibinfo {volume} {32}},\ \bibinfo
{pages} {1555} (\bibinfo {year} {2015})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {de~Sousa}\ and\ \citenamefont
{Dodonov}(2015)}]{Dodonov-JPA-2015}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {I.~M.}\ \bibnamefont
{de~Sousa}}\ and\ \bibinfo {author} {\bibfnamefont {A.~V.}\ \bibnamefont
{Dodonov}},\ }\bibfield {title} {\bibinfo {title} {Microscopic toy model for
the cavity dynamical {Casimir} effect},\ }\href
{https://doi.org/10.1088/1751-8113/48/24/245302} {\bibfield {journal}
{\bibinfo {journal} {Journal of Physics A: Mathematical and Theoretical}\
}\textbf {\bibinfo {volume} {48}},\ \bibinfo {pages} {245302} (\bibinfo
{year} {2015})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Corona-Ugalde}\ \emph {et~al.}(2016)\citenamefont
{Corona-Ugalde}, \citenamefont {Mart\'{\i}n-Mart\'{\i}nez}, \citenamefont
{Wilson},\ and\ \citenamefont {Mann}}]{Ugalde-PRA-2016}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {P.}~\bibnamefont
{Corona-Ugalde}}, \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont
{Mart\'{\i}n-Mart\'{\i}nez}}, \bibinfo {author} {\bibfnamefont {C.~M.}\
\bibnamefont {Wilson}},\ and\ \bibinfo {author} {\bibfnamefont {R.~B.}\
\bibnamefont {Mann}},\ }\bibfield {title} {\bibinfo {title} {Dynamical
{Casimir} effect in circuit {QED} for nonuniform trajectories},\ }\href
{https://doi.org/10.1103/PhysRevA.93.012519} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {93}},\ \bibinfo
{pages} {012519} (\bibinfo {year} {2016})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Braggio}\ \emph {et~al.}(2018)\citenamefont
{Braggio}, \citenamefont {Carugno}, \citenamefont {Borghesani}, \citenamefont
{Dodonov}, \citenamefont {Pirzio},\ and\ \citenamefont
{Ruoso}}]{Braggio-JOpt-2018}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {C.}~\bibnamefont
{Braggio}}, \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Carugno}},
\bibinfo {author} {\bibfnamefont {A.~F.}\ \bibnamefont {Borghesani}},
\bibinfo {author} {\bibfnamefont {V.~V.}\ \bibnamefont {Dodonov}}, \bibinfo
{author} {\bibfnamefont {F.}~\bibnamefont {Pirzio}},\ and\ \bibinfo {author}
{\bibfnamefont {G.}~\bibnamefont {Ruoso}},\ }\bibfield {title} {\bibinfo
{title} {Generation of microwave fields in cavities with laser-excited
nonlinear media: competition between the second- and third-order optical
nonlinearities},\ }\href {https://doi.org/10.1088/2040-8986/aad826}
{\bibfield {journal} {\bibinfo {journal} {Journal of Optics}\ }\textbf
{\bibinfo {volume} {20}},\ \bibinfo {pages} {095502} (\bibinfo {year}
{2018})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Dodonov}(2019)}]{Dodonov-IOP-CS-2019}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {V.~V.}\ \bibnamefont
{Dodonov}},\ }\bibfield {title} {\bibinfo {title} {Dynamical {C}asimir
effect meets material science},\ }\href
{https://doi.org/10.1088/1757-899x/474/1/012009} {\bibfield {journal}
{\bibinfo {journal} {{IOP} Conference Series: Materials Science and
Engineering}\ }\textbf {\bibinfo {volume} {474}},\ \bibinfo {pages} {012009}
(\bibinfo {year} {2019})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Vezzoli}\ \emph {et~al.}(2019)\citenamefont
{Vezzoli}, \citenamefont {Mussot}, \citenamefont {Westerberg}, \citenamefont
{Kudlinski}, \citenamefont {Dinparasti~Saleh}, \citenamefont {Prain},
\citenamefont {Biancalana}, \citenamefont {Lantz},\ and\ \citenamefont
{Faccio}}]{Vezzoli-CommPhys-2019}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont
{Vezzoli}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Mussot}},
\bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Westerberg}}, \bibinfo
{author} {\bibfnamefont {A.}~\bibnamefont {Kudlinski}}, \bibinfo {author}
{\bibfnamefont {H.}~\bibnamefont {Dinparasti~Saleh}}, \bibinfo {author}
{\bibfnamefont {A.}~\bibnamefont {Prain}}, \bibinfo {author} {\bibfnamefont
{F.}~\bibnamefont {Biancalana}}, \bibinfo {author} {\bibfnamefont
{E.}~\bibnamefont {Lantz}},\ and\ \bibinfo {author} {\bibfnamefont
{D.}~\bibnamefont {Faccio}},\ }\bibfield {title} {\bibinfo {title} {Optical
analogue of the dynamical {Casimir} effect in a dispersion-oscillating
fibre},\ }\href {https://doi.org/10.1038/s42005-019-0183-z} {\bibfield
{journal} {\bibinfo {journal} {Communications Physics}\ }\textbf {\bibinfo
{volume} {2}},\ \bibinfo {pages} {84} (\bibinfo {year} {2019})}\BibitemShut
{NoStop}
\bibitem [{\citenamefont {Schneider}\ \emph {et~al.}(2020)\citenamefont
{Schneider}, \citenamefont {Bengtsson}, \citenamefont {Svensson},
\citenamefont {Aref}, \citenamefont {Johansson}, \citenamefont {Bylander},\
and\ \citenamefont {Delsing}}]{Schneider-et-al-PRL-2020}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {B.~H.}\ \bibnamefont
{Schneider}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Bengtsson}}, \bibinfo {author} {\bibfnamefont {I.~M.}\ \bibnamefont
{Svensson}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Aref}},
\bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Johansson}}, \bibinfo
{author} {\bibfnamefont {J.}~\bibnamefont {Bylander}},\ and\ \bibinfo
{author} {\bibfnamefont {P.}~\bibnamefont {Delsing}},\ }\bibfield {title}
{\bibinfo {title} {Observation of broadband entanglement in microwave
radiation from a single time-varying boundary condition},\ }\href
{https://doi.org/10.1103/PhysRevLett.124.140503} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {124}},\ \bibinfo
{pages} {140503} (\bibinfo {year} {2020})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Johansson}\ \emph {et~al.}(2009)\citenamefont
{Johansson}, \citenamefont {Johansson}, \citenamefont {Wilson},\ and\
\citenamefont {Nori}}]{Johansson-PRL-2009}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.~R.}\ \bibnamefont
{Johansson}}, \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont
{Johansson}}, \bibinfo {author} {\bibfnamefont {C.~M.}\ \bibnamefont
{Wilson}},\ and\ \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Nori}},\
}\bibfield {title} {\bibinfo {title} {Dynamical {Casimir} effect in a
superconducting coplanar waveguide},\ }\href
{https://doi.org/10.1103/PhysRevLett.103.147003} {\bibfield {journal}
{\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {103}},\
\bibinfo {pages} {147003} (\bibinfo {year} {2009})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Johansson}\ \emph {et~al.}(2010)\citenamefont
{Johansson}, \citenamefont {Johansson}, \citenamefont {Wilson},\ and\
\citenamefont {Nori}}]{Johansson-et-al-PRA-2010}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.~R.}\ \bibnamefont
{Johansson}}, \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont
{Johansson}}, \bibinfo {author} {\bibfnamefont {C.~M.}\ \bibnamefont
{Wilson}},\ and\ \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Nori}},\
}\bibfield {title} {\bibinfo {title} {Dynamical {Casimir} effect in
superconducting microwave circuits},\ }\href
{https://doi.org/10.1103/PhysRevA.82.052509} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {82}},\ \bibinfo
{pages} {052509} (\bibinfo {year} {2010})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Dodonov}\ and\ \citenamefont
{Klimov}(1996)}]{Dodonov-Klimov-PRA-1996}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {V.~V.}\ \bibnamefont
{Dodonov}}\ and\ \bibinfo {author} {\bibfnamefont {A.~B.}\ \bibnamefont
{Klimov}},\ }\bibfield {title} {\bibinfo {title} {Generation and detection
of photons in a cavity with a resonantly oscillating boundary},\ }\href
{https://doi.org/10.1103/PhysRevA.53.2664} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {53}},\ \bibinfo
{pages} {2664} (\bibinfo {year} {1996})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Alves}\ \emph {et~al.}(2003)\citenamefont {Alves},
\citenamefont {Farina},\ and\ \citenamefont
{Neto}}]{Alves-Farina-MaiaNeto-JPA-2003}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {D.~T.}\ \bibnamefont
{Alves}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Farina}},\ and\
\bibinfo {author} {\bibfnamefont {P.~A.~M.}\ \bibnamefont {Neto}},\
}\bibfield {title} {\bibinfo {title} {Dynamical {Casimir} effect with
{Dirichlet} and {Neumann} boundary conditions},\ }\href
{https://doi.org/10.1088/0305-4470/36/44/011} {\bibfield {journal} {\bibinfo
{journal} {J. Phys. A}\ }\textbf {\bibinfo {volume} {36}},\ \bibinfo {pages}
{11333} (\bibinfo {year} {2003})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Alves}\ \emph {et~al.}(2006)\citenamefont {Alves},
\citenamefont {Farina},\ and\ \citenamefont
{Granhen}}]{Alves-Farina-Granhen-PRA-2006}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {D.~T.}\ \bibnamefont
{Alves}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Farina}},\ and\
\bibinfo {author} {\bibfnamefont {E.~R.}\ \bibnamefont {Granhen}},\
}\bibfield {title} {\bibinfo {title} {Dynamical {Casimir} effect in a
resonant cavity with mixed boundary conditions},\ }\href
{https://doi.org/10.1103/PhysRevA.73.063818} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {73}},\ \bibinfo
{pages} {063818} (\bibinfo {year} {2006})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Alves}\ and\ \citenamefont
{Granhen}(2008)}]{Alves-Granhen-PRA-2008}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {D.~T.}\ \bibnamefont
{Alves}}\ and\ \bibinfo {author} {\bibfnamefont {E.~R.}\ \bibnamefont
{Granhen}},\ }\bibfield {title} {\bibinfo {title} {Energy density and
particle creation inside an oscillating cavity with mixed boundary
conditions},\ }\href {https://doi.org/10.1103/PhysRevA.77.015808} {\bibfield
{journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume}
{77}},\ \bibinfo {pages} {015808} (\bibinfo {year} {2008})}\BibitemShut
{NoStop}
\bibitem [{\citenamefont {Alves}\ \emph {et~al.}(2008)\citenamefont {Alves},
\citenamefont {Granhen},\ and\ \citenamefont
{Lima}}]{Alves-Granhen-Lima-PRD-2008}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {D.~T.}\ \bibnamefont
{Alves}}, \bibinfo {author} {\bibfnamefont {E.~R.}\ \bibnamefont {Granhen}},\
and\ \bibinfo {author} {\bibfnamefont {M.~G.}\ \bibnamefont {Lima}},\
}\bibfield {title} {\bibinfo {title} {Quantum radiation force on a moving
mirror with {Dirichlet} and {Neumann} boundary conditions for a vacuum,
finite temperature, and a coherent state},\ }\href
{https://doi.org/10.1103/PhysRevD.77.125001} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. D}\ }\textbf {\bibinfo {volume} {77}},\ \bibinfo
{pages} {125001} (\bibinfo {year} {2008})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Alves}\ \emph
{et~al.}(2010{\natexlab{a}})\citenamefont {Alves}, \citenamefont {Granhen},
\citenamefont {Silva},\ and\ \citenamefont
{Lima}}]{Alves-Granhen-Silva-Lima-PLA-2010}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {D.~T.}\ \bibnamefont
{Alves}}, \bibinfo {author} {\bibfnamefont {E.~R.}\ \bibnamefont {Granhen}},
\bibinfo {author} {\bibfnamefont {H.~O.}\ \bibnamefont {Silva}},\ and\
\bibinfo {author} {\bibfnamefont {M.~G.}\ \bibnamefont {Lima}},\ }\bibfield
{title} {\bibinfo {title} {Exact behavior of the energy density inside a
one-dimensional oscillating cavity with a thermal state},\ }\href
{https://doi.org/10.1016/j.physleta.2010.07.063} {\bibfield
{journal} {\bibinfo {journal} {Physics Letters A}\ }\textbf {\bibinfo
{volume} {374}},\ \bibinfo {pages} {3899 } (\bibinfo {year}
{2010}{\natexlab{a}})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Alves}\ \emph
{et~al.}(2010{\natexlab{b}})\citenamefont {Alves}, \citenamefont {Granhen},
\citenamefont {Silva},\ and\ \citenamefont
{Lima}}]{Alves-Granhen-Silva-Lima-PRD-2010}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {D.~T.}\ \bibnamefont
{Alves}}, \bibinfo {author} {\bibfnamefont {E.~R.}\ \bibnamefont {Granhen}},
\bibinfo {author} {\bibfnamefont {H.~O.}\ \bibnamefont {Silva}},\ and\
\bibinfo {author} {\bibfnamefont {M.~G.}\ \bibnamefont {Lima}},\ }\bibfield
{title} {\bibinfo {title} {Quantum radiation force on the moving mirror of a
cavity, with {Dirichlet} and {Neumann} boundary conditions for a vacuum,
finite temperature, and a coherent state},\ }\href
{https://doi.org/10.1103/PhysRevD.81.025016} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. D}\ }\textbf {\bibinfo {volume} {81}},\ \bibinfo
{pages} {025016} (\bibinfo {year} {2010}{\natexlab{b}})}\BibitemShut
{NoStop}
\bibitem [{\citenamefont {Alves}\ and\ \citenamefont
{Granhen}(2014)}]{Alves-Granhen-CPC-2014}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {D.~T.}\ \bibnamefont
{Alves}}\ and\ \bibinfo {author} {\bibfnamefont {E.~R.}\ \bibnamefont
{Granhen}},\ }\bibfield {title} {\bibinfo {title} {A computer algebra
package for calculation of the energy density produced via the dynamical
{Casimir} effect in one-dimensional cavities},\ }\href
{https://doi.org/10.1016/j.cpc.2014.03.020} {\bibfield
{journal} {\bibinfo {journal} {Computer Physics Communications}\ }\textbf
{\bibinfo {volume} {185}},\ \bibinfo {pages} {2101 } (\bibinfo {year}
{2014})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Good}\ \emph {et~al.}(2016)\citenamefont {Good},
\citenamefont {Anderson},\ and\ \citenamefont {Evans}}]{Good-PRD-2016}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.~R.~R.}\
\bibnamefont {Good}}, \bibinfo {author} {\bibfnamefont {P.~R.}\ \bibnamefont
{Anderson}},\ and\ \bibinfo {author} {\bibfnamefont {C.~R.}\ \bibnamefont
{Evans}},\ }\bibfield {title} {\bibinfo {title} {Mirror reflections of a
black hole},\ }\href {https://doi.org/10.1103/PhysRevD.94.065010} {\bibfield
{journal} {\bibinfo {journal} {Phys. Rev. D}\ }\textbf {\bibinfo {volume}
{94}},\ \bibinfo {pages} {065010} (\bibinfo {year} {2016})}\BibitemShut
{NoStop}
\bibitem [{\citenamefont {Good}\ \emph {et~al.}(2020)\citenamefont {Good},
\citenamefont {Zhakenuly},\ and\ \citenamefont {Linder}}]{Good-PRD-2020}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.~R.~R.}\
\bibnamefont {Good}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Zhakenuly}},\ and\ \bibinfo {author} {\bibfnamefont {E.~V.}\ \bibnamefont
{Linder}},\ }\bibfield {title} {\bibinfo {title} {Mirror at the edge of the
universe: Reflections on an accelerated boundary correspondence with {de
Sitter} cosmology},\ }\href {https://doi.org/10.1103/PhysRevD.102.045020}
{\bibfield {journal} {\bibinfo {journal} {Phys. Rev. D}\ }\textbf {\bibinfo
{volume} {102}},\ \bibinfo {pages} {045020} (\bibinfo {year}
{2020})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Good}\ \emph {et~al.}(2021)\citenamefont {Good},
\citenamefont {Lapponi}, \citenamefont {Luongo},\ and\ \citenamefont
{Mancini}}]{Good-Orlando-ArXiv-2021}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.~R.}\ \bibnamefont
{Good}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Lapponi}},
\bibinfo {author} {\bibfnamefont {O.}~\bibnamefont {Luongo}},\ and\ \bibinfo
{author} {\bibfnamefont {S.}~\bibnamefont {Mancini}},\ }\bibfield {title}
{\bibinfo {title} {Quantum communication through a partially reflecting
accelerating mirror},\ }\href {https://arxiv.org/abs/2103.07374} {\
(\bibinfo {year} {2021})},\ \Eprint {https://arxiv.org/abs/2103.07374}
{arXiv:2103.07374} \BibitemShut {NoStop}
\bibitem [{\citenamefont {Silva}\ and\ \citenamefont
{Farina}(2011)}]{Silva-Farina-PRD-2011}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {H.~O.}\ \bibnamefont
{Silva}}\ and\ \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Farina}},\
}\bibfield {title} {\bibinfo {title} {Simple model for the dynamical
{Casimir} effect for a static mirror with time-dependent properties},\ }\href
{https://doi.org/10.1103/PhysRevD.84.045003} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. D}\ }\textbf {\bibinfo {volume} {84}},\ \bibinfo
{pages} {045003} (\bibinfo {year} {2011})}\BibitemShut {NoStop}
\end{thebibliography}
\end{document}
\begin{document}
\begin{abstract}
In this paper we consider the Prym map for double coverings of curves of genus $g$ ramified at $r>0$ points, that is, the
map associating to a ramified double covering its Prym variety. The generic
Torelli theorem states that the Prym map is generically injective as soon as the dimension of the space of coverings is less than or
equal to the dimension of the space of polarized abelian varieties. We prove the generic injectivity of the Prym map
in the cases of double coverings of curves with: (a) $g=2$, $r=6$, and (b) $g= 5$, $r=2$.
In the first case the proof is constructive and can be extended to the range $r\ge \max \{6,\frac 23(g+2) \}$.
For (b) we study the fibre along the locus of the intermediate Jacobians of cubic threefolds to conclude the generic injectivity.
This completes the work of Marcucci and Pirola, who proved this theorem in all the other cases, except for
the bielliptic case $g=1$ (solved later by Marcucci and the first author) and the case $g=3$, $r=4$, considered previously by Nagaraj and Ramanan, and also by Bardelli, Ciliberto and Verra, where the degree of the map is $3$.
The paper closes with an appendix by Alessandro Verra with an independent result, the rationality of the moduli space of coverings with $g=2,r=6$, whose proof is
self-contained.
\end{abstract}
\maketitle
It is well known that a general Prym variety of dimension at least $6$ is the Prym variety of a unique \'etale covering. This is the
so-called generic Torelli Theorem for Prym varieties. In a modular way this result can be reformulated as the generic injectivity of the
Prym map
\[
\mathcal R_g {\longrightarrow } \mathcal A_{g-1}
\]
sending an \'etale covering in $\mathcal R_g$ to its Prym variety, which turns out to be a principally polarized abelian variety. In
recent years, starting with the seminal work of Marcucci and Pirola (see \cite{mp}), the ramified case has attracted attention and
the corresponding generic Torelli problem has proved to be a natural question full of rich geometry. In the mentioned paper generic
Torelli is proved for a large number of cases using degeneration techniques, and only a few special situations remained open. Our purpose
is to prove the generic Torelli theorem in these remaining cases.
In order to establish our results more precisely let us define the basic objects that take part in the main statements.
Let $C$ be an irreducible smooth complex projective curve of genus $g$ and let $\pi: D \rightarrow C $ be a smooth double covering ramified at $r>0$ points. The Prym variety associated to $\pi$ is defined as
$$
P(\pi):= \Ker \Nm_{\pi } \subset JD.
$$
This is an abelian subvariety of dimension $g-1+\frac{r}{2}$ with induced polarization $\Xi$ of type
$$
\delta=(1,\ldots, 1, \underbrace{2, \ldots ,2}_{g \textnormal{ times }} \ ).
$$
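The dimension statement follows from the Riemann--Hurwitz formula applied to the double covering $\pi$:
\[
2g(D)-2 = 2(2g-2)+r \ \Longrightarrow \ g(D)=2g-1+\tfrac{r}{2},
\qquad
\dim P(\pi)=g(D)-g = g-1+\tfrac{r}{2}.
\]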
Given a divisor $B$ in $C$ of even degree $r>0$ with no multiple points and
a line bundle $\eta \in \Pic^{r/2}(C)$ with $B \in |\eta^{\otimes 2}|$, the projection
$$
\pi: D = \boldsymbol{ \Spec } ({\mathcal O}_C \oplus \eta^{-1}) \rightarrow \boldsymbol{\Spec }\,{\mathcal O}_C =C
$$
defines a double covering branched over $B$. Conversely, every double ramified covering of $C$ arises in this way.
Hence these coverings are parametrized by the moduli space
$$
\mathcal{R}_{g,r}:=\{ (C, \eta, B) \ \mid \ \eta \in \Pic^{\frac{r}{2}}(C), B \text{ reduced divisor in } |\eta^{\otimes 2}| \}.
$$
Let ${\mathcal A}^\delta_{g-1+\frac{r}{2}}$ denote the moduli space of abelian varieties of dimension $g-1+\frac{r}{2}$ and polarization of
type $\delta $.
For any $g\ge 1$ and $r>0$ we define the Prym map by
\begin{eqnarray*}
\mathcal{P}_{g,r} : \mathcal{R}_{g,r} & \rightarrow & {\mathcal A}^\delta_{g-1+\frac{r}{2}}\\
(\pi : D \rightarrow C) & \mapsto & ( P (\pi) , \Xi).
\end{eqnarray*}
The codifferential of $\mathcal{P}_{g,r}$ at a generic point $[(C, \eta, B)]$ is given by the multiplication map
$$
d\mathcal{P}_{g,r}^* : \Sym^2H^0(C, \omega_C \otimes \eta) \rightarrow H^0(C, \omega_C^2 \otimes {\mathcal O}(B))
$$
which is known to be surjective (\cite{lo}); therefore $\mathcal{P}_{g,r}$ is generically finite if and only if
$$
\dim \mathcal{R}_{g,r} \leq \dim {\mathcal A}^{\delta}_{g-1+\frac{r}{2}}.
$$
So, by an elementary count of parameters we get that the Prym map is generically finite as soon as one of the following conditions holds:
either $r\geq 6$ and $g\geq 1$, or $r=4$ and $g\geq 3$, or $r=2$ and $g\geq 5$.
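Let us make this count explicit for $g\ge 2$: the curve $C$ contributes $3g-3$ parameters and the reduced divisor $B$ contributes $r$, while $\eta$ varies among the finitely many square roots of ${\mathcal O}_C(B)$, so $\dim \mathcal{R}_{g,r}=3g-3+r$, whereas $\dim {\mathcal A}^{\delta}_{g-1+\frac r2}=\frac 12 \left(g-1+\frac r2\right)\left(g+\frac r2\right)$. Hence the inequality reads
\[
3g-3+r \ \le\ \frac 12 \left(g-1+\tfrac r2\right)\left(g+\tfrac r2\right),
\]
which for $r=6$, $r=4$ and $r=2$ reduces to $g^2\ge g$, $g^2\ge 3g$ and $g^2-5g+2\ge 0$ respectively, giving the three numerical conditions above.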
Marcucci and Pirola (\cite{mp}), and later Marcucci and the first author (\cite{mn}), have proved the generic injectivity in all the
cases except three:
\begin{enumerate}
\item [(a)] $r=4, g=3$,
\item [(b)] $r=6, g=2$,
\item [(c)] $r=2, g=5$.
\end{enumerate}
Case (a) was considered previously by Nagaraj and Ramanan, and also by Bardelli, Ciliberto and Verra (see \cite{nr}, \cite{bcv}).
They proved (in different ways) that the degree of the Prym map is $3$; this is the only case where the map is generically finite but the generic Torelli theorem fails.
This paper has two goals: to prove generic Torelli Theorem for the remaining cases (b) and (c), and to give a constructive proof of the generic
Torelli Theorem for $r\ge \max \{6, \frac 23 (g+2)\}$. More precisely we prove (Theorem \ref{constructive_Torelli}):
\begin{thm} \label{constructive_Torelli_intro}
Let $(C, \eta , B)$ be a generic element in $\mathcal R_{g,r}$ and let $(P, \Xi)$ be its Prym variety. Assume that $r\ge \max \{6, \frac 23 (g+2)\}$.
Then the element $(C, \eta, B)$ is uniquely determined by the base locus $Bs$ of the linear system $| \Xi |$.
\end{thm}
In particular, this establishes the generic injectivity of the Prym map $\mathcal{P}_{2,6}: \mathcal R_{2,6} \rightarrow \mathcal A_4^{(1,1,2,2)}$ (Corollary \ref{corolario-Torelli}).
The image of the Prym map gives a divisor in $\mathcal A_4^{(1,1,2,2)}$, which is invariant under the natural involution in $\mathcal A_4^{(1,1,2,2)}$
(see \cite{ps}). This divisor could be useful to understand the nature of the birational geometry of $\mathcal A_4^{(1,1,2,2)}$, for instance to compute
its Kodaira dimension, which seems to be unknown.
Recall that in the case of \'etale double coverings there are two different constructive proofs of the generic
injectivity of the Prym map: one due to Welters, which uses degeneration methods and works for genus $g\geq 16$, and another one by Debarre, where
the covering is reconstructed from the tangent cones to the stable singularities of the theta divisor of $P$; this holds for curves of genus $g \geq 7$ (see \cite{d} and \cite{w}).
In the same spirit but with a different method, we present in Theorem \ref{constructive_Torelli_intro} a procedure to reconstruct the covering
for $r\ge \max \{6, \frac 23 (g+2)\}$. This gives another proof of Marcucci and Pirola's results in this range. It also provides
a different proof for the case $r \geq 6$, $g=1$, which was proven in \cite{mn}.
The idea of the proof of Theorem \ref{constructive_Torelli_intro} is to show that the base locus of the map
$P \dashrightarrow |\Xi|^{\vee}$ is symmetric and invariant under the action of the kernel of the isogeny $\lambda_{\Xi} :P{\longrightarrow } P^{\vee}$.
The quotient by this action turns out to be, under some genericity hypothesis, birational to the symmetric product $ D^{(\frac r2 -2)}$.
Moreover the symmetry of the base locus induces an involution on $D^{(\frac r2 -2)}$. Then an extended Torelli theorem proved by Martens allows us to conclude.
The second part of the paper is devoted to proving the generic injectivity in case (c). The main result of Section 2 is the following (Theorem \ref{gen_injective}):
\begin{thm} \label{injective-52}
The Prym map $\mathcal{P}: \mathcal{R}_{5,2} \rightarrow {\mathcal A}_{5}$ is generically injective.
\end{thm}
The moduli space $\mathcal{R}_{5,2}$ can be embedded into the Beauville partial compactification $\overline{\mathcal{R}}_{6}$ of admissible \'etale double coverings of genus 6 curves, by identifying the two branch points on the covering and on the base curve. The closure of the image of $\mathcal{R}_{5,2}$ is an irreducible divisor $\Delta^n$ in the boundary of $\overline{\mathcal{R}}_{6}$ (denoted by
$\Delta_0^{\mbox{ram}}$ in \cite{fgsv}). Using the definition of the Prym map for admissible coverings one deduces from the main theorem in \cite{ds}
that the degree of the Prym map $\mathcal{P}_6 :\mathcal{R}_{6} \rightarrow {\mathcal A}_5 $ restricted to $\Delta^n$ is $1$, although the total degree of the map is $27$.
The class of the image of $\Delta^n$ under the Prym map in ${\mathcal A}_5$ has been computed in \cite{fgsv} (Theorem 0.8) but that does not give the
degree of the restriction map to $\Delta^n$.
For the proof of Theorem \ref{injective-52} we use the description of the fibres of the Prym map $\mathcal{P}_6$ over the locus of intermediate Jacobians of cubic threefolds
(after a suitable blowup) intersected with the boundary divisor $\Delta^n$, and exploit the geometry of the cubic threefolds.
\vskip 5mm
In summary, the previous results, together with the main theorems in \cite{mp}, \cite{mn} and \cite{nr}, give the following:
\begin{thm}
Let $\mathcal{P}_{g,r}$ be a Prym map with $r>0$. We assume that
\[
\dim \mathcal{R}_{g,r} \le \dim {\mathcal A}_{g-1+\frac r2}^\delta.
\]
Then $\mathcal{P}_{g,r}$ is generically injective except for $g=3, r=4$, in which case the degree is $3$.
\end{thm}
The paper closes with an appendix by Alessandro Verra with an independent result, namely the rationality of the moduli space $\mathcal{R}_{2,6}$, whose
proof is self-contained.
{\bf Acknowledgements:} We are grateful to G.P. Pirola and Alessandro Verra for many helpful suggestions and stimulating discussions on the subject. We thank
Herbert Lange for reading and commenting on a preliminary version of the paper.
\section{A constructive generic Torelli Theorem for $r\ge \max \{6, \frac 23 (g+2)\}$}
The goal of the section is to prove the following Theorem:
\begin{thm}\label{constructive_Torelli}
Let $(C, \eta , B)$ be a generic element in $\mathcal R_{g,r}$ and let $(P, \Xi)$ be its Prym variety. Assume that $r\ge \max \{6, \frac 23 (g+2)\}$.
Then the element $(C, \eta, B)$ is uniquely determined by the base locus $Bs$ of the linear system $| \Xi |$. \end{thm}
\vskip 3mm
We first explain in a few words how $Bs$ explicitly determines the covering $\pi: D\rightarrow C$ attached to $(C,\eta, B)$: the kernel of the polarization map
$\lambda_{\Xi }:P \rightarrow P^{\vee }$ acts on $Bs$, which moreover is a symmetric subvariety of $P$. It turns out that the quotient $Bs/\Ker(\lambda_{\Xi})$ is
birational to the symmetric product $D^{(\frac r2-2)}$ and inherits a natural involution $\sigma'$. By a theorem of Martens (see \cite{mar}) $\sigma '$
is induced
by an involution $\sigma $ on $D$. Since $C=D/\langle \sigma \rangle$ the covering is determined. It is worth remarking that Martens' proof is also
constructive, since it relies on the Boolean calculus of special subvarieties of Jacobians. An elegant cohomological proof of the same theorem was given
by Ran in \cite{ran}.
\vskip 3mm
This theorem reproves the main theorem in \cite{mp} under the hypothesis $r\ge \max \{6, \frac 23 (g+2)\}$ and completely recovers the result in \cite{mn}.
Observe that our proof is constructive: we provide a procedure to recover the covering $\pi:D\rightarrow C$ from its Prym variety, whereas both quoted papers use degeneration
methods. It would be interesting to find a constructive proof also for the cases where the map is generically injective and $r< \frac 23 (g+2)$.
\begin{cor}\label{corolario-Torelli}
The Prym map $\mathcal R_{2,6} \longrightarrow \mathcal A_4^{(1,1,2,2)}$ is generically injective.
\end{cor}
The rest of the section is devoted to the proof of Theorem \ref{constructive_Torelli}. As above, we denote by $\pi:D \rightarrow C$ the ramified double covering
attached to $(C,\eta,B)$. Recall that the Prym variety $(P, \Xi)$ associated to the covering $\pi$ has polarization of type $(1,\ldots,1, 2, \ldots ,2)$,
where $2$ appears $g$ times.
By the Riemann--Roch theorem we have $\dim H^0(P, {\mathcal O}_P(\Xi)) = 2^g$.
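Indeed, for a polarization of type $\delta=(\delta_1,\ldots,\delta_n)$ one has $h^0(P,{\mathcal O}_P(\Xi))=\prod_i \delta_i$; here $n=g-1+\frac r2$ and exactly $g$ of the $\delta_i$ equal $2$, so
\[
\dim H^0(P, {\mathcal O}_P(\Xi)) = 1^{\frac r2 -1}\cdot 2^{g}=2^g.
\]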
Let $\varphi_{\Xi} : P \dashrightarrow \mathbb P H^0(P, {\mathcal O}_P(\Xi))^{\vee}$
be the map defined by the polarization. We fix a translate $\Theta_D \subset \Pic^0(D)$ of the
theta divisor of $JD$ and a translate $\Theta_C$ of the theta divisor of $JC$ such that $\Theta _D \cap JC=2\Theta_C$. Then we define the map
$$
\delta : P \dashrightarrow |2\Theta_C|, \qquad z \mapsto (\Theta_{D, z})_{|_{\pi^* JC}},
$$
where $\Theta_{D, z}= \Theta_D + z$. This is well-defined: indeed this is equivalent to the statement that
\[
\left( \Theta_{D,z}-\Theta_D \right){|_{\pi^* JC}}
\]
is linearly equivalent to the trivial divisor. In other words, that
\begin{equation}\label{well-defined}
(\pi^*)^{\vee}\circ \lambda _{\Theta_D} \circ i=0,
\end{equation}
where $i$ is the embedding of $P$ in $JD$, and $\lambda _{\Theta_D}$ is the isomorphism
\[
\lambda_{\Theta_D}: JD {\longrightarrow } \Pic^0(JD)
\]
sending $z$ to $\mathcal O_{JD}(\Theta_{D,z}-\Theta_D)$. To see that (\ref{well-defined}) holds we recall the following standard formula for coverings:
\[
\Nm_{\pi }=\lambda_{\Theta_C}^{-1} \circ (\pi^*)^{\vee } \circ \lambda _{\Theta_D}.
\]
Then $(\pi^*)^{\vee }\circ \lambda _{\Theta_D} \circ i$ is $0$ if and only if
\[
\lambda_{\Theta_C}^{-1}\circ (\pi^*)^{\vee }\circ \lambda_{\Theta_D} \circ i=\Nm_{\pi} \circ \,i=0
\]
which is obvious.
Notice that the base locus of $\delta $ is
$$
Bs:=Bs(\delta)= \{ z \in P \ \mid \ \pi^* JC \subset \Theta_{D, z}\}.
$$
Our aim is to compute $Bs$ and to prove that under some genericity condition the covering $\pi$ can be recovered from $Bs$. First we notice that
$\delta $ is in fact another representation of the map $ \varphi_{\Xi}$, hence $Bs$ is completely determined by the polarized Prym variety.
\begin{prop} \label{comm1}
There is a canonical isomorphism $i: \mathbb P H^0(P, {\mathcal O}_P(\Xi))^{\vee} \rightarrow |2\Theta_C|$ making the following diagram commutative
$$
\xymatrix@C=2cm{
& \mathbb P H^0(P, {\mathcal O}_P(\Xi))^{\vee} \ar[dd]^{\simeq} \\
P \setminus Bs \ar[ur]^{\varphi_{\Xi}} \ar[dr]_{\delta} & \\
& |2\Theta_C|
}
$$
In particular the base locus of $\varphi_{\Xi}$ coincides with the base locus of $\delta$.
\end{prop}
\begin{proof}
The proof in \cite[Proposition, p. 334]{mu} applies verbatim.
\end{proof}
In order to compute explicitly the base locus $Bs$ it is convenient to work with suitable translates of $P$ and of $\Xi$. We consider
$$
P^{can}= \Nm_{\pi}^{-1} (\omega_C \otimes \eta) \subset \Pic^{2g-2+\frac r2}(D).
$$
Observe that $2g-2+\frac r2 =g(D)-1$.
Then there is a canonical representative of the theta divisor
$$
\Xi^{can}= \Theta_D^{can} \cap P^{can}
$$
where $\Theta_D^{can} = W^0_{2g-2+\frac r2}(D) \subset \Pic^{g(D)-1}(D)$ is the canonical theta divisor.
Let $\kappa \in \Pic^{2g-2+\frac r2}(D)$ be such that $\Nm \kappa = \omega_C \otimes \eta$ and let $\Theta_D= \Theta^{can}_{D,-\kappa}$ be the translated theta divisor.
Consider the translation $t_{-\kappa}: P^{can} \stackrel{\simeq}{\rightarrow} P$, $L \mapsto L \otimes \kappa^{-1}$.
Let $\widetilde{Bs}= t_{\kappa} (Bs)$ be the translate of the base locus in $P^{can}$, so we obtain the description:
\[
\widetilde{Bs}= t_{\kappa} (Bs)= \{ L \in P^{can} \ \mid \ \pi^*(JC) \subset \Theta^{can}_{D,-L} \}.
\]
From now on we suppose that $g\ge 2$. The case $g=1$ will be considered at the end of the proof.
We denote by $SU_C(2, \omega_C)$ the moduli space of S-equivalence classes of rank $2$
semistable vector bundles on $C$ with determinant $\omega_C$, which is a normal projective variety.
Given a line bundle $L\in P^{can} \subset \Pic^{2g-2+\frac r2}(D)$, $\pi_*L$ is a rank $2$ vector bundle with
$$
\det \pi_*L = \Nm (L ) \otimes \det \pi_*({\mathcal O}_D)= \omega_C \otimes \eta \otimes \det ({\mathcal O}_C \oplus \eta^{-1}) = \omega_C.
$$
According to \cite[Lemma 1.2]{be_bsmf} there is an open set $P^{ss} \subset P^{can}$ such that $\pi_*L$ is semistable for all $L \in P^{ss}$, so
the map $\pi_*: P^{ss} \rightarrow SU_C(2, \omega_C)$ is well defined. On the other hand, there is a {\it theta map} defined by
$$
\theta: SU_C(2, \omega_C) \rightarrow |2\Theta_C|, \qquad E \mapsto \theta(E),
$$
where $\theta(E)$ is a divisor whose support is $\{\alpha \in \Pic^0(C) \ \mid \ H^0(C, E \otimes \alpha) \neq 0\}$.
It is known that the theta map is well defined (\cite{r}), that is, $\theta(E) $ is a divisor linearly equivalent to $2\Theta_C$,
for every semistable rank 2 vector bundle with canonical determinant.
The following proposition is a consequence of several results in the literature (see for example \cite{vgi} and the references therein):
\begin{prop}
The theta map is an embedding $SU_C(2, \omega_C) \hookrightarrow |2\Theta_C|$ for a generic curve of genus $g\ge 2$.
\end{prop}
In particular, every semistable rank-2 vector bundle with canonical determinant admits a theta divisor in $JC$ (\cite[Proposition 1.6.2]{r}).
Now we set $\tilde{\delta} =\delta \circ t_{-\kappa}$. With this notation we have:
\begin{lem} \label{comm2}
The following diagram is commutative:
$$
\xymatrix@C=2cm@R=1.4cm{
& SU_C(2, \omega_C) \ar@{^{(}->}[d]_{\theta} \\
P^{ss} \ar[ur]^{\pi_*} \ar[r]^(.5){\tilde{\delta}} \ar[d]_{t_{-\kappa}} & |2\Theta_C| \ar[d]_{i}^{\simeq} \\
P \setminus Bs \ar[ur]^{\delta} \ar[r]^(.4){\varphi_{\Xi}}& \mathbb P H^0(P, {\mathcal O}_P(\Xi))^{\vee}
}
$$
\end{lem}
\begin{proof}
The commutativity of the lower triangle is shown in Proposition \ref{comm1}.
Let $L \in P^{ss}$; then $\pi_*L$ is a semistable rank 2 vector bundle with canonical determinant and the support of the divisor
$\theta(\pi_* L)$ is given by
\begin{eqnarray*}
\Supp \theta(\pi_*L) & = &\{ \alpha \in \Pic^0(C )\ \mid \ H^0(C, \alpha \otimes \pi_*L) \neq 0 \} \\
& = & \{ \alpha \in \Pic^0(C )\ \mid \ H^0(D, \pi^* \alpha \otimes L) \neq 0 \} \\
& = & (\Theta^{can}_{D,-L})_{|{\pi^*JC}}\\
& = & \tilde{\delta} (L).
\end{eqnarray*}
This proves the commutativity of the upper triangle.
\end{proof}
Observe that Lemma \ref{comm2} also shows that
$$
\widetilde{Bs} \subset B_{nss}:= \{ L \in P^{can} \mid \pi_*L \textrm{ is not semistable} \}.
$$
On the other hand, if $\pi_*L$ is not semistable then there exists a line subbundle $N \subset \pi_*L$ of slope $> g-1$. By Riemann-Roch
we have $0 \neq H^0(C,N\otimes \alpha) \subseteq H^0(C, \pi_*L \otimes \alpha)$ for all $\alpha \in JC$ so
$$
H^0(C, \pi_*L \otimes \alpha) = H^0(D, L \otimes \pi^ * \alpha ) \neq 0 \quad \forall \alpha \in JC,
$$
that is, $\pi^*(JC) \subset \Theta_{D,-L}^{can}$ and $L \in \widetilde{Bs}$. This gives $ \widetilde{Bs} = B_{nss}$.
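For the reader's convenience, the Riemann-Roch estimate used above can be spelled out: since $\deg(N\otimes \alpha)=\deg N>g-1$ for every $\alpha \in JC$,
\[
h^0(C,N\otimes \alpha)\ \ge\ \chi(N\otimes \alpha)\ =\ \deg N+1-g\ >\ (g-1)+1-g\ =\ 0.
\]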
We consider now the subset of $P^{can}$:
\[
B_0:= \{ L =\pi^*(A)(p_1+\dots +p_{\frac r2-2}) \ \mid \ A \in \Pic^g(C), \ p_i \in D, \ \Nm L \cong \omega_C \otimes \eta \}.
\]
\begin{prop} \label{base-locus} The equality $B_0 = \widetilde{Bs}$ holds.
\end{prop}
\begin{proof}
Let $L= \pi^*(A')(\Sigma p_i) \in B_0$, where $A'$ is effective of maximal degree $\ge g$. In particular, putting $\bar{p_i} = \pi(p_i)$ we have that $\bar{p_i}\ne \bar{p_j}$ for $i\ne j$.
In order to compute $H^0(D, \pi^ * \alpha \otimes L) $ with $\alpha \in JC$, we use a short exact sequence on $C$ given by Mumford in \cite{mu}
(see the proof of the Proposition in page 338) which in our case reads as follows:
\begin{equation} \label{exact_seq_Mumford}
0\,{\longrightarrow } A' {\longrightarrow } \pi_*(L) {\longrightarrow } A'(\Sigma \bar {p_i})\otimes \eta^{-1} {\longrightarrow } \, 0,
\end{equation}
where $\pi_*(L) = \pi_*(\pi^*(A')(\Sigma p_i))$.
Tensoring with $\alpha$ we get:
\[
H^ 0(C, A'\otimes \alpha) \subset H^ 0(C, \pi_*L \otimes \alpha)=H^0(C,\pi_*(L\otimes \pi^*(\alpha)))=H^0(D,L\otimes \pi^*(\alpha)).
\]
Since the degree of $A'\otimes \alpha $ is at least $g=g(C)$, we have $H^ 0(C, A'\otimes \alpha) \neq 0$. Therefore $\pi^* (JC) \subset \Theta^{can}_{D,-L} $
and this proves $B_0\subset \widetilde {Bs}$.
Let $L \in B_{nss}$. Then there exists a line bundle $A {\hookrightarrow} \pi_*L$ with
$$
\deg A > g-1 = \mu(\pi_*L):= \frac{ \deg \pi_*L }{ \rk \pi_*L }.
$$
Since
\[
\Hom(A,\pi_*L)\cong \Hom(\pi^ *(A),L)
\]
there is a non-trivial map $\pi^ *(A) \to L$ which is necessarily injective. We also obtain that $h^ 0(D, L\otimes \pi^ *(A^ {-1}))>0$ and hence
$L$ is of the form $L=\pi^*A (\sum p_i)$, $p_i \in D$. Thus we have
\[
\widetilde {Bs}= B_{nss} \subset B_0
\]
and we are done.
\end{proof}
We use the description of the base locus to recover the covering $\pi:D \rightarrow C$ from the Prym variety $(P, \Xi)$.
Define the variety
$$
W:= \{ (A,\sum_i p_i) \in \Pic^g (C) \times D^{(\frac r2-2)} \mid \ A^{\otimes 2} (\sum_i \bar{p_i}) \simeq \omega_C \otimes \eta \},
$$
where $\bar{p_i}:=\pi(p_i)$.
\begin{prop}
Assume that $(C,\eta)$ is general and assume also that $r\ge \max \{6, \frac 23 (g+2)\}$. Then the natural morphism $W{\longrightarrow } B_0$ sending
\[
(A,\sum_{i=1}^{\frac{r}{2} - 2} p_i) \mapsto \pi^*(A)(\sum_{i=1}^{\frac{r}{2} - 2} p_i)
\]
is birational.
\end{prop}
\begin{proof}
The main difficulty of the proof is to show that for such a generic element the equality $h^0(C,A)=1$ holds. To prove this we consider the set
\[
W_{\eta}=\{A\in \Pic^g(C) \mid h^0(C,\omega_C \otimes \eta \otimes A^{-\otimes 2}) > 0 \},
\]
which by definition is the image of the first projection $W\rightarrow \Pic^g(C)$. Obviously if
\[
\deg (\omega_C \otimes \eta \otimes A^{-\otimes 2})=2g-2+\frac r2 -2g =\frac r2 -2 \ge g,
\]
then the condition on the cohomology is empty and $W_{\eta }=\Pic^g(C)$. In this case $h^0(C,A)=1$ generically for any $(C,\eta )$. This covers the
cases $r\ge 2g+4$. Assume now that $r =2g+2$. Since the second projection $W\rightarrow D^{(\frac r2 -2)}$ is an \'etale covering, $\dim W_{\eta}\le \dim W =\frac r2 -2=g-1$,
hence $W_{\eta}$ is a proper subvariety of $\Pic^g(C)$. We claim that it is not contained in the Brill-Noether locus $ W^1_g(C)$. If we are able to prove that
$\dim W_{\eta } =g-1$ then $W_{\eta } \not \subset W^1_g(C)\cong W^0_{g-2}(C)$. Since the cohomological condition that defines $W_{\eta }$ refers to $A^{\otimes 2}$,
we consider $\overline {W}_{\eta}$ the image of the \'etale map:
\[
W_{\eta} {\longrightarrow } \overline {W}_{\eta} \subset \Pic^{2g}(C),\quad A \mapsto A ^{\otimes 2}.
\]
Then, an element $L \in \overline {W}_{\eta}$ satisfies
\[
0<h^0(C,\omega_C \otimes \eta \otimes L^{-1})=h^0(C,L\otimes \eta ^{-1})
\]
by Riemann-Roch. Then $\overline {W}_{\eta}$ is isomorphic to the translate $\eta + W^0_{g-1}(C)$ of the theta divisor on $\Pic^{g-1}(C)$. Thus for any
$\eta \in T=\{\eta \mid \eta^{\otimes 2}\in W_{r}^0(C)\}$, $\dim W_{\eta}=g-1$, which proves the claim.
In the previous argument there is no restriction on $\eta $. In order to prove the statement for $r < 2g+2$ we shall show that for a generic $\eta $ in $T$
we have $W_{\eta} \not \subset W^1_g(C)$. We claim that the union of the subsets $W_{\eta}$, as $\eta $ varies in $T$, covers all of $\Pic^g(C)$.
In other words, if we define
\[
I:=\{(A,\eta ) \in \Pic^g(C)\times T \mid A\in W_{\eta} \},
\]
this is equivalent to showing that the first projection $I \rightarrow \Pic^g(C)$ is surjective. Set
$T_0=\{ \eta \in T \mid |\eta^{\otimes 2}| \ \textnormal{ has a reduced divisor} \}$.
We fix an arbitrary element $A\in \Pic^g(C)$. We ask for the existence of an $\eta \in T$ such that $h^0(C,\omega_C \otimes \eta \otimes
A^{-\otimes 2})>0$. Then $\eta $ has to be chosen in the intersection of the Brill-Noether locus
\[
T_A:= W^0_{\frac r2-2}+(A^{\otimes 2}\otimes \omega_C^{-1})
\]
with $T$. If $r\ge g$, then $T=\Pic^{\frac r2}(C)$ and the existence of such an $\eta $ is obvious as long as
$r\ge 6$.
Assume from now on that $r\le g$, in particular $\dim T=\dim W^0_r(C)=r$. Since the cohomology class of $T_A$ is a fraction of a power of the theta divisor,
the intersection $T_A\cap T$ is non-empty if
\[
\dim T_A + \dim T -g= \frac r2 -2 +r-g\ge 0.
\]
Hence for $r\ge \max \{6, \frac 23 (g+2)\}$ we have the claimed surjectivity. Thus, for a generic $\eta \in T$,
and therefore for a generic $\eta \in T_0$, the generic element in $W_{\eta }$ satisfies $h^0(C,A)=1$.
\vskip 3mm
To finish the proof of the proposition we observe that the second projection $W\rightarrow D^{(\frac r2-2)}$ is an \'etale covering of degree $2^{2g}$, hence for a
generic element $ (A,\sum_i p_i)$ in $W$ we can simultaneously assume that $\sum_i p_i$ is $\pi$-simple (i.e., it does not contain fibres of $\pi$) and $h^0(C,A)=1$.
Let us consider again the exact sequence (\ref{exact_seq_Mumford})
\[
0\,{\longrightarrow } A {\longrightarrow } \pi_*(\pi^*(A)(\sum_i p_i)) {\longrightarrow } A(\sum_i \bar {p_i})\otimes \eta^{-1} {\longrightarrow } \, 0.
\]
Since $A^{\otimes 2}(\sum_i \bar p_i)\cong \omega _C\otimes \eta $ the exact sequence becomes:
\[
0\,{\longrightarrow } A {\longrightarrow } \pi_*(\pi^*(A)(\sum_i p_i)) {\longrightarrow } \omega_C \otimes A^{-1} {\longrightarrow } \, 0.
\]
Notice that $H^0(C,\omega_C \otimes A^{-1}) \cong H^1(C,A)^{\vee}$ by Serre duality. By the Riemann-Roch theorem this group is trivial, since $\deg(A)=g$ and $h^0(C,A)=1$. Therefore:
\[
H^0(C,A) \cong H^0(C, \pi_*(\pi^*(A)(\sum_i p_i)) ).
\]
In other words, there is only one divisor in the linear series $| \pi^*(A)(\sum_i p_i) |$.
This immediately implies the generic injectivity of the map $W\rightarrow B_0$. Since it is obviously surjective we are done.
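For the reader's convenience, the vanishing of $H^1(C,A)$ used above is just the Riemann-Roch computation
\[
h^1(C,A)\ =\ h^0(C,A)-\deg A+g-1\ =\ 1-g+g-1\ =\ 0.
\]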
\vskip 3mm
Now we consider the case $g=1$ and $r\ge 6$. We simply remark that the only place in the proof where we use that $g\ge 2$ is in Proposition \ref{base-locus},
where we use rank $2$ vector bundles on curves of genus at least $2$ to show the inclusion $\widetilde {Bs} \subset B_0$. Let us prove it directly:
assume by contradiction that $L \in \widetilde {Bs}$ does not belong to $B_0$, which in this case means that $L$ (which satisfies $h^0(D,L)>0$ by
definition) is represented by an effective $\pi$-simple divisor. Then, the exact sequence (\ref{exact_seq_Mumford}) reads
\[
0{\longrightarrow } \mathcal O_C {\longrightarrow } \pi_*(L) {\longrightarrow } \mathcal O_C {\longrightarrow } 0,
\]
hence for any $\alpha \in JC\setminus \{0\}$ we obtain
\[
0=h^0(C,\pi_*(L)\otimes \alpha )=h^0(C,\pi_*(L\otimes \pi^*\alpha) )=h^0(D,L\otimes \pi^*(\alpha)),
\]
which is a contradiction since $L \in \widetilde {Bs}$.
\end{proof}
\vskip 3mm
Now we are in a position to finish the proof of the main Theorem of this section. According to the last Proposition the quotient of the action of
$JC_2=\Ker (\lambda_{\Xi})$ on the base locus $Bs$ of the linear system $|\Xi |$ is birational to $W/JC_2 = D^{(\frac r2 -2)}$. Since
$g(D)-1 =2g-2+\frac r2 >\frac r2 -2$ we can apply the main Theorem in \cite{mar} which gives a constructive method to recover the curve
$D$ from a variety birational to $D^{(\frac r2 -2)}$. Moreover the base locus $Bs$ is symmetric in $P$ and this symmetry corresponds to
the involution $\sigma ^{(\frac r2-2)}$ on the symmetric product of the curve. Applying again the result of \cite{mar} we recover also the involution
$\sigma $ on $D$ and therefore the whole covering $D\rightarrow D/\langle \sigma \rangle$.
\begin{rem} \label{key-obs}
The case of Corollary \ref{corolario-Torelli} ($r=6$ and $g=2$) is particularly simple. It is not hard to see that in this case, for a generic $\eta $,
$W$ and $B_0\cong Bs$ are isomorphic irreducible curves of genus $81$. In this case the condition on $\eta $ can be made precise: we need
the vanishing $h^0(C, \eta \otimes \omega_C^{-1})=0$.
The quotient of the action of $\Ker (\lambda_{\Xi})$ on $W$ gives directly the curve $D$ and the symmetry on $W$ induces the involution
$\sigma $ on $D$. Finally $D/\langle \sigma \rangle =C$.
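For the record, the genus $81$ follows from Riemann-Hurwitz: with $g=2$ and $r=6$ the double covering $\pi$ gives
\[
2g(D)-2\ =\ 2(2g(C)-2)+r\ =\ 4+6\ =\ 10, \qquad \text{so } g(D)=6,
\]
and, since $W\rightarrow D^{(\frac r2-2)}=D$ is \'etale of degree $2^{2g}=16$,
\[
2g(W)-2\ =\ 16\,(2g(D)-2)\ =\ 160, \qquad \text{so } g(W)=81.
\]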
\end{rem}
\section{Case $g=5$ and $r=2$}
In this section we deal with the map $\mathcal{P}_{5,2}:\mathcal{R}_{5,2} \rightarrow {\mathcal A}_5$ which by Corollary (2.3) in \cite{mp}
is generically finite. This is the last
open case of the generic Torelli problem stated in the introduction. We will prove at the end of this section that
$\mathcal P_{5,2}$ is generically injective.
In order to study the degree of $\mathcal{P}_{5,2}$ we use the classical Prym map $\mathcal{P}_6:\mathcal R_6 \rightarrow \mathcal A_5$ defined on irreducible unramified
double coverings of curves of genus $6$. To do so we first recall how this map was extended by Beauville in \cite{be_invent} to a proper map:
\[
\overline {\mathcal{P}_6}:\overline {\mathcal R}_6 {\longrightarrow } \mathcal A_5.
\]
We give the basic definitions for any $g$. To start with, we borrow from \cite{be_invent} the description of the elements belonging to
$\overline {\mathcal R}_g$, called admissible coverings (see condition (**) and Lemma (5.1) in loc. cit.).
\begin{defn} Let $\tilde C$ be a connected curve with only ordinary double points and arithmetic genus $p_a(\tilde C)=2g-1$, and let $\sigma $ be an involution
on $\tilde C$. Then $\tilde C {\longrightarrow } \tilde C/\langle \sigma \rangle$ is an admissible covering if and only if the following conditions are fulfilled:
\begin{enumerate}
\item $\sigma $ has no fixed non-singular points.
\item At the nodes fixed by $\sigma $ the two branches are not exchanged.
\item The number of nodes exchanged under $\sigma $ equals the number of irreducible components of $\tilde C$ exchanged under $\sigma$.
\end{enumerate}
\end{defn}
In particular $\sigma$ is not the identity on any component.
Under these conditions the arithmetic genus of $\tilde C/\langle \sigma \rangle$ is $g$ and the Prym variety attached to the covering can be defined in a way
similar to the standard Prym construction, and it is a ppav.
An instance of admissible covering is the following: consider two copies of a smooth curve $C$ of genus $g-1$ each one with two marked points.
Call $p_1, q_1$ the marked points in the first copy and $p_2, q_2$ the corresponding points in the second copy. Then the curve
\[
\tilde C = C \cup C / (p_1\sim q_2,p_2\sim q_1)
\]
has a natural involution $\sigma$ exchanging the components. Observe that $\tilde C/\langle \sigma \rangle$ is the irreducible nodal curve
$C/ p_1 \sim p_2$. This is an admissible covering whose associated Prym variety is the Jacobian of $C$. These elements
$(\tilde C, \sigma )$ are called Wirtinger coverings. Observe that the closure of the set parametrizing these objects describes a divisor in
$\overline {\mathcal R}_g$ which is birational to the moduli space $\mathcal M_{g-1,2}$ of curves of genus $g-1$ with two marked points.
Hence this divisor is irreducible. We denote it by $\Delta^{W}$.
Now we come back to our particular situation, hence $g=6$.
Let $\pi:D\rightarrow C$ be a covering in $\mathcal R_{5,2}$. By glueing in $C$ the two branch points and in $D$ the two ramification points we get an
admissible covering. These curves form a divisor in the boundary of $\overline {\mathcal R}_{6}$, birational to $\mathcal R_{5,2}$ (hence irreducible)
that we denote by $\Delta^n$. A generic element in this divisor is an irreducible admissible covering of a curve with exactly one node.
Moreover, the composition
\[
\mathcal R_{5,2} \hookrightarrow \Delta ^n \hookrightarrow \overline {\mathcal R}_{6} {\overset{\overline {\mathcal{P}_6}} \longrightarrow } \mathcal A_5
\]
is the Prym map $\mathcal{P}_{5,2}$ we are considering.
\begin{rem}
Let us consider the forgetful map $\pi :\mathcal R_6 \rightarrow {\mathcal M}_6$ sending $[\tilde C\rightarrow C]$ to the class of the curve $C$. Let $\overline {{\mathcal M} }_6$ be the
Deligne-Mumford compactification of the moduli space of curves. Denote by $\overline {\mathcal{R}}_6'$ the normalization of $\overline {{\mathcal M}}_6$ in the
function field of $\mathcal{R}_6$. Then there is a map $\pi': \overline{\mathcal{R}}_6'\rightarrow \overline {{\mathcal M}}_6$ which has been studied in \cite{fl}. The preimage of the
divisor of nodal curves $\Delta_0 \subset \overline {{\mathcal M}}_6$ under $\pi'$ is
\[
\pi'^*(\Delta _0)=\Delta^{na} + \Delta ^W +2 \Delta ^n,
\]
where the general element $\tilde C \rightarrow C$ of $\Delta^{na}$ is not admissible (in Beauville's sense) since $\tilde C$ is irreducible with two nodes
interchanged by $\sigma $. From this one sees that $\pi'$ is ramified along $\Delta^n$ (which explains the notation $\Delta ^{ram}$ in \cite{fl}).
Of course, there are other components of codimension $1$ in the boundary of $\overline {\mathcal{R}}_6'$ (and of $\overline{\mathcal{R}}_6$) that are in the
preimage of $\Delta_{i,6-i}\subset \overline{{\mathcal M}}_6$. We refer to \cite{fl} and the references therein for more details on this map for any $g$.
\end{rem}
In order to study the degree of $\mathcal{P}_{5,2}$ our strategy is to look at the preimage of the locus $\mathcal C$ of Intermediate Jacobians of smooth cubic
threefolds, embedded in ${\mathcal A}_5$ via the Intermediate Jacobian map (see \cite{cg}). We start by recalling some basic facts on the geometry of cubic
threefolds and the representation of their Intermediate Jacobians as Prym varieties. All the results are very classical and can be found in \cite{cg}, \cite{ds},
\cite{do} and \cite{murre}.
Let $V$ be a smooth cubic threefold in $\mathbb P^4$. The Intermediate Jacobian $JV$ of $V$ is isomorphic to the Prym variety of many
elements in $\overline {\mathcal{R}}_6$.
This is done as follows: given a line $l\subset V$ and a $2$-plane $\Pi \subset \mathbb P^4$ disjoint from $l$, the projection $V\setminus {l}\rightarrow \Pi$
extends to a morphism on the blowing-up $ {V_l}$ of $V$ along $l$. The fibres of this map are conics and the discriminant curve on $\Pi$ parametrizing
the points where these
the points where these
conics degenerate is a quintic $Q_l$. The pairs of lines on the degenerate fibres give a curve $\tilde Q_l$ in the Grassmannian of lines in $\mathbb P^4$.
Then $\tilde Q_l \rightarrow Q_l$ is an admissible covering and its Prym variety is isomorphic as ppav to $JV$. Hence $\overline {\mathcal{P}_6}^{-1}(V)$ contains the
set of all these coverings of plane quintics, which is in bijection with the Fano surface $F(V)$ of the lines contained in $V$. In \cite[Part V, \S 1]{ds} it is
proved that in fact the fibre is exactly $F(V)$ for any $V$. Since we are interested in the fibre restricted to the divisor $\Delta^n$ we need to know in
which cases the quintic $Q_l$ is not smooth, in other words, the first step is to compute
\[
F(V) \cap \Delta^n
\]
for any cubic threefold $V$.
This is related to the geometry of the cubic threefolds due to the following result of Beauville:
\begin{prop}(\cite[Proposition 1.2]{be_pryms_et_ji})\label{beauville_conic_bundles}
Let $V_l \rightarrow \Pi$ be a conic bundle as above and let $Q_l$ be the discriminant curve. Then
\begin{enumerate}
\item The curve $Q_l$ has at most ordinary double points as singularities.
\item If $s$ is a regular point in $Q_l$, then the corresponding conic has only one singular point, i.e. is formed by two different lines.
\item If $s$ is a node of $Q_l$, then the corresponding conic is a double line.
\end{enumerate}
\end{prop}
Hence $Q_l$ is singular if and only if there is a plane in $\mathbb P^4$ intersecting $V$ in $l+2r$, where $r\in F(V)$. So we are interested in the following two sets of lines:
\[
\begin{aligned}
&\Gamma =\{l\in F(V) \mid \exists \text{ a plane } L \text { and a line } r\in F(V) \text{ with } V\cdot L=l+2r\}, \\
&\Gamma '=\{r\in F(V) \mid \exists \text{ a plane } L \text { and a line } l\in F(V) \text{ with } V\cdot L=l+2r\}.
\end{aligned}
\]
According to Proposition \ref{beauville_conic_bundles}, the set $\Gamma$ parametrizes the lines $l$ for which $Q_l$ is singular.
The curve $\Gamma'$ has received more attention in the literature. It appears in \cite[section 6]{cg} as the set of lines of ``second type''.
In Proposition 10.21 of that paper it is shown that $\Gamma '$ has pure dimension $1$ and, as divisor on the Fano surface, it is linearly equivalent to
twice the canonical divisor. Moreover, Murre proved that $\Gamma '$ ($\mathcal F_0$ in Murre's notation) is smooth, not necessarily connected,
and $\Gamma$ ($\mathcal F_0'$ with his notation) is Zariski-closed of dimension at most $1$ (see \cite[Corollary (1.9), Lemma (1.11)]{murre}).
It is easy to go further with the techniques of \cite{murre} to show that in fact $\Gamma $ is also a curve.
To this end, let us consider the incidence variety
\[
I=\{(l,r) \in \Gamma \times \Gamma' \mid \text {there is a plane } L \text{ with } L\cdot V=l+2r\}.
\]
By Proposition \ref{beauville_conic_bundles} the discriminant curve $Q_l$ has at most a finite number of singularities, hence we have that
$I\rightarrow \Gamma $ is finite-to-one. The following lemma implies that $I\rightarrow \Gamma '$ is bijective.
\begin{lemma}
Given a line $r\in \Gamma '$ there is only one line $l$ such that there is a plane $L$ with $L\cdot V=l+2r$.
\end{lemma}
\begin{proof}
We fix $r$, $l$ and $L$ as in the statement. We assume $r\neq l$; a similar proof works for $r=l$.
We can choose a reference system in such a way that $(1:0:0:0:0), (0:1:0:0:0)\in r$ and $(1:0:0:0:0), (0:0:1:0:0)\in l$. Hence, using coordinates
$x,y,z,u,v$, the equations of $r$ and $l$ are $z=u=v=0$ and $y=u=v=0$ respectively. Moreover $L$ has equations $u=v=0$.
Our hypothesis implies that the cubic polynomial $F$ defining $V$ is of the form:
\[
F = c\, y z^2 \, + u \, G_2 \, + \, v \,H_2,
\]
where $G_2,H_2$ are the equations of two quadrics $Q_1,Q_2$ in $\mathbb P^4$. Observe that if $c=0$ then, by an easy computation, the points satisfying
$G_2=H_2=u=v=0$ are singularities of $V$, which is a contradiction. So we can assume $c=1$.
For convenience, in the later computation we decompose the quadratic forms $G_2,H_2$ separating the part in the variables $x,y$:
\[
\begin{aligned}
&G_2\,=\, A(x,y)\,+\,z\,L_1+\,u \,L_2 + \,v\, L_3 \\
&H_2\,=\, B(x,y)\,+\,z\,M_1+\,u \,M_2 + \,v\, M_3,
\end{aligned}
\]
where $L_i,M_i$ are linear forms.
Hence we have
\begin{equation}\label{cubic_equation}
\begin{aligned}
F\,=\, & y z^2 \, + u \, (A(x,y)\,+\,z\,L_1+\,u \,L_2 + \,v\, L_3) \, + \, \\ & v \,( B(x,y)\,+\,z\,M_1+\,u \,M_2 + \,v\, M_3).
\end{aligned}
\end{equation}
The planes through $r$ are parametrized by the points of the plane $x=y=0$. We want to know for which points $p=(0:0:\alpha:\beta:\gamma)$
the plane $r\vee p$ intersects $V$ in $2r+l'$ for some line $l'$.
A generic point of $r\vee p$ is of the form $(r:s:t \alpha : t \beta : t\gamma )$ for some parameters $r,s,t$. Substituting in (\ref{cubic_equation})
we have to impose that $t$ (the equation of $r$ in the plane) appears with multiplicity $2$:
\[
\alpha ^2st^2+ t \beta (A(r,s)+t(\alpha L_1+\dots))
+ t\gamma (B(r,s)+t(\alpha M_1+\dots )).
\]
So the condition is $\beta A(r,s)+ \gamma B(r,s)=0$ for any $r,s$. The next claim implies that $\beta=\gamma =0$ and therefore the only solution
corresponds to the plane $L$ and hence $l$ is unique.
{\bf Claim:} If the quadratic forms $A(x,y), B(x,y)$ are not linearly independent, then $V$ is singular.
Indeed, assume that $B(x,y)=\lambda A(x,y)$ for some constant $\lambda$. Then $F$ is of the form:
\[
y z^2 \, + u \, (A(x,y)\,+\,z\,L_1+\,u \,L_2 + \,v\, L_3) \, + \, v \,( \lambda A(x,y)\,+\,z\,M_1+\,u \,M_2 + \,v\, M_3).
\]
One easily checks that the points which satisfy $A(x,y)=z=u=v=0$ are non-trivial solutions of the system of the partial derivatives of $F$, hence $V$
is singular.
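For completeness, here is the computation behind the last claim (under the harmless convention that the linear forms $L_i,M_i$ may involve all five variables). Any non-trivial zero $(x_0:y_0)$ of the quadratic form $A$ gives a point $p=(x_0:y_0:0:0:0)$, and
\[
F_x(p)=F_y(p)=F_z(p)=0,\qquad F_u(p)=A(x_0,y_0)=0,\qquad F_v(p)=\lambda A(x_0,y_0)=0,
\]
since every term of $F_x$, $F_y$ and $F_z$ contains one of $z,u,v$, while $F_u \equiv A$ and $F_v \equiv \lambda A$ modulo $(z,u,v)$. Hence $p$ is a singular point of $V$.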
\end{proof}
\begin{corollary}
For any smooth cubic threefold, $\Gamma $ is a curve dominated by $\Gamma'$.
\end{corollary}
In the case of a general cubic threefold we can go further in the description of the properties of the curves $\Gamma $ and $\Gamma '$:
\begin{prop}
For a general cubic threefold $V$ the curves $\Gamma $ and $\Gamma '$ are irreducible and $\Gamma '$ is the normalization of $\Gamma$.
\end{prop}
\begin{proof}
We first prove the irreducibility. This is a consequence of the fact that for a general $V$ the Fano surface $F(V)$ has Picard number $1$ (see \cite[p. 382]{rou}).
Indeed, recall from \cite[section 10]{cg} that $\Gamma '$ is bicanonical in $F(V)$ and that the canonical divisor is very ample, giving the Pl\"ucker
embedding $F(V)\hookrightarrow Grass(1,4)\hookrightarrow \mathbb P^9$. Therefore the positive generator of the N\'eron-Severi group is ample.
Thus, if $\Gamma'= T_1 \cup T_2$ then $T_1 \cdot T_2>0$ contradicting the smoothness proved by Murre. Since $\Gamma '$ dominates $\Gamma$
both are irreducible. We only have to prove that the map $\Gamma ' {\longrightarrow } \Gamma$ has degree $1$.
Following (5.13) in \cite{do}, there is an involution $\lambda $ on the fibre of a generic $JV\in \mathcal C$ such that, given a line $l\in \Gamma$, the preimages $r\in \Gamma'$ correspond to odd semicanonical pencils on a smooth Prym curve $(C,\eta)=\lambda(\tilde Q_l,Q_l)$ (that is, theta characteristics $L$ with $h^0(C,L)=2$ and $h^0(C,L\otimes \eta)=1$). The locus of curves with a semicanonical pencil is an irreducible divisor and a generic element of this divisor has only one semicanonical pencil; this implies our result.
\end{proof}
\vskip 3mm
We come back to our computation of the degree of the map $\mathcal{P}_{5,2}$ identified with $\overline {\mathcal{P}_6}_{|\Delta ^n}$. Observe that, by the previous
results, any $JV\in \mathcal C$ can be represented as $P(\tilde Q_l,Q_l)$ for a quintic plane curve with only one node. Notice that $\tilde Q_l$ must be
irreducible, otherwise $\tilde Q_l \rightarrow Q_l$ would be a Wirtinger cover and $JV$ would be a Jacobian contradicting the main result in \cite{cg}.
Summarizing, we have that:
\begin {enumerate}
\item [(a)] The locus $\mathcal C$ is contained in $\overline {\mathcal{P}_6}(\Delta ^n) $.
\item [(b)] The fiber over $V$ of $\overline {\mathcal{P}_6}_{|\Delta ^n}$ is $\Gamma = \Delta ^n \cap F(V)$.
\item [(c)] The locus $\mathcal S' = (\overline {\mathcal{P}_6}_{|\Delta ^n})^{-1}(\mathcal C)$ has dimension $11$, all the fibers of $\mathcal S' \rightarrow \mathcal C$
are $1$ dimensional and the generic fibre is irreducible.
\end {enumerate}
In particular the irreducible component $\mathcal S\subset \mathcal S'$ containing the generic irreducible fibers is the only component of
$\mathcal S'$ of dimension $11$. Since the map is closed, $\mathcal S=\mathcal S'$.
Although the map $\overline {\mathcal{P}_6}_{|\Delta ^n}$ is generically finite, the fibres over $\mathcal C$ are one dimensional as we have already seen.
Nevertheless, the degree of $\overline {\mathcal{P}_6}_{|\Delta ^n}$ can be computed at this locus by using the concept of local degree explored in
\cite[Part I, section 3]{ds}. The local degree $d$ of $\overline {\mathcal{P}_6}_{|\Delta ^n}$ along $\mathcal S$ is the degree of the map obtained from
$\overline {\mathcal{P}_6}_{|\Delta ^n}$ by localizing at $\mathcal S$ in the source and at $\mathcal C$ in the target:
\[
(\Delta ^n)_\mathcal S {\longrightarrow } (\mathcal A_5)_\mathcal C.
\]
We have that $\deg(\overline {\mathcal{P}_6}_{|\Delta ^n})=d$.
Assume that $\overline {\mathcal{P}_6}_{|\Delta ^n}$ lifts to a regular map from the blowup $\widetilde \Delta^n$ of $\Delta^n$ along $\mathcal S$ to the blowup
$\widetilde {\mathcal A}_5$ of ${\mathcal A}_5$ along $\mathcal C$, sending the exceptional divisor $\tilde {\mathcal S}$ to the exceptional divisor $\tilde {\mathcal C}$:
\[
\widetilde {\overline {\mathcal{P}_6}}_{|\Delta ^n}:\tilde {\mathcal S} {\longrightarrow } \tilde {\mathcal C}.
\]
Then:
\begin{lemma}\cite[Lemma 3.2]{ds} \label{local_degree}
If the natural linear map $N_{\mathcal S|\Delta^n,s}{\longrightarrow } N_{\mathcal C|{\mathcal A}_5,\overline {\mathcal{P}_6}(s)}$ induced by
\[
d(\overline {\mathcal{P}_6}_{|\Delta ^n}):T_{\Delta^n}{\longrightarrow } T_{{\mathcal A}_5}
\]
is injective at any $s\in \mathcal S$, then the local degree at $\mathcal S$ equals the degree of $\widetilde {\overline {\mathcal{P}_6}}_{|\Delta ^n}:\tilde
{\mathcal S} {\longrightarrow } \tilde {\mathcal C}$.
\end{lemma}
In other words, under some conditions on the behavior of the normal bundle, we can compute the degree by looking at the degree of the map between
exceptional divisors. In part V of \cite{ds} the authors proved that the conditions of the lemma are satisfied by the usual Prym map $\overline {\mathcal{P}_6}:
\overline {\mathcal{R}}_6\rightarrow {\mathcal A}_5$ with respect to $\mathcal C$. Moreover they give a nice geometrical description of the blowups involved in the picture.
In this way they are able to confirm once again that the degree of $\mathcal{P}_6$ is $27$ (see also \cite[\S 4]{do}). The next proposition is a summary of the
results in \S1 and \S2 in part V of \cite{ds}:
\begin{proposition} Let ${\mathcal F}$ be the closure of the union of all the Fano surfaces $F(V)$ of smooth cubic threefolds $V$. Denote by $\pi_1:\widetilde
{\overline {\mathcal R}}_6\rightarrow \overline {\mathcal R}_6$ the blowup of $\overline {\mathcal R}_6$ along ${\mathcal F}$ and by $\pi_2: \widetilde {{\mathcal A}_5}\rightarrow {\mathcal A}_5$ the blowup of ${\mathcal A}_5$ along $\mathcal C$. Then:
\begin{enumerate}
\item $\overline {\mathcal{P}_6}^{-1}(\mathcal C)={\mathcal F}$.
\item For a cubic threefold $V$ the fibre $\pi_2^{-1}(V)$ can be identified with the dual of the ambient space $\mathbb P^4$ of $V$.
\item For an admissible covering $(\widetilde {Q_l}, Q_l)\in {\mathcal F}$ the fibre $\pi_1^{-1}(\widetilde {Q_l},Q_l)$ can be identified with the dual of
the ambient space $\mathbb P^2$ of $Q_l$.
\item The map between exceptional divisors $\tilde {{\mathcal F}} \rightarrow \tilde {\mathcal C}$ sends $(\tilde Q_l,Q_l,m)$ to $(V,l\vee m)$. In particular it is
well defined and injective everywhere, and the conditions of Lemma (\ref{local_degree}) are satisfied.
\end{enumerate}
\end{proposition}
\begin{corollary} Let $\mathcal S$ be the preimage of $\mathcal C$
under the map $\overline {\mathcal{P}_6}_{|\Delta ^n}:\Delta^n\rightarrow {\mathcal A}_5$, and let $\tilde {\mathcal S}$ be the exceptional divisor of the blowup of $\Delta^n$ along
$\mathcal S$. Then the degree of $\overline {\mathcal{P}_6}_{|\Delta ^n}$ equals the degree of the map $\tilde {\mathcal S} \rightarrow \tilde {\mathcal C}$ sending
$(\tilde Q_l,Q_l,m)$ to $(V,l\vee m)$.
\end{corollary}
\begin{proof}
Since the inclusion $\Delta ^n\subset \overline {\mathcal{R}}_6$ is a closed immersion, the universal property of blowups (see \cite[Corollary 7.15]{har}) implies that the
previous Proposition is valid for the restriction to $\Delta^n$. In particular,
the conditions of Lemma \ref{local_degree} are satisfied and the local degree (hence the degree) can be computed by looking at the map between
exceptional divisors. \end{proof}
Now we can prove the main result of this section:
\begin{thm} \label{gen_injective}
The Prym map $\mathcal{P}_{5,2}: \mathcal R_{5,2}\rightarrow {\mathcal A}_5$ is generically injective.
\end{thm}
\begin{proof}
Due to the last corollary, the theorem reduces to checking the following fact: let $V$ be a generic smooth cubic threefold and let $l\subset V$
be a generic line
in $\Gamma $ (hence $Q_l$ is a nodal quintic curve). Given a generic hyperplane $H$ in $\mathbb P^4$ containing $l$, there is no other line of
$\Gamma $ contained in $H$.
To prove this we consider
\[
\mathcal H_0=\{(l,H)\in \Gamma \times \mathbb P^{4*} \mid l\subset H \}.
\]
This is a $3$-dimensional closed subvariety, since all the fibres of $\mathcal H_0 \rightarrow \Gamma $ are isomorphic to $\mathbb P^2$. The variety we are
interested in
is $\mathcal H=p_2(\mathcal H_0)$. The image of the rational map
\[
\Gamma \times \Gamma \dasharrow \mathcal H
\]
sending $(l_1,l_2)$ to the hyperplane $l_1\vee l_2$ is a $2$-dimensional family $\mathcal H_2$ of hyperplanes contained in $\mathcal H$. To finish the proof
of the theorem it is enough to show that $\dim \mathcal H=3$. Indeed, otherwise for a generic $H\in \mathcal H$ there are infinitely many lines of $\Gamma $
contained in $H$, thus the whole of $\Gamma $ gives lines in $H$. Taking three linearly independent hyperplanes $H_1, H_2, H_3\in \mathcal H$
such that all the lines of $\Gamma $ lie in the three hyperplanes we get a contradiction.
\end{proof}
\appendix
\section{\\Geometry of $\mathcal{R}_{2,6}$ via Kummer quartic surfaces \\ by Alessandro Verra}
In this appendix we describe the moduli space $\mathcal R_{2,6}$ via the geometry of the sextic models in $\mathbf P^4$ of a smooth, integral genus
two curve $C$ and of its naturally associated Kummer quartic surface. Our goal is to derive from this some recreational geometry and to prove the
following result.
\begin{theorem} \label{rationality}
$\mathcal R_{2,6}$ is rational.
\end{theorem}
To perform the proof let us introduce some preliminaries. Recall that a point of $\mathcal R_{2,6}$ is defined by a triple $(C, \eta, b)$ where $C$ is a curve
as above, $\eta \in \Pic^3C$ and $b$ is a smooth element of $\vert \eta^{\otimes 2} \vert$. Since $\eta^{\otimes 2}$ has degree $6$ we have $\dim \vert
\eta^{\otimes 2} \vert = 4$. Furthermore, a well known theorem of Mumford (\cite{mu1}) implies that $\eta^{\otimes 2}$ defines a projectively normal
embedding
$$
C \subset \mathbf P^4,
$$
with ideal sheaf $\mathcal I_C$ generated by quadrics. The linear system $\vert \mathcal I_C(2) \vert$ is $3$-dimensional and reflects the properties of
the Kummer surface of $\Pic^0 C$. We recall from \cite{br} some of these properties. First observe that $C$ is contained in the scroll
$$
R = \cup \langle x + \sigma(x) \rangle \ , \ x \in C,
$$
where $\sigma$ is the involution on the covering exchanging the sheets over $C$. So $R$ is either a cone over a rational normal cubic or a
smooth cubic scroll, biregular to $\mathbf P^2$ blown up at one point. Notice that $R$ is a cone if and only if $C$ is 3-canonically embedded.
In particular the moduli points of triples $(C, \eta, b)$ such that $R$ is not a cone fill up a dense open set of $\mathcal R_{2,6}$. Therefore, it will
be not restrictive to assume that $R$ is smooth. \par
The ideal sheaf $\mathcal I_R$ of $R$ is generated by quadrics and $\dim \vert \mathcal I_R(2)
\vert = 2$. This implies that $C = R \cdot Q$, where
$Q$ is a quadric smooth along $C$. Hence, by Bertini theorem, it follows that a general $Q \in \vert \mathcal I_C(2) \vert$ is smooth. Then the discriminant
locus of $ \vert \mathcal I_C(2) \vert$ is a surface, namely the quintic surface
$$
D := \lbrace Q \in \vert \mathcal I_C(2) \vert \ \vert \ \Sing Q \neq \emptyset \rbrace.
$$
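For completeness, the degree count is the standard one: a quadric in $\mathbf P^4$ corresponds to a symmetric $5 \times 5$ matrix, so along a general pencil $\lbrace Q_t \rbrace \subset \vert \mathcal I_C(2) \vert$ with matrices $A_t$ the singular members are the solutions of
$$
\det A_t = 0 ,
$$
an equation of degree $5$ in $t$; hence the discriminant locus $D$ is a quintic surface.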
Notice that $D$ is not integral. Indeed, by Lefschetz hyperplane theorem, no quadric containing $R$ is smooth. Hence the net $\vert \mathcal I_R(2) \vert$
is a plane contained in $D$. From now on we will use the following notation:
$$
\mathbf P^3 := \vert \mathcal I_C(2) \vert \quad \text {and} \quad \mathbf P^2 := \vert \mathcal I_R(2) \vert.
$$
We describe $\mathbf P^2$ explicitly. The surface $R$ contains a unique exceptional line that we will denote by $E$. We can associate to each point
$v \in R$ a unique quadric $Q_v$ containing $R$ in the following way. Let $p_v: R \to \mathbf P^3$ be the projection from $v$. Then $p_v(R)$ is a
quadric surface and the cone of vertex $v$ over $p_v(R)$ is a quadric $Q_v$ through $R$. If $Q'_v$ is a second quadric through $R$ singular at $v$,
then $Q_v \cap Q'_v$ is a cone of vertex $v$ over a curve, which implies that $R$ is
a cone, so we get a contradiction. Therefore the assignment $v \to Q_v$ defines a morphism
$$
\beta: R \to \mathbf P^2.
$$
\begin{proposition} \label{contraction}
The map $\beta$ is the contraction of $E$. Moreover, if $v \not \in E$ then $\beta(v)$ has rank $4$, while $\beta(E)$ has rank $3$. In the latter
case $E = \Sing \beta(E)$.
\end{proposition}
\begin{proof} Since the arguments are standard we only sketch the proof. Let $v \in R$ then $p_v(R)$ is obtained via the elementary transformation of
center $v$ of the ruled surface $R$ ( \cite{hh}). Hence $p_v(R)$ is a smooth quadric surface if $v \not \in E$. If $v \in E$ then $p_v(R)$ has rank $3$,
$p_v(R)$ is singular at $p_v(E)$ and $\Sing Q_v = E$. Now it is easy to see that there is a unique rank $3$ quadric $Q_E$ containing $R$. Hence we
have $Q_v = Q_E$ and $\beta$ is constant on $E$. Then the statement follows.
\end{proof}
To continue the study of $D$ we consider the natural involution
$$
s: \Pic^3 C \to \Pic^3C
$$
defined by $s(L) := \eta^{\otimes 2}(-L)$, so $\Pic^3 C / \langle s \rangle$ is
biregular to the Kummer surface of $\Pic^0 C $. Let $L \in \Pic^3 C$ then $\dim \vert L \vert = 1$ and each $d \in \vert L \vert$ satisfies $\dim \langle d
\rangle = 2$. Otherwise we would have $ \dim \vert \eta^{\otimes 2}(-d) \vert \geq 2$, which is impossible. This defines a quadric through $C$, namely
$$
Q_L := \bigcup_{d \in \vert L \vert} \langle d \rangle,
$$
the union of the planes generated by the divisors in $|L|$. Let
$$
u: \Pic^3 C \to \mathbf P^3, \qquad L \mapsto Q_L
$$
and $S := u(\Pic^3 C)$. Since $Q_L$ is ruled by planes, then $Q_L$ is singular and hence $u(\Pic^3 C) \subset D$.
\begin{lemma}
At a general $Q_L$ the fibre of $u: \Pic^3 C \to S$ is $\lbrace L, s(L) \rbrace$.
\end{lemma}
\begin{proof} From $L \otimes s(L) \cong \mathcal O_C(1)$ it easily follows that $Q_L = Q_{s(L)}$. Let $\vert L \vert$ be a base-point-free pencil,
then $L$ is uniquely reconstructed from the ruling of planes $\langle d \rangle$ of $Q_L$. Since for a general $L \in \Pic^3 C$, $\vert L \vert$ and
$\vert s(L) \vert$ are base point-free distinct pencils, the statement follows.
\end{proof}
To complete the description of $S$ we consider the natural theta divisor
$$
\Theta := \lbrace M \in \Pic^3 C \ \vert \ M \cong \omega_C(x), \ x \in C \rbrace
$$
and its conjugate by the involution $s$
$$
\Theta_{\eta} := s(\Theta) = \lbrace \eta^{\otimes 2}(-M) \ , \ M \in \Theta \rbrace.
$$
Observe that $M \in \Theta$ if and only if $\vert M \vert$ is base-point-free.
\begin{theorem} \
\begin{enumerate}
\item It holds $u^* \mathbf P^2 = \Theta + \Theta_{\eta}$;
\item the linear system $\vert \Theta + \Theta_{\eta} \vert$ defines the map $u: \Pic^3C \to \mathbf P^3$;
\item the surface $S$ is the Kummer quartic model of $\Pic^3 C / \langle s \rangle$.
\end{enumerate}
\end{theorem}
\begin{proof} (1) Let $Q \in \mathbf P^2$ be of rank 4 then $\Sing Q$ is a point $o \in R$. Consider the projection $p_o: R \to \mathbf P^3$ from $o$, then
$p_o(R)$ is a smooth quadric and one of its rulings is the image of the ruling of lines of $R$. If $o \not \in C$ then $p_o(C)$ is a curve of type (2,4) in
$p_o(R)$ and the two rulings of planes of $Q$ cut $\vert \omega_C \vert$
and a base point free pencil of degree 4 on $C$. Hence $Q$ is not in $S$. If $o \in C$ then $Q = u(L)$, where $L $ is
$\omega_C(o)$ and then $L \in \Theta$ and $s(L) \in \Theta_{\eta}$. Secondly, let $Q$ be of rank 3 then $\Sing Q \cap C = \lbrace o_1, o_2 \rbrace$
and one has $Q = u(L)$, where
$ \lbrace L, s(L) \rbrace = \lbrace \omega_C(o_1), \omega_C(o_2) \rbrace \subset \Theta$. The discussion implies $ u^{-1}(\mathbf P^2) = \Theta \cup
\Theta_{\eta}$.
Now $u: \Pic^3C \to S$ is a morphism of degree 2 and $\deg S \leq 4$, since $D$ has degree 5. This easily implies $\deg S = 4$ and $u^* \mathbf P^2 =
\Theta + \Theta_{\eta}$.
This shows (1) and (1) implies (2) and (3). \end{proof}
It is well known that $\Sing S$ consists of sixteen nodes. In our construction these are the images of the fixed points of $s$, in other words
$$
\Sing S = \lbrace Q_L \ \vert \ L^{\otimes 2} \cong \mathcal O_C(1) \rbrace.
$$
Since $L = s(L)$ the quadric $Q_L$ has a unique ruling of planes and rank $3$. Hence, $Q_{\eta} \in \Sing S$ since $s(\eta) = \eta$, and the rank $3$
quadric $Q_{\eta}$ defines a node of $S$.
For a general triple $(C, \eta, b) \in \mathcal R_{2,6}$ it is not restrictive to assume $\eta \not \in \Theta$. Then $\vert \eta \vert$ is base point free and
$\Sing Q_{\eta} \cap C = \emptyset$, where $\Sing Q_{\eta}$ is a line. By Proposition \ref{contraction} this implies that
$$
C = Q_{\eta} \cdot R.
$$
Since $R$ is unique in $\mathbf P^4$ up to projective automorphisms, we fix it. Then a general pair $(C, \eta)$ is defined
via the above construction by a rank $3$ quadric not containing $R$. \par
\begin{proof}[Proof of Theorem~\ref{rationality}]
Let $G(1,4)$ be the Grassmannian of lines of $\mathbf P^4$; we use the same notation for a line $\ell \subset \mathbf P^4$ and its parameter
in $G(1,4)$. Let $\mathcal I_{\ell}$ be the ideal sheaf of $\ell$. Then $\vert \mathcal I^2_{\ell}(2) \vert$ is $5$-dimensional. We set
$$
\mathbf P^5_{\ell} := \vert \mathcal I^2_{\ell}(2) \vert .
$$
This linear system parametrizes all the quadrics singular along $\ell$ and its general
member has rank $3$. Let $\ell \in G(1,4)$ and $Q \in \mathbf P^5_{\ell}$ be general. Then $Q$ defines a smooth curve $C := Q \cdot R$.
By the previous remarks $C$ is a general curve of genus $2$. Moreover, the ruling of planes of $Q$ cuts on $C$ a pencil $\vert \eta \vert$, where
$\eta \in \Pic^3C$ and $\eta^{\otimes 2} \cong \mathcal O_C(1)$. Let $\mathbf P^{4*} = \vert \mathcal O_{\mathbf P^4}(1) \vert$. Then a general
$(Q,H) \in \mathbf P^5_{\ell} \times \mathbf P^{4*}$ defines a triple $(C, \eta, b)$ where $b := H \cdot C$, with $(C, \eta)$ is constructed as above.
This defines a rational map
$$
f_{\ell}: \mathbf P^5_{\ell} \times \mathbf P^{4*} \to \mathcal R_{2,6}
$$
sending $(Q,H)$ to the moduli point of $(C, \eta,b)$. Notice that $f_{\ell}$ is a map of varieties of the same dimension, depending of course on $\ell$.
\par
Let $G_{\ell}\subset \Aut \mathbf P^4$ be the stabilizer of $R \cup \ell$. It is clear that $G_{\ell}$ acts linearly on the
two factors of $\mathbf P^5_{\ell} \times \mathbf P^{4*}$. Then we consider its action on this product, sending $(Q,H)$ to $a(Q,H) := (a(Q), a(H))$.
First we show that $f_{\ell}$
is birationally the quotient map of this action.
\par Assume $f_{\ell}(p_1) = f_{\ell}(p_2)$ for two general points $p_i = (Q_i, H_i)$, $i = 1, 2$. Let $C_i := Q_i \cdot R$ and let $\eta_i \in \Pic^3 C_i$
be the line bundle defined by the ruling of $Q_i$. Then $C_1$, $C_2$ are biregular to the same general curve $C$ and $\eta_1$, $\eta_2$ are isomorphic.
This implies that there exists $a \in \Aut \mathbf P^4$ such that $a(C_1) = C_2$ and $a(Q_1) = Q_2$. We can assume that $\mathcal O_{C_i}(1)$ is
general in $\Pic^6 C_i$ so that the stabilizer of $C_i$ in $\Aut \mathbf P^4$ is trivial. Let $b_i = C_i \cdot H_i$ then it follows $a(b_1) = b_2$ and $a(H_1) = H_2$. Moreover, notice that
the ruling of lines of $R$ is invariant by $a$, since $a(C_1) = C_2$. Hence $a(R) = R$. Since $a(Q_1) = a(Q_2)$ then $a(\ell) = \ell$. This implies that
$a \in G_{\ell}$ and that $(Q_2, H_2) = a(Q_1, H_1)$.
Since $\mathbf P^5_{\ell} \times \mathbf P^{4*}$ and $\mathcal R_{2,6}$ are irreducible of the same dimension, it follows that $(\mathbf P^5_{\ell} \times
\mathbf P^{4*}) / G_{\ell}$ is birational to $\mathcal R_{2,6}$. \par
Finally, we claim that $G_{\ell}$ is isomorphic to $\mathbb Z/2\mathbb Z$ for a general $\ell$. Therefore
$\mathbf P^5_{\ell} \times \mathbf P^{4*}$ descends to a $\mathbf P^5$-bundle over the quotient space $\mathbf P^{4*} / G_{\ell}$,
and the latter is rational.
\end{proof}
\begin{proof} [Proof of the claim]
The stabilizer of $R$ in $\Aut \mathbf P^4$ is isomorphic to $\Aut R$. Since $R$ is the blow-up of $\mathbf P^2$ at a point $o$,
$\Aut R$ is isomorphic to the stabilizer of $o$ in $\Aut \mathbf P^2$. Notice that $\Aut R$ is $6$-dimensional and acts on the $6$-dimensional variety
$G(1,4)$. Recall that we have a map
$$
\beta: R \to \vert \mathcal I_R(2) \vert
$$
such that $\beta(v)$ is the unique $Q_v \in \vert \mathcal I_R(2) \vert$ singular at $v$. By Proposition \ref{contraction} $\beta$ is the contraction of
the exceptional line $E \subset R$ and $o := \beta(E)$ is the parameter point of the unique rank $3$ quadric $Q_E$ containing $R$. Let
$$
r_{\ell}: \vert \mathcal I_R(2) \vert \to \vert \mathcal O_{\ell}(2) \vert
$$
be the restriction map. We define the non-empty open sets:
\par
\begin{enumerate}
\item $U_1 := \lbrace \ell \in G(1,4) \ \vert \ r_{\ell} \ \text {\it is an isomorphism} \rbrace$,
\item $U_2 := \lbrace \ell \in G(1,4) \ \vert \ \text{ \it $\ell$ is transversal to $Q_E$} \rbrace$
\end{enumerate}
and prove our claim for $\ell \in U_1 \cap U_2$. Let $\Delta_{\ell} := r_{\ell}^*\Delta$ be the pull back of the diagonal conic $\Delta := \lbrace 2x \mid \ x \in \ell
\rbrace$. Then $\Delta_{\ell}$ is a conic in $\vert \mathcal I_R(2) \vert$, more precisely it is the family of quadrics through $R$ which are tangent to $\ell$.
Let $a \in G_{\ell}$ then $a(Q_E\cap \ell) = Q_E \cap \ell$. Since $Q_E$ is transversal to
$\ell$ it follows that $$
Q_E \cap \ell = \lbrace x_1 , x_2 \rbrace
$$
with $x_1 \neq x_2$. Therefore the parameter point $o$ of $Q_E$ is not in $\Delta_{\ell}$. It is also clear that the induced automorphism $a_*: \vert
\mathcal I_R(2) \vert \to \vert \mathcal I_R(2) \vert$ leaves $\Delta_{\ell}$ invariant. Let $Q_v \in \vert \mathcal I_R(2) \vert$ be singular at $v$ then
$Q_{a(v)} = a(Q_v)$ and this equality implies that $a_{|R}: R \to R$ is uniquely defined by $a_*$. Since $a_{|R}$ is the identity if and only if
$a$ is, it follows that $a$ is uniquely reconstructed from $a_*$. \par
Now $a_*(o) = o$ implies $a_*(L) = L$ and $a_*(T_1 \cup T_2) = T_1 \cup T_2$, where $L$ is the polar line of $o$ with respect to $\Delta_{\ell}$
and $T_1, T_2$ are the tangent lines to
$\Delta_{\ell}$ at the two points of $L \cap \Delta_{\ell}$. Let $P$ be the pencil of conics which are tangent to $\Delta_{\ell}$ at $L \cap \Delta_{\ell}$.
Then $a_*$ acts on $P$ fixing each of the
elements $\Delta_{\ell}$, $T_1 + T_2$, $2L$. Hence $a_*$ acts as the identity on $P$. To conclude just recall that the group of planar automorphisms
acting on a pencil like $P$ as the
identity is isomorphic to $\mathbb Z/ 2\mathbb Z$. Indeed, it is generated by the involution leaving each conic of $P$ invariant, with $o$ and the points of
$L$ fixed. Hence $G_{\ell}$ is generated by $a$, where $a_*$ is such an involution.
\end{proof}
\end{document}
\begin{document}
\centerline{\bf On the Evaluation of the Fifth Degree Elliptic Singular Moduli}
\vskip .4in
\centerline{N.D.Bagis}
\centerline{Stenimahou 5 Edessa, Pellas 58200}
\centerline{Edessa, Greece}
\centerline{[email protected]}
\vskip .2in
\[
\]
\textbf{Keywords}: Singular Moduli; Algebraic Numbers; Ramanujan; Continued Fractions; Elliptic Functions; Modular equations; Iterations; Polynomials; Pi;
\[
\]
\centerline{\bf Abstract}
\begin{quote}
We find, in terms of algebraic radicals, the value of the singular modulus $k_{25^nr_0}$ for any integer $n$, knowing only the two consecutive values $k_{r_0}$ and $k_{r_0/25}$.
\end{quote}
\section{Introduction and Definitions}
Modular equations of $k_r$ (the elliptic singular modulus) have been considered and discussed over the last 200 years by many great mathematicians. They play a very important role in several problems: the construction of $\pi$ approximation formulas, the evaluation of the famous Rogers-Ramanujan and similar continued fractions, the solution of the quintic and sextic equations, the evaluation of elliptic integrals in modular bases other than the classical one (i.e. the cubic, the quartic and the fifth), the evaluation of the derivatives of Jacobi theta functions, and many other problems of mathematics (see [1],[2],[4],[5],[6],[11],[15],[16]).\\
The only known solvable modular equations of $k_r$ were those of 2nd and 3rd degree. The partial solution of the 5th degree modular equation presented here is a new and important result.\\
As an application of this result we give a closed-form evaluation of a quintic iteration formula for $1/\pi$ constructed by the Borwein brothers and Bailey (see [11], [5] pg. 175 and [1] pg. 269).\\
We begin with the definition of the complete elliptic integral of the first kind (see [3],[4],[5]):
\begin{equation}
K(x)=\frac{\pi}{2}{}_2F_1\left(\frac{1}{2},\frac{1}{2};1;x^2\right)=\int^{\pi/2}_{0}\frac{dt}{\sqrt{1-x^2\sin^2(t)}}
\end{equation}
It is known that the inverse elliptic nome (the singular modulus), $k=k_r$, with $k'^2_r=1-k^2_r$, is the real solution of the equation:
\begin{equation}
\frac{K\left(k'_r\right)}{K(k_r)}=\sqrt{r}
\end{equation}
with $0<k_r<1$.\\
In what follows we assume that $r \in \bf R^{*}_+ \rm$. If $r$ is a positive rational then $k_r$ is algebraic. The function $k_r$ can be evaluated exactly in certain cases (see [2],[5],[9]).\\
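Although all evaluations in this paper are exact, equation (2) also determines $k_r$ numerically. The following sketch (our own illustration; the helper names are ours, not the paper's) uses the classical AGM evaluation $K(k)=\pi/(2\,\textrm{agm}(1,k'))$ together with bisection, and recovers the known values $k_1=1/\sqrt{2}$ and $k_4=3-2\sqrt{2}$:

```python
from math import pi, sqrt

def agm(a, b):
    # arithmetic-geometric mean
    while abs(a - b) > 1e-15:
        a, b = (a + b) / 2, sqrt(a * b)
    return a

def K(k):
    # complete elliptic integral of the first kind: K(k) = pi / (2 agm(1, k'))
    return pi / (2 * agm(1.0, sqrt(1 - k * k)))

def k_singular(r):
    # solve K(k')/K(k) = sqrt(r) for 0 < k < 1 by bisection;
    # the ratio decreases monotonically from +infinity to 0 on (0, 1)
    lo, hi = 1e-12, 1 - 1e-12
    for _ in range(100):
        mid = (lo + hi) / 2
        if K(sqrt(1 - mid * mid)) / K(mid) > sqrt(r):
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

For example, `k_singular(1)` returns approximately $0.70711$ and `k_singular(4)` approximately $0.17157$.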
Continuing, we define for $|q|<1$ Ramanujan's eta function
\begin{equation}
f(-q):=\prod^{\infty}_{n=1}(1-q^n)
\end{equation}
For $\left|q\right|<1$, the Rogers-Ramanujan continued fraction (RRCF) is defined as
\begin{equation}
R(q):=\frac{q^{1/5}}{1+}\frac{q^1}{1+}\frac{q^2}{1+}\frac{q^3}{1+}\cdots
\end{equation}
and the following relation of Ramanujan holds (see [1],[2],[8]):
\begin{equation}
\frac{1}{R^5(q)}-11-R^5(q)=\frac{f^6(-q)}{q f^6(-q^5)}
\end{equation}
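Relation (5) is easy to check numerically; the following sketch (our own illustration, with our own helper names) truncates the product (3) and the continued fraction (4) and compares the two sides:

```python
def f(q, terms=400):
    # truncation of Ramanujan's product f(-q) = prod_{n>=1} (1 - q^n), eq. (3)
    prod = 1.0
    for n in range(1, terms + 1):
        prod *= 1 - q ** n
    return prod

def rrcf(q, depth=80):
    # truncation of the Rogers-Ramanujan continued fraction R(q), eq. (4),
    # evaluated from the bottom up
    tail = 1.0
    for n in range(depth, 0, -1):
        tail = 1 + q ** n / tail
    return q ** 0.2 / tail

q = 0.05
R = rrcf(q)
lhs = 1 / R ** 5 - 11 - R ** 5          # left side of (5)
rhs = f(q) ** 6 / (q * f(q ** 5) ** 6)  # right side of (5)
```

For $|q|$ well below $1$ both truncations converge extremely fast, and the two sides agree to machine precision.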
We can write the eta function $f$ using elliptic functions. It holds
\begin{equation}
f(-q)^8=\frac{2^{8/3}}{\pi^4}q^{-1/3}(k_r)^{2/3}(k'_r)^{8/3}K(k_r)^4.
\end{equation}
Also holds (see [3] pg.488):
\begin{equation}
f(-q^2)^6=\frac{2k_r k'_r K(k_r)^3}{\pi^3 q^{1/2}}
\end{equation}
\textbf{Theorem 1.1} (see [6],[7])
\begin{equation}
R^{-5}(q^2)-11-R^5(q^2)=\left(\frac{k_rk'_r}{ww'}\right)^2\left(\frac{w}{k_r}+\frac{w'}{k'_r}-\frac{w w'}{k_rk'_r}\right)^3 ,
\end{equation}
with $w^2=k_rk_{25r}$, $(w')^2=k'_rk'_{25r}$.
\begin{equation}
k^6_r+k^3_r(-16+10k^2_r)w+15k^4_rw^2-20k^3_rw^3+15k^2_rw^4+k_r(10-16k^2_r)w^5+w^6=0
\end{equation}
\\
Once we know $k_r$, its relation with $w$ and hence with $k_{25r}$ is given by equation (9). Hence the problem of finding $k_{25r}$ reduces to solving the 6th degree equation (9), which under the change of variables $w=\sqrt{k_rk_{25r}}$, $u^8=k^2_r$, $v^8=k^2_{25r}$ reduces to the 'depressed equation' (see [4] chapter 10):
\begin{equation}
u^6-v^6+5u^2v^2(u^2-v^2)+4uv(1-u^4v^4)=0
\end{equation}
The depressed equation is also related to the problem of solving the general quintic equation
\begin{equation}
ax^5+bx^4+cx^3+dx^2+ex+f=0,
\end{equation}
which can be reduced with a Tschirnhausen transformation to Bring's form
\begin{equation}
x^5+ax+b=0.
\end{equation}
The solution of the depressed equation is a relation of the form
\begin{equation}
k_{25r}=\Phi(k_r) .
\end{equation}
But such a construction of the root of the depressed equation cannot be found in radicals (see [11]). More precisely, the equations (9) and (10) are not solvable in radicals. Hence we seek a way to reduce them; one can be found using the extra value of $k_{r/25}$. \\
In this paper we give a solution of the form
\begin{equation}
k_{25r}=\Phi(k_r,k_{r/25}) ,
\end{equation}
which can be written more generally as
\begin{equation}
k_{25^nr_0}=\Phi_n(k_{r_0},k_{r_0/25}) \textrm{ , } n\in\bf Z\rm
\end{equation}
where the $\Phi_n(x)$ are known algebraic functions, which we evaluate.
\section{Statement of the Main Theorem}
Our Main Theorem is\\
\\
\textbf{Main Theorem}\\
For $n=1,2,\ldots$ we have
\begin{equation}
k_{25^nr_0}=\sqrt{1/2-1/2\sqrt{1-4\left(k_{r_0}k'_{r_0}\right)^2\prod^{n}_{j=0}P^{(j)}\left[\sqrt[12]{\frac{k_{r_0}k'_{r_0}}{k_{r_0/25}k'_{r_0/25}}}\right]^{24}}}
\end{equation}
where the function $P$ is known in radicals and is given by
\begin{equation}
P(x)=P[x]=U\left[Q\left[U^{*}\left[x\right]^6\right]^{1/6}\right]
\end{equation}
\begin{equation}
P^{(n)}(x)=(P\underbrace{\circ\ldots\circ}_{n} P)(x)\textrm{ , } P^{(0)}(x)=x .
\end{equation}
The function $Q$ is that of (30), and $U$, $U^{*}$ are given by (33) and (34) below.
\section{The Reduction of the Evaluation of the 5th Degree Modular Equation}
We give below some lemmas that will help us in the construction of the proof and in the evaluation of the function $P$ of the Main Theorem.\\
\\
\textbf{Lemma 3.1} (see also [6])\\
Let $q=e^{-\pi\sqrt{r}}$, with $r$ real and positive. We define
\begin{equation}
A_r:=\left(\frac{k'_r}{k'_{25r}}\right)^2\sqrt{\frac{k_r}{k_{25r}}}M_5(r)^{-3}
\end{equation}
Then
\begin{equation}
R(q)=\left(-\frac{11}{2}-\frac{A_r}{2}+\frac{1}{2}\sqrt{125+22A_r+A^2_r}\right)^{1/5}
\end{equation}
where $M_5(r)$ is a root of $(5X-1)^5(1-X)=256 (k_r k'_r)^2 X$.\\
\textbf{Proof.}\\
Suppose that $N=n^2\mu$, where $n$ is a positive integer and $\mu$ is a positive real; then it holds that
\begin{equation}
K[n^2\mu]=M_n(\mu)K[\mu]
\end{equation}
where $K[\mu]:=K(k_{\mu})$.\\
The following equation for $M_5(r)$ is known:
\begin{equation}
(5M_5(r)-1)^5(1-M_5(r))=256(k_rk'_r)^2M_5(r)
\end{equation}
Thus if we use (5),(6),(20), we get:
\begin{equation}
R^{-5}(q)-11-R^{5}(q)=\frac{f^6(-q)}{q f^6(-q^5)}=A_r=\left(\frac{k'_r}{k'_{25r}}\right)^2\sqrt{\frac{k_r}{k_{25r}}}M_5(r)^{-3}
\end{equation}
Solving with respect to $R(q)$ we get the result.\\
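Explicitly (a small step omitted above): setting $x=R^5(q)$, equation (23) reads $x^{-1}-11-x=A_r$, that is
$$
x^2+(11+A_r)x-1=0 ,
$$
whose positive root is
$$
x=\frac{-(11+A_r)+\sqrt{(11+A_r)^2+4}}{2}=-\frac{11}{2}-\frac{A_r}{2}+\frac{1}{2}\sqrt{125+22A_r+A^2_r} ,
$$
since $(11+A_r)^2+4=125+22A_r+A^2_r$; taking the real fifth root gives (20).\\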
\\
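As a numerical sanity check of (22) (our own illustration, not part of the proof): for $r=1$ one has $k_1=k'_1=1/\sqrt{2}$, and taking the radical value of $k_{25}$ quoted in Example 2 below, the multiplier $M_5(1)=K[25]/K[1]$ computed by the AGM satisfies (22):

```python
from math import pi, sqrt

def agm(a, b):
    # arithmetic-geometric mean
    while abs(a - b) > 1e-15:
        a, b = (a + b) / 2, sqrt(a * b)
    return a

def K(k):
    # complete elliptic integral of the first kind: K(k) = pi / (2 agm(1, k'))
    return pi / (2 * agm(1.0, sqrt(1 - k * k)))

s5 = sqrt(5)
k1 = 1 / sqrt(2)                                    # k_1
k25 = 1 / sqrt(2 * (51841 + 23184 * s5              # k_25 (Example 2 below)
                    + 12 * sqrt(37325880 + 16692641 * s5)))

M5 = K(k25) / K(k1)                                 # multiplier M_5(1) = K[25]/K[1]
residual = (5 * M5 - 1) ** 5 * (1 - M5) - 256 * (k1 * sqrt(1 - k1 ** 2)) ** 2 * M5
```

In fact $M_5(1)=(2+\sqrt5)/5$, which one can verify is an exact (double) root of $(5X-1)^5(1-X)=64X$.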
Let now $q=e^{-\pi\sqrt{r}}$, $r>0$ and $v_r=R(q)$; it has been proved by Ramanujan that (see [1],[2],[8],[10]):
\begin{equation}
v^5_{r/25}=v_r\frac{1-2v_r+4v_r^2-3v_r^3+v_r^4}{1+3v_r+4v_r^2+2v_r^3+v_r^4}
\end{equation}
and from (23) we have
\begin{equation}
A_r=R(q)^{-5}-11-R(q)^5=\frac{f(-q)^6}{qf(-q^5)^6} .
\end{equation}
Then from Lemma 3.1
\begin{equation}
v_r=R(q)=S(A_r)=\left(-\frac{11}{2}-\frac{A_r}{2}+\frac{1}{2}\sqrt{125+22A_r+A^2_r}\right)^{1/5} .
\end{equation}
Note that the function $S$ is defined by the third equality of (26).\\
From the above we get the following modular equation for $A_r$
$$
A_{r/25}=v_{r/25}^{-5}-11-v_{r/25}^{5}=F(v_r)=
$$
\begin{equation}
=\left(v_r\frac{1-2v_r+4v_r^2-3v_r^3+v_r^4}{1+3v_r+4v_r^2+2v_r^3+v_r^4}\right)^{-1}-11-\left(v_r\frac{1-2v_r+4v_r^2-3v_r^3+v_r^4}{1+3v_r+4v_r^2+2v_r^3+v_r^4}\right)
\end{equation}
and from (26), expressing $v_r$ in terms of $A_r$,
\begin{equation}
A_{r/25}=F(v_r)=F\left(S\left(A_r\right)\right)=\left(F\circ S\right)(A_r)=Q(A_r) ,
\end{equation}
which, after algebraic simplification with the Mathematica program, gives\\
\\
\textbf{Lemma 3.2}\\
If $q=e^{-\pi\sqrt{r}}$, $r>0$ and
\begin{equation}
A_r=\frac{f(-q)^6}{qf(-q^5)^6}\textrm{ , then }A_{r/25}=Q(A_r)
\end{equation}
where
\begin{equation}
Q(x)=\frac{\left(-1-e^{\frac{1}{5}y}+e^{\frac{2}{5} y}\right)^5}{\left(e^{\frac{1}{5}y}-e^{\frac{2}{5}y}+2 e^{\frac{3}{5} y}-3 e^{\frac{4}{5}y}+5 e^{y}+3 e^{\frac{6}{5}y}+2 e^{\frac{7}{5}y}+e^{\frac{8}{5} y}+e^{\frac{9}{5}y}\right)}
\end{equation}
and
\begin{equation}
y=\textrm{arcsinh}\left(\frac{11+x}{2}\right).
\end{equation}
\\
Note that inserting (31) into (30) and simplifying, we get an algebraic function; but for simplicity and a more compact statement we leave it as it is.\\
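In fact, from (31) we get
$$
e^{y}=\frac{11+x}{2}+\sqrt{1+\left(\frac{11+x}{2}\right)^{2}} ,
$$
so each power $e^{ny/5}=\left(e^{y}\right)^{n/5}$ appearing in (30) is a radical expression in $x$.\\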
\\
Consider now the following equation, which appears in Lemma 3.3 below:
\begin{equation}
\frac{X^2}{\sqrt{5}Y}-\frac{\sqrt{5}Y}{X^2}=\frac{1}{\sqrt{5}}\left(Y^3-Y^{-3}\right)
\end{equation}
This equation is solvable in radicals with respect to both $Y$ and $X$. One finds the solution
\begin{equation}
Y=U(X)=\sqrt{-\frac{5}{3 X^2}+\frac{25}{3 X^2 h(X)}+\frac{X^4}{h(X)}+\frac{h(X)}{3 X^2}}
\end{equation}
where
$$
h(x)=\left(-125-9x^6+3\sqrt{3}\sqrt{-125x^6-22x^{12}-x^{18}}\right)^{1/3}
$$
The solution of (32) with respect to $X$ is
\begin{equation}
X=U^{*}(Y)=\sqrt{-\frac{1}{2 Y^2}+\frac{Y^4}{2}+\frac{\sqrt{1+18 Y^6+Y^{12}}}{2 Y^2}} .
\end{equation}
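A direct numerical check (our own sketch) confirms that (34) satisfies (32):

```python
from math import sqrt

def U_star(Y):
    # eq. (34): X = U*(Y)
    return sqrt(-1 / (2 * Y ** 2) + Y ** 4 / 2
                + sqrt(1 + 18 * Y ** 6 + Y ** 12) / (2 * Y ** 2))

def residual(X, Y):
    # left side minus right side of eq. (32)
    s5 = sqrt(5)
    return X ** 2 / (s5 * Y) - s5 * Y / X ** 2 - (Y ** 3 - Y ** -3) / s5

checks = [abs(residual(U_star(Y), Y)) for Y in (0.7, 1.3, 2.0)]
```

Indeed, multiplying (32) by $\sqrt5\,YX^2$ gives $X^4-(Y^4-Y^{-2})X^2-5Y^2=0$, a quadratic in $X^2$ whose positive root is exactly (34).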
\\
\textbf{Lemma 3.3} (see [8])\\
If $G_r$ denotes the Weber class invariant
$$
A'=\frac{f(-q^2)}{q^{1/3}f(-q^{10})}=\left(A_{4r}\right)^{1/6} \textrm{ and } V^{'}=\frac{G_{25r}}{G_r},
$$
then
\begin{equation}
\frac{A^{'2}}{\sqrt{5}V^{'}}-\frac{\sqrt{5}V^{'}}{A^{'2}}=\frac{1}{\sqrt{5}}\left(V^{'3}-V^{'-3}\right)
\end{equation}
\\
\textbf{Note.} For the Weber class invariant one can see [14],[2].\\
\\
We now state our first theorem\\
\\
\textbf{Theorem 3.1}\\
For the Weber class invariant $G_r$ it holds that
\begin{equation}
\frac{G_r}{G_{r/25}}=U\left[Q\left[U^{*}\left[\frac{G_{25r}}{G_r}\right]^6\right]^{1/6}\right]
\end{equation}
\textbf{Proof.}\\
Set $$A=\left(A_{4r/25}\right)^{1/6} \textrm{ and }V'=\frac{G_{r}}{G_{r/25}} , $$
then from Lemmas 3.2 and 3.3 and from relations (29),(30),(32),(33) and (34) we have
$$
\frac{G_r}{G_{r/25}}=U\left[\left(A_{4r/25}\right)^{1/6}\right]=U\left[Q\left(A_{4r}\right)^{1/6}\right]=U\left[Q\left[U^{*}\left(\frac{G_{25r}}{G_{r}}\right)^6\right]^{1/6}\right]
$$
which completes the proof.
\[
\]
Continuing we have
\begin{equation}
G_r=2^{-1/12}(k_rk'_r)^{-1/12}
\end{equation}
hence
\begin{equation}
\left(\frac{k_{r}k'_{r}}{k_{r/25}k'_{r/25}}\right)^{-1/12}=U\left[Q\left(U^{*}\left(\left(\frac{k_{25r}k'_{25r}}{k_{r}k'_{r}}\right)^{-1/12}\right)^6\right)^{1/6}\right]
\end{equation}
From the identity $$k_{1/r}=k'_r$$ we get
\begin{equation}
\left(\frac{k_{1/r}k'_{1/r}}{k_{25/r}k'_{25/r}}\right)^{-1}=U\left[Q\left(U^{*}\left(\left(\frac{k_{1/(25r)}k'_{1/(25r)}}{k_{1/r}k'_{1/r}}\right)^{-1/12}\right)^6\right)^{1/6}\right]^{12}
\end{equation}
and setting $r\rightarrow 1/r$ we are led to\\
\\
\textbf{Theorem 3.2}\\
If $r\in \bf R^{*}_{+}\rm$, then
\begin{equation}
\sqrt[12]{\frac{k_{25r}k'_{25r}}{k_{r}k'_{r}}}=U\left[Q\left[U^{*}\left[\sqrt[12]{\frac{k_{r}k'_{r}}{k_{r/25}k'_{r/25}}}\right]^6\right]^{1/6}\right]
\end{equation}
\\
Hence, knowing $k_{r_0}$ and $k_{r_0/25}$, we can evaluate $k_{25r_0}$ in closed form. Repeating the process we can find $k_{25^nr_0}$ for any $n\in\bf Z\rm$ in closed radical form, and easily obtain (16).\\
Observe that a formula similar to (16) for the evaluation of $k_{r_0/25^n}$, $n=1,2,\ldots$, can be extracted from (39).\\
\\
\textbf{Example 1.}
$$
k_{1/5}=\sqrt{\frac{9+4 \sqrt{5}+2 \sqrt{38+17 \sqrt{5}}}{18+8 \sqrt{5}}}
$$
$$
k_{5}=\sqrt{\frac{9+4 \sqrt{5}-2 \sqrt{38+17 \sqrt{5}}}{18+8 \sqrt{5}}}
$$
\begin{equation}
k_{125}=\sqrt{\frac{1}{2}-\frac{1}{2}\sqrt{1-(9-4\sqrt{5})P[1]^2}}
\end{equation}
where $P(x)=P[x]$ is that of (17) and $P[1]$ is an algebraic radical number whose form is too complicated to present here. Its minimal polynomial is
$$
1-5 x^2-10x^3-15 x^4-22 x^5-15 x^6-10x^7-5 x^8+x^{10}=0
$$
\\
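The two values above can be verified numerically (our own sketch): they satisfy $k_{1/5}=k'_5$, and $k_5$ satisfies the defining equation $K(k'_5)/K(k_5)=\sqrt5$, with $K$ evaluated by the AGM:

```python
from math import pi, sqrt

def agm(a, b):
    # arithmetic-geometric mean
    while abs(a - b) > 1e-15:
        a, b = (a + b) / 2, sqrt(a * b)
    return a

def K(k):
    # complete elliptic integral of the first kind: K(k) = pi / (2 agm(1, k'))
    return pi / (2 * agm(1.0, sqrt(1 - k * k)))

s5 = sqrt(5)
root = sqrt(38 + 17 * s5)
k5 = sqrt((9 + 4 * s5 - 2 * root) / (18 + 8 * s5))    # k_5
k15 = sqrt((9 + 4 * s5 + 2 * root) / (18 + 8 * s5))   # k_{1/5}
```

Note that $k_{1/5}^2+k_5^2=1$ holds exactly, consistent with the identity $k_{1/r}=k'_r$.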
The same holds also for the next example.\\
\\
\textbf{Example 2.}\\
For $r_0=25$ we have
$$
k_{r_0/25}=k_1=\frac{1}{\sqrt{2}}
$$
and
$$
k_{r_0}=k_{25}=\frac{1}{\sqrt{2 \left(51841+23184 \sqrt{5}+12 \sqrt{37325880+16692641 \sqrt{5}}\right)}}
$$
Hence
\begin{equation}
k_{625}=\sqrt{\frac{1}{2}-\frac{1}{2}\sqrt{1-\left(51841-23184\sqrt{5}\right)P\left[\frac{\sqrt{5}-1 }{2}\right]^{24}}}
\end{equation}
\[
\]
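The value of $k_{25}$ above can likewise be verified numerically (our own sketch): it satisfies $K(k'_{25})/K(k_{25})=5$, and via (37) it reproduces the classical class invariant $G_{25}=\frac{1+\sqrt5}{2}$:

```python
from math import pi, sqrt

def agm(a, b):
    # arithmetic-geometric mean
    while abs(a - b) > 1e-15:
        a, b = (a + b) / 2, sqrt(a * b)
    return a

def K(k):
    # complete elliptic integral of the first kind: K(k) = pi / (2 agm(1, k'))
    return pi / (2 * agm(1.0, sqrt(1 - k * k)))

s5 = sqrt(5)
k25 = 1 / sqrt(2 * (51841 + 23184 * s5 + 12 * sqrt(37325880 + 16692641 * s5)))
kp25 = sqrt(1 - k25 ** 2)

ratio = K(kp25) / K(k25)               # defining equation (2) with r = 25
G25 = (2 * k25 * kp25) ** (-1 / 12)    # Weber class invariant via (37)
```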
Hence, using the 2nd and 3rd degree modular equations (see [1] chapter 19), we can evaluate every $k_{r}$ for which $r$ is of the form $r=4^k9^l25^nr_0$, when $k_{r_0}$ and $k_{r_0/25}$ are known and $k,l,n\in\bf Z\rm$.
\section{The fifth degree singular moduli and approximations to $1/\pi$}
In [5] pg. 175, J.M. Borwein and P.B. Borwein consider the following rapidly convergent approximation algorithm for $\pi$ (see also [11]):\\
Consider $\alpha_0:=\alpha(r_0)$, where $\alpha(n)$ is the elliptic alpha function, and $u_0=\sqrt[8]{1-k_{r_0}^2}$; set also $u_{n}=(k_{25^{n-1}r_0})^{1/4}$, which are now given by (16) in closed radical form. In this way we are able to construct approximations depending not on numerical values of the singular moduli $k_r$, but on exact values.\\
Let
$$
x_n:=2u_nu^5_{n+1}\textrm{, }y_n:=2u_{n}^5u_{n+1}
$$
$$
a_n:=u_{n}^2+5u^2_{n+1}+2x_n\textrm{, }b_n:=5u_{n}^2+u^2_{n+1}-2y_n\textrm{, }\gamma_n:=\frac{a_n}{b_n}
$$
finally
$$
\delta_n:=4^{-1}a^{-1}_n(1-u^8_{n+1})\left[5(u^2_{n+1}+x_n)+\gamma_{n}(y_n-u^2_{n+1})\right]+
$$
$$
+4^{-1}b_n^{-1}(1-u^8_n)[u^2_{n}+x_n+5\gamma_{n}(y_n-u^2_{n})]
$$
Then
\begin{equation}
\alpha_{n+1}:=5\gamma_n\alpha_n+5^{n+1}\sqrt{r_0}(\delta_n+u^8_{n+1}-\gamma_nu^8_n)
\end{equation}
and
\begin{equation}
0<\alpha_n-\pi^{-1}<16\cdot5^n\sqrt{r_0}e^{-5^n\sqrt{r_0}\pi}
\end{equation}
for $r_05^{2n}\geq 1$.\\
Hence for every initial condition $r_0>0$, with $k_{r_0}$ and $k_{r_0/25}$ known, we are led to a closed-form iteration formula approximating $1/\pi$. Thirteen iterations of the above algorithm give the first one billion digits of $\pi$. Note also that the algorithms presented in [11],[13] are for specific initial values and do not cover all values of $r_0$. Here, knowing only $k_{r_0}$ and $k_{r_0/25}$, we can find all the $u_n$ and construct the iteration for $1/\pi$.\\
Our idea can also be generalized to the 10th degree modular equation of $k_r$. The 10th degree modular equation of the Rogers-Ramanujan continued fraction is solvable and can be put in the form $v_{r/100}=\phi(v_r)$ (see [12]). In this way one can solve, with initial conditions $k_{r_0}$ and $k_{r_0/100}$, the general 10th degree modular equation of $k_r$, provided one finds an analogue of Lemma 3.3. But this is not so economical, since the 10th degree modular equation can be split into those of 2nd and 5th degree.
\[
\]
\centerline{\bf References}\vskip .2in
\noindent
[1]: B.C. Berndt. 'Ramanujan`s Notebooks Part III'. Springer Verlag, New York (1991)
[2]: B.C. Berndt. 'Ramanujan`s Notebooks Part V'. Springer Verlag, New York (1998)
[3]: E.T. Whittaker and G.N. Watson. 'A Course of Modern Analysis'. Cambridge U.P. (1927)
[4]: J.V. Armitage and W.F. Eberlein. 'Elliptic Functions'.
Cambridge University Press. (2006)
[5]: J.M. Borwein and P.B. Borwein. 'Pi and the AGM'. John Wiley and Sons, Inc. New York, Chichester, Brisbane, Toronto, Singapore. (1987)
[6]: N.D. Bagis. 'The complete evaluation of Rogers-Ramanujan and other continued fractions with elliptic functions'. arXiv:1008.1304v1 [math.GM] 7 Aug 2010
[7]: N.D. Bagis. 'Parametric Evaluations of the Rogers Ramanujan Continued Fraction'. International Journal of Mathematics and Mathematical Sciences. Vol. 2011
[8]: B.C. Berndt, H.H. Chan, S.S. Huang, S.Y. Kang, J. Sohn and S.H. Son. 'The Rogers-Ramanujan Continued Fraction'. J. Comput. Appl. Math. 105 (1999), 9-24
[9]: D. Broadhurst. 'Solutions by radicals at Singular Values $k_N$ from New Class Invariants for $N\equiv3\;\; mod\;\; 8$'. arXiv:0807.2976 (math-phy)
[10]: W. Duke. 'Continued Fractions and Modular Functions'. Bull. Amer. Math. Soc.(N.S.) 42 (2005), 137-162.
[11]: J.M. Borwein, P.B. Borwein and D.H. Bailey. 'Ramanujan, Modular Equations, and Approximations to Pi or How to Compute One Billion Digits of Pi'. Amer. Math. Monthly (96)(1989), 201-219
[12]: M. Trott. 'Modular Equations of the Rogers-Ramanujan Continued Fraction'. Mathematica Journal. 9,(2004), 314-333
[13]: H.H. Chan, S. Cooper, and W.C. Liaw. 'The Rogers-Ramanujan Continued Fraction and a Quintic Iteration for 1/$\pi$'. Proc. of the Amer. Math. Soc. Vol.135, No.11, (2007), 3417-3424
[14]: B.C. Berndt and H.H. Chan. 'Notes on Ramanujan's Singular Moduli'. The Proceedings of the Fifth Conference of the Canadian Number Theory Association, edited by R.Gupta and K.S. Williams, (1998), 7-16
[15]: N.D. Bagis and M.L. Glasser. 'Conjectures on the evaluation of alternative modular bases and formulas approximating $1/\pi$'. Journal of Number Theory. (Elsevier), (2012).
[16]: N.D. Bagis. 'A General Method for Constructing Ramanujan-Type Formulas for Powers of $1/\pi$'. The Mathematica Journal, (2013)
\end{document}
\begin{document}
\begin{frontmatter}
\title{Adaptive design for Gaussian process regression under censoring\thanksref{T1}}
\runtitle{Adaptive design under censoring}
\runauthor{J. Chen et al.}
\thankstext{T1}{This work is supported by NSF CSSI Frameworks 2004571, NSF CMMI grant 1921646, and Piedmont Heart Institute.}
\begin{aug}
\author[A]{\fnms{Jialei} \snm{Chen}\ead[label=e1]{[email protected]}},
\author[B]{\fnms{Simon} \snm{Mak}\ead[label=e2]{[email protected]}},
\author[A]{\fnms{V. Roshan} \snm{Joseph}\ead[label=e3]{[email protected]}},
\and
\author[A]{\fnms{Chuck} \snm{Zhang}\ead[label=e4]{[email protected]}}
\address[A]{H. Milton Stewart School of
Industrial \& Systems Engineering,
Georgia Institute of Technology,\\ \printead{e1,e3,e4}}
\address[B]{Department of Statistical Science, Duke University, \\
\printead{e2}}
\end{aug}
\begin{abstract}
A key objective in engineering problems is to predict an unknown experimental surface over an input domain. In complex physical experiments, this may be hampered by response censoring, which results in a significant loss of information.
For such problems, experimental design is paramount for maximizing predictive power using a small number of expensive experimental runs. To tackle this, we propose a novel adaptive design method, called the integrated \textit{censored} mean-squared error (ICMSE) method. The ICMSE method first estimates the posterior probability of a new observation being censored,
then adaptively chooses design points that minimize predictive uncertainty under censoring.
Adopting a Gaussian process regression model with product correlation function, the proposed ICMSE criterion is easy to evaluate, which allows for efficient design optimization. We demonstrate the effectiveness of the ICMSE design in two real-world applications on surgical planning and wafer manufacturing.
\end{abstract}
\begin{keyword}[class=MSC]
\kwd{62K20}
\end{keyword}
\begin{keyword}
\kwd{Adaptive sampling}
\kwd{Censored experiments}
\kwd{Experimental design}
\kwd{Kriging}
\kwd{Multi-fidelity modeling}
\end{keyword}
\end{frontmatter}
\section{Introduction}
In many engineering problems, a key objective is to predict an unknown experimental surface over an input domain.
However, for complex physical experiments, one can encounter \textit{censoring}, i.e., the experimental response is missing or partially measured.
Censoring arises from a variety of practical experimental constraints, including limits in measurement devices, safety considerations of experimenters, and a fixed experimental time budget.
Figure \ref{Fig:censor} provides an illustration:
in such cases, the experimental response of interest is latent, and the observed measurement is subject to censoring.
Here, censoring can result in significant loss of information, which leads to poor predictive performance \citep{brooks1982loss}. For example, suppose an engineer wishes to explore how pressure in a nuclear reactor changes under different control settings.
Due to safety concerns, experiments are forced to stop if the pressure hits a certain upper limit, leading to censored responses.
To further complicate matters, the input region which results in censoring is typically \textit{unknown} prior to experiments, and needs to be estimated from data.
\begin{figure}
\caption{An illustration of response censoring in physical experiments: the experimental response of interest is latent, and the observed measurement is subject to censoring.}
\label{Fig:censor}
\end{figure}
Given the presence of censoring in physical experiments, it is therefore of interest to carefully design experimental runs, to best model and predict the experimental response surface.
We present a new integrated \textit{censored} mean-squared error (ICMSE) method, which sequentially selects physical experimental runs to minimize predictive uncertainty under \textit{censoring}. ICMSE leverages a Gaussian process model (GP; \citealp{sacks1989design}) -- a flexible Bayesian nonparametric model -- for the response surface, to obtain an easy-to-evaluate design criterion that maximizes the GP's predictive power under censoring. We consider two flavors of ICMSE.
The first is a ``single-fidelity'' ICMSE method for sequentially designing (potentially) censored physical experiments. The second is a ``bi-fidelity'' ICMSE method for sequentially designing (potentially) censored physical experiments given auxiliary computer simulation data.
These two settings are motivated by the following two applications.
\subsection{3D-printed aortic valves for surgical planning} \label{Sec:MotiExampleMM}
The first motivating problem concerns the design of 3D-printed tissue-mimicking aortic valves for heart surgeries.
With advances in additive manufacturing \citep{gibson2014additive}, 3D-printed medical prototypes \citep{rengier20103d}
play an increasingly important role in pre-surgical studies \citep{qian2017quantitative}.
They are particularly helpful in complicated heart diseases, e.g., aortic stenosis, where 3D-printed aortic valves can be used to select the best surgical option with minimal post-surgical complication \citep{chen2018generative}.
The printed aortic valve (see Fig \ref{Fig:MMIntro}(a)) contains a biomimetic substructure: an enhancement polymer (white) is embedded in a substrate polymer (clear); this is known as \textit{metamaterial} \citep{wang2016dual} in the materials engineering literature.
The goal is to build a model to understand how the \textit{stiffness} of the metamaterials is affected by the \textit{geometry} of the enhancement polymer (see Fig \ref{Fig:MMIntro}(b)).
This model can then be used by doctors to select a polymer geometry that mimics the target stiffness of the specific patient -- a procedure known as ``tissue-mimicking'' \citep{chen2018JASA}. Accurate tissue-mimicking is paramount for surgical success, since inaccurate stiffness may lead to severe post-surgery complications or even death.
Using earlier terminology, this is a bi-fidelity modeling problem involving two types of experiments: a pre-conducted database of computer simulations and patient-specific physical experiments.
The physical experiments here are very \textit{costly}: we need to 3D print each metamaterial sample, then physically test its stiffness using a load cell. Furthermore, the measurement from physical experiments may be \textit{censored} due to an inherent upper limit of the testing machine. This is shown in Fig \ref{Fig:MMIntro}(c): if the metamaterial sample is stiffer than the load cell (i.e., a spring), the experiment is forced to stop to prevent breakage of the load cell.
One workaround is to use a stiffer load cell; however, this is often not a preferable option: a stiffer load cell with a broader measurement range can be very expensive, costing over a hundred times more than standard integrated load cells.
Here, the proposed ICMSE method can adaptively design experimental runs to maximize the predictive power of a GP model under censoring. We show later in Section \ref{Sec:MMResult} that ICMSE can lead to better predictions for both younger patients (with stiff tissues, which can be \textit{censored} in experiments) and older patients (with soft tissues, which are \textit{uncensored} in experiments) \citep{sicard2018aging}.
This then leads to greatly improved tissue-mimicking performance for personalized printed valves \citep{chen2018JASA}, which is crucial for improving heart surgery success rate \citep{qian2017quantitative}.
Our method is particularly valuable in urgent heart surgeries, where one can perform only a small number of runs prior to the actual surgery.
\begin{figure}
\caption{(a) A 3D-printed tissue-mimicking aortic valve, with an enhancement polymer (white) embedded in a substrate polymer (clear); (b) the geometry of the enhancement polymer; (c) censoring of stiffness measurements due to the inherent upper limit of the load cell.}
\label{Fig:MMIntro}
\end{figure}
\subsection{Thermal processing in wafer manufacturing} \label{Sec:MotiExampleLH}
The second problem considers the design of the semiconductor wafer manufacturing process \citep{quirk2001semiconductor,jin2012sequential}. Wafer manufacturing involves processing silicon wafers in a series of refinement stages, to be used as circuit chips.
Among these stages, thermal processing is one of the most important stages \citep{singh2000rapid},
since it facilitates the necessary chemical reactions and allows for surface oxidation.
Fig \ref{Fig:LHIntro}(a) illustrates the typical thermal processing procedure: a laser beam (in orange) is moved back and forth over a rotating wafer. The output of interest here is the \textit{minimal} temperature over the whole wafer after heating; a higher minimal temperature facilitates better completeness of chemical reactions, which leads to better quality of the final wafer product \citep{goodson1993annealing,van1998time}. However, higher temperatures may result in higher energy costs for heating. Industrial engineers can then use a fitted predictive model of the minimal wafer temperature to design a heating process that is economical (i.e., conserves heating power) but also meets target quality requirements.
However, laser heating experiments are quite costly, involving high material and operation costs. In industrial settings, the minimal wafer temperature is often subject to \textit{censoring}, due to the nature of measurement procedures.
This is shown in Fig \ref{Fig:LHIntro}(b): the wafer temperature is typically measured by either an array of temperature sensors or a thermal camera, both of which have upper measurement limits \citep{feteira2009negative}. The minimal temperature is censored when the whole sensor array reaches the measurement limits. While more sophisticated sensors exist, they are much more expensive and may lead to tedious do-overs of experiments.
The proposed single-fidelity ICMSE method can be used to adaptively design experimental runs that maximize the predictive power of a GP model under censoring. We show later in Section \ref{Sec:LHresult} that the resulting model using ICMSE enjoys improved predictive performance for high wafer temperatures (that are potentially censored) and low temperatures (that are not censored), to ensure flexibility for different quality requirements. The fitted model can then be used to find an optimal thermal processing setting, which minimizes operation costs subject to target quality requirements.
\begin{figure}
\caption{(a) The typical thermal processing procedure: a laser beam is moved back and forth over a rotating wafer; (b) temperature measurement via a sensor array or thermal camera, both of which have upper measurement limits.}
\label{Fig:LHIntro}
\end{figure}
\subsection{Literature}
GP regression (or \textit{kriging}, see \citealp{matheron1963principles}) is widely used as a predictive model for expensive experiments \citep{sacks1989design}, and has been applied in cosmology \citep{kaufman2011efficient}, aerospace engineering \citep{mak2018efficient}, healthcare \citep{chen2018JASA}, and other applications.
The key appeals of GPs are the flexible nonparametric model structure and closed-form expressions for prediction and uncertainty quantification \citep{santner2018design}.
In the engineering literature, GPs have been used for modeling expensive physical experiments \citep{ankenman2010stochastic}, integrating computer and physical experiments \citep{KO2001}, and incorporating various constraints \citep{henkenjohann2005adaptive,groot2012gaussian,da2012gaussian,lopez2018finite,ding2019bdrygp}.
In this work, we adapt a recent censored GP model \citep{cao2018model}, which integrates censored physical experimental data.
There have been several works in the literature on experimental design under response censoring; see, e.g., \cite{borth1996optimal,monroe2008experimental}.
These methods, however, presume a parametric form for the response surface, which may be a dangerous assumption for black-box experiments, hence the recent shift toward nonparametric models such as GPs.
Existing design methods for GPs can be divided into two categories -- space-filling and model-based designs.
Space-filling designs aim to fill empty gaps in the input space; this includes minimax designs \citep{johnson1990minimax}, maximin designs \citep{morris1995exploratory}, and maximum projection designs \citep{joseph2015maximum}. Model-based designs instead maximize an optimality criterion based on an \textit{assumed} GP model; this
includes integrated mean-squared error designs \citep{sacks1989design} and maximum entropy designs \citep{shewry1987maximum}. Such designs can also be implemented sequentially in an adaptive manner, see \cite{lam2008sequential,xiong2013sequential,chen2017sequential,bect2019supermartingale}.
Recently, \cite{binois2019replication} proposed a design method for a heteroscedastic GP model (i.e., under input-dependent noise); this provides a flexible framework that allows for different correlation functions, closed-form gradients for optimization, and batch sequential implementation.
The above GP design methods, however, do not consider potential response \textit{censoring}.
The key challenge in incorporating censoring information is that an experimenter does not know which inputs may lead to censoring prior to experimentation, since the response surface is black-box.
The proposed ICMSE method addresses this by leveraging a GP model on the unknown response surface: it first estimates the posterior probability of a potential observation being censored, and then finds design points that minimize predictive uncertainty under censoring. Under product correlation functions, our method admits an easy-to-evaluate design criterion, which allows for efficient sequential sampling. We show that ICMSE can yield considerably improved predictive performance over existing design methods (which do not consider censoring), in both motivating applications.
\subsection{Structure}
Section \ref{Sec:SingeF} presents the ICMSE design method for the single-fidelity setting, with only physical experiment data.
Section \ref{Sec:MultiF} extends the ICMSE method for the bi-fidelity setting, where auxiliary computer simulation data are available.
Section \ref{Sec:CS} demonstrates the effectiveness of ICMSE in the two motivating applications. Section \ref{sec:conclusion} concludes the work.
\section{ICMSE design} \label{Sec:SingeF}
We now present the ICMSE design method for the single-fidelity setting; a more elaborate bi-fidelity setting is discussed later in Section \ref{Sec:MultiF}. We first review the GP model for censored data, and derive the proposed ICMSE design criterion. We then visualize this via a 1-dimensional (1D) example, and provide some insights.
\subsection{Modeling framework} \label{Sec:PEModel}
We adopt the following model for physical experiments. Let $\bm{x}_i \in [0,1]^p$ be a vector of $p$ input variables (each normalized to $[0,1]$), and let $y_i'$ be its latent response from the physical experiment \textit{prior to} potential censoring (see Fig \ref{Fig:censor}).
We assume:
\begin{align}
y_i'=\xi(\bm{x}_i)+\epsilon_i, \quad i = 1,2, \cdots, n,
\label{Equ:PEModel}
\end{align}
where $\xi(\bm{x}_i)$ is the mean of the latent response $y'_i$ at input $\bm{x}_i$, and $\epsilon_i$ is the corresponding measurement error. Since $\xi(\cdot)$ is unknown, we further assign to it a GP prior with mean $\mu_\xi$, variance $\sigma^2_\xi$, and correlation function $R_{\boldsymbol{\theta}_\xi}(\cdot,\cdot)$ with parameters $\boldsymbol{\theta}_\xi$. This is denoted as:
\begin{equation}
\xi(\cdot) \sim \text{GP} \left(\mu_{\xi}, \sigma^2_\xi R_{\boldsymbol{\theta}_\xi}(\cdot, \cdot) \right).
\label{Equ:PEGP}
\end{equation}
The experimental noise terms $\epsilon_i \distas{i.i.d.} \mathcal{N}(0,\sigma^2_\epsilon)$ are assumed independent of $\xi(\cdot)$.
For simplicity, we consider only the case of right-censoring below, i.e., the response is censored when it exceeds some \textit{known} upper limit (this is the setting for both motivating applications). All equations and insights derived in the paper hold analogously for the general case of interval censoring, albeit with more cumbersome notation. Suppose, from $n$ experiments, $n_o$ responses are observed without censoring, and $n_c$ responses are right-censored at limit $c$, where $n_o + n_c = n$.
The experimental training data can then be written as the set $\mathcal{Y}_n=\{\bm{y}_{o}, \bm{y}_c' \geq \bm{c}\}$, where $\bm{y}_o$ is a vector of \textit{observed} responses at inputs $\bm{x}_{o} =\bm{x}_{1:n_o} =\{\bm{x}_1, \cdots, \bm{x}_{n_o}\}$, $\bm{y}_c'$ is the latent response vector for inputs in censored regions $\bm{x}_{c} =\bm{x}_{(n_o+1):n}$ prior to censoring, and $\bm{c} = [c,\cdots, c]^T$ is a vector with every entry equal to the right-censoring limit $c$.
Assuming known model parameters, a straightforward adaptation of equations (11) and (12) in \cite{cao2018model} gives the following expressions for the conditional mean and variance of $\xi(\bm{x}_{\rm new})$ at new input $\bm{x}_{\rm new}$:
\begin{align}
\hat{\xi}(\bm{x}_{\rm new}) = \mathbb{E}[\xi(\bm{x}_{\rm new})|\mathcal{Y}_n]&=\mu_\xi + \boldsymbol{\gamma}_{n,\rm new}^T \bm{\Gamma}_n^{-1}\left([\bm{y}_{o}, \hat{\bm{y}}_{c}]^T - \mu_\xi \cdot \textbf{1}_{n}\right),
\label{Equ:Censor_E}\\
s^2(\bm{x}_{\rm new}) =\text{Var} [\xi(\bm{x}_{\rm new})|\mathcal{Y}_n]&=\sigma^2_\xi-\boldsymbol{\gamma}_{n,\rm new}^T (\bm{\Gamma}^{-1}_n-\bm{\Gamma}_n^{-1}{\bm{\Sigma}}\bm{\Gamma}_n^{-1})\boldsymbol{\gamma}_{n,\rm new}.
\label{Equ:Censor_Var}
\end{align}
Here, $\bm{\Gamma}_n = \sigma^2_\xi{[R_{\boldsymbol{\theta}_\xi}(\bm{x}_i, \bm{x}_j)]_{i=1}^n} _{j=1}^n + \sigma^2_\epsilon \bm{I}_{n}$, $\boldsymbol{\gamma}_{n,\rm new}=\sigma^2_\xi\big[R_{\boldsymbol{\theta}_\xi}(\bm{x}_1, \bm{x}_{\rm new}), \cdots, \break R_{\boldsymbol{\theta}_\xi}(\bm{x}_n, \bm{x}_{\rm new}) \big]^T$, $\boldsymbol{1}_n$ is a one-vector of length $n$, and $\bm{I}_{n}$ is an $n \times n$ identity matrix. Furthermore, $\hat{\bm{y}}_{c}=\mathbb{E}[\bm{y}_{c}' |\mathcal{Y}_n]$ is the expected response for the latent vector $\bm{y}_c'$ given the dataset $\mathcal{Y}_n$,
${\bm{\Sigma}}_{c}=\text{Var}[\bm{y}_c'|\mathcal{Y}_n]$ is its conditional variance, and ${\bm{\Sigma}} = \text{diag}(\textbf{0}_{n_{o}},{\bm{\Sigma}}_{c})$. The computation of these quantities will be discussed later in Section \ref{Sec:AdaAlgor}. The conditional mean \eqref{Equ:Censor_E} is used to predict the mean experimental response at an untested input $\bm{x}_{\rm new}$, and the conditional variance \eqref{Equ:Censor_Var} is used to quantify predictive uncertainty.
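To make these quantities concrete, consider the special case of a single censored run ($n_c = 1$): the latent $y_c'$ is normally distributed given the uncensored runs, and conditioning further on $y_c' \geq c$ truncates this distribution, so $\hat{\bm{y}}_c$ and $\bm{\Sigma}_c$ reduce to the moments of a left-truncated normal. A minimal Python sketch of this special case follows (the function name is ours; the predictive mean $m$ and variance $v$ of $y_c'$ given the uncensored runs are assumed to be available):

```python
import numpy as np
from scipy.stats import norm

def truncnorm_moments(m, v, c):
    """Mean and variance of N(m, v) conditioned on exceeding c.

    With a single censored run, these give y_hat_c = E[y'_c | data] and
    Sigma_c = Var[y'_c | data], where (m, v) is the Gaussian predictive
    distribution of the latent y'_c given the uncensored runs.
    """
    s = np.sqrt(v)
    alpha = (c - m) / s
    lam = norm.pdf(alpha) / norm.sf(alpha)   # inverse Mills ratio
    mean = m + s * lam                       # truncated-normal mean
    var = v * (1.0 + alpha * lam - lam**2)   # truncated-normal variance
    return mean, var
```

For example, with $m = 0$, $v = 1$, and $c = 0$, this gives $\hat{y}_c = \sqrt{2/\pi} \approx 0.798$. With multiple censored runs, $\hat{\bm{y}}_c$ and $\bm{\Sigma}_c$ instead involve multivariate normal orthant probabilities, as discussed later.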
In the case of no censoring (i.e., $\mathcal{Y}_n = \{\bm{y}_o\}$), equations \eqref{Equ:Censor_E} and \eqref{Equ:Censor_Var} reduce to:
\begin{align}
\hat{\xi}(\bm{x}_{\rm new}) = \mathbb{E}[\xi(\bm{x}_{\rm new})|\mathcal{Y}_n]&=\mu_\xi + \boldsymbol{\gamma}_{n,\rm new}^T \bm{\Gamma}_n^{-1}\left(\bm{y}_o - \mu_\xi \cdot \textbf{1}_{n}\right), \quad \text{and}
\label{Equ:Uncensor_E}\\
s^2(\bm{x}_{\rm new}) =\text{Var} [\xi(\bm{x}_{\rm new})|\mathcal{Y}_n]&=\sigma^2_\xi-\boldsymbol{\gamma}_{n,\rm new}^T \bm{\Gamma}^{-1}_n \boldsymbol{\gamma}_{n,\rm new}.
\label{Equ:Uncensor_Var}
\end{align}
These are precisely the conditional mean and variance expressions for the standard GP regression model \citep{santner2018design}, as expected.
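The uncensored expressions \eqref{Equ:Uncensor_E} and \eqref{Equ:Uncensor_Var} are straightforward to implement; below is a minimal Python sketch using a Gaussian correlation function (the correlation family matches the later 1D example, but the function names and interface are our own illustrative choices):

```python
import numpy as np

def gauss_corr(X1, X2, theta):
    # Gaussian correlation: R(x, x') = exp(-sum_k theta_k (x_k - x'_k)^2)
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2 * theta).sum(-1)
    return np.exp(-d2)

def gp_predict(X, y, Xnew, mu, sig2, theta, sig2_eps):
    # Conditional mean and variance of xi(x_new), eqs. (5)-(6), no censoring
    Gamma = sig2 * gauss_corr(X, X, theta) + sig2_eps * np.eye(len(X))
    gamma = sig2 * gauss_corr(X, Xnew, theta)     # n x m matrix of gamma_{n,new}
    sol = np.linalg.solve(Gamma, gamma)           # Gamma^{-1} gamma
    mean = mu + sol.T @ (y - mu)                  # eq. (5)
    var = sig2 - np.einsum('ij,ij->j', gamma, sol)  # eq. (6)
    return mean, var
```

As a sanity check, when $\sigma^2_\epsilon \to 0$ the predictor interpolates the training responses, and the predictive variance returns to $\sigma^2_\xi$ far from all design points.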
\subsection{Design criterion} \label{Sec:ICMSEFormula}
Now, given data $\mathcal{Y}_n$ from $n$ experiments ($n_o$ of which are observed exactly, $n_c$ of which are censored), we propose a new design method that accounts for the posterior probability of a potential observation being censored.
Let $\bm{x}_{n+1}$ be a potential next input for experimentation, $Y_{n+1}'$ be its latent response \textit{prior to} censoring, and $Y_{n+1} = Y_{n+1}'(1-\mathds{1}_{\{Y_{n+1}'\geq c\}}) + c \mathds{1}_{\{Y_{n+1}'\geq c\}}$ be its corresponding observation \textit{after} censoring, with $\mathds{1}_{\{\cdot\}}$ denoting the indicator function. The proposed method chooses the next input $\bm{x}^*_{n+1}$ as:
\begin{align}
\begin{split}
\bm{x}^*_{n+1} =& \argmin_{\bm{x}_{n+1}} \text{ICMSE}(\bm{x}_{n+1})\\
:=& \argmin_{\bm{x}_{n+1}} \int_{[0,1]^p} \mathbb{E}_{Y_{n+1}|\mathcal{Y}_n}\left[\text{Var}(\xi(\bm{x}_{\rm new})|\mathcal{Y}_n,Y_{n+1}) \right] \; d\bm{x}_{\rm new}.
\label{Equ:SeqICMSE}
\end{split}
\end{align}
The design criterion $\text{ICMSE}(\bm{x}_{n+1})$ can be understood in two parts. First, the term $\text{Var}(\xi(\bm{x}_{\rm new})|\mathcal{Y}_n,Y_{n+1})$ quantifies the predictive variance (i.e., mean-squared error, MSE) of the mean response at an untested input $\bm{x}_{\rm new}$, given both the training data $\mathcal{Y}_n$ and the potential observation $Y_{n+1}$. This is a reasonable quantity to minimize for design, since we wish to find which new input $\bm{x}_{n+1}$ can minimize predictive uncertainty. Second, note that this MSE term cannot be used directly as a criterion, since it depends on the potential observation $Y_{n+1}$, which is yet to be observed. One way around this is to take the conditional expectation $\mathbb{E}_{Y_{n+1}|\mathcal{Y}_n}[\cdot]$ (more on this below).
Finally, the integral over $[0,1]^p$ yields the average predictive uncertainty over the entire design space.
The proposed criterion in \eqref{Equ:SeqICMSE} can be viewed as an extension of the sequential integrated mean-squared error (IMSE) design \citep{lam2008sequential, santner2018design} for the censored response setting. Assuming no censoring (i.e., $\mathcal{Y}_n = \{\bm{y}_o\}$), the sequential IMSE design chooses the next input $\bm{x}_{n+1}^*$ by minimizing:
\begin{align}
\min_{\bm{x}_{n+1}} \text{IMSE}(\bm{x}_{n+1}):= \min_{\bm{x}_{n+1}} \int_{[0,1]^p} \text{Var}(\xi(\bm{x}_{\rm new})|\mathcal{Y}_n,Y_{n+1}') \; d\bm{x}_{\rm new}.
\label{Equ:Seq_IMSE}
\end{align}
Note that, in the \textit{uncensored} setting, the MSE term $\text{Var}(\xi(\bm{x}_{\rm new})|\mathcal{Y}_n,Y_{n+1}')$ in \eqref{Equ:Seq_IMSE} does \textit{not} depend on the potential observation $Y_{n+1}'$, which allows the criterion to be easily computed in practice. However, in the \textit{censored} setting at hand, not only does this MSE term \textit{depend} on $Y_{n+1}'$, but such an observation may not be directly observed due to censoring. The conditional expectation $\mathbb{E}_{Y_{n+1}|\mathcal{Y}_n}[\cdot]$ in \eqref{Equ:SeqICMSE} addresses this by accounting for the posterior probability of censoring in $Y_{n+1}'$.
One attractive feature of the ICMSE criterion \eqref{Equ:SeqICMSE} is that it is \textit{adaptive} to the experimental responses in the data. The criterion \eqref{Equ:SeqICMSE} inherently hinges on whether the potential observation $Y_{n+1}$ is censored (i.e., $Y'_{n+1} \geq c$) or not (i.e., $Y_{n+1}' < c$), but this censoring behavior needs to be estimated from experimental data. Viewed this way, the ICMSE criterion can be broken down into two steps: it (i) estimates the posterior probability of a new observation being censored from data, and then (ii) samples the next point that minimizes the \textit{average} predictive uncertainty under censoring.
We will show how our method adaptively incorporates the posterior probability of censoring $Y_{n+1}$ for sequential design, in contrast to the existing IMSE method \eqref{Equ:Seq_IMSE}.
\subsubsection{No censoring in training data} \label{Sec:NoPriorCensor}
To provide some intuition, consider a simplified scenario with no censoring in the \textit{training} set, i.e., $\mathcal{Y}_n = \{\bm{y}_o\}$ (censoring may still occur for the new $Y_{n+1}$). In this case, the following proposition gives an explicit expression for the ICMSE criterion.
\begin{proposition}
\label{prop:1}
Suppose there is no censoring in training data, i.e., $\mathcal{Y}_n = \{\bm{y}_o\}$.
Then the ICMSE criterion \eqref{Equ:SeqICMSE} has the explicit expression:
\begin{align}
&\text{\rm ICMSE} (\bm{x}_{n+1}) =\int_{[0,1]^p}\sigma_{\rm new}^2-h_c(\bm{x}_{n+1})\rho_{\rm new}^2(\bm{x}_{n+1})\sigma_{\rm new}^2\; d\bm{x}_{\rm new},\label{Equ:CMSEnc}\\
\text{where} \quad &
h_c(\bm{x}_{n+1}) = h(z_c) = \Phi(z_c)-z_c\phi(z_c)+\frac{\phi^2(z_c)}{1-\Phi(z_c)}, \quad z_c=\frac{c-\mu_{n+1}}{\sigma_{n+1}}. \notag
\end{align}
Here, $\sigma_{\rm new}^2=\text{\rm Var} [\xi(\bm{x}_{\rm new})|\mathcal{Y}_n]$, $\rho_{\rm new}(\bm{x}_{n+1}) =\text{\rm Corr} [\xi(\bm{x}_{n+1}),\xi(\bm{x}_{\rm new})|\mathcal{Y}_n]$, $\mu_{n+1}\break =\mathbb{E} [\xi(\bm{x}_{n+1})|\mathcal{Y}_n]$, and $\sigma_{n+1}^2=\text{\rm Var} [\xi(\bm{x}_{n+1})|\mathcal{Y}_n]$ follow from \eqref{Equ:Uncensor_E} and \eqref{Equ:Uncensor_Var}.
$\phi(\cdot)$ and $\Phi(\cdot)$ are the probability density and cumulative distribution functions for the standard normal distribution.
\end{proposition}
\noindent In words, $\mu_{n+1}$ is the predictive mean at $\bm{x}_{n+1}$ given data $\mathcal{Y}_n$, $\sigma_{n+1}^2$ and $\sigma_{\rm new}^2$ are the predictive variances at $\bm{x}_{n+1}$ and $\bm{x}_{\rm new}$, respectively, and $\rho_{\rm new}(\bm{x}_{n+1})$ is the posterior correlation between $\xi(\bm{x}_{n+1})$ and $\xi(\bm{x}_{\rm new})$. Note that the $p$-dimensional integral in \eqref{Equ:CMSEnc} can also be efficiently computed in practice; we provide more discussion later in Corollary \ref{corr:1}. The proof of this proposition can be found in Appendix A.2.
To glean intuition from the criterion \eqref{Equ:CMSEnc}, we compare it with the existing sequential IMSE criterion \eqref{Equ:Seq_IMSE}. Under no censoring in training data (i.e., $\mathcal{Y}_n = \{\bm{y}_o\}$), \eqref{Equ:Seq_IMSE} can be rewritten as:
\begin{align}
\text{\rm IMSE}(\bm{x}_{n+1}) =\int_{[0,1]^p}\sigma_{\rm new}^2-\rho^2_{\rm new}(\bm{x}_{n+1})\sigma_{\rm new}^2\; d\bm{x}_{\rm new}.
\label{Equ:SeqIMSEnc}
\end{align}
Comparing \eqref{Equ:SeqIMSEnc} with \eqref{Equ:CMSEnc}, we note a key distinction in the ICMSE criterion: the presence of $h_c(\bm{x}_{n+1})=h(z_c)$, where $z_c$ is the normalized right-censoring limit under the posterior distribution at $\bm{x}_{n+1}$. We call $h(\cdot)$ the \textit{censoring adjustment} function. Fig \ref{Fig:PC} visualizes $h(z_c)$ for different choices of $z_c$.
Consider first the case of $z_c$ large. From the figure, we see that $h(z_c) \rightarrow 1$ as $z_c \rightarrow \infty$, in which case the proposed ICMSE criterion \eqref{Equ:CMSEnc} reduces to the standard IMSE criterion \eqref{Equ:SeqIMSEnc}. This makes sense intuitively: a large value of $z_c$ (i.e., a high right-censoring limit) means that a new observation at $\bm{x}_{n+1}$ has little posterior probability of being censored at $c$. In this case, the ICMSE criterion (which minimizes predictive variance \textit{under} censoring) should then reduce to the IMSE criterion (which minimizes predictive variance \textit{ignoring} censoring).
Consider next the case of $z_c$ small. From the figure, we see that $h(z_c) \rightarrow 0$ as $z_c \rightarrow -\infty$, and the proposed criterion \eqref{Equ:CMSEnc} reduces to the integral of $\sigma_{\rm new}^2$. Again, this makes intuitive sense: a small value of $z_c$ (i.e., a low right-censoring limit) means a new observation at $\bm{x}_{n+1}$ has a high posterior probability of being censored.
In this case, the ICMSE criterion reduces to the predictive variance of the testing point $\bm{x}_{\rm new}$ given only the first $n$ training data points, meaning a new design point at $\bm{x}_{n+1}$ offers little reduction in predictive variance.
Viewed this way, the proposed ICMSE criterion modifies the standard IMSE criterion by accounting for the
posterior probability of censoring via the censoring adjustment function $h(z_c)$.
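The censoring adjustment function itself is simple to evaluate numerically; a short Python sketch (the function name is ours) follows:

```python
import numpy as np
from scipy.stats import norm

def h(z):
    # Censoring adjustment function of Proposition 1:
    #   h(z_c) = Phi(z_c) - z_c * phi(z_c) + phi(z_c)^2 / (1 - Phi(z_c))
    z = np.asarray(z, dtype=float)
    return norm.cdf(z) - z * norm.pdf(z) + norm.pdf(z) ** 2 / norm.sf(z)
```

Using $\phi'(z) = -z\phi(z)$, one can check that $h'(z) = \phi(z)\,\{\lambda(z) - z\}^2 \geq 0$ with $\lambda(z) = \phi(z)/\{1-\Phi(z)\}$, so $h$ increases monotonically from 0 (heavy censoring) to 1 (no censoring), interpolating between no variance reduction and the IMSE integrand \eqref{Equ:SeqIMSEnc}.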
\begin{figure}
\caption{The censoring adjustment function $h(z_c)$ for different choices of $z_c$.}
\label{Fig:PC}
\end{figure}
Equation \eqref{Equ:CMSEnc} also reveals an important trade-off for the proposed design under censoring. Consider first the standard IMSE criterion \eqref{Equ:SeqIMSEnc}, which minimizes predictive uncertainty under no censoring. Since the first term $\sigma_{\rm new}^2$ does not depend on the new design point $\bm{x}_{n+1}$, this uncertainty minimization is achieved by maximizing the second term $\rho^2_{\rm new}(\bm{x}_{n+1})\sigma_{\rm new}^2$. This can be interpreted as the variance reduction from observing $Y_{n+1}'$ \citep{gramacy2015local}. Consider next the proposed ICMSE criterion \eqref{Equ:CMSEnc}, which maximizes the term $h(z_c)\rho^2_{\rm new}(\bm{x}_{n+1})\sigma_{\rm new}^2$. This can further be broken down into (i) the maximization of variance reduction term $\rho^2_{\rm new}(\bm{x}_{n+1})\sigma_{\rm new}^2$, and (ii) the maximization of the censoring adjustment function $h(z_c)$. Objective (i) is the same as for the standard IMSE criterion -- it minimizes predictive uncertainty assuming no response censoring. Objective (ii), by maximizing the censoring adjustment function $h(z_c)$, aims to minimize the posterior probability of the new design point being censored. Putting both parts together, the ICMSE criterion \eqref{Equ:CMSEnc} features an important trade-off: it aims to find a new design point that jointly minimizes predictive uncertainty (in the absence of censoring) and the posterior probability of being censored.
\subsubsection {Censoring in training data} \label{Sec:Derivation}
We now consider the general case of censored training data $\mathcal{Y}_n=\{\bm{y}_o,\bm{y}_c'\geq \bm{c}\}$. The following proposition gives an explicit expression for the ICMSE criterion.
\begin{proposition} \label{Thm:EMSEwc}
Given the censored data $\mathcal{Y}_n=\{\bm{y}_o,\bm{y}_c'\geq \bm{c}\}$, we have:
\begin{align}
\text{\rm ICMSE}(\bm{x}_{n+1})=\int_{[0,1]^p}\sigma^2_{\rm new}- \boldsymbol{\gamma}_{
n+1,\rm new}^T\bm{\Gamma}_{n+1}^{-1} {\bm{H}}_c(\bm{x}_{n+1}) \bm{\Gamma}_{n+1}^{-1} \boldsymbol{\gamma}_{n+1,
\rm new}\; d\bm{x}_{\rm new},
\label{Equ:generalCMSE}
\end{align}
where $\sigma^2_{\rm new}=\textup{Var} [\xi(\bm{x}_{\rm new})|\mathcal{Y}_n]$, and ${\boldsymbol{\gamma}}_{n+1,\rm new}$ and $\bm{\Gamma}_{n+1}$ follow from \eqref{Equ:Censor_E} and \eqref{Equ:Censor_Var}. The matrix $\bm{H}_c(\bm{x}_{n+1})$ has an easy-to-evaluate expression given in Appendix A.3.
\end{proposition}
\noindent Here, $\sigma^2_{\rm new}$ is the predictive variance at point $\bm{x}_{\rm new}$ conditional on the data $\mathcal{Y}_n$.
The full expression for the $(n+1)\times (n+1)$ matrix $\bm{H}_c(\bm{x}_{n+1})$, while easy to evaluate, is quite long and cumbersome; it is provided in Appendix A.3. The key computation in evaluating $\bm{H}_c( \bm{x}_{n+1})$ is several orthant probabilities from a multivariate normal distribution. The proof of this proposition can be found in Appendix A.3. Section \ref{Sec:AdaAlgor} and Appendix C provide further details on computation.
While this general ICMSE criterion \eqref{Equ:generalCMSE} is more complex,
its interpretation is quite similar to that of the earlier criterion -- its integrand contains a posterior variance term conditional on data $\mathcal{Y}_n$, and a variance reduction term from the potential observation $Y_{n+1}$.
The matrix $\bm{H}_c( \bm{x}_{n+1})$ on the variance reduction term serves a similar purpose to the censoring adjustment function. A large value of $\bm{H}_c( \bm{x}_{n+1})$ (in a matrix sense) suggests a low posterior probability of censoring for a new point $\bm{x}_{n+1}$, whereas a small value suggests a high posterior probability of censoring. This again results in the important trade-off for sequential design under censoring: the proposed ICMSE criterion aims to find the next design point which not only (i) minimizes predictive uncertainty of the fitted model in the absence of censoring, but also (ii) minimizes the posterior probability that the resulting observation is censored. The posterior probability is adaptively learned from the training data, and is not considered by the standard IMSE criterion.
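As noted above, the main cost in evaluating $\bm{H}_c(\bm{x}_{n+1})$ lies in multivariate normal orthant probabilities. These are available in standard numerical libraries; a minimal sketch (the function name is ours) using SciPy follows:

```python
import numpy as np
from scipy.stats import multivariate_normal

def upper_orthant_prob(mu, Sigma, c):
    # P(Y >= c) for Y ~ N(mu, Sigma), computed via the reflection
    # identity {Y >= c} = {-Y <= -c}, which reuses the lower-orthant CDF.
    mu = np.atleast_1d(np.asarray(mu, dtype=float))
    c = np.atleast_1d(np.asarray(c, dtype=float))
    return multivariate_normal(mean=-mu, cov=Sigma).cdf(-c)
```

For instance, for a standard bivariate normal with independent components, the probability of both components exceeding zero is $1/4$.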
\begin{figure}
\caption{(a) The design criteria of ICMSE, IMSE-Impute, and IMSE-Cen for the 1D example; (b)-(d) the next three design points, the final predictors, and the corresponding predictive standard deviations for the three design methods.}
\label{Fig:Illustration}
\end{figure}
\subsection{An illustrative example} \label{Sec:OneRun1D}
We illustrate the ICMSE criterion using a 1D example. Suppose the mean response of the physical experiment is:
\begin{equation}
\xi(x) = 0.5\sin \left(10 (x - 1.02)^2\right) - 1.25(x - 0.75)(2x-0.25) + 0.2,
\label{Equ:toyExa1D}
\end{equation}
with measurement noise variance $\sigma_\epsilon^2=0.1^2$. Further suppose censoring occurs above an upper limit of $c = 0.55$. The initial design consists of 6 equally-spaced runs, which results in 5 observed runs and 1 censored run.
The Gaussian correlation function is used for $R_{\boldsymbol{\theta}_\xi}$, with model parameters estimated via maximum likelihood.
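The setup above can be reproduced directly; a short Python sketch follows (we assume the six equally-spaced runs lie over $[0,1]$, i.e., $x \in \{0, 0.2, \ldots, 1\}$):

```python
import numpy as np

def xi(x):
    # Mean response of the 1D illustrative example, eq. (12)
    return (0.5 * np.sin(10 * (x - 1.02) ** 2)
            - 1.25 * (x - 0.75) * (2 * x - 0.25) + 0.2)

c = 0.55                       # right-censoring limit
x0 = np.linspace(0.0, 1.0, 6)  # assumed equally-spaced initial design over [0, 1]
censored = xi(x0) >= c         # noiseless mean responses exceeding the limit
```

With the noiseless mean, exactly one of the six runs (at $x = 0.6$, where $\xi \approx 0.87$) exceeds the limit, consistent with the 5-observed/1-censored split above; with measurement noise $\sigma_\epsilon = 0.1$, the run near $x = 0.2$ ($\xi \approx 0.52$) may occasionally censor as well.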
We compare the ICMSE method \eqref{Equ:generalCMSE} with two IMSE-based methods. Note that, from Section \ref{Sec:ICMSEFormula}, the standard IMSE criterion \eqref{Equ:Seq_IMSE} cannot be directly applied here, since it depends on the potential observation $Y'_{n+1}$, which is unobserved. We adopt the following two variants of IMSE for the censored setting. The first method, ``IMSE-Impute'', is a simple baseline which \textit{imputes} the censored responses in the training data with the known measurement limit $c$. The new design point is then optimized assuming its corresponding response $Y'_{n+1}$ is \textit{not} subject to censoring. The second method, ``IMSE-Cen'', integrates the censored runs (in training data) using the \textit{censored} GP model (\ref{Equ:Censor_E}-\ref{Equ:Censor_Var}). The new design point is again optimized assuming its response $Y'_{n+1}$ is \textit{not} subject to censoring. In contrast, the proposed ICMSE method \eqref{Equ:generalCMSE} considers \textit{both} censored training data and the possibility of censoring in the new observation $Y'_{n+1}$ within the design criterion. For a fair comparison, we use the censored GP model (\ref{Equ:Censor_E}-\ref{Equ:Censor_Var}) to evaluate predictive performance for all design methods.
Fig \ref{Fig:Illustration}(a) shows the proposed criterion for the ICMSE method (in orange). It selects the next design point at $x_{7}^*=0.068$, which balances the two desired properties from the ICMSE criterion.
First, it avoids regions with high posterior probabilities of response censoring, due to the presence of $\bm{H}_c(\cdot)$ in \eqref{Equ:generalCMSE}. The next point $x_7^*$, which minimizes \eqref{Equ:generalCMSE}, subsequently \textit{avoids} the censored regions (shaded red), as desired. In contrast, Fig \ref{Fig:Illustration}(a) also shows the design criteria for IMSE-Impute (green) and IMSE-Cen (blue). We see that both IMSE methods choose the next point \textit{within} the censored regions, as the IMSE design criterion does not consider the probability of a new observation being censored. Second, the next point $x_7^*$ chosen by ICMSE minimizes the overall predictive uncertainty for the mean function $\xi(\cdot)$, since the ICMSE criterion is small in regions \textit{away} from existing design points. This can be seen within the region $[0.2,0.5]$, where local minima of the ICMSE criterion are found between training points.
The top plots in Fig \ref{Fig:Illustration}(b)-(d) show the next 3 design points ($x^*_7,x^*_8,x^*_9$) from the 3 considered design methods, as well as the final predictor $\hat{\xi}(\cdot)$ using the censored GP model (\ref{Equ:Censor_E}-\ref{Equ:Censor_Var}) with all 9 points. The bottom plots in Fig \ref{Fig:Illustration}(b)-(d) show the corresponding predictive standard deviation. We see that ICMSE yields noticeably better predictive performance compared to the two IMSE methods.
One reason is that the proposed criterion makes use of the censored GP model for both modeling and design, whereas the two baselines do not.
Table \ref{tab:RMSE1DIll} shows the root mean-squared error (RMSE) after the 3 sequential runs over a test set of 1000 equally-spaced points.
The proposed ICMSE method achieves much smaller errors compared to the two IMSE baselines. We will provide a more comprehensive comparison of predictive performance in Section \ref{Sec:Test1D}.
\begin{table}[!t]
\centering
\begin{tabular}{c |c c c}
\toprule
& IMSE-Impute & IMSE-Cen & ICMSE\\
\hline
6 runs & \textbf{0.260} & \textbf{0.260} & \textbf{0.260} \\
7 runs & 0.214 & 0.214 &\textbf{0.119}\\
8 runs & 0.172 & 0.236 & \textbf{0.102}\\
9 runs & 0.153 & 0.203 & \textbf{0.096}\\
\bottomrule
\end{tabular}
\caption{Predictive performance (in RMSE) for 3 sequential runs in the 1D example \eqref{Equ:toyExa1D}, using ICMSE and the two IMSE baselines (IMSE-Impute and IMSE-Cen).}
\label{tab:RMSE1DIll}
\end{table}
\section{ICMSE design for bi-fidelity modeling} \label{Sec:MultiF}
Next, we extend the ICMSE design to the bi-fidelity setting, where auxiliary computer experiment data are available. We first present the GP framework for bi-fidelity modeling, and extend the earlier ICMSE criterion. We then present an algorithmic framework for efficient implementation, and investigate its performance on two illustrative examples.
\subsection{Modeling framework}
\label{sec:mfmodel}
Let $f(\bm{x})$ denote the \textit{computer} experiment output at input $\bm{x}$. We model $f(\cdot)$ as the GP model:
\begin{equation}
f(\cdot) \sim \text{GP}\{\mu_f,\sigma^2_f R_{\boldsymbol{\theta}_f}(\cdot, \cdot)\}.
\label{Equ:CE}
\end{equation}
Following Section \ref{Sec:PEModel}, let $\xi(\bm{x})$ denote the latent mean response for \textit{physical} experiments at input $\bm{x}$. We assume that $\xi(\cdot)$ takes the form:
\begin{align}
\xi(\bm{x}) = f(\bm{x})+\delta(\bm{x}),
\label{Equ:DataFusion}
\end{align}
where $\delta(\bm{x})$ is the so-called \textit{discrepancy} function, quantifying the difference between computer and physical experiments at input $\bm{x}$. Following \cite{KO2001}, we model this discrepancy using a zero-mean GP model:
\begin{equation}
\delta(\cdot) \sim \text{GP}\{ 0, \sigma^2_\delta R_{\boldsymbol{\theta}_\delta}(\cdot,\cdot) \},
\label{Equ:Disc}
\end{equation}
where the prior on $\delta(\cdot)$ is independent of $f(\cdot)$. Here, physical experiments are observed with experimental noise as in Section \ref{Sec:PEModel}, whereas computer experiments are observed without noise.
Suppose $(n-m)$ computer experiments and $m$ physical experiments ($n$ experiments in total) are conducted at inputs $\bm{x}_{1:n}=\{\bm{x}_{1:(n-m)}^f,\bm{x}_{1:m}^\xi\}$, yielding data $\bm{f} = [f_1,\cdots, f_{n-m}]$ and $\mathcal{Y}_{m} = \{\bm{y}_{o}, \bm{y}_c' \geq \bm{c}\}$. Note that censoring occurs only in physical experiments, since computer experiments are conducted via numerical simulations.
Assuming all model parameters are known (parameter estimation is discussed later in Section \ref{Sec:AdaAlgor}), the mean response $\xi(\bm{x}_{\rm new})$ at a new input $\bm{x}_{\rm new}$ has the following conditional mean and variance:
\begin{align}
\hat{\xi}(\bm{x}_{\rm new}) = \mathbb{E}[\xi(\bm{x}_{\rm new})|\bm{f},\mathcal{Y}_{m}]=\mu_f + \boldsymbol{\gamma}_{n,\rm new}^T\bm{\Gamma}_n^{-1}\left([\bm{f}, \bm{y}_{o}, \hat{\bm{y}}_{c}]^T - \mu_f \textbf{1}_{n} \right), \label{Equ:Fusion_E} \\
s^2(\bm{x}_{\rm new}) = \text{Var}[\xi(\bm{x}_{\rm new})|\bm{f},\mathcal{Y}_{m}]=\sigma^2_f+\sigma^2_\delta-\boldsymbol{\gamma}_{n,\rm new}^T(\bm{\Gamma}_n^{-1}-\bm{\Gamma}_n^{-1}{\bm{\Sigma}}\bm{\Gamma}_n^{-1})\boldsymbol{\gamma}_{n,\rm new},
\label{Equ:Fusion_Var}
\end{align}
where $\boldsymbol{\gamma}_{n,\rm new}=\sigma^2_f[R_{\boldsymbol{\theta}_f}(\bm{x}_i, \bm{x}_{\rm new})]_{i=1}^{n}+\sigma^2_\delta[\bm{0}_{n-m},R_{\boldsymbol{\theta}_\delta}(\bm{x}_i, \bm{x}_{\rm new})]_{i=1}^{m}$ is the covariance vector, and $\bm{\Gamma}_n=\sigma^2_f{[R_{\boldsymbol{\theta}_f}(\bm{x}_i, \bm{x}_j)]_{i=1}^{n}}_{j=1}^{n} + \text{diag}\big( \bm{0}_{n-m},\sigma^2_\epsilon \bm{I}_{m}+\sigma^2_\delta \times \break {[R_{\boldsymbol{\theta}_\delta}(\bm{x}_i, \bm{x}_j)]_{i=1}^m}_{j=1}^m \big)$ is the covariance matrix.
Here, $\hat{\bm{y}}_{c}=\mathbb{E}[\bm{y}_{c}'|\bm{f},\bm{y}_{o},\bm{y}_{c}'\geq \bm{c}]$ is the conditional expectation of the latent vector $\bm{y}_{c}'$ given data $\{\bm{f},\mathcal{Y}_m \}$, and ${\bm{\Sigma}}_{c}=\text{Var}[\bm{y}_{c}'|\bm{f},\bm{y}_{o},\bm{y}_{c}'\geq \bm{c}]$ is its conditional variance, with ${\bm{\Sigma}} = \text{diag}(\textbf{0}_{n-n_c},{\bm{\Sigma}}_{c})$.
While such equations appear quite involved, they are simply the bi-fidelity extensions of the earlier GP modeling equations \eqref{Equ:Censor_E} and \eqref{Equ:Censor_Var}. For simplicity, we have overloaded some notations from \eqref{Equ:Censor_E} and \eqref{Equ:Censor_Var} here; the difference should be clear from the context.
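To make these prediction equations concrete, below is a minimal Python sketch of \eqref{Equ:Fusion_E}-\eqref{Equ:Fusion_Var} for the simplified case of no censored physical runs ($\hat{\bm{y}}_c$ empty, $\bm{\Sigma}=\bm{0}$) and known hyperparameters. This is illustrative only: the paper's implementation is in \textsf{R}, and the Gaussian correlation function and all names below are assumptions for the sketch.

```python
import numpy as np

def gauss_corr(X1, X2, theta):
    """Product Gaussian correlation R(x, x') = exp(-sum_l theta_l (x_l - x'_l)^2)."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2 * theta).sum(axis=2)
    return np.exp(-d2)

def bifidelity_predict(x_new, Xf, f, Xp, y, pars):
    """Conditional mean and variance of xi(x_new), bi-fidelity GP, no censoring.

    Sketch of (Fusion_E)-(Fusion_Var) with Sigma = 0 (all physical runs observed)
    and hyperparameters assumed known.
    """
    mu_f, s2f, thf, s2d, thd, s2e = (pars[k] for k in
        ("mu_f", "s2f", "theta_f", "s2d", "theta_d", "s2e"))
    X = np.vstack([Xf, Xp])                      # all n = (n-m) + m inputs
    n, m = len(X), len(Xp)
    # Covariance matrix Gamma_n: sigma_f^2 R_f everywhere, plus
    # (sigma_eps^2 I + sigma_delta^2 R_delta) on the physical-experiment block.
    Gamma = s2f * gauss_corr(X, X, thf)
    Gamma[n - m:, n - m:] += s2e * np.eye(m) + s2d * gauss_corr(Xp, Xp, thd)
    # Covariance vector gamma_{n,new}: the delta part only for physical runs.
    xn = np.atleast_2d(x_new)
    gamma = s2f * gauss_corr(X, xn, thf)[:, 0]
    gamma[n - m:] += s2d * gauss_corr(Xp, xn, thd)[:, 0]
    z = np.concatenate([f, y]) - mu_f
    sol = np.linalg.solve(Gamma, gamma)
    xi_hat = mu_f + sol @ z                      # conditional mean of xi(x_new)
    s2 = s2f + s2d - sol @ gamma                 # conditional variance (Sigma = 0)
    return xi_hat, s2
```

Far from all design points, the predictive variance reverts to the prior variance $\sigma^2_f+\sigma^2_\delta$, as expected from the equations above.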
\subsection{Bi-fidelity design criterion}
Now, we extend the ICMSE design to the bi-fidelity setting. The goal is to design \textit{physical} experiment runs (which may be censored), given auxiliary computer experiment data (which are not censored).
Under the above bi-fidelity GP model, the following proposition gives an explicit expression for the ICMSE design criterion.
\begin{proposition}\label{Thm:MFICMSE}
With experimental data $\{\bm{f},\mathcal{Y}_{m}\}$, the proposed
ICMSE criterion has the following explicit expression:
\begin{align}
\text{\rm ICMSE}(\bm{x}_{n+1})=&\int_{[0,1]^p} \mathbb{E}_{Y_{n+1}|\bm{f},\mathcal{Y}_m}\left[\text{\rm Var}(\xi(\bm{x}_{\rm new})|\bm{f},\mathcal{Y}_m,Y_{n+1}) \right] \; d\bm{x}_{\rm new} \notag \\
=& \int_{[0,1]^p} \sigma^2_{\rm new}- \boldsymbol{\gamma}_{n+1,\rm new}^T\bm{\Gamma}_{n+1}^{-1} \bm{H}_c(\bm{x}_{n+1}) \bm{\Gamma}_{n+1}^{-1} \boldsymbol{\gamma}_{n+1,\rm new} \; d\bm{x}_{\rm new},
\label{Equ:generalCMSEDF}
\end{align}
where $\sigma^2_{\rm new}=\textup{Var} [\xi(\bm{x}_{\rm new})|\bm{f},\mathcal{Y}_m]$, and ${\boldsymbol{\gamma}}_{n+1,\rm new}$ and $\bm{\Gamma}_{n+1}$ follow from \eqref{Equ:Fusion_E} and \eqref{Equ:Fusion_Var}. The matrix $\bm{H}_c(\bm{x}_{n+1})$ has an easy-to-evaluate expression given in Appendix B.1.
\end{proposition}
\noindent
The proof can be found in Appendix B.1. The following corollary gives a simplification of \eqref{Equ:generalCMSEDF} under a product correlation structure.
\begin{corollary}
\label{corr:1}
Suppose $R_{\boldsymbol{\theta}_f}(\cdot,\cdot)$ and $R_{\boldsymbol{\theta}_\delta}(\cdot,\cdot)$ are product correlation functions:
\begin{equation}
R_{\boldsymbol{\theta}_f} (\bm{x},\bm{x}')=\prod_{l=1}^p R_{\boldsymbol{\theta}_f}^{(l)} (x_l,x_l'), \quad R_{\boldsymbol{\theta}_\delta} (\bm{x},\bm{x}')=\prod_{l=1}^p R_{\boldsymbol{\theta}_\delta}^{(l)} (x_l,x_l'),
\label{Equ:prodcorr}
\end{equation}
with $\bm{x} = [x_{1}, \cdots, x_{p}]^T$.
Then, the ICMSE criterion \eqref{Equ:generalCMSEDF} can be further simplified as:
\begin{align}
\text{\rm ICMSE}(\bm{x}_{n+1})= \bar{\sigma}^2- \text{\rm tr}\left(\bm{\Gamma}_{n+1}^{-1} \bm{H}_c(\bm{x}_{n+1}) \bm{\Gamma}_{n+1}^{-1} \boldsymbol{\Lambda} \right),
\label{Equ:closedformCMSEDF}
\end{align}
where $\bar{\sigma}^2 = \int \sigma^2_{\rm new} \;d \bm{x}_{\rm new}$,
and $\boldsymbol{\Lambda}$ is an $(n+1) \times (n+1)$ matrix with $(i,j)^{th}$ entry:
\begin{align}
\begin{split}
&\Lambda_{ij} = \prod_{l=1}^p \left[\int_0^1
\zeta_i^{(l)}(x_{i,l},x) \, \zeta_j^{(l)}(x_{j,l},x)\; dx \right], \quad \text{\rm where } \\ &\zeta_i^{(l)}(z,x) = R_{\boldsymbol{\theta}_f}^{(l)} (z,x) +\mathds{1}_{\{i> (n-m)\}}R_{\boldsymbol{\theta}_\delta}^{(l)} (z,x).
\end{split}
\label{Equ: EntryLambda}
\end{align}
\end{corollary}
\noindent The key simplification from Corollary \ref{corr:1} is that it reduces the $p$-dimensional integral in the ICMSE criterion \eqref{Equ:generalCMSEDF} to a product of 1D integrals, which are more easily computed. Furthermore, if Gaussian correlation functions are used, these integrals can be reduced to error functions, which yield an easy-to-evaluate design criterion for ICMSE (see Appendix B.2 for details). Given the computational complexities of censored data, this simplification allows for efficient design optimization. Corollary \ref{corr:1} is motivated by the simplification of the IMSE criterion in \cite{sacks1989designs}. The proof can be found in Appendix B.2.
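The 1D reduction in Corollary \ref{corr:1} can be checked numerically. Below is an illustrative Python sketch (not part of the paper's \textsf{R} implementation) of one factor of $\Lambda_{ij}$, assuming a Gaussian correlation $R^{(l)}(z,x)=\exp\{-\theta(z-x)^2\}$, comparing trapezoidal quadrature against the error-function closed form mentioned above.

```python
import numpy as np
from math import erf, sqrt, pi, exp

def lambda_factor_quad(z1, z2, theta, n_grid=20001):
    """One factor of Lambda_ij: int_0^1 R(z1,x) R(z2,x) dx with Gaussian
    correlation, evaluated by trapezoidal quadrature."""
    x = np.linspace(0.0, 1.0, n_grid)
    vals = np.exp(-theta * (z1 - x) ** 2) * np.exp(-theta * (z2 - x) ** 2)
    h = x[1] - x[0]
    return h * (vals.sum() - 0.5 * (vals[0] + vals[-1]))

def lambda_factor_closed(z1, z2, theta):
    """Same integral in closed form: completing the square in x gives
    exp(-theta (z1-z2)^2 / 2) * int_0^1 exp(-2 theta (x-m)^2) dx,
    with m = (z1+z2)/2, and the remaining integral is an erf difference."""
    m, a = 0.5 * (z1 + z2), 2.0 * theta
    pref = exp(-0.5 * theta * (z1 - z2) ** 2) * 0.5 * sqrt(pi / a)
    return pref * (erf(sqrt(a) * (1.0 - m)) + erf(sqrt(a) * m))
```

The full entry $\Lambda_{ij}$ is then the product of such factors over the $p$ dimensions, which is what makes the criterion cheap to evaluate.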
The interpretation of the bi-fidelity ICMSE criterion \eqref{Equ:generalCMSEDF} is analogous to that of the single-fidelity ICMSE criterion \eqref{Equ:generalCMSE}. Similar to the censoring adjustment function, the matrix $\bm{H}_c(\cdot)$ factors in the posterior probability of censoring over the input space, and is used to adjust the variance reduction term in the criterion. Viewed this way, the ICMSE criterion \eqref{Equ:generalCMSEDF} provides the same design trade-off as before: the next design point should jointly (i) avoid censored regions by adaptively identifying such regions from data at hand, and (ii) minimize predictive uncertainty from the GP model.
\subsection{An adaptive algorithm for sequential design} \label{Sec:AdaAlgor}
We present next an adaptive algorithm \texttt{ICMSE} for implementing the proposed ICMSE design. This algorithm applies for both the single-fidelity setting (with flag $I_{\rm BF}=0$) in Section \ref{Sec:SingeF} and the bi-fidelity setting (with flag $I_{\rm BF}=1$) in Section \ref{Sec:MultiF}.
First, an initial $n_{\rm ini}$-point design is set up for initial experimentation: physical experiments for the single-fidelity setting, and computer experiments for the bi-fidelity setting. In our implementation, we used the maximum projection (MaxPro) design proposed by \cite{joseph2015maximum}, which provides good projection properties and thereby good GP predictive performance. Next, the following two steps are performed iteratively: (i) using observed data $\{\bm{f},\mathcal{Y}_{m}\}$, the GP model parameters are estimated via maximum likelihood; (ii) the next design point $\bm{x}_{n+1}^*$ is obtained by minimizing the ICMSE criterion (equation \eqref{Equ:generalCMSE} for the single-fidelity setting, or \eqref{Equ:generalCMSEDF} for the bi-fidelity setting), and its corresponding response $Y_{n+1}$ is then collected. This is repeated until a desired number of samples is obtained.
\begin{algorithm} [!t]
\begin{algorithmic}[1]
\If{$I_{\rm BF} = 0$} \Comment{Single-fidelity}
\State Generate an $n_{\rm ini}$-run initial MaxPro design $\bm{x}_{1:n_{\rm ini}}$
\State Collect initial data $\mathcal{Y}_{n_{\rm ini}}$ at inputs $\bm{x}_{1:n_{\rm ini}}$ from physical experiments
\State Estimate model parameters $\{\mu_\xi, \sigma^2_\xi, \boldsymbol{\theta}_\xi\}$ using MLE from initial data $\mathcal{Y}_{n_{\rm ini}}$
\Else \Comment{Bi-fidelity}
\State Generate an $n_{\rm ini}$-run initial MaxPro design $\bm{x}_{1:n_{\rm ini}}$
\State Collect initial data $\bm{f}$ at inputs $\bm{x}_{1:n_{\rm ini}}$ from computer experiments
\State Estimate model parameters $\{\mu_f,\sigma^2_f, \boldsymbol{\theta}_f\}$ using MLE from initial data $\bm{f}$, and let $\sigma^2_\delta=0$
\EndIf
\For{$k = n_{\rm ini}+1 , \cdots , n_{\rm ini} + n_{\rm seq}$} \Comment{$n_{\rm seq}$ sequential runs}
\If{$I_{\rm BF} = 0$}
\State Obtain new design point $\bm{x}_{k}^*$ by minimizing ICMSE criterion \eqref{Equ:generalCMSE}
\Else
\State Obtain new design point $\bm{x}_{k}^*$ by minimizing ICMSE criterion \eqref{Equ:generalCMSEDF}
\EndIf
\State Perform experiment at $\bm{x}_k^*$ and collect response $Y_{k}$ (which may be censored)
\State Update model parameter estimates using new data
\EndFor
\end{algorithmic}
\caption{\texttt{ICMSE}($n_{\rm ini}$, $n_{\rm seq}$, $c$, $I_{\rm BF}$): Adaptive design under censoring
}
\label{Alg:ICMSE}
\end{algorithm}
To optimize the ICMSE criterion,
we use standard numerical optimization methods in the \textsf{R} package \texttt{nloptr} \citep{ypma2014nloptr}, in particular, the Nelder-Mead method \citep{nelder1965simplex}.
The main computational bottleneck in optimization is evaluating moments of the truncated multivariate normal distribution for $\bm{H}_c(\cdot)$ (see equations (A.10) and (B.3) in the supplementary article, Chen et al., 2021). In our implementation, these moments are efficiently computed using the \textsf{R} package \texttt{tmvtnorm} \citep{wilhelm2010tmvtnorm}. Appendix C details further computational steps for speeding up design optimization, involving an approximation of the expected variance term via a plug-in estimator. Similar to the standard IMSE criterion, the ICMSE criterion can be quite multi-modal. We therefore suggest performing multiple random restarts of the optimization, and taking the solution with the best objective value as the new design point.
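The random-restart strategy can be sketched as follows. This illustrative Python code (the paper's implementation uses the \textsf{R} package \texttt{nloptr}) pairs a simple derivative-free local search, standing in for Nelder-Mead, with multiple random starting points on $[0,1]^p$, keeping the best solution found.

```python
import numpy as np

def compass_search(f, x0, step=0.25, tol=1e-6, max_iter=1000):
    """Simple derivative-free local search on [0,1]^p (a stand-in for
    Nelder-Mead): try +/- step moves along each coordinate, halving the
    step whenever no move improves the objective."""
    x, fx = x0.copy(), f(x0)
    for _ in range(max_iter):
        if step <= tol:
            break
        improved = False
        for i in range(len(x)):
            for s in (step, -step):
                xt = x.copy()
                xt[i] = min(1.0, max(0.0, xt[i] + s))  # clip to the unit cube
                ft = f(xt)
                if ft < fx:
                    x, fx, improved = xt, ft, True
        if not improved:
            step *= 0.5
    return x, fx

def multistart_minimize(f, p, n_starts=10, seed=1):
    """Random-restart optimization: run the local search from several
    random starting points and keep the best solution found."""
    rng = np.random.default_rng(seed)
    best_x, best_f = None, np.inf
    for _ in range(n_starts):
        x, fx = compass_search(f, rng.uniform(size=p))
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f
```

For a multi-modal criterion such as ICMSE, the restarts reduce the chance of returning a poor local minimum; the number of starts trades off robustness against computation.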
\subsection{Illustrative examples with adaptive design algorithm} \label{Sec:Test1D}
We first illustrate the proposed algorithm \texttt{ICMSE} on a 1D bi-fidelity example. Suppose the computer simulation is given by
\begin{align}
f(x) &= 0.5\sin \left(10 (x - 1.02)^2\right) + 0.1,
\label{eq:1Dbiexp}
\end{align}
with the same physical experiment settings as in Section \ref{Sec:OneRun1D}. We begin with an $n_{\rm ini}=6$-run design of equally-spaced points $x^f_{1:6}=\{(i-1)/5\}_{i=1}^6$ for computer experiments. We then perform a sequential $n_{\rm seq}=20$-run design for physical experiments using the algorithm \texttt{ICMSE}. The Gaussian correlation function is used for both GPs. In addition to the two IMSE methods in Section \ref{Sec:OneRun1D}, we consider an additional ``sequential MaxPro'' method \citep{joseph2016rejoinder}, which implements a sequential space-filling design. Again, for a fair comparison, we use the censored GP model (\ref{Equ:Censor_E}-\ref{Equ:Censor_Var}) for evaluating predictions for all design methods. The simulation is replicated 20 times.
\begin{figure}
\caption{The log-RMSE (a), log-MIS (b), and log-computation time (c) of the four considered design methods for the 1D bi-fidelity example \eqref{eq:1Dbiexp}.}
\label{Fig:Test1D}
\end{figure}
We consider two evaluation metrics for predictive performance: RMSE and the interval score proposed in \cite{gneiting2007strictly}. The first assesses predictive accuracy, and the second assesses uncertainty quantification. The $(1-\alpha)$\% interval score is defined as
\begin{align}
\text{IS}({\xi}_l,{\xi}_u;\xi) = ({\xi}_u-{\xi}_l)+\frac{2}{\alpha}({\xi}_l-\xi)_++\frac{2}{\alpha}(\xi-{\xi}_u)_+ ,
\end{align}
where $(a)_+ = \max(a,0)$, $\xi$ is the ground truth, and $[{\xi}_l,{\xi}_u]$ is a $(1-\alpha)$\% predictive interval. Here, we set $1-\alpha = 68\%$, with predictive interval $[\hat{\xi}-s,\hat{\xi}+s]$, where $\hat{\xi}$ and $s^2$ are obtained from \eqref{Equ:Fusion_E} and \eqref{Equ:Fusion_Var}. The mean interval score (MIS) is then computed over the entire test set. We also compared computation time on a 1.4 GHz Quad-Core Intel Core i5 laptop.
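The interval score above is straightforward to implement; the following Python sketch (with illustrative function names) computes the score and its mean over a test set.

```python
import numpy as np

def interval_score(lo, hi, truth, alpha):
    """Interval score of Gneiting & Raftery (2007) for a (1-alpha) interval:
    the interval width, plus 2/alpha-weighted penalties for the truth
    falling below lo or above hi."""
    lo, hi, truth = map(np.asarray, (lo, hi, truth))
    width = hi - lo
    below = (2.0 / alpha) * np.clip(lo - truth, 0.0, None)
    above = (2.0 / alpha) * np.clip(truth - hi, 0.0, None)
    return width + below + above

def mean_interval_score(lo, hi, truth, alpha):
    """Mean interval score (MIS) over a test set."""
    return float(np.mean(interval_score(lo, hi, truth, alpha)))
```

Narrow intervals are rewarded, but intervals that miss the truth incur a penalty growing with the miss distance, so the score jointly assesses sharpness and coverage.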
Fig \ref{Fig:Test1D} shows the log-RMSE (a), log-MIS (b), and log-computation time (c) for the 4 considered methods. The ICMSE method yields noticeable improvements over the IMSE and sequential MaxPro methods, with smaller RMSE and MIS values for most sequential run sizes.
One reason for this is that the proposed method integrates the possibility of a new observation being censored directly within the design criterion, which allows it to minimize predictive uncertainty by avoiding censored regions.
While ICMSE requires more computation time than the two baseline methods, its computational scaling appears comparable. Here, the IMSE-Cen method is terminated early after 12 sequential runs, due to numerical instabilities (and thereby expensive computation) in evaluating the predictive equations. This is because, by ignoring censoring, IMSE-Cen overestimates the potential variance reduction in censored regions, leading to many sequential points placed very close together in such regions.
\begin{figure}
\caption{Sequential design points and predicted mean responses $\hat{\xi}(\cdot)$ from the four considered design methods, for a single replication of the 1D bi-fidelity example \eqref{eq:1Dbiexp}.}
\label{Fig:Test1Dallmethod}
\end{figure}
Fig \ref{Fig:Test1Dallmethod} shows the sequential design points and the predicted mean responses $\hat{\xi}(\cdot)$ for a single replication. Compared to existing methods, ICMSE yields visually improved prediction in both the censored (shaded) and uncensored (clear) regions.
One reason for this is that the ICMSE criterion chooses points which jointly (i) avoid censored regions and (ii) minimize predictive uncertainty. For (i), note that only 1/20 = 5\% of sequential runs are censored for ICMSE, whereas 7/20 = 35\%, 8/20 = 40\%, and 9/12 = 75\% of sequential runs are censored for IMSE-Impute, MaxPro, and IMSE-Cen, respectively. This shows that ICMSE effectively estimates the posterior probability of censoring, and avoids regions with high probabilities for sampling. For (ii), Fig \ref{Fig:Test1Dallmethod}(d) shows that the sequential runs from ICMSE are far away from existing points, and also concentrated near the boundary of the censored region. Intuitively, this minimizes predictive uncertainty by ensuring design points well-explore the input space while avoiding losing information due to censoring.
Next, we conduct a 2D simulation. The computer simulation and mean physical experiment functions are taken from \cite{xiong2013sequential}:
\begin{align}
f(\bm{x}) =& \frac{1}{4}\xi\left(x_1 + \frac{1}{20}, x_2 + \frac{1}{20}\right)
+\frac{1}{4}\xi\left(x_1 +\frac{1}{20},\left(x_2 -\frac{1}{20}\right)_+\right) \label{eq:2Dexp}\\
+& \frac{1}{4}\xi\left(x_1 - \frac{1}{20},x_2 +\frac{1}{20}\right)
+\frac{1}{4}\xi\left(x_1 -\frac{1}{20},\left(x_2 -\frac{1}{20}\right)_+\right), \notag \\
\xi(\bm{x}) =& \left[1-\exp\left(-\frac{1}{2x_2}\right)\right]\frac{2300x_1^3+1900x_1^2+2092x_1+60}{100x_1^3+500x_1^2+4x_1+20},
\end{align}
with measurement variance $\sigma^2_\epsilon=1$, and a right censoring limit of $c=10$. We begin with an initial $n_{\rm ini}=12$-run MaxPro design for the computer experiment, then add $n_{\rm seq}=40$ sequential runs for physical experiments using \texttt{ICMSE}. This is then replicated 20 times.
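For reference, below is a Python sketch of the test functions above, with the numerator constant written as 60 following the Currin et al. exponential function used in \cite{xiong2013sequential}; the $x_2 \to 0$ limit of the bracketed factor (which equals 1) is handled here by flooring $x_2$.

```python
import numpy as np

def xi_2d(x1, x2):
    """Mean physical-experiment response (Currin et al. exponential function).
    x2 is floored at a tiny value so the x2 -> 0 limit (factor -> 1) is
    reproduced without dividing by zero."""
    x1, x2 = np.asarray(x1, float), np.asarray(x2, float)
    factor = 1.0 - np.exp(-1.0 / (2.0 * np.maximum(x2, 1e-12)))
    num = 2300.0 * x1**3 + 1900.0 * x1**2 + 2092.0 * x1 + 60.0
    den = 100.0 * x1**3 + 500.0 * x1**2 + 4.0 * x1 + 20.0
    return factor * num / den

def f_2d(x1, x2):
    """Low-fidelity computer-simulation output: a local average of xi over
    four shifted inputs, with (x2 - 1/20) clipped below at 0."""
    h = 1.0 / 20.0
    x2m = np.maximum(np.asarray(x2, float) - h, 0.0)
    return 0.25 * (xi_2d(x1 + h, x2 + h) + xi_2d(x1 + h, x2m)
                   + xi_2d(x1 - h, x2 + h) + xi_2d(x1 - h, x2m))
```

With the censoring limit $c = 10$, responses near small $x_2$ and moderate $x_1$ exceed the limit, so a nontrivial portion of the design space is subject to censoring.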
\begin{table}[t]
\centering
\begin{tabular}{c|ccc|ccc|ccc}
\toprule
& \multicolumn{3}{c|}{RMSE} & \multicolumn{3}{c|}{MIS} & \multicolumn{3}{c}{Computation Time (in s)} \\
\hline
Sequential runs & 5 & 15 & 40 & 5 & 15 & 40 & 5 & 15 & 40 \\ \hline\hline
IMSE-Impute & 1.62 & 1.38 & {1.14} & {4.91} & {4.03} & {3.29} & {4.14} & {12.18} & {64.34}\\
IMSE-Cen & 1.61 & 1.46 & - & \textbf{4.57} & 4.27 & - & 25.13 & 121.24 & - \\
{Seq-MaxPro} & 1.74 & 1.36 & 1.12 & 5.58 & 4.01 & 3.22 & \textbf{3.03} & \textbf{10.18} & \textbf{57.24} \\
ICMSE & \textbf{1.40} & \textbf{1.21} & \textbf{0.97} & 4.58 & \textbf{3.80} & \textbf{3.01} & 9.77 & 25.01 & 95.67 \\
\bottomrule
\end{tabular}
\caption{The median RMSE, MIS, and computation time under different sequential run sizes for the four considered design methods in the 2D bi-fidelity example \eqref{eq:2Dexp}.}
\label{ta:pred_accuracy}
\end{table}
Table \ref{ta:pred_accuracy} summarizes the median RMSE, MIS, and computation time after 5, 15, and 40 sequential runs. We see that ICMSE yields noticeably lower RMSE and MIS, suggesting the proposed design method gives better predictive performance. While computationally more expensive than IMSE-Impute and MaxPro, the proposed ICMSE method appears more effective at integrating censoring information for sequential design, which leads to improved predictive performance.
\section{Case studies} \label{Sec:CS}
We now return to the two motivating applications. For the wafer manufacturing problem (which only has physical experiments), we use the single-fidelity ICMSE method in Section \ref{Sec:SingeF}. For the surgical planning application (which has both computer and physical experiments), we use the bi-fidelity ICMSE method in Section \ref{Sec:MultiF}.
\subsection{Thermal processing in wafer manufacturing} \label{Sec:LHresult}
Consider first the wafer manufacturing application in Section \ref{Sec:MotiExampleLH},
where an engineer is interested in how a wafer chip's heating performance is affected by six process input variables that control wafer thickness, rotation speed, heating laser (i.e., its moving speed, radius, and power),
and heating time.
The response of interest $\xi(\bm{x})$ is the minimum temperature over the wafer, which provides an indication of the wafer's quality after thermal processing. Standard industrial temperature sensors have a measurement limit of $c=350$\textdegree{}C \citep{wafersensors}, and temperatures greater than this limit are censored in the experiment.
As mentioned earlier, certain physical experiments are not only costly (e.g., wafers and laser operation can be expensive), but also time-consuming to perform (e.g., each experiment requires a re-calibration of thermal sensors, as well as a warmup and cooldown of the laser beam). To compare the sequential performance of these methods over a large number of runs, we mimic the costly physical experiments\footnote{The surgical planning application in Section \ref{Sec:MMResult} performs actual physical experiments, but provides fewer sequential runs due to the expensive nature of such experiments.}
with COMSOL Multiphysics simulations (Fig \ref{Fig:LaserHeatNew}(a)), which provide a realistic representation of heat diffusion physics \citep{dickinson2014comsol}. Measurement noise is then added, following an i.i.d. zero-mean normal distribution with standard deviation $\sigma_\epsilon=1.0$\textdegree{}C.
The set-up is as follows. We start with an $n_{\rm ini}=30$-run initial experiment, then perform $n_{\rm seq} = 45$ sequential runs. Note that the total number of $n_{\rm ini} + n_{\rm seq} = 75$ runs is slightly more than the rule-of-thumb sample size of $10p$ recommended by \cite{loeppky2009choosing} -- this is to ensure good predictive accuracy under censoring. Due to the limited budget, the proposed ICMSE method is compared with only the sequential MaxPro method. This is because, from simulations in Section \ref{Sec:Test1D}, it provides the best predictive performance and is the fastest among the three baseline methods.
The fitted GP models are then tested on temperature data generated (without noise) on a 200-run Sobol' sequence \citep{sobol1967distribution}.
Of these 200 test samples, 25 samples have minimum temperatures which exceed the censoring limit of $c = 350$\textdegree{}C, suggesting that roughly $12.5\%$ of the design space leads to censoring.
It is important to note that predictive accuracy is desired for \textit{both} censored and uncensored test runs, since the engineering objective is to predict the experimental response surface prior to censoring. This allows industrial engineers to explore a wide range of quality requirements in manufacturing wafers with low temperatures (which are uncensored in experimentation) and high temperatures (which may potentially be censored).
\begin{figure}
\caption{(a) The COMSOL Multiphysics simulation of wafer thermal processing; (b)-(c) the RMSE and MIS of ICMSE and sequential MaxPro over $n_{\rm seq} = 45$ sequential runs.}
\label{Fig:LaserHeatNew}
\end{figure}
\subsubsection{Predictive performance}
Fig \ref{Fig:LaserHeatNew} compares the RMSE and MIS after $n_{\rm seq} = 45$ sequential runs. While both sequential methods provide relatively steady improvements in RMSE and MIS, the proposed ICMSE method gives a greater predictive improvement over MaxPro. In particular, with 45 sequential runs, ICMSE achieves an RMSE reduction of $(5.8-4.8)/5.8 = 17.2\%$ over the initial 30 runs, which is greater than the RMSE reduction of $(5.8-5.35)/5.8=7.8\%$ for MaxPro.
Similarly, for MIS, ICMSE achieves a reduction of $(9.28-6.95)/9.28 = 25.1\%$, compared to $(9.28-7.9)/9.28 = 14.8\%$ for MaxPro.
This can again be explained by the fact that ICMSE jointly avoids censoring and minimizes predictive uncertainty. Here, ICMSE yields no censored measurements, whereas MaxPro yields 5 censored measurements (a censoring rate of $5/45 = 11.1\%$). Moreover, ICMSE \textit{adaptively} chooses points that minimize predictive uncertainty of the GP model under censoring. This can be seen from Fig \ref{Fig:LaserHeatNew}(b) and (c):
ICMSE yields progressively lower RMSE and MIS values as the sample size increases.
While ICMSE provides noticeable improvements over sequential MaxPro, the reductions in RMSE for both methods are only moderate in magnitude. One reason may be that the underlying response surface for minimum temperature is quite non-smooth over the parameter space, which makes it difficult to learn with a limited number of experimental runs, particularly in censored regions.
It is also worth noting, however, that even moderate improvements in predictive accuracy can lead to significant improvements in wafer manufacturing.
As mentioned in Section \ref{Sec:MotiExampleLH}, the fitted GP model is used to find process settings that jointly minimize operational costs while meeting target quality requirements. The improved predictive model using ICMSE can cut down waste in heating power and reduce the number of wafers to be re-manufactured, which results in significant cost reductions in the wafer manufacturing process.
\subsection{3D-printed aortic valves for surgical planning} \label{Sec:MMResult}
Consider next the surgical planning application in Section \ref{Sec:MotiExampleMM}, which uses state-of-the-art 3D printing technology to mimic biological tissues. Here, doctors are interested in predicting the stiffness of the printed organs with different metamaterial geometries.
We will consider 3 design inputs $\bm{x} = (A,\omega,d)$, which parametrize a standard sinusoidal form of the substructure curve $I(t) = A \sin (\omega t)$,
with diameter $d$ (see Fig \ref{Fig:MMIntro}(b) for a visualization).
This parametric form has been shown to provide effective tissue-mimicking performance in prior studies \citep{wang2016dual,chen2018efficient}.
The response of interest $\xi(\bm{x})$ is the elastic modulus at a strain level of 8\%, which quantifies the stiffness at a similar load situation inside the human body \citep{wang2016dual}.
We use the bi-fidelity ICMSE design framework in Section \ref{Sec:MultiF}, since a pre-conducted database of computer simulations is available, and we are interested in the sequential design of physical experiments. Computer simulations were performed with finite element analysis \citep{zienkiewicz1977finite} using COMSOL Multiphysics.
Physical experiments were performed in two steps: the aortic valves were first 3D-printed by the Connex 350 machine (Stratasys Ltd.), and their stiffness was then measured by a load cell using uniaxial tensile tests (see Fig \ref{Fig:MMIntro}(c); \citealp{wang2016dual}).
Here, physical experiments are very costly, requiring expensive material and printing costs, as well as several hours of an experimenter's time per sample.
Censoring is also present in physical experiments; this happens when the force measurement of the load cell exceeds the standard limit of $15N$, corresponding to a modulus upper limit of $c = 0.23 \text{MPa} = 15N \text{(force)}/ 8mm^2 \text{(area)} / 8\% \text{(deformation)}$.
\begin{figure}
\caption{The RMSE and MIS of ICMSE and sequential MaxPro over $n_{\rm seq}=8$ sequential runs in the surgical planning application.}
\label{Fig:MetaMaterialNew}
\label{Tab:PredSensorNew}
\end{figure}
The following design set-up is used. We start with an $n_{\rm ini}=25$-run initial computer experiment design, and then perform $n_{\rm seq}=8$ sequential runs using physical experiments. The limited number of sequential runs is due to the urgent needs of patients; in such cases, only one to two days can be afforded for surgical planning \citep{chen2018efficient}. Since each physical experiment requires tedious 3D printing and a tensile test (around 1.5 hours per run), only a handful of runs can be performed in urgent cases.
As before, we compare the proposed ICMSE method with the MaxPro method. The fitted GP models from both methods are tested on the physical experiment data from a 20-run Sobol' sequence.
Among these 20 runs, 5 of them are censored due to the load cell limit; in such cases, we re-perform the experiment using a different testing machine with a wider measurement range. The re-experimentation is typically \textit{not} feasible in urgent surgical scenarios, since it requires even more time-consuming tests and higher material costs.
\subsubsection{Predictive performance}
Fig \ref{Fig:MetaMaterialNew} compares the predictive performance of the two design methods over $n_{\rm seq} = 8$ sequential runs.
While MaxPro shows
some stagnation in RMSE and MIS improvement, ICMSE yields more noticeable improvements as sample size increases.
More specifically, ICMSE achieves an RMSE reduction of roughly $(0.0315-0.0235)/0.0315=25.4\%$ over the initial GP model (fitted using 25 computer experiment runs), which is much greater than the RMSE reduction of $(0.0315-0.0288)/0.0315=8.57\%$ for MaxPro. Similar improvements can be seen by inspecting MIS. This can again be attributed to the key design trade-off. ICMSE adaptively identifies and avoids censored regions on the design space using the fitted bi-fidelity model \eqref{Equ:Fusion_E}. Here, the proposed method yields no censored measurements, whereas MaxPro yields 3 censored measurements (a censoring rate of $3/8=37.5\%$).
Furthermore, in contrast to MaxPro, which encourages physical runs to be ``space-filling'' to the initial computer experiment runs, ICMSE instead incorporates censoring information within an adaptive design scheme, which allows for improved predictive performance.
We investigate next the predictive performance of both designs within the \textit{censored} region. This region (corresponding to stiff valves) is important for prediction, since such valves can be used to mimic older patients \citep{sicard2018aging}.
We divide the test set (20 runs in total) into two categories: observed runs (15 in total) and censored runs (5 in total). The responses for the latter are obtained via new experiments on a stiffer load cell (which, as mentioned in Section \ref{Sec:MotiExampleMM}, is typically not feasible in practice).
Table \ref{Tab:PredSensorNew} compares the RMSE of the two methods for the censored and uncensored test runs.
For both methods, the RMSE for observed test runs is much smaller than that for censored test runs, which is as expected.
For censored test runs, ICMSE also performs slightly better than MaxPro, with $(0.0462-0.0416)/0.0462=9.9\%$ lower RMSE.
One reason for this is that ICMSE encourages new runs near (but not within) censored regions (see Fig \ref{Fig:Test1Dallmethod}), to maximize information under censoring. Because of this adaptivity, ICMSE achieves better predictive performance within the censored region, without placing any sequential runs in this region.
The improved performance for ICMSE can greatly improve the surgery planning procedure.
As discussed in Section \ref{Sec:MotiExampleMM},
the fitted GP model is used to optimize a polymer geometry which mimics a patient's tissue stiffness. An improved predictive model leads to better tissue-mimicking performance of the printed valves, which then translates to improved surgery success rates. Indeed, a recent study \citep{chen2018JASA} showed that a 42\% improvement in predictive performance leads to a six-fold error reduction in tissue-mimicking. We would expect a similar improvement in tissue-mimicking performance here, when comparing ICMSE with the baseline design methods. The resulting improved artificial valves from ICMSE then lead to improved success rates for heart surgeries.
\subsubsection{Discrepancy modeling}
The ICMSE method can also yield valuable insights on the discrepancy between computer simulation and reality. The learning of this discrepancy from data is important for several reasons: it allows doctors to (i) pinpoint where simulations may be unreliable, (ii) identify potential root causes for this discrepancy, and (iii) improve the simulation model to better mimic reality. In our modeling framework, this discrepancy can be estimated as:
\begin{equation}
\hat{\delta}(x) = \hat{\xi}(x) - \hat{f}(x),
\end{equation}
where $\hat{\xi}(x)$ is the predictor for the physical experiment mean, fitted using 25 initial computer experiment runs and 8 physical experiment runs, and $\hat{f}(x)$ is the computer experiment model fitted using only the 25 initial runs.
\begin{figure}
\caption{The fitted discrepancy $\hat{\delta}(x)$ as a function of each pair of design inputs, with the third input fixed.}
\label{Fig:Discrep}
\end{figure}
Fig \ref{Fig:Discrep} shows the fitted discrepancy $\hat{\delta}(x)$ as a function of each pair of design inputs, with the third input fixed.
These plots reveal several interesting insights.
First, when the diameter $d$ is moderate (i.e., $d \in [0.2,0.7]$), Fig \ref{Fig:Discrep}(a) and (b) show that the discrepancy is quite small; however, when $d$ is small (i.e., $[0,0.2]$) or large (i.e., $[0.7,1]$), the discrepancy can be quite large.
This is related to the limitations of finite element modeling.
When diameter $d$ is small, the simulations can be inaccurate, since the mesh size would be relatively large compared to $d$.
When diameter $d$ is large, simulations can again be inaccurate, due to the violation of the perfect interface assumption between the two printed polymers.
Second, from Fig \ref{Fig:Discrep}, model discrepancy also appears to be largest when all design inputs are large (i.e., close to 1). This suggests that simulations can be unreliable, when the stiff material is both thick ($d \approx 1$) and fluctuating ($\omega \approx 1, A \approx 1$).
Finally, the model discrepancy is mostly positive over the design domain, indicating that the simulations tend to underestimate stiffness relative to physical evaluation.
This may be caused by the hardening of 3D-printed samples due to exposure to natural light, as an aging property for the polymer family (e.g., see \citealp{liao1998long}).
Therefore, the printed aortic valves should be stored in dark storage cells for surgical planning to minimize exposure to light.
\section{Conclusion}
\label{sec:conclusion}
In this paper, we proposed a novel integrated censored mean-squared error (ICMSE) method for adaptively designing physical experiments under response censoring. The ICMSE method iteratively performs two steps: it first estimates the posterior probability of a new observation being censored, and then selects the next design point which yields the greatest reduction in predictive uncertainty under censoring.
We derived easy-to-evaluate expressions for the ICMSE design criterion in both the single-fidelity and bi-fidelity settings, and presented an adaptive design for efficient implementation. We then demonstrated the effectiveness of the proposed ICMSE method over existing methods in real-world applications on 3D-printed aortic valves for surgical planning and thermal processing in wafer manufacturing.
An \textsf{R} package is currently in development and will be released soon.
Looking ahead, there are several interesting directions to be explored. In this work, the censoring limit $c$ is assumed to be known. While this is true for the two motivating applications, there are other problems where $c$ is unknown and needs to be learned from data; it would be useful to extend ICMSE for such problems.
Another direction is to explore the connection between the ICMSE method and the multi-points expected improvement method \citep{ginsbourger2010kriging}, which may speed up design optimization via rejection sampling.
Finally, for the bi-fidelity ICMSE, it would be interesting to explore more elaborate design schemes that allow for additional computer experiments to be added sequentially.
\input{ref.bbl}
\appendix
\section{Single-fidelity ICMSE design criterion}
\subsection{A useful intermediate derivation}
We present first a simplified expression for the design criterion (2.7), which will aid in later derivations.
Let $Y_{n+1}'$ be the latent response at $\bm{x}_{n+1}$ \textit{prior} to censoring, and $Y_{n+1}=Y_{n+1}'(1-\mathcal{I}(\bm{x}_{n+1}))+c\;\mathcal{I}(\bm{x}_{n+1})$ be the corresponding response \textit{after} censoring, with $c$ the right-censoring limit. Here, we define the censoring indicator function:
\begin{align*}
\mathcal{I}(\bm{x}_{n+1}) = \mathds{1}_{\{Y_{n+1}'\geq c\}} = \left\{
\begin{matrix}
1 & \text{ if } & Y_{n+1}'\geq c \\
0 & \text{ if } & Y_{n+1}'< c
\end{matrix}\right. .
\end{align*}
We can now define the probability of censoring at potential input point $\bm{x}_{n+1}$ as $\lambda=\lambda(\bm{x}_{n+1})=\mathbb{P}[\mathcal{I}(\bm{x}_{n+1})=1|\mathcal{Y}_n]=\mathbb{P}[Y_{n+1}'\geq c|\mathcal{Y}_n]$.
Let $\text{CMSE}(\bm{x}_{n+1},\bm{x}_{\rm new})$ be the integrand of the proposed criterion (2.7). One can decompose this integrand via the total variance formula:
\begin{align*}
\text{CMSE}(\bm{x}_{n+1},\bm{x}_{\rm new}) =\text{Var} [\xi(\bm{x}_{\rm new})|\mathcal{Y}_n] - \text{Var}_{Y_{n+1}|\mathcal{Y}_n}\left[ \mathbb{E}\left(\xi(\bm{x}_{\rm new})|Y_{n+1},\mathcal{Y}_n\right)\right].
\end{align*}
Denote the first term $\text{Var} [\xi(\bm{x}_{\rm new})|\mathcal{Y}_n]=\sigma_{\rm new}^2$. As for the second term, we compute variance of the random variable $Z=\mathbb{E}\left(\xi(\bm{x}_{\rm new})|Y_{n+1},\mathcal{Y}_n\right)$ by conditioning on the censoring indicator function $\mathcal{I}=\mathcal{I}(\bm{x}_{n+1})$:
\begin{align*}
\text{Var}[Z]= \mathbb{E}_{\mathcal{I}}[\text{Var}(Z|\mathcal{I})] + \text{Var}_{\mathcal{I}}[\mathbb{E}(Z|\mathcal{I})].
\end{align*}
Consider first the expected variance term $\mathbb{E}_{\mathcal{I}}[\text{Var}(Z|\mathcal{I})]$. Since $Z$ is a constant when censoring occurs (i.e., $\mathcal{I}=1$), the first term becomes:
\begin{align}
\label{AppEqu:EV}
\begin{split}
\mathbb{E}_{\mathcal{I}}[\text{Var}(Z|\mathcal{I})]
&=(1-\lambda)\text{Var}_{Y_{n+1}|\mathcal{I}=0}[\mathbb{E}(\xi(\bm{x}_{\rm new})|Y_{n+1})] + \lambda \times 0 \\
&=(1-\lambda) \left(\text{Var}[\xi(\bm{x}_{\rm new})|Y_{n+1}'< c] -
\mathbb{E}_{Y_{n+1}|Y_{n+1}'<c}[\text{Var}(\xi(\bm{x}_{\rm new})|Y_{n+1})]\right),
\end{split}
\end{align}
where the condition on data $\mathcal{Y}_n$ is omitted for brevity. Consider next the variance of expectation term $\text{Var}_{\mathcal{I}}[\mathbb{E}(Z|\mathcal{I})]$. Note that the random variable $\mathbb{E}[Z|\mathcal{I}]$ follows a two point distribution. Hence, the second term becomes:
\begin{align}
\label{AppEqu:VE}
\begin{split}
\text{Var}_{\mathcal{I}}[\mathbb{E}(Z|\mathcal{I})]
&=\lambda(1-\lambda)\left( \mathbb{E}\left[\xi(\bm{x}_{\rm new})|\mathcal{I}=1\right] - \mathbb{E}_{Y_{n+1}|\mathcal{I}=0}\left[\mathbb{E}(\xi(\bm{x}_{\rm new})|Y_{n+1})\right] \right)^2 \\
&=\lambda(1-\lambda)\left( \mathbb{E}\left[\xi(\bm{x}_{\rm new})|Y_{n+1}'\geq c\right] - \mathbb{E}\left[\xi(\bm{x}_{\rm new})|Y_{n+1}'< c\right] \right)^2,
\end{split}
\end{align}
where the condition on data $\mathcal{Y}_n$ is again omitted for brevity. Putting these together, we have the following expression for $\text{CMSE} = \text{CMSE}(\bm{x}_{n+1},\bm{x}_{\rm new})$ :
\begin{align}
\label{AppEqu:CMSE}
\begin{split}
\text{CMSE}
&=\sigma^2_{\rm new}-\lambda(1-\lambda)\left( \mathbb{E}\left[\xi(\bm{x}_{\rm new})|Y_{n+1}'\geq c\right] - \mathbb{E}\left[\xi(\bm{x}_{\rm new})|Y_{n+1}'< c\right] \right)^2 \\
& - (1-\lambda) \left(\text{Var}[\xi(\bm{x}_{\rm new})|Y_{n+1}'< c] -
\mathbb{E}_{Y_{n+1}|Y_{n+1}'<c}[\text{Var}(\xi(\bm{x}_{\rm new})|Y_{n+1})]\right) .
\end{split}
\end{align}
This is precisely the integrand of the proposed criterion $\text{ICMSE}(\bm{x}_{n+1})$ in (2.7).
\subsection{Proof of Proposition 1}\label{Sec:AppCMSEnc}
Suppose there is no censoring in the training data, i.e., $\mathcal{Y}_n = \{\bm{y}_o\}$. Using the conditional mean and variance expressions (2.5) and (2.6) for standard GP regression, we have:
\begin{align*}
\begin{bmatrix}
Y_{n+1}' \\
\xi(\bm{x}_{\rm new})
\end{bmatrix} \big| \mathcal{Y}_n\sim \mathcal{N}
\left(
\begin{bmatrix}
\mu_{n+1} \\
\mu_{\rm new}
\end{bmatrix},
\begin{bmatrix}
\sigma_{n+1}^2 & \rho\sigma_{n+1}\sigma_{\rm new} \\
\rho\sigma_{n+1}\sigma_{\rm new} & \sigma_{\rm new}^2
\end{bmatrix}
\right).
\end{align*}
Here, the predictive means are:
\begin{align*}
\mu_{n+1} &= \mathbb{E}[Y_{n+1}'|\mathcal{Y}_n]= \mu_\xi+\boldsymbol{\gamma}_{n,n+1}^T \bm{\Gamma}_n^{-1}(\bm{y}_o-\mu_\xi \cdot \bm{1}_n), \quad \text{and}\\
\mu_{\rm new} &= \mathbb{E}[\xi(\bm{x}_{\rm new})|\mathcal{Y}_n]= \mu_\xi+\boldsymbol{\gamma}_{n,\rm new}^T \bm{\Gamma}_n^{-1}(\bm{y}_o-\mu_\xi\cdot \bm{1}_n),
\end{align*}
the predictive variances and correlation are:
\begin{align*}
\sigma_{n+1}^2 &= \text{Var}[Y_{n+1}'|\mathcal{Y}_n]= \sigma^2_\xi-\boldsymbol{\gamma}_{n,n+1}^T \bm{\Gamma}_n^{-1} \boldsymbol{\gamma}_{n,n+1},\\
\sigma_{\rm new}^2 &= \text{Var}[\xi(\bm{x}_{\rm new})|\mathcal{Y}_n]= \sigma^2_\xi-\boldsymbol{\gamma}_{n,\rm new}^T \bm{\Gamma}_n^{-1} \boldsymbol{\gamma}_{n,\rm new}, \quad \text{and}\\
\rho=\rho_{\rm new}(\bm{x}_{n+1})&=\text{Corr}[Y_{n+1}',\xi(\bm{x}_{\rm new})|\mathcal{Y}_n] = \big(\sigma^2_\xi R_{\boldsymbol{\theta}_\xi}(\bm{x}_{n+1},\bm{x}_{\rm new})-\boldsymbol{\gamma}_{n,n+1}^T \bm{\Gamma}_n^{-1} \boldsymbol{\gamma}_{n,\rm new}\big)/(\sigma_{n+1}\sigma_{\rm new}).
\end{align*}
Here, $\boldsymbol{\gamma}_{n,n+1} = \sigma^2_\xi\big[R_{\boldsymbol{\theta}_\xi}(\bm{x}_1, \bm{x}_{n+1}), \cdots, R_{\boldsymbol{\theta}_\xi}(\bm{x}_n, \bm{x}_{n+1}) \big]^T$.
We then calculate the first two moments of the truncated (bivariate) normal distribution:
\begin{align}
\mathbb{E}\left[\xi(\bm{x}_{\rm new})|Y_{n+1}'\geq c\right] & = \int _{-\infty} ^{\infty} y_{\rm new} \times \psi_{Y_{\rm new}|Y_{n+1}\geq c}( y_{\rm new})\; d y_{\rm new} \notag \\
&=\frac{1}{1-\Phi(z_c)} \int _c^{\infty} \psi_{Y_{n+1}}( y_{n+1})\; d y_{n+1} \int _{-\infty} ^{\infty}y_{\rm new} \psi_{Y_{\rm new}|Y_{n+1}}(y_{\rm new}) \;dy_{\rm new}\notag \\
&=\mu_{\rm new} + \rho\sigma_{\rm new} \frac{\phi(z_c)}{1-\Phi(z_c)}, \label{AppEqu:NoCensorTerm1}
\end{align}
where $z_c=(c-\mu_{n+1})/\sigma_{n+1}$ is the normalized upper censoring limit, $\psi_X(\cdot)$ is the probability density function (PDF) of random variable $X$, $\phi(\cdot)$ is the PDF of standard normal distribution, and $\Phi(\cdot)$ is the cumulative distribution function (CDF) of the standard normal distribution. Similarly, we have:
\begin{align}
\mathbb{E}\left[\xi(\bm{x}_{\rm new})|Y_{n+1}'< c\right] & =\mu_{\rm new} - \rho\sigma_{\rm new} \frac{\phi(z_c)}{\Phi(z_c)}, \quad \text{and} \label{AppEqu:NoCensorTerm2}\\
\text{Var}\left[\xi(\bm{x}_{\rm new})|Y_{n+1}'< c\right] & =\sigma_{\rm new}^2\left[1-\rho^2 z_c\frac{\phi(z_c)}{\Phi(z_c)}-\rho^2\left(\frac{\phi(z_c)}{\Phi(z_c)}\right)^2\right]. \label{AppEqu:NoCensorTerm3}
\end{align}
Furthermore, since the conditional variance of the joint normal distribution does not depend on the value of $Y_{n+1}$, we have:
\begin{align}
\mathbb{E}_{Y_{n+1}|Y_{n+1}'<c}[\text{Var}(\xi(\bm{x}_{\rm new})|Y_{n+1})] =
\text{Var}(\xi(\bm{x}_{\rm new})|Y_{n+1}) = (1-\rho^2)\sigma_{\rm new}^2.
\label{AppEqu:NoCensorTerm4}
\end{align}
Finally, plugging \eqref{AppEqu:NoCensorTerm1}, \eqref{AppEqu:NoCensorTerm2}, \eqref{AppEqu:NoCensorTerm3}, and \eqref{AppEqu:NoCensorTerm4} back to \eqref{AppEqu:CMSE}, and using the fact that the probability of censoring $\lambda=\mathbb{P}[Y_{n+1}'\geq c|\mathcal{Y}_n]=1-\Phi(z_c)$, we have:
\begin{align*}
\text{CMSE}(\bm{x}_{n+1},\bm{x}_{\rm new})= \sigma_{\rm new}^2-\sigma_{\rm new}^2\rho^2 \left[\Phi(z_c)-z_c\phi(z_c)+\frac{\phi^2(z_c)}{1-\Phi(z_c)}\right].
\end{align*}
Therefore, with no censoring in training data, the proposed ICMSE criterion is:
\begin{align}
\text{ICMSE}(\bm{x}_{n+1})=\int_{[0,1]^p}\sigma_{\rm new}^2-\sigma_{\rm new}^2\rho^2 \left[\Phi(z_c)-z_c\phi(z_c)+\frac{\phi^2(z_c)}{1-\Phi(z_c)}\right]\; d\bm{x}_{\rm new}.
\label{AppEqu:CMSEwithcensor}
\end{align}
\subsection{Proof of Proposition 2}\label{Sec:AppCMSEc}
Consider a more general case \textit{with} censoring in training data, i.e., $\mathcal{Y}_n = \{\bm{y}_o,\bm{y}_c' \geq\bm{c}\}$. It is important to note that, due to the presence of the censored data $\{\bm{y}_c' \geq\bm{c}\}$, the random variable $\xi(\bm{x}_{\rm new})|\mathcal{Y}_n$ is \textit{no longer} normally distributed. This, in turn, requires a more involved derivation than the earlier case without censoring in training data.
Using the conditional mean and variance expressions (2.3) and (2.4) for the \textit{censored} GP, we have
\begin{align}
\begin{split}
\mathbb{E}\left[\xi(\bm{x}_{\rm new})|Y_{n+1}'\geq c\right]& =\mu_\xi + \boldsymbol{\gamma}_{n+1,\rm new}^T \bm{\Gamma}_{n+1}^{-1}\left([\bm{y}_{o}, \hat{\bm{y}}_{c},\hat{y}_{n+1}^{(> )}]^T - \mu_\xi \cdot \textbf{1}_{n+1}\right), \\
\mathbb{E}\left[\xi(\bm{x}_{\rm new})|Y_{n+1}'< c\right] & =\mu_\xi + \boldsymbol{\gamma}_{n+1,\rm new}^T \bm{\Gamma}_{n+1}^{-1}\left([\bm{y}_{o}, \hat{\bm{y}}_{c},\hat{y}_{n+1}^{(<)}]^T - \mu_\xi \cdot \textbf{1}_{n+1}\right), \\
\text{Var}[\xi(\bm{x}_{\rm new})|Y_{n+1}'\geq c] & = \sigma^2_\xi-\boldsymbol{\gamma}_{n+1,\rm new}^T \bm{\Gamma}_{n+1}^{-1}\left(\bm{\Gamma}_{n+1}-\bm{\Sigma}_1\right)\bm{\Gamma}_{n+1}^{-1}\boldsymbol{\gamma}_{n+1,\rm new}, \\
\mathbb{E}_{Y_{n+1}|Y_{n+1}'<c}[\text{Var}(\xi(\bm{x}_{\rm new})|Y_{n+1})] & = \sigma^2_\xi-\boldsymbol{\gamma}_{n+1,\rm new}^T \bm{\Gamma}_{n+1}^{-1}\left(\bm{\Gamma}_{n+1}-\hat{\bm{\Sigma}}\right)\bm{\Gamma}_{n+1}^{-1}\boldsymbol{\gamma}_{n+1,\rm new},
\label{AppEqu:CMSETrems}
\end{split}
\end{align}
where $\mu_\xi$ and $\sigma^2_\xi$ are the mean and variance for the prior GP, respectively. Here, $\boldsymbol{\gamma}_{n+1,\rm new}=\sigma^2_\xi\big[R_{\boldsymbol{\theta}_\xi}(\bm{x}_1, \bm{x}_{\rm new}), \cdots, R_{\boldsymbol{\theta}_\xi}(\bm{x}_{n+1}, \bm{x}_{\rm new}) \big]^T$ and $\bm{\Gamma}_{n+1} = \sigma^2_\xi{[R_{\boldsymbol{\theta}_\xi}(\bm{x}_i, \bm{x}_j)]_{i=1}^{n+1}} _{j=1}^{n+1} + \sigma^2_\epsilon \bm{I}_{n+1}$. Furthermore, $\hat{y}_{n+1}^{(> )} = \mathbb{E}(Y_{n+1}'|\bm{y}_o,Y_{n+1}'\geq c)$ and $\hat{y}_{n+1}^{(<)} = \mathbb{E}(Y_{n+1}'|\bm{y}_o,Y_{n+1}'<c)$ are the expected response for the potential observation, $\bm{\Sigma}_1=\bm{\Sigma}_1(\bm{x}_{n+1})=\text{diag}\left(\bm{0}_{n_o},\text{Var}([\bm{y}_c',Y_{n+1}]|\bm{y}_{o},\bm{y}_c' \geq\bm{c},Y_{n+1}=c)\right)$, and $\hat{\bm{\Sigma}}=\hat{\bm{\Sigma}}(\bm{x}_{n+1})=\text{diag}\left(\bm{0}_{n_o},\mathbb{E}_{Y_{n+1}|\bm{y}_{o}}[\text{Var}(\bm{y}_c'|Y_{n+1},\bm{y}_{o},\bm{y}_c' \geq\bm{c})],0 \right)$. Plugging in equation \eqref{AppEqu:CMSETrems} back into \eqref{AppEqu:CMSE}, we have
\begin{align*}
\begin{split}
\text{CMSE}
&=\sigma^2_{\rm new}-\lambda(1-\lambda)\left( \boldsymbol{\gamma}_{n+1,\rm new}^T \bm{\Gamma}_{n+1}^{-1}[\bm{0}_n,\hat{y}_{n+1}^{(> )}-\hat{y}_{n+1}^{(<)}]^T \right)^2 \\
& - (1-\lambda) \boldsymbol{\gamma}_{n+1,\rm new}^T \bm{\Gamma}_{n+1}^{-1}\left(\bm{\Sigma}_{1}-\hat{\bm{\Sigma}}\right)\bm{\Gamma}_{n+1}^{-1}\boldsymbol{\gamma}_{n+1,\rm new}.
\end{split}
\end{align*}
Here, $\sigma^2_{\rm new}=\sigma^2_\xi-\boldsymbol{\gamma}_{n,\rm new}^T \bm{\Gamma}^{-1}_n\boldsymbol{\gamma}_{n,\rm new} +\boldsymbol{\gamma}_{n,\rm new}^T \bm{\Gamma}_n^{-1}\hat{\bm{\Sigma}}\bm{\Gamma}_n^{-1}\boldsymbol{\gamma}_{n,\rm new}$ and the probability of censoring
$\lambda=\mathbb{P}(\bm{y}_{c}\geq \bm{c}, Y_{n+1}'\geq c|\bm{y}_{o})/\mathbb{P}(\bm{y}_{c}\geq \bm{c}|\bm{y}_{o})$. (The computation of these orthant probabilities and moments of
the truncated multivariate normal distribution will be discussed later in Appendix \ref{Sec:appImp}.) Putting everything together, our ICMSE criterion has the following explicit form:
\begin{align}
&\text{ICMSE}
=\int_{[0,1]^p}\sigma^2_{\rm new}-\boldsymbol{\gamma}_{n+1,\rm new}^T \bm{\Gamma}_{n+1}^{-1}\bm{H}_c(\bm{x}_{n+1})\bm{\Gamma}_{n+1}^{-1}\boldsymbol{\gamma}_{n+1,\rm new}\;d\bm{x}_{\rm new}
\label{AppEqu:SFH}
\end{align}
where $\bm{H}_c(\bm{x}_{n+1}) = (1-\lambda)\left(\bm{\Sigma}_{1}-\hat{\bm{\Sigma}}\right)+\lambda (1-\lambda)\text{diag}\left(\bm{0}_n,\big(\hat{y}_{n+1}^{(> )}-\hat{y}_{n+1}^{(<)}\big)^2\right)$.
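The quadratic form in the $\lambda(1-\lambda)$ term collapses to a diagonal matrix: for $v = [\bm{0}_n, d]^T$ one has $(\boldsymbol{\gamma}^T\bm{\Gamma}^{-1}v)^2 = \boldsymbol{\gamma}^T\bm{\Gamma}^{-1}\,\text{diag}(\bm{0}_n, d^2)\,\bm{\Gamma}^{-1}\boldsymbol{\gamma}$, which is what produces the last diagonal entry of $\bm{H}_c$. A minimal numerical sketch of this identity (a random symmetric matrix stands in for $\bm{\Gamma}_{n+1}^{-1}$; function names are ours):

```python
import random

def check_rank_one_identity(n, seed=0, tol=1e-10):
    """Check (g^T B v)^2 == g^T (B D B) g for v = [0,...,0,d]^T and
    D = diag(0,...,0,d^2), with B symmetric (a stand-in for Gamma^{-1})."""
    rng = random.Random(seed)
    g = [rng.uniform(-1.0, 1.0) for _ in range(n + 1)]
    A = [[rng.uniform(-1.0, 1.0) for _ in range(n + 1)] for _ in range(n + 1)]
    B = [[0.5 * (A[i][j] + A[j][i]) for j in range(n + 1)] for i in range(n + 1)]
    d = rng.uniform(0.5, 2.0)

    # left-hand side: (g^T B v)^2, with B v = d * (last column of B)
    lhs = sum(g[i] * B[i][n] * d for i in range(n + 1)) ** 2

    # right-hand side: g^T (B D B) g, computed via the full matrix product
    BD = [[B[i][j] * (d * d if j == n else 0.0) for j in range(n + 1)]
          for i in range(n + 1)]
    BDB = [[sum(BD[i][t] * B[t][j] for t in range(n + 1)) for j in range(n + 1)]
           for i in range(n + 1)]
    rhs = sum(g[i] * BDB[i][j] * g[j]
              for i in range(n + 1) for j in range(n + 1))
    return abs(lhs - rhs) < tol
```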
\section{Bi-fidelity ICMSE design criterion}
\subsection{Proof of Proposition 3} \label{Sec:appMFICMES}
For the bi-fidelity setting, the training data is $\{\bm{f},\mathcal{Y}_n\}$. Using the conditional mean and variance expressions (3.3) and (3.4) for the \textit{bi-fidelity} GP model, we have
\begin{align}
\begin{split}
\mathbb{E}\left[\xi(\bm{x}_{\rm new})|Y_{n+1}'\geq c\right]& =\mu_f + \boldsymbol{\gamma}_{n+1,\rm new}^T \bm{\Gamma}_{n+1}^{-1}\left([\bm{f},\bm{y}_{o}, \hat{\bm{y}}_{c},\hat{y}_{n+1}^{(> )}]^T - \mu_f \cdot \textbf{1}_{n+1}\right), \\
\mathbb{E}\left[\xi(\bm{x}_{\rm new})|Y_{n+1}'< c\right] & =\mu_f + \boldsymbol{\gamma}_{n+1,\rm new}^T \bm{\Gamma}_{n+1}^{-1}\left([\bm{f},\bm{y}_{o}, \hat{\bm{y}}_{c},\hat{y}_{n+1}^{(<)}]^T - \mu_f \cdot \textbf{1}_{n+1}\right), \\
\text{Var}[\xi(\bm{x}_{\rm new})|Y_{n+1}'\geq c] & = \sigma^2_f+\sigma^2_\delta-\boldsymbol{\gamma}_{n+1,\rm new}^T \bm{\Gamma}_{n+1}^{-1}\left(\bm{\Gamma}_{n+1}-\bm{\Sigma}_1\right)\bm{\Gamma}_{n+1}^{-1}\boldsymbol{\gamma}_{n+1,\rm new},\\
\mathbb{E}_{Y_{n+1}|Y_{n+1}'<c}[\text{Var}(\xi(\bm{x}_{\rm new})|Y_{n+1})] & = \sigma^2_f+\sigma^2_\delta-\boldsymbol{\gamma}_{n+1,\rm new}^T \bm{\Gamma}_{n+1}^{-1}\left(\bm{\Gamma}_{n+1}-\hat{\bm{\Sigma}}\right)\bm{\Gamma}_{n+1}^{-1}\boldsymbol{\gamma}_{n+1,\rm new}.
\label{AppEqu:Fusion_CMSETrems}
\end{split}
\end{align}
Though equations \eqref{AppEqu:Fusion_CMSETrems} appear quite similar to equations \eqref{AppEqu:CMSETrems}, the notation in \eqref{AppEqu:Fusion_CMSETrems} is overloaded with the bi-fidelity expressions (3.3) and (3.4) for simplicity (see Section 3.1). Here, $\mu_f$ and $\sigma^2_f$ are the mean and variance of the GP $f(\cdot)$ modeling the computer experiment,
$\boldsymbol{\gamma}_{n+1,\rm new}=\sigma^2_f[R_{\boldsymbol{\theta}_f}(\bm{x}_i, \bm{x}_{\rm new})]_{i=1}^{n+1}+\sigma^2_\delta[\bm{0}_{n-m},R_{\boldsymbol{\theta}_\delta}(\bm{x}_i, \bm{x}_{\rm new})]_{i=1}^{m+1}$,
and $\bm{\Gamma}_{n+1}=\sigma^2_f{[R_{\boldsymbol{\theta}_f}(\bm{x}_i, \bm{x}_j)]_{i=1}^{n+1}}_{j=1}^{n+1} + \text{diag}\left( \bm{0}_{n-m},\sigma^2_\delta{[R_{\boldsymbol{\theta}_\delta}(\bm{x}_i, \bm{x}_j)]_{i=1}^{m+1}}_{j=1}^{m+1} + \sigma^2_\epsilon \bm{I}_{m+1} \right)$. Furthermore, $\hat{y}_{n+1}^{(> )} = \mathbb{E}(Y_{n+1}'|\bm{f},\bm{y}_o,Y_{n+1}'\geq c)$ and $\hat{y}_{n+1}^{(<)} = \mathbb{E}(Y_{n+1}'|\bm{f},\bm{y}_o,Y_{n+1}'<c)$ are the expected responses for the potential observation, $\bm{\Sigma}_1=\bm{\Sigma}_1(\bm{x}_{n+1})=\text{diag}\left(\bm{0}_{n-n_c},\text{Var}([\bm{y}_c',Y_{n+1}]|\bm{f},\bm{y}_{o},\bm{y}_c' \geq\bm{c},Y_{n+1}=c)\right)$, and $\hat{\bm{\Sigma}}=\hat{\bm{\Sigma}}(\bm{x}_{n+1})=\text{diag}\left(\bm{0}_{n-n_c},\mathbb{E}_{Y_{n+1}|\bm{f},\bm{y}_{o}}[\text{Var}(\bm{y}_c'|Y_{n+1},\bm{f},\bm{y}_{o},\bm{y}_c' \geq\bm{c})],0 \right)$.
Plugging \eqref{AppEqu:Fusion_CMSETrems} into \eqref{AppEqu:CMSE}, we obtain the following explicit form of the ICMSE criterion:
\begin{align}
\text{ICMSE} (\bm{x}_{n+1})
&=\int_{[0,1]^p}\sigma^2_{\rm new}-\boldsymbol{\gamma}_{n+1,\rm new}^T \bm{\Gamma}_{n+1}^{-1}\bm{H}_c(\bm{x}_{n+1})\bm{\Gamma}_{n+1}^{-1}\boldsymbol{\gamma}_{n+1,\rm new}\;d\bm{x}_{\rm new},
\end{align}
where $\sigma^2_{\rm new}=\sigma^2_f+\sigma^2_\delta-\boldsymbol{\gamma}_{n,\rm new}^T \bm{\Gamma}^{-1}_n\boldsymbol{\gamma}_{n,\rm new} +\boldsymbol{\gamma}_{n,\rm new}^T \bm{\Gamma}_n^{-1}\hat{\bm{\Sigma}}\bm{\Gamma}_n^{-1}\boldsymbol{\gamma}_{n,\rm new}$, and
\begin{align}
\bm{H}_c(\bm{x}_{n+1}) = (1-\lambda)\left(\bm{\Sigma}_{1}-\hat{\bm{\Sigma}}\right)+\lambda (1-\lambda)\text{diag}\left(\bm{0}_n,\big(\hat{y}_{n+1}^{(> )}-\hat{y}_{n+1}^{(<)}\big)^2\right).
\label{AppEqu:MFH}
\end{align}
Here, the probability of censoring
$\lambda=\mathbb{P}(\bm{y}_{c}\geq \bm{c}, Y_{n+1}'\geq c|\bm{f},\bm{y}_{o})/\mathbb{P}(\bm{y}_{c}\geq \bm{c}|\bm{f},\bm{y}_{o})$.
\subsection{Proof of Corollary 1} \label{Sec:appSimpPC}
Note that in the ICMSE criterion (3.6), $\boldsymbol{\gamma}_{n+1,\rm new}$ is a function of both $\bm{x}_{n+1}$ and $\bm{x}_{\rm new}$, $\bm{\Gamma}_{n+1}$ and $\bm{H}_c(\bm{x}_{n+1})$ are functions of $\bm{x}_{n+1}$ only, and $\sigma^2_{\rm new}$ is a function of $\bm{x}_{\rm new}$ only; the criterion can therefore be further simplified as
\begin{align*}
\text{ICMSE} (\bm{x}_{n+1}) = \bar{\sigma}^2- \text{tr} \left(\bm{\Gamma}_{n+1}^{-1}\bm{H}_c(\bm{x}_{n+1})\bm{\Gamma}_{n+1}^{-1}
\bm{\Lambda}\right),
\end{align*}
where $\bar{\sigma}^2 = \int \sigma^2_{\rm new} \; d\bm{x}_{\rm new}$ is a constant with respect to
$\bm{x}_{n+1}$, $\text{tr}(\bm{A})=\sum_i A_{i,i}$ is the trace of matrix $\bm{A}$, and $\bm{\Lambda} = \int \boldsymbol{\gamma}_{n+1,\rm new}\boldsymbol{\gamma}_{n+1,\rm new}^T\;d\bm{x}_{\rm new}$. Assume the following product correlation structure:
\begin{align}
R_{\boldsymbol{\theta}_f} (\bm{x},\bm{x}')=\prod_{l=1}^p R_{\boldsymbol{\theta}_f}^{(l)} (x_l,x_l'), \quad R_{\boldsymbol{\theta}_\delta} (\bm{x},\bm{x}')=\prod_{l=1}^p R_{\boldsymbol{\theta}_\delta}^{(l)} (x_l,x_l'),
\end{align}
and denote $ \zeta_i^{(l)}(z,x) = R_{\boldsymbol{\theta}_f}^{(l)} (z,x) +\mathds{1}_{\{i\geq (n-m)\}}R_{\boldsymbol{\theta}_\delta}^{(l)} (z,x)$. We can simplify the $p$-dimensional integral in $\bm{\Lambda}$ to a product of $1$-dimensional integrals:
\begin{align}
\Lambda_{ij}= \int_{[0,1]^p} \prod_{l=1}^p
\zeta_i^{(l)}(x_{i,l},x_l) \zeta_j^{(l)}(x_{j,l},x_l)\; d\bm{x} = \prod_{l=1}^p \left[\int_0^1
\zeta_i^{(l)}(x_{i,l},x_l) \zeta_j^{(l)}(x_{j,l},x_l)\; dx_l \right],
\label{AppEqu:LambdaInt}
\end{align}
where $i,j =1,2,\cdots, n+1$.
Furthermore, under the following product \textit{Gaussian} correlation structure:
\begin{align}
R_{\boldsymbol{\theta}_f} (\bm{x},\bm{x}')=\prod_{l=1}^p \theta_{f,\;l} ^{4(x_{l}-x_{l}')^2}, \quad R_{\boldsymbol{\theta}_\delta} (\bm{x},\bm{x}')=\prod_{l=1}^p\theta_{\delta, \;l} ^{4(x_{l}-x_{l}')^2},
\label{AppEqu:GaussanCorr}
\end{align}
the $1$-dimensional integrals of entries in $\bm{\Lambda}$ \eqref{AppEqu:LambdaInt} can be reduced to integrals of exponential polynomial expressions, which have an easy-to-evaluate form.
Consider the following general expression for an exponential polynomial:
\begin{align*}
G([a,x], [b,y]) &=\int_{0}^1 \exp \left[ - a (x-z)^2\right]\exp \left[ - b (y-z)^2\right] dz\\
&=\sqrt{\frac{\pi}{a+b}}\exp \left(\frac{(ax+by)^2}{a+b}-ax^2-by^2\right) \\
&\times \left[\Phi\left( \sqrt{\frac{2}{a+b}} \left( a+b-ax-by\right)\right) -\Phi\left( -\sqrt{\frac{2}{a+b}} \left( ax+by\right)\right)\right],
\end{align*}
where $\Phi(\cdot)$ is the CDF for standard normal. Under \eqref{AppEqu:GaussanCorr}, the entries of $\bm{\Lambda}$ \eqref{AppEqu:LambdaInt} can be further simplified as:
\begin{align}
\begin{split}
\Lambda_{ij}=&
\prod_{l=1}^p G([\tilde{\theta}_{f,l},{x}_{i,l}],[ \tilde{\theta}_{f,l},{x}_{j,l}]) +\mathds{1}_{\{i\geq (n-m)\}}\mathds{1}_{\{j\geq (n-m)\}} \prod_{l=1}^p G([\tilde{\theta}_{\delta,l},{x}_{i,l}],[ \tilde{\theta}_{\delta,l},{x}_{j,l}]) \notag\\
+&\mathds{1}_{\{i\geq (n-m)\}} \prod_{l=1}^p G([\tilde{\theta}_{f,l},{x}_{i,l}],[\tilde{\theta}_{\delta,l},{x}_{j,l}])+\mathds{1}_{\{j\geq (n-m)\}} \prod_{l=1}^p G([\tilde{\theta}_{\delta,l},{x}_{i,l}],[\tilde{\theta}_{f,l},{x}_{j,l}]),
\end{split}\label{AppEqu:Int}
\end{align}
where $\tilde{\boldsymbol{\theta}_{f}}=-4\log\boldsymbol{\theta}_{f}$ and $\tilde{\boldsymbol{\theta}_{\delta}}=-4\log\boldsymbol{\theta}_{\delta}$.
Note that the above simplification under product Gaussian correlations can also be used in the single-fidelity ICMSE criterion (2.11), in which case $\Lambda_{ij}=\prod_{l=1}^p G([\tilde{\theta}_{\xi,l},{x}_{i,l}],[ \tilde{\theta}_{\xi,l},{x}_{j,l}])$, with $\tilde{\boldsymbol{\theta}}_{\xi}=-4\log\boldsymbol{\theta}_{\xi}$.
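The closed form for $G$ can be checked against brute-force numerical quadrature; a minimal stdlib-only sketch (function names are ours):

```python
import math

def Phi(x):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def G(a, x, b, y):
    # closed form for int_0^1 exp[-a(x-z)^2] exp[-b(y-z)^2] dz
    s = a + b
    pref = math.sqrt(math.pi / s) * math.exp((a * x + b * y) ** 2 / s
                                             - a * x * x - b * y * y)
    hi = Phi(math.sqrt(2.0 / s) * (s - a * x - b * y))
    lo = Phi(-math.sqrt(2.0 / s) * (a * x + b * y))
    return pref * (hi - lo)

def G_quadrature(a, x, b, y, n=100_000):
    # midpoint rule on [0, 1]
    h = 1.0 / n
    return h * sum(math.exp(-a * (x - (i + 0.5) * h) ** 2
                            - b * (y - (i + 0.5) * h) ** 2)
                   for i in range(n))
```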
\section{Computational approximations} \label{Sec:appImp}
In practice, the computation of the matrix $\hat{\boldsymbol{\Sigma}}$ in the single-fidelity expression \eqref{AppEqu:CMSETrems} or the bi-fidelity expression \eqref{AppEqu:Fusion_CMSETrems} can be quite time-consuming, since a closed-form expression is difficult to obtain for the expected variance term (the conditional distributions $[Y_{n+1}|\bm{y}_o]$ and $[Y_{n+1}|\bm{f},\bm{y}_o]$ are non-Gaussian). For the single-fidelity setting, we found the following approximation to be useful for efficient computation:
\begin{align*}
\mathbb{E}_{Y_{n+1}|\mathcal{Y}_n,\,Y_{n+1}<c}[\text{\rm Var}(\bm{y}_{c}'|\bm{y}_{o},\bm{y}_{c}'\geq \bm{c},Y_{n+1})]\approx \text{Var}(\bm{y}_{c}'|\bm{y}_{o},\bm{y}_{c}'\geq \bm{c},Y_{n+1} = \hat{Y}_{n+1}),
\end{align*}
where $\hat{Y}_{n+1}=\mathbb{E}(Y_{n+1}|\mathcal{Y}_n,Y_{n+1}<c)$. Similar simplification also applies for the bi-fidelity setting. This can be viewed as a plug-in estimate (with $Y_{n+1} = \hat{Y}_{n+1}$) which approximates the conditional mean expression on the left-hand side. The right-hand approximation can be efficiently computed via the truncated moments of a multivariate normal distribution, as implemented in the R package \texttt{tmvtnorm}.
\end{document}
\begin{document}
\publicationdetails{19}{2017}{1}{18}{2651}
\title{Nonrepetitive edge-colorings of trees}
\begin{abstract}
A repetition is a sequence of symbols in which the first half is the
same as the second half. An edge-coloring of a graph is repetition-free or nonrepetitive if
there is no path with a color pattern that is a repetition. The minimum
number of colors so that a graph has a nonrepetitive edge-coloring is called its Thue edge-chromatic number.
We improve on the best known general upper bound of $4\Delta-4$ for the Thue edge-chromatic number of trees of maximum degree $\Delta$
due to Alon, Grytczuk, Ha{\l}uszczak and Riordan (2002) by providing a simple nonrepetitive edge-coloring with $3\Delta-2$ colors.
\end{abstract}
\section{Introduction}
\label{sec:intro}
A {\em repetition} is a sequence of even length (for example $abacabac$), such that the first half of the sequence is identical to the second half.
In 1906 Thue~\cite{T1} proved that there are infinite sequences of 3 symbols that do not contain a repetition consisting of consecutive elements in the sequence. Such sequences are called {\em Thue sequences}. Thue studied these sequences as words that do not contain any square words $ww$ and the interested reader can consult Berstel~\cite{B2,B1} for some background and a translation of Thue's work using more current terminology.
Thue sequences have been studied and generalized in many views (see the survey of Grytczuk~\cite{G}), but in this paper we focus on the natural generalization of the Thue problem to Graph Theory.
In 2002 Alon, Grytczuk, Ha{\l}uszczak and Riordan~\cite{AGHR} proposed calling a coloring of the edges of a graph {\em nonrepetitive} if the sequence of colors on any open path in $G$ is nonrepetitive. We will use $\pi'(G)$ to denote the {\em Thue chromatic index} of a graph $G$, which is the minimum number of colors in a nonrepetitive edge-coloring of $G$. In~\cite{AGHR} the notation $\pi(G)$ was used for the Thue chromatic index, but by common practice we will instead use this notation for the {\em Thue chromatic number}, which is the minimum number of colors in a nonrepetitive coloring of the {\em vertices} of $G$. Their paper contains many interesting ideas and questions, the most intriguing of which is if $\pi(G)$ is bounded by a constant when $G$ is planar. The best result in this direction is due to Dujmovi{\'c}, Frati, Joret, and Wood~\cite{DFJW} who show that for planar graphs on $n$ vertices $\pi(G)$ is $O(\log n)$. Conjecture~2 from~\cite{AGHR} was settled by Currie~\cite{C} who showed that for the $n$-cycle $C_n$, $\pi(C_n)=3$ when $n\ge 18$. One of the conjectures from~\cite{AGHR} that remains open is whether $\pi'(G)=O(\Delta)$ when $G$ is a graph of maximum degree $\Delta$. At least $\Delta$ colors are always needed, since nonrepetitive edge-colorings must give adjacent edges different colors.
In this paper we study the seemingly easy question of nonrepetitive edge-colorings of trees. Thue's sequence shows that if $P_n$ is the path on $n$ vertices, then $\pi'(P_n)=\pi(P_{n-1})\le 3$. (Keszegh, Patk{\'o}s, and Zhu~\cite{KPZ} extend this to more general path-like graphs.) Using Thue sequences Alon, Grytczuk, Ha{\l}uszczak and Riordan~\cite{AGHR} proved that every tree of maximum degree $\Delta\geq 2$ has a nonrepetitive edge-coloring with $4(\Delta-1)$ colors and stated that the same method can be used to obtain a nonrepetitive vertex-coloring with 4 colors. However, while the star $K_{1,t}$ is the only tree whose vertices can be colored nonrepetitively with fewer than 3 colors, it is still unknown which trees need 3 colors, and which need 4 (see Bre{\v{s}}ar, Grytczuk, Klav{\v{z}}ar, Niwczyk, Peterin~\cite{BGKNP}.) Interestingly Fiorenzi, Ochem, Ossona de Mendez, and Zhu~\cite{FOOZ} showed that for every integer $k$ there are trees that have no nonrepetitive vertex-coloring from lists of size $k$.
Up to this point the only paper we are aware of that narrows the large gap between the trivial lower bound of $\Delta$ colors in a nonrepetitive edge-coloring of a tree of maximum degree $\Delta$ and the $4\Delta-4$ upper bound from~\cite{AGHR} is by Sudeep and Vishwanathan~\cite{SV}. We will describe their results in the next section. The main result of this paper is to give the first nontrivial improvement of the upper bound from~\cite{AGHR}.
\begin{theorem}\label{thm:main}
If $G$ is a tree of maximum degree $\Delta$, then $\pi'(G)\le 3\Delta-2$.
\end{theorem}
We will give a proof of this theorem in Section~\ref{sec:main} using a coloring method we describe in Section~\ref{sec:derived}. We discuss some possible ways for further improvements in Section~\ref{sec:improve}.
\section{Trees of small height}\label{sec:small}
A $k$-ary tree is a tree with a designated root and the property that every vertex that is not a leaf has exactly $k$ children. The $k$-ary tree in which the distance from the root to every leaf is $h$ is denoted by $T_{k,h}$. For convenience we will assume that the vertices in $T_{k,h}$ are labeled as suggested in Figures~\ref{fig:2,2} and~\ref{fig:2,3} with the root labeled 1, its children labeled $2,\dots,k+1$, their children $k+2,\dots,k^2+k+1$ and so on. This allows us to write $u<v$ if $u$ is to the left or above $v$, and also gives the vertices at each level (distance from the root) a natural left to right order.
To obtain bounds on the Thue chromatic index of general trees $G$ of maximum degree $\Delta\ge 2$ it suffices to study $k$-ary trees for $k=\Delta-1$, since $G$ is a subgraph of $T_{k,h}$ for sufficiently large $h$.
Of course the Thue sequence shows that for $h>4$ we have $\pi'(T_{1,h})=\pi'(P_h)=3$, and it is similarly obvious that $\pi'(T_{k,1})=\pi'(K_{1,k})=k$.
It is easy to see that the next smallest tree $T_{2,2}$ already requires 4 colors, and Figure~\ref{fig:2,2} shows the only two such 4-colorings up to isomorphism.
\begin{figure}
\caption{Nonrepetitive 4-edge-colorings of $T_{2,2}$.}
\label{fig:2,2}
\end{figure}
The Master's thesis of the second author~\cite{thesis} contains a proof of the fact that the type II coloring of $T_{2,2}$ extends to a unique 4-coloring of $T_{2,3}$ whereas the type I coloring extends to exactly 5 non-isomorphic 4-colorings of $T_{2,3}$, one of which we show in Figure~\ref{fig:2,3}. It is furthermore shown that none of these 6 colorings can be extended to $T_{2,4}$. In fact $\pi'(T_{2,4})=5$ as we can easily extend the coloring from Figure~\ref{fig:2,3} by using color 5 on one of the two new edges at every vertex from $8$ through $15$, and (for example) using colors 1,1,3,4,2,3,2,3 on the other edges in this order.
\begin{figure}
\caption{Nonrepetitive 4-edge-coloring of $T_{2,3}$.}
\label{fig:2,3}
\end{figure}
On a more general level, Sudeep and Vishwanathan~\cite{SV} proved that $\pi'(T_{k,2})=\lfloor \frac32 k\rfloor+1$ (compare also Theorem~4 of~\cite{BLMSS}) and $\pi'(T_{k,3})>\frac{\sqrt5 +1}2k>1.618k$. Their lower bounds follow from counting arguments, whereas the construction for $h=2$ consists of giving the edges at the first level colors $0,1,\dots, k-1$ and using all the $\lfloor k/2\rfloor+1$ remaining colors below each vertex at level 1. The remaining $m=\lceil k/2\rceil -1$ edges below the edge of color $i$ are colored with $i+1 {~\rm mod~} k, i+2 {~\rm mod~} k,\dots,i+m {~\rm mod~} k$, in other words cyclically.
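The Sudeep--Vishwanathan construction for $T_{k,2}$ is easy to verify by brute force, since every path in a tree is determined by its two endpoints. The following sketch (function names and vertex labels are ours) builds the coloring and checks that no path carries a repetitive color sequence:

```python
import math
from itertools import combinations

def color_Tk2(k):
    """Edge-coloring of T_{k,2} with floor(3k/2)+1 colors: level-1 edges
    get colors 0..k-1; below the edge of color i, use the floor(k/2)+1
    extra colors plus i+1,...,i+m (mod k), where m = ceil(k/2)-1."""
    extra = list(range(k, k + k // 2 + 1))
    m = math.ceil(k / 2) - 1
    col = {}
    for i in range(k):
        col[('r', ('v', i))] = i                       # level-1 edge, color i
        child_colors = extra + [(i + t) % k for t in range(1, m + 1)]
        for j, c in enumerate(child_colors):
            col[(('v', i), ('w', i, j))] = c           # level-2 edges
    return col

def is_nonrepetitive(col):
    # In a tree, every path is the unique path between some vertex pair,
    # so checking all pairs covers all paths.
    adj = {}
    for (u, v), c in col.items():
        adj.setdefault(u, []).append((v, c))
        adj.setdefault(v, []).append((u, c))
    def path_colors(u, v):
        stack = [(u, None, [])]
        while stack:
            x, par, cols = stack.pop()
            if x == v:
                return cols
            stack.extend((y, x, cols + [c]) for y, c in adj[x] if y != par)
        return []
    for u, v in combinations(adj, 2):
        s = path_colors(u, v)
        half = len(s) // 2
        if len(s) % 2 == 0 and s[:half] == s[half:]:
            return False
    return True
```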
To explain the general upper bound of Alon, Grytczuk, Ha{\l}uszczak and Riordan~\cite{AGHR} we let $T_k$ denote the infinite $k$-ary tree.
It is not difficult to see that $\pi'(T_k)$ is the minimum number of colors needed to color $T_{k,h}$ for every $h\ge 1$. They prove that $\pi'(T_k)\le 4k$ by giving a nonrepetitive edge-coloring of $T_k$ on $4k$ colors as follows:
Starting with a Thue sequence $123132\dots$ insert 4 as every third symbol to obtain a nonrepetitive sequence $S=124314324\dots$ that also does not contain a {\em palindrome}, that is a sequence of length at least 2 that reads forwards the same as backwards, such as 121. Now color the edges with a common parent at distance $h-1$ from the root with $k$ different copies $s^{(1)},\dots ,s^{(k)}$ of the symbol $s$ in position $h$ of $S$. For example, the type II coloring in Figure~\ref{fig:2,2} is isomorphic to the first two levels of this coloring of $T_2$ if we replace $1^{(1)},1^{(2)},2^{(1)},2^{(2)}$ by $1,2,3,4$ respectively. It is now easy to verify that this coloring has no repetitively colored paths that are monotone ({\it i.e.} have all vertices at different levels) since $S$ is nonrepetitive, and none with a turning point ({\it i.e.} a vertex whose two neighbors on the path are its children) since $S$ is palindrome-free.
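This construction is easy to check by computer. The sketch below (hypothetical helper names; a greedily backtracked ternary squarefree word stands in for an explicit Thue sequence) builds a prefix of $S$ and verifies that it is nonrepetitive and palindrome-free:

```python
# Build a ternary nonrepetitive (squarefree) word by backtracking and
# insert the symbol 4 as every third symbol, as in the construction above.

def has_square(w):
    """True if w contains a block repetition xx."""
    n = len(w)
    return any(w[i:i + L] == w[i + L:i + 2 * L]
               for i in range(n) for L in range(1, (n - i) // 2 + 1))

def has_palindrome(w):
    """True if w contains a palindromic block of length at least 2."""
    n = len(w)
    return any(w[i:j] == w[i:j][::-1]
               for i in range(n) for j in range(i + 2, n + 1))

def ternary_squarefree(length):
    """Find a squarefree word on {1,2,3} by depth-first search."""
    w = []
    def extend():
        if len(w) == length:
            return True
        for s in (1, 2, 3):
            w.append(s)
            # only a suffix ending at the new symbol can be a new square
            if all(w[-2 * L:-L] != w[-L:] for L in range(1, len(w) // 2 + 1)):
                if extend():
                    return True
            w.pop()
        return False
    assert extend()
    return w

def insert_fours(w):
    out = []
    for i, s in enumerate(w):
        out.append(s)
        if i % 2 == 1:      # after every second symbol of w,
            out.append(4)   # so that 4 is every third symbol of the result
    return out

S = insert_fours(ternary_squarefree(40))
assert not has_square(S) and not has_palindrome(S)
```

Note that palindrome-freeness comes for free: any palindrome of length at least 3 would contain either a central square or a 4 at exactly one end, both of which are impossible.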
Sudeep and Vishwanathan noted the gap between the bounds $1.618k<\pi'(T_k)\le 4k$, and stated their belief that both can be improved.
Even for $k=2$ the gap $3.2<\pi'(T_2)\le 8$ is large. While the lower bound $\pi'(T_2)\ge \pi'(T_{2,4})=5$ is not hard to obtain, the specific question of showing that $\pi'(T_2)<8$ was already raised in~\cite{AGHR} at the end of Section 4.2. Theorem~\ref{thm:main} implies that indeed $\pi'(T_2)\le 7$. On the other hand, improving on the lower bound of 5 (if that is possible) would require different ideas from those in~\cite{SV} because~\cite{thesis} presents a nonrepetitive 5-coloring of $T_{2,10}$ as Example~3.2.6.
\section{Derived colorings}\label{sec:derived}
In this section, which can also be found in~\cite{thesis}, we present a way to color the edges of $T_k$ that is different from that used by Alon, Grytczuk, Ha{\l}uszczak and Riordan~\cite{AGHR}.
While their idea is the natural generalization of the type II coloring, in that the coloring proceeds level by level, our coloring generalizes the type I coloring by moving diagonally. The fact that the type I coloring could be extended in 5 nonisomorphic ways, whereas the extension of the type II coloring was unique, encourages this approach.
\begin{definition}
Let $S=s_1,s_2, \dots$ be a sequence. The edge-coloring of a $k$-ary tree $T$ \textbf{derived} from $S$ is obtained as follows:
The edges incident with the root receive colors $s_1,s_2,\dots ,s_k$ going from left to right in this order.
If $v$ is any vertex other than the root and if the edge between $v$ and its parent has color $s_i$, then the edges between $v$ and its children receive colors $s_{i+1},s_{i+2},\dots ,s_{i+k}$ again going from left to right in this order.
\end{definition}
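Concretely, the derived coloring is easy to compute. In the sketch below (a hypothetical helper; vertices of $T_{k,h}$ in heap order with the root as vertex 0) `derived_coloring` returns, for every non-root vertex, the color of the edge between it and its parent:

```python
def derived_coloring(S, k, h):
    """Map each non-root vertex of T_{k,h} (heap order, root = 0, children
    of v are k*v+1, ..., k*v+k) to the color of its parent edge."""
    idx = {j: j for j in range(1, k + 1)}   # edges at the root: s_1,...,s_k
    frontier = list(range(1, k + 1))
    for _ in range(h - 1):
        nxt = []
        for v in frontier:
            for j in range(1, k + 1):       # children, left to right
                child = k * v + j
                idx[child] = idx[v] + j     # colors s_{i+1},...,s_{i+k}
                nxt.append(child)
        frontier = nxt
    return {v: S[i - 1] for v, i in idx.items()}

# The type I coloring of T_{2,2} is derived from S = 1,2,3,4:
colors = derived_coloring([1, 2, 3, 4], k=2, h=2)
assert colors == {1: 1, 2: 2, 3: 2, 4: 3, 5: 3, 6: 4}
```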
To color the edges of the infinite $k$-ary tree $T_k$ in this fashion we need $S$ to be infinite. To color the edges of $T_{k,h}$ it suffices for the length of $S$ to be at least $kh$ (which is rather small considering that there are about $k^h$ edges) as each level will use $k$ entries of $S$ more than the previous level (on the edges incident with the right-most vertex). For example the type I coloring of $T_{2,2}$ is the coloring derived from $S=1,2,3,4$, whereas the coloring of $T_{2,3}$ in Figure~\ref{fig:2,3} is derived from $S=1,2,3,4,1,2$. The next definition will enable us to characterize infinite sequences whose derived coloring is nonrepetitive.
\begin{definition}\label{def:kspecial}
Let $S=s_1,s_2, \dots $ be a (finite or infinite) sequence. A sequence of indices $i_1,i_2, \dots, i_{2r}$ is called {\bf $k$-bad} for $S$ if there is an $m$ with $1 < m \leq 2r$ such that the following four conditions hold:
\begin{enumerate}[a)]
\item $s_{i_1},s_{i_2}, \dots ,s_{i_{2r}}$ is a repetition
\item $i_1 > i_2> \dots > i_m < i_{m+1} < i_{m+2} < \dots < i_{2r}$
\item $|i_j-i_{j+1}| \leq k$ for all $j$ with $1 \leq j <2r$
\item $i_{m+1}<i_m+k$ if $m<2r$.
\end{enumerate}
$S$ is called \textbf{{$k$-special}} if it has no $k$-bad sequence of indices.
\end{definition}
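Conditions a)--d) can be checked mechanically. The sketch below (a hypothetical helper; indices 1-based as in the definition) decides whether a given index sequence is $k$-bad for $S$:

```python
def is_k_bad(S, idx, k):
    """Check conditions a)-d) of the definition for a 1-based index sequence."""
    two_r = len(idx)
    if two_r % 2 != 0:
        return False
    r = two_r // 2
    sym = [S[i - 1] for i in idx]
    # a) the selected entries form a repetition
    if sym[:r] != sym[r:]:
        return False
    for m in range(2, two_r + 1):           # definition requires 1 < m <= 2r
        # b) strictly decreasing up to position m, then strictly increasing
        down = all(idx[j] > idx[j + 1] for j in range(m - 1))
        up = all(idx[j] < idx[j + 1] for j in range(m - 1, two_r - 1))
        # c) consecutive indices differ by at most k
        close = all(abs(idx[j] - idx[j + 1]) <= k for j in range(two_r - 1))
        # d) i_{m+1} < i_m + k whenever m < 2r
        turn = (m == two_r) or (idx[m] < idx[m - 1] + k)
        if down and up and close and turn:
            return True
    return False

# For S = 1,2,3,4,1,2 the index sequence 3,1,2,3,5,6 is 2-bad:
assert is_k_bad([1, 2, 3, 4, 1, 2], [3, 1, 2, 3, 5, 6], k=2)
```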
The following proposition describes the basic structure of a {$k$-special} sequence, namely that identical entries must be at least $2k$ apart.
\begin{proposition}\label{prop:distance}
A sequence $S$ has a $k$-bad sequence of length at most four with $m\le 3$ if and only if $s_i=s_j$ for some $i<j< i+2k$.
\end{proposition}
\begin{proof}
For the back direction observe that if $j\le i+k$, then the sequence of indices $j,i$ is $k$-bad with $m=2$.
If $i+k\le j< i+2k$, then the sequence $i+k-1,i,i+k-1,j$ is $k$-bad with $m=2$.
For the forward direction, observe that if $i_1,i_2$ is $k$-bad (necessarily with $m=2$), then we can let $j=i_1$ and $i=i_2$.
If $i_1,i_2,i_3,i_4$ is $k$-bad with $m=2$ then we let $i=i_2$ and $j=i_4$ and observe that $i<i_3<j\le i_3+k\le i+2k-1$.
So we may assume that $i_1,i_2,i_3,i_4$ is $k$-bad with $m=3$. If $i_2=i_4$, then we let $i=i_3$ and $j=i_1$ and obtain $i<i_2<j\le i_4+k-1=i_2+k-1\le i+2k-1$ as desired. Otherwise $i_2, i_4$ are distinct numbers $x$ with $i_3<x\le i_3+k$ and we can let $\{i,j\}=\{i_2,i_4\}$.
\end{proof}
We are now ready to prove the following.
\begin{theorem}\label{thm:kspecial}
An infinite sequence $S$ is {$k$-special} if and only if the edge-coloring of $T_k$ derived from $S$ is nonrepetitive.
\end{theorem}
\begin{proof} $(\Rightarrow)$ Suppose that a {$k$-special} sequence $S$ creates a repetition on a path $P=v_0,v_1,\dots, v_{2r}$ in $T_k$, that is $R=c(v_0v_1),c(v_1v_2),\dots,c(v_{2r-1}v_{2r})$ satisfies $c(v_iv_{i+1})=c(v_{i+r}v_{i+r+1})$ for $0 \leq i \leq r-1$. Observe that $c(v_jv_{j+1})=s_{i_{j+1}}$ where $0 \leq j \leq 2r-1$, for some $s_{i_{j+1}} \in S$. There are two possibilities; $v_0,v_1,\dots, v_{2r}$ is monotone or it has a single turning point.
\textbf{Case 1:} Suppose $v_0,v_1,\dots, v_{2r}$ is monotone.\\
If $v_0,v_1,v_2 \dots, v_{2r}$ is monotone then we may assume $v_0>v_1>v_2>\dots>v_{2r}$. Since $v_j>v_{j+1}$ we know that $v_j$ is the child of $v_{j+1}$ so we have that $i_j>i_{j+1}$ and $|i_j-i_{j+1}|\leq k$. The subsequence $s_{i_1},s_{i_2},\dots, s_{i_{2r}}$ is a repetition, so that $i_1,\dots,i_{2r}$ is $k$-bad with $m=2r$, a contradiction.
\textbf{Case 2:} Suppose $v_0,v_1,\dots, v_{2r}$ has a turning point $v_m$ for some $m$ with $0<m<2r$.
By the definition of a turning point $v_{m-1}$ and $v_{m+1}$ are the children of $v_m$, and thus $v_0>v_1>\dots>v_{m-1} >v_m <v_{m+1}< \dots <v_{2r}$. We may also assume without loss of generality that $v_{m-1}<v_{m+1}$. Observe that $v_0,v_1,\dots, v_{m}$ is moving towards the root and $v_m,v_{m+1},\dots, v_{2r}$ is moving away from the root. Let $c(v_jv_{j+1})=s_{i_{j+1}}$. We will show that $i_1>i_2>\dots >i_{m-1}>i_m<i_{m+1}<\dots< i_{2r}$ and that this sequence is $k$-bad for $S$. Since $v_{j-1} > v_j > v_{j+1}$ for $1 \leq j < m$ we know that $v_j$ is the child of $v_{j+1}$ and the parent of $v_{j-1}$ so we have $i_j>i_{j+1}$ and $|i_j-i_{j+1}|\leq k$. Similarly, since $v_{j-1}<v_j<v_{j+1}$ for $m < j < 2r$ we know that $v_{j}$ is the child of $v_{j-1}$ and the parent of $v_{j+1}$ so $i_j<i_{j+1}$ and $|i_j-i_{j+1}| \leq k$. Finally, since $v_m$ is the parent of both $v_{m-1}$ and $v_{m+1}$, we have $|i_m-i_{m+1}|<k$, and $i_m < i_{m+1}$ since we assumed $v_{m-1} < v_{m+1}$. The subsequence $s_{i_1},s_{i_2},\dots, s_{i_{2r}}$ is a repetition, leading to the contradiction that $i_1,\dots,i_{2r}$ is $k$-bad.
$(\Leftarrow)$ We proceed by contrapositive. So suppose $S$ has a $k$-bad sequence ${i_1},{i_2}, \dots ,{i_{2r}}$. We will show that there is a path on vertices $v_0,v_1,v_2,\dots, v_{2r}$ with $c(v_jv_{j+1})=s_{i_{j+1}}$ where the color pattern $c(v_0v_1),c(v_1v_2)\dots,c(v_{2r-1}v_{2r})$ is a repetition in the derived edge-coloring of $T_k$. The left child of a vertex $v$ is the child with the smallest label, and we will denote this child as $v'$. Observe that if $c(vp(v))=s_\alpha$, then $c(vv')=s_{\alpha+1}$.
If $m=2r$ then we start at the root and successively go to the left child of the current vertex until we find a vertex $v_{2r}$ such that $c(v_{2r}v_{2r}')=s_{i_{2r}}$ and let $v_{2r-1}=v_{2r}'$. Let $v_{2r-2}$ be the child of $v_{2r-1}$ with $c(v_{2r-1}v_{2r-2})=s_{i_{2r-1}}$ (this exists since $|i_j-i_{j+1}| \leq k$). We continue in this way until we have found $v_{0}$. Now observe that the color pattern of $v_0,v_1,\dots, v_{2r}$ is $s_{i_1},s_{i_2}, \dots ,s_{i_{2r}}$ as desired.
If $m < 2r$ then we start at the root and successively go to the left child of the current vertex until we find a vertex $v_m$ such that $c(v_mv'_m)=s_{i_m}$ and let $v_{m-1}=v_{m}'$. Let $v_{m+1}$ be the child of $v_m$ with $c(v_mv_{m+1})=s_{i_{m+1}}$ (this exists since $i_m<i_{m+1}<i_m+k$). Now, for $p=m-1,m-2,\dots,1$ we successively find a child $v_{p-1}$ of $v_p$ such that $c(v_pv_{p-1})=s_{i_p}$. The existence of $v_{p-1}$ is guaranteed by the fact that $i_{p+1}<i_p\le i_{p+1}+k$, as in the case $m=2r$. For $m+1\leq q \leq 2r-1$ we successively find a child $v_{q+1}$ of $v_q$ such that $c(v_qv_{q+1})=s_{i_{q+1}}$, which we can do since $|i_q-i_{q+1}|\leq k$. Now observe that the color pattern of $v_0,v_1,\dots, v_{2r}$ is $s_{i_1},s_{i_2}, \dots ,s_{i_{2r}}$ as desired.
\end{proof}
\begin{remark}\label{rem:finite}
Observe that the proof of the forward direction also works for the finite case $T_{k,h}$, a fact we will use in Section~\ref{sec:improve}. However, the back direction need not hold in this case: We already mentioned that the coloring derived from $S=1,2,3,4,1,2$ in Figure~\ref{fig:2,3} is nonrepetitive (see also $k=2$ in Proposition~\ref{2k}), but this sequence $S$ is not 2-special, because the index-sequence $3,1,2,3,5,6$ is $2$-bad.
\end{remark}
Thus to get a good upper bound on $\pi'(T_k)$ we just need an infinite {$k$-special} sequence with few symbols.
As every $2k$ consecutive elements must be distinct, the following simple idea turns out to be useful: from a sequence $S$ on $q$ symbols we can form a sequence $S^{(w)}$ on $qw$ symbols by replacing each symbol $t$ in $S$ by a block $T=t^{(0)},t^{(1)},\dots t^{(w-1)}$ of $w$ symbols. In~\cite{thesis} it is shown that if $S$ is nonrepetitive and palindrome-free then $S^{(k)}$ is $k$-special. This gives a new proof of the result from~\cite{AGHR} that $\pi'(T_k)\le 4k$. In the next section we will improve on that.
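The blow-up operation is a one-liner. A minimal sketch (hypothetical names), with a spot check that in $S^{(2)}$ built from a squarefree word any $2k=4$ consecutive entries are distinct:

```python
def blow_up(S, w):
    """Replace every symbol t of S by the block t^(0), ..., t^(w-1)."""
    return [(t, i) for t in S for i in range(w)]

# Blowing up a squarefree word on {1,2,3} by w = 2 gives a word on 6 symbols
S2 = blow_up([1, 2, 3, 1, 3, 2], 2)
assert S2[:4] == [(1, 0), (1, 1), (2, 0), (2, 1)]
assert len(set(S2)) == 6
# any 2k = 4 consecutive entries are pairwise distinct
assert all(len(set(S2[i:i + 4])) == 4 for i in range(len(S2) - 3))
```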
\section{Main result}\label{sec:main}
We begin with the simple observation that if $S$ is a sequence then $S^{(k+1)}=S^+$ has the property that if $i,j$ are indices with $s^+_i=x^{(u)}$ and $s^+_j=y^{(v)}$ then $i<j\le i+k$ implies that either $x=y$ and $u<v$, or $s^+_i$ and $s^+_j$ are in consecutive blocks $XY$ of $S^+$ and $u>v$. In other words we can tell whether we are moving left or right through the sequence just by looking at the superscripts (as long as consecutive symbols in $S$ are distinct). As a starting point we immediately get the following result.
\begin{corollary}\label{cor:3k+3}
For all $k\ge 1$, $\pi'(T_k)\le 3k+3$.
\end{corollary}
\begin{proof}
It is enough to show that $S^+$ on $3(k+1)$ symbols is {$k$-special} whenever $S$ is an infinite Thue sequence on 3 symbols. Suppose there is a $k$-bad sequence of indices $i_1,\dots,i_{2r}$. Since any $2(k+1)$ consecutive symbols in $S^+$ are distinct we get that $r>1$ by Proposition~\ref{prop:distance}. If $m<2r$, then we can find an index $j$ such that $i_j>i_{j+1}$ and $i_{r+j}<i_{r+j+1}$ with $s_{i_j}=s_{i_{r+j}}=x^{(u)}$ and $s_{i_{j+1}}=s_{i_{r+j+1}}=y^{(v)}$. Indeed, if $m\le r$ we let $j=1$, and otherwise we let $j=m-r$. In this case $x=y$ and $u\le v$ would violate $i_j>i_{j+1}\ge i_j-k$, whereas $u\ge v$ would violate $i_{r+j}<i_{r+j+1}\le i_{r+j}+k$. Similarly if $x\neq y$, then $u\ge v$ would violate $i_j>i_{j+1}\ge i_j-k$, whereas $u\le v$ would violate $i_{r+j}<i_{r+j+1}\le i_{r+j}+k$.
It remains to observe that in the case when $m=2r$ the sequence $s_{i_1},s_{i_2}, \dots ,s_{i_{2r}}$ in $S^+$ yields a repetition in $S$ by erasing the superscripts and merging identical consecutive terms where necessary.
\end{proof}
This bound can be improved to $3k+2$ by removing all symbols of the form $a^{(0)}$ from $S^+$ for one of the symbols $a$ from $S$ and showing that the resulting sequence is still $k$-special. However, we can do a bit better. In fact, Theorem~\ref{thm:main} follows directly from our main result in this section.
\begin{theorem}\label{thm:3k+1}
There are arbitrarily long {$k$-special} sequences on $3k+1$ symbols.
\end{theorem}
One difficulty is that removing two symbols from $S^+$ can easily result in the sequence not being {$k$-special} anymore. To make the proof work we need to start with a Thue sequence with additional properties. The following result was proved by Thue~\cite{T2} and reformulated by Berstel~\cite{B2,B1} using modern conventions.
\begin{theorem}\label{thm:Thue+}
There are arbitrarily long nonrepetitive sequences with symbols $a, b, c$ that do not contain $aba$ or $bab$.
\end{theorem}
To give an idea of how such a sequence can be found, observe that it must be built out of blocks of the form $ca, cb, cab,$ and $cba$ which we denote by $x, y, z, u$, respectively. (In fact, Thue primarily studied two-way infinite sequences, but for our purposes we may simply assume our sequence starts with $c$.)
We first build a sufficiently long sequence on the 5 symbols $A, B, C, D, E$ by starting with the sequence ``B'' and then in each step simultaneously replacing each letter as follows:
\begin{center}
\begin{tabular}{l|l|l|l|l|l}
Replace & A & B & C & D & E\\
\hline
by & BDAEAC & BDC & BDAE & BEAC & BEAE \\
\end{tabular}
\end{center}
In the resulting sequence we then let $A=zuyxu$, $B=zu$, $C=zuy$, $D=zxu$, $E=zxy$. Lastly we replace $x$, $y$, $z$ and $u$ by the blocks described above.
For example, from $B$ we obtain $BDC$, and then
after a second step $BDCBEACBDAE$. This translates to the intermediate sequence
\noindent $zuzxuzuyzuzxyzuyxuzuyzuzxuzuyxuzxy$, which gives us the desired sequence
\noindent $cabcbacabcacbacabcbacbcabcbacabcacbcabcbacbcacbacabcbacbcabcbacabcacbacabcbacbcacbacabcacb$.
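The 90-letter word above can be verified mechanically; a minimal sketch:

```python
# Check that the word constructed above is nonrepetitive (contains no
# block repetition xx) and avoids the factors aba and bab.
W = ("cabcbacabc" "acbacabcba" "cbcabcbaca" "bcacbcabcb" "acbcacbaca"
     "bcbacbcabc" "bacabcacba" "cabcbacbca" "cbacabcacb")

def has_square(w):
    n = len(w)
    return any(w[i:i + L] == w[i + L:i + 2 * L]
               for i in range(n) for L in range(1, (n - i) // 2 + 1))

assert len(W) == 90
assert not has_square(W)
assert "aba" not in W and "bab" not in W
```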
It is worth pointing out that Thue's work goes deeper in that he essentially characterizes all two-way infinite sequences that meet the conditions from Theorem~\ref{thm:Thue+} as well as several other related sequences. We also want to mention that the $A,B,C$ in the following proof have nothing to do with the $A,B,C$ in the previous paragraph, but we wanted to maintain the notation used in~\cite{B2,B1}.
\begin{proof}[Proof of Theorem~\ref{thm:3k+1}]
Start with an infinite sequence $S$ in the form of Theorem~\ref{thm:Thue+} and replace each occurrence of $c$ by a block $C$ of $k+1$ consecutive symbols $c^{(0)},c^{(1)},\dots,c^{(k)}$, whereas we replace each occurrence of $a$ or $b$ by shorter blocks $A=a^{(1)},\dots,a^{(k)}$ and $B=b^{(1)},\dots,b^{(k)}$ respectively. We claim that the resulting sequence $S'$ on $3k+1$ symbols is $k$-special. So suppose there is a $k$-bad sequence of indices $i_1,\dots,i_{2r}$. As before when $m=2r$ the sequence $s_{i_1},s_{i_2}, \dots ,s_{i_{2r}}$ in $S'$ yields a repetition in $S$ by erasing the superscripts and merging identical consecutive terms where necessary, as we cannot ``jump'' over any of the blocks $A, B$ or $C$ in $S'$. So we may assume that $1<m<2r$, and since any $2k$ consecutive elements of $S'$ are distinct Proposition~\ref{prop:distance} implies that $r>2$.
{\bf Claim:} If there is an index $j$ with $0<j< r$ such that $i_j>i_{j+1}$ and $i_{r+j}<i_{r+j+1}$, then $s_{i_j}=s_{i_{r+j}}=x^{(u)}$ and $s_{i_{j+1}}=s_{i_{r+j+1}}=y^{(u)}$ for $1\le u\le k$ and $\{x,y\}=\{a,b\}$. Consequently, $i_j-i_{j+1}=k=i_{r+j+1}-i_{r+j}$.
Indeed, $s_{i_j}=s_{i_{r+j}}=x^{(u)}$ and $s_{i_{j+1}}=s_{i_{r+j+1}}=y^{(v)}$ for some $u,v,x,y$. If $x=y$, then $u\le v$ would violate $i_j>i_{j+1}\ge i_j-k$, whereas $u\ge v$ would violate $i_{r+j}<i_{r+j+1}\le i_{r+j}+k$. Thus $x\neq y$. Now $u> v$ would violate $i_j>i_{j+1}\ge i_j-k$, whereas $u< v$ would violate $i_{r+j}<i_{r+j+1}\le i_{r+j}+k$. So we may assume that $u=v$. If $x=c$, then this would violate $i_{j}>i_{j+1}\ge i_j-k$ (as the presence of $c^{(0)}$ means that the distance is $k+1$). Similarly if $y=c$, then this violates $i_{r+j}<i_{r+j+1}\le i_{r+j}+k$. Hence we must have $\{x,y\}=\{a,b\}$ finishing the proof of the claim.
If $r<m<2r$, then we can apply the claim with $j=m-r$ and obtain consequently that $i_{m+1}-i_m=k$, in direct contradiction to condition d) from Definition~\ref{def:kspecial}.
So we suppose that $2 \le m\le r$. In this case we will let $j=m-1$ in our claim and we may assume due to the symmetry of $S$ in $a,b$ that $x=a$ and $y=b$. Thus for some $u$ with $1\le u\le k$ we get $s_{i_{m-1}}=a^{(u)}=s_{i_{m+r-1}}$ and $s_{i_m}=b^{(u)}=s_{i_{m+r}}$. If $m>2$, then we may apply the claim again with $j=m-2$ to obtain that $s_{i_{m-2}}=b^{(u)}=s_{i_{m+r-2}}$. However, the fact that the indices $i_{m-2}>i_{m-1}>i_m$ correspond to symbols $b^{(u)},a^{(u)},b^{(u)}$ means that $S'$ must have consecutive blocks $BAB$, yielding a contradiction to the fact that in $S$ we had no consecutive symbols $bab$.
So we may assume that $m=2$.
Since $r>2$ and $s_{i_2}=b^{(u)}$ and $i_2<\dots<i_r$ we have that for $3\le j\le r$ either all $s_{i_j}$ are of the form $b^{(u_j)}$ or there is a smallest index $j$ such that $s_{i_j}=x^{(u_j)}$ for some $x\neq b$. In the first case it follows that there must be consecutive blocks $BAB$ (yielding a contradiction) such that $i_1$ and $i_{r+1}$ are in the $A$ block, $i_2,\dots, i_r$ are in the first $B$-block and $i_{r+2},\dots,i_{2r}$ are in the second. In the second case it follows, since there must be blocks $BA$ with $i_1$ in $A$ and $i_2$ in $B$, that $i_j$ must be in the $A$ block again, that is $s_{i_j}=a^{(u_j)}$. However, since $i_{r+1}<\dots <i_{r+j}$ it follows that there must be consecutive blocks $ABA$ in $S'$ (our final contradiction), such that $i_{r+1}$ is in the first $A$ block, $i_{r+j}$ in the second and $i_{r+2}, \dots, i_{r+j-1}$ are in the $B$ block.
\end{proof}
\section{{$k$-special} sequences on at most $3k$ symbols}\label{sec:improve}
One possible way to improve on Theorem~\ref{thm:main} is to study {$k$-special} sequences on at most $3k$ symbols.
The sequence $S_{n,c}=1,2,\dots,n,1,2,\dots, c$ for $n>c\ge 0$ turns out to be a key example in this situation.
Recall that by Proposition~\ref{prop:distance} the entries in a block of length $2k$ of a {$k$-special} sequence must all be distinct.
Thus, if we let $f_k(n)$ denote the maximum length of a {$k$-special} sequence $S$ on $n$ symbols, then this observation immediately implies that $f_k(n)=n$ when $n<2k$ and up to isomorphism the only sequence achieving this value is $S_{n,0}$. When $n\ge 2k$ we can furthermore assume without loss of generality that if $S$ is {$k$-special} on $n$ symbols, then $s_i=i$ for $1\le i\le 2k$ (just like $S_{n,c}$).
If $n=2k$ then it follows from Proposition~\ref{prop:distance} that a sequence achieving $f_k(2k)$ must be of the form $S_{2k,c}$. It is easy to check that $S_{2k,1}$ is in fact $k$-special, whereas $S_{2k,2}$ contains the $k$-bad index sequence $k+1,1,2,k+1,2k+1,2k+2$, which yields the repetition $k+1,1,2,k+1,1,2$. Thus $f_k(2k)=2k+1$ with $S_{2k,1}$ being the unique sequence achieving this value.
This $k$-bad index sequence also explains why we could not have consecutive blocks $ABA$ or $BAB$ in our construction for Theorem~\ref{thm:3k+1}.
For the remaining range we get
\begin{proposition}\label{prop:2k+}
~
\begin{enumerate}[a)]
\item If $n\ge 2k$, then $S_{n,n-k}$ has a $k$-bad sequence only when $n=2k$ and such a sequence must have $2=m<r$.
\item If $n\ge 2k+1$, then $f_k(n)\ge 2n-k$.
\end{enumerate}
\end{proposition}
\begin{proof}
It suffices to prove the first statement, as it immediately implies the second. So suppose $n\ge 2k$ and $I=i_1,\dots,i_{2r}$ is a $k$-bad sequence of indices for some $m$.
If $m=2r$, then $I$ is decreasing and so the fact that $s_{i_j}=s_{i_{j+r}}$ for all $1\le j\le r$ implies that $i_1>\dots >i_r\ge n+1$ and $n-k\ge i_{r+1}>\dots>i_{2r}$, yielding the contradiction $i_r-i_{r+1}>k$. So we may assume that $m<2r$.
If $m>r$, then let $m'=m-r$. Since $s_{i_m}=s_{i_{m'}}$ and $i_{m'}>i_m$, it follows that $i_{m}=i_{m'}-n\in\{1,\dots,n-k\}$. Since $i_{m'}\ge n,$ $i_m\le n-k$ and for all $j$ we have $|i_j-i_{j+1}|\le k$ it follows that there must be some $j$ with $m'<j<m$ such that $i_j\in\{n-k+1,\dots, n\}$. Since $I$ yields a repetition with $i_1>\dots >i_m$, but the symbol $s_{i_j}=i_j$ is unique in $S_{n,n-k}$ we conclude that $i_j=i_{j+r}$.
It follows that $j=m'+1$, since otherwise $i_{m'}>i_{j-1}>i_{j}$ and $i_m<i_{j+r-1}<i_{j+r}$ would contradict $s_{i_{j-1}}=s_{i_{j+r-1}}$ as the sets $\{s_{i_j+1},s_{i_j+2},\dots, s_{i_{m'}-1}\}$ and $\{s_{i_m+1}, s_{i_m+2},\dots s_{i_j-1}\}$ are disjoint. Now $j=m'+1$ implies that
$i_{m'}-k=i_{j-1}-k\le i_j=i_{j+r}=i_{m+1}\le i_m+k-1$, and since $i_{m'}=i_m+n$ we get $n\le 2k-1$, a contradiction.
If $m\le r$, then let $m'=m+r$. It follows again that $i_{m'}=i_m+n$, and that there must be some $j$ such that $i_j=i_{j+r}\in\{n-k+1,\dots, n\}$ and $j<m<j+r$. Thus $m'>j+r$ this time. It follows that $j=m-1$, since otherwise $i_{j}>i_{j+1}>i_{m}$ and $i_{j+r}<i_{j+r+1}<i_{m'}$ would contradict $s_{i_{j+1}}=s_{i_{j+r+1}}$ as the sets $\{s_{i_m+1}, s_{i_m+2},\dots s_{i_j-1}\}$ and $\{s_{i_j+1},s_{i_j+2},\dots, s_{i_{m'}-1}\}$ are still disjoint. Now $j=m-1$ implies that $i_{m}+k=i_{j+1}+k\ge i_j=i_{j+r}=i_{m'-1}\ge i_{m'}-k$, and since $i_{m'}=i_m+n$ we get $n\le 2k$, a contradiction unless $n=2k$. In this case also $i_m+k=i_j=i_{j+r}=i_{m'}-k=x$ for some $k+1\le x\le n=2k$.
If we have $m>2$ then $j-1=m-2\ge 1$ and we consider $i_{j-1}$. Since $i_{j+r-1}<i_{j+r}$ and $k+1=n-k+1\le s_{i_j}\le n=2k$, it follows that $s_{i_{j+r-1}}\in\{x-k,x-k+1,\dots,x-1\}$. Similarly $i_{j-1}>i_j$ implies that $s_{i_{j-1}}\in\{x+1,x+2,\dots,n\}\cup\{1,2,\dots,k-(n-x)=x-k\}$. Since $s_{i_{j+r-1}}=s_{i_{j-1}}$ it now follows that this value must be $x-k=i_m$. Hence $i_{j+r-1}=i_m$ and thus $m=j+r-1=(m-1)+r-1$. This implies the contradiction $2=r\ge m>2$. Hence $m=2$ and the fact that $r>2$ follows from Proposition~\ref{prop:distance} and the fact that the distance between identical labels is $2k$.
\end{proof}
We believe that in Proposition~\ref{prop:2k+}~b) equality holds when $2k<n<3k$. An exhaustive search by computer shows that this is the case when $2k<n<3k$ with $n\le 16$. Moreover $S_{2k+1,k+1}$ turns out to be the unique sequence achieving $f_k(2k+1)=3k+2$, whereas for $2k+2\le n<3k$ a typical sequence achieving $f_k(n)$ is obtained by permuting the last $n-k$ entries of $S_{n,n-k}$.
\begin{proposition}\label{2k}
The coloring of $T_{k,3}$ derived from $S_{2k,k}$ is nonrepetitive.
\end{proposition}
\begin{proof}
If the coloring of $T_{k,3}$ derived from $S_{2k,k}$ contains a repetition of length $2r$, then as in the proof of Theorem~\ref{thm:kspecial} it follows that there must be a $k$-bad sequence of $2r$ indices. From Proposition~\ref{prop:2k+} a) it now follows that $r> m=2$. Since a longest path in $T_{k,3}$ has 6 edges we must have $r=3$. However, any repetition of length 6 would have to connect two leaves and turn around at the root, and as such would have $m=3$, a contradiction.
\end{proof}
Combining everything we know so far we get
\begin{corollary}\label{cor:small}
If $h\ge 3$, then $\pi'(T_{k,h})\le \lceil\frac{h+1}2k\rceil$.
\end{corollary}
\begin{proof}
If $h=3$, then the result follows from Proposition~\ref{2k}. For $h>3$ we can apply Proposition~\ref{prop:2k+} b) with $n=\lceil\frac{h+1}2k\rceil$. Since $2n-k\ge hk$ it now follows from Remark~\ref{rem:finite} that the coloring of $T_{k,h}$ derived from $S_{n,n-k}$ is nonrepetitive.
\end{proof}
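As a computational sanity check of the smallest new case, $k=2$ and $h=4$ (so $n=5$ and $S_{5,3}=1,2,3,4,5,1,2,3$), the sketch below (hypothetical helpers) derives the coloring of $T_{2,4}$ and verifies by brute force that no path is repetitively colored:

```python
# Derive the coloring of T_{2,4} from S_{5,3} = 1,2,3,4,5,1,2,3 and check
# that no path carries a color sequence of the form XX.
S, k, h = [1, 2, 3, 4, 5, 1, 2, 3], 2, 4

idx = {}                                   # vertex -> 1-based index into S
for v in range(1, 2 ** (h + 1) - 1):       # heap order, root = 0
    parent = (v - 1) // 2
    j = (v - 1) % 2 + 1                    # 1 = left child, 2 = right child
    idx[v] = j if parent == 0 else idx[parent] + j
color = {v: S[i - 1] for v, i in idx.items()}

def path_colors(u, v):
    """Edge colors along the path between distinct vertices u and v."""
    up, down = [], []
    while u != v:                          # climb the vertex of larger index
        if u > v:
            up.append(color[u]); u = (u - 1) // 2
        else:
            down.append(color[v]); v = (v - 1) // 2
    return up + down[::-1]

n_vertices = 2 ** (h + 1) - 1
for u in range(n_vertices):
    for v in range(u + 1, n_vertices):
        c = path_colors(u, v)
        m = len(c)
        assert m % 2 == 1 or c[:m // 2] != c[m // 2:]
```

Since every subpath is itself a path between two vertices, checking all vertex pairs covers all possible repetitions.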
The bound in Corollary~\ref{cor:small} is better than that derived from Theorem~\ref{thm:3k+1} when $h\le 5$ and we obtain the following table of values for $\pi'(T_{k,h})$, where the presence of two values denotes a lower and an upper bound. The values marked by an asterisk were confirmed by computer search. The programs used are based on those found in~\cite{thesis} and the Python code is available at http://public.csusm.edu/akundgen/Python/Nonrepetitive.py
\begin{table}[h]
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
$k\backslash h$ & 1 & 2 & 3 & 4 & 5 & 6-10 & $h\geq 11$ \\ \hline
1 & 1 & 2 & 2 & 3 & 3 & 3 & 3 \\ \hline
2 & 2 & 4 & 4 & 5 & $5^*$ & $5^*$ & 5,7 \\ \hline
3 & 3 & 5 & $6^*$ & $6^*$ & 6,9 & 6,10 & 6,10 \\ \hline
4 & 4 & 7 & $7^*$ & 7,10 & 7,12 & 7,13 & 7,13 \\ \hline
5 & 5 & 8 & 9,10 & 9,13 & 9,15 & 9,16 & 9,16 \\ \hline
$\vdots$ & $\vdots$ & $\vdots$ & $\vdots$ & $\vdots$ & $\vdots$ & $\vdots$ & $\vdots$ \\ \hline
$k$ & $k$ & $\lfloor 1.5k \rfloor+1$ & $1.61k, 2k$ & $1.61k,\lceil2.5k\rceil$ & $1.61k,3k$ & $1.61k,3k+1$ & $1.61k,3k+1$ \\ \hline
\end{tabular}
\end{table}
It is worth noting that even though it may be possible to use derived colorings to improve individual columns of this table by a more careful argument (as we did in Proposition~\ref{2k}), this seems unlikely to work for $\pi'(T_k)$ in general. Theorem~\ref{thm:kspecial} implies that the infinite sequence from which we derive the coloring must be {$k$-special}, and while we were able to provide such a sequence on $3k+1$ symbols, it seems unlikely that there are such sequences on $3k$ symbols. An exhaustive search shows that for $k\le 5$ the maximum length of a $k$-special sequence on $n=3k$ symbols is $5k+3$, which is only 3 more than the length of $S_{n,n-k}$. The $k!$ examples achieving this value are all of the strange form $[1,2k],1,[2k+1,3k],x_1,[k+2,2k],1,x_2,x_3,\dots,x_k,x_1,2k+1$ where $\{x_1,\dots,x_k\}=\{2,\dots,k+1\}$ and $[a,b]$ denotes $a,a+1,a+2\dots,b$. In other words they are $S_{3k,2k+1}$ with the last $2k+1$ entries permuted and with 1 and $x_1$ inserted after positions $2k$ and $3k$.
A more promising next step would be to try to improve the lower bounds for $\pi'(T_{k,h})$ for $h=3,4,5$.
\nocite{*}
\label{sec:biblio}
\end{document}
\begin{document}
\title{Emulating computer models with step-discontinuous outputs using Gaussian processes}
\abstract{In many real-world applications we are interested in approximating costly functions that are analytically unknown, e.g. complex computer codes. An emulator provides a ``fast'' approximation of such functions relying on a limited number of evaluations. Gaussian processes (GPs) are commonplace emulators due to their statistical properties, such as the ability to estimate their own uncertainty. GPs are essentially designed to fit smooth, continuous functions. However, the assumption of continuity and smoothness is unwarranted in many situations. For example, in computer models where \emph{bifurcations} or \emph{tipping points} occur, the outputs can be discontinuous.
This work examines the capacity of GPs for emulating step-discontinuous functions. Several approaches are proposed for this purpose. Two ``special'' covariance functions/kernels are adapted with the ability to model discontinuities. They are the neural network and Gibbs kernels, whose properties are demonstrated using several examples. Another approach, called \emph{warping}, is to transform the input space into a new space where a GP with a standard kernel, such as the Mat\'ern family, is able to predict the function well. The transformation is performed by a parametric map whose parameters are estimated by maximum likelihood. The results show that the proposed approaches are superior to GPs with standard kernels in capturing sharp jumps in the ``true'' function.}\\
{\bf Keywords:} Covariance kernel, Discontinuity, Emulator, Gaussian processes, Warping.
\section{Introduction}
\label{intro}
Computer models (or simulators) are widely used in many applications ranging from modelling the ocean and atmosphere \cite{adcroft2004, challenor2004} to healthcare \cite{birrell2011, proctor2013}. By simulating real-world phenomena, computer models allow us to better understand and analyse them as a complement to conducting physical experiments. However, simulators are often ``black box'': they are available as commercial packages and we do not have access to their internal procedures. Moreover, they are computationally expensive, since each simulation outcome is actually the solution of some complex mathematical equations, such as partial differential equations.
One of the main purposes of using a computer model is to perform prediction. However, the accuracy of the prediction is questionable because simulators are simplifications of physical phenomena. In addition, due to factors such as lack of knowledge or measurement error, the inputs to the model are subject to uncertainty, which yields uncertain outputs. Under this condition, decision makers need to know how good the prediction is. In other words, they need an estimate of the uncertainty propagated through the model \cite{oakley2004_1}. This entails running the simulator very many times, which is impractical in the context of time-consuming simulators. To overcome this computational burden, one can replace the simulator with an \emph{emulator} which is fast to run.
Emulation is a statistical approach for representing unknown functions by approximating the input/output relationship based on evaluations at a finite set of points.
Gaussian process (GP) models (also known as kriging) are widely used to predict the outputs of a complex model and are regarded as an important class of emulators \cite{ohagan2006}. GPs are nonparametric probabilistic models that provide not only a mean predictor but also a quantification of the associated uncertainty. They have become a standard tool for the design and analysis of computer experiments over the last two decades. This includes uncertainty propagation \cite{oakley2004_2, lockwood2012}, model calibration \cite{kennedy2001, higdon2008}, design of experiments \cite{sacks1989, pronzato2012}, optimisation \cite{jones1998, brochu2010} and sensitivity analysis \cite{oakley2004_1, iooss2015}.
GPs can be applied to fit any smooth, continuous function \cite{neal1998}. The basic assumption when using a GP emulator is that the unknown function depends smoothly on its input parameters. However, there are many situations where the model outputs are not continuous. It is very common in computer models that at some regions of the input space, a minor change in the input parameters leads to a sharp jump in the output. For example, models described by nonlinear differential equations often exhibit different modes (phases). Shifting from one mode to another relies on a different set of equations, which introduces a discontinuity in the model output.
To our knowledge, there are only a few studies that investigate the applicability of GPs in modelling discontinuities. The reason may be that GPs are essentially developed to model smooth, continuous surfaces. However, a natural way of emulating discontinuous functions is to partition the input space by finding the discontinuities and then fit separate GP models within each partition. In \cite{caiado2015}, for example, a simulator with tipping point behaviour is emulated such that the boundary of the regions with discontinuity is found first and the simulator output is emulated separately in each region. It is reported that finding the discontinuous regions is a time-consuming operation.
The treed Gaussian process (TGP) \cite{gramacy2008} is a popular model introduced by Gramacy and Lee. The TGP makes binary splits (parallel to the input axes) on the value of a single variable recursively such that each partition (leaf of the tree) is a subregion of the previous section. Then, an independent stationary GP emulator is applied within each section. The disadvantage of the TGP is that it requires many simulation runs which is not affordable in the context of computationally expensive simulators. A similar approach is presented in \cite{pope2018} where Voronoi tessellation is applied to partition the input space. The procedure uses the reversible jump Markov chain Monte Carlo \cite{green95} that is time-consuming. In \cite{ghosh2018} a two-step method is proposed for emulating cardiac electrophysiology models with discontinuous outputs. First a GP classifier is employed to detect boundaries of discontinuities and then the GP emulator is built subject to these boundaries.
Here we provide an alternative perspective in which a single kernel is used to capture discontinuities. The advantage is that there is no need to detect discontinuous boundaries separately which is burdensome. The proposed methods include two nonstationary covariance functions, namely the neural network (NN) \cite{williams1997, raissi2018} and Gibbs kernels \cite{gibbs1997, paciorek2004}, and the idea of warping the input space \cite{calandra2016}. The NN kernel was first derived by Williams \cite{williams1997} and relies on the correspondence between GPs and single-layer neural networks with infinite number of hidden units (neurons) and random weight parameters \cite{neal1998}. As a result, the NN kernel is more expressive than standard kernels in modelling complex data structures. In the Gibbs kernel the parameter that regulates the correlation between observations is a function of the inputs which makes that kernel more flexible than the classical covariance functions. The warping technique has been already proven to be successful in modelling nonstationary functions, see e.g. \cite{sampson1992, snoek2014, marmin2018}. A warped kernel is obtained by applying a deterministic non-linear transformation to its inputs. This can be regarded as a special case of ``Deep Gaussian processes" which is a functional composition of multiple GPs \cite{damianou2013, salimbeni2017}.
In this work we show how these techniques coming from machine learning can be employed in the field of computer experiments to emulate models that present very steep variations.
\section{Overview of Gaussian process emulators}
\label{GP_emulators}
The random (or stochastic) process $Z = \left(Z(\mathbf{x})\right)_{\mathbf{x} \in \mathcal{D}}$, i.e. a collection of random variables indexed by the set $\mathcal{D}$, is a Gaussian process if and only if
$
\forall \, N \in \mathbb{N}, ~ \forall \, \mathbf{x}^j \in \mathcal{D}, ~ \left(Z(\mathbf{x}^1), \dots, Z(\mathbf{x}^N) \right)^\top
$ has a multivariate normal distribution on $\mathbb{R}^N$ \cite{GPML}. Let $\left(\Omega, \mathcal{B}, \mathbb{P} \right)$, where $\Omega$ is a sample space, $\mathcal{B}$ is a sigma-algebra and $\mathbb{P}$ is a probability measure, be the probability space on which $Z(\mathbf{x})$ is defined:
\begin{equation*}
Z: (\mathbf{x}, \omega) \mapsto Z(\mathbf{x}, \omega)~, ~ (\mathbf{x}, \omega) \in \mathcal{D} \times \left(\Omega, \mathcal{B}, \mathbb{P} \right) .
\end{equation*}
For a given $\omega_o \in \Omega$, $Z(\cdot, \omega_o)$ is called a \emph{sample path} (or \emph{realisation}) and for a given $\mathbf{x}_o \in \mathcal{D}$, $Z(\mathbf{x}_o, \cdot)$ is a Gaussian random variable.
In this framework, GPs can be regarded as the probability distribution over functions such that the function being approximated is considered as a particular realisation of the distribution. Herein, $f : \mathcal{D} \mapsto \mathcal{F}$ denotes the unknown function that maps the input space $\mathcal{D} \subset \mathbb{R}^d$ to the output space $\mathcal{F}$. In this work, $\mathcal{F} = \mathbb{R}$.
A GP is fully determined by its mean function $\mu(\cdot)$ and covariance kernel $k(\cdot,\cdot)$ which are defined as:
\begin{table}[H]
\centering
\begin{tabular}{l l}
\multirow{2}{*}{$Z \sim \mathcal{GP}\left(\mu(\cdot), k(\cdot, \cdot)\right)$~;} & $\mu : \mathcal{D} \mapsto \mathbb{R}~,~ \mu(\mathbf{x}) = \mathbb{E}\left[Z(\mathbf{x})\right] $\\
& $k : \mathcal{D} \times \mathcal{D} \mapsto \mathbb{R}~, ~ k(\mathbf{x}, \mathbf{x}^\prime) = \mathbb{C}\text{ov} \left(Z(\mathbf{x}), Z(\mathbf{x}^\prime)\right) $.
\end{tabular}
\end{table}
\noindent While $\mu$ could be any function, $k$ needs to be symmetric positive semidefinite. The function $\mu$ captures the global trend and $k$ controls the structure of sample paths such as differentiability, symmetry, periodicity, etc. In this work, the GP mean is assumed to be an unknown constant which is estimated from data, see Equation (\ref{mu_estim}). The notation $\mu$ is (slightly abusively) used to denote the value of this constant.
Generally, covariance functions are divided into two groups: \emph{stationary} and \emph{nonstationary}. Stationary kernels depend only on the separation vector $\mathbf{x} - \mathbf{x}^\prime$. As a result, they are translation invariant in the input space:
\begin{equation}
k(\mathbf{x}, \mathbf{x}^\prime) = k(\mathbf{x} + \boldsymbol{\tau}, \mathbf{x}^\prime + \boldsymbol{\tau}) \, ,~ \boldsymbol{\tau} \in \mathbb{R}^d.
\end{equation}
One of the most common covariance functions is the squared exponential (SE) kernel whose (separable) form is given by
\begin{equation}
k_{SE}(\mathbf{x}, \mathbf{x}^\prime) = \sigma^2\prod_{i = 1}^{d}\exp \left( -\frac{\vert x_i - x^\prime_i\vert^2}{2l_i^2} \right) .
\end{equation}
Here, the parameters $\sigma^2$ and $l_i$ are called \emph{process variance} and correlation \emph{length-scale} along the $i$-th coordinate, respectively. The former determines the scale of the amplitude of sample paths and the latter regulates how quickly the spatial correlation decays. In this paper, these parameters are estimated via maximum likelihood (ML) \cite{GPML, jones1998}, see Appendix \ref{sec_kernel}. Figure \ref{Fig:SE_kernel} shows the shape of the SE kernel and two sample paths with different length-scales. Another important class of stationary kernels is the Mat\'ern covariance function \cite{GPML}.
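As an illustration, the separable SE kernel above translates into a few lines of numpy. This is a sketch of ours (the function name \texttt{k\_se} and its defaults are not from the paper):

```python
import numpy as np

def k_se(x, xp, sigma2=1.0, ls=None):
    """Separable squared exponential kernel between two points x, xp in R^d.

    ls holds the per-coordinate length-scales l_i (defaults to ones).
    """
    x, xp = np.asarray(x, float), np.asarray(xp, float)
    ls = np.ones_like(x) if ls is None else np.asarray(ls, float)
    # k = sigma^2 * prod_i exp(-|x_i - x'_i|^2 / (2 l_i^2))
    return sigma2 * np.exp(-0.5 * np.sum((x - xp) ** 2 / ls ** 2))
```

A small length-scale makes the correlation decay quickly with distance, which is what produces the rougher of the two sample paths shown in the figure below.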
Nonstationary kernels are applied to model functions that do not have uniform smoothness within the input space and change significantly in some regions compared to others \cite{xiong2007}. In Sections \ref{sec_NNkernel} and \ref{sec_Gibskernel} two nonstationary covariance functions, namely the neural network and Gibbs kernels, are studied.
\begin{figure}
\caption{Left: shape of the squared exponential kernel. Right: sample paths corresponding to the SE kernel with $l= 0.1$ (solid) and $l = 1$ (red dashed). In both cases $\sigma^2 = 1$.}
\label{Fig:SE_kernel}
\end{figure}
The GP prediction of $f$ is obtained by conditioning $Z$ on function evaluations. Let $\mathbf{X} = \left(\mathbf{x}^1, \dots, \mathbf{x}^n \right)^\top$ denote $n$ sample locations in the input space and $\mathbf{y} = \left(f(\mathbf{x}^1), \dots, f(\mathbf{x}^n) \right)^\top$ represent the corresponding outputs (observations). Together, the set $\mathcal{A} = \{\mathbf{X}, \mathbf{y}\}$ is called the \emph{training} set. The conditional distribution of $Z$ on $\mathcal{A}$ is again a GP
\begin{equation}
Z\vert \mathcal{A} \sim \mathcal{GP}\left(m(\cdot), c(\cdot, \cdot)\right) ,
\end{equation}
specified by
\begin{align}
\label{post_mean}
m(\mathbf{x}) & = \hat{\mu} + \mathbf{k}(\mathbf{x})^\top \mathbf{K}^{-1} \left(\mathbf{y} - \hat{\mu} \mathbf{1} \right) \\
c(\mathbf{x}, \mathbf{x}^\prime) & = k(\mathbf{x}, \mathbf{x}^\prime) - \mathbf{k}(\mathbf{x})^\top \mathbf{K}^{-1} \mathbf{k} (\mathbf{x}^\prime) + \frac{\left(1 - \mathbf{1}^\top \mathbf{K}^{-1} \mathbf{k} (\mathbf{x}^\prime) \right)^2}{\left(\mathbf{1}^\top \mathbf{K}^{-1} \mathbf{1} \right)} .
\label{post_var}
\end{align}
Here, $\hat{\mu}$ is the ML estimate of $\mu$ obtained by \cite{GPML}
\begin{equation}
\hat{\mu} = \frac{\mathbf{1}^\top \mathbf{K}^{-1} \mathbf{y}}{\mathbf{1}^\top \mathbf{K}^{-1} \mathbf{1}} .
\label{mu_estim}
\end{equation}
Also, $\mathbf{k}(\mathbf{x}) = \left(k(\mathbf{x}, \mathbf{x}^1), \dots, k(\mathbf{x}, \mathbf{x}^n)\right)^\top$, $\mathbf{K}$ is an $n \times n$ covariance matrix whose elements are: $\mathbf{K}_{i j} = k(\mathbf{x}^i, \mathbf{x}^j)~,~ \forall i, j~;~ 1 \leq i, j \leq n$ and $\mathbf{1}$ is a $n \times 1$ vector of ones. We call $m(\mathbf{x})$ and $s^2(\mathbf{x}) = c(\mathbf{x}, \mathbf{x})$ the GP mean and variance which reflect the prediction and the associated uncertainty at $\mathbf{x}$, respectively.
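The predictive equations above translate directly into code. The following sketch is ours (a small jitter term is added to the diagonal for numerical stability, which the paper does not discuss); it returns the kriging mean and variance at a new point for a generic kernel:

```python
import numpy as np

def gp_predict(X, y, x_new, kernel):
    """Kriging prediction with an unknown constant mean estimated by ML.

    X : (n, d) training inputs, y : (n,) outputs,
    x_new : (d,) prediction point, kernel : k(x, x') -> float.
    """
    n = X.shape[0]
    K = np.array([[kernel(X[i], X[j]) for j in range(n)] for i in range(n)])
    K += 1e-10 * np.eye(n)                    # jitter for numerical stability
    Ki = np.linalg.inv(K)
    one = np.ones(n)
    mu_hat = one @ Ki @ y / (one @ Ki @ one)  # ML estimate of the constant mean
    kx = np.array([kernel(x_new, X[i]) for i in range(n)])
    m = mu_hat + kx @ Ki @ (y - mu_hat * one)                 # predictive mean
    s2 = (kernel(x_new, x_new) - kx @ Ki @ kx
          + (1.0 - one @ Ki @ kx) ** 2 / (one @ Ki @ one))    # predictive variance
    return m, s2
```

At a training point the mean interpolates the observation and the variance vanishes (up to the jitter), in agreement with the discussion that follows.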
It can be shown that in the classic covariance functions such as Mat\'ern kernel where $k(\mathbf{x}, \mathbf{x} ; \sigma^2 = 1) = 1$, the predictive mean expressed by Equation (\ref{post_mean}) interpolates the points in the training set. Also, the prediction uncertainty (Equation (\ref{post_var})) vanishes there. To clarify, we obtain the prediction and the associated uncertainty at $\mathbf{x} = \mathbf{x}^j$, the $j$-th training point. In this case, $\mathbf{k}(\mathbf{x}^j)$ is equivalent to the $j$-th column of the covariance matrix $\mathbf{K}$. Because $\mathbf{K}$ is a positive definite matrix, the term $ \mathbf{k}(\mathbf{x}^j)^\top \mathbf{K}^{-1}$ yields vector $\mathbf{e}_j = \left(0, \dots, 0, 1, 0, \dots, 0 \right)$ whose elements are zero except the $j$-th element which is one. As a result
\begin{align}
\label{post_mean_interpolate}
m(\mathbf{x}^j) & = \hat{\mu} + \overbrace{\mathbf{k}(\mathbf{x}^j)^\top \mathbf{K}^{-1}}^{\mathbf{e}_j } (\mathbf{y} - \hat{\mu} \mathbf{1} ) = f(\mathbf{x}^j) , \\
\nonumber s^2(\mathbf{x}^j) & = k(\mathbf{x}^j, \mathbf{x}^j) - \mathbf{k}(\mathbf{x}^j)^\top \mathbf{K}^{-1} \mathbf{k} (\mathbf{x}^j) \\
& + \frac{\left(1 - \mathbf{1}^\top \mathbf{K}^{-1} \mathbf{k} (\mathbf{x}^j) \right)^2}{\mathbf{1}^\top \mathbf{K}^{-1} \mathbf{1}} = 0 ,
\label{post_var_interpolate}
\end{align}
since $\mathbf{k} (\mathbf{x}^j) = (k(\mathbf{x}^j, \mathbf{x}^1), \dots, \overbrace{k(\mathbf{x}^j, \mathbf{x}^j)}^{\sigma^2}, \dots, k(\mathbf{x}^j, \mathbf{x}^n) )^\top$.
\section{Neural network kernel}
\label{sec_NNkernel}
In this section we first show how the NN kernel is derived from a single-layer neural network with infinite number of hidden units. Let $\tilde{f}(\mathbf{x}) $ be a neural network with $N_h$ units that maps inputs to outputs according to
\begin{equation}
\tilde{f}(\mathbf{x}) = b + \sum_{j = 1}^{N_h} v_j h(\mathbf{x} ; \mathbf{u}^j) ,
\label{neural_net}
\end{equation}
where $b$ is the intercept, the $v_j$ are the weights of the hidden units, and $h(\cdot\,; \mathbf{u}^j)$ is the transfer (activation) function with weight vector $\mathbf{u}^j$ applied to the input $\mathbf{x}$.
Suppose $b$ and each $v_j$ have zero-mean Gaussian distributions with variances $\sigma^2_b$ and $\sigma^2_v / N_h$, respectively. If the $\mathbf{u}^j$ are independent and identically distributed, then the mean and covariance of $\tilde{f}(\mathbf{x})$ are ($\mathbf{w}$ collects all the weights, i.e. $\mathbf{w} = \lbrack b, v_1, \ldots v_{N_h}, \mathbf{u}^1, \ldots, \mathbf{u}^{N_h}\rbrack$)
\begin{align}
\mathbb{E}_{\mathbf{w}} \big[\tilde{f}(\mathbf{x}) \big] &= 0 \, , \\
\nonumber \mathbb{C}\text{ov} \left(\tilde{f}(\mathbf{x}), \tilde{f}(\mathbf{x}^\prime) \right) &= \mathbb{E}_{\mathbf{w}} \big[ (\tilde{f}(\mathbf{x}) - 0) ( \tilde{f}(\mathbf{x}^\prime) - 0 ) \big] \\
\nonumber &= \sigma^2_b + \frac{1}{N_h} \sum_{j = 1}^{N_h} \sigma^2_v \mathbb{E}_{\mathbf{u}} \big[h(\mathbf{x}; \mathbf{u}^j) h(\mathbf{x}^\prime; \mathbf{u}^j) \big] \\
&= \sigma^2_b + \sigma^2_v \mathbb{E}_{\mathbf{u}} \big[h(\mathbf{x}; \mathbf{u}) h(\mathbf{x}^\prime; \mathbf{u}) \big] .
\label{neural_net_kernel}
\end{align}
Since $\tilde{f}(\mathbf{x})$ is the sum of independent random variables, it tends towards a normal distribution as $N_h \to \infty$ according to the central limit theorem. In this situation, any collection $\big\{\tilde{f}(\mathbf{x}^1), \ldots , \tilde{f}(\mathbf{x}^N)| ~\forall N \in \mathbb{N} \big\}$ has a joint normal distribution and $\tilde{f}(\mathbf{x})$ becomes a zero mean Gaussian process with a covariance function specified in Equation (\ref{neural_net_kernel}).
The neural network kernel is a particular case of the covariance structure expressed by Equation (\ref{neural_net_kernel}) such that $h(\mathbf{x}; \mathbf{u}) = \text{erf} \left(u_0 + \sum_{i = 1}^d u_i x_i \right)$ \cite{williams1997}. Here, $\text{erf}(\cdot)$ is the \emph{error function}: $\text{erf}(x) = \frac{2}{\sqrt\pi} \int_{0}^{x} \exp(-t^2) dt$ and $\mathbf{u} \sim \mathcal{N}(\mathbf{0}, \boldsymbol{\Sigma})$ in which $\boldsymbol{\Sigma}$ is a diagonal matrix with elements $\sigma^2_0, \sigma^2_1, \dots, \sigma^2_d$ as the variances of $u_0, u_1, \dots, u_d$. This choice of the activation function leads to the neural network kernel given by
\begin{equation}
k_{NN}(\mathbf{x}, \mathbf{x}^\prime) = \frac{2\sigma^2}{\pi}\arcsin\left( \frac{2\tilde{\mathbf{x}}^\top\boldsymbol{\Sigma}\tilde{\mathbf{x}}^\prime}{\sqrt{(1 + 2\tilde{\mathbf{x}}^\top\boldsymbol{\Sigma}\tilde{\mathbf{x}})(1 + 2\tilde{\mathbf{x}}^{\prime\top}\boldsymbol{\Sigma}\tilde{\mathbf{x}}^\prime)}} \right) ,
\label{NN_kernel}
\end{equation}
where $\tilde{\mathbf{x}} = (1, x_1, \dots, x_d)^\top$ is the augmented input vector. The length-scale of the $i$-th coordinate is of order $1/ \sigma_i$: the larger $\sigma_i$, the more quickly the sample functions vary along the $i$-th coordinate \cite{mackay1998, GPML}. This is illustrated in Figure \ref{NN_kernel_sample_path}, where the shapes of the NN kernel for two different values of $\sigma_1$ and the corresponding sample paths are plotted.
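A direct implementation of the NN kernel can be sketched as follows (our function name; the diagonal matrix $\boldsymbol{\Sigma}$ is passed as the vector of its entries $\sigma_0^2, \dots, \sigma_d^2$):

```python
import numpy as np

def k_nn(x, xp, sigma2=1.0, Sigma_diag=None):
    """Neural network kernel for inputs in R^d.

    Sigma_diag : variances (sigma_0^2, ..., sigma_d^2) of the random weights.
    """
    x, xp = np.atleast_1d(x).astype(float), np.atleast_1d(xp).astype(float)
    xt = np.concatenate(([1.0], x))    # augmented input (1, x_1, ..., x_d)
    xpt = np.concatenate(([1.0], xp))
    S = np.ones(xt.size) if Sigma_diag is None else np.asarray(Sigma_diag, float)
    num = 2.0 * np.sum(xt * S * xpt)   # 2 x~^T Sigma x~'
    den = np.sqrt((1.0 + 2.0 * np.sum(xt * S * xt))
                  * (1.0 + 2.0 * np.sum(xpt * S * xpt)))
    return 2.0 * sigma2 / np.pi * np.arcsin(num / den)
```

Note that, as discussed below, evaluating this kernel at $\mathbf{x} = \mathbf{x}^\prime$ yields a value strictly below $\sigma^2$, so the resulting GP does not interpolate the data.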
\begin{figure}
\caption{Top: The shapes of the NN kernel for two different values of $\sigma_1$: $1$ (left) and $50$ (right). Bottom: Two sample paths corresponding to the kernels on top. The kernel with $\sigma_1 = 50$ is a more suitable choice for modelling discontinuities. Here, $\sigma = \sigma_0 = 1$.}
\label{NN_kernel_sample_path}
\end{figure}
The NN kernel is nonstationary, see Figure \ref{NN_kernel_sample_path} and also Equation (\ref{NN_kernel}), which does not depend only on $\mathbf{x} - \mathbf{x}^\prime$. It can take negative values, contrary to classic covariance functions such as the SE kernel depicted in Figure \ref{Fig:SE_kernel}. Owing to the saturating form of the function $\text{erf}(u_0 + u_1 x)$, sample paths of this kernel tend to constant values for large positive or negative $x$ \cite{GPML}. Also, the correlation at zero distance is not one:
\begin{equation}
\mathbb{C}\text{orr}(Z(\mathbf{x}), Z(\mathbf{x})) = k_{NN}(\mathbf{x}, \mathbf{x} ; \sigma^2 = 1)= \frac{2}{\pi}\arcsin\left( \frac{2\tilde{\mathbf{x}}^\top\boldsymbol{\Sigma}\tilde{\mathbf{x}}}{1 + 2\tilde{\mathbf{x}}^\top\boldsymbol{\Sigma}\tilde{\mathbf{x}}} \right) < 1 ,
\end{equation}
since $\arcsin\left( \frac{2\tilde{\mathbf{x}}^\top\boldsymbol{\Sigma}\tilde{\mathbf{x}}}{1 + 2\tilde{\mathbf{x}}^\top\boldsymbol{\Sigma}\tilde{\mathbf{x}}} \right) < \pi/2$. Thus, the mean predictor obtained by the NN kernel does not interpolate the points in the training data and the prediction variances are greater than zero there.
Figure \ref{step_fun_emul} compares the Mat\'ern 3/2 and NN kernels in modelling a step-function defined as
\begin{equation}
f(\mathbf{x}) =
\begin{cases}
-1 & x_1 \leq 0 \\
1 & x_1 >0~.
\end{cases}
\label{step_fun}
\end{equation}
As can be seen, the NN kernel has superior performance to Mat\'ern 3/2 in both $1D$ and $2D$ cases. The predictive mean of the GP with Mat\'ern 3/2 neither captures the discontinuity nor performs well in the flat regions. In the NN kernel, the ML estimation of the parameter that controls the horizontal scale of fluctuation, i.e. $\sigma_1$, takes its maximum possible value which is $10^{3}$.
\begin{figure}
\caption{Emulation of the step-function $f$ defined in (\ref{step_fun}) with the Mat\'ern 3/2 and NN kernels in $1D$ and $2D$.}
\label{step_fun_emul}
\end{figure}
Figure \ref{perturb_step_fun_emul} illustrates a function whose step-discontinuity is located at $x = 0.5$. As can be seen from the picture on the left of Figure \ref{perturb_step_fun_emul}, the NN kernel is not able to model $f$ well. This problem can be solved if the NN kernel is modified as follows
\begin{equation}
k(x, x^\prime) = \frac{2\sigma^2}{\pi}\arcsin\left( \frac{2\tilde{\mathbf{x}}_{\tau}^\top\boldsymbol{\Sigma}\tilde{\mathbf{x}}_{\tau}^\prime}{\sqrt{(1 + 2\tilde{\mathbf{x}}_{\tau}^\top\boldsymbol{\Sigma}\tilde{\mathbf{x}}_{\tau})(1 + 2\tilde{\mathbf{x}}_{\tau}^{\prime\top}\boldsymbol{\Sigma}\tilde{\mathbf{x}}_{\tau}^\prime)}} \right),
\label{NN_kernel_perturb}
\end{equation}
where $\tilde{\mathbf{x}}_{\tau} = (1, x - \tau)^\top$ and $\tau$ is estimated together with the other parameters using ML. In this case, $\hat{\tau} = 0.457$, which is an estimate of the location of the discontinuity.
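The shifted variant can be sketched in one dimension as follows: the augmented input becomes $(1, x - \tau)^\top$, which moves the flexible region of the kernel to $\tau$. The weight variances \texttt{s0}, \texttt{s1} below are illustrative defaults of ours, not values from the paper:

```python
import numpy as np

def k_nn_shifted(x, xp, tau, sigma2=1.0, s0=1.0, s1=50.0):
    """1-D NN kernel with inputs shifted by tau.

    tau places the flexible region of the kernel at the discontinuity;
    s0, s1 are the weight variances of the two augmented coordinates.
    """
    def aug(z):
        return np.array([1.0, z - tau])    # augmented, shifted input
    S = np.array([s0, s1])
    a, b = aug(float(x)), aug(float(xp))
    num = 2.0 * np.sum(a * S * b)
    den = np.sqrt((1.0 + 2.0 * np.sum(a * S * a))
                  * (1.0 + 2.0 * np.sum(b * S * b)))
    return 2.0 * sigma2 / np.pi * np.arcsin(num / den)
```

With a large `s1`, two points on the same side of $\tau$ are more strongly correlated than two equally spaced points straddling $\tau$, which is what lets the kernel accommodate the jump.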
\begin{figure}
\caption{Left: The NN kernel given by Equation (\ref{NN_kernel}) fails to capture the step-discontinuity at $x = 0.5$. Right: the modified kernel of Equation (\ref{NN_kernel_perturb}) with the estimated shift $\hat{\tau}$.}
\label{perturb_step_fun_emul}
\end{figure}
\section{Gibbs kernel}
\label{sec_Gibskernel}
Mark Gibbs \cite{gibbs1997} in his PhD thesis derived the following covariance function:
\begin{equation}
k_{Gib}(\mathbf{x}, \mathbf{x}^\prime) = \sigma^2 \prod_{i= 1}^{d} \left( \frac{2l_i(\mathbf{x})l_i(\mathbf{x}^\prime) }{l_i(\mathbf{x})^2 + l_i(\mathbf{x}^\prime)^2} \right)^{1/2} \exp \left(- \sum_{i = 1}^{d} \frac{(x_i - x_i^\prime)^2}{l_i(\mathbf{x})^2 + l_i(\mathbf{x}^\prime)^2} \right),
\end{equation}
where $l_i(\cdot)$ is a length-scale function in the $i$-th input dimension. These length-scales can be arbitrary positive functions of $\mathbf{x}$. This allows the kernel to model sudden variations in the observations: a process with the Gibbs kernel is smooth in regions of the input space where the length-scales are relatively large, and changes rapidly where the length-scales shrink. Note that the correlation is one when $\mathbf{x} = \mathbf{x}^\prime$, i.e. $k_{Gib}(\mathbf{x}, \mathbf{x} ; \sigma^2 = 1) = 1$.
In this work, we use the same length-scale functions for all dimensions:
\begin{equation}
k_{Gib}(\mathbf{x}, \mathbf{x}^\prime) = \sigma^2 \left( \frac{2l(\mathbf{x})l(\mathbf{x}^\prime)}{l^2(\mathbf{x}) + l^2(\mathbf{x}^\prime)} \right)^{d/2} \exp \left( -\frac{\sum_{i=1}^{d} (x_i - x_i^\prime)^2}{l^2(\mathbf{x}) + l^2(\mathbf{x}^\prime)} \right).
\end{equation}
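With a common length-scale function across dimensions, the Gibbs kernel can be sketched as below (our function name; a constant $l(\cdot)$ recovers the SE kernel, which gives a quick correctness check):

```python
import numpy as np

def k_gibbs(x, xp, lfun, sigma2=1.0):
    """Gibbs kernel with a common length-scale function lfun(x) > 0."""
    x, xp = np.atleast_1d(x).astype(float), np.atleast_1d(xp).astype(float)
    d = x.size
    lx, lxp = lfun(x), lfun(xp)
    # normalising prefactor: (2 l(x) l(x') / (l(x)^2 + l(x')^2))^(d/2)
    pref = (2.0 * lx * lxp / (lx ** 2 + lxp ** 2)) ** (d / 2.0)
    return sigma2 * pref * np.exp(-np.sum((x - xp) ** 2) / (lx ** 2 + lxp ** 2))
```

The prefactor guarantees that $k_{Gib}(\mathbf{x}, \mathbf{x}) = \sigma^2$ whatever the length-scale function.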
Figure \ref{gibbs_kernel} shows the shapes of the Gibbs kernel for three different length-scale functions and corresponding sample paths. As can be seen, it is possible to model both nonstationary and discontinuous functional forms with the Gibbs kernel if a suitable length-scale function is chosen. For example, the nonstationary function depicted in Figure \ref{nonstation_gibbs} varies more quickly in the region $x \in [0 , 0.3 ]$ than in the region $[0.3 , 1]$. Thus, a suitable length-scale function should have ``small" values when $x \in [0 , 0.3 ]$ and larger values when $x \in [0.3 , 1]$. The length-scale used for the Gibbs kernel (right picture) is of the form $l(x) = c_1x^2 + c_2$ whose unknown parameters $c_1$ and $c_2$ are estimated by ML. This choice of the length-scale allows the GP to predict $f$ with a higher accuracy in comparison to the GP with the Mat\'ern 3/2 kernel (left picture). The estimated parameters of the length-scale are $\hat{c}_1 \approx 45.63$ and $\hat{c}_2 \approx 0.11$ which are in line with the nonstationarity of $f$.
\begin{figure}
\caption{Left panel: three different length-scale functions. Middle panel: shapes of the Gibbs kernel based on the corresponding length-scale functions. Right panel: two GP sample paths with the Gibbs kernel on the left. With the Gibbs kernel, one can model both nonstationary (first row) and discontinuous (second and third rows) functions.}
\label{gibbs_kernel}
\end{figure}
\begin{figure}
\caption{GP prediction (solid blue) of a nonstationary function (dashed) with the Mat\'ern 3/2 (left) and Gibbs (right) kernels. The function is defined as $f(x) = \sin\left(30(x - 0.9)^4\right) \cos\left(2(x - 0.9)\right) + \frac{(x - 0.9)}
\label{nonstation_gibbs}
\end{figure}
In order to model discontinuities, one can employ the Gibbs kernel with a sigmoid shaped length-scale. The length-scale functions we use in our experiments (see Section \ref{sec_result}) are all sigmoid functions, specifically:
\begin{itemize}
\item[(i)] Error function: $\text{erf}(c_1 \mathbf{e}_j^\top \mathbf{x} ) + c_2 ; ~ c_2 > 1$
\item[(ii)] Logistic function: $\frac{1}{1 + \exp(c_1 \mathbf{e}_j^\top \mathbf{x})} + c_2 ;~ c_2 > 0$
\item[(iii)] Hyperbolic tangent: $\tanh(c_1 \mathbf{e}_j^\top \mathbf{x} ) + c_2 ; ~ c_2 > 1$
\item[(iv)] Arctangent: $\arctan(c_1 \mathbf{e}_j^\top \mathbf{x} ) + c_2 ; ~ c_2 > \pi/2$
\end{itemize}
which have all been modified slightly by adding a constant $c_2 > 0$ to make $l(\mathbf{x})$ strictly positive. The parameter $c_1$ controls the slope of the transition in the sigmoid function. Both $c_1$ and $c_2$ are estimated by ML. All components of the vector $\mathbf{e}_j$ are zero except the $j$-th one which is 1. This vector determines the $j$-th axis in which the function is discontinuous.
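The four candidates can be collected as Python closures. The concrete $c_2$ offsets below are illustrative choices of ours (the smallest values allowed by the stated constraints plus a margin of $0.01$), not estimates from the paper:

```python
import numpy as np
from math import erf, pi

def make_lengthscales(c1, j=0):
    """Sigmoid length-scale candidates (i)-(iv); e_j^T x picks the j-th
    coordinate of the input vector x."""
    g = lambda x: np.atleast_1d(x)[j]
    return {
        "erf":      lambda x: erf(c1 * g(x)) + 1.01,
        "logistic": lambda x: 1.0 / (1.0 + np.exp(c1 * g(x))) + 0.01,
        "tanh":     lambda x: np.tanh(c1 * g(x)) + 1.01,
        "arctan":   lambda x: np.arctan(c1 * g(x)) + pi / 2 + 0.01,
    }
```

Each closure is strictly positive over the whole input space, as a length-scale function must be, and a large `c1` makes the transition sharp.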
\section{Transformation of the input space (warping)}
\label{sec_warping}
In this section, \emph{warping} or \emph{embedding} is studied as an alternative approach to emulate functions with discontinuities. The method first uses a non-linear parametric function to map the input space into a feature space. Then, a GP with a standard kernel is applied to approximate the map from the feature space to the output space \cite{mackay1998, calandra2016}. A similar idea is used in \cite{snelson2004} where the transformation is performed on the output space to model non-Gaussian processes.
In warping, we assume that $f$ is a composition of two functions
\begin{equation}
f = G \circ M : ~ M : \mathcal{D} \mapsto \mathcal{D}^\prime ~, ~ G : \mathcal{D}^\prime \mapsto \mathcal{F} ,
\end{equation}
where $M$ is the transformation function and $\mathcal{D}^\prime$ represents the feature space. The function $G$ is approximated by a GP relying on the training set $\{\tilde{\mathbf{X}}, \mathbf{y} \}$ in which $\tilde{\mathbf{X}} = \left(M(\mathbf{x}^1), \ldots, M(\mathbf{x}^n)\right)^\top$. Notice that $\mathcal{D}$ and $\mathcal{D}^\prime$ need not have the same dimensionality \cite{mackay1998}. For example, if the squared exponential kernel $k_{SE} : \mathbb{R}^2 \times \mathbb{R}^2 \mapsto \mathbb{R}$ is composed with $M(x) = \left[\cos(\frac{2\pi x}{T}), \, \sin(\frac{2\pi x}{T})\right]^\top \in \mathbb{R}^2$, the result is a periodic kernel with period $T$ \cite{seeger2003, hajighasemi2014}. In practice, a parametric family of $M$ is selected and its parameters are estimated together with the kernel parameters via ML. Such modelling is equivalent to emulating $f$ with a GP whose covariance function $\tilde{k}$ is
\begin{equation}
\tilde{k}(\mathbf{x}, \mathbf{x}^\prime) = k\left(M(\mathbf{x}), M(\mathbf{x}^\prime) \right) .
\label{warp_kernel}
\end{equation}
Note that $\tilde{k}$ is generally nonstationary even if $k$ is a stationary kernel, see Figure \ref{fig_warp_kernel}. The prediction (conditional mean) and the associated uncertainty (conditional variance) of a warped GP are calculated in the same way as Equations (\ref{post_mean}) and (\ref{post_var}).
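Warping composes cleanly in code. The sketch below (function names ours) builds $\tilde{k}(x, x^\prime) = k(M(x), M(x^\prime))$ and reproduces the periodic-kernel example mentioned above:

```python
import numpy as np

def warp_kernel(k, M):
    """Build the warped kernel k~(x, x') = k(M(x), M(x'))."""
    return lambda x, xp: k(M(x), M(xp))

# SE kernel on the 2-D feature space, unit length-scale and variance
k_se2 = lambda a, b: np.exp(-0.5 * np.sum((np.asarray(a) - np.asarray(b)) ** 2))

# map x onto a circle of circumference T -> periodic kernel with period T
T = 1.0
M = lambda x: np.array([np.cos(2 * np.pi * x / T), np.sin(2 * np.pi * x / T)])
k_per = warp_kernel(k_se2, M)
```

A sigmoid $M$, as used in the experiments below, instead compresses the two flat regions of a step-function into small neighbourhoods of the feature space, letting a stationary kernel handle the jump.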
\begin{figure}
\caption{Left panel: two different transformation functions, $M(x)$. Middle panel: shapes of the warped kernel $\tilde{k}$ obtained from the corresponding transformations. Right panel: GP sample paths with the warped kernels.}
\label{fig_warp_kernel}
\end{figure}
According to Figure \ref{fig_warp_kernel} (second row), a sigmoid transformation is a suitable choice for modelling step-discontinuities. The sigmoid functions given in Section \ref{sec_Gibskernel} are used as the transformation mappings in our experiments in the next section. The unknown parameter of the map, i.e. $c_1$, is estimated by the ML method together with other kernel parameters such as the length-scales and process variance.
\section{Numerical examples}
\label{sec_result}
In this section, the performance of the proposed methods in modelling step-discontinuities is compared with the standard kernels, i.e. Mat\'ern 3/2 and squared exponential. The step-function given by Equation (\ref{step_fun}) is used as the test function in dimensions 2 and 5. The input space is $\mathcal{D} = [-2, 2]^d$. Four sigmoid functions are employed as the length-scales of the Gibbs kernel, $l(\mathbf{x})$, and the transformation maps, $M(\mathbf{x})$, in the warping method. The sigmoid functions are: error, logistic, hyperbolic tangent and arctangent whose analytical expressions are given in Section \ref{sec_Gibskernel}. The covariance kernel, $k$, in the warping approach is squared exponential.
The accuracy of the prediction is measured by the \emph{root mean square error (RMSE)} criterion defined as
\begin{equation}
RMSE = \sqrt{\frac{1}{n_t} \sum_{t = 1}^{n_t} \left(f(\mathbf{x}_t) - \hat{f}(\mathbf{x}_t) \right)^2} ,
\end{equation}
where $\mathbf{x}_t$ and $n_t$ represent a test point and the size of test set, respectively. In our experiments $n_t = 1000$.
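For completeness, the RMSE criterion in code (a straightforward sketch):

```python
import numpy as np

def rmse(f_true, f_pred):
    """Root mean square error over the n_t test points."""
    f_true = np.asarray(f_true, float)
    f_pred = np.asarray(f_pred, float)
    return np.sqrt(np.mean((f_true - f_pred) ** 2))
```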
There are 20 different training sets and for each set, each method produces one prediction. All training sets are of size $10d$ and ``space-filling'', meaning that the sample points are uniformly spread over the input space. They are obtained by the \texttt{maximinESE\_LHS} function implemented in the R package \emph{DiceDesign} \cite{DiceDesign}. The results are shown in Figure \ref{fig_box_plot}.
As can be seen, the squared exponential (SquarExp) and Mat\'ern 3/2 (Mat32) kernels have the worst prediction performances. The neural network kernel (NeurNet) can model the step-function well and has one of the best RMSEs in our experiments. Generally, the GP model with the Gibbs kernel outperforms the warping technique. The arctangent function is a suitable choice as the length-scale of the Gibbs kernel and the transformation map in the warping approach. In both cases, the RMSE associated with the logistic function is (on average) the largest in comparison to other sigmoid functions.
\begin{figure}
\caption{Box-plot of RMSEs associated with the prediction of the step-function (Equation (\ref{step_fun})) in dimensions 2 and 5.}
\label{fig_box_plot}
\end{figure}
\section{Conclusions}
Gaussian processes are mainly used to predict smooth, continuous functions. However, there are many situations in which the assumptions of continuity and smoothness do not hold. In computer experiments, it is common that the output of a complex computer code has discontinuity, e.g. when bifurcations or tipping points occur. This paper deals with the problem of emulating step-discontinuous functions using GPs. Several methods, including two covariance kernels and the idea of transforming the input space (warping), are proposed to tackle this problem. The two covariance functions are the neural network and Gibbs kernels whose properties are demonstrated using several examples. In warping, a suitable transformation function is applied to map the input space into a new space where a standard kernel, e.g. Mat\'ern family of kernels, is able to predict the discontinuous function well. Our experiments show that these techniques have superior performance to GPs with standard kernels in modelling step-discontinuities.
\begin{appendices}
\section{Covariance functions/kernels}
\label{sec_kernel}
Covariance kernels are positive definite (PD) functions. The symmetric function $k : \mathcal{D} \times \mathcal{D} \mapsto \mathbb{R}$ is PD if
\begin{equation*}
\sum_{i = 1} ^N \sum_{j = 1} ^N \alpha_i \alpha_j k(\mathbf{x}^i, \mathbf{x}^j) \geq 0
\end{equation*}
for any $N \in \mathbb{N}$ points $\mathbf{x}^1, \dots, \mathbf{x}^N \in \mathcal{D}$ and $\boldsymbol{\alpha} = [\alpha_1, \dots, \alpha_N ]^\top \in \mathbb{R}^N$. If $k$ is a PD function, then the $N\times N$ matrix $\mathbf{K}$ whose elements are $\mathbf{K}_{i j} = k(\mathbf{x}^i, \mathbf{x}^j)$ is a positive semidefinite matrix because $\sum_{i = 1} ^N \sum_{j = 1} ^N \alpha_i \alpha_j \mathbf{K}_{i j} \geq 0$.
Checking the positive definiteness of a function is not easy. One can combine the existing kernels to make a new one. For example, if $k_1$ and $k_2$ are two kernels, the function $k$ obtained by the following operations is a valid covariance kernel:
\begin{align*}
&k(\mathbf{x}, \mathbf{x}^\prime) = k_1(\mathbf{x}, \mathbf{x}^\prime) + k_2(\mathbf{x}, \mathbf{x}^\prime) \\
&k(\mathbf{x}, \mathbf{x}^\prime) = k_1 (\mathbf{x}, \mathbf{x}^\prime) \times k_2(\mathbf{x}, \mathbf{x}^\prime) \\
&k(\mathbf{x}, \mathbf{x}^\prime) = c k_1(\mathbf{x}, \mathbf{x}^\prime) ,~ c \in \mathbb{R}^+ \\
&k(\mathbf{x}, \mathbf{x}^\prime) = k_1(\mathbf{x}, \mathbf{x}^\prime) + c ,~ c \in \mathbb{R}^+ \\
&k(\mathbf{x}, \mathbf{x}^\prime) = g(\mathbf{x}) k_1(\mathbf{x}, \mathbf{x}^\prime) g(\mathbf{x}^\prime)~ \text{for any function} ~ g(\cdot) .
\end{align*}
We refer the reader to \cite{GPML, duvenaud2014} for a detailed discussion about the composition of covariance functions. It is also possible to compose kernels with a function as explained in Section \ref{sec_warping}.
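A quick numerical sanity check of these composition rules is to inspect the smallest eigenvalue of a Gram matrix built from a combined kernel; up to round-off it should be nonnegative. This is a sketch of ours:

```python
import numpy as np

def min_gram_eigenvalue(k, X):
    """Smallest eigenvalue of the Gram matrix K_ij = k(x^i, x^j).

    For a valid (positive definite) kernel this is >= 0 up to round-off.
    """
    K = np.array([[k(a, b) for b in X] for a in X])
    return np.linalg.eigvalsh(K).min()
```

Such a check cannot prove positive definiteness, which must hold for every point set, but it catches invalid constructions quickly.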
Usually, a covariance function depends on some parameters $\mathbf{p}$ which are unknown and need to be estimated from data. In practice, a parametric family of $k$ is chosen first. Then the parameters are estimated via maximum likelihood (ML), cross-validation or (full) Bayesian approaches \cite{GPML}. In the sequel, we describe the ML approach as used in this paper.
The likelihood function measures the adequacy between a probability distribution and the data; a higher likelihood function means that observations are more consistent with the assumed distribution. In the GP framework, as observations are presumed to have the normal distribution, the likelihood function is
\begin{equation}
p \left( \mathbf{y} \vert \mathbf{X}, \mathbf{p}, \mu \right) = \frac{1}{(2\pi)^{n/2} \vert \mathbf{K}\vert ^{1/2}}\exp \left(- \frac{\left(\mathbf{y} - \mu \mathbf{1} \right)^\top \mathbf{K}^{-1} \left(\mathbf{y} - \mu \mathbf{1} \right)}{2} \right) ,
\end{equation}
where $\vert \mathbf{K}\vert$ is the determinant of the covariance matrix. In the above equation, if $\mu$ is unknown, it is replaced with its estimate given by Equation (\ref{mu_estim}).
Usually for optimisation, it is more convenient to work with the natural logarithm of the likelihood (log-likelihood) function which is
\begin{equation}
\ln p \left( \mathbf{y} \vert \mathbf{X}, \mathbf{p}, \mu \right) = -\frac{n}{2} \ln(2\pi) - \frac{1}{2} \ln\vert \mathbf{K}\vert - \frac{\left( \mathbf{y} - \mu \mathbf{1} \right)^\top \mathbf{K}^{-1} \left(\mathbf{y} - \mu \mathbf{1} \right)}{2} .
\label{log_lik}
\end{equation}
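The log-likelihood with the plug-in mean estimate can be sketched as a function to be minimised. The \texttt{kernel\_family} interface and the jitter term below are our choices for the sketch, not part of the paper:

```python
import numpy as np

def neg_log_likelihood(params, X, y, kernel_family):
    """Negative log-likelihood with the ML plug-in constant mean.

    kernel_family(params) must return a kernel function k(x, x').
    """
    k = kernel_family(params)
    n = len(y)
    K = np.array([[k(X[i], X[j]) for j in range(n)] for i in range(n)])
    K += 1e-8 * np.eye(n)                      # jitter for numerical stability
    Ki = np.linalg.inv(K)
    one = np.ones(n)
    mu = one @ Ki @ y / (one @ Ki @ one)       # plug-in estimate of the mean
    r = y - mu * one
    logdet = np.linalg.slogdet(K)[1]           # stable log-determinant
    return 0.5 * (n * np.log(2 * np.pi) + logdet + r @ Ki @ r)
```

In practice this would be passed to a numerical optimiser, typically with multiple restarts, since the likelihood surface is often multimodal.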
Maximising (\ref{log_lik}) is a challenging task as the log-likelihood function is often nonconvex with multiple maxima. To do so, numerical optimisation algorithms are often applied. We refer the reader to \cite{lophaven2002, forrester2008} for further information.
\end{appendices}
\end{document}
\begin{document}
\begin{abstract} We introduce signed exceptional sequences as factorizations of morphisms in the cluster morphism category. The objects of this category are wide subcategories of the module category of a hereditary algebra. A morphism $[T]:\ensuremath{{\mathcal{A}}}\to\ensuremath{{\mathcal{B}}}$ is the equivalence class of a rigid object $T$ in the cluster category of $\ensuremath{{\mathcal{A}}}$ so that $\ensuremath{{\mathcal{B}}}$ is the right hom-ext perpendicular category of the underlying object $|T|\in\ensuremath{{\mathcal{A}}}$. Factorizations of a morphism $[T]$ are given by total orderings of the components of $T$. This is equivalent to a ``signed exceptional sequence.'' For an algebra of finite representation type, the geometric realization of the cluster morphism category is an Eilenberg-MacLane space with fundamental group equal to the ``picture group'' introduced by the authors in \cite{IOTW4}.
\end{abstract}
\title{Signed exceptional sequences and the cluster morphism category}
\tableofcontents
\section*{Introduction}
The purpose of this paper is to give an algebraic version of some of the topological definitions, statements and proofs in our joint paper with Kent Orr and Jerzy Weyman about the picture groups for Dynkin quivers \cite{IOTW4}. To avoid repetition, the concurrently written paper \cite{IOTW4} will logically depend on this paper. In the last section of this paper we briefly review, extend and simplify the ideas from earlier versions of \cite{IOTW4} to lay the background for a more streamlined revision of that paper. The conversion to algebra follows the ideas of Quillen \cite{Quillen}. Topological spaces are replaced with small categories, continuous maps with functors and homotopies with natural transformations. In particular, a finite CW-complex can, up to homotopy, be represented algebraically as a finite category, namely, one having finitely many objects and finitely many morphisms between any two objects. When this process is applied to the CW-complex associated in \cite{IOTW4} to a Dynkin quiver, we obtain a category whose morphisms are given by signed exceptional sequences.
Let $\Lambda$ be a finite dimensional hereditary algebra over any field. Then the \emph{cluster morphism category} $\ensuremath{{\mathcal{G}}}(\Lambda)$ of $\Lambda$ is defined to be the category whose objects are the finitely generated wide subcategories of $mod\text-\Lambda$ \cite{Ingalls-Thomas} (Section \ref{ss 1.1: wide subcategories} below). Such a subcategory $\ensuremath{{\mathcal{A}}}\subseteq mod\text-\Lambda$ is hereditary and abelian and has a cluster category which we denote by $\ensuremath{{\mathcal{C}}}_\ensuremath{{\mathcal{A}}}$ \cite{BMRRT}. For any indecomposable object $T$ in the cluster category, let $|T|\in\ensuremath{{\mathcal{A}}}$ be the \emph{underlying module} of $T$ given by $|T|=T$ if $T$ is a module and $|X[1]|=X$ for shifted objects $X[1]$ where $X$ is an object in $\ensuremath{{\mathcal{A}}}$ which is projective in $\ensuremath{{\mathcal{A}}}$ but not necessarily projective in $mod\text-\Lambda$. We extend this additively to all objects of $\ensuremath{{\mathcal{C}}}_\ensuremath{{\mathcal{A}}}$ and to all objects of $\ensuremath{{\mathcal{A}}}\cup \ensuremath{{\mathcal{A}}}[1]\subset \ensuremath{{\mathcal{D}}}^b(\ensuremath{{\mathcal{A}}})$. Then $|T|\in\ensuremath{{\mathcal{A}}}$ is well defined up to isomorphism for any $T\in\ensuremath{{\mathcal{C}}}_\ensuremath{{\mathcal{A}}}$. The \emph{rank} of $\ensuremath{{\mathcal{A}}}$, denoted $rk\,\ensuremath{{\mathcal{A}}}$, is defined to be the number of nonisomorphic simple objects of $\ensuremath{{\mathcal{A}}}$.
Recall that $T\in\ensuremath{{\mathcal{C}}}_\ensuremath{{\mathcal{A}}}$ is \emph{rigid} if $\Ext_{\ensuremath{{\mathcal{C}}}_\ensuremath{{\mathcal{A}}}}^1(T,T)=0$. We say that two rigid objects $T,T'$ are \emph{equivalent} if $add\,T=add\,T'$, i.e., $T$ and $T'$ have the same indecomposable summands up to isomorphism. Given $\ensuremath{{\mathcal{A}}},\ensuremath{{\mathcal{B}}}\in \ensuremath{{\mathcal{G}}}(\Lambda)$ a morphism $[T]:\ensuremath{{\mathcal{A}}}\to \ensuremath{{\mathcal{B}}}$ is defined to be the equivalence class of a rigid object $T\in \ensuremath{{\mathcal{C}}}_\ensuremath{{\mathcal{A}}}$ with the property that $|T|^\perp\cap \ensuremath{{\mathcal{A}}}=\ensuremath{{\mathcal{B}}}$ where $M^\perp$ is the right hom-ext-perpendicular category of $M$ in $mod\text-\Lambda$. We note that, if $\Lambda$ has finite representation type, then the cluster morphism category of $\Lambda$ has finitely many objects and finitely many morphisms.
The last part of the definition of the cluster morphism category is the definition of composition of morphisms. This is a difficult technical point which requires a change in terminology from equivalence classes of rigid objects of cluster categories to partial cluster tilting sets (Definition \ref{def: cluster tilting set}). The composition of $[T]:\ensuremath{{\mathcal{A}}}\to\ensuremath{{\mathcal{B}}}$ and $[S]:\ensuremath{{\mathcal{B}}}\to \ensuremath{{\mathcal{B}}}'$ is given by $[\sigma_TS\coprod T]:\ensuremath{{\mathcal{A}}}\to\ensuremath{{\mathcal{B}}}'$ where $\sigma_TS\in\ensuremath{{\mathcal{C}}}_\ensuremath{{\mathcal{A}}}$ is the unique (up to isomorphism) rigid object in $\ensuremath{{\mathcal{C}}}_\ensuremath{{\mathcal{A}}}$ having the following two properties.
\begin{enumerate}
\item $\sigma_TS\coprod T$ is a rigid object in $\ensuremath{{\mathcal{C}}}_\ensuremath{{\mathcal{A}}}$.
\item $\undim \sigma_TS-\undim S$ is a linear combination of $\undim T_i$ where $T=\coprod_iT_i$.
\end{enumerate}
We were not able to construct a functor $\sigma_T:\ensuremath{{\mathcal{C}}}_\ensuremath{{\mathcal{B}}}\to\ensuremath{{\mathcal{C}}}_\ensuremath{{\mathcal{A}}}$ realizing this mapping defined on rigid objects of $\ensuremath{{\mathcal{C}}}_\ensuremath{{\mathcal{B}}}$. What we construct in this paper is a mapping
\[
\sigma_T:\ensuremath{{\mathcal{C}}}(\ensuremath{{\mathcal{B}}})\to\ensuremath{{\mathcal{C}}}(\ensuremath{{\mathcal{A}}})
\]
from the set $\ensuremath{{\mathcal{C}}}(\ensuremath{{\mathcal{B}}})$ of isomorphism classes of rigid indecomposable objects of $\ensuremath{{\mathcal{C}}}_\ensuremath{{\mathcal{B}}}$ to $\ensuremath{{\mathcal{C}}}(\ensuremath{{\mathcal{A}}})$. With this in mind, we shift our notation and use \emph{partial cluster tilting sets} $T=\{T_1,\cdots,T_k\}\subset \ensuremath{{\mathcal{C}}}(\ensuremath{{\mathcal{A}}})$ (Definition \ref{def: cluster tilting set}) which are sets of components of rigid objects of $\ensuremath{{\mathcal{C}}}_\ensuremath{{\mathcal{A}}}$. We say that $T$ is a \emph{cluster tilting set} if $k$ is maximal ($k=rk\,\ensuremath{{\mathcal{A}}}$). With this notation, morphisms are written $[T_1,\cdots,T_k]:\ensuremath{{\mathcal{A}}}\to\ensuremath{{\mathcal{B}}}$ and composition of morphisms is written
\[
[S_1,S_2,\cdots,S_\ell]\circ[T_1,\cdots,T_k]=[\sigma_TS_1,\sigma_TS_2,\cdots,\sigma_TS_\ell,T_1,\cdots,T_k]:\ensuremath{{\mathcal{A}}}\to\ensuremath{{\mathcal{B}}}'.
\]
The \emph{rank} of a morphism $[T]:\ensuremath{{\mathcal{A}}}\to\ensuremath{{\mathcal{B}}}$ is defined to be the number of elements of $T$ as a subset of $\ensuremath{{\mathcal{C}}}(\ensuremath{{\mathcal{A}}})$ (the number of nonisomorphic components of $T$ as an object of $\ensuremath{{\mathcal{C}}}_\ensuremath{{\mathcal{A}}}$). Then $rk\,[T]=rk\,\ensuremath{{\mathcal{A}}}-rk\,\ensuremath{{\mathcal{B}}}$. So, $[T]$ has maximal rank if and only if $T$ is a cluster tilting set in $\ensuremath{{\mathcal{C}}}(\ensuremath{{\mathcal{A}}})$. A \emph{signed exceptional sequence} can be defined to be a sequence of objects $(X_1,\cdots,X_k)$ in $mod\text-\Lambda\cup mod\text-\Lambda[1]\subset \ensuremath{{\mathcal{D}}}^b(mod\text-\Lambda)$ with the property that
\[
[X_1]\circ[X_2]\circ\cdots\circ [X_k]:mod\text-\Lambda\to \ensuremath{{\mathcal{B}}}
\]
is a sequence of composable morphisms in $\ensuremath{{\mathcal{G}}}(\Lambda)$ of rank 1 from $mod\text-\Lambda$ to $\ensuremath{{\mathcal{B}}}=\bigcap |X_i|^\perp$. This is equivalent to the following.
\begin{defn}[Subsection \ref{ss 2.1: Def of signed exceptional sequence}]\label{def: signed exceptional sequence}
A \emph{signed exceptional sequence} in a wide subcategory $\ensuremath{{\mathcal{A}}}\subseteq mod\text-\Lambda$ is a sequence of objects $X_1,\cdots,X_k$ in $\ensuremath{{\mathcal{A}}}\cup \ensuremath{{\mathcal{A}}}[1]$ satisfying the following.
\begin{enumerate}
\item $(|X_1|,|X_2|,\cdots,|X_k|)$ is an exceptional sequence in $\ensuremath{{\mathcal{A}}}$
\item $X_i\in\ensuremath{{\mathcal{C}}}(\ensuremath{{\mathcal{A}}}_i)$ where $|X_{i+1}\coprod \cdots\coprod X_k|^\perp=\ensuremath{{\mathcal{A}}}_i$, i.e., either $X_i\in\ensuremath{{\mathcal{A}}}_i$ or $X_i=P[1]$ where $P$ is an indecomposable projective object of $\ensuremath{{\mathcal{A}}}_i$.
\end{enumerate}
The signed exceptional sequence is called \emph{complete} if $k$ is maximal, i.e., $k=rk\,\ensuremath{{\mathcal{A}}}$.
\end{defn}
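As a small illustrative example (in a rank 2 case), let $\Lambda$ be the path algebra of the quiver $2\to1$ of type $A_2$, with simple modules $S_1=P_1$ and $S_2$, and with $P_2$ the projective cover of $S_2$. Then $(S_2[1],S_1)$ is a signed exceptional sequence in $mod\text-\Lambda$: the pair $(S_2,S_1)$ is an exceptional sequence, $\ensuremath{{\mathcal{A}}}_1=|S_1|^\perp=\ensuremath{{\mathcal{A}}}(S_2)$, and $S_2$ is projective in $\ensuremath{{\mathcal{A}}}_1$ although not projective in $mod\text-\Lambda$, so the shifted object $S_2[1]$ is allowed in the first coordinate.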
Consider totally ordered {cluster tilting sets} $(T_i)=(T_1,\cdots,T_k)$ in $\ensuremath{{\mathcal{C}}}(\ensuremath{{\mathcal{A}}})$. We refer to these as \emph{ordered cluster tilting sets}.
\begin{thm}[Theorem \ref{thm 2.3: bijection one}]\label{thm 2.3}
There is a bijection between the set of ordered cluster tilting sets and the set of (complete) signed exceptional sequences.
\end{thm}
For example, in type $A_2$ the cardinality of this set is $2!C_3=2\cdot 5=10$. Another example is the sequence of simple modules $(S_n,\cdots,S_2,S_1)$ in reverse admissible order (so that $S_n$ is injective and $S_1$ is projective). Since each $S_k$ is projective in the right perpendicular category of $S_{k-1},\cdots,S_1$, it can have either sign. So, there are $2^n$ possible signs. It is easy to see that the corresponding ordered cluster tilting sets are distinct as unordered cluster tilting sets. (Proposition \ref{prop: 2 to n distinct clusters}.)
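To make the first count explicit, let $\Lambda$ be the path algebra of the quiver $2\to1$ of type $A_2$. The five cluster tilting sets are
\[
\{P_1,P_2\},\quad\{P_2,S_2\},\quad\{S_2,P_1[1]\},\quad\{P_1,P_2[1]\},\quad\{P_1[1],P_2[1]\},
\]
corresponding to the five support tilting modules $P_1\coprod P_2$, $P_2\coprod S_2$, $S_2$, $P_1$ and $0$. Each admits $2!$ orderings, giving the $2!\,C_3=10$ ordered cluster tilting sets which correspond to the $10$ complete signed exceptional sequences.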
Our sign conventions make the dimension vectors of the objects in certain signed exceptional sequences into the negatives of the $c$-vectors of cluster tilting objects. Speyer and Thomas \cite{ST} gave a characterization of $c$-vectors. We give another description which also determines the cluster tilting object corresponding to the $c$-vectors.
\begin{thm}[Theorem \ref{thm: Which sig exc seqs are c-vectors?}]\label{thm 2.13}
The dimension vectors of objects $X_i$ in a signed exceptional sequence form the set of negative $c$-vectors of some cluster tilting object $T$ if and only if the ordered cluster tilting set $(T_i)=(T_1,\cdots,T_n)$ corresponding to $(X_i)$ under the bijection of Theorem \ref{thm 2.3: bijection one} has the property that $\Hom_\Lambda(|T_i|,|T_j|)=0=\Ext^1_\Lambda(|T_i|,|T_j|)$ for $i<j$. Furthermore, all sets of (negative) $c$-vectors are given in this way and $T=\coprod_iT_i$.
\end{thm}
The equation $T=\coprod_iT_i$ means we have two different descriptions of the same bijection:
\[
\{\text{signed exceptional sequences $(X_i)$ s.t. $-\undim X_i$ are $c$-vectors}\}
\]
\[
\cong\{
\text{ordered cluster tilting sets $(T_i)$ s.t. $\Hom_\Lambda(|T_i|,|T_j|)=0=\Ext^1_\Lambda(|T_i|,|T_j|)$ for $i<j$}
\}
\]
One bijection is given by sending $(X_i)$ to the ordered set of $c$-vectors $(-\undim X_i)$ and then to the {ordered cluster tilting set} which corresponds to these in the usual way by, e.g., Equation \eqref{eq characterizing c-vectors} in section \ref{ss 2.4: c-vectors} below. The other bijection is given by restriction of the bijection given in Theorem \ref{thm 2.3: bijection one}.
Finally, we return to the motivation of this paper which is to show the following.
\begin{thm}[Theorem \ref{thm 3.1: 2nd main theorem}]\label{thm 3.1: Intro}
The classifying space $B\ensuremath{{\mathcal{G}}}(\Lambda)$ of the cluster morphism category of a hereditary algebra of finite representation type is a $K(\pi,1)$ where $\pi$ is the ``picture group'' introduced in \cite{IOTW4}. In fact $B\ensuremath{{\mathcal{G}}}(\Lambda)$ is homeomorphic to the topological space $X(\Lambda)$ constructed in \cite{IOTW4}.
\end{thm}
This gives a proof of the fact that the ``picture space'' $X(Q)$ is a $K(\pi,1)$ for any Dynkin quiver $Q$. A proof of the following slightly stronger theorem, using the results of this paper and ideas from \cite{IIT}, will appear in a future paper: For $\Lambda$ of finite type, $B\ensuremath{{\mathcal{G}}}(\Lambda)$ is a ``non-positively curved cube complex'' and therefore the picture group is a ``CAT(0)-group''. In contrast, for $\Lambda$ of tame infinite type, $B\ensuremath{{\mathcal{G}}}(\Lambda)$ is not a $K(\pi,1)$.
The contents of this paper are as follows. In Section \ref{ss 1.1: wide subcategories} we give the basic definitions including the key definitions (\ref{def 1.5: A(alpha)}, \ref{def 1.7: cluster morphism}) of $\ensuremath{{\mathcal{A}}}(\alpha_\ast)$ and cluster morphisms $[T]:\ensuremath{{\mathcal{A}}}\to \ensuremath{{\mathcal{B}}}$ as outlined above. In Section \ref{ss 1.2: composition of cluster morphisms} we give the definition of composition of cluster morphisms assuming Proposition \ref{prop 1.8: Properties of sigma_T} which is proved in Section \ref{ss 1.3: proof of properties of sT} using \cite{IOTW1} and \cite{IOTW3}. In Section \ref{sec 2: signed exceptional sequences} we define signed exceptional sequences and show that they have the properties outlined above.
In Section \ref{sec 3: classifying space of G(S)} we prove the second main Theorem \ref{thm 3.1: 2nd main theorem} that the classifying space of the cluster morphism category is a $K(\pi,1)$. First, we state the extension of the theorem (Theorem \ref{thm 3.5: G(S) is K(pi,1)}) to any convex set of roots (Definition \ref{def: convex set of roots}). In Section \ref{ss 3.2: outline of G(S)} we give an outline of the proof of Theorem \ref{thm 3.5: G(S) is K(pi,1)} using HNN extensions. The details occupy the rest of Section \ref{sec 3: classifying space of G(S)}.
In Section \ref{ss 4.1: the CW-complex X(S)} we recall the picture space $X(\Lambda)$ of a hereditary algebra $\Lambda$ of finite representation type and extend the definition to any finite convex set of roots $\ensuremath{{\mathcal{S}}}$. This space is a finite CW-complex with one cell $e(\ensuremath{{\mathcal{A}}})$ for every wide subcategory $\ensuremath{{\mathcal{A}}}$ in $mod\text-\Lambda$. Section \ref{ss 4.2: proof that X(S)=BG(S)} proves that $X(\ensuremath{{\mathcal{S}}})$ is homeomorphic to $B\ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}})$. Section \ref{ss 4.3: example} gives a simple example of the correspondence between parts of $X(\ensuremath{{\mathcal{S}}})$ and parts of $B\ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}})$. Finally, in \ref{ss 4.4: semi-invariants}, we construct a codimension one subcomplex $D(\ensuremath{{\mathcal{S}}})\subseteq B\ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}})$ and show in Proposition \ref{prop: DA(b)=e(A) cap D(b)} that $D(\ensuremath{{\mathcal{S}}})$ is the category theoretic version of the picture complex $L(\ensuremath{{\mathcal{A}}})\subset S^{n-1}$.
\section{Definition of cluster morphism category}
We will construct a category abstractly by defining its objects to be the finitely generated wide subcategories of $mod$-$\Lambda$. We call it the ``cluster morphism category'' since its morphisms are (equivalence classes of) partial cluster tilting objects.
\subsection{Wide subcategories}\label{ss 1.1: wide subcategories}
Suppose that $\Lambda$ is a hereditary finite dimensional algebra over a field $K$ which we assume to be infinite. Let $mod$-$\Lambda$ be the category of finite dimensional right $\Lambda$-modules. Then a \emph{wide subcategory} of $mod$-$\Lambda$ is defined to be an exactly embedded abelian subcategory $\ensuremath{{\mathcal{A}}}$ of $mod$-$\Lambda$ which is closed under extensions. In particular, taking extensions with $0$, any module which is isomorphic to an object of $\ensuremath{{\mathcal{A}}}$ is already in $\ensuremath{{\mathcal{A}}}$. A wide subcategory is called \emph{finitely generated} if there is one object $P$, which we can take to be projective, so that every other object $X$ of $\ensuremath{{\mathcal{A}}}$ is a quotient of $P^m$ for some $m$ depending on $X$. The wide subcategory $\ensuremath{{\mathcal{A}}}$ is then equivalent to the category of finitely generated right modules over the endomorphism ring of $P$. This is a hereditary finite dimensional algebra over the ground field.
\begin{thm}\cite{Ingalls-Thomas} There is a 1-1 correspondence between finitely generated wide subcategories in $mod$-$\Lambda$ and isomorphism classes of cluster tilting objects in the cluster category of $\Lambda$.
\end{thm}
In this section, we will review the well-known correspondence between cluster tilting objects of the cluster category and support tilting modules.
We recall that the \emph{quiver} of $\Lambda$ consists of one vertex for every (isomorphism class of) simple module $S_i$ for $i=1,\cdots,n$ and one arrow $i\to j$ if $\Ext^1_\Lambda(S_i,S_j)\neq0$. We number these in \emph{admissible order} which means that $\Ext^1_\Lambda(S_i,S_j)= 0$ if $i<j$. Let $P_i,I_i$ be the projective cover and injective envelope of $S_i$ respectively. Let $F_i=\End_\Lambda(S_i)=\End_\Lambda(P_i)=\End_\Lambda(I_i)$. This is a division algebra which acts on the left on all three of these modules. So, we identify $F_i$ with these endomorphism rings making them all equal. The modules $S_i,P_i,I_i$ are exceptional, where $X$ is called \emph{exceptional} if $\End_\Lambda(X)$ is a division algebra and $\Ext^1_\Lambda(X,X)=0$.
The \emph{support} of $M$ is the set of vertices $i$ for which $\Hom_\Lambda(P_i,M)\neq0$. A \emph{(basic) support tilting module} is a module $M$ so that\begin{enumerate}
\item $M$ is a direct sum of $k$ nonisomorphic exceptional modules $M_i$ where $k$ is the size of the support of $M$.
\item $\Ext^1_\Lambda(M,M)=0$.
\end{enumerate}
For each support tilting module $M=M_1\coprod\cdots\coprod M_k$ there is a unique cluster tilting set (up to isomorphism) which is the unordered set of objects $\{M_1,M_2,\cdots,M_k\}$ union the $n-k$ shifted projective modules $P_j[1]$ for all $j$ not in the support of $M$.
We will take this to be the definition of a {cluster tilting set}.
\begin{defn}\label{def: cluster tilting set}
Suppose that $\ensuremath{{\mathcal{A}}}$ is a finitely generated wide subcategory of $mod$-$\Lambda$ with $k$ nonisomorphic projective objects $Q_1,\cdots,Q_k$. Since these may not be projective in $mod$-$\Lambda$ we sometimes refer to them as \emph{relative projective objects}. By a \emph{partial cluster tilting set} for $\ensuremath{{\mathcal{A}}}$ we mean a set of objects $T_1,\cdots,T_\ell$ in the bounded derived category of $\ensuremath{{\mathcal{A}}}$ so that
\begin{enumerate}
\item Each $T_i$ is either a shifted projective object $Q_j[1]$ or an exceptional object of $\ensuremath{{\mathcal{A}}}$.
\item For all $i,j$ we have: $\Ext_{\ensuremath{{\mathcal{D}}}^b}^1(T_i,T_j)=0$. Equivalently:
\begin{enumerate}
\item $\Ext_\Lambda^1(T_i,T_j)=0$ if $T_i,T_j$ are modules.
\item $\Hom_\Lambda(Q,T_j)=0$ if $T_i=Q[1]$ and $T_j$ is a module.
\end{enumerate}
\end{enumerate}
If $\ell=k$ the partial cluster tilting set is called a \emph{cluster tilting set}. We view all shifted projective objects $Q[1]$ as objects of the bounded derived category of $mod$-$\Lambda$. We use the notation $|T|$ to denote the \emph{underlying module} of $T$ which is equal to $T$ if $T$ is a module and $|Q[1]|=Q$.
\end{defn}
We denote a finitely generated wide subcategory by its set of simple objects. Thus $\ensuremath{{\mathcal{A}}}(M_1,\cdots,M_k)$ denotes the wide subcategory of $mod$-$\Lambda$ whose simple objects are $M_1,\cdots,M_k$.
\begin{prop}
A finite set of exceptional modules $\{M_1,\cdots,M_k\}$ forms the set of simple objects in a finitely generated wide subcategory of $mod$-$\Lambda$ if and only if it satisfies the following two conditions.
\begin{enumerate}
\item $\Hom_\Lambda(M_i,M_j)=0$ for all $i\neq j$.
\item The modules $M_i$ can be ordered in such a way that $\Ext^1_\Lambda(M_i,M_j)=0$ for all $1\le i<j\le k$.
\end{enumerate}
\end{prop}
We say that the $M_i$ are \emph{hom-orthogonal} if they satisfy (1). Note that, given (1), (2) is equivalent to the statement that $(M_k,\cdots,M_1)$ is an exceptional sequence.
\begin{proof}
Necessity is clear. Conversely, suppose these conditions hold. Then the exceptional sequence $(M_k,\cdots,M_1)$ can be completed by adding $\Lambda$-modules $M_n,\cdots,M_{k+1}$ on the left. Then $M_1,\cdots,M_k$ are the simple objects of the wide subcategory $(M_n\coprod \cdots\coprod M_{k+1})^\perp$.
\end{proof}
The \emph{dimension vector} $\underline\dim M\in\ensuremath{{\field{N}}}^n$ of a module $M$ is defined to be the integer vector whose $i$th coordinate is $\dim_{F_i}\Hom_\Lambda(P_i,M)$. The dimension vector of any shifted object $M[1]$ is defined to be $\underline\dim (M[1])=-\underline\dim M$. The \emph{Euler-Ringel form} $\brk{\cdot,\cdot}$ is the bilinear form on $\ensuremath{{\field{Z}}}^n$ with the property that
\[
\brk{\underline\dim M,\underline\dim N}=\dim_K\Hom_\Lambda(M,N)-\dim_K\Ext_\Lambda^1(M,N)
\]
If $M,N$ lie in a finitely generated wide subcategory $\ensuremath{{\mathcal{A}}}$ then this form takes the same value if evaluated in $\ensuremath{{\mathcal{A}}}$ or in $mod$-$\Lambda$ because $\ensuremath{{\mathcal{A}}}\hookrightarrow mod\text-\Lambda$ is an exact full embedding (so, $\Hom_\ensuremath{{\mathcal{A}}}(M,N)=\Hom_\Lambda(M,N)$ for all $M,N\in\ensuremath{{\mathcal{A}}}$) and $\ensuremath{{\mathcal{A}}}$ is extension closed in $mod$-$\Lambda$ (so, $\Ext^1_\ensuremath{{\mathcal{A}}}(M,N)=\Ext^1_\Lambda(M,N)$ for all $M,N\in\ensuremath{{\mathcal{A}}}$).
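For example, let $\Lambda$ be the path algebra over $K$ of the quiver $2\to1$ of type $A_2$. Then $\brk{x,y}=x_1y_1+x_2y_2-x_2y_1$ and
\[
\brk{\underline\dim S_2,\underline\dim S_1}=\brk{(0,1),(1,0)}=-1,
\]
since $\Hom_\Lambda(S_2,S_1)=0$ and $\dim_K\Ext^1_\Lambda(S_2,S_1)=1$, whereas $\brk{\underline\dim S_1,\underline\dim S_2}=0$ since both $\Hom_\Lambda(S_1,S_2)$ and $\Ext^1_\Lambda(S_1,S_2)$ vanish.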
We will also use the same bilinear form in the derived category using the following formula which is easily verified.
\begin{prop}
Suppose that $M,N$ lie in $\ensuremath{{\mathcal{D}}}^b(\ensuremath{{\mathcal{A}}})$. Then
\[
\brk{\underline\dim M,\underline\dim N}=\sum_{j\in\ensuremath{{\field{Z}}}}(-1)^j\dim_K\Ext^j_{\ensuremath{{\mathcal{D}}}^b(\ensuremath{{\mathcal{A}}})}(M,N)=\sum_{j\in\ensuremath{{\field{Z}}}}(-1)^j\dim_K\Ext^j_{\ensuremath{{\mathcal{D}}}^b(\Lambda)}(M,N)
\]
\end{prop}
Recall that the dimension vectors of all exceptional objects and all shifted relative projective objects of f.g. wide subcategories are \emph{real Schur roots} and all real Schur roots occur as such \cite{Ringel}. For example, let $\beta$ be a real Schur root of $\Lambda$. Let $M_\beta$ be the unique exceptional object with dimension vector $\beta$. Then $M_\beta$ is a relative projective object in the abelian category $\ensuremath{{\mathcal{A}}}(M_\beta)$ generated by $M_\beta$. So, both $\beta$ and $-\beta$ occur as dimension vectors of exceptional objects and shifted relative projective objects in some f.g. wide subcategory of $mod$-$\Lambda$.
\begin{defn}\label{def 1.5: A(alpha)}
Let $\alpha_\ast=\{\alpha_1,\alpha_2,\cdots,\alpha_k\}$ be an unordered set of distinct positive real Schur roots so that the corresponding modules $M_1,\cdots,M_k$ are hom-orthogonal and form an exceptional sequence in some order. Then we denote by $\ensuremath{{\mathcal{A}}}(\alpha_\ast)$ the wide subcategory with simple objects $M_i$. Equivalently, $\ensuremath{{\mathcal{A}}}(\alpha_\ast)$ is the abelian category of all modules having a filtration for which all subquotients are isomorphic to some $M_i$. Let $\ensuremath{{\mathcal{C}}}(\alpha_\ast)$ be the union of the set of all exceptional objects of $\ensuremath{{\mathcal{A}}}(\alpha_\ast)$ and the set of shifted relative projective objects $Q[1]$ for all indecomposable relative projective objects $Q$ in $\ensuremath{{\mathcal{A}}}(\alpha_\ast)$. Two elements $T,T'$ of $\ensuremath{{\mathcal{C}}}(\alpha_\ast)$ are called \emph{ext-orthogonal} if $\Ext^1_{\ensuremath{{\mathcal{D}}}^b}(T,T')=\Ext^1_{\ensuremath{{\mathcal{D}}}^b}(T',T)=0$.
\end{defn}
\begin{defn}\label{def 1.6: perpendicular category}
For any finitely generated wide subcategory $\ensuremath{{\mathcal{A}}}$ in $mod$-$\Lambda$ let $\,^\perp \ensuremath{{\mathcal{A}}}$ denote the full subcategory of $mod$-$\Lambda$ of all modules $X$ with the property that $\Hom_\Lambda(X,M)=0=\Ext^1_\Lambda(X,M)$ for all $M\in\ensuremath{{\mathcal{A}}}$. Similarly, let $\ensuremath{{\mathcal{A}}}^\perp$ be the full subcategory of $mod$-$\Lambda$ of all modules $X$ with the property that $\Hom_\Lambda(M,X)=0=\Ext^1_\Lambda(M,X)$ for all $M\in\ensuremath{{\mathcal{A}}}$.
\end{defn}
It is well-known that the categories $\,^\perp \ensuremath{{\mathcal{A}}}$ and $\ensuremath{{\mathcal{A}}}^\perp$ are finitely generated wide subcategories of $mod$-$\Lambda$. As a special case (replacing $mod$-$\Lambda$ with $\ensuremath{{\mathcal{B}}}$), $\ensuremath{{\mathcal{B}}}\cap(^\perp\ensuremath{{\mathcal{A}}})$ and $\ensuremath{{\mathcal{B}}}\cap(\ensuremath{{\mathcal{A}}}^\perp)$ are finitely generated wide subcategories of $\ensuremath{{\mathcal{B}}}$ if $\ensuremath{{\mathcal{A}}}\subseteq \ensuremath{{\mathcal{B}}}$.
\begin{defn}\label{def 1.7: cluster morphism}
Suppose that $\ensuremath{{\mathcal{A}}}$ and $\ensuremath{{\mathcal{B}}}$ are finitely generated wide subcategories of $mod$-$\Lambda$ and $\ensuremath{{\mathcal{B}}}\subseteq \ensuremath{{\mathcal{A}}}$. Then a \emph{cluster morphism} $\ensuremath{{\mathcal{A}}}\to \ensuremath{{\mathcal{B}}}$ is defined to be a partial cluster tilting set $T=\{T_1,\cdots,T_k\}$ in $\ensuremath{{\mathcal{C}}}(\ensuremath{{\mathcal{A}}})$ so that $|T|^\perp\cap \ensuremath{{\mathcal{A}}}=\ensuremath{{\mathcal{B}}}$. In other words, $\ensuremath{{\mathcal{B}}}$ is the full subcategory of $\ensuremath{{\mathcal{A}}}$ of all objects $B$ so that $\Hom_\Lambda(|T_i|,B)=0=\Ext^1_\Lambda(|T_i|,B)$ for all $i$. We denote the corresponding morphism by $[T]$ or $[T_1,\cdots,T_k]:\ensuremath{{\mathcal{A}}}\to\ensuremath{{\mathcal{B}}}$. Note that $T$ is an unordered set. For example, the empty set gives the identity morphism $[\,]=id_\ensuremath{{\mathcal{A}}}:\ensuremath{{\mathcal{A}}}\to\ensuremath{{\mathcal{A}}}$.
\end{defn}
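For example, let $\Lambda$ be the path algebra of the quiver $2\to1$ of type $A_2$ as above. Then $S_2^\perp=\ensuremath{{\mathcal{A}}}(P_2)$, so $[S_2]:mod\text-\Lambda\to\ensuremath{{\mathcal{A}}}(P_2)$ is a cluster morphism of rank 1, and $|P_1[1]|^\perp=P_1^\perp=\ensuremath{{\mathcal{A}}}(S_2)$, so $[P_1[1]]:mod\text-\Lambda\to\ensuremath{{\mathcal{A}}}(S_2)$ is another. The cluster tilting set $\{S_2,P_1[1]\}$ gives the rank 2 morphism $[S_2,P_1[1]]:mod\text-\Lambda\to0$.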
\subsection{Composition of cluster morphisms}\label{ss 1.2: composition of cluster morphisms}
We come to the difficult part of the definition which is the formula for composition of cluster morphisms. Suppose that we have cluster morphisms $[T]:\ensuremath{{\mathcal{A}}}(\alpha_\ast)\to \ensuremath{{\mathcal{A}}}(\beta_\ast)$ and $[S]:\ensuremath{{\mathcal{A}}}(\beta_\ast)\to \ensuremath{{\mathcal{A}}}(\gamma_\ast)$. Then the composition $[S]\circ[T]:\ensuremath{{\mathcal{A}}}(\alpha_\ast)\to \ensuremath{{\mathcal{A}}}(\gamma_\ast)$ will be the partial cluster tilting set
\begin{equation}\label{eq:composition of cluster morphisms}
[S_1,\cdots,S_\ell]\circ[T_1,\cdots,T_k]=[\sigma_TS_1,\cdots,\sigma_TS_\ell, T_1,\cdots,T_k]
\end{equation}
where the set mapping $\sigma_T:\ensuremath{{\mathcal{C}}}(\beta_\ast)\to \ensuremath{{\mathcal{C}}}(\alpha_\ast)$ is uniquely determined by the following proposition.
\begin{prop}\label{prop 1.8: Properties of sigma_T}
Suppose that $[T]=[T_1,\cdots,T_k]$ is a cluster morphism $\ensuremath{{\mathcal{A}}}(\alpha_\ast)\to \ensuremath{{\mathcal{A}}}(\beta_\ast)$. Then, for any $S\in \ensuremath{{\mathcal{C}}}(\beta_\ast)$ there is a unique object $\sigma_TS\in\ensuremath{{\mathcal{C}}}(\alpha_\ast)$ satisfying the following three conditions.
\begin{enumerate}
\item[(a)] $\{T_1,\cdots,T_k,\sigma_TS\}$ is a partial cluster tilting set in $\ensuremath{{\mathcal{C}}}(\alpha_\ast)$.
\item[(b)] $\ensuremath{{\mathcal{A}}}(\beta_\ast)\cap |S|^\perp=\ensuremath{{\mathcal{A}}}(\beta_\ast)\cap |\sigma_TS|^\perp$
\item[(c)] $\underline\dim(\sigma_TS)-\underline\dim S$ is an integer linear combination of the vectors $\underline\dim T_i$.
\end{enumerate}
Furthermore, the following additional properties hold as a consequence of the first three.
\begin{enumerate}
\item[(d)] If $S_1,S_2$ are ext-orthogonal elements of $\ensuremath{{\mathcal{C}}}(\beta_\ast)$ then $\sigma_TS_1,\sigma_TS_2$ are ext-orthogonal elements of $\ensuremath{{\mathcal{C}}}(\alpha_\ast)$.
\item[(e)] If $\{T_1,\cdots,T_k,S\}$ is a partial cluster tilting set in $\ensuremath{{\mathcal{C}}}(\alpha_\ast)$ then $\sigma_TS=S$.
\end{enumerate}
\end{prop}
We note that Property (e) follows immediately from the uniqueness of $\sigma_TS$. The proof of the other statements will be given later. For the moment suppose that this proposition holds. Then we will show that composition of cluster morphisms is associative. But first we need to show that composition is defined.
\begin{cor}
Given cluster morphisms $[T]:\ensuremath{{\mathcal{A}}}(\alpha_\ast)\to \ensuremath{{\mathcal{A}}}(\beta_\ast)$ and $[S]:\ensuremath{{\mathcal{A}}}(\beta_\ast)\to \ensuremath{{\mathcal{A}}}(\gamma_\ast)$, the formula \eqref{eq:composition of cluster morphisms} gives a cluster morphism $[T,\sigma_TS]:\ensuremath{{\mathcal{A}}}(\alpha_\ast)\to\ensuremath{{\mathcal{A}}}(\gamma_\ast)$. In other words Properties (a) and (b) in the proposition above hold when $\sigma_TS=\{\sigma_TS_1,\cdots,\sigma_TS_\ell\}$ has more than one element.
\end{cor}
\begin{proof}
First, $\{T,\sigma_TS\}=\{T_1,\cdots,T_k,\sigma_TS_1,\cdots,\sigma_TS_\ell\}$ is a partial cluster tilting set in $\ensuremath{{\mathcal{C}}}(\alpha_\ast)$ since, by (a), each $\sigma_TS_i$ is ext-orthogonal to each $T_j$ and by (d) the $\sigma_TS_i$ are ext-orthogonal to each other. Second, $[T,\sigma_TS]$ is a morphism $\ensuremath{{\mathcal{A}}}(\alpha_\ast)\to\ensuremath{{\mathcal{A}}}(\gamma_\ast)$. In other words, $\ensuremath{{\mathcal{A}}}(\alpha_\ast)\cap |T,\sigma_TS|^\perp=\ensuremath{{\mathcal{A}}}(\gamma_\ast)$. But this follows from Property (b):
\[
\ensuremath{{\mathcal{A}}}(\gamma_\ast)=\ensuremath{{\mathcal{A}}}(\beta_\ast)\cap |S|^\perp=\ensuremath{{\mathcal{A}}}(\beta_\ast)\cap |S_1|^\perp\cap \cdots\cap |S_{\ell-1}|^\perp\cap |S_\ell|^\perp=\bigcap\left(
\ensuremath{{\mathcal{A}}}(\beta_\ast)\cap|S_i|^\perp
\right)
\]
\[
=\bigcap\left(
\ensuremath{{\mathcal{A}}}(\beta_\ast)\cap|\sigma_TS_i|^\perp
\right)=\ensuremath{{\mathcal{A}}}(\beta_\ast)\cap |\sigma_TS|^\perp=\ensuremath{{\mathcal{A}}}(\alpha_\ast)\cap |T,\sigma_TS|^\perp
\]
\end{proof}
\begin{cor}
The composition law \eqref{eq:composition of cluster morphisms} is associative and unital. Consequently, we have a category with objects given by finitely generated wide subcategories $\ensuremath{{\mathcal{A}}}$ of $mod$-$\Lambda$ and morphisms given by partial cluster tilting sets $[T]:\ensuremath{{\mathcal{A}}}\to \ensuremath{{\mathcal{A}}}\cap|T|^\perp$.
\end{cor}
\begin{proof}
It follows from the Definition \eqref{eq:composition of cluster morphisms} that the empty set in $\ensuremath{{\mathcal{C}}}(\beta_\ast)$ is a left identity: $[\,]\circ [T]=[T,\sigma_T(\emptyset)]=[T]$. As a special case of Property (e), $\sigma_\emptyset S=S$. Therefore, the empty set is a right identity:
\[
[S_1,\cdots,S_\ell]\circ[\,]=[\sigma_\emptyset S_1,\cdots,\sigma_\emptyset S_\ell]=[S_1,\cdots,S_\ell]
\]
Finally, we need to show that composition is associative. So, suppose we have the composable cluster morphisms:
\[
\ensuremath{{\mathcal{A}}}(\alpha_\ast)\xrightarrow{[T]}
\ensuremath{{\mathcal{A}}}(\beta_\ast)\xrightarrow{[S]}
\ensuremath{{\mathcal{A}}}(\gamma_\ast)\xrightarrow{[R]}
\ensuremath{{\mathcal{A}}}(\delta_\ast)
\]
By definition we have:
\[
([R]\circ[S])\circ[T]=[S,\sigma_SR]\circ [T]=[T,\sigma_TS,\sigma_T\sigma_SR]
\]
\[
[R]\circ([S]\circ[T])=[R]\circ[T,\sigma_TS]=[T,\sigma_TS,\sigma_{T,\sigma_TS}R]
\]
Therefore, we need to show that, for each $R_i$ in $R$, $\sigma_T\sigma_SR_i=\sigma_{T,\sigma_TS}R_i$. To prove this we can assume that $R$ has only one element. Then we will verify that $\sigma_T\sigma_SR$ satisfies the three conditions which uniquely characterize $\sigma_{T,\sigma_TS}R$. By the previous corollary we have the first two conditions:
\begin{enumerate}
\item[(a)] $\{T,\sigma_TS,\sigma_T\sigma_SR\}$ forms a partial cluster tilting set in $\ensuremath{{\mathcal{C}}}(\alpha_\ast)$ and
\item[(b)] $\ensuremath{{\mathcal{A}}}(\gamma_\ast)\cap |R|^\perp=\ensuremath{{\mathcal{A}}}(\beta_\ast)\cap |S,\sigma_SR|^\perp=\ensuremath{{\mathcal{A}}}(\alpha_\ast)\cap |T,\sigma_TS,\sigma_T\sigma_SR|^\perp=\ensuremath{{\mathcal{A}}}(\gamma_\ast)\cap |\sigma_T\sigma_SR|^\perp$
\end{enumerate}
The third condition is also easy:
\[
\underline\dim(\sigma_T\sigma_SR)-\underline\dim R=\left(\underline\dim(\sigma_T\sigma_SR)-\underline\dim(\sigma_SR)\right)+\left(\underline\dim(\sigma_SR)-\underline\dim R\right)
\]
which is an additive combination of $\underline\dim T_i$ plus an additive combination of $\underline\dim S_j$. However, modulo the vectors $\underline\dim T_i$, each $\underline\dim S_j$ is congruent to $\underline\dim \sigma_TS_j$. Therefore:
\begin{enumerate}
\item[(c)] $\sigma_T\sigma_SR-R$ is an integer linear combination of the vectors $\underline\dim T_i$ and $\underline\dim \sigma_TS_j$.
\end{enumerate}
Therefore, by the uniqueness clause in the Proposition, we have
\begin{equation}\label{eq: sigma TS=sigma T sigma S}
\sigma_T\sigma_SR=\sigma_{T,\sigma_TS}R
\end{equation}
making composition of cluster morphisms associative.
\end{proof}
\subsection{Proof of Proposition \ref{prop 1.8: Properties of sigma_T}}\label{ss 1.3: proof of properties of sT}
To complete the definition of the cluster morphism category we need to prove Proposition \ref{prop 1.8: Properties of sigma_T}. We do this by induction on $k$ starting with $k=1$. Without loss of generality (replacing $\Lambda$ with the endomorphism ring of a projective generator of $\ensuremath{{\mathcal{A}}}(\alpha_\ast)$, which is again a finite dimensional hereditary algebra) we may assume that $\ensuremath{{\mathcal{A}}}(\alpha_\ast)=mod\text-\Lambda$. Then $\ensuremath{{\mathcal{A}}}(\beta_\ast)=|T|^\perp$.
\subsubsection{Uniqueness of $\sigma_TS$ when $k=1$}
\begin{lem}\label{lem: X to Tm to Y means X+Y=mT}
Let $\ensuremath{{\mathcal{A}}}(\alpha_1,\alpha_2)$ be a finitely generated wide subcategory of $mod$-$\Lambda$ of rank $2$ and suppose that $T,X,Y \in\ensuremath{{\mathcal{C}}}(\alpha_1,\alpha_2)$ so that $T$ is ext-orthogonal to both $X$ and $Y$. Then $\underline\dim X+\underline\dim Y$ is a multiple of $\underline\dim T$.
\end{lem}
\begin{proof}
Cluster mutation in cluster categories of rank 2 is very well understood. After possibly switching $X$ and $Y$ we have $X=\tau Y$ and an almost split triangle
\[
X\to T^m\to Y\to X[1]
\]
If $Y$ is not projective then $\underline\dim\, X+\underline\dim\, Y=\underline \dim\, T^m=m\,\underline \dim\, T$. If $Y$ is projective then $X=\tau Y=Y[1]$ and $\underline\dim X+\underline\dim Y=0$. So, the lemma holds in all cases.
\end{proof}
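This is illustrated in the smallest case by the following example.
\begin{eg}
Let $\Lambda$ be the path algebra of the quiver $1\leftarrow 2$, so that $mod$-$\Lambda$ has rank 2 with indecomposable modules $P_1=S_1$, $P_2$, $S_2$ of dimension vectors $(1,0)^t,(1,1)^t,(0,1)^t$. Take $T=P_2$, which is ext-orthogonal to both $X=P_1=\tau S_2$ and $Y=S_2$. The almost split sequence
\[
P_1\rightarrowtail P_2\twoheadrightarrow S_2
\]
gives $\underline\dim\,P_1+\underline\dim\,S_2=(1,1)^t=\underline\dim\,T$, so the lemma holds with $m=1$. For the projective case, take $T=P_2[1]$, $Y=P_1$ and $X=\tau Y=P_1[1]$; then $\underline\dim\,X+\underline\dim\,Y=0$, which is the multiple $0\cdot\underline\dim\,T$.
\end{eg}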
We recall the statement of Proposition \ref{prop 1.8: Properties of sigma_T} when $k=1$: For any rank 1 cluster morphism $[T]:\ensuremath{{\mathcal{A}}}(\alpha_\ast)\to\ensuremath{{\mathcal{A}}}(\beta_\ast)$ and any $S\in\ensuremath{{\mathcal{C}}}(\beta_\ast)$ there is a unique $\sigma_TS\in\ensuremath{{\mathcal{C}}}(\alpha_\ast)$ so that:
\begin{enumerate}
\item[(a)] $\{T,\sigma_TS\}$ is a partial cluster tilting set in $\ensuremath{{\mathcal{C}}}(\alpha_\ast)$.
\item[(b)] $\ensuremath{{\mathcal{A}}}(\beta_\ast)\cap |S|^\perp=\ensuremath{{\mathcal{A}}}(\beta_\ast)\cap |\sigma_TS|^\perp$
\item[(c)] $\underline\dim(\sigma_TS)-\underline\dim S$ is an integer multiple of the vector $\underline\dim T$.
\end{enumerate}
To prove uniqueness of $\sigma_TS$, let $X,Y$ be two candidates for $\sigma_TS$. Then, by Properties (a) and (b), $\{T,X\},\{T,Y\}$ are both cluster tilting sets in the rank 2 cluster category of the finitely generated wide subcategory $\,^\perp\left(|T,S|^\perp\right)$ of $\ensuremath{{\mathcal{A}}}(\alpha_\ast)$. By Property (c), $\underline\dim\,X$ and $\underline\dim\,Y$ are both congruent to $\underline\dim\,S$ modulo $\underline\dim\,T$. By the lemma we conclude that $2\,\underline\dim\,S$ is a multiple of $\underline\dim\,T$ and thus $\underline\dim\,X$, $\underline\dim\,T$ are collinear. But this is not possible since the dimension vectors of elements of a cluster tilting set are always linearly independent.
This completes the proof of the uniqueness of $\sigma_TS$. We will now show the existence of $\sigma_TS$ satisfying Properties (a),(b),(c).
\subsubsection{Case 1: $T,S$ are modules}
We are given that $S\in T^\perp$, i.e., $(S,T)$ is an exceptional sequence.
If $\Ext^1_\Lambda(S,T)=0$ then we let $\sigma_TS=S$. This clearly satisfies all three conditions.
Otherwise, let $m\ge1$ be the dimension of $\Ext^1_\Lambda(S,T)$ over the division algebra $F_T:=\End_\Lambda(T)$. If we choose a basis for $\Ext^1_\Lambda(S,T)$ then we get an extension
\[
T^m\rightarrowtail E\twoheadrightarrow S
\]
which is \emph{universal} in the sense that any extension of $S$ by $T$ is given as the pushout of this extension by a unique morphism $T^m\to T$. So, in the exact sequence:
\[
\Hom_\Lambda(T^m,T)\xrightarrow{\cong}\Ext^1_\Lambda(S,T)\to \Ext^1_\Lambda(E,T)\to \Ext^1_\Lambda(T^m,T)=0
\]
the first arrow is an isomorphism making $\Ext^1_\Lambda(E,T)=0$. Applying $\Ext^1_\Lambda(T,-)$ to the universal extension we also get $\Ext^1_\Lambda(T,E)=0$. So, $E,T$ are ext-orthogonal and we let $\sigma_TS=E$.
The construction of $E$ is the well-known mutation rule for exceptional sequences. We start with the exceptional sequence $(S,T)$ and we get the exceptional sequence $(T,E)$ by the universal extension in the case when $\Ext^1_\Lambda(S,T)\neq0$. See \cite{CB} for details. In particular, $E$ is an exceptional module and $(T,E)^\perp=(T,S)^\perp$. Also, $\underline\dim\,E=\underline\dim\,S+m\,\underline\dim\,T$. So, Properties (a),(b),(c) all hold.
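To illustrate the construction of $E$, here is the smallest example with $\Ext^1_\Lambda(S,T)\neq0$.
\begin{eg}
Let $\Lambda$ be the path algebra of $1\leftarrow 2$, $T=S_1$ and $S=S_2$. Then $S\in T^\perp$ since $S_1$ is projective and $\Hom_\Lambda(S_1,S_2)=0$, while $\Ext^1_\Lambda(S_2,S_1)$ is one dimensional over $F_T=K$. So $m=1$ and the universal extension is
\[
S_1\rightarrowtail P_2\twoheadrightarrow S_2.
\]
Thus $\sigma_TS=E=P_2$, the mutation of the exceptional sequence $(S_2,S_1)$ is $(S_1,P_2)$, and $\underline\dim\,E=(1,1)^t=\underline\dim\,S+\underline\dim\,T$, verifying Property (c).
\end{eg}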
\subsubsection{Case 2: $T$ is a module and $S=Q[1]$} We are given that $(Q,T)$ is an exceptional sequence and $Q$ is a relative projective object in $T^\perp$.
Suppose first that $\Hom_\Lambda(Q,T)=0$. If $Q$ is projective in $\ensuremath{{\mathcal{A}}}(\alpha_\ast)$ then we can let $\sigma_TQ[1]=Q[1]$ and there is nothing to prove. So, suppose $Q$ is not projective in $\ensuremath{{\mathcal{A}}}(\alpha_\ast)$. Let $T^m\rightarrowtail E\twoheadrightarrow Q$ be the universal extension. Then, just as in Case 1, $E,T$ are ext-orthogonal, $(T,E)$ is an exceptional sequence and $(T,E)^\perp=(T,S)^\perp$. We also claim that $E$ is projective in $\ensuremath{{\mathcal{A}}}(\alpha_\ast)$. So, we can let $\sigma_TQ[1]=E[1]$ and Properties (a),(b),(c) will hold.
To prove that $E$ is projective in $\ensuremath{{\mathcal{A}}}(\alpha_\ast)$, suppose not and let $X$ be an object in $\ensuremath{{\mathcal{A}}}(\alpha_\ast)$ of minimal length so that $\Ext^1_\Lambda(E,X)\neq0$. By right exactness of $\Ext^1_\Lambda(-,X)$, $\Ext^1_\Lambda(E,X)=0$ if $X\in \ensuremath{{\mathcal{A}}}(\beta_\ast)$. So, we can assume $X\notin T^\perp$. This means either
\begin{enumerate}
\item[(i)] $\Hom_\Lambda(T,X)\neq0$ or
\item[(ii)] $\Ext^1_\Lambda(T,X)\neq0$.
\end{enumerate}
In Case (i), let $f:T\to X$ be any nonzero morphism and let $Y$ be the cokernel of $f$. Since $f\neq0$, $Y$ has smaller length than $X$, so $\Ext^1_\Lambda(E,Y)=0$ by minimality of $X$. But $(T,E)$ is an exceptional sequence, so $\Ext^1_\Lambda(E,T)=0$ and, by right exactness of $\Ext^1_\Lambda(E,-)$, also $\Ext^1_\Lambda(E,f(T))=0$. The short exact sequence $f(T)\rightarrowtail X\twoheadrightarrow Y$ then gives $\Ext^1_\Lambda(E,X)=0$ which is a contradiction. In Case (ii), let $X\rightarrowtail Z\twoheadrightarrow T^m$ be the universal extension. Applying $\Hom_\Lambda(T,-)$ to this extension we see that $Z\in T^\perp$. Therefore $\Ext^1_\Lambda(E,Z)=0$. But $\Ext^1_\Lambda(E,Z)\cong \Ext^1_\Lambda(E,X)\neq 0$ which is a contradiction. So, $E$ must be projective in $\ensuremath{{\mathcal{A}}}(\alpha_\ast)$ as claimed.
Now, suppose that $\Hom_\Lambda(Q,T)\neq0$. Let $f:Q\to T^m$ be the minimal left $T$-approximation of $Q$. Then, by the theory of exceptional sequences, $f$ is either a monomorphism or an epimorphism. In the first case we get a short exact sequence $Q\rightarrowtail T^m\twoheadrightarrow E$ and $(T,E)$ is an exceptional sequence with $(T,E)^\perp=(Q,T)^\perp$. By right exactness of $\Ext^1_\Lambda(T,-)$ we also get $\Ext^1_\Lambda(T,E)=0$. So, $E,T$ are ext-orthogonal and we can let $\sigma_TQ[1]=E$.
If $f:Q\to T^m$ is an epimorphism, we let $P=\ker f$. Then $P$ is a projective object of $\ensuremath{{\mathcal{A}}}(\alpha_\ast)$ by the same argument used to prove that $E$ is projective in the case $\Hom_\Lambda(Q,T)=0$. (Take $X$ minimal so that $\Ext^1_\Lambda(P,X)\neq 0$, then $X\notin T^\perp$ giving two cases (i), (ii) each leading to a contradiction as before.) Then we can take $\sigma_TQ[1]=P[1]$. By construction $(T,P)$ is an exceptional sequence which is braid mutation equivalent to $(Q,T)$. Therefore (a) and (b) are satisfied and (c) follows from the exact sequence $P\rightarrowtail Q\twoheadrightarrow T^m$.
In all subcases of Case 2 we have the following.
\begin{prop}
When $T$ is a module and $S=Q[1]$, the pair $(T,\sigma_TS)$ is a signed exceptional sequence.
\end{prop}
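The epimorphism subcase can be seen in the following small example.
\begin{eg}
Let $\Lambda$ be the path algebra of $1\leftarrow2$ and $T=S_2$. Then $T^\perp$ is semi-simple with unique simple object $Q=P_2$, which is relatively projective in $T^\perp$. Here $\Hom_\Lambda(Q,T)\neq0$ and the minimal left $T$-approximation $f:P_2\to S_2$ is an epimorphism with kernel $P=P_1$, a projective object of $mod$-$\Lambda$. So $\sigma_TQ[1]=P_1[1]$, and $\underline\dim\,P_1[1]-\underline\dim\,P_2[1]=(0,1)^t=\underline\dim\,T$ as required by Property (c).
\end{eg}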
\subsubsection{Case 3: $T=P[1]$}
If $S$ is a module then we have $\Hom_\Lambda(P,S)=0$. So, $P[1],S$ are ext-orthogonal and we let $\sigma_{P[1]}S=S$. This trivially satisfies Properties (a),(b),(c).
So, suppose that $S=Q[1]$ where $Q$ is a relative projective object of $P^\perp$. If $Q$ is a projective object of $\ensuremath{{\mathcal{A}}}(\alpha_\ast)$ then $P[1],Q[1]\in \ensuremath{{\mathcal{C}}}(\alpha_\ast)$ form a partial cluster tilting set so we can let $\sigma_{P[1]}Q[1]=Q[1]$ which satisfies (a),(b),(c).
We are reduced to the case when $S=Q[1]$ where $Q$ is not projective in $\ensuremath{{\mathcal{A}}}(\alpha_\ast)$. In this case let $\dim_{F_Q}\Ext^1_\Lambda(Q,P)=m$ (necessarily positive as we will see) and let
\[
P^m\rightarrowtail E\twoheadrightarrow Q
\]
be the universal extension. Then we claim that $E$ is an indecomposable projective object of $\ensuremath{{\mathcal{A}}}(\alpha_\ast)$, the proof being the same as in Case 2 above (but shorter since (ii) does not occur). So, we can let $\sigma_{P[1]}Q[1]=E[1]$ and Property (a) will hold. Since $(P,E)$ is the braid mutation of $(Q,P)$, it is an exceptional sequence. So, $E$ is indecomposable and Property (b) holds. Since $\underline\dim\,E[1]=\underline\dim\,Q[1]+m\,\underline\dim\,P[1]$, Property (c) also holds.
\subsubsection{Stronger version of Proposition \ref{prop 1.8: Properties of sigma_T}}
To complete the proof of the proposition, we need to make the statement stronger. We will prove the following theorem along with the proposition by simultaneous induction on $k$.
\begin{thm}\label{thm:sigma-T is a bijection}
Suppose that $T=\{T_1,\cdots,T_k\}$ is a partial cluster tilting set in a finitely generated wide category $\ensuremath{{\mathcal{A}}}(\alpha_\ast)$ of rank $k+\ell$ and let $\ensuremath{{\mathcal{A}}}(\beta_\ast)=|T|^\perp\cap\ensuremath{{\mathcal{A}}}(\alpha_\ast)$. Then the mapping $\sigma_T$ given by Proposition \ref{prop 1.8: Properties of sigma_T} gives a bijection
\[
\sigma_T:\ensuremath{{\mathcal{C}}}(\beta_\ast)\to \ensuremath{{\mathcal{C}}}_T(\alpha_\ast)
\]
where $\ensuremath{{\mathcal{C}}}_T(\alpha_\ast)$ is the set of all elements of $\ensuremath{{\mathcal{C}}}(\alpha_\ast)$ which are ext-orthogonal to $T$ but not equal to any $T_i$. Furthermore, $X=\{X_1,\cdots,X_\ell\}$ is a partial cluster tilting set in $\ensuremath{{\mathcal{C}}}(\beta_\ast)$ if and only if $\sigma_TX\cup T$ is a partial cluster tilting set in $\ensuremath{{\mathcal{C}}}(\alpha_\ast)$.
\end{thm}
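Before giving the proof, we illustrate the theorem in the smallest case.
\begin{eg}
Let $\ensuremath{{\mathcal{A}}}(\alpha_\ast)=mod$-$\Lambda$ for the path algebra of $1\leftarrow2$ and let $T=\{P_2\}$. Then $\ensuremath{{\mathcal{A}}}(\beta_\ast)=|T|^\perp$ has unique simple object $S_1=P_1$, so $\ensuremath{{\mathcal{C}}}(\beta_\ast)=\{S_1,S_1[1]\}$. The cluster tilting sets containing $P_2$ are $\{P_1,P_2\}$ and $\{P_2,S_2\}$, so $\ensuremath{{\mathcal{C}}}_T(\alpha_\ast)=\{P_1,S_2\}$. The bijection is $\sigma_TS_1=P_1$ and $\sigma_TS_1[1]=S_2$. For the latter, $\underline\dim\,S_2-\underline\dim\,S_1[1]=(0,1)^t+(1,0)^t=\underline\dim\,P_2$, verifying Property (c).
\end{eg}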
So far we have shown the existence of a unique $\sigma_T$ satisfying Properties (a),(b),(c) of Proposition \ref{prop 1.8: Properties of sigma_T} for $k=1$. We will show that this implies the theorem for $k=1$. This clearly implies Property (d) in the proposition for $k=1$. The induction step is easy for both proposition and theorem.
We first note that $\sigma_T$ is clearly a monomorphism. To see this, let $\ensuremath{{\field{R}}}\alpha_\ast$ be the $(\ell+1)$-dimensional vector space of formal real linear combinations of the roots $\alpha_i$. Then $\beta_i$ are linearly independent as elements of $\ensuremath{{\field{R}}}\alpha_\ast$ since they are dimension vectors of modules $S_i$ in an exceptional sequence. This implies that $\ensuremath{{\field{R}}}\beta_\ast\subseteq \ensuremath{{\field{R}}}\alpha_\ast$ is $\ell$ dimensional. Furthermore, the $S_i$ and $T_j$ form an exceptional sequence. So, $\beta_i$ and $\underline\dim\,T_j$ span $\ensuremath{{\field{R}}}\alpha_\ast$. So, the inclusion map $\ensuremath{{\field{R}}}\beta_\ast\hookrightarrow \ensuremath{{\field{R}}}\alpha_\ast$ induces a linear isomorphism
\[
\lambda_T:\ensuremath{{\field{R}}}\beta_\ast\cong \ensuremath{{\field{R}}}\alpha_\ast/\ensuremath{{\field{R}}} T
\]
By Property (c),
\[
\underline\dim\, \sigma_TX+\ensuremath{{\field{R}}} T=\lambda_T(\underline\dim\,X)\]
for all $X\in\ensuremath{{\mathcal{C}}}(\beta_\ast)$. Since $X$ is determined by its dimension vector, $\sigma_T$ is 1-1.
\subsubsection{Proof that $\sigma_T$ is a bijection for $k=1$} It remains to show that $\sigma_T$ is surjective.
Let $X\in \ensuremath{{\mathcal{C}}}_T(\alpha_\ast)$. We will find an object of $\ensuremath{{\mathcal{C}}}(\beta_\ast)$ which maps to $X$. Let $\ensuremath{{\mathcal{A}}}(\gamma_\ast)=|T|^\perp\cap|X|^\perp$. Then $\{T,X\}$ is a cluster tilting set in $\,^\perp\ensuremath{{\mathcal{A}}}(\gamma_\ast)$. There exists a unique module $M$ in $\ensuremath{{\mathcal{A}}}(\delta_\ast)=\,^\perp\ensuremath{{\mathcal{A}}}(\gamma_\ast)$ so that $(M,|T|)$ is an exceptional sequence. Applying $\sigma_T$ we have either $\sigma_TM=X$, in which case we are done, or $\sigma_TM=Y\neq X$ and $\{T,Y\}$ is another cluster tilting set in $\ensuremath{{\mathcal{A}}}(\delta_\ast)$ containing $T$. In the second case we claim that $M$ is a relatively projective object of $\ensuremath{{\mathcal{A}}}(\beta_\ast)$. So, $M[1]\in\ensuremath{{\mathcal{C}}}(\beta_\ast)$ and $\sigma_TM[1]=X$ by Lemma \ref{lem: X to Tm to Y means X+Y=mT}.
To prove that $M$ is projective in $\ensuremath{{\mathcal{A}}}(\beta_\ast)$ we examine the AR quiver of the cluster category of $\ensuremath{{\mathcal{A}}}(\delta_\ast)=\,^\perp\ensuremath{{\mathcal{A}}}(\gamma_\ast)$. If $\ensuremath{{\mathcal{A}}}(\delta_\ast)$ is semi-simple then $Y=M$ and $X=M[1]$. So, $M$ is projective in $\ensuremath{{\mathcal{A}}}(\alpha_\ast)$ and thus also in $\ensuremath{{\mathcal{A}}}(\beta_\ast)$. So, we may assume the AR quiver of the cluster category of $\ensuremath{{\mathcal{A}}}(\delta_\ast)$ is connected:
\[
\xymatrixrowsep{10pt}\xymatrixcolsep{10pt}
\xymatrix{
\cdots\ar[rd] && I_2\ar[rd]&& P_2[1]\ar[rd]&& P_2\ar[rd]\\
&I_1\ar[ru] &&P_1[1]\ar[ru] &&P_1\ar[ru] && \cdots
}
\]
We look at all possible cases.
Case 0: If $T$ is not one of the four middle terms $I_2,P_1[1],P_2[1],P_1$, then $Y=M$ and $M\rightarrowtail T^m\twoheadrightarrow X$ is an almost split sequence. Since $\Ext^1_\Lambda$ is right exact, $\Ext^1_\Lambda(M,-)=0$ on $T^\perp=\ensuremath{{\mathcal{A}}}(\beta_\ast)$, making $M$ projective in that category.
Case 1: If $T=P_1$ then $M=I_2$ and $\sigma_TM=P_2$ making $X=P_2[1]$. Since this is an element of $\ensuremath{{\mathcal{C}}}(\alpha_\ast)$, $P_2$ is projective in $\ensuremath{{\mathcal{A}}}(\alpha_\ast)$ making $P_1\subseteq P_2$ projective as well. The exact sequence $T=P_1\rightarrowtail P_2^m\twoheadrightarrow I_2=M$ shows that $\Ext^1_\Lambda(I_2,-)\cong\Ext^1_\Lambda(P_2^m,-)=0$ on $T^\perp$. So, $M=I_2$ is projective in $\ensuremath{{\mathcal{A}}}(\beta_\ast)$.
Case 2: $T=P_2[1]$. Then $M=P_1=Y$ and $X=P_1[1]$. Since $P_1[1]\in\ensuremath{{\mathcal{C}}}(\alpha_\ast)$, $M=P_1$ is projective.
Case 3: If $T=P_1[1]$ then $M=I_2,X=P_2[1]$ is just like Case 1.
Case 4: If $T=I_2$ then $M=Y=I_1$ and $X=P_1[1]$. So, $P_1$ is projective in $\ensuremath{{\mathcal{A}}}(\alpha_\ast)$. The exact sequence $P_1\rightarrowtail M\twoheadrightarrow T^m$ then shows that $M$ is projective in $T^\perp=\ensuremath{{\mathcal{A}}}(\beta_\ast)$.
So, $M$ is projective in $\ensuremath{{\mathcal{A}}}(\beta_\ast)$ in all cases and $X=\sigma_TM[1]$. So, $\sigma_T$ is a bijection for $k=1$.
\subsubsection{Virtual semi-invariants} To show that the bijection $\sigma_T^{-1}:\ensuremath{{\mathcal{C}}}_T(\alpha_\ast)\to \ensuremath{{\mathcal{C}}}(\beta_\ast)$ takes cluster tilting sets to cluster tilting sets we need some results about virtual semi-invariants.
Let $Y$ be a fixed exceptional module in $\ensuremath{{\mathcal{A}}}(\beta_\ast)$ with dimension vector $\gamma\in\ensuremath{{\field{N}}}\beta_\ast$. We consider all pairs of relatively projective objects $P,Q$ in $\ensuremath{{\mathcal{A}}}(\beta_\ast)$ for which there is a homomorphism $f:P\to Q$ so that
\[
\Hom_\Lambda(f,Y):\Hom_\Lambda(Q,Y)\to \Hom_\Lambda(P,Y)
\]
is an isomorphism. When $f:P\to Q$ is a monomorphism, this is equivalent to the condition that $\Hom_\Lambda(M,Y)=0=\Ext^1_\Lambda(M,Y)$ where $M=\coker f$.
\begin{defn}\label{def: det semi-inv and supports}\cite{IOTW3}
The determinant of the matrix of $\Hom_\Lambda(f,Y)$ with respect to some basis is called a \emph{(determinantal) virtual semi-invariant} on the presentation space $\Hom_\Lambda(P,Q)$ with \emph{determinantal (det)-weight} $\gamma=\undim Y$ and is denoted $c_Y:\Hom_\Lambda(P,Q)\to K$. The set of all integer vectors $\underline\dim\,Q-\underline\dim\,P\in \ensuremath{{\field{Z}}}\beta_\ast$ for such pairs (relatively projective objects $P,Q$ in $\ensuremath{{\mathcal{A}}}(\beta_\ast)$ so that $c_Y$ is nonzero) is called the \emph{integer support} of $c_Y$ in $\ensuremath{{\mathcal{A}}}(\beta_\ast)$ and is denoted $D_{\ensuremath{{\field{Z}}}\beta_\ast}(\gamma)$. The \emph{real support} of $c_Y$, denoted $D_{\beta_\ast}(\gamma)$ is the convex hull of $D_{\ensuremath{{\field{Z}}}\beta_\ast}(\gamma)$ in $\ensuremath{{\field{R}}}\beta_\ast$. When $\ensuremath{{\mathcal{A}}}(\beta_\ast)=mod\text-\Lambda$ and $\ensuremath{{\field{R}}}\beta_\ast=\ensuremath{{\field{R}}}^n$, $D_{\ensuremath{{\field{Z}}}\beta_\ast}(\gamma)$, $D_{\beta_\ast}(\gamma)$ are denoted $D_\ensuremath{{\field{Z}}}(\gamma)$, $D(\gamma)$.
\end{defn}
We observe that, if $X\in \ensuremath{{\mathcal{C}}}(\beta_\ast)$ and $Y\in \ensuremath{{\mathcal{A}}}(\beta_\ast)$ is exceptional then $\underline\dim\,X$ lies in $D_{\beta_\ast}(\underline\dim\,Y)$ if and only if $\Hom_{\ensuremath{{\mathcal{D}}}^b}(X,Y)=0=\Ext^1_{\ensuremath{{\mathcal{D}}}^b}(X,Y)$ if and only if $|X|\in \,^\perp Y$.
The following theorem is proved in \cite{IOTW3} in the case $\ensuremath{{\mathcal{A}}}(\beta_\ast)=mod\text-\Lambda$ and $\ensuremath{{\field{R}}}\beta_\ast=\ensuremath{{\field{R}}}^n$.
\begin{thm}[Stability theorem for virtual semi-invariants]\label{Stability theorem for virtual semi-invariants} Let $Y$ be an exceptional module in $\ensuremath{{\mathcal{A}}}(\beta_\ast)$ with $\undim Y=\gamma\in \ensuremath{{\field{R}}}\beta_\ast$. Then, a vector $v\in\ensuremath{{\field{R}}}\beta_\ast$ lies in the convex hull $D_{\beta_\ast}(\gamma)$ of $D_{\ensuremath{{\field{Z}}}\beta_\ast}(\gamma)$ if and only if the following hold.
\begin{enumerate}
\item $\brk{v,\gamma}=0$ and
\item $\brk{v,\gamma'}\le0$ for all real Schur subroots $\gamma'\subseteq \gamma$ so that $\gamma'\in\ensuremath{{\field{N}}}\beta_\ast$ (these are the dimension vectors of exceptional submodules $Y'\subseteq Y$ which lie in $\ensuremath{{\mathcal{A}}}(\beta_\ast)$)
\end{enumerate}
\end{thm}
Note that the second condition is vacuous when $Y$ is a simple object of $\ensuremath{{\mathcal{A}}}(\beta_\ast)$.
\begin{proof}
Let $k$ be the rank of $\ensuremath{{\mathcal{A}}}(\beta_\ast)$. Then we have an isomorphism $\varphi_\ast:\ensuremath{{\field{Z}}}^k\cong \ensuremath{{\field{Z}}}\beta_\ast$ given by $\varphi_\ast(a_1,\cdots,a_k)=\sum a_i\beta_i$. This is the linear isomorphism induced by the exact embedding $\varphi:\ensuremath{{\mathcal{A}}}(\beta_\ast)\hookrightarrow mod\text-\Lambda$. Exactness of $\varphi$ implies that $\varphi_\ast$ is an isometry with respect to the form $\brk{\cdot,\cdot}$ and this extends to a linear isometry $\overline\varphi_\ast:\ensuremath{{\field{R}}}^k\cong \ensuremath{{\field{R}}}\beta_\ast$.
Let $\alpha=\varphi_\ast^{-1}(\gamma)\in\ensuremath{{\field{N}}}^k$. Then the Virtual Stability Theorem (\cite{IOTW3}, Theorem 3.1.1) for $\ensuremath{{\mathcal{A}}}(\beta_\ast)$ states, in the present notation, that $\overline\varphi_\ast^{-1}(D_{\beta_\ast}(\gamma))$ is the set of all $x\in\ensuremath{{\field{R}}}^k$ so that
\begin{enumerate}
\item[(1)$'$] $\brk{x,\alpha}=0$ and
\item[(2)$'$] $\brk{x,\alpha'}\le 0$ for all real Schur subroots $\alpha'\subseteq \alpha$
\end{enumerate}
and $\varphi_\ast^{-1}(D_{\ensuremath{{\field{Z}}}\beta_\ast}(\gamma))=\overline\varphi_\ast^{-1}(D_{\beta_\ast}(\gamma))\cap \ensuremath{{\field{Z}}}^k$. Since $\overline\varphi_\ast$ is an isometry, the theorem follows.
\end{proof}
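When $Y$ is simple, the theorem is easy to verify directly.
\begin{eg}
Let $\ensuremath{{\mathcal{A}}}(\beta_\ast)=mod$-$\Lambda$ for the path algebra of $1\leftarrow2$ and let $Y=S_2$ with $\gamma=(0,1)^t$. Condition (2) is vacuous, so the theorem says $D(\gamma)=\{(a,b)^t:\brk{(a,b)^t,\gamma}=b=0\}$. Indeed, $\underline\dim\,S_1=(1,0)^t$ lies in $D_\ensuremath{{\field{Z}}}(\gamma)$ since $\Hom_\Lambda(S_1,S_2)=0=\Ext^1_\Lambda(S_1,S_2)$, whereas $\brk{\underline\dim\,P_2,\gamma}=1\neq0$, reflecting $\Hom_\Lambda(P_2,S_2)\neq0$.
\end{eg}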
\begin{cor}\label{cor: comparing D-beta to D-alpha}
Let $Y\in \ensuremath{{\mathcal{A}}}(\beta_\ast)=|T|^\perp$ with $\underline\dim\,Y=\gamma$ and let $v\in\ensuremath{{\field{R}}}\beta_\ast$. Then $v$ lies in $D_{\beta_\ast}(\gamma)$ if and only if $\underline\dim\,T+\vare v\in D_{\alpha_\ast}(\gamma)$ for all $\vare>0$ sufficiently small.
\end{cor}
\begin{proof} ($\Rightarrow$) Suppose that $v\in D_{\beta_\ast}(\gamma)$. Then
\begin{enumerate}
\item $\brk{\underline\dim\,T+\vare v,\gamma}=0$ since $Y\in |T|^\perp$.
\item If $\gamma'\subseteq\gamma$ lies in $\ensuremath{{\mathcal{A}}}(\beta_\ast)$ then $\brk{\underline\dim\,T+\vare v,\gamma'}=\vare\brk{v,\gamma'}\le0$.
\item If $\gamma''\subseteq\gamma$ does not lie in $\ensuremath{{\mathcal{A}}}(\beta_\ast)$ then $Y$ has a submodule $W\in\ensuremath{{\mathcal{A}}}(\alpha_\ast)$ of dimension $\gamma''$ so that $W\notin |T|^\perp$. So, either $\Hom_\Lambda(|T|,W)\neq 0$ or $\Ext^1_\Lambda(|T|,W)\neq0$. If $T$ is a module, we cannot have a nonzero homomorphism $T\to W$ since $\Hom_\Lambda(T,Y)=0$. If $T$ is a shifted projective then $\Ext^1_\Lambda(|T|,W)=0$. In either case, we get $\brk{\underline\dim\,T,\gamma''}<0$. Therefore $\brk{\underline\dim\,T+\vare v,\gamma''}<0$ for sufficiently small $\vare$.
\end{enumerate}
($\Leftarrow$) Conversely, suppose that $\underline\dim\,T+\vare v\in D_{\alpha_\ast}(\gamma)$ for all $\vare>0$ sufficiently small. Then, $\brk{\underline\dim\,T+\vare v,\gamma}=0$. This implies that $\brk{v,\gamma}=0$ since $\brk{\underline\dim\,T,\gamma}=0$. For any $\gamma'\subseteq \gamma$ where $\gamma'$ is the dimension vector of an object of $\ensuremath{{\mathcal{A}}}(\beta_\ast)=|T|^\perp$, we also have $\brk{\underline\dim\,T,\gamma'}=0$. So, \[
\brk{\underline\dim\,T+\vare v,\gamma'}=\vare\brk{v,\gamma'}\le0
\]
which implies $\brk{v,\gamma'}\le0$.
\end{proof}
\begin{eg}
Let $\ensuremath{{\mathcal{A}}}(\alpha_\ast)=\ensuremath{{\mathcal{A}}}(S_1,S_2,S_3)$ be the module category of the quiver $1\leftarrow 2\leftarrow 3$. Let $T$ be the module with $\underline\dim\,T=(0,1,1)^t$. (So, $T=I_2$ is the injective envelope of $S_2$.) Then $T^\perp=\ensuremath{{\mathcal{A}}}(\beta_\ast)=\ensuremath{{\mathcal{A}}}(S_2,P_3)$ is a semi-simple category whose simple objects $S_2$ and $P_3$ are also projective. So, $S_2[1],P_3[1]\in\ensuremath{{\mathcal{C}}}(\beta_\ast)$. Let $Y=P_3$ with dimension vector $\gamma=(1,1,1)^t$ and $v=(0,-1,0)^t=\underline\dim\,S_2[1]$. Then $\brk{v,\gamma}=0$. So, $v\in D_{\beta_\ast}(\gamma)$. The corollary states that
\[
\underline\dim\,T+\vare v=(0,1,1)^t+\vare (0,-1,0)^t=(0,1-\vare,1)^t
\]
is an element of $D_{\alpha_\ast}(\gamma)$ for sufficiently small $\vare>0$. In fact, $(0,1-\vare,1)^t\in D_{\alpha_\ast}(\gamma)$ if and only if $\vare\le1$ since $\brk{(0,1-\vare,1)^t,\underline\dim\,S_1}=\vare-1$ is required to be $\le0$ because $S_1$ is a submodule of $Y$ in $\ensuremath{{\mathcal{A}}}(\alpha_\ast)$.
\end{eg}
Another result that we need, also proved in \cite{IOTW3}, is the virtual generic decomposition theorem. As in the proof of Theorem \ref{Stability theorem for virtual semi-invariants}, this can be reworded as follows.
\begin{thm}[Virtual generic decomposition theorem]
Suppose that $\{X_1,\cdots,X_k\}$ is a partial cluster tilting set in $\ensuremath{{\mathcal{C}}}(\alpha_\ast)$. Let $P,Q$ be projective objects in $\ensuremath{{\mathcal{A}}}(\alpha_\ast)$ so that $\underline\dim\,Q-\underline\dim\,P=\sum n_i\underline\dim\,X_i$ for positive integers $n_i$. Then for $f$ in an open dense subset of $\Hom_\Lambda(P,Q)$, we have a distinguished triangle in the bounded derived category of $\ensuremath{{\mathcal{A}}}(\alpha_\ast)$:
\[
P\xrightarrow fQ\to \coprod n_iX_i\to P[1].
\]
\end{thm}
\begin{cor}\label{cor: semi-invariants do not cut clusters}
Suppose that $\{X_1,\cdots,X_k\}$ is a partial cluster tilting set in $\ensuremath{{\mathcal{C}}}(\alpha_\ast)$ with dimension vectors $\underline\dim\,X_i=\gamma_i$. Let $Y\in\ensuremath{{\mathcal{A}}}(\alpha_\ast)$ with $\underline\dim\,Y=\gamma$ so that $D_{\alpha_\ast}(\gamma)$ contains $\sum n_i \gamma_i$ where the $n_i$ are positive rational numbers. Then $D_{\alpha_\ast}(\gamma)$ contains $\gamma_i$ for all $i$.
\end{cor}
\begin{proof} By multiplying by a positive integer we may assume that the $n_i$ are all positive integers. For these $n_i$ we take $P,Q$ and $f:P\to Q$ as in the theorem. Then, for general $f$ the semi-invariant $c_Y$ is defined, i.e., $\Hom_\Lambda(f,Y)$ is an isomorphism. By the long exact sequence for the distinguished triangle in the theorem, $\Ext^j_{\ensuremath{{\mathcal{D}}}^b(\ensuremath{{\mathcal{A}}}(\alpha_\ast))}(n_iX_i,Y)=0$ for all $i$ and $j$. So, $\gamma_i\in D_{\alpha_\ast}(\gamma)$ for all $i$.
\end{proof}
\subsubsection{$\sigma_T^{-1}$ takes cluster tilting sets to cluster tilting sets}
Using virtual semi-invariants we will show that the bijection $\sigma_T^{-1}:\ensuremath{{\mathcal{C}}}_T(\alpha_\ast)\to \ensuremath{{\mathcal{C}}}(\beta_\ast)$ takes ext-orthogonal elements to ext-orthogonal elements assuming Properties (a),(b),(c) of Proposition \ref{prop 1.8: Properties of sigma_T} for $k=1$. This will imply that $\sigma_T^{-1}$ takes cluster tilting sets to cluster tilting sets.
Suppose that $X_1,X_2\in \ensuremath{{\mathcal{C}}}_T(\alpha_\ast)$ are ext-orthogonal but $Y_i=\sigma_T^{-1}(X_i)\in\ensuremath{{\mathcal{C}}}(\beta_\ast)$ are not. Then we will obtain a contradiction.
We have that $\{T,X_1,X_2\}$ is a partial cluster tilting set in $\ensuremath{{\mathcal{A}}}(\alpha_\ast)$. Let $\ensuremath{{\mathcal{A}}}(\gamma_\ast)=|T,X_1,X_2|^\perp$. Then $\ensuremath{{\mathcal{A}}}(\delta_\ast):=\ensuremath{{\mathcal{A}}}(\beta_\ast)\cap\,^\perp\ensuremath{{\mathcal{A}}}(\gamma_\ast)$ is a rank 2 finitely generated wide subcategory of $\ensuremath{{\mathcal{A}}}(\alpha_\ast)$. By Properties (a),(b) we have:
\[
|Y_i|^\perp\cap \ensuremath{{\mathcal{A}}}(\beta_\ast)=|X_i|^\perp\cap \ensuremath{{\mathcal{A}}}(\beta_\ast)\supseteq \ensuremath{{\mathcal{A}}}(\gamma_\ast)
\]
for each $i$. Therefore, $|Y_i|$ lie in $\,^\perp\ensuremath{{\mathcal{A}}}(\gamma_\ast)=\ensuremath{{\mathcal{A}}}(\delta_\ast)$. Since projectives in $\ensuremath{{\mathcal{A}}}(\beta_\ast)$ are projective in $\ensuremath{{\mathcal{A}}}(\delta_\ast)$ this implies $Y_i\in \ensuremath{{\mathcal{C}}}(\delta_\ast)$.
We are assuming that $Y_i$ are not ext-orthogonal. We can renumber the $Y_i$ so that $Y_1$ is to the left of $Y_2$ in the fundamental domain of $\tau^{-1}[1]$ in the AR-quiver of the bounded derived category of $\ensuremath{{\mathcal{A}}}(\delta_\ast)$. Then $\Hom_{\ensuremath{{\mathcal{D}}}^b}(Y_2,Y_1)=0$ and $\Ext^1_{\ensuremath{{\mathcal{D}}}^b}(Y_1,Y_2)=0$ in $\ensuremath{{\mathcal{D}}}^b=\ensuremath{{\mathcal{D}}}^b(\ensuremath{{\mathcal{A}}}(\delta_\ast))$. Since $Y_1,Y_2$ are not ext-orthogonal in the cluster category, we must have $\Ext^1_{\ensuremath{{\mathcal{D}}}^b}(Y_2,Y_1)\neq0$. Also, $Y_1$ must be a module which implies that $\Ext^j_{\ensuremath{{\mathcal{D}}}^b}(Y_2,Y_1)=0$ for $j\neq 0,1$. Therefore, with the notation $\gamma_i=\underline\dim\,Y_i$, we have
\[
\brk{\gamma_2,\gamma_1}=\dim_K\Hom_{\ensuremath{{\mathcal{D}}}^b}(Y_2,Y_1)-\dim_K\Ext^1_{\ensuremath{{\mathcal{D}}}^b}(Y_2,Y_1)<0
\]
Also, $\brk{\gamma_1,\gamma_1}>0$. This implies that there are positive rational numbers $a,b$, unique up to scaling, so that $\brk{a\gamma_1+b\gamma_2,\gamma_1}=0$. Let $Z$ be the unique object so that there is an irreducible map $Y_1\to Z$. Then $|Z|\in \,^\perp Y_1$. So, $\brk{\underline\dim\,Z,\gamma_1}=0$. By uniqueness of $a,b$ we have $\underline\dim\,Z=a\gamma_1+b\gamma_2$. Since $|Z|\in \,^\perp Y_1$, this vector $v=a\gamma_1+b\gamma_2$ lies in the support $D_{\beta_\ast}(\gamma_1)$ of the virtual semi-invariant $c_{Y_1}$ defined on $\ensuremath{{\field{R}}}\beta_\ast$.
By Corollary \ref{cor: comparing D-beta to D-alpha}, $D_{\alpha_\ast}(\gamma_1)$ contains the vector
\[
\underline\dim\,T+\vare v=\underline\dim\,T+\vare a\gamma_1+\vare b\gamma_2
\]
for $\vare>0$ sufficiently small. By Property (c), this is equal to $c\,\underline\dim\,T+\vare a\underline\dim\,X_1+\vare b\underline\dim\,X_2$ where $c$ is a number which converges to 1 as $\vare\to0$. By Corollary \ref{cor: semi-invariants do not cut clusters}, the dimension vectors of $T,X_1,X_2$ lie in $D_{\alpha_\ast}(\gamma_1)$. In other words, $|T|,|X_1|,|X_2|$ lie in $\,^\perp Y_1$. Equivalently, $Y_1$ lies in $|T,X_1,X_2|^\perp$, which is a contradiction. Therefore, $\sigma_T^{-1}$ takes ext-orthogonal elements to ext-orthogonal elements.
\subsubsection{$\sigma_T$ takes cluster tilting sets to cluster tilting sets}
Let $\ensuremath{{\mathcal{K}}}$ be the set of all cluster tilting sets in $\ensuremath{{\mathcal{C}}}(\beta_\ast)$ which are the images under $\sigma_T^{-1}$ of cluster tilting sets in $\ensuremath{{\mathcal{C}}}_T(\alpha_\ast)$. We know that $\ensuremath{{\mathcal{K}}}$ is nonempty since $\ensuremath{{\mathcal{C}}}_T(\alpha_\ast)$ contains at least one cluster tilting set. We claim that $\ensuremath{{\mathcal{K}}}$ is closed under all mutations of cluster tilting sets. Using the well-known fact that all cluster tilting sets over a hereditary algebra are mutation equivalent, this will imply that $\ensuremath{{\mathcal{K}}}$ contains all cluster tilting sets in $\ensuremath{{\mathcal{C}}}(\beta_\ast)$ and that therefore $\sigma_T$ sends all cluster tilting sets in $\ensuremath{{\mathcal{C}}}(\beta_\ast)$ to cluster tilting sets in $\ensuremath{{\mathcal{C}}}_T(\alpha_\ast)$.
To prove the claim, let $Y=\{Y_1,\cdots,Y_\ell\}$ be a cluster tilting set in $\ensuremath{{\mathcal{C}}}(\beta_\ast)$ which lies in $\ensuremath{{\mathcal{K}}}$. Then $X=\{\sigma_TY_1,\cdots,\sigma_TY_\ell,T\}$ is a cluster tilting set in $\ensuremath{{\mathcal{C}}}(\alpha_\ast)$ by definition of $\ensuremath{{\mathcal{K}}}$. For any $j=1,\cdots,\ell$ we want to show that the mutation $\mu_jY$ of $Y$, given by replacing $Y_j$ with $Y_j^\ast\in\ensuremath{{\mathcal{C}}}(\beta_\ast)$ also lies in $\ensuremath{{\mathcal{K}}}$. But this is easy: Take $\mu_jX$. This is the cluster tilting set in $\ensuremath{{\mathcal{C}}}(\alpha_\ast)$ obtained by replacing $\sigma_TY_j$ with the unique other object $Z$ which will complete the cluster tilting set. Since $\sigma_T^{-1}$ takes cluster tilting sets to cluster tilting sets, $\sigma_T^{-1}(\mu_jX)$ is a cluster tilting set in $\ensuremath{{\mathcal{C}}}(\beta_\ast)$. But this is the same as $Y$ except that $Y_j$ is replaced with $\sigma_T^{-1}(Z)\neq Y_j$. This must be equal to $Y_j^\ast$. So, $\mu_jY$ is in $\ensuremath{{\mathcal{K}}}$. So, $\ensuremath{{\mathcal{K}}}$ contains all cluster tilting sets in $\ensuremath{{\mathcal{C}}}(\beta_\ast)$.
This completes the proof of Proposition \ref{prop 1.8: Properties of sigma_T} and Theorem \ref{thm:sigma-T is a bijection} in the case $k=1$.
\subsubsection{Induction step}
Suppose now that $k\ge2$ and the proposition and theorem both hold for $k-1$. So, we have $T=\{T_1,\cdots,T_k\}$ a partial cluster tilting set in $\ensuremath{{\mathcal{A}}}(\alpha_\ast)$ and $|T|^\perp=\ensuremath{{\mathcal{A}}}(\beta_\ast)$. By an observation of Schofield, the modules $|T_i|$ can be reordered in such a way that they form an exceptional sequence $(|T_1|,|T_2|,\cdots,|T_k|)$. This implies that $|T_1|,\cdots,|T_{k-1}|$ lie in $|T_k|^\perp$ which we denote $\ensuremath{{\mathcal{A}}}(\gamma_\ast)$. Also, the bijection $\sigma_{T_k}:\ensuremath{{\mathcal{C}}}(\gamma_\ast)\to \ensuremath{{\mathcal{C}}}_{T_k}(\alpha_\ast)$ sends $T_i$ to $T_i$ for $i<k$.
By induction on $k$ we also have a bijection
$
\sigma_{T_\ast}:\ensuremath{{\mathcal{C}}}(\beta_\ast)\to \ensuremath{{\mathcal{C}}}_{T_\ast}(\gamma_\ast)
$
given by the partial cluster tilting set $T_\ast=\{T_1,\cdots,T_{k-1}\}$ in $\ensuremath{{\mathcal{C}}}(\gamma_\ast)$.
\[
\xymatrixrowsep{15pt}
\xymatrixcolsep{30pt}
\xymatrix{
\ensuremath{{\mathcal{C}}}(\beta_\ast)\ar[r]^{\sigma_{T_\ast}}_\approx &
\ensuremath{{\mathcal{C}}}_{T_\ast}(\gamma_\ast)\ar[d]_\subseteq\ar[r] &
\ensuremath{{\mathcal{C}}}_{T}(\alpha_\ast)\ar[d]_\subseteq\\
&
\ensuremath{{\mathcal{C}}}(\gamma_\ast) \ar[r]^{\sigma_{T_k}}_\approx&
\ensuremath{{\mathcal{C}}}_{T_k}(\alpha_\ast)
}
\]
\noindent\underline{Claim 1}: The bijection $\sigma_{T_k}$ sends $\ensuremath{{\mathcal{C}}}_{T_\ast}(\gamma_\ast)$ bijectively onto $\ensuremath{{\mathcal{C}}}_T(\alpha_\ast)$ and therefore induces a bijection
\[
\sigma_T:=\sigma_{T_k}\circ\sigma_{T_\ast}:\ensuremath{{\mathcal{C}}}(\beta_\ast)\xrightarrow\approx \ensuremath{{\mathcal{C}}}_{T_\ast}(\gamma_\ast)\xrightarrow\approx \ensuremath{{\mathcal{C}}}_{T}(\alpha_\ast)
\]
Proof: An element $Y$ of $\ensuremath{{\mathcal{C}}}(\gamma_\ast)$ lies in $\ensuremath{{\mathcal{C}}}_{T_\ast}(\gamma_\ast)$ if and only if it is ext-orthogonal to, but not equal to, $T_i$ for each $i<k$. The element $X=\sigma_{T_k}Y\in \ensuremath{{\mathcal{C}}}_{T_k}(\alpha_\ast)$ lies in $\ensuremath{{\mathcal{C}}}_T(\alpha_\ast)$ if and only if $X$ is ext-orthogonal to, but not equal to, $T_i$ for each $i<k$. Since $\sigma_{T_k}(T_i)=T_i$, the theorem for $k=1$ shows that these conditions are equivalent.
We now show that the bijection $\sigma_T:=\sigma_{T_k}\circ\sigma_{T_\ast}$ satisfies Proposition \ref{prop 1.8: Properties of sigma_T}.
\begin{enumerate}
\item[(a)] If $Y\in\ensuremath{{\mathcal{C}}}(\beta_\ast)$ then $\sigma_TY\in \ensuremath{{\mathcal{C}}}_T(\alpha_\ast)$ implies, by definition, that $\{Y,T_1,\cdots,T_k\}$ is a partial cluster tilting set in $\ensuremath{{\mathcal{C}}}(\alpha_\ast)$. So, $\sigma_T$ has Property (a).
\item[(b)] For any $Y\in\ensuremath{{\mathcal{C}}}(\beta_\ast)$ we have, by induction on $k$, that
\[
\ensuremath{{\mathcal{A}}}(\beta_\ast)\cap|Y|^\perp=\ensuremath{{\mathcal{A}}}(\beta_\ast)\cap|\sigma_{T_\ast}Y|^\perp=\ensuremath{{\mathcal{A}}}(\beta_\ast)\cap|\sigma_{T_k}\sigma_{T_\ast}Y|^\perp
\]
Since $\sigma_T=\sigma_{T_k}\circ\sigma_{T_\ast}$, Property (b) holds.
\item[(c)] By induction on $k$ we have the following for any $Y\in\ensuremath{{\mathcal{C}}}(\beta_\ast)$:
\[
\underline\dim\,Y+\ensuremath{{\field{R}}} T_\ast=\underline\dim\,\sigma_{T_\ast}Y+\ensuremath{{\field{R}}} T_\ast
\]
By the case $k=1$ we have
\[
\underline\dim\,\sigma_{T_\ast}Y+\ensuremath{{\field{R}}} T_k=\underline\dim\,\sigma_{T}Y+\ensuremath{{\field{R}}} T_k
\]
Since $\ensuremath{{\field{R}}} T=\ensuremath{{\field{R}}} T_\ast+\ensuremath{{\field{R}}} T_k$, we can put these together to get:
\begin{equation}\label{eq: Property (c)}
\underline\dim\,Y+\ensuremath{{\field{R}}} T=\underline\dim\,\sigma_{T}Y+\ensuremath{{\field{R}}} T
\end{equation}
which is equivalent to the statement that $\sigma_T$ satisfies Property (c).
\end{enumerate}
The uniqueness of $\sigma_T$ follows from the following observation.
\noindent\underline{Claim 2}: The inclusion map $\ensuremath{{\field{R}}}\beta_\ast\hookrightarrow \ensuremath{{\field{R}}}\alpha_\ast$ induces a linear isomorphism
\[
\lambda_T:\ensuremath{{\field{R}}}\beta_\ast\xrightarrow\approx\ensuremath{{\field{R}}}\alpha_\ast/\ensuremath{{\field{R}}} T
\]
In other words, $\sigma_TY$ is the unique element of $\ensuremath{{\mathcal{C}}}_T(\alpha_\ast)$ satisfying \eqref{eq: Property (c)}.
Proof: Since $\ensuremath{{\field{R}}}\beta_\ast$ and $\ensuremath{{\field{R}}} T$ have complementary dimensions in $\ensuremath{{\field{R}}}\alpha_\ast$, it suffices to show that they span $\ensuremath{{\field{R}}}\alpha_\ast$. Choose any exceptional sequence in $\ensuremath{{\mathcal{A}}}(\beta_\ast)$, for example the simple objects $(S_\ell,\cdots,S_1)$. Then $(S_\ell,\cdots,S_1,|T_1|,\cdots,|T_k|)$ is an exceptional sequence in $\ensuremath{{\mathcal{A}}}(\alpha_\ast)$. So, the dimension vectors of these modules form a basis for $\ensuremath{{\field{R}}}\alpha_\ast$. Since $S_i\in\ensuremath{{\mathcal{A}}}(\beta_\ast)$, $\underline\dim\,S_i\in\ensuremath{{\field{R}}}\beta_\ast$. Therefore $\ensuremath{{\field{R}}}\beta_\ast+\ensuremath{{\field{R}}} T=\ensuremath{{\field{R}}}\alpha_\ast$, proving Claim 2.\vs2
Property (d) and its converse are easy: $Y_1,Y_2\in \ensuremath{{\mathcal{C}}}(\beta_\ast)$ are ext-orthogonal iff $\sigma_{T_\ast}Y_1,\sigma_{T_\ast}Y_2$ are ext-orthogonal iff $\sigma_{T_k}\sigma_{T_\ast}Y_1,\sigma_{T_k}\sigma_{T_\ast}Y_2$ are ext-orthogonal. Therefore $\sigma_{T}=\sigma_{T_k}\sigma_{T_\ast}$ satisfies Property (d) and both $\sigma_T$ and $\sigma_T^{-1}$ take cluster tilting sets to cluster tilting sets.
This concludes the proof of Proposition \ref{prop 1.8: Properties of sigma_T} and Theorem \ref{thm:sigma-T is a bijection} and therefore also completes the definition of the cluster morphism category.
\section{Signed exceptional sequences}\label{sec 2: signed exceptional sequences}
We are now in a position to explore signed exceptional sequences and prove one of the main theorems of this paper: There is a bijection between signed exceptional sequences and ordered cluster tilting sets.
\subsection{Definition and basic properties}\label{ss 2.1: Def of signed exceptional sequence}
Let $\ensuremath{{\mathcal{A}}}$ be a finitely generated wide subcategory of $mod$-$\Lambda$. Recall Definition \ref{def: signed exceptional sequence}: a \emph{signed exceptional sequence} in $\ensuremath{{\mathcal{A}}}$ is a sequence $(X_1,X_2,\cdots,X_k)$ in $\ensuremath{{\mathcal{A}}}\cup \ensuremath{{\mathcal{A}}}[1]\subset\ensuremath{{\mathcal{D}}}^b(\ensuremath{{\mathcal{A}}})$ with the following properties.
\begin{enumerate}
\item $(|X_1|,\cdots,|X_k|)$ is an exceptional sequence. So, $|X_i|\in |X_j|^\perp$ for $i<j$.
\item If $X_j=Q[1]$ then $Q$ is a relatively projective object of $|X_{j+1},\cdots,X_k|^\perp$.
\end{enumerate}
In our notation, it is understood that perpendicular categories are taken inside the ambient category $\ensuremath{{\mathcal{A}}}$. Thus $|X_j|^\perp$ means $|X_j|^\perp\cap\ensuremath{{\mathcal{A}}}$.
Let $\ensuremath{{\mathcal{A}}}_j=|X_{j+1},\cdots,X_k|^\perp$. Then (2) is equivalent to the condition: $X_j\in\ensuremath{{\mathcal{C}}}(\ensuremath{{\mathcal{A}}}_j)$. Therefore, a signed exceptional sequence gives a sequence of composable cluster morphisms:
\[
\ensuremath{{\mathcal{A}}}=\ensuremath{{\mathcal{A}}}_k\xrightarrow{[X_k]}\ensuremath{{\mathcal{A}}}_{k-1}\xrightarrow{[X_{k-1}]}\cdots \xrightarrow{[X_{2}]}\ensuremath{{\mathcal{A}}}_1\xrightarrow{[X_{1}]}\ensuremath{{\mathcal{A}}}_0
\]
Conversely, given any composition of cluster morphisms $[Y_1]\circ[Y_2]\circ\cdots\circ[Y_k]:\ensuremath{{\mathcal{A}}}\to\ensuremath{{\mathcal{B}}}$ where each $Y_j$ is a one element cluster tilting set, the sequence $(Y_1,\cdots,Y_k)$ is a signed exceptional sequence in the domain $\ensuremath{{\mathcal{A}}}$.
By the composition law for cluster morphisms we have the following.
\begin{prop}\label{prop 2.1: formula for cluster morphism corresponding to signed exceptional sequence}
The cluster morphism corresponding to a signed exceptional sequence $(X_1,\cdots,X_k)$ in $\ensuremath{{\mathcal{A}}}$ is
$
[X_1]\circ[X_2]\circ\cdots\circ[X_k]=[T(1)]
$
where $T(j)=(T_j, T_{j+1}, \cdots, T_k)$ is the (ordered) partial cluster tilting set in $\ensuremath{{\mathcal{C}}}(\ensuremath{{\mathcal{A}}})$ given recursively as follows.
\begin{enumerate}
\item $T_k=X_k$.
\item Given $T(j)$, let $T_{j-1}=\sigma_{T(j)}X_{j-1}$.\qed
\end{enumerate}
\end{prop}
We call $T=T(1)=(T_1, \cdots, T_k)$ the ordered partial cluster tilting set in $\ensuremath{{\mathcal{C}}}(\ensuremath{{\mathcal{A}}})$ corresponding to $(X_1,\cdots,X_k)$.
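Operationally, the recursion in Proposition \ref{prop 2.1: formula for cluster morphism corresponding to signed exceptional sequence} can be sketched as a short program. This is only a schematic illustration: the callable sigma below is a hypothetical stand-in for the bijection $\sigma_T$, which we do not implement.

```python
def ordered_cluster_tilting_set(X, sigma):
    """Compute T(1) = (T_1, ..., T_k) from a signed exceptional sequence
    X = [X_1, ..., X_k], following the recursion of the proposition:
        T_k = X_k   and   T_{j-1} = sigma_{T(j)}(X_{j-1}).
    Here sigma(T, Y) is an abstract placeholder for sigma_T applied to Y."""
    T = [X[-1]]                          # T_k = X_k
    for j in range(len(X) - 1, 0, -1):   # j = k-1, ..., 1
        T.insert(0, sigma(T, X[j - 1]))  # prepend T_{j-1} = sigma_{T(j)} X_{j-1}
    return T
```

For example, passing the identity for sigma returns the sequence unchanged, which makes the bookkeeping of indices easy to check.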
As an example, consider the sequence of simple modules $(S_1,S_2,\cdots,S_n)$ in admissible order, i.e., so that $(S_n,S_{n-1},\cdots,S_1)$ is an exceptional sequence. Since each $S_k$ is projective in the right perpendicular category of $S_1,\cdots,S_{k-1}$, it can have either sign. So, there are $2^n$ possible signed exceptional sequences coming from this one exceptional sequence.
\begin{prop}\label{prop: 2 to n distinct clusters}
The cluster tilting sets corresponding to these $2^n$ signed exceptional sequences are all distinct.
\end{prop}
For example, the $2^3=8$ signed exceptional sequences and corresponding cluster tilting sets for the quiver $1\leftarrow 2\leftarrow 3$ are listed in Figure \ref{fig: 8 signed exceptional sequences} where $P_i,I_i,S_i$ are the $i$th projective, injective and simple modules.
\begin{figure}
\caption{The $2^n=8$ signed exceptional sequences of simple objects and corresponding ordered cluster tilting sets for the quiver $1\leftarrow 2\leftarrow 3$.}
\label{fig: 8 signed exceptional sequences}
\end{figure}
\begin{proof}
Note that the elements of each of these cluster tilting sets have a natural ordering since $T_n$ is supported at vertex 1, $T_{n-1}$ at vertices 1,2, etc. Suppose that $E_\ast,E_\ast'$ are two signed exceptional sequences whose underlying modules are the simple objects $S_n,\cdots,S_1$. Let $T$, $T'$ be the corresponding cluster tilting sets with their natural ordering. Let $j$ be maximal so that $E_j\neq E_j'$. Then $T_i=T_i'$ for $i>j$ and the support tilting object $T_j'\coprod T_{j+1}\coprod\cdots\coprod T_n$ is the mutation of the support tilting object $T_j\coprod T_{j+1}\coprod \cdots\coprod T_n$ in the $j$-direction. So, $T_j\neq T_j'$ making $T,T'$ nonisomorphic.
\end{proof}
\subsection{First main theorem}\label{ss 2.2: first main theorem}
We can now state and prove the first main theorem. Note that Proposition \ref{prop 2.1: formula for cluster morphism corresponding to signed exceptional sequence} assigns an {ordered cluster tilting set} to each signed exceptional sequence.
\begin{thm}\label{thm 2.3: bijection one}
There is a bijection $\theta_k$ from the set of isomorphism classes of signed exceptional sequences in $\ensuremath{{\mathcal{A}}}$ of length $k$ to the set of {ordered partial cluster tilting sets} in $\ensuremath{{\mathcal{A}}}$ of size $k$ which is uniquely characterized by the following properties.
\begin{enumerate}
\item If $\theta_k(X_1,\cdots,X_k)=T$ then $|X|^\perp=|T|^\perp$. Let $\ensuremath{{\mathcal{B}}}=|T|^\perp$.
\item If $\theta_k(X_1,\cdots,X_k)=T$ then $[T]=[X_1]\circ[X_2]\circ\cdots\circ[X_k]$ as cluster morphisms $\ensuremath{{\mathcal{A}}}\to \ensuremath{{\mathcal{B}}}$.
\item If $\theta_k(X_1,\cdots,X_k)=(T_1,\cdots,T_k)$ then $\theta_{k-j+1}(X_j,\cdots,X_k)=(T_j,\cdots, T_k)$ for all $1\le j\le k$.
\end{enumerate}
\end{thm}
To clarify the wording of the theorem we mean that there is a unique mapping $\theta_k$ satisfying the three listed conditions and that, furthermore, this mapping is a bijection.
\begin{proof}
The formula in Proposition \ref{prop 2.1: formula for cluster morphism corresponding to signed exceptional sequence} gives a function $\theta_k$ satisfying these three conditions. So, it remains to show that $\theta_k$ is uniquely determined and that it is a bijection. We prove both statements at the same time by induction on $k$.
If $k=1$ then Condition (2) implies that $T_1=X_1$. The two sets are both equal to $\ensuremath{{\mathcal{C}}}(\ensuremath{{\mathcal{A}}})$ and $\theta_1$ must be the identity map.
Suppose $k\ge2$ and $\theta_{k-1}$ is a uniquely determined bijection. Let $(X_1,\cdots,X_k)$ be a signed exceptional sequence. Condition (2) implies that $T=\theta_k(X_1,\cdots,X_k)$ is uniquely determined up to permutation of its elements. But Condition (3) for $j=k-1$ determines the last $k-1$ elements of $T$. So, the first element is also determined. So, the function $\theta_k$ is uniquely determined.
To show that $\theta_k$ is a bijection, we start with any rigid object $T=\coprod_{1\le j\le k}T_j$ with $k$ summands in a fixed order. This gives a cluster morphism $[T]:\ensuremath{{\mathcal{A}}}\to \ensuremath{{\mathcal{B}}}$. Let $T'=(T_2,T_3,\cdots,T_k)$. This gives a morphism $[T']:\ensuremath{{\mathcal{A}}}\to\ensuremath{{\mathcal{B}}}'$ where $\ensuremath{{\mathcal{B}}}\subset\ensuremath{{\mathcal{B}}}'\subset\ensuremath{{\mathcal{A}}}$. By induction on $k$, there is a unique signed exceptional sequence $Y$ of length $k-1$ so that $\theta_{k-1}(Y)=T'$. Since $T$ is rigid, $T_1$ lies in $\ensuremath{{\mathcal{C}}}_{T'}(\ensuremath{{\mathcal{A}}})$. By Theorem \ref{thm:sigma-T is a bijection}, there is a unique object $Y_0\in \ensuremath{{\mathcal{C}}}(\ensuremath{{\mathcal{B}}}')$ so that $\sigma_{T'}Y_0=T_1$. The recursive formula in Proposition \ref{prop 2.1: formula for cluster morphism corresponding to signed exceptional sequence} then gives $\theta_k(Y_0,Y)=T$.
\end{proof}
\begin{rem}
Theorem \ref{thm 2.3: bijection one} has been extended to the $m$-cluster category by the first author and to arbitrary finite dimensional algebras using $\tau$-tilting by Buan and Marsh. Details will appear elsewhere.
\end{rem}
The bijection between {ordered cluster tilting set}s and signed exceptional sequences can be used to define the composition of cluster morphisms.
\begin{cor}
If $[T]:\ensuremath{{\mathcal{A}}}_0\to \ensuremath{{\mathcal{A}}}_1$ and $[T']:\ensuremath{{\mathcal{A}}}_1\to \ensuremath{{\mathcal{A}}}_2$ are cluster morphisms, the composition $[T']\circ[T]:\ensuremath{{\mathcal{A}}}_0\to \ensuremath{{\mathcal{A}}}_2$ can be given as follows. Take two signed exceptional sequences $(X_1,\cdots,X_\ell)$ in $\ensuremath{{\mathcal{A}}}_1$ and $(Y_1,\cdots,Y_k)$ in $\ensuremath{{\mathcal{A}}}_0$ so that $\theta_k(Y)=T$ in some order and $\theta_\ell(X)=T'$ in some order. Then $[T']\circ [T]=[\theta_{k+\ell}(X,Y)]$.
\end{cor}
\begin{proof}
This follows immediately from Property (2) in Theorem \ref{thm 2.3: bijection one}.
\end{proof}
The inverse bijection $\theta_k^{-1}$ from {ordered cluster tilting set}s to signed exceptional sequences is given by the following ``twist'' formula which is based on \cite{SM}.
A finite set of vectors in $\ensuremath{{\field{Q}}}^n$ will be called \emph{nondegenerate} if it is linearly independent and satisfies the condition that the Euler-Ringel pairing $\brk{\cdot,\cdot}$ is nondegenerate on the span of any subset of the set of vectors.
\begin{defn}\label{def: twist equation}
We define the \emph{right twist} of any nondegenerate sequence of vectors $v_\ast=(v_1,\cdots,v_k)$ in $\ensuremath{{\field{Q}}}^n$ to be the unique sequence of vectors $\tau_+(v_\ast)=(w_1,\cdots,w_k)$ satisfying the following.
\begin{enumerate}
\item For each $j$, $w_j-v_j$ is a linear combination of $v_i$ for $i>j$.
\item $\brk{v_i,w_j}=0$ for all $i> j$.
\end{enumerate}
Note that, given (1), Condition (2) is equivalent to
($2'$) $\brk{w_i,w_j}=0$ for all $i> j$.
\noindent We say that $(w_\ast)$ is an \emph{integer right twist} of $(v_\ast)$ if each $w_j$ is an integer linear combination of the $v_i$.
\end{defn}
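Since Definition \ref{def: twist equation} is pure linear algebra, the right twist can be computed by solving, for each $j$, a small linear system in the coefficients of the $v_i$ with $i>j$. The following Python sketch does this with exact rational arithmetic. The pairing is assumed to be given by a matrix $E$ with $\brk{x,y}=x^TEy$; the matrix used in the example below encodes the Euler-Ringel form of the quiver $1\leftarrow 2\leftarrow 3$, an assumption made for illustration only.

```python
from fractions import Fraction

def form(E, x, y):
    """Euler-Ringel pairing <x, y> = x^T E y."""
    return sum(Fraction(xi) * eij * yj
               for xi, row in zip(x, E)
               for eij, yj in zip(row, y))

def solve(A, b):
    """Solve the square system A c = b exactly by Gauss-Jordan elimination."""
    n = len(b)
    M = [[Fraction(v) for v in row] + [Fraction(bi)] for row, bi in zip(A, b)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        p = M[col][col]
        M[col] = [v / p for v in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col]
                M[r] = [v - f * w for v, w in zip(M[r], M[col])]
    return [M[i][n] for i in range(n)]

def right_twist(vs, E):
    """tau_+(v_*): w_j = v_j + sum_{i>j} c_i v_i with <v_i, w_j> = 0
    for all i > j (conditions (1) and (2) of the definition)."""
    k = len(vs)
    ws = []
    for j in range(k):
        tail = list(range(j + 1, k))
        w = [Fraction(t) for t in vs[j]]
        if tail:
            # <v_i, v_j> + sum_m c_m <v_i, v_m> = 0 for each i > j
            A = [[form(E, vs[i], vs[m]) for m in tail] for i in tail]
            b = [-form(E, vs[i], vs[j]) for i in tail]
            for m, cm in zip(tail, solve(A, b)):
                w = [wt + cm * vm for wt, vm in zip(w, vs[m])]
        ws.append(w)
    return ws
```

With this $E$, the right twist of the ordered dimension vectors of the projectives $((1,1,1),(1,1,0),(1,0,0))$, i.e.\ $(\undim P_3,\undim P_2,\undim P_1)$, comes out to $((0,0,1),(0,1,0),(1,0,0))$, the dimension vectors of the simple modules.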
\begin{prop}\label{prop: signed exceptional sequences are nondegenerate}
The sequence of dimension vectors of any signed exceptional sequence $(X_1,\cdots,X_k)$ is nondegenerate with respect to the pairing $\brk{\cdot,\cdot}$. Furthermore, $\tau_+(\undim X_\ast)=(\undim X_\ast)$.
\end{prop}
\begin{proof}
Any subset of the $X_i$ forms an exceptional sequence. So the span of their dimension vectors is the span of the dimension vectors of a wide subcategory which is equivalent to the module category of a finite acyclic quiver. Thus $\brk{\cdot,\cdot}$ is nondegenerate on any such span. The equation $\tau_+(\undim X_\ast)=(\undim X_\ast)$ follows from Proposition \ref{prop 2.1: formula for cluster morphism corresponding to signed exceptional sequence} and the properties of $\sigma_T$ listed in Proposition \ref{prop 1.8: Properties of sigma_T}, in particular (c).
\end{proof}
We also need the following important theorem essentially due to Schofield.
\begin{thm}\label{thm: Schofield's observation}
Any partial cluster tilting set $\{T_1,\cdots,T_k\}$ giving a morphism $[T]:\ensuremath{{\mathcal{A}}}\to\ensuremath{{\mathcal{B}}}$ can be ordered in such a way that it forms a signed exceptional sequence.
\end{thm}
\begin{proof}
Schofield \cite{S92} proved this in the case when the $T_i$ lie in $\ensuremath{{\mathcal{A}}}$. This case extends easily to cluster tilting sets by putting the shifted projective objects last.
\end{proof}
Theorem \ref{thm: Schofield's observation} and Proposition \ref{prop: signed exceptional sequences are nondegenerate} imply that the set of dimension vectors of any partial cluster tilting set is nondegenerate, therefore, its right twist is defined.
\begin{thm}\label{thm: formula for theta inverse}
The sequence of dimension vectors of any ordered partial cluster tilting set $T=(T_1,\cdots,T_k)$ has an integer right twist $\tau_+(\undim T_i)=(\undim X_i)$ which gives the dimension vectors of the corresponding signed exceptional sequence $(X_1,\cdots,X_k)=\theta_k^{-1}(T)$.
\end{thm}
\begin{proof}
Let $T_{>j}=(T_{j+1},\cdots,T_k)$. Then it follows from the formula $T_j=\sigma_{T_{>j}}X_j$ that $\undim T_j-\undim X_j$ is an integer linear combination of the vectors $\undim T_i$ for $i>j$. By downward induction on $j$, this implies that the span of $\undim X_i$ for $i>j$ is equal to the span of the vectors $\undim T_i$ for $i>j$. So, the fact that $(X_i)$ is a signed exceptional sequence implies that $
\brk{\undim T_i,\undim X_j}=0
$ for $i>j$. Therefore, the sequence of dimension vectors $(\undim X_i)$ satisfies the definition of an integer right twist of $(\undim T_i)$.
\end{proof}
\subsection{Permutation of signed exceptional sequences}\label{ss 2.3: permutation of signed exc seq}
The question we address here is: When can the terms in a signed exceptional sequence be permuted? Without the signs, the answer is given by the following trivial observation.
\begin{prop}\label{permutation of exc seqs}
Suppose that $(M_1,\cdots,M_n)$ is an exceptional sequence in $mod\text-\Lambda$. Let $\sigma$ be any permutation of $n$. Then $(M_{\sigma(1)},\cdots,M_{\sigma(n)})$ is an exceptional sequence if and only if $M_i,M_j$ are hom-ext perpendicular whenever $i<j$ and $\sigma(i)>\sigma(j)$.\qed
\end{prop}
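The criterion in Proposition \ref{permutation of exc seqs} is purely combinatorial once the hom-ext perpendicular pairs are known. As an illustration (not part of any proof), the following sketch checks a permutation against a precomputed boolean table perp, where perp[i][j] records whether $M_i,M_j$ are hom-ext perpendicular; indices are 0-based.

```python
def inversions(sigma):
    """Inversions of a permutation sigma of {0, ..., n-1}: pairs (i, j)
    with i < j and sigma(i) > sigma(j).  Here sigma is given as a list."""
    n = len(sigma)
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if sigma[i] > sigma[j]]

def permutation_allowed(sigma, perp):
    """Criterion of the proposition: the permuted sequence is again an
    exceptional sequence iff M_i, M_j are hom-ext perpendicular for every
    inversion (i, j) of sigma."""
    return all(perp[i][j] for i, j in inversions(sigma))
```

By Proposition \ref{prop: permutation of sig exc seqs} below, the same test applies verbatim to signed exceptional sequences.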
We will show that the same holds for signed exceptional sequences. This is not completely obvious since there is a condition on which modules can be shifted. We consider the signed version of Proposition \ref{permutation of exc seqs} in the key case when $n=2$ and $\sigma$ is a transposition.
\begin{lem}\label{n=2 case of commuting sig exc seq}
Suppose that $(X,Y)$ is a signed exceptional sequence in $\ensuremath{{\mathcal{A}}}$ with corresponding ordered partial cluster tilting set $(Z,Y)$ in $\ensuremath{{\mathcal{C}}}(\ensuremath{{\mathcal{A}}})$ where $Z=\sigma_YX$. Then the following are equivalent.
\begin{enumerate}
\item $(Y,X)$ is a signed exceptional sequence in $\ensuremath{{\mathcal{A}}}$.
\item $(|Y|,|X|)$ is an exceptional sequence.
\item $|X|,|Y|$ are hom-ext orthogonal.
\item $Z,Y$ are hom orthogonal and $Z=X$.
\end{enumerate}
Furthermore, when this holds, $(Y,X)$ is the ordered partial cluster tilting set corresponding to the signed exceptional sequence $(Y,X)$. I.e., $\sigma_XY=Y$.
\end{lem}
\begin{proof}
It follows from the definitions that (1) implies (2) and that (2), (3) are equivalent.
$(3)\Rightarrow (4)$. By Property (c) of $\sigma_Y$ we know that $\undim Z=\undim X+c\undim Y$. Then
\[
\brk{\undim Y,\undim Z}=\brk{\undim Y,\undim X}+c\brk{\undim Y,\undim Y} =c\dim\End_\Lambda(Y)
\]
\[
\brk{\undim Z,\undim Y}=\brk{\undim X,\undim Y}+c\brk{\undim Y,\undim Y}=c\dim\End_\Lambda(Y)
\]
By Schofield's theorem above, one of these must be zero. So, $c=0$ and $Z=X$ since $Z$ is uniquely determined by its dimension vector. So, $|Z|,|Y|$ are hom-ext orthogonal. This implies (4) when $Y,Z$ have the same sign. So, it is left to consider the case when one of them, say $Y$, is a module and $X=Z=P[1]$ where $P$ is projective. Then $\Hom_{\ensuremath{{\mathcal{D}}}^b(\ensuremath{{\mathcal{A}}})}(P[1],Y)=0$ and $\Hom_{\ensuremath{{\mathcal{D}}}^b(\ensuremath{{\mathcal{A}}})}(Y,P[1])=\Ext^1(Y,P)=0$ by (3). So, (4) holds.
$(4)\Rightarrow(3)$. If $Z=X$ and $Y$ have the same sign, this is clear. So, suppose they have opposite signs. Say, $Y$ is a module and $Z=X=P[1]$. Since $Y,Z$ form a partial cluster tilting set we have $\Hom_\Lambda(P,Y)=0$. Also $\Ext^1_\Lambda(P,Y)=0$ since $P$ is projective. So, $Y\in |X|^\perp$. Since $(X,Y)$ is given to be a signed exceptional sequence, we also have $|X|\in Y^\perp$. So, $|X|,|Y|$ are hom-ext orthogonal.
$ (2),(4)\Rightarrow (1)$. Given that $(|Y|,|X|)$ is an exceptional sequence, we just need to check that the signs on $Y,X$ are admissible. But, by (4), $X,Y$ are either objects of $\ensuremath{{\mathcal{A}}}$ or shifted projective objects. So, by definition, their signs are admissible and $(Y,X)$ is a signed exceptional sequence.
Finally, the last statement $\sigma_XY=Y$ follows from Property (e) of the function $\sigma_X$.
\end{proof}
\begin{lem}\label{transpositions of sig exc seq}
Let $(X_1,\cdots,X_n)$ be a signed exceptional sequence with corresponding {ordered cluster tilting set} $(T_1,\cdots,T_n)$. Then, for each $i$, the following are equivalent.
\begin{enumerate}
\item $(X_1,\cdots,X_{i-1},X_{i+1},X_i,X_{i+2},\cdots,X_n)$ is a signed exceptional sequence.
\item $(|X_1|,\cdots,|X_{i-1}|,|X_{i+1}|,|X_i|,|X_{i+2}|,\cdots,|X_n|)$ is an exceptional sequence.
\item $|X_i|,|X_{i+1}|$ are hom-ext perpendicular.
\end{enumerate}
Furthermore, when these hold, the signed exceptional sequence in \emph{(1)} corresponds to the {ordered cluster tilting set} $(T_1,\cdots,T_{i-1},T_{i+1},T_i,T_{i+2},\cdots,T_n)$.
\end{lem}
\begin{proof} The equivalence of (1), (2) and (3) follows from Lemma \ref{n=2 case of commuting sig exc seq} applied to the signed exceptional sequence $(X_i,X_{i+1})$ in $\ensuremath{{\mathcal{A}}}=|X_{i+2},\cdots,X_n|^\perp$.
To prove the last statement, we use the fact, also proved in Lemma \ref{n=2 case of commuting sig exc seq}, that $\sigma_{X_i}X_{i+1}=X_{i+1}$ and $\sigma_{X_{i+1}}X_{i}=X_{i}$. Then $T_{i+1}=\sigma_{T'}X_{i+1}$ where $T'=(T_{i+2},\cdots,T_n)$ and, using the notation $T''=(T_{i+1},T_{i+2},\cdots,T_n)$, we also have
\[
T_i=\sigma_{T''}X_i =\sigma_{T'}\sigma_{X_{i+1}}X_i=\sigma_{T'}X_i\,.
\]
If $(M_1,\cdots,M_n)$ is the {ordered cluster tilting set} associated to the signed exceptional sequence in (1) then we must have $M_j=T_j$ for $j\neq i,i+1$ and
\[
M_{i+1}=\sigma_{T'}X_i=T_i
\]
\[
M_i=\sigma_{T'}\sigma_{X_i}X_{i+1}=\sigma_{T'}X_{i+1}=T_{i+1}
\]
proving the last claim of the lemma.
\end{proof}
Given any permutation $\sigma$ of $n$, the \emph{inversions} of $\sigma$ are defined to be pairs of integers $(i,j)$ so that $i<j$ and $\sigma(i)>\sigma(j)$.
\begin{prop}\label{prop: permutation of sig exc seqs}
Suppose that $(M_1,\cdots,M_n)$ is a signed exceptional sequence in $mod\text-\Lambda$ with corresponding {ordered cluster tilting set} $(T_1,\cdots,T_n)$. Let $\sigma$ be any permutation of $n$. Then the following are equivalent.
\begin{enumerate}
\item $(M_{\sigma(1)},\cdots,M_{\sigma(n)})$ is a signed exceptional sequence.
\item $(|M_{\sigma(1)}|,\cdots,|M_{\sigma(n)}|)$ is an exceptional sequence.
\item $|M_i|,|M_j|$ are hom-ext orthogonal for all inversions $(i,j)$ of $\sigma$.
\end{enumerate}
When this holds, the {ordered cluster tilting set} corresponding to $(M_{\sigma(i)})$ is $(T_{\sigma(1)},\cdots,T_{\sigma(n)})$.
\end{prop}
\begin{proof} If $\sigma$ has only one inversion then it is a simple transposition $(i,i+1)$ and the proposition follows from Lemma \ref {transpositions of sig exc seq} in that case. So, suppose $\sigma$ has $k\ge2$ inversions and the proposition holds for $k-1$. Then $\sigma$ is the product of $k$ simple transpositions: $\sigma=\tau_1\tau_2\cdots\tau_k$. Let $\sigma'=\tau_1\cdots\tau_{k-1}$. Then it is an elementary fact that $\sigma'$ has $k-1$ inversions each of which is an inversion of $\sigma$. We can now prove the proposition for $k$.
If $(M_i),(M_{\sigma(i)})$ are signed exceptional sequences then $(|M_i|), (|M_{\sigma(i)}|)$ are exceptional sequences. By Proposition \ref{permutation of exc seqs}, this implies (3). Conversely, suppose that $|M_i|,|M_j|$ are hom-ext orthogonal for every inversion $(i,j)$ of $\sigma$. Then, a fortiori, the same holds for every inversion $(i,j)$ of $\sigma'$. By induction on $k$ we have that $(M_{\sigma'(1)},\cdots,M_{\sigma'(n)})$ is a signed exceptional sequence with corresponding {ordered cluster tilting set} $(T_{\sigma'(1)},\cdots,T_{\sigma'(n)})$. By Lemma \ref{transpositions of sig exc seq} we can apply the last simple transposition $\tau_k$ to show that $(M_{\sigma(i)})$ is a signed exceptional sequence with corresponding {ordered cluster tilting set} $(T_{\sigma(i)})$.
\end{proof}
\subsection{{\it c}-vectors}\label{ss 2.4: c-vectors}
In lieu of the definition, we first recall the following characterizing property of $c$-vectors associated to a cluster tilting set. (See \cite{IOTW3}, \cite{ST}, \cite{IOs}, \cite{IOTW2a}.) Since there are two notions of correspondence, we use the term \emph{exchange correspondence} for this association.
\begin{thm}\label{thm: characterization of c-vectors}
Given an ordered cluster tilting set $T=(T_1,\cdots,T_n)$ for $mod\text-\Lambda$, the exchange-corresponding $c$-vectors are real Schur roots $\beta_1,\cdots,\beta_n$ which are uniquely determined by the following equation:
\begin{equation}\label{eq characterizing c-vectors}
\brk{\undim T_i,\beta_j}=-f_i\delta_{ij}
\end{equation}
where $f_i=\dim_K\End_\Lambda(T_i)$.
\end{thm}
It follows immediately that the set of $c$-vectors $\beta_i$ determines the cluster tilting set $T$.
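Equation \eqref{eq characterizing c-vectors} determines each $\beta_j$ as the solution of a square linear system. The sketch below solves it with exact arithmetic. As before, the Euler-Ringel pairing is assumed to be $\brk{x,y}=x^TEy$ for a matrix $E$, and the quiver $1\leftarrow 2\leftarrow 3$ with the ordered cluster tilting set $(P_3,P_2,P_1)$ used in the example is an illustrative assumption; for a path algebra over a field $K$ with $\End_\Lambda(T_i)=K$ we take $f_i=1$.

```python
from fractions import Fraction

def form(E, x, y):
    """Euler-Ringel pairing <x, y> = x^T E y."""
    return sum(Fraction(xi) * eij * yj
               for xi, row in zip(x, E)
               for eij, yj in zip(row, y))

def solve(A, b):
    """Solve the square system A c = b exactly by Gauss-Jordan elimination."""
    n = len(b)
    M = [[Fraction(v) for v in row] + [Fraction(bi)] for row, bi in zip(A, b)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        p = M[col][col]
        M[col] = [v / p for v in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col]
                M[r] = [v - f * w for v, w in zip(M[r], M[col])]
    return [M[i][n] for i in range(n)]

def c_vectors(T, E, f):
    """Solve <dim T_i, beta_j> = -f_i * delta_ij for the c-vectors beta_j."""
    n = len(T)
    # Row i of the coefficient matrix is the linear functional <dim T_i, ->.
    A = [[sum(Fraction(T[i][s]) * E[s][t] for s in range(len(E)))
          for t in range(len(E))]
         for i in range(n)]
    return [solve(A, [-f[i] if i == j else 0 for i in range(n)])
            for j in range(n)]
```

For this choice of $T$ the computed $c$-vectors are the negatives of the dimension vectors of the simple modules, and one can check directly that $\brk{\undim T_i,\beta_j}=-\delta_{ij}$.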
In \cite{ST}, Speyer and Thomas gave a characterization of $c$-vectors. In terms of signed exceptional sequences their theorem can be phrased as follows.
\begin{thm}\cite{ST}\label{ST: c vectors are exceptional sequences}
A set $\{\beta_1,\cdots,\beta_n\}$ of real Schur roots is the set of $c$-vectors of a cluster tilting set if and only if there is a signed exceptional sequence $X_1,\cdots,X_n$ with $\undim X_i=-\beta_{\sigma(i)}$ for some permutation $\sigma$ so that $X_1,\cdots,X_k$ are hom-orthogonal shifted modules and $X_{k+1},\cdots,X_n$ are hom-orthogonal modules.
\end{thm}
The next theorem shows that, under certain conditions, the bijection between ordered cluster tilting sets and signed exceptional sequences is equivalent to the exchange correspondence between cluster tilting sets and $c$-vectors. It is not immediate how Theorem \ref{thm: Which sig exc seqs are c-vectors?} and Theorem \ref{ST: c vectors are exceptional sequences} are related.
\begin{thm}[Exchange-correspondence=bijective correspondence]\label{thm: Which sig exc seqs are c-vectors?}
Given any signed exceptional sequence $(X_1,\cdots,X_n)$, the negatives of the dimension vectors $\gamma_i=\undim X_i$ form the set of $c$-vectors for some cluster tilting set if and only if the ordered cluster tilting set $(T_1,\cdots,T_n)$ bijectively corresponding to $(X_i)$ has the property that
\begin{equation}\label{eq: good order for cluster}
\Hom_\Lambda(|T_i|,|T_j|)=0=\Ext^1_\Lambda(|T_i|,|T_j|)
\end{equation}
for all $i<j$. Furthermore, $(-\gamma_1,\cdots,-\gamma_n)$ is equal to the ordered set of $c$-vectors exchange-corresponding to the ordered cluster tilting set $(T_i)$.
\end{thm}
\begin{proof} Suppose that $(T_1,\cdots,T_n)$ is an ordered cluster tilting set satisfying \eqref{eq: good order for cluster} and let $(X_1,\cdots,X_n)$ be the corresponding signed exceptional sequence. Then we will show that the vectors $-\gamma_i=-\undim X_i$ satisfy \eqref {eq characterizing c-vectors} and are thus the $c$-vectors of the cluster tilting set.
We will first find the solution of the equations \eqref {eq characterizing c-vectors}. Condition \eqref{eq: good order for cluster} implies that $a_{ij}:=\brk{\undim T_i,\undim T_j}=0$ if $i<j$. We also have $a_{ii}=\dim \End T_i=f_i$. By elementary linear algebra this implies that there is a unipotent lower triangular matrix $(b_{jk})$ so that
\[
\brk{
\undim T_i,\sum_j b_{jk}\undim T_j
}=\sum_j\brk{\undim T_i,\undim T_j}b_{jk}=\sum_j a_{ij}b_{jk}=f_i\delta_{ik}
\]
Therefore, $-\beta_k=-\sum_j b_{jk}\undim T_j$ are the $c$-vectors of the cluster tilting set. \vs2
\underline{Claim}: $\beta_k=\gamma_k=\undim X_k$ for each $k$.
\vs2
Proof: By Theorem \ref{thm: formula for theta inverse}, $\undim X_j-\undim T_j$ is a linear combination of $\undim T_i$ for $i>j$. If we let $k$ be maximal so that $\beta_k\neq\undim X_k$ then this tells us that $\beta_k-\undim X_k$ is a linear combination of $\undim T_i$ for $i>k$, say,
\[
\beta_k-\undim X_k=\sum a_i\undim T_i\neq 0.
\]
Let $j$ be minimal so that $a_j\neq0$. Then
\[
\brk{\undim T_j,\beta_k-\undim X_k}=\sum a_i\brk{\undim T_j,\undim T_i}=a_jf_j\neq0.
\]
But this is impossible since $\brk{\undim T_j,\beta_k}=0$ by construction of $\beta_k$ and $\brk{\undim T_j,\undim X_k}=0$ since $|X_k|\in |T_j|^\perp$.\vs2
Conversely, given that $-\gamma_i=-\undim X_i$ are the $c$-vectors of an ordered cluster tilting set $T'=(T_1',\cdots,T_n')$ we will show that $T'=T$ and that the cluster tilting set satisfies \eqref{eq: good order for cluster}.
Using Theorem \ref{thm: Schofield's observation}, there exists a permutation $\sigma$ of $n$ so that $\Hom_\Lambda(|T'_{\sigma(i)}|,|T'_{\sigma(j)}|)=0=\Ext^1_\Lambda(|T'_{\sigma(i)}|,|T'_{\sigma(j)}|)$ for $i<j$. By what we have shown in the first part of this proof, this implies that the signed exceptional sequence $(X_{\sigma(i)})$ corresponding to $(T'_{\sigma(i)})$ has dimension vectors whose negatives $-\gamma_{\sigma(i)}=-\undim X_{\sigma(i)}$ form the ordered set of $c$-vectors. Since $X$ and $(X_{\sigma(i)})$ are both signed exceptional sequences, we can apply Proposition \ref{prop: permutation of sig exc seqs} to conclude that $T'$ is the ordered cluster tilting set corresponding to $X$. In other words, $T'=T$ as claimed. This proves all the statements of the theorem.
\end{proof}
For example, in Figure \ref{fig: 8 signed exceptional sequences}, the top 4 {ordered cluster tilting set}s satisfy \eqref{eq: good order for cluster}. So, the dimension vectors of the corresponding signed exceptional sequences satisfy \eqref{eq characterizing c-vectors} and are thus the negatives of the $c$-vectors corresponding to the cluster tilting set. Also, the top 4 signed exceptional sequences in Figure \ref{fig: 8 signed exceptional sequences} satisfy the criteria of Theorem \ref{ST: c vectors are exceptional sequences}.
Since the objects in a cluster tilting set are ext-orthogonal, it is easy to see that condition \eqref{eq: good order for cluster} is equivalent to the condition
\begin{equation}\label{eq: good order for cluster B}
\brk{\undim T_i,\undim T_j}=0
\end{equation}
for all $i<j$.
By Schofield's observation (Theorem \ref {thm: Schofield's observation}), we get the following corollary.
\begin{cor}
Let $(T_1,\cdots,T_n)$ be an ordered cluster tilting set with corresponding ordered set of $c$-vectors $(-\gamma_1,\cdots,-\gamma_n)$. Then there exists a permutation $\sigma$ so that $(\gamma_{\sigma(1)},\cdots,\gamma_{\sigma(n)})$ are the dimension vectors of a signed exceptional sequence. Furthermore, $\sigma$ has this property if and only if
\begin{equation}\label{eq: good permutation order for cluster}
\brk{\undim T_{\sigma(i)},\undim T_{\sigma(j)}}=0
\end{equation}
for all $i<j$.
\end{cor}
\begin{proof} The existence of $\sigma$ satisfying \eqref{eq: good permutation order for cluster} follows from the observation of Schofield. By Theorem \ref{thm: Which sig exc seqs are c-vectors?} this implies that $(\gamma_{\sigma(i)})$ are the dimension vectors of the signed exceptional sequence corresponding to the ordered cluster tilting set $(T_{\sigma(i)})$.
Conversely, suppose that $\sigma$ is a permutation of $n$ so that $(\gamma_{\sigma(i)})$ are the dimension vectors of a signed exceptional sequence. Let $(M_{\sigma(1)},\cdots,M_{\sigma(n)})$ be the corresponding {ordered cluster tilting set}. By Theorem \ref{thm: Which sig exc seqs are c-vectors?}, this cluster tilting set has the property that $\brk{\undim M_{\sigma(i)},\undim M_{\sigma(j)}}=0$ for $i<j$ and $(-\gamma_{\sigma(i)})$ is the corresponding ordered set of $c$-vectors. Since ordered cluster tilting sets are determined by their ordered set of $c$-vectors, this implies that $M_{\sigma(i)}=T_{\sigma(i)}$ for all $i$ proving the second half of the corollary.\end{proof}
\begin{rem} Using Theorems \ref{thm: formula for theta inverse} and \ref{thm: Which sig exc seqs are c-vectors?}, this corollary gives another method to find the $c$-vectors of a cluster tilting set $(T_1,\cdots,T_n)$: First find $\sigma$ satisfying \eqref{eq: good permutation order for cluster}. Then
\[
(-\gamma_{\sigma(i)})=\tau_+(\undim T_{\sigma(i)}).
\]
\end{rem}
\section{Classifying space of the cluster morphism category}\label{sec 3: classifying space of G(S)}
In this section we state the second main theorem of this paper, give an extension of the theorem better suited to induction, outline the proof, and then verify each step of the outline, reviewing along the way such basic topics as Quillen's Theorem A.
\subsection{Statement of the theorem}\label{ss 3.1: statement of theorem}
Here is the second main theorem.
\begin{thm}\label{thm 3.1: 2nd main theorem}
The classifying space of the cluster morphism category of any hereditary algebra of finite representation type is a $K(\pi,1)$ where $\pi$ is the picture group of the algebra as defined in \cite{IOTW4}.
\end{thm}
The fundamental group of the cluster morphism category is described below together with a generalization of this theorem to extension closed full subcategories of the module category. This generalization is easier to prove since we can apply induction on the number of roots in the extension closed subset.
Recall that, for any pair of real Schur roots $\alpha,\beta$, $hom(\alpha,\beta)=\dim\Hom_\Lambda(M_\alpha,M_\beta)$ and $ext(\alpha,\beta)=\dim\Ext^1(M_\alpha,M_\beta)$. We say that $\alpha,\beta$ are \emph{hom-orthogonal} if $M_\alpha,M_\beta$ are hom-orthogonal.
\begin{defn}\label{def: convex set of roots}
A set $\ensuremath{{\mathcal{S}}}$ of real Schur roots of $mod$-$\Lambda$ will be called \emph{convex} if it satisfies the following two conditions.
\begin{enumerate}
\item Given any wide subcategory $\ensuremath{{\mathcal{A}}}(\alpha_\ast)$ of $mod$-$\Lambda$ whose simple objects have dimension vectors $\alpha_i\in \ensuremath{{\mathcal{S}}}$, the set $ab(\alpha_\ast)$ of all dimension vectors of all exceptional modules in $\ensuremath{{\mathcal{A}}}(\alpha_\ast)$ is a finite subset of $\ensuremath{{\mathcal{S}}}$.
\item There is a partial ordering of $\ensuremath{{\mathcal{S}}}$ so that for all $\alpha,\beta\in\ensuremath{{\mathcal{S}}}$ with $\alpha<\beta$ we have $hom(\beta,\alpha)=0=ext(\alpha,\beta)$.
\end{enumerate}
\end{defn}
For example, in $A_3$ with straight orientation, $\ensuremath{{\mathcal{S}}}=\{\alpha,\beta\}$ with $\alpha=(1,1,0)^t$, $\beta= (0,1,1)^t$ satisfies (1) since its two elements are not hom-orthogonal. So, the elements of $ab(\alpha,\beta)$ are not required to be in $\ensuremath{{\mathcal{S}}}$. This is possible since the middle term of the extension is not indecomposable. The partial ordering is $\alpha<\beta$.
If $\Lambda$ is of finite representation type then all roots are real Schur roots and the set of all roots is convex. The set of all preprojective (or preinjective) roots, i.e., the dimension vectors of the preprojective (resp. preinjective) modules in $mod$-$\Lambda$, is also convex. We note that, in Definition \ref{def: convex set of roots}, the simple objects of $\ensuremath{{\mathcal{A}}}(\alpha_\ast)$ are not necessarily simple in $mod\text-\Lambda$.
\begin{defn}\label{def: G(S) for S convex}
If $\ensuremath{{\mathcal{S}}}$ is any convex set of real Schur roots, let $G(\ensuremath{{\mathcal{S}}})$ be the group given by generators and relations as follows.
\begin{enumerate}
\item $G(\ensuremath{{\mathcal{S}}})$ has one generator $x(\beta)$ for every $\beta\in\ensuremath{{\mathcal{S}}}$.
\item For each pair $(\alpha,\beta)$ of hom-orthogonal roots in $\ensuremath{{\mathcal{S}}}$ so that $ext(\alpha,\beta)=0$, we have the relation:
\[
x(\alpha)x(\beta)=\prod x(a_i\alpha+b_i\beta)
\]
where the product is over all $a_i\alpha+b_i\beta\in ab(\alpha,\beta)$ in order of the ratio $a_i/b_i$.\end{enumerate}
\end{defn}
When $\ensuremath{{\mathcal{S}}}$ is the set of all positive roots for a Dynkin quiver, $G(\ensuremath{{\mathcal{S}}})$ is the \emph{picture group} of the quiver as defined in \cite{IOTW4}.
We observe that the order of objects in the product $\prod x(\gamma_i)$ is the right to left order (``backwards'' order) of the objects $M_{\gamma_i}$ in the AR quiver of $\ensuremath{{\mathcal{A}}}(\alpha,\beta)$. For example, in the case $B_2$, the modulated quiver $\ensuremath{{\field{R}}}\leftarrow\ensuremath{{\field{C}}}$ with simple roots $\alpha=(1,0)^t$ and $\beta=(0,1)^t$, the AR quiver is:
\[
\xymatrixrowsep{10pt}\xymatrixcolsep{10pt}
\xymatrix{
& P_2\ar[dr] && I_2\\
P_1\ar[ru]&&
I_1\ar[ru]
}
\]
These modules have dimension vectors $\underline\dim\,P_1,\underline\dim\,P_2,\underline\dim\,I_1,\underline\dim\,I_2=\alpha,2\alpha+\beta,\alpha+\beta,\beta$. The ratios $a_i/b_i$ for these modules are: $\infty,2,1,0$ respectively. So, the order is reversed in the product and we get:
\[
x(\alpha)x(\beta)=x(\beta)x(\alpha+\beta)x(2\alpha+\beta)x(\alpha)
\]
or: $[x(\alpha),x(\beta)]=x(\alpha+\beta)x(2\alpha+\beta)$ where we always use the notation:
\[
[x,y]:=y^{-1}xyx^{-1}
\]
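As a quick sanity check (an illustration, not part of the paper's argument), the ordering of the factors by the ratios $a_i/b_i$ can be computed mechanically. The following Python sketch recovers the order of the factors in the $B_2$ relation above from the coefficient pairs $(a_i,b_i)$.

```python
# Sort the roots a*alpha + b*beta of ab(alpha, beta) by the ratio a/b
# (with a/0 read as +infinity); by the definition of G(S), this is the
# order of the factors in the product x(alpha)x(beta) = prod x(a_i alpha + b_i beta).
def product_order(roots):
    """roots: list of (a, b) coefficient pairs; returns them sorted by a/b ascending."""
    return sorted(roots, key=lambda ab: float('inf') if ab[1] == 0 else ab[0] / ab[1])

# ab(alpha, beta) for B_2: alpha, 2alpha+beta, alpha+beta, beta as (a, b) pairs.
b2 = [(1, 0), (2, 1), (1, 1), (0, 1)]
print(product_order(b2))  # [(0, 1), (1, 1), (2, 1), (1, 0)]
```

The output lists $\beta$, $\alpha+\beta$, $2\alpha+\beta$, $\alpha$, matching the right-hand side $x(\beta)x(\alpha+\beta)x(2\alpha+\beta)x(\alpha)$ above.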
\begin{defn}
If $\ensuremath{{\mathcal{S}}}$ is any convex set of real Schur roots, let $\ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}})$ be the full subcategory of the cluster morphism category whose objects are all $\ensuremath{{\mathcal{A}}}(\alpha_\ast)$ where $\alpha_\ast\subseteq\ensuremath{{\mathcal{S}}}$ is a finite set of hom-orthogonal roots which form an exceptional sequence. (By definition of convexity this implies that the dimension vector of every exceptional object in $\ensuremath{{\mathcal{A}}}(\alpha_\ast)$ lies in $\ensuremath{{\mathcal{S}}}$.)
\end{defn}
Note that $\ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}})$ always has at least one object $\ensuremath{{\mathcal{A}}}(\emptyset)$. In the classifying space $B\ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}})$, we use this as the base point. The choice of base point is important in order to make the fundamental group of $B\ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}})$ well-defined.
\begin{thm}\label{thm 3.5: G(S) is K(pi,1)}
Let $\ensuremath{{\mathcal{S}}}$ be any finite convex set of real Schur roots. Then the classifying space of the cluster morphism category $\ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}})$ is a $K(\pi,1)$ with $\pi=\pi_1\ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}})=G(\ensuremath{{\mathcal{S}}})$:
\[
B\ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}})\simeq BG(\ensuremath{{\mathcal{S}}})=K(G(\ensuremath{{\mathcal{S}}}),1).
\]
\end{thm}
\subsection{HNN extensions and outline of proof}\label{ss 3.2: outline of G(S)}
The proof of Theorem \ref{thm 3.5: G(S) is K(pi,1)} will be by induction on $|\ensuremath{{\mathcal{S}}}|$. If $\ensuremath{{\mathcal{S}}}$ is empty, then $\ensuremath{{\mathcal{G}}}(\emptyset)$ has only one object $\ensuremath{{\mathcal{A}}}(\emptyset)$ and one morphism: the identity map on this object. The classifying space is therefore a single point which is $K(\pi,1)$ with $\pi=\{e\}$, the trivial group. So, the theorem holds in this case.
If $\ensuremath{{\mathcal{S}}}$ is nonempty we will construct two convex proper subsets $\ensuremath{{\mathcal{S}}}_\omega\subseteq \ensuremath{{\mathcal{S}}}_0\subset \ensuremath{{\mathcal{S}}}$ (in \eqref{eq: def of S-omega} and Lemma \ref{lem: construction of S-0} below). Then, by induction on $|\ensuremath{{\mathcal{S}}}|$, the classifying spaces $B\ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}}_0)$ and $B\ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}}_\omega)$ will be $K(\pi,1)$'s with $\pi=G(\ensuremath{{\mathcal{S}}}_0)$ and $G(\ensuremath{{\mathcal{S}}}_\omega)$, respectively. We will show that $B\ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}})$ can be obtained from $B\ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}}_0)$ and $B\ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}}_\omega)$ in the following steps.
First we show (Lemma \ref{lem: G(S) is G+ cup G-}) that $\ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}})$ is the union of two subcategories $\ensuremath{{\mathcal{G}}}_+,\ensuremath{{\mathcal{G}}}_-$ so that
\[
B\ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}})=B\ensuremath{{\mathcal{G}}}_+\cup B\ensuremath{{\mathcal{G}}}_-
\]
and
\[
B\ensuremath{{\mathcal{G}}}_+\cap B\ensuremath{{\mathcal{G}}}_-=B\ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}}_\omega)\,{\textstyle{\coprod}}\, B\ensuremath{{\mathcal{H}}}(\ensuremath{{\mathcal{S}}},\omega)
\]
where, by Proposition \ref{prop: isomorphism H=G(S-w)}, there is an isomorphism
\[
\varphi:\ensuremath{{\mathcal{H}}}(\ensuremath{{\mathcal{S}}},\omega)\xrightarrow\cong \ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}}_\omega)
\]
Next, we show (Lemma \ref{lem: key lemma}) that there is a homotopy equivalence
\[
B\ensuremath{{\mathcal{G}}}_+\simeq B\ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}}_0)
\]
and (Lemma \ref{lem: BG- is a cylinder}) a homeomorphism
\[
B\ensuremath{{\mathcal{G}}}_-\cong B\ensuremath{{\mathcal{H}}}(\ensuremath{{\mathcal{S}}},\omega)\times[0,1]\cong B\ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}}_\omega)\times[0,1]
\]
So,
\[
B\ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}})=B\ensuremath{{\mathcal{G}}}_+\cup B\ensuremath{{\mathcal{G}}}_-=B\ensuremath{{\mathcal{G}}}_+\cup B\ensuremath{{\mathcal{H}}}(\ensuremath{{\mathcal{S}}},\omega) \times[0,1]
\]
We also show in Lemma \ref{lem: BG- is a cylinder} that the cylinder $B\ensuremath{{\mathcal{H}}}(\ensuremath{{\mathcal{S}}},\omega)\times [0,1]$ is attached to $B\ensuremath{{\mathcal{G}}}_+$ on its two ends by mappings
\[
B\varphi_i:B\ensuremath{{\mathcal{H}}}(\ensuremath{{\mathcal{S}}},\omega)\cong B\ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}}_\omega)\to B\ensuremath{{\mathcal{G}}}_+
\]
for $i=0,1$ induced by functors $\varphi_i:\ensuremath{{\mathcal{H}}}(\ensuremath{{\mathcal{S}}},\omega)\to \ensuremath{{\mathcal{G}}}_+$ where $\varphi_0:\ensuremath{{\mathcal{H}}}(\ensuremath{{\mathcal{S}}},\omega)\hookrightarrow \ensuremath{{\mathcal{G}}}_+$ is the inclusion functor and $\varphi_1$ is the composition of $\varphi:\ensuremath{{\mathcal{H}}}(\ensuremath{{\mathcal{S}}},\omega)\cong \ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}}_\omega)$ with the inclusion $\ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}}_\omega)\hookrightarrow \ensuremath{{\mathcal{G}}}_+$.
Next, we show that the induced maps on fundamental groups
\[
\pi_1(\varphi_i):G(\ensuremath{{\mathcal{S}}}_\omega)\hookrightarrow G(\ensuremath{{\mathcal{S}}}_0)
\]
are monomorphisms where $\pi_1(\varphi_1)=\varphi$ and $\pi_1(\varphi_0)=\psi$ in the notation below. This is shown in Proposition \ref{prop: G-omega to GS is split mono} for $\pi_1(\varphi_1)$ and Proposition \ref{prop: psi has left inverse} for $\pi_1(\varphi_0)=\psi$.
This will be enough to prove Theorem \ref{thm 3.5: G(S) is K(pi,1)} because of the following well-known result about HNN extensions.
\begin{defn}
An \emph{HNN extension} of a group $G$ is given by a subgroup $H$ which is embedded in $G$ in two different ways. Let $\varphi,\psi:H\to G$ be two such group monomorphisms. Then $N(H,G,\varphi,\psi)$ is the quotient of the free product $G\ast \brk{t}$ of $G$ with the free group on one generator $t$ modulo the relation
\[
t\varphi(h)=\psi(h)t
\]
for every $h\in H$.
\end{defn}
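To make the definition concrete, here is a minimal check (a standard example, not taken from this paper): with $G=H=\ensuremath{{\field{Z}}}$, $\varphi=\mathrm{id}$ and $\psi(h)=2h$, the HNN extension $N(H,G,\varphi,\psi)$ is the Baumslag--Solitar group $BS(1,2)$, whose defining relation $t\varphi(h)=\psi(h)t$ reads $ta=a^2t$. This can be verified in the usual $2\times 2$ matrix representation.

```python
# Verify the HNN relation t*phi(a) = psi(a)*t, i.e. t*a = a^2*t, for BS(1,2)
# using the matrix representation a -> [[1,1],[0,1]], t -> [[2,0],[0,1]].

def matmul(A, B):
    """Product of two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

a = [[1, 1], [0, 1]]  # generator of G = Z
t = [[2, 0], [0, 1]]  # stable letter of the HNN extension

lhs = matmul(t, a)             # t * phi(a)
rhs = matmul(matmul(a, a), t)  # psi(a) * t = a^2 * t
print(lhs == rhs)  # True
```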
Given $G,H,\varphi,\psi$ suppose that $BG=K(G,1)$, $BH=K(H,1)$ and $f,g:BH\to BG$ are continuous maps so that
\begin{enumerate}
\item $f$ is pointed (takes basepoint to basepoint) and induces the group homomorphism $\pi_1(f)=\varphi:H\hookrightarrow G$ and
\item $g$ is not pointed but there is a path $\gamma$ from $g(\ast)$ to the basepoint of $BG$ so that the induced homomorphism on $\pi_1$ is
\[
\pi_1(g,\gamma)=\psi:H\hookrightarrow G
\]
Here $\pi_1(g,\gamma)$ sends $[\alpha]\in \pi_1BH=H$, represented by the loop $\alpha$ in $BH$, to $[\gamma^{-1}g(\alpha)\gamma]\in \pi_1BG=G$.
\end{enumerate}
\begin{thm}\label{thm: HNN graph of groups}
The space
\[
BG\cup BH\times [0,1]
\]
given by attaching the two ends of the cylinder $BH\times [0,1]$ to $BG$ by the mappings $f,g$ is a $K(\pi,1)$ with $\pi=N(H,G,\varphi,\psi)$.
\end{thm}
The space $BG\cup BH\times[0,1]$ is an example of a ``graph of groups'' which is shown to be a $K(\pi,1)$ in \cite{Hatcher}.
\begin{rem}
The isomorphism
\[
N(H,G,\varphi,\psi)\cong \pi_1(BG\cup BH\times[0,1])
\]
is the inclusion map on $G=\pi_1BG$ and sends the generator $t$ of $N(H,G,\varphi,\psi)$ to the homotopy class of the path $\gamma^{-1}\beta$ where $\beta$ is the path $\beta(t)=(\ast,t)\in (\ast\times [0,1])\subseteq BH\times[0,1]$.
\end{rem}
We will fill in the details of this outline and show (Theorem \ref{thm: pi-1 G(S) is G(S)}) that $G(\ensuremath{{\mathcal{S}}})$ is the corresponding HNN extension of $G(\ensuremath{{\mathcal{S}}}_0)$. We conclude that $B\ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}})=K(G(\ensuremath{{\mathcal{S}}}),1)$.
\subsection{Definitions and proofs}\label{ss 3.3: definitions and proofs}
Suppose that $\ensuremath{{\mathcal{S}}}=\{\alpha\}$. Then $\ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}})$ has two objects: $\ensuremath{{\mathcal{A}}}(\emptyset)$ and $\ensuremath{{\mathcal{A}}}(\alpha)$ and it has two nonidentity morphisms: $[\alpha]$ and $[-\alpha]:\ensuremath{{\mathcal{A}}}(\alpha)\to \ensuremath{{\mathcal{A}}}(\emptyset)$. Thus the classifying space is two points connected by two edges. This is a circle with fundamental group $\ensuremath{{\field{Z}}}$. This is isomorphic to the group $G(\alpha)=\brk{x(\alpha)}$. So, $B\ensuremath{{\mathcal{G}}}(\{\alpha\})=S^1=K(\ensuremath{{\field{Z}}},1)$.
The proof is by induction on $|\ensuremath{{\mathcal{S}}}|$. Recall that $\ensuremath{{\mathcal{S}}}$ is a finite convex set of real Schur roots. By convexity, the terms in the commutation relation for $x(\alpha),x(\beta)$ lie in $\ensuremath{{\mathcal{S}}}$. So, the group $G(\ensuremath{{\mathcal{S}}})$ is defined.
\begin{lem}\label{lem: construction of S-0}
In any finite, nonempty, convex set of real Schur roots $\ensuremath{{\mathcal{S}}}$ there is an $\omega\in\ensuremath{{\mathcal{S}}}$ so that $\ensuremath{{\mathcal{S}}}_0:=\ensuremath{{\mathcal{S}}}\backslash \omega$ has the following properties.
\begin{enumerate}
\item $hom(\omega,\alpha)=0$ for all $\alpha\in\ensuremath{{\mathcal{S}}}_0$.
\item $ext(\alpha,\omega)=0$ for all $\alpha\in\ensuremath{{\mathcal{S}}}_0$.
\item $\ensuremath{{\mathcal{S}}}_0$ is convex.
\end{enumerate}
\end{lem}
\begin{rem}\label{rem:unique map M to Momega m}
This implies that, for any $M\in\ensuremath{{\mathcal{A}}}(\alpha_\ast)\in \ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}})$, there is a uniquely determined exact sequence $M_0\rightarrowtail M\twoheadrightarrow M_\omega^m$ where $M_0\in\ensuremath{{\mathcal{A}}}(\alpha_\ast\backslash\omega)\in\ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}}_0)$. Equivalently, whenever $\omega$ is an element of $\alpha_\ast$, it is a source in the quiver of $\ensuremath{{\mathcal{A}}}(\alpha_\ast)$. So, any projective object in $\ensuremath{{\mathcal{A}}}(\alpha_\ast\backslash\omega)$ is also projective in $\ensuremath{{\mathcal{A}}}(\alpha_\ast)$.
\end{rem}
\begin{proof}
Take a partial ordering of $\ensuremath{{\mathcal{S}}}$ as given in the definition of convexity and let $\omega$ be any maximal element. Then (1), (2), (3) are clearly satisfied.
\end{proof}
Since $\ensuremath{{\mathcal{S}}}_0$ has one fewer element than $\ensuremath{{\mathcal{S}}}$, the theorem is true for $\ensuremath{{\mathcal{S}}}_0$. In other words, $B\ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}}_0)$ is $K(G(\ensuremath{{\mathcal{S}}}_0),1)$. We will show that $G(\ensuremath{{\mathcal{S}}})$ is an HNN extension of $G(\ensuremath{{\mathcal{S}}}_0)$ and that $B\ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}})$ is a graph of groups for this group extension and therefore a $K(\pi,1)$ with $\pi=G(\ensuremath{{\mathcal{S}}})$.
Let $\ensuremath{{\mathcal{S}}}_{\omega}$ be the set of all $\gamma\in\ensuremath{{\mathcal{S}}}$ so that $hom(\gamma,\omega)=0$. In particular, $\gamma\neq\omega$. Since $ext(\gamma,\omega)=0$ for all $\gamma\in\ensuremath{{\mathcal{S}}}$, this is a linear condition:
\begin{equation}\label{eq: def of S-omega}
\ensuremath{{\mathcal{S}}}_\omega=\{\gamma\in\ensuremath{{\mathcal{S}}}\,|\, \brk{\gamma,\omega}=0\}
\end{equation}
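For a concrete illustration (an example chosen here, not a computation from the paper): for the quiver $1\to 2\to 3$ of type $A_3$, the Euler form is $\brk{x,y}=\sum x_iy_i-x_1y_2-x_2y_3$, and the linear condition $\brk{\gamma,\omega}=0$ cutting out $\ensuremath{{\mathcal{S}}}_\omega$ can be checked root by root. Taking $\omega=(0,0,1)^t$ for illustration:

```python
# Euler form for the A_3 quiver 1 -> 2 -> 3:
# <x, y> = x1*y1 + x2*y2 + x3*y3 - x1*y2 - x2*y3.
def euler(x, y):
    return sum(xi * yi for xi, yi in zip(x, y)) - x[0] * y[1] - x[1] * y[2]

positive_roots = [(1, 0, 0), (0, 1, 0), (0, 0, 1),
                  (1, 1, 0), (0, 1, 1), (1, 1, 1)]
omega = (0, 0, 1)
# S_omega is cut out of S by the linear condition <gamma, omega> = 0.
s_omega = [g for g in positive_roots if g != omega and euler(g, omega) == 0]
print(s_omega)  # [(1, 0, 0), (0, 1, 1), (1, 1, 1)]
```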
\begin{lem}
Suppose that $\alpha,\beta\in \ensuremath{{\mathcal{S}}}$ are hom-orthogonal and $ext(\alpha,\beta)=0$.
\begin{enumerate}
\item If $\alpha,\beta\in\ensuremath{{\mathcal{S}}}_{\omega}$ then $ab(\alpha,\beta)\subseteq \ensuremath{{\mathcal{S}}}_{\omega}$. So, $\ensuremath{{\mathcal{S}}}_{\omega}$ is convex.
\item If $\ensuremath{{\mathcal{S}}}_\omega$ does not contain both $\alpha$ and $\beta$ then $\{\alpha,\beta\}\cap \ensuremath{{\mathcal{S}}}_\omega=ab(\alpha,\beta)\cap \ensuremath{{\mathcal{S}}}_\omega$.
\end{enumerate}
\end{lem}
\begin{proof} Since every element of $ab(\alpha,\beta)$ is a nonnegative linear combination of $\alpha,\beta$, the linear condition $\brk{-,\omega}=0$ holds on all elements if it holds for either $\alpha$ or $\beta$ and at least one other element. This proves (1) and (2) in the case when $\{\alpha,\beta\}\cap\ensuremath{{\mathcal{S}}}_\omega$ is nonempty.
If $\alpha,\beta\notin \ensuremath{{\mathcal{S}}}_\omega$ then $\brk{\alpha,\omega}>0$ and $\brk{\beta,\omega}>0$ so $\brk{\gamma,\omega}>0$ and thus $\gamma\notin\ensuremath{{\mathcal{S}}}_\omega$ for any positive linear combination $\gamma$ of $\alpha,\beta$. This proves the remaining case of (2).
\end{proof}
\begin{prop}\label{prop: G-omega to GS is split mono}
The group homomorphism \[
G(\ensuremath{{\mathcal{S}}}_{\omega})\hookrightarrow G(\ensuremath{{\mathcal{S}}})\]
induced by the inclusion $\ensuremath{{\mathcal{S}}}_{\omega}\subseteq \ensuremath{{\mathcal{S}}}$ has a left inverse. Since $\ensuremath{{\mathcal{S}}}_{\omega}\subseteq \ensuremath{{\mathcal{S}}}_0\subseteq \ensuremath{{\mathcal{S}}}$, this implies that $G(\ensuremath{{\mathcal{S}}}_\omega)$ is a retract of both $G(\ensuremath{{\mathcal{S}}}_0)$ and $G(\ensuremath{{\mathcal{S}}})$.
\end{prop}
The homomorphism $\pi_1(\varphi_1)$ in the outline is the map $G(\ensuremath{{\mathcal{S}}}_\omega)\hookrightarrow G(\ensuremath{{\mathcal{S}}}_0)$ induced by the inclusion $\ensuremath{{\mathcal{S}}}_\omega\subseteq \ensuremath{{\mathcal{S}}}_0$.
\begin{proof}
A retraction $r:G(\ensuremath{{\mathcal{S}}})\to G(\ensuremath{{\mathcal{S}}}_\omega)$ can be defined as follows.
\[
r(x(\alpha))=\begin{cases} x(\alpha) & \text{if } \alpha\in \ensuremath{{\mathcal{S}}}_\omega\\
1 & \text{otherwise}
\end{cases}
\]
Since the relations in both groups are of the form $x(\alpha)x(\beta)=\prod x(\gamma_i)$ where the product is over all $\gamma_i\in ab(\alpha,\beta)$, the lemma shows that $r$ preserves relations.
\end{proof}
Let $\ensuremath{{\mathcal{H}}}(\ensuremath{{\mathcal{S}}},\omega)$ be the full subcategory of $\ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}})$ of all objects which do not lie in $\ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}}_0)$. These are $\ensuremath{{\mathcal{A}}}=\ensuremath{{\mathcal{A}}}(\beta_\ast)$ so that $\omega\in\beta_\ast$, i.e., $M_{\omega}$ is a simple object of $\ensuremath{{\mathcal{A}}}$. The disjoint subcategories $\ensuremath{{\mathcal{H}}}(\ensuremath{{\mathcal{S}}},\omega)$ and $\ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}}_0)$ of $\ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}})$ together contain all the objects of $\ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}})$. There are no morphisms from $\ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}}_0)$ to $\ensuremath{{\mathcal{H}}}(\ensuremath{{\mathcal{S}}},\omega)$ and there are two types of morphisms from $\ensuremath{{\mathcal{H}}}(\ensuremath{{\mathcal{S}}},\omega)$ to $\ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}}_0)$.
\begin{defn}\label{def: positive and negative morphisms}
By a \emph{negative morphism} we mean a cluster morphism $[T]:\ensuremath{{\mathcal{A}}}(\alpha_\ast,\omega)\to\ensuremath{{\mathcal{A}}}(\beta_\ast)$ from an object of $\ensuremath{{\mathcal{H}}}(\ensuremath{{\mathcal{S}}},\omega)$ to an object of $\ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}}_0)$ so that $T$ contains the shifted projective object $P_\omega[1]$. A \emph{positive morphism} is a morphism $[T]:\ensuremath{{\mathcal{A}}}(\alpha_\ast,\omega)\to\ensuremath{{\mathcal{A}}}(\beta_\ast)$ with $\ensuremath{{\mathcal{A}}}(\beta_\ast)\in\ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}}_0)$ which is not negative.
\end{defn}
We note that the target $\ensuremath{{\mathcal{A}}}(\beta_\ast)$ of a negative morphism necessarily lies in $\ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}}_\omega)$. And any positive morphism $[T]$ must contain a module $T_0$ which maps onto $M_\omega$ since, otherwise, $|T|^\perp=\ensuremath{{\mathcal{A}}}(\beta_\ast)$ would contain $M_\omega$.
\begin{prop}
The composition of any positive (resp. negative) morphism with any morphism in $\ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}})$ is positive (resp. negative).
\end{prop}
We say that the positive morphisms form a \emph{two-sided ideal} in the category $\ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}})$. The negative morphisms also form an ideal which is disjoint from the ideal of positive morphisms.
\begin{proof}
Suppose that $[T]:\ensuremath{{\mathcal{A}}}(\alpha_\ast,\omega)\to\ensuremath{{\mathcal{A}}}(\beta_\ast)$ is positive. Equivalently, $T$ contains some $T_0$ which maps onto the module $M_\omega$. Then any composition $[R]\circ [T]=[T,\sigma_T^{-1}R]$ will also contain $T_0$ and thus be positive. Also, any composition
\[
[T]\circ[S]=[S,\sigma_S^{-1}T]:\ensuremath{{\mathcal{A}}}(\alpha_\ast,\omega)\xrightarrow{[S]} \ensuremath{{\mathcal{A}}}(\beta_\ast,\omega)\xrightarrow{[T]}\ensuremath{{\mathcal{A}}}(\gamma_\ast)
\] will contain $\sigma_S^{-1}T_0\in \ensuremath{{\field{R}}}\alpha_\ast\oplus\ensuremath{{\field{R}}}\omega$ which is congruent to $T_0$ modulo $\ensuremath{{\field{R}}} S\subseteq\ensuremath{{\field{R}}}\alpha_\ast$ and therefore will have positive $\ensuremath{{\field{R}}}\omega$-coordinate. So, $[T]\circ[S]$ will be positive. The negative case is similar.
\end{proof}
\begin{lem}\label{lem: unique factorization of negative morphisms}
Any negative morphism $\ensuremath{{\mathcal{A}}}(\alpha_\ast,\omega)\to \ensuremath{{\mathcal{A}}}(\beta_\ast)$ factors uniquely through $[P_\omega[1]]:\ensuremath{{\mathcal{A}}}(\alpha_\ast,\omega)\to\ensuremath{{\mathcal{A}}}(\alpha_\ast)$.
\[
\xymatrixrowsep{15pt}\xymatrixcolsep{10pt}
\xymatrix{
\ensuremath{{\mathcal{A}}}(\alpha_\ast,\omega) \ar[rr]\ar[dr]_{[P_\omega[1]]}& &\ensuremath{{\mathcal{A}}}(\beta_\ast)\\
&\ensuremath{{\mathcal{A}}}(\alpha_\ast)\ar@{-->}[ur]_{\exists![T]}
}
\]
\end{lem}
\begin{proof}
Any negative morphism has the form $[P_\omega,T]$ by definition. To be ext-orthogonal to $P_\omega[1]$ each $T_i\in T$ must lie in $\ensuremath{{\mathcal{A}}}(\alpha_\ast)$. So, $\sigma_{P_\omega[1]}(T)=T$ is the unique partial cluster tilting set in $\ensuremath{{\mathcal{A}}}(\alpha_\ast)$ so that $[T]\circ[P_\omega[1]]=[P_\omega[1],T]$.
\end{proof}
\begin{prop}\label{prop: isomorphism H=G(S-w)}
There is an isomorphism of categories
\[
\varphi:\ensuremath{{\mathcal{H}}}(\ensuremath{{\mathcal{S}}},\omega)\xrightarrow\cong \ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}}_{\omega})
\]
given on objects by $\varphi\ensuremath{{\mathcal{A}}}(\alpha_\ast,\omega)=\ensuremath{{\mathcal{A}}}(\alpha_\ast)$ and on morphisms by $\varphi[T]=[T]$. Furthermore, inside the larger category $\ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}})$, there is a natural transformation from the inclusion functor $\iota:\ensuremath{{\mathcal{H}}}(\ensuremath{{\mathcal{S}}},\omega)\hookrightarrow \ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}})$ to $\varphi:\ensuremath{{\mathcal{H}}}(\ensuremath{{\mathcal{S}}},\omega)\cong \ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}}_\omega)\hookrightarrow \ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}})$ given by $[P_\omega[1]]:\ensuremath{{\mathcal{A}}}(\alpha_\ast,\omega)\to \ensuremath{{\mathcal{A}}}(\alpha_\ast)$.
\end{prop}
\begin{proof} First, $\varphi$ is a bijection on objects since $\ensuremath{{\mathcal{A}}}(\alpha_\ast)$ is an object of $\ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}}_\omega)$ if and only if each $\alpha_i$ is hom-orthogonal to $\omega$ which is equivalent to $\ensuremath{{\mathcal{A}}}(\alpha_\ast,\omega)$ being in $\ensuremath{{\mathcal{H}}}(\ensuremath{{\mathcal{S}}},\omega)$.
Let $P_\omega,P_{\alpha_i}$ be the relatively projective objects of $\ensuremath{{\mathcal{A}}}(\alpha_\ast,\omega)$. Then each $P_{\alpha_i}\in\ensuremath{{\mathcal{A}}}(\alpha_\ast)$. So, each shifted projective object $P_{\alpha_i}[1]$ in $\ensuremath{{\mathcal{C}}}(\alpha_\ast)$ lies in $\ensuremath{{\mathcal{C}}}(\alpha_\ast,\omega)$. Thus, $\ensuremath{{\mathcal{C}}}(\alpha_\ast)\subseteq\ensuremath{{\mathcal{C}}}(\alpha_\ast,\omega)$.
A morphism $[T]:\ensuremath{{\mathcal{A}}}(\alpha_\ast)\to\ensuremath{{\mathcal{A}}}(\beta_\ast)$ in $\ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}}_\omega)$ is given by a partial cluster tilting set $T\subseteq \ensuremath{{\mathcal{C}}}(\alpha_\ast)$ so that $|T|^\perp\cap\ensuremath{{\mathcal{A}}}(\alpha_\ast)=\ensuremath{{\mathcal{A}}}(\beta_\ast)$. \vs2
\noindent\underline{Claim:} $|T|^\perp\cap\ensuremath{{\mathcal{A}}}(\alpha_\ast,\omega)=\ensuremath{{\mathcal{A}}}(\beta_\ast,\omega)$. So, $T$, considered as a partial cluster tilting set in $\ensuremath{{\mathcal{C}}}(\alpha_\ast,\omega)$, gives a morphism $[T]:\ensuremath{{\mathcal{A}}}(\alpha_\ast,\omega)\to\ensuremath{{\mathcal{A}}}(\beta_\ast,\omega)$.\vs2
\noindent Proof: Since $M_\omega\in T^\perp$ by definition of $\ensuremath{{\mathcal{S}}}_\omega$, we have $|T|^\perp\cap\ensuremath{{\mathcal{A}}}(\alpha_\ast,\omega)\supseteq\ensuremath{{\mathcal{A}}}(\beta_\ast,\omega)$. Conversely, let $M\in |T|^\perp\cap\ensuremath{{\mathcal{A}}}(\alpha_\ast,\omega)$. Then there is a short exact sequence $M_0\rightarrowtail M\twoheadrightarrow M_\omega^m$ where $M_0\in \ensuremath{{\mathcal{A}}}(\alpha_\ast)$. Since $M,M_\omega\in |T|^\perp$, we must have $M_0\in |T|^\perp\cap \ensuremath{{\mathcal{A}}}(\alpha_\ast)=\ensuremath{{\mathcal{A}}}(\beta_\ast)$. But this implies that $M$ lies in $\ensuremath{{\mathcal{A}}}(\beta_\ast,\omega)$ as required, proving the claim.
\vs2
Conversely, given any morphism $[T]:\ensuremath{{\mathcal{A}}}(\alpha_\ast,\omega)\to \ensuremath{{\mathcal{A}}}(\beta_\ast,\omega)$, we can compose with $[\overline P_\omega[1]]:\ensuremath{{\mathcal{A}}}(\beta_\ast,\omega)\to\ensuremath{{\mathcal{A}}}(\beta_\ast)$, where $\overline P_\omega$ is the projective cover of $M_\omega$ in $\ensuremath{{\mathcal{A}}}(\beta_\ast,\omega)$, to get a negative morphism $\ensuremath{{\mathcal{A}}}(\alpha_\ast,\omega)\to\ensuremath{{\mathcal{A}}}(\beta_\ast)$. By the lemma, we get an induced morphism $\varphi[T]:\ensuremath{{\mathcal{A}}}(\alpha_\ast)\to\ensuremath{{\mathcal{A}}}(\beta_\ast)$ which is the unique morphism making the following diagram commute.
\[
\xymatrix{
\ensuremath{{\mathcal{A}}}(\alpha_\ast,\omega)\ar[d]_{P_\omega[1]}\ar[r]^{[T]} &
A(\beta_\ast,\omega)\ar[d]^{\overline P_\omega[1]}\\
\ensuremath{{\mathcal{A}}}(\alpha_\ast)\ar[r]^{\varphi[T]=[T]} &
A(\beta_\ast)
}
\]
This diagram implies at the same time that $\varphi$ is a functor and that $[P_\omega[1]]$ is a natural transformation. For example, given any morphism $[S]:\ensuremath{{\mathcal{A}}}(\beta_\ast,\omega)\to\ensuremath{{\mathcal{A}}}(\beta'_\ast,\omega)$ we have:
\[
[\overline P'_\omega[1]]\circ [S]\circ [T]=\varphi [S]\circ [\overline P_\omega[1]]\circ [T]=\varphi[S]\circ\varphi[T]\circ[P_\omega[1]]
\]
showing that $\varphi([S]\circ[T])=\varphi[S]\circ\varphi[T]$. By the Claim proved above, $\varphi:\ensuremath{{\mathcal{H}}}(\ensuremath{{\mathcal{S}}},\omega)\to\ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}}_\omega)$ is an isomorphism of categories.
\end{proof}
We can now make precise the structure of the category $\ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}})$ as given in the outline. The union of disjoint subcategories $\ensuremath{{\mathcal{H}}}(\ensuremath{{\mathcal{S}}},\omega)\coprod \ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}}_0)$ contains all of the objects of $\ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}})$ by definition. There are no morphisms from $\ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}}_0)$ to $\ensuremath{{\mathcal{H}}}(\ensuremath{{\mathcal{S}}},\omega)$. The morphisms from $\ensuremath{{\mathcal{H}}}(\ensuremath{{\mathcal{S}}},\omega)$ to $\ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}}_0)$ fall into two classes: positive and negative morphisms as defined above. Thus we have:
\begin{lem}\label{lem: G(S) is G+ cup G-} $\ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}})$ is the union of two subcategories:
\[
\ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}})=\ensuremath{{\mathcal{G}}}_+(\ensuremath{{\mathcal{S}}},\omega)\cup \ensuremath{{\mathcal{G}}}_-(\ensuremath{{\mathcal{S}}},\omega)
\]
where $\ensuremath{{\mathcal{G}}}_+(\ensuremath{{\mathcal{S}}},\omega)$ is the union of $\ensuremath{{\mathcal{H}}}(\ensuremath{{\mathcal{S}}},\omega)\coprod \ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}}_0)$ with all positive morphisms and $\ensuremath{{\mathcal{G}}}_-(\ensuremath{{\mathcal{S}}},\omega)$ is the union of $\ensuremath{{\mathcal{H}}}(\ensuremath{{\mathcal{S}}},\omega)\coprod \ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}}_\omega)$ and all negative morphisms. (In $\ensuremath{{\mathcal{G}}}_-$ we include only the targets of the negative morphisms.) So,
\[
\ensuremath{{\mathcal{G}}}_+(\ensuremath{{\mathcal{S}}},\omega)\cap \ensuremath{{\mathcal{G}}}_-(\ensuremath{{\mathcal{S}}},\omega)=\ensuremath{{\mathcal{H}}}(\ensuremath{{\mathcal{S}}},\omega)\,{\textstyle{\coprod}}\, \ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}}_\omega).
\]
\end{lem}
From the definition of the classifying space of a category, we will obtain:
\begin{lem}\label{lem: decomposition of BG} We have an analogous decomposition of the topological space $B\ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}})$:
\[
B\ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}})=B\ensuremath{{\mathcal{G}}}_+(\ensuremath{{\mathcal{S}}},\omega)\cup B\ensuremath{{\mathcal{G}}}_-(\ensuremath{{\mathcal{S}}},\omega)
\]
\[
B\ensuremath{{\mathcal{G}}}_+(\ensuremath{{\mathcal{S}}},\omega)\cap B\ensuremath{{\mathcal{G}}}_-(\ensuremath{{\mathcal{S}}},\omega)=B\ensuremath{{\mathcal{H}}}(\ensuremath{{\mathcal{S}}},\omega)\,{\textstyle{\coprod}}\, B\ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}}_\omega).
\]
\end{lem}
By the unique factorization of negative morphisms given in Lemma \ref {lem: unique factorization of negative morphisms}, we then show:
\begin{lem}\label{lem: BG- is a cylinder}
The classifying space $B\ensuremath{{\mathcal{G}}}_-(\ensuremath{{\mathcal{S}}},\omega)$ is homeomorphic to a cylinder:
\[
B\ensuremath{{\mathcal{G}}}_-(\ensuremath{{\mathcal{S}}},\omega)=B\ensuremath{{\mathcal{H}}}(\ensuremath{{\mathcal{S}}},\omega)\times[0,1]
\]
The end $B\ensuremath{{\mathcal{H}}}(\ensuremath{{\mathcal{S}}},\omega)\times\{0\}$ of this cylinder is attached to $B\ensuremath{{\mathcal{G}}}_+(\ensuremath{{\mathcal{S}}},\omega)$ by the inclusion $B\ensuremath{{\mathcal{H}}}(\ensuremath{{\mathcal{S}}},\omega)\subseteq B\ensuremath{{\mathcal{G}}}_+(\ensuremath{{\mathcal{S}}},\omega)$ and the other end by the mapping $B\varphi:B\ensuremath{{\mathcal{H}}}(\ensuremath{{\mathcal{S}}},\omega)\cong B\ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}}_\omega)\subseteq B\ensuremath{{\mathcal{G}}}_+(\ensuremath{{\mathcal{S}}},\omega)$ induced by the functor $\varphi:\ensuremath{{\mathcal{H}}}(\ensuremath{{\mathcal{S}}},\omega)\cong \ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}}_\omega)$.
\end{lem}
In another key lemma proved below, we will see that $B\ensuremath{{\mathcal{G}}}_+(\ensuremath{{\mathcal{S}}},\omega)$ is homotopy equivalent to $B\ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}}_0)$ which is a $K(\pi,1)$ by induction since $|\ensuremath{{\mathcal{S}}}_0|=|\ensuremath{{\mathcal{S}}}|-1$. We will then use Theorem \ref{thm: HNN graph of groups} to conclude that $B\ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}})$ is a $K(\pi,1)$.
\subsection{Classifying space of a category and Lemmas \ref {lem: decomposition of BG}, \ref{lem: BG- is a cylinder}}\label{ss 3.4: classifying space of a category}
We first recall the definition of the classifying space of a category.
\subsubsection{Classifying space of a category}
The classifying space of any small category $\ensuremath{{\mathcal{C}}}$ is the geometric realization of its \emph{nerve}: $B\ensuremath{{\mathcal{C}}}=|\ensuremath{{\mathcal{N}}}_\bullet\ensuremath{{\mathcal{C}}}|$ where $\ensuremath{{\mathcal{N}}}_\bullet\ensuremath{{\mathcal{C}}}$ is the simplicial set which in degree $n$ is the set of all sequences of $n$ composable morphisms in $\ensuremath{{\mathcal{C}}}$:
\[
\ensuremath{{\mathcal{N}}}_n\ensuremath{{\mathcal{C}}}:=\coprod_{X,Y\in\ensuremath{{\mathcal{C}}}}\ensuremath{{\mathcal{C}}}_n(X,Y)
\]
where $\ensuremath{{\mathcal{C}}}_n(X,Y)$ is the set of all directed paths of length $n$ from $X$ to $Y$ in $\ensuremath{{\mathcal{C}}}$:
\[
\ensuremath{{\mathcal{C}}}_n(X,Y):=\left\{
X=X_0\xrightarrow{f_1} X_1\xrightarrow{f_2} X_2\xrightarrow{f_3} \cdots\xrightarrow{f_n} X_n=Y
\right\}
\]
To simplify notation and clarify the case $n=0$, we will sometimes add redundant information to the elements of the set $\ensuremath{{\mathcal{C}}}_n(X,Y)$. Namely, we add all compositions of morphisms and all identity morphisms of all objects $X_i$ in the sequence. Then, a \emph{path of length $n$} in $\ensuremath{{\mathcal{C}}}$ becomes a collection of morphisms $f_{ij}:X_i\to X_j$ for $0\le i\le j\le n$ so that $f_{jk}\circ f_{ij}=f_{ik}$ for all $0\le i\le j\le k\le n$, and so that $f_{ii}$ is the identity morphism of $X_i$ for each $i$. When $n=0$ we have only the identity morphism $f_{00}$ of $X=X_0=Y$. (So, $\ensuremath{{\mathcal{C}}}_0(X,Y)$ is empty when $X\neq Y$.)
The simplicial structure maps for $\ensuremath{{\mathcal{N}}}_\bullet\ensuremath{{\mathcal{C}}}$ are given as follows. Let $[n]:=\{0,1,\cdots,n\}$. Then for any set mappings $a:[n]\to [m]$ so that $0\le a(i)\le a(j)\le m$ for all $0\le i\le j\le n$ we have the mapping $a^\ast:\ensuremath{{\mathcal{N}}}_m\ensuremath{{\mathcal{C}}}\to\ensuremath{{\mathcal{N}}}_n\ensuremath{{\mathcal{C}}}$ given by
\[
a^\ast((p,q)\mapsto f_{pq})=((i,j)\mapsto f_{a(i)a(j)})
\]
The \emph{classifying space} of $\ensuremath{{\mathcal{C}}}$ is the geometric realization of $\ensuremath{{\mathcal{N}}}_\bullet\ensuremath{{\mathcal{C}}}$ which is the topological space given by
\[
B\ensuremath{{\mathcal{C}}}=|\ensuremath{{\mathcal{N}}}_\bullet\ensuremath{{\mathcal{C}}}|:=\coprod_{n\ge0} \ensuremath{{\mathcal{N}}}_n \ensuremath{{\mathcal{C}}}\times \Delta^n/\sim
\]
with the quotient topology where $\Delta^n$ is the standard $n$-simplex with vertices $v_0,\cdots,v_n$ and the equivalence relation is given by
\[
(f,a_\ast(t))\sim (a^\ast f,t)
\]
for all $f\in \ensuremath{{\mathcal{N}}}_m\ensuremath{{\mathcal{C}}}$, $t\in\Delta^n$ and $a:[n]\to[m]$. The mapping $a_\ast:\Delta^n\to\Delta^m$ is the unique affine linear mapping which sends $v_i$ to $v_{a(i)}$ for all $i\in[n]$.
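For example (a routine check from the definitions above), the face and degeneracy maps of $\ensuremath{{\mathcal{N}}}_\bullet\ensuremath{{\mathcal{C}}}$ are special cases of $a^\ast$. The monotone map $a:[1]\to[2]$ with $a(0)=0$, $a(1)=2$ induces the face map
\[
a^\ast\left(X_0\xrightarrow{f_1}X_1\xrightarrow{f_2}X_2\right)=\left(X_0\xrightarrow{f_{02}}X_2\right),\qquad f_{02}=f_2\circ f_1,
\]
which composes the two morphisms, while the map $a:[1]\to[0]$ induces the degeneracy sending the vertex $X_0$ to the path $X_0\xrightarrow{f_{00}}X_0$ given by the identity morphism.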
\subsubsection{Proof of Lemma \ref{lem: decomposition of BG}} By definition,
\[
B\ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}})=\coprod_n \ensuremath{{\mathcal{N}}}_n\ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}})\times\Delta^n/\sim
\]
So, $\ensuremath{{\mathcal{N}}}_n\ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}})$ is the disjoint union of four sets: $\ensuremath{{\mathcal{N}}}_n\ensuremath{{\mathcal{H}}}(\ensuremath{{\mathcal{S}}},\omega)$, $\ensuremath{{\mathcal{N}}}_n\ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}}_0)$, the set of all paths $(f_{ij})$ which include one positive morphism, which we call \emph{positive paths}, and the set of all paths including one negative morphism, which we call \emph{negative paths}.
But, all positive paths lie in $\ensuremath{{\mathcal{N}}}_n\ensuremath{{\mathcal{G}}}_+(\ensuremath{{\mathcal{S}}},\omega)$ and all negative paths lie in $\ensuremath{{\mathcal{N}}}_n\ensuremath{{\mathcal{G}}}_-(\ensuremath{{\mathcal{S}}},\omega)$. Also, $\ensuremath{{\mathcal{N}}}_n\ensuremath{{\mathcal{G}}}_+(\ensuremath{{\mathcal{S}}},\omega)$ contains $\ensuremath{{\mathcal{N}}}_n\ensuremath{{\mathcal{H}}}(\ensuremath{{\mathcal{S}}},\omega)\coprod\ensuremath{{\mathcal{N}}}_n\ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}}_0)$. Therefore,
\[
B\ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}})=B\ensuremath{{\mathcal{G}}}_+(\ensuremath{{\mathcal{S}}},\omega)\cup B\ensuremath{{\mathcal{G}}}_-(\ensuremath{{\mathcal{S}}},\omega)
\]
Since a sequence of composable morphisms in $\ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}})$ contains at most one morphism not in $\ensuremath{{\mathcal{H}}}(\ensuremath{{\mathcal{S}}},\omega)$ or $\ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}}_0)$, a path cannot be both positive and negative. So any element of $\ensuremath{{\mathcal{N}}}_n\ensuremath{{\mathcal{G}}}_+(\ensuremath{{\mathcal{S}}},\omega)\cap \ensuremath{{\mathcal{N}}}_n\ensuremath{{\mathcal{G}}}_-(\ensuremath{{\mathcal{S}}},\omega)$ lies in $\ensuremath{{\mathcal{N}}}_n\ensuremath{{\mathcal{H}}}(\ensuremath{{\mathcal{S}}},\omega)$ or $\ensuremath{{\mathcal{N}}}_n\ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}}_\omega)$. Therefore,
\[
B\ensuremath{{\mathcal{G}}}_+(\ensuremath{{\mathcal{S}}},\omega)\cap B\ensuremath{{\mathcal{G}}}_-(\ensuremath{{\mathcal{S}}},\omega)=B\ensuremath{{\mathcal{H}}}(\ensuremath{{\mathcal{S}}},\omega)\,{\textstyle{\coprod}}\, B\ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}}_\omega)
\]
completing the proof of Lemma \ref{lem: decomposition of BG}.
\subsubsection{Proof of Lemma \ref{lem: BG- is a cylinder}}
We will use the following well-known construction of the cylinder of a category. Let $\ensuremath{{\mathcal{I}}}$ be the category with two objects $0,1$ and exactly one nonidentity morphism $d:0\to 1$. Then, it is easy to see that $B\ensuremath{{\mathcal{I}}}$ is the unit interval $[0,1]$. Since $B(\ensuremath{{\mathcal{C}}}\times\ensuremath{{\mathcal{D}}})=B\ensuremath{{\mathcal{C}}}\times B\ensuremath{{\mathcal{D}}}$ for any two small categories $\ensuremath{{\mathcal{C}}},\ensuremath{{\mathcal{D}}}$, we get:
\[
B(\ensuremath{{\mathcal{C}}}\times \ensuremath{{\mathcal{I}}})=B\ensuremath{{\mathcal{C}}}\times[0,1]
\]
with two ends given by $B(\ensuremath{{\mathcal{C}}}\times 0)=B\ensuremath{{\mathcal{C}}}\times 0$ and $B(\ensuremath{{\mathcal{C}}}\times 1)=B\ensuremath{{\mathcal{C}}}\times 1$.
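As a quick verification of the claim that $B\ensuremath{{\mathcal{I}}}$ is the unit interval: the only nondegenerate simplices of $\ensuremath{{\mathcal{N}}}_\bullet\ensuremath{{\mathcal{I}}}$ are the two vertices $0,1$ and the single $1$-simplex given by $d$, so
\[
B\ensuremath{{\mathcal{I}}}=|\ensuremath{{\mathcal{N}}}_\bullet\ensuremath{{\mathcal{I}}}|\cong\Delta^1=[0,1]
\]
with endpoints the two objects $0$ and $1$.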
To prove Lemma \ref{lem: BG- is a cylinder} it therefore suffices to construct an isomorphism of categories:
\[
\Phi:\ensuremath{{\mathcal{H}}}(\ensuremath{{\mathcal{S}}},\omega)\times \ensuremath{{\mathcal{I}}}\cong \ensuremath{{\mathcal{G}}}_-(\ensuremath{{\mathcal{S}}},\omega)
\]
Such an isomorphism is given on objects by $\Phi(\ensuremath{{\mathcal{A}}},0)=\ensuremath{{\mathcal{A}}}$ for all $\ensuremath{{\mathcal{A}}}=\ensuremath{{\mathcal{A}}}(\alpha_\ast,\omega)\in\ensuremath{{\mathcal{H}}}(\ensuremath{{\mathcal{S}}},\omega)$ and $\Phi(\ensuremath{{\mathcal{A}}},1)=\varphi \ensuremath{{\mathcal{A}}}=\ensuremath{{\mathcal{A}}}(\alpha_\ast)$. On morphisms, $\Phi$ is given by $\Phi([T],id_i)=[T]$ for $i=0,1$ and $\Phi([T],d)=[P_\omega[1],T]$. It is easy to see that $\Phi$ is a functor, that it is the inclusion functor on $\ensuremath{{\mathcal{H}}}(\ensuremath{{\mathcal{S}}},\omega)\times 0$ and $\varphi$ on $\ensuremath{{\mathcal{H}}}(\ensuremath{{\mathcal{S}}},\omega)\times 1$.
The inverse of $\Phi$ is $\Psi:\ensuremath{{\mathcal{G}}}_-(\ensuremath{{\mathcal{S}}},\omega)\to \ensuremath{{\mathcal{H}}}(\ensuremath{{\mathcal{S}}},\omega)\times\ensuremath{{\mathcal{I}}}$ given as follows.
\begin{enumerate}
\item $\Psi \ensuremath{{\mathcal{A}}}(\alpha_\ast,\omega)=(\ensuremath{{\mathcal{A}}}(\alpha_\ast,\omega),0)$ for all $\ensuremath{{\mathcal{A}}}(\alpha_\ast,\omega)\in\ensuremath{{\mathcal{H}}}(\ensuremath{{\mathcal{S}}},\omega)$.
\item $\Psi \ensuremath{{\mathcal{A}}}(\beta_\ast)=(\ensuremath{{\mathcal{A}}}(\beta_\ast,\omega),1)$ for all $\ensuremath{{\mathcal{A}}}(\beta_\ast)\in\ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}}_\omega)$.
\item $\Psi[T]=([T],id_0)$ for all $[T]:\ensuremath{{\mathcal{A}}}(\alpha_\ast,\omega)\to \ensuremath{{\mathcal{A}}}(\beta_\ast,\omega)$ in $\ensuremath{{\mathcal{H}}}(\ensuremath{{\mathcal{S}}},\omega)$.
\item $\Psi[T]=(\varphi^{-1}[T],id_1)$ for all $[T]:\ensuremath{{\mathcal{A}}}(\alpha_\ast)\to \ensuremath{{\mathcal{A}}}(\beta_\ast)$ in $\ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}}_\omega)$ where $\varphi^{-1}[T]=[T]$ considered as a morphism $\ensuremath{{\mathcal{A}}}(\alpha_\ast,\omega)\to \ensuremath{{\mathcal{A}}}(\beta_\ast,\omega)$. (See Proposition \ref{prop: isomorphism H=G(S-w)}.)
\item $\Psi$ takes $[P_\omega[1],T]:\ensuremath{{\mathcal{A}}}(\alpha_\ast,\omega)\to\ensuremath{{\mathcal{A}}}(\beta_\ast)$ to $([T],d):(\ensuremath{{\mathcal{A}}}(\alpha_\ast,\omega),0)\to(\ensuremath{{\mathcal{A}}}(\beta_\ast,\omega),1)$.
\end{enumerate}
It follows from Lemma \ref{lem: unique factorization of negative morphisms} and Proposition \ref{prop: isomorphism H=G(S-w)} that $\Psi$ is well-defined and inverse to $\Phi$. This proves Lemma \ref{lem: BG- is a cylinder}.
\subsection{Key lemma}\label{ss 3.5: key lemma}
We will now prove the key lemma:
\begin{lem}\label{lem: key lemma}
The inclusion functor $j:\ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}}_0)\hookrightarrow \ensuremath{{\mathcal{G}}}_+(\ensuremath{{\mathcal{S}}},\omega)$ induces a homotopy equivalence $Bj: B\ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}}_0)\simeq B\ensuremath{{\mathcal{G}}}_+(\ensuremath{{\mathcal{S}}},\omega)$.
\end{lem}
The proof uses Quillen's Theorem A which we now review.
Given any functor $\psi :\ensuremath{{\mathcal{C}}}\to\ensuremath{{\mathcal{D}}}$ between small categories $\ensuremath{{\mathcal{C}}},\ensuremath{{\mathcal{D}}}$, the \emph{fiber category} $X\backslash \psi $ of $\psi $ over any object $X$ in $\ensuremath{{\mathcal{D}}}$ is defined to be the category of all pairs $(Y,f)$ where $Y\in\ensuremath{{\mathcal{C}}}$ and $f:X\to \psi Y$ is a morphism of $\ensuremath{{\mathcal{D}}}$. A morphism $(Y,f)\to (Z,g)$ in $X\backslash \psi $ is defined to be a morphism $h:Y\to Z$ in $\ensuremath{{\mathcal{C}}}$ so that $g=\psi h\circ f:X\to \psi Y\to \psi Z$.
\begin{thm}[Quillen's Theorem A]\cite{Quillen} If $B(X\backslash \psi )$ is contractible for every $X\in\ensuremath{{\mathcal{D}}}$ then the mapping $B\psi :B\ensuremath{{\mathcal{C}}}\to B\ensuremath{{\mathcal{D}}}$ is a homotopy equivalence.
\end{thm}
\begin{rem}
By a common abuse of language we will often say that a category is \emph{contractible} if its classifying space is contractible and a functor is a \emph{homotopy equivalence} if it induces a homotopy equivalence on classifying spaces.\end{rem}
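We will also use the standard fact that a category with an initial object is contractible: if $X_0$ is initial in $\ensuremath{{\mathcal{C}}}$, the unique morphisms $X_0\to Y$ form a natural transformation from the constant functor at $X_0$ to the identity functor, and any natural transformation between functors $F,G:\ensuremath{{\mathcal{C}}}\to\ensuremath{{\mathcal{D}}}$ induces a homotopy
\[
B\ensuremath{{\mathcal{C}}}\times[0,1]=B(\ensuremath{{\mathcal{C}}}\times\ensuremath{{\mathcal{I}}})\to B\ensuremath{{\mathcal{D}}}
\]
between $BF$ and $BG$. Thus $B\ensuremath{{\mathcal{C}}}$ deformation retracts to the point $X_0$.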
To prove the key lemma it therefore suffices to show that the fiber category $\ensuremath{{\mathcal{A}}}_0\backslash j$ is contractible for every fixed object $\ensuremath{{\mathcal{A}}}_0\in\ensuremath{{\mathcal{G}}}_+(\ensuremath{{\mathcal{S}}},\omega)$. There are two cases. Either $\ensuremath{{\mathcal{A}}}_0\in \ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}}_0)$ or $\ensuremath{{\mathcal{A}}}_0\in\ensuremath{{\mathcal{H}}}(\ensuremath{{\mathcal{S}}},\omega)$. In the first case, $\ensuremath{{\mathcal{A}}}_0\backslash j$ is contractible since it has an initial object given by $(\ensuremath{{\mathcal{A}}}_0,id_{\ensuremath{{\mathcal{A}}}_0})$. Therefore, we assume $\ensuremath{{\mathcal{A}}}_0=\ensuremath{{\mathcal{A}}}(\alpha_\ast,\omega)\in\ensuremath{{\mathcal{H}}}(\ensuremath{{\mathcal{S}}},\omega)$.
The fiber category $\ensuremath{{\mathcal{A}}}_0\backslash j$ is the category of all positive morphisms $[T]:\ensuremath{{\mathcal{A}}}_0\to\ensuremath{{\mathcal{B}}}\in\ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}}_0)$. The elements of $\ensuremath{{\mathcal{N}}}_k(\ensuremath{{\mathcal{A}}}_0\backslash j)$ are equivalent to commuting diagrams:
\[
\xymatrix{
& \ensuremath{{\mathcal{A}}}(\alpha_\ast,\omega)\ar[dl]\ar[d]\ar[drr]\\
\ensuremath{{\mathcal{B}}}_0 \ar[r]& \ensuremath{{\mathcal{B}}}_1\ar[r] &\cdots\ar[r] & \ensuremath{{\mathcal{B}}}_k
}
\]
where each $\ensuremath{{\mathcal{B}}}_i\in\ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}}_0)$ and each arrow $\ensuremath{{\mathcal{A}}}(\alpha_\ast,\omega)\to \ensuremath{{\mathcal{B}}}_i$ is a positive morphism. Such diagrams are in bijection with filtrations $T_0\subseteq T_1\subseteq \cdots\subseteq T_k$ of nonempty partial cluster tilting sets in $\ensuremath{{\mathcal{C}}}(\alpha_\ast,\omega)$ which have the following two properties.
\begin{enumerate}
\item $T_k$ does not contain $P_\omega[1]$.
\item $T_0$ contains a module which maps onto $M_\omega$. (Equivalently, $T_0\not\subseteq\ensuremath{{\mathcal{A}}}(\alpha_\ast)$.)
\end{enumerate}
Using this description we will show that the simplicial set $\ensuremath{{\mathcal{N}}}_\bullet(\ensuremath{{\mathcal{A}}}_0\backslash j)$ is isomorphic to a familiar simplicial complex.
Suppose that $\alpha_\ast=\{\alpha_1,\cdots,\alpha_n\}$ has $n$ elements. Then every cluster tilting set in the finite set $\ensuremath{{\mathcal{C}}}(\alpha_\ast,\omega)$ has $n+1$ elements and every subset of every cluster tilting set is a partial cluster tilting set by definition. Therefore, the set of nonempty partial cluster tilting sets is an $n$-dimensional simplicial complex which we denote $K^n$. By \cite{IOTW3}, $|K^n|$ is homeomorphic to the $n$-sphere $S^n$.
\begin{lem} {Nonempty partial cluster tilting sets in $\ensuremath{{\mathcal{C}}}(\alpha_\ast,\omega)$ which do not contain $P_\omega[1]$ form a subcomplex $E^n$ of $K^n$ whose realization is homeomorphic to a closed $n$-disk $D^n$.}
\end{lem}
\begin{proof}
$P_\omega[1]$ is a single vertex of $K^n$ and its link is given by all nonempty partial cluster tilting sets in $\ensuremath{{\mathcal{C}}}(\alpha_\ast)$. This forms an $(n-1)$-sphere which divides $|K^n|=S^n$ into two halves. The half containing $P_\omega[1]$ is a cone on $S^{n-1}$ and is thus standard. This implies that the other half, which is $|E^n|$, is also standard and is thus an $n$-disk.
\end{proof}
Note that the boundary of $|E^n|=D^n$ is the link of $P_\omega[1]$ in $K^n$. We denote the corresponding subcomplex of $E^n$ by $\partial E^n$. Then $\partial E^n$ is the set of all nonempty partial cluster tilting sets in $\ensuremath{{\mathcal{C}}}(\alpha_\ast)$.
Let $Simp(E^n)$ be the ``poset category'' whose objects are the simplices of $E^n$ with one morphism $\sigma\to\tau$ whenever $\sigma\subseteq\tau$. Recall that the \emph{first barycentric subdivision} of $E^n$ is $sdE^n=\ensuremath{{\mathcal{N}}}_\bullet Simp(E^n)$.
\begin{lem}
The fiber category $\ensuremath{{\mathcal{A}}}(\alpha_\ast,\omega)\backslash j$ is isomorphic to the full subcategory $J$ of $Simp(E^n)$ consisting of all simplices $\sigma$ which are not contained in $\partial E^n$.
\end{lem}
\begin{proof}
The objects of $\ensuremath{{\mathcal{A}}}(\alpha_\ast,\omega)\backslash j$ are nonempty partial cluster tilting sets $[T]$ with the two additional conditions listed earlier. If we ignore the conditions, we have a poset category isomorphic to $Simp(K^n)$ by definition. Adding the first condition gives the full subcategory $Simp(E^n)$. Adding the second condition gives the full subcategory $J$.
\end{proof}
The key lemma now follows from the following elementary topological fact whose proof
is left as an easy exercise.
\begin{prop} Let $E^n$ be a simplicial complex whose geometric realization $|E^n|$ is homeomorphic to the standard $n$-disk $D^n$. Let $J$ be the subcomplex of the first barycentric subdivision $sdE^n$ spanned by all barycenters $b_\sigma$ of simplices $\sigma$ of $E^n$ which are not contained in the boundary of $D^n$. Then $|J|$ is contractible.
\end{prop}
\subsection{$G(\ensuremath{{\mathcal{S}}})$ is an HNN extension of $G(\ensuremath{{\mathcal{S}}}_0)$}\label{ss 3.6: G(S) is HNN ext of G(S0)}
We will show that
\[
G(\ensuremath{{\mathcal{S}}})=N(G(\ensuremath{{\mathcal{S}}}_\omega),G(\ensuremath{{\mathcal{S}}}_0),\iota,\psi)
\]
where $G=G(\ensuremath{{\mathcal{S}}}_0)$, $H=G(\ensuremath{{\mathcal{S}}}_\omega)$, $\iota:G(\ensuremath{{\mathcal{S}}}_\omega)\hookrightarrow G(\ensuremath{{\mathcal{S}}}_0)$ is the monomorphism induced by the inclusion $\ensuremath{{\mathcal{S}}}_\omega\subseteq \ensuremath{{\mathcal{S}}}_0$ (see Proposition \ref{prop: G-omega to GS is split mono}) and $\psi:G(\ensuremath{{\mathcal{S}}}_\omega)\hookrightarrow G(\ensuremath{{\mathcal{S}}}_0)$ is a monomorphism which we now construct. The key step is the following theorem where we use the shorthand notation $[\beta]:=[M_\beta]$ and $[-\beta]:=[M_\beta[1]]$.
\begin{thm}\label{thm: pi-1 G(S) is G(S)}
Taking the zero category $0=\ensuremath{{\mathcal{A}}}(\emptyset)$ as basepoint for $B\ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}})$, we have an isomorphism of groups $G(\ensuremath{{\mathcal{S}}})\cong \pi_1B\ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}})$ given by sending each generator $x(\beta),\beta\in\ensuremath{{\mathcal{S}}}$ of $G(\ensuremath{{\mathcal{S}}})$ to (the homotopy class of) the loop in $B\ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}})$ at $\ensuremath{{\mathcal{A}}}(\emptyset)$ given by
\[
\ensuremath{{\mathcal{A}}}(\emptyset)\xleftarrow{[-\beta]} \ensuremath{{\mathcal{A}}}(\beta)\xrightarrow{[\beta]}\ensuremath{{\mathcal{A}}}(\emptyset)
\]
(going from left to right).\end{thm}
If $\ensuremath{{\mathcal{S}}}=\{\beta\}$ then this is true since $B\ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}})$ is exactly this loop. So, we can assume this holds for $\ensuremath{{\mathcal{S}}}_0$ and $\ensuremath{{\mathcal{S}}}_\omega$ by induction on the size of $\ensuremath{{\mathcal{S}}}$. Since $\ensuremath{{\mathcal{H}}}(\ensuremath{{\mathcal{S}}},\omega)\cong \ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}}_\omega)$, we get the following corollary which we will use to prove the proposition.
\begin{cor}\label{cor: paths with pasepoint omega}
Taking $\ensuremath{{\mathcal{A}}}(\omega)$ as basepoint for $B\ensuremath{{\mathcal{H}}}(\ensuremath{{\mathcal{S}}},\omega)$, we have an isomorphism of groups $G(\ensuremath{{\mathcal{S}}}_\omega)\cong \pi_1B\ensuremath{{\mathcal{H}}}(\ensuremath{{\mathcal{S}}},\omega)$ given by sending each generator $x(\alpha), \alpha\in\ensuremath{{\mathcal{S}}}_\omega$ of $G(\ensuremath{{\mathcal{S}}}_\omega)$ to (the homotopy class of) the loop in $B\ensuremath{{\mathcal{H}}}(\ensuremath{{\mathcal{S}}},\omega)$ at $\ensuremath{{\mathcal{A}}}(\omega)$ given by
\[
\ensuremath{{\mathcal{A}}}(\omega)\xleftarrow{[-\alpha]} \ensuremath{{\mathcal{A}}}(\alpha, \omega)\xrightarrow{[\alpha]}\ensuremath{{\mathcal{A}}}(\omega)
\]
(going from left to right).
\end{cor}
We define $\psi:G(\ensuremath{{\mathcal{S}}}_\omega)\to G(\ensuremath{{\mathcal{S}}}_0)$ to be the homomorphism:
\[
G(\ensuremath{{\mathcal{S}}}_\omega)\cong \pi_1 B\ensuremath{{\mathcal{H}}}(\ensuremath{{\mathcal{S}}},\omega) \to \pi_1B\ensuremath{{\mathcal{G}}}_+(\ensuremath{{\mathcal{S}}},\omega)\cong\pi_1 B\ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}}_0)=G(\ensuremath{{\mathcal{S}}}_0)
\]
induced by the inclusion functors $\ensuremath{{\mathcal{H}}}(\ensuremath{{\mathcal{S}}},\omega)\hookrightarrow \ensuremath{{\mathcal{G}}}_+(\ensuremath{{\mathcal{S}}},\omega)$ and $\ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}}_0)\hookrightarrow \ensuremath{{\mathcal{G}}}_+(\ensuremath{{\mathcal{S}}},\omega)$ and by the choice of paths $\gamma=[\omega]$ as explained below.
First, we recall that, when a continuous mapping $f:X\to Y$ fails to take the basepoint $x_0\in X$ to the basepoint $y_0\in Y$, we need to choose a path $\gamma$ from $f(x_0)$ to $y_0$ in order to get an induced map on fundamental groups. Then, for any $[\alpha]\in\pi_1(X,x_0)$, we define $\pi_1(f,\gamma)[\alpha]\in\pi_1(Y,y_0)$ to be the homotopy class of the loop at $y_0$ given by $\gamma^{-1}f(\alpha)\gamma$.
In our case we take $\gamma$ to be the path in $B\ensuremath{{\mathcal{G}}}_+(\ensuremath{{\mathcal{S}}},\omega)$ from the base point $\ensuremath{{\mathcal{A}}}(\omega)$ of $B\ensuremath{{\mathcal{H}}}(\ensuremath{{\mathcal{S}}},\omega)$ to the basepoint $\ensuremath{{\mathcal{A}}}(\emptyset)$ of $B\ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}}_0)$ given by the positive morphism $[\omega]:\ensuremath{{\mathcal{A}}}(\omega)\to\ensuremath{{\mathcal{A}}}(\emptyset)$.
\begin{prop}\label{prop: psi has left inverse}
The homomorphism $\psi:G(\ensuremath{{\mathcal{S}}}_\omega)\to G(\ensuremath{{\mathcal{S}}}_0)$ has a left inverse and is therefore a monomorphism. Furthermore, $\psi$ is given on generators $x(\alpha)$ for $\alpha\in\ensuremath{{\mathcal{S}}}_\omega$ by
\begin{equation}\label{eq: equation for psi(x(a))}
\psi(x(\alpha))=\prod x(\gamma_i)
\end{equation}
where $\gamma_i$ runs over all real Schur roots of the form $\gamma_i=a_i\alpha+b_i\omega$ where $a_i>0$ and the product is taken in decreasing order of the ratio $b_i/a_i$.
\end{prop}
\begin{proof} We show that the second statement implies the first.
Let $\iota:G(\ensuremath{{\mathcal{S}}}_0)\to G(\ensuremath{{\mathcal{S}}})$ be the homomorphism induced by the inclusion $\ensuremath{{\mathcal{S}}}_0\hookrightarrow \ensuremath{{\mathcal{S}}}$. Let $\phi$ be the automorphism of $G(\ensuremath{{\mathcal{S}}})$ given by conjugation by $x(\omega)$. Thus $\phi(g)=x(\omega)gx(\omega)^{-1}$. Then, by the defining relations of $G(\ensuremath{{\mathcal{S}}})$, we have
\[
\phi\circ \iota\circ\psi(x(\alpha))=x(\omega)\left(\,{\textstyle{\prod}}\, x(\gamma_i)\right)x(\omega)^{-1}=x(\alpha)
\]
Therefore $\phi\circ\iota\circ\psi:G(\ensuremath{{\mathcal{S}}}_\omega)\to G(\ensuremath{{\mathcal{S}}})$ is the split monomorphism of Proposition \ref{prop: G-omega to GS is split mono} with left inverse $r$, so $r\circ\phi\circ \iota$ is a left inverse for $\psi$.
It remains to prove the equation \eqref{eq: equation for psi(x(a))}. Since $\psi$ is defined in terms of the inclusion functor $\ensuremath{{\mathcal{H}}}(\ensuremath{{\mathcal{S}}},\omega)\hookrightarrow \ensuremath{{\mathcal{G}}}_+(\ensuremath{{\mathcal{S}}},\omega)$, we need to look at the positive morphisms $\ensuremath{{\mathcal{A}}}(\alpha,\omega)\to \ensuremath{{\mathcal{A}}}(\emptyset)$. These are given by all cluster tilting sets in $\ensuremath{{\mathcal{C}}}(\alpha,\omega)$ which do not include $P_\omega[1]$. Since $\ensuremath{{\mathcal{C}}}(\alpha,\omega)$ is finite, there are six possible cases: $A_1\times A_1,A_2,B_2,B_2^{op}=C_2,G_2,G_2^{op}$. We will use type $C_2$ as an example. The other cases are very similar.
When we say that $\ensuremath{{\mathcal{C}}}(\alpha,\omega)$ has type $C_2$ we mean that the division ring $F_\alpha$ is a degree two extension of $F_\omega$. The Auslander-Reiten quiver of the category $\ensuremath{{\mathcal{A}}}(\alpha,\omega)$ has four objects:
\[\xymatrixrowsep{10pt}\xymatrixcolsep{10pt}
\xymatrix{
& P_\omega \ar[dr]& & I_\omega\\
P_\alpha \ar[ur] & & I_\alpha\ar[ur]
}
\]
with dimension vectors $\alpha,\beta,\gamma,\omega$, respectively, where $\beta=\alpha+\omega$ and $\gamma=\beta+2\omega$. The objects of $\ensuremath{{\mathcal{C}}}(\alpha,\omega)$ are $P_\omega,P_\alpha,I_\alpha,I_\omega,P_\alpha[1],P_\omega[1]$. Of these, the first five give all positive morphisms from $\ensuremath{{\mathcal{A}}}(\alpha,\omega)$ to a wide category of rank 1. Consecutive pairs from these first five objects give all four positive morphisms $\ensuremath{{\mathcal{A}}}(\alpha,\omega)\to \ensuremath{{\mathcal{A}}}(\emptyset)$, each of which can be factored in two ways. This gives the following commuting diagram in $\ensuremath{{\mathcal{G}}}_+(\ensuremath{{\mathcal{S}}},\omega)$.
\[
\xymatrix{
\ensuremath{{\mathcal{A}}}(\omega) \ar[d]_{[\omega]}
&& \ensuremath{{\mathcal{A}}}(\alpha,\omega) \ar[ll]_{[-\alpha]}\ar[rr]^{[\alpha]}
\ar[d]^{[\gamma]}
\ar[dl]_{[\omega]}
\ar[dr]^{[\beta]}
&& \ensuremath{{\mathcal{A}}}(\omega)\ar[d]^{[\omega]}
\\
\ensuremath{{\mathcal{A}}}(\emptyset)
& \ensuremath{{\mathcal{A}}}(\gamma) \ar[d]_{[\gamma]}\ar[l]_{[-\gamma]}
& \ensuremath{{\mathcal{A}}}(\beta) \ar[dr]_{[\beta]}\ar[dl]^{[-\beta]}
& \ensuremath{{\mathcal{A}}}(\alpha) \ar[d]^{[-\alpha]}\ar[r]^{[\alpha]}
& \ensuremath{{\mathcal{A}}}(\emptyset)\\
& \ensuremath{{\mathcal{A}}}(\emptyset)
&& \ensuremath{{\mathcal{A}}}(\emptyset)
}
\]
The homomorphism $\psi$ sends $x(\alpha)$ first to the loop at $\ensuremath{{\mathcal{A}}}(\omega)$ given by the top row of the diagram as in Corollary \ref{cor: paths with pasepoint omega}:
\[
\ensuremath{{\mathcal{A}}}(\omega)\xleftarrow{[-\alpha]}\ensuremath{{\mathcal{A}}}(\alpha,\omega)\xrightarrow{[\alpha]}\ensuremath{{\mathcal{A}}}(\omega)
\]
then to the loop at $\ensuremath{{\mathcal{A}}}(\emptyset)$ given by the path
\[
\ensuremath{{\mathcal{A}}}(\emptyset)\xrightarrow{[\omega]^{-1}}\cdot\xrightarrow{[-\alpha]^{-1}} \cdot\xrightarrow{[\alpha]} \cdot\xrightarrow{[\omega]}\ensuremath{{\mathcal{A}}}(\emptyset)
\]
which is homotopic to the path $[-\gamma]^{-1}[\gamma][-\beta]^{-1}[\beta][-\alpha]^{-1}[\alpha]$. In other words,
\[
\psi(x(\alpha))=x(\gamma)x(\beta)x(\alpha)
\]
These correspond to the objects in the AR quiver of $\ensuremath{{\mathcal{A}}}(\alpha,\omega)$ in reverse order starting from the (relatively) injective module $I_\alpha$ and ending in the (relatively) simple projective module $P_\alpha$ in all cases. Therefore, \eqref{eq: equation for psi(x(a))} holds in all cases. Our proposition follows.
\end{proof}
Recall that we are assuming, by induction on $|\ensuremath{{\mathcal{S}}}|$, that Theorem \ref{thm: pi-1 G(S) is G(S)} holds for $\ensuremath{{\mathcal{S}}}_0$ and $\ensuremath{{\mathcal{S}}}_\omega$.
\begin{cor}\label{cor: G(S) is HNN extension}
Let $\ensuremath{{\mathcal{S}}}=\ensuremath{{\mathcal{S}}}_0\cup\{\omega\}$ be as above. Then $G(\ensuremath{{\mathcal{S}}})$ is isomorphic to the HNN extension $N(G(\ensuremath{{\mathcal{S}}}_\omega),G(\ensuremath{{\mathcal{S}}}_0),\iota,\psi)$ where $\iota:G(\ensuremath{{\mathcal{S}}}_\omega)\hookrightarrow G(\ensuremath{{\mathcal{S}}}_0)$ is the inclusion map and $\psi:G(\ensuremath{{\mathcal{S}}}_\omega)\hookrightarrow G(\ensuremath{{\mathcal{S}}}_0)$ is the split monomorphism described above. The isomorphism
\[
N(G(\ensuremath{{\mathcal{S}}}_\omega),G(\ensuremath{{\mathcal{S}}}_0),\iota,\psi)\cong G(\ensuremath{{\mathcal{S}}})
\]
is the inclusion map on $G(\ensuremath{{\mathcal{S}}}_0),G(\ensuremath{{\mathcal{S}}}_\omega)$ and sends the new generator $t$ to $x(\omega)^{-1}$.
\end{cor}
\begin{proof}
The HNN extension $N(G(\ensuremath{{\mathcal{S}}}_\omega),G(\ensuremath{{\mathcal{S}}}_0),\iota,\psi)$ adds one generator $t^{-1}=x(\omega)$ to $G(\ensuremath{{\mathcal{S}}}_0)$ and, for each $\alpha\in\ensuremath{{\mathcal{S}}}_0$, the new relation
\[
x(\alpha)=x(\omega)\psi(x(\alpha))x(\omega)^{-1}
\]
By \eqref{eq: equation for psi(x(a))}, this is equivalent to the relation
\[
x(\alpha)x(\omega)=\prod x(\gamma_i)
\]
where $\gamma_i$ runs over all real Schur roots of the form $\gamma_i=a_i\alpha+b_i\omega$, including the case $a_i=0$, and the product is taken in decreasing order of the ratio $b_i/a_i$. These are the defining relations of $G(\ensuremath{{\mathcal{S}}})$ which are not in $G(\ensuremath{{\mathcal{S}}}_0)$, proving the corollary.
\end{proof}
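To illustrate the corollary in the type $C_2$ example from the proof of Proposition \ref{prop: psi has left inverse}, where $\psi(x(\alpha))=x(\gamma)x(\beta)x(\alpha)$, the new relation $x(\alpha)=x(\omega)\psi(x(\alpha))x(\omega)^{-1}$ becomes
\[
x(\alpha)x(\omega)=x(\omega)x(\gamma)x(\beta)x(\alpha)
\]
with the factor $x(\omega)$ appearing first since $\omega=0\cdot\alpha+1\cdot\omega$ has ratio $b/a=\infty$, and the remaining factors in decreasing order of $b_i/a_i$.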
\begin{proof}[Proof of Theorem \ref{thm: pi-1 G(S) is G(S)}]
We have completed the proofs of all statements in the outline in Section \ref{ss 3.2: outline of G(S)}. Therefore, by Theorem \ref{thm: HNN graph of groups}, $B\ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}})$ is a $K(\pi,1)$ with $\pi$ equal to the HNN extension $N(G(\ensuremath{{\mathcal{S}}}_\omega),G(\ensuremath{{\mathcal{S}}}_0),\iota,\psi)$, which is equal to $G(\ensuremath{{\mathcal{S}}})$ with generators $x(\alpha)\in G(\ensuremath{{\mathcal{S}}})$ corresponding to either $x(\alpha)\in G(\ensuremath{{\mathcal{S}}}_0)$ or to $t^{-1}$ by Corollary \ref{cor: G(S) is HNN extension} above. This proves the theorem for all finite convex $\ensuremath{{\mathcal{S}}}$.
\end{proof}
The proof above also completes the proof of the main Theorem \ref{thm 3.5: G(S) is K(pi,1)}.
\section{Picture groups}\label{sec 4: Picture groups}
We will show that, when $\Lambda$ has finite representation type, the classifying space of the cluster morphism category of $mod\text-\Lambda$ is the CW-complex associated to the algebra in \cite{IOTW4} using pictures. This cell complex has one $k$-cell $e(\ensuremath{{\mathcal{A}}})$ for every wide subcategory of $mod\text-\Lambda$ of rank $k$. We extend this construction to a space $X(\ensuremath{{\mathcal{S}}})$ for every finite convex set $\ensuremath{{\mathcal{S}}}$ of real Schur roots and show that $X(\ensuremath{{\mathcal{S}}})$ is homeomorphic to $B\ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}})$. We will write $\vare (\ensuremath{{\mathcal{A}}})$ for the cell in $B\ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}})$ corresponding to $e(\ensuremath{{\mathcal{A}}})\subseteq X(\ensuremath{{\mathcal{S}}})$. We will also construct the cellular chain complex of $X(\ensuremath{{\mathcal{S}}})\simeq B\ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}})$ to be used in later papers.
\subsection{Construction of the CW-complex $X(\ensuremath{{\mathcal{S}}})$}\label{ss 4.1: the CW-complex X(S)}
For every object $\ensuremath{{\mathcal{A}}}$ in $\ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}})$ we will construct a simplicial complex whose geometric realization $E(\ensuremath{{\mathcal{A}}})$ is homeomorphic to a disk of dimension equal to the rank of $\ensuremath{{\mathcal{A}}}$. There is a continuous mapping $E(\ensuremath{{\mathcal{A}}})\to B\ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}})$ which is an embedding on the interior of $E(\ensuremath{{\mathcal{A}}})$ and $B\ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}})$ will be the disjoint union of the images $\vare(\ensuremath{{\mathcal{A}}})$ of these interiors. When $\ensuremath{{\mathcal{C}}}(\ensuremath{{\mathcal{A}}})$ is not finite, $E(\ensuremath{{\mathcal{A}}})$ is not compact and therefore cannot be homeomorphic to a disk and our construction would not give a CW-complex. Therefore, finiteness of $\ensuremath{{\mathcal{S}}}$ is essential for this construction.
Suppose $\ensuremath{{\mathcal{A}}}=\ensuremath{{\mathcal{A}}}(\alpha_\ast)$ with rank $n$. Then the set of real Schur roots in $\ensuremath{{\field{Z}}}\alpha_\ast\cong \ensuremath{{\field{Z}}}^n$, being finite by the assumption that they all lie in the finite set $\ensuremath{{\mathcal{S}}}$, is the root system $\Phi(\alpha_\ast)$ of a disjoint union of Dynkin quivers which form the valued quiver associated to $\ensuremath{{\mathcal{A}}}$. Let $K(\ensuremath{{\mathcal{A}}})$ be the simplicial complex whose vertices are the positive roots $\Phi_+(\alpha_\ast)$ and the negative projective roots in $\Phi(\alpha_\ast)$. These are the dimension vectors of the objects of $\ensuremath{{\mathcal{C}}}(\ensuremath{{\mathcal{A}}})$. A set of vertices spans a simplex in $K(\ensuremath{{\mathcal{A}}})$ if they are pairwise ext-orthogonal. It is well-known (see \cite{IOTW3}) that the geometric realization $|K(\ensuremath{{\mathcal{A}}})|$ is homeomorphic to the $(n-1)$-sphere. For example, when $n=1$, there are only two roots $\alpha,-\alpha$ and $|K(\ensuremath{{\mathcal{A}}})|=S^0$ is two points.
Let $\simp_+K(\ensuremath{{\mathcal{A}}})$ be the poset category of simplices in $K(\ensuremath{{\mathcal{A}}})$ ordered by inclusion, including the empty simplex. Let $\simp K(\ensuremath{{\mathcal{A}}})$ be the full subcategory of nonempty simplices. The classifying space $B\simp K(\ensuremath{{\mathcal{A}}})$ is the first barycentric subdivision of $K(\ensuremath{{\mathcal{A}}})$ and $B\simp_+ K(\ensuremath{{\mathcal{A}}})$, being the cone on $B\simp K(\ensuremath{{\mathcal{A}}})$, is a triangulated $n$-disk. We define
\[
E(\ensuremath{{\mathcal{A}}}):=B\simp_+K(\ensuremath{{\mathcal{A}}})\cong D^n.
\]
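For example, in the rank $n=1$ case noted above, $K(\ensuremath{{\mathcal{A}}})$ consists of the two vertices $\alpha$ and $-\alpha$ with no higher simplices, so $\simp_+K(\ensuremath{{\mathcal{A}}})$ is the poset $\{\alpha\}\supseteq\emptyset\subseteq\{-\alpha\}$ and
\[
E(\ensuremath{{\mathcal{A}}})=B\simp_+K(\ensuremath{{\mathcal{A}}})\cong D^1
\]
is an interval: the cone on the two points of $|K(\ensuremath{{\mathcal{A}}})|=S^0$, with cone point the barycenter corresponding to the empty simplex.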
We define the \emph{picture space} $X(\ensuremath{{\mathcal{S}}})$ to be the union of cells:
\[
X(\ensuremath{{\mathcal{S}}})=\coprod_{\ensuremath{{\mathcal{A}}}\in\ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}})}E(\ensuremath{{\mathcal{A}}})/\sim
\]
with identifications given as follows.
For every cluster morphism $[T]:\ensuremath{{\mathcal{A}}}\to\ensuremath{{\mathcal{B}}}$ in the category $\ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}})$, of rank $rk\,\ensuremath{{\mathcal{A}}}-rk\,\ensuremath{{\mathcal{B}}}=k$, we have the embedding
\[
\sigma_T:\ensuremath{{\mathcal{C}}}(\ensuremath{{\mathcal{B}}})\hookrightarrow \ensuremath{{\mathcal{C}}}(\ensuremath{{\mathcal{A}}})
\]
with image $\ensuremath{{\mathcal{C}}}_T(\ensuremath{{\mathcal{A}}})$ so that $X,Y\in\ensuremath{{\mathcal{C}}}(\ensuremath{{\mathcal{B}}})$ are ext-orthogonal if and only if $\sigma_TX,\sigma_TY$ are ext-orthogonal in $\ensuremath{{\mathcal{C}}}(\ensuremath{{\mathcal{A}}})$. This induces an embedding of categories:
\[
\Sigma_T:\simp_+K(\ensuremath{{\mathcal{B}}})\to \simp_+K(\ensuremath{{\mathcal{A}}})
\]
which sends every $p$-simplex $X$ in $\simp_+K(\ensuremath{{\mathcal{B}}})$ ($p\ge-1$) to the $(p+k)$-simplex \[
\Sigma_TX=\sigma_TX\cup T
\]
in $\simp_+K(\ensuremath{{\mathcal{A}}})$. In particular, it sends the cone point of $\simp_+K(\ensuremath{{\mathcal{B}}})$ to the $k-1$ simplex spanned by the $k$ objects of $T$.
\begin{lem}\label{lem: Sigma is a functor}
Given $[T]:\ensuremath{{\mathcal{A}}}\to \ensuremath{{\mathcal{B}}}$ and $[S]:\ensuremath{{\mathcal{B}}}\to\ensuremath{{\mathcal{C}}}$ with composition $[S]\circ[T]=[T\cup \sigma_TS]:\ensuremath{{\mathcal{A}}}\to\ensuremath{{\mathcal{C}}}$, we have
\[
\Sigma_{T\cup\sigma_TS}=\Sigma_T\Sigma_S
\]
\end{lem}
\begin{proof} For any $X$ in $\simp_+K(\ensuremath{{\mathcal{C}}})$ we have
\[
\Sigma_T\Sigma_SX=\Sigma_T(S\cup \sigma_SX)=T\cup \sigma_TS\cup \sigma_T\sigma_SX=\Sigma_{T\cup \sigma_TS}X
\]
since $\sigma_T\sigma_S=\sigma_{T\cup \sigma_TS}$ \eqref{eq: sigma TS=sigma T sigma S}.
\end{proof}
On classifying spaces, this gives an embedding of cells:
\[
B\Sigma_T:E(\ensuremath{{\mathcal{B}}})=B\simp_+K(\ensuremath{{\mathcal{B}}})\to B\simp_+K(\ensuremath{{\mathcal{A}}})=E(\ensuremath{{\mathcal{A}}})
\]
which sends the center of $E(\ensuremath{{\mathcal{B}}})$ to the barycenter of the $k-1$ simplex spanned by $T$.
Let $\overline e(\ensuremath{{\mathcal{A}}})$, $e(\ensuremath{{\mathcal{A}}})$ be the images of $E(\ensuremath{{\mathcal{A}}})$ and its interior in $X(\ensuremath{{\mathcal{S}}})$. Then the statement that the quotient space
\[
\bigcup \overline e(\ensuremath{{\mathcal{A}}})=\coprod_{\ensuremath{{\mathcal{A}}}\in\ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}})}E(\ensuremath{{\mathcal{A}}})/\sim\,,
\]
with equivalence relation given by identifying every point in $E(\ensuremath{{\mathcal{B}}})$ to its image in $E(\ensuremath{{\mathcal{A}}})$ under all mappings $B\Sigma_T:E(\ensuremath{{\mathcal{B}}})\to E(\ensuremath{{\mathcal{A}}})$ constructed as above, is a CW-complex is equivalent to the following proposition.
\begin{prop}
For a fixed $\ensuremath{{\mathcal{A}}}\in\ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}})$ of rank $n$, the embeddings $B\Sigma_T:E(\ensuremath{{\mathcal{B}}})\hookrightarrow E(\ensuremath{{\mathcal{A}}})$ for all cluster morphisms $[T]:\ensuremath{{\mathcal{A}}}\to \ensuremath{{\mathcal{B}}}$ of rank $\ge1$ define a continuous map
\[
\eta_\ensuremath{{\mathcal{A}}}:\partial E(\ensuremath{{\mathcal{A}}})=B\simp K(\ensuremath{{\mathcal{A}}})\to \bigcup_{rk\,\ensuremath{{\mathcal{B}}}<n} \overline e(\ensuremath{{\mathcal{B}}})
\]
giving the attaching map for the cell $e(\ensuremath{{\mathcal{A}}})$ in a CW-complex $X(\ensuremath{{\mathcal{S}}})=\bigcup \overline e(\ensuremath{{\mathcal{A}}})$.
\end{prop}
\begin{proof} If $n=0$ then $\partial E(\ensuremath{{\mathcal{A}}})$ is empty and there is nothing to prove. So, suppose that $n>0$ and the proposition holds for all ranks $<n$. In particular, $\bigcup_{rk\,\ensuremath{{\mathcal{B}}}<n}\overline e(\ensuremath{{\mathcal{B}}})$ is a CW-complex.
The statement is that the maps $B\Sigma_T:E(\ensuremath{{\mathcal{B}}})\to E(\ensuremath{{\mathcal{A}}})$ together form a surjective continuous mapping
\[
\bigcup_{[T]:\ensuremath{{\mathcal{A}}}\to \ensuremath{{\mathcal{B}}}}\overline e(\ensuremath{{\mathcal{B}}})=\coprod E(\ensuremath{{\mathcal{B}}})/\! \sim\ \twoheadrightarrow \partial E(\ensuremath{{\mathcal{A}}})
\]
and that, furthermore, any two elements which map to the same point in $\partial E(\ensuremath{{\mathcal{A}}})$ are already identified in the subcomplex $\bigcup_{[T]:\ensuremath{{\mathcal{A}}}\to \ensuremath{{\mathcal{B}}}}\overline e(\ensuremath{{\mathcal{B}}})\subseteq\bigcup_{rk\,\ensuremath{{\mathcal{B}}}<n}\overline e(\ensuremath{{\mathcal{B}}})$.
To prove the surjectivity statement, take any point $z\in \partial E(\ensuremath{{\mathcal{A}}})$. Then $z$ will be in the span of a simplex
\[
Z_\ast: Z_0\subset Z_1\subset \cdots\subset Z_p
\]
where each $Z_i$ is nonempty. Let $\ensuremath{{\mathcal{B}}}=|Z_0|^\perp$. Then $[Z_0]:\ensuremath{{\mathcal{A}}}\to\ensuremath{{\mathcal{B}}}$ is a cluster morphism of positive rank and the $p$-simplex $Z_\ast$ in $\partial E(\ensuremath{{\mathcal{A}}})$ is the image of the $p$-simplex
\[
X_\ast: X_0\subset X_1\subset\cdots\subset X_p
\]
in $E(\ensuremath{{\mathcal{B}}})$ where $X_0=\emptyset$ and each $X_i$ is the unique partial cluster tilting set in $\ensuremath{{\mathcal{B}}}$ so that
\[
Z_i=Z_0\cup \sigma_{Z_0}X_i=\Sigma_{Z_0}X_i
\]
and $z=B\Sigma_{Z_0}x$ where $x$ is a point in the simplex spanned by $X_\ast$. Therefore, $\bigcup B\Sigma_T:\bigcup \overline e(\ensuremath{{\mathcal{B}}})\twoheadrightarrow \partial E(\ensuremath{{\mathcal{A}}})$ is surjective.
Now suppose that $y\in E(\ensuremath{{\mathcal{B}}}')$ maps to the same point $z\in \partial E(\ensuremath{{\mathcal{A}}})$ under the map induced by $[T]:\ensuremath{{\mathcal{A}}}\to\ensuremath{{\mathcal{B}}}'$. Since each $\Sigma_T$ is an embedding, this implies that $y$ lies in the interior of a simplex of the same dimension as $Z_\ast$, say, $
Y_\ast:Y_0\subset Y_1\subset\cdots\subset Y_p
$. This implies that
\[
Z_i=T\cup \sigma_TY_i
\]
In particular, $T\subseteq Z_0$ and $Z_0=T\cup \sigma_TY_0$. In other words, the morphism $[Z_0]:\ensuremath{{\mathcal{A}}}\to\ensuremath{{\mathcal{B}}}$ is the composition of $[T]$ and $[Y_0]:\ensuremath{{\mathcal{B}}}'\to\ensuremath{{\mathcal{B}}}$. By Lemma \ref{lem: Sigma is a functor}, this implies $\Sigma_{Z_0}=\Sigma_T\circ \Sigma_{Y_0}$. Since $\Sigma_T$ is an embedding, the equation
\[
\Sigma_TY_i=Z_i=\Sigma_{Z_0}X_i=\Sigma_T\Sigma_{Y_0}X_i
\]
implies $Y_i=\Sigma_{Y_0}X_i$ for all $i$. So, $y=B\Sigma_{Y_0}(x)$ and the points $x,y$ are identified in the subcomplex $\bigcup \overline e(\ensuremath{{\mathcal{B}}})$.
\end{proof}
\subsection{Proof that $X(\ensuremath{{\mathcal{S}}})=B\ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}})$}\label{ss 4.2: proof that X(S)=BG(S)}
We will show:
\begin{thm}\label{thm: BG(S)=X(S)}
For any finite convex set $\ensuremath{{\mathcal{S}}}$ of real Schur roots, we have a homeomorphism
\[
X(\ensuremath{{\mathcal{S}}})\cong B\ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}}).
\]
The image of $\overline e(\ensuremath{{\mathcal{A}}})\subseteq X(\ensuremath{{\mathcal{S}}})$ in $B\ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}})$, denoted $\overline\vare(\ensuremath{{\mathcal{A}}})$, is the union of all simplices corresponding to sequences of composable morphisms
\[
\ensuremath{{\mathcal{A}}}_0\to \ensuremath{{\mathcal{A}}}_1\to\cdots\to \ensuremath{{\mathcal{A}}}_p
\]
where $\ensuremath{{\mathcal{A}}}_0=\ensuremath{{\mathcal{A}}}$. The center of the cell $e(\ensuremath{{\mathcal{A}}})$ maps to $\ensuremath{{\mathcal{A}}}$ considered as a vertex of $B\ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}})$.
\end{thm}
\begin{rem}\label{rem: orientation of cells is given by signed exceptional sequences}
Since the top dimensional simplices are given by maximal sequences of composable morphisms which in turn are given by signed exceptional sequences for $\ensuremath{{\mathcal{A}}}$, each such sequence will give an orientation for the cell $\overline\vare(\ensuremath{{\mathcal{A}}})$.
\end{rem}
The proof of Theorem \ref{thm: BG(S)=X(S)} is based on the following general observation.
\begin{prop}\label{prop: BC is the union of B(X under C)}
The classifying space of any small category $\ensuremath{{\mathcal{D}}}$ is equal to the union of classifying spaces $B(X\backslash \ensuremath{{\mathcal{D}}})$ of under-categories $X\backslash \ensuremath{{\mathcal{D}}}$ for all $X\in\ensuremath{{\mathcal{D}}}$ modulo the identifications given by all mappings
\[
Bf^\ast: B(Y\backslash \ensuremath{{\mathcal{D}}})\to B(X\backslash \ensuremath{{\mathcal{D}}})
\]
induced by all morphisms $f:X\to Y$ in $\ensuremath{{\mathcal{D}}}$. Furthermore, the image of $B(X\backslash \ensuremath{{\mathcal{D}}})$ in $B\ensuremath{{\mathcal{D}}}$ is the union of all simplices corresponding to sequences of composable morphisms
\[
X\to X_1\to X_2\to\cdots\to X_p
\]
and the identity morphism $(X,id_X)\in X\backslash \ensuremath{{\mathcal{D}}}$ maps to the vertex $X$ in $B\ensuremath{{\mathcal{D}}}$.
\qed
\end{prop}
Since this statement follows from the definitions and holds in any category, we leave the proof to the reader.
\begin{lem}\label{lem: A under G(S) is simp+K(A)}
For any object $\ensuremath{{\mathcal{A}}}$ in the category $\ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}})$ we have an isomorphism of categories:
\[
\ensuremath{{\mathcal{A}}}\backslash \ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}})\cong \simp_+ K(\ensuremath{{\mathcal{A}}})
\]
given by sending each object $[T]:\ensuremath{{\mathcal{A}}}\to\ensuremath{{\mathcal{B}}}$ in $\ensuremath{{\mathcal{A}}}\backslash \ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}})$ to the simplex $T$ considered as an object of $\simp_+ K(\ensuremath{{\mathcal{A}}})$. This induces a homeomorphism $B(\ensuremath{{\mathcal{A}}}\backslash \ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}}))\cong E(\ensuremath{{\mathcal{A}}})\cong D^n$.
\end{lem}
\begin{proof}
Recall that the objects of $\ensuremath{{\mathcal{A}}}\backslash \ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}})$ are cluster morphisms $[T]:\ensuremath{{\mathcal{A}}}\to\ensuremath{{\mathcal{B}}}$ given by partial (unordered) cluster tilting sets $T=\{T_1,\cdots,T_k\}\subset \ensuremath{{\mathcal{C}}}(\ensuremath{{\mathcal{A}}})$. There is a unique morphism $[T]\to [S]$ if and only if $T\subseteq S$. Thus $\ensuremath{{\mathcal{A}}}\backslash \ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}})$ is a poset category and the mapping $\ensuremath{{\mathcal{A}}}\backslash \ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}})\to \simp_+K(\ensuremath{{\mathcal{A}}})$ sending $[T]$ to $T$ gives an isomorphism of partially ordered sets and therefore an isomorphism of poset categories.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm: BG(S)=X(S)}] Proposition \ref{prop: BC is the union of B(X under C)} and Lemma \ref{lem: A under G(S) is simp+K(A)} imply that
\[
B\ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}})=\coprod_{\ensuremath{{\mathcal{A}}}\in\ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}})} B(\ensuremath{{\mathcal{A}}}\backslash \ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}}))/\!\sim\ \cong \coprod_{\ensuremath{{\mathcal{A}}}\in\ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}})} E(\ensuremath{{\mathcal{A}}})/\!\sim\ =\bigcup_{\ensuremath{{\mathcal{A}}}\in\ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}})}\overline e(\ensuremath{{\mathcal{A}}})=X(\ensuremath{{\mathcal{S}}}).
\]
It remains to show that the identifications on the cells $E(\ensuremath{{\mathcal{A}}})\cong B(\ensuremath{{\mathcal{A}}}\backslash \ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}}))$ are the same in $B\ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}})$ and in $X(\ensuremath{{\mathcal{S}}})$. This is equivalent to showing that the following diagram of functors commutes for any morphism $[T]:\ensuremath{{\mathcal{A}}}\to\ensuremath{{\mathcal{B}}}$.
\[
\xymatrix{
\ensuremath{{\mathcal{B}}}\backslash \ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}})\ar[d]_{[T]^\ast}\ar[r] &
\simp_+K(\ensuremath{{\mathcal{B}}})\ar[d]^{\Sigma_T}\\
\ensuremath{{\mathcal{A}}}\backslash \ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}}) \ar[r]&
\simp_+K(\ensuremath{{\mathcal{A}}})
}
\]
But this follows from the fact that the vertical maps are defined by the same formula. Namely, they both take the partial cluster tilting set $X$ in $\ensuremath{{\mathcal{B}}}$ to $T\cup \sigma_TX$ in $\ensuremath{{\mathcal{A}}}$.
This commuting diagram of categories induces a commuting diagram of classifying spaces showing that $B\ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}})$ and $X(\ensuremath{{\mathcal{S}}})$ are made from the same pieces pasted together in the same way. So, they are homeomorphic.
\end{proof}
\subsection{Example}\label{ss 4.3: example}
Suppose that $\ensuremath{{\mathcal{A}}}=\ensuremath{{\mathcal{A}}}(\alpha,\beta)$ where $M_\alpha,M_\beta$ are relative simple projectives. Then the only objects in $\ensuremath{{\mathcal{C}}}(\alpha,\beta)$ are $M_\alpha,M_\alpha[1],M_\beta,M_\beta[1]$ and
\[
\xymatrixrowsep{10pt}\xymatrixcolsep{10pt}
\xymatrix{
&& \beta \ar@{-}[dl]\ar@{-}[dr] \\
K(\alpha,\beta)=& -\alpha\ar@{-}[dr] && \alpha\ar@{-}[dl]\\
&& -\beta }
\]
This simplicial complex has 4 edges, 4 vertices and one empty simplex:
\[
\xymatrix{
& \{-\alpha,\beta\} & \beta\ar[l]\ar[r] & \{\alpha,\beta\} \\
\simp_+K(\alpha,\beta)=& -\alpha\ar[u]\ar[d] &\emptyset\ar[u]\ar[d]\ar[l]\ar[r]\ar[lu]\ar[ru]\ar[ld]\ar[rd]& \alpha\ar[d]\ar[u]\\
& \{-\alpha,-\beta\} & -\beta\ar[l]\ar[r] & \{\alpha,-\beta\} }
\]
with classifying space $E(\ensuremath{{\mathcal{A}}}(\alpha,\beta))\cong D^2$. This category is isomorphic to the under-category:
\[
\xymatrixcolsep{30pt}
\xymatrix{
& \ensuremath{{\mathcal{A}}}(\emptyset) & \ensuremath{{\mathcal{A}}}(\alpha)\ar[l]_{[-\alpha]}\ar[r]^{[\alpha]} &\ensuremath{{\mathcal{A}}}(\emptyset) \\
\ensuremath{{\mathcal{A}}}(\alpha,\beta)\backslash \ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}})=
& \ensuremath{{\mathcal{A}}}(\beta)\ar[u]^{[\beta]}\ar[d]_{[-\beta]}
&\ensuremath{{\mathcal{A}}}(\alpha,\beta)\ar[u]_{[\beta]}\ar[d]^{[-\beta]}\ar[l]_{[-\alpha]}\ar[r]^{[\alpha]}\ar[lu]_{[-\alpha,\beta]}\ar[ru]^{[\alpha,\beta]}\ar[ld]_{[-\alpha,-\beta]}\ar[rd]^{[\alpha,-\beta]}
& \ensuremath{{\mathcal{A}}}(\beta)\ar[d]^{[-\beta]}\ar[u]_{[\beta]}\\
& \ensuremath{{\mathcal{A}}}(\emptyset) & \ensuremath{{\mathcal{A}}}(\alpha)\ar[l]^{[-\alpha]}\ar[r]_{[\alpha]} & \ensuremath{{\mathcal{A}}}(\emptyset)
}
\]
The space $B\ensuremath{{\mathcal{G}}}(\alpha,\beta)\cong X(\alpha,\beta)$ has four cells: $\vare(\ensuremath{{\mathcal{A}}}(\emptyset))$, which is at each of the four vertices in the diagram; $\vare(\ensuremath{{\mathcal{A}}}(\alpha))$, which is the interior of the top and bottom rows; $\vare(\ensuremath{{\mathcal{A}}}(\beta))$, which is the interior of the left and right columns; and $\vare(\ensuremath{{\mathcal{A}}}(\alpha,\beta))$, which is the interior of the square. So,
\[
X(\alpha,\beta)\cong B\ensuremath{{\mathcal{G}}}(\alpha,\beta)=S^1\times S^1
\]
is a torus.
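As a quick consistency check of this cell structure (a computation we supply, not part of the original example), the Euler characteristic already matches that of the torus: identifying the four corners of the square to the single 0-cell $\vare(\ensuremath{{\mathcal{A}}}(\emptyset))$, the top/bottom and left/right edge pairs to the two 1-cells $\vare(\ensuremath{{\mathcal{A}}}(\alpha))$ and $\vare(\ensuremath{{\mathcal{A}}}(\beta))$, and taking the interior as the single 2-cell $\vare(\ensuremath{{\mathcal{A}}}(\alpha,\beta))$, we get
\[
\chi(X(\alpha,\beta))=1-2+1=0=\chi(S^1\times S^1).
\]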
\subsection{Semi-invariant labels}\label{ss 4.4: semi-invariants}
One of the key properties of the picture space $X(\ensuremath{{\mathcal{S}}})$ is that it has a ``normally oriented'' codimension one subcomplex
\[
D(\ensuremath{{\mathcal{S}}})=\bigcup_{\beta\in\ensuremath{{\mathcal{S}}}} D(\beta)
\]
where each $D(\beta)$ is locally the support of the virtual semi-invariant with det-weight $\beta$ (Definition \ref{def: det semi-inv and supports}). Using the categorical version of $X(\ensuremath{{\mathcal{S}}})$, these subspaces are easy to describe.
\begin{defn}
For any $\beta\in\ensuremath{{\mathcal{S}}}$, let $D(\beta)\subseteq B\ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}})$ be the union of all simplices given by composable sequences of morphisms
\[
\ensuremath{{\mathcal{A}}}_0\to\ensuremath{{\mathcal{A}}}_1\to\cdots\to \ensuremath{{\mathcal{A}}}_p
\]
where $M_\beta\in \ensuremath{{\mathcal{A}}}_p$. Then $D(\ensuremath{{\mathcal{S}}})=\bigcup D(\beta)$ is the union of all simplices given by sequences of morphisms as above where $\ensuremath{{\mathcal{A}}}_p$ is nonzero.
\end{defn}
It follows directly from this definition that the complement of $D(\ensuremath{{\mathcal{S}}})$ in $B\ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}})\cong X(\ensuremath{{\mathcal{S}}})$ is the open star of $\ensuremath{{\mathcal{A}}}(\emptyset)$ which is, by definition, the set of all points so that the barycentric coordinate of $\ensuremath{{\mathcal{A}}}(\emptyset)$ is positive. This is a contractible space with deformation retraction to the vertex $\ensuremath{{\mathcal{A}}}(\emptyset)$ given by linear deformation of barycentric coordinates.
In the universal covering $\tilde X(\ensuremath{{\mathcal{S}}})$ of $X(\ensuremath{{\mathcal{S}}})$, the complement of the inverse image $\tilde D(\ensuremath{{\mathcal{S}}})$ of $D(\ensuremath{{\mathcal{S}}})$ in $\tilde X(\ensuremath{{\mathcal{S}}})$ is a disjoint union of contractible spaces, one for each element of the fundamental group $G(\ensuremath{{\mathcal{S}}})$ of $X(\ensuremath{{\mathcal{S}}})$. This gives a locally constant function
\[
g: \tilde X(\ensuremath{{\mathcal{S}}})\backslash \tilde D(\ensuremath{{\mathcal{S}}})\to G(\ensuremath{{\mathcal{S}}})
\]
which has the following property.
For any root $\beta\in\ensuremath{{\mathcal{S}}}$, let $x_t$ be the path in $X(\ensuremath{{\mathcal{S}}})$ given by the cell $E(\ensuremath{{\mathcal{A}}}(\beta))\cong B(\ensuremath{{\mathcal{A}}}(\beta)\backslash\ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}}))$ going from left to right along the path
\[
\ensuremath{{\mathcal{A}}}(\emptyset)\xleftarrow{[-\beta]} \ensuremath{{\mathcal{A}}}(\beta)\xrightarrow{[\beta]}\ensuremath{{\mathcal{A}}}(\emptyset).
\]
This path intersects $D(\beta)$ only in its midpoint $\ensuremath{{\mathcal{A}}}(\beta)$. Since this represents the generator $x(\beta)$ of $G(\ensuremath{{\mathcal{S}}})$, any lifting $\tilde x_t$ of this path to the universal cover $\tilde X(\ensuremath{{\mathcal{S}}})$ will have the property that
\[
g(\tilde x_1)=g(\tilde x_0)x(\beta).
\]
We will now determine the relationship between the subcomplex $D(\beta)\subseteq X(\ensuremath{{\mathcal{S}}})$ and the subspace $D_{\alpha_\ast}(\beta)\subseteq \ensuremath{{\field{R}}}\alpha_\ast$ defined in Theorem \ref{Stability theorem for virtual semi-invariants}. Suppose that $\ensuremath{{\mathcal{A}}}=\ensuremath{{\mathcal{A}}}(\alpha_1,\cdots,\alpha_n)$ where $\alpha_i\in\ensuremath{{\mathcal{S}}}$. Then, for each positive root $\beta\in\Phi_+(\ensuremath{{\mathcal{A}}})$, we recall that
\[
D_{\alpha_\ast}(\beta) = \{v\in\ensuremath{{\field{R}}}\alpha_\ast\cong \ensuremath{{\field{R}}}^n\,|\, \brk{v,\beta}=0 \text{ and } \brk{v,\beta'}\le 0\ \forall \beta'\subseteq\beta,\beta'\in \Phi_+(\ensuremath{{\mathcal{A}}})\}
\]
This is a closed convex subset of the hyperplane $\{v\in\ensuremath{{\field{R}}}\alpha_\ast\,|\,\brk{v,\beta}=0\}$. This hyperplane has a normal orientation. The \emph{positive side} is the set
\[
\{v\in\ensuremath{{\field{R}}}\alpha_\ast\,|\,\brk{v,\beta}>0\}.
\]
For example, $\beta$ is on the positive side of $D_{\alpha_\ast}(\beta)$. One point in $\ensuremath{{\field{R}}}\alpha_\ast$ which is on the positive side of all of these hyperplanes is the dimension vector of the sum of all projective objects.
We recall that $D_{\alpha_\ast}(\beta)$ contains the dimension vector of any object $M\in\ensuremath{{\mathcal{C}}}(\ensuremath{{\mathcal{A}}})$ with the property that
\[
\Hom_\Lambda(|M|,M_\beta)=0=\Ext^1_\Lambda(|M|,M_\beta),
\]
i.e., $|M|\in \,^\perp M_\beta$. This implies that, given any cluster tilting set $T=(T_1,\cdots,T_n)$ in $\ensuremath{{\mathcal{C}}}(\ensuremath{{\mathcal{A}}})$, with corresponding $c$-vectors $(-\gamma_1,\cdots,-\gamma_n)$, we have
\[
\undim T_i\in D_{\alpha_\ast}(|\gamma_j|)
\]
for all $i\neq j$. Furthermore, $\undim T_j$ is on the positive or negative side of $D_{\alpha_\ast}(|\gamma_j|)$ depending on whether $\gamma_j$ is positive or negative, respectively.
For any $n$-dimensional simplicial complex we define a \emph{normal orientation} on an $n-1$ simplex $\tau$ to be the assignment of a sign ($+$ or $-$) to each $n$-simplex containing $\tau$ as a face. A \emph{normal orientation} of an $n-1$ dimensional subcomplex of an $n$-dimensional simplicial complex is defined to be a normal orientation of each of its $n-1$ simplices. We do not assume any consistency between orientations of adjacent $n-1$ simplices.
\begin{defn}
Let $\ensuremath{{\mathcal{A}}}\in\ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}})$ of rank $n$ and let $\beta\in \Phi_+(\ensuremath{{\mathcal{A}}})$. We define $L_\ensuremath{{\mathcal{A}}}(\beta)\subset K(\ensuremath{{\mathcal{A}}})$ to be the normally oriented codimension one subcomplex consisting of simplices all of whose vertices lie in $\ensuremath{{\mathcal{A}}}\cap \,^\perp M_\beta$. The normal orientation is given in the discussion above. Namely, an $n-1$-simplex $T=\{T_1,\cdots,T_n\}$ with one face $\partial_i T$ in $\,^\perp M_\beta$ has positive sign if the corresponding vector $\gamma_i$ is positive (the $c$-vector $-\gamma_i$ is negative). Take the full subcategory $\simp_+L_\ensuremath{{\mathcal{A}}}(\beta) \subset \simp_+K(\ensuremath{{\mathcal{A}}})$ which is normally oriented when considered as a simplicial complex. Denote its classifying space by
\[
D_\ensuremath{{\mathcal{A}}}(\beta)=B\simp_+L_\ensuremath{{\mathcal{A}}}(\beta)\subset B \simp_+K(\ensuremath{{\mathcal{A}}})=E(\ensuremath{{\mathcal{A}}}).
\]
\end{defn}
For a fixed $\ensuremath{{\mathcal{A}}}$ with rank $n$, the space
\[
L(\ensuremath{{\mathcal{A}}})=\bigcup_{\beta\in \Phi_+(\ensuremath{{\mathcal{A}}})} B\simp L_\ensuremath{{\mathcal{A}}}(\beta)\subset B\simp K(\ensuremath{{\mathcal{A}}})\cong S^{n-1}
\]
is the picture for $\ensuremath{{\mathcal{A}}}$ as defined in \cite{IOTW4} and $B\simp L_\ensuremath{{\mathcal{A}}}(\beta)$ is the normally oriented subset labeled $\beta$. The following proposition shows that the spaces $D(\beta)\subseteq B\ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}})$ play the analogous role. So, the union $D(\ensuremath{{\mathcal{S}}})=\bigcup D(\beta)$ is a generalization of a picture.
\begin{prop}\label{prop: DA(b)=e(A) cap D(b)}
$D_\ensuremath{{\mathcal{A}}}(\beta)\subseteq E(\ensuremath{{\mathcal{A}}})$ is the inverse image of $\overline \vare(\ensuremath{{\mathcal{A}}})\cap D(\beta)$ under the epimorphism $E(\ensuremath{{\mathcal{A}}})\twoheadrightarrow \overline \vare(\ensuremath{{\mathcal{A}}})$. Furthermore, the normal orientation of $D_\ensuremath{{\mathcal{A}}}(\beta)$ is such that each embedding of the 1-cell $E(\ensuremath{{\mathcal{A}}}(\beta))$ in $E(\ensuremath{{\mathcal{A}}})$, oriented by the path $[-\beta]^{-1}[\beta]$ passes from the negative to the positive side of $D_\ensuremath{{\mathcal{A}}}(\beta)$.
\end{prop}
\begin{proof}
By Theorem \ref{thm: BG(S)=X(S)} and the definition of $D(\beta)$, the intersection $\overline \vare(\ensuremath{{\mathcal{A}}}(\alpha_\ast))\cap D(\beta)$ is the union of all simplices in $B\ensuremath{{\mathcal{G}}}(\ensuremath{{\mathcal{S}}})$ corresponding to sequences of morphisms $\ensuremath{{\mathcal{A}}}_0\to\cdots\to \ensuremath{{\mathcal{A}}}_p$ starting at $\ensuremath{{\mathcal{A}}}_0=\ensuremath{{\mathcal{A}}}(\alpha_\ast)$ and ending at $\ensuremath{{\mathcal{A}}}_p=\ensuremath{{\mathcal{A}}}(\beta)$. If the rank of $\ensuremath{{\mathcal{A}}}(\alpha_\ast)$ is $n$, the composition of these morphisms is a morphism
\[
[T]:\ensuremath{{\mathcal{A}}}(\alpha_\ast)\to \ensuremath{{\mathcal{A}}}(\beta)
\]
where $T=(T_1,\cdots,T_{n-1})$ is a cluster tilting set in $\ensuremath{{\mathcal{A}}}(\alpha_\ast)\cap M_\beta^\perp$. Such a cluster tilting set corresponds to a maximal simplex in $L_{\ensuremath{{\mathcal{A}}}(\alpha_\ast)}(\beta)$ and the sequence of objects $\ensuremath{{\mathcal{A}}}_i$ correspond to faces of this simplex starting with the empty face. So, the inverse image of $\overline \vare(\ensuremath{{\mathcal{A}}})\cap D(\beta)$ in $E(\ensuremath{{\mathcal{A}}})$ is $D_\ensuremath{{\mathcal{A}}}(\beta)=B\simp_+L_{\ensuremath{{\mathcal{A}}}}(\beta)\subset B\simp_+K(\ensuremath{{\mathcal{A}}})$.
Conversely, any simplex in $D_\ensuremath{{\mathcal{A}}}(\beta)$ is a chain of faces of a maximal simplex in $L_\ensuremath{{\mathcal{A}}}(\beta)$. Such a chain corresponds to an ordered cluster tilting set in $\ensuremath{{\mathcal{A}}}\cap M_\beta^\perp$ which corresponds to a maximal chain of morphisms $\ensuremath{{\mathcal{A}}}\to\cdots\to\ensuremath{{\mathcal{A}}}(\beta)$ which is a maximal simplex in $\overline\vare(\ensuremath{{\mathcal{A}}})\cap D(\beta)$.
The completion of the cluster tilting set $T$ is given by the composition of $[T]:\ensuremath{{\mathcal{A}}}\to\ensuremath{{\mathcal{A}}}(\beta)$ with $[M_\beta]:\ensuremath{{\mathcal{A}}}(\beta)\to\ensuremath{{\mathcal{A}}}(\emptyset)$ which is
$
[M_\beta]\circ[T]=[T,\sigma_TM_\beta]:\ensuremath{{\mathcal{A}}}\to \ensuremath{{\mathcal{A}}}(\emptyset)
$. The cluster tilting set $(T_1,\cdots,T_{n-1},\sigma_TM_\beta)$ has last $c$-vector $-\gamma_n=-\beta$ since $\brk{\undim T_i,\beta}=0$ and
\[
\brk{\undim\sigma_TM_\beta,\beta}=\brk{\beta,\beta}>0
\]
Therefore, the normal orientation of $D_\ensuremath{{\mathcal{A}}}(\beta)$ assigns a positive sign to the maximal simplex $\{T_1,\cdots,T_{n-1},\sigma_TM_\beta\}$. But this is equivalent to saying that $[-\beta]^{-1}[\beta]$ goes through $D_\ensuremath{{\mathcal{A}}}(\beta)$ in the positive direction as claimed.
\end{proof}
\subsection{Cellular chain complex for $X(\ensuremath{{\mathcal{S}}})$}\label{ss 4.5: cellular chain complex}
We recall that the cellular chain complex of any $CW$-complex $X$ is:
\[
\cdots \to C_n(X)\xrightarrow{d_n} C_{n-1}(X) \to \cdots \to C_1(X)\xrightarrow{d_1} C_0(X)\to 0
\]
where $C_n(X)$ is the free abelian group generated by the $n$-cells of $X$ with some chosen orientation for each cell. The boundary map $d_n:C_n(X)\to C_{n-1}(X)$ is given by an integer matrix whose $ij$ coordinate is the incidence number of the composition
\[
S^{n-1}\xrightarrow{\eta_j} X^{n-1}\xrightarrow{\pi_i} S^{n-1}
\]
where $X^{n-1}$ is the $n-1$ skeleton of $X$, $\eta_j$ is the attaching map of the $j$th $n$-cell of $X$ and $\pi_i$ is the map which collapses all cells in $X^{n-1}$ to a point except for the $i$th $n-1$-cell.
In the case $X=X(\ensuremath{{\mathcal{S}}})$, where $\ensuremath{{\mathcal{S}}}$ is a finite convex set of real Schur roots, the generators of $C_n(X)$ are oriented wide categories $\ensuremath{{\mathcal{A}}}=\ensuremath{{\mathcal{A}}}(\alpha_1,\cdots,\alpha_n)$ where $\alpha_i\in\ensuremath{{\mathcal{S}}}$. We denote this element $[\ensuremath{{\mathcal{A}}}]\in C_n(X)$. The orientation is given by the ordering of the hom-orthogonal roots $\alpha_i$ which span $\ensuremath{{\mathcal{A}}}(\alpha_\ast)$. Any odd permutation of the $\alpha_i$ will change the sign of the generator. For example $[\ensuremath{{\mathcal{A}}}(\alpha_2,\alpha_1)]=-[\ensuremath{{\mathcal{A}}}(\alpha_1,\alpha_2)]$.
\begin{thm}\label{thm: chain complex for X(S)}
The boundary map $d_n:C_n(\ensuremath{{\mathcal{S}}})\to C_{n-1}(\ensuremath{{\mathcal{S}}})$ is given on each oriented generator $\ensuremath{{\mathcal{A}}}=\ensuremath{{\mathcal{A}}}(\alpha_1,\cdots,\alpha_n)$ by
\[
d_n[\ensuremath{{\mathcal{A}}}]=\sum_{\beta\in \Phi_+(\ensuremath{{\mathcal{A}}})\text{ not projective}} \det(c_{ij}) [\ensuremath{{\mathcal{A}}}\cap M_\beta^\perp]
\]
where the sum is over all nonprojective exceptional roots $\beta\in\Phi_+(\ensuremath{{\mathcal{A}}})\subseteq\ensuremath{{\mathcal{S}}}$. The sign $\det(c_{ij})=\pm1$ is the determinant of the unique integer matrix $(c_{ij})$ satisfying
\[
\beta_i=\sum_{j=1}^n c_{ij}\alpha_j
\]
for all $1\le i\le n$, where $\ensuremath{{\mathcal{A}}}\cap M_\beta^\perp=\ensuremath{{\mathcal{A}}}(\beta_1,\cdots,\beta_{n-1})$ is any chosen orientation of $\ensuremath{{\mathcal{B}}}=\ensuremath{{\mathcal{A}}}\cap M_\beta^\perp$ and $\beta_n=\beta$.\end{thm}
\begin{proof}
By Theorem \ref{thm: BG(S)=X(S)}, the $n$-cell $\varepsilon(\ensuremath{{\mathcal{A}}})$ in $X(\ensuremath{{\mathcal{S}}})$ is the union of $n$-simplices $\ensuremath{{\mathcal{A}}}_0\to \ensuremath{{\mathcal{A}}}_1\to\cdots\to \ensuremath{{\mathcal{A}}}_n$ where $\ensuremath{{\mathcal{A}}}_0=\ensuremath{{\mathcal{A}}}$. The first morphism $\ensuremath{{\mathcal{A}}}\to \ensuremath{{\mathcal{B}}}=\ensuremath{{\mathcal{A}}}_1$ is given by a single exceptional object $[M_\beta]$ in the cluster category of $\ensuremath{{\mathcal{A}}}$ and the $n-1$ simplex $\ensuremath{{\mathcal{B}}}=\ensuremath{{\mathcal{A}}}_1\to\cdots\to \ensuremath{{\mathcal{A}}}_n$ is part of the $n-1$ cell $\varepsilon(\ensuremath{{\mathcal{B}}})$. Every simplex is oriented by the ordering of its vertices. Since each maximal chain of composable morphisms $\ensuremath{{\mathcal{A}}}=\ensuremath{{\mathcal{A}}}_0\to\cdots\to \ensuremath{{\mathcal{A}}}_n$ is given by a signed exceptional sequence, each such sequence gives an orientation of $\varepsilon(\ensuremath{{\mathcal{A}}})$. \vs2
Claim: The corresponding sequence of dimension vectors $(\alpha_1,\cdots,\alpha_n)$ is unique up to invertible integer matrix transformation. I.e., $\beta_i=\sum c_{ij}\alpha_j$ where $(c_{ij})\in GL(n,\ensuremath{{\field{Z}}})$ for any other such sequence $(\beta_i)$.
Proof of Claim: Any two exceptional sequences can be transformed into each other by braid moves. Each braid move changes the sequence of dimension vectors by transposing two and adding a multiple of one to the other. The signs in a signed exceptional sequence can be changed by multiplication by a diagonal matrix with entries $\pm1$. In all cases, the dimension vectors change by an integer matrix of determinant $\pm1$.
\vs2
Suppose we have a fixed orientation of the $n$-cell $\varepsilon(\ensuremath{{\mathcal{A}}})$. Then which morphisms $[M_\beta]:\ensuremath{{\mathcal{A}}}\to \ensuremath{{\mathcal{B}}}$ do we have? By definition of cluster morphism, there is one such morphism for every (isomorphism class of) indecomposable object $M_\beta$ of the cluster category of $\ensuremath{{\mathcal{A}}}$. These morphisms have target $\ensuremath{{\mathcal{B}}}=\ensuremath{{\mathcal{A}}}\cap M_\beta^\perp$. Each wide subcategory $\ensuremath{{\mathcal{B}}}\subseteq\ensuremath{{\mathcal{A}}}$ of rank $n-1$ occurs in this way and $M_\beta$ is uniquely determined by $\ensuremath{{\mathcal{B}}}$ except in the case when $M_\beta$ is projective in which case $[M_\beta[1]]=[M_{-\beta}]$ is also a morphism $\ensuremath{{\mathcal{A}}}\to\ensuremath{{\mathcal{B}}}$.
When $M_\beta$ is not projective, the incidence number of $\varepsilon(\ensuremath{{\mathcal{A}}})$ with $\varepsilon(\ensuremath{{\mathcal{B}}})$ is $\pm1$ and the sign is determined by the choice of orientation of both $\ensuremath{{\mathcal{A}}}$ and $\ensuremath{{\mathcal{B}}}$. The orientation of $\ensuremath{{\mathcal{B}}}$ is specified by an $n-1$ simplex: $\ensuremath{{\mathcal{B}}}=\ensuremath{{\mathcal{B}}}_1\to \ensuremath{{\mathcal{B}}}_2\to\cdots\to \ensuremath{{\mathcal{B}}}_n=0$ which is given by a signed exceptional sequence $(\beta_1,\cdots,\beta_{n-1})$. Appending the morphism $[M_\beta]:\ensuremath{{\mathcal{A}}}\to\ensuremath{{\mathcal{B}}}$ gives the signed exceptional sequence $(\beta_1,\cdots,\beta_{n-1},\beta)$. If $(c_{ij})$ is the comparison matrix of this sequence with $(\alpha_1,\cdots,\alpha_n)$ then $\det(c_{ij})$ is the incidence number of $[\ensuremath{{\mathcal{A}}}]$ with $[\ensuremath{{\mathcal{B}}}]$.
For each projective object $P=M_\beta\in\ensuremath{{\mathcal{A}}}$ there are two objects in the cluster category: $P$ and $P[1]=M_{-\beta}$. This gives two morphisms $[M_{\pm \beta}]:\ensuremath{{\mathcal{A}}}\to \ensuremath{{\mathcal{B}}}$. For any fixed orientations $(\alpha_\ast),(\beta_\ast)$ of $\ensuremath{{\mathcal{A}}},\ensuremath{{\mathcal{B}}}=\ensuremath{{\mathcal{A}}}\cap P^\perp$ these two morphisms have opposite sign since the sign of the last vector $\beta_n=\pm \beta$ changes. Therefore, the incidence number of $[\ensuremath{{\mathcal{A}}}]$ and $[\ensuremath{{\mathcal{A}}}\cap P^\perp]$ is zero. This proves the formula for $d_n:C_n(X(\ensuremath{{\mathcal{S}}}))\to C_{n-1}(X(\ensuremath{{\mathcal{S}}}))$ for any finite convex set $\ensuremath{{\mathcal{S}}}$.
\end{proof}
\end{document}
\begin{document}
\title[Types of elements, Gelfand and strongly harmonic rings]{On types of elements, Gelfand and strongly harmonic rings of skew PBW extensions over weak compatible rings}
\author{Andr\'es Chac\'on}
\address{Universidad Nacional de Colombia - Sede Bogot\'a}
\curraddr{Campus Universitario}
\email{[email protected]}
\thanks{}
\author{Sebasti\'an Higuera}
\address{Universidad Nacional de Colombia - Sede Bogot\'a}
\curraddr{Campus Universitario}
\email{[email protected]}
\thanks{}
\author{Armando Reyes}
\address{Universidad Nacional de Colombia - Sede Bogot\'a}
\curraddr{Campus Universitario}
\email{[email protected]}
\thanks{The authors were supported by the research fund of Faculty of Science, Code HERMES 53880, Universidad Nacional de Colombia--Sede Bogot\'a, Colombia.}
\subjclass[2020]{16S36, 16S38, 16U40, 16U60}
\keywords{NI ring, skew PBW extension, idempotent element, unit element, von Neumann regular element, clean element, Gelfand ring, harmonic ring}
\date{}
\dedicatory{Dedicated to Professor Oswaldo Lezama}
\begin{abstract}
We investigate and characterize several kinds of elements, such as units, idempotents, von Neumann regular, $\pi$-regular and clean elements, of skew PBW extensions over weak compatible rings. We also study the notions of Gelfand and harmonic rings for these families of algebras. The results presented here extend the corresponding results in the literature for commutative and noncommutative rings of polynomial type.
\end{abstract}
\maketitle
\section{Introduction}\label{ch0}
Throughout the paper, the term ring means an associative (not necessarily commutative) ring with identity unless otherwise stated. Additionally, for a ring $R$, the sets $N_{*}(R)$, $N(R)$, $N^{*}(R)$, $U(R)$, $Z(R)$ and $J(R)$ denote the prime radical, the set of nilpotent elements, the upper nil radical (i.e., the sum of all nil ideals of $R$), the set of units, the center, and the Jacobson radical of $R$, respectively.
Recall that a ring $R$ is {\em reduced} if it has no nonzero nilpotent elements, that is, $N(R)= \{ 0 \}$. On the other hand, a ring $R$ is called {\em semicommutative} if $ab = 0$ implies $aRb = 0$, for any elements $a, b\in R$. If the equality $N^{*}(R) = N(R)$ holds, Marks \cite{Marks2001} called $R$ an {\em NI} ring. If $J(R)=N(R)$, then we say that $R$ is an {\em NJ} ring. The following implications are well known: reduced $\Rightarrow$ semicommutative $\Rightarrow$ NI, but the converses do not hold (see \cite{ChenCui2011} and \cite{Karamzadeh1987} for more details). A ring $R$ is called {\em Abelian} if every idempotent is central, and {\em right} (respectively, {\em left}) {\em duo} if every right (respectively, left) ideal is a two-sided ideal. The implications one-sided duo (left or right) $\Rightarrow$ semicommutative $\Rightarrow$ Abelian also hold.
In 1936, von Neumann \cite{vonNeumann1936} introduced von Neumann regular rings as an algebraic tool for studying certain lattices and some properties of operator algebras. Briefly, a ring $R$ is called {\em von Neumann regular} if for every element $a\in R$ there exists an element $r\in R$ such that $a = ara$. These rings are also known as {\em absolutely flat rings} due to their characterization in terms of modules. Von Neumann regular rings are of great importance in areas such as topology and functional analysis. More precisely, the prime spectrum of a commutative von Neumann regular ring is related to different types of compactifications and to the prime spectrum of its ring of idempotent elements (see \cite{Goodearl1979, RubioAcosta2012, Rump2010} for more details). These facts show the close relationship of von Neumann regular rings with Boolean rings.
Related to the description of idempotents and other kinds of elements of a ring, we find the Gelfand rings and the clean elements. Recall that a ring $R$ is said to be a {\em Gelfand ring} if for each pair of distinct maximal right ideals $M_1$, $M_2$ of $R$, there exist right ideals $I_1$, $I_2$ of $R$ such that $I_1\not\subseteq M_1$, $I_2\not\subseteq M_2$ and $I_1I_2=0$. On the other hand, an element $a\in R$ is called a {\em clean element} if $a$ is the sum of a unit and an idempotent of $R$, and $R$ is said to be a {\em clean ring} if every element of $R$ is clean. In commutative algebra, it is known that every clean ring is Gelfand. Additionally, Gelfand rings are tied to the Zariski topology over a ring, which allows one to characterize different properties of the prime spectrum and the maximal spectrum of a ring (for more details, see \cite{Aghajanietal2020}).
Continuing with the study of Gelfand rings and their relationship with topological spaces, Mulvey \cite{Mulvey1976} obtained a generalization of Swan's theorem concerning vector bundles over a compact topological space. He established an equivalence between the category of modules over a Gelfand ring and the category of modules over the corresponding compact ringed space. The algebraic $K$-theory of commutative Gelfand rings has been studied by Carral \cite{Carral1980} by showing relationships between the stable rank over Gelfand rings and the covering dimension of the maximal ideal space. These results are analogous to those corresponding for Noetherian rings with respect to the Krull dimension. Zhang et al. \cite{ZhangGelfand} showed that certain topological spaces related to $R$ are normal if and only if the quotient ring $R/N_*(R)$ is Gelfand.
Concerning noncommutative rings of polynomial type, for the {\em skew polynomial rings} (also known as {\em Ore extensions}) $R[x;\sigma,\delta]$ introduced by Ore \cite{Ore1933}, Hashemi et al. \cite{Hashemi} investigated characterizations of different elements by using the notion of {\em compatible ring} defined by Annin \cite{Annin2004} (cf. Hashemi and Moussavi \cite{HashemiMoussavi2005}). Briefly, if $R$ is a ring, $\sigma$ is an endomorphism of $R$, and $\delta$ is a $\sigma$-derivation of $R$, then (i) $R$ is said to be $\sigma$-{\em compatible} if for each $a, b\in R$, $ab = 0$ if and only if $a\sigma(b)=0$ (necessarily, the endomorphism $\sigma$ is injective); (ii) $R$ is called $\delta$-{\em compatible} if for each $a, b\in R$, $ab = 0$ implies $a\delta(b)=0$; (iii) if $R$ is both $\sigma$-compatible and $\delta$-compatible, then $R$ is called ($\sigma,\delta$)-{\em compatible}. With these ring-theoretical notions, Hashemi et al. \cite{Hashemi} characterized the unit elements, idempotent elements, von Neumann regular elements, $\pi$-regular elements, and also the von Neumann local elements of the skew polynomial ring $R[x; \sigma, \delta]$ when the base ring $R$ is a right duo $(\sigma,\delta)$-compatible ring.
Motivated by the notion of compatibility for Ore extensions, Hashemi et al. \cite{Hashemietal2017} and Reyes and Su\'arez \cite{ReyesSuarez2018} introduced independently the $(\Sigma, \Delta)$-{\em compatible rings} (see Section \ref{weakcompatiblerings}) as a natural generalization of $(\sigma, \delta)$-compatible rings with the aim of studying the {\em skew PBW extensions} defined by Gallego and Lezama \cite{GallegoLezama2011} (in Section \ref{Definitions} we say some words about the generality of these objects with respect to other noncommutative algebras). Examples, ring- and module-theoretic properties of these extensions over compatible rings have been investigated by several authors (e.g. \cite{Hashemietal2017, HigueraReyes2022, ReyesSuarez2018, ReyesSuarez2020}). In particular, Hamidizadeh et al. \cite{Hamidizadehetal2020} characterized the above types of elements of skew PBW extensions over compatible rings, generalizing the results obtained by Hashemi et al. \cite{Hashemi}.
Reyes and Su\'arez \cite{ReyesSuarez2020} introduced the {\em weak $(\Sigma,\Delta)$-compatible rings} as a natural generalization of both the compatible rings for skew PBW extensions and the {\em weak $(\sigma,\delta)$-compatible rings} defined by Ouyang and Liu \cite{LunquenJingwang2011} for Ore extensions, and Higuera and Reyes \cite{HigueraReyes2022} used this weak notion to characterize the weak annihilators and nilpotent associated primes of skew PBW extensions. An immediate and natural task is therefore to study the types of elements described above for these extensions under this weak notion of compatibility, and hence to investigate whether all results established in \cite{Hamidizadehetal2020, Hashemi} can be extended to a more general setting. This is the purpose of the paper.
With this aim, the article is organized as follows. In Section \ref{Definitions}, we recall some definitions and results about skew PBW extensions and weak $(\Sigma, \Delta)$-compatible rings. Section \ref{originalresults} contains the original results of the paper. More precisely, Section \ref{Elements} presents results concerning the characterization of different types of elements such as idempotents, units, von Neumann regular, and clean elements of skew PBW extensions over weak compatible rings (Theorems \ref{th.units}, \ref{theoremidem}, \ref{Proposition1.3.3.a}, \ref{WeakTheorem4.14}, \ref{WeakTheorem4.15}, \ref{WeakTheorem4.16} and \ref{WeakTheorem4.17}). Next, in Section \ref{Gelfandrings} we investigate the notions of strongly harmonic and Gelfand rings for skew PBW extensions (Propositions \ref{prop.strongly.coef} and \ref{prop.not.local}, and Theorems \ref{th.no.gelfand} and \ref{theorem.harmonic.unique}). Our results generalize the corresponding results for skew PBW extensions over right duo rings presented by Hamidizadeh et al. \cite{Hamidizadehetal2020}, and for Ore extensions over noncommutative rings presented by Hashemi et al. \cite{Hashemi}. Finally, Section \ref{future} presents some ideas for possible future research.
Throughout the paper, $\mathbb{N}$ and $\mathbb{Z}$ denote the sets of natural numbers and integers, respectively. We assume that the set of natural numbers includes zero.
\section{Definitions and elementary properties}\label{Definitions}
\subsection{Skew Poincar\'e-Birkhoff-Witt extensions}\label{definitionexamplesspbw}
Skew PBW extensions were defined by Gallego and Lezama \cite{GallegoLezama2011} with the aim of generalizing Poincar\'e-Birkhoff-Witt extensions introduced by Bell and Goodearl \cite{BellGoodearl1988} and Ore extensions of injective type defined by Ore \cite{Ore1933}. Over the years, several authors have shown that skew PBW extensions also generalize families of noncommutative algebras such as 3-dimensional skew polynomial algebras introduced by Bell and Smith \cite{BellSmith1990}, diffusion algebras defined by Isaev et al. \cite{IsaevPyatovRittenberg2001}, ambiskew polynomial rings introduced by Jordan in several papers \cite{Jordan1993, Jordan2000,JordanWells1996}, solvable polynomial rings introduced by Kandri-Rody and Weispfenning \cite{KandryWeispfenninig1990}, almost normalizing extensions defined by McConnell and Robson \cite{McConnellRobson2001}, skew bi-quadratic algebras recently introduced by Bavula \cite{Bavula2021}, and others. For more details about the relationships between skew PBW extensions and other algebras having PBW bases, see \cite{BrownGoodearl2002, Lezamabook2020, GoodearlWarfield2004, McConnellRobson2001} and references therein.
\begin{definition}[{\cite[Definition 1]{GallegoLezama2011}}] \label{def.skewpbwextensions}
Let $R$ and $A$ be rings. We say that $A$ is a \textit{skew PBW extension over} $R$ (the ring of coefficients), denoted $A=\sigma(R)\langle
x_1,\dots,x_n\rangle$, if the following conditions hold:
\begin{enumerate}
\item[\rm (i)]$R$ is a subring of $A$ sharing the same identity element.
\item[\rm (ii)] There exist finitely many elements $x_1,\dots ,x_n\in A$ such that $A$ is a left free $R$-module, with basis the
set of standard monomials
\begin{center}
${\rm Mon}(A):= \{x^{\alpha}:=x_1^{\alpha_1}\cdots
x_n^{\alpha_n}\mid \alpha=(\alpha_1,\dots ,\alpha_n)\in
\mathbb{N}^n\}$.
\end{center}
Moreover, $x^0_1\cdots x^0_n := 1 \in {\rm Mon}(A)$.
\item[\rm (iii)]For every $1\leq i\leq n$ and any $r\in R\ \backslash\ \{0\}$, there exists $c_{i,r}\in R\ \backslash\ \{0\}$ such that $x_ir-c_{i,r}x_i\in R$.
\item[\rm (iv)]For $1\leq i,j\leq n$, there exists $d_{i,j}\in R\ \backslash\ \{0\}$ such that
\[
x_jx_i-d_{i,j}x_ix_j\in R+Rx_1+\cdots +Rx_n,
\]
i.e. there exist elements $r_0^{(i,j)}, r_1^{(i,j)}, \dotsc, r_n^{(i,j)} \in R$ with
\begin{center}
$x_jx_i - d_{i,j}x_ix_j = r_0^{(i,j)} + \sum_{k=1}^{n} r_k^{(i,j)}x_k$.
\end{center}
\end{enumerate}
\end{definition}
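To fix ideas, we illustrate Definition \ref{def.skewpbwextensions} with a classical object (the following elementary verification is included only as an illustration).
\begin{example}
Let $K$ be a field and let $A_1(K)$ be the first Weyl algebra, generated over $K$ by $x_1, x_2$ subject to the relation $x_2x_1 - x_1x_2 = 1$. Taking $R = K$, condition (i) holds, and the standard monomials $x_1^{\alpha_1}x_2^{\alpha_2}$ form a left $K$-basis of $A_1(K)$, so (ii) holds. Since every $r \in K$ is central, (iii) holds with $c_{i,r} = r$, and (iv) holds with $d_{1,2} = 1$ and $r_0^{(1,2)} = 1$ (the remaining $r_k^{(1,2)}$ being zero). Therefore $A_1(K) = \sigma(K)\langle x_1, x_2\rangle$ is a skew PBW extension; in the notation of Proposition \ref{sigmadefinition}, $\sigma_1 = \sigma_2 = {\rm id}_K$ and $\delta_1 = \delta_2 = 0$.
\end{example}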
Since ${\rm Mon}(A)$ is a left $R$-basis of $A$, the elements $c_{i,r}$ and $d_{i, j}$ are unique. Thus, every nonzero element $f \in A$ can be uniquely expressed as $f = \sum_{i=0}^ma_iX_i$, with $a_i \in R$, $X_0=1$, and $X_i \in \text{Mon}(A)$, for $0 \leq i \leq m$ (when necessary, we use the notation $f = \sum_{i=0}^ma_iY_i$) \cite[Remark 2]{GallegoLezama2011}.
\begin{proposition}[{\cite[Proposition 3]{GallegoLezama2011}}] \label{sigmadefinition}
If $A=\sigma(R)\langle x_1,\dots,x_n\rangle$ is a skew PBW extension, then there exist an injective endomorphism $\sigma_i:R\rightarrow R$ and a $\sigma_i$-derivation $\delta_i:R\rightarrow R$ such that $x_ir=\sigma_i(r)x_i+\delta_i(r)$, for each $1\leq i\leq n$, where $r\in R$.
\end{proposition}
We use the notation $\Sigma:=\{\sigma_1,\dots,\sigma_n\}$ and $\Delta:=\{\delta_1,\dots,\delta_n\}$ for the families of injective endomorphisms and $\sigma_i$-derivations, respectively, established in Proposition \ref{sigmadefinition}. For a skew PBW extension $A = \sigma(R)\langle x_1,\dotsc, x_n\rangle$ over $R$, we say that the pair $(\Sigma, \Delta)$ is a \textit{system of endomorphisms and $\Sigma$-derivations} of $R$ with respect to $A$. For $\alpha = (\alpha_1, \dots , \alpha_n) \in \mathbb{N}^n$, $\sigma^{\alpha}:= \sigma_1^{\alpha_1}\circ \cdots \circ \sigma_n^{\alpha_n}$, $\delta^{\alpha} := \delta_1^{\alpha_1} \circ \cdots \circ \delta_n^{\alpha_n}$, where $\circ$ denotes the classical composition of functions.
We recall some results about quotient rings of skew PBW extensions which are useful for the paper (c.f. \cite{LezamaAcostaReyes2015}).
\begin{definition}[{\cite[Definition 5.1.1]{Lezamabook2020}}]
Let $R$ be a ring and $(\Sigma,\Delta)$ a system of endomorphisms and $\Sigma$-derivations of $R$. If $I$ is a two-sided ideal of $R$, then $I$ is called $\Sigma$-{\em invariant} if $\sigma^{\alpha}(I)\subseteq I$, where $\alpha \in \mathbb{N}^n$. $I$ is a $\Delta$-{\em invariant} ideal if $\delta^{\alpha}(I)\subseteq I$, where $\alpha \in \mathbb{N}^n$. If $I$ is both $\Sigma$ and $\Delta$-invariant, we say that $I$ is {\em $(\Sigma,\Delta)$-invariant}.
\end{definition}
\begin{proposition}[{\cite[Proposition 5.1.2]{Lezamabook2020}}]
Let $R$ be a ring, $(\Sigma,\Delta)$ a system of endomorphisms and $\Sigma$-derivations of $R$, $I$ a proper two-sided ideal of $R$ and $\overline{R}:=R/I$. If $I$ is $(\Sigma,\Delta)$-invariant, then a system $(\overline{\Sigma},\overline{\Delta})$ of endomorphisms and $\overline{\Sigma}$-derivations is induced over $\overline{R}$, defined by $\overline{\sigma_i}(\overline{r}):=\overline{\sigma_i(r)}$ and $\overline{\delta_i}(\overline{r}):=\overline{\delta_i(r)}$, $1\le i\le n$.
\end{proposition}
\begin{proposition}[{\cite[Proposition 5.1.6]{Lezamabook2020}}]\label{prop.invariant}
Let $A=\sigma(R)\langle x_1,\dots,x_n\rangle$ be a skew PBW extension and $I$ a $(\Sigma,\Delta)$-invariant ideal of $R$. Then:
\begin{enumerate}
\item[{\rm (1)}] $IA$ is an ideal of $A$ and $IA\cap R=I$. $IA$ is proper if and only if $I$ is proper. Moreover, if for every $1\leq i\leq n$, $\sigma_i$ is bijective and $\sigma_i(I)=I$, then $IA=AI$.
\item[{\rm (2)}] If $I$ is proper and $\sigma_i(I)=I$ for every $1\leq i\leq n$, then $A/IA$ is a skew PBW extension of $R/I$.
\end{enumerate}
\end{proposition}
The following result shows the relation between NI rings and invariant ideals.
\begin{proposition}\label{prop.N.invariante}
Let $A=\sigma(R)\langle x_1,\dots,x_n\rangle$ be a skew PBW extension. If $A$ is NI then $N(R)$ is a $(\Sigma,\Delta)$-invariant ideal.
\begin{proof}
Clearly, we have that $N(R)$ is a $\Sigma$-invariant ideal. On the other hand, consider an element $a\in N(R)$. Since $a$ and $\sigma_i(a)$ are elements of $N(A)$ and $N(A)$ is an ideal of $A$, then $\delta_i(a)=x_ia-\sigma_i(a)x_i\in N(A)$, that is, $\delta_i(a)\in N(R)$. Therefore, $N(R)$ is a $(\Sigma,\Delta)$-invariant ideal.
\end{proof}
\end{proposition}
\subsection{Weak $(\Sigma,\Delta)$-compatible rings}\label{weakcompatiblerings}
Let $R$ be a ring and $\sigma$ an endomorphism of $R$. Krempa \cite{Krempa1996} defined $\sigma$ as a {\em rigid endomorphism} if $r\sigma(r)=0$ implies $r=0$, where $r\in R$. In this way, a ring $R$ is called $\sigma$-{\em rigid} if there exists a rigid endomorphism $\sigma$ of $R$. Following Annin \cite{Annin2004} (c.f. Hashemi and Moussavi \cite{HashemiMoussavi2005}), a ring $R$ is said to be $\sigma$-{\em compatible} if for every $a, b \in R$, $ab = 0$ if and only if $a\sigma(b) = 0$; $R$ is called $\delta$-{\em compatible}, if for each $a, b \in R$, we have $ab = 0 \Rightarrow a\delta(b) = 0$. Moreover, if $R$ is both $\sigma$-compatible and $\delta$-compatible, then $R$ is said to be $(\sigma,\delta)$-{\em compatible}. Reyes and Su\'arez \cite {ReyesSuarez2018} and Hashemi, Khalilnezhad and Alhevaz \cite{Hashemietal2017} introduced independently the $(\Sigma, \Delta)$-compatible rings which are a natural generalization of $(\sigma, \delta)$-compatible rings. Briefly, for a ring $R$ with a finite family of endomorphisms $\Sigma$ and a finite family of $\Sigma$-derivations $\Delta$, and considering the notation established in Proposition \ref{sigmadefinition} for families of endomorphisms and derivations, we say that $R$ is $\Sigma$-{\em compatible} if for each $a, b \in R$, $a\sigma^{\alpha}(b) = 0$ if and only if $ab = 0$, where $\alpha \in \mathbb{N}^n$. Similarly, we say that $R$ is $\Delta$-{\em compatible} if for each $a, b \in R$, it follows that $ab = 0$ implies $a\delta^{\beta}(b)=0$, where $\beta \in \mathbb{N}^n$. If $R$ is both $\Sigma$-compatible and $\Delta$-compatible, then $R$ is called $(\Sigma, \Delta)$-{\em compatible}.
Examples of skew PBW extensions over $(\Sigma, \Delta)$-compatible rings include PBW extensions defined by Bell and Goodearl \cite{BellGoodearl1988}, some operator algebras (e.g., the algebra of linear partial differential operators, the algebra of linear partial shift operators, the algebra of linear partial difference operators, the algebra of linear partial $q$-dilation operators, and the algebra of linear partial $q$-differential operators), the class of diffusion algebras \cite{IsaevPyatovRittenberg2001}, quantizations of Weyl algebras, the family of 3-dimensional skew polynomial algebras \cite{BellSmith1990}, and other families of noncommutative algebras having PBW bases. A detailed list of examples can be found in \cite{Hashemietal2017, HigueraReyes2022, Jordan2000, ReyesSuarez2018}.
The next definition presents the {\em weak compatible rings}, which are more general than compatible rings (see Examples \ref{exampleweak1} and \ref{exampleweak2}).
\begin{definition}[{\cite[Definition 4.1]{ReyesSuarez2020}}]\label{def.weakcom}
Let $R$ be a ring with a finite family of endomorphisms $\Sigma$ and a finite family of $\Sigma$-derivations $\Delta$. We say that $R$ is {\it weak $\Sigma$-compatible} if for each $a,b\in R$, $a\sigma^\alpha(b)\in N(R)$ if and only if $ab\in N(R)$, for all $\alpha\in\mathbb{N}^n$. Similarly, $R$ is called {\it weak $\Delta$-compatible} if for each $a,b\in R$, $ab\in N(R)$ implies $a\delta^\beta(b)\in N(R)$, for every $\beta\in\mathbb{N}^n$. If $R$ is both weak $\Sigma$-compatible and weak $\Delta$-compatible, then $R$ is called {\it weak $(\Sigma,\Delta)$-compatible}.
\end{definition}
\begin{proposition}[{\cite[Proposition 4.2]{ReyesSuarez2020}}]\label{prop.weak}
If $R$ is a weak $(\Sigma,\Delta)$-compatible ring, then the following assertions hold:
\begin{enumerate}
\item If $ab\in N(R)$, then $a\sigma^\alpha(b),\sigma^\beta(a)b\in N(R)$, for all elements $\alpha,\beta\in\mathbb{N}^n$.
\item If $\sigma^\alpha(a)b\in N(R)$, for some element $\alpha\in\mathbb{N}^n$, then $ab\in N(R)$.
\item If $a\sigma^\beta(b)\in N(R)$, for some element $\beta\in\mathbb{N}^n$, then $ab\in N(R)$.
\item If $ab\in N(R)$, then $\sigma^\alpha(a)\delta^\beta(b),\delta^\beta(a)\sigma^\alpha(b)\in N(R)$, for every $\alpha,\beta\in\mathbb{N}^n$.
\end{enumerate}
\end{proposition}
We present two examples of skew PBW extensions over weak $(\Sigma,\Delta)$-compatible rings that are not $(\Sigma,\Delta)$-compatible.
\begin{example}\label{exampleweak1} Let $R$ be a reduced ring and $R_2$ the ring of $2\times 2$ upper triangular matrices over $R$. Consider the endomorphism $\sigma:R_2 \rightarrow R_2$ defined by $\sigma \left( \bigl( \begin{smallmatrix}a & b\\ 0 & c \end{smallmatrix} \bigr) \right)= \bigl(\begin{smallmatrix}a & 0\\0 & c\end{smallmatrix}\bigr)$, and the $\sigma$-derivation $\delta: R_2 \rightarrow R_2$ defined by $\delta \left( \bigl( \begin{smallmatrix}a & b\\ 0 & c \end{smallmatrix} \bigr) \right)= \bigl( \begin{smallmatrix}0 & b\\ 0 & 0 \end{smallmatrix} \bigr)$, for all $\bigl( \begin{smallmatrix}a & b\\ 0 & c \end{smallmatrix} \bigr) \in R_2$. Notice that $\bigl(\begin{smallmatrix} 1 & 1 \\ 0 & 1\end{smallmatrix}\bigr)\cdot \sigma \left(\bigl(\begin{smallmatrix}0 & 1\\ 0 &0 \end{smallmatrix}\bigr) \right)=\bigl(\begin{smallmatrix}0 & 0\\ 0 & 0\end{smallmatrix}\bigr)$ with $\bigl(\begin{smallmatrix} 1 & 1 \\ 0 & 1\end{smallmatrix}\bigr)\cdot \bigl(\begin{smallmatrix}0 & 1\\ 0 &0 \end{smallmatrix}\bigr) \neq \bigl(\begin{smallmatrix}0 & 0\\ 0 & 0\end{smallmatrix}\bigr)$, which means that $R_2$ is not a $(\sigma, \delta)$-compatible ring. On the other hand, the set of nilpotent elements of $R_2$ consists of all matrices of the form $\bigl(\begin{smallmatrix}0 & b\\ 0 & 0\end{smallmatrix}\bigr)$, for any element $b \in R$. In this way, if $\bigl(\begin{smallmatrix} a & b \\ 0 & c\end{smallmatrix}\bigr)\cdot \bigl(\begin{smallmatrix}e & f\\ 0 & h \end{smallmatrix}\bigr) \in N(R_2)$, then $ae=ch=0$. This implies that $\bigl(\begin{smallmatrix} a & b \\ 0 & c\end{smallmatrix}\bigr)\cdot \sigma \left( \bigl( \begin{smallmatrix} e & f\\ 0 & h \end{smallmatrix} \bigr) \right) = \bigl(\begin{smallmatrix} a & b \\ 0 & c\end{smallmatrix}\bigr)\cdot \bigl(\begin{smallmatrix}e & 0\\ 0 & h \end{smallmatrix}\bigr) \in N(R_2)$. By a similar argument, if $A\sigma(B) \in N(R_2)$, then $AB \in N(R_2)$, for all $A, B \in R_2$.
Finally, $A\delta(B) \in N(R_2)$, for all $A,B \in R_2$ with $AB \in N(R_2)$. Therefore, we conclude $R_2$ is a weak $(\sigma, \delta)$-compatible ring. Notice that the Ore extension $R_2[x;\sigma,\delta]$ is a skew PBW extension over $R_2$ which is weak $(\sigma,\delta)$-compatible.
\end{example}
\begin{example}[{\cite[Example 3.2]{Suarezetal2021RNP}}]\label{exampleweak2} Let ${\rm S}_2(\mathbb{Z})$ be a subring of upper triangular matrices defined by ${\rm S}_2(\mathbb{Z})= \left \{ \bigl(\begin{smallmatrix}a & b\\ 0 & a \end{smallmatrix}\bigr) \mid a,b \in \mathbb{Z} \right \}$. Let $\sigma_1={\rm id}_{{\rm S}_2(\mathbb{Z})}$ be the identity endomorphism of ${\rm S}_2(\mathbb{Z})$, and consider $\sigma_2$ and $\sigma_3$ two endomorphisms defined by
$\sigma_2 \left( \bigl( \begin{smallmatrix}a & b\\ 0 & a \end{smallmatrix} \bigr) \right)= \bigl(\begin{smallmatrix}a & -b\\0 & a\end{smallmatrix}\bigr)$ and $\sigma_3\left( \bigl( \begin{smallmatrix}a & b\\ 0 & a \end{smallmatrix} \bigr) \right)= \bigl( \begin{smallmatrix}a & 0\\0 & a\end{smallmatrix} \bigr)$.
Note that ${\rm S}_2(\mathbb{Z})$ is not $\sigma_3$-compatible, since for $A=\bigl( \begin{smallmatrix} 1 & 1\\ 0& 1\end{smallmatrix} \bigr)$ and $B=\bigl(\begin{smallmatrix} 0 & 1\\ 0 & 0 \end{smallmatrix}\bigr)$, we have $A\sigma_3(B)=0$ but $AB=\bigl( \begin{smallmatrix} 0 & 1\\ 0& 0\end{smallmatrix}\bigr) \neq 0$. Hence, we conclude ${\rm S}_2(\mathbb{Z})$ is not a $\Sigma$-compatible ring. In the same way, the set of nilpotent elements of $S_2(\mathbb{Z})$ consists of all matrices of the form $\bigl(\begin{smallmatrix}0 & b\\ 0 & 0\end{smallmatrix}\bigr)$, for any $b \in \mathbb{Z}$. An argument similar to the previous example shows that ${\rm S}_2(\mathbb{Z})$ is a weak $\Sigma$-compatible ring, so we can consider a skew PBW extension $A= \sigma({\rm S}_2(\mathbb{Z}))\langle x,y,z \rangle$ with three indeterminates $x, y$ and $z$ satisfying the conditions established in Definition \ref{def.skewpbwextensions}.
\end{example}
Proposition \ref{prop.rigid.and.weak} shows that if $R$ is reduced, then the notions of compatible ring and weak compatible ring coincide (c.f. \cite[Theorem 3.9]{ReyesSuarez2018}).
\begin{proposition}[{\cite[Theorem 4.5]{ReyesSuarez2020}}]\label{prop.rigid.and.weak}
If $A=\sigma(R)\langle x_1,\dots,x_n\rangle$ is a skew PBW extension, then the following statements are equivalent:\begin{enumerate}
\item $R$ is reduced and weak $(\Sigma,\Delta)$-compatible.
\item $R$ is $\Sigma$-rigid.
\item $A$ is reduced.
\end{enumerate}
\end{proposition}
Ouyang and Liu \cite{LunquenJingwang2011} characterized the nilpotent elements of skew polynomial rings over a weak $(\sigma, \delta)$-compatible and NI ring. Reyes and Su\'arez \cite{ReyesSuarez2020} extended this result for skew PBW extensions as the following proposition shows. We assume that the elements $d_{i,j}$ from Definition \ref{def.skewpbwextensions}(iv) are central and invertible in $R$.
\begin{proposition}[{\cite[Theorem 4.6]{ReyesSuarez2020}}]\label{prop.nilp}
If $A=\sigma(R)\langle x_1,\dots,x_n\rangle$ is a skew PBW extension over a weak $(\Sigma,\Delta)$-compatible and NI ring, then $f=\sum_{i=0}^m a_iX_i\in N(A)$ if and only if $a_i\in N(R)$, for all $0\leq i\leq m$.
\end{proposition}
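The following elementary commutative instance of Proposition \ref{prop.nilp} (our illustration, with trivial endomorphisms and derivations) may help to fix ideas.
\begin{example}
Let $R = \mathbb{Z}/4\mathbb{Z}$ and $A = R[x]$, viewed as a skew PBW extension $\sigma(R)\langle x\rangle$ with $\sigma_1 = {\rm id}_R$ and $\delta_1 = 0$, so that $R$ is trivially weak $(\Sigma,\Delta)$-compatible, and $R$ is NI since it is commutative. Here $N(R) = \{0, 2\}$. The element $f = 2 + 2x$ has all of its coefficients in $N(R)$, and indeed $f^2 = 4(1+x)^2 = 0$, so $f \in N(A)$. On the other hand, $g = 1 + 2x$ has the coefficient $1 \notin N(R)$, and $g$ is not nilpotent; in fact $g^2 = 1 + 4x + 4x^2 = 1$, so $g$ is a unit.
\end{example}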
Ouyang et al. \cite{Lunqunetal2013} introduced the notion of skew $\pi$-Armendariz ring as follows: if $R$ is a ring with an endomorphism $\sigma$ and a $\sigma$-derivation $\delta$, then $R$ is called a {\em skew $\pi$-Armendariz ring} if for polynomials $f(x) = \sum_{i=0}^{l} a_ix^i$ and $g(x) = \sum_{j=0}^{m} b_jx^j$ of $R[x; \sigma, \delta]$, $f(x)g(x) \in N(R[x; \sigma, \delta])$ implies that $a_ib_j \in N(R)$, for each $0 \leq i \leq l$ and $0 \leq j \leq m$. Skew $\pi$-Armendariz rings are more general than skew Armendariz rings when the ring of coefficients is $(\sigma,\delta)$-compatible \cite[Theorem 2.6]{Lunqunetal2013}, and they also extend the $\sigma$-Armendariz rings defined by Hong et al. \cite{Hongetal2006} considering $\delta$ as the zero derivation.
Ouyang and Liu \cite{LunquenJingwang2011} showed that if $R$ is a weak $(\sigma,\delta)$-compatible and NI ring, then $R$ is a skew $\pi$-Armendariz ring \cite[Corollary 2.15]{LunquenJingwang2011}. Reyes \cite{Reyes2018} formulated the analogue of skew $\pi$-Armendariz rings in the setting of skew PBW extensions. For a skew PBW extension $A = \sigma(R)\langle x_1,\dotsc, x_n\rangle$ over a ring $R$, we say that $R$ is a {\em skew} $\Pi$-{\em Armendariz ring} if for elements $f = \sum_{i=0}^l a_iX_i$ and $g = \sum_{j=0}^m b_jY_j$ of $A$, $fg \in N(A)$ implies $a_ib_j \in N(R)$, for each $0 \leq i \leq l$ and $0 \leq j \leq m$. If $R$ is reversible and $(\Sigma,\Delta)$-compatible, then $R$ is a skew $\Pi$-Armendariz ring \cite[Theorem 3.10]{Reyes2018}. This result was generalized to skew PBW extensions over weak $(\Sigma,\Delta)$-compatible and NI rings as the following proposition shows.
\begin{proposition}\label{prop.producto.nil1}
Let $A=\sigma(R)\langle x_1,\dots,x_n\rangle$ be a skew PBW extension over a weak $(\Sigma,\Delta)$-compatible and NI ring $R$.
\begin{enumerate}
\item [\rm (1)] \cite[Theorem 4.7]{ReyesSuarez2020}. If $f=\sum_{i=0}^m a_iX_i$ and $g=\sum_{j=0}^t b_jY_j$ are elements of $A$, then $fg\in N(A)$ if and only if $a_ib_j\in N(R)$, for all $i,j$.
\item [\rm (2)] \cite[Theorem 4.9]{ReyesSuarez2020}. For every idempotent element $e\in R$ and a fixed $i$, we have $\delta_i(e)\in N(R)$ and $\sigma_i(e)=e+u$, where $u\in N(R)$.
\end{enumerate}
\end{proposition}
\begin{proposition}[{\cite[Theorem 3.3]{SuarezChaconReyes2022}}]\label{th.NIiffNI}
Let $A=\sigma(R)\langle x_1,\dots,x_n\rangle$ be a skew PBW extension over a weak $(\Sigma,\Delta)$-compatible ring. Then $R$ is NI if and only if $A$ is NI, and in this case $N(A) = N(R)A$.
\end{proposition}
\section{Elements, Gelfand and strongly harmonic rings}\label{originalresults}
\subsection{Von Neumann regular and clean elements}\label{Elements}
Following Lam \cite{Lam1991}, for a ring $R$, an element $a\in R$ is called {\em von Neumann regular} if there exists an element $r\in R$ with $a = ara$. The element $a\in R$ is said to be a $\pi$-{\em regular element} of $R$ if $a^{m}ra^{m} = a^{m}$, for some $r\in R$ and $m\ge 1$. We consider the set of idempotent elements ${\rm Idem}(R)$, the set of von Neumann regular elements ${\rm vnr}(R)$, and the set of $\pi$-regular elements $\pi-r(R)$. It is clear that ${\rm Idem}(R)\subseteq {\rm vnr}(R)\subseteq \pi - r(R)$. Additionally, a ring $R$ is called {\em von Neumann regular} if the equality ${\rm vnr}(R) = R$ holds. If $\pi-r(R) = R$, then $R$ is said to be $\pi$-{\em regular}. Finally, $R$ is called {\em Boolean} whenever ${\rm Idem}(R) = R$. In this way, the implications Boolean $\Rightarrow $ von Neumann regular $\Rightarrow$ $\pi$-regular hold.
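The inclusion ${\rm vnr}(R) \subseteq \pi-r(R)$ is strict in general, as the following elementary example (ours, included only for illustration) shows.
\begin{example}
In $R = \mathbb{Z}/8\mathbb{Z}$, the element $a = 2$ is not von Neumann regular: $2r2 = 4r \in \{0, 4\}$ for every $r \in R$, so $2r2 \neq 2$. However, $2^3 = 0$, whence $2^3r2^3 = 2^3$ for every $r \in R$, and thus $2 \in \pi-r(R)$. By contrast, in $\mathbb{Z}/6\mathbb{Z}$ the element $2$ is von Neumann regular, since $2\cdot 2\cdot 2 = 8 = 2$.
\end{example}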
Contessa \cite{Contessa1984} introduced the notion of a von Neumann local element: an element $a\in R$ is called {\em von Neumann local} if either $a\in {\rm vnr}(R)$ or $1-a\in {\rm vnr}(R)$. Following Nicholson \cite{Nicholson1977}, an element $a\in R$ is a {\em clean element} if $a$ is the sum of a unit and an idempotent of $R$. Let ${\rm vnl}(R)$ be the set of von Neumann local elements and ${\rm cln}(R)$ the set of clean elements. If ${\rm cln}(R) = R$, then $R$ is called a {\em clean ring} \cite{Nicholson1977}. Examples of clean rings are the exchange rings and the semiperfect rings. Several characterizations of clean elements have been established by different authors (see \cite{KanwarLeroyMatczuk2015} and \cite{NicholsonZhou2005}). Finally, if ${\rm vnl}(R) = R$, we say that $R$ is a {\em von Neumann local} ring \cite{Hashemi}.
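A small local ring already illustrates the notions of clean and von Neumann local elements (an elementary verification of ours, not taken from the cited references).
\begin{example}
Let $R = \mathbb{Z}/4\mathbb{Z}$, so that ${\rm Idem}(R) = \{0, 1\}$ and $U(R) = \{1, 3\}$. Every element is clean: $0 = 3 + 1$, $1 = 1 + 0$, $2 = 1 + 1$ and $3 = 3 + 0$, whence ${\rm cln}(R) = R$. Moreover, $2 \notin {\rm vnr}(R)$ since $2r2 = 4r = 0 \neq 2$ for every $r \in R$, but $1 - 2 = 3$ is a unit and hence von Neumann regular, so $2 \in {\rm vnl}(R)$; in fact ${\rm vnl}(R) = R$.
\end{example}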
In this section, we characterize types of elements of a skew PBW extension over a weak $(\Sigma,\Delta)$-compatible and NI ring. We formulate results analogous to those obtained for skew polynomial rings \cite{Hashemi} and for skew PBW extensions over right duo rings \cite{Hamidizadehetal2020}.
Proposition \ref{WeakTheorem4.5} generalizes \cite[Theorem 4.5]{Hamidizadehetal2020}.
\begin{proposition}\label{WeakTheorem4.5}
Let $A = \sigma(R)\left \langle x_1, \dots , x_n \right \rangle$ be a skew PBW extension over a weak $(\Sigma,\Delta)$-compatible and $NI$ ring $R$, and consider $f = \sum_{i=0}^la_iX_i$ and $g = \sum_{j=0}^mb_jY_j $ non-zero elements of $A$ such that $fg = c \in R$. If $b_0$ is a unit of $R$, then $a_1, a_2,\dots,a_l$ are nilpotent elements of $R$.
\end{proposition}
\begin{proof}
Assume that $b_0$ is a unit of $R$; let us show that $a_1, a_2,\dots,a_l$ are all nilpotent. Since $R$ is an NI ring and weak $(\Sigma, \Delta)$-compatible, $N(R)$ is a $(\Sigma, \Delta)$-invariant ideal of $R$. Hence $\overline{R} = R/N(R)$ is a reduced ring which is also weak $(\overline{\Sigma}, \overline{\Delta})$-compatible. By Proposition \ref{prop.rigid.and.weak}, $\overline{R}$ is a $\overline{\Sigma}$-rigid ring. Since $fg = c \in R$, we have $\overline{f}\overline{g} = \overline{c}$ in $\sigma(\overline{R}) \left \langle x_1, \dots, x_n \right \rangle$, and hence $\overline{a_0}\overline{b_0} = \overline{c}$ and $\overline{a_i}\overline{b_j} = \overline{0}$, for each $i + j \geq 1$, by \cite[Proposition 4.2]{Hamidizadehetal2020}. Since $b_0$ is a unit, we get $\overline{a_i} = \overline{0}$ for each $i \geq 1$, whence $a_i$ is nilpotent for every $i \geq 1$.
\end{proof}
We establish the following characterization of the units of a skew PBW extension. Theorem \ref{th.units} generalizes \cite[Theorem 4.7]{Hamidizadehetal2020}.
\begin{theorem}\label{th.units} If $A=\sigma(R)\langle x_1,\dots,x_n\rangle$ is a skew PBW extension over a weak $(\Sigma,\Delta)$-compatible and NI ring $R$, then an element $f=\sum_{i=0}^ma_iX_i\in A$ is a unit of $A$ if and only if $a_0$ is a unit of $R$ and $a_i$ is nilpotent, for every $1 \le i \le m$.
\end{theorem}
\begin{proof}
Suppose that $R$ is a weak $(\Sigma, \Delta)$-compatible and NI ring. This implies that $N(R)$ is a $(\Sigma, \Delta)$-invariant ideal of $R$, whence $\overline{R} = R/N(R)$ is reduced and weak $(\overline{\Sigma},\overline{\Delta})$-compatible. Proposition \ref{prop.rigid.and.weak} implies that $\overline{R}$ is $\overline{\Sigma}$-rigid, and $\overline{A}=A/N(A)$ is a skew PBW extension over $\overline{R}$ by Proposition \ref{prop.invariant}.
Consider $f = \sum_{i=0}^la_iX_i$ a unit element of $A$. There exists $g=\sum_{j=0}^mb_jY_j \in A$ such that $fg = gf = 1$, which implies that $\overline{f}\overline{g}=\overline{g}\overline{f}= \overline{1}$ in $\overline{A}$, and so $\overline{a_0}\overline{b_0}=\overline{b_0}\overline{a_0}=\overline{1}$ and $\overline{a_i}\overline{b_j}=0$, for each $i+j\geq 1$ by \cite[Proposition 4.2]{Hamidizadehetal2020}. Then $\overline{a_0}$ and $\overline{b_0}$ are units of $\overline{R}$ and $a_1, \dotsc, a_l \in N(R)$. Since $N(R) \subseteq J(R)$ and $\overline{a_0}$ is a unit element of $\overline{R}$, we have that $a_0 \in U(R)$.
Conversely, let $a_0$ be a unit element and $a_1, \dotsc, a_l$ be nilpotent elements of $R$. Then $\sum_{i=1}^la_iX_i \in N(A)$ by Proposition \ref{prop.nilp}. Also, we get $N(A) \subseteq J(A)$ since $A$ is NI, and so $\sum_{i=1}^la_iX_i \in J(A)$. Therefore, we have $f =\sum_{i=0}^la_iX_i$ is a unit element of $A$.
\end{proof}
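As a concrete instance of the unit criterion in Theorem \ref{th.units}, consider the trivial commutative case $A=(\mathbb{Z}/4\mathbb{Z})[x]$: since $2$ is nilpotent, $1+2x$ should be a unit. The following Python sketch (a hypothetical illustration with ad hoc helper names) verifies this by direct multiplication.

```python
# In (Z/4Z)[x] a polynomial is a unit iff its constant term is a unit of
# Z/4Z and all higher coefficients are nilpotent.  Since 2^2 = 0 mod 4,
# the polynomial 1 + 2x is a unit; indeed it is its own inverse.
# (Hypothetical illustration.)

def polymul(f, g, n):
    """Multiply polynomials (coefficient lists, lowest degree first) mod n."""
    h = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            h[i + j] = (h[i + j] + a * b) % n
    return h

f = [1, 2]                  # 1 + 2x over Z/4Z
prod = polymul(f, f, 4)
assert prod == [1, 0, 0]    # (1 + 2x)^2 = 1 + 4x + 4x^2 = 1 (mod 4)
```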
As a consequence of the characterization of the units and nilpotent elements of a skew PBW extension over weak $(\Sigma, \Delta)$-compatible rings, we obtain Corollary \ref{WeakCorollary4.8} and Proposition \ref{th.NJ}.
\begin{corollary}\label{WeakCorollary4.8}
If $A=\sigma(R)\langle x_1,\dots,x_n\rangle$ is a skew PBW extension over a weak $(\Sigma,\Delta)$-compatible and NI ring $R$, then $U(A)=U(R)+N(R)A$.
\end{corollary}
\begin{proposition}\label{th.NJ}
If $A=\sigma(R)\langle x_1,\dots,x_n\rangle$ is a skew PBW extension over a weak $(\Sigma,\Delta)$-compatible and NI ring $R$, then $A$ is NJ.
\end{proposition}
\begin{proof}
Since $A$ is an NI ring, we have $N(A)\subseteq J(A)$ by Proposition \ref{th.NIiffNI}. Additionally, if $f=\sum_{i=0}^ma_iX_i$ is an element of $J(A)$, then $1+fx_n=1+\sum_{i=0}^ma_iX_ix_n$ is a unit element of $A$. Theorem \ref{th.units} shows that the coefficients $a_0,a_1,\dots,a_m$ belong to $N(R)$, and hence $f\in N(A)$ by Proposition \ref{prop.nilp}. Thus, we conclude $N(A)=J(A)$.
\end{proof}
About idempotent elements of skew PBW extensions over weak $(\Sigma, \Delta)$-compatible NI rings, the next result generalizes \cite[Theorem 4.9]{Hamidizadehetal2020}.
\begin{theorem}\label{theoremidem}
Let $A = \sigma(R)\left \langle x_1, \dots , x_n \right \rangle$ be a skew PBW extension over a weak $(\Sigma, \Delta)$-compatible and $NI$ ring $R$. If $f = \sum_{i=0}^la_iX_i$ is an idempotent element of $A$, then $a_i \in N(R)$, for each $1 \leq i \leq l$, and there exists an idempotent element $e \in R$ such that $\overline{a_0} = \overline{e} \in R/N(R)$.
\end{theorem}
\begin{proof}
Let $f = \sum_{i=0}^la_iX_i$ be an idempotent element of $A$ and consider the element $g=1-f=(1-a_0)-\sum_{i=1}^la_iX_i$. If $f$ is an idempotent, then $fg=0\in N(R)$. Hence, by Proposition \ref{prop.producto.nil1} (1), we have $a_ia_i=a_i^2 \in N(R)$ for $1 \le i \le l$ and $a_0(1-a_0)\in N(R)$. The former means that $a_i\in N(R)$ for $i\geq 1$, and the last assertion implies that $a_0-a_0^2\in N(R)$. By \cite[Theorem 21.28]{Lam1991}, there exists an idempotent $e\in R$ such that $a_0-e\in N(R)$, that is, $\overline{a_0} = \overline{e} \in R/N(R)$.
\end{proof}
Before describing the idempotent elements of skew PBW extensions over Abelian NI rings, we present two preliminary results that are used in the proof of the theorem describing these elements.
\begin{lemma}\label{lemaidem}
Let $R$ be any ring, $f,e\in {\rm Idem}(R)$ and $s\in N(R)$. If $f = e+s$ and $es = se$, then $s=0$.
\end{lemma}
\begin{proof}
If $s\neq 0$, then there exists $k\geq 2$ such that $s^k=0$ and $s^{k-1}\neq 0$. Since $f$ is idempotent, $0 = f(1-f) = (e+s)(1-e-s) = s-2es-s^2$. Thus $s^2 = (1-2e)s$, and multiplying by $s^{k-2}$ we have $0 = s^k=(1-2e)s^{k-1}$. Since $1-2e$ is invertible (indeed, $(1-2e)^2=1$), it follows that $s^{k-1} = 0$, which is a contradiction. Hence $s=0$.
\end{proof}
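The hypothesis $es=se$ in Lemma \ref{lemaidem} cannot be dropped. The following Python sketch (a hypothetical counterexample, not from the text) exhibits, in the $2\times 2$ matrix ring, an idempotent $f=e+s$ with $e$ idempotent and $s$ nilpotent but nonzero, which is possible precisely because $e$ and $s$ do not commute.

```python
# In M_2(Z), take e = E11 (idempotent) and s = E12 (nilpotent, nonzero).
# Then f = e + s is idempotent even though s != 0, so the commutativity
# hypothesis es = se in the lemma is essential.  (Hypothetical example.)
import numpy as np

e = np.array([[1, 0], [0, 0]])   # idempotent
s = np.array([[0, 1], [0, 0]])   # nilpotent, s^2 = 0, but s != 0
f = e + s

assert np.array_equal(f @ f, f)          # f is idempotent
assert np.array_equal(s @ s, 0 * s)      # s is nilpotent
assert not np.array_equal(e @ s, s @ e)  # e and s do not commute
```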
\begin{proposition}\label{prop.idem.centr}
Let $A=\sigma(R)\langle x_1,\dots,x_n\rangle$ be a skew PBW extension over a weak $(\Sigma,\Delta)$-compatible and Abelian NI ring $R$. If $e\in {\rm Idem}(R)$, then $e\in Z(A)$.
\end{proposition}
\begin{proof}
Fix $1\leq i\leq n$. By Proposition \ref{prop.producto.nil1} (2), there exists $u\in N(R)$ such that $\sigma_i(e)=e+u$. On the other hand, since $R$ is Abelian, we have $eu=ue$ and $\sigma_i(e)\in {\rm Idem}(R)$, which implies that $u=0$ and $\sigma_i(e)=e$ by Lemma \ref{lemaidem}. Hence, $\delta_i(e)=\delta_i(e^2)=\sigma_i(e)\delta_i(e)+\delta_i(e)e=2e\delta_i(e)$, i.e., $(1-2e)\delta_i(e)=0$, whence $\delta_i(e)=0$. Finally, since $\sigma_i(e)=e$ and $\delta_i(e)=0$, for all $1\leq i\leq n$, $e$ commutes with the $x_i$'s, and therefore $e\in Z(A)$.
\end{proof}
Next, we formulate a result that describes the idempotent elements of skew PBW extensions over Abelian NI rings. Theorem \ref{Proposition1.3.3.a} generalizes \cite[Theorem 4.10]{Hamidizadehetal2020}.
\begin{theorem}\label{Proposition1.3.3.a}
Let $A = \sigma(R)\left \langle x_1, \dotsc, x_n \right \rangle$ be a skew PBW extension over a weak $(\Sigma,\Delta)$-compatible and Abelian $NI$ ring $R$, and let $f=\sum_{i=0}^la_iX_i$ be an element of $A$. If $f^2=f$, then $f= a_0\in {\rm Idem}(R)$.
\end{theorem}
\begin{proof}
Let $f=\sum_{i=0}^la_iX_i \in A$ such that $f^2=f$. By using Proposition \ref{theoremidem}, we have $a_i \in N(R)$, for each $1 \leq i \leq l$, and there exist an idempotent element $e \in R$ and a nilpotent element $b \in R$ such that $a_0 = e + b$. In this way, if we consider $h = b + \sum_{i=1}^la_iX_i \in A$, then $f = e+h$. Finally, we have $h \in N(A)$ by Proposition \ref{prop.nilp} and $he = eh$ by using Proposition \ref{prop.idem.centr}. Therefore, we conclude $h=0$ by Lemma \ref{lemaidem}, and so $f = a_0$.
\end{proof}
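In the simplest commutative instance of Theorem \ref{Proposition1.3.3.a}, idempotent polynomials over $\mathbb{Z}/4\mathbb{Z}$ must be constant. The following Python sketch (a hypothetical brute-force check over degrees at most two) finds only the trivial idempotents $0$ and $1$.

```python
# Brute-force search: the only idempotents of (Z/4Z)[x] among polynomials
# a0 + a1*x + a2*x^2 are the constants 0 and 1.  (Hypothetical check.)
from itertools import product

def polymul(f, g, n):
    """Multiply polynomials (coefficient lists, lowest degree first) mod n."""
    h = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            h[i + j] = (h[i + j] + a * b) % n
    return h

idems = []
for coeffs in product(range(4), repeat=3):   # (a0, a1, a2) mod 4
    sq = polymul(list(coeffs), list(coeffs), 4)
    if sq[:3] == list(coeffs) and all(c == 0 for c in sq[3:]):
        idems.append(coeffs)

assert idems == [(0, 0, 0), (1, 0, 0)]       # only the constants 0 and 1
```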
The next corollary extends \cite[Corollaries 4.11 and 4.12]{Hamidizadehetal2020}.
\begin{corollary}\label{WeakCorollary4.11}
If $A=\sigma(R)\langle x_1,\dots,x_n\rangle$ is a skew PBW extension over a weak $(\Sigma,\Delta)$-compatible and Abelian NI ring $R$, then ${\rm Idem}(A)={\rm Idem}(R)$, and so $A$ is an Abelian ring.
\end{corollary}
In \cite[Theorem 4.14]{Hamidizadehetal2020} the von Neumann regular elements in skew PBW extensions over right duo rings were characterized. Next, we formulate a generalization of this theorem.
\begin{theorem}\label{WeakTheorem4.14}
If $A=\sigma(R)\langle x_1,\dots,x_n\rangle$ is a skew PBW extension over a weak $(\Sigma,\Delta)$-compatible and Abelian NI ring $R$, then ${\rm vnr}(A)$ consists of the elements of the form $\sum_{i=0}^ma_iX_i$, where $a_0=ue$ and $a_i\in eN(R)$, for every $i\geq 1$, for some $u\in U(R)$ and $e\in {\rm Idem}(R)$.
\end{theorem}
\begin{proof}
By Corollary \ref{WeakCorollary4.11}, $A$ is an Abelian ring. Additionally, $f\in{\rm vnr(A)}$ if and only if $f = ue$, for some $u\in U(A)$ and $e\in {\rm Idem}(A)$ \cite[Proposition 4.2]{Hashemi}. Hence, the result follows from Corollaries \ref{WeakCorollary4.8} and \ref{WeakCorollary4.11}.
\end{proof}
As a consequence of \cite[Theorem 1, Lemma 2]{Badawi}, we obtain the following description of the $\pi$-regular elements over Abelian rings.
\begin{proposition}\label{badawi}
Let $R$ be an Abelian ring. Then $r$ is a $\pi$-regular element of $R$ if and only if there exist $e\in {\rm Idem}(R)$ and $u\in U(R)$ such that $er = eu$ and $(1-e)r\in N(R)$.
\end{proposition}
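The description in Proposition \ref{badawi} can be checked by brute force in a small Abelian ring. The following Python sketch (a hypothetical illustration) verifies, in $\mathbb{Z}/12\mathbb{Z}$, that an element is $\pi$-regular (some power is von Neumann regular) exactly when it admits the decomposition of the proposition.

```python
# Check Badawi's description of pi-regular elements in the commutative
# (hence Abelian) ring Z/12Z: r is pi-regular iff there are an idempotent
# e and a unit u with e*r = e*u and (1-e)*r nilpotent.  (Hypothetical.)
from math import gcd

n = 12
units = [u for u in range(n) if gcd(u, n) == 1]
idems = [e for e in range(n) if (e * e) % n == e]
nils  = [a for a in range(n) if pow(a, n, n) == 0]   # nilpotents: {0, 6}

def pi_regular(r):
    """Some power r^k is von Neumann regular: r^k = r^k * s * r^k."""
    return any(any((pow(r, k, n) * s * pow(r, k, n)) % n == pow(r, k, n)
                   for s in range(n)) for k in range(1, n + 1))

def badawi(r):
    return any((e * r) % n == (e * u) % n and ((1 - e) * r) % n in nils
               for e in idems for u in units)

assert all(pi_regular(r) == badawi(r) for r in range(n))
```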
As a result of the previous proposition, we can describe the $\pi$-regular elements in a skew PBW extension over weak $(\Sigma,\Delta)$-compatible and Abelian NI ring. Theorem \ref{WeakTheorem4.15} generalizes \cite[Theorem 4.15]{Hamidizadehetal2020}.
\begin{theorem}\label{WeakTheorem4.15}
If $A=\sigma(R)\langle x_1,\dots,x_n\rangle$ is a skew PBW extension over a weak $(\Sigma, \Delta)$-compatible and Abelian NI ring $R$, then
\begin{center}
$\displaystyle \pi-r(A) = \left \{ \sum a_iX_i\in A\ |\ a_0\in \pi-r(R),\ a_i\in N(R), \ \text{for}\ i \geq 1 \right \}$.
\end{center}
\end{theorem}
\begin{proof}
Let $f=\sum_{i=0}^la_iX_i$ be an element of $\pi-r(A)$. Since $A$ is Abelian, there exist elements $e\in{\rm Idem}(A)={\rm Idem}(R)$ and $u\in U(A)$ such that $ef=eu$ and $(1-e)f\in N(A)$, by Proposition \ref{badawi}. Then $ea_0=eu'$ and $ea_i\in N(R)$, for some $u'\in U(R)$ and for all $1\le i\le l$ by Theorem \ref{th.units}. Additionally, we have $(1-e)a_i\in N(R)$ for all $0\le i \le l$ by Proposition \ref{prop.nilp}. Thus, Proposition \ref{badawi} shows that $a_0\in\pi-r(R)$ and $a_i\in N(R)$, for $1\le i\le l$.
On the other hand, suppose that $a_0\in \pi-r(R)$ and $a_i\in N(R)$, for all $i\ge 1$. By Proposition \ref{badawi}, there exist $e\in{\rm Idem}(R)$ and $u\in U(R)$ such that $ea_0=eu$ and $(1-e)a_0\in N(R)$. This implies that $ef = ea_0+\sum_{i=1}^l ea_iX_i=e\left(u+\sum_{i= 1}^l a_iX_i\right) = eu'$ and $(1-e)f = \sum_{i=0}^l (1-e)a_iX_i$, where $u' = u+\sum_{i= 1}^l a_iX_i$. Since $N(R)$ is an ideal of $R$, $(1-e)a_i\in N(R)$, and therefore $(1-e)f\in N(A)$ by Proposition \ref{prop.nilp}, and $u'\in U(A)$ by Theorem \ref{th.units}. Therefore, Proposition \ref{badawi} guarantees that $f\in\pi-r(A)$.
\end{proof}
Theorem \ref{WeakTheorem4.16} extends \cite[Theorem 4.16]{Hamidizadehetal2020}.
\begin{theorem}\label{WeakTheorem4.16}
If $A=\sigma(R)\langle x_1,\dots,x_n\rangle$ is a skew PBW extension over a weak $(\Sigma, \Delta)$-compatible and Abelian NI ring $R$, then ${\rm vnl}(A)$ consists of the elements of the form $\sum_{i=0}^la_iX_i$, where either $a_0 = ue$ or $a_0 = 1 - ue$, and $a_i \in eN(R)$, for every $i \geq 1$, for some element $u \in U(R)$ and $e \in {\rm Idem}(R)$.
\end{theorem}
\begin{proof}
It follows from Theorem \ref{WeakTheorem4.14} and \cite[Theorem 6.1 (2)]{Hashemi}.
\end{proof}
In \cite[Theorem 4.17]{Hamidizadehetal2020}, the clean elements of skew PBW extensions over right duo rings were characterized. We present a generalization of this result.
\begin{theorem}\label{WeakTheorem4.17}
Let $A=\sigma(R)\langle x_1,\dots,x_n\rangle$ be a skew PBW extension over a weak $(\Sigma,\Delta)$-compatible and Abelian NI ring $R$. Then
\[{\rm cln}(A)=\left\lbrace\sum a_iX_i\in A\ |\ a_0\in{\rm cln}(R),\ a_i\in N(R) \right\rbrace. \]
\end{theorem}
\begin{proof}
The result follows from Proposition \ref{th.NIiffNI} and Corollaries \ref{WeakCorollary4.8} and \ref{WeakCorollary4.11}.
\end{proof}
\subsection{Strongly harmonic and Gelfand rings}\label{Gelfandrings} Mulvey \cite{Mulvey1979a} introduced {\it strongly harmonic rings} with the aim of generalizing the Gelfand duality from $C^{*}$-algebras to rings (not necessarily commutative). Borceux and Van den Bossche \cite{Borceux1983} modified the definition of strongly harmonic rings and defined the {\it Gelfand rings}. A ring $R$ is called \textsl{Gelfand} (resp. {\it strongly harmonic}) if for each pair of distinct maximal right ideals (resp. maximal ideals) $M_1$, $M_2$ of $R$, there are right ideals (resp. ideals) $I_1$, $I_2$ of $R$ such that $I_1\not\subseteq M_1$, $I_2\not\subseteq M_2$ and $I_1I_2=0$. Equivalently, $R$ is a Gelfand ring (resp. strongly harmonic) if for each pair of distinct maximal right ideals (resp. maximal ideals) $M_1$, $M_2$ of $R$, there are elements $r\notin M_1$, $s\notin M_2$ of $R$ such that $rRs=0$. Gelfand rings and strongly harmonic rings have been investigated by different authors such as Borceux and Van den Bossche \cite{Borceux1983}, Borceux et al. \cite{Borceuxetal1984}, Carral \cite{Carral1980}, Demarco and Orsatti \cite{DemarcoOrsatti1971}, Mulvey \cite{Mulvey1976, Mulvey1979a, Mulvey1979b}, Sun \cite{Sun1992a, Sun1992b}, and Zhang et al. \cite{Zhangetal2006}. We continue the study of Gelfand and strongly harmonic rings in the setting of skew PBW extensions over weak $(\Sigma, \Delta)$-compatible rings.
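The element-wise separation condition is transparent in a small commutative example. The following Python sketch (a hypothetical illustration) checks it in $\mathbb{Z}/6\mathbb{Z}$, whose two maximal ideals are $(2)$ and $(3)$: the elements $r=3\notin(2)$ and $s=2\notin(3)$ satisfy $rRs=0$.

```python
# Gelfand separation in Z/6Z: for the maximal ideals M1 = (2) and M2 = (3),
# the elements r = 3 (not in M1) and s = 2 (not in M2) satisfy r*c*s = 6c = 0
# for every c, so rRs = 0.  (Hypothetical illustration.)
n = 6
M1 = {a for a in range(n) if a % 2 == 0}   # the maximal ideal (2)
M2 = {a for a in range(n) if a % 3 == 0}   # the maximal ideal (3)
r, s = 3, 2

assert r not in M1 and s not in M2
assert all((r * c * s) % n == 0 for c in range(n))   # rRs = 0
```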
\begin{proposition}\label{prop.strongly.coef}
Let $A=\sigma(R)\langle x_1,\dots,x_n\rangle$ be a skew PBW extension over a weak $(\Sigma,\Delta)$-compatible and NI ring $R$. Then $A/J(A)$ is a Gelfand ring {\rm (}resp. strongly harmonic{\rm )} if and only if for each pair of distinct maximal right ideals (resp. maximal ideals) $M_1$, $M_2$ of $A$, there exist elements $r, s \in R$ with $r\notin M_1$ and $s\notin M_2$ such that $rRs\subseteq N(R)$.
\end{proposition}
\begin{proof} Suppose first that $A/J(A)$ is a Gelfand ring, and let $M_1$, $M_2$ be a pair of distinct maximal right ideals of $A$. Then there exist elements $f=\sum_{i=0}^ma_iX_i\notin M_1$ and $g=\sum_{j=0}^lb_jX_j\notin M_2$ of $A$ such that $fAg\subseteq J(A)=N(A)$. Since $f\notin M_1$, we have $a_t\notin M_1$ for some coefficient $a_t$ of $f$ with $0 \le t \le m$; similarly, $b_{t'}\notin M_2$ for some coefficient $b_{t'}$ of $g$ with $0 \le t' \le l$. Since $fRg \subseteq fAg \subseteq N(A)$, we get $fcg \in N(A)$, and hence $a_icb_j\in N(R)$ for all $i,j$ and every $c \in R$ by Proposition \ref{prop.nilp}. Thus, taking $r=a_t$ and $s=b_{t'}$, the result follows.
For the other implication, let $M_1$, $M_2$ be a pair of distinct maximal right ideals of $A$. Since $rRs\subseteq N(R)$, for some $r \notin M_1$ and $s \notin M_2$, then $rAs\subseteq N(A)$, by Proposition \ref{prop.nilp}. This implies that $A/J(A)$ is a Gelfand ring.
The proof of the strongly harmonic case is analogous.
\end{proof}
\begin{proposition}\label{prop.not.local}
If $A=\sigma(R)\langle x_1,\dots,x_n\rangle$ is a skew PBW extension over a weak $(\Sigma,\Delta)$-compatible and NI ring, then $A$ is not a local ring.
\end{proposition}
\begin{proof}
Suppose that $A$ is a local ring. By Theorem \ref{th.units}, $x_n$ is not a unit of $A$. Since $J(A)=N(A)$, we then have $x_n \in N(A)$, which contradicts Proposition \ref{prop.nilp}. Hence $A$ is not local.
\end{proof}
\begin{example} The following examples illustrate the above proposition.
\begin{itemize}
\item[\rm (i)] In Example \ref{exampleweak1}, the Ore extension $R_2[x;\sigma,\delta]$ is not a local ring by Proposition \ref{prop.not.local}.
\item[\rm (ii)] Consider the skew PBW extension $A = \sigma({\rm S}_2(\mathbb{Z}))\langle x,y,z \rangle$ over the ring ${\rm S}_2(\mathbb{Z})$ in Example \ref{exampleweak2}. Proposition \ref{prop.not.local} shows that $A$ is not a local ring.
\end{itemize}
\end{example}
\begin{theorem}\label{th.no.gelfand}
Let $A=\sigma(R)\langle x_1,\dots,x_n\rangle$ be a skew PBW extension over a weak $(\Sigma,\Delta)$-compatible and NI ring $R$. If $N(R)$ is a prime ideal of $R$ then $A/J(A)$ is not a Gelfand ring.
\end{theorem}
\begin{proof}
Suppose that $A/J(A)$ is a Gelfand ring. By using Proposition \ref{prop.not.local}, there exist at least two maximal right ideals $M_1,M_2$ of $A$. Additionally, since $A/J(A)$ is a Gelfand ring, there exist $r,s\in R$ such that $r\notin M_1$, $s\notin M_2$ and $rRs\subseteq N(R)$ by Proposition \ref{prop.strongly.coef}. Since $N(R)$ is a prime ideal of $R$, we have $r\in N(R)$ or $s\in N(R)$. Finally, since $N(R)\subseteq N(A)=J(A)\subseteq M_1, M_2$, then $r\in M_1$ or $s\in M_2$ which is a contradiction. Hence, we conclude $A/J(A)$ is not a Gelfand ring.
\end{proof}
\begin{corollary}\label{corollary.no.gelfand}
Let $A=\sigma(R)\langle x_1,\dots,x_n\rangle$ be a skew PBW extension over a weak $(\Sigma,\Delta)$-compatible and NI ring $R$. If $N(R)$ is a prime ideal then $A/N_*(A)$ is not a Gelfand ring.
\end{corollary}
\begin{example} \label{examplenotGelfand}
By Theorem \ref{th.no.gelfand} and Corollary \ref{corollary.no.gelfand}, we conclude that $A/J(A)$ and $A/N_{*}(A)$ are not Gelfand rings, where $A$ is the Ore extension $R_2[x;\sigma,\delta]$ over the ring of upper triangular matrices $R_2$ in Example \ref{exampleweak1}, or the skew PBW extension $A = \sigma({\rm S}_2(\mathbb{Z}))\langle x,y,z \rangle$ over the ring ${\rm S}_2(\mathbb{Z})$ in Example \ref{exampleweak2}.
\end{example}
\begin{theorem}\label{theorem.harmonic.unique}
Let $A=\sigma(R)\langle x_1,\dots,x_n\rangle$ be a skew PBW extension over a weak $(\Sigma,\Delta)$-compatible and NI ring. If $N(R)$ is a prime ideal then $A/J(A)$ is strongly harmonic if and only if $A$ has a unique maximal ideal.
\end{theorem}
\begin{proof}
If $A$ has a unique maximal ideal, then $A/J(A)$ has a unique maximal ideal and is therefore strongly harmonic. If $A$ has at least two maximal ideals, the proof of Theorem \ref{th.no.gelfand} guarantees that $A/J(A)$ is not a strongly harmonic ring.
\end{proof}
\section{Future work}\label{future}
Different authors have studied the notion of compatibility in modules over Ore extensions and skew PBW extensions (e.g., \cite{AlhevazMoussavi2012, Annin2004, NinoRamirezReyes2020, Reyes2019}). In light of the results obtained in this paper, a natural line of future work is to investigate a classification of several types of elements in modules extended over skew PBW extensions. Following this idea and the notions of {\em strongly harmonic and Gelfand modules} introduced by Medina-B\'arcenas et al. \cite{Medinaetal2020}, it will also be interesting to study these modules for these noncommutative rings.
On the other hand, skew PBW extensions are examples of more general noncommutative algebraic structures such as the {\em semi-graded rings} defined by Lezama and Latorre \cite{Lezamalatorre2017}, which have recently been studied from the point of view of noncommutative algebraic geometry (topics such as Hilbert polynomials and Hilbert series, the generalized Gelfand-Kirillov dimension, noncommutative schemes, the Serre-Artin-Zhang-Verevkin theorem, and others; see \cite{Lezama2020, Lezamagomez2019}). Since semi-graded rings are more general than $\mathbb{N}$-graded rings, a natural task is to investigate the several types of elements studied in this paper, together with the notions of strongly harmonic and Gelfand rings, in this more general setting.
\end{document}
\begin{document}
\begin{abstract}
In this article we continue our investigation of the thin obstacle problem with variable coefficients which was initiated in \cite{KRS14}, \cite{KRSI}. Using a partial Hodograph-Legendre transform and the implicit function theorem, we prove higher order Hölder regularity for the regular free boundary, if the associated coefficients are of the corresponding regularity. For the zero obstacle this yields an improvement of a \emph{full derivative} for the free boundary regularity compared to the regularity of the metric. In the presence of non-zero obstacles or inhomogeneities, we gain \emph{three halves of a derivative} for the free boundary regularity with respect to the regularity of the inhomogeneity. Further we show analyticity of the regular free boundary for analytic metrics. We also discuss the low regularity set-up of $W^{1,p}$ metrics with $p>n+1$ with and without ($L^p$) inhomogeneities.\\
Key new ingredients in our analysis are the introduction of generalized Hölder spaces, which allow to interpret the transformed fully nonlinear, degenerate (sub)elliptic equation as a perturbation of the Baouendi-Grushin operator, various uses of intrinsic geometries associated with appropriate operators, the application of the implicit function theorem to deduce (higher) regularity and the splitting technique from \cite{KRSI}.
\end{abstract}
\subjclass[2010]{Primary 35R35}
\keywords{Variable coefficient Signorini problem, variable coefficient thin obstacle problem, thin free boundary, Hodograph-Legendre transform}
\thanks{
H.K. acknowledges support by the DFG through SFB 1060.
A.R. acknowledges a Junior Research Fellowship at Christ Church.
W.S. is supported by the Hausdorff Center of Mathematics.}
\title{The Variable Coefficient Thin Obstacle Problem: Higher Regularity}
\tableofcontents
\section{Introduction}
This article is devoted to the study of the higher regularity properties of the free boundary of solutions to the \emph{thin obstacle} or \emph{Signorini problem}. To this end, we consider local minimizers to the functional
\begin{align*}
J(v):=\int\limits_{B_1^+}a^{ij}\p_iv\p_jv dx,\quad v\in \mathcal{K},
\end{align*}
with $\mathcal{K}:=\{u\in H^1(B_1^+)| \ u\geq 0 \mbox{ on } B_1'\times \{0\}\}$. Here $B_1^+ := \{x\in B_1 \subset \R^{n+1}| \ x_{n+1}\geq 0\}$ and $B_1':= \{x\in B_1 \subset \R^{n+1}| \ x_{n+1}=0\}$ denote the $(n+1)$-dimensional upper half ball and the co-dimension one ball, respectively. The tensor field $a^{ij}: B_1^+ \rightarrow \R^{(n+1)\times (n+1)}_{sym}$ is assumed to be uniformly elliptic, symmetric and $W^{1,p}(B_1^+)$ regular for some $p> n+1$. Here and in the sequel we use the summation convention.\\
Due to classical results on variational inequalities \cite{U87}, \cite{F10}, minimizers of this problem exist and are unique (under appropriate boundary conditions). Moreover, minimizers are $C^{1,\min\{1-\frac{n+1}{p},\frac{1}{2}\}}(B_{1/2}^+)$ regular (c.f. \cite{AC06}, \cite{KRSI}) and solve the following uniformly elliptic equation with \emph{complementary} or \emph{Signorini boundary conditions}
\begin{equation}
\label{eq:thin_obst}
\begin{split}
\p_i a^{ij} \p_j w &= 0 \mbox{ in } B_1^+,\\
w \geq 0, \ a^{n+1, j} \p_{j}w\leq 0,\ w (a^{n+1,j}\p_j w) &= 0 \mbox{ in } B_1' \times \{0\}.
\end{split}
\end{equation}
Here the bulk equation is to be interpreted weakly, while the boundary conditions hold pointwise. In particular, the constraint originating from the convex set $\mathcal{K}$ only acts on the boundary; in this sense the obstacle is \emph{thin}. The constraint on functions in $\mathcal{K}$ divides the boundary $B_1' \times \{0\}$ into three different regions: The \emph{contact set} $\Lambda_w := \{x\in B_{1}'\times \{0\}| \ w=0\}$, where the minimizer attains the obstacle, the \emph{non-coincidence set}, $\Omega_w:= \{x\in B_1' \times \{0\}| \ w>0\}$, where the minimizer lies strictly above the obstacle, and the \emph{free boundary}, $\Gamma_w := \partial \Omega_w$, which separates the contact set from the non-coincidence set. \\
As we seek to obtain a more detailed analysis of the (regular) free boundary under higher regularity assumptions on the metric tensor $a^{ij}$, we briefly recall the known properties of the free boundary that are most relevant for our purposes (c.f. \cite{CSS}, \cite{PSU}, \cite{GSVG14}, \cite{GPSVG15}, \cite{KRS14}, \cite{KRSI}):
Considering metrics which need not be more regular than $W^{1,p}$ with $p\in(n+1,\infty]$ and carrying out a blow-up analysis of solutions, $w$, of (\ref{eq:thin_obst}) around free boundary points, it is possible to assign to each free boundary point $x_0\in \Gamma_w \cap B'_1$ the uniquely determined \emph{order of vanishing} $\kappa(x_0)$ of $w$ at this point (c.f. Proposition 4.2 in \cite{KRS14}\xspace):
\begin{align*}
\kappa(x_0):= \lim\limits_{r \rightarrow 0_+} \frac{\ln\left( r^{-\frac{n+1}{2}} \left\| w \right\|_{L^2(B_r^+(x_0))} \right)}{\ln(r)} .
\end{align*}
Since the order of vanishing satisfies the gap property that either $\kappa(x_0)= \frac{3}{2}$ or $\kappa(x_0)\geq 2$ (c.f. Corollary 4.2 in \cite{KRS14}\xspace), the free boundary can be decomposed as follows:
\begin{align*}
\Gamma_w \cap B_{1}' := \Gamma_{3/2}(w)\cup \bigcup\limits_{\kappa \geq 2} \Gamma_{\kappa}(w),
\end{align*}
where $\Gamma_{\kappa}(w):= \{x_0\in \Gamma_{w} \cap B_{1}'| \ \kappa(x_0)= \kappa\}$. Moreover, noting that the mapping $\Gamma_w \ni x_0\mapsto \kappa(x_0)$ is upper-semi-continuous (c.f. Proposition 4.3 in \cite{KRS14}\xspace), we obtain that the set $\Gamma_{3/2}(w)$, which is called the \emph{regular free boundary}, is a relatively open subset of $\Gamma_w$. At each regular free boundary point $x_0\in \Gamma_{3/2}(w)$, there exists an $L^2$-normalized blow-up sequence $w_{x_0,r_j}$, which converges to a nontrivial global solution $w_{3/2}(Q(x_0)x)$ with flat free boundary. Here $w_{3/2}(x):=\Ree (x_n+ix_{n+1})^{3/2}$ is a model solution and $Q(x_0)\in SO(n+1)$ (c.f. Proposition 4.5 in \cite{KRS14}\xspace).
By a more detailed analysis, the regular free boundary can be seen to be $C^{1,\alpha}$ regular (c.f. Theorem 2 in \cite{KRSI}\xspace) and a leading order expansion of solutions $w$ at the regular free boundary can be determined (c.f. Proposition 4.6 in \cite{KRSI}\xspace and Corollary 4.8 in \cite{KRSI}\xspace, c.f. also Proposition \ref{prop:asym2} in Section \ref{sec:asymp}).\\
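The model solution $w_{3/2}$ can also be probed numerically. The following Python sketch (a hypothetical aside, not part of the paper) checks by finite differences, in the two-dimensional model case, that $w_{3/2}(x)=\operatorname{Re}(x_n+ix_{n+1})^{3/2}$ is harmonic, vanishes on the contact set $\{x_n<0,\ x_{n+1}=0\}$ and has vanishing normal derivative on the non-coincidence set.

```python
# Finite-difference checks for the 2d model solution w(x, y) = Re (x + iy)^{3/2}
# (principal branch, valid in the closed upper half plane).
# (Hypothetical numerical aside.)

def w(x, y):
    return (complex(x, y) ** 1.5).real

h = 1e-4
x0, y0 = 0.3, 0.4
# five-point stencil Laplacian at an interior point
lap = (w(x0 + h, y0) + w(x0 - h, y0) + w(x0, y0 + h) + w(x0, y0 - h)
       - 4 * w(x0, y0)) / h ** 2
assert abs(lap) < 1e-4                  # harmonic away from the boundary

assert abs(w(-0.5, 0.0)) < 1e-9         # w = 0 on the contact set {x < 0}
dy = (w(0.5, h) - w(0.5, 0.0)) / h      # one-sided normal derivative
assert abs(dy) < 1e-3                   # Neumann condition for x > 0
```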
In the sequel we will exclusively focus on the \emph{regular} free boundary. Due to its relative openness and by scaling, it is always possible to assume that the whole boundary in a given domain consists only of the regular free boundary. This convention will be used throughout the article; whenever referring to the ``free boundary'' without further details, we will mean the regular free boundary.
\subsection{Main results and ideas}
In this article our main objective is to prove \emph{higher} regularity of the (regular) free boundary if the metric $a^{ij}$ is of higher (Hölder) regularity. In particular, we prove the analyticity of the free boundary for analytic coefficients:
\begin{thm}
\label{thm:higher_reg}
Let $a^{ij}:B_1^+ \rightarrow \R^{(n+1)\times (n+1)}_{sym}$ be a uniformly elliptic, symmetric, $W^{1,p}$ tensor field with $p\in(n+1,\infty]$. Suppose that $w:B_{1}^+ \rightarrow \R$ is a solution of the variable coefficient thin obstacle problem (\ref{eq:thin_obst}) with metric $a^{ij}$.
\begin{itemize}
\item[(i)] Then the regular free boundary $\Gamma_{3/2}(w)$ is locally a $C^{1,1-\frac{n+1}{p}}$ graph if $p<\infty$ and a $C^{1,1-}$ graph if $p=\infty$.
\item[(ii)] Assume further that $a^{ij}$ is $C^{k,\gamma}$ regular with $k\geq 1$ and $\gamma\in (0,1)$. Then the regular free boundary $\Gamma_{3/2}(w)$ is locally a $C^{k+1,\gamma}$ graph.
\item[(iii)] Assume in addition that $a^{ij}$ is real analytic. Then the regular free boundary $\Gamma_{3/2}(w)$ is locally real analytic.
\end{itemize}
\end{thm}
We note that these results are sharp on the Hölder scale. In deriving the sharp gain of a full derivative, the choice of our function spaces plays a key role (c.f. the discussion below for details on the motivation of our function spaces and Remark \ref{rmk:optreg} in Section \ref{sec:IFT1} for the optimality on the Hölder scale and the role of our function spaces). \\
In addition to the previously stated results, we also deal with the regularity problem in the presence of inhomogeneities.
\begin{thm}
\label{thm:higher_reg_inhom}
Let $a^{ij}:B_1^+ \rightarrow \R^{(n+1)\times (n+1)}_{sym}$ be a $W^{1,p}$ tensor field with $p\in(2(n+1),\infty]$ and let $f:B_1^+ \rightarrow \R$ be an $L^p(B_1^+)$ function. Suppose that $w:B_{1}^+ \rightarrow \R$ is a solution of the variable coefficient thin obstacle problem with metric $a^{ij}$ and inhomogeneity $f$:
\begin{equation}
\label{eq:thin_obst_inhom}
\begin{split}
\p_i a^{ij} \p_j w &= f \mbox{ in } B_1^+,\\
w \geq 0, \ a^{n+1, j} \p_{j}w\leq 0,\ w (a^{n+1,j}\p_j w) &= 0 \mbox{ in } B_1' \times \{0\}.
\end{split}
\end{equation}
\begin{itemize}
\item[(i)] Then the regular free boundary $\Gamma_{3/2}(w)$ is locally a $C^{1,\frac{1}{2}-\frac{n+1}{p}}$ graph.
\item[(ii) ] Assume in addition that $a^{ij}$ is a $C^{k,\gamma}$ tensor field with $k\geq 1$ and $\gamma\in (0,1]$ and let $f:B_1^+ \rightarrow \R$ be a $C^{k -1,\gamma}$ function. Then the regular free boundary $\Gamma_{3/2}(w)$ is locally a $C^{k+[\gamma+ \frac{1}{2}], \gamma+\frac{1}{2}- [\gamma+ \frac{1}{2}]}$ graph.
\item[(iii)] Moreover, assume that $a^{ij}$ is a real analytic tensor field and let $f:B_1^+ \rightarrow \R$ be a real analytic function. Then the regular free boundary $\Gamma_{3/2}(w)$ is locally real analytic.
\end{itemize}
Here $[\cdot]$ denotes the floor function.
\end{thm}
We note that this in particular includes the set-up of non-zero obstacles with regularity as low as $W^{2,p}$, $p\in(2(n+1),\infty]$ (c.f. Section \ref{sec:nonzero}). To the best of our knowledge, Theorems \ref{thm:higher_reg} and \ref{thm:higher_reg_inhom} are the first results on higher regularity for the thin obstacle problem with variable coefficients and inhomogeneities.\\
In order to obtain a better understanding for the gain of the free boundary regularity with respect to the regularity of the inhomogeneity, it is instructive to compare Theorem \ref{thm:higher_reg_inhom}, i.e. the situation of the variable coefficient \emph{thin} obstacle problem, with that of the variable coefficient \emph{classical} obstacle problem (c.f. \cite{KN77}, \cite{F10}): In the classical obstacle problem (for the Laplace operator) there is a gain of \emph{one} order of differentiability with respect to the inhomogeneity, i.e. if $f\in C^{k,\alpha}$, then the (regular) free boundary $\Gamma_w$ is $C^{k+1,\alpha}$ regular. This can be seen to be optimal by for instance considering an inhomogeneity which only depends on a single variable (the variable $x_n$ in whose direction the free boundary is a graph, i.e. $\Gamma_w=\{x\in B_1| \ x_n = g(x')\}$, with a choice of parametrization such that locally $|\nabla' g| \neq 0$), by using up to the boundary elliptic regularity estimates for all derivatives $\p_i w$ with respect to directions orthogonal to $e_n$ and by expressing the partial derivative $\p_{nn}w$ along the free boundary in terms of the parametrization $g$.\\
In contrast, in our situation of \emph{thin} obstacles, we gain \emph{three halves} of a derivative with respect to a general inhomogeneity. We conjecture that this is the optimal gain. As we are however dealing with a co-dimension two free boundary value problem, it seems harder to prove the optimality of this gain by similar means as for the classical obstacle problem. Yet, we remark that this gain of three-halves of a derivative also fits the scaling behavior (though not the regularity assumptions) of the inhomogeneities treated in \cite{DSS14}.\\
Let us explain the main ideas of deriving the regularity results of Theorems \ref{thm:higher_reg} and \ref{thm:higher_reg_inhom}:
In order to prove higher regularity properties of the free boundary, we rely on the partial Legendre-Hodograph transform (c.f. \cite{KN77}, \cite{KPS})
\begin{equation}\label{eq:L}
\begin{split}
T: B_1^+ &\rightarrow Q_+:=\{ y \in \R^{n+1}| \ y_n \geq 0, y_{n+1} \leq 0\},\\
y:=T(x)&:= (x'', \p_{n}w, \p_{n+1} w), \ v(y):= w(x) - x_n y_{n} - x_{n+1}y_{n+1},
\end{split}
\end{equation}
which allows us to fix the (regular) free boundary:
\begin{align*}
T(\Gamma_w) \subset \{y\in \R^{n+1}| \ y_n=y_{n+1}=0\}.
\end{align*}
The asymptotic expansion of $w$ around $\Gamma_w$ implies that the transformation $T$ is asymptotically a square root mapping. Similar arguments as in \cite{KPS} yield that $T$ is invertible with inverse given by
\begin{align*}
T^{-1}(y)= (y'', - \p_n v(y), -\p_{n+1}v(y)).
\end{align*}
Thus, the free boundary $\Gamma_w$ can be parametrized in terms of the Legendre function $v$ as
\begin{align*}
\Gamma_w \cap B_{1/2}' := \{x'\in B_{1/2}'| \ x_n = -\p_{n} v(x'',0,0)\}.
\eta_{\delta,r}nd{align*}
Therefore, it suffices to study the regularity properties of the Legendre function $v$, in order to derive higher regularity properties of the free boundary $\Gamma_w$.\\
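For orientation, we note that in the model situation the Legendre function can be computed explicitly (this computation is only meant as an illustration and is not used in the sequel): For the model solution $w_{3/2}(x)=\Ree(x_n+ix_{n+1})^{3/2}$ (with normalization constant one) we have $\p_n w_{3/2} - i\p_{n+1}w_{3/2} = \frac{3}{2}(x_n+ix_{n+1})^{1/2}$, so that \eqref{eq:L} reads $y_n - iy_{n+1} = \frac{3}{2}(x_n+ix_{n+1})^{1/2}$, i.e. $x_n + ix_{n+1} = \frac{4}{9}(y_n-iy_{n+1})^{2}$. A direct computation then yields
\begin{align*}
v(y) = -\frac{4}{27}\Ree(y_n+iy_{n+1})^{3} = -\frac{4}{27}\left(y_n^3-3y_ny_{n+1}^2\right),
\end{align*}
and indeed $-\p_n v(y) = \frac{4}{9}(y_n^2-y_{n+1}^2) = x_n$ as well as $-\p_{n+1}v(y) = -\frac{8}{9}y_ny_{n+1} = x_{n+1}$, in accordance with the formula for $T^{-1}$.\\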
Pursuing this strategy and investigating the properties of the Legendre function $v$, we encounter several difficulties:\\
\emph{Nonlinearity and subellipticity of the transformed equation, function spaces.} In analogy to the observations in \cite{KPS} the Hodograph-Legendre transformation $T$ transforms the \emph{uniformly} elliptic equation for $w$ into a fully nonlinear, \emph{degenerate} (sub)elliptic equation for $v$ (c.f. Proposition \ref{prop:bulk_eq}). Moreover, studying the asymptotics of $v$ at the degenerate set of the nonlinear operator (which is the image of the free boundary under the transformation $T$), the linearized operator (at $v$) is identified as a perturbation of the (subelliptic) \emph{Baouendi-Grushin} operator (c.f. Section~\ref{sec:grushin}).\\
In this context a central new ingredient and major contribution of the article enters: Seeking to deduce regularity by an application of the implicit function theorem instead of direct and tedious elliptic estimates (c.f. Section \ref{sec:IFT1}), we have to capture the relation between the linearized and nonlinear operators in terms of our \emph{function spaces} (c.f. Section \ref{sec:holder}). This leads to the challenge of finding function spaces which on the one hand mimic the asymptotics of the Legendre function. This is a key requirement, since the perturbative interpretation of the fully nonlinear operator (as a nonlinear Baouendi-Grushin type operator) crucially relies on the asymptotics close to the straightened free boundary. On the other hand, the spaces have to allow for good regularity estimates for the linearized equation, which is of Baouendi-Grushin type. In this context, we note that Calderon-Zygmund estimates and Schauder estimates have natural analogues for subelliptic operators like the Baouendi-Grushin operator. The mismatch between the vector space structure (relevant for derivatives) and the subelliptic
geometry allows for nontrivial choices in the definition of Sobolev spaces and higher order H\"older spaces.\\
In order to deal with both of the described conditions, we introduce \emph{generalized Hölder} spaces which are on the one hand adapted to the Baouendi-Grushin operator (for instance by relying on the intrinsic geometry induced by this operator) and on the other hand measure the distance to an approximating polynomial with the ``correct'' asymptotics close to the straightened free boundary (c.f. Section \ref{sec:holder} for the definition and properties of our generalized Hölder spaces and the Appendix, Section \ref{sec:append}, for the proofs of these results). These function spaces are reminiscent of Campanato type spaces (c.f. \cite{Ca64}, \cite{Mo09}) and also of the polynomial approximations used by De Silva and Savin \cite{DSS14}. While similar constructions are possible for elliptic equations, they do not seem to be relevant there. For our problem, however, they are crucial.\\
\emph{Partial regularity and the implicit function theorem.} Seeking to avoid lengthy and tedious higher order derivative estimates for the Legendre function $v$, we deviate from the previous strategies of proving higher regularity that are present in the literature on the thin obstacle problem. Instead we reason by the \emph{implicit function theorem} along the lines of an argument introduced by Angenent (c.f. \cite{AN90}, \cite{AN90a}, \cite{KL12}). In this context we pre-compose our Legendre function, $v$, with a one-parameter family, $\Phi_a$, of diffeomorphisms, leading to a one-parameter family of ``Legendre functions'', $v_a$ (Section~\ref{subsec:IFT0}). Here the diffeomorphisms are chosen such that the parameter dependence on $a$ is analytic and so that the diffeomorphisms are the identity outside of a fixed compact set, whereas at the free boundary infinitesimally they generate a family of translations in the tangential directions. The functions $v_a$ satisfy a similar fully nonlinear, degenerate elliptic equation as $v$. Invoking the analytic implicit function theorem, we then establish that solutions of this equation are necessarily analytic in the parameter $a$, which, due to the uniqueness of solutions, implies that the family $v_a$ depends on $a$ analytically. As the family of diffeomorphisms, $\Phi_a$, infinitesimally generates translations in the tangential directions, this immediately entails the partial analyticity of the original Legendre function $v$ in the tangential variables.\\
\emph{Corner domain, function spaces.}
Compared with the constant coefficient case in \cite{KPS}, the presence of variable coefficients leads to a completely new difficulty: By the definition of the Hodograph-Legendre transform \eqref{eq:L}, the transformation $T$ maps the upper half ball $B_1^+$ into the quarter space $Q_+$ (c.f. Section \ref{sec:Hodo}). In particular, the free boundary is mapped into the \emph{edge} of $Q_+$, which does not allow us to invoke standard interior regularity estimates there. \\
In contrast to the argument in \cite{KPS}, we cannot overcome this problem by reflecting the resulting solution so as to obtain a problem in which the free boundary is in the interior of the domain: Indeed, this would immediately lead to a loss of regularity of the coefficients $a^{ij}$ and hence would not allow us to prove higher regularity estimates up to the boundary.
Thus, instead, we have to work in the setting of an equation that is posed in the quarter space, where the singularity of the domain is centered at the straightened free boundary. This in particular necessitates regularity estimates in this (singular) domain which hold uniformly up to the boundary (c.f. Appendix, Section \ref{sec:quarter_Hoelder}). \\
In deducing these regularity estimates, we strongly rely on the form of our generalized Hölder spaces and on the interpretation of our fully nonlinear equation as a perturbation of the Baouendi-Grushin Laplacian in the quarter space which satisfies homogeneous Dirichlet data on $\{y_{n}=0\}$ and homogeneous Neumann data on $\{y_{n+1}=0\}$. As it is possible to classify and explicitly compute all the homogeneous solutions to this operator, an approximation argument in the spirit of \cite{Wa03} yields the desired regularity estimates in our generalized Hölder spaces (c.f. Appendix, Section \ref{sec:quarter_Hoelder}).\\
\emph{Low regularity metrics.}
In the case of only $W^{1,p}$ regular metrics with $p\in(n+1,\infty]$, and/or $L^p$ inhomogeneities, even \emph{away} from the free boundary a general solution $w$ is only $W^{2,p}$ regular. Thus, the previous arguments leading to the invertibility of the Hodograph-Legendre transform do not apply directly, as they rely on pointwise bounds for $D^2w$.
To resolve this issue, we use the \emph{splitting technique} from \cite{KRSI} and introduce a mechanism that exchanges \emph{decay} and \emph{regularity}: More precisely, we split a general solution $w$ into two components $w=\tilde{u}+u$. Here the first component $\tilde{u}$ deals with the low regularity of the coefficients and the inhomogeneity:
\begin{align*}
a^{ij}\p_{ij} \tilde{u} - \dist(x,\Gamma_w)^{-2}\tilde{u} = f - (\p_i a^{ij})\p_j w \mbox{ in } B_1 \setminus \Lambda_w, \ \tilde{u}=0 \mbox{ on } \Lambda_w.
\end{align*}
Due to the inclusion of the strongly coercive term $-\dist(x,\Gamma_w)^{-2}\tilde{u}$ in the equation, the solution $\tilde{u}$ has \emph{strong decay} properties (compared to $w$) towards $\Gamma_w$. We hence interpret it as a controlled error.\\
The second contribution $u$ is now of better \emph{regularity} away from the free boundary $\Gamma_w$, as it solves the non-divergence form elliptic equation
\begin{align*}
a^{ij}\p_{ij} u = - \dist(x,\Gamma_w)^{-2}\tilde{u} \mbox{ in } B_1 \setminus \Lambda_w, \ u=0 \mbox{ on } \Lambda_w.
\end{align*}
Moreover, it captures the essential behavior of the original function $w$ (c.f. Lemma \ref{lem:lower1'} and Proposition \ref{prop:improved_reg1}). In particular, the free boundary $\Gamma_w$ is the same as the free boundary $\Gamma_u:=\partial_{B_1'}\{x\in B_1': u(x)>0\}$ of $u$. We then apply our previous arguments to $u$ and correspondingly obtain the regularity of the free boundary.
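We remark that the equation for $u$ is an immediate consequence of the splitting: Writing the equation for $w$ in non-divergence form, $a^{ij}\p_{ij}w = f - (\p_i a^{ij})\p_j w$ in $B_1\setminus \Lambda_w$, and subtracting the equation for $\tilde{u}$ yields
\begin{align*}
a^{ij}\p_{ij} u = a^{ij}\p_{ij} w - a^{ij}\p_{ij}\tilde{u} = -\dist(x,\Gamma_w)^{-2}\tilde{u} \mbox{ in } B_1\setminus \Lambda_w,
\end{align*}
while $u = w - \tilde{u}$ vanishes on $\Lambda_w$, as both $w$ and $\tilde{u}$ do.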
\subsection{Literature and related results}
The thin obstacle problem has been studied extensively beginning with the fundamental works of Caffarelli \cite{Ca79}, Uraltseva \cite{U85}, \cite{U87}, Kinderlehrer \cite{Ki81}, and the breakthrough results of Athanasopoulos, Caffarelli \cite{AC06}, as well as Athanasopoulos, Caffarelli, Salsa \cite{ACS08} and Caffarelli, Silvestre, Salsa \cite{CSS}. While there is a quite good understanding of many aspects of the \emph{constant} coefficient problem, the \emph{variable} coefficient problem has only recently received a large amount of attention: Here, besides the early work of Uraltseva \cite{U87}, in particular the articles by Garofalo, Smit Vega Garcia \cite{GSVG14} and Garofalo, Petrosyan, Smit Vega Garcia \cite{GPSVG15} and the present authors \cite{KRS14}, \cite{KRSI} should be mentioned. While the methods differ -- the first two articles rely on a frequency function approach and an epiperimetric inequality, the second two articles build on a Carleman estimate as well as careful comparison arguments -- in both works the regularity of the regular free boundary is obtained for the variable coefficient problem under low regularity assumptions on the metric.\\
Hence, it is natural to ask whether the free boundary regularity can be improved if higher regularity assumptions are made on the coefficients and what the precise dependence on the regularity of the coefficients amounts to. In the constant coefficient setting, the higher regularity question has independently been addressed by De Silva, Savin \cite{DSS14}, who prove $C^{\infty}$ regularity of the free boundary by approximation arguments, and by Koch, Petrosyan, Shi \cite{KPS}, who prove analyticity of the free boundary. While the precise dependence on the coefficient regularity is well understood for the \emph{classical} obstacle problem with variable coefficients \cite{F10}, to the best of our knowledge this question has not yet been addressed in the framework of the \emph{variable} coefficient \emph{thin} obstacle problem.
\subsection{Outline of the article}
The remainder of the article is organized as follows: After briefly introducing the precise setting of our problem and fixing our notation in the following Section \ref{sec:prelim}, we recollect the asymptotic behavior of solutions of (\ref{eq:thin_obst}) in Section \ref{sec:asymp}. With this at hand, in Section \ref{sec:Hodo} we introduce the partial Hodograph-Legendre transformation in the case of $C^{k,\gamma}$ metrics with $k\geq 1$, obtain its invertibility (c.f. Proposition \ref{prop:invertibility}) and in Section \ref{sec:Legendre} derive the fully nonlinear, degenerate elliptic equation which is satisfied by the Legendre function $v$ (Proposition \ref{prop:bulk_eq}). Motivated by the linearization of this equation, we introduce our generalized Hölder spaces (c.f. Definitions \ref{defi:Hoelder}, \ref{defi:Hoelder1} and \ref{defi:spaces}) which are adapted to the geometry of the Baouendi-Grushin Laplacian (Section \ref{sec:holder}). Exploring the (self-improving) structure of the nonlinear equation for the Legendre function $v$ (c.f. Proposition \ref{prop:error_gain}), we deduce regularity properties of the Legendre function $v$ (c.f. Proposition \ref{prop:regasymp}) by an iterative bootstrap argument (c.f. Proposition \ref{prop:error_gain}) in Section \ref{sec:improve_reg}. In Section~\ref{sec:fb_reg} we build on this regularity result and proceed with the application of the implicit function theorem to prove the optimal regularity of the regular free boundary when the metrics $a^{ij}$ are $ C^{k,\gamma}$ Hölder regular for some $k\geq 1$ (c.f. Theorem \ref{prop:hoelder_reg_a}). Moreover, we also derive analyticity of the free boundary for analytic metrics (c.f. Theorem \ref{prop:analytic}). This provides the argument for the first two parts of Theorem \ref{thm:higher_reg}. 
Next, in Section~\ref{sec:W1p} we study the Hodograph-Legendre transformation for $W^{1,p}$ metrics with $p\in (n+1,\infty]$ and thus derive the optimal regularity result of Theorem \ref{thm:higher_reg} (i). Using similar ideas, we also discuss the necessary adaptations in proving regularity results in the presence of inhomogeneities and nonzero obstacles (c.f. Proposition \ref{prop:inhomo_2}). Finally, in the Appendix, Section \ref{sec:append}, we prove a characterization of our function spaces introduced in Section~\ref{sec:holder} and show an a priori estimate for the Baouendi-Grushin Laplacian in these function spaces (c.f. Section \ref{sec:quarter_Hoelder}). We also discuss auxiliary regularity and mapping properties which we use in the derivation of the asymptotics and in the application of the implicit function theorem (c.f. Sections \ref{sec:XY}, \ref{sec:kernel}).
\section{Preliminaries}
\label{sec:prelim}
\subsection{Conventions and normalizations}
\label{sec:conventions}
In the sequel we introduce a number of conventions which will be used throughout this paper.
Any tensor field $a^{ij}:B_1^+\rightarrow \R^{(n+1)\times (n+1)}_{sym}$ in this paper is uniformly elliptic, symmetric and at least $W^{1,p}$ regular for some $p\in (n+1,\infty]$. Furthermore, we assume that
\begin{itemize}
\item[(A1)]$a^{ij}(0)=\delta^{ij}$,
\item[(A2)] (Uniform ellipticity) $\frac{1}{2}|\xi|^2\leq a^{ij}(x)\xi_i\xi_j \leq 2|\xi|^2$ for each $x\in B_1^+$ and $\xi\in \R^{n+1}$,
\item[(A3)] (Off-diagonal) $a^{i,n+1}(x',0)=0$ for $i\in \{1,\dots,n\}$.
\end{itemize}
Here (A1)-(A2) follow from an affine transformation. The off-diagonal assumption (A3) is a consequence of a change of coordinates (c.f. for instance Section 2.1 in \cite{KRS14} and Uraltseva \cite{U85}), which allows us to reduce \eqref{eq:thin_obst} to
\begin{equation}
\label{eq:varcoeff}
\begin{split}
\p_ia^{ij}\p_jw=0 &\text{ in } B_1^+,\\
w\geq 0,\quad \p_{n+1}w\leq 0, \ w(\p_{n+1}w)=0 &\text{ on } B'_1.
\end{split}
\end{equation}
Under the above assumptions (A1)-(A3), a solution $w$ to the thin obstacle problem is $C^{1,\min\{1-\frac{n+1}{p},\frac{1}{2}\}}_{loc}$ regular and of $\dist(x,\Gamma_w)^{3/2}$ growth at the free boundary $\Gamma_w$, i.e. for any $x_0\in \Gamma_w\cap B^+_{1/2}$,
\begin{equation}\label{eq:interior_est}
\sup _{ B_r(x_0)}|\nabla w|\leq C(n,p, \|a^{ij}\|_{W^{1,p}})\|w\|_{L^2(B_1^+)}r^{\frac{1}{2}}, \quad r\in (0,1/2).
\end{equation}
This regularity and growth behavior is optimal by the interior regularity and the growth behavior of the model solution $w_{3/2}(x)=\Ree(x_n+ix_{n+1})^{3/2}$. We refer to \cite{U85} for the $C^{1,\alpha}_{loc}$ regularity and to \cite{KRSI} for the optimal $C^{1,\min\{1-\frac{n+1}{p},\frac{1}{2}\}}_{loc}$ regularity as well as the growth result. In this paper we will always work with a solution $w\in C^{1,\min\{1-\frac{n+1}{p},\frac{1}{2}\}}_{loc}(B_1^+)$, for which \eqref{eq:interior_est} holds true.\\
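Indeed, for the model solution both properties can be read off directly: Since $\p_n w_{3/2} - i \p_{n+1}w_{3/2} = \frac{3}{2}(x_n+ix_{n+1})^{1/2}$ and $\Gamma_{w_{3/2}}=\{x_n=x_{n+1}=0\}$, we have
\begin{align*}
|\nabla w_{3/2}(x)| = \frac{3}{2}|x_n+ix_{n+1}|^{\frac{1}{2}} = \frac{3}{2}\dist(x,\Gamma_{w_{3/2}})^{\frac{1}{2}},
\end{align*}
so that $\nabla w_{3/2}$ is $C^{0,1/2}$ regular, but no better, at the free boundary, and the growth estimate \eqref{eq:interior_est} is saturated (up to constants).\\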
In order to further simplify our set-up, we observe the following symmetry properties of our problem:\\
(Symmetry) Equation \eqref{eq:thin_obst} is invariant under scaling and multiplication. More precisely, if $w$ is a solution to \eqref{eq:thin_obst}, then for $x_0\in K\Subset B'_1$, for $c\geq 0$ and $\lambda>0$, the function
$$x\mapsto cw(x_0+\lambda x)$$
is a solution to \eqref{eq:thin_obst} (with coefficients $a^{ij}(x_0+\lambda\cdot)$) in $B_r^+$, $r\in (0, \lambda^{-1}(1-|x_0|)]$.\\
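This can be checked directly by the chain rule: if $\p_i(a^{ij}\p_j w)=0$ in $B_1^+$, then $\tilde{w}(x):=cw(x_0+\lambda x)$ satisfies
\begin{align*}
\p_i\left(a^{ij}(x_0+\lambda x)\p_j\tilde{w}(x)\right) = c\lambda^2\left[\p_i(a^{ij}\p_j w)\right](x_0+\lambda x)=0 \mbox{ in } B_r^+,
\end{align*}
and, since $c\geq 0$, the Signorini conditions $\tilde{w}\geq 0$, $\p_{n+1}\tilde{w}\leq 0$, $\tilde{w}(\p_{n+1}\tilde{w})=0$ on $B_r'$ are inherited from those of $w$.\\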
These symmetry properties are for instance crucial in carrying out rescalings around the (regular) free boundary:
Assuming that $x_0\in \Gamma_{3/2}(w)$ is a regular free boundary point and defining $w_{x_0,\lambda}(x):=w(x_0+\lambda x)/\lambda^{\frac{3}{2}}$, $\lambda\in (0,1/4)$, the asymptotic expansion around the regular free boundary (c.f. Proposition 4.6 in \cite{KRSI}) yields that
\begin{align*}
w_{x_0,\lambda}(x)\rightarrow a(x_0)w_{x_0}(x) \mbox{ in } C^{1,\beta}_{loc}(\R^{n+1}_+)
\end{align*}
for each $\beta\in (0,1/2)$ as $\lambda\rightarrow 0_+$. Here $w_{x_0}$ is a global solution with flat free boundary and $a(x_0)>0$ is a constant. \\
In this paper we are interested in the higher regularity of $\Gamma_{3/2}(w)$ under an appropriate higher regularity assumption on the metric $a^{ij}$. All the results given below are \emph{local} estimates around \emph{regular} free boundary points. Using the scaling and multiplication symmetries of the equation, we may hence without loss of generality suppose the following normalization assumptions (A4)-(A7):
\begin{itemize}
\item[(A4)] $0\in \Gamma_{3/2}(w)$,
\end{itemize}
that $w$ is sufficiently close to $w_{3/2}$ and that the metric is sufficiently flat in the following sense: For $\epsilon_0,c_\ast>0$ small
\begin{itemize}
\item[(A5)] $\|w-w_{3/2}\|_{C^1(B_1^+)}\leq \epsilon_0$,
\item[(A6)] $\|\nabla a^{ij}\|_{L^p(B_1^+)}\leq c_\ast$.
\end{itemize}
By \cite{KRSI}, if $\epsilon_0$, $c_\ast $ are sufficiently small depending on $n,p,\|w\|_{L^2(B_1^+)}$, then assumptions (A5)-(A6) imply that $\Gamma_w\cap B'_{1/2}\subset \Gamma_{3/2}(w)$ and that $\Gamma_w\cap B'_{1/2}$ is a $C^{1,\alpha}$ graph, i.e. after a rotation of coordinates $\Gamma_{w}\cap B_{1/2}' = \{x'=(x'',x_n,0)\in B_{1/2}'| \ x_n = g(x'')\}$, for some $\alpha\in (0,1)$. Moreover, we have the following estimate for the (in-plane) outer unit normal $\nu_{x_0}$ of $\Lambda_w$ at $x_0$:
\begin{equation}\label{eq:normal}
|\nu_{x_0}-\nu_{\tilde{x}_0}|\lesssim \max\{\epsilon_0,c_\ast\}|x_0-\tilde{x}_0|^\alpha, \text{ for any }x_0, \tilde{x}_0\in \Gamma_w\cap B'_{1/2}.
\end{equation}
For notational simplicity, we also assume that
\begin{itemize}
\item[(A7)] $\nu_0=e_n$.
\end{itemize}
From now on, we will always work under the assumptions (A1)-(A7).
\subsection{Notation}
\label{sec:notation}
Similarly to \cite{KRSI}, we use the following notation:\\
\emph{Geometry.}
\begin{itemize}
\item $\R^{n+1}_+:=\{(x'',x_n,x_{n+1})\in \R^{n+1} | x_{n+1}\geq 0\}$.
\item $B_r(x_0):=\{x\in \R^{n+1}| |x-x_0|<r\}$, where $|\cdot|$ is the norm induced by the Euclidean metric, $B_r^+(x_0):=B_r(x_0)\cap \R^{n+1}_+$, $B'_r(x_0):=B_r(x_0)\cap \{x_{n+1}=0\}$. If $x_0$ is the origin, we simply write $B_r$, $B_r^+$ and $B'_r$.
\item Let $w$ be a solution of \eqref{eq:thin_obst}, then $\Lambda_w:=\{(x',0)\in B'_1| w(x',0)=0\}$ is the \emph{contact set}, $\Omega_w:=B'_1\setminus \Lambda_w$ is the \emph{positivity set}, $\Gamma_w:=\p_{B'_1}\Lambda_w\cap B'_1$ is the \emph{free boundary}, $\Gamma_{3/2}(w):=\{x\in \Gamma_w| \kappa_x=\frac{3}{2}\}$ is the \emph{regular set} of the free boundary, where $\kappa_x$ is the vanishing order at $x$.
\item For $x_0\in \Gamma_w$, we denote by $\mathcal{N}_{x_0}=\{x\in B^+_{1/4}(x_0)\big|\dist(x,\Gamma_w)\geq \frac{1}{2}|x-x_0|\}$ the non-tangential cone at $x_0$.
\item $\mathcal{C}'_\eta(e_n):=\mathcal{C}_\eta(e_n)\cap \{e_{n+1}=0\}$ is a tangential cone (with axis $e_n$ and opening angle $\eta$).
\item $Q_+:=\{(y'',y_n,y_{n+1})\in \R^{n+1} | y_n\geq 0, y_{n+1}\leq 0\}$.
\item $\tilde{\mathcal{B}}_r(y_0):=\{y\in \R^{n+1}| d_G(y,y_0)<r\}$, where $d_G(\cdot, \cdot)$ is the Baouendi-Grushin metric (c.f. Definition~\ref{defi:Grushinvf}). $\tilde{\mathcal{B}}_r^+(y_0):=\tilde{\mathcal{B}}_r(y_0)\cap Q_+$.
\item In $Q_+$ with the Baouendi-Grushin metric $d_G(\cdot,\cdot)$, given $y_0\in P:=\{y_n=y_{n+1}=0\}$ we denote by $\mathcal{N}_G(y_0):=\{y\in \tilde{\mathcal{B}}_{1/4}^+(y_0)\big|\dist_G(y,P)\geq \frac{1}{2}d_G(y,y_0)\}$ the Baouendi-Grushin non-tangential cone at $y_0$.
\item We use the Baouendi-Grushin vector fields $Y_i$, $i\in\{1,\dots,n+1\}$, (c.f. Definition \ref{defi:Grushinvf}) and the modified Baouendi-Grushin vector fields $\tilde{Y}_i$, $i\in\{1,\dots,2n\}$ (c.f. Definition \ref{defi:Hoelder1}).
\item For $k\in \N$, we denote by $\mathcal{P}_k^{hom}$ the space of homogeneous polynomials (w.r.t. the Grushin scaling) of order $k$ (c.f. Definition \ref{defi:poly}), and by $\mathcal{P}_k$ the vector space of homogeneous polynomials of order less than or equal to $k$.
\end{itemize}
\emph{Functions and function spaces.}
\begin{itemize}
\item $w_{3/2}(x):= c_n \Ree(x_n + i x_{n+1})^{3/2}$, where $c_n>0$ is a normalization constant ensuring that $\| w_{3/2} \|_{L^2(B_{1}^+)}=1$.
\item $w_{1/2}(x):= c_n \Ree(x_n + i x_{n+1})^{1/2}$ and $\bar{w}_{1/2}(x):= - c_n \Imm(x_n + i x_{n+1})^{1/2}$, where $c_n>0$ denotes the same normalization constant as above.
\item We denote the \emph{asymptotic profile} at a point $x_0 \in \Gamma_{3/2}(w)\cap B_{1}'$ by $\mathcal{W}_{x_0}$. It is given by
\begin{align*}
\mathcal{W}_{x_0}(x)=a(x_0)w_{3/2}\left(\frac{(x-x_0)\cdot \nu_{x_0}}{(\nu_{x_0}\cdot A(x_0)\nu_{x_0})^{1/2}}, \frac{x_{n+1}}{(a^{n+1,n+1}(x_0))^{1/2}}\right).
\end{align*}
\item For a solution $w$ to (\ref{eq:varcoeff}) and a point $x_0\in \Gamma_{w}$ we define a \emph{blow-up sequence} $w_{x_0,\lambda}(x):=\frac{w(x_0 + \lambda x)}{\lambda^{3/2}}$ by rescaling with $\lambda \in (0,1)$. The asymptotic expansion from Proposition 4.6 in \cite{KRSI} implies that as $\lambda \rightarrow 0$ it converges to the \emph{blow-up profile} $\mathcal{W}_{x_0}(\cdot+ x_0)$. Here $\mathcal{W}_{x_0}$ denotes the asymptotic profile from above.
\item In the sequel, we use spaces adapted to the Baouendi-Grushin operator $\Delta_G$ and denote the corresponding Hölder spaces by $C^{k,\alpha}_{\ast}$ (c.f. Definitions \ref{defi:Hoelder}, \ref{defi:Hoelder1}). Moreover, relying on these, we construct our generalized Hölder spaces $X_{\alpha,\epsilon}, Y_{\alpha,\epsilon}$ which are appropriate for our corner domains (c.f. Definition \ref{defi:spaces}).
\item We use the notation $C_0(Q_+)$ to denote the space of all continuous functions vanishing at infinity.
\item Let $\R^{(n+1)\times (n+1)}_{sym}$ denote the space of symmetric matrices and let
$$G: \R^{(n+1)\times (n+1)}_{sym} \times \R^{n+1} \times \R^{n+1} \rightarrow \R, \ (M,P,y)\mapsto G(M,P,y),$$
with $M=(m_{k\ell})_{k \ell} \in \R^{(n+1) \times (n+1)}_{sym}$, $P = (p_1,\dots,p_{n+1})\in \R^{n+1}$ and $y=(y_1,\dots,y_{n+1})\in \R^{n+1}$. We denote the partial derivative with respect to the different components by
\begin{align*}
\p_{m_{k\ell}}G(M,P,y)&:= \frac{\p G(M,P,y)}{\partial m_{k \ell}},\\
\p_{p_{k}}G(M,P,y) &:=\frac{\p G(M,P,y)}{\partial p_{k}},\\
\p_{y_k}G(M,P,y)&:=\frac{\p G(M,P,y)}{\partial y_{k}}.
\end{align*}
\item $\D_G$ stands for the Baouendi-Grushin operator
\begin{align*}
\D_{G} v:= (y_n^2 + y_{n+1}^2)\D'' v + \p_{nn}v + \p_{n+1,n+1}v,
\end{align*}
where $\D''$ denotes the Laplacian in the tangential variables, i.e. in the $y''$-variables of $y=(y'',y_n,y_{n+1})$.
\end{itemize}
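For later use we record the scaling behavior of $\D_G$: with respect to the anisotropic dilations $\delta_\lambda(y):=(\lambda^2y'',\lambda y_n,\lambda y_{n+1})$ one has $\D_G(v\circ\delta_\lambda)=\lambda^2(\D_G v)\circ \delta_\lambda$. For instance, the polynomials $y_iy_n$ with $i\in\{1,\dots,n-1\}$ and $\Ree(y_n+iy_{n+1})^3=y_n^3-3y_ny_{n+1}^2$ are homogeneous of degree three with respect to these dilations and satisfy $\D_G v=0$.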
The notation $A\lesssim B$ means that $A\leq CB$ with $C$ depending only on dimension $n$.
\section{Hodograph-Legendre Transformation}
\label{sec:HLTrafo}
In this section we perform a partial Hodograph-Legendre transform of our problem (\ref{eq:varcoeff}). While fixing the free boundary, this comes at the price of transforming our uniformly elliptic equation in the upper half ball into a fully nonlinear, degenerate (sub)elliptic equation in the lower quarter ball (c.f. Sections \ref{sec:Hodo}, \ref{sec:Legendre}, Propositions \ref{prop:invertibility}, \ref{prop:bulk_eq}). In particular, in addition to the difficulties in \cite{KPS}, the domain in which our problem is posed now contains a corner. In spite of this additional problem, as in \cite{KPS} we identify the fully nonlinear equation as a perturbation of the Baouendi-Grushin operator with symmetry (i.e. with Dirichlet-Neumann data) by a careful analysis of the asymptotic behavior of the Legendre transform (c.f. Section \ref{sec:Legendre}, Example \ref{ex:linear} and Section \ref{sec:grushin}).
\subsection{Asymptotic behavior of the solution $w$}
\label{sec:asymp}
We begin by deriving and collecting asymptotic expansions for higher order derivatives of solutions to our equation (c.f. \cite{KRSI}). This will prove to be advantageous in the later sections (e.g. Sections \ref{sec:Hodo}, \ref{sec:Leg}).
\begin{prop}[\cite{KRSI}, Proposition 4.6]
Let $a^{ij}\in W^{1,p}(B_1^+, \R^{(n+1)\times (n+1)}_{sym})$ with $p\in (n+1,\infty]$ be a uniformly elliptic tensor.
Assume that $w:B_1^+ \rightarrow \R$ is a solution to the variable coefficient thin obstacle problem and that it satisfies the following conditions:
\label{prop:asym2}
There exist positive constants $\epsilon_0$ and $c_{\ast}$ such that
\begin{itemize}
\item[(i)] $\|w-w_{3/2}\|_{C^1(B_1^+)}\leq \epsilon_0$,
\item[(ii)] $\|\nabla a^{ij}\|_{L^p(B_1^+)}\leq c_\ast$.
\end{itemize}
Then if $\epsilon_0$ and $c_\ast$ are sufficiently small depending on $n,p$, there exists some $\alpha\in (0,1-\frac{n+1}{p}]$ such that $\Gamma_w\cap B_{1/2}^+$ is a $C^{1,\alpha}$ graph. Moreover, at each free boundary point $x_0\in \Gamma_w\cap B^+_{1/4}$, there exists an asymptotic profile, $\mathcal{W}_{x_0}(x)$,
\begin{align*}
\mathcal{W}_{x_0}(x)=a(x_0)w_{3/2}\left(\frac{(x-x_0)\cdot \nu_{x_0}}{(\nu_{x_0}\cdot A(x_0)\nu_{x_0})^{1/2}}, \frac{x_{n+1}}{(a^{n+1,n+1}(x_0))^{1/2}}\right),
\end{align*}
such that for any $x\in B_{1/4}^+(x_0)$
\begin{align*}
(i)\quad &\left|\p_i w(x)-\p_i \mathcal{W}_{x_0}(x)\right|\leq C_{n,p}\max\{\epsilon_0,c_{\ast}\}|x-x_0|^{\frac{1}{2}+\alpha}, \quad i\in\{1,\dots,n\},\\
(ii)\quad &\left| \p_{n+1} w(x)-\p_{n+1}\mathcal{W}_{x_0}(x)\right|\leq C_{n,p}\max\{\epsilon_0,c_{\ast}\} |x-x_0|^{\frac{1}{2}+\alpha},\\
(iii)\quad &\left|w(x)-\mathcal{W}_{x_0}(x)\right|\leq C_{n,p}\max\{\epsilon_0,c_{\ast}\} |x-x_0|^{\frac{3}{2}+\alpha}.
\end{align*}
Here $x_0\mapsto a(x_0)\in C^{0,\alpha}(\Gamma_w\cap B_{1/2}^+)$, $\nu_{x_0}$ is the (in-plane) outer unit normal of $\Lambda_w$ at $x_0$ and $A(x_0)=(a^{ij}(x_0))$. Furthermore, $w_{3/2}(x)= c_n \Ree(x_n+ i x_{n+1})^{3/2}$, where $c_n>0$ is a dimensional constant which is chosen such that $\| w_{3/2}\|_{L^{2}(B_1^+)}=1$.
\end{prop}
Assuming higher regularity of the metric allows us to use a scaling argument to deduce the asymptotics for higher order derivatives in non-tangential cones.
\begin{prop}
\label{prop:improved_reg}
Let $a^{ij}\in C^{k,\gamma}(B_1^+, \R^{(n+1)\times (n+1)}_{sym})$ with $k\geq 1$ and $\gamma\in (0,1)$ be uniformly elliptic. Let $\alpha>0$ be the Hölder exponent from Proposition \ref{prop:asym2}. There exist $\epsilon_0$ and $c_\ast$ sufficiently small depending on $n,p$ such that if
\begin{itemize}
\item[(i)] $ \|w-w_{3/2}\|_{C^1(B_1^+)}\leq \epsilon_0$,
\item[(ii)]$ [a^{ij}]_{\dot{C}^{k,\gamma}(B_1^+)}\leq c_\ast,$
\end{itemize}
then for each $x_0\in \Gamma_w\cap B^+_{1/4}$, each $x$ in the associated non-tangential cone $\mathcal{N}_{x_0}:=\{x\in B^+_{1/4}(x_0)| \ \dist(x,\Gamma_w)\geq \frac{1}{2}|x-x_0|\}$ and all multi-indices $\beta$ with $|\beta|\leq k+1$, we have
\begin{align*}
\left|\p^\beta w(x)-\p^\beta \mathcal{W}_{x_0}(x)\right|& \leq C_{\beta,n,p} \max\{\epsilon_0,c_{\ast}\} |x-x_0|^{\frac{3}{2}+\alpha-|\beta|},\\
\left[\p^\beta w-\p^\beta \mathcal{W}_{x_0}\right]_{\dot{C}^{0,\gamma}(\mathcal{N}_{x_0}\cap (B_{3\lambda /4 }^+(x_0)\setminus B_{\lambda/2 }^+(x_0)))}& \leq C_{\beta,n,p}\max\{\epsilon_0,c_{\ast}\}\lambda^{\frac{3}{2}+\alpha-\gamma-|\beta|}.
\end{align*}
Here $\alpha$ is the same exponent as in Proposition~\ref{prop:asym2} and $\lambda \in (0,1)$.
\end{prop}
\begin{rmk}
\label{rmk:improved_reg}
It is possible to extend the above asymptotics for $x$ in the full neighborhood $B_{1/4}^+(x_0)$ as
\begin{align*}
&\left|\p^\beta w(x)-\p^\beta \mathcal{W}_{x_0}(x)\right|\leq C_{\beta,n,p} \max\{\epsilon_0,c_{\ast}\} |x-x_0|^{\frac{1}{2}+\alpha}\dist(x,\Gamma_w)^{-|\beta|+1}.
\end{align*}
Here it is necessary to introduce the distance to the free boundary instead of measuring it with a negative power of $|x-x_0|$.
\end{rmk}
Before coming to the proof of Proposition \ref{prop:improved_reg}, we state an immediate corollary, which will be important in the derivation of the asymptotics of the Legendre function in Proposition \ref{prop:holder_v} in Section \ref{sec:Leg}.
\begin{cor}
\label{cor:improved_reg}
Assume that the conditions of Proposition \ref{prop:improved_reg} hold. Let $$w_{x_0,\lambda}(x):=\frac{w(x_0+\lambda x)}{\lambda^{3/2}},\quad \lambda>0.$$ Then,
\begin{align*}
&[\p^\beta w_{x_0,\lambda}-\p^\beta \mathcal{W}_{x_0}(x_0+\cdot)]_{\dot{C}^{0,\gamma}(\mathcal{N}_{0}\cap (B_{3 /4}^+\setminus B_{1/2}^+))}\leq C_{n,p} \max\{\epsilon_0,c_{\ast}\}\lambda^{\alpha}.
\end{align*}
\end{cor}
\begin{proof}[Proof of Proposition \ref{prop:improved_reg}]
The proof of the proposition follows from elliptic estimates in Whitney cubes, which in turn are reduced to estimates on the scale one by scaling the problem.\\
We only prove the result for $k=1$ (i.e. in the case $|\beta|=2$) and restrict ourselves to the $L^{\infty}$ estimates. For $k> 1$ and for the second estimate the argument is similar. Moreover, we observe that the case $|\beta|=1$ is already covered in Proposition~\ref{prop:asym2}. We begin by considering the tangential derivatives of $w$: Let $\tilde{v}:=\p_\ell w$ with $\ell\in \{1,\dots, n\}$. Then $\tilde{v}$ satisfies
\begin{align*}
\p_i(a^{ij}\p_j\tilde{v})=\p_iF^i, \quad F^i=-(\p_\ell a^{ij})\p_jw,
\end{align*}
with the boundary conditions
\begin{align*}
\tilde{v}&=0 \text{ on } \Lambda_w, \quad \p_{n+1}\tilde{v}=0 \text{ on } B'_1\setminus \Lambda_w.
\end{align*}
Also, the derivative of the profile function, $\p_\ell \mathcal{W}_{x_0}$, satisfies
\begin{align*}
\p_i(a^{ij}\p_j(\p_\ell \mathcal{W}_{x_0}))=g_1+g_2,
\end{align*}
with
\begin{align*}
g_1=(\p_{\ell} a^{ij}) \p_j\p_i \mathcal{W}_{x_0}, \ g_2=(a^{ij}(x)-a^{ij}(x_0))\p_{ij}\p_\ell\mathcal{W}_{x_0}.
\end{align*}
Seeking to combine the information on the functions $\tilde{v}$ and $\p_{\ell} \mathcal{W}_{x_0}$, we define
\begin{align*}
\tilde{u}(x):=\frac{w(x_0+\lambda x)-\mathcal{W}_{x_0}(x_0+\lambda x)}{\lambda^{\frac{3}{2}+\alpha}}, \ 0<\lambda<1/4.
\end{align*}
Due to the previous considerations,
$\p_\ell \tilde{u}$ satisfies the equation
\begin{align*}
\p_i(a^{ij}(x_0+\lambda \cdot)\p_j \p_\ell \tilde{u})=\p_i \tilde{F}^i - \tilde{g}_1 - \tilde{g}_2 \text{ in } B_1^{+}.
\end{align*}
Here
\begin{equation}
\begin{split}
\label{eq:tilde}
\tilde{F}^i(x)=\lambda^{\frac{1}{2}-\alpha}F^i(x_0+\lambda x),\\
\tilde{g}_1(x)=\lambda^{\frac{3}{2}-\alpha} g_1(x_0+\lambda x),\\
\tilde{g}_2(x)=\lambda^{\frac{3}{2}-\alpha} g_2(x_0+\lambda x).
\end{split}
\end{equation}
Moreover, by the asymptotics of $w$ at $x_0$ which were given in (iii) of Proposition~\ref{prop:asym2}, we obtain the following $L^{\infty}$ bound in the non-tangential cone $\mathcal{N}_0=\{x\in B^+_{1/4}|\dist(x,\Gamma_{w_{x_0,\lambda}})\geq \frac{1}{2}|x|\}$ for all $\ell \in \{1,\dots,n+1\}$:
\begin{align}\label{eq:max}
|\partial_{\ell} \tilde{u}|\lesssim C_{n,p}\max\{\epsilon_0,c_{\ast}\}.
\end{align}
Noting that
\begin{align*}
|F^i(x)|&\lesssim c_{\ast} \dist(x,\Gamma_w)^{1/2},\\
|g_1(x)|&\lesssim c_{\ast} \dist(x,\Gamma_w)^{-1/2},\\
|g_2(x)|&\lesssim c_{\ast} |x-x_0| \dist(x,\Gamma_w)^{-3/2},
\end{align*}
recalling that $\lambda \dist(x, \Gamma_{w_{x_0,\lambda}}) = \dist(x_0+\lambda x, \Gamma_w)$ and using (\ref{eq:tilde}) yields
\begin{equation}
\label{eq:distresc}
\begin{split}
|\tilde{F}^i(x)|&\lesssim c_{\ast} \lambda^{1-\alpha}\dist(x,\Gamma_{w_{x_0,\lambda}})^{1/2},\\
|\tilde{g}_1(x)|&\lesssim c_{\ast} \lambda^{1-\alpha}\dist(x,\Gamma_{w_{x_0,\lambda}})^{-1/2},\\
|\tilde{g}_2(x)|&\lesssim c_{\ast} \lambda^{1-\alpha}\dist(x,\Gamma_{w_{x_0,\lambda}})^{-3/2}.
\end{split}
\end{equation}
By the definition of $\mathcal{N}_0$ the expressions involving the distance functions in (\ref{eq:distresc}) are uniformly (in $\lambda$) bounded in $B_1^+\setminus B_{1/4}^+$. Moreover, it is immediate to check that the semi-norms $[\tilde{F}^i]_{C^{0,\gamma}}$ are uniformly bounded.
For $\ell\in \{1,\dots, n\}$, we apply the $C^{1,\gamma}$ estimate to $\p_\ell\tilde{u}$, which holds up to the boundary, in $\mathcal{N}_0\cap (B^+_1\setminus B^+_{1/4})$ (note that with $\epsilon_0, c_\ast$ sufficiently small, $\mathcal{N}_0\cap (B_1^+\setminus B^+_{1/4})$ does not intersect the free boundaries $\Gamma_w$ or $\Gamma_{\mathcal{W}_{x_0}}$, thus $\tilde{u}$ satisfies either Dirichlet or Neumann conditions):
\begin{align}
\label{eq:tangential}
\|\p_\ell\tilde{u}\|_{C^{1,\gamma}(\mathcal{N}_0\cap (B^+_{3/4}\setminus B^+_{1/2}))} \lesssim \|\partial_{\ell}\tilde{u}\|_{L^\infty(\mathcal{N}_0\cap (B^+_1\setminus B^+_{1/4}))}+ c_{\ast}\lambda^{1-\alpha}.
\end{align}
In order to obtain a full second derivatives estimate, we now combine (\ref{eq:tangential}) with the equation for $\partial_{n+1}\tilde{u}$ to also obtain
\begin{align*}
\|\p_{n+1,n+1}\tilde{u}\|_{C^{0,\gamma}(\mathcal{N}_0\cap (B^+_{3/4}\setminus B^+_{1/2}))} \lesssim \sum\limits_{\ell=1}^{n}\|\partial_{\ell}\tilde{u}\|_{L^\infty(\mathcal{N}_0\cap (B^+_1\setminus B^+_{1/4}))}+ c_{\ast} \lambda^{1-\alpha}.
\end{align*}
Rescaling back and using \eqref{eq:max} consequently leads to
\begin{align*}
|\nabla \p_\ell w(x)-\nabla \p_\ell \mathcal{W}_{x_0}(x)|\lesssim \max\{\epsilon_0,c_{\ast}\}\lambda^{-\frac{1}{2}+\alpha} \text{ in } \mathcal{N}_{x_0}\cap (B^+_{3\lambda/4}(x_0)\setminus B^+_{\lambda/2}(x_0)),
\end{align*}
for $\ell\in \{1,\dots, n+1\}$.
Since this holds for any $\lambda\in (0,1/4)$, we conclude that
\begin{align*}
|\p^\beta w(x)-\p^\beta \mathcal{W}_{x_0}(x)|\lesssim \max\{\epsilon_0,c_{\ast}\} |x-x_0|^{-\frac{1}{2}+\alpha}, \quad x\in \mathcal{N}_{x_0},\ |\beta|=2.
\end{align*}
\end{proof}
\begin{rmk}
\label{rmk:normal}
In the next section we will make essential use of the asymptotics of the first order derivatives $\p_\ell w$ with $\ell \in \{1,\dots,n+1\}$. Hence, for future reference we state them explicitly:
\begin{align*}
\p_e\mathcal{W}_{x_0}(x)&=b_e(x_0)w_{1/2}\left(\frac{(x-x_0)\cdot \nu_{x_0}}{(\nu_{x_0}\cdot A(x_0)\nu_{x_0})^{1/2}}, \frac{x_{n+1}}{(a^{n+1,n+1}(x_0))^{1/2}}\right),\\
\p_{n+1}\mathcal{W}_{x_0}(x)&=b_{n+1}(x_0)\bar w_{1/2}\left(\frac{(x-x_0)\cdot \nu_{x_0}}{(\nu_{x_0}\cdot A(x_0)\nu_{x_0})^{1/2}}, \frac{x_{n+1}}{(a^{n+1,n+1}(x_0))^{1/2}}\right),
\end{align*}
where
\begin{align*}
w_{1/2}(x)&= c_n \Ree(x_n + i x_{n+1})^{1/2},\quad \bar{w}_{1/2}(x)= - c_n \Imm(x_n + i x_{n+1})^{1/2},\\
b_e(x_0)&=\frac{3(e\cdot \nu_{x_0}) a(x_0)}{2(\nu_{x_0}\cdot A(x_0)\nu_{x_0})^{1/2}},\quad b_{n+1}(x_0)=\frac{3 a(x_0)}{2(a^{n+1,n+1}(x_0))^{1/2}},
\end{align*}
and $c_n>0$ is the same normalization constant as in Proposition \ref{prop:asym2}.
\end{rmk}
\begin{rmk}\label{rmk:convention}
For simplicity we can, and in the sequel will, further assume that $0\in \Gamma_w$ and that
\begin{align*}
\nu_0=e_n, \quad b_n(0)=b_{n+1}(0)=1\ (\text{which corresponds to } a(0)=2/3).
\end{align*}
Thus,
$$\nabla \mathcal{W}_0(x)=(0,w_{1/2}(x),\bar w_{1/2}(x)).$$
Moreover, under the assumptions of Proposition \ref{prop:asym2} we can also bound the $\dot{C}^{0,\alpha}$ semi-norm of $b_n, b_{n+1}$ by $\max\{\epsilon_0,c_{\ast}\}$.
\end{rmk}
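As a quick sanity check of this normalization (illustrative only, not part of the argument), the following plain Python sketch verifies by central differences that the model profile $\mathcal{W}_0(x)=\tfrac{2}{3}\Ree(x_n+ix_{n+1})^{3/2}$, taking the normalization constant $c_n=1$, indeed has gradient $(0,w_{1/2},\bar w_{1/2})$; the function names below are ours.

```python
# Model profile W_0(x) = (2/3) Re (x_n + i x_{n+1})^{3/2}, with c_n = 1.
# We check d_n W_0 = Re z^{1/2} = w_{1/2} and d_{n+1} W_0 = -Im z^{1/2} = \bar w_{1/2}.

def W0(xn, xnp1):
    return (2.0 / 3.0) * ((xn + 1j * xnp1) ** 1.5).real

def w_half(xn, xnp1):
    return ((xn + 1j * xnp1) ** 0.5).real

def w_bar_half(xn, xnp1):
    return -((xn + 1j * xnp1) ** 0.5).imag

def central_diff(f, x, y, axis, h=1e-6):
    # central finite difference in the x_n (axis 0) or x_{n+1} (axis 1) direction
    if axis == 0:
        return (f(x + h, y) - f(x - h, y)) / (2 * h)
    return (f(x, y + h) - f(x, y - h)) / (2 * h)

# sample points in the upper half-plane away from the branch cut
for (xn, xnp1) in [(0.3, 0.4), (-0.2, 0.5), (1.0, 0.1)]:
    assert abs(central_diff(W0, xn, xnp1, 0) - w_half(xn, xnp1)) < 1e-6
    assert abs(central_diff(W0, xn, xnp1, 1) - w_bar_half(xn, xnp1)) < 1e-6
```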
Last but not least, we recall a sign condition on $\p_n w$ and $\p_{n+1}w$, which plays an important role in the determination of the image of the Hodograph-Legendre transform in (\ref{eq:mapT}) in Section \ref{sec:Hodo}. An extension of this to the set-up of $W^{1,p}$, $p\in (n+1,\infty]$, metrics is recalled in Section \ref{sec:ext}. As explained in \cite{KRSI} this requires an additional splitting step.
\begin{lem}[Positivity, \cite{KRSI}, Lemma 4.12.]
\label{lem:lower1}
Let $a^{ij}:B_1^+ \rightarrow \R^{(n+1)\times (n+1)}_{sym}$ be a tensor field that satisfies the conditions from Section \ref{sec:conventions} and in addition is $C^{1,\gamma}$ regular for some $\gamma \in (0,1)$. Let $w:B_1^+ \rightarrow \R$ be a solution of the thin obstacle problem with metric $a^{ij}$ and assume that it satisfies the normalizations from Section \ref{sec:conventions}. Then there exist positive constants $\eta= \eta(n)$ and $c=c(n)$ such that
\begin{align}
\label{eq:lower1}
\p_ew(x)\geq c\dist(x,\Lambda_w)\dist(x,\Gamma_w)^{-\frac{1}{2}}, \quad x\in B_{\frac{1}{2}}^+
\end{align}
for $e\in \mathcal{C}'_\eta(e_n):=\mathcal{C}_\eta(e_n)\cap \{e_{n+1}=0\}$, which is a tangential cone (with axis $e_n$ and opening angle $\eta$). Similarly,
\begin{align*}
\p_{n+1}w(x)\leq -c \dist(x,\Omega_w)\dist(x,\Gamma_w)^{-\frac{1}{2}}, \quad x\in B_{\frac{1}{2}}^+.
\end{align*}
\end{lem}
\subsection{Hodograph-Legendre transformation}
\label{sec:Hodo}
In this section we perform a partial Hodograph-Legendre transformation to show the higher regularity of the free boundary with zero obstacle. In the sequel, we assume that the metric satisfies $a^{ij}\in C^{1,\gamma}(B_1^+, \R^{(n+1)\times (n+1)}_{sym})$ with $\gamma\in (0,1)$.\\
We define the partial Hodograph-Legendre transformation associated with $w$ as
\begin{align}
\label{eq:def_Legendre}
T=T^w:B_1^+\rightarrow \R^{n+1}, \quad y=T(x)=(x'', \partial_{n} w(x), \partial_{n+1}w(x)).
\end{align}
The regularity of $w$ immediately implies that $T\in C^{0,1/2}(B_1^+)$. Moreover,
\begin{equation}
\label{eq:mapT}
\begin{split}
T(B_1^+\setminus B'_1)&\subset \{y_n>0, y_{n+1}<0\},\\
T(\Lambda_w)&\subset\{y_n=0, y_{n+1}\leq 0\}, \ T(B'_1\setminus \Lambda_w)\subset \{y_n>0, y_{n+1}=0\},\\
T(\Gamma_w)&\subset\{y_n=y_{n+1}=0\}.
\end{split}
\end{equation}
Here the first inclusion is a consequence of Lemma \ref{lem:lower1}.
Using the leading order asymptotic expansions from Section \ref{sec:asymp}, we prove the invertibility of the transformation:
\begin{prop}[Invertibility of $T$]\label{prop:invertibility}
Suppose that the assumptions of Proposition~\ref{prop:asym2} hold. Then, if $[\nabla a^{ij}]_{\dot{C}^{0,\gamma}(B_1^+)}\leq c_{\ast}$ and if $\epsilon_0$ and $c_\ast $ are sufficiently small, the map $T$ is a homeomorphism from $B_{1/2}^+$ to $T(B_{1/2}^+) \subset \{y\in \R^{n+1}| \ y_n\geq 0, y_{n+1}\leq 0\}$. Moreover, away from $\Gamma_w$, $T$ is a $C^1$ diffeomorphism.
\end{prop}
The proof of this result essentially relies on the facts that for each fixed $x''$, the transformation $T$ is asymptotically a square root mapping and that the free boundary $\Gamma_w$ is sufficiently flat (i.e. it is a $C^{1,\alpha}$ graph with slowly varying normals, c.f. \eqref{eq:normal}). Hence, the main idea is to show the injectivity of $T$ on dyadic annuli around the free boundary. At these points the map $T$ is differentiable, which allows us to exploit the non-degeneracy of the derivative of $T$. To achieve this reduction to dyadic annuli we exploit the asymptotic structure of the function $w$ (c.f. Propositions \ref{prop:asym2} and \ref{prop:improved_reg}).
\begin{proof}
\emph{Step 1: Homeomorphism.}\\
We begin with the injectivity of $T$ in $B_{1/2}^+$.
Since $T$ fixes the first $n-1$ variables, it is enough to show that for each $x_0\in \Gamma_w\cap B_{1/2}'$, $T$ is injective on the set $H_{x_0}:=\{(x''_0,x_n,x_{n+1})\}\cap B_{1/2}^+$. Moreover, as $\Gamma_w$ is given as a graph of a $C^{1,\alpha}$ function $g$, it suffices to prove that $T(x)\neq T(\tilde{x})$ for any two points $x,\tilde{x}\in H_{x_0}$ such that $x,\tilde{x}\notin \Gamma_w$.
In order to obtain this, we first prove that the mapping $T_1:=\psi\circ T$ is injective (and a homeomorphism) on $B_{1/2}^+$. Here $\psi:\R^{n+1}\rightarrow \R^{n+1}$ with $\psi(z)=(z'',z_n^2-z_{n+1}^2, -2z_nz_{n+1})$. Note that $T_1(x)=(x'',(\p_n w(x))^2 -(\p_{n+1} w(x))^2, - 2 \p_n w(x) \p_{n+1}w(x))$. We rely on the asymptotic expansion of $\nabla w$. In a second step, we then return to the mapping properties of $T$.\\
\emph{Step 1a: Injectivity of $T_1$.}
We begin with the injectivity of $T_1$.
By Proposition~\ref{prop:improved_reg}, for $x\in B^+_{1/2}$
\begin{align*}
\p_n w(x)&= w_{1/2}(x)+\max\{\epsilon_0,c_{\ast}\}O(|x|^{\frac{1}{2}+\alpha}),\\
\p_{n+1}w(x)&=\bar{w}_{1/2}(x)+\max\{\epsilon_0,c_{\ast}\}O(|x|^{\frac{1}{2}+\alpha}).
\end{align*}
Hence, a direct computation gives that
\begin{align*}
&T_1(x)=x+E_0(x), \\
&\text{where } E_0:B_{1/2}^+\rightarrow \R^{n+1},\ |E_0(x)|=\max\{\epsilon_0,c_{\ast}\}O(|x|^{1+\alpha}).
\end{align*}
In general, by the explicit asymptotic expansions of $\p_nw$ and $\p_{n+1}w$ around $x_0\in \Gamma_w\cap B^+_{1/2}$ (c.f. Proposition~\ref{prop:improved_reg}) and by using the fact that $b_n,b_{n+1}\in C^{0,\alpha}(\Gamma_w\cap B^+_{1/2})$, $\nu(x_0)=\nu_{x_0}\in C^{0,\alpha}(\Gamma_w\cap B^+_{1/2})$ and $A(x_0)\in C^{0,\alpha}(\Gamma_w\cap B^+_{1/2})$, we have
\begin{equation}
\label{eq:identity}
\begin{split}
T_1(x)- T_1(x_0)&=(x-x_0) + E_{x_0}(x), \quad x\in B_{1/2}^+(x_0)\\
\text{where } |E_{x_0}(x)|&\lesssim \max\{\epsilon_0,c_{\ast}\} \left(|x-x_0||x_0|^\alpha+ |x-x_0|^{1+\alpha}\right).
\end{split}
\end{equation}
Here we recall that as indicated in Remark \ref{rmk:convention} we may assume that the Hölder constants of $b_n(x_0), b_{n+1}(x_0)$ are controlled by $\max\{\epsilon_0,c_{\ast}\}$.
From the identity (\ref{eq:identity}) we note that if $\epsilon_0, c_*$ are sufficiently small and if $x_0\in \Gamma_w\cap B_{1/2}^+$, then for $x\in B_{1/2}^+(x_0)$
\begin{align}
\label{eq:cont_boundary}
(1-\frac{1}{4})|T_1(x) - T_1(x_0)| \leq |x-x_0|\leq (1+\frac{1}{4})|T_1(x) - T_1(x_0)|.
\end{align}
Thus, if there are $x, \tilde{x}\in H_{x_0}$ with $x, \tilde{x}\notin \Gamma_w$ such that $T_1(x)=T_1(\tilde{x})$, then necessarily
\begin{align}\label{eq:quo}
\frac{1}{2}\leq \frac{|x-x_0|}{|\tilde{x}-x_0|}\leq 2.
\end{align}
Without loss of generality, we assume that $|x-x_0|\leq |\tilde{x}-x_0|$ and define $r:=|x-x_0|$. Then \eqref{eq:quo} implies that $x, \tilde{x}\in A_{r,2r}^+(x_0)\cap H_{x_0}$, where $A_{r,2r}^+(x_0)$ is the closed cylinder centered at $x_0$:
\begin{align*}
A^+_{r,2r}(x_0)&:=\{(x'',x_n,x_{n+1})\in B_1^+| \ |x''-x''_0|\leq r,\\
&\qquad r\leq \sqrt{(x_n-(x_0)_n)^2+(x_{n+1}-(x_0)_{n+1})^2}\leq 2r\}.
\end{align*}
Since $\Gamma_w$ is $C^{1,\alpha}$ with $|\nu_{x_0}-\nu_{\tilde{x}_0}|\lesssim \max\{\epsilon_0,c_\ast\}|x_0-\tilde{x}_0|^\alpha$, for any $x_0, \tilde{x}_0\in \Gamma_w\cap B'_{1/2}$ and $\nu_0=e_n$, we have that $\Gamma_w \cap A^+_{r,2r}(x_0)=\emptyset$ for a sufficiently small (but independent of $r$) choice of the constants $\epsilon_0, c_\ast$. Thus, $T_1$ is a $C^1$ mapping in $A^+_{r,2r}(x_0)\cap B_{1/2}^+$ (because $w$ is $C^{2,\gamma}$ away from $\Gamma_w$). We compute $DT_1$ in $A_{r,2r}^+(x_0)\cap B_{1/2}^+$. By using the asymptotics of $D w$ and $D^2w$ around $x_0$ (c.f. Propositions \ref{prop:asym2}, \ref{prop:improved_reg}), we obtain
\begin{align*}
|DT_1(x)-I|\lesssim \max\{\epsilon_0,c_{\ast}\} \left(|x_0|^{\alpha}+r^{2\alpha}\right), \quad x\in A_{r,2r}^+(x_0) \cap \mathcal{N}_{x_0}\cap B_{1/2}^+,
\end{align*}
where $I$ is the identity map.
Therefore, for sufficiently small, universal constants $\epsilon_0, c_\ast$, the map $T_1$ is injective in $A_{r,2r}^+(x_0)\cap \mathcal{N}_{x_0}\cap B_{1/2}^+$. This implies that $T_1(x)\neq T_1(\tilde{x})$.\\
\emph{Step 1b: $T_1:B_{1/2}^+ \rightarrow T_1(B_{1/2}^+)$ is a homeomorphism.}
By the continuity of $T_1$ and by the invariance of domain theorem, we infer that, as a mapping from $\inte(B_{1/2}^+)$ to $T_1(\inte(B_{1/2}^+))$, $T_1$ is a homeomorphism. We claim that this is also true for $T_1$ as a map from $B_{1/2}^+$ to $T_1(B_{1/2}^+)$. Indeed, due to our previous considerations in Step 1a, $T_1$ is injective (and hence invertible) on the whole of $B_{1/2}^+$ (as a map onto its image). Hence, it suffices to prove the continuity of the inverse. Here we distinguish three cases: Let $y\in T_1(B_{1/2}^+)$ and first assume that $y\in T_1(\Gamma_w\cap B_{1/2}')$. Then, (\ref{eq:cont_boundary}) immediately implies the continuity of $T_1^{-1}$ at $y$. Secondly, we assume that $y\in T_1(B_{1/2}'\setminus \Lambda_w)$. Let $x=T^{-1}_1(y)\in B_{1/2}'\setminus \Lambda_w$. Then we carry out an even reflection of $w$ about $x_{n+1}$ (and a corresponding partly even, partly odd reflection for $a^{ij}$) as described in Remark 3.8 in \cite{KRSI}. The resulting reflected function $\tilde{w}$ is still $C^{1,1/2}$ regular in a (sufficiently small) neighborhood $B_{\rho}(x) \subset B_{1/2}^+ \setminus \Lambda_w$ of $x$. Moreover, the $y_{n+1}$-component of $T_1^{\tilde{w}}$ changes sign on passing from $x_{n+1}>0$ to $x_{n+1}<0$. Thus, the mapping $T_1^{\tilde{w}}$ is still injective as a mapping from $B_{\rho}(x)$ to $T^{\tilde{w}}_1(B_{\rho}(x))$. Since it is also continuous, the invariance of domain theorem implies that it is a homeomorphism from $B_{\rho}(x)$ to $T^{\tilde{w}}_1(B_{\rho}(x))$, which is an open subset in $\R^{n+1}$ containing $y$. In particular, this implies that our original mapping, $(T^{w}_1)^{-1}$, is continuous at $y\in T_1(B_{1/2}'\setminus \Lambda_w)$. Last but not least, for a point $y\in T_1(B_{1/2}' \cap \inte(\Lambda_w))$, we argue similarly. However, instead of using an even reflection, we carry out an odd reflection of $w$ about $x_{n+1}$.
Again, we note that the associated map $T_1^{\tilde{w}}$ changes sign on passing from $x_{n+1}>0$ to $x_{n+1}<0$. Thus, arguing as in the second case, we again obtain the continuity of $(T_1^{w})^{-1}$ at $y$. Combining the results of the three cases therefore yields that $T_1$ is a homeomorphism as a map from $B_{1/2}^+$ to $T_1(B_{1/2}^+)$, which is relatively open in $\{y_n\geq 0, y_{n+1}\leq 0\}$.
\\
\emph{Step 1c: $T$ is a homeomorphism.}
By definition of $T_1$, we have that $T_1 = \psi\circ T$, where $\psi(x):=(x'', x_n^2 - x_{n+1}^2, -2 x_{n}x_{n+1})$. We show that the injectivity of $T$ follows immediately from the injectivity of $T_1$. As $T(B_1^+)\subset \{y\in \R^{n+1}| y_{n}\geq 0, y_{n+1}\leq 0\}$ and as $\psi$ is injective on this quadrant, we obtain $T(U)=\psi^{-1}\circ T_1(U)$ for any $U\subset B_1^+$. Since $T_1$ is open and $\psi$ is continuous, this implies that $T$ is open. Combining this with the continuity of $T$, we obtain that $T$ is a homeomorphism from $B_{1/2}^+$ to $T(B_{1/2}^+)\subset \{y\in \R^{n+1}| y_n\geq 0, y_{n+1}\leq 0\}$.\\
\emph{Step 2: Differentiability.}
Recalling the regularity of the metric, $a^{ij}\in C^{1,\gamma}$ for $\gamma>0$, we observe that $w\in C^{2,\gamma}_{loc}(B_1^+\setminus \Gamma_w)$. Thus, $T$ is $C^1$ away from $\Gamma_w$. In order to show that $T$ is a $C^1$ diffeomorphism away from $\Gamma_w$, it suffices to compute its Jacobian. For $x\in B_{1/2}^+\setminus \Gamma_w$, let $x_0=(x'',g(x''),0)$ be the projection onto $\Gamma_w$. Then, by the asymptotics for $D^2w$ (Proposition~\ref{prop:improved_reg} applied in the non-tangential cone $\mathcal{N}_{x_0}$), we have
\begin{align*}
\det(DT(x))&=\p_{nn}w\p_{n+1,n+1}w-(\p_{n,n+1}w)^2 \\
&=\p_{nn}\mathcal{W}_{x_0}\p_{n+1,n+1}\mathcal{W}_{x_0}-(\p_{n,n+1}\mathcal{W}_{x_0})^2 + \max\{\epsilon_0,c_{\ast}\} O(|x-x_0|^{-1+\alpha}).
\end{align*}
A direct computation gives
\begin{align*}
&\p_{nn}\mathcal{W}_{x_0}\p_{n+1,n+1}\mathcal{W}_{x_0}-(\p_{n,n+1}\mathcal{W}_{x_0})^2\\
&=-\frac{9}{16}a(x_0)^2\frac{e_n\cdot \nu_{x_0}}{(\nu_{x_0}\cdot A(x_0)\nu_{x_0})(a^{n+1,n+1}(x_0))} \frac{1}{\tilde{r}},
\end{align*}
where
\begin{align*}
\tilde{r}=\left(\frac{((x-x_0)\cdot \nu_{x_0})^2}{(\nu_{x_0}\cdot A(x_0)\nu_{x_0})}+\frac{x_{n+1}^2}{a^{n+1,n+1}(x_0)}\right)^{1/2}.
\end{align*}
The $C^{0,\alpha}$ regularity of $\nu_{x_0}$ and the ellipticity of $A(x)=(a^{ij}(x))$ entail that
\begin{align*}
c|x-x_0|\leq \tilde{r}\leq C |x-x_0|, \text{ for some absolute constants }0<c<C<\infty.
\end{align*}
Thus,
\begin{equation}
\label{eq:jacobi}
\det(DT(x))=-c|x-x_0|^{-1}+\max\{\epsilon_0,c_{\ast}\} O(|x-x_0|^{-1+\alpha})<0.
\eta_{\delta,r}nd{equation}
Therefore, after potentially choosing the constant $1/2=1/2(n,p,\alpha)>0$ even smaller, the inverse function theorem implies that $T$ and $T^{-1}$ are locally $C^{1}$. Due to the global invertibility, which we have proved above, the statement follows.
\end{proof}
\subsection{Legendre function and nonlinear PDE}
\label{sec:Legendre}
In this section we compute a partial Legendre transform of a solution $w$ of our problem (\ref{eq:varcoeff}). In this context it becomes convenient to view the equation (\ref{eq:varcoeff}) in non-divergence form and to regard the equation in the interior as a special case of the problem
\begin{align*}
a^{ij}\p_{ij} u = f(Du,u,y),
\end{align*}
for a suitable function $f$. In our case $f(Du,u,y)=-(\p_{i} a^{ij})\p_j u$.
Starting from this non-divergence form, we compute the equation which the Legendre function satisfies (c.f. Proposition \ref{prop:bulk_eq}). By considering the explicit example of the Legendre transform of $\mathcal{W}_{x_0}$ for $x_0=0$, we illustrate that the fully nonlinear equation in the bulk is related to the Baouendi-Grushin operator (c.f. Example \ref{ex:linear}). \\
From now on we will work in the image domain $T(B_{1/2}^+)$, where $T$ is the partial Hodograph transformation defined in \eqref{eq:def_Legendre}. For simplicity, we set $U:=T(B_{1/2}^+)$ and denote the straightened free boundary by $P:=T(\Gamma_w\cap B_{1/2}')$. We recall that the Hodograph transform was seen to be invertible in $U$ (c.f. Proposition \ref{prop:invertibility}).
For $y\in U$, we define the partial Legendre transform of $w$ by the identity
\begin{equation}\label{eq:legendre}
v(y)=w(x)-x_{n}y_n-x_{n+1}y_{n+1}, \quad x=T^{-1}(y).
\eta_{\delta,r}nd{equation}
A direct computation shows that
\begin{equation}\label{eq:dual}
\partial_{y_i}v=\partial_{x_i}w, \ i=1,\ldots, n-1,\quad \partial_{y_{n}}v=-x_n,\quad \partial_{y_{n+1}}v=-x_{n+1}.
\end{equation}
As a consequence of \eqref{eq:dual}, the free boundary $\Gamma_w\cap B_{1/2}'$ is parametrized by
\begin{align}
\label{eq:boundaryLH}
x_n=-\p_{y_n}v(y'',0,0).
\end{align}
As in \cite{KPS} the advantage of passing to the Legendre-Hodograph transform consists of fixing (the image of the) free boundary, i.e. by mapping it to the co-dimension two hyperplane $y=(y'',0,0)$. However, this comes at the expense of a more complicated, fully nonlinear, degenerate (sub)elliptic equation for $v$. We summarize this in the following:
\begin{prop}[Bulk equation]
\label{prop:bulk_eq}
Suppose that $a^{ij}\in C^{1,\gamma}(B_1^+, \R^{(n+1)\times (n+1)}_{sym})$ is uniformly elliptic.
Let $w:B_1^+ \rightarrow \R$ be a solution of the variable coefficient thin obstacle problem and let $v:U \rightarrow \R$ be its partial Legendre-Hodograph transform. Then $v\in C^{1}(U)$ and it satisfies the following fully nonlinear equation
\begin{equation}
\label{eq:nonlineq1}
\begin{split}
F(D^2v, D v, v,y)&=-\sum_{i,j=1}^{n-1}\tilde{a}^{ij}\det\begin{pmatrix}
\p_{ij}v& \p_{in}v & \p_{i,n+1}v\\
\p_{jn}v& \p_{nn}v & \p_{n,n+1}v\\
\p_{j,n+1}v & \p_{n,n+1}v &\p_{n+1,n+1}v
\end{pmatrix}\\
&+2\sum_{i=1}^{n-1}\tilde{a}^{i,n}\det\begin{pmatrix}
\p_{in}v & \p_{i,n+1}v\\
\p_{n,n+1}v & \p_{n+1,n+1}v
\end{pmatrix}\\
& \quad+2 \sum_{i=1}^{n-1}\tilde{a}^{i,n+1}\det\begin{pmatrix}
\p_{i,n+1}v & \p_{in}v\\
\p_{n,n+1}v & \p_{nn}v
\end{pmatrix}\\
&+\tilde{a}^{nn}\p_{n+1,n+1}v+\tilde{a}^{n+1,n+1}\p_{nn}v-2\tilde{a}^{n,n+1}\p_{n,n+1}v\\
&-\det\begin{pmatrix}
\p_{nn}v &\p_{n,n+1}v\\
\p_{n,n+1}v &\p_{n+1,n+1}v
\end{pmatrix}\left(\sum_{j=1}^{n-1}\tilde{b}^j\p_j v+\tilde{b}^ny_n+\tilde{b}^{n+1}y_{n+1}\right)=0,
\end{split}
\end{equation}
where
\begin{align*}
\tilde{a}^{ij}(y)&:=a^{ij}(x)\big|_{x=(y'',-\p_nv(y),-\p_{n+1}v(y))},\\
\tilde{b}^j(y)&:=\sum_{i=1}^{n+1}(\p_{x_i}a^{ij})(x)\big|_{x=(y'',-\p_nv(y),-\p_{n+1}v(y))}.
\end{align*}
Moreover, the following mixed Dirichlet-Neumann boundary conditions hold:
\begin{align*}
v=0\text{ on } U\cap \{y_n=0\}; \quad \p_{n+1}v=0 \text{ on } U\cap \{y_{n+1}=0\}.
\end{align*}
In particular, $\Gamma_w\cap B_{1/4}$ is parametrized by $x_n=-\p_{y_n}v(y'',0,0)$.
\end{prop}
\begin{rmk}
For convenience of notation, in the sequel we will also use the notation $F(v,y):=F(D^2v,Dv,v,y)$. We emphasize that the coefficients $\tilde{a}^{ij}(y)$ depend on $v$ nonlinearly.
\end{rmk}
The proof of Proposition \ref{prop:bulk_eq} follows by computing the corresponding changes of coordinates:
\begin{proof}
Due to the regularity of $T^{-1}$ and $w$, \eqref{eq:dual} directly entails that $v\in C^{1}(U)$. The condition $w=0$ on $\Gamma_w\cap B^+_{1/4}$ immediately translates into $v=0$ on $P$. Moreover, it is easy to check from \eqref{eq:dual} and the Signorini boundary condition of $w$ that $v=0$ on $U\cap \{y_n=0\}$ and $\p_{n+1}v=0$ on $U\cap \{y_{n+1}=0\}$.
Now we derive the equation for $v$. Recalling that
\begin{align*}
y=T(x)=(x'', \partial_{x_n}w, \partial_{x_{n+1}} w), \quad x=T^{-1}(y)=(y'', -\partial_{y_n}v, -\partial_{y_{n+1}}v),
\end{align*}
and using \eqref{eq:dual}, we have
\begin{align*}
DT=\begin{pmatrix}
I_{n-1} & 0\\
A(w)& H(w)
\end{pmatrix},\quad
DT^{-1}=\begin{pmatrix}
I_{n-1} & 0\\
A(v)& H(v)
\end{pmatrix} \mbox{ in } U\setminus P,
\end{align*}
where
\begin{align*}
A(w)=\begin{pmatrix}
\partial_{x_{n}x_1} w & \ldots & \partial_{x_{n}x_{n-1}} w\\
\partial_{x_{n+1}x_1} w & \ldots & \partial_{x_{n+1}x_{n-1}} w
\end{pmatrix},\quad
H(w)=\begin{pmatrix}
\partial_{x_{n}x_n} w & \partial_{x_{n}x_{n+1}} w\\
\partial_{x_{n+1}x_n} w & \partial_{x_{n+1}x_{n+1}}w
\end{pmatrix},\\
A(v)=-\begin{pmatrix}
\partial_{y_{n}y_1} v & \ldots & \partial_{y_{n}y_{n-1}} v\\
\partial_{y_{n+1}y_1} v & \ldots & \partial_{y_{n+1}y_{n-1}} v
\end{pmatrix},\quad
H(v)=-\begin{pmatrix}
\partial_{y_{n}y_n} v & \partial_{y_{n}y_{n+1}} v\\
\partial_{y_{n+1}y_n} v & \partial_{y_{n+1}y_{n+1}}v
\end{pmatrix}.
\end{align*}
Next we express $D^2w(x)$ in terms of $D^2v(y)$ if $y\in U\setminus P$. Since $(DT)^{-1}=DT^{-1}$, we immediately obtain
\begin{equation}\label{eq:relation}
H(w)=H(v)^{-1},\quad A(v)=-H(w)^{-1}A(w).
\end{equation}
Moreover, the identities \eqref{eq:relation} and \eqref{eq:dual} together with a direct calculation give
\begin{align}
(\partial_{y_iy_j}v)_{(n-1)\times (n-1)}&= (\partial_{x_ix_j}w)_{(n-1)\times (n-1)} - A(w)^t H(w)^{-1} A(w), \label{eq:hessianv}\\
(\partial_{x_ix_j}w)_{(n-1)\times (n-1)}&= (\partial_{y_iy_j}v)_{(n-1)\times (n-1)} - A(v)^t H(v)^{-1} A(v)\label{eq:hessianw}.
\end{align}
In order to compute the equation for $v$, we assume that $a^{ij}\in C^{1,\gamma}$ for some $\gamma>0$ and rewrite the equation for $w$ in non-divergence form
\begin{equation}\label{eq:nondivw}
a^{ij}\partial_{ij}w + (\partial_i a^{ij})\partial_j w=0.
\end{equation}
For convenience and abbreviation, we set
\begin{align*}
\tilde{a}^{ij}(y)&:=a^{ij}(x)\big|_{x=(y'',-\p_nv(y),-\p_{n+1}v(y))},\\
\tilde{b}^j(y)&:=\sum_{i=1}^{n+1}(\p_{x_i}a^{ij})(x)\big|_{x=(y'',-\p_nv(y),-\p_{n+1}v(y))}.
\end{align*}
Plugging \eqref{eq:relation}-\eqref{eq:hessianw} into (\ref{eq:nondivw}), and multiplying the resulting equation by
\begin{align*}
-J(v):=-\det\begin{pmatrix}
\p_{nn}v &\p_{n,n+1}v\\
\p_{n,n+1}v &\p_{n+1,n+1}v
\end{pmatrix},
\end{align*}
leads to the equation~(\ref{eq:nonlineq1}) for $v$.
\end{proof}
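The block-triangular structure of $DT$ used above, and the resulting identities $H(w)=H(v)^{-1}$, $A(v)=-H(w)^{-1}A(w)$ of \eqref{eq:relation}, can be sanity-checked numerically. The following plain Python sketch (with generic sample blocks; all names are ours and purely illustrative) verifies that the inverse of $\begin{pmatrix} I & 0\\ A & H\end{pmatrix}$ is $\begin{pmatrix} I & 0\\ -H^{-1}A & H^{-1}\end{pmatrix}$.

```python
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def inv2(H):
    # inverse of a 2x2 matrix by the cofactor formula
    (a, b), (c, d) = H
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

n_minus_1 = 2
A = [[0.3, -0.7], [1.2, 0.4]]   # sample 2 x (n-1) block (hypothetical values)
H = [[2.0, 0.5], [0.5, 1.0]]    # sample symmetric invertible 2 x 2 block

Hinv = inv2(H)
Av = [[-x for x in row] for row in matmul(Hinv, A)]   # A(v) = -H^{-1} A

def block(Abl, Hbl):
    # assemble [[I, 0], [Abl, Hbl]] as a dense (n+1) x (n+1) matrix
    top = [[1.0 if i == j else 0.0 for j in range(n_minus_1 + 2)]
           for i in range(n_minus_1)]
    bot = [Abl[i] + Hbl[i] for i in range(2)]
    return top + bot

DT = block(A, H)
DTinv = block(Av, Hinv)
P = matmul(DT, DTinv)
assert all(abs(P[i][j] - (1.0 if i == j else 0.0)) < 1e-12
           for i in range(4) for j in range(4))
```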
We conclude this section by computing the Legendre function of a 3/2-homogeneous blow-up of a solution to the variable coefficient thin obstacle problem.
\begin{lem}
\label{lem:asymp_profile}
Let $w:B_{1}^+ \rightarrow \R$ be a solution of the variable coefficient thin obstacle problem and let $x_0\in \Gamma_w\cap B_{1/2}$. Assume that $v$ is the Legendre function of $w$ under the Hodograph transformation $y=T^w(x)$. Then at $y_0=T^w(x_0)$, the Legendre function $v$ has the asymptotic expansion
$$v(y)= v_{y_0}(y)+ \max\{\epsilon_0,c_\ast\}O(|y-y_0|^{3+2\alpha}),$$
with the leading order profile
\begin{align*}
v_{y_0}(y) &=- \frac{4}{27 a^2(x_0)}\left( \left(\frac{\nu_{x_0}\cdot A(x_0)\nu_{x_0}}{(\nu_{x_0})_n} y_n \right)^3 \right.\\
& \quad \left. - 3 \left( \frac{\nu_{x_0}\cdot A(x_0)\nu_{x_0}}{(\nu_{x_0})_n} (a^{n+1,n+1}(x_0)) \right) y_ny_{n+1}^2 \right)\\
&\quad -g(y_0)y_n + y_{n}\frac{(y''-y_0)\cdot \nu_{x_0}''}{(\nu_{x_0})_n},
\end{align*}
where $\nu_{x_0}:= (\nu_{x_0}'', (\nu_{x_0})_n,0)=\frac{(-\nabla''g(x_0), 1, 0)}{\sqrt{1+|\nabla''g(x_0)|^2}}$ denotes the (in-plane) outer normal to $\Lambda_w$ at $x_0$.
\end{lem}
\begin{proof}
The claim follows from a straightforward calculation. Indeed, recall that $y(x)=(x'',\p_nw(x),\p_{n+1}w(x))$. From the asymptotics of $\p_nw, \p_{n+1}w$ around $x_0\in \Gamma_w$ in Proposition~\ref{prop:asym2}, we obtain the asymptotics of the inverse $x=x(y)$ around $y_0=T^w(x_0)$. Additionally, recalling that $v(y)=w(x(y))-x_n(y)y_n-x_{n+1}(y)y_{n+1}$, we obtain the claimed asymptotic expansion of $v$ around $y_0$.
\end{proof}
It turns out that the function $v_{y_0}(y)$ provides good intuition for the behavior of solutions to (\ref{eq:nonlineq1}).
In order to obtain a better idea about the structure of $F(v,y)$, we compute its linearization at $ v_0(y)$, which is the leading order expansion of $v$ at the origin. It is immediate from Lemma~\ref{lem:asymp_profile} (and using the normalization in Remark~\ref{rmk:normal} and Remark~\ref{rmk:convention}) that
$$v_0(y)=-\frac{1}{3}\left(y_n^3-3y_ny_{n+1}^2\right).$$
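As a cross-check of Lemma \ref{lem:asymp_profile} in the model situation (illustrative only), one can verify numerically that the partial Legendre transform \eqref{eq:legendre} of $\mathcal{W}_0(x)=\tfrac{2}{3}\Ree(x_n+ix_{n+1})^{3/2}$ is exactly $v_0$; the Python names below are ours.

```python
# For the model solution, y = T(x) = (d_n W_0, d_{n+1} W_0) = (Re z^{1/2}, -Im z^{1/2}),
# and the Legendre transform v(y) = W_0(x) - x_n y_n - x_{n+1} y_{n+1}
# should equal v_0(y) = -(1/3)(y_n^3 - 3 y_n y_{n+1}^2).

def W0(xn, xnp1):
    return (2.0 / 3.0) * ((xn + 1j * xnp1) ** 1.5).real

def T(xn, xnp1):
    zhalf = (xn + 1j * xnp1) ** 0.5
    return zhalf.real, -zhalf.imag     # (y_n, y_{n+1})

def v0(yn, ynp1):
    return -(yn ** 3 - 3.0 * yn * ynp1 ** 2) / 3.0

for (xn, xnp1) in [(0.5, 0.2), (-0.3, 0.6), (1.0, 1.0)]:
    yn, ynp1 = T(xn, xnp1)
    legendre = W0(xn, xnp1) - xn * yn - xnp1 * ynp1
    assert abs(legendre - v0(yn, ynp1)) < 1e-12
```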
\begin{example}[Linearization at $v_0$]
\label{ex:linear}
Let $v_0$ be the Legendre function of the blow-up limit $\mathcal{W}_{0}$ at the origin, which itself is a global solution to the Signorini problem with constant metric $a^{ij}=\delta^{ij}$. Then, the Legendre function $v_0$ satisfies the nonlinear PDE
\begin{align*}
F(D^2v)=-\p_{nn}v-\p_{n+1,n+1}v+ \sum\limits_{i=1}^{n-1} \det
\begin{pmatrix}
\p_{ii}v & \p_{in}v & \p_{i,n+1}v\\
\p_{ni}v & \p_{nn}v & \p_{n,n+1}v\\
\p_{n+1,i}v & \p_{n+1,n}v & \p_{n+1,n+1}v
\end{pmatrix}=0.
\end{align*}
A direct computation leads to
\begin{align*}
\frac{\p F(M)}{\p m_{ij}}\big|_{M=D^2 v_0}=-
\begin{pmatrix}
4(y_n^2+y_{n+1}^2) & 0 & 0\\
0 & 1 &0\\
0& 0& 1
\end{pmatrix}.
\end{align*}
As a consequence, the linearization $L_{v_0}=D_vF \big|_{ v_0}=4(y_n^2+y_{n+1}^2)\Delta''+\p^2_{n,n}+\p^2_{n+1,n+1}$ is a \emph{constant coefficient Baouendi-Grushin operator}.
\end{example}
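The computations of the example can be checked numerically in the simplest case $n=2$ (one tangential variable). The following plain Python sketch (names ours, illustrative only) verifies that $v_0$ solves $F(D^2v)=0$ and that $\partial F/\partial m_{ij}$ at $M=D^2v_0$ equals $-\mathrm{diag}(4(y_n^2+y_{n+1}^2),1,1)$.

```python
# Indices: 0 = tangential, 1 = n, 2 = n+1.  For v_0 = -(1/3)(y_n^3 - 3 y_n y_{n+1}^2)
# the Hessian is [[0,0,0],[0,-2 y_n, 2 y_{n+1}],[0, 2 y_{n+1}, 2 y_n]].

def det3(M):
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
            - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
            + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

def F(M):
    # F(D^2 v) = -v_nn - v_{n+1,n+1} + det(D^2 v)  (Example, n = 2)
    return -M[1][1] - M[2][2] + det3(M)

def hess_v0(yn, ynp1):
    return [[0.0, 0.0, 0.0],
            [0.0, -2.0 * yn, 2.0 * ynp1],
            [0.0, 2.0 * ynp1, 2.0 * yn]]

def dF(M, i, j, h=1e-6):
    # central difference of F under a symmetric perturbation of the (i, j) entry
    Mp = [row[:] for row in M]; Mm = [row[:] for row in M]
    Mp[i][j] += h; Mm[i][j] -= h
    if i != j:
        Mp[j][i] += h; Mm[j][i] -= h
    return (F(Mp) - F(Mm)) / (2 * h)

yn, ynp1 = 0.7, -0.3
M0 = hess_v0(yn, ynp1)
assert abs(F(M0)) < 1e-12                                # v_0 solves F = 0
assert abs(dF(M0, 0, 0) + 4 * (yn**2 + ynp1**2)) < 1e-6  # Grushin weight
assert abs(dF(M0, 1, 1) + 1.0) < 1e-6
assert abs(dF(M0, 2, 2) + 1.0) < 1e-6
assert abs(dF(M0, 1, 2)) < 1e-6                          # off-diagonals vanish
```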
The previous example and the observation that around the origin $v$ is a perturbation of $v_0$ and $a^{ij}$ is a perturbation of the identity matrix indicate that the linearization $D_v F$ (and hence $F$) can be viewed as a perturbation of the Baouendi-Grushin Laplacian.
Motivated by this, we introduce function spaces which are adapted to the Baouendi-Grushin operator in the next section.
\section{Function spaces}
\label{sec:holder}
In this section we introduce and discuss generalized H\"older spaces (c.f. Definition \ref{defi:spaces}, Proposition \ref{prop:decompI}) which are adapted to our equation (\ref{eq:nonlineq1}). These are the spaces in which we apply the implicit function theorem in Section~\ref{sec:fb_reg} to deduce the tangential regularity of the Legendre function $v$.
In order to define these spaces, we use the intrinsic geometry induced by the Baouendi-Grushin operator. In particular, we work with the intrinsic (or Carnot-Caratheodory) distance (c.f. Definition \ref{defi:Grushinvf}) associated with the Baouendi-Grushin operator and corresponding intrinsic Hölder spaces (c.f. Definitions \ref{defi:Hoelder}, \ref{defi:Hoelder1}).\\
Our function spaces are inspired by Campanato's characterization of the classical H\"older spaces \cite{Ca64} and are reminiscent of the function spaces used in \cite{DSS14}. They are constructed on the one hand to capture the asymptotics of the Legendre function and on the other hand to allow for elliptic estimates for the Baouendi-Grushin operator (c.f. Proposition \ref{prop:invert}).
\subsection{Intrinsic metric for Baouendi-Grushin Laplacian}
\label{sec:intrinsic}
In this section we define the geometry which is adapted to our equation (\ref{eq:nonlineq1}). This is motivated by viewing our nonlinear operator from (\ref{eq:nonlineq1}) as a variable coefficient perturbation of the constant coefficient \eta_{\delta,r}mph{Baouendi-Grushin} operator (c.f. Example \ref{ex:linear})
\begin{align*}
\D_G:= (y_n^2 + y_{n+1}^2)\D'' + \p_n^2 + \p_{n+1}^2.
\end{align*}
The Baouendi-Grushin operator is naturally associated with the \emph{Baouendi-Grushin vector fields} and an \emph{intrinsic metric}:
\begin{defi}
\label{defi:Grushinvf}
Let $Y_i:=\sqrt{y_n^2+y_{n+1}^2}\p_i$, $i\in\{1,\dots, n-1\}$, $Y_n:=\p_n$, $Y_{n+1}:=\p_{n+1}$ denote the \emph{Baouendi-Grushin vector fields}. The metric associated with the vector fields $Y_i$ is
\begin{align}
\label{eq:metr}
ds^2=\sum_{j=1}^{n-1}\frac{dy_j^2}{y_n^2+y_{n+1}^2}+dy_n^2+dy_{n+1}^2.
\end{align}
More precisely it is defined by the following scalar product in the tangent space:
\begin{align*}
g_{y}(v,w):= (y_n^2 + y_{n+1}^2)^{-1}\left(\sum\limits_{j=1}^{n-1}v_j w_j \right) + v_n w_n + v_{n+1} w_{n+1},
\end{align*}
for all $y \in \R^{n+1}$, $v,w \in \spa\{Y_i(y)| \ i\in \{ 1,\dots, n+1\}\}$.
Let $d_G$ be the distance function associated with this sub-Riemannian metric (or the associated \emph{Carnot-Caratheodory metric}):
\begin{multline*}
d_G(x,y) := \inf \{ \ell(\gamma)| \ \gamma: [a,b] \subset \R \rightarrow \R^{n+1} \mbox{ joins } x \mbox{ and } y,\\
\dot{\gamma}(t)\in \spa\{Y_i(\gamma(t))| \ i\in \{1,\dots, n+1\}\}\},
\end{multline*}
where
\begin{align*}
\ell(\gamma) := \int_{a}^{b} \sqrt{g_{\gamma(t)}(\dot{\gamma}(t), \dot{\gamma}(t))}dt.
\end{align*}
\end{defi}
\begin{rmk}\label{rmk:equi_dist}
We remark that for the family of dilations $\delta_\lambda(\cdot)$ defined by $\delta_\lambda(y'',y_n,y_{n+1}):=(\lambda^2 y'',\lambda y_{n},\lambda y_{n+1})$, we have $d_G(\delta_\lambda( p), \delta_\lambda (q))=|\lambda|d_G(p,q)$ for $p,q\in \R^{n+1}$. Moreover, from \eqref{eq:metr} for $\sqrt{y_n^2+y_{n+1}^2}\sim 1$ we have $ds^2\sim dy_1^2+\dots+dy_{n+1}^2$. Using these, it is possible to directly verify that $d_G$ is equivalent to the following quasi-metric
\begin{align*}
d(x,y)=|x_n-y_n|+|x_{n+1}-y_{n+1}|+\frac{|x''-y''|}{|x_n|+|x_{n+1}|+|y_n|+|y_{n+1}|+|x''-y''|^{1/2}}.
\end{align*}
\end{rmk}
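The homogeneity $d_G(\delta_\lambda(p),\delta_\lambda(q))=|\lambda|d_G(p,q)$ can be verified directly on the level of the line element: substituting $y\mapsto \delta_\lambda(y)$ into \eqref{eq:metr} gives
\begin{align*}
\sum_{j=1}^{n-1}\frac{\lambda^4\, dy_j^2}{\lambda^2(y_n^2+y_{n+1}^2)}+\lambda^2\, dy_n^2+\lambda^2\, dy_{n+1}^2=\lambda^2\, ds^2,
\end{align*}
so the length of any admissible curve $\gamma$ satisfies $\ell(\delta_\lambda\circ \gamma)=|\lambda|\ell(\gamma)$, and taking the infimum over admissible curves yields the claimed scaling.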
\begin{rmk}
\label{rmk:original_variables}
In order to elucidate our choice of metric, we derive its form in our original $x$-coordinates. To this end, we consider the case of the flat model solution $w(x)=\mathcal{W}_{0}(x)$. Denoting the Euclidean inner product on $\R^{n+1}$ by $g_0$ and defining $g_{\mathcal{W}_{0}}$ as the Baouendi-Grushin inner product from Definition~\ref{defi:Grushinvf}, (\ref{eq:metr}) (up to constants), we obtain that $g_{\mathcal{W}_{0}}=(x_n^2+x_{n+1}^2)^{-\frac{1}{2}}T_\ast g_{0}$, where $T$ is the Legendre transformation associated with $\mathcal{W}_0$.
\end{rmk}
The previously defined intrinsic metric induces a geometry on our space. In particular, it defines associated Baouendi-Grushin cylinders/balls:
\begin{defi}
\label{defi:Grushincylinder}
Let $0<r\leq 1$. We set
$$\mathcal{B}_r:= \{y\in \R^{n+1}| \ |y''|\leq r^2, \ y_{n}^2 + y_{n+1}^2 \leq r^2 \}$$
to denote the closed \emph{non-isotropic Baouendi-Grushin cylinders}. For
$$y_0\in P:=\{(y'',y_n,y_{n+1})| y_n=y_{n+1}=0\}$$ we further define $\mathcal{B}_r(y_0):=y_0+\mathcal{B}_r$.
In the quarter space, we restrict the cylinders to the corresponding intersection
$$\mathcal{B}_{r}^+(y_0):=\mathcal{B}_{r}(y_0)\cap Q_{+}, \quad \text{where }Q_+:= \{y\in \R^{n+1}| \ y_{n}\geq 0, y_{n+1}\leq 0\}.$$
\end{defi}
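We note that these cylinders are compatible with the dilations from Remark \ref{rmk:equi_dist}: for $\lambda>0$,
\begin{align*}
\delta_\lambda(\mathcal{B}_r)=\{(\lambda^2 y'',\lambda y_n,\lambda y_{n+1})| \ |y''|\leq r^2,\ y_n^2+y_{n+1}^2\leq r^2\}=\mathcal{B}_{\lambda r},
\end{align*}
since $|\lambda^2 y''|\leq \lambda^2 r^2=(\lambda r)^2$ and $(\lambda y_n)^2+(\lambda y_{n+1})^2\leq (\lambda r)^2$.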
\begin{rmk}
Due to Remark~\ref{rmk:equi_dist}, there are constants $c,C>0$ such that for any $y_0\in P$
\begin{align*}
\mathcal{\tilde{B}}_{cr}(y_0)\subseteq \mathcal{B}_r(y_0)\subseteq \mathcal{\tilde{B}}_{Cr}(y_0),\quad \text{where }\mathcal{\tilde{B}}_r(y_0)=\{y| d_G(y,y_0)< r\}.
\end{align*}
In the sequel, with a slight abuse of notation, for $y_0\in P$ we will not distinguish between $\mathcal{\tilde{B}}_r(y_0)$ and $\mathcal{B}_r(y_0)$.
\end{rmk}
\subsection{Function spaces}
\label{sec:functions}
In the sequel, we consider the intrinsic H\"older spaces which are associated with the geometry introduced in Section \ref{sec:intrinsic}:
\begin{defi}
\label{defi:Hoelder}
Let $\Omega$ be a subset of $\R^{n+1}$ and let $\alpha\in (0,1]$. Then
\begin{align*}
C^{0,\alpha}_\ast(\overline{\Omega}):=\left\{u:\overline{\Omega}\rightarrow \R| \ \sup_{x,y\in \overline{\Omega}}\frac{|u(x)-u(y)|}{d_G(x,y)^\alpha}<\infty\right\}.
\end{align*}
Let
$$[u]_{\dot{C}^{0,\alpha}_\ast(\overline{\Omega})}:=\sup_{x,y\in \overline{\Omega}}\frac{|u(x)-u(y)|}{d_G(x,y)^\alpha}.$$
For $u\in C^{0,\alpha}_\ast(\overline{\Omega})$ we define
$$\|u\|_{C^{0,\alpha}_\ast(\overline{\Omega})}:=\|u\|_{L^\infty(\Omega)}+[u]_{C^{0,\alpha}_\ast(\overline{\Omega})}.$$
\end{defi}
\begin{rmk}
The mapping $\| \cdot \|_{C_{\ast}^{0,\alpha}}: C_{\ast}^{0,\alpha} \rightarrow [0,\infty)$ is a norm. By Remark \ref{rmk:equi_dist},
\begin{align*}
C^{0,\alpha}_{\ast}(\overline{\Omega}) \hookrightarrow C^{0,\frac{\alpha}{2}}(\overline{\Omega}).
\end{align*}
Hence, the pair $(C^{0,\alpha}_\ast(\overline{\Omega}), \| \cdot \|_{C^{0,\alpha}_{\ast}(\overline{\Omega})})$ is a Banach space.
\end{rmk}
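The embedding can be seen from the quasi-metric of Remark \ref{rmk:equi_dist}: for $x,y\in \overline{\Omega}$ with $|x-y|\leq 1$ one has $|x_n-y_n|+|x_{n+1}-y_{n+1}|\leq 2|x-y|^{1/2}$, while the tangential contribution is bounded by $|x''-y''|^{1/2}\leq |x-y|^{1/2}$. Hence $d_G(x,y)\leq C|x-y|^{1/2}$ and
\begin{align*}
|u(x)-u(y)|\leq [u]_{\dot{C}^{0,\alpha}_\ast(\overline{\Omega})}\, d_G(x,y)^{\alpha}\leq C[u]_{\dot{C}^{0,\alpha}_\ast(\overline{\Omega})}\, |x-y|^{\frac{\alpha}{2}};
\end{align*}
distances $|x-y|\geq 1$ are controlled by the $L^\infty$ part of the norm.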
Based on the spaces from Definition \ref{defi:Hoelder}, we can further define higher order H\"older spaces:
\begin{defi}
\label{defi:Hoelder1}
Let
$$\tilde{Y}_1=y_n\p_1, \quad \tilde{Y}_2=y_{n+1}\p_1, \quad \dots, \quad \tilde{Y}_{2n-1}=\p_n,\quad \tilde{Y}_{2n}=\p_{n+1}.$$
For $k\in \mathbb{N}$, $k\geq 1$, we say that $u\in C^{k,\alpha}_\ast(\overline{\Omega})$, if for all $\sigma_i\in \{1,\ldots, 2n\}$, $1\leq i\leq k$, the functions $u, \tilde{Y}_{\sigma_1}\cdots \tilde{Y}_{\sigma_i}u$ are continuous and $\tilde{Y}_{\sigma_1}\cdots\tilde{Y}_{\sigma_k}u \in C^{0,\alpha}_\ast(\overline{\Omega})$.
We define
\begin{equation}
\label{eq:norm1}
\begin{split}
\|u\|_{C^{k,\alpha}_\ast(\overline{\Omega})}& =\|u\|_{L^{\infty}(\overline{\Omega})} \\
&+ \sum_{j=1}^{k-1}\sum_{\sigma_1,\ldots,\sigma_j\in \{1,\dots, 2n\}}\|\tilde{Y}_{\sigma_1}\cdots \tilde{Y}_{\sigma_j}u\|_{L^\infty(\overline{\Omega})} \\
& +\sum_{\sigma_1,\ldots,\sigma_k\in \{1,\dots,2n\}}\|\tilde{Y}_{\sigma_1}\cdots \tilde{Y}_{\sigma_k}u\|_{C^{0,\alpha}_\ast(\overline{\Omega})}.
\end{split}
\end{equation}
\end{defi}
\begin{rmk}
The space $C^{k,\alpha}_{\ast}(\overline{\Omega})$ equipped with $\| \cdot \|_{C^{k,\alpha}_{\ast}(\overline{\Omega})}$ is a Banach space.
\end{rmk}
Building on the previously introduced H\"older spaces, we proceed to define the function spaces which we use to prove the higher regularity of the Legendre function $v$. These spaces, their building blocks and their role in our argument are reminiscent of the higher regularity approach of De Silva and Savin \cite{DSS14}. In contrast to the approach of De Silva and Savin, we however use them in the \emph{linear} set-up, in the sense that the (regular) free boundary has been fixed by the Legendre-Hodograph transform (at the expense of working with a degenerate (sub)elliptic, fully nonlinear equation). In this situation the approximation approach of De Silva and Savin simply becomes a Taylor expansion of our solution at the straightened free boundary.
Moreover, we do not carry out the expansion up to arbitrary order, but only up to order less than five. Beyond this we work with the implicit function theorem (c.f. Theorem \ref{prop:hoelder_reg_a} in Section \ref{sec:IFT1}), which is better suited to the variable coefficients set-up. In particular, this restriction to an essentially leading order expansion with respect to the non-tangential variables allows us to avoid dealing with \emph{regularity issues in the non-tangential directions}. Working in a conical domain and with metrics and inhomogeneities which are not necessarily symmetric with respect to the non-tangential directions, we thus ignore potential higher-order singularities in the non-tangential variables. This has the advantage of deducing the desired partial regularity result in the tangential directions, which then entails the free boundary regularity, without having to deal with potentially arising non-tangential singularities.\\
Roughly speaking, our spaces interpolate between the regularity of the function at
$P=\{y_n=y_{n+1}=0\}$ (at which the Baouendi-Grushin operator is only degenerate elliptic)
and at $\{\frac{1}{2}<y_n^2+y_{n+1}^2< 2\}$ (in which the Baouendi-Grushin operator is uniformly elliptic). In order to make this rigorous, we need the notion of a \emph{homogeneous polynomial}:
\begin{defi}[Homogeneous polynomials]
\label{defi:poly}
Let $k\in \N$. We define the \emph{space of homogeneous polynomials of degree less than or equal to $k$} as
\begin{align*}
\mathcal{P}_k=&\{p_k(y)| \ p_k(y)=\sum_{|\beta|\leq k}a_\beta y^{\beta},\\
&\text{ such that }a_\beta=0 \text{ whenever }\sum_{i=1}^{n-1}2\beta_i+\beta_n+\beta_{n+1}>k\}.
\end{align*}
Moreover, we define the \emph{space of homogeneous polynomials of degree exactly $k$} as
\begin{align*}
\mathcal{P}_k^{hom}=&\{p_k(y)| \ p_k(y)=\sum_{|\beta|\leq k}a_\beta y^{\beta},\\
&\text{ such that }a_\beta=0 \text{ whenever }\sum_{i=1}^{n-1}2\beta_i+\beta_n+\beta_{n+1}\neq k\}.
\end{align*}
\end{defi}
The definition of the homogeneous polynomials is motivated by the scaling properties of our operator $\Delta_G$. More precisely, we note the following dilation invariance property: if $u$ solves $\D_G u = f$, then the function $v(y):= u(\delta_\lambda(y))$, where $\delta_\lambda(y)=(\lambda^2y'',\lambda y_n,\lambda y_{n+1})$, solves
\begin{align*}
\D_G v = \lambda^2 f_\lambda,
\end{align*}
where $f_{\lambda}(y)= f(\delta_\lambda(y))$.
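This can be checked by the chain rule: with $v(y)=u(\delta_\lambda(y))$ we have $\D''v(y)=\lambda^4(\D''u)(\delta_\lambda(y))$ and $(\p_n^2+\p_{n+1}^2)v(y)=\lambda^2(\p_n^2+\p_{n+1}^2)u(\delta_\lambda(y))$, whence
\begin{align*}
\D_G v(y)=\lambda^2\big[(z_n^2+z_{n+1}^2)(\D''u)(z)+(\p_n^2+\p_{n+1}^2)u(z)\big]\Big|_{z=\delta_\lambda(y)}=\lambda^2(\D_G u)(\delta_\lambda(y)),
\end{align*}
where we used $(y_n^2+y_{n+1}^2)\lambda^4=\lambda^2\big((\lambda y_n)^2+(\lambda y_{n+1})^2\big)$ in the first term.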
This motivates counting the order of the tangential variables $y''$ and the normal variables $y_n$, $y_{n+1}$ differently, and defining the homogeneous polynomials with respect to the Grushin scaling: $p_k(\delta_\lambda(y))=\lambda^k p_k(y)$ for $p_k\in \mathcal{P}_k^{hom}$.
\begin{rmk}
We observe that, for instance, $P\in \mathcal{P}_3$ is of the form
\begin{align*}
P(y)&=c_0+\sum_{i=1}^{n+1}a_iy_i+\sum_{k\in \{1,\dots,n-1\},\ell\in \{n,n+1\}}a_{k\ell}y_ky_\ell\\
&+ \left(c_1y_n^3+c_2y_n^2y_{n+1}+c_3y_ny_{n+1}^2+c_4y_{n+1}^3\right).
\end{align*}
\end{rmk}
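Indeed, each monomial appearing above has Grushin-homogeneous degree $\sum_{i=1}^{n-1}2\beta_i+\beta_n+\beta_{n+1}$ at most three:
\begin{align*}
\deg(1)=0,\quad \deg(y_i)=2 \ (1\leq i\leq n-1),\quad \deg(y_n)=\deg(y_{n+1})=1,\\
\deg(y_ky_\ell)=2+1=3 \ (k\leq n-1,\ \ell\in\{n,n+1\}),\quad \deg(y_n^{3-m}y_{n+1}^m)=3.
\end{align*}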
Using the notion of homogeneous polynomials, we further define an adapted notion of differentiability at the co-dimension two hypersurface $P$:
\begin{defi}\label{defi:diff}
Let $k\in \N$ and $\alpha \in (0,1]$. Given a function $f$, we say that \emph{$f$ is $C^{k,\alpha}_\ast$ at $P$}, if at each $y_0\in P$ there exists an approximating polynomial $P_{y_0}(y)=\sum a_\beta(y_0)(y-y_0)^{\beta}\in \mathcal{P}_k$ such that
\begin{align*}
f(y)=P_{y_0}(y)+O(d_G(y,y_0)^{k+2\alpha}), \quad \text{as } y\rightarrow y_0.
\end{align*}
\end{defi}
\begin{rmk}
We note that for a multi-index $\beta$ satisfying $\sum_{i=1}^{n-1}2\beta_i+\beta_n+\beta_{n+1}\leq k$, the coefficient $a_{\beta}(y_0)$ determines the (classical) $\beta$ derivative of $f$ at $y_0$, i.e. $\p^\beta f(y_0)=\partial^{\beta}P_{y_0}(y_0)=\beta! a_{\beta}(y_0)$.
\end{rmk}
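As an example, consider a function $f$ which is $C^{1,\alpha}_\ast$ at $P$ and satisfies $f=\p_{n+1}f=0$ on $P$. Since $\mathcal{P}_1$ contains no tangential monomials (each $y_i$ with $i\leq n-1$ carries homogeneity two), the approximating polynomial at $y_0\in P$ reduces to
\begin{align*}
P_{y_0}(y)=a_0+a_ny_n+a_{n+1}y_{n+1}=\p_nf(y_0)\, y_n,
\end{align*}
because the conditions on $P$ force $a_0=a_{n+1}=0$. This is precisely the polynomial $P_{\bar y}$ appearing in the norm of the space $Y_{\alpha,\epsilon}$ below.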
With this preparation, we can finally give the definition of our function spaces:
\begin{defi}[Function spaces]
\label{defi:spaces}
Let $\epsilon, \alpha\in (0,1]$. Then,
\begin{align*}
X_{\alpha,\epsilon}:=&\{v\in C^{2,\epsilon}_\ast (Q_+) \cap C_0(Q_+)| \ \supp(\Delta_G v)\subset \mathcal{B}_1^+,\ v\text{ is } C^{3,\alpha}_\ast \text{ at } P, \\
&
v=0 \text{ on } \{y_n=0\},\ \p_{n+1}v=0\text{ on } \{y_{n+1}=0\}, \ \p_{nn}v=0 \text{ on } P,\\
& \text{and }\| v \|_{X_{\alpha, \epsilon}}<\infty\},\\
Y_{\alpha,\epsilon}:=&\{f\in C^{0,\epsilon}_\ast(Q_+)| \ \supp(f) \subset \mathcal{B}_1^+, \ f \text{ is } C^{1,\alpha}_\ast \text{ at }P, \ f=\p_{n+1}f=0 \text{ on }P,\\
& \text{and }\|f\|_{Y_{\alpha,\epsilon}}<\infty\}.
\end{align*}
The corresponding norms are defined as
\begin{align*}
\|f\|_{Y_{\alpha, \epsilon}}& : =\sup_{\bar y\in P}[d_G(\cdot,\bar y)^{-(1+2\alpha-\epsilon)}(f-P_{\bar y})]_{\dot{C}^{0,\epsilon}_\ast(\mathcal{B}_3^+(\bar y))} ,\\
\text{ where } & P_{\bar y}(y)=y_n \p_nf(\bar y);\\
\| v \|_{X_{\alpha,\epsilon}} &:=\sup_{ \bar{y}\in P} \left(\|d_G(\cdot, \bar y)^{-(3+2\alpha)}(v-P_{\bar y})\|_{L^{\infty}(\mathcal{B}_3^+(\bar y))} \right.\\
& \left. +\sum\limits_{i,j=1}^{n+1}[d_G(\cdot, \bar y)^{-(1+2\alpha- \epsilon)}Y_i Y_j (v-P_{\bar y})]_{\dot{C}^{0,\epsilon}_{\ast}(\mathcal{B}_3^+(\bar y))}
+ [v]_{C^{2,\epsilon}_{\ast}(Q_+ \setminus \mathcal{B}_3^+(\bar y))}\right),\\
\text{ where } & P_{\bar y}(y)=\p_nv(\bar y)y_n + \sum_{i=1}^{n-1}\p_{in}v(\bar y)(y_i-\bar y_i) y_n +\frac{1}{6}\p_{nnn}v(\bar y)y_n^3\\
& \qquad +\frac{1}{2}\p_{n,n+1,n+1}v(\bar y)y_ny_{n+1}^2.
\end{align*}
\end{defi}
Let us discuss these function spaces $X_{\alpha,\epsilon}$: They are subspaces of the Baouendi-Grushin H\"older spaces $C^{2,\epsilon}_\ast(Q_+)$, with the additional properties that these functions are $C^{3,\alpha}_\ast$ along the edge $P$ and that they satisfy the symmetry conditions $v=0$ on $\{y_n=0\}$ and $\p_{n+1}v=0$ on $\{y_{n+1}=0\}$. The condition $\p_{nn}v=0$ on $P$ is a necessary compatibility condition which ensures that $\Delta_G$ maps $X_{\alpha,\epsilon}$ to $Y_{\alpha,\epsilon}$. The boundary conditions together with the $C^{3,\alpha}_\ast$ regularity allow us to conclude that any function in $X_{\alpha,\epsilon}$ has the same type of asymptotic expansion at $P$ as the Legendre function $v$. The support condition on $\Delta_G v$ together with the decay condition at infinity ($v\in C_0(Q_+)$ is a continuous function in $Q_+$ vanishing at infinity) ensures that $(X_{\alpha,\epsilon},\|\cdot\|_{X_{\alpha,\epsilon}})$ is a Banach space. \\
These are the spaces in which we apply the Banach implicit function theorem later in Section~\ref{sec:IFT1}. They are constructed in such a way as to
\begin{itemize}
\item[(i)] mimic the asymptotic behavior of our Legendre functions (which are defined in (\ref{eq:legendre})) around the straightened regular free boundary $P$. In particular, the Legendre functions $v$ associated with solutions $w$ of (\ref{eq:varcoeff}) are contained in the spaces $X_{\alpha,\epsilon}$ for a suitable range of $\alpha,\epsilon$ (c.f. Proposition \ref{prop:error_gain2});
\item[(ii)] be compatible with the mapping properties of the fully nonlinear, degenerate, (sub)elliptic operator $F$ from (\ref{eq:nonlineq1}) (c.f. Proposition \ref{prop:nonlin_map});
\item[(iii)] be compatible with the linearization of the operator $F$ (c.f. Proposition \ref{prop:linear}). In particular they allow for ``Schauder type'' estimates for the Baouendi-Grushin Laplacian.
\end{itemize}
\begin{rmk}
\label{rmk:homo}
\begin{itemize}
\item[(i)] We note that by our support assumptions
\begin{align*}
& \|d_G(\cdot,\bar y)^{-(1+2\alpha)}(f-P_{\bar y})\|_{L^{\infty}(\mathcal{B}_3^+(\bar y))} \\
& \quad \leq C [d_G(\cdot,\bar y)^{-(1+2\alpha-\epsilon)}(f-P_{\bar y})]_{\dot{C}^{0,\epsilon}_\ast(\mathcal{B}_3^+(\bar y))} .
\end{align*}
Similarly, by interpolation, we control all intermediate H\"older norms of $v-P_{\bar{y}}$ by $\| v\|_{X_{\alpha,\epsilon}}$.
\item[(ii)] We remark that the norms of $X_{\alpha,\epsilon}$ and $Y_{\alpha,\epsilon}$ only contain homogeneous contributions and do not include the lower order contributions which would involve the norms of the approximating polynomials $p(y):= \sum\limits_{|\beta|\leq k} a_{\beta}y^{\beta}\in \mathcal{P}_k^{hom}$:
$$|p|_{k}:= \sum\limits_{\beta}|a_{\beta}| .$$ Yet, this results in Banach spaces, as additional support conditions are imposed on $f$ and $\D_G v$. The Banach space property is shown in Lemma~\ref{lem:Banach} in the Appendix.
\end{itemize}
\end{rmk}
For locally defined functions we use the following spaces:
\begin{defi}[Local function spaces]
\label{defi:spaces_loc}
Let $\alpha,\epsilon\in (0,1]$ and $R>0$. We define
\begin{align*}
X_{\alpha,\epsilon}(\mathcal{B}_R^+):=&\{v\in C^{2,\epsilon}_\ast(\mathcal{B}_R^+)| v\text{ is } C^{3,\alpha}_\ast \text{ at } P\cap \mathcal{B}_R, \\
&v=0 \text{ on } \{y_n=0\}\cap \mathcal{B}_R,\ \p_{n+1}v=0\text{ on }\{y_{n+1}=0\}\cap \mathcal{B}_R, \\
&\p_{nn}v=0\text{ on } P\cap \mathcal{B}_R \text{ and } \|v\|_{X_{\alpha,\epsilon}(\mathcal{B}_R^+)}<\infty\},
\end{align*}
where
\begin{align*}
\|v\|_{X_{\alpha,\epsilon}(\mathcal{B}_R^+)}:=\sup_{\bar y\in P\cap \mathcal{B}_R}\left(\sum_{i,j=1}^{n+1}[d_G(\cdot, \bar y)^{-(1+2\alpha-\epsilon)}Y_iY_j(v-P_{\bar y})]_{\dot{C}^{0,\epsilon}_\ast(\mathcal{B}_3^+(\bar y)\cap \mathcal{B}_R^+)}\right.\\
\left.+\|d_G(\cdot, \bar y)^{-(3+2\alpha)}(v-P_{\bar y})\|_{L^{\infty}(\mathcal{B}_3^+(\bar y)\cap \mathcal{B}_R^+)}
+ |P_{\bar y}|_{3}\right),
\end{align*}
with $P_{\bar y}$ being as in Definition~\ref{defi:spaces}.\\
Similarly,
\begin{align*}
Y_{\alpha,\epsilon}(\mathcal{B}_R^+):=&\{f\in C^{0,\epsilon}_\ast(\mathcal{B}_R^+)| f \text{ is } C^{1,\alpha}_\ast \text{ at } P\cap \mathcal{B}_R,\\
&f=\p_{n+1}f=0\text{ on }P\cap \mathcal{B}_R \text{ and } \|f\|_{Y_{\alpha,\epsilon}(\mathcal{B}_R^+)}<\infty\},
\end{align*}
where
\begin{align*}
\|f\|_{Y_{\alpha,\epsilon}(\mathcal{B}_R^+)}:=\sup_{\bar y\in P\cap \mathcal{B}_R}\left(\| d_G(\cdot, \bar y)^{-(1+2\alpha)}(f-P_{\bar y})\|_{L^{\infty}(\mathcal{B}_3^+(\bar y)\cap \mathcal{B}_R^+)} \right.\\
\left. + [d_G(\cdot, \bar y)^{-(1+2\alpha)}(f-P_{\bar y})]_{\dot{C}^{0,\epsilon}_\ast(\mathcal{B}_3^+(\bar y)\cap \mathcal{B}_R^+)} +|P_{\bar y}|_1 \right),
\end{align*}
with $P_{\bar y}$ being as in Definition~\ref{defi:spaces}.
\end{defi}
For the functions in $X_{\alphapha,\eta_{\delta,r}psilon}$ and $Y_{\alphapha,\eta_{\delta,r}psilon}$, the following characterization will be useful. We postpone the proof to the Appendix, Section \ref{sec:decomp}.
\begin{prop}[Characterization of $X_{\alpha,\epsilon}$ and $Y_{\alpha,\epsilon}$]
\label{prop:decompI}
Let $v\in X_{\alpha,\epsilon}$, $f\in Y_{\alpha,\epsilon}$ and assume that $2\alpha>\epsilon$. Let $r=r(y):=\sqrt{y_n^2+y_{n+1}^2}$ denote the distance from $y$ to $P$, and identify $y''$ with the point $(y'',0,0)\in P$.
\begin{itemize}
\item[(i)] Then $\p_n f(y'')\in C^{0,\alpha}(P)$. Moreover, there exists $f_1(y)\in C^{0,\epsilon}_{\ast}(Q_+)$ vanishing on $P$, such that for $y\in \mathcal{B}_3^+$
\begin{align*}
f(y)=\p_n f(y'')y_n+ r^{1+2\alpha-\epsilon}f_1(y).
\end{align*}
\item[(ii)] Then $\p_nv(y'')\in C^{1,\alpha}(P)$, $\p_{nnn}v(y''), \p_{n,n+1,n+1}v(y'') \in C^{0,\alpha}(P)$. Moreover, there exist functions $C_1,V_i, C_{ij}\in C^{0,\epsilon}_{\ast}(Q_+)$, $i,j\in\{1,\dots,n+1\}$, vanishing on $P$, such that for $y\in \mathcal{B}_3^+$
\begin{align*}
v(y) &= \p_n v(y'') y_n + \frac{\p_{nnn}v(y'')}{6}y_n^3 +\frac{\p_{n,n+1,n+1}v(y'')}{2}y_ny_{n+1}^2+ r^{3+2\alpha-\epsilon}C_1(y), \\
\p_{i}v(y)&= \p_{in}v(y'') y_n + r^{1+2\alpha-\epsilon}V_i(y),\quad i\in \{1,\dots, n-1\},\\
\p_{n}v(y)& = \p_{n}v(y'') + \frac{\p_{nnn}v(y'')}{2}y_n^2 + \frac{\p_{n,n+1,n+1}v(y'')}{2}y_{n+1}^2 + r^{2+2\alpha-\epsilon}V_n(y),\\
\p_{n+1}v(y) & = \p_{n,n+1,n+1}v(y'') y_n y_{n+1} + r^{2+2\alpha-\epsilon}V_{n+1}(y),\\
\p_{ij}v(y)&=r^{-1+2\alpha-\epsilon}C_{ij}(y),\\
\p_{in}v(y)&= \p_{in}v(y'') + r^{2\alpha-\epsilon}C_{in}(y),\\
\p_{i,n+1}v(y)&=r^{2\alpha-\epsilon}C_{i,n+1}(y),\\
\p_{n,n}v(y)&= \p_{nnn}v(y'') y_n + r^{1+2\alpha-\epsilon}C_{n,n}(y),\\
\p_{n,n+1}v(y)&=\p_{n,n+1,n+1}v(y'')y_{n+1}+r^{1+2\alpha-\epsilon}C_{n,n+1}(y),\\
\p_{n+1,n+1}v(y)&=\p_{n,n+1,n+1}v(y'')y_{n} + r^{1+2\alpha-\epsilon}C_{n+1,n+1}(y).
\end{align*}
\end{itemize}
Moreover, $C_1(y)=0=V_{n+1}(y)$ on $\{y_n=0\}$.
For the decompositions in (i) and (ii) we have
\begin{align*}
&[\p_{n}f]_{\dot{C}^{0,\alpha}(P\cap \mathcal{B}_3^+)}+ [f_1]_{\dot{C}^{0,\epsilon}_\ast(\mathcal{B}_3^+)}\leq C \|f\|_{Y_{\alpha,\epsilon}},\\
&[\p_{in}v]_{\dot{C}^{0,\alpha}(P\cap \mathcal{B}_3^+)} +[\p_{nnn}v]_{\dot{C}^{0,\alpha}(P\cap \mathcal{B}_3^+)}+[\p_{n,n+1,n+1}v]_{\dot{C}^{0,\alpha}(P\cap \mathcal{B}_3^+)}\\
& \quad + \sum\limits_{i,j=1}^{n+1}[C_{ij}]_{\dot{C}^{0,\epsilon}_\ast(\mathcal{B}_3^+)}
+ \sum\limits_{j=1}^{n+1}[V_{j}]_{\dot{C}^{0,\epsilon}_\ast(\mathcal{B}_3^+)}
+ [C_1]_{\dot{C}^{0,\epsilon}_\ast(\mathcal{B}_3^+)} \leq C \|v\|_{X_{\alpha,\epsilon}}.
\end{align*}
\end{prop}
\begin{rmk}
\label{rmk:characterize}
It is immediate that any $f\in C^{0,\epsilon}_\ast(Q_+)$ with $\supp(f)\subset \mathcal{B}_3^+$ which satisfies the decomposition in (i) is in $Y_{\alpha,\epsilon}$. Moreover,
$$\|f\|_{Y_{\alpha,\epsilon}}\leq C\left([\p_{n}f]_{\dot{C}^{0,\alpha}(P)}+ [f_1]_{\dot{C}^{0,\epsilon}_\ast(\mathcal{B}_3^+)}\right).$$
Similarly, it is not hard to show that functions $v$ satisfying the decomposition in (ii) with $\supp(\Delta_Gv)\subset \mathcal{B}_3^+$ are in $X_{\alpha,\epsilon}$. In particular, this implies that Proposition \ref{prop:decompI} gives an equivalent characterization of the spaces $X_{\alpha,\epsilon}$ and $Y_{\alpha,\epsilon}$.\\
Motivated by the decomposition of Proposition \ref{prop:decompI}, we sometimes also write
\begin{align*}
Y_{\alpha,\epsilon} = y_n C^{0,\alpha} + r^{1+2\alpha- \epsilon} C_{\ast}^{0,\epsilon}.
\end{align*}
\end{rmk}
At the end of this section, we state the following a priori estimate (which should be viewed as a Schauder type estimate for the Baouendi-Grushin operator):
\begin{prop}
\label{prop:invert}
Let $\alpha \in (0,1)$, $\epsilon \in (0,1)$, $v\in X_{\alpha,\epsilon}$, $f\in Y_{\alpha,\epsilon}$ and
\begin{align*}
\D_G v = f.
\end{align*}
Then we have
\begin{align*}
\| v\|_{X_{\alpha,\epsilon}} \leq C \|f\|_{Y_{\alpha,\epsilon}}.
\end{align*}
\end{prop}
\begin{rmk}
We note that the a priori estimates exemplify a scaling behavior which depends on the support of the respective function. More precisely, let $f=\D_G v$ be supported in $\mathcal{B}_{\mu}^+$ for some $\mu>0$. Then,
\begin{align*}
\|v\|_{X_{\alpha,\epsilon}} \leq C(1+\mu^{1+2\alpha-\epsilon}) \|\D_G v\|_{Y_{\alpha,\epsilon}}.
\end{align*}
\end{rmk}
The proof of Proposition \ref{prop:invert} follows by exploiting the scaling properties of our operator and polynomial approximations. This method is in analogy to the Campanato approach (c.f. \cite{Ca64}) to prove Schauder estimates for the Laplacian (c.f. also \cite{Gia}, \cite{Wang92} and \cite{Wa03} for generalizations to elliptic systems, fully nonlinear elliptic and parabolic equations and certain subelliptic equations). Our generalization of these spaces is adapted to our thin free boundary problem. We postpone the proof of Proposition \ref{prop:invert} to the Appendix, Section \ref{sec:quarter_Hoelder}.
\section[Regularity of the Legendre Function]{Regularity of the Legendre Function for $C^{k,\gamma}$ Metrics}
\label{sec:improve_reg}
In this section we return to the investigation of the Legendre function. We recall that in Section~\ref{sec:Legendre} we transformed the free boundary problem (\ref{eq:varcoeff}) into a fully nonlinear Baouendi-Grushin type equation for the Legendre function $v$. In this section, we study the regularity of the Legendre function $v$ in terms of the function spaces $X_{\alpha,\epsilon}$ from Section~\ref{sec:holder}.\\
In the whole section we assume that the metrics $a^{ij}$ are $C^{1,\gamma}$ H\"older regular for some $\gamma\in (0,1)$. We start by showing that the Legendre function $v$ (associated with a solution $w$ to \eqref{eq:thin_obst}) is in the space $X_{\alpha,\gamma}$ introduced in Section~\ref{sec:holder}. This is a consequence of transferring the asymptotics of $w$ (which were derived in Propositions \ref{prop:asym2}, \ref{prop:improved_reg} in Section~\ref{sec:asymp}) to $v$. Here $\alpha$ is the (a priori potentially very small) H\"older exponent of the free boundary from Section~\ref{sec:asymp}.\\
In Section~\ref{subsec:improvement} (c.f. Propositions \ref{prop:error_gain}, \ref{prop:error_gain2}), we exploit the structure of our nonlinear equation to improve the regularity of $v$ from $X_{\alpha,\epsilon}$ to $X_{\delta,\epsilon}$ for any $\delta\in (0,1)$ and $\epsilon \in (0,\gamma]$. In particular this implies that the free boundary $\Gamma_w$ is $C^{1,\delta}$ regular for any $\delta\in (0,1)$.
\subsection{Asymptotics of the Legendre function}
\label{sec:Leg}
Throughout this section we assume that $w$ solves the thin obstacle problem \eqref{eq:varcoeff} with coefficients $a^{ij}\in C^{k,\gamma}$ for $k\geq 1$ and $\gamma\in (0,1]$. Moreover, we always assume that the conditions (A1)-(A7) are satisfied. We further recall from Section~\ref{sec:Legendre} that the Legendre function $v$ is originally defined on $U=T(B_{1/2}^+)$. However, after a rescaling procedure we may assume that $v$ is defined in $\mathcal{B}_2^+$. \\
We first rewrite the asymptotics of $w$ (stated in Corollary~\ref{cor:improved_reg}) in terms of the corresponding Legendre function $v$.
\begin{prop}\label{prop:holder_v}
Let $a^{ij}\in C^{k,\gamma}$ with $k\geq 1$ and $\gamma\in (0,1)$. Let $w$ be a solution to the variable coefficient thin obstacle problem and let $v$ be its Legendre function as defined in \eqref{eq:legendre}. Assume that $w$ satisfies the asymptotic expansion in Proposition~\ref{prop:improved_reg} with some $\alpha\in (0,1)$.
Then for any $\hat y\in \mathcal{B}_1^+$ with $\sqrt{\hat y_n^2+\hat y_{n+1}^2}=:\lambda \in (0,1)$, and any multi-index $\beta$ with $|\beta|\leq k+1$
\begin{align*}
\left[D^{\beta}v-D^{\beta} v_{\hat y''}\right]_{\dot{C}^{0,\gamma}_\ast(\mathcal{B}_{\lambda/4}^+(\hat y))} \leq C(|\beta|,n,p) \max\{\epsilon_0, c_*\} \lambda^{3+2\alpha-\gamma-2|\beta''|-|\beta_n|-|\beta_{n+1}|},
\end{align*}
where $v_{\hat y''}$ is the leading order expansion of $v$ at $\hat y''$ as defined in Lemma~\ref{lem:asymp_profile}.
\end{prop}
\begin{proof}
We argue in three steps in which we successively simplify the problem:\\
\eta_{\delta,r}mph{Step 1: First reduction -- Scaling.} Given $\hat y\in \mathcal{B}_1^+$ with $\sqrt{\hat y_n^2+\hat y_{n+1}^2}=\lambda>0$, we project $\hat y$ onto $P=\{y_n=y_{n+1}=0\}$ and let $\hat y''=(\hat y'',0,0)$ denote the projection point.
Let
$$\hat v_{\hat y'',\lambda}(\zeta):=\frac{v(\hat y''+\delta_\lambda( \zeta))-\p_nv(\hat y'')(\lambda \zeta_n)}{\lambda^3}, $$
with $\delta_\lambda (\zeta)=(\lambda^2\zeta'',\lambda \zeta_n,\lambda \zeta_{n+1})$.
We note that
$\hat v_{\hat y'',\lambda}$ is the Legendre function for $w_{x_0,\lambda^2}(\xi):=w(x_0+\lambda^2\xi)/\lambda^3$ with $x_0=T^{-1}(\hat y'')$. We set
$$\hat \zeta=\left(0,\frac{\hat y_n}{\lambda}, \frac{\hat y_{n+1}}{\lambda}\right)\in B''_1\times \mathcal{S}^1.$$
With this rescaling, it suffices to show that
\begin{align}\label{eq:deri_v}
\left[ D^{\beta} \hat v_{\hat y'',\lambda}- D^{\beta} \hat v_{\hat y''} \right]_{\dot{C}^{0,\gamma}_\ast(\mathcal{B}_{1/4}^+(\hat \zeta))}\leq C(|\beta|,n,p) \max\{\epsilon_0, c_*\} \lambda^{2\alpha},
\end{align}
where $\hat v_{\hat{y}''}$ denotes the Legendre function for $w_{x_0}(\xi)=\lim_{\lambda\rightarrow 0_+} w_{x_0,\lambda^2}(\xi)$.
Indeed, the conclusion of Proposition \ref{prop:holder_v} then follows by undoing the rescaling in \eqref{eq:deri_v}.\\
\emph{Step 2: Second reduction.} Given any multi-index $\beta$ with $|\beta |\leq k+1$, it is possible to express $D^{\beta}\hat v_{\hat y'',\lambda}$ as a function of $w_{x_{0},\lambda^2}$ and its derivatives:
$$D^\beta \hat v_{\hat y'',\lambda}(y)= F_\beta(D^{\tilde{\alpha}} w_{x_{0},\lambda^2}(x))\big|_{x=(T^{w_{x_0,\lambda^2}})^{-1}(y)},\quad |\tilde{\alpha}|\leq |\beta|.$$
Here $F_\beta$ is an analytic function on the open set $\{J(w_{x_{0},\lambda^2})\neq 0\}$. Let $\hat \xi=(T^{w_{x_{0},\lambda^2}})^{-1}(\hat \zeta)$. We note that a sufficiently small choice of $\lambda_0\in(0,1)$ implies that our change of coordinates is close to the square root mapping (c.f. Proposition \ref{prop:asym2}). Combining this with the observation that on scales of order one (i.e. when $\zeta_n^2+\zeta_{n+1}^2\sim 1$) the Baouendi-Grushin metric is equivalent to the Euclidean metric results in the inclusions $(T^{w_{x_{0},\lambda^2}})^{-1}(\mathcal{B}_{1/4}(\hat \zeta)), (T^{w_{x_{0}}})^{-1}(\mathcal{B}_{1/4}(\hat \zeta))\subset B_{1/2}(\hat \xi)$ and $B_{3/4}(\hat \xi)\cap \Gamma_{w_{x_{0},\lambda^2}}=\emptyset$, $B_{3/4}(\hat \xi)\cap \Gamma_{w_{x_{0}}}=\emptyset$. Thus, to show \eqref{eq:deri_v}, it then suffices to prove
\begin{align}\label{eq:multi_deri}
\left[F_\beta(D^{\tilde{\alpha}} w_{x_{0},\lambda^2} )- F_\beta(D^{\tilde{\alpha}} \mathcal{W}_{x_0}(x_0 +\cdot))\right]_{\dot{C}^{0,\gamma}(B_{1/2}^+(\hat \xi))} \leq C(|\beta|,n,p) \max\{\epsilon_0, c_*\} \lambda^{2\alpha},
\end{align}
and
\begin{align}\label{eq:inverse_T}
\|(T^{w_{x_0,\lambda^2}})^{-1}-(T^{w_{x_0}})^{-1}\|_{C^{1}(\mathcal{B}^+_{3/4})}\leq C \max\{\epsilon_0, c_*\} \lambda^{2\alpha}
\end{align}
for $\lambda \in (0,\lambda_0)$.
Indeed, once we have obtained \eqref{eq:multi_deri}-\eqref{eq:inverse_T}, \eqref{eq:deri_v} follows by the equivalence of the Euclidean and Baouendi-Grushin geometries at scales of order one and a triangle inequality.\\
\emph{Step 3: Proof of (\ref{eq:multi_deri}) and conclusion.}
To show \eqref{eq:multi_deri}, we use a Taylor expansion and write
$$F_\beta(D^{\tilde{\alpha}} w_{x_{0},\lambda^2})-F_\beta(D^{\tilde{\alpha}} \mathcal{W}_{x_0}(x_0 + \cdot))=R_{\tilde{\alpha}} (w_{x_{0},\lambda^2})(D^{\tilde{\alpha}} w_{x_{0},\lambda^2}-D^{\tilde{\alpha}} \mathcal{W}_{x_0}(x_0 + \cdot)),$$
where
$$R_{\tilde{\alpha}}(w_{x_{0},\lambda^2})(x)=\int_0^1\p_{m_{\tilde{\alpha}}}F_\beta(tD^{\tilde{\alpha}}w_{x_{0},\lambda^2}(x)+(1-t)D^{\tilde{\alpha}}\mathcal{W}_{x_0}(x_0 + x))dt.$$
Since $-C\leq J(tw_{x_{0},\lambda^2}+(1-t)\mathcal{W}_{x_0})\leq -c$ in $B_{1/2}^+(\hat \xi)$ and since $w_{x_{0},\lambda^2}, \mathcal{W}_{x_0}(x_0 + \cdot) \in C^{k+1,\gamma}(B_{1/2}^+(\hat \xi))$, we have that $R_{\tilde{\alpha}}(w_{x_{0},\lambda^2})\in C^{0, \gamma}(B_{1/2}^+(\hat \xi))$ with uniform bounds in $\lambda$.
Next we recall that by Corollary \ref{cor:improved_reg}
$$\left[D^{\tilde{\alpha}}w_{x_{0},\lambda^2}-D^{\tilde{\alpha}}\mathcal{W}_{x_0}(x_0 + \cdot) \right]_{\dot{C}^{0,\gamma}(B_{1/4}^+(\hat{\xi}))} \leq C(\beta)\max\{ \epsilon_0, c_{\ast}\} \lambda^{2\alpha}.$$ Combining this with the fact that $R_{\tilde{\alpha}}(w_{x_{0},\lambda^2})\in C^{0, \gamma}(B_{1/4}^+(\hat \xi))$ yields \eqref{eq:multi_deri}.
To show \eqref{eq:inverse_T}, we first observe that $\|T^{w_{x_0,\lambda^2}}-T^{w_{x_0}}\|_{C^1(B_{3/4}(\hat \xi))}\leq C \max\{\epsilon_0, c_*\}\lambda^{2\alpha}$ by Corollary~\ref{cor:improved_reg} and the definition of $T$. Then using the uniform boundedness of $\|D (T^{w_{x_0,\lambda^2}})^{-1}\|_{L^\infty(\mathcal{B}^+_{1/2}(\hat \xi))}$ and $\|D (T^{w_{x_0}})^{-1}\|_{L^\infty(\mathcal{B}^+_{1/2}(\hat \xi))}$, we obtain the desired estimate.
\end{proof}
Using the spaces from Definition \ref{defi:spaces}, we apply Proposition \ref{prop:holder_v} to quantify the regularity and asymptotics of our Legendre function:
\begin{prop}
\label{prop:regasymp}
Under the assumptions of Proposition~\ref{prop:holder_v} we have $v\in X_{\alpha, \mu}(\mathcal{B}_{1}^+)$
for all $\mu\in(0,\gamma]$. In particular, for $y_0\in P\cap \mathcal{B}_{1}$ there exist functions $C_{k\ell}\in C^{0,\gamma}_{\ast}(\mathcal{B}_{1}^+(y_0))$ with $C_{k\ell}(y'',0,0)=0$ for all $k,\ell \in \{1,\dots,n+1\} $ such that the following asymptotics are valid:
\begin{align*}
\p_{ij}v(y)& = (y_n^2+y_{n+1}^2)^{-1}d_G(y,y_0)^{1+2\alpha-\gamma}C_{ij}(y),\\
\p_{in}v(y)&=\frac{(e_i\cdot \nu_{T^{-1}(y_0)})}{(e_n\cdot \nu_{T^{-1}(y_0)})}+d_G(y,y_0)^{2\alpha-\gamma}C_{in}(y),\\
\p_{i,n+1}v(y)&= d_G(y,y_0)^{2\alpha-\gamma}C_{i,n+1}(y),\\
\begin{pmatrix}
\partial_{nn} v(y) & \partial_{n,n+1} v(y)\\
\partial_{n+1,n} v(y) & \partial_{n+1,n+1}v(y)
\end{pmatrix}
&=\begin{pmatrix} a_0(y_0) y_n & a_1(y_0)y_{n+1}\\
a_1(y_0)y_{n+1}& a_1(y_0)y_n
\end{pmatrix} \\
& \quad + d_G(y,y_0)^{1+2\alpha- \gamma}\begin{pmatrix} C_{nn}(y)& C_{n,n+1}(y)\\
C_{n,n+1}(y)& C_{n+1,n+1}(y)
\end{pmatrix} .
\end{align*}
Here $i,j\in\{1,\dots,n-1\}$.
\end{prop}
\begin{rmk}
We emphasize that in the expression for $\p_{ij}v$ it is not possible to replace $(y_n^2 + y_{n+1}^2)^{-1}$ by $d_G(y,y_0)^{-2}$. This is in analogy with Remark \ref{rmk:improved_reg}.
\end{rmk}
\begin{proof}
For each $y_0\in P$ the proof of the asymptotics in the non-tangential cone $\mathcal{N}_G(y_0)$ follows directly from Proposition~\ref{prop:holder_v}, Lemma \ref{lem:asymp_profile} (which yields the explicit expressions of the leading order asymptotic expansions) and a chain of balls argument. In order to obtain the asymptotic expansion in the whole of $\mathcal{B}_{1}^+(y_0)$, we use the regularity of the coefficient functions and the triangle inequality. We only present the argument for $\p_{in}v$, since the reasoning for the other partial derivatives is analogous. Hence, let $y\in \mathcal{B}_{1}^+(y_0)$ but $y\notin \mathcal{N}_G(y_0)$. Let $\bar{y}\in P\cap \mathcal{B}_1$ denote the projection of $y$ onto $P$. Then, by the triangle inequality, we may assume that $d_G(y,\bar{y}), d_G(\bar{y},y_0)\leq C d_G(y,y_0)$. Thus, by virtue of the regularity of $\nu_{T^{-1}(y_0)}$, we have that
\begin{align*}
\left|\p_{in}v(y)- \frac{(e_i\cdot \nu_{T^{-1}(y_0)})}{(e_n\cdot \nu_{T^{-1}(y_0)})} \right| &\leq \left| \p_{in}v(y) -\frac{(e_i\cdot \nu_{T^{-1}(\bar{y})})}{(e_n\cdot \nu_{T^{-1}(\bar{y})})}\right| \\
& \quad + \left| \frac{(e_i\cdot \nu_{T^{-1}(\bar{y})})}{(e_n\cdot \nu_{T^{-1}(\bar{y})})} -\frac{(e_i\cdot \nu_{T^{-1}({y_0})})}{(e_n\cdot \nu_{T^{-1}({y_0})})} \right|\\
& \leq C d_G(y,\bar{y})^{2\alpha} +C|\bar{y}- y_0|^{\alpha} \leq C d_G(y,y_0)^{2\alpha}.
\end{align*}
The Hölder estimates are analogous.
\end{proof}
\begin{rmk}
\label{rmk:close}
For later reference we conclude this section by noting that the closeness condition (A5) (and the asymptotics from Proposition \ref{prop:holder_v}) implies that
$$\left\| v- v_{0}\right\|_{X_{\alpha,\epsilon}(\mathcal{B}_{1}^+)} \leq C\max\{\epsilon_0,c_\ast\},$$
where $v_0(y)= -\frac{1}{3}(y_n^3-3y_ny_{n+1}^2)$ is the leading order expansion of $v$ at the origin.
This will be used in the perturbation argument in Section \ref{sec:grushin} and in the application of the implicit function theorem in Section \ref{sec:IFT1}.
\end{rmk}
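As an elementary consistency check (a computation we add here; it follows directly from the formula for $v_0$), we note that
\begin{align*}
\p_{nn}v_0=-2y_n,\quad \p_{n,n+1}v_0=2y_{n+1},\quad \p_{n+1,n+1}v_0=2y_n,
\end{align*}
whence
\begin{align*}
J(v_0)=\p_{nn}v_0\,\p_{n+1,n+1}v_0-\left(\p_{n,n+1}v_0\right)^2=-4(y_n^2+y_{n+1}^2).
\end{align*}
In particular, $J(v_0)$ is strictly negative away from $P$ and vanishes quadratically on $P$, which illustrates the degeneracy that the weighted norms from Definition \ref{defi:spaces} are designed to capture.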
\subsection{Improvement of regularity}
\label{subsec:improvement}
In this section we present a bootstrap argument to infer higher regularity of the Legendre function. By virtue of the previous section, we have that $v\in X_{\alpha,\epsilon}(\mathcal{B}_1^+)$ for some potentially very small value of $\alpha\in(0,\gamma]$. In this section we improve this regularity exponent by showing that $\alpha$ can be chosen arbitrarily close to one. To this end we argue in two steps: By an expansion, we first identify the structure of $F$ in terms of a leading order linear operator and additional higher order controlled contributions (c.f. Proposition \ref{prop:error_gain}). Then, in a second step, we use this to bootstrap regularity (c.f. Proposition \ref{prop:error_gain2}). \\
In the sequel we use the following abbreviations:
\begin{align*}
G^{ij}(v)&:=-\det\begin{pmatrix}
\p_{ij}v& \p_{in}v & \p_{i,n+1}v\\
\p_{jn}v& \p_{nn}v & \p_{n,n+1}v\\
\p_{j,n+1}v & \p_{n,n+1}v &\p_{n+1,n+1}v
\end{pmatrix}, \ i,j\in\{1,\dots,n-1\},\\
G^{i,n}(v)&:=2\det\begin{pmatrix}
\p_{in}v & \p_{i,n+1}v\\
\p_{n,n+1}v & \p_{n+1,n+1}v
\end{pmatrix}, \ i\in\{1,\dots,n-1\},\\
G^{i,n+1}(v)&:=2\det\begin{pmatrix}
\p_{i,n+1}v & \p_{in}v\\
\p_{n,n+1}v & \p_{nn}v
\end{pmatrix}, \ i\in\{1,\dots,n-1\},\\
G^{n,n}(v)&:=\p_{n+1,n+1}v,\\
G^{n+1,n+1}(v)&:=\p_{nn}v,\\
G^{n,n+1}(v)&:=-\p_{n,n+1} v,\\
J(v)&:=\det\begin{pmatrix}
\p_{nn}v &\p_{n,n+1}v\\
\p_{n,n+1}v &\p_{n+1,n+1}v
\end{pmatrix}.
\end{align*}
With slight abuse of notation, we thus interpret $G^{ij}$ and $J$ as functions from the symmetric matrices $\R^{(n+1)\times(n+1)}_{sym}$ to $\R$ and recall the notation for partial derivatives of $G^{ij}$ with respect to the components $m_{k\ell}$ from Section \ref{sec:notation}:
\begin{align*}
\p_{m_{k\ell}}G^{ij}(M)=\frac{\p G^{ij}(M)}{\p m_{k\ell}}, \ M=(m_{k\ell})\in \R^{(n+1)\times (n+1)}_{sym}.
\end{align*}
With these conventions, the nonlinear equation \eqref{eq:nonlineq1} from Section~\ref{sec:Legendre}, which is satisfied by $v$, turns into
\begin{align}\label{eq:nonlin_2}
F(v,y):=\sum_{i,j=1}^{n+1}\tilde{a}^{ij}(y)G^{ij}(v)-J(v)\left(\sum_{j=1}^{n-1}\tilde{b}^j\p_j v+\tilde{b}^ny_n+\tilde{b}^{n+1}y_{n+1}\right)=0.
\end{align}
Relying on this structure, we derive a (self-improving) linearization. More precisely,
for each $y_0\in P\cap \mathcal{B}_{1}^+$, we will linearize the equation at $v_{y_0}$, where
\begin{equation}\label{eq:v0}
\begin{split}
v_{y_0}(y)&=\p_nv(y_0)y_n+\sum_{i=1}^{n-1}\p_{in}v(y_0)(y_i-(y_0)_i)y_n\\
&+\frac{\p_{nnn}v(y_0)}{6}y_n^3+\frac{\p_{n,n+1,n+1}v(y_0)}{2}y_ny_{n+1}^2,
\end{split}
\end{equation}
is the asymptotic expansion of $v$ at $y_0$ up to third order.
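As a consistency check (which we include for the reader's orientation), at $y_0=0$ the expansion \eqref{eq:v0} reproduces the model solution: for $v=v_0=-\frac{1}{3}(y_n^3-3y_ny_{n+1}^2)$ one computes $\p_nv_0(0)=0$, $\p_{in}v_0(0)=0$, $\p_{nnn}v_0(0)=-2$ and $\p_{n,n+1,n+1}v_0(0)=2$, so that the right hand side of \eqref{eq:v0} becomes
\begin{align*}
\frac{-2}{6}y_n^3+\frac{2}{2}y_ny_{n+1}^2=-\frac{1}{3}(y_n^3-3y_ny_{n+1}^2)=v_0(y).
\end{align*}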
The linearization then leads to the following self-improving structure:
\begin{prop}\label{prop:error_gain}
Let $a^{ij}(x)\in C^{1,\gamma}$ for some $\gamma\in (0,1]$.
Assume that $v\in X_{\alpha,\epsilon}(\mathcal{B}_{1}^+)$ with $\alpha\in (0,1]$ solves $F(v,y)=0$. Then at each point $y_0\in P\cap \mathcal{B}_{1/2}^+$, we have the following expansion in $\mathcal{B}_r^+(y_0)$ with $0<r<1/2$:
\begin{align*}
F(v,y)=L_{y_0}v + P_{y_0}(y)+E_{y_0}(y).
\end{align*}
Here
$$L_{y_0}v=\tilde{a}^{ij}(y_0)\p_{m_{k\ell}}G^{ij}(v_{y_0})\p_{k\ell}v=D_vF\big|_{(v,y)=(v_{y_0},y_0)} v,$$
$$P_{y_0}(y)=\tilde{a}^{ij}(y_0)\left(G^{ij}(v_{y_0})-\p_{m_{k\ell}}G^{ij}(v_{y_0})\p_{k\ell}v_{y_0}\right)\in \mathcal{P}_{1}^{hom},$$
$P_{y_0}(y)$ is of the form $c_0(y_0)y_n$ and $E_{y_0}(y)$ is an error term satisfying
\begin{equation*}
\begin{split}
&\left\|d_G(\cdot,y_0)^{-\eta_0}E_{y_0}\right\|_{L^\infty(\mathcal{B}_{1/2}^+(y_0))}+\left[d_G(\cdot,y_0)^{-(\eta_0-\epsilon)}E_{y_0}\right]_{\dot{C}^{0,\epsilon}_\ast(\mathcal{B}_{1/2}^+(y_0))}\leq C,
\end{split}
\end{equation*}
for $\eta_0=\min\{1+4\alpha,3\}$.
\end{prop}
\begin{rmk}[The role of $\alpha$, $2\alpha$ and $4\alpha$]
As already seen in Proposition~\ref{prop:decompI}, the parameter $\alpha$ in the space $Y_{\alpha,\epsilon}$ refers to the (tangential) H\"older regularity (w.r.t. the Euclidean metric) of the quotient $\frac{f}{y_n}\big|_P$ for any function $f\in Y_{\alpha,\epsilon}$. The parameter $2\alpha$ originates from the different scalings of the Euclidean and Baouendi-Grushin metrics (c.f. Remark~\ref{rmk:equi_dist}). More precisely, for any $y''_1,y''_2\in P$, $|y''_1-y''_2|\sim d_G(y''_1,y''_2)^{2}$ by Remark~\ref{rmk:equi_dist}, which accounts for the $2\alpha$ in the definition of the norm $\|f\|_{Y_{\alpha,\epsilon}}=\sup_{\bar y\in P}[d_G(\cdot, \bar y)^{-(1+2\alpha-\epsilon)}(f-P_{\bar y})]_{\dot{C}^{0,\epsilon}_\ast}$. The parameter $4\alpha$ in Proposition~\ref{prop:error_gain} indicates an \emph{improvement} of the tangential regularity of $L_{y_0}v/y_n$ at $y_0$ from $C^{0,\alpha}$ to $C^{0,2\alpha}$.
\end{rmk}
\begin{rmk}\label{rmk:error_gain3}
By using the explicit expression of $v_{y_0}$ and $G^{ij}(v)$, it is possible to compute the form of the leading order operator:
\begin{align*}
L_{y_0}&:=\sum_{i,j=1}^{n+1}\tilde{a}^{ij}(y_0)\p_{m_{k\ell}}G^{ij}(v_{y_0})\p_{k\ell}\\
&=\sum_{i,j=1}^{n-1}\tilde{a}^{ij}\left((A_1)^2y_{n+1}^2-A_0A_1y_n^2\right)\p_{ij}\\
&\quad +2\sum_{i,j=1}^{n-1}\tilde{a}^{ij}\left(B_jA_1y_n\p_{in}-B_jA_1y_{n+1}\p_{i,n+1}\right)+\sum_{i,j=1}^{n-1}\tilde{a}^{ij}B_iB_j\p_{n+1,n+1}\\
&\quad +2\sum_{i=1}^{n-1}\tilde{a}^{in}\left(A_1y_n\p_{in}-A_1y_{n+1}\p_{i,n+1}+B_i\p_{n+1,n+1}\right)\\
& \quad +\tilde{a}^{nn}\p_{n+1,n+1}+\tilde{a}^{n+1,n+1}\p_{n,n}.
\end{align*}
Here the coefficients $\tilde{a}^{ij}$ are evaluated at $y_0$ and $A_0,A_1,B_j$ are constants depending on $y_0$:
\begin{align*}
A_0&:=\p_{nnn}v(y_0),\quad A_1:=\p_{n,n+1,n+1}v(y_0),\\
B_j&:=\p_{jn}v(y_0),\quad j\in \{1,\dots, n-1\}.
\end{align*}
To obtain this, we have used the off-diagonal assumption (A3) for the metric $a^{ij}$, i.e. $a^{i,n+1}(x',0)=0$ for $i\in\{1,\dots, n\}$.\\
We note that the operator $L_{y_0}$ is a self-adjoint, constant coefficient Baouendi-Grushin type operator. It is hypoelliptic as an operator on $\R^{n+1}$ after an odd reflection in the $y_n$ variable and an even reflection in the $y_{n+1}$ variable (c.f. \cite{JSC87}).
\end{rmk}
\begin{proof}[Proof of Proposition~\ref{prop:error_gain}]
The proof of this result relies on a successive expansion of the coefficients and the nonlinearities.
Thus, we first expand $\tilde{a}^{ij}(y)$, $G^{ij}(v)$ and the lower order term $J(v)\left(\cdots\right)$ in \eqref{eq:nonlin_2} separately and then combine the results to derive the desired overall expansion. \\
\emph{Step 1: Expansion of the leading term.}
\emph{Step 1a: Expansion of the coefficients.} For the coefficients $\tilde{a}^{ij}$ we have
\begin{align*}
\tilde{a}^{ij}(y)=\tilde{a}^{ij}(y_0)+E_2^{y_0,ij}(y),
\end{align*}
where
the error term $E^{y_0,ij}_2(y)$ satisfies
\begin{equation}
\label{eq:Holderweight0}
\begin{split}
\left\|d_G(\cdot,y_0)^{-2}E^{y_0,ij}_2\right\|_{L^\infty(\mathcal{B}_{1}^+(y_0))}
+\left[d_G(\cdot,y_0)^{-(2-\epsilon)}E^{y_0,ij}_2\right]_{\dot{C}^{0,\epsilon}_\ast(\mathcal{B}_{1}^+(y_0))}\leq C.
\end{split}
\end{equation}
\emph{Proof of Step 1a:}
The claim follows from the differentiability of $a^{ij}(x)$ and the asymptotics of $\nabla v$. Using the abbreviations $\xi(y):=(y'',-\p_nv(y),-\p_{n+1}v(y))$ and $\xi_0:=(y_0'',-\p_{n}v(y_0),-\p_{n+1}v(y_0))=(y_0'',-\p_nv(y_0),0)$, we obtain
\begin{equation}
\label{eq:expansion_1}
\begin{split}
\tilde{a}^{ij}(y)=a^{ij}(\xi(y))&=a^{ij}(\xi_0)+ (a^{ij}(\xi)-a^{ij}(\xi_0))\\
&=a^{ij}(\xi_0) + \int\limits_{0}^{1}\nabla_x a^{ij}((1-t)\xi_0 + t \xi(y))dt \cdot (\xi_0 - \xi(y)) .
\end{split}
\end{equation}
Hence, (\ref{eq:expansion_1}) turns into
\begin{equation}
\label{eq:expansion_2}
\begin{split}
\tilde{a}^{ij}&=\tilde{a}^{ij}(y_0) +E_2^{y_0,ij}(y),
\end{split}
\end{equation}
where
\begin{align*}
E_2^{y_0,ij}(y)&:=\int\limits_{0}^{1}\nabla_x'' a^{ij}((1-t)\xi_0 + t \xi(y)) dt \cdot (y'' - y_0'')\\
& \quad + \sum\limits_{k=n}^{n+1}\int\limits_{0}^{1}\nabla_{x_k} a^{ij}((1-t)\xi_0 + t \xi(y)) dt \cdot (\p_k v(y) - \p_k v(y_0)).
\end{align*}
Recalling the asymptotics of $v$ from Definition~\ref{defi:spaces}, we infer that for all $y\in \mathcal{B}_{1/2}^+(y_0)$
\begin{align*}
|E_2^{y_0,ij}(y)|&\leq C \left|(y''-y''_0,-\p_nv(y) + \p_{n}v(y_0),-\p_{n+1}v(y))\right|\leq Cd_G(y,y_0)^{2}.
\end{align*}
Thus, we have shown that $d_G(y,y_0)^{-2}E_2^{y_0,ij}(y)\in L^\infty(\mathcal{B}_{1/2}^+(y_0))$.
Similarly, we also obtain $[d_G(y,y_0)^{-(2-\epsilon)}E_2^{y_0, ij}]_{\dot{C}^{0,\epsilon}(\mathcal{B}_{1/2}^+(y_0))}\leq C$. This, together with Definition~\ref{defi:spaces}, yields the second estimate in (\ref{eq:Holderweight0}).\\
\emph{Step 1b: Expansion of the functions $G^{ij}$.}
For the (nonlinear) functions $G^{ij}(v)$ we have for all $i,j\in\{1,\dots,n+1\}$
\begin{align*}
G^{ij}(v) = G^{ij}(v_{y_0})+\p_{m_{k \ell}}G^{ij}(v_{y_0}) \p_{k \ell} (v-v_{y_0}) + E^{y_0,ij}_1(y),
\end{align*}
where
the error $E^{y_0,ij}_{1}(y)$ satisfies the bounds
\begin{equation}
\label{eq:Holderweight}
\begin{split}
\left\|d_G(\cdot,y_0)^{-(1+4\alpha)}E^{y_0,ij}_1\right\|_{L^\infty(\mathcal{B}_{1}^+(y_0))}
+\left[d_G(\cdot,y_0)^{-(1+4 \alpha -\epsilon)}E^{y_0,ij}_1\right]_{\dot{C}^{0,\epsilon}_\ast(\mathcal{B}_1^+(y_0))}\leq C.
\end{split}
\end{equation}
\emph{Proof of Step 1b:}
To show the claim, we first expand
\begin{equation}
\label{eq:expand_v}
\begin{split}
G^{ij}(v)&=G^{ij}(v_{y_0})+\p_{m_{k\ell}}G^{ij}(v_{y_0})\p_{k\ell}(v -v_{y_0})\\
&\quad +\frac{1}{2}\p^2_{m_{k\ell}m_{\xi\eta}}G^{ij}(v_{y_0})\p_{k\ell}(v-v_{y_0})\p_{\xi\eta}(v-v_{y_0})\\
&\quad +\frac{1}{6}\p^3_{m_{k\ell}m_{\xi\eta}m_{hs}}G^{ij}(v_{y_0})\p_{k\ell}(v-v_{y_0})\p_{\xi\eta}(v-v_{y_0})\p_{hs}(v-v_{y_0})\\
&=G^{ij}(v_{y_0})+\p_{m_{k \ell}}G^{ij}(v_{y_0}) \p_{k \ell} (v-v_{y_0})+ E^{y_0,ij}_1(y).
\end{split}
\end{equation}
Note that, since $G^{ij}$ is a polynomial of degree at most three in the entries of the Hessian, this Taylor expansion is exact.
Here the error term is given by
\begin{align*}
E^{y_0,ij}_1(y)&=\frac{1}{2}\p^2_{m_{k\ell}m_{\xi\eta}}G^{ij}(v_{y_0})\p_{k\ell}(v-v_{y_0})\p_{\xi\eta}(v-v_{y_0})\\
&+\frac{1}{6}\p^3_{m_{k\ell}m_{\xi\eta}m_{hs}}G^{ij}(v_{y_0})\p_{k\ell}(v-v_{y_0})\p_{\xi\eta}(v-v_{y_0})\p_{hs}(v-v_{y_0}).
\end{align*}
Hence, it remains to prove the error estimate (\ref{eq:Holderweight}). To this end we estimate each term from the expression for $E_1^{y_0,ij}$ separately. We begin by observing that
\begin{equation}
\label{eq:det}
\begin{split}
e^{y_0,ij}(y):&=\p_{m_{k\ell}m_{\xi\eta}}G^{ij}(v_{y_0})\p_{k\ell}(v-v_{y_0})\p_{\xi\eta}(v-v_{y_0})\\
&= \det \begin{pmatrix} \p_{ij}v_{y_0} & \p_{in}(v-v_{y_0}) & \p_{i,n+1}(v-v_{y_0}) \\
\p_{jn}v_{y_0} & \p_{nn}(v-v_{y_0}) & \p_{n,n+1}(v-v_{y_0}) \\
\p_{j,n+1}v_{y_0} & \p_{n,n+1}(v-v_{y_0}) & \p_{n+1,n+1}(v-v_{y_0})
\end{pmatrix}\\
& \quad + \det \begin{pmatrix} \p_{ij}(v-v_{y_0}) & \p_{in}v_{y_0} & \p_{i,n+1}(v-v_{y_0}) \\
\p_{jn}(v-v_{y_0}) & \p_{nn}v_{y_0} & \p_{n,n+1}(v-v_{y_0}) \\
\p_{j,n+1}(v-v_{y_0}) & \p_{n,n+1}v_{y_0} & \p_{n+1,n+1}(v-v_{y_0})
\end{pmatrix}\\
&\quad +\det \begin{pmatrix} \p_{ij}(v-v_{y_0}) & \p_{in}(v-v_{y_0}) & \p_{i,n+1}v_{y_0} \\
\p_{jn}(v-v_{y_0}) & \p_{nn}(v-v_{y_0}) & \p_{n,n+1}v_{y_0} \\
\p_{j,n+1}(v-v_{y_0}) & \p_{n,n+1}(v-v_{y_0}) & \p_{n+1,n+1}v_{y_0}
\end{pmatrix},
\end{split}
\end{equation}
and
\begin{align*}
\tilde{e}^{y_0,ij}(y)&:=\p^3_{m_{k\ell}m_{\xi\eta}m_{hs}}G^{ij}(v_{y_0})\p_{k\ell}(v-v_{y_0})\p_{\xi\eta}(v-v_{y_0})\p_{hs}(v-v_{y_0})\\
&=\left\{
\begin{array}{ll}
3G^{ij}(v-v_{y_0}) & \text{if } i,j\in\{1,\dots, n-1\},\\
0 & \text{if } i \text{ or } j\in\{n,n+1\}.
\end{array}
\right.
\end{align*}
For simplicity we only present the estimate for $e^{y_0,ij}(y)$ in detail.
To estimate the difference $\p_{k\ell}(v-v_{y_0})$ for $k,\ell\in\{1,\dots, n+1\}$ in $\mathcal{B}_{1/2}^+(y_0)$, we use the definition of our function space $X_{\alpha,\epsilon}$ (in the form of Definition~\ref{defi:spaces} or in the form of the decomposition from Proposition~\ref{prop:decompI}) to obtain that for $y\in \mathcal{B}_{1/2}^+(y_0)$
\begin{align*}
|\p_{ij}(v-v_{y_0})(y)|&\leq C d_G(y,P)^{-1+2\alpha},\\
|\p_{in}(v-v_{y_0})(y)|&\leq C d_G(y,y_0)^{2\alpha},\\
|\p_{nn}(v-v_{y_0})(y)|&\leq C d_G(y,y_0)^{1+2\alpha}.
\end{align*}
Using this and plugging the explicit expression for $\p_{ij}v_{y_0}$ (c.f. \eqref{eq:v0}) into \eqref{eq:det} gives
$
|e^{y_0,ij}(y)| \leq Cd_G(y,y_0)^{1+4\alpha}.
$
Similarly, $|\tilde{e}^{y_0,ij}(y)|\leq Cd_G(y,y_0)^{1+6\alpha}$.
Hence, we obtain
\begin{align*}
|E^{y_0,ij}_1(y)|\leq C|e^{y_0,ij}(y)|+C|\tilde{e}^{y_0,ij}(y)|\leq Cd_G(y,y_0)^{1+4\alpha}.
\end{align*}
Moreover, it is not hard to deduce that $d_G(y,y_0)^{-(1+4\alpha-\epsilon)}E_1^{y_0,ij}(y)\in \dot{C}^{0,\epsilon}_\ast(\mathcal{B}_{1/2}^{+}(y_0))$. This concludes the proof of Step 1b.\\
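For the reader's convenience, we sketch the bookkeeping behind the exponent $1+4\alpha$ (this accounting is implicit in the argument above). Since $v_{y_0}$ is affine in the tangential variables, $\p_{ij}v_{y_0}=0$ and $\p_{i,n+1}v_{y_0}=0$, while $\p_{jn}v_{y_0}=O(1)$ and the pure second derivatives $\p_{nn}v_{y_0},\p_{n,n+1}v_{y_0},\p_{n+1,n+1}v_{y_0}$ are of size $O(|(y_n,y_{n+1})|)$. Hence, in the first determinant in \eqref{eq:det} the surviving products are of the form
\begin{align*}
O(1)\cdot O\big(d_G(y,y_0)^{2\alpha}\big)\cdot O\big(d_G(y,y_0)^{1+2\alpha}\big)=O\big(d_G(y,y_0)^{1+4\alpha}\big).
\end{align*}
In the remaining two determinants the potentially dangerous factor $|\p_{ij}(v-v_{y_0})|\leq Cd_G(y,P)^{-1+2\alpha}$ is always multiplied by a pure second derivative of $v_{y_0}$, whose size $O(|(y_n,y_{n+1})|)=O(d_G(y,P))$ compensates the negative power, again yielding the exponent $1+4\alpha$.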
\emph{Step 1c: Concatenation.}
We show that the leading order term $\tilde{a}^{ij}G^{ij}(v)$ has the expansion
\begin{align*}
\tilde{a}^{ij}G^{ij}(v)=\tilde{a}^{ij}(y_0)\p_{m_{k\ell}}G^{ij}(v_{y_0})\p_{k\ell}v + P_{y_0}(y)+E^{y_0}_3(y),
\end{align*}
where
\begin{align}
\label{eq:P3}
P_{y_0}(y)=\tilde{a}^{ij}(y_0)\left(G^{ij}(v_{y_0})-\p_{m_{k\ell}}G^{ij}(v_{y_0})\p_{k\ell}v_{y_0}\right)\in \mathcal{P}_1^{hom},
\end{align}
is a polynomial of the form $c(y_0)y_n$, and $E^{y_0}_3(y)$ satisfies for $\eta_0=\min\{3,1+4\alpha\}$
\begin{equation}
\label{eq:err_3}
\|d_G(\cdot,y_0)^{-\eta_0}E^{y_0}_3\|_{L^\infty(\mathcal{B}_{1/2}^+(y_0))}+\left[d_G(\cdot,y_0)^{-(\eta_0-\epsilon)}E^{y_0}_3\right]_{\dot{C}^{0,\epsilon}_\ast(\mathcal{B}_{1/2}^+(y_0))}\leq C.
\end{equation}
\emph{Proof of Step 1c:} Using the expansions for $\tilde{a}^{ij}$ and for $G^{ij}(v)$ from Steps 1a and 1b, we obtain
\begin{align*}
\tilde{a}^{ij}(y)G^{ij}(v)&=\tilde{a}^{ij}(y_0)G^{ij}(v)+E_{2}^{y_0,ij}(y)G^{ij}(v)\\
&=\tilde{a}^{ij}(y_0)\left(G^{ij}(v_{y_0})+\p_{m_{k\ell}}G^{ij}(v_{y_0})\p_{k\ell}(v-v_{y_0}) +E_1^{y_0,ij}(y)\right)\\
&\quad +E_2^{y_0,ij}(y)G^{ij}(v)\\
&=\tilde{a}^{ij}(y_0)\p_{m_{k\ell}}G^{ij}(v_{y_0})\p_{k\ell}v +P_{y_0}(y) +E_3^{y_0}(y),
\end{align*}
where
\begin{align*}
E_3^{y_0}(y) := \tilde{a}^{ij}(y_0)E_1^{y_0,ij}(y)+E^{y_0,ij}_2(y)G^{ij}(v).
\end{align*}
Recalling the error bounds from Steps 1a, 1b and further observing
\begin{align*}
\left| G^{ij}(v) \right| \leq Cd_G(y,y_0),
\end{align*}
entails (\ref{eq:err_3}).\\
\emph{Step 2: Expansion of the lower order contributions.}
For the lower order contribution the asymptotics of $v$ immediately yield
\begin{align*}
\left|J(v)(y)\left( \sum\limits_{j=1}^{n-1} \tilde{b}^j(y) \p_j v(y) + \tilde{b}^n(y)y_n + \tilde{b}^{n+1}(y)y_{n+1}\right)\right|\leq Cc_{\ast} d_G(y,P)^3.
\end{align*}
Here we used that $\|\nabla a^{ij}\|_{L^{\infty}}\leq C c_{\ast}$. Hence, this error is small compared with the error term $E^{y_0}_3(y)$ from the leading order expansion (c.f. \eqref{eq:err_3}).
\end{proof}
The previous proposition allows us to apply an iterative bootstrap argument to obtain higher regularity for $v$.
\begin{prop}
\label{prop:error_gain2}
Assume that $v\in X_{\alpha,\epsilon}(\mathcal{B}_{1}^+)$ for some $\alpha\in (0,1]$, $\epsilon \in (0,\gamma]$, and that it satisfies $F(v,y)=0$ with $a^{ij}(x)\in C^{1,\gamma}$ for some $\gamma\in (0,1]$. Then
$v\in X_{\delta,\epsilon}(\mathcal{B}_{1/2}^+)$ for any $\delta\in (0,1)$.
\end{prop}
\begin{proof}
If $1+4\alpha<3$, i.e. $0< \alpha<1/2$, Proposition~\ref{prop:error_gain} and Remark~\ref{rmk:error_gain3} yield that for each fixed $y_0\in P\cap \mathcal{B}_{1/2}$ the Legendre function $v$ solves
\begin{align}
\label{eq:eq}
L_{y_0}v=L_{y_0}v_{y_0}+\tilde{f} \text{ in } \mathcal{B}_{1/2}^+(y_0),
\end{align}
where $L_{y_0}$ is the ``constant coefficient" Baouendi-Grushin type operator from Remark~\ref{rmk:error_gain3}, $L_{y_0}v_{y_0}=c(y_0)y_n$ and the function $\tilde{f}(y)$ is such that
$$\|d_G(\cdot,y_0)^{-(1+4\alpha)}\tilde{f}\|_{L^\infty(\mathcal{B}_{1/2}^+(y_0))}+[d_G(\cdot,y_0)^{-(1+4\alpha-\epsilon)}\tilde{f}]_{\dot{C}^{0,\epsilon}_\ast(\mathcal{B}_{1/2}^+(y_0))}\leq C,$$
where $C$ depends on $\|v\|_{X_{\alpha,\epsilon}}$ and $[D a^{ij}]_{\dot{C}^{0,\gamma}}$ and is in particular independent of $y_0$. We apply the compactness argument from the Appendix (c.f. the proof of Proposition~\ref{prop:Hoelder0}) at each point $y_0\in P\cap \mathcal{B}_{1/2}$. This is possible as $L_{y_0}$ is a self-adjoint, constant coefficient subelliptic operator of Baouendi-Grushin type which is hypoelliptic after suitable reflections (c.f. Remark \ref{rmk:error_gain3} and \cite{JSC87}). We note that, as in the case of the Grushin operator, there are no fourth order homogeneous polynomials with symmetry (even about $y_{n+1}$ and odd about $y_n$) which are solutions to the equation $L_{y_0}v=0$. Combining the above approximation result along $P\cap \mathcal{B}_{1/2}$ with the $C^{2,\epsilon}_\ast$, $\epsilon\leq \gamma$, estimate in the corresponding non-tangential region (with respect to $P$) leads to $v\in X_{2\alpha,\epsilon}(\mathcal{B}_{1/4}^+)$.
We repeat the above procedure until, after finitely many, say $k$, steps, $1+4k\alpha>3$. This results in $v\in X_{\delta,\epsilon}(\mathcal{B}_{1/2^{k}}^+)$ for every $\delta\in (0,1)$ (where we used the nonexistence of homogeneous fourth order approximating polynomials). Repeating this procedure in $\mathcal{B}^+_{1/2}(\bar y)$ for $\bar y\in P\cap \mathcal{B}_{1/2}^+$ and using a covering argument, we obtain that $v\in X_{\delta,\epsilon}(\mathcal{B}_{1/2}^+)$ for every $\delta\in (0,1)$.
\end{proof}
\section[Free Boundary Regularity]{Free Boundary Regularity for $C^{k,\gamma}$ Metrics, $k\geq 1$}
\label{sec:fb_reg}
In this section we apply the implicit function theorem to show that the regular free boundary is locally in $C^{k+1,\gamma}$, if $a^{ij}\in C^{k,\gamma}$ with $k\geq 1$ and $\gamma\in (0,1)$. Moreover, we also argue that the regular free boundary is locally real analytic, if $a^{ij}$ is real analytic. \\
In order to invoke the implicit function theorem, we discuss the mapping properties of the nonlinear function $F$ in the next two sections. More precisely, we prove that
\begin{itemize}
\item the nonlinearity $F$ maps $X_{\delta,\epsilon}(\mathcal{B}_1^+)$ to $Y_{\delta,\epsilon}(\mathcal{B}_1^+)$ for any $\delta\in (0,1)$ and $\epsilon$ sufficiently small (c.f. Section \ref{sec:nonlinmap}),
\item and that its linearization in a neighborhood of $v_0$ is a perturbation of the Baouendi-Grushin Laplacian and is hence invertible (c.f. Section \ref{sec:grushin}).
\end{itemize}
Then, in Section \ref{subsec:IFT0}, we introduce a one-parameter family of diffeomorphisms which forms the basis of our application of the implicit function theorem in Section \ref{sec:IFTAppl}. There we apply the implicit function theorem to show the regularity of the free boundary for $C^{k,\gamma}$ metrics, $k\geq 1$, and for analytic metrics, which yields the desired proof of Theorem \ref{thm:higher_reg}.
\subsection{Mapping properties of $F$}
\label{sec:nonlinmap}
As a consequence of the representation of $F$ which was derived in Proposition \ref{prop:error_gain} we obtain the following mapping properties for our nonlinear function $F$.
\begin{prop}
\label{prop:nonlin_map}
Let $a^{ij}:B_1^+ \rightarrow \R^{(n+1)\times (n+1)}_{sym}$ be a $C^{k,\gamma}$ tensor field with $k\geq 1$ and $\gamma \in (0,1]$. Assume that the nonlinear function $F$ is as in (\ref{eq:nonlin_2}). Then for any $\delta\in (0,1]$ and $\epsilon\in (0,\gamma)$, we have that
\begin{align*}
F:X_{\delta,\epsilon}(\mathcal{B}_1^+) \rightarrow Y_{\delta,\epsilon}(\mathcal{B}_1^+).
\end{align*}
\end{prop}
These properties will be used in Propositions \ref{prop:reg_a} and \ref{prop:invertible} to establish the mapping properties of the nonlinear function to which we apply the implicit function theorem in Section \ref{sec:IFT1}.
\begin{proof}
The mapping properties of $F$ are an immediate consequence of the representation for $F$ that was obtained in Proposition \ref{prop:error_gain}. Indeed, given any $u\in X_{\delta,\epsilon}$, by Proposition \ref{prop:error_gain}, we have
\begin{align*}
F(u, y) = \sum\limits_{i,j=1}^{n+1} \tilde{a}^{ij}(y_0) \p_{m_{k \ell}} G^{ij}(u_{y_0}) \p_{k \ell } u + P_{y_0}(y) + E_{y_0}(y).
\end{align*}
Due to Proposition~\ref{prop:decompI}, Proposition~\ref{prop:error_gain} and Remark~\ref{rmk:error_gain3} we infer that $P_{y_0}(y) + E_{y_0}(y)\in Y_{\delta,\epsilon}(\mathcal{B}_1^+)$. For the remaining linear term we note that, similarly as in the proof of Proposition \ref{prop:error_gain},
\begin{align*}
\tilde{a}^{ij}(y_0) \p_{m_{k \ell}} G^{ij}(u_{y_0}) \p_{k \ell } u & = \tilde{a}^{ij}(y_0) \p_{m_{k \ell}} G^{ij}(u_{y_0}) \p_{k \ell } u_{y_0} \\
& \quad + \tilde{a}^{ij}(y_0) \p_{m_{k \ell}} G^{ij}(u_{y_0}) \p_{k \ell } (u-u_{y_0}) \\
& = c(y'')y_n + r^{1+2\delta-\epsilon}f(y),
\end{align*}
with $c(y'')\in C^{0,\delta}(\R^{n-1}\cap \mathcal{B}_1^+)$ and $f(y)\in C^{0,\epsilon}_{\ast}(\mathcal{B}_1^+)$.
This implies the result.
\end{proof}
\subsection{Linearization and the Baouendi-Grushin operator}
\label{sec:grushin}
In this section we compute the linearization of $F$ and show that it can be interpreted as a perturbation of the Baouendi-Grushin Laplacian. We treat the cases $a^{ij}\in C^{k,\gamma}$ with $k=1$ and $k\geq 2$ simultaneously as only small modifications are needed in the argument.
If $a^{ij}\in C^{k,\gamma}$ with $k\geq 2$, then, using the notation in \eqref{eq:nonlin_2}, we write
\begin{align*}
F(D^2v, Dv,y):=&\sum_{i,j=1}^{n+1}a^{ij}(y'',-\p_nv,-\p_{n+1}v)G^{ij}(v)-J(v)\sum_{j=1}^{n-1}b^j(y'',-\p_nv,-\p_{n+1}v)\p_j v\\
&-J(v)b^n(y'',-\p_nv,-\p_{n+1}v)y_n-J(v)b^{n+1}(y'',-\p_nv,-\p_{n+1}v)y_{n+1}.
\end{align*}
In the case of $a^{ij}\in C^{1,\gamma}$ we view $F$ as
\begin{align*}
F(D^2v, Dv,y)&=\sum_{i,j=1}^{n+1}a^{ij}(y'',-\p_nv,-\p_{n+1}v)G^{ij}(v)+ f(y),\\
&\text{where } f(y)=-J(v(y))\left(\sum_{j=1}^{n-1}\tilde{b}^j(y)\p_j v(y)+\tilde{b}^n(y)y_n+\tilde{b}^{n+1}(y)y_{n+1}\right).
\end{align*}
Here we view $F$ as a mapping $F:\R^{(n+1)\times(n+1)}_{sym}\times \R^{n+1}\times U\rightarrow \R$ and introduce the abbreviations
\begin{align*}
F_{k\ell}(M,P,y)&:=\frac{\partial F(M,P,y)}{\partial m_{k\ell}}, \quad M=(m_{k\ell})\in \R^{(n+1)\times(n+1)}_{sym},\ P\in \R^{n+1},\ y\in \R^{n+1},\\
F_k(M,P,y)&:=\frac{\partial F(M,P,y)}{\partial p_{k}}.
\end{align*}
For notational convenience, we use the conventions $$F_{k\ell}(v,y):=F_{k\ell}(D^2v, Dv, y), \quad F_k(v,y):=F_{k}(D^2v, Dv,y), \quad F(v,y):=F(D^2v, Dv,y).$$
The linearization of $F$ at $v$ is
\begin{align}
\label{eq:op_lin}
L_v:=F_{k\ell}(v,y) \p_{k \ell} + F_k (v,y) \p_k.
\end{align}
In the case $a^{ij}\in C^{k,\gamma}$ with $k\geq 2$, we have
\begin{align*}
F_{k\ell}(v,y) &= \sum\limits_{i,j=1}^{n+1} \tilde{a}^{ij} \p_{m_{k \ell}} G^{ij}(v) - \p_{m_{k \ell}} J(v) \left(\sum_{j=1}^{n-1}\tilde{b}^j\partial_jv+\tilde{b}^ny_n+\tilde{b}^{n+1}y_{n+1}\right),\\
F_k(v,y) &= -J(v)\tilde{b}^k, \mbox{ for } k\in\{1,\dots,n-1\},\\
F_{k}(v,y) &= \sum\limits_{i,j=1}^{n+1} b^{ij}_k G^{ij}(v)- J(v)\left(\sum_{j=1}^{n-1}b^j_k\partial_jv+b^n_k y_n+b^{n+1}_ky_{n+1}\right), \mbox{ for } k\in\{n,n+1\}.
\end{align*}
Here
\begin{align*}
\tilde{a}^{ij}&:= a^{ij}|_{(y'',-\p_n v(y), - \p_{n+1} v(y))}, \quad \tilde{b}^j=\sum_{i=1}^{n+1}\p_{x_i}a^{ij}|_{(y'',-\p_nv(y),-\p_{n+1}v(y))}\\
b^{ij}_k &:= \p_{x_k} a^{ij}|_{(y'',-\p_n v(y), - \p_{n+1} v(y))},\\
b^{j}_k &:= \sum_{i=1}^{n+1}\p_{x_k x_i} a^{ij}|_{(y'',-\p_n v(y), - \p_{n+1} v(y))}.
\end{align*}
In particular, we note that the linearization of $F$ already involves second order derivatives of our metric $a^{ij}$. \\
In the case $a^{ij}\in C^{1,\gamma}$, we have
\begin{align*}
F_{k\ell}(v,y)&=\sum_{i,j=1}^{n+1}\tilde{a}^{ij}\p_{m_{k\ell}}G^{ij}(v), \quad k,\ell\in\{1,\dots,n+1\},\\
F_k(v,y)&=0,\ k\in\{1,\dots, n-1\}; \quad F_k(v,y)=\sum\limits_{i,j=1}^{n+1} b^{ij}_k G^{ij}(v), \quad k\in\{n,n+1\}.
\end{align*}
Let
$v_0(y):=-\frac{1}{6}\left(y_n^3-3y_ny_{n+1}^2\right)
$ be the (scaled) blow-up of $v$ at $0$.
A direct computation shows that $F_{k\ell}(v_0,0)\p_{k\ell}=\Delta_G$, where $\Delta_G$ is the standard Baouendi-Grushin Laplacian in Section~\ref{sec:intrinsic}. Thus, we write
\begin{align*}
L_v&=\Delta_G+\left(F_{k \ell}(v,y)-F_{k \ell}(v_0,0)\right)\p_{k \ell} +F_k(v,y)\p_k\\
&=:\Delta_G+\mathcal{P}_v.
\end{align*}
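The identity $F_{k\ell}(v_0,0)\p_{k\ell}=\Delta_G$ can also be read off from the formula in Remark~\ref{rmk:error_gain3}, assuming for this computation the normalization $\tilde{a}^{ij}(0)=\delta^{ij}$: the constants associated with $v_0$ at $y_0=0$ are $B_j=\p_{jn}v_0(0)=0$, $A_0=\p_{nnn}v_0(0)=-1$ and $A_1=\p_{n,n+1,n+1}v_0(0)=1$, whence
\begin{align*}
F_{k\ell}(v_0,0)\p_{k\ell}=\sum_{i=1}^{n-1}\left(A_1^2y_{n+1}^2-A_0A_1y_n^2\right)\p_{ii}+\p_{nn}+\p_{n+1,n+1}=(y_n^2+y_{n+1}^2)\sum_{i=1}^{n-1}\p_{ii}+\p_{nn}+\p_{n+1,n+1}=\Delta_G.
\end{align*}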
With this at hand, we can prove the following mapping properties for $L_v$:
\begin{prop}
\label{prop:linear}
Let $L_v, \mathcal{P}_v$ be as above. Assume furthermore, in the case of $a^{ij}\in C^{k,\gamma}$ with $k\geq 2$, that $[D^2_xa^{ij}]_{\dot{C}^{0,\gamma}}\leq c_\ast$. Given $\delta\in (0,1]$ and $\epsilon\in (0,\min\{\delta,\gamma\})$, let $X_{\delta,\epsilon}(\mathcal{B}_1^+)$ and $Y_{\delta,\epsilon}(\mathcal{B}_1^+)$ be the spaces from Definition~\ref{defi:spaces_loc}. Then
\begin{align*}
L_v : X_{\delta,\epsilon}(\mathcal{B}_1^+) \rightarrow Y_{\delta,\epsilon}(\mathcal{B}_1^+).
\end{align*}
Moreover, if $\|v-v_0\|_{X_{\delta,\epsilon}(\mathcal{B}_1^+)}\leq \delta_0$, then
$$\|\mathcal{P}_v w\|_{Y_{\delta,\epsilon}(\mathcal{B}_1^+)}\lesssim \max\{\delta_0,c_\ast\}\|w\|_{X_{\delta,\epsilon}(\mathcal{B}_1^+)}, \quad \text{for all } w\in X_{\delta,\epsilon}(\mathcal{B}_1^+).$$
\end{prop}
\begin{proof}
We first show the claims of the proposition in the case of $a^{ij}\in C^{k,\gamma}$ with $k\geq 2$. We begin by arguing that $L_v w \in Y_{\delta,\epsilon}(\mathcal{B}_1^+)$ if $w\in X_{\delta, \epsilon}(\mathcal{B}_1^+)$.
\begin{itemize}
\item[(a)] To show this, we observe that
\begin{align*}
&\sum_{k,\ell}\p_{m_{k\ell}}G^{ij}(v)\p_{k\ell}w
=\det\begin{pmatrix}
\p_{ij}w & \p_{in}w & \p_{i,n+1}w\\
\p_{in}v & \p_{nn}v & \p_{n,n+1}v\\
\p_{i,n+1}v & \p_{n+1,n}v & \p_{n+1,n+1}v
\end{pmatrix} \\
&+ \det\begin{pmatrix}
\p_{ij}v & \p_{in}v & \p_{i,n+1}v\\
\p_{in}w & \p_{nn}w & \p_{n,n+1}w\\
\p_{i,n+1}v & \p_{n+1,n}v & \p_{n+1,n+1}v
\end{pmatrix}
+\det\begin{pmatrix}
\p_{ij}v & \p_{in}v & \p_{i,n+1}v\\
\p_{in}v & \p_{nn}v & \p_{n,n+1}v\\
\p_{i,n+1}w & \p_{n+1,n}w & \p_{n+1,n+1}w
\end{pmatrix},
\end{align*}
if $i,j\in\{1,\dots, n-1\}$ and for the remaining indices $(i,j)$ the expression $\sum_{k,\ell}\p_{m_{k\ell}}G^{ij}(v)\p_{k\ell}w$ is similar. Thus, the mapping property of $\sum_{i,j}\sum_{k,\ell}\tilde{a}^{ij}\p_{m_{k\ell}}G^{ij}(v)\p_{k\ell}w$ follows along the lines of the proof of Proposition~\ref{prop:nonlin_map} and Proposition~\ref{prop:error_gain}.
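The three-determinant expansion above is an instance of the multilinearity of the determinant in its rows; for the reader's convenience we record the underlying standard identity (a sketch, stated for a $d\times d$ matrix with rows $m_1,\dots,m_d$ depending on a parameter $t$):

```latex
% Derivative of a determinant: differentiate one row at a time.
\begin{align*}
\frac{d}{dt}\det
\begin{pmatrix} m_1 \\ \vdots \\ m_d \end{pmatrix}
= \sum_{q=1}^{d} \det
\begin{pmatrix} m_1 \\ \vdots \\ \dot{m}_q \\ \vdots \\ m_d \end{pmatrix}.
\end{align*}
% Applied to the 3x3 determinants G^{ij}(v), differentiating in the
% direction w produces one determinant per row, with that row's entries
% of D^2 v replaced by the corresponding entries of D^2 w.
```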
\item[(b)] We discuss the term $\sum\limits_{i,j=1}^{n+1}\sum\limits_{\ell=n}^{n+1}b^{ij}_\ell\p_\ell w G^{ij}(v)$. As $b^{ij}_\ell$ is $C^{1,\gamma}$, it satisfies the same decomposition as $\tilde{a}^{ij}$ in \eqref{eq:Holderweight0}. Using the characterization for $\p_nw$, $\p_{n+1}w$ from Proposition~\ref{prop:decompI} (ii), we have that around $y_0\in P\cap \mathcal{B}_{1/2}$
\begin{align*}
\sum_{\ell=n,n+1}b^{ij}_\ell\p_\ell w = \tilde{b}^{ij}_n(y_0)\p_nw(y_0)+ E_{y_0}^{ij},
\end{align*}
where $E_{y_0}^{ij}$ satisfies the same error bounds as (\ref{eq:Holderweight0}). Moreover, the functions $\tilde{b}^{ij}_n$ inherit the off-diagonal condition of $a^{ij}$, i.e. $\tilde{b}^{i,n+1}_n=0$ on $\{y_{n+1}=0\}$ for $i\in \{1,\dots, n\}$. Thus, using exactly the same estimate as for $\tilde{a}^{ij}G^{ij}(v)$ in Step 1 of Proposition~\ref{prop:error_gain}, we obtain that $\sum\limits_{i,j=1}^{n+1}\sum\limits_{\ell=n}^{n+1}b_\ell^{ij}\p_\ell w G^{ij}(v)\in Y_{\delta, \epsilon}(\mathcal{B}_1^+)$.
\item[(c)] The term
\begin{align*}
-J(v)\left( \sum\limits_{j=1}^{n-1}b^j_k \p_j v + b^n_k y_n + b^{n+1}_k y_{n+1} \right)\p_k w,\quad k\in \{n,n+1\}
\end{align*}
is a lower order term contained in $Y_{\delta,\epsilon}(\mathcal{B}_1^+)$ as it is bounded by $r(y)^3=\dist(y,P)^3$ and satisfies the right Hölder bounds.
\item[(d)] Similarly, the contribution
\begin{align*}
-\p_{m_{k\ell}}J(v)\left( \sum\limits_{j=1}^{n-1}\tilde{b}^j \p_j v + \tilde{b}^n y_n + \tilde{b}^{n+1} y_{n+1} \right)\p_{k\ell} w,\quad k,\ell\in \{n,n+1\}
\end{align*}
is a lower order term contained in $Y_{\delta,\epsilon}(\mathcal{B}_1^+)$ as it is also bounded by $\dist(y,P)^3$ and satisfies the right Hölder bounds.
\end{itemize}
We continue by proving the bounds for $\mathcal{P}_v$. To this end, we again consider the individual terms in the linearization separately:
\begin{itemize}
\item Estimate of $\left(F_{k\ell}(v,y)-F_{k\ell}(v_0,0)\right)\p_{k\ell}w$. We will only present the details of the estimate for $k,\ell\in \{1,\dots,n-1\}$. The estimates for the remaining terms are similar. \\
For $k,\ell\in \{1,\dots, n-1\}$, from \eqref{eq:op_lin} we have
\begin{align*}
F_{k\ell}(v,y)=-\tilde{a}^{k\ell}(y)J(v).
\end{align*}
By assumption $v\in X_{\delta,\epsilon}(\mathcal{B}_1^+)$ is in a $\delta_0$ neighborhood of $v_0$ and by Proposition~\ref{prop:decompI} we have the decomposition ($r=r(y)=(y_n^2+y_{n+1}^2)^{1/2}$)
\begin{align*}
\p_{ij}v(y)&=r^{-1+2\delta-\epsilon}C_{ij}(y),\\
\p_{in}v(y)&=B_i(y'')+r^{2\delta-\epsilon}C_{in}(y),\\
\p_{i,n+1}v(y)&=r^{2\delta-\epsilon}C_{i,n+1}(y),\\
\p_{nn}v(y)&=A_0(y'')y_n+r^{1+2\delta-\epsilon}C_{n,n}(y),\\
\p_{n,n+1}v(y)&=A_1(y'')y_{n+1}+r^{1+2\delta-\epsilon}C_{n,n+1}(y),\\
\p_{n+1,n+1}v(y)&=A_1(y'')y_{n}+r^{1+2\delta-\epsilon}C_{n+1,n+1}(y),
\end{align*}
where $B_i, A_0, A_1\in C^{0,\delta}(P\cap \mathcal{B}_1)$ with
\begin{equation}\label{eq:smallness}
[B_i]_{\dot{C}^{0,\delta}}+\|A_0-1\|_{L^\infty}+\|A_1-(-1)\|_{L^\infty}\lesssim \delta_0,
\end{equation}
and $C_{ij}\in C^{0,\epsilon}_\ast(\mathcal{B}_1^+)$ with $[C_{ij}]_{\dot{C}^{0,\epsilon}_\ast(\mathcal{B}_1^+)}\lesssim \delta_0$.
Thus, for $k,\ell\in \{1,\dots, n-1\}$
\begin{align*}
&\quad F_{k\ell}(v,y)-F_{k\ell}(v_{0},0)\\
&=-\left(\tilde{a}^{k\ell}(y)-\tilde{a}^{k\ell}(0)\right)J(v_{0})-\tilde{a}^{k\ell}(y)\left(J(v)-J(v_{0})\right)\\
&\lesssim c_\ast(y_n^2+y_{n+1}^2)+ \delta_0 (y_n^2+y_{n+1}^2).
\end{align*}
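The quadratic vanishing in this bound can be read off directly from the model solution. Assuming the convention that $J(v)=\p_{nn}v\,\p_{n+1,n+1}v-(\p_{n,n+1}v)^2$ (the convention is fixed where $J$ is introduced), a short computation for $v_0(y)=-\frac{1}{6}(y_n^3-3y_ny_{n+1}^2)$ gives

```latex
% Second derivatives of the model solution:
\begin{align*}
\p_{nn}v_0 = -y_n, \quad \p_{n,n+1}v_0 = y_{n+1}, \quad \p_{n+1,n+1}v_0 = y_n,
\end{align*}
% hence
\begin{align*}
J(v_0) = \p_{nn}v_0\,\p_{n+1,n+1}v_0 - (\p_{n,n+1}v_0)^2 = -(y_n^2+y_{n+1}^2),
\end{align*}
% i.e. J(v_0) vanishes exactly quadratically on P = {y_n = y_{n+1} = 0},
% which produces the factor (y_n^2 + y_{n+1}^2) in both terms of the estimate.
```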
Consequently,
\begin{align*}
\left|\left(F_{k\ell}(v,y)-F_{k\ell}(v_0,0)\right)\p_{k\ell}w(y)\right|\lesssim \max\{c_\ast,\delta_0\}r^{1+2\delta}|r^{-(1-2\delta)}\p_{k\ell}w(y)|.
\end{align*}
Moreover, it is not hard to check that
\begin{align*}
\left[r(\cdot)^{-(1+2\delta-\epsilon)}(F_{k\ell}(v,\cdot)-F_{k\ell}(v_0,0))\p_{k\ell}w\right]_{\dot{C}^{0,\epsilon}_\ast(\mathcal{B}_{1}^+)}\lesssim \max\{c_\ast,\delta_0\}\|w\|_{X_{\delta,\epsilon}(\mathcal{B}_1^+)}.
\end{align*}
Hence, for $k,\ell\in \{1,\dots, n-1\}$
$$\left\|(F_{k\ell}(v,y)-F_{k\ell}(v_0,0))\p_{k\ell}w\right\|_{Y_{\delta,\epsilon}(\mathcal{B}_1^+)}\lesssim \max\{c_\ast,\delta_0\}\|w\|_{X_{\delta,\epsilon}(\mathcal{B}_1^+)}.$$
For the remaining second order terms in the linearization we argue similarly.
\item Estimate for $F_k(v,y)\p_k$. To estimate the lower order terms $F_k(v,y)$, we need to assume that $a^{ij}\in C^{2,\gamma}$ for some $\gamma>0$. Furthermore, we assume $[D^2_xa^{ij}]_{\dot{C}^{0,\gamma}}\leq c_\ast$ (this assumption on the second derivatives becomes necessary as the term $F_k(v,y)$ with $k\in \{n,n+1\}$ involves $D^2a$, c.f. \eqref{eq:op_lin}). Note that combining this with the assumption (A6) and using an interpolation estimate we have $\|D^2_xa^{ij}\|_{L^\infty}\leq Cc_\ast$.\\
We begin with the terms with $k\in\{1,\dots, n-1\}$, i.e. with
\begin{align*}
F_k(v,y)&=-J(v)\sum_{j=1}^{n-1}\tilde{b}^j, \quad \tilde{b}^j=\sum_{i=1}^{n+1}\p_{x_i}a^{ij}|_{(y'',-\p_nv,-\p_{n+1}v)}.
\end{align*}
As by our assumption (A6) $\|\tilde{b}^i\|_{L^\infty}\lesssim c_\ast$, the asymptotics of $J(v)$ immediately yield that $\|F_k(v,y)\p_k w\|_{Y_{\delta,\epsilon}(\mathcal{B}_1^+)}\lesssim c_\ast\|w\|_{X_{\delta,\epsilon}(\mathcal{B}_1^+)}$ as long as $\epsilon\leq \gamma$.\\
For the contributions with $k\in\{n,n+1\}$, $F_k(v,y)$ is of the same structural form as the original nonlinear function $F(v,y)$, however with coefficients which contain an additional derivative, i.e. $a^{ij}$ is replaced by $b^{ij}_k$ and $\tilde{b}^j$ is replaced by $b^j_k$ (c.f. \eqref{eq:op_lin}).
Thus, by the argument in Section \ref{sec:nonlinmap} on the mapping properties of the nonlinear function $F(v,y)$, we infer that $F_k(v,y)\p_k w \in Y_{\delta,\epsilon}(\mathcal{B}_1^+)$, $k\in\{n,n+1\}$ (for $\epsilon \leq \gamma$) for any $w\in X_{\delta,\epsilon}(\mathcal{B}_1^+)$. Moreover, it satisfies $\|F_{k}(v,y)\p_k w\|_{Y_{\delta,\epsilon}(\mathcal{B}_1^+)}\lesssim C [D^2 a^{ij}]_{\dot{C}^{0,\gamma}}\|w\|_{X_{\delta,\epsilon}(\mathcal{B}_1^+)}$.
\end{itemize}
Combining the previous observations concludes the proof of Proposition \ref{prop:linear} in the case that $k\geq 2$.\\
For the case $k=1$, we notice that if $a^{ij}\in C^{1,\gamma}$ the linearization is simply given by $L_{v}=F_{k\ell}(v,y)\p_{k\ell}+\sum\limits_{i,j=1}^{n+1} b^{ij}_k G^{ij}(v)\p_k$. Thus, a similar proof as for the case $C^{k,\gamma}$ with $k\geq 2$ applies.
\end{proof}
\subsection{Hölder Regularity and Analyticity}
\label{sec:IFT1}
In this section we apply the implicit function theorem to show that if $a^{ij}\in C^{k,\gamma}$ with $k\geq 1$ and $\gamma\in (0,1)$, then the regular free boundary is locally in $C^{k+1,\gamma}$. \\
To this end, we first define a one-parameter family of diffeomorphisms which we compose with our Legendre function to create an ``artificially parameter-dependent problem''. Due to the regularity properties of $F$, this is then exploited to deduce the existence of a solution to the parameter-dependent problem, which enjoys good regularity properties in the artificially introduced parameter (c.f. Proposition \ref{prop:reg_a}). Finally, this regularity is transferred from the parameter variable to the original variables, yielding the desired regularity properties of our Legendre function (c.f. Theorems \ref{prop:hoelder_reg_a}, \ref{prop:analytic}). This then proves the claims of Theorem \ref{thm:higher_reg}.\\
In the sequel, we will always assume that $v$ is a Legendre function (c.f. (\ref{eq:legendre})) which is associated with a solution of the variable coefficient thin obstacle problem (\ref{eq:varcoeff}), which satisfies the normalizations (A4) and (A5). The coefficient metric $a^{ij} \in C^{k,\gamma}$, $k\geq 1$, is assumed to obey the conditions (A1)-(A3) as well as (A6). We also suppose that $[D^2_xa^{ij}]_{\dot{C}^{0,\gamma}}\leq c_\ast$ if $k\geq 2$. By rescaling we assume that $v$ is well defined in $\mathcal{B}_2^+$. We have shown in Proposition~\ref{prop:error_gain2} that $v\in X_{\delta,\epsilon}(\mathcal{B}_1^+)$ for any $\delta\in (0,1)$ and $\epsilon\in (0,\gamma]$. Furthermore, we recall that $\|v-v_0\|_{X_{\delta,\epsilon}(\mathcal{B}_1^+)}\leq C\max\{\epsilon_0,c_\ast\}$ (c.f. Remark~\ref{rmk:close}), where $v_0(y)=-\frac{1}{6}(y_n^3-3y_ny_{n+1}^2)$ is the model solution, which is the (rescaled) blow-up of $v$ at $0$.
\subsubsection{An infinitesimal translation.}
\label{subsec:IFT0}
For $y\in \R^{n+1}$ and $a\in \R^{n-1}$ fixed, we consider the following ODE
\begin{equation}
\label{eq:ODE}
\begin{split}
\phi'(t)&=a\,((3/4)^2-|\phi(t)|^2)^5_+\,\eta(y_n,y_{n+1}),\\
\phi(0)&=y''.
\end{split}
\end{equation}
Here $\eta$ is a smooth cut-off function, radially symmetric in the $y_n, y_{n+1}$ variables, supported in $\{(y_n,y_{n+1})| \ y_n^2+y_{n+1}^2<1/2\}$ and equal to one in $\{y_n^2+y_{n+1}^2\leq 1/4\}$. We denote the unique solution to the above ODE by $\phi_{a,y}(t)$ and let $$\Phi_a(y):= (\phi_{a,y}(1),y_n,y_{n+1}).$$
Due to the $C^5$ regularity of the right hand side of (\ref{eq:ODE}), we obtain that $\phi_{a,y}(1)$ is $C^5$ in $y$.
Moreover, an application of a fixed point argument yields that $\phi_{a,y}(1)$ is analytic in the parameter $a$. We summarize these properties as:
\begin{lem}
For each $a\in \R^{n-1}$, $\Phi_a:\R^{n+1}\rightarrow \R^{n+1}$ is a $C^5$ diffeomorphism. The mapping $\R^{n-1}\ni a\mapsto \Phi_a\in C^5(\R^{n+1})$ is analytic.
\end{lem}
Moreover, we note that $\Phi_a$ enjoys further useful properties:
\begin{lem}
\label{prop:psi}
\begin{itemize}
\item[(i)] For each $a\in \R^{n-1}$, $\Phi_a(\{y_n=0\})\subset\{y_n=0\}$ and $\Phi_a(\{y_{n+1}=0\})\subset\{y_{n+1}=0\}$. Moreover, $\Phi_a(y)=y$ if $y\notin \{y\in \R^{n+1}| \ |y''|<\frac{3 }{4}, \ y_n^2+y_{n+1}^2<\frac{1}{2}\}$.
\item[(ii)] For each $a\in \R^{n-1}$ and $y\in \{y\in\R^{n+1}| \ y_n^2+y_{n+1}^2<\frac{1}{4}\}$, we have $\Phi_a(y)=(\phi_{a,y''}(1),y_n,y_{n+1})$, i.e. $\Phi_a$ only acts on the tangential variables.
\item[(iii)] For each $a\in \R^{n-1}$, $\p_{n+1}\Phi_a(y'',y_n,0)=(0,\dots, 0,1)$.
\item[(iv)] At $a=0$, $\Phi_0(y)=y$ for all $y\in \R^{n+1}$.
\end{itemize}
\end{lem}
\begin{proof}
This follows directly from the definition of $\Phi_a$ and a short calculation.
\end{proof}
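As a sanity check, properties (i), (ii) and (iv) can also be verified numerically. The following minimal Python sketch integrates the ODE (\ref{eq:ODE}) with a classical RK4 scheme; the concrete smooth bump used for $\eta$ is a hypothetical choice (any cut-off with the stated support properties works), and `Phi` is our name for the resulting approximation of $\Phi_a$.

```python
import math

def eta(yn, yn1):
    """A concrete smooth radial cut-off (hypothetical choice): equals 1 for
    yn^2 + yn1^2 <= 1/4 and 0 for yn^2 + yn1^2 >= 1/2, smooth in between."""
    r2 = yn * yn + yn1 * yn1
    s = (r2 - 0.25) / 0.25            # s <= 0 inside, s >= 1 outside
    if s <= 0.0:
        return 1.0
    if s >= 1.0:
        return 0.0
    f = lambda x: math.exp(-1.0 / x) if x > 0 else 0.0
    return f(1.0 - s) / (f(1.0 - s) + f(s))

def Phi(a, y, steps=200):
    """Approximate Phi_a(y) by integrating phi' = a ((3/4)^2-|phi|^2)_+^5 eta
    up to t = 1 with RK4; the (y_n, y_{n+1}) components are never moved."""
    y2, yn, yn1 = list(y[:-2]), y[-2], y[-1]
    c = eta(yn, yn1)                  # eta is constant along the flow
    def rhs(p):
        w = max(0.5625 - sum(q * q for q in p), 0.0) ** 5   # ((3/4)^2-|p|^2)_+^5
        return [ai * w * c for ai in a]
    h = 1.0 / steps
    p = y2
    for _ in range(steps):
        k1 = rhs(p)
        k2 = rhs([pi + 0.5 * h * k for pi, k in zip(p, k1)])
        k3 = rhs([pi + 0.5 * h * k for pi, k in zip(p, k2)])
        k4 = rhs([pi + h * k for pi, k in zip(p, k3)])
        p = [pi + h / 6.0 * (c1 + 2 * c2 + 2 * c3 + c4)
             for pi, c1, c2, c3, c4 in zip(p, k1, k2, k3, k4)]
    return p + [yn, yn1]

# (iv): Phi_0 is the identity
assert Phi([0.0, 0.0], [0.1, 0.2, 0.3, 0.0]) == [0.1, 0.2, 0.3, 0.0]
# (i): Phi_a(y) = y outside {|y''| < 3/4, y_n^2 + y_{n+1}^2 < 1/2}
assert Phi([1.0, -2.0], [0.9, 0.0, 0.1, 0.1]) == [0.9, 0.0, 0.1, 0.1]
assert Phi([1.0, -2.0], [0.1, 0.1, 0.7, 0.2]) == [0.1, 0.1, 0.7, 0.2]
# (ii): only the tangential variables move
out = Phi([0.5, 0.5], [0.0, 0.0, 0.1, 0.1])
assert out[-2:] == [0.1, 0.1] and out[0] > 0.0
```
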
Let $v$ be a Legendre function as described at the beginning of Section~\ref{sec:IFT1}. We use the family of diffeomorphisms $\Phi_a$ to define a one-parameter family of functions:
$$v_a(y):=v(\Phi_a(y)).$$
We first observe that the space $X_{\delta,\epsilon}$ (c.f. Definition \ref{defi:spaces}) is stable under the diffeomorphism $\Phi_a$:
\begin{lem} \label{prop:psi2}
If $v\in X_{\delta,\epsilon}(\mathcal{B}_{1}^+)$, then $v_a=v\circ \Phi_a\in X_{\delta,\epsilon}(\mathcal{B}_{1}^+)$ as well.
\end{lem}
\begin{proof}
We first check that $v_a$ satisfies the Dirichlet-Neumann boundary condition. Indeed, by (i) in Lemma~\ref{prop:psi}, if $v=0$ on $\{y_n=0\}$, then $v_a=0$ on $\{y_n=0\}$ as well. To verify the Neumann boundary condition, we compute
\begin{align*}
\p_{n+1}v_a(y)=\sum_{k=1}^{n+1}\p_{k}v(z)\big|_{z=\Phi_a(y)}\p_{n+1} \Phi_a^k(y).
\end{align*}
Thus by (i) and (iii) of Lemma~\ref{prop:psi}, $\p_{n+1}v_a=0$ on $\{y_{n+1}=0\}$.
Next by property (ii) of Lemma~\ref{prop:psi}, for $y\in \{y\in \R^{n+1}| y_n^2+y_{n+1}^2<\frac{1}{4}\}$ and $i,j\in\{1,\dots,n-1\}$,
\begin{align*}
\p_iv_a(y)&=\sum_{k=1}^{n-1}\p_k v(z)\big|_{z=\Phi_a(y)}\p_i\Phi^k_a(y''), \\
\p_{ij}v_a(y)&=\sum_{k,\ell=1}^{n-1}\p_{k\ell}v(z)\big|_{z=\Phi_a(y)}\p_i\Phi^k_a(y'') \p_j\Phi^\ell_a(y'') + \sum_{k=1}^{n-1}\p_kv(z)\big|_{z=\Phi_a(y)}\p_{ij}\Phi_a^k(y''),\\
\p_{in}v_a(y)&=\sum_{k=1}^{n-1}\p_{kn}v(z)\big|_{z=\Phi_a(y)}\p_i\Phi_a^{k}(y'').
\end{align*}
Thus, combining these calculations with the fact that $\Phi_a$ fixes the $(y_n,y_{n+1})$ variables ((ii) of Lemma~\ref{prop:psi}), it is not hard to check that $v_a$ satisfies the decomposition in Proposition~\ref{prop:decompI} in the region $\{y\in \R^{n+1}| y_n^2+y_{n+1}^2<\frac{1}{4}\}$. The regularity of $\Phi_a$ and of $v$ entails that $v_a\in C^{2,\epsilon}_\ast$ outside of the region $\{y\in \R^{n+1}| y_n^2+y_{n+1}^2<\frac{1}{4}\}$. Thus, $v_a\in X_{\delta,\epsilon}(\mathcal{B}_1^+)$.
\end{proof}
Since $v$ satisfies $F(v,y)=0$, the function $v_a(y)=v(\Phi_a(y))$ solves a new equation $F_a(u,y)=0$. Here
\begin{align}\label{eq:Fa}
F_a(u,y)=F(u(\Phi^{-1}_a(z)),z)\big|_{z=\Phi_a(y)}.
\end{align}
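The fact that $v_a$ indeed solves $F_a(v_a,y)=0$ is a one-line consequence of this definition: since $v_a(\Phi_a^{-1}(z))=v(z)$, we have

```latex
\begin{align*}
F_a(v_a,y)=F\big(v_a(\Phi_a^{-1}(z)),z\big)\big|_{z=\Phi_a(y)}
=F(v,z)\big|_{z=\Phi_a(y)}=0,
\end{align*}
% using that F(v, .) = 0 holds at every point z = Phi_a(y).
```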
For this equation we note the following properties:
\begin{prop}\label{prop:reg_a}
Let $a^{ij}\in C^{k,\gamma}$ with $k\geq 1$, $\gamma\in (0,1]$. Then for each $a\in \R^{n-1}$, $F_a$ maps $X_{\delta,\epsilon}(\mathcal{B}_{1}^+)$ into $Y_{\delta,\epsilon}(\mathcal{B}_{1}^+)$. Moreover,
\begin{itemize}
\item[(i)] for each $a\in \R^{n-1}$, the mapping
\begin{align*}
F_a(\cdot, y): X_{\delta,\epsilon}(\mathcal{B}_1^+)\rightarrow Y_{\delta,\epsilon}(\mathcal{B}_1^+), \quad u\mapsto F_a(u,y),
\end{align*}
is $C^{k-1,\gamma-\epsilon}$ in $u$.
\item[(ii)] For each $u\in X_{\delta,\epsilon}(\mathcal{B}_1^+)$, the mapping
\begin{align*}
F_{\cdot}(u,y):\R^{n-1} \rightarrow Y_{\delta,\epsilon}(\mathcal{B}_1^+), \ a \mapsto F_a(u,y),
\end{align*}
is $C^{k-1,\gamma-\epsilon}$ in $a$. If $a^{ij}$ is real analytic in $B_1^+$, then $F_a$ is real analytic in $a$.
\end{itemize}
\end{prop}
\begin{proof}
We first check the mapping property of $F_a$. Let $\Psi_a(z):=\Phi_a^{-1}(z)$ and let $\tilde{u}_a(z):=u(\Phi_a^{-1}(z))$. A direct computation shows that for $ i,j\in\{1,\dots,n-1\}$, $\eta,\xi\in\{n,n+1\}$ and $y\in\{y_n^2+ y_{n+1}^2 \leq \frac{1}{4}\}$
\begin{align*}
\p_{ij}\tilde{u}_a(z)&=\sum_{k,\ell=1}^{n-1}\p_{k\ell}u(\Psi_a(z))\p_i \Psi_a^{k}(z) \p_j\Psi_a^\ell(z)+\sum_{k=1}^{n-1}\p_k u(\Psi_a(z))\p_{ij}\Psi_a^k(z),\\
\p_{i\xi}\tilde{u}_a(z)&=\sum_{k=1}^{n-1}\p_{k\xi}u(\Psi_a(z))\p_i\Psi^{k}_a(z),\\
\p_{\xi}\tilde{u}_a(z)&=\p_{\xi}u(\Psi_a(z)),\\
\p_{\eta\xi}\tilde{u}_a(z)&=\p_{\eta\xi}u(\Psi_a(z)).
\end{align*}
By property (ii) of Lemma~\ref{prop:psi} and a similar argument as in Lemma~\ref{prop:psi2} we have that $\tilde{u}_a=u\circ \Psi_a \in X_{\delta,\epsilon}(\mathcal{B}_{1/4}^+)$, if $u\in X_{\delta,\epsilon}(\mathcal{B}_{1}^+)$. Thus, by Proposition \ref{prop:nonlin_map}, $F(\tilde{u}_a,z)\in Y_{\delta,\epsilon}(\mathcal{B}_{1/4}^+)$. By (ii) in Lemma~\ref{prop:psi}, $F(\tilde{u}_a,z)\big|_{z=\Phi_a(y)}\in Y_{\delta,\epsilon}(\mathcal{B}_{1/4}^+)$ as well. Outside of $\{y_{n}^2 + y_{n+1}^2 \leq \frac{1}{4}\}$, the statement follows without difficulties.\\
Next we show the regularity of $F_a(u,y)$ in $u$ and in the parameter $a$ which were claimed in the statements (i) and (ii). We first show that when $a=0$, $u\mapsto F(u,y)$ is $C^{k-1,\gamma-\epsilon}$. Indeed, we recall the expression of $F(v,y)$ from the beginning of Section~\ref{sec:grushin}. By a similar estimate as in Proposition~\ref{prop:linear} we have that $u\mapsto \sum_{i,j}a^{ij}(z'',-\p_nu,-\p_{n+1}u)G^{ij}(u)$ is $C^{k-1,\gamma-\epsilon}$ regular. To estimate the contribution $J(u)(b^j(u)\p_ju+b^n(u)y_n+b^{n+1}(u)y_{n+1})$ (in the case $k\geq 2$), we use the bound
\begin{align*}
\left|b(u_1,x)-b(u_2,y)\right|\lesssim \|b\|_{\dot{C}^{0,\gamma}}\|u_1-u_2\|^{\gamma-\epsilon}_{X_{\delta,\epsilon}(\mathcal{B}_1^+)}|x-y|^\epsilon.
\end{align*}
Here we used the decomposition property of $u\in X_{\delta,\epsilon}(\mathcal{B}_1^+)$ from Proposition~\ref{prop:decompI}, that $b$ is $C^{0,\gamma}$ as a function of its arguments and the definition $b(u,y):=b(y'',-\p_nu,-\p_{n+1}u)$. Combining this we infer that
$$\|(D^{k-1}_{u_1}F-D^{k-1}_{u_2}F)(h^{k-1})\|_{Y_{\delta,\epsilon}(\mathcal{B}_1^+)}\lesssim_k\|u_1-u_2\|_{X_{\delta,\epsilon}(\mathcal{B}_1^+)}^{\gamma-\epsilon}\|h\|_{X_{\delta,\epsilon}(\mathcal{B}_1^+)}^{k-1}.$$
To show the regularity of $u\mapsto F_a(u,y)$ for nonzero $a$, we use the definition of $F_a$ in \eqref{eq:Fa} and the computation for $D^2\tilde{u}_a$ from above. The argument is the same as for $a=0$.\\
Now we show the regularity of $F_a(u,y)$ in $a$ for fixed $u$. We only show the case when $k=1$. The remaining cases follow analogously. We recall that
\begin{align*}
F(u,z)&=\sum_{i,j=1}^{n+1}a^{ij}(z'',-\p_{n}u,-\p_{n+1}u)G^{ij}(u)+ f(z),\\
&\text{where } f(z)=-J(v(z))\left(\sum_{j=1}^{n-1}\tilde{b}^j(z)\p_j v(z)+\tilde{b}^n(z)z_n+\tilde{b}^{n+1}(z)z_{n+1}\right),
\end{align*}
and that $F_a(u,y)=F(u(\Psi_a(z)),z)\big|_{z=\Phi_a(y)}$ from \eqref{eq:Fa}.
Since $a\mapsto \Psi_a$ and $a\mapsto \Phi_a$ are real analytic and since $F(v,z):X_{\delta, \epsilon}(\mathcal{B}_{1}^+)\rightarrow Y_{\delta,\epsilon}(\mathcal{B}_{1}^+)$ is $C^{k-1,\gamma-\epsilon}$ regular in $v$, it suffices to note the regularity of the mappings
\begin{align*}
a &\mapsto a^{ij}((\Phi_a(y))'',-\p_n u,-\p_{n+1}u)G_a^{ij}(u)
\end{align*}
in $a$ as functions from $\R^{n-1}$ to $Y_{\delta,\epsilon}(\mathcal{B}_1^+)$. For the term $f(z)|_{z=\Phi_a(y)}$, since $f(z)$ has the form $f(z)=r(z)^{3-\epsilon}\tilde{f}(z)$ with $r(z)=(z_n^2+z_{n+1}^2)^{1/2}$ and since $\tilde{f}(z)$ is $C^{0,\gamma}$ in its tangential variables (due to the regularity of $\tilde{b}^j$ and the fact that $f(z)$ involves only lower order derivatives of $v$), the map
$$\R^{n-1}\ni a\mapsto \tilde{f}_a(y):=\tilde{f}(\Phi_a(y))\in C^{0,\epsilon}_\ast$$
is $C^{0,\gamma-\epsilon}$ regular.
\end{proof}
\subsubsection{Application of the implicit function theorem and regularity}
\label{sec:IFTAppl}
With this preparation we are now ready to invoke the implicit function theorem. We seek to apply the implicit function theorem in the spaces $X_{\delta,\epsilon}$ and $Y_{\delta,\epsilon}$ from Definition \ref{defi:spaces}. However, the Legendre function $v$ is only defined in $\mathcal{B}_1^+$. Thus, we extend it into the whole quarter space $Q_+$. In order to avoid difficulties at (artificially created) boundaries, we base our argument not on $v_a$ but instead consider $w_a:= v_a - v$, where $v$ is the original Legendre function. For this function we first note that $\supp(w_a) \Subset \mathcal{B}_{3/4}^+$, which follows from the definition of the diffeomorphism $\Phi_a$. Moreover, $w_a$ solves the following fully nonlinear, degenerate elliptic equation:
\begin{align*}
\tilde{F}_a(w_a,y) := F_a(w_a+v,y)=0 \mbox{ in } \mathcal{B}_{1}^+.
\end{align*}
We extend $w_a$ to the whole quarter space $Q_+$ by setting $w_a=0$ in $Q_+\setminus \mathcal{B}_{1}^+$. Using $w_a=0$ in $Q_+\setminus \mathcal{B}_{3/4}^+$, the function $w_a$ solves the equation
\begin{align*}
G_a(w_a, y):= \eta(d_G(y,0)) \tilde{F}_a(w_a,y) + (1-\eta(d_G(y,0))) \D_G w_a= 0,
\end{align*}
in $Q_+$. Here $\eta: [0,\infty) \rightarrow \R$ is a smooth cut-off function with $\eta(s)=1$ for $s\leq \frac{3}{4}$ and $\eta(s)=0$ for $s\geq 1$.
This extension is chosen such that the operator is of ``Baouendi-Grushin type'' around the degenerate set $P=\{y_n=y_{n+1}=0\}$ and the Baouendi-Grushin type estimates from Proposition \ref{prop:invert} and from the Appendix, Section \ref{sec:quarter_Hoelder}, can be applied in a neighborhood of $P$. The function $G_a$ satisfies the following mapping properties:
\begin{prop}\label{prop:invertible}
Assume that $a^{ij}\in C^{k,\gamma}$ with $k\geq 1$ and $\gamma\in (0,1]$. Given $a\in \R^{n-1}$, $G_a$ maps from $X_{\delta,\epsilon}$ into $Y_{\delta,\epsilon}$ for each $\delta\in (0,1)$ and $\epsilon\in (0,\gamma)$. Let $L:=D_w G_a\big|_{(a,w)=(0,0)}$ be the linearization of $G_a$ at $w=0$ and $a=0$. Then $L:X_{\delta,\epsilon}\rightarrow Y_{\delta,\epsilon}$ is invertible.
\end{prop}
\begin{proof}
Let $\bar{\eta}(y):= \eta(d_G(y,0))$. By Proposition~\ref{prop:error_gain} the Legendre function $v\in X_{\delta,\epsilon}(\mathcal{B}_{1}^+)$ for any $\delta\in (0,1)$. Thus by Proposition~\ref{prop:reg_a}, $\tilde{F}_a(w,y)=F_a(w+v,y)\in Y_{\delta,\epsilon}(\mathcal{B}_{1}^+)$ for any $w\in X_{\delta,\epsilon}$ and $y\in \mathcal{B}_{1}^+$. Since the Baouendi-Grushin Laplacian also has this mapping property, i.e. $\Delta_G:X_{\delta,\epsilon}\rightarrow Y_{\delta,\epsilon}$, and using the support assumption of $\eta$, we further observe that $G_a=\bar \eta \tilde{F}_a+(1-\bar\eta)\Delta_G$ maps $X_{\delta,\epsilon}$ into $Y_{\delta,\epsilon}$.
By (iv) in Lemma~\ref{prop:psi}, it is not hard to check that the linearization of $G_a$ at $(0,0)$ is given by
\begin{equation}
\begin{split}
\label{eq:lin2}
L=(D_w G_a)|_{(0,0)}= \bar{\eta} \left(F_{k\ell}(v,y)\p_{k\ell}+ F_{k}(v,y)\p_k\right) + (1-\bar{\eta})\D_G.
\end{split}
\end{equation}
Firstly, by Proposition \ref{prop:linear}, $L$ maps $X_{\delta,\epsilon}$ into $Y_{\delta,\epsilon}$. Moreover, $L$ can be written as $L=\Delta_G+\bar\eta\mathcal{P}_v$. Since $\|v-v_0\|_{X_{\delta,\epsilon}(\mathcal{B}_1^+)}\lesssim \max\{\epsilon_0,c_\ast\}$ by Remark~\ref{rmk:close}, Proposition~\ref{prop:linear} implies that $\|\bar\eta \mathcal{P}_v(w)\|_{Y_{\delta,\epsilon}}\lesssim \max\{\epsilon_0,c_\ast\}\|w\|_{X_{\delta,\epsilon}}$ for $\epsilon\in (0,\gamma)$. Thus if $\epsilon_0$ and $c_\ast$ are sufficiently small, $L:X_{\delta,\epsilon}\rightarrow Y_{\delta,\epsilon}$ is invertible, as $\Delta_G:X_{\delta,\epsilon}\rightarrow Y_{\delta,\epsilon}$ is invertible (c.f. Lemma~\ref{lem:inverse}).
\end{proof}
\begin{thm}[H\"older regularity]
\label{prop:hoelder_reg_a}
Let $a^{ij}\in C^{k,\gamma}(B_1^+,\R^{(n+1)\times(n+1)}_{sym})$ with $k\geq 1$ and $\gamma\in (0,1)$. Let $w:B_1^+\rightarrow \R$ be a solution of the variable coefficient thin obstacle problem with metric $a^{ij}$. Then locally $\Gamma_{3/2}(w)$ is a $C^{k+1,\gamma}$ graph.
\end{thm}
\begin{proof}
\emph{Step 1: Almost optimal regularity.}
We apply the implicit function theorem to $G_a: X_{\delta,\epsilon}\rightarrow Y_{\delta,\epsilon}$ with $\delta$ and $\epsilon$ chosen such that $\epsilon\in (0,\gamma/2)$, $\delta=1-\epsilon \in (0,1)$ (as explained above, for $k=1$ we here interpret the lower order term as a function of $y$ in the linearization).
We note that as a consequence of Proposition~\ref{prop:reg_a}, for $v\in X_{\delta,\epsilon}$, $G_a(v)$ (interpreted as the function $G_{\cdot}(v): \R^{n-1}\ni a \mapsto G_a(v)\in Y_{\delta,\epsilon}$) is $C^{k-1,\gamma-\epsilon}$ in $a$. Thus, the implicit function theorem yields a unique solution $\tilde{w}_a$ in a neighborhood $B''_{\epsilon_0}(0)\times \mathcal{U}$ of $(0,0)\in \R^{n-1}\times X_{\delta,\epsilon}$ (c.f. Proposition \ref{prop:reg_a}). Moreover, the map $\R^{n-1}\ni a\mapsto \tilde{w}_a\in X_{\delta,\epsilon}$ is $C^{k-1,\gamma-\epsilon}$.
Hence, for all multi-indices $\beta=(\beta'',0,0)$ with $|\beta|=k-1$,
\begin{align*}
\left\| \frac{\partial^{\beta}_a \tilde{w}_{a_1}- \partial^{\beta}_{a} \tilde{w}_{a_2}}{|a_1-a_2|^{\gamma-\epsilon}}\right\|_{X_{\delta,\epsilon}}\leq C \left\| \partial_a^{\beta} \frac{G_{a_1}(\tilde{w}_{a_1})-G_{a_2}(\tilde{w}_{a_1})}{|a_1-a_2|^{\gamma-\epsilon}} \right\|_{Y_{\delta,\epsilon}} <\infty.
\end{align*}
In particular,
\begin{align}
\label{eq:differences}
\left[ \frac{\partial^{\beta}_a \p_{in}\tilde{w}_{a_1}- \partial^{\beta}_{a} \p_{in}\tilde{w}_{a_2}}{|a_1-a_2|^{\gamma-\epsilon}}\right]_{\dot{C}^{0,\delta}(P)}\leq C \left\| \partial_a^{\beta} \frac{G_{a_1}(\tilde{w}_{a_1})-G_{a_2}(\tilde{w}_{a_1})}{|a_1-a_2|^{\gamma-\epsilon}} \right\|_{Y_{\delta,\epsilon}} <\infty.
\end{align}
Since by Lemma~\ref{prop:psi2} $w_a=v_a-v\in \mathcal{U}$ if $a\in B''_{\epsilon_1}(0)$ for some sufficiently small radius $\epsilon_1$, the local uniqueness of the solution implies that $\tilde{w}_a=w_a$ for $a\in B''_{\epsilon_1}(0)$. Thus, $v_a=v+w_a=v+\tilde{w}_a$ is $C^{k-1,\gamma-\epsilon}$ in $a$. Combined with (\ref{eq:differences}) this in particular implies that for any multi-index $\beta=(\beta'',0,0)$ with $|\beta|= k-1$
\begin{align}
\label{eq:reg_tan}
\left[\frac{\partial^{\beta}_a \p_{in}v_{a_1} - \partial^{\beta}_a \p_{in}v_{a_2}}{|a_1 - a_2|^{\gamma - \epsilon}} \right]_{\dot{C}^{0,\delta}(P)}\leq C <\infty.
\end{align}
Recalling that the $a$-derivative corresponds to a tangential derivative in $\mathcal{B}_{1/2}$ and the fact that Hölder and Hölder-Zygmund spaces agree for non-integer values (c.f. \cite{Triebel}), this implies that for any multi-index $\beta=(\beta'',0,0)$ with $|\beta|\leq k-1$, $\p^\beta\p_{in} v\in C^{1,\gamma-2\epsilon}(P\cap \mathcal{B}_{1/2})$. By the characterization of the free boundary as $\Gamma_w = \{x \in B_{1}'| x_n = -\p_{n}v(x'',0,0)\}$ this implies that $\Gamma_w$ is a $C^{k+1,\gamma-2\epsilon}$ graph for any $\epsilon\in (0,\gamma/2)$. As $\epsilon>0$ can be chosen arbitrarily small, this completes the proof of the \emph{almost} optimal regularity result.\\
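The correspondence between the $a$-derivative and a tangential derivative can be made explicit. Expanding $\phi_{a,y}(1)$ to first order in $a$ (a sketch; the precise prefactor depends on the chosen cut-off $\eta$) gives

```latex
% First order expansion of the flow in the parameter a:
\begin{align*}
\phi_{a,y}(1) = y'' + a\,\big((3/4)^2-|y''|^2\big)^5_+\,\eta(y_n,y_{n+1}) + O(|a|^2),
\end{align*}
% hence, by the chain rule, for i in {1, ..., n-1},
\begin{align*}
\p_{a_i} v_a(y)\big|_{a=0} = \big((3/4)^2-|y''|^2\big)^5_+\,\eta(y_n,y_{n+1})\,\p_i v(y).
\end{align*}
% Near P in B_{1/2} the prefactor is smooth and bounded away from zero, so
% Hoelder regularity in a translates into tangential Hoelder regularity of v.
```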
\emph{Step 2: Optimal regularity.}
In order to infer the \emph{optimal} regularity result, we argue by scaling and our previous estimates. More precisely, we have that
\begin{equation}
\label{eq:est}
\begin{split}
[\Delta_{a}^{\gamma-\epsilon}\p_{in}\partial_a^{\beta} \tilde{w}_a ]_{\dot{C}^{0,\delta}}
&\leq \|\Delta_{a}^{\gamma-\epsilon}\p_{in}\partial_a^{\beta} \tilde{w}_a\|_{X_{\delta,\epsilon}} \leq C \| \Delta_{a}^{\gamma-\epsilon}\partial_a^{\beta} G_a(\tilde{w}_{a_1}) \|_{Y_{\delta,\epsilon}}\\
& \leq C \left( \| \Delta_{a}^{\gamma-\epsilon}\partial_a^{\beta} G_a^1(\tilde{w}_{a_1}) \|_{Y_{\delta,\epsilon}} + \| \Delta_{a}^{\gamma-\epsilon}\partial_a^{\beta} G_a^2(\tilde{w}_{a_1}) \|_{Y_{\delta,\epsilon}}\right).
\end{split}
\end{equation}
Here $G^{1}_a(\cdot)$ is the term that originates from $ F^1(v,y)= \sum\limits_{i,j=1}^{n+1}\tilde{a}^{ij}(y)G^{ij}(v)$ and $G^{2}_a(\cdot)$ is the contribution that originates from the lower order contribution
$$ F^2(v,y)= -J(v(y))\left(\sum_{j=1}^{n-1}\tilde{b}^j(y)\p_j v(y)+\tilde{b}^n(y)y_n+\tilde{b}^{n+1}(y)y_{n+1}\right).$$
The notation $\D_a^{\gamma-\epsilon}$ denotes the difference quotient in $a$ with exponent $\gamma-\epsilon$.
We now consider the norms on the right hand side of (\ref{eq:est}) more precisely and consider their rescalings. A typical contribution of $\| \Delta_{a}^{\gamma-\epsilon}\partial_a^{\beta} G_a^2(\tilde{w}_{a_1}) \|_{Y_{\delta,\epsilon}}$ for instance is
\begin{align*}
[ r^{-(1+2\delta-\epsilon)}\Delta_{a}^{\gamma-\epsilon} \partial^{\beta}_a \tilde{b}^{j}_a J(v_{a_1})\p_j v_{a_1} ]_{C^{0,\epsilon}_{\ast}(\mathcal{B}_{2}^+)}.
\end{align*}
We focus on this contribution and on the case $k=1$. The other terms can be estimated by using similar ideas. We consider the rescaled function $v_{\lambda,a}(y)$, where $v_{\lambda}(y):=\frac{v(\delta_{\lambda}(y))}{\lambda^{3}}$ (with $\delta_\lambda(y)=(\lambda^2y'',\lambda y_n,\lambda y_{n+1})$) and $v_{\lambda,a}(y):= v_{\lambda}(\Phi_a(y))$. The function $w_{\lambda, a}(y):=v_{\lambda,a}(y)-v_{\lambda}(y)$ is defined as its analogue from above. It is compactly supported in $\mathcal{B}_{3/4}^+$ (by definition of $\Phi_a$) and the functions $v_{\lambda}$ and $w_{\lambda,a}$ satisfy similar equations as $v, w_a$. Thus, we may apply estimate (\ref{eq:differences}) to $w_{\lambda,a}$. Inserting $\delta =1 -\epsilon$ and using the support condition for $w_{\lambda,a}$ yields (with slight abuse of notation, as there are additional right hand side contributions, which however, by the compact support assumption on $w_{\lambda,a}$, have the same or better scaling)
\begin{align*}
[\p_{ijn}v_{\lambda}]_{C^{0,\gamma-2\epsilon}(\mathcal{B}_{1}^+\cap P)} &\leq C \lambda^{-1}[r^{-3+3\epsilon}J(v)|_{\delta_{\lambda}(y)}\p_j v|_{\delta_{\lambda}(y)}]_{C^{0,\epsilon}_{\ast}}[\tilde{b}^{j}|_{\delta_{\lambda}(y)}]_{C^{0,\gamma}(\mathcal{B}_{2}^+)}\\
& \leq C \lambda^{-1}[r^{-3+\epsilon}J(v)|_{\delta_{\lambda}(y)}\p_j v|_{\delta_{\lambda}(y)}]_{C^{0,\epsilon}_{\ast}(\mathcal{B}_{2}^+)}[\tilde{b}^{j}|_{\delta_{\lambda}}]_{C^{0,\gamma}(\mathcal{B}_{2}^+)}.
\end{align*}
Comparing this to the left hand side of the estimate and rescaling both sides of the inequality therefore amounts to
\begin{align*}
\lambda^{2+2\gamma-4\epsilon} [\p_{ijn}v]_{C^{0,\gamma-2\epsilon}(\mathcal{B}_{ \lambda}^+\cap P)} & \leq C \lambda^{2+2\gamma}[r^{-3+\epsilon}J(v)\p_j v]_{C^{0,\epsilon}_{\ast}(\mathcal{B}_{2 \lambda}^+)}[\tilde{b}^{j}]_{C^{0,\gamma}(\mathcal{B}_{2 \lambda}^+)},
\end{align*}
which yields
\begin{align*}
[\p_{ijn}v]_{C^{0,\gamma-2\epsilon}(\mathcal{B}_{ \lambda}^+\cap P)} & \leq C \lambda^{4\epsilon}[r^{-3+\epsilon}J(v)\p_j v]_{C^{0,\epsilon}_{\ast}(\mathcal{B}_{2\lambda}^+)}[\tilde{b}^{j}]_{C^{0,\gamma}(\mathcal{B}_{2 \lambda}^+)}.
\end{align*}
As a result, considering two points $x,y\in P$ with $|x-y|= \lambda^2$ yields
\begin{align*}
\frac{|\p_{ijn}v(x)-\p_{ijn}v(y)|}{|x-y|^{\gamma-2\epsilon}} \leq C \lambda^{4\epsilon} = C |x-y|^{2\epsilon}.
\end{align*}
Thus,
\begin{align*}
[\p_{ijn}v]_{C^{0,\gamma}(\mathcal{B}_{ \lambda}^+\cap P)} \leq C,
\end{align*}
which proves the optimal regularity result.
\end{proof}
\begin{rmk}[$\gamma=1$]
\label{rmk:gamma=1}
As expected from elliptic regularity, in the case $\gamma=1$ we cannot deduce the full $C^{k+1,1}$ regularity of the free boundary in the presence of $C^{k,1}$ metrics. This is essentially a consequence of the elliptic estimates of Proposition \ref{prop:error_gain2}. On a technical level this is exemplified by the fact that in Step 2 of the previous proof we also have to deal, for instance, with the term $(\D_a^{\gamma-\epsilon} a^{n,n})\p_{n+1,n+1}v_{\lambda}$ with the expansion $\p_{n+1,n+1}v(y) = a_1(y'')y_n + r^{1+2\delta-\epsilon}C_{n+1,n+1}(y)$ with $\delta\in (0,1)$. As the coefficients $a_1(y'')$ are in general not better than $C^{0,\delta}$, we do not have the full gain of $\lambda^{4\epsilon}$ if $\gamma=1$.
\end{rmk}
\begin{rmk}[Optimal regularity]
\label{rmk:optreg}
Let us comment on the optimality of the gain of the free boundary regularity with respect to the regularity of the metric $a^{ij}$:
Proposition \ref{prop:bulk_eq} in combination with our linearization results (c.f. Example \ref{ex:linear} and Section \ref{sec:grushin}) illustrates that $F$ can be viewed as a nonlinear perturbation of the degenerate, elliptic (second order) Baouendi-Grushin operator with metric $a^{ij}$. As such, we cannot hope for a gain of more than \emph{two orders} of regularity for $v$ compared to the regularity of the metric $a^{ij}$ (by interior regularity in appropriate Hölder spaces, c.f. Section \ref{sec:holder}). Hence, for the regular free boundary we can in general hope for a gain of at most \emph{one order} of regularity with respect to the regularity of the metric. This explains our expectation that the regularity results from Theorem \ref{thm:higher_reg} are sharp higher order regularity results.\\
By a simple transformation it is possible to construct an example demonstrating the sharpness of this claim:
In $\R^3$ the function $w(x_1,x_2,x_3)=\Ree((x_2-x_1)/\sqrt{2}+ix_3)^{3/2}$ is a solution to the thin obstacle problem $\Delta w=0$ in $\R^3_+$ with the free boundary $\Gamma_w=\{(x_1,x_2,0)\in B'_1: x_2=x_1\}$. Applying a transformation of the form $$y(x):=(x_1,h(x_2),x_3),$$
with $h$ being a $W^{k+1,p}$ diffeomorphism from $(-1,1)$ to $(-1,1)$ yields that $\tilde{w}(y):=\Ree((h^{-1}(y_2)-y_1)/\sqrt{2}+iy_3)^{3/2}$ solves the variable coefficient thin obstacle problem
\begin{align*}
\p_{11}\tilde{w}+ \p_2(h'(x_2)\p_2 \tilde{w})+\p_{33}\tilde{w}=0 \mbox{ in } B_1^+,
\end{align*}
with Signorini conditions on $B'_1$. We note that the free boundary of $\tilde{w}$ is given by the graph $\Gamma_{\tilde{w}}=\{(y_1,y_2,0)\in B'_1: y_2=h(y_1)\}$. If $h$ is not better than $W^{k+1,p}$ regular, the coefficients in the bulk equation are no more than $W^{k,p}$ regular. The free boundary is $W^{k+1,p}$ regular. Since it is a graph, it does not admit a more regular parametrization.\\
We further note that our choice of function spaces was crucial in deducing the full gain of regularity for the free boundary with respect to the metric. Indeed, considering the equation (\ref{eq:nonlineq1}), we note that also second order derivatives of the metric are involved. Yet, in order to deduce regularity of the free boundary (which corresponds to \emph{partial} regularity of the Legendre function $v$) this loss of regularity does not play a role as it is a ``lower order bulk term'' (this is similar in spirit to the gain of regularity obtained in boundary Harnack inequalities).
\end{rmk}
Finally, we give the argument for the analyticity of the free boundary in the case that the coefficients $a^{ij}$ are analytic:
\begin{thm}[Analyticity]
\label{prop:analytic}
Let $a^{ij}:B_{1}^+ \rightarrow \R^{(n+1)\times (n+1)}_{sym}$ be an analytic tensor field. Let $w:B_{1}^+ \rightarrow \R$ be a solution of the variable coefficient thin obstacle problem with metric $a^{ij}$. Then locally $\Gamma_{3/2}(w)$ is an analytic graph.
\end{thm}
\begin{proof}
This follows from the analytic implicit function theorem (c.f. \cite{Dei10}). Indeed, due to Proposition \ref{prop:reg_a}, $F_a$ is a real analytic function in $a$ and hence also $G_a$ is a real analytic function in $a$. Applying the analytic implicit function theorem similarly as in Step 1 of the previous proof, we obtain a function $\tilde{w}_a$ which is analytic in $a$. As before, this coincides with our function $w_a$. Therefore $w_a$ depends analytically on $a$. Since differentiation with respect to $a$ directly corresponds to differentiation with respect to the tangential directions $y''$, $w$ (and hence $v$) is an analytic function in the tangential variables.
\end{proof}
\begin{rmk}[Regularity in the normal directions]
In Theorems \ref{prop:hoelder_reg_a} and \ref{prop:analytic} we proved partial analyticity for the Legendre function $v$: We showed that in the \emph{tangential} directions, the regularity of $v$ in a quantitative way matches that of the metric (i.e. a $C^{k,\gamma}$ metric yields $C^{k+1,\gamma}$ regularity for $\p_n v(y'',0,0)$). Although this suffices for the purposes of proving regularity of the (regular) free boundary, a natural question is whether it is also possible to obtain corresponding higher regularity for $v$ in the \emph{normal} directions $y_n, y_{n+1}$. Intuitively, an obstruction for this stems from working in the corner domain $Q_+$. That this set-up of a corner domain really imposes restrictions on the normal regularity can be seen by checking a compatibility condition: As we are considering an expansion close to the regular free boundary point, we know that the Legendre function asymptotically behaves like a multiple of the function $v_0(y)=-(y_1^3 - 3 y_1 y_2^2)$. If additional regularity were true in the normal directions, we could expand the Legendre function $v$ further, for instance into a fifth order polynomial (with symmetries compatible with the mixed Dirichlet-Neumann boundary conditions), which has $v_0$ as its leading order expansion. Hence, working in the two-dimensional corner domain $Q_+:=\{y_1\geq 0, y_2\leq 0\}$, we make the ansatz that
\begin{equation}
\label{eq:ansatz}
v(y) = -(y_1^3 - 3y_1 y_2^2) + c_1 y_1^4 + c_2 y_1^2 y_2^2 + c_3 y_1 y_2^3 + c_4 y_1^5 + c_5 y_1^3 y_2^2 + c_6 y_1^2 y_2^3 + c_7 y_1 y_2^4 + h.o.t,
\end{equation}
where $h.o.t$ abbreviates terms of higher order. We seek to find conditions on the metric $a^{ij}$ which ensure that such an expansion for $v$ up to fifth order exists. Without loss of generality we may further assume that
\begin{align*}
a^{ij}(0) = \delta^{ij}, \ a^{12}(x_1,0)=a^{21}(x_1,0)=0,
\end{align*}
which corresponds to a normalization at zero and the off-diagonal condition on the plane $\{x_2=0\}$. Transforming the equation
\begin{align*}
\p_i a^{ij} \p_j w = 0 \mbox{ in } \R^2_+,
\end{align*}
into the Legendre-Hodograph setting with the associated Legendre function $v$ yields
\begin{align*}
&a^{11}(-\p_1 v, -\p_2 v) \p_{22}v + a^{22}(-\p_1 v, -\p_2 v) \p_{11}v - 2 a^{12}(-\p_1 v, - \p_2 v) \p_{12}v\\
& - J(v)[\p_1 a^{11}(-\p_1 v, -\p_2 v) +\p_2 a^{12}(-\p_1 v, - \p_2 v) ] y_1 \\
&- J(v)[\p_1 a^{12}(-\p_1 v, - \p_2 v) +\p_2 a^{22}(-\p_1 v, -\p_2 v) ] y_2 =0 \mbox{ in } Q_+:= \{y_1\geq 0,\ y_2\leq 0\}.
\end{align*}
Here $J(v) = \det\begin{pmatrix} \p_{11}v & \p_{12}v\\
\p_{21}v & \p_{22}v \end{pmatrix}$.
Carrying out a Taylor expansion of the metric thus gives
\begin{align*}
&\Delta v -2\left(\p_2a^{12}(0)(-\p_2 v)\right)\p_{12}v\\
&+\left((-\p_1 v)\p_1a^{11}(0)+(-\p_2 v)\p_2a^{11}(0)\right)\p_{22}v+\left((-\p_1 v)\p_1a^{22}(0)+(-\p_2 v)\p_2a^{22}(0)\right)\p_{11}v\\
&-\det\begin{pmatrix}
\p_{11}v & \p_{12}v\\
\p_{21}v & \p_{22}v
\end{pmatrix}\left((\p_1a^{11}(0)+\p_2a^{21}(0))y_1+\p_2a^{22}(0)y_2\right)+h.o.t.=0.
\end{align*}
Inserting the ansatz (\ref{eq:ansatz}) into this equation, matching all terms of order up to three and using the off-diagonal condition, eventually yields the compatibility condition
\begin{align*}
\p_2(a^{11} + a^{22})(0)=0.
\end{align*}
Due to our normalization this necessary condition for having a polynomial expansion up to degree five can thus be formulated as
\begin{align*}
(\p_2 \det(a^{ij}))(0)=0.
\end{align*}
In particular this shows that on the transformed side, i.e. in the Legendre-Hodograph variables, one cannot expect arbitrary high regularity for $v$ in the normal directions $y_n, y_{n+1}$ in general. Compatibility conditions involving the metric $a^{ij}$ have to be satisfied to ensure this.
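To indicate the origin of this condition, we sketch the lowest order book-keeping (a heuristic computation): for the leading profile $v_0(y)=-(y_1^3-3y_1y_2^2)$ one computes
\begin{align*}
\p_1 v_0 = -3(y_1^2-y_2^2), \quad \p_2 v_0 = 6y_1y_2, \quad \p_{11}v_0 = -6y_1, \quad \p_{22}v_0 = 6y_1, \quad \p_{12}v_0 = 6y_2,
\end{align*}
and hence $J(v_0)=(-6y_1)(6y_1)-(6y_2)^2=-36(y_1^2+y_2^2)$. Inserting $v_0$ into the first order (metric) terms of the expanded equation therefore produces cubic polynomials whose coefficients are the derivatives $\p_k a^{ij}(0)$, while the Laplacian applied to the quintic part of the ansatz (\ref{eq:ansatz}) produces cubic polynomials with coefficients $c_4,\dots,c_7$. Matching these two families of cubic polynomials, subject to the mixed Dirichlet-Neumann conditions on $\p Q_+$, is only possible under the stated compatibility condition $\p_2(a^{11}+a^{22})(0)=0$.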
\end{rmk}
\section{$W^{1,p}$ Metrics and Nonzero Obstacles}
\label{sec:W1p}
In this section we consider the previous set-up in the presence of inhomogeneities $f\in L^p$ and possibly only Sobolev regular metrics. More precisely, in this section we assume that $a^{ij}:B_1^+ \rightarrow \R^{(n+1)\times (n+1)}_{sym}$ is a uniformly elliptic $W^{1,p}$, $p\in (n+1,\infty]$, metric and consider a solution $w$ of the variable coefficient thin obstacle problem with this metric:
\begin{equation}
\label{eq:inhom}
\begin{split}
\p_{i} a^{ij} \p_j w & = f \mbox{ in } B_1^+,\\
w \geq 0,\ \p_{n+1}w \leq 0, \ w\p_{n+1}w&=0 \mbox{ on } B_1'.
\end{split}
\end{equation}
We will discuss two cases:
\begin{itemize}
\item[(1)] $f=0$, $a^{ij}\in W^{1,p}$ with $p\in (n+1,\infty]$,
\item[(2)] $a^{ij}\in W^{1,p}$, $f\in L^p$ with $p\in (2(n+1),\infty]$.
\end{itemize}
In both cases all the normalization conditions (A1)-(A7) from Section \ref{sec:conventions} as well as the asymptotic expansions (c.f. Proposition~\ref{prop:asym2}) hold. We observe that case (2) in particular contains the setting with non-flat obstacles.
\subsection{Hodograph-Legendre transformation for $W^{1,p}$ metrics}
\label{sec:ext}
In the sequel, we discuss how the results from Sections \ref{sec:asymp}- \ref{sec:Legendre} generalize to the less regular setting of $W^{1,p}$, $p\in (n+1,\infty]$, metrics. We note that in this case the solution $w$ is only $W^{2,p}_{loc}(B_1^+\setminus\Gamma_w)$ regular away from the free boundary $\Gamma_w$. Thus, our Hodograph-Legendre transformation method from the previous sections does not apply directly (as it relies on the pointwise estimates of $D^2v$, and hence $D^2w$). Thus, a key ingredient in our discussion of this set-up will be the splitting result, Proposition 3.9, from \cite{KRSI}. In order to apply it, we extend $w$ and the metric $a^{ij}$ from $B_1^+$ to $B_1$ by an even reflection as in \cite{KRSI}. We now split our solution into two components, $w=u+\tilde{u}$, where $\tilde{u}$ solves
\begin{align}
\label{eq:split1}
a^{ij}\p_{ij}\tilde{u}-\dist(x,\Gamma_w)^{-2}\tilde{u}=f - (\p_i a^{ij})\p_j w \text{ in } B_1\setminus \Lambda_w, \quad \tilde{u}=0\text{ on }\Lambda_w,
\end{align}
and the function $u$ solves
\begin{align}
\label{eq:split2}
a^{ij}\p_{ij}u=-\dist(x,\Gamma_w)^{-2}\tilde{u}\text{ in } B_1\setminus \Lambda_w, \quad u=0\text{ on } \Lambda_w.
\end{align}
As in \cite{KRSI} the intuition is that $\tilde{u}$ is a ``controlled error'' and that $u$ captures the essential behavior of $w$. Moreover, as we will see later, $u$ is $C^{2,1-\frac{n+1}{p}}_{loc}$ regular away from $\Gamma_w$ and $\Gamma_w = \Gamma_u$ (c.f. the discussion below Lemma \ref{lem:lower1'}). Thus, in the sequel, we will apply the Hodograph-Legendre transformation to the function $u$.\\
In order to support this intuition, we recall the positivity of $\p_e u$ as well as the fact that $u$ inherits the complementary boundary conditions from $w$.
\begin{lem}[\cite{KRSI}, Lemma 4.11]
\label{lem:lower1'}
Let $a^{ij}\in W^{1,p}(B_1^+, \R^{(n+1)\times(n+1)}_{sym})$ and let $f\in L^p(B_1^+)$. Suppose that either
\begin{itemize}
\item[(1)] $p\in (n+1,\infty]$ and $f=0$ or,
\item[(2)] $p\in (2(n+1),\infty]$.
\eta_{\delta,r}nd{itemize}
Let $w:B_1^+ \rightarrow \R$ be a solution to the thin obstacle problem with inhomogeneity $f$, and let $u$ be defined as at the beginning of this section. Then we have that $u\in C^{2,1-\frac{n+1}{p}}_{loc}(B_1^+\setminus \Gamma_w)\cap C^{1,\min\{\frac{1}{2},1-\frac{n+1}{p}\}}_{loc}(B_1^+)$. Moreover, there exist constants $c, \eta>0$ such that for $e\in \mathcal{C}'_\eta(e_n)$, $\p_eu $ satisfies the lower bound
\begin{align*}
\p_{e}u (x) \geq c\dist(x,\Lambda_w)\dist(x,\Gamma_w)^{-\frac{1}{2}}.
\end{align*}
A similar statement holds for $\p_{n+1}u$ if $\Lambda_w$ is replaced by $\Omega_w$.
\end{lem}
We remark that the lower bound in Lemma \ref{lem:lower1'} does not necessarily hold for $\p_ew$. This is due to the insufficient decay properties of $\p_e \tilde{u}$ in the decomposition $\p_ew = \p_e\tilde{u}+\p_eu$. More precisely, the decay of $\p_e\tilde{u} $ to $\Lambda_w$ is in general only of the order $\dist(x,\Lambda_w)^{1-\frac{n+1}{p}}$, which cannot be controlled by $\dist(x,\Lambda_w)$.\\
We further note that the even symmetry of $u$ in $x_{n+1}$ and the regularity of $u$ imply that $\p_{n+1}u=0$ in $B'_1\setminus \Lambda_w$. In particular, this yields the complementary boundary conditions:
\begin{align*}
u \p_{n+1}u = 0 \mbox{ on } B_1'.
\end{align*}
Most importantly, Lemma \ref{lem:lower1'} combined with the previous observations on the behavior of $\nabla u$ on $B_1'$ implies that $\Gamma_w = \Gamma_u$. Hence, seeking to investigate $\Gamma_w$, it suffices to study $u$ and its boundary behavior. In this context, Lemma \ref{lem:lower1'} plays a central role as it allows us to deduce the sign conditions for $\p_e u$ and $\p_{n+1}u$ which are crucial in determining the image of the Legendre-Hodograph transform which we will associate with $u$.\\
In accordance with our intuition that $\tilde{u}$ is a ``controlled error", the function $u$ inherits the asymptotics of the solution $w$ around $\Gamma_w$. As in Proposition \ref{prop:invertibility} in Section \ref{sec:Hodo}, this is of great importance in proving the invertibility of the Legendre-Hodograph transform which we will associate with $u$. We formulate the asymptotic expansions in the following proposition:
\begin{prop}
\label{prop:improved_reg1}
Let $a^{ij}\in W^{1,p}(B_1^+, \R^{(n+1)\times(n+1)}_{sym})$ and let $f\in L^p(B_1^+)$.
Suppose that either
\begin{itemize}
\item[(1)] $p\in (n+1,\infty]$ and $f=0$ or,
\item[(2)] $p\in (2(n+1),\infty]$.
\end{itemize}
Let $w:B_1^+ \rightarrow \R$ be a solution to the thin obstacle problem with inhomogeneity $f$, and let $u$ be defined as at the beginning of this section. There exist small constants $\epsilon_0>0$ and $c_\ast>0$ depending on $n,p$ such that if
\begin{itemize}
\item[(i)] $ \|w-w_{3/2}\|_{C^1(B_1^+)}\leq \epsilon_0$,
\item[(ii)]$ \|\nabla a^{ij}\|_{L^p(B_1^+)}+\|f\|_{L^p(B_1^+)}\leq c_\ast,$
\end{itemize}
then the asymptotics (i)-(iii) in Proposition~\ref{prop:asym2} hold for $\p_eu$, $\p_{n+1}u$ and $u$. The exponent $\alpha$ in the error term satisfies $\alpha\in (0,1-\frac{n+1}{p}]$ in case (1) and $\alpha\in (0,\frac{1}{2}-\frac{n+1}{p}]$ in case (2).
\end{prop}
\begin{proof}
By the growth estimate of Remark 3.11 in \cite{KRSI} we have that
\begin{equation}
\label{eq:auxv}
\begin{split}
|\tilde{u}(x)|& \lesssim c_\ast \dist(x,\Lambda_w)\dist(x,\Gamma_w)^{\frac{3}{2}-\frac{n+1}{p}} \text{ in case (1);}\\
|\tilde{u}(x)|& \lesssim c_\ast \dist(x,\Lambda_w)\dist(x,\Gamma_w)^{1-\frac{n+1}{p}} \text{ in case (2).}
\end{split}
\end{equation}
In particular this implies that
\begin{align*}
&|\tilde{u}(x)|\lesssim c_\ast \dist(x,\Gamma_w)^{\frac{3}{2}+\delta_0},\quad
|\nabla \tilde{u}(x)|\lesssim c_\ast \dist(x,\Gamma_w)^{\frac{1}{2}+\delta_0},\\
&\text{where }\delta_0=\left\{\begin{array}{ll}
1-\frac{n+1}{p} &\text{ if } p\in (n+1,2(n+1)],\\
\frac{1}{2}-\frac{n+1}{p} &\text{ if } p\in (2(n+1),\infty].
\end{array}
\right.
\end{align*}
Since $\delta_0>0$, in both cases the functions $\tilde{u}$ and $\nabla \tilde{u}$ are of higher vanishing order at $\Gamma_w$ compared to the leading term in the corresponding asymptotics of $w$ and $\nabla w$ (which are of order $\dist(x,\Gamma_w)^{3/2}$ and $\dist(x,\Gamma_w)^{1/2}$).
\end{proof}
In addition to these results the second order asymptotics for $u$ (not for the whole function $w$) remain valid under the conditions of Proposition \ref{prop:improved_reg1}. More precisely we have the following result:
\begin{prop}
\label{prop:improved_reg'}
Under the same assumptions as in Proposition~\ref{prop:improved_reg1}, we have the following:
For each $x_0\in \Gamma_w\cap B^+_{1/2}$, for all $x$ in an associated non-tangential cone $\mathcal{N}_{x_0}$ and for all multi-indices $\beta$ with $|\beta|\leq 2$,
\begin{align*}
\left|\p^\beta u(x)-\p^\beta \mathcal{W}_{x_0}(x)\right|& \leq C_{n,p,\beta} \max\{\epsilon_0, c_*\} |x-x_0|^{\frac{3}{2}+\alpha-|\beta|},\\
\left[\p^\beta u-\p^\beta \mathcal{W}_{x_0}\right]_{\dot{C}^{0,\gamma}(\mathcal{N}_{x_0}\cap (B_{3\lambda/4}(x_0)\setminus B_{\lambda/2}(x_0)))}& \leq C_{n,p,\beta} \max\{\epsilon_0, c_*\} \lambda^{\frac{3}{2}+\alpha-\gamma-|\beta|}.
\end{align*}
Here $\gamma=1-\frac{n+1}{p}$ and $\lambda \in (0,1)$.
\end{prop}
\begin{proof}
We only prove the case of $|\beta|=2$, the other cases are already contained in Proposition \ref{prop:improved_reg1}. Since the arguments for case (1) and (2) are similar we only prove case (2), i.e. $p\in (2(n+1),\infty]$. As in Proposition \ref{prop:improved_reg} the result follows from scaling.
We consider the function
\begin{align*}
\bar{u}(x):= \frac{u(x_0+ \lambda x)- \mathcal{W}_{x_0}(x_0+\lambda x)}{\lambda^{3/2 + \alpha}},
\end{align*}
and note that it satisfies
\begin{align*}
a^{ij}(x_0 + \lambda \cdot) \p_{ij }\bar{u} = \tilde{G} + \tilde{g}_1,
\end{align*}
for $x\in \mathcal{N}_{0}\cap (B_{1}\setminus B_{1/4})$ and $\mathcal{N}_0:= \{x\in B_{1/4}^+| \ \dist(x,\Gamma_{w_{x_0,\lambda}}) > \frac{1}{2}|x|\}$. Here
\begin{align*}
\tilde{G}(x) & := -\lambda^{1/2 - \alpha}\dist(x_0+\lambda x,\Gamma_w)^{-2}\tilde{u}(x_0+\lambda x)\\
&=-\lambda^{-\frac{3}{2}-\alpha}\dist(x,\Gamma_{w_{x_0,\lambda}})^{-2}\tilde{u}(x_0+\lambda x),\\
\tilde{g}_1(x) & := \lambda^{1/2-\alpha} (a^{ij}(x_0 + \lambda x)-a^{ij}(x_0)) \partial_{ij} \mathcal{W}_{x_0}(x_0 +\lambda x).
\end{align*}
In the definition of $\tilde{g}_1$ we have used that $a^{ij}(x_0)\p_{ij}\mathcal{W}_{x_0}=0$ in $ \mathcal{N}_{x_0}\cap (B_{\lambda}(x_0)\setminus B_{\lambda/4}(x_0))$. Using \eqref{eq:auxv} and the regularity of $\tilde{u}$ (and abbreviating $\gamma=1-\frac{n+1}{p}$) yields
\begin{align*}
\|\tilde{G}\|_{C^{0,\gamma}(\mathcal{N}_{0}\cap (B_{1}\setminus B_{1/4}))}\leq C \lambda^{-\frac{3}{2}-\alpha} \lambda^{2-\frac{n+1}{p}}=C\lambda^{\frac{1}{2}-\alpha-\frac{n+1}{p}}.
\end{align*}
Recalling the $C^{0,1-\frac{n+1}{p}}$ regularity of $a^{ij}$ and the explicit expression of $\mathcal{W}_{x_0}$, we estimate
\begin{align*}
\|\tilde{g}_1\|_{C^{0,\gamma}(\mathcal{N}_{0}\cap (B_{1}\setminus B_{1/4}))}\leq C\lambda^{\frac{1}{2}-\alpha}\lambda^{1-\frac{n+1}{p}}\lambda^{-\frac{1}{2}}=C\lambda^{1-\alpha-\frac{n+1}{p}}.
\end{align*}
Hence, applying the interior Schauder estimate to $\bar u$ we obtain
\begin{align*}
\|\bar u\|_{C^{2,\gamma}(\mathcal{N}_{0}\cap (B_{3/4}\setminus B_{1/2}))}&\leq C\left(\|\tilde{ G}\|_{C^{0,\gamma}(\mathcal{N}_0\cap (B_{1}\setminus B_{1/4}))}+\|\tilde{g}_1\|_{C^{0,\gamma}(\mathcal{N}_0\cap (B_{1}\setminus B_{1/4}))}\right.\\
&\quad \left.+\|\bar u\|_{L^\infty(\mathcal{N}_0\cap (B_{1}\setminus B_{1/4}))}\right)\\
&\leq C\left(\lambda^{\frac{1}{2}-\alpha-\frac{n+1}{p}}+1 \right)\leq C \quad (\text{since }\alpha\in (0,\frac{1}{2}-\frac{n+1}{p}]).
\end{align*}
Scaling back, the error estimates become
\begin{align*}
\|\p_e u -\p_e \mathcal{W}_{x_0}\|_{L^\infty(\mathcal{N}_{x_0}\cap (B_{3\lambda/4}(x_0)\setminus B_{\lambda/2}(x_0)))}&\leq C\lambda^{\frac{1}{2}+\alpha},\\
\|\p_{ee'}u-\p_{ee'}\mathcal{W}_{x_0}\|_{L^\infty(\mathcal{N}_{x_0}\cap (B_{3\lambda/4}(x_0)\setminus B_{\lambda/2}(x_0)))}
&\leq C\lambda^{-\frac{1}{2}+\alpha},\\
[\p_{ee'}u-\p_{ee'}\mathcal{W}_{x_0}]_{\dot{C}^{0,\gamma}(\mathcal{N}_{x_0}\cap (B_{3\lambda/4}(x_0)\setminus B_{\lambda/2}(x_0)))}
&\leq C\lambda^{-\frac{1}{2}+\alpha-\gamma}.
\end{align*}
Since this holds for every $\lambda\in (0,1)$, we obtain the asymptotic expansion for $\p_e u $ and $\p_{e e'}u$. The asymptotics for $\p_{ij}u$ with $i$ or $j=n+1$ are derived analogously.
\end{proof}
Due to the above discussion, the associated Hodograph transform with respect to $u$,
$$
T(x):=(x'',\p_n u(x), \p_{n+1}u(x)),$$
still enjoys all the properties stated in Section \ref{sec:Hodo}. In particular, it is possible to define the associated Legendre function
\begin{align}
\label{eq:Leg_split}
v(y):= u(x)-x_n y_n - x_{n+1} y_{n+1},
\end{align}
for $x= T^{-1}(y)$. This function satisfies a nonlinear PDE analogous to the one from Section \ref{sec:Legendre}:
\begin{align*}
\tilde{F}(v,y) = g(y).
\eta_{\delta,r}nd{align*}
Here,
\begin{equation}\label{eq:w1p}
\begin{split}
\tilde{F}(v,y)&=-\sum_{i,j=1}^{n-1}\tilde{a}^{ij}\det\begin{pmatrix}
\p_{ij}v& \p_{in}v & \p_{i,n+1}v\\
\p_{jn}v& \p_{nn}v & \p_{n,n+1}v\\
\p_{j,n+1}v & \p_{n,n+1}v &\p_{n+1,n+1}v
\end{pmatrix}\\
&+2\sum_{i=1}^{n-1}\tilde{a}^{i,n}\det\begin{pmatrix}
\p_{in}v & \p_{i,n+1}v\\
\p_{n,n+1}v & \p_{n+1,n+1}v
\end{pmatrix}+2 \sum_{i=1}^{n-1}\tilde{a}^{i,n+1}\det\begin{pmatrix}
\p_{i,n+1}v & \p_{in}v\\
\p_{n,n+1}v & \p_{nn}v
\end{pmatrix}\\
&+\tilde{a}^{nn}\p_{n+1,n+1}v+\tilde{a}^{n+1,n+1}\p_{nn}v-2\tilde{a}^{n,n+1}\p_{n,n+1}v,\\
\tilde{a}^{ij}(y)&:=a^{ij}(x)\big|_{x=(y'',-\p_nv(y),-\p_{n+1}v(y))},\\
J(v)& := \p_{nn} v(y) \p_{n+1,n+1}v(y) - (\p_{n,n+1}v(y))^2,\\
g(y) &:= -J(v(y)) \dist(T^{-1}(y), T^{-1}(P))^{-2}\tilde{u}(T^{-1}(y)),\quad P:=\{y_n=y_{n+1}=0\}.
\end{split}
\end{equation}
From the asymptotics of $J(v)$ and \eqref{eq:auxv} for $\tilde{u}$ we have
\begin{align}
\label{eq:w1p_g}
|g(y)|\leq \left\{ \begin{array}{ll}
C(y_n^2+y_{n+1}^2)^{\frac{3}{2}-\frac{n+1}{p}} &\text{ if } p\in (n+1,\infty] \text{ and } f=0,\\
C(y_n^2+y_{n+1}^2)^{1-\frac{n+1}{p}} &\text{ if }p\in (2(n+1),\infty].
\end{array}\right.
\end{align}
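Heuristically, this decay can be read off from the definition of $g$ in (\ref{eq:w1p}): setting $r^2:=y_n^2+y_{n+1}^2$, the leading order asymptotics of the Legendre function give $|J(v)(y)|\sim r^2$, while $y_n=\p_n u\sim \dist(x,\Gamma_w)^{1/2}$ implies $\dist(T^{-1}(y),T^{-1}(P))\sim r^2$. Together with \eqref{eq:auxv} and $\dist(x,\Lambda_w)\leq \dist(x,\Gamma_w)$, this yields in case (1)
\begin{align*}
|g(y)| \lesssim r^{2}\cdot r^{-4}\cdot \left(r^2\right)^{\frac{5}{2}-\frac{n+1}{p}} = \left(y_n^2+y_{n+1}^2\right)^{\frac{3}{2}-\frac{n+1}{p}},
\end{align*}
and analogously in case (2) with the exponent $2-\frac{n+1}{p}$ in place of $\frac{5}{2}-\frac{n+1}{p}$.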
The result of Proposition \ref{prop:improved_reg'} in combination with an argument as in the proof of Proposition \ref{prop:holder_v} also yields that $v\in X_{\alpha,\epsilon}$ for a potentially very small $\alpha>0$.\\
We summarize all this in the following Proposition:
\begin{prop}
\label{eq:bulk_new}
Let $a^{ij}\in W^{1,p}$ and let $f\in L^p$.
Suppose that either
\begin{itemize}
\item[(1)] $p\in (n+1,\infty]$ and $f=0$ or,
\item[(2)] $p\in (2(n+1),\infty]$.
\end{itemize}
Let $v:T(B_{1/2}^+) \rightarrow \R$ be the Legendre function associated with $u$ defined in (\ref{eq:Leg_split}). Then $v\in C^1(T(B_{1/2}^+) )\cap X_{\alpha,\epsilon}(\mathcal{B}_{r_0}^+)$ for some $\alpha\in (0,1)$ (which is the same as in Proposition \ref{prop:improved_reg'}) and it satisfies the fully nonlinear equation
\begin{align*}
\tilde{F}(v,y) = g(y).
\end{align*}
Here $\tilde{F}, g$ are as in (\ref{eq:w1p}) and $g$ satisfies the decay estimate (\ref{eq:w1p_g}).
\end{prop}
\begin{rmk}
\label{rmk:decay}
We note that the leading contribution in the decay estimate for $g$ originates from the decay behavior of $\tilde{u}$ in \eqref{eq:auxv}. Therefore, the decay of $g$ is influenced by $-(\p_{i}a^{ij})\p_j w$ and by the inhomogeneity $f$ from \eqref{eq:split1}.
\end{rmk}
\subsection{Regularity of the free boundary}
\label{sec:free_boundary_reg_1}
In this section we discuss the implications of the results from Section \ref{sec:ext} on the free boundary regularity. In order to understand the different ingredients to the regularity results, we treat two different scenarios: First we address the setting of $W^{1,p}$ metrics with $p\in (n+1,\infty]$, and zero obstacles, i.e. with respect to Sections \ref{sec:HLTrafo} - \ref{sec:fb_reg} we present a result under even weaker regularity assumptions of the metric (c.f. Section \ref{subsec:w1p_zero}). Secondly, in Section \ref{sec:nonzero} we address the set-up with inhomogeneities. This in particular includes the case of non-zero obstacles. We treat this in the $W^{1,p}$ and the $C^{k,\gamma}$ framework.
\subsubsection{$W^{1,p}$ metrics without inhomogeneity}
\label{subsec:w1p_zero}
We now specialize to the setting in which $a^{ij}\in W^{1,p}$ with $p\in (n+1,\infty]$ and $f=0$ in (\ref{eq:inhom}). In this framework, we prove the following quantitative regularity result for the free boundary:
\begin{prop}[$C^{1,1-\frac{n+1}{p}}$ regularity]
\label{prop:W1p}
Let $a^{ij}\in W^{1,p}$ with $p\in (n+1,\infty]$ and $f=0$. Let $w$ be a solution of (\ref{eq:inhom}) and assume that the normalizations (A1)-(A7) from Section \ref{sec:conventions} hold. Then, if $p<\infty$, $\Gamma_w$ is a $C^{1,1-\frac{n+1}{p}}(B_{1/2}')$ graph and if $p=\infty$, it is a $C^{1,1-}(B_{1/2}')$ graph.
\end{prop}
\begin{proof}
We prove this result similarly as in the case of $C^{1,\gamma}$ metrics but instead of working with the original solution $w$, we work with the modified function $u$ from Section~\ref{sec:ext}.\\
We begin by splitting $w=u+\tilde{u}$ as in Section \ref{sec:ext}. Moreover, we recall that by Lemma~\ref{lem:lower1'} (and the discussion following it) $\Gamma_w = \Gamma_{u}$. The Legendre function $v$ with respect to $u$ (c.f. (\ref{eq:Leg_split})) satisfies the nonlinear equation $\tilde{F}(v,y)=g(y)$ (c.f. \eta_{\delta,r}qref{eq:w1p}), which in the notation in Section~\ref{subsec:improvement}, can be written as
$$\tilde{F}(v,y)=\sum_{i,j=1}^{n+1}\tilde{a}^{ij}(y)G^{ij}(v)=g(y).$$
Furthermore, $g$ satisfies the decay condition (\ref{eq:w1p_g}).
With this in mind, we begin by proving analogues of Propositions \ref{prop:error_gain} and \ref{prop:error_gain2}.
To this end, we use a Taylor expansion to obtain that
$$\tilde{a}^{ij}(y)=\tilde{a}^{ij}(y_0)+ \hat E^{y_0,ij}(y), \quad y\in \mathcal{B}_{1/2}^+(y_0),$$
for each $y_0\in P\cap \mathcal{B}_{1/2}$.
Due to the $C^{0,1-\frac{n+1}{p}}$ Hölder regularity of $a^{ij}$, for $\epsilon \in (0,1-\frac{n+1}{p})$ the function $\hat E^{y_0,ij}(y)$ satisfies
\begin{equation}\label{eq:w1p_metric}
\begin{split}
\left\|d_G(\cdot,y_0)^{-2(1-\frac{n+1}{p})}\hat E^{y_0,ij}\right\|_{L^\infty(\mathcal{B}_{1/2}^+(y_0))}\\
+\left[d_G(\cdot,y_0)^{-2(1-\frac{n+1}{p}-\epsilon) }\hat E^{y_0,ij}\right]_{\dot{C}^{0,\epsilon}_\ast(\mathcal{B}_{1/2}^+(y_0))}\leq C.
\end{split}
\end{equation}
Recalling \eqref{eq:expand_v}, we expand the nonlinear function $G^{ij}(v)$ as
$$G^{ij}(v)=G^{ij}(v_{y_0})+\p_{m_{k\ell}}G^{ij}(v_{y_0})\p_{k\ell}(v-v_{y_0})+\tilde{E}^{y_0,ij}_1(y),$$
where $v_{y_0}$ is the asymptotic profile of $v$ at $y_0$ and $\tilde{E}^{y_0,ij}_1(y)$ denote the same functions as in \eqref{eq:expand_v}.
Due to Proposition~\ref{prop:improved_reg1} the error term $\tilde{E}_1^{y_0,ij}$ satisfies the estimate (\ref{eq:Holderweight}). Hence, as in Step 1c of Proposition~\ref{prop:error_gain}, we can rewrite our nonlinear equation $\tilde{F}(v,y)=g(y)$ as
\begin{align*}
L_{y_0}v= L_{y_0}v_{y_0}+\tilde{f}
\end{align*}
with $L_{y_0}=\tilde{a}^{ij}(y_0)\p_{m_{k\ell}}G^{ij}(v_{y_0})\p_{k\ell}$ being the same leading term as in Remark~\ref{rmk:error_gain3} and
\begin{align*}
\tilde{f}(y)&=-\tilde{a}^{ij}(y_0)\tilde{E}^{y_0,ij}_1(y)-\hat E^{y_0,ij}(y)G^{ij}(v_{y_0})+g(y).
\end{align*}
Due to the error bounds for $\tilde{E}_1^{y_0,ij}$ and $\hat E^{y_0,ij}(y)$, the linear estimate for $G^{ij}(v)$ and the estimate \eqref{eq:w1p_g} for $g$, we infer that
\begin{align*}
|\tilde{f}(y)|\leq C d_G(y,y_0)^{\eta_0}, \quad \eta_0=\min\left\{1+4\alpha, 3-\frac{2(n+1)}{p}\right\}.
\end{align*}
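The exponent $\eta_0$ collects the decay rates of the three contributions to $\tilde{f}$ (a heuristic book-keeping; the rigorous versions are the weighted estimates quoted above):
\begin{align*}
|\tilde{a}^{ij}(y_0)\tilde{E}^{y_0,ij}_1(y)| &\lesssim d_G(y,y_0)^{1+4\alpha},\\
|\hat E^{y_0,ij}(y)\, G^{ij}(v_{y_0})| &\lesssim d_G(y,y_0)^{2(1-\frac{n+1}{p})}\, d_G(y,y_0) = d_G(y,y_0)^{3-\frac{2(n+1)}{p}},\\
|g(y)| &\lesssim d_G(y,y_0)^{3-\frac{2(n+1)}{p}},
\end{align*}
where in the second line we used \eqref{eq:w1p_metric} together with the linear vanishing $|G^{ij}(v_{y_0})|\lesssim d_G(y,y_0)$ of the model nonlinearity at $y_0$, and in the third line the estimate \eqref{eq:w1p_g} for $g$; the minimum of the three exponents is $\eta_0$.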
Hence, as long as $1+4\alpha<3-\frac{2(n+1)}{p}$ we bootstrap regularity as in Proposition~\ref{prop:error_gain2}, in order to obtain an increasingly higher modulus of regularity for $v$ at $P$. In particular, by the compactness argument in the Appendix, c.f. Section \ref{sec:quarter_Hoelder}, this allows us to conclude that the Legendre function $v$ is in $X_{\delta,\epsilon}(\mathcal{B}_{1/2}^+)$ for all $\delta \in (0,1-\frac{n+1}{p}]$ if $p<\infty$ and in $X_{\delta,\epsilon}(\mathcal{B}_{1/2}^+)$ for all $\delta \in (0,1-\frac{n+1}{p})$ if $p= \infty$. This shows the desired regularity of $v$ and hence of $\Gamma_u$.
\end{proof}
\subsubsection{Regularity results in the presence of inhomogeneities and obstacles}
\label{sec:nonzero}
In this section we consider the regularity of the free boundary in the presence of non-vanishing inhomogeneities $f$. In particular, this includes the presence of obstacles (c.f. Remark \ref{rmk:obstacles_1}).
In this set-up we show the following results:
\begin{prop}[Inhomogeneities]
\label{prop:inhomo_2}
Let $w$ be a solution of the thin obstacle problem with metric $a^{ij}$ satisfying the assumptions (A1)-(A7) from Section \ref{sec:conventions}.
\begin{itemize}
\item[(i)] Assume further that $a^{ij}\in W^{1,p}(B_1^+, \R^{(n+1)\times(n+1)}_{sym})$ and $f\in L^p$ for some $p\in (2(n+1),\infty]$. Then $\Gamma_w$ is locally a $C^{1,\frac{1}{2}-\frac{n+1}{p}}$ graph.
\item[(ii)] Assume further that $a^{ij}\in C^{k,\gamma}(B_1^+, \R^{(n+1)\times(n+1)}_{sym})$ and that $f\in C^{k-1,\gamma}$ with $k\geq 1$, $\gamma \in (0,1)$. Then we have that $\Gamma_w$ is locally a $C^{k+[\frac{1}{2}+\gamma], (\frac{1}{2}+\gamma - [\frac{1}{2}+\gamma])}$ graph.
\end{itemize}
\end{prop}
We point out that compared with the result without inhomogeneities we lose half a derivative. This is due to the worse decay of the inhomogeneity in (\ref{eq:w1p_g}).\\
As in the zero obstacle case, the proofs for Proposition \ref{prop:inhomo_2} rely on the Hodograph-Legendre transformation. In case (i) of Proposition~\ref{prop:inhomo_2}, we consider the Legendre transformation with respect to the modified solution $u$ after applying the splitting method. This is similar to Section~\ref{subsec:w1p_zero}, where we dealt with $W^{1,p}$ metrics with zero right hand side. In case (ii) of Proposition~\ref{prop:inhomo_2}, we consider the Legendre transformation with respect to the original solution $w$. We remark that the presence of the inhomogeneity changes neither the leading order asymptotic expansion of $\nabla w$ around the free boundary, nor that of the second derivatives $D^2w$ in the corresponding non-tangential cones (assuming $\|f\|_{L^\infty}$ is sufficiently small, which can always be achieved by scaling). In particular, in this case the Hodograph-Legendre transformation is well defined, and the asymptotic expansion of the Legendre function (c.f. Section~\ref{sec:Leg}) remains true.
\begin{proof}
We prove the result of Proposition \ref{prop:inhomo_2} in three steps. First we consider the set-up of (i). Then we divide the setting of (ii) into the cases $k=1$ and $k\geq 2$.
\begin{itemize}
\item In the case of $W^{1,p}$ metrics and $W^{2,p}$ obstacles, we proceed similarly as in Section~\ref{subsec:w1p_zero} by using the splitting method from above. The only changes occur when we estimate the inhomogeneity $g(y)$, where $g(y)$ is as in (\ref{eq:w1p}). Indeed, in the case of $f\neq 0$ we can in general only use the decay estimate \eqref{eq:w1p_g} for $g(y)$. This yields
\begin{align*}
|g(y)|\leq Cd_G(y,y_0)^{\eta_0},\quad \eta_0=\min\left\{1+4\alpha, 2-\frac{2(n+1)}{p}\right\}.
\end{align*}
Thus, we obtain that $v\in X_{\delta,\epsilon}(\mathcal{B}_{1/2}^+)$ for all $\delta \in (0,\frac{1}{2}-\frac{n+1}{p}]$. In particular, this entails that $\p_{in}v\in C^{0,\frac{1}{2}-\frac{n+1}{p}}(P\cap \mathcal{B}_{\frac{1}{2}})$. Hence, the regular free boundary $\Gamma_{\frac{3}{2}}(w)$ is locally a $C^{1,\frac{1}{2}-\frac{n+1}{p}}$ submanifold.
\item In the case of a $C^{1,\gamma}$ metric $a^{ij}$ and a $C^{0,\gamma}$ inhomogeneity $f$, we carry out an analogous expansion as in Proposition \ref{prop:error_gain} and estimate the right hand side of the equation by $d_G(y,y_0)^{2}$. Hence, an application of the bootstrap argument from Proposition \ref{prop:error_gain2} implies that $v\in X_{\delta,\epsilon}(\mathcal{B}_{1/2}^+)$ for all $\delta \in (0,\frac{1}{2}]$. Combining this with the application of the implicit function theorem as in Section \ref{sec:IFT1} hence yields that $\p_{in}v \in C^{[1/2+\gamma], (1/2+\gamma - [1/2+\gamma])}$. This implies the desired regularity.
\item
In the case of $C^{k,\gamma}$, $k\geq 2$ metrics we first apply the implicit function theorem (note that in our set-up the functional $\tilde{F}_a(w_a,y)=F_a(w_a+v,y)-g_a(y)$ is still $C^{k-1,\gamma-\epsilon}$ regular in the parameter $a$). In contrast to the argument in Section \ref{sec:IFT1} we can however now only apply the implicit function theorem in the spaces $X_{\delta,\epsilon}$ with $\delta \in (0,1/2]$. Thus, by the implicit function theorem argument (Step 1 in Theorem~\ref{prop:hoelder_reg_a} in Section \ref{sec:IFT1}) we infer that $\p_{in}v\in C^{k+[1/2+\gamma], (1/2+\gamma - [1/2+\gamma])}$.
\end{itemize}
This concludes the proof of Proposition \ref{prop:inhomo_2}.
\end{proof}
Finally, we comment on the relation of our regularity results with inhomogeneities and the presence of non-zero obstacles.
\begin{rmk}
\label{rmk:obstacles_1}
We note that the set-up of the present Section \ref{sec:W1p} (c.f. \eqref{eq:inhom}) in particular includes the set-up of non-zero obstacles: Indeed, let $a^{ij}:B_1^+ \rightarrow \R^{(n+1)\times (n+1)}_{sym}$ be a uniformly elliptic $W^{1,p}$, $p\in (2(n+1),\infty]$, metric satisfying (A1)-(A3) and let $\phi:B'_1 \rightarrow \R$ be a $W^{2,p}$ function. Suppose that $\tilde{w}$ is a solution to the thin obstacle problem with metric $a^{ij}$ and obstacle $\phi$. Then $w:=\tilde{w}-\phi$ is a solution of the thin obstacle problem
\begin{align*}
\p_{i} a^{ij} \p_j w & = f \mbox{ in } B_1^+,\\
\p_{n+1}w \leq 0, \ w \geq 0, \ w\p_{n+1}w&=0 \mbox{ in } B_1'.
\end{align*}
Hence, the inhomogeneity now reads $f=-\p_ia^{ij}\p_j\phi$ and is in $L^p$. In particular, Proposition \ref{prop:inhomo_2} is applicable and yields the $C^{1,\frac{1}{2}-\frac{n+1}{p}}$ regularity of the free boundary. Analogous reductions hold for more regular metrics and non-vanishing obstacles.
\end{rmk}
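To spell out the reduction in the remark above (a sketch; we only use that $\tilde{w}$ solves the homogeneous bulk equation and the linearity of the operator):
\begin{align*}
\p_i a^{ij}\p_j w = \p_i a^{ij}\p_j \tilde{w} - \p_i a^{ij}\p_j \phi = -\p_i a^{ij}\p_j \phi =: f.
\end{align*}
Since $a^{ij}\in W^{1,p}$ and $\phi\in W^{2,p}$ with $p>2(n+1)$ (so that $\nabla \phi$ is bounded), the Leibniz rule gives $f=-(\p_i a^{ij})\p_j\phi - a^{ij}\p_{ij}\phi \in L^p$.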
\section{Appendix}
\label{sec:append}
Last but not least, we provide proofs of the estimates which we used in the application of the implicit function theorem. This in particular concerns the spaces $X_{\delta,\epsilon}, Y_{\delta,\epsilon}$ and the mapping properties of $\D_G$ in these: After giving the proof of the characterization of the spaces $X_{\delta,\epsilon}$, $Y_{\delta,\epsilon}$ in terms of decompositions into Hölder functions (c.f. Proposition \ref{prop:decompI}) in Section \ref{sec:decomp}, we present the proof of the (local) $X_{\delta,\epsilon}$ estimates for solutions of the Baouendi-Grushin operator with mixed homogeneous Dirichlet-Neumann data (c.f. Proposition \ref{prop:invert}) in Section \ref{sec:quarter_Hoelder}. Here we argue by an iterative approximation argument, which exploits the scaling properties of the Baouendi-Grushin operator similarly as in \cite{Wa03}. Finally, in Sections \ref{sec:XY} and \ref{sec:kernel}, we use this to show the necessary mapping properties of $\D_G$ in the spaces $X_{\delta,\epsilon},Y_{\delta,\epsilon}$.
\subsection{Proof of Proposition \ref{prop:decompI}}
\label{sec:decomp}
In this section we present the proof of the characterization of the spaces $X_{\alpha,\epsilon}$, $Y_{\alpha,\epsilon}$ in terms of decompositions into Hölder functions (c.f. Proposition \ref{prop:decompI} in Section \ref{sec:functions}).
\begin{proof}
We argue in two steps and first discuss the decomposition of functions in $Y_{\alpha,\epsilon}$ and then the corresponding property of functions in $X_{\alpha,\epsilon}$:\\
(i) Given $f\in Y_{\alpha,\epsilon}$, we denote
$$f_0(y''):=\p_nf(y''), \quad f_1(y):=r(y)^{-(1+2\alpha-\epsilon)}(f(y)-f_0(y'')y_n).$$
In particular, this yields
$f(y)=f_0(y'')y_n+r^{1+2\alpha-\epsilon}f_1(y)$. Moreover, we note that $f_1$ is well-defined on $P$, where it vanishes as a consequence of the boundedness of the $Y_{\alpha,\epsilon}$ norm and of Remark \ref{rmk:homo}. Hence, it suffices to prove the Hölder regularity of $f_0$ and $f_1$. \\
To show that $f_0 \in C^{0,\alpha}(P)$ (in the classical sense), we consider points $y_0,y_1\in P$, $y_0\neq y_1$ and a point $y=(y'',y_n,y_{n+1})\notin P$ with the property that $r(y)=|y_0-y_1|^{1/2}$ and $y_0,y_1\in \mathcal{B}_{2r(y)}(y)$. Then, by the boundedness of the norm and by recalling the estimates in Remark \ref{rmk:homo} (i), we have
\begin{align*}
|f(y)-f_0(y_0)y_n|&\leq C r^{1+2\alpha},\quad |f(y)-f_0(y_1)y_n|\leq C r^{1+2\alpha}.
\end{align*}
Thus, by the triangle inequality,
$$|f_0(y_0)y_n-f_0(y_1)y_n|\leq Cr^{1+2\alpha}.$$
Choosing $y$ with $y_{n+1}=0$, $|y_n|=r(y)>0$ and dividing by $|y_{n}|$ yields
\begin{align*}
|f_0(y_0)-f_0(y_1)|\leq Cr^{2\alpha}=C|y_0-y_1|^\alpha.
\end{align*}
This shows the $C^{0,\alpha}$ regularity of $f_0$ (if $\alpha \in (0,1]$).\\
We proceed with the $C^{0,\epsilon}_\ast(Q_+)$ regularity of $f_1$. First we observe that since $|f(y)-f_0(y'')y_n|\leq C r(y)^{1+2\alpha}$ (which follows from Remark \ref{rmk:homo} (i)), we immediately infer that $|f_1(y)|\leq Cr(y)^{\epsilon}$. Thus, if $y_1, y_2 \in Q_+$ are such that
$d_G(y_1,y_2)\geq \frac{1}{10}\max\{r(y_1), r(y_2) \}$, we have
\begin{align*}
|f_1(y_1)-f_1(y_2)| \leq C r(y_1)^{\epsilon}+Cr(y_2)^\epsilon\leq C d_G(y_1,y_2)^{\epsilon}.
\end{align*}
If $y_1,y_2\in Q_+$ are such that $d_G(y_1,y_2)<\frac{1}{10}\max\{r(y_1),r(y_2)\}$, then there is a point $\bar y\in P$ such that $y_1,y_2\in \mathcal{B}_1^+(\bar y)$ (for example assuming $r(y_1)\geq r(y_2)$ we can let $\bar y=(y''_1,0,0)$). Then the H\"older regularity follows from the $C^{0,\epsilon}_\ast(\mathcal{B}_1^+(\bar y))$ regularity of $d(\cdot, \bar y)^{-(1+2\alpha-\epsilon)}(f-P_{\bar y})$ and the $C^{0,\alpha}(P)$ regularity of $f_0$.
More precisely,
\begin{equation}
\label{eq:f1}
\begin{split}
&\quad \left|f_1(y_1)-f_1(y_2)\right|\\
&=\left|r(y_1)^{-1-2\alpha+\epsilon}\left(f(y_1)-f_0(y''_1)(y_1)_n\right)-r(y_2)^{-1-2\alpha+\epsilon}\left(f(y_2)-f_0(y_2'')(y_2)_n\right)\right|\\
&\leq r(y_1)^{-1-2\alpha+\epsilon}\left|\left(f(y_1)-f_0(y''_1)(y_1)_n\right)-\left(f(y_2)-f_0(y''_1)(y_2)_n\right)\right|\\
& \quad + r(y_1)^{-1-2\alpha+\epsilon}\left|f_0(y''_1)(y_2)_n-f_0(y_2'')(y_2)_n\right|\\
&\quad + |r(y_1)^{-1-2\alpha+\epsilon}-r(y_2)^{-1-2\alpha+\epsilon}||f(y_2)-f_0(y_2'')(y_2)_n|.
\end{split}
\end{equation}
By the definition of the norm of $Y_{\alpha,\epsilon}$, we have
\begin{align*}
&\left|\left(f(y_1)-f_0(y''_1)(y_1)_n\right)-\left(f(y_2)-f_0(y''_1)(y_2)_n\right)\right|\\
&=\left|(f(y_1)-P_{y''_1}(y_1))-(f(y_2)-P_{y''_1}(y_2))\right|\lesssim r(y_1)^{1+2\alpha-\epsilon}d_G(y_1,y_2)^{\epsilon}.
\end{align*}
Moreover, the $C^{0,\alpha}$ regularity of $f_0$ as well as $|(y_2)_n|\sim r$ yields
\begin{align*}
r(y_1)^{-1-2\alpha+\epsilon}\left|f_0(y''_1)(y_2)_n-f_0(y_2'')(y_2)_n\right|&\lesssim r(y_1)^{-1-2\alpha+\epsilon}r(y_1)d_G(y_1,y_2)^{2\alpha}\\
& \lesssim d_G(y_1,y_2)^{\epsilon}.
\end{align*}
Here we have used that $2\alpha \geq \epsilon$ and that w.l.o.g. $0\leq r(y_2)\leq r(y_1)$.
Finally, the last term in (\ref{eq:f1}) is estimated by the $C^{0,\epsilon}_\ast$ regularity of $r(y_1)^{\epsilon}$ and by recalling the definition of the norm on $Y_{\alpha,\epsilon}$ in combination with Remark \ref{rmk:homo} once more. Combining all the previous observations, we have
\begin{align*}
|f_1(y_1)-f_1(y_2)|\lesssim d_G(y_1,y_2)^{\epsilon}.
\end{align*}
This completes the proof of (i).\\
(ii) The proof for the decomposition of $v$ is similar. Given any $y\in Q_+\setminus P$, we denote by $y_0:=(y'',0,0)\in P$ the projection of $y$ onto $P$. Since $v$ is $C^{3,\alpha}_\ast$ at $y_0$, there exists a Taylor polynomial
\begin{align*}
P_{y_0}(z)=\p_n v(y_0)z_n+\p_{in}v(y_0)(z_i-y_i)z_n + \frac{1}{6}\p_{nnn}v(y_0)z_n^3+\frac{1}{2}\p_{n,n+1,n+1}v(y_0)z_nz_{n+1}^2,
\end{align*}
such that $|v(z)-P_{y_0}(z)|\leq Cd_G(z,y_0)^{3+2\alpha}$ for each $z\in \mathcal{B}_{1}^+(y_0)$. Due to the regularity of $v$ at $P$, the coefficients have the desired regularity properties: $\p_{n}v(y'')\in C^{1,\alpha}(P\cap \mathcal{B}_{1/2})$, $\p_{in}v(y''), \p_{nnn}v(y''), \p_{n,n+1,n+1}v(y'')\in C^{0,\alpha}(P\cap \mathcal{B}_{1/2})$. Moreover, their Hölder semi-norms are bounded from above by $C\|v\|_{X_{\alpha,\epsilon}}$. \\
In order to show the $C^{0,\epsilon}_\ast$ estimates of $C_1$, $V_i$ and $C_{ij}$, we argue similarly as in (i) for $f\in Y_{\alpha,\epsilon}$. For simplicity we only present the argument for
$$V_n(y):=r^{-(2+2\alpha-\epsilon)}\p_n(v-P_{y''})(y), \quad y\in \mathcal{B}_1^+\setminus P.$$
The others are analogous.
First, the boundedness of the first two terms in the norm $\|v\|_{X_{\alpha,\epsilon}}$ (c.f. Definition~\ref{defi:spaces}) and an interpolation estimate imply that for each $y\in \mathcal{B}_1^+\setminus P$ fixed, $d_G(z,y'')^{-(2+2\alpha-\epsilon)}\p_n(v-P_{y''})(z)$ as a function of $z$ is in $C^{0,\epsilon}_\ast(\mathcal{B}_{r(y)/2}(y))$, with norm bounded by $C\|v\|_{X_{\alpha,\epsilon}}$.
Next, for any points $z_1$ and $z_2$ in the non-tangential ball $\mathcal{B}_{r(y)/2}(y)$ we have
\begin{align*}
&\quad |V_n (z_1)-V_n(z_2)|\\
&=\left|d(z_1,z''_1)^{-(2+2\alpha-\epsilon)}\p_{n}(v-P_{z''_1})(z_1)-d(z_2,z''_2)^{-(2+2\alpha-\epsilon)}\p_{n}(v-P_{z''_2})(z_2)\right|\\
&\leq \left|d(z_1,z''_1)^{-(2+2\alpha-\epsilon)}\p_{n}(v-P_{z''_1})(z_1)-d(z_2,z''_1)^{-(2+2\alpha-\epsilon)}\p_{n}(v-P_{z''_1})(z_2)\right|\\
& \quad + \left|\left(d(z_2,z''_1)^{-(2+2\alpha-\epsilon)}-d(z_2,z''_2)^{-(2+2\alpha-\epsilon)}\right)\p_{n}(v-P_{z''_1})(z_2)\right|\\
& \quad +\left|d(z_2,z''_2)^{-(2+2\alpha-\epsilon)}\p_{n}(P_{z''_2}-P_{z''_1})(z_2)\right|:=I+II+III.
\end{align*}
By the definition of the $X_{\alpha,\epsilon}$-norm and by interpolation, $I\leq C\|v\|_{X_{\alpha,\epsilon}}d_G(z_1, z_2)^{\epsilon}$. Using the fact that $|\p_{n}(v-P_{z''_1})(z_2)|\leq C\|v\|_{X_{\alpha,\epsilon}} d_G(z_2,z''_1)^{2+2\alpha}$ and that $d_G(z''_2,z''_1)\leq Cd_G(z_2,z_1)\leq C\min \{d_G(z_2,z''_2),d_G(z_1,z''_1)\}$ for $z_1,z_2$ in the non-tangential ball $\mathcal{B}_{r(y)/2}(y)$, we also have that $II\leq C\|v\|_{X_{\alpha,\epsilon}}d_G(z_2,z_1)^{\epsilon}$. To estimate $III$ we notice that
\begin{align*}
\p_n P_{z''_1}(z)&=\p_nv(z''_1)+\sum_{i=1}^{n-1}\p_{in}v(z''_1)(z_i-(z_1)_i)\\
& \quad +\frac{1}{2}\p_{nnn}v(z''_1)z_n^2+\frac{1}{2}\p_{n,n+1,n+1}v(z''_1)z_{n+1}^2.
\end{align*}
Recalling that $\p_{n}v(y'')\in C^{1,\alpha}(P)$ and using a Taylor expansion of $\p_nv(y'')$ at $z''_1$, we infer that
\begin{align*}
\left|\p_nv(z''_2)-\left(\p_nv(z''_1)+\sum_{i=1}^{n-1}\p_{in}v(z''_1)((z_2)_i-(z_1)_i)\right)\right|\leq C\|v\|_{X_{\alpha,\epsilon}}|z''_2-z''_1|^{1+\alpha}.
\end{align*}
Thus, recalling the $C^{0,\alpha}$ regularity of $\p_{nnn}v(z'')$ and $\p_{n,n+1,n+1}v(z'')$, we obtain
\begin{align*}
\p_n( P_{z''_1}-P_{z''_2}) (z_2)&= \left(\p_n v(z''_1)+\sum_{i=1}^{n-1}\p_{in}v(z''_1)( (z_2)_i-(z_1)_i) - \p_n v(z''_2)\right)\\
&+\frac{1}{2}\left(\p_{nnn}v(z''_1)-\p_{nnn}v(z''_2)\right)(z_2)_n^2 \\
&+\frac{1}{2}\left(\p_{n,n+1,n+1}v(z''_1)-\p_{n,n+1,n+1}v(z''_2)\right)(z_2)_{n+1}^2\\
&\leq C\|v\|_{X_{\alpha,\epsilon}}d_G(z''_2,z''_1)^{2(1+\alpha)}+ C\|v\|_{X_{\alpha,\epsilon}}d_G(z''_1,z''_2)^{2\alpha}d_G(z_2,z''_2)^2.
\end{align*}
By the same reasoning as for $f$, this implies the estimate for $III$.
\end{proof}
\subsection{Proof of Proposition~\ref{prop:invert}}
\label{sec:quarter_Hoelder}
In this section, we present the proof of the (local) $X_{\delta,\epsilon}$ estimates for the Baouendi-Grushin operator (c.f. Proposition \ref{prop:invert} in Section \ref{sec:functions}).
We begin by recalling the natural energy spaces associated with the Baouendi-Grushin operator:
\begin{defi}
\label{defi:GrushinLp}
Let $\Omega\subset \R^{n+1}$ be an open subset.
The Baouendi-Grushin operator is naturally associated with the following Sobolev spaces (recall Definition~\ref{defi:Hoelder1} for the vector fields $\tilde{Y}_j$):
\begin{align*}
M^{1}(\Omega)&:= \{u\in L^2(\Omega)| \tilde{Y}_ju \in L^2(\Omega) \mbox{ for } j\in \{1,\dots,2n\}\},\\
M^{2}(\Omega)&:= \{u\in L^2(\Omega)|\tilde{Y}_ju, \tilde{Y}_{k}\tilde{Y}_\ell u\in L^2(\Omega)\text{ for } j,k,\ell\in \{1,\ldots, 2n\}\}.
\end{align*}
\end{defi}
We prove Proposition~\ref{prop:invert} in two steps. Firstly, we obtain a polynomial approximation (in the spirit of Campanato spaces) near the points at which the ellipticity of the operator degenerates, $P:=\{y_n=y_{n+1}=0\}$ (c.f. Proposition \ref{prop:Hoelder0}). Then we interpolate these estimates with the uniformly elliptic estimates which hold away from the degenerate points. Here we follow a compactness argument which was first outlined in this form by Wang, \cite{Wa03}. It proceeds via approximation and iteration steps.\\
In the sequel, we deduce a first regularity estimate in the energy space. This serves as a compactness result for the following approximation lemmata:
\begin{prop}
\label{prop:Sobolevreg}
Let $0<r\leq R<\infty$. Let $f:\mathcal{B}_{R}^+(0) \rightarrow \R$ be an $L^{2}$ function and let $u:\mathcal{B}_{R}^+\rightarrow \R$ be a solution of
\begin{equation}
\label{eq:Grushin}
\begin{split}
\Delta_G u& =f \text{ in } \mathcal{B}_R^+,\\
u&=0 \text{ on } \mathcal{B}_R^+\cap \{y_n=0\},\\
\p_{n+1}u&=0 \text{ on } \mathcal{B}_R^+\cap\{y_{n+1}=0\}.
\end{split}
\end{equation}
Then
\begin{align}
\label{eq:Lpapriori}
\left\| u \right\|_{M^{2}(\mathcal{B}_r^+)} \leq C(n,r,R)\left( \| f\|_{L^{2}(\mathcal{B}_{R}^+)} + \| u\|_{L^{\infty}(\mathcal{B}_{R}^+)} \right).
\end{align}
\end{prop}
\begin{proof}
The result is obtained by an even and odd reflection from the whole space result (in particular by the kernel estimate, see e.g. Lemma \ref{lem:ker} in Section \ref{sec:kernel}).
\end{proof}
With Proposition \ref{prop:Sobolevreg} at hand, we prove our first approximation result: We approximate solutions of the \emph{inhomogeneous} Baouendi-Grushin equation by solutions of the \emph{homogeneous} equation, provided the inhomogeneity is sufficiently small.
\begin{lem}
\label{lem:compactness}
Assume that $u:\mathcal{B}_{1}^+ \rightarrow \R$ is a solution of (\ref{eq:Grushin}) which satisfies
\begin{align*}
\frac{1}{|\mathcal{B}_{1}^+(0)|}\int\limits_{\mathcal{B}_1^+(0)} u^2 dx \leq 1.
\end{align*}
For any $\epsilon>0$ there exists a constant $\delta=\delta(\epsilon)>0$ such that if
\begin{align*}
\frac{1}{|\mathcal{B}_{1}^+(0)|} \int\limits_{\mathcal{B}_1^{+}(0)}f^2 dx \leq \delta^2,
\end{align*}
then there is a solution $h$ of the homogeneous Baouendi-Grushin equation with mixed Dirichlet-Neumann data, i.e.
\begin{equation}
\label{eq:hGrushin}
\begin{split}
\D_G h & = 0 \mbox{ in } \mathcal{B}_R^+(0),\\
h&=0 \mbox{ on } \{y_{n}=0\}\cap \mathcal{B}_R^+(0),\\
\p_{n+1} h&=0 \mbox{ on } \{y_{n+1}=0\}\cap \mathcal{B}_R^+(0),
\end{split}
\end{equation}
such that
\begin{align*}
\frac{1}{|\mathcal{B}_{1/2}^+(0)|} \int\limits_{\mathcal{B}_{1/2}^+(0)}|u-h|^2 dx \leq \epsilon^2.
\end{align*}
\end{lem}
\begin{proof}
We argue by contradiction and compactness. Assume that the statement were false. Then there would exist $\epsilon>0$ and sequences $\{u_m\}_{m}$, $\{f_m\}_m$ such that on the one hand
\begin{equation}
\label{eq:contra}
\begin{split}
\D_G u_m &= f_m \mbox{ in } \mathcal{B}_1^+(0),\\
u_m & = 0 \mbox{ on } \{y_n = 0\}\cap \mathcal{B}_{1}^+(0),\\
\p_{n+1} u_m &= 0 \mbox{ on } \{y_{n+1}=0\} \cap \mathcal{B}_{1}^+(0).
\end{split}
\end{equation}
and
\begin{align*}
&\frac{1}{|\mathcal{B}_{1}^+(0)|}\int\limits_{\mathcal{B}_1^+(0)} u_m^2 dx \leq 1, \ \frac{1}{|\mathcal{B}_{1}^+(0)|} \int\limits_{\mathcal{B}_1^{+}(0)}f_m^2 dx \leq \frac{1}{m}.
\end{align*}
On the other hand
\begin{align*}
\frac{1}{|\mathcal{B}_{1}^+(0)|}\int\limits_{\mathcal{B}_1^+(0)} |u_m - h|^2 dx \geq \epsilon^2,
\end{align*}
for all $h$ which satisfy the homogeneous equation (\ref{eq:hGrushin}). By (\ref{eq:Lpapriori}), however, we have compactness for $u_m$ in $M^{1}$:
\begin{align*}
u_m \rightarrow u_0 \mbox{ in } M^{1}(\mathcal{B}_{3/4}^+(0)).
\end{align*}
Testing the weak form of (\ref{eq:contra}) with a $C_{0}^{\infty}(\mathcal{B}_{1/2}^+(0))$ function, we can pass to the limit and infer that $u_0$ is a weak solution of the homogeneous bulk equation from (\ref{eq:hGrushin}).
Finally, by the boundedness of $u_m\in M^{2}$ and the corresponding trace inequalities or a reflection argument, we obtain that $u_0$ satisfies the mixed Dirichlet-Neumann conditions from (\ref{eq:hGrushin}). This yields the desired contradiction.
\end{proof}
We now prove a further approximation result for solutions of the homogeneous Baouendi-Grushin equation in the quarter space. More precisely, we now seek to approximate solutions of the homogeneous equation (\ref{eq:contra}) by associated (eigen-) polynomials.
To this end, we recall the notion of \emph{homogeneous polynomials} from Section~\ref{sec:holder}.
\begin{rmk}
We note that all homogeneous polynomial solutions (e.g. the ones up to degree five) of $\Delta_Gv=0$ which satisfy the Dirichlet-Neumann boundary conditions can be computed explicitly. For instance, the polynomial solutions of degree less than five are given by linear combinations of
\begin{align*}
y_n, \ y_jy_n, \ j\in\{1,\dots,n-1\},\ y_{n}^3 - 3y_n y_{n+1}^2.
\end{align*}
\end{rmk}
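As a quick consistency check (here we assume the standard form $\D_G = (y_n^2+y_{n+1}^2)\Delta_{y''}+\p_n^2+\p_{n+1}^2$ of the Baouendi-Grushin operator), for the cubic polynomial $p(y)=y_n^3-3y_ny_{n+1}^2$ one computes
\begin{align*}
\p_n^2 p = 6y_n, \quad \p_{n+1}^2 p = -6y_n, \quad \Delta_{y''}p=0, \quad \mbox{hence} \quad \D_G p = 0,
\end{align*}
while $p=0$ on $\{y_n=0\}$ and $\p_{n+1}p=-6y_ny_{n+1}=0$ on $\{y_{n+1}=0\}$, so the mixed Dirichlet-Neumann conditions of (\ref{eq:hGrushin}) are satisfied. The polynomials $y_n$ and $y_jy_n$ are checked in the same way.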
Using the notion of homogeneous polynomials, we proceed to our second approximation lemma:
\begin{lem}
\label{lem:approx}
Let $u:\mathcal{B}_{1}^+(0) \rightarrow \R$ be a solution of (\ref{eq:hGrushin}) with $\| u \|_{L^2(\mathcal{B}_{1}^+(0))}\leq \bar{c}$. Then there exists a polynomial $p$ of homogeneous degree less than or equal to three which solves (\ref{eq:hGrushin}), i.e.
\begin{align*}
p(y)=y_n \left(a_0+\sum_{i=1}^{n-1}a_iy_i+b(y_n^2-3y_{n+1}^2)\right),
\end{align*}
such that for all $0<r\leq \frac{1}{2}$
\begin{align}
\label{eq:approx}
\frac{1}{|\mathcal{B}_{r}^+(0)|}\int\limits_{\mathcal{B}_r^+(0)}|u-p|^2 dy \leq C(\bar{c}) r^{10},
\end{align}
and
\begin{align*}
\sum_{i=0}^{n-1}|a_i|+|b|\leq C\bar c,
\end{align*}
where $C$ is a universal constant.
\end{lem}
\begin{proof}
After a conformal change of variables, the Baouendi-Grushin operator can be rewritten as
\begin{align*}
\D_G = (\dt^2 + \D_{\Sigma}),
\end{align*}
where $\Sigma:= \{(y'',y_n,y_{n+1})| \ |y''|^4 + y_n^2 + y_{n+1}^2 = 1\}$ denotes the Baouendi-Grushin sphere. In our setting this is augmented with the Dirichlet and Neumann conditions from (\ref{eq:hGrushin}). The eigenfunctions of $\D_{\Sigma}$ can be extended in the radial direction to yield homogeneous solutions of the homogeneous Baouendi-Grushin equation. As the Baouendi-Grushin operator is hypoelliptic, these solutions are polynomials (this remains true in the cone $\Sigma \cap \mathcal{B}_1^+$, as the eigenfunctions on the Baouendi-Grushin quarter sphere can be identified as a subset of the eigenfunctions on the whole sphere by appropriate (even and odd) reflections). Moreover, the eigenfunctions on $\Sigma$ are orthogonal and as a consequence, the same is true for the correspondingly associated polynomials. The Baouendi-Grushin polynomials hence form an orthogonal basis into which a solution of the homogeneous Baouendi-Grushin problem can be decomposed.
We denote these polynomials by $p_k(y)$ and normalize them with respect to $\mathcal{B}^{+}_{1}(0)$. Since $u$ is bounded in $L^2(\mathcal{B}_{1}^+(0))$, we have
\begin{align}
\label{eq:poly_decomp}
u(y) = \sum\limits_{k=0}^{\infty} \alpha_k p_k(y) \mbox{ with } \sum\limits_{k=0}^{\infty}|\alpha_k|^2 \leq \bar{c}^2.
\end{align}
The previous decomposition can also be seen ``by hand'': Making the ansatz that a homogeneous solution of the Baouendi-Grushin problem is of the form
\begin{align*}
u(t,\theta) = \sum\limits_{k\in \Z} \alpha_k(t) u_{k}(\theta),
\end{align*}
where $u_k(\theta)$ denotes the spherical eigenfunctions, we obtain that
\begin{align*}
0& =(u_k, \D_G u)_{L^{2}(\Sigma)} = \alpha_k''(t) + (u_k, \D_{\Sigma} u)_{L^2(\Sigma)} \\
& = \alpha_k''(t) -\lambda_k^2 \alpha_k(t)
+ \int\limits_{\partial \Sigma} u_k (\nu\cdot \nabla_{\Sigma} u) d\mathcal{H}^{n-1} - \int\limits_{\partial \Sigma} u (\nu\cdot \nabla_{\Sigma} u_k) d\mathcal{H}^{n-1}.
\end{align*}
Here $\lambda_k^2$ is the eigenvalue associated with $u_k(\theta)$ and $\nu:\partial \Sigma \rightarrow \R^{n-1}$ is the outer unit normal field.
As both $u$ and $u_k$ satisfy the mixed Dirichlet-Neumann boundary conditions, this yields that
\begin{align*}
\alpha''_k(t) - \lambda_k^2 \alpha_k(t)=0.
\end{align*}
As the Dirichlet data imply that $\alpha_k(-\infty)=0$, this results in $\alpha_k(t) = \alpha_k(0) e^{|\lambda_k| t} $. By hypoellipticity, $\lambda_k \in \Z$, so that we obtain a decomposition into polynomials, after undoing the conformal change of coordinates.
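In detail (a sketch of the standard ODE step): for $\lambda_k\neq 0$ the general solution of this ODE is
\begin{align*}
\alpha_k(t) = c_+ e^{|\lambda_k| t} + c_- e^{-|\lambda_k| t},
\end{align*}
and the condition $\alpha_k(-\infty)=0$ (the limit $t\rightarrow -\infty$ corresponds to the vertex $r=e^t\rightarrow 0$) forces $c_-=0$. Thus $\alpha_k(t)=\alpha_k(0)e^{|\lambda_k|t}$, i.e. the corresponding solution is homogeneous of degree $|\lambda_k|$ in the radial variable $r=e^t$.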
After an appropriate normalization, we again infer (\ref{eq:poly_decomp}).\\
We define $p(y):= \sum\limits_{k=0}^{3} \alpha_k p_k(y)$.
Thus, recalling that there are no eigenpolynomials of (homogeneous) degree four which satisfy our mixed Dirichlet-Neumann conditions and computing the difference $u-p$, we arrive at
\begin{align*}
\left\| u- p \right\|_{L^2(\mathcal{B}_r^+)}^2 = \sum\limits_{k=5}^{\infty}|\alpha_k|^2 \| p_k\|_{L^2(\mathcal{B}_r^+)}^2
&\leq \sum\limits_{k=5}^{\infty}|\alpha_k|^2 r^{10}|\mathcal{B}_r^+|\| p_k\|_{L^2(\mathcal{B}_1^+)}^2\\
&\leq r^{10+ 2n}C(\bar{c}),
\end{align*}
where we used the scaling of the Baouendi-Grushin cylinders from Definition \ref{defi:Grushincylinder} and the boundedness of $u$ (c.f. (\ref{eq:poly_decomp})).
This yields the desired result.
\end{proof}
\begin{rmk}
\label{rmk:approx}
We stress that the approximation from Lemma \ref{lem:approx} is not restricted to third order polynomials. It can be extended to polynomials of arbitrary (homogeneous) degree.
\end{rmk}
Combining the previous results, we obtain the key building block for the iteration which yields regularity at the hyperplane $\{y_n=y_{n+1}=0\}$ at which the ellipticity of the Baouendi-Grushin operator degenerates.
\begin{lem}[Iteration]
\label{prop:iteration}
Let $\alpha\in(0,1)$.
Assume that $u:\mathcal{B}_{1}^+ \rightarrow \R$ is a solution of (\ref{eq:Grushin}) which satisfies
\begin{align*}
\frac{1}{|\mathcal{B}_{1}^+(0)|}\int\limits_{\mathcal{B}_1^+(0)} u^2 dx \leq 1.
\end{align*}
There exist a radius $r_0\in (0,1)$, a universal constant $C>0$ and a constant $\epsilon>0$ such that if
\begin{align*}
\frac{1}{|\mathcal{B}_{1}^+(0)|} \int\limits_{\mathcal{B}_1^{+}(0)}f^2 dx \leq \epsilon^2,
\end{align*}
then there exists a polynomial $p$ of order less than or equal to three
satisfying (\ref{eq:hGrushin}), i.e.
\begin{align*}
p(y)=y_n \left(a_0+\sum_{i=1}^{n-1}a_iy_i+b(y_n^2-3y_{n+1}^2)\right),
\end{align*}
such that
\begin{align*}
\frac{1}{|\mathcal{B}_{r_0}^+(0)|}\int\limits_{\mathcal{B}_{r_0}^+(0)} |u-p|^2 dy \leq r^{2(3+2\alpha)}_0,
\end{align*}
and
\begin{align*}
\sum_{i=0}^{n-1}|a_i|+|b|\leq C.
\end{align*}
\end{lem}
\begin{proof}
By our first approximation result, Lemma \ref{lem:compactness}, there exists a function $h$ which solves (\ref{eq:hGrushin}) and satisfies
\begin{align}
\label{eq:approx1}
\int\limits_{\mathcal{B}_{1/2}^+(0)}|u-h|^2 dy \leq \delta^2.
\end{align}
In particular, $\| h \|_{L^2(\mathcal{B}_{1/2}^+(0))}\leq C$. Hence, by our second approximation result, Lemma \ref{lem:approx}, there exists a (homogeneous) third order Baouendi-Grushin polynomial satisfying the Dirichlet-Neumann condition such that
\begin{align*}
\frac{1}{|\mathcal{B}_{r}^+(0)|} \int\limits_{\mathcal{B}_{r}^+(0)} |h-p|^2 dy \leq C r^{10}, \quad \mbox{ for all } 0<r<1/2.
\end{align*}
Consequently, by rescaling, we obtain for each $0<r<1/2$,
\begin{align*}
\frac{1}{|\mathcal{B}_r^{+}(0)|} \int\limits_{\mathcal{B}_r^+(0)}|u-p|^2 dy & \leq 2 \frac{1}{|\mathcal{B}_r^{+}(0)|} \int\limits_{\mathcal{B}_r^+(0)}|u-h|^2 dy
+ 2 \frac{1}{|\mathcal{B}_r^{+}(0)|} \int\limits_{\mathcal{B}_r^+(0)}|h-p|^2 dy\\
& \leq 2 r^{-2n} \int\limits_{\mathcal{B}_{1/2}^+(0)}|u-h|^2 dy + 2C r^{10}\\
& \leq 2 r^{-2n}\delta^2 + 2C r^{10},
\end{align*}
where we used (\ref{eq:approx1}) to estimate the first term. First choosing $0<r_0<1$ universal, but so small that $2C r_0^{10}\leq \frac{1}{2}r_0^{2(3+2\alpha)}$, and then choosing $\delta>0$ universal such that $2 r_0^{-2n}\delta^2 \leq\frac{1}{2}r_0^{2(3+2\alpha)}$, yields the desired result.
\end{proof}
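For concreteness, the smallness choices at the end of the above proof can be made explicit (one admissible, non-optimized choice): since $\alpha\in(0,1)$ we have $4-4\alpha>0$, so
\begin{align*}
2Cr_0^{10}\leq \frac{1}{2}r_0^{2(3+2\alpha)} \quad \mbox{holds for} \quad r_0 \leq (4C)^{-\frac{1}{4-4\alpha}},
\end{align*}
and subsequently $\delta \leq \frac{1}{2}r_0^{n+3+2\alpha}$ ensures $2r_0^{-2n}\delta^2\leq \frac{1}{2}r_0^{2(3+2\alpha)}$.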
As a corollary of Lemma \ref{prop:iteration}, we can iterate in increasingly finer radii.
\begin{cor}
\label{cor:iteration}
Let $\alpha\in(0,1)$.
Assume that $u:\mathcal{B}_{1}^+ \rightarrow \R$ is a solution of (\ref{eq:Grushin}) which satisfies
\begin{align*}
\frac{1}{|\mathcal{B}_{1}^+(0)|}\int\limits_{\mathcal{B}_1^+(0)} u^2 dx \leq 1,
\end{align*}
and that for each $k\in \mathbb{N}_+$
\begin{align*}
\frac{1}{|\mathcal{B}_{r_0^{k-1}}^+(0)|} \int\limits_{\mathcal{B}_{r_0^{k-1}}^{+}(0)}f^2 dx \leq \epsilon^2 r_0^{2(k-1)(1+2\alpha)}.
\end{align*}
Then there exists a polynomial $p_k$ of (homogeneous) degree (less than or equal to) three solving (\ref{eq:hGrushin}) such that
\begin{align*}
\frac{1}{|\mathcal{B}_{r^{k}_0}^{+}(0)|} \int\limits_{\mathcal{B}_{r^{k}_0}^+(0)}|u-p_k|^2 dx \leq r^{2k (3+ 2\alpha)}_0.
\end{align*}
Moreover, it is of the form
\begin{align*}
p_k(y) = y_n\left(a^0_k + \sum\limits_{j=1}^{n-1}a_k^j y_j\right) + b_k (y_n^3 - 3y_n y_{n+1}^2),
\end{align*}
and we have
\begin{equation}
\label{eq:coefficients}
\begin{split}
|a^0_k - a^0_{k-1}| &\leq C r_0^{k(2+{2\alpha})},\\
|a_k^j - a_{k-1}^j| &\leq C r_0^{{2k\alpha}},\quad j=1,\dots, n-1,\\
|b_k - b_{k-1}| & \leq C r_0^{{2k\alpha}}.
\end{split}
\end{equation}
\end{cor}
\begin{proof}
We argue by induction on $k$ and take $p_0 = 0$ and $p_1$ as the polynomial from Lemma \ref{prop:iteration}. We assume that the statement is true for $k$ and show it for $k+1$. For that purpose, we consider the rescaled and dilated functions
\begin{align*}
u_k(y):= \frac{(u-p_k)(r_0^{2k} y'', r_0^k y_n, r_0^k y_{n+1})}{r_0^{k(3+{2\alpha})}}.
\end{align*}
Hence,
\begin{align*}
\D_G u_k = \frac{r_0^{2 k}f_{r_0}}{r_0^{(3+{2\alpha})k}} = r_0^{-k (1+{2\alpha})} f_{r_0},
\end{align*}
where $f_{r_0}(y'',y_n,y_{n+1})= f(r_0^{2k} y'', r_0^k y_n, r_0^k y_{n+1})$.
Using the smallness assumption on $f$, we obtain that
\begin{align*}
\frac{1}{|\mathcal{B}_{1}^{+}(0)|} \int\limits_{\mathcal{B}_{1}^+(0)}|r_0^{-k(1+{2\alpha})} f_{r_0}|^2 dx \leq r_0^{-2k (1+{2\alpha})} \frac{1}{|\mathcal{B}_{r_0^{k}}^{+}(0)|} \int\limits_{\mathcal{B}_{r_0^k}^+(0)}f^2 dx \leq \epsilon^2.
\end{align*}
Hence by Lemma \ref{prop:iteration}, we obtain a (homogeneous) polynomial $q$ of degree less than or equal to three, which satisfies (\ref{eq:hGrushin}) and is of the form
\begin{align*}
q(y'',y_n, y_{n+1}) = y_n\left(a_0+ \sum_{j=1}^{n-1}a_j y_j\right) + b (y_n^3 - 3y_n y_{n+1}^2),
\end{align*}
and
$$\sum_{j=0}^{n-1}|a_j|+|b|\leq C,$$
such that
\begin{align*}
\frac{1}{|\mathcal{B}_{r_0}^{+}(0)|} \int\limits_{\mathcal{B}_{r_0}^+(0)}|u_k - q|^2 dx \leq r_0^{2(3+{2\alpha})}.
\end{align*}
Rescaling therefore gives us that the polynomial
\begin{align*}
p_{k+1}(y'',y_n,y_{n+1}) = p_k(y'',y_{n},y_{n+1}) + r_0^{(3+2\alpha)k}q\left(\frac{y''} {r_0^{2k}}, \frac{y_n}{r_0^k}, \frac{y_{n+1}}{r_0^k} \right),
\end{align*}
satisfies the claim of the corollary.
\end{proof}
We summarize the previous compactness and iteration arguments in the following intermediate result:
\begin{prop}
\label{prop:Hoelder0}
Let $\alpha\in(0,1)$.
Assume that $u:\mathcal{B}_{1}^+ \rightarrow \R$ is a solution of (\ref{eq:Grushin}). Suppose that the inhomogeneity $f:\mathcal{B}_{1}^+(0)\rightarrow \R$ is $C^{1,\alpha}_{\ast}$ at $y=0$ in the sense of Definition \ref{defi:diff}, i.e.
\begin{align*}
|f - f(0) - \p_n f(0) y_n | \leq F_0 r^{1+2\alpha}
\end{align*}
for any $0<r<1$.
Then there is a polynomial $p$ with \\
$\|p\|_{C^{3}(\mathcal{B}_{1}^+)}\leq C \left(\| u\|_{L^{2}(\mathcal{B}_1^+)} + |f(0)|+|\p_nf(0)|\right)$ of (homogeneous) degree less than or equal to three such that
\begin{align*}
\frac{1}{|\mathcal{B}_{r}^{+}(0)|} \int\limits_{\mathcal{B}_{r}^+(0)}|u-p|^2 dx \leq C(\| u\|_{L^2(\mathcal{B}_1^+(0))}^2 + F_0^2)r^{2(3+2\alpha)}.
\end{align*}
\end{prop}
\begin{proof}
Without loss of generality we may assume that $f(0)=\p_n f(0)=0$. Indeed, this follows by considering the function $v(y'',y_n,y_{n+1}):= u(y) - q(y)$, where $q(y)$ is a homogeneous polynomial of homogeneous degree less than or equal to three such that $\Delta_G q=f(0)+\p_{n}f(0)y_n$ and $q=0$ on $\{y_n=0\}$, $\p_{n+1}q=0$ on $\{y_{n+1}=0\}$ (for example, one can consider $q(y)=\frac{1}{2}f(0) y_n^2+ cy_ny_{n+1}^2 +dy_n^3$ with $2c+6d=\p_nf(0)$).
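Indeed, for this choice of $q$ we have $\Delta_{y''} q = 0$ and thus
\begin{align*}
\Delta_G q = \p_n^2 q + \p_{n+1}^2 q = \left(f(0) + 6d y_n\right) + 2c y_n = f(0) + \p_n f(0) y_n,
\end{align*}
while $q$ vanishes on $\{y_n=0\}$ and $\p_{n+1}q = 2c y_n y_{n+1}$ vanishes on $\{y_{n+1}=0\}$.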
Considering $\tilde{v}:=\frac{\epsilon}{F_0}v$ then also yields the smallness assumptions of Corollary \ref{cor:iteration}. Thus, for each $k\in \N_+$ there exists a Baouendi-Grushin polynomial $p_k$ such that
\begin{align*}
\frac{1}{|\mathcal{B}_{r_0^k}^+(0)|} \int\limits_{\mathcal{B}_{r_0^k}^+(0)}|\tilde{v}-p_k|^2 dy \leq r_0^{2k(3+2\alpha)}.
\end{align*}
Due to the estimates (\ref{eq:coefficients}) on the coefficients of $p_k$, which were derived in Corollary \ref{cor:iteration}, $p_k \rightarrow p_{\infty}$, where $p_{\infty}$ is a polynomial of (homogeneous) degree at most three and which satisfies
\begin{align*}
\frac{1}{|\mathcal{B}_{r_0^k}^+(0)|} \int\limits_{\mathcal{B}_{r_0^k}^+(0)}|p_{\infty}-p_k|^2 dy \leq C r_0^{2k(3+2\alpha)}.
\end{align*}
Consequently, by the triangle inequality
\begin{align*}
\frac{1}{|\mathcal{B}_{r_0^k}^+(0)|} \int\limits_{\mathcal{B}_{r_0^k}^+(0)}|\tilde{v} -p_{\infty}|^2 dy \leq C r_0^{2k(3+2\alpha)}
\end{align*}
for $k\in \N$. Rescaling then yields the desired result.
\end{proof}
\begin{rmk}
\begin{itemize}
\item The previous result yields the ``H\"older regularity at the point'' $y=0$. For other points $y_0=(y_0'',0,0)$ an analogous result holds by translation invariance of the equation and the boundary conditions in the $y''$ directions (c.f. (\ref{eq:Grushin})). In this translated case, the conditions on the inhomogeneity $f:\mathcal{B}_{1}^+(y_0)\rightarrow \R$ read
\begin{align*}
|f - f(y_0) - \p_{n}f(y_0) y_n| \leq F_0 r^{1+2\alpha}.
\end{align*}
\item Instead of imposing the $C^{1,\alpha}_{\ast}$ condition in the sense of Definition \ref{defi:diff}, it would have sufficed to assume the weaker condition
\begin{align*}
\frac{1}{|\mathcal{B}_{r}^{+}(y_0)|} \int\limits_{\mathcal{B}_{r}^+(y_0)}|f - f(y_0) - \p_{n}f(y_0) y_n|^2 dy \leq F_0^2 r^{2(1+2\alpha)}.
\end{align*}
\item In order to argue as we have outlined above, we have to require the compatibility condition $\p_{n+1}f(y'',0,0)=0$ (c.f. Definition \ref{defi:spaces}). However, apart from the described $C^{1,\alpha}_{\ast}$ regularity, we do not have to pose further restrictions on $f$.
\end{itemize}
\end{rmk}
Building on the precise description of the regularity of solutions close to the hyperplane $\{y_n=y_{n+1}=0\}$, we can now derive the full regularity result of Proposition \ref{prop:invert} by additionally invoking the uniform ellipticity which holds at a sufficiently far distance from $P$. This then concludes the argument for Proposition \ref{prop:invert}.
\begin{proof}[Proof of Proposition~\ref{prop:invert}]
It suffices to prove the corresponding regularity result in $X_{\alpha,\epsilon}(\mathcal{B}_3^+)$ (c.f. Definition \ref{defi:spaces_loc}).
Indeed, the Hölder estimate,
\begin{align*}
\|v\|_{C^{2,\epsilon}_{\ast}(Q_+)} \lesssim \|\D_G v\|_{C^{0,\epsilon}_{\ast}(Q_+)},
\end{align*}
follows similarly. As a consequence of the support assumption on $\Delta_G v$, this then yields the bound
\begin{align*}
\|v\|_{C^{2,\epsilon}_{\ast}(Q_+)} \lesssim \|\D_G v\|_{Y_{\alpha,\epsilon}},
\end{align*}
which together with the local estimate in $X_{\alpha,\epsilon}(\mathcal{B}_3^+)$ provides the full bound from Proposition \ref{prop:invert}.\\
\emph{Step 1. Polynomial approximation at $P=\{y_n=y_{n+1}=0\}$.} We note that for $f\in Y_{\alpha,\epsilon}$ and $y_0\in P$, there exists a first order polynomial $p_{y_0}(y)$ of the form $p_{y_0}(y) =f_0(y_0)y_n$ such that
\begin{align*}
\frac{1}{|\mathcal{B}_s^+(y_0)|}\int_{\mathcal{B}_s^+(y_0)}|f(y)-f_0(y_0)y_n|^2\, dy\leq C s^{2(1+2\alpha)}, \quad \forall s\in(0,1).
\end{align*}
By considering $v(y)-f_0(y_0)y_n^3/6$ and by still denoting the resulting function by $v$, we may assume that $f_0(y_0)=0$. The same arguments as before lead to the existence of a third order (in the homogeneous sense) polynomial $P_{y_0}$, where $$P_{y_0}(y)=a_0\left(y_n^3-3y_ny_{n+1}^2\right)+\sum_{i=1}^{n-1}b_iy_ny_i+c_0y_n,$$ for some constants $a_0,b_i,c_0$ depending on $y_0$, such that
\begin{align}
\label{eq:approx_a}
\frac{1}{|\mathcal{B}_s^+(y_0)|}\int_{\mathcal{B}_s^+(y_0)}|v-P_{y_0}|^2\, dy\leq C\left(\|v\|_{L^2(\mathcal{B}_1^+)}^2+\|f\|_{Y_{\alpha,\epsilon}}^2\right)s^{2(3+2\alpha)}
\end{align}
for any $0<s<1/2$. \\
\emph{Step 2. Interpolation.} For $y\notin P$ with $\sqrt{y_n^2+y_{n+1}^2}=\lambda>0$, let
\begin{align*}
\tilde{v}_\lambda(\xi):=\frac{(v-P_{y_0})(y_0+\lambda ^2 \xi'',\lambda \xi_n, \lambda \xi_{n+1})}{\lambda ^{3+2\alpha}},
\end{align*}
where $y_0$ is the projection of $y$ on $P$. Let $\xi_0$ be the image point of $y$ under this rescaling. By Step 1, $\tilde{v}_\lambda(\xi)\in L^2(\mathcal{B}_{1/2}(\xi_0))$ with
\begin{align}
\label{eq:rhs_est}
\|\tilde{v}_\lambda\|_{L^2(\mathcal{B}_{1/2}(\xi_0))}\leq C\left(\|v\|_{L^2}+\|f\|_{Y_{\alpha,\epsilon}}\right).
\end{align}
Moreover,
\begin{align*}
\Delta_G \tilde{v}_\lambda(\xi)= f_{\lambda}(\xi),
\end{align*}
where $f_{\lambda}(\xi):=\frac{1}{\lambda^{\epsilon}}f(y_0+\lambda^2\xi'',\lambda\xi_n,\lambda \xi_{n+1})$. We note that by the definition of $Y_{\alpha,\epsilon}$ and by $f_0(y''_0)=0$,
\begin{align*}
\| f_{\lambda}\|_{C^{0,\epsilon}(\mathcal{B}_{1/2}(\xi_0))} \leq \| f\|_{Y_{\alpha,\epsilon}}.
\end{align*}
In $\mathcal{B}_{1/2}(\xi_0)$, $\Delta_G$ is uniformly elliptic. Thus, by the classical $C^{2,\epsilon}$ Schauder estimates
\begin{align*}
\|\tilde{v}_\lambda\|_{C^{2,\epsilon}(\mathcal{B}_{1/4}(\xi_0))} \leq C\left( \|\tilde{v}_\lambda\|_{L^2(\mathcal{B}_{1/2}(\xi_0))}+\|f_\lambda\|_{C^{0,\epsilon}(\mathcal{B}_{1/2}(\xi_0))}\right).
\end{align*}
Rescaling back and letting $\tilde{v}:=v-P_{y_0}$, we in particular infer
\begin{align*}
\lambda^{-1-2\alpha+\epsilon}\sum\limits_{i,j=1}^{n+1}[Y_i Y_j \tilde{v}]_{C^{0,\epsilon}(\mathcal{B}_{\lambda/4}(y))}
\leq C\left(\|v\|_{L^2(\mathcal{B}_1^+)}+\|f\|_{Y_{\alpha,\epsilon}}\right).
\end{align*}
Here we used (\ref{eq:rhs_est}) to estimate the right hand side contribution.
Recalling the $L^{\infty}$ estimate $\| v\|_{L^{\infty}(Q_+)} \leq C \| \D_G v \|_{L^{\infty}}$ (c.f. the kernel bounds in Lemma \ref{lem:ker} in Section \ref{sec:kernel}) and the support conditions for $\D_G v$ and for $f$ allows us to further bound
\begin{align*}
\|v\|_{L^2(\mathcal{B}_1^+)}+\|f\|_{Y_{\alpha,\epsilon}} \leq C \| f\|_{Y_{\alpha,\epsilon}}.
\end{align*}
This implies
\begin{align}\label{eq:err_est}
\lambda^{-1-2\alpha+\epsilon}\sum\limits_{i,j=1}^{n+1}[Y_i Y_j \tilde{v}]_{C^{0,\epsilon}_\ast(\mathcal{B}_{\lambda/4}(y))}
+ \lambda^{-1-2\alpha}\sum\limits_{i,j=1}^{n+1}\|Y_i Y_j \tilde{v}\|_{L^{\infty}(\mathcal{B}_{\lambda/4}(y))}
\leq C\|f\|_{Y_{\alpha,\epsilon}}.
\end{align}
Passing through a chain of non-tangential balls, we infer that \eqref{eq:err_est} holds in a non-tangential cone at $y_0$:
\begin{align*}
&\sum\limits_{i,j=1}^{n+1}[d_G(y,y_0)^{-1-2\alpha+\epsilon}Y_i Y_j \tilde{v}]_{C^{0,\epsilon}_\ast(\mathcal{N}_G(y_0))} \\
&+ \sum\limits_{i,j=1}^{n+1}\|d_G(y,y_0)^{-1-2\alpha}Y_i Y_j \tilde{v}\|_{L^{\infty}(\mathcal{N}_G(y_0))}
\leq C\|f\|_{Y_{\alpha,\epsilon}}.
\end{align*}
We note that it is possible to derive \eqref{eq:err_est} for $v-P_{\bar y}$ at each $\bar y\in P$ and that hence $v$ is $C^{3,\alpha}_{\ast}(P)$ in the sense of Definition \ref{defi:diff}. As by Proposition~\ref{prop:decompI} the map $\bar y\mapsto P_{\bar y}$ is $C^{0,\alpha}(P)$ regular, a triangle inequality and a covering argument yield the estimate in the full neighborhood of $y_0$:
\begin{align*}
&\sum\limits_{i,j=1}^{n+1}[d_G(y,y_0)^{-1-2\alpha+\epsilon}Y_i Y_j \tilde{v}]_{C^{0,\epsilon}_\ast(\mathcal{B}_1^+(y_0))} \\
&+ \sum\limits_{i,j=1}^{n+1}\|d_G(y,y_0)^{-1-2\alpha}Y_i Y_j \tilde{v}\|_{L^{\infty}(\mathcal{B}_1^+(y_0))}
\leq C\|f\|_{Y_{\alpha,\epsilon}}.
\end{align*}
This concludes the local estimate and hence concludes the proof of Proposition \ref{prop:invert}.
\end{proof}
\subsection{Invertibility of the Baouendi-Grushin Laplacian in $X_{\alpha,\epsilon}$, $Y_{\alpha,\epsilon}$}
\label{sec:XY}
We provide the proofs of the completeness of the spaces $X_{\alpha,\epsilon},Y_{\alpha,\epsilon}$ (c.f. Definition~\ref{defi:spaces}) and the desired invertibility of the Baouendi-Grushin Laplacian as an operator from $X_{\alpha,\epsilon}$ to $Y_{\alpha,\epsilon}$ (c.f. Lemma \ref{lem:inverse}).
\begin{lem}
\label{lem:Banach}
Let $X_{\alpha,\epsilon},Y_{\alpha,\epsilon}$ be as in Definition~\ref{defi:spaces}. Then $(X_{\alpha,\epsilon},\| \cdot \|_{X_{\alpha,\epsilon}}), (Y_{\alpha,\epsilon}, \| \cdot \|_{Y_{\alpha,\epsilon}})$ are Banach spaces.
\end{lem}
\begin{proof}
(i) We first note that by the definition of $Y_{\alpha,\epsilon}$, $\supp(f)\subset \mathcal{B}_3^+$. Hence, it suffices to consider the behavior of functions on $\bar{\mathcal{B}_3^+}$. By Proposition~\ref{prop:decompI} and Remark~\ref{rmk:characterize} a function $f\in Y_{\alpha,\epsilon}$ can be decomposed as
\begin{align*}
f(y)=f_0(y'')y_n+ r(y)^{1+2\alpha-\epsilon}f_1(y),\quad r(y)=\sqrt{y_n^2+y_{n+1}^2},
\end{align*}
with $f_0\in C^{0,\alpha}(P)$ and $f_1\in C^{0,\epsilon}_\ast(Q_+)$, where $f_0$, $f_1$ are obtained by Taylor approximation of $f$ (c.f. the proof of Proposition \ref{prop:decompI} in Section \ref{sec:decomp}).
Moreover, $[f_0]_{\dot{C}^{0,\alpha}(P\cap \mathcal{B}_3)} +[f_1]_{\dot{C}^{0,\epsilon}_\ast(\mathcal{B}_3^+)}$ is equivalent to $\|f\|_{Y_{\alpha,\epsilon}}$. Thus, in order to obtain the desired Banach property, it suffices to show the equivalence of the homogeneous Hölder norms and their inhomogeneous counterparts for $y\in \mathcal{B}_3^+$. \\
We start by making the following observation: For any $f\in Y_{\alpha,\epsilon}$, $\supp(f)\subset \mathcal{B}_3^+$ (in combination with the definition of $f_0, f_1$) implies that
\begin{align*}
f_0(y'')=0 \mbox{ and } f_1(y)=0 \mbox{ for } y=(y'',y_n, y_{n+1}) \mbox{ such that } (y'',0,0) \in P\setminus \mathcal{B}_3^+.
\end{align*}
Thus,
\begin{align*}
\|f_0 \|_{L^\infty(P\cap \mathcal{B}_3)} \leq C [f_0]_{\dot{C}^{0,\alpha}(P \cap \mathcal{B}_3)}, \quad \|f_1\|_{L^\infty(\mathcal{B}_{3}^+)}\leq C[f_1]_{\dot{C}^{0,\epsilon}_\ast(\mathcal{B}_3^+)}.
\end{align*}
In particular, this immediately entails that $\|f_0\|_{C^{0,\alpha}(P\cap \mathcal{B}_3)}\leq C[f_0]_{\dot{C}^{0,\alpha}(P\cap \mathcal{B}_3)}$ and $\|f_1\|_{C^{0,\epsilon}_\ast(\mathcal{B}_3^+)}\leq C[f_1]_{\dot{C}^{0,\epsilon}_\ast(\mathcal{B}_3^+)}$. Therefore, $Y_{\alpha,\epsilon}$ is a Banach space. \\
(ii) Let $v\in X_{\alpha,\epsilon}$. Since $\supp(\Delta_G v)\subset \mathcal{B}_3^+$, we infer that
\begin{align}\label{eq:compact_supp2}
\|\Delta_G v\|_{C^{0,\epsilon}_\ast(Q_+)}\leq C [\Delta_G v]_{\dot C^{0,\epsilon}_\ast(Q_+)}\leq C\|v\|_{X_{\alpha,\epsilon}}.
\end{align}
Moreover, the Dirichlet-Neumann boundary conditions allow us to extend $v$ and $\Delta_Gv$ evenly about $y_{n+1}$ and oddly about $y_n$. After the extension, the assumption that $v\in C_0(Q_+)$ yields the representation
$$v(x)=\int\limits_{\R^{n+1}} K(x,y) \Delta_G v(y)dy,$$
where $K$ is the fundamental solution of $\Delta_G$ in $\R^{n+1}$ (c.f. Lemma \ref{lem:ker} in Section \ref{sec:kernel}). We remark that a priori $v$ deviates from $\int K(x,y) \Delta_G v(y)dy$ by (at most) a third order polynomial as we only control the semi-norm $[v]_{C^{2,\epsilon}_{\ast}(Q_+\setminus \mathcal{B}_1^+)}$ in the bulk and the deviation of $Y_{i}Y_{j}v$ at the boundary of $\mathcal{B}_3^+$.
However, the decay property at infinity forces $v$ to coincide with $\int K(x,y) \Delta_Gv(y) dy$. By the kernel estimates for the fundamental solution (c.f. Lemma \ref{lem:ker} in Section \ref{sec:kernel}) and by the support assumption (\ref{eq:compact_supp2})
\begin{align*}
\| v \|_{L^{\infty}(Q_+)} \leq C \| \D_G v\|_{L^\infty(Q_+)} \leq C \|v\|_{X_{\alpha,\epsilon}}.
\end{align*}
Thus, we are able to control the $L^\infty $ norm of the coefficients of the approximating polynomial $P_{\bar y}$ at each point $\bar y\in P$.
\end{proof}
Last but not least, we show the invertibility of the Baouendi-Grushin Laplacian as an operator on these spaces:
\begin{lem}
\label{lem:inverse}
Let $X_{\alpha,\epsilon},Y_{\alpha,\epsilon}$ be as in Definition~\ref{defi:spaces}. Then, $\D_G:X_{\alpha,\epsilon} \rightarrow Y_{\alpha,\epsilon}$ is an invertible operator.
\end{lem}
\begin{proof}
We show that for each $f\in Y_{\alpha,\epsilon}$, there exists a unique $u\in X_{\alpha,\epsilon}$ such that $\Delta_Gu=f$. Moreover, by Section \ref{sec:quarter_Hoelder}
\begin{align}\label{eq:apriori}
\|u\|_{X_{\alpha,\epsilon}}\leq C\|f\|_{Y_{\alpha,\epsilon}}.
\end{align}
Indeed, given $f\in Y_{\alpha,\epsilon}$, we extend $f$ oddly about $y_{n}$ and evenly about $y_{n+1}$ and (with slight
abuse of notation) still denote the extended function by $f$. Let $u(x)= \int K(x,y) f(y)dy$, where $K$ is the kernel from Section \ref{sec:kernel}. In particular, the decay estimates for $K$ (c.f. Lemma \ref{lem:ker} in Section \ref{sec:kernel}) imply that $u\in C_0(\R^{n+1})$. Since $f\in L^\infty(\R^{n+1})$ and $\supp(f)\subset \mathcal{B}_3$, we obtain that $u\in M^{2,p}(\R^{n+1})$ for any $1<p<\infty$ (c.f. the Calderon-Zygmund estimates in Section \ref{sec:kernel}). Moreover, by the symmetry of the extension, $u$ is odd in
$y_n$ and even in $y_{n+1}$, which implies that $u=0$ on $\{y_n=0\}$ and $\p_{n+1}u=0$ on
$\{y_{n+1}=0\}$. We restrict $u$ to $Q_+$ and still denote it by $u$. By the interior estimates from
the previous Section \ref{sec:quarter_Hoelder} and a scaling argument, we further obtain that
$u\in X_{\alpha,\epsilon}$ and that it satisfies \eqref{eq:apriori}.
It is immediate that $\supp(\Delta_Gu)=\supp(f)\subset \mathcal{B}_3^+$. Moreover, by using the equation, $\p_{nn}u=f-(y_n^2+y_{n+1}^2)\sum_{i=1}^{n-1}\p_{ii}u-\p_{n+1,n+1}u=0 $ on $\{y_{n}=y_{n+1}=0\}$. This shows
the existence of $u\in X_{\alpha,\epsilon}$ which satisfies $\Delta_Gu=f$. Due to \eqref{eq:apriori} such a function $u$ is unique in $X_{\alpha,\epsilon}$.
\end{proof}
\subsection{Kernel estimates for the Baouendi-Grushin Laplacian}
\label{sec:kernel}
Last but not least, we provide the arguments for the mapping properties of the Baouendi-Grushin Laplacian in the whole space setting. This in particular yields the kernel bounds, which are used in the previous subsection.\\
Our main result in this section are the following Calderon-Zygmund estimates:
\begin{prop}[Calderon-Zygmund estimates]
\label{prop:CZ}
Let $Y:=(Y_1,\dots,Y_{n+1})$ with $Y_i$ denoting the vector fields from Definition \ref{defi:Grushinvf} and let $F=(F^1,\dots,F^{n+1})\in L^p(\R^{n+1},\R^{n+1})$, $f\in L^p(\R^{n+1})$.
Suppose that
\[ \Delta_G u = Y_i F^{i}. \]
Then, there exists a constant $c_n= c(n)>0$ such that
\[ \sum\limits_{i=1}^{n+1} \Vert Y_i u \Vert_{L^p(\R^{n+1})} \le c_{n} \frac{p^2}{p-1} \Vert F \Vert_{L^p(\R^{n+1})}. \]
If
\[ \Delta_ G u = f ,\]
then there exists a constant $c_n= c(n)>0$ such that
\[ \sum\limits_{i,j=1}^{n+1} \Vert Y_{i}Y_j u \Vert_{L^p(\R^{n+1})} \le c_{n} \frac{p^2}{p-1} \Vert f \Vert_{L^p(\R^{n+1})}. \]
If $0<s<1$ and $F\in \dot C^s(\R^{n+1}, \R^{n+1})$, then there exists a constant $c_n= c(n)>0$ such that
\[ \sum\limits_{i=1}^{n+1} \Vert Y_i u \Vert_{\dot C^s(\R^{n+1})} \le c_{n} \frac1{s(1-s)} \Vert F \Vert_{\dot C^s(\R^{n+1})}. \]
Moreover, if $F$ is supported on a ball of radius one,
\[ \sum\limits_{i=1}^{n+1} \Vert Y_i u \Vert_{L^\infty(\R^{n+1})}
\le c \Vert F \Vert_{C^{s}(B_1)}. \]
\end{prop}
The key auxiliary result to infer the regularity estimates of Proposition \ref{prop:CZ} is the following existence and regularity result for a kernel to our problem:
\begin{lem}
\label{lem:ker}
Let $u:\R^{n+1}\rightarrow \R$ be a solution of $\D_G u = f$. Then there exists a kernel $k: \R^{n+1}\times \R^{n+1}\rightarrow \R$ such that
\begin{align*}
u(x)= \int\limits_{\R^{n+1}} k(x,y)f(y)dy.
\end{align*}
Let $\tilde{Y}^{\alpha}$ denote the composition of the vector fields $\tilde{Y}_{\alpha_{1}}\dots \tilde{Y}_{\alpha_{|\alpha|}}$ where $\tilde{Y}_i$, $i\in\{1,\dots,2n\}$, denote the modified vector fields from Definition \ref{defi:Hoelder1}. Then for all multi-indices $\alpha, \beta$ the following estimates hold
\[ \left|\tilde{Y}_{z}^\alpha \tilde{Y}_{w}^\beta k(z,w)\right|
\le c_{\alpha, \beta} d_G(z,w)^{2-|\alpha|-|\beta|} (\vol(B_{d_G(z,w)}(z)))^{-1} .
\]
Here the subscript $z,w$ in the vector fields $\tilde{Y}_z^{\alpha}, \tilde{Y}_w^{\beta}$ indicates the variable on which the vector fields act.
\end{lem}
Relying on this representation, we can proceed to the proof of Proposition \ref{prop:CZ}:
\begin{proof}[Proof of Proposition \ref{prop:CZ}]
Let $K_{ij}(z,w)= Y_{i,z} Y_{j,w} k(z,w)$ for any pair $i,j\in\{1,\dots,n+1\}$, where the indices $z,w$ refer to the variables on which the vector fields act and $k(z,w)$ denotes the kernel from Lemma \ref{lem:ker}. The function $K_{ij}(z,w)$ is the kernel of a
Calderon-Zygmund operator $T$ which maps $L^p$ to $L^p$. This proves the desired $L^p$ bounds. Hence, it remains to prove the Hölder estimates. Formally, $T$ maps constants to zero. Thus,
\[ Tf(x) = T(f-f(x))(x)= \int\limits_{\R^{n+1}} K_{ij}(x,y)(f(y)-f(x)) dy .\]
Now let $d_G(z,w) = 3$. We choose a smooth cutoff function $\phi$ which is equal to $1$ in $\mathcal{B}_1(0)$ and equal to $0$ outside $\mathcal{B}_{3/2}(0)$ and set
$f(x) = \phi(x-z) f(x) + \phi(x-w)f(x) + (1-\phi(x-z)-\phi(x-w)) f(x). $
We claim that $|T(f)(w)-T(f)(z)| \le c $. This follows from the kernel estimates of Lemma \ref{lem:ker}.
\end{proof}
Finally, to conclude our discussion of the mapping properties of the Baouendi-Grushin operator, we present the proof of Lemma \ref{lem:ker}:
\begin{proof}[Proof of Lemma \ref{lem:ker}]
We begin by considering the equation
\[ \Delta_G u = \sum\limits_{i=1}^{n+1}Y_i F^i \mbox{ in } \R^{n+1}. \]
For this we have the energy estimate
\[ \sum\limits_{i=1}^{n+1} \Vert Y_i u \Vert_{L^2(\R^{n+1})}^2 \le \sum\limits_{i=1}^{n+1} \Vert F^i \Vert_{L^2(\R^{n+1})}^2 . \]
Also, the Sobolev embedding
\[ \Vert u \Vert_{L^p(\R^{n+1})} \le c \sum\limits_{i=1}^{n+1} \Vert Y_i u \Vert_{L^q(\R^{n+1})} , \]
holds with
\[ \frac1q - \frac1{2n-2} = \frac1p. \]
By duality, if
\[ \frac12 - \frac1{2n-2} = \frac1p, \]
we have the embedding
\[ \Vert f \Vert_{\dot M^{-1}(\R^{n+1})} \leq c \Vert f \Vert_{L^{p'}(\R^{n+1})} .\]
Here $\dot{M}^{-1}(\R^{n+1})$ denotes the dual space of $\dot{M}^1(\R^{n+1})$ (and $\dot{M}^{1}(\R^{n+1})$ is the homogeneous version of the space introduced in Definition \ref{defi:GrushinLp} in Section \ref{sec:quarter_Hoelder}).
As discussed in Section \ref{sec:holder} the symbol of $\D_G$ defines the sub-Riemannian metric
\[ g_{y}(v,w) = (y_n^2 + y_{n+1}^2)^{-1} \sum\limits_{i=1}^{n-1} v_{i}w_{i} + v_{n}w_{n} + v_{{n+1}}w_{{n+1}}, \]
which itself correspondingly defines a metric $d_G$ on $\R^{n+1}$.
The operator $\D_G$ satisfies the Hörmander condition with the vector fields
\[ \tilde{Y}_i, \ i\in\{1,\dots,2n\}. \]
Hence, it is hypoelliptic and any local distributional solution is smooth.
More precisely, if $u \in L^1(B_1(y))$ satisfies $\Delta_G u =0$, then for any multi-index $\alpha$
\[ \Vert \tilde{Y}^{\alpha} u \Vert_{L^\infty(B_{1/2}(y))}
\le c_{\alphapha} \Vert u \Vert_{L^1(B_1(y))}. \]
Let $\frac1p+\frac1{2n-2}= \frac12$ and let $p'$ be the Hölder conjugate
exponent. Then, by the embeddings, if $f \in L^{p'}$ and
\[ \Delta_G u = f \mbox{ in } \R^{n+1}, \]
then $\Vert u \Vert_{L^p(\R^{n+1})} \le c \Vert f \Vert_{L^{p'}(\R^{n+1})}$.
By the Schwartz kernel theorem there is a kernel $k(z,w)$ so that
\[ u(z) = \int\limits_{\R^{n+1}} k(z,w) f(w) dw. \]
More precisely, if $f \in \dot M^{-1}$, then $u \in \dot M^{1}$.
In particular, if $f$ is supported in a ball $B_1(w)$, then $u$ is a solution to
the homogeneous problem outside. Hence, if $ d_G(z,w) \ge 3$, then $u$ is bounded together with all its derivatives in $B_1(z)$. We fix $z$. Then,
\[ M^{-1}(B_1(w)) \ni f \to u(z), \]
is a linear continuous map, which is represented by $\tilde w \to k(z,\tilde w) \in M^1(B_1(w))$. Since $\Delta_G$ is self-adjoint, $k(z,w) = k(w,z)$. Repeating the previous arguments we see that
\[ \tilde{Y}^\alpha_w k(z,\tilde w) \]
is bounded in $B_{1/2}(w)$. Repeating the arguments and dualizing once more we obtain that
\[ |\tilde{Y}_z^\alpha \tilde{Y}^\beta_w k(z,w)| \le c, \]
provided $d_G(z,w)=1$. Hence rescaling leads to the desired kernel estimates.
\end{proof}
\end{document}
\begin{document}
\xdef\@thefnmark{}\@footnotetext {{\it 2010 Mathematics Subject Classification:} 14J50, 14C05, 14C34.}
\xdef\@thefnmark{}\@footnotetext {{\it Key words:} Irreducible holomorphic symplectic manifolds, involutions, moduli spaces of polarized manifolds, moduli spaces of twisted sheaves on $K3$ surfaces.}
\begin{abstract}
We study irreducible holomorphic symplectic manifolds deformation equivalent to Hilbert schemes of points on a $K3$ surface and admitting a non-symplectic involution. We classify the possible discriminant forms of the invariant and anti-invariant lattice for the action of the involution on cohomology, and explicitly describe the lattices in the cases where the invariant lattice has small rank. We also give a modular description of all $d$-dimensional families of manifolds of $K3^{[n]}$-type with a non-symplectic involution for $d\geq 19$ and $n\leq 5$, and provide examples arising as moduli spaces of twisted sheaves on a $K3$ surface.
\end{abstract}
\maketitle
\section*{Introduction}
The aim of this note is to explain the classification of non-symplectic involutions on IHS manifolds of $K3^{[n]}$-type, thus generalizing to all even dimensions the classification which is already known for $n=1$ by foundational work of Nikulin \cite{NikulinInv} on K3 surfaces and for $n=2$ by the work of Beauville \cite{BeauInv} and of Boissi\`ere, the first author and Sarti \cite{BCS}. The core of the classification result contained in this work comes from Joumaah's PhD thesis \cite{joumaah}, but he kindly agreed to let us publish it ourselves. On the other hand, the proof of one of the main results in loc.\ cit.\ is not entirely correct, so in this paper we prove a revised statement (Proposition \ref{discr groups involutions}), in order to obtain the correct classification of non-symplectic involutions on manifolds of $K3^{[n]}$-type.
In the first two authors' work \cite{CC} the interested reader can find the analogous classification for non-symplectic automorphisms of odd prime order: although the lattice-theoretical techniques used are similar, and descend from work by Nikulin \cite{nikulin}, the prime $p=2$ is somewhat different with respect to other primes because for $n\geq 2$ it always divides $2(n-1)$, which is the discriminant of the Beauville--Bogomolov--Fujiki lattice $L_n:=U^{\oplus 3}\oplus E_8^{\oplus 2}\oplus \langle -2(n-1)\rangle$, i.e.\ the second cohomology lattice of any manifold of $K3^{[n]}$-type.
Concerning involutions, in \cite{catt_autom_hilb} the second author computed the automorphism group of the Hilbert scheme of $n$ points over a generic projective $K3$ surface, showing that this group (if not trivial) is generated by exactly one non-natural and non-symplectic involution (for $n=2$, this had already been proved by Boissi\`ere, the third author, Nieper-Wisskirchen and Sarti \cite{bcnws}). The present paper also provides a partial extension of these results, allowing the pair consisting of a Hilbert scheme and its involution to be deformed.
\subsection*{IHS manifolds and automorphisms}
We recall that an irreducible holomorphic symplectic (IHS) manifold is a compact complex K\"ahler manifold $X$ which is simply connected and such that $H^{2, 0}(X)$ is generated by the class of a single holomorphic symplectic (i.e.\ everywhere non-degenerate) $2$-form. Basic examples of IHS manifolds are provided by $K3$ surfaces and, in dimension $2n$, by the Hilbert scheme of zero-dimensional subschemes of length $n$ of a $K3$ surface. As small deformations of IHS manifolds are still IHS, we can then produce new examples: we say that an IHS manifold is of $K3^{[n]}$-type if it is deformation equivalent to the Hilbert scheme of $n$ points on a $K3$ surface.
The deformation theory of IHS manifolds is sufficiently well understood. For any manifold $X$ of $K3^{[n]}$-type, a \emph{marking} is a lattice isometry $\eta: H^2(X, \mathbb{Z}) \longrightarrow L_n$, where we recall that $H^2(X, \mathbb{Z})$ is a lattice by means of the Beauville--Bogomolov--Fujiki form (see \cite[$\S$8]{Beauville}). Then, there exists a well-defined compact complex moduli space which parametrizes marked IHS manifolds of $K3^{[n]}$-type. A fundamental result, due to work by Huybrechts, Markman and Verbitsky, is the Global Torelli Theorem \cite[Corollary 1.20]{VerbitskyTorelli}, which describes the fibers of the period map associated to this moduli space.
The use of markings allows us to transfer most of the questions about automorphisms to a purely algebraic setting, involving lattices and their properties. However, we need to determine which of the isometries of the abstract lattice $L_n$ correspond, via the marking, to automorphisms of the IHS manifold. To this end, we will make use of Markman's version of the Torelli Theorem \cite[Theorem 1.3]{markman}.
\subsection*{Structure of the paper and main results}
Our study of involutions on manifolds of $K3^{[n]}$-type will be conducted in two steps. In Section \ref{sect: lattices} we study the problem only from a lattice-theoretical point of view: our aim is to classify the possible discriminant groups of pairs $T, S \subset L_n$ consisting of the invariant lattice $T$ and the anti-invariant (or co-invariant) lattice $S$ of a non-symplectic involution. We provide this classification in Proposition \ref{discr groups involutions}, fixing the inaccuracies of \cite{joumaah}. An important ingredient of our proof is the fact that one of the invariant and anti-invariant lattices is $2$-elementary (Proposition \ref{prop: T o S 2-elementary}).
In Section \ref{sec: existence}, by using the Global Torelli Theorem we prove that the conditions determined in Section \ref{sect: lattices} on the abstract lattices $T,S$ are also sufficient to obtain a marked manifold of $K3^{[n]}$-type with a non-symplectic involution, having $T$ and $S$ as invariant and co-invariant lattice respectively.
\begin{thm}[{Theorem \ref{thm: existence}}]
Let $\rho \in O(L_n)$ be an involution whose invariant lattice $T$ is hyperbolic with $\mathrm{rk}\,(T) \leq 20$. Assume also that $\rho\vert_{A_L} = \pm \id$. Then there exists a marked manifold $(X, \eta)$ of $K3^{[n]}$-type with a non-symplectic involution $i \in \aut(X)$ such that $\eta \circ i^* = \rho \circ \eta$.
\end{thm}
In Section \ref{sect: geography} we focus on the cases where the invariant lattice has small rank, i.e.~$\mathrm{rk}\,(T) = 1$ or $2$. For $2 \leq n \leq 5$ we explicitly classify the isometry classes of the pairs of lattices $T,S$ (Propositions \ref{prop: rank one}, \ref{prop:rank two +id} and \ref{prop:rk two -id}). Non-symplectic involutions of manifolds of $K3^{[n]}$-type having invariant lattice of small rank are particularly interesting, since they deform in families of large dimension. For each possible action on cohomology $\rho \in O(L_n)$ in our classification, we study the corresponding \mbox{moduli} space $\mathcal{M}_{T, \rho}$ of $(\rho, T)$-polarized manifolds of $K3^{[n]}$-type with a non-symplectic involution.
\begin{thm}[{Theorem \ref{thm: max dim families}}]
Let $(X, \eta)$ be a marked manifold of $K3^{[n]}$-type for \mbox{$2 \leq n \leq 5$}, and let $i \in \aut(X)$ be a non-symplectic involution such that the pair $(X, i)$ deforms in a family of dimension $d \geq 19$. Then $(X, \eta)$ belongs to the closure of one of the following moduli spaces:
\begin{enumerate}
\item[$n=2$:] $\mathcal{M}_{\langle 2\rangle, \rho_a}$ or $\mathcal{M}_{U(2), \rho_1}$;
\item[$n=3$:] $\mathcal{M}_{\langle 2\rangle, \rho_a}$, $\mathcal{M}_{\langle 4\rangle, \rho}$ or $\mathcal{M}_{U(2), \rho_1}$;
\item[$n=4$:] $\mathcal{M}_{\langle 2\rangle,\rho_a}$, $\mathcal{M}_{\langle 2\rangle,\rho_b}$ or $\mathcal{M}_{U(2), \rho_1}$;
\item[$n=5$:] $\mathcal{M}_{\langle 2\rangle, \rho_a}$, $\mathcal{M}_{U(2), \rho_1}$ or $\mathcal{M}_{U(2), \rho_2}$;
\end{enumerate}
where $\rho,\rho_a,\rho_1,\rho_2$ are defined in Remark \ref{rem: moduli spaces recap}.
All these moduli spaces are irreducible with the exception of $\mathcal{M}_{U(2), \rho_2}$ for $n=5$, which has three distinct irreducible components.
\end{thm}
Finally, in Section \ref{sect: examples} we use moduli spaces of twisted sheaves on $K3$ surfaces to describe the generic element in the maximal moduli spaces $\mathcal{M}_{T, \rho}$ of dimension $19$ (Propositions \ref{prop: twisted induced U(2) irred} and \ref{prop: twisted induced U(2) n=5}), though only in one case the involution is induced by a non-symplectic involution of the underlying $K3$ surface. Finding an explicit description of the automorphism in the other families is still an open problem.
\subsection*{Notations and conventions}
Throughout the paper, all the varieties will be defined over the field $\IC$ of complex numbers.
A \emph{lattice} is a free abelian group $M$ equipped with a symmetric non-degenerate bilinear form $\left(\cdot, \cdot\right): M \times M \rightarrow \mathbb{Z}$. Its discriminant group $A_M$ is defined as $A_M = M^\vee / M$, where $M^\vee = \Hom_\mathbb{Z}(M, \mathbb{Z})$ is the dual group of $M$. If $A_M$ is cyclic of order $m$, we write $A_M \cong \frac{\mathbb{Z}}{m\mathbb{Z}}\left(\alpha\right)$ if the finite quadratic form $q_M: A_M \rightarrow \mathbb{Q} / 2\mathbb{Z}$ (induced by the quadratic form on $M$) takes value $\alpha$ on a generator of $A_M$. For any positive integer $n \geq 2$, we will denote by $L_n$ the lattice
\[L_n = U^{\oplus 3} \oplus E_8^{\oplus 2} \oplus \langle -2(n - 1) \rangle,\]
where $U$ is the hyperbolic plane, $E_8$ is the unique even unimodular lattice of signature $(0, 8)$ and for any integer $t \neq 0$ we denote by $\langle t \rangle$ the lattice generated by an element $\delta$ with $(\delta, \delta) = t$.
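As a sanity check on these conventions, the order $|A_{L_n}| = 2(n-1)$ of the discriminant group and the signature $(3, 20)$ of $L_n$ can be verified numerically from an explicit Gram matrix. The following is a minimal sketch (not part of the text's argument), assuming the standard Cartan-matrix presentation of $E_8$; the helper `gram_Ln` is ours.

```python
import numpy as np

# Cartan matrix of E8 (positive definite); the E8 in the text is its
# negative definite twist, of signature (0, 8).
E8 = 2 * np.eye(8, dtype=int)
for i, j in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (4, 7)]:
    E8[i, j] = E8[j, i] = -1  # edges of the E8 Dynkin diagram

def gram_Ln(n):
    """Gram matrix of L_n = U^3 + E8(-1)^2 + <-2(n-1)> (rank 23)."""
    U = np.array([[0, 1], [1, 0]])
    blocks = [U, U, U, -E8, -E8, np.array([[-2 * (n - 1)]])]
    G = np.zeros((23, 23), dtype=int)
    k = 0
    for B in blocks:
        G[k:k + len(B), k:k + len(B)] = B
        k += len(B)
    return G

for n in range(2, 8):
    G = gram_Ln(n)
    assert round(abs(np.linalg.det(G))) == 2 * (n - 1)   # |A_L| = 2(n-1)
    assert np.sum(np.linalg.eigvalsh(G) > 0) == 3        # signature (3, 20)
```

The determinant equals $\det(U)^3 \cdot \det(E_8)^2 \cdot (-2(n-1)) = 2(n-1)$, in agreement with the cyclic discriminant group described below.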
For a pair of lattices $M$, $N$ there may be several non-isometric embeddings of $M$ into $N$. When we say that $M$ is embedded in $N$, writing $M \subset N$, we always mean that an embedding $j: M \hookrightarrow N$ has been fixed. We will consider two such embeddings $j, j'$ as being isomorphic if there exist isometries $\psi \in O(M)$ and $\varphi \in O(N)$ such that $j \circ \psi = \varphi \circ j'$. The images $j(M)$, $j'(M)$ inside $N$ are also called \emph{isomorphic sublattices} according to \cite[$\S$1.5]{nikulin}.
\section{Involutions of the lattice \texorpdfstring{$L_n$}{Ln}}\label{sect: lattices}
\subsection{Invariant and anti-invariant lattices} \label{sec: inv and anti-inv lattices}
Let $(X, i)$ be a pair consisting of an IHS manifold $X$ of $K3^{[n]}$-type and a non-symplectic involution $i\in \aut(X)$. The lattice $H^2(X, \mathbb{Z})$ is isometric to $L_n = U^{\oplus 3} \oplus E_8^{\oplus 2} \oplus \langle -2(n - 1) \rangle$, as we already recalled, and $i^* \in \Mon^2(X)$, which is the subgroup of monodromy operators inside $O(H^2(X,\mathbb{Z}))$. We now fix $n\geq 2$ and we write $L:=L_n$ for the sake of simplicity.
By \cite[Cor.\ 9.5(1)]{markman} we have a primitive embedding $L \hookrightarrow M$ where $M := U^{\oplus 4} \oplus E_8^{\oplus 2}$ is the Mukai lattice, unimodular of rank $24$. Observe that, if we call $\delta$ a generator of $\langle -2(n - 1) \rangle$ in $L$, then $A_L$ is cyclic, generated by $\frac{1}{2(n - 1)}\delta$, i.e.\
\begin{equation}\label{eq: discriminant L}
A_L = \frac{\mathbb{Z}}{2(n - 1) \mathbb{Z}}\left( -\frac{1}{2(n - 1)} \right).
\end{equation}
We denote by $L^\perp$ the orthogonal complement of $L$ inside $M$. By \cite[Cor.\ 1.6.2]{nikulin} we have
\begin{equation}\label{eq: discriminant lperp}
A_{L^{\perp}} \cong \frac{\mathbb{Z}}{2(n - 1)\mathbb{Z}} \left( \frac{1}{2(n - 1)} \right).
\end{equation}
Since $L^{\perp} \subset M$ has rank one, we deduce that $L^{\perp} \cong \langle 2(n - 1) \rangle$.
After choosing a marking (i.e.\ an isometry) $\eta: H^2(X, \mathbb{Z}) \rightarrow L$, we can consider the action $i^* \in O(L)$. By \cite[Lemma 9.2]{markman}, $i^*$ satisfies the following properties: it has spin norm equal to $1$ (equivalently, it is orientation preserving) and it induces $\pm \id$ on the discriminant group $A_L$. This means that $\pm i^* \in \widetilde{O}(L)$, where for any lattice $\Lambda$ the stable orthogonal group $\widetilde{O}(\Lambda)$ is the subgroup of $O(\Lambda)$ consisting of isometries that induce the identity on the discriminant group $A_\Lambda$. Let $\sigma = \pm i^*$ be such that $\sigma \in \widetilde{O}(L)$.
The \emph{invariant lattice} of the involution $i \in \aut(X)$ is the sublattice $H^2(X, \mathbb{Z})^{i^*} \subset H^2(X, \mathbb{Z})$ of elements that are fixed by $i^*$. Its orthogonal complement in $H^2(X, \mathbb{Z})$ is called the \emph{anti-invariant} (or co-invariant) lattice. Notice that the anti-invariant lattice coincides with $\ker(\id + i^*)$ (see \cite[\S 5]{smith}) and therefore it is equal to $H^2(X, \mathbb{Z})^{-i^*}$, the invariant lattice of $-i^*$.
We now show that one of the invariant and anti-invariant lattices of $i^*$ is $2$-elementary.
\begin{prop} \label{prop: T o S 2-elementary}
Let $X$ be a manifold of $K3^{[n]}$-type and let $i \in \aut(X)$ be a non-symplectic involution. Then one of the following holds:
\begin{enumerate}
\item $i^*$ acts as $\id$ on the discriminant group of $H^2(X, \mathbb{Z})$ and $H^2(X, \mathbb{Z})^{-i^*}$ is $2$-e\-le\-men\-ta\-ry;
\item $i^*$ acts as $-\id$ on the discriminant group of $H^2(X, \mathbb{Z})$ and $H^2(X, \mathbb{Z})^{i^*}$ is $2$-elementary.
\end{enumerate}
\end{prop}
\begin{proof}
Consider $\sigma \in \widetilde{O}(L)$ as above: in both cases we want to show that the invariant lattice of $-\sigma$ is $2$-elementary. By \cite[Lemma 7.1]{ghs2}, we can extend $\sigma$ to an isometry $\tau \in \widetilde{O}(M)$ such that $\tau\vert_{L^{\perp}} = \id_{L^{\perp}}$ and with the following properties:
\begin{enumerate}
\item $L^\sigma \subset M^\tau$;
\item $L^{-\sigma} \subset M^{-\tau}$;
\item $L^{\perp} \subset M^\tau$.
\end{enumerate}
As a consequence, $L^\sigma \oplus L^{\perp} \subset M^\tau$ is a finite index sublattice and moreover, inside the lattice $M$:
\[M^{-\tau} = (M^\tau)^{\perp} \subset (L^\sigma \oplus L^{\perp})^{\perp} = (L^ \sigma)^{\perp} \cap L \subset L.\]
Hence $L^{-\sigma} = M^{-\tau}$. The invariant and anti-invariant lattices of an involution of an even unimodular lattice are $2$-elementary by \cite[Lemma 3.5]{ghs3}: this concludes the proof.
\end{proof}
With the same notation as above, we record the following facts.
\begin{lemma}\label{lemma: orthogonal Lperp}
\begin{enumerate}\leavevmode
\item The lattice $L^\sigma$ is primitively embedded in $M^\tau$.
\item The lattice $L^{\perp}$ is primitively embedded in $M^\tau$.
\item The lattices $L^\sigma$ and $L^{\perp}$ are the orthogonal complement of each other in $M^\tau$.
\end{enumerate}
\end{lemma}
\begin{proof}\leavevmode
\begin{enumerate}
\item As $L^\sigma \subset L$ and $L \subset M$ are primitive, we deduce that $L^\sigma \subset M$ is primitive. The claim follows then from the inclusion $L^\sigma \subset M^\tau$.
\item This follows from the fact that $L^{\perp} \subset M$ is primitive and $L^{\perp} \subset M^\tau \subset M$.
\item Since $(L^\sigma, L^{\perp}) = 0$, we deduce that $L^{\perp} \subset (L^\sigma)^{\perp_{M^\tau}}$. Moreover, both $L^{\perp}$ and $(L^\sigma)^{\perp_{M^\tau}}$ are primitive sublattices of $M^\tau$: since they have the same rank, they must coincide.\qedhere
\end{enumerate}
\end{proof}
In the same spirit as \cite[Def.~4.1]{BCS}, we give the following definition.
\begin{defi}
An automorphism $f$ of a manifold $X$ of $K3^{[n]}$-type is \emph{natural} if there exists a $K3$ surface $\Sigma$ and $\varphi\in\aut(\Sigma)$ such that $(X, f)$ is deformation equivalent to $(\Sigma^{[n]}, \varphi^{[n]})$.
\end{defi}
\begin{lemma}
Let $X$ be a manifold of $K3^{[n]}$-type and $i \in \aut(X)$ be a natural non-symplectic involution. Then $i^* \in \widetilde{O}(H^2(X, \mathbb{Z}))$.
\end{lemma}
\begin{proof}
As shown in \cite[\S 4]{BCS}, the isomorphism class of the invariant lattice of a non-symplectic involution is deformation invariant. For the pair $(\Sigma^{[n]}, \varphi^{[n]})$, the action of the natural involution on the exceptional divisor of the Hilbert--Chow morphism $\Sigma^{[n]} \rightarrow \Sigma^{(n)}$ is trivial by \cite[Thm.~1]{bs}. Let $\delta \in H^2(\Sigma^{[n]}, \mathbb{Z})$ be the class whose double is the exceptional divisor. From $i^*(2 \delta) = 2 \delta$ we get $i^*(\delta) = \delta$, so the generator $\frac{1}{2(n - 1)}\delta + L$ of $A_L$ is fixed, hence the action of $i^*$ on $A_L$ is trivial.
\end{proof}
\begin{cor}
Let $X$ be a manifold of $K3^{[n]}$-type and $i \in \aut(X)$ be a natural non-symplectic involution. Then the anti-invariant lattice $H^2(X, \mathbb{Z})^{-i^*}$ is $2$-elementary.
\end{cor}
\subsection{Discriminant groups}\label{sec: discr groups}
We explain in this section the inaccuracies in the proof of \cite[Prop.~5.1.1]{joumaah} and provide the necessary corrections. Adopting our notation, which differs from the one used by Joumaah, let $X$ be a manifold of $K3^{[n]}$-type with a non-symplectic involution $i \in \aut(X)$. Let $T = L^{i^*}$, $S = L^{-i^*}$ be, respectively, the invariant and anti-invariant lattices of the involution. The aim of \cite[Prop.~5.1.1]{joumaah} is to classify the discriminant groups $A_T, A_S$. In order to do so, Joumaah considers the isotropic subgroup $H_L \subset A_T \oplus A_S$, which is isomorphic to $\frac{L}{T \oplus S} \cong \left( \frac{\mathbb{Z}}{2 \mathbb{Z}} \right)^a$ for some $a \geq 0$, and its projections \mbox{$H_T := p_T(H_L) \subset A_T$}, \mbox{$H_S := p_S(H_L) \subset A_S$}. In particular, $H_L \cong H_T \cong H_S$ as groups, $\frac{H_L^\perp}{H_L} \cong A_L$ and $\gamma := p_S \circ p_T^{-1} : H_T \rightarrow H_S$ is an anti-isometry.
The following proposition provides the complete classification for the discriminant groups $A_T$, $A_S$. We refer to \cite[Prop.~3.2]{CC} for the analogous classification in the case of automorphisms of odd prime order.
\begin{prop}\label{discr groups involutions}
Let $X$ be a manifold of $K3^{[n]}$-type, for $n \geq 2$, and let $l \geq 1$ and $m$ odd be such that $2(n-1)=2^l m$. Let $\mathcal{G} \subset \aut(X)$ be a group of order $2$ acting non-symplectically on $X$. Denote by $T,S \subset L := L_n$, respectively, the invariant and anti-invariant sublattices for the action of $\mathcal{G}$, with $\frac{L}{T \oplus S} \cong \left( \frac{\mathbb{Z}}{2 \mathbb{Z}} \right)^a$ for some $a \geq 0$. Then one of the following cases holds:
\begin{enumerate}
\item[\textit{(i)}] $A_T \cong \left(\frac{\mathbb{Z}}{2 \mathbb{Z}}\right)^{\oplus a} \oplus \frac{\mathbb{Z}}{2(n-1) \mathbb{Z}}$, $A_S \cong \left(\frac{\mathbb{Z}}{2 \mathbb{Z}}\right)^{\oplus a}$ or vice versa;
\item[\textit{(ii)}] $a \geq 1$, $A_T \cong \left(\frac{\mathbb{Z}}{2 \mathbb{Z}}\right)^{\oplus a-1} \oplus \frac{\mathbb{Z}}{2(n-1) \mathbb{Z}}$, $A_S \cong \left(\frac{\mathbb{Z}}{2 \mathbb{Z}}\right)^{\oplus a+1}$ or vice versa;
\item[\textit{(iii)}] $l = 1$, $a=0$, $A_T \cong \frac{\mathbb{Z}}{m \mathbb{Z}}$, $A_S \cong \frac{\mathbb{Z}}{2 \mathbb{Z}}$ or vice versa.
\end{enumerate}
\begin{proof}
Let $i$ be the non-symplectic involution generating the group $\mathcal{G}$ and, as before, let $\sigma = \pm i^*$ be the isometry such that $\sigma \in \widetilde{O}(L)$. Let $T, S$ be the invariant and anti-invariant lattices of $i^*$, as in the statement. If $\sigma = i^*$, then $T = L^\sigma$, $S = L^{-\sigma}$; if $\sigma = -i^*$, then $T = L^{-\sigma}$, $S = L^\sigma$.
As we showed in Proposition \ref{prop: T o S 2-elementary}, the lattice $L^{-\sigma}$ is $2$-elementary, therefore $A_{L^{-\sigma}}$ coincides with its Sylow $2$-subgroup (it actually coincides with its $2$-torsion part). Moreover, $A_{L^\sigma} = \left( A_{L^\sigma} \right)_2 \oplus \frac{\mathbb{Z}}{m \mathbb{Z}}$, where $\left( A_{L^\sigma} \right)_2$ denotes the Sylow $2$-subgroup of $A_{L^\sigma}$ (see \cite[Prop.~5.1.1]{joumaah}). Using the notation introduced at the beginning of the section, there exist subgroups $H_{L^{\sigma}} \subset \left( A_{L^\sigma} \right)_2$ and $H_{L^{-\sigma}} \subset A_{L^{-\sigma}}$ isomorphic to $\frac{L}{L^{\sigma} \oplus L^{-\sigma}} \cong \left( \frac{\mathbb{Z}}{2 \mathbb{Z}} \right)^{a}$. The case $l = 1$ was correctly discussed by Joumaah in his proof: the only possibilities are $A_{L^{\sigma}} = H_{L^{\sigma}} \oplus \frac{\mathbb{Z}}{2 \mathbb{Z}} \oplus \frac{\mathbb{Z}}{m \mathbb{Z}}$, $A_{L^{-\sigma}} = H_{L^{-\sigma}}$ (i.e.\ case (i) of the statement), or $A_{L^{\sigma}} = H_{L^{\sigma}} \oplus \frac{\mathbb{Z}}{m \mathbb{Z}}$, $A_{L^{-\sigma}} = H_{L^{-\sigma}} \oplus \frac{\mathbb{Z}}{2 \mathbb{Z}}$ (case (ii) if $a \geq 1$ or case (iii) if $a = 0$).
If $l \geq 2$, we define $G := (A_{L^{\sigma}})_2 \oplus A_{L^{-\sigma}}$ and let $G_2 \subset G$ be the subgroup of elements of order $2$ in $G$. Joumaah showed that $[G : G_2] = 2^{l-1}$, which implies $G \cong \left( \frac{\mathbb{Z}}{2 \mathbb{Z}} \right)^{2a} \oplus \frac{\mathbb{Z}}{2^l \mathbb{Z}}$. As a consequence, we obtain two possible structures (not just one, as stated in \cite[Prop.~5.1.1]{joumaah}; see below for details) for the summands $(A_{L^{\sigma}})_2$ and $A_{L^{-\sigma}}$ of $G$, recalling that $L^{-\sigma}$ is $2$-elementary and that both $(A_{L^{\sigma}})_2$, $A_{L^{-\sigma}}$ contain a subgroup isomorphic to $\left( \frac{\mathbb{Z}}{2 \mathbb{Z}} \right)^{a}$:
\begin{itemize}
\item $(A_{L^{\sigma}})_2 = \left( \frac{\mathbb{Z}}{2 \mathbb{Z}} \right)^{a} \oplus \frac{\mathbb{Z}}{2^l \mathbb{Z}}, \; A_{L^{-\sigma}} = \left( \frac{\mathbb{Z}}{2 \mathbb{Z}} \right)^{a}$ (case (i));
\item $a \geq 1$, $(A_{L^{\sigma}})_2 = \left( \frac{\mathbb{Z}}{2 \mathbb{Z}} \right)^{a-1} \oplus \frac{\mathbb{Z}}{2^l \mathbb{Z}}, \; A_{L^{-\sigma}} = \left( \frac{\mathbb{Z}}{2 \mathbb{Z}} \right)^{a+1}$ (case (ii)).\qedhere
\end{itemize}
\end{proof}
\end{prop}
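The index computation $[G : G_2] = 2^{l-1}$ used in the proof, for $G \cong \left( \frac{\mathbb{Z}}{2\mathbb{Z}} \right)^{2a} \oplus \frac{\mathbb{Z}}{2^l\mathbb{Z}}$, can be checked by brute-force enumeration for small parameters. A sketch (with helper names of our own, not from the text):

```python
from itertools import product

def index_G2(a, l):
    """[G : G_2] for G = (Z/2Z)^(2a) + Z/(2^l)Z, computed by enumeration,
    where G_2 is the subgroup of elements killed by multiplication by 2."""
    G = [e + (z,) for e in product(range(2), repeat=2 * a)
         for z in range(2 ** l)]
    # doubling kills every Z/2Z coordinate, so only the last entry matters
    G2 = [g for g in G if (2 * g[-1]) % 2 ** l == 0]
    return len(G) // len(G2)

for a in range(3):
    for l in range(1, 6):
        assert index_G2(a, l) == 2 ** (l - 1)
```

The $2$-torsion subgroup picks up all of $(\mathbb{Z}/2\mathbb{Z})^{2a}$ plus the order-two subgroup of $\mathbb{Z}/2^l\mathbb{Z}$, which gives the stated index.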
\begin{rem}
Assume that $i^* \in \widetilde{O}(L)$, so that $\sigma = i^*$. If $l > 1$, Joumaah correctly highlighted in his proof that the index $[G:G_2]$ needs to be $2^{l-1}$ and therefore $G \cong \left( \frac{\mathbb{Z}}{2 \mathbb{Z}} \right)^{2a} \oplus \frac{\mathbb{Z}}{2^l \mathbb{Z}}$. However, contrary to what he stated, this does not necessarily imply that $G = H_T \oplus H_S \oplus \frac{\mathbb{Z}}{2^l \mathbb{Z}}$, from which he inferred $A_T \cong H_T \oplus A_L, A_S = H_S$ as the only possibility for the discriminant groups. Indeed, we exhibit two lattices $T,S$ which are the invariant and anti-invariant lattices of a non-symplectic involution of a manifold of $K3^{[3]}$-type and whose discriminant groups contradict \cite[Prop.~5.1.1]{joumaah}.
For $n=3$ we have $2(n-1)=4$, meaning $l=2$, $m=1$. The authors of \cite{IKKR} describe a $20$-dimensional family of manifolds of $K3^{\left[3\right]}$-type, called \emph{double EPW cubes}, with polarization of degree four and divisibility two (see \cite[Prop.~5.3]{IKKR}), whose members are always endowed with a non-symplectic involution $i$. As a consequence, the invariant lattice of $i$ is $T \cong \langle 4 \rangle$ and the anti-invariant lattice is $S \cong U^{\oplus 2} \oplus E_8^{\oplus 2} \oplus \langle -2 \rangle^{\oplus 2}$. In particular, their discriminant groups are:
\[ A_T = \langle t \rangle \cong \frac{\mathbb{Z}}{4 \mathbb{Z}}\left( \frac{1}{4} \right), \qquad A_S = \langle s_1, s_2 \rangle \cong \frac{\mathbb{Z}}{2 \mathbb{Z}}\left( -\frac{1}{2} \right) \oplus \frac{\mathbb{Z}}{2 \mathbb{Z}}\left( -\frac{1}{2} \right).\]
In this case $G = A_T \oplus A_S$, since $m=1$. Moreover
$$16 = \left| A_T \oplus A_S \right| = \left[ L : T \oplus S \right]^2 \left| A_L \right| = 2^{2a} \cdot 4 $$
\noindent therefore $a=1$. Looking at the discriminant quadratic forms on $A_T$ and $A_S$, the only possible choice for the subgroups of order two $H_T \subset A_T$ and $H_S \subset A_S$, with $H_T \cong H_S(-1)$, is the following:
\[ H_T = \langle 2t \rangle \subset A_T, \quad H_S = \langle s_1 + s_2 \rangle \subset A_S\]
\noindent which implies $H_L = \langle 2t + s_1 + s_2 \rangle \subset A_T \oplus A_S$. One can check, by computing $H_L^\perp \subset A_T \oplus A_S$, that $\frac{H_L^\perp}{H_L} \cong A_L = \frac{\mathbb{Z}}{4 \mathbb{Z}}\left( -\frac{1}{4} \right)$.
This is therefore a case where $l = 2 > 1$ and $[G : G_2 ] = 2 = 2^{l-1}$. However, it is not possible to write the group $G = A_T \oplus A_S$ as $G = H_T \oplus H_S \oplus \frac{\mathbb{Z}}{2^l \mathbb{Z}}$ and it is not true that $A_T \cong H_T \oplus A_L$, $A_S = H_S$.
\end{rem}
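The verification left to the reader in the remark above ($H_L^\perp / H_L \cong A_L$) is a finite computation in $A_T \oplus A_S$. The following sketch carries it out, encoding the element $xt + y_1 s_1 + y_2 s_2$ as the tuple $(x, y_1, y_2)$; the function names are ours.

```python
from fractions import Fraction

# A_T = Z/4Z(1/4) generated by t; A_S = Z/2Z(-1/2) + Z/2Z(-1/2)
# generated by s1, s2.  Elements of A_T + A_S are tuples (x, y1, y2).

def q(g):
    """Finite quadratic form on A_T + A_S, with values in Q/2Z."""
    x, y1, y2 = g
    return (Fraction(x * x, 4) - Fraction(y1 * y1 + y2 * y2, 2)) % 2

def b(g, h):
    """Associated bilinear form, with values in Q/Z."""
    return (Fraction(g[0] * h[0], 4)
            - Fraction(g[1] * h[1] + g[2] * h[2], 2)) % 1

elements = [(x, y1, y2) for x in range(4)
            for y1 in range(2) for y2 in range(2)]
h_L = [(0, 0, 0), (2, 1, 1)]                 # H_L = <2t + s1 + s2>
assert q((2, 1, 1)) == 0                     # H_L is isotropic

h_perp = [g for g in elements if b(g, (2, 1, 1)) == 0]
assert len(h_perp) // len(h_L) == 4          # |H_L^perp / H_L| = |A_L| = 4

def coset_order(g):
    """Order of the class g + H_L in the quotient H_L^perp / H_L."""
    k = 1
    while ((k * g[0]) % 4, (k * g[1]) % 2, (k * g[2]) % 2) not in h_L:
        k += 1
    return k

gen = (1, 1, 0)                              # t + s1, a candidate generator
assert gen in h_perp and coset_order(gen) == 4
assert q(gen) == Fraction(7, 4)              # -1/4 mod 2Z: A_L = Z/4Z(-1/4)
```

The quotient is cyclic of order $4$, with a generator of square $-\frac{1}{4}$ modulo $2\mathbb{Z}$, as claimed.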
\begin{rem}
In the case of manifolds of $K3^{\left[2\right]}$-type, it was proved in \cite[Lemma 8.1]{BCS} (extending results from \cite[\S 6]{smith}) that the discriminant groups can only be $A_S \cong \left(\frac{\mathbb{Z}}{2 \mathbb{Z}}\right)^{\oplus a}$, $A_T \cong \left(\frac{\mathbb{Z}}{2 \mathbb{Z}}\right)^{\oplus a+1}$ or vice versa. This is coherent with the classification of Proposition \ref{discr groups involutions} (if $n=2$ we have $2(n-1) = 2$, hence $l = m = 1$).
\end{rem}
\section{Existence of automorphisms} \label{sec: existence}
In this section we show that the lattice-theoretic conditions of Proposition~\ref{prop: T o S 2-elementary} are actually sufficient to give rise to a geometric realization. First, we prove that every $2$-elementary sublattice of $L = L_n$ is the invariant (or anti-invariant) lattice of some involution of $L$, and then that we can generically lift this abstract involution to an involution of a manifold of $K3^{[n]}$-type.
\begin{prop} \label{prop: extension involution}
Let $S$ be an even $2$-elementary lattice, primitively embedded into an even lattice $\Lambda$. Then $\id_{S^{\perp}} \oplus (-\id_S)$ (resp.\ $(-\id_{S^{\perp}}) \oplus \id_S$) extends to an isometry $\rho \in \widetilde{O}(\Lambda)$ (resp.\ $-\rho \in \widetilde{O}(\Lambda)$).
\end{prop}
\begin{proof}
By \cite[Thm.~1.1.2]{nikulin}, we can primitively embed $\Lambda$ into an even unimodular lattice $V$ of high enough rank. We fix such a primitive embedding and consider the orthogonal complements $\Lambda^{\perp_V}$ and $S^{\perp_V}$ of $\Lambda$ and $S$ inside $V$. Obviously, $V$ is an overlattice of $S \oplus S^{\perp_V}$. We want to show that $\alpha := \id_{S^{\perp_V}} \oplus (-\id_S)$ extends to $V$. A completely analogous proof will show that also $(-\id_{S^{\perp_V}}) \oplus \id_S$ extends, as in the statement. Let $H_V = V/(S \oplus S^{\perp_V})$ be the isotropic subgroup of $A_S \oplus A_{S^{\perp_V}}$ corresponding to the overlattice $V$ and let $p_S$, $p_{S^{\perp_V}}$ be the two projections to $A_S$ and $A_{S^{\perp_V}}$:
\[\xymatrix{H_V = V/(S \oplus S^{\perp_V}) \ar[d]^{p_{S^{\perp_V}}} \ar[dr]^{p_S} & \subset A_S \oplus A_{S^{\perp_V}}\\
H_{S^{\perp_V}} \subset A_{S^{\perp_V}} & H_S \subset A_S.}\]
Since $V$ is unimodular, we have $H_{S^{\perp_V}} = A_{S^{\perp_V}}$ and $H_S = A_S$. As before, let $\gamma: A_{S^{\perp_V}} \longrightarrow A_S$ be the anti-isometry given by $p_S\circ (p_{S^{\perp_V}})^{-1}$. By \cite[Prop.~1.5.1]{nikulin}, the existence of an extension of $\alpha$ to $V$ is equivalent to the commutativity of the diagram
\[\xymatrix{A_{S^{\perp_V}} \ar[r]^\gamma \ar[d]^{\overline{\id}_{S^{\perp_V}}} & A_S \ar[d]^{-\overline{\id}_S}\\
A_{S^{\perp_V}} \ar[r]^\gamma & A_S}\]
\noindent where, for any lattice $N$ and $\mu \in O(N)$, we denote by $\overline{\mu}$ the isometry of finite quadratic forms induced by $\mu$ on the discriminant group $A_N$.
The diagram is commutative because $-\gamma = \gamma$, since $S$ is $2$-elementary, hence we get the extension $\widetilde{\alpha}\in O(V)$ of $\alpha$ to $V$.
As $S^{\perp_\Lambda} \oplus \Lambda^{\perp_V} \subset S^{\perp_V}$, we deduce that $\Lambda^{\perp_V}$ is invariant for the action of $\widetilde{\alpha}$. Let $\rho$ be the restriction $\widetilde{\alpha}\vert_{\Lambda}$. Since $\rho \oplus \id_{\Lambda^{\perp_V}}$ extends to $\widetilde{\alpha} \in O(V)$, we have a commutative diagram
\[\xymatrix{A_\Lambda \ar[r]^\beta \ar[d]_{\overline{\rho}} & A_{\Lambda^{\perp_V}} \ar[d]^{\overline{\id}_{\Lambda^{\perp_V}}}\\
A_\Lambda \ar[r]^\beta & A_{\Lambda^{\perp_V}}}\]
where $\beta:= p_{\Lambda^{\perp_V}}\circ (p_\Lambda)^{-1}$. Hence, $\overline{\rho} = \id_{A_\Lambda}$, i.e.\ $\rho \in \widetilde{O}(\Lambda)$.
\end{proof}
\begin{rem}
This is in some sense a converse of \cite[Lemma~3.5]{ghs3}. See also \cite[Prop.~1.5.1]{dolgachev}.
\end{rem}
We come now to the second part of the section. First, we recall some results on lattice-polarized manifolds of $K3^{[n]}$-type.
Let $T$ be a hyperbolic lattice which admits a primitive embedding $j:T \hookrightarrow L$, with $\mathrm{rk}\,(T) \leq 20$. We identify $T$ with the sublattice $j(T) \subset L$ and we denote by $S$ its orthogonal complement in $L$. Following \cite[\S 4.1]{joumaah}, we say that $T$ is \emph{admissible} if it is the invariant lattice of a monodromy operator $\rho \in \mon^2(L)$ of order two. In particular, $T$ and $S$ are as in Proposition \ref{discr groups involutions}, therefore one of them is $2$-elementary. This implies, by Proposition \ref{prop: extension involution}, that $\rho$ is the unique extension of $\id_T\oplus(-\id_S)$ to $L$.
Let $X$ be a manifold of $K3^{[n]}$-type and $i \in\aut(X)$ be a non-symplectic involution acting on it. Joumaah says that the pair $(X,i)$ is \emph{of type $T$} if it admits a $(\rho,T)$-polarization, i.e.\ a marking $\eta: H^2(X,\mathbb{Z})\rightarrow L$ such that $\eta \circ i^\ast= \rho\circ\eta$. If $(X,i)$ and $(X',i')$ are two pairs of type $T$, they are said to be isomorphic if there exists an isomorphism $f: X\rightarrow X'$ such that $i' = f\circ i \circ f^{-1}$. The monodromy operators $f^* \in \mon^2(L)$ induced by these isomorphisms of pairs are the isometries contained in
\[\mon^2(L,T) := \left\{ g\in\mon^2(L) \mid g\circ \rho=\rho\circ g \right\} = \left\{g\in\mon^2(L) \mid g(T) = T \right\}.\]
In particular, for any $g \in \mon^2(L,T)$ we have that $g \vert_T \in O(T)$ and $g \vert_S \in O(S)$. We can then define the following subgroups:
\[ \Gamma_T := \left\{ g \vert_T \mid g \in \mon^2(L,T) \right\} \subset O(T), \qquad \Gamma_S := \left\{ g \vert_S \mid g \in \mon^2(L,T) \right\} \subset O(S).\]
Notice that local deformations of a pair $(X,i)$ of type $T$ are parametrized by $H^{1,1}(X)^{i^*}$ (more details on this are provided in \cite[Theorem 2]{BeauInv} and \cite[\S 4]{BCS}).
Inside the moduli space $\mathcal{M}_L$ of marked IHS manifolds of $K3^{[n]}$-type, let $\mathcal{M}_{T,\rho}$ be the subspace of $(\rho,T)$-polarized marked manifolds $(X,\eta)\in\mathcal{M}_L$. Since the holomorphic symplectic form $\omega_X$ generating $H^{2,0}(X)$ is orthogonal to the N\'eron--Severi group (which contains $T$), for any $(X,\eta)\in\mathcal{M}_{T,\rho}$ the period point $\eta(H^{2,0}(X))$ belongs to
\[ \Omega_S := \left\{ \kappa \in \mathbb{P}(S \otimes \IC) \mid (\kappa, \kappa) = 0, (\kappa, \overline{\kappa}) > 0\right\}.\]
Moreover, by \cite[Proposition 4.6.7]{joumaah}, the period map restricts to a holomorphic surjective morphism
$$\mathcal{P}: \mathcal{M}_{T,\rho}\longrightarrow \Omega_S^0 := \Omega_S\setminus \bigcup_{\delta\in\Delta(S)}(\delta^{\perp}\cap \Omega_S),$$ where $\Delta(S)$ is the set of wall divisors (i.e.\ primitive integral monodromy birationally minimal classes) contained in $S$. This restriction is equivariant with respect to the action of $\mon^2(L,T)$, hence we also obtain a surjection
\[
\mathcal{P}: \mathcal{M}_{T,\rho}/\mon^2(L,T)\longrightarrow \Omega_S^0/\Gamma_S.
\]
\begin{theorem}\label{thm: existence}
Let $\rho \in O(L)$ be an involution whose invariant lattice $T$ is hyperbolic with $\mathrm{rk}\,(T) \leq 20$. Assume also that $\pm \rho \in \widetilde{O}(L)$. Then there exists a marked manifold $(X, \eta)$ of $K3^{[n]}$-type with an involution $i \in \aut(X)$ such that $\eta \circ i^* = \rho \circ \eta$.
\end{theorem}
\begin{proof}
Let $S \subset L$ be the anti-invariant lattice of $\rho$, i.e.\ the orthogonal complement of $T$. By \cite[Prop.~5.3]{BCS} the very general point $\omega \in \Omega_S$ is the image under the period map of a $T$-polarized marked manifold of $K3^{[n]}$-type $(X, \eta)$ with $\NS(X) = \eta^{-1}(T)$. We can then consider $\alpha:=\eta^{-1}\circ \rho\circ \eta\in O(H^2(X, \mathbb{Z}))$, which is an involution, and we observe that:
\begin{enumerate}
\item $\alpha$ induces a Hodge isometry on $H^2(X, \IC)$ since the period point $\eta(H^{2,0}(X))$ is invariant for the action of $\rho$ on $\Omega_S$;
\item $\alpha$ is effective, because the equality $\NS(X) = \eta^{-1}(T) = \eta^{-1}(L^\rho)$ implies that there is an $\alpha$-fixed K\"ahler (even ample) class on $X$;
\item $\pm \rho \in \widetilde{O}(L)$.
\end{enumerate}
Hence, $\alpha$ is a monodromy operator by \cite[Lemma~9.2]{markman} and, by \cite[Thm.~1.3]{markman}, there exists $i \in \aut(X)$ such that $i^* = \alpha$. Since the map $\aut(X) \longrightarrow O(H^2(X, \mathbb{Z}))$, sending an automorphism to its action on $H^2(X, \mathbb{Z})$, is injective for manifolds of $K3^{[n]}$-type (see \cite[Prop.~10]{beauville_rmks} and \cite[Lemma~1.2]{mongardi_deform}), the automorphism $i$ is both unique and an involution. It is then straightforward to check that $\eta \circ i^* = \rho \circ \eta$.
\end{proof}
\section{Geography for IHS manifolds of small dimension}\label{sect: geography}
The aim of this section is to make some remarks on which families of large dimension one can expect from the results of the previous section. We first classify the admissible invariant lattices of rank one and two, and then we describe the geography of these cases for manifolds of $K3^{[n]}$-type when $n \leq 5$.
\subsection{Invariant sublattices of rank one and two}
Let $T, S$ be the invariant and co-invariant lattices of a non-symplectic involution of a manifold of $K3^{[n]}$-type. As we saw in Proposition \ref{prop: T o S 2-elementary}, either $S$ or $T$ is $2$-elementary, depending on the action of the involution on the discriminant group of $L$ (which is $\id$ or $-\id$ respectively). Assume that $S$ is $2$-elementary and consider it embedded in the Mukai lattice $M$ (the case where $T$ is $2$-elementary is similar). Starting from the signature of $S^{\perp_M}$, we can use \cite[Thm.~1.5.2]{dolgachev} to deduce the possible isometry classes for $S^{\perp_M}$. As observed in Lemma~\ref{lemma: orthogonal Lperp}, $T$ is the orthogonal complement in $S^{\perp_M}$ of $L^{\perp}$: since we know the latter explicitly (see \eqref{eq: discriminant lperp}), we can use \cite[Prop.~1.15.1]{nikulin} to classify all primitive embeddings $L^{\perp} \hookrightarrow S^{\perp_M}$ and to compute, in each case, the discriminant group of the orthogonal complement, i.e.\ $A_T$.
\subsubsection{Invariant sublattice of rank one}\label{subsec:rank one}
In this subsection we prove the following proposition, which describes the pairs $T$ and $S$ that can occur when $\mathrm{rk}\,(T) = 1$.
\begin{prop} \label{prop: rank one}
Let $X$ be a manifold of $K3^{[n]}$-type for some $n \geq 2$, and let $i \in \aut(X)$ be a non-symplectic involution. If the invariant lattice $T \subset H^2(X, \mathbb{Z})$ has rank one, then one of the following holds:
\begin{enumerate}
\item if $i^*$ acts as $\id$ on $A_{H^2(X, \mathbb{Z})}$, then $-1$ is a quadratic residue modulo $n-1$ and
\[T \cong \langle 2(n - 1) \rangle, \qquad S \cong U^{\oplus 2} \oplus E_8^{\oplus 2} \oplus \langle -2 \rangle \oplus \langle -2 \rangle;\]
\item if $i^*$ acts as $-\id$ on $A_{H^2(X, \mathbb{Z})}$, then $T \cong \langle 2 \rangle$ and
\begin{enumerate}
\item either $S \cong U^{\oplus 2} \oplus E_8^{\oplus 2} \oplus \langle -2(n - 1) \rangle \oplus \langle -2 \rangle$;
\item or $n \equiv 0 \pmod 4$ and
\[S \cong U^{\oplus 2} \oplus E_8^{\oplus 2} \oplus \left( \begin{array}{cc}
-\frac{n}{2} & n - 1\\
n - 1 & -2(n - 1)
\end{array} \right).\]
\end{enumerate}
\end{enumerate}
\end{prop}
\begin{proof}
This result generalizes \cite[Prop.~5.1]{catt_autom_hilb}, which holds for non-natural involutions of Hilbert schemes of points on a generic projective $K3$ surface.
We deal first with the case where $T,S$ are the invariant and anti-invariant lattices of an involution whose action on the discriminant $A_L$ is the identity. This means that $S$ is $2$-elementary and that $T \oplus L^{\perp} \subset S^{\perp_M}$. Since both $T$ and $L^{\perp}$ have signature $(1, 0)$, we deduce that $S^{\perp_M}$ has signature $(2, 0)$. By \cite[Table 15.1]{conway_sloane}, there is only one possible choice for $S^{\perp_M}$, which embeds in $M$ in a unique way by \cite[Thm.~1.1.2]{nikulin}: this is enough to claim that there is only one possible choice for $S$, up to isometries, which explicitly is
\[S = U^{\oplus 2} \oplus E_8^{\oplus 2} \oplus \langle -2 \rangle \oplus \langle -2 \rangle, \qquad S^{\perp_M} = \langle 2 \rangle \oplus \langle 2 \rangle.\]
We then need to look at how $L^{\perp} \cong \langle 2(n - 1) \rangle$ embeds primitively in $S^{\perp_M}$. A pair $(x, y)$ gives the coordinates of a primitive vector in $S^{\perp_M} = \langle 2 \rangle \oplus \langle 2 \rangle$ of square $2(n - 1)$ if and only if $\gcd(x, y) = 1$ and $x^2 + y^2 = n - 1$. Moreover, the isometry group of $S^{\perp_M}$ acts on these coordinates by permuting them or by changing their signs. The orthogonal complement of $L^{\perp}$ in $S^{\perp_M}$, which is $T$, is then a lattice isometric to $\langle 2(n - 1) \rangle$, generated by $(-y, x)$. Notice that there exist two coprime integers $x, y$ such that $x^2 + y^2 = n - 1$ if and only if $-1$ is a quadratic residue modulo $n-1$ (to see this, combine \cite[Prop.~5.1.1]{Classical_Intro_Num_Theory} and \cite[Thm.~3.20]{Intro_Theory_Numbers}).
We now consider the case where the action of $i^*$ on $A_L$ is $-\id$. We have that $T$ is $2$-elementary of signature $(1, 0)$, hence $T \cong \langle 2 \rangle$. It follows that $T$ embeds in a unique way in the Mukai lattice, with orthogonal complement
\[T^{\perp_M} \cong U^{\oplus 3} \oplus E_8^{\oplus 2} \oplus \langle -2 \rangle.\]
We now want to describe the different embeddings of $L^{\perp} \cong \langle 2(n - 1) \rangle$ in $T^{\perp_M}$. Since $T^{\perp_M}$ is unique in its genus, by \cite[Prop.~1.15.1]{nikulin} we have only two possibilities: they correspond to the two possible choices of a subgroup of $A_{T^{\perp_M}} \cong \mathbb{Z} / 2\mathbb{Z}$. Choosing the trivial subgroup, we see that the orthogonal complement of $L^{\perp}$ in $T^{\perp_M}$, i.e.~$S$, has discriminant group
\[A_S = \frac{\mathbb{Z}}{2(n - 1) \mathbb{Z}} \left( -\frac{1}{2(n - 1)} \right) \oplus \frac{\mathbb{Z}}{2 \mathbb{Z}} \left( -\frac{1}{2} \right),\]
and signature $(2, 20)$. By \cite[Thm.~2.4]{BCS}, there exists only one lattice with these invariants, up to isometries, which is
\[S = U^{\oplus 2} \oplus E_8^{\oplus 2} \oplus \langle -2(n - 1) \rangle \oplus \langle -2 \rangle.\]
The last possibility corresponds to the choice of the whole $A_{T^{\perp_M}}$, but in this case we must have $n \equiv 0 \pmod{4}$. This leads us to
\[A_S = \frac{\mathbb{Z}}{(n - 1) \mathbb{Z}} \left( -\frac{n}{2(n - 1)} \right),\]
where $S$ has again signature $(2, 20)$. By the same argument as above, there exists only one isometry class of lattices in this genus. A representative, which can be computed by applying \cite[Prop.~3.6]{ghs}, is
\begin{equation*}
S = U^{\oplus 2} \oplus E_8^{\oplus 2} \oplus \left( \begin{array}{cc}
-\frac{n}{2} & n - 1\\
n - 1 & -2(n - 1)
\end{array} \right).
\end{equation*}
\end{proof}
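Two arithmetic claims in the proof above can be checked directly: the equivalence between primitive representations $n - 1 = x^2 + y^2$ with $\gcd(x, y) = 1$ and $-1$ being a quadratic residue modulo $n - 1$ (case (1)), and the fact that the rank-two Gram matrix in case (2b) has determinant $n - 1$, matching $|A_S| = n - 1$. A brute-force sketch, with helper names of our own:

```python
from fractions import Fraction
from math import gcd

def primitively_represented(m):
    """Is m = x^2 + y^2 for some coprime integers x, y?"""
    return any(gcd(x, y) == 1 and x * x + y * y == m
               for x in range(m + 1) for y in range(m + 1))

def minus_one_is_square(m):
    """Is -1 a quadratic residue modulo m?"""
    return any((z * z + 1) % m == 0 for z in range(m))

# case (1): the two conditions on n - 1 agree
for n in range(2, 40):
    assert primitively_represented(n - 1) == minus_one_is_square(n - 1)

# case (2b): det [[-n/2, n-1], [n-1, -2(n-1)]] = n - 1 for n = 0 mod 4
for n in range(4, 101, 4):
    det = Fraction(-n, 2) * (-2 * (n - 1)) - (n - 1) ** 2
    assert det == n - 1
```

The determinant computation expands as $n(n-1) - (n-1)^2 = n - 1$, so the rank-two block is negative definite with discriminant group of order $n - 1$, as in the displayed $A_S$.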
\begin{rem}
The three cases of Proposition \ref{prop: rank one} can also be distinguished by looking at the generator $t \in H^2(X, \mathbb{Z})$ of the invariant lattice $T$. In fact, by \cite[Prop.~3.6]{ghs}, we have that:
\begin{itemize}
\item in case (1), $t$ has square $2(n-1)$ and divisibility $n-1$;
\item in case (2a), $t$ has square $2$ and divisibility $1$;
\item in case (2b), $t$ has square $2$ and divisibility $2$.
\end{itemize}
We point out that, by the Global Torelli Theorem for IHS manifolds, the existence of a primitive ample $t \in \textrm{NS}(X)$ with one of these three combinations of square and divisibility is sufficient to prove the existence of a non-symplectic involution on $X$, whose invariant lattice is $T = \langle t \rangle$ (see \cite[Prop.~5.3]{catt_autom_hilb}).
\end{rem}
\subsubsection{Invariant sublattice of rank two}\label{subsec:ranktwo}
The aim of this subsection is to provide some results for $\mathrm{rk}\,(T) = 2$. In particular, we describe the discriminant groups of the invariant and co-invariant lattices in complete generality, but we address the problem of their realization and uniqueness only for $n \leq 5$.
Assume that $\mathrm{rk}\,(T) = 2$, so that the signature of $T$ is $(1, 1)$. We first consider the case where the induced action on $A_L$ is the identity, hence $S$ is a $2$-elementary lattice of signature $(2, 19)$ and $S^{\perp_M}$ is $2$-elementary of signature $(2, 1)$. It follows from \cite[Thm.~1.1.2]{nikulin} that $S^{\perp_M}$ has a unique embedding in the Mukai lattice, up to isometries. By \cite[Thm.~1.5.2]{dolgachev} we have then two possibilities:
\[S^{\perp_M} = U \oplus \langle 2 \rangle \qquad \text{or} \qquad S^{\perp_M} = U(2) \oplus \langle 2 \rangle.\]
\begin{itemize}[leftmargin=0.35cm]
\item {\bf Case $ \bm{S^{\perp_M} = U \oplus \langle 2 \rangle}$.} We look for a primitive embedding of $L^{\perp} = \langle 2(n - 1) \rangle$ in $S^{\perp_M}$. By \cite[Prop.~1.15.1]{nikulin} we need to consider pairs of isomorphic subgroups in $A_{L^{\perp}}$ and $A_{S^{\perp_M}} = \frac{\mathbb{Z}}{2\mathbb{Z}} \left( \frac{1}{2} \right)$. In particular, for the choice of the trivial subgroup we have
\[A_T = \frac{\mathbb{Z}}{2(n - 1) \mathbb{Z}} \left( -\frac{1}{2(n - 1)} \right) \oplus \frac{\mathbb{Z}}{2 \mathbb{Z}} \left( \frac{1}{2} \right).\]
A possible realization for this lattice $T$ is given by $T = \langle -2(n - 1) \rangle \oplus \langle 2 \rangle$; if $n \leq 5$, this is the only isometry class in the genus by \cite[Ch.~15, Thm.~21]{conway_sloane}.
The other possibility is to consider the subgroup of $A_{L^{\perp}}$ generated by the class of $n - 1$: in order for it to have the same discriminant form as $A_{S^{\perp_M}}$ we need $n \equiv 2 \pmod{4}$, and in this case we have
\[A_T = \frac{\mathbb{Z}}{(n - 1) \mathbb{Z}} \left( \frac{n - 2}{2(n - 1)} \right).\]
A lattice $T$ with this discriminant form and signature $(1,1)$ is the following:
\[T = \begin{pmatrix}
-2h & k\\
k & 2
\end{pmatrix}\]
\noindent where we write $n-1 = k^2 + 4h$, with $k, h$ non-negative integers and $k$ maximal. This is the only isometry class in the genus of $T$ if $n \leq 17$, by \cite[Ch.~15, Thm.~21]{conway_sloane}. For $n=2$, this lattice is isometric to $U$.
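As an illustration (this worked example is ours, not part of the original argument), take $n = 6$, so that $n - 1 = 5 = 1^2 + 4 \cdot 1$, i.e.\ $k = 1$ and $h = 1$:
\[T = \begin{pmatrix}
-2 & 1\\
1 & 2
\end{pmatrix}, \qquad \det T = (-2) \cdot 2 - 1^2 = -5 = -(n-1).\]
In general $\det T = -4h - k^2 = -(n-1)$, so $T$ is indefinite of signature $(1,1)$ with discriminant group of order $n - 1$, consistent with the discriminant form displayed above.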
\item {\bf Case $ \bm{S^{\perp_M} = U(2) \oplus \langle 2 \rangle}$.} Here we have more possibilities, because there are more subgroups inside the discriminant group of $S^{\perp_M}$, which is
\[A_{S^{\perp_M}} = \left(\frac{\mathbb{Z}}{2 \mathbb{Z}} \right)^{\oplus 3}, \quad \text{with quadratic form} \; q_{S^{\perp_M}} = \begin{pmatrix}
0 & \frac{1}{2} & 0 \\
\frac{1}{2} & 0 & 0 \\
0 & 0 & \frac{1}{2}
\end{pmatrix}.\]
It is easy to see that we can discard the choice corresponding to the trivial subgroup, as it gives rise to a lattice $T$ of length $4$, hence the only relevant subgroups of $A_{S^{\perp_M}}$ are those of order two. Up to isomorphism, we have the following two possibilities.
\begin{enumerate}
\item The subgroup is $\langle(0,0,1)\rangle\subset A_{S^{\perp_M}}$ with $q((0,0,1))=1/2$. This case can occur only if $n \equiv 2 \pmod{4}$, and gives
\[A_T = \frac{\mathbb{Z}}{2(n - 1) \mathbb{Z}} \oplus \frac{\mathbb{Z}}{2 \mathbb{Z}}, \quad \text{with quadratic form} \; q_{T} = \begin{pmatrix}
\frac{n-2}{2(n-1)} & \frac{1}{2} \\
\frac{1}{2} & 0
\end{pmatrix}.\]
For $n=2$, the lattice $U(2)$ realizes this genus; for $n=6$, we can consider the lattice whose bilinear form is given by the matrix $\begin{pmatrix}
2 & 4\\
4 & -2
\end{pmatrix}$.
\item The subgroup is $\langle v\rangle\cong\mathbb{Z}/2\mathbb{Z}\subset A_{S^{\perp_M}}$, for an element $v \neq (0,0,1)$ such that $q(v)=(n-1)/2$. This case gives
\[A_T = \frac{\mathbb{Z}}{2(n - 1) \mathbb{Z}} \left( -\frac{1}{2(n - 1)} \right) \oplus \frac{\mathbb{Z}}{2 \mathbb{Z}} \left( \frac{1}{2} \right).\]
A possible realization for this lattice is given by $T = \langle -2(n - 1) \rangle \oplus \langle 2 \rangle$; if $n \leq 5$, this is the only isometry class in the genus by \cite[Ch.~15, Thm.~21]{conway_sloane}.
\end{enumerate}
\end{itemize}
For $n \leq 5$, we summarize these results as follows.
\begin{prop}\label{prop:rank two +id}
Let $X$ be a manifold of $K3^{[n]}$-type for $2 \leq n \leq 5$, and let $i \in \aut(X)$ be a non-symplectic involution. If the invariant lattice $T \subset H^2(X, \mathbb{Z})$ has rank two and $i^*$ acts as $\id$ on $A_{H^2(X, \mathbb{Z})}$, then one of the following holds:
\begin{enumerate}
\item $T \cong \langle 2 \rangle \oplus \langle -2(n-1)\rangle$ and $S \cong U^{\oplus 2} \oplus E_8^{\oplus 2} \oplus \langle -2\rangle$;
\item $T \cong \langle 2 \rangle \oplus \langle -2(n-1)\rangle$ and $S \cong U \oplus U(2) \oplus E_8^{\oplus 2} \oplus \langle -2\rangle$;
\item $n = 2$, $T \cong U$ and $S \cong U^{\oplus 2} \oplus E_8^{\oplus 2} \oplus \langle -2\rangle$;
\item $n = 2$, $T \cong U(2)$ and $S \cong U \oplus U(2) \oplus E_8^{\oplus 2} \oplus \langle -2\rangle$.
\end{enumerate}
\end{prop}
We assume now that the action of the involution on the discriminant is $-\id$. In this case, $T$ is $2$-elementary of signature $(1, 1)$, so $T^{\perp_M}$ is also $2$-elementary and its signature is $(3, 19)$. This implies that $S$ (which is a sublattice of $T^{\perp_M}$) has signature $(2, 19)$. By \cite[Thm.~1.5.2]{dolgachev} there exist three $2$-elementary lattices of signature $(1, 1)$, namely $U$, $U(2)$ and $\langle 2 \rangle \oplus \langle -2 \rangle$. Every such lattice, by \cite[Thm.~1.1.2]{nikulin}, embeds in the Mukai lattice in a unique way, hence the orthogonal complement is uniquely determined too. We analyse the three cases separately: in each of them, there is only one isometry class in the genus of $S$ by \cite[Thm.~2.4]{BCS}.
\begin{itemize}[leftmargin=0.35cm]
\item {\bf Case $ \bm{T = U}$.} We have $T^{\perp_M} \cong U^{\oplus 3} \oplus E_8^{\oplus 2}$, which is unimodular. As a consequence, $L^{\perp} \cong \langle 2(n - 1) \rangle$ embeds in an essentially unique way in $T^{\perp_M}$ and its orthogonal complement $S$ is
\[S = U^{\oplus 2} \oplus E_8^{\oplus 2} \oplus \langle -2(n - 1) \rangle.\]
\item {\bf Case $ \bm{T = U(2)}$.} In this case, $T^{\perp_M} = U(2) \oplus U^{\oplus 2} \oplus E_8^{\oplus 2}$ has discriminant
\[A_{T^{\perp_M}} = \frac{\mathbb{Z}}{2\mathbb{Z}} \oplus \frac{\mathbb{Z}}{2\mathbb{Z}}, \quad \text{with quadratic form} \; q_{T^{\perp_M}} = \begin{pmatrix}
0 & \frac{1}{2} \\
\frac{1}{2} & 0
\end{pmatrix}.\]
As before, we look at the cyclic subgroups of $A_{T^{\perp_M}}$: a direct computation gives rise to two different cases.
\begin{enumerate}
\item If we choose the trivial subgroup we have $A_S = \frac{\mathbb{Z}}{2(n - 1)\mathbb{Z}} \oplus \frac{\mathbb{Z}}{2\mathbb{Z}} \oplus \frac{\mathbb{Z}}{2\mathbb{Z}}$, with quadratic form
\[q_S = \begin{pmatrix}
-\frac{1}{2(n - 1)} & 0 & 0\\
0 & 0 & \frac{1}{2}\\
0 & \frac{1}{2} & 0
\end{pmatrix}.\]
We conclude
\[S = U \oplus U(2) \oplus E_8^{\oplus 2} \oplus \langle -2(n - 1) \rangle.\]
\item If $n \equiv 1,3 \pmod{4}$, we can choose a subgroup of order two and we have
\[A_S = \frac{\mathbb{Z}}{2(n - 1) \mathbb{Z}} \left( -\frac{1}{2(n - 1)} \right),\]
which corresponds to
\[S = U^{\oplus 2} \oplus E_8^{\oplus 2} \oplus \langle -2(n - 1) \rangle.\]
\end{enumerate}
\item {\bf Case $ \bm{T = \langle 2 \rangle \oplus \langle -2 \rangle}$.} Here $T^{\perp_M} = U^{\oplus 2} \oplus E_8^{\oplus 2} \oplus \langle 2 \rangle \oplus \langle -2 \rangle$, whose discriminant group is
\[A_{T^{\perp_M}} = \frac{\mathbb{Z}}{2\mathbb{Z}} \left( \frac{1}{2} \right) \oplus \frac{\mathbb{Z}}{2\mathbb{Z}} \left( -\frac{1}{2} \right).\]
The same kind of computations yield three cases:
\begin{enumerate}
\item The discriminant group is
\[A_S = \frac{\mathbb{Z}}{2(n - 1) \mathbb{Z}} \left( -\frac{1}{2(n - 1)} \right) \oplus \frac{\mathbb{Z}}{2\mathbb{Z}} \left( \frac{1}{2} \right) \oplus \frac{\mathbb{Z}}{2\mathbb{Z}} \left( -\frac{1}{2} \right),\]
which corresponds to
\[S = U \oplus E_8^{\oplus 2} \oplus \langle 2 \rangle \oplus \langle -2 \rangle \oplus \langle -2(n - 1) \rangle.\]
\item If $n \equiv 0,2 \pmod{4}$ we can have
\[A_S = \frac{\mathbb{Z}}{2(n - 1) \mathbb{Z}} \left( -\frac{1}{2(n - 1)} \right),\]
which is realized by
\[S = U^{\oplus 2} \oplus E_8^{\oplus 2} \oplus \langle -2(n - 1) \rangle.\]
\item If $n \equiv 1 \pmod{4}$ we can have
\[A_S = \frac{\mathbb{Z}}{2(n - 1) \mathbb{Z}} \left( \frac{n-2}{2(n - 1)} \right).\]
For $n=5$, a representative of the unique isometry class in this genus is
\[S = U \oplus E_8^{\oplus 2} \oplus \begin{pmatrix}
-2 & 1 & 0\\
1 & -2 & 1 \\
0 & 1 & 2
\end{pmatrix}.\]
\end{enumerate}
\end{itemize}
The next proposition summarizes all possible pairs of lattices $T,S$ corresponding to involutions whose action on the discriminant group $A_L$ is $-\id$, for $n \leq 5$.
\begin{prop}\label{prop:rk two -id}
Let $X$ be a manifold of $K3^{[n]}$-type for $2 \leq n \leq 5$, and let $i \in \aut(X)$ be a non-symplectic involution. If the invariant lattice $T \subset H^2(X, \mathbb{Z})$ has rank two and $i^*$ acts as $-\id$ on $A_{H^2(X, \mathbb{Z})}$, then one of the following holds:
\begin{enumerate}
\item $T \cong U$ and $S \cong U^{\oplus 2} \oplus E_8^{\oplus 2} \oplus \langle -2(n-1) \rangle$;
\item $T \cong U(2)$ and $S \cong U \oplus U(2) \oplus E_8^{\oplus 2} \oplus \langle -2(n-1) \rangle$;
\item $T \cong \langle 2 \rangle \oplus \langle -2\rangle$ and $S \cong U \oplus E_8^{\oplus 2} \oplus \langle 2 \rangle\oplus \langle -2 \rangle \oplus \langle -2(n-1) \rangle$;
\item $n \in \left\{3, 5 \right\}$, $T \cong U(2)$ and $S \cong U^{\oplus 2} \oplus E_8^{\oplus 2} \oplus \langle -2(n-1) \rangle$;
\item $n \in \left\{2, 4 \right\}$, $T \cong \langle 2 \rangle \oplus \langle -2\rangle$ and $S \cong U^{\oplus 2} \oplus E_8^{\oplus 2} \oplus \langle -2(n-1) \rangle$;
\item $n = 5$, $T \cong \langle 2 \rangle \oplus \langle -2\rangle$ and $S \cong U \oplus E_8^{\oplus 2} \oplus \begin{pmatrix}
-2 & 1 & 0\\
1 & -2 & 1 \\
0 & 1 & 2
\end{pmatrix}.$
\end{enumerate}
\end{prop}
\begin{rem}
For $n=2$, the isometries $\id$ and $-\id$ of $A_L \cong \mathbb{Z}/2 \mathbb{Z}$ coincide, hence Proposition \ref{prop:rank two +id} and Proposition \ref{prop:rk two -id} give the same classification (to check this, recall that $U(2) \oplus \langle -2 \rangle \cong \langle 2 \rangle \oplus \langle -2\rangle \oplus \langle -2\rangle$ by \cite[Thm.~1.5.2]{dolgachev}).
\end{rem}
\subsection{Deformation types for families of large dimension}
The lattice computations of Section \ref{subsec:rank one} and Section \ref{subsec:ranktwo} allow us to determine all moduli spaces $\mathcal{M}_{T, \rho}$, for $T$ an admissible invariant sublattice of rank one or two inside $L$ (recall the definitions from Section \ref{sec: existence}). By construction, the moduli spaces $\mathcal{M}_{T, \rho}$ arise as subspaces of the complex space $\mathcal{M}_{L}$, which parametrizes marked IHS manifolds of $K3^{[n]}$-type. The following fact was remarked in \cite[Theorem 9.5]{AST} for $K3$ surfaces, and it can be easily generalized to manifolds of $K3^{[n]}$-type.
\begin{lemma}\label{lem:closure}
Let $T', T'' \subset L$ be the invariant lattices of two monodromy operators $\rho', \rho'' \in \mon^2(L)$, respectively, and let $S' = (T')^\perp, S'' = (T'')^\perp$ be their orthogonal complements in $L$. The moduli space $\mathcal{M}_{T',\rho'}$ is in the closure of $\mathcal{M}_{T'',\rho''}$ if and only if $S' \subset S''\subset L$ and $(\rho'')\vert_{S'}=(\rho')\vert_{S'}$.
\end{lemma}
\begin{rem}
In our setting we can slightly improve the result of Lemma \ref{lem:closure}. In fact, as observed in Section \ref{sec: existence}, the orthogonal sublattices $T, S \subset L$ determine the involution $\rho \in \mon^2(L)$ as the unique extension of $\id_{T} \oplus (-\id_{S})$ to $L$. So, if we assume that $S'\subset S''$, then
\[(\rho'')\vert_{S'} = (-\id_{S''})\vert_{S'} = -\id_{S'} = (\rho')\vert_{S'}.\]
In the case of involutions we can then say that $\mathcal{M}_{T',\rho'}$ is in the closure of $\mathcal{M}_{T'',\rho''}$ if and only if $S' \subset S''\subset L$, as embedded sublattices.
\end{rem}
In this sense, the moduli spaces $\mathcal{M}_{T,\rho}$ of maximal dimension (where maximality is with respect to this notion) correspond to minimal (with respect to inclusion) admissible sublattices $T \subset L$. This is the reason why, in the previous section, we investigated in detail admissible invariant lattices of low rank. Any of these admissible lattices $T$ will give rise to at least one (but there could be more a priori, depending on the number of connected components of the moduli space) projective family of dimension $21 - \mathrm{rk}\,(T)$, whose generic member has a non-symplectic involution with invariant lattice $T$. We are now interested in computing the number of irreducible components for some of these moduli spaces.
We adopt the notation of \cite[Chapter 4]{joumaah}. Let $T \subset L$ be an admissible sublattice, i.e.\ the (hyperbolic) invariant lattice of an involution $\rho \in \mon^2(L)$, and let $\mathcal{C}_T$ be one of the two connected components of the cone $\left\{x \in T \otimes \mathbb{R} \mid (x,x) > 0 \right\}$. The \emph{K\"ahler-type chambers of $T$} are the connected components of
\[ \mathcal{C}_T \setminus \bigcup_{\delta \in \Delta(T)} \delta^\perp \]
\noindent where $\Delta(T)$ is the set of wall divisors in $T$. As before, let $\Gamma_T$ be the image of the restriction map $\mon^2(L,T) \rightarrow O(T)$: the subgroup $\Gamma_T \subset O(T)$ has finite index and it conjugates invariant wall-divisors, therefore it also acts on the set $\textrm{KT}(T)$ of K\"ahler-type chambers of $T$ (see \cite[\S 4.7]{joumaah}). In \cite[Theorem 4.8.11]{joumaah}, Joumaah proved that the quotient $\textrm{KT}(T)/\Gamma_T$ is in one-to-one correspondence with the set of distinct deformation types of marked manifolds $(X, \eta) \in \mathcal{M}_{T, \rho}$.
\begin{prop}\label{prop:U(2)irreducible}
Let $T \cong U(2)$ be a primitive sublattice of $L = U^{\oplus 3} \oplus E_8^{\oplus 2} \oplus$ $\langle -2(n-1)\rangle$ with orthogonal complement $S \cong U \oplus U(2) \oplus E_8^{\oplus 2} \oplus \langle -2(n-1) \rangle$. Let $\rho_1 \in \mon^2(L)$ be the involution which extends $\id_T \oplus (-\id_S)$. Then, for any $n \geq 2$ there is a single deformation type of marked manifolds of $K3^{[n]}$-type $(X, \eta) \in \mathcal{M}_{T, \rho_1}$.
\end{prop}
\begin{proof}
As we recalled above, the number of deformation types of $(\rho_1, T)$-polarized marked manifolds of $K3^{[n]}$-type is equal to the number of orbits of K\"ahler-type chambers of $T$, with respect to the action of the subgroup $\Gamma_T \subset O(T)$. For $T \cong U(2)$ as in the statement, an element $\delta \in T$ of coordinates $(a,b)$ with respect to a basis has square $4ab$ and divisibility in $L$ equal to $\gcd(a,b)$. In particular, if $\delta$ is primitive, its divisibility can only be one. However, a direct computation using \cite[Thm.~12.1]{bayer_macri_mmp} shows that, if $\delta$ is a wall-divisor with $\divi(\delta) = 1$, then $\delta^2 = -2$ (see \cite[Rmk.~2.5]{mongardi_cones}). We conclude that there are no wall-divisors $\delta \in T$, since $T \cong U(2)$ contains no elements of square $-2$.
\end{proof}
As we showed in Subsection \ref{subsec:ranktwo}, when $n$ is odd there is a second way to embed the lattice $U(2)$ in $L$, which is not isometric to the one studied in Proposition \ref{prop:U(2)irreducible}.
\begin{prop}\label{prop:U(2) 3 chambers}
For $n$ odd, let $T \cong U(2)$ be a primitive sublattice of $L = U^{\oplus 3} \oplus E_8^{\oplus 2} \oplus \langle -2(n-1)\rangle$ with orthogonal complement $S \cong U^{\oplus 2} \oplus E_8^{\oplus 2} \oplus \langle -2(n-1) \rangle$. Let $\rho_2 \in \mon^2(L)$ be the involution which extends $\id_T \oplus (-\id_S)$. Then, if $n=5$ there are three distinct deformation types of marked manifolds $(X, \eta) \in \mathcal{M}_{T, \rho_2}$.
\end{prop}
\begin{proof}
As in the proof of Proposition \ref{prop:U(2)irreducible}, we need to study the K\"ahler-type chambers of $T$ and therefore determine whether the lattice contains any wall-divisors. Up to isometries, the embedding $U(2) \hookrightarrow L$ in the statement can be realized as follows. Let $t = \frac{n-1}{2} \in \mathbb{N}$ and consider the map
\[ j: U(2) \hookrightarrow L = U^{\oplus 3} \oplus E_8^{\oplus 2} \oplus \langle -2(n - 1) \rangle, \quad (a,b) \mapsto 2ae_1 + (at+b)e_2 + ag\]
\noindent where $\left\{ e_1, e_2 \right\}$ is a basis for one of the summands $U$ of $L$ and $g$ is a generator of $\langle -2(n - 1) \rangle$. We then have $j(U(2))^\perp \cong U^{\oplus 2} \oplus E_8^{\oplus 2} \oplus \langle -2(n - 1) \rangle$, as required. In particular, if $n=5$ (i.e.\ $t=2$) one can show that the divisibility in $L$ of $(a,b) \in T = j(U(2))$ is $\gcd(2a,b)$, hence, if the element is primitive, it can only be one or two. We compute explicitly all possible pairs $(\delta^2, \divi(\delta))$ for wall-divisors $\delta \in L_5 = U^{\oplus 3} \oplus E_8^{\oplus 2} \oplus \langle -8 \rangle$. This is an application of \cite[Thm.~12.1]{bayer_macri_mmp} and \cite[Thm.~1.3]{mongardi_cones}, which gives the following results:
\begin{center}
\begin{tabular}{|c|c|}
\hline
$\delta^2$ & $\divi(\delta)$\\\hline \hline
$-2$ & $1$ \\\hline
$-8$ & $2$ \\\hline
$-8$ & $4$ \\\hline
$-8$ & $8$ \\\hline
$-16$ & $2$ \\\hline
$-40$ & $4$ \\\hline
$-72$ & $8$ \\\hline
$-136$ & $8$ \\\hline
$-200$ & $8$ \\\hline
\end{tabular}
\end{center}
Since for any $\delta \in T$ we have $\delta^2 \in 4\mathbb{Z}$, the only pairs $(\delta^2, \divi(\delta))$ for wall-divisors $\delta \in T$ are $(\delta^2, \divi(\delta)) = (-8,2), (-16,2)$. Each of the two admissible pairs $(\delta^2, \divi(\delta))$ yields a single wall-divisor $\delta \in T$, whose orthogonal complement $\delta^\perp$ intersects the positive cone of $T$ in its interior. We therefore have two (distinct) walls, which cut out three K\"ahler-type chambers in $\mathcal{C}_T$. These three chambers correspond to three distinct orbits, with respect to the action of the group $\Gamma_T$ on $\text{KT}(T)$. This is due to the fact that an isometry $\gamma \in \Gamma_T$ permutes the walls of the chambers, which in our case are generated by primitive vectors having all different squares.
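To make the parity constraint on $\delta^2$ explicit (the following verification is ours), recall that $e_1^2 = e_2^2 = 0$, $(e_1, e_2) = 1$ and $g^2 = -8$, so for $\delta = j(a,b) = 2a e_1 + (2a + b) e_2 + a g$ we get
\[\delta^2 = 2 \cdot 2a(2a + b) + a^2 g^2 = 8a^2 + 4ab - 8a^2 = 4ab \in 4\mathbb{Z},\]
in agreement with the bilinear form of $U(2)$.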
\end{proof}
\begin{rem} \label{rem: moduli spaces recap}
By Proposition \ref{prop: rank one}, there are two distinct $(\rho, T)$-polarizations with $T \cong \langle 2 \rangle$. In the following, we will denote them by $(\rho_a, \langle 2 \rangle)$ and $(\rho_b, \langle 2 \rangle)$, where the orthogonal complement $S$ of the admissible sublattice $T \subset L$ is as in case ($2a$) and ($2b$), respectively, of the proposition. In particular, for all $n \geq 2$ the moduli space $\mathcal{M}_{\langle 2\rangle, \rho_a}$ is non-empty, while $\mathcal{M}_{\langle 2\rangle, \rho_b} = \emptyset$ if $n \not \equiv 0 \pmod{4}$. In turn, again by Proposition \ref{prop: rank one}, for $n \geq 3$ there is only one $(\rho, T)$-polarization with $T \cong \langle 2(n-1) \rangle$: we denote by $\mathcal{M}_{\langle 2(n-1)\rangle, \rho}$ the corresponding moduli space, which is non-empty if and only if $-1$ is a quadratic residue modulo $n-1$. Finally, for $T \cong U(2)$, we have the two polarizations $(\rho_1, U(2))$, $(\rho_2, U(2))$ which we studied in Proposition \ref{prop:U(2)irreducible} and Proposition \ref{prop:U(2) 3 chambers}, respectively.
\end{rem}
\begin{theorem} \label{thm: max dim families}
Let $(X, \eta)$ be a marked manifold of $K3^{[n]}$-type for $2 \leq n \leq 5$, and let $i \in \aut(X)$ be a non-symplectic involution such that the pair $(X, i)$ deforms in a family of dimension $d \geq 19$. Then $(X, \eta)$ belongs to the closure of one of the following moduli spaces.
\begin{enumerate}
\item[$n=2$:] $\mathcal{M}_{\langle 2\rangle, \rho_a}$ or $\mathcal{M}_{U(2), \rho_1}$.
\item[$n=3$:] $\mathcal{M}_{\langle 2\rangle, \rho_a}$, $\mathcal{M}_{\langle 4\rangle, \rho}$ or $\mathcal{M}_{U(2), \rho_1}$.
\item[$n=4$:] $\mathcal{M}_{\langle 2\rangle,\rho_a}$, $\mathcal{M}_{\langle 2\rangle,\rho_b}$ or $\mathcal{M}_{U(2), \rho_1}$.
\item[$n=5$:] $\mathcal{M}_{\langle 2\rangle, \rho_a}$, $\mathcal{M}_{U(2), \rho_1}$ or $\mathcal{M}_{U(2), \rho_2}$.
\end{enumerate}
All these moduli spaces are irreducible with the exception of $\mathcal{M}_{U(2), \rho_2}$ for $n=5$, which has three distinct irreducible components.
\end{theorem}
\begin{proof}
Since $(X,i)$ deforms in a family of dimension at least $19$, it is a pair of type $T$ for some admissible lattice $T$ with $\mathrm{rk}\,(T) \leq 2$. At the level of period domains, the list in the statement is an easy consequence of Lemma \ref{lem:closure} and of Propositions \ref{prop: rank one}, \ref{prop:rank two +id} and \ref{prop:rk two -id}. Moreover, the period map is generically injective when restricted to manifolds polarized with a lattice of rank one, and the same is true in the case of $U(2)$ by Proposition \ref{prop:U(2)irreducible} and by \cite[Corollary 4.9.6]{joumaah}, with the exception of $n=5$ and $\mathcal{M}_{U(2), \rho_2}$ as explained in Proposition \ref{prop:U(2) 3 chambers}.
\end{proof}
\section{Examples}\label{sect: examples}
Even when we limit ourselves to $n \leq 5$, we observe that we lack the description of most of the projective families listed in Theorem \ref{thm: max dim families}. Indeed, while for $n=2$ both families have been described, respectively in \cite{OG}-\cite{BCMS} and \cite{IKKR_U2}, for $n\geq 3$ the family of $(\langle 2\rangle, \rho_a)$-polarized manifolds of $K3^{[n]}$-type is still unknown. In fact, when $n\geq 3$ the only two explicit examples which have been found are for $n=3$, $T\cong \langle 4\rangle$ (see \cite{IKKR} and Section \ref{sec: discr groups}) and $n=4$, $T\cong \langle 2\rangle$ with polarization $\rho_b$ (involution of the Lehn--Lehn--Sorger--van Straten eightfold; see for instance \cite{llms_twisted_cubics}), in addition to the involutions of Hilbert schemes of points on generic projective $K3$ surfaces whose existence has been proved by the second author in \cite{catt_autom_hilb}.
We conclude by observing that all families of dimension $19$ can in fact be realized as families of moduli spaces of stable twisted sheaves on a $K3$ surface. We briefly recall the construction and the properties of these moduli spaces.
Let $\Sigma$ be a $K3$ surface. By \cite[\S 2]{vg_brauer}, a Brauer class $\alpha \in H^2(\Sigma, \mathcal{O}^*_\Sigma)_{\text{tor}}$ of order $2$ corresponds to a surjective homomorphism $\alpha: \transc(\Sigma) \rightarrow \mathbb{Z}/2 \mathbb{Z}$, where $\transc(\Sigma) = \ns(\Sigma)^\perp \subset H^2(\Sigma, \mathbb{Z})$ is the transcendental lattice of the surface. A $B$-field lift of $\alpha$ is a class $B \in H^2(\Sigma, \mathbb{Q})$ (which can be determined via the exponential sequence) such that $2B \in H^2(\Sigma, \mathbb{Z})$ and $\alpha(v) = (2B, v)$ for all $v \in \transc(\Sigma)$ (see \cite[\S 3]{stellari_huybrechts}). Notice that $B$ is defined only up to an element in $H^2(\Sigma, \mathbb{Z}) + \frac{1}{2}\ns(\Sigma)$.
The full cohomology $H^*(\Sigma, \mathbb{Z}) = H^0(\Sigma, \mathbb{Z}) \oplus H^2(\Sigma, \mathbb{Z}) \oplus H^4(\Sigma, \mathbb{Z})$, endowed with the pairing $(r,H,s)\cdot (r',H',s') = H\cdot H' - rs' -r's$, is a lattice isometric to the Mukai lattice $M = U^{\oplus 4} \oplus E_8^{\oplus 2}$. A Mukai vector $v = (r,H,s)$ is said to be \emph{positive} if $H \in \Pic(\Sigma)$ and either $r > 0$, or $r = 0$ and $H \neq 0$ effective, or $r = H = 0$ and $s > 0$. If $v=(r,H,s) \in H^*(\Sigma, \mathbb{Z})$ is positive, and $B$ is a $B$-field lift of $\alpha$, we define the twisted Mukai vector \mbox{$v_B := (r, H + rB, s + B \cdot H + r\frac{B^2}{2})$}. If $v_B$ is primitive, for a suitable choice of a polarization $D$ of $\Sigma$ the coarse moduli space $M_{v_B}(\Sigma, \alpha)$ of $\alpha$-twisted Gieseker $D$-stable sheaves with Mukai vector $v_B$ is a projective IHS manifold of $K3^{[n]}$-type, with $n = \frac{v_B^2}{2} + 1$. Moreover, the image of the canonical embedding $H^2(M_{v_B}(\Sigma, \alpha), \mathbb{Z}) \hookrightarrow M$, which we recalled at the beginning of Section \ref{sec: inv and anti-inv lattices}, is the subspace $v_{B}^\perp \subset M$ (see \cite{yoshioka_twisted} and \cite{baymacr}). For the sake of readability, we do not specify the ample divisor $D$ in the notation for $M_{v_B}(\Sigma, \alpha)$, even though the construction depends on it: we will always assume that a choice of a polarization (generic with respect to the Mukai vector $v_B$, in the sense of \cite[Def.~3.5]{yoshioka_twisted}) has been made. The transcendental lattice of $M_{v_B}(\Sigma, \alpha)$ is isomorphic to $\ker(\alpha) \subset \transc(\Sigma)$, which is a sublattice of index $2$ if $\alpha$ is not trivial.
In turn, $\Pic(M_{v_B}(\Sigma, \alpha)) \cong v_B^\perp \cap \Pic(\Sigma, \alpha)$ inside $H^*(\Sigma, \mathbb{Z})$, where $\Pic(\Sigma, \alpha)$ is the sublattice generated by $\Pic(\Sigma)$ and by the vectors $(0,0,1), (2, 2B, 0)$ (see \cite[\S 3]{yoshioka_twisted} and \cite[Lemma 3.1]{macri_stellari}).
\begin{prop}\label{prop: twisted induced U(2) irred}
For $n \geq 2$, let $(X, \eta)$ be a very general element in the moduli space $\mathcal{M}_{U(2), \rho_1}$ of Proposition \ref{prop:U(2)irreducible}, such that $\eta(\Pic(X)) \cong U(2)$. Then, the manifold $X$ is isomorphic to a moduli space of twisted sheaves on a very general projective $\langle 2(n-1) \rangle$-polarized $K3$ surface.
\end{prop}
\begin{proof}
Let $\Sigma$ be a generic projective $K3$ surface of degree $2(n-1)$, i.e.\ $\Pic(\Sigma) = \mathbb{Z} L$ with $L = \mathcal{O}_\Sigma(H)$ for an effective, ample divisor $H$ with $H^2 = 2(n-1)$. Let $\left\{e_1, e_2 \right\}$ generate one of the summands $U$ in $\transc(\Sigma) \cong U^{\oplus 2} \oplus E_8^{\oplus 2} \oplus \langle -2(n-1) \rangle$, and consider the Brauer class of order two:
\[ \alpha: \transc(\Sigma) \rightarrow \mathbb{Z}/ 2 \mathbb{Z}, \qquad v \mapsto (e_1, v).\]
Clearly, $B = \frac{e_1}{2} \in H^2(\Sigma, \mathbb{Q})$ is a $B$-field lift of $\alpha$ such that $B^2 = 0$ and $B \cdot H = 0$, since $2B \in \transc(\Sigma)$. Consider the primitive positive Mukai vector $v = (0,H,0)$: then
\[v_B = \left( 0, H, B \cdot H \right) = v\]
\noindent and the moduli space $M_{v_B}(\Sigma, \alpha)$ is a manifold of $K3^{[n]}$-type with
\[\transc(M_{v_B}(\Sigma, \alpha)) \cong \ker(\alpha) \cong U \oplus U(2) \oplus E_8^{\oplus 2} \oplus \langle -2(n-1) \rangle.\]
Moreover, $\Pic(\Sigma, \alpha) = \langle (0,H,0), (0,0,1), (2, 2B, 0) \rangle \cong \langle 2(n-1) \rangle \oplus U(2)$, thus
\[ \Pic(M_{v_B}(\Sigma, \alpha)) \cong v_B^\perp \cap \Pic(\Sigma, \alpha) = \langle (0,0,1), (2, e_1, 0) \rangle \cong U(2). \]
Hence, the moduli space $Y = M_{v_B}(\Sigma, \alpha)$ constructed above has $\Pic(Y) \cong T$, $\transc(Y) \cong S$ for the lattices $T,S$ of Proposition \ref{prop:U(2)irreducible}. By the same proposition we know that the moduli space $\mathcal{M}_{U(2), \rho_1}$ is irreducible. For $(X, \eta) \in \mathcal{M}_{U(2), \rho_1}$ very general we also have $\Pic(X) \cong T$ and $\transc(X) \cong S$ (via the marking $\eta$). Hence, the statement follows from the generic injectivity of the period map for $U(2)$-polarized manifolds of $K3^{[n]}$-type (see \cite[Corollary 4.9.6]{joumaah}).
\end{proof}
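As a quick consistency check of the proof above (our computation, using the Mukai pairing $(r,H,s)\cdot(r',H',s') = H \cdot H' - rs' - r's$ recalled earlier):
\[v_B^2 = (0, H, 0)^2 = H^2 = 2(n - 1), \qquad \frac{v_B^2}{2} + 1 = n,\]
so $M_{v_B}(\Sigma, \alpha)$ is indeed of $K3^{[n]}$-type.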
\begin{rem}
For $(X, \eta) \in \mathcal{M}_{U(2), \rho_1}$, let $i \in \aut(X)$ be the non-symplectic involution such that $\eta \circ i^\ast= \rho_1 \circ \eta$. Even though, for $(X, \eta)$ very general, the manifold $X$ is isomorphic to $Y = M_{v_B}(\Sigma, \alpha)$ as in the previous proposition, if $n \geq 3$ we cannot realize the automorphism $i$ as a twisted induced involution on $Y$ (in the sense of \cite{ckkm}), since the group of automorphisms of the $K3$ surface $\Sigma$ is trivial (see \cite[\S 5]{saint-donat}).
\end{rem}
\begin{prop} \label{prop: twisted induced U(2) n=5}
For $n = 5$, let $\mathcal{M}_{U(2), \rho_2}$ be the moduli space of Proposition \ref{prop:U(2) 3 chambers}. There exists an irreducible component $\mathcal{M}^0 \subset \mathcal{M}_{U(2), \rho_2}$ such that, for the very general element $(X,\eta) \in \mathcal{M}^0$ with $\eta(\Pic(X)) \cong U(2)$, the manifold $X$ is isomorphic to a moduli space $Y$ of twisted sheaves on a very general projective $\langle 2 \rangle$-polarized $K3$ surface. Moreover, the non-symplectic involution $i \in \aut(X)$ such that $\eta \circ i^\ast= \rho_2 \circ \eta$ is realized by a twisted induced automorphism on $Y$.
\end{prop}
\begin{proof}
Let $\Sigma$ be the double cover of $ \mathbb{P}^2$ branched along a smooth sextic curve. We have $\Pic(\Sigma) \cong \langle 2\rangle$ and $\transc(\Sigma) \cong U^{\oplus 2} \oplus E_8^{\oplus 2} \oplus \langle -2 \rangle$. If we denote by $g$ the generator of the summand $\langle -2 \rangle$ inside $\transc(\Sigma)$, then the (non-primitive) index two sublattice $U^{\oplus 2} \oplus E_8^{\oplus 2} \oplus \langle 2g \rangle \subset \transc(\Sigma)$ is isometric to $S = U^{\oplus 2} \oplus E_8^{\oplus 2} \oplus \langle -8\rangle$. Let $\alpha$ be the following Brauer class of order two:
\[ \alpha: \transc(\Sigma) \rightarrow \mathbb{Z}/ 2\mathbb{Z}, \qquad \lambda + mg \mapsto m\]
\noindent where $\lambda \in U^{\oplus 2} \oplus E_8^{\oplus 2}$ and $m \in \mathbb{Z}$. Clearly, $\ker(\alpha) = U^{\oplus 2} \oplus E_8^{\oplus 2} \oplus \langle 2g \rangle \cong S$. Let $\left\{e_1, e_2\right\}$ generate a summand $U$ inside $H^2(\Sigma, \mathbb{Z}) \cong U^{\oplus 3} \oplus E_8^{\oplus 2}$. We can assume that $e_1 + e_2$ is the generator of $\Pic(\Sigma)$ and therefore $g = e_1 - e_2$. Notice that the rational class $B = \frac{e_2}{2} \in H^2(\Sigma, \mathbb{Q})$ is a $B$-field lift for $\alpha$, since $\alpha(x) = (e_2, x) \in \mathbb{Z}/ 2\mathbb{Z}$ for all $x \in \transc(\Sigma)$. Consider the (non-primitive) positive Mukai vector \mbox{$v = (0,2(e_1+e_2),0) \in H^*(\Sigma, \mathbb{Z})$}. When twisting $v$ with respect to the $B$-field lift $B$, we obtain $v_B = (0, 2(e_1+e_2), 1)$, which is now primitive of square $8$. Hence, the moduli space $M_{v_B}(\Sigma, \alpha)$ is a manifold of $K3^{[5]}$-type with transcendental lattice isomorphic to $S$. Moreover
\[\Pic(\Sigma, \alpha) = \langle (0,e_1+e_2,0), (0,0,1), (2, e_2, 0) \rangle\]
\noindent thus
\[ \Pic(M_{v_B}(\Sigma, \alpha)) \cong v_B^\perp \cap \Pic(\Sigma, \alpha) = \langle (0,0,1), (2, e_2, 0) \rangle \cong U(2).\]
Since $\Sigma$ is a double cover of the plane, it is equipped with a non-symplectic involution $\iota$, which acts as $\id$ on $H^0(\Sigma, \mathbb{Z}) \oplus \Pic(\Sigma) \oplus H^4(\Sigma, \mathbb{Z})$ and as $-\id$ on $\transc(\Sigma)$. This implies that both the Brauer class $\alpha: \transc(\Sigma) \rightarrow \mathbb{Z} / 2\mathbb{Z}$ and the twisted Mukai vector $v_B = (0, 2(e_1 + e_2), 1)$ are $\iota$-invariant. Then, by \cite[\S 3]{ckkm}, the moduli space $Y = M_{v_B}(\Sigma, \alpha)$ comes with a (non-symplectic) induced involution $\widetilde{\iota}$. In particular, the invariant lattice of $\widetilde{\iota}$ is the whole $\Pic(M_{v_B}(\Sigma, \alpha))$, since $\iota$ acts trivially on $\langle (0,0,1), (2,e_2,0) \rangle$ by \cite[Remark 2.4]{ckkm} (the two classes $(2,e_2,0)$ and $(2,\iota^*(e_2),0) = (2,e_1,0)$ coincide in $H^2(M_{v_B}(\Sigma, \alpha), \mathbb{Z})$). As in Proposition \ref{prop: twisted induced U(2) irred}, the statement follows from the generic injectivity of the period map, after recalling that $\mathcal{M}_{U(2), \rho_2}$ has three irreducible components by Proposition \ref{prop:U(2) 3 chambers}.
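For the reader's convenience, we spell out the twisting computation used above (this verification is ours): with $r = 0$, $H = 2(e_1 + e_2)$, $s = 0$ and $B = \frac{e_2}{2}$,
\[v_B = \left(0,\; 2(e_1 + e_2),\; B \cdot 2(e_1 + e_2)\right) = \left(0,\; 2(e_1 + e_2),\; e_2 \cdot (e_1 + e_2)\right) = \left(0,\; 2(e_1 + e_2),\; 1\right),\]
\[v_B^2 = \left(2(e_1 + e_2)\right)^2 = 4 \cdot 2(e_1, e_2) = 8, \qquad \frac{v_B^2}{2} + 1 = 5,\]
consistent with $M_{v_B}(\Sigma, \alpha)$ being of $K3^{[5]}$-type.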
\end{proof}
\end{document}
\begin{document}
\title{One- versus multi-component regular variation\\ and extremes of Markov trees}
\authorone[UCLouvain]{Johan Segers}
\addressone{UCLouvain, LIDAM/ISBA, Voie du Roman Pays 20, B-1348 Louvain-la-Neuve, Belgium. Email: [email protected]}
\begin{abstract}
A Markov tree is a random vector indexed by the nodes of a tree whose distribution is determined by the distributions of pairs of neighbouring variables and a list of conditional independence relations.
Under an assumption on the tails of the Markov kernels associated to these pairs, the conditional distribution of the self-normalized random vector, as the variable at the root of the tree tends to infinity, converges weakly to a random vector of coupled random walks called tail tree.
If, in addition, the conditioning variable has a regularly varying tail, the Markov tree satisfies a form of one-component regular variation.
Changing the location of the root, that is, changing the conditioning variable, yields a different tail tree.
When the tails of the marginal distributions of the conditioning variables are balanced, these tail trees are connected by a formula that generalizes the time change formula for regularly varying stationary time series.
The formula is most easily understood when the various one-component regular variation statements are tied up to a single multi-component statement.
The theory of multi-component regular variation is worked out for general random vectors, not necessarily Markov trees, with an eye towards other models, graphical or otherwise.
\end{abstract}
\keywords{Conditional independence; graphical model; H\"usler--Reiss distribution; max-linear model; Markov tree; multivariate Pareto distribution; Pickands dependence function; regular variation; root change formula; tail measure; tail tree; time change formula.}
\section{Introduction}
Imagine a random vector $X = (X_1, \ldots, X_d)$ of nonnegative variables. One of the components, say $X_i$, is known to have exceeded a large threshold. How does this information affect the conditional distribution of the whole vector $X$? There could be a causal link from $X_i$ to the other variables $X_j$, perhaps via a network of dependence relations, so that tampering with $X_i$ would affect the whole system. Another possibility is that a large value of $X_i$ is merely the result of a large value of some other variable $X_j$. The latter event, however, could have consequences for still other variables $X_k$.
Depending on which one of the $d$ components is known to have been exceptionally large, the conditional distribution of $X$ is likely to be different. Still, if high values of two variables $X_i$ and $X_j$ are not unlikely to arrive together, the conditional distribution of $X$ given that $X_i$ is large must be connected to the one given that $X_j$ is large.
In this paper, these questions are studied for general random vectors using the language of regular variation. The answers are worked out for the particular case that $X$ is a Markov tree. A large value at a particular node is found to spread through the tree via independent increments along the edges. The joint limit distribution is the one of a vector of coupled geometric random walks. The couplings occur through the common edges of different paths starting at the same root node.
Graphical models, of which Markov trees are a special case, bring structure and sparsity to the web of dependence relations between many random variables \citep{LauritzenBook, wainwright+j:2008}. Extreme value theory for such models is a fairly recent subject.
In \citep{asadi+d+s:2015}, a metric that takes the distance along a river into account underlies a spatial model for extremes of river networks.
Recursive max-linear models on directed acyclic graphs are proposed in \citep{gissibl+k:2018} and put to work in \citep{einmahl+k+s:2018, gissibl+k+o:2018}.
In \citep{hitz+e:2016}, the density of a multivariate Pareto distribution is factorized through a version of the Hammersley--Clifford theorem.
Such factorizations are also the theme in \citep{engelke+h:2018}, where they form the basis of new inference methods for extremes of graphical models, including the identification of the graphical structure itself.
Multivariate H\"usler--Reiss extreme-value copulas based on Gaussian Markov trees and higher-order truncated vines are introduced in \cite{lee+j:2018}, who propose composite likelihood methods based on bivariate margins to estimate the parameters.
Multivariate Pareto distributions arise as weak limits of normalized random vectors conditionally on the event that at least one component exceeds a high threshold. Although such conditioning events are covered by Theorem~\ref{thm:MPD:rho} below, the focus of this paper is rather on the case where the exceedance is known to have occurred at a specific variable. The message hinted at in the title is that both points of view are mathematically equivalent, but that, at least for Markov trees, the one-component limit is particularly elegant, as will be explained next.
\subsection{Tail tree of a Markov tree}
For a Markov chain, it was discovered in \citep{smith:1992} that, conditionally on the event that the series is large at some time instant, the conditional distribution of the future of the system is that of a random walk, a process called tail chain in \citep{perfekt:1994}. For light-tailed marginal distributions, this random walk is additive, and for heavy-tailed margins it is geometric, i.e., multiplicative, which is the convention used in this paper.
A Markov tree can be viewed as a coupled collection of Markov chains with common stretches. Take for instance the four-variate Markov tree in Figure~\ref{fig:MT:4}. The nodes of the tree are $\{1,2,3,4\}$ and the three pairs of neighbours are $\{1,2\}$, $\{2,3\}$ and $\{2,4\}$. The vector $(X_1, X_2, X_3)$ is a Markov chain, and so is $(X_1, X_2, X_4)$. These two chains are coupled via the common pair $(X_1, X_2)$. Conditionally on $X_2$, the variables $X_1$, $X_3$ and $X_4$ are independent, since any path that connects two of the three nodes~1, 3 and 4 passes through node~2. This conditional independence property together with the distributions of the three pairs $(X_1, X_2)$, $(X_2, X_3)$ and $(X_2, X_4)$ determines the joint distribution of $(X_1, X_2, X_3, X_4)$.
For the moment, assume that the four variables have the same, regularly varying tail function. The set-up involving regular variation will be further motivated in Section~\ref{subsec:rv}. The effect (not necessarily causal) on $X_2$ of a large value at $X_1$ is via a multiplicative increment $M_{1,2}$ whose distribution is equal to the weak limit of $X_2/X_1$ conditionally on $X_1 = t$ as $t \to \infty$. The existence of this limit is an assumption on the Markov kernel induced by the distribution of the pair $(X_1, X_2)$. Similarly, a large value at $X_2$ affects $X_3$ and $X_4$ via the increments $M_{2,3}$ and $M_{2,4}$, respectively. The effect of $X_1$ on $X_3$ is then through the composite increment $M_{1,2} M_{2,3}$, whereas on $X_4$ it is through $M_{1,2} M_{2,4}$. The conditional independence property ensures that the increments $M_{1,2}$, $M_{2,3}$ and $M_{2,4}$ are mutually independent. The common edge $(1, 2)$ on the paths from node~$1$ to node~$3$ and from node~$1$ to node~$4$ induces dependence between the two tail chains $(M_{1,2}, M_{1,2} M_{2,3})$ and $(M_{1,2}, M_{1,2} M_{2,4})$ via the common increment $M_{1,2}$. In this paper, the random vector
\begin{equation}
\label{eq:tt}
(\Theta_{1,2}, \Theta_{1,3}, \Theta_{1,4})
= (M_{1,2}, M_{1,2} M_{2,3}, M_{1,2} M_{2,4})
\end{equation}
is called the \emph{tail tree} induced by $X$ with root at node $u=1$.
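To make the coupling concrete, the following minimal simulation sketch draws the tail tree in \eqref{eq:tt} with hypothetical uniform increment laws (the theory places no such restriction on the measures $\mu_e$; only the product structure is taken from \eqref{eq:tt}) and checks that the shared increment $M_{1,2}$ makes the coordinates $\Theta_{1,3}$ and $\Theta_{1,4}$ positively dependent even though $M_{2,3}$ and $M_{2,4}$ are independent:

```python
import random

random.seed(0)

def sample_tail_tree():
    # Hypothetical increment laws: each M_e ~ Uniform(0, 2), chosen purely
    # for illustration; only the product structure below comes from eq. (tt).
    m12, m23, m24 = (random.uniform(0.0, 2.0) for _ in range(3))
    return m12, m12 * m23, m12 * m24   # (Theta_{1,2}, Theta_{1,3}, Theta_{1,4})

draws = [sample_tail_tree() for _ in range(200_000)]
# The chains through nodes 3 and 4 share the increment M_{1,2}, so the
# coordinates Theta_{1,3} and Theta_{1,4} are positively dependent.
joint = sum(1 for _, a, b in draws if a > 1.0 and b > 1.0) / len(draws)
p3 = sum(1 for _, a, _ in draws if a > 1.0) / len(draws)
p4 = sum(1 for _, _, b in draws if b > 1.0) / len(draws)
print(round(joint, 3), round(p3 * p4, 3))   # joint exceedance beats the product
```

The gap between the joint exceedance probability and the product of the marginal ones is exactly the dependence injected by the common edge $(1,2)$.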
\begin{figure}
\caption{\label{fig:MT:4} A Markov tree on four nodes, with edges between the pairs $\{1,2\}$, $\{2,3\}$ and $\{2,4\}$.}
\end{figure}
The tail tree represents a network of stochastic dependence relations that are not necessarily causal. Suppose the Markov tree in Figure~\ref{fig:MT:4} represents water levels at four locations on a river network. If water flows from left to right, node~2 represents a point where the stream branches into two channels, as occurs for instance in a river delta. If water flows from right to left, however, node~2 represents the junction of two branches coming from nodes~3 and~4 into a larger stream flowing towards node~1. In the first case, the tail tree describes how a high water level at the upstream node~1 may cause high water levels at various locations in the delta further downstream. In the second case, however, it is nodes~3 and~4 that are situated upstream, and the tail tree models the sources of a high water volume at the downstream site~1. Still other set-ups are possible, such as for instance node~3 being upstream and nodes~1 and~4 being downstream: high water levels at nodes~1 and~4 are then related through a common cause at node~2, which can itself perhaps be traced back to node~3.
Whatever the causal relationships within $X$, it may make sense to change the conditioning variable. In Figure~\ref{fig:MT:4}, for instance, suppose it is known that a large value has occurred at node~3 rather than at node~1. Tracing the paths from node~3 to the three other nodes yields the tail tree with root at node $u = 3$:
\begin{equation}
\label{eq:tt:bis}
(\Theta_{3,2}, \Theta_{3,1}, \Theta_{3,4})
=
(M_{3,2}, M_{3,2}M_{2,1}, M_{3,2}M_{2,4}).
\end{equation}
The tail trees in \eqref{eq:tt} and~\eqref{eq:tt:bis} have a similar structure. The two edges on the path between the root nodes~1 and~3 have changed direction, however. The edge from node~2 to node~4 is common to both tail trees.
For each pair $\{a, b\}$ of neighbouring nodes, the choice of the root node~$u$ determines which of the two increments appears in the tail tree: $M_{a,b}$ from $X_a$ to $X_b$ or $M_{b,a}$ from $X_b$ to $X_a$. The distributions of $M_{a,b}$ and $M_{b,a}$ are connected by an expression that involves the marginal distributions of $X_a$ and $X_b$. For stationary and reversible Markov chains, this relation underlies a sufficiency property discovered in \citep{bortot+c:2000}. For tail chains of not necessarily reversible Markov chains, it was described in \citep{janssen+s:2014, segers:2007} and for tail processes of regularly varying stationary time series in \citep{basrak+s:2009} via the time change formula. This formula can be understood most easily through the connection between the tail process and the tail measure \citep{dombry+h+s:2018, planinic+s:2018, samorodnitsky+o:2012}, and this is also the way in which the root change formula in Corollary~\ref{cor:modcons} below will be derived, but then without the assumption of stationarity and for general random vectors, not necessarily Markov trees.
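For orientation, in the stationary, identically distributed case with tail index $\alpha$, the relation takes the following form, which is the time change formula of \citep{basrak+s:2009} specialized to a single lag (stated here as a sketch; the general version, without stationarity and with tail-balance constants, is the content of Corollary~\ref{cor:modcons} below): for every bounded measurable function $g$,
\[
\operatorname{\mathbb{E}} \bigl[ g( M_{b,a} ) \, \mathds{1}( M_{b,a} > 0 ) \bigr]
=
\operatorname{\mathbb{E}} \bigl[ g( 1 / M_{a,b} ) \, M_{a,b}^{\alpha} \, \mathds{1}( M_{a,b} > 0 ) \bigr],
\]
so that the law of the backward increment $M_{b,a}$ on $(0, \infty)$ is determined by the law of the forward increment $M_{a,b}$, any remaining mass of $M_{b,a}$ sitting in an atom at zero.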
\subsection{Regular variation}
\label{subsec:rv}
The language of regularly varying functions and measures provides a rich medium through which to express limit theorems. Recall that a positive, Lebesgue measurable function $f$ defined on a neighbourhood of infinity is regularly varying with index $\tau \in \mathbb{R}$ if $\lim_{t \to \infty} f(\lambda t)/f(t) = \lambda^\tau$ for all $\lambda \in (0, \infty)$. If $X$ is a nonnegative random variable with unbounded support, cumulative distribution function $F(x) = \operatorname{\mathbb{P}}(X \leqslant x)$ and tail function $\overline{F} = 1-F$, regular variation of $\overline{F}$ with index $-\alpha < 0$ is equivalent to weak convergence of the conditional distribution of $X/t$ given that $X > t$ to a Pareto random variable $Y$ with index $\alpha$, i.e., $\operatorname{\mathbb{P}}(Y > y) = y^{-\alpha}$ for all $y \in [1, \infty)$. We write $\mathcal{L}(X/t \mid X>t) \dto \operatorname{Pa}(\alpha)$ as $t \to \infty$, where $\mathcal{L}(Z\mid A)$ denotes the conditional distribution of the random object $Z$ given the event $A$, the arrow $\dto$ denotes convergence in distribution, and $\operatorname{Pa}(\alpha)$ denotes the said Pareto distribution.
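The equivalence can be checked numerically. The sketch below uses a Fr\'echet distribution, whose tail $1 - \exp(-x^{-\alpha}) \sim x^{-\alpha}$ is regularly varying, and samples from the conditional law given an exceedance by inversion; the choice of distribution and of $\alpha = 2$ is purely illustrative:

```python
import math
import random

random.seed(1)
ALPHA = 2.0   # illustrative tail index

def frechet_given_exceedance(t, n):
    """Sample X/t for X ~ Frechet(ALPHA) conditioned on X > t, by inversion.

    The Frechet cdf F(x) = exp(-x**-ALPHA) has tail 1 - F(x) ~ x**-ALPHA,
    so L(X/t | X > t) should approach Pa(ALPHA) as t grows.
    """
    lo = math.exp(-t ** -ALPHA)   # = F(t)
    return [(-math.log(random.uniform(lo, 1.0))) ** (-1.0 / ALPHA) / t
            for _ in range(n)]

for t in (5.0, 50.0):
    ratios = frechet_given_exceedance(t, 200_000)
    est = sum(r > 2.0 for r in ratios) / len(ratios)
    print(t, round(est, 3))   # approaches P(Pa(ALPHA) > 2) = 2**-ALPHA = 0.25
```

Inversion restricted to the interval $(F(t), 1)$ avoids the wasteful rejection step that naive conditioning on the rare event $X > t$ would require.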
For multivariate distributions, regular variation can be described via multivariate cumulative distribution functions as well, but an approach via convergence of Borel measures is more versatile. Let the state space be $\mathbb{S} = [0, \infty)^d$. Generalizations to star-shaped metric spaces or abstract cones as in \citep{dombry+h+s:2018, HultLindskog2006, lindskog+r+r:2014, segers+z+m:2017} are left for further work. Let $I \subset \{1, \ldots, d\}$ denote the non-empty set of indices $i$ of variables of which the conditioning event $X_i > t$ is of possible interest. The marginal distributions of $X_i$ for $i \in I$ are assumed to be regularly varying and the ratios of their tail functions are assumed to converge to positive constants. This set-up is a bit more general than the one of identical margins and comes at little technical or notational cost.
The measures involved may have infinite mass but need to assign finite values to sets that remain bounded away from $\{ x \in \mathbb{S}: \forall i \in I, x_i = 0 \}$ or $\{ x \in \mathbb{S} : x_i = 0 \}$, depending on the conditioning event. The topology on the space of such measures will be the one proposed in \citep{lindskog+r+r:2014}, extending \citep{HultLindskog2006}, and resembles the one of vague convergence of measures, but avoiding the need to consider artificially compactified spaces. Regular variation is defined as convergence of $b(t) \, \operatorname{\mathbb{P}}(X/t \in \,\cdot\,)$ to a limit measure called tail measure. Here, $b(t) > 0$ is a scale function tending to infinity and calibrated to the marginal distributions of $X_i$ for $i \in I$.
It is instructive to formulate statements in terms of weak convergence of distributions. For a high threshold $t$ tending to infinity and for a component $i \in I$, consider the asymptotic distribution of the rescaled random vector $X/t$ given that $X_i > t$. Decompose $X/t$ as $(X_i/t, X/X_i)$. Here, $X_i/t$ represents the overall level of $X$ with respect to $t$ whereas $X/X_i$ represents a self-normalized version of $X$. Convergence in distribution of $(X_i/t, X/X_i)$ given $X_i > t$ as $t \to \infty$ is a special case of what is called one-component regular variation in \citep{hitz+e:2016}, explored already in \citep{heffernan+r:2007, Resnick2014} for the bivariate case but allowing for affine normalizations. The random variable $X_i/t$ is asymptotically $\operatorname{Pa}(\alpha)$ distributed and independent of $X/X_i$, whose weak limit, denoted by $\Theta_i = (\Theta_{i,j})_{j=1}^d$, captures extremal dependence within $X$ given that $X_i$ is large. Letting the index $i$ run through $I$ produces multiple such one-component regular variation statements, which, together, are equivalent to what can be called multi-component regular variation. The limit distributions $\Theta_{i}$ that arise for various indices~$i$ must be mutually consistent, and the tail measure mentioned at the end of the previous paragraph embraces them all at once.
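A toy numerical illustration of this decomposition may help. In the hypothetical model $X_2 = X_1 M$ with $M$ independent of a Pareto variable $X_1$, the factorization of $(X_1/t, X_2/X_1)$ given $X_1 > t$ is exact at every threshold, because the Pareto tail is scale-free; for general $X$ the product form only emerges in the limit $t \to \infty$:

```python
import random

random.seed(2)
ALPHA = 3.0   # illustrative tail index
N = 200_000

def sample_given_exceedance():
    # Toy model X = (X1, X2) with X2 = X1 * M, M independent of X1 and X1
    # Pareto(ALPHA); model and parameters are hypothetical. Since the Pareto
    # tail is scale-free, L(X1/t | X1 > t) = Pa(ALPHA) at *every* t, so the
    # product form below is exact here, not only in the limit.
    u = (1.0 - random.random()) ** (-1.0 / ALPHA)   # X1/t given X1 > t
    m = random.uniform(0.5, 2.0)                    # X2/X1, the spectral part
    return u, m

samples = [sample_given_exceedance() for _ in range(N)]
joint = sum(1 for u, m in samples if u > 2.0 and m > 1.0) / N
prod = (sum(1 for u, _ in samples if u > 2.0) / N) \
    * (sum(1 for _, m in samples if m > 1.0) / N)
print(round(joint, 3), round(prod, 3))   # asymptotic independence: these agree
```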
In Section~\ref{sec:one2multi}, the focus is on tying together multiple one-component regular variation limits. The theory is worked out for general random vectors, not necessarily Markov trees. A number of results in that section have already been formulated in the literature in one way or another, in slightly different settings. Some of the equivalence relations in Theorem~\ref{thm:one2multi}, for instance, resemble those in~\citep[Theorem~1.4]{hitz+e:2016} and~\citep[Proposition~3.1]{segers+z+m:2017}. The model consistency property between limit measures in Theorem~\ref{thm:one2multi}(ii) is formulated in \citep[Section~2]{das+r:2011} for the bivariate case. The root change formula in Corollary~\ref{cor:rcf} extends the time change formula for regularly varying stationary time series stemming from \citep{basrak+s:2009} and studied extensively in \citep{dombry+h+s:2018, janssen:2018}. Multivariate Pareto distributions as in Theorem~\ref{thm:MPD:rho} are foreshadowed in \citep[Section~6.3]{resnick:2006} and appear in \citep{ferreira+dh:2014, rootzen+s+w:2018} when $\rho(x) = \max(x_1, \ldots, x_d)$ and in \citep{dombry+r:2015} for more general functionals $\rho$.
These are just a few connections, and the above list is by no means intended to be complete.
The set-up involving regular variation is intended to serve two purposes. First, to model tail dependence within a vector of random variables which have been transformed to the same, heavy-tailed distribution, such as the unit-Fréchet distribution, as is common in multivariate extreme value theory. Second, to model the joint distribution of a vector of regularly varying random variables, not necessarily identically distributed, but with equivalent tails, such as returns on financial portfolios composed of the same basket of underlying assets. The latter framework is more general than the former and comes at little additional notational cost.
\subsection{Outline}
For a Markov tree $X$, convergence as $t \to \infty$ of the conditional distribution of $X/X_u$ given that $X_u = t$ is proved in Section~\ref{sec:tailtree}. The main assumption is that, for edges $e = (a, b)$ directed away from the root $u$, the conditional distribution of $X_b/X_a$ given $X_a = t$ converges as $t \to \infty$. No regular variation is needed yet.
The tail trees pertaining to different roots $u$ can be linked up thanks to the theory of one- and multi-component regular variation developed in Section~\ref{sec:one2multi}. The results do not rely on the Markov property and cover quite general random vectors $X$ on $[0, \infty)^d$, as is illustrated briefly for max-linear models. An interesting special case of these are the recursive max-linear structural equation models introduced in \citep{gissibl+k:2018}, featuring a causal structure induced by a directed acyclic graph. Most of the proofs of this section are deferred to the Appendix.
When combined, the results in Sections~\ref{sec:tailtree} and~\ref{sec:one2multi} serve to uncover the regular variation properties of Markov trees in Section~\ref{sec:rvmt}.
The common special case that the joint distribution of the Markov tree is absolutely continuous with respect to Lebesgue measure is the subject of Section~\ref{sec:ac}. The theory then simplifies considerably and the limit distribution with respect to a single root $u$ is already sufficient to reconstruct the limit distributions with respect to all other possible roots $\bar{u}$.
In Sections~\ref{sec:rvmt} and~\ref{sec:ac}, the distributions of the increments of the tail trees are calculated in case the pair distributions are max-stable, not necessarily absolutely continuous. For the H\"usler--Reiss max-stable distribution, the tail tree is multivariate log-normal, constructed from partial sums of independent normal random variables along the edges of the tree.
\section{The spectral tail tree of a Markov tree}
\label{sec:tailtree}
A (finite) graph is a pair $(V, E)$ where $V$ is a non-empty finite set of vertices or nodes and where $E \subset V \times V$ is a set of edges. Self-loops are excluded, i.e., $(u, u) \not\in E$ for all $u \in V$. To avoid trivialities, $V$ is assumed to have at least two elements. Two nodes are neighbours if they are joined by an edge. A graph is undirected if $(a, b) \in E$ implies $(b, a) \in E$. A path from a node $u$ to a node $v$ is a collection $\{e_1, \ldots, e_n\} \subset E$ of edges such that $e_k = (u_{k-1}, u_k)$ for all $k = 1, \ldots, n$, for $n+1$ \emph{distinct} nodes $u_0, u_1, \ldots, u_n \in V$ such that $u_0 = u$ and $u_n = v$. An undirected tree $\mathcal{T} = (V, E)$ is an undirected graph such that for any pair of distinct nodes $u$ and $v$, there exists a unique path from $u$ to $v$, and this path is then denoted by $\operatorname{path}(u, v)$.
Let $\mathcal{T} = (V, E)$ be an undirected tree and let $X = (X_v)_{v \in V}$ be a random vector indexed by the nodes of the tree. The pair $(X, \mathcal{T})$ is a Markov tree if it satisfies the global Markov property \citep{LauritzenBook}: whenever $A, B, S$ are disjoint, non-empty subsets of $V$ such that $S$ separates $A$ and $B$ (i.e., any path between a node $a \in A$ and a node $b \in B$ passes through some node in $S$), the conditional independence relation
\begin{equation}
\label{eq:Markov}
X_A \perp\!\!\!\perp X_B \mid X_S
\end{equation}
holds, where $X_W$ denotes the random vector $(X_v)_{v \in W}$ for $W \subset V$.
For an undirected tree $\mathcal{T} = (V, E)$ and a node $u \in V$, let $\mathcal{T}_u = (V, E_u)$ denote the directed, rooted tree that consists of directing the edges in $E$ outward starting from $u$. Formally, $E_u$ is the subset of $E$ that is obtained by choosing for every pair of edges $(a, b)$ and $(b, a)$ in $E$ the one such that the first node separates the second one from $u$. If $(a, b) \in E_u$, then $a$ is the (necessarily unique) parent of $b$ in $\mathcal{T}_u$ whereas $b$ is a child of $a$ in $\mathcal{T}_u$.
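Computationally, both constructions above, the unique path between two nodes and the rooted edge set $E_u$, fall out of a single breadth-first search from $u$. The sketch below uses a hypothetical seven-node tree; any undirected tree works:

```python
from collections import deque

# Hypothetical undirected tree given by its (symmetric) adjacency lists.
NEIGHBOURS = {1: [2, 3, 4], 2: [1], 3: [1], 4: [1, 5, 6], 5: [4, 7], 6: [4], 7: [5]}

def rooted_edges(u):
    """Direct every edge away from the root u: returns (parent map, E_u)."""
    parent, queue, e_u = {u: None}, deque([u]), []
    while queue:
        a = queue.popleft()
        for b in NEIGHBOURS[a]:
            if b not in parent:          # in a tree, each node is reached once
                parent[b] = a
                e_u.append((a, b))       # parents are appended before children
                queue.append(b)
    return parent, e_u

def path(u, v):
    """Unique path from u to v, as a list of directed edges of E_u."""
    parent, _ = rooted_edges(u)
    edges = []
    while v != u:
        edges.append((parent[v], v))
        v = parent[v]
    return list(reversed(edges))

print(path(1, 7))   # [(1, 4), (4, 5), (5, 7)]
```

Because the graph is a tree, breadth-first search visits each node exactly once, which makes the parent map, and hence the path, unique.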
Let $(X, \mathcal{T})$ be a nonnegative Markov tree, where $\mathcal{T} = (V, E)$ is an undirected tree.
\begin{condition}
\label{ass:MT}
There exists $u \in V$ with the following two properties.
\begin{compactenum}[(i)]
\item
For every directed edge $e = (a, b) \in E_u$, there exists a version of the conditional distribution of $X_b$ given $X_a$ and a probability measure $\mu_e$ on $[0, \infty)$ such that
\begin{equation}
\label{eq:kernel:limit}
\mathcal{L}( X_b / x_a \mid X_a = x_a ) \dto \mu_e, \qquad x_a \to \infty.
\end{equation}
\item
For edges $e = (a, b) \in E_u$ such that $a \ne u$ and such that there exists an edge $\bar{e} \in \operatorname{path}(u, a)$ for which $\mu_{\bar{e}}( \{0\} ) > 0$, we have
\begin{equation}
\label{eq:kernel:control}
\forall \eta > 0, \qquad
\lim_{\delta \downarrow 0}
\limsup_{x \to \infty}
\sup_{\varepsilon \in [0, \delta]}
\operatorname{\mathbb{P}}( X_b / x > \eta \mid X_a = \varepsilon x )
= 0.
\end{equation}
\end{compactenum}
\end{condition}
Condition~\ref{ass:MT}(ii) is similar to \citep[equation~(3.4)]{perfekt:1994} and prevents non-extreme values from causing extreme ones. A similar assumption is \citep[equation~(2.4)]{segers:2007}, and \citep[Example~7.5]{segers:2007} illustrates what can go wrong without it.
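A simple kernel satisfying Condition~\ref{ass:MT}(i) may help fix ideas: if $X_b = X_a M + E$ with $M$ and $E$ independent of $X_a$ and $E$ bounded, then $X_b / x_a = M + E / x_a \to M$ in distribution, so \eqref{eq:kernel:limit} holds with $\mu_e = \mathcal{L}(M)$. The sketch below (log-normal $M$, uniform $E$, both hypothetical choices) checks this numerically:

```python
import math
import random

random.seed(4)

def kernel_sample(x):
    # Hypothetical kernel L(X_b | X_a = x): x * M + E with log-normal M and
    # a bounded perturbation E, both independent of X_a.
    m = math.exp(random.gauss(0.0, 1.0))
    eps = random.uniform(-1.0, 1.0)
    return x * m + eps

def conditional_cdf_at_one(x, n=100_000):
    """Estimate P(X_b / x <= 1 | X_a = x); the limit is P(M <= 1) = 1/2."""
    return sum(1 for _ in range(n) if kernel_sample(x) / x <= 1.0) / n

for x in (10.0, 100.0, 10_000.0):
    print(x, round(conditional_cdf_at_one(x), 3))
```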
\begin{theorem}
\label{thm:MT}
Let $(X, \mathcal{T})$ be a nonnegative Markov tree on $\mathcal{T} = (V, E)$. Assume Condition~\ref{ass:MT}. Let $(M_e : e \in E_u)$ be a vector of independent random variables such that the law of $M_e$ is $\mu_e$ for all $e \in E_u$. Then
\begin{equation}
\label{eq:spectral}
\mathcal{L} ( X / X_u \mid X_u = t )
\dto
\Theta_u = ( \Theta_{u,v} )_{v \in V},
\qquad t \to \infty,
\end{equation}
where $\Theta_{u,u} = 1$ and
\begin{equation}
\label{eq:Thetauv}
\forall v \in V \setminus \{u\}, \qquad
\Theta_{u,v} = \prod_{e \in \operatorname{path}(u, v)} M_e.
\end{equation}
\end{theorem}
The random vector $(\Theta_{u,v})_{v \in V}$ is called the \emph{tail tree} of the Markov tree $(X_v)_{v \in V}$, adapting terminology for Markov chains in \citep{perfekt:1994}. In Figure~\ref{fig:T}, the tail tree is illustrated for a tree with seven nodes. For subvectors $(\Theta_{u,w})_{w \in W}$ where all nodes in $W$ lie on the same path starting at $u$, the structure of the tail tree is that of a geometric random walk; take for instance $u = 1$ and $W = \{1, 4, 5, 7\}$ in Figure~\ref{fig:T}. The tail tree couples several geometric random walks together through the common edges in the underlying paths: in the same figure, consider for instance the vectors indexed by $\{1, 4, 5, 7\}$ and by $\{1, 4, 6\}$, respectively, which share the initial edge $(1, 4)$.
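Algorithmically, \eqref{eq:Thetauv} is a single pass over $E_u$ in any order that lists parents before children, multiplying one increment per edge. The sketch below uses a hypothetical rooted edge set in the spirit of Figure~\ref{fig:T} and log-normal increments (an illustrative choice), and checks the random-walk structure along the path through nodes $1$, $4$, $5$ and $7$:

```python
import math
import random

random.seed(5)

# Hypothetical rooted edge set E_u (root u = 1); the increments M_e are taken
# log-normal with scale SIGMA purely for illustration.
E_U = [(1, 2), (1, 3), (1, 4), (4, 5), (4, 6), (5, 7)]
SIGMA = 0.5

def sample_theta():
    theta = {1: 1.0}                                  # Theta_{u,u} = 1
    for a, b in E_U:                                  # parents precede children
        theta[b] = theta[a] * math.exp(random.gauss(0.0, SIGMA))
    return theta

draws = [sample_theta() for _ in range(100_000)]
# Along the path 1 -> 4 -> 5 -> 7, log Theta_{1,7} is a three-step random
# walk with independent N(0, SIGMA**2) steps, so its variance is 3 * SIGMA**2.
logs = [math.log(d[7]) for d in draws]
mean = sum(logs) / len(logs)
var7 = sum((x - mean) ** 2 for x in logs) / len(logs)
print(round(var7, 2))
```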
\begin{figure}
\caption{\label{fig:T} A tree with seven nodes, used to illustrate the tail tree rooted at node~1.}
\end{figure}
\begin{proof}[Proof of Theorem~\ref{thm:MT}]
Put $d = \abs{V} - 1 \geqslant 1$. The proof is by induction on $d$.
If $V$ has only two elements, i.e., $d = 1$, then Condition~\ref{ass:MT}(i) already confirms the convergence stated in \eqref{eq:spectral} and \eqref{eq:Thetauv}. Therefore, we can henceforth assume that $V$ has at least three elements, i.e., $d \geqslant 2$. Identify $V$ with $\{0, 1, \ldots, d\}$ in such a way that the root is $u = 0$ and such that if $(a, b) \in E_u$ then $a < b$. Since $X_0/X_0 = 1 = \Theta_{0,0}$, we do not need to consider the components $X_0$ and $\Theta_{0,0}$ in \eqref{eq:spectral}.
{}
\noindent\emph{Step 1.} ---
Let $k$ denote the parent of $d$ in the directed tree $\mathcal{T}_0$, that is, $k$ is the unique node in $\{0, 1, \ldots, d-1\}$ such that $(k, d)$ is an edge in $E_0$. Our way of numbering the nodes implies that $d$ cannot be the parent of any other node, i.e., $d$ is a leaf of $\mathcal{T}_0$. Condition~\ref{ass:MT} is then satisfied also for the nonnegative Markov tree $X_{0:(d-1)} = (X_0, X_1, \ldots, X_{d-1})$ on the tree that is obtained from $\mathcal{T}$ by removing node $d$ from $V$ and edges $(k, d)$ and $(d, k)$ from $E$. The induction hypothesis then means that, for every bounded, continuous function $g : [0, \infty)^{d-1} \to \mathbb{R}$, we have
\[
\lim_{t \to \infty} \operatorname{\mathbb{E}}[ g(X_{1:(d-1)}/t) \mid X_0 = t] = \operatorname{\mathbb{E}}[ g(\Theta_{0,1:(d-1)}) ],
\]
the joint distribution of $\Theta_{0,1:(d-1)} = (\Theta_{0,1}, \ldots, \Theta_{0,d-1})$ being given by \eqref{eq:Thetauv}.
Let $f : \mathbb{R}^{d} \to [0, 1]$ be a Lipschitz function. We will show that
\[
\lim_{t \to \infty}
\operatorname{\mathbb{E}} [ f( X_{1:d} / t ) \mid X_0 = t ]
=
\operatorname{\mathbb{E}} [ f( \Theta_{0,1:d}) ].
\]
Recall that $k \in \{0, 1, \ldots, d-1\}$ denotes the parent node of $d$. We need to distinguish between two cases: $k = 0$ is the root or $k \in \{1, \ldots, d-1\}$ is a non-root vertex. The case $k = 0$ is similar to but easier than the case $k \in \{1, \ldots, d-1\}$ and is left to the reader. We assume henceforth that $k \in \{1, \ldots, d-1\}$.
{}
\noindent\emph{Step 2.} ---
Let $\delta > 0$ be such that $\operatorname{\mathbb{P}}( \Theta_{0,k} = \delta ) = 0$. We have
\begin{align}
\label{eq:snoezie}
\lefteqn{
\left\lvert
\operatorname{\mathbb{E}}[ f( X_{1:d}/t ) \mid X_0 = t ]
-
\operatorname{\mathbb{E}}[ f( \Theta_{0,1:d} ) ]
\right\rvert
} \\
\label{eq:snoezie:up}
&\leqslant
\left\lvert
\operatorname{\mathbb{E}}[ \mathds{1}( X_k/t \geqslant \delta) \, f( X_{1:d}/t ) \mid X_0 = t ]
-
\operatorname{\mathbb{E}}[ \mathds{1}( \Theta_{0,k} \geqslant \delta) \, f( \Theta_{0,1:d} ) ]
\right\rvert
\\
\label{eq:snoezie:down}
&\quad\mbox{}+
\left\lvert
\operatorname{\mathbb{E}}[ \mathds{1}( X_k/t < \delta) \, f( X_{1:d}/t ) \mid X_0 = t ]
-
\operatorname{\mathbb{E}}[ \mathds{1}( \Theta_{0,k} < \delta ) \, f( \Theta_{0,1:d} ) ]
\right\rvert.
\end{align}
We will show that the expression \eqref{eq:snoezie:up} converges to zero as $t \to \infty$ (Step~3). Moreover, we will find a bound for the limit superior of \eqref{eq:snoezie:down} as $t \to \infty$. The bound will depend on $\delta$ but will converge to zero as $\delta \downarrow 0$ (Step~4). Together, these properties of \eqref{eq:snoezie:up} and \eqref{eq:snoezie:down} are sufficient to prove the theorem (Step~5).
{}
\noindent\emph{Step 3: The term \eqref{eq:snoezie:up}.} ---
The vertex $k$ is the parent of $d$ in $\mathcal{T}_0$, and therefore it separates $d$ from the other vertices. By the conditional independence property~\eqref{eq:Markov},
\begin{multline*}
\operatorname{\mathbb{E}}[ \mathds{1}( \tfrac{X_k}{t} \geqslant \delta) \, f( \tfrac{X_{1:d}}{t} ) \mid X_0 = t ] \\
=
\int_{x_{1:(d-1)}}
\mathds{1}( \tfrac{x_k}{t} \geqslant \delta) \,
\operatorname{\mathbb{E}}[ f( \tfrac{x_{1:(d-1)}}{t}, \tfrac{X_d}{t}) \mid X_k = x_k ]
\operatorname{\mathbb{P}}[ X_{1:(d-1)} \in \mathrm{d} x_{1:(d-1)} \mid X_0 = t ].
\end{multline*}
To explain our notation: the integral is over $x_{1:(d-1)} = (x_1, \ldots, x_{d-1})$ and is with respect to the conditional distribution of $X_{1:(d-1)} = (X_1, \ldots, X_{d-1})$ given that $X_0 = t$. The integrand involves the conditional expectation of a function of $X_d$ given that $X_k = x_k$.
We change variables and integrate with respect to the conditional distribution of $X_{1:(d-1)}/t$ given that $X_0 = t$: we get
\begin{equation}
\label{eq:bella:up}
\operatorname{\mathbb{E}}[ \mathds{1}( \tfrac{X_k}{t} \geqslant \delta) \, f( \tfrac{X_{1:d}}{t} ) \mid X_0 = t ]
=
\int_{y_{1:(d-1)}}
g_{t}(y_{1:(d-1)})
\operatorname{\mathbb{P}}[ \tfrac{X_{1:(d-1)}}{t} \in \mathrm{d} y_{1:(d-1)} \mid X_0 = t ],
\end{equation}
where the integrand in \eqref{eq:bella:up} is given by
\[
g_{t}(y_{1:(d-1)})
=
\mathds{1}(y_k \geqslant \delta) \operatorname{\mathbb{E}}[ f( y_{1:(d-1)}, y_k \tfrac{X_d}{t y_k}) \mid X_k = t y_k ].
\]
By Assumption~\ref{ass:MT}(i), we have $\mathcal{L}(X_d / x_k \mid X_k = x_k) \dto \mathcal{L}(M_{k,d})$ as $x_k \to \infty$. Define
\[
g( y_{1:(d-1)} )
=
\mathds{1}(y_k \geqslant \delta) \operatorname{\mathbb{E}}[ f( y_{1:(d-1)}, y_k M_{k,d} ) ].
\]
Recall that $f$ is bounded and (Lipschitz) continuous. By the extended continuous mapping theorem \citep[Theorem~18.11]{vandervaart:1998}, we have, for all vectors $y_{1:(d-1)}$ such that $y_k \ne \delta$ and for all functions $y_{1:(d-1)}(\,\cdot\,)$ such that $y_{1:(d-1)}(t) \to y_{1:(d-1)}$ as $t \to \infty$, the limit relation
\[
\lim_{t \to \infty} g_{t} \bigl( y_{1:(d-1)}(t) \bigr) = g( y_{1:(d-1)} ).
\]
Moreover, $\mathcal{L}(X_{1:(d-1)}/t \mid X_0 = t) \dto \mathcal{L}(\Theta_{1:(d-1)})$ as $t \to \infty$ by the induction hypothesis. By the same extended continuous mapping theorem, the integral \eqref{eq:bella:up} converges to
\begin{multline*}
\int_{y_{1:(d-1)}} g(y_{1:(d-1)}) \operatorname{\mathbb{P}}[ \Theta_{1:(d-1)} \in \mathrm{d} y_{1:(d-1)} ] \\
=
\int_{y_{1:(d-1)}}
\mathds{1}(y_k \geqslant \delta)
\operatorname{\mathbb{E}}[ f( y_{1:(d-1)}, y_k M_{k,d} ) ]
\operatorname{\mathbb{P}}[ \Theta_{1:(d-1)} \in \mathrm{d} y_{1:(d-1)} ].
\end{multline*}
Recall that $(M_e)_{e \in E_{u}}$ is a vector of independent random variables such that the law of $M_e$ is $\mu_e$ for $e \in E_u$. By construction, $M_{k,d}$ and $\Theta_{1:(d-1)}$ are then independent too: each component $\Theta_{0,j}$ of $\Theta_{1:(d-1)}$ is a product of random variables $M_{a,b}$ with $a, b \in \{0, \ldots, d-1\}$ and thus $e = (a, b) \neq (k, d)$. The above integral may therefore be simplified to
\[
\operatorname{\mathbb{E}}[ \mathds{1}( \Theta_k \geqslant \delta ) \, f( \Theta_{1:(d-1)}, \Theta_k M_{k,d} ) ]
= \operatorname{\mathbb{E}}[ \mathds{1}( \Theta_k \geqslant \delta ) \, f( \Theta_{1:d} ) ]
\]
since $\Theta_d = \Theta_{k}M_{k,d}$. It follows that the limit of \eqref{eq:snoezie:up} as $t \to \infty$ is equal to zero.
{}
\noindent\emph{Step 4: The term \eqref{eq:snoezie:down}.} ---
We consider two cases: $\operatorname{\mathbb{P}}( \Theta_k = 0 ) = 0$ (Step~4.a) and $\operatorname{\mathbb{P}}( \Theta_k = 0 ) > 0$ (Step~4.b).
{}
\noindent\emph{Step 4.a: The case $\operatorname{\mathbb{P}}(\Theta_k = 0) = 0$.} ---
Since $0 \leqslant f \leqslant 1$, the integral~\eqref{eq:snoezie:down} is bounded by
\[
\operatorname{\mathbb{P}}( X_k/t < \delta \mid X_0 = t ) + \operatorname{\mathbb{P}}( \Theta_k < \delta ).
\]
By the induction hypothesis, this sum converges to $2 \operatorname{\mathbb{P}}( \Theta_k < \delta )$ as $t \to \infty$. The latter probability converges to zero as $\delta \downarrow 0$, as required.
{}
\noindent\emph{Step 4.b: The case $\operatorname{\mathbb{P}}( \Theta_k = 0 ) > 0$.} ---
We decompose \eqref{eq:snoezie:down} into three terms:
\begin{align}
\nonumber
\lefteqn{
\left\lvert
\operatorname{\mathbb{E}}[ \mathds{1}( \tfrac{X_k}{t} < \delta) \, f( \tfrac{X_{1:d}}{t} ) \mid X_0 = t ]
-
\operatorname{\mathbb{E}}[ \mathds{1}( \Theta_k < \delta ) \, f( \Theta_{1:d} ) ]
\right\rvert
} \\
&\leqslant
\label{eq:snoezie:down:1}
\left\lvert
\operatorname{\mathbb{E}}[ \mathds{1}( \tfrac{X_k}{t} < \delta ) \, f( \tfrac{X_{1:d}}{t} ) \mid X_0 = t ]
-
\operatorname{\mathbb{E}}[ \mathds{1}( \tfrac{X_k}{t} < \delta ) \, f( \tfrac{X_{1:(d-1)}}{t}, 0 ) \mid X_0 = t ]
\right\rvert
\\
\label{eq:snoezie:down:2}
&\quad\mbox{}
+
\left\lvert
\operatorname{\mathbb{E}}[ \mathds{1}( \tfrac{X_k}{t} < \delta ) \, f( \tfrac{X_{1:(d-1)}}{t}, 0 ) \mid X_0 = t ]
-
\operatorname{\mathbb{E}}[ \mathds{1}( \Theta_k < \delta ) \, f( \Theta_{1:(d-1)}, 0 ) ]
\right\rvert
\\
\label{eq:snoezie:down:3}
&\quad\mbox{}
+
\left\lvert
\operatorname{\mathbb{E}}[ \mathds{1}( \Theta_k < \delta ) \, f( \Theta_{1:(d-1)}, 0 ) ]
-
\operatorname{\mathbb{E}}[ \mathds{1}( \Theta_k < \delta ) \, f( \Theta_{1:d} ) ]
\right\rvert.
\end{align}
Let $L > 0$ be such that $\abs{ f(y) - f(z) } \leqslant L \sum_{j=1}^d \abs{y_j - z_j}$ for all $y, z \in \mathbb{R}^d$. Furthermore, recall that $0 \leqslant f \leqslant 1$, so that also $\abs{ f(y) - f(z) } \leqslant 1$ for all $y, z \in \mathbb{R}^d$.
{}
\noindent\emph{Step 4.b.i: The term \eqref{eq:snoezie:down:1}.} ---
The term \eqref{eq:snoezie:down:1} is bounded by
\begin{equation}
\label{eq:snoezie:down:1:0}
\operatorname{\mathbb{E}}\left[
\left.
\mathds{1}( \tfrac{X_k}{t} < \delta ) \,
\lvert f( \tfrac{X_{1:d}}{t} ) - f( \tfrac{X_{1:(d-1)}}{t}, 0 ) \rvert
\, \right| \,
X_0 = t
\right]
\leqslant
\operatorname{\mathbb{E}}[ \mathds{1}( \tfrac{X_k}{t} < \delta ) \min(1, L \tfrac{X_d}{t}) \mid X_0 = t ].
\end{equation}
The node $k$ separates the nodes $0$ and $d$. By the global Markov property, the expectation on the right-hand side of \eqref{eq:snoezie:down:1:0} is therefore equal to
\begin{equation}
\label{eq:snoezie:down:1:aux}
\int_{[0, \delta)}
\operatorname{\mathbb{E}}[ \min(1, L \tfrac{X_d}{t}) \mid X_k = \varepsilon t ] \,
\operatorname{\mathbb{P}}[ \tfrac{X_k}{t} \in \mathrm{d} \varepsilon \mid X_0 = t ].
\end{equation}
Let $\eta \in (0, 2/L)$. The conditional expectation in the integrand in \eqref{eq:snoezie:down:1:aux} satisfies
\begin{align*}
\operatorname{\mathbb{E}}[ \min(1, L X_d/t) \mid X_k = \varepsilon t ]
&=
\operatorname{\mathbb{E}}[
\min(1, L X_d/t) \, \mathds{1}( X_d/t \leqslant \eta) \mid X_k = \varepsilon t
]\\
&\quad\mbox{}
+
\operatorname{\mathbb{E}}[
\min(1, L X_d/t) \, \mathds{1}( X_d/t > \eta) \mid X_k = \varepsilon t
]
\\
&\leqslant
L \eta + \operatorname{\mathbb{P}}( X_d/t > \eta \mid X_k = \varepsilon t ).
\end{align*}
Therefore, the integral in \eqref{eq:snoezie:down:1:aux} is bounded by
\[
L\eta + \sup_{\varepsilon \in [0, \delta)} \operatorname{\mathbb{P}}( X_d > \eta t \mid X_k = \varepsilon t ).
\]
By Assumption~\ref{ass:MT}(ii), we can first take the limit superior as $t \to \infty$ and then the limit superior as $\delta \downarrow 0$ to find that
\[
\limsup_{\delta \downarrow 0} \limsup_{t \to \infty} \operatorname{\mathbb{E}}[ \mathds{1}( X_k/t < \delta ) \min(1, LX_d/t) \mid X_0 = t ]
\leqslant L\eta.
\]
Since $\eta$ can be chosen arbitrarily close to zero, we find that the double limit superior above is equal to zero.
{}
\noindent\emph{Step 4.b.ii: The term \eqref{eq:snoezie:down:2}.} ---
By the induction hypothesis, the term \eqref{eq:snoezie:down:2} converges to zero as $t \to \infty$.
{}
\noindent\emph{Step 4.b.iii: The term \eqref{eq:snoezie:down:3}.} ---
Since $\Theta_d = \Theta_k M_{k,d}$, the term \eqref{eq:snoezie:down:3} is bounded by
\[
\operatorname{\mathbb{E}}[ \mathds{1}( \Theta_k < \delta ) \, \lvert f( \Theta_{1:(d-1)}, 0 ) - f( \Theta_{1:d} ) \rvert ]
\leqslant
\operatorname{\mathbb{E}}[ \mathds{1}( \Theta_k < \delta ) \min( 1, L \Theta_k M_{k,d} ) ].
\]
By the dominated convergence theorem, the expectation on the right-hand side converges to zero as $\delta \downarrow 0$.
{}
\noindent\emph{Step 5.} ---
The terms \eqref{eq:snoezie:up} and \eqref{eq:snoezie:down} were analyzed in Steps~3 and~4, respectively. In Step~3, it was shown that the term \eqref{eq:snoezie:up} converges to zero as $t \to \infty$, for any $\delta > 0$ such that $\operatorname{\mathbb{P}}(\Theta_{0,k} = \delta) = 0$. In Step~4, it was shown that the limit superior as $t \to \infty$ of the term in \eqref{eq:snoezie:down} is bounded by a quantity depending on $\delta$ which converges to zero as $\delta \downarrow 0$. Since the expression in \eqref{eq:snoezie} does not depend on $\delta$, its limit as $t \to \infty$ must thus be zero.
This completes the proof of the induction step and thus of the theorem.
\end{proof}
\begin{corollary}
\label{cor:MT}
In the setting of Theorem~\ref{thm:MT}, also $\mathcal{L}(X/X_u \mid X_u > t) \dto \mathcal{L}(\Theta_u)$ as $t \to \infty$.
\end{corollary}
\begin{proof}
For a bounded and continuous function $f : [0, \infty)^V \to \mathbb{R}$, we have
\begin{multline*}
\bigl\lvert \operatorname{\mathbb{E}}[ f(X/X_u) \mid X_u > t ] - \operatorname{\mathbb{E}}[ f(\Theta_u)] \bigr\rvert \\
\leqslant
\frac{1}{\operatorname{\mathbb{P}}(X_u > t)}
\int_{(t,\infty)}
\bigl\lvert
\operatorname{\mathbb{E}}[ f(X/X_u) \mid X_u = s ]
-
\operatorname{\mathbb{E}}[f(\Theta_u)]
\bigr\rvert \,
\operatorname{\mathbb{P}}(X_u \in \mathrm{d} s).
\end{multline*}
Given $\varepsilon > 0$, Theorem~\ref{thm:MT} allows us to find $t(\varepsilon)$ sufficiently large such that the absolute value inside the integral is bounded by $\varepsilon$ for all $s \geqslant t(\varepsilon)$. But then the left-hand side in the previous display is bounded by $\varepsilon$ too, for all $t \geqslant t(\varepsilon)$. Since $\varepsilon > 0$ was arbitrary, the stated convergence in distribution follows.
\end{proof}
\section{One- versus multi-component regular variation}
\label{sec:one2multi}
Let $X = (X_1, \ldots, X_d)$ be a random vector of nonnegative variables. Upon an obvious change in notation, Corollary~\ref{cor:MT} concerned weak convergence of $\mathcal{L}(X/X_i \mid X_i > t)$ as $t \to \infty$ for some $i \in \{1, \ldots, d\}$. This convergence plus regular variation of the marginal distribution of $X_i$ is a special case of what is called one-component regular variation in \citep{hitz+e:2016}. The weak limit, $\Theta_i = (\Theta_{i,j})_{j=1}^d$, depends on the choice of~$i$. There may be good reasons to consider these limits for several indices $i$. Let $I \subset \{1, \ldots, d\}$ be the set of all indices $i$ for which such a limit $\Theta_{i}$ exists. How are these random vectors $\Theta_{i}$ related?
In this section, several such one-component statements are combined into a single one which could be called multi-component regular variation. If $I = \{1,\ldots,d\}$, this is just ordinary multivariate regular variation. As discussed already in Section~\ref{subsec:rv}, the connections between the limits $\Theta_{i}$ generalize the time change formula for stationary regularly varying time series and can be deduced from their connections to a limiting tail measure.
Let $\mathbb{S} = [0, \infty)^d$ for some positive integer $d$ and let $I \subset \{1,\ldots,d\}$ be non-empty. For $x \in \mathbb{S}$, put $x_I = (x_i)_{i \in I}$. Define $\mathbb{S}_{0,I} = \{ x \in \mathbb{S} : \max(x_I) > 0 \}$. Let $\mathcal{M}_{0,I}$ denote the collection of Borel measures $\nu$ on $\mathbb{S}_{0,I}$ with the property that $\nu(B)$ is finite for every Borel set $B$ of $\mathbb{S}_{0,I}$ that is contained in a set of the form $\{x \in \mathbb{S} : \max(x_I) \geqslant \varepsilon \}$ for some $\varepsilon > 0$. Let $\mathcal{C}_{0,I}$ denote the collection of bounded, continuous functions $f : \mathbb{S}_{0,I} \to \mathbb{R}$ for which there exists $\varepsilon > 0$ such that $f(x) = 0$ as soon as $\max(x_I) \leqslant \varepsilon$. Let $\mathcal{M}_{0,I}$ be equipped with the smallest topology that makes the evaluation mappings $\nu \mapsto \nu(f) = \int f \, \mathrm{d} \nu$ continuous, where $f$ ranges over $\mathcal{C}_{0,I}$. This is the notion of $\mathcal{M}_{\mathbb{O}}$ convergence in \citep{lindskog+r+r:2014}, with, in their notation, $\mathbb{C} = \{x \in [0, \infty)^d : \forall i \in I, x_i = 0 \}$ and $\mathbb{O} = \mathbb{S} \setminus \mathbb{C} = \mathbb{S}_{0,I}$. The topology just defined is metrizable and turns $\mathcal{M}_{0,I}$ into a complete, separable metric space, with convenient characterizations of relative compactness, a Portmanteau theorem, and a mapping theorem, all very much in the spirit of the notion of vague convergence of Borel measures on locally compact second countable Hausdorff spaces. Convergence of measures with respect to this topology is denoted by the arrow $\zto$. If $I$ is a singleton, $\{i\}$ say, the notation is simplified from $\mathbb{S}_{0,\{i\}}$ to $\mathbb{S}_{0,i}$ and so on.
For $\alpha > 0$, let $\operatorname{Pa}(\alpha)$ denote the Pareto distribution on $[1, \infty)$ with shape parameter $\alpha$, that is, the distribution of a random variable $Z$ such that $\operatorname{\mathbb{P}}(Z > z) = z^{-\alpha}$ for $z \geqslant 1$. Product measure is denoted by $\otimes$.
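For readers who wish to experiment numerically with the objects of this section, $\operatorname{Pa}(\alpha)$ can be sampled by inverse transform. The following Python sketch is purely illustrative (the function names are ours, not part of the text) and checks the survival function $\operatorname{\mathbb{P}}(Z > z) = z^{-\alpha}$ empirically.

```python
import random

def sample_pareto(alpha, n, seed=0):
    """Draw n samples from Pa(alpha), P(Z > z) = z**(-alpha) for z >= 1,
    via the inverse transform Z = U**(-1/alpha) with U uniform on (0, 1]."""
    rng = random.Random(seed)
    return [(1.0 - rng.random()) ** (-1.0 / alpha) for _ in range(n)]

def empirical_survival(samples, z):
    """Empirical estimate of P(Z > z)."""
    return sum(1 for x in samples if x > z) / len(samples)

alpha = 1.0
samples = sample_pareto(alpha, 200_000)
# theoretical survival at z = 2 is 2**(-alpha) = 0.5
print(empirical_survival(samples, 2.0))
```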
\begin{theorem}
\label{thm:one2multi}
Let $X = (X_1, \ldots, X_d)$ be a random vector in $\mathbb{S} = [0, \infty)^d$ and let $I \subset \{1,\ldots,d\}$ be non-empty. Let $F_i(x) = \operatorname{\mathbb{P}}(X_i \leqslant x)$ and $\overline{F}_i = 1-F_i$ for $i \in I$ and $x \in [0, \infty)$. Assume there exists a function $b$, regularly varying at infinity with index $\alpha > 0$, such that $\lim_{t \to \infty} b(t) \overline{F}_i(t) = c_i \in (0, \infty)$ for $i \in I$. The following statements are equivalent:
\begin{compactenum}[(a)]
\item
For every $i \in I$ we have $\mathcal{L}(X/X_i \mid X_i > t) \dto \mathcal{L}(\Theta_i)$ as $t \to \infty$ for some random vector $\Theta_i = (\Theta_{i,j})_{j=1}^d$ on $\mathbb{S}$.
\item
For every $i \in I$ we have $\mathcal{L}(X_i/t, X/X_i \mid X_i > t) \dto \operatorname{Pa}(\alpha) \otimes \mathcal{L}(\Theta_i)$ as $t \to \infty$ for some random vector $\Theta_i$ on $\mathbb{S}$.
\item
For every $i \in I$ we have $\mathcal{L}(X/t \mid X_i > t) \dto \mathcal{L}(Y_i)$ as $t \to \infty$ for some random vector $Y_i = (Y_{i,j})_{j=1}^d$ on $\mathbb{S}$.
\item
For every $i \in I$ there exists $\nu_i \in \mathcal{M}_{0,i}$ such that $b(t) \operatorname{\mathbb{P}}(X/t \in \,\cdot\,) \zto \nu_i$ as $t \to \infty$ in $\mathcal{M}_{0,i}$.
\item
There exists $\nu \in \mathcal{M}_{0,I}$ such that $b(t) \operatorname{\mathbb{P}}(X/t \in \,\cdot\,) \zto \nu$ as $t \to \infty$ in $\mathcal{M}_{0,I}$.
\end{compactenum}
In that case, the limiting objects are connected in the following ways: for all $i \in I$,
\begin{compactenum}[(i)]
\item
$Y_i$ is equal in distribution to $Y_{i,i} \Theta_i$, where $\mathcal{L}(Y_{i,i}) = \operatorname{Pa}(\alpha)$ and $Y_{i,i}$ and $\Theta_i$ are independent;
\item
$\nu_i$ is equal to the restriction of $\nu$ to $\mathbb{S}_{0,i}$;
\item
$\operatorname{\mathbb{P}}(Y_i \in \,\cdot\,) = c_i^{-1} \nu(\,\cdot\, \cap \{ x : x_i > 1 \})$;
\item
for every Borel measurable $f : \mathbb{S}_{0,I} \to [0, \infty]$, we have
\begin{equation}
\label{eq:Th2nu}
\int_{\mathbb{S}_{0,I}} f(x) \, \mathds{1} \{ x_i > 0 \} \, \mathrm{d} \nu(x)
=
c_i \operatorname{\mathbb{E}}\left[
\int_{0}^{\infty}
f(z \Theta_i) \,
\alpha z^{-\alpha-1} \, \mathrm{d} z
\right].
\end{equation}
\end{compactenum}
\end{theorem}
The proof of Theorem~\ref{thm:one2multi}, together with the proofs of the other theorems in this section, is given in Appendix~\ref{app:proofs}.
To highlight the connection with the theory of one-component regular variation in \citep{hitz+e:2016}, note that the random vector $(X, \boldsymbol{Y})$ taking values in $[1, \infty) \times \mathbb{R}^{d-1}$ in \citep[Theorem~1.4]{hitz+e:2016} plays the same role as the random vector $(X_i, X/X_i)$ in Theorem~\ref{thm:one2multi} above. The equivalence between (ii) and (iv) in the cited theorem is then the same as the equivalence between (a) and (b) in Theorem~\ref{thm:one2multi}.
Further, note that in Theorem~\ref{thm:one2multi}(a), for $j \in \{1,\ldots,d\}$ such that $\Theta_{i,j}$ is not degenerate at $0$, we necessarily have $\liminf_{t \to \infty} \operatorname{\mathbb{P}}(X_j > t) / \operatorname{\mathbb{P}}(X_i > t) > 0$, i.e., the tail of $X_j$ is at least as heavy as the one of $X_i$. If also $j \in I$, we can reverse the roles of $i$ and $j$ to find that the condition that the tails of all variables $X_i$ with $i \in I$ are balanced is almost forced.
Apart from the characterizations (a)--(e) in Theorem~\ref{thm:one2multi}, other equivalent ones are possible, for instance, involving sequences rather than functions, with a scaling function inside the probability rather than outside, or with respect to radial and `angular' coordinates $(\rho(X)/t, X/\rho(X))$ for some appropriate functional $\rho$. See for instance \citep[Theorem~6.1]{resnick:2006} and \citep[Theorem~3.1]{lindskog+r+r:2014}. The tail measures $\nu$ and $\nu_i$ are homogeneous with index $-\alpha$ \citep[Theorem~3.1]{lindskog+r+r:2014} and, upon a coordinate transformation, can be written as product measures. Since the focus here is on the weak limits $\Theta_i$, these properties are not further elaborated upon. Statement~(e) in Theorem~\ref{thm:one2multi} implies that the vector $X_I = (X_i)_{i \in I}$ is multivariate regularly varying with limit measure $\nu_I(\,\cdot\,) = \nu( \{ x \in [0, \infty)^d : x_I \in \,\cdot\, \})$ on $[0, \infty)^I \setminus \{0\}$, which in turn implies, among other things, that it is in the domain of attraction of a multivariate max-stable distribution with Fr\'echet margins and exponent measure $\nu_I$; see for instance \citep{resnick:1987, resnick:2006}.
A noteworthy special case of \eqref{eq:Th2nu} is when $f$ is the indicator function of the orthant $\{ x \in \mathbb{S}_{0,I} : \forall j \in J, \, x_j > y_j \}$, where $J \subset \{1, \ldots, d\}$ has a non-empty intersection with $I$ and where $y_j > 0$ for all $j \in J$. If $i \in I \cap J$, then $f(x) \mathds{1} \{ x_i > 0 \} = f(x)$, and thus
\begin{equation}
\label{eq:min}
\nu(\{x \in \mathbb{S}_{0,I} : \forall j \in J, \, x_j > y_j \})
=
c_i \, \operatorname{\mathbb{E}}[ \min \{ y_j^{-\alpha} \Theta_{i,j}^\alpha : j \in J \}].
\end{equation}
A remarkable consequence is that the right-hand side does not depend on the choice of $i \in I \cap J$. This invariance property is a special case of a more general mutual consistency property of the limit distributions $\mathcal{L}(\Theta_i)$ for $i \in I$ that is formulated in Corollary~\ref{cor:modcons} below.
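To make this invariance concrete, the following Python sketch evaluates the right-hand side of \eqref{eq:min} for the discrete spectral law \eqref{eq:maxlinear:Theta} of the max-linear model of Example~\ref{ex:maxlinear} below. The coefficient matrix, the index set $J$, and the thresholds $y_j$ are illustrative choices, not part of the theory; for $i \in I \cap J$ the computed value indeed does not depend on $i$.

```python
def nu_orthant(a, alpha, i, J, y):
    """Right-hand side of the orthant formula:
    c_i * E[min_{j in J} y[j]**(-alpha) * Theta_{i,j}**alpha],
    where Theta_i has the discrete law putting mass a[i][r]**alpha / c_i on the
    atom (a[j][r] / a[i][r])_j; the factor c_i cancels against the atom weights."""
    total = 0.0
    for r in range(len(a[0])):
        if a[i][r] == 0.0:
            continue  # this atom carries zero weight
        w = a[i][r] ** alpha
        total += w * min((a[j][r] / a[i][r]) ** alpha / y[j] ** alpha for j in J)
    return total

# illustrative coefficients a[i][r]: d = 3 components, s = 2 factors
a = [[1.0, 2.0], [0.5, 1.0], [2.0, 0.0]]
alpha = 2.0
J, y = [0, 1], [1.5, 0.7, 1.0]
# for i in I cap J = {0, 1}, the two values must coincide
print(nu_orthant(a, alpha, 0, J, y), nu_orthant(a, alpha, 1, J, y))
```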
\begin{corollary}[Model consistency]
\label{cor:modcons}
If the equivalent conditions of Theorem~\ref{thm:one2multi} are fulfilled, then, for Borel measurable $f : \mathbb{S}_{0,I} \to [0, \infty]$ and for $i, j \in I$, we have
\begin{equation}
\label{eq:modcons}
c_j \operatorname{\mathbb{E}}\left[ \mathds{1} \{\Theta_{j,i} > 0 \}\int_0^\infty f(z \Theta_j) \, \alpha z^{-\alpha-1} \, \mathrm{d} z \right]
=
c_i \operatorname{\mathbb{E}}\left[ \mathds{1} \{\Theta_{i,j} > 0 \} \int_0^\infty f(z \Theta_i) \, \alpha z^{-\alpha-1} \, \mathrm{d} z \right].
\end{equation}
\end{corollary}
\begin{proof}
By \eqref{eq:Th2nu}, both sides in \eqref{eq:modcons} are equal to $\int_{\mathbb{S}_{0,I}} f(x) \, \mathds{1} \{ x_i > 0, x_j > 0 \} \, \mathrm{d} \nu(x)$.
\end{proof}
\begin{corollary}[Root-change formula]
\label{cor:rcf}
If the equivalent conditions of Theorem~\ref{thm:one2multi} are fulfilled, then, for all Borel measurable $g : \mathbb{S}_{0,I} \to [0, \infty)$ and for all $i,j \in I$, we have
\begin{equation}
\label{eq:rcf1}
c_{j} \, \operatorname{\mathbb{E}}[g(\Theta_j) \, \mathds{1} \{ \Theta_{j,i} > 0 \}]
=
c_{i} \, \operatorname{\mathbb{E}}[g(\Theta_{i} / \Theta_{i,j}) \, \Theta_{i,j}^{\alpha}].
\end{equation}
In particular, $\operatorname{\mathbb{P}}[\Theta_{j,i} > 0] = 1$ if and only if $\operatorname{\mathbb{E}}[\Theta_{i,j}^\alpha] = c_j/c_i$, and then, for all $g$ as above,
\begin{equation}
\label{eq:rcf2}
\operatorname{\mathbb{E}}[g(\Theta_j)]
=
\operatorname{\mathbb{E}}[g(\Theta_i / \Theta_{i,j}) \, \Theta_{i,j}^\alpha] / \operatorname{\mathbb{E}}[\Theta_{i,j}^\alpha].
\end{equation}
\end{corollary}
\begin{proof}
In Corollary~\ref{cor:modcons}, take $f(x) = g(x/x_j) \, \mathds{1} \{ x_j > 1 \}$. As $\Theta_{j,j} = 1$, the left-hand side of \eqref{eq:modcons} is $c_j \operatorname{\mathbb{E}}[\mathds{1}\{\Theta_{j,i} > 0 \} g(\Theta_{j})]$. The right-hand side of \eqref{eq:modcons} becomes $c_i \operatorname{\mathbb{E}}[ \mathds{1} \{ \Theta_{i,j} > 0 \} g(\Theta_i/\Theta_{i,j}) \Theta_{i,j}^\alpha]$, in which the indicator function is redundant.
The special case $g \equiv 1$ in \eqref{eq:rcf1} yields $c_j \operatorname{\mathbb{P}}(\Theta_{j,i} > 0) = c_i \, \operatorname{\mathbb{E}}[\Theta_{i,j}^\alpha]$. If $\operatorname{\mathbb{E}}[\Theta_{i,j}^\alpha] = c_j/c_i$, we have $\operatorname{\mathbb{P}}(\Theta_{j,i}>0)=1$, and the indicator function on the left-hand side of \eqref{eq:rcf1} can be omitted.
\end{proof}
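The root-change formula \eqref{eq:rcf1} can be verified exactly for a discrete spectral law. The sketch below uses the max-linear law \eqref{eq:maxlinear:Theta} of Example~\ref{ex:maxlinear} as a hypothetical test case; the coefficient matrix and the bounded test function $g$ are arbitrary illustrative choices.

```python
def root_change_lhs(a, alpha, j, i, g):
    """c_j * E[g(Theta_j) * 1{Theta_{j,i} > 0}] for the discrete max-linear law:
    Theta_j puts mass a[j][r]**alpha / c_j on the atom (a[m][r] / a[j][r])_m."""
    d = len(a)
    total = 0.0
    for r in range(len(a[0])):
        if a[j][r] == 0.0 or a[i][r] == 0.0:
            continue  # zero atom weight, or Theta_{j,i} = 0
        atom = [a[m][r] / a[j][r] for m in range(d)]
        total += a[j][r] ** alpha * g(atom)
    return total

def root_change_rhs(a, alpha, j, i, g):
    """c_i * E[g(Theta_i / Theta_{i,j}) * Theta_{i,j}**alpha]."""
    d = len(a)
    total = 0.0
    for r in range(len(a[0])):
        if a[i][r] == 0.0 or a[j][r] == 0.0:
            continue  # zero weight, or the factor Theta_{i,j}**alpha vanishes
        theta_ij = a[j][r] / a[i][r]
        atom = [(a[m][r] / a[i][r]) / theta_ij for m in range(d)]
        total += a[i][r] ** alpha * g(atom) * theta_ij ** alpha
    return total

a = [[1.0, 2.0, 0.0], [0.5, 1.0, 3.0], [2.0, 0.0, 1.0]]  # illustrative a[i][r]
alpha = 1.5
g = lambda x: 1.0 / (1.0 + sum(x))  # an arbitrary bounded test function
print(root_change_lhs(a, alpha, 1, 0, g), root_change_rhs(a, alpha, 1, 0, g))
```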
If the limit measure $\nu$ does not assign any mass to the coordinate hyperplane $\{x : x_i = 0\}$, the indicator in \eqref{eq:Th2nu} is redundant and $\nu$ can be expressed entirely in terms of $\mathcal{L}(\Theta_i)$. Moreover, whether this occurs or not can be read off from the $\alpha$-th moments of the components of $\Theta_i$.
\begin{corollary}
\label{cor:justk}
In Theorem~\ref{thm:one2multi}, we have, for $i \in I$,
\begin{equation}
\label{eq:nuxi0}
\nu(\{x \in \mathbb{S}_{0,I} : x_i = 0 \})
=
\begin{cases}
0 & \text{if $\operatorname{\mathbb{E}}[\Theta_{i,j}^\alpha] = c_j/c_i$ for all $j \in I$,} \\
\infty & \text{otherwise}.
\end{cases}
\end{equation}
If $\nu(\{x : x_i = 0\}) = 0$ for some $i \in I$, then, for all Borel measurable $f : \mathbb{S}_{0,I} \to [0, \infty]$,
\begin{equation}
\label{eq:Thk2nu}
\nu(f) = c_i \operatorname{\mathbb{E}}\left[ \int_0^\infty f(z \Theta_i) \, \alpha z^{-\alpha-1} \, \mathrm{d} z \right].
\end{equation}
Moreover, all tail trees $\Theta_{j}$ for $j \in I$ are determined by $\Theta_{i}$ via \eqref{eq:rcf2}.
\end{corollary}
\begin{proof}
Choose $i, j \in I$. In \eqref{eq:Th2nu}, let $f$ be the indicator function of the set $\{ x : x_i = 0 \}$ and let the index $i$ in \eqref{eq:Th2nu} be equal to the index $j$ chosen here.
It follows that $\nu(\{x \in \mathbb{S}_{0,I} : x_i = 0, x_j > 0 \})$ is zero if $\operatorname{\mathbb{P}}(\Theta_{j,i} = 0) = 0$ and infinity otherwise.
By \eqref{eq:rcf1} with $g \equiv 1$, we have $\operatorname{\mathbb{P}}(\Theta_{j,i} = 0) = 0$ if and only if $\operatorname{\mathbb{P}}(\Theta_{j,i} > 0) = 1$ if and only if $\operatorname{\mathbb{E}}[\Theta_{i,j}^\alpha] = c_j / c_i$. Equation~\eqref{eq:nuxi0} follows. If $\nu(\{x : x_i = 0\}) = 0$, we can omit the indicator $\mathds{1} \{ x_i > 0 \}$ on the left-hand side of \eqref{eq:Th2nu}, yielding~\eqref{eq:Thk2nu}.
\end{proof}
Let $f$ be the indicator function of the set $\{ x : \exists j \in J, \, x_j > y_j \}$, where $J \subset \{1,\ldots,d\}$ is non-empty and where $y_j > 0$ for all $j \in J$. If $\nu(\{x : x_i = 0\}) = 0$, then, by \eqref{eq:Thk2nu},
\begin{equation}
\label{eq:max}
\nu(\{ x \in \mathbb{S}_{0,I} : \exists j \in J, \, x_j > y_j \})
=
c_{i} \, \operatorname{\mathbb{E}}[ \max\{y_j^{-\alpha} \Theta_{i,j}^\alpha : j \in J\}].
\end{equation}
In contrast to equation~\eqref{eq:min}, equation~\eqref{eq:max} is true only when $\nu(\{x : x_i = 0\}) = 0$, a prerequisite for which Corollary~\ref{cor:justk} gives a necessary and sufficient condition.
By Corollary~\ref{cor:justk}, the case where $\operatorname{\mathbb{E}}[\Theta_{i,j}^\alpha] = c_{j}/c_{i}$ for all $j \in I$ leads to considerable simplifications. In fact, the special case $K = \{i\}$ in the next theorem implies that even the weak convergence of $\mathcal{L}(X/X_j \mid X_j>t)$ for $j \in I$ can then be deduced from the weak convergence of $\mathcal{L}(X/X_i \mid X_i>t)$ alone.
\begin{theorem}
\label{thm:justK}
In Theorem~\ref{thm:one2multi}, a sufficient condition for (a)--(e) to hold is that there exists a non-empty set $K \subset I$ with the following two properties:
\begin{compactenum}[(i)]
\item
For every $i \in K$ we have $\mathcal{L}(X/X_i \mid X_i > t) \dto \mathcal{L}(\Theta_i)$ as $t \to \infty$ for some random vector $\Theta_i$ on $\mathbb{S}$.
\item
For every $j \in I \setminus K$, there exists $i = i(j) \in K$ such that $\operatorname{\mathbb{E}}[ \Theta_{i,j}^\alpha ] = c_j / c_i$.
\end{compactenum}
In that case, also $\mathcal{L}(X/X_j \mid X_j > t) \dto \mathcal{L}(\Theta_j)$ as $t \to \infty$ for $j \in I \setminus K$, where the law of $\Theta_j$ is given in terms of the one of $\Theta_{i}$ with $i = i(j)$ via \eqref{eq:rcf2}.
\end{theorem}
The focus so far has been on weak limits of conditional distributions involving a high-threshold exceedance by a specific component. In the spirit of the multivariate peaks-over-thresholds methodology \citep{engelke+h:2018, kiriliouk+r+s+w:2018}, the following result covers, among other possibilities, the case where the conditioning event involves a high-threshold exceedance in at least one of a number of components.
\begin{theorem}
\label{thm:MPD:rho}
Suppose the conditions of Theorem~\ref{thm:one2multi} are fulfilled. Let $\rho : \mathbb{S} \to [0, \infty)$ be continuous and homogeneous of order one, that is, $\rho(\lambda x) = \lambda \rho(x)$ for $\lambda \in [0, \infty)$ and $x \in \mathbb{S}$. If $\mathbb{S}_{\rho} :=\{ x \in \mathbb{S}_{0,I} : \rho(x) > 1 \}$ is contained in a set of the form $\{x : \max(x_I) > \varepsilon \}$ for some $\varepsilon > 0$ and if $\nu(\mathbb{S}_{\rho}) > 0$, then $b(t) \operatorname{\mathbb{P}}[\rho(X) > t] \to \nu(\mathbb{S}_{\rho}) \in (0, \infty)$ as $t \to \infty$ and $\mathcal{L}(X/t \mid \rho(X) > t) \dto \nu(\,\cdot\, \cap \mathbb{S}_{\rho})/\nu(\mathbb{S}_{\rho})$ as $t \to \infty$. The conclusions of Theorem~\ref{thm:one2multi} thus apply to the random vector $(X, \rho(X))$ in the space $[0, \infty)^{d+1}$ and relative to the index set $I \cup \{d+1\}$.
\end{theorem}
Examples of $\rho$ in Theorem~\ref{thm:MPD:rho} are $\rho(x) = \max_{i \in I} a_i x_i$ and $\rho(x) = \sum_{i \in I} a_i x_i$, where $a \in [0, \infty)^I \setminus \{0\}$. The special case $I = \{1, \ldots, d\}$ and $\rho(x) = \max(x_1,\ldots,x_d)$ produces multivariate Pareto distributions as in \citep[Section~6.3]{resnick:2006} and other references mentioned in Section~\ref{subsec:rv}. Also covered by Theorem~\ref{thm:MPD:rho} is $\rho(x) = \min_{i \in J} a_i x_i$ for non-empty $J \subset I$ and $a_j \in (0, \infty)$ for all $j \in J$ provided $\operatorname{\mathbb{E}}[ \min \{ a_j^\alpha \Theta_{i,j}^\alpha : j \in J \}] > 0$ for some (and hence all) $i \in I$. If, however, $\nu(\mathbb{S}_{\rho}) = 0$, then $\operatorname{\mathbb{P}}[\rho(X) > t]$ decays more rapidly than $b(t)$, and more refined models are needed, opening up a whole new world of possibilities.
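For $\rho(x) = \max(x_1, \ldots, x_d)$ and the max-linear model of Example~\ref{ex:maxlinear} with $\operatorname{Pa}(\alpha)$ factors, both $b(t)\operatorname{\mathbb{P}}(\rho(X) > t)$ and $\nu(\mathbb{S}_\rho) = c_i \operatorname{\mathbb{E}}[\max_j \Theta_{i,j}^\alpha]$, a case of \eqref{eq:max}, can be computed explicitly. The following sketch compares the two; the coefficient matrix is an illustrative choice with row $0$ strictly positive, so that $\nu(\{x : x_0 = 0\}) = 0$ and \eqref{eq:max} applies with $i = 0$.

```python
def theta_max_moment(a, alpha, i):
    """c_i * E[max_j Theta_{i,j}**alpha] for the discrete law of Theta_i
    (atoms (a[j][r] / a[i][r])_j with weight a[i][r]**alpha / c_i)."""
    d, s = len(a), len(a[0])
    return sum(
        a[i][r] ** alpha * max((a[j][r] / a[i][r]) ** alpha for j in range(d))
        for r in range(s) if a[i][r] > 0.0
    )

def surv_max(a, alpha, t):
    """Exact P(max_j X_j > t) for the max-linear model with Pa(alpha) factors:
    max_j X_j = max_r (max_j a[j][r]) * Z_r with independent Z_r."""
    p = 1.0
    for r in range(len(a[0])):
        m = max(a[j][r] for j in range(len(a)))
        p *= 1.0 - (m / t) ** alpha
    return 1.0 - p

a = [[1.0, 2.0], [0.5, 3.0], [2.0, 1.0]]  # row 0 is strictly positive
alpha = 1.0
t = 1e6
# b(t) = t**alpha here, since P(Z > t) = t**(-alpha)
print(t ** alpha * surv_max(a, alpha, t), theta_max_moment(a, alpha, 0))
```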
\begin{example}
\label{ex:maxlinear}
Let $X = (X_1, \ldots, X_d)$ follow the max-linear model
\begin{equation}
\label{eq:maxlinear}
\forall i \in \{1,\ldots,d\}, \qquad
X_i = \max_{r=1,\ldots,s} a_{i,r} Z_r,
\end{equation}
where $a_{i,r} \in [0, \infty)$ are scalars such that $\max_r a_{i,r} > 0$ for all $i$ and where $Z_1, \ldots, Z_s$ are independent and identically distributed nonnegative random variables whose common distribution function $F$ has a regularly varying tail function $\overline{F}=1-F$ with index $-\alpha < 0$. The marginal tails satisfy $\overline{F}_{i}(t) / \overline{F}(t) \to \sum_r a_{i,r}^\alpha =: c_i$ as $t \to \infty$. If $X_i$ exceeds a large threshold $t$, the probability that this was due to the factor $Z_r$ is asymptotically proportional to $a_{i,r}^\alpha$, and on that event the other factors $Z_{\bar{r}}$ with $\bar{r} \ne r$ are of smaller order than $Z_{r}$. It follows that (a) in Theorem~\ref{thm:one2multi} holds, where the law of $\Theta_{i}$ is discrete with at most $s$ atoms and is given by
\begin{equation}
\label{eq:maxlinear:Theta}
\mathcal{L}(\Theta_{i}) = \frac{1}{c_{i}} \sum_{r=1}^{s} a_{i,r}^\alpha \, \varepsilon_{(a_{j,r}/a_{i,r})_{j=1}^d},
\end{equation}
with $\varepsilon_x(\,\cdot\,)$ denoting a unit point mass at $x$. From \eqref{eq:maxlinear:Theta}, we find
\begin{equation*}
\operatorname{\mathbb{E}}[ \Theta_{i,j}^\alpha ]
= \frac{1}{c_{i}} \sum_{r=1}^s a_{j,r}^\alpha \mathds{1} \{ a_{i,r} > 0 \}.
\end{equation*}
It follows that $\operatorname{\mathbb{E}}[\Theta_{i,j}^\alpha] = c_j/c_i$ as soon as $a_{i,r} > 0$ for every $r$ such that $a_{j,r} > 0$. Indeed, in that case, if $X_{j}$ is large, then some variable $Z_r$ with $r = 1, \ldots, s$ such that $a_{j,r}$ is positive was large, which in turn implies that $X_{i}$ is large as well, so that $\operatorname{\mathbb{P}}(\Theta_{j,i} > 0) = 1$.
\end{example}
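The tail constant $c_i$ of the example can be checked numerically in the special case that the factors $Z_r$ are $\operatorname{Pa}(\alpha)$ distributed, since $\operatorname{\mathbb{P}}(X_i > t)$ is then available in closed form. A sketch with illustrative coefficients:

```python
def surv_Xi(a_i, alpha, t):
    """Exact P(X_i > t) for X_i = max_r a_i[r] * Z_r with independent Pa(alpha)
    factors (P(Z_r > z) = z**(-alpha) for z >= 1); valid for t >= max(a_i)."""
    p = 1.0
    for c in a_i:
        if c > 0.0:
            p *= 1.0 - (c / t) ** alpha
    return 1.0 - p

alpha = 1.3
a_i = [1.0, 2.0, 0.0, 0.5]                      # illustrative row of coefficients
c_i = sum(c ** alpha for c in a_i if c > 0.0)   # c_i = sum_r a_{i,r}**alpha
t = 1e6
ratio = surv_Xi(a_i, alpha, t) / t ** (-alpha)  # approximates c_i for large t
print(ratio, c_i)
```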
\begin{example}
Recursive max-linear models on directed acyclic graphs were introduced in \citep{gissibl+k:2018}. Borrowing some of their notation, consider a directed acyclic graph $\mathcal{D} = (V, E)$ with nodes $V = \{1, \ldots, d\}$ and edges $E = \{(k,i) : i \in V, k \in \operatorname{pa}(i)\}$, where $\operatorname{pa}(i) \subset V$ denotes the possibly empty set of parents of $i$. Consider a random vector $X = (X_1, \ldots, X_d)$ given by the structural equation model
\begin{equation}
\label{eq:SEM}
\forall i \in \{1, \ldots, d\}, \qquad
X_i = \max \left[ \max \{\gamma_{ki} X_k : k \in \operatorname{pa}(i)\}, \gamma_{ii} Z_i \right],
\end{equation}
where the random variables $Z_1, \ldots, Z_d$ are as in Example~\ref{ex:maxlinear} with $s = d$ and where all coefficients $\gamma_{ki}$ and $\gamma_{ii}$ are (strictly) positive; the maximum over the empty set is zero by convention. Then by \citep[Theorem~2.2]{gissibl+k+o:2018}, the random vector $X$ admits the max-linear representation
\[
\forall i \in \{1, \ldots, d\}, \qquad
X_i = \max_{j=1,\ldots,d} b_{ji} Z_j,
\]
with coefficients $b_{ji}$, for $i,j \in \{1,\ldots,d\}$, defined as follows: $b_{ii} = \gamma_{ii}$ and $b_{ji} = 0$ if $j \in V \setminus (\operatorname{an}(i) \cup \{i\})$, while
\begin{equation}
\label{eq:DAG:maxlinear}
\forall i \in \{1,\ldots,d\}, \forall j \in \operatorname{an}(i), \qquad
b_{ji} = \max_{p \in P_{ji}} \left\{\gamma_{jj} \prod_{e \in p} \gamma_e\right\},
\end{equation}
where $P_{ji}$ is the collection of paths $p = \{e_1, \ldots, e_n\}$ from $j$ to $i$ in $\mathcal{D}$; recall the definition of a path in the beginning of Section~\ref{sec:tailtree}. The representation in \eqref{eq:DAG:maxlinear} is of the form \eqref{eq:maxlinear} with $s = d$ and with $a_{i,r} = b_{ri}$ for $i,r \in \{1, \ldots, d\}$. It follows that $a_{i,r} = 0$ unless $r = i$ or $r \in \operatorname{an}(i)$.
The condition that $a_{i,r} > 0$ whenever $a_{j,r} > 0$ is satisfied as soon as $j \in \operatorname{an}(i)$: indeed, in that case, we have $\operatorname{an}(j) \cup \{j\} \subset \operatorname{an}(i)$, so that $b_{rj} > 0$ implies $r \in \operatorname{an}(j) \cup \{j\}$ and thus $r \in \operatorname{an}(i)$, implying $b_{ri} > 0$. Through the structural equation model~\eqref{eq:SEM}, a large value appearing at a node~$j$ will also be felt at any of its descendants~$i$. We get $\operatorname{\mathbb{P}}(\Theta_{j,i} > 0) = 1$ for $j \in \operatorname{an}(i)$, which, by Corollary~\ref{cor:rcf}, means that the root-change formula~\eqref{eq:rcf2} applies, by which the law of $\Theta_j$ can be recovered from the one of $\Theta_i$. Moreover, Theorem~\ref{thm:justK} applies with $K$ equal to the set of leaf nodes, i.e., the nodes without descendants.
In the special case that the directed acyclic graph is also a directed, rooted tree, every node~$i$ either has exactly one parent or is the root node, say $u$. In that case, the collection of paths $P_{ji}$ between $j \in \operatorname{an}(i)$ and $i \in \{1,\ldots,d\} \setminus \{u\}$ is a singleton, $p = \path{j}{i}$, and the formula for $b_{ji}$ in~\eqref{eq:DAG:maxlinear} simplifies to $b_{ji} = \gamma_{jj} \prod_{e \in \path{j}{i}} \gamma_e$. Furthermore, the tail tree $\Theta_u$ in~\eqref{eq:maxlinear:Theta} starting from the root node $u$ simplifies to the degenerate distribution at the point $\theta_u = (\theta_{u,1}, \ldots, \theta_{u,d})$ with coordinates $\theta_{u,j} = \prod_{e \in \path{u}{j}} \gamma_e \in (0, \infty)$ for $j \in \{1,\ldots,d\}$. This is of the form in~\eqref{eq:Thetauv} with degenerate increments $M_e = \gamma_e$ for all $e \in E$.
\end{example}
\section{Regularly varying Markov trees}
\label{sec:rvmt}
As in Section~\ref{sec:tailtree}, let $(X, \mathcal{T})$ be a nonnegative Markov tree on the undirected tree $\mathcal{T} = (V, E)$. The general theory in Section~\ref{sec:one2multi} sheds light on the relation between two tail trees emanating at different roots.
For two different nodes $u$ and $\bar{u}$ in $V$, the sets of directed edges $E_{u}$ and $E_{\bar{u}}$ are the same except for the edges connecting nodes on the path between $u$ and $\bar{u}$, which are directed in opposite ways in the two edge sets: for every $(a, b) \in \path{u}{\bar{u}} = E_u \setminus E_{\bar{u}}$, we have $(b, a) \in \path{\bar{u}}{u} = E_{\bar{u}} \setminus E_{u}$, and the other way around.
Condition~\ref{ass:MT} was formulated relative to a single root $u \in V$. The next condition covers all nodes $u \in U$ in a non-empty subset $U$ of $V$ as possible roots. For such $U$, let $E_U = \bigcup_{u \in U} E_u$ denote the set of directed edges that appear in at least one of the directed trees $E_u$.
\begin{condition}
\label{ass:MT:all}
There exists a non-empty $U \subset V$ with the following two properties:
\begin{compactenum}[(i)]
\item
For every $e = (a, b) \in E_U$, there exists a version of the conditional distribution of $X_b$ given $X_a$ and a probability measure $\mu_e$ on $[0, \infty)$ such that \eqref{eq:kernel:limit} holds.
\item
For every edge $e = (a, b) \in E_U$ for which there exist $u \in U$ with $e \in E_u$ and an edge $\bar{e} \in \path{u}{a}$ with $\mu_{\bar{e}}(\{0\}) > 0$, we have \eqref{eq:kernel:control}.
\end{compactenum}
\end{condition}
If $u, v \in U$ in Condition~\ref{ass:MT:all}, then every node $w \in V$ that is on the path between $u$ and $v$ can be added to $U$ and Condition~\ref{ass:MT:all} remains true. Indeed, for such $u,v,w$, we have $E_w \subset E_u \cup E_v$, which takes care of (i), and $\path{w}{a} \subset \path{u}{a} \cup \path{v}{a}$ for every node $a \in V$, which takes care of (ii). The author is grateful to an anonymous reviewer for having pointed this out.
\begin{corollary}
\label{thm:rvmt}
Let $(X, \mathcal{T})$ be a nonnegative Markov tree on the undirected tree $\mathcal{T} = (V, E)$. Let $U \subset V$ be non-empty. Let $F_u(x) = \operatorname{\mathbb{P}}(X_u \leqslant x)$ and $\overline{F}_u = 1-F_u$ for all $u \in U$. Assume that there exists a positive function $b$, regularly varying at infinity with index $\alpha > 0$, such that $b(t) \overline{F}_u(t) \to c_u \in (0, \infty)$ as $t \to \infty$ for every $u \in U$. Assume that Condition~\ref{ass:MT:all} holds. Let $(M_e)_{e \in E_U}$ be a vector of independent random variables such that $M_e$ has law $\mu_e$ for each $e \in E_U$. Then all conclusions of Theorem~\ref{thm:one2multi} hold with $I = U$ and with $\Theta_{u}$ the tail tree in \eqref{eq:Thetauv} for $u \in U$.
\end{corollary}
\begin{proof}
Condition~\ref{ass:MT:all} and Corollary~\ref{cor:MT} imply that assumption~(a) in Theorem~\ref{thm:one2multi} is satisfied for $I = U$ and with $\Theta_u$ the tail tree in \eqref{eq:Thetauv}, for every $u \in U$. All equivalence relations and other properties are then as stated in Theorem~\ref{thm:one2multi}.
\end{proof}
\begin{corollary}
In Corollary~\ref{thm:rvmt}, if $a, b \in V$ are neighbours in $E$ and if they both belong to $U$, then the distributions of $M_{a,b}$ and $M_{b,a}$ mutually determine each other by
\begin{equation}
\label{eq:muba}
c_b \operatorname{\mathbb{E}}[ g(M_{b,a}) \, \mathds{1}\{ M_{b,a} > 0 \}]
=
c_a \operatorname{\mathbb{E}}[ g(1/M_{a,b}) \, M_{a,b}^\alpha]
\end{equation}
for all Borel measurable $g : (0, \infty) \to [0, \infty]$.
\end{corollary}
\begin{proof}
To find \eqref{eq:muba}, apply \eqref{eq:rcf1} to the case $d = 2$ and the random vector $(X_a, X_b)$. The two limit random vectors $\Theta_u$ in Theorem~\ref{thm:one2multi}(a) are $(1, M_{a,b})$ and $(M_{b,a}, 1)$ when conditioning on $X_a > t$ and on $X_b > t$, respectively. Equation~\eqref{eq:muba} implies $\operatorname{\mathbb{P}}(M_{b,a} > z) = (c_a/c_b) \operatorname{\mathbb{E}}[ \mathds{1}\{z M_{a,b} < 1\} \, M_{a,b}^\alpha]$ for all $z \in [0, \infty)$, so that the distribution of $M_{b,a}$ can be recovered from the one of $M_{a,b}$.
\end{proof}
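As a simple illustration of \eqref{eq:muba}, with made-up quantities, suppose that $M_{a,b} = m$ almost surely for some constant $m \in (0, \infty)$ with $c_a m^\alpha \leqslant c_b$. Applying \eqref{eq:muba} to indicator functions $g$ shows that $M_{b,a}$ has the two-point distribution
\[
\operatorname{\mathbb{P}}(M_{b,a} = 1/m) = \frac{c_a}{c_b} \, m^\alpha,
\qquad
\operatorname{\mathbb{P}}(M_{b,a} = 0) = 1 - \frac{c_a}{c_b} \, m^\alpha:
\]
the backward increment inverts the forward one, but only with a probability determined by $m$ and the ratio of the tail constants.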
For different roots $u, \bar{u} \in U$, the tail trees $\Theta_{u}$ and $\Theta_{\bar{u}}$ have the same multiplicative structure. The differences between their distributions lie in the starting nodes of the paths and in the distributions of the multiplicative increments for edges on the paths $\path{u}{\bar{u}}$ and $\path{\bar{u}}{u}$, since these edges change direction. For such edges of which the nodes belong to $U$ as well, the increment distributions are related by \eqref{eq:muba}. See Figure~\ref{fig:change} for an illustration.
\begin{figure}
\caption{\label{fig:change}Illustration of two tail trees rooted at different nodes: the increments for edges on the path between the two roots change direction, and their distributions are related by \eqref{eq:muba}.}
\end{figure}
For $u, \bar{u} \in U$, the equality $\operatorname{\mathbb{E}}[\Theta_{u,\bar{u}}^\alpha] = c_{\bar{u}}/c_{u}$ has interesting ramifications, see Corollaries~\ref{cor:rcf} and~\ref{cor:justk} and Theorem~\ref{thm:justK}. If all nodes on the path between $u$ and $\bar{u}$ belong to $U$ as well, then, since
\begin{equation}
\label{eq:Ma2Tha}
\operatorname{\mathbb{E}}[ \Theta_{u,\bar{u}}^\alpha ]
= \prod_{e \in \path{u}{\bar{u}}} \operatorname{\mathbb{E}}[M_{e}^\alpha],
\end{equation}
we have $\operatorname{\mathbb{E}}[\Theta_{u,\bar{u}}^\alpha] = c_{\bar{u}}/c_{u}$ as soon as $\operatorname{\mathbb{E}}[M_{a,b}^\alpha] = c_{b}/c_{a}$ for every $e = (a, b) \in \path{u}{\bar{u}}$.
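Spelled out for a path of length two, say $u \to w \to \bar{u}$ with intermediate node $w$: if $\operatorname{\mathbb{E}}[M_{u,w}^\alpha] = c_w/c_u$ and $\operatorname{\mathbb{E}}[M_{w,\bar{u}}^\alpha] = c_{\bar{u}}/c_w$, then \eqref{eq:Ma2Tha} telescopes,
\[
\operatorname{\mathbb{E}}[\Theta_{u,\bar{u}}^\alpha]
= \operatorname{\mathbb{E}}[M_{u,w}^\alpha] \, \operatorname{\mathbb{E}}[M_{w,\bar{u}}^\alpha]
= \frac{c_w}{c_u} \cdot \frac{c_{\bar{u}}}{c_w}
= \frac{c_{\bar{u}}}{c_u}.
\]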
Given the tree structure, the distribution of a Markov tree $X$ on $\mathcal{T} = (V, E)$ is entirely determined by the bivariate distributions $(X_{a}, X_{b})$ for $e = (a, b) \in E$. Markov chains of which all pairs $(X_i, X_{i+1})$ are max-stable were proposed in \citep[Section~4.6]{coles+t:1991} and \citep{smith+t+c:1997}. When extended to trees, this construction method provides models meeting Condition~\ref{ass:MT:all}.
\begin{example}
\label{ex:EVC:1}
Let the distribution of the random pair $(X, Y)$ on $(0, \infty)^2$ be bivariate max-stable with cumulative distribution function
\begin{equation*}
F(x, y) = \exp \{ - (x^{-1}+y^{-1}) \, A(x/(x+y)) \}, \qquad
(x, y) \in (0, \infty)^2,
\end{equation*}
where $A : [0, 1] \to [1/2, 1]$ is a Pickands dependence function, that is, a convex function such that $\max(w, 1-w) \leqslant A(w) \leqslant 1$ for all $w \in [0, 1]$; see \citep{gudendorf_extreme-value_2010} and the references therein. Both marginal distributions are unit-Fr\'echet, $F(z,\infty)=F(\infty,z)=\exp(-1/z)$ for $z \in (0, \infty)$. In particular, the marginal tail functions are regularly varying at infinity with index $-\alpha=-1$.
Let $A'$ be the left-hand derivative of $A$, which exists everywhere on $(0, 1]$, takes values between $-1$ and $1$, and is non-decreasing and continuous from the left; define $A'(0)$ as the right-hand limit. Since $A$ is convex, it is absolutely continuous, and the set of points in $(0, 1)$ where it is not continuously differentiable is at most countable. For $x, y \in (0, \infty)$ such that $A$ is differentiable at $w = x/(x+y)$, we have
\[
\operatorname{\mathbb{P}}(Y \leqslant y \mid X = x)
= \frac{\partial F(x, y) / \partial x}{\partial F(x, \infty) / \partial x}
= \exp \{ - x^{-1} ((1 - w)^{-1} A(w) - 1)\} \, \{ A(w) - w \, A'(w) \}.
\]
It follows that $\mathcal{L}(Y/x \mid X = x) \dto M$ as $x \to \infty$, where
\begin{equation}
\label{eq:A2M}
\operatorname{\mathbb{P}}(M \leqslant z) = A(w) - w \, A'(w), \qquad z \in [0, \infty), \ w = 1 / (1+z).
\end{equation}
This is part~(i) of Condition~\ref{ass:MT}. Further, equation~\eqref{eq:kernel:control} in part~(ii) of Condition~\ref{ass:MT} follows from the monotone regression dependence property of bivariate max-stable distributions established in \citep{gg:2000}, by which the supremum over $\varepsilon$ in \eqref{eq:kernel:control} is attained at $\varepsilon = \delta$ and the limit superior as $x \to \infty$ is bounded by $\operatorname{\mathbb{P}}(M \geqslant \eta/\delta)$, which tends to $0$ as $\delta \downarrow 0$ for every fixed $\eta > 0$.
This construction using bivariate max-stable distributions is in some sense generic. Given a random variable $M$ on $[0, \infty)$ with expectation $\operatorname{\mathbb{E}}(M) \leqslant 1$, one can define a Pickands dependence function $A$ by $A(w) = 1 - \operatorname{\mathbb{E}}[\min(1-w, wM)]$ for $w \in [0, 1]$, and then \eqref{eq:A2M} holds. The extension to general exponents $\alpha$ and tail constants $c_u$ is straightforward.
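As a sanity check on this correspondence, take $M = 1$ almost surely, so that $A(w) = 1 - \min(1-w, w) = \max(w, 1-w)$, the Pickands dependence function of complete dependence. Formula \eqref{eq:A2M} then returns the point mass at $1$: for $z < 1$, i.e., $w > 1/2$, we have $A'(w) = 1$ and
\[
A(w) - w \, A'(w) = w - w = 0,
\]
while for $z \geqslant 1$, i.e., $w \leqslant 1/2$, the left-hand derivative is $A'(w) = -1$ and $A(w) - w \, A'(w) = (1-w) + w = 1$.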
\end{example}
\section{Absolutely continuous case}
\label{sec:ac}
If the joint distribution of the Markov tree $X = (X_v)_{v \in V}$ on $\mathcal{T} = (V, E)$ is absolutely continuous with respect to the Lebesgue measure on $[0, \infty)^V$, the formulations of the conditions and results simplify considerably. Let $f$ denote the joint probability density function of $X$ and let $f_v$, for $v \in V$, denote the marginal density of $X_v$.
By the Hammersley--Clifford theorem \citep[Theorem~3.9]{LauritzenBook}, $X$ is a Markov tree as soon as the joint density factorizes as
\begin{equation*}
\forall x \in [0, \infty)^V, \qquad
f(x) = \prod_{v \in V} f_v(x_v)
\prod_{\substack{\{a, b\} \subset V:\\\text{$a$ and $b$ are neighbours}}} \frac{f_{a,b}(x_{a}, x_{b})}{f_{a}(x_{a}) f_{b}(x_{b})}.
\end{equation*}
The second product is over all unordered pairs of neighbours and $f_{a,b}$ denotes the bivariate density function of $(X_a, X_b)$.
For $t \in (0, \infty)$ such that $f_a(t) \in (0,\infty)$, the density of $\mathcal{L}(X_b/t \mid X_a = t)$ is $t f_{a,b}(t, ty) / f_{a}(t)$ for $y \in (0, \infty)$. The following condition replaces Condition~\ref{ass:MT:all}.
\begin{condition}
\label{ass:MT:pdf}
For every $e = (a, b) \in E$, there exists a probability density function $q_{a,b}$ on $(0, \infty)$ such that
\[
\forall y \in (0, \infty), \qquad \lim_{t \to \infty}
\frac{t f_{a,b}(t, ty)}{f_{a}(t)} = q_{a,b}(y).
\]
\end{condition}
\begin{theorem}
Let the random vector $X$ on $[0, \infty)^V$ be a Markov tree on the undirected tree $\mathcal{T} = (V, E)$ with joint density function $f$. Assume there exists a positive function $g$, regularly varying at infinity with index $-\alpha-1 < -1$, such that $f_v(t)/g(t) \to c_v \in (0, \infty)$ as $t \to \infty$ for every $v \in V$. If Condition~\ref{ass:MT:pdf} holds, then the conditions of Corollary~\ref{thm:rvmt} are satisfied with $U = V$, the same constants $c_u$, and auxiliary function $b(t) = \alpha / \{t \, g(t)\}$. For all pairs of neighbours $a, b \in V$, the density of $M_{a,b}$ is $q_{a,b}$ and for almost every $y \in (0, \infty)$, we have
\begin{equation}
\label{eq:quba}
c_b \, y^\alpha \, q_{b,a}(y) = c_a \, y^{-2} \, q_{a,b}(y^{-1}).
\end{equation}
Moreover, $\operatorname{\mathbb{E}}[\Theta_{u,v}^\alpha] = c_{v}/c_{u}$ for all $u, v \in V$, so that $b(t) \operatorname{\mathbb{P}}(X/t \in \,\cdot\,) \zto \nu$ as $t \to \infty$, where $\nu \in \mathcal{M}_0$ satisfies
\[
\nu(f) = c_u \operatorname{\mathbb{E}} \left[
\int_{0}^{\infty}
f(z\Theta_{u}) \,
\alpha z^{-\alpha-1} \, \mathrm{d} z
\right]
\]
for every $u \in V$ and for every Borel measurable $f : [0, \infty)^V \setminus \{0\} \to [0, \infty]$, with $\Theta_{u}$ the tail tree in \eqref{eq:Thetauv}. Moreover, all tail trees are connected through \eqref{eq:rcf2}.
\end{theorem}
\begin{proof}
The function $f_v$ is regularly varying at infinity with index $-\alpha-1$ too. By Karamata's theorem \citep[Proposition~1.5.10]{BGT}, we have $t f_v(t) / \overline{F}_v(t) \to \alpha$ and thus $\overline{F}_v(t) / \{t g(t)\} \to c_v/\alpha$ as $t \to \infty$.
Condition~\ref{ass:MT:all} with $U = V$ follows from Condition~\ref{ass:MT:pdf} and Scheff\'e's theorem. Part~(ii) of Condition~\ref{ass:MT:all} is void, since $\mu_e(\{0\}) = 0$ for every $e \in E$.
If $a, b \in V$ are neighbours, we can apply \eqref{eq:muba} to $g(y) = \mathds{1}_{(0, z)}(y)$, where $z \in (0, \infty)$, to find
\begin{equation*}
c_b \, \int_0^z q_{b,a}(y) \, \mathrm{d} y
=
c_a \, \int_{1/z}^\infty y^\alpha \, q_{a,b}(y) \, \mathrm{d} y
=
c_a \, \int_{0}^{z} y^{-\alpha-2} \, q_{a,b}(y^{-1}) \, \mathrm{d} y.
\end{equation*}
Since this is true for every $z \in (0, \infty)$, we must have $c_b \, q_{b,a}(y) = c_{a} \, y^{-\alpha-2} \, q_{a,b}(y^{-1})$ for almost every $y \in (0, \infty)$, whence \eqref{eq:quba}.
Since $\mu_{(b,a)}$ does not have an atom at $0$, the identity \eqref{eq:muba} with $g = \mathds{1}_{(0, \infty)}$ implies that $\operatorname{\mathbb{E}}[M_{(a,b)}^\alpha] = c_{b}/c_{a}$. Apply \eqref{eq:Ma2Tha} and the observation on the line just below that equation to see that $\operatorname{\mathbb{E}}[\Theta_{u,v}^\alpha] = c_{v}/c_{u}$ for all $u, v \in V$. By Corollary~\ref{cor:rcf}, all tail trees are then connected via \eqref{eq:rcf2}.
Finally, $\mathcal{M}_0$-convergence to $\nu$ with the stated expression follows from Theorem~\ref{thm:one2multi} and Corollary~\ref{cor:justk}.
\end{proof}
\begin{example}
In Example~\ref{ex:EVC:1}, assume that $A$ is twice continuously differentiable on $(0, 1)$ and that $A'(0) = -1$ and $A'(1) = 1$. The distribution of $(X, Y)$ is then absolutely continuous and the conditional density of $Y/x$ given that $X = x$ converges as $x \to \infty$ to the function
\[
q(z) = w^3 \, A''(w), \qquad z \in (0, \infty), \ w = 1 / (1+z).
\]
The conditions on $A$ imply that $\int_0^\infty q(z) \, \mathrm{d} z = \int_0^1 w \, A''(w) \, \mathrm{d} w = 1$ and $\int_0^\infty z \, q(z) \, \mathrm{d} z = \int_0^1 (1-w) \, A''(w) \, \mathrm{d} w = 1$, so that $q$ is a probability density function with first moment equal to $1$. Moreover, replacing the function $A$ by the Pickands dependence function $w \mapsto A(1-w)$ amounts to changing $q$ by the function $z \mapsto z^{-3} q(z^{-1})$, in line with \eqref{eq:quba} with $c_a = c_b = 1$ and $\alpha = 1$.
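The final evaluations follow by integration by parts: since $A(0) = A(1) = 1$,
\[
\int_0^1 w \, A''(w) \, \mathrm{d} w
= \bigl[ w \, A'(w) \bigr]_0^1 - \int_0^1 A'(w) \, \mathrm{d} w
= A'(1) - \{A(1) - A(0)\}
= 1,
\]
and similarly $\int_0^1 (1-w) \, A''(w) \, \mathrm{d} w = -A'(0) + \{A(1) - A(0)\} = 1$, using $A'(0) = -1$ and $A'(1) = 1$.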
An interesting example in this respect is the bivariate H\"{u}sler--Reiss distribution \citep{husslerReiss1989} with Pickands dependence function
\[
A(w) =
(1-w) \, \Phi\left( \lambda + \tfrac{1}{2\lambda} \log \tfrac{1-w}{w} \right)
+
w \, \Phi\left( \lambda + \tfrac{1}{2\lambda} \log \tfrac{w}{1-w} \right)
\]
for $0 < w < 1$. Here $\lambda \in (0, \infty)$ is a parameter and $\Phi(z) = \int_{-\infty}^z (2\pi)^{-1/2} \exp(-u^2/2) \, \mathrm{d} u$ is the standard normal cumulative distribution function. After tedious calculations, we find that $q$ is given by the density of the lognormal random variable $M = \exp\{2\lambda(Z-\lambda)\}$, where $Z$ is a standard normal random variable. Note indeed that $\operatorname{\mathbb{E}}[M] = 1$. Moreover, the density function satisfies $z \, q(z) = z^{-2} q(z^{-1})$ for all $z \in (0, \infty)$, which is \eqref{eq:quba} with $q_{a,b} = q_{b,a} = q$ and $c_a = c_b = 1$ and $\alpha = 1$. This also follows from the symmetry of the H\"usler--Reiss Pickands dependence function, i.e., $A(w) = A(1-w)$ for all $w \in [0, 1]$, so that the pair $(X, Y)$ is exchangeable.
If all neighbouring pairs $(X_a, X_b)$ for $(a, b) \in E$ of the Markov tree follow such H\"{u}sler--Reiss max-stable distributions, the joint distribution of the tail tree is multivariate log-normal, since $\log \Theta_{u,v} = \sum_{e \in \path{u}{v}} \log M_e$ for all $u, v \in V$, where the random variables $\log M_e$ are independent and normally distributed with expectation $-2\lambda_e^2$ and variance $4\lambda_e^2$, with dependence parameter $\lambda_e \in (0, \infty)$ for all $e \in E$.
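The moment identities used here can be checked via the normal moment generating function: since $\log M_e \sim N(-2\lambda_e^2, 4\lambda_e^2)$, we have, for $s \in \mathbb{R}$,
\[
\operatorname{\mathbb{E}}[M_e^s]
= \exp\bigl( -2\lambda_e^2 s + \tfrac{1}{2} \cdot 4\lambda_e^2 s^2 \bigr)
= \exp\{ 2\lambda_e^2 \, s(s-1) \},
\]
which equals $1$ at $s = 1 = \alpha$, in line with $\operatorname{\mathbb{E}}[M_e^\alpha] = c_b/c_a$ for $c_a = c_b$.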
\end{example}
\appendix
\section{Proofs for Section~\ref{sec:one2multi}}
\label{app:proofs}
\begin{proof}[Proof of Theorem~\ref{thm:one2multi}]
\emph{(a) and (b) are equivalent.} ---
Clearly, (b) implies (a). Conversely, assume (a); let us show (b). Let $z \in [1, \infty)$ and let $\theta \in \mathbb{S}$ be such that $\operatorname{\mathbb{P}}(\Theta_{i,j}=\theta_j)=0$ for all $j \in \{1, \ldots, d\}$. We have
\begin{align*}
\operatorname{\mathbb{P}}(X_i/t > z, X/X_i \leqslant \theta \mid X_i > t)
&=
\frac{b(zt) \overline{F}_i(zt)}{b(t) \overline{F}_i(t)} \,
\frac{b(t)}{b(zt)} \,
\operatorname{\mathbb{P}}(X/X_i \leqslantslant \theta \mid X_i/t > z) \\
&\to
z^{-\alpha} \operatorname{\mathbb{P}}(\Theta_i \leqslant \theta),
\qquad t \to \infty.
\end{align*}
It follows that $\operatorname{\mathbb{P}}(X_i/t \leqslant z, X/X_i \leqslant \theta \mid X_i > t) \to \operatorname{\mathbb{P}}(Z \leqslant z) \operatorname{\mathbb{P}}(\Theta_i \leqslant \theta)$ as $t \to \infty$, where $Z$ is a $\operatorname{Pa}(\alpha)$ random variable.
\emph{(b) implies (c) and (i).} ---
Since $(X/t) = (X_i/t) (X/X_i)$, statement~(b) and the continuous mapping theorem \citep[Theorem~2.3]{vandervaart:1998} imply that $\mathcal{L}(X/t \mid X_i > t)$ converges weakly to $Y_i = Z \Theta_i$, where $Z$ is a $\operatorname{Pa}(\alpha)$ random variable independent of $\Theta_i$. Since $\Theta_{i,i} = 1$ almost surely, we have $Y_{i,i} = Z$.
\emph{(c) implies (a).} ---
Since $X/X_i = (X/t)/(X_i/t)$, statement (c) and the continuous mapping theorem imply statement (a) with $\Theta_i = Y_i / Y_{i,i}$.
\emph{(b) implies (d).} ---
Define a Borel measure $\nu_i$ on $\mathbb{S}_{0,i}$ by
\[
\nu_i(\,\cdot\,)
=
c_i \int_0^\infty \operatorname{\mathbb{P}}(z \Theta_i \in \,\cdot\,) \, \alpha z^{-\alpha-1} \, \mathrm{d} z.
\]
If $B$ is a Borel subset of $\mathbb{S}_{0,i}$ contained in $\{ x \in \mathbb{S}_{0,i} : x_i \geqslant \varepsilon \}$ for some $\varepsilon > 0$, then $\operatorname{\mathbb{P}}(z \Theta_i \in B) = 0$ as soon as $z < \varepsilon$, since $\Theta_{i,i} = 1$ almost surely. As a consequence, $\nu_i(B) \leqslant c_i \varepsilon^{-\alpha}$ for such $B$. It follows that $\nu_i \in \mathcal{M}_{0,i}$.
By linearity of the integral and by monotone convergence, we find that
\begin{equation}
\label{eq:Thetai2nui}
\nu_i(f) = c_i \int_0^\infty \operatorname{\mathbb{E}}[f(z\Theta_i)] \, \alpha z^{-\alpha-1} \, \mathrm{d} z
\end{equation}
for every nonnegative Borel measurable function $f$ on $\mathbb{S}_{0,i}$. The same expression is then true for real-valued Borel measurable functions $f$ on $\mathbb{S}_{0,i}$ for which at least one of the two integrals with $f$ replaced by $\abs{f}$ is finite. This includes bounded, Borel measurable functions that vanish on a set of the form $\{x \in \mathbb{S}_{0,i} : x_i \leqslant \varepsilon \}$ for some $\varepsilon > 0$.
Let $f \in \mathcal{C}_{0,i}$ and let $\varepsilon > 0$ be such that $f(x) = 0$ as soon as $x_i \leqslant \varepsilon$. By (b), we have
\begin{align*}
b(t) \operatorname{\mathbb{E}}[ f(X/t) ]
&=
b(t) \operatorname{\mathbb{E}}[ f((X_i/t)(X/X_i)) \, \mathds{1}(X_i/t > \varepsilon)] \\
&=
b(\varepsilon t) \overline{F}_i(\varepsilon t) \frac{b(t)}{b(\varepsilon t)}
\operatorname{\mathbb{E}}[ f(\varepsilon(X_i/(\varepsilon t))(X/X_i)) \mid X_i > \varepsilon t] \\
&\to
c_i \varepsilon^{-\alpha} \operatorname{\mathbb{E}}[f(\varepsilon Z \Theta_i)],
\qquad t \to \infty,
\end{align*}
where $Z$ is a $\operatorname{Pa}(\alpha)$ random variable, independent of $\Theta_i$. The limit is equal to
\begin{equation*}
c_i \varepsilon^{-\alpha} \int_{1}^{\infty} \operatorname{\mathbb{E}}[f(\varepsilon z \Theta_i)] \, \alpha z^{-\alpha-1} \, \mathrm{d} z
=
c_i \int_{\varepsilon}^{\infty} \operatorname{\mathbb{E}}[f(z \Theta_i)] \, \alpha z^{-\alpha-1} \, \mathrm{d} z
=
\nu_i(f),
\end{equation*}
since $f(z \Theta_i) = 0$ almost surely whenever $z \leqslant \varepsilon$, as $\Theta_{i,i} = 1$ almost surely.
\emph{(d) implies (c).} ---
For $z \in (0, \infty)$, we have $b(t) \operatorname{\mathbb{P}}(X_i/t > z)
\to c_i z^{-\alpha}$ as $t \to \infty$, and thus $\nu_i(\{x : x_i > z\}) = c_i z^{-\alpha}$ by (d). For open $G \subset \mathbb{R}^d$, the Portmanteau theorem \citep[Theorem~2.1(iii)]{lindskog+r+r:2014} yields
\begin{align*}
\liminf_{t \to \infty} \operatorname{\mathbb{P}}(X/t \in G \mid X_i > t)
&=
\liminf_{t \to \infty}
\frac{1}{b(t) \overline{F}_i(t)} b(t) \operatorname{\mathbb{P}}(X/t \in G \cap \{ x : x_i > 1\}) \\
&\geqslant
c_i^{-1} \nu_i(G \cap \{ x : x_i > 1\}).
\end{align*}
By the Portmanteau lemma for weak convergence \citep[Lemma~2.2]{vandervaart:1998}, we obtain (c), where the law of $Y_i$ is $\operatorname{\mathbb{P}}(Y_i \in \,\cdot\,) = c_i^{-1} \nu_i(\,\cdot\, \cap \{ x : x_i > 1 \})$.
\emph{(d) implies (e).} ---
For every $z > 0$, we have
\begin{align*}
b(t) \operatorname{\mathbb{P}}[\max(X_I)/t > z]
&\leqslant
b(t) \sum_{i \in I} \operatorname{\mathbb{P}}(X_i/t > z) \\
&=
\frac{b(t)}{b(zt)}
\sum_{i \in I} b(zt) \overline{F}_i(zt)
\to
z^{-\alpha} \sum_{i \in I} c_i,
\qquad t \to \infty.
\end{align*}
Since the limit is finite for every $z > 0$ and since it converges to zero as $z \to \infty$, it follows by the relative compactness criterion in \citep[Theorem~2.5]{lindskog+r+r:2014} that for every sequence $(t_n)_n$ tending to infinity, there exists a subsequence along which $b(t_n) \operatorname{\mathbb{P}}(X/t_n \in \,\cdot\,)$ converges in $\mathcal{M}_{0,I}$. To show (e), we then need to show that these subsequence limits must coincide. To do so, we show that for every $f \in \mathcal{C}_{0,I}$, the limit of $b(t) \operatorname{\mathbb{E}}[f(X/t)]$ exists as $t \to \infty$. This fixes the value of the integral of such $f$ with respect to all subsequence limits, which then must be the same.
For $\varepsilon > 0$, let $h_\varepsilon : [0, \infty) \to [0, 1]$ be the piece-wise linear function
\[
h_\varepsilon(t) =
\min\{ \max(2t/\varepsilon - 1, 0), 1 \} =
\begin{cases}
0 & \text{if $t \in [0, \varepsilon/2]$,} \\
2t/\varepsilon - 1 & \text{if $t \in [\varepsilon/2, \varepsilon]$,} \\
1 & \text{if $t \in [\varepsilon, \infty)$.}
\end{cases}
\]
Put $\hbar_\varepsilon = 1 - h_\varepsilon$. Write $I = \{i_{1}, \ldots, i_{k}\}$. Then
\begin{align*}
1 &= h_\varepsilon(x_{i_{1}}) + \hbar_\varepsilon(x_{i_{1}}) \\
&= h_\varepsilon(x_{i_{1}})
+ \hbar_\varepsilon(x_{i_{1}}) h_\varepsilon(x_{i_{2}})
+ \hbar_\varepsilon(x_{i_{1}}) \hbar_\varepsilon(x_{i_{2}}) \\
&= \ldots \\
&= \sum_{\ell=1}^k \left( \prod_{m=1}^{\ell-1} \hbar_\varepsilon(x_{i_{m}}) \right) h_\varepsilon(x_{i_{\ell}})
+ \prod_{\ell=1}^k \hbar_\varepsilon(x_{i_{\ell}}).
\end{align*}
For $f \in \mathcal{C}_{0,I}$ we can find $\varepsilon > 0$ such that $f(x) = 0$ if $\max(x_{i_{1}},\ldots,x_{i_{k}}) \leqslant \varepsilon$. Then $f(x) \prod_{\ell=1}^k \hbar_\varepsilon(x_{i_{\ell}}) = 0$ for all $x$, and thus $f = \sum_{i \in I} f_i$ where, for $\ell \in \{1, \ldots, k\}$, we have
\[
f_{i_{\ell}}(x) = f(x) \left( \prod_{m=1}^{\ell-1} \hbar_\varepsilon(x_{i_{m}}) \right) h_\varepsilon(x_{i_{\ell}}).
\]
Each function $f_i$ belongs to $\mathcal{C}_{0,I}$ too but has moreover the property that $f_{i}(x) = 0$ as soon as $x_{i} \leqslant \varepsilon/2$. The restriction of $f_i$ to $\mathbb{S}_{0,i}$ thus belongs to $\mathcal{C}_{0,i}$. By (d),
\begin{equation*}
b(t) \operatorname{\mathbb{E}}[ f(X/t) ]
=
\sum_{i \in I} b(t) \operatorname{\mathbb{E}}[f_i(X/t)]
\to
\sum_{i \in I} \nu_{i}(f_{i}), \qquad t \to \infty.
\end{equation*}
The existence of a limit has thus been shown, and convergence in $\mathcal{M}_0I$ to some measure $\nu$ as stated in (e) follows.
\emph{(e) implies (d), (ii), (iii) and (iv).} ---
A function $f$ in $\mathcal{C}_{0,i}$ can be extended to a function in $\mathcal{C}_{0,I}$, denoted by the same symbol, by putting $f(x) = 0$ for $x \in \mathbb{S}_{0,I} \setminus \mathbb{S}_{0,i}$. Hence, (e) implies (d), with $\nu_i$ as described in (ii).
Statement (iii) follows from (ii) and the description of the law of $Y_i$ in terms of $\nu_i$ in the proof above of the implication that (d) implies (c).
Similarly, (iv) follows from (ii), equation~\eqref{eq:Thetai2nui}, and Fubini's theorem.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:justK}]
It is sufficient to show statement (a) in Theorem~\ref{thm:one2multi}. By property~(i) in Theorem~\ref{thm:justK}, the weak convergence in Theorem~\ref{thm:one2multi}(a) already holds for all $i \in K$, and we need to show that it also holds for all $j \in I \setminus K$. Choose $j \in I \setminus K$ and let $i = i(j) \in K$ be as in property~(ii) of Theorem~\ref{thm:justK}.
We will show that $\mathcal{L}(X/X_j \mid X_j > t)$ converges weakly as $t \to \infty$ to $\Theta_j$ whose law is defined in \eqref{eq:rcf2}. Let $G \subset \mathbb{S}$ be open and let $\delta > 0$.
We have
\begin{align*}
\operatorname{\mathbb{P}}(X/X_j \in G \mid X_j > t)
&\geqslant
\operatorname{\mathbb{P}}(X/X_j \in G, X_i > \delta t \mid X_j > t) \\
&=
\frac{b(\delta t) \overline{F}_i(\delta t)}{b(t) \overline{F}_j(t)}
\,
\frac{b(t)}{b(\delta t)}
\,
\operatorname{\mathbb{P}}\left[
\frac{X/X_i}{X_j/X_i} \in G, \,
\delta \frac{X_i}{\delta t} \frac{X_j}{X_i} > 1
\, \Bigg\vert \,
X_i > \delta t
\right].
\end{align*}
By Theorem~\ref{thm:one2multi} applied to $K$, we have $\mathcal{L}(X_i/s, X/X_i \mid X_i > s) \dto \operatorname{Pa}(\alpha) \otimes \mathcal{L}(\Theta_{i})$ as $s \to \infty$. Let $Z$ be a $\operatorname{Pa}(\alpha)$ random variable, independent of $\Theta_{i}$. By the Portmanteau lemma for weak convergence, we have
\begin{align*}
\liminf_{t \to \infty} \operatorname{\mathbb{P}}(X/X_j \in G \mid X_j > t)
&\geqslant
\frac{c_i}{c_j} \delta^{-\alpha}
\operatorname{\mathbb{P}}[ \Theta_i/\Theta_{i,j} \in G, \, \delta Z \Theta_{i,j} > 1 ] \\
&= \operatorname{\mathbb{E}}[ \mathds{1} \{ \Theta_i / \Theta_{i,j} \in G \} \min( \Theta_{i,j}^\alpha, \delta^{-\alpha} ) ] / \operatorname{\mathbb{E}}[ \Theta_{i,j}^{\alpha} ].
\end{align*}
The equality on the second line follows from (ii) and the fact that $Z^{-\alpha}$ is uniformly distributed on $(0, 1)$ and independent of $\Theta_i$.
Since $\delta > 0$ was arbitrary, the monotone convergence theorem yields $\liminf_{t \to \infty} \operatorname{\mathbb{P}}(X/X_j \in G \mid X_j > t) \geqslant \operatorname{\mathbb{E}}[\mathds{1}\{\Theta_{i}/\Theta_{i,j} \in G \} \, \Theta_{i,j}^\alpha] / \operatorname{\mathbb{E}}[\Theta_{i,j}^\alpha]$. Apply the Portmanteau lemma for weak convergence once more to obtain the stated weak convergence.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:MPD:rho}]
The properties of $\rho$ imply that $\mathbb{S}_{\rho}$ is open and non-empty and that $0 < \nu(\mathbb{S}_{\rho}) < \infty$. The boundary of $\mathbb{S}_{\rho}$ is $\{ x \in \mathbb{S}_{0,I} : \rho(x_I) = 1 \}$, which is a $\nu$-null set, since its $\nu$-measure is bounded by the sum over $i \in I$ of $\nu( \{ x \in \mathbb{S}_{0,I} : \rho(x_I) = 1, x_i > 0 \})$, which is zero by \eqref{eq:Th2nu}. Since $\rho(X_I) > t$ if and only if $X/t \in \mathbb{S}_{\rho}$, the Portmanteau theorem \citep[Theorem~2.1(iv)]{lindskog+r+r:2014} implies $b(t) \operatorname{\mathbb{P}}[\rho(X_I) > t] \to \nu(\mathbb{S}_{\rho})$ as $t \to \infty$.
Let $G \subset \mathbb{R}^d$ be open. By (iii) of the same Portmanteau theorem,
\[
\liminf_{t \to \infty} \operatorname{\mathbb{P}}(X/t \in G \mid \rho(X_I) > t)
=
\liminf_{t \to \infty}
\frac
{b(t) \operatorname{\mathbb{P}}(X/t \in G \cap \mathbb{S}_{\rho})}
{b(t) \operatorname{\mathbb{P}}(X/t \in \mathbb{S}_{\rho})}
\geqslant
\nu(G \cap \mathbb{S}_{\rho}) / \nu(\mathbb{S}_{\rho}).
\]
The Portmanteau lemma for weak convergence implies the stated weak convergence of $\mathcal{L}(X/t \mid \rho(X_I) > t)$ as $t \to \infty$. This proves statement (c) in Theorem~\ref{thm:one2multi} for the enlarged random vector $(X, \rho(X_I))$.
\end{proof}
\acks
The author is grateful to two anonymous reviewers whose suggestions have led to various improvements throughout the text. The author also wishes to thank Stefka Asenova and Gildas Mazo for inspiring discussions.
\end{document}
\begin{document}
\title{Nash SIR: An Economic-Epidemiological Model of Strategic Behavior During a Viral Epidemic}
\author{David McAdams\thanks{Fuqua School of Business and Economics Department, Duke University, Durham, North Carolina, USA. Email: [email protected]. I thank Carl Bergstrom, Yonatan Grad, and Marc Lipsitch for encouragement and Sam Brown, Troy Day, Nick Papageorge, Elena Quercioli, Lones Smith, Yangbo Song, Marta Wosinska, and participants at the Johns Hopkins Pandemic Seminar in April 2020 for helpful comments.
}
}
\date{May 3, 2020}
\maketitle
\begin{abstract}
This paper develops a Nash-equilibrium extension of the classic SIR model of infectious-disease epidemiology (``Nash SIR''), endogenizing people's decisions whether to engage in economic activity during a viral epidemic and allowing for complementarity in social-economic activity. An \textit{equilibrium epidemic} is one in which Nash equilibrium behavior during the epidemic generates the epidemic. There may be multiple equilibrium epidemics, in which case the epidemic trajectory can be shaped through the coordination of expectations, in addition to other sorts of interventions such as stay-at-home orders and accelerated vaccine development. An algorithm is provided to compute all equilibrium epidemics.
\end{abstract}
People's choices impact how a viral epidemic unfolds.
As noted in a March 2020 \emph{Lancet} commentary on measures to control the current coronavirus pandemic, ``How individuals respond to advice on how best to prevent transmission will be as important as government actions, if not more important'' (Anderson et al (2020)).\nocite{Anderson_etal2010} Early on, when pre-emptive measures could be especially effective (Dalton, Corbett, and Katelaris (2020)), \nocite{Dalton_etal2020} people are at little personal risk of exposure and hence may be unwilling to follow orders to ``distance'' themselves from others. On the other hand, as infections mount and the health-care system is overwhelmed, people may then voluntarily take extreme measures to limit their exposure to the virus.
Clearly, the way in which people's incentives change during the course of an epidemic is essential to how the epidemic itself progresses, and how widespread are its harms.
This paper develops a Nash equilibrium extension of the classic Susceptible-Infected-Recovered (SIR) model of viral epidemiology. ``Nash SIR'' augments the well-known system of differential equations that characterizes epidemiological dynamics in the SIR model with a system of Bellman equations characterizing the dynamics of agent welfare and a Nash-equilibrium condition characterizing the dynamics of agent behavior. What emerges is a model of \emph{equilibrium epidemics} that, while highly stylized, sheds light on the interplay between epidemiological dynamics, economic behavior, and the health and economic harm done during the course of a viral epidemic.
The paper's most important modeling innovation is to account for the \textit{economic complementarities} of personal interaction that can be lost when agents ``distance'' themselves to slow viral transmission. Such complementarities are missing from the existing literature (discussed below), but can impact the progression of an epidemic in meaningful ways.
In particular, a \textit{positive feedback} can arise in which people complying with public-health directives induces others to do so as well, and vice versa. As non-essential businesses close, there is less that people are able to do outside the home, reducing their incentive to go out. Similarly, as co-workers in an office (or professors in a university) stay home, there is less reason to go to the office yourself, especially when the work involved is collaborative and can be managed remotely.\footnote
{
The opposite is true of essential work. The more that essential workers are absent, the more valuable the work done by those who remain. More generally, there may be congestion effects associated with social-economic activity, increasing the benefit one gets as others reduce their activity. This paper abstracts from congestion for ease of exposition.
}
Developed independently, this paper's Nash SIR model generalizes the equilibrium SIR model of \cite{farboodi2020internal} by allowing for complementarities in social-economic activity.
In the traditional SIR model, the trajectory of the epidemic is completely determined by epidemiological fundamentals. Similarly, in \cite{farboodi2020internal}'s Nash SIR model, the epidemic has a unique equilibrium trajectory. By contrast, in this paper's Nash SIR model, there may be multiple potential trajectories for the epidemic, each of which induces agents to behave in a way that generates that epidemic trajectory. Because of this indeterminacy, the ultimate harm done during an epidemic, in terms of lost lives and lost livelihoods, can hinge on what agents believe about what others believe. This paper's model therefore highlights the importance of coordinating mechanisms, such as effective political leadership, in shaping expectations during an epidemic.
In addition to \textit{coordinating interventions} such as a political leader's public statements, \textit{fundamental interventions} such as public policies, public-health programs, scientific effort, and new cultural practices impact the set of equilibrium-epidemic possibilities. Such impacts can be explored using the Nash SIR model, by computing the set of equilibrium epidemics with and without the intervention in question. To enable such exploration, I provide an algorithm to compute all equilibrium epidemics in any instance of the model. This algorithm requires solving a set of problems, each of which corresponds to a different potential ``final condition'' and involves solving a system of first-order differential equations that augments the traditional SIR equations.
\paragraph{Relation to the literature.} This paper follows the dominant tradition within economics of modeling disease hosts as dynamically-optimizing agents with correct forward-looking beliefs. A few notable examples include \cite{geoffard1996rational}, \cite{kremer1996integrating}, \cite{adda2007behavior}, \cite{chan2016health}, and \cite{greenwood2019equilibrium}. More recently, there has been an outpouring of important work motivated by the SARS-CoV2 outbreak, much of it embedding economic models into an SIR (or closely related) framework. Some notable examples include \cite{alvarez2020simple}, \cite{bethune2020covid}, \cite{eichenbaum2020macroeconomics}, \cite{garabaldi2020modeling}, \cite{glover2020health}, \cite{jones2020optimal}, \cite{keppo2020avoidance}, \cite{krueger2020macroeconomic}, and \cite{toxvaerd2020equilibrium}.
There is also of course an enormous literature within epidemiology that models behavioral response to infectious disease. However, epidemiologists have been slow to adopt economics-style modeling, usually instead making ad hoc assumptions about behavior. For instance, \cite{bootsma2007effect} assume that people's intensity of social-contact avoidance during the 1918 flu pandemic varied depending on how many others in their community had recently died. An example that grapples with the dynamics of social distancing in the current pandemic is \cite{kissler2020social}.
Thanks to the recent explosion of interest in economic epidemiology among economists, the gap between economics and infectious-disease epidemiology is closing. \cite{farboodi2020internal} provide an elegant Nash-equilibrium extension of the SIR model that augments the usual system of differential equations that governs epidemiological dynamics with just two additional differential equations. \cite{toxvaerd2020equilibrium} beautifully analyzes a similar equilibrium SIR model, establishing compelling features of the equilibrium trajectory.
Because of their analytical simplicity and tight connection to existing models and methods within epidemiology, \cite{farboodi2020internal}, \cite{toxvaerd2020equilibrium}, and others in this fast-growing literature could potentially have enormous influence on infectious-disease epidemiology, marrying the fields and promoting further cross-fertilization of ideas.
Yet there is also a danger here. This new crop of equilibrium SIR models makes the implicit assumption that the benefit people get from social-economic activity does not depend on others' activity. Consequently, the ``activity game'' that people play necessarily exhibits negative externalities (activity increases others' risk of infection) and strategic substitutes (increased risk of infection prompts others to be less active; see \cite{bulow1985multimarket}). As a profession, we have strong insights about such games, insights that can be easily and powerfully communicated; these models could therefore be highly influential in shaping public policy. However, the insights we draw from them could be misguided if, in fact, the activity game exhibits positive externalities and/or strategic complements. This is especially important because, as I discuss in the concluding remarks, the qualitative nature of the game does indeed change during the course of the epidemic.
The rest of the paper is organized as follows. Section \ref{section:model} presents the economic-epidemiological model, along with preliminary analysis. Section \ref{section:equilibrium} analyzes equilibrium epidemics in more detail. Section \ref{section:conclusion} discusses some limitations of the model and directions for future research.
\section{Model and Preliminary Analysis}\label{section:model}
This section presents the economic-epidemiological model, divided for clarity into three parts: \emph{the epidemic}, on how the epidemic process depends on agents' behavior (Section \ref{section:model:epidemic}); \emph{the economy,} on how the epidemic impacts economic activity, both directly by making people sick and indirectly by changing behavior (Section \ref{section:model:economy}); and \emph{individual and collective behavior,} on how the state of the epidemic and expectations about economic activity impact Nash-equilibrium behavior at each point along the epidemic trajectory (Section \ref{subsection:model:behavior}).
\subsection{The epidemic}\label{section:model:epidemic}
There is a unit-mass population of hosts, referred to as ``agents.'' Building on the classic Susceptible-Infected-Recovered (SIR) model of viral epidemiology, each host is at each time $t \geq 0$ in one of five epidemiological states: ``susceptible'' ($S$) if as-yet-unexposed to the virus; ``carriage/contagious'' ($C$) if asymptomatically infected; ``infected/sick'' ($I$) if symptomatically infected; ``recovered from carriage'' ($R_C$) if immune but never sick; and ``recovered from sickness'' ($R_I$) if immune and previously sick.\footnote{It remains unknown whether those who recover from SARS-CoV2 infection are immune to re-infection and, if so, for how long (\cite{lipsitch2020immunity}). }
\paragraph{Epidemiological distance.}
At each point in time $t$, each agent who is not sick decides how intensively to distance themselves from others. Distancing with intensity $d_i \in [0,1]$ causes an agent to avoid fraction $\alpha d_i$ of ``meetings'' with other agents, where $\alpha \in (0,1]$ is a parameter capturing the maximal effectiveness of distancing. Agents who are sick are assumed to be automatically isolated, as if distancing with intensity $d_i = 1$ and maximal effectiveness $\alpha = 1$. (The analysis extends easily to a more general context in which sick agents also transmit the virus.)
Let $\Omega \equiv \{S, C, R_C, R_I\}$ denote the set of not-sick epidemiological states. For each $\omega \in \Omega$, let $d_{\omega}(t)$ denote the average distancing intensity at time $t$ of agents currently in state $\omega$. Let $\mathbf{d}_t \equiv \left( d_{\omega}(t'): \omega \in \Omega, 0 \leq t' < t \right)$ denote the \emph{collective distancing behavior} of the agent population up to time $t$, and let $\mathbf{d} \equiv \left( d_{\omega}(t): \omega \in \Omega, t \geq 0 \right)$ denote their collective distancing behavior over the entire epidemic.
\paragraph{Epidemiological dynamics.}
The following notation is used to describe the state of the epidemic at each time $t \geq 0$, depending on agents' distancing behavior:
\begin{itemize}
\item $S(t;\mathbf{d}_t) = $ mass of agents who are susceptible;
\item $C(t;\mathbf{d}_t) = $ mass of agents who are in carriage, i.e., asymptomatically infected but not sick;
\item $I(t;\mathbf{d}_t) = $ mass of agents who are sick;
\item $R_C(t;\mathbf{d}_t) = $ mass of agents who are immune and were not previously sick; and
\item $R_I(t;\mathbf{d}_t) = $ mass of agents who are immune and were previously sick.
\end{itemize}
Because the population has unit mass, $S(t) + C(t) + I(t) + R_C(t) + R_I(t) = 1$ for all $t$.
Agents transition between epidemiological states as follows:
\begin{itemize}
\item $S \to C$: Susceptible agents become asymptomatically infected once ``exposed'' to someone currently infected, at a rate that depends on agents' behavior (details below);
\item $C \to I$: Each agent with asymptomatic infection becomes sick at rate $\sigma > 0$; and
\item $C \to R_C$ and $I \to R_I$: Each agent with infection clears their infection at rate $\gamma > 0$.
\end{itemize}
Initially at time $t=0$, mass $\Delta > 0$ have asymptomatic infection but no one is yet sick and no one is yet immune;\footnote
{The model can be easily extended to allow for innate immunity, by allowing some mass of hosts to be in states $R_I$ and $R_C$ at time $t=0$. For instance, during the ``second wave'' of SARS-CoV2 infections expected to arrive in Fall 2020, some hosts may retain immunity due to exposure during the first wave in Spring 2020. }
that is, $S(0) = 1 - \Delta$, $C(0) = \Delta$, and $I(0) = R_C(0) = R_I(0) = 0$.
Epidemiological dynamics at times $t > 0$ are then uniquely determined by the following system of differential equations:
\begin{align}
S'(t;\mathbf{d}_t) & = - \beta (1 - \alpha d_S(t))(1 - \alpha d_C(t)) S(t;\mathbf{d}_t) C(t;\mathbf{d}_t)
\label{eqn_S}
\\
C'(t;\mathbf{d}_t) & = - S'(t;\mathbf{d}_t) - (\sigma + \gamma) C(t;\mathbf{d}_t)
\label{eqn_C}
\\
I'(t;\mathbf{d}_t) & = \sigma C(t;\mathbf{d}_t) - \gamma I(t;\mathbf{d}_t)
\label{eqn_I}
\\
R_C'(t;\mathbf{d}_t) & = \gamma C(t;\mathbf{d}_t)
\label{eqn_RC}
\\
R_I'(t;\mathbf{d}_t) & = \gamma I(t;\mathbf{d}_t)
\label{eqn_RI}
\end{align}
Let $\mathcal{E}(t;\mathbf{d}_t) \equiv \left(S(t;\mathbf{d}_t), C(t;\mathbf{d}_t), I(t;\mathbf{d}_t), R_C(t;\mathbf{d}_t), R_I(t;\mathbf{d}_t) \right)$ denote the ``epidemic state'' at time $t$ and
$\mathcal{E}(\mathbf{d}) \equiv \left( \mathcal{E}(t;\mathbf{d}_t): t \geq 0 \right)$ the ``epidemic process.''
\noindent \emph{Note on notation:}
I use ``$\mathbf{d}_t$ notation'' in equations (\ref{eqn_S}-\ref{eqn_RI}) to emphasize how the epidemic state at time $t$ depends on agents' previous distancing behavior. However, to ease exposition, I henceforth suppress this notation, except where needed for clarity.
Equations (\ref{eqn_C}-\ref{eqn_RI}) are standard---reflecting agents' progression over time into carriage and then \emph{either} to infection at rate $\sigma$ \emph{or} to viral clearance at rate $\gamma$, and from infection to clearance at rate $\gamma$---but equation (\ref{eqn_S}) is different than in a standard SIR model.
Each susceptible agent $i$ has a \emph{potential} meeting (i.e., opportunity for transmission) with another randomly-selected agent $j$ at ``transmission rate'' $\beta > 0$. Since fraction $S(t)$ of the population is susceptible and fraction $C(t)$ have unisolated infection, the flow of potential meetings between susceptible and infected agents across the entire population is $\beta S(t)C(t)$. However, because susceptible and contagious agents distance themselves with intensity $d_S(t)$ and $d_C(t)$, respectively, each such potential meeting is avoided with probability $(1-\alpha d_S(t))(1-\alpha d_C(t))$. The overall flow of newly-exposed hosts is therefore $\beta (1-\alpha d_S(t))(1-\alpha d_C(t))S(t) C(t)$, a functional form that appeared first in \cite{quercioli2006contagious}.
\paragraph{End of the epidemic.} For analytical convenience, I assume that the epidemic ends at time $T > 0$ when a vaccine is introduced, giving all still-susceptible agents subsequent immunity. (Infected agents remain infected, but there are no new infections after time $T$.) I focus on the case when $T < \infty$ and $T$ is known to all agents, but the analysis can be easily extended to a setting in which $T$ is a random variable drawn from interval support.
\paragraph{Information states and distancing strategies.}
Agents' distancing decisions depend on what they know about their own epidemiological state and the overall epidemic.
This paper focuses on the simplest non-trivial case, assuming that (i) agents know when they are sick but otherwise observe nothing about their own epidemiological state and (ii) agents have correct beliefs about the epidemic process. The model can be extended in several natural directions, to include diagnostic testing (allowing agents to learn more about their own epidemiological state) and incorrect beliefs, but such extensions are left for future work.
Agent $i$'s \emph{information state} captures what she knows and believes, which depends only on (i) the time $t \geq 0$ and (ii) whether she is sick (state $I$), was previously sick (state $R_I$), or has not yet been sick (combined state $N \equiv S \cup C \cup R_C$).
An agent currently in information state $\iota \in \{N,I,R_I\}$ is referred to as an ``$\iota$-agent.'' Let $N(t) = S(t) + C(t) + R_C(t)$ denote the mass of $N$-agents; thus, $N(t) + I(t) + R_I(t) = 1$.
Agent $i$'s \emph{distancing strategy} specifies her likelihood of distancing herself at each time $t$ in each information state. $I$-agents are automatically isolated, as mentioned earlier. $R_I$-agents know that they are immune and therefore have a dominant strategy not to distance themselves. It remains to determine the behavior of $N$-agents.
Let $d_N(t)$ denote the share of $N$-agents who distance themselves. Because susceptible and contagious agents are in the same not-yet-sick information state, $d_N(t) = d_S(t) = d_C(t)$ and equation (\ref{eqn_S}) simplifies to:
\begin{equation}
S'(t) = - \beta (1-\alpha d_N(t))^2 S(t) C(t)
\label{eqn_S_sym}
\end{equation}
\paragraph{Attack rate. }
Each agent's ex ante likelihood of becoming infected, referred to as the ``attack rate'' of the virus, is equal to $\lim_{t \to \infty} (R_C(t) + R_I(t))$. The attack rate is always strictly less than one, even if a vaccine is never discovered ($T = \infty$) and the epidemic is left completely uncontrolled; see \cite{brauer2012mathematical} and \cite{katriel2012attack} for details.
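As a concrete illustration of the dynamics in equations (\ref{eqn_C}-\ref{eqn_RI}) and (\ref{eqn_S_sym}), the system can be integrated numerically by forward Euler. The sketch below uses illustrative parameter values (an assumption for exposition, not a calibration to any particular pathogen) and shows that the attack rate of an uncontrolled epidemic remains strictly below one, while maximal distancing largely suppresses the outbreak.

```python
# Forward-Euler integration of the epidemiological system under symmetric
# distancing.  All parameter values below are illustrative assumptions.
beta, sigma, gamma, alpha = 0.5, 0.2, 0.1, 0.8   # transmission, onset, clearance, max effectiveness
Delta, dt, T_sim = 0.01, 0.01, 400.0             # initial carriage, step size, horizon

def simulate(d_N):
    """Integrate S, C, I, R_C, R_I forward given a distancing path d_N(t)."""
    S, C, I, RC, RI = 1.0 - Delta, Delta, 0.0, 0.0, 0.0
    for k in range(int(T_sim / dt)):
        d = d_N(k * dt)
        dS = -beta * (1 - alpha * d) ** 2 * S * C   # new exposures of susceptibles
        dC = -dS - (sigma + gamma) * C              # carriage inflow and outflow
        dI = sigma * C - gamma * I                  # sickness onset and clearance
        dRC, dRI = gamma * C, gamma * I             # recoveries from C and I
        S, C, I, RC, RI = (S + dt * dS, C + dt * dC, I + dt * dI,
                           RC + dt * dRC, RI + dt * dRI)
    return S, C, I, RC, RI

# Uncontrolled epidemic (no distancing) vs. maximal distancing throughout.
S0, C0, I0, RC0, RI0 = simulate(lambda t: 0.0)
attack_uncontrolled = RC0 + RI0       # approximates the attack rate once C, I die out
S1, C1, I1, RC1, RI1 = simulate(lambda t: 1.0)
attack_distanced = RC1 + RI1
```

With these parameters the uncontrolled epidemic leaves a strictly positive mass of susceptibles never infected, consistent with the attack rate being below one, and full distancing reduces effective transmission by the factor $(1-\alpha)^2$, choking off the outbreak.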
\subsection{The economy}\label{section:model:economy}
Each agent's activities fall into three broad categories: \emph{isolated activities} that can be performed while distancing (e.g., lifting weights, collaborating online), \emph{public activities} that require entering public spaces but do not require interacting with others (e.g., going for a walk, getting gas), and \emph{social activities} that require interacting physically with others (e.g., meeting friends, working in an office building). An agent who distances herself with intensity $d_i$ can continue engaging in isolated activity, but forgoes fraction $\alpha d_i$ of public and social activity and reduces others' opportunities to join her in social activity.
\paragraph{Availability for social interaction.}
A not-sick agent who does not distance enjoys all the benefits of public activity, but engages in social activity only with those who are ``socially available.'' Let $A(t)$ denote agents' \textit{availability} for social interaction at time $t$, averaged across the entire population:
\begin{equation}\label{eqn:At}
A(t) = (1-\alpha d_N(t)) N(t) + R_I(t)
= 1 - I(t) - \alpha d_N(t) N(t).
\end{equation}
(Recall that $I$-agents are completely unavailable due to sickness, while $R_I$-agents find it optimal not to distance themselves at all.)
\paragraph{Economic output.}
Economic activity generates \textit{benefits}, a broad concept that should be understood to include everything from income (work activity) and access to goods and services (shopping) to psycho-social well-being (from interactions with friends). Sick agents are assumed for simplicity to be incapacitated and hence unable to engage in any economic activity; their economic benefit is normalized to zero. The benefit that well agents get depends on their own and others' distancing decisions, as well as how many people are currently sick.
Let $b(d_i; A)$ denote the flow benefit that agent $i$ gets when well and choosing distancing intensity $d_i \in [0,1]$, given population-wide average availability $0 \leq A \leq 1$. For concreteness, I assume that
\begin{equation}\label{eqn:bdef}
b(d_i; A) = a_0 + a_1 (1-\alpha d_i) + a_2 (1 - \alpha d_i)A.
\end{equation}
\noindent \textit{Discussion: meaning of the economic parameters.}
The parameters $a_0,a_1,a_2 > 0$ capture the importance, respectively, of isolated activities, public activities, and social activities for agent welfare. More precisely: $a_0$ captures the baseline level of benefits that a well agent gets while quarantined in the home; $a_1$ captures the extra benefits associated with being able to leave the home, e.g., the extra pleasure and health benefit of walking outside, the extra convenience of shopping in person rather than online; and $a_2$ captures the extra benefits associated with sharing the same physical space with others, e.g., eating out at a restaurant rather than at home, hugging a friend rather than just talking on the phone. (Put differently: $a_2$ is the cost associated with everyone else being quarantined; $a_1$ is the cost of quarantining yourself, in a world where everyone else is quarantined; and $a_0$ is the cost of being sick, in a world where everyone is quarantined.) These parameters can be changed in many ways. For instance, a restaurant service that delivers safely-prepared fresh-cooked meals would increase $a_0$ and reduce $a_2$, as would improved virtual-meeting technology that enhances remote collaboration.
\noindent \textit{Discussion: functional form of economic benefits.}
The assumption that the benefits of public and social activity are linear in own and others' availability simplifies the presentation but is not essential for the analysis or qualitative findings. For instance, suppose that agents were to prioritize their activities, e.g., by leaving the home only to get urgently-needed supplies, or to visit only with their dearest friends. In that case, each agent's benefit from public and social activity would naturally be a concave function of her own personal distance and of others' availability. The analysis can be easily adapted to allow for such concavity, but at the cost of complicating the presentation.
\paragraph{Economic losses due to the virus.}
If the virus did not exist, then no one would become sick and everyone would choose not to distance. All agents would then get constant flow economic benefit $b(0;1) = a_0 + a_1 + a_2$ and, since the population has unit mass, overall economic activity would also be $b(0;1)$.
The virus reduces economic activity directly, by making people sick, and indirectly, by inducing not-yet-sick agents to distance themselves. Distancing in turn creates two sorts of economic harm: ``private harm'' that distancing oneself reduces one's own public and social activity, and ``social harm'' that distancing oneself reduces others' social activity.
Let $b_t(d_i)$ be shorthand for each well agent's flow economic benefit at time $t$. $b_t(d_i)$ depends on (i) how many people are recovered from sickness, $R_I(t)$, and how many are currently sick, $I(t) = 1 - N(t) - R_I(t)$, (ii) what fraction $d_N(t)$ of not-yet-sick agents are distancing, and (iii) her own distancing choice $d_i \in [0,1]$:
\begin{align*}
b_t(d_i) & = a_0 + a_1 (1 - \alpha d_i) + a_2 ( 1- \alpha d_i) A(t)
\\
& = a_0 + a_1 (1 - \alpha d_i) + a_2 ( 1- \alpha d_i) ((1-\alpha d_N(t)) N(t) + R_I(t))
\end{align*}
All agents suffer economically throughout the epidemic, relative to the no-virus benchmark case in which everyone gets flow benefit $a_0 + a_1 + a_2$:
\begin{itemize}
\item \textit{Sick:} $I$-agents are incapacitated and get zero economic benefit. These agents lose $a_0 + a_1 + a_2$.
\item \textit{Previously sick:} $R_I$-agents do not distance, but have less opportunity for social interaction due to others' distancing behavior. These agents lose social-activity benefit $a_2(1-A(t))$.
\item \textit{Not-yet-sick:} $N$-agents choose distancing intensity $d_N(t)$, reducing their public and social activities by a factor of $(1-\alpha d_N)$. These agents lose public-activity benefit $a_1 \alpha d_N$ and lose social-activity benefit $a_2 (1 - (1-\alpha d_N)A(t)) = a_2 (1 - A(t) + \alpha d_N A(t))$.
\end{itemize}
Let $\Gamma_E(t)$ denote the lost economic activity at time $t$, across the entire population. Overall economic loss across the entire epidemic is $\Gamma_E = \int_0^{\infty} \Gamma_E(t) dt$. (If future losses are discounted by discount factor $0 < \delta \leq 1$, then the overall economic loss has present value $\Gamma_E = \int_0^{\infty} \delta^t \Gamma_E(t) dt$ at time $0$. I focus on the case without discounting for ease of exposition.)
\begin{lem}\label{lem:1}
$\Gamma_E(t) = a_0 I(t) + a_1 (1-A(t)) + a_2 (1-A(t)^2)$.
\end{lem}
\begin{proof}
See the Appendix.
\end{proof}
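The accounting identity in Lemma \ref{lem:1} can also be verified numerically: summing the three per-group losses listed above over random epidemic states reproduces the closed form. The parameter values in the sketch below are illustrative assumptions.

```python
import random

# Numerical check of the loss-accounting identity in Lemma 1.
# Parameter values are illustrative assumptions.
a0, a1, a2, alpha = 1.0, 2.0, 3.0, 0.8

def direct_loss(N, I, RI, dN):
    """Sum the three per-group losses relative to the no-virus benchmark."""
    A = (1 - alpha * dN) * N + RI                 # availability for social activity
    sick = I * (a0 + a1 + a2)                     # incapacitated agents lose everything
    recovered = RI * a2 * (1 - A)                 # lose social activity only
    not_yet_sick = N * (a1 * alpha * dN +
                        a2 * (1 - (1 - alpha * dN) * A))
    return sick + recovered + not_yet_sick

def lemma_loss(N, I, RI, dN):
    """Closed form from the lemma: a0*I + a1*(1-A) + a2*(1-A^2)."""
    A = (1 - alpha * dN) * N + RI
    return a0 * I + a1 * (1 - A) + a2 * (1 - A * A)

random.seed(0)
max_gap = 0.0
for _ in range(1000):
    N = random.random()
    I = random.uniform(0.0, 1.0 - N)
    RI = 1.0 - N - I                              # masses sum to one
    dN = random.random()
    max_gap = max(max_gap, abs(direct_loss(N, I, RI, dN)
                               - lemma_loss(N, I, RI, dN)))
```

The two computations agree up to floating-point error for every sampled state, as the proof's algebra implies.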
\subsection{Individual welfare and equilibrium behavior}\label{subsection:model:behavior}
Each agent seeks to minimize her own total\footnote
{The analysis can be trivially extended to allow for discounting of future losses.} losses during the course of the entire epidemic.
Let $l_i(t)$ denote agent $i$'s \textit{flow loss} at time $t$. As discussed earlier: $l_i(t) = a_0 + a_1 + a_2$ if $i$ is sick; $l_i(t) = a_2(1-A(t))$ if $i$ is well and not distancing, where $A(t)$ is others' availability for social interaction; and $l_i(t) = a_1 \alpha + a_2 (1 - A(t) + \alpha A(t))$ if $i$ is well and distancing.
Let $L_{\omega}(t)$ denote agent $i$'s expected future total losses starting from time $t$ if in epidemiological state $\omega \in \{S,C,I,R_C,R_I\}$, referred to as ``continuation losses from state $\omega$.'' (Continuation losses depend on future agent behavior and the future trajectory of the epidemic, but I suppress such notation as much as possible for ease of exposition.)
A susceptible agent who becomes infected at time $t$ will not notice this transition but, at that moment, her continuation losses change from $L_S(t)$ to $L_C(t)$. Let $H(t) \equiv L_C(t) - L_S(t)$ denote the ``harm of susceptible exposure'' at time $t$.
Let $p_{i}(t)$ denote agent $i$'s subjective belief about her own likelihood of being susceptible at time $t$, conditional on being not-yet-sick.
At time $t$, mass $N(t)$ of agents are not-yet-sick, of whom mass $S(t)$ remain susceptible. Thus, $N$-agents' average likelihood of being susceptible is $\frac{S(t)}{N(t)}$. For simplicity, I will focus on epidemics with \textit{symmetric behavior} by all those in the same information state at each point in time, in which case $p_{i}(t) = \frac{S(t)}{N(t)}$.
\paragraph{Gain from distancing: reduced exposure.}
Suppose that, at time $t$, agent $i$ distances with intensity $d_i \in [0,1]$ and other $N$-agents distance themselves with intensity $d_N \in [0,1]$.
Agent $i$ is then exposed to the virus at rate $\beta (1-\alpha d_i) (1-\alpha d_N)C(t)$, compared to being exposed at rate $\beta (1-\alpha d_N)C(t)$ if not distancing at all. The ``gain from distancing'' at time $t$, denoted $GAIN_t(d_i, d_N)$, is therefore
\begin{equation}\label{eqn:GAIN}
GAIN_t(d_i, d_N) = \alpha d_i \beta (1-\alpha d_N)C(t) \times H(t) \times \frac{S(t)}{N(t)}.
\end{equation}
The marginal gain from distancing, $MG_t(d_N) = \frac{\mathrm{d}\, GAIN_t(d_i,d_N)}{\mathrm{d} d_i}$, is then
\begin{equation}\label{eqn:MG}
MG_t(d_N) = \alpha (1-\alpha d_N) \frac{\beta S(t)C(t)H(t)}{N(t)}.
\end{equation}
Note that the marginal gain from distancing is decreasing in $d_N$. This is because, as others distance themselves more, agents face less risk of exposure.
\paragraph{Economic cost of distancing: reduced activity.}
If other $N$-agents choose distancing intensity $d_N$, agent $i$ gets flow economic benefit $a_0 + a_1 (1-\alpha d_i) + a_2 (1-\alpha d_i) ((1-\alpha d_N) N(t) + R_I(t))$ when choosing distancing intensity $d_i$, compared to $a_0 + a_1 + a_2 ((1-\alpha d_N) N(t) + R_I(t))$ when not distancing at all. The ``cost of distancing'' at time $t$, denoted $COST_t(d_i, d_N)$, is therefore
\begin{equation}\label{eqn:COST}
COST_t(d_i, d_N) = a_1 \alpha d_i + a_2 \alpha d_i ((1-\alpha d_N) N(t) + R_I(t)).
\end{equation}
The marginal cost of distancing $MC_t(d_N) = \frac{\mathrm{d} COST_t(d_i, d_N)}{\mathrm{d} d_i}$ is then
\begin{equation}\label{eqn:MC}
MC_t(d_N) = a_1 \alpha + a_2 \alpha ((1-\alpha d_N) N(t) + R_I(t)).
\end{equation}
Note that the marginal cost of distancing is decreasing in $d_N$. This is because, as others distance themselves more, there are fewer opportunities for social activity.
Because the marginal gain and the marginal cost of distancing are each decreasing in $d_N$, the game that $N$-agents play may exhibit ``strategic substitutes'' or ``strategic complements'' (\cite{bulow1985multimarket}), depending on the epidemic state. By contrast, in \cite{quercioli2006contagious}, there are no sources of strategic complementarity.
\paragraph{``Distancing game'' among agents.}
At each time $t$, the not-yet-sick $N$-agents play a \emph{game} when deciding whether or not to distance. (Sick $I$-agents are incapacitated, while previously sick $R_I$-agents obviously prefer not to distance. Thus, only $N$-agents have a non-trivial decision.) I assume that $N$-agents play a Nash equilibrium (NE) of this game, and focus on symmetric NE in which all $N$-agents choose the same distancing intensity.
\begin{prop}\label{lem:unique}
Given epidemic state $\mathcal{E}(t) = (S(t), C(t), I(t), R_C(t), R_I(t))$ and harm from susceptible exposure $H(t) = L_C(t) - L_S(t)$, the ``time-$t$ distancing game'' played by not-yet-sick agents has a unique symmetric NE, in which agents choose distancing intensity $d_N^*(t)$. In particular: (i) if $MG_t(0) \leq MC_t(0)$, then $d_N^*(t) = 0$; (ii) if $MG_t(1) \geq MC_t(1)$, then $d_N^*(t) = 1$; and (iii) otherwise, if $MG_t(0) > MC_t(0)$ and $MG_t(1) < MC_t(1)$ then
\begin{equation}
d_N^*(t) = \frac{\frac{\beta S(t)C(t)H(t)}{N(t)} - a_1 - a_2(N(t) + R_I(t))}{\alpha \left( \frac{\beta S(t)C(t)H(t)}{N(t)} - a_2 N(t) \right)} \in (0,1).
\end{equation}
\end{prop}
\begin{proof}
The proof is in the Appendix.
\end{proof}
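The proposition's case structure translates directly into a computation. The following sketch implements it with illustrative parameter values, treating the harm of susceptible exposure $H$ as a given input (in the full model $H(t)$ is itself determined by the welfare dynamics of Section \ref{section:equilibrium}).

```python
# Symmetric NE of the time-t distancing game, following the proposition's
# three cases.  Parameter values and inputs below are illustrative.
beta, alpha, a1, a2 = 0.5, 0.8, 2.0, 3.0

def equilibrium_distancing(S, C, N, RI, H):
    """Return the unique symmetric-NE distancing intensity d_N^*(t)."""
    K = beta * S * C * H / N                      # recurring term beta*S*C*H/N
    MG = lambda d: alpha * (1 - alpha * d) * K    # marginal gain of distancing
    MC = lambda d: a1 * alpha + a2 * alpha * ((1 - alpha * d) * N + RI)
    if MG(0) <= MC(0):                            # case (i): no one distances
        return 0.0
    if MG(1) >= MC(1):                            # case (ii): full distancing
        return 1.0
    # case (iii): interior solution where MG_t(d) = MC_t(d)
    return (K - a1 - a2 * (N + RI)) / (alpha * (K - a2 * N))

# Example state with substantial harm from exposure, giving an interior NE.
d_star = equilibrium_distancing(S=0.6, C=0.2, N=0.9, RI=0.05, H=120.0)
```

At an interior solution the marginal gain and marginal cost of distancing coincide by construction; when $H$ is small, case (i) applies and no one distances.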
\noindent Uniqueness of symmetric NE is not obvious, since the time-$t$ distancing game may have either strategic substitutes or strategic complements, depending on the epidemic state and the harm of susceptible exposure. However, uniqueness arises because $N$-agents have a dominant strategy whenever the game has strategic complements.
\paragraph{Equilibrium epidemics.} Let $\mathcal{E}(\mathbf{d}_N)$ denote the epidemic process that results when $N$-agents choose distancing intensity $d_N(t)$ at each time $t$, as determined by the system (\ref{eqn_C}-\ref{eqn_S_sym}). $\mathcal{E}^*$ is referred to as an \textit{equilibrium epidemic process} (or ``behaviorally-constrained epidemic'') if (i) $\mathcal{E}^* = \mathcal{E}(\mathbf{d}_N^*)$ and (ii) given the epidemic process $\mathcal{E}^*$, the time-$t$ distancing game has a symmetric NE in which $N$-agents choose distancing intensity $d_N^*(t)$, for all $t \geq 0$.
\section{Equilibrium Epidemic Analysis} \label{section:equilibrium}
This section characterizes all equilibrium epidemics with symmetric\footnote{I do not know if an equilibrium epidemic can exist with asymmetric behavior by symmetric agents. } agent behavior (or, more simply, ``equilibrium epidemics'') and provides an algorithm for computing them.
The analysis is organized as follows.
First, the augmented system of differential equations that governs economic-epidemiological dynamics in the Nash SIR model is presented. This system builds on the well-known system that governs epidemiological dynamics in the SIR model.
Second, it is shown that, for any given ``final prevalences'' $(S(T), C(T), I(T), R_I(T))$ at time $T$ when distancing ends (due to a perfect vaccine being introduced), there is at most one equilibrium epidemic having these final prevalences.
\subsection{Economic-epidemiological dynamics} \label{subsection:dynamics}
At any given time $t$, the epidemic is characterized by: (i) the mass of agents in each epidemiological state ($S(t)$, $C(t)$, $I(t)$, $R_C(t)$, $R_I(t)$); (ii) the welfare of agents in each epidemiological state (as captured by state-contingent ``continuation losses'' $L_S(t)$, $L_C(t)$, $L_I(t)$, $L_{R_C}(t)$, $L_{R_I}(t)$); and (iii) the distancing behavior of agents who are not yet sick ($d_N^*(t)$).
\paragraph{Epidemiological dynamics.}
The dynamics of the epidemic state $\mathcal{E}(t) = (S(t)$, $C(t)$, $I(t)$, $R_C(t)$, $R_I(t))$ up until time $T$ are determined by equations (\ref{eqn_C}-\ref{eqn_S_sym}) and depend on $N$-agents' distancing behavior. After the vaccine is introduced at time $T$, equations (\ref{eqn_C}-\ref{eqn_RI}) remain unchanged but, because there is no further transmission of the virus, $S'(t) = 0$.
\paragraph{Distancing behavior.}
Proposition \ref{lem:unique} characterizes $N$-agents' distancing behavior $d_N(t)$ at each time $t$, depending on the epidemic state $\mathcal{E}(t)$ and the harm of susceptible exposure $H(t) = L_C(t) - L_S(t)$.
\paragraph{Welfare dynamics.} It remains to characterize how the continuation losses associated with each epidemiological state change over time.
\noindent \emph{State $R_I$.}
Agents who have recovered from sickness remain well\footnote{The analysis can be extended in a straightforward way to allow for the possibility of re-infection, for instance, by having recovered agents transition back at some rate to the susceptible state.} and choose not to distance. Such an agent still suffers from the fact that others are distancing, losing social-activity benefit $a_2(\alpha d_N^*(t)N(t) + I(t))$ relative to the no-virus benchmark in which everyone earns flow benefit $a_0 + a_1 + a_2$. Because these losses are ``sunk'' once experienced, and because $R_I$-agents do not transition to any other state,
\begin{equation}\label{eqn:LRI}
L_{R_I}'(t) = - a_2(\alpha d_N^*(t)N(t) + I(t)).
\end{equation}
After time $T$ when new transmission stops, all social distancing stops, i.e., $d_N^*(t) = 0$ for all $t > T$. However, well agents still suffer from not being able to engage socially with those who are sick. In particular, $L_{R_I}(t) = \int_{t' \geq t} a_2 I(t') \mathrm{d}t'$ for all $t \geq T$.
\noindent \emph{State $I$.}
Sick agents incur flow loss $a_0 + a_1 + a_2$ and transition to the recovered state $R_I$ at rate $\gamma$. Thus,
\begin{equation}\label{eqn:LI}
L_{I}'(t) = - (a_0 + a_1 +a_2) + \gamma (L_I(t) - L_{R_I}(t)).
\end{equation}
\noindent \emph{State $R_C$.}
Agents who have recovered from carriage never learn that they are immune, and so continue to distance themselves throughout the entire epidemic. In particular, these agents lose public-activity benefit $a_1 \alpha d_N^*(t)$, lose social-activity benefit $a_2 (1 - (1-\alpha d_N^*(t))((1-\alpha d_N^*(t))N(t) + R_I(t)))$, and never transition to another state. Thus,
\begin{equation}\label{eqn:LRC}
L_{R_C}'(t) = - a_1 \alpha d_N^*(t) - a_2 (1 - (1-\alpha d_N^*(t))((1-\alpha d_N^*(t))N(t) + R_I(t))).
\end{equation}
After time $T$, because all social distancing stops and $R_C$-agents do not become sick, their only subsequent losses come from not being able to interact with other people who are sick, the same as $R_I$-agents. So, $L_{R_C}(t) = L_{R_I}(t)$ for all $t \geq T$.
\noindent \emph{State $C$.}
Agents with asymptomatic infection incur the same flow losses due to social distancing as all not-yet-sick agents (including those in state $R_C$), but transition to sickness at rate $\sigma$ and to asymptomatic recovery at rate $\gamma$. Thus,
\begin{equation}\label{eqn:LC}
L_C'(t) = L_{R_C}'(t) + \sigma (L_I(t) - L_C(t)) + \gamma (L_{R_C}(t) - L_C(t)).
\end{equation}
\noindent \emph{State $S$.}
Susceptible agents incur the same flow losses as all other not-yet-sick agents, but become asymptomatically infected at rate
$\beta (1-\alpha d_N^*(t))^2 C(t)$. Thus,
\begin{equation}\label{eqn:LS}
L_S'(t) = L_{R_C}'(t) + \beta (1-\alpha d_N^*(t))^2 C(t) (L_C(t) - L_S(t)).
\end{equation}
After time $T$, $S$-agents remain susceptible and only suffer from not being able to interact with others who are sick, the same as $R_I$-agents. So, $L_S(t) = L_{R_I}(t)$ for all $t \geq T$.
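The five loss equations above form a linear ODE system in the continuation losses, driven by the epidemic state and the equilibrium distancing path. As a hedged illustration (the paper itself contains no code, and the function and argument names here are my own), the sketch below collects their right-hand sides in one Python function. It assumes the $C \to I$ transition occurs at rate $\sigma$ and the $C \to R_C$ transition at rate $\gamma$, consistent with the branching fraction $\sigma/(\gamma+\sigma)$ used in the algorithm subsection, and a per-agent infection hazard $\beta(1-\alpha d_N)^2 C(t)$.

```python
def loss_derivatives(L, epi, d_N, params):
    """Right-hand sides of the continuation-loss equations (LRI)-(LS).

    L   = (L_RI, L_I, L_RC, L_C, L_S)   continuation losses
    epi = (S, C, I, RC, RI)             epidemic state
    Assumes C -> I at rate sigma, C -> R_C at rate gamma, and a
    per-agent infection hazard beta * (1 - alpha*d_N)**2 * C.
    """
    a0, a1, a2, alpha, beta, gamma, sigma = params
    S, C, I, RC, RI = epi
    L_RI, L_I, L_RC, L_C, L_S = L
    N = S + C + RC                        # not-yet-sick agents
    A = (1 - alpha * d_N) * N + RI        # aggregate social activity
    dL_RI = -a2 * (alpha * d_N * N + I)
    dL_I = -(a0 + a1 + a2) + gamma * (L_I - L_RI)
    dL_RC = -a1 * alpha * d_N - a2 * (1 - (1 - alpha * d_N) * A)
    dL_C = dL_RC + sigma * (L_I - L_C) + gamma * (L_RC - L_C)
    dL_S = dL_RC + beta * (1 - alpha * d_N) ** 2 * C * (L_C - L_S)
    return (dL_RI, dL_I, dL_RC, dL_C, dL_S)
```

Coupled with the epidemiological equations, these right-hand sides are what the backward-tracing procedure of the next subsection integrates.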
\subsection{Algorithm} \label{subsection:algorithm}
Suppose for a moment that an equilibrium exists with final epidemic state $\mathcal{E}(T)$. Here I discuss how to determine numerically whether an equilibrium epidemic exists with this ``final condition'' and, if so, to compute the entire epidemic trajectory.
Observe first that the final epidemic state uniquely pins down the trajectory of the epidemic \textit{after} time $T$. Because there is no new transmission, no one distances and subsequent epidemiological dynamics are trivial: contagious agents leave state $C$ at rate $\gamma + \sigma$, fraction $\frac{\sigma}{\gamma + \sigma}$ becoming sick; sick agents recover at rate $\gamma$; and others remain in their current state. Moreover, because $C(T)$ and $I(T)$ together determine $(I(t): t \geq T)$, they also determine $L_{R_I}(t) = L_{R_C}(t) = L_S(t) = \int_{t' \geq t} a_2 I(t') \mathrm{d}t'$ for all $t \geq T$, which in turn determine $L_C(t)$ and $L_I(t)$ after $T$.
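Because the post-vaccine dynamics are linear ($C' = -(\gamma+\sigma)C$, $I' = \sigma C - \gamma I$), the integral $\int_{t' \geq T} I(t') \mathrm{d}t'$ that determines $L_{R_I}(T)$ even has a closed form: every contagious agent becomes sick with probability $\sigma/(\gamma+\sigma)$, and each sick spell lasts $1/\gamma$ in expectation. A small hedged check, with illustrative parameter values that are not from the paper:

```python
def integral_I_after_T(C_T, I_T, gamma, sigma, dt=1e-3, horizon=200.0):
    """Euler integration of int_T^infty I(t) dt under the post-vaccine
    dynamics C' = -(gamma + sigma) * C,  I' = sigma * C - gamma * I."""
    C, I, total, t = C_T, I_T, 0.0, 0.0
    while t < horizon:
        total += I * dt
        # both updates use the old values of C and I (tuple assignment)
        C, I = C - (gamma + sigma) * C * dt, I + (sigma * C - gamma * I) * dt
        t += dt
    return total

def integral_I_closed_form(C_T, I_T, gamma, sigma):
    # inflow into I: I_T plus fraction sigma/(gamma+sigma) of C_T;
    # each sick spell lasts 1/gamma in expectation
    return (I_T + C_T * sigma / (gamma + sigma)) / gamma
```

So $L_{R_I}(T) = a_2 \, (I(T) + \sigma C(T)/(\gamma+\sigma))/\gamma$, which pins down the terminal losses without numerical integration.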
Having determined $L_S(T)$ and $L_C(T)$, we now know $H(T) = L_C(T) - L_S(T)$, the harm of susceptible exposure just before the vaccine is introduced. Together with the final epidemic state, this uniquely determines $N$-agents' equilibrium distancing intensity just \textit{before} the vaccine is introduced, as characterized in Proposition \ref{lem:unique}.
Having determined $N$-agent behavior $d_N^*(t)$, we now can determine: $S'(T)$ (equation (\ref{eqn_S_sym})) and all other epidemiological dynamics, which remain unchanged (equations (\ref{eqn_C}-\ref{eqn_RI})); $L_{R_I}'(T)$, which in turn determines $L_{I}'(T)$ (equations (\ref{eqn:LRI},\ref{eqn:LI})); and $L_{R_C}'(T)$, which together with $L_{I}'(T)$ determines $L_{C}'(T)$, which in turn determines $L_{S}'(T)$ (equations (\ref{eqn:LRC},\ref{eqn:LC},\ref{eqn:LS})).
In this way, any \textit{candidate epidemic} can be uniquely traced backward over time, from the given final epidemic state (``final condition''), until one of three things happens: (i) the trajectory hits an invalid boundary\footnote{An ``invalid boundary'' is reached if $S(t)$, $C(t)$, $I(t)$, $R_C(t)$, or $R_I(t)$ equals zero at any time $t > 0$. }, in which case no equilibrium epidemic exists with the given final condition; (ii) the backwards trajectory ``ends'' at the desired initial epidemic state $\mathcal{E}(0) = (1 - \Delta, \Delta, 0,0,0)$, in which case a unique equilibrium epidemic exists with the given final condition; or (iii) the backwards trajectory ends at some other initial epidemic state $\mathcal{E}(0) \neq (1 - \Delta, \Delta, 0,0,0)$, in which case no equilibrium epidemic exists with the given final condition.
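The backward-tracing step can be illustrated on the epidemiological subsystem alone. The sketch below is my own illustration with hypothetical parameter values and distancing switched off ($d_N = 0$): it integrates a trajectory forward with a standard RK4 stepper, then traces it backward from the final state, recovering the initial condition up to discretization error.

```python
def f(state, beta=0.5, gamma=0.2, sigma=0.3):
    """Epidemiological dynamics with distancing switched off (d_N = 0):
    S' = -beta*S*C, C' = beta*S*C - (gamma+sigma)*C,
    I' = sigma*C - gamma*I, RC' = gamma*C, RI' = gamma*I."""
    S, C, I, RC, RI = state
    new = beta * S * C
    return (-new, new - (gamma + sigma) * C, sigma * C - gamma * I,
            gamma * C, gamma * I)

def rk4_step(state, dt):
    # classical fourth-order Runge-Kutta step (works for dt < 0 as well)
    k1 = f(state)
    k2 = f(tuple(s + 0.5 * dt * k for s, k in zip(state, k1)))
    k3 = f(tuple(s + 0.5 * dt * k for s, k in zip(state, k2)))
    k4 = f(tuple(s + dt * k for s, k in zip(state, k3)))
    return tuple(s + dt / 6.0 * (p + 2 * q + 2 * r + w)
                 for s, p, q, r, w in zip(state, k1, k2, k3, k4))

def trace(state, dt, steps):
    for _ in range(steps):
        state = rk4_step(state, dt)
    return state

delta = 0.01
start = (1 - delta, delta, 0.0, 0.0, 0.0)
final = trace(start, dt=0.01, steps=300)        # forward to T = 3
recovered = trace(final, dt=-0.01, steps=300)   # trace backward from E(T)
```

In the full algorithm the backward pass also carries the loss variables, and the boundary checks (i)-(iii) are applied along the way.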
\section{Concluding Remarks}\label{section:conclusion}
This paper introduces Nash SIR, an economic-epidemiological model of a viral epidemic that builds on the classic Susceptible-Infected-Recovered (SIR) model of infectious-disease epidemiology. The model departs from the previous literature by focusing on the complementarities associated with the socio-economic activity that can be lost when agents distance themselves to prevent the spread of infection.
\paragraph{A changing game.}
An important complicating feature of this paper's model is that, as the epidemic progresses through its course, the basic strategic structure of the ``distancing game'' that agents play changes over time. For instance, very early in the epidemic when infection remains rare, the distancing game exhibits negative externalities, since agents get little health benefit but suffer substantial economic harm when others distance themselves. However, that changes once infection grows more common, as others' distancing generates greater health benefit. Moreover, the game can shift between having strategic substitutes and strategic complements.
\paragraph{Complementarity and multi-dimensionality of agent actions.}
This paper focuses on a simple context in which the only way to protect oneself from infection is to avoid public and in-person social activity. However, people can also prevent transmission in other ways, such as wearing a mask. Bearing that in mind, it would be interesting to generalize the analysis to allow agents to decide both (i) how much to curtail their public and social activities (``avoidance,'' as in this paper), and (ii) how much to change their behavior during such activities (``vigilance,'' as in \cite{quercioli2006contagious}). The game that agents play in this richer context has an interesting strategic structure, with agents' vigilance decisions always being strategic substitutes, agents' avoidance decisions potentially being either strategic complements or strategic substitutes, more vigilance promoting less avoidance, and more avoidance promoting less vigilance.
\paragraph{Asymmetry and social inequality.} This paper assumes that agents are symmetric for ease of exposition, but this assumption appears to entail meaningful loss of generality. In particular, assuming that all agents are the same at the start of the epidemic obscures important issues related to inequality and social justice. To see why, suppose that agents belong to one of two social classes: ``elites'' who are able to earn income and care for themselves from home (higher $a_0$) and ``non-elites'' whose income and well-being hinge more on being in public social spaces (higher $a_2$). With less to lose by staying at home, elites will distance themselves relatively early during the epidemic. Having distanced less in the past, non-elites will then be more likely than elites to already have been exposed to the virus---further reducing their relative incentive to distance. In the end, the equilibrium trajectory of the epidemic could exacerbate pre-existing inequality, with non-elites bearing the brunt of the burden of the epidemic, being more likely to become sick and suffering more from the economic contraction associated with elite-driven distancing.
\pagebreak
\appendix
\section{Mathematical proofs}
\paragraph{Proof of Lemma \ref{lem:1}.}
\begin{proof}
Recall that $A(t) = (1-\alpha d_N(t))N(t) + R_I(t)$ and hence $1-A(t) = I(t) + \alpha d_N(t) N(t)$.
\noindent \textit{Isolated activity:} Sick agents get no benefit, while well agents get full benefit $a_0$. The overall economic loss due to reduced isolated activity at time $t$ is therefore $a_0 I(t)$.
\noindent \textit{Public activity:} Sick agents get no benefit, well agents who do not distance get full benefit $a_1$, and well agents who distance get benefit $a_1 (1-\alpha)$. Since fraction $d_N(t)$ of $N$-agents distance and no $R_I$-agents distance, the overall economic loss due to reduced public activity at time $t$ is therefore $a_1 (I(t) + \alpha d_N(t) N(t)) = a_1(1-A(t))$.
\noindent \textit{Social activity:} Sick agents get no benefit, well agents who do not distance get benefit $a_2 A(t)$, and well agents who distance get benefit $a_2(1-\alpha)A(t)$ (and hence lose $a_2(1 - A(t) +\alpha A(t))$). The overall economic loss due to reduced social activity at time $t$ is therefore $a_2$ times
\begin{align*}
& I(t) + (1-A(t))(R_I(t) + (1-d_N(t)) N(t))
+ (1-A(t)+\alpha A(t))d_N(t)N(t)
\\
& = {\color{red} I(t)} + (1-A(t))(R_I(t) + (1-d_N(t)) N(t))
+ ((1-A(t))(1-\alpha) + {\color{red}\alpha)d_N(t)N(t)}
\\
& = {\color{red} 1 - A(t)} + (1-A(t)) \left(
{\color{blue}R_I(t) + (1-d_N(t))N(t) + (1-\alpha) d_N(t) N(t)}
\right)
\\
& = (1-A(t)) \times \left( 1 +
{\color{blue}R_I(t) + (1-\alpha d_N(t))N(t)}
\right)
\\
& = (1-A(t)) \times (1 +
{\color{blue}A(t)}) = 1-A(t)^2
\end{align*}
as desired.
\end{proof}
\paragraph{Proof of Proposition \ref{lem:unique}.}
\begin{proof}
(i) \textit{No distancing:} If $MG_t(0) \leq MC_t(0)$, then the time-$t$ distancing game has a symmetric NE in which all agents choose not to distance, i.e., $d_N^*(t) = 0$. To establish uniqueness, note by equations (\ref{eqn:MG}-\ref{eqn:MC}) that $MG_t(0) \leq MC_t(0)$ implies $\frac{\beta S(t)C(t)H(t)}{N(t)} \leq a_1 + a_2(N(t) + R_I(t))$. But then
\begin{align*}
MG_t(1) & = \alpha (1-\alpha) \frac{\beta S(t)C(t)H(t)}{N(t)}
\\
& \leq \alpha (1-\alpha) (a_1 + a_2(N(t) + R_I(t)))
\\
& < \alpha (a_1 + a_2((1-\alpha)N(t) + R_I(t)))
\\
& = MC_t(1)
\end{align*}
Since $MG_t(d_N)$ and $MC_t(d_N)$ are each linear in $d_N$, the fact that $MG_t(0) \leq MC_t(0)$ and $MG_t(1) < MC_t(1)$ implies that $MG_t(d_N) < MC_t(d_N)$ for all $d_N \in (0,1]$. In particular, $N$-agents have a dominant strategy not to distance.
(ii) \textit{Maximal distancing:} If $MG_t(1) \geq MC_t(1)$, then a symmetric NE exists in which all agents choose to distance as much as possible, i.e., $d_N^*(t) = 1$. To establish uniqueness, note by equations (\ref{eqn:MG}-\ref{eqn:MC}) that $MG_t(1) \geq MC_t(1)$ implies $(1-\alpha)\frac{\beta S(t)C(t)H(t)}{N(t)} \geq a_1 + a_2((1-\alpha)N(t) + R_I(t))$. But then
\begin{align*}
MG_t(0) & = \alpha \frac{\beta S(t)C(t)H(t)}{N(t)}
\\
& \geq \frac{\alpha}{1-\alpha} (a_1 + a_2((1-\alpha)N(t) + R_I(t)))
\\
& > \alpha (a_1 + a_2(N(t) + R_I(t)))
\\
& = MC_t(0)
\end{align*}
Since $MG_t(d_N)$ and $MC_t(d_N)$ are each linear in $d_N$, the fact that $MG_t(1) \geq MC_t(1)$ and $MG_t(0) > MC_t(0)$ implies that $MG_t(d_N) > MC_t(d_N)$ for all $d_N \in [0,1)$. In particular, $N$-agents have a dominant strategy to distance.
(iii) \textit{Intermediate distancing:} If $MG_t(0) > MC_t(0)$ and $MG_t(1) < MC_t(1)$, then it must be that $MG_t'(d_N) = - \frac{\alpha^2 \beta S(t)C(t)H(t)}{N(t)} < - \alpha^2 a_2 N(t) = MC_t'(d_N)$ and hence that there exists a unique $d_N^*(t) \in (0,1)$ such that $MG_t(d_N^*(t)) = MC_t(d_N^*(t))$, $MG_t(d_N) > MC_t(d_N)$ for all $d_N < d_N^*(t)$, and $MG_t(d_N) < MC_t(d_N)$ for all $d_N > d_N^*(t)$. In particular, solving $MG_t(d_N^*(t)) = MC_t(d_N^*(t))$ yields
\begin{equation}
d_N^*(t) = \frac{\frac{\beta S(t)C(t)H(t)}{N(t)} - a_1 - a_2(N(t) + R_I(t))}{\alpha \left( \frac{\beta S(t)C(t)H(t)}{N(t)} - a_2 N(t) \right)}.
\end{equation}
\end{proof}
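For concreteness, the three cases of the proposition translate directly into a small routine. This is an illustrative sketch (the function and argument names are mine), using the linear forms $MG_t(d) = \alpha(1-\alpha d)\,\beta S C H / N$ and $MC_t(d) = \alpha(a_1 + a_2((1-\alpha d)N + R_I))$ implied by the proof:

```python
def equilibrium_distancing(S, C, RC, RI, H, a1, a2, alpha, beta):
    """d_N*(t) per the three cases of the proposition (illustrative)."""
    N = S + C + RC
    K = beta * S * C * H / N                       # shorthand for beta*S*C*H/N
    MG = lambda d: alpha * (1 - alpha * d) * K     # marginal gain of distancing
    MC = lambda d: alpha * (a1 + a2 * ((1 - alpha * d) * N + RI))
    if MG(0) <= MC(0):
        return 0.0          # case (i): no distancing
    if MG(1) >= MC(1):
        return 1.0          # case (ii): maximal distancing
    # case (iii): interior solution of MG(d) = MC(d)
    return (K - a1 - a2 * (N + RI)) / (alpha * (K - a2 * N))
```

In the interior case the returned value satisfies $MG_t(d_N^*) = MC_t(d_N^*)$ by construction.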
\end{document}
\begin{document}
\pagestyle{headings}
\title{
New Developments in Interval Arithmetic and Their Implications for
Floating-Point Standardization\thanks{
Technical report DCS-273-IR, Department of Computer Science,
University of Victoria, Victoria, BC, Canada.}
}
\author{M.H. van Emden}
\institute{Department of Computer Science, \\University of Victoria,
Victoria, Canada \\
\email{[email protected]}\\
\texttt{http://www.cs.uvic.ca/\homedir vanemden/}
}
\maketitle
\begin{abstract}
We consider the prospect of a processor that can perform interval
arithmetic at the same speed as conventional floating-point
arithmetic. This makes it possible for all arithmetic to be performed
with the superior security of interval methods without any penalty in
speed. In such a situation the IEEE floating-point standard needs to be
compared with a version of floating-point arithmetic that is ideal for
the purpose of interval arithmetic. Such a comparison requires a
succinct and complete exposition of interval arithmetic according to
its recent developments. We present such an exposition in this paper.
We conclude that the directed roundings toward the infinities and the
definition of division by the signed zeros are valuable features of the
standard. Because the operations of interval arithmetic are
always defined, exceptions do not arise. As a result neither NaNs nor
exceptions are needed. Of the status flags, only the inexact flag may
be useful. Denormalized numbers seem to have no use for interval
arithmetic; in the use of interval constraints, they are a handicap.
\end{abstract}
{\bf Keywords:} interval arithmetic, IEEE floating-point standard,
extended interval arithmetic, exceptions
\section{Introduction}
Continuing advances in process technology have caused a tremendous
increase in the number of transistors available to the designer of a
processor chip. As a result, multiple parallel floating-point units
become feasible. The time will soon come when interval arithmetic can
be done as fast as conventional arithmetic.
However, to properly utilize the newly available number of transistors,
chip designers need to spend more time iterating through cycles of
synthesis, place-and-route, and physical verification than current
design methodology allows. This design bottleneck makes it desirable
to simplify floating-point units. Recent developments in the
theory of interval arithmetic suggest possibilities for simplification.
As far as interval arithmetic is concerned, certain parts of the 1985
standard are essential, whereas other parts are superfluous or even a
liability.
In this paper we present a consolidated, self-contained account of the
new developments in interval arithmetic that is not available elsewhere.
In the conclusions we sketch what an ideal floating-point standard would
look like from the point of view of interval arithmetic.
\section{Why interval arithmetic?}
Conventional, non-interval, numerical analysis is marvelously cheap and
it works most of the time. This was exactly what was needed in the
1950s when computers needed a demonstration of feasibility. A lot has
changed since that time. Numerical computation no longer needs to be
cheap. It has become more important that it always works. As a result
interval arithmetic is becoming an increasingly compelling
alternative. For example, civil, mechanical, and chemical engineers are
liable for damage due to unsound design. So far, they have been able to
get away with the use of conventional numerical analysis, appealing to
what appears to be best practice. As interval methods mature, it is
becoming harder to ignore them when defining ``best practice''.
Recent developments, which we call ``modern interval arithmetic'',
provide a practical and mathematically compelling basis. In
conjunction with this it has become clear that some aspects of IEEE
standard 754 are not needed or are detrimental, whereas other aspects
are marvelously suited to interval arithmetic. If these latter are
preserved in future development of the standard, then interval
arithmetic can help bridge the design gap and lead to the situation
where all arithmetic can be faster due to interval methods.
\section{A theory of approximation}
Conventional numerical analysis approximates each real by a single
floating-point number. It approximates each of the elementary
operations on reals by the floating-point operation that has the same
name, but not the same effect. Let us call this approach ``point
approximation''. It has been amply documented that this approach,
though often satisfactory, can lead to catastrophic errors. This can
happen because it is not known to what degree a floating-point variable
approximates its real-valued counterpart.
Interval arithmetic is based on a theory of approximation, {\em
set approximation}, that ensures that for every real-valued variable
$x$ in the mathematical model, there is a machine-representable set $X$
of reals that contains $x$. Such arithmetic is exact in the
sense that $x \in X$ is, and remains, a mathematically true statement.
It is of course not exact in the sense that $X$ typically
contains many reals.
Conversely, in case of numerical difficulties, it will turn out that
continued iteration does not reduce the size of $X$, in which case we
have a notification that there is a problem with the algorithm or with
the model. Because of this property, we call this {\em manifest}
approximation: there is always a known lower bound to the quality of
the approximation.
In this way the operations can be interpreted as
inference rules of logic; for example, $x \in X$ and $y \in Y$ imply
that $x+y \in Z$, where $Z$ is computed from $X$ and $Y$.
\section{Approximation structures}
We address the situation where we need to solve a mathematical model in
which a variable takes on values that are not representable in a
computer, but where it is possible to so represent {\em sets} of
values. We then approximate a variable by a representable set that
contains all the values that are possible according to the model.
Models with real-valued variables are but one example of such a
situation.
As the theory of set approximation applies to sets in general, we
first present it this way.
\begin{definition}
\label{approxStruct}
Let $\mathcal{T}$ be the type of a variable $x$, that is, the set of the
possible values for $x$. A finite set $\mathcal{A}$ of subsets of $\mathcal{T}$ that contains
$\mathcal{T}$ and that is closed under intersection is called an {\em
approximation structure in $\mathcal{T}$}.
\end{definition}
In a typical practical application of this theory, $\mathcal{A}$ is
a set of computer-representable subsets of $\mathcal{T}$.
\begin{theorem}
\label{approxTh}
If $\mathcal{A}$ is an approximation structure for $\mathcal{T}$, then
for every $S \subset \mathcal{T}$ there exists a unique least
(in the sense of the set-inclusion partial order) element
$S'$ of $\mathcal{A}$ such that $S \subset S'$.
\end{theorem}
\begin{definition}
\label{phi}
For every $S \subset \mathcal{T}$, $\phi(S)$ is the unique least element of $\mathcal{A}$
that exists according to theorem~\ref{approxTh}.
\end{definition}
We regard $\phi(S)$ as the approximation of $S$.
As $S$ can be a singleton set, this theory
provides approximations both of elements and subsets of $\mathcal{T}$.
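Definition~\ref{approxStruct} and theorem~\ref{approxTh} can be exercised on a toy finite example. In the sketch below (an illustration of mine, not tied to any particular machine representation), $\mathcal{A}$ is a small intersection-closed family of subsets of $\{1,\dots,6\}$ and $\phi$ picks out the least element containing a given set:

```python
# A toy approximation structure on T = {1,...,6}: it contains T and is
# closed under intersection ({1,2,3} & {3,4,5} = {3} is included).
T = frozenset(range(1, 7))
A = {T, frozenset({1, 2, 3}), frozenset({3, 4, 5}), frozenset({3}),
     frozenset()}

def phi(S, A):
    """Least element of A containing S (unique by the theorem)."""
    containing = [X for X in A if S <= X]
    best = min(containing, key=len)
    # leastness check: the minimum must be contained in every candidate
    assert all(best <= X for X in containing)
    return best
```

Here $\phi(\{1\}) = \{1,2,3\}$, while $\phi(\{2,4\})$ falls back to $T$ itself because no smaller member of $\mathcal{A}$ contains both elements.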
\section{An approximation structure for the reals}
We seek a set of subsets of the reals that can serve as approximation
structure. A first step, not yet computer-representable, is that of
the closed, connected sets of reals.
\begin{theorem}
\label{cloConn}
The closed, connected subsets of $\mathcal{R}$ are an approximation
structure for $\mathcal{R}$.
\end{theorem}
{\em Proof}
According to a well-known result in topology,
all closed connected subsets of $\mathcal{R}$ have one of the following
four forms:
$\{x \in \mathcal{R} \mid a \leq x \leq b \}$,
$\{x \in \mathcal{R} \mid x \leq b \}$,
$\{x \in \mathcal{R} \mid a \leq x \}$, and
$\mathcal{R}$
where $a$ and $b$ are reals.
Here we do not exclude $a > b$, because the empty set is also
included among the closed connected sets.
Clearly the conditions for an approximation structure are satisfied:
$\mathcal{R}$ is included and the intersection of any two sets of this form is
again closed and connected.
This completes the proof.
The significance of the closed connected sets of reals as approximation
structure is that they can be represented as a pair of extended reals.
\begin{definition}
\label{extReals}
The {\em extended reals} are the set obtained by adding to the reals
the two infinities. As with the reals, the extended reals are totally
ordered. When two extended reals are finite, then they are ordered
within the extended reals as they are in the reals. Furthermore,
$-\infty$ is less than any real and $+\infty$ is greater than any
real.
\end{definition}
Theorem~\ref{cloConn} together with definition~\ref{extReals}
suggest the following notation for the four forms of the closed
connected sets of reals. In this notation we do not include the empty
interval. The reason is that if it is found in an interval computation
that an interval is empty, any further operations involving the
corresponding variable yield the same result, so that the computation
can be halted.
\begin{definition} \label{brackets}
Let $a$ and $b$ be reals such that $a \leq b$.
\begin{eqnarray*}
\langle a,b \rangle & \stackrel{\rm def}{=} &
\{x \in \mathcal{R} \mid a \leq x \leq b \} \\
\langle -\infty,b \rangle & \stackrel{\rm def}{=} &
\{x \in \mathcal{R} \mid x \leq b \} \\
\langle a,+ \infty \rangle & \stackrel{\rm def}{=} &
\{x \in \mathcal{R} \mid a \leq x \} \\
\langle -\infty, +\infty \rangle & \stackrel{\rm def}{=} & \mathcal{R}
\end{eqnarray*}
\end{definition}
Note that each of these pairs denotes a set of reals, even though the
infinities are used in the notation; the infinities themselves are not reals.
\section{Floating-point intervals: a finite approximation structure for
the reals}
Let $F$ be a finite set of reals.
\begin{theorem}
\label{finApprox}
The sets of the form
$\emptyset$,
$\langle -\infty,b \rangle$,
$\langle a,b \rangle$,
$\langle a,+\infty \rangle$, and
$\langle -\infty,+\infty \rangle$
are an approximation structure when $a$ and
$b$ are restricted to elements of $F$ such that $a \leq b$.
\end{theorem}
\begin{definition}
\label{flptOrReal}
The {\em real intervals} are sets of the form described in
theorem~\ref{cloConn}.
The {\em floating-point intervals} are sets of the form described in
theorem~\ref{finApprox}, where $a$ and $b$ are finite IEEE 754
floating-point numbers (according to a choice of format: single-length,
double-length, extended) such that $a \not= -0$, $b \not= +0$, and $a
\leq b$.
\end{definition}
From definitions \ref{brackets}
and \ref{flptOrReal} and theorem \ref{finApprox} we conclude:
\begin{itemize}
\item
The restriction on the sign of zero bounds
in definition~\ref{flptOrReal} is there to make
the notation unambiguous. We will see that disambiguating the notation
in this way has an advantage for interval division.
\item
$\{0\}$ is written as $\langle+0,-0\rangle$.
\item
When $\langle a,b \rangle$ is a floating-point interval, then $a \not= +\infty$
and $b \not= -\infty$.
\end{itemize}
Let us take care to distinguish ``real intervals'' from
``floating-point intervals''. Both are sets of reals. The latter are a
subset of the former.
From now on we assume
the floating-point intervals
as approximation structure when we rely on
the fact that for any set $S$ of reals there is a unique least
floating-point interval $\phi(S)$ containing it.
\begin{definition}
For any real $x$, $x^-$ ($x^+$) is the left (right) bound
of $\phi(\{x\})$.
\end{definition}
This operation is implemented by performing a floating-point
operation that yields $x$ in rounding mode toward $-\infty$
($+\infty$).
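In software that lacks control over the hardware rounding mode, a conservative substitute is to widen a round-to-nearest result by one unit in the last place: since the nearest result is within half an ulp of the exact value, stepping one ulp outward yields valid, if slightly pessimistic, bounds. A hedged Python sketch using \texttt{math.nextafter} (available from Python 3.9); this is a software stand-in, not the directed rounding described above:

```python
import math

def round_down(x):
    """One ulp toward -infinity: a software stand-in for x^-."""
    return math.nextafter(x, -math.inf)

def round_up(x):
    """One ulp toward +infinity: a software stand-in for x^+."""
    return math.nextafter(x, math.inf)
```

Unlike true directed rounding, this widens even exact results by one ulp, which is sound but not tight.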
\section{Interval Arithmetic}
Much of the standard is concerned with defining, signaling, and
trapping exceptions caused by overflow, underflow, and undefined
operations. What distinguishes modern interval arithmetic from the old
is that {\em no exceptions occur}. As we will see, no operation can
result in a NaN. Every operation is defined on all operands. Moreover,
it is defined in such a way that the floating-point endpoints bound the
set of the real numbers that are the possible values of the associated
variable in the mathematical model.
This property is based on the use of {\em set extensions} of the
arithmetical operations. It is helped by the use of {\em relational
definition} rather than functional ones of these operations. We
discuss these in turn.
\paragraph{Set extensions of functions}
Whenever a function $f$ is defined on a set $S$ and has values in a set
$T$, there exists the {\em canonical set extension} $\widehat f$, which
is a function defined on the subsets of $S$ and has as values subsets
of $T$ according to $\widehat f(X) = \{ f(x) \mid x \in X\}$ for any $X
\subset S$. This definition is of interest because it also carries over
to partial functions and to multivalued functions.
Though $X$ may be an approximation of $x$, $\widehat f(X)$ may not be an
element of an approximation structure of $T$, so is not necessarily
an approximation of $f(x)$. But $\phi(\widehat f(X))$ does approximate $f(x)$.
Thus $\phi$ induces a transformation among functions.
It changes $f$ to
the function that maps $x$ to $\phi(\widehat f(\{x\}))$.
The {\em inverse canonical set extension} of $f$ is defined as
$f^{-1}(Y) = \{ x \mid f(x) \in Y \}$. This definition is of interest
because such an inverse is defined even when $f$ itself has no
inverse.
By using the canonical set extensions of a function, one ensures that
undefined cases never arise. By considering instead of the
arithmetical operations on the reals their canonical set extensions
to suitably selected sets of reals (namely, floating-point intervals),
undefined cases are eliminated.
An example of a set extension for arithmetical operations is
$X+Y = \{x + y \mid x \in X \wedge y \in Y\}$. Though $X$ and $Y$ may be
floating-point intervals, that is typically not the case for
$\{x + y \mid x \in X \wedge y \in Y\}$. So
to ensure that addition is closed in the set of floating-point
intervals, we need to apply $\phi$, as shown below in the formulas for
interval operations that go back to R.E. Moore \cite{moore66}.
\begin{eqnarray}
X+Y & = &
\phi(\{x + y \mid x \in X \wedge y \in Y\}) \nonumber\\
X-Y & = &
\phi(\{x - y \mid x \in X \wedge y \in Y\}) \nonumber\\
X*Y & = &
\phi(\{x * y \mid x \in X \wedge y \in Y\}) \nonumber\\
X/Y & = &
\phi(\{x / y \mid x \in X \wedge y \in Y\}) \nonumber
\end{eqnarray}
Regarded as a set extension, the above definition of $X/Y$ is correct
and unambiguous: set extensions are defined just as well for partial
functions, functions that are not everywhere defined. Yet many authors
have subjected it to the condition $0 \not\in Y$, making it useless in
practice. Others have taken a less restrictive stance by changing the
definition to:
$$
X/Y =
\phi(\{x / y \mid x \in X \wedge y \in Y \wedge y \not= 0\}).
$$
\paragraph{Relational definitions}
Ratz \cite{ratz96} has avoided such difficulties by using a relational form
of the above definitions. Although not necessary, this relational form
also makes it possible to define both addition and subtraction with the
same ternary relation $x+y=z$. This leads to an attractive
uniformity in the definition of the interval arithmetic operations.
\begin{definition}
\label{intArithm}
Let $X$ and $Y$ be non-empty floating-point intervals. Then interval
addition, subtraction, multiplication, and division are defined as
follows.
\begin{eqnarray}
X+Y & \stackrel{\rm def}{=} &
\phi(\{z \mid \exists x \in X \wedge
\exists y \in Y.\; x+y=z\}) \nonumber\\
X-Y & \stackrel{\rm def}{=} &
\phi(\{z \mid \exists x \in X \wedge
\exists y \in Y.\; z+y=x\}) \nonumber\\
X*Y & \stackrel{\rm def}{=} &
\phi(\{z \mid \exists x \in X \wedge
\exists y \in Y.\; x*y=z\}) \nonumber\\
X \oslash Y & \stackrel{\rm def}{=} &
\phi(\{z \mid \exists x \in X \wedge
\exists y \in Y.\; z*y=x\}) \nonumber
\end{eqnarray}
\end{definition}
We use the symbol $\oslash$ in $X \oslash Y$ here for interval division
rather than the $X/Y$ defined earlier. There is only a difference
between the two definitions when $\langle 0,0 \rangle$ occurs as an operand.
For details, see \cite{hckvnmdn01}. The difference is immaterial, as
intuition fails in these cases anyway.
The operations thus defined form an interval arithmetic that is {\em
sound} in the sense that the resulting sets contain all the real
values they should contain according to the set-extension definition.
They are {\em closed} in the sense that they are defined for all
interval arguments and yield only interval results. Such an interval
arithmetic never yields an exception.
It remains to show that these definitions can be efficiently computed
by IEEE standard floating-point arithmetic while avoiding the undefined
floating-point operations
$\infty - \infty$, $\pm \infty / \pm \infty$, $0 * \pm
\infty$, and $0/0$. This we do in the next sections.
\subsection{The algorithm for interval addition and subtraction}
\begin{theorem}
If $X=\langle a,b\rangle $ and $Y=\langle c,d\rangle $ are non-empty
floating-point intervals, then
$X+Y$ and $X-Y$ according to definition~\ref{intArithm} are equal to
$\langle (a+c)^-, (b+d)^+ \rangle$ and
$\langle (a-d)^-, (b-c)^+\rangle$, respectively.
\end{theorem}
See \cite{hckvnmdn01}.
The interesting part of the proof takes into account that
adding $a$ and $c$ is undefined if they are infinities with opposite
signs. As, according to definition~\ref{brackets},
$a$ and $c$ are not $+\infty$, this cannot
happen. Similar reasoning shows that $b+d$ is always defined and that
the formula for subtraction cannot give an undefined result.
Thus, in interval addition and subtraction
we achieve the ideal:
{\em Never a NaN}, and this without the need to test.
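A sketch of the addition and subtraction algorithms, with one-ulp outward widening standing in for the directed roundings (hedged: a real implementation would use the hardware rounding modes). The guards on infinite endpoints mirror the observation that a lower bound is never $+\infty$ and an upper bound never $-\infty$:

```python
import math

def down(x):   # outward step toward -inf; infinities are left alone
    return math.nextafter(x, -math.inf) if math.isfinite(x) else x

def up(x):     # outward step toward +inf; infinities are left alone
    return math.nextafter(x, math.inf) if math.isfinite(x) else x

def add(X, Y):
    (a, b), (c, d) = X, Y
    # a and c are never +inf, b and d never -inf, so a+c and b+d are defined
    return (down(a + c), up(b + d))

def sub(X, Y):
    (a, b), (c, d) = X, Y
    # a - d and b - c are likewise always defined
    return (down(a - d), up(b - c))
```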
\subsection{The algorithm for interval multiplication}
If $\langle a,b\rangle $ and $\langle c,d\rangle $ are bounded, real
intervals, then
$$ \langle a,b\rangle * \langle c,d\rangle = \langle \min(S),\max(S)\rangle,$$
where $S = \{a*c, a*d, b*c, b*d\}$.
This formula holds for real rather than floating-point intervals. It
is several steps away from interval arithmetic. When we allow the
bounds to be any floating-point number, we introduce the possibility
that they are infinite. In that case we need to be assured that all
four products in $S$ are defined. Moreover, we want, as much as
possible, to perform only two multiplications, one for each bound. The
above formula always requires four.
To attain these goals, we classify the intervals $\langle a,b\rangle$ and $\langle
c,d\rangle$ according to the signs of their elements, as shown in the table
in Figure~\ref{fig:class}. This classification creates many cases in
which intervals can be multiplied with only one multiplication for each
bound.
\begin{figure}
\caption{Classification of nonempty intervals according to whether
they contain at least one real of the sign indicated at the top of the
second and third columns. Classes $P$ and $N$ are further decomposed
according to whether they have a zero bound.
As only non-empty intervals are classified, we have $u \leq v$.
}
\label{fig:class}
\end{figure}
The classification yields four cases (for multiplication the
subdivisions of $P$ and $N$ do not matter) for each of the operands,
giving at first sight 16 cases. However, when at least one of the
operands classifies as $Z$, several cases collapse. As a result, we are
left with 11 cases.
\begin{theorem}
\label{multTheorem}
If $\langle a,b\rangle $ and $\langle c,d\rangle $ are real intervals, then
$ \langle a,b\rangle * \langle c,d\rangle $
is a real interval whose endpoints are given by the expressions, to
be evaluated as extended reals, in Figure~\ref{multTable1}.
\end{theorem}
\begin{figure*}
\caption{Case analysis for multiplication of real intervals,
$\langle a,b\rangle *\langle c,d\rangle $.}
\label{multTable1}
\end{figure*}
In \cite{hckvnmdn01} the cases indicated as such in the table in
Figure~\ref{multTable1} are proved directly. The other cases can be
proved by symmetry from the case proved already. The symmetries
applied are based on the identities $x*y=-(x*-y)$ or
similar ones shown in the last column in the table.
The proofs first show the correctness of the scalar products for
bounded real intervals. To allow for floating-point intervals, which
can be unbounded, we have to consider whether the products are
defined. Let us consider as an example the top line, according to which
$\langle a,b \rangle * \langle c,d \rangle = \langle a*c, b*d \rangle$. The undefined cases
occur when one operand is 0 and the other $\infty$. It is possible for
$a$ or $c$ to equal 0, but neither can be infinite: because of the
classification $P$, they cannot be $-\infty$; because of their being
lower bounds, they cannot be $+\infty$.
Let us now consider $b*d$. It is possible for $b$ or $d$ to equal
$+\infty$, but neither can be 0 because of the classification $P$. One
may verify that in every case of the table in Figure~\ref{multTable1}
undefined values are avoided by a combination of definitions
\ref{brackets} and \ref{flptOrReal} and the classification of the case
concerned.
We need tests anyway to identify the right case in the table, in order
to minimize the number of multiplications. As a bonus, no additional
tests are needed to avoid undefined values.
Thus, in interval multiplication
we achieve the ideal:
{\em Never a NaN}, and this without the need to test.
\subsection{Division}
For interval multiplication the classification of the interval
operands in the classes $P$, $M$, $N$, and $Z$ is sufficient. For
interval division it turns out that the further subdivision of $P$
into $P_0$ and $P_1$ and of $N$
into $N_0$ and $N_1$ (see the table in Figure~\ref{fig:class})
is relevant for the dividend.
\begin{theorem}
\label{divTheorem}
If $\langle a,b\rangle $ and $\langle c,d\rangle $ are real intervals, then $ \langle
a,b\rangle \oslash \langle c,d\rangle $ is the least floating-point interval
containing the real interval whose endpoints are given in the ``general
formula'' column of Figure~\ref{divTable1}, unless the condition specified
in the next column holds, in which case the result is given
by the ``exception case'' in column 5.
\end{theorem}
\begin{figure*}
\caption{Case analysis for relational division of real intervals, $\langle
a,b\rangle /\langle c,d\rangle $ when $a \leq b$, $c \leq d$.
The last column refers to how the formula has been
proved (``$D$'' for a direct proof; ``$S_1$'' and ``$S_2$'' refer to a
symmetry used to reduce it to an earlier case). The ``class'' labels
$N,N_1,N_0,M,P_0,P_1,P$ are as in Figure~\ref{fig:class}.}
\label{divTable1}
\end{figure*}
In \cite{hckvnmdn01} the cases indicated as such in the table in
Figure~\ref{divTable1} are proved directly. The other cases can be
proved by symmetry from the cases proved already. The symmetries
used are based on the identities $x/y = -(x/(-y))$ (indicated as $S_1$)
and $x/y = -((-x)/y)$ (indicated by $S_2$).
The proofs first show the correctness of the scalar quotients for
bounded real intervals. To allow for floating-point intervals, which
can be unbounded, we have to consider whether the quotients are defined.
In the column labelled ``unless'' we find the values for which an
undefined value occurs. In the ``exception case'' column we find the
correct value for the exception case.
{\em In every case, evaluating the formula in the third column in IEEE
standard floating-point arithmetic in the exception case is defined
and gives the infinity of the right sign,} as shown in column 5.
This property depends on a zero
lower bound being $+0$ and a zero upper bound being $-0$, as required by
definition~\ref{flptOrReal}.
Let us now consider potentially undefined cases. In the case of division
these are $\infty/\infty$ and $0/0$. Consider for example the top
line, according to which
line according to which
$\langle a,b \rangle \oslash \langle c,d \rangle = \langle a/d, b/c \rangle \setminus \{0\}$.
Because of the classification $P_1$, $a$ can be neither infinite nor
zero. This ensures that $a/d$ is defined.
Because of the $P_1$ classification, $b$ cannot be zero.
It is possible for $b$ to be infinite, but not for $c$ because of the
$P$ classification.
This ensures that $b/c$ is defined.
One may verify that in every case of the table in
Figure~\ref{divTable1} undefined values are avoided by a combination
of definition~\ref{flptOrReal} and the classification of the case concerned.
Thus, in relational interval division
we achieve the ideal:
{\em Never a NaN}, and this without the need to test.
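The role of the signed-zero convention can be illustrated with a C sketch (ours, not the paper's code) of one division case. It assumes IEEE 754 semantics (C Annex F), under which $x/0.0$ for finite nonzero $x$ is a signed infinity rather than an error; rounding control is omitted.

```c
#include <assert.h>
#include <math.h>

/* Illustrative sketch: one case of interval division.  Convention from
 * the paper: a zero lower bound is stored as +0.0, a zero upper bound
 * as -0.0, so a division by a zero bound yields an infinity of the
 * correct sign, never a NaN. */
typedef struct { double lo, hi; } interval;

/* P1-dividend (0 < a <= b) divided by P-divisor (c >= +0.0, d > 0):
 * the general formula gives <a,b> / <c,d> = <a/d, b/c>. */
interval idiv_P1_by_P(interval x, interval y) {
    interval r;
    r.lo = x.lo / y.hi;  /* a/d: a finite and nonzero, defined even if d = inf */
    r.hi = x.hi / y.lo;  /* b/c: if c == +0.0 this is +inf, never NaN */
    return r;
}
```

For example, $\langle 1,2\rangle \oslash \langle 0,3\rangle$ evaluates to $\langle 1/3, +\infty\rangle$ with no test for division by zero.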
\section{Related work}
For most of the time since the beginning of interval arithmetic, two
systems have coexisted.
One was the official one, where intervals were bounded, and division
by an interval containing zero was undefined.
Recognizing the impracticality of this approach, there was also a
definition of ``extended'' interval arithmetic
\cite{khn68}
where these limitations
were lifted. Representative of this state of affairs are the
monographs by Hansen \cite{hnsn92} and Kearfott \cite{krftt96}.
However, here the specification of interval division is quite far from an
efficient implementation that takes advantage of the IEEE floating-point
standard. The specification is indirect via multiplication by
the interval inverse. There is no consideration of the possibility of
undefined operations: presumably one is to perform a test before each
operation.
Steps beyond this were taken by Older \cite{ldr89ias} in connection
with the development of BNR Prolog. A different approach has been taken
by Walster \cite{walster98}, who pioneered the idea that intervals are
sets of values rather than abstract elements of an interval algebra.
Walster shares our objective to obtain a closed system of arithmetic
without exceptions. He attains this objective in a different way: by
including the infinities among the possible values of the variables. In
our approach, the variables can only take reals as values; the
infinities are only used for the representation of unbounded sets of
reals. In this way, the conventional framework of calculus, where
variables are restricted to the reals, needs no modification.
\section{Conclusions}
We have presented the result of some recent developments in interval
arithmetic that lead to a system with the following properties.
\begin{itemize}
\item
{\em Correctness}
The interval operations are such that their result includes
all real numbers that are
possible as values of the variables according to the mathematical
model.
\item
{\em Freedom from exceptions}
No floating-point operation needs to raise an exception.
All divisions by zero are
defined and give the correct result: an infinity of the correct
sign. This is achieved by a zero lower (upper) bound being $+0$
($-0$). Mathematically speaking, the system is a closed interval
algebra. We do not emphasize the algebra aspect, because it is not
important whether it has any interesting properties.
Other approaches have limited the applicability of interval arithmetic
in their pursuit of a presentable algebra.
\item
{\em Efficiency}
The system is efficient in that tests are only needed to determine the
right case in the tables in Figures \ref{multTable1} and
\ref{divTable1}. Tests are not necessary to avoid exceptions.
\end{itemize}
\noindent
These properties lead to several observations about the
floating-point standard from the point of view of interval arithmetic:
\begin{itemize}
\item
{\em Exceptions}
Freedom from exceptions has interesting implications for the standard.
A considerable part of the definition effort, and presumably also of
the implementation effort, is concerned with defining, signaling, and
trapping exceptions caused by overflow, underflow and undefined
operations. A processor where the floating-point arithmetic is
interval arithmetic can omit this as unnecessary ballast.
Let us review the five exceptions. {\em Invalid Operation} is
prevented by the design of the algorithms.
{\em Division by Zero} does occur in our interval arithmetic and is
designed to yield the correct result. So it should not be an
exception.
{\em Overflow} occurs in the sense that real arithmetic can produce a
real $x$ such that $\phi(x)$ is the interval between the greatest
finite floating-point number and $+\infty$. This result is
mathematically correct and therefore the desired one.
There is no reason to terminate computation: it should
not be an exception.
{\em Underflow} means that a zero is substituted for a
nonzero bound with very small absolute value. This is correct and no
reason to terminate computation.
{\em Inexact} result: this might be of some use, but is certainly not
essential for interval arithmetic.
\item
{\em Signed zeros}
Often signed zeros are regarded as an unavoidable, but regrettable
artifact of the sign-magnitude format of floating-point numbers.
It is fortunate that the drafters of the standard have nonetheless
taken them seriously and defined sensible conventions for operations
involving zeros. Especially having the right sign of a zero bound
turns out to be useful in interval division.
\item
{\em Denormalized numbers}
From the point of view of interval arithmetic, denormalized numbers seem to be neither
useful nor harmful. It is different from the point of view of interval
constraints. This is a method \cite{bnldr97,vhlmyd97} for using interval
arithmetic to solve systems of constraints with real-valued variables.
Interval arithmetic is used for the basic operations in constraint
propagation. This is an iteration that can be slowed down by
denormalized numbers when the limit is zero, even when operations on
denormalized numbers are performed at normal speed. Thus the presence
of denormalized numbers only plays a role as a performance bug that occurs
gratuitously, and fortunately rarely, in this special case.
An argument that is advanced in favour of denormalized numbers is that
they justify compiler optimizations that rely on
certain mathematical equivalences that hold only in the presence of
denormalized numbers.
This is of no interest from the point of view of interval constraints.
Any mathematically correct transformation can be performed on
the set of constraints without changing the set of solutions obtained
by a correctly implemented interval constraint system.
This correctness is not dependent on the presence of denormalized
numbers. In fact, it only depends on the finite floating-point numbers
being {\em some} subset $F$ of the reals, as described in this paper.
Because of this independence, elaborate symbolic processing far beyond
currently contemplated compiler optimizations is taken for granted in
interval constraints.
\end{itemize}
\section{Acknowledgments}
Many thanks to Belaid Moa for pointing out errors.
We acknowledge generous support from the Natural Sciences and
Engineering Research Council (NSERC).
\end{document}
\begin{document}
\title[Simple blow-up]{Simple blow-up solutions of singular Liouville equations}
\keywords{Liouville equation, quantized singular source, non-simple blow-up, construction of solutions, blow-up solutions, spherical Harnack inequality}
\author{Lina Wu}\footnote{Lina Wu is partially supported by National Natural Science Foundation of China (12201030), China Postdoctoral Science Foundation (2022M720394) and Talent Fund of Beijing Jiaotong University (2022RC028). }
\address{ Lina Wu\\
School of Mathematics and Statistics \\
Beijing Jiaotong University \\
Beijing, 100044, China }
\email{[email protected]}
\date{\today}
\begin{abstract}
In a recent series of important works \cite{wei-zhang-1,wei-zhang-2,wei-zhang-3}, Wei-Zhang proved several vanishing theorems for non-simple blow-up solutions of singular Liouville equations. It is well known that a non-simple blow-up situation happens when the spherical Harnack inequality is violated near a quantized singular source. In this article, we further strengthen the conclusions of Wei-Zhang by proving that if the spherical Harnack inequality does hold, there exist blow-up solutions with non-vanishing coefficient functions.
\end{abstract}
\maketitle
\section{Introduction}
It is well known that the following Liouville equation has a rich background in geometry and physics.
\begin{equation}\label{liou-eq}
\Delta u+h(x)e^{u(x)}=\sum_{t=1}^L4\pi \gamma_t \delta_{p_t}
\quad \mbox{in}\quad \Omega\subset \mathbb R^2,
\end{equation}
where $\Omega$ is a subset of $\mathbb R^2$, $p_1,\ldots,p_L$ are $L$ points in $\Omega$ and $4\pi\gamma_t\delta_{p_t}$ ($t=1,\ldots,L$) are Dirac masses placed at $p_t$. Since applications require integrability of $e^{u}$, we assume $\gamma_t>-1$ for each $t$.
Equation \eqref{liou-eq} is one of the most extensively studied elliptic partial differential equations in recent years. In conformal geometry, \eqref{liou-eq} is related to the well-known Nirenberg problem when all $\gamma_t=0$. Recent progress on this problem can be found in Kazdan-Warner \cite{kazdan-warner}, Chang-Gursky-Yang \cite{chang-gursky-yang}, Chang-Yang \cite{chang-yang}, Cheng-Lin \cite{cheng-lin}, and the references therein. If some $\gamma_t\neq 0$, \eqref{liou-eq} arises in the study of conformal metrics with conic singularities; see Fang-Lai \cite{fang-lai}, Troyanov \cite{troy}, Wei-Zhang \cite{wei-zhang-pacific}. Also, it serves as
a model equation in the Chern-Simons-Higgs theory and in Liouville systems; the interested reader may consult Chanillo-Kiessling \cite{chani-kiess}, Spruck-Yang \cite{spruck-yang}, Tarantello \cite{taran-C}, Yang \cite{yang-y}, and the references therein.
It is well known that if there is no singularity in (\ref{liou-eq}), $h\equiv 1$ and $\int_{\mathbb R^2}e^u{\rm d}x<\infty$, a global solution belongs to a family described by three parameters (see \cite{chen-li-duke}). Then Y. Y. Li \cite{li-cmp} proved the first uniform approximation theorem, which confirms that around a regular blow-up point, the profile of a blow-up sequence is close to that of a sequence of global solutions. Later Chen-Lin \cite{chen-lin-sharp}, Zhang \cite{zhang-cmp}, Gluck \cite{gluck}, Bartolucci et al.\ \cite{unique-4}
improved Li's estimate by obtaining better pointwise estimates and some gradient estimates. It turns out that the blow-up point has to be a critical point of a function determined by the coefficient function, which plays a crucial role in applications. In the non-quantized case, the classification theorem was proved by Prajapat-Tarantello, and the uniform estimate was obtained by Bartolucci-Chen-Lin-Tarantello \cite{bart-chen-lin-tarant}, Bartolucci-Tarantello \cite{bart-taran}, and Zhang \cite{zhang-ccm}. The most difficult case is when the singular source is quantized. Here the first breakthrough was obtained by Kuo-Lin in \cite{kuo-lin}, and then independently by Bartolucci-Tarantello in \cite{bart-taran}. In this case, if the spherical Harnack inequality is violated near a quantized singular source, the profile of the bubbling solutions exhibits multiple local maxima. Here, a sequence of bubbling solutions satisfying the \emph{spherical Harnack inequality} means that the oscillation of the solutions on each fixed circle around the singular point is uniformly bounded; Kuo-Lin use the term \emph{non-simple blow-up} to describe the violation of this inequality. In a recent series of works \cite{wei-zhang-1,wei-zhang-2,wei-zhang-3}, Wei-Zhang proved the first vanishing theorems for the non-simple blow-up case. Their two main results can be stated as follows:
Let $\{u_k\}_{k=1}^{\infty}$ be a sequence of blow-up solutions of
\begin{equation}\label{main-2}
\Delta u_k+|x|^{2\alpha}h_k(x)e^{u_k(x)}=0, \quad \mbox{in}\quad B_1
\end{equation}
where $h_k$ is a sequence of smooth, positive functions in $B_1$:
\begin{equation}\label{h-assump}
\frac 1{c_1}\le h_k(x)\le c_1, \quad \|\nabla^{\beta}h_k(x)\|_{B_1}\le c_1, \quad x\in B_1, \quad |\beta |=1,2,3.
\end{equation}
for some $c_1>0$. Let $0$ be the only blow-up point of $u_k$ in $B_1$,
and suppose $u_k$ has a bounded oscillation on $\partial B_1$:
\begin{equation}\label{BOF}
|u_k(x)-u_k(y)|\le C,\quad \forall x,y\in \partial B_1,
\end{equation}
and a uniform bound on its integral:
\begin{equation}\label{unif-en}
\int_{B_1}|x|^{2\alpha}h_k(x)e^{u_k(x)}{\rm d}x<C
\end{equation}
for some $C>0$ independent of $k$. In their first vanishing theorem Wei-Zhang proved that
\emph{Theorem A: (Wei-Zhang). Let $u_k$ be a sequence of non-simple blow-up solutions around the origin. Suppose $0$ is the only blow-up point in $B_1$ and
$u_k$ satisfies (\ref{main-2}),(\ref{BOF}) and (\ref{unif-en}). Then along a sub-sequence
$$\lim_{k\to \infty}\nabla (\log h_k+\psi_k)(0)=0$$
where $\psi_k$ is the harmonic function that eliminates the finite oscillation of $u_k$ on $\partial B_1$:
\begin{equation}\label{psi-eq}
\Delta \psi_k=0,\quad \mbox{in}\quad B_1, \quad \psi_k(x)=u_k(x)-\frac{1}{2\pi}\int_{\partial B_1}u_k,\quad x\in \partial B_1.
\end{equation}
}
In their recent work, Wei-Zhang further proved the following Laplacian vanishing theorem:
\emph{Theorem B: (Wei-Zhang). Let $u_k$ be the same as in Theorem A. Then along a subsequence,
$$\lim_{k\to \infty}\Delta (\log h_k)(0)=0.$$
}
It is important to point out that in both Theorem A and Theorem B, the blow-up sequence has to be \emph{non-simple}; this assumption implies that $\alpha\in \mathbb N$. Both Theorem A and Theorem B are powerful tools in applications, since equation (\ref{main-2}) represents a number of situations in more general equations and systems. For example, in the author's recent joint work with Wei and Zhang \cite{wei-wu-zhang}, we proved that under certain conditions on the coefficient function and the Gauss curvature, all blow-up points of Toda systems are simple.
The purpose of this article is twofold. First, if $\alpha=0$ in (\ref{main-2}) and $0$ is the only blow-up point, it is well known (see \cite{chen-lin-sharp, gluck,zhang-cmp}) that along a sub-sequence
$\lim_{k\to \infty} \nabla (\log h_k+\psi_k)(0)=0$. It has long been suspected that this property does not hold if $\alpha$ is not an integer. This is indeed verified in our first main theorem:
\begin{thm}\label{main-thm}
For any given $\alpha>-1$ with $\alpha\not\in \mathbb N\cup \{0\}$, there exists a sequence $h_k$ satisfying (\ref{h-assump}) and
\begin{equation}\label{h-assum-2}
|\nabla \log h_k(0)+\nabla \psi_k(0)|\ge c_1,\quad |\Delta \log h_k(0)|\ge c_1
\end{equation}
for some $c_1>0$.
Corresponding to $h_k$ there is a sequence of blow-up solutions $u_k$ of (\ref{main-2}) such that the origin is its only blow-up point, (\ref{BOF}) and (\ref{unif-en}) hold for $u_k$, and $u_k$ satisfies the spherical Harnack inequality around the origin.
\end{thm}
The second goal is to prove that
when $\alpha\in \mathbb N\cup \{0\}$, we can construct a sequence of \emph{simple} blow-up solutions that
does not satisfy the Laplacian vanishing theorem.
\begin{thm}\label{main-thm-2}
Let $\alpha\in \mathbb N\cup \{0\}$. There exists a sequence of blow-up solutions $\{u_k\}_{k=1}^{\infty}$ of (\ref{main-2}) having $0$ as its only blow-up point in $B_1$. Moreover $\{u_k\}$ satisfies (\ref{BOF}) and (\ref{unif-en}), and the coefficient $h_k$ satisfies (\ref{h-assump}) and
$$|\Delta (\log h_k)(0)|\ge c, \,\, \mbox{for a constant }\,\, c>0 \,\, \mbox{independent of $k$}.$$
\end{thm}
Theorem \ref{main-thm} settles the conjecture that around a non-quantized singular source, the vanishing theorems do not hold. Theorem \ref{main-thm-2} proves that the \emph{non-simple} blow-up assumption in Theorem B is essential: if it is violated, the corresponding Laplacian vanishing property also fails. However, this article does not provide a similar example for the first-order vanishing theorem in Theorem A.
The paper is organized as follows: In Section \ref{non-quan}, we establish Theorem \ref{main-thm}. Our proof is based on a thorough understanding of the linearized operator of a model equation. It is also essential that we analyze the Fourier series of some correction terms and prove its convergence. In Section \ref{quan}, we establish Theorem \ref{main-thm-2}; the key point of the proof is to use a radial coefficient function and reduce all the iterations to radial cases. This method allows us to avoid kernel functions in the linearized equation corresponding to the quantized case.
\section{Non-quantized situation}\label{non-quan}
In this section, we consider the non-quantized case. In other words, we set $\alpha>-1$ and $\alpha\notin\mathbb{N}\cup\{0\}$. It is known that the spherical Harnack inequality holds around the origin when $\alpha$ is not an integer (see \cite{kuo-lin}).
Denote $\lambda_k=u_k(0)$ and $\epsilon_k=e^{-\frac{\lambda_k}{2(1+\alpha)}}$. Let $v_k$ be the scaling of $u_k$:
\begin{equation*}
v_k(y)=u_k(\epsilon_ky)+2(1+\alpha)\log \epsilon_k,\quad y\in\Omega_k:=B(0,\epsilon_k^{-1}).
\end{equation*}
Clearly, we need to construct $v_k$ to satisfy
\begin{equation}\label{euq-v-k}
\left\{\begin{array}{lcl}
\Delta v_k(y)+|y|^{2\alpha}h_k(\epsilon_ky)e^{v_k(y)}=0,&& {\rm in} \ \, \Omega_k, \\
v_k(0)=0, \\
|v_k(y_1)-v_k(y_2)|\le C,&& {\rm for\ \ any} \ \, y_1,y_2\in\partial\Omega_k,\\
v_k(y)\to-2\log(1+|y|^{2+2\alpha}),&& {
\rm in}\ \, C_{loc}^{\beta}(\mathbb{R}^2)
\end{array}
\right.
\end{equation}
where $\beta\in (0,1)$.
It suffices to construct $\{v_k\}$ satisfying \eqref{euq-v-k}. Since we can choose $h_k$, we require $h_k(0)= 8(1+\alpha)^2$ for convenience. Let
\begin{equation*}
U_k(y)=-2\log(1+|y|^{2+2\alpha})
\end{equation*}
be a standard bubble that satisfies
\begin{equation}\label{equ-U-k}
\Delta U_k(y)+8(1+\alpha)^2|y|^{2\alpha}e^{U_k(y)}=0 \quad {\rm in} \ \, \mathbb{R}^2.
\end{equation}
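Indeed, writing $r=|y|$ and using $\Delta=\frac{1}{r}\frac{\mathrm{d}}{\mathrm{d}r}\big(r\frac{\mathrm{d}}{\mathrm{d}r}\big)$ for radial functions, \eqref{equ-U-k} can be checked directly:
\begin{equation*}
r\,U_k'(r)=-\frac{4(1+\alpha)r^{2+2\alpha}}{1+r^{2+2\alpha}},
\qquad
\Delta U_k=\frac{1}{r}\frac{\mathrm{d}}{\mathrm{d}r}\big(r\,U_k'\big)
=-\frac{8(1+\alpha)^2r^{2\alpha}}{(1+r^{2+2\alpha})^2}
=-8(1+\alpha)^2r^{2\alpha}e^{U_k}.
\end{equation*}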
Here we note that a uniform estimate of Bartolucci-Chen-Lin-Tarantello \cite{bart-chen-lin-tarant} assures that any blow-up solution $v_k$ of (\ref{euq-v-k}) satisfies
\begin{equation*}
|v_k(y)-U_k(y)|\leq C,\quad y\in \Omega_k.
\end{equation*}
We will construct our solutions based on the expansion of $v_k$ established in \cite{zhang-ccm}. Firstly, let us recall some notations and results in \cite{zhang-ccm}. Denote
\begin{equation*}
\begin{split}
g_k(r)&=-\frac{1}{4\alpha(1+\alpha)}\frac{r}{1+r^{2+2\alpha}},\quad r=|y|, \\
c_1^k(y)&=g_k(r)\epsilon_k\sum_{j=1}^{2}\partial_jh_k(0)\theta_j,\quad \theta_j=\frac{y_j}{r}\ \ (j=1,2).
\end{split}
\end{equation*}
Then $c_1^k$ satisfies
\begin{equation}\label{equ-phi-k}
\Delta c_1^k+8(1+\alpha)^2|y|^{2\alpha}e^{U_k(y)}c_1^k=-\sum_{j=1}^{2}\epsilon_k\partial_jh_k(0)y_j|y|^{2\alpha}e^{U_k(y)}\quad {\rm in} \ \, \Omega_k.
\end{equation}
It is shown in \cite{zhang-ccm} that $c_1^k$ is the second term in the expansion of $v_k$ when $\alpha>0$ is not an integer. For the case $-1<\alpha<0$, Bartolucci-Yang-Zhang \cite{byz} established the same result.
Here we point out that the radial part of $c_1^k$ decays like $\epsilon_k r^{-1-2\alpha}$ at infinity. In particular, for $r=\epsilon_k^{-1}$, the angular part of the function is comparable to $\epsilon_k^{2+2\alpha}e^{i\theta}$, which means this term contributes no oscillation on the boundary. So as long as $|\nabla \log h_k(0)|\ge 2c>0$, we have $|\nabla \log h_k(0)+\nabla \psi_k(0)|\ge c$.
For the convenience of the readers, we comment that the construction of $c_1^k$ is essentially solving
\begin{equation}\label{equ-l}
\frac{d^2}{dr^2}g+\frac{1}{r}\frac{d}{dr}g+\Big(8(1+\alpha)^2r^{2\alpha}e^{U_k}-\frac{l^2}{r^2}\Big)g=-r^{1+2\alpha}e^{U_k},\quad r>0,
\end{equation}
with $l=1$. From the proof of Lemma 2.1 in \cite{zhang-ccm}, we know two fundamental solutions $F_1$ and $F_2$ of the homogeneous equation of \eqref{equ-l} can be written explicitly as follows:
\begin{equation}\label{two-fun}
\begin{split}
F_{1}(r)&=\frac{(\frac{l}{1+\alpha}+1)r^l+(\frac{l}{1+\alpha}-1)r^{l+2(1+\alpha)}}{1+r^{2(1+\alpha)}}, \\
F_{2}(r)&=\frac{(\frac{l}{1+\alpha}+1)r^{-l+2(1+\alpha)}+(\frac{l}{1+\alpha}-1)r^{-l}}{1+r^{2(1+\alpha)}}.
\end{split}
\end{equation}
Therefore, $g$ can be written explicitly in terms of the two fundamental solutions above by the standard ODE method.
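For the reader's convenience, we sketch the standard computation (the representation below is ours, not taken from \cite{zhang-ccm}): with the Wronskian $W(r)=F_1(r)F_2'(r)-F_1'(r)F_2(r)$, variation of parameters yields the particular solution
\begin{equation*}
g(r)=F_1(r)\int_{r_1}^{r}\frac{F_2(t)\,t^{1+2\alpha}e^{U_k(t)}}{W(t)}\,\mathrm{d}t
-F_2(r)\int_{r_2}^{r}\frac{F_1(t)\,t^{1+2\alpha}e^{U_k(t)}}{W(t)}\,\mathrm{d}t,
\end{equation*}
where the base points $r_1$, $r_2$ are chosen so that $g$ is bounded at the origin and at infinity.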
To motivate the addition of further correction terms, we use the decay of $c_1^k$ to
obtain
\begin{align}\label{tem-2}
&\Delta (U_k+c_1^k)+8(1+\alpha)^2r^{2\alpha}e^{U_k+c_1^k}\\
=&\Delta U_k+\Delta c_1^k+8(1+\alpha)^2r^{2\alpha}e^{U_k}\big(1+c_1^k+\frac{(c_1^k)^2}{2}\big)
+O(\epsilon_k^3)(1+r)^{-7-8\alpha}\nonumber \\
=&8(1+\alpha)^2r^{2\alpha}e^{U_k}\frac{(c_1^k)^2}{2}-\epsilon_k\sum_j\partial_jh_k(0)\theta_jr^{1+2\alpha}e^{U_k}+O(\epsilon_k^3)(1+r)^{-7-8\alpha}.\nonumber
\end{align}
Next we write the expansion of $h_k(\epsilon_ky)$:
\begin{equation}\label{h-expan}
\begin{split}
h_k(\epsilon_ky)=&8(1+\alpha)^2+\epsilon_k\nabla h_k(0)\cdot y+\frac{\epsilon_k^2}2\partial_{11}h_k(0)(y_1^2-\frac{|y|^2}{2})
+\frac{\epsilon_k^2}2\partial_{22}h_k(0)\cdot \\
&(y_2^2-\frac{|y|^2}{2})
+\epsilon_k^2\partial_{12}h_k(0)
y_1y_2+\frac{\epsilon_k^2}4\Delta h_k(0)|y|^2+O(\epsilon_k^3)|y|^3 \\
=&8(1+\alpha)^2+\epsilon_k\nabla h_k(0)\cdot y+\epsilon_k^2r^2\Theta_2+\frac 14\epsilon_k^2 r^2\Delta h_k(0)+O(\epsilon_k^3)r^3.
\end{split}
\end{equation}
where
\begin{equation*}
\begin{split}
\Theta_2:=&\frac 12\partial_{11}h_k(0)(\theta_1^2-\frac 12)+\partial_{12}h_k(0)\theta_1\theta_2+\frac 12\partial_{22}h_k(0)(\theta_2^2-\frac 12)\\
=&\frac 14(\partial_{11}h_k(0)-\partial_{22}h_k(0))\cos 2\theta +\frac 12 \partial_{12}h_k(0)\sin 2\theta.
\end{split}
\end{equation*}
Based on (\ref{equ-phi-k}), (\ref{tem-2}) and (\ref{h-expan}) we have
\begin{align}\label{euq-Uk-c1k}
\begin{split}
&\Delta (U_k+c_1^k)+h_k(\epsilon_ky)|y|^{2\alpha}e^{U_k+c_1^k}\\
=&r^{2\alpha}e^{U_k}\Big(8(1+\alpha)^2\frac{(c_1^k)^2}2+\Theta_2\epsilon_k^2r^2+\frac 14 \epsilon_k^2r^2\Delta h_k(0)+\epsilon_k\nabla h_k(0)\cdot y c_1^k \\
&+O(\epsilon_k^3 (1+r)^3)\Big).
\end{split}
\end{align}
Now we compute $(c_1^k)^2$:
\begin{align*}
(c_1^k)^2=&\epsilon_k^2g_k^2(\partial_1 h_k(0) \cos \theta+\partial_2 h_k(0) \sin\theta)^2\\
=&\epsilon_k^2g_k^2\Big(\frac{|\nabla h_k(0) |^2}2+\frac 12
((\partial_1 h_k(0))^2-(\partial_2h_k(0))^2)\cos 2\theta \\
&+\partial_1 h_k(0)\partial_2 h_k(0) \sin 2\theta\Big)
\end{align*}
Also the remaining term of the order $O(\epsilon_k^2)$ is
\begin{align*}
&\epsilon_k\nabla h_k(0)\cdot y c_1^k\\
=&\epsilon_k^2 g_k r (\partial_1h_k(0)\cos \theta+\partial_2 h_k(0)\sin \theta)^2\\
=&\epsilon_k^2 g_k r \Big(\frac{|\nabla h_k(0)|^2}2+\frac 12(\partial_1h_k(0)^2-\partial_2 h_k(0)^2)\cos 2\theta+\partial_1h_k(0)\partial_2 h_k(0)\sin 2\theta\Big).
\end{align*}
To eliminate the frequency-$2$ terms (those involving $\cos 2\theta$ and $\sin 2\theta$) of order $O(\epsilon_k^2)$ in \eqref{euq-Uk-c1k}, we let $c_2^k$ be the solution of
\begin{align*}
&\Delta c_2^k+8(1+\alpha)^2|y|^{2\alpha}e^{U_k}c_2^k=-r^{2\alpha}e^{U_k}\Big(\epsilon_k^2r^2\Theta_2+\mathcal{A}(\frac{(c_1^k)^2}{2}+\epsilon_k\nabla h_k(0)\cdot yc_1^k)\Big)\\
=&-\epsilon_k^2|y|^{2\alpha}e^{U_k}\Big (r^2\Theta_2
+\big(4(1+\alpha)^2g_k^2+g_kr\big)\cdot
((\partial_1h_k(0))^2-(\partial_2 h_k(0))^2)\frac{\cos 2\theta}2\\
&+\partial_1h_k(0) \partial_2 h_k(0)\sin 2\theta)\Big ).
\end{align*}
Note that $\mathcal{A}(\cdot)$ means the non-radial part of the term in the parenthesis.
Since each term in $c_2^k$ is a product of a radial function and a spherical harmonic function, we set $w_1$ to be a solution of
$$\frac{d^2}{dr^2}w_1+\frac{1}{r}\frac{d}{dr}w_1+\Big(8(1+\alpha)^2r^{2\alpha}e^{U_k}-\frac{4}{r^2}\Big)w_1=r^{2+2\alpha}e^{U_k}$$
with the control of $|w_1(r)|\le C$ for all $r$. Similarly, we set $w_2$ to be a solution of
$$\frac{d^2}{dr^2}w_2+\frac{1}{r}\frac{d}{dr}w_2+\Big(8(1+\alpha)^2r^{2\alpha}e^{U_k}-\frac{4}{r^2}\Big)w_2=r^{2\alpha}e^{U_k}\big(4(1+\alpha)^2g_k^2+g_kr\big)$$
with $|w_2(r)|\le C$ for all $r$. Two fundamental solutions of the corresponding homogeneous equation are given in \eqref{two-fun} with $l=2$. Furthermore, we observe that the non-homogeneous terms have good decay rates at infinity. Therefore, the construction of $w_1$ and $w_2$ is standard. At this point, it is easy to verify that $c_2^k$ can be constructed as
\begin{align*}
c_2^k(y)
=&\epsilon_k^2\Big (w_1(r)\Theta_2+w_2(r) \big ((\partial_1h_k(0))^2-(\partial_2 h_k(0))^2)\frac{\cos 2\theta}2 \\
&+\partial_1h_k(0) \partial_2 h_k(0)\sin 2\theta\big )\Big).
\end{align*}
Finally we use $c_0^k$ to handle the radial term of order $O(\epsilon_k^2)$:
we let $c_0^k$ solve
\begin{align*}
&\Delta c_0^k+8(1+\alpha)^2|y|^{2\alpha}e^{U_k}c_0^k\\
=&-\epsilon_k^2|y|^{2\alpha}e^{U_k}\Big (r^2\frac{\Delta h_k(0)}4+\frac 12|\nabla h_k(0)|^2\big(4(1+\alpha)^2g_k(r)^2+g_k(r)r\big)\Big ).
\end{align*}
Since both $U_k$ and the right-hand side of the above are radial, we can construct $c_0^k$ as a radial function $c_0^k(r)$ that satisfies
$$\left\{\begin{array}{ll}
\frac{d^2}{dr^2}c_0^k(r)+\frac{1}{r}\frac{d}{dr}c_0^k(r)+8(1+\alpha)^2r^{2\alpha}e^{U_k}c_0^k(r)\\
\quad =-\epsilon_k^2r^{2\alpha}e^{U_k}\Big (\frac{r^2}4\Delta h_k(0)+\frac 12|\nabla h_k(0)|^2\big(4(1+\alpha)^2g_k(r)^2+g_k(r)r\big)\Big ).\\
c_0^k(0)=\frac{d}{dr}c_0^k(0)=0.
\end{array}
\right.
$$
We only need to define $c_0^k$ for $0<r<\epsilon_k^{-1}$. It is easy to use the standard ODE method to obtain
\begin{equation}\label{est-c0}
|c_0^k(r)|\le C\epsilon_k^2(1+r)^{-2\alpha}\log (2+r),
\quad 0<r<\epsilon_k^{-1}.
\end{equation}
Set $c_k=c_0^k+c_1^k+c_2^k$; we verify by direct computation that
\begin{equation}\label{base-e}
\Delta (U_k+c_k)+|y|^{2\alpha}h_k(\epsilon_ky)e^{U_k+c_k}
= E_k,
\end{equation}
where
\begin{equation}\label{itera-2}
|E_k(y)|\le c_1\epsilon_k^3(1+|y|)^{-1-2\alpha},\quad y\in \Omega_k.
\end{equation}
So in order to find a solution with a non-vanishing coefficient, we need to find $d_k$ to satisfy
\begin{equation}\label{want-1}\Delta (U_k+c_k+d_k)+|y|^{2\alpha}h_k(\epsilon_ky)e^{U_k+c_k+d_k}=0,\quad \mbox{in}
\quad \Omega_k.
\end{equation}
The difference between (\ref{base-e}) and (\ref{want-1}) gives
\begin{equation}\label{want-2}
\Delta d_k+8(1+\alpha)^2|y|^{2\alpha}e^{U_k}d_k=-E_k-f(d_k).
\end{equation}
where
\begin{equation}\label{itera-3}
f(d_k)=-|y|^{2\alpha}h_k(\epsilon_ky)e^{U_k+c_k}(e^{d_k}-1-d_k)
+|y|^{2\alpha}e^{U_k}(h_k(\epsilon_ky)e^{c_k}-h_k(0))d_k
\end{equation}
is of higher order.
Based on (\ref{want-2}) we design an iteration scheme: Let $d_k^{(0)}\equiv 0$ and
$d_k^{(1)}$ satisfy
$$\Delta d_k^{(1)}+8(1+\alpha)^2|y|^{2\alpha}e^{U_k}d_k^{(1)}=-E_k-f(d_k^{(0)}). $$
In general we shall construct $d_k^{(m+1)}$ that satisfies
$$\Delta d_k^{(m+1)}+8(1+\alpha)^2|y|^{2\alpha}e^{U_k}d_k^{(m+1)}=-E_k-f(d_k^{(m)}) $$
and
\begin{equation}\label{der-d}
d_k^{(m+1)}(0)=|\nabla d_k^{(m+1)}(0)|=0.
\end{equation}
Here we claim that there exists $c_0>0$ independent of $m$ and $k$ such that
\begin{equation}\label{unif-b}
|d_k^{(m)}(y)|\le c_0\epsilon_k^3(1+|y|)^{1-2\alpha}\log (2+|y|).
\end{equation}
The constant $c_0$ will be determined based on $c_1$ later.
To prove this uniform bound, we assume that (\ref{unif-b}) holds for $d_k^{(m)}$, and we shall show that it also holds for $d_k^{(m+1)}$.
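The construction proceeds mode by mode: in our notation, $d_k^{(m+1)}$ is obtained through its Fourier decomposition
\begin{equation*}
d_k^{(m+1)}(r,\theta)=f_0(r)+\sum_{l\ge 1}\big(f_l(r)\cos(l\theta)+\tilde f_l(r)\sin(l\theta)\big),
\end{equation*}
where each mode solves a second-order ODE in $r$ driven by the corresponding projection of $E_k+f(d_k^{(m)})$.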
We first consider the projection onto the constant mode: let $f_0$ be the
projection of $d_k^{(m+1)}$ onto $1$; then $f_0$ solves
\begin{equation*}
\left\{\begin{array}{ll}
f_{0}''(r)+\frac 1r f_{0}'(r)+8(1+\alpha)^2r^{2\alpha}e^{U_k}f_{0}(r)+E_0^k=0, \quad 0<r\leq\epsilon_k^{-1}\\
f_{0}(0)=\frac{\mathrm{d}}{\mathrm{d}r}f_{0}(0)=0,
\end{array}
\right.
\end{equation*}
where $E_0^k$ is the corresponding projection of $E_k+f(d_k^{(m)})$ onto $1$, and satisfies a bound similar to that of $E_k$:
\begin{equation}\label{fur-err}
|E_0^k(r)|\le 2c_1\epsilon_k^3(1+r)^{-1-2\alpha}.
\end{equation}
The reason that $E_0^k$ has a worse coefficient $2c_1$ is that the $d_k^{(m)}$ terms are absorbed.
We denote the two fundamental solutions of the homogeneous equation of $f_{0}$ as $u_1$ and $u_2$, where
$$
u_1(r)=\frac{1-r^{2+2\alpha}}{1+r^{2+2\alpha}},
$$
and $u_2(r)$ is comparable to $\log r$ near $0$ and infinity. Based on standard ODE theory,
\begin{equation*}
f_{0}(r)=-u_{1}(r)\int_0^{r} t E_0^k(t)u_{2}(t)\mathrm{d}t-u_{2}(r)\int_0^rt E_0^k(t)u_{1}(t)\mathrm{d}t.
\end{equation*}
Estimating the integrals above, we obtain
\begin{equation*}
f_{0}(r)=O(\epsilon_k^3(1+r)^{1-2\alpha}r|\log(r)|),\ \ \mathrm{at}\ \ 0,\quad f_{0}(r)=O(\epsilon_k^3(1+r)^{1-2\alpha}\log(r)),\ \ \mathrm{at}\ \ \infty.
\end{equation*}
In other words, we have the following estimate for $f_{0}$
$$
|f_{0}(r)|\le c_0\epsilon_k^3(1+r)^{1-2\alpha}\log(2+r),
$$
where $c_0$ is a constant independent of $m$ and $k$ that depends only on $c_1$.
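As a quick sanity check (not part of the proof), one can verify numerically that $u_1$ annihilates the radial linearized operator when $e^{U_k}$ is replaced by its standard bubble profile $(1+r^{2+2\alpha})^{-2}$; this profile, and the sample value $\alpha=1/3$, are assumptions made only for the illustration.

```python
import sympy as sp

r = sp.symbols('r', positive=True)
a = sp.Rational(1, 3)                       # sample value of alpha (assumption)
s = 2 + 2*a
u1 = (1 - r**s)/(1 + r**s)                  # first fundamental solution
V = 8*(1 + a)**2 * r**(2*a)/(1 + r**s)**2   # 8(1+alpha)^2 r^{2 alpha} e^{U}

# residual of the homogeneous radial equation u'' + u'/r + V u = 0
residual = sp.diff(u1, r, 2) + sp.diff(u1, r)/r + V*u1
samples = [abs(residual.subs(r, v).evalf()) for v in (sp.Rational(1, 2), 2, 5)]
print(samples)  # all numerically ~ 0
```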
Next, we consider the projections on high frequencies. For $l\in\mathbb{N}^+$, let $f_{l}$ satisfy
\begin{equation*}
\left\{\begin{array}{ll}
f_{l}''(r)+\frac{1}{r}f_{l}'(r)+\Big(8(1+\alpha)^2r^{2\alpha}e^{U_k}-\frac{l^2}{r^2}\Big)f_{l}(r)+E_{2,k}^l=0, \quad 0<r\leq\epsilon_k^{-1}\\
f_{l}(0)=0.
\end{array}
\right.
\end{equation*}
Here $E_{2,k}^l$ ($l\ge 1$) is the radial part of the projection of the error term onto $\cos(l\theta)$:
\begin{equation*}
E_{2,k}^l(r)=\frac{1}{2\pi}\int_0^{2\pi}E_{k}\cos{(l\theta)}\mathrm{d}\theta.
\end{equation*}
The estimate of $E_{2,k}^l$ is
\begin{equation}\label{e2k-l}
|E_{2,k}^l(r)|\le 2 c_1\epsilon_k^3(1+r)^{-1-2\alpha}
\end{equation}
In order to find $f_{l}$ we use two fundamental solutions $F_{1}$ and $F_{2}$ of the homogeneous equation, whose explicit expressions can be seen in \eqref{two-fun}.
As one can see, $F_{1}$ is comparable to $r^l$ at the origin and at infinity, and $F_{2}$ is comparable to $r^{-l}$ at the origin and at infinity. At this point, we can construct $f_{l}$ as follows
\begin{equation*}
f_{l}(r)=-F_{1}(r)\int_r^{\infty}\frac{t}{2l}E_{2,k}^l(t)F_{2}(t)\mathrm{d}t-F_{2}(r)\int_0^r\frac{t}{2l}E_{2,k}^l(t)F_{1}(t)\mathrm{d}t.
\end{equation*}
Integrating the identity above, we find that
$$
|f_{l}(r)|\le \frac{c_2}{l^2}\epsilon_k^3(1+r)^{1-2\alpha},
$$
where $c_2$ is a constant independent of $l$. It is easy to see that $f_{l}(0)=0$. Furthermore, the sum of the projections over all $\cos(l\theta)$ $(l\ge 1)$ converges. That is,
\begin{equation*}
\big|\sum_{l\ge 1}f_{l}(r)\big|\le c_2\epsilon_k^3(1+r)^{1-2\alpha}\sum_{l\ge1}\frac{1}{l^2}\le 2c_2\epsilon_k^3(1+r)^{1-2\alpha}.
\end{equation*}
In the same way we can construct the projection on $\sin(l\theta)$ for all $l\ge 1$, denoted $\tilde{f}_{l}$, and the sum of the $\tilde{f}_{l}$ converges as well. Hence $d_k^{(m+1)}$ is well-defined and satisfies the estimate \eqref{unif-b}.
Thus, by the Brouwer fixed point theorem, we obtain the existence of $d_k$, and the construction is complete in this case.
The claim about the Laplacian term follows directly from the construction. The construction in the non-quantized case is complete.
\section{Quantized situation}\label{quan}
Let $N$ be a positive integer. Our goal is to construct a sequence of blow-up solutions $u_k$ of
$$\Delta u_k+|x|^{2N}h_k(x)e^{u_k(x)}=0 \quad \mbox{in}\quad B_1 $$
such that the spherical Harnack inequality holds around the origin, the origin is the only blow-up point in $B_1$, and $\Delta \log h_k(0)$ does not tend to zero. Here $\psi_k$ is the harmonic function that eliminates the oscillation of $u_k$ on $\partial B_1$.
The main result of this section is to prove the following theorem.
\begin{thm}\label{lap-non}
For any $N\in \mathbb N$, there exists $h_k(x)$ satisfying \eqref{h-assump} and a sequence of blow-up solutions $u_k$ of \eqref{main-2}, \eqref{BOF}, \eqref{unif-en} such that $u_k$ is simple and $|\Delta (\log h_k)(0)|\ge c$ for some $c>0$ independent of $k$.
\end{thm}
\noindent{\bf Proof of Theorem \ref{lap-non}:} We set
$$h_k(x)=8(N+1)^2+|x|^2,\quad x\in B_1.$$
Obviously
$$\nabla \log h_k(0)=0,\quad \Delta (\log h_k)(0)=\frac{\Delta h_k(0)}{h_k(0)}=\frac{1}{2(1+N)^2}.$$
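The constant above can be checked symbolically. The following sketch computes $\nabla\log h_k(0)$ and $\Delta(\log h_k)(0)$ for $h_k(x)=8(N+1)^2+|x|^2$ in $\mathbb{R}^2$; the value $N=3$ is an arbitrary sample.

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2', real=True)
N = 3                                   # sample value of the quantization order
h = 8*(N + 1)**2 + x1**2 + x2**2        # h_k(x) = 8(N+1)^2 + |x|^2
logh = sp.log(h)

grad0 = [sp.diff(logh, v).subs({x1: 0, x2: 0}) for v in (x1, x2)]
lap0 = (sp.diff(logh, x1, 2) + sp.diff(logh, x2, 2)).subs({x1: 0, x2: 0})
print(grad0, lap0)  # [0, 0] 1/32, i.e. 1/(2*(1+N)^2)
```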
Let $v_k$ be the scaling of $u_k$ according to the maximum of $u_k$: set
$$\epsilon_k=e^{-\frac{u_k(0)}{2(1+N)}}$$ and
$$v_k(y)=u_k(\epsilon_ky)+2(1+N)\log \epsilon_k. $$
The equation for $v_k$ is
$$\Delta v_k+(8(N+1)^2+\epsilon_k^2|y|^2)|y|^{2N}e^{v_k}=0.$$
Our goal is to construct $v_k$ satisfying the equation above based on the global solution $U_k$. The classification theorem of Prajapat-Tarantello \cite{prajapat} gives the standard bubble of $\Delta U+8(N+1)^{2}|y|^{2N}e^{U}=0$:
\begin{equation*}
U(y)=\log \frac{\lambda}{\big(1+\lambda|y^{N+1}-\xi|^2\big)^2}
\end{equation*}
where $\lambda>0$ and $\xi\in\mathbb{C}$ are parameters.
Setting $\lambda=1$ and $\xi=0$ in $U$, we use the radial $U_k(y)$:
$$U_k(y)=\log \frac{1}{\big(1+|y|^{2N+2}\big)^2}. $$
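The following snippet (a sanity check, not part of the argument) verifies symbolically that this radial profile solves the Liouville-type bubble equation for the first few values of $N$.

```python
import sympy as sp

r = sp.symbols('r', positive=True)
residuals = []
for N in range(4):
    U = sp.log(1/(1 + r**(2*N + 2))**2)          # standard bubble, lambda=1, xi=0
    lap = sp.diff(U, r, 2) + sp.diff(U, r)/r     # radial Laplacian in R^2
    residuals.append(sp.simplify(lap + 8*(N + 1)**2 * r**(2*N) * sp.exp(U)))
print(residuals)  # [0, 0, 0, 0]
```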
Here we note that $\partial_{\lambda}|_{\lambda=1}U$, $\partial_{\xi}|_{\xi=0} U$ and $\partial_{\bar \xi}|_{\xi=0}U$ form a basis for the kernel of the linearized operator:
\begin{align*}
\partial_{\lambda}\big|_{\lambda=1}U&=\frac{1-r^{2N+2}}{1+r^{2N+2}}, \\
\partial_{\xi}\big|_{\xi=0}U&=\frac{2r^{N+1}}{1+r^{2N+2}}e^{-i(N+1)\theta}, \\
\partial_{\bar \xi}\big|_{\xi=0}U&=\frac{2r^{N+1}}{1+r^{2N+2}}e^{i(N+1)\theta}.
\end{align*}
Because of this, we see that corresponding to $N$ the functions
$$\frac{2r^{N+1}}{1+r^{2N+2}}\sin ((N+1)\theta),\quad
\frac{2r^{N+1}}{1+r^{2N+2}}\cos ((N+1)\theta) $$ belong to the kernel; this is the reason we only obtain the non-vanishing estimate for $\Delta (\log h_k)(0)$. It would be interesting to construct a simple blow-up sequence with non-vanishing first-order coefficients.
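As an illustration, one can check symbolically that the radial factor $2r^{N+1}/(1+r^{2N+2})$, paired with the angular mode $N+1$, lies in the kernel of the linearized operator (a sketch for the first few $N$):

```python
import sympy as sp

r = sp.symbols('r', positive=True)
residuals = []
for N in range(4):
    m = N + 1
    eU = 1/(1 + r**(2*m))**2         # e^{U_k} with lambda=1, xi=0
    g = 2*r**m/(1 + r**(2*m))        # radial part of the kernel functions
    L = (sp.diff(g, r, 2) + sp.diff(g, r)/r
         + (8*m**2 * r**(2*N) * eU - m**2/r**2)*g)
    residuals.append(sp.simplify(L))
print(residuals)  # [0, 0, 0, 0]
```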
Based on the fact $h_k(\epsilon_ky)=8(N+1)^2+\epsilon_k^2|y|^2$ and the equation of $U_k$, we have
\begin{equation*}
\Delta U_k+h_k(\epsilon_ky)|y|^{2N}e^{U_k}=\epsilon_k^2|y|^{2N+2}e^{U_k}.
\end{equation*}
In order to deal with the right-hand side of the equation above, we let $c_k$ solve
\begin{equation*}
\Delta c_k+8(N+1)^2|y|^{2N}e^{U_k}c_k=-\epsilon_k^2|y|^{2N+2}e^{U_k}.
\end{equation*}
Similarly to $c_0^k$ in the non-quantized case, we can construct $c_k$ as a radial function $c_k(r)$ satisfying
$$\left\{\begin{array}{ll}
\frac{d^2}{dr^2}c_k(r)+\frac 1r\frac{d}{dr}c_k(r)+8(N+1)^2r^{2N}e^{U_k}c_k(r)=-\epsilon_k^2r^{2N+2}e^{U_k}, \quad 0<r<\epsilon_k^{-1}\\
c_k(0)=\frac{d}{dr}c_k(0)=0.
\end{array}
\right.
$$
By the standard ODE method, we obtain the estimate as in \eqref{est-c0}:
\begin{equation}\label{est-c0-q}
|c_k(r)|\le C\epsilon_k^2(1+r)^{-2N}\log (2+r),
\quad 0<r<\epsilon_k^{-1}.
\end{equation}
Note that $e^{U_k+c_k}=e^{U_k}(1+c_k+O(\epsilon_k^4))$. By direct computation, we obtain
\begin{equation}\label{base-e-q}
\Delta (U_k+c_k)+|y|^{2N}h_k(\epsilon_ky)e^{U_k+c_k}
= E_k.
\end{equation}
Here $E_k$ is radial and satisfies
\begin{equation}\label{itera-2-q}
|E_k(y)|\le c\epsilon_k^3(1+|y|)^{-1-2N},\quad y\in \Omega_k,
\end{equation}
where $c$ is a positive constant independent of $k$.
Then we set $v_k=U_k+c_k+b_k$. Subtracting the equations for $U_k$ and $c_k$, we write the equation of $b_k$ as
\begin{equation}\label{equ-bk}
\Delta b_k+|y|^{2N}h_k(\epsilon_ky)e^{U_k+c_k+b_k}-|y|^{2N}h_k(\epsilon_ky)e^{U_k+c_k}=-E_k,\quad |y|\le \tau \epsilon_k^{-1}.
\end{equation}
The equation can be further written as
\begin{equation}\label{equ-bk-2}
\Delta b_k+8(N+1)^2|y|^{2N}e^{U_k}b_k=-E_k-\hat{f}(b_k),\quad |y|\le \tau \epsilon_k^{-1}
\end{equation}
where
$$\hat{f}(b_k)=-|y|^{2N}h_k(\epsilon_k y)e^{U_k+c_k}(e^{b_k}-1-b_k)+|y|^{2N}e^{U_k}(h_k(\epsilon_ky)e^{c_k}-h_k(0))b_k.$$
Similar to the non-quantized case, we construct $b_k$ by iteration. Let $b_k^{(0)}\equiv 0$ and let $b_k^{(1)}$ satisfy
\begin{equation*}
\Delta b_k^{(1)}+8(N+1)^2r^{2N}e^{U_k}b_k^{(1)}=-E_k-\hat f(b_k^{(0)}).
\end{equation*}
In general we construct $b_k^{(m+1)}$ satisfying
\begin{equation*}
\Delta b_k^{(m+1)}+8(N+1)^2r^{2N}e^{U_k}b_k^{(m+1)}=-E_k-\hat f(b_k^{(m)})
\end{equation*}
and $b_k^{(m+1)}(0)=0$.
Denote $F_k^m(r)=-E_k(r)-\hat{f}(b_k^{(m)}(r))$. Then, by the iteration method as before, we set
$$\frac{d^2}{dr^2}b_k^{(m+1)}+\frac 1r\frac{d}{dr}b_k^{(m+1)}+8(N+1)^2r^{2N}e^{U_k}b_k^{(m+1)}=F_k^m, \quad 0<r<\tau \epsilon_k^{-1},$$
and
$$b_k^{(m+1)}(0)=0.$$
The homogeneous equation has two fundamental solutions, one is
$$u_1=\frac{1-r^{2N+2}}{1+r^{2N+2}}.$$
The second fundamental solution $u_2$ satisfies $|u_2(r)|\le C\log \frac{1}r$ near $0$ and $\infty$. We can construct $b_k^{(m+1)}$ from $b_k^{(m)}$ as
$$b_k^{(m+1)}(r)=u_1(r)\int_0^rtF_k^m(t)u_2(t){\rm d}t+u_2(r)\int_0^rtF_k^m(t)u_1(t){\rm d}t. $$
If $b_k^{(m)}(t)$ satisfies
$$|b_k^{(m)}(t)|\le C\epsilon_k^2\log (2+t),$$
one can verify by direct computation that $b_k^{(m+1)}$ satisfies the same bound. Thus by the standard Brouwer fixed point theorem, there is a $b_k$ such that
$$\Delta b_k+8(N+1)^2r^{2N}e^{U_k}b_k=-E_k-\hat{f}(b_k).$$
Theorem \ref{lap-non} is established. $\Box$
\end{document}
\begin{document}
\renewcommand{\theequation}{\arabic{section}.\arabic{equation}}
\begin{titlepage}
\title{Boundary integral equation methods for the elastic and thermoelastic waves in three dimensions}
\author{Gang Bao\thanks{School of Mathematical Sciences, Zhejiang University, Hangzhou 310027, China. Email: {\tt [email protected]}}\;,
Liwei Xu\thanks{School of Mathematical Sciences, University of Electronic Science and Technology of China, Chengdu, Sichuan 611731, China. Email: {\tt [email protected]}}\;,
Tao Yin\thanks{Department of Computing and Mathematical Sciences, California Institute of Technology, 1200 East California Blvd., CA 91125, United States. Email:{\tt [email protected]}}
}
\end{titlepage}
\maketitle
\begin{abstract}
In this paper, we consider the boundary integral equation (BIE) method for solving the exterior Neumann boundary value problems of elastic and thermoelastic waves in three dimensions based on the Fredholm integral equations of the first kind. The innovative contribution of this work lies in the proposal of new regularized formulations for the hyper-singular boundary integral operators (BIO) associated with the time-harmonic elastic and thermoelastic wave equations. With the help of the new regularized formulations, we only need to compute integrals that are at most weakly singular in the corresponding variational forms of the boundary integral equations. The accuracy of the regularized formulations is demonstrated through numerical examples using the Galerkin boundary element method (BEM).
{\bf Keywords:} Elastic wave, thermoelastic wave, hyper-singular boundary integral operator, regularized formulation, boundary element method
\end{abstract}
\section{Introduction}
\label{sec:i}
In this paper, we apply the BIE method to solve the three-dimensional time-harmonic elastic and thermoelastic scattering problems that are of great importance in many fields of application such as geophysics, seismology, non-destructive testing and material sciences. We are interested in the wave scattering by a bounded impenetrable obstacle immersed in an infinite isotropic solid. Based upon the assumption that any deformation of the elastic medium occurs under a constant temperature, the displacement in the solid can be modeled by the Navier equation together with an appropriate radiation condition at infinity (\cite{KGBB79}). If the temperature fluctuations caused by the dynamic deformation are taken into account, one arrives at the Biot system of equations (\cite{B56,KGBB79,N75}) describing the interaction of the temperature and displacement fields.
When considering wave propagation in an unbounded domain, one could use the so-called Dirichlet-to-Neumann (DtN) map (or non-reflecting boundary condition) on a closed artificial boundary which decomposes the exterior region into two parts. After that, the original scattering problem can be solved on the bounded region. The DtN map for the elastic wave has been used for numerical simulations in the open literature (\cite{GK90,HS98}), and some properties of the DtN map have been investigated in \cite{BHSY,L16,LY}. For the thermoelastic wave, the explicit formulation of the DtN map is still unknown. We refer to \cite{CD98,CD99,DK88,DK90} for the mathematical analysis of the thermoelastic wave. The BIE method is another conventional numerical method for solving scattering problems, and it has been widely used in acoustics, electromagnetics, elastodynamics and thermoelastodynamics (\cite{BLR14,BXY,C00,CBS08,DB90,HW04,HW08,HX11,JWX14,LH16,L14,LR93,MB88,N01,RS,SS84,TC07,TC09,YHX}). The BIE method has several advantages over domain discretization methods: the boundary integral representation of the solution fulfills the radiation condition naturally, and the dimension of the computational domain is reduced by one. Various numerical techniques, including the Galerkin scheme, the Nystr\"om method, the fast multipole method, and the spectral method, have been developed for the efficient transformation of the BIE into a linear system in the past decades. In this work, we use the Galerkin scheme (\cite{HW04,HW08,RS}) for the numerical solutions; its advantages include the availability of a mathematical convergence analysis allowing $h$-$p$ approximations, and in particular its strength in dealing with the hyper-singularities in the boundary integrals.
In applications of the BIE method, the hyper-singular BIO is needed in many situations, including the removal of the pollution of eigenfrequencies for the BIE (\cite{BM71}) and the solution of the Neumann boundary value problem using the Fredholm integral equation of the first kind (\cite{GN78,N82}). Theoretical analysis indicates that the hyper-singular BIO is equivalent to the Hadamard finite part of a hyper-singular integral (\cite{HW08}), which is usually difficult to evaluate accurately. One usually needs additional numerical treatments for the correct evaluation of the classically non-integrable boundary integrals arising from the hyper-singular BIO. There already exist many works (\cite{GN78,LR93,H94,HW08,M49,M66,L14,YHX,BXY}) on this issue, and the main idea consists of rewriting the hyper-singular BIO in terms of a composition of differentiation and weakly singular operators for the Laplace equation, the Helmholtz equation, the time-harmonic Navier equation and the Lam\'e equation. This composition is in fact a regularization procedure (\cite{HW08, YHX}) for the hyper-singular distribution, and is useful for the variational formulation and related computational procedures.
In this paper, we consider the three-dimensional elastic and thermoelastic scattering problems with Neumann boundary conditions. For each problem, we apply the double-layer potential to represent the solution, and then the original boundary value problem is reduced to a Fredholm boundary integral equation of the first kind with the corresponding hyper-singular BIO. Following the work in \cite{YHX,BXY} for the two-dimensional case, and utilizing the tangential G\"unter derivative, we derive new and analytically accurate regularized formulations for the hyper-singular BIOs associated with the three-dimensional time-harmonic Navier equation and the Biot system of linearized thermoelasticity, respectively. As a result, in the corresponding weak forms, all involved integrals are at most weakly singular. This work extends our earlier work, which considered only two-dimensional elastic waves, and the extension is in fact non-trivial. In the numerical implementation, applying the special local coordinate system given in \cite{RS} and Gauss quadrature rules on the triangle element, we present a semi-analytic strategy to evaluate all the weakly-singular integrals effectively. Although we only consider $C^2$ boundaries in our numerical tests, the theoretical results could actually be extended to the Lipschitz case using the properties of the G\"unter derivative given in \cite{BT}. The convergence analysis of the numerical scheme can be obtained following the standard techniques in \cite{HW08}, and we omit it in this work.
The rest of this paper is organized as follows. In Section \ref{sec:ge}, we introduce the exterior elastic and thermoelastic scattering problems, and then describe the BIE and the Galerkin BEM in Section \ref{sec:nm}. In Section \ref{sec:rf}, we propose the new and analytically accurate regularized formulations for the hyper-singular BIO in three dimensions. Finally, we discuss a semi-analytic strategy for the numerical implementation of the Galerkin scheme and present numerical results for several examples in Section \ref{sec:ne}.
\section{Mathematical problems}
\label{sec:ge}
Let $\Omega\subset {\mathbb R}^3$ be a bounded, simply connected and impenetrable body with $C^2$ boundary $\Gamma=\partial\Omega$. The exterior complement of $\Omega$ is denoted by $\Omega^c= {\mathbb R}^3\setminus\overline{\Omega}\subset {\mathbb R}^3$. Assume that $\Omega^c$ is occupied by a linear and isotropic elastic solid characterized by the Lam\'e constants $\lambda$ and $\mu$ ($\mu>0$, $3\lambda+2\mu>0$) and mass density $\rho>0$. Let $\omega>0$ be the frequency of propagating waves.
\subsection{Elastic scattering problem (ESP)}
Assume that the temperature is always a constant and suppress the time-harmonic dependence $e^{-i\omega t}$. Then the displacement field $u$ in the solid can be modeled by the following exterior ESP: Given $f\in H^{-1/2}(\Gamma)^3$, determine $u=(u_1,u_2,u_3)^\top\in H_{loc}^1(\Omega^c)^3$ satisfying
\begin{eqnarray}
\label{Navier}
\Delta^{*}u + \rho \omega^2u &=& 0\quad\mbox{in}\quad \Omega^c,\\
\label{BoundCond}
T(\partial,\nu)u &=& f \quad \mbox{on}\quad \Gamma,
\end{eqnarray}
and the Kupradze radiation condition (\cite{KGBB79})
\begin{eqnarray}
\label{RadiationCond}
\lim_{r \to \infty} r\left(\frac{\partial u_t}{\partial r}-ik_tu_t\right) = 0,\quad r=|x|,\quad t=p,s,
\end{eqnarray}
uniformly with respect to all $\hat{x}=x/|x|\in\mathbb{S}^2:=\{x\in{\mathbb R}^3:|x|=1\}$. Here, $\Delta^{*}$ is the Lam\'e operator defined by
\begin{eqnarray*}
\label{LameOper}
\Delta^* := \mu\,\mbox{div}\,\mbox{grad} + (\lambda + \mu)\,\mbox{grad}\, \mbox{div}\,,
\end{eqnarray*}
and the traction operator $T(\partial,\nu)$ on the boundary is defined as
\begin{eqnarray*}
\label{stress-3D}
T(\partial,\nu)u:=2 \mu \, \partial_{\nu} u + \lambda \,
\nu \, {\rm div\,} u+\mu \nu\times {\rm curl\,} u,\quad \nu=(\nu^1,\nu^2,\nu^3){^\top},
\end{eqnarray*}
where $\nu$ is the outward unit normal to the boundary $\Gamma$ and $\partial_\nu:=\nu\cdot\mbox{grad}$ is the normal derivative. In (\ref{RadiationCond}), $u_p$ and $u_s$ are referred to as the compressional wave and the shear wave, respectively, and they are given by
\begin{eqnarray*}
u_p=-\frac{1}{k_p^2}\,\mbox{grad}\,\mbox{div}\;u,\quad u_s=\frac{1}{k_s^2}\,\mbox{curl}\,\mbox{curl}\;u,
\end{eqnarray*}
where the wave numbers $k_p,k_s$ are defined as
\begin{eqnarray*}
k_p := \omega/c_p,\quad k_s := \omega/c_s,
\end{eqnarray*}
with
\begin{eqnarray*}
c_p:=\sqrt{(\lambda+2\mu)/\rho},\quad c_s:=\sqrt{\mu/\rho}.
\end{eqnarray*}
For the uniqueness of the ESP (\ref{Navier})-(\ref{RadiationCond}), we refer to \cite{KGBB79,BHSY}.
\subsection{Thermoelastic scattering problem (TESP)}
Now we consider the temperature fluctuations caused by the dynamic deformation. In this case, the elastic medium $\Omega^c$ is additionally characterized by the coefficient of thermal diffusivity $\kappa$ and the coupling constants $\gamma$, $\eta$ given by
\begin{eqnarray*}
\gamma=(\lambda+\frac{2}{3}\mu)\alpha,\quad \eta=\frac{T_0\gamma}{\lambda_0},
\end{eqnarray*}
respectively, where $\alpha$ is the volumetric thermal expansion coefficient, $T_0$ is a reference value of the absolute temperature and $\lambda_0$ is the coefficient of thermal conductivity. Denote by $\epsilon:=\gamma\eta\kappa/(\lambda+2\mu)$ the dimensionless thermoelastic coupling constant, which assumes `small' positive values for most thermoelastic media, and set $q=i\omega/\kappa$. Suppressing the time-harmonic dependence $e^{-i\omega t}$, the displacement field $u$ and the temperature variation field $p$ can be modeled by the following Biot system of linearized thermoelasticity
\begin{eqnarray}
\label{thermo1}
\Delta^{*}u+\rho\omega^2u-\gamma\nabla p &=& 0\quad\mbox{in}\quad \Omega^c,\\
\label{thermo2}
\Delta p+qp+i\omega\eta\nabla\cdot u &=& 0 \quad\mbox{in}\quad \Omega^c.
\end{eqnarray}
Rewriting (\ref{thermo1})-(\ref{thermo2}) into a matrix form, we obtain
\begin{eqnarray}
\label{thermo12}
LU=0,\quad L:=\begin{bmatrix}
(\mu\Delta+\rho\omega^2)I_3+(\lambda+\mu)\nabla\nabla\cdot & -\gamma\nabla\\
q\eta\kappa\nabla\cdot & \Delta+q
\end{bmatrix},\quad U=(u^\top,p)^\top.
\end{eqnarray}
On the boundary of the scatterer, we assume the Neumann boundary condition
\begin{eqnarray}
\label{BC}
\widetilde{T}(\partial,\nu)U:=\begin{bmatrix}
T(\partial,\nu) & -\gamma\nu \\
0 & \partial_\nu
\end{bmatrix}U=F.
\end{eqnarray}
It follows (\cite{KGBB79}) that the wave field $U$ admits the decomposition
\begin{eqnarray*}
u=u^1+u^2+u^s,\quad p=p^1+p^2,
\end{eqnarray*}
where the vector displacement fields $u^1,u^2,u^s$ satisfy the vectorial Helmholtz equations
\begin{eqnarray*}
\Delta u^i+k_i^2u^i=0,\quad i=1,2 \quad\mbox{and}\quad \Delta u^s+k_s^2u^s=0
\end{eqnarray*}
with
\begin{eqnarray*}
\mbox{curl}\,u^i=0\quad i=1,2 \quad\mbox{and}\quad \mbox{div}\,u^s=0,
\end{eqnarray*}
and the scalar temperature fields $p^1$ and $p^2$ satisfy the following scalar Helmholtz equations
\begin{eqnarray*}
\Delta p^i+k_i^2p^i=0,\quad i=1,2.
\end{eqnarray*}
Here, the wave numbers $k_1,k_2$, corresponding to the elastothermal and thermoelastic waves respectively, are the roots of the characteristic system
\begin{eqnarray}
\label{k12}
k_1^2+k_2^2=q(1+\epsilon)+k_p^2,\quad k_1^2k_2^2=qk_p^2,
\end{eqnarray}
for which $\mbox{Im}\,k_i\ge 0, i=1,2$. In particular,
\begin{eqnarray*}
k_1 &=& \frac{1}{2c_p}\sqrt{\frac{\omega}{\kappa}}\left[\sqrt{\omega\kappa +C_+} + \sqrt{\omega\kappa +C_-}\right],\\
k_2 &=& \frac{1}{2c_p}\sqrt{\frac{\omega}{\kappa}}\left[\sqrt{\omega\kappa +C_+} - \sqrt{\omega\kappa +C_-}\right],
\end{eqnarray*}
where
\begin{eqnarray*}
C_\pm=i(1+\epsilon)c_p^2\pm (1+i)c_p\sqrt{2\omega\kappa}.
\end{eqnarray*}
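The explicit roots above can be checked against the characteristic system (\ref{k12}) numerically; the material data in the following sketch are arbitrary sample values, not taken from the paper.

```python
import cmath, math

# sample (assumed) material data
lam, mu, rho, kappa, omega, eps = 2.0, 1.0, 1.0, 0.3, 5.0, 0.1
c_p = math.sqrt((lam + 2*mu)/rho)
k_p = omega/c_p
q = 1j*omega/kappa

C_plus = 1j*(1 + eps)*c_p**2 + (1 + 1j)*c_p*math.sqrt(2*omega*kappa)
C_minus = 1j*(1 + eps)*c_p**2 - (1 + 1j)*c_p*math.sqrt(2*omega*kappa)
pref = math.sqrt(omega/kappa)/(2*c_p)
k1 = pref*(cmath.sqrt(omega*kappa + C_plus) + cmath.sqrt(omega*kappa + C_minus))
k2 = pref*(cmath.sqrt(omega*kappa + C_plus) - cmath.sqrt(omega*kappa + C_minus))

# characteristic system: k1^2 + k2^2 = q(1+eps) + k_p^2,  k1^2 k2^2 = q k_p^2
err1 = abs(k1**2 + k2**2 - (q*(1 + eps) + k_p**2))
err2 = abs(k1**2 * k2**2 - q*k_p**2)
print(err1, err2)  # both ~ 0 up to rounding
```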
We assume that the scattered field $U$ satisfies the following Kupradze radiation conditions as $r=|x|\rightarrow \infty$ for $i=1,2,3$ and $j=1,2$
\begin{eqnarray*}
&& u^j=o(r^{-1}),\quad \partial_{x_i}u^j=O(r^{-2}),\\
&& p^j=o(r^{-1}),\quad \partial_{x_i}p^j=O(r^{-2}),\\
&& u^s=o(r^{-1}),\quad r(\partial_ru^s-ik_su^s)=O(r^{-1}).
\end{eqnarray*}
The direct TESP to be considered in this paper is to determine the displacement field $u$ and the temperature variation field $p$ satisfying (\ref{thermo12}), the boundary condition (\ref{BC}) and the Kupradze radiation conditions. For given $F\in (H^{-1/2}(\Gamma))^4$, we refer to \cite{KGBB79,C00} for the uniqueness of the direct problem.
\section{Numerical method}
\label{sec:nm}
In this section, we derive the BIE for solving the ESP and the TESP, respectively, and give a brief introduction to the Galerkin BEM for the discretization of the derived BIE.
\subsection{BIE for ESP}
\label{sec:bie1}
For the ESP, it follows from the potential theory (\cite{KGBB79}) that the unknown function $u$ can be represented as
\begin{eqnarray}
\label{DirectBRF1}
u(x) = (\mathcal{D}_s\varphi)(x):= \int_{\Gamma}(T(\partial_y,\nu_y) E(x,y))^\top \varphi(y)\,ds_y, \quad \forall\,x\in\Omega^c,
\end{eqnarray}
where $\mathcal{D}_s$ is referred to as the double-layer potential and $E(x,y)$ is the fundamental displacement tensor of the time-harmonic Navier equation (\ref{Navier}) in ${\mathbb R}^3$, taking the form
\begin{eqnarray}
\label{NavierFS}
E(x,y) = \frac{1}{\mu}\gamma_{k_s}(x,y) I + \frac{1}{\rho\omega^2}\nabla_x\nabla_x^\top \left[\gamma_{k_s}(x,y) - \gamma_{k_p}(x,y)\right],\quad x\ne y.
\end{eqnarray}
In (\ref{NavierFS}) and the following, $I$ denotes the $3\times 3$ identity matrix, and $\gamma_{k_t}(x,y)$ is the fundamental solution of the Helmholtz equation in ${\mathbb R}^3$ with wave number $k_t$ and takes the form
\begin{eqnarray}
\label{HelmholtzFS}
\gamma_{k_t}(x,y) =\frac{\exp(ik_t|x-y|)}{4\pi|x-y|},
\quad x\ne y,\quad t=p,s.
\end{eqnarray}
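As a quick numerical check, a central-difference Laplacian confirms that this kernel satisfies the Helmholtz equation away from the source; the sample evaluation point and step size below are arbitrary choices.

```python
import cmath, math

def gamma(k, x, y):
    """Fundamental solution of the 3D Helmholtz equation, x != y."""
    r = math.dist(x, y)
    return cmath.exp(1j*k*r)/(4*math.pi*r)

k, y = 2.0, (0.0, 0.0, 0.0)
x = (0.7, -0.4, 0.5)     # evaluation point away from the source
h = 1e-3                 # finite-difference step

# central second differences in each coordinate approximate the Laplacian
lap = 0
for i in range(3):
    xp = tuple(c + h*(j == i) for j, c in enumerate(x))
    xm = tuple(c - h*(j == i) for j, c in enumerate(x))
    lap += (gamma(k, xp, y) - 2*gamma(k, x, y) + gamma(k, xm, y))/h**2

res = abs(lap + k**2*gamma(k, x, y))
print(res)  # O(h^2), i.e. small
```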
Operating with the traction operator on (\ref{DirectBRF1}), taking the limits as $x\to\Gamma$ and applying the jump relations and the boundary condition (\ref{BoundCond}), we arrive at the BIE on $\Gamma$
\begin{eqnarray}
\label{BIE1}
W_s\varphi(x)=-f,\quad x\in\Gamma,
\end{eqnarray}
where $W_s:H^{s}(\Gamma)^3\rightarrow H^{s-1}(\Gamma)^3$ $(s\ge 1/2)$ is called the hyper-singular BIO, defined by
\begin{eqnarray}
\label{HBIO1}
W_su(x) := -\lim_{z\rightarrow x\in\Gamma,z\notin\Gamma}T(\partial_z,\nu_x)\int_{\Gamma}(T(\partial_y,\nu_y)E(z,y))^\top u(y)\,ds_y,\quad x\in\Gamma.
\end{eqnarray}
The standard weak formulation of (\ref{BIE1}) reads: Given $f\in H^{-1/2}(\Gamma)^3$, find $\varphi\in H^{1/2}(\Gamma)^3$ such that
\begin{eqnarray}
\label{weak1}
\langle W_s\varphi,v\rangle=-\langle f,v\rangle \quad\mbox{for all}\quad v\in H^{1/2}(\Gamma)^3.
\end{eqnarray}
Here and in the sequel, $\langle\cdot,\cdot\rangle$ denotes the $L^2$ duality pairing between $H^{-1/2}(\Gamma)^d$ and $H^{1/2}(\Gamma)^d$ for $d\in{\mathbb Z}^+$.
\subsection{BIE for TESP}
For the TESP, it follows from the potential theory (\cite{KGBB79,C00}) that the unknown function $U$ can be represented as
\begin{eqnarray}
\label{DirectBRF2}
U(x) = (\widetilde{\mathcal{D}}\Psi)(x):= \int_{\Gamma}(\widetilde{T}^*(\partial_y,\nu_y) \widetilde{E}^\top(x,y))^\top \Psi(y)\,ds_y, \quad \forall\,x\in\Omega^c,
\end{eqnarray}
where $\widetilde{E}(x,y)$ is the fundamental solution of the Biot system (\ref{thermo12}) in ${\mathbb R}^3$ given by
\begin{eqnarray*}
\widetilde{E}(x,y)=\begin{bmatrix}
E_{11}(x,y) & E_{12}(x,y) \\
E_{21}^\top(x,y) & E_{22}(x,y)
\end{bmatrix},
\end{eqnarray*}
with
\begin{eqnarray*}
E_{11}(x,y)&=& \frac{1}{\mu}\gamma_{k_s}(x,y)I+\frac{1}{\rho\omega^2} \nabla_x\nabla_x^\top \left[\gamma_{k_s}(x,y)-\gamma_{k_1}(x,y) \right]\\
&-& \frac{1}{\rho\omega^2}\frac{k_p^2-k_1^2}{k_1^2-k_2^2} \nabla_x\nabla_x^\top \left[\gamma_{k_1}(x,y)-\gamma_{k_2}(x,y) \right],\\
E_{12}(x,y)&=& -\frac{\gamma}{(k_1^2-k_2^2)(\lambda+2\mu)}\nabla_x \left[ \gamma_{k_1}(x,y)-\gamma_{k_2}(x,y)\right],\\
E_{21}(x,y)&=& \frac{i\omega\eta}{(k_1^2-k_2^2)(\lambda+2\mu)}\nabla_x \left[ \gamma_{k_1}(x,y)-\gamma_{k_2}(x,y)\right],\\
E_{22}(x,y)&=& -\frac{1}{k_1^2-k_2^2}\left[ (k_p^2-k_1^2)\gamma_{k_1}(x,y)-(k_p^2-k_2^2)\gamma_{k_2}(x,y) \right],
\end{eqnarray*}
and $\widetilde{T}^*(\partial,\nu)$ is the corresponding Neumann operator of the adjoint problem of (\ref{thermo12}) taking the form
\begin{eqnarray*}
\widetilde{T}^*(\partial,\nu):=\begin{bmatrix}
T(\partial,\nu) & -i\omega\eta\nu \\
0 & \partial_\nu
\end{bmatrix}.
\end{eqnarray*}
Operating with the operator $\widetilde{T}$ on (\ref{DirectBRF2}), taking the limits as $x\to\Gamma$ and applying the boundary condition (\ref{BC}), we obtain the BIE on $\Gamma$
\begin{eqnarray}
\label{BIE2}
\widetilde{W}_s\Psi(x)=-F,\quad x\in\Gamma,
\end{eqnarray}
where the hyper-singular BIO $\widetilde{W}_s$ is defined by
\begin{eqnarray}
\label{HBIO2}
\widetilde{W}_s\Psi(x) := -\lim_{z\rightarrow x\in\Gamma,z\notin\Gamma}\widetilde{T}(\partial_z,\nu_x)\int_\Gamma (\widetilde{T}^*(\partial_y,\nu_y)\widetilde{E}^\top(z,y))^\top\Psi(y)ds_y,\quad x\in\Gamma.
\end{eqnarray}
The standard weak formulation of (\ref{BIE2}) reads: Given $F\in H^{-1/2}(\Gamma)^4$, find $\Psi\in H^{1/2}(\Gamma)^4$ such that
\begin{eqnarray}
\label{weak2}
\langle\widetilde{W}_s\Psi,V\rangle=-\langle F,V\rangle \quad\mbox{for all}\quad V\in H^{1/2}(\Gamma)^4.
\end{eqnarray}
\begin{remark}
For the wellposedness of the variational equations (\ref{weak1}) and (\ref{weak2}), we refer the readers to \cite{BXY,C00,KGBB79}. The pollution of eigenfrequencies on the uniqueness can be removed by applying the so-called Burton-Miller formulation, see \cite{BXY,BM71} for example.
\end{remark}
\subsection{Galerkin BEM}
\label{sec:gbem}
We present the Galerkin scheme only for the ESP; the corresponding procedure and formulas for the TESP are quite similar and will be omitted.
Let $\mathcal{H}_h$ be a finite dimensional subspace of $H^{1/2}(\Gamma)$. Then the Galerkin approximation of (\ref{BIE1}) reads: Given $f$, find $\varphi_h\in\mathcal{H}_h^3$ satisfying
\begin{eqnarray}
\label{galerkin}
\langle W_s\varphi_h,v_h\rangle=-\langle f,v_h\rangle \quad\mbox{for all}\quad v_h\in\mathcal{H}_h^3.
\end{eqnarray}
In the following, we briefly describe the reduction of the Galerkin equation (\ref{galerkin}) to its discrete linear system of equations.
Let $\Gamma_h=\cup_{i=1}^N \overline{\tau_i}$ be a uniform boundary element mesh of $\Gamma$, where each $\tau_i$ is a plane triangle with vertices $x_{i_1},x_{i_2},x_{i_3}$ ordered counterclockwise. Let $\{x_j\}_{j=1}^M$ be the set of all nodes of the triangulation. Using the reference element
\begin{eqnarray*}
\tau_\xi=\{\xi=(\xi_1,\xi_2)^\top\in{\mathbb R}^2,0<\xi_1<1,\; 0<\xi_2<1-\xi_1\},
\end{eqnarray*}
the point $x\in\tau_i$ can be parameterized as
\begin{eqnarray*}
x=x(\xi)=x_{i_1}+\xi_1(x_{i_2}-x_{i_1})+\xi_2(x_{i_3}-x_{i_1}),\quad \xi\in\tau_\xi.
\end{eqnarray*}
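For concreteness, the affine parameterization and the resulting surface element can be sketched as follows; the vertex coordinates are arbitrary sample data.

```python
import numpy as np

# sample vertices of a plane triangle tau_i (assumed data)
v1 = np.array([0.0, 0.0, 1.0])
v2 = np.array([1.0, 0.0, 1.2])
v3 = np.array([0.0, 1.0, 0.8])

def x_of_xi(xi1, xi2):
    """Affine map from the reference triangle to tau_i."""
    return v1 + xi1*(v2 - v1) + xi2*(v3 - v1)

# the surface Jacobian |(v2-v1) x (v3-v1)| is constant on the element,
# so ds = |cross| d(xi1) d(xi2), and the element area is |cross|/2
cross = np.cross(v2 - v1, v3 - v1)
area = 0.5*np.linalg.norm(cross)
print(x_of_xi(1.0, 0.0), area)  # the point (1,0) maps to vertex v2
```

Quadrature rules on the reference triangle are then transported to each physical element through this map, with the constant Jacobian factored out.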
Let $\{\psi_j\}_{j=1}^M$ be the set of piecewise linear basis functions. We seek the approximate solution
\begin{eqnarray*}
u_h(x)=\sum_{j=1}^M u_j\psi_j(x),
\end{eqnarray*}
where $u_j\in{\mathbb C}^3, j=1,\cdots,M$ are the unknown nodal values of $u_h$ at $x_j$. For the boundary value $f$, we interpolate it as
\begin{eqnarray*}
f_h=\sum_{i=1}^N f(x_{i_*})\phi_i(x),
\end{eqnarray*}
where $x_{i_*}=(x_{i_1}+x_{i_2}+x_{i_3})/3$ is the centroid of the element $\tau_i$ and $\{\phi_i\}_{i=1}^N$ is the set of piecewise constant basis functions. Substituting the interpolation forms into (\ref{galerkin}) and taking $\psi_j, j=1,\cdots,M$ as test functions, we arrive at the linear system of equations
\begin{eqnarray}
\label{linearsys}
{\bf A}_h {\bf X}={\bf B}_h{\bf b},\quad {\bf A}_h\in{\mathbb C}^{3M\times 3M},\quad {\bf B}_h\in{\mathbb C}^{3M\times 3N},\quad {\bf b}=({\bf b}_1^\top,\cdots,{\bf b}_N^\top)^\top\in{\mathbb C}^{3N\times 1},
\end{eqnarray}
where for $k,j=1,\cdots,M$, $i=1,\cdots,N$,
\begin{eqnarray}
\label{coematrix1}
{\bf A}_h(k,j)= -\int_{\Gamma_h}\left[\lim_{z\rightarrow x\in\Gamma,z\notin\Gamma}T(\partial_z,\nu_x)\int_{\Gamma_h}(T(\partial_y,\nu_y)E(z,y))^\top \psi_j(y)ds_y\right]\psi_k(x)ds_x,
\end{eqnarray}
\begin{eqnarray}
\label{coematrix2}
{\bf B}_h(k,i)= \int_{\Gamma_h}\phi_i(x)\psi_k(x)\,ds_x\,I\in{\mathbb C}^{3\times3},
\end{eqnarray}
\begin{eqnarray}
\label{coedata}
{\bf b}_i = f(x_{i_*})\in{\mathbb C}^{3\times1}.
\end{eqnarray}
\section{Regularized formulation for the hyper-singular BIO}
\label{sec:rf}
In this section, we derive the new regularized formulations for the hyper-singular BIOs $W_s$ and $\widetilde{W}_s$ in three dimensions such that the coefficient matrix ${\bf A}_h$ (or $\widetilde{\bf A}_h$) can be evaluated in a more effective and accurate way. More precisely, using the derived regularized formulations, only classically integrable and weakly-singular integrals are involved in the weak forms of $W_s$ and $\widetilde{W}_s$. Before doing this, we introduce the hyper-singular BIO associated with the Helmholtz equation and the G\"unter derivatives.
\subsection{Hyper-singular BIO for acoustic scattering problem}
\label{sec:hbioa}
Consider the Helmholtz equation
\begin{eqnarray*}
\Delta p+k^2p=0\quad\mbox{in}\quad\Omega^c,
\end{eqnarray*}
with wave number $k>0$. Denote by $V_f:H^{s-1}(\Gamma)\rightarrow H^s(\Gamma)$ and $W_f:H^s(\Gamma)\rightarrow H^{s-1}(\Gamma)$, $s\ge1/2$, the single-layer and hyper-singular BIO defined by
\begin{eqnarray*}
V_f\psi(x) &:=& \int_{\Gamma}\gamma_k(x,y) \psi(y)\,ds_y,\quad x\in\Gamma,\\
W_f\varphi(x) &:=& -\lim_{z\rightarrow x\in\Gamma,z\notin\Gamma}\nu_x\cdot\nabla_z\int_{\Gamma}\partial_{\nu_y}\gamma_k(z,y) \varphi(y)\,ds_y,\quad x\in\Gamma,
\end{eqnarray*}
respectively. It follows from Lemma 1.2.2 in \cite{HW08} that the hyper-singular BIO $W_f$ can be expressed as
\begin{eqnarray}
\label{Wf2}
W_{f} p(x)=-(\nu_x\times\nabla_x)\cdot V_f(\nu\times\nabla p)(x)-k^2\nu_x^\top V_f(p\nu)(x).
\end{eqnarray}
{\mathbb S}ubsection{G\"unter derivatives}
\lambdabel{sec:gd}
Now we describe the G\"unter derivatives that play essential roles in the proof of our main results. Define the operator $M(\partial,\nu)$, whose elements are also called the G\"unter derivatives, as
\begin{eqnarray}n
M(\partial,\nu)u(x)= \partial_{\nu}u -\nu(\nabla\cdot u)+\nu\times \,{\rm curl\,}\,u.
\end{eqnarray}n
Then the traction operator can be rewritten as
\begin{eqnarray} \lambdabel{Tform2}
T(\partial,\nu)u(x)= (\lambdambda+\mu)\nu(\nabla \cdot u) + \mu\partial_{\nu}u + \mu M(\partial,\nu)u.
\end{eqnarray}
A direct calculation yields
\begin{eqnarray*}
T(\partial,\nu)\nabla &=& (\lambda+\mu)\nu\Delta+\mu\partial_{\nu}\nabla +\mu M(\partial,\nu)\nabla\\
&=& (\lambda+\mu)\nu\Delta+\mu\partial_{\nu}\nabla -\mu M(\partial,\nu)\nabla+ 2\mu M(\partial,\nu)\nabla.
\end{eqnarray*}
Since ${\rm curl}\,\nabla=0$, the definition of $M(\partial,\nu)$ gives
\begin{eqnarray*}
\partial_{\nu}\nabla-M(\partial,\nu)\nabla = \nu\Delta,
\end{eqnarray*}
which implies that
\begin{eqnarray}
\label{Tgrad}
T(\partial,\nu)\nabla=(\lambda+2\mu)\nu\Delta+2\mu M(\partial,\nu)\nabla.
\end{eqnarray}
The properties of the operator $M(\partial,\nu)$ (\cite{BT,HW08,KGBB79}) show that for any scalar fields $p,q$, vector fields $u,v$ and tensor field $E$, there hold the Stokes formulas
\begin{eqnarray}
\label{stokes1}
\int_\Gamma (m^{ij}p)\,q\,ds &=& -\int_\Gamma p\,(m^{ij}q)\,ds,\\
\label{stokes2}
\int_\Gamma (Mu)\cdot v\,ds &=& \int_\Gamma u\cdot (Mv)\,ds,\\
\label{stokes3}
\int_\Gamma (Mq)\,v\,ds &=& -\int_\Gamma q\,(Mv)\,ds,
\end{eqnarray}
and
\begin{eqnarray}
\label{stokes4}
\int_\Gamma (ME)^\top\,vds=\int_\Gamma E^\top\,(Mv)\,ds.
\end{eqnarray}
\end{eqnarray}
\subsection{Hyper-singular BIO for ESP}
\label{sec:hbioe}
We now investigate the operator $W_s$. Following the results in \cite{L14,YHX}, we have for $x\ne y$,
\begin{eqnarray} \label{TxExy}
T(\partial_x,\nu_x)E(x,y) &=& - \nu_x \nabla _x^\top [\gamma_{k_s}(x,y)-\gamma_{k_p}(x,y)] + \partial_{\nu_x}\gamma _{k_s}(x,y)I\nonumber\\
&+& M_x\left[2\mu E(x,y) - \gamma_{k_s}(x,y)I\right],
\end{eqnarray}
and
\begin{eqnarray} \label{TyExy}
T(\partial_y,\nu_y)E(x,y) &=& - \nu_y \nabla _y^\top [\gamma_{k_s}(x,y)-\gamma_{k_p}(x,y)] + \partial_{\nu_y}\gamma _{k_s}(x,y)I\nonumber\\
&+& M_y\left[2\mu E(x,y) - \gamma_{k_s}(x,y)I\right].
\end{eqnarray}
Using the G\"unter derivatives, the hyper-singular operator $W_s$ can be rewritten as
\begin{eqnarray*}
&\quad& W_su(x)\\
&=& -\lim_{z\rightarrow x\in\Gamma,z\notin\Gamma}\left[ \mu\nu_x\cdot\nabla_z \mathcal{D}_su(z) +(\lambda+\mu)\nu_x(\nabla_z \cdot \mathcal{D}_s u(z))+\mu M_{z,x}\mathcal{D}_s u(z)\right],
\end{eqnarray*}
where
\begin{eqnarray*}
M_{z,x}\psi(z)= \partial_{\nu_x}\psi -\nu_x(\nabla_z \cdot \psi)+\nu_x\times \,{\rm curl\,}_z\,\psi,\quad M_{z,x}=[m_{z,x}^{ij}]_{i,j=1}^3.
\end{eqnarray*}
\begin{theorem}
\label{main}
The hyper-singular BIO $W_s$ in three dimensions can be expressed alternatively as
\begin{eqnarray}
\label{Ws1}
W_s u(x) &=& \rho\omega^2\int_\Gamma\left[ \gamma_{k_s}(x,y)(\nu_x\nu_y^\top-\nu_x^\top\nu_yI-J_{\nu_x,\nu_y})- \gamma_{k_p}(x,y)\nu_x\nu_y^\top\right]u(y)ds_y\nonumber\\
&+& 2\mu\int_\Gamma M_x\nabla_y[\gamma_{k_s}(x,y)-\gamma_{k_p}(x,y)]\nu_y^\top u(y)ds_y \nonumber\\
&+& 2\mu\int_\Gamma \nu_x\nabla_x^\top [\gamma_{k_s}(x,y)-\gamma_{k_p}(x,y)] M_yu(y)ds_y \nonumber\\
&-& \mu\int_\Gamma \left(\nu_x\times\nabla_x\gamma_{k_s}(x,y)\right)\cdot \left(\nu_y\times\nabla_yu(y)\right)ds_y\nonumber\\
&+& 2\mu\int_\Gamma M_x\gamma_{k_s}(x,y) M_yu(y)ds_y- 4\mu^2\int_\Gamma M_xE(x,y) M_yu(y)ds_y\nonumber\\
&-& \mu\left\{ \sum_{k,l=1}^3\int_\Gamma m_x^{kl}\gamma_{k_s}(x,y) m_y^{kj}u_l(y)ds_y \right\}_{j=1}^3,
\end{eqnarray}
where $J_{\nu_x,\nu_y}=\nu_y\nu_x^\top-\nu_x\nu_y^\top$.
\end{theorem}
\begin{proof}
See \ref{appendex.a}.
\end{proof}
\subsection{Hyper-singular BIO for TESP}
Now we consider the operator $\widetilde{W}_s$. Note that the hyper-singular kernel of $\widetilde{W}_s$ is
\begin{eqnarray*}
\begin{bmatrix}
W_{11}(x,y;z) & W_{12}(x,y;z) \\
W_{21}^\top(x,y;z) & W_{22}(x,y;z)
\end{bmatrix},
\end{eqnarray*}
where
\begin{eqnarray*}
W_{11}(x,y;z) &=& T(\partial_z,\nu_x)(T(\partial_y,\nu_y)E_{11}(z,y))^\top- i\omega\eta T(\partial_z,\nu_x)E_{12}(z,y)\nu_y^\top \\
&-& \gamma\nu_x(T(\partial_y,\nu_y)E_{21}(z,y))^\top +i\omega\eta\gamma\nu_x\nu_y^\top E_{22}(z,y),\\
W_{12}(x,y;z) &=& T(\partial_z,\nu_x)\partial_{\nu_y}E_{12}(z,y)-\gamma\nu_x\partial_{\nu_y}E_{22}(z,y),\\
W_{21}(x,y;z) &=& (\nu_x\cdot\partial_z)T(\partial_y,\nu_y)E_{21}(z,y)-i\omega\eta\nu_y(\nu_x\cdot\partial_z)E_{22}(z,y),\\
W_{22}(x,y;z) &=& (\nu_x\cdot\partial_z)\partial_{\nu_y}E_{22}(z,y).
\end{eqnarray*}
For $U=(u^\top,p)^\top$ and $V=(v^\top,q)^\top$, we have
\begin{eqnarray}
\label{weakW}
\widetilde{W}_sU &=& \begin{bmatrix} W_1u+ W_2p \\ W_3u+W_4p \end{bmatrix},
\end{eqnarray}
where
\begin{eqnarray*}
W_1u(x) &=& -\lim_{z\rightarrow x\in\Gamma,z\notin\Gamma}\int_\Gamma W_{11}(x,y;z)u(y)ds_y,\\
W_2p(x) &=& -\lim_{z\rightarrow x\in\Gamma,z\notin\Gamma}\int_\Gamma W_{12}(x,y;z)p(y)ds_y,\\
W_3u(x) &=& -\lim_{z\rightarrow x\in\Gamma,z\notin\Gamma}\int_\Gamma W_{21}^\top(x,y;z)u(y)ds_y,\\
W_4p(x) &=& -\lim_{z\rightarrow x\in\Gamma,z\notin\Gamma}\int_\Gamma W_{22}(x,y;z)p(y)ds_y.
\end{eqnarray*}
\begin{lemma}
\label{Tlemma1}
For $x\ne y$, it follows that
\begin{eqnarray}
\label{TxE11}
&\quad& T(\partial_x,\nu_x)E_{11}(x,y)\nonumber\\
&=&-\nu_x\nabla_{x}^\top [\gamma_{k_s}(x,y)-\gamma_{k_1}(x,y)] + \frac{k_2^2-q}{k_1^2-k_2^2} \nu_x\nabla_{x}^\top[\gamma_{k_1}(x,y)-\gamma_{k_2}(x,y)] \nonumber\\
&+& \partial_{\nu_x}\gamma_{k_s}(x,y)I+ M_x[2\mu E_{11}(x,y)-\gamma_{k_s}(x,y)I],
\end{eqnarray}
and
\begin{eqnarray}
\label{TyE11}
&\quad& T(\partial_y,\nu_y)E_{11}(x,y)\nonumber\\
&=&-\nu_y\nabla_{y}^\top [\gamma_{k_s}(x,y)-\gamma_{k_1}(x,y)]+ \frac{k_2^2-q}{k_1^2-k_2^2} \nu_y\nabla_{y}^\top[\gamma_{k_1}(x,y)-\gamma_{k_2}(x,y)] \nonumber\\
&+& \partial_{\nu_y}\gamma_{k_s}(x,y)I+ M_y[2\mu E_{11}(x,y)-\gamma_{k_s}(x,y)I].
\end{eqnarray}
\end{lemma}
\begin{proof}
See \ref{appendex.b}.
\end{proof}
We first investigate the term $W_1u$. Observe that the first term in $W_{11}$ coincides with the kernel of the hyper-singular BIO $W_s$. For $W_1u$, we have the following regularized formulation.
\begin{theorem}
\label{main1}
The hyper-singular operator $W_1$ can be expressed alternatively as
\begin{eqnarray}
\label{W1-1}
W_1 u(x) &=&\rho\omega^2 \int_\Gamma \gamma_{k_s}(x,y) (\nu_x\nu_y^\top-\nu_x^\top\nu_yI-J_{\nu_x,\nu_y}) u(y)ds_y \nonumber\\
&+& \int_\Gamma \left[C_1\gamma_{k_1}(x,y)-C_2\gamma_{k_2}(x,y) \right]\nu_x\nu_y^\top u(y)ds_y \nonumber\\
&-& \mu\left\{ \sum_{k,l=1}^3\int_\Gamma m_x^{kl}\gamma_{k_s}(x,y) m_y^{kj}u_l(y)ds_y \right\}_{j=1}^3 \nonumber\\
&-& \mu\int_\Gamma \left(\nu_x\times\nabla_x\gamma_{k_s}(x,y)\right)\cdot \left(\nu_y\times\nabla_yu(y)\right)ds_y\nonumber\\
&+& 2\mu\int_\Gamma M_x\gamma_{k_s}(x,y) M_yu(y)ds_y -4\mu^2\int_\Gamma M_xE_{11}(x,y) M_yu(y)ds_y\nonumber\\
&+& 2\mu\int_\Gamma \nu_x\nabla_x^\top \left[\gamma_{k_s}(x,y) -\gamma_{k_1}(x,y)\right] M_yu(y)ds_y \nonumber\\
&+& 2\mu\int_\Gamma M_x\nabla_y \left[\gamma_{k_s}(x,y) -\gamma_{k_1}(x,y)\right]\nu_y^\top u(y)ds_y \nonumber\\
&+& C_3\int_\Gamma \nu_x\nabla_x^\top \left[\gamma_{k_1}(x,y) -\gamma_{k_2}(x,y)\right] M_yu(y)ds_y \nonumber\\
&+& C_3\int_\Gamma M_x\nabla_y \left[\gamma_{k_1}(x,y) -\gamma_{k_2}(x,y)\right]\nu_y^\top u(y)ds_y.
\end{eqnarray}
Here, the constants $C_i$, $i=1,2,3$, are given by
\begin{eqnarray*}
C_1=\frac{i\omega\eta\gamma(k_p^2+k_1^2)-k_1^2(k_1^2-q)(\lambda+2\mu)} {k_1^2-k_2^2},\\ C_2=\frac{i\omega\eta\gamma(k_p^2+k_2^2)-k_2^2(k_2^2-q)(\lambda+2\mu)} {k_1^2-k_2^2}
\end{eqnarray*}
and
\begin{eqnarray*}
C_3=\frac{2\mu}{k_1^2-k_2^2}\left( \frac{i\omega\eta\gamma}{\lambda+2\mu} -k_2^2+q \right).
\end{eqnarray*}
\end{theorem}
\begin{proof}
See \ref{appendex.c}.
\end{proof}
Next we investigate the terms $W_2p$ and $W_3u$. We have
\begin{theorem}
\label{main2}
The hyper-singular operators $W_2$ and $W_3$ can be expressed as
\begin{eqnarray}
\label{W2-1}
&\quad& W_2p(x) \nonumber\\
&=& -\frac{\gamma k_p^2\nu_x}{k_1^2-k_2^2}\int_\Gamma \partial_{\nu_y}\left( \gamma_{k_1}(x,y)-\gamma_{k_2}(x,y)\right)p(y)ds_y\nonumber\\
&+& \frac{2\mu\gamma}{(k_1^2-k_2^2)(\lambda+2\mu)} M_x\int_\Gamma M_yp(y)\nabla_y(\gamma_{k_1}(x,y)-\gamma_{k_2}(x,y))ds_y\nonumber\\
&+& \frac{2\mu\gamma}{(k_1^2-k_2^2)(\lambda+2\mu)}M_x \int_\Gamma (k_1^2\gamma_{k_1}(x,y)-k_2^2\gamma_{k_2}(x,y))\nu_yp(y)ds_y,
\end{eqnarray}
and
\begin{eqnarray}
\label{W3-1}
&\quad& W_3u(x) \nonumber\\
&=& -\frac{i\omega\eta k_p^2}{k_1^2-k_2^2}\int_\Gamma\partial_{\nu_x}\left( \gamma_{k_1}(x,y)-\gamma_{k_2}(x,y)\right)\nu_y^\top u(y)ds_y\nonumber\\
&-& \frac{2i\mu\omega\eta}{(k_1^2-k_2^2)(\lambda+2\mu)} \int_\Gamma M_x\nabla_x(\gamma_{k_1}(x,y)-\gamma_{k_2}(x,y))\cdot M_yu(y)ds_y\nonumber\\
&+& \frac{2i\mu\omega\eta}{(k_1^2-k_2^2)(\lambda+2\mu)}\int_\Gamma (k_1^2\gamma_{k_1}(x,y)-k_2^2\gamma_{k_2}(x,y))\nu_x^\top M_yu(y)ds_y,
\end{eqnarray}
respectively.
\end{theorem}
\begin{proof}
It follows that
\begin{eqnarray*}
&\quad& T(\partial_z,\nu_x)\partial_{\nu_y}E_{12}(z,y)-\gamma\nu_x\partial_{\nu_y}E_{22}(z,y)\\
&=& \frac{\gamma\nu_x}{k_1^2-k_2^2}\partial_{\nu_y}\left[ k_1^2\gamma_{k_1}(z,y)-k_2^2\gamma_{k_2}(z,y)+ (k_p^2-k_1^2)\gamma_{k_1}(z,y)-(k_p^2-k_2^2)\gamma_{k_2}(z,y)\right] \\
&-& \frac{2\mu\gamma}{(k_1^2-k_2^2)(\lambda+2\mu)} M_{z,x}\partial_{\nu_y}\nabla_z(\gamma_{k_1}(z,y)-\gamma_{k_2}(z,y))\\
&=& \frac{\gamma k_p^2\nu_x}{k_1^2-k_2^2}\partial_{\nu_y}\left( \gamma_{k_1}(z,y)-\gamma_{k_2}(z,y)\right) + \frac{2\mu\gamma}{(k_1^2-k_2^2)(\lambda+2\mu)} M_{z,x}\partial_{\nu_y}\nabla_y(\gamma_{k_1}(z,y)-\gamma_{k_2}(z,y))\\
&=& \frac{\gamma k_p^2\nu_x}{k_1^2-k_2^2}\partial_{\nu_y}\left( \gamma_{k_1}(z,y)-\gamma_{k_2}(z,y)\right)+ \frac{2\mu\gamma}{(k_1^2-k_2^2)(\lambda+2\mu)} M_{z,x}M_y\nabla_y(\gamma_{k_1}(z,y)-\gamma_{k_2}(z,y))\\
&-& \frac{2\mu\gamma\nu_y}{(k_1^2-k_2^2)(\lambda+2\mu)}M_{z,x} (k_1^2\gamma_{k_1}(z,y)-k_2^2\gamma_{k_2}(z,y)),
\end{eqnarray*}
which further implies (\ref{W2-1}) by the Stokes formula (\ref{stokes3}). The proof of (\ref{W3-1}) is similar and we omit it here.
\end{proof}
Finally, we investigate the term $W_4p$. From the result (\ref{Wf2}) for the acoustic wave, we immediately conclude the following.
\begin{theorem}
\label{main3}
The hyper-singular operator $W_4$ can be expressed as
\begin{eqnarray}
\label{W4-1}
&\quad& W_4 p(x) \nonumber\\
&=& \frac{1}{k_1^2-k_2^2}\int_\Gamma \left(\nu_x\times\nabla_x [(k_p^2-k_1^2)\gamma_{k_1}(x,y)- (k_p^2-k_2^2)\gamma_{k_2}(x,y)] \right)\cdot (\nu_y\times\nabla_y p(y))ds_y \nonumber\\
&+& \frac{1}{k_1^2-k_2^2}\int_\Gamma [k_1^2(k_p^2-k_1^2)\gamma_{k_1}(x,y)- k_2^2(k_p^2-k_2^2)\gamma_{k_2}(x,y)]\nu_x^\top \nu_y p(y)ds_y.
\end{eqnarray}
\end{theorem}
\begin{remark}
It can be easily verified from the Stokes formulas of the G\"unter derivatives that, with the proposed regularized formulations, all the integrals in the corresponding weak forms of $W_su$ and $\widetilde{W}_sU$ are at most weakly-singular.
\end{remark}
\section{Numerical tests}
\label{sec:ne}
In this section, we present several numerical examples to demonstrate the accuracy of the proposed scheme for solving the exterior ESP and TESP. We take the ESP as the model problem to describe the numerical implementation.
\subsection{Numerical implementations}
\label{sec:ni}
Using the weak form of the regularized formulation (\ref{Ws1}), it follows that (\ref{coematrix1}) can be recast as
\begin{eqnarray}
\label{coematrix11}
{\bf A}_h(k,j)&=& \rho\omega^2\int_{\Gamma_h}\int_{\Gamma_h} \gamma_{k_s}(x,y)(\nu_x\nu_y^\top-\nu_x^\top\nu_yI-J_{\nu_x,\nu_y})\psi_j(y) \psi_k(x)ds_yds_x\nonumber\\
&-& \rho\omega^2\int_{\Gamma_h}\int_{\Gamma_h} \gamma_{k_p}(x,y)\nu_x\nu_y^\top\psi_j(y) \psi_k(x)ds_yds_x\nonumber\\
&-& 2\mu\int_{\Gamma_h}\int_{\Gamma_h} M_x\psi_k(x) \nabla_y[\gamma_{k_s}(x,y)-\gamma_{k_p}(x,y)]\nu_y^\top \psi_j(y)\,ds_yds_x\nonumber\\
&+& \mu\int_{\Gamma_h}\int_{\Gamma_h} \gamma_{k_s}(x,y) \left(\nu_y\times\nabla_y\psi_j(y)\right)\cdot \left(\nu_x\times\nabla_x\psi_k(x)\right)ds_yds_x I\nonumber\\
&-& 2\mu\int_{\Gamma_h}\int_{\Gamma_h}\gamma_{k_s}(x,y) M_x\psi_k(x)M_y\psi_j(y)\,ds_yds_x\nonumber\\
&+& 4\mu^2\int_{\Gamma_h}\int_{\Gamma_h} M_x\psi_k(x) E(x,y)M_y\psi_j(y)\,ds_yds_x\nonumber\\
&+& 2\mu\int_{\Gamma_h}\int_{\Gamma_h} \nu_x\nabla_x^\top [\gamma_{k_s}(x,y)-\gamma_{k_p}(x,y)] M_y\psi_j(y)\psi_k(x)\,ds_yds_x\nonumber\\
&-& \mu \int_{\Gamma_h}\int_{\Gamma_h} \gamma_{k_s}(x,y)M_{\psi_j,\psi_k}\,ds_yds_x,
\end{eqnarray}
in which the entries of the matrix $M_{\psi_j,\psi_k}$ are given by
\begin{eqnarray*}
M_{\psi_j,\psi_k}(m,n)=\sum_{l=1}^3m_y^{lm}\psi_j(y)m_x^{nl}\psi_k(x),\quad m,n=1,2,3.
\end{eqnarray*}
In (\ref{coematrix11}), all the integrals are at most weakly-singular. It can be seen from suitable decompositions that the weakly-singular kernels in (\ref{coematrix11}) are of the types
\begin{eqnarray*}
\frac{1}{|x-y|},\quad \frac{(x-y)(x-y)^\top}{|x-y|^3}.
\end{eqnarray*}
\label{sec:lcs}
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.6]{cord.jpg}
\caption{Boundary element $\tau$.}
\label{Fig1}
\end{figure}
To compute the weakly-singular integrals efficiently, we apply the special local coordinate system given in \cite{RS} to the boundary element $\tau$ with vertices $x_{(1)},x_{(2)},x_{(3)}$. Define the unit vector $r_{\tau,2}=(x_{(3)}-x_{(2)})/|x_{(3)}-x_{(2)}|$. Set $q_1=(x_{(2)}-x_{(1)})\cdot r_{\tau,2}$ and $q_2=(x_{(3)}-x_{(1)})\cdot r_{\tau,2}$. Then the foot $x^*$ of the perpendicular from $x_{(1)}$ to the line through $x_{(2)}$ and $x_{(3)}$ is given by $x^*=x_{(3)}-q_2r_{\tau,2}$ or, equivalently, $x^*=x_{(2)}-q_1r_{\tau,2}$. Define another unit vector $r_{\tau,1}=(x^*-x_{(1)})/p_\tau$ with $p_\tau=|x^*-x_{(1)}|$, and the proportionality coefficients $\alpha_i=q_i/p_\tau$, $i=1,2$. Thus, the boundary element $\tau$ can be parameterized as
\begin{eqnarray*}
\tau=\{x=x(p_x,q_x)=x_{(1)}+p_xr_{\tau,1}+q_xr_{\tau,2}: 0<p_x<p_\tau,\; \alpha_1p_x<q_x<\alpha_2p_x\}.
\end{eqnarray*}
Then for $x,y\in\tau$, $|x-y|^2=(p_x-p_y)^2+(q_x-q_y)^2$. The outward unit normal $\nu_\tau$ to the element $\tau$ is given by $\nu_\tau=r_{\tau,1}\times r_{\tau,2}$. Moreover, the piecewise linear basis function $\psi_{(1)}$ associated with the vertex $x_{(1)}$ on $\tau$ can be written as $\psi_{(1)}(x)=\psi_{(1)}(x(p_x,q_x))=(p_\tau-p_x)/p_\tau$, $x\in\tau$, and $\nabla_x\psi_{(1)}(x)=-r_{\tau,1}/p_\tau$. In addition, using the above parameterization we have
\begin{eqnarray*}
\int_\tau f(x)\,ds_x= \int_0^{p_\tau}\int_{\alpha_1p_x}^{\alpha_2p_x}f(x(p_x,q_x))\,dq_xdp_x,
\end{eqnarray*}
or
\begin{eqnarray*}
\int_\tau f(x)\,ds_x &=& \int_{q_1}^0\int_{p_\tau-(q_1-q_x)/\alpha_1}^{p_\tau}f(x(p_x,q_x))\,dp_xdq_x\\
&+& \int_0^{q_2}\int_{p_\tau-(q_2-q_x)/\alpha_2}^{p_\tau} f(x(p_x,q_x))\,dp_xdq_x.
\end{eqnarray*}
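The local coordinate construction above can be sketched in Python as follows; the function and variable names are ours, and only the geometry of the coordinate system of \cite{RS} described above is reproduced.

```python
import numpy as np

def local_coordinates(x1, x2, x3):
    """Local coordinate system of a flat triangular element tau
    with vertices x1, x2, x3.

    Returns (r1, r2, p_tau, alpha1, alpha2, nu) with
    r2      = (x3 - x2)/|x3 - x2|,
    x*      = x3 - q2*r2  (foot of the perpendicular from x1),
    r1      = (x* - x1)/p_tau,  p_tau = |x* - x1|,
    alpha_i = q_i/p_tau,  nu = r1 x r2.
    """
    x1, x2, x3 = (np.asarray(v, dtype=float) for v in (x1, x2, x3))
    r2 = (x3 - x2) / np.linalg.norm(x3 - x2)
    q1 = (x2 - x1) @ r2
    q2 = (x3 - x1) @ r2
    xstar = x3 - q2 * r2           # equals x2 - q1*r2
    p_tau = np.linalg.norm(xstar - x1)
    r1 = (xstar - x1) / p_tau
    return r1, r2, p_tau, q1 / p_tau, q2 / p_tau, np.cross(r1, r2)

def to_point(x1, r1, r2, p, q):
    """Map local coordinates (p, q) to x(p, q) = x1 + p*r1 + q*r2."""
    return np.asarray(x1, dtype=float) + p * r1 + q * r2
```

By construction $r_{\tau,1}\perp r_{\tau,2}$, so that $|x-y|^2=(p_x-p_y)^2+(q_x-q_y)^2$ holds in these coordinates.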
Now we present the main computing strategy of the numerical implementation. Set $\tau=\tau_i$. Corresponding to the piecewise linear basis function $\psi_{i_m}(x)$, $m=1,2,3$, on $\tau$, set
\begin{eqnarray*}
x_{(n)}=x_{i_{B(m,n)}},\quad n=1,2,3,\quad B=\begin{bmatrix}
1 & 2 & 3\\
2 & 3 & 1\\
3 & 1 & 2
\end{bmatrix}.
\end{eqnarray*}
Using this reordering strategy and the local coordinate system, $\psi_{i_m}(x)=\psi_{(1)}(x(p_x,q_x))$ on $\tau$. Therefore,
\begin{eqnarray*}
\nabla_x\psi_{i_m}(x)=-\frac{1}{p_\tau}r_{\tau,1},\quad M_x\psi_{i_m}(x)=-\frac{1}{p_\tau} (r_{\tau,1}\nu_\tau^\top-\nu_\tau r_{\tau,1}^\top),\quad x\in\tau,
\end{eqnarray*}
are all constants.
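These constant quantities can be formed directly from $r_{\tau,1}$, $\nu_\tau$ and $p_\tau$; a minimal Python sketch (the function name is our own):

```python
import numpy as np

def grad_and_guenter(r1, nu, p_tau):
    """Constant quantities of the linear basis function on tau:
    grad psi = -r1/p_tau,
    M psi    = -(1/p_tau)*(r1 nu^T - nu r1^T)   (antisymmetric).
    """
    r1, nu = np.asarray(r1, dtype=float), np.asarray(nu, dtype=float)
    grad = -r1 / p_tau
    M = -(np.outer(r1, nu) - np.outer(nu, r1)) / p_tau
    return grad, M
```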
The nonsingular integrals involved in (\ref{coematrix11}) can be approximated by Gaussian quadrature for triangular elements, and we only need to consider the following weakly-singular integrals
\begin{eqnarray*}
I_1 &=& \int_{\tau_i}\int_{\tau_i} \frac{1}{|x-y|}\,ds_yds_x,\\
I_2 &=& \int_{\tau_i}\int_{\tau_i} \frac{1}{|x-y|}\psi_{i_m}(y)\psi_{i_n}(x)\,ds_yds_x,\quad m,n=1,2,3,\\
I_3 &=& \int_{\tau_i}\int_{\tau_i} \frac{(x-y)(x-y)^\top}{|x-y|^3}\,ds_yds_x,
\end{eqnarray*}
which can be numerically computed following the steps described in \cite{RS} in a semi-analytic sense.
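For the nonsingular contributions, any standard Gauss rule on triangles can be used; below is a minimal Python sketch with a three-point degree-2 rule. The specific rule is our choice for illustration; the text does not prescribe a particular quadrature order.

```python
import numpy as np

# Degree-2 Gauss rule on the reference triangle with vertices
# (0,0), (1,0), (0,1): three points with equal weights 1/3.
_PTS = np.array([[1/6, 1/6], [2/3, 1/6], [1/6, 2/3]])
_WTS = np.array([1/3, 1/3, 1/3])

def quad_triangle(f, x1, x2, x3):
    """Approximate int_tau f(x) ds_x over the flat triangle tau
    with vertices x1, x2, x3 in R^3."""
    x1, x2, x3 = (np.asarray(v, dtype=float) for v in (x1, x2, x3))
    area = 0.5 * np.linalg.norm(np.cross(x2 - x1, x3 - x1))
    total = 0.0
    for (a, b), w in zip(_PTS, _WTS):
        x = x1 + a * (x2 - x1) + b * (x3 - x1)
        total += w * f(x)
    return area * total
```

The rule is exact for polynomials of degree two on flat elements, which suffices for products of two linear basis functions.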
\subsection{Numerical examples}
In the numerical tests, the direct solver `$\backslash$' in Matlab is employed for solutions of the linear system (\ref{linearsys}). The impenetrable obstacle $\Omega$ is set to be a unit ball (see Figure \ref{obstacle} (a)) or star-like (see Figure \ref{obstacle} (b)) with radial function
\begin{eqnarray*}
r(\theta,\phi)=\sqrt{0.8+0.5(\cos2\phi-1)(\cos4\theta-1)},\quad\theta\in[0,\pi], \;\phi\in[0,2\pi].
\end{eqnarray*}
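For reference, the radial function of the star-like obstacle can be evaluated as follows; mapping $r(\theta,\phi)$ to a boundary point via the standard spherical parameterization is our assumption here, not stated in the text.

```python
import numpy as np

def radial(theta, phi):
    """Radial function r(theta, phi) of the star-like Obstacle II."""
    return np.sqrt(0.8 + 0.5 * (np.cos(2 * phi) - 1) * (np.cos(4 * theta) - 1))

def surface_point(theta, phi):
    """Boundary point, assuming the standard spherical parameterization."""
    r = radial(theta, phi)
    return r * np.array([np.sin(theta) * np.cos(phi),
                         np.sin(theta) * np.sin(phi),
                         np.cos(theta)])
```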
For these two obstacles, the origin $O$ lies in $\Omega$. In our numerical tests, we first compute the unknown potentials $\varphi_h$ and $\Psi_h$ on $\Gamma_h$ by solving the variational equations (\ref{weak1}) and (\ref{weak2}), respectively, and then substitute them into the solution representations (\ref{DirectBRF1}) and (\ref{DirectBRF2}) to get the numerical solutions $u_h$ and $U_h$ in $\Omega^c$, i.e.,
\begin{eqnarray*}
u_h(x) &=& \int_{\Gamma_h}(T_y E(x,y))^\top \varphi_h(y)\,ds_y,\\
U_h(x) &=& \int_{\Gamma_h}(\widetilde{T}^*(\partial_y,\nu_y) \widetilde{E}^\top(x,y))^\top \Psi_h(y)\,ds_y.
\end{eqnarray*}
\begin{figure}[htbp]
\centering
\begin{tabular}{ccc}
\includegraphics[scale=0.24]{Obstacle1.jpg} &
\includegraphics[scale=0.24]{Obstacle3.jpg} \\
(a) Obstacle I & (b) Obstacle II
\end{tabular}
\caption{Impenetrable obstacles to be considered in numerical tests.}
\label{obstacle}
\end{figure}
\subsubsection{Numerical examples for ESP}
Set $\omega=1$, $\rho=1$, $\lambda=2$, $\mu=1$. Let the exact solution be
\begin{eqnarray*}
u(x)=\nabla_x\left(\frac{e^{ik_p|x|}}{4\pi|x|}\right),\quad x\in\Omega^c.
\end{eqnarray*}
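The gradient above can be evaluated in closed form, $\nabla_x\big(e^{ik_pr}/(4\pi r)\big) = (ik_p - 1/r)\,e^{ik_pr}/(4\pi r)\,x/r$ with $r=|x|$, which is convenient for computing reference values; a Python sketch (naming is ours):

```python
import numpy as np

def u_exact(x, kp):
    """Exact solution u(x) = grad_x( exp(i*kp*|x|) / (4*pi*|x|) ).

    Differentiating the radially symmetric potential gives
    u(x) = (i*kp - 1/r) * exp(i*kp*r) / (4*pi*r) * x/r,  r = |x|.
    """
    x = np.asarray(x, dtype=float)
    r = np.linalg.norm(x)
    g = np.exp(1j * kp * r) / (4 * np.pi * r)
    return (1j * kp - 1 / r) * g * x / r
```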
Denote $\Gamma_m:=\{x=(x_1,x_2,x_3)^\top\in{\mathbb R}^3: x_1=2\cos\theta,x_2=2,x_3=1.5\cos\theta,\theta\in[0,2\pi]\}$. Define the numerical error
\begin{eqnarray*}
\mbox{Error}:=\|u-u_h\|_{L^\infty(\Gamma_m)^3}.
\end{eqnarray*}
For simplicity, we use `RP' and `IP' to stand for `real part' and `imaginary part', respectively. The exact and numerical solutions on $\Gamma_m$ are plotted in Figure \ref{Fig3} for Obstacle I with $h=0.1005$. We observe that, from the qualitative point of view, the numerical solutions are in perfect agreement with the exact ones. In Table \ref{Table1}, we present the numerical errors $\mbox{Error}$ with respect to the meshsize $h$, which indicate the asymptotic convergence order $O(h^2)$. These results verify the accuracy of the regularized formulation for the hyper-singular BIO $W_s$.
\begin{figure}[ht]
\centering
\begin{tabular}{ccc}
\includegraphics[scale=0.18]{Obstacle1u1.jpg} &
\includegraphics[scale=0.18]{Obstacle1u2.jpg} &
\includegraphics[scale=0.18]{Obstacle1u3.jpg} \\
(a) $u_1$ & (b) $u_2$ & (c) $u_3$
\end{tabular}
\caption{The real and imaginary parts of the exact and numerical solutions when $\Omega$ is Obstacle I with $h=0.1005$.}
\label{Fig3}
\end{figure}
\begin{table}[ht]
\caption{Numerical errors $\mbox{Error}$ with respect to the meshsize $h$.}
\centering
\begin{tabular}{ccc}
\hline
$h$ & $\mbox{Error}$ & Order \\
\hline
0.4880 & 1.46E-3 & -- \\
0.3871 & 7.51E-4 & 2.87 \\
0.2668 & 2.95E-4 & 2.51 \\
0.1913 & 1.33E-4 & 2.39 \\
0.1005 & 3.01E-5 & 2.31 \\
\hline
\end{tabular}
\label{Table1}
\end{table}
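The orders reported in the table are the observed rates between consecutive rows, $\log(e_i/e_{i+1})/\log(h_i/h_{i+1})$, and can be reproduced directly from the tabulated data:

```python
import numpy as np

# Meshsizes and errors from Table 1.
h = np.array([0.4880, 0.3871, 0.2668, 0.1913, 0.1005])
err = np.array([1.46e-3, 7.51e-4, 2.95e-4, 1.33e-4, 3.01e-5])

# Observed convergence order between consecutive refinements.
order = np.log(err[:-1] / err[1:]) / np.log(h[:-1] / h[1:])
```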
Next, we consider the scattering by Obstacle II of an incident plane wave $u^{in}$ taking the form
\begin{eqnarray*}
u^{in}=ik_pde^{ik_px\cdot d},\quad x\in{\mathbb R}^3,\quad d=(\sin\theta^{in}\cos\phi^{in}, \sin\theta^{in}\sin\phi^{in}, \cos\theta^{in})^\top\in\mathcal{S}^2,
\end{eqnarray*}
where $(\theta^{in},\phi^{in})$ is the incident direction. In this case, $f=-T(\partial,\nu)u^{in}$ on $\Gamma$. We choose $\theta^{in}=\pi/2$ and $\phi^{in}=0$. The real and imaginary parts of the numerical solution $u_h$ on four unit spheres surrounding the obstacle are presented in Figure \ref{Fig5}.
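The incident field is straightforward to evaluate; a Python sketch (the function name is ours):

```python
import numpy as np

def incident_plane_wave(x, kp, theta_in, phi_in):
    """Incident compressional plane wave
    u^in(x) = i * kp * d * exp(i * kp * x . d),
    with unit direction d given by the incident angles."""
    d = np.array([np.sin(theta_in) * np.cos(phi_in),
                  np.sin(theta_in) * np.sin(phi_in),
                  np.cos(theta_in)])
    return 1j * kp * d * np.exp(1j * kp * (np.asarray(x, dtype=float) @ d))
```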
\begin{figure}[ht]
\centering
\begin{tabular}{ccc}
\includegraphics[scale=0.18]{realu1.jpg} &
\includegraphics[scale=0.18]{realu2.jpg} &
\includegraphics[scale=0.18]{realu3.jpg} \\
(a) $\mbox{Re}(u_1)$ & (b) $\mbox{Re}(u_2)$ & (c) $\mbox{Re}(u_3)$ \\
\includegraphics[scale=0.18]{imagu1.jpg} &
\includegraphics[scale=0.18]{imagu2.jpg} &
\includegraphics[scale=0.18]{imagu3.jpg} \\
(d) $\mbox{Im}(u_1)$ & (e) $\mbox{Im}(u_2)$ & (f) $\mbox{Im}(u_3)$
\end{tabular}
\caption{The real and imaginary parts of the numerical solutions of the scattering of plane incident wave for Obstacle II.}
\label{Fig5}
\end{figure}
\subsubsection{Numerical examples for TESP}
Choose $\omega=1$, $\rho=2$, $\lambda=1$, $\mu=1$, $\kappa=1$, $\eta=0.2$ and $\gamma=0.1$. The exact solution is set to be
\begin{eqnarray*}
u(x)=E_{12}(x,z),\quad p(x)=E_{22}(x,z),\quad x\in\Omega^c,
\end{eqnarray*}
with $z=(0.1,0.3,0.2)^\top\in\Omega$. Define the numerical error
\begin{eqnarray*}
\widetilde{\mbox{Error}}:=\|U-U_h\|_{L^\infty(\Gamma_m)^4}.
\end{eqnarray*}
We plot the exact and numerical solutions on $\Gamma_m$ in Figure \ref{Fig6} for Obstacle I with $h=0.1005$. From the qualitative point of view, the numerical solutions are in perfect agreement with the exact ones. In Table \ref{Table2}, we present the numerical errors $\widetilde{\mbox{Error}}$ with respect to the meshsize $h$, which also indicate convergence. These results verify the accuracy of the regularized formulation for the hyper-singular BIO $\widetilde{W}_s$.
\begin{figure}[ht]
\centering
\begin{tabular}{cccc}
\includegraphics[scale=0.18]{TObstacle1u1.jpg} &
\includegraphics[scale=0.18]{TObstacle1u2.jpg} \\
(a) $u_1$ & (b) $u_2$ \\
\includegraphics[scale=0.18]{TObstacle1u3.jpg} &
\includegraphics[scale=0.18]{TObstacle1p.jpg} \\
(c) $u_3$ & (d) $p$
\end{tabular}
\caption{The real and imaginary parts of the exact and numerical solutions when $\Omega$ is Obstacle I with $h=0.1005$.}
\label{Fig6}
\end{figure}
\begin{table}[ht]
\caption{Numerical errors $\widetilde{\mbox{Error}}$ with respect to the meshsize $h$.}
\centering
\begin{tabular}{ccc}
\hline
$h$ & $\widetilde{\mbox{Error}}$ & Order \\
\hline
0.4880 & 5.32E-4 & -- \\
0.3871 & 2.22E-4 & 3.77 \\
0.2668 & 1.07E-4 & 1.96 \\
0.1913 & 3.59E-5 & 3.28 \\
0.1005 & 2.61E-6 & 4.07 \\
\hline
\end{tabular}
\label{Table2}
\end{table}
Finally, we consider the scattering by Obstacle II of an incident point source $U^{in}=({u^{in}}^\top,p^{in})^\top$ taking the form
\begin{eqnarray*}
u^{in}(x)=E_{12}(x,z),\quad p^{in}(x)=E_{22}(x,z),\quad x,z\in\Omega^c,
\end{eqnarray*}
where $z$ is the location of the point source. In this case, $F=-\widetilde{T}(\partial,\nu)U^{in}$ on $\Gamma$. We choose $z=(0,0,2)^\top$. The real and imaginary parts of the numerical solutions $U_h$ on four unit spheres surrounding the obstacle are presented in Figure \ref{Fig8}.
\begin{figure}[ht]
\centering
\begin{tabular}{cccc}
\includegraphics[scale=0.09]{Trealu1.jpg} &
\includegraphics[scale=0.09]{Trealu2.jpg} &
\includegraphics[scale=0.09]{Trealu3.jpg} &
\includegraphics[scale=0.09]{Trealp.jpg} \\
(a) $\mbox{Re}(u_1)$ & (b) $\mbox{Re}(u_2)$ & (c) $\mbox{Re}(u_3)$ & (d) $\mbox{Re}(p)$ \\
\includegraphics[scale=0.09]{Timagu1.jpg} &
\includegraphics[scale=0.09]{Timagu2.jpg} &
\includegraphics[scale=0.09]{Timagu3.jpg} &
\includegraphics[scale=0.09]{Timagp.jpg} \\
(e) $\mbox{Im}(u_1)$ & (f) $\mbox{Im}(u_2)$ & (g) $\mbox{Im}(u_3)$ & (h) $\mbox{Im}(p)$
\end{tabular}
\caption{The real and imaginary parts of the numerical solutions of the scattering of point source for Obstacle II.}
\label{Fig8}
\end{figure}
\appendix
\section{Proof of Theorem \ref{main}}
\label{appendex.a}
We know from (\ref{TyExy}) that
\begin{eqnarray*}
\mathcal{D}_su(z)=-f_1(z)+f_2(z)+f_3(z),
\end{eqnarray*}
where
\begin{eqnarray*}
f_1(z) &=& \int_{\Gamma} \nabla_y [\gamma_{k_s}(z,y)-\gamma_{k_p}(z,y)]\nu_y^\top u(y)ds_y,\\
f_2(z) &=& \int_{\Gamma} \partial_{\nu_y}\gamma _{k_s}(z,y)u(y)ds_y,\\
f_3(z) &=& \int_{\Gamma} [2\mu E(z,y)-\gamma_{k_s}(z,y)I] M_yu(y)ds_y.
\end{eqnarray*}
Note that
\begin{eqnarray}
\label{Wsproof1}
W_su(x)= \lim_{z\rightarrow x\in\Gamma,z\notin\Gamma} (g_1(z)-g_2(z)-g_3(z)),
\end{eqnarray}
where
\begin{eqnarray*}
g_i(z) = \mu\nu_x\cdot\nabla_z f_i(z) +(\lambda+\mu)\nu_x(\nabla_z \cdot f_i(z))+\mu M_{z,x}f_i(z).
\end{eqnarray*}
We obtain from (\ref{Tgrad}) that
\begin{eqnarray}
\label{Wsproof2}
g_1(z)&=& (\lambda+2\mu) \int_\Gamma [k_s^2\gamma_{k_s}(z,y)-k_p^2\gamma_{k_p}(z,y)] \nu_x\nu_y^\top u(y)ds_y \nonumber\\
&+& 2\mu\int_\Gamma M_{z,x}\nabla_y[\gamma_{k_s}(z,y)-\gamma_{k_p}(z,y)]\nu_y^\top u(y)ds_y.
\end{eqnarray}
From (\ref{Wf2}) we can obtain that
\begin{eqnarray}
\label{Wsproof3}
g_2(z)&=& \mu\int_\Gamma (\nu_x\cdot\nabla_z)\partial_{\nu_y}\gamma_{k_s}(z,y)u(y)ds_y +(\lambda+\mu)\int_\Gamma \nu_x\nabla_z^\top \partial_{\nu_y}\gamma_{k_s}(z,y) u(y)ds_y\nonumber\\
&+& \mu\int_\Gamma M_{z,x}\partial_{\nu_y}\gamma_{k_s}(z,y)u(y)ds_y\nonumber\\
&=& \mu\int_\Gamma \left(\nu_x\times\nabla_z\gamma_{k_s}(z,y)\right)\cdot \left(\nu_y\times\nabla_yu(y)\right)ds_y + \mu k_s^2\int_\Gamma \gamma_{k_s}(z,y)\nu_x^\top\nu_yu(y)ds_y\nonumber\\
&+& (\lambda+\mu)\int_\Gamma \nu_x\nabla_z^\top \partial_{\nu_y}\gamma_{k_s}(z,y) u(y)ds_y +\mu\int_\Gamma M_{z,x}\partial_{\nu_y}\gamma_{k_s}(z,y)u(y)ds_y.
\end{eqnarray}
For $g_3(z)$, we know from (\ref{TxExy}) that
\begin{eqnarray}
\label{Wsproof4}
&\quad& g_3(z)\nonumber\\
&=& 2\mu\int_\Gamma (\nu_x\cdot\nabla_z)\gamma_{k_s}(z,y) M_yu(y)ds_y \nonumber\\
&-& 2\mu\int_\Gamma \nu_x\nabla_z^\top [\gamma_{k_s}(z,y)-\gamma_{k_p}(z,y)]M_yu(y)ds_y\nonumber\\
&+& 4\mu^2\int_\Gamma M_{z,x}E(z,y)M_yu(y)ds_y -2\mu\int_\Gamma M_{z,x}\gamma_{k_s}(z,y)M_yu(y)ds_y \nonumber\\
&-& \mu\int_\Gamma (\nu_x\cdot\nabla_z)\gamma_{k_s}(z,y) M_yu(y)ds_y -(\lambda+\mu)\int_\Gamma \nu_x\nabla_z^\top\gamma_{k_s}(z,y)M_yu(y)ds_y\nonumber\\
&-& \mu\int_\Gamma M_{z,x}\gamma_{k_s}(z,y)M_yu(y)ds_y\nonumber\\
&=& \mu\int_\Gamma (\nu_x\cdot\nabla_z)\gamma_{k_s}(z,y) M_yu(y)ds_y -3\mu\int_\Gamma M_{z,x}\gamma_{k_s}(z,y)M_yu(y)ds_y\nonumber\\
&+& 4\mu^2\int_\Gamma M_{z,x}E(z,y)M_yu(y)ds_y \nonumber\\
&-& 2\mu\int_\Gamma \nu_x\nabla_z^\top [\gamma_{k_s}(z,y)-\gamma_{k_p}(z,y)]M_yu(y)ds_y\nonumber\\
&-& (\lambda+\mu)\int_\Gamma \nu_x\nabla_z^\top\gamma_{k_s}(z,y)M_yu(y)ds_y.
\end{eqnarray}
Therefore, (\ref{Wsproof2})-(\ref{Wsproof4}) yield
\begin{eqnarray}
\label{Wsproof5}
&\quad& g_1(z)-g_2(z)-g_3(z) \nonumber\\
&=& (\lambda+2\mu) \int_\Gamma [k_s^2\gamma_{k_s}(z,y)-k_p^2\gamma_{k_p}(z,y)] \nu_x\nu_y^\top u(y)ds_y\nonumber\\
&+& 2\mu\int_\Gamma M_{z,x}\nabla_y[\gamma_{k_s}(z,y)-\gamma_{k_p}(z,y)]\nu_y^\top u(y)ds_y\nonumber\\
&-& \mu\int_\Gamma \left(\nu_x\times\nabla_z\gamma_{k_s}(z,y)\right)\cdot \left(\nu_y\times\nabla_yu(y)\right)ds_y - \mu k_s^2\int_\Gamma \gamma_{k_s}(z,y)\nu_x^\top\nu_yu(y)ds_y\nonumber\\
&+& 3\mu\int_\Gamma M_{z,x}\gamma_{k_s}(z,y)M_yu(y)ds_y -4\mu^2\int_\Gamma M_{z,x}E(z,y)M_yu(y)ds_y\nonumber\\
&+& 2\mu\int_\Gamma \nu_x\nabla_z^\top [\gamma_{k_s}(z,y)-\gamma_{k_p}(z,y)]M_yu(y)ds_y -\mu h_1(z)-(\lambda+\mu)h_2(z),
\end{eqnarray}
where
\begin{eqnarray*}
h_1(z) &=& \int_\Gamma \left[ M_{z,x}\partial_{\nu_y}\gamma_{k_s}(z,y)u(y)+ (\nu_x\cdot\nabla_z)\gamma_{k_s}(z,y)M_yu(y)\right]ds_y,\\
h_2(z) &=& \int_\Gamma \nu_x\left[\nabla_z^\top \partial_{\nu_y}\gamma_{k_s}(z,y) u(y)- \nabla_z^\top\gamma_{k_s}(z,y)M_yu(y)\right]ds_y.
\end{eqnarray*}
Note that for $i,j=1,2,3$,
\begin{eqnarray*}
\sum_{l=1}^3 \left(m_y^{il}m_{z,x}^{lj}-m_{z,x}^{il}m_y^{lj}\right)
= (\nu_y^i\nu_x^j-\nu_x^i\nu_y^j)\Delta_z+ m^{ij}_{z,x}\partial_{\nu_y} -m^{ij}_y(\nu_x\cdot\nabla_z).
\end{eqnarray*}
We conclude that
\begin{eqnarray*}
h_1(z) &=& \int_\Gamma \left[ M_{z,x}\partial_{\nu_y}\gamma_{k_s}(z,y)u(y)+ (\nu_x\cdot\nabla_z)\gamma_{k_s}(z,y)M_yu(y)\right]ds_y\\
&=& \int_\Gamma \left[ M_{z,x}\partial_{\nu_y}\gamma_{k_s}(z,y)u(y)- M_y(\nu_x\cdot\nabla_z)\gamma_{k_s}(z,y)u(y)\right]ds_y\\
&=& \int_\Gamma \left[ M_yM_{z,x}- M_{z,x}M_y\right]\gamma_{k_s}(z,y)u(y)ds_y\\
&+& k_s^2\int_\Gamma \gamma_{k_s}(z,y)J_{\nu_x,\nu_y}u(y)ds_y.
\end{eqnarray*}
We obtain from the Stokes formula (\ref{stokes1}) that
\begin{eqnarray*}
&\quad& \lim_{z\rightarrow x\in\Gamma,z\notin\Gamma}\int_\Gamma M_yM_{z,x}\gamma_{k_s}(z,y)u(y)ds_y\\
&=& \left\{ \sum_{k,l=1}^3\int_\Gamma m_y^{jk}m_x^{kl}\gamma_{k_s}(x,y)u_l(y)ds_y \right\}_{j=1}^3\\
&=& \left\{ \sum_{k,l=1}^3\int_\Gamma m_x^{kl}\gamma_{k_s}(x,y)m_y^{kj}u_l(y)ds_y \right\}_{j=1}^3.
\end{eqnarray*}
On the other hand,
\begin{eqnarray*}
\int_\Gamma M_{z,x}M_y\gamma_{k_s}(z,y)u(y)ds_y &=& -\int_\Gamma M_{z,x}\gamma_{k_s}(z,y)M_yu(y)ds_y.
\end{eqnarray*}
Thus,
\begin{eqnarray}
\label{Wsproof6}
\lim_{z\rightarrow x\in\Gamma,z\notin\Gamma} h_1(z) &=& \left\{ \sum_{k,l=1}^3\int_\Gamma m_x^{kl}\gamma_{k_s}(x,y)m_y^{kj}u_l(y)ds_y \right\}_{j=1}^3\nonumber\\
&+& \int_\Gamma M_x\gamma_{k_s}(x,y)M_yu(y)ds_y\nonumber\\
&+& k_s^2\int_\Gamma \gamma_{k_s}(x,y)J_{\nu_x,\nu_y}u(y)ds_y.
\end{eqnarray}
Finally, since
\begin{eqnarray*}
\nu_x\int_\Gamma \nabla_z^\top\gamma_{k_s}(z,y)M_yu(y)ds_y =\nu_x\int_\Gamma \left[M_y\nabla_z\gamma_{k_s}(z,y)\right] \cdot u(y)ds_y,
\end{eqnarray*}
we have
\begin{eqnarray}
\label{Wsproof7}
h_2(z) &=& \nu_x\int_\Gamma \left[ \nabla_z \partial_{\nu_y}\gamma_{k_s}(z,y)- M_y\nabla_z\gamma_{k_s}(z,y) \right]\cdot u(y)ds_y\nonumber\\
&=& -\nu_x\int_\Gamma \Delta_z\gamma_{k_s}(z,y)\nu_y^\top u(y)ds_y\nonumber\\
&=& k_s^2\int_\Gamma \gamma_{k_s}(z,y)\nu_x\nu_y^\top u(y)ds_y.
\end{eqnarray}
We complete the proof of (\ref{Ws1}) by a combination of (\ref{Wsproof1}) and (\ref{Wsproof5})-(\ref{Wsproof7}).
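The last step above rests on the Helmholtz identity $\Delta_z\gamma_{k_s}(z,y)=-k_s^2\gamma_{k_s}(z,y)$ for $z\ne y$. As an illustrative numerical sanity check (not part of the proof, and assuming the standard three-dimensional kernel $\gamma_k(x,y)=e^{ik|x-y|}/(4\pi|x-y|)$), one can verify the identity with central finite differences:

```python
import math, cmath

def gamma(k, x, y):
    # 3D Helmholtz fundamental solution gamma_k(x, y) = e^{ik|x-y|} / (4*pi*|x-y|)
    r = math.dist(x, y)
    return cmath.exp(1j * k * r) / (4 * math.pi * r)

def laplacian(k, x, y, h=1e-3):
    # central second differences in each coordinate of x
    val = 0j
    for i in range(3):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        val += (gamma(k, xp, y) - 2 * gamma(k, x, y) + gamma(k, xm, y)) / h**2
    return val

k = 2.0
x, y = (0.3, -0.2, 0.5), (1.1, 0.4, -0.7)
lhs = laplacian(k, x, y)
rhs = -k**2 * gamma(k, x, y)
print(abs(lhs - rhs))  # small (finite-difference truncation error)
```

The residual is of order $h^2$, consistent with the identity used in \eqref{Wsproof7}.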
\section{Proof of Lemma \ref{Tlemma1}}
\label{appendex.b}
For a matrix $A$ or a vector $B$, we denote by $(A)_{ij}$ and $(B)_i$ their Cartesian components, respectively. Let
\begin{eqnarray*}
R_1=\gamma_{k_s}-\frac{k_p^2-k_2^2}{k_1^2-k_2^2}\gamma_{k_1}+ \frac{k_p^2-k_1^2}{k_1^2-k_2^2}\gamma_{k_2}.
\end{eqnarray*}
Then we have
\begin{eqnarray}
\label{Tlemma1-1}
(\nabla_x\cdot E_{11})_i &=& \frac{1}{\mu}\partial_{x_i}\gamma_{k_s} +\frac{1}{\rho\omega^2} \sum_{j=1}^d\partial_{x_i}\partial_{x_j}^2R_1 \nonumber\\
&=& \partial_{x_i}\left( \frac{1}{\mu}\gamma_{k_s}+\frac{1}{\rho\omega^2} \Delta_x R_1 \right),
\end{eqnarray}
\begin{eqnarray}
\label{Tlemma1-2}
(\partial_{\nu_x}E_{11})_{ij}= \frac{1}{\mu}\partial_{\nu_x}\gamma_{k_s}\delta_{ij} +\frac{1}{\rho\omega^2}\sum_{l=1}^d \nu_x^l\partial_{x_l}\partial_{x_i}\partial_{x_j} R_1,
\end{eqnarray}
and
\begin{eqnarray}
\label{Tlemma1-3}
&\quad& (M_xE_{11})_{ij} \nonumber\\
&=& \frac{1}{\mu}M_x\gamma_{k_s}+ \frac{1}{\rho\omega^2} \sum_{l=1}^d (\partial_{x_i}\nu_x^l-\partial_{x_l}\nu_x^i) \partial_{x_l}\partial_{x_j}R_1\nonumber\\
&=& \frac{1}{\mu}M_x\gamma_{k_s} +\frac{1}{\rho\omega^2}\sum_{l=1}^d \nu_x^l\partial_{x_l}\partial_{x_i}\partial_{x_j} R_1 -\frac{1}{\rho\omega^2}\nu_x^i\partial_{x_j}\Delta_x R_1.
\end{eqnarray}
Therefore, from (\ref{Tform2}) and (\ref{Tlemma1-1})-(\ref{Tlemma1-3}) we have
\begin{eqnarray*}
&\quad&(T(\partial_x,\nu_x)E_{11}(x,y))_{ij}\\
&=& (\lambda+\mu)\nu_x^i(\nabla_x\cdot E_{11})_j + \mu(\partial_{\nu_x}E_{11})_{ij} + \mu (M_xE_{11})_{ij}\\
&=& \nu_x^i\partial_{x_j}\left( \frac{\lambda+\mu}{\mu}\gamma_{k_s}+ \frac{\lambda+2\mu}{\rho\omega^2} \Delta_x R_1 \right) +\partial_{\nu_x}\gamma_{k_s}\delta_{ij}+ (M_x(2\mu E_{11}-\gamma_{k_s}))_{ij}.
\end{eqnarray*}
Note that
\begin{eqnarray*}
\Delta_x R_1 &=& -k_s^2\gamma_{k_s}+ \frac{(k_p^2-k_2^2)k_1^2}{k_1^2-k_2^2}\gamma_{k_1}- \frac{(k_p^2-k_1^2)k_2^2}{k_1^2-k_2^2}\gamma_{k_2}\\
&=& -k_s^2\gamma_{k_s} +\frac{(k_1^2-q)k_p^2}{k_1^2-k_2^2}\gamma_{k_1}- \frac{(k_2^2-q)k_p^2}{k_1^2-k_2^2}\gamma_{k_2}\\
&=& -k_s^2\gamma_{k_s}+ k_p^2\gamma_{k_1} +\frac{(k_2^2-q)k_p^2}{k_1^2-k_2^2} (\gamma_{k_1}-\gamma_{k_2}).
\end{eqnarray*}
Hence,
\begin{eqnarray*}
(T(\partial_x,\nu_x)E_{11})_{ij} &=& -\nu_x^i\partial_{x_j} (\gamma_{k_s}-\gamma_{k_1}) + \frac{k_2^2-q}{k_1^2-k_2^2} \nu_x^i\partial_{x_j}(\gamma_{k_1}-\gamma_{k_2}) \\
&+& \partial_{\nu_x}\gamma_{k_s}\delta_{ij}+ (M_x(2\mu E_{11}-\gamma_{k_s}))_{ij},
\end{eqnarray*}
which completes the proof of (\ref{TxE11}). The proof of (\ref{TyE11}) follows in a similar way, and we omit it here.
\section{Proof of Theorem \ref{main1}}
\label{appendex.c}
Following the same steps as in Appendix \ref{appendex.a}, we obtain
\begin{eqnarray}
\label{W1proof8}
&\quad& -\lim_{z\rightarrow x\in\Gamma,z\notin\Gamma}T(\partial_z,\nu_x)\int_\Gamma (T(\partial_y,\nu_y)E_{11}(z,y))^\top u(y)ds_y\nonumber\\
&=& \rho\omega^2\int_\Gamma \gamma_{k_s}(x,y) (\nu_x\nu_y^\top-\nu_x^\top\nu_yI-J(\nu_x,\nu_y))u(y)ds_y\nonumber\\
&-& \frac{k_1^2(k_1^2-q)(\lambda+2\mu)}{k_1^2-k_2^2}\int_\Gamma \gamma_{k_1}(x,y) \nu_x\nu_y^\top u(y)ds_y\nonumber\\
&+& \frac{k_2^2(k_2^2-q)(\lambda+2\mu)}{k_1^2-k_2^2}\int_\Gamma\gamma_{k_2}(x,y) \nu_x\nu_y^\top u(y)ds_y\nonumber\\
&-& \mu\int_\Gamma \left(\nu_x\times\nabla_x\gamma_{k_s}(x,y)\right)\cdot \left(\nu_y\times\nabla_yu(y)\right)ds_y-4\mu^2\int_\Gamma M_xE(x,y)M_yu(y)ds_y\nonumber\\
&+& 2\mu\int_\Gamma M_x\gamma_{k_s}(x,y)M_yu(y)ds_y -\mu\left\{ \sum_{k,l=1}^3\int_\Gamma m_x^{kl}\gamma_{k_s}(x,y) m_y^{kj}u_l(y)ds_y \right\}_{j=1}^3 \nonumber\\
&+& 2\mu\int_\Gamma \nu_x\nabla_x^\top \left[\gamma_{k_s}(x,y) -\gamma_{k_1}(x,y)\right] M_yu(y)ds_y \nonumber\\
&+& 2\mu\int_\Gamma M_x\nabla_y \left[\gamma_{k_s}(x,y) -\gamma_{k_1}(x,y)\right]\nu_y^\top u(y)ds_y\nonumber\\
&-& \frac{2\mu(k_2^2-q)}{k_1^2-k_2^2}\int_\Gamma \nu_x\nabla_x^\top \left[\gamma_{k_1}(x,y) -\gamma_{k_2}(x,y)\right] M_yu(y)ds_y \nonumber\\
&-& \frac{2\mu(k_2^2-q)}{k_1^2-k_2^2}\int_\Gamma M_x\nabla_y \left[\gamma_{k_1}(x,y) -\gamma_{k_2}(x,y)\right]\nu_y^\top u(y)ds_y.
\end{eqnarray}
On the other hand, we have
\begin{eqnarray*}
T(\partial_z,\nu_x)E_{12}(z,y) &=& \frac{\gamma}{k_1^2-k_2^2} \nu_x\left[k_1^2\gamma_{k_1}(z,y)-k_2^2\gamma_{k_2}(z,y)\right]\nonumber\\
&-& \frac{2\mu\gamma}{(k_1^2-k_2^2)(\lambda+2\mu)} M_{z,x}\nabla_z\left[\gamma_{k_1}(z,y) -\gamma_{k_2}(z,y)\right],
\end{eqnarray*}
and
\begin{eqnarray*}
T(\partial_y,\nu_y)E_{21}(z,y) &=& \frac{i\omega\eta}{k_1^2-k_2^2} \nu_y\left[k_1^2\gamma_{k_1}(z,y)-k_2^2\gamma_{k_2}(z,y)\right]\nonumber\\
&-& \frac{2i\mu\omega\eta}{(k_1^2-k_2^2)(\lambda+2\mu)} M_y\nabla_y\left[\gamma_{k_1}(z,y) -\gamma_{k_2}(z,y)\right].
\end{eqnarray*}
Then we have
\begin{eqnarray}
\label{W1proof9}
&\quad& \lim_{z\rightarrow x\in\Gamma,z\notin\Gamma}\int_\Gamma [i\omega\eta T(\partial_z,\nu_x)E_{12}(z,y)\nu_y^\top +\gamma\nu_x(T(\partial_y,\nu_y)E_{21}(z,y))^\top \nonumber\\
&\quad& \quad\quad\quad\quad\quad\quad\quad\quad\quad -i\omega\eta\gamma\nu_x\nu_y^\top E_{22}(z,y)]u(y)ds_y\nonumber\\
&=& \frac{i\omega\eta\gamma}{k_1^2-k_2^2}\int_\Gamma [(k_p^2+k_1^2)\gamma_{k_1}(x,y)- (k_p^2+k_2^2)\gamma_{k_2}(x,y)]\nu_x\nu_y^\top u(y)ds_y\nonumber\\
&+& \frac{2i\mu\omega\eta\gamma}{(k_1^2-k_2^2)(\lambda+2\mu)} \int_\Gamma M_x\nabla_y\left[\gamma_{k_1}(x,y) -\gamma_{k_2}(x,y)\right]\nu_y^\top u(y)ds_y\nonumber\\
&+& \frac{2i\mu\omega\eta\gamma}{(k_1^2-k_2^2)(\lambda+2\mu)} \int_\Gamma
\nu_x\nabla_x^\top\left[\gamma_{k_1}(x,y) -\gamma_{k_2}(x,y)\right] M_yu(y)ds_y.
\end{eqnarray}
Then (\ref{W1-1}) can be proved by combining (\ref{W1proof8}) and (\ref{W1proof9}).
\section*{Acknowledgments}
The work of G. Bao is supported in part by an NSFC Innovative Group Fund (No. 11621101), an Integrated Project of the Major Research Plan of NSFC (No. 91630309), and an NSFC A3 Project (No. 11421110002). The work of L. Xu is partially supported by a Key Project of the Major Research Plan of NSFC (No. 91630205) and an NSFC Grant (No. 11771068).
\begin{thebibliography}{00}
\bibitem{BHSY} G. Bao, G. Hu, J. Sun, T. Yin, Direct and inverse elastic scattering from anisotropic media, to appear in J. Math. Pures Appl.
\bibitem{BXY} G. Bao, L. Xu, T. Yin, An accurate boundary element method for the exterior elastic scattering problem in two dimensions, J. Comput. Phys. 348 (2017) 343-363.
\bibitem{BT} A. Bendali, S. Tordeux, Extension of the G\"unter derivatives to Lipschitz domains and application to the boundary potentials of elastic waves, arXiv:1611.04362.
\bibitem{BLR14} F. Bu, J. Lin, F. Reitich, A fast and high-order method for the three-dimensional elastic wave scattering problem, J. Comput. Phys. 258 (2014) 856-870.
\bibitem{B56} M. A. Biot, Thermoelasticity and irreversible thermodynamics, J. Appl. Phys. 27 (1956) 240-253.
\bibitem{BM71} A. J. Burton, G. F. Miller, The application of integral equation methods to the numerical solution of some exterior boundary-value problems, Proc. Roy. Soc. London Ser. A 323 (1971) 201-210.
\bibitem{C00} F. Cakoni, Boundary integral method for thermoelastic screen scattering problem in ${\mathbb R}^3$, Math. Meth. Appl. Sci. 23 (2000) 441-466.
\bibitem{CD98} F. Cakoni, G. Dassios, The coated thermoelastic body within a low-frequency elastodynamic field, Int. J. Engng. Sci. 36 (1998) 1815-1838.
\bibitem{CD99} F. Cakoni, G. Dassios, The Atkinson-Wilcox theorem in thermoelasticity, Quart. Appl. Math. 57(4) (1999) 771-795.
\bibitem{CBS08} S. Chaillat, M. Bonnet, J.-F. Semblat, A multi-level fast multipole BEM for 3-d elastodynamics in the frequency domain, Comput. Methods Appl. Mech. Eng. 197 (2008) 4233-4249.
\bibitem{DB90} G. F. Dargush, P. K. Banerjee, Boundary element methods in three-dimensional thermoelasticity, Int. J. Solids Struct. 26 (1990) 199-216.
\bibitem{DK88} G. Dassios, V. Kostopoulos, The scattering amplitudes and cross-sections in the theory of thermoelasticity, SIAM J. Appl. Math. 48(1) (1988) 79-98.
\bibitem{DK90} G. Dassios, V. Kostopoulos, On Rayleigh expansions in thermoelastic scattering, SIAM J. Appl. Math. 50(5) (1990) 1300-1324.
\bibitem{GK90} D. Givoli, J. B. Keller, Non-reflecting boundary conditions for elastic waves, Wave Motion 12 (1990) 261-279.
\bibitem{GN78} J. Giroire, J. C. N\'{e}d\'{e}lec, Numerical solution of an exterior Neumann problem using a double layer potential, Math. Comp. 32 (1978) 973-990.
\bibitem{H94} H. Han, The boundary-integro-differential equations of three-dimensional Neumann problem in linear elasticity, Numer. Math. 68 (1994) 269-281.
\bibitem{HS98} I. Harari, Z. Shohet, On non-reflecting boundary conditions in unbounded elastic solids, Comput. Methods Appl. Mech. Engrg. 163 (1998) 123-139.
\bibitem{HW04} G. C. Hsiao, W. L. Wendland, Boundary element methods: Foundation and error analysis, in: E. Stein, R. de Borst, T.J.R. Hughes (Eds.), Encyclopedia of Computational Mechanics, vol. 1, John Wiley and Sons, Ltd., 2004, pp. 339-373.
\bibitem{HW08} G. C. Hsiao, W. L. Wendland, Boundary Integral Equations, Applied Mathematical Sciences, Vol. 164, Springer-Verlag, 2008.
\bibitem{HX11} G. C. Hsiao, L. Xu, A system of boundary integral equations for the transmission problem in acoustics, Appl. Numer. Math. 61 (2011) 1017-1029.
\bibitem{JWX14} Y. Jiang, B. Wang, Y. Xu, A fast Fourier-Galerkin method solving a boundary integral equation for the biharmonic equation, SIAM J. Numer. Anal. 52 (2014) 2530-2554.
\bibitem{KGBB79} V. D. Kupradze, T. G. Gegelia, M. O. Basheleishvili, T. V. Burchuladze, Three-Dimensional Problems of the Mathematical Theory of Elasticity and Thermoelasticity, North-Holland Series in Applied Mathematics and Mechanics, vol. 25, North-Holland Publishing Co., Amsterdam, 1979.
\bibitem{LH16} H. Li, J. Huang, High-accuracy quadrature methods for solving boundary integral equations of axisymmetric elasticity problems, Comput. Math. Appl. 71 (2016) 459-469.
\bibitem{L16} P. Li, Y. Wang, Z. Wang, Y. Zhao, Inverse obstacle scattering for elastic waves, Inverse Problems 32 (2016) 115018.
\bibitem{LY} P. Li, X. Yuan, Inverse obstacle scattering for elastic waves in three dimensions, Inverse Problems and Imaging, to appear.
\bibitem{LR93} Y. Liu, F. J. Rizzo, Hypersingular boundary integral equations for radiation and scattering of elastic waves in three dimensions, Comput. Methods Appl. Mech. Engrg. 107 (1993) 131-144.
\bibitem{L14} F. Le Lou\"er, A high order spectral algorithm for elastic obstacle scattering in three dimensions, J. Comput. Phys. 279 (2014) 1-18.
\bibitem{MB88} G. D. Manolis, D. E. Beskos, Boundary element methods in elastodynamics, Unwin Hyman, London, 1988.
\bibitem{M49} A. W. Maue, Zur Formulierung eines allgemeinen Beugungsproblems durch eine Integralgleichung, Z. Phys. 126 (1949) 601-618.
\bibitem{M66} K. M. Mitzner, Acoustic scattering from an interface between media of greatly different density, J. Math. Phys. 7 (1966) 2053-2060.
\bibitem{N01} J. C. N\'{e}d\'{e}lec, Acoustic and Electromagnetic Equations: Integral Representations for Harmonic Problems, Springer-Verlag, New York, 2001.
\bibitem{N82} J. C. N\'{e}d\'{e}lec, Integral equations with non integrable kernels, Integral Equ. Oper. Theory 5 (1982) 562-572.
\bibitem{N75} W. Nowacki, Dynamic Problems of Thermoelasticity, Leyden: Noordhoff, 1975.
\bibitem{RS} S. Rjasanow, O. Steinbach, The Fast Solution of Boundary Integral Equations, Mathematical and Analytical Techniques with Applications to Engineering, Springer, 2007.
\bibitem{SS84} V. Sladek, J. Sladek, Boundary integral equation method in thermoelasticity. Part I: general analysis, Appl. Math. Modelling 7 (1984) 241-253.
\bibitem{TC07} M. S. Tong, W. C. Chew, Nystr\"om method for elastic wave scattering by three-dimensional obstacles, J. Comput. Phys. 226 (2007) 1845-1858.
\bibitem{TC09} M. S. Tong, W. C. Chew, Multilevel fast multipole algorithm for elastic wave scattering by large three-dimensional objects, J. Comput. Phys. 228 (2009) 921-932.
\bibitem{YHX} T. Yin, G. C. Hsiao, L. Xu, Boundary integral equation methods for the two dimensional fluid-solid interaction problem, SIAM J. Numer. Anal. 55(5) (2017) 2361-2393.
\end{thebibliography}
\end{document}
\begin{document}
\title{Representation of powers by polynomials over function fields and a problem of Logic}
\begin{abstract}
We solve a generalization of B\"uchi's problem in any exponent for function fields, and briefly discuss some consequences on undecidability. This provides the first example where this problem is solved for rings of functions in the case of an exponent larger than $3$.
\end{abstract}
\tableofcontents
\section{Introduction and results}
Our starting point is the following conjecture of B\"uchi (see \cite{Mazur}), which arose as an attempt to strengthen the negative answer to Hilbert's Tenth Problem given by Matiyasevich in 1970 after the work of J. Robinson, M. Davis and H. Putnam (see \cite{Matiyasevic}).
\begin{conjecture}
There exists a constant $M$ with the following property. Suppose that $s_1,\ldots,s_M$ is a sequence of integer squares such that the second differences of the $s_i$ are constant and equal to $2$, that is
$$
s_{i+2}-2s_{i+1}+s_i=2,\quad i=1,\ldots,M-2.
$$
Then there is an integer $\nu\in\mathbb{Z}$ such that $s_i=(i+\nu)^2$ for $i=1,\ldots,M$.
\end{conjecture}
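For illustration, the ``trivial'' sequences $s_i=(i+\nu)^2$ indeed have constant second differences equal to $2$; the following quick check (purely expository, not part of the statement) verifies this for several values of $\nu$:

```python
# Check that the trivial Buchi sequences s_i = (i + nu)^2 have
# constant second differences equal to 2.
def second_differences(seq):
    return [seq[i + 2] - 2 * seq[i + 1] + seq[i] for i in range(len(seq) - 2)]

for nu in (-3, 0, 7, 100):
    s = [(i + nu) ** 2 for i in range(1, 10)]
    assert second_differences(s) == [2] * (len(s) - 2)
print("all trivial sequences pass")
```

The conjecture asserts that, for $M$ large enough, these are the only integer-square solutions.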
It is easy to see that if such $M$ does exist then $M\ge 5$, but no counterexample is known for $M=5$ and this conjecture is still an open problem.
Analogously, one can ask a similar question for other rings, higher-order differences, and higher powers; see \cite{PheidasVidaux1} for a detailed study of such extensions. A general statement for problems of this sort is rather complicated due to trivial exceptions arising in each particular case, so here we state only the problem in the case of polynomials over the complex numbers.
\begin{problem}\label{BuchiPol1} Let $n\ge 2$ be an integer. Is it true that there exists an integer $M=M(n)$ with the following property?
Given $q_1,q_2,\ldots,q_M\in\mathbb{C}[x]$, if the sequence of $n$-th powers of the $q_i$ has $n$-th differences constant and equal to $n!$, then either all the $q_i$ are constant, or there exists $\nu\in\mathbb{C}[x]$ such that $q_k^n=(k+\nu)^n$ for $k=1,\ldots,M$.
\end{problem}
This problem is of particular interest because a positive answer can be used to obtain consequences in logic in the spirit of the original motivation of B\"uchi. The reason for this is a celebrated theorem by Denef which establishes an analogue of the negative answer to Hilbert's Tenth Problem for polynomial rings in characteristic zero, see \cite{Denef1}.
Problem \ref{BuchiPol1} has been answered positively for $n=2$ (see \cite{Vojta}, where Vojta actually proved an analogous statement for $n=2$ in the much more general context of function fields of curves in characteristic zero and of meromorphic functions over $\mathbb{C}$) and for $n=3$ (see \cite{PheidasVidaux3})\footnote{In personal communication, I have been informed that Hsiu-Lien Huang and Julie Tzu-Yueh Wang have recently solved B\"uchi's problem in the case of cubes for function fields. I deeply thank the authors for sending me their pre-print. We remark that the method of proof in the work of Huang and Wang is completely different from the method used here.}. In this work we answer Problem \ref{BuchiPol1} positively, and in fact we prove a more general result for function fields.
In order to state our main results, let us first make some remarks on Problem \ref{BuchiPol1}.
First of all, observe that if $u_1,\ldots,u_M$ is a sequence of elements in a (commutative unitary) ring $A$ whose sequence of $n$-th differences is $n!,n!,\ldots,n!$ then one has elements $a_0,a_1,\ldots,a_{n-1}\in A$ such that for $k=1,2,\ldots,M$
$$
u_k=k^n+a_{n-1}k^{n-1}+\ldots+a_1k+a_0,
$$
and clearly a sequence admitting such a representation has $n$-th differences equal to $n!$. Thus we have the following equivalent formulation of Problem \ref{BuchiPol1}.
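This equivalence can be checked numerically: the $n$-th forward differences of $u_k=k^n+a_{n-1}k^{n-1}+\cdots+a_1k+a_0$ are constant and equal to $n!$ for any choice of the $a_i$. A small sketch (expository only):

```python
import math, random

def nth_difference(seq, n):
    # apply the forward difference operator n times
    for _ in range(n):
        seq = [b - a for a, b in zip(seq, seq[1:])]
    return seq

for n in (2, 3, 5):
    coeffs = [random.randint(-10, 10) for _ in range(n)]  # a_0, ..., a_{n-1}
    u = [k ** n + sum(a * k ** j for j, a in enumerate(coeffs))
         for k in range(1, n + 8)]
    assert all(d == math.factorial(n) for d in nth_difference(u, n))
print("n-th differences equal n! for monic degree-n sequences")
```

The lower-order coefficients $a_0,\ldots,a_{n-1}$ are annihilated by the $n$-th difference operator, leaving only the contribution $n!$ of the leading term $k^n$.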
\begin{problem}\label{BuchiPol2} Let $n\ge 2$ be an integer. Is it true that there exists an integer $M=M(n)$ with the following property?
If $F(t)\in (\mathbb{C}[x])[t]$ is a monic polynomial of degree $n$ such that $F(\lambda)$ is an $n$-th power in $\mathbb{C}[x]$ for $\lambda=1,2,\ldots,M$, then either $F(t)$ has constant coefficients, or $F(t)=(t+\nu)^n$ for some $\nu\in\mathbb{C}[x]$.
\end{problem}
One can further generalize this problem by just requiring that $F(\lambda)$ is an $n$-th power for at least $M$ distinct values of $\lambda\in \mathbb{C}$, not necessarily $\lambda=1,\ldots,M$. Moreover, we can replace $\mathbb{C}$ by some other field $K$ of characteristic zero, and $\mathbb{C}[x]$ by some other $K$-algebra $R$ of arithmetic interest (for example, $R$ being the function field of a variety over $K$).
\begin{problem}\label{BuchiPol3} Let $n\ge 2$ be an integer. Is it true that there exists an integer $M=M(n)$ with the following property?
If $F(t)\in R[t]$ is a monic polynomial of degree $n$ such that $F(\lambda)$ is an $n$-th power in $R$ for at least $M$ values of $\lambda\in K$, then either $F(t)$ has constant coefficients (i.e. $F\in K[t]$), or $F(t)=(t+\nu)^n$ for some $\nu\in R$.
\end{problem}
Therefore a positive answer to Problem \ref{BuchiPol3} when $K=\mathbb{C}$ and $R=\mathbb{C}[x]$ would give a positive answer to Problem \ref{BuchiPol1}.
In this last generalization we have insisted on requiring characteristic zero. The reason is that the problems we have discussed have a negative answer in positive characteristic. For example, over $\bar{\mathbb{F}}_p[x]$ for $p>2$ the polynomial
$$
F(t)=\left(t+\frac{x^q+x}{2}\right)^2-\left(\frac{x^q-x}{2}\right)^2
$$
only represents squares as $t$ ranges over $\mathbb{F}_q\subseteq\bar{\mathbb{F}}_p$ for $q$ a power of $p$, but $F$ has non-constant coefficients and one can show that it is not of the form $(t+\nu)^2$. Nevertheless, one can also try to characterize such exceptions, obtaining similar consequences in Logic; see for example \cite{PheidasVidaux2bis} and \cite{ShlapentokhVidaux}.
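To make this exception concrete: expanding the difference of squares gives $F(t)=(t+x)(t+x^q)$, and for $\lambda\in\mathbb{F}_q$ the Frobenius yields $(x+\lambda)^q=x^q+\lambda^q=x^q+\lambda$, so $F(\lambda)=(x+\lambda)^{q+1}$, a square because $q+1$ is even. The following sanity check (illustrative only; it takes $q=p=5$, a choice not made in the text) verifies this with coefficient arithmetic over $\mathbb{F}_p$:

```python
p = q = 5  # illustration: prime field F_p with q = p

def polymul(a, b):
    # multiply coefficient lists mod p (index = degree in x)
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % p
    return out

def polypow(a, e):
    out = [1]
    for _ in range(e):
        out = polymul(out, a)
    return out

for lam in range(q):  # lambda ranges over F_q
    # F(lam) = (x + lam) * (x^q + lam) as a polynomial in x over F_p
    xq_plus_lam = [lam] + [0] * (q - 1) + [1]
    F_lam = polymul([lam, 1], xq_plus_lam)
    # claim: F(lam) = (x + lam)^(q+1), hence a square since q+1 is even
    assert F_lam == polypow([lam, 1], q + 1)
print("F(lambda) is a perfect (q+1)-st power for every lambda in F_q")
```

So every value $F(\lambda)$, $\lambda\in\mathbb{F}_q$, is a square in $\mathbb{F}_p[x]$ even though $F$ is not of the exceptional form $(t+\nu)^2$.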
If $L$ is the function field of a curve over an algebraically closed field and $f\in L$ we say that $f$ is \textit{$k$-powerful} if all the zeros of $f$ have multiplicity at least $k$ (note that $k$ is not required to be attained and there is no assumption on the poles of $f$). Our main theorem is the following.
\begin{theorem}\label{MainTheorem}
Let $C$ be a smooth projective curve of genus $g$ over an algebraically closed field $K$ of characteristic zero. Let $n\ge 2$ be a positive integer and let
$$
F(s,t)=s^n + a_{n-1}s^{n-1}t + \cdots + a_1st^{n-1} + a_0t^n\in K(C)[s,t]
$$
where at least one $a_i$ is non-constant. Assume that there is a set $B\subset \mathbb{P}^1(K)$ with at least
$$
M=M(n)= 2n(n+1)\left(g+n\binom{3n-1}{n}\right)
$$
elements and such that for each $b\in B$ we have that $F(b)$ is $\mu$-powerful in $K(C)$ for some fixed $\mu\ge n$. Then $\mu=n$ and $F$ is the $n$-th power of a linear polynomial in $K(C)[s,t]$.
\end{theorem}
The statement has some obvious abuse of notation (when evaluating $F$ at points of $\mathbb{P}^1(K)$), which is harmless since the order of vanishing is well defined up to multiplication by non-zero scalars.
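For a sense of scale, the bound $M(n)$ grows quickly with $n$; the sketch below simply evaluates the formula in Theorem \ref{MainTheorem} for small $n$ and genus (the printed values are illustrative and not used elsewhere):

```python
from math import comb

def M(n, g):
    # the bound in the Main Theorem: M(n) = 2n(n+1) * (g + n * C(3n-1, n))
    return 2 * n * (n + 1) * (g + n * comb(3 * n - 1, n))

for n in (2, 3, 4):
    print(n, M(n, 0), M(n, 1))  # genus 0 and genus 1
```

For instance, already for $n=2$ and $g=0$ the theorem requires $240$ values of $b$.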
As far as we know, this is the first case where the analogue of B\"uchi's problem in higher powers is solved completely for some ring of functions. Our techniques are completely different from the methods previously used to attack B\"uchi's problem in the case of functions. The previous methods were developed by Vojta in \cite{Vojta} and Pheidas and Vidaux in \cite{PheidasVidaux2} and \cite{PheidasVidaux3}. We believe that the extension of the methods of the previous authors for higher values of $n$ is not straightforward. Indeed, it was commented to me by Vidaux that his method works, in principle, for any \emph{given} $n$ as long as one is willing to work with systems of several differential equations, but making that method work for \emph{general} $n$ requires some new idea, or a systematic way to deal with such systems.
Nevertheless, an extension of Vojta's method to higher values of $n$ would have remarkable arithmetic consequences, and on the other hand an extension of the method of Pheidas and Vidaux seems appropriate for the case of meromorphic functions over the complex numbers or non-archimedean fields. Indeed, in \cite{Transactions} we used both methods in order to explore arithmetic extensions of B\"uchi's original problem for number fields and to prove an analogue for $n=2$ in the case of $p$-adic meromorphic functions.
Concerning partial results towards the solution of B\"uchi's problem for general $n$ (at least in the case of functions), in \cite{Proceedings} we considered an intermediate problem between the case $n=2$ and the case of general exponent for polynomial rings; this problem is called Hensley's problem. The results in \cite{Proceedings} were then generalized to function fields in characteristic zero (see \cite{ShlapentokhVidaux}) and recently to the case of meromorphic functions over the complex numbers and non-archimedean fields (see \cite{chinas}). In all these cases the results were obtained by means of the Pheidas-Vidaux method mentioned above. Despite this progress, Hensley's problem for exponent $n$ implies B\"uchi's problem for exponent $n$ only in the case $n=2$; for higher exponents Hensley's problem is a particular case of B\"uchi's problem.
The following slightly weaker form of Theorem \ref{MainTheorem} is more convenient for applications.
\begin{theorem}\label{eMainTheorem}
Let $C$ be a smooth projective curve of genus $g$ over an algebraically closed field $K$ of characteristic zero, and let $n\ge 2$ be a positive integer. There exists a constant $N=N(n,g)$ depending only on $n$ and $g$ such that the following happens:
For all
$$
F(t)=t^n + a_{n-1}t^{n-1} + \cdots + a_1t + a_0\in K(C)[t],
$$
if $F(\lambda)$ is $n$-powerful in $K(C)$ for at least $N$ values of $\lambda\in K$ then either $F$ has constant coefficients or $F(t)=(t+\nu)^n$ for some $\nu\in K(C)$.
\end{theorem}
Thus we get, as an immediate consequence, a positive answer to Problem \ref{BuchiPol1} in the case of polynomials, because $n$-th powers in $K[x]$ are in particular $n$-powerful rational functions.
As an application of Theorem \ref{eMainTheorem}, one obtains the following consequences in Logic.
\begin{theorem}\label{Logic} Let $\mathcal{L}$ be the language $\{0,1,+,f_x,\alpha\}$ where $\alpha$ is a unary predicate. Let $\mathfrak{M}$ be the $\mathcal{L}$-structure with base set $\mathbb{C}[x]$ and where $f_x$ is interpreted as the map $u\mapsto xu$ and $\alpha$ is interpreted in one of the following ways:
\begin{enumerate}
\item $\alpha(u)$ means '$u$ is powerful'
\item $\alpha(u)$ means '$u$ is $k$-powerful' for fixed $k>1$
\item $\alpha(u)$ means '$u$ is a power'
\item $\alpha(u)$ means '$u$ is a $k$-th power' for fixed $k>1$.
\end{enumerate}
Then multiplication is positive existential $\mathcal{L}$-definable over $\mathfrak{M}$. In particular, the positive existential theory of $\mathfrak{M}$ over $\mathcal{L}$ is undecidable.
\end{theorem}
We remark that Item 3 in Theorem \ref{Logic} has recently been proved by completely different methods by Garcia-Fritz as part of her MSc thesis (see \cite{Garcia}), and in fact she managed to deal with even weaker languages and not only positive-existential theories. Also, Item 4 in the cases $k=2,3$ is already known after the work of Vojta, Pheidas, Shlapentokh and Vidaux (see \cite{Vojta}, \cite{ShlapentokhVidaux}, \cite{PheidasVidaux2bis} and \cite{PheidasVidaux3}), also by different techniques. In all the mentioned cases, the strategy is to prove an arithmetic result in the spirit of Theorem \ref{eMainTheorem} and then use ideas of B\"uchi to obtain the results in Logic; see \cite{Survey} for a general exposition of these ideas, at least in the positive-existential case. The proof of Theorem \ref{Logic} from Theorem \ref{eMainTheorem} goes along the same lines as the work of the referred authors, and we omit the details. Similar consequences for other structures (for example, sub-rings of function fields of curves) over related languages follow directly from Theorem \ref{eMainTheorem} as long as some version of Hilbert's Tenth Problem has been answered negatively for the corresponding structure. Such results can be obtained similarly, and we leave the details to the reader.
\section{Proof of Theorem \ref{MainTheorem}}
In this section we will use the notation introduced in the statement of Theorem \ref{MainTheorem}.
\subsection{A reduction}
We will need the following lemma.
\begin{lemma}\label{Linear} Let
$$
L=s+ct\in K(C)[s,t]
$$
with $c\in K(C)$ non-constant. There are at most
$$
4+4g
$$
values of $b\in \mathbb{P}^1(K)$ for which $L(b)$ has only multiple zeros as a rational function on $C$ (after a choice of projective coordinates for $b$).
\end{lemma}
\begin{proof}
Let $B'\subset\mathbb{P}^1(K)$ be the set of such $b$. Consider the map $\phi:C\to \mathbb{P}^1$ given by $p\mapsto [1:c(p)]$, and let $\check{\phi}:C\to \mathbb{P}^1$ be the composition of the dual map $\mathbb{P}^1\to\mathbb{P}^1$ with $\phi$; in coordinates this is $p\mapsto [-c(p):1]$. Let $d$ be the degree of $\check{\phi}$; then $c$ has $d$ poles counting multiplicity. If $b\in B'$ and $b\ne [1:0]$ then $b$ is a branch point of $\check{\phi}$, and since all the zeros of $L(b)$ are multiple we have $|\check{\phi}^{-1}(b)|\le d/2$. Therefore, by the Riemann-Hurwitz formula,
$$
2-2g=2d-\sum_{q \mbox{ branch}} (d-|\check{\phi}^{-1}(q)|)\le 2d - \sum_{b\in B'-\{[1:0]\}} \frac{d}{2}
\le 2d - (|B'|-1)\frac{d}{2}
$$
hence
$$
|B'|\le 5-\frac{4}{d}+\frac{4g}{d}\le 5+4g - \frac{4}{d}< 5+4g.
$$
Since $|B'|$ is an integer we get $|B'|\le 4+4g$.
\end{proof}
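The final counting step of the proof (passing from the real bound $|B'|\le 5+4g-4/d$ to the integer bound $|B'|\le 4+4g$) can be checked by brute force over a sample range; this is illustrative only:

```python
# Brute-force check of the counting step: whenever the integers b = |B'|,
# d >= 1 and g >= 0 satisfy 2 - 2g <= 2d - (b - 1) * d / 2,
# the conclusion b <= 4 + 4g of the lemma holds.
for g in range(0, 6):
    for d in range(1, 30):
        for b in range(0, 60):
            if 2 - 2 * g <= 2 * d - (b - 1) * d / 2:
                assert b <= 4 + 4 * g
print("counting bound verified on the sampled range")
```

The point is that $5+4g-4/d<5+4g$ for every $d\ge 1$, so integrality of $|B'|$ forces $|B'|\le 4+4g$.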
We will prove the following lemma to reduce the proof of Theorem \ref{MainTheorem} to the proof of a simpler statement.
\begin{lemma}\label{Horizontal} It suffices to prove Theorem \ref{MainTheorem} under the additional hypothesis that $F$ cannot be factored as $F=GH$ for some $H\in K(C)[s,t]$ and some non-constant $G\in K[s,t]$.
\end{lemma}
\begin{proof} First we note that $F$ cannot be factored as $F=G(s,t)L(s,t)$ with $L$ linear in $s$ and $t$ and $G\in K[s,t]$, because such an $F$ can be powerful for at most
$$
4+4g+\deg G<4+4g+n<M(n)
$$
values of $[s:t]\in \mathbb{P}^1(K)$ (by Lemma \ref{Linear}).
Suppose that the theorem is proved under the additional requirement, and given $F$ as in Theorem \ref{MainTheorem} suppose that we can factor it as $F=GH$ for some non-constant $G\in K[s,t]$ and some $H\in K(C)[s,t]$; moreover, assume that $G$ is the largest such factor. We can further suppose that $G$ and $H$ are monic as polynomials in $s$ and that $H$ is not linear in $s,t$. Assume also that the hypotheses of Theorem \ref{MainTheorem} hold for $F$. Write
$$
H=s^{n'}+\cdots + b_1st^{n'-1}+ b_0t^{n'}\quad\mbox{ with } 2\le n'\le n
$$
and note that $G\in K[s,t]$ is homogeneous of degree $n-n'$. Since $G$ can vanish at most for $n-n'$ values of $[s:t]\in \mathbb{P}^1(K)$, we know that $H$ is $\mu$-powerful in $K(C)$ for at least
$$
M(n)-(n-n')\ge M(n')=2n'(n'+1)\left(g+n'\binom{3n'-1}{n'}\right)
$$
values of $b$ in $\mathbb{P}^1(K)$ with $\mu\ge n=n'+\deg_t(G)\ge n'$.
Therefore, by maximality of $G$, we can apply to $H$ the version of the theorem that we are assuming as proved, so we must have $\mu=n'$ which implies $n=n'$ (because $\mu\ge n$) and $G$ is constant.
\end{proof}
\subsection{Setup of the proof}
Let $S=C\times \mathbb{P}^1$ and take
$$
F(s,t)=s^n + a_{n-1}s^{n-1}t + \cdots + a_1st^{n-1} + a_0t^n
$$
as in Theorem \ref{MainTheorem}. From now on, we call vertical (resp. horizontal) divisor on $S$ a divisor which is the pull-back of a divisor on $C$ (resp. on $\mathbb{P}^1$) by the corresponding projection.
Let $D=(F)_0\in\mbox{\upshape{Div}}(S)$ be the divisor of zeros of $F$ on $S$; this is nothing but the closure on $S$ of the divisor of zeros of $F$ on the generic fiber of the trivial family $S\to C$. Write
$$
D=\sum_{i=1}^c m_iX_i
$$
where the $X_i$ are the reduced irreducible components of the support of $D$. Let
$$
X=\bigcup_{i=1}^cX_i
$$
which is a reduced (but possibly reducible) curve on $S$. By Lemma \ref{Horizontal} we can assume that no $X_i$ is a horizontal divisor. Moreover, the rational normal curve in $\mathbb{P}^{n}$ is not contained in any proper linear subspace; in particular, it is not contained in the dual of $[1:a_{n-1}(p):\cdots : a_0(p)]$ for any $p\in C$. Therefore the $X_i$ cannot be vertical divisors.
Let $\pi_1: S \to C$ and $\pi_2:S\to\mathbb{P}^1$ be the projection maps. Let $\nu_i:\tilde{X}_i\to X_i$ be the normalization of $X_i$, and define $h_i:\tilde{X}_i\to C$ and $f_i:\tilde{X}_i\to \mathbb{P}^1$ by $h_i=\pi_1\circ\nu_i$ and $f_i=\pi_2\circ\nu_i$. Note that $h_i$ and $f_i$ are non-constant morphisms because $X_i$ is not a vertical or horizontal divisor. Let $\epsilon_i$ and $\delta_i$ be the degrees of $h_i$ and $f_i$ respectively. We have
$$
n=\sum_{i=1}^c m_i\epsilon_i
$$
so, in particular $\max\{m_i\}\le n$ with equality if and only if $n=m_1$ and $c=1$. We define
$$
d=\sum_{i=1}^c m_i\delta_i.
$$
Applying the Riemann-Hurwitz Formula to $h_i$ and $f_i$ we get
\begin{equation}\label{Zeuthen}
\epsilon_i\chi(C)-\sum_{p\in C}(\epsilon_i-|h_i^{-1}(p)|)=\delta_i\chi(\mathbb{P}^1)-\sum_{q\in \mathbb{P}^1}(\delta_i-|f_i^{-1}(q)|)
\end{equation}
where $\chi$ is the Euler characteristic. This formula is also known as the Zeuthen formula, and the same idea works in general for irreducible correspondences on the product of two curves. In Equation \eqref{Zeuthen} we substitute $\chi(\mathbb{P}^1)=2$ and then add the equations with weight $m_i$ as $i$ ranges over $1,\ldots,c$, to get
$$
n\chi(C)-\sum_{p\in C}\sum_{i=1}^c m_i(\epsilon_i-|h_i^{-1}(p)|)=2d-\sum_{q\in \mathbb{P}^1} \sum_{i=1}^c m_i (\delta_i-|f_i^{-1}(q)|)
$$
hence
\begin{equation}\label{WeightedZeuthen}
\sum_{q\in \mathbb{P}^1} \sum_{i=1}^c m_i (\delta_i-|f_i^{-1}(q)|)=\sum_{p\in C}\sum_{i=1}^c m_i(\epsilon_i-|h_i^{-1}(p)|) + 2d + 2n(g-1)
\end{equation}
For $p\in C$ define the set
$$
\Theta(p)=\{P\in X : \pi_1(P)=p\}
$$
then
$$
|h_i^{-1}(p)|=\sum_{P\in \Theta(p)} |\nu_i^{-1}(P)|.
$$
Similarly, for $q\in \mathbb{P}^1$ let
$$
\Gamma(q)=\{Q\in X : \pi_2(Q)=q\}
$$
and note that
$$
|f_i^{-1}(q)|=\sum_{Q\in \Gamma(q)} |\nu_i^{-1}(Q)|.
$$
We have
$$
\begin{aligned}
\sum_{q\in \mathbb{P}^1} \sum_{i=1}^c m_i (\delta_i-|f_i^{-1}(q)|)&\ge \sum_{q\in B} \sum_{i=1}^c m_i (\delta_i-|f_i^{-1}(q)|)\\
&=\sum_{q\in B} \sum_{i=1}^c m_i\delta_i-\sum_{q\in B} \sum_{i=1}^c m_i|f_i^{-1}(q)|\\
&= d|B|-\sum_{q\in B} \sum_{i=1}^c m_i\sum_{Q\in \Gamma(q)} |\nu_i^{-1}(Q)|
\end{aligned}
$$
and, if $E\subset C(K)$ is any finite set containing all the branch points of the $h_i$, then we get
$$
\begin{aligned}
\sum_{p\in C}\sum_{i=1}^c m_i(\epsilon_i-|h_i^{-1}(p)|)&= \sum_{p\in E}\sum_{i=1}^c m_i(\epsilon_i-|h_i^{-1}(p)|)\\
&=\sum_{p\in E}\sum_{i=1}^c m_i\epsilon_i-\sum_{p\in E}\sum_{i=1}^c m_i|h_i^{-1}(p)|\\
&=n|E|-\sum_{p\in E}\sum_{i=1}^c m_i\sum_{P\in \Theta(p)} |\nu_i^{-1}(P)|.
\end{aligned}
$$
We will later choose a convenient $E$. After the above computation, Equation \eqref{WeightedZeuthen} implies
$$
d|B|-\sum_{q\in B} \sum_{i=1}^c m_i\sum_{Q\in \Gamma(q)} |\nu_i^{-1}(Q)|\le n|E|-\sum_{p\in E}\sum_{i=1}^c m_i\sum_{P\in \Theta(p)} |\nu_i^{-1}(P)| + 2d + 2n(g-1)
$$
that is
\begin{equation}\label{EqIntermedia}
d|B|-n|E|- 2d - 2n(g-1)\le\sum_{q\in B} \sum_{i=1}^c m_i\sum_{Q\in \Gamma(q)} |\nu_i^{-1}(Q)| -\sum_{p\in E}\sum_{i=1}^c m_i\sum_{P\in \Theta(p)} |\nu_i^{-1}(P)|
\end{equation}
Let
$$
Z=\{x\in X: X\mbox{ is singular at }x\}\cup (X\cap(F)_{\infty}) \subseteq S
$$
where $(F)_{\infty}$ stands for the divisor of poles of $F$ on $S$, which is nothing but the vertical divisor $C\times (F(q))_{\infty}$ for generic $q\in\mathbb{P}^1(K)$ (the choice of coordinates for $q$ does not affect this definition). Take $E$ as the union of $\pi_1(Z)$ and the set of all branch points of the maps $h_i$. Since $Z$ contains the singular points of $X$, $E$ is the same as the union of $\pi_1(Z)$ and the branch points of $\pi_1|_{X-Z}:X-Z\to C$.
Given $q\in\mathbb{P}^1(K)$, we note that $\Gamma(q)\setminus Z$ is included in the set
$$
\{(p,q)\in S(K): F(q)\in K(C)\mbox{ vanishes at }p\}
$$
(fixing a choice of coordinates for $q$) because $Z\supseteq X\cap (F)_{\infty}$, but for $q\in B$ we know that $F(q)$ has multiplicity at least $\mu$ at each zero, hence for $q\in B$ we have
$$
\mu|\Gamma(q)\setminus Z|\le \deg (F(q))_0 \le (C\times q, D)=d
$$
where $(F(q))_0\in \mbox{\upshape{Div}}(C)$ and $(\cdot,\cdot)$ denotes the intersection pairing on $\mbox{\upshape{Div}}(S)$. So, from Inequality \eqref{EqIntermedia} we get
$$
\begin{aligned}
d|B|-n|E|- 2d - 2n(g-1)&\le \sum_{q\in B} \sum_{i=1}^c m_i\sum_{Q\in \Gamma(q)} |\nu_i^{-1}(Q)| -\sum_{p\in E}\sum_{i=1}^c m_i\sum_{P\in \Theta(p)} |\nu_i^{-1}(P)| \\
&\le \sum_{q\in B} \sum_{i=1}^c m_i\sum_{Q\in \Gamma(q)\setminus Z} |\nu_i^{-1}(Q)|\\
&= \sum_{q\in B} \sum_{Q\in \Gamma(q)\setminus Z} \sum_{i=1}^c m_i|\nu_i^{-1}(Q)|\\
&\le \max_i\{m_i\}\sum_{q\in B} \sum_{Q\in \Gamma(q)\setminus Z} \sum_{i=1}^c |\nu_i^{-1}(Q)|\\
&= \max_i\{m_i\}\sum_{q\in B} \sum_{Q\in \Gamma(q)\setminus Z} 1\\
&= \max_i\{m_i\}\sum_{q\in B} |\Gamma(q)\setminus Z|\\
&= \frac{\max_i\{m_i\}}{\mu}\sum_{q\in B} \mu|\Gamma(q)\setminus Z|\\
&\le \max_i\{m_i\}|B|\frac{d}{\mu}.
\end{aligned}
$$
Note that we have used the fact that for $q\in B$ one has
$$
\sum_{Q\in \Gamma(q)\setminus Z} \sum_{i=1}^c |\nu_i^{-1}(Q)|=\sum_{Q\in \Gamma(q)\setminus Z} 1
$$
because for $Q$ a smooth point in $X$ there is one and only one $i$ such that $Q\in X_i$ (since the points where $X_i$ and $X_j$ meet are singular for $X$) and moreover for such $Q$ and $i$ one has $|\nu_i^{-1}(Q)|=1$ because $Q$ is a smooth point of $X_i$.
Write $m=\max_i\{m_i\}$. We get
\begin{equation}\label{EqMain1}
|B| \le |B|\frac{m}{\mu}+2\frac{n}{d}(g-1)+n\frac{|E|}{d}+ 2\le |B|\frac{m}{\mu}+2n\max\{0,g-1\}+n\frac{|E|}{d}+ 2
\end{equation}
Now we need an upper estimate for $|E|$.
\subsection{Bounding $|E|$}
For convenience of the reader, we recall that
$$
F(s,t)=s^n + a_{n-1}s^{n-1}t + \cdots + a_1st^{n-1} + a_0t^n\in K(C)[s,t]
$$
A general horizontal divisor on $C\times\mathbb{P}^1$ meets $(F)_0$ in $d=\sum m_i\delta_i$ points counting multiplicity (by definition of $\delta_i$) hence $(F)_{\infty}$ is a formal sum of $d$ vertical lines counting multiplicity. So, we have that
\begin{itemize}
\item at most $d$ points in $C(K)$ are poles of some $a_i$, and
\item each $a_i$ has at most $d$ poles counting multiplicity.
\end{itemize}
Define
$$
U=\{p\in C : a_i(p)\ne \infty,\forall i\}
$$
then $C-U$ consists of at most $d$ points.
Let $V=\mathbb{P}^{1}-\{[1:0]\}$.
We will use the following well-known lemma.
\begin{lemma}\label{Affine}
Let $Y$ be a smooth projective curve over $K$. Let $W$ be a non-empty proper open set in $Y$ obtained by removing the points $p_1,\ldots,p_r$. Then $W$ is an affine open set. In particular $U\times V$ is an affine open set of $S$.
\end{lemma}
The zero set of
$$
\hat{F}=s^n+a_{n-1}s^{n-1}+\cdots + a_1s+a_0
$$
agrees with $(F)_0$ on $C\times V$. We factor $\hat{F}$ as an element of $K(C)[s]$ in the following way
\begin{equation}\label{Factorization}
\hat{F}=\prod_{i=1}^R\hat{F}_i^{w_i}
\end{equation}
with $\hat{F}_i\in K(C)[s]$ distinct, non-constant on $s$, monic and irreducible. This is possible because $\hat{F}\in K(C)[s]$ is monic and non-constant on $s$. Define
$$
H=\prod_{i=1}^R\hat{F}_i.
$$
\begin{lemma}\label{FiReg}
The $\hat{F}_i$ have no poles on $U\times V$.
\end{lemma}
\begin{proof}
If some $\hat{F}_i$ has a pole at a point $(p,q)\in U\times V$ then it has a pole along $p\times V$. Thus some other $\hat{F}_j$ must vanish identically along $p\times V$, because $\hat{F}$ has no poles on $U\times V$; this contradicts the fact that the $\hat{F}_j$ are monic and non-constant in $s$.
\end{proof}
\begin{lemma}\label{RedEq} We have that $(H)_0$ agrees with $(F)_0^{red}=\sum X_i$ on $U\times V$, that is, $H=0$ is an equation for $X$ on $U\times V$.
\end{lemma}
\begin{proof}
First note that both $F$ and $H$ are regular on $U\times V$ (by Lemma \ref{FiReg}) hence their zero loci can be computed pointwise on $U\times V$. Given $P\in U\times V$ we have that $P\in X$ if and only if $F(P)=0$, which happens if and only if $\prod_{i=1}^R\hat{F}_i^{w_i}(P)=0$. The coefficients of $\prod_{i=1}^R\hat{F}_i^{w_i}$ are regular on $U$ so we conclude that for $P\in U\times V$ we have $P\in X$ if and only if $H(P)=0$.
Now we prove that $(H)_0$ on $U\times V$ is reduced. Suppose it is not reduced; then a general vertical prime divisor on $U\times V$ meets $(H)_0$ in at least one multiple point, hence for general $p\in U$ we have that $\mbox{\upshape{disc}}_p(H(s))=0$, where $\mbox{\upshape{disc}}_p(H(s))$ stands for the discriminant of the polynomial $H((p,s))\in K[s]$ ($p\in U$ is given). This implies that $\Delta(p)=\mbox{\upshape{disc}}_p(H(s))=0$ for general $p\in U$, where $\Delta\in K(C)$ is the discriminant of $H$ as an element of $K(C)[s]$. So $\Delta\in K(C)$ is the zero function and we conclude that $H$ has a multiple root as an element of $K(C)[s]$. This contradicts the fact that $H\in K(C)[s]$ is a reduced polynomial in characteristic zero.
\end{proof}
Now, let $\Sigma$ be the set of points $P\in X\cap (U\times V)$ that are singular points of $X$ or ramification points of the projection $\pi_1|_X$. If $P_0=(p_0,s_0)\in \Sigma$ then
$$
H(P_0)=0\mbox{ and }\partial_s H(P_0)=0
$$
hence $\Delta(p_0)=0$ where $\Delta\in K(C)$ is the discriminant of $H\in K(C)[s]$. We know that $\Delta\in K(C)$ is not the zero function because $H$ is a reduced polynomial in characteristic zero, and we also know that $\Delta$ is a polynomial expression in the coefficients of $H$. Let $v=\deg_s H\le n$. The intersection of a general horizontal divisor with $X$ has at most $d$ points counting multiplicity, hence the above lemma implies that each coefficient of $H$ has at most $d$ poles counting multiplicity.
The number of zeros of $\Delta$ on $U$ is at most the number of poles of $\Delta$ on $C$ counting multiplicities, that is at most
$$
\begin{aligned}
\#\left\{\begin{array}{c}\mbox{monomials of }\Delta\mbox{ as a polynomial}\\ \mbox{in the coefficients of }H\end{array}\right\}\cdot\left(\begin{array}{c}\mbox{degree of }\Delta\mbox{ as a polynomial}\\ \mbox{in the coefficients of }H\end{array}\right) \cdot d&\le \binom{3v-1}{v}(2v-2)d\\
&\le \binom{3n-1}{n}(2n-2)d.
\end{aligned}
$$
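The binomial coefficient here can be justified by a counting sketch (taking for granted the bound $2v-2$ on the total degree of $\Delta$ in the coefficients of $H$, as used above): the number of monomials of total degree at most $2v-2$ in the $v$ coefficients of the monic polynomial $H$ is
$$
\binom{(2v-2)+v}{v}=\binom{3v-2}{v}\le\binom{3v-1}{v},
$$
and each such monomial contributes at most $(2v-2)d$ poles, because each coefficient of $H$ has at most $d$ poles.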
Finally,
\begin{equation}\label{**}
|E|\le |\pi_1(\Sigma)|+ |C-U| + (C\times[1:0], X) \le \binom{3n-1}{n}(2n-2)d+2d.
\end{equation}
\subsection{A criterion for showing that $F$ is an $n$-th power}
The purpose of this section is to show that proving $\max\{m_i\}=n$ is enough to conclude that $F$ is an $n$-th power. This is done in Lemma \ref{clave} below.
We recall that
$$
D=(F)_0=\sum_{i=1}^c m_i X_i \in \mbox{\upshape{Div}}(S)
$$
where the $X_i$ are the reduced irreducible components of the support of $D$, and
$$
\hat{F}=\prod_{i=1}^R\hat{F}_i^{w_i}
$$
as in Equation \eqref{Factorization}.
\begin{lemma}\label{DivRed}
For each $i=1,\ldots,R$, the algebraic set in $U\times V$ defined by $\hat{F}_i=0$ is a (non-empty) reduced curve. Moreover
$$
D=\sum_{i=1}^R w_i (\hat{F}_i)_0
$$
on $U\times V$.
\end{lemma}
\begin{proof}
We have that
$$
\sum_{i=1}^c X_i=(H)_0=\left(\prod\hat{F}_i\right)_0=\sum_{i=1}^R (\hat{F}_i)_0
$$
on $U\times V$, the first equality because of Lemma \ref{RedEq} and the last because on $U\times V$ the $\hat{F}_i$ are regular (by Lemma \ref{FiReg}). This shows that the curves $\hat{F}_i=0$ on $U\times V$ are reduced.
Note also that each $\hat{F}_i$ must vanish somewhere in $U\times V$ because they are non-constant monic on $s$ having some non-constant coefficient (since the $X_i$ are not horizontal or vertical).
Finally we have
$$
D|_{U\times V}=(\hat{F})_0=\left(\prod\hat{F}_i^{w_i}\right)_0=\sum_{i=1}^R w_i(\hat{F}_i)_0
$$
where the last equality is because the $\hat{F}_i$ are regular on $U\times V$.
\end{proof}
\begin{lemma}\label{PrimeDivisor}
We have that $D_i=(\hat{F}_i)_0$ is a prime divisor on $U\times V$ for each $i$. Moreover, $\mathcal{O}_S(U\times V)=\mathcal{O}_C(U)[s]$.
\end{lemma}
\begin{proof}
By Lemma \ref{DivRed} $D_i$ is not the zero divisor and all of its coefficients are $1$ (we say that it is reduced). To show that $D_i$ is irreducible it is enough to show that $\hat{F}_i$ is a prime element in $A=\mathcal{O}_S(U\times V)$ because $U\times V$ is affine by Lemma \ref{Affine}.
First we note that $\mathcal{O}_C(U)[s]\subseteq A$, but the canonical isomorphism
$$
\mathcal{O}_C(U)\otimes_K K[s]=\mathcal{O}_C(U)\otimes_K \mathcal{O}_{\mathbb{P}^1}(V)\longrightarrow A
$$
has image $\mathcal{O}_C(U)[s]$, therefore $A=\mathcal{O}_C(U)[s]$.
It remains to check that $\hat{F}_i$ is prime in $A=\mathcal{O}_C(U)[s]$. Since $\hat{F}_i$ is monic and irreducible in $K(C)[s]$, it is prime in $K(C)[s]$; and if $\hat{F}_i$ divides some $g\in A$ in $K(C)[s]$, then division by the monic polynomial $\hat{F}_i$ (which keeps coefficients in $\mathcal{O}_C(U)$) shows that the quotient again lies in $A$. Hence $\hat{F}_i$ is a prime element of $A$.
\end{proof}
\begin{lemma}\label{Equality} We have that $R=c$ and $w_i=m_i$ for each $i$, up to reordering.
\end{lemma}
\begin{proof}
By Lemma \ref{RedEq} on $U\times V$ we have
$$
\sum_{j=1}^c m_jX_j=D=\sum_{i=1}^R w_i(\hat{F}_i)_0
$$
where no $(\hat{F}_i)_0$ is the zero divisor, so, by Lemma \ref{PrimeDivisor} it suffices to show that $(\hat{F}_i)_0\ne (\hat{F}_j)_0$ for $i\ne j$ because those divisors are prime divisors.
If $(\hat{F}_i)_0= (\hat{F}_j)_0$ for $i\ne j$ then we have $(\hat{F}_i)= (\hat{F}_j)$ because of Lemma \ref{FiReg}, so we get $\hat{F}_i=u\hat{F}_j$ for some invertible $u\in \mathcal{O}_S(U\times V)=\mathcal{O}_C(U)[s]$, and in particular $u$ is constant in $s$. As all the $\hat{F}_i$ are monic, comparing leading coefficients gives $u=1$. Hence $\hat{F}_i=\hat{F}_j$, a contradiction with the definition of the $\hat{F}_i$.
\end{proof}
\begin{lemma}\label{clave}
We have that $F$ is the $n$-th power of a linear polynomial in $K(C)[s,t]$ if and only if $\max\{m_i\}=n$.
\end{lemma}
\begin{proof}
This follows by homogenizing the equation
$$
\hat{F}=\prod_{i=1}^R\hat{F}_i^{w_i}
$$
with the variable $t$, Lemma \ref{Equality}, and the fact that the $\hat{F}_i$ are non-constant in $s$.
\end{proof}
\subsection{Completion of the proof}
From inequalities \eqref{EqMain1} and \eqref{**} we get
$$
\begin{aligned}
|B| &\le |B|\frac{m}{\mu}+2n\max\{0,g-1\}+n\frac{|E|}{d}+ 2\\
&\le |B|\frac{m}{\mu}+2n\max\{0,g-1\}+\frac{n}{d}\left(\binom{3n-1}{n}(2n-2)d+2d\right)+ 2\\
&= |B|\frac{m}{\mu}+2n\max\{0,g-1\}+\binom{3n-1}{n}(2n-2)n+2n+ 2\\
&< |B|\frac{m}{\mu}+2ng+2n^2\binom{3n-1}{n}
\end{aligned}
$$
that is
\begin{equation}\label{EqMain2}
|B|<|B|\frac{m}{\mu}+2ng+2n^2\binom{3n-1}{n}.
\end{equation}
Recall that $m=\max\{m_i\}\le n\le \mu$. We claim that $m=n=\mu$. Indeed, if $m<n$ then $m+1\le n\le \mu$ so \eqref{EqMain2} gives
$$
|B|<|B|\frac{n-1}{n}+2ng+2n^2\binom{3n-1}{n}
$$
hence
$$
|B|<2n^2g+2n^3\binom{3n-1}{n}.
$$
On the other hand, if $n<\mu$ then $m\le n\le \mu-1$, thus \eqref{EqMain2} gives
$$
|B|<|B|\frac{n}{n+1}+2ng+2n^2\binom{3n-1}{n}
$$
hence
$$
|B|<2n(n+1)g+2n^2(n+1)\binom{3n-1}{n}.
$$
In either case, we obtain
$$
2n(n+1)\left(g+n\binom{3n-1}{n}\right)= M\le |B|<2n(n+1)\left(g+n\binom{3n-1}{n}\right)
$$
a contradiction. This proves that $\max\{m_i\}=n=\mu$ and Theorem \ref{MainTheorem} follows from Lemma \ref{clave}.
\end{document}
\begin{document}
\title{On the degree five $L$-function for ${\rm G}Sp(4)$}
\author{Daniel File}
\address{Department of Mathematics, 14 MacLean Hall, Iowa City, Iowa 52242-1419}\email{[email protected]}
\thanks{This work was done while I was a graduate student at Ohio State University as part of my Ph.D. dissertation. I wish to thank my advisor Jim Cogdell for being a patient teacher and for his helpful discussions about this work. I also thank Ramin Takloo-Bighash for many useful conversations.}
\keywords{$L$-function, integral representation, Bessel models}
\begin{abstract}
I give a new integral representation for the degree five (standard) $L$-function for automorphic representations of ${\rm G}Sp(4)$ that is a refinement of the integral representation of Piatetski-Shapiro and Rallis. The new integral representation unfolds to produce the Bessel model for ${\rm G}Sp(4)$, which is a unique model. The local unramified calculation uses an explicit formula for the Bessel model and differs completely from that of Piatetski-Shapiro and Rallis.
\end{abstract}
\maketitle
\date{}
\section{Introduction}
In 1978 Andrianov and Kalinin established an integral representation for the degree $2n+1$ standard $L$-function of a Siegel modular form of genus $n$~\cite{andrianovkalinin1978}. Their integral involves a theta function and a Siegel Eisenstein series. The integral representation allowed them to prove the meromorphic continuation of the $L$-function, and in the case when the Siegel modular form has level $1$ they established a functional equation and determined the locations of possible poles.
Piatetski-Shapiro and Rallis became interested in the construction of Andrianov and Kalinin because it seemed to produce Euler products without using any uniqueness property. Previous examples of integral representations used either a unique model such as the Whittaker model, or the uniqueness of the invariant bilinear form between an irreducible representation and its contragredient. It is known that an automorphic representation of ${\rm Sp}_4$ (or ${\rm G}Sp_4$) associated to a Siegel modular form does not have a Whittaker model. Piatetski-Shapiro and Rallis adapted the integral representation of Andrianov and Kalinin to the setting of automorphic representations and were able to obtain Euler products~\cite{piatetski-shapirorallis1988}; however, the factorization is not the result of a unique model that would explain the local-global structure of Andrianov and Kalinin. They considered the integral
\begin{equation*}
\int \limits_{{\rm Sp}_{2n}(F) \backslash {\rm Sp}_{2n}(\mathbb{A})} \phi(g) \theta_T(g) E(s,g) \, dg
\end{equation*}
where $E(s,g)$ is an Eisenstein series induced from a character of the Siegel parabolic subgroup, $\phi$ is a cuspidal automorphic form, $T$ is an $n$-by-$n$ symmetric matrix determining an $n$-dimensional orthogonal space, and $\theta_T(g)$ is the theta kernel for the dual reductive pair ${\rm Sp}_{2n} \times {\rm O}(V_T)$.
Upon unfolding, their integral produces the expansion of $\phi$ along the abelian unipotent radical $N$ of the Siegel parabolic subgroup. They refer to the terms in this expansion as Fourier coefficients in analogy with the Siegel modular case. The Fourier coefficients are defined as
\begin{equation*}
\phi_T(g) = \int \limits_{N(F)\backslash N(\mathbb{A})} \phi(ng) \, \psi_T(n) \, dn.
\end{equation*}
Here, $T$ is associated to a character $\psi_T$ of $N(F)\backslash N(\mathbb{A})$. These functions $\phi_T$ do not give a unique model for the automorphic representation to which $\phi$ belongs. The corresponding statement for a finite place $v$ of $F$ is that for a character $\psi_v$ of $N(F_v)$ the inequality
\begin{equation*}
\dim_\mathbb{C} \mathrm{Hom}_{N(F_v)}(\pi_v , \psi_v) \leq 1
\end{equation*}
does not hold for all irreducible admissible representations $\pi_v$ of ${\rm Sp}_{2n}(F_v)$.
However, Piatetski-Shapiro and Rallis show that their local integral is independent of the choice of Fourier coefficient when $v$ is a finite place and the local representation $\pi_v$ is spherical.
Specifically, they show that for any $\ell_T \in \mathrm{Hom}_{N(F_v)}(\pi_v , \psi_v)$ the integral
\begin{equation*}
\int \limits_{Mat_n(\mathcal{O}_v) \cap {\rm G}L_n(F_v)} \ell_T \left( \begin{bmatrix} g & \\ & ^{t} g ^{-1} \end{bmatrix} v_0 \right) |\det(g)|_v^{s-1/2} \, dg= d_v(s) L(\pi_v, \frac{2s+1}{2}) \ell_T(v_0)
\end{equation*}
where $v_0$ is the spherical vector for $\pi_v$, $\mathcal{O}_v$ is the ring of integers, and $d_v(s)$ is a product of local $\zeta$-factors.
At the remaining ``bad'' places the integral does not factor, and there is no local integral to compute. However, they showed that the integral over the remaining places is a meromorphic function of $s$.
In this paper I present a new integral representation for the degree five $L$-function for GSp$_4$ which is a refinement of the work of Piatetski-Shapiro and Rallis. Instead of working with the full theta kernel, the construction in this paper uses a theta integral for ${\rm G}Sp_4 \times {\rm G}SO_2$. This difference has the striking effect of producing the Bessel model for ${\rm G}Sp_4$ and the uniqueness that Piatetski-Shapiro and Rallis expected. Therefore, this integral factors as an Euler product over all places. I compute the local unramified integral when the local representation is spherical using the formula due to Sugano~\cite{sugano1985}.
In some instances an integral representation of an $L$-function can be used to prove algebraicity of special values of that $L$-function (up to certain expected transcendental factors). Harris~\cite{harris1981}, Sturm~\cite{sturm1981}, B\"ocherer~\cite{bocherer1985}, and Panchishkin~\cite{panchishkin1991} applied the integral representation of Andrianov and Kalinin to prove algebraicity of special values of the standard $L$-function of certain Siegel modular forms. Shimura~\cite{shimura2000} also used an integral representation to prove algebraicity of these special values for many Siegel modular forms including forms for every congruence subgroup of ${\rm Sp}_{2n}$ over a totally real number field.
Furusawa~\cite{furusawa1993} gave an integral representation for the degree eight $L$-function for ${\rm G}Sp_4 \times {\rm G}L_2$. Furusawa's integral representation unfolds to give the Bessel model for ${\rm G}Sp_4$ times the Whittaker model for ${\rm G}L_2$, and he uses Sugano's formula for the spherical Bessel model to compute the unramified local integral. Let $\Phi$ be a genus $2$ Siegel eigen cusp form of weight $\ell$, and let $\pi=\otimes_v \pi_v$ be the automorphic representation for ${\rm G}Sp_4$ associated to it. Let $\Psi$ be an elliptic (genus 1) eigen cusp form of weight $\ell$, and let $\tau=\otimes_v \tau_v$ be the associated representation for ${\rm G}L_2$. As an application of his integral representation Furusawa proved an algebraicity result for special values of the degree eight $L$-function $L(s, \Phi \times \Psi)$ provided that for all finite places $v$ both $\pi_v$ and $\tau_v$ are spherical. This condition is satisfied when $\Phi$ and $\Psi$ are modular forms for the full modular groups ${\rm Sp}_4(\mathbb{Z})$ and ${\rm SL}_2(\mathbb{Z})$, respectively.
A recent result of Saha~\cite{saha2009} includes the explicit computation of Bessel functions of local representations that are Steinberg. This allowed Saha to extend the special value result of Furusawa to the case when $\pi_p$ is Steinberg at some prime $p$. Pitale and Schmidt~\cite{pitaleschmidt2009} considered the local integral of Furusawa for a large class of representations $\tau_p$ and as an application extended the algebraicity result of Furusawa further.
In principle one could explicitly compute the local integral given in this paper using the formula of Saha at a place where the local representation is Steinberg. Considering the algebraicity results of Harris~\cite{harris1981}, Sturm~\cite{sturm1981}, B\"ocherer~\cite{bocherer1985}, and Panchishkin~\cite{panchishkin1991} that involve the integral of Andrianov and Kalinin, and the explicit computations for Bessel models due to Sugano~\cite{sugano1985}, Furusawa~\cite{furusawa1993}, and Saha~\cite{saha2009}, it would be interesting to see if the integral representation of this paper can be used to obtain any new algebraicity results. This is a question I intend to address in a later work.
\section{Summary of Results}
Let $\pi$ be an automorphic representation of GSp$_4(\mathbb{A})$, $\phi \in V_\pi$, and $\nu$ an automorphic character on GSO$_2(\mathbb{A})$, the similitude orthogonal group preserving the symmetric form determined by the symmetric matrix $T$. Let $\theta_\varphi(\nu^{-1})$ be the theta lift of $\nu^{-1}$ to GSp$_4$ with respect to a Schwartz-Bruhat function $\varphi$, and $E(s, f, g)$ a Siegel Eisenstein series for a section $f(s, -) \in \text{Ind}_{P(\mathbb{A})}^{G(\mathbb{A})}( \delta_P ^{\frac{1}{3}(s-\frac{1}{2})})$. Consider the global integral
\begin{equation*}
I(s;f, \phi, T, \nu, \varphi)=I(s):=\int \limits_{Z_\mathbb{A} GSp_4(F) \backslash GSp_4(\mathbb{A})}
E(s, f, g) \phi(g) \theta_\varphi(\nu^{-1})(g) \, dg.
\end{equation*}
Section~\ref{global} contains the proof that $I(s)$ has an Euler product expansion
\begin{equation*}
I(s)=\int \limits_{N(\mathbb{A}_\infty) \backslash G_1(\mathbb{A}_\infty)} f(s,g) \phi^{T, \nu}(g) \omega(g, 1) \varphi(1_2) \, dg \cdot \prod \limits_{v < \infty} I_v(s)
\end{equation*}
where the integrals $I_v(s)$ are defined to be
\begin{equation*}
I_v(s)=\int \limits_{N(F_v) \backslash G_1(F_v)} f_v(s, g_v) \, \phi_v ^{T, \nu}(g_v) \, \omega_v(g_v, 1) \varphi_v(1_2) \, dg_v.
\end{equation*}
The function $\phi_v^{T,\nu}$ belongs to the Bessel model of $\pi_v$.
Section~\ref{unramifiedchapter} includes the proof that under certain conditions that hold for all but a finite number of places $v$, there is a normalization $I_v^*(s)=\zeta_v(s+1)\zeta_v(2s) \, I_v(s)$ such that
\begin{equation*}
I_v^*(s)=L(s, \pi_v \otimes \chi_T)
\end{equation*}
where $\chi_T$ is a quadratic character associated to the matrix $T$. Section~\ref{ramified} deals with the finite places that are not covered in Section~\ref{unramifiedchapter}. For these places there is a choice of data so that $I_v(s)=1$. Section~\ref{archimedean} deals with the archimedean places and shows that there is a choice of data to control the analytic properties of $I_v(s)$.
Combining these analyses gives the following theorem.
\begin{maintheorem} \label{maintheorem}
Let $\pi$ be a cuspidal automorphic representation of GSp$_4(\mathbb{A})$, and $\phi \in V_\pi$. Let $T$ and $\nu$ be such that $\phi^{T,\nu}\neq0$. There exists a choice of section $f(s, -) \in \text{Ind}_{P(\mathbb{A})}^{G(\mathbb{A})}( \delta_P ^{\frac{1}{3}(s-\frac{1}{2})})$, and some $\varphi=\otimes_v \varphi_v \in \mathcal{S}( \mathbb{X}(\mathbb{A}))$ such that the normalized integral
\begin{equation*}
I^*(s;f, \phi, T, \nu, \varphi)= d(s) \cdot L^{S}(s, \pi \otimes \chi_{T})
\end{equation*}
where $S$ is a finite set of bad places including all the archimedean places. Furthermore, for any complex number $s_0$, there is a choice of data so that $d(s)$ is holomorphic at $s_0$, and $d(s_0) \neq 0$.
\end{maintheorem}
\section{Notation}
Let $F$ be a number field, and let $\mathbb{A}=\mathbb{A}_F$ be its ring of adeles. For a place $v$ of $F$ denote by $F_v$ the completion of $F$ at $v$. For a non-archimedean place $v$ let $\mathcal{O}_v$ be the ring of integers of $F_v$, and let $\mathfrak{p}_v$ be its maximal ideal. Let $q_v=[ \mathcal{O}_v : \mathfrak{p}_v ]$. Let $\varpi_v$ be a choice of uniformizer for $\mathfrak{p}_v$, and let $| \cdot |_v$ be the absolute value on $F_v$, normalized so that $|\varpi_v |_v=q_v^{-1}$.
For a finite set of places $S$, let $\mathbb{A}^S= {\prod \limits_{v \notin S}}^\prime F_v$, and $\mathbb{A}_S={\prod \limits_{v \in S}} F_v$. In particular, $\mathbb{A}_\infty = {\prod \limits_{v | \infty}} F_v$, and $\mathbb{A}_{\text{fin}}={\prod \limits_{v < \infty}}^\prime F_v$.
Denote by $\text{Mat}_n$ the variety of $n \times n$ matrices defined over $F$, and by $\text{Sym}_n$ the variety of symmetric $n \times n$ matrices defined over $F$.
Let
$G= \mathrm{GSp}_4=\{g \in GL_4 \big| \ ^tg J g= \lambda_G(g) J\}$
where
\begin{align*}
J=\begin{bmatrix} & & 1 & \\ & & & 1\\ -1 & & &\\ & -1 & &\end{bmatrix}.
\end{align*}
Fix a maximal compact subgroup $K$ of $G(\mathbb{A})$ such that $K=\prod_{v} K_v$ where $K_v$ is a maximal compact subgroup of $G(F_v)$, and at all but finitely many finite places $K_v =G(\mathcal{O}_v)$. According to~\cite{moeglinwaldspurger1995}*{I.1.4} the subgroups $K_v$ can be chosen so that for every standard parabolic subgroup $P$, $G(\mathbb{A})=P(\mathbb{A})K$, and $M(\mathbb{A}) \cap K$ is a maximal compact subgroup of $M(\mathbb{A})$.
\section{Orthogonal Similitude Groups} \label{orthog}
A matrix $T \in \text{Sym}_2(F)$ with $\det(T) \neq 0$ determines a non-degenerate symmetric bilinear form $( \ , \ )_T$ on $V_T=F^2$:
\begin{equation*}
(v_1 , v_2)_T:= {^t}v_1 T v_2.
\end{equation*}
The orthogonal group associated to this form (and matrix $T$) is
\begin{equation*}
O(V_T)=\{h\in GL_2 \big| {^t}h T h= T\}.
\end{equation*}
Similarly, the similitude group is $GO(V_T) = \{h\in GL_2 \big| {^t}h T h=\lambda_T(h) T\}$, and $GSO(V_T)$ is defined to be the Zariski connected component of $GO(V_T)$. Note that since $\dim(V_T)=2$, for $h\in GSO(V_T)$ one has $\lambda_T(h) = \det(h)$.
Let $\chi_T$ be the quadratic character associated to $V_T$. If $E/F$ is the discriminant field of $V_T$, i.e. $E=F \left(\sqrt{-\det(T)}\right)$, then
\begin{equation*}
\chi_T : F^\times \backslash \mathbb{A}^\times \rightarrow \mathbb{C}
\end{equation*}
is the idele class character associated to $E$ by class field theory. It has the property that $\chi_T=\otimes \chi_{T,v}$ where $\chi_{T,v}(a)= ( a , - \det(T) )_v$, and $(\, , \, )_v$ denotes the local Hilbert symbol~\cite{soudry1988}*{$\S$ 0.3}. Consequently, each $\chi_{T,v} \circ N_{E_v/F_v} \equiv 1$ where $N_{E_v / F_v}$ is the norm map~\cite{serre1973}*{Chapter III, Proposition 1}. Note that $N_{E_v / F_v}=\det=\lambda_T$.
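As an illustrative example: if $T=\begin{bmatrix} 1 & \\ & -\rho \end{bmatrix}$ with $\rho\in F^\times\setminus F^{\times,2}$, then $-\det(T)=\rho$, so $E=F(\sqrt{\rho})$ and
$$
\chi_{T,v}(a)=(a,\rho)_v,
$$
which equals $1$ precisely when $a$ is a norm from $E_v^\times=F_v(\sqrt{\rho})^\times$.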
\subsection{The Siegel Parabolic Subgroup}
Let $P=MN$ be the Siegel parabolic subgroup of $G$, i.e. $P$ stabilizes a maximal isotropic subspace $X=\mathrm{span}_F \{e_1, e_2\}$ where $e_i$ is the $i$th standard basis vector. Then $P$ has Levi factor $M \cong \mathrm{GL}_1 \times \mathrm{GL}_2$ and unipotent radical $N \cong \text{Sym}_2 \cong \mathbb{G}_a ^3$. For $g \in GL_2$, define
\begin{equation*}
m(g)=
\begin{bmatrix}
g &\\
& ^t g^{-1}
\end{bmatrix} \in M.
\end{equation*}
For $X \in \text{Sym}_2$, define
\begin{align*}
n(X)=\begin{bmatrix} I_2 & X \\ & I_2 \end{bmatrix} \in N.
\end{align*}
Let $\delta_P$ be the modular character of $P$.
For $m=\begin{bmatrix} g & \\ & ^t g^{-1} \lambda \end{bmatrix} \in M$ and $n \in N$,
$\delta_P(mn)= | \det(g)^3 \cdot \lambda^{-3}|_{\mathbb{A}} \label{adeq}$.
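As a quick consistency check of this formula (illustrative only): for $m(g)$ as above one has $\lambda=1$, so $\delta_P(m(g))=|\det(g)^3|_{\mathbb{A}}$, while for a central element $z\cdot 1_4$ one has $g=z\cdot 1_2$ and $\lambda=z^2$, so
$$
\delta_P(z\cdot 1_4)=|z^6\cdot z^{-6}|_{\mathbb{A}}=1,
$$
as expected, since the modular character is trivial on the center.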
It is possible to extend
$\delta_P$ to all of $G$. For $g=nmk$ where $n \in N$, $m \in M$, and $k \in K$, define $\delta_P(g)=\delta_P(m)$. This is well defined because $\delta_P(m)=1$ for $m \in M \cap K$.
\section{Bessel Models and Coefficients} \label{bessel1}
\subsection{The Bessel Subgroup}
Let $\psi$ be an additive character of $F \backslash \mathbb{A}$.
There is a bijection between $\text{Sym}_2(F)$ and the characters of $N(F) \backslash N(\mathbb{A})$. For $T \in \text{Sym}_2(F)$ define
\begin{align}
\psi_T : N(F) \backslash N(\mathbb{A}) \rightarrow \mathbb{C} \nonumber \\
\psi_T(n(X))=\psi(\mathrm{tr}(TX)). \label{char}
\end{align}
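Concretely (an illustrative unwinding of \eqref{char}): for $X=\begin{bmatrix} x & y \\ y & z \end{bmatrix}\in \text{Sym}_2$ and $T=\begin{bmatrix} a & b \\ b & c \end{bmatrix}$ one has
$$
\psi_T(n(X))=\psi(ax+2by+cz),
$$
so the three entries of $T$ pair off against the three coordinates of $N\cong \mathbb{G}_a^3$, which makes the bijection between $\text{Sym}_2(F)$ and the characters of $N(F)\backslash N(\mathbb{A})$ explicit.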
Since $M(F)$ acts on $N(F) \backslash N(\mathbb{A})$, it also acts on its characters. Define $H_T$ to be the connected component of the stabilizer of $\psi_T$ in $M$.
For $g \in GL_2$, define
\begin{align*}
b(g)=\begin{bmatrix} g & \\ & \det(g)\cdot {^t}g^{-1}\end{bmatrix}.
\end{align*}
Then
\begin{align*}
H_T &=\left\{ b(g) \Big| \ {^t}gTg=\det(g) \cdot T \right\}.
\end{align*}
Thus $H_T$ is an algebraic group defined over $F$, isomorphic to $GSO(V_T)$ with $V_T$ defined as above.
The adjoint action of $M(F)$ on the characters of $N(F) \backslash N(\mathbb{A})$ has two types of orbits. They are represented by matrices
\begin{align*}
T_\rho= \begin{bmatrix} 1 & \\ & -\rho \end{bmatrix} \quad \text{with} \, \rho \notin F^{\times, 2}, \, \text{and} \quad T_{\text{split}}=\begin{bmatrix} & 1 \\1 & \end{bmatrix}.
\end{align*}
The quadratic spaces corresponding to these matrices have similitude orthogonal groups
\begin{align*}
GSO(V_{T_\rho}) = \left\{ \begin{bmatrix} x & \rho y \\ y & x \end{bmatrix} \Bigg| x^2 - \rho y^2 \neq 0 \right\}
\end{align*}
and
\begin{align}
GSO(V_{T_{\text{split}}}) = \left\{ \begin{bmatrix} x & \\ & y \end{bmatrix} \Bigg| xy \neq 0 \right\}. \label{gsosplit}
\end{align}
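We note in passing a standard identification: when $\rho\notin F^{\times,2}$, the map
$$
\begin{bmatrix} x & \rho y \\ y & x \end{bmatrix} \mapsto x+y\sqrt{\rho}
$$
is an isomorphism $GSO(V_{T_\rho})\cong E^\times$ with $E=F(\sqrt{\rho})$, under which the similitude factor $\lambda_T=\det$ becomes the norm $N_{E/F}(x+y\sqrt{\rho})=x^2-\rho y^2$, matching the relation $N_{E_v/F_v}=\det=\lambda_T$ noted in Section~\ref{orthog}.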
For the rest of this article assume that $\rho \notin F^{\times, 2}$, and only consider $T=T_\rho$.
Define the Bessel subgroup $R=R_T=H_T N$.
Consider a character
\begin{equation*}
\nu : H_T(F) \backslash H_T(\mathbb{A}) \rightarrow \mathbb{C}.
\end{equation*}
Then define
\begin{align*}
\nu \otimes& \psi_T : R(F) \backslash R(\mathbb{A}) \rightarrow \mathbb{C}&\\
\nu \otimes& \psi_T (tn)= \nu (t) \psi_T(n) & t \in H_T(\mathbb{A}), \ n\in N(\mathbb{A}).
\end{align*}
This is well defined since $H_T$ normalizes $\psi_T$.
Similarly, for a place $v$ of $F$ there are local characters
\begin{equation*}
\nu_v \otimes \psi_{T,v} : R(F_v) \rightarrow \mathbb{C}.
\end{equation*}
\subsection{Non-Archimedean Local Bessel Models}
Let $v$ be a finite place of $F$.
Let $\mathcal{B}$ be the space of locally constant functions $\phi : G(F_v) \rightarrow \mathbb{C}$ satisfying
\begin{align*}
\phi(rg)=\nu_v \otimes \psi_{T,v}(r) \phi(g)
\end{align*}
for all $r \in R(F_v)$ and all $g \in G(F_v)$.
Let $\pi_v$ be an irreducible admissible representation of $G(F_v)$. Piatetski-Shapiro and Novodvorsky~\cite{Bessel} showed that there is at most one subspace $\mathcal{B}(\pi_v) \subseteq \mathcal{B}$ such that the right regular representation of $G(F_v)$ on $\mathcal{B}(\pi_v)$ is equivalent to $\pi_v$.
If the subspace $\mathcal{B}(\pi_v)$ exists, then it is called the $\nu_v \otimes \psi_{T,v}$ Bessel model of $\pi_v$.
\subsection{Archimedean Local Bessel Models}
Now suppose $v$ is an infinite place of $F$. Let $K_v$ be the maximal compact subgroup of $G(F_v)$. Let $\mathcal{B}$ be the vector space of functions $\phi : G(F_v) \rightarrow \mathbb{C}$ with the following properties~\cite{pitaleschmidt2009}:
\begin{enumerate}
\item $\phi$ is smooth and $K_v$-finite.
\item $\phi(rg)= \nu_v \otimes \psi_{T,v}(r) \phi(g)$ for all $r \in R(F_v)$ and all $g \in G(F_v)$.
\item $\phi$ is slowly increasing on $Z(F_v) \backslash G(F_v)$.
\end{enumerate}
Let $\pi_v$ be a $(\mathfrak{g}_v, K_v)$-module with space $V_{\pi_v}$. If there is a subspace $\mathcal{B}(\pi_v) \subset \mathcal{B}$ that is invariant under the actions of $\mathfrak{g}_v$ and $K_v$ and isomorphic as a $(\mathfrak{g}_v, K_v)$-module to $\pi_v$, then $\mathcal{B}(\pi_v)$ is called the $\nu_v \otimes \psi_{T,v}$ Bessel model of $\pi_v$. In some instances the Bessel model at an archimedean place is known to be unique. For example, when $v$ is a real place and $\pi_v$ is a lowest or highest weight representation of $GSp_4(\mathbb{R})$ the Bessel model of $\pi_v$ is unique~\cite{pitaleschmidt2009}. It is also known to be unique when the central character of $\pi_v$ is trivial~\cite{Bessel}. The results of this article do not depend on the uniqueness of the Bessel model at any archimedean place; however, if the model is not unique, then there is no local integral.
\subsection{Bessel Coefficients}
Let $\mathcal{A}_0(G)$ be the space of cuspidal automorphic forms on $G(\mathbb{A})$. Suppose that $\pi$ is an irreducible cuspidal automorphic representation of $G(\mathbb{A})$ with space $V_\pi \subset \mathcal{A}_0(G)$. Let $\omega_\pi$ denote the central character of $\pi$. Let $\phi \in V_\pi$.
Suppose that $\nu$ is as above. Denote by $Z_{\mathbb{A}}$ the center of $G(\mathbb{A})$ so $Z_{\mathbb{A}} \subset H_T(\mathbb{A})$. Suppose that $\nu_{|Z_{\mathbb{A}}}= \omega_\pi^{-1}$. Define the $\nu \otimes \psi_T $ Bessel coefficient of $\phi$ to be
\begin{align}
\phi^{T,\nu}(g) = \int \limits_{Z_{\mathbb{A}} R(F) \backslash R(\mathbb{A})} (\nu\otimes \psi_T)^{-1}(r) \phi(rg) dr. \label{besseldef}
\end{align}
\section{Siegel Eisenstein Series} \label{siegeleisenstein}
For more details about Siegel Eisenstein series of ${\rm Sp}_{2n}$ see Kudla and Rallis~\cite{kudlarallis1994} and Section 1.1 of Kudla, Rallis, and Soudry~\cite{kudlarallissoudry1992}.
\begin{defn}[Induced Representation] \label{induced}
The induced representation of $\delta_P^{\frac{1}{3}(s-\frac{1}{2})}$ to $G(\mathbb{A})$ is defined to be
\begin{equation*}
\text{Ind}_{P(\mathbb{A})}^{G(\mathbb{A})}( \delta_P ^{\frac{1}{3}(s-\frac{1}{2})})=\left\{ \begin{array}{rl} f:G(\mathbb{A}) \rightarrow \mathbb{C} \Big| & f \ \text{is smooth, right $K$-finite, and for}\\ \Big| & p \in P(\mathbb{A}), \, f(pg)=\delta_P^{\frac{1}{3}(s+1)}(p)f(g) \end{array} \right\}.
\end{equation*}
\end{defn}
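As a consistency check on Definition~\ref{induced}, the exponent $\frac{1}{3}(s+1)$ in the transformation law is the exponent $\frac{1}{3}(s-\frac{1}{2})$ shifted by the $\delta_P^{1/2}$ of normalized induction:
\begin{equation*}
\tfrac{1}{3}\left(s-\tfrac{1}{2}\right)+\tfrac{1}{2} = \tfrac{s}{3}-\tfrac{1}{6}+\tfrac{1}{2} = \tfrac{1}{3}(s+1).
\end{equation*}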
For convenience write $\text{Ind}(s)=\text{Ind}_{P(\mathbb{A})}^{G(\mathbb{A})}( \delta_P ^{\frac{1}{3}(s-\frac{1}{2})})$. Then $\text{Ind}(s)$ is a representation of $(\mathfrak{g}_\infty , K_\infty ) \times G(\mathbb{A}_{\text{fin}})$ under right translation. A standard section $f(s, \cdot)$ is one whose restriction to $K$ is independent of $s$. Let $f(s, \cdot ) \in \text{Ind}(s)$ be a holomorphic standard section; that is, for all $g \in G(\mathbb{A})$ the function $s \mapsto f(s, g)$ is holomorphic.
For a finite place $v$ define $f_v^\circ(s, \cdot)$ to be the standard section such that $f_v^\circ(s,k)=1$ for all $k \in K_v$.
There is an intertwining operator
\begin{equation*}
M(s): \text{Ind}(s) \rightarrow \text{Ind}(1-s).
\end{equation*}
For $Re(s) >2$, $M(s)$ may be defined by means of the integral~\cite{kudlarallis1988}*{4.1}
\begin{equation*}
M(s)f(s,g):= \int \limits_{N(\mathbb{A})} f(s, wng) \, dn
\end{equation*}
where
\begin{equation*}
w=\begin{bmatrix} & & 1 & \\ & & & 1\\ -1 & & & \\ & -1 & & \end{bmatrix}.
\end{equation*}
The induced representation factors as a restricted tensor product with respect to $f_v^\circ(s, \cdot)$:
\begin{equation*}
\text{Ind}(s)={\bigotimes_v}^\prime \text{Ind}_v(s),
\end{equation*}
and so does the intertwining operator
\begin{equation*}
M(s)=\bigotimes_v M_v(s).
\end{equation*}
There is a normalization of $M_v(s)$
\begin{equation*}
M^*_v(s)=\frac{\zeta_v(s+1) \, \zeta_v(2s)}{\zeta_v(s-1) \, \zeta_v(2s-1)}M_v(s)
\end{equation*}
where $\zeta_v(\cdot)$ is the local zeta factor for $F$ at $v$, so that
\begin{equation*}
M^*_v(s)f^\circ_v(s,g)= f^\circ_v(1-s,g).
\end{equation*}
Define the Siegel Eisenstein series
\begin{equation*}
E(s ,f,g)= \sum \limits_{\gamma \in P(F)\backslash G(F)} f(s,\gamma g)
\end{equation*}
which converges uniformly for $Re(s)>2$ and has meromorphic continuation to all $\mathbb{C}$~\cite{kudlarallis1994}. Furthermore, the Eisenstein series satisfies the functional equation
\begin{equation*}
E(s,f,g)=E(1-s,M(s)f,g)
\end{equation*}
~\cite{kudlarallis1994}*{1.5}.
Later, it will be useful to work with the normalized Eisenstein series. Let $S$ be a finite set of places, including the archimedean places, such that $f_v=f_v^{\circ}$ for $v \notin S$. Define
\begin{equation}
E^*(s,f,g)=\zeta^S(s+1) \, \zeta^S (2s) E(s,f,g). \label{normalizing}
\end{equation}
Kudla and Rallis completely determined the locations of the possible poles of Siegel Eisenstein series~\cite{kudlarallis1994}. The normalized Eisenstein series $E^*(s,f,g)$ has at most simple poles, located at $s_0=1,2$~\cite{kudlarallis1994}*{Theorem 1.1}.
\section{The Weil Representation}
\subsection{The Schr\"odinger Model}\label{schrodinger}
Consider the orthogonal space $V_T$ with symmetric form $( \, , )_T$, and the four dimensional symplectic space $W$ with symplectic form $<\, ,>$. Let $\mathbb{W}=V_T \otimes W$ be the symplectic space with form $\ll \, , \gg$ defined on pure tensors by $\ll u\otimes v, u' \otimes v' \gg \, =(u,u')_T \, <v,v'>$ and extended to all of $\mathbb{W}$ by linearity.
The Weil representation $\omega=\omega_{\psi_T^{-1}}$ is a representation of $\widetilde{Sp}(\mathbb{W})$; here it is restricted to $\widetilde{Sp}(W) \times O(V_T) \hookrightarrow \widetilde{Sp}(\mathbb{W})$. Since the dimension of $V_T$ is even, there is a splitting $Sp(W) \times O(V_T) \hookrightarrow \widetilde{Sp}(W) \times O(V_T)$~\cite{rallis1982}*{Remark 2.1}.
Suppose that $X$ is a maximal isotropic subspace of $W$. Then $\mathbb{X} = X \otimes_F V_T$ is a maximal isotropic subspace of $\mathbb{W}$. The space of the Schr\"odinger model, $\mathcal{S} (\mathbb{X})$, is the space of Schwartz-Bruhat functions on $\mathbb{X}$. Let $v$ be a place of $F$. If $v$ is a finite place, then $\mathcal{S}(\mathbb{X}(F_v))$ is the space of locally constant functions with compact support. If $v$ is an infinite place, then $\mathcal{S}(\mathbb{X}(F_v))$ is the space of $C^\infty$ functions all derivatives of which are rapidly decreasing.
Identify $\mathbb{X}$ with $V_T^2=\text{Mat}_{2}$.
The local Weil representation at a finite place $v$ restricted to
\begin{equation*}
Sp(W)(F_v) \times O(V_T)(F_v)
\end{equation*}
acts in the following way on the Schr\"odinger model
\begin{align*}
\omega_v(1, h) \varphi(x) &=\varphi(h^{-1}x),\\
\omega_v(m(a), 1) \varphi(x) &= \chi_{T,v}\circ \det(a) \ |\det(a)|_v \ \varphi(xa),\\
\omega_v( n(X), 1) \varphi(x) &= \psi_{ ^tx T x}^{-1}(X) \varphi(x),\\
\omega_v( w, 1) \varphi (x) & = \gamma \cdot \hat{\varphi}(x),
\end{align*}
where $\gamma$ is a certain eighth root of unity, and $\hat{\varphi}$ is the Fourier transform of $\varphi$ defined by
\begin{align*}
\hat{\varphi}(x)= \int \limits_{V_T(F_v)^2} \varphi(x') \psi( (x, x')_1 ) dx'.
\end{align*}
Here $( \ , \ )_1$ is defined as follows: for $x, y \in \mathbb{X}=\text{Mat}_{2}$ define
\begin{equation*}
(x , y)_1:={\rm tr} (x \cdot y).
\end{equation*}
Note that matrices of the form $m(a)$, $n(X)$, and $w$ generate $Sp_4$.
The space $\mathcal{S}(\mathbb{X}(\mathbb{A}))$ is spanned by functions $\varphi= \otimes_v \varphi_v$ where $\varphi_v=\varphi_v^\circ$ is the normalized local spherical function for all but finitely many of the finite places $v$. At an unramified place $\varphi_v^\circ=1_{\mathbb{X}(\mathcal{O}_v)}$. The global Weil representation, $\omega={\otimes_v}^\prime \omega_v$, is the restricted tensor product with respect to the normalized spherical functions $\varphi_v^\circ$.
Suppose that $F_v=\mathbb{R}$. Assume that $\psi_T= \exp(2\pi i x)$. Let $K_{1,v}=\text{Sp}_4(\mathbb{R}) \cap O_4(\mathbb{R})$. Let $V_T^+$ and $V_T^-$ be positive definite and negative definite, respectively, subspaces of $V_T(F_v)$ such that $V_T(F_v)=V_T^+ \oplus V_T^-$. For $x \in V_T$ define
\begin{equation*}
(x,x)_+= \left\{
\begin{array}{rl}
(x,x) & \text{if } x \in V_T^+ \\
- (x,x) & \text{if } x \in V_T^- \\
\end{array} \right.
\end{equation*}
For $x \in V_T^2$ let $(x,x)=\left( (x_i, x_j) \right)_{i,j}\in {\rm Mat}_2$. Define \begin{equation*}\varphi_v^\circ(x)= \exp(-\pi \, {\rm tr\,}((x,x)_+)).\end{equation*}
Now, suppose $F_v=\mathbb{C}$. Assume that $\psi_T=\exp(4 \pi i (x+\bar{x}))$. In this case $K_{1,v}\cong {\rm Sp}(4)$, the compact real form of ${\rm Sp}(4, \mathbb{C})$. There is a choice of basis so that \begin{equation*}(x,x)_+= {}^t \bar{x}x, \end{equation*} and \begin{equation*}\varphi_v^\circ(x)=\exp(-2\pi \, {\rm tr\,}((x,x)_+)).\end{equation*}
The subspace of $K_{1,v}$ finite vectors in the space of smooth vectors, $\mathcal{S}_0(\mathbb{X}(F_v)) \subset \mathcal{S}(\mathbb{X}(F_v))$, consists of functions of the form $p(x) \varphi_v^\circ $ where $p$ is a polynomial on $V_T(F_v)^2$.
\subsection{Extension to Similitude Groups}
Harris and Kudla describe how to extend the Weil representation to similitude groups~\cite{harriskudla1992}*{$\S$3}. See also~\cite{harriskudla2004} and~\cite{roberts2001}.
The Weil representation can be extended to the group
\begin{align*}
Y=\{ (g, h) \in GSp_4 \times GSO(V_T) \ | \ \lambda_G(g)=\lambda_{T}(h) \}.
\end{align*}
For $(g,h) \in Y$ the action of $\omega_v$ is defined by
\begin{align*}
\omega_v(g,h) \varphi(x)= |\lambda_{T}(h)|_v^{-1} \ \omega_v( g_1 , 1) \varphi(h^{-1}x)
\end{align*}
where
\begin{align*}
g_1=\begin{bmatrix} I_2 & \\ & \lambda_G(g)^{-1} \cdot I_2 \end{bmatrix}g.
\end{align*}
Note that the natural projection to the first coordinate
\begin{align*}
p_1: &Y \rightarrow GSp(4)\\
&(g,h) \mapsto g
\end{align*}
is generally not surjective. Indeed, $g \in {\rm Im}(p_1)$ if and only if there is an $h \in GSO(V_T)$ such that $ \lambda_G(g)=\lambda_T(h)$. Define
\begin{equation*}
G^+ := p_1 (Y).
\end{equation*}
\subsection{Theta Lifts} \label{thetalifts}
Let $H=GSO(V_T)$, and $H_1=SO(V_T)$.
\begin{defn}
The theta lift of $\nu^{-1}$ to $G^+(\mathbb{A})$ is given by the integral \label{theta}
\begin{equation*}
\theta_\varphi(\nu^{-1})(g)=
\int \limits_{H_1(F) \backslash H_1(\mathbb{A})} \sum \limits_{x \in V_T^2(F)} \omega(g, h_g h_1) \varphi (x) \nu^{-1} (h_g h_1 ) dh_1.
\end{equation*}
\end{defn}
Here, $h_g \in H(\mathbb{A})$ is any element such that $\lambda_{T}(h_g)=\lambda_G(g)$; Definition~\ref{theta} is independent of the choice of $h_g$.
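To see this independence directly, note that any other choice has the form $h_g h_1'$ with $h_1' \in H_1(\mathbb{A})$, and the change of variables $h_1 \mapsto (h_1')^{-1} h_1$, which preserves the invariant measure on $H_1(F) \backslash H_1(\mathbb{A})$, identifies
\begin{equation*}
\int \limits_{H_1(F) \backslash H_1(\mathbb{A})} \sum \limits_{x \in V_T^2(F)} \omega(g, h_g h_1' h_1) \varphi (x) \, \nu^{-1} (h_g h_1' h_1 ) \, dh_1
\end{equation*}
with the integral defining $\theta_\varphi(\nu^{-1})(g)$.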
Since $H_1(F) \backslash H_1(\mathbb{A})$ is compact, the integral is termwise absolutely convergent~\cite{weil1965}.
There is a natural inclusion
\begin{equation*}
G(F)^+ \backslash G(\mathbb{A})^+ \hookrightarrow G(F) \backslash G(\mathbb{A}).
\end{equation*}
Consider $\theta_\varphi(\nu^{-1})$ as a function on $G(F) \backslash G(\mathbb{A})$ by extending it by $0$~\cite{ganichino}*{$\S$7.2}.
If $\varphi$ is chosen to be a $K$-finite Schwartz-Bruhat function, then $\theta_\varphi(\nu^{-1})$ is a $K$-finite automorphic form on $G(F) \backslash G(\mathbb{A})$ \cite{harriskudla1992}.
\section{The Degree Five $L$-function} \label{lfunctionsec}
The connected component of the dual group of ${\rm G}Sp_4$ is $^L G^\circ= \text{GSp}_4(\mathbb{C})$~\cite{borel1979}*{I.2.2 (5)}.
The degree five $L$-function of ${\rm G}Sp_4$ corresponds to the map of $L$-groups~\cite{soudry1988}*{page 88}
\begin{equation*}
\varrho: {\rm G}Sp_4(\mathbb{C}) \rightarrow {\rm PGSp}_4(\mathbb{C}) \cong {\rm SO}_5(\mathbb{C}).
\end{equation*}
I describe the local $L$-factor explicitly when $v$ is finite and $\pi_v$ is equivalent to an unramified principal series.
Consider the maximal torus $A_0$ of $G$ and an element $t \in A_0$:
\begin{equation}
t=\text{diag}(a_1, a_2, a_0 a_1^{-1}, a_0 a_2^{-1}):=\begin{bmatrix} a_1 & & & \\ & a_2 & & \\ & & a_0 a_1^{-1} & \\ & & & a_0 a_2^{-1} \end{bmatrix}. \label{toruselement}
\end{equation}
The character lattice of $G$ is
\begin{equation*}
X=\mathbb{Z}e_0 \oplus \mathbb{Z}e_1 \oplus \mathbb{Z}e_2
\end{equation*}
where $e_i(t)=a_i$.
The cocharacter lattice is
\begin{equation*}
X^{\vee}= \mathbb{Z}f_0 \oplus \mathbb{Z}f_1 \oplus \mathbb{Z}f_2
\end{equation*}
where
\begin{align*}
&f_0(u)=\text{diag} (1,1,u,u), & f_1(u)=\text{diag}( u, 1, u^{-1}, 1),\\ & f_2(u)=\text{diag}(1,u,1,u^{-1}).
\end{align*}
Suppose
\begin{equation*}
\pi_v \cong \pi_v(\chi)=Ind_{B(F_v)}^{G(F_v)}(\chi)
\end{equation*}
where
\begin{equation}
\chi(t)=\chi_1(a_1) \chi_2(a_2) \chi_0(a_0), \label{chidefine}
\end{equation}
and $t$ is given by~\eqref{toruselement}. Then $^L G^\circ=\hat{G}$ has character lattice $X^\prime=X^\vee$ and cocharacter lattice $X^{\prime \vee}=X$. Let $f_i^\prime=e_i \in X^{\prime \vee}$.
Define
\begin{equation}
\hat{t}=\prod_{i=0}^{2} f_i^\prime(\chi_i(\varpi_v)) \in {^L G ^\circ}. \label{satakeparameter}
\end{equation}
Then $\hat{t}$ is the Satake parameter for $\pi_v(\chi)$~\cite{asgarischmidt2001}*{Lemma 2}. The Langlands $L$-factor is defined in~\cite{borel1979}*{II.7.2 (1)} to be
\begin{IEEEeqnarray*}{rCl}
L(s, \pi_v, \varrho):&=&\det( I - \varrho(\hat{t}) q_v^{-s}) ^{-1} \nonumber \\
&=&(1-q_v^{-s})^{-1}(1- \chi_1(\varpi_v) q_v^{-s})^{-1} (1- \chi_1(\varpi_v)^{-1}q_v^{-s})^{-1}\nonumber \\&&(1- \chi_2(\varpi_v) q_v^{-s})^{-1} (1- \chi_2(\varpi_v)^{-1} q_v^{-s})^{-1}.
\end{IEEEeqnarray*}
Let $S$ be a finite set of places, including the archimedean places, such that $\pi_v$ is unramified for $v \notin S$. Then the partial $L$-function is defined to be
\begin{equation*}
L^S(s, \pi)=L^S(s, \pi, \varrho)=\prod_{v \notin S} L(s, \pi_v, \varrho).
\end{equation*}
The product converges absolutely for $Re(s) \gg 0$~\cite{langlands1971}.
\section{Global Integral Representation} \label{global}
The main result of this section is Proposition~\ref{eulerproduct}, which states that the global integral unfolds as an Euler product of local integrals.
As before, $G={\rm GSp}_4$, $G_1={\rm Sp}_4$, and $P=MN$ is the Siegel parabolic subgroup of $G$; let $P_1=M_1 N=P \cap G_1$ where $M_1 = M \cap G_1$.
The global integral is
\begin{align}
I(s; f, \phi, \nu)= I(s):&= \int \limits_{Z_\mathbb{A} G(F) \backslash G(\mathbb{A})}
E(s,f, g) \phi(g) \theta_\varphi(\nu^{-1})(g) \, dg \nonumber\\
&=\int \limits_{Z_\mathbb{A} G(F)^+ \backslash G(\mathbb{A})^+}
E(s, f, g) \phi(g) \theta_\varphi(\nu^{-1})(g) \, dg \label{10}
\end{align}
where equality holds because $\theta_\varphi(\nu^{-1})$ is supported on $G(F)^+ \backslash G(\mathbb{A})^+$.
The central character of $E(s, f, - )$ is trivial, and the central character of $\theta_\varphi (\nu^{-1})$ is $\omega_\pi ^{-1}$, so the integrand is $Z_\mathbb{A}$-invariant. Since $E(s, f, -)$ and $\theta_\varphi(\nu^{-1})$ are automorphic forms, they are of moderate growth. Since $\phi$ is a cuspidal automorphic form, it is rapidly decreasing on a Siegel domain~\cite{moeglinwaldspurger1995}*{I.2.18}. Therefore, the integral \eqref{10} converges wherever $E(s, f, -)$ does not have a pole.
Define
\begin{align*}
\mathbb{A}^{\times, +}:=\lambda_T(H(\mathbb{A})), & & F^{\times, +} := F^\times \cap \mathbb{A}^{\times, +} \subseteq \mathbb{A}^{\times,+},\\
\mathbb{A}^{\times, 2}:=\{a^2 | \, a \in \mathbb{A}^\times \}, & & \mathcal{C}:=\mathbb{A}^{\times, 2} F^{\times, +} \backslash \mathbb{A}^{\times, +}.
\end{align*}
There is an isomorphism
\begin{align}
Z_\mathbb{A} G_1(\mathbb{A}) G(F)^+ \backslash G(\mathbb{A})^+ \cong \mathcal{C}. \label{quotient1}
\end{align}
The isomorphism is realized by the map $G(\mathbb{A})^+ \rightarrow \mathcal{C}$, $g \mapsto \lambda_G(g)$, which is surjective with kernel $Z_\mathbb{A} G_1(\mathbb{A}) G(F)^+$. This fact is stated in~\cite{ganichino}.
Identify $Z_\mathbb{A}$ with the subgroup of scalar linear transformations in $H(\mathbb{A})$.
\begin{prop} \label{quotient2}
\begin{equation}
Z_\mathbb{A} H_1(\mathbb{A}) H(F) \backslash H(\mathbb{A}) \cong \mathcal{C}. \label{Hiso}
\end{equation}
\end{prop}
\begin{proof}
Consider the map $H(\mathbb{A}) \rightarrow \mathcal{C}$, $h \mapsto \lambda_T(h)$. This map is onto by the definition of $\mathbb{A}^{\times, +}$. I must show that the kernel is $Z_\mathbb{A} H_1(\mathbb{A}) H(F)$. Suppose $\lambda_T(h)=a^2 \mu$ where $a \in \mathbb{A}^{\times}$ and $\mu \in F^{\times, +}$. By Hasse's norm theorem~\cite{hasse1967} there is an element $h_\mu \in H(F)$ such that $\lambda_T(h_\mu)=\mu$. Let $z(a)$ be the scalar matrix with eigenvalue $a$. Since $\lambda_T( z(a)^{-1} h h_\mu^{-1})=1$, the element $h_1= z(a)^{-1} h h_\mu^{-1}$ lies in $H_1(\mathbb{A})$. Therefore, $h= z(a) h_1 h_\mu$. This shows that $Z_\mathbb{A} H_1(\mathbb{A}) H(F)$ contains the kernel of this map. The opposite inclusion is obvious. This proves the proposition.
\end{proof}
Fix sections
\begin{align*}
&\mathcal{C} \rightarrow G(\mathbb{A})^+ & & \mathcal{C} \rightarrow H(\mathbb{A}) \nonumber \\
&c \mapsto g_c & & c \mapsto h_c
\end{align*}
\begin{prop}
There is a measure $dc$ on $\mathcal{C}$ and measures $dh_1$ and $dg_1$ on $H_1(F) \backslash H_1(\mathbb{A})$ and $G_1(F) \backslash G_1(\mathbb{A})$, respectively, such that
\begin{equation*}
\int \limits_{Z_\mathbb{A} H(F) \backslash H(\mathbb{A}) } \, f(h) \, dh= \int \limits_{\mathcal{C} } \int \limits_{H_1(F) \backslash H_1(\mathbb{A})} \, f(h_1 h_c) \, dh_1 \, dc,
\end{equation*}
and
\begin{equation*}
\int \limits_{Z_\mathbb{A} G(F)^+ \backslash G(\mathbb{A})^+ } \, f(g) \, dg= \int \limits_{\mathcal{C}} \int \limits_{G_1(F) \backslash G_1(\mathbb{A})} \, f(g_1 g_c) \, dg_1 \, dc.
\end{equation*}
\end{prop}
\begin{proof}
Let $dh$ denote the right invariant measure on $Z_\mathbb{A} H(F) \backslash H(\mathbb{A})$. Then by \cite{prasadtakloobighash2011}*{Lemma 13.2} there are measures $dh_1$ and $dh_c$ so that for all $f \in L^{1}(Z_\mathbb{A} H(F) \backslash H(\mathbb{A}))$
\begin{equation}
\int \limits_{Z_\mathbb{A} H(F) \backslash H(\mathbb{A}) }f(h) \, dh= \int \limits_{Z_\mathbb{A} H_1(\mathbb{A}) H(F) \backslash H(\mathbb{A}) } \int \limits_{H_1(F) \backslash H_1(\mathbb{A})} f(h_1 h_c) \, dh_1 \, dh_c. \label{hintegral}
\end{equation}
Through the isomorphism~\eqref{Hiso} define a measure $dc := dh_c$ on $\mathcal{C}$. By~\eqref{quotient1} define a measure $dg_c:=dc$ on $Z_\mathbb{A} G_1(\mathbb{A}) G(F)^+ \backslash G(\mathbb{A})^+$. Then there is a choice of measures $dg$ and $dg_1$ so that for $f \in L^{1}(Z_\mathbb{A} G(F)^+ \backslash G(\mathbb{A})^+)$
\begin{equation}
\int \limits_{Z_\mathbb{A} G(F)^+ \backslash G(\mathbb{A})^+ }f(g) \, dg= \int \limits_{Z_\mathbb{A} G_1(\mathbb{A}) G(F)^+ \backslash G(\mathbb{A})^+ } \int \limits_{G_1(F) \backslash G_1(\mathbb{A})} f(g_1 g_c) \, dg_1 \, dg_c. \label{gintegral}
\end{equation}
\end{proof}
Then \eqref{10} equals
\begin{equation}
\int \limits_{\mathcal{C}} \int \limits_{G_1(F) \backslash G_1(\mathbb{A})}
E(s, f, g_1 g_c) \phi(g_1 g_c) \theta_\varphi(\nu^{-1})(g_1 g_c) \, dg_1 \, dc. \label{9}
\end{equation}
Denote the theta kernel by
\begin{equation*}
\theta_\varphi(g_1g_c,h_1h_c)= \sum \limits_{x \in V_T^2(F)} \omega(g_1 g_c, h_1 h_c ) \varphi(x).
\end{equation*}
Then
\begin{equation*}
\theta_\varphi(\nu^{-1})(g_1 g_c) = \int \limits_{H_1(F) \backslash H_1(\mathbb{A})}\theta_\varphi(g_1 g_c, h_1 h_c) \nu^{-1}(h_1 h_c) dh_1.
\end{equation*}
As noted in Section~\ref{thetalifts}, this integral converges absolutely. The following adjoint identity holds for the global theta integral:
\begin{IEEEeqnarray}{rCl}
\int \limits_{G_1(F) \backslash G_1(\mathbb{A})}
\int \limits_{H_1(F) \backslash H_1(\mathbb{A})}
E(s, f, g_1 g_c) \, \phi(g_1 g_c) \, \theta_\varphi(g_1 g_c, h_1 h_c) \, \nu^{-1}(h_1 h_c) \, dh_1\, dg_1 \nonumber \\
=
\int \limits_{H_1(F) \backslash H_1(\mathbb{A})} \int \limits_{G_1(F) \backslash G_1(\mathbb{A})}
E(s, f, g_1 g_c) \, \phi(g_1 g_c) \, \theta_\varphi(g_1 g_c, h_1 h_c) \, \nu^{-1}(h_1 h_c) \, dg_1 \, dh_1. \label{adjoint} \IEEEeqnarraynumspace
\end{IEEEeqnarray}
Since $ P_1(F) \backslash G_1(F) \cong P(F) \backslash G(F)$,
\begin{equation*}
E(s, f,g) = \sum \limits_{\gamma \in P(F) \backslash G(F) } f(s, \gamma g) = \sum \limits_{\gamma \in P_1(F) \backslash G_1(F)} f(s, \gamma g).
\end{equation*}
Then the inner integral of~\eqref{adjoint} becomes
\begin{equation*}
\int \limits_{P_1(F) \backslash G_1(\mathbb{A})}
f(s, g_1 g_c) \, \phi(g_1 g_c) \, \theta_\varphi(g_1 g_c, h_1 h_c) \, \nu^{-1}(h_1 h_c) \, dg_1.
\end{equation*}
Expanding the theta kernel gives
\begin{align}
\int \limits_{P_1(F) \backslash G_1(\mathbb{A})}
f(s, g_1 g_c) \, \phi(g_1 g_c) \, \sum \limits_{x \in V_T^2(F)} \omega(g_1 g_c, h_1 h_c) \varphi (x) \, \nu^{-1} (h_1 h_c) \, dg_1. \label{1}
\end{align}
The Levi factor of $P_1$ is $M_1 \cong {\rm GL}_2$. The Weil representation restricted to this subgroup acts on $\mathcal{S}(\mathbb{X}(\mathbb{A}))$ by
\begin{equation*}
\omega(m(y), 1) \, \varphi_1(x)= |\det(y)|_\mathbb{A} \, \varphi_1(xy)
\end{equation*}
for $\varphi_1 \in \mathcal{S}(\mathbb{X}(\mathbb{A}))$, $y \in {\rm GL}_2(\mathbb{A})$, and $x \in {\rm Mat}_2(\mathbb{A})$. Consider $x \in {\rm Mat}_2(F)$. If $\det(x)=0$, then ${\rm Stab}_{{\rm GL}_2(\mathbb{A})}(x)$ contains a normal unipotent subgroup. By the cuspidality of $\phi$ these terms vanish upon integration.
Therefore
\begin{IEEEeqnarray}{rCl}
& & \int \limits_{ P_1(F) \backslash G_1(\mathbb{A})} f(s, g_1 g_c) \, \phi(g_1 g_c)
\sum \limits_{x \in Mat_2(F)} \omega(m(x) g_1 g_c, h_1 h_c) \varphi (1_2) \nu^{-1} (h_1 h_c ) dg_1\nonumber \\
&= & \int \limits_{ P_1(F) \backslash G_1(\mathbb{A})} f(s, g_1 g_c) \, \phi(g_1 g_c)
\sum \limits_{x \in {\rm GL}_2(F)} \omega(m(x) g_1 g_c, h_1 h_c) \varphi (1_2) \nu^{-1} (h_1 h_c ) dg_1\nonumber \\
&= & \int \limits_{ N(F) \backslash G_1(\mathbb{A})} f(s, g_1 g_c) \, \phi(g_1 g_c)
\omega(g_1 g_c, h_1 h_c) \varphi (1_2) \, \nu^{-1} (h_1 h_c ) \, dg_1. \label{a1}
\end{IEEEeqnarray}
Note that the integral
\begin{equation*}
\int \limits_{ N(F) \backslash G_1(\mathbb{A})} f(s, g_1 g_c) \, \phi(g_1 g_c) \, \omega(g_1 g_c, h_1 h_c) \varphi (1_2) \, \nu^{-1} (h_1 h_c ) \, dg_1
\end{equation*}
is $H_1(F)$ invariant; however, the integrand is not.
Then
\begin{align*}
& \int \limits_{ N(F) \backslash G_1(\mathbb{A})} f(s, g_1 g_c) \, \phi(g_1 g_c) \omega(g_1 g_c, h_1 h_c) \varphi (1_2) \, dg_1\\
=& \int \limits_{ N(\mathbb{A}) \backslash G_1(\mathbb{A})} \int \limits_{N(F) \backslash N(\mathbb{A})} f(s,g_1 g_c) \phi(ng_1 g_c)
\omega(n g_1 g_c, h_1 h_c) \, \varphi (1_2) \, dn \, dg_1.
\end{align*}
Define
\begin{equation*}
\phi^T(g) :=\int \limits_{N(F) \backslash N(\mathbb{A}) } \phi(ng) \, \psi_T^{-1}(n) \, dn.
\end{equation*}
Then
\begin{equation*}
\int \limits_{N(F) \backslash N(\mathbb{A})} \phi(ng)\omega(n g, h_g h) \, \varphi (1_2) \, dn \\
= \phi^{T}(g) \, \omega(g, h_g h) \varphi (1_2).
\end{equation*}
This follows since for $n \in N(\mathbb{A})$
\begin{equation*}
\omega(n g_1 g_c, h_1 h_c) \varphi (1_2)= \psi_T^{-1}(n) \, \omega(g_1 g_c, h_1 h_c) \varphi (1_2),
\end{equation*}
so the integral \eqref{a1} becomes
\begin{align}
I(s) = & \int \limits_\mathcal{C} \int \limits_{H_1(F) \backslash H_1(\mathbb{A})} \int \limits_{ N(\mathbb{A}) \backslash G_1(\mathbb{A})} f(s,g_1 g_c) \phi^{T}(g_1 g_c) \nonumber\\
& \times \omega(g_1 g_c, h_1 h_c) \varphi (1_2) \, \nu^{-1} (h_1 h_c ) \, dg_1 \, dh_1 \, dg_c. \label{2}
\end{align}
Computing in the Weil representation gives
\begin{align}
\omega(g_1 g_c , h_1 h_c) \varphi(1_2) =& |\lambda_G(g_c)|^{-1}_\mathbb{A} \, \omega \left( \ell \left(\lambda_G(g_c)^{-1}\right) g_1 g_c, 1 \right) \varphi \left( (h_1 h_c)^{-1}\right)\\
=& \chi_V \circ \det(h_1 h_c) \, |\lambda_G(g_c)|^{-1}_\mathbb{A} \, |\det (h_1 h_c)^{-1}|^{-1}_\mathbb{A} \nonumber \\ &\times \omega \left( m(h_1 h_c )^{-1} \ell\left(\lambda_G(g_c)^{-1}\right)g_1 g_c, 1 \right) \varphi(1_2). \label{41}
\end{align}
For $h \in H_T$, $\det(h) \in N_{E/F}(E^\times)$. Therefore, $\chi_V \circ \det(h)=1$.
Combining this with the fact that
\begin{equation*}
|\lambda_G(g_c)|_\mathbb{A} =|\det\left(h_1 h_c\right)|_\mathbb{A}
\end{equation*}
(see Section~\ref{orthog}) and applying it to $\eqref{41}$ gives
\begin{align}
\omega(g_1 g_c , h_1 h_c) \varphi(1_2)&= \omega \left( m(h_1 h_c)^{-1} \ell \left( \lambda_G(g_c)^{-1} \right) g_1 g_c, 1 \right) \varphi(1_2) \nonumber\\
&= \omega \left( b(h_1 h_c)^{-1} g_1 g_c, 1 \right) \varphi(1_2). \label{3}
\end{align}
Since $\lambda_G( b(h_1 h_c))=\lambda_G(g_1 g_c)$, the map $g_1 \mapsto b(h_1 h_c) g_1 g_c^{-1}$ sends $G_1$ to itself.
\begin{prop}
Let $d\bar{g}$ be the right invariant measure on $N(\mathbb{A}) \backslash G_1(\mathbb{A})$, and $g \in P(\mathbb{A}) \subseteq G(\mathbb{A})$. Then $d (\overline{ghg^{-1}})=|\delta_P(g)^{-1}|_\mathbb{A} \cdot d \bar{g}$.
\end{prop}
\begin{proof}
Suppose $dn$ is Haar measure on $N(\mathbb{A})$ and $d\bar{g}$ is the right invariant measure on $N(\mathbb{A}) \backslash G_1(\mathbb{A})$ normalized so that for $f \in L^1( G_1(\mathbb{A}))$
\begin{equation*}
\int \limits_{G_1(\mathbb{A})} f(g) \, dg = \int \limits_{N(\mathbb{A}) \backslash G_1(\mathbb{A})} \int \limits_{N(\mathbb{A})} f(n \bar{g}) \, dn \, d\bar{g}.
\end{equation*}
Let $g \in P(\mathbb{A})$. The transformation $h \mapsto ghg^{-1}$ preserves Haar measure on $G_1(\mathbb{A})$. Let $E$ be a measurable subset of $G_1(\mathbb{A})$ with finite volume with respect to $dg_1$, and let $vol(E)$ denote this volume. Then the volume of $N(\mathbb{A}) \backslash N(\mathbb{A}) E$ is given by the formula
\begin{equation*}
vol(N(\mathbb{A}) \backslash N(\mathbb{A}) E) = \dfrac{vol(E)}{\int \limits_{N(\mathbb{A}) \cap E} dn}.
\end{equation*}
Since $d(gng^{-1})=|\delta_P(g)|_\mathbb{A} \cdot dn$, it follows that $d(\overline{ghg^{-1}})=|\delta_P(g)|_\mathbb{A}^{-1} \cdot d\bar{g}$.
\end{proof}
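For reference, the modulus character appearing above can be computed explicitly; the following is the standard calculation, with $m(a)$ and $n(X)$, $X$ symmetric, as in Section~\ref{schrodinger}. Conjugation by $m(a)$ acts on the unipotent radical by
\begin{equation*}
m(a)\, n(X)\, m(a)^{-1} = n(a X \,{}^t a), \qquad d(a X \,{}^t a)=|\det(a)|^{3}\, dX,
\end{equation*}
since $X \mapsto a X \,{}^t a$ on $2 \times 2$ symmetric matrices has determinant $\det(a)^{3}$. Hence $\delta_P(m(a) n(X))=|\det(a)|^{3}$.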
Since $\delta_P(b(h_1 h_c))=1$, the map $g_1 \mapsto b(h_1 h_c) g_1 g_c^{-1}$ preserves the right invariant measure on $N(\mathbb{A}) \backslash G_1(\mathbb{A})$.
Substituting $\eqref{3}$ into $\eqref{2}$, and making the above change of variables gives
\begin{IEEEeqnarray*}{rCl}
I(s) &= & \int \limits_\mathcal{C} \int \limits_{H_1(F) \backslash H_1(\mathbb{A})} \int \limits_{ N(\mathbb{A}) \backslash G_1(\mathbb{A})} f(s,b(h_1 h_c) g_1) \phi^{T}(b(h_1 h_c) g_1) \nonumber\\
& & \qquad \times \omega(g_1, 1 ) \varphi (1_2) \nu^{-1} (h_1 h_c ) \, dg_1 dh_1 dg_c.
\end{IEEEeqnarray*}
The $Z_{\mathbb{A}} H_1(\mathbb{A}) H(F) \backslash H(\mathbb{A})$ integral and the $H_1(F) \backslash H_1(\mathbb{A})$ integral fold together ($H$ is abelian, so $h_c h_1 = h_1 h_c$) to produce
\begin{align}
\int \limits_{Z_{\mathbb{A}} H(F) \backslash H(\mathbb{A})} \int \limits_{ N(\mathbb{A}) \backslash G_1(\mathbb{A})} f(s,b(h) g_1) \, \phi^{T}(b(h) g_1) \, \omega(g_1, 1 ) \varphi (1_2) \, \nu^{-1} (h) \, dg_1 dh. \label{42}
\end{align}
Since $b(h) \in P(\mathbb{A})$ and $\delta_P\left( b(h) \right)=1$, we have $f(s,b(h) g_1)=f(s, g_1)$.
Therefore, changing the order of integration in $\eqref{42}$ and applying Proposition~\ref{quotient2} produces
\begin{align}
&\int \limits_{ N(\mathbb{A}) \backslash G_1(\mathbb{A})} f(s, g_1) \, \omega(g_1, 1 ) \varphi (1_2) \int \limits_{Z_{\mathbb{A}} H_T(F) \backslash H_T(\mathbb{A})} \phi^{T}(h g_1) \, \nu^{-1} (h) \, dh \, dg_1 \label{4} \\
= &\int \limits_{N(\mathbb{A}) \backslash G_1(\mathbb{A})} f(s,g_1) \phi^{T, \nu}(g_1) \, \omega(g_1, 1) \varphi(1_2) \, dg_1. \label{blah3}
\end{align}
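The passage from \eqref{4} to \eqref{blah3} unwinds the Bessel coefficient \eqref{besseldef}: writing each $r \in R$ as $r = nh$ with $n \in N$ and $h \in H_T$,
\begin{align*}
\phi^{T,\nu}(g) &= \int \limits_{Z_{\mathbb{A}} R(F) \backslash R(\mathbb{A})} (\nu \otimes \psi_T)^{-1}(r)\, \phi(rg) \, dr\\
&= \int \limits_{Z_{\mathbb{A}} H_T(F) \backslash H_T(\mathbb{A})} \nu^{-1}(h) \int \limits_{N(F) \backslash N(\mathbb{A})} \psi_T^{-1}(n)\, \phi(n h g) \, dn \, dh\\
&= \int \limits_{Z_{\mathbb{A}} H_T(F) \backslash H_T(\mathbb{A})} \phi^{T}(h g)\, \nu^{-1}(h) \, dh.
\end{align*}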
The next section shows that \eqref{4} converges absolutely for $Re(s) >2$, justifying the change in order of integration.
\begin{prop} \label{eulerproduct}
Let $\phi^{T,\nu}=\otimes_v \phi^{T,\nu}_v$, $f(s, \cdot)=\otimes_v f_v(s, \cdot)$, and $\varphi=\otimes_v \varphi_v$. Then for $Re(s)>2$
\begin{align}
& \int \limits_{Z_\mathbb{A} G(F) \backslash G(\mathbb{A})}
E(s, f,g) \, \phi(g) \, \theta_\varphi(\nu^{-1})(g) \, dg \nonumber\\
=& \int \limits_{N(\mathbb{A}) \backslash G_1(\mathbb{A})} f(s,g) \, \phi^{T, \nu}(g) \, \omega(g, 1) \varphi(1_2) \, dg \nonumber\\
=& \int \limits_{N(\mathbb{A}_\infty) \backslash G_1(\mathbb{A}_\infty)} f(s,g_\infty) \, \phi^{T, \nu}(g_\infty) \, \omega(g_\infty, 1) \varphi(1_2) \, dg_\infty \cdot \prod \limits_{v < \infty} I_v(s) \label{blah1111}
\end{align}
where
\begin{align}
I_v(s)=\int \limits_{N(F_v) \backslash G_1(F_v)} f_v(s, g_v) \, \phi_v ^{T, \nu}(g_v) \, \omega_v(g_v, 1) \varphi_v(1_2) \, dg_v. \label{blah2222}
\end{align}
\end{prop}
The uniqueness of the local Bessel models is used to obtain the factorization in \eqref{blah1111}. When the archimedean Bessel models are also unique, the integral factors at those places in the same manner as \eqref{blah2222}.
\section{Absolute Convergence of the Unfolded Integral} \label{abs}
\begin{prop}
The integral
\begin{equation}
\int \limits_{N(\mathbb{A}) \backslash G_1(\mathbb{A})} \int \limits_{Z_{\mathbb{A}} H_T(F) \backslash H_T(\mathbb{A}) } f(s,g) \, \phi^{T}(h g) \, \nu^{-1}(h) \, \omega(g, 1) \varphi(1_2) \, dh \, dg \label{absconv}
\end{equation}
converges absolutely for $Re(s)>2$.
\end{prop}
\begin{proof}
This argument follows~\cite{moriyama2004} to show that $\phi$ is bounded on $G_1(\mathbb{A})$. By~\cite{moeglinwaldspurger1995}*{Corollary I.2.12, I.2.18}, $\phi$ is rapidly decreasing. To be precise, suppose $\mathfrak{S}$ is a Siegel domain for $G(\mathbb{A})$. Let $G(\mathbb{A})^1 := \cap_{\chi} \ker |\chi|_\mathbb{A}$ where $\chi$ ranges over the rational characters of $G$. Then
\begin{equation*}
G(\mathbb{A})^1=\{ g \in G(\mathbb{A}) \Big| |\lambda_G(g)|_\mathbb{A}=1 \}.
\end{equation*}
Therefore, $G_1(\mathbb{A}) \subset G(\mathbb{A})^1$.
\begin{defn}[A rapidly decreasing function on $G(\mathbb{A})$~\cite{moeglinwaldspurger1995}*{I.2.12}]\label{rapiddecrease}
A function $\phi : \mathfrak{S} \rightarrow \mathbb{C}$ is rapidly decreasing if there exists an $r>0$ such that for every positive real valued character $\lambda$ of the standard maximal torus $A_0$ there exists $C_0>0$ such that for all $z \in Z_\mathbb{A}$ and $g \in G(\mathbb{A})^1 \cap \mathfrak{S}$ the following inequality holds:
\begin{equation}
|\phi(zg)| \leq C_0 ||z||^r \lambda( a(g)) \label{siegel}
\end{equation}
where $|| \cdot ||$ is the height function on $G(\mathbb{A})$, and $a(g)$ is defined so that if $g=nak$, then $a(g)=a$ where $n \in N_0$, the unipotent radical of the Borel, $a \in A_0$, and $k \in K$.
\end{defn}
Choosing $z=1$ and $\lambda$ to be the adelic norm of the similitude character in~\eqref{siegel}, the right-hand side of the inequality equals $C_0$ for $g \in \mathfrak{S} \cap G_1(\mathbb{A})$. Therefore, $\phi$ is bounded on $\mathfrak{S} \cap G_1(\mathbb{A})$. However, $\phi$ is $G(F)$-invariant, so $\phi$ is bounded on \begin{equation*}G(F)(\mathfrak{S} \cap G_1(\mathbb{A}))\supseteq G_1(\mathbb{A}).\end{equation*}
The quotients $Z_{\mathbb{A}} H_T(F) \backslash H_T(\mathbb{A})$ and $N(F) \backslash N(\mathbb{A})$ are compact. Therefore, \begin{equation}|\nu(r)|=|\psi_T(n)|=1 \end{equation} for all $r \in H_T(\mathbb{A}) \cap G_1(\mathbb{A})$, and all $n \in N(\mathbb{A})$.
Assume that all representatives $r \in Z_{\mathbb{A}} H_T(F) \backslash H_T(\mathbb{A})$ are chosen so that $r \in G_1(\mathbb{A})$. Then $r g_1 \in G_1(\mathbb{A})$ and
\begin{equation}
|\phi(r g_1)| \, |\nu^{-1}(r)| < C_0. \label{ineq}
\end{equation}
Furthermore, since $\nu_{|Z_\mathbb{A}}$ agrees with the central character of $\phi$, \eqref{ineq} holds for all $r \in H_T(\mathbb{A})$.
Then
\begin{align*}
\int \limits_{Z_{\mathbb{A}} H_T(F) \backslash H_T(\mathbb{A}) } |\phi^T( h g)| \, |\nu^{-1}(h)| \, dh &
=\int \limits_{Z_\mathbb{A} R(F) \backslash R(\mathbb{A})} | (\nu \otimes \psi_T)^{-1}(r)| \, | \phi(r g_1) | \, dr \nonumber \\
&\leq \text{vol}\left(Z_\mathbb{A} R(F) \backslash R(\mathbb{A})\right) \cdot C_0.
\end{align*}
Therefore,
\begin{align*}
\int \limits_{N(\mathbb{A}) \backslash G_1(\mathbb{A})} \int \limits_{Z_{\mathbb{A}} H_T(F) \backslash H_T(\mathbb{A}) } |f(s,g)| \, |\phi^{T}(h g)| \, |\nu^{-1}(h)| \, |\omega(g, 1) \varphi(1_2)| \, dh \, dg \nonumber \\
\leq C \int \limits_{N(\mathbb{A}) \backslash G_1(\mathbb{A})} |f(s,g)| \, |\omega(g, 1) \varphi(1_2)| \, dg.
\end{align*}
The Schwartz-Bruhat function $\varphi$ is $K$-finite, as is $f(s, -)$, so there is an open subgroup $K_0 \leq K$ such that $[K : K_0]=n < \infty$ and both $\varphi$ and $f(s, -)$ are $K_0$-invariant. Let $\{ k_i \}_{1 \leq i \leq n}$ be a complete set of coset representatives for $K / K_0$. We have
\begin{equation*}
G_1(\mathbb{A})=P_1(\mathbb{A}) K.
\end{equation*}
Suppose that $p=m(a) n \in P_1(\mathbb{A})$, and $k \in k_i K_0$. Define $\varphi_i:=\omega(k_i,1)\varphi$. Then we have
\begin{align*}
\omega(pk, 1)\varphi(1_2)&=\omega(p, 1) \omega(k, 1) \varphi(1_2)\\
&=\omega(p, 1) \varphi_i(1_2)\\
&=\psi_T(n) \, \chi_V \circ \det(a) \, |\det(a)|_{\mathbb{A}} \, \varphi_i(a).
\end{align*}
Therefore,
\begin{align*}
&\int \limits_{N(\mathbb{A}) \backslash G_1(\mathbb{A})} |f(s,g)| \, |\phi^{T, \nu}(g)| \, |\omega(g, 1) \varphi(1_2)| \, dg\\ \leq &
\int \limits_{N(\mathbb{A}) \backslash P_1(\mathbb{A})} \int \limits_{K} |\delta_P(p)^{-1}| \, |\delta_P(p)^{s/3+1/3}||f(s,k)| \, |\omega(pk, 1)\varphi(1_2)| \, dp \, dk\\
\leq & \text{vol}(K_0) \times \, \sum \limits_{i=1}^{n} |f(s,k_i)| \int \limits_{GL_2(\mathbb{A})} |\varphi_i(a)| \, |\det(a)|^{s-1}_{\mathbb{A}} \, da.
\end{align*}
Absolute convergence of $\eqref{absconv}$ depends only on the convergence of
\begin{equation*}
\int \limits_{GL_2(\mathbb{A})} |\varphi_i(a)| \, |\det(a)|^{s-1}_{\mathbb{A}} \, da.
\end{equation*}
The Schwartz-Bruhat function $\varphi_i=\otimes_v \varphi_{i,v}$ is rapidly decreasing; that is, $\varphi_{i,v}$ is compactly supported at each finite place $v$ and rapidly decreasing at each archimedean place $v$. Let $Q$ be the Borel subgroup of $GL_2$, and let $L=\prod \limits_v L_v$, where $L_v$ is the maximal compact subgroup of $GL_2(F_v)$, so that ${\rm GL}_2(\mathbb{A})=Q(\mathbb{A})L$. There is a compact open finite index subgroup $L_i \leq L$ such that $\varphi_i$ is $L_i$-invariant. Let $\varphi_{ij}$, $j=1, \ldots, m$, be the $L$-translates of $\varphi_i$. Then
\begin{align*}
& \quad \int \limits_{GL_2(\mathbb{A})} |\varphi_i(a)| |\det(a)|^{s-1}_{\mathbb{A}} \, da\\
&= \text{vol}(L_i) \times \sum \limits_{j=1}^{m} \, \int \limits_{Q(\mathbb{A})} |\varphi_{ij}(b)| \, |\det(b)|^{s-1}_{\mathbb{A}} \, db.
\end{align*}
Then
\begin{align}
&\int \limits_{Q(\mathbb{A})} |\varphi_{ij}(b)| |\det(b)|^{s-1}_{\mathbb{A}} db\\
=&\int \limits_{\mathbb{A}^{\times}} \int \limits_{\mathbb{A}^\times} \int \limits_{\mathbb{A}} \, \left|\varphi_i \begin{pmatrix} a_1 & x\\ & a_2 \end{pmatrix} \right| \, |a_1|^{s-1}_{\mathbb{A}} |a_2|_{\mathbb{A}}^{s-1} \, \left|\frac{a_1}{a_2}\right|^{-1}_{\mathbb{A}} \, dx \, \frac{da_1}{|a_1|_{\mathbb{A}}} \, \frac{da_2}{|a_2|_{\mathbb{A}}} \\
=& \int \limits_{\mathbb{A}^{\times}} \int \limits_{\mathbb{A}^\times} \int \limits_{\mathbb{A}} \, \left|\varphi_i \begin{pmatrix} a_1 & x\\ & a_2 \end{pmatrix} \right| \, |a_1|^{s-3}_{\mathbb{A}} |a_2|^{s-1}_{\mathbb{A}} \, dx \, da_1 \, da_2. \label{estimate}
\end{align}
Since $\varphi_i$ decreases rapidly as $|a_1|_{\mathbb{A}}$, $|a_2|_{\mathbb{A}}$, and $|x|_{\mathbb{A}}$ become large, and the factor $|a_1|^{s-3}_{\mathbb{A}}$ is integrable near $a_1=0$ precisely when $\text{Re}(s)>2$, the integral $\eqref{estimate}$ converges for $\text{Re}(s) > 2$.
\end{proof}
\begin{cor} \label{boundedlemma}
There exists a real number $C$ so that for every $g_1 \in G_1(\mathbb{A})$,
\begin{equation}
|\phi^{T, \nu}(g_1)| \leq C.
\end{equation}
\end{cor}
\begin{remark} Absolute convergence of the integral to the right of the line Re$(s) =2$ is the best one could hope for, since the Eisenstein series $E(s, f, -)$ has a possible pole at $s=2$ by Section~\ref{siegeleisenstein} and~\cite{kudlarallis1994}*{Theorem 1.1}.
\end{remark}
\section{Computation of the Unramified Integral}
\label{unramifiedchapter}
The local integral is
\begin{equation}
I_v(s)= \int \limits_{N(F_v) \backslash G_1(F_v)} f_v(s, g) \, \phi_v ^{T, \nu}(g) \, \omega_v(g, 1) \varphi_v(1_2) \, dg. \label{11}
\end{equation}
Let $v$ be a finite place. As before, $T= \begin{bmatrix} 1 & \\ & -\rho \end{bmatrix}$.
\begin{defn}\label{unramifieddef}
The data for the integral $I_v(s)$ are unramified if all of the following hold:
\begin{enumerate}[1.]
\item $K_v = G(\mathcal{O}_v)$, and by the $\mathfrak{p}$-adic Iwasawa decomposition $G(F_v)=P(F_v) K_v$
\item $\phi_v^{T, \nu}=\phi_v^{T, \nu^\circ}$ is the normalized local spherical Bessel function, i.e. it is right $K_v$ invariant
\item $\varphi_v =\varphi_v^\circ = \mathbf{1}_{Mat_{2, 2}(\mathcal{O}_v)}$ is the normalized spherical function for the Weil representation
\item $f_v(g, s)=f_v^\circ(g,s)=\delta_{P,v} ^{\frac{s}{3}+\frac{1}{3}}(g)$ where the modulus character is extended to the entire group $G(F_v)$ by $\delta_{P,v}(pk)=\delta_{P,v}(p)$ for $p \in P(F_v)$ and $k \in K_v$
\item $\nu( H_T(\mathcal{O}_v))=1$
\item $\rho \in \mathcal{O}_v^\times$
\end{enumerate}
\end{defn}
Assume that all the data are unramified for $I_v(s)$. This is the case for almost every $v$.
Let $P_1=P \cap G_1$, and $M_1=M \cap G_1 \cong GL_2$, and $K_{1,v}=K_v \cap G_1(F_v)$.
With these assumptions the integrand of $\eqref{11}$ is constant on double cosets \newline
$N(F_v) \backslash G_1(F_v) / K_{1,v}$. By the $\mathfrak{p}$-adic Iwasawa decomposition, $G_1=P_1(F_v) K_{1,v}$, and since \begin{equation*}M_1(F_v) \cong N(F_v) \backslash P_1(F_v)\end{equation*} representatives may be found among representatives for $M_1(F_v) / \left( M_1(F_v) \cap K_{1,v} \right).$
By~\cite{furusawa1993}
\begin{equation}
{\rm G}L_2(F_v)= \coprod \limits_{m \geq 0} H(F_v) \begin{bmatrix} \varpi^m & \\ & 1 \end{bmatrix} {\rm G}L_2(\mathcal{O}_v). \label{decompF}
\end{equation}
Let $m \geq 0$, and define
\begin{equation*}
H^m(\mathcal{O}_v) :=H(F_v) \cap \begin{bmatrix} \varpi^{m} & \\ & 1 \end{bmatrix} {\rm G}L_2(\mathcal{O}_v) \begin{bmatrix} \varpi^{-m} & \\ & 1 \end{bmatrix} .
\end{equation*}
Note that $H^m(\mathcal{O}_v) \subseteq H(\mathcal{O}_v)$.
Recall that $E$ is the discriminant field of $V_T$. Define $E_v := E \otimes_F F_v$. Let $\left( \frac{E}{v} \right)$ denote the Legendre symbol which equals $-1$, $0$, or $1$ according to whether $v$ is inert, ramifies, or splits in $E$.
By elementary number theory, $\left( \frac{E}{v} \right) =0$ for only finitely many primes $v$; this case is not considered. Call the case when $\left( \frac{E}{v} \right) =-1$ the \textit{inert case}, and the case when $\left( \frac{E}{v} \right) =1$ the \textit{split case}. In each case $E_v^\times \cong H(F_v)$.
If $\left( \frac{E}{v} \right)=-1$, then $E_v / F_v$ is an unramified quadratic extension.
If $\left( \frac{E}{v}\right)=+1$, then $E_v \cong F_v \oplus F_v$. In this case there is an isomorphism
\begin{equation*}
\iota : H(F_v) \rightarrow (F_v \oplus F_v)^\times.
\end{equation*}
Let $\Pi_1 := \iota^{-1} ( (\varpi, 1))$, and $\Pi_2 := \iota^{-1}( (1, \varpi))$. Then $\det \Pi_i \in \mathfrak{p}$ for $i=1,2$, and $\Pi_1 \Pi_2 = \mathrm{diag}(\varpi, \varpi)$.
\subsection{The Inert Case}
\begin{prop}
If $\left( \frac{E}{v} \right) = -1$, then a complete set of irredundant coset representatives for $N(F_v)\backslash G_1(F_v) / K_{1,v}$ is given by
\begin{equation*}
m\left( h \begin{bmatrix} \varpi^{m+n} & \\ & \varpi^n \end{bmatrix} \right)
\end{equation*}
where $m\geq 0$, $n \in \mathbb{Z}$, and $h$ runs over a set of representatives for $H(\mathcal{O}_v) / H^m(\mathcal{O}_v)$.
\end{prop}
\begin{proof}
This follows from the above decomposition \eqref{decompF} and the fact that $H(F_v)=Z(F_v)H(\mathcal{O}_v)$ where $Z$ is the center of ${\rm G}L_2$.
\end{proof}
Since
\begin{equation*}
\omega_v(m(g),1)\varphi_v^\circ (1_2)=\chi_{T,v}\circ \det(g) \, |\det(g)|_v \, \varphi_v^\circ(g),
\end{equation*}
only $m(g)$ with $g \in \text{GL}_2(F_v) \cap \text{Mat}_{2, 2} (\mathcal{O}_v)$ lie in the support of $\omega_v(m(g) , 1) \varphi_v^\circ (1_2)$. Hence, a complete set of irredundant representatives for the cosets in the support of $\omega_v(m(g) , 1) \varphi_v^\circ(1_2)$ is given by the representatives listed above with $n \geq 0$.
\begin{prop}
The various components of the integrand are computed as follows
\begin{align}
&\delta_P ^{\frac{s}{3}+\frac{1}{3}} \left( m \left( h \begin{bmatrix} \varpi_v^{n+m} & \\ & \varpi^n \end{bmatrix} \right) \right)=q_v^{-(2n+m)(s+1)} \label{integrand1}\\
&\omega_v \left( m \left(h \begin{bmatrix} \varpi_v^{n+m} & \\ & \varpi_v^n\\ \end{bmatrix} \right), 1 \right) \varphi_v^\circ(1_2)=\chi_{T,v}(\varpi_v^{2n+m}) q_v^{-(2n+m)} \label{integrand2}\\
&\text{vol}\left(N(F_v) \backslash N(F_v) \, m\left( h \begin{bmatrix} \varpi^{n+m} & \\ & \varpi^n \end{bmatrix} \right) \, K_{1,v}\right)=q_v^{6n+3m} \label{integrand3}
\end{align}
\end{prop}
\begin{proof}
The proof of \eqref{integrand1} is just an application of \eqref{adeq}, and \eqref{integrand2} follows from Section~\ref{schrodinger}.
To prove \eqref{integrand3} observe that there are measures $dn$ on $N$ and $d\bar{g}$ on $N \backslash G_1$ so that
\begin{equation*}
\int \limits_{G_1} f(g) dg = \int \limits_{N \backslash G_1} \int \limits_{N} f(n \bar{g}) \, dn \, d\bar{g}
\end{equation*}
and
\begin{equation*}
\int \limits_{G_1} f(nmk) dg = \int \limits_{M_1} \int \limits_{ K} \int \limits_{N} \delta_{P}^{-1}(m) \, f(nmk) \, dn\, dk \, dm.
\end{equation*}
Therefore, $\delta_{P,v}^{-1} \cdot d\bar{g}$ gives a right invariant measure on $N(F_v) \backslash G_1(F_v)$, normalized so that $\text{vol} \left( N(F_v) \backslash N(F_v)K_{1,v} \right) =1$.
Again, let $h$ be a representative for an element of $H(\mathcal{O}_v) / H^m(\mathcal{O}_v)$, and let
\begin{equation*}
A(h,m,n)=N(F_v) \backslash N(F_v) \, m\left( h \begin{bmatrix} \varpi_v^{m+n} & \\ & \varpi_v^{n}\\ \end{bmatrix} \right) \, K_{1,v}.
\end{equation*}
Then
$\text{vol}(A(h,m,n))=\delta_{P,v}^{-1}\left( m\left( h \begin{bmatrix} \varpi_v^{m+n} & \\ & \varpi_v^n\\ \end{bmatrix} \right) \right)=q_v^{3m+6n}.$
\end{proof}
Note that the integrand does not depend on the coset of $H(\mathcal{O}_v) / H^{m}(\mathcal{O}_v)$.
\begin{lem}[Furusawa]
The index $ [ H(\mathcal{O}_v) : H^{m}(\mathcal{O}_v) ]=q_v^m\left(1-\left( \frac{E}{v}\right)\frac{1}{q_v}\right)$ for $m \geq 1$, while $H^0(\mathcal{O}_v)=H(\mathcal{O}_v)$.
\end{lem}
Then the local integral $\eqref{11}$ is
\begin{IEEEeqnarray}{rCl}
I_v(s) &=& (1+\frac{1}{q}) \sum \limits_{m,n \geq 0} \phi^{T, \nu} _v \left( m \left( \begin{bmatrix} \varpi^{n+m} & \\ & \varpi^n \end{bmatrix} \right) \right) \chi_T(\varpi_v^{m}) q_v^{2n(1-s)}q_v^{m(2-s)} \nonumber \\
&& -\ \frac{1}{q} \sum \limits_{n \geq 0} \phi^{T, \nu} _v \left( m \left( \begin{bmatrix} \varpi^{n} & \\ & \varpi^n \end{bmatrix} \right) \right) q_v^{2n(1-s)}. \label{16}
\end{IEEEeqnarray}
\subsection{The Split Case}
\begin{prop}
If $\left( \frac{E}{v} \right) = +1$, then a complete set of irredundant coset representatives for $N(F_v)\backslash G_1(F_v) / K_{1,v}$ is given by
\begin{equation*}
m\left( h \, \Pi_i^k \begin{bmatrix} \varpi^{m+n} & \\ & \varpi^{n} \end{bmatrix} \right)
\end{equation*}
where $i=1,2$, $m,k\geq 0$, $n \in \mathbb{Z}$, and $h$ are representatives for $H(\mathcal{O}_v) / H^m(\mathcal{O}_v)$.
\end{prop}
\begin{proof}
For every $(x,y) \in (F_v \oplus F_v)^\times$ there are unique integers $k_1$ and $k_2$ such that $(x,y)=(\varpi^{k_1}, \varpi^{k_2}) \cdot u$ where $u \in \mathcal{O}_v^\times \oplus \mathcal{O}_v^\times$. If $k_1 > k_2$, then let $i=1$. Otherwise $i=2$. Let $n=\min\{k_1, k_2\}$, and $k=k_i-n$. The result then follows from the decomposition \eqref{decompF}.
\end{proof}
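For example, assuming $\iota$ identifies $H(\mathcal{O}_v)$ with $\mathcal{O}_v^\times \oplus \mathcal{O}_v^\times$, if $\iota(h^\prime)=(\varpi^3, \varpi) \cdot u$ with $u \in \mathcal{O}_v^\times \oplus \mathcal{O}_v^\times$, then $k_1=3 > k_2=1$, so $i=1$, $n=1$, and $k=2$, and
\begin{equation*}
h^\prime \in \Pi_1^{2} \begin{bmatrix} \varpi & \\ & \varpi \end{bmatrix} H(\mathcal{O}_v),
\end{equation*}
since $\Pi_1 \Pi_2 = \mathrm{diag}(\varpi, \varpi)$; the scalar matrix is absorbed into the ${\rm G}L_2$ factor of \eqref{decompF} and accounts for the exponent $n$.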
\begin{prop} The various components of the integrand are computed as follows
\begin{align*}
&\delta_P ^{\frac{s}{3}+\frac{1}{3}} \left( m \left( h \, \Pi_i^k \begin{bmatrix} \varpi_v^{n+m} & \\ & \varpi^n \end{bmatrix} \right) \right)=q_v^{-(2n+m+k)(s+1)}\\
&\omega_v \left( m \left(h \, \Pi_i^k \begin{bmatrix} \varpi_v^{n+m} & \\ & \varpi_v^n\\ \end{bmatrix} \right), 1 \right) \varphi_v^\circ(1_2)=\chi_{T,v}(\varpi_v^{2n+m+k}) q_v^{-(2n+m)} \\
&\text{vol}\left(N(F_v) \backslash N(F_v) \, m\left( h \, \Pi^k_i\begin{bmatrix} \varpi^{n+m} & \\ & \varpi^n \end{bmatrix} \right) \, K_{1,v}\right)=q_v^{6n+3m+3k}
\end{align*}
\end{prop}
\begin{proof}
The only nontrivial part is volume computation which follows from an argument that is similar to the proof of ~\cite{furusawa1993}*{Lemma 3.5.3}.
\end{proof}
Then the local integral $\eqref{11}$ is
\begin{align}
I_v(s) = (1-\frac{1}{q}) & \sum \limits_{i=1,2} \sum \limits_{m,n,k \geq 0} \phi^{T, \nu} _v \left( m \left( \Pi_i^k \begin{bmatrix} \varpi^{n+m} & \\ & \varpi^n \end{bmatrix} \right) \right) q_v^{(2n+k)(1-s)}q_v^{m(2-s)} \nonumber \\
+ \frac{1}{q} & \sum \limits_{i=1,2} \sum \limits_{n,k \geq 0} \phi^{T, \nu} _v \left( m \left(\Pi_i^k \begin{bmatrix} \varpi^{n} & \\ & \varpi^n \end{bmatrix} \right) \right) q_v^{(2n+k)(1-s)}. \label{16a}
\end{align}
The expressions \eqref{16} and \eqref{16a} are evaluated using Sugano's Formula.
\section{Sugano's Formula}
The results of this section were obtained by Sugano~\cite{sugano1985}, but I follow the treatment found in Furusawa~\cite{furusawa1993}.
Define
\begin{align*}
h_v(\ell,m)=\begin{bmatrix} \varpi_v^{2m+\ell} & & & \\ & \varpi_v^{m+\ell} & & \\ & & 1 & \\ & & & \varpi_v^{m} \end{bmatrix}.
\end{align*}
The local spherical Bessel function is supported on double cosets
\begin{equation*}
\coprod \limits_{\ell,m \geq 0} R(F_v) h_v(\ell,m) GSp_4(\mathcal{O}_v).
\end{equation*}
In~\cite{sugano1985} Sugano explicitly computes the following expression when $\phi^{T,\nu}$ is spherical:
\begin{align*}
C_v(x,y)=\sum \limits_{\ell,m \geq 0} \phi_v^{T,\nu}(h_v(\ell, m)) x^m y^\ell.
\end{align*}
Since $\pi_v$ is assumed to be a spherical representation, it is isomorphic to an unramified principal series representation. I describe this more precisely.
Let $P_0$ be the standard Borel subgroup of $G$ with Levi component $M_0$.
\begin{align*}
M_0=\left\{ \begin{bmatrix} a_1 & & & \\ & a_2 & & \\ & & a_3 & \\ & & & a_4 \end{bmatrix} \Bigg| a_1 a_3 = a_2 a_4 \right\}.
\end{align*}
There exists a character
\begin{equation*}
\gamma_v : M_0(F_v) \rightarrow \mathbb{C}^\times
\end{equation*}
that is trivial on $M_0(\mathcal{O}_v)$ such that $\pi_v \cong \mathrm{Ind}^{G(F_v)}_{P_0(F_v)}(\gamma_v)$. Then $\gamma_v$ is determined by its values
\begin{align*}
\gamma_{1,v}=\gamma_v \begin{bmatrix} \varpi_v & & & \\ & \varpi_v & & \\ & & 1 &\\ & & & 1 \end{bmatrix}, & \quad
\gamma_{2,v}=\gamma_v \begin{bmatrix} \varpi_v & & & \\ & 1 & & \\ & & 1 &\\ & & & \varpi_v \end{bmatrix},\\
\gamma_{3,v}=\gamma_v \begin{bmatrix} 1 & & & \\ & 1 & & \\ & & \varpi_v &\\ & & & \varpi_v \end{bmatrix}, & \quad
\gamma_{4,v}=\gamma_v \begin{bmatrix} 1 & & & \\ & \varpi_v & & \\ & & \varpi_v &\\ & & & 1 \end{bmatrix}.
\end{align*}
Note that
\begin{equation*}
\gamma_{1,v} \gamma_{3,v} = \gamma_{2,v} \gamma_{4,v}=\omega_{\pi,v}(\varpi_v).
\end{equation*}
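Indeed, in both products the defining diagonal matrices multiply to the central element $\varpi_v 1_4$:
\begin{equation*}
\mathrm{diag}(\varpi_v, \varpi_v, 1, 1)\, \mathrm{diag}(1,1,\varpi_v, \varpi_v)=\mathrm{diag}(\varpi_v, 1,1,\varpi_v)\, \mathrm{diag}(1, \varpi_v, \varpi_v, 1)=\varpi_v 1_4,
\end{equation*}
and $\gamma_v$ restricted to the center of $G(F_v)$ recovers the central character $\omega_{\pi,v}$.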
Let
\begin{equation*}
\epsilon_v= \left\{
\begin{array}{ll}
0 & \text{if } \left( \frac{E}{v} \right)=-1,\\
\nu(\varpi_{E,v}) & \text{if } \left( \frac{E}{v} \right)=0,\\
\nu(\varpi_{E,v})+\nu(\varpi_{E,v}^\sigma) \qquad & \text{if } \left( \frac{E}{v} \right)=1.
\end{array} \right.
\end{equation*}
\begin{theorem}[Sugano]
\begin{equation*}
C_v(x,y)=\frac{H_v(x,y)}{P_v(x)Q_v(y)},
\end{equation*}
where
\begin{IEEEeqnarray*}{rCl}
P_v(x)&=& (1-\gamma_{1,v} \gamma_{2,v} q_v^{-2}x)(1-\gamma_{1,v} \gamma_{4,v} q_v^{-2}x) \nonumber\\ && (1-\gamma_{2,v} \gamma_{3,v} q_v^{-2}x)(1-\gamma_{3,v} \gamma_{4,v} q_v^{-2}x),\\
Q_v(y) &=& \prod \limits_{i=1}^4 (1-\gamma_{i,v} q_v^{-3/2}y),\\
H_v(x,y) &=& (1+A_2 A_3 x y^2)\{M_1(x)(1+A_2 x)+A_2 A_5 A_1^{-1} \alpha x^2 \}\nonumber \\ &&-A_2 x y \{\alpha M_1(x) -A_5 M_2(x) \} -A_5 P_v(x)y -A_2 A_4 P_v(x) y^2,\\
M_1(x) &=& 1 -A_1^{-1}(A_1+A_4)^{-1}(A_1 A_5 \alpha + A_4 \beta -A_1 A_5^2 -2 A_1 A_2 A_4)x \nonumber \\ &&+A_1^{-1} A_2^2 A_4 x^2,\\
M_2(x) &=& 1+A_1^{-1}(A_1 A_2 -\beta)x + A_1^{-1}A_2 (A_1 A_2 -\beta)x^2+A_2^3x^3,
\end{IEEEeqnarray*}
\begin{align*}
\alpha=&q_v^{-3/2}\sum \limits_{i=1}^4 \gamma_{i,v},&
\quad \beta=&q_v^{-3}\sum \limits_{1 \leq i < j \leq 4} \gamma_{i,v} \gamma_{j,v},\\
A_1=&q_v^{-1},& \quad A_2=&q_v^{-2} \nu(\varpi_v),\\
\quad A_3=&q_v^{-3} \nu(\varpi_v),& \quad A_4=&-q_v^{-2}\left( \frac{E}{v} \right),\\
A_5=&q_v^{-2} \epsilon_v.
\end{align*}
\end{theorem}
The parameters $\gamma_{i,v}$ differ from the parameters of Section \ref{lfunctionsec}. One verifies that
\begin{align*}
\gamma_{1,v}=\chi_1 \chi_2 \chi_0(\varpi_v), & &
\gamma_{2,v}=\chi_1 \chi_0(\varpi_v),\\
\gamma_{3,v}=\chi_0(\varpi_v), & &
\gamma_{4,v}=\chi_2 \chi_0(\varpi_v),
\end{align*}
and
\begin{equation*}
\omega_{\pi,v} = \chi_1 \chi_2 \chi_0^2.
\end{equation*}
Therefore,
\begin{align}
\gamma_{1,v} \gamma_{2,v} = \chi_1^2 \chi_2 \chi_0^2 (\varpi_v)= \chi_1 \omega_{\pi,v}(\varpi_v), \nonumber\\
\gamma_{1,v} \gamma_{4,v} = \chi_1 \chi_2^2 \chi_0^2(\varpi_v) = \chi_2 \omega_{\pi,v}(\varpi_v), \nonumber\\
\gamma_{2,v} \gamma_{3,v} = \chi_1 \chi_0^2(\varpi_v) = \chi_2^{-1} \omega_{\pi,v}(\varpi_v), \nonumber\\
\gamma_{3,v} \gamma_{4,v} = \chi_2 \chi_0^2(\varpi_v) = \chi_1^{-1} \omega_{\pi,v}(\varpi_v).\label{blah124}
\end{align}
\begin{prop}\label{absconv1}
Let $P_1, P_2 \in X \cdot \mathbb{C}[X]$, i.e. the $P_i$ have constant coefficient $0$. Then $C_v( P_1(q^{-s}) ,P_2( q^{-s}))$ converges absolutely for Re$(s) \gg 0$. Therefore, the terms of this series may be rearranged without affecting the sum.
\end{prop}
The proposition is a consequence of the following lemma which is the local version of Corollary \ref{boundedlemma}.
\begin{lem}
For each place $v$ of $F$, there is a constant $A_v>0$ and a real number $\alpha$ independent of $v$ so that
\begin{equation*}
| \phi_v^{T,\nu}(g_v)| \leq A_v | \lambda_G(g_v)|_v ^\alpha.
\end{equation*}
\end{lem}
\begin{proof}
Again, I follow \cite{moriyama2004}.
Pick a place $w | \infty$. Then for $g \in G(\mathbb{A})$ one may write $g=z_w g_1$ where $g_1 \in G(\mathbb{A})^1$, and $z_w$ is in the center of $G(F_w)$.
Then by Corollary \ref{boundedlemma}, I estimate
\begin{align*}
| \phi^{T,\nu} (g)|=& | \omega_{\pi,w}(z_w)|_w \, |\phi^{T,\nu}(g_1)| \nonumber\\
\leq & A \cdot |z_w|_w ^\beta \nonumber \\
=& A \cdot |\lambda_G(g)|_\mathbb{A}^{\beta /2}.
\end{align*}
Let $g_0 \in G(\mathbb{A})$ be such that $\phi^{T,\nu}(g_0) \neq 0$. Then for each place $v$ define
\begin{equation*}
A_v := A \times \prod \limits_{v^\prime \neq v} \frac{|\lambda_G(g_{0,v^\prime})|_{v^\prime} ^{\beta/2}}{|\phi_{v^\prime}^{T,\nu}(g_{0,v^\prime})|},
\end{equation*}
and let $\alpha= \beta/2$.
\end{proof}
If Re$(s)$ is sufficiently large (to account for $A_v$ and the coefficients of the $P_i$), then comparing the series $C_v(P_1(q^{-s}), P_2(q^{-s}))$ to a doubly geometric series completes the proof of the proposition.
\section{Computing the Local Integral}
\subsection{The Inert Case}
It is necessary to express $\eqref{16}$ as a linear combination of terms of the form $C_v(x,y)$ where $x$ and $y$ are monomials in $q_v^{-s}$. Since
\begin{align*}
m \begin{bmatrix} \varpi_v^{m+n} & \\ & \varpi_v^n \end{bmatrix} &= \begin{bmatrix} \varpi_v^{m+n} & & & \\ & \varpi_v^{n} & & \\ & & \varpi_v^{-n-m} & \\ & & & \varpi_v^{-n} \end{bmatrix} \nonumber \\
&=z(\varpi_v^{-n-m}) h_v(2n, m),
\end{align*}
\eqref{16} gives
\begin{align*}
I_v(s) = (1+\frac{1}{q}) & \sum \limits_{m,n \geq 0} \omega_{\pi,v}(\varpi)^{-m-n} \, \phi^{T, \nu} _v (h_v(2n, m)) \chi_T(\varpi_v)^m q_v^{2n(1-s)}q_v^{m(2-s)} \nonumber \\
- \frac{1}{q} &\sum \limits_{n \geq 0} \omega_{\pi,v}(\varpi)^{-n} \, \phi^{T, \nu} _v (h_v(2 n, 0)) q_v^{2n(1-s)}.
\end{align*}
\begin{prop}
When $\left( \frac{E}{v}\right)=-1$ the unramified local integral is
\begin{align*}
I_v(s)=& (1+\frac{1}{q_v})\sum \limits_{\ell, m \geq 0} (-\omega_{\pi,v}(\varpi_v)^{-1} q_v^{2-s})^m(\omega_{\pi,v}(\varpi_v)^{-1/2}q_v^{1-s})^{2 \ell} \phi^{T,\nu}_v(h_v(2\ell, m))\nonumber \\
&-\frac{1}{q_v}\sum \limits_{\ell \geq 0} (-\omega_{\pi, v}(\varpi_v)^{-1/2}q_v^{1-s})^{2 \ell} \phi^{T,\nu}_v (h_v(2 \ell, 0))\nonumber\\
=&(1+\frac{1}{q_v}) \Big[ \frac{1}{2} C_v\left( -\omega_{\pi,v}(\varpi_v)^{-1}q_v^{2-s}, \omega_{\pi, v}(\varpi_v)^{-1/2}q_v^{1-s} \right)\nonumber \\& \qquad +
\frac{1}{2} C_v\left( -\omega_{\pi,v}(\varpi_v)^{-1}q_v^{2-s}, -\omega_{\pi, v}(\varpi_v)^{-1/2}q_v^{1-s} \right) \Big] \nonumber \\& -
\frac{1}{q_v} \Big[ \frac{1}{2} C_v\left( 0, \omega_{\pi, v}(\varpi_v)^{-1/2}q_v^{1-s} \right)+ \frac{1}{2} C_v\left( 0, -\omega_{\pi, v}(\varpi_v)^{-1/2}q_v^{1-s} \right) \Big].
\end{align*}
\end{prop}
This expression was evaluated using Sugano's formula and Mathematica.
\begin{prop}
When $\left( \frac{E}{v}\right)=-1$ the unramified local integral is
\begin{IEEEeqnarray}{rCl}
I_v(s)&=&(1-q_v^{-s})(1-q_v^{-s-1}) \nonumber \\
&& (1+\gamma_1 \gamma_2 \omega_{\pi,v}(\varpi_v)^{-1} q_v^{-s})^{-1}(1+\gamma_1 \gamma_4 \omega_{\pi,v}(\varpi_v)^{-1} q_v^{-s})^{-1}\nonumber\\
&& (1+\gamma_2 \gamma_3 \omega_{\pi,v}(\varpi_v)^{-1} q_v^{-s})^{-1}(1+\gamma_3 \gamma_4 \omega_{\pi,v}(\varpi_v)^{-1} q_v^{-s})^{-1}. \label{blah125}
\end{IEEEeqnarray}
\end{prop}
Correcting by the normalizing factor~\eqref{normalizing} gives
\begin{prop} \label{inertprop1}
When $\left( \frac{E}{v}\right)=-1$, the normalized unramified local integral is
\begin{align*}
\zeta_v(s+1) \zeta_v(2s) I_v(s)=L(s, \pi_v \otimes \chi_{T,v}).
\end{align*}
\begin{proof}
This follows from comparing \eqref{blah124} and \eqref{blah125}.
\end{proof}
\end{prop}
\subsection{The Split Case}
Since $\chi_{T,v}(\varpi_v)=\left( \frac{E}{v} \right)=1$, $\chi_{T,v}$ does not appear in this part of the calculation. Since
\begin{align*}
m \left(\Pi_i^k \, \begin{bmatrix}\varpi_v^{m+n} & \\ & \varpi_v^n \end{bmatrix}\right)
&=b(\Pi_i^k)
\begin{bmatrix} \varpi_v^{m+n} & & & \\ & \varpi_v^{n} & & \\ & & \varpi_v^{-n-m-k} & \\ & & & \varpi_v^{-n-k} \end{bmatrix} \nonumber \\
&=b(\Pi_i^k) z(\varpi^{-m-n-k}) \begin{bmatrix}\varpi_v^{2m+2n+k} & & & \\ & \varpi_v^{m+2n+k} & & \\ & & 1 & \\ & & & \varpi_v^{m} \end{bmatrix} \nonumber\\
&=b(\Pi_i^k) z(\varpi_v^{-m-n-k}) h_v(2n+k, m),
\end{align*}
\eqref{16a} becomes
\begin{IEEEeqnarray}{rCl}
I_v(s) &=& (1-\frac{1}{q}) \sum \limits_{i=1,2} \sum \limits_{m,n,k \geq 0} \omega_{\pi,v}(\varpi)^{-m-n-k} \, \nu(\Pi_i)^k \, \phi^{T, \nu} _v (h_v(2n+k, m)) \, q_v^{(2n+k)(1-s)}q_v^{m(2-s)} \nonumber \\
& & + \frac{1}{q} \sum \limits_{i=1,2} \sum \limits_{n,k \geq 0} \omega_{\pi,v}(\varpi)^{-n-k} \, \nu(\Pi_i)^k \, \phi^{T, \nu} _v (h_v(2 n+k, 0)) \, q_v^{(2n+k)(1-s)}. \label{A}
\end{IEEEeqnarray}
First, suppose $\nu(\Pi_1) \neq \nu(\Pi_2)$, which is equivalent to $\nu(\Pi_i)^2 \neq \omega_{\pi,v}(\varpi)$.
\begin{prop} \label{splitprop}
\begin{equation}
\frac{\omega_{\pi,v}(\varpi)^{-1} \nu(\Pi_1)}{\omega_{\pi,v}(\varpi)^{-1} \nu(\Pi_1)^2-1}+\frac{\omega_{\pi,v}(\varpi)^{-1} \nu(\Pi_2)}{\omega_{\pi,v}(\varpi)^{-1} \nu(\Pi_2)^2-1}=0 \label{identity1}
\end{equation}
and
\begin{equation}
\frac{\omega_{\pi,v}(\varpi)^{-1}\nu(\Pi_1)^2}{\omega_{\pi,v}(\varpi)^{-1} \nu(\Pi_1)^2-1} +
\frac{\omega_{\pi,v}(\varpi)^{-1}\nu(\Pi_2)^2}{\omega_{\pi,v}(\varpi)^{-1} \nu(\Pi_2)^2-1}=1. \label{identity2}
\end{equation}
\end{prop}
\begin{proof}
Both identities follow from the fact that $\nu(\Pi_1) \cdot \nu(\Pi_2) = \omega_{\pi,v}(\varpi_v)$.
\end{proof}
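In detail: writing $a = \nu(\Pi_1)$, $b = \nu(\Pi_2)$, and $\omega=\omega_{\pi,v}(\varpi_v)$, so that $ab=\omega$, clearing denominators gives
\begin{equation*}
\frac{\omega^{-1} a}{\omega^{-1} a^2 - 1} = \frac{a}{a^2 - ab}=\frac{1}{a-b}, \qquad \frac{\omega^{-1} b}{\omega^{-1} b^2 - 1} = \frac{1}{b-a},
\end{equation*}
so the two terms of \eqref{identity1} sum to zero, while multiplying the fractions by $a$ and $b$ respectively gives $\frac{a}{a-b}+\frac{b}{b-a}=1$, which is \eqref{identity2}.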
Let
\begin{align*}
\eta_1:= \frac{\omega_{\pi,v}(\varpi)^{-1}\nu(\Pi_1)^2}{\omega_{\pi,v}(\varpi)^{-1} \nu(\Pi_1)^2-1}, & & \eta_2:= \frac{\omega_{\pi,v}(\varpi)^{-1}\nu(\Pi_2)^2}{\omega_{\pi,v}(\varpi)^{-1} \nu(\Pi_2)^2-1},\\
\theta_1:=\frac{\omega_{\pi,v}(\varpi)^{-1}\nu(\Pi_1)}{\omega_{\pi,v}(\varpi)^{-1} \nu(\Pi_1)^2-1}, & & \theta_2:= \frac{\omega_{\pi,v}(\varpi)^{-1}\nu(\Pi_2)}{\omega_{\pi,v}(\varpi)^{-1} \nu(\Pi_2)^2-1}.
\end{align*}
Then combining terms with $2n+k=\ell$ gives
\begin{IEEEeqnarray}{rCl}
I_v(s)&=&\sum \limits_{i=1,2} (1-\frac{1}{q}) \eta_i \sum \limits_{\ell,m \geq 0} (\omega_{\pi,v}(\varpi)^{-1} \nu(\Pi_i) q^{1-s} )^\ell (\omega_{\pi,v} (\varpi)^{-1}q^{2-s})^m \phi_v^{T,\nu}(h_v(2 \ell, m)) \nonumber \\
&& -\ (1-\frac{1}{q}) (\eta_i -1) \sum \limits_{\ell, m \geq 0} (\omega_{\pi,v}(\varpi)^{-1/2} q^{1-s})^{2 \ell} ( \omega_{\pi,v}(\varpi)^{-1} q^{2-s})^m \phi_v^{T,\nu}(h_v(2\ell, m))\nonumber \\
&& -\ (1-\frac{1}{q}) \theta_i \sum \limits_{\ell,m \geq 0} (\omega_{\pi,v}(\varpi)^{-1/2} q^{1-s})^{2 \ell+1} ( \omega_{\pi,v}(\varpi)^{-1} q^{2-s})^m \phi_v^{T,\nu}(h_v(2\ell+1, m)) \nonumber \\
&& +\ \frac{1}{q} \eta_i \sum \limits_{\ell \geq 0} (\omega_{\pi,v}(\varpi)^{-1} \nu(\Pi_i) q^{1-s} )^\ell \phi_v^{T,\nu}(h_v( \ell, 0)) \nonumber\\
&& -\ \frac{1}{q} (\eta_i-1) \sum \limits_{\ell \geq 0} (\omega_{\pi,v}(\varpi)^{-1/2} q^{1-s})^{2 \ell} \phi_v^{T,\nu}(h_v(2\ell, 0))
\nonumber \\
&& -\ \frac{1}{q} \theta_i \sum \limits_{\ell \geq 0} (\omega_{\pi,v}(\varpi)^{-1/2} q^{1-s})^{2 \ell+1} \phi_v^{T,\nu}(h_v(2\ell+1, 0)).
\label{splitsum}
\end{IEEEeqnarray}
Applying Proposition~\ref{splitprop} to \eqref{splitsum} gives
\begin{prop} When $\left(\frac{E}{v}\right)=+1$ and $\nu(\Pi_1) \neq \nu(\Pi_2)$ the unramified local integral is given by
\begin{align}
I_v(s)=&\left(1-\frac{1}{q_v}\right)\eta_1 \cdot C_v(\omega_{\pi,v}(\varpi_v)^{-1} q_v^{2-s}, \omega_{\pi,v}(\varpi_v)^{-1}\nu(\Pi_1)q_v^{1-s}) \nonumber \\
+& \left(1-\frac{1}{q_v}\right) \eta_2 \cdot C_v(\omega_{\pi,v}(\varpi_v)^{-1} q_v^{2-s}, \omega_{\pi,v}(\varpi_v)^{-1}\nu(\Pi_2)q_v^{1-s}) \nonumber\\
-&\frac{1}{q_v} \eta_1 \cdot C_v(0, \omega_{\pi,v}(\varpi_v)^{-1}\nu(\Pi_1)q_v^{1-s})\nonumber \\
-&\frac{1}{q_v} \eta_2 \cdot C_v(0, \omega_{\pi,v}(\varpi_v)^{-1}\nu(\Pi_2)q_v^{1-s}).\label{splitsum1}
\end{align}
\end{prop}
\begin{prop}
When $\left( \frac{E}{v}\right)=+1$ the unramified local integral is
\begin{IEEEeqnarray}{rCl}
I_v(s)&=&(1+q_v^{-s})(1-q_v^{-s-1}) \nonumber \\
&&(1-\gamma_1 \gamma_4 \omega_{\pi,v}(\varpi_v)^{-1} q_v^{-s})^{-1}(1-\gamma_1 \gamma_2 \omega_{\pi,v}(\varpi_v)^{-1} q_v^{-s})^{-1} \nonumber \\
&&(1-\gamma_2 \gamma_3 \omega_{\pi,v}(\varpi_v)^{-1} q_v^{-s})^{-1}
(1-\gamma_3 \gamma_4 \omega_{\pi,v}(\varpi_v)^{-1} q_v^{-s})^{-1}.\label{blah127}
\end{IEEEeqnarray}
\end{prop}
\begin{proof}
When $\nu(\Pi_1) \neq \nu(\Pi_2)$ this follows from applying Sugano's formula to \eqref{splitsum1}, and the computation was verified with Mathematica. To extend the identity to all values of $\nu(\Pi_1)$ and $\nu(\Pi_2)$ observe that the right hand side of \eqref{A} is equal to
\begin{align*}
\sum \limits_{i=1,2} \sum \limits_{m,n,k \geq 0} A(m) \phi_{v}^{T,\nu} ( h_v(2n+k, m)) X^m Y^n Z^k,
\end{align*}
with $X=\omega_{\pi,v}^{-1}(\varpi) q_v^{2-s}$, $Y=\omega_{\pi,v}^{-1}(\varpi) q_v^{2-2s}$, $Z=\omega_{\pi,v}^{-1}(\varpi) \nu(\Pi_i) q_v^{1-s} $, $A(0)=1$, and $A(m)=1-\frac{1}{q}$ otherwise. This is an absolutely convergent power series for $Re(s) \gg 0$, and convergence is uniform as $\nu(\Pi_1)$ and $\nu(\Pi_2)$ vary in a compact set. Furthermore, for $\nu(\Pi_1) \neq \nu(\Pi_2)$ the sum is equal to a rational function in $q^{-s}$ that remains continuous for all values of $\nu(\Pi_1)$ and $\nu(\Pi_2)$ such that $\omega_{\pi,v}(\varpi) \nu(\Pi_1) \nu(\Pi_2)=1$. Then by uniform convergence the equality holds for all such values of $\nu(\Pi_1)$ and $\nu(\Pi_2)$.
\end{proof}
\begin{prop}\label{splitprop1}
When $\left( \frac{E}{v}\right)=+1$, the normalized unramified local integral is
\begin{align*}
\zeta_v(s+1) \zeta_v(2s) I_v(s)=L(s, \pi_v \otimes \chi_{T,v}).
\end{align*}
\end{prop}
\begin{proof}
This follows from comparing \eqref{blah124} and \eqref{blah127}.
\end{proof}
\section{Ramified Integrals at Finite Places} \label{ramified}
Consider the local integral at a finite place $v$ where some of the data may be ramified.
\begin{equation*}
I_v(s)= \int \limits_{N(F_v) \backslash G_1(F_v)} f_v(s, g) \, \phi_v ^{T, \nu}(g) \, \omega_v(g, 1) \varphi_v(1_2) \, dg.
\end{equation*}
Let $K_{P,v}=P(F_v) \cap K_v$, and let $K_{P_1,v}=P_1(F_v) \cap K_v= K_{1,v} \cap K_{P,v}$.
\begin{prop}\label{induced2}
Let $h : K_{P,v} \backslash K_v \rightarrow \mathbb{C}$ be a locally constant (i.e. smooth) function. Then there exists $f_v(s,-) \in \text{Ind}(s)$ such that $f_v(s,k)=h(k)$ for all $k \in K_v$.
\end{prop}
\begin{proof}
This proposition follows from \cite{casselman1995}*{Proposition 3.1.1}.
\end{proof}
\begin{prop}\label{finiteram}
There exists a $K_v$-finite section $f_v(s,g) \in \text{Ind}(s)$, and a $K_v$-finite Schwartz-Bruhat function $\varphi_v \in \mathcal{S}(\mathbb{X}(F_v))$ such that $I_v(s) \equiv 1$.
\end{prop}
\begin{proof}
This argument comes directly from~\cite{piatetski-shapirorallis1988}.
Suppose that $K_0$ is an open compact subgroup of $G_1(F_v)$ so that $\phi^{T,\nu}_v$ is right $K_0$ invariant. Consider the isomorphism $p: M_1 \rightarrow \text{GL}_2$, $m(a) \mapsto a$.
Let $K_\phi$ be an open compact subgroup of GL$_2(F_v)$ such that $K_\phi \subseteq p(M_1(F_v) \cap \nolinebreak K_0) \cap \ker \chi_T \circ \det$.
Let $\varphi_v = 1_{K_\phi}$. Since $\varphi_v$ is a smooth function, there exists an open compact subgroup $K' \subseteq G_1(F_v)$ such that $\omega_v(k,1) \varphi_v = \varphi_v$ for all $k \in K'$.
Let $\mathcal{K}=K_{P_1,v} \backslash K_{P_1,v} \cdot (K' \cap K_0)$. By Proposition~\ref{induced2} there is $f_v(s, -) \in \text{Ind}(s)$ so that $f_v(s, -)|_{K_{1,v}}=1_{K_{{P_1},v} \cdot (K^\prime \cap K_0)}.$ Then
\begin{IEEEeqnarray}{rCl}
I_v(s)&=& \int \limits_{N(F_v) \backslash G_1(F_v)} f_v(s, g) \, \phi_v ^{T, \nu}(g) \, \omega_v(g, 1) \varphi_v(1_2) \, dg \nonumber \\
&=& \int \limits_{K_{P_1,v} \backslash K_{1,v}} \int \limits_{\text{GL}_2(F_v)} \delta_P(m(a))^{-1} f_v(s, m(a)k) \phi_v^{T,\nu} (m(a)k) \nonumber\\ && \qquad \omega_v(m(a)k, 1) \varphi_v(1_2) \, da \, dk \nonumber \\
&=& \int \limits_{K_{P_1,v} \backslash K_{1,v}} f_v(s,k) \int \limits_{\text{GL}_2(F_v)} |\det(a)|_v^{s-2} \phi_v^{T,\nu} (m(a)k)\, \omega_v(m(a)k, 1) \varphi_v(1_2) \, da \, dk \nonumber \\
&=& \int \limits_{\mathcal{K}} \int \limits_{\text{GL}_2(F_v)} |\det(a)|_v^{s-2} \phi_v^{T,\nu} (m(a)k) \, \omega_v(m(a)k, 1) \varphi_v(1_2) \, da \, dk \nonumber \\
&=& \int \limits_{\mathcal{K}} \int \limits_{\text{GL}_2(F_v)} |\det(a)|_v^{s-1} \phi_v^{T,\nu}(m(a) k) \, \chi_T( \det(a)) \, 1_{K_\phi}(a) \, da \, dk \label{blah2}
\end{IEEEeqnarray}
For $a \in K_\phi$, $|\det(a)|_v=1$ and $\phi_v^{T, \nu}(m(a))=1$, so
\begin{eqnarray*}
& &\int \limits_{\mathcal{K}} \int \limits_{\text{GL}_2(F_v)} |\det(a)|_v^{s-1} \phi_v^{T,\nu}(m(a) k) \, \chi_T( \det(a)) \, 1_{K_\phi}(a) \, da \, dk \\
&=&\int \limits_{K_\phi (K' \cap K_0)} \phi_v^{T,\nu}(k) \, dk.
\end{eqnarray*}
After normalizing measures and $\phi^{T, \nu}_v$, $I_v(s) \equiv 1$.
\end{proof}
\section{Ramified Integrals at Infinite Places}\label{archimedean}
Consider the integral at the infinite places from Proposition~\ref{eulerproduct}.
\begin{equation*}
I_\infty(s)= \int \limits_{N(\mathbb{A}_\infty) \backslash G_1(\mathbb{A}_\infty)} f(s,g) \phi^{T, \nu}(g) \omega(g, 1) \varphi(1_2) \, dg.
\end{equation*}
\begin{prop} \label{archprop}
For every complex number $s_0$ there is a choice of data $f( s, g) \in$ \text{Ind}$(s)$, and $\varphi=\varphi_\infty \otimes \varphi_{\text{fin}} \in \mathcal{S}( \mathbb{X}(\mathbb{A}))$ such that $I_\infty$ converges to a holomorphic function at $s_0$, and $I_\infty(s_0) \neq 0$.
\end{prop}
The proof of this proposition is essentially given in~\cite{piatetski-shapirorallis1988}*{\S 2}, but is reproduced here with necessary changes.
\begin{proof}
By the Iwasawa decomposition, $G_1(\mathbb{A}_\infty) = P_1(\mathbb{A}_\infty) K_\infty$, where $P_1$ has Levi factor $M_1 \cong \text{GL}_2$. The integral $I_\infty(s)$ may be broken up as an $M_1(\mathbb{A}_\infty)$ integral and a $K_\infty$ integral:
\begin{align*}
I_\infty(s)=& \int \limits_{K_\infty} \int \limits_{\text{GL}_2(\mathbb{A}_\infty)} \delta_P(m(a))^{-1} f(s, m(a)k) \phi^{T, \nu}(m(a)k) \omega(m(a)k, 1) \varphi(1_2) \, da \, dk \\
=& \int \limits_{K_\infty} f(k,s) \int \limits_{\text{GL}_2(\mathbb{A}_\infty)} |\det(a)|_\infty^{s-2} \phi^{T, \nu}(m(a)k) \omega(m(a)k, 1) \varphi(1_2) \, da \, dk.
\end{align*}
Here $| \, |_\infty$ denotes the valuation on $ \mathbb{A}_\infty$ defined by $| x |_\infty=\prod \limits_v|x_v|_v$, and $\det(a) \in \mathbb{A}_\infty$ with $v$ coordinate equal to $\det(a_v)$.
Since $f(k ,s)$ is a standard section, it is independent of $s$ when restricted to $K_\infty$. Write $f(k,s)=f(k)$ for $k \in K_\infty$.
The integral
\begin{align}
A(k,s):= \int \limits_{\text{GL}_2(\mathbb{A}_\infty)} |\det(a)|^{s-2} \phi^{T, \nu}(m(a)k) \, \omega(m(a)k, 1) \varphi_\infty(1_2) \, da \label{blah23}
\end{align}
gives a function on $(M_1(\mathbb{A}_\infty) \cap K_\infty) \backslash K_\infty = (P_1(\mathbb{A}_\infty) \cap K_\infty) \backslash K_\infty $. The function $\varphi$ was chosen to be $K$-finite, in particular it is $K_\infty$-finite. Suppose that $\varphi=\otimes_v \varphi_v$, and $\varphi_\infty=\otimes_{v|\infty} \varphi_v$. Since the integrand of \eqref{blah23} is a smooth function of $k$, in the region of absolute convergence, $A(-,s)$ is a smooth function.
There is some choice of data so that $A(1,s_0) \neq 0$.
The function $\varphi_\infty$ is $K_\infty$-finite. Let $\varphi_\infty^\circ=\bigotimes_{v|\infty} \varphi_v^\circ$. By Section~\ref{schrodinger}, $\varphi_\infty$ is of the form
\begin{equation*}
\varphi_\infty(X)= p(X) \varphi_\infty^\circ(X)
\end{equation*}
where $X =\bigotimes_{v|\infty} X_v \in \mathbb{X}(\mathbb{A}_\infty)$ and $p(X)$ is a polynomial in $\mathbb{X}(\mathbb{A}_\infty)$. In particular, $| \det(X) |_\infty$ is a polynomial in $\mathbb{X}(\mathbb{A}_\infty)$. Pick $p(X)$ to be of the form
\begin{equation*}
p(X)=q(X) \cdot |\det(X)|_\infty ^n
\end{equation*}
where $q(X)$ is a polynomial in $\mathbb{X}(\mathbb{A}_\infty)$ and $n \in \mathbb{N}$. By Corollary~\ref{boundedlemma}, $\phi^{T, \nu}$ is bounded, so in particular it is bounded on $M_1(\mathbb{A}_\infty)$. Therefore,
\begin{align}
A(1,s_0) = \int \limits_{\text{GL}_2(\mathbb{A}_\infty)} & |\det(a)|_\infty^{s_0-1+n} \, \chi_T(\det(a)) \, q(a) \, \varphi_\infty^\circ(a) \, \phi^{T, \nu}(m(a)) \, da. \label{integralA}
\end{align}
For Re$(s_0-1+n) \gg 0$ the integral converges absolutely. Indeed, $\varphi_\infty^\circ$ decays exponentially as the entries of $a$ become large, while the rest of the integrand has polynomial growth at infinity. As the entries of $a$ become small, so does $|\det(a)|_\infty^{s_0-1+n}$. The other terms in the integrand are bounded.
By assumption on $\phi$, there is some $g \in G_{1}(\mathbb{A}_\infty)$ so that $\phi^{T,\nu}(g) \neq 0$. By the Iwasawa decomposition, write $g_\infty=na^\prime k$, where $n \in N(\mathbb{A}_\infty)$, $a^\prime \in M_1(\mathbb{A}_\infty)$, and $k \in K_{1, \infty}$. Since $K_{1, \infty}$ acts on the space of $\pi$, we may replace $\phi$ with $\pi(k) \phi$, because the action of $\pi$ is compatible with taking Bessel coefficients; so assume $k=1$. Since $\phi^{T,\nu}(n a^\prime)=\psi_T(n) \cdot \phi^{T,\nu}(a^\prime)$ and $\psi_T(n) \neq 0$, it follows that $\phi^{T,\nu}(a^\prime)\neq 0$.
Since polynomials are dense in $L^2(\mathbb{X}(\mathbb{A}_\infty))$, there is some choice of polynomial $q$ so that $A(1,s_0) \neq 0$.
Therefore, $A(1,s)$ is a nonzero holomorphic function in a neighborhood of $s_0$, and $A(k,s)$ is a $K$-finite function of $k$ on $(M_1(\mathbb{A}_\infty) \cap K_\infty) \backslash K_\infty$.
There is a bijection between $\bigotimes \limits_{v|\infty} \mathrm{Ind}_{P(F_v)}^{G(F_v)}$ and the $K_\infty$-finite functions in $L^2((M_1(\mathbb{A}_\infty) \cap K_\infty) \backslash K_\infty)$, given by restricting sections to $K_\infty$.
Therefore, there is a choice of $K$-finite standard section $f(k)$ so that
\begin{equation*}
\int \limits_{K_\infty} f(k,s_0) \, A(k,s_0) \, dk \neq 0. \qedhere
\end{equation*}
\end{proof}
\section{Proof of Theorem 1}
This section summarizes the results of previous sections to prove Theorem \ref{maintheorem} which is restated here.
\begin{restatement}
Let $\pi$ be a cuspidal automorphic representation of GSp$_4(\mathbb{A})$, and let $\phi=\otimes_v \phi_v \in V_\pi$ be a decomposable automorphic form. Let $T$ and $\nu$ be such that $\phi^{T,\nu}\neq0$. There exists a choice of section $f(s, -) \in \text{Ind}_{P(\mathbb{A})}^{G(\mathbb{A})}( \delta_P ^{1/3(s-1/2)})$ and some $\varphi=\otimes_v \varphi_v \in \mathcal{S}( \mathbb{X}(\mathbb{A}))$ such that the normalized integral satisfies
\begin{equation*}
I^*(s;f, \phi, T, \nu, \varphi)= d(s) \cdot L^{S}(s, \pi \otimes \chi_T)
\end{equation*}
where $S$ is a finite set of bad places including all the archimedean places. Furthermore, for any complex number $s_0$, the data may be chosen so that $d(s)$ is holomorphic at $s_0$, and $d(s_0) \neq 0$.
\end{restatement}
\begin{proof}
There is a finite set of places $S$ including all the archimedean places, such that for $v \notin S$, the conditions in Definition \ref{unramifieddef} are satisfied. Consider the normalized Eisenstein series $E^*(s,f,g)=\zeta^S(s+1) \, \zeta^S (2s) E(s,f,g)$ that was described in \eqref{normalizing}. Define $I^*(s;f, \phi, T, \nu, \varphi)$ to be the global integral defined in \eqref{10} except that $E(s,f,g)$ is replaced by $E^*(s,f,g)$.
By Proposition \ref{eulerproduct} the integral factors as
\begin{equation*}
I^*(s;f, \phi, T, \nu, \varphi)=I_\infty(s) \times \prod \limits_{\substack{v < \infty\\ v \in S}} I_v(s) \times \prod \limits_{ v \notin S} I^*_v(s).
\end{equation*}
Here, $I^*_v(s)= \zeta_v (s+1) \zeta_v (2s) I_v(s)$.
According to Proposition \ref{inertprop1} and Proposition \ref{splitprop1} for $v \notin S$,
\begin{equation*}
I^*_v(s)=L(s, \pi_v \otimes \chi_{T,v}).
\end{equation*}
By Proposition \ref{finiteram} for every finite place $v \in S$, there is a choice of local section $f_v(s, -)$ and local Schwartz-Bruhat function $\varphi_v$ so that
\begin{equation*}
I_v(s) \equiv 1.
\end{equation*}
By Proposition \ref{archprop} there is a choice of data at the infinite places, $f_\infty(s, -)$, and $\varphi_\infty$, so that $I_\infty(s)$ is holomorphic at $s_0$, and $I_\infty(s_0) \neq 0$.
Choose $f(s, -)=\otimes_v f_v(s, -)$ so that $f_v(s,-)$ is the choice specified above for $v \in S$, and the local spherical section otherwise. Similarly, choose $\varphi = \otimes_v \varphi_v$ so that $\varphi_v$ is the choice specified for $v \in S$, and the local spherical Schwartz-Bruhat function otherwise. This completes the proof of the theorem.
\end{proof}
\end{document}
\begin{document}
\renewcommand{\sectionmark}[1]{\markboth{#1}{}}
\def\parshape=1.75true in 5true in{\parshape=1.75true in 5true in}
\def\mathbb C{\mathbb C}
\def\mathbb Z{\mathbb Z}
\def\mathbb N{\mathbb N}
\def\mathbb R^{\mathbb R^}
\def\mathbb P^{\mathbb P^}
\def\mathcal A{\mathcal A}
\def\mathfrak{S}{\mathfrak{S}}
\def\mathrm{sgn}{\mathrm{sgn}}
\def\mathrm{dim}{\mathrm{dim}}
\def\mathrm{codim}{\mathrm{codim}}
\def\mathrm{rk}{\mathrm{rk}}
\def\underline{v}{\underline{v}}
\defX^{(n,m)}_{(1,d)}{X^{(n,m)}_{(1,d)}}
\def\Def#1{\noindent {\bf Definition #1:}}
\def\noindent {\bf Notation: }{\noindent {\bf Notation: }}
\def\Prop#1{\noindent {\bf Proposition #1:}}
\def\noindent {\it Proof: }{\noindent {\it Proof: }}
\def\noindent{\bf Remark: }{\noindent{\bf Remark: }}
\def\noindent{\bf Example: }{\noindent{\bf Example: }}
\newenvironment{dem}{\begin{proof}[Proof]}{\end{proof}}
\newtheorem{theorem}{Theorem}[section]
\newtheorem{lemma}[theorem]{Lemma}
\newtheorem{propos}[theorem]{Proposition}
\newtheorem{corol}[theorem]{Corollary}
\newtheorem{defi}[theorem]{Definition}
\newtheorem{conj}[theorem]{Conjecture}
\newtheorem{rem}[theorem]{Remark}
\title{Higher secant varieties of $\mathbb P^ n \times\mathbb P^ m$ embedded in bi-degree $(1,d)$}
\author{Alessandra Bernardi\footnote{CIRM--FBK c/o Universit\`a degli Studi di Trento, via Sommarive 14,
38050 Povo (Trento), Italy. E-mail address: {\tt{[email protected]}}}, Enrico Carlini\footnote{Dipartimento di Matematica
Politecnico di Torino, Corso Duca Degli Abruzzi 24, 10129 Torino, Italy. E-mail address: {\tt [email protected]}}, Maria Virginia Catalisano\footnote{DIPTEM - Dipartimento di Ingegneria della Produzione,
Termoenergetica e Modelli Matematici, Piazzale Kennedy, pad. D, 16129 Genoa, Italy.
E-mail address: {\tt [email protected]}}}
\date{}
\maketitle
\begin{abstract}
Let $X^{(n,m)}_{(1,d)}$ denote the Segre\/-Veronese embedding of
$\mathbb P^ n \times \mathbb P^ m$ via the sections of the sheaf
$\mathcal{O}(1,d)$. We study the dimensions of higher secant
varieties of $X^{(n,m)}_{(1,d)}$ and we prove that there is no
defective $s^{th}$ secant variety, except
possibly for $n$ values of $s$. Moreover, when ${m+d \choose d}$ is a multiple of $m+n+1$,
the $s^{th}$ secant variety of $X^{(n,m)}_{(1,d)}$
has the expected dimension for every $s$.
\end{abstract}
\section*{Introduction}
The $s^{th}$ higher secant variety of a projective variety $X\subset \mathbb P^
N$ is defined to be the Zariski closure of the union of the spans
of $s$ points of $X$ (see Definition \ref{secant}); we will denote
it by $\sigma_{s}(X)$.
Secant varieties have been intensively studied (see for example
\cite{AH}, \cite{BCS}, \cite{Gr}, \cite{Li}, \cite{St},
\cite{Za}). One of the first problems of interest is the
computation of their dimensions. In fact, there is an expected
dimension for $\sigma_{s}(X)\subset \mathbb P^ N$, that is, the minimum
between $N$ and $s(\mathrm{dim} X)+s-1$. There are well known
examples where that dimension is not attained, for instance, the variety of
secant lines to the Veronese surface in $\mathbb{P} ^5$. A
variety $X$ is said to be $(s-1)$-defective if the dimension of
$\sigma_{s}(X)$ is less than the expected value. We note
that only for Veronese varieties is a complete list of all
defective cases known. This description is obtained using a
result by J. Alexander and A. Hirschowitz \cite{AH}, recently
given a simpler proof in \cite{BO}.
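To make the notion of defectivity concrete, consider the classical example just mentioned (a standard computation, which we spell out here only for the reader's convenience). Let $X\subset \mathbb P^ 5$ be the Veronese surface. The expected dimension of $\sigma_{2}(X)$ is
$$\min \{5, 2(\mathrm{dim} X +1)-1\}=\min\{5,5\}=5,$$
but in fact $\mathrm{dim} \sigma_{2}(X)=4$: points of $X$ correspond to rank $1$ symmetric $3\times 3$ matrices, so $\sigma_{2}(X)$ is contained in the determinantal hypersurface of matrices of rank at most $2$, which has dimension $4$. Hence the Veronese surface is $1$-defective.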
The interest around these varieties has recently been revived by
many different areas of mathematics and its applications, when $X$ is a
variety parameterizing certain kinds of tensors: for example,
Electrical Engineering (Antenna Array Processing \cite{ACCF},
\cite{DM}; Telecommunications \cite{Ch}, \cite{dLC}),
Statistics (cumulant tensors, see \cite{McC}), and Data Analysis
(Independent Component Analysis \cite{Co1}, \cite{JS}); for other
applications see also \cite{Co2}, \cite{CR}, \cite{dLMV},
\cite{SBG}, \cite{GVL}.
One of the main examples is the one of Segre varieties. Segre
varieties parameterize completely decomposable tensors (i.e.
projective classes of tensors in $\mathbb P^ {}(V_1\otimes \cdots \otimes
V_t)$ that can be written as $v_1\otimes \cdots \otimes v_t$ ,
with $v_i\in V_i$ and $V_i$ vector spaces for $i=1, \ldots , t$).
The $s^{th}$ higher secant variety of a Segre variety is
therefore the closure of the set of tensors that can be written
as a linear combination of $s$ completely decomposable tensors.
Segre-Veronese varieties can be described either as the embedding
of $\mathbb P^ {n_{1}} \times \cdots \times \mathbb P^ {n_{t}}$ by the
sections of the sheaf ${\cal O}(d_{1}, \ldots ,d_{t})$ into $\mathbb P^
{N}$, for certain $d_{1}, \ldots , d_{t}\in \mathbb N$, with
$N=\Pi_{i=1}^{t}{n_{i}+d_{i} \choose d_{i}}-1$, or as a section of
a Segre variety. Consider the Segre variety that naturally lives
in $\mathbb P^ {}(V_1^{\otimes d_1}\otimes \cdots \otimes V_t^{\otimes
d_t})$ with $V_i$ vector spaces of dimension $n_i+1$ for $i=1,
\ldots , t$; then the Segre-Veronese variety is obtained by
intersecting that Segre variety with the projective subspace
$\mathbb P^ {}(S^{d_1}V_1\otimes \cdots \otimes S^{d_t}V_t)$ of
projective classes of partially symmetric tensors
(where $S^{d_i}V_i \subset V_i^{\otimes d_i}$ is the subspace of completely symmetric tensors of $V_i^{\otimes d_i}$).
These two
different ways of describing Segre-Veronese varieties allow us to
translate problems about partially symmetric tensors into
problems on forms of multi-degree $(d_{1}, \ldots , d_{t})$ and
vice versa. We will follow the description of the Segre-Veronese
variety as the variety parameterizing forms of a given
multi-degree.
In this paper we will describe the $s^{th}$ higher secant varieties of
the embedding of $\mathbb P^ n \times \mathbb P^ m$ into $\mathbb P^ N$ ($N=(n+1){m+d \choose
d}-1$ ), by the sections of the
sheaf $\mathcal{O}(1,d)$, for almost all $s \in \mathbb N$ (see Theorem \ref{t2}).
The higher secant varieties of the Segre embedding of $\mathbb P^ n
\times \mathbb P^ m$ are well known as they parameterize matrices of
bounded rank (e.g., see \cite{Hr}).
One of the first instances of the study of two-factor
Segre-Veronese varieties is that of $\mathbb P^ 1 \times \mathbb P^ 2$
embedded in bi-degree $(1,3)$, which appears in a paper by London
\cite{London}; for a more recent
approach see \cite{DF} and \cite{CaCh}. A first generalization,
for $\mathbb P^ 1 \times \mathbb P^ 2$ embedded in bi-degree $(1,d)$, is treated
in \cite{DF}. The general case of $\mathbb P^ 1 \times \mathbb P^ 2$ embedded
in any bi-degree $(d_{1}, d_{2})$ is done in \cite{BD}. In
\cite{ChCi} the case of $\mathbb P^ 1 \times \mathbb P^ n$ embedded in bi-degree $
(d,1)$ is treated.
In \cite{CaCh} one can find the defective cases $\mathbb P^ 2 \times \mathbb P^
3$ embedded in bi-degree $(1,2)$, $\mathbb P^ 3 \times \mathbb P^ 4$ embedded in
bi-degree $(1,2)$ and $\mathbb P^ 2 \times \mathbb P^ 5$ embedded in bi-degree
$(1,2)$.
The paper \cite{CGG} also studies the cases $\mathbb P^ n \times \mathbb P^ m$
with bi-degree $ (n+1,1)$; $\mathbb P^ 1 \times \mathbb P^ 1$ with bi-degree $
(d_{1}, d_{2})$ and $\mathbb P^ 2 \times \mathbb P^ 2 $ with bi-degree $(2,2)$.
In \cite{Ab} the cases $\mathbb P^ 1 \times \mathbb P^ m $ in bi-degree
$(2d+1,2)$, $\mathbb P^ 1 \times \mathbb P^ m$ in bi-degree $ (2d,2)$, and $\mathbb P^
1 \times \mathbb P^ m $ in bi-degree $ (d,3)$ can be found. A recent
result on $\mathbb P^ n \times \mathbb P^ m$ in bi-degree $ (1,2)$ is in
\cite{AB}, where the authors prove the existence of two functions
$\underline{s}(n,m)$ and $\overline{s}(n,m)$ such that
$\sigma_s(X^{(n,m)}_{(1,2)})$ has the expected dimension for
$s\leq \underline{s}(n,m)$ and for $s\geq \overline{s}(n,m)$.
In the same paper it is also shown that $X^{(1,m)}_{(1,2)}$ is never
defective and all the defective cases for $X^{(2,m)}_{(1,2)}$ are
described.
The varieties $\mathbb P^ n \times \mathbb P^ m$ embedded in bi-degree $(1,d)$
are related to the study of Grassmann defectivity (\cite{DF}).
More precisely, one can consider the Veronese variety $X$ obtained by embedding $\mathbb P^ m$ in
$\mathbb P^ N$ using the $d$-uple Veronese
embedding ($N= {m+d \choose d}-1$). Then consider, in $\mathbb G(n,N)$, the
{\it $(n, s-1)$-Grassmann secant variety} of $X$, that is, the closure of the
set of $n$-dimensional linear spaces contained in the linear span of $s$ linearly
independent points of $X$. The variety $X$ is said to be {\it $(n,s-1)$-Grassmann
defective} if the $(n, s-1)$-Grassmann secant variety of $X$ does not have the expected dimension. It is
shown in \cite{DF}, following Terracini's ideas in \cite{Te1},
that $X$ is $(n,s-1)$-Grassmann defective if and only if
the $s^{th}$ higher secant varieties of
the embedding of $\mathbb P^ n \times \mathbb P^ m$ into $\mathbb P^ N$ ($N=(n+1){m+d
\choose d}-1$ ), by the sections of the sheaf $\mathcal{O}(1,d)$,
is $(s-1)$-defective. Hence, the result
proved in this paper gives information about the Grassmann
defectivity of Veronese varieties (see Remark \ref{grassref}).
The main result of this paper is Theorem \ref{t1} where we prove
the regularity of the Hilbert function of a subscheme of $\mathbb P^
{n+m}$ made of a $d$-uple $\mathbb P^ {n-1}$, $t$ projective subspaces of
dimension $n$ containing it, a simple $\mathbb P^ {m-1}$, and a number of
double points that is an integer multiple of $n+1$. This theorem,
together with Theorem 1.1 in \cite{CGG} (see Theorem \ref
{metodoaffineproiettivo} in this paper), gives immediately the
regularity of the higher secant varieties of the Segre-Veronese
variety that we are looking for.
More precisely, we consider (see Section \ref{results}) the case
of $\mathbb P^ n \times \mathbb P^ m$ embedded in bi-degree $(1,d)$ for $d\geq
3$. We prove (see Theorem \ref{t2}) that the $s^{th}$ higher
secant variety of such a Segre-Veronese variety has the expected
dimension for $s \leq s_1$ and for $s \geq s_2$, where
$$s_1= \max
\left \{
s \in \mathbb N \ | \ s \ is \ a \ multiple\ of\
(n+1) \ and \ s \leq
\left\lfloor \frac{(n+1){m+d\choose d}}{m+n+1} \right\rfloor
\right
\},
$$
$$
s_2= \min
\left \{
s \in \mathbb N \ | \ s \ is \ a \ multiple\ of\
(n+1) \ and \ s \geq
\left\lceil \frac{(n+1){m+d\choose d}}{m+n+1} \right\rceil
\right
\}.
$$
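As a small illustration of these bounds (the arithmetic here is ours, not a case taken from the literature cited above), take $n=m=1$ and $d=3$. Then $(n+1){m+d\choose d}=2{4\choose 3}=8$ and $m+n+1=3$, so
$$\left\lfloor \frac{8}{3} \right\rfloor =2, \qquad \left\lceil \frac{8}{3} \right\rceil =3,$$
and the multiples of $n+1=2$ give $s_1=2$ and $s_2=4$. Hence Theorem \ref{t2} guarantees the expected dimension of $\sigma_{s}(X^{(1,1)}_{(1,3)})$ for $s\leq 2$ and $s\geq 4$, leaving only the single value $s=3$ (that is, $n=1$ value of $s$) where defectivity is not excluded by our result.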
\section{Preliminaries and Notation}\label{prelim}
We will always work with projective spaces defined over an
algebraically closed field $K$ of characteristic $0$. Let us
recall the notion of higher secant varieties and some
classical results which we will often use.
For definitions and proofs we refer the reader to \cite{CGG}.
\begin{defi}\label{secant} {\rm Let $X\subset \mathbb P^ N$ be a projective variety.
We define the
$s^{th}$ {\it higher secant variety} of $X$, denoted by $\sigma_{s}(X)$, as the Zariski closure of the union of all linear spaces spanned by $s$ points of $X$, i.e.:
$$\sigma_{s}(X):= \overline{ \bigcup_{P_{1}, \ldots , P_{s}\in X} \langle P_{1}, \ldots , P_{s} \rangle}\subset \mathbb P^ N.$$
When $\sigma_s(X)$ does not have the expected dimension, that is
$\min\{N, s(\mathrm{dim} X+1)-1\},$
$X$ is
said to be $(s-1)$-{\it defective}, and the positive integer
$$
\delta _{s-1}(X) = {\rm min} \{N, s(\mathrm{dim} X+1)-1\}-\mathrm{dim} \sigma_s(X)
$$
is called the $(s-1)${\it-defect} of $X$. }\end{defi} The basic
tool to compute the dimension of $\sigma_{s}(X)$ is Terracini's
Lemma (\cite{Te}):
\begin{lemma}
[{\bf Terracini's Lemma}]
Let $X$ be an irreducible
variety in $\mathbb P^ N$, and let $P_1,\ldots ,P_s$ be $s$ generic
points on $X$. Then, the tangent space to $\sigma_{s}(X)$ at a
generic point in $ \langle P_1,\ldots ,P_s \rangle$ is the linear span in
$\mathbb P^ N$ of the tangent spaces $T_{X, P_i}$ to $X$ at $P_i$,
$i=1,\ldots ,s$, hence
$$ \mathrm{dim} \sigma_{s}(X) = \mathrm{dim} \langle T_{X,P_1},\ldots ,T_{X,P_s}\rangle.$$
\end{lemma}
A consequence of Terracini's Lemma is the following corollary (see
\cite[Section 1]{CGG} or \cite [Section 2]{AB} for a proof of
it).
\begin {corol}\label{corTer}
Let $X^{(n,m)}_{(1,d)} \subset \mathbb P^N$ be the {\it
Segre-Veronese variety} image of the embedding of $\mathbb P^n
\times \mathbb P^m$ by the sections of the sheaf ${\cal O}(1, d)$
into $\mathbb P^N$, with $N=(n+1) {m+d \choose d}-1$. Then
$$
\mathrm{dim} \sigma_s \left ( X^{(n,m)}_{(1,d)} \right ) = N - \mathrm{dim} (I_Z)_{(1,d)} =
H (Z,(1,d)) -1 ,
$$
where $Z \subset \mathbb P^n \times \mathbb P^m$ is a set of $s$
generic double points, $I_Z $ is the multihomogeneous ideal of $Z$
in $R = K [x_0, \dots, x_n,y_0, \dots, y_m]$, the multigraded
coordinate ring of $ \mathbb P^n\times \mathbb P^m$, and $ H
(Z,(1,d)) $ is the multigraded Hilbert function of $Z$.
\end{corol}
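As a sanity check of the corollary in the simplest case (added here for illustration only), let $n=m=1$. Then $N=2(d+1)-1=2d+1$ and $\mathrm{dim} R_{(1,d)}=2(d+1)$. Since a generic double point of $\mathbb P^1 \times \mathbb P^1$ imposes at most $n+m+1=3$ conditions on the forms of bidegree $(1,d)$, we get $\mathrm{dim} (I_Z)_{(1,d)} \geq \max\{2(d+1)-3s,\ 0\}$, and hence
$$\mathrm{dim} \sigma_s \left ( X^{(1,1)}_{(1,d)} \right ) \leq \min \{2d+1,\ 3s-1\},$$
with equality exactly when the double points impose independent conditions; the right-hand side is the expected dimension of Definition \ref{secant}.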
Now we recall the fundamental tool which allows us to convert
certain questions about ideals of varieties in multiprojective
space to questions about ideals in standard polynomial rings (for
a more general statement see \cite[Theorem 1.1]{CGG}).
\begin{theorem}\label{metodoaffineproiettivo}
Let $X^{(n,m)}_{(1,d)} \subset \mathbb P^N$ and $Z \subset
\mathbb P^n \times \mathbb P^m$ be as in Corollary \ref{corTer}. Let
$H_{1}, H_{2}\subset \mathbb P^ {n+m}$ be generic projective linear
spaces of dimensions $n-1$ and $m-1$, respectively, and let
$P_{1}, \dots ,P_{s} \in \mathbb P^{n+m}$ be
generic points. Denote by
$$dH_1+H_2+2P_{1}+ \cdots + 2P_s \subset \mathbb P^{n+m}$$
the scheme defined
by the ideal sheaf
${\mathcal I}^{d}_{H_1}\cap {\mathcal I}_{H_2} \cap {\mathcal I}^{2}_{P_1}\cap \dots \cap
{\mathcal I}^{2}_{P_s}\ \subset {\mathcal O}_{\mathbb P^{n+m } }$.
Then
$$
\mathrm{dim} (I_Z)_{(1,d)} = \mathrm{dim} (I_{dH_1+H_2+2P_1+ \cdots + 2P_s})_{d+1}
$$
hence
$$
\mathrm{dim} \sigma_s \left ( X^{(n,m)}_{(1,d)} \right ) = N - \mathrm{dim} (I_{dH_1+H_2+2P_1+ \cdots + 2P_s})_{d+1} .
$$
\end{theorem}
Since we will make use of Castelnuovo's inequality several times,
we recall it here (for notation and proof we refer to \cite{AH2},
Section 2).
\begin{lemma} [{\bf Castelnuovo's inequality}] \label{castelnuovo}
Let $H \subset \mathbb P^N$ be a hyperplane, and let $X \subset \mathbb P
^N$ be a scheme. We denote by $Res_H X$
the scheme defined by the ideal $(I_X:I_H)$ and we call it the
{\it residual scheme} of $X$ with respect to $H$, while the scheme
$Tr_H X \subset H$ is the schematic intersection $X\cap H$, called
the {\it trace} of $X$ on $H$.
Then
$$
\mathrm{dim} (I_{X, \mathbb P^N})_t \leq \mathrm{dim} (I_{ Res_H X, \mathbb P^N})_{t-1}+
\mathrm{dim} (I_{Tr _{H} X, H})_t.
$$
\end{lemma}
\section{Segre-Veronese embeddings of $\mathbb P^n \times \mathbb P^m$ }\label{results}
Now that we have introduced all the necessary tools that we need for the main theorem of this paper we can state and prove it.
\begin{theorem}\label{t1}
Let $d\geq 3$, $n,m \geq 1$ and let $s=(n+1)q$ be an integer
multiple of $n+1$. Let $ P_1, \dots ,P_s \in \mathbb P^ {n+m}$ be
generic points and $H_1\simeq \mathbb P^{n-1}, H_2\simeq \mathbb
P^{m-1}$ be generic linear spaces in $\mathbb P^ {n+m}$.
Let $W_{1}, \ldots , W_{t}\subset \mathbb P^ {n+m}$ be $t$ generic linear spaces of dimension $n$ containing $H_{1}$.
Now consider the scheme
\begin{equation}\label{X}
\mathbb{X}:=dH_{1}+H_{2}+2P_{1}+ \cdots + 2P_{s}+W_{1}+ \cdots + W_{t}
\end{equation}
Then for any $q, t\in \mathbb{N}$ the dimension of the degree
$d+1$ piece of the ideal $I_{\mathbb{X}}$ is the expected one,
that is
$$\mathrm{dim}(I_{\mathbb{X}})_{d+1}=\max \left \{ (n+1){m+d \choose d}-s(n+m+1)-t(n+1)\ ; 0\right \}. $$
\end{theorem}
\begin{proof}
We will prove the theorem by induction on $n$.
A hypersurface defined by a form of $(I_{dH_1})_{d+1}$ cuts on
$W_i \simeq \mathbb P^n$ a hypersurface which has $H_1$ as a
fixed component of multiplicity $d$.
It follows that
$$\mathrm{dim} (I_{dH_1 ,W_i} )_{d+1} = \mathrm{dim} (I_{\emptyset,W_i} )_1=n+1.
$$
Hence the expected number of conditions that a linear space $W_i$ imposes on the forms of
$(I_{\mathbb{X}})_{d+1}$ is at most $n+1$.
Moreover a double point imposes at most $n+m+1$ conditions.
Since, by Theorem \ref{metodoaffineproiettivo} with $Z = \emptyset $, we get
$$
\mathrm{dim} (I_{dH_1+H_2})_{d+1} =\mathrm{dim} R_{(1,d)} = (n+1){m+d \choose d},
$$
(where $R = K [x_0, \dots, x_n,y_0, \dots, y_m] $), we have
\begin{equation}\label{disug}
\mathrm{dim}(I_{\mathbb{X}})_{d+1} \geq (n+1){m+d \choose d}-s(n+m+1)-t(n+1).
\end{equation}
\\
Now let $H\subset \mathbb P^ {n+m}$ be a generic hyperplane containing
$H_{2}$ and let $\widetilde{ \mathbb X}$ be the scheme obtained
from $ \mathbb X$ by specializing the $nq$ points
$P_1,\dots,P_{nq}$ on $H$ (the points $P_{nq+1}, \dots , P_{s}$ remain
generic, not lying on $ H$).
Since by the semicontinuity of the Hilbert Function $ \mathrm{dim}(I_{
\widetilde { \mathbb X } })_{d+1} \geq
\mathrm{dim}(I_{ { \mathbb X } })_{d+1}$, by (\ref{disug}) we have
\begin{equation}\label{disug2}
\mathrm{dim}(I_{ \widetilde { \mathbb X } })_{d+1} \geq
(n+1){m+d \choose d}-s(n+m+1)-t(n+1).
\end{equation}
Let $V_i = \langle H_1, P_i \rangle \simeq \mathbb P^n $. Since the linear spaces $V_i $
are in the base locus of the hypersurfaces defined by the forms of
$(I_{ \widetilde { \mathbb X } })_{d+1}$, we have
\begin{equation}\label{spazifissi}
(I_{ \widetilde { \mathbb X } })_{d+1}=
(I_{ \widetilde { \mathbb X }+ V_1+\dots+V_{s} })_{d+1} .
\end{equation}
Consider the residual scheme of $ ( \widetilde { \mathbb X } + V_1+\dots+V_s ) $ with
respect to $H$:
$$
Res_{H} ( { \widetilde { \mathbb X } } + V_1+\dots+V_s )=dH_{1}+W_{1}+ \cdots + W_{t}+P_{1}+\cdots + P_{nq}+2P_{nq+1}+ \cdots + 2P_{s} + V_1+\dots+V_{s}
$$
$$=dH_{1}+W_{1}+ \cdots + W_{t}+2P_{nq+1}+ \cdots + 2P_{s} + V_1+\dots+V_{s}
\subset \mathbb P^{n+m} .
$$
Any form of degree $d$ in $I_{ Res _H { ( \widetilde { \mathbb X } } + V_1+\dots+V_s)} $
represents a cone whose vertex contains
$H_1$.
Hence if $ \mathbb Y \subset \mathbb P^m$ is the scheme obtained by projecting $ Res _H ({ \widetilde { \mathbb X } }+ V_1+\dots+V_s )$
from $H_1$ in a $\mathbb P^m$, we have:
\begin{equation} \label{res1}
\mathrm{dim}(I_{ Res _H { ( \widetilde { \mathbb X } } + V_1+\dots+V_s)})_d = \mathrm{dim}(I_{ \mathbb Y } )_d .
\end{equation}
Since the image by this projection of each $W_i$ is a point, and for $1 \leq i \leq nq$ the image of $P_i+V_i$ is a simple point, and
for $nq+1 \leq i \leq s$ the image of $2P_i+V_i$ is a double point,
we have that $ \mathbb Y$ is a scheme
consisting of $t+ nq $ generic points and $ q $ generic double points.
Now by the Alexander-Hirschowitz Theorem (see \cite{AH}), since
$d>2$ and $t+nq >1$ we have that the dimension of the degree $d$
part of the ideal of $q$ double points plus $t+ nq $ simple
points is always as expected. So we get
\begin{equation} \label{res2}
\mathrm{dim}(I_{ \mathbb Y } )_d =\max \left \{ {m+d\choose d} - q(m+1)-t-nq \ ;\ 0 \right
\}.
\end{equation}
Now let $n=1$. In this case we have: $s=2q$,
\begin{equation}\label{resxn=1}
\mathrm{dim}(I_{ Res _H { ( \widetilde { \mathbb X } } + V_1+\dots+V_s)})_d =
\mathrm{dim}(I_{ \mathbb Y } )_d =\max \left \{ {m+d\choose d} - q(m+1)-t-q \ ;\ 0 \right \},
\end{equation}
moreover $H_1$ is a point, $H_{1}\cap H$ is the empty set, the
$W_i$ and the $V_i$ are lines, and $V_i$ is the line $H_1P_i$.
Set $W'_{i}=W_{i}\cap H$, $V'_{i}=V_{i}\cap H$. Note that for $1\leq i \leq q$ we have $V'_{i}=P_i.$
The trace on $H$ of
$ { \widetilde { \mathbb X } +V_1+ \dots +V_{s} } $ is:
$$Tr_{H} ( { \widetilde { \mathbb X } } +V_1+ \dots +V_{s})
=H_2+2P_{1}+ \cdots + 2P_{q}+ W'_{1}+ \cdots + W'_{t}+V'_1+ \dots +V'_{2q} =
$$
$$=H_2+2P_{1}+ \cdots + 2P_{q}+ W'_{1}+ \cdots + W'_{t}+V'_{q+1}+ \dots +V'_{2q}
\subset H \simeq \mathbb P^ {m}
.$$
So $Tr_{H} ( { \widetilde { \mathbb X } } +V_1+ \dots +V_{s})\subset H$ is a scheme in $\mathbb P^ {m}$, the union of $H_2 \simeq \mathbb P^ {m-1}$, $q$ generic double points, and $t+q$ generic simple points.
As above, by \cite{AH}, since $d>2$ and $t+q \geq1$ we get
$$
\mathrm{dim}(I_{ Tr _H { ( \widetilde { \mathbb X } } + V_1+\dots+V_s)})_{d+1} =
\mathrm{dim}(I_{ 2P_{1}+ \cdots + 2P_{q}+ W'_{1}+ \cdots + W'_{t}+V'_{q+1}+ \dots +V'_{2q} })_{d}
$$
\begin{equation}\label{trxn=1}
=\max \left \{ {m+d\choose d} - q(m+1)-t-q \ ;\ 0 \right \}.
\end{equation}
By Castelnuovo's inequality (see Lemma \ref {castelnuovo}), by (\ref{resxn=1}) and (\ref{trxn=1})
we get
\begin{equation}\label{disug3}
\mathrm{dim}(I_{{ \widetilde { \mathbb X } } + V_1+\dots+V_s})_{d+1} \leq
\max \left \{ 2 {m+d\choose d} - 2q(m+1)-2t-2q \ ;\ 0 \right \},
\end{equation}
so by (\ref{disug2}), (\ref{spazifissi}) and (\ref{disug3}) we have
$$
\mathrm{dim}(I_{{ \widetilde { \mathbb X } } })_{d+1} =
\max \left \{ 2 {m+d\choose d} - 2q(m+2)-2t \ ;\ 0 \right \}.
$$
From here, by (\ref{disug}) and by the semicontinuity of the Hilbert Function we get
$$
\mathrm{dim}(I_{{ \mathbb X } })_{d+1} =
\max \left \{ 2 {m+d\choose d} - 2q(m+2)-2t \ ;\ 0 \right \}
$$
and the result is proved for $n=1$.
Let $n>1$.
Set: $H'_{1}=H_{1}\cap H$; $W'_{i}=W_{i}\cap H$; $V'_{i}=V_{i}\cap H$.
With this notation the trace of
$ { \widetilde { \mathbb X } +V_1+ \dots +V_s } $ on $H$ is:
$$Tr_{H} ( { \widetilde { \mathbb X } } +V_1+ \dots +V_s)
=dH'_{1}+H_2+2P_{1}+ \cdots + 2P_{nq}+ W'_{1}+ \cdots + W'_{t}+V'_1+ \dots +V'_s
\subset H \simeq \mathbb P^ {n+m-1}
.$$
Analogously to the above, observe that the linear spaces $V'_i = \langle H'_1, P_i \rangle \simeq \mathbb P^{n-1} $
are in the base locus for the hypersurfaces defined by the forms of
$(I_{ dH'_{1}+2P_{i} })_{d+1}$, hence the parts of degree $d+1$ of the ideals of
$Tr_{H} ( { \widetilde { \mathbb X } } +V_1+ \dots +V_s)$ and
of
$Tr_{H} ( { \widetilde { \mathbb X } } +V_{nq+1}+ \dots +V_s)$
are equal. So we have
$$
(I_{ Tr_{H} ( { \widetilde { \mathbb X } } +V_1+ \dots +V_s)})_{d+1}=
(I_ { Tr_{H} ( { \widetilde { \mathbb X } } +V_{nq+1}+ \dots +V_s)} )_{d+1}=
(I_{ \mathbb T })_{d+1} ,
$$
where
$$\mathbb T=
dH'_{1}+H_2+2P_{1}+ \cdots + 2P_{nq}+ W'_{1}+ \cdots + W'_{t}+V'_{nq+1}+ \dots +V'_s
\subset \mathbb P^ {n+m-1},
$$
that is, $\mathbb T$
is the union of the $ d$-uple linear space $H_1' \simeq \mathbb P^{n-2}$, the linear space
$H_2\simeq \mathbb P^{m-1}$, $t+q$ generic linear spaces through $H_1' $, and
$nq$ double points. Hence by the inductive hypothesis we have
\begin{equation}\label{tr1}
\mathrm{dim}(I_{\mathbb{T}})_{d+1}=\max \left \{ n{m+d \choose d}-nq(n+m)-(t+q)n\ ; 0\right \}.
\end{equation}
By (\ref{spazifissi}), by Lemma \ref{castelnuovo}, by (\ref{res1}), (\ref{res2}) and (\ref{tr1})
we get
$$
\mathrm{dim} (I_{ \widetilde { \mathbb X } })_{d+1}\leq
\max \left \{ {m+d\choose d} - q(m+1)-t-nq \ ;\ 0 \right \} +
\max \left \{ n{m+d \choose d}-nq(n+m)-(t+q)n\ ; 0\right \}
$$
$$=
\max \left \{ {m+d\choose d} - q(m+1)-t-nq \ ;\ 0 \right \} +
\max \left \{ n \left ( {m+d\choose d} - q(m+1)-t-nq \right )\ ; 0\right \}
$$
$$=
\max \left \{ (n+1) \left ( {m+d\choose d} - q(m+1)-t-nq \right )\ ; 0\right \}
$$
$$=
\max \left \{ (n+1){m+d \choose d}-s(n+m+1)-t(n+1)\ ; 0\right \}
.$$
Now the conclusion follows from (\ref{disug}) and the semicontinuity of the Hilbert Function
and this ends the proof.
\end{proof}
\begin{corol}\label{c1}
Let $d\geq 3$, $n,m \geq 1$ and let
$$s_1:= \max
\left \{
s \in \mathbb N \ | \ s \ is \ a \ multiple\ of\
(n+1) \ and \ s \leq
\left\lfloor \frac{(n+1){m+d\choose d}}{m+n+1} \right\rfloor
\right
\}
$$
$$
s_2:= \min
\left \{
s \in \mathbb N \ | \ s \ is \ a \ multiple\ of\
(n+1) \ and \ s \geq
\left\lceil \frac{(n+1){m+d\choose d}}{m+n+1} \right\rceil
\right
\}.
$$
Let $ P_1, \dots ,P_{s} \in \mathbb P^ {n+m}$ be generic points and $H_1\simeq \mathbb P^{n-1}, H_2\simeq \mathbb P^{m-1}$ be generic linear spaces in $\mathbb P^ {n+m}$.
Consider the scheme
$$
\mathbb{X}:=dH_{1}+H_{2}+2P_{1}+ \cdots + 2P_{s}.
$$
Then for any $s \leq s_1$ and any $s \geq s_2$ the dimension of
$(I_{\mathbb{X}})_{d+1}$ is the expected one, that is
$$\mathrm{dim}(I_{\mathbb{X}})_{d+1}=
\left \{
\begin{matrix} (n+1){m+d \choose d}-s(n+m+1) &\ for \ s \leq s_1 \\
\\
0 & for \ s \geq s_2 \\
\end{matrix}
\right.
$$
\end{corol}
\begin{proof}
By applying Theorem \ref{t1}, with $t=0$, to the scheme
$
\mathbb{X}=dH_{1}+H_{2}+2P_{1}+ \cdots + 2P_{s}
$, we get that the dimension of
$(I_{\mathbb{X}})_{d+1}$ is the expected one for $s=(n+1)q$ and for
any $q\in \mathbb{N}$. Hence, if $s_1$ is the biggest integer
multiple of $n+1$ such that $\mathrm{dim}(I_{\mathbb{X}})_{d+1}\neq 0$, we
get that for that value of $s$ the Hilbert function
$H(I_{\mathbb{X}},d+1)$ has the expected value. Now, if for such
$s_1$ the space $(I_{\mathbb{X}})_{d+1}$ has the expected
dimension, then it has the expected dimension also for every $s\leq
s_1$.
Now, if $s_2$ is the smallest integer multiple of $n+1$ such that $\mathrm{dim}(I_{\mathbb{X}})_{d+1} = 0$ then obviously such a dimension will be zero for all $s\geq s_2$.
\end{proof}
\begin{theorem}\label{t2}
Let $d\geq 3$, $n,m \geq 1$, $N = (n+1){m+d \choose d}-1$ and let $s_1, s_2$ be as in Corollary \ref{c1}.
Then the variety $\sigma_{s} \left ( X^{(n,m)}_{(1,d)}\right ) \subset \mathbb P^N$
has the expected dimension for any
$s \leq s_1$ and any $s \geq s_2$,
that is
$$
\mathrm{dim}
\sigma_{s} \left ( X^{(n,m)}_{(1,d)}
\right )=
\left \{
\begin{matrix}
s(n+m+1) -1 &\ for \ s \leq s_1 \\
\\
N & for \ s \geq s_2 \\
\end{matrix}
\right. .
$$
\end{theorem}
\begin{proof}
Let $H_{1}, H_{2}\subset \mathbb P^ {m+n}$ be projective subspaces of dimensions $n-1$ and $m-1$ respectively and let $P_{1}, \ldots , P_{s}\in \mathbb P^ {n+m}$ be $s$ generic points of $\mathbb P^ {n+m}$. Define $\mathbb{X}\subset \mathbb P^ {m+n}$ to be the scheme $\mathbb{X}:=dH_{1}+H_{2}+2P_{1}+ \cdots + 2P_{s}$. Theorem 1.1 in \cite{CGG} shows that $\mathrm{dim} \sigma_{s} \left ( X^{(n,m)}_{(1,d)}
\right )$ is the expected one if and only if $\mathrm{dim}(I_{\mathbb{X}})_{d+1}$ is the expected one.
Therefore the conclusion immediately follows from Theorem \ref{metodoaffineproiettivo} and Corollary \ref{c1}.
\end{proof}
\begin{rem}{\em If ${m+d \choose d}$ is a multiple of $m+n+1$, say
${m+d \choose d} = h(m+n+1)$, we get
$$
\left\lfloor \frac{(n+1){m+d\choose d}}{m+n+1} \right\rfloor =
\left\lceil \frac{(n+1){m+d\choose d}}{m+n+1} \right\rceil =
h(n+1)
$$
so $s_1 = s_2$. Hence in this case the variety
$\sigma_{s} \left ( X^{(n,m)}_{(1,d)}\right )$ has the expected dimension for any $s$.
If ${m+d \choose d}$ is not a multiple of $m+n+1$, it is easy to show that
$s_2- s_1 = n+1$, so that there are exactly $n$ integers strictly between $s_1$ and $s_2$. Thus there are at most $n$ values of $s$ for which
the $s^{th}$ higher secant varieties of $X^{(n,m)}_{(1,d)}$ can be defective.
}
\end{rem}
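For instance (a direct check of the first case of the remark, using the formulas of Corollary \ref{c1}), take $n=2$, $m=1$, $d=3$. Then ${m+d\choose d}={4\choose 3}=4=m+n+1$, so $h=1$ and $s_1=s_2=h(n+1)=3$. Hence $\sigma_{s} ( X^{(2,1)}_{(1,3)} )$ has the expected dimension for every $s$; indeed, for $s=3$ one finds $s(n+m+1)-1=3\cdot 4-1=11=N$, so the third secant variety already fills the ambient $\mathbb P^ {11}$.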
\begin{rem}\label{grassref}{\em Theorem \ref{t2} has a straightforward interpretation in terms of Grassmann defectivity. More precisely, we see that the $d$-uple Veronese embedding of $\mathbb P^ m$ is not $(n,s-1)$-Grassmann defective when $s\leq s_1$ or $s\geq s_2$.
}
\end{rem}
\end{document}
\begin{document}
\title{On linearity of separating multi-particle differential Schr\"odinger operators for identical particles}
\date{\today}
\author{George Svetlichny}
\email{[email protected]}
\affiliation{Departamento de Matem\'atica, \\ Pontif{\'{\i}}cia
Universidade Cat{\'o}lica, \\ Rio de Janeiro, RJ, Brazil}
\begin{abstract}
\baselineskip 0pt
We show that hierarchies of differential Schr\"odinger operators for identical particles which are separating for the usual (anti-)symmetric tensor product, are necessarily linear, and offer some speculations on the source of quantum linearity.
\end{abstract}
\maketitle
\section{Introduction}
One of the properties considered in speculations about possible fundamental non-linearities in quantum mechanics is {\em separation\/}, that is, product functions evolve as product functions. Separation
is considered a nonlinear version of the notion of non-interaction, as then uncorrelated states remain uncorrelated under time
evolution. We show here that if separation is combined with either Fermi or Bose statistics embodied in the usual (anti-)symmetrized tensor product states, and if all the multi-particle Schr\"odinger operators are differential, then they are necessarily linear.
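A familiar illustration of separation (given here only as an example; it is the logarithmic nonlinearity of \cite{BBMKos}, with conventions adjusted) is the equation
$$ i\hbar\,\partial_t \psi = -\frac{\hbar^2}{2m}\Delta \psi - b\,\bigl(\ln |\psi|^2\bigr)\psi . $$
For non-interacting subsystems a product $\psi(x_1,x_2,t)=\phi_1(x_1,t)\phi_2(x_2,t)$ remains a product under this evolution, since $\ln|\phi_1\phi_2|^2=\ln|\phi_1|^2+\ln|\phi_2|^2$ splits the nonlinear term additively between the two factors.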
The motivation for studying hierarchies of multi-particle non-linear Schr\"odinger equations comes from two sources: (1) intellectual speculation about possible non-linearities in quantum mechanics\cite{BBMKos}, and (2) examples arising in
representations of current algebras (diffeomorphism groups)\cite{HDDandGS}. We consider the second motivation compelling as current algebra representations were found to include many known linear quantum systems and to predict new ones, anyons in particular\cite{anyons}.
The non-linear theories considered still maintain that states are represented by rays in a Hilbert space, that evolution is given by a (non-linear) Schr\"odinger-type equation for the wave function, and that the modulus of the (normalized) wave-function gives the probability density of detection. Though these assumptions can all be questioned, an important class of theories do satisfy them.
A complete analysis of separating hierarchies of Schr\"odinger-type equations for non-identical particles was given in \cite{SVETLICHNY:GS}; however, as the world is made up of bosons and fermions, the identical particle case has to be addressed.
In \cite{nlrqm} we explored the possibility of formulating a
nonlinear relativistic theory based on a nonlinear version of the
consistent histories approach to quantum mechanics. A toy model led to a set
of equations among which there were instances of a weakened form of the
separation property for scalar bosons. This showed once more that such a
property is fundamental for understanding any nonlinear extension of ordinary
quantum mechanics.
In \cite{SK} we showed that separating second-order differential hierarchies for identical particles are necessarily linear under various simplifying assumptions. We here prove linearity under fewer assumptions and in a more transparent fashion.
The present result should not be taken as an argument against non-linear quantum mechanics. As such, it would be a much weaker physical argument than the causality violation objections already raised by various authors\cite{gpl,svetBG}. Though a degree of separability is necessary to be able to isolate and observe an independent physical system, it need not be exact. Another possibility is that in non-linear theories one could conceivably form multiparticle states from states of fewer particles in a way other than by the usual (anti-)symmetric tensor product. In fact by using the non-linear gauge transformations of Doebner, Goldin, and Nattermann\cite{doegolnat} one can deform a linear separating hierarchy of differential Schr\"odinger operators to a non-linear hierarchy of differential Schr\"odinger operators separating with respect to a deformed tensor product. Whether differential hierarchies that are not equivalent to linear ones and separating with respect to deformed tensor products exist, is still to be determined.
Lastly, our results are strictly non-relativistic.
Causal relativistic non-linear theories are seemingly hard to formulate, though they probably do exist\cite{nlrqm,bonae}. What separation implies in such a context is still to be explored. What the present result hints at is the origin of linearity about which we comment in the final section.
\section{Separation}
At time \(t\) an \(n\)-particle wave function \(\Psi\) depends on the positions \(x_1,\dots,x_n\) of each particle, where each \(x_i\in \mathbb R^d\), \(d\) being the dimension of space, and on \(A_1,\dots,A_n\) where each \(A_i\) is an index denoting the internal degrees of freedom of each particle. Initially we assume the \(n\) particles to always belong to different species and so no permutation symmetry property is assumed of the wave-function. We use the symbol \(s=(s_1,\dots,s_n)\) as labelling the species of the particle. For initial notational ease we shall combine the internal degrees of freedom index \(A_i\) with the position \(x_i\) into a single symbol \(\xi_i=(x_i,A_i)\) and denote the \(n\)-tuple of such by \(\xi\). Thus
we denote an \(n\)-particle wave function at time \(t\) by \(\Psi(\xi,t)\).
We assume that the evolution from time \(t_1\) to time \(t_2\) of the state corresponding to the ray with representative wave function \(\Psi(\xi,t_1)\) can be expressed by a not necessarily linear evolution operator \(E_s(t_2,t_1)\) applied to the wave-function,
that is:
\begin{displaymath}\Psi(\xi,t_2)=(E_s(t_2,t_1)\Psi)(\xi,t_1).
\end{displaymath}
The simple tensor product of an \(n\)- and an \(m\)-particle wave function is defined as
\begin{equation} \label{stp}
(\phi\otimes\psi)(\xi_1,\dots,\xi_n,\xi_{n+1},\dots,\xi_{n+m})=
\phi(\xi_1,\dots,\xi_n)\psi(\xi_{n+1},\dots,\xi_{n+m}).
\end{equation}
The separation property for the simple tensor product now reads:
\begin{equation}\label{eq:stpsep}
E_s(t_2,t_1)(\Psi_1\otimes\Psi_2)=
E_{s_1}(t_2,t_1)(\Psi_1) \otimes E_{s_2}(t_2,t_1)(\Psi_2),
\end{equation}
where the species index \(s\) of \(\Psi\) is the concatenation of the species indices \(s_i\) of the \(\Psi_i\). Strictly speaking, since states correspond to rays and not vectors, the right-hand side should be multiplied by a complex number
\(\gamma(t_2,t_1,s_1,s_2,\Psi_1,\Psi_2)\). To our knowledge, a full analysis of the possibility of such a factor has not been carried out. For the rest of this paper we shall assume that \(\gamma=1\), the general assumption in the literature.
Now, the world is made
of bosons and fermions and one should reconsider the separation property when one is dealing with a single species
of identical particles. The separation property (\ref{eq:stpsep})
must then be reformulated
with respect to the symmetric or anti-symmetric tensor product \(\phi\hat\otimes\psi\) which is the right-hand side of (\ref{stp}) symmetrized or anti-symmetrized according to either bose or fermi statistics:
\begin{equation}\label{eq:sitp}
(\phi\hat\otimes\psi)(\xi_1,\dots,\xi_n,\xi_{n+1},\dots,\xi_{n+m})
=\frac{n!m!}{(n+m)!}\sum_{{\cal I}}(-1)^{fp({\cal I})} \phi(\xi_{i_1},\dots,\xi_{i_n})\psi(\xi_{j_1},\dots,\xi_{j_m}),
\end{equation}
where \({\cal I}=(i_1,\dots,i_n)\) are \(n\)
numbers from \(\{1,\dots,n+m\}\), in ascending order, \((j_1,\dots,j_m)\) the
complementary numbers, also in ascending order, \(f\) is the {\em Fermi number\/} \(0\) for bosons and \(1\) for fermions, and \(p({\cal I})\) is the parity
(\(0\) for even and \(1\) for odd) of the permutation \((1,\dots,n+m)\mapsto (i_1,\dots,i_n,j_1,\dots,j_m)\). We have taken into account that both \(\phi\) and \(\psi\) are either symmetric or antisymmetric with respect to permutations of their arguments. The normalizing factor makes the product associative and the map \(\phi\otimes\psi\mapsto\phi\hat\otimes\psi\) into a projection. For the identical particle case, the species symbol \(s\) reduces just to the particle number \(n\).
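The combinatorics of (\ref{eq:sitp}) and the associativity guaranteed by the normalizing factor can be checked directly. The following sketch (an illustration we add here, with arbitrary polynomial "wave functions"; it is not part of the original derivation) antisymmetrizes products of one-particle functions numerically:

```python
from itertools import combinations
from math import factorial

def hat_otimes(phi, n, psi, m, fermi=1):
    """(Anti-)symmetrized tensor product of an n- and an m-particle
    function: sum over ascending index subsets I with sign (-1)^(f p(I)),
    normalized by n! m! / (n+m)!, as in eq. (3) of the text."""
    def chi(*xs):
        total = 0.0
        for I in combinations(range(n + m), n):
            J = tuple(k for k in range(n + m) if k not in I)
            perm = I + J
            # parity of the permutation (1,...,n+m) -> (I, J), via inversions
            inv = sum(1 for a in range(n + m) for b in range(a + 1, n + m)
                      if perm[a] > perm[b])
            sign = (-1) ** (fermi * inv)
            total += sign * phi(*(xs[i] for i in I)) * psi(*(xs[j] for j in J))
        return factorial(n) * factorial(m) / factorial(n + m) * total
    return chi

# three one-particle "wave functions"
a, b, c = (lambda x: x), (lambda x: x**2), (lambda x: x**3)

ab = hat_otimes(a, 1, b, 1)                        # fermionic two-particle product
assert abs(ab(2.0, 3.0) + ab(3.0, 2.0)) < 1e-12   # antisymmetry

# the normalizing factor makes the product associative
lhs = hat_otimes(ab, 2, c, 1)(1.0, 2.0, 3.0)
rhs = hat_otimes(a, 1, hat_otimes(b, 1, c, 1), 2)(1.0, 2.0, 3.0)
assert abs(lhs - rhs) < 1e-12
```

With this normalization \(\phi\hat\otimes\psi\) coincides with the full antisymmetrizer applied to \(\phi\otimes\psi\), which is why the nested products above agree.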
If we pass to the generators of the evolution operators
\begin{displaymath} H_s(t)=\left.\frac1i\frac{\partial}{\partial t_2}E_s(t_2,t_1)\right|_{t_2=t_1=t}
\end{displaymath}
then the separation property (\ref{eq:stpsep}) (under the assumption that \(\gamma=1\)) becomes:
\begin{equation}\label{eq:stender}
H_s(\Psi_1\otimes\Psi_2)
= H_{s_1}(\Psi_1)\otimes\Psi_2+\Psi_1\otimes
H_{s_2}(\Psi_2),
\end{equation}
where for notational simplicity we have suppressed indicating the \(t\) dependence of the \(H\)'s. This relation (which we called {\em tensor derivation\/}) was fully analyzed in \cite{SVETLICHNY:GS}. Canonical
decompositions and constructions were also presented.
An (anti-)symmetric tensor derivation would be a hierarchy of operators \(H_n\) that
satisfies (\ref{eq:stender}) with \(\hat\otimes\) instead of
\(\otimes\).
One does not have a classification of these as one has for ordinary tensor
derivations as given in \cite{SVETLICHNY:GS}. It seems that the condition to
be a tensor derivation in the \linebreak(anti-)symmetric case is rather stringent, and as we
shall now see, in the case of differential operators, implies linearity.
It now becomes convenient to disentangle the space-coordinate \(x\) and the internal degree of freedom index \(A\).
Our one-particle wave function will thus be denoted by \(\psi^A(x)\) with the index as a superscript for convenience. Multi-particle wave functions will carry multiple indices in the usual way. The possibly non-linear operators of the tensor derivation will be assumed to depend on the real and imaginary parts of the wave function in an independent fashion, though, to simplify notation, this is not denoted explicitly. Likewise, for notational ease, internal degree of freedom indices will be suppressed when no confusion can arise.
We shall use a multi-index notation for partial derivatives. Given a function \(u(x_1,\dots,x_n)\) and \(I=(i_1,\dots,i_n)\) an \(n\)-tuple of non-negative integers, we denote by \(|I|\) the sum \(i_1+\cdots+i_n\) and by \(u_I\) the partial derivative
\begin{displaymath}
\partial_I u=\frac{\partial^{|I|}u}{\partial x_1^{i_1}\cdots\partial x_n^{i_n}}.
\end{displaymath}
For the case of a function \(u(x,y)\) of two variables we write \(u_{I,J}\) for \(I\) differentiations with respect to \(x\), and \(J\) with respect to \(y\).
Let us consider possibly nonlinear differential
operators of any order (dependence on time can be construed as simply dependence on a
parameter). Such a two-particle operator has the form \(H(x,y,\phi^{AB}_{I,J}(x,y))\). Introducing variable names for
the arguments of \(H\), we write \(H(x,y,a^{AB}_{I,J})\).
When \(\phi\) is constrained to be an (anti-)symmetrized product
\begin{displaymath}\phi^{AB}(x,y) =
\frac12(\alpha^A(x)\beta^B(y)+(-1)^f\beta^A(x)\alpha^B(y)),
\end{displaymath}
then the arguments of $H$ are
constrained to take on values of the form:
\begin{equation}\label{svetlichny:ra}
a^{AB}_{I,J}=\frac12(\alpha^A_I\tilde\beta^B_J+(-1)^f\beta^A_I\tilde\alpha^B_J).
\end{equation}
Here quantities without the tilde are derivatives evaluated at \(x\) and those with, at \(y\).
The quantities on the right-hand sides: $\alpha^A_I, \beta^A_I,
\tilde\alpha^B_J, \tilde\beta^B_J$, which we
shall call the \(\alpha\beta\)-quantities, can be given, by Borel's lemma,
arbitrary complex values by an appropriate choice of the points \(x\) and
\(y\) and functions \(\alpha\) and \(\beta\). Denote the right-hand sides of
the above equations by \(\hat a^{AB}_{I,J}\).
The separability condition for the symmetrized
tensor product now reads:
\begin{equation}\label{svetlichny:2sep}
2H^{AB}_2(x,y, \hat a_{I,J}) =
H^A_1(x,\alpha_I)\tilde\beta^B_{0} +
\alpha^A_{0}H^B_1(y ,\tilde\beta_J)+
(-1)^fH^A_1(x, \beta_I)\tilde\alpha^B_{0}+
(-1)^f\beta^A_{0}H^B_1(y, \tilde\alpha_J).
\end{equation}
Now we come to the main point: in the space of the \(\alpha\beta\)-quantities there are flows that leave \(\hat a_{I,J}\) invariant, and so must leave the right-hand side of (\ref{svetlichny:2sep}) invariant. This leads to linearity.
\section{Proof of linearity}
One easily sees that the following transformations leave the \(\alpha\beta\)-quantities invariant:
\begin{eqnarray}\label{scale}\nonumber
\alpha_I^A\mapsto s\alpha_I^A,&& \tilde\beta_J^B\mapsto s^{-1} \tilde\beta_J^B;\\ \label{shift}
\alpha_I^A\mapsto \alpha_I^A+s\beta_I^A, &&\tilde\alpha_J^B\mapsto \tilde\alpha_J^B-s(-1)^f\tilde\beta_J^B;
\end{eqnarray}
and the same with \(\alpha\) and \(\beta\) interchanged. Symmetry (\ref{shift}) is enough to force linearity.
Note that \(s\) is a {\em complex\/} parameter, which means that the real and imaginary parts of the quantities undergo separate transformations. As a result, the right-hand side of (\ref{svetlichny:2sep}) has to be annihilated by
the vector field corresponding to (\ref{shift}):
\begin{equation}\label{abiflow}
\sum_{C,I}\left(\beta^C_I\pd{}{\alpha^C_I}-(-1)^f\tilde\beta^C_I\pd{}{\tilde\alpha^C_I}\right),
\end{equation}
where by \( \partial/\partial \alpha^C_I\) we mean the usual convention \( (1/2)\left(\partial/\partial \hbox{Re}\,\alpha^C_I-i\partial/\partial\hbox{Im}\alpha^C_I\right)\) and similarly for the other partial derivative.
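The invariance of the constrained arguments (\ref{svetlichny:ra}) under the flows (\ref{scale}) and (\ref{shift}) can be verified directly. The sketch below (our own illustration; the index set and random values are arbitrary) checks both flows for a complex parameter \(s\) in the Fermi case:

```python
import random
random.seed(0)

f = 1                      # Fermi case; set f = 0 for the Bose case
sgn = (-1) ** f
idx = range(4)             # a few sample (A, I) multi-indices

def rc():                  # random complex value
    return complex(random.uniform(-1, 1), random.uniform(-1, 1))

alpha  = [rc() for _ in idx]; beta  = [rc() for _ in idx]   # derivatives at x
talpha = [rc() for _ in idx]; tbeta = [rc() for _ in idx]   # derivatives at y

def a_hat(al, be, tal, tbe):
    # a_{I,J} = (1/2)(alpha_I tbeta_J + (-1)^f beta_I talpha_J)
    return [[0.5 * (al[i] * tbe[j] + sgn * be[i] * tal[j])
             for j in idx] for i in idx]

base = a_hat(alpha, beta, talpha, tbeta)
s = rc()                   # a *complex* flow parameter

# scale flow: alpha -> s alpha, tbeta -> s^{-1} tbeta
scaled = a_hat([s * v for v in alpha], beta, talpha, [v / s for v in tbeta])
# shift flow: alpha -> alpha + s beta, talpha -> talpha - s(-1)^f tbeta
shifted = a_hat([alpha[i] + s * beta[i] for i in idx], beta,
                [talpha[i] - s * sgn * tbeta[i] for i in idx], tbeta)

for new in (scaled, shifted):
    for i in idx:
        for j in idx:
            assert abs(new[i][j] - base[i][j]) < 1e-12
```

Since \(s\) is complex, the invariance holds for independent variations of its real and imaginary parts, which is what justifies annihilation by the Wirtinger-type vector field (\ref{abiflow}).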
Applying now (\ref{abiflow}) to the right-hand side of (\ref{svetlichny:2sep}), we get:
\begin{displaymath}
\left[\sum_{C,I}\beta_I^C\pd{H_1^A}{{\alpha^C_I}}(x,\alpha)-H_1^A(x,\beta)\right]\tilde \beta^B_0-\beta^A_0\left[\sum_{C,I}{{\tilde\beta}_I^C} \pd{H_1^B}{{{\tilde\alpha}^C_I}}(y,\tilde\alpha)-H_1^B(y,\tilde \beta)\right]=0.
\end{displaymath}
Now the \(\alpha\beta\)-quantities can be chosen arbitrarily and generically we have \(\beta_0^A\neq 0\) and \(\tilde\beta_0^B\neq 0\) for all \(A\) and \(B\) and so generically
\begin{displaymath}
\frac1{\beta^A_0}\left[\sum_{C,I}\beta_I^C
\pd{H_1^A}{{\alpha^C_I}}(x,\alpha)-H_1^A(x,\beta)\right]=
\frac1{\tilde \beta^B_0}\left[\sum_{C,I}{{\tilde\beta}_I^C} \pd{H_1^B}{{{\tilde\alpha}^C_I}}(y,\tilde\alpha)-H_1^B(y,\tilde \beta)\right].
\end{displaymath}
Since both sides depend on different sets of variables, each side is a constant \(k\) and we now have:
\begin{displaymath}
\sum_{C,I}\beta_I^C\pd{H_1^A}{{\alpha^C_I}}(x,\alpha)-H_1^A(x,\beta)=k\beta^A_0.
\end{displaymath}
Fixing \(\alpha\) this equation states that \(H_1(x,\beta)\) is a linear function of \(\beta\) with coefficients depending on \(x\). We have thus shown:
{\bf Lemma:} {\sl
In an (anti-)symmetric tensor derivation in which the one-particle and two-particle operators are differential, the one-particle operator is necessarily linear. }
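The mechanism behind the lemma can be seen in a toy computation (an illustration we add, with a hypothetical one-component, zeroth-order operator): for a linear \(H_1\) the directional derivative \(\sum\beta_I\,\partial H_1/\partial\alpha_I\) at \(\alpha\) reproduces \(H_1(\beta)\) exactly, so the constraint holds with \(k=0\), while a quadratic \(H_1\) violates it:

```python
def dd(H, alpha, beta, h=1e-7):
    """Directional (Gateaux) derivative of H at alpha in direction beta,
    by a finite difference; H depends on a single scalar argument here."""
    return (H(alpha + h * beta) - H(alpha)) / h

H_lin = lambda a: 3.0 * a          # linear candidate
H_quad = lambda a: a * a           # nonlinear candidate

alpha, beta = 1.3, 0.7
# linear case: sum_I beta_I dH/dalpha_I - H(beta) = 0  (i.e. k = 0)
assert abs(dd(H_lin, alpha, beta) - H_lin(beta)) < 1e-6
# quadratic case: the same combination is 2*alpha*beta - beta^2 != 0,
# so it cannot equal k*beta_0 for a constant k independent of alpha
assert abs(dd(H_quad, alpha, beta) - H_quad(beta)) > 1e-3
```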
To show the whole hierarchy is linear we proceed as in
\cite{SK}. An \(N\)-particle wave-function for particles in \(\mathbb R^d\) can be viewed as a one-particle wave-function for particles (call them {\em
conglomerate\/} particles) in \(\mathbb R^{Nd}\). Consider
the separating property for a
\(2N\)-particle operator acting on an (anti-)symmetrized tensor product of two
\(N\)-particle wave-functions, reinterpreted now as a separating property for operators acting on the wave-functions of two and one conglomerate particles.
The only difference in relation to what we have already done is the permutation symmetry of conglomerate particles.
Let \(\phi(x_1,\dots,x_N)\) and \(\psi(y_1,\dots,y_N)\)
be two properly (anti-)symmetric \(N\)-particle wave-functions. One has using the conventions of (\ref{eq:sitp}):
\begin{equation} \label{svetlichny:2congl}(\phi\hat\otimes\psi)^{{\cal A}}(x_1,\dots,x_{2N})
=\frac{N!^2}{(2N)!}\sum_{{\cal I}}(-1)^{fp({\cal I})} \phi^{{\cal A}_I}(x_{i_1},\dots,x_{i_N})\psi^{{\cal A}_J}(x_{j_1},\dots,x_{j_N}),
\end{equation}
where \({\cal A}=(A_1,\dots,A_{2N})\), \({\cal A}_I=(A_{i_1},\dots,A_{i_N})\), and \({\cal A}_J=(A_{j_1},\dots,A_{j_N})\) are internal degree of freedom indices. For (\ref{svetlichny:2congl}) the possible values that one
can attribute to the wave-function and its derivatives at a point are more
complicated than those given by (\ref{svetlichny:ra}), but by an appropriate choice of
coordinates and an appeal to Borel's lemma we can again use (\ref{svetlichny:ra}) as a particular case for two
conglomerate particles, the only differences being the change of the combinatorial factor \(1/2\) to \(N!^2/(2N)!\) and the possibility that the factor \((-1)^f\) may be absent even in the Fermi case. These differences are non-essential to the derivation, and repeating the argument presented above for the
two-particle case we see that the operator for one conglomerate particle
must be linear and so the \(N\)-particle operator must be linear. With this
the whole hierarchy must be linear. We thus have:
{\bf Theorem:} {\sl An (anti-)symmetric tensor derivation in which all multiparticle operators are differential, is necessarily linear.}
\section{Comments on the origin of quantum linearity}
Our view on quantum-mechanical linearity is that it is an emergent feature of the world that arises along with the manifold structure of space-time from some more fundamental pre-geometric reality. Thus questions of (non)linearity should be joined with the general quantum gravity program. Previous clues in this direction are provided by (1) the apparent connections between linearity and the causal structure of space-time\cite{svetBG,cover} and by (2) the difficulty of incorporating internal degrees of freedom, such as spin, in separating non-linear theories, requiring new multi-particle effects at every particle number\cite{tandor}. We consider the present result as another such clue, linking linearity to the statistics of identical particles and the possibility of independently evolving systems.
The emergent view of linearity is also supported by the present extremely small experimental bounds on possible non-linear effects, the suppression factor being about \(10^{-20}\)\cite{nlexp1}. If linearity is emergent, experimental evidence would be hard to come by. There is however the possibility that ultra-high-energy cosmic rays actually do probe the hypothetically non-linear pre-geometric regime\cite{cosmicrays}. The possible role of non-linearities on the Planck scale has also been considered by T.~P.~Singh\cite{singh}, and by N.~E.~Mavromatos and R.~J.~Szabo\cite{mavsz}.
\end{document}
\begin{document}
\title[Hyers-Ulam stability for hyperbolic random dynamics]{Hyers-Ulam stability for hyperbolic random dynamics}
\begin{abstract}
We prove that small nonlinear perturbations of random linear dynamics admitting a tempered exponential dichotomy have a random version of the shadowing property. As a consequence, if the exponential dichotomy is uniform, we get that the random linear dynamics is Hyers-Ulam stable. Moreover, we apply our results to study the conservation of Lyapunov exponents of the random linear dynamics subjected to nonlinear perturbations.
\end{abstract}
\author{Lucas Backes}
\address{Departamento de Matem\'{a}tica, Universidade Federal do Rio Grande do Sul, Av. Bento Gon\c{c}alves 9500, CEP 91509-900, Porto Alegre, RS, Brazil}
\email{[email protected]}
\author{Davor Dragi\v cevi\'c}
\address{Department of Mathematics, University of Rijeka, Croatia}
\email{[email protected]}
\keywords{ Hyers-Ulam stability, hyperbolicity, random dynamical systems}
\subjclass[2010]{Primary: 37C50, 34D09; Secondary: 34D10.}
\maketitle
\section{Introduction}
The foundations of the theory of chaotic dynamical systems date back to the work of Poincar\'e \cite{Poi90}, and the subject is now a well developed area of research. An important feature of chaotic dynamical systems, already observed by Poincar\'e, is sensitivity to initial conditions: any small change in the initial condition may lead to a large discrepancy in the output. This makes the task of predicting the true trajectory of the system from approximations difficult or even impossible. On the other hand, many chaotic systems, such as uniformly hyperbolic dynamical systems \cite{An70, Bow75}, exhibit an important property: even though a small error in the initial condition may eventually lead to a large effect, there exists a true orbit, with a slightly different initial condition, that stays near the approximate trajectory. This property is known as the \emph{shadowing property} and its study was initiated in the seminal papers of Anosov~\cite{An70} and Bowen~\cite{Bow75}. Their original approach, which was based on invariant manifold theory, was later greatly simplified by Mayer and Sell~\cite{MS87} as well as Palmer~\cite{Pal88},
who presented purely analytic proofs of the shadowing property in the context of uniformly hyperbolic dynamics. We refer to the books~\cite{Pal00, Pil99} for more details and further references on the shadowing theory.
More recently, numerous authors have begun to study a similar problem in the context of differential or difference equations. More precisely, they are concerned with formulating sufficient conditions under which one is able to find an exact solution of a differential or difference equation in a vicinity of an approximate solution. If the equation
possesses this property, we say that it exhibits \emph{Hyers-Ulam stability}. This terminology is used due to the fact that Ulam~\cite{Ulam} proposed a similar type of problem for functional equations, whose partial solution was provided by Hyers~\cite{Hyers}.
As already mentioned, in the recent years many results dealing with Hyers-Ulam stability (discussing both its presence and lack of it) of differential and difference equations have been obtained. We in particular refer to the works of Brzd\c{e}k, Popa and Xu~\cite{BPX1, BPX2, BPX3, BPX4}, Jung~\cite{Jung}, Popa~\cite{Popa, Popa2}, Popa and Ra\c{s}a~\cite{PopaRasa,PopaRasa2},
Wang, Fe\v{c}kan and Tian~\cite{WFT}, Wang, Fe\v{c}kan and Zhou~\cite{WFZ},
Xu~\cite{X}
as well as Xu, Brzd\c{e}k and Zhang~\cite{XBZ}.
The relationship between hyperbolicity and Hyers-Ulam stability has been systematically studied in a series of papers by C. Bu\c{s}e and collaborators~\cite{BBT2,BRST,BLR}. Let us briefly describe the main results from~\cite{BBT2}. Assume that $A$ is a complex matrix of order $m$ and consider the associated dynamics
\begin{equation}\label{nad}
x_{n+1}=Ax_n, \quad n\ge 0.
\end{equation}
Then, it was established in~\cite{BBT2} that the following statements are equivalent:
\begin{itemize}
\item $A$ is hyperbolic, i.e. the spectrum of $A$ does not intersect the unit circle;
\item there exists $L>0$ such that for any $\varepsilon >0$ and any sequence $(y_n)_{n\ge 0}\subset \mathbb C^m$ such that
\[
\sup_{n\ge 0}\lVert y_{n+1}-Ay_n\rVert \le \varepsilon,
\]
there exists a solution $(x_n)_{n\ge 0}\subset \mathbb C^m$ of~\eqref{nad} such that
\[
\sup_{n\ge 0}\lVert x_n-y_n\rVert \le L\varepsilon.
\]
In other words, each approximate solution of~\eqref{nad} is close to an exact solution.
\end{itemize}
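This equivalence can be illustrated constructively. In the sketch below (our own illustration, with a hypothetical diagonal hyperbolic matrix $A=\operatorname{diag}(1/2,2)$ and $P$ the spectral projection onto the stable direction), the standard correction $u_n=-\sum_{k\ge0}A^kP\,d_{n-k}+\sum_{k\ge1}A^{-k}(I-P)\,d_{n+k}$, with defects $d_n:=y_n-Ay_{n-1}$, turns any $\varepsilon$-pseudo-orbit into an exact orbit $x_n=y_n+u_n$ with $\sup_n\lVert x_n-y_n\rVert\le 3\varepsilon$:

```python
import random
random.seed(1)

a_s, a_u = 0.5, 2.0        # stable / unstable eigenvalues of A = diag(1/2, 2)
eps = 1e-3
M = 80                     # work on the window n = 0..M

# pseudo-orbit with ||y_{n+1} - A y_n|| <= eps (componentwise):
# stable component iterated with noise, unstable component kept small
ys = [random.uniform(-1, 1)]
for n in range(M):
    ys.append(a_s * ys[-1] + random.uniform(-eps, eps))
yu = [random.uniform(-eps / 3, eps / 3) for _ in range(M + 1)]
y = list(zip(ys, yu))

# defects d_n = y_n - A y_{n-1}, n = 1..M  (stored at d[n-1])
d = [(y[n][0] - a_s * y[n - 1][0], y[n][1] - a_u * y[n - 1][1])
     for n in range(1, M + 1)]

# correction u_n = -sum_{k>=0} A^k P d_{n-k} + sum_{k>=1} A^{-k}(I-P) d_{n+k}
x = []
for n in range(M + 1):
    u_s = -sum(a_s ** k * d[n - 1 - k][0] for k in range(n))
    u_u = sum(a_u ** (-k) * d[n - 1 + k][1] for k in range(1, M - n + 1))
    x.append((y[n][0] + u_s, y[n][1] + u_u))

# x is an exact orbit of x_{n+1} = A x_n ...
for n in range(M):
    assert abs(x[n + 1][0] - a_s * x[n][0]) < 1e-12
    assert abs(x[n + 1][1] - a_u * x[n][1]) < 1e-12
# ... and it eps-shadows y: here ||A^k P|| = ||A^{-k}(I-P)|| = 2^{-k}, so L <= 3
assert max(max(abs(x[n][0] - y[n][0]), abs(x[n][1] - y[n][1]))
           for n in range(M + 1)) <= 3 * eps
```

The same telescoping computation is what underlies the operator $\Gamma_\omega$ used later in the proof of Theorem 1, with the constant rates replaced by tempered ones.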
We note that similar results concerned with the difference equation
\begin{equation}\label{nad2}
x_{n+1}=A_nx_n, \quad n\ge 0,
\end{equation}
where $(A_n)_{n\ge 0}$ is a periodic sequence of complex matrices, were obtained in~\cite{BRST}.
A recent advancement in this line of research was made in~\cite{BDp} (with arguments based on~\cite{BD19}, which in turn are inspired by versions of the shadowing lemma for nonautonomous dynamics~\cite{CLP89}). More precisely, there we study~\eqref{nad2} in the infinite-dimensional case and without any assumption on the periodicity of the sequence $(A_n)_{n\ge 0}$. We proved that if the sequence
$(A_n)_{n\ge 0}$ admits an exponential dichotomy then~\eqref{nad2} exhibits Hyers-Ulam stability (showing also that the converse holds in the finite-dimensional case). In fact, we showed that the same conclusion holds true if we slightly perturb the linear dynamics~\eqref{nad2}, i.e. if we consider the nonlinear dynamics
\[
x_{n+1}=A_nx_n+f_n(x_n), \quad n\ge 0,
\]
where $(f_n)_{n\ge 0}$ is a sequence of suitable Lipschitz (nonlinear) maps on $X$.
The objective of this paper is to establish results similar to those in~\cite{BDp} in the context of random dynamical systems. The theory of random dynamical systems is by now very well developed and its relevance in studying various real-life phenomena (such as those modelled by stochastic differential equations) is widely recognized. We refer to~\cite{A} for a detailed exposition of this theory.
As usual in random dynamics, we start with a base space which is a probability space $(\Omega, \mathcal F, \mathbb P)$ together with a $\mathbb P$-preserving invertible transformation $\sigma \colon \Omega \to \Omega$. Furthermore, we have a family of linear operators $(A(\omega))_{\omega \in \Omega}$ acting on some Banach space $X$ and a family $(f_\omega)_{\omega \in \Omega}$ of (nonlinear) maps on $X$. For
$\omega \in \Omega$,
we consider the nonlinear dynamics
\begin{equation}\label{83:6}
x_{n+1}=A(\sigma^n (\omega))x_n+f_{\sigma^n (\omega)}(x_n), \quad n\in \mathbb{Z}.
\end{equation}
We show that if the cocycle generated by the linear part of~\eqref{83:6} admits a so-called tempered exponential dichotomy, then, under suitable assumptions on the nonlinear maps $f_\omega$, in a vicinity of each suitable approximate solution of~\eqref{83:6} we can find an exact solution. In the particular case when the linear part of~\eqref{83:6} admits a uniform exponential dichotomy, our results imply that~\eqref{83:6} is Hyers-Ulam stable.
In contrast to~\cite{BDp}, besides considering a random dynamics (and not nonautonomous dynamics given by a sequence of maps), we also:
\begin{itemize}
\item deal with the situation when the linear part of~\eqref{83:6} admits a tempered exponential dichotomy, i.e. it is nonuniformly hyperbolic. The concept of nonuniform hyperbolicity originated in the landmark works of Oseledets~\cite{Osel} and particularly
Pesin~\cite{P1} and proved to be a nontrivial and far-reaching extension of the classical theory of uniformly hyperbolic dynamical systems
initiated by Smale~\cite{Smale}. For extensions of this theory to the case of infinite-dimensional dynamics we refer to the works of Ruelle~\cite{Ruelle}, Ma\~{n}\'{e}~\cite{Mane}, Thieullen~\cite{T}, Lian and Lu~\cite{LL}, Zhou, Lu and Zhang~\cite{ZLZ} and additional references therein.
\item consider a broader class of approximate solutions of~\eqref{83:6} than those in~\cite{BDp}. More precisely, we now deal with situations when the error in the one step iteration of the dynamics is no longer uniform over time.
\end{itemize}
We stress that those novelties require us to substantially modify our previous arguments from~\cite{BDp}.
\section{Preliminaries}
Let $X$ be an arbitrary Banach space and let $B(X)$ denote the space of all bounded operators on $X$. Furthermore, consider a probability space $(\Omega, \mathcal F, \mathbb P)$ together with a $\mathbb P$-preserving invertible transformation $\sigma \colon \Omega \to \Omega$. We will assume that $\mathbb P$ is ergodic.
Let $A\colon \Omega \to B(X)$ be a strongly measurable map, i.e. $\omega \to A(\omega)x$ is a measurable map for each $x\in X$. We consider the associated cocycle $\EuScript{A} \colon \Omega \times \mathbb N_0 \to B(X)$ defined by
\[
\EuScript{A}(\omega, n)=\begin{cases}
\text{\rm Id} & \text{if $n=0$;}\\
A(\sigma^{n-1}(\omega)) \cdots A(\sigma (\omega))A(\omega) & \text{if $n\in \mathbb N$.}
\end{cases}
\]
Observe that
\[
\EuScript{A}(\omega, n+m)=\EuScript{A}(\sigma^m (\omega), n) \EuScript{A}(\omega, m), \quad \text{for $\omega \in \Omega$ and $m, n\in \mathbb N_0$.}
\]
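The cocycle identity above is immediate from the definition; the following sketch (our own illustration, taking $\Omega=\mathbb Z$ with $\sigma(\omega)=\omega+1$ and random $2\times 2$ matrices for $A(\omega)$) checks it numerically:

```python
import random
random.seed(2)

def matmul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

I2 = [[1.0, 0.0], [0.0, 1.0]]

# model: Omega = Z, sigma(w) = w + 1, and a random 2x2 matrix A(w)
Amap = {w: [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
        for w in range(12)}

def cocycle(w, n):
    # A(sigma^{n-1} w) ... A(sigma w) A(w), with cocycle(w, 0) = Id
    C = I2
    for k in range(n):
        C = matmul(Amap[w + k], C)
    return C

w, n, m = 1, 4, 5
lhs = cocycle(w, n + m)                          # A(w, n+m)
rhs = matmul(cocycle(w + m, n), cocycle(w, m))   # A(sigma^m w, n) A(w, m)
assert all(abs(lhs[i][j] - rhs[i][j]) < 1e-12
           for i in range(2) for j in range(2))
```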
We recall some notions of central importance to our results.
\begin{definition}
A measurable map $K\colon \Omega \to (0, \infty)$ is said to be a \emph{tempered random variable} if
\[
\lim_{n\to \pm \infty} \frac 1 n \log K(\sigma^n(\omega))=0, \quad \text{for $\mathbb P$-a.e. $\omega \in \Omega$.}
\]
\end{definition}
\begin{definition}\label{xxv}
We say that $\EuScript{A}$ admits a \emph{tempered exponential dichotomy} if there exist $\lambda >0$, a tempered random variable $K\colon \Omega \to (0, \infty)$, a $\sigma$-invariant set $\Omega' \subset \Omega$ with $\mathbb P(\Omega')=1$ and a strongly measurable map $\Pi \colon \Omega \to B(X)$ such that for $\omega \in \Omega'$:
\begin{enumerate}
\item $\Pi(\omega)$ is a projection on $X$;
\item \begin{equation}\label{proj}\Pi(\sigma^n (\omega))\EuScript{A}(\omega, n)=\EuScript{A}(\omega, n)\Pi(\omega) \quad \text{for $n\in \mathbb N$;}
\end{equation}
\item for $n\in \mathbb N$,
\[
\EuScript{A}(\omega, n)\rvert_{\Ker \Pi(\omega)} \colon \Ker \Pi(\omega) \to \Ker \Pi(\sigma^n (\omega))
\]
is invertible;
\item
\begin{equation}\label{td1}
\lVert \EuScript{A}(\omega, n)\Pi(\omega)\rVert \le K(\omega)e^{-\lambda n}, \quad n\ge 0
\end{equation}
and
\begin{equation}\label{td2}
\lVert \EuScript{A}(\omega, -n)(\text{\rm Id}-\Pi(\omega))\rVert \le K(\omega)e^{-\lambda n}, \quad n\ge 0,
\end{equation}
where
\[
\EuScript{A}(\omega, -n):=\bigg{(}\EuScript{A}(\sigma^{-n}(\omega), n)\rvert_{\Ker \Pi (\sigma^{-n} (\omega))} \bigg{)}^{-1}.
\]
\end{enumerate}
\end{definition}
\begin{remark}
It was proved in~\cite{BD19-2} that if $\EuScript{A}$ satisfies the assumptions of the version of the Multiplicative ergodic theorem established in~\cite{GTQ} and has all of its Lyapunov exponents nonzero, then it admits a tempered exponential dichotomy. Hence, the notion of a tempered exponential dichotomy is ubiquitous from the ergodic theory point of view.
\end{remark}
From now on, we assume that $\EuScript{A}$ is a cocycle that admits a tempered exponential dichotomy. Let $f_\omega\colon X \to X$, $\omega \in \Omega$, be a family of (nonlinear) maps.
For $\omega \in \Omega$, we consider the associated nonlinear and nonautonomous dynamics~\eqref{83:6}.
Observe that~\eqref{83:6} can be written as
\begin{equation}\label{nde}
x_{n+1}=F_{\sigma^n (\omega)}(x_n)\quad n\in \mathbb{Z},
\end{equation}
where
\[
F_\omega:=A(\omega)+f_\omega.
\]
We now introduce a family of adapted norms. For $\omega \in \Omega'$ and $x\in X$, let
\[
\lVert x\rVert_{\omega}:=\sup_{n\ge 0}(\lVert \EuScript{A}(\omega, n)\Pi(\omega)x\rVert e^{\lambda n})+\sup_{n\ge 0}(\lVert \EuScript{A}(\omega, -n)(\text{\rm Id}-\Pi(\omega))x\rVert e^{\lambda n}).
\]
Note that it follows from~\eqref{td1} and~\eqref{td2} that
\begin{equation}\label{ln}
\lVert x\rVert \le \lVert x\rVert_\omega \le 2K(\omega)\lVert x\rVert, \quad \text{for $\omega \in \Omega'$ and $x\in X$.}
\end{equation}
We need the following classical lemma whose proof we include for the sake of completeness.
\begin{lemma}
We have that for each $\omega \in \Omega'$, $n\ge 0$ and $x\in X$,
\begin{equation}\label{ln1}
\lVert \EuScript{A}(\omega, n)\Pi(\omega)x\rVert_{\sigma^n (\omega)}\le e^{-\lambda n}\lVert x\rVert_\omega
\end{equation}
and
\begin{equation}\label{ln2}
\lVert \EuScript{A}(\omega, -n)(\text{\rm Id}-\Pi(\omega))x\rVert_{\sigma^{-n} (\omega)}\le e^{-\lambda n}\lVert x\rVert_\omega.
\end{equation}
\end{lemma}
\begin{proof}
We have that
\[
\begin{split}
\lVert \EuScript{A}(\omega, n)\Pi(\omega)x\rVert_{\sigma^n (\omega)} &=\sup_{m\ge 0}(\lVert \EuScript{A}(\sigma^{n}(\omega), m)\Pi(\sigma^n (\omega))\EuScript{A}(\omega, n)\Pi(\omega)x\rVert e^{\lambda m}) \\
&=\sup_{m\ge 0} (\lVert \EuScript{A}(\omega, n+m) \Pi(\omega)x\rVert e^{\lambda m}) \\
&=e^{-\lambda n}\sup_{m\ge 0} (\lVert \EuScript{A}(\omega, n+m) \Pi(\omega)x\rVert e^{\lambda (m+n)}) \\
&\le e^{-\lambda n} \lVert x\rVert_\omega,
\end{split}
\]
and hence~\eqref{ln1} holds. Similarly, we have
\[
\begin{split}
&\lVert \EuScript{A}(\omega, -n)(\text{\rm Id}-\Pi(\omega))x\rVert_{\sigma^{-n} (\omega)} \\
&=\sup_{m\ge 0} (\lVert \EuScript{A}(\sigma^{-n} (\omega), -m)(\text{\rm Id}-\Pi(\sigma^{-n}(\omega)))\EuScript{A}(\omega, -n)(\text{\rm Id}-\Pi(\omega))x\rVert e^{\lambda m})\\
&=\sup_{m\ge 0}(\lVert \EuScript{A}(\omega, -m-n)(\text{\rm Id}-\Pi(\omega))x\rVert e^{\lambda m})\\
&=e^{-\lambda n}\sup_{m\ge 0}(\lVert \EuScript{A}(\omega, -m-n)(\text{\rm Id}-\Pi(\omega))x\rVert e^{\lambda (m+n)})\\
&\le e^{-\lambda n}\lVert x\rVert_\omega,
\end{split}
\]
and consequently~\eqref{ln2} also holds.
\end{proof}
Before we state our first result, we will introduce additional terminology.
\begin{definition}
For $r>0$ and a two-sided sequence of positive real numbers $\delta \colon \mathbb{Z} \to (0, \infty)$, we say that $\delta$ is \emph{$r$-admissible} if:
\[
\sup_{n\in \mathbb{Z}}\max \bigg{\{}\frac{\delta (n+1)}{\delta (n)}, \frac{\delta (n)}{\delta (n+1)} \bigg{\}}\le r.
\]
\end{definition}
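For instance (an illustration we add; the parameter values are arbitrary), geometric sequences $\delta(n)=\rho^{|n|}$ are $r$-admissible precisely when $\rho\in[1/r,r]$, as the following sketch checks on a finite window:

```python
from math import exp

lam, eps = 1.0, 0.4
r = exp(lam - eps)         # the admissibility constant used in Theorem 1

def is_admissible(delta, r, window=range(-50, 50)):
    # small multiplicative tolerance to absorb floating-point rounding
    return all(max(delta(n + 1) / delta(n), delta(n) / delta(n + 1))
               <= r * (1 + 1e-9) for n in window)

delta_ok = lambda n: r ** abs(n)          # successive ratios are r or 1/r
delta_bad = lambda n: (2 * r) ** abs(n)   # ratio 2r > r away from n = 0

assert is_admissible(delta_ok, r)
assert not is_admissible(delta_bad, r)
assert is_admissible(lambda n: 1.0, r)    # constant sequences always qualify
```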
\section{Main results}
The following is our first result.
\begin{theorem}\label{t1}
Assume that $\EuScript{A}$ admits a tempered exponential dichotomy and let $\Omega'\subset \Omega$, $\lambda >0$ and $K\colon \Omega \to (0, \infty)$ be as in Definition~\ref{xxv}. Furthermore, suppose that $\varepsilon>0$, $c\ge 0$ are such that $\varepsilon \le \lambda$ and that
\begin{equation}\label{contraction}
2ce^{\lambda-\varepsilon} \frac{1+e^{-\varepsilon}}{1-e^{-\varepsilon}}<1.
\end{equation}
Finally, we assume that
\begin{equation}\label{non}
\lVert f_\omega(x)-f_\omega(y)\rVert \le \frac{c}{K(\sigma (\omega))}\lVert x-y\rVert, \quad \text{for $\omega \in \Omega'$ and $x, y\in X$.}
\end{equation}
Then, there exists $L=L(\lambda, \varepsilon, c)>0$ such that for every $e^{\lambda-\varepsilon}$-admissible sequence
$\delta \colon \mathbb{Z} \to (0, \infty)$,
every $\omega \in \Omega'$ and every sequence $(y_n)_{n\in \mathbb{Z}}\subset X$ satisfying
\begin{equation}\label{pseudo}
\lVert y_n-F_{\sigma^{n-1}(\omega)}(y_{n-1})\rVert \le \frac{\delta (n)}{2K(\sigma^n (\omega))} \quad \text{for $n\in \mathbb{Z}$,}
\end{equation}
there is a solution $(x_n)_{n\in \mathbb{Z}}$ of~\eqref{nde} with the property that
\begin{equation}\label{shad}
\lVert x_n-y_n\rVert \le L\delta (n), \quad \text{for each $n\in \mathbb{Z}$.}
\end{equation}
\end{theorem}
\begin{proof}
We will split the proof into several lemmas.
Let us begin by introducing some auxiliary notation. Set
\[
Y_{\delta, \infty}:=\bigg{\{} \mathbf z=(z_n)_{n\in \mathbb{Z}}\subset X: \sup_{n\in \mathbb{Z}}\big{(}\delta (n)^{-1}\lVert z_n\rVert_{\sigma^n(\omega)} \big{)}<\infty \bigg{\}}.
\]
It is easy to verify that $Y_{\delta, \infty}$ is a Banach space with respect to the norm
\[
\lVert \mathbf z\rVert_{\delta, \infty}:=\sup_{n\in \mathbb{Z}}\big{(}\delta (n)^{-1}\lVert z_n\rVert_{\sigma^n(\omega)} \big{)}.
\]
Furthermore, we define $\Gamma_\omega \colon Y_{\delta, \infty} \to Y_{\delta, \infty}$ by
\[
\begin{split}
(\Gamma_\omega \mathbf z)_n &=\sum_{k=0}^\infty \EuScript{A}(\sigma^{n-k}(\omega),k)\Pi(\sigma^{n-k}(\omega))z_{n-k} \\
&\phantom{=}-\sum_{k=1}^\infty \EuScript{A}(\sigma^{n+k}(\omega), -k)(\text{\rm Id}-\Pi(\sigma^{n+k}(\omega)))z_{n+k}.
\end{split}
\]
We need the following auxiliary result.
\begin{lemma}\label{l2}
We have that $\Gamma_\omega$ is a well-defined and bounded linear operator on $Y_{\delta, \infty}$. Furthermore,
\begin{equation}\label{bms}
\lVert \Gamma_\omega \rVert \le \frac{1+e^{-\varepsilon}}{1-e^{-\varepsilon}}.
\end{equation}
\end{lemma}
\begin{proof}[Proof of the lemma]
Obviously $\Gamma_\omega$ is linear. Moreover,
observe that for each $\mathbf z=(z_n)_{n\in \mathbb{Z}} \in Y_{\delta, \infty}$, it follows from~\eqref{ln1} and \eqref{ln2} that
\[
\begin{split}
\delta(n)^{-1}\lVert (\Gamma_\omega \mathbf z)_n\rVert_{\sigma^n (\omega)} &\le \delta(n)^{-1}\sum_{k=0}^\infty e^{-\lambda k}\lVert z_{n-k}\rVert_{\sigma^{n-k}(\omega)} \\
&\phantom{\le}+\delta(n)^{-1}\sum_{k=1}^\infty e^{-\lambda k}\lVert z_{n+k}\rVert_{\sigma^{n+k}(\omega)} \\
&=\sum_{k=0}^\infty e^{-\lambda k}\frac{\delta (n-k)}{\delta (n)}\delta(n-k)^{-1}\lVert z_{n-k}\rVert_{\sigma^{n-k}(\omega)} \\
&\phantom{=}+\sum_{k=1}^\infty e^{-\lambda k}\frac{\delta (n+k)}{\delta (n)}\delta (n+k)^{-1}\lVert z_{n+k}\rVert_{\sigma^{n+k}(\omega)}\\
&\le \sum_{k=0}^\infty e^{-\lambda k}e^{(\lambda-\varepsilon)k}\delta(n-k)^{-1}\lVert z_{n-k}\rVert_{\sigma^{n-k}(\omega)} \\
&\phantom{\le}+\sum_{k=1}^\infty e^{-\lambda k} e^{(\lambda-\varepsilon)k}\delta (n+k)^{-1}\lVert z_{n+k}\rVert_{\sigma^{n+k}(\omega)}\\
&\le \bigg{(} \sum_{k=0}^\infty e^{-\varepsilon k}+\sum_{k=1}^\infty e^{-\varepsilon k}\bigg{)}\lVert \mathbf z\rVert_{\delta, \infty}\\
&=\frac{1+e^{-\varepsilon}}{1-e^{-\varepsilon}}\lVert \mathbf z\rVert_{\delta, \infty}.
\end{split}
\]
Hence, by taking the supremum over all $n\in \mathbb{Z}$, we obtain that
\[
\lVert \Gamma_\omega \mathbf z\rVert_{\delta, \infty} \le \frac{1+e^{-\varepsilon}}{1-e^{-\varepsilon}}\lVert \mathbf z\rVert_{\delta, \infty}.
\]
We conclude that $\Gamma_\omega$ is a well-defined and bounded operator. In addition, \eqref{bms} holds.
\end{proof}
In the following lemma we explain the role of the operator $\Gamma_\omega$.
\begin{lemma}\label{l3}
For $\omega\in \Omega'$ and $\mathbf z=(z_n)_{n\in \mathbb{Z}}, \mathbf w=(w_n)_{n\in \mathbb{Z}}\in Y_{\delta, \infty}$, the following statements are equivalent:
\begin{enumerate}
\item $\Gamma_\omega \mathbf z=\mathbf w$;
\item for $n\in \mathbb{Z}$,
\begin{equation}\label{10:34}
w_n-A(\sigma^{n-1}(\omega))w_{n-1}=z_n.
\end{equation}
\end{enumerate}
\end{lemma}
\begin{proof}[Proof of the lemma]
Let us assume that $\Gamma_\omega \mathbf z=\mathbf w$. For each $n\in \mathbb{Z}$, we have that
\[
\begin{split}
&w_n-A(\sigma^{n-1}(\omega))w_{n-1} \\
&=\sum_{k=0}^\infty \EuScript{A}(\sigma^{n-k}(\omega),k)\Pi(\sigma^{n-k}(\omega))z_{n-k}\\
&\phantom{=}-A(\sigma^{n-1}(\omega))\sum_{k=0}^\infty \EuScript{A}(\sigma^{n-k-1}(\omega),k)\Pi(\sigma^{n-k-1}(\omega))z_{n-k-1}\\
&\phantom{=}-\sum_{k=1}^\infty \EuScript{A}(\sigma^{n+k}(\omega), -k)(\text{\rm Id}-\Pi(\sigma^{n+k}(\omega)))z_{n+k}\\
&\phantom{=}+A(\sigma^{n-1}(\omega))\sum_{k=1}^\infty \EuScript{A}(\sigma^{n+k-1}(\omega), -k)(\text{\rm Id}-\Pi(\sigma^{n+k-1}(\omega)))z_{n+k-1}\\
&=\sum_{k=0}^\infty \EuScript{A}(\sigma^{n-k}(\omega),k)\Pi(\sigma^{n-k}(\omega))z_{n-k}\\
&\phantom{=}-\sum_{k=0}^\infty \EuScript{A}(\sigma^{n-k-1}(\omega),k+1)\Pi(\sigma^{n-k-1}(\omega))z_{n-k-1}\\
&\phantom{=}-\sum_{k=1}^\infty \EuScript{A}(\sigma^{n+k}(\omega), -k)(\text{\rm Id}-\Pi(\sigma^{n+k}(\omega)))z_{n+k}\\
&\phantom{=}+\sum_{k=1}^\infty \EuScript{A}(\sigma^{n+k-1}(\omega), -(k-1))(\text{\rm Id}-\Pi(\sigma^{n+k-1}(\omega)))z_{n+k-1}\\
&=\Pi(\sigma^n (\omega))z_n+(\text{\rm Id}-\Pi(\sigma^n (\omega)))z_n \\
&=z_n.
\end{split}
\]
Thus,
\[
w_n-A(\sigma^{n-1}(\omega))w_{n-1}=z_n, \quad \text{for $n\in \mathbb{Z}$.}
\]
Let us now establish the converse. Assume that~\eqref{10:34} holds for each $n\in \mathbb{Z}$. Set
\[
w_n^s:=\Pi(\sigma^n(\omega))w_n \quad \text{and} \quad w_n^u:=w_n-w_n^s.
\]
It follows from~\eqref{proj} and~\eqref{10:34} that
\[
\begin{split}
w_n^s &= \Pi(\sigma^n(\omega))A(\sigma^{n-1} (\omega))w_{n-1}+\Pi(\sigma^n(\omega))z_n \\
&=A(\sigma^{n-1} (\omega))\Pi(\sigma^{n-1} (\omega))w_{n-1}+\Pi(\sigma^n(\omega))z_n \\
&=A(\sigma^{n-1} (\omega))\Pi(\sigma^{n-1} (\omega)) A(\sigma^{n-2}(\omega))w_{n-2} \\
&\phantom{=}+A(\sigma^{n-1} (\omega))\Pi(\sigma^{n-1} (\omega))z_{n-1}+\Pi(\sigma^n(\omega))z_n \\
&=\EuScript{A}(\sigma^{n-2}(\omega), 2)\Pi(\sigma^{n-2}(\omega))w_{n-2} \\
&\phantom{=}+\EuScript{A}(\sigma^{n-1} (\omega), 1)\Pi(\sigma^{n-1} (\omega))z_{n-1}+\EuScript{A}(\sigma^n (\omega), 0)\Pi(\sigma^n(\omega))z_n. \\
\end{split}
\]
Proceeding inductively, we find that
\[
\begin{split}
w_n^s &=\EuScript{A}(\sigma^{n-k}(\omega), k)\Pi(\sigma^{n-k}(\omega))w_{n-k} \\
&\phantom{=}+\sum_{j=0}^{k-1}\EuScript{A}(\sigma^{n-j} (\omega), j)\Pi(\sigma^{n-j} (\omega))z_{n-j},
\end{split}
\]
for each $k\in \mathbb N$. Passing to the limit as $k\to \infty$ and using~\eqref{ln1}, we conclude that
\begin{equation}\label{11:54}
w_n^s=\sum_{k=0}^\infty \EuScript{A}(\sigma^{n-k}(\omega),k)\Pi(\sigma^{n-k}(\omega))z_{n-k}, \quad \text{for $n\in \mathbb{Z}$.}
\end{equation}
Similarly, one can show that
\begin{equation}\label{11:55}
w_n^u=-\sum_{k=1}^\infty \EuScript{A}(\sigma^{n+k}(\omega), -k)(\text{\rm Id}-\Pi(\sigma^{n+k}(\omega)))z_{n+k}, \quad \text{for $n\in \mathbb{Z}$.}
\end{equation}
By~\eqref{11:54} and~\eqref{11:55}, we have that $\Gamma_\omega \mathbf z=\mathbf w$ and the proof of the lemma is complete.
\end{proof}
For $n\in \mathbb{Z}$, we define $g_n\colon X\to X$ by
\[
\begin{split}
g_n(x) &=f_{\sigma^n (\omega)}(x+y_n)-f_{\sigma^n (\omega)}(y_n)+F_{\sigma^n (\omega)}(y_n)-y_{n+1} \\
&=f_{\sigma^n (\omega)}(x+y_n)+A(\sigma^n(\omega))y_n-y_{n+1},
\end{split}
\]
for $x\in X$. Furthermore,
for $\mathbf z=(z_n)_{n\in \mathbb{Z}}\in Y_{\delta, \infty}$, set
\[
(S(\mathbf z))_n:=g_{n-1}(z_{n-1}), \quad n\in \mathbb{Z}.
\]
We need the following estimate.
\begin{lemma}\label{l4}
For $\mathbf z^i=(z_n^i)_{n\in \mathbb{Z}} \in Y_{\delta, \infty}$, $i=1, 2$ we have that
\[
\lVert S(\mathbf z^1)-S(\mathbf z^2)\rVert_{\delta, \infty} \le 2ce^{\lambda-\varepsilon}\lVert \mathbf z^1-\mathbf z^2\rVert_{\delta, \infty}.
\]
\end{lemma}
\begin{proof}[Proof of the lemma]
By~\eqref{ln} and \eqref{non}, we have that
\[
\begin{split}
&\lVert S(\mathbf z^1)-S(\mathbf z^2)\rVert_{\delta, \infty} \\
&=\sup_{n\in \mathbb{Z}} \bigg{(}\delta (n)^{-1}\lVert g_{n-1}(z_{n-1}^1)-g_{n-1}(z_{n-1}^2)\rVert_{\sigma^n (\omega)} \bigg{)}\\
&=\sup_{n\in \mathbb{Z}} \bigg{(}\delta (n)^{-1}\lVert f_{\sigma^{n-1}(\omega)}(z_{n-1}^1+y_{n-1})-f_{\sigma^{n-1}(\omega)}(z_{n-1}^2+y_{n-1})\rVert_{\sigma^n (\omega)} \bigg{)}\\
&\le \sup_{n\in \mathbb{Z}}\bigg{(}2K(\sigma^n (\omega)) \delta (n)^{-1}\lVert f_{\sigma^{n-1}(\omega)}(z_{n-1}^1+y_{n-1})-f_{\sigma^{n-1}(\omega)}(z_{n-1}^2+y_{n-1})\rVert \bigg{)}\\
&\le \sup_{n\in \mathbb{Z}}\bigg{(}2K(\sigma^n (\omega)) \delta (n)^{-1}\frac{c}{K(\sigma^n (\omega))} \lVert z_{n-1}^1- z_{n-1}^2\rVert \bigg{)}\\
&\le \sup_{n\in \mathbb{Z}}\bigg{(}2c\delta (n)^{-1}\lVert z_{n-1}^1- z_{n-1}^2\rVert_{\sigma^{n-1}(\omega)} \bigg{)}\\
&=\sup_{n\in \mathbb{Z}}\bigg{(}2c\frac{\delta(n-1)}{\delta (n)} \delta (n-1)^{-1}\lVert z_{n-1}^1- z_{n-1}^2\rVert_{\sigma^{n-1}(\omega)} \bigg{)}\\
&\le 2ce^{\lambda-\varepsilon}\sup_{n\in \mathbb{Z}}\bigg{(} \delta (n-1)^{-1}\lVert z_{n-1}^1- z_{n-1}^2\rVert_{\sigma^{n-1}(\omega)} \bigg{)}\\
&=2ce^{\lambda-\varepsilon}\lVert \mathbf z^1-\mathbf z^2\rVert_{\delta, \infty},
\end{split}
\]
and the conclusion of the lemma follows.
\end{proof}
For $\mathbf z\in Y_{\delta, \infty}$, let
\begin{equation}\label{eq: op T}
T(\mathbf z)=\Gamma_\omega S(\mathbf z).
\end{equation}
It follows from Lemmas~\ref{l2} and~\ref{l4} that
\begin{equation}\label{954}
\lVert T(\mathbf z^1)-T(\mathbf z^2)\rVert_{\delta, \infty} \le 2ce^{\lambda-\varepsilon} \frac{1+e^{-\varepsilon}}{1-e^{-\varepsilon}} \lVert \mathbf z^1-\mathbf z^2\rVert_{\delta, \infty}, \quad \text{for $\mathbf z^i \in Y_{\delta, \infty}$, $i=1,2$.}
\end{equation}
Let $L>0$ be the unique solution of
\[
\frac{1+e^{-\varepsilon}}{1-e^{-\varepsilon}}+2ce^{\lambda-\varepsilon} \frac{1+e^{-\varepsilon}}{1-e^{-\varepsilon}}L=L,
\]
that is,
\[
L=\frac{1+e^{-\varepsilon}}{1-e^{-\varepsilon}}\bigg{(}1-2ce^{\lambda-\varepsilon} \frac{1+e^{-\varepsilon}}{1-e^{-\varepsilon}}\bigg{)}^{-1},
\]
which is well-defined and positive thanks to~\eqref{contraction}.
Set
\[
D:=\{ \mathbf z\in Y_{\delta, \infty}: \lVert \mathbf z\rVert_{\delta, \infty} \le L\}.
\]
The final ingredient of the proof is the following lemma.
\begin{lemma}\label{l5}
We have that $T(D)\subset D$.
\end{lemma}
\begin{proof}[Proof of the lemma]
We begin by observing that~\eqref{ln} and~\eqref{pseudo} imply that
\[
\begin{split}
\lVert S(\mathbf 0)\rVert_{\delta, \infty}&=\sup_{n\in \mathbb{Z}} (\delta(n)^{-1}\lVert y_n-F_{\sigma^{n-1}(\omega)}(y_{n-1})\rVert_{\sigma^n (\omega)})\\
&\le \sup_{n\in \mathbb{Z}} (\delta(n)^{-1}2K(\sigma^n (\omega))\lVert y_n-F_{\sigma^{n-1}(\omega)}(y_{n-1})\rVert)\\
&\le 1.
\end{split}
\]
Hence, it follows from~\eqref{954} that for any $\mathbf z\in D$, we have that
\[
\begin{split}
\lVert T(\mathbf z)\rVert_{\delta, \infty} &\le \lVert T(\mathbf 0)\rVert_{\delta, \infty}+\lVert T(\mathbf z)-T(\mathbf 0)\rVert_{\delta, \infty} \\
&\le \lVert \Gamma_\omega \rVert \cdot \lVert S(\mathbf 0)\rVert_{\delta, \infty}+\lVert T(\mathbf z)-T(\mathbf 0)\rVert_{\delta, \infty} \\
&\le \frac{1+e^{-\varepsilon}}{1-e^{-\varepsilon}}+2ce^{\lambda-\varepsilon} \frac{1+e^{-\varepsilon}}{1-e^{-\varepsilon}} \lVert \mathbf z\rVert_{\delta, \infty}\\
&\le \frac{1+e^{-\varepsilon}}{1-e^{-\varepsilon}}+2ce^{\lambda-\varepsilon} \frac{1+e^{-\varepsilon}}{1-e^{-\varepsilon}}L\\
&=L,
\end{split}
\]
and thus the desired conclusion holds.
\end{proof}
By~\eqref{954} and Lemma~\ref{l5}, we have that $T$ is a contraction on $D$ and thus it has a unique fixed point $\mathbf z=(z_n)_{n\in \mathbb{Z}}\in D$. Hence, $\Gamma_\omega S(\mathbf z)=\mathbf z$ and consequently Lemma~\ref{l3} implies that
\[
\begin{split}
&z_n-A(\sigma^{n-1}(\omega))z_{n-1} \\
&=f_{\sigma^{n-1} (\omega)}(z_{n-1}+y_{n-1})-f_{\sigma^{n-1} (\omega)}(y_{n-1})+F_{\sigma^{n-1} (\omega)}(y_{n-1})-y_{n} \\
&=f_{\sigma^{n-1} (\omega)}(z_{n-1}+y_{n-1})+A(\sigma^{n-1}(\omega))y_{n-1}-y_n,
\end{split}
\]
for each $n\in \mathbb{Z}$. Therefore, the sequence $(x_n)_{n\in \mathbb{Z}}$ defined by
\[
x_n=y_n+z_n, \quad n\in \mathbb{Z},
\]
is a solution of~\eqref{nde}. Moreover, we have (using~\eqref{ln}) that
\[
\sup_{n\in \mathbb{Z}} (\delta (n)^{-1}\lVert x_n-y_n\rVert) \le \sup_{n\in \mathbb{Z}} (\delta (n)^{-1}\lVert x_n-y_n \rVert_{\sigma^n (\omega)})=\lVert \mathbf z\rVert_{\delta, \infty} \le L,
\]
which readily implies~\eqref{shad}.
\end{proof}
Let us discuss the relationship between Theorem \ref{t1} and previous results available in the literature.
\begin{remark}\label{remark 2}
Along with results due to Katok, Hirayama and many others (see \cite{BP07,Pal00,Pil99}), Theorem \ref{t1} can be regarded as a version of the shadowing property for nonuniformly hyperbolic dynamics. Nevertheless, to the best of our knowledge, all previous results rely on stronger assumptions, such as differentiability, invertibility, boundedness, compactness or finite-dimensionality, that are not present in Theorem \ref{t1}. Moreover, the error allowed in the one-step iteration of the dynamics in \eqref{pseudo} is not necessarily uniform over time, as it is in the usual shadowing-type results. For instance, if $\varepsilon<\lambda$, the sequence $\delta(n)=e^{(\lambda-\varepsilon)|n|}$ is $e^{\lambda-\varepsilon}$-admissible, so the error allowed in \eqref{pseudo} can even grow exponentially fast. In particular, Theorem \ref{t1} extends previous results even in the nonautonomous context and under a uniform exponential dichotomy assumption (see Section \ref{sec: settings} for applications to these settings).
\end{remark}
As a consequence of the previous construction, we obtain that the solution of \eqref{nde} is actually unique whenever we require the deviation of the pseudotrajectory from the true trajectory to be small with respect to the adapted norms. Indeed, we have the following result.
\begin{corollary}\label{cor: shad new norm}
Suppose that the hypotheses of Theorem \ref{t1} hold and let $(y_n)_{n\in \mathbb{Z}}$ be a sequence satisfying \eqref{pseudo}. Then, there exists a solution $(x_n)_{n\in \mathbb{Z}}$ of~\eqref{nde} satisfying
\begin{equation*}
\lVert x_n-y_n\rVert _{\sigma^n(\omega)} \le L\delta (n), \quad \text{for each $n\in \mathbb{Z}$.}
\end{equation*}
Moreover, this solution is unique.
\end{corollary}
\begin{proof}
Existence of such a solution follows readily from the proof of Theorem \ref{t1}, so it remains to establish uniqueness.
Let $(x_n)_{n\in \mathbb{Z}}$ be a solution of~\eqref{nde} associated to $(y_n)_{n\in \mathbb{Z}}$ by the ``existence part'' and consider $\mathbf w=(w_n)_{n\in \mathbb{Z}}$ given by $w_n=x_n-y_n$. We start by observing that $\mathbf w$ is a fixed point of the operator $T$ given by \eqref{eq: op T}. Indeed,
\begin{displaymath}
\begin{split}
\left( S(\mathbf w) \right)_n &= g_{n-1}(w_{n-1}) \\
&=f_{\sigma^{n-1}(\omega)}(w_{n-1}+y_{n-1})-f_{\sigma^{n-1}(\omega)}(y_{n-1})+F_{\sigma^{n-1}(\omega)}(y_{n-1})-y_n\\
&=f_{\sigma^{n-1}(\omega)}(x_{n-1})+A(\sigma^{n-1}(\omega))y_{n-1}-y_n\\
&=f_{\sigma^{n-1}(\omega)}(x_{n-1})+A(\sigma^{n-1}(\omega))x_{n-1}-A(\sigma^{n-1}(\omega))w_{n-1}-y_n\\
&=x_n-A(\sigma^{n-1}(\omega))w_{n-1}-y_n\\
&=w_n-A(\sigma^{n-1}(\omega))w_{n-1}.
\end{split}
\end{displaymath}
Consequently, using property \eqref{proj},
\begin{displaymath}
\begin{split}
& \left(\Gamma_\omega(S(\mathbf w))\right)_n \\
&=\sum_{k=0}^\infty \EuScript{A}(\sigma^{n-k}(\omega),k)\Pi(\sigma^{n-k}(\omega))(S(\mathbf w))_{n-k} \\
&-\sum_{k=1}^\infty \EuScript{A}(\sigma^{n+k}(\omega),-k)\left(\text{\rm Id}-\Pi(\sigma^{n+k}(\omega))\right)(S(\mathbf w))_{n+k} \\
&=\sum_{k=0}^\infty \EuScript{A}(\sigma^{n-k}(\omega),k)\Pi(\sigma^{n-k}(\omega))\left(w_{n-k}-A(\sigma^{n-k-1}(\omega))w_{n-k-1}\right) \\
&-\sum_{k=1}^\infty \EuScript{A}(\sigma^{n+k}(\omega),-k)\left(\text{\rm Id}-\Pi(\sigma^{n+k}(\omega))\right)\left(w_{n+k}-A(\sigma^{n+k-1}(\omega))w_{n+k-1}\right) \\
&= \Pi(\sigma^{n}(\omega))w_{n}+\left(\text{\rm Id}-\Pi(\sigma^{n}(\omega))\right)w_{n}\\
&= w_n.
\end{split}
\end{displaymath}
Therefore, recalling that $T(\mathbf w)=\Gamma_\omega(S(\mathbf w))$, it follows that
\begin{displaymath}
T(\mathbf w) = \mathbf w,
\end{displaymath}
as claimed.
Moreover,
\begin{displaymath}
\begin{split}
\lVert \mathbf w\rVert_{\delta,\infty}&=\sup_{n\in \mathbb{Z}} (\delta (n)^{-1}\lVert w_n \rVert_{\sigma^n (\omega)})\\
&=\sup_{n\in \mathbb{Z}} (\delta (n)^{-1}\lVert x_n-y_n \rVert_{\sigma^n (\omega)})\\
&\leq L.
\end{split}
\end{displaymath}
Consequently, since $T$ is a contraction from $D=\{ \mathbf z\in Y_{\delta, \infty}: \lVert \mathbf z\rVert_{\delta, \infty} \le L\}$ to itself, its fixed point in $D$ is unique and the result follows.
\end{proof}
\begin{corollary}[Expansivity] \label{cor: expansivity}
Suppose that the hypotheses of Theorem~\ref{t1} hold and let $(x_n)_{n\in \mathbb{Z}}$ and $(y_n)_{n\in \mathbb{Z}}$ be solutions of~\eqref{nde} satisfying
\begin{equation*}
\lVert x_n-y_n\rVert _{\sigma^n(\omega)} \le L\delta (n), \quad \text{for each $n\in \mathbb{Z}$.}
\end{equation*}
Then, $(x_n)_{n\in \mathbb{Z}}=(y_n)_{n\in \mathbb{Z}}$.
\end{corollary}
\begin{proof}
Obviously, $(y_n)_{n\in \mathbb{Z}}$ satisfies \eqref{pseudo} and, moreover, it is shadowed both by itself and by $(x_n)_{n\in \mathbb{Z}}$. Thus, the uniqueness given by Corollary~\ref{cor: shad new norm} implies that $(x_n)_{n\in \mathbb{Z}}=(y_n)_{n\in \mathbb{Z}}$, as claimed.
\end{proof}
\begin{corollary}\label{cor1}
Assume that $\EuScript{A}$ admits a tempered exponential dichotomy and let $\Omega'\subset \Omega$, $\lambda >0$ and $K\colon \Omega \to (0, \infty)$ be as in Definition~\ref{xxv}. Furthermore, suppose that $c\ge 0$ satisfies
\begin{equation}\label{contraction2}
2c\frac{1+e^{-\lambda}}{1-e^{-\lambda}}<1,
\end{equation}
and that~\eqref{non} holds.
Then, there exists $L=L(\lambda, c)>0$ such that for every $t>0$, $\omega \in \Omega'$ and every sequence $(y_n)_{n\in \mathbb{Z}}\subset X$ satisfying
\[
\lVert y_n-F_{\sigma^{n-1}(\omega)}(y_{n-1})\rVert \le \frac{t}{2K(\sigma^n (\omega))} \quad \text{for $n\in \mathbb{Z}$,}
\]
there exists a solution $(x_n)_{n\in \mathbb{Z}}$ of~\eqref{nde} with the property that
\[
\lVert x_n-y_n\rVert \le Lt, \quad \text{for each $n\in \mathbb{Z}$.}
\]
\end{corollary}
\begin{proof}
It only remains to apply Theorem~\ref{t1} in the particular case when $\delta \colon \mathbb{Z} \to (0, \infty)$ is the constant map $\delta(n)=t$, $n\in \mathbb{Z}$. Indeed, observe that $\delta$ is a $1$-admissible sequence and thus the desired conclusion follows from Theorem~\ref{t1} applied with $\varepsilon=\lambda$ (observe that in this case~\eqref{contraction} and~\eqref{contraction2}
coincide).
\end{proof}
We stress that Theorem~\ref{t1} in particular applies to linear dynamics
\begin{equation}\label{ldyn}
x_{n+1}=A(\sigma^n (\omega))x_n, \quad n\in \mathbb{Z}.
\end{equation}
\begin{corollary}
Assume that $\EuScript{A}$ admits a tempered exponential dichotomy and let $\Omega'\subset \Omega$, $\lambda >0$ and $K\colon \Omega \to (0, \infty)$ be as in Definition~\ref{xxv}.
Furthermore, suppose that $0<\varepsilon \le \lambda$.
Then, there exists $L=L(\lambda, \varepsilon)>0$ such that for every $e^{\lambda-\varepsilon}$-admissible sequence $\delta \colon \mathbb{Z} \to (0, \infty)$, every $\omega \in \Omega'$ and every sequence $(y_n)_{n\in \mathbb{Z}}\subset X$ satisfying
\begin{equation*}
\lVert y_n-A(\sigma^{n-1}(\omega))y_{n-1}\rVert \le \frac{\delta (n)}{2K(\sigma^n (\omega))} \quad \text{for $n\in \mathbb{Z}$,}
\end{equation*}
there is a solution $(x_n)_{n\in \mathbb{Z}}$ of~\eqref{ldyn} with the property that~\eqref{shad} holds.
\end{corollary}
\begin{proof}
The desired conclusion follows directly from Theorem~\ref{t1} applied to the case when $f_\omega\equiv 0$ for each $\omega \in \Omega$. In this case, $c=0$ and consequently~\eqref{contraction} is trivially satisfied.
\end{proof}
We also have the following version of Corollary~\ref{cor1}.
\begin{corollary}\label{2:59}
Assume that $\EuScript{A}$ admits a tempered exponential dichotomy and let $\Omega'\subset \Omega$, $\lambda >0$ and $K\colon \Omega \to (0, \infty)$ be as in Definition~\ref{xxv}.
Then, there exists $L=L(\lambda)>0$ such that for
every $t>0$, $\omega \in \Omega'$ and every sequence $(y_n)_{n\in \mathbb{Z}}\subset X$ satisfying
\[
\lVert y_n-A(\sigma^{n-1}(\omega))y_{n-1} \rVert \le \frac{t}{2K(\sigma^n (\omega))} \quad \text{for $n\in \mathbb{Z}$,}
\]
there exists a solution $(x_n)_{n\in \mathbb{Z}}$ of~\eqref{ldyn} with the property that
\[
\lVert x_n-y_n\rVert \le Lt, \quad \text{for each $n\in \mathbb{Z}$.}
\]
\end{corollary}
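As a simple sanity check (a toy numerical sketch of our own, not part of the results), one can instantiate Corollary~\ref{2:59} for the scalar cocycle generated by $A(\omega)x=\frac{x}{2}$, where $\Pi(\omega)=\text{\rm Id}$, $K(\omega)=1$ and $\lambda=\log 2$; the construction in the proof of Theorem~\ref{t1} gives $L=\frac{1+e^{-\lambda}}{1-e^{-\lambda}}=3$, and the true orbit through the initial point of the pseudo-orbit already shadows it within $Lt$:

```python
import math
import random

a = 0.5                  # A(omega)x = a*x for every omega: lambda = log 2, K = 1
lam = -math.log(a)
L = (1 + math.exp(-lam)) / (1 - math.exp(-lam))   # shadowing constant, here L = 3

t = 1e-3
N = 200
random.seed(0)

# pseudo-orbit: y_{n+1} = a*y_n + xi_n with |xi_n| <= t/2 (condition of Corollary 2:59)
y = [1.0]
for _ in range(N):
    y.append(a * y[-1] + random.uniform(-t / 2, t / 2))

# a true orbit x_{n+1} = a*x_n started at x_0 = y_0
x = [y[0]]
for _ in range(N):
    x.append(a * x[-1])

# the one-step errors accumulate geometrically: sup_n |x_n - y_n| <= (t/2)/(1-a) = t <= L*t
err = max(abs(xn - yn) for xn, yn in zip(x, y))
print(err, L * t)
```

Here the shadowing orbit is chosen naively as the orbit through $y_0$; this suffices in this toy case because the whole space is the stable direction.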
\section{Conservation of Lyapunov exponents} \label{sec: conservation}
In this section we present an application of our results to the theory of Lyapunov exponents. More precisely, we prove that Lyapunov exponents associated with a random linear dynamics admitting a tempered exponential dichotomy remain unchanged under small nonlinear perturbations.
In order to simplify the exposition, we restrict ourselves to the case when $X=\mathbb{R}^d$ and $A\colon \Omega \to GL(d,\mathbb{R})$. Moreover, we consider the linear cocycle $\EuScript{A} \colon \Omega \times \mathbb{Z} \to GL(d,\mathbb{R})$ defined by
\begin{displaymath}
\EuScript{A}(\omega, n)=\begin{cases}
A(\sigma^{n-1}(\omega)) \cdots A(\sigma (\omega))A(\omega) & \text{if $n\geq 1$;}\\
\text{\rm Id} & \text{if $n=0$;}\\
A(\sigma^{-|n|}(\omega))^{-1}\cdots A(\sigma^{-2}(\omega))^{-1}A(\sigma^{-1} (\omega))^{-1} & \text{if $n<0$.}
\end{cases}
\end{displaymath}
Assuming $\log ^+\|A(\omega)^{\pm 1}\|\in L^1(\mathbb{P})$, it follows from the \emph{Oseledets theorem}~\cite{Osel} that there exist numbers $\lambda _1(\EuScript{A},\mathbb{P})>\ldots > \lambda _{k}(\EuScript{A},\mathbb{P})$, called the \emph{Lyapunov exponents}, and a decomposition $\mathbb{R}^d=E^1_{\omega}\oplus \ldots \oplus E^k_{\omega}$, called the \emph{Oseledets splitting}, into vector subspaces depending measurably on $\omega$ such that for $\mathbb{P}$-almost every $\omega \in \Omega$,
\begin{equation}\label{eq: Lyap exp}
A(\omega)E^i_{\omega}=E^i_{\sigma(\omega)} \; \textrm{and} \; \lambda _i(\EuScript{A},\mathbb{P}) =\lim _{n\to \pm \infty} \dfrac{1}{n}\log \| \EuScript{A}(\omega,n)x\|
\end{equation}
for every non-zero $x\in E^i_{\omega}$ and $1\leq i \leq k$ (see for instance \cite{Via14}). By shrinking the set $\Omega '$ given in Definition \ref{xxv}, if necessary, we may assume that all these claims hold for every $\omega\in \Omega'$.
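To make \eqref{eq: Lyap exp} concrete, here is a toy numerical illustration (our own example, with arbitrarily chosen distributions) for an i.i.d. diagonal cocycle on $\mathbb{R}^2$; the Oseledets splitting is then the coordinate splitting and, by the Birkhoff ergodic theorem, the time averages in \eqref{eq: Lyap exp} converge to the expected logarithms of the diagonal entries:

```python
import random

random.seed(1)

# i.i.d. diagonal cocycle on R^2: A(omega_n) = diag(e^{u_n}, e^{-v_n}) with
# u_n, v_n drawn uniformly from {1, 2}.  The Oseledets subspaces are the two
# coordinate axes and the exponents are E[u] = 1.5 and -E[v] = -1.5.
n = 200_000
log_norm_e1 = sum(random.choice((1.0, 2.0)) for _ in range(n))   # log ||A(omega, n) e_1||
log_norm_e2 = -sum(random.choice((1.0, 2.0)) for _ in range(n))  # log ||A(omega, n) e_2||

lam1 = log_norm_e1 / n   # approximates lambda_1 = 1.5
lam2 = log_norm_e2 / n   # approximates lambda_2 = -1.5
print(lam1, lam2)
```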
From now on, we assume that the hypotheses of Theorem \ref{t1} hold. Moreover, we assume that the sequence $\delta\colon\mathbb{Z}\to (0,+\infty)$ grows sub-exponentially, that is,
\begin{equation}\label{eq: sub-exp}
\lim_{n\to \pm \infty} \frac{1}{n}\log \delta(n) =0
\end{equation}
and that the nonlinear perturbations $f_\omega :\mathbb{R}^d \to \mathbb{R}^d$, $\omega \in \Omega$, are small in the sense that
\begin{equation}\label{eq: bound f Lyap}
\|f_{\sigma^n(\omega)}(x)\|\le \frac{\delta(n)}{4K(\sigma^n(\omega))}
\end{equation}
for every $\omega \in \Omega$ and $x\in \mathbb{R}^d$. Observe that if $r:=\inf_{n\in \mathbb{Z}}\delta(n)>0$, then condition \eqref{eq: bound f Lyap} holds whenever
\begin{equation*}
\|f_{\omega}(x)\|\le \frac{r}{4K(\omega)},
\end{equation*}
for every $\omega \in \Omega$ and $x\in \mathbb{R}^d$. We refer to Section \ref{sec: settings} for some examples where the above condition holds. Decreasing the constant $c$ given in \eqref{non}, if necessary, we have that $F_\omega=A(\omega)+f_\omega$ is a homeomorphism (see \cite[Section 4.1]{BD19}) and thus we can consider the cocycle
\begin{displaymath}
\mathcal{F}(\omega, n)=\begin{cases}
F_{\sigma^{n-1}(\omega)} \circ \cdots \circ F_{\sigma (\omega)} \circ F_{\omega} & \text{if $n\geq 1$;}\\
\text{\rm Id} & \text{if $n=0$;}\\
F_{\sigma^{-|n|}(\omega)}^{-1}\circ \cdots \circ F_{\sigma^{-2}(\omega)}^{-1}\circ F_{\sigma^{-1} (\omega)}^{-1} & \text{if $n<0$.}
\end{cases}
\end{displaymath}
We define the \emph{forward and backward Lyapunov exponents of $\mathcal{F}$ at $\omega$ in the direction $x\in \mathbb{R}^d$}, respectively, by
\begin{displaymath}
\lambda^+ (\mathcal{F},\omega,x) =\limsup_{n\to + \infty} \dfrac{1}{n}\log \| \mathcal{F}(\omega,n)x\|
\end{displaymath}
and
\begin{displaymath}
\lambda^- (\mathcal{F},\omega,x) =\limsup_{n\to - \infty} \dfrac{1}{n}\log \| \mathcal{F}(\omega,n)x\| .
\end{displaymath}
\begin{theorem} \label{theo: conservation}
For every $i=1,2,\ldots,k$ there exist $x\in \mathbb{R}^d$, $\omega\in \Omega'$ and $*\in \{+,-\}$ so that
$$\lambda_i(\EuScript{A},\mathbb{P})=\lambda^* (\mathcal{F},\omega,x).$$
Reciprocally, for every $\omega\in \Omega'$ there exists $p:=p(\omega)\in \mathbb{R}^d$ so that for every $x\in \mathbb{R}^d\setminus \{p\}$ there exists $i\in \{1,2,\ldots,k\}$ such that
$$\lambda^+ (\mathcal{F},\omega,x)=\lambda_i(\EuScript{A},\mathbb{P}) \text{ or } \lambda^- (\mathcal{F},\omega,x)=\lambda_i(\EuScript{A},\mathbb{P}).$$
\end{theorem}
In other words, we prove that all the Lyapunov exponents of $\EuScript{A}$ are also Lyapunov exponents of $\mathcal{F}$ and that, reciprocally, for every $\omega\in \Omega'$ and $x\in \mathbb{R}^d\setminus \{p\}$, at least one of the exponents $\lambda^+ (\mathcal{F},\omega,x)$ and $\lambda^- (\mathcal{F},\omega,x)$ of $\mathcal{F}$ is also a Lyapunov exponent of $\EuScript{A}$. To the best of our knowledge, no such result has appeared previously in the literature.
\begin{proof} Given $i\in \{1,2,\ldots , k\}$, let $\omega\in \Omega'$ and $x\in E^i_{\omega}$ be such that
\begin{equation}\label{eq: aux 1 LE}
\lambda _i(\EuScript{A},\mathbb{P}) =\lim _{n\to \pm \infty} \dfrac{1}{n}\log \| \EuScript{A}(\omega,n)x\|.
\end{equation}
Since $\EuScript{A}$ admits a tempered exponential dichotomy, we have that all the Lyapunov exponents of $\EuScript{A}$ are nonzero. Assume for the moment that $\lambda_i(\EuScript{A},\mathbb{P})<0$. Considering $(x_n)_{n\in \mathbb{Z}}$ given by $x_n=\EuScript{A}(\omega,n)x$, we have that $x_{n+1}=A(\sigma^n(\omega))x_n$ for every $n \in \mathbb{Z}$. Moreover,
\begin{displaymath}
\|x_{n+1}-F_{\sigma^n(\omega)}(x_n)\|=\|-f_{\sigma^n(\omega)}(x_n)\|\le \frac{\delta(n)}{4K(\sigma^n(\omega))}.
\end{displaymath}
In particular, $(x_n)_{n\in \mathbb{Z}}$ satisfies \eqref{pseudo} and thus, by Theorem \ref{t1}, there exists a sequence $(y_n)_{n\in \mathbb{Z}}$ satisfying $y_{n+1}=F_{\sigma^n(\omega)}(y_n)$, $n\in \mathbb{Z}$, so that
\begin{equation}\label{eq: aux 2 LE}
\|x_n-y_n\|\le L\delta(n) \text{ for every } n\in \mathbb{Z}.
\end{equation}
We now show that $\lambda_i(\EuScript{A},\mathbb{P})=\lambda^- (\mathcal{F},\omega,y_0)$. It follows from \eqref{eq: aux 1 LE} that
$$-\lambda _i(\EuScript{A},\mathbb{P}) =\lim _{n\to +\infty} \dfrac{1}{n}\log \| \EuScript{A}(\omega,-n)x\|>0$$
which can be rewritten as
$$-\lambda _i(\EuScript{A},\mathbb{P}) =\lim _{n\to +\infty} \dfrac{1}{n}\log \| x_{-n}\|>0.$$
Thus, using \eqref{eq: sub-exp} and \eqref{eq: aux 2 LE} it follows that
$$\lim _{n\to +\infty} \dfrac{1}{n}\log \| y_{-n}\|=-\lambda_i(\EuScript{A},\mathbb{P})$$
and consequently,
$$\lambda^{-} (\mathcal{F},\omega,y_0)=\limsup_{n\to -\infty} \dfrac{1}{n}\log \| \mathcal{F}(\omega,n)y_0\|=\lambda_i(\EuScript{A},\mathbb{P}).$$
Similarly, if $\lambda_i(\EuScript{A},\mathbb{P})>0$, we obtain that
$$\lambda^+ (\mathcal{F},\omega,y_0)=\limsup_{n\to +\infty} \dfrac{1}{n}\log \| \mathcal{F}(\omega,n)y_0\|=\lambda_i(\EuScript{A},\mathbb{P}).$$
We now prove the converse statement by proceeding similarly to what we did above. Let $x\in \mathbb{R}^d$ and $\omega\in \Omega'$ be given. Considering $(x_n)_{n\in \mathbb{Z}}$ given by $x_n=\mathcal{F}(\omega,n)x$, we have that $x_{n+1}=F_{\sigma^n(\omega)}(x_n)$, $n\in \mathbb{Z}$. Moreover,
\begin{displaymath}
\|x_{n+1}-A(\sigma^n(\omega))x_n\|=\|f_{\sigma^n(\omega)}(x_n)\|\le \frac{\delta(n)}{4K(\sigma^n(\omega))}.
\end{displaymath}
In particular, $(x_n)_{n\in \mathbb{Z}}$ satisfies \eqref{pseudo} for $\delta'(n)=\frac{\delta(n)}{2}$ instead of $\delta$ and in the case when $f_\omega\equiv 0$ for every $\omega\in \Omega$. By Corollary \ref{cor: shad new norm}, there exists a unique sequence $(y_n)_{n\in \mathbb{Z}}$ satisfying $y_{n+1}=A(\sigma^n(\omega))y_n$ for $n\in \mathbb{Z}$, and such that
\begin{equation}\label{eq: aux 3 LE}
\|x_n-y_n\|_{\sigma^n(\omega)}\le \frac{L\delta(n)}{2} \text{ for every } n\in \mathbb{Z}.
\end{equation}
We are now in a position to construct the point $p$ from the statement of the theorem.
\begin{lemma}\label{claim}
There exists a unique point $p\in \mathbb{R}^d$, for which the sequence $(y_n)_{n\in \mathbb{Z}}$ given by \eqref{eq: aux 3 LE} satisfies $y_n=0$ for every $n\in \mathbb{Z}$.
\end{lemma}
\begin{proof}[Proof of the Lemma~\ref{claim}]
We start by observing that if such a point exists, then it is unique. Indeed, given $x, z\in \mathbb{R}^d$, let us consider $(x_n)_{n\in \mathbb{Z}}$ and $(z_n)_{n\in \mathbb{Z}}$ given by $x_n=\mathcal{F}(\omega,n)x$ and $z_n=\mathcal{F}(\omega,n)z$, respectively. By the previous construction, we know that each of these sequences is shadowed by an actual orbit of $\EuScript{A}(\omega, \cdot)$. Suppose that both $(x_n)_{n\in \mathbb{Z}}$ and $(z_n)_{n\in \mathbb{Z}}$ are shadowed by the null sequence $(0)_{n\in \mathbb{Z}}$, that is, $\|x_n\|_{\sigma^n (\omega)}\le \frac{L\delta(n)}{2}$ and $\|z_n\|_{\sigma^n (\omega)}\le \frac{L\delta(n)}{2}$ for every $n\in \mathbb{Z}$. Then, $\|x_n-z_n\|_{\sigma^n (\omega)}\le L\delta(n)$ for every $n\in \mathbb{Z}$ and it follows from Corollary~\ref{cor: expansivity} that $x_n=z_n$ for every $n\in \mathbb{Z}$. In particular, $x=x_0=z_0=z$, as claimed.
Existence of this point can be proved by observing that the null sequence $(0)_{n\in \mathbb{Z}}$ satisfies \eqref{pseudo} and applying Corollary \ref{cor: shad new norm}. By doing so, we obtain a sequence $(x_n)_{n\in \mathbb{Z}}\subset X$ such that $x_n=\mathcal F(\omega, n)x_0$ for $n\in \mathbb{Z}$ and
$\| x_n\|_{\sigma^n(\omega)}\le \frac{L\delta(n)}{2}$ for $n\in \mathbb{Z}$. Thus, $p=x_0$ satisfies the desired conclusion.
\end{proof}
Returning to the proof of the theorem and assuming that $x\neq p$, we have (since $(y_n)_{n\in \mathbb{Z}}$ is not the null sequence) that
\begin{displaymath}
\lim_{n\to +\infty}\frac{1}{n}\log \|y_n\|=\lim_{n\to +\infty}\frac{1}{n}\log \|\EuScript{A} (\omega,n)y_0\|=\lambda_i(\EuScript{A},\mathbb{P})
\end{displaymath}
for some $i\in \{1,2,\ldots,k\}$. Now, since $\EuScript{A}$ admits a tempered exponential dichotomy it follows that $\lambda_i(\EuScript{A},\mathbb{P})\neq 0$. If $\lambda_i(\EuScript{A},\mathbb{P})>0$ then using \eqref{ln}, \eqref{eq: sub-exp} and \eqref{eq: aux 3 LE}, we conclude that
\begin{displaymath}
\lambda_i(\EuScript{A},\mathbb{P})=\lim_{n\to +\infty}\frac{1}{n}\log \|x_n\|=\lim_{n\to +\infty}\frac{1}{n}\log \|\mathcal{F} (\omega,n)x\|=\lambda^+(\mathcal{F},\omega,x).
\end{displaymath}
On the other hand, if $\lambda_i(\EuScript{A},\mathbb{P})<0$ then we similarly conclude that
\begin{displaymath}
\lambda_i(\EuScript{A},\mathbb{P})=\lambda^-(\mathcal{F},\omega,x).
\end{displaymath}
The proof of the theorem is complete.
\end{proof}
\begin{remark}
We observe that the conclusion of Theorem \ref{theo: conservation} is sharp in the sense that we may have $\lambda^+ (\mathcal{F},\omega,x)\neq \lambda^- (\mathcal{F},\omega,x)$ and, consequently, in the reciprocal part we do not necessarily have $\lambda^+ (\mathcal{F},\omega,x)=\lambda_i(\EuScript{A},\mathbb{P})= \lambda^- (\mathcal{F},\omega,x)$. Indeed, let $(\Omega, \mathcal F, \mathbb P)$ be a probability space and $\sigma \colon \Omega \to \Omega$ an ergodic $\mathbb P$-preserving invertible transformation, and assume moreover that $\mathbb{P}$ is non-atomic. Consider $A\colon \Omega\to GL(1,\mathbb{R})$ given by $A(\omega)x=\frac{1}{2} x$ for every $\omega\in \Omega$ and $x\in \mathbb{R}$, and let $\EuScript{A}\colon \Omega \times \mathbb{Z}\to GL(1,\mathbb{R})$ be the cocycle generated by $A$ as in the beginning of this section. Observe that this cocycle admits a tempered exponential dichotomy with $\Omega'=\Omega$, $\Pi(\omega)=\text{\rm Id}$, $K(\omega)=1$ for every $\omega\in \Omega$ and $\lambda=\log 2$. Fix a non-periodic point $\omega\in \Omega$ and $\tau>0$. Now, for each $\omega'\in \Omega$, consider $f_{\omega'}\colon \mathbb{R}\to \mathbb{R}$ given by $f_{\omega'}(x)=\tau$ for every $x\in \mathbb{R}$ if $\omega'\in \{\sigma^n(\omega)\}_{n\geq 0}$ and $f_{\omega'}\equiv0$ otherwise. It is now easy to see that, whenever $\tau$ is sufficiently small, the hypotheses of Theorems \ref{t1} and \ref{theo: conservation} are satisfied and, moreover, given $x\in \mathbb{R}\setminus\{0\}$, $\lambda^- (\mathcal{F},\omega,x)=-\log 2$ while $\lambda^+ (\mathcal{F},\omega,x)=0$.
\end{remark}
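The two exponents in this example can also be checked numerically. The following sketch is our own illustration (with the concrete choices $\tau=0.01$ and $x_0=1$, which are not part of the argument): forward along the orbit the perturbed system is $x_{k+1}=\tfrac12 x_k+\tau$, whose trajectories stabilize near $2\tau$, so $\lambda^+=0$; backward along the orbit the perturbation vanishes, so $x_{-k}=2^k x_0$ and $\lambda^-=-\log 2$.

```python
import math

# Our own numerical sketch of the example above (tau, x0 are arbitrary choices).
# Forward dynamics: x_{k+1} = x_k/2 + tau, so x_k -> 2*tau and
# lambda^+ = lim (1/k) log|x_k| = 0.
tau, x0, n = 0.01, 1.0, 2000
x = x0
for _ in range(n):
    x = 0.5 * x + tau
lam_plus = math.log(abs(x)) / n          # close to 0 for large n

# Backward dynamics: off the forward orbit the perturbation is 0,
# so x_{-k} = 2^k * x0 and lambda^- = lim (1/(-k)) log|x_{-k}| = -log 2.
k = 50
lam_minus = -math.log((2.0 ** k * x0) / x0) / k

print(lam_plus, lam_minus)
```

The forward estimate converges only at rate $\log(2\tau)/n$, which is why a fairly large $n$ is used.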
\section{Applicable Settings} \label{sec: settings}
The hypotheses on the perturbations allowed in our main results are written in terms of constants coming from the hyperbolicity.
For instance, condition \eqref{non} requires the perturbations $f_\omega$ to be Lipschitz. In addition, the Lipschitz constants of the maps $f_\omega$ depend on the strength of the hyperbolicity. In this section we present some settings where these hypotheses can easily be fulfilled and where, thus, our results can be applied.
\subsection{Uniformly hyperbolic systems}
We start by applying our main results to the case when $\EuScript{A}$ is uniformly hyperbolic.
\begin{definition}\label{2x44}
We say that $\EuScript{A}$ admits a \emph{uniform exponential dichotomy} if there exist $K, \lambda >0$ and a family of projections $\Pi(\omega)$, $\omega \in \Omega$ such that for every $\omega \in \Omega$:
\begin{enumerate}
\item $\Pi(\sigma^n (\omega))\EuScript{A}(\omega, n)=\EuScript{A}(\omega, n)\Pi(\omega)$ for $n\in \mathbb N$;
\item for $n\in \mathbb N$,
\[
\EuScript{A}(\omega, n)\rvert_{\Ker \Pi(\omega)} \colon \Ker \Pi(\omega) \to \Ker \Pi(\sigma^n (\omega))
\]
is invertible;
\item
\begin{equation}\label{td111}
\lVert \EuScript{A}(\omega, n)\Pi(\omega)\rVert \le Ke^{-\lambda n}, \quad n\ge 0
\end{equation}
and
\begin{equation}\label{td22}
\lVert \EuScript{A}(\omega, -n)(\text{\rm Id}-\Pi(\omega))\rVert \le Ke^{-\lambda n}, \quad n\ge 0,
\end{equation}
where
\[
\EuScript{A}(\omega, -n):=\bigg{(}\EuScript{A}(\sigma^{-n}(\omega), n)\rvert_{\Ker \Pi (\sigma^{-n} (\omega))} \bigg{)}^{-1}.
\]
\end{enumerate}
\end{definition}
Then, we have the following consequence of Theorem~\ref{t1}.
\begin{corollary}\label{cor: unif}
Assume that $\EuScript{A}$ admits a uniform exponential dichotomy and let $\lambda, K >0$ be as in Definition~\ref{2x44}. Furthermore, suppose that $\varepsilon>0$, $c\ge 0$ are such that $\varepsilon \le \lambda$ and that~\eqref{contraction} holds.
Finally, we assume that
\begin{equation*}
\lVert f_\omega(x)-f_\omega(y)\rVert \le \frac c K \lVert x-y\rVert, \quad \text{for $\omega \in \Omega$ and $x, y\in X$.}
\end{equation*}
Then, there exists $L=L(\lambda, \varepsilon, c)>0$ such that for every $e^{\lambda-\varepsilon}$-admissible sequence
$\delta \colon \mathbb{Z} \to (0, \infty)$, $\omega \in \Omega$ and a sequence $(y_n)_{n\in \mathbb{Z}}\subset X$ such that
\begin{equation}\label{pseudo3}
\lVert y_n-F_{\sigma^{n-1}(\omega)}(y_{n-1})\rVert \le \delta (n) \quad \text{for $n\in \mathbb{Z}$,}
\end{equation}
there is a solution $(x_n)_{n\in \mathbb{Z}}$ of~\eqref{nde} such that~\eqref{shad} holds.
\end{corollary}
\begin{proof}
One only needs to apply Theorem~\ref{t1} in the case when $\Omega'=\Omega$, $K(\omega)=K$ and replace $\delta$ with $n\mapsto 2K\delta(n)$.
\end{proof}
As observed in Remark \ref{remark 2}, due to the flexibility of \eqref{pseudo3}, Corollary \ref{cor: unif} extends previous results (even under a uniform exponential dichotomy assumption).
As in the general case, our results in particular apply to linear dynamics~\eqref{ldyn}. We shall formulate only a version of Corollary~\ref{2:59} in this context.
\begin{corollary}\label{ff}
Assume that $\EuScript{A}$ admits a uniform exponential dichotomy and let $\lambda, K >0$ be as in Definition~\ref{2x44}.
Then, there exists $L=L(\lambda)>0$ such that for
any $t>0$, $\omega \in \Omega$ and a sequence $(y_n)_{n\in \mathbb{Z}}\subset X$ such that
\[
\lVert y_n-A(\sigma^{n-1}(\omega))y_{n-1} \rVert \le t \quad \text{for $n\in \mathbb{Z}$,}
\]
there exists a solution $(x_n)_{n\in \mathbb{Z}}$ of~\eqref{ldyn} with the property that
\[
\lVert x_n-y_n\rVert \le Lt, \quad \text{for each $n\in \mathbb{Z}$.}
\]
\end{corollary}
\begin{remark}
The result obtained in Corollary~\ref{ff} can be described as a Hyers-Ulam stability result for the random linear dynamics given by~\eqref{ldyn} under the assumption that it admits a uniform exponential dichotomy.
\end{remark}
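To make the Hyers-Ulam statement concrete, here is a small numerical sketch (our own illustration, not from the paper) for the scalar contraction $A=\tfrac12$, i.e. a uniform dichotomy with $\Pi=\mathrm{Id}$ and $K=1$: the defects $d_n=y_n-Ay_{n-1}$ of a $t$-pseudo-orbit are resummed into $e_n=\sum_{k\ge 0}A^k d_{n-k}$, and then $x_n=y_n-e_n$ is a true orbit with $|x_n-y_n|\le t/(1-A)=2t$, so here $L=2$.

```python
import random

# Our own sketch: shadowing for the scalar contraction A = 1/2.
# A t-pseudo-orbit y is shadowed by the true orbit x_n = y_n - e_n,
# where e_n resums the defects d_n = y_n - A*y_{n-1}.
random.seed(0)
A, t, N = 0.5, 0.1, 200

# build a pseudo-orbit with defects |d_n| <= t
y, d = [1.0], [0.0]
for _ in range(N):
    dn = random.uniform(-t, t)
    y.append(A * y[-1] + dn)
    d.append(dn)

# resum the defects (the series is truncated at the start of the window,
# whereas the corollary works over all of Z)
e = [0.0]
for n in range(1, N + 1):
    e.append(A * e[-1] + d[n])
x = [yn - en for yn, en in zip(y, e)]

# x is a true orbit and stays within L*t = t/(1-A) = 2t of y
assert all(abs(x[n] - A * x[n - 1]) < 1e-12 for n in range(1, N + 1))
assert max(abs(en) for en in e) <= t / (1 - A)
```

The geometric-series bound on $e_n$ is exactly where the uniform contraction rate enters the constant $L$.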
Let us now obtain a partial converse to Corollary~\ref{ff}.
\begin{proposition}
Assume that $\EuScript{A}$ is an invertible cocycle, i.e. that $A(\omega)$ is an invertible operator for each $\omega \in \Omega$. Furthermore, suppose that there exists $L>0$ such that for each sequence $(y_n)_{n\in \mathbb{Z}}\subset X$ such that
\begin{equation}\label{340}
\lVert y_n-A(\sigma^{n-1}(\omega))y_{n-1} \rVert \le 1 \quad \text{for $n\in \mathbb{Z}$,}
\end{equation}
there exists a solution $(x_n)_{n\in \mathbb{Z}}$ of~\eqref{ldyn} with the property that
\begin{equation}\label{opp}
\lVert x_n-y_n\rVert \le L, \quad \text{for each $n\in \mathbb{Z}$.}
\end{equation}
Finally, assume that~\eqref{ldyn} has no bounded solutions.
Then, $\EuScript{A}$ admits a uniform exponential dichotomy.
\end{proposition}
\begin{proof}
Let us fix $\omega \in \Omega$ and take an arbitrary nonzero $\mathbf z=(z_n)_{n\in \mathbb{Z}}\subset X$ such that $\lVert \mathbf z\rVert_\infty:=\sup_{n\in \mathbb{Z}} \lVert z_n\rVert<\infty$. Since $\EuScript{A}$ is invertible, we can take a sequence $(y_n)_{n\in \mathbb{Z}}\subset X$ such that
\[
y_n-A(\sigma^{n-1}(\omega))y_{n-1}=\frac{1}{\lVert \mathbf z\rVert_\infty}z_n, \quad \text{for $n\in \mathbb{Z}$.}
\]
Observe that $(y_n)_{n\in \mathbb{Z}}$ satisfies~\eqref{340}. Hence, there exists a solution $(x_n)_{n\in \mathbb{Z}}$ of~\eqref{ldyn} satisfying~\eqref{opp}. Consequently, the sequence $\mathbf w=(w_n)_{n\in \mathbb{Z}}\subset X$ given by
\[
w_n=\lVert \mathbf z\rVert_\infty (y_n-x_n), \quad n\in \mathbb{Z},
\]
satisfies $w_n=A(\sigma^{n-1}(\omega))w_{n-1}+z_n$ for $n\in \mathbb{Z}$. In addition,
\[
\lVert \mathbf w\rVert_\infty =\sup_{n\in \mathbb{Z}} \lVert w_n\rVert \le L\lVert \mathbf z\rVert_\infty.
\]
Hence, by applying results from~\cite{CL}, we conclude that $\EuScript{A}$ admits a uniform exponential dichotomy.
\end{proof}
\subsubsection{Theorem \ref{theo: conservation} for uniformly hyperbolic systems} Suppose we are in the setting of Section \ref{sec: conservation} and that the assumptions of Corollary \ref{cor: unif} are satisfied. Let $\delta:\mathbb{Z}\to (0,+\infty)$ be a sequence satisfying~\eqref{eq: sub-exp} and such that $r:=\inf_{n\in \mathbb{Z}} \delta(n)>0$. For instance, we can take $\delta$ to be a constant sequence or we can take $\delta$ given by $\delta(n)=|n|+1$, $n\in \mathbb{Z}$. Let $f_\omega:\mathbb{R}^d\to \mathbb{R}^d$, $\omega\in \Omega$, be such that
\begin{displaymath}
\|f_\omega(x)\|\leq \frac{r}{4K} \text{ for every } x\in \mathbb{R}^d.
\end{displaymath}
Observe that the above condition implies that \eqref{eq: bound f Lyap} holds and consequently Theorem \ref{theo: conservation} can be applied to this setting.
\subsection{Nonuniformly hyperbolic systems}\label{NHS}
Suppose that a cocycle $\EuScript{A}: \Omega \times \mathbb{N}_0 \to X$ admits a tempered exponential dichotomy and let $K\colon \Omega \to (0, \infty)$ be the tempered random variable from Definition~\ref{xxv}. It follows from \cite[Proposition 4.3.3 ii)]{A} that for every $\rho >0$ there exists a random variable $D=D_\rho: \Omega\to (0,+\infty)$ such that
\begin{equation}\label{eq: conseq temp}
K(\omega)\leq D(\omega) \text{ and } D(\sigma^n(\omega))\leq D(\omega )e^{\rho |n|},
\end{equation}
for $\mathbb P$-a.e. $\omega \in \Omega$ and $n\in \mathbb{Z}$.
Take now an arbitrary $\rho>0$ and consider the corresponding random variable $D=D_\rho:\Omega\to (0,+\infty)$ satisfying~\eqref{eq: conseq temp}. Given $T>0$, let us consider
\begin{displaymath}
\Omega_T'=\{\omega\in \Omega'; D(\omega)\le T\}.
\end{displaymath}
Noting that $\lim_{T\to \infty} \mathbb P(\Omega_T')=1$, we can fix $T$ sufficiently large so that $\mathbb P(\Omega_T')>0$.
For $n\in \mathbb N$, set \[\Omega_T^n:=\sigma^{-n}(\Omega_T')\setminus \cup_{k=0}^{n-1}\sigma^{-k}(\Omega_T').\] In addition, let $\Omega_T^0:=\Omega_T'$. Then, the ergodicity of the base system $(\Omega, \mathcal F, \mathbb P, \sigma)$ implies that $\mathbb{P}\left(\cup_{n=0}^\infty \Omega_T^n\right)=1$. Moreover, observe that
$\Omega_T^n \cap \Omega_T^m=\emptyset$ for $n\neq m$.
For $n\ge 0$ and $\omega\in \Omega_T^n$, let $f_\omega:X\to X$ be such that
\begin{equation}\label{eq: lip f ex}
\|f_\omega(x)-f_\omega(y)\|\leq \frac{c}{T}e^{-\rho|n-1|}\|x-y\| \text{ for every } x,y \in X,
\end{equation}
where $c$ is as in the statement of Theorem~\ref{t1}.
It is easy to see that \eqref{eq: lip f ex} combined with \eqref{eq: conseq temp} implies that \eqref{non} is satisfied. Indeed, if $n\ge 1$ we have that $\omega=\sigma^{-n}(\omega')$ for some $\omega'\in \Omega_T'$. Therefore,
\[
K(\sigma(\omega))\le D(\sigma(\omega))=D(\sigma^{-(n-1)}(\omega'))\le e^{\rho |n-1|}D(\omega')\le Te^{\rho |n-1|},
\]
which gives that
\[
\frac{c}{T}e^{-\rho |n-1|}\le \frac{c}{K(\sigma (\omega))}.
\]
Consequently, \eqref{eq: lip f ex} implies~\eqref{non}. One can argue similarly in the case when $n=0$.
We conclude that in the present setting Theorem~\ref{t1} is applicable. In addition, observe that if $\omega \in \Omega_T^m$ for some $m\ge 0$, then each sequence $(y_n)_{n\in \mathbb{Z}}\subset X$ such that
\[
\lVert y_n-F_{\sigma^{n-1}(\omega)}(y_{n-1})\rVert \le \frac{\delta(n)}{2T}e^{-\rho|n-m|} \quad \text{for $n\in \mathbb{Z}$,}
\]
satisfies~\eqref{pseudo} (and consequently~\eqref{shad} holds).
\subsubsection{Theorem \ref{theo: conservation} for nonuniformly hyperbolic systems} Suppose we are in the setting of Section \ref{sec: conservation} and Subsection~\ref{NHS}. Furthermore,
let $\delta:\mathbb{Z}\to (0,+\infty)$ be a sequence satisfying~\eqref{eq: sub-exp} and such that $r:=\inf_{n\in \mathbb{Z}}\delta(n)>0$. For each $\omega \in \Omega^n_T$, $n\ge 0$, let $f_\omega:\mathbb{R}^d\to \mathbb{R}^d$ be such that
\begin{displaymath}
\|f_\omega(x)\|\leq \frac{r}{4T}e^{-\rho n} \text{ for every } x\in \mathbb{R}^d.
\end{displaymath}
It is easy to see that the hypotheses of Theorem \ref{theo: conservation} are satisfied and thus we may apply it in the present setting.
{\bf Acknowledgements.} We would like to thank the anonymous referee for constructive comments that helped us to improve our paper. L.B. was partially supported by a CNPq-Brazil PQ fellowship under Grant No. 306484/2018-8. D. D. was supported in part by Croatian Science Foundation under the project
IP-2019-04-1239 and by the University of Rijeka under the projects uniri-prirod-18-9
and uniri-prprirod-19-16.
\end{document}
\begin{document}
\subjclass[2000]{Primary: ; Secondary: }
\keywords{Stochastic PDEs, fractional Brownian motion, pathwise solutions, fractional calculus. \\
}
\title[Stochastic Shell Models driven by a multiplicative fBm]
{Stochastic Shell Models driven by a multiplicative fractional Brownian motion}
\author{Hakima Bessaih}
\address[Hakima Bessaih]{Department of Mathematics\\
University of Wyoming\\
Laramie 82071 USA}
\email[Hakima Bessaih]{[email protected]}
\author{Mar\'{\i}a J. Garrido-Atienza}\address[Mar\'{\i}a J. Garrido-Atienza]{Dpto. Ecuaciones Diferenciales y An\'alisis Num\'erico\\
Universidad de Sevilla, Apdo. de Correos 1160, 41080-Sevilla, Spain} \email[Mar\'{\i}a J. Garrido-Atienza]{[email protected]}
\author{Bj{\"o}rn Schmalfu{\ss }}
\address[Bj{\"o}rn Schmalfu{\ss }]{Institut f\"{u}r Stochastik\\
Friedrich Schiller Universit{\"a}t Jena, Ernst Abbe Platz 2, 77043\\
Jena,
Germany
}
\email[Bj{\"o}rn Schmalfu{\ss }]{[email protected]}
\begin{abstract}
We prove existence and uniqueness of the solution of a stochastic shell model. The equation is driven by an infinite-dimensional fractional Brownian motion with Hurst parameter $H\in (1/2,1)$ and contains a non-trivial coefficient in front of the noise which satisfies special regularity conditions.
The stochastic integrals that appear are defined in a fractional sense. First, we prove the existence and uniqueness of variational solutions to approximating equations driven by piecewise linear continuous noise, for which we are able to derive important uniform estimates in some functional spaces. Then, thanks to a compactness argument and these estimates, we prove that these variational solutions converge to a limit solution, which turns out to be the unique pathwise mild solution associated to the shell model with fractional noise as driving process.
\end{abstract}
\maketitle
\section{Introduction}
In this paper we consider some shell models under the influence of a noise. Shell models of turbulence describe the evolution of
complex Fourier-like components of a scalar velocity field $u_{n}(t)\in\mathbb{C}$ and the associated wavenumbers $k_{n}$,
where the discrete index $n$ is referred to as the shell index.
The evolution of the infinite sequence $(u_n)_{n\in \mathbb{N}}$ is given by
\begin{equation}\label{SHELL}
\dot u_n(t)+\nu k_n^2 u_n(t)+b_n(u(t),u(t))=g_n(t, u(t))\dot{\omega}(t),
\qquad n\in \mathbb N
\end{equation}
with the constraints $u_{-1}(t)=u_0(t)=0$ and $u_n(t) \in \mathbb C$ for $n \in \mathbb{N}$. Here $\dot{\omega}$ denotes a noise path that will be described below,
$\nu \ge 0$ represents, in analogy with the Navier--Stokes equations, a kinematic viscosity,
$k_n=k_0 \lambda^n$ ($k_0>0$ and $\lambda>1$), and $g_n$ is a forcing term.
The exact form of $b_n(u,v)\in \mathbb{C}$ varies from one model to another.
However, in all the various models
it is assumed that $b_n(u,v)$ is chosen in such a way that
\begin{equation}\label{incomp}
{\mathbb R}e \displaystyle\sum_{n=1}^{\infty}b_n(u,v)\bar v_n=0,
\end{equation}
where ${\mathbb R}e$ denotes the real part and $\overline x$
the complex conjugate of $x$.
Equation \eqref{incomp}
implies a formal law of conservation of energy in the inviscid
($\nu=0$) and unforced form of \eqref{SHELL}.
In particular, we define the bilinear terms $b_n$ as
\[b_n(u,v)=i(a k_{n+1}\bar u_{n+1} \bar v_{n+2}+
b k_{n}\bar u_{n-1} \bar v_{n+1}-
a k_{n-1}\bar u_{n-1} \bar v_{n-2}-
b k_{n-1}\bar u_{n-2} \bar v_{n-1})
\]
in the GOY model (see \cite{G,goy})
and by
\[
b_n(u,v)=-i(a k_{n+1}\bar u_{n+1} v_{n+2}+
b k_{n}\bar u_{n-1} v_{n+1}+
a k_{n-1} u_{n-1} v_{n-2}+
b k_{n-1} u_{n-2} v_{n-1})
\]
in the SABRA model (see \cite{sabra}). The two parameters $a, b$ are real numbers. There are several shell models in the literature; the GOY and SABRA models defined above were introduced in \cite{G, goy, sabra}. For the viscous version of the GOY and SABRA models, well-posedness, global regularity of solutions and smooth dependence on the initial data can be found in \cite{clt}.
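The cancellation \eqref{incomp} can be seen directly for the SABRA terms: after shifting the summation index, the first and third summands (and likewise the second and fourth) pair into complex conjugates, so the sum inside the parentheses is real and the prefactor $-i$ makes the total purely imaginary. The following numerical sketch is our own sanity check (the values $N=12$, $k_0=1$, $\lambda=2$, $a=1$, $b=-0.5$ are arbitrary); it verifies the truncated analogue of \eqref{incomp} for random complex sequences, with zero padding encoding the boundary convention $u_{-1}=u_0=0$.

```python
import random

# Our own sanity check: for the SABRA bilinear terms,
# Re sum_{n>=1} b_n(u,v) * conj(v_n) = 0 for arbitrary real a, b,
# once indices outside 1..N are treated as 0 (boundary convention).
random.seed(1)
N, k0, lam, a, b = 12, 1.0, 2.0, 1.0, -0.5

def rand_seq():
    # entries indexed 1..N; anything outside that range is 0
    return {n: complex(random.gauss(0, 1), random.gauss(0, 1))
            for n in range(1, N + 1)}

u, v = rand_seq(), rand_seq()
U = lambda n: u.get(n, 0j)
V = lambda n: v.get(n, 0j)
k = lambda n: k0 * lam ** n

def b_n(n):
    # SABRA nonlinearity as defined in the text
    return -1j * (a * k(n + 1) * U(n + 1).conjugate() * V(n + 2)
                  + b * k(n) * U(n - 1).conjugate() * V(n + 1)
                  + a * k(n - 1) * U(n - 1) * V(n - 2)
                  + b * k(n - 1) * U(n - 2) * V(n - 1))

total = sum(b_n(n) * V(n).conjugate() for n in range(1, N + 1))
print(abs(total.real))   # vanishes up to round-off
```

Note that this checks the truncated sum; the identity \eqref{incomp} itself concerns the infinite series.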
In recent years, shell models of turbulence have attracted a lot of interest for their ability to capture some of the statistical properties of three-dimensional turbulence while presenting a structure much simpler than the Navier--Stokes equations.
The stochastic version of the GOY model under the influence of an additive white noise has been studied in \cite{Barsanti}, where some statistical properties in terms of the invariant measure have been shown. For the same model, a Gaussian invariant measure is associated and a flow is constructed in \cite{Bessaih-Ferrario1, Bessaih-Ferrario2}.
In this article we consider a long-term multiplicative noise that allows us to model
memory effects. Such a noise is given by a trace-class fractional Brownian motion in our state space with Hurst parameter $H\in (1/2,1)$,
see below. In contrast to white noise, a fractional Brownian motion is not a martingale, and therefore the multiplicative noise term cannot be represented by an Ito integral. However, to deal with stochastic integration where the integrator is only H{\"o}lder continuous with an exponent larger than $1/2$, one can use
Young integration, see Young \cite{You36} or the adaptation to a stochastic setting by Z{\"a}hle \cite{Zah98}. Since the definition of these integrals is based on fractional derivatives (see Samko {\it et al.} \cite{Samko} for a general presentation), this theory is often called fractional calculus.
An advantage of this theory is that, in contrast to the Ito integral, which is in general given by a limit in probability of Darboux sums derived from an adapted integrand, we can define our integral pathwise, meaning that for any sufficiently regular
integrand and integrator the integral is well defined. In other words, the exceptional sets of measure zero which appear in classical Ito integration
do not depend on the integrand.
Moreover, integrals can be defined for non-adapted integrands.
The main goal of our work is to prove existence and uniqueness of a pathwise solution of the stochastic shell model driven by a fractional multiplicative noise.
Applying an infinite-dimensional version of the fractional integration theory, we are able to present \eqref{SHELL} in a mild sense where the last term of this equation generates a fractional integral.
In particular, the properties of the nonlinear term $B$ generated by the sequence $(b_i(u,v))_{i\in\mathbb{N}}$ allow us to present such a solution in mild form. Nevertheless, in a first step we replace the fractional noise path by a piecewise linear continuous approximation. Considering \eqref{SHELL} with such a noise path, we are able to construct global and unique mild solutions. It is important to emphasize that the classical contraction method cannot be used alone, since the bilinear term
$(b_i(u,v))_{i\in\mathbb{N}}$ yields estimates that do not close with the right norms. This is why we first have to construct weak solutions and obtain some a priori estimates. These weak solutions have to be constructed with a smoother noise path in order to define the corresponding stochastic integral. The a priori estimates, combined with the estimates obtained from the mild form, are then used to pass to the limit by means of a compactness argument, and the limit turns out to be a mild solution of the original problem. The uniqueness of solutions is proved by an argument that uses the balance of suitable norms. As mentioned before, just using the mild form in its usual norm does not allow us to close the estimates, which is why we again combine the a priori estimates and the norms obtained from the mild form to solve an algebraic system of two inequalities, where the unknown is given in terms of the difference of two mild solutions starting from the same initial condition but measured in two different norms. The solution of this system is zero, which is what allows us to conclude the uniqueness of solutions. We believe that our existence result can be generalized to the Navier--Stokes equations, although careful calculations have to be performed on the nonlinear term, which is the main difference with the current result. We might have to work in slightly different spaces; this will be done in the forthcoming paper \cite{BeGaSch15}.
Articles dealing with pathwise solutions for quite general stochastic ordinary differential equations driven by a multiplicative fractional Brownian motion are, e.g., \cite{NuaRas02} and \cite{GMS08}. In the infinite-dimensional context, there are also articles studying the existence of pathwise solutions, like \cite{NuaVui06} (dealing with variational solutions) and \cite{MasNua03}, \cite{GLS09}, \cite{diop} and \cite{ChGGSch12} for mild solutions. In these papers the Hurst parameter $H\in (1/2,1)$, the diffusion and the drift are assumed to be Lipschitz continuous, and the existence of solutions is proved using pathwise arguments through the fractional integrals.
There is an extensive literature for fluid flows driven by a Brownian motion but only a few works with a fractional Brownian motion. In \cite{CaQTu} another fluid model driven by
a fractional Brownian motion with Hurst parameter bigger than $1/2$ is considered. In particular, the authors find a local solution of the 3D Navier--Stokes equation by using the Young integral. In \cite{viens} the 2D Navier--Stokes equation driven by a fractional Brownian motion with more general Hurst parameters is studied. However, the noise considered there is additive.
An interesting advantage of establishing the existence of pathwise solutions for the stochastic shell model is that they generate a random dynamical system, which opens the possibility of an intensive asymptotic analysis of \eqref{SHELL}. In particular, this is the foundation for showing the existence of random attractors and analyzing their structure. In the forthcoming paper \cite{BeGaSch15} the dynamics of the stochastic shell model is investigated by using random dynamical systems theory. We would like to point out that, despite the similarities between the 2D Navier--Stokes equation and the shell model, more effort and more involved techniques will be necessary to obtain results for the stochastic 2D Navier--Stokes equation similar to the ones in \cite{BeGaSch15}. Let us also mention that the generation of a random dynamical system, as well as the study of the corresponding random attractor for another kind of stochastic evolution equation with multiplicative fractional noise, has been investigated very recently in \cite{ChGGSch12}, \cite{ChGGSch14}, and \cite{GMS08}.
The paper is organized as follows: in Section 2 we introduce the functional analytical framework. In Section 3, we define the fractional derivatives and the stochastic integral using some type of generalized Young integrals. In Section 4, we introduce the different assumptions on the diffusion and give the definitions of the different solutions. In
Section 5, we prove that the system driven by a smoother path has a unique weak solution, which is also a mild solution. Furthermore, we obtain some fundamental uniform estimates for the solution of the system driven by such a kind of smooth path. In Section 6, thanks to these uniform estimates and a compactness argument, we construct a unique pathwise mild solution to the shell model having a fractional Brownian motion as driving path. Section 7 is devoted to an example of a particular diffusion fitting the assumptions required for developing the abstract framework. Finally, Section 8 contains the proofs of some results that have been used in different sections of the paper.
As usual, we denote by $c$ a positive constant that can change its value from line to line.
\section{Preliminaries}\label{S-FS}
\subsection{Spaces and operators}\label{s1}
For any $\alpha \in \mathbb R$, let us introduce the following spaces, see Constantin et al. \cite{clt} for the details,
\begin{equation*}
V_{\alpha}=\{u=(u_1, u_2, \ldots) \in \mathbb {C}^\infty:
\sum_{n=1}^\infty k_n^{4\alpha}|u_n|^2<\infty \}\footnotemark.
\end{equation*}
This is a separable Hilbert space with scalar product
$( u,v)_{V_\alpha}=\sum_{n=1}^\infty k_n^{4\alpha} u_n
\bar v_n$.\footnotetext{Here there is an important difference w.r.t. the notation of spaces in \cite{clt} and \cite{clt2}.}
Denote by $\|\cdot \|_{V_\alpha}$ its norm.
We have the compact embedding
\begin{equation*}
V_{\alpha_1}\subset V_{\alpha_2} \qquad \text{ if } \alpha_1>\alpha_2.
\end{equation*}
Let us set $V:=V_{0}$ and denote its norm simply by $\|\cdot\|$ and its scalar product by $( \cdot, \cdot )_V$.
Let $A: D(A)=V_1\to V$ be the linear unbounded operator defined as
\begin{equation*}
A: (u_1, u_2, \ldots) \mapsto (-\nu k_1^2 u_1,-\nu k_2^2 u_2,\ldots).
\end{equation*}
For simplicity let us set $\nu=1$.
It is known that $A$ generates an analytic semigroup $S(\cdot)$, which follows from the Lax--Milgram lemma, see Sell and You \cite{SelYou02}, Theorem 36.6, and this semigroup is exponentially stable. Furthermore, $V_\alpha =D(A^\alpha)$ and $(u,v)_{V_\alpha}=(A^{\alpha}u, A^{\alpha}v)_V$, $u,v\in V_{\alpha}$.
Let $L(V_\delta,V_\gamma)$ denote the space of linear continuous operators from $V_\delta$ into $V_\gamma$.
As usual, $L(V)$ denotes $L(V,V)$.
The following properties are well known for analytic semigroups and their generators: for $\zeta\ge \alpha$ there exists a constant $c>0$ such that
\begin{eqnarray}
|S(t)|_{L(V_\alpha, V_{\zeta})}=|A^\zeta S(t)|_{L(V_\alpha,V)}\le
\frac{c}{t^{\zeta-\alpha}}e^{-\lambda t},\quad t>0\label{eq1},
\end{eqnarray}
\begin{eqnarray}
|S(t)-{\rm id}|_{L(V_{\sigma+\nu},V_{\theta+\nu})} \le c
t^{\sigma-\theta}, \quad {\rm for }\ \sigma\in
[\theta,1+\theta],\quad \nu\in \mathbb{R} \label{eq2},
\end{eqnarray}
where $\lambda$ in (\ref{eq1}) is a positive constant, see for instance Pazy \cite{Pazy}, Theorem 2.6.13. From these inequalities, for $\nu,\,\eta\in [0,1]$, $\zeta,\delta\in \mathbb{R}$ such that $\delta\leq \zeta+\nu$, there exists a $c>0$ such that for $0\leq q\leq r\leq s\leq t$,
\begin{align}\label{eq30}
\begin{split}
|S(t-r)-S(t-q)|_{L(V_{\delta},V_{\zeta})}\le c(r-q)^\nu(t-r)^{-\nu-\zeta+\delta},\\
|S(t-r)- S(s-r)-S(t-q)+S(s-q)|_{L(V)}\leq
c(t-s)^{\nu}(r-q)^{\eta}(s-r)^{-(\nu+\eta)}.\end{split}
\end{align}
Define the bilinear operator $B:{\mathbb C}^\infty\times {\mathbb
C}^\infty\to {\mathbb C}^\infty$ as
\[
B(u,v)=-(b_1(u,v), b_2(u,v),\ldots),
\]
where the components $b_i$ satisfy (\ref{incomp}).
$B$ is well defined when its domain is
$V_{1/2} \times V$ or $V \times V_{1/2}$ (see \cite{clt}), that is,
$ B: V_{1/2} \times V\to V$ and
$ B: V \times V_{1/2}\to V$ are bounded operators. The operator $B$ enjoys the following properties:
\begin{align*}
\begin{split}
( B(u,v),w)_V&=-( B(u,w),v)_V,\quad u\in V_{1/2}, \quad v, w\in V,\\
( B(u,v),w)_V&=-( B(u,w),v)_V,\quad u\in V, \quad v, w\in V_{1/2}.
\end{split}
\end{align*}
As a consequence, we also have that
\begin{equation}\label{skew2}
( B(u,v),v)_V=0, \quad u\in V, \quad v\in V_{1/2}.
\end{equation}
Moreover, we extend the
result of Constantin {\it et al.} \cite{clt} to more general spaces:
\begin{lemma}\label{Bgenerale}
For any $\alpha_1, \alpha_2, \alpha_3 \in \mathbb R$ with $\alpha_1+\alpha_2+\alpha_3\ge\frac12$,
$$
B:V_{\alpha_1}\times V_{\alpha_2} \to V_{-\alpha_3},
$$
and there exists a constant $c$ depending on the
$\alpha_j$'s such that
\[
\|B(u,v)\|_{V_{-\alpha_3}}\le c \|u\|_{V_{\alpha_1}} \|v\|_{V_{\alpha_2}},
\quad u \in V_{\alpha_1}, \quad v \in V_{\alpha_2}.
\]
\end{lemma}
The proof of this result follows from Proposition 1 of Constantin {\it et al.} \cite{clt2} and from Bessaih {\it et al.} \cite{Bessaih-Ferrario1}, and hence we omit it here.\\
Let $C([s,t];V_\mu)$ be the space of continuous functions on $[s,t]$ with values in $V_\mu$ and with the usual norm $\|\cdot\|_{C,\mu}$ (or $\|\cdot\|_{C,s,t,\mu}$ when we want to stress the interval). In the particular case $\mu=0$, we simply write $\|\cdot\|_{C}$ (or $\|\cdot\|_{C,s,t}$, respectively).
For $\beta\in (0,1]$ we denote by $C^\beta([s,t];V_\mu)$ the space of H{\"o}lder-continuous functions on $[s,t]$ with values in $V_\mu$, equipped with the norm
\begin{equation*}
\|u\|_{\beta,\mu}=\|u\|_{C,\mu}+|||u|||_{\beta,\mu},\quad |||u|||_{\beta,\mu}:=\sup_{s\le p<q\le t}\frac{\|u(q)-u(p)\|_{V_\mu}}{(q-p)^\beta}.
\end{equation*}
In particular, for $\beta=1$ this is the space of Lipschitz-continuous functions.
The spaces $L^p(s,t;V_\mu),\,p\in [1,\infty]$, have the standard meaning with the usual norms.
As mentioned above, it is sometimes important to consider the above norms on different time intervals $[s,t]$; in those cases the time interval will be indicated in the index of the norm.
For the previous spaces the following compactness theorem holds true:
\begin{theorem}\label{t1}
(i) For $\alpha,\delta>0$, $L^2(s,t;V_{\alpha})\cap C^\beta([s,t];V_{-\delta})$ is compactly embedded into $L^2(s,t;V)\cap C([s,t];V_{-\delta})$.
(ii) For $0 \leq \delta_1< \delta_2$ and $0\le\beta_1<\beta_2\le 1$ the space
$C^{\beta_2}([s,t];V_{-\delta_1})$ is compactly embedded into $C^{\beta_1}([s,t];V_{-\delta_2})$.
\end{theorem}
For the first part see Vishik and Fursikov \cite{FurVis}, Chapter IV, Theorem 4.1. For the second part we refer to Maslowski and Nualart \cite{MasNua03}, Lemma 4.5. Indeed, we have the compact embedding $V_{-\delta_1}\subset V_{-\delta_2}$.\\
We now rewrite equation \eqref{SHELL} in the abstract form
\begin{equation}\label{abstract}
du(t)=\left(Au(t)+ B(u(t),u(t))\right)dt+ G(u(t))d\omega(t),
\end{equation}
where $G$ is a nontrivial diffusion term representing the external force, whose assumptions will be described in Section \ref{ms} below. Here $\omega$ represents a {\em path} in $C^{\beta^\prime}([0,T];V)$, with $\beta^\prime>1/2$, or in particular a fractional Brownian motion with Hurst parameter $H\in (1/2,1)$; see the definition in Section \ref{s2}. This stochastic evolution equation therefore has a multiplicative noise. In what follows we will describe the type of stochastic integral we are going to consider, which will allow us to give an appropriate meaning to \eqref{abstract}.
\section{Integrals in Hilbert--spaces for H{\"o}lder-continuous integrators with H{\"o}lder exponents greater than $1/2$}\epsilonnsuremath{\lambda}bel{s2}
In this section we are concerned with the definition of the following infinite dimensional integral
\betaegin{equation*}
\int_{T_1}^{T_2} Zd\epsilonnsuremath{\omega}ega,
\epsilonnd{equation*}
where $\epsilonnsuremath{\omega}ega$ is a H{\"o}lder-continuous function with H\"older exponent $\betaeta^\partialrime>1/2$ and $Z$ is an appropriate integrand. We follow the recent definition given by Chen {\it et al.} \cite{ChGGSch12}, and for the sake of completeness, next we shall borrow the main steps of their construction.
We start by considering an abstract separable Hilbert space $\tilde V$. For $0<\alpha<1$ and measurable functions $Z:[T_1,T_2]\to \tilde V$ and $\omega:[T_1,T_2]\to V$, we define the fractional derivatives
\begin{align*}
D_{{T_1}+}^\alpha Z[r]&=\frac{1}{\Gamma(1-\alpha)}\bigg(\frac{Z(r)}{(r-T_1)^\alpha}+\alpha\int_{T_1}^r\frac{Z(r)-Z(q)}{(r-q)^{1+\alpha}}dq\bigg)\in \tilde V,\,\\
D_{{T_2}-}^{1-\alpha} \omega_{T_2-}[r]&=\frac{(-1)^{1-\alpha}}{\Gamma(\alpha)}
\bigg(\frac{\omega(r)-\omega(T_2-)}{(T_2-r)^{1-\alpha}}
+(1-\alpha)\int_r^{T_2}\frac{\omega(r)-\omega(q)}{(q-r)^{2-\alpha}}dq\bigg)\in
V,
\end{align*}
where $\omega_{T_2-}(r)= \omega(r)- \omega(T_2-)$ and $\omega(T_2-)$ denotes the left-sided limit of $\omega$ at $T_2$. Here $\Gamma(\cdot)$ denotes the Gamma function.
Let us start with the case in which the integrand $z$ and the integrator $\zeta$ are one-dimensional.
Suppose that $z(T_1+),\,\zeta(T_1+),\,\zeta(T_2-)$ exist, these being, respectively, the right-sided limit of $z$ at $T_1$ and the right- and left-sided limits of $\zeta$ at $T_1$ and $T_2$, and that $z \in I_{T_1+}^\alpha (L^p(T_1,T_2;\mathbb R)),\, \zeta_{T_2-} \in
I_{T_2-}^{1-\alpha} (L^{p^\prime}(T_1,T_2; \mathbb R))$ with $1/p+1/{p^\prime}\le 1$ and $\alpha p<1$ (the definition of these spaces can be found, for instance, in Samko {\it et al.} \cite{Samko}). Then, following Z\"ahle \cite{Zah98}, we define
\begin{align}\label{eq11}
\int_{T_1}^{T_2} zd\zeta&=(-1)^\alpha\int_{T_1}^{T_2} D_{T_1+}^\alpha z[r]D_{T_2-}^{1-\alpha}\zeta_{T_2-}[r]dr.
\end{align}
Suppose now that $\zeta$ is Lipschitz continuous. Then $\zeta$ generates a signed measure $d\zeta$ and $\zeta_{T_2-}\in I_{T_2-}^{1-\alpha} (L^{p^\prime}(T_1,T_2; \mathbb R))$. Therefore, in this situation the integral
\begin{equation*}
\int_{T_1}^{T_2}zd\zeta
\end{equation*}
can be expressed by \eqref{eq11}.
Let $\hat V$ be a separable Hilbert space endowed with the norm $\|\cdot\|_{\hat V}$ and consider the separable Hilbert space $L_2(V,\hat V)$ of Hilbert-Schmidt operators from $V$ into $\hat V$ with the norm $\|\cdot\|_{L_2(V,\hat V)}$ and inner product $(\cdot,\cdot)_{L_2(V,\hat V)}$. Let $(e_i)_{i\in\mathbb{N}}$ and $(f_i)_{i\in\mathbb{N}}$ be complete orthonormal bases of $V$ and $\hat V$, respectively. A basis of $L_2(V,\hat V)$ is given by the operators $E_{ij}$ defined by
\begin{align*} E_{ij}e_k=\left\{\begin{array}{lcl}
0&:& j\not= k\\
f_i &:& j= k.
\end{array}
\right.
\end{align*}
Let us now consider mappings $Z:[T_1,T_2]\to L_2(V,\hat V)$ and $\omega:[T_1,T_2]\to V$. Suppose that $z_{ji}=(Z,E_{ji})_{L_2(V,\hat V)}\in I_{T_1+}^\alpha (L^p(T_1,T_2;\mathbb R))$, that $z_{ji}(T_1+)$ exists and that $\alpha p<1$. Moreover, let us also assume that $\zeta_{iT_2-}=(\omega_{T_2-}(t),e_i)_V\in I_{T_2-}^{1-\alpha} (L^{p^\prime}(T_1,T_2; \mathbb R))$ with $1/p+1/p^\prime\le 1$, and that the mapping
\begin{equation*}
[T_1,T_2]\ni r\mapsto \|D_{T_1+}^\alpha Z[r]\|_{L_2(V,\hat V)}\|D_{T_2-}^{1-\alpha} \omega_{T_2-}[r]\|\in L^{1}(T_1,T_2;\mathbb{R}).
\end{equation*}
We introduce
\begin{align}\label{eq3}
\begin{split}
\int_{T_1}^{T_2} Z d\omega&:= (-1)^\alpha\int_{T_1}^{T_2} D_{T_1+}^\alpha Z[r]D_{T_2-}^{1-\alpha}\omega_{T_2-}[r]dr\\
&:=(-1)^\alpha\sum_{j=1}^\infty\bigg(\sum_{i=1}^\infty \int_{T_1}^{T_2}
D_{T_1+}^{\alpha}z_{ji}[r]D_{T_2-}^{1-\alpha}\zeta_{iT_2-}[r]dr \bigg) f_j.
\end{split}
\end{align}
The last equality is well defined because Pettis' theorem and the separability of $V$ ensure that the integrand is weakly measurable and hence measurable. Moreover, the norm of the above integral can be estimated by
\begin{align*}
\bigg\|\int_{T_1}^{T_2}Zd\omega\bigg\|_{\hat V}
&=\bigg(\sum_{j=1}^\infty \bigg|\sum_{i=1}^\infty \int_{T_1}^{T_2}D_{T_1+}^{\alpha}z_{ji}[r]D_{T_2-}^{1-\alpha}\zeta_{iT_2-}[r]dr\bigg|^2\bigg )^\frac12\\
&\le \int_{T_1}^{T_2}\|D_{T_1+}^\alpha Z[r]\|_{L_2(V,\hat V)}\|D_{T_2-}^{1-\alpha} \omega_{T_2-}[r]\| dr.
\end{align*}
The next result, whose proof can be found in \cite{ChGGSch12}, establishes that the above integral is well defined for suitably H\"older-continuous integrator and integrand:
\begin{lemma}\label{l3} Suppose that $Z\in C^{\beta}([T_1,T_2];L_2(V,\hat V))$ and $\omega\in C^{\beta^\prime}([T_1,T_2];V)$ with $1-\beta^\prime<\alpha<{\beta}$. Then
\[
\int_{T_1}^{T_2} Z d\omega\in \hat V
\]is well defined in the sense of (\ref{eq3}). Moreover, there exists a constant $c$ depending only on $T_2,\,\beta,\,\beta^\prime$ such that
\begin{align*}
\bigg\|\int_{T_1}^{T_2} Z d\omega\bigg\|_{\hat V}&\le
c \|Z\|_{C^{\beta}([T_1,T_2];L_2(V,\hat V))} |||\omega|||_{\beta^\prime,T_1,T_2}(T_2-T_1)^{{\beta^\prime}}.
\end{align*}
\end{lemma}
Moreover, the above integral with driving path $\omega$ remains well defined when the integrand is only locally H\"older continuous, which will be the case in the next sections when the semigroup $S$ is part of the integrand; see \cite{ChGGSch12} for the proof of this assertion.
In the following we would like to consider the above integrals when the integrator is given by a fractional Brownian motion (fBm). A one-dimensional fBm is a centered Gaussian process with autocovariance
\begin{equation*}
R(s,t)=\frac12(t^{2H}+s^{2H}-|t-s|^{2H}),
\end{equation*}
where $H\in (0,1)$ is the so-called Hurst parameter. The value $H=1/2$ corresponds to
a Brownian motion, which is a martingale and a Markov process with independent increments. When $H\not=1/2$ these properties no longer hold.
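The stated properties of $R$ can be checked numerically. The following sketch (ours, not part of the paper; all function names are hypothetical) verifies the diagonal variance $t^{2H}$, the reduction to the Brownian covariance $\min(s,t)$ at $H=1/2$, and the failure of independent increments for $H\neq 1/2$:

```python
import math

# Autocovariance R(s,t) = (t^{2H} + s^{2H} - |t-s|^{2H}) / 2 of a 1-D fBm.
def R(s, t, H):
    return 0.5 * (t ** (2 * H) + s ** (2 * H) - abs(t - s) ** (2 * H))

# Variance on the diagonal: Var(B_t) = R(t,t) = t^{2H}.
assert math.isclose(R(2.0, 2.0, 0.7), 2.0 ** 1.4)

# For H = 1/2 the formula reduces to min(s,t), the Brownian covariance.
assert math.isclose(R(0.3, 1.1, 0.5), 0.3)

def incr_cov(s, t, u, v, H):
    # Covariance of the increments B_t - B_s and B_v - B_u.
    return R(t, v, H) - R(t, u, H) - R(s, v, H) + R(s, u, H)

# Adjacent increments are positively correlated for H > 1/2 and negatively
# correlated for H < 1/2, so the independent-increment property fails.
assert incr_cov(0.0, 1.0, 1.0, 2.0, 0.75) > 0
assert incr_cov(0.0, 1.0, 1.0, 2.0, 0.25) < 0
```

For disjoint adjacent unit increments the covariance equals $2^{2H-1}-1$, which changes sign exactly at $H=1/2$.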
An fBm can also be defined in a separable Hilbert space. By the following construction we obtain such an infinite-dimensional noise with values in $V$: let $(\zeta_i)_{i\in\mathbb{N}}$ be an iid sequence of fBm in $\mathbb{R}$
with the same Hurst parameter $H$. Then
\begin{equation*}
t\to \omega(t):=\sum_{i=1}^\infty q_i^\frac12\zeta_i(t) e_i,
\end{equation*}
where $(q_i)_{i\in\mathbb{N}}\in l_2$, defines an fBm with values in $V$ and with autocovariance
\begin{equation*}
\frac12Q(t^{2H}+s^{2H}-|t-s|^{2H}),
\end{equation*}
where the operator $Q$ of diagonal form is defined by
\begin{equation*}
(e_i,Qe_j)_{V}=\delta_{ij}q_i.
\end{equation*}
A property that will be crucial in this paper is that, thanks to Kolmogorov's theorem, the stochastic process $\omega$ has a $\gamma$-H{\"o}lder-continuous version for any $\gamma<H$; see Theorem 1.4.1 in Kunita \cite{Kunita90}.
For simplicity we restrict ourselves to a real fBm. However, taking two independent one-dimensional real fBm $\zeta^1,\,\zeta^2$, we could construct
a one-dimensional complex fBm $\zeta:=1/\sqrt{2}(\zeta^1+i\zeta^2)$ and then, by the above formula, a complex fBm $\omega$ in $V$.
\begin{remark} \label{sep} For our further purposes we need the fBm $\omega$ to admit piecewise linear approximations. As will become clear later, we will use the property that, given $\omega$, we can find a sequence of piecewise linear continuous functions $\omega_n$ converging to $\omega$ in a H\"older-continuous space. The space of H\"older-continuous functions is not separable, but it can be modified in such a way that the resulting space is: for $\beta^\prime<\gamma<H$, the space
\begin{align*}
C^{0,\beta^\prime}([0,T];V):=\bigg\{\omega\in C^{\beta^\prime}([0,T];V): \lim_{\delta \to 0} \sup_{|s_1-s_2|<\delta, [0,T] \ni s_1\not =s_2} \frac{\|\omega(s_1)-\omega(s_2)\|}{|s_1-s_2|^{\beta^\prime}}=0\bigg\}
\end{align*}
is separable, since $V$ is itself separable (see \cite{FH13}, \cite{FV10} and \cite{ducsigsch}). It is easy to see that $C^{\gamma}([0,T];V) \subset C^{0,\beta^\prime}([0,T];V)$, and therefore this latter space is the one we should take when considering the fBm. Hence, in what follows we shall assume that the path $\omega\in C^{\beta^\prime}([0,T];V)$ can be approximated by a sequence $(\omega_n)_{n\in \mathbb N}$ of piecewise linear continuous functions converging in $C^{\beta^\prime}([0,T];V)$; this is the case because we assume that $\omega\in C^{\gamma}([0,T];V)$ with $\gamma <H$, understood in the sense given above.
\end{remark}
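A minimal numerical illustration of the approximation property in Remark \ref{sep} (our sketch, not part of the paper): we take a scalar $\gamma$-H\"older "path" with $\gamma=0.8$, interpolate it piecewise linearly, and observe that the discrete $\beta^\prime$-H\"older seminorm of the error, with $\beta^\prime=0.6<\gamma$, shrinks as the partition is refined.

```python
GAMMA, BETA = 0.8, 0.6  # Hoelder exponents: path regularity and target norm

def f(t):
    # A real-valued gamma-Hoelder function with a kink at t = 1/3.
    return abs(t - 1.0 / 3.0) ** GAMMA

def interp(n, t):
    # Piecewise linear interpolant of f on n equal subintervals of [0, 1].
    h = 1.0 / n
    k = min(int(t / h), n - 1)
    a, b = k * h, (k + 1) * h
    return f(a) + (f(b) - f(a)) * (t - a) / h

def holder_seminorm(err, grid):
    # Discrete beta'-Hoelder seminorm of err over all grid pairs.
    return max(
        abs(err[i] - err[j]) / (grid[j] - grid[i]) ** BETA
        for i in range(len(grid))
        for j in range(i + 1, len(grid))
    )

grid = [i / 400.0 for i in range(401)]
sems = []
for n in (8, 64):
    err = [f(t) - interp(n, t) for t in grid]
    sems.append(holder_seminorm(err, grid))

# Refining the partition shrinks the error in the beta'-Hoelder seminorm.
assert sems[1] < sems[0]
```

This only works because $\beta^\prime<\gamma$; in the $\gamma$-H\"older seminorm itself the piecewise linear approximants need not converge.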
\section{Definition of a solution of the stochastic shell model}\label{ms}
In this section we formulate conditions ensuring that \eqref{abstract} has a unique global solution.
We emphasize that we have to formulate a definition of solution which is appropriate in our context: on the one hand, the driving function belongs to $C^{\beta^\prime}([0,T];V)$ for some $\beta^\prime\in (1/2,1)$, which means that we cannot define integrals by using the standard integration theory for bounded-variation integrators. In particular, our situation covers the case in which the driving function is given by an fBm in $V$ with Hurst parameter $H>1/2$. On the other hand, the bilinear operator $B$ forces us to deal with {\em non-Lipschitz coefficients} in this model.
We now formulate the assumptions on the diffusion operator $G$ of \eqref{abstract}. In what follows,
we choose a constant $\delta>0$ which will be determined later.
{\bf Assumption (G)} Assume that the mapping $G: V_{-\delta}\to L_2(V)$ is bounded and twice continuously Fr\'echet-differentiable with bounded first and second derivatives $DG(u)$ and $D^2G(u)$, for $u\in V_{-\delta}$. Let us denote by $c_G$, $c_{DG}$ and $c_{D^2G}$, respectively,
the bounds for $G$, $DG$ and $D^2G$. Then, for $u\in V_{-\delta}$,
\begin{equation*}
\|G(u)\|_{L_2(V)} \le c_G.
\end{equation*}
Furthermore, for $u_1,u_2\in V_{-\delta}$,
\begin{equation*}
\|G(u_1)-G(u_2)\|_{L_2(V)} \leq c_{DG}\|u_1-u_2\|_{V_{-\delta}},
\end{equation*}
and for $u_1, u_2, v_1, v_2 \in V_{-\delta}$,
\begin{align}\label{eq12}
\begin{split}
\|G(u_1)-G(v_1)&-(G(u_2)-G(v_2))\|_{L_2(V)}\\
\le & c_{DG}\|u_1-v_1-(u_2-v_2)\|_{V_{-\delta}}+c_{D^2G} \|u_1-u_2\|_{V_{-\delta}}(\|u_1-v_1\|_{V_{-\delta}}+\|u_2-v_2\|_{V_{-\delta}}).
\end{split}
\end{align}
Notice that $DG: V_{-\delta} \to L_2(V\times V_{-\delta},V)$ is a bilinear mapping, whereas $D^2G$ is a trilinear mapping.
\bigskip
In this paper we shall study the existence and uniqueness of a solution of \eqref{abstract} in the sense of the following definition:
\begin{definition}\label{def1}
Let $1/2<\beta<\beta^\prime\le 1$, $\omega\in C^{\beta^\prime}([0,T],V)$, $u_0\in V$ and $\delta\in (\beta,1)$. A function
$u$ is said to be a {\it mild} solution to \eqref{abstract} over the interval $[0,T]$ associated to the initial condition $u_{0}$ if
\begin{equation*}
u\in C([0,T], V)\cap L^{2}(0,T,V_{1/2})\cap C^{\beta}([0,T],V_{-\delta})
\end{equation*}
and
\begin{equation}\label{eq7}
u(t)=S(t)u_0+\int_0^tS(t-r)B(u(r),u(r))dr+\int_0^tS(t-r)G(u(r))d\omega(r)
\end{equation}
for every $t\in [0,T]$.
\end{definition}
\begin{remark}\label{r1}
Note that the first integral in \eqref{eq7} is well defined in $V$ because $u\in C([0,T],V)\cap L^2(0,T,V_{1/2})$ and by Lemma \ref{Bgenerale}. The stochastic integral in \eqref{eq7} must be understood in $V$ according to the definition given in Section \ref{s2}.
\end{remark}
We stress that we are interested in finding a mild solution of (\ref{abstract}). Following \cite{NuaVui06} we could also consider weak solutions of our problem. Nevertheless, for $u\in L^\infty(0,T,V)\cap L^2(0,T,V_{1/2})$ the term $B(u,u)$ is sufficiently regular for us to work with mild solutions.
When $\omega$ is regular, we can also interpret the solution in the following weak sense:
\begin{definition}\label{def1b}
Assume that $\omega$ is piecewise linear continuous on $[0,T]$ with values in $V$ and that $u_0\in V$. We say that
$u$ is a {\it weak} solution to \eqref{abstract} over the interval $[0,T]$ associated to the initial condition $u_{0}$ if
\begin{equation*}
u\in C([0,T], V)\cap L^{2}(0,T,V_{1/2})
\end{equation*}
and
\begin{equation}\label{eq5}
( u(t),\phi)_V+\int_{0}^{t}( A^{1/2}u(s),A^{1/2}\phi)_V ds
-\int_{0}^{t}( B(u(s),u(s)), \phi)_V ds=( u_0,\phi)_V
+\int_{0}^{t} (G(u(s))\omega^\prime (s),\phi)_V ds
\end{equation}
holds for every $\phi\in V_{1/2}$ and $t\in [0,T]$.
\end{definition}
\section{Solutions of the stochastic shell model for piecewise linear continuous path noise}\label{s4}
In this section we assume that $\omega$ is a piecewise linear continuous function. This case is the foundation for the more general case treated in the next section. Indeed, in later sections, given $\omega\in C^{\beta^\prime}([0,T],V)$ we shall consider a sequence $(\omega_n)_{n\in\mathbb{N}}$ of piecewise linear continuous paths converging to $\omega$ in $C^{\beta^\prime}([0,T],V)$, see Remark \ref{sep}. As we cannot assume that the sequence $(\omega_n^\prime )_{n\in\mathbb{N}}$ is uniformly bounded in $L^\infty([0,T],V)$, we will have to construct uniform a priori estimates for the solutions of the equations driven by $\omega_n$, based on uniform estimates of $(\omega_n)_{n\in\mathbb{N}}$ with respect to the $C^{\beta^\prime}$-norm. \\
We start by studying the existence of solutions of the stochastic shell model for this kind of regular driving function:
\begin{prop}\label{w-smooth}
Assume that $\hat\beta\in (1/2,1)$, $\delta \in (\hat \beta, 1)$, $u_{0}\in V$, $\omega$ is a piecewise linear continuous function and $G$ satisfies assumption {\bf (G)}. Then there is a unique global weak solution $u$ of equation \eqref{abstract} in the sense of Definition \ref{def1b}.
\end{prop}
\begin{proof}
The proof is classical but, for the sake of completeness, we sketch it here.
Let us denote by $P_{n}$ the projection operator in $V$ onto the space spanned by
$e_{1}, e_{2}, \dots, e_{n}$. Then the Galerkin approximations $(u_n)_{n\in \mathbb{N}}$ of problem (\ref{abstract}) are solutions of the finite-dimensional
systems
\begin{equation}\label{abstract-n}
du_{n}(t)=(-Au_{n}(t)+ P_{n}B(u_{n}(t),u_{n}(t)))dt+ P_{n}G(u_{n}(t))\omega'(t)dt.
\end{equation}
Taking the inner product of \eqref{abstract-n} with $u_{n}$, using the property (\ref{skew2}) and assumption {\bf (G)}, and denoting by $G^*$ the adjoint operator of $G$, we get
\begin{equation*}
\begin{split}
\frac{1}{2}\frac{d}{dt}\|u_{n}(t)\|^{2}+\|u_{n}(t)\|_{V_{1/2}}^{2}&\leq
\left|( P_{n}G(u_{n}(t))\omega'(t), u_{n}(t))_V\right|\leq \left|( \omega'(t), G^{*}(u_{n}(t)) u_{n}(t))_V\right| \\
&\leq \|\omega'(t)\| \|G^{*}(u_{n}(t)) u_{n}(t)\|\leq c_G\|\omega'(t)\| \|u_{n}(t)\|\\
&\leq \frac{c_G^2}{2}\|\omega'(t)\|^{2}+\frac{1}{2}\|u_{n}(t)\|^{2}.
\end{split}
\end{equation*}
Hence, the Gronwall lemma yields
\begin{equation*}
\sup_{t\in [0,T]}\|u_{n}(t)\|^{2}\leq c(\|u_n(0)\|, \|\omega'\|_{L^\infty(0,T,V)}^{2}, T)
\end{equation*}
for an appropriate positive constant $c$, and consequently we also have
\begin{equation*}
\int_{0}^{T}\|u_{n}(t)\|_{V_{1/2}}^{2}dt\leq c,
\end{equation*}
uniformly in $n$.
Also, by classical arguments, we get that $(u_{n})_{n\in\mathbb{N}}$ is bounded in
$C^{\hat \beta}([0,T], V_{-\delta})$. In fact, since $u_n\in L^\infty(0,T,V)$ and, in particular, $\delta >1/2$, it follows from Lemma \ref{Bgenerale} that
\begin{equation*}
\sup_{0\le s<t\le T}\frac{\int_s^t\|A^{-\delta}B(u_n(r),u_n(r))\|dr}{(t-s)^{\hat \beta}}\le c\sup_{0\le s<t\le T}\frac{\int_s^t\|A^{-\frac12}B(u_n(r),u_n(r))\|dr}{(t-s)^{\hat \beta}}\le c T^{1-\hat \beta}\|u_n\|_{L^\infty(0,T,V)}^2<\infty,
\end{equation*}
and by {\bf (G)} we arrive at
\begin{equation*}
\sup_{0\le s<t\le T}\frac{\int_s^t\|A^{-\delta}G(u_n(r))\omega^\prime(r)\|dr}{(t-s)^{\hat \beta}}\le c \sup_{0\le s<t\le T}\frac{\int_s^t\|G(u_n(r))\|_{L_2(V)}\|\omega^\prime(r)\|dr}{(t-s)^{\hat \beta}}\le c c_G T^{1-\hat \beta}\|\omega^\prime\|_{L^\infty(0,T,V)}<\infty.
\end{equation*}
Moreover, applying the interpolation inequality (see \cite{SelYou02}, Theorem 37.6), we know that there exists a constant $c=c(\delta)\geq 1$ such that
$$\|A^{1-\delta} v\| \leq c \|A^0 v\|^{2\delta-1} \|A^{1/2} v\|^{2-2\delta}\quad \text{ for all } v\in V,$$
and therefore
\begin{align*}
& \sup_{0\le s<t\le T}\frac{\int_s^t\|A^{-\delta}Au_n(r)\|dr}{(t-s)^{\hat \beta}}\le c \sup_{0\le s<t\le T} \frac{\int_s^t \|u_n(r)\|^{2\delta-1} \|A^{1/2} u_n(r)\|^{2-2\delta}dr}{(t-s)^{\hat \beta}}\\
\le & c \|u_n\|_{L^\infty(0,T,V)}^{2\delta-1} \sup_{0\le s<t\le T} \frac{(\int_s^t dr)^{\delta} (\int_s^t \|A^{1/2} u_n(r)\|^{2}dr)^{1-\delta}}{(t-s)^{\hat \beta}}
\le c T^{\delta-\hat \beta} \|u_n\|_{L^\infty(0,T,V)}^{2\delta-1} \|u_n\|_{L^2(0,T,V_{1/2})}^{2-2\delta} <\infty.
\end{align*}
Hence, by the compactness Theorem \ref{t1} (i) we get a subsequence, still denoted by $(u_{n})_{n\in\mathbb{N}}$, that converges strongly in $L^2(0,T, V)\cap C([0,T],V_{-\delta})$ to some limit $u$. Since $(u_n)_{n\in\mathbb{N}}$ is bounded in $L^\infty(0,T, V)\cap L^2(0,T, V_{{1/2}})$,
this sequence is relatively weak-star compact in $L^\infty(0,T, V)$ and relatively weakly compact in $L^2(0,T,V_{{1/2}})$.
As a consequence, the limit satisfies
$u\in L^\infty(0,T, V)\cap L^2(0,T, V_{{1/2}})$. It remains to prove that the limit $u$ is a solution of the system \eqref{abstract} according to Definition \ref{def1b}. Indeed, since each $u_{n}$ is a solution in the sense of Definition \ref{def1b}, we can pass to the limit in each term. Furthermore, the regularity of $u$ implies that the right hand side of (\ref{eq5}) as well as the last two terms of the left hand side of (\ref{eq5}) are in $C([0,T], V)$, hence $u\in C([0,T], V)$. For similar limit considerations we refer to Constantin {\it et al.} \cite{clt}.
\end{proof}
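The Gronwall step in the proof above can be illustrated on a scalar caricature of the energy inequality (a hypothetical toy of ours, not the shell model itself): if $y^\prime(t)\le g(t)+y(t)$ with $g\ge 0$, then $y(t)\le (y(0)+\int_0^t g\,ds)e^{t}$. We check the extreme case $y^\prime=g+y$ against the bound along a fine Euler approximation.

```python
import math

T, m = 2.0, 200000
h = T / m
y = 1.0          # initial value y(0)
G_int = 0.0      # running integral of g (left Riemann sum)
ok = True
for i in range(m):
    t = i * h
    g = math.cos(t) ** 2        # a nonnegative forcing term
    y += h * (g + y)            # explicit Euler step for y' = g + y
    G_int += h * g
    bound = (1.0 + G_int) * math.exp(t + h)  # Gronwall bound at time t + h
    ok = ok and (y <= bound + 1e-8)

# The trajectory never exceeds the Gronwall bound.
assert ok
```

In the proof, $y$ plays the role of $\|u_n\|^2$ and $g$ that of $c_G^2\|\omega^\prime\|^2$, which is why the final constant depends on $\|\omega^\prime\|_{L^\infty(0,T,V)}$.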
Moreover, we have the following result about mild solutions:
\begin{prop}\label{prop}
Under the same hypotheses as in Proposition \ref{w-smooth}, every weak solution $u$ of \eqref{abstract} is a mild solution; that is, $u\in C([0,T], V)\cap L^{2}(0,T,V_{{1/2}})\cap C^{\hat \beta}([0,T],V_{-\delta})$ and $u$ satisfies, for every $t\in[0,T]$, the following integral formulation in $V$:
\begin{equation}\label{integral1}
u(t)=S(t)u_{0}+\int_{0}^{t}S(t-r)B(u(r),u(r))dr+\int_{0}^{t}S(t-r)G(u(r))\omega'(r)dr.
\end{equation}
\end{prop}
\begin{proof}
Suppose that $u$ fulfills \eqref{eq5}. Then
\begin{equation*}
t\mapsto B(u(t),u(t))+G(u(t))\omega^\prime(t)\in L^2(0,T,V),
\end{equation*}
so that
\begin{equation*}
t\mapsto \int_{0}^{t}S(t-r)B(u(r),u(r))dr+\int_{0}^{t}S(t-r)G(u(r))\omega'(r)dr\in C([0,T],V),
\end{equation*}
see Pazy \cite{Pazy}, proof of Theorem 4.3.1. In addition, every Galerkin approximation solving \eqref{abstract-n} satisfies
\begin{align}\label{eq6}
\begin{split}
( u_n(t),\phi)_V&=( S(t)P_n u_n(0),\phi)_V+\int_{0}^{t}(S(t-r)P_nB(u_n(r),u_n(r)),\phi)_Vdr\\
&+\int_{0}^{t}(S(t-r)P_nG(u_n(r))\omega'(r),\phi)_Vdr
\end{split}
\end{align}
for every $\phi\in V$ and every $t\in [0,T]$. From the convergence of $(u_n)_{n\in \mathbb{N}}$ in $L^2(0,T,V)$ and the boundedness in $L^2(0,T,V_{{1/2}})$ it follows
that the right hand side of \eqref{eq6} converges to
\begin{equation*}
( S(t)u_{0},\phi)_V+\int_{0}^{t}(S(t-r)B(u(r),u(r)),\phi)_Vdr+\int_{0}^{t}(S(t-r)G(u(r))\omega'(r),\phi)_Vdr
\end{equation*}
for every $t\in [0,T]$. On the other hand, from the proof of Proposition \ref{w-smooth} we know that $(u_n)_{n\in\mathbb{N}}$ converges to $u$ in $C([0,T],V_{-\delta})$, and hence $u_n(t)$ converges to $u(t)$ in $V_{-\delta}$ for every $t\in[0,T]$.
Since the right hand side is in $V$ for every $t\in[0,T]$, so is $u(t)$.
Also, following the same reasoning as in Proposition \ref{w-smooth}, one can prove that $(u_n)_{n\in \mathbb N}$ is bounded in $C^{\gamma} ([0,T], V_{-\hat \delta})$ for $\gamma=\hat \beta+\varepsilon$ and $\hat \delta=\delta-\varepsilon$ with $\varepsilon >0$ small enough such that $\hat \delta >\gamma$. Then it suffices to apply Theorem \ref{t1} (ii) to conclude the proof.
\end{proof}
From now on we will often use the following property, which is a consequence of the definition of the Beta function: for every $0\leq s<t\leq T$ and $a,\,b>-1$,
\begin{equation}\label{prop2}
\int_s^t(r-s)^a(t-r)^b dr=c(t-s)^{a+b+1},
\end{equation}
where $c$ only depends on $a$ and $b$. \\
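The Beta-function identity above, with $c=B(a+1,b+1)=\Gamma(a+1)\Gamma(b+1)/\Gamma(a+b+2)$, can be verified numerically; the following check (ours, with hypothetical function names) substitutes $r=s+(t-s)u$ and compares a midpoint quadrature against the Gamma-function formula.

```python
import math

def lhs(s, t, a, b, m=200000):
    # Midpoint rule for \int_s^t (r-s)^a (t-r)^b dr after substituting
    # r = s + (t-s)u, which gives (t-s)^{a+b+1} \int_0^1 u^a (1-u)^b du.
    h = 1.0 / m
    acc = sum(((i + 0.5) * h) ** a * (1 - (i + 0.5) * h) ** b for i in range(m))
    return (t - s) ** (a + b + 1) * acc * h

def rhs(s, t, a, b):
    # Closed form: B(a+1, b+1) (t-s)^{a+b+1}.
    beta = math.gamma(a + 1) * math.gamma(b + 1) / math.gamma(a + b + 2)
    return beta * (t - s) ** (a + b + 1)

# A case with an integrable endpoint singularity (a < 0) and a smooth case.
assert math.isclose(lhs(0.5, 2.0, -0.4, 0.3), rhs(0.5, 2.0, -0.4, 0.3), rel_tol=1e-3)
assert math.isclose(lhs(0.0, 1.0, 1.0, 2.0), rhs(0.0, 1.0, 1.0, 2.0), rel_tol=1e-6)
```

The condition $a,b>-1$ is exactly what makes the endpoint singularities integrable.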
Next we develop a priori estimates that we will later need to derive the existence of a solution for a general $\omega\in C^{\beta^\prime}([0,T],V)$.
We cannot use the estimate from Proposition \ref{w-smooth} because the sequence $(\omega_n^\prime)_{n\in\mathbb{N}}$ associated to the approximations of
$\omega$ in $C^{\beta^\prime}([0,T],V)$ is in general not uniformly bounded in $L^\infty(0,T,V)$. That is why $|||\omega|||_{\beta^\prime}$ appears in the following estimates.
\begin{lemma}\label{l5}
Assume that $1/2<\hat\beta<\beta^\prime$, $1-\beta^\prime<\alpha<\hat\beta$, $\delta \in (\hat \beta, 1)$, $u_{0}\in V$, $\omega$ is a piecewise linear continuous function and $G$ satisfies {\bf (G)}. Then, if $u$ is a weak solution to (\ref{abstract}) in the sense of Definition \ref{def1b}, there is a constant $c>0$ such that for $t\in [0,T]$
\begin{align}\label{E1}
\begin{split}
\|u(t)\|^{2}+
2 \int_{0}^{t}\|u(r)\|_{V_{1/2}}^{2}dr
& \leq \|u_{0}\|^{2}+c |||\omega|||_{\beta^\prime} t^{\beta^\prime}\|u\|_{C,0,t}+c|||\omega|||_{\beta^\prime} t^{\hat{\beta}+\beta^\prime}(1+\|u\|_{C,0,t})|||u|||_{\hat \beta,-\delta,0,t}.
\end{split}
\end{align}
\end{lemma}
\begin{proof}
Applying the formula for the square of the norm (see Temam \cite{Teman}, Lemma III.1.2), using the skew-symmetry property (\ref{skew2}) and integrating over $(0,t)$, we obtain for every $t\in [0,T]$ the energy inequality
\begin{equation*}
\|u(t)\|^{2}+2\int_{0}^{t}\|u(r)\|_{V_{1/2}}^{2}dr\leq \|u_{0}\|^{2}+
2\left|\int_{0}^{t}(G^{*}(u(r))u(r),\omega^\prime(r))_Vdr\right|.
\end{equation*}
The integral on the right hand side of the previous expression can be interpreted in the sense of Section \ref{s2} using fractional derivatives.
Since $\|D_{0+}^\alpha G^\ast(u(r))u(r)\|<\infty$ for any $r$, the expression $D_{0+}^\alpha G^\ast(u(r))u(r)$ can be interpreted as an element of the space of Hilbert-Schmidt operators $L_2(V,\mathbb{R})\simeq V$. Moreover, from the definition of the fractional derivative it is easy to derive that
\begin{align}\label{er}
\|D^{1-\alpha}_{t-}\omega_{t-}[r]\|\leq c |||\omega|||_{\beta^\prime} (t-r)^{\alpha+\beta^\prime-1},
\end{align}
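(For completeness, we sketch the short computation behind \eqref{er}, which uses only the definition of $D^{1-\alpha}_{t-}$ and $\alpha+\beta^\prime>1$:
\begin{align*}
\|D^{1-\alpha}_{t-}\omega_{t-}[r]\|
&\le \frac{1}{\Gamma(\alpha)}\bigg(\frac{\|\omega(r)-\omega(t)\|}{(t-r)^{1-\alpha}}
+(1-\alpha)\int_r^{t}\frac{\|\omega(r)-\omega(q)\|}{(q-r)^{2-\alpha}}dq\bigg)\\
&\le \frac{|||\omega|||_{\beta^\prime}}{\Gamma(\alpha)}\bigg((t-r)^{\alpha+\beta^\prime-1}
+(1-\alpha)\int_r^{t}(q-r)^{\alpha+\beta^\prime-2}dq\bigg)
= c\,|||\omega|||_{\beta^\prime}(t-r)^{\alpha+\beta^\prime-1},
\end{align*}
since $\int_r^t(q-r)^{\alpha+\beta^\prime-2}dq=(t-r)^{\alpha+\beta^\prime-1}/(\alpha+\beta^\prime-1)$.)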
and therefore we get
\begin{align*}
\begin{split}
&\left|\int_{0}^{t}(G^{*}(u(r))u(r),\omega^\prime(r))_Vdr\right|
\leq c |||\omega|||_{\beta'} \int_{0}^{t} (t-r)^{\alpha+\beta'-1} \left(\frac{ \|G^{*}(u(r))u(r)\|}{r^{\alpha}} \right. \\
&\qquad \qquad \qquad +\left. \int_{0}^{r}\frac{\|G^{*}(u(r))u(r)-G^{*}(u(q))u(q)\|}{(r-q)^{1+\alpha}}dq\right)
dr.
\end{split}
\end{align*}
The boundedness of $G$ trivially implies that $\|G^{*}(u)u\| \leq c_G\|u\|$ for $u\in V$, and therefore
\begin{equation*}
\frac{\|G^{*}(u(r))u(r)\|}{r^{\alpha}}\leq
\frac{c_G\|u\|_{C,0,t}}{r^{\alpha}},\quad r\in [0,t].
\end{equation*}
The boundedness and the Lipschitz continuity of $G$ imply
\begin{align*}
\int_{0}^{r}&\frac{\|G^{*}(u(r))u(r)-G^{*}(u(q))u(q)\|}{(r-q)^{1+\alpha}}dq\\
&\leq c_G\int_{0}^{r}\frac{\|u(r)-u(q)\|_{V_{-\delta}}}{(r-q)^{1+\alpha}}dq+\|u\|_{C,0,t}\int_{0}^{r}\frac{\|G^{*}(u(r))-G^{*}(u(q))\|_{L_2(V_{-\delta}, V)}}{(r-q)^{1+\alpha}}dq\\
&\leq (c_G+c_{DG}\|u\|_{C,0,t}) |||u|||_{\hat\beta,-\delta,0,t}\int_{0}^{r}(r-q)^{-1-\alpha+\hat\beta}dq\\
&= c(c_G+c_{DG}\|u\|_{C,0,t}) |||u|||_{\hat\beta,-\delta,0,t}r^{\hat\beta-\alpha}.
\end{align*}
Hence, for an appropriate $c>0$,
\begin{align*}
\left|\int_{0}^{t}( G^{*}(u(r))u(r),\omega^\prime(r))_Vdr\right| &\leq c |||\omega|||_{\beta^\prime} t^{\beta^\prime}\|u\|_{C,0,t}+c |||\omega|||_{\beta^\prime} t^{\hat\beta+\beta^\prime}(1+\|u\|_{C,0,t})|||u|||_{\hat\beta,-\delta,0,t}.
\end{align*}
\end{proof}
\begin{lemma}\label{l7}
Under the same conditions as in Lemma \ref{l5}, if $u$ is a solution to (\ref{abstract}), then there exist constants $c,\,\bar c>0$ such that for $t\in [0,T]$
\begin{align}\label{beta}
\begin{split}
|||u|||_{\hat\beta,-\delta,0,t}&\leq \bar ct^{\delta-\hat\beta}\|u_0\|+ct^{1-\hat\beta} \|u\|_{C,0,t}^2 +c |||\omega|||_{\beta'}t^{\beta^\prime-\hat\beta} ( 1+t^{\hat\beta}|||u|||_{\hat\beta,-\delta,0,t} ).
\end{split}
\end{align}
\end{lemma}
\begin{proof}
Consider \eqref{integral1} written as
\begin{equation*}
u(t)=S(t)u_{0}+A^{1/2}\int_{0}^{t}S(t-r)A^{-1/2}B(u(r),u(r))dr+\int_{0}^{t}S(t-r)G(u(r))\omega^\prime(r)dr.
\end{equation*}
Then we consider the following splitting:
\begin{align}\label{eq4}
\begin{split}
&A^{-\delta}(u(q)-u(p))=A^{-\delta}(S(q)-S(p))u_{0}+A^{-\delta+1/2}\int_{p}^{q}S(q-r)A^{-1/2}B(u(r),u(r))dr\\
&+
A^{-\delta+1/2}\int_{0}^{p}(S(q-r)-S(p-r))A^{-1/2}B(u(r),u(r))dr\\
&+A^{-\delta}\int_{p}^{q}S(q-r)G(u(r))\omega^\prime(r)dr+
A^{-\delta}\int_{0}^{p}(S(q-r)-S(p-r))G(u(r))\omega^\prime(r)dr\\
&=:I_{1}+I_{2}+I_{3}+I_{4}+I_{5}.
\end{split}
\end{align}
For the term related to the initial condition, since $\delta \in (\hat\beta,1)$, by \eqref{eq1} and \eqref{eq2} we have
\begin{align*}
\sup_{0\leq p<q\leq t}\frac{\|I_{1}(p,q)\|}{(q-p)^{\hat\beta}}&\leq\sup_{0\leq p<q\leq t}\frac{\|A^{-\delta}(S(q-p)-{\rm Id})S(p)u_0\|}{(q-p)^{\hat\beta}} \leq \bar c \sup_{0\leq p<q\leq t}\frac{(q-p)^\delta\|u_0\|}{(q-p)^{\hat\beta}} \leq \bar c t^{\delta-\hat\beta}\|u_0\|.
\end{align*}
Moreover, by Lemma \ref{Bgenerale} and taking into account that $V\subset V_{-\delta+1/2}$,
\begin{align}\label{eq18}
\begin{split}
\sup_{0\leq p<q\leq t}\frac{\|I_{2}(p,q)\|}{(q-p)^{\hat\beta}}&\leq\sup_{0\leq p<q\leq t}\frac{1}{(q-p)^{\hat\beta}}
\int_{p}^{q}\|A^{-\delta+1/2}S(q-r)A^{-1/2}B(u(r),u(r))\|dr\\
&\leq \sup_{0\leq p<q\leq t} \frac{c}{(q-p)^{\hat\beta}}
\int_{p}^{q} \|A^{-1/2}B(u(r),u(r))\|dr\\
&\leq \sup_{0\leq p<q\leq t} \frac{c}{(q-p)^{\hat\beta}}(q-p)\|u\|_{C,0,t}^{2}
\leq c t^{1-\hat\beta} \|u\|_{C,0,t}^{2}.
\end{split}
\end{align}
For $I_{3}$, thanks to Lemma \ref{Bgenerale} and \eqref{eq30},
\begin{align}\label{eq19}
\begin{split}
\sup_{0\leq p<q\leq t} \frac{\|I_{3}(p,q)\|}{(q-p)^{\hat\beta}}&\leq \sup_{0\leq p<q\leq t} \frac{c}{(q-p)^{\hat\beta}}
\int_{0}^{p}\|A^{-\delta+1/2}(S(q-p)-{\rm Id})S(p-r)A^{-1/2}B(u(r),u(r))\|dr\\
&\leq c\|u\|_{C,0,t}^{2} \sup_{0\leq p \leq t} \int_{0}^{p} (p-r)^{\delta-1/2-\hat\beta}dr\\
&\leq c t^{\delta+1/2-\hat\beta} \|u\|_{C,0,t}^{2}\le c^\prime t^{1-\hat\beta}\|u\|_{C,0,t}^2.
\end{split}
\end{align}
Similar estimates for $I_4$ and $I_5$ can be found in \cite{ChGGSch12}. However, for the completeness of the presentation, we also show these technical estimates in this paper; the calculations are shifted to the Appendix, see Lemma \ref{l4}. In particular, from Lemma \ref{l4} we get
\begin{equation}\label{eq10}
\sup_{0\leq p<q\leq t} \frac{\|I_4(p,q)\|+\|I_{5}(p,q)\|}{(q-p)^{\hat\beta}} \leq ct^{\beta'-\hat\beta}|||\omega|||_{\beta'}
\left(1+t^{\hat\beta}|||u|||_{\hat\beta,-\delta,0,t}\right).
\end{equation}
Collecting the estimates for the terms $I_j$, the inequality (\ref{beta}) is obtained.
\end{proof}
\betaegin{lemma}\epsilonnsuremath{\lambda}bel{corou}
Under the assumptions of Lemma \ref{l5}, if $u_n$ is a solution of \epsilonqref{eq5} on $[0,T]$ with initial condition $u_0\in V$ and driven by a piecewise linear continuous path $\epsilonnsuremath{\omega}ega_n$ where $(\epsilonnsuremath{\omega}ega_n)_{n\in\epsilonnsuremath{\mathbb{N}}}$ is bounded in $C^{\betaeta^\partialrime}([0,T],V)$, then $(u_n)_{n\in\epsilonnsuremath{\mathbb{N}}}$ is uniformly bounded in $C^{\hat\betaeta}([0,T],V_{-\deltalta}) \cap C([0,T],V)$.
\epsilonnd{lemma}
The proof of the previous result rests upon the technical Lemmas \ref{l6}--\ref{l8} whose proofs are presented into the Appendix section.
\betaegin{remark}
We emphasize that we consider H{\"o}lder--continuity with respect to the space $V_{-\deltalta}$ in the definition of a mild solution and in the results of this section, as well. The estimates in these results also make sense for smaller $\deltalta$. However, the initial condition $u_0$ is the responsible of having to consider $\deltalta\in (\hat \betaeta, 1)$, since in (\ref{beta}) the exponent in the term $t^{\deltalta-\hat\betaeta}$ multiplying $\|u_0\|$ must be positive.
\epsilonnd{remark}
\section{Construction of solutions}
We are now able to construct solutions for the stochastic equation \eqref{abstract} and give the main result of this paper.
We consider a sequence of solutions $(u_n)_{n\in\mathbb{N}}$ to \eqref{abstract} driven by $(\omega_n)_{n\in\mathbb{N}}$, a sequence of piecewise linear continuous approximations of $\omega$ converging to $\omega$, where $\omega$ satisfies Remark \ref{sep}.
First we formulate a general uniqueness theorem.
\begin{theorem}\label{t2}
Suppose that there are two mild solutions $u_1,\,u_2$ of \eqref{eq7} with $u_1(0)=u_2(0)=u_0\in V$ and driven by the same path $\omega$. Then, under the aforementioned assumptions on $A$, $B$ and $G$, we have $u_1(t)=u_2(t)$ for $t\in [0,T]$.
\end{theorem}
\begin{proof}
Assume that there exists a maximal interval $[0,t_0]$ contained in $[0,T]$, with $t_0<T$, such that $\Delta u:=u_1-u_2$ is zero on this interval. Then there exists a $0<\mu<1$ such that
$\Delta u\not=0$ on $(t_0,t_0+\mu]$.
We divide the proof into several steps:
(i) First we want to estimate
$$|||\Delta u|||_{\beta,-\delta,t_0,t_0+\mu}=\sup_{t_0\leq s<t\leq t_0+\mu}\frac{\|\Delta u (t)-\Delta u(s)\|_{V_{-\delta}}}{(t-s)^\beta}.$$
Regarding the non-stochastic integral, we have to estimate
\begin{align*}
\frac{1}{(t-s)^\beta}&\bigg\|\int_{s}^{t}S(t-r)A^{-\delta}(B(u_1(r),u_1(r))-B(u_2(r),u_2(r)))dr\bigg\|\\
&+\frac{1}{(t-s)^\beta}\bigg\|\int_{t_0}^{s}(S(t-r)-S(s-r))A^{-\delta}(B(u_1(r),u_1(r))-B(u_2(r),u_2(r)))dr\bigg\|\\
&=: J_1+J_2.
\end{align*}
Since $V_{-1/2} \subset V_{-\delta}$, from Lemma \ref{Bgenerale} we obtain
\begin{align*}
\|A^{-\delta}(B(u_1(r),u_1(r))-B(u_2(r),u_2(r)))\| & \le c\|B(\Delta u(r),u_1(r))\|_{V_{-1/2}}+c \|B(u_2(r),\Delta u(r))\|_{V_{-1/2}}\\
&\le c\|\Delta u(r)\|(\|u_1(r)\|+\|u_2(r)\|).
\end{align*}
Therefore
\begin{align*}
&J_1\leq \frac{c}{(t-s)^\beta} \int_{s}^{t} \|\Delta u(r)\|(\|u_1(r)\|+\|u_2(r)\|)dr
\leq c \mu^{1-\beta} \|\Delta u\|_{C,t_0,t_0+\mu} (\|u_1\|_{C,t_0,t_0+\mu}+\|u_2\|_{C,t_0,t_0+\mu}).
\end{align*}
Notice also that, using the properties of the semigroup $S$,
\begin{align*}
\|(S(t-r)&-S(s-r))A^{-\delta}(B(u_1(r),u_1(r))-B(u_2(r),u_2(r)))\| \\
=& \|(S(t-s)-{\rm Id}) S(s-r)(B(u_1(r),u_1(r))-B(u_2(r),u_2(r)))\|_{V_{-\delta}} \\
\le & c(t-s)^\delta \|B(u_1(r),u_1(r))-B(u_2(r),u_2(r))\| \\
\le & c (t-s)^\delta \|\Delta u(r)\|(\|u_1(r)\|_{V_{1/2}}+\|u_2(r)\|_{V_{1/2}}),
\end{align*}
and thus
\begin{align*}
J_2\leq &\frac{1}{(t-s)^\beta}\int_{t_0}^{s} (t-s)^\delta \|\Delta u(r)\| (\|u_1(r)\|_{V_{1/2}}+\|u_2(r)\|_{V_{1/2}}) dr\\
\leq & \mu^{\frac12+\delta-\beta} \|\Delta u\|_{C,t_0,t_0+\mu}(\|u_1\|_{L^2(0,T, V_{1/2})}+\|u_2\|_{L^2(0,T, V_{1/2})}).
\end{align*}
To analyze the terms corresponding to the stochastic integral, that is,
\begin{equation*}
\sup_{t_0\leq s<t\leq t_0+\mu}\frac{\bigg\| \displaystyle {\int_{s}^{t} S(t-r)(G(u_1(r))-G(u_2(r)))d\omega- \int_{t_0}^{s} (S(t-r)-S(s-r))(G(u_1(r))-G(u_2(r)))d\omega}\bigg\|_{V_{-\delta}}}{(t-s)^\beta},
\end{equation*}
we can consider the estimates of $I_4,\,I_5$ given in the Appendix, replacing $\|A^{-\delta}G(u(r))\|_{L_2(V)}$ by
\begin{equation}\label{eq16}
\|A^{-\delta}(G(u_1(r))-G(u_2(r)))\|_{L_2(V)}\le c_{DG}\|\Delta u(r)\|_{V_{-\delta}}
\end{equation}
and $\|A^{-\delta}(G(u(r))-G(u(q)))\|_{L_2(V)}$ by
\begin{align}\label{eq17}
\begin{split}
\|A^{-\delta}(G(u_1(r))&-G(u_2(r))-(G(u_1(q))-G(u_2(q))))\|_{L_2(V)}\\
&\le
c_{DG}\|\Delta u(r)-\Delta u(q)\|_{V_{-\delta}}+c_{D^2G}\|\Delta u(r)\|_{V_{-\delta}}(\|u_1(r)-u_1(q)\|_{V_{-\delta}}\\
&+\|u_2(r)-u_2(q)\|_{V_{-\delta}}),
\end{split}
\end{align}
where the two estimates above follow from {\bf (G)}. Then, following the steps of Lemma \ref{l4} and taking into account
\begin{align*}
\|\Delta u(r)\|_{V_{-\delta}}= \|\Delta u(r)-\Delta u(t_0)\|_{V_{-\delta}}\le |||\Delta u|||_{\beta,-\delta,t_0,t_0+\mu}(r-t_0)^\beta,
\end{align*}
which is true due to the fact that $\Delta u(t_0)=0$, we obtain the following term as an upper bound of the stochastic part:
$$c|||\omega|||_{\beta^\prime}\mu^{\beta^\prime}|||\Delta u|||_{\beta,-\delta,t_0,t_0+\mu}+c|||\omega|||_{\beta^\prime}\mu^{\beta^\prime+\beta}(|||u_1|||_{\beta,-\delta,0,T}+|||u_2|||_{\beta,-\delta,0,T}) \|\Delta u\|_{C,t_0,t_0+\mu}.$$
Collecting everything we get
\begin{align}\label{esti1}
|||\Delta u|||_{\beta,-\delta,t_0,t_0+\mu} \leq c_\mu^1 |||\Delta u|||_{\beta,-\delta,t_0,t_0+\mu}+ c_\mu^2 \|\Delta u\|_{C,t_0,t_0+\mu},
\end{align}
with
\begin{align}\label{esti2}
\begin{split}
c_\mu^1 &=c \mu^{\beta^\prime} |||\omega|||_{\beta^\prime},
\\
c_\mu^2&=c(\mu^{\frac12+\delta-\beta} (\|u_1\|_{L^2(0,T, V_{1/2})}+\|u_2\|_{L^2(0,T, V_{1/2})})+\mu^{\beta^\prime+\beta}|||\omega|||_{\beta^\prime}(|||u_1|||_{\beta,-\delta,0,T}+|||u_2|||_{\beta,-\delta,0,T})\\
&\quad +\mu^{1-\beta} (\|u_1\|_{C, t_0,t_0+\mu}+\|u_2\|_{C, t_0,t_0+\mu})).
\end{split}
\end{align}
(ii) In this second step we are interested in estimating $\|\Delta u\|_{C,t_0,t_0+\mu}$. The non-stochastic part gives us
\begin{align*}
&\sup_{t_0\le t \le t_0+\mu}\bigg\|\int_{t_0}^{t}S(t-r)(B(u_1(r),u_1(r))-B(u_2(r),u_2(r)))dr\bigg\|\\
\leq &c \sup_{t_0\le t \le t_0+\mu} \int_{t_0}^{t} \|\Delta u(r)\| (\|u_1(r)\|_{V_{1/2}}+\|u_2(r)\|_{V_{1/2}}) dr\\
\leq &c \mu^\frac12 (\|u_1\|_{L^2(0,T, V_{1/2})}+\|u_2\|_{L^2(0,T, V_{1/2})}) \|\Delta u\|_{C,t_0,t_0+\mu}.
\end{align*}
To study the norm of the stochastic integral, for $t\in [t_0,t_0+\mu]$ we split it as follows:
\begin{align*}
&|||\omega|||_{\beta'} \int_{t_0}^{t}(t-r)^{\alpha+\beta'-1} \left(\frac{\|S(t-r)(G(u_1(r))-G(u_2(r)))\|_{L_2(V)}}{(r-t_0)^{\alpha}} \right.\\
&\qquad \qquad \qquad \left.+\int_{t_0}^{r}\frac{\|(S(t-r)-S(t-\hat r))(G(u_1(r))-G(u_2(r)))\|_{L_2(V)}}{(r-\hat r)^{\alpha+1}}d\hat r \right.
\\&\qquad \qquad \qquad \left.+
\int_{t_0}^{r}\frac{\|S(t-\hat r)((G(u_1(r))-G(u_2(r)))-(G(u_1(\hat r))-G(u_2(\hat r))))\|_{L_2(V)}}{(r-\hat r)^{\alpha+1}}d\hat r \right) dr\\
&\qquad =:J_3(t)+J_4(t)+J_5(t).
\end{align*}
Following the steps of Lemma \ref{l4}, thanks to {\bf(G)} we obtain
\begin{align*}
\sup_{t_0\le t \le t_0+\mu} J_3(t)\leq c |||\omega|||_{\beta'} \mu^{\beta^\prime} \|\Delta u\|_{C,t_0,t_0+\mu},\\
\sup_{t_0\le t \le t_0+\mu} J_4(t) \leq c |||\omega|||_{\beta'} \mu^{\beta^\prime} \|\Delta u\|_{C,t_0,t_0+\mu}.
\end{align*}
Finally, using again {\bf(G)}, since $\|\Delta u(r)\|_{V_{-\delta}}\leq c\|\Delta u(r)\|$,
\begin{align*}
\sup_{t_0\le t \le t_0+\mu} J_5(t) & \leq c |||\omega|||_{\beta'} \int_{t_0}^{t}(t-r)^{\alpha+\beta'-1} \\
& \quad \times \bigg(\int_{t_0}^{r} \frac{\|\Delta u(r)-\Delta u (\hat r)\|_{V_{-\delta}}+ \|\Delta u(r)\| (\|u_1(r)-u_1(\hat r)\|_{V_{-\delta}}+\|u_2(r)-u_2(\hat r)\|_{V_{-\delta}}) }{(r-\hat r)^{\alpha+1}}d\hat r\bigg) dr\\
&\leq c |||\omega|||_{\beta'} \big( |||\Delta u|||_{\beta,-\delta,t_0,t_0+\mu} + \|\Delta u\|_{C,t_0,t_0+\mu} (|||u_1|||_{\beta,-\delta,t_0,t_0+\mu}+|||u_2|||_{\beta,-\delta,t_0,t_0+\mu})\big) \\
& \quad \times \sup_{t_0\le t \le t_0+\mu} \int_{t_0}^{t}(t-r)^{\alpha+\beta'-1} \bigg(\int_{t_0}^{r} (r-\hat r)^{\beta-\alpha-1}d\hat r\bigg) dr\\
&\leq c |||\omega|||_{\beta'} \mu^{\beta+\beta^\prime}\big( |||\Delta u|||_{\beta,-\delta,t_0,t_0+\mu} + \|\Delta u\|_{C,t_0,t_0+\mu} (|||u_1|||_{\beta,-\delta,0,T}+|||u_2|||_{\beta,-\delta,0,T})\big).
\end{align*}
Hence,
\begin{align}\label{esti3}
\|\Delta u\|_{C,t_0,t_0+\mu} \leq c_\mu^3 \|\Delta u\|_{C,t_0,t_0+\mu}+c_\mu^4 |||\Delta u|||_{\beta,-\delta,t_0,t_0+\mu},
\end{align}
with
\begin{align}\label{esti4}
\begin{split}
c_\mu^3 &=c(\mu^\frac12 (\|u_1\|_{L^2(0,T, V_{1/2})}+\|u_2\|_{L^2(0,T, V_{1/2})})+\mu^{\beta^\prime} |||\omega|||_{\beta^\prime}\\
&\quad + \mu^{\beta^\prime+\beta}|||\omega|||_{\beta^\prime}(|||u_1|||_{\beta,-\delta,0,T}+|||u_2|||_{\beta,-\delta,0,T})),
\\
c_\mu^4&=c \mu^{\beta^\prime+\beta}|||\omega|||_{\beta^\prime}.
\end{split}
\end{align}
Therefore, combining (\ref{esti1}) and (\ref{esti3}), we have to solve a system of inequalities, namely
$$X\leq c_\mu^1 X+c_\mu^2 Y, \qquad Y\leq c_\mu^3 Y+c_\mu^4 X, $$
with $c_\mu^i$ given by (\ref{esti2}) and (\ref{esti4}). It is now straightforward to check that for a small enough $\mu\in (0,1)$ we obtain $\|\Delta u\|_{C, t_0,t_0+\mu}=0$, which contradicts the fact that the maximal interval of uniqueness is $[0,t_0]$. Hence the solution of \eqref{abstract} is unique.
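To spell out this last step, write $X=|||\Delta u|||_{\beta,-\delta,t_0,t_0+\mu}$ and $Y=\|\Delta u\|_{C,t_0,t_0+\mu}$. Every term of the constants $c_\mu^i$ carries a positive power of $\mu$, multiplied by quantities that are bounded uniformly in $\mu$, so we may choose $\mu$ so small that $c_\mu^1\le \frac12$, $c_\mu^3\le \frac12$ and $4c_\mu^2c_\mu^4<1$. Then
\begin{align*}
X\le 2c_\mu^2 Y \qquad \text{and} \qquad Y\le 2c_\mu^4 X\le 4c_\mu^2c_\mu^4 Y,
\end{align*}
which forces $Y=0$, and hence $X=0$ as well.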
\epsilonnd{proof}
Finally, we can prove the main theorem of the paper:
\begin{theorem}\label{t3}
Under the assumptions of Lemma \ref{l5} there exists a mild solution to the stochastic shell--model \eqref{abstract} with driving function $\omega\in C^{\beta^\prime}([0,T];V)$.
\end{theorem}
\begin{proof} We divide the proof into several steps:
(i) Let $(\omega_n)_{n\in\mathbb{N}}$ be a sequence of piecewise linear continuous functions converging to $\omega$ in $C^{\beta^\prime}([0,T],V)$, see Remark \ref{sep}, and let $(u_n)_{n\in\mathbb{N}}$ be the sequence of unique solutions driven by $(\omega_n)_{n\in\mathbb{N}}$ with initial condition $u_0\in V$. From Lemma \ref{corou} we know that $(u_n)_{n\in\mathbb{N}}$ is uniformly bounded in $C^{\hat\beta}([0,T],V_{-\delta})\cap C([0,T],V)$. Then (\ref{E1}) implies that $(\|u_n\|_{L^2(0,T,V_{1/2})})_{n\in\mathbb{N}}$ is also bounded and hence $(u_n)_{n\in\mathbb{N}}$ is relatively weakly compact in $L^2(0,T,V_{1/2})$. Furthermore, this sequence is relatively compact in $L^2(0,T,V)\cap C([0,T],V_{-\delta})$ by Theorem \ref{t1} (i). Moreover, from Lemma \ref{l7} and Lemma \ref{l4} we obtain that
\begin{align*}
|||u_n|||_{\hat\beta,-\hat \delta,0,t}&\leq ct^{\hat \delta-\hat\beta}\|u_n(0)\|+ct^{1-\hat\beta} \|u_n\|_{C,0,t}^2 +c |||\omega_n|||_{\beta'}t^{\beta^\prime-\hat\beta-\varepsilon} ( 1+t^{\hat\beta}|||u_n|||_{\hat\beta,-\delta,0,t} ),
\end{align*}
which, together with the fact that $(\omega_n)_{n\in \mathbb N}$ converges to $\omega$, implies that $(u_n)_{n\in\mathbb{N}}$ is uniformly
bounded in $C^{\hat\beta}([0,T],V_{-\hat\delta})$ with $\hat\delta=\delta-\varepsilon$, where $\varepsilon>0$ is arbitrarily small, such that
$\delta,\,\hat\delta$ still satisfy the conditions of Lemma \ref{l5}. Hence, by Theorem \ref{t1} (ii), this sequence is relatively compact in $C^{\beta}([0,T],V_{-\delta})$.
(ii) Let $(u_{n^\prime})_{n^\prime \in\mathbb{N}}$ be a subsequence converging to some limit point $u\in L^2(0,T,V_{1/2})\cap C^\beta([0,T],V_{-\delta})$. Let us denote this subsequence simply by $(u_n)_{n\in\mathbb{N}}$. Then, since $B:V_\frac12 \times V_{-\delta} \to V_{-\delta}$ and also $B: V_{-\delta} \times V_\frac12 \to V_{-\delta}$ and $u_n(0)-u(0)=0$, applying Lemma \ref{Bgenerale} we have
\begin{align*}
&\bigg\|\int_0^tS(t-r)(B(u_n(r),u_n(r))-B(u(r),u(r)))dr\bigg\|_{V_{-\delta}} \\
&\le \int_0^t (\|B(u_n(r),u_n(r))-B(u(r),u_n(r))\|_{V_{-\delta}}+\|B(u(r),u_n(r))-B(u(r),u(r))\|_{V_{-\delta}})dr \\
&\le c\int_0^t (\|u_n(r)\|_{V_{1/2}}+\|u(r)\|_{V_{1/2}})\|u(r)-u_n(r)\|_{V_{-\delta}}dr\\
&\le c |||u-u_n|||_{\beta,-\delta,0,T}\int_0^t r^\beta (\|u_n(r)\|_{V_{1/2}}+\|u(r)\|_{V_{1/2}})dr\\
&\le c T^{\beta+\frac12} |||u-u_n|||_{\beta,-\delta,0,T}(\|u_n\|_{L^2(0,T,V_{1/2})}+\|u\|_{L^2(0,T,V_{1/2})}),
\end{align*}
which shows the convergence in $V_{-\delta}$ of the left hand side to zero.
For the stochastic integral we consider the splitting
\begin{align*}
\bigg\|&\int_0^tS(t-r)G(u_n(r))d\omega_n(r)-\int_0^tS(t-r)G(u(r))d\omega(r)\bigg\|_{V_{-\delta}}\\
&\le \bigg\|\int_0^tS(t-r)G(u_n(r))d(\omega_n(r)-\omega(r))\bigg\|_{{V_{-\delta}}}+
\bigg\|\int_0^tS(t-r)(G(u_n(r))-G(u(r)))d\omega_n(r)\bigg\|_{V_{-\delta}}.
\end{align*}
Similarly to \eqref{eq10}, an upper bound for the first integral on the right hand side is given by
\begin{equation*}
CT^{\beta'}|||\omega_n-\omega|||_{\beta'}\left(1+T^\beta|||u_n|||_{\beta,-\delta,0,T}\right),
\end{equation*}
and since the set $\{|||u_n|||_{\beta,-\delta,0,T}\}_{n\in\mathbb{N}}$ is bounded, we obtain the convergence in $V_{-\delta}$ of the first integral on the right hand side.
Now, using \eqref{eq16}-\eqref{eq17} with $u_1=u_{n},\,u_2=u$, we arrive at
\begin{align*}
\bigg\|\int_0^tS(t-r)(G(u_n(r))-G(u(r)))d\omega_n(r)\bigg\|_{V_{-\delta}}&\leq c|||\omega_n|||_{\beta^\prime}T^{\beta^\prime}|||u_n-u|||_{\beta,-\delta,0,T}\\
\quad &\times (1+T^\beta(1+|||u_n|||_{\beta,-\delta,0,T}+|||u|||_{\beta,-\delta,0,T})),
\end{align*}
which shows the convergence in $V_{-\delta}$ of the second integral.
Also, since $(u_n)_{n\in \mathbb N}$ converges to $u$ in $C([0,T],V_{-\delta})$, for every $t\in [0,T]$ we have that $u_n(t) \to u(t)$ in $V_{-\delta}$.
(iii) Since $u\in L^2(0,T,V_{1/2})\cap L^\infty(0,T,V)$ we have that $t\mapsto B(u(t),u(t))\in L^2(0,T,V)$, and hence the continuity in $V$ of the first integral of
\eqref{eq7} with respect to $t$ follows. Moreover, since $u\in C^\beta([0,T],V_{-\delta})$, by {\bf (G)} we obtain that
\begin{equation*}
t\mapsto \int_0^tS(t-r)G(u(r))d\omega\in C([0,T],V).
\end{equation*}
(iv) Collecting the above properties, on the one hand (i)-(ii) mean that $u\in C^{\beta}([0,T],V_{-\delta}) \cap L^2(0,T,V_{1/2})$ and $u$ satisfies (\ref{eq7}) in $V_{-\delta}$. On the other hand, (iii) means that the right hand side of (\ref{eq7}) belongs to $C([0,T],V)$, and hence so does the left hand side. In conclusion, we have proven the existence of a mild solution $u$ to the stochastic shell--model in the sense of Definition \ref{def1}.
\end{proof}
\section{An example of diffusion term}
We define the operator $G$ by a sequence of functions $g_m^n(u)\in \mathbb{C}$ with $u\in V_{-\delta}$, such that for $v\in V$:
\begin{equation}\label{ex}
(G(u)v)_{n}:=\sum_{m=1}^{\infty}g^{n}_{m}(u)v_{m}.
\end{equation}
We now impose conditions on this sequence so that $G$ satisfies the hypotheses {\bf (G)}.
Assume that
\begin{equation}\label{g}
\sup_{u\in V_{-\delta}}\sum_{n, m=1}^{\infty}|g_{m}^{n}(u)|^{2}=:c_G^2<\infty .
\end{equation}
In addition, let us assume that the operators $g_m^n$ are twice differentiable with the following properties:
for $u, h\in V_{-\delta}$ and $(f_k)_{k\in\mathbb{N}}$ an orthonormal basis of $V_{-\delta}$ we have that
\begin{align}\label{g2}
\begin{split}
&\sum_{n, m=1}^{\infty}\left(g_{m}^{n}(u+h)-g_{m}^{n}(u)-Dg_{m}^{n}(u)h\right)^2=
\sum_{n, m=1}^{\infty}\left(o^{n,m}_u(\|h\|_{V_{-\delta}})\right)^2=:\left(o_u(\|h\|_{V_{-\delta}})\right)^2,\\
&\sup_{u\in V_{-\delta}}\sum_{n, m,k=1}^{\infty}|D g_{m}^{n}(u)f_k|^{2}=:c_{DG}^2<\infty.
\end{split}
\end{align}
The $o^{n,m}_u,\,o_u$ have the usual properties: $\lim_{h\to 0}|o^{n,m}_u(\|h\|_{V_{-\delta}})|/\|h\|_{V_{-\delta}}=0$, and similarly for $o_u$.
In addition we assume that for $u, h_1, h_2\in V_{-\delta}$
\begin{align}\label{g3}
\begin{split}
&\sum_{n, m=1}^{\infty}\left(Dg_{m}^{n}(u+h_2)h_1-Dg_{m}^{n}(u)h_1-D^2g_{m}^{n}(u)h_1 h_2\right)^2=
\sum_{n, m=1}^{\infty}\left(o^{n,m}_{u, h_1}(\|h_2\|_{V_{-\delta}})\right)^2
=:\left(o_{u,h_1}(\|h_2\|_{V_{-\delta}})\right)^2,\\
&\sup_{u\in V_{-\delta}}
\sum_{n, m,k,l=1}^{\infty}|D^2 g_{m}^{n}(u)(f_k,f_l)|^{2}=:c_{D^2G}^2<\infty,
\end{split}
\end{align}
where the {\em little o's} have the same property as above.
Now we can verify the properties of the operator $G$ formulated in hypothesis {\bf(G)}.
It follows from \eqref{g} that
\begin{align*}
\sup_{u\in V_{-\delta}}\|G(u)\|_{L_2(V)}^2&=\sup_{u\in V_{-\delta}}\sum_{m=1}^{\infty} \|G(u)e_{m}\|^2=
\sup_{u\in V_{-\delta}}\sum_{n,m=1}^{\infty} |(G(u)e_{m})_{n}|^2\\
& = \sup_{u\in V_{-\delta}}\sum_{n,m=1}^{\infty}|g^{n}_{m}(u)|^2=c_G^2.
\end{align*}
Simple calculations show that \eqref{g2} and \eqref{g3} imply that the operators $DG$ and $D^2G$ exist and are bounded. In fact, if $u, h\in V_{-\delta}$, then we have that
\begin{equation*}
\|G(u+h)-G(u)-DG(u)h\|_{L_{2}(V)}^{2}=\left(o(\|h\|_{V_{-\delta}})\right)^2
\end{equation*}
and
\begin{align*}
\sup_{u\in V_{-\delta}}\|DG(u)\|_{L_2(V \times V_{-\delta},V)}^2 &= \sup_{u\in V_{-\delta}}\sum_{m,k=1}^{\infty} \|DG(u)(e_m,f_k)\|^2= \sup_{u\in V_{-\delta}}\sum_{n,m,k=1}^{\infty}
|Dg^{n}_{m}(u)f_k|^2 =c_{DG}^2.
\end{align*}
Now, using the boundedness of $DG$, we can prove the Lipschitz condition.
Similarly, \eqref{g3} implies that the operator $D^2G$ exists and is bounded. Using the boundedness of the second derivative of $G$, standard calculations give \eqref{eq12}.
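As a simple illustration of a family $(g_m^n)$ satisfying the above conditions (one possible choice among many; the coefficients $a_{nm}$ and the function $\varphi$ below are not fixed by the model), let $(a_{nm})_{n,m\geq 1}$ be a square--summable family of complex numbers and let $\varphi:\mathbb{R}\to\mathbb{C}$ be twice differentiable with $\varphi,\,\varphi',\,\varphi''$ bounded, and set
\begin{equation*}
g_m^n(u):=a_{nm}\,\varphi\left(\langle u,f_m\rangle_{V_{-\delta}}\right),\qquad u\in V_{-\delta},
\end{equation*}
where $\langle\cdot,\cdot\rangle_{V_{-\delta}}$ denotes the inner product of $V_{-\delta}$. Then \eqref{g} holds with $c_G^2\leq \|\varphi\|_\infty^2\sum_{n,m}|a_{nm}|^2$. Moreover, $Dg_m^n(u)h=a_{nm}\varphi'(\langle u,f_m\rangle_{V_{-\delta}})\langle h,f_m\rangle_{V_{-\delta}}$, and since $(f_k)_{k\in\mathbb{N}}$ is orthonormal,
\begin{equation*}
\sum_{n,m,k=1}^{\infty}|Dg_m^n(u)f_k|^2=\sum_{n,m=1}^{\infty}|a_{nm}|^2\,|\varphi'(\langle u,f_m\rangle_{V_{-\delta}})|^2\leq \|\varphi'\|_\infty^2\sum_{n,m=1}^{\infty}|a_{nm}|^2,
\end{equation*}
so \eqref{g2} holds, the remainder estimate following from Taylor's theorem and the boundedness of $\varphi''$; condition \eqref{g3} is verified in the same way.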
\section{Appendix}
We start this section by completing the proof of Lemma \ref{l7}; in fact, in item (i) of the next result we prove slightly more.
\begin{lemma}\label{l4}
(i) Let $I_4,\,I_5$ be defined in \eqref{eq4}. Then for any sufficiently small $\varepsilon\geq 0$ such that $\beta^\prime-\hat \beta>\varepsilon$ we have
\begin{equation*}
\sup_{0\leq p<q\leq t} \frac{\|A^\varepsilon I_4(p,q)\|+\|A^\varepsilon I_{5}(p,q)\|}{(q-p)^{\hat\beta}} \leq c t^{\beta'-\hat\beta-\varepsilon}|||\omega|||_{\beta'}
\left(1+t^{\hat\beta}|||u|||_{\hat\beta,-\delta,0,t}\right).
\end{equation*}
(ii) Let $I_2,\, I_3$ be defined in \eqref{eq4} and let $0\le \varepsilon<\delta-1/2$.
Then
\begin{equation*}
\sup_{0\leq p<q\leq t} \frac{\|A^\varepsilon I_{2}(p,q)\|+\|A^\varepsilon I_{3}(p,q)\|}{(q-p)^{\hat\beta}}\le ct^{1-\hat \beta}\|u\|_{C,0,t}^2.
\end{equation*}
\end{lemma}
Note that \eqref{eq10} then follows from (i) by simply taking $\varepsilon=0$.
\begin{proof}
Throughout the proof we will frequently use the properties \eqref{eq1}, \eqref{eq2} and (\ref{prop}). We choose an $\alpha$ satisfying the same conditions as in Lemma \ref{l5}, that is, $1-\beta^\prime<\alpha<\hat \beta$.
First, using the definition of the stochastic integral and the estimate (\ref{er}),
\begin{align*}
&\sup_{0\leq p<q\leq t}\frac{\|A^\varepsilon I_{4}(p,q)\|}{(q-p)^{\hat\beta}}\leq \sup_{0\leq p<q\leq t} \frac{1}{(q-p)^{\hat\beta}}
|||\omega|||_{\beta'} \int_{p}^{q}(q-r)^{\alpha+\beta'-1} \left(\frac{\|S(q-r)A^\varepsilon A^{-\delta}G(u(r))\|_{L_2(V)}}{(r-p)^{\alpha}} \right.\\
&\qquad \left.+\int_{p}^{r}\frac{\|(S(q-r)-S(q-\hat r))A^\varepsilon A^{-\delta}G(u(r))\|_{L_2(V)}}{(r-\hat r)^{\alpha+1}}d\hat r+
\int_{p}^{r}\frac{\|S(q-\hat r)A^\varepsilon A^{-\delta}(G(u(r))-G(u(\hat r)))\|_{L_2(V)}}{(r-\hat r)^{\alpha+1}}d\hat r \right) dr.
\end{align*}
The first term is estimated by
\begin{align*}
\frac{\|S(q-r)A^\varepsilon A^{-\delta}G(u(r))\|_{L_2(V)}}{(r-p)^{\alpha}}
&\leq c\frac{c_G}{(r-p)^{\alpha}(q-r)^\varepsilon}
\end{align*}
and since $\alpha+\beta^\prime-\varepsilon>0$, we get
\begin{align*}
\sup_{0\leq p<q\leq t} c \frac{c_G}{(q-p)^{\hat\beta}}
|||\omega|||_{\beta'} \int_{p}^{q} (q-r)^{\alpha+\beta'-\varepsilon-1}(r-p)^{-\alpha} dr \leq c |||\omega|||_{\beta'} t^{\beta'-\hat\beta-\varepsilon}.
\end{align*}
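Integrals of this type recur throughout the proof, so we record the elementary computation behind such bounds; it is a standard Beta--function identity, obtained with the substitution $r=p+(q-p)s$:
\begin{align*}
\int_p^q (q-r)^{a-1}(r-p)^{-\alpha}\,dr=(q-p)^{a-\alpha}\int_0^1(1-s)^{a-1}s^{-\alpha}\,ds=(q-p)^{a-\alpha}B(a,1-\alpha),
\end{align*}
valid for $a>0$ and $\alpha<1$. In particular, with $a=\alpha+\beta'-\varepsilon$ the integral above produces the factor $(q-p)^{\beta'-\varepsilon}$, and dividing by $(q-p)^{\hat\beta}$ yields the exponent $\beta'-\hat\beta-\varepsilon$.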
Concerning the second term, taking an appropriate $\alpha^\prime >\alpha$ such that $\alpha+\beta^\prime> \alpha^\prime+\varepsilon$, we have
\begin{align*}
\int_{p}^{r}&\frac{\|(S(q-r)-S(q-\hat r))A^\varepsilon A^{-\delta}G(u(r))\|_{L_2(V)}}{(r-\hat r)^{\alpha+1}}d\hat r
\leq \frac{c\,c_G}{(q-r)^{\alpha'+\varepsilon}}\int_{p}^{r}\frac{(r-\hat r)^{\alpha'}}{(r-\hat r)^{\alpha+1}}d\hat r\leq \frac{c\, c_G(r-p)^{\alpha'-\alpha}}{(q-r)^{\alpha'+\varepsilon}},
\end{align*}
and hence
\begin{align*}
\sup_{0\leq p<q\leq t} \frac{c\, c_G}{(q-p)^{\hat\beta}}
|||\omega|||_{\beta'} \int_p^q\frac{(r-p)^{\alpha'-\alpha}}{(q-r)^{\alpha'+\varepsilon}}(q-r)^{\alpha+\beta^\prime-1}dr\le c |||\omega|||_{\beta'}t^{\beta^\prime-\hat\beta-\varepsilon}.
\end{align*}
Finally, since $\hat\beta>\alpha$,
\begin{align*}
\int_{p}^{r}&\frac{\|S(q-\hat r)A^\varepsilon A^{-\delta}(G(u(r))-G(u(\hat r)))\|_{L_2(V)}}{(r-\hat r)^{\alpha+1}}d\hat r\leq
c\int_{p}^{r}\frac{\|A^{-\delta}(G(u(r))-G(u(\hat r)))\|_{L_2(V)}}{(r-\hat r)^{\alpha+1}(q-\hat r)^\varepsilon}d\hat r\\
&\leq c c_{DG} |||u|||_{\hat\beta,-\delta,0,t}\frac{1}{(q-r)^\varepsilon}\int_{p}^{r}\frac{(r-\hat r)^{\hat \beta}}{(r-\hat r)^{\alpha+1}}d\hat r
\leq c c_{DG} |||u|||_{\hat\beta,-\delta,0,t} \frac{(r-p)^{\hat\beta-\alpha}}{(q- r)^\varepsilon},
\end{align*}
and since $\beta'+\alpha-\varepsilon >0$ we have
\begin{align*}
\sup_{0\leq p<q\leq t} \frac{c c_{DG} |||u|||_{\hat\beta,-\delta,0,t}}{(q-p)^{\hat\beta}}
|||\omega|||_{\beta'} \int_{p}^{q}(q-r)^{\alpha+\beta'-\varepsilon-1}(r-p)^{\hat\beta-\alpha}dr& \leq c |||\omega|||_{\beta'} |||u|||_{\hat\beta,-\delta,0,t} t^{\beta'-\varepsilon}.
\end{align*}
Hence, we get that
\begin{align*}
\sup_{0\leq p<q\leq t} & \frac{\|A^{\varepsilon}I_{4}(p,q)\|}{(q-p)^{\hat\beta}}\leq
ct^{\beta'-\hat\beta-\varepsilon}|||\omega|||_{\beta'}
\left(1+t^{\hat\beta}|||u|||_{\hat\beta,-\delta,0,t}\right).
\end{align*}
Thanks to the definition of the stochastic integral and the estimate (\ref{er}), for $I_{5}$ we get
\begin{align*}
\sup_{0\leq p<q\leq t} & \frac{\|A^{\varepsilon}I_{5}(p,q)\|}{(q-p)^{\hat\beta}}\leq
\sup_{0\leq p<q\leq t} \frac{1}{(q-p)^{\hat\beta}} |||\omega|||_{\beta'} \int_{0}^{p}(p-r)^{\alpha+\beta'-1} \bigg( \frac{\|(S(q-r)-S(p-r))A^{\varepsilon}A^{-\delta}G(u(r))\|_{L_2(V)}}{r^{\alpha}} \\
&\qquad \qquad +\int_{0}^{r}\frac{\|(S(q-\hat r)-S(p-\hat r))A^{\varepsilon}A^{-\delta}(G(u(r))-G(u(\hat r)))\|_{L_2(V)}}{(r-\hat r)^{\alpha+1}}d\hat r \\
&\qquad \qquad +\int_{0}^{r}\frac{\|(S(q-r)-S(q-\hat r)-S(p-r)+S(p-\hat r))A^{\varepsilon}A^{-\delta}G(u(r))\|_{L_2(V)}}{(r-\hat r)^{\alpha+1}} d\hat r\bigg) dr \\
&=:\sup_{0\leq p<q\leq t}\frac{1}{(q-p)^{\hat\beta}} |||\omega|||_{\beta'} \int_{0}^{p}(p-r)^{\alpha+\beta'-1}
\left(I_{5,1}+I_{5,2}+I_{5,3}\right)dr.
\end{align*}
We start with
\begin{align*}
I_{5,1}&=\frac{\|(S(q-p)-{\rm Id})S(p-r)A^{\varepsilon}A^{-\delta}G(u(r))\|_{L_2(V)}}{r^{\alpha}}\leq c \frac{ c_G(q-p)^{\hat\beta}}{(p-r)^{\hat\beta+\varepsilon}r^{\alpha}},
\end{align*}
and because $\alpha<1/2$ and $\alpha+\beta^\prime-\hat\beta-\varepsilon>0$, the term involving $I_{5,1}$ is estimated by
\begin{align*}
\sup_{0\leq p<q\leq t}\frac{c \, c_G }{(q-p)^{\hat\beta}} |||\omega|||_{\beta'}(q-p)^{\hat\beta}\int_{0}^{p}(p-r)^{\alpha+\beta'-1-\hat\beta-\varepsilon}r^{-\alpha}dr
&\leq c |||\omega|||_{\beta'} t^{\beta'-\hat\beta-\varepsilon}.
\end{align*}
On the other hand,
\begin{align*}
I_{5,2}&= \int_{0}^{r} \frac{\|(S(q-p)-{\rm Id})S(p-\hat r)A^{\varepsilon}A^{-\delta}(G(u(r))-G(u(\hat r)))\|_{L_2(V)}}{(r-\hat r)^{\alpha+1}}d\hat r\\
&\leq c\, c_{DG}\int_{0}^{r}\frac{(p-\hat r)^{-\hat\beta-\varepsilon} (q-p)^{\hat\beta} \|u(r)-u(\hat r)\|_{V_{-\delta}}}{(r-\hat r)^{\alpha+1}}d\hat r\\
&\leq c\, c_{DG} |||u|||_{\hat\beta,-\delta,0,t}(p-r)^{-\hat\beta-\varepsilon} (q-p)^{\hat\beta} \int_{0}^{r} \frac{1}{(r-\hat r)^{\alpha+1-\hat\beta}}d\hat r\\
&\leq c\, c_{DG} |||u|||_{\hat\beta,-\delta,0,t} (p-r)^{-\hat\beta-\varepsilon} (q-p)^{\hat\beta} r^{\hat \beta-\alpha},
\end{align*}
and thus
\begin{align*}
\begin{split}
\sup_{0\leq p<q\leq t} & \frac{1}{(q-p)^{\hat\beta}} |||\omega|||_{\beta^\prime} \int_{0}^{p}(p-r)^{\alpha+\beta'-1}I_{5,2}dr\\
&\leq c\, c_{DG} |||\omega|||_{\beta^\prime} |||u|||_{\hat\beta,-\delta,0,t} \sup_{0\leq p<q\leq t} \int_{0}^{p} (p-r)^{\alpha+\beta'-1-\hat\beta-\varepsilon}
r^{\hat \beta -\alpha}dr\\
&\leq c\, c_{DG} |||\omega|||_{\beta^\prime} |||u|||_{\hat\beta,-\delta,0,t} t^{\beta'-\varepsilon}.
\end{split}
\end{align*}
Finally, taking $\alpha^\prime$ close enough to $\alpha$ such that $\alpha^\prime >\alpha$ and $\alpha+\beta^\prime> \alpha^\prime+\hat\beta+\varepsilon$ (for a small enough $\varepsilon$), and applying the second part of \eqref{eq30},
\begin{align*}
I_{5,3}&\leq c\int_{0}^{r}\frac{(q-p)^{\hat\beta}(r-\hat r)^{\alpha'}(p-r)^{-\alpha'-\hat\beta-\varepsilon}\|A^{-\delta}G(u(r))\|_{L_2(V)}}
{(r-\hat r)^{\alpha+1}}d\hat r\\
&\leq c \,c_G (q-p)^{\hat\beta}(p-r)^{-\alpha'-\hat\beta-\varepsilon}
\int_{0}^{r}(r-\hat r)^{\alpha'-\alpha-1}d\hat r\\
&\leq c \, c_G (q-p)^{\hat\beta}(p-r)^{-\alpha'-\hat\beta-\varepsilon}r^{\alpha'-\alpha},
\end{align*}
and hence
\begin{align*}
\sup_{0\leq p<q\leq t} & \frac{1}{(q-p)^{\hat\beta}} |||\omega|||_{\beta^\prime} \int_{0}^{p}(p-r)^{\alpha+\beta'-1}I_{5,3}dr\\
&\leq
c \, \tilde c_G |||\omega|||_{\beta^\prime} \sup_{0\leq p<q\leq t} \int_{0}^{p} (p-r)^{\alpha+\beta'-1-\alpha'-\hat\beta-\varepsilon}r^{\alpha'-\alpha}dr\\
&\leq c \, \tilde c_G |||\omega|||_{\beta^\prime} t^{\beta^\prime-\hat\beta-\varepsilon} .
\end{align*}
Taking into account the previous estimates, we finally get
\begin{align*}
\sup_{0\leq p<q\leq t} & \frac{\|A^{\varepsilon}I_{5}(p,q)\|}{(q-p)^{\hat\beta}} \leq c t^{\beta'-\hat\beta-\varepsilon}|||\omega|||_{\beta'}
\left(1+t^{\hat\beta}|||u|||_{\hat\beta,-\delta,0,t}\right).
\end{align*}
(ii) The proof of this part follows similarly to the estimates \eqref{eq18} and \eqref{eq19}. In particular, for the estimate of $\|A^\varepsilon I_2(p,q)\|$ we need to use the continuous embedding $V \subset V_{-\delta+\varepsilon+1/2}$, which holds true for small enough $\varepsilon\ge 0$ since $\delta \in (\hat \beta, 1)$.
\end{proof}
The rest of the Appendix section is devoted to the proof of Lemma \ref{corou}, which relies upon several results that are proven below.
\begin{lemma}\label{l6}
Let $1/2<\hat \beta<\tilde \beta<\delta$ and suppose that $u\in C^{\tilde \beta}([0,T],V_{-\delta})$. Then the mapping
\begin{equation*}
[s,T]\ni t\mapsto |||u|||_{\hat \beta,-\delta,s,t}
\end{equation*}
is continuous and
\begin{equation*}
\lim_{t\to s^+} |||u|||_{\hat \beta,-\delta,s,t}=0.
\end{equation*}
\end{lemma}
\begin{proof}
We only consider the case $s=0$. Let us define the following transform of $u$ given by
\begin{equation*}
\hat u_{\hat t}(r)=\left\{\begin{array}{lcr}
u(r)&:& r\le \hat t,\\
u(\hat t)&:& r\ge \hat t.
\end{array}
\right.
\end{equation*}
Then for $0\le \hat t <t\le T$
\begin{align*}
&|||u|||_{\hat \beta,-\delta,0,t}-|||u|||_{\hat \beta,-\delta,0,\hat t}=|||u|||_{\hat \beta,-\delta,0,t}-|||\hat u_{\hat t}|||_{\hat \beta,-\delta,0,t}\le|||u|||_{\hat \beta,-\delta,\hat t,t}
\le c (t-\hat t)^{\tilde \beta-\hat \beta}|||u|||_{\tilde \beta,-\delta,0,T},
\end{align*}
from which the desired continuity follows immediately. The convergence to 0 follows in the same way.
\end{proof}
\begin{lemma}\label{xx1}
For positive continuous functions $a(t),\,b(t)$ consider
\begin{equation*}
Y=b(t)+a(t) Y^2
\end{equation*}
and assume $4a(t)b(t)<1$ for every $t\in [0,t_1]$, where $t_1>0$ is some positive number. Then there exist two real solutions $Y_1(t)<Y_2(t)\in\mathbb{R}^+$ given by
\begin{equation*}
Y_{1}(t)=\frac{1}{2a(t)}(1-\sqrt{1-4a(t)b(t)}),\quad Y_{2}(t)=\frac{1}{2a(t)}(1+\sqrt{1-4a(t)b(t)}),
\end{equation*}
where $Y_1(t)\le 2 b(t)$.
Suppose in addition that $y(t)\ge0$ is continuous on $[0,t_1]$ such that
\begin{equation*}
y(t)\le b(t)+a(t) y(t)^2,\quad \lim_{t\to 0^+}y(t)=0,
\end{equation*}
and that $\lim_{t\to 0^+}a(t)=0$.
Then we have $y(t)\le Y_1(t)$ on $[0,t_1]$.
\end{lemma}
\begin{proof}
It follows from Sohr \cite{Sohr}, page 317, that under the conditions of the lemma there exist real solutions $Y_1$, $Y_{2}$ satisfying the above conditions.
On the other hand, $y$ satisfies the above inequality if and only if $y(t)\le Y_1(t)$ or $y(t)\ge Y_2(t)$. If $y(t)\ge Y_2(t)$ for some $t\in (0,t_1]$, then by the continuity of $y,\,Y_2$ and by the fact that $Y_2(t)>Y_1(t)$ on $(0,t_1]$, it follows that $y(t)\ge Y_2(t)$ on $(0,t_1]$. However, under the assumptions we have $\lim_{t\to 0^+}Y_2(t)=+\infty$, which contradicts the behavior of $y$.
\end{proof}
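As a quick numerical sanity check of the root formulas above (an illustrative sketch, not part of the proof; the function name is ours), note that $Y_1=2b/(1+\sqrt{1-4ab})\le 2b$ since the denominator is at least $1$:

```python
import math

def roots(a, b):
    """Real roots of Y = b + a*Y**2, i.e. a*Y**2 - Y + b = 0; assumes 4ab < 1."""
    disc = math.sqrt(1 - 4 * a * b)
    return (1 - disc) / (2 * a), (1 + disc) / (2 * a)

a, b = 0.1, 0.5                              # sample values with 4ab = 0.2 < 1
y1, y2 = roots(a, b)
assert math.isclose(y1, b + a * y1 ** 2)     # Y1 is a fixed point of Y = b + aY^2
assert math.isclose(y2, b + a * y2 ** 2)     # so is Y2
assert y1 < y2 and y1 <= 2 * b               # Y1 = 2b/(1+sqrt(1-4ab)) <= 2b
```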
To simplify the presentation of the following technical result we assume that $T=1$.
In the following, see Lemma \ref{l8} below, we shall consider inequalities of the type
\begin{equation}\label{ec}
y(t)\le d(t,x)y(t) +f(t,x)+h(t) y(t)^2,\quad t\in [0,t_1],
\end{equation}
where the increasing functions $d(\cdot,x),\,f(\cdot,x),\,h(\cdot)$ are defined by
\begin{align}\label{ec1}
&d(t,x)=ct^{\beta^\prime}+2c^3t^{1+2\beta^\prime}+2 c^2xt^{1+\beta^\prime},\nonumber\\
&f(t,x)=x t^{\delta-\hat\beta}
+cx^2t^{1-\hat\beta}+c^2xt^{1+\beta^\prime-\hat\beta}+c^3t^{1+2\beta^\prime-\hat\beta}+ct^{\beta^\prime-\hat\beta},\\
&h(t)=4 c^3t^{1+2\beta^\prime+\hat\beta}.\nonumber
\end{align}
Note that $d(t,x)$ and $f(t,x)$ depend on a positive parameter $x$. Furthermore, the constants $c,\,c^2,\,c^3$ come from the estimates of Lemma \ref{l5} and Lemma \ref{l7}, as we will show in Lemma \ref{l8} below. In particular, these constants are chosen so that they include the value $|||\omega|||_{\beta^\prime}=|||\omega|||_{\beta^\prime,0,1}$. In the following proof we need these constants with norms only for subintervals of $[0,1]$; however,
using $|||\omega|||_{\beta^\prime,0,1}$, these constants can be chosen independently of the subinterval.
In that result, depending on the value of $x$, we shall choose $t_1>0$ such that $d(t_1,x)\le 1/2$. Then, defining $a(t):=2h(t)$ and $b(t,x):=2f(t,x)$, we can rewrite (\ref{ec}) as
\begin{equation*}
y(t)\le b(t,x)+a(t) y(t)^2,\quad t\in [0,t_1],
\end{equation*}
which has the form of the inequality in Lemma \ref{xx1}. Let us emphasize that with the above choice $\lim_{t\to 0^+}a(t)=\lim_{t\to 0^+}2h(t)=0$. In the next result we will also choose suitable values of $x$ such that the remaining assumptions of Lemma \ref{xx1} also hold.\\
\begin{lemma}\label{l8}
Let $u$ be a solution of \eqref{abstract} on $[0,1]$ with initial condition $u_0\in V$ and driven by a piecewise linear continuous path $\omega$. Then for any $x_0\ge \max\{1, \bar c\|u_0\|, \|u_0\|\}$ (where $\bar c$ denotes the constant of (\ref{beta})) there exist constants $K\ge \hat K>1$ defining finitely many intervals $(I_i)_{i=1,\cdots,i^\ast}$ by
\begin{equation*}
I_1=[0,\frac{1}{\hat K}]=[\check t_1,\hat t_1],\cdots, I_i=[\hat t_{i-1},\hat t_{i-1}+\frac{1}{Ki}]=[\check t_i,\hat t_i]
\end{equation*}
in such a way that on $I_i$ we have
\begin{align*}
|||u|||_{\hat\beta,-\delta,I_1}\le (\hat K)^{\hat \beta},\quad |||u|||_{\hat\beta,-\delta,I_i} \le(Ki)^{\hat\beta},\\
\|u\|_{C,I_1}\le \frac{3c {\hat K}^{1-\beta^\prime}}{1-\beta^\prime},\quad \|u\|_{C,I_i}\le \frac{3c (Ki)^{1-\beta^\prime}}{1-\beta^\prime}
\end{align*}
for $i=2,\cdots, i^\ast$. The constant $c$ depends in particular on $|||\omega|||_{\beta^\prime,0,1}$.
\end{lemma}
We point out that in the previous result $i^\ast$ is given by the condition $\check t_{i^\ast}<1=T\le \hat t_{i^\ast}$, and in this case we set $\hat t_{i^\ast}=1$.
\begin{proof}
We abbreviate $x_1(t):=\max \{1,\|u\|_{C,0,t}\}$ and $y_1(t):=|||u|||_{\hat\beta,-\delta,0,t}$ for $t\in I_1=[0,\hat t_1]$, where $\hat t_1$ will be determined later.
The inequality \eqref{E1} together with the fact that $x_0\ge \max\{1, \|u_0\|\}$ imply
\begin{align*}
\|u(t)\|^{2}
& \leq \|u_{0}\|^{2}+c t^{\beta^\prime}\|u\|_{C,0,t}+c t^{\hat{\beta}+\beta^\prime}(1+\|u\|_{C,0,t})|||u|||_{\hat \beta,-\delta,0,t}\\
&\leq x_0^{2}+c t^{\beta^\prime}\|u\|_{C,0,t}+c t^{\hat{\beta}+\beta^\prime}(1+\|u\|_{C,0,t})|||u|||_{\hat \beta,-\delta,0,t},
\end{align*}
and also
\begin{align*}
1
&\leq x_0^{2}+c t^{\beta^\prime}\|u\|_{C,0,t}+c t^{\hat{\beta}+\beta^\prime}(1+\|u\|_{C,0,t})|||u|||_{\hat \beta,-\delta,0,t}.
\end{align*}
Then
\begin{align*}
\max\{\|u\|_{C,0,t}^{2},1\}
&\leq x_0^{2}+c t^{\beta^\prime}\|u\|_{C,0,t}+c t^{\hat{\beta}+\beta^\prime}(1+\|u\|_{C,0,t})|||u|||_{\hat \beta,-\delta,0,t}\\
&\leq x_0^{2}+c t^{\beta^\prime}\max\{1,\|u\|_{C,0,t}\}+c t^{\hat{\beta}+\beta^\prime}(1+\max\{1,\|u\|_{C,0,t}\})|||u|||_{\hat \beta,-\delta,0,t}
\end{align*}
and therefore
\begin{equation}\label{eq21}
x_1^2(t)\le x_0^2+c\,x_1(t) t^{\beta^\prime}+2c x_1(t)\,y_1(t) t^{\hat\beta+\beta^\prime}.
\end{equation}
Furthermore, \eqref{beta} implies
\begin{equation}\label{eq22}
y_1(t)\le x_0 t^{\delta-\hat\beta}+c x_1^2(t) t^{1-\hat\beta}+c t^{\beta^\prime-\hat\beta}+cy_1(t) t^{\beta^\prime}.
\end{equation}
Note that in \eqref{eq21} we have used that $x_1(t)\ge 1$, and therefore the corresponding last term on the right-hand side of (\ref{E1}) can be estimated as
$$c t^{\hat\beta+\beta^\prime} (1+x_1(t)) y_1(t) \leq 2c x_1(t)\,y_1 (t) t^{\hat\beta+\beta^\prime}.$$
Now combining \eqref{eq21} with \eqref{eq22} we get
\begin{equation}\label{eq14}
y_1(t)\le x_0 t^{\delta-\hat\beta}+c(x_0^2+c\,x_1 (t) t^{\beta^\prime}+2c x_1(t)\,y_1 (t) t^{\hat\beta+\beta^\prime}) t^{1-\hat\beta}+c t^{\beta^\prime-\hat\beta}+cy_1(t) t^{\beta^\prime}.
\end{equation}
In addition, from \eqref{eq21} the following estimate holds:
\begin{equation}\label{eq8}
x_1(t)\le \frac{c t^{\beta^\prime}+2c y_1(t) t^{\hat\beta+\beta^\prime}}{2}+\sqrt{\frac{(c t^{\beta^\prime}+2c y_1(t) t^{\hat\beta+\beta^\prime})^2+4x_0^2}{4}}
\le c t^{\beta^\prime}+2c y_1(t) t^{\hat\beta+\beta^\prime}+x_0,
\end{equation}
and plugging this into \eqref{eq14} we finally arrive at
\begin{equation*}
y_1(t)\le d(t,x_0)y_1(t)+ f(t,x_0)+h(t)y_1(t)^2,\quad t\in I_1=[0,\hat t_1],
\end{equation*}
where the functions have been defined in (\ref{ec1}).
Then, taking $a(t)=2h(t)$ and $b(t,x_0)=2f(t,x_0)$, there exists a $K_1\ge 1$ such that for any $\hat K\ge K_1$ and $\hat t_1=\hat K^{-1}$ we have
\begin{equation*}
d(\hat K^{-1},x_0)\le \frac12, \quad b(\hat K^{-1},x_0)\le \frac{(\hat K)^{\hat \beta}}{2},\quad 4a(\hat K^{-1})b(\hat K^{-1},x_0)<1.
\end{equation*}
Hence, the conditions of Lemma \ref{xx1} hold, and as a consequence we conclude that $y_1(\hat t_1)\le (\hat K)^{\hat \beta}=\hat t_1^{-\hat\beta}$.
Let us fix such a $\hat K$ such that in addition $x_0\le c\hat K^{1-\beta^\prime}/(1-\beta^\prime)$. Then, from (\ref{eq8}), simply using the generic notation $c$ for constants, we get
\begin{equation*}
x_1 (\hat t_1) \le 3c \hat t_1^{\beta^\prime}+x_0\le \frac{4c {\hat K}^{1-\beta^\prime}}{1-\beta^\prime}=:\hat x_1.
\end{equation*}
Now we can repeat the same arguments as above on each interval $I_i$, $i=2,3,\cdots$, with the corresponding suitable changes. In order to do that we need to rewrite the estimates (\ref{E1}) and (\ref{beta}) on those intervals. In particular, on $I_i$ we have to take $u(\hat t_{i-1})$ as initial condition, and $t$ can be estimated by the length of the interval $I_i$, which is nothing but $(Ki)^{-1}$. We take $x_i(t):=\max\{1,\|u\|_{C,\hat t_{i-1},t}\}$ with $x_i(\hat t_{i-1})\ge 1$ and $y_i(t):=|||u|||_{\hat\beta,-\delta,\hat t_{i-1},t}$, for $t\in I_i$. By induction we assume
\begin{equation*}
x_{i-1}(\hat t_{i-1})\le 3c\sum_{j=1}^{i-1}(Kj)^{-\beta^\prime}+x_0\le \frac{4c (K(i-1))^{1-{\beta^\prime}}}{1-\beta^\prime}=: \hat x_{i-1}
\end{equation*}
and choose $K>\hat K$ such that for $i=2,3,\cdots$
\begin{align*}
d((Ki)^{-1},\hat x_{i-1})&=c(Ki)^{-\beta^\prime}+2c^3(Ki)^{-1-2\beta^\prime}+2c^2\hat x_{i-1}(Ki)^{-\beta^\prime-1}\\
&\le cK^{-\beta^\prime}+2c^3K^{-1-2\beta^\prime}+\frac{8c^3}{1-\beta^\prime}K^{1-\beta^\prime}K^{-\beta^\prime-1}
\le
o(K^{-\varepsilon})\le \frac12,\\
f((Ki)^{-1},\hat x_{i-1})&=\hat x_{i-1}(Ki)^{\hat \beta-\delta}+c \hat x_{i-1}^2(Ki)^{\hat\beta-1}+c^2\hat x_{i-1}(Ki)^{-1+\hat\beta-\beta^\prime}+c^3(Ki)^{-1-2\beta^\prime+\hat\beta}+c(Ki)^{-\beta^\prime+\hat\beta} \\
&\le Co(K^{-\varepsilon}) (Ki)^{\hat\beta} \le\frac{(Ki)^{\hat\beta}}{4}
\end{align*}
for a constant $C$ and a sufficiently small $\varepsilon>0$ independent of $K$ and $i$.
For example, for the critical term in the expression of $f$, namely the quadratic term, we have that
\begin{equation*}
c \hat x_{i-1}^2(Ki)^{\hat\beta-1}\le \frac{16c^3}{(1-\beta^\prime)^2} K^{2-2\beta^\prime-1+\hat\beta}i^{2-2\beta^\prime-1+\hat\beta}\le \frac{16 c^3}{(1-\beta^\prime)^2}K^{1-2\beta^\prime}(Ki)^{\hat\beta}\leq Co(K^{-\varepsilon}) (Ki)^{\hat\beta},
\end{equation*}
where the last inequality holds since $\beta^\prime\in (1/2,1)$.
Again, for $a(t)=2h(t),\,b(t,\hat x_{i-1})=2f(t,\hat x_{i-1})$, choosing $K$ sufficiently large such that
\begin{equation*}
4a((Ki)^{-1})b((Ki)^{-1},\hat x_{i-1})\le 16c^3(Ki)^{-1-2\beta^\prime-\hat\beta} \frac{(Ki)^{\hat\beta}}{4}<1,
\end{equation*}
we obtain by Lemma \ref{l6} and Lemma \ref{xx1} that $y_i(\hat t_{i})\le (Ki)^{\hat\beta}$. If we denote $\hat t_i-\check t_i=:\Delta t_i=(Ki)^{-1}$, the previous inequality can be rewritten as $y_i(\hat t_{i})\le (\Delta t_i)^{-{\hat \beta}}$, and similarly to \eqref{eq8}
\begin{align*}
x_i (\hat t_i) & \le c \Delta t_i^{\beta^\prime}+2c y_i (\hat t_i)\Delta t_i^{\hat\beta+\beta^\prime}+x_{i-1}(\hat t_{i-1})\leq 3c\Delta t_i^{\beta^\prime}+x_{i-1}(\hat t_{i-1})\\
&\le x_{0}+3c\sum_{j=1}^i(Kj)^{-\beta^\prime}\le x_0 + 3cK^{-\beta^\prime}\int_0^ir^{-\beta^\prime}dr\le x_0+\frac{3cK^{-\beta^\prime}i^{1-\beta^\prime}}{1-\beta^\prime}\le
\frac{4c (Ki)^{1-\beta^\prime}}{1-\beta^\prime}=:\hat x_i,
\end{align*}
and therefore we obtain that $x_i (\hat t_i)\leq \hat x_i$.
Finally we present the proof of Lemma \ref{corou}:
\begin{proof}
Consider the sequence $(u_n)_{n\in \mathbb{N}}$ of weak solutions of (\ref{eq5}) driven by the sequence $(\omega_n)_{n\in \mathbb{N}}$ of piecewise linear continuous paths. Following the steps of Proposition \ref{prop}, one can prove that each $u_n\in C^{\tilde \beta}([0,T];V_{-\delta})$ with $1/2<\hat \beta <\tilde \beta$. Then we can apply Lemmas \ref{l6}--\ref{l8} to $(u_n)_{n\in \mathbb{N}}$, obtaining that this sequence is uniformly bounded in $C^{\hat\beta}([0,T],V_{-\delta}) \cap C([0,T],V)$.
\end{proof}
\end{document}
\begin{document}
\title{Outside-Obstacle Representations\ with All Vertices on the Outer Face}
\pdfbookmark[1]{Abstract}{Abstract}
\begin{abstract}
An \emph{obstacle representation} of a graph~$G$ consists of a set
of polygonal obstacles and a drawing of~$G$ as a \emph{visibility
graph} with respect to the obstacles:
vertices are mapped to points and edges to straight-line segments
such that each edge avoids all obstacles whereas each non-edge
intersects at least one obstacle.
Obstacle representations have been investigated quite
intensely over the last few years.
Here we focus on \emph{outside-obstacle representations} (OORs) that use
only one obstacle in the outer face of the drawing.
It is known that every outerplanar graph admits such a
representation [Alpert, Koch, Laison; DCG 2010].
We strengthen this result by showing that every (partial) 2-tree has
an OOR. We also consider restricted versions of OORs
where the vertices of the graph lie on a convex polygon
or a regular polygon.
We characterize when the complement of a tree and when a complete graph
minus a simple cycle admits a convex OOR.
We construct regular OORs for all (partial) outerpaths,
cactus graphs, and grids.
\keywords{obstacle representation \and visibility graph \and outside obstacle}
\end{abstract}
\section{Introduction}
\label{sec:intro}
Recognizing graphs that have a certain type of geometric
representation is a well-established field of research dealing with,
e.g., geometric intersection graphs, visibility graphs, and graphs admitting certain contact representations.
Given a set $\mathcal C$ of \emph{obstacles} (here, simple polygons without holes)
and a set~$P$ of points in the plane, the \emph{visibility
graph}~$G_{\mathcal C}(P)$ has a vertex for each point
in~$P$ and an edge~$pq$ for any two points~$p$ and~$q$ in~$P$ that can
\emph{see} each other, that is, the line segment $\overline{pq}$ connecting $p$ and $q$
does not intersect any obstacle in~$\mathcal C$. An \emph{obstacle representation} of a graph $G$ consists of a
set~$\mathcal{C}$ of obstacles in the plane and a mapping of the
vertices of~$G$ to a set~$P$ of points such that~$G = G_{\mathcal C}(P)$.
The mapping defines a straight-line drawing $\Gamma$ of
$G_{\mathcal C}(P)$. We planarize $\Gamma$ by replacing all
intersection points by dummy vertices. The outer face of the resulting
planar drawing is a closed polygonal chain $\Pi_\Gamma$ where vertices
and edges can occur several times. We call the complement
of the closure of $\Pi_\Gamma$ the \emph{outer face} of $\Gamma$.
We differentiate between two types of obstacles: \emph{outside} obstacles lie in the
outer face of the drawing, and \emph{inside} obstacles
lie in the complement of the outer face; see~\cref{fig:inside-outside}.
\begin{figure}
\caption{IOR but not OOR;
see \cite{clpw-ov1o-GD16}}
\label{fig:inside-outside}
\caption{OOR but not convex OOR
(see \cref{clm:ngon:6vertices})}
\label{fig:not-convex}
\caption{Circular OOR but not regular OOR
(move~7 towards $x$)}
\label{fig:outerplanar7}
\caption{Regular OOR of~$C_6$}
\label{fig:regularOOR}
\caption{Inside- and outside-obstacle representations (IORs and OORs).}
\label{fig:examples}
\end{figure}
Every graph trivially admits an obstacle representation:
take an arbitrary straight-line drawing without collinear vertices and
``fill'' each face with an obstacle.
This, however, can lead to a large number of obstacles, which motivates
the optimization problem of finding an obstacle
representation with the minimum number of obstacles.
For a graph $G$, the \emph{obstacle number} $\Obs(G)$ is
the smallest number of obstacles that suffice to represent~$G$ as a
visibility graph.
In this paper, we focus on \emph{outside} obstacle representations (OORs),
that is, obstacle representations with a single outside obstacle
and without any inside obstacles.
For such a representation, it suffices to specify the positions of the~vertices;
the~outside obstacle is simply the whole outer face of the
representation.
In an OOR, every non-edge must thus intersect the outer face.
We also consider three special types:
In~a \emph{convex} OOR,
the vertices must be in convex position;
in a \emph{circular} OOR,
the vertices must lie on a circle; and
in a \emph{regular} OOR,
the vertices must form a regular $n$-gon.
In general, the class of graphs representable by outside obstacles is
not closed under taking subgraphs, but the situation is different for
graphs admitting a \emph{reducible} OOR, meaning that all of its edges
are incident to the outer face:
\begin{obs}
\label{clm:reducible}
If a graph $G$ admits a reducible OOR, then every subgraph of~$G$
also admits such a representation.
\end{obs}
\paragraph{Previous Work.}
Alpert et al.~\cite{akl-ong-DCG10} introduced the notion of the obstacle number of a graph in 2010.
They also introduced
\emph{inside} obstacle representations, i.e., representations without an
outside obstacle. They characterized the class of~graphs
that have an inside obstacle representation
with a single convex obstacle
and showed that every outerplanar graph has an OOR.
Chaplick et al.~\cite{clpw-ov1o-GD16} proved that the class
of graphs with an inside obstacle representation is incomparable with
the class of graphs with an OOR.
They showed that any graph with at most seven vertices has an
OOR, which does not hold for a specific
8-vertex~graph.
Alpert {et~al.}~\cite{akl-ong-DCG10} further showed that
$\Obs(K^*_{m,n}) \le 2$ for any $m \le n$, where
$K^*_{m,n}$
is the complete bipartite graph $K_{m,n}$ minus a matching of size~$m$.
They also proved that $\Obs(K^*_{5,7})=2$.
Pach and Sar\i\"{o}z \cite{ps-sglon-GC11} showed that $\Obs(K^*_{5,5})=2$.
Berman et al.~\cite{bcfghw-gong1-JGAA16} suggested
some necessary conditions for a graph to have obstacle number~1.
They gave a SAT formula that they used to find a {\em planar}
10-vertex graph (with treewidth 4) that has no 1-obstacle representation.
Obviously, any $n$-vertex graph has obstacle number~$\ensuremath{\mathcal{O}}(n^2)$.
Balko {et~al.}~\cite{bcv-dgusno-DCG18} improved this to~$\ensuremath{\mathcal{O}}(n\log n)$.
On the other hand, Balko {et~al.}~\cite{bcghvw-bcong-ESA22} showed that
there are $n$-vertex~graphs whose obstacle number
is $\Omega(n/\log\log n)$, improving previous lower bounds, e.g.,
\cite{akl-ong-DCG10,dm-on-EJC15,mpp-lbong-EJC12,mps-glon-WG10}.
They also showed that, when restricting obstacles to {\em convex}
polygons, for some $n$-vertex~graphs, even $\Omega(n)$ obstacles are
needed. Furthermore, they showed that computing the obstacle number
of a graph~$G$ is fixed-parameter tractable in the vertex cover number
of~$G$.
\paragraph{Our Contribution.}
We first strengthen the result of Alpert et
al.~\cite{akl-ong-DCG10} regarding OORs of outerplanar
graphs by showing that
every (partial) 2-tree admits a reducible OOR
with all vertices on the outer face; see
\cref{sec:nonconvex}. Equivalently, every graph of treewidth at most
two, which includes outerplanar and series-parallel graphs, admits
such a representation.
Then we establish two combinatorial conditions for
convex OORs (see \cref{sec:conditions}).
In particular, we introduce a necessary condition
that can be used to show that a given graph does {\em not} admit a
convex OOR as,~e.g., the graph in \cref{fig:not-convex}.
We apply these conditions to characterize
when the complement of a tree and when a complete graph minus a simple cycle admits a convex OOR.
We construct {\em regular} reducible OORs for all outerpaths, grids,
and cacti; see \cref{sec:ngon}. The result for grids strengthens an
observation by Dujmovi\'{c} and Morin~\cite[Fig.~1]{dm-on-EJC15},
who showed that grids have (outside) obstacle number~1.
We postpone the proofs of statements with a (clickable)
``$\star$'' to the appendix.
\paragraph{Notation.}
For a graph $G$, let $V(G)$ be the vertex set of $G$, and let $E(G)$ be the edge set of $G$.
Arranging the vertices of~$G$ in circular order $\sigma=\langle v_1,\dots,v_n \rangle$,
we write, for $i\neq j$, $[v_i,v_j)$ to refer to the sequence $\langle v_i,v_{i+1},\dots,v_{j-1}\rangle$,
where indices are interpreted modulo $n$. Sequences $(v_i,v_j)$
and $[v_i,v_j]$ are defined analogously.
\section{Outside-Obstacle Representations for Partial 2-Trees}
\label{sec:nonconvex}
The graph class of \emph{$2$-trees} is recursively defined as follows:
$K_3$ is a $2$-tree. Further, any graph is a 2-tree if it is obtained
from a $2$-tree~$G$ by introducing a new vertex~$x$ and making $x$
adjacent to the endpoints of some edge $uv$ in~$G$. We say that~$x$
is \emph{stacked} on~$uv$. The edges~$xu$ and $xv$ are called the
\emph{parent edges} of~$x$.
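The stacking operation in this definition is easy to simulate. The following Python sketch (an illustration of ours; the function name is invented) grows a random 2-tree and checks the standard fact that a 2-tree on $n$ vertices has exactly $2n-3$ edges (the initial $K_3$ contributes $3$, and each stacked vertex adds $2$):

```python
import random

def random_two_tree(n, seed=0):
    """Build a 2-tree on vertices 0..n-1 (n >= 3) by repeatedly
    stacking a new vertex onto a randomly chosen existing edge."""
    rng = random.Random(seed)
    edges = {(0, 1), (0, 2), (1, 2)}      # start from K3
    for x in range(3, n):
        u, v = rng.choice(sorted(edges))  # edge uv onto which x is stacked
        edges |= {(u, x), (v, x)}         # the two parent edges of x
    return edges

# A 2-tree on n vertices has exactly 2n - 3 edges: 3 + 2*(n - 3).
assert len(random_two_tree(10)) == 2 * 10 - 3
```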
\begin{restatable}[{\hyperref[clm:nonconvex*]{$\star$}}]{theorem}{NonConvex}
\label{clm:nonconvex}
Every $2$-tree admits a reducible OOR
with all vertices on the outer face.
\end{restatable}
\begin{sketch}
Every $2$-tree~$T$ can be constructed through the following iterative procedure:
(1) Start with one edge, called the \emph{base} edge and mark its vertices as \emph{inactive}.
Stack any number of vertices onto the base edge and mark them as \emph{active}.
During the entire procedure, every present vertex is
marked either as active or inactive.
Moreover, once a vertex is inactive, it remains inactive for the remainder of the construction.
(2) Pick one active vertex~$v$ and stack any number of vertices onto each of its two parent edges.
All the new vertices are marked as active and $v$ as inactive.
(3) If there are active vertices remaining, repeat step~(2).
We construct a drawing of $T$
by geometrically implementing this iterative procedure,
so that after every step of the algorithm the present part of the graph is realized
as a straight-line drawing satisfying the following invariants:
\begin{enumerate}[left=0pt,label=(\roman*)]
\item Each vertex~$v$ not incident to the base edge is associated with an open circular arc $C_v$
that lies completely in the outer face and whose endpoints belong to the two parent edges of~$v$.
Moreover, $v$ is located at the center of~$C_v$ and the parent edges of $v$ are below~$v$.
\item Each non-edge intersects the circular arc of at least one of its incident vertices.
\item For each active vertex~$v$, the region $R_v$ enclosed by~$C_v$ and the two parent edges of~$v$ is \emph{empty},
meaning that~$R_v$ is not intersected by any edges, vertices, or circular arcs.
\item Every vertex is incident to the outer face.
\end{enumerate}
It is easy to see that once the procedure terminates with a
drawing that satisfies invariants (i)--(iv), we obtain the
desired representation (in particular, invariants~(i) and~(ii)
together imply that each non-edge intersects the outer face).
\paragraph{Construction.}
To carry out step~(1), we draw the base edge horizontally and place the stacked vertices on a common horizontal line above the base edge, see \cref{fig:G1}.
Circular arcs that satisfy the invariants are now easy to define.
\begin{figure}
\caption{Step (1).}
\label{fig:G1}
\caption{Step (2); the shaded areas do not contain any vertices.}
\label{fig:rotation}
\caption{Construction steps in the proof of \cref{clm:nonconvex}}
\label{fig:twoTree:construction}
\end{figure}
Suppose we have obtained a drawing~$\Gamma$ of the graph obtained after step~(1)
and some number of iterations of step~(2) such that~$\Gamma$ is equipped with a set of circular arcs satisfying the invariants (i)--(iv).
We describe how to carry out another iteration of step~(2) while maintaining the invariants.
Let~$v$ be an active vertex.
By invariant~(i), both parent edges of~$v$ are below~$v$.
Let~$e_\ell$ and~$e_r$ be the left and right parent edge, respectively.
Let $\ell_1,\ell_2,\dots,\ell_i$ and $r_1,r_2,\dots,r_j$ be the vertices stacked onto~$e_\ell$ and~$e_r$, respectively.
We refer to $\ell_1,\ell_2,\dots,\ell_i$ and $r_1,r_2,\dots,r_j$ as the \emph{new} vertices; the vertices of~$\Gamma$ are called \emph{old}.
We place all the new vertices on a common horizontal line~$h$ that intersects~$R_v$ above~$v$, see~\cref{fig:rotation}.
The vertices $\ell_1,\ell_2,\dots,\ell_i$ are placed inside~$R_v$, to the right of the line~$\overline{e_\ell}$ extending~$e_\ell$.
Symmetrically, $r_1,r_2,\dots,r_j$ are placed inside~$R_v$, to the left of the line~$\overline{e_r}$ extending~$e_r$.
We place $\ell_1,\ell_2,\dots,\ell_i$ close enough to~$e_\ell$ and $r_1,r_2,\dots,r_j$ close enough to~$e_r$
such that the following properties are satisfied:
\begin{enumerate*}[label=(\alph*)]
\item None of the parent edges~of the new vertices intersect~$C_v$.
\item For each new vertex, the unbounded open~cone obtained by extending its parent edges to the bottom does not contain any~\mbox{vertices}.
\end{enumerate*}
Each of the old vertices retains its circular arc from~$\Gamma$.
By invariants~(i) and~(iii) for $\Gamma$, it is easy to define circular arcs for the new vertices that satisfy invariant~(i).
Using invariants (i)--(iv) for $\Gamma$ and properties~(a) and~(b), it can be shown that all invariants are satisfied.
\end{sketch}
\section{Convex Outside Obstacle Representations}
\label{sec:conditions}
We start with a sufficient condition.
Suppose that we have a convex OOR~$\Gamma$ of a
graph~$G$. Let~$\sigma$ be the clockwise circular order of the
vertices of~$G$ along the convex hull.
If all neighbors of a vertex~$v$ of $G$ are consecutive in~$\sigma$,
we say that $v$ has the \emph{consecutive-neighbors property},
which implies that all non-edges incident to $v$ are consecutive around $v$
and trivially intersect the outer face
in the immediate vicinity of~$v$; see \cref{fig:conditions:cnp}.
\begin{lemma}[Consecutive-neighbors property]
\label{clm:consecutiveNeighbors}
A graph $G$ admits a convex
OOR with circular vertex order $\sigma$ if there is a subset $V'$
of $V(G)$ that covers all non-edges of $G$ and each vertex of $V'$
has the consecutive-neighbors property with respect to $\sigma$.
\end{lemma}
\begin{figure}
\caption{Vertex $v$ has the CNP}
\label{fig:conditions:cnp}
\caption{Gap~$g$ is a candidate gap for the non-edge~$\bar{e}$}
\label{fig:conditions:candidategap}
\caption{Examples for the consecutive-neighbors property (CNP) and a candidate gap.}
\label{fig:conditions}
\end{figure}
Next, we derive a necessary condition.
For any two consecutive vertices $v$ and $v'$ in~$\sigma$
that are not adjacent in~$G$, we say that the line segment
$g = \overline{vv'}$ is a \emph{gap}.
Then the \emph{gap region} of $g$ is the inner face of $\Gamma+vv'$
incident to~$g$; see the gray region in \cref{fig:conditions:candidategap}.
We consider the gap region to be open, but add to it the relative interior
of the line segment $\overline{vv'}$, so that the non-edge $vv'$
intersects its own gap region.
Observe that each non-edge $\bar e = xy$ that intersects the outer face
has to intersect some gap region in an OOR.
Suppose that $g$ lies between $x$ and $y$ with
respect to~$\sigma$, that is, $[v,v'] \subseteq [x,y]$.
We say that $g$ is a \emph{candidate gap} for $\bar e$ if there is no
edge that connects a vertex in $[x,v]$ and a vertex in $[v',y]$.
Note that $\bar e$ can only intersect gap regions of candidate gaps.
\begin{lemma}[Gap condition]
\label{clm:gap-condition}
A graph $G$ admits a convex
OOR with circular vertex order $\sigma$
only if there exists a candidate gap with respect to $\sigma$
for each non-edge of $G$.
\end{lemma}
It remains an open problem whether the gap condition is also sufficient.
Nonetheless, we can use the gap condition for no-certificates.
To this end, we derived a SAT formula from the following expression,
which checks the gap condition for every non-edge of a graph~$G$:
\begin{equation*}
\bigwedge_{xy\notin E(G)} \!
\left[\bigvee_{v\in [x,y)} \!\!
\left(\bigwedge_{u\in [x,v], w\in (v,y]} \!\!\! uw\notin E(G)\right)
\lor
\bigvee_{v\in [y,x)} \!\!
\left(\bigwedge_{u\in [y,v], w\in (v,x]} \!\!\! uw\notin E(G)\right)
\right]
\end{equation*}
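The expression above translates directly into a brute-force check for one fixed circular order. The following Python sketch is our own illustration (invented function names, not the authors' SAT-based implementation): for each non-edge $xy$ it scans both arcs between $x$ and $y$ for a split point $v$ such that no edge connects $[x,v]$ to $(v,y]$:

```python
def satisfies_gap_condition(sigma, edges):
    """Check the gap condition for the circular vertex order sigma:
    every non-edge xy must have a candidate gap, i.e. a split point v
    on one of the two arcs between x and y such that no edge joins the
    part [x..v] of that arc to the remaining part (v..y]."""
    n = len(sigma)
    pos = {v: i for i, v in enumerate(sigma)}
    E = {frozenset(e) for e in edges}

    def arc(a, b):  # vertices from a to b (inclusive), in sigma order
        out, i = [], pos[a]
        while sigma[i] != b:
            out.append(sigma[i])
            i = (i + 1) % n
        return out + [b]

    def has_candidate_gap(x, y):
        for a, b in ((x, y), (y, x)):
            path = arc(a, b)
            for k in range(1, len(path)):        # split into [a..v] | (v..b]
                left, right = path[:k], path[k:]
                if all(frozenset((u, w)) not in E
                       for u in left for w in right):
                    return True
        return False

    non_edges = [(sigma[i], sigma[j])
                 for i in range(n) for j in range(i + 1, n)
                 if frozenset((sigma[i], sigma[j])) not in E]
    return all(has_candidate_gap(x, y) for x, y in non_edges)

# The path 0-1-2-3 satisfies the gap condition for its natural order,
# whereas the wheel W6 (hub 0 plus the 5-cycle 1..5) fails for this order.
path = {(0, 1), (1, 2), (2, 3)}
wheel = {(0, i) for i in range(1, 6)} | {(1, 2), (2, 3), (3, 4), (4, 5), (5, 1)}
assert satisfies_gap_condition([0, 1, 2, 3], path)
assert not satisfies_gap_condition([0, 1, 2, 3, 4, 5], wheel)
```

Note that the candidate-gap test implicitly requires $vv'\notin E(G)$, since the pair $(v,v')$ itself appears among the checked pairs $(u,w)$.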
We have used this formula to test whether all
connected cubic graphs with up to 16 vertices admit convex OORs.
The only counterexample we found was the Petersen graph.
The so-called Blanu\v{s}a snarks, the Pappus graph, the dodecahedron, and
the generalized Petersen graph $G(11,2)$ satisfy the gap condition.
The latter three graphs do admit convex
OORs~\cite{Gol21}.
The smallest graph (and the only 6-vertex graph) that does not
satisfy the gap condition is the wheel graph~$W_6$
(see \cref{clm:ngon:6vertices} in \cref{app:small-graphs}).
Hence, $W_6$ does not admit a {\em convex} OOR,
but it does admit a (non-convex) OOR; see~\cref{fig:not-convex}.
In the following, we consider ``dense'' graphs, namely the complements
of trees. For any graph~$G$, let $\bar G = (V(G), \bar E(G))$ with $\bar E(G) = \{uv \mid uv \not \in E(G)\}$ be the complement of~$G$.
A \emph{caterpillar} is a tree where all vertices are within distance
at most~1 of a central path.
\begin{restatable}[{\hyperref[clm:caterpillar-tree*]{$\star$}}]{theorem}{DenseGraphClassesTree}
\label{clm:caterpillar-tree}
For any tree $T$, the graph~$\bar T$ has a convex OOR
if and only if $T$ is a caterpillar.
\end{restatable}
\begin{sketch}
First, we show that for every caterpillar~$C$, the graph~$\bar C$
has a circular OOR. To this end, we
arrange the vertices of the central path~$P$ on a circle in the order given by~$P$.
Then, for each vertex of~$P$, we insert its leaves as an interval next
to it; see \cref{fig:compl-caterpillar} in \cref{app:proofs}.
The result is a circular OOR since every non-edge of~$\bar C$ intersects the outer face
in the vicinity of the incident path vertex (or vertices).
Second, we show that if $T$ is a tree that is not a caterpillar,
then for any circular vertex order,
there exists at least one non-edge of $\bar T$ that is a diagonal of
a quadrilateral formed by edges of $\bar T$.
\end{sketch}
Another class of dense graphs consists of complete graphs from which
we remove the edge set of a simple (not necessarily Hamiltonian) cycle.
Using \cref{clm:gap-condition}, we can prove the following theorem similarly as \cref{clm:caterpillar-tree}.
\begin{restatable}[{\hyperref[clm:complete-cycle*]{$\star$}}]{theorem}{DenseGraphClassesCycle}
\label{clm:complete-cycle}
Let $3 \le k \le n$. Then the graph~$G_{n,k}=K_n - E(C_k)$, where
$C_k$ is a simple $k$-cycle, admits a
convex OOR if and only if $k \in \{3,4,n\}$.
\end{restatable}
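Since the gap condition is necessary for a convex OOR, a quick sanity check is that the circular orders used in the proof (the cycle vertices as an interval, for $k=4$ in the order $\langle v_1, v_2, v_4, v_3\rangle$) indeed pass it. The following self-contained Python sketch, with helper names of our own choosing, performs this check for $n=8$:

```python
def arc(order, a, b):
    """Vertices from a to b (both inclusive), clockwise in the circular order."""
    i = order.index(a)
    out = []
    while True:
        out.append(order[i])
        if order[i] == b:
            return out
        i = (i + 1) % len(order)

def has_gap(order, edges, x, y):
    for a, b in ((x, y), (y, x)):
        cw = arc(order, a, b)
        for i in range(len(cw) - 1):              # v ranges over [a, b)
            if all(frozenset({u, w}) not in edges
                   for u in cw[:i + 1] for w in cw[i + 1:]):
                return True
    return False

def satisfies_gap_condition(order, edges):
    return all(has_gap(order, edges, x, y) for x in order for y in order
               if x != y and frozenset({x, y}) not in edges)

def gnk_edges(n, k):
    """G_{n,k} = K_n minus the edges of the cycle v_1 ... v_k."""
    cyc = {frozenset({i, i % k + 1}) for i in range(1, k + 1)}
    return {frozenset({i, j}) for i in range(1, n + 1)
            for j in range(i + 1, n + 1)} - cyc

def proof_order(n, k):
    """Circular order from the proof: for k = 4 the cycle is laid out as
    <v1, v2, v4, v3>; for k = 3 and k = n the natural order is used."""
    if k == 4:
        return (1, 2, 4, 3) + tuple(range(5, n + 1))
    return tuple(range(1, n + 1))

n = 8
print([satisfies_gap_condition(proof_order(n, k), gnk_edges(n, k))
       for k in (3, 4, n)])  # [True, True, True]
```

This only certifies the necessary condition; the convexity argument itself is the consecutive-neighbors reasoning in the proof.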
\section{Regular Outside Obstacle Representations}
\label{sec:ngon}
This section deals with regular OORs.
A \emph{cactus} is a connected graph
where every edge is contained in at most one simple cycle.
An \emph{outerpath} is a graph that admits an \emph{outerpath} drawing, i.e.,
an outerplanar drawing whose weak dual is a path.
A \emph{grid} is the Cartesian product $P_k\square P_\ell$ of two simple paths $P_k,P_\ell$.
\begin{restatable}[{\hyperref[clm:ngon:positive*]{$\star$}}]{theorem}{RegularRepresentation}
\label{clm:ngon:positive}
The following graphs have reducible regular OORs:\\
\begin{enumerate*}
\item every cactus; \label{enum:cactus}
\item every grid; \label{enum:grid}
\item every outerpath. \label{enum:outerpath}
\end{enumerate*}
\end{restatable}
\begin{figure}
\caption{cactus graphs}
\label{fig:cactus-small}
\caption{grids}
\label{fig:grid-small}
\caption{outerpaths}
\label{fig:outerpath-small}
\caption{Graph classes that admit reducible regular OORs (see
\cref{clm:ngon:positive}).}
\label{fig:regular-OORs}
\end{figure}
\begin{sketch}
For cacti, we use a decomposition into \emph{blocks} (i.e., maximal
2-connected subgraphs or bridges). We start with an arbitrary block
and insert its child blocks as intervals next to the corresponding
cut vertices etc.; see \cref{fig:cactus-small}. For a grid, we lay
out each horizontal path in a separate arc, in a zig-zag manner.
Then we add the vertical edges accordingly; see
\cref{fig:grid-small}. Our strategy for (maximal) outerpaths relies
on a specific stacking order. We start with a triangle. Then we
always place the next inner edge (black in
\cref{fig:outerpath-small}) such that it avoids the empty arc that
corresponds to the previous inner edge.
\end{sketch}
Every graph with up to six vertices~-- except for the graph in
\cref{fig:not-convex}~-- and every outerplanar graph with up to seven
vertices admits a regular OOR
(see \cref{clm:ngon:6vertices} in
\cref{app:small-graphs} and \cite{Lang22}, respectively). The 8-vertex outerplanar graph in
\cref{fig:outerplanar7} (and only this graph \cite{Lang22}), however, does not admit any regular
OOR (see \cref{clm:ngon:outerplanar}).
Our representations for cacti, outerpaths, and complements of
caterpillars depend only on the vertex order.
Hence, given such a graph with $n$ vertices, every cocircular point
set of size $n$ is \emph{universal},
i.e., can be used for an OOR.
\section{Open Problems}
\begin{enumerate*}[label=(\arabic*)]
\item What is the complexity of deciding whether a given graph
admits an OOR?
\item Is the gap condition sufficient, i.e., does every graph with a
circular vertex order satisfying the gap condition admit a convex OOR?
\item Does every graph that admits a {\em convex} OOR also admit a
{\em circular} OOR?
\item Does every outerplanar graph admit a (reducible) convex OOR?
\item Does every connected cubic graph {\em except the Petersen graph} admit a convex OOR?
\end{enumerate*}
\pdfbookmark[1]{References}{References}
\appendix
\noindent{\Large\sf\bfseries Appendix}
\section{Small Graphs}
\label{app:small-graphs}
\begin{proposition} \label{clm:ngon:6vertices}
There exists a regular OOR for every
graph with up to six vertices, except for the wheel graph $W_6$.
\end{proposition}
\begin{proof}
Note that $W_6$ is isomorphic to $G_{6,5} = K_6 - E(C_5)$. Hence,
by \cref{clm:complete-cycle}, $W_6$ does not admit a convex OOR.
All graphs except $W_6$ satisfy the gap condition:
For graphs with up to four vertices, this is not difficult to check.
For graphs with five vertices, see \cref{fig:reg5gon}.
\begin{figure}
\caption{Every graph with up to five vertices admits a regular OOR.}
\label{fig:reg5gon}
\end{figure}
For graphs with six vertices, consider the following cases, which occur
when drawing any permutation satisfying the gap condition on the hexagon:
\begin{itemize}
\item A non-edge on the outer face is obviously visible.
\item For a non-edge of length~2 (orange; for instance between vertices
2 and 4 in \cref{fig:reg6gon_1}), there are three possible gaps
through which it could be visible (each depicted with a green non-edge).
The gap condition additionally enforces all non-edges depicted in purple,
so that in all three cases the orange non-edge is visible.
\item If the orange non-edge spans length 3, there are two gaps that
could possibly be used, and the first of them does not work geometrically:
the orange non-edge could only be cut at the center point, which, however,
would also cut two edges. (Red edges are enforced by the case distinction.)
\end{itemize}
To resolve this case, we argue that every graph containing this pattern
also admits another permutation without this problem; see \cref{fig:reg6gon_2}.
The first row depicts all possible graphs with the problematic pattern and
the second row their respective alternative permutation. We group the graphs
according to the non-edges on the outer cycle.
\begin{itemize}
\item The first column shows the case where all pairs of vertices not
explicitly determined by the pattern are edges.
\item In the second case, $12$ and $56$ are non-edges. Note that we may hence
require the edges $15$ and $26$: if either is missing, we do not need
to change the drawing. All other possible edges (depicted in gray) are undecided.
\item The third case has non-edges $12$ and $16$, so we may require
the edges $26$, $56$, and $46$.
\item The fourth case has the non-edge $12$.
\item The fifth case has the non-edge $16$.
\end{itemize}
This gives us a working permutation for every graph with the problematic pattern.
\end{proof}
\begin{figure}
\caption{Given the orange non-edge, there are different green non-edges
through which the gap condition could be satisfied. The purple non-edges
are additionally enforced by the satisfaction of the gap condition.}
\label{fig:reg6gon_1}
\end{figure}
\begin{figure}
\caption{The first row depicts sub-cases of the fourth case from
\cref{fig:reg6gon_1}.}
\label{fig:reg6gon_2}
\end{figure}
\begin{proposition}
\label{clm:ngon:outerplanar}
There exists an 8-vertex outerplanar graph that has no regular
OOR.
\end{proposition}
\begin{proof}
Consider the 6-vertex outerplanar graph~$G$ in
\cref{fig:outerplanar6}. Up to rotation and mirroring, it has
only two regular OORs, which we tested
using a variant of the SAT formulation described in
\cref{sec:conditions}. We call them Type 1
if the brown edge $v_2 v_4$ passes through the center of the regular hexagon,
and Type 2 if the purple edge $v_1 v_4$ does,
see \cref{fig:outerplanar6}.
Let $H$ be a supergraph of~$G$ such that the two vertices
$u, w \in V(H) \setminus V(G)$ are adjacent to $v_2, v_4$ and $v_1, v_4$,
respectively; see
\cref{fig:outerplanar7}. None of the possibilities for adding~$u$
and $w$ into the cyclic order of the vertices of~$G$ in
\cref{fig:outerplanar6} yields a regular OOR
since in each case one of the non-edges incident to~$u$ or to~$w$
does not lie in the outer face; see \cref{fig:outerplanar7-perm}
for permutations that satisfy the gap condition, but do not
admit regular OORs.
\end{proof}
\begin{figure}
\caption{An outerplanar graph~$G$ and its regular OORs.}
\label{fig:outerplanar6}
\end{figure}
\begin{figure}
\caption{All possibilities for adding~$u$ and $w$ into the cyclic
order of the vertices of~$G$ in \cref{fig:outerplanar6}.}
\label{fig:outerplanar7-perm}
\end{figure}
\section{Omitted Proofs}
\label{app:proofs}
\NonConvex*
\label{clm:nonconvex*}
\begin{proof}
It follows readily from the definition of $2$-trees that every $2$-tree~$T=(V,E)$ can be constructed through the following iterative procedure:
\begin{enumerate}[label=(\arabic*)]
\item\label{enum:2tree-base} We start with one edge, called the \emph{base} edge and mark its vertices as \emph{inactive}.
We stack any number of vertices onto the base edge and mark them as \emph{active}.
During the entire procedure, every present vertex is
marked either as active or inactive.
Moreover, once a vertex is inactive, it remains inactive for the remainder of the construction.
\item\label{enum:2tree-step} As an iterative step we pick one active
vertex~$v$ and stack any number of vertices onto each of its two parent edges.
All the new vertices are marked as active and $v$ is now marked as inactive.
\item\label{enum:2tree-repeat} If there are active vertices remaining,
repeat step~\ref{enum:2tree-step}.
\end{enumerate}
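The combinatorics of this stacking procedure are easy to simulate. The Python sketch below (illustrative code of our own, with a randomized choice of stacking counts) builds a 2-tree this way and checks the standard edge count $|E| = 2|V| - 3$:

```python
import random

def random_2tree(base_stacks=2, steps=5, seed=1):
    """Build a 2-tree by the iterative procedure: start with a base edge {0, 1},
    stack vertices onto it, then repeatedly pick an active vertex and stack
    new vertices onto each of its two parent edges."""
    rng = random.Random(seed)
    edges = {frozenset({0, 1})}     # the base edge; 0 and 1 are inactive
    parents = {}                    # new vertex -> endpoints of its parent edges
    active, nxt = [], 2
    for _ in range(base_stacks):    # step (1): stack onto the base edge
        parents[nxt] = (0, 1)
        edges |= {frozenset({nxt, 0}), frozenset({nxt, 1})}
        active.append(nxt)
        nxt += 1
    while active and steps > 0:     # steps (2) and (3)
        steps -= 1
        v = active.pop(rng.randrange(len(active)))   # v becomes inactive
        for p in parents[v]:        # stack onto each parent edge vp of v
            for _ in range(rng.randrange(3)):        # 0, 1, or 2 new vertices
                parents[nxt] = (v, p)
                edges |= {frozenset({nxt, v}), frozenset({nxt, p})}
                active.append(nxt)
                nxt += 1
    return nxt, edges               # number of vertices, edge set

n, E = random_2tree()
print(len(E) == 2 * n - 3)  # True: every stacked vertex adds exactly two edges
```

The edge count holds for every run, since the base edge contributes one edge for two vertices and every stacked vertex contributes exactly two further edges.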
Observe that step~\ref{enum:2tree-step} is performed exactly once for each vertex that is not incident to the base edge.
We construct a drawing of $T$
by geometrically implementing the iterative procedure described above,
so that after every step of the algorithm the present part of the graph is realized
as a straight-line drawing satisfying the following set of invariants:
\begin{enumerate}[label=(\roman*)]
\item Each vertex~$v$ that is not incident to the base edge is associated with an open circular arc $C_v$
that lies completely in the outer face and whose endpoints belong to the two parent edges of~$v$.
Moreover, $v$ is located at the center of~$C_v$ and the parent edges of $v$ are below~$v$.
\item Each non-edge intersects the circular arc of at least one of its incident vertices.
\item For each active vertex~$v$, the region $R_v$ enclosed by~$C_v$ and the two parent edges of~$v$ is \emph{empty},
meaning that~$R_v$ is not intersected by any edges, vertices, or circular arcs.
(Combined with (i), it follows that~$R_v$ lies completely in the outer face.)
\item Every vertex is incident to the outer face.
\end{enumerate}
Once the procedure terminates, we have obtained the desired drawing:
invariants (i) and (ii) imply that each non-edge passes through the outer face and, hence, the drawing is an OOR.
Moreover,
invariant (i) implies that each non-base edge is incident to the outer face of the drawing.
The base edge will be drawn horizontally.
By the second part of invariant (i), all vertices not incident to the base edge are above the base edge.
Consequently, the base edge is incident to the outer face as well and, hence, the representation is reducible.
Finally, by invariant (iv), every vertex belongs to the outer face.
\paragraph{Construction.}
To carry out step~\ref{enum:2tree-base}, we draw the base edge horizontally and place the stacked vertices on a common horizontal line above the base edge, see \cref{fig:G1}.
Circular arcs that satisfy the invariants are now easy to define.
Suppose we have obtained a drawing~$\Gamma$ of the graph obtained after step~\ref{enum:2tree-base}
and some number of iterations of step~\ref{enum:2tree-step}
such that~$\Gamma$ is equipped with a set of circular arcs satisfying the invariants (i)--(iv).
We describe how to carry out another iteration of step~\ref{enum:2tree-step} while maintaining the invariants.
Let~$v$ be an active vertex.
By invariant~(i), both parent edges of~$v$ are below~$v$.
Let~$e_\ell$ and~$e_r$ be the left and right parent edge, respectively.
Let $\ell_1,\ell_2,\dots,\ell_i$ and $r_1,r_2,\dots,r_j$ be the vertices stacked onto~$e_\ell$ and~$e_r$, respectively.
We refer to $\ell_1,\ell_2,\dots,\ell_i$ and $r_1,r_2,\dots,r_j$ as the \emph{new} vertices; the vertices of~$\Gamma$ are called \emph{old}.
We place all the new vertices on a common horizontal line~$h$ that intersects~$R_v$ above~$v$, for an illustration see~\cref{fig:rotation}.
The vertices $\ell_1,\ell_2,\dots,\ell_i$ are placed inside~$R_v$, to the right of the line~$\overline{e_\ell}$ extending~$e_\ell$.
Symmetrically, $r_1,r_2,\dots,r_j$ are placed inside~$R_v$, to the left of the line~$\overline{e_r}$ extending~$e_r$.
We place $\ell_1,\ell_2,\dots,\ell_i$ close enough to~$e_\ell$ and $r_1,r_2,\dots,r_j$ close enough to~$e_r$ such that the following properties are satisfied.
\begin{enumerate}[label=\alph*)]
\item None of the parent edges of the new vertices intersect~$C_v$.
\item For each new vertex, the unbounded open cone obtained by extending its parent edges to the bottom does not contain any vertices.
\end{enumerate}
These properties are easy to achieve:
let~$\overline{e_\ell}(\alpha)$ be the line created by rotating~$\overline{e_\ell}$ clockwise around~$v$ by angle~$\alpha$.
Clearly, there is an angle~$\alpha^*$ such that (A) the intersection~$x(\alpha^*)$ of~$\overline{e_\ell}(\alpha^*)$ with~$h$ lies in~$R_v$
and the line segment between~$x(\alpha^*)$ and~$e_\ell\setminus\{v\}$ does not intersect~$C_v$,
and (B) the open region that lies clockwise between~$\overline{e_\ell}$ and~$\overline{e_\ell}(\alpha^*)$ contains no vertices.
We place the vertices $\ell_1,\ell_2,\dots,\ell_i$ to the left of~$x(\alpha^*)$.
Then property~(A) guarantees property~(a). Property~(b) follows from property~(B) and invariants (iii) and (iv) for $\Gamma$.
The vertices $r_1,r_2,\dots,r_j$ are placed symmetrically.
\paragraph{Correctness.}
It remains to show that the invariants are maintained.
Each of the old vertices retains its circular arc from~$\Gamma$.
By invariant (iii), the region~$R_v$ is completely contained in the outer face of~$\Gamma$.
Hence, it is easy to define circular arcs for the new vertices that satisfy invariant~(i).
To show that invariant~(i) also holds for the circular arcs of the old vertices, we argue as follows: by construction property (a),
each parent edge~$e$ of a new vertex can be decomposed into
a line segment~$e_a$ that lies in~$R_v$ and a line segment~$e_b$ that lies in the triangle formed by the endpoints of the parent edges of~$v$.
By invariant~(iii) for~$\Gamma$, the region~$R_v$ is empty and, hence, $e_a$ does not intersect the circular arc of any old vertex.
By invariant~(i) for~$\Gamma$, the circular arcs of the old vertices lie in the outer face of~$\Gamma$
and, hence, it follows that $e_b$ also does not intersect the circular arc of any old vertex.
Consequently, invariant (i) is maintained for the circular arcs of old vertices.
Invariant~(ii) is retained for the non-edges that join two old
vertices since the circular arcs
of these vertices have not been changed.
Property~(b) and the fact that all new vertices are placed on $h$ imply that each of the non-edges incident to a new vertex~$w$ intersects~$C_w$.
Hence, invariant~(ii) is also satisfied for the new non-edges.
Invariant~(iii) holds for the circular arcs of the new vertices by (iii) for~$v$ in~$\Gamma$ and by (i) for the new vertices.
To see that invariant~(iii) holds for the circular arcs of the old vertices, let~$u\neq v$ be an old vertex.
Let~$e$ be a parent edge of a new vertex and recall the definitions of~$e_a$ and~$e_b$ from above.
The part~$e_a$ lies in~$R_v$ and~$e_b$ does not pass through the outer face of~$\Gamma$.
Hence, it follows that invariant (iii) is retained for~$u$.
By invariant (iii) for~$\Gamma$, the region~$R_v$ is contained in the outer face of~$\Gamma$.
Hence, by construction, invariant (iv) holds for~$v$ and the new vertices.
Moreover, invariant (iv) is also retained for the remaining vertices since, by construction, the edges incident to new vertices intersect the outer face of~$\Gamma$ in~$R_v$ only.
\end{proof}
\DenseGraphClassesTree*
\label{clm:caterpillar-tree*}
\begin{proof}
We prove the statement in two steps. First, we show that,
for every caterpillar~$C$, the graph~$\bar{C}$ has a
circular OOR. Then we show that,
for every tree~$T$ that is not a caterpillar, $\bar T$
does not admit any convex OOR.
Let $C$ be a caterpillar with central path
$\langle p_1, p_2, \ldots, p_r \rangle$.
For $i \in \{2, \dots, r-1\}$, let $\ell_1^i, \ell_2^i, \dots,
\ell_{n_i}^i$ be the leaves adjacent to path vertex~$p_i$ (if any).
We arrange the vertices of~$\bar C$ in cyclic order as
follows. First, we take the path vertices in the
given order. Then, for each $i\in\{2,\dots,r-1\}$, we insert
the leaves adjacent to vertex~$p_i$ between~$p_i$
and~$p_{i+1}$ into the cyclic order;
see \cref{fig:compl-caterpillar}.
\begin{figure}
\caption{A caterpillar and a regular OOR of its complement.}
\label{fig:compl-caterpillar}
\end{figure}
The resulting order is
$\langle p_1, p_2, \ell_1^2, \ell_2^2, \ldots, \ell_{n_2}^2, \ldots, p_{r-1}, \ell_1^{r-1}, \ell_2^{r-1}, \ldots, \ell_{n_{r-1}}^{r-1}, p_r \rangle$.
Note that all non-neighbors of a vertex $v$ of $\bar C$ that
succeed~$v$ in the circular order form an interval that starts right after~$v$.
Therefore, every non-edge of $\bar C$ is incident to a path
vertex $p_i$ and intersects the outer face
in the vicinity of~$p_i$.
Hence, the drawing of $\bar C$ is a circular OOR.
Since we fixed only the circular order of vertices and not
their specific position, we can place them on a regular polygon.
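The interval property used in this argument can be verified mechanically. In the Python sketch below (vertex names and helper functions are an illustrative convention of our own), the non-neighbors of each vertex of $\bar C$ that succeed it must form a prefix of the suffix of the order:

```python
def complement_caterpillar_order(leaf_counts):
    """Order for the complement of a caterpillar: spine vertex p_i is followed
    by its leaves. leaf_counts[i] = number of leaves at spine vertex i
    (the spine endpoints carry no leaves). Vertex names ('p', i) and
    ('l', i, j) are an illustrative convention."""
    order = []
    for i, c in enumerate(leaf_counts):
        order.append(('p', i))
        order.extend(('l', i, j) for j in range(c))
    return order

def caterpillar_edges(leaf_counts):
    """Edges of the caterpillar C itself, i.e., the non-edges of its complement."""
    E = {frozenset({('p', i), ('p', i + 1)})
         for i in range(len(leaf_counts) - 1)}
    E |= {frozenset({('p', i), ('l', i, j)})
          for i, c in enumerate(leaf_counts) for j in range(c)}
    return E

leaf_counts = [0, 2, 3, 0]
order = complement_caterpillar_order(leaf_counts)
nonedges = caterpillar_edges(leaf_counts)
for idx, v in enumerate(order):
    flags = [frozenset({v, w}) in nonedges for w in order[idx + 1:]]
    # the non-neighbours succeeding v form an interval starting right after v
    assert flags == sorted(flags, reverse=True)
print("interval property holds")
```

Here the check `flags == sorted(flags, reverse=True)` states exactly that all succeeding non-neighbors come before all succeeding neighbors, i.e., they form an interval that starts right after the vertex.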
\begin{figure}
\caption{Case 1}
\label{fig:tree-case1}
\caption{Case 2}
\label{fig:tree-case2}
\caption{Case 3}
\label{fig:tree-case3}
\caption{Case 4}
\label{fig:tree-case4}
\caption{Case distinction in the proof of \cref{clm:caterpillar-tree}.}
\label{fig:tree-cases}
\end{figure}
Now we prove the second part of the statement.
Let $Y$ be the tree
that consists of a root~$c$ with three children~$\ell$,
$m$ and $r$, each of which has one child, namely
$\ell'$, $m'$ and $r'$, respectively; see
\cref{fig:compl-tree} (left).
Let $T$ be a tree that is not a caterpillar. Note
that $T$ has a subtree that is isomorphic to~$Y$.
Let $\sigma$ be any circular order of $V(Y)$.
We now show that $\bar{T}$ admits no convex OOR with respect to $\sigma$.
To this end, we find an edge $e$ of $Y$ (that is, a non-edge of $\bar T$) that is a diagonal
of a convex quadrilateral $Q$ formed by four non-edges of $Y$.
This yields our claim because any non-edge of $Y$ must
be an edge of~$\bar{T}$ (otherwise $T$ would contain a cycle).
Without loss of generality, let $\langle c, r, m, \ell
\rangle$ be the order of $c$ and its children
in $\sigma$. We distinguish four cases.
\noindent\textbf{Case 1:} None of the edges of $Y$ intersects $c m$; see \cref{fig:tree-case1}.
\noindent Then $e = c m$ lies inside the quadrilateral
$Q = \square cr'm\ell'$ formed by non-edges of~$Y$.
In the following three cases, we assume, without loss of
generality,
that $\ell \ell'$ intersects $c m$.
Let $\alpha$ be the open circular arc from $c$ to $\ell'$
in clockwise direction.
\noindent\textbf{Case 2:} $\alpha \cap Y \neq \emptyset$, i.e., at least one vertex
of $Y$ lies in $\alpha$, say $r$; see \cref{fig:tree-case2}.
\noindent Then $e = \ell \ell'$ lies inside
the quadrilateral $Q = \square \ell r \ell' m$.
In the remaining two cases, we assume that $\alpha \cap Y = \emptyset$.
\noindent\textbf{Case 3:} The vertices $c$, $r$, and $m'$
appear in this order in~$\sigma$; see \cref{fig:tree-case3}.
\noindent Then $e = c r$ lies inside the quadrilateral $Q = \square c \ell' r m'$.
\noindent\textbf{Case 4:} Otherwise; see \cref{fig:tree-case4}.
\noindent Then $e = m m'$ lies inside the quadrilateral $Q = \square \ell m' r m$.
\end{proof}
\Cref{fig:compl-tree} depicts the smallest tree~$Y$ that is not a
caterpillar; hence, $\bar Y$ does not have a convex OOR.
It does, however, have a (non-convex) OOR.
\begin{figure}
\caption{The smallest tree $Y$ that is not a caterpillar (left); a
non-convex OOR of $\bar Y$.}
\label{fig:compl-tree}
\end{figure}
\DenseGraphClassesCycle*
\label{clm:complete-cycle*}
\begin{proof}
First, we show that, for $k \in \{3, 4, n\}$, the graph $G_{n,k}$
admits a convex OOR. To this end, we place the vertices
$v_1, \dots, v_k$ of $C_k$ as
an interval on a circle. If $k<n$, we place the remaining vertices
in an arbitrary order, also as an interval, on the same circle.
For $k = 3$, the vertex order of $C_3$ is
determined; see \cref{fig:cycle-C3}. For $k = 4$, we place the
vertices of $C_4$ in the order
$\langle v_1, v_2, v_4, v_3 \rangle$;
see \cref{fig:cycle-C4}. For $k = n$, we take the
vertex order of~$C_n$; see \cref{fig:cycle-Cn}. In the cases $k=3$
and $k=n$, let $V' = V(G_{n,k})$.
In the case $k=4$, let $V' = V(G_{n,k}) \setminus \{v_2,v_4\}$ and
note that $v_2$ and $v_4$ are adjacent in $G_{n,k}$.
In all cases all vertices in $V'$ satisfy the
consecutive-neighbors property and $V'$ covers all non-edges.
Therefore, by \cref{clm:consecutiveNeighbors},
the graph~$G_{n,k}$ admits a
convex OOR with respect to the circular vertex order
(depending on $k$) described above. Note that, in all
cases, the OORs are even regular.
\begin{figure}
\caption{$G_{8,3}$}
\label{fig:cycle-C3}
\caption{$G_{8,4}$}
\label{fig:cycle-C4}
\caption{$G_{8,8}$}
\label{fig:cycle-Cn}
\caption{Regular OORs of the graph $G_{n,k}$.}
\label{fig:small-cycles}
\end{figure}
\begin{figure}
\caption{Case 1: $\{m\} = (c,c') \cap V(C_k)$}
\label{fig:cycle-case1}
\caption{Case 2: $\big|(c,c') \cap V(C_k)\big|>1$}
\label{fig:cycle-case2}
\caption{Case distinction in the proof of
\cref{clm:complete-cycle}.}
\label{fig:cycle-cases}
\end{figure}
Now let $k \in \{5, \dots, n-1\}$. The graph~$G_{n,k}$
contains at least one vertex~$v$ that is adjacent to all other
vertices. Let $\sigma$ be any circular order of $V(G_{n,k})$
starting at~$v$ in clockwise direction. We prove that
$G_{n,k}$ does not admit a convex OOR with respect
to~$\sigma$.
To this end, let $c$ be the first vertex of $C_k$ after~$v$
in~$\sigma$. Let $c'$ be the last vertex in~$\sigma$ that is
not adjacent to~$c$. We consider two cases.
\noindent\textbf{Case 1:} There is only one
vertex~$m$ of~$C_k$ in the interval $(c,c')$;
see~\cref{fig:cycle-case1}.
\noindent Note that $c m$ is a non-edge. Let $m'$ be the
other vertex that shares a non-edge with~$m$. Note that $m'$
lies between $c'$ and $v$ since $c$ is the first vertex
of~$C_k$ after~$v$. Hence, $mm'$ is a
diagonal of the quadrilateral $Q=\square v m c' m'$. We argue
that the edges of~$Q$ belong to~$G_{n,k}$. For $vm$ and $vm'$
this is obvious, but it is also true for $m c'$ and $c' m'$,
otherwise the non-edges of~$G_{n,k}$ would contain a~$C_3$ or
a~$C_4$, respectively. But $G_{n,k}$ has a simple $k$-cycle
of non-edges with $k\ge5$.
\noindent\textbf{Case 2:} There are at least two
vertices of~$C_k$ in the interval $(c,c')$; see \cref{fig:cycle-case2}.
\noindent Recall that $cc'$ is a non-edge of~$G_{n,k}$.
For $G_{n,k}$ to admit a convex OOR with respect to~$\sigma$,
by \cref{clm:gap-condition}, $cc'$ would have to have a
candidate gap~$g$
(that is, an edge of~$C_k$). Due to the presence of the
edges~$vc$ and~$vc'$, the gap~$g$ must lie on the side of
$cc'$ that is opposite of~$v$. Let $m$ be the first endpoint
of~$g$, and let $m'$ be the second endpoint (according
to~$\sigma$). Then, by the definition of a candidate gap, no
edge connects the intervals $[c,m]$ and $[m',c']$.
This implies
that $cm'$ and $mc'$ are non-edges. Hence $\langle c,c',m,m'
\rangle$ is a 4-cycle of non-edges~-- a contradiction to the
fact that $G_{n,k}$ has a simple $k$-cycle of non-edges with
$k \ge 5$.
In both cases, we have shown that $G_{n,k}$ does not admit a
convex OOR with respect to~$\sigma$.
\end{proof}
{\RegularRepresentation*\label{clm:ngon:positive*}}
\begin{proof}
\textcolor{darkgray}{\sffamily\bfseries\ref{enum:cactus}.}
For the given cactus, we first compute the block-cut tree (whose definition we recall below).
Then, following the structure of the block-cut tree, we treat the blocks one by one.
For each block, we insert its vertices as an interval into the vertex order of the subgraph that we have treated so far.
Finally, we prove that the resulting circular vertex order
yields a reducible regular OOR.
Recall that a \emph{block-cut tree} of a connected graph is a tree that
has a vertex for each \emph{block} (i.e., a maximal 2-connected subgraph or bridge) and
for each cut vertex. There is an edge in the block-cut tree
for each pair of a block and a cut vertex that belongs to it;
for an example of a block-cut tree of a cactus, see~\cref{fig:cactus-block-cut-tree}.
We root the block-cut tree in an arbitrary block vertex and
number the block vertices according to a breadth-first search
traversal starting at the root.
\begin{figure}
\caption{A cactus and its block-cut tree. Black vertices
correspond to cut vertices of the cactus.}
\label{fig:cactus-block-cut-tree}
\end{figure}
In order to draw a cactus $G$, we treat its blocks one by one and insert
the vertices of each block into the circular order, starting with the root
block. For each further block $B$, there is a cut vertex~$v_B$ that belongs
to a block that we have already inserted before. Hence, we insert
the vertices of $B$ as an interval between~$v_B$ and its
clockwise successor in the circular order. For the root block $B^\star$, let
$v_{B^\star}$ be an arbitrary vertex of $B^\star$.
Now we draw the current block $B$. If $B$ is a single edge $v_B w$,
we place $w$ immediately behind~$v_B$. If $B$ is a cycle with
$k$ vertices ($k \ge 3$), we start with the cut vertex $v_B$
and proceed in a zig-zag
manner, mapping the vertices to positions
1 (which is $v_B$), $k, 2, k-1, \dots, \lceil(k+1)/2\rceil$; see~\cref{fig:cactus}.
For~$k \le 4$ all vertices satisfy the consecutive-neighbors property.
For~$k \ge 5$, exactly two vertices do not satisfy this
property, but these two vertices are adjacent,
so by \cref{clm:consecutiveNeighbors} all non-edges of $B$ intersect
the outer face as required.
Now we draw $G$ by placing the vertices in the circular order on a circle
that we just defined; the exact positions on the circle do not matter.
Consider the convex hulls of the blocks in the drawing.
Observe that any two of them share at most one
vertex. Moreover, each convex hull lies
completely in the outer face of the drawing. In the process
described above, each block has its own OOR
due to \cref{clm:consecutiveNeighbors}.
Hence, the whole drawing is an OOR of~$G$.
The representation is reducible since each vertex has degree at most~2
within each block, and each block is surrounded by the outer face.
\begin{figure}
\caption{Constructing a reducible regular OOR of
a cactus.
}
\label{fig:cactus}
\end{figure}
Since we do not specify the exact vertex positions on the circle,
(a)~the representation can be chosen
such that consecutive vertices are, for example,
spaced equally (i.e., every set of $n$ co-circular points is
universal for the class of $n$-vertex cactus graphs),
and (b)~even cactus forests admit
OORs.
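The zig-zag position pattern $1, k, 2, k-1, \dots$ used for the cycle blocks can be generated and sanity-checked against the (necessary) gap condition. The following self-contained Python sketch uses helper names of our own:

```python
def zigzag_cycle_order(k):
    """Circular order for a k-cycle block: vertex i (v_1 = cut vertex) is
    mapped to position 1, k, 2, k-1, ..., as in the construction."""
    pos_seq, lo, hi = [], 1, k
    while lo <= hi:
        pos_seq.append(lo)
        lo += 1
        if lo <= hi:
            pos_seq.append(hi)
            hi -= 1
    order = [0] * k
    for v, p in enumerate(pos_seq, start=1):
        order[p - 1] = v            # vertex v sits at position p
    return tuple(order)

def arc(order, a, b):
    """Vertices from a to b (both inclusive), clockwise in the circular order."""
    i = order.index(a)
    out = []
    while True:
        out.append(order[i])
        if order[i] == b:
            return out
        i = (i + 1) % len(order)

def has_gap(order, edges, x, y):
    for a, b in ((x, y), (y, x)):
        cw = arc(order, a, b)
        for i in range(len(cw) - 1):              # v ranges over [a, b)
            if all(frozenset({u, w}) not in edges
                   for u in cw[:i + 1] for w in cw[i + 1:]):
                return True
    return False

def satisfies_gap_condition(order, edges):
    return all(has_gap(order, edges, x, y) for x in order for y in order
               if x != y and frozenset({x, y}) not in edges)

for k in range(3, 9):
    Ck = {frozenset({i, i % k + 1}) for i in range(1, k + 1)}
    assert satisfies_gap_condition(zigzag_cycle_order(k), Ck)
print(zigzag_cycle_order(6))  # (1, 3, 5, 6, 4, 2)
```

Passing the gap condition is only the necessary part; that the zig-zag layout in fact yields an OOR is the consecutive-neighbors argument above.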
\noindent\textcolor{darkgray}{\sffamily\bfseries\ref{enum:grid}.}
For $i \ge 1$, let $P_i$ be a path with $i$ vertices
$v_1, v_2, \dots, v_i$ in this order and let~$G$ be the graph of a
$k \times \ell$ square grid with $k, \ell \geq 2$. Formally,
$G=P_k \square P_\ell$, where $G_1 \square G_2$ is the Cartesian
product of graphs~$G_1$ and~$G_2$. We place the vertices of each
copy of~$P_k$ in the order
$v_k, v_{k-2}, \dots, v_3, v_1, v_2, \dots, v_{k-1}$ if $k$ is odd
and in the order
$v_k, v_{k-2}, \dots, v_2, v_1, v_3, \dots, v_{k-1}$ if $k$ is even;
see \cref{fig:grid}.
\begin{figure}
\caption{Constructing a reducible regular OOR
of the grid $P_5 \square P_3$.}
\label{fig:grid}
\end{figure}
Within~$P_k$, $v_{k-1}v_k$ is the longest edge; it has circular
length $k-1$ (that is, there are $k-2$ other vertices between the
endpoints of the edge on the circle). The copies of $P_k$ are
placed one after the other, which implies that every edge within a
copy of~$P_\ell$ (colored lightly in \cref{fig:grid}) has circular
length exactly~$k$.
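These two length claims are easy to check programmatically. The Python sketch below (naming conventions are our own) builds the circular order and computes circular lengths:

```python
def zigzag_path_order(k):
    """One copy of P_k: v_k, v_{k-2}, ... descending, then ascending to v_{k-1}."""
    left = list(range(k, 0, -2))
    right = [v for v in range(1, k + 1) if v not in left]
    return left + right

def grid_circular_order(k, ell):
    """Circular order for P_k x P_ell: the ell copies of P_k one after another.
    A vertex is named (c, i): index i in the c-th copy of P_k."""
    return [(c, v) for c in range(ell) for v in zigzag_path_order(k)]

def circular_length(order, a, b):
    pos = {x: i for i, x in enumerate(order)}
    d = abs(pos[a] - pos[b])
    return min(d, len(order) - d)

k, ell = 5, 3
order = grid_circular_order(k, ell)
# Edges inside a copy of P_k: all shorter than k, the longest (v_{k-1} v_k)
# having circular length k - 1:
horiz = [circular_length(order, (c, i), (c, i + 1))
         for c in range(ell) for i in range(1, k)]
print(max(horiz) == k - 1 and all(d < k for d in horiz))  # True
# Edges between consecutive copies (edges of the copies of P_ell):
# circular length exactly k:
vert = [circular_length(order, (c, i), (c + 1, i))
        for c in range(ell - 1) for i in range(1, k + 1)]
print(set(vert) == {k})  # True
```

The second claim holds simply because consecutive copies occupy blocks of $k$ consecutive positions, so a vertex and its counterpart in the next copy are exactly $k$ positions apart.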
We now show that every non-edge intersects the outer face. First,
consider a non-edge $\{s,t\}$ that has circular length at least
$k+1$; see \cref{fig:grid}. Then it is longer than every edge
of~$G$; hence it intersects the gap region between the first and
last copy of~$P_k$. Note that non-edges of length exactly~$k$ exist
only between vertices of the first and the last copy of~$P_k$, but
these non-edges intersect the gap region between these two copies.
\begin{figure}
\caption{The four wedges with respect to a vertex~$v$ of the grid
graph $P_k \square P_\ell$. The black edges represent a copy
of~$P_k$. The two colored edges of length~$k$ belong to two
different copies of~$P_\ell$.}
\label{fig:wedges}
\end{figure}
Next, consider a non-edge $\{x, y\}$ that has circular length less
than~$k$.
Every vertex~$v$ of~$G$ is incident to one or two edges of
length~$k$ and one or two edges of length less than~$k$. Let
$W^=_v$ be the wedge with apex~$v$ formed by (and including) the length-$k$
edge(s)~-- if there is just one such edge, then $W^=_v$ is the ray
starting in~$v$ that contains this edge; see \cref{fig:wedges}.
Similarly, let $W^<_v$ be the wedge formed by (and including) the
shorter edge(s).
Hence the angular space around~$v$ is subdivided into four wedges;
$W^=_v$, $W^<_v$, $\ensuremath{W^\mathrm{in}}_v$, and $\ensuremath{W^\mathrm{out}}_v$, where $\ensuremath{W^\mathrm{out}}_v$ is the
(open) wedge between $W^=_v$ and $W^<_v$ that, in the vicinity of~$v$,
contains the outer face of the drawing and $\ensuremath{W^\mathrm{in}}_v$ is the
remainder of the plane. Note that $\{x,y\}$ is
neither contained in~$W^=_x$ nor in~$W^<_x$. This is due to the
fact that~$W^=_x$ contains only vertices of distance greater
than~$k$ and~$W^<_x$ contains no vertices in its interior. For the
same reasons, $\{x,y\}$ is neither contained in~$W^=_y$ nor
in~$W^<_y$. It remains to show that $\{x,y\}$ lies in~$\ensuremath{W^\mathrm{out}}_x$ or
in~$\ensuremath{W^\mathrm{out}}_y$.
Suppose that $\{x,y\}$ lies in~$\ensuremath{W^\mathrm{in}}_x$ and has circular
length~$j<k$. Then the edges in~$W^<_x$ have length less than~$j$.
If $y$ belongs to the same copy of~$P_k$ then, due to our layout
of~$P_k$, the edges in~$W^<_y$ must be longer than~$j$.
(See for example the edge in \cref{fig:wedges} that starts in $\ensuremath{W^\mathrm{in}}_v$.)
This implies that $\{x,y\}$ lies in~$\ensuremath{W^\mathrm{out}}_y$.
It remains to consider the case that $y$ lies in a different (but
neighboring) copy of~$P_k$, say, the next copy. Then, for $\{x,y\}$
to lie in~$\ensuremath{W^\mathrm{in}}_x$, $x$ must lie in the ``left'' half (that is,
$x \in \{v_k, v_{k-2}, \dots, v_1\}$) of its copy. Since $j<k$,
$y$ must lie in the left half of its copy, too. Due to our layout
of~$P_k$, the (short) edges in~$W^<_y$ go to the other half of the
copy. Therefore, the two long edges that define~$W^=_y$ must lie
between the short non-edge $\{x,y\}$ and the two short edges that
define~$W^<_y$. In other words, $\{x,y\}$ lies in~$\ensuremath{W^\mathrm{out}}_y$.
For reducibility, we can argue similarly as for the non-edges.
Indeed, every edge of length~$k$ is adjacent to the gap region
between the first and last copy of~$P_k$. The shorter edges
alternate in direction, so for $i\in\{1,\dots,k-1\}$, the edge
$v_iv_{i+1}$ of~$P_k$ is adjacent to the outer face in the vicinity
of vertex~$v_{i+1}$.
\noindent\textcolor{darkgray}{\sffamily\bfseries\ref{enum:outerpath}.}
Let~$G$ be an $n$-vertex outerpath, and let~$\Gamma$ be an outerpath
drawing of~$G$. We show that~$G$ admits a reducible regular OOR.
The statement is trivial for $n\le 3$, so assume otherwise. By
reducibility and appropriately triangulating the internal faces
of~$\Gamma$, we may assume without loss of generality that each
internal face of~$\Gamma$ is a triangle. Let the path
$(t_1,t_2,\dots,t_{n-2})$ be the weak dual of~$\Gamma$. Let~$V_i$
denote the set of vertices of~$G$ that are incident to the triangles
$t_1,t_2,\dots,t_i$. By definition, $V_1$ contains a vertex~$v_1$
of degree~2. For $4\le i\le n$, let $v_i$ denote the unique vertex
in
$V_{i-2}\setminus V_{i-3}$ (note that $V_{i-3} = \lbrace
v_1,v_2,\dots,v_{i-1} \rbrace$); see \cref{fig:outerpath}(left). For
$4\le i< n$, the vertex $v_i$ is incident to an internal
edge~$e_i=v_iv_j$ of~$\Gamma$ such that~$j<i$ and~$v_j$ belongs to
the triangle~$t_{i-3}$. Let $G_i=G[v_1,v_2,\dots,v_i]$. We
iteratively construct reducible regular OORs
$\Gamma_3,\Gamma_4,\dots,\Gamma_n$ of
$G_3,G_4,\dots,G_n(=G)$, respectively. We create~$\Gamma_3$ by
arbitrarily drawing~$G_3$ on the circle. To obtain~$\Gamma_i$, for
$4 \le i<n$ from~$\Gamma_{i-1}$, we consider the edge~$e_i=v_iv_j$
and place $v_i$ next to~$v_j$ on the circle, avoiding the (empty)
arc of the circle that corresponds to~$e_{i-1}$; see
\cref{fig:outerpath}(right). Vertex~$v_n$ is placed next
to~$v_{n-1}$, avoiding the arc that corresponds to~$e_{n-1}$.
(This yields that~$v_n$ has the consecutive neighbors property.)
\begin{figure}
\caption{A drawing~$\Gamma$ of an outerpath~$G$ and a reducible
regular OOR of~$G$ based on~$\Gamma$. Inner edges are black,
outer edges are blue, weak dual edges are green.}
\label{fig:outerpath}
\end{figure}
For any three points $a$, $b$, and $c$ on the circle~$C$,
let~$h_{ab}^{+c}$ be the open halfplane that is defined by the
line~$\ell_{ab}$ through $a$ and $b$ and that contains~$c$.
Similarly, let~$h_{ab}^{-c}$ be the open halfplane defined
by~$\ell_{ab}$ that does not contain~$c$. Hence,
$h_{ab}^{+c}\cup\ell_{ab} \cup h_{ab}^{-c} = \mathbb{R}^2$.
We keep the invariant that when we place~$w$ on~$C$, $h_{vw}^{-u}$
is empty and $h_{v'w}^{-u}$ contains only~$v$ (among the vertices
placed so far).
Now we show that when we add the triangle $\triangle wv'v$, if a
non-edge went through the outer face of the drawing~$\Gamma_{i-1}$
of~$G_{i-1}$, it will continue to do so in the drawing~$\Gamma_i$
of~$G_i$. We assume that $\triangle uv'v$ is oriented
counterclockwise (as in \cref{fig:outerpath-proof}).
\begin{figure}
\caption{Constructing a representation for outerpaths: the
invariants are maintained.}
\label{fig:outerpath-proof}
\end{figure}
Let $r$ be the rightmost neighbor of~$v$ in~$G_{i-1}$ (with respect
to the ray from~$v$ to the center of~$C$). Since $u$ is a neighbor
of~$v$, $r=u$ (as in \cref{fig:outerpath-proof}) or $r$ lies
strictly between $u$ and $v$, but $r \ne w$ because
$w \not\in V(G_{i-1})$. Note that the halfplane~$h_{v'w}^{-u}$
contains (the interior of) $\triangle wv'v$, but $v$ is the only
vertex in~$h_{v'w}^{-u}$. Therefore, only non-edges {\em incident
to~$v$} can be affected by the addition of $\triangle wv'v$, and
among these only the ones that go through the outer face
of~$\Gamma_{i-1}$ in the vicinity of~$v$. These are the non-edges
(dashed red in \cref{fig:outerpath-proof}) that are incident to~$v$
and lie in the halfplane~$h_{rv}^{-u}$ that is induced by $rv$ and
does not contain~$v'$. Any such non-edge $\{v,x\}$ intersects~$v'w$
since $v$ and $x$ lie on different sides of~$\ell_{v'w}$. The intersection
point of $\{v,x\}$ and~$v'w$ lies on the outer face of~$\Gamma_i$
because~$h_{v'w}^{-u}$ contains only~$v$ and, in~$G_i$, $w$ is
incident to only~$v$ and~$v'$. This proves our claim regarding the
``old'' non-edges.
The ``new'' non-edges (orange dashed in
\cref{fig:outerpath-proof}) are all incident to~$w$ and lie
in~$h_{v'w}^{-v}$. Since the two neighbors of~$w$, namely~$v$
and~$v'$, are consecutive in~$\Gamma_i$, all non-edges incident
to~$w$ go through the outer face.
It remains to show that $\Gamma_i$ is reducible. For the two new
edges incident to~$w$ it is clear that they are both part of the
outer face~-- at least in the vicinity of~$w$.
Since~$h_{vw}^{-u}$ is empty, the only old edge that is affected
by the addition of~$\triangle wv'v$ is the edge~$vr$. It used to
be part of the outer face at least in the vicinity of~$v$.
Arguing similarly as we did above for the non-edge~$\{v,x\}$, we
claim that the intersection point of~$vr$ and~$v'w$ lies on the
outer face.
\end{proof}
A graph is \emph{convex round} if its vertices can be circularly
enumerated such that the open neighborhood of every vertex is an
interval in the enumeration.
A \emph{bipartite graph} with bipartition $(U,W)$ of the vertex set is
\emph{convex} if $U$ can be enumerated such that, for each vertex
in~$W$, its neighborhood is an interval in the enumeration of~$U$.
By definition, every convex round graph and every convex bipartite
graph admits a circular order such that every vertex satisfies the
consecutive-neighbors property. This yields the following.
\begin{obs}
Every convex round graph and every convex bipartite graph admits a
regular OOR.
\end{obs}
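The two interval conditions above are easy to test mechanically. The following sketch (our own illustration, not part of the paper) checks whether a given circular enumeration satisfies the consecutive-neighbors property, that is, whether the neighbors of every vertex occupy a contiguous circular arc of positions; the function names and the $C_5$ example are ours.

```python
def is_circular_arc(positions, n):
    """True iff the given positions form a contiguous arc of the cycle 0..n-1."""
    m = len(positions)
    if m <= 1:
        return True
    s = sorted(positions)
    # circular gaps between successive occupied positions; a contiguous arc
    # has at most one gap larger than 1 (the complementary arc)
    gaps = [(s[(i + 1) % m] - s[i]) % n for i in range(m)]
    return sum(g != 1 for g in gaps) <= 1

def has_consecutive_neighbors(order, adj):
    """Check the consecutive-neighbors property for a circular vertex order."""
    pos = {v: i for i, v in enumerate(order)}
    return all(is_circular_arc([pos[u] for u in adj[v]], len(order))
               for v in order)

# C5 is convex round: the order (0,2,4,1,3) works, the natural order does not.
c5 = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
print(has_consecutive_neighbors([0, 2, 4, 1, 3], c5))  # True
print(has_consecutive_neighbors([0, 1, 2, 3, 4], c5))  # False
```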
\end{document}
\begin{document}
\title[M. Nahon]{Existence and regularity of optimal shapes for spectral functionals with Robin boundary conditions}
\author[M. Nahon]{Mickaël Nahon}
\address[Mickaël Nahon]{Univ. Savoie Mont Blanc, CNRS, LAMA \\ 73000 Chamb\'ery, France}
\email{ [email protected]}
\keywords{Free Discontinuity, Spectral optimization, Robin Laplacian, Robin boundary conditions}
\subjclass[2020]{ 35P15, 49Q10. }
\maketitle
\begin{abstract}
We establish the existence and find some qualitative properties of open sets that minimize functionals of the form $F(\lambda_1(\Om;\beta),\hdots,\lambda_k(\Om;\beta))$ under a measure constraint on $\Om$, where $\lambda_i(\Om;\beta)$ denotes the $i$-th eigenvalue of the Laplace operator on $\Om$ with Robin boundary conditions of parameter $\beta>0$. Moreover, we show that minimizers of $\lambda_k(\Om;\beta)$ for $k\geq 2$ satisfy the conjectured identity $\lambda_k(\Om;\beta)=\lambda_{k-1}(\Om;\beta)$ in dimension three and higher.
\end{abstract}
\tableofcontents
\section{Introduction}
Let $\Om$ be a bounded Lipschitz domain in $\Rn$, let $\beta>0$ be a parameter that is fixed throughout the paper, and let $f\in L^2(\Om)$. The Poisson equation with Robin boundary conditions is
\[\begin{cases}-\Delta u=f& \text{ in }\Om,\\ \partial_\nu u+\beta u=0 & \text{ on }\partial\Om,\end{cases}\]
where $\partial_\nu$ is the outward normal derivative, which may only have a meaning in the weak sense that, for all $v\in H^1(\Om)$,
\[\int_{\Om}\nabla u\cdot\nabla v\mathrm{d}\Ln+\int_{\partial\Om}\beta u v\mathrm{d}\Hs=\int_{\Om}fv\mathrm{d}\Ln.\]
This equation (and in particular its boundary conditions) has several interpretations: we may see the solution $u$ as the temperature in a homogeneous solid $\Om$ with volumetric heat source $f$, separated from a thermostat by an insulating layer on the boundary (more precisely, a layer of width $\beta^{-1}\epsilon$ and conductivity $\epsilon$, in the limit $\epsilon\rightarrow 0$).\\
Another interpretation is to see $u$ as the vertical displacement of a membrane of shape $\Om$ subject to a volumetric normal force $f$, the membrane being attached along its boundary by elastic bands whose stiffness is proportional to $\beta$.\\
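As a concrete illustration of the weak form above, here is a minimal sketch (ours, not from the paper) that solves the 1D model problem $-u''=1$ on $(0,1)$ with Robin conditions by P1 finite elements; the Robin terms enter the stiffness matrix through the boundary integral $\beta(u(0)v(0)+u(1)v(1))$, and in this 1D case the exact solution $u(x)=-x^2/2+x/2+1/(2\beta)$ is recovered at the nodes.

```python
import numpy as np

# P1 finite elements for the weak form
#   int u'v' dx + beta*(u(0)v(0) + u(1)v(1)) = int f v dx
# on (0,1) with f = 1.
beta, N = 1.0, 100
h = 1.0 / N
K = np.zeros((N + 1, N + 1))
for e in range(N):  # assemble the stiffness matrix element by element
    K[e:e + 2, e:e + 2] += np.array([[1.0, -1.0], [-1.0, 1.0]]) / h
K[0, 0] += beta      # the Robin terms enter the bilinear form,
K[N, N] += beta      # not the right-hand side
F = np.full(N + 1, h)
F[0] = F[N] = h / 2  # load vector for f = 1
u = np.linalg.solve(K, F)

x = np.linspace(0.0, 1.0, N + 1)
u_exact = -x**2 / 2 + x / 2 + 1 / (2 * beta)
print(np.max(np.abs(u - u_exact)))  # tiny: P1 is nodally exact here
```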
This equation is associated to a sequence of eigenvalues
\[0<\lambda_1(\Om;\beta)\leq \lambda_2(\Om;\beta)\leq \hdots\rightarrow +\infty,\]
with eigenfunctions $u_k(\Om;\beta)$ that satisfy
\[\begin{cases}\Delta u_k(\Om;\beta)+\lambda_{k}(\Om;\beta)u_k(\Om;\beta)=0& \text{ in }\Om,\\ \partial_\nu u_k(\Om;\beta)+\beta u_k(\Om;\beta)=0 & \text{ on }\partial\Om.\end{cases}\]
The quantities $(\lambda_k(\Om;\beta))_k$ may be extended to any open set $\Om$ in a natural way; see Section 2 for more details.\bigbreak
In this paper, we study shape optimization problems involving the eigenvalues $(\lambda_k(\Om;\beta))_k$ under a measure constraint, on general open sets. In particular, we prove that when $F(\lambda_1,\hdots,\lambda_k)$ is a function with positive partial derivatives in each $\lambda_i$ (such as $F(\lambda_1,\hdots,\lambda_k)=\lambda_1+\hdots+\lambda_k$), then for any $m,\beta>0$ the optimization problem
\[\min\left\{F\left(\lambda_1(\Om;\beta),\hdots,\lambda_k(\Om;\beta)\right),\ \Omega\subset\Rn\text{ open such that }|\Om|=m\right\}\]
has a solution. Moreover, the topological boundary of any optimal set is rectifiable and Ahlfors-regular, with finite $\Hs$-measure.
For functionals of the form $F(\lambda_1,\hdots,\lambda_k)=\lambda_k$, while minimizers are only known to exist in a relaxed $\SBV$ setting (detailed in the second section), we show that any $\SBV$ minimizer satisfies
\[\lambda_k(\Om;\beta)=\lambda_{k-1}(\Om;\beta)\]
in any dimension $n\geq 3$.
\subsection{State of the art}
The link between the eigenvalues of the Laplace operator (or other differential operators) on a domain and the geometry of this domain is a problem that has been widely studied, in particular in the field of spectral geometry.\\
The earliest and most well-known result in this direction is the Faber-Krahn inequality, which states that among sets of given measure, the first eigenvalue of the Laplacian with Dirichlet boundary conditions is minimal on the ball. The same result was shown for Robin boundary conditions with positive parameter in \cite{B88} in the two-dimensional case, then in \cite{D06} in any dimension for a certain class of domains on which the trace may be defined, using rearrangement methods. It was extended in \cite{BG10}, \cite{BG15} to the $\SBV$ framework that we will describe in the next section, showing that the first eigenvalue with Robin boundary conditions is minimal on the ball among all open sets of given measure. In order to handle the lack of uniform smoothness of the admissible domains, the method here is to consider a relaxed version of the problem, so as to optimize an eigenfunction instead of a shape. Once a minimizer is known to exist in the relaxed framework, it is shown by regularity and symmetry arguments that this minimizer corresponds to the ball.\\
Similar problems of spectral optimization with Neumann boundary conditions, or Robin conditions with negative parameter, have been shown to be different in nature: in the former case the first eigenvalue is maximal on the disk, and this is shown with radically different methods, mainly by building appropriate test functions, since the eigenvalues are defined as an infimum through the Courant-Fischer min-max formula. Let us also mention several maximization results for Robin boundary conditions with a parameter that scales with the perimeter, obtained in \cite{L19}, \cite{GL19} with similar methods.\\
The existence and partial regularity of minimizers of functionals $F(\lambda_1^D(\Om),\hdots,\lambda_k^D(\Om))$ (where $\lambda_i^D(\Om)$ is the $i$-th eigenvalue of the Laplacian with Dirichlet boundary conditions) with measure constraint or penalization have been established in \cite{B12}, \cite{MP13}, \cite{KL18}, \cite{KL19}: it is known that if $F$ is increasing and bi-Lipschitz in each $\lambda_i$ then there is an optimal open set that is $\mathcal{C}^{1,\alpha}$ outside of a singular set of codimension at least three, and if $F$ is merely nondecreasing in each coordinate then there is an optimal quasiopen set whose boundary is analytic outside of a singular set of codimension three and of points with Lebesgue density one. It has been shown in \cite{BMPV15}, \cite{KL19} that a shape optimizer for the $k$-th eigenvalue with Dirichlet boundary conditions and measure constraint admits Lipschitz eigenfunctions. In these papers the monotonicity and scaling properties of the eigenvalues with Dirichlet boundary conditions ($\om\mapsto \lambda_k^D(\om)$ is decreasing in $\om$) play a crucial role; however, eigenvalues with Robin boundary conditions enjoy no such properties, so the same methods cannot be extended in a straightforward way.\\
The minimization of $\lambda_2(\Om;\beta)$ under measure constraint on $\Om$ was treated in \cite{K09}; as in the Dirichlet case, the minimizer is the disjoint union of two balls of the same measure. For the minimization of $\lambda_k(\Om;\beta)$ or other functionals of $\lambda_1(\Om;\beta),\hdots,\lambda_k(\Om;\beta)$, nothing is known except for the existence of a minimizer in the relaxed setting with bounded support for $\lambda_k(\Om;\beta)$, see \cite{BG19}. The regularity theory for minimizers of functionals involving Robin boundary conditions was developed in \cite{CK16}, \cite{K19}, and we will rely on some of its results in our vectorial setting.\\
Numerical simulations in \cite{AFK13} for two-dimensional minimizers of $\lambda_k(\cdot;\beta)$ (for $3\leq k\leq 7$) with prescribed area suggest a bifurcation phenomenon in which the optimal shape is a union of $k$ balls for every small enough $\beta$, and is connected for every large enough $\beta$. In \cite{AFK13}, the connected minimizers were searched for by parametric optimization among perturbations of the disk; however, a consequence of our analysis in the last section is that minimizers of $\lambda_3(\cdot;\beta)$ are never homeomorphic to the disk.
\subsection{Statements of the main results}
In the first part of the paper, we are concerned with what we call the non-degenerate case; consider
\[F:\left\{\lambda\in\R^k:0<\lambda_1\leq \lambda_2\leq\hdots\leq \lambda_k\right\}\to\R_+\]
a Lipschitz function with directional derivatives (in the sense that for any $\lambda\in\R^k$ there is some positively homogeneous function $F_0$ such that $F(\lambda+\nu)=F(\lambda)+F_0(\nu)+o_{\nu\to 0}(|\nu|)$) such that for all $i\in\left\{1,\hdots,k\right\}$ and all $0<\lambda_1\leq \hdots\leq\lambda_k$,
\begin{equation}\label{HypF}
\frac{\partial F}{\partial^\pm \lambda_i}(\lambda_1,\hdots,\lambda_k)>0,\ F(\lambda_1,\hdots,\lambda_{k-1},\mu_k)\underset{\mu_k\to\infty}{\longrightarrow}+\infty,
\end{equation}
where $\frac{\partial}{\partial^{\pm} \lambda_i}$ denotes the directional partial derivatives in $\lambda_i$. This applies in particular to each of the functions
\[F_p(\lambda_1,\hdots,\lambda_k)=\left(\sum_{i=1}^k \lambda_i^p\right)^\frac{1}{p}.\]
Our first main result is the following.
\begin{main}\label{main1}
Let $F$ be such a function and let $m>0$. Then there exists an open set that minimizes the functional
\[\Om\mapsto F(\lambda_1(\Om;\beta),\hdots,\lambda_k(\Om;\beta))\]
among open sets of measure $m$ in $\Rn$. Moreover, any minimizing set $\Om$ is bounded and satisfies $\Hs(\partial\Om)\leq C$ for some constant $C>0$ depending only on $(n,m,\beta,F)$, and $\partial\Om$ is Ahlfors-regular.
\end{main}
Here are the main steps of the proof:
\begin{itemize}[label=\textbullet]
\item \textbf{Relaxation}. We relax the problem in the $\SBV$ framework; this is introduced in the next subsection, following \cite{BG10}, \cite{BG15}, \cite{BG19}. The idea is that the eigenfunctions on a domain $\Om$ are expected to be nonzero almost everywhere on $\Om$; we extend these eigenfunctions by zero outside of $\Omega$ (thereby creating a discontinuity along $\partial\Om$) and reformulate the optimization problem over general functions defined on $\R^n$ that may have discontinuities, with a measure constraint on their support. The advantage is that we have compactness and lower semi-continuity results that yield the existence of minimizers in the relaxed framework; however, a sequence of eigenfunctions extended by zero may converge to a function that does not correspond to the eigenfunctions of an open domain, so we will need to show some regularity of relaxed minimizers.
\item \textbf{A priori estimates and nondegeneracy}. We obtain a priori estimates for relaxed interior minimizers (that is, minimizers among all competitors whose support is contained in their own). More precisely, for any interior minimizer corresponding to the eigenfunctions $(u_1,\hdots,u_k)$, we show that at almost every point of the support of these eigenfunctions, at least one of them is above a certain positive threshold. We also obtain $L^\infty$ bounds on these eigenfunctions and deduce a lower estimate for the Lebesgue density of the support, from which we obtain the boundedness of the support.
\item \textbf{Existence of minimizers}. We consider a minimizing sequence and show that, up to translations, it either converges to a minimizer or splits into two minimizing sequences of similar functionals depending on $p$ and $k-p$ eigenvalues respectively (where $1\leq p<k$), and minimizers of these exist by induction on $k$.
\item \textbf{Regularity}. Finally, we show the regularity of relaxed minimizers, meaning that a relaxed minimizer corresponds to the eigenfunctions of a certain open domain extended by zero, by showing that the singular set of a relaxed minimizer is closed up to an $\Hs$-negligible set.
\end{itemize}
Notice that in the second step we do not show that the first eigenfunction (or one of the $l$ first ones when the minimizer has $l$ connected components) is positive, which is what we expect in general for sufficiently smooth sets: if $u_1$ is the first (positive) eigenfunction on a connected $\mathcal{C}^2$ set $\Om$ and $u_1(x)=0$ for some $x\in\partial\Om$, then by Hopf's lemma $\partial_\nu u_1(x)<0$, which breaks the Robin condition at $x$; hence $\inf_{\Om}u_1>0$. In our case, we get instead a ``joint non-degeneracy'' of the eigenfunctions, in the sense that at every point of their joint support, at least one of them is positive.\bigbreak
Notice also that the second hypothesis in \eqref{HypF} is not superfluous: without it, a minimizing sequence $(\Om^i)$ could have some of its first $k$ eigenvalues diverge. This is because, unlike in the Dirichlet case, there is no upper bound on $\frac{\lambda_k(\cdot;\beta)}{\lambda_1(\cdot;\beta)}$ in general, even among sets of fixed measure. While $\lambda_k(\cdot;\beta)$ is not homogeneous under dilation, we still have the scaling property
\[\lambda_k(r\Om;\beta)=r^{-2}\lambda_k(\Om;r\beta).\]
Consider a connected smooth open set $\Om$. Since each $\lambda_k(\Om;r\beta)$ converges to $\lambda_k(\Om;0)$ (the eigenvalues with Neumann boundary conditions) as $r\to 0$, and $0=\lambda_1(\Om;0)<\lambda_2(\Om;0)$, we get $\frac{\lambda_k(r\Om;\beta)}{\lambda_1(r\Om;\beta)}\underset{r\to 0}{\longrightarrow}+\infty$ for any $k\geq 2$. A counterexample among sets of fixed measure may be obtained with the disjoint union of $r\Om$ for small $r$ and a set $\om$ of prescribed measure such that $\lambda_1(\om;\beta)>\lambda_k(r\Om;\beta)$, for instance a disjoint union of enough balls of radius $\rho>0$, chosen small enough to have $\lambda_1(\om;\beta)=\lambda_1(\B_\rho;\beta)>\lambda_k(r\Om;\beta)$.\bigbreak
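The scaling property can be checked numerically. The sketch below (our illustration, with the interval $(0,L)$ standing in for $\Om$) computes Robin eigenvalues of $-u''$ by P1 finite elements and verifies $\lambda_k(r\Om;\beta)=r^{-2}\lambda_k(\Om;r\beta)$; with matching meshes the identity even holds exactly at the discrete level.

```python
import numpy as np
from scipy.linalg import eigh

def robin_eigs(L, beta, N=200, k=5):
    """First k eigenvalues of -u'' on (0, L) with Robin parameter beta,
    discretized by P1 finite elements (generalized problem K u = lam M u)."""
    h = L / N
    K = np.zeros((N + 1, N + 1))
    M = np.zeros((N + 1, N + 1))
    for e in range(N):
        K[e:e + 2, e:e + 2] += np.array([[1.0, -1.0], [-1.0, 1.0]]) / h
        M[e:e + 2, e:e + 2] += h / 6 * np.array([[2.0, 1.0], [1.0, 2.0]])
    K[0, 0] += beta  # Robin boundary terms
    K[N, N] += beta
    return eigh(K, M, eigvals_only=True)[:k]

beta, r = 1.0, 0.5
lhs = robin_eigs(r, beta)               # lambda_k(r*Omega; beta)
rhs = robin_eigs(1.0, r * beta) / r**2  # r^{-2} lambda_k(Omega; r*beta)
print(np.max(np.abs(lhs - rhs) / rhs))  # ~ machine precision
```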
In the second part of the paper, we study the minimizers of the functional
\[\Om\mapsto \lambda_k(\Om;\beta).\]
A minimizer in the $\SBV$ framework (see the introduction below) was shown to exist in \cite{BG19}; aside from the fact that its support is bounded, nothing more is known. We show that, in this $\SBV$ framework, a minimizer necessarily satisfies $\lambda_{k-1}(\Om;\beta)=\lambda_k(\Om;\beta)$, in the sense of Definition \ref{def_relaxed}.\bigbreak
This is a long-standing open problem for minimizers of $\lambda_k$ with Dirichlet boundary conditions (see \cite[open problem 1]{H06} and \cite{O04}).\bigbreak
However, although we prove it for Robin conditions, we do not expect this result to extend directly to the Dirichlet case: even if some sequence of smooth minimizers $\Om^\beta$ of $\lambda_k(\cdot;\beta)$ approached a minimizer $\Om$ of $\lambda_k^D$ that is a counterexample to the conjecture, there is no reason why the upper semi-continuity $\lambda_{k-1}^D(\Om)\geq \limsup_{\beta\to\infty}\lambda_{k-1}(\Om^\beta;\beta)$ should hold.
\begin{main}
Suppose $n\geq 3$, $k\geq 2$. Let $m>0$, and let $\U$ be a relaxed minimizer of
\[\V\mapsto \lambda_k(\V;\beta)\]
among admissible functions with support of measure $m$. Then
\[\lambda_{k-1}(\U;\beta)=\lambda_k(\U;\beta).\]
\end{main}
Here are the main steps and ideas of the proof:
\begin{itemize}[label=\textbullet]
\item First, we replace the minimizer $\U=(u_1,\hdots,u_k)$ with another minimizer $\V=(v_1,\hdots,v_k)$ with the properties that $v_1\geq 0$, $\lambda_{k}(\V;\beta)>\lambda_{k-1}(\V;\beta)$, and $\V\in L^\infty(\R^n)$. One might think these properties also hold for $\U$; however, there is no particular reason why $\Span(\U)$ should contain eigenfunctions for $\lambda_1(\U;\beta),\hdots,\lambda_{k-1}(\U;\beta)$ in a variational sense.\\
This phenomenon may be easily understood in a finite-dimensional setting as follows: consider the diagonal matrix
$A=\begin{pmatrix}\lambda_1 \\ & \lambda_2 \\ & & \lambda_3\end{pmatrix}$ with $\lambda_1<\lambda_2<\lambda_3$. Then $\lambda_2=\Lambda_2\Big[A\Big]$ is given by the min-max formula
\[\lambda_2=\inf_{V\subset \R^3,\ \text{dim}(V)=2}\ \sup_{x\in V\setminus\{0\}}\frac{(x,Ax)}{(x,x)}.\]
This infimum is attained by the subspace $\Span(e_1,e_2)$, but also by any subspace $\Span(e_1+te_3,e_2)$ with $t^2\leq \frac{\lambda_2-\lambda_1}{\lambda_3-\lambda_2}$, and these subspaces do not contain the first eigenvector $e_1$.
\item Then we obtain a weak optimality condition on $u_k$ using perturbations on sets with small enough perimeter. The reason for this is that we have no access to any information on $u_1,\hdots,u_{k-1}$ apart from the fact that their Rayleigh quotients are strictly less than $\lambda_k(\Om;\beta)$, so we must use perturbations of $u_k$ that do not dramatically increase the Rayleigh quotients of $u_1,\hdots,u_{k-1}$.
\item We apply this to sets of the form $\B_{x,r}\cap\left\{|u_k|\leq t\right\}$ where $r$ is chosen small enough for each $t$. With this we obtain that $|u_k|\geq c 1_{\left\{ u_k\neq 0\right\}}$.
\item We deduce the result by showing that the support of $\U$ is disconnected, so that the $k$-th eigenvalue may be decreased by dilations without changing the volume.
\end{itemize}
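The finite-dimensional example in the first step above is easy to reproduce numerically: the supremum of the Rayleigh quotient over a subspace is the top eigenvalue of the compression $Q^\top AQ$, for $Q$ an orthonormal basis of the subspace (the sketch and its threshold check are ours).

```python
import numpy as np

def sup_rayleigh(basis, A):
    """Sup of the Rayleigh quotient (x,Ax)/(x,x) over span(basis)."""
    Q, _ = np.linalg.qr(np.column_stack(basis))
    return np.linalg.eigvalsh(Q.T @ A @ Q).max()

lam1, lam2, lam3 = 1.0, 2.0, 5.0
A = np.diag([lam1, lam2, lam3])
e1, e2, e3 = np.eye(3)

# threshold t^2 <= (lam2 - lam1) / (lam3 - lam2)
t_max = np.sqrt((lam2 - lam1) / (lam3 - lam2))
inside = sup_rayleigh([e1 + 0.9 * t_max * e3, e2], A)   # still equals lam2
outside = sup_rayleigh([e1 + 1.5 * t_max * e3, e2], A)  # exceeds lam2
print(inside, outside)
```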
While the existence of open minimizers is not yet known, we end with a few observations on the topology of these minimizers, in particular the fact that a two-dimensional minimizer of $\lambda_3(\cdot;\beta)$ with prescribed measure is never simply connected.
\section{Relaxed framework}
Throughout the paper, we use the relaxed framework of $\SBV$ functions to define Robin eigenvalues on any open set without regularity conditions, and more importantly to transform our shape optimization problem into a free discontinuity problem on functions that are no longer defined on a particular domain. The $\SBV$ space was originally developed to handle relaxations of free discontinuity problems such as the Mumford-Shah functional, which will come into play later; we refer to \cite{AFP00} for a complete introduction. $\SBV$ functions may be thought of as ``piecewise $W^{1,1}$'' functions, and this space is defined as a particular subspace of $BV$ as follows:
\begin{definition}
An $\SBV$ function is a function $u\in BV(\Rn,\R)$ whose distributional derivative $Du$ (which is a finite vector-valued Radon measure) may be decomposed as
\[Du=\nabla u \Ln +(\overline{u}-\underline{u})\nu_u \Hs_{\lfloor J_u}, \]
where $\nabla u\in L^1(\Rn)$ and $J_u$ is the jump set of $u$, defined as the set of points $x\in\Rn$ for which there are $\overline{u}(x)\neq \underline{u}(x)\in\R$ and $\nu_u(x)\in\mathbb{S}^{n-1}$ such that
\[\left(y\mapsto u(x+ry)\right)\underset{L^1_{\text{loc}}(\Rn)}{\longrightarrow}\overline{u}(x)1_{\{y:y\cdot\nu_u(x)>0\}}+\underline{u}(x)1_{\{y:y\cdot\nu_u(x)<0\}}\text{ as }r\to 0.\]
\end{definition}
We will not work directly with the $\SBV$ space but with an $L^2$ analogue defined below, which was studied in \cite{BG10}.
\begin{definition}
Let $\Uk$ be the space of functions $\U\in L^2(\R^n,\R^k)$ such that
\[D\U=\nabla \U \Ln +(\overline{\U}-\underline{\U})\nu_\U \Hs_{\lfloor J_\U}, \]
where $\nabla \U\in L^2(\R^n,\R^{nk})$ and $\int_{J_\U}(|\overline{\U}|^2+|\underline{\U}|^2)\mathrm{d}\Hs<\infty$. The second term will be written $D^s\U$ (where $s$ stands for singular). The function $\U$ is said to be linearly independent if its components span a $k$-dimensional subspace of $L^2(\R^n)$.\\
We will also say that a function $\U\in \Uk$ is disconnected if there is a measurable partition $\Om,\om$ of the support of $\U$ such that $\U1_\Om$ and $\U1_\om$ are in $\Uk$, and: \begin{align*}
D^s(\U1_\Om)&=(\overline{\U1_\Om}-\underline{\U1_\Om})\nu_\U \Hs_{\lfloor J_\U},\\
D^s(\U1_\om)&=(\overline{\U1_\om}-\underline{\U1_\om})\nu_\U \Hs_{\lfloor J_\U}.\end{align*}
In this case we will write $\U=(\U1_\Om)\oplus(\U1_\om)$.
\end{definition}
The following compactness theorem is a reformulation of Theorem 2 from \cite{BG10}.
\begin{proposition}\label{CompactnessLemma}
Let $(\U^i)$ be a sequence in $\Uk$ such that
\[\sup_{i}\left(\int_{\R^n}|\nabla\U^i|^2\mathrm{d}\Ln+\int_{J_{\U^i}}(|\underline{\U^i}|^2+|\overline{\U^i}|^2)\mathrm{d}\Hs+\int_{\Rn}|\U^i|^2\mathrm{d}\Ln\right)<\infty,\]
then there exists a subsequence $(\U^{\phi(i)})$ and a function $\U\in\Uk$ such that
\begin{align*}
\U^{\phi(i)}&\underset{L^2_\text{loc}}{\longrightarrow}\U,\\
\nabla\U^{\phi(i)}&\underset{L^2_{\text{loc}}-\text{weak}}{\rightharpoonup}\nabla\U.
\end{align*}
Moreover, for any bounded open set $A\subset\R^n$,
\begin{align*}
\int_{A}|\nabla \U|^2\mathrm{d}\Ln&\leq \liminf_{i\to+\infty}\int_{A}|\nabla \U^{\phi(i)}|^2\mathrm{d}\Ln,\\
\int_{J_\U\cap A}(|\overline{\U}|^2+|\underline{\U}|^2)\mathrm{d}\Hs&\leq \liminf_{i\to+\infty}\int_{J_{\U^{\phi(i)}}\cap A}(|\overline{\U^{\phi(i)}}|^2+|\underline{\U^{\phi(i)}}|^2)\mathrm{d}\Hs.
\end{align*}
\end{proposition}
\begin{proof}
The proof is an adaptation of \cite[Theorem 2]{BG10} to the vector-valued case.
\end{proof}
We now define a notion of $i$-th eigenvalue of the Laplace operator with Robin boundary conditions that allows us to speak of the functional $\lambda_k(\cdot;\beta)$ with no pre-defined domain, and to define the $k$-th eigenvalue on any open set even when the trace of $H^1$ functions is not well-defined.
\begin{definition}\label{def_relaxed}
Let $\U\in\Uk$ be linearly independent. We define the two Gram matrices:
\begin{align*}\label{defmatrix}
A(\U)&=\left(\langle u_i,u_j\rangle_{L^2(\mathbb{R}^n,\Ln)}\right)_{1\leq i,j\leq k},\\
B(\U)&=\left(\langle \nabla u_i,\nabla u_j\rangle_{L^2(\mathbb{R}^n,\Ln)}+\beta\langle \overline{u_i},\overline{u_j}\rangle_{L^2(J_\U,\Hs)}+\beta\langle \underline{u_i},\underline{u_j}\rangle_{L^2(J_\U,\Hs)}\right)_{1\leq i,j\leq k}.
\end{align*}
We then define the $i$-th eigenvalue of the vector-valued function $\U$ as
\begin{equation}
\lambda_i(\U;\beta)=\inf_{V\subset\Span(\U), \dim(V)=i}\sup_{v\in V}\frac{\int_{\mathbb{R}^n}|\nabla v|^2\mathrm{d}\Ln+\beta \int_{J_\U}(\overline{v}^2+\underline{v}^2)\mathrm{d}\Hs}{\int_{\mathbb{R}^n}v^2\mathrm{d}\Ln}=\Lambda_i\Big[A(\U)^{-\frac{1}{2}}B(\U)A(\U)^{-\frac{1}{2}}\Big],
\end{equation}
where $\Lambda_i$ denotes the $i$-th eigenvalue of a symmetric matrix.\\
We will say that $\U$ is normalized if $A(\U)=I_k$ and $B(\U)$ is the diagonal matrix $\mathrm{diag}(\lambda_1(\U;\beta),\hdots,\lambda_k(\U;\beta))$. By the spectral theorem, for any linearly independent $\U\in\Uk$ there exists $P\in \text{GL}_k(\R)$ such that $P\U$ is normalized.\\
Although we expect the optimal sets to have rectifiable boundary, we may define the eigenvalues with Robin boundary conditions for any open set $\Om\subset \Rn$ as
\begin{equation}\label{defopen}
\lambda_k(\Om;\beta):=\inf\Big[\lambda_k(\U;\beta),\ \U\in \Uk\text{ linearly independent}:\Hs(J_\U\setminus \partial\Om)=\Ln(\left\{\U\neq 0\right\}\setminus \Om)=0\Big].
\end{equation}
\end{definition}
It may be checked that for any bounded Lipschitz domain, the admissible space corresponds to linearly independent functions $\U\in H^1(\Om)^k$, so this definition is consistent with the usual one.
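For a concrete family of functions, the definition above reduces to a generalized eigenvalue problem for the two Gram matrices. The following toy 1D sketch is ours: it samples $k$ functions on $(0,1)$ and, as a simplifying assumption, models the jump term by the two endpoint traces only; `eigh(B, A)` then returns exactly the eigenvalues $\Lambda_i\big[A^{-1/2}BA^{-1/2}\big]$.

```python
import numpy as np
from scipy.linalg import eigh

# Toy discretization on (0,1): columns of U sample k linearly independent
# functions; the SBV jump term is modeled by the two endpoint traces only.
N, k, beta = 400, 3, 1.0
x = np.linspace(0.0, 1.0, N + 1)
h = x[1] - x[0]
U = np.column_stack([np.cos(j * np.pi * x) + 1.0 for j in range(1, k + 1)])
dU = np.diff(U, axis=0) / h  # forward-difference gradients

A = h * U.T @ U                                           # <u_i, u_j>_{L^2}
B = h * dU.T @ dU + beta * (np.outer(U[0], U[0])          # Dirichlet energy
                            + np.outer(U[-1], U[-1]))     # + boundary traces

# eigh(B, A) solves B v = lam A v, i.e. the eigenvalues of A^{-1/2} B A^{-1/2}
lam = eigh(B, A, eigvals_only=True)
print(lam)  # Rayleigh-Ritz values over span(U), sorted increasingly
```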
\section{Strictly monotone functionals}
Let us first restate the first main result in the $\SBV$ framework. We define the admissible set of functions as
\[\Uk(m)=\left\{ \V\in\Uk:\ \V\text{ is linearly independent and }|\{\V\neq 0\}|=m\right\}.\]
For any linearly independent $\U\in\Uk$, we let:
\begin{align*}
\F(\U)&:=F(\lambda_1(\U;\beta),\hdots,\lambda_k(\U;\beta)),\\
\F_\gamma(\U)&:=\F(\U)+\gamma|\left\{\U\neq 0\right\}|.
\end{align*}
Our goal is now to show that $\F$ has a minimizer in $\Uk(m)$, and that any minimizer of $\F$ in $\Uk(m)$ comes from an open set, meaning there is an open set $\Om$ that essentially contains $\left\{\U\neq 0\right\}$ such that $\U_{|\Om}\in H^1(\Om)^k$. This is not the case for all $\SBV$ functions: some may have a dense, non-closed jump set, while $\partial\Om$ is closed and not dense.\\
Lemma \ref{PenalizationLemma} will link minimizers of $\F$ in $\Uk(m)$ with minimizers of $\F_\gamma$ among linearly independent $\U\in\Uk$ whose support has measure at most $m$.
\subsection{A priori estimates}
An internal relaxed minimizer of $\F_\gamma$ is a linearly independent function $\U\in\Uk$ such that for any linearly independent $\V\in \Uk$ satisfying $|\left\{\V\neq 0\right\}\setminus\left\{\U\neq 0\right\}|=0$:
\[\F_\gamma(\U)\leq \F_\gamma(\V).\]
To shorten some notations, we introduce the function $G:S_k^{++}(\R)\to \R$ such that
\[\F_\gamma(\U)=G\Big[A(\U)^{-\frac{1}{2}}B(\U)A(\U)^{-\frac{1}{2}}\Big]+\gamma|\left\{\U\neq 0\right\}|,\]
meaning that for any positive definite symmetric matrix $S$, $G\Big[S\Big]=F\left(\Lambda_1\Big[S\Big],\hdots,\Lambda_k\Big[S\Big]\right)$.
The smoothness of $F$ does not imply the smoothness of $G$ in general, because of the multiplicities of the eigenvalues. However, the monotonicity of $F$ implies the monotonicity of $G$ in the following sense: if $M,N$ are positive symmetric matrices, then
\begin{align*}
G\Big[M+N\Big]&\leq G\Big[M\Big]+\left(\max_{i=1,\hdots,k}\sup_{\Lambda_j\big[M\big]\leq\lambda_j\leq \Lambda_j\big[M+N\big]}\frac{\partial F}{\partial^\pm\lambda_i}(\lambda_1,\hdots,\lambda_k)\right) \Tr\Big[N\Big],\\
G\Big[M+N\Big]&\geq G\Big[M\Big]+\left(\min_{i=1,\hdots,k}\inf_{\Lambda_j\big[M\big]\leq\lambda_j\leq \Lambda_j\big[M+N\big]}\frac{\partial F}{\partial^\pm\lambda_i}(\lambda_1,\hdots,\lambda_k)\right) \Tr\Big[N\Big].\end{align*}
Above, $\frac{\partial F}{\partial^\pm\lambda_i}$ denotes the directional partial derivatives of $F$.
Moreover, $G$ has directional derivatives everywhere. Let $M=\begin{pmatrix}\lambda_1 \\ & \ddots \\ & & \lambda_k\end{pmatrix}$ be a diagonal matrix with $p$ distinct eigenvalues and let $1\leq i_1<i_2<\hdots<i_p\leq k$ be such that for any $i\in I_l:=[i_l,i_{l+1})$:
\[\lambda_{i_l}=\lambda_i<\lambda_{i_{l+1}}.\]
Then, for each $i\in I_l$, the function $N\mapsto \Lambda_i\Big[M+N\Big]$ admits the following directional expansion at $M$:
\[\Lambda_i\Big[M+N\Big]=\Lambda_i\Big[M\Big]+\Lambda_{i-i_l+1}\Big[N_{|I_l}\Big]+\underset{N\to 0}{o}(N),\]
where $N_{|I}:=(N_{i,j})_{i,j\in I}$. Since $F$ has directional derivatives everywhere, this means that $G$ admits a directional derivative:
\begin{equation}\label{Diff}
G\Big[M+N\Big]=G\Big[M\Big]+F_0\left(\Lambda_1\Big[ N_{|I_1}\Big],\hdots,\Lambda_{k-i_p+1}\Big[ N_{|I_p}\Big]\right)+\underset{N\to 0}{o}\left(N\right),
\end{equation}
where $F_0$ is the positively homogeneous function that is the directional derivative of $F$ at $(\lambda_1,\hdots,\lambda_k)$.
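The expansion \eqref{Diff} rests on first-order perturbation theory for repeated eigenvalues; the following check (ours) verifies the block formula numerically on a matrix with a double eigenvalue: the first-order shifts of the degenerate pair are the eigenvalues of the corresponding diagonal block of the perturbation.

```python
import numpy as np

rng = np.random.default_rng(0)
M = np.diag([1.0, 1.0, 3.0])  # double eigenvalue 1, simple eigenvalue 3
N = rng.standard_normal((3, 3))
N = (N + N.T) / 2             # symmetric perturbation

s = 1e-5
actual = np.linalg.eigvalsh(M + s * N)

# first-order prediction: the degenerate block I_1 = {0,1} contributes the
# eigenvalues of N restricted to it; the simple eigenvalue just gets N[2,2]
mu = np.linalg.eigvalsh(N[:2, :2])
predicted = np.array([1.0 + s * mu[0], 1.0 + s * mu[1], 3.0 + s * N[2, 2]])

print(np.max(np.abs(actual - predicted)))  # O(s^2)
```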
\begin{proposition}\label{apriori}
Let $\U$ be a relaxed internal minimizer of $\F_\gamma$, and suppose it is normalized. Then there exist constants $M,\delta,R>0$ that depend only on $(n,k,\beta,\F_\gamma(\U),F)$ such that
\[\delta 1_{\left\{ \U\neq 0\right\}}\leq |\U|\leq M.\]
Moreover, up to translations of its connected components, $\U$ is supported in a set of diameter at most $R$.
\end{proposition}
Estimates of the form $|\U|\geq \delta 1_{\{\U\neq 0\}}$ for solutions of elliptic equations with Robin boundary conditions appear in \cite{BL14}, \cite{BG15}, \cite{CK16}; see also \cite{BMV21} in a context without free discontinuity. This is a crucial step in proving the regularity of the function $\U$: once $\U$ is known to take values between two positive bounds, it may be seen as a quasi-minimizer of the Mumford-Shah functional $\int_{\Rn}|\nabla\U|^2\mathrm{d}\Ln+\Hs(J_\U)$, to which the techniques used to show the regularity of Mumford-Shah minimizers (see \cite{DCL89}) may be extended (see \cite{CK16}, \cite{BL14}).
\begin{proof}
We show, in order, that the eigenvalues $\lambda_i(\U;\beta)$ are bounded above and below, the $L^\infty$ bound on $\U$, the lower bound on $\U_{|\left\{\U\neq 0\right\}}$, a lower bound on the Lebesgue density of $\left\{\U\neq 0\right\}$, and then the boundedness of the support.
\begin{itemize}[label=\textbullet]
\item Since $|\left\{ \U\neq 0\right\}|\leq \F_\gamma(\U)/\gamma$, the Faber-Krahn inequality with Robin boundary conditions (as proved in \cite{BG10}) gives $\lambda_1(\U;\beta)\geq \lambda_1(\B^{\F_\gamma(\U)/\gamma};\beta)=:\lambda$.\\
In a similar way, since
\[F(\lambda,\hdots,\lambda,\lambda_k(\U;\beta))\leq F(\lambda_1(\U;\beta),\hdots,\lambda_k(\U;\beta))\leq \F_\gamma(\U)\]
and since $F$ diverges when its last coordinate does, $\lambda_k(\U;\beta)$ is bounded by a constant $\Lambda>0$ that only depends on the behaviour of $F$ and on $\F_\gamma(\U)$. Let us write:
\begin{align*}
a&=\inf_{\frac{1}{2}\lambda\leq \lambda_1\leq \lambda_2\leq\hdots\leq \lambda_k\leq 2\Lambda}\inf_{i=1,\hdots,k}\frac{\partial F}{\partial^\pm \lambda_i}(\lambda_1,\hdots,\lambda_k),\\
b&=\sup_{\frac{1}{2}\lambda\leq \lambda_1\leq \lambda_2\leq\hdots\leq \lambda_k\leq 2\Lambda}\sup_{i=1,\hdots,k}\frac{\partial F}{\partial^\pm \lambda_i}(\lambda_1,\hdots,\lambda_k).\end{align*}
$a$ and $b$ are positive and only depend on $\F_\gamma(\U)$ and the behaviour of $F$.
\item For the $L^\infty$ bound we use a Moser iteration procedure (see for instance \cite[Th 4.1]{HL11} for a similar method). We begin by establishing that $u_i$ is an eigenfunction of $\lambda_i(\U;\beta)$ in a variational sense. \\
Let $v_i\in \mathcal{U}_1$ be such that $\left\{v_i\neq 0\right\}\subset \left\{\U\neq 0\right\}$ and $J_{v_i}\subset J_\U$; we show that $V(u_i,v_i)=0$, where
\[V(u_i,v_i):=\int_{\Rn}\nabla u_i\cdot\nabla v_i\mathrm{d}\Ln+\beta\int_{J_\U}u_i v_i\mathrm{d}\Hs-\lambda_i(\U;\beta)\int_{\R^n}u_i v_i \mathrm{d}\Ln.\]
For this, consider $\U_t=\U-t (v_i-\sum_{j\neq i}V(v_i,u_j)u_j) e_i$. Since $A(\U_t)$ converges to $I_k$ as $t\to 0$, $\U_t$ is linearly independent for a small enough $t$ and
\[\F_\gamma(\U)\leq \F_\gamma(\U_t).\]
This implies, since $\left\{\U_t\neq 0\right\}\subset\left\{\U\neq 0\right\}$, that
\[F(\lambda_1(\U;\beta),\hdots,\lambda_k(\U;\beta))\leq F(\lambda_1(\U_t;\beta),\hdots,\lambda_k(\U_t;\beta)),\]
which may also be written
\[G\Big[B(\U)\Big]\leq G\Big[A(\U_t)^{-\frac{1}{2}}B(\U_t)A(\U_t)^{-\frac{1}{2}}\Big].\]
Now, $A(\U_t)^{-\frac{1}{2}}B(\U_t)A(\U_t)^{-\frac{1}{2}}=B(\U)-(e_i e_i^*)V(u_i,v_i)t+\mathcal{O}(t^2)$. Suppose that $V(u_i,v_i)>0$. Let $i'$ be the lowest index such that $\lambda_{i'}(\U;\beta)=\lambda_i(\U;\beta)$. Then knowing the directional derivative of $G$ given in \eqref{Diff} we obtain (for $t>0$)
\[G\Big[A(\U_t)^{-\frac{1}{2}}B(\U_t)A(\U_t)^{-\frac{1}{2}}\Big]=G\Big[B(\U)\Big]+tV(u_i,v_i)F_0\left(0,0,\hdots,0,-1,0,\hdots,0\right)+\underset{t\to 0}{o}(t),\]
which is less than $G\Big[B(\U)\Big]$ for a small enough $t$: this is a contradiction. When $V(u_i,v_i)\leq 0$ we may do the same by replacing $v_i$ with $-v_i$. Thus for all $v_i$ with support and jump set included in the support and jump set of $\U$
\[\int_{\Rn}\nabla u_i\cdot\nabla v_i\mathrm{d}\Ln+\beta\int_{J_\U}u_i v_i\mathrm{d}\Hs=\lambda_i(\U;\beta)\int_{\R^n}u_i v_i \mathrm{d}\Ln.\]
Now we use Moser iteration methods. Let $\alpha\geq 2$ be such that $u_i\in L^\alpha$. By taking $v_i$ to be a truncation of $|u_i|^{\alpha-2}u_i $ in $[-M,M]$ and letting $M\to\infty$ in the variational equation above, we obtain
\[\int_{\Rn}(\alpha-1) |u_i|^{\alpha -2}|\nabla u_i|^2\mathrm{d}\Ln+\beta\int_{J_\U}(|\overline{u_i}|^{\alpha}+|\underline{u_i}|^{\alpha})\mathrm{d}\Hs=\lambda_i(\U;\beta)\int_{\Rn}|u_i|^\alpha \mathrm{d}\Ln.\]
Using the embedding $BV(\Rn)\hookrightarrow L^{\frac{n}{n-1}}(\Rn)$ we have:
\begin{align*}
\Vert u_i^\alpha\Vert_{L^{\frac{n}{n-1}}}&\leq C_n\Vert u_i^\alpha\Vert_{BV}\\
&\leq C_n\left(\int_{\Rn} |\nabla (|u_i|^{\alpha -1}u_i)|\mathrm{d}\Ln+\int_{J_\U}(|\overline{u_i}|^{\alpha}+|\underline{u_i}|^{\alpha})\mathrm{d}\Hs\right)\\
&\leq C_n\left(\int_{\Rn}\alpha\left(|u_i|^\alpha+|u_i|^{\alpha-2}|\nabla u_i|^2\right)\mathrm{d}\Ln+\int_{J_\U}(|\overline{u_i}|^{\alpha}+|\underline{u_i}|^{\alpha})\mathrm{d}\Hs\right)\\
&\leq C_{n,\beta}\left(\alpha+\lambda_i(\U;\beta)\right)\Vert u_i\Vert_{L^\alpha}^{\alpha}.\\
\end{align*}
And so $\Vert u_i\Vert_{L^{\frac{n}{n-1}\alpha}}\leq \Big[ C_{n,\beta}\left(\alpha+\lambda_i(\U;\beta)\right)\Big]^\frac{1}{\alpha}\Vert u_i\Vert_{L^\alpha}$. We may apply this iteratively with $\alpha_p=2\left(\frac{n}{n-1}\right)^p$ to obtain an $L^\infty$ bound of $u_i$ that only depends on $n,\beta$ and $\lambda_i(\U;\beta)$. In fact using the Faber-Krahn inequality for Robin conditions $\lambda_i(\U;\beta)\geq \lambda_1\left(\B^{|\{\U\neq 0\}|};\beta\right)$ the previous inequality applied to $\alpha_p$ may be simplified into
\[\log\left(\frac{\Vert u_i\Vert_{L^{\alpha_{p+1}}}}{\Vert u_i\Vert_{L^{\alpha_{p}}}}\right)\leq \left(C(n,\beta,|\{\U\neq 0\}|)(p+1)+\frac{1}{2}\log\lambda_i(\U;\beta)\right)\left(\frac{n-1}{n}\right)^p,\]
and summing in $p$ we obtain an estimate of the form $\Vert u_i\Vert_{L^\infty}\leq C(n,\beta,|\{\U\neq 0\}|)\lambda_i(\U;\beta)^\frac{n}{2}$.
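For the reader's convenience, the summation in $p$ can be made explicit. With $x=\frac{n-1}{n}$ we have $\sum_{p\geq 0}x^p=n$ and $\sum_{p\geq 0}(p+1)x^p=n^2$, so summing the previous inequality over all $p\geq 0$ gives
\[\log\Vert u_i\Vert_{L^\infty}\leq \log\Vert u_i\Vert_{L^2}+n^2\,C(n,\beta,|\{\U\neq 0\}|)+\frac{n}{2}\log\lambda_i(\U;\beta),\]
which, after exponentiating and using the normalization $\Vert u_i\Vert_{L^2}=1$, is the stated estimate.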
\item Lower bound on $\U$: our goal is first to obtain an estimate of the form
\begin{equation}\label{EstCaff}
\Tr\Big[B_t\Big]+|\left\{ 0<|\U|\leq t\right\}|\leq \frac{1}{\eps}\Tr\Big[\beta_t\Big],
\end{equation}
where $\eps>0$ is a constant that only depends on the parameters and
\begin{align*}
(B_t)_{i,j}&=\int_{|\U|\leq t}\nabla u_i\cdot\nabla u_j\mathrm{d}\Ln +\beta\int_{J_\U}\left(\overline{u_i 1_{|\U|\leq t}}\cdot\overline{u_j 1_{|\U|\leq t}}+\underline{u_i 1_{|\U|\leq t}}\cdot\underline{u_j 1_{|\U|\leq t}}\right)\mathrm{d}\Hs,\\
(\beta_t)_{i,j}&=\beta\int_{\partial^*\left\{ |\U|>t\right\}\setminus J_\U}u_i u_j\mathrm{d}\Hs.
\end{align*}
This is intuitively what we obtain by comparing $\U$ and $\U1_{\left\{|\U|>t\right\}}$. From this we will derive a lower bound on $\inf_{\left\{\U\neq 0\right\}}|\U|$ with arguments similar to those of \cite{CK16}. Suppose \eqref{EstCaff} does not hold. Since $B_t\leq B$ and $|\left\{\U\neq 0\right\}|\leq \F_\gamma(\U)/\gamma$, this means that
\[\beta_t\leq \left(B(\U)+\F_\gamma(\U)I_k\right)k\eps\leq c\eps B(\U),\]
for a certain $c>0$ since $B(\U)\geq \lambda I_k$. Let us now compare $\U$ with $\U_t=\U1_{\left\{ |\U|>t\right\}}$; this function is admissible for a small enough $t$ because $A(\U_t)=I_k-A\left(\U 1_{\left\{ 0<|\U|\leq t\right\} }\right)$, so
\[\Vert A(\U_t)-I_k\Vert\leq Ct^2|\left\{ 0<|\U|\leq t\right\}|.\]
Notice also that $B(\U_t)=B(\U)-B_t+\beta_t$. Then the optimality condition $\F_\gamma(\U)\leq \F_\gamma(\U_t)$ gives
\begin{equation}\label{eqInter}
G\Big[B(\U)\Big]+\gamma|\left\{0<|\U|\leq t\right\}|\leq G\Big[A(\U_t)^{-\frac{1}{2}}(B(\U)-B_t+\beta_t)A(\U_t)^{-\frac{1}{2}}\Big].
\end{equation}
We first show that $B_t$ is small enough for small $t$. With our hypothesis on $\beta_t$ and the fact that $A(\U_t)^{-\frac{1}{2}}\leq I_k+Ct^2I_k\leq (1+c\eps)I_k$ for a small enough $t$
\[G\Big[B(\U)\Big]+\gamma|\left\{0<|\U|\leq t\right\}|\leq G\Big[\Big[1+2c\eps\Big]B(\U)-B_t\Big].\]
So $G\Big[B(\U)\Big]\leq G\Big[(1+2c\eps)B(\U)-B_t\Big]$. Now, there exists $i\in\left\{1,\hdots,k\right\}$ such that
\[\Lambda_i\Big[(1+2c\eps)B(\U)-B_t\Big]\leq (1+2c\eps)\Lambda_i\Big[B(\U)\Big]-\frac{1}{k}\Tr\Big[B_t\Big].\]
And so, using the monotonicity of $F$ and the definition of $a,b$ in the first part of the proof:
\begin{align*}
G\Big[B(\U)\Big]&\leq G\Big[(1+2c\eps)B(\U)-B_t\Big]\\
&\leq F\left((1+2c\eps)\lambda_1(\U;\beta),\hdots,(1+2c\eps)\lambda_{i-1}(\U;\beta),(1+2c\eps)\lambda_i(\U;\beta)-\frac{1}{k}\Tr\Big[B_t\Big],\hdots,(1+2c\eps)\lambda_k(\U;\beta)\right)\\
&\leq G\Big[B(\U)\Big]+2bc\eps\Tr\Big[B(\U)\Big]-a\min\left(\frac{1}{k}\Tr\Big[B_t\Big],\frac{\lambda}{2}\right).
\end{align*}
With a small enough $\eps$, we obtain $\Tr\Big[B_t\Big]\leq \frac{\lambda}{2}$. Now we may come back to \eqref{eqInter}, and using the fact that $A(\U_t)^{-\frac{1}{2}}\leq (1+Ct^2|\left\{0<|\U|\leq t\right\}|)I_k$ we obtain
\[G\Big[B(\U)\Big]+\gamma|\left\{0<|\U|\leq t\right\}|\leq G\Big[B(\U)+\beta_t-B_t+Ct^2|\left\{0<|\U|\leq t\right\}|I_k\Big],\]
and so with the monotonicity of $G$
\[G\Big[B(\U)\Big]+\gamma|\left\{0<|\U|\leq t\right\}|\leq G\Big[B(\U)\Big] +b\Tr\Big[\beta_t\Big]-a\Tr\Big[B_t\Big]+Cbt^2|\left\{0<|\U|\leq t\right\}|.\]
In particular, for a small enough $t>0$ (depending only on the parameters)
\[a\Tr\Big[B_t\Big]+\frac{\gamma}{2}|\left\{0<|\U|\leq t\right\}|\leq b\Tr\Big[\beta_t\Big],\]
and so there exist a large enough constant $C>0$ and a small enough $t_1>0$ such that for any $t\in (0,t_1]$
\begin{equation}\label{EstCaff2}
\Tr\Big[B_t\Big]+|\left\{0<|\U|\leq t\right\}|\leq C\Tr\Big[\beta_t\Big].
\end{equation}
Now we let
\[V:=|\U|=\sqrt{u_1^2+\hdots+u_k^2},\]
our goal being to show that $V\geq \delta 1_{\left\{ \U\neq 0\right\}}$.
Let $f(t)=\int_0^t \tau\Hs(\partial^*\left\{ V> t\right\}\setminus J_\U)\mathrm{d}\tau$. Notice that the right-hand side of \eqref{EstCaff2} is $Ctf'(t)$. Then for any $t\leq t_1$:
\begin{align*}
f(t)&\leq \int_0^t \tau\Hs(\partial^*\left\{ V> \tau\right\}\setminus J_V)\mathrm{d}\tau=\int_{\left\{ 0<V\leq t\right\}}V|\nabla V|\mathrm{d}\Ln\\
&\leq |\left\{ 0<V\leq t\right\}|^{\frac{1}{2n}}\left(\int_{\left\{ 0<V\leq t\right\}}|\nabla V|^2\mathrm{d}\Ln\right)^{\frac{1}{2}}\left(\int_{\left\{ 0<V\leq t\right\}}(V^2)^{\frac{n}{n-1}}\mathrm{d}\Ln\right)^\frac{n-1}{2n}\\
&\leq C(tf'(t))^{\frac{1}{2n}+\frac{1}{2}}\left(|D(V^2)|(\left\{ 0<V\leq t\right\})\right)^\frac{1}{2}\\
&\leq C(tf'(t))^{\frac{1}{2n}+\frac{1}{2}}\left(\int_{\left\{ 0<V\leq t\right\}}V|\nabla V|\mathrm{d}\Ln +\int_{J_V\cap \left\{ 0<V\leq t\right\}}V^2\mathrm{d}\Hs\right)^\frac{1}{2}\\
&\leq C(tf'(t))^{\frac{1}{2n}+\frac{1}{2}}\left(t|\left\{ 0<V\leq t\right\}|^{\frac{1}{2}}\left(\int_{\left\{ 0<V\leq t\right\}}|\nabla V|^2\mathrm{d}\Ln\right)^{\frac{1}{2}} +\int_{J_V\cap \left\{ 0<V\leq t\right\}}V^2\mathrm{d}\Hs\right)^\frac{1}{2}\\
&\leq C(tf'(t))^{1+\frac{1}{2n}}.
\end{align*}
The constant $C>0$ above depends only on the parameters and may change from line to line. This implies that $f'(t)f(t)^{-\frac{2n}{2n+1}}\geq ct^{-1}$, so for any $t\in ]0,t_1[$ such that $f(t)>0$ this may be integrated from $t$ to $t_1$ to obtain
\[f(t_1)^{\frac{1}{2n+1}}\geq c\log(t_1/t).\]
Since $f(t_1)\leq |\left\{\U\neq 0\right\}|^{\frac{1}{2}}\left(\int_{\Rn}|\nabla V|^2\right)^\frac{1}{2}\leq \sqrt{k\F_\gamma(\U)\Lambda/\gamma}$, any such $t$ is bounded below in terms of the parameters of the problem. This means that $f(\delta)=0$ for a certain explicit $\delta>0$. In particular, $(\nabla\U)1_{\left\{ |\U|\leq \delta\right\}}=0$, and by comparing $\U$ with $\U_\delta$, \eqref{EstCaff2} becomes $|\left\{ 0<|\U|\leq \delta\right\}|\leq 0$, so we obtain
\[|\U|\geq \delta 1_{\left\{ \U\neq 0\right\}}.\]
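To detail the integration step used above: writing $g(t):=f(t)^{\frac{1}{2n+1}}$, the inequality $f'(t)f(t)^{-\frac{2n}{2n+1}}\geq ct^{-1}$ becomes $g'(t)\geq \frac{c}{2n+1}t^{-1}$, so that for any $0<t<t_1$ with $f(t)>0$,
\[f(t_1)^{\frac{1}{2n+1}}\geq g(t_1)-g(t)\geq \frac{c}{2n+1}\log(t_1/t),\qquad \text{i.e.}\qquad t\geq t_1\exp\left(-\frac{2n+1}{c}f(t_1)^{\frac{1}{2n+1}}\right),\]
which, combined with the upper bound on $f(t_1)$, gives an explicit value of $\delta$.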
\item To show that the support $\left\{\U\neq 0\right\}$ (or rather its connected components) is bounded, we begin by showing a lower estimate for the Lebesgue density of this set. This is obtained by comparing $\U$ with $\U_r:=\U1_{\mathbb{R}^n\setminus \B_r}$ where $\B_r$ is a ball of radius $r>0$.\bigbreak
As previously, we first need to check that $\U_r$ is admissible for any small enough $r>0$. With the $L^\infty$ bound on $\U$, we get $|A(\U_r)-I_k|\leq C|\left\{ \U\neq 0\right\}\cap \B_r|$. In particular, $|A(\U_r)-I_k|\leq C r^n$, which proves that $A(\U_r)$ is invertible for a small enough $r$.\bigbreak
Let $f(r)=|\left\{ \U\neq 0\right\}\cap \B_r|$. By comparing $\U$ with $\U_r$ we obtain
\[G\Big[B(\U)\Big]+\gamma|\left\{\U\neq 0\right\}\cap \B_r|\leq G\Big[A_r(B-B_r+\beta_r)A_r\Big],\]
where $A_r,B_r,\beta_r$ are defined as previously: $A_r=A(\U_r)^{-\frac{1}{2}}$ and
\begin{align*}
(B_r)_{i,j}&=\int_{\B_r}\nabla u_i\cdot\nabla u_j \mathrm{d}\Ln+\beta\int_{J_\U}\left(\overline{u_i 1_{\B_r}}\cdot\overline{u_j 1_{\B_r}}+\underline{u_i 1_{\B_r}}\cdot\underline{u_j 1_{\B_r}}\right)\mathrm{d}\Hs,\\
(\beta_r)_{i,j}&=\beta\int_{\partial\B_r\setminus J_\U}u_i u_j\mathrm{d}\Hs.
\end{align*}
With the same argument as the one used to obtain the lower bound, this estimate implies that for any $r\in ]0,r_0]$, where $r_0$ is small enough,
\[c\Tr\Big[B_r\Big]\leq \Tr\Big[\beta_r\Big]+f(r),\]
for a certain $c>0$. With the $L^\infty$ bound and the lower bound on $\U$, we deduce that for a certain constant $C>0$:
\[\Hs(\B_r\cap J_\U)\leq C\left(f(r)+\Hs(\partial \B_r\cap \left\{ \U\neq 0\right\})\right).\]
Notice that $f'(r)=\Hs(\partial \B_r\cap \left\{ \U\neq 0\right\})$, so with the isoperimetric inequality
\[c_n f(r)^{1-\frac{1}{n}}\leq \Hs(\B_r\cap J_\U)+\Hs(\partial \B_r\cap \left\{ \U\neq 0\right\})\leq C(f(r)+f'(r)).\]
Since $f(r)\leq Cr^n\rightarrow 0$, we deduce that for a certain constant $C>0$ and any small enough $r$ ($r<r_0$) we have
\[f(r)^{1-\frac{1}{n}}\leq Cf'(r).\]
Suppose now that $f(r)>0$ for any $r>0$. Then by integrating the above estimate from $0$ to $r_0$, we obtain that for a certain constant $c>0$ and any $r\in [0,r_0]$
\[|\left\{ \U\neq 0\right\}\cap \B_{x,r}|\geq cr^n.\]
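The integration leading to this density estimate is elementary: as long as $f>0$,
\[\frac{\mathrm{d}}{\mathrm{d}r}f(r)^{\frac{1}{n}}=\frac{1}{n}f(r)^{\frac{1}{n}-1}f'(r)\geq \frac{1}{nC},\qquad\text{hence}\qquad f(r)^{\frac{1}{n}}\geq f(0)^{\frac{1}{n}}+\frac{r}{nC}\geq \frac{r}{nC},\]
so $f(r)\geq (nC)^{-n}r^n$ for every $r\in [0,r_0]$.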
Consider now a set of points $S\subset\Rn$ such that for any $x\in S$ and any $r>0$, $|\left\{\U\neq 0\right\}\cap\B_{x,r}|>0$, and such that for any distinct $x,y\in S$, $|x-y|\geq 2r_0$. Then
\[\F_\gamma(\U)\geq \gamma |\left\{ \U\neq 0\right\}|\geq \gamma\sum_{x\in S}|\left\{ \U\neq 0\right\}\cap \B_{x,r_0}|\geq c\gamma r_0^n \mathrm{Card}(S),\]
so $\mathrm{Card}(S)$ is bounded. By taking a maximal set of separated points $S$ as above, the balls $(\B_{x,2r_0})_{x\in S}$ cover $\left\{\U\neq 0\right\}$. This means in particular that the support of $\U$ is bounded by a constant depending only on the parameters, up to a translation of its connected components.
\end{itemize}
\end{proof}
\subsection{Existence of a relaxed minimizer with prescribed measure}
This section is dedicated to the proof of the following result.
\begin{proposition}\label{existence}
Let $m,\beta>0$, then there exists $\U\in \Uk$ that minimizes $\F$ in the admissible set $\Uk(m)$.
\end{proposition}
We begin with a lemma that will help us to show that any minimizing sequence of $\F$ in $\Uk(m)$ has concentration points, meaning points around which the measure of the support is bounded below by a positive constant.
\begin{lemma}\label{Conc}
Let $\U\in \Uk$ and, for $p\in\mathbb{Z}^n$, let $K_p:=p+[-\frac{1}{2},\frac{1}{2}]^n$. Then there exists $p\in \mathbb{Z}^n$ such that
\[|\{\U\neq 0\}\cap K_p|\geq \left(\frac{c_n\Vert \U\Vert_{L^2(\Rn)}^2}{\Vert \U\Vert_{L^2(\Rn)}^2+\int_{\Rn}|\nabla\U|^2\mathrm{d}\Ln+\int_{J_\U}\left(|\overline{\U}|^2+|\underline{\U}|^2\right)\mathrm{d}\Hs}\right)^n.\]
\end{lemma}
\begin{proof}
This is a consequence of the $BV(K_p)\hookrightarrow L^\frac{n}{n-1}(K_p)$ embedding, see \cite[lemma 12]{BGN21}.
\end{proof}
The following lemma makes a straightforward link between minimizers of $\F$ with fixed volume and interior minimizers of $\F_\gamma$ for a sufficiently small $\gamma$, which means that all the a priori estimates apply.
\begin{lemma}\label{PenalizationLemma}
Let $\U\in \Uk$ be a minimizer of $\F$ in the admissible set
\[\left\{ \V\in\Uk:\ \V\text{ is linearly independent and }|\{\V\neq 0\}|=m\right\}.\]
Then there exists $\gamma>0$ depending only on $(n,m,\beta,\F(u),F)$ such that $\U$ is a minimizer of $\F_\gamma$ in the admissible set
\[\left\{ \V\in\Uk:\ \V\text{ is linearly independent and }|\{\V\neq 0\}|\in ]0,m]\right\}.\]
\end{lemma}
\begin{proof}
Consider a linearly independent $\V\in\Uk$ such that $\delta:=\frac{|\{\V\neq 0\}|}{|\{\U\neq 0\}|}\in ]0,1]$, and let $\W(x):=\V(x\delta^{1/n})$. Then the support of $\W$ has the same measure as that of $\U$, so $\F(\U)\leq \F(\W)$. Examining how the matrices $A$ and $B$ scale under the change of variables $x\to x\delta^{-\frac{1}{n}}$, we obtain
\[A(\W)^{-\frac{1}{2}}B(\W)A(\W)^{-\frac{1}{2}}\leq \delta^{\frac{1}{n}}A(\V)^{-\frac{1}{2}}B(\V)A(\V)^{-\frac{1}{2}},\]
hence
\[\F(\U)\leq F\left(\delta^{\frac{1}{n}}\lambda_1(\V;\beta),\hdots,\delta^{\frac{1}{n}}\lambda_k(\V;\beta)\right).\]
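The scaling inequality above may be checked term by term; writing $D(\V)$ and $S(\V)$ for the Dirichlet and surface parts of $B(\V)$ (a notation introduced only for this computation), the change of variables $x\to x\delta^{-\frac{1}{n}}$ scales volumes by $\delta^{-1}$ and $(n-1)$-dimensional areas by $\delta^{-\frac{n-1}{n}}$, so that
\[A(\W)=\delta^{-1}A(\V),\qquad B(\W)=\delta^{\frac{2}{n}-1}D(\V)+\delta^{\frac{1}{n}-1}S(\V),\]
and since $D(\V),S(\V)\geq 0$ as symmetric matrices and $\delta^{\frac{2}{n}}\leq \delta^{\frac{1}{n}}$ for $\delta\in ]0,1]$,
\[A(\W)^{-\frac{1}{2}}B(\W)A(\W)^{-\frac{1}{2}}=\delta^{\frac{2}{n}}A(\V)^{-\frac{1}{2}}D(\V)A(\V)^{-\frac{1}{2}}+\delta^{\frac{1}{n}}A(\V)^{-\frac{1}{2}}S(\V)A(\V)^{-\frac{1}{2}}\leq \delta^{\frac{1}{n}}A(\V)^{-\frac{1}{2}}B(\V)A(\V)^{-\frac{1}{2}}.\]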
By the Faber-Krahn inequality for Robin eigenvalues, $\lambda_1(\V;\beta)\geq \lambda_1(\B^{|\{\V\neq 0\}|};\beta)$. Moreover, since $F$ diverges when its last coordinate does, we may suppose without loss of generality that $\lambda_k(\V;\beta)$ is bounded by a certain constant $\Lambda>0$ that does not depend on $\V$. This in turn means, by the Faber-Krahn inequality, that $|\{\V\neq 0\}|$ is bounded below by a positive constant depending only on $n,\beta,\Lambda$, so $\delta$ is bounded below. Then, denoting by $a$ the minimum of the partial derivatives of $F$ on $[\delta^{\frac{1}{n}}\lambda_1(\B^{m};\beta),\Lambda]^{k}$, we obtain
\[\F(\U)\leq \F(\V)-a(\lambda_1(\V;\beta)+\hdots+\lambda_k(\V;\beta))(1-\delta^{1/n})\leq \F(\V)-\frac{ka\lambda_1(\B^m;\beta)}{nm}(|\{\U\neq 0\}|-|\{\V\neq 0\}|).\]
This concludes the proof.
\end{proof}
We may now prove the main result of this section.
\begin{proof}
We proceed by induction on $k$. The main idea is that we either obtain the existence of a minimizer by taking the limit of a minimizing sequence, or we do not, in which case the minimizer is disconnected and is therefore the union of two minimizers of functionals depending on strictly fewer than $k$ eigenvalues.\\
The initialisation for $k=1$ amounts to showing that there is a minimizer of $\lambda_1(\U;\beta)$ in $\mathcal{U}_1(m)$: this has been done in \cite{BG15}, and the minimizer is known to be the first eigenfunction of a ball of measure $m$.\\
Suppose now that $k\geq 2$ and the result is true up to $k-1$. Consider $(\U^i)_i$ a minimizing sequence for $\F$ in $\Uk(m)$. Then the concentration lemma \ref{Conc} may be applied to each $\U^i$ to find a sequence $(p^i)_i$ in $\mathbb{Z}^n$ such that
\begin{equation}\label{EstVol}
\liminf_{i\to\infty}|K_{p^i}\cap \{\U^i\neq 0\}|>0.
\end{equation}
We lose no generality in supposing, up to a translation of each $\U^i$, that $p^i=0$. Now with the compactness lemma \ref{CompactnessLemma}, we know that up to extraction $\U^i$ converges in $L^2_\text{loc}$ to a certain function $\U\in \Uk$, with local lower semicontinuity of the Dirichlet-Robin energy.\\
We now split $\U^i$ into a ``local'' part and a ``distant'' part: we may find an increasing sequence $R^i\to\infty$ such that
\[\U^i 1_{\B_{R^i}}\underset{L^2(\Rn)}{\longrightarrow} \U.\]
Up to changing each $R^i$ with a certain $\tilde{R^i}\in [\frac{1}{2}R^i,R^i]$, we may suppose that
\[\int_{\partial \B_{R^i}\setminus J_{\U^i}}|\U^i|^2\mathrm{d}\Hs=\underset{i\to \infty}{o}(1),\]
so that for each $j\in \{1,\hdots,k\}$
\[\lambda_j\left((\U^i1_{\B_{R^i}},\U^i1_{\B_{R^i}^c});\beta\right)\leq\lambda_j(\U^i;\beta)+\underset{i\to \infty}{o}(1).\]
Since $A\left(\U1_{\B_{R^i}},\U1_{\B_{R^i}^c}\right)$ and $B\left(\U1_{\B_{R^i}},\U1_{\B_{R^i}^c}\right)$ are block diagonal (with two blocks of size $k\times k$), then up to extraction on $i$ there is a certain $p\in \{0,1,\hdots,k\}$ such that
\[\Big[\lambda_1(\U^i 1_{\B_{R^i}};\beta),\hdots,\lambda_p(\U^i 1_{\B_{R^i}};\beta),\lambda_1(\U^i 1_{\B_{R^i}^c};\beta),\hdots,\lambda_{k-p}(\U^i 1_{\B_{R^i}^c};\beta)\Big]^{\mathfrak{S}_k}\leq (\lambda_1(\U^i;\beta),\hdots,\lambda_k(\U^i;\beta))+\underset{i\to \infty}{o}(1),\]
where $\Big[a_1,\hdots,a_k\Big]^{\mathfrak{S}_k}$ designates the ordered list of the values $(a_1,\hdots,a_k)$. There are now three cases:
\begin{itemize}[label=\textbullet]
\item $p=0$: we claim this cannot occur. Indeed, this would mean that $\U^i1_{\B_{R^i}^c}$ is such that
\[\F(\U^i1_{\B_{R^i}^c})\underset{i\to\infty}{\longrightarrow}\inf_{\Uk(m)}\F.\]
However, because of \eqref{EstVol} we know there is a certain $\delta>0$ such that for all large enough $i$ the measure of the support of $\U^i1_{\B_{R^i}^c}$ is less than $m-\delta$. Letting $\V^i=\U^i1_{\B_{R^i}^c}\left(\Big[\frac{m-\delta}{m}\Big]^\frac{1}{n}\cdot\right)$, $\V^i$ is a linearly independent sequence of $\Uk$, with support of volume less than $m$, such that $\F(\V^i)<\inf_{\Uk(m)}\F$ for large enough $i$: this is a contradiction.
\item $p=k$. In this case $\U(=\lim_i \U^i1_{\B_{R^i}})$ is a minimizer of $\F$ with measure at most $m$. This is because, in addition to the fact that $\U^i1_{\B_{R^i}}$ converges to $\U$ in $L^2$, the lower semi-continuity result tells us that for each $z\in\R^k$:
\[z^*B(\U)z\leq \liminf_{i}z^*B(\U^i1_{\B_{R^i}})z,\]
thus for any $j=1,\hdots,k$, $\lambda_j(\U;\beta)\leq \liminf_{i}\lambda_j(\U^i1_{\B_{R^i}};\beta)$. Moreover, $|\{\U\neq 0\}|\leq \liminf_i |\{\U^i1_{\B_{R^i}}\neq 0\}|\leq m$.
\item $1\leq p\leq k-1$. This is where we will use the induction hypothesis. We let:
\begin{align*}
\lambda_j&=\lim_{i\to\infty}\lambda_j(\U^i1_{\B_{R^i}};\beta),\ \forall j=1,\hdots,p & m_\text{loc}=\lim_{i\to\infty}|\{\U^i1_{\B_{R^i}}\neq 0\}|,\\
\mu_j&=\lim_{i\to\infty}\lambda_j(\U^i1_{\B_{R^i}^c};\beta),\ \forall j=1,\hdots,k-p & m_\text{dist}=\lim_{i\to\infty}|\{\U^i1_{\B_{R^i}^c}\neq 0\}|.
\end{align*}
Then by continuity of $F$
\[\inf_{\Uk(m)}\F=F\left(\Big[\lambda_1,\hdots,\lambda_p,\mu_1,\hdots,\mu_{k-p}\Big]^{\mathfrak{S}_k}\right).\]
Let us introduce
\[\F_{\text{loc}}:\V\in \mathcal{U}_{p}(m_\text{loc})\mapsto F\left(\Big[\lambda_1(\V;\beta),\hdots,\lambda_p(\V;\beta),\mu_1,\hdots,\mu_{k-p}\Big]^{\mathfrak{S}_k}\right).\]
This functional verifies the hypothesis \eqref{HypF}, so by the induction hypothesis it has a minimizer $\V$. Moreover, according to the a priori bounds, $\V$ is known to have bounded support. Since $|\{\U^i1_{\B_{R^i}}\neq 0\}|\underset{i\to\infty}{\rightarrow}m_{\text{loc}}$, by the optimality of $\V$ we get
\[\F_\text{loc}(\V)\leq \liminf_{i\to\infty}\F_\text{loc}(\U^i1_{\B_{R^i}})=F\left(\Big[\lambda_1,\hdots,\lambda_p,\mu_1,\hdots,\mu_{k-p}\Big]^{\mathfrak{S}_k}\right)=\inf_{\Uk(m)}\F.\]
Now consider the functional
\begin{align*}
\F_{\text{dist}}:\W\in \mathcal{U}_{k-p}(m_\text{dist})&\mapsto F\left(\Big[\lambda_1(\V;\beta),\hdots,\lambda_{p}(\V;\beta),\lambda_1(\W;\beta),\hdots,\lambda_{k-p}(\W;\beta)\Big]^{\mathfrak{S}_k}\right).
\end{align*}
With the same arguments, there is a minimizer $\W$ with bounded support. By comparing $\W$ with $\U^i1_{\B_{R^i}^c}$ we obtain
\[\F_\text{dist}(\W)\leq\liminf_{i\to\infty}\F_\text{dist}(\U^i1_{\B_{R^i}^c})=\F_\text{loc}(\V)\left(\leq \inf_{\Uk(m)}\F\right).\]
Since both $\V$ and $\W$ have bounded support, we may suppose up to translation that their supports are a positive distance from each other. Consider $\U=\V\oplus\W$; then $\F(\U)=\F_{\text{dist}}(\W)$, so $\U$ is a minimizer of $\F$ in $\Uk(m)$.
\end{itemize}
\end{proof}
\subsection{Regularity of minimizers}
Here we show that the relaxed global minimizer $\U$ found in the previous section corresponds to the eigenfunctions of an open set. What this means is that there is an open set $\Om$ that contains almost all the support of $\U$, such that $\U_{|\Om}\in H^1(\Om)^k$ and $\lambda_1(\Om;\beta),\hdots,\lambda_k(\Om;\beta)$ as defined in \eqref{defopen} are attained at $u_{1|\Om},\hdots,u_{k|\Om}$ respectively (provided $\U$ is normalized). Moreover, we show that $\partial\Om$ is Ahlfors regular and $\Hs(\partial\Om)<\infty$.\\
The main step is to show that $J_\U$ is essentially closed, meaning $\Hs\left(\overline{J_\U}\setminus J_\U\right)=0$. This is obvious for functions $\U$ that are eigenfunctions of a smooth open set $\Om$, since then $J_\U=\partial\Om$; however, a general $\SBV$ function could have a dense jump set.\\
This is dealt with using methods similar to \cite{DCL89}, \cite{CK16}: we show that around every point $x\in \Rn$ at which $J_\U$ has sufficiently low $(n-1)$-dimensional density, the energy of $\U$ decays rapidly (this is lemma \ref{DecayLemma}). This is obtained by contradiction and blow-up methods, by considering a rescaling of a sequence of functions that do not verify this estimate.
As a consequence we obtain a uniform lower bound on the $(n-1)$-dimensional density of $J_\U$, which implies that it is essentially closed.
We point out that in similar problems (see \cite{BL14}), the essential closedness of the jump set is obtained using the monotonicity of $\frac{1}{r^{n-1}}\left(\int_{\B_r}|\nabla u|^2+\Hs(J_u\cap \B_r)\right)\wedge c+c'r^{\alpha}$ for some constants $c,c',\alpha>0$ (where $u$ is a scalar solution of a similar free discontinuity problem). However, our optimality condition (see \eqref{EstOpt} below) does not seem to be enough to establish a similar monotonicity property, due in particular to the remainder on the right-hand side and to the multiplicity of eigenvalues.
\begin{proposition}\label{regularity}
Let $\U$ be a relaxed minimizer of $\F_\gamma$. Then $\Hs\left(\overline{J_\U}\setminus J_\U\right)=0$ and $\Om:=\left\{ \overline{|\U|}>0\right\}\setminus \overline{J_\U}$ is an open set such that $(u_1,\hdots,u_k)$ are the first $k$ eigenfunctions of the Laplacian with Robin boundary conditions on $\Om$.
\end{proposition}
Since the proof is very similar to that of \cite{CK16}, we only sketch the parts of the proof that are specific to the vectorial character of our problem.
\begin{proof}
We first establish an optimality condition for perturbations of $\U$ on balls with small diameter. We suppose $\U$ is normalized and, using the same notations as in \eqref{Diff} with $M=B(\U)$, we denote
\begin{equation}\label{G0}
G_0\Big[N\Big]=F_0\left(\Lambda_1\Big[ N_{|I_1}\Big],\hdots,\Lambda_{k-i_{k}+1}\Big[ N_{|I_p}\Big]\right),
\end{equation}
such that:
\begin{equation}\label{Dev}
G\Big[B(\U)+N\Big]=G\Big[B(\U)\Big]+G_0\Big[N\Big]+\underset{N\to 0}{o}(N).\end{equation}
While $G_0$ is not linear (except in the particular case where $\frac{\partial F}{\partial\lambda_i}=\frac{\partial F}{\partial \lambda_j}$ for each $i,j$ such that $\lambda_i(\U;\beta)=\lambda_j(\U;\beta)$), it is positively homogeneous. We let
\[E_0\Big[N\Big]=\max\left(G_0\Big[N\Big],\Tr\Big[N\Big]\right).\]
$E_0$ is also positively homogeneous and verifies $E_0\Big[-S\Big]<0$ for any non-zero $S\in S_k^+(\R)$. We show that:
\begin{center}\textit{For any $\V\in\Uk$ that differs from $\U$ on a ball $\B_{x,r}$ where $r$ is small enough, we have}\end{center}
\begin{equation}\label{EstOpt}
E_0\Big[B(\V;\B_{x,r})-B(\U;\B_{x,r})\Big]\geq -\Lambda r^n -\delta(r)|B(\U;\B_{x,r})|,
\end{equation}
where $\Lambda>0$, $\delta(r)\underset{r\to 0}{\rightarrow}0$, and
\[B(\W;\B_{x,r})_{i,j}:=\int_{\B_{x,r}}\nabla w_i\cdot\nabla w_j \mathrm{d}\Ln+\beta\int_{J_\W}\left(\overline{w_i 1_{\B_{x,r}}}\cdot\overline{w_j 1_{\B_{x,r}}}+\underline{w_i 1_{\B_{x,r}}}\cdot\underline{w_j 1_{\B_{x,r}}}\right)\mathrm{d}\Hs.\]
To show \eqref{EstOpt}, we may suppose that $\Tr\Big[B(\V;\B_{x,r})\Big]\leq \Tr\Big[B(\U;\B_{x,r})\Big]$ (otherwise the estimate is automatically true) and that $\V$ is bounded in $L^\infty$ by the same bound as $\U$. The optimality of $\U$ gives
\[\F_\gamma(\U)\leq \F_\gamma(\V),\]
where the right-hand side is well defined for any small enough $r>0$ since $|A(\V)-I_k|\leq Cr^n$. This implies
\[G\Big[B(\U)\Big]\leq G\Big[(1+Cr^n)(B(\U)-B(\U;\B_{x,r})+B(\V;\B_{x,r}))\Big]+\gamma|\B_{x,r}|.\]
Thus, using the monotonicity of $G$ and the expansion \eqref{Dev}, we obtain the estimate \eqref{EstOpt}. Let us now show that this estimate, along with the a priori estimate
\begin{equation}\label{Estapriori}
\delta 1_{\left\{\U\neq 0\right\}}\leq |\U|\leq M,
\end{equation}
implies the closedness of $J_\U$, following arguments of \cite{CK16} that were originally developed in \cite{DCL89} for minimizers of the Mumford-Shah functional. The crucial argument is the following decay lemma.
\begin{lemma}\label{DecayLemma}
For any small enough $\tau \in ]0,1[$, there exists $\overline{r}=\overline{r}(\tau),\eps=\eps(\tau)>0$, such that for any $x\in \Rn$, $r\in ]0,\overline{r}]$, $\W\in \Uk$ verifying the a priori estimates \eqref{Estapriori} and the optimality condition \eqref{EstOpt}
\[\left(\Hs(J_\W\cap \B_{x,r})\leq \eps r^{n-1},\ \Tr\Big[B(\W;\B_{x,r})\Big]\geq r^{n-\frac{1}{2}}\right)\text{ implies }\Tr\Big[B(\W;\B_{x,\tau r})\Big]\leq \tau^{n-\frac{1}{2}}\Tr\Big[B(\W;\B_{x,r})\Big].\]
\end{lemma}
\begin{proof}
The proof is sketched following the same steps as \cite{CK16}. Consider a sequence of functions $\W^i\in \Uk$, sequences $r_i,\eps_i\to 0$, and a certain $\tau\in ]0,1[$ to be fixed later, such that:
\begin{align}
\Hs(J_{\W^i}\cap \B_{r_i})&=\eps_i r_i^{n-1},\\
\Tr\Big[B(\W^i;\B_{r_i})\Big]&\geq r_i^{n-\frac{1}{2}},\\ \label{Absurd}
\Tr\Big[B(\W^i;\B_{\tau r_i})\Big]&\geq \tau^{n-\frac{1}{2}}\Tr\Big[B(\W^i;\B_{r_i})\Big].
\end{align}
And let
\[\V^i(x)=\frac{\W^i\left(x/r_i\right)}{\sqrt{r_i^{2-n}\Tr\Big[B(\W^i;\B_{r_i})\Big]}}.\]
Then, since $\int_{\B_1}|\nabla\V^i|^2\mathrm{d}\Ln\leq 1$ and $\Hs(J_{\V^i}\cap\B_1)=\eps_i\to 0$, we know there exist sequences $\tau_i^-<m_i<\tau_i^+$ such that the function $\tilde{\V}^i:=\min(\max(\V^i,\tau_i^-),\tau_i^+)$ (where the $\min$ and $\max$ are taken componentwise) verifies:
\begin{align*}
\Vert \tilde{\V}^i-m_i\Vert_{L^\frac{2n}{n-2}(\B_1)}&\leq C_n\Vert \nabla\V^i\Vert_{L^2(\B_1)}&\left(\leq 1\right),\\
\Ln(\{\tilde{\V}^i\neq \V^i\})&\leq C_n\Hs(J_{\V^i}\cap \B_1)^\frac{n}{n-1}&\left(=C_n\eps_i^\frac{n}{n-1}\right).
\end{align*}
One may prove (using a $\mathrm{BV}$ bound and an $L^{\frac{2n}{n-2}}$ bound) that $\tilde{\V}^i-m_i$ converges in $L^2$, with lower semi-continuity of the Dirichlet energy, to some $\V\in H^1(\B_1)$. We claim that $\V$ is harmonic as a consequence of \eqref{EstOpt}: for this, consider a function $\boldsymbol{\varphi}\in H^1(\B_1)^k$ that coincides with $\V$ outside a ball $\B_\rho$ for some $\rho<1$. Let $\rho'\in ]\rho,1[$ and $\eta\in \mathcal{C}^\infty_{\text{compact}}(\B_{\rho'},[0,1])$ be such that $\eta=1$ on $\B_\rho$ and $|\nabla\eta|\leq 2(\rho'-\rho)^{-1}$. Then we define
\begin{align*}
\boldsymbol{\varphi}^i&= (m_i+\boldsymbol{\varphi})\eta+\tilde{\V}^i(1-\eta)1_{\B_{\rho'}}+\V^i 1_{\Rn\setminus\B_{\rho'}},\\
\boldsymbol{\Phi}^i(x)&=\sqrt{r_i^{2-n}\Tr\Big[B(\W^i;\B_{r_i})\Big]}\boldsymbol{\varphi}^i(r_ix).
\end{align*}
$\boldsymbol{\Phi}^i$ coincides with $\W^i$ outside of a ball of radius $\rho' r_i$, so it may be compared to $\W^i$ using the optimality condition \eqref{EstOpt}. With the same computations as in \cite{CK16} we obtain, as $\rho\nearrow \rho'$, that
\[E_0\Big[B(\boldsymbol{\varphi};\B_{\rho'})-B(\V;\B_{\rho'})\Big]\geq 0.\]
Taking $\boldsymbol{\varphi}$ to be the harmonic extension of $\V_{|\partial \B_\rho}$ in $\B_\rho$, we find that $B(\boldsymbol{\varphi};\B_{\rho'})\leq B(\V;\B_{\rho'})$ with equality if and only if $\V$ is equal to its harmonic extension. If it is not, then \[E_0\Big[B(\boldsymbol{\varphi};\B_{\rho'})-B(\V;\B_{\rho'})\Big]< 0,\]
which contradicts the optimality. This means that the components of $\V$ are harmonic. Since $\int_{\B_1}|\nabla \V|^2\mathrm{d}\Ln\leq 1$, we have $|\nabla \V|\leq \sqrt{1/|\B_{1/2}|}$ on $\B_{1/2}$, so for any $\tau<4^{-n}$ we find that $\int_{\B_\tau}|\nabla \V|^2\mathrm{d}\Ln<\tau^{n-\frac{1}{2}}$; this contradicts the condition \eqref{Absurd}.
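The interior gradient bound used in this last step is standard: since the components of $\V$ are harmonic, $|\nabla\V|^2$ is subharmonic, so by the mean value inequality, for $x\in\B_{1/2}$ (so that $\B_{1/2}(x)\subset\B_1$),
\[|\nabla \V(x)|^2\leq \frac{1}{|\B_{1/2}|}\int_{\B_{1/2}(x)}|\nabla\V|^2\mathrm{d}\Ln\leq \frac{2^n}{|\B_1|},\qquad\text{hence}\qquad \int_{\B_\tau}|\nabla\V|^2\mathrm{d}\Ln\leq (2\tau)^n,\]
and $(2\tau)^n<\tau^{n-\frac{1}{2}}$ precisely when $\tau<4^{-n}$.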
\end{proof}
The decay lemma implies the existence of $r_1,\eps_1>0$ such that for any $x\in J_\U$ and $r\in ]0,r_1[$:
\begin{equation}\label{Ahlfors}
\Hs(J_\U\cap \B_{x,r})\geq\eps_1 r^{n-1}.
\end{equation}
Suppose indeed that it is not the case for some $x\in J_{\U}$. Let $\tau_0\in ]0,1[$ be small enough to apply lemma \ref{DecayLemma}. Then for a small enough $\tau_1$,
\[\Tr\Big[B(\U;\B_{x,\tau_1 r})\Big]\leq \delta^2\eps(\tau_0) (\tau_1r)^{n-1}.\]
Indeed, either $\Tr\Big[B(\U;\B_{x, r})\Big]$ is less than $r^{n-\frac{1}{2}}$ and this is direct provided we take $r_1<\delta^4\eps(\tau_0)^2\tau_1^{2(n-1)}$, or it is not and then by application of the lemma (and using the fact that $\Tr\Big[B(\U;\B_{x,r})\Big]\leq C(\U)r^{n-1}$, which is obtained by comparing $\U$ with $\U1_{\Rn\setminus \B_{x,r}}$) we get
\[\Tr\Big[B(\U;\B_{x,\tau_1r})\Big]\leq C(\U)\tau_1^{n-\frac{1}{2}}r^{n-1}\leq \delta^2\eps(\tau_0)(\tau_1 r)^{n-1},\]
provided we choose $\tau_1\leq C(\U)^{-2}\delta^4 \eps(\tau_0)^2$ (and $\eps_1=\eps(\tau_1),r_1<\overline{r}(\tau_1)$ so that the lemma may be applied). Then we may show by induction that for all $k\in\mathbb{N}$,
\begin{equation}\label{Induction}
\Tr\Big[B(\U;\B_{x,\tau_0^{k}\tau_1r})\Big]\leq \delta^2\eps(\tau_0)\tau_0^{k(n-\frac{1}{2})}(\tau_1r)^{n-1}.\end{equation}
Indeed, \eqref{Induction} implies that $\Hs(J_{\U}\cap \B_{\tau_0^k\tau_1 r})\leq \eps(\tau_0)(\tau_0^k\tau_1r)^{n-1}$, so with the same dichotomy as above we may apply Lemma \ref{DecayLemma} again, which yields \eqref{Induction} by induction.\bigbreak
Overall this means that $\frac{1}{\rho^{n-1}}\left(\int_{\B_{x,\rho}}|\nabla\U|^2\mathrm{d}\Ln +\Hs(J_\U\cap\B_{x,\rho})\right)\underset{\rho\to 0}{\rightarrow}0$, which is not the case when $x\in J_\U$ (see \cite{DCL89}, Theorem 3.6), so \eqref{Ahlfors} holds. By definition it also holds for $x\in \overline{J_{\U}}$ with a smaller constant; however, according to \cite{DCL89}, Lemma 2.6, $\Hs$-almost every $x$ such that $\liminf_{r\to 0}\frac{\Hs(J_{\U}\cap\B_{x,r})}{r^{n-1}}>0$ is in $J_{\U}$, which ends the proof.
\end{proof}
As a consequence of the existence of a relaxed minimizer and the regularity of relaxed minimizers, we obtain Theorem \ref{main1}.
\begin{proof}
We know from Proposition \ref{existence} that there exists a relaxed minimizer $\U$ of $\F$ in $\Uk(m)$, and from Lemma \ref{PenalizationLemma} that $\U$ is an internal relaxed minimizer of $\F_\gamma$ for some $\gamma>0$ that only depends on the parameters. From Proposition \ref{apriori} we obtain that for certain constants $\delta,M,R>0$ depending only on the parameters, $\delta 1_{\{\U\neq 0\}}\leq |\U|\leq M$ and the diameter of the support of $\U$ (up to translation of its components) is less than $R$. From Proposition \ref{regularity} we know that $\Hs(\overline{J_\U}\setminus J_\U)=0$. Since $|\U|\geq \delta 1_{\{\U\neq 0\}}$, we obtain
\begin{align*}
\Hs(\overline{J_\U})&=\Hs(J_\U)\leq \delta^{-2}\int_{J_{\U}}\left(|\overline{\U}|^2+|\underline{\U}|^2\right)\mathrm{d}\Hs\\
&\leq \beta^{-1}\delta^{-2}\left(\lambda_1(\U;\beta)+\hdots+\lambda_k(\U;\beta)\right)\leq C(n,m,\beta,F).
\end{align*}
Let $\Om$ be the union of the connected components of $\R^n\setminus \overline{J_\U}$ on which $\U$ is not zero almost everywhere. By definition $\partial\Om=\overline{J_{\U}}$, and $\U$ is continuous on $\Rn\setminus J_\U$ and does not take the values $\pm\frac{\delta}{2}$, thus $|\U|\geq \delta$ on $\Om$. In particular, $\{\U\neq 0\}$ and $\Om$ differ by a $\Ln$-negligible set, and $J_\U\subset\partial\Om$, so $\U_{|\Om}\in H^1(\Om)^k$. This means that for every $i=1,\hdots,k$, $\lambda_i(\Om;\beta)\leq \lambda_i(\U;\beta)$, so $\Om$ is optimal for $\F$.\bigbreak
In the proof of Proposition \ref{regularity} we obtained the existence of constants $\eps_1,r_1>0$ such that for every $x\in\partial\Om(=\overline{J_\U})$ and every $r<r_1$, $\Hs(\B_{x,r}\cap\partial\Om)\geq \eps_1r^{n-1}$. By comparing $\U$ with $\U1_{\Rn\setminus \B_{x,r}}$ (similarly to what was done in the proof of Proposition \ref{apriori}), we obtain the upper bound $\Hs(\B_{x,r}\cap\partial\Om)\leq Cr^{n-1}$; this concludes the proof.
\end{proof}
\section{The functional $\Om\mapsto\lambda_k(\Om;\beta)$}
We are now interested in the specific functional
\[\Om\mapsto \lambda_k(\Om;\beta).\]
While it is not covered by the previous existence result, relaxed minimizers of this functional were shown to exist in \cite{BG19}. To understand its regularity, it might be tempting to consider a sequence of relaxed minimizers for the function $F(\lambda_1,\hdots,\lambda_k)=\lambda_k+\eps (\lambda_1+\hdots+\lambda_{k-1})$ with $\eps\to 0$; however, while the $L^\infty$ bound does not depend on $\eps$, the lower bound does, and it seems to degenerate to $0$ as $\eps$ goes to $0$.\\
This prevents us from obtaining any regularity for relaxed minimizers of this functional. We are, however, able to treat the specific case where the $k$-th eigenvalue is simple, and this analysis allows us to prove that this does not happen in general. In particular, we shall prove that $\lambda_k(\U;\beta)=\lambda_{k-1}(\U;\beta)$.\\
\subsection{Regularization and perturbation lemma}
We begin with a density result that allows us to suppose without loss of generality that $\U$ is bounded in $L^\infty$. This relies on the same procedure as \cite[Theorem 4.3]{BG19}.\\
We recall the notation for admissible functions used previously:
\[\Uk(m)=\left\{ \V\in\Uk:\ \V\text{ is linearly independent and }|\{\V\neq 0\}|=m\right\},\]
as well as the fact that if $\U$ is a relaxed minimizer of $\lambda_k(\cdot;\beta)$ in $\Uk(m)$ then according to Lemma \ref{PenalizationLemma} there is a constant $\gamma>0$ such that $\U$ is a minimizer of
\[\V\mapsto \lambda_k(\V;\beta)+\gamma|\{\V\neq 0\}|\]
among linearly independent functions $\V$ such that $|\{\V\neq 0\}|\in ]0,m]$.
\begin{lemma}\label{LemmaApriori}
Let $\U=(u_1,\hdots,u_k)$ be a relaxed minimizer of $\lambda_k(\cdot;\beta)$ in $\Uk(m)$. Suppose that $\lambda_k(\U;\beta)>\lambda_{k-1}(\U;\beta)$. Then there exists another minimizer $\V\in \Uk(m)$ that is linearly independent and normalized, such that $v_1\geq 0$, $\V\in L^\infty(\Rn)$, and
\[\lambda_{k-1}(\V;\beta)<\lambda_{k}(\V;\beta).\]
\end{lemma}
This justifies that in all the following propositions we may suppose that $\U\in L^\infty(\Rn)$ without loss of generality.
\begin{proof}
Without loss of generality suppose that $\U$ is normalized. Then according to \cite{BG19}, which itself relies on the Cortesani-Toader regularization (see \cite{CT99}), there exists a sequence of bounded polyhedral domains $(\Om^p)$ along with a sequence $\U^p\in \Uk\cap H^1(\Om^p)^k$ such that $\U^p\underset{p\to\infty}{\rightarrow}\U$ in $L^2$, and
\[\limsup_{p\to\infty}B(\U^p)\leq B(\U),\ \limsup_{p\to\infty}|\Om^p|\leq |\left\{\U\neq 0\right\}|.\]
Let $\V^p=(v_1^p,\hdots,v_k^p)$ be the first $k$ eigenfunctions of $\Om^p$ (with an arbitrary choice in case of multiplicity; notice that $v_1^p$ may be chosen positive); then $B(\V^p)\leq B(\U^p)$, and by Moser iteration $\V^p$ is bounded in $L^\infty$ by $C_{n,\beta,m}\lambda_k(\U;\beta)^\frac{n}{2}$ (which, in particular, does not depend on $p$). Using the compactness result \ref{CompactnessLemma}, we find that up to an extraction $\V^p$ converges in $L^2$ and almost everywhere to some $\V\in \Uk$, with lower semi-continuity of its Dirichlet--Robin energy; thus $\V$ is a minimizer that belongs to $L^\infty$, with $v_1\geq 0$. Moreover,
\[\lambda_{k-1}(\V;\beta)\leq \liminf_{p\to\infty}\lambda_{k-1}(\V^p;\beta)\leq \liminf_{p\to\infty}\lambda_{k-1}(\U^p;\beta)\leq \lambda_{k-1}(\U;\beta)<\lambda_{k}(\U;\beta)\leq \lambda_k(\V;\beta).\]
\end{proof}
\begin{lemma}\label{LemmaPerturb}
Let $\U=(u_1,\hdots,u_k)\in \Uk(m)\cap L^\infty(\Rn)$ be an internal relaxed minimizer of $\lambda_k(\cdot;\beta)$ in $\Uk(m)$, which we suppose to be normalized. Suppose that $\lambda_k(\U;\beta)=\lambda_{k-l+1}(\U;\beta)>\lambda_{k-l}(\U;\beta)$. Then there exist $\delta,\gamma>0$ such that, for all $\om\subset \mathbb{R}^n$ satisfying
\[|\om|+\Per(\om;\Rn\setminus J_\U)<\delta,\]
there exists $\alpha\in (\left\{ 0\right\}^{k-l}\times \mathbb{R}^l)\cap\mathbb{S}^{k-1}$ such that
\begin{equation}\label{EstPerturb}
\int_{\om}|\nabla u_\alpha|^2\mathrm{d}\Ln +\beta\int_{J_\U}\left(\overline{u_\alpha1_{\om}}^2+\underline{u_\alpha1_{\om}}^2\right)\mathrm{d}\Hs+\gamma|\om|\leq 2\beta \int_{\partial^*\om\setminus J_\U}u_\alpha^2\mathrm{d}\Hs+2\lambda_k(\U;\beta)\int_{\om}u_\alpha^2\mathrm{d}\Ln.\end{equation}
\end{lemma}
As may be seen in the proof, the factors $2$ on the right-hand side may be replaced by $1+\underset{\delta\to 0}{o}(1)$; however, this will not be useful for us.\\
This result will only be applied in the particular case $l=1$: when $l>1$ it gives only very weak information on the eigenspace of $\lambda_k(\U;\beta)$, and it would be interesting to see whether the regularity of one of the eigenfunctions might be deduced from it, as was done in \cite{BMPV15} (for the same problem with Dirichlet boundary conditions). In that case better estimates were obtained by perturbing the functional into $(1-\eps)\lambda_{k}+\eps\lambda_{k-1}$, considering a minimizer $\Om^\eps$ that contains the minimizer $\Om$ of $\lambda_k$, and separating the cases where $\lambda_k(\Om^\eps)$ is simple or not. However, these arguments crucially use the monotonicity and scaling properties of $\lambda_i$, which are not available for Robin boundary conditions.
\begin{proof}
Let us write $\V=\U 1_{\mathbb{R}^n\setminus\om}$, $A=A(\V)$, $B=B(\V)$, and for any $\alpha,\alpha'\in\R^k$,
\begin{align*}
A_{\alpha,\alpha'}&=\sum_{i,j=1}^k \alpha_i \alpha'_j A_{i,j},\\
B_{\alpha,\alpha'}&=\sum_{i,j=1}^k \alpha_i \alpha'_j B_{i,j}.
\end{align*}
We study the quantity
\[\lambda_k(\V;\beta)=\max_{\alpha\in\mathbb{S}^{k-1}}\frac{B_{\alpha,\alpha}}{A_{\alpha,\alpha}}.\]
Due to the $L^\infty$ bound on $\U$ and the fact that $|\om|+\Per(\om;\Rn\setminus J_\U)\leq \delta$:
\begin{align*}
\inf_{\alpha\in (\left\{0\right\}^{k-l}\times \R^{l})\cap\mathbb{S}^{k-1}}\frac{B_{\alpha,\alpha}}{A_{\alpha,\alpha}}&\underset{\delta\to 0}{\longrightarrow}\lambda_{k}(\U;\beta),\\
\sup_{\eta\in(\R^{k-l}\times \left\{0\right\}^{l})\cap\mathbb{S}^{k-1}}\frac{B_{\eta,\eta}}{A_{\eta,\eta}}&\underset{\delta\to 0}{\longrightarrow}\lambda_{k-l}(\U;\beta)\ (<\lambda_{k}(\U;\beta)).
\end{align*}
Thus for a small enough $\delta$ the maximum above is attained at a certain $\frac{\alpha+t\eta}{\sqrt{1+t^2}}$, where $\alpha\in (\left\{0\right\}^{k-l}\times \R^{l})\cap\mathbb{S}^{k-1}$, $\eta\in(\R^{k-l}\times \left\{0\right\}^{l})\cap\mathbb{S}^{k-1}$ and $t\in\R$. In what follows $\alpha$ and $\eta$ are fixed, so that
\[\lambda_k(\V;\beta)=\max_{t\in\mathbb{R}}\frac{B_{\alpha,\alpha}+2tB_{\alpha,\eta}+t^2B_{\eta,\eta}}{A_{\alpha,\alpha}+2tA_{\alpha,\eta}+t^2A_{\eta,\eta}}.\]
We let
\begin{align*}
b_{\alpha,\eta}&=\frac{B_{\alpha,\eta}}{B_{\alpha,\alpha}},\ &b_{\eta,\eta}=\frac{B_{\eta,\eta}}{B_{\alpha,\alpha}},\\
a_{\alpha,\eta}&=\frac{A_{\alpha,\eta}}{A_{\alpha,\alpha}},\ &a_{\eta,\eta}=\frac{A_{\eta,\eta}}{A_{\alpha,\alpha}},\\
\end{align*}
\[F(t)=\frac{1+2tb_{\alpha,\eta}+t^2b_{\eta,\eta}}{1+2ta_{\alpha,\eta}+t^2a_{\eta,\eta}}.\]
Then we may rewrite
\begin{equation}\label{eqlambdak}
\lambda_k(\V;\beta)=\frac{B_{\alpha,\alpha}}{A_{\alpha,\alpha}}\max_{t\in\mathbb{R}}F(t).\end{equation}
Moreover,
\[a_{\eta,\eta}\underset{\delta\to 0}{\longrightarrow}1,\ \limsup_{\delta\to 0}b_{\eta,\eta}\leq \frac{\lambda_{k-l}(\U;\beta)}{\lambda_{k}(\U;\beta)}<1.\]
We look for the critical points of $F$; $F'(t)$ has the same sign as
\[(a_{\alpha,\eta} b_{\eta,\eta}-a_{\eta,\eta} b_{\alpha,\eta})t^2-(a_{\eta,\eta}-b_{\eta,\eta})t+(b_{\alpha,\eta}-a_{\alpha,\eta}).\]
Since $F$ has the same limit at $\pm\infty$, this polynomial has two real roots, given by
\[t^{\pm}=\frac{a_{\eta,\eta}-b_{\eta,\eta}}{2(a_{\alpha,\eta}b_{\eta,\eta} - a_{\eta,\eta} b_{\alpha,\eta})}\left(1\pm\sqrt{1-4\frac{(b_{\alpha,\eta}-a_{\alpha,\eta})(a_{\alpha,\eta}b_{\eta,\eta} - a_{\eta,\eta} b_{\alpha,\eta})}{(a_{\eta,\eta}-b_{\eta,\eta})^2}}\right).\]
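To see where the bound on $t^-$ below comes from, one may expand the root for small $\delta$ (a heuristic side computation, in the notation above): as $\delta\to 0$ we have $a_{\alpha,\eta},b_{\alpha,\eta}\to 0$ while $a_{\eta,\eta}-b_{\eta,\eta}$ stays bounded away from $0$, so the quantity under the square root is close to $1$, and $1-\sqrt{1-x}=\frac{x}{2}+O(x^2)$ yields
\[t^{-}\simeq \frac{b_{\alpha,\eta}-a_{\alpha,\eta}}{a_{\eta,\eta}-b_{\eta,\eta}},\]
whose denominator is bounded below by a positive constant depending only on the gap between $\lambda_{k-l}(\U;\beta)$ and $\lambda_{k}(\U;\beta)$.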
Since $F'$ has the same sign as $(a_{\alpha,\eta} b_{\eta,\eta}-a_{\eta,\eta} b_{\alpha,\eta})$ at $\pm\infty$, we find that the maximum of $F$ is attained at $t^{-}$. For any small enough $\delta$ we obtain
\[|t^{-}|\leq C_1|a_{\alpha,\eta}-b_{\alpha,\eta}|,\]
where $C_1$ only depends on $\lambda_k(\U;\beta)$ and $\lambda_{k-l}(\U;\beta)$. We evaluate $F$ at $t^-$ to obtain, for small enough $\delta$,
\[F(t^-)\leq 1+C_2(A_{\alpha,\eta}^2+B_{\alpha,\eta}^2),\]
where $C_2$ is another such constant. With the Cauchy--Schwarz inequality we obtain
\begin{align*}
A_{\alpha,\eta}^2&=\underset{\delta\to 0}{o}\left(\int_{\om}u_\alpha^2\mathrm{d}\Ln\right),\\
B_{\alpha,\eta}^2&=\underset{\delta\to 0}{o}\left(\int_{\om}|\nabla u_\alpha|^2+\beta \int_{J_\U\cup \partial^*\om}\left(\underline{u_\alpha1_{\om}}^2+\overline{u_\alpha1_{\om}}^2\right)\mathrm{d}\Hs\right).
\end{align*}
Moreover,
\begin{align*}
B_{\alpha,\alpha}&=B(\U)_{\alpha,\alpha}-\int_{\om}|\nabla u_\alpha|^2\mathrm{d}\Ln-\beta\int_{J_\U}\left(\underline{u_\alpha1_{\om}}^2+\overline{u_\alpha1_{\om}}^2\right)\mathrm{d}\Hs+\beta\int_{\partial^*\om\setminus J_\U}u_\alpha^2 \mathrm{d}\Hs,\\
A_{\alpha,\alpha}&=1-\int_{\om}u_\alpha^2\mathrm{d}\Ln.\end{align*}
Thus for a small enough $\delta$, we obtain the following estimate from \eqref{eqlambdak}:
\begin{align*}
\left(1-\int_{\om}u_\alpha^2\mathrm{d}\Ln\right)\lambda_k(\V;\beta)&\leq B(\U)_{\alpha,\alpha}-(1-\underset{\delta\to 0}{o}(1))\left(\int_{\om}|\nabla u_\alpha|^2\mathrm{d}\Ln+\beta\int_{J_\U}\left(\underline{u_\alpha1_{\om}}^2+\overline{u_\alpha1_{\om}}^2\right)\mathrm{d}\Hs\right)\\
&+(1+\underset{\delta\to 0}{o}(1))\beta\int_{\partial^*\om\setminus J_\U}u_\alpha^2 \mathrm{d}\Hs+\underset{\delta\to 0}{o}\left(\int_{\om}u_\alpha^2\mathrm{d}\Ln\right).\end{align*}
The optimality condition on $\U$ ($\lambda_k(\U;\beta)+\gamma |\om|\leq \lambda_k(\V;\beta)$ for a certain $\gamma>0$ that does not depend on $\om$, obtained through Lemma \ref{PenalizationLemma}), coupled with the fact that $\lambda_k(\U;\beta)=B(\U)_{\alpha,\alpha}$, gives us the estimate \eqref{EstPerturb} for any small enough $\delta$.
\end{proof}
\subsection{Non-degeneracy lemma and the main result}
\begin{proposition}
Let $\U=(u_1,\hdots,u_k)\in \Uk\cap L^\infty(\Rn)$ be an internal relaxed minimizer of $\lambda_k(\U;\beta)+\gamma|\left\{ \U\neq 0\right\}|$. Suppose that $n\geq 3$ and that $\lambda_k(\U;\beta)>\lambda_{k-1}(\U;\beta)$. Then there exists $c>0$ such that $|u_k|\geq c 1_{\left\{ u_k\neq 0\right\}}$.
\end{proposition}
\begin{proof}
We actually prove that there exist $r,t>0$ such that for any $x\in\Rn$, $|u_k|\geq t 1_{\B_{x,r}\cap\left\{ u_k\neq 0\right\}}$, since this is sufficient to conclude. We suppose $x=0$ to simplify the notation. We cannot proceed as in the proof of Proposition \ref{apriori} because we do not know whether $\Per(\left\{|u_k|>t\right\};\Rn\setminus J_\U)$ is less than a constant $\delta$ or not. The idea is to compare $\U$ with $\U1_{\mathbb{R}^n\setminus \om_t}$, where
\[\om_t=\B_{r(t)}\cap\left\{ |u_k|\leq t\right\},\]
for $t>0$ and $r(t)>0$ chosen sufficiently small that $\Per(\om_t;\Rn\setminus J_\U)$ is sufficiently small.
\begin{lemma}\label{Lemma_Estimate}
Under these circumstances, there exists $t_1>0$ such that for all $t<t_1$,
\begin{equation}\label{Estimate}
\int_{\om_{t}}|\nabla u_k|^2\mathrm{d}\Ln +\beta\int_{J_\U}\left(\overline{u_k1_{\om_{t}}}^2+\underline{u_k1_{\om_{t}}}^2\right)\mathrm{d}\Hs+\frac{1}{2}\gamma|\om_{t}|\leq 2\beta \int_{\partial^*\om_{t}\setminus J_\U}u_k^2\mathrm{d}\Hs,\end{equation}
where $\om_t=\left\{|u_k|\leq t\right\}\cap \B_{r(t)}$ with $r(t):=\epsilon t^{\frac{2}{n-1}}$ for a small enough $\epsilon>0$.
\end{lemma}
\begin{proof}
As we said previously, this estimate will be obtained by comparing $\U$ and $\U1_{\Rn\setminus \om_t}$, where $\om_t=\B_{r(t)}\cap\left\{ |u_k|\leq t\right\}$. This is direct if we can apply Lemma \ref{LemmaPerturb}; we only need to verify the hypothesis
\[\Hs(\partial^*\om_t\setminus J_\U)< \delta.\]
We may suppose that
\[\beta t^2\Hs(\partial^*\left\{|u_k|\leq t\right\}\cap\B_{r(t)}\setminus J_\U)\leq \int_{\om_{t}}|\nabla u_k|^2\mathrm{d}\Ln +\beta\int_{J_\U}\left(\overline{u_k1_{\om_{t}}}^2+\underline{u_k1_{\om_{t}}}^2\right)\mathrm{d}\Hs+\gamma|\om_{t}|,\]
since if this inequality is false then we directly obtain the result. Then, comparing $\U$ with $\U1_{\Rn\setminus \B_{r(t)}}$ through Lemma \ref{LemmaPerturb} (which is allowed for any small enough $r>0$), we obtain the estimate
\begin{align*}
\int_{\B_{r(t)}}|\nabla u_k|^2\mathrm{d}\Ln +\beta\int_{J_\U}\left(\overline{u_k1_{\B_{r(t)}}}^2+\underline{u_k1_{\B_{r(t)}}}^2\right)\mathrm{d}\Hs+\frac{1}{2}\gamma|\B_{r(t)}|&\leq 2\beta \int_{\partial \B_{r(t)}\setminus J_\U}u_k^2\mathrm{d}\Hs\\
&\leq C(n,\beta,\Vert u_k\Vert_{L^\infty})r(t)^{n-1}.
\end{align*}
Combining the two previous inequalities,
\begin{align*}\Hs(\partial^*\left\{|u_k|\leq t\right\}\cap\B_{r(t)}\setminus J_\U)&\leq C(n,\beta,\Vert u_k\Vert_{L^\infty})\frac{r(t)^{n-1}}{t^2}\\
&=C(n,\beta,\Vert u_k\Vert_{L^\infty})\epsilon^{n-1}\\
&\leq \frac{1}{2}\delta\text{ for a small enough }\epsilon.\end{align*}
Thus Lemma \ref{LemmaPerturb} may be applied, which concludes the proof.
\end{proof}
We introduce the sets
\[\om^{\text{sup}}=\left\{ x:|u_k(x)|\geq |x/\eps|^{\frac{n-1}{2}}\right\},\ \om^{\text{inf}}=\left\{ x:|u_k(x)|\leq |x/\eps|^{\frac{n-1}{2}}\right\}\]
and the function
\[f(t)=\int_{\om_{t}}\left(|\nabla u_k|1_{\om^\text{sup}}+1_{\om^{\text{inf}}}\right)|u_k|\mathrm{d}\Ln.\]
From the coarea formula we get
\[f(t)=\int_0^t \left(\int_{\partial^*\left\{|u_k|\leq \tau\right\}\cap\B_{r(\tau)}\setminus J_\U}|u_k|\mathrm{d}\Hs\right)\mathrm{d}\tau+\int_{0}^{r(t)}\left(\int_{\partial\B_r\cap\left\{ |u_k|\leq (r/\eps)^{\frac{n-1}{2}}\right\}\setminus J_\U}|u_k|\mathrm{d}\Hs\right)\mathrm{d}r.\]
So $f$ is absolutely continuous and
\[f'(t)=\int_{\partial^*\{|u_k|\leq t\}\cap\B_{r(t)}\setminus J_\U}|u_k|\mathrm{d}\Hs+\frac{2\epsilon}{n-1}t^{-\frac{n-3}{n-1}}\int_{\partial\B_{r(t)}\cap \left\{ |u_k|\leq t\right\}\setminus J_\U}|u_k|\mathrm{d}\Hs.\]
We use here the fact that $n\geq 3$, so that for all small enough $t$ we get
\[\frac{1}{\epsilon}f'(t)\geq\int_{\partial^*\{|u_k|\leq t\}\cap\B_{r(t)}\setminus J_\U}|u_k|\mathrm{d}\Hs+\int_{\partial\B_{r(t)}\cap \left\{ |u_k|\leq t\right\}\setminus J_\U}|u_k|\mathrm{d}\Hs.\]
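The role of the assumption $n\geq 3$ in this last inequality can be made explicit; here is a short side computation (our gloss, using the choice $r(t)=\epsilon t^{\frac{2}{n-1}}$ made above):
\[r'(t)=\frac{2\epsilon}{n-1}\,t^{-\frac{n-3}{n-1}}.\]
When $n\geq 3$ the exponent $-\frac{n-3}{n-1}$ is nonpositive, so $t^{-\frac{n-3}{n-1}}\geq 1$ for $t\leq 1$, and hence $\frac{2}{n-1}t^{-\frac{n-3}{n-1}}\geq 1$ for all small enough $t$; when $n=2$ the exponent equals $1$, the coefficient vanishes as $t\to 0$, and the inequality above is lost.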
We now estimate $f$ in a similar manner as in the proof of Proposition \ref{apriori}.
\begin{align*}
c_n\left(\int_{\om_{t}}|u_k|^{2\frac{n}{n-1}}\mathrm{d}\Ln\right)^{\frac{n-1}{n}}&\leq D(|u_k|^21_{\om_{t}})(\mathbb{R}^n)\\
&=\int_{\om_{t}}2|u_k\nabla u_k|\mathrm{d}\Ln +\int_{J_\U}\left(\overline{u_k1_{\om_{t}}}^2+\underline{u_k1_{\om_{t}}}^2\right)\mathrm{d}\Hs\\
&+\int_{\partial^*\om_{t}\setminus J_\U}|u_k|^2\mathrm{d}\Hs\\
&\leq |\om_{t}|+\int_{\om_{t}}|\nabla u_k|^2\mathrm{d}\Ln +\int_{J_\U}\left(\overline{u_k1_{\om_{t}}}^2+\underline{u_k1_{\om_{t}}}^2\right)\mathrm{d}\Hs\\
&+\int_{\partial^*\om_{t}\setminus J_\U}|u_k|^2\mathrm{d}\Hs\\
&\leq C_{\beta,\gamma}\int_{\partial^*\om_{t}\setminus J_\U}|u_k|^2\mathrm{d}\Hs\\
&\leq \frac{C_{\beta,\gamma}}{\epsilon}tf'(t).
\end{align*}
We used Lemma \ref{Lemma_Estimate} in the penultimate line, which is only valid for small enough $t$. The hypothesis that $n\geq 3$ was used in the last line. Finally,
\begin{align*}
f(t)&=\int_{\om_{t}}\left(|\nabla u_k|1_{\om^\text{sup}}+1_{\om^{\text{inf}}}\right)|u_k|\mathrm{d}\Ln\\
&\leq |\om_{t}|^{\frac{1}{2n}}\left(\int_{\om_{t}}|\nabla u_k|^2\mathrm{d}\Ln\right)^{\frac{1}{2}}\left(\int_{\om_{t}}|u_k|^{2\frac{n}{n-1}}\mathrm{d}\Ln\right)^{\frac{n-1}{2n}}+\gamma|\om_{t}|^{\frac{n+1}{2n}}\left(\int_{\om_{t}}|u_k|^{2\frac{n}{n-1}}\mathrm{d}\Ln\right)^{\frac{n-1}{2n}}\\
&\leq C_{n,\beta,\gamma}\left(tf'(t)\right)^{\frac{2n+1}{2n}},
\end{align*}
which implies that $f(t)=0$ for a certain $t>0$. Let $r=\eps t^{\frac{2}{n-1}}$; we show that $|u_k|\geq t 1_{\B_{x,r}\cap\left\{ u_k\neq 0\right\}}$. From $f(t)=0$ we get that $u_k=0$ on $\B_r\cap \left\{ x:|u_k(x)|\leq |x/\eps|^{\frac{n-1}{2}}\right\}$. In particular, up to reducing slightly $r$ and $t$ we may suppose
\[\Hs(\partial\B_r\cap \left\{ |u_k|\leq t\right\})=0.\]
Moreover, $f(t)=0$ also gives that $\nabla u_k=0$ on $\B_r\cap\left\{ 0<|u_k|\leq t\right\}$. Consider $\U'=\U1_{\mathbb{R}^n\setminus \om}$ where $\om=\B_r\cap\left\{ |u_k|\leq t\right\}$. Note that $J_{\U'}\subset J_{\U}$, and for any small enough $t>0$,
\[\lambda_k(\U';\beta)\leq \lambda_k(\U;\beta)+2t^2|\om|-\frac{1}{2}\beta\int_{J_\U}\left(\overline{u_k1_{\om}}^2+\underline{u_k1_{\om}}^2\right)\mathrm{d}\Hs.\]
This contradicts the minimality of $\lambda_k(\U;\beta)+\gamma|\left\{ \U\neq 0\right\}|$ as soon as $|\om|>0$. This concludes the proof.\bigbreak
\end{proof}
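The passage from the differential inequality $f(t)\leq C_{n,\beta,\gamma}\left(tf'(t)\right)^{\frac{2n+1}{2n}}$ to the vanishing of $f$ is classical; let us sketch the standard argument (our reconstruction, with $c>0$ denoting a constant depending only on $n,\beta,\gamma$). The inequality may be rewritten as
\[f'(t)\geq c\,\frac{f(t)^{\frac{2n}{2n+1}}}{t},\qquad\text{i.e.}\qquad \frac{\mathrm{d}}{\mathrm{d}t}\Big(f(t)^{\frac{1}{2n+1}}\Big)\geq \frac{c}{(2n+1)t}\]
on any interval where $f>0$. If $f$ were positive on all of $]0,t_0]$, integrating between $s$ and $t_0$ would give $f(t_0)^{\frac{1}{2n+1}}\geq \frac{c}{2n+1}\log\frac{t_0}{s}\to\infty$ as $s\to 0$, a contradiction; hence $f$ vanishes on some interval $]0,t]$.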
Note that the proof fails when $n=2$; we need to choose $r(t)\ll t^{\frac{2}{n-1}}$ to ensure that the competitor $u1_{\Rn\setminus\om_t}$ yields information, but later we use $\inf_{t<1}r'(t)>0$ in a crucial way. When $n=2$ the inequalities are weakened to instead yield $f(t)\geq c t^5$, which is not enough to conclude.\bigbreak
We now deduce the second main result as a consequence.
\begin{proposition}
Suppose $n\geq 3$ and $k\geq 2$. Let $\U=(u_1,\hdots,u_k)$ be a relaxed minimizer of $\lambda_k(\cdot;\beta)$ in $\Uk(m)$. Then
\[\lambda_k(\U;\beta)=\lambda_{k-1}(\U;\beta).\]
\end{proposition}
\begin{proof}
Suppose that $\lambda_k(\U;\beta)>\lambda_{k-1}(\U;\beta)$. We may apply Lemma \ref{LemmaApriori} to assume without loss of generality that $u_1\geq 0$ and $\U\in L^\infty$, so that all the previous estimates apply.\\
Let $\Om$ be the support of $u_k$, with $\Om^+=\left\{u_k>0\right\}$ and $\Om^- =\left\{u_k<0\right\}$.\bigbreak
We first notice that $|\left\{\U\neq 0\right\}\setminus\Om|=0$. Suppose indeed that it is not the case, and let $\om=\left\{\U\neq 0\right\}\setminus\Om$. Since $|u_k|\geq \delta 1_{\Om}$, $\U$ may be written as a disconnected sum of two $\Uk$ functions
\[\U=(\U1_\Om)\oplus(\U1_\om).\]
We may translate $\Om$ and $\om$ so that they are at positive distance from each other. Then consider $t>1$ and $s=s(t)<1$ chosen such that
\[|t\Om|+|s\om|=|\Om|+|\om|,\]
and let $\U_t$ be the function built by dilation of $\U$ on $t\Om\cup s\om$. Then for $t=1+\epsilon$ with a small enough $\epsilon$ we have $\lambda_k(\U_t;\beta)<\lambda_k(\U;\beta)$ with a support of the same measure, which is absurd by minimality of $\U$. Thus $|\om|=0$.\bigbreak
Since $u_1$ is nonnegative, has support in $\Om$, and $\langle u_1,u_k\rangle_{L^2}=0$, this means that $|\Om^+|,|\Om^-|>0$. We may again decompose $\U$ into
\[\U=(\U1_{\Om^+})\oplus(\U1_{\Om^-}).\]
Consider $\V\in \Up(m)$, for some $p\in \{k,\hdots,2k\}$, an extraction of $(\U1_{\Om^+},\U1_{\Om^-})$ such that $\V$ is linearly independent and spans the same space in $L^2(\R^n)$. Then for each $i\in\{1,\hdots,k\}$,
\[\lambda_i(\V;\beta)\leq \lambda_i(\U;\beta),\]
with equality if $i=k$, by optimality of $\U$. Since $A(\V)$ and $B(\V)$ are block diagonal, we may suppose $\V$ is normalized so that its components have support in either $\Om^+$ or $\Om^-$: say $v_k$ is supported in $\Om^+$. This means that $\V=(v_1,\hdots,v_k)$ is a minimizer in $L^\infty$ such that $\lambda_k(\V;\beta)>\lambda_{k-1}(\V;\beta)$, and by the previous arguments we know that up to a negligible set $\{\V\neq 0\}\subset\{v_k\neq 0\}$, thus $|\Om^-|=0$: this is a contradiction.
\end{proof}
\subsection{Discussion about the properties of open minimizers}
Here we make a few observations on the properties of minimizing open sets, provided we know such sets exist.
\begin{proposition}
Let $\Om$ be an open minimizer of $\lambda_k(\Om;\beta)$ among open sets of measure $m$, for $k\geq 2$, with eigenfunctions $u_1,\hdots,u_k$. Suppose that $\lambda_{k-l}(\Om;\beta)<\lambda_{k-l+1}(\Om;\beta)=\lambda_k(\Om;\beta)$. Then
\[\bigcap_{i=k-l+1}^k \left(u_i^{-1}(\left\{0\right\})\cap\Om\right)=\emptyset.\]
In particular, for $k=3$ and $n=2$, $\Om$ is not simply connected.
\end{proposition}
\begin{proof}
By contradiction, consider $x\in \bigcap_{i=k-l+1}^k u_i^{-1}(\left\{0\right\})\cap\Om$, and let $\U_r=(u_1,\hdots,u_k)1_{\R^n\setminus \B_{x,r}}$. For a small enough $r$, $\U_r$ is admissible and, with the same estimate as in Lemma \ref{LemmaPerturb}, there is an ($L^2$-normalized) eigenfunction $u_\alpha$ associated to $\lambda_k(\Om;\beta)$ such that
\[\int_{\B_{x,r}}|\nabla u_\alpha|^2\mathrm{d}\Ln+\gamma|\B_{x,r}|\leq 2\beta \int_{\partial\B_{x,r}}u_\alpha^2\mathrm{d}\Hs.\]
This implies that for any small enough $r>0$,
\[ \fint_{\partial\B_{x,r}}u_\alpha^2\mathrm{d}\Hs\geq \frac{r\gamma}{2n\beta}.\]
However, if $x$ is in the intersection of all the nodal sets of eigenfunctions associated to $\lambda_k(\Om;\beta)$, then, since these eigenfunctions are $\mathcal{C}^1$, there is a constant $C>0$ such that for all $\alpha$, $|u_\alpha(y)|\leq C|x-y|$, thus
\[ \fint_{\partial\B_{x,r}}u_\alpha^2\mathrm{d}\Hs\leq C^2r^2,\]
which is a contradiction.\bigbreak
Let us now suppose that $n=2$, $k=3$, and that $\Om$ is simply connected. Since any eigenfunction associated to $\lambda_3(\Om;\beta)$ has a non-empty nodal set, we know that
\[\lambda_1(\Om;\beta)<\lambda_2(\Om;\beta)=\lambda_3(\Om;\beta).\]
Let $u_1,u,v$ be the associated eigenfunctions. Every non-trivial linear combination of $u$ and $v$ is an eigenfunction associated to $\lambda_2(\Om;\beta)$, so it has a non-empty nodal set and no more than two nodal domains; thus, by the simple connectedness of $\Om$, its nodal set is connected (either a circle or a curve) and the eigenfunction changes sign across it.\\
Let us parametrize the eigenspace with
\[w_t(x)=\cos(t)u(x)+\sin(t)v(x).\]
We show that the nodal sets $(\left\{w_t=0\right\})_{t\in \frac{\R}{\pi\mathbb{Z}}}$ form a partition of $\Om$ and that there is a continuous open function $T:\Om\to\frac{\R}{\pi\mathbb{Z}}$ such that $x\in \left\{w_{T(x)}=0\right\}$ for all $x\in\Om$. Indeed, the sets $(\left\{w_t=0\right\})_{t\in \frac{\R}{\pi\mathbb{Z}}}$ are disjoint because $u$ and $v$ have no common zeroes, and for any $x$ we may define
\[T(x)=-\text{arctan}\left(\frac{u(x)}{v(x)}\right),\]
where $\text{arctan}(\infty)=\frac{\pi}{2}\ [\pi]$ by convention. The function $T$ is continuous, $x\in \left\{w_{T(x)}=0\right\}$, and since eigenfunctions change sign across their nodal lines, $T$ is open. Since $\Om$ is simply connected, $T$ may be lifted into
\[\Om\underset{T'}{\longrightarrow}\mathbb{R}\underset{p}{\longrightarrow}\mathbb{R}/\pi\mathbb{Z}.\]
Let $I$ be the image of $T'$; since $T$ is open, so is $T'$, and $I$ is an open interval. If $T(x)=T(y)$, then $x$ and $y$ are on the same nodal line, and since nodal lines are connected we get $T'(x)=T'(y)$. In particular, if $t$ is in $I$, then $t\pm \pi\notin I$; this implies that $I=]a,b[$ where $a<b$ and $b-a\leq\pi$. However, every $w_t$ has a non-empty nodal set, so $\frac{\R}{\pi\mathbb{Z}}=T(\Om)=p(]a,b[)$: this is a contradiction, and thus $\Om$ is not simply connected.
\end{proof}
\end{document}
\begin{document}
\begin{frontmatter}
\title{A Random Walk with Drift: Interview with~Peter~J.~Bickel}
\runtitle{Interview with Peter J. Bickel}
\begin{aug}
\author{\fnms{Ya'acov} \snm{Ritov}\corref{}\ead[label=e1]{[email protected]}}
\runauthor{Y. Ritov}
\address{Ya'acov Ritov is Professor, Department of Statistics, The Hebrew University of Jerusalem, Mount Scopus, Jerusalem,
91905, Israel
\printead{e1}.}
\end{aug}
\end{frontmatter}
I met Peter J. Bickel for the first time in 1981. He came to Jerusalem
for a year; I had just started working on my Ph.D. studies. Yossi Yahav,
who was my advisor at this time, busy as the Dean of Social Sciences,
brought us together. Peter became my chief thesis advisor. A year and a
half later I came to Berkeley as a post-doc. Since then we have
continued to work together. Peter was first my advisor, then a teacher,
and now he is also a friend. It is appropriate that this interview took
place in two cities. We spoke together first in Jerusalem, at Mishkenot
Shaananim and the Center for Research of Rationality, and then at the
University of California at Berkeley. These conversations were not
formal interviews, but just questions that prompted Peter to tell his
story.
The interview is the intellectual story of a post-war Berkeley
statistician who certainly is one of the leaders of the third generation
of mathematical statisticians, a~generation which is still fruitful
today.
The conversation was soft spoken, a stream of memories, ordered more by
association than by chro\-nology. In fact, I led Peter to tell his story
in a reverse direction, starting from the pure science and ending with
the personal background. So, please sit back, and imagine you are part
of the chat.
\textit{Peter, if you try to summarize the many stages of your career,
how do you characterize the different periods?}
A random walk with drift. Shall I start with the very beginning?
\textit{No, for now can you tell us about your academic career?}
I would say that the first period was almost exclusively theoretical
work, but actually, I think, almost from the beginning, driven as much
by people as by a focus on the subject.
I did my thesis with Erich Lehmann. From the thesis, I published two
papers on multivariate analogues of Hotelling's $T^{2}$ (Bickel, \citeyear{1964};
Bickel, \citeyear{1965a}), really not knowing much about multivariate analysis at
all, learning asymptotics as I went along. After the thesis, partly
talking with Peter Huber, and partly talking with Erich Lehmann, I did
some interesting things on questions of robust estimation. I had a paper
on trimmed means and how they compare to the mean and median (Bickel,
\citeyear{1965b}); again, the results were in the spirit of Hodges and Lehmann.
\begin{figure}
\caption{David Blackwell, Peter Bickel and Erich Lehmann at a party
celebrating Peter's election to the National Academy of Sciences.}
\end{figure}
The next stage happened by a curious accident due to Govindarajulu. He
asked me if I had ever thought about investigating linear combinations
of order statistics. I said, no, but I had some ideas, having learned
about weak convergence of stochastic processes. Initially, he said he
wanted to work with me, but, at the same time, he was talking with
people in Stanford; and then he carried the problem to Le Cam. The
result was, finally, that the problem was attacked with three different
approaches. One was the approach of the Stanford group, growing from the
work Herman Chernoff did on rank statistics, mine, using weak
convergence of the quantile process (Bickel, \citeyear{1967Bickel}), and Le Cam's, which
used the H\'{a}jek projection technique. We all got
results.
Within this work, I was very pleased about the result that the
covariance of two order statistics is non-negative, which Richard Savage
claimed was a~long unsolved problem. Then it turned out that it was an
inequality in Hardy, Littlewood and Polya. This work also led to work in
multivariate goodness of fit tests and other problems to which I applied
the new notions of weak convergence of processes.
I went through my Ph.D. studies very quickly, which left me unfamiliar
with many parts of statistics. I tried to fill the gaps later on. After
my thesis Yossi Yahav and I started talking about his notion of
asymptotic pointwise sequential analysis purely from a theoretical point
of view. He had solved some special cases, and I realized that there was
a general pattern we could use. We have two or three papers in that
direction (Bickel and Yahav, \citeyear{1967BickelYahav}; Bickel and Yahav,
\citeyear{1968}; Bickel and
Yahav, \citeyear{1969a}; Bickel and Yahav, \citeyear{1969b}).
Erich Lehmann characterizes people as problem solvers, like Joe Hodges,
and system builders, like Erich himself. I fall somewhere in between,
but I am primarily a problem solver.
\begin{figure}
\caption{From left to right: Peter Bickel, Juliet Shaffer, Erich
Lehmann, Kjell Doksum and David Freedman, celebrating the 65th birthday
of Erich Lehmann.}
\end{figure}
An area that I started to work on in the seventies with van Zwet and
G\"{o}tze was second order asymptotics (Bickel, G\"{o}tze and van Zwet,
\citeyear{1985}, \citeyear{1986}). It was prompted by Hodges and Lehmann's paper on
deficiency. I got an idea on how to prove things for one sample rank
tests. Van Zwet, independently, got further, but was stumped by two
sample tests. We talked and, using a method of R\'enyi, we got a complete
answer for rank test statistics and permutation test statistics (Albers,
Bickel and van Zwet, \citeyear{1976}). Later, I was asked to give what is now known
as an IMS Medallion Lecture, for which I had to give a~topic. I proposed
Edgeworth expansions and nonparametric statistics, some of which I knew
how to do, at least formally. But then I had to have a~real example, so
I chose $U$-statistics. Something like a~month before the lecture I~realized
that there was a substantial difficulty with my arguments.
Luckily, just in time, I found an idea which worked. Not quite the right
idea, but it did give the first Berry--Esseen bound for $U$-statistics
(Bickel, \citeyear{1974}). Subsequently, Jon Wellner in Seattle got it right.
I took part in the Princeton Robustness year in 1971 (Andrews et al.,
\citeyear{1972}). Unfortunately, I didn't understand Tukey most of the time. He had
his own language that one had to follow. But there were other
interesting people I could talk with easily. Peter Huber was there,
David Andrews, and Frank Hampel. One of the things that came up was the
issue of adaptation. Peter Huber and I agreed that it was symmetry that
was doing the trick. Then I decided to think more about this question.
But we'll return to that in a moment.
Then, surprisingly, I moved into something genuinely applied.
\begin{figure}
\caption{Willem van Zwet, Jon Wellner and Jianqing Fan. Bickelfest,
Princeton 2005.}
\end{figure}
Somehow, through Joe Hodges, I got interested in finding out more about
the university, so I joined something called the Graduate Council.
Eugene Hammel, the Associate Dean of Graduate Studies, presi\-ded over the
Council. One day, Hammel told me he had a very strange problem: he had
analyzed data on graduate admissions, because he was worried that the
government would cut funding, on the grounds that Berkeley was biased.
Indeed, he found strong evidence of gender bias. So he looked to find
the departments or units where the bias was, since decisions were made
on the department or unit level; he couldn't find them. I told him that
there is no contradiction and I gave him an example of what I later
learned was Simpson's paradox. Eventually, we made several tests for
conditional independence by units, one of which we later found out was
equivalent to the Mantel--Haenszel test. It was an enjoyable paper; it
appeared in \textit{Science} (Bickel, Hammel and O'Connell, \citeyear{1975}).
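The apparent contradiction can be reproduced with a tiny synthetic example (the numbers below are illustrative, not the actual Berkeley data): each department admits men and women at identical rates, yet the pooled rates differ sharply because women applied disproportionately to the more selective department.

```python
# Illustrative Simpson's-paradox numbers (not the actual Berkeley data):
# two departments with identical admission rates within each department.
# Department A is easy (80% admitted), Department B is hard (20%).
applicants = {
    # department: {sex: (applied, admitted)}
    "A": {"men": (800, 640), "women": (100, 80)},   # both 80%
    "B": {"men": (100, 20),  "women": (800, 160)},  # both 20%
}

def rate(sex):
    applied = sum(applicants[d][sex][0] for d in applicants)
    admitted = sum(applicants[d][sex][1] for d in applicants)
    return admitted / applied

# Within each department the rates are equal, yet in aggregate:
print(f"men:   {rate('men'):.0%}")    # 660/900, about 73%
print(f"women: {rate('women'):.0%}")  # 240/900, about 27%
```

The aggregate "bias" appears only because department is a confounder, which is why the tests for conditional independence had to be done unit by unit.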
I had some other excursions into applications. I was recommended by
Betty Scott to the National Research Council and served on two
committees which studied two insurance problems. The first problem was
how to implement a mud slide insurance program. The government already
ran a flood insurance program in which it subsidized insurance companies
to give insurance to flood areas, provided the communities would agree
that people would not build on the flood plains. When Southern
California had a period of extreme rain, there were mud slides all over.
Consequently, the representatives of the districts with mud slides got a
law passed in Congress requiring the government to construct a~mud slide
insurance program. But nobody knew how to do~that.
\begin{figure}
\caption{Katerina Kechris, Haiyan Huang, Friedrich Goetze and Peter
Bickel. Bickelfest, Princeton 2000.}
\end{figure}
So they convened the mud slide panel. It turned out that the main
problem was the nature of the data. They had very extensive aerial
photographs of the extent of mud slides, but nobody knew whether they
had happened last year or a thousand years ago, so there was no way to
set premiums. Basically, the panel knew what had to be done. They
proposed that teams of engineers be collected to look at the candidate
areas. The engineers would scratch their heads, and they would come up
with an insurance rate, given the information available. I couldn't see
what else could be done. However, I suggested, ``Why don't you have
different groups of engineers rate the same area and see if you can test
for consistency?'' Nobody else on the panel agreed.
The funny thing is that a year or two later I was put on another panel
dealing with the same issue. This time it was about flood insurance. The
problem was that the Federal Insurance Administration contracted out to
different government agencies to assess flood risk. The US Geological
Survey and the Army Corps of Engineers were assessing adjacent areas.
The border was in the middle of a flood plain, but they came up with
different rates for the two halves of the flood plain! Anyway that was
fun. I enjoyed it. But I never did any serious data analysis, in part
because I didn't trust myself to be sufficiently observant.
\begin{figure}
\caption{Peter Bickel and two recent students, Ben Brown (left) and
Choongsoon Bae.}
\end{figure}
Another line of research that had an impact on my later interests was a
paper on the maximum deviation of kernel density estimates that I worked
on with Murray Rosenblatt (Bickel and Rosenblatt, \citeyear{1973}). Murray came to
my office back in the 70s and asked whether empirical process theory
with weak convergence would work on extrema. I started to think about
this and realized that it couldn't work that way, because the limit is
white noise. Eventually I saw a way that you could attack the problem
with Skorohod embedding. While working with Rosenblatt and van Zwet, I
read an old paper of Hodges and Lehmann where they looked at minimaxity
subject to restrictions and did some explicit calculations for
the binomial. At some point I learned an identity that Larry Brown
pointed to: you can relate the Bayes risk in the Gaussian shift model to
the Fisher information of the marginal distribution. This led to papers
on semiparametric robustness (Bickel, \citeyear{1984}) and on estimation of a
normal mean (Bickel, \citeyear{1983}), which had a surprising follow-up in the work
of Donoho and Johnstone.
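The identity in question is presumably what is now called Brown's identity: in its simplest form, if $X=\theta+Z$ with $Z\sim N(0,1)$, $\theta\sim\pi$, and $f$ denotes the marginal density of $X$, then the Bayes risk under squared error loss satisfies
\[
\inf_{\hat{\theta}}E\bigl(\hat{\theta}(X)-\theta\bigr)^{2}=1-I(f),\qquad
I(f)=\int\frac{f'(x)^{2}}{f(x)}\,dx,
\]
which follows from Tweedie's formula $E[\theta\mid X]=X+f'(X)/f(X)$ together with Stein's identity.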
At about the same time, I looked at the question of adaptation again. A
paper on that subject (Bickel, \citeyear{1982}), as well as the preceding work on
asymptotic restricted minimax, was developed during a Miller
professorship, and that work was given in my Wald lectures. Then I came
to Israel on sabbatical and I had the good fortune of having you as a
student working on Bayesian robustness.
The next stage started when I gave lectures at Johns Hopkins on
semiparametric models, and I began to put things in context and realized
the connections with Jon Wellner's and Pfanzagl's work and robustness
and so on. Then you, Jon, Chris Klaassen and I started working on our
joint book (Bickel et al., \citeyear{19931998}) and, in the process, solved,
separately or jointly, the problems of censored regression and
errors-in-variables (Bickel and Ritov, \citeyear{1987}). It was great fun working
on the book.
A significant excursion was some work with Leo Breiman. Leo was visiting
Berkeley, considering whe\-ther to become a faculty member. He spoke about
a multivariate goodness of fit test he devised using nearest neighbors
in high dimension. Its asymptotic limiting behavior was harder to
understand than either of us thought initially, but eventually we worked
it out (Bickel and Breiman, \citeyear{1983BickelBreiman}). The work appeared in the
\textit{Annals of Probability} because the \textit{Annals of Statistics}
was then run by David Hinkley, whose views of what constituted
statistics were quite different from mine.
\begin{figure}
\caption{Peter Buehlmann, John Rice and Niklaus Hengartner. Bickelfest,
Princeton, 2005.}
\end{figure}
Another important excursion was into the theory of the
bootstrap, on which I worked early on with Freedman, and later in the
90s with G\"{o}tze and van Zwet (Bickel, G\"{o}tze and van Zwet, \citeyear{1995Bickel}).
Brad Efron introduced the bootstrap, following his profound insight into
the impact of computing on statistics. It took me a while to realize
that the bootstrap could be viewed as Monte Carlo implementation of
nonparametric maximum likelihood. So these various things intertwined. I~eventually became interested in the interplay between high dimensional
data and computing and the tradeoff between computing and efficiency.
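The remark that the bootstrap is a Monte Carlo implementation of nonparametric maximum likelihood can be made concrete: the empirical distribution $F_n$ (equal mass $1/n$ on each observation) is the nonparametric MLE of $F$, and Efron's bootstrap simply draws i.i.d. samples from $F_n$. A minimal sketch (the data and statistic here are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=1.0, scale=2.0, size=200)   # observed sample

def bootstrap_se(data, stat, B=2000, rng=rng):
    """Standard error of stat by resampling i.i.d. from the empirical
    distribution F_n, i.e., sampling indices with replacement."""
    n = len(data)
    reps = np.array([stat(data[rng.integers(0, n, size=n)]) for _ in range(B)])
    return reps.std(ddof=1)

se_mean = bootstrap_se(x, np.mean)
# For the sample mean, this should agree with the plug-in formula s / sqrt(n).
print(se_mean, x.std(ddof=1) / np.sqrt(len(x)))
```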
From the 90s to the present, you and I moved from semiparametrics to
nonparametrics, for example, nonparametric testing, the LASSO and that
kind of thing. I think it is fair to say that this work was promoted in
part by our participation in an unclassified National Security
Agency program and in part by conversations with Leo Breiman. As
you know, Leo and I got quite close. I~learned a lot from him and I
became more sharply aware that high dimensional data and computing had
led to a paradigm change in statistics.
During our collaboration, I have tried to keep up with you. Working with
graduate students---especial\-ly you---is a key part of my life. Ideas
come to me when I talk. I have had lots of students, many very good, a
few outstanding; all of them were important to me. Just as important
have been senior collaborators, including a number of former students
and colleagues at Berkeley, Chicago, Stanford, Seattle, Michigan,
Harvard, Zurich, Leiden, Bielefeld and Israel.
\begin{figure}
\caption{Peter Bickel, Quang Pham, Kang James, Yossi Yahav, Berry
James and Nancy Bickel. Bickelfest, Princeton 2005.}
\end{figure}
As I became older, I finally became bolder in starting to think
seriously about the interaction between theory and applications, a
direction initiated in part by working with my student and, later,
collaborator, Liza Levina. Nancy, my wife, thinks, and I think she is
right, that getting the MacArthur Fellowship made a change. I never was
sure of myself and the MacArthur helped. I became more self-confident.
When my student, Niklaus Hengartner, was working on a specific applied
problem, we realized that the theoretical semiparametric ideas really
helped (Hengartner et al., \citeyear{1995Hengartner}). Then, the work
with John Rice, you, and the Engineering and Computer Science people on
transportation problems played an important role. I think my
collaboration with John Rice has been very fruitful. He is a wonderful
data analyst, has very interesting ideas on the questions that can be
asked and is also very knowledgeable about different techniques. I
really like the recent work with John and Nicolai Meinshausen, which is
coming out in the \textit{Annals of Applied Statistics} (Meinshausen,
Bickel and Rice, \citeyear{2009}). As far as I know, it is the first paper which
makes it absolutely clear that a main issue is the tradeoff between the
efficiency of a procedure and the amount of computer time required to
implement it successfully.
\begin{figure}
\caption{David Donoho, Iain Johnstone and Peter Bickel, celebrating
Iain's election to the National Academy.}
\end{figure}
Then there is biology. I was always interested in biology. I met this
wonderful guy, Alexander Glazer, who was then the chair of Molecular
Biology in Ber\-keley. At some point, because I was thinking about
exploring biology, we talked after a lecture. He said he was unhappy
about critiques by phylogeneticists of recent work on some proteins he
had long studied. When he gave a talk at Stanford, these critics claimed
that, using statistical methods, they had obtained a phylogenetic tree
that contradicted his views. I had some doubts about statistical methods
in this context, said so, and we started to talk. He was just retiring
and closing his lab, becoming a high level administrator, but he wanted
to keep his hand in.
I had a very good student, Katerina Kechris, who was just starting. I
got her involved in his program. I told her it was risky, but we had a
chance to work with a real biologist. It worked out so well that our
first paper actually became Alex's inaugural paper in the \textit{Proceedings of
the National Academy of Sciences} when he was elected to the Academy
(Kech\-ris et al., \citeyear{2006}). This work taught me something
about the limitations of fundamental experimental information. We did
some statistical analyses and introduced some funny methods for finding
critically important sites in proteins. Since the crystallographic
structure of the proteins we were studying was known, we looked to see
if the sites we identified statistically were visibly critical to the
structure. Not a chance! So in the end we made our case with statistics,
by saying that the percentage of time that mutations in our sites have
serious consequences is larger than expected by chance, that they are closer
to some critical structures than if they were randomly selected, and so on.
But we never were able to say that if you change the amino acid in one
of the positions we identified, everything collapses.
\begin{figure}
\caption{Peter Bickel and Ya'acov Ritov, a day before the interview took place.
In the background, the Mount of Olives, Jerusalem.}
\end{figure}
Then I put together a proposal with Katerina Kech\-ris and a young
colleague, Haiyan Huang, for a special program of the National Science
Foundation, funded largely by the National Institute of General Medical
Sciences, for problems on the borders of biology and mathematics. The
proposal had two parts. One was on a key question of Alex Glazer about
lateral gene transfer between different species of bacteria. A second
part was on the functional importance of genomic sites conserved between
very distant species. We proposed to use a data set from Eddy Rubin's
lab. The reviewers liked the conserved sites part of the proposal, but
hated the lateral gene transfer part, so our overall score was
borderline. Shula Gross was then an NSF program director. She persuaded
the NIGMS people to fund the proposal. Katerina, Haiyan, a student, Na
Xu, and I started working on the conserved sites problem---with very
modest success. Because we got the funding, I~was able to attract a
student, Ben Brown, from an engineering program and put him to work on
the genomics questions. He is both passionate and scholarly about the
biology and has mastered a great range of computing techniques on his
own. Our results on the Rubin data were still not very satisfying.
Fortunately, we were led to change our focus by connecting with a group
at the National Human Genome Research Institute.
Because I have grandchildren in Washington, I decided to find a suitable
academic base for visiting in DC. I looked for a place to work at the
National Human Genome Research Institute. I didn't know anybody there,
but I was on a committee with biochemist Maynard Olson, who had been a
post doc with Alex Glazer. Olson's own post doc, Eric Green, directed a
lab group at NHGRI. There, I got involved in the ENCODE (Encyclopedia of
DNA Elements) project. Little happened during my first visit, but the next year I
brought Ben. That made a big difference. Ben was able to talk with the
biologists in their language and carry through computations.
I had not expected important statistical methods to come out of the
project. My main motivation was learning more about the biology. But, it
turned out that we needed to develop a nonparametric model for the
genome, which might turn out to be interesting and important. This model
is the most nonparametric model you could think of for the genome. In
addition, the method of inference that we developed, a modified block
bootstrap, gives a check on whether what you are doing is reasonable. On
the scales that ENCODE was studying, our theory requires that the
statistics on which our methods are based should have approximately
Gaussian distribution. By plotting the bootstrap distributions of these
statistics, we can see if this assumption is roughly valid. Moreover,
the model is robust in the sense that, even if the approximation is
poor, $p$-values for tests of association between features are
conservative. This work turned out to be very nice theoretically and the
biologists seem to like it a lot. I am now somewhat confident that this
framework may lead to major contributions.
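In its simplest (moving-block) form, a block bootstrap resamples contiguous blocks of the sequence so that short-range dependence is preserved within each block; the genome-specific modifications developed for ENCODE are not reproduced in this illustrative sketch.

```python
import numpy as np

def moving_block_bootstrap(x, block_len, rng):
    """Resample a dependent sequence by concatenating randomly chosen
    contiguous blocks, preserving dependence within each block."""
    n = len(x)
    n_blocks = int(np.ceil(n / block_len))
    starts = rng.integers(0, n - block_len + 1, size=n_blocks)
    out = np.concatenate([x[s:s + block_len] for s in starts])
    return out[:n]

rng = np.random.default_rng(1)
# An AR(1) sequence as a stand-in for spatially correlated genomic signal.
n = 1000
eps = rng.normal(size=n)
x = np.empty(n)
x[0] = eps[0]
for t in range(1, n):
    x[t] = 0.6 * x[t - 1] + eps[t]

reps = [moving_block_bootstrap(x, 50, rng).mean() for _ in range(500)]
block_se = np.std(reps)                      # accounts for autocorrelation
naive_se = x.std(ddof=1) / np.sqrt(n)        # i.i.d. formula, too small here
```

The comparison at the end shows why the block structure matters: the i.i.d. standard error badly understates the variability of the mean under positive autocorrelation.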
The other direction I've been following is connected with the location
of my second set of grandchildren in Boulder, Colorado. I've been
visiting at the National Center for Atmospheric Research in Boulder,
working with the statistics group headed by Doug Nychka. A basic goal at
NCAR is to do relatively short term weather prediction based on computer
models. You can say ``Why do I need a~computer model? I can make
predictions using yesterday's weather or other past information.'' But,
because of the high dimensionality of the problem, this approach doesn't
work. The computer models are valid enough to dramatically improve
prediction. However, the computer models themselves produce high
dimensional data.
Again, through chance, Thomas Bengtsson came~to Berkeley. Bengtsson had
spent some time in NCAR and, working with the physicists, had been
trying to understand how to use these computer models effectively. They
had hit a serious problem, the collapse of particle filters in high
dimension. Bengtsson, Snyder, Anderson, two physicists at NCAR and I
were able to analyze this phenomenon. This led to a paper in the
\textit{Monthly Weather Review} (Snyder et al.,
\citeyear{2008Snyder}) and some theoretical papers (Bickel, Li and Bengtsson, \citeyear{2008Bickel}). I am
now working with Jing Lei, a graduate student, trying to bypass these
difficulties of particle filters.
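The collapse phenomenon is easy to demonstrate: with a fixed number of particles, a single normalized importance weight tends to dominate as the dimension of the observation grows, so the effective sample size degenerates toward one. A toy illustration with a Gaussian likelihood (assumed purely for illustration; this is not the NCAR setup):

```python
import numpy as np

def effective_sample_size(d, n_particles=200, seed=0):
    """ESS = 1 / sum(w_i^2) of normalized importance weights for
    N(0, I_d) particles scored against a Gaussian observation in R^d."""
    rng = np.random.default_rng(seed)
    particles = rng.normal(size=(n_particles, d))
    y = rng.normal(size=d)
    log_w = -0.5 * ((particles - y) ** 2).sum(axis=1)  # Gaussian log-likelihood
    w = np.exp(log_w - log_w.max())                    # stabilize, then normalize
    w /= w.sum()
    return 1.0 / (w ** 2).sum()

ess_low = effective_sample_size(d=1)     # low dimension: many useful particles
ess_high = effective_sample_size(d=100)  # high dimension: weights collapse
print(ess_low, ess_high)
```

Because the log-weights spread at a rate growing with $d$, avoiding the collapse requires exponentially many particles, which is the difficulty analyzed in the papers mentioned above.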
As it turns out, both of my current fields of application, genomics and
weather prediction, have fed naturally into my theoretical interests in
understanding high dimensional data analysis. I still work at a~rather
abstract level, and don't deal well with details, but, fortunately, my
students, post docs and colleagues compensate for my shortcomings, so
together we're able to make satisfying contributions to both theory and
practice.
\textit{I want to go back to your student years in Berkeley.}
I started at Caltech, was there for two years, but then transferred to
the University of California, Ber\-keley. I finished my undergraduate work
at Berkeley in one year, because I had done five years of high school in
Canada, not four as in the US. Caltech paid no attention to the extra
year, but Berkeley did give me credit for my fifth high school year,
provided I completed my undergraduate degree in mathematics. I had moved
to UCB intending to switch to psychology. That's what brought me to a
class taught by Joe Hodges. I thought statistics would be necessary for
a psychology student. That class drew
me into statistics just at the time that a graduate class in
mathematical learning theory made me skeptical about psychology.
I took a Master's degree in math while I was deciding whether to go into
math or statistics. I actually wanted to do my Ph.D. with Hodges, but
Hodges insisted that people come to him with their own problems, and I~wasn't ready to do that.
So he steered me to Lehmann. That was very
fortunate for me. Erich Lehmann was really a life guide, not just an
academic one. Academically, my progress was a bit funny. I spent only
two years in my Ph.D. program. I had already taken the basic graduate
probability course. I found statistics interesting and wanted to pursue
it in depth. I could really have gained by studying a little bit more,
but again chance intervened. The Department had an oral Qualifying Exam
for the Ph.D., with three panels of faculty members---in theoretical
statistics, applied statistics and probability theory. The students were
examined by each of the three panels. My friend Helen Wittenberg (now
Shanna Swann) needed a~study partner and enlisted me. So I took the
qualifying exam a~year earlier than I otherwise would have
done.\looseness=1
The applied statistics exam was a bit of a farce. How did one prepare
oneself? One read thoroughly Scheffe's book, \textit{The Analysis of
Variance}, a lovely book, but it's really a theory book, with few
examples of analyses of real data. The panel on applied statistics
consisted of Elizabeth Scott, Jerzy Neyman, Evelyn Fix and Henry
Scheffe. They asked me about the book, and that was OK. Then Betty Scott
actually asked me something applied, and I didn't know what to say. They
passed me anyway. I was tired of school and wanted to do a thesis right
away. Erich gave me a problem, which I didn't know very much about, but
I succeeded. The Department hired me; so I stayed.
\textit{How would you describe the people in Berkeley at this time?}
It was a very eminent group and there was a lot of collaboration and a
cordial environment. I have enjoyed these aspects of the department very
much from the beginning. There were many joint papers. Between Hodges
and Lehmann, of course, there was a long collaboration. Blackwell and
Hodges had papers; Blackwell and Le Cam had papers. I don't know about
Neyman and Le Cam, but certainly they interacted intensely. Henry
Scheffe and Erich also worked together a lot before my time. Before he
moved to Stanford in the early 50s, Charles Stein worked with Erich.
There were some tensions in the department, but I wasn't aware of them
at that time.\looseness=1
\begin{figure}
\caption{Peter Bickel with his parents, Madeleine and Eliezer Bickel.
Ca. 1943.}
\end{figure}
Le Cam and Neyman viewed themselves as applied statisticians, though the
rest of the statistical world might not have agreed. Betty Scott did
applied statistics, in astronomy and climatology. Hen\-ry Scheffe was a
serious applied statistician. He wor\-ked with Cuthbert Daniel, who was a
private consultant and very impressive. Henry brought him to Berkeley
for a semester of lectures, which was very good for all of us. Joe
Hodges was considered the most talented applied statistician in the
department. He had a wonderful sense of data, but Joe, interestingly
enough, didn't want to be an applied statistician.
The intellectual center of the Department was certainly mathematical
theory. There was a young group of probabilists, including David
Freedman and Lester Dubins and the more senior Loeve and Le Cam. David
Freedman eventually switched to statistics. The relations with Stanford
were excellent. We used to have the Berkeley--Stanford colloquia twice a
quarter, one in Berkeley, and one in Stanford. So, it was a very
pleasant place to work.
I collaborated with Erich and Joe and with David Blackwell. Eventually,
but much later, I collaborated with David Freedman. I think that's about
it with the early years group. Subsequently, I collaborated with later
arrivals, Leo Breiman, Rudy Beran and Warry Millar, as well as, of
course, Kjell Doksum, with whom, in addition to papers, I published
a~book whose second edition we are still working on.
Not long after I started teaching, in the late 60s and early 70s,
Berkeley was full of turmoil. Things happened that had nothing to do
with statistics. I and most of my colleagues supported the Free Speech
Movement. Later on we supported a student strike by holding our classes
off campus, but we were not personally engaged. Of course, it was very
emotional. People left the university from both the right and the left.
When Reagan was governor and when Nixon was president, conflict about
the Vietnam War got heated. I remember teaching class in Dwinelle Hall
at noon, and tear gas coming through the windows.
\begin{figure}
\caption{Peter Bickel with his uncle and aunt, Shlomo and Yetta
Bickel, and his cousin Alexander Bickel.}
\end{figure}
\textit{How do you define your generation? Erich Leh\-mann was the leading
person in the second generation. The third generation more or less
started with you and your colleagues.}
To some extent, yes. Moving beyond Berkeley, I've been struck by a
curious observation. A substantial number of leading figures in my
generation came from Caltech. They include, among others, Brad Efron,
Larry Brown, Chuck Stone and Carl Morris. Nobody taught statistics at
Caltech. But, for some reason, we all felt we wanted to do things in the
real world. Among us, only Brad claims that he always wanted to do
statistics. Larry went to Cornell and worked with Jack Kiefer and wrote
a statistical thesis. Chuck said he wanted to do statistics, but he
moved to probability as a student of Karlin. Later he got involved with
Leo Breiman and went into statistics.
\textit{Can you tell about yourself? You once told me, ``We are lucky to
belong to a generation that didn't suffer from wars.'' I found it an
interesting comment from somebody who was born as a Jew in central
Europe during WWII. Can you tell us about your history?}
I was born in Bucharest in 1940, but I was really not very aware of the
war, except that sometimes we had to go to the bomb shelters (when the
Americans were bombing the oil field in Ploesti). Once, when we were
coming back, I saw broken windows in some office buildings. My father
Eliezer (Lothar) Bickel was able to continue to practice medicine during
the war. There was a pogrom in Bucharest, which we narrowly avoided by
my mother Madeleine's courageous behavior. Then, after the war and after
the communists took over, my parents arranged, with difficulty, to leave
Romania legally. But I didn't realize the difficulties at the time.
\begin{figure}
\caption{The young Nancy and Peter Bickel, and their daughter and son
Amanda and Stephen. This picture was used as a New Year's greeting card.}
\end{figure}
We went to France in 1948 and then to Canada in 1949. I studied in
France in a public school for ten months. In France my father insisted
on giving me English lessons after an eight hour day of school and
homework. From Canada we went to California. I could have been drafted
in the Vietnam War, but I married young and we had a child. So that
probably was the source of my remark.
\textit{Can you tell us about the intellectual influence of your family
on you?}
My father was born in Bukovina, a German-speaking province of the
Austro--Hungarian Empire. He had a traditional Jewish and then a secular
education. In high school, under the influence of one of the high school
teachers, he and other Jewish students became involved in one of the
many intellectual groups of the time. He became, basically, a disciple
of a German philosopher called Constantin Brunner, a son of the grand
rabbi of Hamburg, who rebelled against his father. Brunner was involved
in elaborating a philosophical system based on Spinoza. He believed
strongly that the Jews should assimilate to German culture. One of his
books, called ``\textit{Unser Christus oder das Wesen des Genies},'' or
in English, ``\textit{Our Christ, or the Essence of Genius},'' among
other things, advocated assimilation to German culture, including
Christianity. Like Brunner, my father favored assimilation. In Romania,
we never celebrated Jewish holidays, and in fact, we celebrated
Christmas, but in a nonreligious way.
My father became the leader of the Brunner group; he was treated almost
as the equivalent of a Hasidic rabbi. He pursued two fields at the same
time---medicine and philosophy. He was able to study medi\-cine at the
University of Bucharest, even though very few Jews were admitted because
there was a~``numerus clausus.'' Also, as he was the second son, his
father wanted him to run the family store and wouldn't pay for his
further education. He rebelled and had to struggle on his own. My father
went to Germany to do post graduate study in medicine. At the same time
he was able to meet and study with Brunner. He was successful in both
fields. I found out later that he was an experimentalist, publishing 23
papers while in Germany. Then, and later, he published books in
philosophy.
If Hitler had not come to power, I might not be here. My father would
have stayed in Germany and become a professor in Berlin. But when Hitler
expelled the foreign Jews, my father returned to Bucha\-rest, married my
mother, and I came to be. My mother was seriously ill during the first
few years of my life, but I had a loving set of grandparents and a nurse
who took care of me and, as far as I can remember, I was happy.
I was eleven when my father died. By then we were living in Canada.
Although he was very ill with heart disease, he was studying very hard
to qualify as a doctor in Canada. My relations with him were never easy.
He kept a notebook of anecdotes about me from ages one to five. When I
translated it for my wife Nancy, I saw as I read that the anecdotes are
all instances in which the father humiliates the child. After he died,
I~tried to help my mother, in the house, and by taking a job delivering
for the local drugstore. She coped wonderfully even though her life in
Romania had not prepared her for the role
of a relatively poor single mother. Our relations were very close. I would
act as confidante and counselor and was very proud of what help I could
give. She remarried, and that's how we got to California.
I found school work and languages very easy as a~child, too easy, as I
discovered when I got to Caltech and had to compete with many who were
as quick or quicker than I was. I wanted to be a scientist ever since I
read the books of George Gamow. But I was broadly interested in physics
on the one hand and physiology and biochemistry on the other. I avoided
medicine and philosophy, since my father had been a~physician and
philosopher. Fortunately, I found my way through mathematics to
statistics, which has allowed me to dip into almost every
science.
\textit{Your uncle was a lawyer?}
In Romania my uncle Shlomo Bickel was a lawyer, but he, like my father,
and most of his generation of Jews, was part of a movement. He was a
Yiddishist and a Zionist. He was able to get out to the States in 1938.
He couldn't practice law, so became a journalist and wrote weekly
columns for \textit{The Day,} one of the two large Yiddish papers in New
York. He also wrote several books; one chapter of his book ``Rumania,''
later translated into English, was about my father and other rebellious
Bickels. I felt very close to my uncle Shlomo and aunt Yetta. Their
household was full of intellectual and literary discussion. They showed
great affection to each other and to me, particularly after my father
died. They showed me how loving and intellectually lively family life
could be. Like my uncle, I've been fortunate to have a family life full
of love and discussion.
\section*{Acknowledgments}
The encouragement of Nancy Bickel was more than helpful, both to Peter
and to me. She assembled the collection of pictures, which were taken
by friends, family, and students and staff in Berkeley and Princeton. I
apologize that I cannot give personal acknowledgments for them.
\end{document}
\begin{document}
\begin{frontmatter}
\title{Local Linearization - Runge Kutta Methods: a class of A-stable explicit integrators for dynamical systems}
\author[impa,uci]{H. de la Cruz\corref{cor1}}
\ead{[email protected]}
\author[uval,icimaf]{R.J. Biscay}
\ead{[email protected]}
\author[icimaf]{J.C. Jimenez}
\ead{[email protected]}
\author[mgill]{F. Carbonell}
\ead{[email protected]}
\address[impa]{IMPA, Estrada Dona Castorina 110, Rio de Janeiro, Brasil}
\address[uci]{Universidad de Ciencias Inform\'aticas, La Habana, Cuba}
\address[uval]{CIMFAV-DEUV, Facultad de Ciencias, Universidad de Valparaiso, Chile}
\address[icimaf]{Instituto de Cibern\'etica, Matem\'atica y F\'isica, Calle 15 No. 551, La Habana, Cuba}
\address[mgill]{Montreal Neurological Institute, McGill University, Montreal, Canada}
\cortext[cor1]{Corresponding author}
\fntext[]{Supported by CNPq under grant no. 500298/2009-2}
\begin{abstract}
A new approach for the construction of high order A-stable explicit
integrators for ordinary differential equations (ODEs) is theoretically
studied. Basically, the integrators are obtained by splitting, at each time
step, the solution of the original equation in two parts: the solution of a
linear ordinary differential equation plus the solution of an auxiliary ODE.
The first one is solved by a Local Linearization scheme in such a way that
A-stability is ensured, while the second one can be approximated by any
extant scheme, preferably a high order explicit Runge-Kutta scheme. Results
on the convergence and dynamical properties of this new class of schemes are
given, as well as some hints for their efficient numerical implementation.
A specific scheme of this new class is derived in detail, and its
performance is compared with some Matlab codes in the integration of a
variety of ODEs representing different types of dynamics.
\end{abstract}
\begin{keyword}
Numerical integrators \sep A-stability \sep Local linearization\sep Runge Kutta methods \sep Variation of constants formula \sep Hyperbolic stationary points. \\
\textbf{MSC}: 65L20; 65L07
\end{keyword}
\end{frontmatter}
\section{Introduction}
It is well known (see, e.g., \cite{Cartwright 1992, Stewart 1992}) that
conventional numerical schemes such as Runge-Kutta, Adams-Bashforth,
predictor-corrector and others produce misleading dynamics in the
integration of Ordinary Differential Equations (ODEs). Typical difficulties
are, for instance, convergence to spurious steady states, changes in the
basins of attraction, the appearance of spurious bifurcations, etc. The essence
of such difficulties is that the dynamics of the numerical schemes (viewed
as discrete dynamical systems) is far richer than that of its continuous
counterparts. Contrary to common belief, drawbacks of this type may not
be overcome by reducing the step-size of the numerical method. It is therefore
highly desirable to develop numerical integrators that preserve,
as much as possible, the dynamical properties of the underlying dynamical
system for all step sizes, or at least for relatively large ones. In this direction, some
modest advances have been achieved by a number of relatively recent integrators
of the class of Exponential Methods, which are characterized by the explicit
use of exponentials to obtain an approximate solution. In fact, their
development has been encouraged by their capability of preserving a
number of geometric and dynamical features of the ODEs at the expense of
notably less computational effort than implicit integrators. This has
become feasible due to advances in the computation of matrix exponentials
(see, e.g., \cite{Hochbruck-Lubich97}, \cite{Sidje 1998}, \cite
{Dieci-Papini00}, \cite{Celledoni-Iserles01}, \cite{Higham04}) and multiple
integrals involving matrix exponentials (see, e.g., \cite{Carbonell-etal05},
\cite{VanLoan78}). Some instances of this type of integrators are the
methods known as exponential fitting \cite{Liniger70}, \cite{Carroll93},
\cite{Voss 1988}, \cite{Cash 1981}, \cite{Iserles 1978}, exponential
integrating factor \cite{Lawson67}, exponential integrators \cite
{Hochbruck-etal98},\cite{Hochbruck-Ostermann10}, exponential time
differencing \cite{Cox-Matthews02}, \cite{Kassam05}, truncated Magnus
expansion \cite{Iserles99}, \cite{Blanes-etal00}, truncated Fer expansion
\cite{Zanna99} (also named exponential of iterated commutators in \cite
{Iserles 1984}), exponential Runge-Kutta \cite{Hochbruck-Ostermann04}, \cite
{Hochbruck-Ostermann05}, some schemes based on versions of the variation of
constants formula (e.g., \cite{Norsett69}, \cite{Jain72}, \cite{Iserles81},
\cite{Pavlov-Rodionova87}, \cite{Friesner-etal89}), local linearization
(see, e.g., \cite{Pope63}, \cite{Ramos 1997a}, \cite{Jimenez02 AMC}, \cite
{Jimenez05 AMC}, \cite{Carr11}), and high order local linearization methods
\cite{de la Cruz 06}, \cite{de la Cruz 07}, \cite{Jimenez09}, \cite
{Hochbruck-Ostermann11}.
The present paper deals with the class of high order local linearization
integrators called Local Linearization-Runge Kutta (LLRK) methods, which was
recently introduced in \cite{de la Cruz 06} as a flexible approach for
increasing the order of convergence of the Local Linearization (LL) method
while retaining its desired dynamical properties. Essentially, the LLRK
integrators are obtained by splitting, at each time step, the solution of
the underlying ODE in two parts: the solution $\mathbf{v}$ of a linear ODE
plus the solution $\mathbf{u}$ of an auxiliary ODE. The first one is solved
by an LL scheme in such a way that the A-stability is ensured, while the
second one is integrated by any high order explicit Runge-Kutta (RK) scheme.
Like the Implicit-Explicit Runge-Kutta (IMEX RK) and conventional splitting
methods (see e.g. \cite{McLachlan02}, \cite{Ascher-etal97}), the splitting
involved in the LLRK approximations is based on the representation of the
underlying vector field as the addition of linear and nonlinear components.
However, there are notable differences among these methods: i) Typically, in
splitting and IMEX methods the vector field decomposition is global instead
of local, and it is not based on a first-order Taylor expansion. ii) In
contrast with IMEX and LLRK approaches, splitting methods construct an
approximate solution by composition of the flows corresponding to the
component vector fields. iii) IMEX RK methods are partitioned (more
specifically, additive) Runge-Kutta methods that compute a solution $\mathbf{
y}=\mathbf{v+u}$ by solving certain ODE for $\left( \mathbf{v,u}\right) $,
setting different RK coefficients for each block. LLRK methods also solve a
partitioned system for $\left( \mathbf{v,u}\right) $, but a different one.
In this case, one of the blocks is linear and uncoupled, which is solved by
the LL method. After inserting the (continuous time) LL approximation into
the second block, this is treated as a non-autonomous ODE, for which any
extant RK discretization can be used. On the other hand, it is worth noting
that the LLRK methods can also be thought of as a flexible approach to
construct new A-stable explicit schemes based on standard explicit RK
integrators. In comparison with the well-known Rosenbrock \cite{Bui79}, \cite
{Shampine97} and Exponential Integrators \cite{Hochbruck-etal98},\cite
{Hochbruck-Ostermann05}, the A-stability of the LLRK schemes is achieved in a
different way. Basically, Rosenbrock and Exponential integrators are
obtained by inserting a stabilization factor ($1/(1-z)$ or $(e^{z}-1)/z$,
respectively) into the explicit RK formulas, whose coefficients must then be
determined to fulfil both A-stability and order conditions. In contrast,
A-stability of an LLRK scheme results from the fact that the component $
\mathbf{v}$ associated with the linear part of the vector field is computed
through an A-stable LL scheme. Another major difference is that the RK
coefficients involved in the LLRK methods are not constrained by any
stability condition and need only satisfy the usual order conditions
for RK schemes. Thus, the coefficients in the LLRK methods can simply be those
of any standard explicit RK scheme. This makes the LLRK approach highly
flexible and allows for simple numerical implementations on the basis of
available subroutines for LL and RK methods.
In \cite{de la Cruz 06}, \cite{de la Cruz Ph.D. Thesis} a number of
numerical simulations were carried out in order to illustrate the
performance of the LLRK schemes and to compare them with other numerical
integrators. With special emphasis, the dynamical properties of the LLRK
schemes were considered, as well as their capability for integrating some
kinds of stiff ODEs. For these equations, LLRK schemes showed stability
similar to that of implicit schemes with the same order of convergence,
while demanding much lower computational cost. The simulations also showed
that the LLRK schemes exhibit a much better behavior near stationary
hyperbolic points and periodic orbits of the continuous systems than other
conventional explicit integrators. However, no theoretical support for such
findings has been published so far.
The main aim of the present paper is to provide a theoretical study of LLRK
integrators.\ Specifically, the following subjects are considered: rate of
convergence, linear stability, preservation of the equilibrium points, and
reproduction of the phase portrait of the underlying dynamical system near
hyperbolic stationary points and periodic orbits. Furthermore, unlike the
majority of the previous papers on exponential integrators, this study is
carried out not only for the discretizations but also for the numerical
schemes that implement them in practice.
The paper is organized as follows. In section 2, the formulations of the LL
and LLRK methods are briefly reviewed. Sections 3 and 4 deal with the
convergence, linear stability and dynamic properties of LLRK
discretizations. Section 5 focuses on the preservation of these properties
by LLRK numerical schemes. In the last section, a new simulation study is
presented in order to compare the performance of a specific order-4 LLRK
scheme and some Matlab codes in a variety of ODEs representing different
types of dynamics.
\section{High Order Local Linear discretizations \label{Section LLA}}
Let $\mathcal{D}\subset\mathbb{R}^{d}$ be an open set. Consider the
$d$-dimensional differential equation
\begin{align}
\frac{d\mathbf{x}\left( t\right) }{dt} & =\mathbf{f}\left( t,\mathbf{x}
\left( t\right) \right) \text{, \ \ }t\in\left[ t_{0},T\right]
\label{ODE-LLA-1} \\
\mathbf{x}(t_{0}) & =\mathbf{x}_{0}, \label{ODE-LLA-2}
\end{align}
where $\mathbf{x}_{0}\in\mathcal{D}$ is a given initial value, and $\mathbf{f
}:\left[ t_{0},T\right] \times\mathcal{D}\longrightarrow \mathbb{R}^{d}$ is
a differentiable function. Lipschitz and smoothness conditions on the
function $\mathbf{f}$ are assumed in order to ensure a unique solution of
this equation in $\mathcal{D}$.
In what follows, for $h>0$, $(t)_{h}$ will denote a partition $
t_{0}<t_{1}<...<t_{N}=T$ of the time interval $\left[ t_{0},T\right] $ such
that
\begin{equation*}
\sup_{n}(h_{n})\leq h<1,
\end{equation*}
where $h_{n}=t_{n+1}-t_{n}$ for $n=0,...,N-1$.
\subsection{Local Linear discretization}
Suppose that, for each $t_{n}\in\left( t\right) _{h}$, $\mathbf{y}_{n}\in
\mathcal{D}$ is a point close to $\mathbf{x}\left( t_{n}\right) $. Consider
the first order Taylor expansion of the function $\mathbf{f}$ around the
point $(t_{n},\mathbf{y}_{n})$:
\begin{equation*}
\mathbf{f}\left( s,\mathbf{u}\right) \approx\mathbf{f(}t_{n},\mathbf{y}_{n})+
\mathbf{f}_{\mathbf{x}}(t_{n},\mathbf{y}_{n})(\mathbf{u}-\mathbf{y}_{n})+
\mathbf{f}_{t}(t_{n},\mathbf{y}_{n})(s-t_{n}),\text{ }
\end{equation*}
for $s\in\mathbb{R}$ and $\mathbf{u}\in\mathcal{D}$, where $\mathbf{f}_{
\mathbf{x}}$, and $\mathbf{f}_{t}$ denote the partial derivatives of $
\mathbf{f}$ with respect to the variables $\mathbf{x}$ and $t$,
respectively. Adopting this linear approximation of $\mathbf{f}$ at each
time step, the solution of (\ref{ODE-LLA-1})-(\ref{ODE-LLA-2}) can be
locally approximated on each interval $[t_{n},t_{n+1})$ by the solution of
the linear ODE
\begin{align}
\frac{d\mathbf{y}\left( t\right) }{dt} & =\mathbf{A}_{n}\mathbf{y}(t)+
\mathbf{a}_{n}\left( t\right) \text{, \ \ }t\in\lbrack t_{n},t_{n+1})
\label{ODE-LLA-8} \\
\mathbf{y}\left( t_{n}\right) & =\mathbf{y}_{n}\text{ , \ \ }
\label{ODE-LLA-8b}
\end{align}
where $\mathbf{A}_{n}=\mathbf{f}_{\mathbf{x}}(t_{n},\mathbf{y}_{n})$ is a
constant matrix, and $\mathbf{a}_{n}(t)=\mathbf{f}_{t}\left( t_{n},\mathbf{y}
_{n}\right) (t-t_{n})+\mathbf{f}\left( t_{n},\mathbf{y}_{n}\right) \mathbf{-A
}_{n}\mathbf{y}_{n}$ is a linear vector function of $t$. According to the
variation of constants formula, such a solution is given by
\begin{equation}
\mathbf{y}(t)=e^{\mathbf{A}_{n}(t-t_{n})}(\mathbf{y}_{n}+\int
\limits_{0}^{t-t_{n}}e^{-\mathbf{A}_{n}u}\mathbf{a}_{n}\left( t_{n}+u\right)
du). \label{ODE-LLA-6}
\end{equation}
Furthermore, by using the identity
\begin{equation}
\int\limits_{0}^{\Delta}e^{-\mathbf{A}_{n}u}du\text{ }\mathbf{A}_{n}=-(e^{-
\mathbf{A}_{n}\Delta}-\mathbf{I}),\text{ \ \ \ \ }\Delta\geq0
\label{ODE-LLA-3}
\end{equation}
and simple rules from the integral calculus, the expression (\ref{ODE-LLA-6}
) can be rewritten as
\begin{equation}
\mathbf{y}(t)=\mathbf{y}_{n}+\mathbf{\phi}(t_{n},\mathbf{y}_{n};t-t_{n}),
\label{ODE-LLA-9}
\end{equation}
where
\begin{align}
\mathbf{\phi}(t_{n},\mathbf{y}_{n};t-t_{n}) & =\int\limits_{0}^{t-t_{n}}e^{
\mathbf{A}_{n}(t-t_{n}-u)}(\mathbf{A}_{n}\mathbf{y}_{n}+\mathbf{a}_{n}\left(
t_{n}+u)\right) du \notag \\
& =\int\limits_{0}^{t-t_{n}}e^{\mathbf{f}_{\mathbf{x}}\left( t_{n},\mathbf{y}
_{n}\right) (t-t_{n}-u)}(\mathbf{f}\left( t_{n},\mathbf{y}_{n}\right) +
\mathbf{f}_{t}\left( t_{n},\mathbf{y}_{n}\right) u)du. \label{ODE-LLA-7}
\end{align}
In this way, by setting $\mathbf{y}_{0}=\mathbf{x}(t_{0})$ and iteratively
evaluating the expression (\ref{ODE-LLA-9}) at $t_{n+1}$ (for $n=0,1,\ldots
,N-1$) a sequence of points $\mathbf{y}_{n+1}$ can be obtained as an
approximation to the solution of the equation (\ref{ODE-LLA-1})-(\ref
{ODE-LLA-2}). This is formalized in the following definition.
\begin{definition}
\label{definition LLD} (\cite{Jimenez02}, \cite{Jimenez05 AMC}) For a given
time discretization $\left( t\right) _{h}$, the Local Linear discretization
for the ODE (\ref{ODE-LLA-1})-(\ref{ODE-LLA-2}) is defined by the recursive
expression
\begin{equation}
\mathbf{y}_{n+1}=\mathbf{y}_{n}+\mathbf{\phi}\left( t_{n},\mathbf{y}
_{n};h_{n}\right) , \label{ODE-LLA-4}
\end{equation}
starting with $\mathbf{y}_{0}=\mathbf{x}_{0}$.
\end{definition}
The Local Linear discretization\ (\ref{ODE-LLA-4}) is, by construction,
A-stable. Furthermore, under quite general conditions, it does not have
spurious equilibrium points \cite{Jimenez02 AMC} and preserves the local
stability of the exact solution at hyperbolic equilibrium points and
periodic orbits \cite{Jimenez02 AMC}, \cite{McLachlan09}. On the basis of
the recursion (\ref{ODE-LLA-4}) (also known as Exponentially fitted Euler,
Euler Exponential or piece-wise linearized method) a variety of numerical
schemes for ODEs has been constructed (see a review in \cite{Jimenez05 AMC},
\cite{de la Cruz 07}). These numerical schemes essentially differ with
respect to the numerical algorithm used to compute (\ref{ODE-LLA-7}), and so
in the dynamical properties that they inherit from the LL discretization. A
major limitation of such schemes is their low order of convergence, namely
two.
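As an illustration, the LL recursion (\ref{ODE-LLA-4}) can be implemented by evaluating $\mathbf{\phi}$ through a single exponential of an augmented matrix, one standard route in the LL literature. The following sketch (not the authors' reference implementation) is written in Python with NumPy/SciPy and assumes user-supplied callables \texttt{f}, \texttt{fx}, \texttt{ft} for $\mathbf{f}$, $\mathbf{f}_{\mathbf{x}}$ and $\mathbf{f}_{t}$:

```python
import numpy as np
from scipy.linalg import expm

def ll_step(f, fx, ft, t, y, h):
    """One Local Linearization step: y_{n+1} = y_n + phi(t_n, y_n; h).

    phi is obtained from a single exponential of the augmented matrix
    M = [[A, ft, f], [0, 0, 1], [0, 0, 0]], with A = f_x(t_n, y_n):
    the first d entries of the last column of expm(h*M) equal phi.
    """
    d = y.size
    M = np.zeros((d + 2, d + 2))
    M[:d, :d] = fx(t, y)        # Jacobian A_n = f_x(t_n, y_n)
    M[:d, d] = ft(t, y)         # f_t(t_n, y_n)
    M[:d, d + 1] = f(t, y)      # f(t_n, y_n)
    M[d, d + 1] = 1.0
    phi = expm(h * M)[:d, d + 1]
    return y + phi
```

For a linear ODE the step reproduces the exact flow up to the accuracy of \texttt{expm}, consistent with the A-stability of the discretization; for nonlinear ODEs it is the order-two LL approximation.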
\subsection{Local Linear - Runge Kutta discretizations}
A modification of the classical LL method can be done in order to improve
its order of convergence while retaining desirable dynamic properties. To do
so, note that the solution of the local linear ODE (\ref{ODE-LLA-8})-(\ref
{ODE-LLA-8b}) is an approximation to the solution of the local nonlinear ODE
\begin{align*}
\frac{d\mathbf{z}\left( t\right) }{dt}& =\mathbf{f}\left( t,\mathbf{z}\left(
t\right) \right) \text{, \ \ }t\in \lbrack t_{n},t_{n+1}) \\
\mathbf{z}\left( t_{n}\right) & =\mathbf{y}_{n},\text{\ \ }
\end{align*}
which can be rewritten as
\begin{align*}
\frac{d\mathbf{z}\left( t\right) }{dt}& =\mathbf{A}_{n}\mathbf{z}(t)+\mathbf{
a}_{n}\left( t\right) +\mathbf{g}(t_{n},\mathbf{y}_{n};t,\mathbf{z}\left(
t\right) )\text{, \ \ }t\in \lbrack t_{n},t_{n+1}) \\
\mathbf{z}\left( t_{n}\right) & =\mathbf{y}_{n},\text{\ \ }
\end{align*}
where $\mathbf{g}(t_{n},\mathbf{y}_{n};t,\mathbf{z}\left( t\right) )=\mathbf{
f(}t,\mathbf{z}\left( t\right) )-\mathbf{A}_{n}\mathbf{z}(t)-\mathbf{a}
_{n}\left( t\right) $, and $\mathbf{A}_{n}$, $\mathbf{a}_{n}(t)$ are defined
as in the previous subsection. From the variation of constants formula, the
solution $\mathbf{z}$ of this equation can be written as
\begin{equation*}
\mathbf{z}\left( t\right) =\mathbf{y}_{LL}\left( t;t_{n},\mathbf{y}
_{n}\right) +\mathbf{r}\left( t;t_{n},\mathbf{y}_{n}\right) ,
\end{equation*}
where
\begin{equation}
\mathbf{y}_{LL}(t;t_{n},\mathbf{y}_{n})=e^{\mathbf{A}_{n}(t-t_{n})}(\mathbf{y
}_{n}+\int\limits_{0}^{t-t_{n}}e^{-\mathbf{A}_{n}u}\mathbf{a}_{n}\left(
t_{n}+u\right) du) \label{ODE_HLLA-2b}
\end{equation}
is the solution of the linear equation (\ref{ODE-LLA-8})-(\ref{ODE-LLA-8b}), and
\begin{equation}
\mathbf{r}(t;t_{n},\mathbf{y}_{n})=\int\limits_{0}^{t-t_{n}}e^{\mathbf{f}_{
\mathbf{x}}(t_{n},\mathbf{y}_{n})(t-t_{n}-u)}\mathbf{g}\left( t_{n},\mathbf{y
}_{n};t_{n}+u,\mathbf{z}\left( t_{n}+u\right) \right) du \label{ODE-HLLA-2}
\end{equation}
is the remainder term of the LL approximation $\mathbf{y}_{LL}$ to $\mathbf{z
}$. Consequently, if $\mathbf{r}_{\kappa }$ is an approximation to $\mathbf{r
}$ of order $\kappa >2$, then $\mathbf{y}(t)=\mathbf{y}_{LL}\left( t;t_{n},
\mathbf{y}_{n}\right) +\mathbf{r}_{\kappa }\left( t;t_{n},\mathbf{y}
_{n}\right) $ should provide a better estimate to $\mathbf{z}(t)$ than the
LL approximation $\mathbf{y}(t)=\mathbf{y}_{LL}\left( t;t_{n},\mathbf{y}
_{n}\right) $ for all $t\in \lbrack t_{n},t_{n+1})$. This motivates the
definition of the following high order local linear discretization.
\begin{definition}
\label{definition HLLD}\cite{Jimenez09} For a given time discretization $
\left( t\right) _{h}$, an order $\gamma $ Local Linear discretization for
the ODE (\ref{ODE-LLA-1})-(\ref{ODE-LLA-2}) is defined by the recursive
expression
\begin{equation}
\mathbf{y}_{n+1}=\mathbf{y}_{LL}\left( t_{n}+h_{n};t_{n},\mathbf{y}
_{n}\right) +\mathbf{r}_{\kappa }\left( t_{n}+h_{n};t_{n},\mathbf{y}
_{n}\right) , \label{ODE-HLLA-1}
\end{equation}
starting with $\mathbf{y}_{0}=\mathbf{x}_{0}$, where $\mathbf{r}_{\kappa }$
is an approximation to the remainder term (\ref{ODE-HLLA-2}) such that $
\left\Vert \mathbf{x}(t_{n})-\mathbf{y}_{n}\right\Vert =O(h^{\gamma })$ with
$\gamma >2$, for all $t_{n}\in \left( t\right) _{h}$.
\end{definition}
Depending on the way in which the remainder term $\mathbf{r}$ is
approximated, two classes of high order LL discretizations have been
proposed.\ In the first one, $\mathbf{g}$ is approximated by a polynomial.
For instance, by means of a truncated Taylor expansion \cite{de la Cruz 07}
or a Hermite interpolation polynomial \cite{Hochbruck-Ostermann11},
resulting in the so-called Local Linearization - Taylor schemes and the
Linearized Exponential Adams schemes, respectively. The second one is based
on approximating $\mathbf{r}$ by means of a standard integrator that solves
an auxiliary ODE. These are called the Local Linearization-Runge Kutta (LLRK)
methods when a Runge-Kutta integrator is used for this purpose \cite{de la
Cruz 06}. A computational advantage of the latter class is that it does not
require calculation of high order derivatives of the vector field $\mathbf{f}
$.
Specifically, the LLRK methods are derived as follows. By taking derivatives
with respect to $t$ in (\ref{ODE-HLLA-2}), one obtains that $\mathbf{r}
\left( t;t_{n},\mathbf{y}_{n}\right) $ satisfies the differential equation
\begin{align}
\frac{d\mathbf{u}\left( t\right) }{dt}& =\mathbf{q(}t_{n},\mathbf{y}_{n};t
\mathbf{,\mathbf{u}}\left( t\right) \mathbf{),}\text{ \ \ }t\in \lbrack
t_{n},t_{n+1}), \label{Diff. Equat for rn} \\
\mathbf{u}\left( t_{n}\right) & =\mathbf{0},
\label{Initial Cond. Diff. Equat for rn}
\end{align}
with vector field
\begin{equation*}
\mathbf{q(}t_{n},\mathbf{y}_{n};s\mathbf{,\xi )}=\mathbf{\mathbf{f}_{\mathbf{
x}}}(t_{n},\mathbf{y}_{n})\mathbf{\xi }+\mathbf{g}\left( t_{n},\mathbf{y}
_{n};s,\mathbf{y}_{n}+\mathbf{\phi }\left( t_{n},\mathbf{y}
_{n};s-t_{n}\right) +\mathbf{\xi }\right) ,
\end{equation*}
which can also be written as
\begin{align*}
\mathbf{q(}t_{n},\mathbf{y}_{n};s\mathbf{,\xi )}=\mathbf{f(}& s,\mathbf{y}
_{n}+\mathbf{\phi }\left( t_{n},\mathbf{y}_{n};s-t_{n}\right) +\mathbf{\xi }
)-\mathbf{f}_{\mathbf{x}}(t_{n},\mathbf{y}_{n})\mathbf{\phi }\left( t_{n},
\mathbf{y}_{n};s-t_{n}\right) \\
& -\mathbf{f}_{t}\left( t_{n},\mathbf{y}_{n}\right) (s-t_{n})-\mathbf{f}
\left( t_{n},\mathbf{y}_{n}\right) ,
\end{align*}
where $\mathbf{\phi }$ is the vector function (\ref{ODE-LLA-7}) that defines
the LL discretization (\ref{ODE-LLA-4}). Thus, an approximation $\mathbf{r}
_{\kappa }$ to $\mathbf{r}$ can be obtained by solving the ODE (\ref{Diff.
Equat for rn})-(\ref{Initial Cond. Diff. Equat for rn}) through any
conventional numerical integrator. Namely, if $\mathbf{u}_{n+1}=\mathbf{u}
_{n}+\mathbf{\Lambda }^{\mathbf{y}_{n}}\left( t_{n},\mathbf{u}
_{n};h_{n}\right) $ is some one-step numerical scheme for this equation,
then $\mathbf{r}_{\kappa }\left( t_{n}+h_{n};t_{n},\mathbf{y}_{n}\right) =
\mathbf{\Lambda }^{\mathbf{y}_{n}}\left( t_{n},\mathbf{0};h_{n}\right) $.
In particular, we will focus on the approximation $\mathbf{r}_{\kappa}$
obtained by means of an explicit RK scheme of order $\kappa$. Consider an
$s$-stage explicit RK scheme with coefficients $\mathbf{c}=\left[ c_{i}\right]
$, \ $\mathbf{A}=\left[ a_{ij}\right] $, \ $\mathbf{b}=\left[ b_{j}\right] $
applied to the equation (\ref{Diff. Equat for rn})-(\ref{Initial Cond. Diff.
Equat for rn}), i.e., the approximation defined by the map
\begin{equation}
\mathbf{\rho}\left( t_{n},\mathbf{y}_{n};h_{n}\right)
=h_{n}\sum_{j=1}^{s}b_{j}\mathbf{k}_{j}, \label{RK}
\end{equation}
where
\begin{equation*}
\mathbf{k}_{i}=\mathbf{q}(t_{n},\mathbf{y}_{n};\text{ }t_{n}+c_{i}h_{n}
\mathbf{,}\text{ }h_{n}\sum_{j=1}^{i-1}a_{ij}\mathbf{k}_{j}).
\end{equation*}
This suggests the following definition.
\begin{definition}
(\cite{de la Cruz 07}) An order $\gamma$ \textit{Local Linear-Runge Kutta
(LLRK) discretization} is an order $\gamma$ Local Linear discretization of
the form (\ref{ODE-HLLA-1}), where the approximation $\mathbf{r}_{\kappa}$
to the remainder term (\ref{ODE-HLLA-2}) is defined by the \textit{Runge
Kutta} formula (\ref{RK}).
\end{definition}
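As a concrete sketch of this definition (a minimal illustration, not the authors' reference implementation), one LLRK step with the classical fourth-order RK coefficients can be assembled from the two ingredients above: $\mathbf{\phi}$ evaluated via an augmented matrix exponential, and the RK map (\ref{RK}) applied to the auxiliary vector field $\mathbf{q}$. Python with NumPy/SciPy; \texttt{f}, \texttt{fx}, \texttt{ft} are assumed callables for $\mathbf{f}$, $\mathbf{f}_{\mathbf{x}}$ and $\mathbf{f}_{t}$:

```python
import numpy as np
from scipy.linalg import expm

def llrk4_step(f, fx, ft, t, y, h):
    """One LLRK step y_{n+1} = y_n + phi(t_n, y_n; h) + rho(t_n, y_n; h),
    pairing the LL component with one classical RK4 step for the auxiliary
    equation du/dt = q(t_n, y_n; t, u), u(t_n) = 0.
    """
    d = y.size
    A, Ft, F = fx(t, y), ft(t, y), f(t, y)

    def phi(s):
        # phi(t_n, y_n; s) via the augmented matrix exponential
        if s == 0.0:
            return np.zeros(d)
        M = np.zeros((d + 2, d + 2))
        M[:d, :d] = A
        M[:d, d] = Ft
        M[:d, d + 1] = F
        M[d, d + 1] = 1.0
        return expm(s * M)[:d, d + 1]

    def q(s, xi):
        # q(t_n, y_n; s, xi) = f(s, y_n + phi + xi) - A*phi - f_t*(s-t_n) - f
        p = phi(s - t)
        return f(s, y + p + xi) - A @ p - Ft * (s - t) - F

    # classical RK4 stages (c = [0, 1/2, 1/2, 1]) started at u = 0
    k1 = q(t, np.zeros(d))
    k2 = q(t + h / 2, h / 2 * k1)
    k3 = q(t + h / 2, h / 2 * k2)
    k4 = q(t + h, h * k3)
    rho = h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return y + phi(h) + rho
```

Note that $\mathbf{k}_1=\mathbf{q}(t_n,\mathbf{0})=\mathbf{0}$ by construction, and that for a linear vector field $\mathbf{q}$ reduces to a linear equation with zero initial condition, so $\mathbf{\rho}=\mathbf{0}$ and the step coincides with the (exact) LL step.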
\section{Convergence and linear stability}
In order to study the rate of convergence of the LLRK discretizations, three
useful lemmas will be stated first.
\begin{lemma}
\label{LLRK local error gen}Let $\mathbf{u}_{n+1}=\mathbf{u}_{n}+{\Lambda }^{
\mathbf{y}_{n}}\left( t_{n},\mathbf{u}_{n};h_{n}\right) $ be an approximate
solution of the auxiliary equation (\ref{Diff. Equat for rn})-(\ref{Initial
Cond. Diff. Equat for rn}) at $t=t_{n+1}\in \left( t\right) _{h}$ given by
an order $\gamma $ numerical integrator, and $\mathbf{y}_{n+1}$ the
discretization
\begin{equation*}
\mathbf{y}_{n+1}=\mathbf{y}_{n}+h_{n}\mathbf{\digamma }(t_{n},\mathbf{y}
_{n};h_{n}),
\end{equation*}
where
\begin{equation*}
\mathbf{\digamma }(s,\mathbf{\xi };h)=\frac{1}{h}\left\{ \mathbf{\phi }(s,
\mathbf{\xi };h)+{\Lambda }^{\mathbf{\xi }}(s,\mathbf{0};h)\right\}
\end{equation*}
with $\mathbf{y}_{0}=\mathbf{x}_{0}$. Then the local truncation error $
L_{n+1}$ satisfies
\begin{equation*}
L_{n+1}=\left\Vert \mathbf{x}(t_{n+1};\mathbf{x}_{0})-\mathbf{x}(t_{n};
\mathbf{x}_{0})-h_{n}\mathbf{\digamma }(t_{n},\mathbf{x}(t_{n};\mathbf{x}
_{0});h_{n})\right\Vert \leq C_{1}(\mathbf{x}_{0})h_{n}^{\gamma +1}
\end{equation*}
for all $t_{n},t_{n+1}\in \left( t\right) _{h}$. Moreover, if $\mathbf{
\digamma }$ satisfies the local Lipschitz condition
\begin{equation}
\left\Vert \mathbf{\digamma (}s,\mathbf{\xi }_{2};h)-\mathbf{\digamma }(s,
\mathbf{\xi }_{1};h)\right\Vert \leq B_{\epsilon }\text{ }\left\Vert \mathbf{
\xi }_{2}-\mathbf{\xi }_{1}\right\Vert \text{, \ \ with }B_{\epsilon }>0
\text{ and }\mathbf{\xi }_{1},\mathbf{\xi }_{2}\in \epsilon (\mathbf{\xi }
)\subset \mathcal{D}, \label{Lipschitz}
\end{equation}
where $\epsilon (\mathbf{\xi })$ is a neighborhood of $\mathbf{\xi }$ for
each $\mathbf{\xi }\in \mathcal{D}$, then for $h$ small enough there
exists a positive constant $C_{2}(\mathbf{x}_{0})$ depending only on $
\mathbf{x}_{0}$ such that
\begin{equation*}
\left\Vert \mathbf{x}(t_{n+1};\mathbf{x}_{0})-\mathbf{y}_{n+1}\right\Vert
\leq C_{2}(\mathbf{x}_{0})h^{\gamma }
\end{equation*}
for all $t_{n+1}\in \left( t\right) _{h}$.
\end{lemma}
\begin{proof}
Taking into account that
\begin{equation*}
\mathbf{x}(t_{n+1};\mathbf{x}_{0})=\mathbf{y}_{LL}\left( t_{n}+h_{n};t_{n},
\mathbf{x}(t_{n};\mathbf{x}_{0})\right) +\mathbf{r}\left( t_{n}+h_{n};t_{n},
\mathbf{x}(t_{n};\mathbf{x}_{0})\right) ,
\end{equation*}
where $\mathbf{y}_{LL}$ and $\mathbf{r}$ are defined as in (\ref{ODE_HLLA-2b}
) and (\ref{ODE-HLLA-2}), respectively, it is obtained that
\begin{equation*}
L_{n+1}=\left\Vert \mathbf{r}\left( t_{n}+h_{n};t_{n},\mathbf{x}(t_{n};
\mathbf{x}_{0})\right) -{\Lambda }^{\mathbf{x}(t_{n};\mathbf{x}_{0})}\mathbf{
(}t_{n},\mathbf{0};h_{n})\right\Vert ,
\end{equation*}
where $L_{n+1}$ denotes the local truncation error of the discretization
under consideration. Since $\mathbf{r}\left( t_{n}+h_{n};t_{n},\mathbf{x}
(t_{n};\mathbf{x}_{0})\right) $ is the exact solution of the equation (\ref
{Diff. Equat for rn})-(\ref{Initial Cond. Diff. Equat for rn}) with $\mathbf{
y}_{n}=\mathbf{x}(t_{n};\mathbf{x}_{0})$ at $t_{n+1}$ and $\mathbf{u}_{n+1}=
\mathbf{u}_{n}+{\Lambda }^{\mathbf{x}(t_{n};\mathbf{x}_{0})}\left( t_{n},
\mathbf{u}_{n};h_{n}\right) $ is the approximate solution of that equation
at $t_{n+1}$ given by an order $\gamma $ numerical integrator, there exists
a positive constant $C_{1}(\mathbf{x}_{0})$ such that
\begin{equation*}
\left\Vert \mathbf{r}\left( t_{n}+h_{n};t_{n},\mathbf{x}(t_{n};\mathbf{x}
_{0})\right) -{\Lambda }^{\mathbf{x}(t_{n};\mathbf{x}_{0})}\mathbf{(}t_{n},
\mathbf{0};h_{n})\right\Vert \leq C_{1}(\mathbf{x}_{0})h_{n}^{\gamma +1},
\end{equation*}
which provides the stated bound for $L_{n+1}$.
On the other hand, since the compact set $\mathcal{X}=\left\{ \mathbf{x}
\left( t;\mathbf{x}_{0}\right) :t\in \left[ t_{0},T\right] \right\} $ is
contained in the open set $\mathcal{D}\subset \mathbb{R}^{d}$, there exists $
\varepsilon >0$ such that the compact set
\begin{equation*}
\mathcal{A}_{\varepsilon }=\left\{ \xi \in \mathbb{R}^{d}:\underset{\mathbf{x
}\left( t;\mathbf{x}_{0}\right) \in \mathcal{X}}{\min }\left\Vert \xi -
\mathbf{x}\left( t;\mathbf{x}_{0}\right) \right\Vert \leq \varepsilon
\right\}
\end{equation*}
is contained in $\mathcal{D}$. Since $\mathbf{\digamma }$ satisfies the
local Lipschitz condition (\ref{Lipschitz}), Lemma 2 in \cite{Perko01} (pp.
92) implies the existence of a positive constant $L$ such that
\begin{equation}
\left\Vert \mathbf{\digamma (}s,\mathbf{\xi }_{2};h)-\mathbf{\digamma }(s,
\mathbf{\xi }_{1};h)\right\Vert \leq L\text{ }\left\Vert \mathbf{\xi }_{2}-
\mathbf{\xi }_{1}\right\Vert \label{Global Lipschitz}
\end{equation}
for all $\mathbf{\xi }_{1},\mathbf{\xi }_{2}\in \mathcal{A}_{\varepsilon }$.
Hence, the stated estimate $\left\Vert \mathbf{x}(t_{n+1};
\mathbf{x}_{0})-\mathbf{y}_{n+1}\right\Vert \leq C_{2}(\mathbf{x}
_{0})h^{\gamma }$ for the global error straightforwardly follows
from the Lipschitz condition (\ref{Global Lipschitz}) and Theorem
3.6 in \cite{Hairer-Wanner93}, where $C_{2}(\mathbf{x}_{0})$ is a
positive constant. Finally, in order to guarantee that
$\mathbf{y}_{n+1}\in \mathcal{A}_{\varepsilon }$ for all
$n=0,...,N-1,$ and so that the LLRK
discretization is well-defined, it is sufficient that $0<h<\delta $, where $
\delta $ is chosen in such a way that $C_{2}(\mathbf{x}_{0})\delta ^{\gamma
}\leq \varepsilon $.
\end{proof}
Note that this lemma requires an order $\gamma$ numerical integrator for
the auxiliary equation (\ref{Diff. Equat for rn})-(\ref{Initial Cond. Diff.
Equat for rn}). For this, certain conditions on the vector field $\mathbf{q}$
of this equation have to be assumed (usually, Lipschitz and smoothness
conditions). The next two lemmas show that the function $\mathbf{\phi}$
, and so the vector field $\mathbf{q}$, inherits such
conditions from the vector field $\mathbf{f}$.
\begin{lemma}
\label{Lemma de phi LLT} Let $\mathbf{\varphi}(.;h)=\frac{1}{h}\mathbf{\phi }
\left( .;h\right) $. Suppose that
\begin{equation*}
\mathbf{f}\in \mathcal{C}^{p+1,q+1}\left( [t_{0},T]\times \mathcal{D},
\mathbb{R}^{d}\right) ,
\end{equation*}
where $p,q\in\mathbb{N}$. Then $\mathbf{\varphi}\in\mathcal{C}
^{p,q,r}([t_{0},T]\times\mathcal{D}\times\mathbb{R}_{+},\mathbb{R}^{d})$ for
all $r\in\mathbb{N}$.
\end{lemma}
\begin{proof}
Let $\vartheta _{j}$ be the analytic function recursively defined by
\begin{equation*}
\vartheta _{j}\left( z\right) =\left\{
\begin{array}{cl}
e^{z}, & j=0, \\
\left( \vartheta _{j-1}\left( z\right) -1/(j-1)!\right) /z, & j=1,2,\ldots
\end{array}
\right.
\end{equation*}
for $z\in \mathbb{C}$. Since
\begin{equation*}
\vartheta _{j}\left( s\mathbf{M}\right) =\frac{1}{\left( j-1\right) !s^{j}}
\int_{0}^{s}e^{\left( s-u\right) \mathbf{M}}u^{j-1}du,
\end{equation*}
for all $s\in \mathbb{R}_{+}$ and $\mathbf{M}\in \mathbb{R}^{d\times d}$
(see, for instance, \cite{Sidje 1998}), the function $\mathbf{
\varphi }$ can be written as
\begin{equation*}
\mathbf{\varphi }\left( \tau ,\mathbf{\xi };\delta \right) =\vartheta
_{1}\left( \delta \mathbf{f}_{\mathbf{x}}\left( \tau ,\mathbf{\xi }\right)
\right) \mathbf{f}\left( \tau ,\mathbf{\xi }\right) +\vartheta _{2}\left(
\delta \mathbf{f}_{\mathbf{x}}\left( \tau ,\mathbf{\xi }\right) \right)
\mathbf{f}_{t}\left( \tau ,\mathbf{\xi }\right) \delta
\end{equation*}
for all $\tau \in \mathbb{R}$, $\mathbf{\xi }\in \mathbb{R}^{d}$ and $\delta
\geq 0$. Thus, from the analyticity of $\vartheta _{j}$ and the continuity
of $\mathbf{f}$, the proof is completed.
\end{proof}
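The recursion for $\vartheta_j$ and the integral identity used in this proof are easy to check numerically. The snippet below (a scalar sanity check only, using the convention $\vartheta_0(z)=e^{z}$, $\vartheta_j(z)=(\vartheta_{j-1}(z)-1/(j-1)!)/z$ for $j\geq 1$) compares the recursion against a quadrature of the integral:

```python
import numpy as np
from math import factorial
from scipy.integrate import quad

def theta(j, z):
    """vartheta_j(z) via the recursion: vartheta_0(z) = e^z and
    vartheta_j(z) = (vartheta_{j-1}(z) - 1/(j-1)!)/z for j >= 1."""
    v = np.exp(z)
    for k in range(1, j + 1):
        v = (v - 1.0 / factorial(k - 1)) / z
    return v

# scalar check of the identity
# vartheta_j(s*m) = 1/((j-1)! s^j) * int_0^s e^{(s-u) m} u^{j-1} du
s, m, j = 0.7, -1.3, 2
integral, _ = quad(lambda u: np.exp((s - u) * m) * u ** (j - 1), 0.0, s)
assert abs(theta(j, s * m) - integral / (factorial(j - 1) * s ** j)) < 1e-8
```

In the matrix case the same identity holds with $m$ replaced by $\mathbf{M}$, which is what makes the expression for $\mathbf{\varphi}$ in terms of $\vartheta_1$ and $\vartheta_2$ computable via quadrature-free matrix-function algorithms.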
\begin{lemma}
\label{Lemma for Local Error LLRK} Let $\mathbf{f}$ and $\mathbf{q}$ be the
vector fields of the ODEs (\ref{ODE-LLA-1})-(\ref{ODE-LLA-2}) and (\ref
{Diff. Equat for rn})-(\ref{Initial Cond. Diff. Equat for rn}), respectively.
\end{lemma}
\begin{enumerate}
\item[\textit{i)}] There exists $\varepsilon >0$ such that the compact set
\begin{equation*}
\mathcal{A}_{\varepsilon }=\left\{ \mathbf{z}\in \mathbb{R}^{d}:\underset{
t\in \lbrack t_{0},T]}{\min }\left\Vert \mathbf{x}\left( t\right) -\mathbf{z}
\right\Vert \leq \varepsilon \right\}
\end{equation*}
is contained in $\mathcal{D}$. Moreover, there exist a compact set $
\mathcal{K}_{\varepsilon }$ contained in an open neighborhood of $\mathbf{0}$
and a $\delta _{\varepsilon }>0$, such that
\begin{equation*}
\mathbf{x}(t)+\mathbf{\phi }\left( t,\mathbf{x}(t);\delta \right) +\mathbf{
\xi }\in \mathcal{A}_{\varepsilon },
\end{equation*}
for all $\delta \in \lbrack 0,\delta _{\varepsilon }]$, $\mathbf{\xi }\in
\mathcal{K}_{\varepsilon }$ and $t\in \lbrack t_{0},T]$.
\item[\textit{ii)}] If $\mathbf{f}$ and its first partial derivatives are
bounded on $[t_{0},T]\times \mathcal{D}$, and $\mathbf{f}(t,.)$ is a locally
Lipschitz function on $\mathcal{D}$ with Lipschitz constant independent of $
t $, then there exists a positive constant $P$ such that
\begin{equation*}
\left\Vert \mathbf{q(}t,\mathbf{x}(t);t+\delta ,\mathbf{\xi }_{2}\mathbf{)-q(
}t,\mathbf{x}(t);t+\delta ,\mathbf{\xi }_{1}\mathbf{)}\right\Vert \leq
P\left\Vert \mathbf{\xi }_{2}-\mathbf{\xi }_{1}\right\Vert
\end{equation*}
for all $\delta \in \lbrack 0,\delta _{\varepsilon }]$, $\mathbf{\xi }_{1},
\mathbf{\xi }_{2}\in \mathcal{K}_{\varepsilon }$ and $t\in \lbrack t_{0},T]$.
\item[\textit{iii)}] If $\mathbf{f}\in \mathcal{C}^{p}\left( [t_{0},T]\times
\mathcal{D},\mathbb{R}^{d}\right) $ for some $p\in \mathbb{N}$, then $
\mathbf{q(}t,\mathbf{x}(t);\cdot \mathbf{)}\in \mathcal{C}^{p}([t,t+\delta
_{\varepsilon }]\times \mathcal{K}_{\varepsilon },\mathbb{R}^{d})$ for all $
t\in \lbrack t_{0},T]$.
\end{enumerate}
\begin{proof}
The first part of assertion \textit{i)} follows from the fact that $\mathcal{
X}=\left\{ \mathbf{x}\left( t\right) :t\in \left[ t_{0},T\right] \right\} $
is a compact set contained in the open set $\mathcal{D}$, whereas its
second part results from the continuity of $\mathbf{\phi }$ on $
[t_{0},T]\times \mathcal{A}_{\varepsilon }\times \lbrack 0,\delta
_{\varepsilon }]$ stated in Lemma \ref{Lemma de phi LLT}. Assertion
\textit{ii)} is a straightforward consequence of Lemma 2 in \cite{Perko01}
(pp. 92). Assertion \textit{iii)} follows from the definition of the vector
field $\mathbf{q}$ and Lemma \ref{Lemma de phi LLT}.
\end{proof}
The next theorem characterizes the convergence rate of LLRK discretizations.
For this purpose, for all $t_{n}\in\left( t\right) _{h}$, denote by
\begin{equation}
\mathbf{y}_{n+1}=\mathbf{y}_{n}+h_{n}\mathbf{\varphi}_{\gamma}(t_{n},\mathbf{
y}_{n};h_{n}) \label{LLRK_Discretizat}
\end{equation}
the LL discretization defined in (\ref{ODE-HLLA-1}), taking $\mathbf{r}
_{\kappa}$ as an order $\gamma$ RK scheme of the form (\ref{RK}). That is,
\begin{equation*}
\mathbf{\varphi}_{\gamma}(t_{n},\mathbf{y}_{n};h_{n})=\frac{1}{h_{n}}\left\{
\mathbf{\phi}\left( t_{n},\mathbf{y}_{n};h_{n}\right) +\mathbf{\rho}\left(
t_{n},\mathbf{y}_{n};h_{n}\right) \right\} ,
\end{equation*}
where $\mathbf{\phi}$ is defined by (\ref{ODE-LLA-7}).
\begin{theorem}
\label{Local Error LLRK}Suppose that
\begin{equation}
\mathbf{f}\in \mathcal{C}^{\gamma +1}([t_{0},T]\times \mathcal{D},\mathbb{R}
^{d}). \label{ODE-CONV-8}
\end{equation}
Then
\begin{equation*}
\left\Vert \mathbf{x}(t_{n}+h;\mathbf{x}_{0})-\mathbf{x}(t_{n};\mathbf{x}
_{0})-h\mathbf{\varphi }_{\gamma }(t_{n},\mathbf{x}(t_{n};\mathbf{x}
_{0});h)\right\Vert \leq C_{1}(\mathbf{x}_{0})h^{\gamma +1},
\end{equation*}
and the LLRK discretization (\ref{LLRK_Discretizat}) satisfies
\begin{equation*}
\left\Vert \mathbf{x}(t_{n+1};\mathbf{x}_{0})-\mathbf{y}_{n+1}\right\Vert
\leq C_{2}(\mathbf{x}_{0})h^{\gamma },
\end{equation*}
for all $t_{n},t_{n+1}\in \left( t\right) _{h}$, where $C_{1}(\mathbf{x}
_{0}) $ and $C_{2}(\mathbf{x}_{0})$ are positive constants depending only on
$\mathbf{x}_{0}$.
\end{theorem}
\begin{proof}
By Theorem 3.1 in \cite{Hairer-Wanner93}, the local truncation error of the
order $\gamma$\ explicit RK scheme (\ref{RK}) for the equation (\ref{Diff.
Equat for rn})-(\ref{Initial Cond. Diff. Equat for rn}) with $\mathbf{y}_{n}=
\mathbf{x}(t_{n};\mathbf{x}_{0})$ is
\begin{equation}
\left\Vert \mathbf{u}(t_{n}+h)-\mathbf{\rho}\left( t_{n},\mathbf{x}(t_{n};
\mathbf{x}_{0});h\right) \right\Vert \leq C(\mathbf{x}_{0})\text{ }
h^{\gamma+1}, \label{ODE-CONV-9}
\end{equation}
where
\begin{equation*}
C(\mathbf{x}_{0})=\frac{1}{(\gamma+1)!}\underset{\theta\in\lbrack0,1]}{\max }
\left\Vert \frac{d^{\gamma+1}}{dt^{\gamma+1}}\mathbf{u}(t_{n}+\theta
h)\right\Vert +\frac{1}{\gamma!}\sum\limits_{i=1}^{s}\left\vert
b_{i}\right\vert \underset{\theta\in\lbrack0,1]}{\max}\left\Vert \frac{
d^{\gamma}}{dt^{\gamma}}\mathbf{k}_{i}(\theta h)\right\Vert
\end{equation*}
with
\begin{equation*}
\mathbf{k}_{i}(\theta h)=\mathbf{q}\left( t_{n},\mathbf{x}(t_{n};\mathbf{x}
_{0}),t_{n}\mathbf{+}c_{i}\theta h\mathbf{,}\theta h\sum _{j=1}^{i-1}a_{ij}
\mathbf{k}_{j}(\theta h)\right) .
\end{equation*}
By taking into account that the solution $\mathbf{r}$ of (\ref{Diff. Equat
for rn})-(\ref{Initial Cond. Diff. Equat for rn}) is the remainder term of
the LL approximation and by setting $\mathbf{y}_{n}=\mathbf{x}(t_{n};\mathbf{
x}_{0})$ in (\ref{Diff. Equat for rn}), it follows that
\begin{equation}
\mathbf{u}\left( t_{n}+\theta h\right) =\mathbf{x}\left( t_{n}+\theta h;
\mathbf{x}_{0}\right) -\mathbf{x}\left( t_{n};\mathbf{x}_{0}\right) -\mathbf{
\phi }\left( t_{n},\mathbf{x}\left( t_{n};\mathbf{x}_{0}\right) ;\theta
h\right) , \label{equat_u}
\end{equation}
and so
\begin{equation*}
\left\Vert \frac{d^{\gamma +1}}{dt^{\gamma +1}}\mathbf{u}(t_{n}+\theta
h)\right\Vert =\left\Vert \frac{d^{\gamma }}{dt^{\gamma }}\mathbf{q}(t_{n},
\mathbf{x}(t_{n};\mathbf{x}_{0});t_{n}+\theta h,\mathbf{u}(t_{n}+\theta
h))\right\Vert ,
\end{equation*}
where the derivative on the right-hand side of the last expression is taken
with respect to the last two arguments of the function $\mathbf{q}$.
Condition (\ref{ODE-CONV-8}), assertion \textit{iii)} of Lemma \ref{Lemma
for Local Error LLRK} and expression (\ref{equat_u}) imply that $\mathbf{q}
(\cdot ,\mathbf{x}(\cdot ,\mathbf{x}_{0});\cdot ,\mathbf{u}(\cdot ))\in
\mathcal{C}^{\gamma }([t_{0},T],\mathbb{R}^{d})$. Hence, there exists a
constant $M$ such that
\begin{equation*}
\underset{\theta \in \lbrack 0,1],\text{ }t_{n}\in \lbrack t_{0},T]}{\max }
\left\Vert \frac{d^{\gamma +1}}{dt^{\gamma +1}}\mathbf{u}(t_{n}+\theta
h)\right\Vert \leq M.
\end{equation*}
Likewise, condition (\ref{ODE-CONV-8}) and Lemma \ref{Lemma for Local Error
LLRK} imply that
\begin{equation*}
\underset{\theta \in \lbrack 0,1],\text{ }t_{n}\in \lbrack t_{0},T]}{\max }
\left\Vert \frac{d^{\gamma }}{dt^{\gamma }}\mathbf{k}_{i}(\theta
h)\right\Vert \leq M.
\end{equation*}
Therefore, $C(\mathbf{x}_{0})$ in (\ref{ODE-CONV-9}) is bounded as a
function of $\mathbf{x}_{0}\in \mathcal{D}$.
In addition, Lemma \ref{Lemma de phi LLT} and Lemma 3.5 in \cite
{Hairer-Wanner93}, combined with assertion \textit{iii)} of Lemma \ref{Lemma
for Local Error LLRK}, imply that $\mathbf{\phi }$ and $\mathbf{\rho }$
satisfy the local Lipschitz condition (\ref{Lipschitz}), and so does the
function
\begin{equation*}
\mathbf{\varphi }_{\gamma }(t_{n},\mathbf{y}_{n};h)=\frac{1}{h}\left\{
\mathbf{\phi }\left( t_{n},\mathbf{y}_{n};h\right) +\mathbf{\rho }\left(
t_{n},\mathbf{y}_{n};h\right) \right\}
\end{equation*}
as well. This and Lemma \ref{LLRK local error gen} complete the proof.
\end{proof}
Note that the Lipschitz and smoothness conditions in Lemma \ref{LLRK local
error gen} and Theorem \ref{Local Error LLRK} are the usual ones required to
derive the convergence of numerical integrators (see, e.g., Theorems 3.1 and
3.6 in \cite{Hairer-Wanner93}). These conditions directly imply the
smoothness of the solution of the ODE in a bounded domain (see, e.g.,
Theorem 1 pp. 79 and Remark 1 pp. 83 in \cite{Perko01}). In this way, to
ensure the convergence of the LLRK integrators, the involved RK coefficients
are not constrained by any stability condition; they just need to satisfy
the usual order conditions for RK schemes. This is a major difference from
the Rosenbrock and exponential integrators, and it makes the LLRK methods
simpler and more flexible. Further note that, like those integrators, the
LLRK methods are trivially A-stable.
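The two properties just discussed, order-$\gamma $ convergence and exactness on linear problems (hence A-stability), can be checked numerically. The following Python fragment is a minimal sketch for a scalar autonomous ODE with the classical RK4 tableau ($\gamma =4$); the helper names are illustrative and the code is a toy transcription of the maps $\mathbf{\phi }$ and $\mathbf{\rho }$, not a reference implementation.

```python
import math

def phi1(a, d):
    """Scalar version of d*Phi(xi,d) = integral_0^d exp(a*u) du, with a = f'(xi)."""
    return (math.exp(a * d) - 1.0) / a if a != 0.0 else d

def llrk4_step(f, df, xi, h):
    """One LLRK step for a scalar autonomous ODE x' = f(x):
    exact flow of the linearized part plus classical RK4 on the remainder q."""
    a, fx = df(xi), f(xi)
    m = lambda s: phi1(a, s) * fx                       # LL increment at time s
    q = lambda s, u: f(xi + m(s) + u) - a * m(s) - fx   # remainder vector field
    k1 = q(0.0, 0.0)                                    # vanishes identically
    k2 = q(h / 2, h / 2 * k1)
    k3 = q(h / 2, h / 2 * k2)
    k4 = q(h, h * k3)
    return xi + m(h) + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

# (a) Stiff linear test x' = lam*x: every stage k_i is zero, so one step equals
# exp(lam*h) times the state up to roundoff -- the scheme is A-stable.
lam, h = -50.0, 0.1
err_lin = abs(llrk4_step(lambda x: lam * x, lambda x: lam, 1.0, h) - math.exp(lam * h))

# (b) Nonlinear test x' = x^2, x(0) = 1/2, exact solution x(t) = 1/(2 - t):
# halving the step size should divide the global error at T = 1 by roughly 2^4.
def global_error(n):
    x, step = 0.5, 1.0 / n
    for _ in range(n):
        x = llrk4_step(lambda v: v * v, lambda v: 2.0 * v, x, step)
    return abs(x - 1.0)

e1, e2 = global_error(20), global_error(40)
ratio = e1 / e2  # roughly 2**4 for an order-4 discretization
```

In the multidimensional case `phi1` becomes the matrix integral ${\Phi }$, computable, for instance, through an augmented matrix exponential.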
\section{Steady states \label{Sec. teoria}}
In this section the relation between the steady states of an autonomous
equation
\begin{align}
\frac{d\mathbf{x}\left( t\right) }{dt} & =\mathbf{f}\left( \mathbf{x}\left(
t\right) \right) \text{, \ \ }t\in\left[ t_{0},T\right] , \label{ODE-SS-0a}
\\
\mathbf{x}(t_{0}) & =\mathbf{x}_{0}\in\mathbb{R}^{d}, \label{ODE-SS-0b}
\end{align}
and those of its LLRK discretizations is considered. For the sake of
simplicity, a uniform time partition $h_{n}=h$ is adopted.
It will be convenient to rewrite the order $\gamma $ LLRK discretization in
the form
\begin{equation}
\mathbf{y}_{n+1}=\mathbf{y}_{n}+h\mathbf{\varphi }_{\gamma }(\mathbf{y}
_{n},h), \label{ODE-SS-1}
\end{equation}
where
\begin{equation}
\mathbf{\varphi }_{\gamma }\left( \mathbf{\xi ,}\delta \right) ={\Phi }(
\mathbf{\xi },\delta )\mathbf{f}(\mathbf{\xi })+\sum_{i=1}^{s}b_{i}\mathbf{k}
_{i}\left( \mathbf{\xi ,}\delta \right) , \label{ODE-SS-9}
\end{equation}
with
\begin{equation}
{\Phi }(\mathbf{\xi },\delta )=\frac{1}{\delta }\int\limits_{0}^{\delta }e^{
\mathbf{f}_{\mathbf{x}}(\mathbf{\xi })u}du, \label{ODE-SS-3}
\end{equation}
\begin{equation*}
\mathbf{k}_{i}\left( \mathbf{\xi ,}\delta \right) =\mathbf{q}(\mathbf{\xi };
\text{ }c_{i}\delta ,\text{ }\delta \sum_{j=1}^{i-1}a_{ij}\mathbf{k}
_{j}\left( \mathbf{\xi ,}\delta \right) )
\end{equation*}
and
\begin{equation*}
\mathbf{q(\xi };\delta \mathbf{,u)}=\mathbf{f(\xi }+\delta {\Phi }(\mathbf{
\xi },\delta )\mathbf{f}(\mathbf{\xi })+\mathbf{u})-\mathbf{f}_{\mathbf{x}}(
\mathbf{\xi })\delta {\Phi }(\mathbf{\xi },\delta )\mathbf{f}(\mathbf{\xi })-
\mathbf{f}\left( \mathbf{\xi }\right) .
\end{equation*}
For later reference, the following lemma states some useful properties of
the function $\mathbf{\varphi}_{\gamma}$ on neighborhoods of invariant sets
of ODEs.
\begin{lemma}
\label{Lemma 5.4}Let $\Sigma\subset\mathbb{R}^{d}$ be an invariant set for
the flow of the equation (\ref{ODE-SS-0a}). Let $\mathcal{K}$ and $\Omega$
be, respectively, compact and bounded open sets such that $\Sigma\subset
\mathcal{K}\subset\Omega$. Suppose that the solution $\mathbf{x}$ of (\ref
{ODE-SS-0a})-(\ref{ODE-SS-0b}) fulfils the condition
\begin{equation}
\mathbf{x}(t;\mathbf{x}_{0})\subset\Omega\text{ for all initial point }
\mathbf{x}_{0}\in\mathcal{K}\text{ and }t\in\lbrack t_{0},T], \label{H1}
\end{equation}
and the vector field $\mathbf{f}$ satisfies the continuity condition
\begin{equation}
\mathbf{f}\in\mathcal{C}^{\gamma+1}(\Omega,\mathbb{R}^{d}). \label{H2}
\end{equation}
Further, let
\begin{equation*}
\mathbf{y}_{n+1}=\mathbf{y}_{n}+h\mathbf{\varphi}_{\gamma}(\mathbf{y}_{n},h)
\end{equation*}
be the order $\gamma$ LLRK discretization defined by (\ref{ODE-SS-1}). Then
\begin{enumerate}
\item[\textit{i)}] $\mathbf{\varphi}_{\gamma}\rightarrow\mathbf{f}$ and $
\partial\mathbf{\varphi}_{\gamma}\mathbf{/\partial y}_{n}\rightarrow \mathbf{
f}_{\mathbf{x}}$ \textit{as }$h\rightarrow0$\textit{\ uniformly in }$
\mathcal{K}$\textit{,}
\item[\textit{ii)}] $\left\Vert (\mathbf{x}(t_{0}+h;\mathbf{x}_{0})-\mathbf{x
}_{0})/h-\mathbf{\varphi }_{\gamma }(\mathbf{x}_{0},h)\right\Vert
=O(h^{\gamma })$\textit{\ uniformly for }$\mathbf{x}_{0}\in \mathcal{K}$.
\end{enumerate}
\end{lemma}
\begin{proof}
According to Lemma 5 in \cite{Jimenez02 AMC}, $\mathbf{f}\in\mathcal{C}
^{\gamma+1}(\Omega)$ implies that ${\Phi f}\rightarrow\mathbf{f}$ and $
\partial({\Phi f)/\partial\xi}\rightarrow\mathbf{f}_{\mathbf{x}}$ as $
h\rightarrow0$\ uniformly in $\mathcal{K}$.
On the other hand, $\mathbf{k}_{i}(\mathbf{\xi },0)=\mathbf{0}$, for all $
\mathbf{\xi }\in \Omega $ and $i=1,\ldots ,s$. Besides, since
\begin{equation*}
\frac{\partial \mathbf{k}_{i}}{\mathbf{\partial \xi }}(\mathbf{\xi },\delta
)=\frac{\partial \mathbf{q}}{\mathbf{\partial \xi }}(\mathbf{\xi };\text{ }
c_{i}\delta ,\delta \sum_{j=1}^{i-1}a_{ij}\mathbf{k}_{j}(\mathbf{\xi }
,\delta )),
\end{equation*}
where
\begin{align*}
\frac{\partial \mathbf{q}}{\mathbf{\partial \xi }}\mathbf{(\xi };\delta
\mathbf{,u)}& =\mathbf{\mathbf{f}_{\mathbf{x}}(\xi }+\delta {\Phi }(\mathbf{
\xi },\delta )\mathbf{f}(\mathbf{\xi })+\mathbf{u})\text{ }\frac{\partial }{
\mathbf{\partial \xi }}\left( \mathbf{\xi }+\delta {\Phi }(\mathbf{\xi }
,\delta )\mathbf{f}(\mathbf{\xi })+\mathbf{u}\right) \\
& -\delta \frac{\partial }{\mathbf{\partial \xi }}\left( \mathbf{f}_{\mathbf{
x}}(\mathbf{\xi }){\Phi }(\mathbf{\xi },\delta )\mathbf{f}(\mathbf{\xi }
)\right) -\mathbf{f}_{\mathbf{x}}\left( \mathbf{\xi }\right) +\mathbf{
\mathbf{f}_{\mathbf{x}}(\xi }+\delta {\Phi }(\mathbf{\xi },\delta )\mathbf{f}
(\mathbf{\xi })+\mathbf{u})\text{ }\frac{\partial \mathbf{u}}{\mathbf{
\partial \xi }}
\end{align*}
with
\begin{equation*}
\mathbf{u}=\delta \sum_{j=1}^{i-1}a_{ij}\mathbf{k}_{j}(\mathbf{\xi },\delta )
\text{ \ \ \ \ \ and \ \ \ \ }\frac{\partial \mathbf{u}}{\mathbf{\partial
\xi }}=\delta \sum_{j=1}^{i-1}a_{ij}\frac{\partial }{\mathbf{\partial \xi }}
\mathbf{k}_{j}(\mathbf{\xi },\delta ),
\end{equation*}
it follows that $\partial \mathbf{k}_{i}(\mathbf{\xi },0)\mathbf{/\partial
\xi }=\mathbf{0}$ for all $i=1,\ldots ,s$. Thus, since each $\mathbf{k}_{i}$
and $\partial \mathbf{k}_{i}\mathbf{/\partial \xi }$ are continuous
functions on $\Omega \times \lbrack 0,1]$, it holds that $\mathbf{k}
_{i}\rightarrow \mathbf{0}$ and $\partial \mathbf{k}_{i}\mathbf{/\partial
\xi }\rightarrow \mathbf{0}$ as $h\rightarrow 0$\ uniformly in the compact
set $\mathcal{K}$. Hence, assertion \textit{i)} holds.
From Theorem \ref{Local Error LLRK} we have
\begin{equation*}
\left\Vert (\mathbf{x}(t_{0}+h;\mathbf{x}_{0})-\mathbf{x}_{0})/h-\mathbf{
\varphi }_{\gamma }(\mathbf{x}_{0},h)\right\Vert \leq C(\mathbf{x}
_{0})h^{\gamma },
\end{equation*}
where
\begin{equation*}
C(\mathbf{x}_{0})=\frac{1}{(\gamma +1)!}\underset{\theta \in \lbrack 0,1]}{
\max }\left\Vert \frac{d^{\gamma +1}}{dt^{\gamma +1}}\mathbf{u}(t_{0}+\theta
h)\right\Vert +\frac{1}{\gamma !}\sum\limits_{i=1}^{s}\left\vert
b_{i}\right\vert \underset{\theta \in \lbrack 0,1]}{\max }\left\Vert \frac{
d^{\gamma }}{dt^{\gamma }}\mathbf{k}_{i}(\theta h)\right\Vert
\end{equation*}
is a positive constant depending on $\mathbf{x}_{0}$,
\begin{equation*}
\mathbf{k}_{i}(\theta h)=\mathbf{q}\left( \mathbf{x}(t_{0};\mathbf{x}
_{0});c_{i}\theta h\mathbf{,}\theta h\sum_{j=1}^{i-1}a_{ij}\mathbf{k}
_{j}(\theta h)\right) ,\text{\ \ \ \ \ \ \ \ }i=1,\ldots ,s.
\end{equation*}
and $\mathbf{u}\left( t_{0}+\theta h\right) =\mathbf{x}\left( t_{0}+\theta h;
\mathbf{x}_{0}\right) -\mathbf{x}\left( t_{0};\mathbf{x}_{0}\right) -\mathbf{
\phi }\left( t_{0},\mathbf{x}\left( t_{0};\mathbf{x}_{0}\right) ;\theta
h\right) $.
Clearly,
\begin{eqnarray*}
\left\Vert \frac{d^{\gamma +1}}{dt^{\gamma +1}}\mathbf{u}(s)\right\Vert
&=&\left\Vert \frac{d^{\gamma +1}}{dt^{\gamma +1}}(\mathbf{x}\left( s;
\mathbf{x}_{0}\right) -\mathbf{x}\left( t_{0};\mathbf{x}_{0}\right) -\mathbf{
\phi }\left( t_{0},\mathbf{x}\left( t_{0};\mathbf{x}_{0}\right)
;s-t_{0}\right) )\right\Vert \\
&\leq &\left\Vert \frac{d^{\gamma }}{dt^{\gamma }}\mathbf{f}(\mathbf{x}
\left( s;\mathbf{x}_{0}\right) )\right\Vert +\left\Vert \frac{d^{\gamma +1}}{
dt^{\gamma +1}}\mathbf{\phi }\left( t_{0},\mathbf{x}\left( t_{0};\mathbf{x}
_{0}\right) ;s-t_{0}\right) \right\Vert
\end{eqnarray*}
for all $s\in \lbrack t_{0},t_{0}+h]$. Since $\mathbf{x}(t;\mathbf{x}
_{0})\in \Omega $ for all $t\in \lbrack t_{0},T]$ and $\mathbf{x}_{0}\in
\mathcal{K}\subset \Omega $, there exists a compact set $\mathcal{A}
_{h}$ depending on $h$ such that $\mathcal{K}\subset \mathcal{A}_{h}\subset
\Omega $ and $\mathbf{\mathbf{x}}(s\mathbf{;\mathbf{x}}_{0})\in
\mathcal{A}_{h}$ for all $s\in \lbrack t_{0},t_{0}+h]$\ and $\mathbf{\mathbf{
x}}_{0}\in \mathcal{K}$. In addition, condition $\mathbf{f}\in
\mathcal{C}^{\gamma +1}(\Omega ,\mathbb{R}^{d})$ implies that there exists a
constant $M$ such that
\begin{equation*}
\underset{\xi \in \mathcal{A}_{h}}{\sup }\left\Vert \frac{d^{\gamma }}{
dt^{\gamma }}\mathbf{f}(\xi )\right\Vert \leq M,
\end{equation*}
it is obtained that
\begin{equation*}
\underset{\theta \in \lbrack 0,1],\text{ }\mathbf{\mathbf{x}}_{0}\in
\mathcal{K}}{\max }\left\Vert \frac{d^{\gamma }}{dt^{\gamma }}\mathbf{f}(
\mathbf{x}\left( t_{0}+\theta h;\mathbf{x}_{0}\right) )\right\Vert \leq
\underset{\xi \in \mathcal{A}_{h}}{\sup }\left\Vert \frac{d^{\gamma }}{
dt^{\gamma }}\mathbf{f}(\xi )\right\Vert \leq M.
\end{equation*}
Taking into account that $\mathbf{\phi }$ and $\mathbf{k}_{i}$ are functions
of $\mathbf{f}$, we can proceed similarly to find a bound $B>0$, independent
of $\theta $ and $\mathbf{x}_{0}$, for $\left\Vert \frac{d^{\gamma +1}}{
dt^{\gamma +1}}\mathbf{\phi }\left( t_{0},\mathbf{x}\left( t_{0};\mathbf{x}
_{0}\right) ;s-t_{0}\right) \right\Vert $ and $\left\Vert \frac{d^{\gamma }}{
dt^{\gamma }}\mathbf{k}_{i}(\theta h)\right\Vert $. Hence, we conclude that $
C(\mathbf{x}_{0})$ is bounded on $\mathcal{K}$ by a constant independent of $
\mathbf{x}_{0}$, and so \textit{ii)} follows.
\end{proof}
\subsection{Fixed points and linearization preserving}
\begin{theorem}
\label{Prop. 5.2 LLRK} Suppose that the vector field $\mathbf{f}$ of the
equation (\ref{ODE-SS-0a}) and its derivatives up to order $\gamma $\ are
defined and bounded on $\mathbb{R}^{d}$. Then, all equilibrium points of the
given ODE (\ref{ODE-SS-0a}) are fixed points of any LLRK discretization.
\end{theorem}
\begin{proof}
Let $\mathbf{\varphi }_{\gamma }$, ${\Phi }$ and $\mathbf{k}_{i}$ be the
functions defined in expression (\ref{ODE-SS-9}). If $\mathbf{\xi }$ is an
equilibrium point of (\ref{ODE-SS-0a}), then $\mathbf{f}(\mathbf{\xi })=
\mathbf{0}$ and so ${\Phi }(\mathbf{\xi },h)\mathbf{f}(\mathbf{\xi })=
\mathbf{0}$ and $\mathbf{k}_{i}(\mathbf{\xi },h)=\mathbf{0}$ for all $h$ and
$i=1,\ldots ,s$. Thus, $\mathbf{\varphi }_{\gamma }(\mathbf{\xi },h)=\mathbf{
0}$ for all $h$, which implies that $\mathbf{\xi }$ is a fixed point of the
LLRK discretization (\ref{ODE-SS-1}).
\end{proof}
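The statement can be observed directly in computations. For the logistic field $f(x)=x(1-x)$, with equilibria $0$ and $1$, an LLRK step started exactly at an equilibrium returns it unchanged for any step size, since $\mathbf{f}(\mathbf{\xi })=\mathbf{0}$ annihilates both the LL increment and all RK stages. A minimal scalar Python sketch (classical RK4 tableau assumed; names illustrative):

```python
import math

def phi1(a, d):
    # integral_0^d exp(a*u) du for scalar a = f'(xi)
    return (math.exp(a * d) - 1.0) / a if a != 0.0 else d

def llrk4_step(f, df, xi, h):
    # One scalar LLRK step: exact linearized flow plus classical RK4 on the remainder.
    a, fx = df(xi), f(xi)
    m = lambda s: phi1(a, s) * fx
    q = lambda s, u: f(xi + m(s) + u) - a * m(s) - fx
    k1 = q(0.0, 0.0)
    k2 = q(h / 2, h / 2 * k1)
    k3 = q(h / 2, h / 2 * k2)
    k4 = q(h, h * k3)
    return xi + m(h) + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

f = lambda x: x * (1.0 - x)    # logistic field: equilibria at 0 and 1
df = lambda x: 1.0 - 2.0 * x

# f(xi) = 0 forces m(s) = 0 and every k_i = 0, so equilibria are fixed points
# of the discretization for any step size h, even a large one.
fixed0 = llrk4_step(f, df, 0.0, 10.0)   # stays exactly 0.0
fixed1 = llrk4_step(f, df, 1.0, 10.0)   # stays exactly 1.0

# A nearby trajectory is attracted to the stable equilibrium 1.
traj = 0.1
for _ in range(100):
    traj = llrk4_step(f, df, traj, 0.2)
```

Note that the fixed points are preserved exactly in floating point, not merely up to discretization error.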
A numerical integrator $\mathbf{u}_{n+1}=\mathbf{u}_{n}+{\Lambda }\left(
t_{n},\mathbf{u}_{n};h_{n}\right) $ is linearization preserving at an
equilibrium point $\mathbf{\xi }$ of the ODE (\ref{ODE-SS-0a}) if from the
Taylor series expansion of ${\Lambda }\left( t_{n},\mathbf{\cdot }
;h_{n}\right) $ around $\mathbf{\xi }$ it is obtained that
\begin{equation*}
\mathbf{u}_{n+1}-\mathbf{\xi }=e^{h\mathbf{f}_{\mathbf{x}}(\mathbf{\xi })}(
\mathbf{u}_{n}-\mathbf{\xi })+O(\left\Vert \mathbf{u}_{n}-\mathbf{\xi }
\right\Vert ^{2}).
\end{equation*}
Furthermore, an integrator is said to be linearization preserving if it is
linearization preserving at all equilibrium points of the ODE \cite
{McLachlan09}.
This property ensures that the integrator correctly captures all eigenvalues
of the linearized system at every equilibrium point of an ODE, which
guarantees the exact preservation (in type and parameters) of a number of
local bifurcations of the underlying equation \cite{McLachlan09}. Certainly,
this results in a correct reproduction of the local dynamics before, during
and after a bifurcation anywhere in the phase space by the numerical
integrator.
In \cite{McLachlan09} the linearization preserving property of the LL
discretization (\ref{ODE-LLA-4}) was demonstrated. This property is also
inherited by the LLRK discretizations, as shown in the next theorem.
\begin{theorem}
Let the vector field $\mathbf{f}$ of the equation (\ref{ODE-SS-0a}) and its
derivatives up to order $2$\ be functions defined and bounded on $\mathbb{R}
^{d}$. Then, LLRK discretizations are linearization preserving.
\end{theorem}
\begin{proof}
Let $\mathbf{\xi}$ be an arbitrary equilibrium point of the ODE (\ref
{ODE-SS-0a}) and let the initial condition $\mathbf{y}_{n}$ be in a
neighborhood of $\mathbf{\xi}$.
Let us consider the Taylor expansion of $\mathbf{f}$ around $\mathbf{\xi}$
\begin{equation*}
\mathbf{f}(\mathbf{y}_{n})=\mathbf{f}_{\mathbf{x}}(\mathbf{\xi})(\mathbf{y}
_{n}-\mathbf{\xi})+O(\left\Vert \mathbf{y}_{n}-\mathbf{\xi}\right\Vert ^{2})
\end{equation*}
and the LL discretization
\begin{equation*}
\mathbf{y}_{n+1}=\mathbf{y}_{n}+h{\Phi}(\mathbf{y}_{n},h)\mathbf{f}(\mathbf{y
}_{n}),
\end{equation*}
where ${\Phi}$, defined as in (\ref{ODE-SS-3}), is a Lipschitz function
according to assertion \textit{i)} of Lemma 1 in \cite{Jimenez02 AMC}. By
combining this Taylor expansion with both the identity (\ref{ODE-LLA-3})
and the Lipschitz inequality $\left\Vert {\Phi}(\mathbf{y}_{n},h)-{\Phi }(
\mathbf{\xi},h)\right\Vert \leq \lambda\left\Vert \mathbf{y}_{n}-\mathbf{
\xi}\right\Vert $, it is obtained that
\begin{equation}
h{\Phi}(\mathbf{\xi},h)\mathbf{f}(\mathbf{y}_{n})=(e^{h\mathbf{f}_{\mathbf{x}
}(\mathbf{\xi})}-\mathbf{I})(\mathbf{y}_{n}-\mathbf{\xi})+O(\left\Vert
\mathbf{y}_{n}-\mathbf{\xi}\right\Vert ^{2}) \label{ODE-SS-11}
\end{equation}
and
\begin{equation}
\left\Vert ({\Phi}(\mathbf{y}_{n},h)-{\Phi}(\mathbf{\xi},h))\mathbf{f}(
\mathbf{y}_{n})\right\Vert \leq C\left\Vert \mathbf{y}_{n}-\mathbf{\xi }
\right\Vert ^{2}, \label{ODE-SS-12}
\end{equation}
respectively, where $C$ is a positive constant.
Now, consider the LLRK discretization
\begin{equation*}
\mathbf{y}_{n+1}=\mathbf{y}_{n}+h\mathbf{\varphi }_{\gamma }(\mathbf{y}
_{n},h),
\end{equation*}
with $\mathbf{\varphi }_{\gamma }$ defined as in (\ref{ODE-SS-9}). From the
Taylor formula with Lagrange remainder it is obtained that
\begin{align*}
\left\Vert \mathbf{q(y}_{n};h\mathbf{,u)}\right\Vert & =\left\Vert \mathbf{
f(y}_{n}+{\Phi }(\mathbf{y}_{n},h)\mathbf{f}(\mathbf{y}_{n})h+\mathbf{u})-
\mathbf{f}_{\mathbf{x}}(\mathbf{y}_{n}){\Phi }(\mathbf{y}_{n},h)\mathbf{f}(
\mathbf{y}_{n})h-\mathbf{f}\left( \mathbf{y}_{n}\right) \right\Vert \\
& \leq M\left\Vert {\Phi }(\mathbf{y}_{n},h)\mathbf{f}(\mathbf{y}_{n})h+
\mathbf{u}\right\Vert ^{2}+\left\Vert \mathbf{f}_{\mathbf{x}}(\mathbf{y}
_{n})\right\Vert \left\Vert \mathbf{u}\right\Vert ,
\end{align*}
where the positive constant $M$ is a bound for $\left\Vert \mathbf{f}_{
\mathbf{xx}}\right\Vert $ on a compact subset $\mathcal{K}\subset $ $\mathbb{
R}^{d}$ such that $\mathbf{y}_{n},\mathbf{y}_{n+1},\mathbf{\xi \in }$ $
\mathcal{K}$. By using (\ref{ODE-SS-11}) and (\ref{ODE-SS-12}) it follows
that
\begin{equation*}
\left\Vert \mathbf{q(y}_{n};h\mathbf{,u)}\right\Vert \leq 2M\left\Vert
\mathbf{u}\right\Vert ^{2}+\left\Vert \mathbf{f}_{\mathbf{x}}(\mathbf{y}
_{n})\right\Vert \left\Vert \mathbf{u}\right\Vert +O(\left\Vert \mathbf{y}
_{n}-\mathbf{\xi }\right\Vert ^{2}).
\end{equation*}
From the last inequality, and taking into account that $\mathbf{k}_{1}=
\mathbf{0}$, it is obtained that $\left\Vert \mathbf{k}_{2}\right\Vert
=O(\left\Vert \mathbf{y}_{n}-\mathbf{\xi }\right\Vert ^{2})$. Furthermore,
by induction, $\left\Vert \mathbf{k}_{i}\right\Vert =O(\left\Vert \mathbf{y}
_{n}-\mathbf{\xi }\right\Vert ^{2})$ for all $i=1,2,\ldots ,s$. From this, (
\ref{ODE-SS-11}) and (\ref{ODE-SS-9}) it follows that
\begin{equation*}
h\mathbf{\varphi }_{\gamma }(\mathbf{y}_{n},h)=(e^{h\mathbf{f}_{\mathbf{x}}(
\mathbf{\xi })}-\mathbf{I})(\mathbf{y}_{n}-\mathbf{\xi })+O(\left\Vert
\mathbf{y}_{n}-\mathbf{\xi }\right\Vert ^{2}),
\end{equation*}
which implies that the LLRK discretization is linearization preserving.
\end{proof}
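The linearization preserving property can also be observed numerically through the one-step amplification of a small perturbation: for the logistic field $f(x)=x(1-x)$ at the equilibrium $\xi =1$, where $f^{\prime }(1)=-1$, the factor $(y_{n+1}-\xi )/(y_{n}-\xi )$ should approach $e^{-h}$ linearly in the size of the perturbation, in agreement with the expansion above. A minimal scalar Python sketch (classical RK4 tableau assumed; names illustrative):

```python
import math

def phi1(a, d):
    # integral_0^d exp(a*u) du for scalar a = f'(xi)
    return (math.exp(a * d) - 1.0) / a if a != 0.0 else d

def llrk4_step(f, df, xi, h):
    # One scalar LLRK step: exact linearized flow plus classical RK4 on the remainder.
    a, fx = df(xi), f(xi)
    m = lambda s: phi1(a, s) * fx
    q = lambda s, u: f(xi + m(s) + u) - a * m(s) - fx
    k1 = q(0.0, 0.0)
    k2 = q(h / 2, h / 2 * k1)
    k3 = q(h / 2, h / 2 * k2)
    k4 = q(h, h * k3)
    return xi + m(h) + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

f = lambda x: x * (1.0 - x)    # logistic field, f'(1) = -1
df = lambda x: 1.0 - 2.0 * x
xi, h = 1.0, 0.5               # equilibrium and a (not small) step size

def factor(eps):
    # one-step amplification of a perturbation of size eps around xi
    return (llrk4_step(f, df, xi + eps, h) - xi) / eps

# y_{n+1} - xi = exp(h f'(xi))(y_n - xi) + O(|y_n - xi|^2): the deviation of
# the amplification factor from exp(-h) shrinks proportionally to eps.
d1 = abs(factor(1e-2) - math.exp(-h))
d2 = abs(factor(1e-3) - math.exp(-h))
```

Shrinking the perturbation by a factor of ten shrinks the deviation from $e^{-h}$ by about the same factor, which is exactly the $O(\left\Vert \mathbf{y}_{n}-\mathbf{\xi }\right\Vert ^{2})$ behavior claimed by the theorem.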
The next two subsections deal with a more precise analysis of the dynamical
behavior of the LLRK discretizations in the neighborhood of some steady
states.
\subsection{Phase portrait near equilibrium points}
Let $\mathbf{0}$ be a hyperbolic equilibrium point of the equation (\ref
{ODE-SS-0a}). Let $X_{s},X_{u}\subset\mathbb{R}^{d}$ be the stable and
unstable subspaces of the linear vector field $\mathbf{f}_{\mathbf{x}}(
\mathbf{0})$ such that $\mathbb{R}^{d}=X_{s}\oplus X_{u}$, $(\mathbf{x}_{s},
\mathbf{x}_{u})=\mathbf{x}\in\mathbb{R}^{d}$ and $\left\Vert \mathbf{x}
\right\Vert =\max\{\left\Vert \mathbf{x}_{s}\right\Vert ,\left\Vert \mathbf{x
}_{u}\right\Vert \}$. It is well-known that the local stable and unstable
manifolds at $\mathbf{0}$ may be represented as $M_{s}=\{(\mathbf{x}_{s},p(
\mathbf{x}_{s})):\mathbf{x}_{s}\in\mathcal{K}_{\varepsilon,s}\}$ and $
M_{u}=\{(q(\mathbf{x}_{u}),\mathbf{x}_{u}):\mathbf{x}_{u}\in\mathcal{K}
_{\varepsilon,u}\}$, respectively, where the functions $p:\mathcal{K}
_{\varepsilon,s}=\mathcal{K}_{\varepsilon}\cap X_{s}\rightarrow
\mathcal{K}_{\varepsilon,u}=\mathcal{K}_{\varepsilon}\cap X_{u}$
and $q:\mathcal{K}_{\varepsilon,u}\rightarrow\mathcal{K}_{\varepsilon,s}$
are as smooth as $\mathbf{f}$, and $\mathcal{K}_{\varepsilon}=\{
\mathbf{x}\in\mathbb{R}^{d}:\left\Vert \mathbf{x}\right\Vert
\leq\varepsilon\}$ for $\varepsilon>0$.
\begin{theorem}
\label{Prop. 5.3} Suppose that the conditions (\ref{H1})-(\ref{H2}) of Lemma
\ref{Lemma 5.4} hold on a neighborhood $\Omega $ of $\mathbf{0}$. Then there
exist constants $C$, $\varepsilon $, $\varepsilon _{0}$, $h_{0}>0$ such that
the local stable $M_{s}^{h}$ and unstable $M_{u}^{h}$ manifolds of the order
$\gamma $ LLRK discretization (\ref{ODE-SS-1}) at $\mathbf{0}$ are of the
form
\begin{equation*}
M_{s}^{h}=\{(\mathbf{x}_{s},p^{h}(\mathbf{x}_{s})):\mathbf{x}_{s}\in
\mathcal{K}_{\varepsilon ,s}\}\text{ and }M_{u}^{h}=\{(q^{h}(\mathbf{x}_{u}),
\mathbf{x}_{u}):\mathbf{x}_{u}\in \mathcal{K}_{\varepsilon ,u}\},
\end{equation*}
where $p^{h}=p+O(h^{\gamma })$ uniformly in $\mathcal{K}_{\varepsilon ,s}$,
and $q^{h}=q+O(h^{\gamma })$ uniformly in $\mathcal{K}_{\varepsilon ,u}$.
Moreover, for any $\mathbf{x}_{0}\in \mathcal{K}_{\varepsilon }$
and $h\leq h_{0}$, there exists $\mathbf{z}_{0}=\mathbf{z}_{0}(\mathbf{x}
_{0},h)\in \mathcal{K}_{\varepsilon _{0}}$ satisfying
\begin{equation}
\sup \{\left\Vert \mathbf{x}(t_{n};\mathbf{x}_{0})-\mathbf{y}_{n}(\mathbf{z}
_{0})\right\Vert :\mathbf{x}(t;\mathbf{x}_{0})\in \mathcal{K}_{\varepsilon }
\text{ for }t\in \lbrack t_{0},t_{n}]\}\leq Ch^{\gamma }.
\label{ODE-SS-6}
\end{equation}
Correspondingly, for any $\mathbf{z}_{0}\in \mathcal{K}_{\varepsilon
}$ and $h\leq h_{0}$, there exists $\mathbf{x}_{0}=\mathbf{x}_{0}(\mathbf{z}
_{0},h)\in \mathcal{K}_{\varepsilon _{0}}$ that fulfils (\ref{ODE-SS-6}),
where the supremum is taken over all $n$ satisfying $\mathbf{y}_{j}(\mathbf{z
}_{0})\in \mathcal{K}_{\varepsilon }$, $j=0,\ldots ,n$.
\end{theorem}
\begin{proof}
Since $\Omega$ is a neighborhood of the invariant set $\mathbf{0}$, there
exist a constant $\varepsilon>0$ and a compact set $\mathcal{K}
_{\varepsilon }=\{\mathbf{\xi}\in\mathbb{R}^{d}:\left\Vert \mathbf{\xi}
\right\Vert \leq\varepsilon\}\subset\Omega$ such that Lemma \ref{Lemma 5.4}
holds with $\mathcal{K}=\mathcal{K}_{\varepsilon}$. Furthermore, as shown in
the proof of Theorem \ref{Prop. 5.2 LLRK}, $\mathbf{f}(\mathbf{
\xi})=\mathbf{0}$ implies $\mathbf{\varphi}_{\gamma}(\mathbf{\xi},h)=\mathbf{
0}$ for all $h$. Thus, the hypotheses of Theorem 3.1 in \cite{Beyn 1987a}
hold for the LLRK discretizations, which completes the proof.
\end{proof}
Theorem \ref{Prop. 5.3} shows that the phase portrait of a continuous
dynamical system near a hyperbolic equilibrium point is correctly reproduced
by LLRK discretizations for sufficiently small step-sizes. It states that
any trajectory of the dynamical system can be correctly approximated by a
trajectory of the LLRK discretization if the discrete initial value is
conveniently adjusted. It also affirms that any trajectory of an LLRK
discretization approximates some trajectory of the continuous system under a
suitable selection of the starting point. In both cases, these results are
valid for sufficiently small step-sizes and as long as the trajectories stay
within some neighborhood of the equilibrium point. Moreover, the theorem
ensures that the local stable and unstable manifolds of an LLRK
discretization at the equilibrium point converge to those of the continuous
system as the step-size goes to zero.
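For a stable equilibrium, the shadowing estimate (\ref{ODE-SS-6}) can be glimpsed numerically without adjusting initial points at all: for the scalar logistic equation $x^{\prime }=x(1-x)$, whose solution $x(t)=x_{0}/(x_{0}+(1-x_{0})e^{-t})$ is attracted to the hyperbolic equilibrium $1$, the discrepancy $\sup_{n}\left\vert x(t_{n})-y_{n}\right\vert $ remains of size $h^{\gamma }$ uniformly in $n$. The following Python fragment is a minimal sketch (classical RK4 tableau, $\gamma =4$; the helper names are illustrative):

```python
import math

def phi1(a, d):
    # integral_0^d exp(a*u) du for scalar a = f'(xi)
    return (math.exp(a * d) - 1.0) / a if a != 0.0 else d

def llrk4_step(f, df, xi, h):
    # One scalar LLRK step: exact linearized flow plus classical RK4 on the remainder.
    a, fx = df(xi), f(xi)
    m = lambda s: phi1(a, s) * fx
    q = lambda s, u: f(xi + m(s) + u) - a * m(s) - fx
    k1 = q(0.0, 0.0)
    k2 = q(h / 2, h / 2 * k1)
    k3 = q(h / 2, h / 2 * k2)
    k4 = q(h, h * k3)
    return xi + m(h) + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

f = lambda x: x * (1.0 - x)    # logistic field, stable equilibrium at 1
df = lambda x: 1.0 - 2.0 * x
x0, T = 0.1, 10.0
exact = lambda t: x0 / (x0 + (1.0 - x0) * math.exp(-t))

def sup_error(h):
    # sup over n of |x(t_n) - y_n| along the discrete trajectory up to time T
    y, err = x0, 0.0
    for n in range(1, int(round(T / h)) + 1):
        y = llrk4_step(f, df, y, h)
        err = max(err, abs(y - exact(n * h)))
    return err

e1, e2 = sup_error(0.2), sup_error(0.1)  # halving h: error drops by about 2**4
```

The error does not accumulate in time here because the trajectory is attracted to the equilibrium, which is consistent with the uniform bound in (\ref{ODE-SS-6}).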
\subsection{Phase portraits near periodic orbits}
Suppose that the equation (\ref{ODE-SS-0a}) has a hyperbolic closed orbit $
\Gamma =\{\overline{\mathbf{x}}(t):t\in \lbrack 0,T]\}$ of period $T$ in an
open bounded set $\Omega \subset \mathbb{R}^{d}$. Let $\overline{\Omega }$
be the closure of $\Omega $.
\begin{theorem}
\label{Prop. 5.5} Let the assumptions (\ref{H1})-(\ref{H2}) of Lemma \ref
{Lemma 5.4} hold on a neighborhood of $\overline{\Omega}$. Then there exist $
h_{0}>0$ and an open neighborhood $U$ of $\Gamma$ such that the order $
\gamma $ LLRK discretization
\begin{equation*}
\mathbf{y}_{n+1}=\mathbf{y}_{n}+h\mathbf{\varphi}_{\gamma}(\mathbf{y}_{n},h)
\end{equation*}
has an invariant closed curve $\Gamma_{h}\subset U$ for all $h\leq h_{0}$.
More precisely, there exist $T$-periodic functions $\overline{\mathbf{y}}
_{h}:\mathbb{R}\rightarrow U$ and $\sigma_{h}-1:\mathbb{R}\rightarrow
\mathbb{R}$ for $h\leq h_{0}$, which are uniformly Lipschitz and satisfy
\begin{equation*}
\overline{\mathbf{y}}_{h}(t)+h\mathbf{\varphi}_{\gamma}(\overline{\mathbf{y}}
_{h}(t),h)=\overline{\mathbf{y}}_{h}(\sigma_{h}(t)),\text{ }t\in\mathbb{R}
\end{equation*}
and
\begin{equation*}
\sigma_{h}(t)=t+h+O(h^{\gamma+1})\text{ uniformly for }t\in\mathbb{R}.
\end{equation*}
Furthermore, the curve $\Gamma_{h}=\{\overline{\mathbf{y}}_{h}(t):t\in
\lbrack0,T]\}$ converges to $\Gamma$ in the Lipschitz norm. In particular,
\begin{equation*}
\underset{t\in\mathbb{R}}{\max}\left\Vert \overline{\mathbf{x}}(t)-\overline{
\mathbf{y}}_{h}(t)\right\Vert =O(h^{\gamma})
\end{equation*}
and
\begin{equation*}
\underset{t_{1}\neq t_{2}}{\sup}\frac{\left\Vert (\overline{\mathbf{x}}-
\overline{\mathbf{y}}_{h})(t_{1})-(\overline{\mathbf{x}}-\overline{\mathbf{y}
}_{h})(t_{2})\right\Vert }{\left\vert t_{1}-t_{2}\right\vert }\rightarrow0
\text{ as }h\rightarrow0.
\end{equation*}
\end{theorem}
\begin{proof}
Since Lemma \ref{Lemma 5.4} holds on a neighborhood of $\overline{\Omega }$,
it also holds on $\Omega $. In addition, Lemmas \ref{Lemma de phi LLT} and
\ref{Lemma for Local Error LLRK} imply that $\mathbf{\varphi }_{\gamma }\in
\mathcal{C}^{2}(\overline{\Omega }\times \lbrack 0,h_{0}])$, so $\partial
\mathbf{\varphi }_{\gamma }\mathbf{/\partial y}_{n}$ is Lipschitz on $\Omega
$ uniformly in $h$. Thus, the hypotheses of Theorem 2.1 in \cite{Beyn 1987b}
hold for the LLRK discretizations of order $\gamma >2$, which completes the
proof.
\end{proof}
Theorem \ref{Prop. 5.5} affirms that, for $h$ sufficiently small, the LLRK
discretizations have a closed invariant curve $\Gamma_{h}$, i.e., $(1+h
\mathbf{\varphi}_{\gamma}(\cdot ;h))(\Gamma_{h})=\Gamma_{h}$, which
converges to the periodic orbit $\Gamma$ of the continuous system.
The next theorem deals with the behavior of the discrete trajectories of
LLRK discretizations near the invariant curve $\Gamma_{h}$ when the ODE (\ref
{ODE-SS-0a}) has a stable periodic orbit $\Gamma$. For $\mathbf{x}_{0}$ in a
neighborhood of $\Gamma$, the notations
\begin{equation*}
W_{h}(\mathbf{x}_{0})=\{\mathbf{y}_{n}(\mathbf{x}_{0}):n\geq0\}\text{ \ and
\ \ }w(\mathbf{x}_{0})=\{\mathbf{x}(t;\mathbf{x}_{0}):t\geq0\}
\end{equation*}
will be used. In addition,
\begin{equation*}
d(A,B)=\max \{\underset{\mathbf{z}\in A}{\sup }\,\mathrm{dist}(\mathbf{z},B),
\underset{\mathbf{z}\in B}{\sup }\,\mathrm{dist}(\mathbf{z},A)\}
\end{equation*}
will denote the Hausdorff distance between two sets $A$ and $B$.\newline
\begin{theorem}
\label{Prop. 5.7}Let $\Gamma $ be a stable closed orbit of the equation (\ref
{ODE-SS-0a}). Then, under the assumptions of Theorem \ref{Prop. 5.5}, there
exist $h_{0}$, $\alpha $, $\beta $, $C$ and $\rho >0$ such that for $h\leq
h_{0}$ and $\mathrm{dist}(\mathbf{x}_{0},\Gamma _{h})\leq \rho $ the
following holds:
\begin{equation*}
\mathrm{dist}(\mathbf{y}_{n}(\mathbf{x}_{0}),\Gamma _{h})\leq C\exp (-\alpha
t_{n})\,\mathrm{dist}(\mathbf{x}_{0},\Gamma _{h})
\end{equation*}
and
\begin{equation*}
\mathrm{dist}(\mathbf{y}_{n}(\mathbf{x}_{0}),w(\mathbf{x}_{0}))\leq
C(h^{\gamma }+\min \{h^{\gamma }\exp (\beta t_{n}),\exp (-\alpha t_{n})\})
\end{equation*}
for $n\geq 0$. Moreover, for any $\delta >0$ there exist $\rho (\delta )$, $
h(\delta )>0$ such that
\begin{equation*}
\underset{n\geq 0}{\sup }\left\{ \mathrm{dist}(\mathbf{y}_{n}(\mathbf{x}
_{0}),w(\mathbf{x}_{0}))\right\} \leq Ch^{\gamma -\delta }
\end{equation*}
for $h\leq h(\delta )$ and $\mathrm{dist}(\mathbf{x}_{0},\Gamma _{h})\leq
\rho (\delta )$. Finally,
\begin{equation*}
d(W_{h}(\mathbf{x}_{0}),w(\mathbf{x}_{0}))\rightarrow 0\text{ as }
h\rightarrow 0
\end{equation*}
uniformly for $\mathrm{dist}(\mathbf{x}_{0},\Gamma )\leq \rho $.
\end{theorem}
\begin{proof}
The proof proceeds in the same way as that of Theorem \ref{Prop. 5.5},
using Theorem 3.2 in \cite{Beyn 1987b} instead of Theorem 2.1.
\end{proof}
This theorem states the stability of the invariant curve $\Gamma_{h}$ and
the convergence of the trajectories of an LLRK discretization to the
continuous trajectories of the underlying ODE when the discretization starts
at a point close enough to the stable periodic orbit $\Gamma$.
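These results can be illustrated on the planar system $x^{\prime }=x-y-x(x^{2}+y^{2})$, $y^{\prime }=x+y-y(x^{2}+y^{2})$, which has the stable hyperbolic periodic orbit $x^{2}+y^{2}=1$. In the pure-Python sketch below (classical RK4 tableau assumed; names illustrative), the increment $\delta {\Phi }(\mathbf{\xi },\delta )\mathbf{f}(\mathbf{\xi })$ is evaluated as the top-right block of the exponential of the augmented matrix $\delta \begin{pmatrix} \mathbf{f}_{\mathbf{x}}(\mathbf{\xi }) & \mathbf{f}(\mathbf{\xi }) \\ \mathbf{0} & 0 \end{pmatrix}$, a standard identity for integrals of this type; an LLRK trajectory is then attracted to an invariant curve close to $r=1$:

```python
import math

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_exp(M, squarings=6, terms=12):
    # exp(M) by scaling-and-squaring with a truncated Taylor series;
    # adequate here because the scaled matrix has a very small norm.
    n, s = len(M), 2.0 ** squarings
    A = [[M[i][j] / s for j in range(n)] for i in range(n)]
    E = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    T = [row[:] for row in E]
    for k in range(1, terms):
        T = [[v / k for v in row] for row in mat_mul(T, A)]   # T = A^k / k!
        E = [[E[i][j] + T[i][j] for j in range(n)] for i in range(n)]
    for _ in range(squarings):
        E = mat_mul(E, E)
    return E

def ll_increment(A, fx, s):
    # delta*Phi(xi,delta)*f(xi) = int_0^s exp(A*u) du f(xi): top-right block
    # of the exponential of the augmented matrix s*[[A, f], [0, 0]].
    M = [[A[0][0] * s, A[0][1] * s, fx[0] * s],
         [A[1][0] * s, A[1][1] * s, fx[1] * s],
         [0.0, 0.0, 0.0]]
    E = mat_exp(M)
    return [E[0][2], E[1][2]]

def f(p):
    x, y = p
    r2 = x * x + y * y
    return [x - y - x * r2, x + y - y * r2]

def jac(p):
    x, y = p
    return [[1.0 - 3.0 * x * x - y * y, -1.0 - 2.0 * x * y],
            [1.0 - 2.0 * x * y, 1.0 - x * x - 3.0 * y * y]]

def llrk4_step(p, h):
    # One planar LLRK step: LL increment plus classical RK4 on the remainder q.
    A, fp = jac(p), f(p)
    def q(s, u):
        m = ll_increment(A, fp, s)
        fz = f([p[0] + m[0] + u[0], p[1] + m[1] + u[1]])
        Am = [A[0][0] * m[0] + A[0][1] * m[1], A[1][0] * m[0] + A[1][1] * m[1]]
        return [fz[i] - Am[i] - fp[i] for i in range(2)]
    k1 = q(0.0, [0.0, 0.0])
    k2 = q(h / 2, [h / 2 * k1[0], h / 2 * k1[1]])
    k3 = q(h / 2, [h / 2 * k2[0], h / 2 * k2[1]])
    k4 = q(h, [h * k3[0], h * k3[1]])
    m = ll_increment(A, fp, h)
    return [p[i] + m[i] + h * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i]) / 6
            for i in range(2)]

p, h = [0.5, 0.0], 0.05
for _ in range(400):        # integrate up to T = 20 from inside the cycle
    p = llrk4_step(p, h)
radius_err = abs(math.hypot(p[0], p[1]) - 1.0)
```

The final radius stays well within the $O(h^{\gamma })$ neighborhood of the periodic orbit, consistent with $\Gamma _{h}$ converging to $\Gamma $.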
\section{A-stable explicit LLRK schemes\label{ODE LL Schemes}}
This section deals with practical issues of the LLRK methods, that is, with
the so-called Local Linearization--Runge--Kutta schemes.
Roughly speaking, every numerical implementation of an LLRK discretization
will be called an LLRK scheme. More precisely, they are defined as follows.
\begin{definition}
\label{definition LLS} For an order $\gamma$ LLRK discretization
\begin{equation}
\mathbf{y}_{n+1}=\mathbf{y}_{n}+h_{n}\mathbf{\varphi}_{\gamma}(t_{n},\mathbf{
y}_{n};h_{n}), \label{ODE-LLS-22}
\end{equation}
as defined in (\ref{LLRK_Discretizat}), any recursion of the form
\begin{equation*}
\widetilde{\mathbf{y}}_{n+1}=\widetilde{\mathbf{y}}_{n}+h_{n}\widetilde {
\mathbf{\varphi}}_{\gamma}\left( t_{n},\widetilde{\mathbf{y}}
_{n};h_{n}\right) \mathbf{,}\text{ \ \ \ \ \ \ \ \ \ with }\widetilde{
\mathbf{y}}_{0}=\mathbf{y}_{0},
\end{equation*}
where $\widetilde{\mathbf{\varphi}}_{\gamma}$ denotes some numerical
algorithm to compute $\mathbf{\varphi}_{\gamma}$, is called an LLRK scheme.
\end{definition}
When implementing the LLRK discretization (\ref{ODE-LLS-22}), that is, when
an LLRK scheme is constructed, the required evaluations of the expression $
\mathbf{y}_{n}+\mathbf{\phi}\left( t_{n},\mathbf{y}_{n};.\right) $ at $
t_{n+1}-t_{n}$ and $c_{i}\left( t_{n+1}-t_{n}\right) $ may be computed by
different algorithms. A number of them were reviewed in \cite{de la Cruz 07}
and \cite{Jimenez05 AMC}, which yield the following two basic kinds of LLRK
schemes:
\begin{equation*}
\widetilde{\mathbf{y}}_{n+1}=\widetilde{\mathbf{y}}_{n}+\widetilde{\mathbf{
\phi }}\left( t_{n},\widetilde{\mathbf{y}}_{n};h_{n}\right) +\widetilde{
\mathbf{\rho }}\left( t_{n},\widetilde{\mathbf{y}}_{n};h_{n}\right) \mathbf{,
}
\end{equation*}
and
\begin{equation*}
\widetilde{\mathbf{y}}_{n+1}=\widetilde{\mathbf{z}}\left( t_{n}+h_{n};t_{n},
\widetilde{\mathbf{y}}_{n}\right) +\widetilde{\mathbf{\rho }}\left( t_{n},
\widetilde{\mathbf{y}}_{n};h_{n}\right) \mathbf{,}
\end{equation*}
where $\widetilde{\mathbf{\phi }}$ is a numerical implementation of $
\mathbf{\phi }$, $\widetilde{\mathbf{z}}$ is a numerical solution of the
linear ODE
\begin{align}
\frac{d\mathbf{z}\left( t\right) }{dt}& =\mathbf{B}_{n}\mathbf{z}(t)+\mathbf{
b}_{n}\left( t\right) \mathbf{,}\text{ \ \ }t\in \lbrack t_{n},t_{n+1}],
\label{ODE-LLS-13} \\
\mathbf{z}\left( t_{n}\right) & =\widetilde{\mathbf{y}}_{n}\text{,}
\label{ODE-LLS-13b}
\end{align}
and $\widetilde{\mathbf{\rho }}~$is the map of the Runge-Kutta scheme
applied to the ODE
\begin{align}
\frac{d\mathbf{v}\left( t\right) }{dt}& =\widetilde{\mathbf{q}}\mathbf{(}
t_{n},\mathbf{z}\left( t_{n}\right) ;t\mathbf{,\mathbf{v}}\left( t\right)
\mathbf{),}\text{ \ \ }t\in \lbrack t_{n},t_{n+1}],\quad \label{ODE-LLS-14}
\\
\mathbf{v}\left( t_{n}\right) & =\mathbf{0}, \label{ODE-LLS-14b}
\end{align}
with vector field
\begin{align*}
\widetilde{\mathbf{q}}\mathbf{(}t_{n},\widetilde{\mathbf{y}}_{n};s\mathbf{
,\xi )}& =\mathbf{f(}s,\widetilde{\mathbf{y}}_{n}+\widetilde{\mathbf{\phi }}
\left( t_{n},\widetilde{\mathbf{y}}_{n};s-t_{n}\right) +\mathbf{\xi })-
\mathbf{f}_{\mathbf{x}}(t_{n},\widetilde{\mathbf{y}}_{n})\widetilde{\mathbf{
\phi }}\left( t_{n},\widetilde{\mathbf{y}}_{n};s-t_{n}\right) \\
& -\mathbf{f}_{t}\left( t_{n},\widetilde{\mathbf{y}}_{n}\right) (s-t_{n})-
\mathbf{f}\left( t_{n},\widetilde{\mathbf{y}}_{n}\right) ,
\end{align*}
for the first kind of LLRK scheme, or
\begin{align*}
\widetilde{\mathbf{q}}\mathbf{(}t_{n},\widetilde{\mathbf{y}}_{n};s\mathbf{
,\xi )}& =\mathbf{f(}s,\widetilde{\mathbf{z}}\left( s;t_{n},\widetilde{
\mathbf{y}}_{n}\right) +\mathbf{\xi })-\mathbf{f}_{\mathbf{x}}(t_{n},
\widetilde{\mathbf{y}}_{n})(\widetilde{\mathbf{z}}\left( s;t_{n},\widetilde{
\mathbf{y}}_{n}\right) -\widetilde{\mathbf{y}}_{n})-\mathbf{f}_{t}\left(
t_{n},\widetilde{\mathbf{y}}_{n}\right) (s-t_{n}) \\
& -\mathbf{f}\left( t_{n},\widetilde{\mathbf{y}}_{n}\right)
\end{align*}
for the second one. In equation (\ref{ODE-LLS-13}), $\mathbf{B}_{n}=
\mathbf{f}_{\mathbf{x}}\left( t_{n},\widetilde{\mathbf{y}}_{n}\right) $ is a
$d\times d$ constant matrix and $\mathbf{b}_{n}(t)=\mathbf{f}_{t}\left(
t_{n},\widetilde{\mathbf{y}}_{n}\right) (t-t_{n})+\mathbf{f}\left( t_{n},
\widetilde{\mathbf{y}}_{n}\right) \mathbf{-B}_{n}\widetilde{\mathbf{y}}_{n}$
is a $d$-dimensional linear vector function.
Obviously, an LLRK scheme will preserve the order $\gamma$ of the underlying
LLRK discretization only if $\widetilde{\mathbf{\phi}}$ is a suitable
approximation to $\mathbf{\phi}$. This requirement is considered in the next
theorem.
\begin{theorem}
\label{Conv LLRKScheme}Let $\mathbf{x}$ be the solution of the ODE (\ref
{ODE-LLA-1})-(\ref{ODE-LLA-2}) with vector field $\mathbf{f}$ satisfying the
condition (\ref{ODE-CONV-8}). With $t_{n},$ $t_{n+1}\in\left( t\right) _{h}$
, let $\widetilde{\mathbf{z}}_{n+1}=\widetilde{\mathbf{z}}_{n}+h_{n}{\Lambda}
_{1}\left( t_{n},\widetilde{\mathbf{z}}_{n};h_{n}\right) $ and $\widetilde{
\mathbf{v}}_{n+1}=\widetilde{\mathbf{v}}_{n}+h_{n}{\Lambda }_{2}^{\widetilde{
\mathbf{z}}_{n}}\left( t_{n},\widetilde{\mathbf{v}}_{n};h_{n}\right) $ be
one-step explicit integrators of the ODEs (\ref{ODE-LLS-13})-(\ref
{ODE-LLS-13b}) and (\ref{ODE-LLS-14})-(\ref{ODE-LLS-14b}), respectively.
Suppose that these integrators have order of convergence $r$ and $p$,
respectively. Further, assume that ${\Lambda}_{1}$ and ${\Lambda}_{2}^{
\widetilde{\mathbf{z}}_{n}}$ fulfill the local Lipschitz condition (\ref
{Lipschitz}). Then, for $h$ small enough, the numerical scheme
\begin{equation*}
\widetilde{\mathbf{y}}_{n+1}=\widetilde{\mathbf{y}}_{n}+h_{n}{\Lambda}
_{1}\left( t_{n},\widetilde{\mathbf{y}}_{n};h_{n}\right) +h_{n}{\Lambda}
_{2}^{\widetilde{\mathbf{y}}_{n}}\left( t_{n},\mathbf{0};h_{n}\right)
\end{equation*}
satisfies that
\begin{equation*}
\left\Vert \mathbf{x}(t_{n+1})-\widetilde{\mathbf{y}}_{n+1}\right\Vert \leq
Ch^{\min\{r,p\}}
\end{equation*}
for all $t_{n+1}\in\left( t\right) _{h}$, where $C$ is a positive constant.
\end{theorem}
\begin{proof}
Let $\mathcal{X}=\left\{ \mathbf{x}\left( t\right) :t\in\left[ t_{0},T\right]
\right\} .$ Since $\mathcal{X}$ is a compact set contained in the open set $
\mathcal{D}\subset\mathbb{R}^{d}$, there exists $\varepsilon>0$ such that
the compact set
\begin{equation*}
\mathcal{A}_{\varepsilon}=\left\{ \xi\in\mathbb{R}^{d}:\underset{\mathbf{x}
\left( t\right) \in\mathcal{X}}{\min}\left\Vert \xi -\mathbf{x}\left(
t\right) \right\Vert \leq\varepsilon\right\}
\end{equation*}
is contained in $\mathcal{D}$.
First, set $\widetilde{\mathbf{y}}_{n}=\mathbf{x}(t_{n})$ in the equations (
\ref{ODE-LLS-13})-(\ref{ODE-LLS-13b}) and (\ref{ODE-LLS-14})-(\ref
{ODE-LLS-14b}). Since $\mathbf{x}\left( t_{n+1}\right) =\mathbf{y}
_{LL}\left( t_{n}+h_{n};t_{n},\mathbf{x}(t_{n})\right) +\mathbf{r}\left(
t_{n}+h_{n};t_{n},\mathbf{x}(t_{n})\right) $, it is obtained that
\begin{align}
\left\Vert
\begin{array}{c}
\mathbf{x}(t_{n+1})-\mathbf{x}(t_{n})-h_{n}{\Lambda }_{1}(t_{n},\mathbf{x}
(t_{n});h_{n}) \\
-h_{n}{\Lambda }_{2}^{\mathbf{x}(t_{n})}(t_{n},\mathbf{0};h_{n})
\end{array}
\right\Vert & \leq \left\Vert \mathbf{\phi }\left( t_{n},\mathbf{x}
(t_{n});h_{n}\right) -h_{n}{\Lambda }_{1}(t_{n},\mathbf{x}
(t_{n});h_{n})\right\Vert \notag \\
& +\left\Vert \mathbf{r}\left( t_{n}+h_{n};t_{n},\mathbf{x}(t_{n})\right) -
\mathbf{v}(t_{n+1})\right\Vert \notag \\
& +\left\Vert \mathbf{v}(t_{n+1})-h_{n}{\Lambda }_{2}^{\mathbf{x}
(t_{n})}(t_{n},\mathbf{0};h_{n})\right\Vert , \label{ODE-LLS-15}
\end{align}
where $\mathbf{v}(t_{n+1})$ is the solution of equation (\ref{ODE-LLS-14})-(
\ref{ODE-LLS-14b}) at $t=t_{n+1}$.
By definition, $\mathbf{r}\left( t_{n}+h_{n};t_{n},\mathbf{x}(t_{n})\right) $
is the solution of the differential equation
\begin{align*}
\frac{d\mathbf{u}\left( t\right) }{dt}& =\mathbf{q(}t_{n},\mathbf{x}(t_{n});t
\mathbf{,\mathbf{u}}\left( t\right) \mathbf{),}\text{ \ \ }t\in \lbrack
t_{n},t_{n+1}],\quad \\
\mathbf{u}\left( t_{n}\right) & =\mathbf{0},
\end{align*}
evaluated at $t=t_{n+1}$. Thus, by applying the ``fundamental lemma'' (see,
e.g., Theorem 10.2 in \cite{Hairer-Wanner93}), it is obtained that
\begin{equation}
\left\Vert \mathbf{r}\left( t;t_{n},\mathbf{x}(t_{n})\right) -\mathbf{v}
(t)\right\Vert \leq \frac{\epsilon }{P}(e^{P(t-t_{n})}-1) \label{ODE-LLS-16}
\end{equation}
for $t\in \lbrack t_{n},t_{n+1}]$, where
\begin{align*}
\epsilon & =\underset{t\in \lbrack t_{n},t_{n+1}]}{\sup }\left\Vert \mathbf{q(}
t_{n},\mathbf{x}(t_{n});t\mathbf{,\mathbf{u}}\left( t\right) \mathbf{)}-
\widetilde{\mathbf{q}}\mathbf{(}t_{n},\mathbf{x}(t_{n});t\mathbf{,\mathbf{u}}
\left( t\right) \mathbf{)}\right\Vert \\
& \leq M\left\Vert \mathbf{\phi }\left( t_{n},\mathbf{x}(t_{n});h_{n}\right)
-h_{n}{\Lambda }_{1}(t_{n},\mathbf{x}(t_{n});h_{n})\right\Vert ,
\end{align*}
$M=2\underset{t\in \lbrack t_{0},T],\xi \in \mathcal{A}_{\varepsilon }}{\sup
}\left\Vert \mathbf{f}_{\mathbf{x}}(t,\xi )\right\Vert $, and $P$ is the
Lipschitz constant of the function $\mathbf{q(}t_{n},\mathbf{x}(t_{n});\cdot
\mathbf{)}$ (which exists by Lemma \ref{Lemma for Local Error LLRK}).
Furthermore,
\begin{equation}
\left\Vert \mathbf{\phi }\left( t_{n},\mathbf{x}(t_{n});h_{n}\right) -h_{n}{
\Lambda }_{1}(t_{n},\mathbf{x}(t_{n});h_{n})\right\Vert =\left\Vert \mathbf{z
}\left( t_{n+1}\right) -\mathbf{z}\left( t_{n}\right) -h_{n}{\Lambda }
_{1}(t_{n},\mathbf{z}\left( t_{n}\right) ;h_{n})\right\Vert ,
\label{ODE-LLS-17}
\end{equation}
since $\mathbf{z}\left( t_{n+1}\right) =\mathbf{x}(t_{n})+\mathbf{\phi }
\left( t_{n},\mathbf{x}(t_{n});h_{n}\right) $ is the solution (\ref
{ODE-LLS-13})-(\ref{ODE-LLS-13b}) with $\widetilde{\mathbf{y}}_{n}=\mathbf{x}
(t_{n})$ at $t=t_{n+1}$. On the other hand,
\begin{equation}
\left\Vert \mathbf{z}\left( t_{n+1}\right) -\mathbf{z}\left( t_{n}\right)
-h_{n}{\Lambda }_{1}(t_{n},\mathbf{z}\left( t_{n}\right) ;h_{n})\right\Vert
\leq c_{1}h^{r+1} \label{ODE-LLS-18}
\end{equation}
and
\begin{equation}
\left\Vert \mathbf{v}(t_{n+1})-h_{n}{\Lambda }_{2}^{\mathbf{x}(t_{n})}(t_{n},
\mathbf{0};h_{n})\right\Vert \leq c_{2}h^{p+1} \label{ODE-LLS-19}
\end{equation}
hold, since $\widetilde{\mathbf{z}}_{n+1}=\widetilde{\mathbf{z}}_{n}+h_{n}{
\Lambda }_{1}\left( t_{n},\widetilde{\mathbf{z}}_{n};h_{n}\right) $ and $
\widetilde{\mathbf{v}}_{n+1}=\widetilde{\mathbf{v}}_{n}+h_{n}{\Lambda }_{2}^{
\widetilde{\mathbf{z}}_{n}}\left( t_{n},\widetilde{\mathbf{v}}
_{n};h_{n}\right) $ are order $r$ and $p$ integrators, respectively. Here, $
c_{1}$ and $c_{2}$ are positive constants independent of $h$.
From the inequalities (\ref{ODE-LLS-15})-(\ref{ODE-LLS-19}), the one-step
integrator
\begin{equation*}
\widetilde{\mathbf{y}}_{n+1}=\widetilde{\mathbf{y}}_{n}+h_{n}{\Lambda }
_{1}\left( t_{n},\widetilde{\mathbf{y}}_{n};h_{n}\right) +h_{n}{\Lambda }
_{2}^{\widetilde{\mathbf{y}}_{n}}\left( t_{n},\mathbf{0};h_{n}\right)
\end{equation*}
has local truncation error
\begin{equation*}
\left\Vert \mathbf{x}(t_{n+1})-\mathbf{x}(t_{n})-h_{n}{\Lambda }_{1}(t_{n},
\mathbf{x}(t_{n});h_{n})-h_{n}{\Lambda }_{2}^{\mathbf{x}(t_{n})}(t_{n},
\mathbf{0};h_{n})\right\Vert \leq c\text{ }h^{\min \{r,p\}+1},
\end{equation*}
where $c=c_{1}+c_{2}+c_{1}M(e^{P}-1)/P$ is a positive constant. In addition,
since ${\Lambda }_{1}+{\Lambda }_{2}^{\mathbf{x}(t_{n})}$ with fixed $
t_{n},h_{n}$ is a local Lipschitz function on $\mathcal{D}$, Lemma 2 in \cite
{Perko01} (p. 92) implies that ${\Lambda }_{1}+{\Lambda }_{2}^{\mathbf{x}
(t_{n})}$ is a Lipschitz function on $\mathcal{A}_{\varepsilon }\subset
\mathcal{D}$. Thus, the stated estimate $\left\Vert \mathbf{x}(t_{n+1})-
\widetilde{\mathbf{y}}_{n+1}\right\Vert \leq Ch^{\min \{r,p\}}$ for the
global error of the LLRK scheme $\widetilde{\mathbf{y}}_{n+1}$
straightforwardly follows from Theorem 3.6 in \cite{Hairer-Wanner93}, where $
C$ is a positive constant. Finally, in order to guarantee that $\widetilde{
\mathbf{y}}_{n+1}\in \mathcal{A}_{\varepsilon }$ for all $n=0,...,N-1,$ and
so that the LLRK scheme is well-defined, it is sufficient that $0<h<\delta $,
where $\delta $ is chosen in such a way that $C\delta ^{\min \{r,p\}}\leq
\varepsilon $.
\end{proof}
As an example, consider the computation of the function $\mathbf{\phi}$
through a Pad\'{e} approximation combined with the ``scaling and squaring''
strategy for exponential matrices \cite{Golub 1989}. To do so, note that $
\mathbf{\phi}$ can be written as \cite{Jimenez02 AML}, \cite{Jimenez05 AMC}
\begin{equation*}
\mathbf{\phi}\left( t_{n},\widetilde{\mathbf{y}}_{n};h_{n}\right) =\mathbf{L}
e^{\widetilde{\mathbf{D}}_{n}h_{n}}\mathbf{r,}
\end{equation*}
where
\begin{equation*}
\widetilde{\mathbf{D}}_{n}=\left[
\begin{array}{ccc}
\mathbf{f}_{\mathbf{x}}(t_{n},\widetilde{\mathbf{y}}_{n}) & \mathbf{f}
_{t}(t_{n},\widetilde{\mathbf{y}}_{n}) & \mathbf{f}(t_{n},\widetilde {
\mathbf{y}}_{n}) \\
0 & 0 & 1 \\
0 & 0 & 0
\end{array}
\right] \in\mathbb{R}^{(d+2)\times(d+2)},
\end{equation*}
$\mathbf{L}=\left[
\begin{array}{ll}
\mathbf{I}_{d} & \mathbf{0}_{d\times2}
\end{array}
\right] $ and $\mathbf{r}^{\intercal}=\left[
\begin{array}{ll}
\mathbf{0}_{1\times(d+1)} & 1
\end{array}
\right] $ in case of non-autonomous ODEs; and
\begin{equation*}
\widetilde{\mathbf{D}}_{n}=\left[
\begin{array}{cc}
\mathbf{f}_{\mathbf{x}}(\widetilde{\mathbf{y}}_{n}) & \mathbf{f}(\widetilde{
\mathbf{y}}_{n}) \\
0 & 0
\end{array}
\right] \in\mathbb{R}^{(d+1)\times(d+1)},
\end{equation*}
$\mathbf{L}=\left[
\begin{array}{ll}
\mathbf{I}_{d} & \mathbf{0}_{d\times1}
\end{array}
\right] $ and $\mathbf{r}^{\intercal}=\left[
\begin{array}{ll}
\mathbf{0}_{1\times d} & 1
\end{array}
\right] $ for autonomous equations.
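For illustration, the following pure-Python sketch evaluates $\mathbf{\phi }\left( t_{n},\widetilde{\mathbf{y}}_{n};h_{n}\right) =\mathbf{L}e^{\widetilde{\mathbf{D}}_{n}h_{n}}\mathbf{r}$ in the autonomous case. For brevity, a truncated Taylor series combined with scaling and squaring stands in for the $(p,q)$-Pad\'{e} approximation; the function names are illustrative, not taken from the cited references.

```python
import math

def mat_mul(A, B):
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

def expm(M, terms=20):
    """Matrix exponential by scaling and squaring; a truncated Taylor
    series stands in for the (p,q)-Pade approximant of exp(M / 2^k)."""
    n = len(M)
    norm = max(sum(abs(x) for x in row) for row in M)
    k = max(0, math.ceil(math.log2(norm)) + 1) if norm > 0 else 0
    A = [[M[i][j] * 2.0 ** (-k) for j in range(n)] for i in range(n)]
    E = [[float(i == j) for j in range(n)] for i in range(n)]  # identity
    term = [row[:] for row in E]
    for j in range(1, terms):
        term = [[v / j for v in row] for row in mat_mul(term, A)]
        E = [[E[i][l] + term[i][l] for l in range(n)] for i in range(n)]
    for _ in range(k):          # undo the scaling by repeated squaring
        E = mat_mul(E, E)
    return E

def phi_LL(fx, f, h):
    """phi = L exp(D h) r for an autonomous ODE, where fx is the d x d
    Jacobian f_x(y_n), f is the d-vector f(y_n), and D is the
    (d+1) x (d+1) augmented matrix [[fx, f], [0, 0]]."""
    d = len(f)
    D = [[fx[i][j] * h for j in range(d)] + [f[i] * h] for i in range(d)]
    D.append([0.0] * (d + 1))
    E = expm(D)
    return [E[i][d] for i in range(d)]  # first d entries of the last column
```

For the scalar linear ODE $x^{\prime }=\lambda x$ this reproduces $\phi =(e^{\lambda h}-1)y_{n}$, so that $y_{n}+\phi =e^{\lambda h}y_{n}$ is exact, as expected from the construction.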
\begin{proposition}
Set $\widetilde{\mathbf{\phi }}\left( t_{n},\widetilde{\mathbf{y}}
_{n};h_{n}\right) =\mathbf{L}$ $(\mathbf{P}_{p,q}(2^{-\kappa _{n}}\widetilde{
\mathbf{D}}_{n}h_{n}))^{2^{\kappa _{n}}}$ $\mathbf{r}$, where $\mathbf{P}
_{p,q}(2^{-\kappa _{n}}\widetilde{\mathbf{D}}_{n}h_{n})$ is the $(p,q)$-Pad
\'{e} approximation of $e^{2^{-\kappa _{n}}\widetilde{\mathbf{D}}_{n}h_{n}}$
, $\kappa _{n}$ is the smallest integer number such that $\left\Vert
2^{-\kappa _{n}}\widetilde{\mathbf{D}}_{n}h_{n}\right\Vert \leq \frac{1}{2}$
, and the matrices $\widetilde{\mathbf{D}}_{n}$, $\mathbf{L}$, $
\mathbf{r}$ are defined as above. Further, let $\widetilde{\mathbf{\rho }}$
be the numerical solution of the ODE (\ref{ODE-LLS-14})-(\ref{ODE-LLS-14b})
given by an order $\gamma $ explicit Runge-Kutta scheme. Then, under the
assumptions of Theorem \ref{Local Error LLRK}, the global error of the LLRK
scheme
\begin{equation}
\widetilde{\mathbf{y}}_{n+1}=\widetilde{\mathbf{y}}_{n}+\widetilde{\mathbf{
\phi }}\left( t_{n},\widetilde{\mathbf{y}}_{n};h_{n}\right) +\widetilde{
\mathbf{\rho }}\left( t_{n},\widetilde{\mathbf{y}}_{n};h_{n}\right)
\label{LLRK scheme}
\end{equation}
for the integration of the ODE (\ref{ODE-LLA-1})-(\ref{ODE-LLA-2}) is given
by
\begin{equation*}
\left\Vert \mathbf{x}(t_{n})-\widetilde{\mathbf{y}}_{n}\right\Vert \leq
Mh^{\min \{\gamma ,p+q\}}
\end{equation*}
for all $t_{n}\in \left( t\right) _{h}$, where $M$ is a positive constant$.$
\end{proposition}
\begin{proof}
Let $\mathcal{K}\subset \mathcal{D}$ be a compact set. Since $\mathbf{P}
_{p,q}$ is an analytic function on the unit circle, it is also a Lipschitz
function on this region. This and the condition $\left\Vert 2^{-\kappa _{n}}
\widetilde{\mathbf{D}}_{n}h_{n}\right\Vert \leq \frac{1}{2}$ for all $
t_{n}\in \left( t\right) _{h}$ imply that there exists a positive constant $
L$ such that
\begin{equation*}
\left\Vert \widetilde{\mathbf{\phi }}\left( t_{n},\mathbf{\xi }
_{2};h_{n}\right) -\widetilde{\mathbf{\phi }}\left( t_{n},\mathbf{\xi }
_{1};h_{n}\right) \right\Vert \leq L\left\Vert \mathbf{\xi }_{2}-\mathbf{\xi
}_{1}\right\Vert
\end{equation*}
for all $\mathbf{\xi }_{1},\mathbf{\xi }_{2}\in \mathcal{K}$ and $t_{n}\in
\left( t\right) _{h}$. On the other hand, Lemma 4.1 in \cite{Jimenez12 BIT}
implies that there exists a positive constant $M$ such that
\begin{equation*}
\left\Vert \mathbf{z}(t_{n+1})-\mathbf{z}(t_{n})-\widetilde{\mathbf{\phi }}
\left( t_{n},\mathbf{z}(t_{n});h_{n}\right) \right\Vert \leq Mh^{p+q+1}
\end{equation*}
for all $t_{n}\in \left( t\right) _{h}$, where $\mathbf{z}$ is the solution
of the linear ODE (\ref{ODE-LLS-13})-(\ref{ODE-LLS-13b}).
In addition, since $\widetilde{\mathbf{\rho}}$ is an order $\gamma$
approximation to the solution of (\ref{ODE-LLS-14})-(\ref{ODE-LLS-14b}) that
satisfies the condition (\ref{Lipschitz}), the hypotheses of Theorem \ref
{Conv LLRKScheme} hold, which completes the proof.
\end{proof}
The next theorem presents a way to define a class of A-stable LLRK schemes
on the basis of Pad\'{e} approximations to matrix exponentials.
\begin{theorem}
LLRK schemes of the form (\ref{LLRK scheme}) are A-stable if the $(p,q)$-Pad
\'{e} approximation is taken with $p\leq q\leq p+2$. Moreover, if $q=p+1$ or
$q=p+2$, then such LLRK schemes are also L-stable.
\end{theorem}
\begin{proof}
Consider the scalar test equation
\begin{equation*}
\frac{dx\left( t\right) }{dt}=\lambda x\left( t\right) ,
\end{equation*}
where $\lambda$ is a complex number with non-positive real part.
An LLRK scheme of the form (\ref{LLRK scheme}) applied to this autonomous
equation results in the recurrence
\begin{align}
\widetilde{y}_{n+1}& =\widetilde{y}_{n}+\widetilde{\mathbf{\phi }}\left(
t_{n},\widetilde{y}_{n};h_{n}\right) \notag \\
& =\widetilde{y}_{n}+\mathbf{L}(\mathbf{P}_{p,q}(\mathbf{M}))^{2^{\kappa
_{n}}}\mathbf{r,} \label{LLRK scheme for linear ODE}
\end{align}
where $\mathbf{M}=2^{-\kappa _{n}}\widetilde{\mathbf{D}}_{n}h_{n}$ and
\begin{equation*}
\widetilde{\mathbf{D}}_{n}=\left[
\begin{array}{cc}
\lambda & \lambda \widetilde{y}_{n} \\
0 & 0
\end{array}
\right] .
\end{equation*}
Here,
\begin{equation*}
\mathbf{P}_{p,q}(z)=\frac{\mathbf{N}_{p,q}(z)}{\mathbf{D}_{p,q}(z)}
\end{equation*}
denotes the $(p,q)-$Pad\'{e} approximation to $e^{z}$, where
\begin{equation*}
\mathbf{N}_{p,q}(z)=1+\frac{p}{q+p}z+\frac{p(p-1)}{(q+p)(q+p-1)}\frac{z^{2}}{
2!}+\ldots +\frac{p(p-1)...1}{(q+p)...(q+1)}\frac{z^{p}}{p!},
\end{equation*}
and $\mathbf{D}_{p,q}(z)=\mathbf{N}_{q,p}(-z)$.
Since
\begin{equation*}
\left( \mathbf{M}\right) ^{j}=\left[
\begin{array}{cc}
\left( 2^{-\kappa _{n}}h_{n}\lambda \right) ^{j} & \text{ \ }\left(
2^{-\kappa _{n}}h_{n}\lambda \right) ^{j}\widetilde{y}_{n} \\
0 & 0
\end{array}
\right] ,
\end{equation*}
it can be shown that
\begin{equation*}
\mathbf{N}_{p,q}(\mathbf{M})=\left[
\begin{array}{cc}
\mathbf{N}_{p,q}\left( 2^{-\kappa _{n}}h_{n}\lambda \right) & \text{ \ }
\left( \mathbf{N}_{p,q}\left( 2^{-\kappa _{n}}h_{n}\lambda \right) -1\right)
\widetilde{y}_{n} \\
0 & 1
\end{array}
\right] .
\end{equation*}
Likewise,
\begin{equation*}
\mathbf{D}_{p,q}(\mathbf{M})=\left[
\begin{array}{cc}
\mathbf{D}_{p,q}\left( 2^{-\kappa _{n}}h_{n}\lambda \right) & \text{ \ }
\left( \mathbf{D}_{p,q}\left( 2^{-\kappa _{n}}h_{n}\lambda \right) -1\right)
\widetilde{y}_{n} \\
0 & 1
\end{array}
\right] .
\end{equation*}
Hence,
\begin{equation*}
\mathbf{D}_{p,q}^{-1}(\mathbf{M})=\left[
\begin{array}{cc}
\left( \mathbf{D}_{p,q}\left( 2^{-\kappa _{n}}h_{n}\lambda \right) \right)
^{-1} & \text{ \ }-\frac{\left( \mathbf{D}_{p,q}\left( 2^{-\kappa
_{n}}h_{n}\lambda \right) -1\right) }{\left( \mathbf{D}_{p,q}\left(
2^{-\kappa _{n}}h_{n}\lambda \right) \right) }\widetilde{y}_{n} \\
0 & 1
\end{array}
\right] .
\end{equation*}
Therefore,
\begin{align*}
\mathbf{P}_{p,q}(\mathbf{M})& =\mathbf{N}_{p,q}(\mathbf{M})\mathbf{D}
_{p,q}^{-1}(\mathbf{M}) \\
& =\left[
\begin{array}{cc}
\frac{\mathbf{N}_{p,q}\left( 2^{-\kappa _{n}}h_{n}\lambda \right) }{\mathbf{D
}_{p,q}\left( 2^{-\kappa _{n}}h_{n}\lambda \right) } & \text{ \ }\left(
\frac{\mathbf{N}_{p,q}\left( 2^{-\kappa _{n}}h_{n}\lambda \right) }{\mathbf{D
}_{p,q}\left( 2^{-\kappa _{n}}h_{n}\lambda \right) }-1\right) \widetilde{y}
_{n} \\
0 & 1
\end{array}
\right] ,
\end{align*}
and so
\begin{equation*}
\left( \mathbf{P}_{p,q}(\mathbf{M})\right) ^{2^{\kappa _{n}}}=\left[
\begin{array}{cc}
\left( \frac{\mathbf{N}_{p,q}\left( 2^{-\kappa _{n}}h_{n}\lambda \right) }{
\mathbf{D}_{p,q}\left( 2^{-\kappa _{n}}h_{n}\lambda \right) }\right)
^{2^{\kappa _{n}}} & \text{ \ }\left( \left( \frac{\mathbf{N}_{p,q}\left(
2^{-\kappa _{n}}h_{n}\lambda \right) }{\mathbf{D}_{p,q}\left( 2^{-\kappa
_{n}}h_{n}\lambda \right) }\right) ^{2^{\kappa _{n}}}-1\right) \widetilde{y}
_{n} \\
0 & 1
\end{array}
\right] .
\end{equation*}
By substituting the above expression in (\ref{LLRK scheme for linear ODE})
it is obtained that
\begin{equation*}
\widetilde{y}_{n+1}=R(\lambda )\widetilde{y}_{n},
\end{equation*}
where
\begin{equation*}
R(\lambda )=\left( \frac{\mathbf{N}_{p,q}\left( 2^{-\kappa _{n}}h_{n}\lambda
\right) }{\mathbf{D}_{p,q}\left( 2^{-\kappa _{n}}h_{n}\lambda \right) }
\right) ^{2^{\kappa _{n}}}.
\end{equation*}
Since $\Re (2^{-\kappa _{n}}h_{n}\lambda )\leq 0$, Theorem 353A, p. 238 in
\cite{Butcher 2008} implies that $\left\vert R(\lambda )\right\vert \leq 1$
for $p\leq q\leq p+2$. That is, for these values of $p$ and $q$ the LLRK
scheme (\ref{LLRK scheme}) is A-stable. The proof concludes by noting that,
for $q=p+1$ or $q=p+2$, $R(z)\rightarrow 0$ as $z\rightarrow \infty $.
\end{proof}
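The scalar stability function $R$ can also be checked numerically from the definitions above, using $\mathbf{D}_{p,q}(z)=\mathbf{N}_{q,p}(-z)$. A small pure-Python sketch (function names illustrative):

```python
def pade_N(p, q, z):
    """Numerator N_{p,q}(z) of the (p,q)-Pade approximation to e^z:
    the z^j coefficient is p(p-1)...(p-j+1) / ((q+p)...(q+p-j+1) j!)."""
    coeff, s, zj = 1.0, complex(1.0), complex(1.0)
    for j in range(1, p + 1):
        coeff *= (p - j + 1) / ((p + q - j + 1) * j)
        zj *= z
        s += coeff * zj
    return s

def R(p, q, z, kappa=0):
    """Stability function (N_{p,q}(w) / D_{p,q}(w))^(2^kappa) with
    w = 2^-kappa z and D_{p,q}(w) = N_{q,p}(-w)."""
    w = z * 2.0 ** (-kappa)
    return (pade_N(p, q, w) / pade_N(q, p, -w)) ** (2 ** kappa)
```

For the diagonal case $p=q=1$, $R(z)=(1+z/2)/(1-z/2)$ has modulus at most one in the left half-plane, while for $q=p+1$ the modulus decays as $z\rightarrow \infty $, in line with the L-stability claim.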
From an implementation viewpoint, further simplifications of LLRK schemes
can be achieved in order to reduce the computational cost of the
algorithms. For instance, if all the Runge-Kutta coefficients $c_{i}$ are
integer multiples of a common divisor $\kappa $, then the LLRK scheme (\ref
{LLRK scheme}) can be implemented in terms of a few powers of the same
matrix exponential $e^{\kappa h_{n}\widetilde{\mathbf{D}}_{n}}$. To
illustrate this, let us consider the so-called \textit{fourth-order
classical Runge-Kutta scheme} (see, e.g., p. 180 in \cite{Butcher 2008})
with coefficients $c=\left[
\begin{array}{cccc}
0 & \frac{1}{2} & \frac{1}{2} & 1
\end{array}
\right] $. This yields the following efficient order-4 LLRK scheme
\begin{equation}
\widetilde{\mathbf{y}}_{n+1}=\widetilde{\mathbf{y}}_{n}+\widetilde{\mathbf{
\phi }}\left( t_{n},\widetilde{\mathbf{y}}_{n};h_{n}\right) +\widetilde{
\mathbf{\rho }}\left( t_{n},\widetilde{\mathbf{y}}_{n};h_{n}\right) ,
\label{LLRK4 scheme}
\end{equation}
where
\begin{equation*}
\widetilde{\mathbf{\rho }}\left( t_{n},\widetilde{\mathbf{y}}
_{n};h_{n}\right) =\frac{h_{n}}{6}(2\widetilde{\mathbf{k}}_{2}+2\widetilde{
\mathbf{k}}_{3}+\widetilde{\mathbf{k}}_{4}),
\end{equation*}
with\ \ \ \ \ \ \
\begin{align*}
\widetilde{\mathbf{k}}_{i}& =\mathbf{f}\left( t_{n}+c_{i}h_{n},\widetilde{
\mathbf{y}}_{n}+\widetilde{\mathbf{\phi }}(t_{n},\widetilde{\mathbf{y}}
_{n};c_{i}h_{n})+c_{i}h_{n}\widetilde{\mathbf{k}}_{i-1}\right) -\mathbf{f}
\left( t_{n},\widetilde{\mathbf{y}}_{n}\right) \\
& -\mathbf{f}_{\mathbf{x}}\left( t_{n},\widetilde{\mathbf{y}}_{n}\right)
\widetilde{\mathbf{\phi }}\left( t_{n},\widetilde{\mathbf{y}}
_{n};c_{i}h_{n}\right) \ -\mathbf{f}_{t}\left( t_{n},\widetilde{\mathbf{y}}
_{n}\right) c_{i}h_{n},
\end{align*}
$\widetilde{\mathbf{k}}_{1}\equiv \mathbf{0}$, $\widetilde{\mathbf{\phi }}
(t_{n},\widetilde{\mathbf{y}}_{n};\frac{h_{n}}{2})=\mathbf{LAr}$, $
\widetilde{\mathbf{\phi }}(t_{n},\widetilde{\mathbf{y}}_{n};h_{n})=\mathbf{LA
}^{2}\mathbf{r}$, and $\mathbf{A}=(\mathbf{P}_{p,q}(2^{-(\kappa _{n}+1)}
\widetilde{\mathbf{D}}_{n}h_{n}))^{2^{\kappa _{n}}}\approx e^{\widetilde{
\mathbf{D}}_{n}h_{n}/2}$ (so that $\mathbf{A}^{2}\approx e^{\widetilde{
\mathbf{D}}_{n}h_{n}}$),
\begin{equation*}
\widetilde{\mathbf{D}}_{n}=\left[
\begin{array}{ccc}
\mathbf{f}_{\mathbf{x}}(t_{n},\widetilde{\mathbf{y}}_{n}) & \mathbf{f}
_{t}(t_{n},\widetilde{\mathbf{y}}_{n}) & \mathbf{f}(t_{n},\widetilde{\mathbf{
y}}_{n}) \\
0 & 0 & 1 \\
0 & 0 & 0
\end{array}
\right] \in \mathbb{R}^{(d+2)\times (d+2)},
\end{equation*}
and $\kappa _{n}$ is the smallest integer number such that $\left\Vert
2^{-\kappa _{n}}\widetilde{\mathbf{D}}_{n}h_{n}\right\Vert \leq \frac{1}{2}$.
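For a scalar autonomous ODE $x^{\prime }=f(x)$ the scheme above takes a particularly simple form, since the LL increment admits the closed form $\phi (s)=f(y)(e^{as}-1)/a$ with $a=f^{\prime }(y)$. The following pure-Python sketch of one integration step is illustrative only (it bypasses the Pad\'{e} computation of $\widetilde{\mathbf{\phi }}$, which is needed in the multidimensional case; function names are not from the cited references):

```python
import math

def llrk4_step(f, df, y, h):
    """One step of the order-4 LLRK scheme for a scalar autonomous ODE
    x' = f(x). The classical RK4 stages integrate the nonlinear
    residual, with the first stage k_1 = 0 by construction."""
    a, fy = df(y), f(y)

    def phi(s):
        # exact increment of the linearized flow z' = a*(z - y) + f(y)
        return fy * (math.expm1(a * s) / a if a != 0.0 else s)

    c = [0.0, 0.5, 0.5, 1.0]
    k = [0.0, 0.0, 0.0, 0.0]   # k[0] plays the role of k_1 = 0
    for i in (1, 2, 3):
        p = phi(c[i] * h)
        k[i] = f(y + p + c[i] * h * k[i - 1]) - fy - a * p
    return y + phi(h) + h / 6.0 * (2 * k[1] + 2 * k[2] + k[3])
```

As expected, the step is exact for linear equations (the residual stages vanish identically), and for smooth nonlinear $f$ it behaves as an order-4 one-step method.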
Note that the dynamical properties of an order $\gamma$ LLRK discretization,
as stated in section \ref{Sec. teoria}, are inherited by its numerical
implementations if the approximation to the map $\mathbf{\phi}+\mathbf{\rho}$
is $o(h^{\gamma-1})$ and smooth enough (i.e., of class $C^{\gamma}$).
In particular, these conditions are satisfied by the implementations just
introduced, namely, those given by (\ref{LLRK scheme}). This provides
theoretical support to the simulation study presented in \cite{de la Cruz 06}
, \cite{de la Cruz Ph.D. Thesis}, which reports satisfactory dynamical
behavior of LLRK schemes in the neighborhood of invariant sets of ODEs.
Finally note that, as an example, this section has focused on a specific
kind of LLRK scheme, namely, the A-stable scheme (\ref{LLRK4 scheme}) that
combines the A-stable Pad\'{e} algorithm for computing $\mathbf{\varphi }
_{\gamma }$ with the fourth-order classical Runge-Kutta scheme for computing
the solution of the auxiliary equation (\ref{ODE-LLS-14})-(\ref{ODE-LLS-14b}).
However, because of the flexibility in the numerical implementation of the
LLRK methods, specific schemes can be designed for particular classes of
ODEs, e.g., LLRK schemes based on L-stable Pad\'{e} algorithms and
Rosenbrock schemes for stiff equations, or LLRK schemes based on Krylov
algorithms for high-dimensional ODEs. The results of this section apply to
all of them.
\section{Numerical simulations}
In this section, the performance of the LLRK$4$ scheme (\ref{LLRK4 scheme})
is illustrated by means of numerical simulations. To do so, a variety of
ODEs were selected. All simulations were carried out in Matlab2007b, and the
Matlab function\ "expm" was used in all computations involving exponential
matrices.
The first example is taken from \cite{Beyn 1987a} to illustrate the
dynamical behavior of the LLRK$4$ scheme in the neighborhood of hyperbolic
stationary points. For comparative purposes, the order $2$ Local
Linearization scheme of \cite{Jimenez02 AMC} and a straightforward
non-adaptive implementation of the order $5$ Runge-Kutta formula of Dormand
\& Prince \cite{Dormand80} (used in Matlab2007b) are also considered. They
will be denoted by LL$2$ and RK$45$, respectively.
\textbf{Example 1}
\begin{align}
\frac{d\mathbf{x}_{1}}{dt} & =-2\mathbf{x}_{1}+\mathbf{x}_{2}+1-\mu f\left(
\mathbf{x}_{1},\lambda\right) , \label{EJ1-E1} \\
\frac{d\mathbf{x}_{2}}{dt} & =\mathbf{x}_{1}-2\mathbf{x}_{2}+1-\mu f\left(
\mathbf{x}_{2},\lambda\right) , \label{EJ1-E2}
\end{align}
where $f\left( u,\lambda\right) =u\left( 1+u+\lambda u^{2}\right) ^{-1}$.
For $\mu=15$, $\lambda=57$, this system has two stable stationary points and
one unstable stationary point in the region $0\leq x_{1},x_{2}\leq1$. There
is a nontrivial stable manifold for the unstable point which separates the
basins of attraction for the two stable points.
Figure 1a) presents the phase portrait obtained by the LLRK$4$ scheme with a
very small step-size $\left( h=2^{-13}\right) $, which can be regarded as
the exact solution for comparative purposes. The stable manifold $M_{s}$ of
the unstable point was found by bisection. Figures 1b), 1c) and 1d) show the
phase portraits obtained, respectively, by the LL$2$, the RK$45$ and the LLRK
$4$ schemes with step-size $h=2^{-2}$ fixed. It can be observed that the RK$
45$ discretization fails to correctly reproduce the phase portrait of the
underlying system near one of the point attractors. In contrast, the
exact phase portrait is adequately approximated near both point attractors
by the LL$2$ and LLRK$4$ schemes, the latter being much more accurate.
Another significant difference in the integration of this equation appears
near the stable manifold $M_{s}$. Changes in the intersection point $(0,\xi
_{h})$ of the approximate stable manifold $M_{s}^{h}$ with the $x_{2}$-axis
are shown in Table I for the considered schemes. The values of $\xi _{h}$
were calculated by a bisection method and the estimated order of convergence
was calculated as
\begin{equation*}
r_{h}=\frac{1}{\ln 2}\ln (\frac{\xi _{h}-\xi _{h/2}}{\xi _{h/2}-\xi _{h/4}}).
\end{equation*}
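This estimate is a direct application of Richardson-type extrapolation to three successive grid refinements; as a sketch:

```python
import math

def estimated_order(xi_h, xi_h2, xi_h4):
    """r_h = log2((xi_h - xi_{h/2}) / (xi_{h/2} - xi_{h/4})), the
    estimated order of convergence from three successive refinements."""
    return math.log2((xi_h - xi_h2) / (xi_h2 - xi_h4))
```

For instance, the LL$2$ entry $r_{h}=1.931$ of Table I is recovered from the three tabulated values $\xi _{2^{-2}}=0.69688$, $\xi _{2^{-3}}=0.61727$ and $\xi _{2^{-4}}=0.59639$.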
For $h<2^{-4}$, the reported values of $r_{h}$ for the LL$2$ and LLRK$4$
schemes are in concordance with the expected asymptotic behavior $\xi
_{h}=\xi _{0}+Ch^{r}+O(h^{r+1})$ stated by Theorem \ref{Prop. 5.3} and
Theorem 3 in \cite{Jimenez02 AMC}, respectively, whereas the values obtained
for the RK$45$ scheme do not match the behavior $r_{h}\approx 5$ stated by
Theorem 3.1 in \cite{Beyn 1987a}. This means that the LL$2$ and LLRK$4$
schemes provide better approximations to the stable and unstable manifolds
on bigger neighborhoods of the equilibrium points, which is obviously a
favorable result for them. These results also show that the LLRK$4$ scheme
preserves the basins of attraction of the ODE (\ref{EJ1-E1})-(\ref{EJ1-E2})
much better than the RK$45$ and LL$2$ schemes.
\begin{figure}
\caption{Phase portrait of the system (\ref{EJ1-E1})-(\ref{EJ1-E2}).}
\end{figure}
In what follows, we compare the accuracy of the LLRK$4$ scheme with those of
the LL$2$ scheme and the Matlab2007b codes ode$45$ and ode$15s$ in the
integration of a variety of ODEs. We recall that the code ode$45$ is a
variable step-size implementation of the explicit Runge-Kutta $(4,5)$ pair
of Dormand \& Prince \cite{Dormand80}, which is considered by many authors
the most recommendable scheme to apply as a first try for most problems. On
the other hand, the code ode$15s$ is a quasi-constant step-size
implementation in terms of backward differences of the Klopfenstein-Shampine
family of numerical differentiation formulas of orders $1-5$, which is
designed for stiff problems when ode$45$ fails to provide the desired result
\cite{Shampine97}.\newline
\begin{center}
$\ $
\begin{tabular}
[c]{|c|c|c|c|}\hline
Step-size
$\backslash$
Scheme & LL$2$ & RK$45$ & LLRK$4$\\\hline
\begin{tabular}
[c]{l}
$h$\\
$2^{-1}$\\
$2^{-2}$\\
$2^{-3}$\\
$2^{-4}$\\
$2^{-5}$\\
$2^{-6}$\\
$2^{-7}$\\
$2^{-8}$
\end{tabular}
&
\begin{tabular}{ll}
$\xi_{h}$ & $r_{h}$ \\
\multicolumn{1}{c}{$0.71911$} & \multicolumn{1}{c}{} \\
\multicolumn{1}{c}{$0.69688$} & \multicolumn{1}{c}{$1.931$} \\
\multicolumn{1}{c}{$0.61727$} & \multicolumn{1}{c}{$2.190$} \\
\multicolumn{1}{c}{$0.59639$} & \multicolumn{1}{c}{$2.145$} \\
\multicolumn{1}{c}{$0.59182$} & \multicolumn{1}{c}{$2.056$} \\
\multicolumn{1}{c}{$0.59079$} & \multicolumn{1}{c}{$2.027$} \\
\multicolumn{1}{c}{$0.59054$} & $2.014$ \\
\multicolumn{1}{c}{$0.59048$} &
\end{tabular}
&
\begin{tabular}{ll}
$\xi_{h}$ & $r_{h}$ \\
$0.70377$ & \\
$0.53673$ & $3.00$ \\
$0.59859$ & $4.18$ \\
$0.59088$ & $7.23$ \\
$0.590458$ & $6.93$ \\
$0.59045593$ & $6.39$ \\
$0.5904559168$ & $5.98$ \\
$0.5904559165$ &
\end{tabular}
&
\begin{tabular}{ll}
$\xi_{h}$ & $r_{h}$ \\
$0.56615$ & \\
$0.58441$ & $5.142$ \\
$0.59032$ & $2.384$ \\
$0.59049$ & $3.354$ \\
$0.590459$ & $3.901$ \\
$0.5904561$ & $3.973$ \\
$0.590455917$ & $3.989$ \\
$0.590455916$ &
\end{tabular}
\\ \hline
\end{tabular}
\end{center}
{\small Table I. Values of }$\xi_{h}${\small \ and }$r_{h}${\small \
computed by the LL}${\small 2}${\small , RK}${\small 45}$ {\small and LLRK}$
{\small 4}$ {\small schemes in the integration of the system (\ref{EJ1-E1})-(
\ref{EJ1-E2}), for different values of }$h${\small .}\newline
In order to compare the (non-adaptive) LL schemes with the adaptive Matlab
codes, the following procedure was carried out. First, one of the Matlab
codes is used to compute the solution with fixed values of the relative ($RT$)
and absolute ($AT$) tolerances; the resulting integration steps $
(t)_{h} $ are then set as input to the other schemes, so that all solutions
are obtained at the same integration steps. Second, the Matlab code ode15s
is used to compute on $(t)_{h}$ a very accurate solution $\mathbf{z}$ with $
RT=AT=10^{-13}$. Third, the approximate solution $\mathbf{y}$ of the ODE is
computed for each scheme on $(t)_{h}$, and the relative error
\begin{equation*}
RE=\underset{{\small i=1,\ldots ,d;}\text{ }{\small t}_{j}{\small \in (t)}
_{h}}{\max }\left\vert \frac{\mathbf{z}_{i}(t_{j})-\mathbf{y}_{i}(t_{j})}{
\mathbf{z}_{i}(t_{j})}\right\vert
\end{equation*}
is evaluated.
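With the reference solution $\mathbf{z}$ and the approximation $\mathbf{y}$ stored componentwise over the grid $(t)_{h}$, the relative error $RE$ is a direct transcription (names illustrative):

```python
def relative_error(z, y):
    """RE = max over components i and grid points t_j of
    |(z_i(t_j) - y_i(t_j)) / z_i(t_j)|, where z is the reference
    solution and y the approximate one (both lists of lists)."""
    return max(abs((zij - yij) / zij)
               for zrow, yrow in zip(z, y)
               for zij, yij in zip(zrow, yrow))
```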
The following four examples are of the form
\begin{equation}
\frac{d\mathbf{x}}{dt}=\mathbf{Ax+f}(\mathbf{x),} \label{Semi-Linear}
\end{equation}
where $\mathbf{A}$ is a square constant matrix, and $\mathbf{f}$ is a
nonlinear function of $\mathbf{x}$. The vector fields of the first two have
Jacobians with eigenvalues on or near the imaginary axis, which makes these
oscillators difficult to integrate with a number of conventional
integrators \cite{Gaffney84,Shampine97}. The other two are also hard for
conventional explicit schemes since they are examples of stiff equations
\cite{Shampine97}. Example 5 has an additional complexity for a number of
integrators that do not update the Jacobians of the vector field at each
integration step \cite{Shampine97,Hochbruck-etal09}: the Jacobian of the
linear term has positive eigenvalues, which poses a problem for the
integration in a neighborhood of the stable equilibrium point $\mathbf{x}=1$.
\textbf{Example 2}. Periodic linear:
\begin{equation*}
\frac{d\mathbf{x}}{dt}=\mathbf{A}(\mathbf{x}+2),
\end{equation*}
with
\begin{equation*}
\mathbf{A}=\left[
\begin{array}{cc}
i & 0 \\
0 & -i
\end{array}
\right] ,
\end{equation*}
$\mathbf{x}_{1}(t_{0})=-2.5$, $\mathbf{x}_{2}(t_{0})=-1.5$, and $
[t_{0},T]=[0,4\pi ]$.
\textbf{Example 3. }Periodic linear plus nonlinear part:
\begin{equation*}
\frac{d\mathbf{x}}{dt}=\mathbf{A}(\mathbf{x}+2)+0.1\mathbf{x}^{2},
\end{equation*}
where the matrix $\mathbf{A}$ is defined as in the previous example, $
\mathbf{x}(t_{0})=1$, and $[t_{0},T]=[0,4\pi ]$.
\textbf{Example 4. }Stiff equation:
\begin{equation*}
\frac{d\mathbf{x}}{dt}=-100\mathbf{H}(\mathbf{x+1}),
\end{equation*}
where $\mathbf{H}$ is the 12-dimensional Hilbert matrix (with condition
number $1.69\times 10^{16}$), $\mathbf{x}_{i}(t_{0})=1$, and $
[t_{0},T]=[0,1] $.
\textbf{Example 5. }Stiff linear plus nonlinear part:
\begin{equation*}
\frac{d\mathbf{x}}{dt}=100\mathbf{H}(\mathbf{x}-\mathbf{1})+100(\mathbf{x}-
\mathbf{1})^{2}-60(\mathbf{x}^{3}-\mathbf{1}),
\end{equation*}
where $\mathbf{H}$ is the 12-dimensional Hilbert matrix, $\mathbf{x}
_{i}(t_{0})=-0.5$, and $[t_{0},T]=[0,1]$.
\newline
\newline
\begin{center}
$\ $
\begin{tabular}{|c|c|c|c|c|c|}
\hline
{\small Example} & {\small Scheme} & $
\begin{array}{c}
\text{{\small Relative}} \\
\text{{\small Tolerance}}
\end{array}
$ & $
\begin{array}{c}
\text{{\small Absolute}} \\
\text{{\small Tolerance}}
\end{array}
$ & $
\begin{array}{c}
\text{{\small NS}} \\
\end{array}
$ & $
\begin{array}{c}
\text{{\small Relative}} \\
\text{{\small Error}}
\end{array}
$ \\ \hline
\multicolumn{1}{|l|}{$
\begin{array}{c}
\text{{\small 2 : Periodic linear}}
\end{array}
$} & $
\begin{array}{c}
{\small ode15s}^{\ast} \\
{\small ode45} \\
{\small LL2} \\
{\small LLRK4}
\end{array}
$ & $
\begin{array}{c}
{\small 10}^{-3} \\
{\small 5\times10}^{-6} \\
{\small -} \\
{\small -}
\end{array}
$ & $
\begin{array}{c}
{\small 10}^{-6} \\
{\small 5\times10}^{-9} \\
{\small -} \\
{\small -}
\end{array}
$ & $
\begin{array}{c}
{\small 334} \\
{\small 340} \\
{\small 334} \\
{\small 334}
\end{array}
$ & \multicolumn{1}{|l|}{$
\begin{array}{c}
{\small 0.19} \\
{\small 8.2\times10}^{-5} \\
{\small 1.6\times10}^{-12} \\
{\small 1.6\times10}^{-12}
\end{array}
$} \\ \hline
\multicolumn{1}{|l|}{$
\begin{array}{c}
\text{{\small 3 : Periodic linear}} \\
\text{{\small plus nonlinear part}}
\end{array}
$} & $
\begin{array}{c}
{\small ode15s}^{\ast} \\
{\small ode45} \\
{\small LL2} \\
{\small LLRK4}
\end{array}
$ & $
\begin{array}{c}
{\small 10}^{-3} \\
{\small 6\times10}^{-6} \\
{\small -} \\
{\small -}
\end{array}
$ & $
\begin{array}{c}
{\small 10}^{-6} \\
{\small 6\times10}^{-9} \\
{\small -} \\
{\small -}
\end{array}
$ & $
\begin{array}{c}
{\small 287} \\
{\small 289} \\
{\small 287} \\
{\small 287}
\end{array}
$ & \multicolumn{1}{|l|}{$
\begin{array}{c}
{\small 0.30} \\
{\small 1.3\times10}^{-4} \\
{\small 3.1\times10}^{-2} \\
{\small 1.1\times10}^{-5}
\end{array}
$} \\ \hline
\multicolumn{1}{|l|}{$
\begin{array}{c}
\text{{\small 4 : Stiff linear}}
\end{array}
$} & $
\begin{array}{c}
{\small ode15s}^{\ast} \\
{\small ode45} \\
{\small LL2} \\
{\small LLRK4}
\end{array}
$ & $
\begin{array}{c}
{\small 10}^{-3} \\
{\small 5\times10}^{-4} \\
{\small -} \\
{\small -}
\end{array}
$ & $
\begin{array}{c}
{\small 10}^{-6} \\
{\small 5\times10}^{-7} \\
{\small -} \\
{\small -}
\end{array}
$ & $
\begin{array}{c}
{\small 66} \\
{\small 66} \\
{\small 66} \\
{\small 66}
\end{array}
$ & \multicolumn{1}{|l|}{$
\begin{array}{c}
{\small 6.7\times10}^{-2} \\
{\small 5.3\times10}^{-3} \\
{\small 1.8\times10}^{-10} \\
{\small 1.8\times10}^{-10}
\end{array}
$} \\ \hline
\multicolumn{1}{|l|}{$
\begin{array}{c}
\text{{\small 5 : Stiff linear \ \ \ \ \ \ \ }} \\
\text{{\small plus nonlinear part}}
\end{array}
$} & $
\begin{array}{c}
{\small ode15s}^{\ast} \\
{\small ode45} \\
{\small LL2} \\
{\small LLRK4}
\end{array}
$ & $
\begin{array}{c}
{\small 10}^{-2} \\
{\small 10}^{-1} \\
{\small -} \\
{\small -}
\end{array}
$ & $
\begin{array}{c}
{\small 10}^{-4} \\
{\small 10}^{-3} \\
{\small -} \\
{\small -}
\end{array}
$ & $
\begin{array}{c}
{\small 49} \\
{\small 104} \\
{\small 49} \\
{\small 49}
\end{array}
$ & \multicolumn{1}{|l|}{$
\begin{array}{c}
{\small 0.31} \\
{\small 0.37} \\
{\small 0.43} \\
{\small 4.3\times10}^{-5}
\end{array}
$} \\ \hline
\multicolumn{1}{|l|}{$
\begin{array}{c}
\text{{\small 6 : Nonlinear}} \\
\text{{\small (no stiff)}}
\end{array}
$} & $
\begin{array}{c}
{\small ode15s} \\
{\small ode45}^{\ast} \\
{\small LL2} \\
{\small LLRK4}
\end{array}
$ & $
\begin{array}{c}
{\small 10}^{-2} \\
{\small 10}^{-3} \\
{\small -} \\
{\small -}
\end{array}
$ & $
\begin{array}{c}
{\small 10}^{-5} \\
{\small 10}^{-6} \\
{\small -} \\
{\small -}
\end{array}
$ & $
\begin{array}{c}
{\small 103} \\
{\small 47} \\
{\small 47} \\
{\small 47}
\end{array}
$ & $
\begin{array}{c}
{\small 0.35} \\
{\small 0.08} \\
{\small 4.19} \\
{\small 0.25}
\end{array}
$ \\ \hline
\multicolumn{1}{|l|}{$
\begin{array}{c}
\text{{\small 7 : Nonlinear \ \ \ }} \\
\text{{\small (moderate stiff)}}
\end{array}
$} & $
\begin{array}{c}
{\small ode15s} \\
{\small ode45}^{\ast} \\
{\small LL2} \\
{\small LLRK4}
\end{array}
$ & $
\begin{array}{c}
{\small 1.5\times10}^{-9} \\
{\small 10}^{-7} \\
{\small -} \\
{\small -}
\end{array}
$ & $
\begin{array}{c}
{\small 1.5\times10}^{-12} \\
{\small 10}^{-10} \\
{\small -} \\
{\small -}
\end{array}
$ & $
\begin{array}{c}
{\small 2281} \\
{\small 2285} \\
{\small 2285} \\
{\small 2285}
\end{array}
$ & \multicolumn{1}{|l|}{$
\begin{array}{c}
{\small 1.2\times10}^{-3} \\
{\small 1.6\times10}^{-3} \\
{\small 400} \\
{\small 6.9\times10}^{-3}
\end{array}
$} \\ \hline
\end{tabular}
\end{center}
{\small Table II. Accuracy of the LL2, LLRK4, ode45 and ode15s
schemes in the integration of Examples (2)-(7).
The symbol * denotes the Matlab code used to set the time partition $(t)_{h}$
in each example. NS denotes the number of steps
required by each scheme to compute the solution on $(t)_{h}$.}\newline
The results of the integration of these equations with each scheme
are shown in Table II. For illustration, Figure 2 shows the path of the variable $
\mathbf{x}_{1}$ and its approximation $\mathbf{y}_{1}$ obtained by the LLRK$
4$ scheme in the integration of these equations. Remarkably, in all
the examples, the relative error of the solution obtained by the
LLRK$4$ scheme is much lower than those of the LL$2$, ode$45$ and
ode$15s$ schemes with the same or a lower number of steps. These results are
easily understood for five reasons: 1) the dynamics of these
equations strongly depend on the linear part of their vector fields;
2) the LL$2$ and LLRK$4$ schemes preserve the stability of
linear systems for all step sizes, which is not so for conventional
explicit integrators; 3) the LL$2$ and LLRK$4$ schemes integrate linear ODEs
"exactly" (up to the precision of the floating-point arithmetic),
a property not satisfied by conventional explicit and implicit schemes;
4) the LL$2$ and LLRK$4$
schemes update the exact Jacobian of the vector field at each
integration step, which is not done by most conventional schemes;
and 5) the LLRK$4$ scheme has a higher order of convergence than the LL$2$
scheme. Further, note that although the LLRK$4$ scheme is not
designed for the integration of stiff ODEs in general (because the
auxiliary equation (\ref{ODE-LLS-14})-(\ref{ODE-LLS-14b}) might
\textquotedblleft inherit\textquotedblright\ the stiffness of the
original one), it is clear that, by construction, it is suitable for
equations whose stiffness is confined to the linear part, such as
the classes of stiff linear and semilinear equations represented by
Examples $4$ and $5$. This is so because, at each integration
step, the stiff linear term is
locally removed from the vector field of the auxiliary equation (\ref
{ODE-LLS-14}); in this way, the stiff linear part is well
integrated by
the (A-stable) LL scheme and the resulting non-stiff equation (\ref
{ODE-LLS-14}) can be well integrated by the explicit RK scheme.
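The stability property in point 2) can be illustrated on a scalar stiff test problem; the coefficient and step size below are arbitrary illustrative choices, not taken from the paper's experiments:

```python
import math

# dx/dt = lam*x with lam = -100, x(0) = 1. For h = 0.05 the forward
# Euler amplification factor 1 + h*lam = -4 lies outside the stability
# region, so the Euler iterates blow up, while the exponential
# (LL-type) update e^{h*lam} is exact and decays for every step size.
lam, h, steps = -100.0, 0.05, 20
x_euler, x_exact = 1.0, 1.0
for _ in range(steps):
    x_euler *= 1.0 + h * lam      # explicit Euler update
    x_exact *= math.exp(h * lam)  # exact propagator
```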
The following two examples are well known nonlinear oscillators.
\textbf{Example 6}. Non-stiff nonlinear:
\begin{align*}
\frac{d\mathbf{x}_{1}}{dt}& =1+\mathbf{x}_{1}^{2}\mathbf{x}_{2}-4\mathbf{x}
_{1}, \\
\frac{d\mathbf{x}_{2}}{dt}& =3\mathbf{x}_{1}-\mathbf{x}_{1}^{2}\mathbf{x}_{2}
\end{align*}
where $\mathbf{x}_{1}(t_{0})=1.5$, $\mathbf{x}_{2}(t_{0})=3$, and $
[t_{0},T]=[0,20]$. This equation, known as the Brusselator equation, is a
typical test equation for non-stiff nonlinear problems (see, e.g., \cite
{Hairer-Wanner93}).
\textbf{Example 7}. Mild-stiff nonlinear:
\begin{align*}
\frac{d\mathbf{x}_{1}}{dt}& =\mathbf{x}_{2}, \\
\frac{d\mathbf{x}_{2}}{dt}& =\varepsilon ((1-\mathbf{x}_{2}^{2})\mathbf{x}
_{1}+\mathbf{x}_{2}),
\end{align*}
where $\varepsilon =10^{3}$, $\mathbf{x}_{1}(t_{0})=2$, $\mathbf{x}
_{2}(t_{0})=0$, and $[t_{0},T]=[0,2]$. This equation, known as the Van der Pol
equation, is a typical test equation for stiff nonlinear problems (see, e.g.,
\cite{Hairer-Wanner96}).
\begin{figure}
\caption{Path of the variable $\mathbf{x}_{1}$ and its approximation $\mathbf{y}_{1}$ obtained by the LLRK$4$ scheme.}
\end{figure}
The results of the integration of the last two equations with each scheme
are also shown in Table II and Figure $2$. For these equations, the
relative error of the solutions obtained by the LLRK$4$ scheme is
much lower than that of
the LL$2$ scheme, but quite similar to those of the codes ode$45$ and ode$
15s$ (which have a higher order of convergence). This indicates that the LLRK$
4$ scheme is also appropriate for integrating non-stiff and
mild-stiff nonlinear problems.
In summary, the results of Table II clearly indicate that the non-adaptive
implementation of the LLRK$4$ scheme provides similar or much better
accuracy than the Matlab codes with an equal or lower number of steps in the
integration of a variety of equations. This suggests that adaptive
implementations of the LLRK discretizations might achieve accuracy similar to
that of the Matlab codes with a lower or much lower number of steps, a
subject that has already been studied in \cite{Sotolongo11,Jimenez12}.
Finally, we want to point out that equations of type (\ref{Semi-Linear})
frequently arise from the discretization of nonlinear partial differential
equations. In such cases, moderate or high dimensional ODEs of that form are
obtained and, obviously, LLRK schemes like (\ref{LLRK4 scheme}) based
on Pad\'{e} approximations are not appropriate. Nevertheless, because of the
flexibility of the high order Local Linearization approach described in
Section \ref{Section LLA}, feasible high order LL schemes can be designed
for this purpose too. For instance, by taking into account that
\begin{equation*}
\mathbf{\phi }(t_{n},\mathbf{y}_{n};\frac{h_{n}}{2})=\mathbf{\varphi }(\frac{
h_{n}}{2}\mathbf{f}_{\mathbf{x}}(\mathbf{y}_{n}))\mathbf{f}(\mathbf{y}_{n}),
\end{equation*}
where $\mathbf{\varphi }(z)=(e^{z}-1)/z$, the LLRK$4$ scheme (\ref{LLRK4
scheme}) can easily be modified to define an order $4$ LLRK scheme for high
dimensional ODEs. Indeed, such a scheme can be defined by the same expression (
\ref{LLRK4 scheme}), but replacing the formulas for $\widetilde{\mathbf{\phi }
}(t_{n},\widetilde{\mathbf{y}}_{n};\frac{h_{n}}{2})$ and $\widetilde{\mathbf{
\phi }}(t_{n},\widetilde{\mathbf{y}}_{n};h_{n})$ by
\begin{equation*}
\widetilde{\mathbf{\phi }}(t_{n},\widetilde{\mathbf{y}}_{n};\frac{h_{n}}{2})=
\widetilde{\mathbf{\varphi }}(\frac{h_{n}}{2}\mathbf{f}_{\mathbf{x}}(
\widetilde{\mathbf{y}}_{n}))\mathbf{f}(\widetilde{\mathbf{y}}_{n})
\end{equation*}
and
\begin{equation*}
\widetilde{\mathbf{\phi }}(t_{n},\widetilde{\mathbf{y}}_{n};h_{n})=\left(
\frac{h_{n}}{4}\mathbf{f}_{\mathbf{x}}(\widetilde{\mathbf{y}}_{n})\widetilde{
\mathbf{\varphi }}(\frac{h_{n}}{2}\mathbf{f}_{\mathbf{x}}(\widetilde{\mathbf{
y}}_{n}))+\mathbf{I}\right) \widetilde{\mathbf{\phi }}(t_{n},\widetilde{
\mathbf{y}}_{n};\frac{h_{n}}{2}),
\end{equation*}
respectively, where $\widetilde{\mathbf{\varphi }}$ denotes the
approximation to $\mathbf{\varphi }$ provided by the Krylov subspace method
(see, e.g., \cite{Hochbruck-etal98}). Then, a comparison with
exponential-type integrators designed for high dimensional equations of the
form (\ref{Semi-Linear}) could be carried out, but this subject is outside the
scope of this paper.
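A minimal Arnoldi sketch of such a Krylov approximation of $\mathbf{\varphi }(\mathbf{A})\mathbf{v}$ is given below. The function names and the dense Taylor exponential used for the small projected matrix are our own illustrative choices; this is not the implementation of \cite{Hochbruck-etal98}.

```python
import numpy as np

def expm_taylor(M, terms=40):
    """Dense exponential of a small matrix by truncated Taylor series."""
    E, T = np.eye(M.shape[0]), np.eye(M.shape[0])
    for k in range(1, terms):
        T = T @ M / k
        E = E + T
    return E

def phi_krylov(A, v, m=20):
    """Approximate phi(A) v, phi(z) = (e^z - 1)/z, by Arnoldi projection
    onto the m-dimensional Krylov subspace span{v, Av, ..., A^{m-1}v}."""
    n = v.size
    beta = np.linalg.norm(v)
    V = np.zeros((n, m + 1))
    Hs = np.zeros((m + 1, m))
    V[:, 0] = v / beta
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):          # modified Gram-Schmidt
            Hs[i, j] = V[:, i] @ w
            w = w - Hs[i, j] * V[:, i]
        Hs[j + 1, j] = np.linalg.norm(w)
        if Hs[j + 1, j] < 1e-12:        # happy breakdown: subspace invariant
            m = j + 1
            break
        V[:, j + 1] = w / Hs[j + 1, j]
    Hm = Hs[:m, :m]
    # phi(Hm) e1 is the top-right column of expm([[Hm, e1], [0, 0]])
    M = np.zeros((m + 1, m + 1))
    M[:m, :m] = Hm
    M[0, m] = 1.0
    return beta * V[:, :m] @ expm_taylor(M)[:m, m]
```

Only matrix-vector products with $\mathbf{A}$ are needed, which is what makes this approach feasible in high dimensions.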
\section{Conclusions}
In summary, this paper has shown the following: 1) the LLRK approach defines
a general class of high order A-stable explicit integrators; 2) in contrast
with other A-stable explicit methods (such as Rosenbrock or exponential
integrators), the RK coefficients involved in the LLRK integrators are not
constrained by any stability condition and just need to satisfy the
usual, well-known order conditions for RK schemes, which makes the LLRK
approach more flexible and simple; 3) LLRK integrators have a number of
convenient dynamical properties, such as linearization preservation and
conservation of the exact solution dynamics around hyperbolic equilibrium
points and periodic orbits; 4) unlike the majority of previously published
works on exponential integrators, the above mentioned convergence, stability
and dynamical properties are studied not only for the discretizations but
also for the numerical schemes that implement them in practice; 5) because
of the flexibility in the numerical implementation of the LLRK methods,
specific-purpose schemes can be designed for certain classes of ODEs, e.g.,
stiff equations, high dimensional systems of equations, etc.; 6) the order $
4$ LLRK formula considered in this paper provides similar or much better
accuracy than the order $5$ Matlab codes with an equal or lower number of steps
in the integration of a variety of equations, as well as a much better
reproduction of the dynamics of the underlying equation near hyperbolic
stationary points.
Finally, it is worth pointing out that the theoretical properties of the LLRK
methods studied here strongly support the results of the numerical
experiments carried out by the authors in previous works \cite{de la Cruz 06}
, \cite{de la Cruz Ph.D. Thesis}, in which the performance of other LLRK
schemes is compared with that of existing explicit and implicit schemes.
\end{document}
\begin{document}
\title[Nested Canalyzing Functions And Their Average Sensitivities]{Nested Canalyzing Functions And Their Average Sensitivities}
\author[Yuan Li, John O. Adeyeye, Reinhard Laubenbacher]{Yuan Li$^{1\ast}$, John O. Adeyeye$^{2\ast}$, Reinhard Laubenbacher$^{3}$}
\address{{\small $^{1}$Department of Mathematics, Winston-Salem State University, NC
27110,USA}\\
{\small email: [email protected] }\\
$^{2}$Department of Mathematics, Winston-Salem State University, NC 27110,USA,
{\small email: [email protected]}\\
{\small $^{3}$Virginia Bioinformatics Institute, Virginia Tech, Blacksburg, VA
24061,USA }\\
{\small email: [email protected]}}
\thanks{$^{\ast}$ Supported by an award from the USA DoD $\#$ W911NF-11-10166}
\keywords{Nested Canalyzing Function, Layer Number, Extended Monomial, Multinomial Coefficient,
Dynamical System, Hamming Weight, Activity, Average Sensitivity. }
\date{}
\begin{abstract}
In this paper, we obtain a complete characterization of nested canalyzing
functions (NCFs) by deriving their unique algebraic normal form (polynomial
form). We introduce a new concept, the LAYER NUMBER of an NCF. Based on this, we obtain explicit formulas for
the following important parameters: 1) the number of all nested canalyzing functions, 2) the number of all NCFs with a given LAYER NUMBER, 3) the Hamming weight of any NCF, 4) the activity number of any variable of any NCF, 5) the average sensitivity of any NCF.
Based on these formulas, we show that the activity number is greater for variables in outer layers and equal for variables in the same layer. We show that the average sensitivity attains its minimal value when the NCF has only one layer. We also prove that the average sensitivity of any NCF (no matter how many variables it has) is between $0$ and $2$. Hence, we show theoretically why NCFs are stable, since a random Boolean function has average sensitivity $\frac{n}{2}$. Finally, we conjecture that an NCF attains the maximal average sensitivity if it has the maximal LAYER NUMBER $n-1$. Hence, we conjecture that the uniform upper bound for the average sensitivity of any NCF can be reduced to $\frac{4}{3}$, which is tight.
\end{abstract}
\maketitle
\section{Introduction}
\label{sec-intro} Canalyzing functions were introduced by Kauffman \cite{Kau1}
as appropriate rules in Boolean network models of gene regulatory networks.
Canalyzing functions are known to have other important applications in
physics, engineering and biology. In \cite{Mor} it was shown that the dynamics
of a Boolean network which operates according to canalyzing rules is robust
with regard to small perturbations. In \cite{Win2}, W. Just, I. Shmulevich and J. Konvalina
derived an exact formula for the number of canalyzing functions. In \cite{Yua2},
the definition of canalyzing functions was generalized to any finite field $\mathbb{F}_{q}$,
where $q$ is a power of a prime. Both exact formulas and
asymptotics for the number of generalized canalyzing functions were obtained.
Nested Canalyzing Functions (NCFs) were introduced recently in \cite{Kau2}.
One important characteristic of (nested) canalyzing functions is that they
exhibit a stabilizing effect on the dynamics of a system. That is, small
perturbations of an initial state should not grow in time and must eventually
end up in the same attractor as the initial state. The stability is typically
measured using so-called Derrida plots which monitor the Hamming distance
between a random initial state and its perturbed state as both evolve over
time. If the Hamming distance decreases over time, the system is considered
stable. The slope of the Derrida curve is used as a numerical measure of
stability. Roughly speaking, the phase space of a stable system has few
components and the limit cycle of each component is short.
In \cite{Kau3}, the authors studied the dynamics of nested canalyzing Boolean
networks over a variety of dependency graphs. That is, for a given random
graph on $n$ nodes, where the in-degree of each node is chosen at random
between $0$ and $k$, where $k\leq n$, a nested canalyzing function is assigned
to each node in terms of the in-degree variables of that node. The dynamics of
these networks were then analyzed and the stability measured using Derrida
plots. It is shown that nested canalyzing networks are remarkably stable
regardless of the in-degree distribution and that the stability increases as
the average number of inputs of each node increases.
An extensive analysis of available biological data on gene regulations (about
150 genes) showed that 139 of them are regulated by canalyzing functions
\cite{Har}. In \cite{Kau3, Nik}, it was shown that 133 of the 139 are in fact
nested canalyzing.
Most published molecular networks are given in the form of a wiring diagram,
or dependency graph, constructed from experiments and prior published
knowledge. However, for most of the molecular species in the network, little
knowledge, if any, could be deduced about their regulatory mechanisms, for
instance in the gene transcription networks in yeast \cite{Herr} and E. Coli
\cite{Bar}. Each one of these networks contains more than 1000 genes. Kauffman
et al. \cite{Kau2} investigated the effect of the topology of a sub-network of
the yeast transcriptional network where many of the transcriptional rules are
not known. They generated ensembles of different models where all models have
the same dependency graph. Their heuristic results imply that the dynamics of
those models which used only nested canalyzing functions were far more stable
than the randomly generated models. Since it is already established that the
yeast transcriptional network is stable, this suggests that the unknown
interaction rules are very likely nested canalyzing functions. In a recent
article \cite{Bal}, the whole transcriptional network of yeast, which has 3459
genes as well as the transcriptional networks of E. Coli (1481 genes) and B.
subtillis (840 genes) have been analyzed in a similar fashion, with similar findings.
These heuristic and statistical results show that the class of nested
canalyzing functions is very important in systems biology. It is shown in
\cite{Jar} that this class is identical to the class of so-called unate
cascade Boolean functions, which has been studied extensively in engineering
and computer science. It was shown in \cite{But} that this class produces the
binary decision diagrams with the shortest average path length. Thus, a more
detailed mathematical study of this class of functions has applications to
problems in engineering as well.
In \cite{Abd2}, the authors provided a
description of nested canalyzing functions. As a corollary of the equivalence,
a formula in the literature for the number of unate cascade functions also
provides a formula for the number of nested canalyzing functions. Recently,
in \cite{Mur2}, those results were generalized to multi-state nested
canalyzing functions over finite fields $\mathbb{F}_{p}$, where $p$ is a prime.
A formula for the number of the generalized NCFs was obtained as a
recursive relation.
In \cite{Coo}, Cook et al. introduced the notion of sensitivity as a combinatorial
measure for Boolean functions, providing lower bounds on the time needed by a
concurrent-read, exclusive-write parallel random access machine (CREW PRAM). It was extended by Nisan \cite{Nis} to block
sensitivity. It is still open whether sensitivity and block sensitivity are polynomially related (they are equal for monotone Boolean
functions). Although the definition is straightforward, the sensitivity is understood only for a few classes of functions. For monotone functions, Ilya Shmulevich \cite{Shm2} derived asymptotic formulas for typical monotone Boolean functions. Recently, Shengyu Zhang \cite{Zha} found a formula for the average sensitivity of any monotone Boolean function, from which a tight bound is derived.
In \cite{Shm}, Ilya Shmulevich and Stuart A. Kauffman considered the activities of the variables of Boolean functions with only one
canalyzing variable. They obtained the average sensitivity of this kind of Boolean function.
In this paper, we revisit NCFs, obtaining a more explicit characterization
of Boolean NCFs than that in \cite{Abd2}. We introduce a new concept, the
$LAYER$ $NUMBER$, in order to classify all the variables. Hence, the
dominance of a variable can be quantified. As a consequence, we
obtain an explicit formula for the number of NCFs. Thus, a nonlinear recursive
relation (the original formula) is solved, which may be of independent
mathematical interest.
Using our unique algebraic normal form of an NCF, we obtain a formula for the activity of each of its variables.
We show that variables in a more dominant layer have a greater activity number, and variables in the same layer have the same activity
number.
Consequently, we obtain a formula for the average sensitivity of any NCF; its lower bound is $\frac{n}{2^{n-1}}$ and its upper bound is $2$ (no matter what $n$ is), which is much less
than $\frac{n}{2}$, the average sensitivity of a random Boolean function. So, theoretically, we prove why NCFs are ``stable''. We also find a formula for the Hamming weight of each NCF. Finally, we conjecture that an NCF attains the maximal average sensitivity if it has the maximal LAYER NUMBER $n-1$. Hence, we conjecture that the tight upper bound is $\frac{4}{3}$.
In the next section, we introduce some definitions and notations.
\section{Preliminaries}
\label{2} In this section we introduce the definitions and notations. Let
$\mathbb{F}=\mathbb{F}_{2}$ be the Galois field with $2$ elements. If $f$ is an
$n$-variable function from $\mathbb{F}^{n}$ to $\mathbb{F}$, it is well known
\cite{Lid} that $f$ can be expressed as a polynomial, called the algebraic
normal form (ANF):
\[
f(x_{1},x_{2},\ldots,x_{n})=\bigoplus_{0\leq k_i\leq 1,i=1,\ldots,n}a_{k_{1}k_{2}\ldots k_{n}}{x_{1}}^{k_{1}}{x_{2}}^{k_{2}
}\cdots{x_{n}}^{k_{n}}
\]
where each coefficient $a_{k_{1}k_{2}\ldots k_{n}}\in\mathbb{F}$ is a
constant. The number $k_{1}+k_{2}+\cdots+k_{n}$ is the multivariate degree of
the term $a_{k_{1}k_{2}\ldots k_{n}}{x_{1}}^{k_{1}}{x_{2}}^{k_{2}}\cdots
{x_{n}}^{k_{n}}$ with nonzero coefficient $a_{k_{1}k_{2}\ldots k_{n}}$. The
greatest degree of all the terms of $f$ is called the algebraic degree,
denoted by $deg(f)$.
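The ANF coefficients can be computed from a truth table with the binary M\"{o}bius transform. The following short sketch (our own illustration, not from the paper) indexes both arrays so that bit $i$ of the index is the value of $x_{i+1}$:

```python
def anf_coeffs(truth):
    """Binary Moebius transform: truth table of length 2^n
    (bit i of the index = value of x_{i+1}) -> ANF coefficients
    a_{k_1...k_n}, indexed the same way."""
    a = list(truth)
    n = len(a).bit_length() - 1
    for i in range(n):                  # fold in one variable at a time
        for x in range(len(a)):
            if x & (1 << i):
                a[x] ^= a[x ^ (1 << i)]
    return a
```

For example, the truth table of $x_{1}x_{2}$ is $[0,0,0,1]$ and the transform returns $[0,0,0,1]$: the only nonzero coefficient belongs to the monomial $x_{1}x_{2}$, so $deg(f)=2$.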
\begin{defn}
\label{def2.1} $f(x_{1},x_{2},\ldots, x_{n})$ is essential in variable $x_{i}$
if there exist $r, s\in\mathbb{F}$ and $x_{1}^{*},\ldots,x_{i-1}^{*}$
$,x_{i+1}^{*},\ldots, x_{n}^{*}$ such that $f(x_{1}^{*},\ldots,x_{i-1}
^{*},r,x_{i+1}^{*},\ldots,x_{n}^{*})\neq f(x_{1}^{*},\ldots,x_{i-1}
^{*},s,x_{i+1}^{*},\ldots,x_{n}^{*})$.
\end{defn}
\begin{defn}
\label{def2.2} A function $f(x_{1},x_{2},\ldots,x_{n})$ is $<i:a:b>$
canalyzing if
$f(x_{1},\ldots,x_{i-1},a,x_{i+1},\ldots,x_{n})=b$, for all $x_{j}$, $j\neq
i$, where $i\in\{1,\dots,n\}$, $a$,$b\in\mathbb{F}$.
\end{defn}
The definition is reminiscent of the concept of ``canalisation'' introduced by
the geneticist C. H. Waddington \cite{Wad} to represent the ability of a
genotype to produce the same phenotype regardless of environmental variability.
\begin{defn}
\label{def2.3} Let $f$ be a Boolean function in $n$ variables. Let $\sigma$ be
a permutation on $\{1,2,\ldots,n\}$. The function $f$ is nested canalyzing
function (NCF) in the variable order
$x_{\sigma(1)},\ldots,x_{\sigma(n)}$ with canalyzing input values
$a_{1},\ldots,a_{n}$ and canalyzed values $b_{1},\ldots,b_{n}$, if it can be
represented in the form
$f(x_{1},\ldots,x_{n})=\left\{
\begin{array}
[c]{ll}
b_{1} & x_{\sigma(1)}=a_{1},\\
b_{2} & x_{\sigma(1)}= \overline{ a_{1}}, x_{\sigma(2)}=a_{2},\\
b_{3} & x_{\sigma(1)}= \overline{ a_{1}}, x_{\sigma(2)}= \overline{ a_{2}},
x_{\sigma(3)}=a_{3},\\
\vdots & \\
b_{n} & x_{\sigma(1)}= \overline{ a_{1}}, x_{\sigma(2)}= \overline{ a_{2}
},\ldots,x_{\sigma(n-1)}= \overline{ a_{n-1}}, x_{\sigma(n)}=a_{n},\\
\overline{b_{n}} & x_{\sigma(1)}= \overline{ a_{1}}, x_{\sigma(2)}= \overline{
a_{2}},\ldots,x_{\sigma(n-1)}= \overline{ a_{n-1}}, x_{\sigma(n)}=\overline{
a_{n}}.
\end{array}
\right. $
where $\overline{a}=a\oplus 1$. The function $f$ is nested canalyzing if $f$ is nested
canalyzing in the variable order $x_{\sigma(1)},\ldots,x_{\sigma(n)}$ for some
permutation $\sigma$.
\end{defn}
Let $\alpha=(a_{1},a_{2},\ldots,a_{n})$ and $\beta=(b_{1},b_{2},\ldots,b_{n}
)$, we say $f$ is $\{\sigma:\alpha:\beta\}$ NCF if it is NCF in the variable
order $x_{\sigma(1)},\ldots,x_{\sigma(n)}$ with canalyzing input values
$\alpha=(a_{1},\ldots,a_{n})$ and canalyzed values $\beta=(b_{1},\ldots
,b_{n})$.
Given a vector $\alpha=(a_{1},a_{2},\ldots,a_{n})$, we define $\alpha
^{i_{1},\ldots,i_{k}}=(a_{1},\ldots,\overline{a_{i_{1}}},\ldots,\overline
{a_{i_{k}}},\ldots,a_{n})$.
From the above definition, we immediately have the following
\begin{prop}
$f$ is $\{\sigma:\alpha:\beta\}$ NCF $\Longleftrightarrow$ $f$ is
$\{\sigma:\alpha^{n}:\beta^{n}\}$ NCF.
\end{prop}
\begin{example}
\label{exa2.1} $f(x_{1},x_{2},x_{3})=x_{1}(x_{2}\oplus 1)x_{3}\oplus 1$ is
$\{(1,2,3):(0,1,0):(1,1,1)\}$ NCF.
Actually, one can check this function is nested canalyzing in any variable order.
\end{example}
\begin{example}
\label{exa2.2} $f(x_{1},x_{2},x_{3})=(x_{1}\oplus 1)(x_{2}(x_{3}\oplus 1)\oplus 1)\oplus 1$. This
function is
$\{(1,2,3):(1,0,1):(1,0,0)\}$ NCF. It is also $\{(1,3,2):(1,1,1):(1,0,1)\}$ NCF.
One can check that this function is nested canalyzing in only the two variable
orders $(x_{1},x_{2},x_{3})$ and $(x_{1},x_{3},x_{2})$.
\end{example}
From the above definitions, we know that if a function is an NCF, then all $n$
variables must be essential. However, a constant function $b$ can be $<i:a:b>$
canalyzing for any $i$ and $a$.
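Definitions \ref{def2.2} and \ref{def2.3} can be verified by exhaustive search for small $n$. The following sketch (our own illustrative code, with 0-based variable indices) checks Example \ref{exa2.1}:

```python
from itertools import product

def is_canalyzing(f, n, i, a, b):
    """Check whether Boolean f: {0,1}^n -> {0,1} is <i:a:b> canalyzing,
    i.e. fixing x_i = a forces the output b (i is 0-based here)."""
    for x in product((0, 1), repeat=n):
        if x[i] == a and f(*x) != b:
            return False
    return True

def is_nested_canalyzing(f, n, order, alpha, beta):
    """Brute-force check of Definition 2.3 for a given variable order
    (0-based), canalyzing inputs alpha and canalyzed values beta."""
    for x in product((0, 1), repeat=n):
        out = 1 ^ beta[-1]          # last case: complement of b_n
        for k, i in enumerate(order):
            if x[i] == alpha[k]:    # first canalyzing input hit wins
                out = beta[k]
                break
        if f(*x) != out:
            return False
    return True
```

For $f(x_{1},x_{2},x_{3})=x_{1}(x_{2}\oplus 1)x_{3}\oplus 1$, the checker confirms the $\{(1,2,3):(0,1,0):(1,1,1)\}$ nested canalyzing structure claimed in Example \ref{exa2.1}.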
\section{A Complete Characterization for NCF}
\label{3}
In \cite{Lor}, the author introduced Partially Nested Canalyzing Functions
(PNCFs), a generalization of the NCFs, and the nested canalyzing depth, which
measures the extent to which it retains a nested canalyzing structure. In
\cite{Win}, the author introduced the extended monomial system.
As we will see, in a nested canalyzing function, some variables are more
dominant than others. We will classify all the variables of an NCF into
different levels according to the extent of their dominance, and hence
describe NCFs in more detail. Indeed, we will obtain a clearer
description of NCFs by introducing a new concept: the LAYER NUMBER. As a by-product, we also obtain some enumeration
results. Eventually, we will find an explicit formula for the number of all NCFs.
First, we have
\begin{defn}
\label{def3.1} \cite{Win} $M(x_{1},\ldots,x_{n})$ is an extended monomial of
essential variables $x_{1},\ldots,x_{n}$ if $M(x_{1},\ldots,x_{n}
)=(x_{1}\oplus a_{1})(x_{2}\oplus a_{2})\cdots(x_{n}\oplus a_{n})$, where $a_{i}\in\mathbb{F}_{2}$.
\end{defn}
Basically, we will rewrite Theorem 3.1 in \cite{Abd2} with more information.
\begin{lemma}
\label{lm3.1} $f(x_{1},x_{2},\ldots,x_{n})$ is $<i:a:b>$ canalyzing iff
$f(X)=f(x_{1},x_{2},\ldots,x_{n})=(x_{i}\oplus a)Q(x_{1},\ldots, x_{i-1},x_{i+1},\ldots,
x_{n})\oplus b$.
\end{lemma}
\begin{proof}
From the algebraic normal form of $f$, we rewrite it as $f=x_{i}g_{1}
(X_{i})\oplus g_{0}(X_{i})$, where $X_{i}=(x_{1},\ldots,x_{i-1},x_{i+1}
,\ldots,x_{n})$. Hence, $f(X)=f(x_{1},x_{2},\ldots,x_{n})$ $=(x_{i}
\oplus a)g_{1}(X_{i})\oplus ag_{1}(X_{i})\oplus g_{0}(X_{i})$. Let $g_{1}(X_{i})=Q(x_{1},\ldots,
x_{i-1},x_{i+1}\ldots x_{n})$ and $r(X_{i})=ag_{1}(X_{i})\oplus g_{0}(X_{i})$. Then
$f(X)=f(x_{1},\ldots,x_{n})=(x_{i}\oplus a)Q(x_{1},\ldots, x_{i-1},x_{i+1},\ldots,
x_{n})\oplus r(X_{i})$.
Since $f(X)$ is $<i:a:b>$ canalyzing, we get $f(x_{1},\ldots,x_{i-1}
,a,x_{i+1},\ldots,x_{n})=b$ for any $x_{1},\ldots, x_{i-1},x_{i+1},\ldots,
x_{n}$, i.e., $r(X_{i})=b$ for any $X_{i}$. So $r(X_{i})$ must be the constant
$b$. This proves necessity; sufficiency is obvious.
\end{proof}
\begin{remark}\label{remark1}
\label{re1} 1) When we contrast this lemma with the first part of Theorem 3.1 in
\cite{Abd2}, we make clear that here $x_{i}$ is not essential in $Q$. 2)
In \cite{Yua2}, there is a general version of this lemma over any finite field.
3) In the above lemma, if $f$ is constant, then
$Q=0$.
\end{remark}
From Definition \ref{def2.3}, we have the following
\begin{prop}
\label{prop3.1} Suppose $f(x_{1},\ldots,x_{n})$ is a $\{\sigma:\alpha:\beta\}$ NCF,
i.e., it is an NCF in the variable order
$x_{\sigma(1)},\ldots,x_{\sigma(n)}$ with canalyzing input values
$\alpha=(a_{1},\ldots,a_{n})$ and canalyzed values $\beta=(b_{1},\ldots
,b_{n})$.
Then, for $1\leq k\leq n-1$, setting $x_{\sigma(1)}=\overline{a_{1}}
,\ldots,x_{\sigma(k)}=\overline{a_{k}}$, the function
$f(x_{1},\ldots,\overset{\sigma(1)}{\overline{a_{1}}},\ldots,\overset
{\sigma(k)}{\overline{a_{k}}},\ldots, x_{n})$ is a $\{\sigma^{*}:\alpha
^{*}:\beta^{*}\}$ NCF on the remaining variables, where $\sigma^{*}
=x_{\sigma(k+1)},\ldots,x_{\sigma(n)}$, $\alpha^{*}=(a_{k+1},\ldots,a_{n})$
and $\beta^{*}=(b_{k+1},\ldots,b_{n})$.
\end{prop}
\begin{defn}
\label{def3.2} Let $f(x_{1},\ldots,x_{n})$ be an NCF. We call the variable $x_{i}$
a most dominant variable of $f$ if there is an order
$\alpha=(x_{i},\ldots)$ such that $f$ is an NCF with this variable order (in other
words, if $f$ is also $<i:a:b>$ canalyzing for some $a$ and $b$).
\end{defn}
In Example \ref{exa2.1}, all three variables are most dominant; in Example
\ref{exa2.2}, only $x_{1}$ is a most dominant variable. We have
\begin{theorem}\label{th3.1}
Given an NCF $f(x_{1},\ldots,x_{n})$, all the variables are most
dominant iff
$f=M(x_{1},\ldots,x_{n})\oplus b$, where $M$ is an extended monomial, i.e.,
$M=(x_{1}\oplus a_{1})(x_{2}\oplus a_{2})\cdots(x_{n}\oplus a_{n})$.
\end{theorem}
\begin{proof}
Since $x_{1}$ is most dominant, from Lemma \ref{lm3.1} we know there exist
$a_{1}$ and $b$ such that
$f(x_{1},x_{2},\ldots,x_{n})=(x_{1}\oplus a_{1})Q(x_{2},\ldots, x_{n})\oplus b$, i.e.,
$(x_{1}\oplus a_{1})|(f\oplus b)$. Now, since $x_{2}$ is also most dominant, there exist $a_{2}$
and $b^{\prime}$ such that
$f(x_{1},a_{2},x_{3},\ldots,x_{n})=b^{\prime}$ for any $x_{1},x_{3}
,\ldots,x_{n}$. In particular, letting $x_{1}=a_{1}$, we get
$f(a_{1},a_{2},x_{3},\ldots,x_{n})=b=b^{\prime}$. Hence, we also get
$(x_{2}\oplus a_{2})|(f\oplus b)=(x_{1}\oplus a_{1})Q(x_{2},\ldots, x_{n})$; since $x_{1}\oplus a_{1}$
and $x_{2}\oplus a_{2}$ are coprime, we get $(x_{2}\oplus a_{2})|Q(x_{2},\ldots, x_{n})$,
hence $f(x_{1},x_{2},\ldots,x_{n})=(x_{1}\oplus a_{1})(x_{2}\oplus a_{2})Q^{\prime}
(x_{3},\ldots, x_{n})\oplus b$. By induction, necessity is proved.
Sufficiency is evident.
\end{proof}
We are ready to prove the following main result of this section.
\begin{theorem}\label{th2}
\label{th3.2} Given $n\geq2$, $f(x_{1},x_{2},\ldots,x_{n})$ is nested
canalyzing iff it can be uniquely written as
\begin{equation}\label{eq3.1}
f(x_{1},x_{2},\ldots,x_{n})=M_{1}(M_{2}(\ldots(M_{r-1}
(M_{r}\oplus 1)\oplus 1)\ldots)\oplus 1)\oplus b.
\end{equation}
where each $M_{i}$ is an extended monomial of a set of disjoint variables.
More precisely, $M_{i}=\prod_{j=1}^{k_{i}}(x_{i_{j}}\oplus a_{i_{j}})$,
$i=1,\ldots,r$, $k_{i}\geq1$ for $i=1,\ldots,r-1$, $k_{r}\geq2$, $k_{1}
+\ldots+k_{r}=n$, $a_{i_{j}}\in\mathbb{F}_{2}$, $\{i_{j}|j=1,\ldots,k_{i},
i=1,\ldots,r\}=\{1,\ldots,n\}$.
\end{theorem}
\begin{proof}
We use induction on $n$.
When $n=2$, there are 16 Boolean functions, 8 of which are NCFs, namely
$(x_{1}\oplus a_{1})(x_{2}\oplus a_{2})\oplus c=M_{1}\oplus 1\oplus b$, where $b=1\oplus c$ and $M_{1}
=(x_{1}\oplus a_{1})(x_{2}\oplus a_{2})$.
If $(x_{1}\oplus a_{1})(x_{2}\oplus a_{2})\oplus c=(x_{1}\oplus a_{1}^{\prime})(x_{2}\oplus a_{2}^{\prime})\oplus c^{\prime}$, by equating the coefficients we immediately obtain
$a_{1}={a_{1}}^{\prime}$, $a_{2}={a_{2}}^{\prime}$ and $c=c^{\prime}$, so
uniqueness holds.
We have proved that Equation \ref{eq3.1} holds for $n=2$, where $r=1$.
Assume that Equation \ref{eq3.1} holds for any nested canalyzing
function with at most $n-1$ essential variables.
Now consider a NCF $f(x_{1},\ldots,x_{n})$.
Suppose $x_{\sigma(1)},\ldots,x_{\sigma(k_{1})}$ are all the most dominant
canalyzing variables of $f$, $1\leq k_{1}\leq n$.
Case 1: $k_{1}=n$. By Theorem \ref{th3.1}, the conclusion holds with $r=1$.
Case 2: $k_{1}<n$. By the same arguments as in Theorem \ref{th3.1}, we get
$f=M_{1}g\oplus b$, where
$M_{1}=(x_{\sigma(1)}\oplus a_{\sigma(1)})\ldots(x_{\sigma(k_{1})}\oplus a_{\sigma(k_{1})})$. Setting
$x_{\sigma(1)}=\overline{a_{\sigma(1)}},\ldots, x_{\sigma(k_{1})}=\overline
{a_{\sigma(k_{1})}}$ in $f$, the function $g\oplus b$, and hence $g$, of the remaining
variables is also nested canalyzing by Proposition \ref{prop3.1}. Since
$g$ has $n-k_{1}\leq n-1$ variables, by the induction hypothesis we get
$g=M_{2}(M_{3}(\ldots(M_{r-1}(M_{r}\oplus 1)\oplus 1)\ldots)\oplus 1)\oplus b_{1}$, where
$b_{1}$ must be $1$; otherwise, all the variables in $M_{2}$ would also be
most dominant variables of $f$. Hence, we are done.
\end{proof}
Because each NCF can be uniquely written as Equation \ref{eq3.1} and the number $r$ is
uniquely determined by $f$, we have
\begin{defn}
\label{def3.3} For a NCF written as Equation \ref{eq3.1}, the number $r$ will
be called its LAYER NUMBER. The essential variables of $M_{1}$ will be called the
most dominant variables (canalyzing variables); they belong to the first layer of this NCF.
The essential variables of $M_{2}$ will be
called the second most dominant variables and belong to the second layer
of this NCF, and so on.
\end{defn}
The function in Example \ref{exa2.1} has LAYER NUMBER 1 and the function in
Example \ref{exa2.2} has LAYER NUMBER 2.
\begin{remark}\label{remark2}
In Theorem \ref{th2}: 1) $k_r\geq 2$; it is impossible that $k_r=1$, since otherwise $M_r\oplus 1$ would be a factor of $M_{r-1}$, which would mean the LAYER NUMBER is $r-1$.
2) If the variable $x_i$ is in the first layer and $x_i\oplus a_i$ is a factor of $M_1$, then this NCF is $<i:a_i:b>$ canalyzing; we simply say $x_i$ is a canalyzing variable of this NCF.
\end{remark}
Let $\mathbb{NCF}(n,r)$ stand for the set of all $n$-variable nested
canalyzing functions with LAYER NUMBER $r$ and $\mathbb{NCF}(n)$ stand for
the set of all $n$-variable nested canalyzing functions. We have
\begin{cor}
\label{co3.1} Given $n\geq2$,
\[
|\mathbb{NCF}(n,r)|=2^{n+1}\sum_{\substack{k_{1}+\ldots+k_{r}=n\\k_{i}
\geq1,i=1,\ldots,r-1, k_{r}\geq2}}\binom{n}{k_{1},\ldots,k_{r-1}}
\]
and
\[
|\mathbb{NCF}(n)|=2^{n+1}\sum_{\substack{r=1}}^{n-1}\sum_{\substack{k_{1}
+\ldots+k_{r}=n\\k_{i}\geq1,i=1,\ldots,r-1, k_{r}\geq2}}\binom{n}{k_{1}
,\ldots,k_{r-1}}
\]
where the multinomial coefficient $\binom{n}{k_{1},\ldots,k_{r-1}}=\frac
{n!}{k_{1}!\ldots k_{r}!}$.
\end{cor}
\begin{proof}
From Equation \ref{eq3.1}, for each choice of $k_{1},\ldots,k_{r}$ satisfying
$k_{1}+\ldots+k_{r}=n$, $k_{i}\geq1$ for $i=1,\ldots,r-1$ and
$k_{r}\geq2$,
there are $2^{k_{1}}\binom{n}{k_{1}}$ ways to form $M_{1}$,
$2^{k_{2}}\binom{n-k_{1}}{k_{2}}$ ways to form $M_{2}$,
$\ldots$,
$2^{k_{r}}\binom{n-k_{1}-\ldots-k_{r-1}}{k_{r}}$ ways to form
$M_{r}$,
and $b$ has two choices.
Hence,
\[
|\mathbb{NCF}(n,r)|=2\sum_{\substack{k_{1}+\ldots+k_{r}=n\\k_{i}
\geq1,i=1,\ldots,r-1, k_{r}\geq2}}2^{k_{1}+\ldots+k_{r}}\binom{n}{k_{1}}
\binom{n-k_{1}}{k_{2}}\ldots\binom{n-k_{1}-\ldots-k_{r-1}}{k_{r}}
\]
\[
=2^{n+1}\sum_{\substack{k_{1}+\ldots+k_{r}=n\\k_{i}\geq1,i=1,\ldots,r-1,
k_{r}\geq2}}\frac{n!}{(k_{1})!(n-k_{1})!}\frac{(n-k_{1})!}{(k_{2}
)!(n-k_{1}-k_{2})!}\ldots\frac{(n-k_{1}-\ldots-k_{r-1})!}{k_{r}!(n-k_{1}
-\ldots-k_{r})!}
\]
\[
=2^{n+1}\sum_{\substack{k_{1}+\ldots+k_{r}=n\\k_{i}\geq1,i=1,\ldots,r-1,
k_{r}\geq2}}\frac{n!}{k_{1}!k_{2}!\ldots k_{r}!}=2^{n+1}\sum_{\substack{k_{1}
+\ldots+k_{r}=n\\k_{i}\geq1,i=1,\ldots,r-1, k_{r}\geq2}}\binom{n}{k_{1}
,\ldots,k_{r-1}}.
\]
Since $\mathbb{NCF}(n)=\bigcup_{r=1}^{n-1}\mathbb{NCF}(n,r)$ and
$\mathbb{NCF}(n,i)\bigcap\mathbb{NCF}(n,j)=\emptyset$ when $i\neq j$, we get the
formula for $|\mathbb{NCF}(n)|$.
\end{proof}
One can check that $|\mathbb{NCF}(2)|=8$, $|\mathbb{NCF}(3)|=64$,
$|\mathbb{NCF}(4)|=736$, $|\mathbb{NCF}(5)|=10624$,...
These results are consistent with those in \cite{Ben, Sas}.
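As a sanity check on Corollary \ref{co3.1}, the closed-form count can be evaluated by enumerating the admissible compositions $(k_1,\ldots,k_r)$ of $n$. The following Python sketch (our own illustration, not part of the original text) reproduces the values quoted above.

```python
from math import factorial

def ncf_count(n):
    """|NCF(n)|: 2^(n+1) times the sum of multinomials n!/(k_1!...k_r!)
    over compositions k_1+...+k_r = n with k_i >= 1 for i < r, k_r >= 2."""
    total = 0

    def walk(remaining, parts):
        nonlocal total
        if remaining == 0 and parts[-1] >= 2:
            m = factorial(n)
            for k in parts:
                m //= factorial(k)
            total += m
            return
        for k in range(1, remaining + 1):
            walk(remaining - k, parts + [k])

    walk(n, [])
    return 2 ** (n + 1) * total

print([ncf_count(n) for n in range(2, 6)])  # [8, 64, 736, 10624]
```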
By equating our formula to the recursive relation in \cite{Ben, Sas}, we have
the following
\begin{cor}
\label{co3.2} The solution of the nonlinear recursive sequence
\[
a_{2}=8, a_{n}=\sum_{r=2}^{n-1}\binom{n}{r-1}2^{r-1}a_{n-r+1}+2^{n+1} , n\geq3
\]
is
\[
a_{n}=2^{n+1}\sum_{\substack{r=1}}^{n-1}\sum_{\substack{k_{1}+\ldots
+k_{r}=n\\k_{i}\geq1,i=1,\ldots,r-1, k_{r}\geq2}}\binom{n}{k_{1}
,\ldots,k_{r-1}}.
\]
\end{cor}
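The recursion above is easy to check against the known small values. The sketch below (our own verification code) evaluates the recursive sequence directly.

```python
from math import comb

def a_seq(m):
    """Evaluate the recursion of Corollary (co3.2): a_2 = 8 and
    a_n = sum_{r=2}^{n-1} C(n, r-1) 2^(r-1) a_{n-r+1} + 2^(n+1) for n >= 3."""
    a = {2: 8}
    for n in range(3, m + 1):
        a[n] = sum(comb(n, r - 1) * 2 ** (r - 1) * a[n - r + 1]
                   for r in range(2, n)) + 2 ** (n + 1)
    return a

print(a_seq(5))  # {2: 8, 3: 64, 4: 736, 5: 10624}
```

These values agree with $|\mathbb{NCF}(n)|$ for $n=2,\ldots,5$ quoted above.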
\section{ Activity, Sensitivity and Hamming Weight}
A Boolean function is balanced if exactly half of its values are zero; equivalently, the Hamming weight of this $n$-variable Boolean function is $2^{n-1}$. There are $\binom{2^n}{2^{n-1}}$ balanced functions. It is easy to show that a Boolean function with a canalyzing variable is not balanced, i.e., it is biased; in fact, very biased. For example, the two constant functions are trivially canalyzing; they are the most biased. Extended monomial functions are the second most biased, since each of them takes the value $1$ exactly once. But a biased function may have no canalyzing variables. For example, $f(x_1,x_2,x_3)=x_1x_2x_3\oplus x_1x_2\oplus x_1x_3\oplus x_2x_3$ is biased but has no canalyzing variables.
In Boolean functions, some variables have greater influence over the output of the function than others. To formalize this, a concept called \emph{activity} was introduced. Let $\frac{\partial f(x_1,\ldots,x_n)}{\partial x_i}=f(x_1,\ldots,x_i\oplus 1,\ldots,x_n)\oplus f(x_1,\ldots,x_i,\ldots,x_n)$. The \emph{activity} of the variable $x_i$ of $f$ is defined as
\begin{equation}\label{act1}
\alpha_i^f=\frac{1}{2^n}\sum_{(x_1,\ldots,x_n)\in \mathbb{F}_2^n}\frac{\partial f(x_1,\ldots,x_n)}{\partial x_i}
\end{equation}
Note that the above definition can also be written as follows:
\begin{equation}\label{act2}
\alpha_i^f=\frac{1}{2^{n-1}}\sum_{(x_1,\ldots,x_{i-1},x_{i+1},\ldots,x_n)\in \mathbb{F}_2^{n-1}}(f(x_1,\ldots,\overset{i}{0},\ldots,x_n)\oplus f(x_1,\ldots,\overset{i}{1},\ldots,x_n))
\end{equation}
The activity of any variable of a constant function is $0$. For the affine function $f(x_1,\ldots,x_n)=x_1\oplus \ldots\oplus x_n\oplus b$, $\alpha_i^f=1$ for any $i$. Clearly, for any $f$ and $i$, we have $0\leq \alpha_i^f\leq 1$.
Another important quantity is the sensitivity of a Boolean function, which measures how sensitive the output of the function is to changes in the input (this concept was introduced in \cite{Coo}). The sensitivity $s^f(x_1,\ldots,x_n)$ of $f$ on the vector $(x_1,\ldots,x_n)$ is defined as
the number of Hamming neighbors of $(x_1,\ldots,x_n)$ on which the function value differs from $f(x_1,\ldots,x_n)$. That is,
\begin{equation*}
s^f(x_1,\ldots,x_n)=|\{i|f(x_1,\ldots,\overset{i}{0},\ldots,x_n)\neq f(x_1,\ldots,\overset{i}{1},\ldots,x_n), i=1,\ldots,n \}|.
\end{equation*}
Obviously, $s^f(x_1,\ldots,x_n)=\sum_{i=1}^n\frac{\partial f(x_1,\ldots,x_n)}{\partial x_i}$.
The average sensitivity of the function $f$ is defined as
\begin{equation*}
s^f=E[s^f(x_1,\ldots,x_n)]=\frac{1}{2^n}\sum_{(x_1,\ldots,x_n)\in \mathbb{F}_2^n}s^f(x_1,\ldots,x_n)=\sum_{i=1}^n\alpha_i^f.
\end{equation*}
It is clear that $0\leq s^f\leq n$.
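The definitions above are straightforward to check numerically. The following Python sketch (our own illustration) computes $\alpha_i^f$ and $s^f$ by direct enumeration over $\mathbb{F}_2^n$, and confirms the facts just stated for affine and constant functions.

```python
from itertools import product

def activity(f, n, i):
    """alpha_i^f: fraction of inputs on which flipping x_i flips f
    (the partial-derivative definition above); i is 1-indexed."""
    flips = sum(f(*x) ^ f(*(x[:i - 1] + (x[i - 1] ^ 1,) + x[i:]))
                for x in product((0, 1), repeat=n))
    return flips / 2 ** n

def avg_sensitivity(f, n):
    """s^f = sum of the activities of all n variables."""
    return sum(activity(f, n, i) for i in range(1, n + 1))

# Checks from the text: an affine function has activity 1 in every
# variable (so s^f = n), and a constant function has s^f = 0.
print(avg_sensitivity(lambda x1, x2, x3: x1 ^ x2 ^ x3, 3))  # 3.0
print(avg_sensitivity(lambda x1, x2, x3: 0, 3))             # 0.0
```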
The average sensitivity is one of the most studied concepts in the analysis of Boolean functions and has recently received a lot of attention;
see \cite{Ama, Ber, Ber2, Bop, Che, Chr, Kel, Liu, Li, Qia, Shm2, Shm, Shp, Sch, Vir}. Bernasconi \cite{Ber} showed that a random Boolean function has average sensitivity $\frac{n}{2}$; that is, the average of the average sensitivities over all $n$-variable Boolean functions is $\frac{n}{2}$. In \cite{Shm}, Shmulevich and Kauffman calculated the activities of all the variables of a Boolean function with exactly one canalyzing variable and unbiased input for the other variables. Adding up the activities, they also obtained the average sensitivity of such a function.
In the following, using Equation \ref{eq3.1}, we obtain formulas for the Hamming weight of any NCF, for the activities of all its variables, and for its average sensitivity (which is bounded by a constant).
First, we have
\begin{lemma}\label{lm4.1}
$(x_1\oplus a_1)\ldots (x_k\oplus a_k)=$
$\left\{
\begin{array}{ll}
1, & (x_1,\ldots,x_k)=(\overline{a_1},\ldots,\overline{a_k})\\
0, & otherwise.
\end{array}\right.$
i.e., only one value is $1$ and all the other $2^k-1$ values are $0$.
\end{lemma}
\begin{theorem}\label{th4.1}
Given $n\geq 2$, let $f_1=M_1$ and $f_r=M_{1}(M_{2}(\ldots(M_{r-1}
(M_{r}\oplus 1)\oplus 1)\ldots)\oplus 1)$ for $r\geq 2$, where each $M_i$ is as in Theorem \ref{th3.2}. Then the Hamming weight of $f_r$ is
\begin{equation}\label{eq4.3}
W(f_r)=\sum_{j=1}^r(-1)^{j-1}2^{n-\sum_{i=1}^jk_i}
\end{equation}
The Hamming weight of $f_r\oplus 1$ is
\begin{equation}\label{eq4.4}
W(f_r\oplus 1)=\sum_{j=0}^r(-1)^{j}2^{n-\sum_{i=1}^jk_i}
\end{equation}
where $\sum_{i=1}^{0}k_i$ is interpreted as $0$.
\end{theorem}
\begin{proof}
First, let us consider the Hamming weight of $f_r$.
When $r=1$, the result holds by Lemma \ref{lm4.1}.
When $r>1$, we consider two cases.
Case A: $r$ is odd, $r=2t+1$.
All the vectors that make $f_r=1$ can be divided into the following disjoint groups:
Group $1$: $M_1=1$, $M_2=0$;
Group $2$: $M_1=1$, $M_2=1$, $M_3=1$, $M_4=0$;
$\ldots$
Group $j$: $M_1=1$, $M_2=1$, $\ldots$, $ M_{2j-1}=1$, $M_{2j}=0$;
$\ldots$
Group $t$ : $M_1=1$, $M_2=1$, $\ldots$, $M_{2t-1}=1$, $M_{2t}=0$;
Group $t+1$ : $M_1=1$, $M_2=1$, $\ldots$, $M_{2t}=1$, $M_{2t+1}=M_r=1$.
In Group $1$, the number of vectors is $(2^{k_2}-1)2^{n-k_1-k_2}=2^{n-k_1}-2^{n-k_1-k_2}$.
In Group $2$, the number of vectors is $(2^{k_4}-1)2^{n-k_1-k_2-k_3-k_4}=2^{n-k_1-k_2-k_3}-2^{n-k_1-k_2-k_3-k_4}$.
\ldots
In Group $t$, the number of vectors is $(2^{k_{2t}}-1)2^{n-k_1-\ldots-k_{2t}}=2^{n-k_1-\ldots-k_{2t-1}}-2^{n-k_1-\ldots -k_{2t}}$.
In Group $t+1$, the number of vectors is $2^{n-k_1-\ldots -k_r}=1$.
Adding all of them, we get Equation \ref{eq4.3}.
Case B: $r$ is even, $r=2t$.
All the vectors that make $f_r=1$ can be divided into the following disjoint groups:
Group $1$: $M_1=1$, $M_2=0$;
Group $2$: $M_1=1$, $M_2=1$, $M_3=1$, $M_4=0$;
$\ldots$
Group $j$: $M_1=1$, $M_2=1$, $\ldots$, $ M_{2j-1}=1$, $M_{2j}=0$;
$\ldots$
Group $t-1$ : $M_1=1$, $M_2=1$, $\ldots$, $M_{2t-3}=1$, $M_{2t-2}=0$;
Group $t$ : $M_1=1$, $M_2=1$, $\ldots$, $M_{2t-1}=1$, $M_{2t}=M_r=0$.
In Group $1$, the number of vectors is $(2^{k_2}-1)2^{n-k_1-k_2}=2^{n-k_1}-2^{n-k_1-k_2}$.
In Group $2$, the number of vectors is $(2^{k_4}-1)2^{n-k_1-k_2-k_3-k_4}=2^{n-k_1-k_2-k_3}-2^{n-k_1-k_2-k_3-k_4}$.
\ldots
In Group $t-1$, the number of vectors is $(2^{k_{2t-2}}-1)2^{n-k_1-\ldots-k_{2t-2}}=2^{n-k_1-\ldots-k_{2t-3}}-2^{n-k_1-\ldots -k_{2t-2}}$.
In Group $t$, the number of vectors is $2^{n-k_1-\ldots -k_{2t-1}}-2^{n-k_1-\ldots -k_{2t}}=2^{k_{2t}}-1$.
Adding all of them, we get Equation \ref{eq4.3} again.
Because $|\{(x_1,\ldots,x_n)|f(x_1,\ldots,x_n)=0\}|+|\{(x_1,\ldots,x_n)|f(x_1,\ldots,x_n)=1\}|=2^n$,
the Hamming weight of $f_r\oplus 1$ is
\begin{equation*}
W(f_r\oplus 1)=2^n-W(f_r)=2^n-\sum_{j=1}^r(-1)^{j-1}2^{n-\sum_{i=1}^jk_i}=\sum_{j=0}^r(-1)^{j}2^{n-\sum_{i=1}^jk_i}.
\end{equation*}
where $\sum_{i=1}^{0}k_i$ is interpreted as $0$.
\end{proof}
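Theorem \ref{th4.1} can also be verified by brute force. The sketch below (our own illustration; the layer encoding is ours) evaluates an NCF from its layer structure and compares the enumerated Hamming weight with Equation \ref{eq4.3}, here for layer sizes $(k_1,k_2,k_3)=(2,1,2)$, $n=5$, with all $a_j=0$.

```python
from itertools import product

def eval_ncf(layers, x):
    """f_r = M_1(M_2(...(M_r xor 1)...) xor 1), M_i = prod_j (x_j xor a_j);
    layers = [(variable indices, a-vector), ...], outermost layer first."""
    t = None
    for idx, a in reversed(layers):
        m = 1
        for j, aj in zip(idx, a):
            m &= x[j] ^ aj
        t = m if t is None else m & (t ^ 1)
    return t

def weight_formula(ks, n):
    """W(f_r) = sum_{j=1}^r (-1)^(j-1) 2^(n - (k_1+...+k_j))  (Eq. 4.3)."""
    acc, w = 0, 0
    for j, k in enumerate(ks, 1):
        acc += k
        w += (-1) ** (j - 1) * 2 ** (n - acc)
    return w

layers = [((0, 1), (0, 0)), ((2,), (0,)), ((3, 4), (0, 0))]  # k = (2, 1, 2)
brute = sum(eval_ncf(layers, x) for x in product((0, 1), repeat=5))
print(brute, weight_formula((2, 1, 2), 5))  # 5 5
```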
In the following, we calculate the activities of the variables of any NCF.
Let $f$ be a NCF written in the form of Theorem \ref{th3.2}. Without loss of generality (to avoid complicated notation), we assume
$M_1=(x_1\oplus a_1)(x_2\oplus a_2)\ldots (x_{k_1}\oplus a_{k_1})$ and $m_1=(x_1\oplus a_1)\ldots(x_{i-1}\oplus a_{i-1})(x_{i+1}\oplus a_{i+1})\ldots (x_{k_1}\oplus a_{k_1})$, i.e., $M_1=(x_i\oplus a_i)m_1$.
If $r=1$, i.e., $k_1=n$, then
\begin{equation*}
\alpha_i^f=\frac{1}{2^{n-1}}\sum_{(x_1,\ldots,x_{i-1},x_{i+1},\ldots,x_n)\in \mathbb{F}_2^{n-1}}(f(x_1,\ldots,\overset{i}{0},\ldots,x_n)\oplus f(x_1,\ldots,\overset{i}{1},\ldots,x_n))
\end{equation*}
\begin{equation*}
=\frac{1}{2^{n-1}}\sum_{(x_1,\ldots,x_{i-1},x_{i+1},\ldots,x_n)\in \mathbb{F}_2^{n-1}}m_1=\frac{1}{2^{n-1}}W(m_1)=\frac{1}{2^{n-1}}.
\end{equation*}
by Lemma \ref{lm4.1}.
If $1<r\leq n-1$,
let us consider the activity of $x_i$ in the first layer, i.e., $1\leq i\leq k_1$. We have
\begin{equation*}
\alpha_i^f=\frac{1}{2^{n-1}}\sum_{(x_1,\ldots,x_{i-1},x_{i+1},\ldots,x_n)\in \mathbb{F}_2^{n-1}}(f(x_1,\ldots,\overset{i}{0},\ldots,x_n)\oplus f(x_1,\ldots,\overset{i}{1},\ldots,x_n))
\end{equation*}
\begin{equation*}
=\frac{1}{2^{n-1}}\sum_{(x_1,\ldots,x_{i-1},x_{i+1},\ldots,x_n)\in \mathbb{F}_2^{n-1}}m_1(M_{2}(\ldots(M_{r-1}
(M_{r}\oplus 1)\oplus 1)\ldots)\oplus 1).
\end{equation*}
\begin{equation*}
=\frac{1}{2^{n-1}}W(m_1(M_{2}(\ldots(M_{r-1}
(M_{r}\oplus 1)\oplus 1)\ldots)\oplus 1)).
\end{equation*}
$=\left\{
\begin{array}{ll}
\frac{1}{2^{n-1}}\sum_{j=1}^r(-1)^{j-1}2^{n-1-(\sum_{i=1}^jk_i-1)}, & k_1>1\\
\frac{1}{2^{n-1}}\sum_{j=0}^{r-1}(-1)^{j}2^{n-1-\sum_{i=1}^jk_{i+1}}, & k_1=1.
\end{array}\right.$
$=\left\{
\begin{array}{ll}
\frac{1}{2^{n-1}}\sum_{j=1}^r(-1)^{j-1}2^{n-\sum_{i=1}^jk_i}, & k_1>1\\
\frac{1}{2^{n-1}}\sum_{j=0}^{r-1}(-1)^{j}2^{n-\sum_{i=0}^jk_{i+1}}, & k_1=1.
\end{array}\right.=\left\{
\begin{array}{ll}
\frac{1}{2^{n-1}}\sum_{j=1}^r(-1)^{j-1}2^{n-\sum_{i=1}^jk_i}, & k_1>1\\
\frac{1}{2^{n-1}}\sum_{j=0}^{r-1}(-1)^{j}2^{n-\sum_{i=1}^{j+1}k_{i}}, & k_1=1.
\end{array}\right.$
$=\left\{
\begin{array}{ll}
\frac{1}{2^{n-1}}\sum_{j=1}^r(-1)^{j-1}2^{n-\sum_{i=1}^jk_i}, & k_1>1\\
\frac{1}{2^{n-1}}\sum_{j=1}^r(-1)^{j-1}2^{n-\sum_{i=1}^jk_i}, & k_1=1.
\end{array}\right.=\frac{1}{2^{n-1}}\sum_{j=1}^r(-1)^{j-1}2^{n-\sum_{i=1}^jk_i}$
by Theorem \ref{th4.1}. Note that, in the above, $k_1=1$ means $m_1=1$; in that case we used Equation \ref{eq4.4} for functions of $n-1$ variables, with layer number $r-1$ and first layer $M_2$.
Now let us consider the variables in the second layer, i.e., $x_i$ is an essential variable of $M_2$. We have
$M_2=(x_i\oplus a_i)m_2$ and
\begin{equation*}
\alpha_i^f=\frac{1}{2^{n-1}}\sum_{(x_1,\ldots,x_{i-1},x_{i+1},\ldots,x_n)\in \mathbb{F}_2^{n-1}}(f(x_1,\ldots,\overset{i}{0},\ldots,x_n)\oplus f(x_1,\ldots,\overset{i}{1},\ldots,x_n))
\end{equation*}
\begin{equation*}
=\frac{1}{2^{n-1}}\sum_{(x_1,\ldots,x_{i-1},x_{i+1},\ldots,x_n)\in \mathbb{F}_2^{n-1}}M_1(m_{2}(\ldots(M_{r-1}
(M_{r}\oplus 1)\oplus 1)\ldots)).
\end{equation*}
\begin{equation*}
=\frac{1}{2^{n-1}}\sum_{(x_1,\ldots,x_{i-1},x_{i+1},\ldots,x_n)\in \mathbb{F}_2^{n-1}}M_1m_{2}(\ldots(M_{r-1}
(M_{r}\oplus 1)\oplus 1)\ldots).
\end{equation*}
\begin{equation*}
=\frac{1}{2^{n-1}}\sum_{j=1}^{r-1}(-1)^{j-1}2^{n-1-((k_1+k_2-1)+k_3+\ldots +k_{j+1})}=\frac{1}{2^{n-1}}\sum_{j=1}^{r-1}(-1)^{j-1}2^{n-\sum_{i=1}^{j+1}k_i}
\end{equation*}
by Equation \ref{eq4.3} in Theorem \ref{th4.1}. Note that $M_1m_2$ is the first layer, $M_3$ is the second layer, and so on.
Now let us consider the variables in the $l$th layer, i.e., $x_i$ is an essential variable of $M_l$, $2\leq l\leq r-1$. We have
$M_l=(x_i\oplus a_i)m_l$ and
\begin{equation*}
\alpha_i^f=\frac{1}{2^{n-1}}\sum_{(x_1,\ldots,x_{i-1},x_{i+1},\ldots,x_n)\in \mathbb{F}_2^{n-1}}(f(x_1,\ldots,\overset{i}{0},\ldots,x_n)\oplus f(x_1,\ldots,\overset{i}{1},\ldots,x_n))
\end{equation*}
\begin{equation*}
=\frac{1}{2^{n-1}}\sum_{(x_1,\ldots,x_{i-1},x_{i+1},\ldots,x_n)\in \mathbb{F}_2^{n-1}}M_1\ldots M_{l-1}m_l(M_{l+1}(\ldots (M_r\oplus 1)\ldots )\oplus 1).
\end{equation*}
\begin{equation*}
=\frac{1}{2^{n-1}}\sum_{j=1}^{r-l+1}(-1)^{j-1}2^{n-1-((k_1+\ldots + k_{l}-1)+k_{l+1}+\ldots +k_{j+l-1})}=\frac{1}{2^{n-1}}\sum_{j=1}^{r-l+1}(-1)^{j-1}2^{n-\sum_{i=1}^{j+l-1}k_i}
\end{equation*}
by Equation \ref{eq4.3} in Theorem \ref{th4.1}. Note that $M_1\ldots M_{l-1}m_l$ is the first layer, $M_{l+1}$ is the second layer, and so on.
Finally, let $x_i$ be a variable in the last layer $M_{r}$, and write $M_r=(x_i\oplus a_i)m_r$. We have
\begin{equation*}
\alpha_i^f=\frac{1}{2^{n-1}}\sum_{(x_1,\ldots,x_{i-1},x_{i+1},\ldots,x_n)\in \mathbb{F}_2^{n-1}}M_1M_2\ldots\ M_{r-1}m_r=\frac{1}{2^{n-1}}
\end{equation*} by Lemma \ref{lm4.1}.
Variables in the same layer have the same activity, so we use $A_l^f$ to denote the activity of each variable in the $l$th layer $M_l$, $1\leq l\leq r$. We find that the formula of $A_l^f$ derived for $2\leq l\leq r-1$ also holds when $l=1$, $l=r$, or $r=1$. Hence, we summarize all the above as follows.
\begin{theorem}\label{th4.2}
Let $f$ be a NCF written as in Theorem \ref{th3.2}.
Then the activity of each variable in the $l$th layer, $1\leq l\leq r$, is
\begin{equation}\label{4.5}
A_l^f=\frac{1}{2^{n-1}}\sum_{j=1}^{r-l+1}(-1)^{j-1}2^{n-\sum_{i=1}^{j+l-1}k_i}
\end{equation}
The average sensitivity of $f$ is
\begin{equation}\label{eq4.6}
s^f=\sum_{l=1}^rk_lA_l^f=\frac{1}{2^{n-1}}\sum_{l=1}^r k_l\sum_{j=1}^{r-l+1}(-1)^{j-1}2^{n-\sum_{i=1}^{j+l-1}k_i}
\end{equation}
\end{theorem}
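Equations \ref{4.5} and \ref{eq4.6} can be verified by enumeration. The sketch below (our own illustration; the layer encoding is ours) checks them for an NCF with layer sizes $(2,1,2)$ and $n=5$, with all $a_j=0$; the resulting $s^f=\frac{15}{16}$ agrees with the example at the end of this section.

```python
from itertools import product

def eval_ncf(layers, x):
    # f = M_1(M_2(...(M_r xor 1)...) xor 1), M_i = prod_j (x_j xor a_j)
    t = None
    for idx, a in reversed(layers):
        m = 1
        for j, aj in zip(idx, a):
            m &= x[j] ^ aj
        t = m if t is None else m & (t ^ 1)
    return t

def layer_activity(ks, n, l):
    """A_l^f from Equation (4.5), layers indexed l = 1..r."""
    r = len(ks)
    return sum((-1) ** (j - 1) * 2 ** (n - sum(ks[:j + l - 1]))
               for j in range(1, r - l + 2)) / 2 ** (n - 1)

n, ks = 5, (2, 1, 2)
layers = [((0, 1), (0, 0)), ((2,), (0,)), ((3, 4), (0, 0))]

def brute_activity(i):  # brute-force alpha_i^f, variable index i = 0..n-1
    c = 0
    for x in product((0, 1), repeat=n):
        y = list(x); y[i] ^= 1
        c += eval_ncf(layers, x) ^ eval_ncf(layers, tuple(y))
    return c / 2 ** n

# variable -> layer: x0, x1 in layer 1, x2 in layer 2, x3, x4 in layer 3
layer_of = (1, 1, 2, 3, 3)
assert all(brute_activity(i) == layer_activity(ks, n, layer_of[i]) for i in range(n))
print(sum(k * layer_activity(ks, n, l + 1) for l, k in enumerate(ks)))  # 0.9375 = 15/16
```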
Analyzing the formulas in Theorem \ref{th4.2}, we have
\begin{cor}\label{co4.1}
For $n\geq 3$, $A_1^f>A_2^f>\ldots >A_r^f$ and $\frac{n}{2^{n-1}}\leq s^f < 2- \frac{1}{2^{n-2}}$.
\end{cor}
\begin{proof}
\begin{equation*}
A_l^f=\frac{1}{2^{n-1}}\sum_{j=1}^{r-l+1}(-1)^{j-1}2^{n-\sum_{i=1}^{j+l-1}k_i}=\frac{1}{2^{n-1}}(2^{n-k_1-\ldots -k_l}-2^{n-k_1-\ldots -k_{l+1}}+\ldots (-1)^{r-l})
\end{equation*}
Since the sum is an alternating series with strictly decreasing terms and $k_{l+1}\geq 1$, we have
\begin{equation*}
\frac{1}{2^{n-1}}(2^{n-k_1-\ldots -k_l-1})\leq \frac{1}{2^{n-1}}(2^{n-k_1-\ldots -k_l}-2^{n-k_1-\ldots -k_{l+1}})< A_l^f< \frac{1}{2^{n-1}}(2^{n-k_1-\ldots -k_l})
\end{equation*}
Hence,
\begin{equation*}
A_{l+1}^f< \frac{1}{2^{n-1}}(2^{n-k_1-\ldots -k_{l+1}})\leq \frac{1}{2^{n-1}}(2^{n-k_1-\ldots -k_l-1})<A_l^f.
\end{equation*}
We have
\begin{equation*}
k_1A_1^f=\frac{k_1}{2^{n-1}}(2^{n-{k_1}}-2^{n-{k_1}-k_2}+2^{n-{k_1}-k_2-k_3}-\ldots (-1)^{r-1})
\end{equation*}
\begin{equation*}
k_2A_2^f=\frac{k_2}{2^{n-1}}(2^{n-k_1-k_2}-2^{n-k_1-k_2-k_3}+2^{n-k_1-k_2-k_3-k_4}-\ldots (-1)^{r-2})
\end{equation*}
\begin{equation*}
\ldots \ldots
\end{equation*}
\begin{equation*}
k_lA_l^f=\frac{k_l}{2^{n-1}}(2^{n-k_1-\ldots-k_l}-2^{n-k_1-\ldots-k_l-k_{l+1}}-\ldots (-1)^{r-l})
\end{equation*}
\begin{equation*}
\ldots \ldots
\end{equation*}
\begin{equation*}
k_rA_r^f=\frac{k_r}{2^{n-1}}
\end{equation*}
Hence, $s^f=\sum_{l=1}^rk_lA_l^f\geq \frac{k_1}{2^{n-1}}+\frac{k_2}{2^{n-1}}+\ldots +\frac{k_r}{2^{n-1}}=\frac{n}{2^{n-1}}$, so the NCF with LAYER NUMBER 1 has the minimal average sensitivity.
On the other hand, $s^f=\sum_{l=1}^rk_lA_l^f<\frac{k_1}{2^{n-1}}2^{n-k_1}+\frac{k_2}{2^{n-1}}2^{n-k_1-k_2}+\ldots +\frac{k_l}{2^{n-1}}2^{n-k_1-\ldots -k_l}+\dots+\frac{k_r}{2^{n-1}}=U(k_1,\ldots,k_r)$, where $k_1+\ldots +k_r=n$, $k_i\geq 1$ for $i=1,\ldots, r-1$ and $k_r\geq 2$. We now find the maximal value of $U(k_1,\ldots,k_r)$.
First, we claim that $k_r=2$ when $U(k_1,\ldots,k_r)$ attains its maximal value: if $k_r$ is increased by $1$, the last term contributes $\frac{1}{2^{n-1}}$ more to $U(k_1,\ldots,k_r)$, but then some $k_l$ must be decreased by $1$ (since $k_1+\ldots+k_r=n$), and the term
\begin{equation*}
\frac{k_l}{2^{n-1}}2^{n-k_1-\ldots -k_l}
\end{equation*}
then decreases by more than $\frac{1}{2^{n-1}}$.
Now, look at $\frac{k_1}{2^{n-1}}2^{n-k_1}$: it attains its maximal value only when $k_1=1$ or $k_1=2$, but $k_1=1$ is the right choice since it also makes all the other terms greater.
Next, look at $\frac{k_2}{2^{n-1}}2^{n-k_1-k_2}$: it attains its maximal value when $k_1=k_2=1$ or when $k_1=1$ and $k_2=2$; again, $k_2=1$ is the best choice, making all the other terms greater.
In general, if $k_1=\ldots=k_{l-1}=1$, then $\frac{k_l}{2^{n-1}}2^{n-k_1-\ldots -k_l}$ attains its maximal value when $k_l=1$, where $1\leq l\leq r-1$.
In summary, we have shown that $U(k_1,\ldots,k_r)$ reaches its maximal value when $r=n-1$, $k_1=\ldots=k_{n-2}=1$, $k_{n-1}=2$, and
$\max U(k_1,\ldots,k_r)=U(1,\ldots,1,2)=\frac{1}{2^{n-1}}(2^{n-1}+2^{n-2}+\ldots+ 2^{2}+2)=2-\frac{1}{2^{n-2}}$.
\end{proof}
\begin{remark}
So the average sensitivity is bounded by constants for any NCF with any number of variables: the minimal value approaches $0$ and the maximal value of $U(k_1,\ldots,k_r)$ approaches $2$ as $n\rightarrow \infty$. Hence, $0<s^f<2$ for any NCF with an arbitrary number of variables.
\end{remark}
In the following, we evaluate Equation \ref{eq4.6} for some parameters $k_1,\ldots,k_r$. We have
\begin{lemma}\label{lm4.2}
1) When $r=n-1$, $k_1=\ldots=k_{n-2}=1$, $k_{n-1}=2$, $s^f=\frac{4}{3}-\frac{3+(-1)^n}{3\times 2^n}$;
2) Given $n\geq 4$, $r=n-2$, $k_1=\ldots=k_{n-3}=1$, $k_{n-2}=3$, $s^f=\frac{4}{3}-\frac{9+5(-1)^{n-1}}{3\times 2^n}$;
3) If $n$ is even and $n\geq 6 $, $r=\frac{n}{2}$, $k_1=1$, $k_2=\ldots=k_{\frac{n}{2}-1}=2$, $k_{\frac{n}{2}}=3$, then $s^f=\frac{4}{3}-\frac{4}{3\times 2^n}$. Hence, these three average sensitivities are equal if $n$ is even.
\end{lemma}
\begin{proof}
When $r=n-1$, $k_1=\ldots=k_{n-2}=1$ and $k_{n-1}=2$, by Equation \ref{eq4.6} we have
\begin{equation*}
s^f=\sum_{l=1}^{r}k_lA_l^f=\frac{1}{2^{n-1}}\sum_{l=1}^{n-1} k_l\sum_{j=1}^{n-l}(-1)^{j-1}2^{n-\sum_{i=1}^{j+l-1}k_i}
\end{equation*}
\begin{equation*}
=\frac{1}{2^{n-1}}\sum_{l=1}^{n-1} k_l(\sum_{j=1}^{n-l-1}(-1)^{j-1}2^{n-j-l+1}+(-1)^{n-l-1})=\frac{1}{2^{n-1}}\sum_{l=1}^{n-1} k_l(\frac{1}{3}2^{n-l+1}+\frac{1}{3}(-1)^{n-l})
\end{equation*}
\begin{equation*}
=\frac{1}{2^{n-1}}(\sum_{l=1}^{n-2} (\frac{1}{3}2^{n-l+1}+\frac{1}{3}(-1)^{n-l})+2)=\frac{4}{3}-\frac{3+(-1)^n}{3\times 2^n}
\end{equation*}
The other two formulas are also routine simplifications of Equation \ref{eq4.6}.
\end{proof}
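The three closed forms in Lemma \ref{lm4.2} are easy to check numerically from Equation \ref{eq4.6}. The sketch below (our own check) confirms that for even $n$, e.g. $n=6$, all three parameter choices give the same value $\frac{21}{16}$.

```python
def avg_sens(ks, n):
    """s^f from Equation (4.6) for layer sizes k_1, ..., k_r."""
    r = len(ks)
    total = 0
    for l in range(1, r + 1):
        for j in range(1, r - l + 2):
            total += ks[l - 1] * (-1) ** (j - 1) * 2 ** (n - sum(ks[:j + l - 1]))
    return total / 2 ** (n - 1)

n = 6
cases = [(1, 1, 1, 1, 2),   # r = n-1
         (1, 1, 1, 3),      # r = n-2
         (1, 2, 3)]         # r = n/2
print([avg_sens(ks, n) for ks in cases])  # [1.3125, 1.3125, 1.3125]  (= 21/16)
```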
Based on our numerical calculations, Lemma \ref{lm4.2}, and the proof of Corollary \ref{co4.1}, we have the following
\begin{conj}\label{conj4.1}
The maximal value of $s^f$ is $\frac{4}{3}-\frac{3+(-1)^n}{3\times 2^n}$. It is reached when the NCF has the maximal LAYER NUMBER $n-1$, i.e., when $r=n-1$, $k_1=\ldots=k_{n-2}=1$, $k_{n-1}=2$. When $n$ is even, this maximal value is also reached by NCFs with parameters
$n\geq 4$, $r=n-2$, $k_1=\ldots=k_{n-3}=1$, $k_{n-2}=3$, or $n\geq 6 $, $r=\frac{n}{2}$, $k_1=1$, $k_2=\ldots=k_{\frac{n}{2}-1}=2$ and $k_{\frac{n}{2}}=3$.
\end{conj}
\begin{remark}\label{re4.2}
When $n=6$, the NCF with $k_1=1$, $k_2=2$, $k_3=1$ and $k_4=2$ also has the maximal average sensitivity $\frac{21}{16}$, but this cannot be generalized. If the above conjecture is true, then $0<s^f<\frac{4}{3}$ for any NCF with an arbitrary number of variables. In other words, both $0$ and $\frac{4}{3}$ are uniform tight bounds for any NCF.
\end{remark}
We point out that, given the algebraic normal form of $f$, it is easy to find all of its canalyzing variables (the first layer $M_1$); then,
writing $f=M_1g\oplus b$ and repeating the procedure on $g$, we can easily determine whether $f$ is a NCF, and if so, write it in the form of Theorem \ref{th3.2}.
We end this section with the following example.
\begin{example}
Let $N(x_1,x_2,x_3,x_4)=x_1x_2x_3\oplus x_2x_3x_4\oplus x_1x_3\oplus x_3x_4\oplus 1$ and
$Y(x_1,x_2,x_3,x_4,x_5)=x_1x_2x_3x_4x_5\oplus x_1x_2x_3x_4\oplus x_1x_2x_4x_5\oplus x_1x_2x_4\oplus x_1x_3x_4\oplus x_1x_3\oplus x_1x_4\oplus x_1$.
For $N(x_1,x_2,x_3,x_4)$, checking all $4$ variables, we find that when $x_2=1$ or $x_3=0$ the function becomes the constant $1$, so
$N(x_1,x_2,x_3,x_4)=(x_2\oplus 1)x_3N_1\oplus 1$. Actually,
$N(x_1,x_2,x_3,x_4)=x_1x_2x_3\oplus x_2x_3x_4\oplus x_1x_3\oplus x_3x_4\oplus 1=x_3(x_1x_2\oplus x_2x_4\oplus x_1\oplus x_4)\oplus 1$
$=x_3(x_2(x_1\oplus x_4)\oplus x_1\oplus x_4)\oplus 1=x_3((x_2\oplus 1)(x_1\oplus x_4))\oplus 1$. Since $x_1\oplus x_4$ has no canalyzing variable, $N$ is not a NCF, but a partially NCF.
For $Y(x_1,x_2,x_3,x_4,x_5)$, we find that when $x_1=0$ or $x_3=1$ the function reduces to $0$, so $Y=x_1(x_3\oplus 1)Y_1$,
where $Y_1=x_2x_4x_5\oplus x_2x_4\oplus x_4\oplus 1$. For this function, only when $x_4=0$ does $Y_1$ reduce to $1$, so $Y_1=x_4Y_2\oplus 1$, where $Y_2=x_2x_5\oplus x_2\oplus 1$; finally, $Y_2=x_2(x_5\oplus 1)\oplus 1$. So $Y$ is a NCF with $n=5$, $r=3$, $k_1=2$, $k_2=1$, $k_3=2$, $M_1=x_1(x_3\oplus 1)$, $M_2=x_4$ and $M_3=x_2(x_5\oplus 1)$; hence its Hamming weight is $5$ by Equation \ref{eq4.3} and its average sensitivity is $\frac{15}{16}$ by Equation \ref{eq4.6}.
\end{example}
\section{Conclusion}
We obtained a complete characterization of nested canalyzing functions
(NCFs) by deriving their unique algebraic normal form (polynomial form). We
introduced a new invariant, the LAYER NUMBER, for nested canalyzing functions, which
quantifies the dominance of nested canalyzing variables. Consequently, we
obtained an explicit formula for the number of nested canalyzing functions. Based on the
polynomial form, we also obtained a formula for the Hamming weight of each NCF, as well as
an explicit formula for the activity of each variable of a NCF. Consequently, we proved that the average sensitivity
of any NCF is less than $2$, which explains theoretically why NCFs are stable. Finally, we conjectured that the tight upper bound for the average sensitivity of any NCF is $\frac{4}{3}$.
\end{document}
\begin{document}
\title{Landau-Zener topological quantum state transfer}
\section{Introduction}
Excitation transfer in classical and quantum networks is of major interest in different areas of science and technology with a wealth of applications ranging from coherent control of chemical reactions \cite{r1} and efficient excitation transfer in organic molecules \cite{r2,r3,r4} to quantum state transfer (QST) and large-scale quantum information processing \cite{r5,r6,r7,r8,r9,r10,r11,r12,r12bis,r13,r14,r15,r16,r17,r18,r19,r19bis,r19tris}. For the latter application, quantum states need to be coherently and robustly transferred between distant nodes in a quantum network. In the past two decades, different schemes have been proposed to implement QST in various physical systems. Examples include probabilistic state transfer in a chain with uniform parameters \cite{r6}, perfect state transfer in time-independent chains with properly tailored hopping amplitudes \cite{r10,r11,r20,r21,r22}, state transfer using externally applied time-dependent control fields \cite{r14,r15,r19bis}, Rabi flopping of nearly-resonant edge states \cite{r17}, adiabatic, superadiabatic and topologically-protected QST schemes \cite{r12bis,r23uff,r23,r24,r25,r25bis,r25tris,r25quatris,r26,r27,r27bis,r28}. A major requirement of QST protocols is to be robust against sizable imperfections in the network. To this regard, topological QST methods, where a quantum state can be stored and transmitted in a topologically-protected manner, have attracted great interest in the past few years owing to the opportunity to harvest topological phenomena for guiding and transmitting quantum information reliably \cite{r23,r24,r25,r25bis,r26,r27,r27bis,r28}. The Su-Schrieffer-Heeger (SSH) model, originally introduced to describe transport properties of the conductive polyacetylene \cite{r29}, provides perhaps the most basic model system supporting topological excitations protected by chiral symmetry that is a promising setting for the realization of topological QST \cite{r12bis,r23,r25bis,r27,r27bis,r28}. 
In the SSH dimeric chain, two distinct QST protocols have been suggested, depending on whether the chain comprises an odd or even number of sites. For a SSH chain with an odd number of sites, i.e. with half integer dimers, there is only one edge state, which is localized either at the left or right edges of the chain depending on whether the intra- to inter-hopping rate ratio $r=t_2/t_1$ is larger or smaller than one. By adiabatically varying the ratio $r$, from below to above one, QST is realized by pumping the localized state from one edge to the other one (Thouless pumping) \cite{r12bis,r25,r28}. Since the edge state is topologically protected against perturbations that do not break chiral symmetry, this QST protocol shows partial protection against structural imperfections of the hopping amplitudes in the chain (off-diagonal disorder). However, it remains sensitive to on-diagonal disorder, i.e. disorder of site energies. For a SSH chain with an even number of sites, i.e. with an integer number of dimers, in the non-trivial topological phase $r<1$ there are two edge modes. For finite chains, the two edge modes hybridize and undergo Rabi-like oscillations, which can be exploited to realize QST between the two edge sites of the chain \cite{r25bis,r27,r27bis}. For static chains, the time required to achieve QST with a high fidelity turns out to be extremely long \cite{r25bis,r27bis}, which is undesirable owing to decoherence effects. Moreover, a careful timing of the interaction is required, preventing the possibility to delay the transfer process on demand. Recently, a protocol has been suggested to shorten the transit time, where the ratio $r$ of hopping rates is adiabatically varied to confine $(r \simeq 0$), delocalize and interfere ($r \simeq 1$), and then relocalize again ($r \simeq 0$) the two edge states \cite{r27}. 
However, the time for QST is affected by structural disorder in the chain, even though the disorder is only off-diagonal and does not break the chiral symmetry of the underlying Hamiltonian. Hence, the intrinsic robustness of the topological edge states is not fully exploited in such a QST scheme. \\
In this article we suggest a different route for topological QST in a SSH chain which is robust against both off-diagonal and on-diagonal structural disorder in the chain. We consider a SSH chain with an integer number of dimers \cite{r25bis,r27,r27bis} and realize QST between the two topological edge modes via a Landau-Zener (rather than Rabi flopping) transition, which is robust against both off- and on-diagonal disorder of the chain. As compared to QST based on Rabi flopping of adiabatically-deformed topological edge states \cite{r27}, the increase in transfer time is minimal while high fidelity is observed even for a moderate-to-strong disorder in the chain.
\begin{figure}
\caption{\label{fig1} (a) Sketch of the dimerized spin chain comprising $N$ dimers, with alternating coupling strengths $t_1/2$ and $t_2/2$ and staggered magnetic fields $\pm\delta/2$ applied at sublattices A and B.}
\end{figure}
\section{Quantum State Transfer in a dimerized spin chain}
As a paradigmatic model of QST, we consider the transfer of a single qubit in spin-1/2 chain systems \cite{r5,r6}; however, different setups could be envisaged, such as superconducting qubit chains \cite{r19tris,r28} and optical waveguide lattices \cite{r20,r21,r22,r23,uffa1}. In photonic systems, topologically-protected light guiding has been demonstrated in several experiments \cite{Alex1,Alex2,Alex3}, and adiabatic transport of topological edge states via Thouless pumping has been reported using either classical or quantum light \cite{r23,r25,r34bis}.
Let us consider a dimerized spin chain \cite{r31} comprising $N$ dimers, with spins coupled through the nearest-neighbor XX model with alternating coupling strengths $t_1/2$ and $t_2/2$ [Fig.1(a)]. Staggered magnetic fields, with amplitudes $\delta/2$ and $-\delta/2$, are applied to sublattices A and B of the spin chain. The Hamiltonian of the system reads \cite{r5,r6,r31}
\begin{equation}
\hat{H}=\sum_{n=1}^{2N-1}J_n ( \sigma_n^x\sigma_{n+1}^{x}+ \sigma_n^y\sigma_{n+1}^{y})+\sum_{n=1}^{2N} h_n \sigma_n^z
\end{equation}
where $J_n=t_1/2$ for $n$ even, $J_n=t_2/2$ for $n$ odd, and $h_n=-(-1)^n \delta/2$. In the standard protocol of one-qubit QST \cite{r5},
the initial state, encoded on the left-edge sender spin ${\mathcal A}$, is assumed to be given by
$| \psi (0) \rangle = \alpha |0 \rangle_z + \beta |1 \rangle_z$, with $|\alpha|^2+|\beta|^2=1$ ($|0\rangle_z$ and $|1\rangle_z$ denote the spin-up and
-down states along the $z$ axis, respectively), whereas the other sites of the chain
are prepared with all spins up. The
efficiency of the state transfer to the right-edge receiver spin $\mathcal{B}$ at time $t$ is quantified by the fidelity $\mathcal{F}(t)$, which equals 1 for a perfect
transfer. In order to evaluate the channel quality independently
of the specific input state, one usually introduces the average fidelity $\bar{\mathcal{F}}(t )$, which is obtained from $\mathcal{F}(t)$
after averaging over all possible pure input states of the qubit. The average fidelity reads \cite{r3,r18}
\begin{equation}
\bar{\mathcal{F}}(t )=\frac{1}{2}+\frac{1}{3}|f(t)|+ \frac{1}{6} | f(t)|^2
\end{equation}
where $f(t)$ is the transition amplitude of a
spin excitation from the left to the right edge site of the chain. Clearly, a high average fidelity is achieved whenever the excitation transfer probability $|f(t)|^2$ is as close to one as possible.
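As a quick numerical check, Eq.(2) can be evaluated directly from the transfer amplitude. The following Python sketch (the function name and implementation are ours, not part of the paper) reproduces the limiting values $\bar{\mathcal{F}}=1$ for perfect transfer ($|f|=1$) and $\bar{\mathcal{F}}=1/2$ (the classical limit) for no transfer.

```python
import numpy as np

def average_fidelity(f):
    """Average one-qubit transfer fidelity of Eq. (2),
    given the excitation transfer amplitude f(t)."""
    a = np.abs(f)
    return 0.5 + a / 3.0 + a**2 / 6.0
```

Note that $\bar{\mathcal{F}}$ depends only on $|f|$, so any global phase of the transfer amplitude (such as the $-i$ acquired in Rabi flopping) is irrelevant.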
Since the dynamics occurs in the single-excitation sector, $f(t)$ can be calculated from the hopping dynamics of a single spinless particle along a tight-binding chain with alternating hopping rates $t_1$, $t_2$ and site potentials $\pm \delta$ on the two sublattices A and B \cite{r6,r17,r18,r31}.
After writing $|\psi(t) \rangle=\sum_{n=1}^{2N} c_n(t) |n \rangle$ for the state vector of the spinless particle hopping on the chain, the evolution equations of the occupation amplitudes
$c_n$ at the various sites $|n \rangle$ of the chain, as obtained from the single-particle Schr\"odinger equation, read
\begin{equation}
i \frac{dc_n}{dt}=\sum_{m=1}^{2N}\mathcal{H}_{n,m}c_m
\end{equation}
($n=1,2,...,2N$) where the $2N \times 2N$ matrix Hamiltonian $\mathcal{H}$ is the Rice-Mele Hamiltonian \cite{r37bis}, given by
\begin{equation}
\mathcal{H}= \left(
\begin{array}{cccccccccc}
\delta & t_2 & 0 & 0 & 0 & ...& 0 & 0 & 0 &0\\
t_2 & -\delta & t_1 & 0 & 0 & ... & 0 & 0 & 0 & 0 \\
0 & t_1 & \delta & t_2 & 0 & ... & 0 & 0 & 0 & 0\\
... & ... & ...& ...& ... & ...& ...& ...& ...& ... \\
0 & 0& 0 & 0& 0 & ... & 0 & t_1 &\delta & t_2 \\
0 & 0& 0 & 0& 0 & ... & 0 & 0 & t_2 &- \delta \\
\end{array}
\right).
\end{equation}
Note that $\mathcal{H}$ reduces to the SSH model in the $\delta=0$ limit.
The single-particle transfer excitation amplitude $f(t)$, that determines the average fidelity according to Eq.(2), is given by $f(t)=c_{2N}(t)$, where $c_{2N}(t)$ is the solution to Eq.(3) with the initial condition $c_n(0)=\delta_{n,1}$.\\
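The Rice-Mele matrix of Eq.(4) is straightforward to assemble numerically. The following Python sketch (our own helper, using 0-based site indexing) builds $\mathcal{H}$ for given $t_1$, $t_2$, $\delta$; in the SSH limit $\delta=0$ with $t_2<t_1$, its spectrum exhibits the pair of near-zero-energy edge modes discussed below.

```python
import numpy as np

def rice_mele(N, t1, t2, delta=0.0):
    """2N x 2N Rice-Mele matrix of Eq. (4): alternating hoppings t2, t1
    and staggered site energies +delta/-delta on sublattices A/B."""
    dim = 2 * N
    H = np.zeros((dim, dim))
    for n in range(dim - 1):
        # bond between 0-based sites n and n+1: t2 on intra-dimer bonds
        H[n, n + 1] = H[n + 1, n] = t2 if n % 2 == 0 else t1
    H[np.arange(dim), np.arange(dim)] = delta * (-1.0) ** np.arange(dim)
    return H
```

For instance, `np.linalg.eigvalsh(rice_mele(10, 1.0, 0.5))` yields two midgap eigenvalues $\pm\kappa$ well inside the bulk gap $|t_1-t_2|$, in agreement with the hybridized edge-state picture of Sec.2.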
Let us first briefly review the QST protocols based on Rabi flopping of left ($L$) and right ($R$) topological edge states, recently introduced in Refs.\cite{r27,r27bis}. In such protocols, one assumes $\delta=0$ (no local magnetic fields) and the non-trivial topological phase $ r \equiv t_2/t_1<1$ of the SSH chain, which ensures the existence of topological edge states. The state transfer arises because of hybridization of the $L$ and $R$ edge states in the finite chain, which occupy the A and B sublattices, respectively [Fig.1(b)]. They are defined by
\begin{eqnarray}
|L \rangle & = & \mathcal{N} \sum_{n=1,3,5,...,2N-1} (-t_2/t_1)^{(n-1)/2} | n \rangle \\
|R \rangle & = & \mathcal{N} \sum_{n=2,4,6,...,2N} (-t_2/t_1)^{(N-n/2)} | n \rangle
\end{eqnarray}
where
\[ \mathcal{N}=\sqrt{\frac{1}{\sum_{n=0}^{N-1} (t_2/t_1)^{2n}}}= \sqrt{\frac{r^2-1}{r^{2N}-1}}\]
is the normalization factor.
Strictly speaking, the $L$ and $R$ edge states defined by Eqs.(5) and (6) are exact eigenmodes of the Hamiltonian $\mathcal{H}$ only for semi-infinite chains, i.e. when the chain is truncated at the left or right edge only, respectively. In this limiting case, $| L \rangle$ and $| R \rangle$ are zero-energy degenerate modes with topological protection against off-diagonal disorder (hopping-rate disorder) that does not close the gap. Both edge states are exponentially localized, with a localization length (measured in units of the lattice period) given by
\begin{equation}
\Lambda \sim \frac{1}{2 \log (t_1/t_2)}.
\end{equation}
Note that $\Lambda$ shrinks to zero as $t_2 / t_1 \rightarrow 0$ (flat-band limit), while $\Lambda$ diverges as $t_2/t_1 \rightarrow 1$ (gap-closing limit). Thus, the two edge states overlap strongly with the sender $\mathcal{A}$ and receiver $\mathcal{B}$ sites provided that $t_2 / t_1 \ll 1$. For a finite chain of $N$ dimers the $L$ and $R$ modes hybridize and the zero-energy degeneracy is lifted. In fact, in the subspace spanned by the vectors $|L \rangle$ and $|R \rangle$ defined by Eqs.(5) and (6), after expanding the state vector as
\begin{equation}
| \psi(t) \rangle= a_L(t) |L \rangle+ a_R(t) |R \rangle
\end{equation}
the reduced two-state dynamics of amplitudes $a_{R,L}(t)$ reads (see Appendix A)
\begin{eqnarray}
i \frac{da_L}{dt} & = & \kappa a_R \\
i \frac{da_R}{dt} & = & \kappa a_L
\end{eqnarray}
where we have set
\begin{equation}
\kappa \equiv \frac{t_1 \left( t_2/t_1\right)^{N} \left[ (t_2/t_1)^2-1\right]}{(t_2/t_1)^{2N}-1}.
\end{equation}
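The closed-form splitting (11) can be checked against a direct evaluation of the matrix element $\langle L|\mathcal{H}|R\rangle$ [cf. Eq.(A.4)]. The Python sketch below (our own helpers, 0-based indexing) builds the normalized states of Eqs.(5)-(6) and the SSH matrix of Eq.(4) with $\delta=0$; the overlap agrees with Eq.(11) in magnitude, up to an overall sign stemming from the alternating-sign convention of the edge states.

```python
import numpy as np

def edge_states(N, r):
    """Normalized L/R edge states of Eqs. (5)-(6) for r = t2/t1 < 1,
    on 2N sites (L lives on sublattice A, R on sublattice B)."""
    L = np.zeros(2 * N)
    R = np.zeros(2 * N)
    L[0::2] = (-r) ** np.arange(N)             # sites 1, 3, ... (1-based)
    R[1::2] = (-r) ** (N - 1 - np.arange(N))   # sites 2, 4, ... (1-based)
    norm = np.sqrt((r**2 - 1) / (r**(2 * N) - 1))
    return norm * L, norm * R

def kappa(N, t1, t2):
    """Edge-state splitting of Eq. (11)."""
    r = t2 / t1
    return t1 * r**N * (r**2 - 1) / (r**(2 * N) - 1)

def ssh_matrix(N, t1, t2):
    """SSH limit (delta = 0) of the Rice-Mele matrix, Eq. (4)."""
    H = np.zeros((2 * N, 2 * N))
    for n in range(2 * N - 1):
        H[n, n + 1] = H[n + 1, n] = t2 if n % 2 == 0 else t1
    return H
```

With, e.g., $N=8$, $t_1=1$, $t_2=0.5$ one finds $|\langle L|\mathcal{H}|R\rangle| = |\kappa|$ to machine precision, while both states are normalized to one.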
Equations (9) and (10) show that in the finite chain the two edge-state eigenvectors of the Hamiltonian $\mathcal{H}$ are approximately given by the odd/even superpositions $(|L \rangle \pm | R \rangle) / \sqrt{2}$ of the $L$ and $R$ states, with eigen-energies $ \pm \kappa$. Interestingly, if at time $t=0$ the particle is prepared in the state $L$, which strongly overlaps with the sender site $\mathcal{A}$ for $t_2 / t_1 \ll 1$, i.e. assuming $a_L(0)=1$ and $a_R(0)=0$, then at time $t=T$ with
\begin{equation}
T = \frac{\pi}{2 \kappa}
\end{equation}
\begin{figure}
\caption{\label{<label name>}}
\end{figure}
one has $a_L(T)=0$ and $a_R(T)=-i$, indicating excitation transfer from the $L$ to the $R$ edge state (Rabi flopping). This is basically the transfer method considered in Refs.\cite{r25bis,r27bis}. The main limitation of this transfer scheme is that, in order to achieve transfer from $\mathcal{A}$ to $\mathcal{B}$ with high fidelity, the ratio $ r =t_2 /t_1$ should be chosen as small as possible, corresponding to an extremely long transit time $T$ according to Eqs.(11) and (12). A variant of the Rabi-flopping QST scheme, which considerably reduces the transit time $T$, has been recently proposed in Ref.\cite{r27}. The main idea is to adiabatically change the localization length of the edge states $L$ and $R$ by varying in time the ratio $r=t_2/t_1$, from zero at $t=0$ to a value $r=1-\epsilon$ at $t=T/2$ and then back to zero at $t=T$. For example, one can assume the adiabatic transfer protocol \cite{r27}
\begin{equation}
t_1=1 \; , \;\; t_2=\frac{1-\epsilon}{2} \left[1-\cos(2 \pi t /T) \right] \; , \;\; \delta=0
\end{equation}
as shown in Fig.2(a). In this case, at $t=0,T$, where $r=0$, the $L$ and $R$ edge states are tightly confined and coincide exactly with the sender ($\mathcal{A}$) and receiver ($\mathcal{B}$) edge sites of the chain, respectively, while at intermediate times the two states $L$ and $R$ are delocalized and can undergo Rabi flopping in a short time (since $\kappa$ takes a non-negligible value). The parameter $\epsilon$ ($0< \epsilon <1$) entering Eq.(13) determines the band gap of the SSH lattice at time $t=T/2$, with $\epsilon \rightarrow 0$ corresponding to a closing gap and $\epsilon \rightarrow 1$ to a flat band. In the adiabatic regime, a rough estimate of the minimum interaction time $T$ required to realize QST is obtained from the `area theorem'
\begin{equation}
\int_0^T \kappa(t) dt= \pi/2
\end{equation}
An example of QST based on the adiabatic Rabi protocol is shown in Figs.2(b-e). Figure 2(b) shows the behavior of the excitation transfer probability $p_{2N}(T) \equiv |f(T)|^2=|c_{2N}(T)|^2$ versus interaction time $T$ as obtained by numerical solution of the Schr\"odinger equation (3) (solid curve) with the initial condition $c_n(0)=\delta_{n,1}$ for a chain comprising $N=10$ dimers and assuming $\epsilon=0.1$ in Eq.(13). The dashed curve shows the corresponding behavior of the transfer probability $p_{2N}(T)$ as obtained from the approximate two-level model. The minimum optimal transfer time is obtained at $T \simeq 86$, corresponding roughly to the condition (14) (area theorem). The detailed behavior of the occupation probabilities of the sender ($p_1(t)=|c_1(t)|^2$) and receiver ($p_{2N}(t)=|c_{2N}(t)|^2$) sites in the chain, for the optimal interaction time $T=86$, is shown in Fig.2(c). The discrepancy between the exact results and those of the approximate two-level model observed in Figs.2(b) and (c) can mainly be ascribed to the small value of $\epsilon$ chosen in the simulations, corresponding to a small gap near $t=T/2$ and rather delocalized $L$ and $R$ states. At larger values of $\epsilon$ the two-state approximation provides a more accurate description of the dynamics [see for example the results shown in Figs.2(d) and (e), where $\epsilon=0.2$], although this requires a longer interaction time.
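The exact simulation of Fig.2 can be reproduced in a few lines. The Python sketch below (our own implementation; the step count and the use of a midpoint-sampled, piecewise-constant propagator are our choices) integrates Eq.(3) under the protocol (13) by exact exponentiation of the instantaneous Hamiltonian over small time steps. With $N=10$, $\epsilon=0.1$ and $T=86$ it returns a transfer probability close to one, consistently with the value $p_{2N}\simeq 0.995$ quoted in Sec.4.

```python
import numpy as np

def rice_mele(N, t1, t2, delta=0.0):
    """Rice-Mele matrix of Eq. (4), 0-based site indexing."""
    dim = 2 * N
    H = np.zeros((dim, dim))
    for n in range(dim - 1):
        H[n, n + 1] = H[n + 1, n] = t2 if n % 2 == 0 else t1
    H[np.arange(dim), np.arange(dim)] = delta * (-1.0) ** np.arange(dim)
    return H

def rabi_protocol_p(N=10, eps=0.1, T=86.0, steps=4000):
    """Transfer probability p_2N(T) for the adiabatic Rabi protocol,
    Eq. (13), by piecewise-constant integration of Eq. (3)."""
    dt = T / steps
    c = np.zeros(2 * N, dtype=complex)
    c[0] = 1.0                                   # c_n(0) = delta_{n,1}
    for k in range(steps):
        t = (k + 0.5) * dt                       # midpoint sampling
        t2 = 0.5 * (1 - eps) * (1 - np.cos(2 * np.pi * t / T))
        w, V = np.linalg.eigh(rice_mele(N, 1.0, t2))
        c = V @ (np.exp(-1j * w * dt) * (V.T @ c))  # exact step propagator
    return abs(c[-1]) ** 2
```

Since each step uses the exact propagator of the frozen Hamiltonian, the only discretization error comes from sampling the smooth time dependence of $t_2(t)$.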
\par
The adiabatic Rabi flopping scheme greatly reduces the interaction time as compared to a static model, thus mitigating decoherence effects. However, this method is sensitive not only to diagonal (on-site) disorder in the chain, but also to disorder in the coupling constants (off-diagonal disorder), in spite of the topological nature of the edge states (see Sec.4 below). The main reason is that, since the coupling $\kappa$ of the $L$ and $R$ edge states is an overlap integral of the $L$ and $R$ modes (see Appendix A), its value [and thus the optimal transfer time $T$ satisfying the area theorem (14)] is sensitive to off-diagonal disorder. In other words, while off-diagonal disorder does not break the chiral symmetry of the lattice, thus protecting the zero-energy value of the edge modes in the large (thermodynamic) $N$ limit, in a finite chain the disorder modifies the profile of the edge states and thus their energy splitting $2 \kappa$. Therefore, the optimal interaction time $T$ is sensitive to disorder in the chain, requiring a careful timing of the interaction to avoid degradation of the fidelity.
\section{Landau-Zener topological quantum state transfer}
In two-state systems, it is well known that adiabatic Landau-Zener (LZ) tunneling is a much more robust method than Rabi flopping to realize excitation transfer. The LZ model is one of the most widely used two-state approximations in resonance physics and has found broad applications in different areas of science, such as atomic and molecular physics, quantum optics, and chemical physics (see, e.g., \cite{r30} and references therein). In quantum control and quantum information science, several works in different experimental settings pointed out that LZ tunneling may provide a simple and effective solution for the realization of high-fidelity quantum state control without the need for precise timing \cite{r30a,r30b,r30c,r30d,r30e,r30f,r32}. Since the early experimental demonstrations of LZ interferometry in strongly-driven superconducting qubits \cite{palle1,palle2}, adiabatic rapid passage techniques are nowadays routinely realized in superconducting qubit systems. For example, interference in a superconducting qubit under periodic latching modulation, in which the level separation is switched abruptly between two values and is kept constant otherwise, has been demonstrated in \cite{r19tris}, whereas fast and high-fidelity perfect quantum state transfer in a superconducting qubit chain with parametrically tunable couplings has been recently reported in \cite{palle3}. Such previous studies suggest that LZ tunneling of topological edge states in the SSH chain, besides avoiding the timing problem of Rabi-like QST methods, could provide a viable route to high-fidelity QST which is robust against both diagonal and off-diagonal disorder of the chain \cite{r32,r33}. The main idea is to add a staggered local magnetic field $\delta$, of opposite sign in the two sublattices A and B of the spin chain, which is linearly and slowly ramped in time so as to realize LZ tunneling between the two edge states when they are delocalized in the chain. A schematic of the topological QST protocol based on the LZ transition is shown in Fig.3(a) and corresponds to the following time dependence of the parameters in the Rice-Mele Hamiltonian (4) [compare with Eq.(13)]
\begin{eqnarray}
t_1 & = & 1 \nonumber \\
t_2 & = & \left\{
\begin{array}{ll}
\frac{1-\epsilon}{2} \left[1-\cos( \pi t /\tau) \right] & 0<t<\tau \\
1- \epsilon & \tau<t<\tau+\tau_Z \\
\frac{1-\epsilon}{2} \left[1-\cos( \pi (t-\tau_Z) /\tau) \right] & \tau+\tau_Z<t<T
\end{array}
\right. \\
\delta & = & \left\{
\begin{array}{ll}
\delta_0 & 0<t< \tau \\
\delta_0-\alpha(t-\tau)/2 & \tau<t<\tau_Z+\tau \\
-\delta_0 & \tau+\tau_Z<t<T
\end{array}
\right. \nonumber
\end{eqnarray}
\begin{figure}
\caption{\label{<label name>}}
\end{figure}
\begin{figure}
\caption{\label{<label name>}}
\end{figure}
where $T=2 \tau+\tau_Z$ is the interaction time and $\alpha=4 \delta_0 / \tau_Z$ is the temporal gradient of the local magnetic field. Note that the transfer scheme comprises three stages: in stage I (time duration $\tau$), the two edge states are adiabatically delocalized as in the adiabatic Rabi scheme of Fig.2(a); however, the applied local magnetic field $\delta_0$ splits the energies of the two edge states far apart so that they do not interact. In stage II (duration $\tau_Z$) the magnetic field is linearly decreased in time until it vanishes and reverses sign, while the ratio $r=t_2/t_1$ is kept constant at a value close to one: in this time interval LZ tunneling between the delocalized $L$ and $R$ states occurs. Finally, in stage III (time duration $\tau$) the two edge states are adiabatically re-localized at the edge sites. In the spirit of the two-level approximation, the excitation transfer between the sender and receiver edge sites of the chain is described by the coupled equations (see Appendix A)
\begin{eqnarray}
i \frac{da_L}{dt} & = & \delta(t) a_L+\kappa(t) a_R \\
i \frac{da_R}{dt} & = & - \delta(t) a_R+\kappa(t) a_L
\end{eqnarray}
where $\kappa=\kappa(t)$ is given by Eq.(11) and the time dependence of $\delta$ and $t_2$ is defined by Eq.(15). We require $\delta_0 \gtrsim \kappa$ so that the two edge states are decoupled in stages I and III. Under such an assumption, the transition probability is given by the well-known Landau-Zener relation \cite{r30} $p_{2N} \simeq 1-\exp(-2 \pi \Gamma)$, with $\Gamma= \kappa^2/ \alpha$. Hence, a high excitation transfer is realized provided that $\Gamma \gtrsim 1$, i.e.
\begin{equation}
\tau_Z \gtrsim \frac{4 \delta_0}{\kappa^2}
\end{equation}
\begin{figure}
\caption{\label{<label name>}}
\end{figure}
with $\delta_0 \gtrsim \kappa$. As an example, Fig.3(b) shows the numerically-computed behavior of the transfer probability $p_{2N}$ for parameter values $N=10$, $\epsilon=0.1$, $\tau=60$, $\delta_0=0.2$ and for increasing values of the LZ time $\tau_Z$, i.e. of the interaction time $T=2 \tau+\tau_Z$. Solid and dashed curves in the figure refer to the full numerical simulations of the Schr\"odinger equation and to the approximate two-level model, respectively. Clearly, for a sufficiently long LZ time $\tau_Z$ [$\tau_Z \gtrsim 80$ in the simulation of Fig.3(b)], efficient excitation transfer is realized, which becomes largely insensitive to a change of $\tau_Z$, thus indicating that -- unlike in the Rabi flopping scheme -- precise timing of the interaction is not required in the topological LZ QST protocol. An example of the detailed behavior of the occupation probabilities at the sender ($p_1(t)=|c_1(t)|^2$) and receiver ($p_{2N}(t)=|c_{2N}(t)|^2$) sites, for a transit time $T=240$, is shown in Fig.3(c). Note that, as compared to the Rabi-flopping scheme of Fig.2, the LZ adiabatic scheme requires a longer interaction time (due to the additional LZ time $\tau_Z$); however, the increase of the transfer time $T$ is moderate (less than one order of magnitude). Parameter optimization to obtain high-fidelity transfer in the shortest possible interaction time $T$ would require full numerical simulations to scan the entire 4-dimensional parameter space $\epsilon$, $\delta_0$, $\tau$ and $\tau_Z$, with $T=2 \tau+\tau_Z$. This is a rather cumbersome task which goes beyond the scope of the present work. However, extended numerical simulations in reduced 2-dimensional spaces indicate that there exists a wide range of parameters where high values of the transfer probability ($p_{2N}$ larger than $0.95$) can be achieved with an interaction time $T$ a few times larger than the one typically required in the adiabatic Rabi scheme of Ref.\cite{r27}. 
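For completeness, the LZ protocol of Eq.(15) can be simulated along the same lines as the Rabi scheme. The Python sketch below (our own implementation; step count is our choice) uses the parameters of Fig.3(c) [$N=10$, $\epsilon=0.1$, $\delta_0=0.2$, $\tau=60$, $\tau_Z=120$, hence $T=240$] and should return a transfer probability close to one, consistently with the value $p_{2N}\simeq 0.995$ quoted in Sec.4.

```python
import numpy as np

def rice_mele(N, t1, t2, delta):
    """Rice-Mele matrix of Eq. (4), 0-based site indexing."""
    dim = 2 * N
    H = np.zeros((dim, dim))
    for n in range(dim - 1):
        H[n, n + 1] = H[n + 1, n] = t2 if n % 2 == 0 else t1
    H[np.arange(dim), np.arange(dim)] = delta * (-1.0) ** np.arange(dim)
    return H

def lz_schedule(t, eps, delta0, tau, tau_Z):
    """Piecewise parameters (t2, delta) of Eq. (15); t1 = 1 throughout."""
    alpha = 4 * delta0 / tau_Z
    if t < tau:                                   # stage I: delocalize
        return 0.5 * (1 - eps) * (1 - np.cos(np.pi * t / tau)), delta0
    if t < tau + tau_Z:                           # stage II: LZ field ramp
        return 1 - eps, delta0 - alpha * (t - tau) / 2
    # stage III: re-localize at the edges
    return 0.5 * (1 - eps) * (1 - np.cos(np.pi * (t - tau_Z) / tau)), -delta0

def lz_protocol_p(N=10, eps=0.1, delta0=0.2, tau=60.0, tau_Z=120.0, steps=6000):
    """Transfer probability p_2N(T) for the LZ protocol, T = 2 tau + tau_Z."""
    T = 2 * tau + tau_Z
    dt = T / steps
    c = np.zeros(2 * N, dtype=complex)
    c[0] = 1.0                                    # c_n(0) = delta_{n,1}
    for k in range(steps):
        t2, delta = lz_schedule((k + 0.5) * dt, eps, delta0, tau, tau_Z)
        w, V = np.linalg.eigh(rice_mele(N, 1.0, t2, delta))
        c = V @ (np.exp(-1j * w * dt) * (V.T @ c))
    return abs(c[-1]) ** 2
```

For these parameters the LZ estimate $\Gamma=\kappa^2/\alpha \simeq 0.85$ gives $1-e^{-2\pi\Gamma}\simeq 0.995$, in line with the full simulation.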
As an example, Figs.4(a) and (b) show numerically-computed maps of the transfer probability $p_{2N}$ in the $(\delta_0,T)$ and $(\epsilon,T)$ planes, respectively, for fixed values of the other parameters. The results shown in the figures refer to the exact numerical simulations of the Schr\"odinger equation (3), i.e. beyond the two-level approximation. The broad white areas in the plots, corresponding to a transfer probability larger than $\sim 0.95$, clearly indicate that high-fidelity QST can be achieved without any precise fine-tuning of the parameter values. Finally, let us discuss the scalability of the adiabatic LZ protocol with the separation between the two qubits, i.e. the number $2N$ of sites in the chain. As in the adiabatic Rabi protocol \cite{r27}, the interaction time $T$ required to realize state transfer with high fidelity is ultimately limited by the finite propagation speed of excitations in the chain, expressed by the Lieb-Robinson bound \cite{Robinson}, and by the adiabaticity criterion to avoid losses into the bulk states of the SSH lattice. In practice, in optimized protocols the transfer time $T$ scales with the number of lattice sites $2N$ according to the algebraic law $T \sim (2N)^ \rho$ with $\rho \geq 1$ \cite{r27}, the lowest value $\rho=1$ corresponding to the Lieb-Robinson bound \cite{r27}. Figure 5 shows the numerically-computed behavior of the transfer probability $p_{2N}$ versus the qubit distance $2N$ for the three values of the exponent $\rho=1$, $1.1$ and $1.3$. Parameter values are as in Fig.3, except that at each value of $2N$ all the time constants are scaled by the factor $\sim (2N)^{\rho}$ while $\epsilon$ and $\delta_0$ are scaled by the factor $\sim 1/N$. The results clearly indicate that, for an interaction time $T$ that increases slightly more than linearly with the size $2N$ of the chain (curve with $\rho=1.3$), the transfer probability remains larger than $ 98 \%$ over the entire range from $2N=20$ to $2N=80$.
\begin{figure}
\caption{\label{<label name>}}
\end{figure}
\begin{figure}
\caption{\label{<label name>}}
\end{figure}
\begin{figure}
\caption{\label{<label name>}}
\end{figure}
\section{Effect of disorder on quantum state transfer: comparison between Rabi and Landau-Zener protocols}
The main advantage of the LZ topological QST method over Rabi-flopping schemes \cite{r25bis,r27,r27bis} is its robustness against disorder and structural imperfections of the chain, thus fully harnessing the topological protection of the edge states. In addition, since the LZ transition is rather insensitive to the precise value of the energy splitting of the edge states, the robustness of the LZ QST protocol persists even for disorder that breaks the chiral symmetry of the SSH lattice. We checked that the topological LZ QST scheme is more robust than the adiabatic Rabi flopping scheme by a statistical analysis of the effects of both off-diagonal and on-diagonal disorder on the transfer probability $p_{2N}=|c_{2N}(T)|^2$ in the protocol schemes defined by Eq.(13) (adiabatic Rabi scheme) and Eq.(15) (LZ scheme). The disorder is introduced by considering the modified Hamiltonian $\mathcal{H}+\delta \mathcal{H}$, where $\mathcal{H}$ is the Hamiltonian of the ordered chain given by Eq.(4) and $\delta \mathcal{H}$ accounts for either off-diagonal or on-diagonal disorder. For the sake of simplicity, structural off-diagonal disorder is emulated by introducing random fluctuations of the (static) inter-dimer hopping rate $t_1$ only, around the mean value 1, i.e. we assume
\begin{equation}
\mathcal{\delta H}= \left(
\begin{array}{cccccccccc}
0 & 0 & 0 & 0 & 0 & ...& 0 & 0 & 0 &0\\
0 & 0 & \sigma_1 & 0 & 0 & ... & 0 & 0 & 0 & 0 \\
0 & \sigma_1 & 0 & 0 & 0 & ... & 0 & 0 & 0 & 0\\
... & ... & ...& ...& ... & ...& ...& ...& ...& ... \\
0 & 0& 0 & 0& 0 & ... & 0 & 0 &\sigma_{N-1} & 0 \\
0 & 0& 0 & 0& 0 & ... & 0 & \sigma_{N-1} &0 & 0 \\
0 & 0& 0 & 0& 0 & ... & 0 & 0 & 0 & 0 \\
\end{array}
\right)
\end{equation}
where $\sigma_n$ is a random variable with uniform distribution in the range $(-\sigma,\sigma)$ and $\sigma$ is a measure of the off-diagonal disorder strength. However, we do not expect substantial qualitative changes of the results when disorder in the intra-dimer hopping rate $t_2$ is considered as well, since the main detrimental effect of disorder in the SSH lattice is known to arise near the gap-closing condition $t_2/t_1=1$, which breaks the topological protection of the edge states. Structural on-diagonal disorder is emulated by considering the diagonal Hamiltonian
\begin{equation}
\mathcal{\delta H}= \left(
\begin{array}{cccccccccc}
\delta E_1 & 0 & 0 & 0 & 0 & ...& 0 & 0 & 0 &0\\
0 & \delta E_2 & 0 & 0 & 0 & ... & 0 & 0 & 0 & 0 \\
0 & 0 & \delta E_3 & 0 & 0 & ... & 0 & 0 & 0 & 0\\
... & ... & ...& ...& ... & ...& ...& ...& ...& ... \\
0 & 0& 0 & 0& 0 & ... & 0 & \delta E_{2N-2} &0 & 0 \\
0 & 0& 0 & 0& 0 & ... & 0 & 0 & \delta E_{2N-1}& 0 \\
0 & 0& 0 & 0& 0 & ... & 0 & 0 & 0 & \delta E_{2N} \\
\end{array}
\right)
\end{equation}
where $\delta E_n$ is a random variable with uniform distribution in the range $(-\delta E,\delta E)$ and $\delta E$ measures the strength of the on-diagonal (site-energy) disorder. The statistical analysis has been performed by numerical computation of the transfer excitation probability $p_{2N}$, using the exact Schr\"odinger equation (3), for 10000 realizations of disorder. For the adiabatic Rabi protocol, the parameter values used in the simulations are $\epsilon=0.1$ and $T=86$, corresponding to $p_{2N} \simeq 0.995$ in the absence of disorder [see Fig.2(c)]. Figure 6 shows the statistical distribution $F(p_{2N})$ of $p_{2N}$ in the presence of diagonal [Fig.6(a)] and off-diagonal [Fig.6(b)] disorder of moderate strength ($20\%$ in units of the hopping rate $t_1$). The normalization condition $\int_0^1 dp_{2N} F(p_{2N})=1$ is assumed for the statistical density distribution function $F$. For both diagonal and off-diagonal disorder, $F$ shows a long tail departing from $p_{2N}=1$, indicating that the fidelity of the QST is heavily degraded by structural disorder in the chain, especially in the case of diagonal disorder. Such results should be compared with those shown in Fig.7, which refer to the impact of disorder of the same strength on the topological LZ protocol. In this case the parameter values used in the simulations are those of Fig.3(c) [$\epsilon=0.1$, $\delta_0=0.2$, $\tau=60$, $\tau_Z=120$], corresponding to $p_{2N} \simeq 0.995$ in the absence of disorder. Clearly, in this case the statistical distribution $F$ is much more squeezed toward $p_{2N}=1$, with negligible tails below $p_{2N}=0.9$, indicating that the fidelity of the state transfer is not appreciably degraded even in the presence of moderate disorder in the chain. An inspection of Figs. 6 and 7 shows that diagonal (on-site) disorder is more detrimental than off-diagonal disorder. What happens if the disorder strength is increased further? 
Clearly, as the strength of the disorder is increased, the transfer probability is degraded in both protocols; however, the largest strength of on-diagonal disorder tolerated by the LZ protocol is much larger than that tolerated by the Rabi protocol. This is shown in Fig.8, where we compare the statistical distribution $F(p_{2N})$ of the transfer probability $p_{2N}$ for the two protocols for a few increasing values of the diagonal disorder strength $\delta E$. Clearly, even for extremely strong disorder of the on-site potential, larger than the staggered magnetic field amplitude $\delta_0$, the LZ protocol shows strong robustness against disorder, while the Rabi protocol becomes fully unreliable (compare upper and lower panels in Fig.8). This result can be physically explained as follows. In the Rabi protocol, the on-site disorder changes the energy splitting of the edge states in a rather random fashion, so that for a fixed interaction time $T$ the excitation transfer between the two edge sites undergoes large fluctuations, because the area on the left-hand side of Eq.(14) can greatly deviate from the target value $\pi/2$. In the LZ protocol, the splitting of the edge states undergoes the same random fluctuations, depending on the precise realization of disorder; however, the transfer probability is now much less sensitive to these fluctuations provided that they remain smaller than the amplitude $\delta_0$ of the staggered magnetic field: in this case the ramp of the magnetic field in stage II of Fig.3(a) will always bring the two edge states into resonance and LZ tunneling will occur.
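The two disorder models of Eqs.(18) and (19) are simple to generate numerically. The Python sketch below (our own helpers; the seed and disorder strengths are illustrative) draws one realization of each. As a sanity check of the symmetry statements above, the off-diagonal (hopping) disorder anticommutes with the sublattice operator $\Gamma=\mathrm{diag}(1,-1,1,\dots)$, i.e. it preserves chiral symmetry, whereas the on-diagonal (site-energy) disorder commutes with $\Gamma$ and therefore breaks it.

```python
import numpy as np

rng = np.random.default_rng(1)

def off_diag_disorder(N, sigma):
    """delta-H of Eq. (18): random shifts of the inter-dimer hopping t1
    on bonds (2,3), (4,5), ..., (2N-2, 2N-1) in 1-based site labels."""
    dH = np.zeros((2 * N, 2 * N))
    s = rng.uniform(-sigma, sigma, N - 1)
    for k in range(N - 1):
        n = 2 * k + 1                    # 0-based index of site 2k+2
        dH[n, n + 1] = dH[n + 1, n] = s[k]
    return dH

def on_diag_disorder(N, dE):
    """delta-H of Eq. (19): i.i.d. site-energy shifts in (-dE, dE)."""
    return np.diag(rng.uniform(-dE, dE, 2 * N))

def sublattice_op(N):
    """Chiral (sublattice) operator Gamma = diag(1, -1, 1, -1, ...)."""
    return np.diag((-1.0) ** np.arange(2 * N))
```

A full reproduction of Figs.6-8 would amount to adding one such $\delta\mathcal{H}$ to the instantaneous Hamiltonian in the protocol integrators of Secs.2-3 and histogramming $p_{2N}$ over many realizations.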
\section{Conclusions}
In recent years, topological protection has emerged as a promising route for guiding and transmitting quantum information reliably. Adiabatic (Thouless) pumping of topological states offers some topological protection of quantum state transfer against sizable imperfections in the system \cite{r12bis,r23,r28}. However, the existence of topological states in a network does not itself ensure that any QST protocol fully exploits the topological protection of states. For example, some recent QST methods based on static or adiabatic Rabi flopping of edge states \cite{r25bis,r27,r27bis} turn out to be sensitive to structural imperfections of the network and thus they require special disorder-dependent timing for the realization of high-fidelity QST.
In this work we introduced a novel scheme for robust QST of topologically protected edge states in a dimeric Su-Schrieffer-Heeger spin chain, assisted by Landau-Zener tunneling. As compared to topological QST protocols based on Rabi flopping, our scheme is more advantageous in terms of robustness against both diagonal and off-diagonal disorder in the chain, without a substantial increase of the interaction time.
Our model could be of potential relevance for experimental implementation using current technology in different setups: possible candidates are chains of superconducting qubits or optical waveguide lattices. The underlying concepts of our protocol also suggest that topological protection could be exploited in more complicated quantum information tasks, as, for instance, entanglement transfer in structured networks or reservoir engineering.\\
\\
{\bf{Acknowledgments.}} G.L.G. acknowledges financial support from the ``Consellaria d'Innovació, Recerca i Turisme del Govern de les Illes Balears''. S.L. acknowledges hospitality from IFISC-UIB (Palma de Mallorca) under the ``professors convidats'' program. This work was supported by
MINECO/AEI/FEDER through project EPheQuCS FIS2016-78010-P.\\
\\
{\bf{Conflict of Interests.}} The authors declare no conflict of interest.\\
\\
{\bf{Keywords.}} Quantum state transfer, spin chains, topological protection.\\
\appendix
\renewcommand{A.\arabic{equation}}{A.\arabic{equation}}
\setcounter{equation}{0}
\section{Reduced two-level model of state transfer dynamics}
In this Appendix we briefly derive the approximate two-level model describing excitation transfer between the left $|L \rangle$ and right $|R \rangle$ topological edge states of the SSH chain. The two edge states are defined by Eqs.(5) and (6) of the main text. For a matrix Hamiltonian $\mathcal{H}$ [Eq.(4)] with constant parameters $t_1$, $t_2$ and $\delta$, it can be readily shown that, in the $N \rightarrow \infty$ limit, the $|L \rangle$ and $|R \rangle$ states are eigenstates of $\mathcal{H}$ with eigen-energies $\delta$ and $-\delta$, respectively, i.e. $\mathcal{H}|L \rangle=\delta |L \rangle$ and $\mathcal{H}|R \rangle=-\delta |R \rangle$. An approximate description of the excitation transfer protocols, which captures the main qualitative features of the process, can be gained by making the rather crude assumption that the dynamics occurs in the subspace spanned by the instantaneous eigenvectors $|L \rangle$ and $|R \rangle$ of $\mathcal{H}(t)$ (two-level approximation). Such an assumption is reasonable provided that (i) the initial excitation state $|\psi(0) \rangle$ lies in the two-level subspace (in our case, since $c_n(0)=\delta_{n,1}$, this means $r(0) \equiv t_2(0)/t_1(0) \ll 1$); (ii) the time variation of the parameters $t_1$, $t_2$ and $\delta$ is sufficiently slow to neglect non-adiabatic effects; and (iii) at each time, the instantaneous localization length $\Lambda$ of the edge modes [Eq.(7)] remains smaller than the chain size $N$. We stress that we use the two-level approximation in order to capture the main qualitative features of the transfer dynamics; such a rather crude approximation may fail to provide an exact quantitative analysis of the dynamics, such as the optimal transfer time $T$ and the fidelity, which should be computed by numerically solving the Schr\"odinger equation (3) in the full Hilbert space. In particular, the two-level approximation is expected to become less accurate when $r$ gets close to one, i.e. 
near the gap-closing regime, owing to non-adiabatic excitation of bulk states. In the spirit of the two-level approximation, we make the Ansatz
\begin{equation}
| \psi(t) \rangle \simeq a_L(t) |L \rangle+ a_R(t) |R \rangle
\end{equation}
where $a_L(t)$ and $a_R(t)$ are the occupation amplitudes of the two edge states at time $t$. The evolution equations of $a_{L,R}(t)$ are obtained by substituting the Ansatz (A.1) into the Schr\"odinger equation $i \, d | \psi \rangle /dt= \mathcal{H} | \psi(t) \rangle$ and projecting the resulting equation onto $\langle L|$ and $\langle R|$. Taking into account that
\[
\langle L| R \rangle=\langle L | (d R/dt) \rangle= \langle R | (d L /dt) \rangle=0 \]
and $\langle L | dL/dt \rangle=\langle R | dR/dt \rangle \neq 0$, after gauging out an inessential phase term one obtains
\begin{eqnarray}
i \frac{da_L}{dt} & = & \langle L | \mathcal{H} |L \rangle a_L+\langle L | \mathcal{H} |R \rangle a_R = \delta a_L+\kappa a_R \\
i \frac{da_R}{dt} & = & \langle R | \mathcal{H} |L \rangle a_L+\langle R | \mathcal{H} |R \rangle a_R = - \delta a_R+\kappa a_L
\end{eqnarray}
where $\kappa$ is given by
\begin{equation}
\kappa \equiv \langle L | \mathcal{H} |R \rangle=\langle R | \mathcal{H} |L \rangle=\frac{t_1 \left( t_2/t_1\right)^{N} \left[ (t_2/t_1)^2-1\right]}{(t_2/t_1)^{2N}-1}.
\end{equation}
\end{document}
\begin{document}
\begin{abstract}
We describe a canonical spanning tree of the ridge graph of a subword complex on a finite Coxeter group. It is based on properties of greedy facets in subword complexes, defined and studied in this paper. Searching this tree yields an enumeration scheme for the facets of the subword complex. This algorithm extends the greedy flip algorithm for pointed pseudotriangulations of points or convex bodies in the plane.
\end{abstract}
\vspace*{-1cm}
\title{The greedy flip tree of a subword complex}
\vspace*{-.5cm}
\section{Introduction}
Subword complexes on Coxeter groups were defined and studied by A.~Knutson and E.~Miller in the context of Gr\"obner geometry in Schubert varieties~\cite{KnutsonMiller-subwordComplex,KnutsonMiller-GroebnerGeometry}. Type~$A$ spherical subword complexes can be visually interpreted using pseudoline arrangements on primitive sorting networks. These were studied by V.~Pilaud and M.~Pocchiola \cite{PilaudPocchiola} as combinatorial models for pointed pseudotriangulations of planar point sets~\cite{RoteSantosStreinu-survey} and for multitriangulations of convex polygons~\cite{PilaudSantos-multitriangulations}. These two families of geometric graphs extend in two different ways the family of triangulations of a convex polygon.
The greedy flip algorithm was initially designed to generate all pointed pseudotriangulations of a given set of points or convex bodies in general position in the plane~\cite{PocchiolaVegter, BronnimannKettnerPocchiolaSnoeying}. It was then extended in~\cite{PilaudPocchiola} to generate all pseudoline arrangements supported by a given primitive sorting network. The goal of this paper is to generalize the greedy flip algorithm to any subword complex on any finite Coxeter system. Based on combinatorial properties of greedy facets, we construct the greedy flip tree of a subword complex, which spans its ridge graph. This tree can be visited in polynomial time per node and polynomial working space to generate all facets of the subword complex. For type~$A$ spherical subword complexes, the resulting algorithm is that of~\cite{PilaudPocchiola}, although the presentation is quite different.
The paper is organized as follows. In Section~\ref{sec:subwordComplexes}, we recall some notions on finite Coxeter systems and subword complexes. Our main results appear in Section~\ref{sec:greedyFlipTree} where we define the greedy facet of a subword complex, construct the greedy flip tree, and describe the greedy flip algorithm.
\section{Subword complexes on Coxeter groups}
\label{sec:subwordComplexes}
\subsection{Coxeter systems}
\label{subsec:CoxeterSystems}
We recall some basic notions on Coxeter systems needed in this paper. More background material can be found in~\cite{Humphreys}.
Let~$V$ be an $n$-dimensional Euclidean vector space. For~$v \in V \smallsetminus 0$, we denote by~$s_v$ the reflection interchanging~$v$ and~$-v$ while fixing pointwise the orthogonal hyperplane. We consider a \defn{finite Coxeter group}~$W$ acting on~$V$, \textit{i.e.}~ a finite group generated by orthogonal reflections of~$V$. We assume without loss of generality that the intersection of all reflecting hyperplanes of~$W$ is reduced to~$0$.
Computations in~$W$ are simplified by root systems. A \defn{root system} for~$W$ is a set~$\Phi$ of vectors stable by~$W$ and containing precisely two opposite vectors orthogonal to each reflection hyperplane of~$W$. Fix a linear functional~$f:V \to \mathbb{R}$ such that $f(\beta) \ne 0$ for all~$\beta \in \Phi$. It splits the root system~$\Phi$ into the set of \defn{positive roots}~$\Phi^+ \eqdef \set{\beta \in \Phi}{f(\beta)>0}$ and the set of \defn{negative roots}~$\Phi^- \eqdef -\Phi^+$. The \defn{simple roots} are the roots which lie on the extremal rays of the cone generated by~$\Phi^+$. They form a basis~$\Delta$ of the vector space~$V$. The \defn{simple reflections}~$S \eqdef \set{s_\alpha}{\alpha \in \Delta}$ generate the Coxeter group~$W$. The pair~$(W,S)$ is called a \defn{finite Coxeter system}. For~$s \in S$, we let~$\alpha_s$ be the simple root orthogonal to the reflecting hyperplane of~$s$.
The \defn{length} of an element~$w \in W$ is the length~$\ell(w)$ of the smallest expression of~$w$ as a product of the generators in~$S$. It is also known to be the cardinality of the \defn{inversion set} of~$w$, defined as the set~$\inv(w) \eqdef \Phi^+ \cap w^{-1}(\Phi^-)$ of positive roots sent to negative roots by~$w$. An expression~$w = s_1 \cdots s_p$, with $s_1, \dots, s_p \in S$, is \defn{reduced} if~$p = \ell(w)$. The \defn{Demazure product} on the Coxeter system~$(W,S)$ is the function~$\delta$ from the words on~$S$ to~$W$ defined inductively by
$$\delta(\varepsilon) = e \quad\text{and}\quad \delta(\sq{Q}s) = \begin{cases} \delta(\sq{Q})s & \text{if } \ell(\delta(\sq{Q})s) > \ell(\delta(\sq{Q})) \\ \delta(\sq{Q}) & \text{if } \ell(\delta(\sq{Q})s) < \ell(\delta(\sq{Q})) \end{cases},$$
where $\varepsilon$ is the empty word and $e$ is the identity of~$W$.
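In type~$A$, the Demazure product can be computed directly on one-line notations of permutations, using the fact that right multiplication by~$\tau_i$ exchanges the entries in positions~$i$ and~$i+1$, and that the length of a permutation is its number of inversions. A minimal sketch (function names are ours):

```python
from itertools import combinations

def length(w):
    # Coxeter length of a permutation in one-line notation = number of inversions
    return sum(1 for a, b in combinations(w, 2) if a > b)

def right_mult(w, i):
    # w * tau_i: exchange the entries in positions i and i+1 (1-indexed)
    w = list(w)
    w[i - 1], w[i] = w[i], w[i - 1]
    return tuple(w)

def demazure(word, n):
    # Demazure product of a word on the generators tau_1, ..., tau_n of S_{n+1}:
    # absorb a letter only when it increases the length, drop it otherwise
    w = tuple(range(1, n + 2))  # identity permutation
    for i in word:
        ws = right_mult(w, i)
        if length(ws) > length(w):
            w = ws
    return w
```

For instance, $\delta(\tau_1\tau_1) = \tau_1$ in~$\mathfrak{S}_2$, since the second letter would decrease the length.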
\begin{example}[Type~$A$ --- Symmetric groups]
The symmetric group~$\mathfrak{S}_{n+1}$, acting on the linear hyperplane $1\!\!1^\perp \eqdef \set{x \in \mathbb{R}^{n+1}}{\dotprod{1\!\!1}{x} = 0}$ by permutation of the coordinates, is the reflection group of \defn{type~$A_n$}. It is the group of isometries of the standard $n$-dimensional regular simplex $\conv \{e_1,\dots,e_{n+1}\}$. See \fref{fig:coxeterArrangement} (left). Its reflections are the transpositions of~$\mathfrak{S}_{n+1}$ and the set~$\set{e_i-e_j}{i,j \in [n+1]}$ is a root system for~$A_n$. We can choose the linear functional~$f$ such that the simple reflections are the adjacent transpositions~$\tau_i \eqdef (i\;\;i+1)$, for~$i \in [n]$, and the simple roots are the vectors~$e_{i+1}-e_i$, for~$i \in [n]$.
\end{example}
\begin{figure}
\caption{The $A_3$-, $B_3$-, and $H_3$-arrangements.}
\label{fig:coxeterArrangement}
\end{figure}
\begin{example}[Type~$B$ --- Hyperoctahedral groups]
The semidirect product of the symmetric group~$\mathfrak{S}_n$ (acting on~$\mathbb{R}^n$ by permutation of the coordinates) with the group~$(\mathbb{Z}_2)^n$ (acting on~$\mathbb{R}^n$ by sign change) is the reflection group of \defn{type~$B_n$}. It is the isometry group of the \mbox{$n$-dimen}\-sional regular cross-polytope~$\conv \{\pm e_1,\dots,\pm e_n\}$ and of its polar $n$-dimensional regular cube~$[-1,1]^n$. See \fref{fig:coxeterArrangement} (middle). Its reflections are the transpositions of~$\mathfrak{S}_n$ and the changes of one single sign. The set $\set{\pm e_p \pm e_q}{p<q \in [n]} \cup \set{\pm e_p}{p \in [n]}$ is a root system for~$B_n$. We can choose the linear functional~$f$ such that the simple reflections are the adjacent transpositions $\tau_i \eqdef (i\;\;i+1)$, for~$i \in [n-1]$, together with the change~$\chi$ of the first sign, and thus the simple roots are the vectors $e_{i+1}-e_i$, for~$i \in [n-1]$, together with~the~vector~$e_1$.
\end{example}
\begin{example}[Type~$H_3$ --- Icosahedral group]
The isometry group of the regular icosahedron (and of its polar dodecahedron) is a Coxeter group. See \fref{fig:coxeterArrangement}~(right).
\end{example}
\subsection{The subword complex}
\label{subsec:subwordComplex}
Consider a finite Coxeter system~$(W,S)$, a word $\sq{Q} \eqdef q_1q_2 \cdots q_m$ on the generators of~$S$, and an element~$\rho \in W$. A.~Knutson and E.~Miller~\cite{KnutsonMiller-subwordComplex} define the \defn{subword complex}~$\subwordComplex$ to be the simplicial complex of subwords of~$\sq{Q}$ whose complements contain a reduced expression for~$\rho$ as a subword. A vertex of~$\subwordComplex$ is a position in~$\sq{Q}$. We denote by~$[m] \eqdef \{1,2,\dots,m\}$ the set of positions in~$\sq{Q}$. A facet of~$\subwordComplex$ is the complement of a set of positions which forms a reduced expression for~$\rho$ in~$\sq{Q}$. We denote by~$\facets$ the set of facets of~$\subwordComplex$. We write~$\rho \prec \sq{Q}$ when~$\sq{Q}$ contains a reduced expression of~$\rho$, \textit{i.e.}~ when~$\subwordComplex$ is non-empty.
\begin{example}
\label{exm:toto}
Consider the type~$A$ Coxeter group~$\mathfrak{S}_4$ generated by~$\set{\tau_i}{i \in [3]}$. Let~$\bar{\sq{Q}} \eqdef \tau_2\tau_3\tau_1\tau_3\tau_2\tau_1\tau_2\tau_3\tau_1$ and~$\bar\rho \eqdef [4,1,3,2]$. The reduced expressions of~$\bar\rho$ are $\tau_2\tau_3\tau_2\tau_1$, $\tau_3\tau_2\tau_3\tau_1$, and $\tau_3\tau_2\tau_1\tau_3$. Thus, the facets of the subword complex~$\subwordComplex[\bar{\sq{Q}},\bar\rho]$ are $\{1, 2, 3, 5, 6\}$, $\{1, 2, 3, 6, 7\}$, $\{1, 2, 3, 7, 9\}$, $\{1, 3, 4, 5, 6\}$, $\{1, 3, 4, 6, 7\}$, $\{1, 3, 4, 7, 9\}$, $\{2, 3, 5, 6, 8\}$, $\{2, 3, 6, 7, 8\}$, $\{2, 3, 7, 8, 9\}$, $\{3, 4, 5, 6, 8\}$, $\{3, 4, 6, 7, 8\}$, and $\{3, 4, 7, 8, 9\}$.
We set~$\bar I \eqdef \{1,3,4,7,9\}$ and~$\bar J \eqdef \{3,4,7,8,9\}$. We will use this as a running example throughout the paper to illustrate further notions.
\end{example}
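The facet list of Example~\ref{exm:toto} can be reproduced by brute force directly from the definition: enumerate all sets of~$\ell(\bar\rho) = 4$ positions whose letters multiply to~$\bar\rho$ (such a subword is automatically reduced), and take complements. A minimal sketch in type~$A$, with permutations in one-line notation (function names are ours):

```python
from itertools import combinations

def length(w):
    # number of inversions of a permutation in one-line notation
    return sum(1 for a, b in combinations(w, 2) if a > b)

def right_mult(w, i):
    # w * tau_i: exchange the entries in positions i and i+1 (1-indexed)
    w = list(w)
    w[i - 1], w[i] = w[i], w[i - 1]
    return tuple(w)

def facets_brute_force(Q, rho):
    # A facet is the complement of a set of positions whose letters form a
    # reduced expression of rho; a subword of length l(rho) whose product
    # equals rho is automatically reduced.
    m, l = len(Q), length(rho)
    identity = tuple(range(1, len(rho) + 1))
    result = []
    for positions in combinations(range(1, m + 1), l):
        w = identity
        for p in positions:
            w = right_mult(w, Q[p - 1])
        if w == rho:
            result.append(frozenset(range(1, m + 1)) - frozenset(positions))
    return result
```

For~$\bar{\sq{Q}}$ and~$\bar\rho$ as above, this recovers the twelve facets listed in Example~\ref{exm:toto}.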
\begin{example}[Type~$A$ --- Primitive networks]
\label{exm:typeA}
\begin{figure}
\caption{The network~$\mathcal{N}_\sq{Q}$ (left) and a pseudoline arrangement supported by~$\mathcal{N}_\sq{Q}$ (right).}
\label{fig:network}
\end{figure}
For type~$A$ Coxeter systems, subword complexes can be visually interpreted using primitive networks. A \defn{network}~$\mathcal{N}$ is a collection of~$n+1$ horizontal lines (called \defn{levels}, and labeled from bottom to top), together with~$m$ vertical segments (called \defn{commutators}, and labeled from left to right) joining two different levels and such that no two of them have a common endpoint. We only consider \defn{primitive} networks, where any commutator joins two consecutive levels. See \fref{fig:network} (left).
A \defn{pseudoline} supported by the network~$\mathcal{N}$ is an abscissa monotone path on~$\mathcal{N}$. A commutator of~$\mathcal{N}$ is a \defn{crossing} between two pseudolines if it is traversed by both pseudolines, and a \defn{contact} if its endpoints are contained one in each pseudoline. A \defn{pseudoline arrangement}~$\Lambda$ is a set of~$n+1$ pseudolines on~$\mathcal{N}$, any two of which have at most one crossing, possibly some contacts, and no other intersection. We label the pseudolines of~$\Lambda$ from bottom to top on the left of the network, and we define~$\pi(\Lambda) \in \mathfrak{S}_{n+1}$ to be the permutation given by the order of these pseudolines on the right of the network. Note that the crossings of~$\Lambda$ correspond to the inversions of~$\pi(\Lambda)$. See \fref{fig:network} (right).
Consider the type~$A$ Coxeter group~$\mathfrak{S}_{n+1}$ generated by~$S = \set{\tau_i}{i \in [n]}$, where~$\tau_i$ is the adjacent transposition~$(i\;\;i+1)$. To a word~$\sq{Q} \eqdef q_1q_2 \cdots q_m$ with~$m$ letters on~$S$, we associate a primitive network~$\mathcal{N}_\sq{Q}$ with~$n+1$ levels and~$m$ commutators. If~$q_j = \tau_p$, the $j$\textsuperscript{th}{} commutator of~$\mathcal{N}_\sq{Q}$ is located between the $p$\textsuperscript{th}{} and $(p+1)$\textsuperscript{th}{} levels of~$\mathcal{N}_\sq{Q}$. See \fref{fig:network} (left). For~$\rho \in \mathfrak{S}_{n+1}$, a facet~$I$ of~$\subwordComplex$ corresponds to a pseudoline arrangement~$\Lambda_I$ supported by~$\mathcal{N}_\sq{Q}$ and with~$\pi(\Lambda_I) = \rho$. The positions of the contacts (resp.~crossings) of~$\Lambda_I$ correspond to the elements of~$I$ (resp.~of the complement of~$I$). See \fref{fig:network} (right).
\end{example}
\begin{example}[Combinatorial models for geometric graphs]
\label{exm:geometricGraphs}
As pointed out in~\cite{PilaudPocchiola}, pseudoline arrangements on primitive networks give combinatorial models for the following families of geometric graphs (see \fref{fig:geometricGraphs}):
\begin{enumerate}[(i)]
\item triangulations of convex polygons;
\item multitriangulations of convex polygons~\cite{PilaudSantos-multitriangulations};
\item pointed pseudotriangulations of points in general position in the plane~\cite{RoteSantosStreinu-survey};
\item pseudotriangulations of disjoint convex bodies in the plane~\cite{PocchiolaVegter}.
\end{enumerate}
For example, consider a triangulation~$T$ of a convex~$(n+3)$-gon. Define the direction of a line of the plane to be the angle~$\theta \in [0,\pi)$ of this line with the horizontal axis. Define also a bisector of a triangle~$\triangle$ to be a line passing through a vertex of~$\triangle$ and separating the other two vertices of~$\triangle$. For any direction~$\theta \in [0,\pi)$, each triangle of~$T$ has precisely one bisector in direction~$\theta$. We can thus order the~$n+1$ triangles of~$T$ according to the order~$\pi_\theta$ of their bisectors in direction~$\theta$. The pseudoline arrangement associated to~$T$ is then given by the evolution of the order~$\pi_\theta$ when the direction~$\theta$ describes the interval~$[0,\pi)$. A similar duality holds for the other three families of graphs, replacing triangles by the natural cells decomposing the geometric graph (stars for multitriangulations~\cite{PilaudSantos-multitriangulations}, or pseudotriangles for pseudotriangulations~\cite{RoteSantosStreinu-survey}). See \fref{fig:geometricGraphs} for an illustration. Details can be found in~\cite{PilaudPocchiola}.
\begin{figure}
\caption{Primitive sorting networks are combinatorial models for certain families of geometric graphs.}
\label{fig:geometricGraphs}
\end{figure}
\end{example}
\begin{example}[Type~$B$ --- Symmetric primitive networks]
Consider the type~$B$ Coxeter group~$\mathfrak{S}_n \rtimes (\mathbb{Z}_2)^n$ acting on~$\mathbb{R}^n$ and generated by~${S = \set{\tau_i}{i \in [n-1]} \cup \chi}$, where~$\tau_i$ exchanges the $i$\textsuperscript{th}{} and~$(i+1)$\textsuperscript{th}{} coordinates, and~$\chi$ changes the sign of the first coordinate. To a word~$\sq{Q} \eqdef q_1q_2 \cdots q_m$ on~$S$ with~$m$ letters and~$x$ occurrences of~$\chi$, we associate a primitive network~$\mathcal{N}_\sq{Q}$ with~$2n$ levels and~$2m-x$ commutators, which is symmetric with respect to the horizontal axis. The levels of~$\mathcal{N}_\sq{Q}$ are labeled by $-n,\dots,-1,1,\dots,n$ from bottom to top. An occurrence of~$\tau_i$ is replaced by a pair of symmetric commutators between~$-i-1$ and~$-i$ and between~$i$ and~$i+1$, and an occurrence of~$\chi$ is replaced by a commutator between~$-1$ and~$1$. A facet~$I$ of the subword complex~$\subwordComplex$ is represented by a symmetric pseudoline arrangement~$\Lambda_I$ supported by~$\mathcal{N}_\sq{Q}$ whose contacts correspond to the positions in~$I$. If the pseudolines of~$\Lambda_I$ are labeled by~$-n,\dots,-1,1,\dots,n$ from bottom to top on the left of~$\mathcal{N}_\sq{Q}$, then their order on the right of~$\mathcal{N}_\sq{Q}$ is given by $-\rho(n),\dots,-\rho(1),\rho(1),\dots,\rho(n)$.
\end{example}
\begin{example}[Combinatorial models for centrally symmetric geometric graphs]
Type~$B$ subword complexes provide combinatorial models for the centrally symmetric versions of the geometric graphs of Example~\ref{exm:geometricGraphs}. Indeed, the central symmetry of a geometric graph translates into a horizontal symmetry on its dual pseudoline arrangement. See also the discussion in~\cite{CeballosLabbeStump}, in particular the dictionary in Table~2.
\end{example}
\subsection{Generating the subword complex}
\label{subsec:generationgSubwordComplex}
In this paper, we discuss the problem of exhaustively generating the set~$\facets$ of facets of the subword complex~$\subwordComplex$. We present in this section two immediate enumeration algorithms which illustrate relevant properties of the subword complex.
For the evaluation of the time and space complexity of the different enumeration algorithms, we consider as parameters the rank~$n$ of the Coxeter group~$W$, the size~$m$ of the word~$\sq{Q}$, and the length~$\ell$ of the element~$\rho$. None of these parameters can be considered to be constant a priori. For example, if we want to generate all triangulations of a convex $(n+3)$-gon (see Example~\ref{exm:geometricGraphs}), we consider a subword complex with a group~$W$ of rank~$n$, a word~$\sq{Q}$ of size~${n(n+3)/2}$, and an element~$\rho$ of length~${n(n+1)/2}$.
\paragraph{Inductive structure}
The first method to generate~$\facets$ relies on the inductive structure of the family of subword complexes. Throughout this paper, we denote by~$\sq{Q}_\vdash \eqdef q_2 \cdots q_m$ and~$\sq{Q}_\dashv \eqdef q_1 \cdots q_{m-1}$ the words on~$S$ obtained from~$\sq{Q} \eqdef q_1 \cdots q_m$ by deleting its first and last letters respectively. For a set~$\mathcal{X}$ of subsets of~$\mathbb{Z}$, we denote by~$\mathcal{X} \join z \eqdef z \join \mathcal{X} \eqdef \set{X \cup z}{X \in \mathcal{X}}$ the join of~$\mathcal{X}$ with some~$z \in \mathbb{Z}$. Moreover, let~$\shiftRight{\mathcal{X}} \eqdef \set{\shiftRight{X}}{X \in \mathcal{X}}$, where~$\shiftRight{X} \eqdef \set{x+1}{x \in X}$ denotes the right shift of the set~$X \in \mathcal{X}$. Remember that $\ell(\rho)$ denotes the length of~$\rho$ and that we write~$\rho \prec \sq{Q}$ when~$\sq{Q}$ contains a reduced expression of~$\rho$.
We can decompose inductively the facets of~$\facets$ according to whether or not they contain the last letter of~$\sq{Q}$:
\begin{equation}
\label{eq:inductionRight}
\facets =
\begin{cases}
\facets[\sq{Q}_\dashv,\rho q_m] & \text{if } \rho \not\prec \sq{Q}_\dashv, \\
\facets[\sq{Q}_\dashv, \rho] \join m & \text{if } \ell(\rho q_m) > \ell(\rho), \\
\facets[\sq{Q}_\dashv,\rho q_m] \, \sqcup \, \big( \facets[\sq{Q}_\dashv,\rho] \join m \big) & \text{otherwise}.
\end{cases}
\end{equation}
For later reference, let us also explicitly write the inductive decomposition of the facets of~$\facets$ according to whether or not they contain the first letter of~$\sq{Q}$:
\begin{equation}
\label{eq:inductionLeft}
\facets =
\begin{cases}
\shiftRight{\facets[\sq{Q}_\vdash,q_1\rho]} & \text{if } \rho \not\prec \sq{Q}_\vdash, \\
1 \join \shiftRight{\facets[\sq{Q}_\vdash,\rho]} & \text{if } \ell(q_1\rho) > \ell(\rho), \\
\shiftRight{\facets[\sq{Q}_\vdash,q_1\rho]} \, \sqcup \, \big( 1 \join \shiftRight{\facets[\sq{Q}_\vdash,\rho]} \big) & \text{otherwise}.
\end{cases}
\end{equation}
The inductive structure of~$\subwordComplex$ yields an inductive algorithm for the enumeration of~$\facets$, whose running time per facet is polynomial. More precisely, since all subword complexes which appear in the different cases of the induction formula~(\ref{eq:inductionRight}) are non-empty, and since the tests~$\rho \not\prec \sq{Q}_\dashv$ and~$\ell(\rho q_m) > \ell(\rho)$ can be performed in~$O(mn)$ time, the running time per facet of this inductive algorithm is in~$O(m^2n)$.
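The induction formula~(\ref{eq:inductionRight}) translates directly into a recursive enumeration. The sketch below is ours: it uses no memoization, so the containment tests $\rho \prec \sq{Q}_\dashv$ are recomputed naively rather than in the $O(mn)$ time mentioned above, and it guards against empty subword complexes before recursing:

```python
from itertools import combinations

def length(w):
    # number of inversions of a permutation in one-line notation
    return sum(1 for a, b in combinations(w, 2) if a > b)

def right_mult(w, i):
    # w * tau_i: exchange the entries in positions i and i+1 (1-indexed)
    w = list(w)
    w[i - 1], w[i] = w[i], w[i - 1]
    return tuple(w)

def contains_reduced(Q, rho):
    # rho ≺ Q: the last letter of a reduced expression either lies in Q_⊣,
    # or is q_m itself, in which case the length must drop
    if length(rho) == 0:
        return True
    if not Q:
        return False
    Qd, q = Q[:-1], Q[-1]
    if contains_reduced(Qd, rho):
        return True
    rq = right_mult(rho, q)
    return length(rq) < length(rho) and contains_reduced(Qd, rq)

def facets_inductive(Q, rho):
    # split the facets according to whether they contain the last position m
    if not contains_reduced(Q, rho):
        return []
    m = len(Q)
    if m == 0:
        return [frozenset()]
    Qd, q = Q[:-1], Q[-1]
    rq = right_mult(rho, q)
    if not contains_reduced(Qd, rho):
        return facets_inductive(Qd, rq)
    if length(rq) > length(rho):
        return [F | {m} for F in facets_inductive(Qd, rho)]
    return facets_inductive(Qd, rq) + [F | {m} for F in facets_inductive(Qd, rho)]
```

On the running example~$\subwordComplex[\bar{\sq{Q}}, \bar\rho]$, this produces the same twelve facets as the definition.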
The inductive structure of~$\subwordComplex$ is moreover useful for the following result.
\begin{theorem}[\cite{KnutsonMiller-subwordComplex}]
\label{theo:KnutsonMiller}
The subword complex~$\subwordComplex$ is a topological sphere if~$\rho$ is precisely the Demazure product~$\delta(\sq{Q})$ of~$\sq{Q}$, and a topological ball otherwise.
\end{theorem}
\paragraph{The flip graph}
The second direct method to generate~$\facets$ relies on flips. Let~$I$ be a facet of~$\subwordComplex$ and~$i$ be an element of~$I$. If there exists a facet~$J$ of $\subwordComplex$ and an element~$j \in J$ such that~$I \smallsetminus i = J \smallsetminus j$, we say that $I$~and~$J$ are \defn{adjacent} facets, that~$i$ is \defn{flippable} in~$I$, and that~$J$ is obtained from~$I$ by \defn{flipping}~$i$. Note that, if they exist,~$J$ and~$j$ are unique by Theorem~\ref{theo:KnutsonMiller}. We denote by~$\flipGraph$ the graph of flips, whose vertices are the facets of~$\subwordComplex$ and whose edges are pairs of adjacent facets. In other words, $\flipGraph$~is the ridge graph of the simplicial complex~$\subwordComplex$.
\begin{example}
\fref{fig:flipGraph} represents the flip graph~$\flipGraph[\bar{\sq{Q}},\bar\rho]$ for the subword complex~$\subwordComplex[\bar{\sq{Q}}, \bar\rho]$ of Example~\ref{exm:toto}.
\begin{figure}
\caption{The flip graph~$\flipGraph[\bar{\sq{Q}},\bar\rho]$.}
\label{fig:flipGraph}
\end{figure}
\end{example}
Since the flip graph~$\flipGraph$ is connected by Theorem~\ref{theo:KnutsonMiller}, we can explore it to generate~$\facets$. Since~$\flipGraph$ has degree bounded by~$m-\ell(\rho)$, we need~${O(m-\ell(\rho))}$ flips per facet for this exploration. However, we need to store all facets of~$\subwordComplex$ during the algorithm, which may require an exponential working space. This happens for example if we want to generate the~$\frac{1}{n+2}{2n+2 \choose n+1}$ triangulations of a convex $(n+3)$-gon (see Example~\ref{exm:geometricGraphs}).
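As an illustration of this exploration strategy on the subword complex~$\subwordComplex[\bar{\sq{Q}},\bar\rho]$ of Example~\ref{exm:toto}, the following sketch (ours) builds the ridge graph directly from the facet list, two facets being adjacent when they share all but one position, and checks connectedness by a breadth-first search that stores every visited facet, which is precisely the memory bottleneck discussed above:

```python
from collections import deque

# the twelve facets of the running example, as listed in the text
FACETS = [frozenset(f) for f in [
    {1, 2, 3, 5, 6}, {1, 2, 3, 6, 7}, {1, 2, 3, 7, 9}, {1, 3, 4, 5, 6},
    {1, 3, 4, 6, 7}, {1, 3, 4, 7, 9}, {2, 3, 5, 6, 8}, {2, 3, 6, 7, 8},
    {2, 3, 7, 8, 9}, {3, 4, 5, 6, 8}, {3, 4, 6, 7, 8}, {3, 4, 7, 8, 9},
]]

def ridge_graph(facets):
    # two facets are adjacent iff they differ in exactly one element
    return {I: [J for J in facets if J != I and len(I & J) == len(I) - 1]
            for I in facets}

def is_connected(graph):
    # breadth-first search from an arbitrary vertex, storing all visited facets
    start = next(iter(graph))
    seen, queue = {start}, deque([start])
    while queue:
        v = queue.popleft()
        for w in graph[v]:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return len(seen) == len(graph)
```

For this example, the graph is connected and every facet has degree at most $m - \ell(\rho) = 9 - 4 = 5$, as claimed.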
In this paper, we present the \emph{greedy flip algorithm} to generate the facets of the subword complex~$\subwordComplex$. This algorithm explores a spanning tree of the graph of flips~$\flipGraph$, which we call the \emph{greedy flip tree}. The construction of this tree is based on \emph{greedy facets} in subword complexes and on their inductive structure, similar to the inductive structure of the subword complexes described above. The running time per facet of the greedy flip algorithm is also in~$O(m^2n)$, while its working space is in~$O(mn)$. We compare experimental running times of the inductive algorithm and of the greedy flip algorithm later in Section~\ref{subsec:greedyFlipTree}.
\subsection{Roots and flips}
\label{subsec:roots&flips}
Throughout the paper, we consider a flip in a subword complex as an elementary operation to measure the complexity of our algorithm. In practice, the necessary information to perform flips in a facet~$I$ of~$\subwordComplex$ is encoded in its root function~$\mathbb{R}oot{I}{\cdot} : [m] \to \Phi$ defined by
$$\mathbb{R}oot{I}{k} \eqdef \sigma_{[k-1] \smallsetminus I}(\alpha_{q_k}),$$
where~$\sigma_X$ denotes the product of the reflections~$q_x$ for~$x \in X$. The \defn{root configuration} of the facet~$I$ is the multiset~$\mathbb{R}oots{I} \eqdef \multiset{\mathbb{R}oot{I}{i}}{i \text{ flippable in } I}$. The root function was introduced by C.~Ceballos, J.-P.~Labb\'e and C.~Stump~\cite{CeballosLabbeStump} and the root configuration was extensively studied by V.~Pilaud and C.~Stump~\cite{PilaudStump} in the construction of brick polytopes for spherical subword complexes. The main properties of the root function are summarized in the following proposition, whose proof is similar to that of~\cite[Lemmas~3.3 and 3.6]{CeballosLabbeStump} or~\cite[Lemma~3.3]{PilaudStump}.
\begin{proposition}
\label{prop:roots&flips}
Let~$I$ be any facet of the subword complex~$\subwordComplex$.
\begin{enumerate}
\item
\label{prop:roots&flips:inversions}
The map~$\mathbb{R}oot{I}{\cdot}:i \mapsto \mathbb{R}oot{I}{i}$ is a bijection from the complement of~$I$ to the inversion set of~$\rho^{-1}$.
\item
\label{prop:roots&flips:flippable}
The map~$\mathbb{R}oot{I}{\cdot}$ sends the flippable elements in~$I$ to~$\set{\pm \beta}{\beta \in \inv(\rho^{-1})}$ and the unflippable ones to~$\Phi^+ \smallsetminus \inv(\rho^{-1})$.
\item
\label{prop:roots&flips:flip}
If $I$~and~$J$ are two adjacent facets of~$\subwordComplex$ with~$I \smallsetminus i = J \smallsetminus j$, the position $j$ is the unique position in the complement of~$I$ for which~${\mathbb{R}oot{I}{j} = \pm\mathbb{R}oot{I}{i}}$. Moreover, $\mathbb{R}oot{I}{i} = \mathbb{R}oot{I}{j} \in \Phi^+$ when $i < j$, while $\mathbb{R}oot{I}{i} = -\mathbb{R}oot{I}{j} \in \Phi^-$ when $j < i$.
\item
\label{prop:roots&flips:update}
In the situation of (\ref{prop:roots&flips:flip}), the map~$\mathbb{R}oot{J}{\cdot}$ is obtained from the map~$\mathbb{R}oot{I}{\cdot}$ by:
$$\mathbb{R}oot{J}{k} = \begin{cases} s_{\mathbb{R}oot{I}{i}}(\mathbb{R}oot{I}{k}) & \text{if } \min(i,j) < k \le \max(i,j) \\ \mathbb{R}oot{I}{k} & \text{otherwise.} \end{cases}$$
\end{enumerate}
\end{proposition}
Observe that this proposition ensures in particular that we can perform flips in the subword complex~$\subwordComplex$ in~$O(mn)$ time if we store and update the facets of~$\subwordComplex$ together with their root functions. Note that this storage requires~$O(mn)$ space.
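In type~$A$, the root function and the flip operation of Proposition~\ref{prop:roots&flips} can be implemented directly, representing roots as vectors of~$\mathbb{R}^{n+1}$ on which the reflection~$\tau_p$ acts by exchanging coordinates~$p$ and~$p+1$. The sketch below is ours; for simplicity it recomputes roots from scratch instead of maintaining them under the update rule of part~(\ref{prop:roots&flips:update}):

```python
def root(Q, I, k):
    # Root(I, k) = sigma_{[k-1] \ I}(alpha_{q_k}) in type A, where
    # alpha_{tau_p} = e_{p+1} - e_p and tau_p exchanges coordinates p and p+1
    n1 = max(Q) + 1                  # number of coordinates, n + 1
    p = Q[k - 1]
    v = [0] * n1
    v[p], v[p - 1] = 1, -1           # alpha_{q_k}
    for x in range(k - 1, 0, -1):    # apply q_x for x in [k-1] \ I,
        if x not in I:               # rightmost reflection first
            q = Q[x - 1]
            v[q - 1], v[q] = v[q], v[q - 1]
    return tuple(v)

def flip(Q, I, i):
    # flip position i in the facet I: j is the unique position outside I
    # with Root(I, j) = ±Root(I, i), by Proposition (3)
    r = root(Q, I, i)
    neg = tuple(-c for c in r)
    for j in range(1, len(Q) + 1):
        if j not in I and root(Q, I, j) in (r, neg):
            return (set(I) - {i}) | {j}
    raise ValueError("position %d is not flippable in %s" % (i, sorted(I)))
```

On the running example, flipping~$1$ in~$\bar I = \{1,3,4,7,9\}$ yields~$\bar J = \{3,4,7,8,9\}$, and flipping~$8$ in~$\bar J$ returns~$\bar I$, illustrating that flips are involutive.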
\begin{figure}
\caption{The flip between the adjacent facets~$\bar I = \{1,3,4,7,9\}$ and~$\bar J = \{3,4,7,8,9\}$.}
\label{fig:flip}
\end{figure}
\begin{example}
In type~$A$ (and~$B$), roots and flips are easily read on the primitive network interpretation presented in Example~\ref{exm:typeA}. Consider a word~$\sq{Q}$ on~$\set{\tau_i}{i \in [n]}$, an element~$\rho \in \mathfrak{S}_{n+1}$, and a facet~$I$ of~$\subwordComplex$. For any~$k \in [m]$, the root~$\mathbb{R}oot{I}{k}$ is the difference~$e_t-e_b$ where~$t$ and~$b$ are the indices of the pseudolines of~$\Lambda_I$ which arrive respectively on the top and bottom endpoints of the $k$\textsuperscript{th}{} commutator of~$\mathcal{N}_\sq{Q}$. \fref{fig:flip} illustrates the properties of Proposition~\ref{prop:roots&flips} on the subword complex~$\subwordComplex[\bar{\sq{Q}}, \bar\rho]$ of Example~\ref{exm:toto}.
\end{example}
\section{The greedy flip tree}
\label{sec:greedyFlipTree}
\subsection{Increasing flips and greedy facets}
\label{subsec:greedyFacets}
Let~$I$ and~$J$ be two adjacent facets of~$\subwordComplex$ with~$I \smallsetminus i = J \smallsetminus j$. We say that the flip from~$I$ to~$J$ is \defn{increasing} if~$i<j$. This is equivalent to~$\mathbb{R}oot{I}{i} \in \Phi^+$ by Proposition~\ref{prop:roots&flips}. We now consider the flip graph $\flipGraph$ oriented by increasing flips.
\begin{proposition}
\label{prop:increasingFlipGraph}
The graph~$\flipGraph$ of increasing flips is acyclic. The lexicographically smallest (resp.~largest) facet of~$\subwordComplex$ is the unique source (resp.~sink) of~$\flipGraph$.
\end{proposition}
\begin{proof}
The graph~$\flipGraph$ is acyclic, since it is a subgraph of the Hasse diagram of the order defined by~$I \le J$ iff there is a bijection~$\phi:I \to J$ such that~$i \le \phi(i)$ for all~$i \in I$. The lexicographically smallest facet is a source of~$\flipGraph$ since none of its flips can be decreasing. We prove that this source is unique by induction on the word~$\sq{Q}$. Denote by~$X(\sq{Q}_\dashv, \rho)$ (resp.~$X(\sq{Q}_\dashv, \rho q_m)$) the lexicographically smallest facet of~$\subwordComplex[\sq{Q}_\dashv, \rho]$ (resp.~$\subwordComplex[\sq{Q}_\dashv, \rho q_m]$) and assume that it is the unique source of the flip graph~$\flipGraph[\sq{Q}_\dashv, \rho]$ (resp.~$\flipGraph[\sq{Q}_\dashv, \rho q_m]$). Consider a source~$X$ of~$\flipGraph$. We distinguish two cases:
\begin{itemize}
\item If~$\ell(\rho q_m) > \ell(\rho)$, then~$q_m$ cannot be the last reflection of a reduced expression for~$\rho$. Thus~$\subwordComplex = \subwordComplex[\sq{Q}_\dashv, \rho] \join m$ and~$X = X(\sq{Q}_\dashv, \rho) \cup m$.
\item Otherwise,~$\ell(\rho q_m) < \ell(\rho)$. If~$m$ is in~$X$, it is flippable (by Proposition~\ref{prop:roots&flips}, since~$\mathbb{R}oot{X}{m} = \rho(\alpha_{q_m}) \in \Phi^-\cap\rho(\Phi^+) = -\inv(\rho^{-1})$) and its flip is decreasing. This would contradict the assumption that~$X$ is a source of~$\flipGraph$. Consequently,~$m \notin X$. Since the facets of~$\subwordComplex$ which do not contain~$m$ coincide with the facets of~$\subwordComplex[\sq{Q}_\dashv, \rho q_m]$, we obtain that $X = X(\sq{Q}_\dashv,\rho q_m)$.
\end{itemize}
In both cases, we obtain that the source~$X$ is the lexicographically smallest facet of~$\subwordComplex$.
The proof is similar for the sink.
\end{proof}
We call \defn{positive} (resp.~\defn{negative}) \defn{greedy facet} and denote by~$\positiveGreedy$ (resp.~$\negativeGreedy$) the unique source (resp.~sink) of the graph~$\flipGraph$ of increasing flips. The term ``positive'' (resp.~``negative'') emphasizes the fact that~$\positiveGreedy$ (resp.~$\negativeGreedy$) is the unique facet of~$\subwordComplex$ whose root configuration is a subset of positive (resp.~negative) roots, while the term ``greedy'' refers to the greedy properties of these facets underlined in Lemmas~\ref{lem:greedy1} and~\ref{lem:greedy2}.
\begin{example}
The greedy facets of the subword complex~$\subwordComplex[\bar{\sq{Q}},\bar\rho]$ of Example~\ref{exm:toto} are~$\positiveGreedy[\bar{\sq{Q}},\bar\rho] = \{1,2,3,5,6\}$ and~$\negativeGreedy[\bar{\sq{Q}},\bar\rho] = \{3,4,7,8,9\}$. See \fref{fig:greedy}.
\begin{figure}
\caption{The positive and negative greedy facets of~$\subwordComplex[\bar{\sq{Q}},\bar\rho]$.}
\label{fig:greedy}
\end{figure}
\end{example}
The positive and negative greedy facets are clearly related by a reversing operation. More precisely, $\negativeGreedy[q_1 \cdots q_m,\rho] = \set{m+1-p}{p \in \positiveGreedy[q_m \cdots q_1,\rho^{-1}]}$. However, we will work in parallel with both positive and negative greedy facets, since certain results are simpler to state and prove with~$\positiveGreedy$, while others are simpler with~$\negativeGreedy$. In each proof, we only deal with the simplest situation and leave to the reader the translation to the opposite situation.
Remember that we denote by~$\sq{Q}_\vdash \eqdef q_2 \cdots q_m$ and~$\sq{Q}_\dashv \eqdef q_1 \cdots q_{m-1}$ the words on~$S$ obtained from~$\sq{Q} \eqdef q_1 \cdots q_m$ by deleting its first and last letters respectively. We moreover denote by~$\shiftRight{X} \eqdef \set{x+1}{x \in X}$ and~$\shiftLeft{X} \eqdef \set{x-1}{x \in X}$ the right and left shifts of a subset~$X \subset \mathbb{Z}$. If~$\mathcal{X}$ is a set of subsets of~$\mathbb{Z}$, we also write~$\shiftRight{\mathcal{X}} \eqdef \set{\shiftRight{X}}{X \in \mathcal{X}}$. Finally, remember that $\ell(\rho)$ denotes the length of~$\rho$, and that we write~${\rho \prec \sq{Q}}$ when~$\sq{Q}$ contains a reduced expression of~$\rho$.
The following two lemmas provide two (somehow inverse) greedy inductive procedures to construct the greedy facets~$\positiveGreedy$ and~$\negativeGreedy$. These lemmas are direct consequences of the definition of the greedy facets and the induction formulas~(\ref{eq:inductionRight}) and~(\ref{eq:inductionLeft}) on the subword complex.
\begin{lemma}
\label{lem:greedy1}
The greedy facets~$\positiveGreedy$ and~$\negativeGreedy$ can be constructed inductively from $\positiveGreedy[\varepsilon,e] = \negativeGreedy[\varepsilon,e] = \emptyset$ using the following formulas:
\begin{align*}
\positiveGreedy & = \begin{cases} \positiveGreedy[\sq{Q}_\dashv, \rho q_m] & \text{if } \ell(\rho q_m) < \ell(\rho), \\ \positiveGreedy[\sq{Q}_\dashv, \rho] \cup m & \text{otherwise.}\end{cases}
\\
\negativeGreedy & = \begin{cases} \shiftRight{\negativeGreedy[\sq{Q}_\vdash, q_1\rho]} & \text{if } \ell(q_1\rho) < \ell(\rho), \\ 1 \cup \shiftRight{\negativeGreedy[\sq{Q}_\vdash, \rho]} & \text{otherwise.}\end{cases}
\end{align*}
\end{lemma}
\begin{lemma}
\label{lem:greedy2}
The greedy facets~$\positiveGreedy$ and~$\negativeGreedy$ can be constructed inductively from $\positiveGreedy[\varepsilon,e] = \negativeGreedy[\varepsilon,e] = \emptyset$ using the following formulas:
\begin{align*}
\positiveGreedy & = \begin{cases} 1 \cup \shiftRight{\positiveGreedy[\sq{Q}_\vdash,\rho]} & \text{if } \rho \prec \sq{Q}_\vdash, \\ \shiftRight{\positiveGreedy[\sq{Q}_\vdash,q_1\rho]} & \text{otherwise.} \end{cases}
\\
\negativeGreedy & = \begin{cases} \negativeGreedy[\sq{Q}_\dashv,\rho] \cup m & \text{if } \rho \prec \sq{Q}_\dashv, \\ \negativeGreedy[\sq{Q}_\dashv,\rho q_m] & \text{otherwise.} \end{cases}
\end{align*}
\end{lemma}
Lemmas~\ref{lem:greedy1} and~\ref{lem:greedy2} can be reformulated to obtain greedy sweep procedures on the word~$\sq{Q}$ itself, avoiding the use of induction. Namely, the positive greedy facet is obtained:
\begin{enumerate}
\item either sweeping~$\sq{Q}$ from right to left placing inversions as soon as possible,
\item or sweeping~$\sq{Q}$ from left to right placing non-inversions as long as possible.
\end{enumerate}
The negative greedy facet is obtained similarly, reversing the directions of the~sweeps.
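In type~$A$, these sweep descriptions take only a few lines. The sketch below (ours) computes both greedy facets from one-line notations, using that right multiplication by~$\tau_i$ exchanges positions~$i$ and~$i+1$, while left multiplication exchanges the values~$i$ and~$i+1$:

```python
from itertools import combinations

def length(w):
    # number of inversions of a permutation in one-line notation
    return sum(1 for a, b in combinations(w, 2) if a > b)

def right_mult(w, i):
    # w * tau_i: exchange the entries in positions i and i+1
    w = list(w)
    w[i - 1], w[i] = w[i], w[i - 1]
    return tuple(w)

def left_mult(w, i):
    # tau_i * w: exchange the values i and i+1 in the one-line notation
    return tuple(i + 1 if x == i else i if x == i + 1 else x for x in w)

def positive_greedy(Q, rho):
    # sweep Q from right to left, placing inversions as soon as possible
    P = set()
    for k in range(len(Q), 0, -1):
        rq = right_mult(rho, Q[k - 1])
        if length(rq) < length(rho):
            rho = rq          # position k is used in the reduced expression
        else:
            P.add(k)          # position k belongs to the greedy facet
    return P

def negative_greedy(Q, rho):
    # sweep Q from left to right, placing inversions as soon as possible
    N = set()
    for k in range(1, len(Q) + 1):
        qr = left_mult(rho, Q[k - 1])
        if length(qr) < length(rho):
            rho = qr
        else:
            N.add(k)
    return N
```

On the running example, both sweeps recover the greedy facets~$\positiveGreedy[\bar{\sq{Q}},\bar\rho] = \{1,2,3,5,6\}$ and~$\negativeGreedy[\bar{\sq{Q}},\bar\rho] = \{3,4,7,8,9\}$.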
\subsection{The greedy flip tree}
\label{subsec:greedyFlipTree}
We construct in this section the positive and negative greedy flip trees of~$\subwordComplex$. This construction mainly relies on the following \defn{greedy flip property} of greedy facets.
\begin{proposition}
\label{prop:gfp}
If~$m$ is a flippable element of~$\negativeGreedy$, then~$\negativeGreedy[\sq{Q}_\dashv,\rho q_m]$ is obtained from~$\negativeGreedy$ by flipping~$m$. If~$1$ is a flippable element of~$\positiveGreedy$, then $\positiveGreedy[\sq{Q}_\vdash,q_1\rho]$ is obtained from~$\positiveGreedy$ by flipping~$1$ and shifting to the left.
\end{proposition}
\begin{proof}
Although the formulation is simpler for the negative greedy facets, the proof is simpler for the positive ones (due to the direction chosen in the definition of the root function). Assume that~$1$ is a flippable element of~$\positiveGreedy$. Let~$J \in \facets$ and~$j \in J$ be such that $\positiveGreedy \smallsetminus 1 = J \smallsetminus j$. Consider the facet~$\shiftLeft{J}$ of~$\subwordComplex[\sq{Q}_\vdash,q_1\rho]$ obtained by shifting~$J$ to the left. Proposition~\ref{prop:roots&flips} (\ref{prop:roots&flips:update}) enables us to compute the root function~$\mathbb{R}oot{J}{\cdot}$ for~$J$, which in turn gives us the root function for~$\shiftLeft{J}$:
$$\mathbb{R}oot{\shiftLeft{J}}{k} = \begin{cases} \mathbb{R}oot{\positiveGreedy}{k+1} & \text{if } 1 \le k \le j-1, \\ q_1(\mathbb{R}oot{\positiveGreedy}{k+1}) & \text{otherwise.} \end{cases}$$
Since all positions~$i \in \positiveGreedy$ such that~$\mathbb{R}oot{\positiveGreedy}{i} = \alpha_{q_1}$ are located before~$j$, and since $\alpha_{q_1}$ is the only positive root sent to a negative root by the simple reflection~$q_1$, all roots~$\mathbb{R}oot{\shiftLeft{J}}{k}$, for~$k \in \shiftLeft{J}$, are positive. Consequently,~$\shiftLeft{J} = \positiveGreedy[\sq{Q}_\vdash,q_1\rho]$.
\end{proof}
\begin{example}
Consider the subword complex of Example~\ref{exm:toto}. Since~$9$ is flippable in~$\negativeGreedy[\bar{\sq{Q}},\bar\rho] = \{3,4,7,8,9\}$, we have~$\negativeGreedy[\bar{\sq{Q}}_\dashv,\bar\rho\tau_1] = \{3,4,6,7,8\}$. Since~$1$~is flippable in~$\positiveGreedy[\bar{\sq{Q}},\bar\rho] = \{1,2,3,5,6\}$, we have~$\positiveGreedy[\bar{\sq{Q}}_\vdash,\tau_2\bar\rho] = \shiftLeft{\{2,3,5,6,8\}} = \{1,2,4,5,7\}$.
\end{example}
We now define inductively the \defn{negative greedy flip tree}~$\negativeGreedyTree$. The induction follows the right induction formula~(\ref{eq:inductionRight}) for the facets~$\facets$. For the empty word~$\varepsilon$ and the identity~$e$ of~$W$, the tree~$\negativeGreedyTree[\varepsilon,e]$ is formed by the unique facet~$\emptyset$ of~$\subwordComplex[\varepsilon,e]$. For a non-empty word~$\sq{Q}$, we define the tree~$\negativeGreedyTree$ as
\begin{enumerate}[(i)]
\item $\negativeGreedyTree[\sq{Q}_\dashv, \rho q_m]$ if~$m$ appears in none of the facets of~$\subwordComplex$;
\item $\negativeGreedyTree[\sq{Q}_\dashv, \rho] \join m$ if~$m$ appears in all the facets of~$\subwordComplex$;
\item the disjoint union of~$\negativeGreedyTree[\sq{Q}_\dashv, \rho q_m]$ and~$\negativeGreedyTree[\sq{Q}_\dashv, \rho] \join m$, with an additional arc from~$\negativeGreedy[\sq{Q}_\dashv, \rho q_m]$ to~$\negativeGreedy = \negativeGreedy[\sq{Q}_\dashv, \rho] \cup m$, otherwise.
\end{enumerate}
See \fref{fig:negativeGreedyTree} for the negative greedy flip tree~$\negativeGreedyTree[\bar{\sq{Q}},\bar \rho]$ of Example~\ref{exm:toto}.
\begin{figure}
\caption{The negative greedy flip tree~$\negativeGreedyTree[\bar{\sq{Q}},\bar\rho]$ of Example~\ref{exm:toto}.}
\label{fig:negativeGreedyTree}
\end{figure}
We define similarly the \defn{positive greedy flip tree}~$\positiveGreedyTree$ of~$\subwordComplex$. The induction now follows the left induction formula~(\ref{eq:inductionLeft}) for the facets~$\facets$. The tree~$\positiveGreedyTree[\varepsilon,e]$ is formed by the unique facet~$\emptyset$ of~$\subwordComplex[\varepsilon,e]$. For a non-empty word~$\sq{Q}$, we define the tree~$\positiveGreedyTree$ as
\begin{enumerate}[(i)]
\item $\shiftRight{\positiveGreedyTree[\sq{Q}_\vdash, q_1 \rho]}$ if~$1$ appears in none of the facets of~$\subwordComplex$;
\item $1 \join \shiftRight{\positiveGreedyTree[\sq{Q}_\vdash, \rho]}$ if~$1$ appears in all the facets of~$\subwordComplex$;
\item the disjoint union of~$\shiftRight{\positiveGreedyTree[\sq{Q}_\vdash, q_1 \rho]}$ and~$1 \join \shiftRight{\positiveGreedyTree[\sq{Q}_\vdash, \rho]}$, with an additional arc from~$\positiveGreedy = 1 \cup \shiftRight{\positiveGreedy[\sq{Q}_\vdash, \rho]}$ to~$\shiftRight{\positiveGreedy[\sq{Q}_\vdash, q_1\rho]}$, otherwise.
\end{enumerate}
See \fref{fig:positiveGreedyTree} for the positive greedy flip tree~$\positiveGreedyTree[\bar{\sq{Q}},\bar \rho]$ of Example~\ref{exm:toto}.
\begin{figure}
\caption{The positive greedy flip tree~$\positiveGreedyTree[\bar{\sq{Q}},\bar\rho]$ of Example~\ref{exm:toto}.}
\label{fig:positiveGreedyTree}
\end{figure}
\begin{lemma}
\label{lem:greedyTree}
The negative (resp.~positive) greedy flip tree is a spanning tree of the increasing flip graph~$\flipGraph$, oriented towards its root~$\negativeGreedy$ (resp.~from its root~$\positiveGreedy$).
\end{lemma}
\begin{proof}
We prove the result for~$\negativeGreedyTree$ by induction on the length of~$\sq{Q}$. On the one hand, both the increasing flip graphs~$\flipGraph[\sq{Q}_\dashv, \rho q_m]$ and~$\flipGraph[\sq{Q}_\dashv, \rho] \join m$ are subgraphs of the increasing flip graph~$\flipGraph$. On the other hand, in the case where~$m$ appears in some but not all facets of~$\subwordComplex$, the additional arc from~$\negativeGreedy[\sq{Q}_\dashv, \rho q_m]$ to~$\negativeGreedy$ is an increasing flip according to Proposition~\ref{prop:gfp}.
\end{proof}
\begin{example}
Consider the subword complex~$\subwordComplex[\bar{\sq{Q}},\bar\rho]$ of Example~\ref{exm:toto}. Figures~\ref{fig:negativeGreedyTree} and \ref{fig:positiveGreedyTree} represent respectively the negative and the positive greedy flip trees~$\negativeGreedyTree[\bar{\sq{Q}},\bar\rho]$ and~$\positiveGreedyTree[\bar{\sq{Q}},\bar\rho]$. These trees are also represented on \fref{fig:flipGraph2} as spanning trees of the flip graph~$\flipGraph[\bar{\sq{Q}},\bar\rho]$ of \fref{fig:flipGraph}.
\begin{figure}
\caption{The negative greedy flip tree~$\negativeGreedyTree[\bar{\sq{Q}},\bar\rho]$ and the positive greedy flip tree~$\positiveGreedyTree[\bar{\sq{Q}},\bar\rho]$, represented as spanning trees of the flip graph~$\flipGraph[\bar{\sq{Q}},\bar\rho]$.}
\label{fig:flipGraph2}
\end{figure}
\end{example}
In the remainder of this section, we give a direct description of the greedy flip trees~$\negativeGreedyTree$ and~$\positiveGreedyTree$, avoiding the use of induction. Let~$I$ be a facet of the subword complex~$\subwordComplex$. We define the \defn{negative greedy index}~$\mathsf{n}(I)$ of the facet~$I$ to be the last position~$x \in [m]$ such that $I \cap [x] = \negativeGreedy[q_1 \cdots q_x, \sigma_{[x] \smallsetminus I}]$. In other words, the facet~$I$ is greedy until~$\mathsf{n}(I)$ and not afterwards. Note in particular that~$I \cap [x]$ is greedy if and only if~$x \le \mathsf{n}(I)$. Similarly, we define the \defn{positive greedy index}~$\mathsf{p}(I)$ of the facet~$I$ to be the smallest position ${x \in [m]}$ such that ${\set{i-x}{i \in I \smallsetminus [x]} = \positiveGreedy[q_{x+1} \cdots q_m, \sigma_{[x+1,m] \smallsetminus I}]}$.
\begin{example}
\label{exm:greedyIndex}
Consider the subword complex~$\subwordComplex[\bar{\sq{Q}},\bar\rho]$ of Example~\ref{exm:toto}. The pseudoline arrangements associated with the facets~$\bar I \eqdef \{1,3,4,7,9\}$ and~$\bar J \eqdef \{3,4,7,8,9\}$ are represented in \fref{fig:flip}. We have~$\mathsf{n}(\bar I) = 7$, while~$\mathsf{n}(\bar J) = 9$ (\textit{i.e.},~$\bar J$ is the negative greedy facet).
In \fref{fig:negativeGreedyTree}, the symbol~$|$ separates the elements smaller than or equal to~$\mathsf{n}(I)$ from those strictly larger than~$\mathsf{n}(I)$. Similarly, in \fref{fig:positiveGreedyTree}, the symbol~$|$ separates the elements strictly smaller than~$\mathsf{p}(I)$ from those larger than or equal to~$\mathsf{p}(I)$.
\end{example}
The following lemma provides the rule to update the greedy indices when we perform certain specific flips.
\begin{lemma}
\label{lem:greedyIndex}
Let~$I$ and~$J$ be two adjacent facets of~$\subwordComplex$ with~$I \smallsetminus i = J \smallsetminus j$. If $i < j \le \mathsf{n}(J)$, then $\mathsf{n}(I) = j-1$. If $\mathsf{p}(I) \le i < j$, then $\mathsf{p}(J) = i+1$.
\end{lemma}
\begin{proof}
We prove the result for the negative greedy index. On the one hand, we have $j \in J \cap [j] = \negativeGreedy[q_1 \cdots q_j,\sigma_{[j] \smallsetminus J}] = \negativeGreedy[q_1 \cdots q_j,\sigma_{[j] \smallsetminus I}]$. Since $j \notin I \cap [j]$, this implies that~$\mathsf{n}(I) < j$.
On the other hand, the negative greedy flip property of Proposition~\ref{prop:gfp} ensures that~$I \cap [j-1] = \negativeGreedy[q_1 \cdots q_{j-1}, \sigma_{[j-1] \smallsetminus I}]$ since it is obtained from~$J \cap [j] = \negativeGreedy[q_1 \cdots q_j,\sigma_{[j] \smallsetminus J}]$ by flipping~$j$. Thus, $\mathsf{n}(I) \ge j-1$.
\end{proof}
\begin{proposition}
\label{prop:greedyTree}
The negative greedy flip tree~$\negativeGreedyTree$ (resp. positive greedy flip tree~$\positiveGreedyTree$) has one vertex for each facet of~$\facets$, and one arc from a facet~$I$ to a facet~$J$ if and only if~$I \smallsetminus i = J \smallsetminus j$ for some~$i \in I$ and~$j \in J$ satisfying~$i < j \le \mathsf{n}(J)$ (resp.~$\mathsf{p}(I) \le i < j$).
\end{proposition}
\begin{proof}
We prove the result for the negative greedy flip tree by induction on the length of~$\sq{Q}$. We write here~$\mathsf{n}_{\sq{Q},\rho}(I)$ to specify that we consider the negative greedy index of a set~$I$ regarded as a facet of~$\subwordComplex$. The result holds on the subword complex~$\subwordComplex[\varepsilon, e]$. Consider now two facets~$I$ and~$J$ of~$\subwordComplex$ with~$I \smallsetminus i = J \smallsetminus j$ for some $i < j$. We have three cases:
\begin{enumerate}[(i)]
\item If~$m$ is neither in~$I$ nor in~$J$, then~$I$ and~$J$ are both facets of~$\subwordComplex[\sq{Q}_\dashv,\rho q_m]$ and $\mathsf{n}_{\sq{Q}_\dashv,\rho q_m}(J) = \min(\mathsf{n}_{\sq{Q},\rho}(J), m-1)$. We thus conclude by induction.
\item If~$m$ is both in~$I$ and in~$J$, then~$I \smallsetminus m$ and~$J \smallsetminus m$ are both facets of~$\subwordComplex[\sq{Q}_\dashv,\rho]$ and $\mathsf{n}_{\sq{Q}_\dashv,\rho}(J \smallsetminus m) = \min(\mathsf{n}_{\sq{Q},\rho}(J), m-1)$. We thus conclude by induction.
\item Otherwise, $m$ is in precisely one of the facets~$I$ and~$J$. Thus, we must have~${j = m}$. If~$j \le \mathsf{n}(J)$, then $J = \negativeGreedy$, $I = \negativeGreedy[\sq{Q}_\dashv,\rho q_m]$, and the flip from~$I$ to~$J$ is an arc of~$\negativeGreedyTree$. Conversely, if~$j > \mathsf{n}(J)$, then $J \ne \negativeGreedy$ and the flip from~$I$ to~$J$ is not an arc of~$\negativeGreedyTree$. \qedhere
\end{enumerate}
\end{proof}
Although we defined the greedy flip trees~$\negativeGreedyTree$ and~$\positiveGreedyTree$ inductively, the results of Lemma~\ref{lem:greedyIndex} and Proposition~\ref{prop:greedyTree} enable us to construct them directly on the graph~$\flipGraph$, avoiding the use of induction. We use this construction to provide a non-inductive enumeration scheme for the facets of~$\subwordComplex$.
\subsection{The greedy flip algorithm}
\label{subsec:greedyFlipAlgorithm}
The \defn{greedy flip algorithm} generates all facets of the subword complex~$\subwordComplex$ by a depth first search procedure on the (positive or negative) greedy flip tree. The preorder traversal of the greedy flip tree also provides an iterator on the facets of~$\subwordComplex$. Given a facet~$I$ of~$\subwordComplex$, we can indeed compute its next element in the preorder traversal, provided we know its root function, its greedy index and the path from~$I$ to the root in the greedy flip tree. These data can be updated at each step of the algorithm, using Proposition~\ref{prop:roots&flips} for the root function and Lemma~\ref{lem:greedyIndex} for the greedy index.
To evaluate the running time and working space of the greedy flip algorithm, remember that we consider as parameters both the rank~$n$ of the group~$W$ and the size~$m$ of the word~$\sq{Q}$. During the algorithm, we only need to remember the current facet, together with its root function, its greedy index, and its path to the root in the greedy flip tree. Thus, the working space of the algorithm is in~$O(mn)$. Concerning running time, each facet needs at most~$m$ flips to generate all its children in the greedy flip tree. Since a flip can be performed in~$O(mn)$ time (see Section~\ref{subsec:roots&flips}), the running time per facet of the greedy flip algorithm is in~$O(m^2n)$.
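The traversal underlying the greedy flip algorithm can be sketched generically. The following Python sketch is an illustration only: the function name \texttt{preorder\_facets} and the callback \texttt{children} are our assumptions, not part of the implementation discussed below. In the actual algorithm, \texttt{children} would return the facets reachable by the flips characterized in Proposition~\ref{prop:greedyTree}, computed from the root function and the greedy index.

```python
def preorder_facets(root, children):
    """Depth-first (preorder) traversal of a rooted tree given by a
    `children` callback.  Yields each vertex exactly once."""
    stack = [root]
    while stack:
        node = stack.pop()
        yield node
        # push children in reverse so that they are visited left to right
        for child in reversed(children(node)):
            stack.append(child)

# toy tree standing in for a greedy flip tree
toy = {"r": ["a", "b"], "a": ["c"], "b": [], "c": []}
facets = list(preorder_facets("r", lambda v: toy.get(v, [])))  # r, a, c, b
```

Note that this sketch stores a stack of pending branches for simplicity; the $O(mn)$ working space bound quoted above instead relies on keeping only the current facet and its path to the root, recomputing children on the way back up.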
\begin{figure}
\caption{Comparison of the running times of the inductive algorithm and the greedy flip algorithm to generate the $k$-cluster complex of type~$A_n$. On the left, $k$ is fixed at~$1$ while~$n$ increases; on the right, $n$ is fixed at~$3$ while~$k$ increases. Times are reported in milliseconds per facet.}
\label{fig:runnngTimes}
\end{figure}
We have implemented the greedy flip algorithm using the mathematical software Sage~\cite{sage}. This implementation is integrated into C.~Stump's patch on subword complexes. The user can now select either the inductive algorithm (directly based on the inductive structure of the subword complex as discussed in Section~\ref{subsec:generationgSubwordComplex}) or the greedy flip algorithm. We have seen that these two algorithms have the same theoretical complexity. To compare their experimental running time, we have constructed the $k$-cluster complex of type~$A_n$ for increasing values of~$k$ and~$n$. Its facets correspond to the $k$-triangulations of the $(n+2k+1)$-gon (see Example~\ref{exm:geometricGraphs} and~\cite{CeballosLabbeStump} for the definition of multicluster complexes in any finite type). The rank of the group is~$n$, while the length of the word is~$kn+{n \choose 2}$. \fref{fig:runnngTimes} presents the running time per facet for both enumeration algorithms in two situations: on the left, $k$ is fixed at~$1$ while~$n$ increases; on the right, $n$ is fixed at~$3$ while~$k$ increases. The greedy flip algorithm is faster than the inductive algorithm in the first situation, and slower in the second. We observe a similar behavior for the computation of $k$-cluster complexes of types~$B_n$ and~$D_n$. In general, the inductive algorithm is experimentally faster when the Coxeter group is fixed, but slower when the size of the Coxeter group increases.
\begin{remark}
For type~$A$ spherical subword complexes, our algorithm is similar to that of~\cite{PilaudPocchiola} (which was formulated in terms of primitive sorting networks). Observe however that, contrary to~\cite{PilaudPocchiola}, we allow~$\rho$ to be any element of~$W$. This slight generalization enables us to provide an inductive definition for the greedy flip tree, which simplifies the presentation of the algorithm. For the subword complexes which provide combinatorial models for pointed pseudotriangulations (see Example~\ref{exm:geometricGraphs}), our algorithm coincides with the greedy flip algorithm of~\cite{BronnimannKettnerPocchiolaSnoeying}.
\end{remark}
\section*{Acknowledgments}
I am grateful to C.~Stump for fruitful discussions on subword complexes and related topics and to M.~Pocchiola for introducing me to the greedy flip algorithm on pseudotriangulations. I also thank three anonymous referees for valuable comments and suggestions on this paper, in particular for pointing out an important mistake in a previous version of this paper. I thank the Sage and Sage-Combinat development team for making available this powerful mathematics software and C.~Stump again for helpful support on the Sage implementation of Coxeter groups and subword complexes.
\end{document}
\begin{document}
\begin{frontmatter}
\title{Defining Predictive Probability Functions for Species
Sampling Models}
\runtitle{Predictive Probability Functions}
\begin{aug}
\author[a]{\fnms{Jaeyong} \snm{Lee}\ead[label=e1]{[email protected]}},
\author[b]{\fnms{Fernando A.} \snm{Quintana}\corref{}\ead[label=e2]{[email protected]}},
\author[c]{\fnms{Peter} \snm{M\"uller}\ead[label=e3]{[email protected]}}
\and
\author[d]{\fnms{Lorenzo} \snm{Trippa}\ead[label=e4]{[email protected]}}
\runauthor{Lee, Quintana, M\"uller and Trippa}
\affiliation{Seoul National University, Pontificia Universidad Cat\'
olica de Chile,
University of Texas at Austin and Harvard University}
\address[a]{Jaeyong Lee is Professor, Department of Statistics,
Seoul National University, Seoul, 151-747, South Korea
\printead{e1}.}
\address[b]{Fernando A. Quintana is Professor,
Departamento de Estad\'{\i}stica,
Pontificia Universidad Cat\'olica de Chile,
Macul, Santiago 22, Chile \printead{e2}.}
\address[c]{Peter M\"uller is Professor,
Department of Mathematics,
University of Texas at Austin,
Austin, Texas 78712-1202, USA \printead{e3}.}
\address[d]{Lorenzo Trippa is Assistant Professor,
Department of Biostatistics, Harvard University,
and
Department of Biostatistics and Computational Biology,
Dana-Farber Cancer Institute, Boston, Massachusetts
02115, USA
\printead{e4}.}
\end{aug}
\begin{abstract}
We review the class of species sampling models (SSM). In particular, we
investigate the relation between the exchangeable partition probability
function (EPPF) and the predictive probability function (PPF). It is
straightforward to define a PPF from an EPPF, but the converse is not
necessarily true. In this paper we introduce the notion of putative
PPFs and show novel conditions for a putative PPF to define an EPPF. We
show that all possible PPFs in a certain class have to define
(unnormalized) probabilities for cluster membership that are linear in
cluster size. We give a new necessary and sufficient condition for
arbitrary putative PPFs to define an EPPF. Finally, we show posterior
inference for a large class of SSMs with a PPF that is not linear in
cluster size and discuss a numerical method to derive its PPF.
\end{abstract}
\begin{keyword}
\kwd{Species sampling prior}
\kwd{exchangeable partition probability functions}
\kwd{prediction probability functions}
\end{keyword}
\end{frontmatter}
\section{Introduction}
\label{secintro}
The status of the Dirichlet process (\citeauthor{Ferguson73}, \citeyear{Ferguson73})
(DP) among
nonparametric priors is comparable to that of the normal distribution
among finite-dimensional distributions. This is in part due to the
marginalization property: a random sequence sampled from a random
probability measure with a Dirichlet process prior forms marginally a
Polya urn sequence \citep{Blackwell1973}. Markov chain Monte Carlo
simulation based on the marginalization property has been the central
computational tool for the DP and facilitated a wide variety of
applications. See \citet{MacEachern94}, \citet
{EscobarWest95} and
\citet{MacEachernMuller98}, to name just a few. In Pitman
(\citeyear{Pitman95,Pitman96}), the species sampling model
(SSM) is proposed as a generalization of the DP. SSMs can be used as
flexible alternatives to the popular DP model in nonparametric Bayesian
inference. The SSM is defined as the directing random probability
measure of an exchangeable species sampling sequence which is defined
as a generalization of the Polya urn sequence. The SSM has a
marginalization property similar to the DP. It therefore enjoys the
same computational advantage as the DP while it defines a much wider
class of random probability measures. For its theoretical properties
and applications, we refer to \citet{IshwaranJames03},
\citet{LijoiMena05}, \citet{LijoiPrunster05}, \citet{James08},
\citet{NavarreteQuintana08}, \citet{JamesLijoi09} and
\citet{JangLee10}.
Suppose $(X_1,X_2,\ldots)$ is a sequence of random variables. In a
traditional application the sequence arises as a random sample from a
large population of units, and $X_i$ records the species of the $i$th
individual in the sample. This explains the name SSM. Let $\tilde{X}_j$ be
the $j$th distinct species to appear. Let $n_{jn}$ be the number of
times the $j$th species $\tilde{X}_j$ appears in $(X_1,\ldots,X_n)$,
$j=1,2,\ldots\,$, and
\[
\mathbf{n}_n =
(n_{jn}, j=1,\ldots,k_n),
\]
where $k_n = k_n(\mathbf{n}_n) = \max\{j\dvtx n_{jn} > 0 \}$ is the number of
different species to appear in $(X_1,\ldots,X_n)$. The sets $\{i\le
n\dvtx X_i=\tilde{X}_j\}$ define clusters that partition the index set
$\{1,\ldots,n\}$. When $n$ is understood from the context we just write
$n_j$, $\mathbf{n}$ and $k$ or $k(\mathbf{n})$.
We now give three alternative characterizations of species sampling
sequences: (i) by the predictive probability function, (ii) by the
driving measure of the exchangeable sequence, and (iii) by the
underlying exchangeable partition probability function.
\subsection*{PPF}
Let $\nu$ be a diffuse (or nonatomic) probability measure on a complete
separable metric space $\mathcal{X}$ equipped with its Borel $\sigma$-field. An
exchangeable sequence $(X_1, X_2,\ldots)$ is called a species sampling
sequence (SSS) if $X_1 \sim\nu$ and
\begin{eqnarray}\label{eqpred}
&&
X_{n+1}\mid X_1,\ldots,X_n \nonumber\\[-8pt]\\[-8pt]
&&\quad\sim\sum
_{j=1}^{k_n} p_j(\mathbf{n}_n)
\delta_{\tilde{X}_j} + p_{k_n+1}(\mathbf{n}_n) \nu,\nonumber
\end{eqnarray}
where $\delta_x$ is the degenerate probability measure at~$x$. Examples
of SSS include the P\'olya urn sequence $(X_1, X_2, \ldots)$ whose
distribution is the same as the marginal distribution of independent
observations from a Dirichlet random distribution $F$, that is, $X_1, X_2,
\ldots\mid F \iid F$ with $ F \sim \operatorname{DP}(\alpha
\nu)$, where $\alpha> 0$. The conditional distribution of the P\'olya
urn sequence is
\[
X_{n+1}\mid X_1,\ldots,X_n \sim\sum
_{j=1}^{k_n} \frac{n_j}{n+\alpha} \delta_{\tilde{X}_j} +
\frac{\alpha}{n+\alpha} \nu.
\]
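For intuition, the P\'olya urn predictive rule is easy to simulate directly. The following Python sketch draws a sequence $(X_1,\ldots,X_n)$ from the conditional distribution displayed above; the base-measure sampler \texttt{draw\_nu} is a placeholder for sampling from~$\nu$.

```python
import random

def polya_urn(n, alpha, draw_nu, rng=random):
    """Sample X_1,...,X_n from the Polya urn predictive rule:
    X_{n+1} joins the j-th observed species with probability n_j/(n+alpha),
    or is a fresh draw from nu with probability alpha/(n+alpha)."""
    xs = []
    species, counts = [], []          # distinct values and their multiplicities
    for i in range(n):
        u = rng.random() * (i + alpha)
        acc = 0.0
        for j, c in enumerate(counts):
            acc += c
            if u < acc:               # join existing cluster j
                counts[j] += 1
                xs.append(species[j])
                break
        else:                         # start a new cluster: draw from nu
            x = draw_nu()
            species.append(x)
            counts.append(1)
            xs.append(x)
    return xs, counts
```

With $\alpha$ small, most draws join the largest existing clusters; with $\alpha$ large, new clusters dominate, matching the weights $n_j/(n+\alpha)$ and $\alpha/(n+\alpha)$.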
This marginalization property has been a central tool for posterior
simulation in DP mixture models, which benefit from the fact that one
can integrate out $F$. The posterior
distribution becomes then free of the infinite-dimensional object $F$.
Thus, Markov chain Monte Carlo algorithms for DP mixtures do not pose
bigger difficulties than the usual parametric Bayesian models
(\citecs{MacEachern94}; \citeauthor{MacEachernMuller98}, \citeyear{MacEachernMuller98}).
Similarly, alternative discrete random distributions have been considered in the literature
and proved computationally attractive due to analogous marginalization
properties; see, for example, Lijoi, Mena and Pr{\"u}nster
(\citeyear{LijoiMena05,Lijoi2007}).
The sequence of functions $(p_1,p_2,\ldots)$ in (\ref{eqpred}) is
called a sequence of predictive probability
functions
(PPF). These are defined on
$\mathbb{N}^*=\bigcup_{k=1}^\infty\mathbb{N}^k$, where $\mathbb{N}$ is the set of natural
numbers, and satisfy the conditions
\begin{eqnarray}
\label{pppf}
&&p_j(\mathbf{n}) \geq0 \quad\mbox{and}\quad \sum
_{j=1}^{k_n+1} p_j(\mathbf{n}) = 1\nonumber\\[-8pt]\\[-8pt]
&&\eqntext{\mbox{for all } \mathbf{n}\in\mathbb{N}^*.}
\end{eqnarray}
Motivated by these properties of PPFs, we define a sequence of
\textit{putative PPFs} as a sequence of functions $(p_j,j=1,2,\ldots)$ defined
on $\mathbb{N}^*$ which satisfies~(\ref{pppf}). Note that not all putative
PPFs are PPFs, because (\ref{pppf}) does not guarantee exchangeability
of $(X_1,X_2,\ldots)$ in~(\ref{eqpred}). Note that the weights
$p_j(\cdot)$ depend on the data only indirectly through the cluster
sizes $\mathbf{n}_n$. The widely used DP is a special case of a species
sampling model, with $p_j(\mathbf{n}_n) \propto n_j$ and $p_{k+1}(\mathbf{n}_n)
\propto\alpha$ for a DP with total mass parameter $\alpha$. The use of
$p_j$ in (\ref{eqpred}) implies
\begin{eqnarray}
p_j(\mathbf{n}) &=& \mathbb{P}(X_{n+1} = \tilde{X}_j \mid
X_1,\ldots,X_n),\nonumber\\
&&\eqntext{j=1,\ldots,k_n,}
\\
p_{k_n+1}(\mathbf{n}) & = & \mathbb{P}\bigl(X_{n+1} \notin\{ \tilde
X_1,\ldots, \tilde X_{k_n}\} \mid X_1,
\ldots,X_n\bigr).\nonumber
\end{eqnarray}
In words, $p_j$ is the probability of the next observation being the
$j$th species (falling into the $j$th cluster) and $p_{k_n+1}$ is the
probability of a new species (starting a new cluster).
An important point in the above definition is that a sequence $X_i$
can be a SSS only if it is exchangeable.
\subsection*{SSM}
Alternatively, a SSS can be characterized by the following defining
property. An exchangeable sequence of random variables
$(X_1,X_2,\ldots)$ is a species sampling sequence if and only if
$X_1,X_2,\ldots\mid G$ is a random sample from $G$ where
\begin{equation}
\label{eqssp} G = \sum_{h=1}^\infty
P_h \delta_{m_h} + R\nu
\end{equation}
for some sequence of positive random variables $(P_h)$ and $R$ such
that $1-R = \sum_{h=1}^\infty P_h \leq1$ with probability 1, $(m_h)$
is a sequence of independent variables with distribution $\nu$,
and $(P_h)$ and $(m_h)$ are independent. See \citet{Pitman96}. The
result is an extension of de Finetti's theorem and characterizes
the directing random probability measure of the species sample
sequence. We call the directing random probability measure $G$ in
equation (\ref{eqssp}) the \textit{SSM} of the SSS $(X_i)$.
\subsection*{EPPF}
A third alternative definition of a SSS and corresponding SSM is in
terms of the implied probability model on a sequence of random
partitions.
Suppose a SSS $(X_1, X_2, \ldots)$ is given. Since the de Finetti
measure (\ref{eqssp}) is partly discrete, there are ties among
$X_i$'s. The ties among $(X_1,X_2, \ldots, X_n)$ for a given $n$ induce
an equivalence relation in the set $[n]=\{1,2,\ldots,n\}$, that is, $i
\sim j$ if and only if $X_i = X_j$. This equivalence relation on
$[n]$, in turn, induces the partition $\Pi_n$ of $[n]$. Due to the
exchangeability of $(X_1, X_2, \ldots)$, it can be easily seen that the
random partition $\Pi_n$ is an exchangeable random partition on $[n]$,
that is, for any partition $\{A_1, A_2, \ldots, A_k \}$ of $[n]$, the
probability $P(\Pi_n = \{A_1, A_2, \ldots, A_k \})$ is invariant under
any permutation on $[n]$ and can be expressed as a function of $\mathbf{n}=
(n_1, n_2, \ldots, n_k)$, where $n_i$ is the cardinality of $A_i$ for
$i=1,2,\ldots,k$. Extending the above argument to the entire SSS, we
can get an exchangeable random partition on the natural numbers $\mathbb{N}$
from the SSS. Kingman (\citeyear{Kingman78,Kingman82})
showed a remarkable result, called Kingman's representation theorem,
that in fact every exchangeable random partition can be obtained from a
SSS.
For any partition $\{A_1, A_2, \ldots, A_k \}$ of $[n]$, we can
represent $P(\Pi_n = \{A_1, A_2, \ldots, A_k \}) = p(\mathbf{n})$ for a
symmetric function $p\dvtx \mathbb{N}^* \rightarrow[0,1]$ satisfying
\begin{eqnarray}
\label{eqeppf} p(1) &=& 1,
\nonumber\\[-8pt]\\[-8pt]
p(\mathbf{n}) &=& \sum_{j=1}^{k(\mathbf{n})+1}p\bigl(
\mathbf{n}^{j+}\bigr)\qquad\mbox{for all } \mathbf{n}\in\mathbb{N}^*,
\nonumber
\end{eqnarray}
where $\mathbf{n}^{j+}$ is the same as $\mathbf{n}$ except that the $j$th element
is increased by $1$. This function is called an exchangeable partition
probability function (EPPF) and characterizes the distribution of an
exchangeable random partition on $\mathbb{N}$.
We are now ready to pose the problem addressed in the present paper. It is
straightforward to verify that any EPPF defines a PPF by
\begin{equation}
\label{eqrelation} p_j(\mathbf{n}) = \frac{p(\mathbf{n}^{j+})}{p(\mathbf{n})},\qquad
j=1,2,\ldots,k+1.
\end{equation}
The converse is not true. Not every putative $p_j(\mathbf{n})$ defines an
EPPF and thus a SSM and a SSS. For example, it is easy to show that
$p_j(\mathbf{n}) \propto n_j^2+1$, $j=1,\ldots,k(\mathbf{n})$, does not. In Bayesian
data analysis it is often convenient, or at least instructive, to
elicit features of the PPF rather than the joint EPPF. Since the PPF is
crucial for posterior computation, applied Bayesians tend to focus on
it to specify the species sampling prior for a specific problem. For
example, the PPF defined by a DP prior implies that the probability of
joining an existing cluster is proportional to the cluster size. This
is not always desirable. Can the user define an alternative PPF that
allocates new observations to clusters with probabilities proportional
to alternative functions $f(n_j)$ and still define a SSS? In general,
the simple answer is no. We already mentioned that a PPF implies a SSS
if and only if it arises as in (\ref{eqrelation}) from an EPPF. But
this result is only a characterization. It is of little use for data
analysis and modeling since it is difficult to verify whether or not a
given PPF arises from an EPPF. In this paper we develop some conditions
to address this gap. We consider methods to define PPFs in two
different directions. First we give an easily verifiable necessary
condition for a putative PPF to arise from an EPPF (Lemma~\ref{lemm1}) and a
necessary and sufficient condition for a putative PPF to arise from an
EPPF. A consequence of this result is an elementary proof of the
characterization of all possible PPFs with form $p_j(\mathbf{n}) \propto
f(n_j)$. This result has been proved earlier by \citet{GnedinPitman06}.
Although the result in Section~\ref{secppfeppf} gives necessary and sufficient
conditions for a putative PPF to be a PPF, the characterization is not
constructive. It does not give any guidance in how to create a new PPF
for a specific application. In Section~\ref{seclorenzo} we propose an alternative
approach to define a SSM based on directly defining a joint probability
model for the $P_h$ in (\ref{eqssp}). We develop a numerical algorithm
to derive the corresponding PPF. This facilitates the use of such
models for nonparametric Bayesian data analysis. This approach can
naturally create PPFs with very different features than the well-known
PPF under the DP.
The literature reports some PPFs with closed-form analytic expressions
other than the PPF under the DP prior. There are a few directions which
have been explored for constructing extensions of the DP prior and
deriving PPFs. The normalization of complete random measures (CRM) has
been proposed in \citet{Kingman75}. A CRM such as the generalized
gamma process \citep{Brix1999}, after normalization, defines a discrete
random distribution and, under mild assumptions, a SSM. Developments
and theoretical results on this approach have been discussed in a
series of papers; see, for example, \citet{PermanPitman92},
\citet{Pitman03} and \citet{RegazziniLijoi03}. Normalized
CRM models
have also been studied and applied in \citet{LijoiMena05},
\citet{NietoPrunster04} and more recently in \citet
{JamesLijoi09}. A
second related line of research considered the so-called Gibbs models.
In these models the analytic expressions of the PPFs share similarities
with the DP model. An important example is the Pitman--Yor process.
Contributions include \citet{Gnedin2006}, \citet{Lijoi2007},
Lijoi, Pr{\"u}nster and Walker (\citeyear{Lijoi08,Lijoi08a})
and \citet{Gnedin2010}. \citet{Lijoi2010} provide a recent
overview on
major results from the literature on normalized CRM and Gibbs-type
partitions.
\section{When Does a PPF Imply an EPPF?}
\label{secppfeppf}
Suppose we are given a putative PPF $(p_j)$. Using equation
(\ref{eqrelation}), one can attempt to define a function
$p\dvtx\mathbb{N}^*\rightarrow[0,1]$ inductively by the following mapping:
\begin{eqnarray}
\label{eqeppfdef}
p(1) &=& 1,
\nonumber\\
p\bigl(\mathbf{n}^{j+}\bigr) &=& p_j(\mathbf{n}) p(\mathbf{n})\qquad\qquad\qquad\\
&&\eqntext{\mbox{for
all } \mathbf{n}\in\mathbb{N}^*\mbox{ and } j =1,2,\ldots,k(\mathbf{n})+1.}
\end{eqnarray}
In general, equation (\ref{eqeppfdef}) does not lead to a unique
definition of $p(\mathbf{n})$ for each $\mathbf{n}\in\mathbb{N}^*$. For example, let
$\mathbf{n}= (2,1)$. Then, $p(2,1)$ could be computed in two different ways
as $p_2(1)p_1(1,1)$ and $p_1(1)p_2(2)$ which correspond to partitions
$\{\{1,3\},\{2\}\}$ and $\{\{1,2\},\{3\}\}$, respectively. If
$p_2(1)p_1(1,1) \neq p_1(1)p_2(2)$, equation (\ref{eqeppfdef}) does
not define a function $p\dvtx\mathbb{N}^* \rightarrow[0,1]$. The following
lemma shows a condition for a PPF for which equation (\ref{eqeppfdef})
leads to a valid unique definition of $p\dvtx\mathbb{N}^* \rightarrow[0,1]$.
Suppose $\Pi= \{A_1,A_2,\ldots,A_k\}$ is a partition of $[n]$ with
clusters indexed in the order of appearance. For $1 \leq m \leq n$, let
$\Pi_m$ be the restriction of $\Pi$ on $[m]$. Let $\mathbf{n}(\Pi) =
(n_1,\ldots,n_k)$, where $n_i$ is the cardinality of $A_i$, and let
$\Pi(i)$
be the class index of element $i$ in partition $\Pi$ and $\Pi([n]) =
(\Pi(1),\ldots,\Pi(n))$.
\begin{lemma}\label{lemm1}
A putative PPF $(p_j)$ satisfies
\begin{eqnarray}
\label{eqsuffdef} p_i(\mathbf{n}) p_j\bigl(\mathbf{n}^{i+}
\bigr) = p_j(\mathbf{n}) p_i\bigl(\mathbf{n}^{j+}\bigr)\nonumber\\[-8pt]\\[-8pt]
&&\eqntext{\mbox{for all } \mathbf{n}\in\mathbb{N}^*, i, j =1,2,\ldots,k(\mathbf{n})+1,}
\end{eqnarray}
if and only if $p$ defined by (\ref{eqeppfdef}) is a function from $\mathbb{N}^*$ to
$[0,1]$, that is, $p$ in (\ref{eqeppfdef}) is uniquely defined.\vadjust{\goodbreak}
\end{lemma}
\begin{pf}
Let $\mathbf{n}= (n_1,\ldots,n_k)$ with $\sum_{i=1}^k
n_i = n$ and $\Pi$ and $\Omega$ be two partitions of $[n]$ with
$\mathbf{n}(\Pi) = \mathbf{n}(\Omega) = \mathbf{n}$. Let $p^\Pi(\mathbf{n}) = \prod_{i=1}^{n-1}
p_{\Pi(i+1)}(\mathbf{n}(\Pi_i))$ and\break $p^\Omega(\mathbf{n}) = \prod_{i=1}^{n-1}
p_{\Omega(i+1)}(\mathbf{n}(\Omega_i))$. We need to show that
$p^\Pi(\mathbf{n})=p^\Omega(\mathbf{n})$. Without loss of generality, we can assume
$\Pi([n]) = (1,\ldots,1,2,\ldots,2,\ldots,k,\ldots,k)$, where $i$ is
repeated $n_i$ times for $i=1,\ldots,k$. Note that $\Omega([n])$ is
a permutation of $\Pi([n])$, and by finitely many swaps of
two consecutive elements one can change
$\Omega([n])$ into $\Pi([n])$. Thus, it suffices to consider the case when
$\Omega([n])$ differs from $\Pi([n])$ in only two consecutive
positions. But this is guaranteed by condition (\ref{eqsuffdef}).
The converse is easy to show. Assume $(p_j)$ defines a unique $p(\mathbf{n})$.
Multiply both sides of (\ref{eqsuffdef}) by $p(\mathbf{n})$.
By assumption, both sides then equal $p(\mathbf{n}^{i+j+})$. This completes
the proof.
\end{pf}
Note that the conclusion of Lemma~\ref{lemm1} is not (yet) that $p$ is an EPPF.
The missing property is exchangeability, that is, invariance of $p$ with
respect to permutations of the group indices $j=1,\ldots,k(\mathbf{n})$. When
the function $p$, recursively defined by expression
(\ref{eqeppfdef}), satisfies the balance imposed by equation
(\ref{eqsuffdef}) it is called the \textit{partially exchangeable probability
function} (Pitman, \citeyear{Pitman95,Pitman06}) and
the resulting random partition of $\mathbb{N}$ is termed partially
exchangeable. In \citet{Pitman95}, it is proved that $p\dvtx
\mathbb{N}^*\rightarrow[0,1]$ is a \textit{partially exchangeable probability
function} if and only if there exists a sequence of nonnegative random
variables $P_i$, $i=1,2,\ldots\,$, with $\sum_i P_i\le1$ such that
\begin{equation}
\label{eqpartialcar} p(n_1,\ldots,n_k)= E \Biggl[ \prod
_{i=1}^k P_i^{n_i-1}
\prod_{i=1}^{k-1} \Biggl(1-\sum
_{j=1}^{i} P_j \Biggr) \Biggr],\hspace*{-24pt}
\end{equation}
where the expectation is with respect to the distribution of the
sequence $(P_i)$. We refer to \citet{Pitman95} for an extensive
study of
partially exchangeable random partitions.
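Pitman's representation is easy to check by simulation. The sketch below (an illustrative Python check, assuming the familiar stick-breaking construction $P_1=V_1$, $P_2=V_2(1-V_1)$ with $V_i\sim\operatorname{Beta}(1,\theta)$ independently for the size-biased DP weights) compares Monte Carlo evaluations of the expectation with the known DP EPPF values for $\mathbf{n}=(2,1)$ and $\mathbf{n}=(1,1,1)$.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = 1.0
L = 200_000

# Size-biased DP weights via stick breaking: P_1 = V_1, P_2 = V_2 (1 - V_1),
# with V_i ~ Beta(1, theta) independently.
V = rng.beta(1.0, theta, size=(L, 2))
P1 = V[:, 0]
P2 = V[:, 1] * (1.0 - P1)

# Pitman's representation for n = (2,1):  E[ P_1^(2-1) * (1 - P_1) ]
p_21_mc = np.mean(P1 * (1.0 - P1))
p_21_exact = theta / ((theta + 1.0) * (theta + 2.0))      # DP EPPF value

# ... and for n = (1,1,1):  E[ (1 - P_1) * (1 - P_1 - P_2) ]
p_111_mc = np.mean((1.0 - P1) * (1.0 - P1 - P2))
p_111_exact = theta**2 / ((theta + 1.0) * (theta + 2.0))  # DP EPPF value

print(p_21_mc, p_21_exact)     # both close to 1/6 for theta = 1
print(p_111_mc, p_111_exact)   # both close to 1/6 for theta = 1
```

For the DP the size-biased weights are available in closed form, which makes this a convenient unit test; for a general partially exchangeable partition only the representation above is available.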
It is easily checked whether or not a given PPF satisfies the condition
of Lemma~\ref{lemm1}. Corollary~\ref{coro1} characterizes all PPFs for which the
probability of cluster membership depends on a function of the cluster
size only. This result is part of a theorem in \citet{GnedinPitman06},
but we give here a more straightforward proof.
\begin{corollary}\label{coro1}
Suppose a putative PPF $(p_j)$ satisfies (\ref{eqsuffdef}) and
\begin{equation}
\label{ppf1} p_j(n_1,\ldots,n_k) \propto
\cases{
f(n_j), & $j=1,\ldots,k$,\cr
\theta, & $j= k+1$,}
\end{equation}
where $f(k)$ is a function from $\mathbb{N}$ to $(0,\infty)$ and $\theta>
0$. Then, $f(k) = a k$ for all $k \in\mathbb{N}$ for some $a > 0$.\vadjust{\goodbreak}
\end{corollary}
\begin{pf}
Note that for any $\mathbf{n}= (n_1,\ldots,n_k)$ and
$i=1,\ldots,k+1$,
\[
p_i(n_1,\ldots,n_k) = \cases{
\dfrac{f(n_i)}{\sum_{u=1}^k f(n_u) + \theta}, & $i=1,\ldots,k$,\vspace*{2pt}\cr
\dfrac{\theta}{\sum_{u=1}^k f(n_u) + \theta}, & $i = k+1$.}
\]
Equation (\ref{eqsuffdef}) with $1 \leq i \neq j \leq k$ implies
\begin{eqnarray*}
&&\frac{f(n_i)}{\sum_{u=1}^k f(n_u) + \theta} \frac{f(n_j)}{\sum_{u
\neq i}^k f(n_u) + f(n_i+1)
+\theta} \\
&&\quad=\frac{f(n_j)}{\sum_{u=1}^k f(n_u) + \theta} \frac{f(n_i)}{\sum_{u
\neq j}^k f(n_u) +
f(n_j+1) +\theta},
\end{eqnarray*}
which in turn implies
\[
f(n_i) + f(n_j+1) = f(n_j) +
f(n_i + 1)
\]
or
\[
f(n_j+1)-f(n_j) = f(n_i+1) -
f(n_i).
\]
Since this holds for all $n_i$ and $n_j$, we have for all $m \in\mathbb{N}$
\begin{equation}
\label{eq1} f(m) = a m + b
\end{equation}
for some $a, b \in\mathbb{R}$.
Now consider $i=k+1$ and $1\leq j \leq k$. Then,
\begin{eqnarray*}
&&
\frac{\theta}{\sum_{u=1}^k f(n_u) + \theta} \frac{f(n_j)}{\sum_{u
=1}^k f(n_u) + f(1)
+\theta} \\
&&\quad=\frac{f(n_j)}{\sum_{u=1}^k f(n_u) + \theta} \frac{\theta}{\sum
_{u \neq j}^k f(n_u) +
f(n_j+1) +\theta},
\end{eqnarray*}
which implies $f(n_j) + f(1) = f(n_j +1)$ for all $n_j$. This together
with (\ref{eq1}) implies $b= 0$. Thus, we have $f(k) = ak$ for some $a
> 0$.
\end{pf}
For any $a > 0$, the putative PPF
\[
p_i(n_1,\ldots,n_k) \propto\cases{
a n_i, & $i=1,\ldots,k$,\cr
{\theta}, & $i = k+1$,}
\]
defines a function $p\dvtx\mathbb{N}^*\rightarrow[0,1]$,
\[
p(n_1,\ldots,n_k) = \frac{\theta^{k-1} a^{n-k}}{[\theta+a]_{n-1;a}}
\prod
_{i=1}^k (n_i-1)!,
\]
where $[\theta]_{k;a} = \theta(\theta+a)
\cdots(\theta+ (k-1)a)$. Since this function is symmetric in its
arguments, it is an EPPF. This is the EPPF for a DP with total mass
$\theta/a$. Thus, Corollary~\ref{coro1} implies that the EPPF under the DP is the
only EPPF that satisfies (\ref{ppf1}). The corollary shows that it is
not an entirely trivial matter to come up with a putative PPF that
leads to a valid EPPF. A version of Corollary~\ref{coro1} is also well known as
Johnson's Sufficientness postulate \citep{good65}. See also the
discussion in \citet{zabell82}.
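The agreement between the recursively defined $p$ and the closed form can be verified directly. The following sketch (illustrative Python; the parameter values and partition paths are arbitrary choices) builds $p(\mathbf{n})$ by multiplying PPF terms along two different orders of appearance and compares both products with the closed-form DP EPPF, whose denominator is the generalized factorial $\prod_{m=1}^{n-1}(\theta+am)$.

```python
from math import prod, factorial

a, theta = 2.0, 1.5   # arbitrary illustrative values

def ppf(j, n):
    """PPF with p_j proportional to a*n_j for existing clusters, theta for a new one."""
    w = [a * s for s in n] + [theta]
    return w[j - 1] / sum(w)

def p_recursive(labels):
    """Multiply PPF terms along a partition path: `labels` are the cluster
    labels (in order of appearance) assigned to elements 2, ..., n."""
    n, val = [1], 1.0
    for j in labels:
        val *= ppf(j, n)
        if j == len(n) + 1:
            n.append(1)          # a new cluster is opened
        else:
            n[j - 1] += 1        # element joins existing cluster j
    return val

def p_closed(n_vec):
    """Closed-form EPPF of a DP with total mass theta/a."""
    n, k = sum(n_vec), len(n_vec)
    num = theta**(k - 1) * a**(n - k) * prod(factorial(s - 1) for s in n_vec)
    return num / prod(theta + a * m for m in range(1, n))

# Two paths giving cluster sizes (3,1,1) and (1,3,1), respectively:
val1 = p_recursive([1, 2, 1, 3])   # partition {{1,2,4},{3},{5}}
val2 = p_recursive([2, 2, 2, 3])   # partition {{1},{2,3,4},{5}}
print(val1, p_closed((3, 1, 1)))   # equal
print(val2, p_closed((1, 3, 1)))   # equal, and symmetric in the sizes
```

The two paths yield the same value up to a permutation of the group sizes, illustrating both uniqueness (Lemma above) and exchangeability of the resulting $p$.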
We now give a necessary and sufficient condition for the function $p$
defined by (\ref{eqeppfdef}) to be an EPPF, without any constraint on
the form of $p_j$ (unlike in the earlier results). Suppose
$\sigma$ is a permutation of $[k]$ and $\mathbf{n}=(n_1,\ldots,n_k) \in
\mathbb{N}^*$. Define $\sigma(\mathbf{n}) = \sigma(n_1,\ldots,n_k) =
(n_{\sigma(1)},n_{\sigma(2)},\ldots,n_{\sigma(k)})$. In\break words,
$\sigma$
is a permutation of group labels and $\sigma(\mathbf{n})$ is the
corresponding permutation of the group sizes~$\mathbf{n}$.
\begin{theorem}\label{theo1}
Suppose a putative PPF $(p_j)$ satisfies (\ref{eqsuffdef}) as well
as the following condition: for all $\mathbf{n}=(n_1,\ldots,n_k) \in\mathbb{N}^*$,
and permutations $\sigma$ on $[k]$ and $i=1,\ldots,k$,
\begin{eqnarray}
\label{eqsuffeppf}
p_i(n_1,\ldots,n_k)
=
p_{\sigma^{-1}(i)}(n_{\sigma(1)},n_{\sigma(2)},\ldots,n_{\sigma(k)}).\hspace*{-35pt}
\end{eqnarray}
Then, $p$ defined by (\ref{eqeppfdef}) is an EPPF. The condition is
also necessary; if $p$ is an EPPF, then (\ref{eqsuffeppf}) holds.
\end{theorem}
\begin{pf}
Fix $\mathbf{n}=(n_1,\ldots,n_k) \in\mathbb{N}^*$ and a
permutation on $[k]$, $\sigma$. We wish to show that for the function
$p$ defined by (\ref{eqeppfdef})
\begin{equation}\label{eq9a}\quad
p(n_1,\ldots,n_k) = p(n_{\sigma(1)},n_{\sigma(2)},
\ldots,n_{\sigma(k)}).
\end{equation}
Let $\Pi$ be
the partition of $[n]$ with $\mathbf{n}(\Pi) = (n_1,\ldots,n_k)$ such that
\[
\Pi\bigl([n]\bigr) = (1,2,\ldots,k,1,\ldots,1,2,\ldots,2,\ldots,k,\ldots,k),
\]
where after the first $k$ elements $1,2,\ldots,k$, $i$ is repeated
$n_i-1$ times for all $i=1,\ldots,k$. Then,
\[
p(\mathbf{n}) = \prod_{i=2}^k p_i(
\mathbf{1}_{(i-1)})\times\prod_{i=k}^{n-1}
p_{\Pi(i+1)}\bigl(\mathbf{n}(\Pi_i)\bigr),
\]
where $\mathbf{1}_{(j)}$ is the vector of
length $j$ whose elements are all $1$'s.
\begin{figure*}
\caption{The lines in each panel show 10 draws of the weights
${\mathbf P}\sim p({\mathbf P})$.}
\label{figw}
\end{figure*}
Now consider a partition $\Omega$ of $[n]$ with $\mathbf{n}(\Omega) =
(n_{\sigma(1)},n_{\sigma(2)},\ldots,n_{\sigma(k)})$ such that
\begin{eqnarray*}
\Omega\bigl([n]\bigr) &=& \bigl(1,2,\ldots,k,\sigma^{-1}(1),\ldots,\sigma^{-1}(1),\\
&&\hspace*{6pt}
\sigma^{-1}(2),\ldots,\sigma^{-1}(2),
\ldots,\\
&&\hspace*{60pt}\sigma^{-1}(k),\ldots,\sigma^{-1}(k)\bigr),
\end{eqnarray*}
where after the first $k$ elements $1,2,\ldots,k$, $\sigma^{-1}(i)$ is
repeated $n_i-1$ times for all $i=1,\ldots,k$. Then,
\begin{eqnarray*}
&&
p(n_{\sigma(1)},n_{\sigma(2)},\ldots,n_{\sigma(k)}) \\
&&\quad = \prod
_{i=2}^k p_i(
\mathbf{1}_{(i-1)})\times\prod_{i=k}^{n-1}
p_{\Omega(i+1)}\bigl(\mathbf{n}(\Omega_i)\bigr)
\\[-2pt]
&&\quad = \prod_{i=2}^k p_i(
\mathbf{1}_{(i-1)})\times\prod_{i=k}^{n-1}
p_{\sigma^{-1}(\Omega(i+1))} \bigl(\sigma\bigl(\mathbf{n}(\Omega_i)\bigr)\bigr)
\\[-2pt]
&&\quad = \prod_{i=2}^k p_i(
\mathbf{1}_{(i-1)})\times\prod_{i=k}^{n-1}
p_{\Pi(i+1)}\bigl(\mathbf{n}(\Pi_i)\bigr)
\\[-2pt]
&&\quad = p(n_1,\ldots,n_k),
\end{eqnarray*}
where the second equality follows from (\ref{eqsuffeppf}). This
completes the proof of the sufficient direction.
Finally, we show that every EPPF $p$ satisfies (\ref{eqeppfdef})
and (\ref{eqsuffeppf}). By Lemma~\ref{lemm1}, every EPPF satisfies
(\ref{eqeppfdef}). Condition (\ref{eq9a}) is true by the definition of
an EPPF, which includes the condition of symmetry in its arguments. And
(\ref{eq9a}) implies (\ref{eqsuffeppf}).
\end{pf}
\citet{FortiniLadelli00} prove results related to Theorem
\ref{theo1}. They provide sufficient conditions for a system of
predictive distributions $p(X_{n}\mid X_1,\ldots,X_{n-1})$,
$n=1,\ldots\,$, of a sequence of random variables $(X_i)$ that imply
exchangeability. The relation between these conditions and Theorem~\ref{theo1}
becomes apparent by constructing a sequence $(X_i)$ that
induces a $p$-distributed random partition of $\mathbb{N}$. Here, it is
implicitly assumed that $(X_i)$ is mapped to the unique partition such
that $i,j\in\mathbb{N}$ belong to the same subset if and only if
$X_i=X_j$.
A second, more general example, which extends the predictive structure
considered in Corollary~\ref{coro1}, is given by the\vadjust{\goodbreak} so-called Gibbs random
partitions.\break Within this class of models
\begin{equation}
\label{Gibbs} p(n_1, n_2, \ldots, n_k) =
V_{n,k} \prod_{i=1}^k
W_{n_i},
\end{equation}
where $(V_{n,k})$ and $(W_{n_i})$ are sequences of positive real
numbers. In this case the predictive probability of a novel species
is a function of the sample size $n$ and of the number of observed
species $k$. See \citet{Lijoi2007} for related distributional results
on Gibbs type models. \citet{GnedinPitman06} obtained sufficient
conditions for the sequences $(V_{n,k})$ and $(W_{n_i})$, which imply
that $p$ is an EPPF.
\section{SSMs Beyond the DP} \label{seclorenzo}
\subsection{\texorpdfstring{The $\operatorname{SSM}(p,\nu)$}{The SSM(p, nu)}}\label{secdirect}
We know that an SSM with a nonlinear PPF, that is, $p_j$ different from
the PPF of a DP, cannot be described as a function $p_j \propto
f(n_j)$ of $n_j$ only. It must be a more complicated function
$f(\mathbf{n})$. Alternatively, one could try to define an EPPF and deduce
the implied PPF. But directly specifying a symmetric function $p(\mathbf{n})$
such that it complies with (\ref{eqeppf}) is difficult. As a third
alternative we propose to consider the weights ${\mathbf P}= \{P_{h},
h=1,2,\ldots\}$ in (\ref{eqssp}).
Figure~\ref{figw}(a) illustrates $p({\mathbf P})$ for a DP model. The sharp
decline is typical.\vadjust{\goodbreak} A~few large weights account for most of the
probability mass. The stick breaking construction for a DP prior with
total mass $\theta$ implies $E(P_h) = \theta^{h-1} (1+\theta)^{-h}$. Such
geometrically decreasing mean weights are inappropriate to describe
prior information in many applications. The weights can be interpreted
as asymptotic relative cluster sizes. A~typical application of the DP
prior is, for example, a partition of patients in a clinical study into
clusters. However, if clusters correspond to disease subtypes defined
by variations of some biological process, then one would rather expect
a number of clusters with a priori comparable size. Many small clusters
with very few patients are implausible and would also be of little
clinical use. This leads us to propose the use of alternative SSMs.
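The geometric decay of the prior mean weights quoted above is easy to verify. A minimal sketch (Python; the truncation level and Monte Carlo sample size are arbitrary choices) computes $E(P_h)=\theta^{h-1}(1+\theta)^{-h}$ analytically and confirms it by simulating the stick-breaking construction.

```python
import numpy as np

theta = 1.0
H = 50                      # truncation level, for illustration only

# Analytic prior means under DP(theta) stick breaking:
# E(P_h) = theta^(h-1) / (1+theta)^h, a geometric decline in h.
mean_w = np.array([theta**(h - 1) / (1.0 + theta)**h for h in range(1, H + 1)])
print(mean_w[:4])           # 0.5, 0.25, 0.125, 0.0625 for theta = 1

# Monte Carlo confirmation: P_h = V_h * prod_{i<h} (1 - V_i), V_i ~ Beta(1, theta)
rng = np.random.default_rng(1)
V = rng.beta(1.0, theta, size=(50_000, H))
P = V.copy()
P[:, 1:] *= np.cumprod(1.0 - V, axis=1)[:, :-1]
mc_err = np.abs(P.mean(axis=0)[:4] - mean_w[:4]).max()
print(mc_err)               # small Monte Carlo error
```

For $\theta=1$ every mean weight is half the previous one, which is the sharp decline visible in panel (a) of the figure.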
Figure~\ref{figw}(b) shows an alternative probability mo\-del~$p({\mathbf P})$.
There are many ways to define $p({\mathbf P})$; we consider, for
$h=1,2,\ldots\,$,
\[
P_h \propto u_h \quad\mbox{or}\quad P_h =
\frac{u_h}{\sum_{i=1}^\infty u_i},
\]
where $u_h$ are independent and nonnegative random variables with
\begin{equation}
\label{condition1} \sum_{i=1}^\infty
u_i < \infty\quad \mbox{a.s.}
\end{equation}
A sufficient condition for (\ref{condition1}) is
\begin{equation}
\label{condition2} \sum_{i=1}^\infty
E(u_i) < \infty
\end{equation}
by the monotone convergence theorem. Note that when the unnormalized
random variables $u_h$ are defined as the sorted atoms of a
nonhomogeneous Poisson process on the positive real line, under mild
assumptions, the above $(P_h)$ construction coincides with the
Poisson--Kingman models. \citet{FergusonKlass72} provide a detailed
discussion on the outlined mapping of a Poisson process into a
sequence of unnormalized positive weights. In this particular case the
mean of the Poisson process has to satisfy minimal requirements
(see, e.g., \citecs{Pitman03}) to ensure that the sequence
$(P_i)$ is well defined.
As an illustrative example in the following discussion, we define, for
$h=1,2,\ldots\,$,
\begin{eqnarray}\label{eqex}
P_h \propto e^{X_h} \hspace*{140pt}\nonumber\\[-8pt]\\[-8pt]
&&\eqntext{\mbox{with } X_h \sim N
\bigl(\log\bigl(1-\bigl\{1+e^{b- a h}\bigr\}^{-1}\bigr),
\sigma^2 \bigr),}
\end{eqnarray}
where $a, b, \sigma^2$ are positive constants. The existence of such
random probabilities is guaranteed by (\ref{condition2}), which is easy
to check.
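The check of condition (\ref{condition2}) can be made concrete. In the sketch below (Python; the constants $a$, $b$, $\sigma^2$ are hypothetical choices, since the text leaves them generic), $E(u_h)=\exp(\mu_h+\sigma^2/2)$ for the lognormal $u_h=e^{X_h}$, and for $h>b/a$ the summand is dominated by a geometric term, so the series of means converges.

```python
import math

# Hypothetical constants for illustration; the text leaves a, b, sigma^2 generic.
a, b, sigma2 = 1.0, 5.0, 0.5

def mean_u(h):
    """E(u_h) for u_h = exp(X_h), X_h ~ N(mu_h, sigma2), with
    mu_h = log(1 - 1/(1 + e^(b - a h))), computed stably via log1p."""
    t = b - a * h
    mu_h = t - math.log1p(math.exp(t))
    return math.exp(mu_h + sigma2 / 2.0)

partial = sum(mean_u(h) for h in range(1, 200))

# For h > b/a the summand is bounded by exp(sigma2/2) e^(b - a h), so the
# tail beyond h = 199 is dominated by a geometric series:
tail = math.exp(sigma2 / 2.0) * math.exp(b - a * 200) / (1.0 - math.exp(-a))
print(partial, tail)   # finite partial sum, negligible tail bound
```

The first few means are of comparable size (the sigmoid is flat for $h \ll b/a$), which is exactly the S-shaped behavior discussed next.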
The S-shaped nature of the random distribution (\ref{eqex}), when
plotted against $h$, distinguishes it from the DP model. The first few
weights are a priori of equal size (before sorting). This is in
contrast to the stochastic ordering of the DP and the Pitman--Yor
process in general. In panel (a) of Figure~\ref{figw} the prior mean
of the sorted and unsorted weights is almost indistinguishable,
because the prior already implies strong stochastic ordering of the
weights.
The prior in Figure~\ref{figw}(b) reflects prior information of an
investigator who believes that there should be around 5 to 10 clusters
of comparable size in the population. This is in sharp contrast to the
(often implausible) assumption of one large dominant cluster and
geometrically smaller clusters that is reflected in panel (a). Prior
elicitation can exploit such readily interpretable implications of the
prior choice to propose models like~(\ref{eqex}).
We use $\operatorname{SSM}(p,\nu)$ to denote a SSM defined by $p({\mathbf P})$ for the
weights $P_h$ and $m_h \iid\nu$. The attraction of
defining the SSM through ${\mathbf P}$ is that by (\ref{eqssp}) any joint
probability model $p({\mathbf P})$ such that\break $P(\sum_h P_h=1)=1$ defines a proper
SSM. There are no additional constraints as for the PPF $p_j(\mathbf{n})$ or
the EPPF $p(\mathbf{n})$. However, we still need the implied PPF to implement
posterior inference and also to understand the implications of the
defined process. Thus, a practical use of this second approach requires
an algorithm to derive the PPF starting from an arbitrarily defined
$p({\mathbf P})$.
\subsection{An Algorithm to Determine the PPF}
\label{secppfalgo}
Recall definition (\ref{eqssp}) for an SSM random probability measure.
Assuming a proper SSM, we have
\begin{equation}\label{eqSSM}
G = \sum_{h=1}^{\infty} P_h
\delta_{m_h}.
\end{equation}
Let $\bP=(P_h, h \in\mathbb{N})$ denote the sequence of weights. Recall the
notation $\tilde{X}_j$ for the $j$th unique value in the SSS $\{X_i, i =1,
\ldots, n\}$. The algorithm requires indicators that match the $\tilde{X}_j$
with the $m_h$, that is, that match the clusters in the partition with the
point masses of the SSM. Let $\pi_j=h$ if $\tilde{X}_j=m_h$,
$j=1,\ldots,k_n$. In the following discussion it is important that the
latent indicators $\pi_j$ are only introduced up to $j=k_n$. Conditional
on $m_h$, $h \in\mathbb{N}$ and $\tilde{X}_j$, $j\in\mathbb{N}$, the indicators $\pi_j$
are deterministic. After marginalizing with respect to the $m_h$ or
with respect to the $\tilde{X}_j$, the indicators become latent variables.
Also, we use cluster membership indicators $s_i=j$ for $X_i=\tilde{X}_j$ to
simplify notation.\vadjust{\goodbreak} We use the convention of labeling clusters in the
order of appearance, that is, $s_1=1$ and $s_{i+1} \in
\{1,\ldots,k_i,k_i+1\}$.
In words, the algorithm proceeds as follows. We write the desired PPF
$p_j(\mathbf{n})$ as an expectation of the conditional probabilities
$p(X_{n+1}=\tilde{X}_j \mid\mathbf{n}, \pi, \bP)$. The expectation is with respect
to $p(\bP,\pi\mid\mathbf{n})$. Next we approximate the integral with
respect to $p(\bP,\pi\mid\mathbf{n})$ by a weighted Monte Carlo average
over samples $(\bPell,\pi^{(\ell)}) \sim p(\bPell) p(\pi^{(\ell)}\mid\bPell)$
from the prior. Note $\pi$ and $\bP$ together define the size-biased
permutation of $(P_j)$,
\[
\tilde{P}_j = P_{\pi_j},\quad j=1,2,\ldots.
\]
The size-biased permutation $(\tilde{P}_j)$ of $(P_j)$ is a resampled
version of $(P_j)$ where sampling is done with probability proportional
to $P_j$ and without replacement. Once the sequence $(P_j)$ is
simulated, it is computationally straightforward to get
$(\tilde{P}_j)$. Note also that the properties of the random partition
can be characterized by the distribution on $\bP$ only. The point
masses $m_h$ are not required.
Using the cluster membership indicators $s_i$ and the size-biased
probabilities $\tilde{P}_j$, we write the desired PPF as
\begin{eqnarray}\label{eqppf}\qquad
p_j(\mathbf{n}) &=& p(s_{n+1}=j \mid\mathbf{n})
\nonumber
\\
& = & \int p(s_{n+1}=j \mid\mathbf{n}, \tilde{\bP}) p( \tilde{\bP} \mid\mathbf{n})
\,\mathrm{d}\tilde{\bP}
\nonumber\\[-8pt]\\[-8pt]
&\propto& \int p(s_{n+1}=j \mid\mathbf{n}, \tilde{\bP}) p(\mathbf{n}\mid\tilde{
\bP}) p(\tilde{\bP}) \,\mathrm{d}\tilde{\bP}
\nonumber
\\
& \approx& \frac1L \sum_{\ell=1}^L p\bigl(s_{n+1}=j \mid\mathbf{n},\tilde{\bP}^{(\ell)}\bigr) p\bigl(\mathbf{n}
\mid\tilde{\bP}^{(\ell)}\bigr).\nonumber
\end{eqnarray}
The Monte Carlo sample $\tilde{\bP}^{(\ell)}$ or, equivalently,\break $(\bPell,{\pi}^{(\ell)})$,
is obtained by first generating $\bPell\sim p(\bP)$ and then
$p(\pi^{(\ell)}_j=h \mid\bPell, \pi^{(\ell)}_1,\ldots,\pi^{(\ell)}_{j-1}) \propto
P^{(\ell)}_h$, $h \notin\{\pi^{(\ell)}_1,\ldots,\pi^{(\ell)}_{j-1}\}$. In actual
implementation the elements of $\bPell$ and ${\pi}^{(\ell)}$ are only
generated as and when needed.
The terms in the last line of (\ref{eqppf}) are easily evaluated. The
first factor is given as predictive cluster membership probabilities
\begin{eqnarray}
\label{eqppfi}
&&
p(s_{n+1}=j \mid\mathbf{n},\tilde{\bP})\nonumber\\[-8pt]\\[-8pt]
&&\quad = \cases{
\tilde{P}_j, & $j=1,\ldots,k_n$,
\vspace*{2pt}\cr
\displaystyle \Biggl(1-\sum
_{j=1}^{k_n} \tilde{P}_j\Biggr), &
$j=k_n+1$.}\nonumber
\end{eqnarray}
The second factor is evaluated as
\[
p(\mathbf{n}\mid\tilde{\bP}) = \prod_{j=1}^k
\tilde{P}_j^{n_j-1} \prod_{j=1}^{k-1}
\Biggl(1-\sum_{i=1}^{j}
\tilde{P}_i\Biggr).
\]
Note that the second factor coincides with the previously mentioned
[cf. expression (\ref{eqpartialcar})] Pitman's representation result
for partially exchangeable partitions.
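The algorithm can be sketched in a few lines. The code below (illustrative Python) uses truncated DP stick breaking as a stand-in for $p({\mathbf P})$, since any law of the weights could be substituted, draws the size-biased entries $\tilde P_1,\ldots,\tilde P_k$, and forms the weighted Monte Carlo average for the PPF. With DP weights the estimate should approximate the Polya urn probabilities, which gives a simple end-to-end check; truncation level, seed and Monte Carlo size are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(2)
theta = 1.0

def draw_weights(H=100):
    """One draw of P ~ p(P). As a stand-in we use DP(theta) stick breaking;
    any weight distribution with sum_h P_h = 1 a.s. could be plugged in."""
    V = rng.beta(1.0, theta, size=H)
    P = V * np.concatenate(([1.0], np.cumprod(1.0 - V[:-1])))
    return P / P.sum()                               # renormalize the truncation

def size_biased_head(P, k):
    """First k entries of the size-biased permutation of P: draw without
    replacement with probabilities proportional to the weights."""
    idx = rng.choice(len(P), size=k, replace=False, p=P)
    return P[idx]

def ppf_estimate(n, L=5000):
    """Weighted Monte Carlo estimate of p_j(n), j = 1,...,k+1, following the
    Monte Carlo average displayed above."""
    n = np.asarray(n, dtype=float)
    k = len(n)
    num = np.zeros(k + 1)
    for _ in range(L):
        Pt = size_biased_head(draw_weights(), k)     # tilde P_1,...,tilde P_k
        # second factor: p(n | tilde P)
        lik = np.prod(Pt ** (n - 1.0)) * np.prod(1.0 - np.cumsum(Pt)[:-1])
        # first factor: predictive cluster membership probabilities
        pred = np.append(Pt, max(1.0 - Pt.sum(), 0.0))
        num += lik * pred
    return num / num.sum()

est = ppf_estimate((3, 1))
print(est)   # should be close to the DP Polya urn (0.6, 0.2, 0.2) for theta = 1
```

In an actual implementation the weights and indicators would be generated only as needed, as noted above; the fixed truncation here is for clarity only.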
\begin{figure*}
\caption{Panel \textup{(a)}: predictive probabilities $p(s_{i+1}=j \mid
{\mathbf s})$ plotted against cluster size $n_j$.}
\label{figppf}
\end{figure*}
\begin{figure*}
\caption{Posterior estimated sampling model \mbox{$\overline{F}$}.}
\label{figfbar}
\end{figure*}
Figure~\ref{figppf} shows an example. The figure plots\break $p(s_{i+1}=j
\mid{\mathbf s})$ against cluster size $n_j$. In contrast, the DP Polya urn
would imply a straight line. The plotted probabilities are averaged
with respect to all other features of ${\mathbf s}$, in particular, the
multiplicity of cluster sizes, etc. The figure also shows probabilities
(\ref{eqppfi}) for specific simulations.
\subsection{A Simulation Example}
\label{secex}
Many data analysis applications of the DP prior are based on DP
mixtures of normals as models for a random probability measure $F$.
Applications include density estimation, random effects distributions,
generalizations of a probit link, etc. We consider a stylized example
that is chosen to mimic typical features of such models.
\begin{figure*}
\caption{Co-clustering probabilities
$p(s_i=s_j \mid\mathrm{data})$.}
\label{figsij}
\end{figure*}
\end{figure*}
In this section we show posterior inference conditional on the data
set $(y_1, y_2,\ldots,y_9)= (-4,-3,-2,\break\ldots,4)$. The use of these data
highlights the differences in posterior inference between the SSM and
DP priors. Assume $y_i \iid F$, with a semi-parametric mixture of
normal prior on $F$,
\[
y_i \iid F\quad \mbox{with } F(y_i) = \int N
\bigl(y_i; \mu,\sigma^2\bigr) \,\mathrm{d} G\bigl(\mu,
\sigma^2\bigr).
\]
Here $N(x; m,s^2)$ denotes a normal distribution with moments
$(m,s^2)$ for the random variable $x$. We estimate $F$ under two
alternative priors,
\[
G \sim\operatorname{SSM}(p,\nu) \quad\mbox{or}\quad G \sim\operatorname{DP}(M,\nu).
\]
The distribution $p$ of the weights for the $\operatorname{SSM}(p,\cdot)$ prior is
defined as in (\ref{eqex}). The total mass parameter $M$ in the DP
prior is fixed to match the prior mean number of clusters, $E(k_n)$,
implied by (\ref{eqex}). We find $M=2.83$. Let $\operatorname{Ga}(x; a,b)$ indicate
that the random variable $x$ has a Gamma distribution with shape
parameter $a$ and inverse scale parameter $b$. For both prior models we
use
\[
\nu\bigl(\mu,1/\sigma^2\bigr)= N\bigl(\mu; \mu_0, c
\sigma^2\bigr) \operatorname{Ga}\bigl(1/\sigma^2; a/2,b/2\bigr).
\]
We fix $\mu_0=0$, $c=10$ and $a=b=4$. The model can alternatively be
written as $y_i \sim N(\mu_i,\sigma_i^2)$ and
$X_i=(\mu_i,1/\sigma^2_i) \sim G$.
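Matching the total mass to a prior mean number of clusters can be done by a one-dimensional search, since under the DP $E(k_n)=\sum_{i=1}^n M/(M+i-1)$ is increasing in $M$. A minimal sketch (Python; the target value below is hypothetical, standing in for the prior mean implied by the SSM):

```python
# Under DP(M), the prior mean number of clusters in a sample of size n is
# E(k_n) = sum_{i=1}^{n} M / (M + i - 1), which is increasing in M.

def dp_mean_clusters(M, n):
    return sum(M / (M + i) for i in range(n))

def match_mass(target, n, lo=1e-6, hi=1e3, iters=200):
    """Bisection for the total mass M with dp_mean_clusters(M, n) = target."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if dp_mean_clusters(mid, n) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

n = 9
# Hypothetical target; in the analysis it would be the prior mean E(k_n)
# implied by the SSM prior. Here we just check that the search recovers M.
target = dp_mean_clusters(2.83, n)
print(round(match_mass(target, n), 2))   # 2.83
```

The SSM side of the calibration would replace `target` by a Monte Carlo estimate of $E(k_n)$ under the weights model of the previous section.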
Figures~\ref{figfbar} and~\ref{figsij} show some inference summaries.
Inference is based on Markov chain Monte Carlo (MCMC) posterior
simulation with $1000$ iterations. Posterior simulation is for
$(s_1,\ldots,s_n)$ only. The clus\-ter-specific parameters $(\tilde
\mu_j,\tilde\sigma_j^2)$, $j=1,\ldots,k_n$, are analytically
marginalized. One of the transition probabilities (Gibbs sampler) in
the MCMC requires the PPF under $\operatorname{SSM}(p,\nu)$. It is evaluated using
(\ref{eqppf}).\vadjust{\goodbreak}
Figure~\ref{figfbar} shows the posterior estimated sampling
distributions $F$. The figure highlights a limitation of the DP prior.
The single total mass parameter $M$ controls both the number of
clusters and the prior precision. A small value for $M$ favors a small
number of clusters and implies low prior uncertainty. Large $M$ implies
the opposite. Also, we already illustrated in Figure~\ref{figw} that
the DP prior implies stochastically ordered cluster sizes, whereas the
chosen SSM prior allows for many approximately equal size clusters. The
equally spaced grid data $(y_1,\ldots,y_n)$ implies a likelihood that
favors a moderate number of approximately equal size clusters. The
posterior distribution on the random partition is shown in Figure
\ref{figsij}. Under the SSM prior the posterior supports a moderate
number of similar size clusters. In contrast, the DP prior shrinks the
posterior toward a few dominant clusters. Let $n_{(1)} \equiv
\max_{j=1,\ldots,k_n} n_j$ denote the leading cluster size. Related
evidence can be seen in the marginal posterior distribution (not shown)
of $k_n$ and $n_{(1)}$. We find $E(k_n \mid\mathrm{data})=6.4$ under the
SSM model versus $E(k_n \mid\mathrm{data})=5.1$ under the DP prior. The
marginal posterior modes are $k_n=6$ under the SSM prior and $k_n=5$
under the DP prior. The marginal posterior mode for $n_{(1)}$ is
$n_{(1)}=2$ under the SSM prior and $n_{(1)}=3$ under the DP prior.
\subsection{Analysis of Sarcoma Data}
We analyze data from a small phase II clinical trial for sarcoma
patients that was carried out at the M.~D. Anderson Cancer Center. The
study was designed to assess efficacy of a treatment for sarcoma
patients across different subtypes. We consider the data accrued for
$8$ disease subtypes that were classified as having overall
intermediate prognosis, as presented in Table~\ref{tabdata}. Each
table entry indicates the total number of patients for each sarcoma
subtype and the number of patients who reported a treatment success.
See further discussion in \citet{leonnoveloetal12}.
\begin{table}
\tabcolsep=0pt
\caption{Sarcoma data. For each disease subtype (top row)\break we report
the total
number of patients and the number\break of treatment successes. See
Le{\'o}n-Novelo et al. (\citeyear{leonnoveloetal12})\break
for a discussion of disease subtypes}\label{tabdata}
\begin{tabular*}{\tablewidth}{@{\extracolsep{\fill}}lcccccccc@{}}
\hline
\textbf{Sarcoma} &
\textbf{LEI} &
\textbf{LIP} &
\textbf{MFH} &
\textbf{OST} &
\textbf{Syn} &
\textbf{Ang} &
\textbf{MPNST} &
\textbf{Fib} \\
\hline
& $6/28$ & $7/29$ & $3/29$ & $5/26$ & $3/20$ & $2/15$ & $1/5$ & $1/12$\\
\hline
\end{tabular*}
\end{table}
\begin{figure*}
\caption{Posterior probabilities of pairwise co-clustering, $p_{ij}$.}
\label{figpairwise}
\end{figure*}
One limitation of these data is the small sample size, which prevents
separate analysis for each disease subtype. On the other hand, it is
not clear that we should simply treat the subtypes as exchangeable. We
deal with these issues by modeling each table entry as a binomial
response and adopt a hierarchical framework for the success
probabilities. The hierarchical model includes a random partition of
the subtypes. Conditional on a given partition, data across all
subtypes in the same cluster are pooled, thus allowing more precise
inference on the common success probabilities for all subtypes in this
cluster. We consider two alternative models for the random partition,
based on a $\operatorname{DP}(M,\nu)$ prior versus a $\operatorname{SSM}(p,\nu)$ prior.
Specifically, we consider the following models:
\begin{eqnarray*}
y_i|\pi_i & \sim& \operatorname{Bin}(n_i,
\pi_i),
\\
\pi_i|G & \sim& G,
\\
G & \sim& \operatorname{DP}(M, \nu) \quad\mbox{or}\quad \operatorname{SSM}(p, \nu),
\end{eqnarray*}
where $\nu$ is a diffuse probability measure on $[0,1]$ and $p$ is
again defined as in (\ref{eqex}).
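Conditional on a partition, the pooled binomial counts in each cluster can be integrated against a Beta base measure in closed form. The sketch below (illustrative Python; the $\operatorname{Beta}(0.15,0.85)$ base measure is the one used in the analysis, and the two candidate partitions are arbitrary) computes the resulting log marginal likelihood up to a partition-independent binomial constant.

```python
from math import lgamma

# Sarcoma data: (successes, patients) per subtype, from the table above.
data = [(6, 28), (7, 29), (3, 29), (5, 26), (3, 20), (2, 15), (1, 5), (1, 12)]
a0, b0 = 0.15, 0.85            # Beta base measure nu used in the analysis

def lbeta(x, y):
    return lgamma(x) + lgamma(y) - lgamma(x + y)

def log_marginal(partition):
    """log p(y | partition), up to a partition-independent binomial constant:
    subtypes in a cluster share a success probability pi ~ Beta(a0, b0),
    which is integrated out analytically (a beta-binomial marginal)."""
    out = 0.0
    for cluster in partition:
        y = sum(data[i][0] for i in cluster)
        n = sum(data[i][1] for i in cluster)
        out += lbeta(a0 + y, b0 + n - y) - lbeta(a0, b0)
    return out

l_pool = log_marginal([[0, 1, 2, 3, 4, 5, 6, 7]])   # one big cluster
l_sing = log_marginal([[i] for i in range(8)])      # all singletons
print(l_pool, l_sing)   # log marginal likelihoods of two candidate partitions
```

Multiplying such cluster marginals by the prior partition probabilities (DP or SSM) gives the unnormalized posterior over partitions, which is the quantity explored by the MCMC.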
The hierarchical structure of the data and the aim of clustering
subpopulations in order to achieve borrowing of strength are in
line with a number of applied contributions. Several of these,
for instance, are meta analyses of medical studies
(Berry and\break Christensen, \citeyear{berrychristensen79}), with subpopulations defined by medical
institutions or by clinical trials. In most cases the application of
the DP is chosen for computational advantages and (in some cases) due
to the easy implementation of strategies for prior specification
\citep{LIU96}. With a small number of studies, as in our example, an ad
hoc construction of an alternative SSM combines hierarchical modeling
with advantageous posterior clustering. The main advantage is the
possibility of avoiding the exponential decrease typical of the ordered
DP atoms.
In this particular analysis, we used $M=2.83$ and chose $\nu$ to be
the $\operatorname{Beta}(0.15,0.85)$ distribution, which was designed to match the
prior mean of the observed data and has prior equivalent sample size
of~$1$. The total mass $M=2.83$ for the DP prior was selected to
achieve matching prior expected number of clusters under the two
models. The DP prior on $G$ favors the formation of large clusters
(with matched prior mean number of clusters) which leads to less
posterior shrinkage of cluster-specific means. In contrast, under the
SSM prior the posterior puts more weight on several smaller clusters.
Figure~\ref{figpairwise} shows the estimated posterior probabilities
of pairwise co-clustering for model (\ref{eqex}) in the left panel
and for the DP case (right panel). Clearly, compared to the DP
model, the chosen SSM induces a posterior distribution with more
clusters, as reflected in the lower posterior probabilities
$p(s_i=\break s_j\mid y)$ for all $i,j$.
\begin{figure*}
\caption{Posterior distribution on the number of clusters.}
\label{figncl}
\end{figure*}
\begin{figure*}
\caption{Posterior distribution on the size of the largest cluster.}
\label{figlargest}
\end{figure*}
Figure~\ref{figncl} shows the posterior distribution of the number of
clusters under the SSM and DP mixture models. The posterior under the
DP (right panel) includes high probability for a single cluster, $k=1$, with
$n_1=8$. The high posterior probability for few large clusters also
implies high posterior probabilities $\widehat{p}_{ij}$ of co-clustering.
Under the SSM (left panel) the posterior distribution on $\rho$ retains
substantial uncertainty. Finally, the same pattern is confirmed in the
posterior distribution of sizes of the largest cluster, $p(n_1 \mid
y)$, shown in Figure~\ref{figlargest}. The high posterior probability
for a single large cluster of all $n=8$ sarcoma subtypes seems
unreasonable for the given data.
\section{Discussion}
We have reviewed alternative definitions of SSMs. We also reviewed the
fact that for any SSM with a PPF of the form $p_j(\mathbf{n}) \propto f(n_j)$
the function $f$ must necessarily be linear in $n_j$, and provided a new
elementary proof. In other words, the PPF $p_j(\mathbf{n})$ depends on the current data
only through the cluster sizes. The number of clusters and any other
aspect of the partition $\Pi_n$ do not change the prediction. This is
an excessively simplifying assumption for most data analysis problems.
We provide an alternative class of models that allows for more general
PPFs. These models are obtained by directly specifying the
distribution of unnormalized weights $u_h$. The proposed approach for
defining SSMs allows the incorporation of the desired qualitative
properties concerning the decrease of the ordered cluster
cardinalities. This flexibility comes at the cost of additional
computation required to implement the algorithm described in
Section~\ref{secppfalgo}, compared to the standard approaches under
DP-based models. Nevertheless, the benefits obtained in the case of
data sets that require more flexible models compensate for the increase in
computational effort. A different strategy for constructing discrete
random distributions has been discussed in \citet
{trippafavaro12}. In
several applications, the scope for which SSMs are to be used suggests
these \textit{desired qualitative properties}. Nonetheless, we see the
definition of a theoretical framework supporting the selection of a SSM
as an open problem.
R code for an implementation of posterior inference under the proposed
new model is available at \url{http://math.utexas.edu/users/pmueller/}.
\section*{Acknowledgments}
Jaeyong Lee was supported by the National Research Foundation of Korea
(NRF) grant funded by the Korea government (MEST) (No. 2011-0030811).
Fernando Quintana was
supported by Grant\break FONDECYT 1100010. Peter M\"uller was partially
funded by Grant NIH/NCI CA075981.
\end{document}
\begin{document}
\title{
A gradient estimate for solutions
to parabolic equations
with discontinuous coefficients
}
\begin{abstract}
Li--Vogelius and Li--Nirenberg gave
gradient estimates for solutions of strongly elliptic equations and systems of divergence form
with piecewise smooth coefficients, respectively. The discontinuities of the coefficients are assumed to occur along
manifolds of codimension 1, which we call {\it manifolds of discontinuity}. Their gradient estimates are independent
of the distances between the manifolds of discontinuity.
In this paper, we give
a parabolic version of their results.
That is, we give a gradient estimate
for parabolic equations of divergence form
with piecewise smooth coefficients. The coefficients are assumed to be independent of time, and their discontinuities are of the same
form as in the elliptic equations above.
As an application of this estimate,
we also give a pointwise gradient estimate
for the fundamental solution
of a parabolic operator
with piecewise smooth coefficients. Both gradient estimates are independent
of the distances between the manifolds of discontinuity.
\end{abstract}
\section{Introduction.}\label{section:introduction}
For strongly elliptic, second order scalar equations with real coefficients, it is well known that solutions are
H{\"o}lder continuous even when the coefficients are merely bounded measurable functions.
However, solutions are not Lipschitz continuous in general.
For example,
Piccinini-Spagnolo~\cite[p.\ 396, Example 1]{PiccininiSpagnolo}
and
Meyers~\cite[p.\ 204]{Meyers}
gave the following example:
\begin{example}{\rm (\cite{Meyers}, \cite{PiccininiSpagnolo})}
Let $B_1:=\{x \in \mathbb{R}^{n} : \lvert x \rvert < 1\}$
and each $a_{i j} \in L^\infty (B_1)$ be defined as
\[
a_{1 1}
= \frac{M x_{1}^{2} + x_{2}^{2}}{\lvert x \rvert^{2}} , \quad
a_{2 2} = \frac{x_{1}^{2} + M x_{2}^{2}}{\lvert x \rvert^{2}} , \quad
a_{1 2} = a_{2 1} =
\frac{(M-1) x_{1} x_{2}}{\lvert x \rvert^{2}}
\]
with a constant $M>1$.
Then, if we define $u$ as
\begin{equation}\label{eq:example of u}
u (x) =
\lvert x \rvert^{1 / \sqrt{M}}
\frac{x_{1}}{\lvert x \rvert},
\end{equation}
it is easy to see that the H{\"o}lder exponent of $u$ is
at most $1 / \sqrt{M}$:
indeed, for $\overline{x} = ( x_{1} , 0 )$ we have
\begin{math}
\lvert u( \overline{x} ) - u(0) \rvert
= \lvert \overline{x} \rvert^{1 / \sqrt{M}}
\end{math}, and hence
\[
\frac{\lvert u( \overline{x} ) - u(0) \rvert}{
\lvert \overline{x} \rvert^{( 1 / \sqrt{M} ) + \varepsilon}
} = \lvert \overline{x} \rvert^{- \varepsilon}
\rightarrow + \infty \mbox{ as } \overline{x} \rightarrow 0
\]
for any $\varepsilon > 0$.
Moreover, $u$ satisfies the strongly elliptic scalar equation with real coefficients
\begin{equation}\label{eq:elliptic equation}
\sum_{i,j=1}^{2}
\frac{\partial}{\partial x_{i}} \left(
a_{i j} \frac{\partial u}{\partial x_{j}}
\right) = 0.
\end{equation}
The same applies to the parabolic equation
\begin{equation}\label{eq:parabolic equation}
\frac{\partial u}{\partial t}
- \sum_{i,j=1}^{2}
\frac{\partial}{\partial x_{i}} \left(
a_{i j} \frac{\partial u}{\partial x_{j}}
\right) = 0,
\end{equation}
because $u$ given by \eqref{eq:example of u} satisfies this equation.
\end{example}
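The claim that $u$ solves \eqref{eq:elliptic equation} can also be checked symbolically. The following SymPy sketch (not part of the original argument; the sample value $M = 4$ and the evaluation points are chosen only for illustration) computes the divergence $\sum_{i,j} \partial_{x_i} ( a_{ij} \, \partial_{x_j} u )$ and confirms that it vanishes away from the origin:

```python
# Sketch (illustration only): verify that u(x) = |x|^{1/sqrt(M)} x1/|x|
# solves  sum_{i,j} d/dx_i ( a_ij du/dx_j ) = 0  away from the origin,
# for the sample value M = 4 (so that 1/sqrt(M) = 1/2 is rational).
import sympy as sp

x1, x2 = sp.symbols('x1 x2', real=True)
M = 4
r2 = x1**2 + x2**2  # |x|^2

# coefficients of the Meyers / Piccinini-Spagnolo example
a11 = (M*x1**2 + x2**2) / r2
a22 = (x1**2 + M*x2**2) / r2
a12 = (M - 1)*x1*x2 / r2

# u = |x|^{1/sqrt(M)} * x1/|x|
u = r2**(sp.Rational(1, 2)/sp.sqrt(M)) * x1 / sp.sqrt(r2)

# divergence of the flux A * grad(u)
flux1 = a11*sp.diff(u, x1) + a12*sp.diff(u, x2)
flux2 = a12*sp.diff(u, x1) + a22*sp.diff(u, x2)
div = sp.together(sp.diff(flux1, x1) + sp.diff(flux2, x2))

# residual at sample points x != 0 should vanish up to rounding
points = [(0.7, -1.2), (2.0, 0.3), (-1.1, 1.5)]
max_residual = max(abs(float(div.evalf(subs={x1: a, x2: b})))
                   for a, b in points)
print(max_residual)
```

The same computation works for any constant $M > 1$; the choice $M = 4$ merely keeps the exponents rational.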
This example shows that we cannot expect gradient estimates of solutions to equations \eqref{eq:elliptic equation} and \eqref{eq:parabolic equation}
in the case $a_{ij}\in L^\infty(B_1)$, but we may have the estimates in the case of piecewise $C^\mu$ (see \eqref{eq:piecewiseCmu} below) coefficients.
The fact that the gradient estimate of solutions is independent of the distances between manifolds of discontinuity was first observed numerically by
Babu{\v{s}}ka-Andersson-Smith-Levin~\cite{Babuska}
for certain homogeneous isotropic linear systems of elasticity;
that is, $\lvert \nabla u \rvert$
is bounded independently of the distances between manifolds of discontinuity. They believed this numerically observed property of solutions to be mathematically true. This is
the so-called Babu{\v{s}}ka conjecture. Recently, \cite{LiVogelius} and \cite{LiNirenberg}
gave mathematical proofs of this conjecture. In elasticity, a small static deformation of an elastic medium with inclusions can be described by an elliptic system of divergence form with piecewise smooth coefficients; the discontinuities of the coefficients form the boundaries of the inclusions.
A similar physical interpretation is also possible for heat conductors.
Our main theorem, Theorem~\ref{theorem:main} below, ensures that
this property also holds for parabolic equations of the form \eqref{eq:parabolic equation}. The result of \cite{LiVogelius} and \cite{LiNirenberg} for scalar equations is stated below as Theorem~\ref{theorem:LN}.
In order to state our main theorem, we begin by introducing some notation which will be used throughout this paper.
Let $D \subset \mathbb{R}^{n}$ be a bounded domain
with a $C^{1, \alpha}$ boundary for some $0 < \alpha < 1$,
and suppose that $D$ contains $L$ disjoint subdomains
$D_{1} , \ldots , D_{L}$
with $C^{1, \alpha}$ boundaries, i.e.
$D = ( \bigcup_{m=1}^{L} \overline{D_{m}} ) \setminus \partial D$;
we also assume that
$\overline{D_{m}} \subset D$ for $1 \leq m \leq L-1$. Physically, $D$ is a material and the $D_m\,(1\le m\le L-1)$ are
regarded as inclusions in $D$.
We define the $C^{1, \alpha}$ norm
(resp.\ $C^{1, \alpha}$ seminorm)
of $C^{1, \alpha}$ domain $D_{m}$
in the same way as in~\cite{LiNirenberg},
that is, as the largest positive number $a$
such that in the $a$-neighborhood of every point of
$\partial D_{m}$, identified as $0$
after a possible translation and rotation
of the coordinates so that $x_{n} = 0$ is the tangent to
$\partial D_{m}$ at $0$,
$\partial D_{m}$ is given by the graph of a $C^{1, \alpha}$ function $\psi_{m}$,
defined in $\lvert x^{\prime} \rvert < 2 a$
($x^{\prime} = ( x_{1} , \ldots , x_{n-1} )$),
the $2 a$-neighborhood of $0$ in the tangent plane, and it satisfies the estimate
\begin{math}
\lVert \psi_{m} \rVert_{
C^{1, \alpha} ( \lvert x^{\prime} \rvert < 2 a )
} \leq 1 / a
\end{math}
(resp.\
\begin{math}
[ \psi_{m} ]_{
C^{1, \alpha} ( \lvert x^{\prime} \rvert < 2 a )
} \leq 1 / a
\end{math}),
where
\begin{align*}
[ \psi ]_{
C^{1, \alpha} ( \lvert x^{\prime} \rvert < 2 a )
}
& := \sup_{
\lvert x^{\prime} \rvert , \lvert \xi^{\prime} \rvert < 2 a
} \frac{\lvert
\nabla^{\prime} \psi ( x^{\prime} )
- \nabla^{\prime} \psi ( \xi^{\prime} )
\rvert}{\lvert x^{\prime} - \xi^{\prime} \rvert^{\alpha}} ,
\displaybreak[1] \\
\lVert \psi \rVert_{
C^{1, \alpha} ( \lvert x^{\prime} \rvert < 2 a )
}
& := \lVert \psi \rVert_{C^{1} ( \lvert x^{\prime} \rvert < 2 a )}
+ [ \psi ]_{
C^{1, \alpha} ( \lvert x^{\prime} \rvert < 2 a )
} .
\end{align*}
Further, let $( a_{i j} )$ be a symmetric, positive definite matrix-valued function
defined on $D$ satisfying
\begin{equation}\label{eq:aijxiixij}
\lambda \lvert \xi \rvert^{2}
\leq \sum_{i,j=1}^{n} a_{i j} (x) \xi_{i} \xi_{j}
\leq \Lambda \lvert \xi \rvert^{2}
\quad \mbox{for all } x \in D , \ \xi \in \mathbb{R}^{n} ,
\end{equation}
where $0 < \lambda \leq \Lambda$ are constants.
Here each $a_{i j}$ is piecewise $C^{\mu}$ in $D$,
$0 < \mu < 1$, that is
\begin{equation}\label{eq:piecewiseCmu}
a_{i j} (x) = a_{i j}^{(m)} (x) \mbox{ for } x \in D_{m} , \
1 \leq m \leq L
\end{equation}
with $a_{i j}^{(m)} \in C^{\mu} ( \overline{D_{m}} )$.
As we have already mentioned above, we will discuss in this paper a gradient estimate
for solutions to parabolic equations
with piecewise smooth coefficients.
Our result is a parabolic version
of the results of Li-Vogelius~\cite{LiVogelius} and of the scalar-equation case of Li-Nirenberg~\cite{LiNirenberg}.
They showed that solutions $u\in H^1(D)$
to the elliptic equation
\begin{equation}\label{eq:elliptic}
\sum_{i,j=1}^{n} \frac{\partial}{\partial x_{i}}
\left( a_{i j} \frac{\partial u}{\partial x_{j}} \right)
= h + \sum_{i=1}^{n} \frac{\partial g_{i}}{\partial x_{i}},
\end{equation}
where $h\in L^\infty(D)$ and each $g_i$ is defined in $D$ so that $g_i|_{D_m}$ $(1\le m\le L)$ has a continuous extension in $C^\mu(\overline{D_m})$, $0<\mu<1$, up to $\partial D_m$, satisfy global $W^{1, \infty}$
and piecewise $C^{1, \alpha^{\prime}}$ estimates (see \eqref{eq:ellipticest} below).
These estimates are independent of the distances
between inclusions
when a material has inclusions.
We first give the result of Li-Nirenberg~\cite{LiNirenberg} for scalar equations.
\begin{theorem}[{\cite[Theorem~1.1]{LiNirenberg}}]\label{theorem:LN}
For any $\varepsilon > 0$,
there exists a constant $C_{\sharp} > 0$ such that
for any $\alpha^{\prime}$ satisfying
\[
0 < \alpha^{\prime}
< \min \left\{ \mu , \frac{\alpha}{2 ( \alpha + 1 )} \right\} ,
\]
we have
\begin{equation}\label{eq:ellipticest}
\sum_{m=1}^{L} \lVert u \rVert_{
C^{1, \alpha^{\prime}} ( \overline{D_{m}} \cap D_{\varepsilon} )
} \leq C_{\sharp} \left(
\lVert u \rVert_{L^{2} (D)}
+ \lVert h \rVert_{L^{\infty} (D)}
+ \sum_{m=1}^{L} \sum_{i=1}^{n} \lVert g_{i} \rVert_{
C^{\alpha^{\prime}} ( \overline{D_{m}} )
}
\right) ,
\end{equation}
where we denote
\[
D_{\varepsilon} := \{
x \in D : \mathop{\mathrm{dist}}\nolimits ( x, \partial D ) > \varepsilon
\}
\]
and the positive constant $C_{\sharp}$ depends only on
\begin{math}
n, L, \mu, \alpha , \varepsilon , \lambda , \Lambda ,
\lVert a_{i j} \rVert_{C^{\alpha^{\prime}} ( \overline{D_{m}})}
\end{math}
and the $C^{1, \alpha^{\prime}}$ norms of $D_{m}$.
\end{theorem}
\begin{remark}
The constant $C_{\sharp} > 0$
is independent of the distances between inclusions $D_{m}$.
Therefore, the estimate (\ref{eq:ellipticest}) holds
even in the case that some inclusions touch other inclusions,
as in Figure~\ref{figure:1}.
\end{remark}
\begin{figure}
\caption{The case that an inclusion touches another inclusion
($L=7$).}
\label{figure:1}
\end{figure}
Now, we consider the parabolic equation
\begin{equation}\label{eq:parabolic}
\frac{\partial u}{\partial t}
- \sum_{i,j=1}^{n} \frac{\partial}{\partial x_{i}}
\left( a_{i j} \frac{\partial u}{\partial x_{j}} \right)
= f - \sum_{i=1}^{n} \frac{\partial f_{i}}{\partial x_{i}}
\mbox{ in } Q,
\end{equation}
where
\begin{align*}
& f\in L^\infty(Q),\,\frac{\partial f}{\partial t}\in L^\kappa(Q),
\displaybreak[1] \\
& f_{i}\in L^p(Q),\,\frac{\partial f_i}{\partial t}\in L^p(Q) \mbox{ and }
f_{i} = f_{i}^{(m)} \mbox{ on } D_{m} \times (0,T],
\end{align*}
with $p>n+2$, $\kappa = p(n+2)/(n+2+p)$,
$Q := D \times (0,T]$
and
$f_{i}^{(m)} \in L^{\infty} ( 0, T; C^{\mu} ( \overline{D_{m}} ) ) $.
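As a side remark (easily checked, though not stated explicitly above), the exponent $\kappa$ satisfies a H\"older-conjugate relation:

```latex
% kappa = p(n+2)/(n+2+p) is equivalent to
\[
\frac{1}{\kappa} = \frac{1}{p} + \frac{1}{n+2} ,
\]
% so a product of an L^p(Q) function and an L^{n+2}(Q) function
% lies in L^kappa(Q); in particular kappa < p.
```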
Now we define a weak solution to the equation (\ref{eq:parabolic}).
\begin{definition}
We call
\begin{math}
u \in V_{2}^{1,0} (Q)
:= L^{2} ( 0,T; H^{1} (D) )
\cap C ( [0,T]; L^{2} (D) )
\end{math}
a weak solution to the equation (\ref{eq:parabolic})
when
\begin{align}
& \int_{D} u( x, t^{\prime} ) \, \varphi ( x, t^{\prime} ) \, d x
- \int_{0}^{t^{\prime}} \int_{D}
u( x, t ) \, \frac{\partial \varphi}{\partial t} ( x, t ) \,
d x \, d t \notag \\
& \mbox{}
+ \int_{0}^{t^{\prime}} \int_{D}
\sum_{i,j=1}^{n} a_{i j} (x) \,
\frac{\partial u}{\partial x_{j}} (x,t) \,
\frac{\partial \varphi}{\partial x_{i}} (x,t) \,
d x \, d t \notag \\
& = \int_{0}^{t^{\prime}} \int_{D}
f(x,t) \, \varphi (x,t) \,
d x \, d t
+ \int_{0}^{t^{\prime}} \int_{D}
\sum_{i=1}^{n} f_{i} (x,t) \,
\frac{\partial \varphi}{\partial x_{i}} (x,t) \,
d x \, d t \label{eq:weaksolution}
\end{align}
for any
\begin{math}
\varphi \in L^{2} ( 0, T; \mathring{H}^{1} (D) )
\cap H^{1} ( 0, T; L^{2} (D) )
\end{math}
with $\varphi ( \cdot , 0 ) = 0$
and $0 < t^{\prime} \leq T$.
\end{definition}
Our main result is as follows.
\begin{theorem}[Main theorem]\label{theorem:main}
Any weak solution $u \in V_{2}^{1,0} (Q)$
to {\rm (\ref{eq:parabolic})}
satisfies the following regularity estimate up to the inclusion boundaries:
For any $\varepsilon > 0$,
there exists a constant $C_{\sharp}^{\prime} > 0$
such that for any $\alpha^{\prime}$ satisfying
\begin{equation}\label{eq:alphaprime}
0 < \alpha^{\prime}
< \min \left\{ \mu , \frac{\alpha}{2 ( \alpha + 1 )} \right\} ,
\end{equation}
we have
\[
\sum_{m=1}^{L}
\sup_{\varepsilon^{2} < t \leq T}
\lVert u ( \cdot , t ) \rVert_{
C^{1, \alpha^{\prime}} ( \overline{D_{m}} \cap D_{\varepsilon} )
} \leq C_{\sharp}^{\prime}
\left(
\lVert u \rVert_{L^{2} (Q)} + F_{\ast} + F_{\ast \ast}
\right) ,
\]
where
\begin{align*}
F_{\ast}
& := \lVert f \rVert_{L^{\kappa} (Q)}
+ \lVert f \rVert_{
L^{\max \{ 2, \kappa \}} (Q)
} + \lVert f \rVert_{L^{\infty} (Q)}
+ \left\lVert
\frac{\partial f}{\partial t}
\right\rVert_{L^{\kappa} (Q)} , \displaybreak[1] \\
F_{\ast \ast}
& := \sum_{i=1}^{n} \Biggl(
\lVert f_{i} \rVert_{L^{p} (Q)}
+ \left\lVert
\frac{\partial f_{i}}{\partial t}
\right\rVert_{L^{2} (Q)}
+ \left\lVert
\frac{\partial f_{i}}{\partial t}
\right\rVert_{L^{p} (Q)} \\
& \hspace*{20ex} \mbox{}
+ \sum_{m=1}^{L} \sup_{0 < t \leq T}
\lVert f_{i} ( \cdot , t ) \rVert_{
C^{\alpha^{\prime}} ( \overline{D_{m}} )
}
\Biggr)
\end{align*}
and
$C_{\sharp}^{\prime}$ depends only on
\begin{math}
n, L, \mu, \alpha , \varepsilon , \lambda , \Lambda , p,
\lVert a_{i j} \rVert_{C^{\alpha^{\prime}} ( \overline{D_{m}})}
\end{math}
and the $C^{1, \alpha^{\prime}}$ norms of $D_{m}$.
\end{theorem}
\begin{remark}\label{remark:main}
(i)
Again,
the constant $C_{\sharp}^{\prime} > 0$
is independent of the distances between inclusions $D_{m}$.
Hence Theorem~\ref{theorem:main} holds
even in the case that an inclusion touches another inclusion,
as in Figure~\ref{figure:1}.
(ii)
It is easy to obtain
\begin{align*}
F_{\ast}
& \leq C^{\ast}
\left(
\lVert f \rVert_{L^{\infty} (Q)}
+ \left\lVert
\frac{\partial f}{\partial t}
\right\rVert_{L^{\kappa} (Q)}
\right) , \displaybreak[1] \\
F_{\ast \ast}
& \leq C^{\ast} \sum_{i=1}^{n} \left(
\sum_{m=1}^{L} \sup_{0 < t \leq T}
\lVert f_{i} ( \cdot , t ) \rVert_{
C^{\alpha^{\prime}} ( \overline{D_{m}} )
}
+ \left\lVert
\frac{\partial f_{i}}{\partial t}
\right\rVert_{L^{p} (Q)}
\right) .
\end{align*}
However, the constant $C^{\ast} > 0$
unfortunately depends on $T$ and $D$.
\end{remark}
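For completeness, here is a sketch of where both bounds in Remark~\ref{remark:main}(ii), and the dependence on $T$ and $D$, come from: they follow from H\"older's inequality on the bounded cylinder $Q$.

```latex
% Hoelder's inequality on Q = D x (0,T]: for any 1 <= q < infinity,
\[
\lVert f \rVert_{L^{q} (Q)}
\leq \lvert Q \rvert^{1/q} \, \lVert f \rVert_{L^{\infty} (Q)} ,
\]
% and, since p > n+2 >= 2,
\[
\left\lVert \frac{\partial f_{i}}{\partial t} \right\rVert_{L^{2} (Q)}
\leq \lvert Q \rvert^{\frac{1}{2} - \frac{1}{p}}
\left\lVert \frac{\partial f_{i}}{\partial t} \right\rVert_{L^{p} (Q)} .
\]
% The factor |Q| = |D| T is the source of the dependence of C* on T and D.
```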
For heat conductive materials with inclusions, \eqref{eq:parabolic} describes the temperature distribution in the material. When these inclusions are unknown and need to be identified, thermography is a non-destructive testing method that identifies them. The measurement for thermography can be the temperature distribution at the boundary generated by injecting a heat flux at the boundary.
The mathematical analysis of this thermography has not yet been developed. However, if we have enough measurements, the so-called dynamical probe method (\cite{IKN}) gives a mathematically rigorous way to identify these inclusions. In the proof justifying this method, a gradient estimate for the fundamental solution of a parabolic equation with non-smooth coefficients is one of the essential ingredients.
The dynamical probe method has been developed only for the case that the inclusions do not touch one another, so it is natural to consider the case when some of them do touch. As a first step toward handling this case, we need a gradient estimate for the fundamental solution, and our main result provides it. A similar situation arises for stationary thermography and
non-destructive testing using acoustic waves. For example, \cite{NUW} and \cite{Yoshida}
effectively used a result of Li-Vogelius~\cite{LiVogelius} to give a procedure for reconstructing inclusions
by the enclosure method (see \cite{Ikehata}, for example). What is interesting about their arguments is that, with additional work, one can
even reconstruct the inclusions in the case that they touch one another (\cite{Nag-Nak}). Therefore, we believe that
our gradient estimates will be useful for inverse problems identifying unknown inclusions.
The rest of this paper is organized as follows.
In Section~\ref{section:proof},
we prove our main theorem, i.e.\ Theorem~\ref{theorem:main}
by applying Lemma~\ref{lemma:keyestimate}.
We prove Lemma~\ref{lemma:keyestimate}
in Section~\ref{section:someestimates}.
In Section~\ref{section:gradestoffundsol},
we consider a pointwise gradient estimate
for the fundamental solution
of parabolic operators
with piecewise smooth coefficients
by applying Theorem~\ref{theorem:main}.
\section{Proof of main result.}\label{section:proof}
In this section, we prove our main theorem.
We first state some estimates in Lemma~\ref{lemma:keyestimate}
which we need to prove our main theorem.
We prove Lemma~\ref{lemma:keyestimate}
in Section~\ref{section:someestimates}.
\begin{lemma}\label{lemma:keyestimate}
Let $( a_{i j} )$ be a matrix-valued function defined on $D$.
Assume that $( a_{i j} )$ is symmetric, positive definite,
and satisfies the condition {\rm (\ref{eq:aijxiixij})}.
Let $Q$ as before and
\begin{math}
\widehat{Q}_{\varepsilon}
:= D_{\varepsilon} \times ( \varepsilon^{2} , T ]
\end{math}.
Then for $p>n+2$,
a weak solution $u \in V_{2}^{1,0} (Q)$
to {\rm (\ref{eq:parabolic})} satisfies
the following estimates:
\begin{align}
\sup_{\varepsilon^{2} < t \leq T}
\lVert u ( \cdot , t ) \rVert_{L^{2} ( D_{\varepsilon} )}
& \leq C \left( \lVert u \rVert_{L^{2} (Q)} + F_{0} \right) ,
\label{eq:LinftyL2} \displaybreak[1] \\
\lVert u \rVert_{L^{\infty} ( \widehat{Q}_{\varepsilon} )}
& \leq C \left( \lVert u \rVert_{L^{2} (Q)} + F_{0} \right) ,
\label{eq:Linfty} \displaybreak[1] \\
\left\lVert \frac{\partial u}{\partial t} \right\rVert_{
L^{2} ( \widehat{Q}_{\varepsilon})
}
& \leq C \left( \lVert u \rVert_{L^{2} (Q)} + F_{1} \right) ,
\label{eq:dudtL2uL2}
\end{align}
where we set
\begin{align}
F_{0} & :=
\lVert f \rVert_{L^{\frac{p(n+2)}{n+2+p}} (Q)}
+ \sum_{i=1}^{n} \lVert f_{i} \rVert_{L^{p} (Q)} ,
\label{eq:defOfF0} \displaybreak[1] \\
F_{1} & :=
\lVert f \rVert_{
L^{\max \left\{ 2, \frac{p(n+2)}{n+2+p} \right\}} (Q)
}
+ \sum_{i=1}^{n} \left(
\lVert f_{i} \rVert_{L^{p} (Q)}
+ \left\lVert
\frac{\partial f_{i}}{\partial t}
\right\rVert_{L^{2} (Q)}
\right) , \label{eq:defOfF1}
\end{align}
and $C>0$ depends only on $n, \lambda , \Lambda , p$
and $\varepsilon$.
\end{lemma}
Now we prove our main theorem by applying
Lemma~\ref{lemma:keyestimate}.
This proof is inspired by \cite{LRU}.
\begin{proof}[Proof of Theorem~\ref{theorem:main}]
Throughout the proof, $C$ denotes a general constant depending only on $n, \lambda , \Lambda , p$
and $\varepsilon_{j}$ ($j = 1, 2, 3$). Let $0 < \varepsilon_{1} < \varepsilon_{2} < \varepsilon_{3}$.
Then we have
\begin{equation}\label{eq:proof11}
\sup_{\varepsilon_{2}^{2} < t \leq T}
\lVert u ( \cdot , t ) \rVert_{L^{2} ( D_{\varepsilon_{2}} )}
\leq C \left(
\lVert u \rVert_{L^{2} (Q)} + F_{0}
\right)
\end{equation}
and
\begin{equation}\label{eq:proof12}
\left\lVert
\frac{\partial u}{\partial t}
\right\rVert_{L^{2} ( \widehat{Q}_{\varepsilon_{1}} )}
\leq C \left(
\lVert u \rVert_{L^{2} (Q)} + F_{1}
\right)
\end{equation}
by (\ref{eq:LinftyL2}) and (\ref{eq:dudtL2uL2})
in Lemma~\ref{lemma:keyestimate},
where $F_{0}$, $F_{1}$ are defined by
(\ref{eq:defOfF0}) and (\ref{eq:defOfF1}).
On the other hand,
$u_{t} = \partial u / \partial t$
satisfies the equation
\[
\frac{\partial u_{t}}{\partial t}
- \sum_{i,j=1}^{n} \frac{\partial}{\partial x_{i}} \left(
a_{i j} (x) \frac{\partial u_{t}}{\partial x_{j}}
\right) = \frac{\partial f}{\partial t}
- \sum_{i=1}^{n} \frac{\partial}{\partial x_{i}} \left(
\frac{\partial f_{i}}{\partial t}
\right)
\]
by applying $\partial / \partial t$ to (\ref{eq:parabolic})
(also see Remark~\ref{remark:steklov}).
Hence we have
\begin{equation}\label{eq:proof2}
\lVert u_{t} \rVert_{L^{\infty} ( \widehat{Q}_{\varepsilon_{2}} )}
\leq C \left(
\lVert u_{t} \rVert_{L^{2} ( \widehat{Q}_{\varepsilon_{1}} )}
+ F_{0}^{\prime}
\right)
\end{equation}
by Lemma~\ref{lemma:keyestimate} (\ref{eq:Linfty}),
where we define
\[
F_{0}^{\prime} :=
\left\lVert
\frac{\partial f}{\partial t}
\right\rVert_{L^{\frac{p(n+2)}{n+2+p}} (Q)}
+ \sum_{i=1}^{n}
\left\lVert
\frac{\partial f_{i}}{\partial t}
\right\rVert_{L^{p} (Q)} .
\]
In particular,
$u_t( \cdot , t ) \in L^{\infty} ( D_{\varepsilon_{2}} )$
holds for a.e.\ $t \in ( \varepsilon_{2}^{2} , T ]$.
Now we regard the equation (\ref{eq:parabolic}) as
the elliptic equation
\begin{equation}\label{eq:aselliptic}
\sum_{i,j=1}^{n} \frac{\partial}{\partial x_{i}} \left(
a_{i j} (x) \frac{\partial u}{\partial x_{j}}
\right)
= \frac{\partial u}{\partial t}
- f + \sum_{i=1}^{n} \frac{\partial f_{i}}{\partial x_{i}}
\end{equation}
by fixing $t \in ( \varepsilon_{2}^{2} , T ]$.
We remark that
$\partial u / \partial t - f \in L^{\infty} ( D_{\varepsilon_{2}} )$.
Then, for any $\alpha^{\prime}$
with the condition (\ref{eq:alphaprime}),
we have the estimate
\begin{align}
& \sum_{m=1}^{L}
\lVert u ( \cdot , t ) \rVert_{
C^{1, \alpha^{\prime}}
( \overline{D_{m}} \cap D_{\varepsilon_{3}} )
} \notag \\
& \leq C_{\sharp} \Biggl(
\lVert u ( \cdot , t ) \rVert_{L^{2} ( D_{\varepsilon_{2}} )}
+ \left\lVert \frac{\partial u}{\partial t} ( \cdot , t )
\right\rVert_{
L^{\infty} ( D_{\varepsilon_{2}} )
}
+ \lVert f ( \cdot , t ) \rVert_{L^{\infty} ( D_{\varepsilon_{2}} )}
\notag \\
& \hspace*{35ex} \mbox{}
+ \sum_{m=1}^{L} \sum_{i=1}^{n}
\lVert f_{i} ( \cdot , t ) \rVert_{
C^{\alpha^{\prime}} ( \overline{D_{m}})
}
\Biggr) \label{eq:x}
\end{align}
by Theorem~\ref{theorem:LN}, where
$C_{\sharp} > 0$ depends only on
\begin{math}
n, L, \mu, \alpha , \varepsilon , \lambda , \Lambda ,
\lVert a_{i j} \rVert_{C^{\alpha^{\prime}} ( \overline{D_{m}})}
\end{math}
and the $C^{1, \alpha^{\prime}}$ norms of $D_{m}$.
Taking the supremum of the inequality
(\ref{eq:x}) over $( \varepsilon_{2}^{2} , T ]$
with respect to $t$,
and using (\ref{eq:proof11}), (\ref{eq:proof12})
and (\ref{eq:proof2}),
we have
\begin{align*}
& \sum_{m=1}^{L}
\sup_{\varepsilon_{2}^{2} < t \leq T}
\lVert u ( \cdot , t ) \rVert_{
C^{1, \alpha^{\prime}}
( \overline{D_{m}} \cap D_{\varepsilon_{3}} )
} \\
& \leq C_{\sharp} \Biggl(
\sup_{\varepsilon_{2}^{2} < t \leq T}
\lVert u ( \cdot , t ) \rVert_{L^{2} ( D_{\varepsilon_{2}} )}
+ \left\lVert \frac{\partial u}{\partial t}
\right\rVert_{
L^{\infty} ( \widehat{Q}_{\varepsilon_{2}} )
}
+ \lVert f \rVert_{L^{\infty} ( \widehat{Q}_{\varepsilon_{2}} )} \\
& \hspace*{30ex} \mbox{}
+ \sum_{m=1}^{L} \sum_{i=1}^{n}
\sup_{\varepsilon_{2}^{2} < t \leq T}
\lVert f_{i} ( \cdot , t ) \rVert_{
C^{\alpha^{\prime}} ( \overline{D_{m}})
}
\Biggr) \displaybreak[1] \\
& \leq C_{\sharp} C \Biggl(
\lVert u \rVert_{L^{2} (Q)}
+ F_{0} + F_{1} + F_{0}^{\prime}
+ \lVert f \rVert_{L^{\infty} ( \widehat{Q}_{\varepsilon_{2}} )} \\
& \hspace*{30ex} \mbox{}
+ \sum_{m=1}^{L} \sum_{i=1}^{n}
\sup_{\varepsilon_{2}^{2} < t \leq T}
\lVert f_{i} ( \cdot , t ) \rVert_{
C^{\alpha^{\prime}} ( \overline{D_{m}})
}
\Biggr) ,
\end{align*}
which is the estimate we want to obtain.
\end{proof}
\begin{remark}\label{remark:steklov}
Since we only assume that $u$ belongs to $V_{2}^{1,0} (Q)$
with respect to the regularity of a weak solution,
one may think that we cannot apply $\partial / \partial t$ directly.
However, it suffices to consider the Steklov mean function
and let $h$ tend to $0$,
where the Steklov mean function
$v_{h}$ of $v$ is defined by
\[
v_{h} (x,t) = \frac{1}{h} \int_{t}^{t + h} v( x, \tau ) \, d \tau.
\]
Hereafter we omit the details of this argument,
although we apply it frequently. See also
\cite[III \S2 p.\,141]{LSU}
and (62) in \cite[p.\,152]{LRU}, for example.
\end{remark}
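The properties of the Steklov mean that make this regularization work are standard (see \cite{LSU}); in sketch form:

```latex
% For v in L^2(Q) and 0 < t < T - h, the Steklov mean is
% differentiable in t with
\[
\frac{\partial v_{h}}{\partial t} (x,t)
= \frac{v( x, t+h ) - v( x, t )}{h} ,
\]
% and v_h -> v in L^2( D x (0, T') ) as h -> 0 for every T' < T,
% so identities proved for v_h pass to the limit.
```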
\section{Some estimates.}\label{section:someestimates}
In this section, we prove Lemma~\ref{lemma:keyestimate}.
The estimates (\ref{eq:LinftyL2}) and (\ref{eq:Linfty}) are
well-known, but we give these proofs in Appendix
for readers' convenience.
In order to show the estimate (\ref{eq:dudtL2uL2}), we prepare some necessary lemmas
for its proof.
Throughout this section, $C > 0$ denotes a general constant
depending only on $n, \lambda , \Lambda$. Also, we assume that
the coefficient $( a_{i j} )$ is a symmetric, positive definite
matrix-valued function defined on $D$
satisfying the condition (\ref{eq:aijxiixij}).
Moreover, we set
$Q_{r} := B_{r} ( x_{0} ) \times ( t_{0} - r^{2} , \, t_{0} ]$,
and assume that
$Q_{2 \rho} \subset D \times (0,T]$
with $0 < \rho \leq 1$.
The following two lemmas are essentially shown in
\cite{LRU}.
We give their proofs here for the sake of completeness.
\begin{lemma}[{\cite[Lemma~3]{LRU}}]\label{lemma:DuL2}
Let $1 < r < \infty$ and $1/r + 1 / r^{\prime} = 1$.
Then a solution $u$ to {\rm (\ref{eq:parabolic})}
satisfies the estimate
\begin{equation}\label{eq:DuL2}
\lVert \nabla u \rVert_{L^{2} ( Q_{\rho} )}
\leq C \left[
( \rho^{n/2} + \rho^{(n+2) / r^{\prime}} ) \mathop{\mathrm{osc}}_{Q_{2 \rho}} u
+ \lVert f \rVert_{L^{r} ( Q_{2 \rho} )}
+ \sum_{i=1}^{n} \lVert f_{i} \rVert_{L^{2} ( Q_{2 \rho} )}
\right] .
\end{equation}
\end{lemma}
\begin{proof}
Let $\zeta$ be a smooth cut-off function on $Q_{2 \rho}$
satisfying $\zeta \equiv 1$ on $Q_{\rho}$,
$\zeta \equiv 0$ on
$Q_{2 \rho} \setminus Q_{3 \rho / 2}$,
$0 \leq \zeta \leq 1$ on $Q_{2 \rho}$,
and
\begin{math}
\lvert \partial \zeta / \partial t \rvert
+ \lvert \nabla \zeta \rvert^{2}
\leq C \rho^{-2}
\end{math}
on $Q_{2 \rho}$.
Let $u_{0}$ be the average value of $u$ in $Q_{2 \rho}$:
\[
u_{0} := \frac{1}{\lvert Q_{2 \rho} \rvert}
\iint_{Q_{2 \rho}} u(x,t) \, d x \, d t,
\]
where $\lvert Q_{2 \rho} \rvert$ denotes the measure of $ Q_{2 \rho}$.
Testing (\ref{eq:parabolic})
by $( u - u_{0} ) \zeta^{2}$
and integrating by parts
(i.e.\ taking $\varphi = ( u - u_{0} ) \zeta^{2}$
in (\ref{eq:weaksolution}); see also Remark~\ref{remark:steklov}), we have
\begin{align*}
& \frac{1}{2} \int_{B_{2 \rho} ( x_{0} )}
\bigl( ( u - u_{0} )^{2} \zeta^{2} \bigr) ( x, t_{0} ) \,
d x
- \iint_{Q_{2 \rho}}
( u - u_{0} )^{2} \zeta \frac{\partial \zeta}{\partial t}
d x \, d t \\
& \mbox{}
+ \iint_{Q_{2 \rho}}
\sum_{i,j=1}^{n} a_{i j} \frac{\partial u}{\partial x_{j}}
\frac{\partial u}{\partial x_{i}} \zeta^{2} \,
d x \, d t
+ 2 \iint_{Q_{2 \rho}}
\sum_{i,j=1}^{n} a_{i j} \frac{\partial u}{\partial x_{j}}
( u - u_{0} ) \zeta \frac{\partial \zeta}{\partial x_{i}}
d x \, d t \\
& = \iint_{Q_{2 \rho}} f ( u - u_{0} ) \zeta^{2} \, d x \, d t
+ \sum_{i=1}^{n} \iint_{Q_{2 \rho}}
\left[
f_{i} \frac{\partial u}{\partial x_{i}} \zeta^{2}
+ 2 f_{i} ( u - u_{0} ) \zeta \frac{\partial \zeta}{\partial x_{i}}
\right]
d x \, d t.
\end{align*}
Hence we have
\begin{align*}
& \frac{1}{2} \int_{B_{2 \rho} ( x_{0} )}
\bigl( ( u - u_{0} )^{2} \zeta^{2} \bigr) ( x, t_{0} ) \,
d x + \lambda \iint_{Q_{2 \rho}}
\lvert \nabla u \rvert^{2} \zeta^{2} \,
d x \, d t \\
& \leq \frac{1}{2} \int_{B_{2 \rho} ( x_{0} )}
\bigl( ( u - u_{0} )^{2} \zeta^{2} \bigr) ( x, t_{0} ) \,
d x + \iint_{Q_{2 \rho}}
\sum_{i,j=1}^{n} a_{i j} \frac{\partial u}{\partial x_{j}}
\frac{\partial u}{\partial x_{i}} \zeta^{2} \,
d x \, d t \displaybreak[1] \\
& = \iint_{Q_{2 \rho}}
( u - u_{0} )^{2} \zeta \frac{\partial \zeta}{\partial t}
d x \, d t
- 2 \iint_{Q_{2 \rho}}
\sum_{i,j=1}^{n} a_{i j} \frac{\partial u}{\partial x_{j}}
( u - u_{0} ) \zeta \frac{\partial \zeta}{\partial x_{i}}
d x \, d t \\
& \hspace*{3ex} \mbox{}
+ \iint_{Q_{2 \rho}} f ( u - u_{0} ) \zeta^{2} \, d x \, d t \\
& \hspace*{3ex} \mbox{}
+ \sum_{i=1}^{n} \iint_{Q_{2 \rho}}
\left[
f_{i} \frac{\partial u}{\partial x_{i}} \zeta^{2}
+ 2 f_{i} ( u - u_{0} ) \zeta \frac{\partial \zeta}{\partial x_{i}}
\right]
d x \, d t \displaybreak[1] \\
& \leq \iint_{Q_{2 \rho}}
( u - u_{0} )^{2} \zeta
\left\lvert \frac{\partial \zeta}{\partial t} \right\rvert
d x \, d t
+ \varepsilon_{1} \iint_{Q_{2 \rho}}
\lvert \nabla u \rvert^{2} \zeta^{2} \,
d x \, d t \\
& \hspace*{3ex} \mbox{}
+ \frac{C}{\varepsilon_{1}} \iint_{Q_{2 \rho}}
\lvert u - u_{0} \rvert^{2} \lvert \nabla \zeta \rvert^{2} \,
d x \, d t
+ \frac{1}{2} \left(
\iint_{Q_{2 \rho}}
\lvert f \zeta \rvert^{r} \,
d x \, d t
\right)^{2/r} \\
& \hspace*{3ex} \mbox{}
+ \frac{1}{2} \left(
\iint_{Q_{2 \rho}}
\lvert ( u - u_{0} ) \zeta \rvert^{r^{\prime}} \,
d x \, d t
\right)^{2 / r^{\prime}}
+ \varepsilon_{1} \iint_{Q_{2 \rho}}
\lvert \nabla u \rvert^{2} \zeta^{2} \,
d x \, d t \\
& \hspace*{3ex} \mbox{}
+ \left( \frac{1}{\varepsilon_{1}} + 1 \right)
\iint_{Q_{2 \rho}}
\sum_{i=1}^{n} \lvert f_{i} \rvert^{2} \zeta^{2} \,
d x \, d t
+ \iint_{Q_{2 \rho}}
\lvert u - u_{0} \rvert^{2}
\lvert \nabla \zeta \rvert^{2} \,
d x \, d t .
\end{align*}
We now take $\varepsilon_{1} > 0$ small enough.
Then, we have
\begin{align*}
& \iint_{Q_{\rho}} \lvert \nabla u \rvert^{2} \, d x \, d t
\leq \iint_{Q_{2 \rho}}
\lvert \nabla u \rvert^{2} \zeta^{2} \,
d x \, d t \displaybreak[1] \\
& \leq C \iint_{Q_{2 \rho}}
( u - u_{0} )^{2} \left[
\zeta \left\lvert \frac{\partial \zeta}{\partial t} \right\rvert
+ \lvert \nabla \zeta \rvert^{2}
\right]
d x \, d t \\
& \hspace*{3ex} \mbox{}
+ C \left(
\iint_{Q_{2 \rho}}
\lvert ( u - u_{0} ) \zeta \rvert^{r^{\prime}} \,
d x \, d t
\right)^{2 / r^{\prime}} \\
& \hspace*{3ex} \mbox{}
+ C \left(
\iint_{Q_{2 \rho}} \lvert f \zeta \rvert^{r} \, d x \, d t
\right)^{2/r}
+ C \iint_{Q_{2 \rho}}
\sum_{i=1}^{n} \lvert f_{i} \rvert^{2} \zeta^{2} \,
d x \, d t \displaybreak[1] \\
& \leq C \left[
\left( \rho^{n} + \rho^{2(n+2) / r^{\prime}} \right)
\left( \mathop{\mathrm{osc}}_{Q_{2 \rho}} u \right)^{2}
+ \lVert f \rVert_{L^{r} ( Q_{2 \rho} )}^{2}
+ \sum_{i=1}^{n} \lVert f_{i} \rVert_{L^{2} ( Q_{2 \rho} )}^{2}
\right],
\end{align*}
because
$\lvert u(x,t) - u_{0} \rvert \leq \mathop{\mathrm{osc}}_{Q_{2 \rho}} u$
holds for any $(x,t) \in Q_{2 \rho}$. This completes the proof.
\end{proof}
\begin{lemma}[{\cite[Lemma~5]{LRU}}]\label{lemma:dudtL2}
A solution $u$ to {\rm (\ref{eq:parabolic})}
satisfies the estimate
\begin{align}
\left\lVert
\frac{\partial u}{\partial t}
\right\rVert_{L^{2} ( Q_{\rho} )}
& \leq C \Biggl[
\rho^{-1} \lVert \nabla u \rVert_{L^{2} ( Q_{2 \rho} )}
+ \lVert f \rVert_{L^{2} ( Q_{2 \rho} )} \notag \\
& \hspace*{10ex}
+ \sum_{i=1}^{n}
\left(
\rho^{-1}
\lVert f_{i} \rVert_{L^{2} ( Q_{2 \rho} )}
+ \left\lVert
\frac{\partial f_{i}}{\partial t}
\right\rVert_{L^{2} ( Q_{2 \rho} )}
\right)
\Biggr] \label{eq:dudtL2}
\end{align}
\end{lemma}
\begin{proof}
We first take the same smooth cut-off function $\zeta$
as in the proof of Lemma~\ref{lemma:DuL2}.
Testing (\ref{eq:parabolic}) by
$( \partial u / \partial t ) \zeta^{2}$
and integrating by parts (also see Remark~\ref{remark:steklov}),
we have
\begin{align*}
& \frac{1}{2} \int_{B_{2 \rho} ( x_{0} )}
\sum_{i,j=1}^{n}
\left(
a_{i j} \frac{\partial u}{\partial x_{i}}
\frac{\partial u}{\partial x_{j}} \zeta^{2}
\right) ( x, t_{0} ) \,
d x \\
& \mbox{}
+ \iint_{Q_{2 \rho}}
\left[
\left\lvert \frac{\partial u}{\partial t} \right\rvert^{2}
\zeta^{2}
- \sum_{i,j=1}^{n} a_{i j} \frac{\partial u}{\partial x_{i}}
\frac{\partial u}{\partial x_{j}}
\zeta \frac{\partial \zeta}{\partial t}
+ 2 \sum_{i,j=1}^{n} a_{i j} \frac{\partial u}{\partial x_{j}}
\frac{\partial u}{\partial t} \zeta
\frac{\partial \zeta}{\partial x_{i}}
\right]
d x \, d t \\
& = \iint_{Q_{2 \rho}}
f \frac{\partial u}{\partial t} \zeta^{2} \,
d x \, d t
+ \sum_{i=1}^{n} \Biggl[
\int_{B_{2 \rho} ( x_{0} )}
\left( f_{i} \frac{\partial u}{\partial x_{i}} \zeta^{2} \right)
( x, t_{0} ) \,
d x \\
& \hspace*{10ex} \mbox{}
+ \iint_{Q_{2 \rho}}
\left(
- \frac{\partial f_{i}}{\partial t}
\frac{\partial u}{\partial x_{i}} \zeta^{2}
- 2 f_{i} \frac{\partial u}{\partial x_{i}}
\zeta \frac{\partial \zeta}{\partial t}
+ 2 f_{i} \frac{\partial u}{\partial t}
\zeta \frac{\partial \zeta}{\partial x_{i}}
\right)
\Biggr]
d x \, d t
\end{align*}
due to
\[
\sum_{i,j=1}^{n} a_{i j}
\frac{\partial^{2} u}{\partial t \partial x_{i}}
\frac{\partial u}{\partial x_{j}} \zeta^{2}
= \frac{1}{2} \frac{\partial}{\partial t} \left(
\sum_{i,j=1}^{n} a_{i j}
\frac{\partial u}{\partial x_{i}}
\frac{\partial u}{\partial x_{j}} \zeta^{2}
\right)
- \sum_{i,j=1}^{n} a_{i j}
\frac{\partial u}{\partial x_{i}} \frac{\partial u}{\partial x_{j}}
\zeta \frac{\partial \zeta}{\partial t}
\]
and
\[
f_{i} \frac{\partial^{2} u}{\partial t \partial x_{i}} \zeta^{2}
= \frac{\partial}{\partial t} \left(
f_{i} \frac{\partial u}{\partial x_{i}} \zeta^{2}
\right)
- \frac{\partial u}{\partial x_{i}}
\frac{\partial}{\partial t} ( f_{i} \zeta^{2} ) .
\]
Hence we have
\begin{align*}
& \frac{\lambda}{2} \int_{B_{2 \rho} ( x_{0} )}
\left( \lvert \nabla u \rvert^{2} \zeta^{2} \right)
( x, t_{0} ) \,
d x + \iint_{Q_{2 \rho}}
\left\lvert \frac{\partial u}{\partial t} \right\rvert^{2}
\zeta^{2} \,
d x \, d t \\
& \leq \frac{1}{2} \int_{B_{2 \rho} ( x_{0} )}
\left(
\sum_{i,j=1}^{n} a_{i j} \frac{\partial u}{\partial x_{i}}
\frac{\partial u}{\partial x_{j}} \zeta^{2}
\right) ( x, t_{0} ) \,
d x + \iint_{Q_{2 \rho}}
\left\lvert \frac{\partial u}{\partial t} \right\rvert^{2}
\zeta^{2} \,
d x \, d t \displaybreak[1] \\
& = \iint_{Q_{2 \rho}}
\left[
\sum_{i,j=1}^{n} a_{i j} \frac{\partial u}{\partial x_{i}}
\frac{\partial u}{\partial x_{j}}
\zeta \frac{\partial \zeta}{\partial t}
- 2 \sum_{i,j=1}^{n} a_{i j} \frac{\partial u}{\partial x_{j}}
\frac{\partial u}{\partial t} \zeta
\frac{\partial \zeta}{\partial x_{i}}
\right]
d x \, d t \\
& \hspace*{3ex} \mbox{}
+ \iint_{Q_{2 \rho}}
f \frac{\partial u}{\partial t} \zeta^{2} \,
d x \, d t
+ \sum_{i=1}^{n} \Biggl[
\int_{B_{2 \rho} ( x_{0} )}
\left( f_{i} \frac{\partial u}{\partial x_{i}} \zeta^{2} \right)
( x, t_{0} ) \,
d x \\
& \hspace*{10ex} \mbox{}
+ \iint_{Q_{2 \rho}}
\left(
- \frac{\partial f_{i}}{\partial t}
\frac{\partial u}{\partial x_{i}} \zeta^{2}
- 2 f_{i} \frac{\partial u}{\partial x_{i}}
\zeta \frac{\partial \zeta}{\partial t}
+ 2 f_{i} \frac{\partial u}{\partial t}
\zeta \frac{\partial \zeta}{\partial x_{i}}
\right)
\Biggr]
d x \, d t \displaybreak[1] \\
& \leq C \iint_{Q_{2 \rho}}
\lvert \nabla u \rvert^{2} \zeta
\left\lvert \frac{\partial \zeta}{\partial t} \right\rvert
d x \, d t
+ \varepsilon_{2} \iint_{Q_{2 \rho}}
\left\lvert \frac{\partial u}{\partial t} \right\rvert^{2}
\zeta^{2} \,
d x \, d t \\
& \hspace*{3ex} \mbox{}
+ \frac{C}{\varepsilon_{2}} \iint_{Q_{2 \rho}}
\lvert \nabla u \rvert^{2}
\lvert \nabla \zeta \rvert^{2} \,
d x \, d t
+ \varepsilon_{2} \iint_{Q_{2 \rho}}
\left\lvert \frac{\partial u}{\partial t} \right\rvert^{2}
\zeta^{2} \,
d x \, d t \\
& \hspace*{3ex} \mbox{}
+ \frac{C}{\varepsilon_{2}} \iint_{Q_{2 \rho}}
\lvert f \rvert^{2} \zeta^{2} \,
d x \, d t
+ \varepsilon_{2} \int_{B_{2 \rho} ( x_{0} )}
\left( \lvert \nabla u \rvert^{2} \zeta^{2} \right) ( x, t_{0} ) \,
d x \\
& \hspace*{3ex} \mbox{}
+ \frac{C}{\varepsilon_{2}} \int_{B_{2 \rho} ( x_{0} )}
\left( \sum_{i=1}^{n} \lvert f_{i} \rvert^{2} \zeta^{2} \right)
( x, t_{0} ) \,
d x \\
& \hspace*{3ex} \mbox{}
+ C \iint_{Q_{2 \rho}}
\lvert \nabla u \rvert^{2} \zeta^{2} \,
d x \, d t
+ C \iint_{Q_{2 \rho}}
\sum_{i=1}^{n}
\left\lvert \frac{\partial f_{i}}{\partial t} \right\rvert^{2}
\zeta^{2} \,
d x \, d t \\
& \hspace*{3ex} \mbox{}
+ C \iint_{Q_{2 \rho}}
\lvert \nabla u \rvert^{2} \zeta
\left\lvert \frac{\partial \zeta}{\partial t} \right\rvert
d x \, d t
+ C \iint_{Q_{2 \rho}}
\sum_{i=1}^{n}
\lvert f_{i} \rvert^{2}
\zeta \left\lvert \frac{\partial \zeta}{\partial t} \right\rvert
d x \, d t \\
& \hspace*{3ex} \mbox{}
+ \varepsilon_{2} \iint_{Q_{2 \rho}}
\left\lvert \frac{\partial u}{\partial t} \right\rvert^{2}
\zeta^{2} \,
d x \, d t
+ \frac{C}{\varepsilon_{2}} \iint_{Q_{2 \rho}}
\sum_{i=1}^{n} \lvert f_{i} \rvert^{2}
\lvert \nabla \zeta \rvert^{2} \,
d x \, d t .
\end{align*}
We remark that
\begin{align*}
\int_{B_{2 \rho} ( x_{0} )}
( f_{i} \zeta )^{2} ( x, t_{0} ) \,
d x
& = \int_{B_{2 \rho} ( x_{0} )}
\int_{t_{0} - ( 2 \rho )^{2}}^{t_{0}}
\frac{\partial}{\partial t}
\left( ( f_{i} \zeta )^{2} \right) ( x, t ) \,
d t \,
d x \\
& \leq C \iint_{Q_{2 \rho}}
\left[
\lvert f_{i} \rvert^{2} \left(
\zeta^{2} + \zeta \left\lvert
\frac{\partial \zeta}{\partial t}
\right\rvert
\right)
+ \left\lvert \frac{\partial f_{i}}{\partial t} \right\rvert^{2}
\zeta^{2}
\right]
d x \, d t.
\end{align*}
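The inequality in this remark can be checked pointwise: by the product rule and Young's inequality,
\[
\frac{\partial}{\partial t} \left( ( f_{i} \zeta )^{2} \right)
= 2 f_{i} \zeta \left(
\frac{\partial f_{i}}{\partial t} \zeta
+ f_{i} \frac{\partial \zeta}{\partial t}
\right)
\leq \lvert f_{i} \rvert^{2} \zeta^{2}
+ \left\lvert \frac{\partial f_{i}}{\partial t} \right\rvert^{2}
\zeta^{2}
+ 2 \lvert f_{i} \rvert^{2} \zeta
\left\lvert \frac{\partial \zeta}{\partial t} \right\rvert ,
\]
and the first equality in the remark uses
$\zeta ( \cdot , t_{0} - ( 2 \rho )^{2} ) = 0$.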
Therefore, by taking $\varepsilon_{2} > 0$ small enough, we have
\begin{align*}
& \iint_{Q_{\rho}}
\left\lvert \frac{\partial u}{\partial t} \right\rvert^{2}
d x \, d t
\leq \int_{B_{2 \rho} ( x_{0} )}
\left( \lvert \nabla u \rvert^{2} \zeta^{2} \right)
( x, t_{0} ) \,
d x + \iint_{Q_{2 \rho}}
\left\lvert \frac{\partial u}{\partial t} \right\rvert^{2}
\zeta^{2} \,
d x \, d t \\
& \leq C \iint_{Q_{2 \rho}}
\lvert \nabla u \rvert^{2} \left(
\zeta^{2}
+ \zeta
\left\lvert \frac{\partial \zeta}{\partial t} \right\rvert
+ \lvert \nabla \zeta \rvert^{2}
\right)
d x \, d t
+ C \iint_{Q_{2 \rho}}
\lvert f \rvert^{2} \zeta^{2} \,
d x \, d t \\
& \hspace*{3ex} \mbox{}
+ C \iint_{Q_{2 \rho}}
\sum_{i=1}^{n} \left[
\lvert f_{i} \rvert^{2} \left(
\zeta^{2}
+ \zeta
\left\lvert \frac{\partial \zeta}{\partial t} \right\rvert
+ \lvert \nabla \zeta \rvert^{2}
\right)
+ \left\lvert \frac{\partial f_{i}}{\partial t} \right\rvert^{2}
\zeta^{2}
\right]
d x \, d t \displaybreak[1] \\
& \leq C \rho^{-2} \lVert \nabla u \rVert_{L^{2} ( Q_{2 \rho} )}^{2}
+ C \lVert f \rVert_{L^{2} ( Q_{2 \rho} )}^{2}
+ C \rho^{-2} \sum_{i=1}^{n}
\lVert f_{i} \rVert_{L^{2} ( Q_{2 \rho} )}^{2} \\
& \hspace*{3ex} \mbox{}
+ C \sum_{i=1}^{n} \left\lVert
\frac{\partial f_{i}}{\partial t}
\right\rVert_{L^{2} ( Q_{2 \rho} )}^{2} .
\hspace*{40ex}
\end{align*}
\end{proof}
We obtain the estimate (\ref{eq:dudtL2uL2})
from Lemmas~\ref{lemma:Linfty} (given in Appendix),
\ref{lemma:DuL2} and
\ref{lemma:dudtL2}.
\section{A gradient estimate of the fundamental solution.}
\label{section:gradestoffundsol}
In this section, we consider a gradient estimate
of the fundamental solution of parabolic operators.
We first state some facts.
It is known that if the coefficient $( a_{i j} )$ is
a symmetric, positive definite, matrix-valued
$L^{\infty}(\mathbb{R}^n)$ function
satisfying (\ref{eq:aijxiixij}), then
there exists a fundamental solution
$\Gamma (x,t;y,s)$ of the parabolic operator
\begin{equation}\label{eq:op}
\frac{\partial}{\partial t}
- \sum_{i,j=1}^{n} \frac{\partial}{\partial x_{i}} \left(
a_{i j} \frac{\partial}{\partial x_{j}}
\right)
\end{equation}
with the estimate
\begin{equation}\label{eq:estoffundsol}
\lvert \Gamma (x,t;y,s) \rvert
\leq \frac{C_{\ast}}{(t-s)^{n/2}} \exp \left(
- \frac{c_{\ast} \lvert x - y \rvert^{2}}{t-s}
\right) \chi_{[ s, \infty )} (t)
\end{equation}
for all
$t,s \in \mathbb{R}$,
and a.e.\ $x,y \in \mathbb{R}^{n}$,
where $C_{\ast} , c_{\ast} > 0$ depend only on
$n, \lambda , \Lambda$
(see \cite{Aronson} or \cite{FS}, for example).
In particular, the constants $C_{\ast}$ and $c_{\ast}$
are independent of the distance between inclusions.
If the coefficient $( a_{i j} )$ is not piecewise smooth
but H{\"o}lder continuous in the whole space $\mathbb{R}^{n}$,
then the pointwise gradient estimate
\[
\lvert \nabla_{x} \Gamma (x,t;y,s) \rvert
\leq \frac{C_{\ast}}{(t-s)^{(n+1)/2}}
\exp \left(
- \frac{c_{\ast} \lvert x - y \rvert^{2}}{t-s}
\right) \chi_{[ s, \infty )} (t)
\]
holds for
$t,s \in \mathbb{R}$, a.e.\ $x,y \in \mathbb{R}^{n}$
(see \cite[Chapter IV \S11--13]{LSU}, for example).
Now, the aim of this section is to show
the gradient estimate (\ref{eq:gradientestimate})
in Theorem~\ref{theorem:gradientestimate}
even if the coefficients are piecewise $C^{\mu}$ in $D$.
We assume that $( a_{i j} )$ defined in $D$
satisfies the conditions
(\ref{eq:aijxiixij}) and (\ref{eq:piecewiseCmu}),
and extend it to the whole $\mathbb{R}^{n}$
by defining $( a_{i j} ) \equiv \Lambda I$
in $\mathbb{R}^{n} \setminus D$,
where $I$ is the identity matrix.
We remark that this extension does not destroy the conditions
(\ref{eq:aijxiixij}) and (\ref{eq:piecewiseCmu}).
Then there exists a fundamental solution
$\Gamma (x,t;y,s)$ of the parabolic operator (\ref{eq:op})
with the estimate (\ref{eq:estoffundsol})
as we stated above.
To prove our gradient estimate of the fundamental solution, we apply the following corollary of
Theorem~\ref{theorem:main}.
\begin{corollary}\label{corollary:1}
Let $0 < \rho \leq 1$.
Then a solution $u$ to the parabolic equation
\begin{equation}\label{eq:parabolicrho}
\frac{\partial u}{\partial t}
- \sum_{i,j=1}^{n} \frac{\partial}{\partial x_{i}} \left(
a_{i j} \frac{\partial u}{\partial x_{j}}
\right) = 0 \mbox{ in }
B_{\rho} ( x_{0} ) \times ( t_{0} - \rho^{2} , \, t_{0} ]
\end{equation}
has the estimate
\begin{equation}\label{eq:estimaterho}
\lVert \nabla u \rVert_{L^{\infty} (
B_{\rho / 2} ( x_{0} ) \times ( t_{0} - ( \rho / 2 )^{2} , \, t_{0}] )
)
} \leq \frac{C_{\sharp}^{\prime}}{\rho^{n/2+2}}
\lVert u \rVert_{L^{2} (
B_{\rho} ( x_{0} ) \times ( t_{0} - \rho^{2} , \, t_{0} ]
)
} ,
\end{equation}
where $C_{\sharp}^{\prime} > 0$ depends only on
$n, L, \mu, \alpha , \lambda , \Lambda$,
and
\begin{math}
\lVert a_{i j} \rVert_{C^{\alpha^{\prime}} ( \overline{D_{m}})}
\end{math}
and the $C^{1, \alpha^{\prime}}$ norms of $D_{m}$
for some $\alpha^{\prime}$ with {\rm (\ref{eq:alphaprime})}.
\end{corollary}
\begin{proof}
It suffices to apply a scaling argument.
To begin with, let
$\rho y = x - x_{0}$,
$\rho^{2} ( s - 1 ) = t - t_{0}$
and
\begin{align}
& \widetilde{u} (y,s) := u(x,t)
= u \bigl( \rho y + x_{0} , \, \rho^{2} (s-1) + t_{0} \bigr) ,
\label{eq:tilde} \displaybreak[1] \\
& \widetilde{a}_{i j} (y)
:= a_{i j} (x) = a_{i j} ( \rho y + x_{0} ), \notag
\displaybreak[1] \\
& \widetilde{D}_{m}
:= \left\{
\frac{1}{\rho} ( x - x_{0} ) : x \in D_{m}
\right\} . \notag
\end{align}
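The scaling of derivatives behind this change of variables can be recorded explicitly: since $x = \rho y + x_{0}$ and $t = \rho^{2} ( s - 1 ) + t_{0}$, the chain rule gives
\[
\frac{\partial \widetilde{u}}{\partial s} (y,s)
= \rho^{2} \frac{\partial u}{\partial t} (x,t),
\qquad
\frac{\partial \widetilde{u}}{\partial y_{i}} (y,s)
= \rho \frac{\partial u}{\partial x_{i}} (x,t),
\]
so that the left-hand side of the equation below for $\widetilde{u}$ equals $\rho^{2}$ times the left-hand side of (\ref{eq:parabolicrho}) for $u$, which vanishes.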
Then we have
\begin{equation}\label{eq:tildeu}
\frac{\partial \widetilde{u}}{\partial s}
- \sum_{i,j=1}^{n} \frac{\partial}{\partial y_{i}} \left(
\widetilde{a}_{i j} \frac{\partial \widetilde{u}}{\partial y_{j}}
\right) = 0 \mbox{ in } B_{1} (0) \times (0,1].
\end{equation}
Therefore, by noting Remark~\ref{remark:indepofrho}, we have
\[
\lVert \nabla \widetilde{u} \rVert_{
L^{\infty} ( B_{1/2} (0) \times ( 3/4, 1 ] )
} \leq C_{\sharp}^{\prime}
\lVert \widetilde{u} \rVert_{L^{2} ( B_{1} (0) \times (0,1) )}
\]
by Theorem~\ref{theorem:main},
where $C_{\sharp}^{\prime}$ depends only on
\begin{math}
n, L, \mu, \alpha , \lambda , \Lambda ,
\lVert a_{i j} \rVert_{C^{\alpha^{\prime}} ( \overline{D_{m}})}
\end{math},
and the $C^{1, \alpha^{\prime}}$ seminorms of $D_{m}$.
By this estimate and the definition (\ref{eq:tilde}),
we obtain the estimate (\ref{eq:estimaterho}).
\end{proof}
\begin{remark}\label{remark:indepofrho}
One might think that the constant $C_{\sharp}^{\prime}$
also depends on $\rho$
since
\begin{math}
\lVert \widetilde{a}_{i j} \rVert_{
C^{\alpha^{\prime}} ( \overline{\widetilde{D}_{m}} )
}
\end{math}
and the $C^{1, \alpha^{\prime}}$ norms of
$\widetilde{D}_{m}$
depend on $\rho$.
However, we can take $C_{\sharp}^{\prime}$
independent of $\rho$
by taking the following into consideration.
First we consider
\begin{align*}
\lVert \widetilde{a}_{i j} \rVert_{
C^{\alpha^{\prime}} ( \overline{\widetilde{D}_{m}} )
}
& = \lVert \widetilde{a}_{i j} \rVert_{
C^{0} ( \overline{\widetilde{D}_{m}} )
} + [ \widetilde{a}_{i j} ]_{
C^{\alpha^{\prime}} ( \overline{\widetilde{D}_{m}} )
} \\
& := \sup_{y \in \overline{\widetilde{D}_{m}}}
\lvert \widetilde{a}_{i j} (y) \rvert
+ \sup_{y, \eta \in \overline{\widetilde{D}_{m}}}
\frac{
\lvert \widetilde{a}_{i j} (y) - \widetilde{a}_{i j} ( \eta ) \rvert
}{\lvert y - \eta \rvert^{\alpha^{\prime}}} .
\end{align*}
It is easy to show
\[
\lVert \widetilde{a}_{i j} \rVert_{
C^{0} ( \overline{\widetilde{D}_{m}} )
} = \lVert a_{i j} \rVert_{C^{0} ( \overline{D_{m}} )}
\]
and
\[
[ \widetilde{a}_{i j} ]_{
C^{\alpha^{\prime}} ( \overline{\widetilde{D}_{m}} )
} = \rho^{\alpha^{\prime}} [ a_{i j} ]_{
C^{\alpha^{\prime}} ( \overline{D_{m}} )
} \leq [ a_{i j} ]_{C^{\alpha^{\prime}} ( \overline{D_{m}} )} .
\]
Then we have
\[
\lVert \widetilde{a}_{i j} \rVert_{
C^{\alpha^{\prime}} ( \overline{\widetilde{D}_{m}} )
}
\leq \lVert a_{i j} \rVert_{C^{\alpha^{\prime}} ( \overline{D_{m}} )} .
\]
Next we consider the $C^{1, \alpha^{\prime}}$ norms of
$\widetilde{D}_{m}$.
We need to recall the proofs of the results of
\cite{LiNirenberg} and \cite{LiVogelius}
more carefully.
In the case when we consider
the $L^{\infty}$-norm of $\nabla \widetilde{u}$
for a solution $\widetilde{u}$ to the equation (\ref{eq:tildeu}),
the influence of the $C^{1, \alpha^{\prime}}$ norms of
subdomains $\widetilde{D}_{m}$ appears only
in the following constant $C$ in (\ref{eq:LiVogeliusp118C}):
We estimate
$O \bigl( \lvert x^{\prime} \rvert^{1 + \alpha} \bigr)$
in the equation (49) in \cite[p.\ 118]{LiVogelius}, i.e.\
\begin{equation}\tag{49}
f_{m} ( x^{\prime} )
= f_{m} ( 0^{\prime} )
+ \nabla f_{m} ( 0^{\prime} ) x^{\prime}
+ O \bigl( \lvert x^{\prime} \rvert^{1 + \alpha} \bigr)
\end{equation}
as
\begin{equation}\label{eq:LiVogeliusp118C}
\bigl\lvert
O \bigl( \lvert x^{\prime} \rvert^{1 + \alpha} \bigr)
\bigr\rvert
\leq C \lvert x^{\prime} \rvert^{1 + \alpha}
\end{equation}
(see also \cite[Lemma~4.3]{LiNirenberg}).
Here $C^{1, \alpha}$ functions $f_{m}$
are defined in the cube $(-1,1)^{n}$,
and the graphs of $f_{m}$ describe $\partial D_{m}$.
Now we remark that the constant $C$ in (\ref{eq:LiVogeliusp118C})
depends only on the $C^{1, \alpha}$ seminorms of $f_{m}$.
We consider the variable change $\rho y = x$.
Then the graph $x_{n} = f_{m} ( x^{\prime} )$
is changed to $y_{n} = \widetilde{f}_{m} ( y^{\prime} )$,
where
\begin{math}
\widetilde{f}_{m} ( y^{\prime} )
:= \rho^{-1} f_{m} ( \rho y^{\prime} )
\end{math}, and we have
\begin{align*}
[ \widetilde{f}_{m} ]_{C^{1, \alpha} ( (-1,1)^{n} )}
& \leq [ \widetilde{f}_{m} ]_{
C^{1, \alpha} ( ( - 1 / \rho , 1 / \rho )^{n} )
} \\
& = \rho^{\alpha} [ f_{m} ]_{C^{1, \alpha} ( (-1,1)^{n} )}
\leq [ f_{m} ]_{C^{1, \alpha} ( (-1,1)^{n} )} .
\end{align*}
Hence, even when we consider the variable change $\rho y = x$,
we can take the constant $C$ in (\ref{eq:LiVogeliusp118C})
independent of $\rho$.
Taking these observations into account,
we can take $C_{\sharp}^{\prime} > 0$
independent of $\rho$.
\end{remark}
Now we state the estimate of
$\nabla_{x} \Gamma (x,t;y,s)$.
\begin{theorem}\label{theorem:gradientestimate}
We have
\begin{equation}\label{eq:gradientestimate}
\lvert \nabla_{x} \Gamma (x,t;y,s) \rvert
\leq \frac{C}{( t - s )^{(n+1)/2}}
\exp \left(
- \frac{c \lvert x - y \rvert^{2}}{t-s}
\right)
\end{equation}
for a.e.\ $x, y \in \mathbb{R}^{n}$ and $t > s$
with $\lvert x - y \rvert^{2} + t - s \leq 16$,
where
$C, c>0$ depend only on
\begin{math}
n, L, \mu, \alpha , \lambda , \Lambda ,
\lVert a_{i j} \rVert_{C^{\alpha^{\prime}} ( \overline{D_{m}})}
\end{math}
and the $C^{1, \alpha^{\prime}}$ seminorms of $D_{m}$
for some $\alpha^{\prime}$ with {\rm (\ref{eq:alphaprime})}.
\end{theorem}
We prove Theorem~\ref{theorem:gradientestimate}
in the same way as the proof of
\cite[Proposition 3.6]{DiCristoVessella}.
We first show the following lemmas.
\begin{lemma}\label{lemma:GammaL2a}
Let
$\rho := ( \lvert x_{0} - \xi \rvert^{2} + t_{0} - \tau )^{1/2} / 4$.
Then
\[
\int_{t_{0} - \rho^{2}}^{t_{0}} \int_{B_{\rho} ( x_{0} )}
\lvert \Gamma( x, t; \xi , \tau ) \rvert^{2} \,
d x \, d t
\leq \frac{( C_{\ast}^{\prime} )^{2} \rho^{n}}{( t_{0} - \tau )^{n-1}}
\exp \left(
- \frac{2 c_{\ast}^{\prime} \lvert x_{0} - \xi \rvert^{2}}
{t_{0} - \tau}
\right)
\]
for $t_{0} > \tau$, where
$C_{\ast}^{\prime} , c_{\ast}^{\prime} > 0$
depend only on $n, \lambda , \Lambda$.
\end{lemma}
\begin{proof}
By (\ref{eq:estoffundsol}), it is enough to obtain the estimate
\begin{align}
I_{0}
& := \int_{t_{0} - \rho^{2}}^{t_{0}} \int_{B_{\rho} ( x_{0} )}
\frac{1}{( t - \tau )^{n}}
\exp \left(
- \frac{2 c_{\ast} \lvert x - \xi \rvert^{2}}{t - \tau}
\right) \chi_{[ \tau , \infty )} (t) \,
d x \, d t \notag \\
& \leq
\frac{( C_{\ast}^{\prime} )^{2} \rho^{n}}{( t_{0} - \tau )^{n-1}}
\exp \left(
- \frac{2 c_{\ast}^{\prime} \lvert x_{0} - \xi \rvert^{2}}
{t_{0} - \tau}
\right). \label{eq:I0}
\end{align}
We consider the following three cases:
\[
{\rm (i)}~ t_{0} - \rho^{2} \leq \tau < t_{0} , \quad
{\rm (ii)}~ t_{0} - 2 \rho^{2} \leq \tau \leq t_{0} - \rho^{2} ,
\quad
{\rm (iii)}~ \tau \leq t_{0} - 2 \rho^{2} .
\]
Now we consider the case (i).
Then we have
$( \sqrt{15} - 1 ) \rho \leq \lvert x - \xi \rvert$
for any $x \in B_{\rho} ( x_{0} )$, because
$\lvert x_{0} - \xi \rvert \geq \sqrt{15} \, \rho$.
Hence we have
\[
I_{0} \leq
\int_{\tau}^{t_{0}} \int_{B_{\rho} ( x_{0} )}
\frac{1}{( t - \tau )^{n}}
\exp \left(
- \frac{c_{1} \rho^{2}}{t - \tau}
\right)
d x \, d t
= \lvert B_{1} (0) \rvert \rho^{n}
\int_{0}^{t_{0} - \tau} \varphi_{1} (s) \, d s,
\]
where $\varphi_{1} (s) := s^{-n} \exp ( - c_{1} \rho^{2} / s )$ and
$c_{1} := 2 ( \sqrt{15} - 1 )^{2} c_{\ast}$.
If $0 < t_{0} - \tau \leq c_{1} \rho^{2} / n$,
then we have
\[
\int_{0}^{t_{0} - \tau} \varphi_{1} (s) \, d s
\leq \int_{0}^{t_{0} - \tau} \varphi_{1} ( t_{0} - \tau ) \, d s
= ( t_{0} - \tau )^{-n+1} \exp \left(
- \frac{c_{1} \rho^{2}}{t_{0} - \tau}
\right) ,
\]
because $\varphi_{1} (s) \leq \varphi_{1} ( t_{0} - \tau )$ holds for any
$s \in [ 0, \, t_{0} - \tau ]$.
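The monotonicity of $\varphi_{1}$ used here can be checked directly:
\[
\varphi_{1}^{\prime} (s)
= s^{-n-2} \exp \left( - \frac{c_{1} \rho^{2}}{s} \right)
( c_{1} \rho^{2} - n s )
\geq 0
\quad \mbox{for } 0 < s \leq \frac{c_{1} \rho^{2}}{n} ,
\]
so $\varphi_{1}$ is nondecreasing on $( 0 , \, c_{1} \rho^{2} / n ]$; the same computation applies to $\varphi_{2}$ in case (ii).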
On the other hand, if
$c_{1} \rho^{2} / n \leq t_{0} - \tau \leq \rho^{2}$,
then we have
\begin{align*}
\int_{0}^{t_{0} - \tau} \varphi_{1} (s) \, d s
& \leq \int_{0}^{t_{0} - \tau}
\varphi_{1} \left( \frac{c_{1} \rho^{2}}{n} \right)
d s
= \left( \frac{n}{c_{1}} \right)^{n} ( t_{0} - \tau )
\rho^{- 2 n} \exp ( - n ) \\
& \leq \left( \frac{n}{c_{1}} \right)^{n} ( t_{0} - \tau )^{1-n}
\exp \left(
- \frac{c_{1} \rho^{2}}{t_{0} - \tau}
\right),
\end{align*}
where we used the properties that
\begin{align*}
& \varphi_{1} (s)
\leq \varphi_{1} \left( \frac{c_{1} \rho^{2}}{n} \right)
\mbox{ for any } 0 < s \leq t_{0} - \tau ;
\displaybreak[1] \\
& n \geq \frac{c_{1} \rho^{2}}{t_{0} - \tau} \mbox{, and }
\rho^{2} \geq t_{0} - \tau .
\end{align*}
Summing up, we have
\[
I_{0} \leq \max \left\{
1, \, \left( \frac{n}{c_{1}} \right)^{n}
\right\} \lvert B_{1} (0) \rvert
\rho^{n} ( t_{0} - \tau )^{1-n} \exp \left(
- \frac{c_{1} \rho^{2}}{t_{0} - \tau}
\right) .
\]
Let us consider the case (ii).
Then we have $( \sqrt{14} - 1 ) \rho \leq \lvert x - \xi \rvert$
for all $x \in B_{\rho} ( x_{0} )$,
because $\lvert x_{0} - \xi \rvert \geq \sqrt{14} \, \rho$.
Hence we have
\begin{align*}
I_{0}
& \leq \int_{t_{0} - \rho^{2}}^{t_{0}} \int_{B_{\rho} ( x_{0} )}
\frac{1}{( t - \tau )^{n}}
\exp \left(
- \frac{c_{2} \rho^{2}}{t - \tau}
\right)
d x \, d t \\
& = \lvert B_{1} (0) \rvert \rho^{n}
\int_{t_{0} - \rho^{2} - \tau}^{t_{0} - \tau}
\varphi_{2} (s) \,
d s,
\end{align*}
where $\varphi_{2} (s) := s^{-n} \exp ( - c_{2} \rho^{2} / s )$ and
$c_{2} := 2 ( \sqrt{14} - 1 )^{2} c_{\ast}$.
In a similar way to case (i),
if $\rho^{2} \leq t_{0} - \tau \leq c_{2} \rho^{2} / n$,
then we have
\begin{align*}
\int_{t_{0} - \rho^{2} - \tau}^{t_{0} - \tau}
\varphi_{2} (s) \,
d s
& \leq \int_{t_{0} - \rho^{2} - \tau}^{t_{0} - \tau}
\varphi_{2} ( t_{0} - \tau ) \,
d s
= \rho^{2} ( t_{0} - \tau )^{-n}
\exp \left( - \frac{c_{2} \rho^{2}}{t_{0} - \tau} \right) \\
& \leq ( t_{0} - \tau )^{-n+1}
\exp \left( - \frac{c_{2} \rho^{2}}{t_{0} - \tau} \right),
\end{align*}
because $\varphi_{2} (s) \leq \varphi_{2} ( t_{0} - \tau )$
for any $s \in [ t_{0} - \rho^{2} - \tau , \, t_{0} - \tau ]$,
and we have $\rho^{2} \leq t_{0} - \tau$.
On the other hand,
if $c_{2} \rho^{2} / n \leq t_{0} - \tau \leq 2 \rho^{2}$,
then we have
\begin{align*}
\int_{t_{0} - \rho^{2} - \tau}^{t_{0} - \tau}
\varphi_{2} (s) \,
d s
& \leq \int_{t_{0} - \rho^{2} - \tau}^{t_{0} - \tau}
\varphi_{2} \left( \frac{c_{2} \rho^{2}}{n} \right)
d s
= \left( \frac{n}{c_{2}} \right)^{n}
\rho^{-2n+2} \exp ( -n ) \\
& \leq 2^{n-1} \left( \frac{n}{c_{2}} \right)^{n}
( t_{0} - \tau )^{1-n}
\exp \left( - \frac{c_{2} \rho^{2}}{t_{0} - \tau} \right),
\end{align*}
where we used the properties that
\begin{align*}
& \varphi_{2} (s)
\leq \varphi_{2} \left( \frac{c_{2} \rho^{2}}{n} \right)
\mbox{ for any } t_{0} - \rho^{2} - \tau \leq s \leq t_{0} - \tau ;
\displaybreak[1] \\
& n \geq \frac{c_{2} \rho^{2}}{t_{0} - \tau} \mbox{, and }
\rho^{2} \geq \frac{t_{0} - \tau}{2} .
\end{align*}
Summing up, we have
\[
I_{0} \leq \lvert B_{1} (0) \rvert
\max \left\{ 1, \, 2^{n-1} \left( \frac{n}{c_{2}} \right)^{n} \right\}
\rho^{n} ( t_{0} - \tau )^{1-n}
\exp \left( - \frac{c_{2} \rho^{2}}{t_{0} - \tau} \right) .
\]
Finally we consider the case (iii).
We first remark that
\[
\int_{t_{0} - \rho^{2}}^{t_{0}} ( t - \tau )^{-n} \, d t
\leq
\left\{
\begin{aligned}
& \frac{1}{n-1} ( t_{0} - \rho^{2} - \tau )^{-n+1}
& & \mbox{if } n \geq 2, \\
& \log 2 & & \mbox{if } n = 1,
\end{aligned}
\right.
\]
because $t_{0} - \tau \leq 2 ( t_{0} - \rho^{2} - \tau )$.
In particular, we have
\[
\int_{t_{0} - \rho^{2}}^{t_{0}} ( t - \tau )^{-n} \, d t
\leq ( t_{0} - \rho^{2} - \tau )^{-n+1}
\leq 2^{n-1} ( t_{0} - \tau )^{-n+1} .
\]
Hence we have
\begin{align*}
I_{0}
& \leq \lvert B_{1} (0) \rvert \rho^{n}
\int_{t_{0} - \rho^{2}}^{t_{0}} ( t - \tau )^{-n} \, d t
\leq 2^{n-1} \lvert B_{1} (0) \rvert \rho^{n}
( t_{0} - \tau )^{-n+1} \\
& \leq 2^{n-1} \exp (8) \lvert B_{1} (0) \rvert \rho^{n}
( t_{0} - \tau )^{-n+1}
\exp \left(
- \frac{\lvert x_{0} - \xi \rvert^{2}}{t_{0} - \tau}
\right),
\end{align*}
because
\begin{math}
\lvert x_{0} - \xi \rvert^{2} / ( t_{0} - \tau )
\leq ( 4 \rho )^{2} / ( 2 \rho^{2} ) = 8
\end{math}.
Therefore we have the estimate (\ref{eq:I0}) in every case.
\end{proof}
Now we prove Theorem~\ref{theorem:gradientestimate}.
\begin{proof}[Proof of Theorem~\ref{theorem:gradientestimate}]
Let $x_{0} , \xi \in \mathbb{R}^{n}$ and $t_{0} > \tau$
satisfy $\lvert x_{0} - \xi \rvert^{2} + t_{0} - \tau \leq 16$.
Let
\begin{math}
\rho := ( \lvert x_{0} - \xi \rvert^{2} + t_{0} - \tau )^{1/2} / 4
\end{math};
then $\rho \leq 1$.
Then, by Corollary~\ref{corollary:1},
we have
\begin{align*}
& \lVert \nabla_{x} \Gamma ( \cdot , \cdot ; \xi , \tau ) \rVert_{
L^{\infty} (
B_{\rho / 2} ( x_{0} ) \times ( t_{0} - ( \rho / 2 )^{2} , \, t_{0} )
)
} \\
& \leq \frac{C_{\sharp}^{\prime}}{\rho^{n/2 + 2}}
\lVert
\Gamma ( \cdot , \cdot ; \xi , \tau )
\rVert_{L^{2}
( B_{\rho} ( x_{0} ) \times ( t_{0} - \rho^{2} , \, t_{0} ) )
},
\end{align*}
because we have
\[
\frac{\partial \Gamma}{\partial t} ( x, t; \xi , \tau )
- \sum_{i,j=1}^{n} \frac{\partial}{\partial x_{i}} \left(
a_{i j} (x) \frac{\partial \Gamma}{\partial x_{j}}
( x, t; \xi , \tau )
\right) = 0 \mbox{ in }
B_{\rho} ( x_{0} ) \times ( t_{0} - \rho^{2} , t_{0} ] .
\]
By this estimate and Lemma~\ref{lemma:GammaL2a}, we have
\begin{align*}
& \lVert \nabla_{x} \Gamma ( \cdot , \cdot ; \xi , \tau ) \rVert_{
L^{\infty} (
B_{\rho / 2} ( x_{0} ) \times ( t_{0} - ( \rho / 2 )^{2} , \, t_{0} ]
)
} \\
& \leq \frac{C_{\sharp}^{\prime}}{\rho^{n/2 + 2}}
\lVert
\Gamma ( \cdot , \cdot ; \xi , \tau )
\rVert_{L^{2}
( B_{\rho} ( x_{0} ) \times ( t_{0} - \rho^{2} , \, t_{0} ] )
} \displaybreak[1] \\
& \leq \frac{C_{\sharp}^{\prime} C_{\ast}^{\prime}}{\rho^{2}}
\frac{1}{( t_{0} - \tau )^{(n-1)/2}}
\exp \left(
- \frac{c_{\ast}^{\prime} \lvert x_{0} - \xi \rvert^{2}}
{t_{0} - \tau}
\right) \displaybreak[1] \\
& \leq
\frac{16 C_{\sharp}^{\prime} C_{\ast}^{\prime}}
{( t_{0} - \tau )^{(n+1)/2}}
\exp \left(
- \frac{c_{\ast}^{\prime} \lvert x_{0} - \xi \rvert^{2}}
{t_{0} - \tau}
\right),
\end{align*}
because
$\rho^{2} = ( \lvert x_{0} - \xi \rvert^{2} + t_{0} - \tau ) / 16
\geq ( t_{0} - \tau ) / 16$,
that is, $\rho^{-2} \leq 16 ( t_{0} - \tau )^{-1}$.
Hence the proof is completed.
\end{proof}
\renewcommand{\thesection}{Appendix}
\renewcommand{\theequation}{A.\arabic{equation}}
\renewcommand{\thetheorem}{A.\arabic{theorem}}
\section{}
In this Appendix, we show the estimates
(\ref{eq:LinftyL2}) and (\ref{eq:Linfty})
in Lemma~\ref{lemma:keyestimate}
for the sake of completeness.
To begin with,
we give an embedding lemma
which is needed to prove the estimates
(\ref{eq:LinftyL2}) and (\ref{eq:Linfty}).
First, the following Gagliardo--Nirenberg inequality is well known
(see \cite[p.\ 24, Theorem~9.3]{Friedman}, for example).
\begin{lemma}[Gagliardo--Nirenberg inequality]
\label{lemma:GagliardoNirenberg}
Let
$r, s$
be any numbers satisfying
$1 \leq r, s \leq \infty$,
and let
$j, k$
be any integers satisfying
$0 \leq j < k$.
If $u$ is any function in
$W_{s}^{k} ( \mathbb{R}^{n} ) \cap L^{r} ( \mathbb{R}^{n} )$,
then
\begin{equation}\label{eq:GagliardoNirenberg}
\lVert D^{j} u \rVert_{L^{q}( \mathbb{R}^{n} )}
\leq C_{1} \lVert D^{k} u \rVert_{L^{s}( \mathbb{R}^{n} )}^{\gamma}
\lVert u \rVert_{L^{r}( \mathbb{R}^{n} )}^{1 - \gamma} ,
\end{equation}
where
\begin{equation}\label{eq:gamma}
\frac{1}{q} = \frac{j}{n} +
\gamma \left( \frac{1}{s} - \frac{k}{n} \right)
+ \frac{1 - \gamma}{r}
\end{equation}
for all $\gamma$ in the interval
\[
\frac{j}{k} \leq \gamma \leq 1,
\]
where a positive constant $C_{1}$
depends only on $n, k, j, r, s, \gamma$,
with the following exception:
If $k - j - n/s$ is a nonnegative integer,
then {\rm (\ref{eq:GagliardoNirenberg})} holds
only for $j/k \leq \gamma < 1$.
\end{lemma}
Then, as an application of Lemma~\ref{lemma:GagliardoNirenberg}, we have the following embedding lemma.
\begin{lemma}[embedding lemma]\label{lemma:SobolevV2}
Let $H_{0}^{1} (D)$ be the closure of $C_{0}^{\infty} (D)$
in the $L^{2}$-Sobolev space $H^{1} (D)$, and let
\begin{math}
v \in L^{\infty} \bigl( 0, T; L^{2} (D) \bigr)
\cap L^{2} \bigl( 0, T; H_{0}^{1} (D) \bigr)
\end{math}.
Then $v \in L^{2(n+2)/n} (Q)$ holds.
Moreover, we have the estimate
\begin{align}
\lVert v \rVert_{L^{2(n+2)/n} (Q)}
& \leq C_{1}
\lVert v \rVert_{L^{\infty} ( 0, T; L^{2} (D) )}^{2/(n+2)}
\lVert \nabla v \rVert_{L^{2} (Q)}^{n/(n+2)} \notag \\
& \leq C_{1} \left(
\lVert v \rVert_{L^{\infty} ( 0, T; L^{2} (D) )}
+ \lVert \nabla v \rVert_{L^{2} (Q)}
\right) , \label{eq:V2}
\end{align}
where a positive constant $C_{1}$ depends only on $n$,
and we denote $Q := D \times (0,T]$.
\end{lemma}
\begin{proof}
We apply Lemma~\ref{lemma:GagliardoNirenberg}
with $q = 2(n+2)/n$, $r=2$, $s=2$, $k=1$ and $j=0$.
Then the equation (\ref{eq:gamma}) yields $\gamma = n/(n+2)$.
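Indeed, with these choices the relation (\ref{eq:gamma}) reads
\[
\frac{n}{2(n+2)}
= \gamma \left( \frac{1}{2} - \frac{1}{n} \right)
+ \frac{1 - \gamma}{2}
= \frac{1}{2} - \frac{\gamma}{n} ,
\]
which gives $\gamma = n / ( n + 2 )$.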
Hence we have
\[
\lVert v ( \cdot , t ) \rVert_{L^{2(n+2)/n} (D)}
\leq C_{1} \lVert \nabla v ( \cdot , t ) \rVert_{L^{2} (D)}^{n/(n+2)}
\lVert v ( \cdot , t ) \rVert_{L^{2} (D)}^{2/(n+2)} .
\]
Therefore we have
\begin{align*}
\lVert v \rVert_{L^{2(n+2)/n} (Q)}^{2(n+2)/n}
& = \int_{0}^{T}
\lVert v ( \cdot , t ) \rVert_{L^{2(n+2)/n} (D)}^{2(n+2)/n} \,
d t \\
& \leq \int_{0}^{T}
\left(
C_{1} \lVert \nabla v ( \cdot , t ) \rVert_{L^{2} (D)}^{n/(n+2)}
\lVert v ( \cdot , t ) \rVert_{L^{2} (D)}^{2/(n+2)}
\right)^{2(n+2)/n} \,
d t \displaybreak[1] \\
& \leq C_{1}^{2(n+2)/n}
\lVert v \rVert_{L^{\infty} ( 0, T; L^{2} (D) )}^{4/n}
\lVert \nabla v \rVert_{L^{2} (Q)}^{2} .
\end{align*}
By this inequality and Young's inequality,
we have the estimate (\ref{eq:V2}).
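Here Young's inequality is applied with the conjugate exponents $(n+2)/2$ and $(n+2)/n$:
\[
a^{2/(n+2)} b^{n/(n+2)}
\leq \frac{2}{n+2} \, a + \frac{n}{n+2} \, b
\leq a + b
\quad \mbox{for } a, b \geq 0 ,
\]
with $a = \lVert v \rVert_{L^{\infty} ( 0, T; L^{2} (D) )}$
and $b = \lVert \nabla v \rVert_{L^{2} (Q)}$.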
\end{proof}
Based on De~Giorgi's famous argument, we start to estimate solutions to the parabolic equation
(\ref{eq:parabolic}).
By testing (\ref{eq:parabolic}) with $\max \{ u - k, 0 \} \zeta^{2}$, we obtain the following lemma.
\begin{lemma}\label{lemma:DG}
Let $p > 2$.
Let
\begin{math}
Q_{\rho} := B_{\rho} ( x_{0} ) \times ( t_{0} - \rho^{2} , t_{0} ]
\subset Q
\end{math}
and
\begin{math}
\zeta \in C^{\infty} \bigl(
[ t_{0} - \rho^{2} , t_{0} ] ; C_{0}^{\infty} ( B_{\rho} ( x_{0} ) )
\bigr)
\end{math}
satisfy
$0 \leq \zeta \leq 1$
and
$\zeta ( \cdot , t_{0} - \rho^{2} ) = 0$.
Then a solution $u$ to the parabolic equation
{\rm (\ref{eq:parabolic})} satisfies
\begin{align}
& \lVert ( u - k )_{+} \zeta \rVert_{
L^{\infty} ( t_{0} - \rho^{2} , t_{0} ; L^{2} ( B_{\rho} ( x_{0} ) ) )
}^{2}
+ \bigl\lVert
\nabla \bigl( ( u - k )_{+} \zeta \bigr)
\bigr\rVert_{L^{2} ( Q_{\rho} )}^{2} \notag \\
& \leq C_{2} \Biggl[
\left(
\left\lVert
\frac{\partial \zeta}{\partial t}
\right\rVert_{L^{\infty} ( Q_{\rho} )}
+ \lVert \nabla \zeta \rVert_{L^{\infty} ( Q_{\rho} )}^{2}
\right) \lVert ( u - k )_{+} \rVert_{L^{2} ( Q_{\rho} )}^{2}
\notag \\
& \hspace*{30ex} \mbox{}
+ F_{0, \rho}^{2}
\bigl\lvert
Q_{\rho} \cap \{ u(x,t) > k \}
\bigr\rvert^{1 - 2 / p}
\Biggr] \label{eq:DG}
\end{align}
for any $k \in \mathbb{R}$,
where $v_{+} (x) := \max \{ v(x), 0 \}$,
\begin{equation}\label{eq:F0rho}
F_{0, r} := \lVert f \rVert_{L^{\frac{p(n+2)}{n+2+p}} ( Q_{r} )}
+ \sum_{i=1}^{n} \lVert f_{i} \rVert_{L^{p} ( Q_{r} )}
\mbox{ for } r>0
\end{equation}
and $C_{2} > 0$ depends only on $n, \Lambda$ and $\lambda$.
\end{lemma}
\begin{proof}
Multiplying (\ref{eq:parabolic}) by
$( u - k )_{+} \zeta^{2}$
and integrating it over
\begin{math}
Q_{\rho}^{\prime}
:= B_{\rho} ( x_{0} ) \times ( t_{0} - \rho^{2} , t^{\prime} )
\end{math}
(also see Remark~\ref{remark:steklov}),
we have
\begin{align*}
\mbox{(LHS)}
& = \iint_{Q_{\rho}^{\prime}}
\left( \frac{\partial}{\partial t} ( u - k )_{+} \right)
( u - k )_{+} \zeta^{2} \,
d x \, d t \\
& \hspace*{3ex} \mbox{}
- \sum_{i,j=1}^{n}
\iint_{Q_{\rho}^{\prime}}
\frac{\partial}{\partial x_{i}} \left(
a_{i j} \frac{\partial}{\partial x_{j}} ( u - k )_{+}
\right)
( u - k )_{+} \zeta^{2} \,
d x \, d t \displaybreak[1] \\
& = \frac{1}{2}
\iint_{Q_{\rho}^{\prime}}
\left( \frac{\partial}{\partial t} ( u - k )_{+}^{2} \right)
\zeta^{2} \,
d x \, d t \\
& \hspace*{3ex} \mbox{}
+ \sum_{i,j=1}^{n}
\iint_{Q_{\rho}^{\prime}}
a_{i j} \frac{\partial}{\partial x_{j}} ( u - k )_{+}
\frac{\partial}{\partial x_{i}} \bigl( ( u - k )_{+} \zeta^{2} \bigr)
\,
d x \, d t \displaybreak[1] \\
& = \frac{1}{2}
\iint_{Q_{\rho}^{\prime}}
\left[
\frac{\partial}{\partial t} \bigl(
( u - k )_{+}^{2} \zeta^{2}
\bigr)
- 2 ( u - k )_{+}^{2} \zeta \frac{\partial \zeta}{\partial t}
\right]
d x \, d t \\
& \hspace*{5ex} \mbox{}
+ \sum_{i,j=1}^{n}
\iint_{Q_{\rho}^{\prime}}
a_{i j}
\frac{\partial}{\partial x_{j}} \bigl( ( u - k )_{+} \zeta \bigr)
\frac{\partial}{\partial x_{i}} \bigl( ( u - k )_{+} \zeta \bigr) \,
d x \, d t \\
& \hspace*{5ex} \mbox{}
- \sum_{i,j=1}^{n}
\iint_{Q_{\rho}^{\prime}}
a_{i j} ( u - k )_{+}^{2} \frac{\partial \zeta}{\partial x_{j}}
\frac{\partial \zeta}{\partial x_{i}} \,
d x \, d t \displaybreak[1] \\
& =
\frac{1}{2}
\int_{B_{\rho} ( x_{0} )}
( u - k )_{+}^{2} \zeta^{2} \,
d x \biggr|_{t = t^{\prime}}
- \iint_{Q_{\rho}^{\prime}}
( u - k )_{+}^{2} \zeta \frac{\partial \zeta}{\partial t} \,
d x \, d t \\
& \hspace*{5ex} \mbox{}
+ \sum_{i,j=1}^{n}
\iint_{Q_{\rho}^{\prime}}
a_{i j}
\frac{\partial}{\partial x_{j}} \bigl( ( u - k )_{+} \zeta \bigr)
\frac{\partial}{\partial x_{i}} \bigl( ( u - k )_{+} \zeta \bigr) \,
d x \, d t \\
& \hspace*{5ex} \mbox{}
- \sum_{i,j=1}^{n}
\iint_{Q_{\rho}^{\prime}}
a_{i j} ( u - k )_{+}^{2} \frac{\partial \zeta}{\partial x_{j}}
\frac{\partial \zeta}{\partial x_{i}} \,
d x \, d t .
\end{align*}
Hence we have
\begin{align}
& \frac{1}{2} \int_{B_{\rho} ( x_{0} )}
( u - k )_{+}^{2} \zeta^{2} \,
d x \biggr|_{t = t^{\prime}} \notag \\
& \mbox{}
+ \sum_{i,j=1}^{n}
\iint_{Q_{\rho}^{\prime}}
a_{i j}
\frac{\partial}{\partial x_{j}} \bigl( ( u - k )_{+} \zeta \bigr)
\frac{\partial}{\partial x_{i}} \bigl( ( u - k )_{+} \zeta \bigr) \,
d x \, d t \notag \\
& = \iint_{Q_{\rho}^{\prime}}
( u - k )_{+}^{2} \zeta \frac{\partial \zeta}{\partial t} \,
d x \, d t
+ \sum_{i,j=1}^{n}
\iint_{Q_{\rho}^{\prime}}
a_{i j} ( u - k )_{+}^{2} \frac{\partial \zeta}{\partial x_{j}}
\frac{\partial \zeta}{\partial x_{i}} \,
d x \, d t \notag \\
& \hspace*{5ex} \mbox{}
+ \iint_{Q_{\rho}^{\prime}}
f ( u - k )_{+} \zeta^{2} \,
d x \, d t + \sum_{i=1}^{n}
\iint_{Q_{\rho}^{\prime}}
f_{i}
\frac{\partial}{\partial x_{i}} \bigl( ( u - k )_{+} \zeta^{2} \bigr)
\,
d x \, d t . \label{eq:identity1}
\end{align}
We remark that
\begin{align*}
& \left\lvert
\iint_{Q_{\rho}^{\prime}}
f_{i}
\frac{\partial}{\partial x_{i}} \bigl( ( u - k )_{+} \zeta^{2} \bigr)
\,
d x \, d t
\right\rvert \\
& = \Biggl\lvert
\iint_{Q_{\rho}^{\prime} \cap \{ u(x,t) > k \}}
f_{i} \zeta
\frac{\partial}{\partial x_{i}} \bigl( ( u - k )_{+} \zeta \bigr) \,
d x \, d t \\
& \hspace*{5ex} \mbox{}
+ \iint_{Q_{\rho}^{\prime} \cap \{ u(x,t) > k \}}
f_{i} ( u - k )_{+} \zeta
\frac{\partial \zeta}{\partial x_{i}} \,
d x \, d t
\Biggr\rvert \displaybreak[1] \\
& \leq \varepsilon_{1}
\iint_{Q_{\rho}^{\prime} \cap \{ u(x,t) > k \}}
\left\lvert
\frac{\partial}{\partial x_{i}} \bigl( ( u - k )_{+} \zeta \bigr)
\right\rvert^{2} \,
d x \, d t \\
& \hspace*{5ex} \mbox{}
+ \frac{1}{\varepsilon_{1}}
\iint_{Q_{\rho}^{\prime} \cap \{ u(x,t) > k \}}
\lvert f_{i} \zeta \rvert^{2} \,
d x \, d t \\
& \hspace*{5ex} \mbox{}
+ \iint_{Q_{\rho}^{\prime} \cap \{ u(x,t) > k \}}
\lvert f_{i} \zeta \rvert^{2} \,
d x \, d t
+ \iint_{Q_{\rho}^{\prime} \cap \{ u(x,t) > k \}}
( u - k )_{+}^{2}
\left\lvert \frac{\partial \zeta}{\partial x_{i}} \right\rvert^{2} \,
d x \, d t .
\end{align*}
Hence, by (\ref{eq:aijxiixij}) and (\ref{eq:identity1}), we have
\begin{align*}
& \frac{1}{2} \int_{B_{\rho} ( x_{0} )}
( u - k )_{+}^{2} \zeta^{2} \,
d x \biggr|_{t = t^{\prime}}
+ \lambda
\iint_{Q_{\rho}^{\prime}}
\bigl\lvert \nabla \bigl( ( u - k )_{+} \zeta \bigr) \bigr\rvert^{2} \,
d x \, d t \\
& \leq \frac{1}{2} \int_{B_{\rho} ( x_{0} )}
( u - k )_{+}^{2} \zeta^{2} \,
d x \biggr|_{t = t^{\prime}} \\
& \hspace*{5ex} \mbox{}
+ \sum_{i,j=1}^{n}
\iint_{Q_{\rho}^{\prime}}
a_{i j}
\frac{\partial}{\partial x_{j}} \bigl( ( u - k )_{+} \zeta \bigr)
\frac{\partial}{\partial x_{i}} \bigl( ( u - k )_{+} \zeta \bigr) \,
d x \, d t \displaybreak[1] \\
& = \iint_{Q_{\rho}^{\prime}}
( u - k )_{+}^{2} \zeta \frac{\partial \zeta}{\partial t} \,
d x \, d t
+ \sum_{i,j=1}^{n}
\iint_{Q_{\rho}^{\prime}}
a_{i j} ( u - k )_{+}^{2} \frac{\partial \zeta}{\partial x_{j}}
\frac{\partial \zeta}{\partial x_{i}} \,
d x \, d t \\
& \hspace*{5ex} \mbox{}
+ \iint_{Q_{\rho}^{\prime}}
f ( u - k )_{+} \zeta^{2} \,
d x \, d t + \sum_{i=1}^{n}
\iint_{Q_{\rho}^{\prime}}
f_{i}
\frac{\partial}{\partial x_{i}} \bigl( ( u - k )_{+} \zeta^{2} \bigr)
\,
d x \, d t \displaybreak[1] \\
& \leq \left\lVert
\frac{\partial \zeta}{\partial t}
\right\rVert_{L^{\infty} ( Q_{\rho} )}
\iint_{Q_{\rho}^{\prime}}
( u - k )_{+}^{2} \,
d x \, d t
+ \Lambda \lVert \nabla \zeta \rVert_{L^{\infty} ( Q_{\rho} )}^{2}
\iint_{Q_{\rho}^{\prime}}
( u - k )_{+}^{2} \,
d x \, d t \\
& \hspace*{5ex} \mbox{}
+ \iint_{Q_{\rho}^{\prime}}
f ( u - k )_{+} \zeta^{2} \,
d x \, d t
+ \varepsilon_{1}
\iint_{Q_{\rho}^{\prime}}
\left\lvert
\nabla \bigl( ( u - k )_{+} \zeta \bigr)
\right\rvert^{2} \,
d x \, d t \\
& \hspace*{5ex} \mbox{}
+ \left( \frac{1}{\varepsilon_{1}} + 1 \right)
\iint_{Q_{\rho}^{\prime} \cap \{ u(x,t) > k \}}
\sum_{i=1}^{n} \lvert f_{i} \rvert^{2} \,
d x \, d t \\
& \hspace*{5ex} \mbox{}
+ n \lVert \nabla \zeta \rVert_{L^{\infty} ( Q_{\rho} )}^{2}
\iint_{Q_{\rho}^{\prime}}
( u - k )_{+}^{2} \,
d x \, d t ,
\end{align*}
that is,
\begin{align*}
& \frac{1}{2} \int_{B_{\rho} ( x_{0} )}
( u - k )_{+}^{2} \zeta^{2} \,
d x \biggr|_{t = t^{\prime}}
+ ( \lambda - \varepsilon_{1} )
\iint_{Q_{\rho}^{\prime}}
\bigl\lvert \nabla \bigl( ( u - k )_{+} \zeta \bigr) \bigr\rvert^{2} \,
d x \, d t \\
& \leq ( \Lambda + n ) \left(
\left\lVert
\frac{\partial \zeta}{\partial t}
\right\rVert_{L^{\infty} ( Q_{\rho} )}
+ \lVert \nabla \zeta \rVert_{L^{\infty} ( Q_{\rho} )}^{2}
\right)
\iint_{Q_{\rho}^{\prime}}
( u - k )_{+}^{2} \,
d x \, d t \\
& \hspace*{5ex} \mbox{}
+ \left( \frac{1}{\varepsilon_{1}} + 1 \right)
\iint_{Q_{\rho}^{\prime} \cap \{ u(x,t) > k \}}
\sum_{i=1}^{n} \lvert f_{i} \rvert^{2} \,
d x \, d t
+ \iint_{Q_{\rho}^{\prime}}
f ( u - k )_{+} \zeta^{2} \,
d x \, d t.
\end{align*}
Taking the supremum of the inequality over
$( t_{0} - \rho^{2} , t_{0} ]$
with respect to $t^{\prime}$, we have
\begin{align}
& \max \left\{
\frac{1}{2}
\lVert ( u - k )_{+} \zeta \rVert_{
L^{\infty} ( t_{0} - \rho^{2} , t_{0} ; L^{2} ( B_{\rho} ( x_{0} ) ) )
}^{2} , \
( \lambda - \varepsilon_{1} )
\bigl\lVert
\nabla \bigl( ( u - k )_{+} \zeta \bigr)
\bigr\rVert_{L^{2} ( Q_{\rho} )}^{2}
\right\} \notag \\
& \leq
( \Lambda + n ) \left(
\left\lVert
\frac{\partial \zeta}{\partial t}
\right\rVert_{L^{\infty} ( Q_{\rho} )}
+ \lVert \nabla \zeta \rVert_{L^{\infty} ( Q_{\rho} )}^{2}
\right)
\iint_{Q_{\rho}}
( u - k )_{+}^{2} \,
d x \, d t \notag \\
& \hspace*{5ex} \mbox{}
+ \left( \frac{1}{\varepsilon_{1}} + 1 \right)
\iint_{Q_{\rho} \cap \{ u(x,t) > k \}}
\sum_{i=1}^{n} \lvert f_{i} \rvert^{2} \,
d x \, d t
+ \iint_{Q_{\rho}}
f ( u - k )_{+} \zeta^{2} \,
d x \, d t. \label{eq:proofDG1}
\end{align}
Now we estimate
the last two terms on the right-hand side of (\ref{eq:proofDG1}).
First we obtain
\begin{equation}\label{eq:proofDG2}
\iint_{Q_{\rho} \cap \{ u(x,t) > k \}}
\lvert f_{i} \rvert^{2} \,
d x \, d t
\leq \bigl\lvert
Q_{\rho} \cap \{ u(x,t) > k \}
\bigr\rvert^{1 - 2 / p}
\lVert f_{i} \rVert_{L^{p} ( Q_{\rho} )}^{2}
\end{equation}
by H{\"o}lder's inequality.
Now we estimate
$\iint_{Q_{\rho}} f ( u - k )_{+} \zeta^{2} \, d x \, d t$.
We first recall
\begin{align*}
& \lVert ( u - k )_{+} \zeta \rVert_{L^{2(n+2)/n} ( Q_{\rho} )} \\
& \leq C_{1} \left(
\lVert ( u - k )_{+} \zeta \rVert_{
L^{\infty} ( t_{0} - \rho^{2} , t_{0} ; L^{2} ( B_{\rho} ( x_{0} ) ) )
}
+ \bigl\lVert
\nabla \bigl( ( u - k )_{+} \zeta \bigr)
\bigr\rVert_{L^{2} ( Q_{\rho} )}
\right)
\end{align*}
by Lemma~\ref{lemma:SobolevV2},
where $C_{1} > 0$ depends only on $n$.
Then, by this inequality,
H{\"o}lder's inequality and Young's inequality,
we have
\begin{align}
& \iint_{Q_{\rho}} f ( u - k )_{+} \zeta^{2} \, d x \, d t
\notag \\
& \leq
\lVert f \zeta \rVert_{
L^{2(n+2)/(n+4)} ( Q_{\rho} \cap \{ u(x,t) > k \} )
}
\lVert ( u - k )_{+} \zeta \rVert_{L^{2(n+2)/n} ( Q_{\rho} )}
\notag \displaybreak[1] \\
& \leq \varepsilon_{2}
\lVert ( u - k )_{+} \zeta \rVert_{L^{2(n+2)/n} ( Q_{\rho} )}^{2}
+ \frac{1}{\varepsilon_{2}}
\lVert f \zeta \rVert_{
L^{2(n+2)/(n+4)} ( Q_{\rho} \cap \{ u(x,t) > k \} )
}^{2} \notag \displaybreak[1] \\
& \leq 2 \varepsilon_{2} C_{1}^{2}
\max \left\{
\lVert ( u - k )_{+} \zeta \rVert_{
L^{\infty} ( t_{0} - \rho^{2} , t_{0} ; L^{2} ( B_{\rho} ( x_{0} ) ) )
}^{2} , \
\bigl\lVert
\nabla \bigl( ( u - k )_{+} \zeta \bigr)
\bigr\rVert_{L^{2} ( Q_{\rho} )}^{2}
\right\} \notag \\
& \hspace*{5ex} \mbox{}
+ \frac{1}{\varepsilon_{2}}
\lVert f \rVert_{
L^{\frac{p(n+2)}{n+2+p}} ( Q_{\rho} )
}^{2}
\bigl\lvert
Q_{\rho} \cap \{ u(x,t) > k \}
\bigr\rvert^{1 - 2 / p} \label{eq:proofDG3}
\end{align}
because
$2(n+2)/(n+4) < p(n+2)/(n+2+p)$.
By (\ref{eq:proofDG1}), (\ref{eq:proofDG2}) and (\ref{eq:proofDG3}),
we obtain the estimate (\ref{eq:DG}).
\end{proof}
By the same argument applied to the negative part $v_{-} (x) := \max \{ - v(x), 0 \}$, we obtain the following lemma.
\renewcommand{\thelemmaprime}
{\mathversion{bold}\ref{lemma:DG}$^{\prime}$}
\begin{lemmaprime}
Under the same assumption as in Lemma~\ref{lemma:DG},
a solution $u$ to the parabolic equation
{\rm (\ref{eq:parabolic})} satisfies
\begin{align}
& \lVert ( u - k )_{-} \zeta \rVert_{
L^{\infty} ( t_{0} - \rho^{2} , t_{0} ; L^{2} ( B_{\rho} ( x_{0} ) ) )
}^{2}
+ \bigl\lVert
\nabla \bigl( ( u - k )_{-} \zeta \bigr)
\bigr\rVert_{L^{2} ( Q_{\rho} )}^{2} \notag \\
& \leq C_{2} \Biggl[
\left(
\left\lVert
\frac{\partial \zeta}{\partial t}
\right\rVert_{L^{\infty} ( Q_{\rho} )}
+ \lVert \nabla \zeta \rVert_{L^{\infty} ( Q_{\rho} )}^{2}
\right) \lVert ( u - k )_{-} \rVert_{L^{2} ( Q_{\rho} )}^{2} \notag \\
& \hspace*{10ex} \mbox{}
+ F_{0, \rho}^{2}
\bigl\lvert
Q_{\rho} \cap \{ u(x,t) < k \}
\bigr\rvert^{1 - 2 / p}
\Biggr] \label{eq:DGprime}
\end{align}
for any $k \in \mathbb{R}$,
where we define $F_{0, \rho}$ as {\rm (\ref{eq:F0rho})},
and $C_{2} > 0$ depends only on $n, \Lambda$ and $\lambda$.
\end{lemmaprime}
The estimate (\ref{eq:LinftyL2}) easily follows from Lemmas~\ref{lemma:DG} and \ref{lemma:DG}$^{\prime}$.
Our next task is to prove the estimate (\ref{eq:Linfty}).
We start by giving a technical lemma which will be used later.
\begin{lemma}\label{lemma:ym}
Let $\widetilde{C} > 0$, $b > 1$ and $\varepsilon > 0$.
If a sequence $\{ y_{m} \}_{m=0}^{\infty}$ satisfies
\begin{equation}\label{eq:ymcond}
y_{0} \leq \theta_{0}
:= \widetilde{C}^{- 1 / \varepsilon} b^{- 1 / \varepsilon^{2}}
\mbox{ and }
0 \leq y_{m+1} \leq \widetilde{C} b^{m} y_{m}^{1 + \varepsilon} ,
\end{equation}
then
\[
\lim_{m \rightarrow \infty} y_{m} = 0
\]
holds.
\end{lemma}
\begin{proof}
We show
\begin{equation}\label{eq:ymr}
y_{m} \leq \frac{\theta_{0}}{r^{m}} , \
m = 0, 1, 2, \ldots
\end{equation}
by induction,
where $r > 1$ will be chosen later.
By assumption, (\ref{eq:ymr}) holds for $m=0$.
Now assume that (\ref{eq:ymr}) holds for some $m$;
we show that it holds for $m+1$.
By the assumption (\ref{eq:ymcond})
and the induction hypothesis, we have
\[
y_{m+1} \leq \widetilde{C} b^{m} y_{m}^{1 + \varepsilon}
\leq \widetilde{C} b^{m}
\left( \frac{\theta_{0}}{r^{m}} \right)^{1 + \varepsilon}
= \frac{\theta_{0}}{r^{m+1}} \widetilde{C}
b^{m} \frac{\theta_{0}^{\varepsilon}}{r^{m \varepsilon - 1}} .
\]
Now we take $r = b^{1 / \varepsilon}$. Then we have
\[
y_{m+1} \leq
\frac{\theta_{0}}{r^{m+1}} \widetilde{C}
b^{m} \frac{\theta_{0}^{\varepsilon}}{r^{m \varepsilon - 1}}
= \frac{\theta_{0}}{r^{m+1}} \widetilde{C}
r \theta_{0}^{\varepsilon}
= \frac{\theta_{0}}{r^{m+1}} ,
\]
which is (\ref{eq:ymr}) for $m+1$.
\end{proof}
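Although Lemma~\ref{lemma:ym} is stated qualitatively, its proof gives the explicit geometric decay $y_m \leq \theta_0 / r^m$ with $r = b^{1/\varepsilon}$. This can be checked numerically in the extremal case where $y_0 = \theta_0$ and the recursion holds with equality; the constants below are arbitrary illustrative choices, not taken from the text.

```python
# Numerical illustration of Lemma ym: starting at the threshold theta_0
# and running the recursion with equality (the worst case allowed by
# the hypothesis), y_m decays geometrically like theta_0 / r^m.
# The constants C, b, eps are arbitrary illustrative choices.

def iterate_ym(C, b, eps, steps):
    theta0 = C ** (-1.0 / eps) * b ** (-1.0 / eps ** 2)
    y = theta0
    ys = [y]
    for m in range(steps):
        y = C * (b ** m) * y ** (1.0 + eps)   # y_{m+1} = C b^m y_m^{1+eps}
        ys.append(y)
    return theta0, ys

theta0, ys = iterate_ym(C=2.0, b=4.0, eps=0.5, steps=12)
r = 4.0 ** (1.0 / 0.5)                        # r = b^{1/eps} = 16
for m, y in enumerate(ys):
    # with equality in the recursion, the induction gives y_m = theta0 / r^m
    assert abs(y - theta0 / r ** m) <= 1e-9 * theta0 / r ** m
```

Starting strictly below the threshold $\theta_0$ only accelerates the decay.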
We are now ready to show the estimate (\ref{eq:Linfty});
it follows easily from the following lemma.
\begin{lemma}\label{lemma:Linfty}
Let $p > n + 2$.
Then a solution $u$ to {\rm (\ref{eq:parabolic})}
satisfies the estimate
\[
\lVert u \rVert_{L^{\infty} ( Q_{\rho} )}
\leq C_{\rho}
\left(
\lVert u \rVert_{L^{2} ( Q_{2 \rho} )}
+ F_{0, 2 \rho}
\right) ,
\]
where we define $F_{0, 2 \rho}$ by {\rm (\ref{eq:F0rho})},
and $C_{\rho} > 0$ depends only on $n, \lambda , \Lambda , p$
and $\rho$.
\end{lemma}
\begin{proof}
Throughout the proof, $C$ denotes a generic positive constant
depending only on $n, \Lambda , \lambda$ and $p$.
Now, let
$\rho_{m} := ( 1 + 2^{-m} ) \rho$ and
$k_{m} := k ( 2 - 2^{-m} )$
for $m = 0, 1, 2, \ldots$,
where we will determine $k > 0$ later.
For $m = 0, 1, 2, \ldots$,
we take cut-off functions
\begin{math}
\zeta_{m} \in C^{\infty} ( Q_{\rho_{m}} )
\end{math}
which satisfy
\begin{align*}
& 0 \leq \zeta_{m} \leq 1 \mbox{ in } Q_{\rho_{m}} ,
\displaybreak[1] \\
& \zeta_{m} = \left\{
\begin{aligned}
& 1 \mbox{ in } Q_{\rho_{m+1}} , \\
& 0 \mbox{ in }
Q_{\rho_{m}} \setminus Q_{( \rho_{m} + \rho_{m+1} ) / 2} ,
\end{aligned}
\right. \displaybreak[1] \\
& \left\lVert
\frac{\partial \zeta_{m}}{\partial t}
\right\rVert_{L^{\infty} ( Q_{\rho_{m}} )}
+ \lVert \nabla \zeta_{m} \rVert_{L^{\infty} ( Q_{\rho_{m}} )}^{2}
\leq \frac{C}{( \rho_{m} - \rho_{m+1} )^{2}} .
\end{align*}
We remark that, in particular,
$\zeta_{m} = 0$
on
\begin{math}
B_{\rho_{m}} ( x_{0} ) \times \{ t_{0} - \rho_{m}^{2} \}
\cup \partial B_{\rho_{m}} ( x_{0} )
\times ( t_{0} - \rho_{m}^{2} , t_{0} )
\end{math}.
By Lemmas~\ref{lemma:SobolevV2} and \ref{lemma:DG}, we have
\begin{align}
& \lVert
( u - k_{m+1} )_{+} \zeta_{m}
\rVert_{L^{2(n+2)/n} ( Q_{\rho_{m}} )}^{2} \notag \\
& \leq C \Bigl(
\lVert ( u - k_{m+1} )_{+} \zeta_{m} \rVert_{
L^{\infty} (
t_{0} - \rho_{m}^{2} , t_{0} ; L^{2} ( B_{\rho_{m}} ( x_{0} ) )
)
}^{2} \notag \\
& \hspace*{10ex} \mbox{}
+ \bigl\lVert
\nabla \bigl( ( u - k_{m+1} )_{+} \zeta_{m} \bigr)
\bigr\rVert_{L^{2} ( Q_{\rho_{m}} )}^{2}
\Bigr) \notag \displaybreak[1] \\
& \leq C \Biggl[
\left(
\left\lVert
\frac{\partial \zeta_{m}}{\partial t}
\right\rVert_{L^{\infty} ( Q_{\rho_{m}} )}
+ \lVert \nabla \zeta_{m} \rVert_{L^{\infty} ( Q_{\rho_{m}} )}^{2}
\right)
\lVert ( u - k_{m+1} )_{+} \rVert_{L^{2} ( Q_{\rho_{m}} )}^{2}
\notag \\
& \hspace*{10ex} \mbox{}
+ F_{0, \rho_{m}}^{2}
\bigl\lvert
Q_{\rho_{m}} \cap \{ u(x,t) > k_{m+1} \}
\bigr\rvert^{1 - 2 / p}
\Biggr] \notag \displaybreak[1] \\
& \leq C \Biggl[
\frac{2^{2 m}}{\rho^{2}}
\lVert ( u - k_{m+1} )_{+} \rVert_{L^{2} ( Q_{\rho_{m}} )}^{2}
+ F_{0, 2 \rho}^{2}
\bigl\lvert A_{m} ( k_{m+1} ) \bigr\rvert^{1 - 2 / p}
\Biggr] , \label{eq:a001}
\end{align}
where $A_{m} (l) := Q_{\rho_{m}} \cap \{ u(x,t) > l \}$
for $l \in \mathbb{R}$.
Now we take $k > 0$ as
\begin{equation}\label{eq:k1}
k \geq \rho^{1 - (n+2) / p} F_{0, 2 \rho} .
\end{equation}
Then we have
\begin{align*}
& \lVert
( u - k_{m+1} )_{+} \zeta_{m}
\rVert_{L^{2(n+2)/n} ( Q_{\rho_{m}} )}^{2} \\
& \leq C \Biggl[
\frac{2^{2 m}}{\rho^{2}}
\lVert ( u - k_{m+1} )_{+} \rVert_{L^{2} ( Q_{\rho_{m}} )}^{2}
+ \frac{k^{2}}{\rho^{2 ( 1 - (n+2) / p )}}
\bigl\lvert A_{m} ( k_{m+1} ) \bigr\rvert^{1 - 2 / p}
\Biggr]
\end{align*}
by the estimate (\ref{eq:a001}).
By defining
\begin{math}
\varphi_{m} := \lVert
( u - k_{m} )_{+}
\rVert_{L^{2} ( Q_{\rho_{m}} )}^{2},
\end{math}
we have
\begin{align}
\varphi_{m+1}
& = \lVert
( u - k_{m+1} )_{+} \zeta_{m}
\rVert_{L^{2} ( Q_{\rho_{m+1}} )}^{2}
\leq \lVert
( u - k_{m+1} )_{+} \zeta_{m}
\rVert_{L^{2} ( Q_{\rho_{m}} )}^{2} \notag \\
& \leq \lvert A_{m} ( k_{m+1} ) \rvert^{2/(n+2)}
\lVert
( u - k_{m+1} )_{+} \zeta_{m}
\rVert_{L^{2(n+2)/n} ( Q_{\rho_{m}} )}^{2} \notag
\displaybreak[1] \\
& \leq C \lvert A_{m} ( k_{m+1} ) \rvert^{2/(n+2)} \notag \\
& \hspace*{3ex} \mbox{} \times
\Biggl[
\frac{2^{2 m}}{\rho^{2}}
\lVert ( u - k_{m+1} )_{+} \rVert_{L^{2} ( Q_{\rho_{m}} )}^{2}
+ \frac{k^{2}}{\rho^{2 ( 1 - (n+2) / p )}}
\bigl\lvert A_{m} ( k_{m+1} ) \bigr\rvert^{1 - 2 / p}
\Biggr] \notag \displaybreak[1] \\
& \leq C \lvert A_{m} ( k_{m+1} ) \rvert^{2/(n+2)}
\Biggl[
\frac{2^{2 m}}{\rho^{2}} \varphi_{m}
+ \frac{k^{2}}{\rho^{2 ( 1 - (n+2) / p )}}
\bigl\lvert A_{m} ( k_{m+1} ) \bigr\rvert^{1 - 2 / p}
\Biggr] , \label{eq:a002}
\end{align}
where we used H{\"o}lder's inequality
and the estimate
\[
\lVert ( u - k_{m+1} )_{+} \rVert_{L^{2} ( Q_{\rho_{m}} )}^{2}
\leq \lVert ( u - k_{m} )_{+} \rVert_{L^{2} ( Q_{\rho_{m}} )}^{2}
= \varphi_{m} .
\]
On the other hand, we have
\begin{align*}
\varphi_{m}
& = \lVert ( u - k_{m} )_{+} \rVert_{L^{2} ( Q_{\rho_{m}} )}^{2}
\geq \iint_{A_{m} ( k_{m+1} )}
( u - k_{m} )_{+}^{2} \,
d x \, d t \\
& \geq \iint_{A_{m} ( k_{m+1} )}
( k_{m+1} - k_{m} )_{+}^{2} \,
d x \, d t
= \frac{k^{2}}{2^{2m+2}}
\lvert A_{m} ( k_{m+1} ) \rvert ,
\end{align*}
that is,
\begin{equation}\label{eq:a003}
\lvert A_{m} ( k_{m+1} ) \rvert
\leq \frac{2^{2m+2}}{k^{2}} \varphi_{m} .
\end{equation}
By (\ref{eq:a002}) and (\ref{eq:a003}), we have
\begin{align}
\varphi_{m+1}
& \leq C 2^{2 m \left( 1 + \frac{2}{n+2} \right)} \notag \\
& \hspace*{3ex} \mbox{} \times \left[
\rho^{-2} k^{- \frac{4}{n+2}} \varphi_{m}^{1 + \frac{2}{n+2}}
+ \rho^{- 2 \left( 1 - \frac{n+2}{p} \right)}
k^{- \left( \frac{4}{n+2} - \frac{4}{p} \right)}
\varphi_{m}^{1 + \frac{2}{n+2} - \frac{2}{p}}
\right] .\label{eq:a004}
\end{align}
We now take $k$ as
\begin{equation}\label{eq:k2}
k \geq \left(
\frac{1}{\lvert Q_{2 \rho} \rvert}
\iint_{Q_{2 \rho}} u^{2} \, d x \, d t
\right)^{1/2} .
\end{equation}
Then we have
\[
\varphi_{m}
\leq \iint_{Q_{\rho_{m}}} u^{2} \, d x \, d t
\leq \iint_{Q_{2 \rho}} u^{2} \, d x \, d t
\leq \lvert Q_{2 \rho} \rvert k^{2} ,
\]
that is,
\[
\varphi_{m}^{2/p} \leq
\lvert Q_{2 \rho} \rvert^{2/p} k^{4/p} .
\]
By this inequality and (\ref{eq:a004}), we have
\begin{align}
\varphi_{m+1}
& \leq C 2^{2 m \left( 1 + \frac{2}{n+2} \right)}
\varphi_{m}^{1 + \frac{2}{n+2} - \frac{2}{p}}
\left[
\rho^{-2} k^{- \frac{4}{n+2}}
\varphi_{m}^{2/p}
+ \rho^{- 2 \left( 1 - \frac{n+2}{p} \right)}
k^{- \left( \frac{4}{n+2} - \frac{4}{p} \right)}
\right] \notag \\
& \leq C 2^{2 m \left( 1 + \frac{2}{n+2} \right)}
\varphi_{m}^{1 + \frac{2}{n+2} - \frac{2}{p}} \notag \\
& \hspace*{3ex} \mbox{} \times
\left[
\rho^{-2} k^{- \frac{4}{n+2}}
\lvert Q_{2 \rho} \rvert^{2/p} k^{4/p}
+ \rho^{- 2 \left( 1 - \frac{n+2}{p} \right)}
k^{- \left( \frac{4}{n+2} - \frac{4}{p} \right)}
\right] \notag \displaybreak[1] \\
& = C 2^{2 m \left( 1 + \frac{2}{n+2} \right)}
\rho^{- 2 \left( 1 - \frac{n+2}{p} \right)}
k^{- \frac{4}{n+2} \left( 1 - \frac{n+2}{p} \right)}
\varphi_{m}^{1 + \frac{2}{n+2} - \frac{2}{p}} .
\label{eq:a005}
\end{align}
Now we denote
$y_{m} := k^{-2} \lvert Q_{2 \rho} \rvert^{-1} \varphi_{m}$.
Then by (\ref{eq:a005}), we have
\begin{equation}\label{eq:a006}
y_{m+1}
\leq C 2^{2 m \left( 1 + \frac{2}{n+2} \right)}
y_{m}^{1 + \left( \frac{2}{n+2} - \frac{2}{p} \right)} ,
\end{equation}
which is the second condition of (\ref{eq:ymcond})
with
\begin{equation}\label{eq:a008}
\widetilde{C} = C, \
b = 2^{2 \left( 1 + \frac{2}{n+2} \right)}
\mbox{ and } \varepsilon = \frac{2}{n+2} - \frac{2}{p} .
\end{equation}
By Lemma~\ref{lemma:ym}, we have
$\lim_{m \rightarrow \infty} y_{m} = 0$
provided that
\begin{equation}\label{eq:a007}
y_{0} \leq C^{- 1 / \varepsilon} b^{- 1 / \varepsilon^{2}}
=: \theta_{0} ,
\end{equation}
where $b$ and $\varepsilon$ are defined by (\ref{eq:a008})
and $C$ is the constant in (\ref{eq:a006}).
We remark that the condition (\ref{eq:a007}) is equivalent to
\begin{equation}\label{eq:a009}
\lVert ( u - k )_{+} \rVert_{L^{2} ( Q_{2 \rho} )}^{2}
\leq \theta_{0} k^{2} \lvert Q_{2\rho} \rvert .
\end{equation}
Now we take $k$ as
\begin{equation}\label{eq:k3}
k^{2} \geq \frac{1}{\theta_{0} \lvert Q_{2\rho} \rvert}
\lVert u \rVert_{L^{2} ( Q_{2 \rho} )}^{2} .
\end{equation}
Then the condition (\ref{eq:a009}), i.e.\
the condition (\ref{eq:a007}), is satisfied.
Summing up, if we take $k$ such that
the conditions (\ref{eq:k1}), (\ref{eq:k2}) and (\ref{eq:k3}) are satisfied,
then we have
\begin{math}
\lim_{m \rightarrow \infty} y_{m} = 0
\end{math}.
On the other hand, we have
\begin{align*}
y_{m}
& = \frac{1}{k^{2} \lvert Q_{2 \rho} \rvert} \varphi_{m}
= \frac{1}{k^{2} \lvert Q_{2 \rho} \rvert}
\lVert
( u - k_{m} )_{+}
\rVert_{L^{2} ( Q_{\rho_{m}} )}^{2} \\
& \rightarrow \frac{1}{k^{2} \lvert Q_{2 \rho} \rvert}
\lVert
( u - 2 k )_{+}
\rVert_{L^{2} ( Q_{\rho} )}^{2}
\mbox{ as } m \rightarrow \infty .
\end{align*}
Then we have
$\lVert ( u - 2 k )_{+} \rVert_{L^{2} ( Q_{\rho} )}^{2} = 0$,
that is,
\begin{equation}\label{eq:Linftyresult}
u \leq 2 k \mbox{ a.e.\ in } Q_{\rho} .
\end{equation}
Now we take $k$ as
\[
k = \frac{1}{\sqrt{\theta_{0} \lvert Q_{2 \rho} \rvert}}
\lVert u \rVert_{L^{2} ( Q_{2 \rho})}
+ \rho^{1 - (n+2) / p} F_{0, 2 \rho} ,
\]
which satisfies
the conditions (\ref{eq:k1}), (\ref{eq:k2}) and (\ref{eq:k3}).
Hence we have (\ref{eq:Linftyresult}), which is
\[
\sup_{Q_{\rho}} u \leq C_{\rho}
\left(
\lVert u \rVert_{L^{2} ( Q_{2 \rho} )} + F_{0, 2 \rho}
\right) .
\]
Replacing Lemma~\ref{lemma:DG}
by Lemma~\ref{lemma:DG}$^{\prime}$
and doing the same argument, we can obtain
\[
- u \leq C_{\rho}
\left(
\lVert u \rVert_{L^{2} ( Q_{2 \rho} )} + F_{0, 2 \rho}
\right) \mbox{ in } Q_{\rho}
\]
and this completes the proof.
\end{proof}
\section*{Acknowledgments}
The first author is supported
by Postdoctoral Fellowship for Foreign Researchers
from the Japan Society for the Promotion of Science.
The third and fourth authors are supported
by Research Fellowships of the Japan Society
for the Promotion of Science for Young Scientists and Grant-in-Aid for Scientific Research (B)
(No.\ 22340023) of Japan Society for Promotion of Science, respectively.
\end{document}
\begin{document}
\title{Limit $T$-subspaces and the central polynomials in $n$ variables of the Grassmann algebra}
\begin{abstract}
Let $F \langle X \rangle$ be the free unitary associative algebra
over a field $F$ on the set $X = \{ x_1,x_2, \ldots \}$. A vector
subspace $V$ of $F \langle X \rangle$ is called a
\emph{$T$-subspace} (or a \emph{$T$-space}) if $V$ is closed under
all endomorphisms of $F \langle X \rangle$. A $T$-subspace $V$ in
$F \langle X \rangle$ is \emph{limit} if every larger $T$-subspace
$W \gneqq V$ is finitely generated (as a $T$-subspace) but $V$
itself is not. Recently Brand\~ao Jr., Koshlukov, Krasilnikov and
Silva have proved that over an infinite field $F$ of
characteristic $p>2$ the $T$-subspace $C(G)$ of the central
polynomials of the infinite dimensional Grassmann algebra $G$ is a
limit $T$-subspace. They conjectured that this limit $T$-subspace
in $F \langle X \rangle$ is unique, that is, there are no limit
$T$-subspaces in $F \langle X \rangle$ other than $C(G)$. In the
present article we prove that this is not the case. We construct
infinitely many limit $T$-subspaces $R_k$ $(k \ge 1)$ in the
algebra $F \langle X \rangle$ over an infinite field $F$ of
characteristic $p>2$. For each $k \ge 1$, the limit $T$-subspace
$R_k$ arises from the central polynomials in $2k$ variables of the
Grassmann algebra $G$.
\end{abstract}
\noindent \textbf{2000 AMS MSC Classification:} 16R10, 16R40, 16R50
\noindent \textbf{Keywords:} polynomial identities, central polynomials, Grassmann algebra, $T$-subspace
\section{Introduction}
Let $F$ be a field, $X$ a non-empty set and let $F \langle X
\rangle$ be the free unitary associative algebra over $F$ on the
set $X$. Recall that a \textit{T-ideal} of $F \langle X \rangle$
is an ideal closed under all endomorphisms of $F \langle X
\rangle$. Similarly, a \emph{$T$-subspace} (or a \emph{$T$-space})
is a vector subspace in $F \langle X \rangle$ closed under all
endomorphisms of $F \langle X \rangle$.
Let $I$ be a $T$-ideal in $F \langle X \rangle$. A subset $S
\subset I$ \textit{generates $I$ as a $T$-ideal} if $I$ is the
minimal $T$-ideal in $F \langle X \rangle$ containing $S$. A
$T$-subspace of $F \langle X \rangle$ generated by $S$ (as a
$T$-subspace) is defined in a similar way. It is clear that the
$T$-ideal ($T$-subspace) generated by $S$ is the ideal (vector
subspace) generated by all the polynomials $f(g_1,\ldots, g_m)$,
where $f=f(x_1, \ldots , x_m) \in S$ and $g_i\in F \langle X
\rangle$ for all $i$.
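The description above — the $T$-subspace generated by $S$ is spanned by all substitution instances $f(g_1, \ldots , g_m)$ — can be made concrete. The following sketch (our own illustrative encoding, not from the paper) represents elements of $F \langle X \rangle$ as dictionaries mapping words in the variables to coefficients, and applies an endomorphism by substituting for the variables:

```python
from itertools import product

# A noncommutative polynomial is a dict mapping a word (tuple of
# variable indices) to its coefficient; the empty word () encodes 1.
# This encoding and all helper names below are ours.

def pmul(f, g):
    out = {}
    for (u, a), (v, b) in product(f.items(), g.items()):
        w = u + v                      # concatenation of words
        out[w] = out.get(w, 0) + a * b
    return {w: c for w, c in out.items() if c}

def padd(f, g):
    out = dict(f)
    for w, c in g.items():
        out[w] = out.get(w, 0) + c
    return {w: c for w, c in out.items() if c}

def substitute(f, images):
    """Apply the endomorphism x_i -> images[i] to the polynomial f."""
    out = {}
    for w, c in f.items():
        term = {(): c}
        for i in w:
            term = pmul(term, images[i])
        out = padd(out, term)
    return out

def x(i):                              # the variable x_i
    return {(i,): 1}

def comm(f, g):                        # [f, g] = fg - gf
    return padd(pmul(f, g), {w: -c for w, c in pmul(g, f).items()})

# the image of [x1, x2] under x1 -> x1 x2, x2 -> x3 + x1
f = comm(x(1), x(2))
im = {1: pmul(x(1), x(2)), 2: padd(x(3), x(1))}
g = substitute(f, im)
```

Since `substitute` extends multiplicatively over words, it is an algebra endomorphism, so the image `g` coincides with $[g_1, g_2]$ computed directly from the substituted elements.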
Note that if $I$ is a $T$-ideal in $F \langle X \rangle$ then
$T$-ideals and $T$-subspaces can be defined in the quotient
algebra $F \langle X \rangle/I$ in a natural way. We refer to
\cite{drbook, drformanekbook, gz, K-BR, kemerbook, rowenbook} for
the terminology and basic results concerning $T$-ideals and
algebras with polynomial identities and to \cite{BOR_cp, BKKS,
GrishCyb, grshchi, K-BR} for an account of the results concerning
$T$-subspaces.
From now on we write $X$ for $\{ x_1, x_2, \ldots \}$ and $X_n$
for $\{ x_1, \ldots , x_n \}$, $X_n \subset X$. If $F$ is a field
of characteristic $0$ then every $T$-ideal in $F \langle X
\rangle$ is finitely generated (as a $T$-ideal); this is a
celebrated result of Kemer \cite {Kemer, kemerbook} that solves
the Specht problem. Moreover, over such a field $F$ each
$T$-subspace in $F \langle X \rangle$ is finitely generated; this
has been proved more recently by Shchigolev \cite{Shchigolev01}.
Very recently Belov \cite{Belov10} has proved that, for each
Noetherian commutative and associative unitary ring $K$ and each
$n \in \mathbb N$, each $T$-ideal in $K \langle X_n \rangle$ is
finitely generated.
On the other hand, over a field $F$ of characteristic $p>0$ there
are $T$-ideals in $F \langle X \rangle$ that are not finitely
generated. This has been proved by Belov \cite{Bel99}, Grishin
\cite{Grishin99} and Shchigolev \cite{Shchigolev99} (see also
\cite{Bel00, Grishin00, K-BR}). The construction of such
$T$-ideals uses the non-finitely generated $T$-subspaces in $F
\langle X \rangle$ constructed by Grishin \cite{Grishin99} for
$p=2$ and by Shchigolev \cite{Shchigolev00} for $p>2$ (see also
\cite{Grishin00}). Shchigolev \cite{Shchigolev00} also constructed
non-finitely generated $T$-subspaces in $F \langle X_n \rangle$,
where $n>1$ and $F$ is a field of characteristic $p>2$.
A $T$-subspace $V^*$ in $F \langle X \rangle$ is called
\emph{limit} if every larger $T$-subspace $W \gneqq V^*$ is
finitely generated as a $T$-subspace but $V^*$ itself is not. A
\textit{limit $T$-ideal} is defined in a similar way. It follows
easily from Zorn's lemma that if a $T$-subspace $V$ is not
finitely generated then it is contained in some limit $T$-subspace
$V^*$. Similarly, each non-finitely generated $T$-ideal is
contained in a limit $T$-ideal. In this sense limit $T$-subspaces
($T$-ideals) form a ``border'' between those $T$-subspaces
($T$-ideals) which are finitely generated and those which are not.
By \cite{Bel99, Grishin99, Shchigolev99}, over a field $F$ of
characteristic $p>0$ the algebra $F \langle X \rangle$ contains
non-finitely generated $T$-ideals; therefore, it contains at least
one limit $T$-ideal. No example of a limit $T$-ideal is known so
far. Even the cardinality of the set of limit $T$-ideals in $F
\langle X \rangle$ is unknown; it is possible that, for a given
field $F$ of characteristic $p>0$, there is only one limit
$T$-ideal. The non-finitely generated $T$-ideals constructed in
\cite{AladovaKras} come closer to being limit than any other known
non-finitely generated $T$-ideal. However, it is unlikely that
these $T$-ideals are limit.
About limit $T$-subspaces in $F \langle X \rangle$ we know more
than about limit $T$-ideals. Recently Brand\~ao Jr., Koshlukov,
Krasilnikov and Silva \cite{BKKS} have found the first example of
a limit $T$-subspace in $F \langle X \rangle$ over an infinite
field $F$ of characteristic $p>2$. To state their result precisely
we need some definitions.
For an associative algebra $A$, let $Z(A)$ denote the centre of
$A$,
\[
Z(A) = \{ z \in A \mid za= az \mbox{ for all } a \in A \}.
\]
A polynomial $f(x_1,\ldots,x_n)$ is \emph{a central polynomial}
for $A$ if $f(a_1,\ldots, a_n) \in Z(A)$ for all $a_1, \dots , a_n
\in A$. For a given algebra $A$, its central polynomials form a
$T$-subspace $C(A)$ in $F \langle X \rangle$. However, not every
$T$-subspace can be obtained as the $T$-subspace of the central
polynomials of some algebra.
Let $V$ be a vector space over a field $F$ of characteristic
$\ne 2$ with a countably infinite basis $e_1$, $e_2, \dots $ and
let $V_s$ denote the subspace of $V$ spanned by $e_1, \ldots , e_s$
$(s = 2,3 , \ldots ) .$ Let $G$ and $G_s$ denote the unitary
Grassmann algebras of $V$ and $V_s$, respectively. Then as a
vector space $G$ has a basis that consists of 1 and of all
monomials $e_{i_1}e_{i_2}\ldots e_{i_k}$, $i_1<i_2<\cdots<i_k$,
$k\ge 1$. The multiplication in $G$ is induced by $e_ie_j=-e_je_i$
for all $i$ and $j$. The algebra $G_s$ is the subalgebra of $G$
generated by $e_1, \ldots ,e_s$, and $ \mbox{dim }G_s = 2^s$. We
refer to $G$ and $G_s$ $(s = 2,3, \ldots )$ as the infinite
dimensional Grassmann algebra and the finite dimensional Grassmann
algebras, respectively.
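The multiplication rule $e_i e_j = - e_j e_i$ determines the sign of any product of basis monomials: it is $(-1)$ raised to the number of transpositions needed to merge the two sorted index sets. A minimal computational sketch (our own encoding, purely illustrative):

```python
# A minimal model of the Grassmann algebra G_s: a basis monomial
# e_{i_1} ... e_{i_k} (i_1 < ... < i_k) is encoded as the frozenset
# {i_1, ..., i_k}; the empty set encodes 1.  All names here are ours.

def mono_mul(S, T):
    """Product of basis monomials e_S e_T: (resulting monomial, sign)."""
    if S & T:                                  # e_i^2 = 0
        return None, 0
    # sign of the shuffle merging sorted S and sorted T
    inversions = sum(1 for s in S for t in T if s > t)
    return S | T, -1 if inversions % 2 else 1

def mul(a, b):
    """Multiply elements given as {frozenset_of_indices: coefficient}."""
    out = {}
    for S, cs in a.items():
        for T, ct in b.items():
            U, sign = mono_mul(S, T)
            if sign:
                out[U] = out.get(U, 0) + sign * cs * ct
    return {U: c for U, c in out.items() if c}

# generators e_1, ..., e_4 of G_4
e = {i: {frozenset([i]): 1} for i in range(1, 5)}
assert mul(e[1], e[2]) == {frozenset({1, 2}): 1}
assert mul(e[2], e[1]) == {frozenset({1, 2}): -1}   # e_2 e_1 = -e_1 e_2
assert mul(e[1], e[1]) == {}                        # e_1^2 = 0
```

Since the basis monomials of $G_s$ correspond to the subsets of $\{ 1, \ldots , s \}$, this encoding also makes $\mbox{dim }G_s = 2^s$ transparent.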
The result of \cite{BKKS} concerning a limit $T$-subspace is as
follows:
\begin{theorem}[\cite{BKKS}]
\label{C_limit} Let $F$ be an infinite field of characteristic
$p>2$ and let $G$ be the infinite dimensional Grassmann algebra
over $F$. Then the vector space $C(G)$ of the central polynomials
of the algebra $G$ is a limit $T$-space in $F \langle X \rangle$.
\end{theorem}
It was conjectured in \cite{BKKS} that a limit $T$-subspace in $F
\langle X \rangle$ is unique, that is, $C(G)$ is the only limit
$T$-subspace in $F \langle X \rangle$. In the present article we
show that this is not the case. Our first main result is as
follows.
\begin{theorem}
\label{theorem_main1} Over an infinite field $F$ of characteristic
$p>2$ the algebra $F \langle X \rangle$ contains infinitely many
limit $T$-subspaces.
\end{theorem}
Let $F$ be an infinite field of characteristic $p >0.$ In order to
prove Theorem \ref{theorem_main1} and to find infinitely many
limit $T$-subspaces in $F \langle X \rangle$ we first find limit
$T$-subspaces in $F \langle X_n \rangle$ for $n = 2 k$, $k \ge 1$.
Let $C_n = C(G) \cap F \langle X_n \rangle$ be the set of the
central polynomials in at most $n$ variables of the unitary Grassmann
algebra $G$. Our second main result is as follows.
\begin{theorem}
\label{theorem_main2} Let $F$ be an infinite field of
characteristic $p>2.$ If $n = 2k$, $k \ge 1$, then $C_{n}$ is a
limit $T$-subspace in $F \langle X_{n} \rangle$. If $n = 2k+1$, $k
>1$, then $C_n$ is finitely generated as a $T$-subspace in $F
\langle X_n \rangle$.
\end{theorem}
\noindent \textbf{Remark. } We do not know whether the
$T$-subspace $C_3$ is finitely generated.
Define $[a,b]=ab-ba$, $[a,b,c] = [[a,b],c]$. For $k \ge 1$, let
$T^{(3,k)}$ denote the $T$-ideal in $F \langle X \rangle$
generated by $[x_1,x_2,x_3]$ and $[x_1,x_2][x_3,x_4] \ldots
[x_{2k-1},x_{2k}]$ and let $R_k$ denote the $T$-subspace in $F \langle X
\rangle$ generated by $C_{2k}$ and $T^{(3,k+1)}$. Theorem
\ref{theorem_main1} follows immediately from our third main result
that is as follows.
\begin{theorem}
\label{theorem_main3} Let $F$ be an infinite field of
characteristic $p>2.$ For each $k \ge 1$, $R_k$ is a limit
$T$-subspace in $F \langle X \rangle$. If $k \ne l$ then $R_k \ne
R_l$.
\end{theorem}
Now we modify the conjecture made in \cite{BKKS}.
\begin{problem}
Let $F$ be an infinite field of characteristic $p>2$. Is each
limit $T$-subspace in $F \langle X \rangle$ equal to either $C(G)$
or $R_k$ for some $k$? In other words, are $C(G)$ and $R_k$ $(k
\ge 1)$ the only limit $T$-subspaces in $F \langle X \rangle$?
\end{problem}
In the proof of Theorems \ref{theorem_main2} and
\ref{theorem_main3} we will use the following theorem that has
been proved independently by Bekh-Ochir and Rankin \cite{BOR_cp},
by Brand\~ao Jr., Koshlukov, Krasilnikov and Silva \cite{BKKS} and
by Grishin \cite{Grishin10}. Let
\[
q(x_1,x_2)=x_1^{p-1}[x_1,x_2]x_2^{p-1}, \qquad
q_k(x_1,\dots,x_{2k})=q(x_1,x_{2}) \cdots q(x_{2k-1},x_{2k}).
\]
\begin{theorem}[\cite{BOR_cp}, \cite{BKKS}, \cite{Grishin10}]
\label{generators_of_C} Over an infinite field $F$ of
characteristic $p>2$, the vector space $C(G)$ of the central
polynomials of $G$ is generated (as a $T$-space in $F \langle X
\rangle $) by the polynomial
\[
x_1[x_2,x_3,x_4]
\]
and the polynomials
\[
x_1^p \ , \ x_1^p \, q_1(x_2,x_3) \ , \ x_1^p \, q_2(x_2,x_3,x_4,
x_5) \ , \ldots ,\ x_1^p \, q_n(x_2, \ldots , x_{2n+1}) \, ,
\ldots .
\]
\end{theorem}
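As a sanity check (not a proof), one can verify Theorem~\ref{generators_of_C} on random substitutions in a small Grassmann algebra: over the field with three elements, the values of $x_1^p$ and $x_1^p \, q_1(x_2,x_3)$ in $G_4$ should commute with everything. The sketch below uses our own encoding of $G_4$ (monomials as frozensets of generator indices) and checks only these first two generators:

```python
import random
from itertools import combinations

# Sanity check over GF(3) in G_4: random substitution instances of the
# generators x_1^p and x_1^p q_1(x_2, x_3) of C(G) should be central.
# The encoding and all helper names are ours; an illustration, not a proof.

p = 3

def mono_mul(S, T):
    if S & T:
        return None, 0
    inv = sum(1 for s in S for t in T if s > t)
    return S | T, -1 if inv % 2 else 1

def mul(a, b):
    out = {}
    for S, cs in a.items():
        for T, ct in b.items():
            U, sg = mono_mul(S, T)
            if sg:
                out[U] = (out.get(U, 0) + sg * cs * ct) % p
    return {U: c for U, c in out.items() if c}

def add(a, b):
    out = dict(a)
    for S, c in b.items():
        out[S] = (out.get(S, 0) + c) % p
    return {S: c for S, c in out.items() if c}

def comm(a, b):                                # [a, b] = ab - ba
    return add(mul(a, b), {S: (-c) % p for S, c in mul(b, a).items()})

def power(a, n):
    r = {frozenset(): 1}
    for _ in range(n):
        r = mul(r, a)
    return r

BASIS = [frozenset(c) for k in range(5) for c in combinations(range(1, 5), k)]

def rand_elt():
    return {S: c for S in BASIS if (c := random.randrange(p))}

def is_central(z, trials=20):
    return all(not comm(z, rand_elt()) for _ in range(trials))

random.seed(0)
for _ in range(10):
    x1, x2, x3 = rand_elt(), rand_elt(), rand_elt()
    assert is_central(power(x1, p))            # x_1^p is central
    q1 = mul(mul(power(x2, p - 1), comm(x2, x3)), power(x3, p - 1))
    assert is_central(mul(power(x1, p), q1))   # x_1^p q_1(x_2,x_3) is central
```

The check for $x_1^p$ reflects the familiar computation for $a = c + d$ with even part $c$ and odd part $d$: since $d^2 = 0$ in characteristic $\ne 2$ and the binomial coefficients vanish modulo $p$, one gets $a^p = c^p$, which is central.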
In order to prove Theorems \ref{theorem_main2} and
\ref{theorem_main3} we need some auxiliary results. Define, for
each $l \ge 0$,
\[
q^{(l)}(x_1,x_2)= x_1^{p^l-1}[x_1,x_2]x_2^{p^l-1},
\]
\[
q_{k}^{(l)}(x_1,\dots,x_{2k})= q^{(l)}(x_1,x_{2}) \cdots
q^{(l)}(x_{2k-1},x_{2k}).
\]
Recall that $C_n = C(G) \cap F \langle X_n \rangle$. To prove
Theorem \ref{theorem_main2} we need the following assertions that
are also of independent interest.
\begin{proposition}
\label{generators_of_C_n} If $n=2k$, $k>1$, then $C_n$ is
generated as a $T$-subspace in $F \langle X_n \rangle$ by the
polynomials
\[
x_1[x_2,x_3,x_4], \quad x_1^p, \quad x_1^p q_1(x_2,x_3), \quad
\ldots , \quad x_1^p q_{k-1}(x_2, \ldots , x_{2k-1})
\]
together with the polynomials
\[
\{ q_k^{(l)}(x_1, \ldots , x_{2k}) \mid l=1,2, \ldots \} .
\]
If $n=2k+1$, $k>1$, then $C_n$ is generated as a $T$-subspace in
$F \langle X_n \rangle$ by the polynomials
\[
x_1[x_2,x_3,x_4], \quad x_1^p, \quad x_1^p q_1(x_2,x_3), \quad
\ldots , \quad x_1^p q_{k}(x_2, \ldots , x_{2k+1}).
\]
\end{proposition}
Let $T^{(3)}$ denote the $T$-ideal in $F \langle X \rangle$
generated by $[x_1,x_2,x_3].$ Define $T^{(3)}_n = T^{(3)} \cap F
\langle X_n \rangle$. We deduce Proposition
\ref{generators_of_C_n} from the following.
\begin{proposition}
\label{generators_of_C_n/T3n} If $n=2k$, $k \ge 1$, then
$C_n/T^{(3)}_n$ is generated as a $T$-subspace in $F \langle X_n
\rangle /T^{(3)}_n$ by the polynomials
\begin{equation}\label{gen_C_n/T3n_1}
x_1^p + T^{(3)}_n, \quad x_1^p q_1(x_2,x_3)+T^{(3)}_n, \quad
\ldots , \quad x_1^p q_{k-1}(x_2, \ldots , x_{2k-1}) + T^{(3)}_n
\end{equation}
together with the polynomials
\begin{equation}\label{gen_C_n/T3n_2}
\{ q_k^{(l)}(x_1, \ldots , x_{2k})+ T^{(3)}_n \mid l=1,2, \ldots
\} .
\end{equation}
If $n=2k+1$, $k \ge 1$, then the $T$-subspace $C_n/T^{(3)}_n$ in
$F \langle X_n \rangle/T^{(3)}_n$ is generated by the polynomials
\begin{equation}\label{gen_C_n/T3n_3}
x_1^p + T^{(3)}_n, \quad x_1^p q_1(x_2,x_3)+T^{(3)}_n, \quad
\ldots , \quad x_1^p q_{k}(x_2, \ldots , x_{2k+1})+ T^{(3)}_n.
\end{equation}
\end{proposition}
\noindent \textbf{Remarks. } 1. For each $k \ge 1$, the limit
$T$-subspace $R_k$ does not coincide with the $T$-subspace $C(A)$
of all central polynomials of any algebra $A$.
Indeed, suppose that $R_k = C(A)$ for some $A$. Let $T(A)$ be the
$T$-ideal of all polynomial identities of $A$. Then, for each $f
\in C(A)$ and each $g \in F \langle X \rangle$, we have $[f,g] \in
T(A)$. Since $[x_1,x_2] \in R_k = C(A)$, we have $[x_1,x_2,x_3]
\in T(A)$. It follows that $T^{(3)} \subseteq T(A)$.
It is well-known that if a $T$-ideal $T$ in the free unitary
algebra $F \langle X \rangle$ over an infinite field $F$ contains
$T^{(3)}$ then either $T = T^{(3)}$ or $T = T^{(3, n)}$ for some
$n$ (see, for instance, \cite[Proof of Corollary 7]{gk}). Hence,
either $T(A) = T^{(3)}$ or $T(A) = T^{(3, n)}$ for some $n$. Note
that $T^{(3)} = T(G)$ and $T^{(3,n)} = T(G_{2n-1})$ (see, for
example, \cite{gk}) so we have either $T(A) = T(G)$ or $T(A) =
T(G_{2n-1})$ for some $n$.
For an associative algebra $B$, we have $f(x_1, \ldots , x_r) \in C(B)$ if and only if $[ f(x_1, \ldots , x_r), x_{r+1} ] \in T(B)$. It follows that if $B_1, B_2$ are algebras such that $T(B_1) = T(B_2)$ then $C(B_1) = C(B_2)$. In particular, if $T(A) = T(G)$ then $C(A) = C(G)$, and if $T(A) = T(G_{2n-1})$ then $C(A) = C(G_{2n-1})$.
However,
\[
x_1[x_2,x_3] \ldots [x_{2k+2},x_{2k+3}] \in R_k \setminus C(G)
\]
so $R_k \ne C(G)$. Furthermore, the $T$-subspaces $C(G_s)$ of the
central polynomials of the finite dimensional Grassmann algebras
$G_s$ $(s =2, 3, \ldots )$ have been described recently by
Bekh-Ochir and Rankin \cite{BOR_cpfd} and by Koshlukov,
Krasilnikov and Silva \cite{KKS}; these $T$-subspaces are finitely
generated and do not coincide with $R_k$. This contradiction
proves that $R_k \ne C(A)$ for any algebra $A$, as claimed.

2. For an associative unitary algebra $A$, let $C_n (A)$ and $T_n(A)$
denote the set of the central polynomials and the set of the
polynomial identities in $n$ variables $x_1, \ldots , x_n$ of $A$,
respectively; that is, $C_n (A) = C(A) \cap F \langle X_n \rangle$
and $T_n (A) = T(A) \cap F \langle X_n \rangle$. Then $C_n(A)$ is
a $T$-subspace and $T_n(A)$ is a $T$-ideal in $F \langle X_n \rangle$.
Note that, by Belov's result \cite{Belov10}, the $T$-ideal $T_n(A)$ is
finitely generated for each algebra $A$ over a Noetherian ring and
each positive integer $n$.
On the other hand, there exist unitary algebras $A$ over
an infinite field $F$ of characteristic $p>2$ such that, for some
$n>1$, the $T$-subspace $C_n (A)$ of the central polynomials of
$A$ in $n$ variables is not finitely generated. Moreover, such
an algebra $A$ can be finite dimensional. Indeed, take $A = G_s$,
where $s \ge n$. It can be checked that $C(G_s) \cap F \langle X_n
\rangle = C_n$ if $s \ge n$. By Proposition \ref{C2k_0}, the
$T$-subspace $C_{2k} (G_s)$ in $F \langle X_{2k} \rangle$ is
not finitely generated provided that $s \ge 2k$.
However, the following problem remains open.
\begin{problem}
Does there exist a finite dimensional algebra $A$ over an infinite
field $F$ of characteristic $p >0$ such that the $T$-subspace
$C(A)$ of all central polynomials of $A$ in $F \langle X \rangle$
is not finitely generated?
\end{problem}
Note that a similar problem for the $T$-ideal $T(A)$ of all
polynomial identities of a finite
dimensional algebra $A$ over an infinite field $F$ of characteristic
$p >0$ remains open as well; it is one of the most interesting and
long-standing open problems in the area.
\section{Preliminaries}
Let $\langle S \rangle^{TS}$ denote the $T$-subspace generated by
a set $S \subseteq F \langle X \rangle$. Then $\langle S
\rangle^{TS}$ is the span of all polynomials $f(g_1,\ldots, g_n)$,
where $f\in S$ and $g_i\in F \langle X \rangle$ for all $i$. It is
clear that for any polynomials $f_1,$ \dots, $f_s \in F \langle X
\rangle$ we have $\langle f_1, \dots, f_s \rangle^{TS}= \langle
f_1 \rangle^{TS}+\dots+\langle f_s \rangle^{TS}.$
Recall that a polynomial $f(x_1, \ldots , x_n) \in F \langle X
\rangle$ is called a \textit{polynomial identity} in an algebra
$A$ over $F$ if $f (a_1, \ldots , a_n) =0$ for all $a_1, \ldots ,
a_n \in A$. For a given algebra $A$, its polynomial identities
form a $T$-ideal $T(A)$ in $F \langle X \rangle$ and for every
$T$-ideal $I$ in $F \langle X \rangle$ there is an algebra $A$
such that $I = T(A)$, that is, $I$ is the ideal of all polynomial
identities satisfied in $A.$ Note that a polynomial
$f=f(x_1,\ldots,x_n)$ is central for an algebra $A$ if and only if
$[f,x_{n+1}]$ is a polynomial identity of $A.$
Let $f = f(x_1, \ldots , x_n) \in F \langle X \rangle$. Then $f =
\sum_{0 \le i_1, \ldots , i_n} f_{i_1 \ldots i_n},$ where each
polynomial $f_{i_1 \ldots i_n}$ is multihomogeneous of degree
$i_s$ in $x_s$ $(s = 1, \ldots , n).$ We refer to the polynomials
$f_{i_1 \ldots i_n}$ as the \textit{multihomogeneous
components} of the polynomial $f.$ Note that if $F$ is an infinite
field, $V$ is a $T$-ideal in $F \langle X \rangle$ and $f \in V$
then $f_{i_1 \ldots i_n} \in V$ for all $i_1, \ldots , i_n$ (see,
for instance, \cite{Baht, drbook, gz, rowenbook}). Similarly, if
$V$ is a $T$-subspace in $F \langle X \rangle$ and $f \in V$ then
all the multihomogeneous components $f_{i_1 \ldots i_n}$ of $f$
belong to $V$.
Over an infinite field $F$ the $T$-ideal $T(G)$ of the polynomial
identities of the infinite dimensional unitary Grassmann algebra
$G$ coincides with $T^{(3)}$. This was proved by Krakowski and
Regev \cite{krreg} if $F$ is of characteristic $0$ (see also
\cite{lat}) and by several authors in the general case, see for
example \cite{gk}.
It is well known (see, for example, \cite{krreg,
lat}) that over any field $F$ we have
\begin{eqnarray} \label{prop}
&&[g_1,g_2][g_1,g_3]+T^{(3)}=T^{(3)}; \nonumber\\
&&[g_1,g_2][g_3,g_4]+T^{(3)}=-[g_3,g_2][g_1,g_4]+T^{(3)}; \\
&&[g_1^m,g_2]+T^{(3)}=m g_1^{m-1}[g_1,g_2]+T^{(3)} \nonumber
\end{eqnarray}
for all $g_1,g_2,g_3,g_4 \in F \langle X \rangle$. Also it is well known (see, for
instance, \cite{BKKS, grshchi}) that a basis of the vector space $F\langle X
\rangle/T^{(3)}$ over $F$ is formed by the elements of the form
\begin{eqnarray} \label{basa}
x_{i_1}^{m_1} \cdots x_{i_d}^{m_d} [x_{j_1},x_{j_2}] \cdots
[x_{j_{2s-1}},x_{j_{2s}}]+T^{(3)},
\end{eqnarray}
where $d,s \ge 0$, $i_1< \dots <i_d,$ $j_1<\dots<j_{2s}.$
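To illustrate how a monomial is rewritten in this basis, note that the definition of the commutator together with the relations (\ref{prop}) give, for instance,
\[
x_2 x_1 + T^{(3)} = x_1 x_2 - [x_1,x_2] + T^{(3)}, \qquad
x_2 x_1^2 + T^{(3)} = x_1^2 x_2 - 2\, x_1 [x_1,x_2] + T^{(3)},
\]
where the second equality uses $[x_1^2, x_2] + T^{(3)} = 2 x_1 [x_1,x_2] + T^{(3)}$.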
Define $T_n^{(3)} = T^{(3)} \cap F \langle X_n \rangle.$ We claim
that if $n < 2i$ then
\begin{equation} \label{T3iTn3}
T^{(3,i)} \cap F \langle X_n \rangle = T_n^{(3)}.
\end{equation}
Indeed, a basis of the vector space $( F
\langle X_n \rangle + T^{(3)})/T^{(3)}$ is formed by the elements
of the form (\ref{basa}) such that $1 \le i_1 < \ldots < i_d \le
n,$ $1 \le j_1< \ldots < j_{2s} \le n.$ In particular, we have $2s
\le n.$ On the other hand, it can be easily checked that
$T^{(3,i)}/T^{(3)}$ is contained in the linear span of the
elements of the form (\ref{basa}) such that $s \ge i$. Since $n <
2i$, we have
\[
((F \langle X_n \rangle + T^{(3)})/T^{(3)}) \cap (T^{(3,i)}/T^{(3)}) = \{ 0 \} ,
\]
that is, $T^{(3,i)} \cap F \langle X_n \rangle \subseteq
T^{(3)}$. It follows immediately that $T^{(3,i)} \cap F \langle
X_n \rangle \subseteq T_n^{(3)}$. Since $T_n^{(3)}
\subseteq T^{(3,i)} \cap F \langle X_n \rangle$ for all $i$,
we have $T^{(3,i)} \cap F \langle X_n \rangle = T_n^{(3)}$
if $n < 2i$, as claimed.
Let $F$ be a field of characteristic $p>2.$ It is well known (see,
for example, \cite{regevca, BOR_cp, BKKS, GrishCyb}) that, for each
$g, g_1, \ldots , g_n \in F \langle X \rangle$, we have
\begin{eqnarray} \label{prop2}
&& g^p + T^{(3)} \mbox{ is central in } F \langle X \rangle /
T^{(3)}; \nonumber \\
&&(g_1 \cdots g_n)^p+T^{(3)} = g_1^p \cdots g_n^p+T^{(3)};\\
&&(g_1 +\dots + g_n)^p+T^{(3)}= g_1^p +\dots + g_n^p+T^{(3)}.
\nonumber
\end{eqnarray}
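The first of these properties can be deduced from the last relation in (\ref{prop}): since $\mbox{char} \, F = p$, for every $g, g_2 \in F \langle X \rangle$ we have
\[
[g^p, g_2] + T^{(3)} = p \, g^{p-1} [g, g_2] + T^{(3)} = T^{(3)},
\]
so $g^p + T^{(3)}$ commutes with every element of $F \langle X \rangle / T^{(3)}$.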
Let $F$ be an infinite field of characteristic $p>2$. Let
$Q^{(k,l)}$ be the $T$-subspace in $F\langle X \rangle$ generated
by $q_{k}^{(l)}$ $(l \ge 0)$, $Q^{(k,l)}=\langle q_{k}^{(l)}(x_1,
\dots, x_{2k}) \rangle^{TS}$. Note that the multihomogeneous
component of the polynomial
\begin{eqnarray*}
&&q_k^{(l)}(1+x_1, \ldots , 1+x_{2k}) \\
&&= (1 + x_1)^{p^l-1}[x_1,x_2](1 + x_2)^{p^l-1} \ldots (1 +
x_{2k-1})^{p^l-1}[x_{2k-1},x_{2k}](1 + x_{2k})^{p^l-1}
\end{eqnarray*}
of degree $p^{l-1}$ in each of the variables $x_1, \ldots , x_{2k}$ is
equal to
\[
\gamma \, q_k^{(l-1)}(x_1, \ldots , x_{2k}) = \gamma \,
x_1^{p^{l-1}-1}[x_1,x_2]x_2^{p^{l-1}-1} \ldots
x_{2k-1}^{p^{l-1}-1}[x_{2k-1},x_{2k}]x_{2k}^{p^{l-1}-1},
\]
where $\gamma = {p^{l}-1 \choose p^{l-1}-1}^{2k} \equiv 1
\pmod{p}.$ It follows that $q_k^{(l-1)} \in Q^{(k,l)}$ for all $l
>0$ so $Q^{(k,l-1)} \subseteq Q^{(k,l)}$. Hence, for each $l >0$ we have
\begin{equation}\label{sum_q}
\sum \limits_{i=0}^{l} Q^{(k,i)} = Q^{(k,l)}.
\end{equation}
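The congruence $\gamma \equiv 1 \pmod{p}$ used above can be verified using Lucas' theorem: in base $p$, all $l$ digits of $p^l - 1$ are equal to $p-1$, while the digits of $p^{l-1}-1$ are $p-1$ in positions $0, \ldots , l-2$ and $0$ in position $l-1$, so
\[
{p^{l}-1 \choose p^{l-1}-1} \equiv {p-1 \choose p-1}^{l-1} {p-1 \choose 0} = 1 \pmod{p}
\]
and therefore $\gamma = {p^{l}-1 \choose p^{l-1}-1}^{2k} \equiv 1 \pmod{p}$.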
The following lemma is a reformulation of a result of Grishin and
Tsybulya \cite[Theorem 1.3, item 1)]{GrishCyb}.
\begin{lemma}\label{lemma_GTs}
Let $F$ be an infinite field of characteristic $p>2$. Let $k \ge
1$, $a_i \ge 1$ for all $i = 1, 2, \ldots , 2k$ and let
\[
m = x_1^{a_1-1}x_2^{a_2-1} \ldots x_{2k}^{a_{2k}-1}[x_1,x_2]\ldots
[x_{2k-1},x_{2k}] \in F \langle X \rangle.
\]
Suppose that, for some $i_0$, $1 \le i_0 \le 2k$, we have $a_{i_0}
= p^l b$, where $l \ge 0$ and $b$ is coprime to $p$. Suppose also
that, for each $i$, $1 \le i \le 2k$, we have $a_i \equiv 0
\pmod{p^l}$. Then
\[
\langle m \rangle^{TS} + T^{(3)} = Q^{(k,l)} + T^{(3)}.
\]
\end{lemma}
\section{Proof of Propositions \ref{generators_of_C_n} and \ref{generators_of_C_n/T3n}}
In the rest of the paper, $F$ will denote an infinite field of
characteristic $p>2$.
\subsection*{Proof of Proposition \ref{generators_of_C_n/T3n}}
Let $U$ be the $T$-subspace of $F \langle X_n \rangle$ defined as
follows:
\begin{itemize}
\item[i)] $T_n^{(3)} \subset U$;
\item[ii)] the $T$-subspace $U/T_n^{(3)}$ of $F \langle X_n
\rangle/T_n^{(3)}$ is generated by the polynomials
(\ref{gen_C_n/T3n_1}) and (\ref{gen_C_n/T3n_2}) if $n=2k$ and by
the polynomials (\ref{gen_C_n/T3n_3}) if $n=2k+1$.
\end{itemize}
To prove the proposition we have to show that
$C_n/T_n^{(3)}=U/T_n^{(3)}$ (equivalently, $C_n = U$). It can be
easily seen that $U/T_n^{(3)} \subseteq C_n/T_n^{(3)}$. Thus, it
remains to prove that $C_n/T_n^{(3)} \subseteq U/T_n^{(3)}$
(equivalently, $C_n \subseteq U$).
Let $h$ be an arbitrary element of $C_n$. We are going to check
that $h+T_n^{(3)} \in U/T_n^{(3)}$.
Since $h \in C(G)$, it follows from The\-o\-rem
\ref{generators_of_C} that
\[
h=\sum_{j} \alpha_{j} \ v_j^{p} + \sum_{i, j } \alpha_{i j} \
w_{i j}^{p} \ q_i(f_1^{(i j)}, \dots, f_{2i}^{(i j)})+ h',
\]
where $v_j, w_{i j}, f_s^{(i j)} \in F \langle X \rangle$,
$\alpha_j, \alpha_{i j} \in F$, $h' \in T^{(3)}$. Note that $h \in
F \langle X_n \rangle$ so we may assume that $v_j, w_{i j},
f_s^{(i j)}, h' \in F \langle X_n \rangle$ for all $i,j,s$. It
follows that
\[
h + T_n^{(3)} =\sum_{j} \alpha_{j} \ v_j^{p} + \sum_{i, j }
\alpha_{i j} \ w_{i j}^{p} \ q_i(f_1^{(i j)}, \dots, f_{2i}^{(i
j)})+ T_n^{(3)}.
\]
Recall that $T^{(3,i)}$ is the $T$-ideal in $F \langle X \rangle$
generated by the polynomials $[x_1,x_2,x_3]$ and $[x_1,x_2] \dots
[x_{2i-1},x_{2i}]$. By (\ref{T3iTn3}), we have $T^{(3,i)} \cap F
\langle X_n \rangle = T^{(3)}_{n}$ for each $i$ such that
$2i>n$. Since, for each $i, j$,
\[
w_{i j}^p \ q_i(f_1^{(i j)}, \dots, f_{2i}^{(i j)}) \in T^{(3,i)},
\]
we have
\[
\sum_{i > \frac{n}{2}} \sum_{ j } \alpha_{i j} \ w_{i j}^{p}
\ q_i(f_1^{(i j)}, \dots, f_{2i}^{(i j)}) \in T^{(3,i)} \cap F
\langle X_n \rangle = T_n^{(3)}.
\]
It follows that
\[
h + T_n^{(3)} =\sum_{j} \alpha_{j} \ v_j^{p} + \sum_{i \le
\frac{n}{2}} \sum_{ j } \alpha_{i j} \ w_{i j}^{p} \ q_i(f_1^{(i
j)}, \dots, f_{2i}^{(i j)})+ T_n^{(3)}.
\]
If $n=2k+1$ ($k \ge 1$) then we have
\[
h + T^{(3)}_{n} = \sum_{j} \alpha_{j} v_{j}^{p} + \sum_{i=1}^k
\sum_j \alpha_{i j} \ w_{i j}^{p} \ q_i(f_1^{(i j)}, \dots,
f_{2i}^{(i j)}) + T^{(3)}_{n}
\]
so $h + T^{(3)}_{n} \in U/T_n^{(3)}$, as required.
If $n=2k$ ($k \ge 1$) then we have
\[
h + T^{(3)}_{n} = h_1 + h_2 + T^{(3)}_{n},
\]
where
\[
h_1 = \sum_{j} \alpha_{j} v_j^{p} + \sum_{i=1}^{k-1} \sum_j
\alpha_{i j} \ w_{i j}^{p} \ q_i(f_1^{(i j)},\dots,f_{2i}^{(i
j)})
\]
and
\[
h_2 = \sum_j \alpha_{k j} \ w_{k j}^{p} \ q_k(f_1^{(k j)},\dots,
f_{2k}^{(k j)}).
\]
It is clear that $h_1 + T^{(3)}_{n}$ belongs to the $T$-subspace
generated by the polynomials (\ref{gen_C_n/T3n_1}); hence, $h_1 +
T^{(3)}_{n} \in U/T^{(3)}_{n}$. On the other hand, it can be
easily seen that $h_2 + T^{(3)}_{n}$ is a linear combination of
polynomials of the form $m + T^{(3)}_{n}$, where
\[
m = x_1^{b_1} \cdots x_{2k}^{b_{2k}}[x_1,x_2] \cdots [x_{2k-1},x_{2k}].
\]
We claim that, for each $m$ of this form, the polynomial $m + T_{2k}^{(3)}$ belongs to $U/T_{2k}^{(3)}$.
Indeed, by Lemma \ref{lemma_GTs}, we have $\langle m \rangle^{TS} + T^{(3)} = \langle q_k^{(l)} \rangle^{TS} + T^{(3)}$ for some $l \ge 0$. Since both $m$ and $q_k^{(l)}$ are polynomials in $x_1, \dots , x_{2k}$, this equality implies that $m + T_{2k}^{(3)}$ belongs to the $T$-subspace of $F \langle X_{2k} \rangle / T_{2k}^{(3)}$ that is generated by $q_k^{(l)} + T_{2k}^{(3)}$ for some $l \ge 0$. If $l \ge 1$ then $m + T_{2k}^{(3)}\in U/T_{2k}^{(3)}$ because, for $l \ge 1$, $q_k^{(l)} + T_{2k}^{(3)}$ is a polynomial of the form (\ref{gen_C_n/T3n_2}). If $l = 0$ then $m + T_{2k}^{(3)}$ belongs to the $T$-subspace of $F \langle X_{2k} \rangle / T_{2k}^{(3)}$ generated by $q_k^{(1)} + T_{2k}^{(3)}$. Indeed, in this case $m + T_{2k}^{(3)}$ belongs to the $T$-subspace generated by $q_k^{(0)} + T_{2k}^{(3)}$ and the latter $T$-subspace is contained in the $T$-subspace generated by $q_k^{(1)} + T_{2k}^{(3)}$ because $q_k^{(0)}$ is equal to the multilinear component of $q_k^{(1)}(1+x_1, \dots , 1+x_{2k})$. It follows that, again, $m + T_{2k}^{(3)}\in U/T_{2k}^{(3)}$. This proves our claim.
It follows that $h_2 + T^{(3)}_{n} \in U/T^{(3)}_{n}$ and,
therefore, $h + T^{(3)}_{n} \in U/T^{(3)}_{n}$, as required.
Thus, $C_n \subseteq U$ for each $n$. This completes the proof of
Proposition \ref{generators_of_C_n/T3n}.
\subsection*{Proof of Proposition \ref{generators_of_C_n}}
It is clear that the polynomial $x_1 [x_2,x_3,x_4] x_5$ generates
$T^{(3)}$ as a $T$-subspace in $F \langle X \rangle$. Since
$g_1 [g_2,g_3,g_4] g_5=g_1 [g_2,g_3,g_4, g_5] + g_1 g_5
[g_2,g_3,g_4]$
for all
$g_i \in F \langle X \rangle,$
the polynomial $x_1 [x_2,x_3,x_4]$ generates $T^{(3)}$ as a
$T$-subspace in $F \langle X \rangle$ as well. It follows that
$x_1 [x_2,x_3,x_4]$ generates $T_n^{(3)}$ as a $T$-subspace in $F
\langle X_n \rangle$ for each $n \ge 4$. Proposition
\ref{generators_of_C_n} follows immediately from Proposition
\ref{generators_of_C_n/T3n} and the observation above.
\section{Proof of Theorem \ref{theorem_main2} }
If $n = 2k +1$, $k > 1$, then Theorem \ref{theorem_main2} follows
immediately from Proposition \ref{generators_of_C_n}.
Suppose that $n = 2k$, $k \ge 1$. Then Theorem \ref{theorem_main2}
is an immediate consequence of the following two propositions.
\begin{proposition} \label{C2k_0}
For all $k \ge 1$, $C_{2k}$ is not finitely generated as a
$T$-subspace in $F \langle X_{2k} \rangle$.
\end{proposition}
\begin{proposition}\label{C_2k_limit}
Let $k \ge 1$ and let $W$ be a $T$-subspace of $F \langle X_{2k}
\rangle$ such that $C_{2k} \subsetneqq W$. Then $W$ is a finitely
generated $T$-subspace in $F \langle X_{2k} \rangle$.
\end{proposition}
\subsection*{Proof of Proposition \ref{C2k_0}}
The proof is based on a result of Grishin and Tsybulya
\cite[Theorem 3.1]{GrishCyb}.
By Proposition \ref{generators_of_C_n/T3n}, $C_{2k}$ is generated
as a $T$-subspace in $F \langle X_{2k} \rangle$ by $T_{2k}^{(3)}$
together with the polynomials
\begin{equation}\label{gen_C2k}
x_1^p,\ x_1^p q_1 (x_2,x_3), \ \ldots , \ x_1^p q_{k-1}(x_2,
\ldots , x_{2k-1})
\end{equation}
and
\[
\{ q_k^{(l)} (x_1, \ldots , x_{2k}) \mid l = 1, 2, \ldots \} .
\]
Let $V_l$ be the $T$-subspace of $F \langle X_{2k} \rangle $
generated by $T_{2k}^{(3)}$ together with the polynomials
(\ref{gen_C2k}) and the polynomials $ \{ q_k^{(i)}(x_1, \ldots ,
x_{2k}) \mid i \le l \} $. Then we have
\begin{equation}\label{union_wl}
C_{2k} = \bigcup_{l \ge 1} V_l.
\end{equation}
Also, it is clear that $V_1 \subseteq V_2 \subseteq \ldots .$
Let $U^{(k-1)}$ be the $T$-subspace in $F \langle X \rangle$
generated by the polynomials (\ref{gen_C2k}). The following
proposition is a particular case of \cite[Theorem 3.1]{GrishCyb}.
\begin{proposition}[\cite{GrishCyb}]\label{G-Ts}
For each $l \ge 1$,
\[
(Q^{(k,l+1)}+T^{(3)}) /T^{(3)} \not \subseteq (U^{(k-1)}+
Q^{(k,l)} + T^{(3,k+1)})/T^{(3)}.
\]
\end{proposition}
\noindent \textbf{Remark. } The $T$-subspaces $(U^{(k-1)} +
T^{(3)}) / T^{(3)}$, $(Q^{(k,l)}+T^{(3)})/T^{(3)}$ and
$T^{(3,k+1)}/T^{(3)}$ are denoted in \cite{GrishCyb} by
$\sum_{i<k} CD_p^{(i)}$, $C_{p^l}^{(k)}$ and $C^{(k+1)}$,
respectively.
Since the $T$-subspace $Q^{(k,l+1)}$ is generated by the
polynomial $q_k^{(l+1)} $ and $T^{(3)} \subset T^{(3,k+1)}$,
Proposition \ref{G-Ts} immediately implies that
\[
q_k^{(l+1)} \notin U^{(k-1)} + Q^{(k,l)} + T^{(3,k+1)}.
\]
Further, since $T_{2k}^{(3)} \subset T^{(3)} \subset T^{(3,k+1)}$, we have
\[
V_l \subset U^{(k-1)} + \sum_{i \le l} Q^{(k,i)} + T^{(3,k+1)} =
U^{(k-1)} + Q^{(k,l)} + T^{(3,k+1)}
\]
(recall that, by (\ref{sum_q}), $\sum_{i \le l} Q^{(k,i)} =
Q^{(k,l)}$). It follows that $q_k^{(l+1)} \notin V_l$ for all $l
\ge 1$; on the other hand, $q_k^{(l+1)} \in V_{l+1}$ by the
definition of $V_{l+1}$. Hence,
\begin{equation}\label{strictly_ascending_wl}
V_1 \subsetneqq V_2 \subsetneqq \ldots .
\end{equation}
It follows immediately from (\ref{union_wl}) and
(\ref{strictly_ascending_wl}) that $C_{2k}$ is not finitely
generated as a $T$-subspace in $F \langle X_{2k} \rangle $. The
proof of Proposition \ref{C2k_0} is completed.
\subsection*{Proof of Proposition \ref{C_2k_limit}}
For all integers $i_1, \ldots , i_t$ such that $1 \leq i_1<\ldots <i_t \leq n$ and all integers $a_1, \ldots , a_n \ge 0$ such that $a_{i_1}, \ldots , a_{i_t} \ge 1$, define $\frac{x_1^{a_1} x_2^{a_2} \ldots x_n^{a_n}}{x_{i_1} x_{i_2} \ldots x_{i_t}}$ to be the monomial
\[
\frac{x_1^{a_1} x_2^{a_2} \ldots x_n^{a_n}}{x_{i_1}x_{i_2}\ldots x_{i_t}}= x_1^{b_1}x_2^{b_2}\ldots x_n^{b_n} \in F \langle X \rangle,
\]
where $b_j=a_j-1$ if $j\in \{i_1,i_2,\ldots,i_t\}$ and $b_j=a_j$ otherwise.
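For instance, with this notation we have
\[
\frac{x_1^{2} x_2 x_3^{3}}{x_1 x_3} = x_1 x_2 x_3^{2},
\]
and every monomial appearing below with a "denominator" is to be read in this way.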
\begin{lemma} \label{WC(G)}
Let $f(x_1,\dots,x_n) \in F \langle X \rangle$ be a multihomogeneous polynomial of the form
\begin{equation}\label{f_modulo_T3}
f=\alpha \, x_1^{a_1} \ldots x_n^{a_n}+\sum \limits_{1 \leq i_1<\ldots <i_{2t} \leq n }
\alpha_{(i_1,\ldots,i_{2t})}\frac{x_1^{a_1}\ldots x_n^{a_n}}{x_{i_1}\ldots x_{i_{2t}}}[x_{i_1},x_{i_2}]\ldots
[x_{i_{2t-1}},x_{i_{2t}}]
\end{equation}
where $\alpha, \alpha_{(i_1,\ldots,i_{2t})}\in F$. Let $L = \langle f \rangle^{TS} + \langle [x_1,x_2] \rangle ^{TS} +T^{(3)}$.
Suppose that $a_i = 1$ for some $i$, $1 \le i \le n.$ Then either $L = F \langle X \rangle$ or $L = \langle [x_{1},x_2] \rangle^{TS} + T^{(3)}$ or $L = \langle x_1 [x_2,x_3] \ldots [x_{2 \theta},x_{2 \theta + 1}] \rangle^{TS} + \langle [x_{1},x_2] \rangle^{TS} + T^{(3)}$ for some $\theta \leq \frac{n-1}{2}$.
\end{lemma}
\begin{proof}
Note that each multihomogeneous polynomial $f(x_1,\dots,x_n) \in F \langle X \rangle$ can be written, modulo $T^{(3)}$, in the form (\ref{f_modulo_T3}). Hence, we can assume without loss of generality (permuting the free generators $x_1, \ldots , x_n$ if necessary) that $a_1=1$.
Note that if $\alpha \neq 0$, then $f(x_1,1,\ldots,1)=\alpha x_1 \in L$ so $ L = \langle x_1 \rangle^{TS} = F \langle X \rangle$. Suppose that $\alpha =0$.
We claim that we may assume without loss of generality that $f$ is of the form $f(x_1, \ldots, x_n) = x_1 g(x_2, \ldots, x_n),$ where
\begin{equation}\label{form_of_g}
g=\sum \limits_{\mathop{2 \le i_1<\ldots <i_{2t}\le n}\limits_{t \ge 1}}
\alpha_{(i_1,\ldots,i_{2t})}\frac{x_2^{a_2}\ldots x_n^{a_n}}{x_{i_1}\ldots x_{i_{2t}}}[x_{i_1},x_{i_2}]\ldots [x_{i_{2t-1}},x_{i_{2t}}].
\end{equation}
Indeed, consider a term $m = \frac{x_1^{a_1}\ldots
x_n^{a_n}}{x_{i_1}\ldots x_{i_{2t}}}[x_{i_1},x_{i_2}]\ldots
[x_{i_{2t-1}},x_{i_{2t}}]$ in (\ref{f_modulo_T3}). If $i_1 >1$
then
\begin{equation}\label{m'}
m = x_1 \, \frac{x_2^{a_2}\ldots x_n^{a_n}}{x_{i_1}\ldots x_{i_{2t}}}[x_{i_1},x_{i_2}]\ldots [x_{i_{2t-1}},x_{i_{2t}}].
\end{equation}
Suppose that $i_1 =1$; then $m = m'[x_1,x_{i_2}]\ldots [x_{i_{2t-1}},x_{i_{2t}}]$, where $m' = \frac{x_2^{a_2}\ldots x_n^{a_n}}{x_{i_2}\ldots x_{i_{2t}}}$. We have
\begin{eqnarray*}
&&m + T^{(3)} = m'[x_1,x_{i_2}]\ldots [x_{i_{2t-1}},x_{i_{2t}}]+T^{(3)} \\
&& = [m' x_1,x_{i_2}]\ldots [x_{i_{2t-1}},x_{i_{2t}}]-x_1
[m',x_{i_2}]\ldots [x_{i_{2t-1}},x_{i_{2t}}]+T^{(3)} \\
&& = [m' x_1 [x_{i_3},x_{i_4}]\ldots [x_{i_{2t-1}},x_{i_{2t}}],x_{i_2}] -x_1
[m',x_{i_2}]\ldots [x_{i_{2t-1}},x_{i_{2t}}]+T^{(3)}.
\end{eqnarray*}
Hence,
\begin{equation}\label{m''}
m = - x_1 [m',x_{i_2}]\ldots [x_{i_{2t-1}},x_{i_{2t}}] + h,
\end{equation}
where $h \in \langle [x_1,x_2] \rangle^{TS} + T^{(3)}$.
It follows easily from (\ref{m'}) and (\ref{m''}) that there
exists a multihomogeneous polynomial $g_1 = g_1(x_2, \ldots, x_n)
\in F \langle X \rangle$ such that $f = x_1 g_1 + h_1$, where $h_1
\in \langle [x_1,x_2] \rangle^{TS} + T^{(3)}$. Further, there is a
multihomogeneous polynomial $g$ of the form (\ref{form_of_g}) such
that $g \equiv g_1 \pmod{T^{(3)}}$; then $f = x_1 g + h_2$, where
$h_2 \in \langle [x_1,x_2] \rangle^{TS} + T^{(3)}$. It follows
that $L = \langle x_1g(x_2, \ldots , x_n) \rangle^{TS} + \langle
[x_1,x_2] \rangle^{TS} + T^{(3)}$. Thus, we can assume without
loss of generality that $f = x_1 g(x_2, \ldots , x_n)$, where $g$
is of the form (\ref{form_of_g}), as claimed.
If $f = 0$ then $L = \langle [x_{1},x_2] \rangle^{TS} + T^{(3)}$.
Suppose that $f \ne 0$. Let $\theta = \mbox{ min }\{ t \mid
\alpha_{(i_1,\ldots,i_{2t})} \neq 0 \}$. It is clear that $2
\theta + 1 \le n$ so $\theta \le \frac{n-1}{2}$. We can assume
that $\alpha_{(2,\ldots,2\theta+1)}\neq 0$; then
\begin{eqnarray}\label{f_final_form}
f &=& x_1 \Big( \alpha_{(2, \ldots , 2 \theta +1)}
\frac{x_2^{a_2} \ldots x_n^{a_n}}{ x_2 \ldots x_{2 \theta
+1}}[x_{2},x_{3}]\ldots
[x_{2 \theta},x_{2 \theta +1}] \nonumber\\
&+& \sum \limits_{\mathop{2 \leq i_1<\ldots <i_{2t} \leq n }
\limits_{t \ge \theta, \ i_{2t} > 2 \theta +1}}
\alpha_{(i_1,\ldots,i_{2t})} \frac{x_2^{a_2} \ldots
x_n^{a_n}}{x_{i_1}\ldots x_{i_{2t}}}[x_{i_1},x_{i_2}]\ldots
[x_{i_{2t-1}},x_{i_{2t}}] \Big).
\end{eqnarray}
Let $f_1(x_1, \ldots , x_{2 \theta + 1}) = f(x_1,x_2, \ldots,
x_{2\theta+1},1,\ldots,1) \in L$; then
\[
f_1= \alpha_{(2, \ldots , 2 \theta +1)} \ x_1 \frac{x_2^{a_2}
\ldots x_n^{a_n}}{ x_2 \ldots x_{2 \theta +1}}[x_{2},x_{3}]\ldots
[x_{2 \theta},x_{2 \theta +1}].
\]
It can be easily seen that the multihomogeneous component of
degree 1 in the variables $x_1,x_2,\ldots, x_{2\theta+1}$ of the
polynomial $f_1(x_1,x_2+1, \ldots, x_{2\theta+1}+1)$ is equal to
\[
\alpha_{(2, \ldots , 2 \theta +1)} \, x_1[x_2,x_3]\ldots
[x_{2\theta}, x_{2\theta+1}].
\]
It follows that $x_1[x_2,x_3]\ldots [x_{2\theta}, x_{2\theta+1}]
\in L$; hence,
\[
\langle x_1 [x_2,x_3] \ldots [x_{2 \theta},x_{2 \theta + 1}]
\rangle^{TS} + \langle [x_{1},x_2] \rangle^{TS} + T^{(3)}
\subseteq L.
\]
On the other hand, it is clear that the polynomial $f$ of the form
(\ref{f_final_form}) belongs to the $T$-subspace of $F \langle X
\rangle$ generated by $x_1[x_2,x_3]\ldots [x_{2\theta},
x_{2\theta+1}]$; it follows that $\langle f \rangle^{TS} \subseteq
\langle x_1 [x_2,x_3] \ldots [x_{2 \theta},x_{2 \theta + 1}]
\rangle^{TS}$ and, therefore,
\[
L \subseteq \langle x_1 [x_2,x_3] \ldots [x_{2 \theta},x_{2 \theta
+ 1}] \rangle^{TS} + \langle [x_{1},x_2] \rangle^{TS} + T^{(3)}.
\]
Thus, $L = \langle x_1 [x_2,x_3] \ldots [x_{2 \theta},x_{2 \theta
+ 1}] \rangle^{TS} + \langle [x_{1},x_2] \rangle^{TS} + T^{(3)}$.
The proof of Lemma \ref{WC(G)} is completed.
\end{proof}
\begin{proposition}
\label{C_2k/T32k_limit} Let $W$ be a $T$-subspace of $F \langle X_{2k} \rangle$ such that $C_{2k} \subsetneqq W$. Then $W=F \langle X_{2k} \rangle$ or $W$ is generated as a $T$-subspace by the polynomials
\[
x_1^p, \ x_1^pq_1(x_2,x_3), \ldots, \ x_1^pq_{\lambda -1}(x_2,\ldots,x_{2\lambda-1}),
\]
\[
x_1[x_2,x_3,x_4], \ x_1[x_2,x_3]\ldots [x_{2\lambda},x_{2\lambda+1}],
\]
for some positive integer $\lambda \leq k-1$.
\end{proposition}
\begin{proof}
It is well-known that over a field $F$ of characteristic $0$ each $T$-ideal in $F \langle X \rangle$ can be generated by its multilinear polynomials. It is easy to check that the same is true for each $T$-subspace in $F \langle X \rangle$. Over an infinite field $F$ of characteristic $p>0$ each $T$-ideal in $F \langle X \rangle$ can be generated by all its multihomogeneous polynomials $f(x_1, \ldots , x_n)$ such that, for each $i$, $ 1 \le i \le n$, $\mbox{deg}_{x_i} f = p^{s_i}$ for some integer $s_i$ (see, for instance, \cite{Baht}). Again, the same is true for each $T$-subspace in $F \langle X \rangle$.
Let $f(x_1,\ldots,x_{2k})\in W \setminus C_{2k}$ be an arbitrary multihomogeneous polynomial such that, for each $i$ ($1\leq i \leq 2k$), we have either $\mbox{deg}_{x_i} f=p^{s_i}$ or $\mbox{deg}_{x_i} f=0$. We may assume that $\mbox{deg}_{x_i} f=p^{s_i}$ for $i = 1, \ldots , l$ and $\mbox{deg}_{x_i} f=0$ for $i = l+1, \ldots , 2k$ (that is, $f = f(x_1, \ldots , x_l)$). Then we have
\[
f+T^{(3)}_{2k}=\alpha \, m+\sum \limits_{1 \le i_1<\ldots <i_{2t} \le l}
\alpha_{(i_1,\ldots,i_{2t})}\frac{m}{x_{i_1}\ldots x_{i_{2t}}}[x_{i_1},x_{i_2}]\ldots [x_{i_{2t-1}},x_{i_{2t}}]+T^{(3)}_{2k},
\]
where $\alpha, \alpha_{(i_1,\ldots,i_{2t})}\in F$, \ $m=x_1^{p^{s_1}} \ldots x_{l}^{p^{s_{l}}}.$
If $s_i > 0$ for all $i = 1, \ldots , l$ then it can be easily seen that $f \in C(G)$ so $f \in C_{2k}$, a contradiction with the choice of $f$. Thus, $s_i =0$ for some $i$, $1 \le i \le l$. Let $L_f$ be the $T$-subspace of $F \langle X \rangle$ generated by $f$, $[x_1,x_2]$ and $T^{(3)}$. By Lemma \ref{WC(G)}, we have either $L_f = F\langle X \rangle$ or
\[
L_f = \langle x_1 [x_2,x_3]\ldots [x_{2\theta},x_{2\theta+1}] \rangle^{TS} + \langle [x_1,x_2] \rangle^{TS} + T^{(3)}
\]
for some $\theta < k$ (since $f \notin C_{2k}$, we have $L_f \ne \langle [x_1,x_2] \rangle^{TS} + T^{(3)}$). Note that if $k=1$ (that is, $f=f(x_1,x_2)$) then the only possible case is $L_f = F\langle X \rangle$.
It is clear that if $L_f = F \langle X \rangle$ for some $f \in W \setminus C_{2k}$ then $x_1 \in W$ so $W = F \langle X_{2k} \rangle$. Suppose that $W \ne F \langle X_{2k} \rangle$; then $k>1$ and $L_f \ne F \langle X \rangle $ for all $f \in W \setminus C_{2k}$. For each $f \in W \setminus C_{2k}$ satisfying the conditions of Lemma \ref{WC(G)}, the $T$-subspace $L_f$ in $F \langle X \rangle$ can be generated, by Lemma \ref{WC(G)}, by the polynomials
\begin{equation}\label{polyn_generators}
[x_1,x_2], \quad x_1[x_2,x_3,x_4]\quad \mbox{and} \quad x_1 [x_2,x_3]\ldots [x_{2\theta},x_{2\theta+1}]
\end{equation}
for some $\theta= \theta_f < k$. Since the polynomials (\ref{polyn_generators}) belong to $F \langle X_{2k} \rangle$ (recall that $k>1$), the $T$-subspace in $F \langle X_{2k} \rangle$ generated by $f$, $[x_1,x_2]$ and $ T^{(3)}$ is also generated (as a $T$-subspace in $F \langle X_{2k} \rangle$) by the polynomials (\ref{polyn_generators}). Note that $[x_1,x_2]$ and $x_1[x_2, x_3, x_4]$ belong to $C_{2k}$ so the $T$-subspace $V_f$ in $F \langle X_{2k} \rangle$ generated by $f$ and $C_{2k}$ can be generated by $C_{2k}$ and $x_1 [x_2,x_3]\ldots [x_{2\theta},x_{2\theta+1}]$ for some $\theta= \theta_f < k$.
Let $\lambda = \mbox{ min } \{ \theta \mid x_1 [x_2,x_3]\ldots [x_{2\theta},x_{2\theta+1}] \in W \}$. Since $W$ is the sum of the $T$-subspaces $V_f$ for all suitable multihomogeneous polynomials $f \in W \setminus C_{2k}$ and each $V_f$ is generated by $C_{2k}$ and $x_1 [x_2,x_3]\ldots [x_{2\theta},x_{2\theta+1}]$ for some $\theta = \theta_f < k$, $W$ can be generated as a $T$-subspace in $F \langle X_{2k} \rangle$ by $C_{2k}$ and $x_1 [x_2,x_3]\ldots [x_{2\lambda},x_{2\lambda+1}]$. Now it follows easily from Proposition \ref{generators_of_C_n} that $W$ can be generated by the polynomials
\[
x_1^p, \ x_1^p q_1(x_2,x_3), \ldots, \ x_1^p q_{\lambda -1}(x_2,\ldots,x_{2\lambda-1})
\]
together with the polynomials
\[
x_1[x_2,x_3,x_4] \ \mbox{ and } \ x_1[x_2,x_3] \ldots [x_{2\lambda},x_{2\lambda+1}],
\]
where we note that $\lambda <k$.
This completes the proof of Proposition \ref{C_2k/T32k_limit}.
\end{proof}
Proposition \ref{C_2k_limit} follows immediately from Proposition
\ref{C_2k/T32k_limit}. The proof of Theorem \ref{theorem_main2} is
completed.
\section{Proof of Theorem \ref{theorem_main3} }
\begin{proposition} \label{rk}
For each $k \ge 1$, $R_k$ is not finitely generated as a
$T$-subspace in $F \langle X \rangle$.
\end{proposition}
\begin{proof}
Recall that $R_k$ is the $T$-subspace in $F \langle X \rangle$
generated by $C_{2k}$ and $T^{(3,k+1)}$. By Proposition
\ref{generators_of_C_n/T3n}, $C_{2k}$ is generated as a
$T$-subspace in $F \langle X_{2k} \rangle$ by $T_{2k}^{(3)}$
together with the polynomials (\ref{gen_C2k}) and the polynomials
$\{ q_k^{(l)} (x_1, \ldots ,x_{2k}) \mid l = 1, 2, \ldots \} .$
Since $T_{2k}^{(3)} \subset T^{(3)} \subset T^{(3,k+1)}$, we have
\[
R_k=U^{(k-1)}+ \sum_{l \ge 1} Q^{(k,l)}+T^{(3, k+1)},
\]
where $U^{(k-1)}$ and $Q^{(k,l)}$ are the $T$-subspaces in $F
\langle X \rangle$ generated by the polynomials (\ref{gen_C2k})
and by the polynomial $q_k^{(l)}(x_1, \ldots , x_{2k})$,
respectively.
Let $V_l = U^{(k-1)}+ \sum_{i \le l} Q^{(k,i)}+T^{(3, k+1)}$. Then
\begin{equation}\label{union_rk}
R_k = \bigcup_{l \ge 1} V_l
\end{equation}
and $V_1 \subseteq V_2 \subseteq \ldots .$ Recall that, by
(\ref{sum_q}), $\sum_{i \le l} Q^{(k,i)} = Q^{(k,l)}$ so $V_l =
U^{(k-1)}+ Q^{(k,l)}+T^{(3, k+1)}$. By Proposition \ref{G-Ts},
$Q^{(k,l+1)} \not \subseteq V_l$ for all $l \ge 1$ so
\begin{equation}\label{strictly-ascending_rk}
V_1 \subsetneqq V_2 \subsetneqq \ldots .
\end{equation}
The result follows immediately from (\ref{union_rk}) and
(\ref{strictly-ascending_rk}).
\end{proof}
\begin{lemma}\label{fTSRk}
Let $f = f(x_1, \ldots,x_{n}) \in F \langle X \rangle$ be a
multihomogeneous polynomial of the form
\begin{equation}\label{f_modulo_T3_2}
f=\alpha \, x_1^{p^{s_1}} \ldots x_n^{p^{s_n}} + \sum_{i_1<
\ldots <i_{2t}} \alpha_{(i_1,\ldots,i_{2t})} \,
\frac{x_1^{p^{s_1}} \ldots x_n^{p^{s_n}}}{x_{i_1}\ldots
x_{i_{2t}}}[x_{i_1},x_{i_2}]\ldots [x_{i_{2t-1}},x_{i_{2t}}],
\end{equation}
where $\alpha, \alpha_{(i_1,\ldots,i_{2t})}\in F$, $s_i \geq 0$
for all $i$. Let $L = \langle f \rangle^{TS} + R_k$, $k \ge 1$.
Then one of the following holds:
\begin{enumerate}
\item $L = F \langle X \rangle$;
\item $L = R_k$;
\item $L = \langle x_1[x_2,x_3] \ldots [x_{2 \theta},x_{2 \theta
+1}] \rangle^{TS} + R_k$ for some $\theta$, $1 \le \theta \le k$;
\item $L=\langle x_1^{p^s}q_k^{(s)}(x_2,\ldots,x_{2k+1}) \rangle
^{TS}+R_k$ for some $s \ge 1$.
\end{enumerate}
\end{lemma}
\begin{proof}
Note that each multihomogeneous polynomial $f(x_1,\dots,x_n) \in
F \langle X \rangle$ of degree $p^{s_i}$ in $x_i$ ($1 \le i \le
n$) can be written, modulo $T^{(3)}$, in the form
(\ref{f_modulo_T3_2}). Hence, we can assume without loss of
generality (permuting the free generators $x_1, \ldots , x_n$ if
necessary) that $s_1 \le s_i$ for all $i$. Write $s = s_1$.
Suppose that $s=0$. Then, by Lemma \ref{WC(G)}, we have either
\[
\langle f \rangle^{TS} + \langle [x_1,x_2] \rangle^{TS} + T^{(3)}
= F \langle X \rangle
\]
or
\[
\langle f \rangle^{TS} + \langle [x_1,x_2] \rangle^{TS} + T^{(3)}
= \langle [x_1,x_2] \rangle^{TS} + T^{(3)}
\]
or
\[
\langle f \rangle^{TS} + \langle [x_1,x_2] \rangle^{TS} + T^{(3)}
= \langle x_1[x_2,x_3] \ldots [x_{2 \theta},x_{2 \theta +1}]
\rangle^{TS} + \langle [x_1,x_2] \rangle^{TS} + T^{(3)}
\]
for some $\theta$. Since $\langle [x_1,x_2] \rangle^{TS} +
T^{(3)} \subset R_k$ and $x_1[x_2,x_3] \ldots [x_{2 \theta},x_{2
\theta +1}] \in R_k$ if $\theta > k$, we have either $L = F
\langle X \rangle$ or $L = R_k$ or
\[
L = \langle x_1[x_2,x_3] \ldots [x_{2 \theta},x_{2 \theta +1}]
\rangle^{TS} + R_k
\]
for some $\theta \le k$.
Now suppose that $s>0$; then $s_i >0$ for all $i$, $1 \leq i \leq
n$. It can be easily seen that, by (\ref{prop2}), $ x_1^{p^{s_1}} \ldots
x_n^{p^{s_n}} \in \left(\langle x_1^p \rangle^{TS} + T^{(3)}
\right) \subset R_k$ and, for all $t <k$,
\[
\frac{x_1^{p^{s_1}} \ldots x_n^{p^{s_n}}}{x_{i_1}\ldots
x_{i_{2t}}}[x_{i_1},x_{i_2}]\ldots [x_{i_{2t-1}},x_{i_{2t}}] \in
\left(\langle x_1^pq_t(x_2, \ldots , x_{2t+1})\rangle^{TS} +
T^{(3)}\right) \subset R_k.
\]
Also we have $\frac{x_1^{p^{s_1}} \ldots
x_n^{p^{s_n}}}{x_{i_1}\ldots x_{i_{2t}}}[x_{i_1},x_{i_2}]\ldots
[x_{i_{2t-1}},x_{i_{2t}}] \in T^{(3,k+1)} \subset R_k$ for each $t
> k$. It follows that we can assume without loss of generality
that the polynomial $f$ is of the form
\begin{equation}\label{form_f}
f = \sum \limits_{1 \le i_1<\ldots <i_{2k} \le n}
\alpha_{(i_1,\ldots,i_{2k})} \,\frac{x_1^{p^{s_1}} \ldots
x_n^{p^{s_n}}}{x_{i_1}\ldots x_{i_{2k}}}[x_{i_1},x_{i_2}]\ldots
[x_{i_{2k-1}},x_{i_{2k}}].
\end{equation}
Note that if $n < 2k$ then $f=0$ and if $n=2k$ then
\[
f = \alpha_{(1,2, \ldots , 2k)} \frac{x_1^{p^{s_1}} \ldots
x_{2k}^{p^{s_{2k}}}}{x_{1}x_2\ldots x_{2k}}[x_{1},x_{2}]\ldots
[x_{2k-1},x_{2k}]
\]
so, by Lemma \ref{lemma_GTs}, we have $f \in Q^{(k,s)} +
T^{(3)}$, where $s=s_1>0$. In both cases we have $f \in R_k$ and
$L=R_k$.
Suppose that $n > 2k$. We claim that we may assume that $f$ is of
the form
\begin{equation}\label{form_f2}
f(x_1, \ldots , x_n) = x_1^{p^s} g(x_2, \ldots , x_n),
\end{equation}
where
\[
g= \sum \limits_{2 \le i_1<\ldots <i_{2k} \le n}
\alpha_{(i_1,\ldots,i_{2k})} \, \frac{x_2^{p^{s_2}} \ldots
x_n^{p^{s_n}}}{x_{i_1}\ldots x_{i_{2k}}}[x_{i_1},x_{i_2}]\ldots
[x_{i_{2k-1}},x_{i_{2k}}].
\]
Indeed, consider a term $m = \frac{x_1^{p^{s_1}} \ldots
x_n^{p^{s_n}}}{x_{i_1}\ldots x_{i_{2k}}}[x_{i_1},x_{i_2}]\ldots
[x_{i_{2k-1}},x_{i_{2k}}]$ in (\ref{form_f}). If $i_1 >1$ then
\begin{equation}\label{m_x1_1}
m = x_1^{p^{s}} \frac{x_2^{p^{s_2}} \ldots
x_n^{p^{s_n}}}{x_{i_1}\ldots x_{i_{2k}}}[x_{i_1},x_{i_2}]\ldots
[x_{i_{2k-1}},x_{i_{2k}}].
\end{equation}
Suppose that $i_1 = 1$. Let $a_i = p^{s_i}$ for all $i$. Then
\[
m +T^{(3,k+1)} = x_1^{p^{s}-1} \frac{x_2^{p^{s_2}} \ldots
x_n^{p^{s_n}}}{x_{i_2}\ldots x_{i_{2k}}}[x_{1},x_{i_2}]\ldots
[x_{i_{2k-1}},x_{i_{2k}}] +T^{(3,k+1)}
\]
\[
= x_{j_1}^{a_{j_1}} \cdots x_{j_{l}}^{a_{j_{l}}} x_{1}^{a_{1}-1}
\cdots x_{i_{2k}}^{a_{i_{2k}}-1} [x_1,x_{i_{2}}] \cdots
[x_{i_{2k-1}},x_{i_{2k}}] + T^{(3,k+1)}
\]
\[
= x_1^{a_1-1}x_{j_1}^{a_{j_1}} \ldots
x_{j_l}^{a_{j_l}}[x_1,x_{i_2}]x_{i_2}^{a_{i_2}-1} m'+T^{(3,k+1)},
\]
where
\[
m'=x_{i_{3}}^{a_{i_{3}}-1} [x_{i_{3}},x_{i_{4}}]
x_{i_{4}}^{a_{i_{4}}-1} \ldots x_{i_{2k-1}}^{a_{i_{2k-1}}-1}
[x_{i_{2k-1}},x_{i_{2k}}] x_{i_{2k}}^{a_{i_{2k}}-1},
\]
$\{j_1,\ldots,j_l\}=\{1,\ldots,n\} \setminus
\{1,i_2,\ldots,i_{2k}\}$, $l=n-2k>0$. Suppose that
\[
a_1=a_{j_1}=a_{j_2}=\ldots=a_{j_z} \ \ \mbox{and} \ \
a_{j_{z+1}},a_{j_{z+2}},\ldots,a_{j_l}>a_1.
\]
Let
\[
u=x_1x_{j_1} \cdots x_{j_z}x_{j_{z+1}}^{a'_{j_{z+1}}}\cdots
x_{j_{l}}^{a'_{j_{l}}},
\]
where $a'_i = a_i/p^s$ for all $i$. Let
\[
h= h(x_1, \ldots ,x_{2k}) = x_1^{a_{1}-1}[x_1,x_2]x_2^{a_{i_2}-1} \ldots
x_{2k-1}^{a_{i_{2k-1}}-1}[x_{2k-1},x_{2k}] x_{2k}^{a_{i_{2k}}-1}.
\]
By (\ref{prop}), $h \in C(G)$; hence, $h \in C_{2k} \subset R_k$.
It follows that $h(u,x_{i_2}, \ldots , x_{i_{2k}}) \in R_k$, that
is,
\begin{equation}\label{uxi2}
u^{p^s-1}[u,x_{i_2}]x_{i_2}^{a_{i_2}-1} m'\in R_k.
\end{equation}
Since, by (\ref{prop2}), $[v_1^p,v_2] \in T^{(3)} \subset
T^{(3,k+1)}$ for all $v_1,v_2 \in F \langle X \rangle$, we have
\begin{eqnarray*}
&&u^{p^s-1}[u,x_{i_2}]x_{i_2}^{a_{i_2}-1} m' +T^{(3,k+1)}\\
&&= \left(x_1x_{j_1} \cdots
x_{j_z}\right)^{p^s-1}x_{j_{z+1}}^{a_{j_{z+1}}}\cdots
x_{j_l}^{a_{j_l}}[x_1x_{j_1} \ldots
x_{j_z},x_{i_2}]x_{i_2}^{a_{i_2}-1} m' +T^{(3,k+1)} \\
&& = \left(x_1 x_{j_1} \cdots
x_{j_z}\right)^{p^s-1} x_{j_{z+1}}^{a_{j_{z+1}}}\cdots
x_{j_l}^{a_{j_l}}[x_1,x_{i_2}]x_{j_1} \ldots x_{j_z}
x_{i_2}^{a_{i_2}-1} m' \\
&& + \left(x_1x_{j_1} \cdots
x_{j_z}\right)^{p^s-1}x_{j_{z+1}}^{a_{j_{z+1}}}\cdots
x_{j_l}^{a_{j_l}}x_1[x_{j_1} \ldots
x_{j_z},x_{i_2}]x_{i_2}^{a_{i_2}-1} m' + T^{(3,k+1)} \\
&&= m + x_1^{p^s} x_{j_1}^{p^s -1} \cdots x_{j_z}^{p^s-1} x_{j_{z+1}}^{a_{j_{z+1}}}\cdots x_{j_l}^{a_{j_l}} [x_{j_1} \ldots
x_{j_z},x_{i_2}]x_{i_2}^{a_{i_2}-1} m' +T^{(3,k+1)}
\end{eqnarray*}
where the second summand is not present if $z = 0$ (that is, if $a_{j_i} > a_1$ for all $i$), in which case $m \in R_k$. Since
\begin{eqnarray*}
&&x_1^{p^s} x_{j_1}^{p^s -1} \cdots x_{j_z}^{p^s-1} x_{j_{z+1}}^{a_{j_{z+1}}}\cdots x_{j_l}^{a_{j_l}} [x_{j_1} \ldots
x_{j_z},x_{i_2}]x_{i_2}^{a_{i_2}-1} m' +T^{(3,k+1)} \\
&& = x_1^{p^s} \sum \limits_{2 \le i_1<\ldots <i_{2k}}
\beta_{(i_1,\ldots,i_{2k})} \, \frac{x_2^{p^{s_2}}
\ldots x_n^{p^{s_n}}}{x_{i_1}\ldots
x_{i_{2k}}}[x_{i_1},x_{i_2}]\ldots [x_{i_{2k-1}},x_{i_{2k}}]
+T^{(3,k+1)}
\end{eqnarray*}
for some $\beta_{(i_1,\ldots,i_{2k})} \in F$, we have
\begin{equation}\label{mi1i2k_2}
m + x_1^{p^s} \sum \limits_{2 \le i_1<\ldots <i_{2k}}
\beta_{(i_1,\ldots,i_{2k})} \, \frac{x_2^{p^{s_2}}
\ldots x_n^{p^{s_n}}}{x_{i_1}\ldots
x_{i_{2k}}}[x_{i_1},x_{i_2}]\ldots [x_{i_{2k-1}},x_{i_{2k}}] \in
R_k.
\end{equation}
It is clear that, using (\ref{m_x1_1}) and (\ref{mi1i2k_2}), we
can write $f = f_1 + f_2$, where
\[
f_1 = x_1^{p^s} \left( \sum \limits_{2 \le i_1<\ldots <i_{2k}}
\gamma_{(i_1,\ldots,i_{2k})} \, \frac{x_2^{p^{s_2}} \ldots
x_n^{p^{s_n}}}{x_{i_1}\ldots x_{i_{2k}}}[x_{i_1},x_{i_2}]\ldots
[x_{i_{2k-1}},x_{i_{2k}}] \right)
\]
is of the form (\ref{form_f2}) and $f_2 \in R_k$. Then we have
$\langle f \rangle^{TS} + R_k = \langle f_1 \rangle^{TS} + R_k.$
Thus, we can assume (replacing $f$ with $f_1$) that the polynomial
$f$ is of the form (\ref{form_f2}), as claimed.
If $f = 0$ then $L = R_k$. Suppose that $f \ne 0$. Then we can
assume without loss of generality that
$\alpha_{(2,3,\ldots,2k+1)}\neq 0$. It follows that the
$T$-subspace $\langle f \rangle^{TS}$ contains the polynomial
\begin{eqnarray*}
&&h(x_1,\ldots ,x_{2k+1})=\alpha_{(2,3,\ldots,2k+1)}^{-1}f(x_1,\ldots ,x_{2k+1},1,1,\ldots,1)\\
&&= x_1^{p^s}x_2^{p^{s_2}-1}\ldots x_{2k+1}^{p^{s_{2k+1}}-1}[x_2,x_3]\ldots [x_{2k},x_{2k+1}].
\end{eqnarray*}
Then $\langle f \rangle^{TS} + R_k$ also contains the homogeneous
component of the polynomial $h(x_1+1,\ldots,x_{2k+1}+1)$ of degree
$p^s$ in each variable $x_i$ $(i = 1,2, \ldots , 2k+1)$, that is
equal, modulo $T^{(3)}$, to
\[
\gamma \ x_1^{p^s}x_2^{p^s-1}\ldots x_{2k+1}^{p^s-1}[x_2,x_3]
\ldots [x_{2k}, x_{2k+1}],
\]
where $\gamma = \prod_{i=2}^{2k+1} {p^{s_i}-1 \choose p^s-1}
\equiv 1 \pmod{p}$. It follows that
\[
x_1^{p^s}q_k^{(s)}(x_2,\ldots,x_{2k+1}) \in \langle f \rangle^{TS} + R_k.
\]
On the other hand, for all $i_1, \ldots ,i_{2k}$ such that $2 \le
i_1 < \ldots < i_{2k} \le n$, we have
\[
x_1^{p^s} \frac{x_2^{p^{s_2}} \ldots x_n^{p^{s_n}}}{x_{i_1}\ldots
x_{i_{2k}}}[x_{i_1},x_{i_2}]\ldots [x_{i_{2k-1}},x_{i_{2k}}] \in
\langle x_1^{p^s}q_k^{(s)}(x_2,\ldots,x_{2k+1}) \rangle^{TS} +
T^{(3,k+1)}
\]
(recall that $s_i \ge s$ for all $i$) so
\[
f \in \langle x_1^{p^s}q_k^{(s)}(x_2,\ldots,x_{2k+1})
\rangle^{TS} + R_k.
\]
Thus,
\[
\langle f \rangle^{TS}+R_k=\langle
x_1^{p^s}q_k^{(s)}(x_2,\ldots,x_{2k+1}) \rangle ^{TS}+R_k,
\]
where $s \ge 1$. This completes the proof of Lemma \ref{fTSRk}.
\end{proof}
\begin{proposition} \label{Rk_gen}
Let $W$ be a $T$-subspace of $F \langle X \rangle$ such that $R_k
\subsetneqq W.$ Then one of the following holds:
\begin{enumerate}
\item $W=F\langle X \rangle$.
\item $W$ is generated as a $T$-subspace by the polynomials
\[
x_1^p, \ x_1^p q_1(x_2,x_3), \ldots, \ x_1^p q_{\lambda-1}(x_2,
\ldots ,x_{2 \lambda -1}),
\]
\[
x_1[x_2,x_3,x_4],\ x_1[x_2,x_3]\ldots [x_{2\lambda},x_{2\lambda+1}]
\]
for some $\lambda \leq k$.
\item $W$ is generated as a $T$-subspace by the polynomials
\[
x_1^p, \ x_1^pq_1(x_2,x_3), \ldots, \ x_1^pq_{k-1}(x_2, \ldots
,x_{2 k -1}),
\]
\[
\{q_k^{(l)}(x_1, \ldots ,x_{2k}) \mid 1\leq l \leq \mu-1 \}, \
x_1^{p^{\mu}}q_k^{(\mu)}(x_2, \ldots ,x_{2k+1}),
\]
\[
x_1[x_2,x_3,x_4], \ x_1[x_2,x_3]\ldots [x_{2k+2},x_{2k+3}]
\]
for some $\mu \ge 1$.
\end{enumerate}
\end{proposition}
\begin{proof}
Let $f = f(x_1, \ldots , x_n)$ be an arbitrary polynomial in $W
\setminus R_k$ satisfying the conditions of Lemma \ref{fTSRk},
that is, an arbitrary multihomogeneous polynomial such that
$\mbox{deg}_{x_i}f = p^{s_i}$ for some $s_i \ge 0$ $(1 \le i \le
n)$. Let $L_f = \langle f \rangle^{TS} + R_k$. By Lemma
\ref{fTSRk}, we have either $L_f = F \langle X \rangle$ or
\[
L_f = \langle x_1[x_2,x_3] \ldots [x_{2 \theta},x_{2 \theta +1}]
\rangle^{TS} + R_k
\]
for some $\theta \le k$ or
\[
L_f=\langle x_1^{p^s}q_k^{(s)}(x_2,\ldots,x_{2k+1}) \rangle ^{TS}+R_k
\]
for some $s \ge 1$.
Note that $W$ is generated as a $T$-subspace in $F \langle X
\rangle$ by $R_k$ together with the polynomials $f \in W
\setminus R_k$ satisfying the conditions of Lemma
\ref{fTSRk}. It follows that $W = \sum L_f $, where the sum
is taken over all the polynomials $f \in W \setminus R_k$
satisfying these conditions.
It is clear that if $L_f = F \langle X \rangle$ for some $f \in W
\setminus R_k$ then $W = F \langle X \rangle$. Suppose that $L_f
\ne F \langle X \rangle$ for all $f \in W \setminus R_k$. Suppose
that, for some $f \in W \setminus R_k$, we have $L_f = \langle
x_1[x_2,x_3] \ldots [x_{2 \theta},x_{2 \theta +1}] \rangle^{TS} +
R_k$, $\theta \le k$. Define $\lambda = \mbox{ min }\{ \theta \mid x_1[x_2,x_3]
\ldots [x_{2 \theta},x_{2 \theta +1}] \in W \}$; note that
$\lambda \le k$. We have
\[
x_1[x_2,x_3] \ldots [x_{2 \theta},x_{2 \theta +1}] \in \langle
x_1[x_2,x_3] \ldots [x_{2 \lambda},x_{2 \lambda +1}] \rangle^{TS}
\]
for all $\theta \ge \lambda$ and
\[
x_1^{p^s}q_k^{(s)}(x_2,\ldots,x_{2k+1}) \in \langle x_1[x_2,x_3]
\ldots [x_{2 \lambda},x_{2 \lambda +1}] \rangle^{TS} + T^{(3)}
\]
for all $s$. Hence, $W = \langle x_1[x_2,x_3] \ldots [x_{2
\lambda},x_{2 \lambda +1}] \rangle^{TS} + R_k$, where $\lambda \le
k$. It follows that $W$ is generated as a $T$-subspace by the
polynomials
\[
x_1^p, \ x_1^p q_1(x_2,x_3), \ldots, \ x_1^p q_{\lambda-1}(x_2,
\ldots ,x_{2 \lambda -1}),
\]
\[
x_1[x_2,x_3,x_4], \ x_1[x_2,x_3] \ldots [x_{2 \lambda}, x_{2
\lambda+1}],
\]
$\lambda \leq k$.
Now suppose that, for all $f \in W \setminus R_k$ satisfying the
conditions of Lemma \ref{fTSRk}, we have
\[
L_f=\langle x_1^{p^s}q_k^{(s)}(x_2,\ldots,x_{2k+1}) \rangle ^{TS}+R_k
\]
for some $s = s_f \ge 1$. Note that if $s \leq r$ then
\[
x_1^{p^r}q_k^{(r)}(x_2, \ldots, x_{2k+1}) \in \langle
x_1^{p^s}q_k^{(s)}(x_2,\ldots,x_{2k+1}) \rangle ^{TS} + T^{(3)}.
\]
Take $\mu = \mbox{ min } \{ s \mid
x_1^{p^s}q_k^{(s)}(x_2,\ldots,x_{2k+1}) \in W \}$. Then we have
$W=R_k+\langle x_1^{p^\mu} q_k^{(\mu)} (x_2,\ldots,x_{2k+1})
\rangle^{TS}$ and it is straightforward to check that $W$ can be
generated as a $T$-subspace in $F \langle X \rangle$ by the
polynomials
\[
x_1^p, \ x_1^pq_1(x_2,x_3), \ldots, \
x_1^pq_{k-1}(x_2,\ldots,x_{2k-1})
\]
and the polynomials $\{ q_k^{(l)}(x_1,\ldots,x_{2k}) \mid 1\leq l
\leq \mu-1 \}$, $ x_1^{p^{\mu}} q_k^{(\mu)}(x_2, \ldots,
x_{2k+1})$ together with the polynomials
\[
x_1[x_2,x_3,x_4] \ \ \mbox{and} \ \ x_1[x_2,x_3]\ldots
[x_{2k+2},x_{2k+3}].
\]
This completes the proof of Proposition \ref{Rk_gen}.
\end{proof}
Proposition \ref{Rk_gen} immediately implies the following corollary.
\begin{corollary} \label{Rk_limit}
Let $W$ be a $T$-subspace of $F \langle X \rangle$ such that $R_k
\subsetneqq W$ ($k \ge 1$). Then $W$ is a finitely generated
$T$-subspace in $F \langle X \rangle$.
\end{corollary}
\begin{proposition} \label{Not_equal}
If $k \ne l$ then $R_k \ne R_l.$
\end{proposition}
\begin{proof}
Suppose, in order to get a contradiction, that $R_k = R_l$ for
some $k,l$, $k < l$. Then we have $C(G) \subseteq R_l.$
Indeed, by Theorem \ref{generators_of_C}, the $T$-subspace $C(G)$
is generated by the polynomial $x_1[x_2,x_3,x_4]$ and the
polynomials $x_1^p, x_1^p q_1(x_2,x_3), \ldots , x_1^p q_n(x_2,
\ldots , x_{2n+1}), \ldots .$ Clearly,
\[
x_1[x_2,x_3,x_4] \in T^{(3)} \subset R_l.
\]
Further,
\[
x_1^p, \ x_1^p q_1(x_2,x_3), \ \ldots , \ x_1^p q_{l-1}(x_2,
\ldots , x_{2l-1}) \in R_l
\]
by the definition of $R_l$ and
\[
x_1^p q_{k+1}(x_2, \ldots , x_{2k+3}), \ x_1^p q_{k+2}(x_2, \ldots
, x_{2k+5}), \ldots \in T^{(3, k+1)} \subseteq R_k = R_l
\]
by the definition of $T^{(3,k+1)}$. Since $k < l$, we have
\[
x_1^p, \ x_1^p q_1(x_2,x_3), \ \ldots , \ x_1^p q_{k}(x_2,
\ldots , x_{2k+1}), \ x_1^p q_{k+1}(x_2, \ldots , x_{2k+3}), \
\ldots \in R_l.
\]
Hence, all the generators of the $T$-subspace $C(G)$ belong to
$R_l$ so $C(G) \subseteq R_l$, as claimed.
Note that $T^{(3,k+1)} \subseteq R_l$ and $T^{(3,k+1)} \not
\subseteq C(G)$ so $ C(G) \subsetneqq R_l$. By Theorem
\ref{C_limit}, $C(G)$ is a limit $T$-subspace so each $T$-subspace
$W$ such that $ C(G) \subsetneqq W$ is finitely generated. In
particular, $R_l$ is a finitely generated $T$-subspace. On the
other hand, by Proposition \ref{rk}, the $T$-subspace $R_l$ is not
finitely generated. This contradiction proves that $R_k \ne R_l$
if $k \ne l$, as required.
\end{proof}
Theorem \ref{theorem_main3} follows immediately from Proposition \ref{rk},
Corollary \ref{Rk_limit} and Proposition \ref{Not_equal}.
\end{document}
\begin{document}
\thispagestyle{empty}
\begin{abstract}
We begin with a brief summary of issues encountered involving causality in quantum theory, placing careful emphasis on the assumptions involved in results such as the EPR paradox and Bell's inequality. We critique some solutions to the resulting paradox, including Rovelli's relational quantum mechanics and the many-worlds interpretation. We then discuss how a spacetime manifold could come about on the classical level out of a quantum system, by constructing a space with a topology out of the algebra of observables, and show that even with an hypothesis of superluminal causation enforcing consistent measurements of entangled states, a causal cone structure arises on the classical level. Finally, we discuss the possibility that causality as understood in classical relativistic physics may be an emergent symmetry which does not hold on the quantum level.
\end{abstract}
\maketitle
\section{Quantum theory}
Recall that a quantum system may be described by a Hilbert space $H$ (for technical reasons, this must often be a rigged Hilbert space \cite{Antoine, Madrid, TT}), together with some subalgebra $O$ of the self-adjoint operators on $H$. Operators in $O$ correspond to observables. For example, the spin of a spin-$1/2$ particle is described by a two-dimensional Hilbert space with an orthonormal basis of unit vectors given by (using Dirac's bra-ket notation) $\left| u \right\rangle$ and $\left| d \right\rangle$, the up and down states respectively. A \emph{state} of the system is a unit vector in $H$. According to the usual rules of quantum mechanics, when an observable $A\in O$ is measured, the possible outcomes are given by the spectrum of $A$. If $a_i$ is an eigenvalue of $A$, and the system is in the state $\left| \Psi \right\rangle$, then the probability of obtaining $a_i$ as the result of a measurement of $A$ is given by $\left\langle \Psi \right|P_{a_i}^{\dagger} P_{a_i}\left| \Psi \right\rangle = \| P_{a_i}\left| \Psi \right\rangle \|^2$, where $P_{a_i}$ is the projection operator onto the eigenspace of $A$ corresponding to the eigenvalue $a_i$. The quantity $\langle \Phi | \Psi \rangle$ is called the \emph{probability amplitude} of finding the system in the state $\left| \Phi \right\rangle$ when we measure it while it is in the state $\left| \Psi \right\rangle$.
According to the usual rules of quantum mechanics, a system normally evolves in a unitary fashion. That is, the state $\left| \Psi(t) \right\rangle$ is given by $U(t)\left| \Psi(0) \right\rangle$ for some unitary operator $U(t)$. However, when a system is measured, the state changes in a non-unitary fashion: if a measurement of the observable $A$ while the system is in the state $\left| \Psi \right\rangle$ yields the result $a_i$, then the system changes to the state $P_{a_i}\left| \Psi \right\rangle$ (suitably normalized). This is the usual \emph{axiom of measurement} in quantum theory. The difficulty in interpreting quantum mechanics is that the resulting probabilities are not additive, as we might expect in classical probability theory; rather, if a state $\left| \Psi \right\rangle=a\left| \Psi_1 \right\rangle + b\left| \Psi_2 \right\rangle$, then the probability of finding the system in the state $\left| \Phi \right\rangle$ will be given by $|a\langle \Phi | \Psi_1 \rangle + b\langle \Phi | \Psi_2 \rangle|^2$; that is, we add the probability amplitudes rather than the probabilities. This gives rise to the well-known phenomenon of quantum interference, as well as to the well-known difficulty in interpreting the concept of measurement in quantum mechanics.
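As an aside, this non-additivity is easy to verify numerically. The following Python sketch (our illustration; the specific vectors are hypothetical choices, not from the text) compares the quantum probability $|a\langle \Phi | \Psi_1 \rangle + b\langle \Phi | \Psi_2 \rangle|^2$ with the naive classical mixture of the two branches:

```python
import numpy as np

# Two orthonormal states and a superposition |Psi> = a|Psi1> + b|Psi2>
psi1 = np.array([1.0, 0.0])
psi2 = np.array([0.0, 1.0])
a, b = 1 / np.sqrt(2), -1 / np.sqrt(2)
psi = a * psi1 + b * psi2

phi = np.array([1.0, 1.0]) / np.sqrt(2)  # state we test for

# Quantum rule: add the amplitudes, then square
p_quantum = abs(a * (phi @ psi1) + b * (phi @ psi2)) ** 2
# Naive classical rule: add the probabilities of the two branches
p_classical = abs(a) ** 2 * abs(phi @ psi1) ** 2 + abs(b) ** 2 * abs(phi @ psi2) ** 2
print(p_quantum, p_classical)  # 0.0 versus 0.5: complete destructive interference
```

Here the cross term cancels the probability entirely, whereas a classical mixture of the two branches would give $1/2$.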
\section{EPR, Bell, and causality}
The Einstein-Podolsky-Rosen paradox in quantum theory is a result which shows that ordinary quantum theory, assuming that our universe is a single universe, must either involve hidden variables, or else there must be a mechanism which involves superluminal causation. The original result is found in \cite{EPR}. Note that the assumption that our universe is a single universe simply means that all observers involved are single observers, and they are the same observers at each point in the history considered, and it does not say anything about whether parallel universes exist or not. This assumption is denied by the many-worlds interpretation, \cite{JB}, which we will discuss in Sec. \ref{MWsec}.
\begin{theorem}
Quantum theory in a single universe without hidden variables requires a superluminal mechanism to enforce consistent measurements, \cite{EPR}.
\end{theorem}
\begin{proof} As an example, consider two spin-$1/2$ particles. The Hilbert space describing their spins will be the tensor product of two copies of the Hilbert space with basis $\left| u \right\rangle$ and $\left| d \right\rangle$. Thus, the Hilbert space will have a basis consisting of four vectors: $\left| u_0 \right\rangle\left| u_1 \right\rangle$, $\left| u_0 \right\rangle\left| d_1 \right\rangle$, $\left| d_0 \right\rangle\left| u_1 \right\rangle$, $\left| d_0 \right\rangle\left| d_1 \right\rangle$. Suppose these particles are produced at a certain point in spacetime and sent off in opposite directions at close to the speed of light, hitting two detectors which are separated by a spacelike interval. Suppose furthermore that their spins, prior to hitting the detectors, are described by the vector $\frac{1}{\sqrt{2}}(\left| u_0 \right\rangle\left| u_1 \right\rangle + \left| d_0 \right\rangle\left| d_1 \right\rangle)$. Then when results are compared, either both particles are found to have been spin up, or both are found to have been spin down. But this requires either that there be local hidden variables telling the particles which state they should collapse into upon measurement, or else some superluminal mechanism to enforce their correlation. \end{proof}
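The perfect correlation used in this proof can be checked with a short numerical sketch (ours, for illustration; the vectors below merely encode the basis states named above):

```python
import numpy as np

# Basis spin states |u> = (1, 0) and |d> = (0, 1)
u = np.array([1.0, 0.0])
d = np.array([0.0, 1.0])

# The entangled state (|u0 u1> + |d0 d1>)/sqrt(2) as a 4-component vector
psi = (np.kron(u, u) + np.kron(d, d)) / np.sqrt(2)

# Probabilities of the four joint outcomes
outcomes = {"uu": np.kron(u, u), "ud": np.kron(u, d),
            "du": np.kron(d, u), "dd": np.kron(d, d)}
probs = {k: abs(v @ psi) ** 2 for k, v in outcomes.items()}
print(probs)  # uu and dd each occur with probability 1/2; ud and du never
```

The mixed outcomes have probability zero, which is exactly the correlation whose enforcement the theorem is about.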
On the other hand, Bell showed that local hidden variables are incapable of reproducing the results of quantum mechanics.
\begin{theorem}
No local hidden variable theory can reproduce the measured results of quantum mechanics.
\end{theorem}
See \cite{Bell} for a proof of this theorem, and \cite{Belltest} for an experiment demonstrating that the quantum mechanical results are experimentally verified.
Bell's theorem is frequently cited as proof that hidden variables cannot be local (though non-local hidden variables, or hidden variables with superluminal causal enforcement mechanisms, can reproduce the results of quantum mechanics, e.g. \cite{DB}). However, it is often overlooked that the EPR result says that without hidden variables, there must still be some nonlocal enforcement mechanism, provided that our universe remains a single universe through history, and provided that correlations require some real causal mechanism of enforcement.
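For a concrete sense of what Bell's theorem rules out, the following sketch (our illustration, using the standard CHSH combination rather than Bell's original inequality) evaluates the CHSH quantity for the entangled state above; its value $2\sqrt{2}$ exceeds the bound of $2$ obeyed by every local hidden variable theory:

```python
import numpy as np

# Pauli matrices and the entangled state (|uu> + |dd>)/sqrt(2)
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
X = np.array([[0.0, 1.0], [1.0, 0.0]])
psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)

def A(theta):
    # Spin measurement along an axis at angle theta in the x-z plane
    return np.cos(theta) * Z + np.sin(theta) * X

def E(a, b):
    # Correlator <psi| A(a) (x) A(b) |psi>
    return psi @ np.kron(A(a), A(b)) @ psi

a, ap, b, bp = 0.0, np.pi / 2, np.pi / 4, -np.pi / 4
S = E(a, b) + E(a, bp) + E(ap, b) - E(ap, bp)
print(S)  # ~2.828 = 2*sqrt(2), above the classical CHSH bound of 2
```

The choice of angles here is the standard one that maximizes the quantum value.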
\section{Relational Quantum Mechanics and Many-Worlds}\label{MWsec}
Here we will discuss two methods for trying to resolve the above issue. The first, relational quantum mechanics, attempts to do this in a single universe. It thus ends up violating the supposition that every correlation has some real mechanism of enforcement underlying it, and appears logically unsound. The second does away with the assumption that our universe is a single universe.
We begin with the notion of relational quantum mechanics, which relies on the idea that the split of the physical universe into quantum systems and classical observers is relative or \emph{relational}; it has been suggested by Rovelli, \cite{CR}. According to this interpretation, a state vector exists only relative to an observer. Observer A may be a part of observer B's state vector (and vice versa). There are two problems with this approach. First, it does not define which systems will actually give rise to observers. Therefore, it depends, like the Copenhagen interpretation, on arbitrarily inserting observers into the physical universe. However, it is possible that this problem could eventually be resolved in a satisfactory manner. Second, it does not explain why two observers will agree whenever they compare experimental results. As an example, suppose there are a pair of spin-$1/2$ particles, a and b, which are entangled so they are both spin up or both spin down. Suppose observer A measures the spin of particle a and observer B measures the spin of particle b. Then according to observer A, measuring particle a causes the state vector to collapse to a state where particle b has exactly one spin, while according to observer B, measuring particle b causes his state vector to collapse to a state where particle a has exactly one spin. When A and B compare their results, A sees B's measurement as being determined by the collapse of A's state vector, while B sees A's measurement as being determined by the collapse of B's state vector. Rovelli argues that this is no different from the case of chronological ordering in special relativity, in which two observers may disagree about which event precedes another. Therefore, Rovelli is comfortable with the fact that A and B disagree about the cause of the correlation between A's results and B's results.
However, we maintain that the two cases are quite different. In special relativity, chronological order is no longer fixed, but there is still a well-defined \emph{causal} order between events. Two observers in special relativity will agree about the causes of a chain of events, even though they might disagree on the chronological order. According to our understanding of Rovelli's relational quantum mechanics, however, the question ``what \emph{actually} causes there to be a correlation between A's measurements and B's measurements'' has no answer. A and B will disagree about the cause. \emph{Relational quantum mechanics is therefore incompatible with the philosophical tenet that there is an objective or real cause for the correlation between the measurements of the two observers.}
The only method that we can see to try to salvage both the relational interpretation and the objectivity of causes is to add in a many-worlds type interpretation, \cite{JB}. In order to avoid any kind of spacelike causes, we might posit that when observer A goes to meet observer B, the act of comparing results causes observer A to branch into a universe with a compatible version of observer B. Note that we cannot say that A was already in that universe from the moment they made the first measurement, without returning to the need to enforce spacelike correlations. If A went into a universe where B already had a compatible measurement, then there is a spacelike correlation being enforced: which universe A goes into at the moment of initial measurement is determined by the measurement of (the many copies of) B at a spacelike interval away. Thus, we should instead hypothesize that A moves into the appropriate universe, with its compatible version of observer B, only when A and B meet to compare their results.
One problem with the many-worlds interpretation is quantum erasure. Quantum erasure occurs when a measurement is erased, which also results in the state vector no longer being collapsed. It has been experimentally observed, \cite{SW, KY}. However, according to the many-worlds interpretation, the new branch of the universe should only have the collapsed state vector. The fact of quantum erasure suggests that the old state vector must also somehow be available to the new branch of the universe, in case such an erasure happens.
Alternately, one might argue that real observations can never be erased, and that in the experiments done (such as in \cite{SW, KY}) there was no actual observation that was erased. This would necessitate the hypothesis that we might experimentally differentiate between real observations and mere entanglements between an experiment and the observational apparatus. As such, this would make predictions that are empirically different from those of traditional quantum mechanics.
It should be noted, of course, that the issue of quantum erasure is not a problem only for the many-worlds interpretation, but for any interpretation of quantum mechanics.
\section{Relativistic invariance on the classical level as an emergent property}
\subsection{Spacetime from quantum theory}\label{stqt}
The problem of quantizing general relativity is one which has had a rocky history, due to numerous technical problems. For some discussion of the attempts made so far, we refer the reader to \cite{TT}. Here we will merely note how a spacetime with a Lorentzian structure of causal cones could come about from a quantum theory.
Ordinarily, of course, a quantum theory is constructed on a spacetime manifold. However, we will here show how a quantum theory naturally gives rise to a space with a topology, and why this will, on the classical level, have causal cones built into the Poisson bracket structure.
Let us assume that we have some quantum theory with a (rigged) Hilbert space $V$ and an algebra of observables $O$, in the Heisenberg picture. Let $C$ be the subalgebra of $O$ which is physically interpreted as the Heisenberg-picture operators measuring field strengths, that is, ``position'' operators (as opposed to, for example, momentum operators). We will create a topological spacetime from this in the following manner. Let $M$ be a set whose elements consist of minimal non-empty intersections of complete sets of commuting observables in $C$ (these complete sets of commuting observables are essentially Cauchy data). Note that this associates elements of $C$ with elements of $M$: each element of $C$ will belong to a set which is an element of $M$. The elements of $M$ will form the points of the spacetime. To topologize $M$, we note that a complete set of commuting observables should define a (spacelike) hypersurface. Also, given a point $x\in M$, the set of points $y\in M$ such that the elements of $C$ associated with $y$ commute with the elements of $C$ associated with $x$ should form an open set. Finally, each point should be a closed set. We may give $M$ the coarsest topology induced by these requirements.
\begin{remark}
If we apply this technique to non-relativistic quantum theories, we end up with $M$ being one-dimensional, and spacelike hypersurfaces consist of single points.
\end{remark}
In other words, if the intersection of any two distinct complete sets of commuting observables in $C$ is empty, then the result would be a one-dimensional topology. This suggests that even if there were a causal foliation of spacetime, nonetheless as long as space is not 0-dimensional, there would still be a Lorentzian type of structure on the commutation relations of operators at various points. That is, an operator in $C$ associated with a given point $x$ will commute with operators on many different ``spacelike'' hypersurfaces in $M$. The points with which it does not commute would form an analog of a light cone. This would lead to a light-cone type of structure on the classical level, where the commutators become Poisson brackets. Since measurements cannot themselves transmit classical information, it is not surprising that this would mean that on the classical level, the transmission of information would be restricted to these cones (barring some discovery of how to exploit the finer causal ordering of $M$ to send information faster).
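The way a light-cone structure can be read off from commutation relations alone can be illustrated with a toy model (entirely hypothetical: a $1+1$-dimensional lattice in which we simply posit that operators commute exactly when their points are spacelike separated):

```python
# Toy model: points of a 1+1-dimensional lattice "spacetime". We posit
# (hypothetically) that field operators at p and q commute exactly when
# p and q are spacelike separated.
points = [(t, x) for t in range(-3, 4) for x in range(-3, 4)]

def commute(p, q):
    dt, dx = p[0] - q[0], p[1] - q[1]
    return abs(dx) > abs(dt)  # spacelike separation

origin = (0, 0)
# The analog of the light cone of the origin: the points whose operators
# fail to commute with operators at the origin.
cone = sorted(p for p in points if p != origin and not commute(p, origin))
print(cone)  # contains (1, 1), (1, -1), (2, 0), ... but not (0, 3)
```

The point of the exercise is that no metric is used: the cone is recovered purely from the algebraic data of which operators commute.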
\begin{remark}
This cone structure may have odd consequences in quantum gravity, since such a structure of causal cones will generally determine a metric up to a conformal factor, \cite{CC}. It is not clear how this may affect such theories; more work on this might yield interesting results.
\end{remark}
\subsection{Emergent symmetries}
It should be noted that a non-local enforcement mechanism need not involve an actual foliation of spacetime by spacelike slices to define a causal order. Indeed, one could try to posit that the enforcement mechanism were still in some sense relativistically invariant, as is done with the transactional interpretation of Cramer, \cite{JC}. However, there would still be some causal ordering beyond that given by the non-vanishing Poisson brackets of classical variables, for the following reason:
Suppose that, in the EPR setup, instead of simply having two particles, we used three, all entangled so they must all be found to be spin up or all be found to be spin down. Now suppose the third particle is sent in the same direction as the second particle, but slightly slower, so that it is measured at a point which is in the timelike future of the measurement of the second particle, but still spacelike separated from the measurement of the first particle. The measurement of the second particle causally precedes the measurement of the third particle, but involves a causal enforcement on the first particle's measurement. Therefore, the first particle's measurement appears to have to causally precede the measurement of the third particle, even though these are spacelike related events. This may not fully give a causal-ordering foliation of the entire spacetime manifold, but it does give rise to an ordering of some discrete subset of measurements, which is stronger than the classical causal ordering of those events.
\begin{theorem}
A superluminal enforcement mechanism for keeping measurements consistent, which behaves as indicated above, will result in a stronger causal ordering of measurements than would be expected classically.
\end{theorem}
Note: by \emph{stronger}, we mean here that two events which take place at points in spacetime which are not ordered by the classical partial ordering will be ordered by this quantum causal ordering. Thus, the quantum causal ordering is a partial order relation which contains the classical partial ordering as a subset.
We cannot, however, rule out the possibility that one might do away with some of the assumptions above about how such a superluminal enforcement mechanism would behave, which might avoid this result. However, the assumptions made above seem reasonable.
However, we argue that this result is not necessarily surprising or problematic. Indeed, there are many instances where a symmetry on one scale is broken at a smaller scale, or where a classical symmetry is broken on the quantum scale.
For an example of the former, which does not even involve quantum physics, consider a physical model of billiard balls. Such a physical model can be made assuming that each ball is a perfect sphere, with the inherent rotational symmetry which that entails. However, on a microscopic level, the balls exhibit some degree of unevenness. This unevenness breaks the symmetry, but it is irrelevant at the scales involved in the model.
When it comes to quantum mechanics, it is not uncommon to find \emph{anomalies}. These are symmetries on the classical level which do not apply on the quantum level. For example, the \emph{chiral anomaly} appears in certain theories of fermions with chiral symmetry on the classical level: this symmetry may be broken on the quantum level. See \cite{Wein} for a discussion of this phenomenon.
Perhaps the most obvious example of a broken symmetry is the commutativity of position and momentum, which exists on the classical level, but not on the quantum level.
Therefore, it does not appear to us to pose a problem to hypothesize that there are superluminal causal enforcement mechanisms involved in measurements involving quantum entanglement, which do not necessarily obey classical relativistic symmetries. In particular, these violations of classical symmetries would only appear on the level of measurements, rather than on the level of the unitary evolution of the system. Therefore they would be expected to vanish classically, since in the classical limit, measurements become moot due to the deterministic nature of the classical limit. As seen in Sec. \ref{stqt}, a relativistic style of causal cones is a natural consequence of the commutator relations of a quantum field theory. Therefore, even if we posit superluminal causal correlations for measurements, on the classical level we might expect to see Poisson brackets which correspond to a system of causal cones. Since measurements of entangled states cannot themselves be used to transfer information, this naturally results in classical information being restricted to transmission within said cones.
On the other hand, it is important to emphasize that the EPR correlations logically require local hidden variables, a branching multiverse, or a superluminal mechanism to enforce consistency. Bell's result, together with the subsequent experimental tests, shows that the first of these options is empirically false, which logically leaves the latter two options. This is a fact which should not be overlooked, paradoxical as it may seem.
\begin{thebibliography}{99}
\bibitem{Antoine} J.P. Antoine, \emph{Quantum Mechanics Beyond Hilbert Space}, appearing in Irreversibility and Causality, Semigroups and Rigged Hilbert Spaces, Arno Bohm, Heinz-Dietrich Doebner, Piotr Kielanowski, eds., Springer-Verlag, ISBN 3-540-64305-2, (1996)
\bibitem{JB} J. Barrett and P. Byrne, eds., \emph{The Everett Interpretation of Quantum Mechanics: Collected Works 1955–1980 with Commentary}, Princeton University Press, (2012)
\bibitem{Bell} J.S. Bell, \emph{On the Einstein Podolsky Rosen paradox}, Physics Vol. 1, No. 3, 195-200, (1964)
\bibitem{DB} D. Bohm, \emph{A suggested interpretation of the quantum theory in terms of 'hidden variables' I}, Physical Review 85: 166–179, (1952)
\bibitem{Belltest} J.F. Clauser, A. Shimony, \emph{Bell's theorem: experimental tests and implications}, Reports on Progress in Physics. 41: 1881–1927, (1978)
\bibitem{JC} J. Cramer, \emph{The transactional interpretation of quantum mechanics}, Reviews of Modern Physics 58, 647–688, July (1986)
\bibitem{EPR} A. Einstein, B. Podolsky, and N. Rosen, \emph{Can quantum-mechanical description of physical reality be considered complete?} Phys. Rev. 47 777, (1935)
\bibitem{Madrid} R. de la Madrid, \emph{The role of the rigged Hilbert space in Quantum Mechanics}, Eur. J. Phys. 26, 287, (2005) arXiv:quant-ph/0502053
\bibitem{CC} M. Rainer, \emph{Cones and causal structures on topological and differentiable manifolds}, Journal of Mathematical Physics 40(12), 6589-6597, (1999)
\bibitem{CR}C. Rovelli: \emph{Relational quantum mechanics}, International Journal of Theoretical Physics,
35, 1637, (1996) arXiv:quant-ph/9609002
\bibitem{TT} T. Thiemann, \emph{Modern canonical quantum general relativity (Cambridge monographs on mathematical physics)}, Cambridge University Press, (2008)
\bibitem{SW} S.P. Walborn et al., \emph{Double-slit quantum eraser}, Phys. Rev. A 65 (3): 033818, 2002, arXiv:quant-ph/0106078
\bibitem{Wein} S. Weinberg, \emph{The Quantum Theory of Fields. Volume II: Modern Applications}. Cambridge University Press, (2001)
\bibitem{KY} K. Yoon-Ho, R. Yu, S.P. Kulik, Y.H. Shih, and Marlan Scully, \emph{A delayed “choice” quantum eraser}, Physical Review Letters 84: 1–5, (2000), arXiv:quant-ph/9903047
\end{thebibliography}
\end{document}
\begin{document}
\title{A note on large induced subgraphs with prescribed residues in bipartite graphs}
\begin{abstract}
It was proved by Scott that for every $k\ge2$, there exists a constant $c(k)>0$ such that for every bipartite $n$-vertex graph $G$ without isolated vertices, there exists an induced subgraph $H$ of order at least $c(k)n$ such that $\deg_H(v) \equiv 1\pmod{k}$ for each $v \in H$. Scott conjectured that $c(k) = \Omega(1/k)$, which would be tight up to the multiplicative constant. We confirm this conjecture.
\end{abstract}
\section{Introduction}
Given a graph $G$ and integers $q>r\ge 0$, we define $f(G,r,q)$ to be the maximum order of an induced subgraph $H$ of $G$ where $\deg_H(v) \equiv r \pmod{q}$ for all $v \in H$ (or if no such $H$ exists, we set $f(G,r,q) = 0$).
There are many questions and conjectures concerning the behavior of $f(G,r,q)$ for various $G,r,q$. An old unpublished result of Gallai in this area is that\footnote{Actually what Gallai proved was slightly stronger. He showed that for each graph $G$, we can partition $V(G)$ into two parts $A,B$ so that $\deg_{G[A]}(v) \equiv 0\pmod{2}$ (respectively $\deg_{G[B]}(v) \equiv 0 \pmod{2}$) for each $v\in A$ (respectively $v\in B$).} $f(G,0,2)\ge n/2$ for every $n$-vertex graph (see \cite[Exercise~5.17]{lovasz} for a proof). Further questions about the behavior of $f$ received attention around 20-30 years ago (see e.g., \cite{caro,caro2,scott,scott2}). And more recently, this topic has had a minor renaissance (see e.g., \cite{balister,ferber,ferber2}).
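Gallai's partition theorem quoted in the footnote can be checked by brute force on small graphs. The following self-contained script is our own illustration, not part of the formal development; it enumerates every graph on $n$ vertices and every vertex bipartition:

```python
from itertools import combinations, product

def gallai_holds(n):
    """Brute-force check of Gallai's theorem: every graph on n vertices has a
    vertex partition (A, B) with all degrees even in G[A] and in G[B]."""
    verts = list(range(n))
    all_edges = list(combinations(verts, 2))

    def even_degrees(part, edges):
        # Degree of v inside G[part] must be even for every v in part.
        return all(sum((min(u, v), max(u, v)) in edges
                       for u in part if u != v) % 2 == 0
                   for v in part)

    for mask in product([0, 1], repeat=len(all_edges)):
        edges = {e for e, b in zip(all_edges, mask) if b}
        if not any(even_degrees(A, edges) and
                   even_degrees(set(verts) - A, edges)
                   for r in range(n + 1)
                   for A in map(set, combinations(verts, r))):
            return False
    return True

print(gallai_holds(4))  # True
```

The check is exponential in both the number of edges and vertices, so it is only feasible for very small $n$; it is meant as a sanity check of the statement, not as evidence for general $n$.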
This note will focus on an old result of Scott. For positive integer $k$, we define $c(k)$ to be $\inf_G\{ f(G,1,k)/|G|\}$ where $G$ ranges over all bipartite graphs with $\delta(G)\ge 1$. The following was proved by Scott:
\begin{thm}\cite[Lemma~8]{scott2} Let $k\ge 2$. Then
\[1/(2^k+k+1) \le c(k) \le 1/k.\]
\end{thm}\noindent Scott observed that a slightly more careful argument could further show that $c(k) = \Omega\left(\frac{1}{k^2 \log k}\right)$.
In this note we give an improved lower bound to $c(k)$ which is optimal up to the (implied) multiplicative constant.
\begin{thm}\label{main}Let $k \ge 2$. Then $c(k) = \Omega(1/k)$.
\end{thm} \noindent This is done by taking the improved argument suggested by Scott, and then applying a dyadic pigeonhole argument which was previously overlooked.
\section{Proof of Theorem~\ref{main}}
We will need the following result on the mixing time of random walks modulo $k$.
\begin{prp}\label{equi} Let $X_i$ be i.i.d. random variables that sample $\{0,1\}$ uniformly at random. If $n\ge k^3$, then $\Bbb{P}\left(\sum_{i=1}^n X_i \equiv 1 \pmod{k}\right)\ge (1-o_k(1))/k$.
\end{prp}\noindent \hide{When outlining how to prove $c(k)\ge \Omega\left(\frac{1}{k^2\log k}\right)$ in \cite{scott2}, Scott mentions that mixing results like the above follow from arguments in \cite{diaconis} (in particular this is essentially a slight modification of \cite[Theorem 2 of Chapter 3]{diaconis}). }Proposition~\ref{equi} is a mild variant of several known results, and $k^3$ could be replaced with $k^2\log k$ (or any function which is $\omega(k^2)$). We omit its proof, to keep our paper short and our methods elementary.
In \cite{scott2}, when Scott outlined how to prove $c(k) \ge \Omega\left(\frac{1}{k^2\log k}\right)$, he noted that Proposition~\ref{equi} (the key to the improvement) can be derived by slightly modifying the argument in \cite[Theorem~2 of Chapter~3]{diaconis}. These appropriate modifications now appear in \cite{ferber}. Namely, the interested reader can confirm that Proposition~\ref{equi} follows from the proof\footnote{In \cite{ferber}, the statement of their lemma hides some constants which are necessary to verify Proposition~\ref{equi}.} of \cite[Lemma~2.3]{ferber}. Both of these proofs rely on discrete Fourier Analysis.
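As a purely numerical sanity check of Proposition~\ref{equi} (our own illustration, not a proof), the distribution of $\sum_{i=1}^n X_i \bmod k$ can be computed exactly by dynamic programming over the residues:

```python
from fractions import Fraction

def residue_distribution(n, k):
    """Exact distribution of (X_1 + ... + X_n) mod k, where the X_i are
    i.i.d. uniform on {0, 1}; computed by DP over the k residues."""
    dist = [Fraction(0)] * k
    dist[0] = Fraction(1)
    for _ in range(n):
        nxt = [Fraction(0)] * k
        for r, p in enumerate(dist):
            nxt[r] += p / 2            # the step contributes 0
            nxt[(r + 1) % k] += p / 2  # the step contributes 1
        dist = nxt
    return dist

# With n = k^3 steps, every residue is hit with probability very close to 1/k.
k = 7
print(float(residue_distribution(k ** 3, k)[1]))  # ~ 0.142857
```

For $k=7$ and $n=k^3$ the probability of residue $1$ already agrees with $1/k$ to far more than machine precision, consistent with the exponential mixing behind the proposition.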
We now proceed to the main proof.
\begin{proof}[Proof of Theorem~\ref{main}]
\vspace{-\topsep} Let $G$ be an $n$-vertex bipartite graph with $\delta(G) \ge 1$, and let $V_1,V_2$ bipartition $G$ with $|V_1| \ge |V_2|$. We shall write $c_1,c_2$ to denote small positive quantities which will be determined later (it would suffice to take $c_1=1/4,c_2=1/2$, but for clarity and a slightly better constant we will only fix their values at the end of the proof and shall have them depend slightly on $k$). Our proof splits into three cases.
We take $W_1 \subseteq V_2$ to be a minimal set satisfying $|N(v)\cap W_1| >0$ for all $v \in V_1$ (i.e., $W_1$ is a minimal dominating set of $V_1$). By minimality of $W_1$, for each $w \in W_1$ there must exist $v_w \in V_1$ where $N(v_w) \cap W_1 = \{w\}$. Let $S_1 = \{v_w:w \in W_1\}$. We conclude that $W_1 \cup S_1$ induces a matching in $G$, proving that $f(G,1,k) \ge 2|W_1|$.
Hence, we will be done if $|W_1| \ge c_1 |V_1|/k$ (this is ``Case 1''). So we continue assuming $|W_1| < c_1 |V_1|/k$.
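The construction of $W_1$ and $S_1$ can be made concrete. The following sketch is our own illustration (the adjacency-list representation and fixed deletion order are our choices, not part of the proof): it builds a minimal dominating set by greedy deletion and extracts the private neighbours forming the induced matching:

```python
def minimal_dominating_matching(V1, V2, adj):
    """Construct W1 and S1 as in the proof.  adj[v] is the set of
    neighbours in V2 of a vertex v in V1 (every v has at least one).
    W1 is obtained by greedily deleting vertices of V2 while it still
    dominates V1; S1 collects one private neighbour v_w for each w in W1,
    so W1 ∪ S1 induces a matching."""
    W1 = set(V2)
    for w in sorted(V2, key=str):  # fixed order, for reproducibility
        if all(adj[v] & (W1 - {w}) for v in V1):
            W1.discard(w)          # still dominating without w
    private = {}
    for v in sorted(V1, key=str):
        hits = adj[v] & W1
        if len(hits) == 1:         # v sees exactly one vertex of W1
            private[next(iter(hits))] = v
    return W1, set(private.values())

# Tiny example: neither 'a' nor 'b' can be deleted, and the private
# neighbours of 'a' and 'b' are 0 and 2 respectively.
W1, S1 = minimal_dominating_matching({0, 1, 2}, {'a', 'b'},
                                     {0: {'a'}, 1: {'a', 'b'}, 2: {'b'}})
print(W1 == {'a', 'b'} and S1 == {0, 2})  # True
```

By minimality every $w \in W_1$ receives a private neighbour, so the returned matching always has $|W_1|$ edges.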
For $2\le i \le k-1$, we inductively create sets $W_i,S_i$. We take $W_i\subseteq W_{i-1}$ to be a minimal dominating set of $V_1\setminus \left(\bigcup_{j=1}^{i-1} S_j\right)$. And like in the above, we take $S_i \subseteq V_1\setminus \left(\bigcup_{j=1}^{i-1} S_j\right)$ so that $W_i \cup S_i$ induces a matching in $G$.
Let $T = V_1 \setminus \left(\bigcup_{i=1}^{k-1} S_i \right)$. We have
\begin{align*}
|T| &= |V_1|- \sum_{i=1}^{k-1} |S_i|\\
&= |V_1|- \sum_{i=1}^{k-1} |W_i|\\
&\ge |V_1|- (k-1)|W_1|\\
&\ge (1-c_1)|V_1|.
\end{align*}
Next, let $T^* = \{v \in T: |N(v)\cap W_{k-1}| \ge k^3\}$. Supposing that $|T^*| \ge c_2 |V_1|$ (this is ``Case 2''), we will deduce that $f(G,1,k)\ge (c_2-o_k(1))|V_1|/k$.
Indeed, let $U\subseteq W_{k-1}$ be a random subset where each element is included (independently) with probability $1/2$. We set $T_U = \{v \in T: |N(v)\cap U| \equiv 1 \pmod{k}\}$. By Proposition~\ref{equi}, we have that $\Bbb{P}(v \in T_U) \ge (1-o_k(1))/k$ for each $v \in T^*$. Thus by linearity of expectation we may fix some $U\subseteq W_{k-1}$ where $|T_U| \ge |T^*|(1-o_k(1))/k \ge (c_2-o_k(1))|V_1|/k$. Next choosing $S \subseteq \bigcup_{i=1}^{k-1}S_i$ so that $|N(u)\cap (T_U\cup S)| \equiv 1 \pmod{k}$ for each $u \in U$, we have that $S\cup U \cup T_U$ induces a subgraph in $G$ demonstrating that $f(G,1,k) \ge |S\cup U \cup T_U| \ge |T_U| \ge (c_2-o_k(1))|V_1|/k$.
Otherwise, we must have that $T\setminus T^*$, the set of $v \in T$ where $|N(v)\cap W_{k-1}| <k^3$, has more than $(1-c_1-c_2)|V_1|$ elements (this is ``Case 3''). By dyadic pigeonhole, there exists some $0 \le p\le \log(k^3 ) = O(\log k)$ so that \begin{align*}
|\{v \in T : 2^p \le |N(v)\cap W_{k-1}| <2^{p+1}\}| &\ge |T\setminus T^*|/O(\log k) \\
&\ge (1-c_1-c_2)|V_1|/O(\log k).\\
\end{align*}Take $T' = \{v \in T : 2^p \le |N(v)\cap W_{k-1}| <2^{p+1}\}$ to be this large set.
We let $U \subseteq W_{k-1}$ be a random subset so that each element is included (independently) with probability $1/2^p$. Defining $T_U$ as before, some casework\footnote{If $p =0$, then $U = W_{k-1}$ and this probability is one. Otherwise this probability is $\binom{|N(v)\cap W_{k-1}|}{1} (1-2^{-p})^{|N(v)\cap W_{k-1}|} 2^{-p} \ge (1-2^{-p})^{2^{p+1}-1} \ge e^{-2}$.} shows $\Bbb{P}(v \in T_U) \ge e^{-2}$ for each $v \in T'$. Hence, by linearity of expectation, we may fix $U$ so that $|T_U|\ge e^{-2}|T'|$. As above we may find $S \subseteq \bigcup_{i=1}^{k-1} S_i$ so that $S\cup U\cup T_U$ demonstrates that $f(G,1,k) \ge |S\cup U \cup T_U| \ge e^{-2}(1-c_1-c_2) |V_1| /O(\log k)$.
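The $e^{-2}$ bound in the footnote can be verified numerically (our own check, using the exact expression $d\,2^{-p}(1-2^{-p})^{d-1}$ for the probability that exactly one of $d$ independent coins of bias $2^{-p}$ comes up heads):

```python
import math

def prob_exactly_one(d, p):
    """Probability that exactly one of d independent coins of bias 2^-p
    comes up heads: d * 2^-p * (1 - 2^-p)^(d - 1)."""
    q = 2.0 ** (-p)
    return d * q * (1 - q) ** (d - 1)

# For every p >= 1 and every degree d with 2^p <= d < 2^(p+1), this
# probability exceeds e^-2 (for p = 0 it equals 1, as in the footnote).
worst = min(prob_exactly_one(d, p)
            for p in range(1, 12)
            for d in range(2 ** p, 2 ** (p + 1)))
print(worst >= math.exp(-2))  # True
```

The worst case is attained near $d = 2^{p+1}-1$ for large $p$, where the probability tends to $2e^{-2} \approx 0.27$, comfortably above $e^{-2} \approx 0.135$.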
Now fix any sufficiently small $\epsilon > 0$. Letting $c_1 = 1/3-\epsilon/2, c_2 = 2/3-\epsilon$, we get that each of the first two cases imply that $f(G,1,k)\ge (2/3-\epsilon-o_k(1))|V_1|/k \ge (1/3-\epsilon-o_k(1))n/k$ (since $|V_1| \ge |V_2|$). Meanwhile with $\epsilon$ fixed, the third case implies $f(G,1,k) = \Omega_{\epsilon}(n/\log k)$. Taking $\epsilon \downarrow 0$ as $k \to \infty$ we have that $f(G,1,k) \ge (1/3-o_k(1))n/k$.
\end{proof}
As a closing remark, we note it is still open whether $c(k) =1/k$ for all $k$ (as noted in \cite{scott2}, considering $K_{k,k}$ demonstrates that $c(k) \le 1/k$). Even for $k=2$, the best known bounds are $1/4 \le c(2)\le 1/2$, with the lower bound coming from \cite[Theorem~2]{scott}.
\begin{ack}The author thanks Zachary Chase for spotting some typographical errors in a previous draft of this paper.
\end{ack}
\begin{thebibliography}{}
\bibitem{balister} P. Balister, E. Powierski, A. Scott, and J. Tan, \textit{Counting partitions of $G(n,1/2)$ with degree congruence conditions,} preprint (May 2021), arXiv:2105.12612.
\bibitem{caro} Y. Caro, \textit{On induced subgraphs with odd degrees,} in \textit{Discrete Mathematics,} \textbf{132} (1994), 23–28.
\bibitem{caro2} Y. Caro, I. Krasikov and Y. Roditty, \textit{Zero-sum partition theorems for
graphs,} in \textit{International Journal of Mathematics and Mathematical Sciences,} \textbf{17} (1994), 697–702.
\bibitem{diaconis} P. Diaconis, \textit{Group representations in probability and statistics,} (1998).
\bibitem{ferber} A. Ferber, L. Hardiman, and M. Krivelevich, \textit{On subgraphs with degrees of prescribed residues in the random
graph,} preprint (July 2021), arXiv:2107.06977.
\bibitem{ferber2} A. Ferber and M. Krivelevich, \textit{Every graph contains a linearly sized induced subgraph with all
degrees odd,} preprint (September 2020), arXiv:2009.05495v3.
\bibitem{lovasz} L. Lovasz, \textit{Combinatorial problems and exercises (2nd edition),} in \textit{AMS Chelsea Publishing} (1993).
\bibitem{scott} A. Scott, \textit{Large induced subgraphs with all degrees odd,} in \textit{Combinatorics, Probability and Computing} \textbf{1} (1992), 335-349.
\bibitem{scott2} A. Scott, \textit{On induced subgraphs with all degrees odd,} in \textit{Graphs and Combinatorics} \textbf{17} (2001), 539–553.
\end{thebibliography}
\end{document}
\begin{document}
\title{Black-box Testing Liveness Properties of Partially Observable Stochastic Systems}
\begin{abstract}
We study black-box testing for stochastic systems and arbitrary $\omega$-regular specifications, explicitly including liveness properties. We are given a finite-state probabilistic system that we can only execute from the initial state. We have no information on the number of reachable states, or on the probabilities; further, we can only partially observe the states. The only action we can take is to restart the system. We design restart strategies guaranteeing that, if the specification is violated with non-zero probability, then w.p.1 the number of restarts is finite, and the infinite run executed after the last restart violates the specification. This improves on previous work that required full observability. We obtain asymptotically optimal upper bounds on the expected number of steps until the last restart. We conduct experiments on a number of benchmarks, and show that our strategies allow one to find violations in Markov chains much larger than the ones considered in previous work.
\end{abstract}
\section{Introduction}
Black-box testing is a fundamental analysis technique when the user does not have access to the design or the internal structure of a system \cite{LeeY96,PeledVY02}. Since it only examines one run of the system at a time, it is computationally cheap, which makes it often the only applicable method for large systems.
We study the black-box testing problem for finite-state probabilistic systems and $\omega$-regular specifications: Given an $\omega$-regular specification, the problem consists of finding a run of the program that violates the property, assuming that such runs have nonzero probability.
Let us describe our assumptions in more detail. We do not have access to the code of the system or its internal structure, and we do not know any upper bound on the size of its state space. We can repeatedly execute the system, restarting it at any time. W.l.o.g.~we assume that all runs of the system are infinite. We do not assume full observability of the states of the system, only that we can observe whether the atomic propositions of the property are currently true or false. For example, if the property states that a system variable, say $x$, should have a positive value infinitely often, then we only assume that at each state we can observe the sign of $x$; letting $\Sigma$ denote the set of possible observations, we have $\Sigma = \{ +, - \}$, standing for a positive and a zero or negative value, respectively (in the rest of the introduction we shorten ``zero or negative'' to ``negative''). Every system execution induces an observation, that is, an element of $\Sigma^\omega$. The violations of the property are the $\omega$-words $V \subseteq \Sigma^\omega$ containing only finitely many occurrences of $+$.
Our goal is to find a \emph{strategy} that decides after each step whether to abort the current run and restart the system, or continue the execution of the current run. The strategy must ensure that some run that violates the property, that is, a run whose observation belongs to $V$, is eventually executed. The strategy decides depending on the observations made so far. Formally, given $\Sigma$ and the set of actions $A = \{\mathsf{r}, \mathsf{c} \}$ (for ``restart'' and ``continue'') a strategy for $V$ is a mapping from $(\Sigma \times A)^*\Sigma$, the sequence of observations and actions executed so far, to $A$, the next decision. Our goal is to find a strategy $\sigma$ satisfying the following property:
\begin{quote}
For every finite-state program $P$ over $\Sigma$, if $V\subseteq \Sigma^\omega$ has positive probability and the runs of $P$ are restarted according to $\sigma$, then w.p.1 the number of restarts is finite, and the observation of the run executed after the last restart belongs to $V$.
\end{quote}
Observe that it is not clear that such strategies exist. They are easy to find for safety properties, where the fact that a run violates the property is witnessed by a \emph{finite} prefix\footnote{One can choose for $\sigma$ the strategy ``after the $n$-th reset, execute $n$ steps; if this finite execution is not a witness, restart, otherwise continue forever.'' Indeed, if the shortest witness has length $k$, then for every $n\geq k$, after the $n$-th restart the strategy executes a witness with positive probability, and so it eventually executes one w.p.1.}, but for liveness properties there is no such prefix in general.
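The footnoted strategy for safety properties can be sketched in code. This is our own illustration: `restart`, `step` and `is_witness` are hypothetical interfaces to the black box, and for testability we return the witnessing prefix instead of continuing the run forever:

```python
def find_safety_witness(restart, step, is_witness, max_restarts=10_000):
    """After the n-th restart, execute n steps; if the observed prefix
    witnesses a violation, stop restarting (here: return the prefix; in
    the abstract setting one would continue this run forever)."""
    for n in range(1, max_restarts + 1):
        restart()
        prefix = [step() for _ in range(n)]
        if is_witness(prefix):
            return prefix
    return None

# Toy black box with observation sequence (+ + - -)(+ + - -)...;
# a violation is witnessed by two consecutive '-' observations.
trace, pos = ['+', '+', '-', '-'], [0]
def restart():
    pos[0] = 0
def step():
    obs = trace[pos[0] % len(trace)]
    pos[0] += 1
    return obs
def witness(prefix):
    return any(a == b == '-' for a, b in zip(prefix, prefix[1:]))

print(find_safety_witness(restart, step, witness))  # ['+', '+', '-', '-']
```

Exactly as in the footnote: once $n$ reaches the length of a shortest witness, each attempt finds one with positive probability, so w.p.1 the number of restarts is finite.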
We show that these strategies exist for every $\omega$-regular language $V$. Moreover, the strategies only need to maintain a number of counters that depends only on $V$, and not on the program. So in order to restart $P$ according to $\sigma$ one only needs logarithmic memory in the length of the current sequence.
\begin{example}
To give a first idea of why these strategies also exist for liveness properties, consider the property over $\Sigma = \{ +, -\}$ stating that a variable $x$ should have a positive value only finitely often. The runs violating the property are those that visit $+$-states infinitely often. Our results show that the following strategy works in detecting a run violating the property (among others):
\begin{quote}
After the $n$-th restart, repeatedly execute blocks of $2n$ steps. If at some point after executing the first block \emph{the second half} of the concatenation of the blocks executed so far contains only negative states, then restart.
\end{quote}
For example, assume there have been $4$ restarts. Then the strategy repeatedly executes blocks of $8$ steps. If after executing $1,2,3, \ldots$ of these blocks the last $4, 8, 12, \ldots$ states are negative, then the strategy restarts for the $5$th time. If that is never the case, then there are only $4$ restarts. Figure \ref{fig:ideal} shows a family of Markov chains for which naive strategies do not work, but the above strategy does: almost surely the number of restarts is finite and the run after the last restart visits the rightmost state infinitely often. Observe that for every $n \geq 0$ the family exhibits executions that visit $+$ states at least $n$ times, and executions that visit a $+$ state at most once every $n$ steps.
\begin{figure}
\caption{A family of partially observable Markov chains}
\label{fig:ideal}
\end{figure}
\end{example}
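The behaviour of this strategy can be simulated. Since the chain family of Figure \ref{fig:ideal} is not reproduced here, we substitute a toy chain of our own (an assumption of this sketch, not the chain from the figure): a fair coin flip leads into one of two bottom components, one violating the property and one not:

```python
import random

def make_chain():
    """Toy stand-in for the figure: the first step flips a fair coin into
    one of two BSCCs; 'good' emits - + - + ... forever (so '+' occurs
    infinitely often, violating the property), 'bad' emits - forever."""
    state = {'mode': None, 't': 0}
    def restart():
        state['mode'], state['t'] = None, 0
    def step():
        if state['mode'] is None:
            state['mode'] = random.choice(['good', 'bad'])
        state['t'] += 1
        if state['mode'] == 'bad':
            return '-'
        return '+' if state['t'] % 2 == 0 else '-'
    return restart, step

def run_strategy(restart, step, survive_blocks=50, max_restarts=500):
    """The strategy from the example: after the n-th (re)start, execute
    blocks of 2n steps and restart as soon as the second half of the
    observations since the last restart contains no '+'.  A run surviving
    `survive_blocks` blocks is returned as a candidate violation."""
    for n in range(1, max_restarts + 1):
        restart()
        obs = []
        for _ in range(survive_blocks):
            obs.extend(step() for _ in range(2 * n))
            if '+' not in obs[len(obs) // 2:]:
                break  # second half all negative: restart
        else:
            return n, obs  # run after the n-th (re)start survived
    return max_restarts, None

random.seed(1)
restarts, obs = run_strategy(*make_chain())
print(restarts < 500 and obs is not None and obs[-1] == '+')  # True
```

On this chain the `bad` component triggers a restart after the first block, while the `good` component never does; the number of restarts is geometrically distributed and hence finite almost surely.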
We also obtain asymptotically optimal upper bounds on the expected time until the last restart, that is, on the time until the execution of the run violating the property starts. The bounds depend on two parameters of the Markov chain associated to the program, called the \emph{progress radius} and the \emph{progress probability}. An important part of our contribution is the identification of these parameters as the key ones to analyze.
While our results are stated in an abstract setting, they easily translate into practice. In a practical scenario, on top of the values of the atomic propositions, we can also observe useful debugging information, like the values of some variables. We let a computer execute runs of the system for some fixed time $t$ according to the strategy $\sigma$. If at time $t$ we observe that the last restart took place a long time ago, then we stop testing and return the run executed since the last restart as candidate for a violation of the property. In the experimental section of our paper we use this scenario to detect errors in \emph{population protocols}, a model of distributed computation, whose state space is too large to find them by other means.
\noindent \textbf{Related work.} There is a wealth of literature on black-box testing and black-box checking \cite{LeeY96,PeledVY02}, but the underlying models are not probabilistic and the methods require knowledge of an upper bound on the number of states. Work on probabilistic model-checking assumes that (a model of) the system is known \cite{Baier2008}. There are also works on black-box verification of probabilistic systems using statistical model checking and statistical hypothesis testing \cite{YounesS02,Sen04,SenVA05,Younes05,YounesCZ10} (see also \cite{LarsenL16a,LegayDB10} for surveys on statistical model checking). They consider a different problem: we focus on producing a counterexample run, while the goal of black-box verification is to accept or reject a hypothesis on the probability of the runs that satisfy a property.
Our work is also related to the \emph{runtime enforcement problem} \cite{Schneider00,BasinJKZ13,LigattiBW09,FalconeMFR11,FalconeP19}, which also focuses on identifying violations of a property. However, in these works either the setting is not probabilistic, or only a subset of the $\omega$-regular properties close to safety properties is considered. Finally, the paper closest to ours is \cite{EKKW21}, which considers the same problem, but for fully observable systems. In particular, in the worst case the strategies introduced in \cite{EKKW21} require storing the full sequence of states visited along a run, and so they use memory linear in the length of the current sequence, instead of logarithmic, as is the case for our strategy.
\noindent \textbf{Structure of the paper.} The paper is organized as follows. Section \ref{sec:prelims} contains preliminaries. Section \ref{sec:blackboxtesting} introduces the black-box testing problem for arbitrary $\omega$-regular languages with partial observability, and shows that it can be reduced
to the problem for canonical languages called the Rabin languages. Section \ref{sec:strategy} presents our black-box strategies for the Rabin languages, and proves them correct. Section \ref{sec:quantitative} obtains asymptotically optimal upper bounds on the time to the last restart. Section \ref{sec:experiments} reports some experimental results.
\section{Preliminaries}
\label{sec:prelims}
\textbf{Directed graphs.} A directed graph is a pair $G=(V, E)$, where $V$ is the set of nodes and
$E \subseteq V\times V$ is the set of edges. A path (infinite path) of $G$ is a finite (infinite) sequence
$\pi = v_0, v_1, \ldots$ of nodes such that $(v_i, v_{i+1}) \in E$ for every $i=0,1, \ldots$. A path consisting only of one node is \emph{empty}.
Given two vertices $v, v' \in V$, the \emph{distance} from $v$ to $v'$ is the length of a shortest path from $v$ to $v'$, and the distance from $v$ to a set $V' \subseteq V$ is the minimum over all $v' \in V'$ of the distance from $v$ to $v'$.
A graph $G$ is strongly connected if for every two vertices $v, v'$ there is a path leading from $v$ to $v'$. A graph $G'=(V',E')$ is a subgraph of $G$, denoted $G' \preceq G$, if $V' \subseteq V$ and $E' \subseteq E \cap (V' \times V')$; we write $G' \prec G$ if $G' \preceq G$ and $G'\neq G$. A graph $G' \preceq G$ is a strongly connected component (\acron{SCC}) of $G$ if it is strongly connected and no graph $G''$ satisfying $G' \prec G'' \preceq G$ is strongly connected. An \acron{SCC} $G'=(V',E')$ of $G$ is a bottom \acron{SCC} (\acron{BSCC}{}) if $v \in V'$ and $(v, v') \in E$ imply $v' \in V'$.
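The bottom \acron{SCC}{}s of a finite digraph can be computed directly from these definitions. A minimal sketch of our own (quadratic fixed-point reachability, chosen for brevity over a linear-time SCC algorithm):

```python
def bsccs(succ):
    """Bottom SCCs of a finite digraph given as succ[v] = set of successors.
    Two nodes share an SCC iff each reaches the other; an SCC is bottom iff
    no edge leaves it."""
    nodes = list(succ)
    reach = {v: {v} for v in nodes}
    changed = True
    while changed:  # transitive closure by fixed-point iteration
        changed = False
        for v in nodes:
            extra = set().union(*(succ[w] for w in reach[v])) - reach[v]
            if extra:
                reach[v] |= extra
                changed = True
    comps, seen = [], set()
    for v in nodes:
        if v not in seen:
            comp = {w for w in reach[v] if v in reach[w]}
            seen |= comp
            comps.append(comp)
    return [c for c in comps if all(succ[v] <= c for v in c)]

# Node 0 can reach the two absorbing nodes 1 and 2; the BSCCs are {1}, {2}.
print(bsccs({0: {1, 2}, 1: {1}, 2: {2}}))  # [{1}, {2}]
```

In a Markov chain, runs end up w.p.1 in a \acron{BSCC}{} of its graph, which is why these components matter for the analysis below.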
\noindent\textbf{Partially observable Markov chains.} Fix a finite set $\Sigma$ of \emph{observations}. A \emph{partially observable Markov chain} is a tuple $\mathcal{M} = (S, s_{in}, \Sigma, \textit{Obs}, \mathbf{P})$, where
\begin{itemize}
\item $\Sigma$ is a set of \emph{observations};
\item $S$ is a finite set of \emph{states} and $s_{in} \in S$ is the \emph{initial} state;
\item $\textit{Obs} \colon S \to \Sigma$ is an \emph{observation function} that assigns to every state an observation; and
\item $\mathbf{P} \;\colon\; S \times S \to [0,1]$ is the transition probability matrix, such that for every $s\in S$ it holds $\sum_{s'\in S} \mathbf{P}(s,s') = 1$.
\end{itemize}
Intuitively, $\textit{Obs}(s)$ models the information we can observe when the chain visits $s$. For example, if $s$ is the state of a program,
consisting of the value of the program counter and the values of all variables, $\textit{Obs}(s)$ could be just the values of the program counter, or the values of a subset of public variables.
The graph of $\mathcal{M}$ has $S$ as set of nodes and $\{ (s, s') \mid \mathbf{P}(s,s') > 0\}$ as set of edges. Abusing language,
we also use $\mathcal{M}$ to denote the graph of $\mathcal{M}$.
A \emph{run} of $\mathcal{M}$ is an infinite path $\rho = s_0 s_1 \cdots$ of $\mathcal{M}$; we let $\rho[i]$ denote the state $s_i$.
The sequence $\textit{Obs}(\rho) \coloneqq \textit{Obs}(s_0) \textit{Obs}(s_1) \cdots$ is the \emph{observation} associated to $\rho$.
Each path $\pi$ in $\mathcal{M}$ determines the set of runs $\mathsf{Cone}(\pi)$ consisting of all runs that start with $\pi$.
To $\mathcal{M}$ we assign the probability space $
(\rhos,\mathcal F,\mathbb P)$, where $\rhos$ is the set of all runs in $\mathcal{M}$, $\mathcal F$ is the $\sigma$-algebra generated by all $\mathsf{Cone}(\pi)$,
and $\mathbb P$ is the unique probability measure such that
$\mathbb P[\mathsf{Cone}(s_0s_1\cdots s_k)] =
\prod_{i=1}^{k} \mathbf{P}(s_{i-1},s_i)$ if $s_0 = s_{in}$ (and $0$ otherwise), where the empty product equals $1$.
The expected value of a random variable $f\colon\rhos\to\mathbb R$ is $\mathbb E[f]=\int_\rhos f\ d\,\mathbb P$.
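The cone probabilities can be computed directly from the matrix. A minimal sketch of our own, using a dictionary-of-dictionaries representation of $\mathbf{P}$:

```python
def cone_prob(P, path):
    """Probability of Cone(s_0 s_1 ... s_k): the product of the transition
    probabilities along the path (the empty product is 1).  P is a
    dict-of-dicts transition matrix; the path is assumed to start at the
    initial state."""
    prob = 1.0
    for s, t in zip(path, path[1:]):
        prob *= P[s].get(t, 0.0)
    return prob

# Two-state chain: from 0 move to 0 or 1 with probability 1/2; 1 is absorbing.
P = {0: {0: 0.5, 1: 0.5}, 1: {1: 1.0}}
print(cone_prob(P, [0, 0, 1]))  # 0.25
```

Missing entries of the sparse matrix are treated as probability $0$, so impossible paths get cone probability $0$ as required.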
\noindent\textbf{Partially Observable Markov Decision Processes.} A \emph{$\Sigma$-observable Markov Decision Process} ($\Sigma$-\acron{MDP}) is a tuple $\mathsf{M} = (S, s_{in}, \Sigma, \textit{Obs}, A, \Delta)$, where $S, s_{in}, \Sigma, \textit{Obs}$ are as for Markov chains,
$A$ is a finite set of \emph{actions}, and $\Delta \colon S \times A \to \mathcal{D}(S)$ is a \emph{transition function} that for each state $s$ and action $a \in A$ yields a probability
distribution over successor states. The probability of state $s'$ in this distribution is denoted $\Delta(s, a, s')$.
\noindent\textbf{Strategies.} A \emph{strategy} on $\Sigma$-\acron{MDP}s with $A$ as set of actions is a function $\sigma \colon (\Sigma \times A)^*\Sigma \rightarrow A$, which given a finite path $\pi = \ell_0 \, a_0 \, \ell_1 \, a_1 \dots a_{n-1} \, \ell_n \in (\Sigma \times A)^*\Sigma$, yields the action $\sigma(\pi) \in A$ to be taken next. Notice that $\sigma$ only ``observes'' $\textit{Obs}(s)$, not the state $s$ itself. Therefore, it can be applied to any $\Sigma$-\acron{MDP} $\mathsf{M}=(S, s_{in}, \Sigma, \textit{Obs}, A, \Delta)$, inducing the Markov chain $\mathsf{M}^\sigma = (S^\sigma, s_{in}, \Sigma, \textit{Obs}, A, \mathbf{P}^\sigma)$ defined as follows: $S^\sigma = (S \times A)^*\times S$; and for every state $\pi \in S^\sigma$ of $\mathsf{M}^\sigma$ ending at a state $s \in S$ of $\mathsf{M}$, the successor distribution is defined by $\mathbf{P}^\sigma(\pi, \pi \, a \, s') \coloneqq \Delta(s, a, s')$ if $\sigma(\pi)=a$ and $0$ otherwise.
\section{The black-box testing problem}
\label{sec:blackboxtesting}
Fix a set $\Sigma$ of observations, and let $\mathsf{r}, \mathsf{c}$ (for \textbf{r}estart and \textbf{c}ontinue) be two actions. We associate to a $\Sigma$-observable Markov chain $\mathcal{M} = (S, s_{in}, \Sigma, \textit{Obs}, \mathbf{P})$ a \emph{restart \acron{MDP}} $\mathsf{M_{r}} = (S, s_{in}, \Sigma, \textit{Obs}, \{ \mathsf{r}, \mathsf{c}\}, \Delta)$, where for every two states $s, s' \in S$ the transition function is given by: $\Delta(s, \mathsf{r}, s')=1$ if $s' = s_{in}$ and $0$ otherwise, and $\Delta(s, \mathsf{c}, s')= \mathbf{P}(s, s')$. Intuitively, at every state of $\mathsf{M_{r}}$ we have the choice between restarting the chain $\mathcal{M}$ or continuing.
We consider black-box strategies on $\Sigma$ and $\{\mathsf{r}, \mathsf{c}\}$.
Observe that if a run $\pi$ of $\mathsf{M_{r}}^{\sigma}$ contains finitely many occurrences of $\mathsf{r}$, then the suffix of $\pi$ after the last occurrence of $\mathsf{r}$ is a run of $\mathcal{M}$ (after dropping the occurrences of the continue action $\mathsf{c}$). More precisely, if $\pi = \pi_0\pi'$, where $\pi'$ is the longest suffix of $\pi$ not containing $\mathsf{r}$, then $\pi'=(\pi_0 \, s_{in}) \, (\pi_0 \, s_{in} \, \mathsf{c} \, s_1) \, (\pi_0 \, s_{in} \, \mathsf{c} \, s_1 \, \mathsf{c} \, s_2) \ldots$, where $s_{in} s_1 s_2 \ldots$ is a run of $\mathcal{M}$. The sequence of observations of $s_{in} s_1 s_2 \ldots$ is an infinite word over $\Sigma$, called the \emph{tail} of $\pi$; formally $\tail{\pi}\coloneqq \textit{Obs}(s_{in}) \textit{Obs}(s_1) \textit{Obs}(s_2) \cdots$.
\begin{definition}[Black-box testing strategies]
Let $L \subseteq \Sigma^\omega$ be an $\omega$-regular language. A \emph{black-box strategy} $\sigma$ on $\Sigma$ and $\{\mathsf{r}, \mathsf{c}\}$ is a \emph{testing strategy for $L$} if it satisfies the following property: for every $\Sigma$-observable Markov chain $\mathcal{M}$, if $\Pr_\mathcal{M}(L) > 0$ then w.p.1 a run of $\mathsf{M_{r}}^{\sigma}$ has a finite number of restarts, and its tail belongs to $L$. The \emph{black-box testing problem} for $L$ consists of finding a black-box testing strategy for $L$.
\end{definition}
We denote by $\#\restact(\rho)\in\mathbb{N}\cup\{\infty\}$ the number of appearances of the restart action $\mathsf{r}$ in $\rho$. Intuitively, the language $L$ models the set of potential violations of a given liveness specification. If we sample any finite-state $\Sigma$-observable Markov chain $\mathcal{M}$ according to a testing strategy for $L$, then w.p.1 either we eventually stop restarting and the tail of the run is a violation, or there exist no violations.
\begin{comment}
\begin{figure}
\caption{An idealized example}
\label{fig:ideal}
\end{figure}
\begin{example}
We recall an example from \cite{EKKW21} describing the process of downloading an application
to establish a communication channel through which messages are repeatedly received. The downloading process can crash or hang; crashes can be detected by the user, but not whether the process hangs. The set of labels is $\Sigma = \{ \textit{crash}, \textit{rec}, \checkmark \}$, where $\checkmark$ indicates that a step has been taken. The language $L$ is the set of words over $\Sigma$ containing some occurrence of \textit{crash} or only finitely many occurrences of \textit{rec}. The internal behavior of the process, which is unknown to the user, is as follows. The process downloads
$n$ packages, iterating each download until a CRC-checksum detects no error. If some error is not detected the process may crash or hang. A Markov chain model is shown in Figure \ref{fig:ideal}. At state $(i,b)$ the process downloads the $i$-th package; if $b=0$ then there have been no undetected errors, and if $b=1$ there has been at least one. At state $(\ell, b)$, the application is launched. States \textit{rec}, \textit{crash}, and \textit{hang} indicate that a message has been received, that the downloading process has crashed, or that it hangs, respectively. All states but \textit{crash} and \textit{rec} are labeled with $\checkmark$.
\end{example}
\end{comment}
\subsection{Canonical black-box testing problems}
Using standard automata-theoretic techniques, the black-box testing problem for an arbitrary $\omega$-regular language $L$ can be reduced to the black-box testing problem for a canonical language.
For this, we need to introduce some standard notions of the theory of automata on infinite words.
A deterministic Rabin automaton (\acron{DRA}{}) over an alphabet $\Sigma$ is a tuple $\mathcal{A} = (\mathcal{A}S, \Sigma, \mathcal{A}Tr, \mathcal{A}Init, \mathcal{A}Acc)$, where $\mathcal{A}S$ is a finite set of states, $\mathcal{A}Tr \colon \mathcal{A}S \times \Sigma \to \mathcal{A}S$ is a transition function, $\mathcal{A}Init \in \mathcal{A}S$ is the initial state, and $\mathcal{A}Acc \subseteq 2^\mathcal{A}S \times 2^\mathcal{A}S$ is the acceptance condition. The elements of $\mathcal{A}Acc$ are called \emph{Rabin pairs}. A word $w = a_0a_1a_2 \ldots \in \Sigma^\omega$ is accepted by $\mathcal{A}$ if the unique run $\mathcal{A}Init q_1 q_2 \ldots$ of $\mathcal{A}$ on $w$ satisfies the following condition: there exists a Rabin pair $(E,F) \in \mathcal{A}Acc$ such that $q_i \in E$ for infinitely many $i \in \mathbb{N}$ and $q_i \in F$ for only finitely many $i \in \mathbb{N}$. It is well known that \acron{DRA}{}s recognize exactly the $\omega$-regular languages (see e.g.~\cite{Baier2008}). The \emph{Rabin index} of an $\omega$-regular language $L$ is the minimal number of Rabin pairs of the \acron{DRA}{}s that recognize $L$.
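For intuition, Rabin acceptance of an ultimately periodic word $u v^\omega$ can be decided by inspecting the states traversed along the $v$-cycle. A small sketch of our own (the two-state automaton in the usage example is hypothetical, not taken from the paper):

```python
def dra_accepts_lasso(trans, init, pairs, u, v):
    """Decide acceptance of the ultimately periodic word u·v^ω by a DRA
    with transition dict trans[(state, letter)] -> state and Rabin pairs
    (E, F): some pair must have E visited infinitely often and F only
    finitely often, i.e. the v-cycle meets E and avoids F.
    The period v is assumed non-empty."""
    q = init
    for a in u:
        q = trans[(q, a)]
    seen = set()
    while q not in seen:          # iterate v until a v-block start repeats
        seen.add(q)
        for a in v:
            q = trans[(q, a)]
    cycle, p = set(), q
    while True:                   # collect the states visited infinitely often
        for a in v:
            p = trans[(p, a)]
            cycle.add(p)
        if p == q:
            break
    return any(cycle & set(E) and not cycle & set(F) for E, F in pairs)

# Hypothetical two-state DRA over {+,-} remembering the last letter read;
# the pair ({1}, {0}) accepts words with finitely many '+'.
trans = {(s, a): (0 if a == '+' else 1) for s in (0, 1) for a in '+-'}
print(dra_accepts_lasso(trans, 0, [({1}, {0})], "+", "-"),
      dra_accepts_lasso(trans, 0, [({1}, {0})], "", "+-"))  # True False
```

Of course, the black-box setting of this paper is harder precisely because the sampled runs are not lasso-shaped and the chain's states are not visible; the sketch only illustrates the acceptance condition itself.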
\newcommand{\bm{e}}{\bm{e}}
\newcommand{\bm{f}}{\bm{f}}
\begin{definition}
Let $k \geq 1$, and let $M_k = \{\bm{e}_1, \ldots, \bm{e}_k, \bm{f}_1, \ldots, \bm{f}_k\}$ be a set of \emph{markers}. The \emph{Rabin language} $\mathcal{R}_k \subseteq (2^{M_k})^\omega$ is the language of all words $w = \alpha_0\alpha_1 \cdots \in (2^{M_k}) ^\omega$ satisfying the following property: there exists $1 \leq j \leq k$ such that $\bm{e}_j\in \alpha_i$ for infinitely many $i \geq 0$, and $\bm{f}_j \in \alpha_i$ for at most finitely many $i \geq 0$.
\end{definition}
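Membership of an ultimately periodic word in $\mathcal{R}_k$ depends only on its loop: $\bm{e}_j$ occurs infinitely often if{}f it occurs in some loop letter, and $\bm{f}_j$ occurs finitely often if{}f it occurs in no loop letter. A small sketch (the marker strings 'e1', 'f1', \ldots are our own encoding):

```python
def in_rabin_language(k, loop):
    """Decide whether stem.loop^omega belongs to R_k, for any stem.
    loop is a list of letters, each letter a set of marker names."""
    for j in range(1, k + 1):
        ej_infinitely_often = any(f'e{j}' in letter for letter in loop)
        fj_finitely_often = all(f'f{j}' not in letter for letter in loop)
        if ej_infinitely_often and fj_finitely_often:
            return True
    return False
```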
We show that the black-box testing problem for languages of Rabin index $k$ can be reduced to the black-box testing problem for $\mathcal{R}_k$.
\begin{restatable}{lemma}{rabinlemma}
\label{lem:rabin}
There is an algorithm that, given an $\omega$-regular language $L \subseteq \Sigma^\omega$ of Rabin index $k$ and a testing strategy $\sigma_k$ for $\mathcal{R}_k$, effectively constructs a testing strategy $\sigma_L$ for $L$.
\end{restatable}
\begin{proof}
(Sketch, full proof in the Appendix.) Let $\mathcal{A} = (\mathcal{A}S, \Sigma, \mathcal{A}Tr, \mathcal{A}Init, \mathcal{A}Acc)$ be a \acron{DRA}{} recognizing $L \subseteq \Sigma^\omega$ with acceptance condition $\mathcal{A}Acc=\{(E_1, F_1), \ldots, (E_k, F_k)\}$, i.e., $\mathcal{A}Acc$ contains $k$ Rabin pairs. Let $\sigma_k$ be a black-box strategy for the Rabin language $\mathcal{R}_k$. We construct a black-box strategy $\sigma_L$ for $L$.
Let $w = \ell_1 a_1 \ell_2 \cdots \ell_{n-1} a_n \ell_n \in (\Sigma \times \{\mathsf{r}, \mathsf{c}\})^* \Sigma$. We define the action $\sigma_L(w)$ as follows. Let $\mathcal{A}Init q_1 \ldots q_n$ be the unique run of $\mathcal{A}$ on the word $\ell_1\ell_2 \ldots \ell_n \in \Sigma^*$. Define $v = \ell_1' a_1 \ell_2' \cdots \ell_{n-1}' a_n \ell_n' \in (2^{M_k} \times \{\mathsf{r}, \mathsf{c}\})^* 2^{M_k}$ as the word given by: $\bm{e}_j \in \ell_i'$ if{}f $q_i \in E_j$, and $\bm{f}_j \in \ell_i'$ if{}f $q_i \in F_j$. (Intuitively, we mark with $\bm{e}_j$ the positions in the run at which the \acron{DRA}{} visits $E_j$, and with $\bm{f}_j$ the positions at which the \acron{DRA}{} visits $F_j$.) We set $\sigma_L(w) \coloneqq \sigma_k(v)$. We show in the Appendix that $\sigma_L$ is a black-box strategy for $L$.
\end{proof}
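The translation underlying the sketch can be prototyped as follows (hypothetical names; for readability we pass the strategy only the sequence of letters observed since the last restart, eliding the interleaved restart/continue actions):

```python
def make_sigma_L(trans, init, rabin_pairs, sigma_k):
    """Lift a strategy sigma_k for the Rabin language R_k to a strategy for
    the language of the DRA (trans, init, rabin_pairs): run the DRA on the
    observed letters, mark each position with the Rabin pairs it hits,
    and defer the restart/continue decision to sigma_k."""
    def sigma_L(observations):
        q, relabeled = init, []
        for a in observations:
            q = trans[(q, a)]
            letter = set()
            for j, (E, F) in enumerate(rabin_pairs, start=1):
                if q in E:
                    letter.add(f'e{j}')   # this position visits E_j
                if q in F:
                    letter.add(f'f{j}')   # this position visits F_j
            relabeled.append(frozenset(letter))
        return sigma_k(relabeled)
    return sigma_L
```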
\section{Black-box strategies for Rabin languages}
\label{sec:strategy}
\newcommand{\secondhalf}[1]{\textit{SecondHalf}({#1})}
\newcommand{\last}[1]{\textit{last}(#1)}
We describe a family of testing strategies for the Rabin languages $\{\mathcal{R}_k \mid k \geq 1 \}$.
In Section \ref{subsec:strategy} we describe our strategy in detail. In Section \ref{subsec:progress} we introduce the progress radius and the
progress probability, two parameters of a chain needed to prove correctness and necessary for quantitative analysis in Section \ref{sec:quantitative}. In Section \ref{subsec:correctness} we formally prove that our strategy works.
\subsection{The strategy}
\label{subsec:strategy}
Let $\mathcal{M}$ be a Markov chain with observations in $2^{M_k}$, and let $\pi = s_0 s_1 s_2 \cdots s_m$ be a finite path of $\mathcal{M}$. The \emph{length} of $\pi$ is $m$, and its last state, denoted $\last{\pi}$, is $s_m$.
The \emph{second half} of $\pi$ is the path $\secondhalf{\pi}\coloneqq s_{\lceil m / 2 \rceil} \ldots s_m$.
The \emph{concatenation} of $\pi$ and a finite path $\rho= r_0 r_1 \cdots r_l$ of $\mathcal{M}$ such that $s_m= r_0$ is the path $\pi \odot \rho \coloneqq s_0 s_1 s_2 \cdots s_m r_1 \cdots r_l$. A path
$\pi$ is \emph{$i$-good} if it has length $0$ or there are markers $\bm{e}_i, \bm{f}_i\in M_k$ such that some state $s$ of $\pi$ satisfies $\bm{e}_i \in \textit{Obs}(s)$ and no state $s$ of $\pi$ satisfies $\bm{f}_i \in \textit{Obs}(s)$.
Further, $\pi$ is \emph{good} if it is $i$-good for some $1 \leq i \leq k$.
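Goodness of a finite path is a simple scan over its observations; a sketch (we represent a path by the list of observation sets of its states, so a path of length $0$ has a single entry; marker strings are our own encoding):

```python
def is_good(path_obs, k):
    """A path is good if it has length 0 or is i-good for some 1 <= i <= k:
    some state observes e_i and no state observes f_i."""
    if len(path_obs) <= 1:              # length 0: a single state, no steps
        return True
    return any(
        any(f'e{i}' in obs for obs in path_obs) and
        all(f'f{i}' not in obs for obs in path_obs)
        for i in range(1, k + 1))
```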
The strategy $\mathfrak{S}[f]$, described in Figure \ref{fig:strat}, is parametrized by a function $f \colon \mathbb{N} \rightarrow \mathbb{N}$. The only requirement on $f$ is $\limsup_{n\to \infty} f(n)=\infty$.
\begin{figure}
\caption{Strategy $\mathfrak{S}[f]$.}
\label{alg:cstrategy}
\label{fig:strat}
\end{figure}
In words, after the $n$-th restart the strategy keeps sampling in blocks of $2 \cdot f(n)$ steps until
the \emph{second half} of the complete path sampled so far is bad, in which case it restarts. For example,
after the $n$-th restart the strategy samples a block $\pi_0 = \pi_{01} \odot \pi_{02}$, where $|\pi_{01}|=|\pi_{02}|= f(n)$, and checks whether $\pi_{02}$ is good; if not, it restarts, otherwise it samples a block $\pi_1 = \pi_{11} \odot \pi_{12}$ starting from $\last{\pi_{02}}$, and checks whether $\pi_{11} \odot \pi_{12}$ is good; if not, it restarts; if so it samples a block $\pi_2 = \pi_{21} \odot \pi_{22}$ starting from $\last{\pi_{12}}$, and checks whether $\pi_{12} \odot \pi_{21} \odot \pi_{22}$ is good, etc.
Intuitively, the growth of $f$ controls how the strategy prioritizes deep runs into the chain over quick restarts while the number of restarts increases.
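The sampling loop can be prototyped directly from this description. The following Monte-Carlo sketch is our own illustration, not the paper's implementation; the encodings of the chain (succ, obs) and the marker strings 'e1'/'f1' are assumptions:

```python
import random

def run_strategy(succ, obs, init, k, f, max_steps=4000, seed=0):
    """Monte-Carlo sketch of S[f]. succ[s]: list of (prob, next_state) pairs;
    obs[s]: set of markers observed at s; f: block-size function.
    Returns the number of restarts performed within a budget of max_steps."""
    rng = random.Random(seed)

    def step(s):                                  # sample one transition
        u, acc = rng.random(), 0.0
        for p, t in succ[s]:
            acc += p
            if u <= acc:
                return t
        return succ[s][-1][1]

    def good(path):                               # path: observations of states
        if len(path) <= 1:
            return True
        return any(any(f'e{i}' in o for o in path) and
                   all(f'f{i}' not in o for o in path)
                   for i in range(1, k + 1))

    n, steps, restarts = 1, 0, 0
    s, path = init, [obs[init]]
    while steps < max_steps:
        for _ in range(2 * f(n)):                 # sample one block of 2*f(n) steps
            s = step(s)
            path.append(obs[s])
            steps += 1
        if not good(path[len(path) // 2:]):       # second half of the path so far
            restarts += 1
            n += 1
            s, path = init, [obs[init]]           # restart from the initial state
    return restarts
```

On a one-state chain whose only state observes $\bm{e}_1$ the sketch never restarts, while on a one-state chain observing $\bm{f}_1$ it restarts after every block.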
In the rest of the paper we prove that our strategy is correct, and obtain optimal upper bounds on the number of steps to the last restart. These bounds are given in terms of two parameters of the chain: the progress radius and the progress probability. We introduce these parameters in Section \ref{subsec:progress}.
\subsection{Progress radius and progress probability}
\begin{figure}
\caption{\textit{Left:} illustration of the reachability and witness radii.}
\label{fig:rsandps}
\end{figure}
\label{subsec:progress}
We define the notion of progress radius and progress probability for a Markov chain $\mathcal{M}$ with $2^{M_k}$ as set of observations and such that $\Pr(\mathcal{R}_k) > 0$.
Intuitively, the progress radius is the smallest number of steps such that, from any state of the chain, executing this number of steps suffices to
``make progress'' toward producing a good run or a bad run. The progress probability gives a lower bound for the probability of the paths that make progress.
We define the notions only for the case $k=1$, which already contains all the important features. The definition for arbitrary $k$ is more technical, and is given in Appendix \ref{appendix:progress}.
\noindent\textbf{Good runs and good \acron{BSCC}{}s.} We extend the definition of good paths to good runs and good \acron{BSCC}{}s of a Markov chain.
A run $\rho=s_0s_1s_2\cdots$ is \emph{good} if $\bm{e}_1$ appears infinitely often in $\rho$ and $\bm{f}_1$ finitely often, and \emph{bad} otherwise.
So a run $\rho$ is good if{}f there exists a decomposition of $\rho$ into an infinite concatenation $\rho \coloneqq \pi_0 \odot \pi_1 \odot \pi_2 \odot \cdots$ of non-empty paths such that $\pi_1, \pi_2, \ldots$ are good. We let $P_\text{good}$ denote the probability of the good runs of $\mathcal{M}$.
A \acron{BSCC}{} of $\mathcal{M}$ is \emph{good} if it contains at least one state labeled by $\bm{e}_1$ and no state labeled by $\bm{f}_1$, and bad otherwise.
It is well-known that the runs of any finite-state Markov chain reach a \acron{BSCC}{} and visit all its states infinitely often w.p.1 \cite[Thm. 10.27]{Baier2008}.
It follows that good (resp. bad) runs eventually reach a good (resp. bad) \acron{BSCC}{} w.p.1.
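Classifying the \acron{BSCC}{}s is a purely graph-theoretic computation. A quadratic-time sketch via mutual reachability, shown for $k=1$ (adequate for small chains; Tarjan's algorithm would be the idiomatic choice for large ones; the encoding of succ and obs is an assumption):

```python
def bsccs(succ):
    """Bottom SCCs of a digraph. succ maps each state to its set of successors.
    An SCC is bottom iff no edge leaves it."""
    def reach(s):
        seen, stack = {s}, [s]
        while stack:
            for t in succ[stack.pop()]:
                if t not in seen:
                    seen.add(t)
                    stack.append(t)
        return seen
    R = {s: reach(s) for s in succ}
    # SCC of s: the states mutually reachable with s
    sccs = {frozenset(t for t in R[s] if s in R[t]) for s in succ}
    return [C for C in sccs if all(succ[s] <= C for s in C)]

def good_bsccs(succ, obs):
    """Good BSCCs for k = 1: contain a state labeled e1 and none labeled f1."""
    return [C for C in bsccs(succ)
            if any('e1' in obs[s] for s in C)
            and all('f1' not in obs[s] for s in C)]
```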
\noindent\textbf{Progress radius.} Intuitively, the progress radius $\mathbb{R}m$ is the smallest number of steps such that, for any state $s$, by conducting $\mathbb{R}m$ steps one can
``make progress'' toward producing a good run\,---\,by reaching a good \acron{BSCC}{} or, if already in one, by reaching a state with observation $\bm{e}_1$\,---\,or a bad run.
\begin{definition}[Good-reachability and good-witness radii]
Let $B_\gamma$ be the set of states of $\mathcal{M}$ that belong to good \acron{BSCC}{}s and let $S_\gamma$ be the set of states from which it is possible to reach $B_\gamma$,
and let $s \in S_\gamma$. A non-empty path $\pi$ starting at $s$ is a \emph{good progress path} if
\begin{itemize}
\item $s \in S_\gamma \setminus B_\gamma$, and $\pi$ ends at a state of $B_\gamma$; or
\item $s \in B_\gamma$, and $\pi$ ends at a state with observation $\bm{e}_1$.
\end{itemize}
The \emph{good-reachability radius} $r_\gamma$ is the maximum, taken over every $s \in S_\gamma \setminus B_\gamma$, of the length of a shortest good progress path for $s$.
The \emph{good-witness radius} $R_\gamma$ is the same maximum, but taken over every $s \in B_\gamma$.
\end{definition}
The bad-reachability and bad-witness radii, denoted $r_\beta$ and $R_\beta$, are defined analogously. Only the notion of progress path of a state $s \in B_\beta$
needs to be adapted. Loosely speaking, a bad \acron{BSCC}{} either contains no states with observation $\bm{e}_1$, or it contains some
state with observation $\bm{f}_1$. Accordingly, if no state of the \acron{BSCC}{} of $s$ has observation $\bm{e}_1$, then any non-empty path starting at $s$ is a progress path,
and otherwise a progress path of $s$ is a non-empty path starting at $s$ and ending at a state with observation $\bm{f}_1$.
We illustrate the definition of the reachability and witness radii in Figure \ref{fig:rsandps}.
We leave $r_\beta$, $R_\beta$, $p_\beta$, and $P_\beta$ undefined if the chain does not contain a bad \acron{BSCC}{}, and hence runs are good w.p.1.
\begin{definition}[Progress radius]
The \emph{progress radius} $\mathbb{R}m$ of $\mathcal{M}$ is the maximum of $r_\gamma$, $R_\gamma$, $r_\beta$, and $R_\beta$.
\end{definition}
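For $k=1$ the radii reduce to breadth-first computations of shortest non-empty paths; a sketch (our own helper names; B_gamma, the union of the good \acron{BSCC}{}s, is assumed precomputed):

```python
from collections import deque

def shortest_nonempty_path(succ, s, targets):
    """Length of a shortest path of length >= 1 from s into targets (BFS),
    or None if no such path exists. succ maps a state to its successor set."""
    dist, queue = {s: 0}, deque([s])
    while queue:
        u = queue.popleft()
        for v in succ[u]:
            if v in targets:
                return dist[u] + 1
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return None

def good_radii(succ, obs, B_gamma):
    """r_gamma: over states that can reach B_gamma but lie outside it, the
    longest shortest path into B_gamma. R_gamma: over states of B_gamma, the
    longest shortest non-empty path to a state observing e1 (case k = 1)."""
    sp = shortest_nonempty_path
    S_gamma = {s for s in succ
               if s in B_gamma or sp(succ, s, B_gamma) is not None}
    e1_states = {s for s in succ if 'e1' in obs[s]}
    r = max((sp(succ, s, B_gamma) for s in S_gamma - B_gamma), default=0)
    R = max((sp(succ, s, e1_states) for s in B_gamma), default=0)
    return r, R
```

On a three-state chain $s_\text{start} \to s_1 \leftrightarrow s_\text{goal}$ with only $s_\text{goal}$ observing $\bm{e}_1$, this yields $r_\gamma = 1$ (one step from $s_\text{start}$ into the \acron{BSCC}{}) and $R_\gamma = 2$ (from $s_\text{goal}$ back to itself via $s_1$).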
\noindent\textbf{Progress probability.} From any state of the Markov chain it is possible to ``make progress'' by executing a progress path of length $\mathbb{R}m$.
However, the probability of such paths varies from state to state. Intuitively, the progress probability gives a lower bound on the probability
of making progress.
\begin{definition}
Let $B_\gamma$ be the set of states of $\mathcal{M}$ that belong to good \acron{BSCC}{}s, let $S_\gamma$ be the set of states from which it is possible to reach $B_\gamma$,
and let $s \in S_\gamma$. The \emph{good-reachability probability} $p_\gamma$ is the minimum, taken over every $s \in S_\gamma\setminus B_\gamma$, of the probability
that a path with length $r_\gamma$ starting at $s$ contains a good progress path. The \emph{good-witness probability} $P_\gamma$ is the same minimum, but taken over every $s \in B_\gamma$ with paths of length $R_\gamma$.
The corresponding bad probabilities are defined analogously. The \emph{progress probability} $\mathbf{P}ax$ is the minimum of $p_\gamma, P_\gamma, p_\beta, P_\beta$.
\end{definition}
\newcommand{\mathbb{N}B}{\textit{NB}}
\subsection{Correctness proof}
\label{subsec:correctness}
We prove that the strategy $\mathfrak{S}[f]$ of Section \ref{subsec:strategy} is a valid testing strategy for arbitrary Markov chains $\mathcal{M}$.
First, we will give an upper bound on the probability that $\mathfrak{S}[f]$ restarts ``incorrectly'', i.e.~at a state $s\in S_\gamma$ from which a good \acron{BSCC}{} could still be reached.
\begin{restatable}{lemma}{ProbRestartGood}
\label{lemma:PRestartGood}
Let $\mathcal{M}$ be a Markov chain, and let $\mathsf{M_{r}}st{\mathfrak{S}[f]}$ be its associated Markov chain with
$\mathfrak{S}[f]$ as restart strategy. Let $\mathbb{N}B_{n}$ be the set of paths of $\mathsf{M_{r}}st{\mathfrak{S}[f]}$ that have at least $n-1$ restarts and only visit states in $S_\gamma$ after the $(n-1)$-th restart. We have:
\[
\Pr[\#\restact \ge n \mid \mathbb{N}B_n]\le 3(1-\mathbf{P}ax )^{\lfloor f(n)/\mathbb{R}m \rfloor-1}
\]
\end{restatable}
The technical proof of this lemma is in the Appendix. We give here the proof for a special case that illustrates most ideas.
Consider the Markov chain with labels in $2^{ \{\bm{e}_1, \bm{f}_1\} }$ at the top of Figure \ref{fig:examplemcrun}.
\begin{figure}
\caption{A Markov chain (left), and (a finite prefix of) one of its runs for $n=2$ and $f(n)=n$ (right). After $2$ steps, the run has reached a good \acron{BSCC}{}.}
\label{fig:examplemcrun}
\end{figure}
The labeling function is
$\textit{Obs}(s_\text{goal}) = \{ \bm{e}_1 \}$ and $\textit{Obs}(s) = \emptyset$ for all other states, and $\textit{Obs}(\rho) \in \mathcal{R}_1$ if{}f $\rho$ visits
$s_\text{goal}$ infinitely often.
The set $S_\gamma$ contains all states because $s_\text{goal}$ is reachable from every state. The only \acron{BSCC}{} is $\mathcal{B} = \{s_1,s_\text{goal}\}$, and it is a good \acron{BSCC}{}.
From the definitions of the parameters we obtain $r_\gamma = R_\gamma = 1$, $p_\gamma = p$ and $P_\gamma = q$. Further, since there are no bad \acron{BSCC}{}s, $r_\beta$ and $R_\beta$ are undefined, and so $\mathbb{R}m = 1$.
So for this Markov chain Lemma \ref{lemma:PRestartGood} states $\Pr[\#\restact \ge n \mid \mathbb{N}B_n]\le 3(1-\mathbf{P}ax )^{f(n)-1}$. Let us see why this is the case.
Let $\rho$ be a run of $\mathsf{M_{r}}st{\mathfrak{S}[f]}$ such that $\#\restact(\rho) \geq n$, i.e., $\rho$ has at least $n$ restarts. Since $S_\gamma$ contains all states, we have $\rho \in \mathbb{N}B_n$ if{}f $\#\restact(\rho) \geq n-1$. We consider three cases. In the definition of the cases we start counting steps immediately after the $(n-1)$-th restart, and denote by $\rho[a, b]$ the fragment of $\rho$ that starts immediately before step $a$, and ends immediately after step $b$.
\begin{itemize}
\item[(a)] After $f(n)$ steps, $\rho$ has not yet reached $\mathcal{B}$. \\
Then $\rho$ has stayed in $s_\text{start}$ for $f(n)$ consecutive steps, which, since $p=p_\gamma$, happens with probability at most $(1-p_\gamma)^{f(n)}$.
\item[(b)] After $f(n)$ steps, $\rho$ has already reached $\mathcal{B}$. Further, the $n$-th restart happens
immediately after step $2 f(n)$.\\
In this case, by the definition of the strategy, $\rho$ does not visit $s_\text{goal}$ during the interval $\rho[f(n)+1, 2f(n)]$ (the second half of $[0, 2f(n)]$). So $\rho$ stays in $s_1$ during the interval $\rho[f(n)+1, 2f(n)]$ which, since $\rho$ has already reached $\mathcal{B}$ by step $f(n)$, occurs with probability $(1 -P_\gamma)^{f(n)}$.
\item[(c)] After $f(n)$ steps, $\rho$ has already reached $\mathcal{B}$. Further, the $n$-th restart does not happen before step $2f(n)+1$. \\
Since the $n$-th restart happens at some point, and not before step $2f(n)+1$, by the definition of the strategy there is a smallest
number $k \geq f(n)$ such that $\rho$ does not visit $s_\text{goal}$ during the interval $\rho[k+1, 2k]$. Because we assume that the $n$-th restart happens after step $2f(n)$, we even have $k>f(n)$.
By the minimality of $k$, the run $\rho$ does visit $s_\text{goal}$ during the interval $\rho[k, 2k-2]$.
So $\rho$ moves to $s_\text{goal}$ at step $k$, and then stays in $s_1$ for $k$ steps. The probability of the runs that eventually move to $s_\text{goal}$ and then stay in $s_1$ for $k$ steps is $P_\gamma(1-P_\gamma)^k$.
\end{itemize}
Figure \ref{fig:examplemcrun} shows at the bottom an example of a run, and how the strategy handles it.
Since (a)-(c) are mutually exclusive events, $\Pr[\#\restact \ge n \mid \mathbb{N}B_n]$ is bounded by the sum of their probabilities, where in case (c) we sum over all possible values of $k$. This yields:
\begin{equation*}
\Pr[\#\restact \ge n \mid \mathbb{N}B_n] \le \; (1-p_\gamma)^{f(n)}+(1-P_\gamma)^{f(n)}
+ \sum_{k=f(n)+1}^\infty P_\gamma(1-P_\gamma)^k\le \; 3(1-\mathbf{P}ax )^{f(n)}.
\end{equation*}
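The final inequality can be double-checked numerically: the tail of case (c) is the geometric sum $\sum_{k=f(n)+1}^\infty P_\gamma(1-P_\gamma)^k = (1-P_\gamma)^{f(n)+1}$, and each of the three terms is at most $(1-\min(p_\gamma, P_\gamma))^{f(n)}$. A small sketch (hypothetical names):

```python
def three_case_bound(p_gamma, P_gamma, f_n):
    """Left- and right-hand side of the final inequality for the example chain:
    cases (a), (b), and the closed-form geometric tail of case (c)."""
    p_max = min(p_gamma, P_gamma)              # progress probability
    lhs = ((1 - p_gamma) ** f_n                # case (a)
           + (1 - P_gamma) ** f_n              # case (b)
           + (1 - P_gamma) ** (f_n + 1))       # case (c), summed over all k
    rhs = 3 * (1 - p_max) ** f_n
    return lhs, rhs
```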
The proof for arbitrary Markov chains given in the Appendix has the same structure, and in particular the same split into three different events.
Applying Lemma \ref{lemma:PRestartGood} we now easily obtain (see the Appendix for a detailed proof) an upper bound for the probability to restart an $n$-th time. Note that this bound captures the ``correct'' as well as the ``incorrect'' restarts:
\begin{restatable}[Restarting probability]{lemma}{ProbRestart}
\label{lemma:restartingProb}
Let $\mathcal{M}$ be a Markov chain, and let $\mathsf{M_{r}}st{\mathfrak{S}[f]}$ be its associated Markov chain with
$\mathfrak{S}[f]$ as restart strategy. The probability that a run restarts again after $n-1$ restarts satisfies:
\[
\Pr [\#\restact\ge n\mid \#\restact\ge n-1]\le 1- P_\text{good} \left( 1 - 3(1-\mathbf{P}ax )^{\lfloor f(n)/\mathbb{R}m \rfloor-1} \right)
\]
\end{restatable}
\begin{proof}
Let $\mathbb{N}B_{n}$ be the set of paths of $\mathsf{M_{r}}st{\mathfrak{S}[f]}$ that have at least $n-1$ restarts and only visit states in $S_\gamma$ after the $(n-1)$-th restart and $\overline{\mathbb{N}B}_n$ its complement. We have
\begin{align*}
\Pr [\#\restact\ge n\mid \#\restact\ge n-1] = & \Pr[\#\restact \ge n \mid \mathbb{N}B_{n}] \cdot \Pr[\mathbb{N}B_{n} \mid \#\restact \ge n-1] + \\
& \Pr[\#\restact \ge n \mid \overline{\mathbb{N}B}_n] \cdot \Pr[\overline{\mathbb{N}B}_n \mid \#\restact \ge n-1].
\end{align*}
Applying Lemma \ref{lemma:PRestartGood} and $\Pr[\#\restact \ge n \mid \overline{\mathbb{N}B}_n]\le 1$, we get
\begin{align*}
\Pr [\#\restact\ge n\mid \#\restact\ge n-1] \leq &\; \big( 3(1-\mathbf{P}ax )^{\lfloor f(n)/\mathbb{R}m \rfloor -1} \big) \Pr[\mathbb{N}B_{n} \mid \#\restact \ge n-1]\\
&+ \Pr[\overline{\mathbb{N}B}_n \mid \#\restact \ge n-1]
\end{align*}
W.p.1, good runs of $\mathcal{M}$ only visit states of $S_\gamma$ and so $\Pr[\mathbb{N}B_{n} \mid \#\restact \ge n-1] \ge P_\text{good}$ and thus $\Pr[\overline{\mathbb{N}B}_{n} \mid \#\restact \ge n-1] \le 1-P_\text{good}$, which completes the proof.
\end{proof}
Finally, we show that $\mathfrak{S}[f]$ is a correct testing strategy. Further, we show that the condition $\limsup_{n\to \infty} f(n)=\infty$ is not only sufficient, but also necessary.
The previous lemma gives an upper bound on the probability of a restart that, for increasing $f(n)$, drops below $1$. If $f(n)$ exceeds the corresponding threshold for infinitely many $n$,
it therefore suffices to show that the strategy $\mathfrak{S}[f]$ restarts every bad run:
\begin{theorem}
\label{theorem:testingstrategies}
$\mathfrak{S}[f]$ is a testing strategy for the Rabin language $\mathcal{R}_k$ if{}f the function $f$ satisfies $\limsup_{n\to \infty} f(n)=\infty$.
\end{theorem}
\begin{proof}
\noindent ($\Rightarrow$): We prove the contrapositive. If $\limsup_{n\to \infty} f(n) < \infty$ then there is a bound $b$ such that $f(n) \leq b$ for every $n \geq 0$. Consider a Markov chain over $2^{M_1}$ consisting of a path of $2b+2$ states, with the last state leading to itself with probability 1; the last state is labeled with $\bm{e}_1$, and no state is labeled with $\bm{f}_1$. Then the chain has a unique run that goes from the initial to the last state of the path and stays there forever, and its observation is a word of $\mathcal{R}_1$; therefore, $\Pr(\mathcal{R}_1)=1$. However, since each sampled block comprises $2 f(n) \leq 2b$ steps and the last state is only reached after $2b+1$ steps, $\mathfrak{S}[f]$ always restarts the chain before reaching the last state.
\noindent ($\Leftarrow$): By the previous lemma, we can bound the restart probability after $n-1$ restarts by $\smash{1-\left(1-3(1-\mathbf{P}ax )^{\lfloor f(n)/\mathbb{R}m \rfloor-1}\right)P_\text{good}}$.
Because $0<\mathbf{P}ax \le 1$ and $P_\text{good}>0$, for large enough $f(n)$ this bound is smaller than $1-P_\text{good}/2<1$. Because of $\limsup_{n\to \infty} f(n)=\infty$, the probability to restart the run another time is at most $1-P_\text{good}/2$ for infinitely many $n$, and hence the total number of restarts is finite with probability $1$. A bad run would enter a bad \acron{BSCC}{} $B$ w.p.1 and would then go on to visit all the $\bm{f}_i$ corresponding to $B$ infinitely often. Thus, $\mathfrak{S}[f]$ would restart this run; hence, when the strategy finally stops restarting, the run it follows is good.
\end{proof}
\section{Quantitative analysis}
\label{sec:quantitative}
The quality of a testing strategy is given by the expected number of steps until the last restart, because this is the overhead spent until a violation starts to be executed. As in \cite{EKKW21}, given a labeled Markov chain $\mathcal{M}$ and a testing strategy $\sigma$, we define the number of steps to the last restart as random variables over the Markov chain $\mathsf{M_{r}}st{\sigma}$:
\begin{definition}[$S(\rho)$ and $S_n(\rho)$]
Let $\rho$ be a run of $\mathsf{M_{r}}st{\sigma}$. We define: $S(\rho)$ is equal to $0$ if $\mathsf{r}$ does not occur in $\rho$; it is equal to the length of the longest prefix of $\rho$ ending in $\mathsf{r}$, if $\mathsf{r}$ occurs at least once and finitely often in $\rho$; and it is equal to $\infty$ otherwise. Further, for every $n \geq 1$ we define $S_n(\rho)$ to be equal to $0$ if $\mathsf{r}$ occurs less than $n$ times in $\rho$; and equal to the length of the segment between the $(n-1)$-th occurrence of $\mathsf{r}$ (or the beginning of $\rho$, if $n=1$) and the $n$-th occurrence of $\mathsf{r}$ otherwise.
\end{definition}
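These random variables can be read off a sampled prefix directly; a small sketch (our own encoding: a trace is a list in which 'r' marks a restart and any other entry counts as one step):

```python
def steps_to_last_restart(trace):
    """Return (S, [S_1, ..., S_m]) for a finite trace: S is the length of the
    longest prefix ending in a restart (0 if there is none), and S_n the number
    of steps between the (n-1)-th restart (or the start) and the n-th restart."""
    S, segments, current = 0, [], 0
    for position, symbol in enumerate(trace, start=1):
        if symbol == 'r':
            S = position
            segments.append(current)
            current = 0
        else:
            current += 1
    return S, segments
```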
In this section we investigate the dependence of $\mathbb E[S]$ on the function $f(n)$. A priori it is unclear whether $f(n)$ should grow quickly or slowly. Consider the case in which all \acron{BSCC}{}s of the chain, good or bad, have size 1, and a run eventually reaches a good \acron{BSCC}{} with probability $p$. In this case the strategy restarts the chain until a sample reaches a good \acron{BSCC}{} for the first time. If $f(n)$ grows fast, then after a few restarts, say $r$, every subsequent run reaches a \acron{BSCC}{} of the chain with large probability, and so the expected number of restarts is small, at most $r + (1/p)$. However, the number of steps executed during these few restarts is large, because $f(n)$ grows fast; indeed, the run after the penultimate restart alone already executes at least $2f(r + (1/p) -1)$ steps.
As a first step we show that $\mathbb E[S]= \infty$ holds for every function $f(n) \in 2^{\Omega(n)}$.
\begin{proposition}
Let $f \in 2^{\Omega(n)}$. Then there exists a Markov chain such that the testing strategy of Figure \ref{fig:strat} satisfies $\mathbb E(S)=\infty$.
\end{proposition}
\begin{proof}
Let $f$ be in $2^{\Omega(n)}$. Then there exists some positive integer $k>0$ such that we have
$\limsup_{n\to \infty} f(n)\cdot (1/2)^{n/k}>0.$
Consider a Markov chain with $P_\text{good}=1-(1/2)^{1/k}$. Then we have $
\mathbb E(S)=\sum^\infty_{n=0}\mathbb E (S_n\mid \#\restact\ge n-1)P(\#\restact\ge n-1).$
We have that $P(\#\restact\ge n-1\mid \#\restact\ge n-2)\ge 1-P_\text{good}$ because only good runs will not be restarted. We also have that $\mathbb E (S_n\mid \#\restact\ge n-1)\ge f(n)(1-P_\text{good})$ because of the same reason. Thus
\begin{align*}
\mathbb E[S]=&\sum^\infty_{n=0}\mathbb E (S_n\mid \#\restact\ge n-1)P(\#\restact\ge n-1)
\ge \sum^\infty_{n=0} f(n)(1-P_\text{good})\cdot (1-P_\text{good})^n
\end{align*}
and hence $\mathbb E[S]\ge (1/2)^{1/k}\displaystyle\sum^\infty_{n=0} f(n)(1/2)^{n/k}=\infty$.
\end{proof}
It follows that, if we limit ourselves to monotonic functions (which is no restriction in practice), we only need to consider functions $f(n)$ satisfying $f(n) \in \omega(1) \cap 2^{o(n)}$. In the rest of the section we study the strategies corresponding to polynomial functions $f(n) = n^c$ for $c \in \mathbb{N}_+$, and obtain an upper bound as a function of the parameters $\mathbb{R}m/\mathbf{P}ax $, $P_\text{good}$, and $c$. The study of subexponential but superpolynomial functions is beyond the scope of this paper.
\subsection{Quantitative analysis of strategies with $f(n) = n^c$}
We give an upper bound on $\mathbb E (S)$, the expected total number of steps before the last restart. Our starting point is Lemma \ref{lemma:restartingProb}, which bounds the probability to restart for the $n$-th time if $(n-1)$ restarts have already happened. When the number $n$ of restarts is small, the value of the right-hand side is above $1$, and so the bound is not useful. We first obtain a value $X$ such that after $X$ restarts the right-hand side drops below $1$.
\begin{lemma}
\label{lemma:ChoiceOfX}
Let $X = \sqrt[c]{\mathbb{R}m\left(2+\ln(1/6)/\ln(1-\mathbf{P}ax )\right)}$.
For all $n\ge X$, we have
\begin{equation*}
\Pr [\#\restact\ge n\mid \#\restact\ge n-1]\le 1-P_\text{good}/2
\end{equation*}
when restarting according to $\mathfrak{S}[n \mapsto n^c]$.
\end{lemma}
\begin{proof}
Follows immediately from Lemma \ref{lemma:restartingProb}, the fact that the restart probability decreases with $n$, the definition of $X$, and some calculations. We recall the statement of Lemma \ref{lemma:restartingProb}:
\[
\Pr [\#\restact\ge n\mid \#\restact\ge n-1]\le 1- P_\text{good} \left( 1 - 3(1-\mathbf{P}ax )^{\lfloor f(n)/\mathbb{R}m \rfloor-1} \right)
\]
Plugging in an $n\ge X$ validates the claim.
\end{proof}
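The threshold and the resulting bound are easy to evaluate; a numeric sketch of the lemma's claim (the variable names Rm, Pmax, and Pgood stand for the progress radius, the progress probability, and the probability of a good run):

```python
import math

def restart_bound(n, c, Rm, Pmax, Pgood):
    """Right-hand side of the restarting-probability lemma for f(n) = n**c."""
    return 1 - Pgood * (1 - 3 * (1 - Pmax) ** (math.floor(n ** c / Rm) - 1))

def X_threshold(c, Rm, Pmax):
    """The value X of the lemma: for n >= X the bound is at most 1 - Pgood/2."""
    return (Rm * (2 + math.log(1 / 6) / math.log(1 - Pmax))) ** (1 / c)
```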
We now derive a bound for $\mathbb E[S]$: By linearity of expectation, we have $\mathbb E[S]=\sum_{n=1}^\infty \mathbb E[S_n]$. We split the sum into two parts: for $n<X$, and for $n\ge X$. For $n<X$ we just approximate $\Pr[\#\restact\ge n-1]$ by $1$. For $n\ge X$ we can say more thanks to Lemma \ref{lemma:ChoiceOfX}:
\begin{align*}
\Pr[\#\restact\ge n-1]&=\Pr[\#\restact\ge n-1 | \#\restact\ge n-2] \cdots \Pr[\#\restact\ge X+1 | \#\restact\ge X]\cdot \Pr[\#\restact\ge X]\\
&\le \prod_{k=\lceil X\rceil}^n \Pr[\#\restact\ge k \mid \#\restact\ge k-1] \le \left(1-P_\text{good}/2\right)^{n-X}
\end{align*}
This yields:
\begin{align*}
\mathbb E[S]&=\sum_{n=0}^\infty \mathbb E[S_n\mid\#\restact \ge n-1]\Pr[\#\restact\ge n-1]\\
&\le \sum_{n=0}^{X}\mathbb E[S_n\mid\#\restact \ge n-1] + \sum_{n=X}^\infty \mathbb E[S_n\mid\#\restact \ge n-1]\cdot \left(1-P_\text{good}/2\right)^{n-X} \tag{1}
\end{align*}
It remains to bound the expected number of steps between two restarts $\mathbb E[S_n\mid\#\restact \ge n-1]$, which is done in
Lemma \ref{lemma:boundnsmall} below. The proof can be found in the Appendix. The proof first observes that the expected number of steps it takes to reach a good or a bad \acron{BSCC}{} is $r_\gamma/p_\gamma$ resp.\ $r_\beta/p_\beta$. Then we give a bound on the expected number of steps it takes to perform a progress path inside a bad \acron{BSCC}{} for the first time, or to not perform a progress path inside a good \acron{BSCC}{} for an entire second half of a run at some point after the $(n-1)$-st restart; the bound is also in terms of $\mathbb{R}m/\mathbf{P}ax $ and $\mathbb{R}m/\mathbf{P}ax (1 - P_\gamma)$. The term $2 f(n)$ comes from the fact that the strategy always executes at least $2 f(n)$ steps. The term $2 \mathbb{R}m$ is an artifact due to the ``granularity'' of the analysis, where we divide runs into blocks of $\mathbb{R}m$ steps.
\begin{restatable}[Expected number of steps in a fragment]{lemma}{stepsfragment}
\label{lemma:boundnsmall}
For the strategy $\mathfrak{S}[n \mapsto n^c]$ we have:
\begin{equation}
\mathbb E[S_n\mid\#\restact \ge n-1]\le 2 (\mathbb{R}m+f(n))+9\left(\frac{\mathbb{R}m}{\mathbf{P}ax (1-P_\gamma)}\right).
\end{equation}
\end{restatable}
\noindent Plugging Lemma \ref{lemma:boundnsmall} into (1), we finally obtain (see the Appendix):
\begin{restatable}[Expected number of total steps]{theorem}{totalsteps}
\label{theorem:totalsteps}
For the strategy $\mathfrak{S}[n \mapsto n^c]$ we have:
\begin{equation*}
\expected[S] \in \mathbb{O} \left((c+1)!\cdot 2^c\cdot \left(\frac{\mathbb{R}m}{\mathbf{P}ax }\right)^{1+1/c}+\frac{2^c(c+1)!}{P_\text{good}^{c+1}} + (c+1)!(2c)^{c+1}\right).
\end{equation*}
\end{restatable}
If we fix a value $c$, we obtain a much simpler statement:
\begin{corollary}
\label{cor:FinalBound}
For a fixed $c$, the strategy $\mathfrak{S}[n \mapsto n^c]$ satisfies:
\begin{equation*}
\expected[S] \in \mathbb{O} \left(\left(\frac{\mathbb{R}m}{\mathbf{P}ax }\right)^{1+1/c}+\frac{1}{P_\text{good}^{c+1}}\right).
\end{equation*}
\end{corollary}
Thus the bound on the total number of steps depends on two quantities, $\mathbb{R}m/\mathbf{P}ax$ and $P_\text{good}$. A small $c$ favours the effect of $\mathbb{R}m/\mathbf{P}ax$ on the bound, a larger $c$ the effect of $P_\text{good}$.
In Section \ref{sec:experiments} we will see that this closely matches the performance of the algorithms for different values of $c$ on synthetic Markov chains and on Markov chains from the \acron{PRISM} benchmark set.
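The trade-off can be made concrete by evaluating the two terms of the bound for sample parameters (constants dropped; Rm, Pmax, and Pgood again denote the progress radius, the progress probability, and the probability of a good run):

```python
def bound_terms(Rm, Pmax, Pgood, c):
    """The two leading terms of the corollary's bound for f(n) = n**c."""
    radius_term = (Rm / Pmax) ** (1 + 1 / c)
    restart_term = (1 / Pgood) ** (c + 1)
    return radius_term, restart_term
```

Increasing $c$ shrinks the first term (its exponent $1+1/c$ approaches $1$) but inflates the second; for instance, with Rm $=4$ and Pmax $=$ Pgood $=1/4$, both terms coincide at $c=2$.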
\subsection{Optimality of the strategy $f(n) = n^c$}
We will prove the following optimality guarantee for our strategies.
\begin{theorem}
For every $c \in \mathbb{N}_+$ there is a family of Markov chains such that our bound of Corollary \ref{cor:FinalBound} on $\mathfrak{S}[n\mapsto n^c]$ is asymptotically optimal, i.e., no other black-box testing strategy is in a better asymptotic complexity class.
\end{theorem}
This proves two points: first, our bounds cannot be substantially improved. Second, one necessarily needs information on $\frac{\mathbb{R}m}{\mathbf{P}ax }$ and $P_\text{good}$ to pick an optimal value for $c$; without any information every value is equally good.
\begin{proof}
Consider the family of Markov chains at the top of Figure \ref{MC:SyntheticPRm}.
We take an arbitrary $k>1$ and set $M=k^{c-1}$ and $p=q=1/k$. With this choice we have
$P_\text{good} = \mathbf{P}ax = 1/k$, and $\mathbb{R}m=k^{c-1}$. By Theorem \ref{theorem:totalsteps},
the strategy $\mathfrak{S}[n\mapsto n^c]$ satisfies $\mathbb E[S] \in \mathbb{O}((\mathbb{R}m/\mathbf{P}ax )^{1+1/c}+ (1/P_\text{good})^{c+1})=\mathbb{O} (k^{c+1})$.
We compare this with the optimal number of expected steps before the final restart.
Since runs that visit $s_\text{goal}$ at least once are good w.p.1, any optimal strategy
stops restarting exactly after the visit to $s_\text{goal}$. We claim that every
such strategy satisfies $\mathbb E[S] \ge \mathbb{R}m/(P_\text{good} \mathbf{P}ax )(1-P_\text{good})$. For this, we make four observations. First, the probability of a good run is $P_\text{good}$. Second, the expected number of steps of a good run until the first visit to $s_\text{goal}$ is $\mathbb{R}m/ \mathbf{P}ax $. Third, the smallest number of steps required to distinguish a bad run, i.e.~being in the left \acron{BSCC}{}, from a good run is equal to $\mathbb{R}m$, because until $\mathbb{R}m$ steps are executed, all states visited carry the same label. Hence, every strategy takes $\mathbb{R}m/(P_\text{good} \mathbf{P}ax )$ steps on average before reaching the state $s_\text{goal}$ for the first time. Fourth, on average $1/P_\text{good}$ tries are required to have one try result in a good run. Hence, on average at least $\frac{1/P_\text{good}-1}{1/P_\text{good}}$ of the $\mathbb{R}m/(P_\text{good} \mathbf{P}ax )$ steps happen before the last restart. Since $\frac{1/P_\text{good}-1}{1/P_\text{good}}=(1-P_\text{good})$, this proves the claim. Now $\mathbb{R}m/(P_\text{good} \mathbf{P}ax )(1-P_\text{good})=k^{c+1}-k^c\in \Theta(k^{c+1})$ and we are done.
\end{proof}
\begin{comment}
\subsection{A technical lemma}
\begin{lemma}
\label{lemma:sumnsquaredinfinity}
\label{lemma:sumnsquared}
\label{lemma:verygoodlemma}
For $c,X\in \mathbb{Z}_{\ge 0}$ and $0<p<1$ we have that
\[
\sum^\infty_{n=X} n^c\cdot p^{n-X}\le (c+1)!\left(\frac{(X+c)^c}{1-p}+\frac{1}{(1-p)^{c+1}}\right).
\]
\end{lemma}
\begin{proof}
We prove the lemma by induction on $c$, starting with $c=0$. By the formula for the geometric series we have:
\[
\sum^\infty_{n=X} p^{n-X}=\sum^\infty_{i=0} p^i = \frac{1}{1-p}.
\]
This proves the induction base case.
Now assume we have proven
\[
S_{c-1}(X)\coloneqq \sum^\infty_{n=X} n^{c-1}\cdot p^{n-X}\le c!\left(\frac{(X+c-1)^{c-1}}{1-p}+\frac{1}{(1-p)^{c}}\right).
\]
Now consider
\[
S_c(X)\coloneqq \sum^\infty_{n=X} n^c\cdot p^{n-X}
\]
When multiplying by $(1-p)$ we get the following
\begin{align*}
(1-p)S_c(X)&=X^c+\sum^\infty_{n=X} \left((n+1)^c-n^c\right)\cdot p^{n-X}\\
&=X^c+\sum^\infty_{n=X} \left(\sum_{k=1}^{c} \binom{c}{k}n^{c-k}\right)p^{n-X}\\%\left(cn^{c-1}+\binom{c}{2}n^{c-1}+1ots+\binom{c}{c}n^0\right)\cdot p^{n-X}\\
&\le X^c+\sum^\infty_{n=X}c\left(\sum_{k=0}^{c-1}\binom{c-1}{k}n^{c-k}\right)p^{n-X}\\
&= X^c+\sum^\infty_{n=X}c(n+1)^{c-1}p^{n-X}\\
&= X^c+ c\cdot \sum^\infty_{n=X+1}n^{c-1}p^{n-X-1}\\
&=X^c+ cS_{c-1}(X+1)\\
&\le X^c +c \cdot c!\left(\frac{(X+c)^{c-1}}{1-p}+\frac{1}{(1-p)^{c}}\right)\\
S_c(X)&\le (c+1)!\left(\frac{(X+c)^c}{1-p}+\frac{1}{(1-p)^{c+1}}\right)
\end{align*}
This concludes the proof.
\end{proof}
\end{comment}
\section{Experiments}
\label{sec:experiments}
We report on experiments on three kinds of systems. First, we conduct experiments on two synthetic families of Markov chains. Second, we repeat the experiments of \cite{EKKW21} on models from the standard \acron{PRISM} Benchmark Suite \cite{DBLP:conf/qest/KwiatkowskaNP12} using our black-box strategies. Finally, we conduct experiments on population protocols from the benchmark suite of the Peregrine tool \cite{BlondinEJ18,EsparzaHJM20}.
\noindent\textbf{Synthetic Experiments.}
\begin{figure}
\caption{Two families of Markov chains. The initial state is $s_\text{start}$.}
\label{MC:SyntheticPRm}
\end{figure}
Consider the two (families of) labeled Markov chains at the top of Figure \ref{MC:SyntheticPRm}. The labels are
$a$ and $b$. In the top chain, state $s_{\textit{goal}}$ is labeled by $a$, all others by $b$. In the bottom chain, the states $s_2$ to $s_M$ and $s_\text{goal}$ are labeled by $a$, and $s_\text{start}$ and $s_\text{sink}$ by $b$. The language $L$ is the set of words containing infinitely many occurrences of $a$. In the top chain, at the initial state we go right or left with probability $q$ and $(1-q)$, respectively. Runs that go left are bad, and runs that go right are good w.p.1. It follows that $P_\text{good}=q$, $\mathbb{R}m = M$, and $\mathbf{P}ax = \min (p,q)$. In our experiments we fix $q = 1/2$. By controlling $M$ and $p$, we obtain chains with different values of $\mathbb{R}m $ and $\mathbf{P}ax $ for fixed $P_\text{good}=1/2$.
In the bottom chain, $R_\beta=R_\gamma =1$, $\mathbb{R}m =r_\gamma=M$, $p_\gamma=p^M$, $p_\beta=(1-p)$ and $\mathbf{P}ax =\min (p^M,1-p)$ and $P_\text{good}=p^M$.
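As a sanity check, the parameters of both families follow directly from $M$, $p$, and $q$. The Python sketch below (illustrative helper names, simply mirroring the formulas above) computes them:

```python
def top_chain_params(M, p, q):
    """Top family: a run is good (goes right at the start) with probability q."""
    P_good = q
    R_m = M              # progress radius
    P_max = min(p, q)    # progress probability
    return P_good, R_m, P_max

def bottom_chain_params(M, p):
    """Bottom family: the good BSCC is entered with probability p^M."""
    P_good = p ** M
    R_m = M
    P_max = min(p ** M, 1 - p)
    return P_good, R_m, P_max

# With q = 1/2 fixed, M and p control R_m and P_max for constant P_good:
print(top_chain_params(M=100, p=0.9, q=0.5))   # (0.5, 100, 0.5)
```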
\begin{figure}
\caption{
On the left, double-logarithmic plot of the expected total number of steps before the last restart $\mathbb E(S)$ for the chain at the top of Figure \ref{MC:SyntheticPRm}.}
\label{subfig:SynthExpRp}
\label{subfig:SynthExpPgood}
\label{fig:SynthExp}
\end{figure}
Recall that the bound obtained in the last section is $\expected[S] \le f(c)(\mathbb{R}m /\mathbf{P}ax )^{1+1/c}+g(c) (1 / P_\text{good})^{c+1}$
where $f(c)$ and $g(c)$ are fast-growing functions of $c$.
If $P_\text{good}$ and $\mathbb{R}m /\mathbf{P}ax $ are small, then $f(c)$ and $g(c)$ dominate the number of steps, and hence strategies with small $c$ should perform better. The data confirms this prediction.
Further, for fixed $P_\text{good}$, the bound predicts $\expected[S] \in O((\mathbb{R}m /\mathbf{P}ax )^{1+1/c})$, and so for growing $\mathbb{R}m /\mathbf{P}ax $ strategies with large $c$ should perform better. The left diagram confirms this. Also, the graphs become straight lines in the double logarithmic plot, confirming the predicted polynomial growth.
Finally, for $\mathbb{R}m /\mathbf{P}ax $ and $1/P_\text{good}$ growing roughly at the same speed as in the lower Markov chain, the bound predicts $\expected[S] \in O(1 / P_\text{good}^{c+1})$ for $c=2,3$ and $\expected[S] \in O(M^2/ P_\text{good}^{c+1})$ for $c=1$, and hence for growing $P_\text{good}$ and $\mathbb{R}m /\mathbf{P}ax $, strategies with small $c$ perform better. Again, the right diagram confirms this.
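The two scaling regimes can be illustrated numerically. The sketch below evaluates only the leading terms of the bound, dropping the constants $f(c)$ and $g(c)$, so it illustrates the asymptotics rather than the exact bound:

```python
def leading_terms(rm_over_pmax, inv_pgood, c):
    # Leading terms of E[S] <= f(c)*(Rm/Pmax)^(1+1/c) + g(c)*(1/P_good)^(c+1),
    # with the constants f(c) and g(c) omitted.
    return rm_over_pmax ** (1 + 1 / c) + inv_pgood ** (c + 1)

# Fixed P_good, growing Rm/Pmax: larger c gives the smaller leading term.
assert leading_terms(10**6, 2, c=3) < leading_terms(10**6, 2, c=1)

# Rm/Pmax and 1/P_good growing together: smaller c wins.
assert leading_terms(100, 100, c=1) < leading_terms(100, 100, c=3)
```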
\noindent\textbf{Experiments on the \acron{PRISM} Data set.}
We evaluate the performance of our black-box testing strategies for different values of $c$ on discrete time Markov chain benchmarks from the \textsc{\acron{PRISM}} Benchmark suite \cite{DBLP:conf/qest/KwiatkowskaNP12}, and compare them with the strategies of \cite{EKKW21} for fully observable systems. Table \ref{tableconcur} shows the results.
The properties checked are of the form $\mathbf{GF}$, $\mathbf{GF}\rightarrow \mathbf{FG}$, or their negations. We add a gridworld example\footnote{Unfortunately, the experimental setup of \cite{EKKW21} cannot be applied to this example \cite{We22}.} denoted $\overline{\mathtt{GW}}$, with larger values of the parameters, to increase the number of states to $\sim 5\cdot 10^8$; when trying to construct the corresponding Markov chain, Storm experienced a timeout. Runs are sampled using the simulator of the \textsc{Storm} Model Checker \cite{DBLP:journals/corr/abs-2002-07080} and the python extension \textsc{Stormpy}. We abort a run after $10^6$ steps without a restart (up to $3\cdot 10^7$ for the gridworld examples $\mathtt{gw}$, $\overline{\mathtt{gw}}$, and $\overline{\mathtt{GW}}$); the probability of another restart is then negligibly small.
The Cautious\textsubscript{10}- and the Bold\textsubscript{$0.1$}-strategy of \cite{EKKW21}
store the complete sequence of states observed, and so need memory linear in the length of the sample. Our strategies use at most a logarithmic amount of memory, at little or no cost in the number of steps to the last restart. Our strategies never time out and, surprisingly, often require \emph{fewer} steps than fully-observable ones. In particular, the strategies for fully observable systems cannot handle the $\overline{\mathtt{gw}}$ examples, and only the cautious strategy handles $\mathtt{gw}$. One reason for this difference is our strategies' ability to adapt to the size of the chain automatically, by increasing the values of $f(n)$ as $n$ grows. In two cases (nand and bluetooth) the strategies for fully observable systems perform better by a factor of $\sim 2$ to $\sim 3$.
In comparison to the improvement by a factor of $\sim 50$ in scale\textsubscript{10} and a factor of $\sim 90$ in gridworld of the newly presented black-box strategies over the whitebox strategies, this is negligible.
\begin{table}[h]
\begin{center}
\begin{tabular}{l|n{5}{0}H|n{6}{0}H|n{5}{0}H|n{5}{0}H|HHn{2}{0}H|n{6}{0}H|n{6}{0}H|n{7}{0}H|}\toprule
& \multicolumn{2}{l}{nand} & \multicolumn{2}{l}{bluetooth} & \multicolumn{2}{l}{scale\textsubscript{$10$}}&\multicolumn{2}{l}{crowds}&\multicolumn{2}{H}{Crowds}&\multicolumn{2}{l}{herman}&\multicolumn{2}{c}{\texttt{gw}}&\multicolumn{2}{c}{$\overline{\mathtt{gw}}$}&\multicolumn{1}{c}{$\overline{\mathtt{GW}}$}\\ \midrule
$\#$ states & \multicolumn{1}{c}{7$\cdot$10$^7$} & & 143291 & & 121 & & \multicolumn{1}{c}{1$\cdot$10$^7$}& & 524288 & & \multicolumn{1}{c}{5$\cdot$10$^5$}& & 309327 & & 309327 & & \multicolumn{1}{c}{5$\cdot$10$^8$} & \\
$c=1$& 31246 & 90 & 4428 & 14 & 116 &10&{\npboldmath} 44&1& {\npboldmath}437& 5&{\npboldmath}2& 1& 486&8 & 171219 &319 & 8082659&1207\\
$c=2$& 18827 & 18 & 4548 & 9 & {\npboldmath} 75&4&61&1& 937& 5&{\npboldmath}1& 1&404&4 & {\npboldmath}152127 & 51 & 4883449 & 122\\
$c=3$& 32777 & 10 & 7615 & 6 & 179&3&99&1& 6352& 5&{\npboldmath} 1& 1&{\npboldmath}293&3&579896&237&{\npboldmath}4252263 & 40\\ \midrule
Bold\textsubscript{$0.1$}& 10583 & 6 & 4637 & 4 & 14528 & 1 & 199 & 1 & & &{\npboldmath} 0 & 0& \multicolumn{1}{c}{\texttt{TO}}& \multicolumn{1}{H|}{\texttt{TO}}& \multicolumn{1}{c}{\texttt{TO}}& \multicolumn{1}{H|}{\texttt{TO}}&\multicolumn{1}{H}{-} & \\
Cautious\textsubscript{$10$} & {\npboldmath} 6900 & 5 & {\npboldmath} 2425 & 5 & 3670 &1 & 101 & 1 & & & \multicolumn{1}{c}{\texttt{TO}} & \multicolumn{1}{H|}{\texttt{TO}} & 26361 & \multicolumn{1}{H|}{0.9}& \multicolumn{1}{c}{\texttt{TO}}& \multicolumn{1}{H|}{\texttt{TO}}&\multicolumn{1}{H}{-} &
\\ \bottomrule
\end{tabular}
\end{center}
\caption{Average number of steps before the final restart, averaged over 300 (100 for Herman and $\overline{\mathtt{GW}}$) runs. Results for our strategies for $c=1,2,3$, and the bold and cautious strategies of \cite{EKKW21}.}
\label{tableconcur}
\end{table}
\begin{table}[h]
\begin{center}
\begin{tabular}{l|n{6}{0}l|n{4}{0}l|n{8}{0}l|n{8}{0}l|} \toprule
& \multicolumn{2}{c}{AvC\textsubscript{17,8}(faulty)} & \multicolumn{2}{c}{Maj\textsubscript{$\le 12$}(faulty)} & \multicolumn{2}{c}{AvC\textsubscript{17,8}} & \multicolumn{2}{|c}{Maj\textsubscript{5,6}} \\ \midrule
$c=1$ & 13645 & \texttt{ce} & 872 & \texttt{ce} & 126294 &\texttt{true} & 4264508 & \texttt{ge}\\
$c=2$ & 181746 & \texttt{ce} & 4763 &\texttt{ce} & 10485163 & \texttt{true} & 11878533 & \texttt{ge} \\
\midrule
Peregrine & & \texttt{TO} & & \texttt{ce} & & \texttt{TO} & & \texttt{true}
\\ \bottomrule
\end{tabular}
\end{center}
\caption{Testing population protocols with the strategies $\mathfrak{S}[n\mapsto n^c]$. Experiments were run 100 times, averaging the number of steps to the last restart with a restart threshold of 250 for Average and Conquer (AvC) and \numprint{10000} for the Majority Protocol.}
\label{tablepp}
\end{table}
\noindent\textbf{Experiments on Population Protocols.} Population protocols are consensus protocols in which a crowd of indistinguishable agents decide a property of their initial configuration by reaching a stable consensus \cite{AADFP06,BlondinEJ18}. The specification states that for each initial configuration the agents eventually reach the right consensus (property holds/does not hold). We have tested our strategies on several protocols from the benchmark suite of Peregrine, the state-of-the-art model checker for population protocols \cite{BlondinEJ18,EsparzaHJM20}. The first protocol
of Table \ref{tablepp} is faulty, but Peregrine cannot prove it; our strategy finds initial configurations for which the protocol exhibits a fault. For the second protocol both our strategies and Peregrine find faulty configurations. The third protocol is correct; Peregrine fails to prove it, and our strategies correctly fail to find counterexamples. The last protocol is correct, but in expectation consensus is reached only after an exponential number of steps in the parameters; we complement the specification, and search for a run that achieves consensus. Thanks to the logarithmic memory requirements, our strategies can run deep into the Markov chain and find the run.
\section{Conclusions}
We have studied the problem of testing partially observable stochastic systems against $\omega$-regular specifications in a black-box setting where testers can only restart the system, have no information on size or probabilities, and cannot observe the states of the system, only its outputs. We have shown that, despite these limitations, black-box testing strategies exist. We have obtained asymptotically optimal bounds on the number of steps to the last restart.
Surprisingly, our strategies never require many more steps than the strategies for fully observable systems of \cite{EKKW21}, and often even less. Sometimes, the improvement is by a large factor (up to $\sim 90$ in our experiments)
or the black-box strategies are able to solve instances where the strategies of \cite{EKKW21} time out.
\appendix
\section{Proofs of Section \ref{sec:blackboxtesting}}
\Rlemma*
\begin{proof}
Let $\mathcal{A} = (\mathcal{A}S, \Sigma, \mathcal{A}Tr, \mathcal{A}Init, \mathcal{A}Acc)$ be a \acron{DRA}{} recognizing $L \subseteq \Sigma^\omega$, and assume that $\mathcal{A}Acc=\{(E_1, F_1), \ldots, (E_k, F_k)\}$. Let $\sigma_k$ be a black-box strategy for the Rabin language $\mathcal{R}_k$. We construct a black-box strategy $\sigma_L$ for $L$.
Let $w = \ell_1 a_1 \ell_2 \cdots \ell_{n-1} a_n \ell_n \in (\Sigma \times \{\mathsf{r}, \mathsf{c}\})^* \Sigma$. We define the action $\sigma_L(w)$ as follows. Let $\mathcal{A}Init q_1 \ldots q_n$ be the unique run of $\mathcal{A}$ on the word $\ell_1\ell_2 \ldots \ell_n \in \Sigma^*$. Define $v = \ell_1' a_1 \ell_2' \cdots \ell_{n-1}' a_n \ell_n' \in (2^{M_k} \times \{\mathsf{r}, \mathsf{c}\})^* 2^{M_k}$ as the word given by: $\bm{e}_j \in \ell_i'$ if{}f $q_i \in E_j$, and $\bm{f}_j \in \ell_i'$ if{}f $q_i \in F_j$. (Intuitively, we mark with $\bm{e}_j$ the positions in the run at which the \acron{DRA}{} visits $E_j$, and with $\bm{f}_j$ the positions at which the \acron{DRA}{} visits $F_j$.) We set $\sigma_L(w) \coloneqq \sigma_k(v)$.
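The relabeling step of this construction is purely mechanical. The following Python sketch (hypothetical names; the \acron{DRA}{} is given by its transition table) computes the marker sequence of $v$ from the label sequence of $w$:

```python
def translate_labels(labels, dra_init, dra_trans, rabin_pairs):
    """Run the DRA on the label sequence and mark each position i with
    ('e', j) if q_i is in E_j and ('f', j) if q_i is in F_j."""
    q = dra_init
    out = []
    for ell in labels:
        q = dra_trans[(q, ell)]          # q_i: DRA state after reading ell_i
        marks = set()
        for j, (E, F) in enumerate(rabin_pairs):
            if q in E:
                marks.add(('e', j))
            if q in F:
                marks.add(('f', j))
        out.append(marks)
    return out

# DRA for "infinitely many a" over {a, b}: state 'qa' after reading a,
# state 'qb' after reading b; single Rabin pair (E = {'qa'}, F = {}).
trans = {('qa', 'a'): 'qa', ('qa', 'b'): 'qb',
         ('qb', 'a'): 'qa', ('qb', 'b'): 'qb'}
print(translate_labels(['a', 'b', 'a'], 'qb', trans, [({'qa'}, set())]))
# -> [{('e', 0)}, set(), {('e', 0)}]
```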
We claim that $\sigma_L$ is a black-box strategy for $L$.
To prove this claim, let $\mathcal{M} = (S, s_{in}, \Sigma, \textit{Obs}, \mathbf{P})$ be an arbitrary Markov chain with labels in $\Sigma$.
Define the product of $\mathcal{M}$ and $\mathcal{A}$ as the labeled Markov chain $\mathcal{M}dra =(S \times \mathcal{A}S, s_{in}', 2^{M_k}, \textit{Obs}', \mathbf{P}')$, where
\begin{itemize}
\item $s_{in}' = (s_{in}, \mathcal{A}Init)$;
\item $\bm{e}_j \in \textit{Obs}'(s,q)$ if{}f $q \in E_j$ and $\bm{f}_j \in \textit{Obs}'(s,q)$ if{}f $q \in F_j$;
\item $\mathbf{P}'((s,q),(s',q')) = \mathbf{P}(s,s')$ if $q'=\mathcal{A}Tr(q,\textit{Obs}(s))$ and $0$ otherwise.
\end{itemize}
Since $\mathcal{A}$ is deterministic, for every run $\pi =s_{in} s_1 s_2 \cdots$ of $\mathcal{M}$ there exists a unique run $\pi'=(s_{in} , \mathcal{A}Init) (s_1, q_1) (s_2, q_2) \cdots$ of $\mathcal{M}dra$, and the mapping that assigns $\pi'$ to $\pi$ is a bijection. By the definition of the accepting runs of $\mathcal{A}$, we have $\textit{Obs}(\pi) \in L$ if{}f $\mathcal{A}Init q_1 q_2 \cdots$ is an accepting run of $\mathcal{A}$ and, by the definition of the product, if{}f $\textit{Obs}'(\pi') \in \mathcal{R}_k$. Further, by the definition of $\mathbf{P}'$, we have $\Pr_{\mathcal{M}}(L) = \Pr_{\mathcal{M}dra}(\mathcal{R}_k)$.
Consider now the Markov chains $\mathsf{M}_r^{\sigma_k}$ and $(\mathsf{M} \otimes \mathcal{A})_r^{\sigma_L}$.
A run of $\mathsf{M}_r^{\sigma_k}$ can be seen as an infinite sequence $\Pi = \pi_0 \, r \, \pi_1 \, r \cdots$, where $\pi_0, \pi_1, \ldots$ are paths of $\mathcal{M}$ indicating that $\mathcal{M}$ is restarted after executing $\pi_0$, $\pi_1$, etc., and similarly for $(\mathsf{M} \otimes \mathcal{A})_r^{\sigma_L}$. (We omit the occurrences of the continue action $c$.) We extend the mapping above so that it assigns to $\Pi$ the run $\Pi' = \pi_0' \, r \, \pi_1' \, r \, \cdots$. We have that $\Pi$ is a run of $\mathsf{M}_r^{\sigma_k}$ satisfying $\tail{\Pi} \in L$ if{}f $\Pi'$ is a run of $(\mathsf{M} \otimes \mathcal{A})_r^{\sigma_L}$ satisfying $\tail{\Pi'} \in \mathcal{R}_k$. Further, since the probabilities of the transitions in the run coincide, $\sigma_L$ is a black-box strategy for $L$, and the claim is proved.
\end{proof}
\section{A general definition for the progress radius and probability}
\label{appendix:progress}
We will now restate the definitions of the progress radius and probability for Rabin languages with more than one Rabin pair. They differ only by some technicalities from the definitions given in Section \ref{subsec:progress}. For convenience, we have underlined all the differences.
\noindent\textbf{Good runs and good \acron{BSCC}{}s.} We extend the definition of good paths to good runs and good \acron{BSCC}{}s of a Markov chain.
A run $\rho=s_0s_1s_2\ldots$ is \emph{good} if \ul{there exists a Rabin pair $(\bm{e}_i, \bm{f}_i)$ such that $\bm{e}_i$ appears infinitely often in $\rho$ and $\bm{f}_i$ finitely often}, and \emph{bad} otherwise.
So a run $\rho$ is good if{}f there exists a decomposition of $\rho$ into an infinite concatenation $\rho \coloneqq \pi_0 \odot \pi_1 \odot \pi_2 \odot \cdots$ of non-empty paths \ul{such that there exists an $i$ with $1\le i\le k$ such that $\pi_1, \pi_2, \ldots$ are $i$-good}. We let $P_\text{good}$ denote the probability of the good runs of $\mathcal{M}$.
A \acron{BSCC}{} of $\mathcal{M}$ is \ul{$i$}-\emph{good} if it contains at least one state labeled by $\bm{e}_i$ and no state labeled by $\bm{f}_i$. If a \acron{BSCC}{} is not \ul{$i$-good for any $1\le i\le k$} we call it bad.
\begin{definition}[Good-reachability and good-witness radii]
Let $B_\gamma$ be the set of states of $\mathcal{M}$ that belong to good \acron{BSCC}{}s and let $S_\gamma$ be the set of states from which it is possible to reach $B_\gamma$
and let $s \in S_\gamma$. A non-empty path $\pi$ starting at $s$ is a \emph{good progress path} if
\begin{itemize}
\item $s \in S_\gamma \setminus B_\gamma$, and $\pi$ ends at a state of $B_\gamma$; or
\item $s \in B_\gamma$, and $\pi$ ends at a state with observation \ul{$\bm{e}_i$ and $s$ is in an $i$-good BSCC}.
\end{itemize}
The \emph{good-reachability radius} $r_\gamma$ is the maximum, taken over every $s \in S_\gamma \setminus B_\gamma$, of the length of a shortest progress path for $s$.
The \emph{good-witness radius} $R_\gamma$ is the same maximum, but taken over every $s \in B_\gamma$.
\end{definition}
The bad-reachability and bad-witness radii, denoted $r_\beta$ and $R_\beta$, are defined similarly. Only the notion of a progress path of a state $s \in B_\beta$
needs to be adapted. Loosely speaking, \ul{for every state with observation $\bm{e}_i$, a bad BSCC contains at least one state with observation $\bm{f}_i$.}
Accordingly, if no state of the \acron{BSCC}{} of $s$ has an observation $\bm{e}_i$ \ul{for any $i$}, then any non-empty path starting at $s$ is a progress path,
and otherwise a progress path of $s$ is a non-empty path starting at $s$ that, \ul{for every state with observation $\bm{e}_i$ in the BSCC of $s$, contains a state with observation $\bm{f}_i$.}
In other words, a progress path starting in a bad \acron{BSCC}{} $B$ visits states with \ul{all observations $\bm{f}_i$ that prevent $B$ from being a good BSCC.} Note that we leave the bad progress radii and probabilities undefined if the chain does not contain a bad \acron{BSCC}{}, and hence runs are good w.p.1.
\begin{definition}[Progress radius]
The \emph{progress radius} $\mathbb{R}m$ of $\mathcal{M}$ is the maximum of $r_\gamma$, $R_\gamma$, $r_\beta$, and $R_\beta$.
\end{definition}
\noindent\textbf{Progress probability.} The progress probability is now defined in the same way as it is done in the main part of the paper. From any state of the Markov chain it is possible to ``make progress'' by executing a progress path of length $\mathbb{R}m$.
However, the probability of such paths varies from state to state. Intuitively, the progress probability gives a lower bound on the probability
of making progress.
\begin{definition}
Let $B_\gamma$ be the set of states of $\mathcal{M}$ that belong to good \acron{BSCC}{}s, let $S_\gamma$ be the set of states from which it is possible to reach $B_\gamma$
and let $s \in S_\gamma$. The \emph{good-reachability probability} $p_\gamma$ is the minimum, taken over every $s \in S_\gamma\setminus B_\gamma$, of the probability
that a path with length $r_\gamma$ starting at $s$ contains a good progress path. The \emph{good-witness probability} $P_\gamma$ is the same minimum, but taken over every $s \in B_\gamma$ with paths of length $R_\gamma$.
The corresponding bad probabilities are defined analogously. The \emph{progress probability} $\mathbf{P}ax$ is the minimum of $p_\gamma, P_\gamma, p_\beta, P_\beta$.
\end{definition}
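For concreteness, the classification of good and bad \acron{BSCC}{}s underlying these definitions can be computed explicitly on a small chain. The sketch below is illustrative only (it uses brute-force reachability rather than an efficient SCC algorithm) and classifies the bottom SCCs of a chain given as an adjacency map:

```python
def reachable(adj):
    # reach[s]: all states reachable from s (including s), by fixpoint iteration
    reach = {s: {s} | set(adj[s]) for s in adj}
    changed = True
    while changed:
        changed = False
        for s in adj:
            new = set().union(*(reach[t] for t in reach[s]))
            if not new <= reach[s]:
                reach[s] |= new
                changed = True
    return reach

def bottom_sccs(adj):
    reach, seen, out = reachable(adj), set(), []
    for s in adj:
        if s in seen:
            continue
        scc = {t for t in reach[s] if s in reach[t]}
        seen |= scc
        if all(set(adj[t]) <= scc for t in scc):  # no edge leaves: bottom SCC
            out.append(frozenset(scc))
    return out

def classify(adj, obs, pairs):
    # obs: state -> set of markers; pairs: list of (e_i, f_i) marker pairs.
    # A bottom SCC is good iff it has an e_i-state and no f_i-state for some i.
    good, bad = [], []
    for B in bottom_sccs(adj):
        labels = set().union(*(obs[t] for t in B))
        if any(e in labels and f not in labels for e, f in pairs):
            good.append(B)
        else:
            bad.append(B)
    return good, bad

# Toy chain: 0 branches to a bad sink 1 and a good cycle {2, 3} with e-state 3.
adj = {0: [1, 2], 1: [1], 2: [3], 3: [2]}
obs = {0: set(), 1: set(), 2: set(), 3: {'e1'}}
print(classify(adj, obs, [('e1', 'f1')]))
# -> ([frozenset({2, 3})], [frozenset({1})])
```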
\section{Proofs of Section \ref{sec:strategy}}
In this section, we will give the technical proofs omitted in the main paper.
\ProbRestartGood*
\begin{proof}
If $\lfloor f(n)/\mathbb{R}m \rfloor \leq 1$ then the inequality holds trivially. So assume $f(n) \geq 2 \mathbb{R}m $.
Let $\rho \in \mathbb{N}B_n$. Observe that $\rho$ eventually reaches a \acron{BSCC}{} of $\mathcal{M}$ w.p.1 and, since $\rho$ only visits states of $S_\gamma$, that \acron{BSCC}{} is good. Let $\mathcal{B}$ be this \acron{BSCC}{}. Assume that $\#\restact(\rho) \geq n$. We consider the following cases, where we start counting steps immediately after the $(n-1)$-th restart and, for $a <b$, the path $[a, b]$ is the path that starts immediately before step $a$ and ends immediately after step $b$.
\begin{itemize}
\item After $2f(n)$ steps, $\rho$ has not yet reached $\mathcal{B}$. \\
By the definition of $p_\gamma$, this happens with probability at most $(1-p_\gamma)^{\lfloor f(n)/r_\gamma\rfloor}$.
\item After $2f(n)$ steps, $\rho$ has already reached $\mathcal{B}$. Further, the $n$-th restart happens
in the path $[2 f(n), 2(f(n)+R_\gamma)-1]$. \\
In this case, by the definition of $\mathfrak{S}[f]$, the second half of the last path sample
does not contain any state labelled with $\bm{e}_i$ such that the \acron{BSCC}{} is $i$-good. It follows that the path $[f(n)+R_\gamma+1, 2f(n)]$ does not visit $W_B$. By the definition of $P_\gamma$ and $R_\gamma$, this happens with probability at most $(1 -P_\gamma)^{\lfloor f(n)/R_\gamma\rfloor-1}$.
\item After $f(n)$ steps, $\rho$ has already reached $\mathcal{B}$. Further, the $n$-th restart happens
after the step $2f(n)+2R_\gamma-1$. \\
In this case we let $k \ge\lfloor f(n)/R_\gamma\rfloor+1$ be the smallest number such that the path $[(k+1)R_\gamma+1, 2kR_\gamma]$ does not contain any witness states, i.e.~states labelled with $\bm{e}_i$.
\end{itemize}
By the definition of $\mathfrak{S}[f]$, if $\rho$ restarts in the interval $[2lR_\gamma+1, 2(l+2)R_\gamma-1]$
and has reached a good \acron{BSCC}{} $\mathcal{B}$ in the first $f(n)$ steps, then it is covered by the third case for some $k$ with $\lfloor f(n)/R_\gamma\rfloor+1\le k \le l$. Because $k$ is the smallest $k$ satisfying this property, and we are not in the second case, the run performed a progress path of $\mathcal{B}$ between step $(l-1)R_\gamma+1$ and $lR_\gamma$, otherwise we would have already counted this case. Hence we can bound the sum of probabilities of the last two cases by $(1-P_\gamma)^{\lfloor f(n)/R_\gamma\rfloor-1}
+ \sum_{k=\lfloor f(n)/R_\gamma\rfloor-1}^\infty P_\gamma(1-P_\gamma)^k$.
So we get:
\begin{align*}
& \Pr[\#\restact \ge n \mid \mathbb{N}B_n] \\
\le & (1-p_\gamma)^{\lfloor f(n)/r_\gamma\rfloor}+(1-P_\gamma)^{\lfloor f(n)/R_\gamma\rfloor-1}
+ \sum_{k=\lfloor f(n)/R_\gamma\rfloor-1}^\infty P_\gamma(1-P_\gamma)^k\\
\le & 3(1-\mathbf{P}ax )^{\lfloor f(n)/\mathbb{R}m \rfloor-1}
\end{align*}
\end{proof}
\ProbRestart*
\begin{proof}
We have
\begin{align*}
\Pr [\#\restact\ge n\mid \#\restact\ge n-1] = & \Pr[\#\restact \ge n \mid \mathbb{N}B_{n}] \cdot \Pr[\mathbb{N}B_{n} \mid \#\restact \ge n-1] + \\
& \Pr[\#\restact \ge n \mid \overline{\mathbb{N}B}_n] \cdot \Pr[\overline{\mathbb{N}B}_n \mid \#\restact \ge n-1].
\end{align*}
Let $\alpha \coloneqq 3(1-\mathbf{P}ax )^{\lfloor f(n)/\mathbb{R}m \rfloor-1}$.
Applying Lemma \ref{lemma:PRestartGood} and $\Pr[\#\restact \ge n \mid \overline{\mathbb{N}B}_n]\le 1$, we get
\begin{align*}
& \Pr [\#\restact\ge n\mid \#\restact\ge n-1] \\
\leq \; & \alpha \Pr[\mathbb{N}B_{n} \mid \#\restact \ge n-1] +
\Pr[\overline{\mathbb{N}B}_n \mid \#\restact \ge n-1] \\
\leq \; & \alpha \Pr[\mathbb{N}B_{n} \mid \#\restact \ge n-1] +
(1 - \Pr[\mathbb{N}B_n \mid \#\restact \ge n-1]) \\
\leq \; & (\alpha -1) \Pr[\mathbb{N}B_{n} \mid \#\restact \ge n-1] +
1
\end{align*}
W.p.1, good runs of $\mathcal{M}$ only visit states of $S_\gamma$. (Indeed, if a good run visits some state outside $S_\gamma$, then the run can only reach a bad \acron{BSCC}{}. Since the run is good, the run cannot visit any \acron{BSCC}{} at all, which can only happen with probability $0$.) Hence, the probability to only visit states of $S_\gamma$ before restarting is at least $P_\text{good}$ for arbitrary strategies, i.e. $\Pr[\mathbb{N}B_{n} \mid \#\restact \ge n-1] \ge P_\text{good}$. It follows
\begin{align*}
\Pr [\#\restact\ge n\mid \#\restact\ge n-1] \le \; & (\alpha -1) P_\text{good} +1 = 1 - P_\text{good} ( 1 - \alpha)
\end{align*}
\end{proof}
\section{Proofs of Section \ref{sec:quantitative}}
We prove Lemma \ref{lemma:boundnsmall}. We need a technical result:
\begin{lemma}[A technical lemma]
\label{lemma:sumnsquaredinfinity}
\label{lemma:sumnsquared}
\label{lemma:verygoodlemma}
For $c,X\in \mathbb{Z}_{\ge 0}$ and $0<p<1$ we have that
\[
\sum^\infty_{n=X} n^c\cdot p^{n-X}\le (c+1)!\left(\frac{(X+c)^c}{1-p}+\frac{1}{(1-p)^{c+1}}\right).
\]
\end{lemma}
\begin{proof}
We will prove the lemma by induction on $c$, starting with $c=0$. In that case we have, by the formula for the geometric series:
\[
\sum^\infty_{n=X} p^{n-X}=\sum^\infty_{i=0} p^i = \frac{1}{1-p}.
\]
This proves the induction base case.
Now assume we have proven
\[
S_{c-1}(X)\coloneqq \sum^\infty_{n=X} n^{c-1}\cdot p^{n-X}\le c!\left(\frac{(X+c-1)^{c-1}}{1-p}+\frac{1}{(1-p)^{c}}\right).
\]
Now consider
\[
S_c(X)\coloneqq \sum^\infty_{n=X} n^c\cdot p^{n-X}
\]
When multiplying by $(1-p)$ we get the following
\begin{align*}
(1-p)S_c(X)&=X^c+\sum^\infty_{n=X} \left((n+1)^c-n^c\right)\cdot p^{n-X}\\
&=X^c+\sum^\infty_{n=X} \left(\sum_{k=1}^{c} \binom{c}{k}n^{c-k}\right)p^{n-X}\\%\left(cn^{c-1}+\binom{c}{2}n^{c-1}+1ots+\binom{c}{c}n^0\right)\cdot p^{n-X}\\
&\le X^c+\sum^\infty_{n=X}c\left(\sum_{k=0}^{c-1}\binom{c-1}{k}n^{c-k}\right)p^{n-X}\\
&= X^c+\sum^\infty_{n=X}c(n+1)^{c-1}p^{n-X}\\
&= X^c+ c\cdot \sum^\infty_{n=X+1}n^{c-1}p^{n-X-1}\\
&=X^c+ cS_{c-1}(X+1)\\
&\le X^c +c \cdot c!\left(\frac{(X+c)^{c-1}}{1-p}+\frac{1}{(1-p)^{c}}\right)\\
S_c(X)&\le (c+1)!\left(\frac{(X+c)^c}{1-p}+\frac{1}{(1-p)^{c+1}}\right)
\end{align*}
This concludes the proof.
\end{proof}
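The inequality can also be checked numerically. The snippet below compares truncated sums against the bound for a few parameter values (truncating at 5000 terms, where the geometric tail is negligible for the chosen values of $p$):

```python
import math

def lhs(c, X, p, terms=5000):
    # Truncated left-hand side: sum over n from X of n^c * p^(n-X)
    return sum(n ** c * p ** (n - X) for n in range(X, X + terms))

def rhs(c, X, p):
    # Right-hand side of the lemma: (c+1)! * ((X+c)^c/(1-p) + 1/(1-p)^(c+1))
    return math.factorial(c + 1) * ((X + c) ** c / (1 - p) + 1 / (1 - p) ** (c + 1))

for c in (0, 1, 2, 3):
    for X in (0, 1, 10):
        for p in (0.1, 0.5, 0.9):
            assert lhs(c, X, p) <= rhs(c, X, p)
```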
\stepsfragment*
\begin{proof}
\label{proof:stepsfragment}
We consider three cases:
\begin{itemize}
\item[(1)] The run $\rho$ gets restarted for the $n$-th time in at most $2f(n)+2\mathbb{R}m $ steps after the $(n-1)$-st restart. We can bound the expected number of steps in this case by $2f(n)+2\mathbb{R}m $.
\item[(2)] The run $\rho$ executes at least step $2f(n)+2\mathbb{R}m +1$ after the $(n-1)$-st restart without another restart, and only visits states in $S_\beta$. Then, the expected number of steps until a bad \acron{BSCC}{} $\mathcal{B}$ is reached is at most $r_\beta/p_\beta$. After another $r_\beta/p_\beta$ steps, the entire second half of the states visited since the last restart is contained in $\mathcal{B}$. Then, the expected number of steps required to perform a progress path of the \acron{BSCC}{} is $R_\beta/P_\beta$. After at most $2f(n)$ additional steps, the strategy restarts. Hence, an upper bound on the expected number of steps in this case is $2r_\beta/p_\beta+R_\beta/P_\beta+2f(n)$.
\item[(3)] The run $\rho$ reaches at least step $2f(n)+2\mathbb{R}m +1$ after the $(n-1)$-st restart without another restart, and only visits states in $S_\gamma$. In this case it takes on average at most $2r_\gamma/p_\gamma$ steps to reach a good \acron{BSCC}{}. After that, we divide the rest of the run into blocks of length $2R_\gamma$. Let $v$ be the number such that the restart happens between steps $2vR_\gamma$ and $2(v+1)R_\gamma$. Then we have:
\begin{itemize}
\item[(a)]$\rho$ visits an accepting state of the \acron{BSCC}{} between steps $(v-1)R_\gamma$ and $(v+1)R_\gamma$. \\
Otherwise the restart happens before step $2vR_\gamma$.
\item[(b)] $\rho$ does not visit accepting states of the \acron{BSCC}{} between step $(v+1)R_\gamma$ and $2vR_\gamma$. \\
Indeed, if we restart at step $2vR_\gamma$, then we have not visited an accepting state of the good \acron{BSCC}{} between step $vR_\gamma$ and $2vR_\gamma$. If we restart at step $2(v+1)R_\gamma$, the same applies to step $(v+1)R_\gamma$ and $2(v+1)R_\gamma$. For restarts between steps $2vR_\gamma$ and $2(v+1)R_\gamma$ a
corresponding in-between statement is true. In all these cases the run never visits an accepting state of the \acron{BSCC}{}
between step $(v+1)R_\gamma$ and $2vR_\gamma$.
\end{itemize}
The probability of (a) is at least $2P_\gamma$, and the probability of (b) is at most $(1-P_\gamma)^{v-1}$. But if (a) was not the case, we would have already counted it with probability $(1-P_\gamma)$ with at most $2R_\gamma$ steps less.
So the expected number of steps in the cases in which we execute at most $2(v+1)R_\gamma$ steps after the $(n-1)$-st restart for some $v$ is bounded by the sum, over all possible values of $v$, of $2P_\gamma (1-P_\gamma)^{v-1}$ multiplied by the number of steps:
\begin{equation*}
2f(n)+2\mathbb{R}m +\sum_{v=1}^\infty 4 R_\gamma (v+1)P_\gamma(1-P_\gamma)^{v-1}\le 2f(n)+2\mathbb{R}m +\frac{4R_\gamma}{P_\gamma(1-P_\gamma)}.
\end{equation*}
\end{itemize}
\noindent We obtain
\begin{align*}
E[S_n\mid \#\restact\ge n-1] \le \; & \big(2f(n)+2\mathbb{R}m \big) p_1 + \\
& \big(2r_\beta/p_\beta+R_\beta/P_\beta+2f(n)\big) p_2 + \\
& \Big(2r_\gamma/p_\gamma + \frac{4R_\gamma}{P_\gamma(1-P_\gamma)} +2f(n)+2R_\gamma\Big) (1- p_1 - p_2)
\end{align*}
where $p_1, p_2$ are the probabilities of (1) and (2), respectively. Using $p_1, p_2 \leq 1$ and simple arithmetic yields the generous bound:
\begin{equation*}
E[S_n\mid \#\restact\ge n-1]\le 2 (\mathbb{R}m+f(n))+9\left(\frac{\mathbb{R}m }{\mathbf{P}ax (1-P_\gamma)}\right)
\end{equation*}
\end{proof}
\totalsteps*
\begin{proof}
\label{proof:totalsteps}
By linearity of expectation, we have $\mathbb E[S]=\sum_{n=0}^\infty \mathbb E[S_n]$. The idea of the proof is to split the sum into two parts: for $n<X$, and for $n\ge X$. For $n<X$ we just approximate $\Pr[\#\restact\ge n-1]$ by $1$. For $n\ge X$ we can say more thanks to Lemma \ref{lemma:ChoiceOfX}:
\begin{align*}
\Pr[\#\restact\ge n-1]&=\Pr[\#\restact\ge n-1\mid \#\restact\ge n-2] \cdots \Pr[\#\restact\ge X+1\mid \#\restact\ge X]\cdot \Pr[\#\restact\ge X]\\
&\le \prod_{k=\lceil X\rceil}^n \Pr[\#\restact\ge k\mid \#\restact\ge k-1]\\
&\le \left(1-P_\text{good}/2\right)^{n-X}
\end{align*}
This yields:
\begin{align*}
E[S]&=\sum_{n=0}^\infty E[S_n\mid \#\restact\ge n-1]\Pr[\#\restact\ge n-1]\\
&\le \sum_{n=0}^XE[S_n\mid \#\restact\ge n-1]+
\sum_{n=X}^\infty E[S_n\mid \#\restact\ge n-1]\cdot \left(1-P_\text{good}/2\right)^{n-X}
\end{align*}
We bound the first summand applying Lemma \ref{lemma:boundnsmall}:
\begin{align*}
& \sum_{n=0}^X E[S_n\mid \#\restact\ge n-1] \\
\le & \sum^X_{n=0}2n^c+2\mathbb{R}m+9\left(\frac{\mathbb{R}m }{\mathbf{P}ax (1-P_\gamma)}\right)\le 2X^{c+1}+X\mathbb{R}m \left(2+9\frac{1}{\mathbf{P}ax (1-P_\gamma)}\right).
\end{align*}
Now bound the second summand, applying Lemma \ref{lemma:boundnsmall} again:
\begin{align*}
& \phantom{\le \; } \sum_{n=X}^\infty E[S_n\mid \#\restact\ge n-1]\cdot \left(1-P_\text{good}/2\right)^{n-X} \\
& \le \sum^\infty_{n=X}\left(2n^c+2\mathbb{R}m+9\left(\frac{\mathbb{R}m }{\mathbf{P}ax (1-P_\gamma)}\right)\right)\left(1-P_\text{good}/2\right)^{n-X}\\
&\le \frac{2 \mathbb{R}m }{P_\text{good}} \left( 2+9\frac{1}{\mathbf{P}ax (1-P_\gamma)}\right)+ 2\sum^\infty_{n=X} n^c \left(1-P_\text{good}/2\right)^{n-X}\\
&\le\frac{2 \mathbb{R}m }{P_\text{good}} \left( 2+9\frac{1}{\mathbf{P}ax (1-P_\gamma)}\right)+ 2(c+1)!\left(\frac{2(X+c)^c}{P_\text{good}}+\frac{2^{c+1}}{P_\text{good}^{c+1}}\right)
\end{align*}
where in the last step we used Lemma \ref{lemma:verygoodlemma}. Combining the bounds on both summands, we obtain
\begin{equation*}
\expected[S] \in O \left((c+1)!\cdot 2^c\cdot \left(\frac{\mathbb{R}m }{\mathbf{P}ax }\right)^{1+1/c}+\frac{2^c(c+1)!}{P_\text{good}^{c+1}} + (c+1)!(2c)^{c+1}\right)
\end{equation*}
For a fixed value of $c$, i.e., for the specific strategy $f(n)=n^c$, this bound simplifies to
\begin{equation*}
\expected[S] \in O \left(\left(\frac{\mathbb{R}m }{\mathbf{P}ax }\right)^{1+1/c}+\frac{1}{P_\text{good}^{c+1}}\right).
\end{equation*}
\end{proof}
\end{document}
\begin{document}
\title{\Large Upper bounds for the achromatic and coloring numbers of a graph\thanks{Research supported by NSFC (No. 11161046)}}
\newtheorem{theorem}{Theorem}[section]
\newtheorem{lemma}[theorem]{Lemma}
\newtheorem{corollary}[theorem]{Corollary}
\newtheorem{conjecture}[theorem]{Conjecture}
\theoremstyle{plain} \CJKtilde
\newcommand{\D}{\displaystyle}
\newcommand{\DF}[2]{\displaystyle\frac{#1}{#2}}
{\small \noindent{\bfseries Abstract:} Dvo\v{r}\'{a}k \emph{et al.}
introduced a variant of the Randi\'{c} index of a graph $G$, denoted
by $R'(G)$, where $R'(G)=\sum_{uv\in E(G)}\frac 1 {\max\{d(u),
d(v)\}}$, and $d(u)$ denotes the degree of a vertex $u$ in $G$.
The coloring number $col(G)$ of a graph $G$ is the smallest number
$k$ for which there exists a linear ordering of the vertices of $G$
such that each vertex is preceded by fewer than $k$ of its
neighbors. It is well-known that $\chi(G)\leq col(G)$ for any graph
$G$, where $\chi(G)$ denotes the chromatic number of $G$. In this
note, we show that for any graph $G$ without isolated vertices,
$col(G)\leq 2R'(G)$, with equality if and only if $G$ is obtained
from identifying the center of a star with a vertex of a complete
graph. This extends some known results. In addition, we present some
new spectral bounds for
the coloring and achromatic numbers of a graph. \\
\noindent {\bfseries Keywords}: Chromatic number; Coloring number; Achromatic number; Randi\'{c} index \\
\section{\large Introduction}
The Randi\'{c} index $R(G)$ of a (molecular) graph $G$ was
introduced by Milan Randi\'{c} \cite{R} in 1975 as the sum of
$1/\sqrt{d(u)d(v)}$ over all edges $uv$ of $G$, where $d(u)$ denotes
the degree of a vertex $u$ in $G$. Formally,
$$R(G)=\sum\limits_{uv\in E(G)}\frac{1}{\sqrt{d(u)d(v)}}.$$ This index
is useful in mathematical chemistry and has been extensively
studied, see \cite{LG}. For some recent results on the Randi\'{c} index,
we refer to \cite{DP, LLBL, LLPDLS, LS}.
A variation of the Randi\'{c} index of a graph $G$ is called the
Harmonic index, denoted by $H(G)$, which was defined in
\cite{Fajtlowicz} as follows: $$H(G)=\sum_{uv\in E(G)} \frac 2
{d(u)+d(v)}.$$ In 2011 Dvo\v{r}\'{a}k \emph{et al}. introduced another
variant of the Randi\'{c} index of a graph $G$, denoted by $R'(G)$, which has been further studied by Knor et al \cite{Knor}.
Formally,
$$R'(G)=\sum_{uv\in E(G)}\frac{ 1 }{\max\{d(u), d(v)\}}.$$
It is clear from the definitions that for a graph $G$,
\begin{equation} R'(G)\leq H(G)\leq R(G). \end{equation}
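To make inequality (1) concrete, the following Python sketch (an illustration of ours, not part of the original note; the function and variable names are hypothetical) computes $R'$, $H$ and $R$ from an edge list and checks $R'(G)\leq H(G)\leq R(G)$ on the star $K_{1,3}$ and the path $P_4$.

```python
from math import sqrt

def degree_indices(edges):
    # Vertex degrees computed from the edge list.
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    randic   = sum(1 / sqrt(deg[u] * deg[v]) for u, v in edges)  # R(G)
    harmonic = sum(2 / (deg[u] + deg[v]) for u, v in edges)      # H(G)
    r_prime  = sum(1 / max(deg[u], deg[v]) for u, v in edges)    # R'(G)
    return r_prime, harmonic, randic

star_k13 = [(0, 1), (0, 2), (0, 3)]  # K_{1,3}: R' = 1, H = 3/2, R = sqrt(3)
path_p4  = [(0, 1), (1, 2), (2, 3)]  # P_4:     R' = 3/2
for g in (star_k13, path_p4):
    rp, h, r = degree_indices(g)
    assert rp <= h <= r              # inequality (1)
```

For $K_{1,3}$ this reproduces $R'=1$, $H=3/2$, $R=\sqrt{3}$, matching the degree-based definitions above.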
The chromatic number of $G$, denoted by $\chi(G)$, is
the smallest number of colors needed to color all vertices of $G$
such that no pair of adjacent vertices is colored the same. As
usual, $\delta(G)$ and $\Delta(G)$ denote the minimum degree and the
maximum degree of $G$, respectively. The coloring number $col(G)$ of
a graph $G$ is the least integer $k$ such that $G$ has a vertex
ordering in which each vertex is preceded by fewer than $k$ of its
neighbors. The {\em degeneracy} of $G$, denoted by $deg(G)$, is defined
as $deg(G)=\max\{\delta(H): H\subseteq G\}$. It is well-known (see
Page 8 in \cite{JT}) that for any graph $G$,
\begin{equation}col(G)=deg(G)+1. \end{equation}
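The identity $col(G)=deg(G)+1$ can be checked computationally. The sketch below (our illustration; the names are hypothetical) computes $deg(G)$ by repeatedly deleting a minimum-degree vertex, which is also the vertex ordering used later in the proof of Theorem 1.4.

```python
def coloring_number(adj):
    """col(G) = deg(G) + 1, where deg(G) = max{delta(H) : H <= G} is
    obtained by repeatedly deleting a vertex of minimum degree."""
    adj = {v: set(nbrs) for v, nbrs in adj.items()}  # work on a copy
    degeneracy = 0
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))      # minimum-degree vertex
        degeneracy = max(degeneracy, len(adj[v]))
        for u in adj[v]:
            adj[u].discard(v)
        del adj[v]
    return degeneracy + 1

cycle_c5 = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
clique_k4 = {i: {j for j in range(4) if j != i} for i in range(4)}
assert coloring_number(cycle_c5) == 3   # C_5 is 2-degenerate
assert coloring_number(clique_k4) == 4  # col(K_n) = n
```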
List coloring is an extension of coloring of graphs, introduced by
Vizing \cite{V} and independently, by Erd\H{o}s et al. \cite{ERT}.
For each vertex $v$ of a graph $G$, let $L(v)$ denote a list of
colors assigned to $v$. A list coloring is a coloring $l$ of
vertices of $G$ such that $l(v)\in L(v)$ and $l(x)\neq l(y)$ for any
$xy\in E(G)$, where $v, x, y\in V(G)$. A graph $G$ is $k$-choosable
if for any list assignment $L$ to each vertex $v\in V(G)$ with
$|L(v)|\geq k$, there always exists a list coloring $l$ of $G$. The
list chromatic number $\chi_l(G)$ (or choice number) of $G$ is the
minimum $k$ for which $G$ is $k$-choosable.
It is well-known that for any graph $G$,
\begin{equation}\chi(G)\leq \chi_l(G)\leq col(G)\leq \Delta(G)+1.
\end{equation} Details of the inequalities in (3) can be found in a
survey paper by Tuza \cite{T} on list coloring.
In 2009, Hansen and Vukicevi\'{c} \cite{HV} established the
following relation between the Randi\'{c} index and the chromatic
number of a graph.
\begin{theorem}(Hansen and Vukicevi\'{c} \cite{HV}) Let $G$ be a
simple graph with chromatic number $\chi(G)$ and Randi\'{c} index
$R(G)$. Then $\chi(G)\leq2R(G)$ and equality holds if $G$ is a
complete graph,
possibly with some additional isolated vertices.\\
\end{theorem}
Some interesting extensions of Theorem 1.1 were recently obtained.
\begin{theorem} (Deng \emph{et al} \cite{Deng} ) For a graph $G$,
$\chi(G)\leq 2H(G)$ with equality if and only if $G$ is a complete
graph possibly with some additional isolated vertices.
\end{theorem}
\begin{theorem} (Wu, Yan and Yang \cite{wu14} ) If $G$ is a graph of order $n$ without isolated vertices, then $$col(G)\leq
2R(G),$$ with equality if and only if $G\cong K_n$.
\end{theorem}
Let $n$ and $k$ be two integers such that $n\geq k\geq 1$. We denote the graph obtained from identifying the center of the star $K_{1, n-k}$ with a vertex of the complete graph $K_k$ by $K_k\bullet K_{1,n-k}$. In particular, if $k\in\{1, 2\}$, $K_k\bullet K_{1,n-k}\cong K_{1,n-1}$; if $k=n$, $K_k\bullet K_{1,n-k}\cong K_n$.
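The equality case $2R'(K_k\bullet K_{1,n-k})=k$ (for $k\geq 2$) can be verified numerically; the following sketch (ours, with hypothetical helper names, not part of the note) builds the edge list of $K_k\bullet K_{1,n-k}$ and evaluates $2R'$.

```python
from math import isclose

def r_prime(edges):
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    return sum(1 / max(deg[u], deg[v]) for u, v in edges)

def k_bullet_star(n, k):
    """Edge list of K_k . K_{1,n-k}: a clique on {0,...,k-1} whose
    vertex k-1 is identified with the centre of a star on n-k leaves."""
    edges = [(i, j) for i in range(k) for j in range(i + 1, k)]
    edges += [(k - 1, k + t) for t in range(n - k)]
    return edges

# For k >= 2, the extremal graph of Theorem 1.4 satisfies 2R' = k = col.
for n, k in [(5, 3), (7, 4), (6, 6)]:
    assert isclose(2 * r_prime(k_bullet_star(n, k)), k)
```

The centre has degree $n-1$, so its $n-1$ incident edges contribute $1$ in total, and the remaining $\binom{k-1}{2}$ clique edges contribute $(k-2)/2$, giving $R'=k/2$.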
The primary aim of this note is to prove stronger versions of Theorems
1.1-1.3, noting the inequalities in (1).
\begin{theorem} For a graph $G$ of order $n$ without isolated vertices, $col(G)\leq
2R'(G)$, with equality if and only if $G\cong K_k\bullet K_{1,n-k}$ for some $k\in\{1, \ldots, n\}$.
\end{theorem}
\begin{corollary}
For a graph $G$ of order $n$ without isolated vertices, $\chi(G)\leq
2R'(G)$, with equality if and only if $G\cong K_k\bullet K_{1,n-k}$ for some $k\in\{1, \ldots, n\}$.
\end{corollary}
\begin{corollary}
For a graph $G$ of order $n$ without isolated vertices, $\chi_l(G)\leq
2R'(G)$, with equality if and only if $G\cong K_k\bullet K_{1,n-k}$ for some $k\in\{1, \ldots, n\}$.
\end{corollary}
\begin{corollary}
For a graph $G$ of order $n$ without isolated vertices, $col(G)\leq 2H(G)$, with equality if and
only if $G\cong K_n$.
\end{corollary}
The proofs of these results will be given in the next section.
A {\em complete $k$-coloring} of a graph $G$ is a
$k$-coloring of the graph such that for each pair of different
colors there are adjacent vertices with these colors. The {\it
achromatic number} of $G$, denoted by $\psi(G)$, is the maximum
number $k$ for which the graph has a complete $k$-coloring. Clearly,
$\chi(G)\leq \psi(G)$ for a graph $G$. In general,
$col(G)$ and $\psi(G)$ are incomparable. Tang \emph{et al.} \cite{tang15} proved that for a graph $G$,
$\psi(G)\leq 2R(G)$.
In Section 3, we prove new bounds for the coloring
and achromatic numbers of a graph in terms of its spectrum, which strengthen $col(G) \le 2R(G)$ and $\psi(G) \le 2R(G)$. In
Section 4, we provide an example and propose two related conjectures.
\section{\large The proofs}
For convenience, an edge $e$ of a graph $G$ may be viewed as a
2-element subset of $V(G)$ and if a vertex $v$ is an end vertex of
$e$, we denote the other end of $e$ by $e\setminus v$. Moreover,
$\partial_G(v)$ denotes the set of edges which are incident with $v$
in $G$.
First we need the following theorem, which will play a key role in the
proof of Theorem 1.4.
\begin{theorem} If $v$ is a vertex of
$G$ with $d(v)=\delta(G)$, then
$$R'(G)-R'(G-v)\geq 0,$$ with equality if and only if
$N_G(v)$ is an independent set of $G$ and
$d(w)<d(v_i)$ for all $v_i\in N_G(v)$ and all $w\in N(v_i)\setminus \{v\}$.
\end{theorem}
\begin{proof}
Let $k=d_G(v)$. The result is trivial for $k=0$. So, let $k>0$ and
let $N_G(v)=\{v_1, \cdots, v_k\}$ and $d_i=d_G(v_i)$ for each $i$.
Without loss of generality, we may assume that $d_1\geq \cdots \geq
d_k$. Let
$$E_1=\{e: e\in\partial_G(v_1)\setminus \{vv_1\}\ \text{such that}\
d_G(e\setminus v_1)<d_G(v_1)\ \text {and}\ e\not\subseteq N_G(v)
\},$$
$$E_1'=\{e: e\in\partial_G(v_1)\setminus \{vv_1\}\ \text{such that}\ \
d_G(e\setminus v_1)=d_G(v_1)\ \text {and}\ e\subseteq N_G(v)\},$$
and for an integer $i\geq 2$,
$$E_i=\{e: e\in\partial_G(v_i)\setminus \{vv_i\}\ \text{such that}\
d_G(e\setminus v_i)<d_G(v_i)\ \text {and}\ e\not\subseteq
N_G(v)\},$$
$$E_i'=\{e: e\in\partial_G(v_i)\setminus \{vv_i\}\ \text{such that}\ \
d_G(e\setminus v_i)=d_G(v_i)\ \text {and}\ e\subseteq N_G(v)\}
\setminus (\cup_{j=1}^{j=i-1} E_j').$$ Let $a_i=|E_i|$ and
$b_i=|E_i'|$ for any $i\geq 1 $. Since $a_i+b_i\leq d_i-1$,
$$R'(G)-R'(G-v)=\sum_{i=1}^k \frac 1 {d_i}-\sum_{i=1}^k \frac {a_i+b_i}
{d_i(d_i-1)}\geq 0,$$ with equality if and only if $a_i+b_i=d_i-1$
for all $i$, i.e., $N_G(v)$ is an independent set of $G$ and
$d(w)<d(v_i)$ for all $w\in N(v_i)\setminus \{v\}$.
\end{proof}
\begin{lemma}
If $T$ is a tree of order $n\geq 2$, then $R'(T)\geq 1$, with equality if and only if
$T=K_{1,n-1}$.
\end{lemma}
\begin{proof}
By induction on $n$. It can be easily checked that $R'(K_{1,n-1})=1$. Thus the assertion of the lemma holds for $n\in\{2, 3\}$, and we may assume that $n\geq 4$.
Let $v$ be a leaf of $T$. Then $T-v$ is a tree of order $n-1$. By Theorem 2.1, $R'(T)\geq R'(T-v)\geq 1$. If $R'(T)=1$, then $R'(T-v)=1$, and by the induction hypothesis, $T-v\cong K_{1, n-2}$. Let $u$ be the center of $T-v$.
We claim that $vu\in E(T)$. If not, then $v$ is adjacent to a leaf of $T-v$, say $x$, in $T$. Thus $d_T(x)=2$. However, since $R'(T)=R'(T-v)$ and $u\in N_T(x)$, Theorem 2.1 gives $d_T(x)>d_T(u)=n-2\geq 2$, a contradiction. This shows that $vu\in E(T)$ and $T\cong K_{1,n-1}$.
\end{proof}
{\noindent \bf The proof of Theorem 1.4:}
Let $v_1, v_2, \ldots, v_n$ be an ordering of all vertices of $G$
such that $v_i$ is a minimum degree vertex
of $G_i=G-\{v_{i+1}, \ldots, v_n\}$ for each $i\in \{1, \ldots,
n\}$, where $G_n=G$ and $d_G(v_n)=\delta(G)$. It is well known that $deg(G)=\max\{d_{G_i}(v_i):\ 1\leq
i\leq n\}$ (see Theorem 12 in \cite{JT}). Let $k$ be the maximum number such that
$deg(G)=d_{G_k}(v_k)$, and $n_k$ the order of $G_k$. By Theorem 2.1,
\begin{equation} 2R'(G)\geq 2R'(G_{n-1})\geq \cdots\geq
2R'(G_k).\end{equation} Moreover,
\begin{eqnarray*}
2R'(G_k)&=& \sum\limits_{uv\in
E(G_k)}\frac{2}{\max\{d_{G_k}(u),d_{G_k}(v)\}}\\
&\geq& \sum\limits_{uv\in
E(G_k)}\frac{2}{\Delta(G_k)}\\
&\geq& \frac{\Delta(G_k)+(n_k-1)\delta(G_k)}{\Delta(G_k)} \\
&\geq& \delta(G_k)+1\\
&=& col(G).
\end{eqnarray*}
It follows that \begin{equation} \text {if}\ 2R'(G_k)=\delta(G_k)+1,
\text { then}\ G_k\cong K_k.\end{equation}
Now assume that $col(G)=2R'(G)$. Observe
that $col(G)=\max\{col(C): C\ \text{is a component of}\ G\}$ and $R'(G)=\sum_{C} R'(C)$, where $C$ ranges over the components of $G$. Thus, by the assumption that $col(G)=2R'(G)$ and $G$ has no isolated vertices, $G$ is connected and $col(G)\geq 2$. If $col(G)=2$, then $G$ is a tree. By Lemma 2.2, $G\cong K_{1,n-1}$.
Next we assume that $col(G)\geq 3$. By (4) and (5) we have $R'(G)=\cdots=R'(G_k)$,
$G_k\cong K_k$ and thus $col(G)=k$. We show $G\cong K_k\bullet K_{1,n-k}$ by induction on $n-k$.
It is easy to check that $col(K_n)=n=2R'(K_n)$ and thus the result holds for $n-k=0$.
If $n-k=1$, then
$G_k=G-v_n$ and $R'(G_k)=R'(G)$, so by Theorem 2.1,
$N_G(v_n)$ is an independent set of $G$. Combining this with $G_k\cong K_k\ (k\geq 3)$, we get
$d_{G}(v_n)=1$. Thus $G=K_k\bullet K_{1,1}$.
Next assume that $n-k\geq 2$.
We consider $G_{n-1}$. Since $col(G_{n-1})=2R'(G_{n-1})$, by the induction hypothesis
$G_{n-1}\cong K_k\bullet K_{1,n-1-k}$. Without loss of generality, let $N_{G_{n-1}}(v_{k+1})=\cdots= N_{G_{n-1}}(v_{n-1})=\{v_k\}$. So, it remains to show that $N_G(v_n)=\{v_k\}$.
\noindent {\bf Claim 1.} $N_G(v_n)\cap \{v_{k+1}, \ldots, v_{n-1}\}=\emptyset$.
If not, then $\{v_{k+1}, \ldots, v_{n-1}\}\subseteq N_G(v_n)$, because $d_G(v_n)=\delta(G)$.
By Theorem 2.1, $d_G(v_{n-1})>d_G(v_k)$. But, in this case, $d_G(v_{n-1})=2<d_G(v_k)$, a contradiction.
By Claim 1, $N_G(v_n)\subseteq \{v_1, \ldots, v_k\}$.
Since $N_G(v_n)$ is an independent set and $\{v_1, \ldots, v_k\}$ is a clique of $G$, $|N_G(v_n)\cap \{v_1, \ldots, v_k\}|=1$.
If $N_G(v_n)=\{v_i\}$, where $i\neq k$, then $d_G(v_i)=k$. By Theorem 2.1, $d_G(v_i)>d_G(v_k)$.
However, $d_G(v_k)\geq n-2\geq k$, a contradiction. This shows $N_G(v_n)=\{v_k\}$ and hence $G\cong K_k\bullet K_{1,n-k}$.
It is straightforward to check that $$2R'(K_k\bullet K_{1,n-k})=2\times(\frac {k-2} 2+1)=k
=col(G).$$
{\noindent \bf The proofs of Corollaries 1.5 and 1.6:}
By (3) and Theorem 1.4, we have $\chi(G)\leq \chi_l(G)\leq 2R'(G)$. If $\chi(G)=2R'(G)$
(or $\chi_l(G)=2R'(G)$), then $col(G)=2R'(G)$. By Theorem 1.4, $G\cong K_k\bullet K_{1,n-k}$. On the other hand, it is easy to check that $$\chi(K_k\bullet K_{1,n-k})=\chi_l(K_k\bullet K_{1,n-k})=k=2R'(K_k\bullet K_{1,n-k}).$$
{\noindent \bf The proof of Corollary 1.7:}
By (1) and Theorem 1.4, we have $col(G)\leq 2H(G)$. If $col(G)=2H(G)$, then $col(G)=2R'(G)$.
By Theorem 1.4, $G\cong K_k\bullet K_{1,n-k}$. It can be checked that $k=2H(K_k\bullet K_{1,n-k})$ if and only if $k=n$, i.e., $G\cong K_n$.
\section{\large Spectral bounds}
\subsection{Definitions}
Let $\mu = \mu_1 \ge \cdots \ge \mu_n$ denote the eigenvalues of the adjacency matrix of $G$ and let $\pi, \nu$ and $\gamma$ denote the numbers
(counting multiplicities) of positive, negative and zero eigenvalues respectively. Then let
\[
s^+ = \sum_{i=1}^\pi \mu_i^2 \mbox{ and } s^- = \sum_{i=n-\nu+1}^n \mu_i^2.
\]
Note that $\sum_{i=1}^n \mu_i^2 = s^+ + s^- = tr(A^2) = 2m$.
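As a quick numerical sanity check (ours, not from the paper), the adjacency eigenvalues of the path $P_n$ are $2\cos(k\pi/(n+1))$ for $k=1,\ldots,n$, so $s^+$, $s^-$ and the identity $s^+ + s^- = 2m$ can be verified for $P_4$:

```python
from math import cos, pi, sqrt

# Adjacency eigenvalues of the path P_n: 2*cos(k*pi/(n+1)), k = 1..n.
n, m = 4, 3
mu = [2 * cos(k * pi / (n + 1)) for k in range(1, n + 1)]
s_plus  = sum(x * x for x in mu if x > 0)   # here s+ = mu_1^2 + mu_2^2 = 3
s_minus = sum(x * x for x in mu if x < 0)
assert abs((s_plus + s_minus) - 2 * m) < 1e-9   # tr(A^2) = 2m
bound = 2 * m / sqrt(s_plus)   # the quantity appearing in Theorems 3.1 and 3.3
assert bound <= 2 * m / max(mu) + 1e-9          # 2m/sqrt(s+) <= 2m/mu
```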
\subsection{Bounds for $\psi(G)$ and $col(G)$}
\begin{theorem}
For a graph G, $\psi(G) \le 2m/\sqrt{s^+} \le 2m/\mu \le 2R(G)$.
\end{theorem}
\begin{proof}
Ando and Lin \cite{ando} proved a conjecture due to Wocjan and Elphick \cite{wocjan} that $1 + s^+/s^- \le \chi$ and consequently $s^+ \le 2m(\chi -
1)/\chi$. It is clear that $\psi(\psi - 1) \le 2m$, since each of the $\binom{\psi}{2}$ pairs of colors requires its own edge. Therefore:
\[
s^+ \le \frac{2m(\chi - 1)}{\chi} \le \frac{2m(\psi -
1)}{\psi} \le \frac{4m^2}{\psi^2}.
\]
Taking square roots and re-arranging completes the first half of the
proof.
Favaron \emph{et al} \cite{favaron93} proved that $R(G) \ge m/\mu$.
Therefore $\psi(G) \le 2m/\mu \le 2R(G)$.
\end{proof}
Note that for regular graphs, $2m/\mu = 2R' = 2H = 2R = n$ whereas for almost all regular graphs $2m/\sqrt{s^+} < n$.
\begin{lemma}
For all graphs, $col(col - 1) \le 2m$.
\end{lemma}
\begin{proof}
As noted above, $s^+ \le 2m(\chi - 1)/\chi$ and $\chi(\chi - 1) \le 2m$.
Therefore:
\[
\sqrt{s^+}(\sqrt{s^+} + 1) = s^+ + \sqrt{s^+} \le \frac{2m(\chi - 1)}{\chi} +
\frac{2m}{\chi} = 2m.
\]
We can show that $deg(G) \le \mu(G)$ as follows.
\[
deg(G) = \max(\delta(H) : H \subseteq G) \le \max(\mu(H) : H
\subseteq G) \le \mu(G).
\]
Therefore $col(G) \le \mu + 1$, so:
\[
col(col - 1) \le \mu(\mu + 1) \le \sqrt{s^+}(\sqrt{s^+} + 1) \le 2m.
\]
\end{proof}
We can now prove the following theorem, using the same proof as for
Theorem 3.1.
\begin{theorem}
For a graph G, $col(G) \le 2m/\sqrt{s^+} \le 2m/\mu \le 2R(G)$.
\end{theorem}
\subsection{Bounds for $s^+$}
The proof of Lemma 3.2 shows that $s^+ + \sqrt{s^+} \le 2m$, from which it follows that:
\[
\sqrt{s^+} \le \frac{1}{2}(\sqrt{8m + 1} - 1).
\]
This strengthens Stanley's inequality \cite{stanley} that:
\[
\mu \le \frac{1}{2}(\sqrt{8m + 1} - 1).
\]
However, $\sqrt{2m - n + 1} \le (\sqrt{8m + 1} - 1)/2$, and Hong \cite{hong} proved for graphs with no isolated vertices that $\mu \le \sqrt{2m - n + 1}.$ Elphick \emph{et al} \cite{elphick} recently conjectured that for connected graphs $s^+ \le 2m - n + 1$, or equivalently that $s^- \ge n - 1$.
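As a numerical illustration of the strengthened Stanley inequality (our check, not part of the paper), take $P_4$ again, where $m=3$, $\mu=2\cos(\pi/5)$ and $s^+=3$:

```python
from math import cos, pi, sqrt

# For P_4: mu = 2cos(pi/5) ~ 1.618, s+ = 3, m = 3 (our sanity check).
m = 3
mu1 = 2 * cos(pi / 5)
s_plus = mu1 ** 2 + (2 * cos(2 * pi / 5)) ** 2
stanley = (sqrt(8 * m + 1) - 1) / 2        # Stanley's bound: here = 2
assert mu1 <= sqrt(s_plus) + 1e-9          # mu <= sqrt(s+)
assert sqrt(s_plus) <= stanley             # strengthened Stanley bound
```

Here $\mu \approx 1.618 < \sqrt{s^+} \approx 1.732 < 2$, so the chain $\mu \le \sqrt{s^+} \le (\sqrt{8m+1}-1)/2$ holds strictly.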
\section{\large Example and Conjectures}
A {\it Grundy $k$-coloring} of $G$ is a
$k$-coloring of $G$ such that each vertex is colored by the smallest
integer which has not appeared as a color of any of its neighbors.
The {\it Grundy number} $\Gamma(G)$ is the largest integer $k$, for
which there exists a Grundy $k$-coloring for $G$. It is clear that
for any graph $G$, \begin{equation} \Gamma(G)\leq \psi(G) \ \text{and}\ \chi(G)\leq \Gamma(G)\leq
\Delta(G)+1. \end{equation}
Note that each pair of $col(G)$ and $\psi(G)$, or $col(G)$ and $\Gamma(G)$, or $\psi(G)$ and $\Delta(G)$ is incomparable in general.
As an example of the bounds discussed in this paper, if $G=P_4$, then
$$\chi(G)=col(G)=2$$ $$\Gamma(G)=\psi(G)=\Delta(G)+1=2R'(G)=3$$
$$2H(G) = 3.67 \ \text{and}\ 2R(G) = 3.83$$
$$\mu_1(G) = 1.618 \ \text{and}\ \mu_2 = 0.618$$
$$2m/\mu_1(G) = 3.71 \ \text{and}\ 2m/\sqrt{s^+} = 3.46.$$
We believe the following conjectures to be true.
\begin{conjecture}
For any graph $G$, $\psi(G)\leq 2R'(G)$.
\end{conjecture}
In view of (6), a more tractable conjecture than the above is as follows.
\begin{conjecture}
For any graph $G$, $\Gamma(G)\leq 2R'(G)$.
\end{conjecture}
\end{document}
\begin{document}
\title{On the centeredness of saturated ideals}
\begin{abstract}
We show that Kunen's saturated ideal over $\aleph_1$ is not centered. We also evaluate the extent of saturation of Laver's saturated ideal in terms of $(\kappa,\lambda,<\nu)$-saturation.
\end{abstract}
\section{Introduction}
In \cite{MR495118}, Kunen established
\begin{thm}[Kunen~\cite{MR495118}]\label{kunentheorem}
Suppose that $j:V \to M$ is a huge embedding with critical point $\kappa$. Then there is a poset $P$ such that $P\ast\dot{S}(\kappa,j(\kappa))$ forces that $\aleph_1$ carries a saturated ideal.
\end{thm}
This theorem has been improved in some ways. One is due to Foreman and Laver \cite{MR925267}. They established
\begin{thm}[Foreman--Laver~\cite{MR925267}]
Suppose that $j:V \to M$ is a huge embedding with critical point $\kappa$. Then there is a poset $P$ such that $P\ast\dot{R}(\kappa,j(\kappa))$ forces that $\aleph_1$ carries a centered ideal.
\end{thm}
Centeredness is one of the strengthenings of saturation. See Section 2 for the definition of centered ideal. Foreman and Laver introduced the poset $R(\kappa,\lambda)$ to obtain the centeredness, while Kunen used the Silver collapse $S(\kappa,\lambda)$. In their paper, it is claimed without proof that the ideal in Theorem \ref{kunentheorem} is not centered. We aim to give a proof of this claim in greater generality. Indeed, we will show that
\begin{thm}\label{main}
Suppose that $j:V \to M$ is a huge embedding with critical point $\kappa$ and $f:\kappa \to \mathrm{Reg}\cap \kappa$ satisfies $j(f)(\kappa) \geq \kappa$. For regular cardinals $\mu < \kappa \leq \lambda = j(f)(\kappa) < j(\kappa)$, there is a $P$ such that $P\ast\dot{S}(\lambda,j(\kappa))$ forces $\mu^{+} = \kappa$ and $\lambda^{+} = j(\kappa)$ and $\mathcal{P}_{\kappa}(\lambda)$ carries a saturated ideal that is not centered.
\end{thm}
We can regard an ideal over $\mathcal{P}_{\kappa}\kappa$ as an ideal over $\kappa$. If we put $f = \mathrm{id}$, $\lambda = \kappa$ and $\mu = \aleph_0$, then $P$ and the ideal in Theorem \ref{main} are the same as those in Theorem \ref{kunentheorem}.
We also study Laver's saturated ideal. Laver introduced Laver collapse $L(\kappa,\lambda)$ to get a model in which $\aleph_1$ carries a strongly saturated ideal. He established
\begin{thm}[Laver~\cite{MR673792}]\label{lavertheorem}
Suppose that $j:V \to M$ is a huge embedding with critical point $\kappa$. Then there is a poset $P$ such that $P\ast\dot{L}(\kappa,j(\kappa))$ forces that $\aleph_1$ carries a saturated ideal.
\end{thm}
We do not know whether Laver's saturated ideal is centered, but we can determine the extent of saturation of this ideal in terms of $(\kappa,\lambda,<\nu)$-saturation.
\begin{thm}\label{main2}
Suppose that $j$ is a huge embedding with critical point $\kappa$ and $f:\kappa \to \mathrm{Reg}\cap \kappa$ satisfies $j(f)(\kappa) \geq \kappa$. For regular cardinals $\mu < \kappa \leq \lambda = j(f)(\kappa) < j(\kappa)$, there is a $P$ such that $P\ast\dot{L}(\lambda,j(\kappa))$ forces that $\mu^{+} = \kappa$, $\lambda^{+} = j(\kappa)$ and $\mathcal{P}_{\kappa}(\lambda)$ carries a strongly saturated ideal ${I}$, that is, ${I}$ is $(j(\kappa),j(\kappa),<\lambda)$-saturated. And ${I}$ is $(j(\kappa),\lambda,\lambda)$-saturated but not $(j(\kappa),j(\kappa),\lambda)$-saturated.
\end{thm}
The structure of this paper is as follows: In Section 2, we recall the basic facts of forcing. We also recall Silver collapses. Section 3 is devoted to the proof of Theorem \ref{main}. In Section 4, we recall Laver collapse and we give the proof of Theorem \ref{main2}.
\section{Preliminaries}
In this section, we recall some definitions. We use \cite{MR1994835} as a reference for set theory in general.
Our notation is standard. We use $\kappa,\lambda,\mu$ to denote a regular cardinal unless otherwise stated. We also use $\nu$ to denote a cardinal, possibly finite, unless otherwise stated. We write $[\kappa,\lambda)$ for the set of all ordinals between $\kappa$ and $\lambda$. By $\mathrm{Reg}$, we mean the set of all regular cardinals. For $\kappa < \lambda$, $E_{\geq \kappa}^{\lambda}$ and $E_{<\kappa}^{\lambda}$ denote the set of all ordinals below $\lambda$ of cofinality $\geq \kappa$ and $< \kappa$, respectively.
Throughout this paper, we identify a poset $P$ with its separative quotient. Thus, $p \leq q \leftrightarrow \forall r \leq p (r || q) \leftrightarrow p \Vdash q \in \dot{G}$, where $\dot{G}$ is the canonical name of $(V,P)$-generic filter.
We say that $P$ is $\kappa$-centered if there is a sequence $\langle C_{\alpha} \mid \alpha < \kappa \rangle$ of centered subsets with $P = \bigcup_{\alpha<\kappa}C_{\alpha}$. A centered subset is a set $C \subseteq P$ such that every $X\in [C]^{<\omega}$ has a lower bound in $P$. We call such a sequence a centering family of $P$. It is easy to see that $\kappa$-centeredness implies the $\kappa^{+}$-c.c.
We say that $P$ is well-met if $\prod X \in P$ for all $X \subseteq P$ such that $X$ has a lower bound. If $P$ is well-met, the $\kappa$-centeredness of $P$ is equivalent to the existence of $\kappa$-many filters that cover $P$. Note that every poset that we will deal with in this paper is well-met.
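As a simple illustration of these notions (our remark, not from the original paper), any poset of size at most $\kappa$ is $\kappa$-centered:

```latex
If $|P| \leq \kappa$, enumerate $P = \{p_{\alpha} \mid \alpha < \kappa\}$ and set
$C_{\alpha} = \{q \in P \mid p_{\alpha} \leq q\}$. Every finite $X \subseteq C_{\alpha}$
has the lower bound $p_{\alpha}$, so each $C_{\alpha}$ is centered, and
$P = \bigcup_{\alpha<\kappa} C_{\alpha}$ since $p_{\alpha} \in C_{\alpha}$. Hence
$\kappa$-centeredness is of interest only for posets of size greater than $\kappa$.
```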
For a filter $F\subseteq Q$, by $Q / F$, we mean the subset $\{q \in Q \mid \forall p \in F(p \parallel q)\}$ ordered by $\leq_Q$. For a given complete embedding $\tau:P \to Q$, $P \ast (Q / \tau ``\dot{G})$ is forcing equivalent with $Q$. We also write $Q / \dot{G}$ for $Q / \tau ``\dot{G}$ if ${\tau}$ is obvious from context. If the inclusion mapping $P \to Q$ is complete, we say that $P$ is a complete suborder of $Q$, denoted by $P \mathrel{\lessdot} Q$.
In this paper, by an ideal, we mean a normal and fine ideal. For an ideal $I$ over $\mathcal{P}_{\kappa}\lambda$, $\mathcal{P}(\mathcal{P}_{\kappa}\lambda)/ I$ is $\mathcal{P}(\mathcal{P}_{\kappa}\lambda) \setminus I$ ordered by $A \leq B \leftrightarrow A \setminus B \in I$. We say that $I$ is saturated and centered if $\mathcal{P}(\mathcal{P}_{\kappa}\lambda) / I$ has the $\lambda^{+}$-c.c. and $\mathcal{P}(\mathcal{P}_{\kappa}\lambda)/I$ is $\lambda$-centered, respectively.
The ideal $I$ over $\mathcal{P}_{\kappa}\lambda$ is $(\alpha,\beta,<\gamma)$-saturated if $\mathcal{P}(\mathcal{P}_{\kappa}\lambda)/ I$ has the $(\alpha,\beta,<\gamma)$-c.c. in the sense of Laver~\cite{MR673792}. Whenever $I$ is $(\lambda^{+},\lambda^{+},<\lambda)$-saturated, we say that $I$ is strongly saturated.
For a stationary subset $S \subseteq \lambda$, $P$ is $S$-layered if there is a sequence $\langle P_\alpha \mid \alpha < \lambda\rangle$ of complete suborders of $P$ such that $|P_{\alpha}| < \lambda$ and there is a club $C \subseteq \lambda$ such that $P_\alpha = \bigcup_{\beta < \alpha}P_{\beta}$ for all $\alpha \in S \cap C$. We say that $I$ is layered if $\mathcal{P}(\mathcal{P}_{\kappa}\lambda)/ I$ is $S$-layered for some stationary $S \subseteq E^{\lambda^{+}}_{\lambda}$.
For cardinals $\kappa < \lambda$ ($\lambda$ is not necessary regular), Silver collapse $S(\kappa,\lambda)$ is the set of all $p$ with the following properties:
\begin{itemize}
\item $p \in \prod_{\gamma \in [\kappa^{+},\lambda) \cap \mathrm{Reg}}^{\leq \kappa}{^{<\kappa}\gamma}$.
\item There is a $\xi < \kappa$ with $\forall \gamma \in \mathrm{dom}(p)(\mathrm{dom}(p(\gamma)) \subseteq \xi)$.
\end{itemize}
$S(\kappa,\lambda)$ is ordered by reverse inclusion. The following properties are well known.
\begin{lem}
\begin{enumerate}
\item $S(\kappa,\lambda)$ is $\kappa$-closed.
\item If $\lambda$ is inaccessible, then $S(\kappa,\lambda)$ has the $\lambda$-c.c. Therefore, $S(\kappa,\lambda) \Vdash \kappa^{+} = \lambda$.
\item If $\mu<\lambda$ then $S(\kappa,\mu) \mathrel{\lessdot} S(\kappa,\lambda)$.
\end{enumerate}
\end{lem}
The following lemma will be used in a proof of Theorem \ref{main}.
\begin{lem}\label{dispoint}
\begin{enumerate}
\item If $\mathrm{cf}(\delta)\leq\kappa$ then $S(\kappa,\delta)$ has an anti-chain of size $\delta^{+}$.
\item If $\mathrm{cf}(\delta)>\kappa$ and $\delta^{\kappa} = \delta$ then $S(\kappa,\delta)$ forces $(\delta^{+})^{V} \geq \kappa^{+}$.
\end{enumerate}
\end{lem}
\begin{proof}
(2) follows by standard cardinal arithmetic. Indeed, the assumption shows $|S(\kappa,\delta)| = \delta$. Therefore $S(\kappa,\delta)$ has the $\delta^{+}$-c.c. It remains to check (1). By $\mathrm{cf}(\delta) \leq \kappa$, we can fix an increasing sequence of regular cardinals $\langle \delta_{i} \mid i < \mathrm{cf}(\delta) \rangle$ which converges to $\delta$. For $f \in \prod_{i < \mathrm{cf}(\delta)} \delta_{i}$, define $p_f \in S(\kappa,\delta)$ by $\mathrm{dom}(p_f) = \{\delta_{i} \mid i < \mathrm{cf}(\delta)\}$ and $p_f(\delta_{i}) = \{\langle 0,f(i)\rangle\}$. It is easy to see that $f \not= g$ implies $p_f \perp p_g$. Therefore $\{p_f \mid f \in \prod_{i < \mathrm{cf}(\delta)}\delta_i\}$ is an anti-chain of size $\delta^{\mathrm{cf}(\delta)} \geq \delta^{+}$, as desired.
\end{proof}
\begin{rema}\label{levyremark}
A similar proof shows the following analogue of Lemma \ref{dispoint}.
\begin{enumerate}
\item If $\mathrm{cf}(\delta)<\kappa$ then $\mathrm{Coll}(\kappa,<\delta)$ has an anti-chain of size $\delta^{+}$.
\item If $\mathrm{cf}(\delta)\geq\kappa$ and $\delta^{<\kappa} = \delta$ then $\mathrm{Coll}(\kappa,<\delta)$ forces $(\delta^{+})^{V} \geq \kappa^{+}$.
\end{enumerate}
\end{rema}
\section{Proof of Theorem \ref{main}}
Let $j:V \to M$ be a huge embedding with critical point $\kappa$. Fix $\mu < \kappa$. We also fix $f:\kappa \to \kappa \cap \mathrm{Reg}$ with $\kappa \leq j(f)(\kappa) = \lambda$. We may assume that $f(\alpha) \geq \alpha$ for all $\alpha$.
Let $\langle P_{\alpha} \mid \alpha \leq \kappa\rangle$ be the $<\mu$-support iteration such that
\begin{itemize}
\item $P_{0} = S(\mu,\kappa)$.
\item $P_{\alpha + 1} = \begin{cases}P_{\alpha} \ast S^{P_\alpha \cap V_{\alpha}}(f(\alpha),\kappa) & \alpha\text{ is good}\\
P_{\alpha} & \text{otherwise}
\end{cases}$.
\end{itemize}
Here, we say that $\alpha$ is good if $P_{\alpha} \cap V_{\alpha} \mathrel{\lessdot} P_{\alpha}$, $P_{\alpha} \cap V_{\alpha}$ has the $\alpha$-c.c., $\alpha$ is inaccessible, and $\alpha \geq \mu$. The set $P_{\alpha} \ast S^{P_\alpha \cap V_{\alpha}}(f(\alpha),\kappa)$ is the set of all $\langle p,\dot{q}\rangle$ such that $p \in P_\alpha$ and $\dot{q}$ is a $P_{\alpha} \cap V_\alpha$-name for an element of $S^{P_\alpha \cap V_{\alpha}}(f(\alpha),\kappa)$.
Define $P = P_{\kappa}$. For every $p \in P$ and good $\alpha$, we may assume that $p(\alpha)$ is a $P_\alpha \cap V_{\alpha}$-name. This $P$ is called a universal collapse. $P$ has the following properties:
\begin{lem}\label{basicpropofunivcoll}
\begin{enumerate}
\item $P$ is $\mu$-directed closed and has the $(\kappa,\kappa,<\mu)$-c.c.
\item $P \subseteq V_{\kappa}$ and $P \Vdash \mu^{+} = \kappa$.
\item $\kappa$ is good for $j(P)$. In particular, $j(P)_{\kappa} \cap V_{\kappa} = P \mathrel{\lessdot} j(P)_{\kappa}$.
\item There is a complete embedding $\tau:P \ast \dot{S}(\lambda,j(\kappa)) \to j(P)_{\kappa +1}\lessdot j(P)$ such that $\tau(p,\emptyset) = p$ for all $p \in P$.
\end{enumerate}
\end{lem}
Note that $j(P)$ has the $j(\kappa)$-c.c. by the hugeness of $j$. By Lemma \ref{basicpropofunivcoll} (2), in the extension by $P \ast \dot{S}(\lambda,j(\kappa))$, $\mu^{+} = \kappa$, $\lambda^{+} = j(\kappa)$, and every cardinal between $\kappa$ and $\lambda$ is preserved.
Kunen proved Theorem \ref{Kunenideal} in the case of $\kappa = \lambda$ and $\mu = \omega$. More generally, we have the following.
\begin{thm}\label{Kunenideal}
$P \ast \dot{S}(\lambda,j(\kappa))$ forces that ${\mathcal{P}}_{\kappa}\lambda$ carries a saturated ideal $\dot{I}$.
\end{thm}
\begin{proof}
The same proof as in \cite[Section 7.7]{MR2768692} works.
\end{proof}
The ideal which we call ``Kunen's saturated ideal'' is this $\dot{I}$. Studying the saturation of $\dot{I}$ will be reduced to that of some quotient forcing. Indeed, Theorem \ref{main} follows from Lemmas \ref{fms} and \ref{rephrasedmain}. We only give a proof of Lemma \ref{rephrasedmain}. In \cite{MR942519}, Foreman, Magidor and Shelah proved Lemma \ref{fms} in the case of $\kappa = \lambda$.
\begin{lem}\label{fms}
$P \ast \dot{S}(\lambda,j(\kappa))$ forces ${\mathcal{P}}({\mathcal{P}}_{\kappa}\lambda) / \dot{I} \simeq j(P) / \dot{G} \ast \dot{H}$. Here, $\dot{G} \ast \dot{H}$ is the canonical name for generic filter.
\end{lem}
\begin{proof}
The same proof as in \cite[Claim 7]{MR942519} works.
\end{proof}
\begin{lem}\label{rephrasedmain}
$P \ast \dot{S}(\lambda,j(\kappa))$ forces $j(P) / \dot{G} \ast \dot{H}$ is not $\lambda$-centered.
\end{lem}
\begin{proof}
Note that $\{\alpha < j(\kappa) \mid \alpha$ is good$\}$ is unbounded in $j(\kappa)$. We fix a good $\alpha > \kappa$. It is enough to prove that $j(P)_{\alpha + 1} / \dot{G} \ast \dot{H}$ is not $\lambda$-centered in the extension.
We argue by contradiction. Suppose that some condition forces the existence of a centering family $\langle \dot{C}_{\xi} \mid \xi < \lambda \rangle$ of $j(P)_{\alpha + 1} / \dot{G} \ast \dot{H}$. We may assume that each $\dot{C}_{\xi}$ is forced to be a filter. To simplify notation, we assume that $P \ast \dot{S}(\lambda,j(\kappa))$ forces the existence of such a centering family.
By the $\kappa$-c.c. of $P$, for every $\langle{p,\dot{q}}\rangle \in P \ast \dot{S}(\lambda,j(\kappa))$, $P \Vdash \dot{q} \in \dot{S}(\lambda,\beta)$ for some $\beta < j(\kappa)$. For each $q \in j(P)_{\alpha+1}$, let $\rho(q)$ be defined in the following way:
For $\xi < \lambda$, let $\mathcal{A}_{q}^{\xi} \subseteq P \ast \dot{S}(\lambda,j(\kappa))$ be a maximal anti-chain such that, for every $r \in \mathcal{A}_{q}^{\xi}$, $r$ decides $q \in \dot{C}_{\xi}$. $\rho(q)$ is the least ordinal $\beta<j(\kappa)$ such that $P \Vdash \dot{q} \in \dot{S}(\lambda,\beta)$ for every $\langle p,\dot{q}\rangle \in \bigcup_{\xi}\mathcal{A}_{q}^{\xi}$.
We put $Q = j(P)_{\alpha} \cap V_{\alpha}$. Let $C \subseteq j(\kappa)$ be a club generated by $\beta \mapsto \sup\{\rho(q) \mid q \in Q \ast {S}^{Q}(\alpha,\beta) \}$. Since $j(\kappa)$ is inaccessible, we can find a strong limit cardinal $\delta \in C \cap E^{j(\kappa)}_{> \lambda} \cap E^{j(\kappa)}_{\leq\alpha} \setminus (\alpha + 1)$.
By the $\kappa$-c.c. of $P$ and Lemma \ref{dispoint} (2), $P \ast \dot{S}(\lambda,\delta) \Vdash (\delta^{+})^{V} \geq \lambda^{+}$. We will work in the extension by $P \ast \dot{S}(\lambda,\delta)$. Let $\dot{G} \ast \dot{H}_{\delta}$ be the canonical $P \ast \dot{S}(\lambda,\delta)$-name for a generic filter.
By Lemma \ref{dispoint} (1), $P \ast \dot{S}(\lambda,\delta) \Vdash S^{V}(\alpha,\delta)$ has an anti-chain of size $(\delta^{+})^{V} \geq \lambda^{+}$. By Lemma \ref{basicpropofunivcoll} (4), this $S^{V}(\alpha,\delta)$-anti-chain defines an anti-chain in $Q \ast S^Q(\alpha,\delta) / \dot{G} \ast \dot{H}_{\delta}$ in the extension. Thus, $Q \ast S^Q(\alpha,\delta) / \dot{G} \ast \dot{H}_{\delta}$ is forced not to have the $\lambda^{+}$-c.c., and thus not to be $\lambda$-centered.
On the other hand, a name of a centering family of $j(P)_{\alpha + 1} / \dot{G} \ast \dot{H}$ defines a centering family of $Q \ast S^{Q}(\alpha,\delta) / \dot{G} \ast \dot{H}_{\delta}$ in the extension as follows.
\begin{clam}\label{claim1}
$P \ast \dot{S}(\lambda,\delta)$ forces that $Q \ast {S}^{Q}(\alpha,\delta) / \dot{G} \ast \dot{H}_{\delta}$ is $\lambda$-centered.
\end{clam}
\begin{proof}[Proof of Claim]
For every $q \in \bigcup_{\beta<\delta} Q \ast {S}^{Q}(\alpha,\beta)$, by $\rho(q) < \delta$, the statement $q \in \dot{C}_{\xi}$ has been decided by $P \ast \dot{S}(\lambda,\delta)$ for all $\xi < \lambda$. Let $G \ast H$ be an arbitrary $(V,P \ast \dot{S}(\lambda,j(\kappa)))$-generic filter. Note that $G \ast H_\delta = G \ast H \cap (P \ast \dot{S}(\lambda,\delta))$ is $(V,P \ast \dot{S}(\lambda,\delta))$-generic. Let $D_{\xi}$ be defined by
\begin{center}
$\langle p,\dot{q}\rangle \in D_\xi$ if and only if $\langle p,\dot{q} \upharpoonright \beta \rangle \in \dot{C}_\xi$ is forced by $G \ast H_\delta$ for every $\beta<\delta$.
\end{center}
It is easy to see that $D_{\xi}$ is a filter over $Q\ast {S}^{Q}(\alpha,\delta) / {G} \ast {H}_{\delta}$. We claim that $\{D_\xi\mid \xi < \lambda\}$ covers $Q \ast {S}^{Q}(\alpha,\delta) / G \ast H_\delta$. For each $\langle p,\dot{q}\rangle \in Q\ast {S}^{Q}(\alpha,\delta) / {G} \ast {H}_{\delta}$, in $V[G][H]$, there is a $\xi$ such that $\langle{p,\dot{q}}\rangle \in \dot{C}_\xi^{G \ast H}$. Then $\langle{p,\dot{q}\upharpoonright \beta} \rangle \in \dot{C}_{\xi}^{G \ast H}$ for every $\beta < \delta$, since $\dot{C}_{\xi}^{G \ast H}$ is a filter. In particular, $\langle{p,\dot{q}\upharpoonright \beta} \rangle \in \dot{C}_{\xi}$ is forced by $G\ast H_\delta$ for every $\beta < \delta$. By the definition of $D_\xi$, $\langle {p,\dot{q}}\rangle \in D_{\xi}$ in $V[G][H_{\delta}]$, as desired.
\end{proof}
This claim gives the desired contradiction, which completes the proof.
\end{proof}
Let us give a proof of Theorem \ref{main}.
\begin{proof}[Proof of Theorem \ref{main}]
By Lemmas \ref{fms} and \ref{rephrasedmain}, we have $\Vdash \mathcal{P}({\mathcal{P}}_{\kappa}\lambda) / \dot{I}$ is not $\lambda$-centered. Thus, $\dot{I}$ is not a centered ideal in the extension.
\end{proof}
Lastly, we list other saturation properties of $\dot{I}$.
\begin{thm}\label{extentofkunen}$P \ast \dot{S}(\lambda,j(\kappa))$ forces that
\begin{enumerate}
\item $\dot{I}$ is $(j(\kappa),j(\kappa),<\mu)$-saturated.
\item $\dot{I}$ is not $(j(\kappa),\mu,\mu)$-saturated. In particular, $\dot{I}$ is not strongly saturated.
\item $\dot{I}$ is layered.
\item $\dot{I}$ is not centered.
\end{enumerate}
\end{thm}
\begin{proof}
For (1) and (2), we refer to \cite{preprint}. (3) has been proven in \cite{MR942519}. (4) follows by Lemma \ref{rephrasedmain}.
\end{proof}
\begin{rema}
Kunen's theorem can be improved by Magidor's trick, which appeared in \cite{MR526312}. This shows that an almost huge cardinal suffices for Theorem \ref{kunentheorem}. Then we can use the Levy collapse instead of the Silver collapse. Remark \ref{levyremark} enables us to show the same result as Theorem \ref{main} for Levy collapses. On the other hand, to obtain layeredness, we need an almost huge embedding $j:V \to M$ such that $j(\kappa)$ is Mahlo. If $j(\kappa)$ is not Mahlo, the ideal $\dot{I}$ is forced not to be layered. For details, we refer to \cite{preprint}.
\end{rema}
\section{Proof of Theorem \ref{main2}}
In this section, we give a proof of Theorem \ref{main2}.
First, we recall the definition and basic properties of the Laver collapse $L(\kappa,\lambda)$. $L(\kappa,\lambda)$ is the set of all $p$ such that
\begin{itemize}
\item $p \in \prod_{\gamma \in [\kappa^{+},\lambda) \cap \mathrm{Reg}}^{<\lambda}{^{<\kappa}\gamma}$.
\item There is a $\xi < \kappa$ with $\forall \gamma \in \mathrm{dom}(p)(\mathrm{dom}(p(\gamma)) \subseteq \xi)$.
\item $\mathrm{dom}(p) \subseteq \lambda$ is Easton subset. That is, $\forall \alpha \in \mathrm{Reg} (\sup (\mathrm{dom}(p) \cap \alpha) < \alpha)$.
\end{itemize}
$L(\kappa,\lambda)$ is ordered by reverse inclusion. It is easy to see that
\begin{lem}
\begin{enumerate}
\item $L(\kappa,\lambda)$ is $\kappa$-closed.
\item If $\lambda$ is Mahlo, then $L(\kappa,\lambda)$ has the $(\lambda,\lambda,<\mu)$-c.c. for all $\mu < \lambda$. Therefore, $L(\kappa,\lambda) \Vdash \kappa^{+} = \lambda$.
\end{enumerate}
\end{lem}
\begin{proof}
(1) follows by the standard argument. (2) follows by the usual $\Delta$-system argument.
\end{proof}
For $\kappa,\lambda,\mu,j,f$ as in the assumptions of Theorem \ref{main2}, let us define $P$. Let $\langle P_{\alpha} \mid \alpha \leq \kappa \rangle$ be the Easton support iteration such that
\begin{itemize}
\item $P_{0} = L(\mu,\kappa)$.
\item $P_{\alpha + 1} = \begin{cases}P_{\alpha} \ast L^{P_\alpha \cap V_{\alpha}}(f(\alpha),\kappa) & \alpha\text{ is good}\\
P_{\alpha} & \text{otherwise}
\end{cases}$.
\end{itemize}
Goodness of $\alpha$ is defined as in Section 3. Define $P = P_\kappa$. Then
\begin{lem}\label{basicpropofunivcoll2}
\begin{enumerate}
\item $P$ is $\mu$-directed closed and has the $(\kappa,\kappa,<\nu)$-c.c. for all $\nu < \kappa$.
\item $P \subseteq V_{\kappa}$ and $P \Vdash \mu^{+} = \kappa$.
\item $\kappa$ is good for $j(P)$. In particular, $j(P)_{\kappa} \cap V_{\kappa} = P \mathrel{\lessdot} j(P)_{\kappa}$.
\item There is a complete embedding $\tau:P \ast \dot{L}(\lambda,j(\kappa)) \to j(P)_{\kappa + 1} \lessdot j(P)$ such that $\tau(p,\emptyset) = p$ for all $p \in P$.
\end{enumerate}
\end{lem}
Laver proved the following in the case $\kappa = \lambda$, but the same proof shows:
\begin{thm}
$P\ast \dot{L}(\lambda,j(\kappa))$ forces that ${\mathcal{P}}_{\kappa}\lambda$ carries a strongly saturated ideal $\dot{I}$.
\end{thm}
We have an analogue of Lemma \ref{fms} for Laver's ideal.
\begin{lem}\label{dualitylaver}
$P \ast \dot{L}(\lambda,j(\kappa))$ forces $\mathcal{P}({\mathcal{P}}_{\kappa}\lambda) / \dot{I} \simeq j(P) / \dot{G} \ast \dot{H}$. Here, $\dot{G} \ast \dot{H}$ is the canonical name for the generic filter.
\end{lem}\begin{proof}
The same proof as in \cite[Claim 7]{MR942519} works.
\end{proof}
\begin{lem}\label{quotientsaturation}
Suppose that $\tau:P \to Q$ is a complete embedding and $Q$ has the $(\kappa,\mu,\mu)$-c.c. Then $P \Vdash Q / \dot{G}$ has the $(\kappa,\mu,\mu)$-c.c.
\end{lem}
\begin{proof}
We may assume that $P$ and $Q$ are complete Boolean algebras.
Let $p$ be an arbitrary condition with $p \Vdash \{\dot{q}_{\alpha} \mid \alpha < \kappa\} \subseteq Q / \dot{G}$. For each $\alpha < \kappa$, there are $p_{\alpha} \leq p$ and $q_{\alpha} \in Q$ such that $p_{\alpha} \Vdash \dot{q}_{\alpha} = q_{\alpha}$. By the $(\kappa,\mu,\mu)$-c.c. of $Q$, there is a $Z \in [\kappa]^{\mu}$ such that $\prod_{\alpha \in Z} \tau(p_\alpha) \cdot q_{\alpha} \not= 0$. It is easy to see that
\begin{center}
$\prod_{\alpha \in Z} \tau(p_\alpha) \cdot q_{\alpha} = \prod_{\alpha \in Z}\tau(p_{\alpha}) \cdot \prod_{\alpha \in Z}q_{\alpha} = \tau(\prod_{\alpha \in Z} p_{\alpha}) \cdot \prod_{\alpha \in Z}q_{\alpha}$.
\end{center}
Let $r$ be a reduction of $\tau(\prod_{\alpha \in Z} p_{\alpha}) \cdot \prod_{\alpha \in Z}q_{\alpha}$ to $P$. Then $r \leq \prod_{\alpha \in Z}p_{\alpha}\leq p$, and $r$ forces that $\prod_{\alpha \in Z}q_{\alpha} \in Q / \dot{G}$ is a lower bound of $\{\dot{q}_{\alpha} \mid \alpha \in Z\}$, as desired.
\end{proof}
\begin{lem}\label{mainlemma2}
$P \ast \dot{L}(\lambda,j(\kappa))$ forces that $j(P) / \dot{G} \ast \dot{H}$ does not have the $(j(\kappa),j(\kappa),\lambda)$-c.c.
\end{lem}
\begin{proof}
Let us show that $P \ast \dot{L}(\lambda,j(\kappa)) \Vdash j(P) / \dot{G} \ast \dot{H}$ does not have the $(j(\kappa),j(\kappa),\lambda)$-c.c. Let $\{q_{\alpha} \in j(P) \mid \alpha > \kappa + 1\}$ be an arbitrary family such that $\mathrm{supp}(q_{\alpha}) = \{\alpha\}$ for all $\alpha$. Note that $\langle \emptyset,\emptyset \rangle \in P \ast \dot{L}(\lambda,j(\kappa))$. For every $p \in P\ast \dot{L}(\lambda,j(\kappa))$, $\tau(p) \in j(P)_{\kappa + 1}$ by Lemma \ref{basicpropofunivcoll2} (4). Therefore $\tau(p)$ is compatible with $q_{\alpha}$. We have $\Vdash \{q_{\alpha} \mid \alpha > \kappa + 1\} \subseteq j(P) / \dot{G} \ast \dot{H}$.
We fix an arbitrary $\dot{A}$ with $\Vdash \dot{A} \in [j(\kappa)]^{j(\kappa)}$. Let us find an $\dot{x}$ such that $\Vdash \dot{x} \in [\dot{A}]^{\lambda}$ and $\{q_{\alpha} \mid \alpha \in \dot{x} \}$ has no lower bound in $j(P) / \dot{G} \ast \dot{H}$. By the $j(\kappa)$-c.c. of $P \ast \dot{L}(\lambda,j(\kappa))$, there is a club $C \subseteq j(\kappa)$ such that $\Vdash C \subseteq \mathrm{Lim}(\dot{A})$. Since $j(\kappa)$ is Mahlo, there is an inaccessible $\alpha \in C\setminus (\lambda + 1)$. Let $\dot{x}$ be a $P \ast \dot{L}(\lambda,j(\kappa))$-name for the set $\dot{A} \cap \alpha$. Since $\Vdash \alpha \in \mathrm{Lim}(\dot{A})$, we have $\Vdash \sup \dot{x} = \alpha$. Since $\lambda \leq \alpha < j(\kappa)$, $|\dot{x}| = |\alpha| = \lambda$ is forced by $P\ast \dot{L}(\lambda,j(\kappa))$. We claim that $\{q_\alpha \mid \alpha \in \dot{x}\}$ is forced to have no lower bound. Suppose otherwise; then there are $p \in P \ast \dot{L}(\lambda,j(\kappa))$ and $r \in j(P)$ such that $p$ forces $r \in j(P) / \dot{G}\ast\dot{H}$ and $r \leq q_{\alpha}$ for all $\alpha \in \dot{x}$. Note that $\mathrm{supp}(r) \cap \alpha \subseteq \beta$ for some $\beta < \alpha$. By $p \Vdash \sup \dot{x} = \alpha$, there are $q \leq p$ and $\gamma \in [\beta^{+},\alpha)$ such that $q \Vdash \gamma \in \dot{x}$. Therefore $q \Vdash r \leq q_{\gamma}$, and thus $\gamma \in \mathrm{supp}(r)\cap [\beta^{+},\alpha)$. This is a contradiction.
\end{proof}
Let us give a proof of Theorem \ref{main2}.
\begin{proof}[Proof of Theorem \ref{main2}]
Note that $j(P)$ has the $(j(\kappa),j(\kappa),\mu)$-c.c. for all $\mu < j(\kappa)$. Therefore $j(P)$ has the $(j(\kappa),\lambda,\lambda)$-c.c. By Lemma \ref{quotientsaturation}, $j(P) / \dot{G} \ast \dot{H}$ is forced to have the $(j(\kappa),\lambda,\lambda)$-c.c. By Lemma \ref{mainlemma2}, $P \ast \dot{L}(\lambda,j(\kappa))$ forces that $\mathcal{P}({\mathcal{P}}_{\kappa}(\lambda)) / \dot{I}$ does not have the $(j(\kappa),j(\kappa),\lambda)$-c.c.
By Lemma \ref{dualitylaver}, $\dot{I}$ is forced to have the $(j(\kappa),\lambda,\lambda)$-c.c. but not the $(j(\kappa),j(\kappa),\lambda)$-c.c.
\end{proof}
\begin{thm}
$P \ast \dot{L}(\lambda,j(\kappa))$ forces that
\begin{enumerate}
\item $\dot{I}$ is $(j(\kappa),j(\kappa),<\lambda)$-saturated and $(j(\kappa),\lambda,\lambda)$-saturated.
\item $\dot{I}$ is not $(j(\kappa),j(\kappa),\lambda)$-saturated.
\item $\dot{I}$ is layered.
\end{enumerate}
\end{thm}
\begin{proof}
(1) and (2) follow from Lemma \ref{mainlemma2}. (3) follows by the proof in \cite{MR942519}.
\end{proof}
Shioya improved Theorem \ref{lavertheorem} by using the Easton collapse. The Easton collapse $E(\kappa,\lambda)$ is the Easton support product $\prod^{E}_{\gamma \in [\kappa^{+},\lambda) \cap \mathrm{SR}}{^{<\kappa}\gamma}$, where $\mathrm{SR}$ is the class of all cardinals $\gamma$ with $\gamma^{<\gamma} = \gamma$. He showed:
\begin{thm}[Shioya~\cite{MR4159767}]\label{shioya}
Suppose that $j:V \to M$ is an almost-huge embedding with critical point $\kappa$ and $j(\kappa)$ is Mahlo. For regular cardinals $\mu < \kappa \leq \lambda < j(\kappa)$, $E(\mu,\kappa) \ast \dot{E}(\lambda,j(\kappa))$ forces that $\mu^{+} = \kappa$, $j(\kappa) = \lambda^{+}$ and $\mathcal{P}_{\kappa}\lambda$ carries a strongly saturated ideal.
\end{thm}
Let $\dot{I}$ be an $E(\mu,\kappa) \ast \dot{E}(\lambda,j(\kappa))$-name for the ideal in Theorem \ref{shioya}. A similar proof shows that
\begin{thm}$E(\mu,\kappa) \ast \dot{E}(\lambda,j(\kappa))$ forces that
\begin{enumerate}
\item $\dot{I}$ is $(j(\kappa),j(\kappa),<\lambda)$-saturated and $(j(\kappa),\lambda,\lambda)$-saturated.
\item $\dot{I}$ is not $(j(\kappa),j(\kappa),\lambda)$-saturated.
\item $\dot{I}$ is layered.
\end{enumerate}
\end{thm}
We conclude this paper with the following question.
\begin{ques}
Does $P \ast \dot{L}(\lambda,j(\kappa))$ force that $\dot{I}$ is centered? What about $E(\mu,\kappa)\ast\dot{E}(\lambda,j(\kappa))$?
\end{ques}
\end{document}
\begin{document}
\title{Contact topology and holomorphic invariants via elementary combinatorics}
\author{Daniel V. Mathews}
\date{}
\maketitle
\begin{abstract}
In recent times a great amount of progress has been achieved in symplectic and contact geometry, leading to the development of powerful invariants of 3-manifolds such as Heegaard Floer homology and embedded contact homology. These invariants are based on holomorphic curves and moduli spaces, but in the simplest cases, some of their structure reduces to some elementary combinatorics and algebra which may be of interest in its own right. In this note, which is essentially a light-hearted exposition of some previous work of the author, we give a brief introduction to some of the ideas of contact topology and holomorphic curves, discuss some of these elementary results, and indicate how they arise from holomorphic invariants.
\end{abstract}
\tableofcontents
\section{Introduction}
In recent years a great amount of progress has been achieved in understanding the structures of symplectic and contact geometry. Powerful invariants of manifolds and their contact and symplectic structures have been defined, such as Heegaard Floer homology and contact homology. These theories are based on generalised Cauchy-Riemann equations, holomorphic curves and their moduli spaces.
In a series of papers, the author has developed some of the structure that arises in some of the simplest cases of the holomorphic invariants known as sutured Floer homology and embedded contact homology \cite{Me09Paper, Me10_Sutured_TQFT, Me11_torsion_tori, Me12_itsy_bitsy, Mathews_Schoenfeld12_string}. This note is essentially a light-hearted exposition of some of that subject matter.
The structures that we discuss here are algebraic and combinatorial, and entirely elementary. They may be of interest for their own sake, as well as for their relation to other subjects, including quantum information theory, representation theory, topological quantum field theory, and especially contact geometry. That all this structure can arise from the simplest cases of sutured Floer homology and embedded contact homology testifies to the power of these invariants.
It is our desire here to convey some of these results to a broad mathematical audience. Consequently, this note assumes no knowledge of symplectic or contact geometry or holomorphic curves. Since the subject matter is in a certain sense a ``combinatorialization of contact geometry'', we will give an exposition, as we proceed, of how these elementary results relate to contact geometry --- and this may serve as an unorthodox introduction to some of the ideas of contact geometry.
The reader who is only interested in combinatorics or algebra can easily skip the sections on symplectic and contact geometry and holomorphic curves. The reader without such background, but who wishes to know where the subject matter comes from, can hopefully gain here some idea of the types of considerations involved, whether in symplectic or contact geometry or holomorphic curves, and is encouraged to follow the references for further details.
\section{Symplectic and contact geometry}
We begin with a little --- very, very little --- about what symplectic and contact geometry are, and where some holomorphic invariants come from. For a proper introduction to symplectic geometry we refer to \cite{McDuffSalamon_Introduction} and to \cite{Geiges_Introduction} or \cite{Et02} for contact geometry. We will give essentially no proofs in this section, just assert some basic facts.
None of this is needed for the combinatorics and algebra that we will shortly discuss, but it will be useful as we proceed to make connections with contact geometry and holomorphic invariants. The reader who is solely interested in combinatorial and algebraic aspects can safely skip this section.
\subsection{Symplectic geometry}
Symplectic geometry is the mathematical structure of Hamiltonian mechanics. A symplectic manifold is a pair
\[
(M, \omega),
\]
where $M$ is a smooth manifold and $\omega$ is a closed non-degenerate differential $2$-form on $M$. This implies that $M$ is even-dimensional.
The key property that allows this structure to produce mechanics is that any smooth function $H: M \longrightarrow \mathbb{R}$ (called a \emph{Hamiltonian}) has a naturally associated vector field $X_H$ on $M$. Namely, from $H$ we obtain a differential $1$-form $dH$, and then the non-degeneracy of $\omega$ gives the vector field $X_H$ via the equation
\[
\omega( X_H, \cdot ) = dH.
\]
(Different authors have different sign conventions in this equation, but this is the idea.)
The fact that $\omega$ is closed implies that flowing along $X_H$ leaves $\omega$ unchanged:
\[
L_{X_H} \omega = i_{X_H} d\omega + d i_{X_H} \omega = i_{X_H} 0 + d(dH) = 0.
\]
The physical interpretation is that $M$ is the phase space (``space of states'') of the universe, $H$ gives the energy of any state, and $X_H$ is the time evolution of the universe.
The simplest example of a symplectic manifold is $\mathbb{R}^{2n}$ with the symplectic form
\[
\omega = \sum_{j=1}^n dx_j \wedge dy_j.
\]
Here each $y_j$ can be considered a ``position coordinate'' and each $x_j$ a corresponding ``conjugate momentum''.
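As an elementary illustration of the recipe $\omega(X_H, \cdot) = dH$, here is a small numerical sketch (my own illustration, not part of the original exposition): for $\omega = dx \wedge dy$ on $\mathbb{R}^2$, one common sign convention gives $X_H = (\partial H / \partial y, \, -\partial H / \partial x)$, and for the harmonic oscillator $H = (x^2 + y^2)/2$ this is the rotation field $(y, -x)$.

```python
def hamiltonian_vector_field(H, x, y, h=1e-6):
    # For omega = dx ^ dy, solving omega(X_H, .) = dH pointwise gives
    # X_H = (dH/dy, -dH/dx) (sign conventions vary, as noted in the text).
    dHdx = (H(x + h, y) - H(x - h, y)) / (2 * h)
    dHdy = (H(x, y + h) - H(x, y - h)) / (2 * h)
    return dHdy, -dHdx

# harmonic oscillator: H = (x^2 + y^2)/2, so X_H = (y, -x), a rotation field
H = lambda x, y: (x * x + y * y) / 2
a, b = hamiltonian_vector_field(H, 0.3, -1.2)
assert abs(a - (-1.2)) < 1e-6 and abs(b - (-0.3)) < 1e-6
```

Rotations of the plane preserve area, which is the two-dimensional shadow of the general fact $L_{X_H}\omega = 0$ derived above.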
\subsection{Symplectic vs. complex geometry}
The fact that symplectic geometry only arises in even numbers of dimensions suggests a similarity to complex geometry. Indeed this is even the etymological root: Weyl introduced the word ``symplectic'' as a Greek version of the word ``complex'' \cite{Weyl_Classical}. The Latin \emph{complexus} and the Greek \emph{symplektikos} both mean ``braided together''.
There is a difference of course. In a complex manifold every point has a neighbourhood biholomorphic to an open subset of $\mathbb{C}^n$. In particular, for every direction there is also ``$i$ times'' that direction. This is very different from having a closed non-degenerate $2$-form.
The relationship between symplectic and complex geometry has been exploited to great effect in the last 25 years or so, with the use of \emph{holomorphic curves}, starting with the work of Gromov in 1985 \cite{Grom}, leading to great advances in the understanding of symplectic geometry.
In particular, any symplectic manifold $(M, \omega)$ has an \emph{almost complex structure}. An almost complex structure is a map which is like ``multiplication by $i$ at every point of $M$''. That is, $J$ is a map
\[
J \; : \; TM \longrightarrow TM
\]
which takes every fibre $T_p M \to T_p M$ and satisfies $J^2 = -1$.
\begin{center}
\def\svgwidth{120pt}
\input{almost_cx_str.pdf_tex}
\end{center}
An almost complex structure is often required to be \emph{compatible} with the symplectic form. Roughly this means that $J$ and $\omega$ behave like $i$ and $\sum_j dx_j \wedge dy_j$ in $\mathbb{C}^n$ (where the $z_j = x_j + i y_j$ are the coordinates). (Precisely, it means that $\omega( v, w ) = \omega (Jv, Jw)$ for all tangent vectors $v,w$; and $\omega( v, Jv) > 0$ for all tangent vectors $v \neq 0$.)
It turns out that any symplectic manifold has a compatible almost complex structure. Moreover, all almost complex structures on $(M, \omega)$ are homotopic. (The space of almost complex structures on $M$ is the space of sections of a fibre bundle over $M$ with contractible fibres.) However, an almost complex structure by no means implies the existence of a complex structure. An almost complex structure only requires a pointwise condition. A complex structure requires local charts to $\mathbb{C}^n$ with holomorphic transition functions, which is a far more onerous condition. To drop the ``almost'' requires an additional condition, the vanishing of the Nijenhuis tensor.
For proper discussion of these issues we refer to \cite{McDuffSalamon_Introduction} or \cite{McDuff_Salamon_J-holomorphic}.
\subsection{Holomorphic curves in symplectic manifolds}
Gromov in \cite{Grom} had the idea of considering \emph{holomorphic curves in symplectic manifolds}. This has led to many developments, including the development of \emph{Floer homology} theories.
The point of this note is not really to explain any such homology theories, or the detail of why they work, or how they work, or how to compute them. The point is to discuss some of the structure obtained from them in some very simple cases, which is elementary and of independent interest.
Nonetheless, for the interested reader, we can summarise some of the ideas involved, very roughly and avoiding all details, as follows. Readers not interested in holomorphic curves should skip to the next section.
\begin{itemize}
\item
Start with a symplectic manifold $(M, \omega)$ you want to understand.
\item
Introduce a compatible almost complex structure $J$ on $(M, \omega)$. As we just noted, one always exists, and any choice is homotopic to any other.
\item
Take a Riemann surface $(\Sigma, i)$ and consider maps
\[
u: (\Sigma, i) \longrightarrow (M, J)
\]
which are \emph{holomorphic},\footnote{It is common to say ``pseudo-holomorphic'' or ``$J$-holomorphic'' to indicate that the target has an almost complex structure, rather than a complex structure. We prefer simply to say holomorphic here and there should be no ambiguity.} meaning that the following diagram commutes.
\[
\begin{tikzpicture}
\draw (0,0) node {$T\Sigma$};
\draw (3,0) node {$TM$};
\draw (0,-2) node {$T\Sigma$};
\draw (3,-2) node {$TM$};
\draw [->] (0.5,0) -- (2.5,0)
node [above, align=center, midway] {$Du$};
\draw [->] (0.5,-2) -- (2.5,-2)
node [above, align=center, midway] {$Du$};
\draw [->] (0,-0.4) -- (0,-1.6)
node [left, align=center, midway] {$i$};
\draw [->] (3,-0.4) -- (3,-1.6) node [right, align=center, midway] {$J$};
\end{tikzpicture}
\]
I.e., the \emph{Cauchy--Riemann equations} hold: $Du \circ i = J \circ Du$.
\begin{center}
\def\svgwidth{300pt}
\input{holomorphic_curve.pdf_tex}
\end{center}
\item
If we consider holomorphic curves with appropriate constraints such as marked points, fixed degree and boundary conditions, and provided sufficient transversality conditions are satisfied, then the space of holomorphic curves will be a \emph{finite-dimensional} orbifold-like space called a \emph{moduli space}.
(This is a vast generalisation of elementary facts such as the following. The space of holomorphic maps $\mathbb{C}P^1 \longrightarrow \mathbb{C}P^1$ of degree $1$ fixing $3$ points is a point. The space of holomorphic maps $\mathbb{C}P^1 \longrightarrow \mathbb{C}P^1$ of degree $1$ fixing $2$ points is an annulus.)
\item
More generally, under these favourable conditions, there is an index theory for holomorphic curves. This is a version of the Riemann--Roch theorem and a special case of the Atiyah--Singer index theorem; the Cauchy--Riemann equations give a $\bar{\partial}$ operator and its index is computed in terms of topological data.
The dimension of the moduli space is given in terms of the symplectic topology of the constraints. Such moduli spaces intricately encode topological data about the manifold.
\item
A moduli space has a natural \emph{compactification} which is the subject of the \emph{Gromov compactness theorem}. The compactified moduli space has a stratified boundary, and the strata are moduli spaces of ``degenerate'' holomorphic curves such as nodal surfaces.
\begin{center}
\def\svgwidth{300pt}
\input{holomorphic_curve_2.pdf_tex}
\end{center}
\item
Some information can be extracted from a moduli space by exploiting the codimension-1 parts of its boundary, and defining \emph{homology theories} based on it. Roughly, and although there are several different approaches, one takes a chain complex generated by sets of boundary conditions for holomorphic curves. One then defines a differential $\partial$ counting holomorphic curves between two sets of boundary conditions, at positive and negative ends respectively. Under favourable conditions, the boundary structure of the moduli space and the index theory of solutions to the Cauchy-Riemann equations give $\partial^2 = 0$.
This is analogous to how the singular homology of a smooth manifold can be obtained from a Morse function $f$ via a chain complex (the \emph{Morse complex}) generated by critical points of $f$, and a differential counting gradient trajectories between critical points. (Critical points are ``boundary conditions for gradient trajectories'').
\begin{center}
\def\svgwidth{100pt}
\input{gradient_trajectory.pdf_tex}
\end{center}
\item
Depending on the details of how this chain complex is constructed, one can obtain various flavours of \emph{Floer homology}, \emph{contact homology} or \emph{symplectic field theory}. An intricate algebraic structure of generating functions can be used to keep track of detail about counts of holomorphic curves.
\item
It often turns out that the resulting homology is independent of the choice of almost complex structure $J$, and sometimes even of the underlying symplectic or contact structure. It is possible to obtain \emph{smooth} manifold or \emph{knot} invariants.
\item
\emph{Heegaard Floer homology theories} are a very powerful variant of the above ideas, introduced by Ozsv{\'a}th and Szab{\'o} \cite{OS04Prop, OS04Closed, OS06}. From a $3$-manifold $M$, we take its \emph{Heegaard decomposition} consisting of a surface $\Sigma$ with curves $\alpha_1, \ldots, \alpha_g$ bounding discs on one side and $\beta_1, \ldots, \beta_g$ on the other. We then (as in \cite{Lip}) consider holomorphic curves in the symplectic manifold $\Sigma \times I \times \mathbb{R}$ with asymptotics prescribed by the $\alpha_i$ and $\beta_j$.
It turns out that one can set up a chain complex so that the resulting homology is independent of any symplectic or almost complex structure chosen, giving a \emph{smooth manifold invariant} of $M$. Different versions, such as the \emph{hat}, \emph{infinity}, \emph{minus}, \emph{knot}, \emph{bordered} \cite{LOT08} and \emph{sutured} \cite{Ju06} Heegaard Floer homologies vary between the types of manifolds they apply to, the complexity of the theory, and the ease of obtaining information.
One indication of the power of Heegaard Floer homology is that it detects the genus of a knot \cite{OS04Knot, OS04Genus}; this can be computed combinatorially \cite{MOSa}.
\end{itemize}
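The chain-complex mechanism in the Morse analogy --- generators, a differential with $\partial^2 = 0$, and homology extracted from ranks --- can be illustrated with an entirely elementary $\mathbb{Z}_2$ toy example, far removed from any moduli space: the simplicial chain complex of a triangle, a combinatorial circle. (This sketch is my own illustration; the bit-mask encoding of $\mathbb{Z}_2$-vectors is an implementation choice.)

```python
def gf2_rank(rows):
    # rank over the field Z2; each row is an int bit-mask of a Z2 vector
    rows, rank = list(rows), 0
    for i in range(len(rows)):
        pivot = rows[i]
        if pivot == 0:
            continue
        rank += 1
        low = pivot & -pivot            # lowest set bit as pivot column
        for j in range(i + 1, len(rows)):
            if rows[j] & low:
                rows[j] ^= pivot        # eliminate pivot column below
    return rank

# boundary map d1: edges -> vertices of a triangle (vertices = bits 0,1,2);
# over Z2, d(e_uv) = v_u + v_v, and there are no 2-cells in this toy complex
edges = [0b011, 0b110, 0b101]           # e01, e12, e02
r = gf2_rank(edges)
dim_H0 = 3 - r                          # #vertices - rank(d1)
dim_H1 = 3 - r                          # dim ker(d1) = #edges - rank(d1)
assert (dim_H0, dim_H1) == (1, 1)       # the homology of a circle, over Z2
```

Floer-type theories replace simplices by boundary conditions for holomorphic curves and the boundary map by curve counts, but the homological algebra downstream of $\partial^2 = 0$ is of this same elementary kind.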
We shall say no more about holomorphic curves until the final section, when we briefly return to sutured Floer homology to see how it is related to the elementary structures forming the main subject of this note.
\subsection{Contact geometry}
Contact geometry is the odd-dimensional sibling of symplectic geometry. Its roots can be traced back to Lie's work on systems of differential equations, and arguably back to Christiaan Huygens \cite{Geiges01, Geiges05}.
One way to define a symplectic structure is as a $2$-form $\omega$ satisfying
\[
d\omega = 0 \quad \text{and} \quad \omega^n \text{ is nowhere zero, i.e. a volume form.}
\]
Analogously, a \emph{contact form} $\alpha$ on a $(2n+1)$-dimensional manifold $M$ is a $1$-form satisfying
\[
\alpha \wedge (d\alpha)^n \text{ is nowhere zero, i.e. a volume form.}
\]
The kernel of $\alpha$ is a hyperplane field $\xi$ on $M$, and it is this plane field that is called a \emph{contact structure}.
Contact manifolds arise in considering wavefronts in optics. They also arise naturally as submanifolds of symplectic manifolds. In fact, so many geometric problems can be interpreted in terms of contact geometry that Arnold once famously said that ``contact geometry is all geometry'' \cite{Arnold_Symplectic_Contact}.
The condition $\alpha \wedge (d\alpha)^n \neq 0$ has a geometric interpretation via Frobenius' theorem in differential geometry. It is that $\xi$ is \emph{totally non-integrable}.
We are interested in $3$-dimensional contact manifolds. Non-integrability then means that there is no 2-dimensional surface immersed in $M$ which is tangent to $\xi$. There are 1-dimensional curves tangent to $\xi$, but not 2-dimensional surfaces, not even locally.
The simplest example of a $3$-dimensional contact manifold is $\mathbb{R}^3$ with contact form
\[
\alpha = dz - y dx.
\]
The corresponding contact structure $\xi = \ker \alpha$ is shown below.
\begin{center}
\includegraphics[scale=1]{Standard_contact_structure-eps-converted-to.pdf}
\end{center}
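For this example the volume-form condition can be checked by hand: $d\alpha = dx \wedge dy$, so $\alpha \wedge d\alpha = (dz - y\,dx) \wedge dx \wedge dy = dz \wedge dx \wedge dy$, which is nowhere zero. The following stdlib Python sketch (my own illustration) evaluates $\alpha \wedge d\alpha$ pointwise on the standard basis and confirms it is $1$ everywhere.

```python
def alpha(p, u):
    # the contact form dz - y dx at point p, on tangent vector u = (ux, uy, uz)
    x, y, z = p
    return u[2] - y * u[0]

def d_alpha(p, u, v):
    # d(dz - y dx) = dx ^ dy, independent of the point p
    return u[0] * v[1] - u[1] * v[0]

def alpha_wedge_dalpha(p, u, v, w):
    # (a ^ B)(u, v, w) = a(u)B(v,w) - a(v)B(u,w) + a(w)B(u,v)
    return (alpha(p, u) * d_alpha(p, v, w)
            - alpha(p, v) * d_alpha(p, u, w)
            + alpha(p, w) * d_alpha(p, u, v))

e1, e2, e3 = (1, 0, 0), (0, 1, 0), (0, 0, 1)
for p in [(0.0, 0.0, 0.0), (2.0, -3.0, 5.0)]:
    assert alpha_wedge_dalpha(p, e1, e2, e3) == 1   # nowhere zero: a volume form
```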
Any contact manifold with a contact form $(M, \alpha)$ can be used to obtain a symplectic manifold, called its \emph{symplectization}:
\[
\left( M \times \mathbb{R}, \; d \left( e^t \alpha \right) \right),
\]
where $t$ is the coordinate on $\mathbb{R}$. It is then possible to consider holomorphic curves in this symplectization, with asymptotics prescribed by contact geometry. Using the analytic techniques mentioned above, we can obtain homology theories such as \emph{contact homology} \cite{Bo} and \emph{embedded contact homology} \cite{Hutchings02}.
Having said a little about contact and symplectic topology, I now propose to drop the subject entirely and talk about some seemingly unrelated algebra and combinatorics. We will later see how this becomes, in a certain sense, a combinatorial version of contact topology, and how it is related to holomorphic invariants.
\section{Quantum pawn dynamics}
The theory of quantum pawn dynamics, or QPD, is a strange theory of pawns on chessboards.
\subsection{Pawns on 1-dimensional chessboards}
This theory has no space, no time, and no proper chessboards. It does have pawns though. The pawns move along a finite 1-dimensional chessboard.
\begin{center}
\begin{tabularx}{0.4\textwidth}{|>{\centering}X|>{\centering}X|>{\centering}X|>{\centering}X|>{\centering}X|>{\centering}X|}
\hline
\WhitePawnOnWhite & & \WhitePawnOnWhite & \WhitePawnOnWhite & & \\
\hline
\end{tabularx}
\end{center}
The pawns move from left to right. As they are pawns, they can only move ahead one space at a time, and only into an unoccupied square. There is no capturing, no en passant, and no double first moves. Two pawns cannot occupy the same square. So from the situation above, the pawns could eventually arrive at
\begin{center}
\begin{tabularx}{0.4\textwidth}{|X|X|X|X|X|X|}
\hline
& \WhitePawnOnWhite & \WhitePawnOnWhite & & \WhitePawnOnWhite & \\
\hline
\end{tabularx}
\end{center}
in two moves, but could never get to
\begin{center}
\begin{tabularx}{0.4\textwidth}{|X|X|X|X|X|X|}
\hline
\WhitePawnOnWhite & \WhitePawnOnWhite & & & \WhitePawnOnWhite & \\
\hline
\end{tabularx}
\end{center}
as the middle pawn would have to move backwards.
There is nothing special about the number of pawns or the size of the chess-``board''. We can have a chess-``board'' of any length $n$, and we can have any number of pawns $n_p$ where $0 \leq n_p \leq n$. Any setup of the board is a state of the QPD universe.
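The move rules can be made completely concrete: a board is a tuple of $0$s and $1$s, and which setups can evolve into which is a finite search. The following stdlib Python sketch (my own illustration, practical only for small boards) checks the two examples above by breadth-first search.

```python
def moves(board):
    # all boards reachable in one move: a pawn steps right into an empty square
    n = len(board)
    for i in range(n - 1):
        if board[i] == 1 and board[i + 1] == 0:
            nxt = list(board)
            nxt[i], nxt[i + 1] = 0, 1
            yield tuple(nxt)

def reachable(w0, w1):
    # breadth-first search over board states (zero moves counts as reachable)
    seen = frontier = {tuple(w0)}
    while frontier:
        frontier = {m for b in frontier for m in moves(b)} - seen
        seen |= frontier
    return tuple(w1) in seen

assert reachable([1,0,1,1,0,0], [0,1,1,0,1,0])       # the two-move example
assert not reachable([1,0,1,1,0,0], [1,1,0,0,1,0])   # needs a backward move
```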
\subsection{Quantum ``inner product''}
We now have pawns, and they move --- hence, dynamics. We now make the dynamics quantum by declaring an ``inner product'' $\langle \cdot | \cdot \rangle$. In quantum mechanics the inner product is supposed to give you a probability amplitude for getting from one state of the universe to another.
The QPD universe is more binary than that. Given one setup of pawns $w_0$, you can either get to another setup of pawns $w_1$ or you can't. And in fact, we do not need a number system more complicated than $\mathbb{Z}_2$. So we do not have probabilities so much as \emph{possibilities}.
We declare that
\[
\langle w_0 | w_1 \rangle = \left\{ \begin{array}{ll}
1 & \text{if it is possible for pawns to move from $w_0$ to $w_1$} \\
& \text{\quad (in some number of moves, possibly $0$);} \\
0 & \text{if not.}
\end{array}
\right.
\]
So our examples above translate to
\[
\left\langle \quad
\begin{tabularx}{0.4\textwidth}{|>{\centering}X|>{\centering}X|>{\centering}X|>{\centering}X|>{\centering}X|>{\centering}X|}
\hline
\WhitePawnOnWhite & & \WhitePawnOnWhite & \WhitePawnOnWhite & & \\
\hline
\end{tabularx}
\quad | \quad
\begin{tabularx}{0.4\textwidth}{|X|X|X|X|X|X|}
\hline
& \WhitePawnOnWhite & \WhitePawnOnWhite & & \WhitePawnOnWhite & \\
\hline
\end{tabularx}
\quad \right\rangle
\quad = \quad 1
\]
and
\[
\left\langle \quad
\begin{tabularx}{0.4\textwidth}{|X|X|X|X|X|X|}
\hline
\WhitePawnOnWhite & & \WhitePawnOnWhite & \WhitePawnOnWhite & & \\
\hline
\end{tabularx}
\quad | \quad
\begin{tabularx}{0.4\textwidth}{|X|X|X|X|X|X|}
\hline
\WhitePawnOnWhite & \WhitePawnOnWhite & & & \WhitePawnOnWhite & \\
\hline
\end{tabularx}
\quad \right\rangle
\quad = \quad 0
\]
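These evaluations are easy to mechanise. Here is a minimal sketch (the encoding and function names are ours): writing a board as a word of \texttt{p}'s (pawns) and \texttt{q}'s (empty squares), as we do later on, $\langle w_0 \mid w_1 \rangle = 1$ exactly when the $i$'th pawn of $w_1$ sits weakly to the right of the $i$'th pawn of $w_0$.

```python
def pawn_positions(board):
    # board: a string of 'p' (pawn) and 'q' (empty square)
    return [i for i, c in enumerate(board) if c == "p"]

def inner(w0, w1):
    """<w0|w1> = 1 iff w1 is reachable from w0, i.e. iff each pawn
    of w0 can move (weakly) rightwards to the matching pawn of w1."""
    if len(w0) != len(w1):
        return 0
    a, b = pawn_positions(w0), pawn_positions(w1)
    if len(a) != len(b):
        return 0
    return 1 if all(x <= y for x, y in zip(a, b)) else 0

print(inner("pqppqq", "qppqpq"))  # 1: every pawn can drift rightwards
print(inner("pqppqq", "ppqqpq"))  # 0: the middle pawn would move backwards
```

The two calls reproduce the two chessboard evaluations above.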
Moreover, this is \emph{quantum} pawn dynamics, so we can have \emph{superpositions} of states --- entangled chessboards! We consider that we can \emph{add} chessboards. But, as we said, we will not consider numbers more complicated than $1$; we take coefficients in $\mathbb{Z}_2$. The space of states is the $\mathbb{Z}_2$-vector space freely generated by chessboard setups. We declare that the ``inner product'' is bilinear, e.g.:
\[
\left\langle \quad
\begin{tabularx}{0.4\textwidth}{|X|X|X|X|X|X|}
\hline
\WhitePawnOnWhite & & \WhitePawnOnWhite & \WhitePawnOnWhite & & \\
\hline
\end{tabularx}
\quad | \quad
\begin{array}{c}
\begin{tabularx}{0.4\textwidth}{|X|X|X|X|X|X|}
\hline
& \WhitePawnOnWhite & \WhitePawnOnWhite & & \WhitePawnOnWhite & \\
\hline
\end{tabularx}\\
\quad + \quad \\
\begin{tabularx}{0.4\textwidth}{|X|X|X|X|X|X|}
\hline
\WhitePawnOnWhite & \WhitePawnOnWhite & & & \WhitePawnOnWhite & \\
\hline
\end{tabularx}
\end{array}
\quad \right\rangle
\quad = 1+0 = 1.
\]
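A hypothetical sketch of this $\mathbb{Z}_2$-linear structure (encoding a board as a word of \texttt{p}'s for pawns and \texttt{q}'s for empty squares, an illustration of our own): a state is a finite set of boards, since a board appearing twice cancels over $\mathbb{Z}_2$, and the pairing extends bilinearly.

```python
def pawns(w):
    return [i for i, c in enumerate(w) if c == "p"]

def inner(w0, w1):
    # <w0|w1> = 1 iff each pawn of w0 can drift weakly rightwards to w1
    a, b = pawns(w0), pawns(w1)
    ok = len(w0) == len(w1) and len(a) == len(b)
    return 1 if ok and all(x <= y for x, y in zip(a, b)) else 0

def add(s0, s1):
    # Z2 sum of states: boards appearing twice cancel
    return s0 ^ s1

def inner_state(s0, s1):
    # extend the pairing bilinearly, with coefficients mod 2
    return sum(inner(w0, w1) for w0 in s0 for w1 in s1) % 2

s = add(frozenset({"qppqpq"}), frozenset({"ppqqpq"}))
print(inner_state(frozenset({"pqppqq"}), s))  # 1 + 0 = 1, as in the example
```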
Importantly, note that the ``inner product'' is \emph{not symmetric}. But it is asymmetric in a very interesting way, as we'll see. It is a ``booleanized'' \emph{partial order} on the setups of the chessboard. In fact, the configurations of a chessboard form a \emph{complete lattice} in a natural way.
\subsection{Dirac sea and anti-pawns}
Actually, there is some strong symmetry in QPD. We can think of a chessboard with no pawns as a thriving sea of anti-pawns. We can think of any square not occupied by a pawn as containing an anti-pawn. An anti-pawn is just an ``absence of pawn''. This is very similar to the idea of the ``Dirac sea'' of matter and anti-matter. We draw pawns as white and anti-pawns as black.
\[
\begin{tabularx}{0.4\textwidth}{|X|X|X|X|X|X|}
\hline
\WhitePawnOnWhite & & \WhitePawnOnWhite & \WhitePawnOnWhite & & \\
\hline
\end{tabularx}
=
\begin{tabularx}{0.4\textwidth}{|X|X|X|X|X|X|}
\hline
\WhitePawnOnWhite & \BlackPawnOnWhite & \WhitePawnOnWhite & \WhitePawnOnWhite & \BlackPawnOnWhite & \BlackPawnOnWhite \\
\hline
\end{tabularx}
\]
Note that when a pawn moves right, an anti-pawn moves left. So we can get from one setup of the chessboard to another, iff all pawns move right, iff all anti-pawns move left.
So white and black are not exactly like the opposing sides of chess, but they do move in opposite directions. Each is the absence of the other.
In each of the examples above, we imagine we have 6 pawn-particles: 3 pawns, and 3 anti-pawns. With $n$ as the number of squares on the chessboard / number of pawn-particles, and $n_p$ the number of pawns, we let $n_q = n - n_p$ be the number of anti-pawns.
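This pawn/anti-pawn duality can be checked exhaustively on small boards. A sketch of our own (encoding \texttt{p} for a pawn, \texttt{q} for an anti-pawn): reachability via pawns drifting right coincides with reachability via anti-pawns drifting left.

```python
from itertools import product

def positions(w, c):
    return [i for i, ch in enumerate(w) if ch == c]

def reach_pawns_right(w0, w1):
    # each pawn of w0 moves weakly rightwards to the matching pawn of w1
    a, b = positions(w0, "p"), positions(w1, "p")
    return len(a) == len(b) and all(x <= y for x, y in zip(a, b))

def reach_antipawns_left(w0, w1):
    # each anti-pawn of w0 moves weakly leftwards
    a, b = positions(w0, "q"), positions(w1, "q")
    return len(a) == len(b) and all(x >= y for x, y in zip(a, b))

boards = ["".join(t) for t in product("pq", repeat=5)]
assert all(reach_pawns_right(x, y) == reach_antipawns_left(x, y)
           for x in boards for y in boards)
print("pawns move right iff anti-pawns move left")
```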
\subsection{The initial creation and annihilation of pawns}
Being a quantum field theory of pawns, QPD will have to allow us to create and annihilate pawns and anti-pawns.
We'll define the \emph{initial pawn creation operator} $a_{p,0}^*$ to take a chessboard setup, and adjoin a new \emph{initial} square to the chessboard --- i.e. at the left-hand side. This new square/particle has a pawn on it (is a pawn-particle).
\[
a_{p,0}^* \quad
\begin{tabularx}{0.4\textwidth}{|X|X|X|X|X|X|}
\hline
\WhitePawnOnWhite & \BlackPawnOnWhite & \WhitePawnOnWhite & \WhitePawnOnWhite & \BlackPawnOnWhite & \BlackPawnOnWhite \\
\hline
\end{tabularx}
=
\begin{tabularx}{0.48\textwidth}{|X|X|X|X|X|X|X|}
\hline
\WhitePawnOnWhite & \WhitePawnOnWhite & \BlackPawnOnWhite & \WhitePawnOnWhite & \WhitePawnOnWhite & \BlackPawnOnWhite & \BlackPawnOnWhite \\
\hline
\end{tabularx}
\]
We'll also define the \emph{initial pawn annihilation operator} $a_{p,0}$ to delete an initial pawn. That is, it will annihilate the leftmost square from the chessboard --- the chessboard shrinks and a particle disappears --- provided that square contains a pawn. What does the QPD universe do if the initial square does not contain a pawn, but an anti-pawn? Why, it returns an error of course: error 404, universe not found, which mod $2$ is $0$.
So, for example:
\begin{align*}
a_{p,0}
\quad
\begin{tabularx}{0.4\textwidth}{|X|X|X|X|X|X|}
\hline
\WhitePawnOnWhite & \BlackPawnOnWhite & \WhitePawnOnWhite & \WhitePawnOnWhite & \BlackPawnOnWhite & \BlackPawnOnWhite \\
\hline
\end{tabularx}
&=
\begin{tabularx}{0.33\textwidth}{|X|X|X|X|X|}
\hline
\BlackPawnOnWhite & \WhitePawnOnWhite & \WhitePawnOnWhite & \BlackPawnOnWhite & \BlackPawnOnWhite \\
\hline
\end{tabularx} \\
a_{p,0} \quad
\begin{tabularx}{0.33\textwidth}{|X|X|X|X|X|}
\hline
\BlackPawnOnWhite & \WhitePawnOnWhite & \WhitePawnOnWhite & \BlackPawnOnWhite & \BlackPawnOnWhite \\
\hline
\end{tabularx}
&= 0.
\end{align*}
The vacuum $\emptyset$ is the state of the QPD universe with no chessboard. (This is different from $0$.) The universe is a lonely, empty place without a chessboard. But you can make a bang and start your universal chessboard by creating an initial pawn.
\[
a_{p,0}^* \quad \emptyset =
\begin{tabularx}{0.07\textwidth}{|X|}
\hline
\WhitePawnOnWhite \\
\hline
\end{tabularx}
\]
Everything we have said for pawns applies also to anti-pawns. So there is an \emph{initial anti-pawn creation operator} $a_{q,0}^\dagger$, which creates a new leftmost square with an anti-pawn. There is also an \emph{initial anti-pawn annihilation operator} $a_{q,0}$, which annihilates a leftmost anti-pawn, or else returns error $0$.
Using initial pawn and anti-pawn creation operators you can build any chessboard setup out of nothing.
\[
a_{p,0}^* a_{q,0}^\dagger a_{p,0}^* a_{p,0}^* a_{q,0}^\dagger a_{q,0}^\dagger \quad \emptyset \quad = \quad
\begin{tabularx}{0.4\textwidth}{|X|X|X|X|X|X|}
\hline
\WhitePawnOnWhite & \BlackPawnOnWhite & \WhitePawnOnWhite & \WhitePawnOnWhite & \BlackPawnOnWhite & \BlackPawnOnWhite \\
\hline
\end{tabularx}
\]
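A sketch of the initial operators, in an illustrative encoding of our own (\texttt{p} a pawn, \texttt{q} an anti-pawn, the empty string the vacuum $\emptyset$, and \texttt{None} playing the role of the error $0$):

```python
def a_p0_star(w):    # a_{p,0}^*: adjoin an initial square carrying a pawn
    return None if w is None else "p" + w

def a_q0_dagger(w):  # a_{q,0}^\dagger: adjoin an initial square carrying an anti-pawn
    return None if w is None else "q" + w

def a_p0(w):         # a_{p,0}: annihilate an initial pawn; None is the error 0
    if w is None or not w.startswith("p"):
        return None
    return w[1:]

# a_{p,0}^* a_{q,0}^† a_{p,0}^* a_{p,0}^* a_{q,0}^† a_{q,0}^† applied to the
# vacuum "" --- the rightmost operator acts first:
ops = [a_p0_star, a_q0_dagger, a_p0_star, a_p0_star, a_q0_dagger, a_q0_dagger]
w = ""
for op in reversed(ops):
    w = op(w)
print(w)              # pqppqq
print(a_p0(w))        # qppqq
print(a_p0(a_p0(w)))  # None: no initial pawn to annihilate
```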
The $*$ and $\dagger$ on creation operators refer to \emph{adjoints}, which we discuss next. In the world of lattices and partial orders these are known as \emph{Galois connections}. (See \cite{Me10_Sutured_TQFT} for further details.)
\subsection{Adjoints}
It's standard notation in physics to write annihilation operators with an $a$ and creation operators as $a^*$ or $a^\dagger$. And these operators are supposed to be related: they are \emph{adjoint} with respect to the inner product.
\[
\langle a x | y \rangle = \langle x | a^* y \rangle,
\quad
\langle x | ay \rangle = \langle a^* x | y \rangle.
\]
In our case, the ``inner product'' isn't really an inner product because it's not symmetric. However, it is bilinear and nondegenerate (check!). An operator $f$ on chessboards can have an adjoint --- but, in fact, \emph{two} adjoints. We'll write one of these as $f^*$ and one of them as $f^\dagger$, as shown below.
\[
\langle f x | y \rangle = \langle x | f^* y \rangle,
\quad
\langle x | fy \rangle = \langle f^\dagger x | y \rangle.
\]
Note this means that $f^{* \dagger} = f^{\dagger *} = f$.
In general $f^{**} \neq f$. Some interesting things happen as we repeatedly take adjoints.
\subsection{Initial creation and annihilation are adjoint}
We can now see that our initial pawn creation $a_{p,0}^*$ is indeed the $*$-adjoint of the initial pawn annihilation $a_{p,0}$.
\[
\langle a_{p,0} x | y \rangle = \langle x | a_{p,0}^* y \rangle
\]
Let's see why. Here $x$ and $y$ are chessboards. Note that if the leftmost square of $x$ is empty (contains an anti-pawn), so that $x = \begin{tabularx}{0.13\textwidth}{|X|X}
\hline
\BlackPawnOnWhite & $x_1 \cdots$ \\
\hline
\end{tabularx}$, then
\[
\begin{array}{rcl}
\left\langle a_{p,0} x \quad | \quad y \right\rangle &=& \left\langle a_{p,0} \quad
\begin{tabularx}{0.13\textwidth}{|X|X}
\hline
\BlackPawnOnWhite & $x_1 \cdots$ \\
\hline
\end{tabularx}
\quad | \quad y \right \rangle
=
\langle 0 \quad | \quad y \rangle = 0 \\
\langle x \quad | \quad a_{p,0}^* y \rangle &=& \left\langle \quad
\begin{tabularx}{0.13\textwidth}{|X|X}
\hline
\BlackPawnOnWhite & $x_1 \cdots$ \\
\hline
\mathbf{e}nd{tabularx}
\quad | \quad
\begin{tabularx}{0.13\textwidth}{|X|X}
\hline
\WhitePawnOnWhite & $y \cdots$ \\
\hline
\end{tabularx}
\quad
\right\rangle
= 0.
\end{array}
\]
The first expression equals zero because we tried to annihilate an initial pawn where there was none and the universe returned an error. The second expression equals zero because a pawn would have to move backwards to get to the leftmost square.
On the other hand, if the leftmost square of $x$ contains a pawn, so that $x = \begin{tabularx}{0.13\textwidth}{|X|X}
\hline
\WhitePawnOnWhite & $x_1 \cdots$ \\
\hline
\end{tabularx}$, then
\[
\begin{array}{rcl}
\left\langle a_{p,0} x \quad | \quad y \right\rangle &=& \left\langle a_{p,0} \quad
\begin{tabularx}{0.13\textwidth}{|X|X}
\hline
\WhitePawnOnWhite & $x_1 \cdots$ \\
\hline
\end{tabularx}
\quad | \quad y \right\rangle
=
\left\langle x_1 \quad | \quad y \right\rangle \\
\left\langle x \quad | \quad a_{p,0}^* y \right\rangle &=&
\left\langle \quad
\begin{tabularx}{0.13\textwidth}{|X|X}
\hline
\WhitePawnOnWhite & $x_1 \cdots$ \\
\hline
\end{tabularx}
\quad | \quad
\begin{tabularx}{0.13\textwidth}{|X|X}
\hline
\WhitePawnOnWhite & $y \cdots$ \\
\hline
\end{tabularx}
\quad
\right\rangle
=
\left\langle x_1 \quad | \quad y \right\rangle
\end{array}
\]
So we've shown that $a_{p,0}^*$ is indeed the $*$ adjoint of $a_{p,0}$.
To (try to) put it succinctly: $a_{p,0}^* y$ begins with a pawn, so $\langle x | a_{p,0}^* y \rangle$ is only nonzero if $x$ also begins with a pawn; in which case $a_{p,0} x$ removes that pawn so $\langle a_{p,0} x | y \rangle$ gives the same result.
It's not difficult to check by a similar argument that $a_{q,0}^\dagger$, the initial anti-pawn creation operator, is indeed the $\dagger$-adjoint of the initial anti-pawn annihilation operator $a_{q,0}$.
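The adjunction just verified by hand can also be confirmed by brute force over all small boards. A sketch (our own encoding: boards as words of \texttt{p}'s and \texttt{q}'s, \texttt{None} for the error $0$):

```python
from itertools import product

def pawns(w):
    return [i for i, c in enumerate(w) if c == "p"]

def inner(w0, w1):
    # <w0|w1>; the error state None pairs to 0 with everything
    if w0 is None or w1 is None or len(w0) != len(w1):
        return 0
    a, b = pawns(w0), pawns(w1)
    return 1 if len(a) == len(b) and all(x <= y for x, y in zip(a, b)) else 0

def a_p0(w):       # initial pawn annihilation
    return w[1:] if w.startswith("p") else None

def a_p0_star(w):  # initial pawn creation
    return "p" + w

xs = ["".join(t) for t in product("pq", repeat=4)]
ys = ["".join(t) for t in product("pq", repeat=3)]
assert all(inner(a_p0(x), y) == inner(x, a_p0_star(y)) for x in xs for y in ys)
print("<a_{p,0} x | y> = <x | a_{p,0}^* y> for all boards x of length 4")
```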
\subsection{Adjoints and adjoints and adjoints}
We have seen one adjoint $a_{p,0}^*$, and by definition $a_{p,0}^{* \dagger} = a_{p,0}$; but what happens if we take the $*$ adjoint twice? In other words, what operator $f$ satisfies
\[
\langle a_{p,0}^* x | y \rangle = \langle x | f y \rangle?
\]
Note $a_{p,0}^* x$ just puts a pawn at the left end of $x$, and we compare $a_{p,0}^* x$ to $y$. Clearly the newly-added first pawn can't move left. The real question in computing $\langle a_{p,0}^* x | y \rangle$ is what happens to all the other pawns. In fact, if you \emph{eliminate} the first pawn from the left in both $a_{p,0}^* x$ and $y$, you'll reduce to the problem of looking at the other pawns.
\[
\left\langle a_{p,0}^* \quad
\begin{tabularx}{0.33\textwidth}{|X|X|X|X|X|}
\hline
& \WhitePawnOnWhite & \WhitePawnOnWhite & & \\
\hline
\end{tabularx}
\quad | \quad
\begin{tabularx}{0.4\textwidth}{|X|X|X|X|X|X|}
\hline
& \WhitePawnOnWhite & \WhitePawnOnWhite & & \WhitePawnOnWhite & \\
\hline
\end{tabularx}
\quad \right\rangle
\]
\[
=
\left\langle \quad
\begin{tabularx}{0.4\textwidth}{|X|X|X|X|X|X|}
\hline
\textcolor{red}{\WhitePawnOnWhite} & & \WhitePawnOnWhite & \WhitePawnOnWhite & & \\
\hline
\end{tabularx}
\quad | \quad
\begin{tabularx}{0.4\textwidth}{|X|X|X|X|X|X|}
\hline
& \textcolor{red}{\WhitePawnOnWhite} & \WhitePawnOnWhite & & \WhitePawnOnWhite & \\
\hline
\end{tabularx}
\quad \right\rangle
\]
\[
= \left\langle \quad
\begin{tabularx}{0.33\textwidth}{|X|X|X|X|X|}
\hline
& \WhitePawnOnWhite & \WhitePawnOnWhite & & \\
\hline
\end{tabularx}
\quad | \quad
\begin{tabularx}{0.33\textwidth}{|X|X|X|X|X|}
\hline
& \WhitePawnOnWhite & & \WhitePawnOnWhite & \\
\hline
\end{tabularx}
\quad \right\rangle
\quad = \quad 1
\]
We conclude that $a_{p,0}^{* *}$ is the operator which \emph{annihilates the first pawn on the chessboard}, from left to right. We'll write this as $a_{p,1}$.
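That $a_{p,0}^{**}$ deletes the first pawn can also be confirmed exhaustively on small boards. In a sketch of our own (boards encoded as words of \texttt{p}'s and \texttt{q}'s; \texttt{None} plays the error $0$), we check $\langle a_{p,0}^* x \mid y \rangle = \langle x \mid a_{p,1} y \rangle$:

```python
from itertools import product

def pawns(w):
    return [i for i, c in enumerate(w) if c == "p"]

def inner(w0, w1):
    if w0 is None or w1 is None or len(w0) != len(w1):
        return 0
    a, b = pawns(w0), pawns(w1)
    return 1 if len(a) == len(b) and all(x <= y for x, y in zip(a, b)) else 0

def a_p0_star(w):  # initial pawn creation
    return "p" + w

def a_p1(w):
    """Annihilate the first pawn (and its square); None if there is no pawn."""
    i = w.find("p")
    return None if i < 0 else w[:i] + w[i + 1:]

xs = ["".join(t) for t in product("pq", repeat=3)]
ys = ["".join(t) for t in product("pq", repeat=4)]
assert all(inner(a_p0_star(x), y) == inner(x, a_p1(y)) for x in xs for y in ys)
print("a_{p,0}^{**} = a_{p,1} on all boards of length 4")
```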
But why stop there? What is $a_{p,1}^*$? This is the operator $f$ which satisfies
\[
\langle a_{p,1} x | y \rangle = \langle x | fy \rangle.
\]
Given chessboards $x$ and $y$, you delete the first pawn from $x$, and compare $a_{p,1} x$ and $y$. You now want to come up with a chessboard $fy$, so that comparing the deleted chessboard $a_{p,1} x$ to $y$ always gives the same result as comparing $x$ to $fy$. Well, you certainly don't want to disturb the relative positions of the pawns. You just want to quietly slip a first pawn into $y$ in such a way that the first pawn of $x$ can easily move there. And the place to slip in that pawn is to \emph{double the first pawn in $y$}.
\[
\left\langle \quad a_{p,1} \quad
\begin{tabularx}{0.4\textwidth}{|X|X|X|X|X|X|X|}
\hline
& \WhitePawnOnWhite & & \WhitePawnOnWhite & & & \WhitePawnOnWhite \\
\hline
\end{tabularx}
\quad | \quad
\begin{tabularx}{0.33\textwidth}{|X|X|X|X|X|X|}
\hline
& & & \WhitePawnOnWhite & & \WhitePawnOnWhite \\
\hline
\end{tabularx}
\quad \right\rangle
\]
\[
= \left\langle \quad
\begin{tabularx}{0.33\textwidth}{|X|X|X|X|X|X|}
\hline
& & \WhitePawnOnWhite & & & \WhitePawnOnWhite \\
\hline
\end{tabularx}
\quad | \quad
\begin{tabularx}{0.33\textwidth}{|X|X|X|X|X|X|}
\hline
& & & \WhitePawnOnWhite & & \WhitePawnOnWhite \\
\hline
\end{tabularx}
\quad \right\rangle
\]
\[
= \left\langle \quad
\begin{tabularx}{0.4\textwidth}{|X|X|X|X|X|X|X|}
\hline
& \textcolor{green}{\WhitePawnOnWhite} & & \WhitePawnOnWhite & & & \WhitePawnOnWhite \\
\hline
\end{tabularx}
\quad | \quad
\begin{tabularx}{0.4\textwidth}{|X|X|X|X|X|X|X|}
\hline
& & & \textcolor{green}{\WhitePawnOnWhite} & \WhitePawnOnWhite & & \WhitePawnOnWhite \\
\hline
\end{tabularx}
\quad \right\rangle
\]
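This defining relation of $a_{p,1}^*$ can likewise be brute-forced, at least over boards $y$ containing a pawn, so that ``doubling the first pawn'' makes sense. (The encoding of boards as \texttt{p}/\texttt{q} words, with \texttt{None} for the error $0$, is our own illustration.)

```python
from itertools import product

def pawns(w):
    return [i for i, c in enumerate(w) if c == "p"]

def inner(w0, w1):
    if w0 is None or w1 is None or len(w0) != len(w1):
        return 0
    a, b = pawns(w0), pawns(w1)
    return 1 if len(a) == len(b) and all(x <= y for x, y in zip(a, b)) else 0

def a_p1(w):       # annihilate the first pawn and its square
    i = w.find("p")
    return None if i < 0 else w[:i] + w[i + 1:]

def a_p1_star(w):  # double the first pawn
    i = w.find("p")
    return None if i < 0 else w[:i] + "p" + w[i:]

xs = ["".join(t) for t in product("pq", repeat=4)]
ys = ["".join(t) for t in product("pq", repeat=3) if "p" in t]
assert all(inner(a_p1(x), y) == inner(x, a_p1_star(y)) for x in xs for y in ys)
print("<a_{p,1} x | y> = <x | a_{p,1}^* y> whenever y contains a pawn")
```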
\subsection{Round-up of adjoints}
We can continue in this fashion, computing more and more adjoints. Given what's above, it might not be too surprising to discover that the iterated adjoints of $a_{p,0}$ can be expressed as
\[
\begin{array}{ccccccccccccccccccc}
a_{p,0} & \rightarrow & a_{p,0}^* & \rightarrow & a_{p,1} & \rightarrow & a_{p,1}^* & \rightarrow & a_{p,2} & \rightarrow & \cdots & \rightarrow & a_{p,n_p} & \rightarrow & a_{p,n_p}^* & \rightarrow & a_{p,\Omega} & \rightarrow & a_{p,\Omega}^*
\end{array}
\]
where the arrows represent the operation of taking the $*$-adjoint. The operators $a_{p,i}$ for $1 \leq i \leq n_p$ delete the $i$'th pawn. The operators $a_{p,i}^*$ double the $i$'th pawn. And at the end we obtain \emph{final} creation and annihilation operators, which we've denoted $a_{p,\Omega}^*$ and $a_{p,\Omega}$. These are to the right end of a chessboard what initial creation and annihilation operators are to the left end.
We can do just the same for the anti-pawn creation and annihilation operators. Drawing a similar diagram, with arrows representing the $*$ adjoint (the inverse of the $\dagger$ adjoint), we obtain something similar (although in a different direction).
\[
\begin{array}{ccccccccccccccccccc}
a_{q,\Omega}^\dagger & \rightarrow & a_{q,\Omega} & \rightarrow & a_{q,n_q}^\dagger & \rightarrow & a_{q,n_q} & \rightarrow & \cdots & \rightarrow & a_{q,2} & \rightarrow & a_{q,1}^\dagger & \rightarrow & a_{q,1} & \rightarrow & a_{q,0}^\dagger & \rightarrow & a_{q,0}
\end{array}
\]
It follows that
\[
a_{p,0}^{*^{2n_p + 2}} = a_{p,\Omega} \quad \text{and}
\quad
a_{q,0}^{\dagger^{2n_q + 2}} = a_{q,\Omega}.
\]
But there's no reason to stop there; you can just keep taking adjoints. The description of these adjoints in terms of pawns, however, becomes more complicated.
It turns out that these adjoints are \emph{periodic}.
\begin{thm}
On a chessboard with $n$ squares,
\[
a_{p,0}^{*^{2n+2}} = a_{p,0}, \quad a_{q,0}^{*^{2n+2}} = a_{q,0}.
\]
\end{thm}
We might say that ``$*^{2n+2} = 1$''. Our proof of this involves a little more geometry, by considering the combinatorics of chords on discs, as discussed in the next section.
The various creation and annihilation operators in different positions actually satisfy the relations of a \emph{simplicial set}: see \cite{Me10_Sutured_TQFT} for details.
\subsection{Chessboards and words}
By now ``chessboard notation'' is becoming somewhat unwieldy. In fact, we can denote any chessboard by a sequence of $p$'s and $q$'s, where a $p$ is a pawn and $q$ an anti-pawn.
\[
qpqq
\quad
\leftrightarrow
\quad
\begin{tabularx}{0.3\textwidth}{|X|X|X|X|}
\hline
\BlackPawnOnWhite & \WhitePawnOnWhite & \BlackPawnOnWhite & \BlackPawnOnWhite \\
\hline
\end{tabularx}
\]
\[
\langle pqppqq | qppqpq \rangle =
\]
\[
\left\langle \quad
\begin{tabularx}{0.4\textwidth}{|>{\centering}X|>{\centering}X|>{\centering}X|>{\centering}X|>{\centering}X|>{\centering}X|}
\hline
\WhitePawnOnWhite & & \WhitePawnOnWhite & \WhitePawnOnWhite & & \\
\hline
\end{tabularx}
\quad | \quad
\begin{tabularx}{0.4\textwidth}{|X|X|X|X|X|X|}
\hline
& \WhitePawnOnWhite & \WhitePawnOnWhite & & \WhitePawnOnWhite & \\
\hline
\end{tabularx}
\quad \right\rangle
\quad = \quad 1.
\]
\section{Chords on discs}
\subsection{Curves on a disc}
We now consider curves on discs and cylinders.
We consider a disc with $2n+2$ points marked on the boundary, which we number clockwise by integers modulo $2n+2$. The point numbered $0$ is considered a basepoint. We consider \mathbf{e}mph{chord diagrams}, which are collections of non-intersecting curves joining the points up in pairs. We only consider them up to isotopy relative to the boundary.
Note that each chord in a chord diagram must connect points with opposite parity. We can shade the complementary regions of a chord diagram, so that each boundary interval $(2i, 2i+1)$ is shaded black, and each interval $(2i-1,2i)$ is shaded white.
\[
\begin{tikzpicture}[
scale=0.9,
suture/.style={thick, draw=red},
boundary/.style={ultra thick},
vertex/.style={draw=red, fill=red}]
\coordinate [label = above:{$0$}] (12) at (90:2);
\coordinate [label = above right:{$1$}] (1) at (60:2);
\coordinate [label = above right:{$2$}] (2) at (30:2);
\coordinate [label = right:{$3$}] (3) at (0:2);
\coordinate [label = below right:{$4$}] (4) at (-30:2);
\coordinate [label = below right:{$5$}] (5) at (-60:2);
\coordinate [label = below:{$6$}] (6) at (-90:2);
\coordinate [label = below left:{$-5$}] (7) at (-120:2);
\coordinate [label = below left:{$-4$}] (8) at (-150:2);
\coordinate [label = left:{$-3$}] (9) at (-180:2);
\coordinate [label = above left:{$-2$}] (10) at (-210:2);
\coordinate [label = above left:{$-1$}] (11) at (-240:2);
\filldraw[fill=black!10!white, draw=none] (11) to [bend left=90] (10) arc (150:120:2) -- cycle;
\filldraw[fill=black!10!white, draw=none] (12) arc (90:60:2) to [bend right=90] (2) arc (30:0:2) to [bend right=15] (8) arc (210:180:2) to [bend right=45] (12);
\filldraw[fill=black!10!white, draw=none] (4) to [bend right=90] (5) arc (-60:-30:2);
\filldraw[fill=black!10!white, draw=none] (6) to [bend right=90] (7) arc (-120:-90:2);
\draw[suture] (9) to [bend right=45] (12);
\draw[suture] (11) to [bend left=90] (10);
\draw[suture] (1) to [bend right=90] (2);
\draw[suture] (3) to [bend right=15] (8);
\draw[suture] (4) to [bend right=90] (5);
\draw[suture] (6) to [bend right=90] (7);
\draw [boundary] (0,0) circle (2 cm);
\foreach \point in {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12}
\fill [vertex] (\point) circle (2pt);
\end{tikzpicture}
\]
Now we will declare that the effect of the \emph{creation operator} $a_{p,0}^*$ on a chord diagram is to insert a new chord between the points currently numbered $0$ and $1$; the new points are numbered $-1$ and $0$ and so the points on the left need to be renumbered. The existing chords remain in place but are pushed around the disc as shown.
\[
\begin{tikzpicture}[
scale=0.9,
suture/.style={thick, draw=red},
boundary/.style={ultra thick},
vertex/.style={draw=red, fill=red}]
\draw (-4,0) node {$a_{p,0}^*$};
\draw [->] (72:2.5) -- (66:2.3);
\draw [->] (72:2.5) -- (78:2.3);
\coordinate [label = above:{$0$}] (0) at (90:2);
\coordinate [label = above right:{$1$}] (1) at (54:2);
\coordinate [label = above right:{$2$}] (2) at (18:2);
\coordinate [label = right:{$3$}] (3) at (-18:2);
\coordinate [label = below right:{$4$}] (4) at (-54:2);
\coordinate [label = below right:{$5$}] (5) at (-90:2);
\coordinate [label = below:{$-4$}] (6) at (-126:2);
\coordinate [label = below left:{$-3$}] (7) at (-162:2);
\coordinate [label = below left:{$-2$}] (8) at (162:2);
\coordinate [label = left:{$-1$}] (9) at (126:2);
\filldraw[fill=black!10!white, draw=none] (0) to [bend right=90] (1) arc (54:90:2);
\filldraw[fill=black!10!white, draw=none] (9) to [bend right=30] (2) arc (18:-18:2) -- (8) arc (162:126:2);
\filldraw[fill=black!10!white, draw=none] (4) to [bend right=90] (5) arc (-90:-54:2);
\filldraw[fill=black!10!white, draw=none] (6) to [bend right=90] (7) arc (-162:-126:2);
\draw[suture] (9) to [bend right=30] (2);
\draw[suture] (0) to [bend right=90] (1);
\draw[suture] (3) -- (8);
\draw[suture] (4) to [bend right=90] (5);
\draw[suture] (6) to [bend right=90] (7);
\draw [boundary] (0,0) circle (2 cm);
\foreach \point in {0, 1, 2, 3, 4, 5, 6, 7, 8, 9}
\fill [vertex] (\point) circle (2pt);
\draw (4,0) node {$=$};
\coordinate [label = above:{$0$}] (12b) at ($ (90:2) + (8,0) $);
\coordinate [label = above right:{$1$}] (1b) at ($ (60:2) + (8,0) $);
\coordinate [label = above right:{$2$}] (2b) at ($ (30:2) + (8,0) $);
\coordinate [label = right:{$3$}] (3b) at ($ (0:2) + (8,0) $);
\coordinate [label = below right:{$4$}] (4b) at ($ (-30:2) + (8,0) $);
\coordinate [label = below right:{$5$}] (5b) at ($ (-60:2) + (8,0) $);
\coordinate [label = below:{$6$}] (6b) at ($ (-90:2) + (8,0) $);
\coordinate [label = below left:{$-5$}] (7b) at ($ (-120:2) + (8,0) $);
\coordinate [label = below left:{$-4$}] (8b) at ($ (-150:2) + (8,0) $);
\coordinate [label = left:{$-3$}] (9b) at ($ (-180:2) + (8,0) $);
\coordinate [label = above left:{$-2$}] (10b) at ($ (-210:2) + (8,0) $);
\coordinate [label = above left:{$-1$}] (11b) at ($ (-240:2) + (8,0) $);
\filldraw[fill=black!10!white, draw=none] (12b) arc (90:60:2) to [bend left=45] (10b) arc (150:120:2) to [bend right=90] (12b);
\filldraw[fill=black!10!white, draw=none] (2b) arc (30:0:2) to [bend right=15] (8b) arc (210:180:2) to [bend right=15] (2b);
\filldraw[fill=black!10!white, draw=none] (4b) to [bend right=90] (5b) arc (-60:-30:2);
\filldraw[fill=black!10!white, draw=none] (6b) to [bend right=90] (7b) arc (-120:-90:2);
\draw[suture] (9b) to [bend right=15] (2b);
\draw[suture] (1b) to [bend left=45] (10b);
\draw[suture] (11b) to [bend right=90] (12b);
\draw[suture] (3b) to [bend right=15] (8b);
\draw[suture] (4b) to [bend right=90] (5b);
\draw[suture] (6b) to [bend right=90] (7b);
\draw [boundary] (8,0) circle (2 cm);
\foreach \point in {1b, 2b, 3b, 4b, 5b, 6b, 7b, 8b, 9b, 10b, 11b, 12b}
\fill [vertex] (\point) circle (2pt);
\end{tikzpicture}
\]
More generally, we will declare that the creation operator $a_{p,i}^*$ inserts a new chord between the points currently labelled $-2i$ and $-2i+1$; the new points are labelled $-2i-1$ and $-2i$, and points are renumbered accordingly.
For example,
\[
\begin{tikzpicture}[
scale=0.9,
suture/.style={thick, draw=red},
boundary/.style={ultra thick},
vertex/.style={draw=red, fill=red}]
\draw (-4,0) node {$a_{p,1}^*$};
\draw [->] (144:2.5) -- (138:2.3);
\draw [->] (144:2.5) -- (150:2.3);
\coordinate [label = above:{$0$}] (0) at (90:2);
\coordinate [label = above right:{$1$}] (1) at (54:2);
\coordinate [label = above right:{$2$}] (2) at (18:2);
\coordinate [label = right:{$3$}] (3) at (-18:2);
\coordinate [label = below right:{$4$}] (4) at (-54:2);
\coordinate [label = below right:{$5$}] (5) at (-90:2);
\coordinate [label = below:{$-4$}] (6) at (-126:2);
\coordinate [label = below left:{$-3$}] (7) at (-162:2);
\coordinate [label = below left:{$-2$}] (8) at (162:2);
\coordinate [label = left:{$-1$}] (9) at (126:2);
\filldraw[fill=black!10!white, draw=none] (0) to [bend right=90] (1) arc (54:90:2);
\filldraw[fill=black!10!white, draw=none] (9) to [bend right=30] (2) arc (18:-18:2) -- (8) arc (162:126:2);
\filldraw[fill=black!10!white, draw=none] (4) to [bend right=90] (5) arc (-90:-54:2);
\filldraw[fill=black!10!white, draw=none] (6) to [bend right=90] (7) arc (-162:-126:2);
\draw[suture] (9) to [bend right=30] (2);
\draw[suture] (0) to [bend right=90] (1);
\draw[suture] (3) -- (8);
\draw[suture] (4) to [bend right=90] (5);
\draw[suture] (6) to [bend right=90] (7);
\draw [boundary] (0,0) circle (2 cm);
\foreach \point in {0, 1, 2, 3, 4, 5, 6, 7, 8, 9}
\fill [vertex] (\point) circle (2pt);
\draw (4,0) node {$=$};
\coordinate [label = above:{$0$}] (12b) at ($ (90:2) + (8,0) $);
\coordinate [label = above right:{$1$}] (1b) at ($ (60:2) + (8,0) $);
\coordinate [label = above right:{$2$}] (2b) at ($ (30:2) + (8,0) $);
\coordinate [label = right:{$3$}] (3b) at ($ (0:2) + (8,0) $);
\coordinate [label = below right:{$4$}] (4b) at ($ (-30:2) + (8,0) $);
\coordinate [label = below right:{$5$}] (5b) at ($ (-60:2) + (8,0) $);
\coordinate [label = below:{$6$}] (6b) at ($ (-90:2) + (8,0) $);
\coordinate [label = below left:{$-5$}] (7b) at ($ (-120:2) + (8,0) $);
\coordinate [label = below left:{$-4$}] (8b) at ($ (-150:2) + (8,0) $);
\coordinate [label = left:{$-3$}] (9b) at ($ (-180:2) + (8,0) $);
\coordinate [label = above left:{$-2$}] (10b) at ($ (-210:2) + (8,0) $);
\coordinate [label = above left:{$-1$}] (11b) at ($ (-240:2) + (8,0) $);
\filldraw[fill=black!10!white, draw=none] (12b) arc (90:60:2) to [bend left=90] (12b);
\filldraw[fill=black!10!white, draw=none] (2b) arc (30:0:2) to [bend right=15] (8b) arc (210:180:2) to [bend right=90] (10b) arc (150:120:2) to [bend right=45] (2b);
\filldraw[fill=black!10!white, draw=none] (4b) to [bend right=90] (5b) arc (-60:-30:2);
\filldraw[fill=black!10!white, draw=none] (6b) to [bend right=90] (7b) arc (-120:-90:2);
\draw[suture] (9b) to [bend right=90] (10b);
\draw[suture] (1b) to [bend left=90] (12b);
\draw[suture] (11b) to [bend right=45] (2b);
\draw[suture] (3b) to [bend right=15] (8b);
\draw[suture] (4b) to [bend right=90] (5b);
\draw[suture] (6b) to [bend right=90] (7b);
\draw [boundary] (8,0) circle (2 cm);
\foreach \point in {1b, 2b, 3b, 4b, 5b, 6b, 7b, 8b, 9b, 10b, 11b, 12b}
\fill [vertex] (\point) circle (2pt);
\end{tikzpicture}
\]
We will declare that the effect of the \emph{annihilation operator} $a_{p,0}$ on a chord diagram is to ``close off'' the points $0$ and $1$ by joining them with a chord, and pushing this into the disc. Points numbered $3, 4, \ldots$ retain their numbering; the points labelled $-1, -2, \ldots$ have their label increased by $2$. We obtain a diagram with two fewer points on the boundary. It might have a closed curve as well as chords.
For example,
\[
\begin{tikzpicture}[
scale=0.9,
suture/.style={thick, draw=red},
boundary/.style={ultra thick},
vertex/.style={draw=red, fill=red}]
\draw (-4,0) node {$a_{p,0}$};
\coordinate [label = above:{$0$}] (12) at (90:2);
\coordinate [label = above right:{$1$}] (1) at (60:2);
\coordinate [label = above right:{$2$}] (2) at (30:2);
\coordinate [label = right:{$3$}] (3) at (0:2);
\coordinate [label = below right:{$4$}] (4) at (-30:2);
\coordinate [label = below right:{$5$}] (5) at (-60:2);
\coordinate [label = below:{$6$}] (6) at (-90:2);
\coordinate [label = below left:{$-5$}] (7) at (-120:2);
\coordinate [label = below left:{$-4$}] (8) at (-150:2);
\coordinate [label = left:{$-3$}] (9) at (-180:2);
\coordinate [label = above left:{$-2$}] (10) at (-210:2);
\coordinate [label = above left:{$-1$}] (11) at (-240:2);
\filldraw[fill=black!10!white, draw=none] (12) arc (90:60:2) to [bend left=90] (12);
\filldraw[fill=black!10!white, draw=none] (2) arc (30:0:2) to [bend right=15] (8) arc (210:180:2) to [bend right=90] (10) arc (150:120:2) to [bend right=45] (2);
\filldraw[fill=black!10!white, draw=none] (4) to [bend right=90] (5) arc (-60:-30:2);
\filldraw[fill=black!10!white, draw=none] (6) to [bend right=90] (7) arc (-120:-90:2);
\draw[suture] (9) to [bend right=90] (10);
\draw[suture] (1) to [bend left=90] (12);
\draw[suture] (11) to [bend right=45] (2);
\draw[suture] (3) to [bend right=15] (8);
\draw[suture] (4) to [bend right=90] (5);
\draw[suture] (6) to [bend right=90] (7);
\draw[red, ultra thick, dotted] (12) to [bend left=90] (1);
\draw [boundary] (0,0) circle (2 cm);
\foreach \point in {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12}
\fill [vertex] (\point) circle (2pt);
\draw (4,0) node {$=$};
\coordinate [label = above:{$0$}] (0b) at ($ (90:2) + (8,0) $);
\coordinate [label = above right:{$1$}] (1b) at ($ (54:2) + (8,0) $);
\coordinate [label = above right:{$2$}] (2b) at ($ (18:2) + (8,0) $);
\coordinate [label = right:{$3$}] (3b) at ($ (-18:2) + (8,0) $);
\coordinate [label = below right:{$4$}] (4b) at ($ (-54:2) + (8,0) $);
\coordinate [label = below right:{$5$}] (5b) at ($ (-90:2) + (8,0) $);
\coordinate [label = below:{$-4$}] (6b) at ($ (-126:2) + (8,0) $);
\coordinate [label = below left:{$-3$}] (7b) at ($ (-162:2) + (8,0) $);
\coordinate [label = below left:{$-2$}] (8b) at ($ (162:2) + (8,0) $);
\coordinate [label = left:{$-1$}] (9b) at ($ (126:2) + (8,0) $);
\filldraw[fill=black!10!white, draw=none] (2b) arc (18:-18:2) -- (8b) arc (162:126:2) to [bend right=90] (0b) arc (90:54:2) to [bend right=90] (2b);
\filldraw[fill=black!10!white, draw=none] (4b) to [bend right=90] (5b) arc (-90:-54:2);
\filldraw[fill=black!10!white, draw=none] (6b) to [bend right=90] (7b) arc (-162:-126:2);
\filldraw[fill=black!10!white, draw=red] ($ (36:1.8) + (8,0) $) circle (0.1);
\draw[suture] (0b) to [bend left=90] (9b);
\draw[suture] (1b) to [bend right=90] (2b);
\draw[suture] (3b) -- (8b);
\draw[suture] (4b) to [bend right=90] (5b);
\draw[suture] (6b) to [bend right=90] (7b);
\draw [boundary] (8,0) circle (2 cm);
\foreach \point in {0b, 1b, 2b, 3b, 4b, 5b, 6b, 7b, 8b, 9b}
\fill [vertex] (\point) circle (2pt);
\end{tikzpicture}
\]
More generally, we declare that the annihilation operator $a_{p,i}$ closes off the points $-2i$ and $-2i+1$, and relabels points: the points $-2i+2, -2i+3, \ldots$ retain their numbering but the points $-2i-1, -2i-2, \ldots$ have their label increased by $2$. Again we might see a closed curve as a result.
When $i=0$, both $a_{p,i}^*$ and $a_{p,i}$ perform operations near the basepoint labelled $0$. When $i$ increases by $1$, those operations occur $2$ spots anticlockwise.
We will also declare creation operators $a_{q,i}^\dagger$ and annihilation operators $a_{q,i}$. These have a similar effect but on the other side of the diagram. We declare $a_{q,i}^\dagger$ inserts a new chord between the points labelled $2i-1$ and $2i$; and we declare that $a_{q,i}$ closes off the points $2i-1$ and $2i$. So $a_{q,0}$ and $a_{q,0}^\dagger$ perform operations near the basepoint, and when $i$ increases by $1$, those operations occur $2$ spots clockwise.
\[
\begin{tabular}{|c|c|}
\hline
\begin{tikzpicture}[
scale=0.9,
suture/.style={thick, draw=red},
boundary/.style={ultra thick},
vertex/.style={draw=red, fill=red}]
\draw (-4,0) node {$a_{p,i}^*$};
\coordinate [label = left:{$-2i+1$}] (a) at (144:2);
\coordinate [label = left:{$-2i$}] (b) at (216:2);
\filldraw[fill=black!10!white, draw=none] (a) -- (-1,0.8) -- (-1,-0.8) -- (b) arc (216:144:2);
\draw[suture] (a) -- (-1,0.8);
\draw[suture] (b) -- (-1,-0.8);
\draw [boundary](-1,1.732) arc (120:240:2);
\draw (0,0) node {$=$};
\coordinate [label = left:{$-2i+1$}] (c) at ($ (144:2) + (4,0) $);
\coordinate [label = left:{$-2i$}] (d) at ($ (168:2) + (4,0) $);
\coordinate [label = left:{$-2i-1$}] (e) at ($ (192:2) + (4,0) $);
\coordinate [label = left:{$-2i-2$}] (f) at ($ (216:2) + (4,0) $);
\filldraw[fill=black!10!white, draw=none] (c) -- (3,0.8) -- (3,-0.8) -- (f) arc (216:192:2) to [bend right=90] (d) arc (168:144:2);
\draw [suture] (c) -- (3,0.8);
\draw [suture] (d) to [bend left=90] (e);
\draw [suture] (f) -- (3,-0.8);
\draw [boundary](3,1.732) arc (120:240:2);
\foreach \point in {a, b, c, d, e, f}
\fill [vertex] (\point) circle (2pt);
\end{tikzpicture}
&
\begin{tikzpicture}[
scale=0.9,
suture/.style={thick, draw=red},
boundary/.style={ultra thick},
vertex/.style={draw=red, fill=red}]
\draw (-4,0) node {$a_{p,i}$};
\coordinate [label = left:{$-2i+2$}] (a) at (144:2);
\coordinate [label = left:{$-2i+1$}] (b) at (168:2);
\coordinate [label = left:{$-2i$}] (c) at (192:2);
\coordinate [label = left:{$-2i-1$}] (d) at (216:2);
\filldraw[fill=black!10!white, draw=none] (a) -- (-1,1) -- (-1,1.732) arc (120:144:2);
\filldraw[fill=black!10!white, draw=none] (b) -- (-1,0.4) -- (-1,-0.4) -- (c) arc (192:168:2);
\filldraw[fill=black!10!white, draw=none] (d) -- (-1,-1) -- (-1,-1.732) arc (240:216:2);
\draw[suture] (a) -- (-1,1);
\draw[suture] (b) -- (-1,0.4);
\draw[suture] (c) -- (-1,-0.4);
\draw[suture] (d) -- (-1,-1);
\draw [boundary](-1,1.732) arc (120:240:2);
\draw (0,0) node {$=$};
\coordinate [label = left:{$-2i+2$}] (e) at ($ (144:2) + (4,0) $);
\coordinate [label = left:{$-2i+1$}] (f) at ($ (216:2) + (4,0) $);
\filldraw[fill=black!10!white, draw=none] (e) -- (3,1) -- (3,1.732) arc (120:144:2);
\filldraw[fill=black!10!white, draw=none] (f) -- (3,-1) -- (3,-1.732) arc (240:216:2);
\filldraw[fill=black!10!white, draw=none] (3,0.4) .. controls (2.5,0.4) and (2.4,0.5) .. (2.4,0) .. controls (2.4,-0.5) and (2.5,-0.4) .. (3,-0.4);
\draw [suture] (e) -- (3,1);
\draw [suture] (f) -- (3,-1);
\draw [suture] (3,0.4) .. controls (2.5,0.4) and (2.4,0.5) .. (2.4,0) .. controls (2.4,-0.5) and (2.5,-0.4) .. (3,-0.4);
\draw [boundary](3,1.732) arc (120:240:2);
\foreach \point in {a, b, c, d, e, f}
\fill [vertex] (\point) circle (2pt);
\end{tikzpicture}
\\
\hline
\begin{tikzpicture}[
scale=0.9,
suture/.style={thick, draw=red},
boundary/.style={ultra thick},
vertex/.style={draw=red, fill=red}]
\draw (-3,0) node {$a_{q,i}^\dagger$};
\coordinate [label = right:{$2i-1$}] (a) at ($ (36:2) + (-3,0) $);
\coordinate [label = right:{$2i$}] (b) at ($ (-36:2) + (-3,0) $);
\filldraw[fill=black!10!white, draw=none] (a) -- (-2,0.8) -- (-2,1.732) arc (60:36:2);
\filldraw[fill=black!10!white, draw=none] (b) -- (-2,-0.8) -- (-2,-1.732) arc (-60:-36:2);
\draw[suture] (a) -- (-2,0.8);
\draw[suture] (b) -- (-2,-0.8);
\draw [boundary](-2,1.732) arc (60:-60:2);
\draw (1,0) node {$=$};
\coordinate [label = right:{$2i-1$}] (c) at ($ (36:2) + (1,0) $);
\coordinate [label = right:{$2i$}] (d) at ($ (12:2) + (1,0) $);
\coordinate [label = right:{$2i+1$}] (e) at ($ (-12:2) + (1,0) $);
\coordinate [label = right:{$2i+2$}] (f) at ($ (-36:2) + (1,0) $);
\filldraw[fill=black!10!white, draw=none] (c) -- (2,0.8) -- (2,1.732);
\filldraw[fill=black!10!white, draw=none] (f) -- (2,-0.8) -- (2,-1.732);
\filldraw[fill=black!10!white, draw=none] (d) to [bend right=90] (e);
\draw [suture] (c) -- (2,0.8);
\draw [suture] (d) to [bend right=90] (e);
\draw [suture] (f) -- (2,-0.8);
\draw [boundary](2,1.732) arc (60:-60:2);
\foreach \point in {a, b, c, d, e, f}
\fill [vertex] (\point) circle (2pt);
\end{tikzpicture}
&
\begin{tikzpicture}[
scale=0.9,
suture/.style={thick, draw=red},
boundary/.style={ultra thick},
vertex/.style={draw=red, fill=red}]
\draw (-3,0) node {$a_{q,i}$};
\coordinate [label = right:{$2i-2$}] (a) at ($ (36:2) + (-3,0) $);
\coordinate [label = right:{$2i-1$}] (b) at ($ (12:2) + (-3,0) $);
\coordinate [label = right:{$2i$}] (c) at ($ (-12:2) + (-3,0) $);
\coordinate [label = right:{$2i+1$}] (d) at ($ (-36:2) + (-3,0) $);
\filldraw[fill=black!10!white, draw=none] (-2,1) -- (a) arc (36:12:2) -- (-2,0.4) -- cycle;
\filldraw[fill=black!10!white, draw=none] (-2,-0.4) -- (c) arc (-12:-36:2) -- (-2,-1) -- cycle;
\draw[suture] (a) -- (-2,1);
\draw[suture] (b) -- (-2,0.4);
\draw[suture] (c) -- (-2,-0.4);
\draw[suture] (d) -- (-2,-1);
\draw [boundary](-2,1.732) arc (60:-60:2);
\draw (1,0) node {$=$};
\coordinate [label = right:{$2i-2$}] (e) at ($ (36:2) + (1,0) $);
\coordinate [label = right:{$2i-1$}] (f) at ($ (-36:2) + (1,0) $);
\filldraw[fill=black!10!white, draw=none] (2,0.4) .. controls (2.5,0.4) and (2.6,0.5) .. (2.6,0) .. controls (2.6,-0.5) and (2.5,-0.4) .. (2,-0.4) -- (2,-1) -- (f) arc (-36:36:2) -- (2,1) -- cycle;
\draw [suture] (e) -- (2,1);
\draw [suture] (f) -- (2,-1);
\draw [suture] (2,0.4) .. controls (2.5,0.4) and (2.6,0.5) .. (2.6,0) .. controls (2.6,-0.5) and (2.5,-0.4) .. (2,-0.4);
\draw [boundary](2,1.732) arc (60:-60:2);
\foreach \point in {a, b, c, d, e, f}
\fill [vertex] (\point) circle (2pt);
\end{tikzpicture}
\\
\hline
\end{tabular}
\]
Note that the $p$-creation operator always creates a new white region, and the $q$-creation operator always creates a new black region. Also, the $p$-annihilation operator always closes off a black region, while the $q$-annihilation operator always closes off a white region. The similarity to pawn colours is not coincidental.
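The relabelling rules above can be made concrete in a short sketch. The following Python function (our own; the name `a_p` and the representation are not from the source) models a chord diagram as a perfect matching stored in a dict, and implements the annihilation operator $a_{p,i}$ exactly as described: close off the points $-2i$ and $-2i+1$, join the chords that ended there, shift the labels $\leq -2i-1$ up by $2$, and declare the result zero (here, `None`) if a closed loop appears. Labels are kept as plain integers and are not renormalised modulo the number of points.

```python
def a_p(m, i):
    """Annihilation operator a_{p,i} on a chord diagram.

    m is a perfect matching stored as a dict, with m[x] == y whenever a
    chord joins the points labelled x and y (both directions present).
    Closes off the points -2i and -2i+1, joins the chords that ended
    there, and relabels: points <= -2i-1 have their label increased by 2,
    all other points keep their label.  Returns None (the zero diagram)
    if closing off produces a closed loop.
    """
    u, v = -2 * i, -2 * i + 1
    x, y = m[u], m[v]              # far endpoints of the chords at u and v
    if x == v:                     # u and v were joined by a single chord:
        return None                # closing it off makes a closed loop
    shift = lambda z: z + 2 if z <= -2 * i - 1 else z
    m2 = {shift(a): shift(b) for a, b in m.items()
          if a not in (u, v) and b not in (u, v)}
    m2[shift(x)], m2[shift(y)] = shift(y), shift(x)
    return m2
```

For instance, on the four-point diagram with chords $\{0,-1\}$ and $\{1,2\}$, applying `a_p(m, 0)` joins the two remaining endpoints into a single chord, recovering a vacuum-like diagram; applying it where the two points are already joined by one chord returns `None`.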
\subsection{Diagrams of chessboards}
We'll call the simplest possible chord diagram, with one chord, \emph{the vacuum} $\Gamma_\emptyset$.
\begin{center}
\begin{tikzpicture}[
scale=0.9,
suture/.style={thick, draw=red},
boundary/.style={ultra thick},
vertex/.style={draw=red, fill=red}]
\coordinate [label = above:{$0$}] (0) at (90:1);
\coordinate [label = below:{$1$}] (1) at (-90:1);
\filldraw[fill=black!10!white, draw=none] (0) arc (90:-90:1) -- cycle;
\draw [boundary] (0,0) circle (1 cm);
\draw [suture] (0) -- (1);
\foreach \point in {0, 1}
\fill [vertex] (\point) circle (2pt);
\end{tikzpicture}
\end{center}
Now for any word $w$ in the letters $p$ and $q$, we can apply a corresponding sequence of initial creation operators $a_{p,0}^*$ and $a_{q,0}^\dagger$ to the vacuum chord diagram $\Gamma_\emptyset$ to get a chord diagram $\Gamma_w$ for the word $w$.
For example, for the word $w = qpqq$ we obtain
\[
\begin{tikzpicture}[
scale=0.9,
suture/.style={thick, draw=red},
boundary/.style={ultra thick},
vertex/.style={draw=red, fill=red}]
\coordinate [label = above:{$0$}] (0) at (90:1);
\coordinate [label = below:{$1$}] (1) at (-90:1);
\filldraw[fill=black!10!white, draw=none] (0) arc (90:-90:1) -- cycle;
\draw [boundary] (0,0) circle (1 cm);
\draw [suture] (0) -- (1);
\draw (-7,0) node {$\Gamma_{qpqq} \quad = \quad a_{q,0}^\dagger a_{p,0}^* a_{q,0}^\dagger a_{q,0}^\dagger \quad \Gamma_\emptyset \quad = \quad a_{q,0}^\dagger a_{p,0}^* a_{q,0}^\dagger a_{q,0}^\dagger$};
\foreach \point in {0, 1}
\fill [vertex] (\point) circle (2pt);
\end{tikzpicture}
\]
\[
\begin{tikzpicture}[
scale=0.9,
suture/.style={thick, draw=red},
boundary/.style={ultra thick},
vertex/.style={draw=red, fill=red}]
\coordinate [label = above:{$0$}] (0) at (90:1);
\coordinate [label = right:{$1$}] (1) at (0:1);
\coordinate [label = below:{$2$}] (2) at (-90:1);
\coordinate [label = left:{$-1$}] (3) at (180:1);
\filldraw[fill=black!10!white, draw=none] (0) to [bend right=45] (1) arc (0:90:1);
\filldraw[fill=black!10!white, draw=none] (2) to [bend right=45] (3) arc (-180:-90:1);
\draw [boundary] (0,0) circle (1 cm);
\draw [suture] (0) to [bend right=45] (1);
\draw [suture] (2) to [bend right=45] (3);
\draw (-4,0) node {$= a_{q,0}^\dagger a_{p,0}^* a_{q,0}^\dagger $};
\foreach \point in {0,1,2,3}
\fill [vertex] (\point) circle (2pt);
\end{tikzpicture}
\begin{tikzpicture}[
scale=0.9,
suture/.style={thick, draw=red},
boundary/.style={ultra thick},
vertex/.style={draw=red, fill=red}]
\coordinate [label = above:{$0$}] (0) at (90:1);
\coordinate [label = above right:{$1$}] (1) at (30:1);
\coordinate [label = below right:{$2$}] (2) at (-30:1);
\coordinate [label = below:{$3$}] (3) at (-90:1);
\coordinate [label = below left:{$-2$}] (4) at (-150:1);
\coordinate [label = above left:{$-1$}] (5) at (150:1);
\filldraw[fill=black!10!white, draw=none] (0) to [bend right=90] (1) arc (30:90:1);
\filldraw[fill=black!10!white, draw=none] (2) to [bend right=90] (3) arc (-90:0:1);
\filldraw[fill=black!10!white, draw=none] (4) to [bend right=90] (5) arc (150:210:1);
\draw [boundary] (0,0) circle (1 cm);
\draw [suture] (0) to [bend right=90] (1);
\draw [suture] (2) to [bend right=90] (3);
\draw [suture] (4) to [bend right=90] (5);
\draw (-4,0) node {$\quad = \quad a_{q,0}^\dagger a_{p,0}^*$};
\foreach \point in {0,1,2,3,4,5}
\fill [vertex] (\point) circle (2pt);
\end{tikzpicture}
\]
\[
\begin{tikzpicture}[
scale=0.9,
suture/.style={thick, draw=red},
boundary/.style={ultra thick},
vertex/.style={draw=red, fill=red}]
\coordinate [label = above:{$0$}] (0) at (90:1.5);
\coordinate [label = above right:{$1$}] (1) at (45:1.5);
\coordinate [label = right:{$2$}] (2) at (0:1.5);
\coordinate [label = below right:{$3$}] (3) at (-45:1.5);
\coordinate [label = below:{$4$}] (4) at (-90:1.5);
\coordinate [label = below left:{$-3$}] (5) at (-135:1.5);
\coordinate [label = left:{$-2$}] (6) at (-180:1.5);
\coordinate [label = above left:{$-1$}] (7) at (-225:1.5);
\filldraw[fill=black!10!white, draw=none] (0) arc (90:45:1.5) to [bend left=15] (6) arc (180:135:1.5) to [bend right=90] (0);
\filldraw[fill=black!10!white, draw=none] (2) to [bend right=90] (3) arc (-45:0:1.5);
\filldraw[fill=black!10!white, draw=none] (4) to [bend right=90] (5) arc (-135:-90:1.5);
\draw [boundary] (0,0) circle (1.5 cm);
\draw [suture] (0) to [bend left=90] (7);
\draw [suture] (6) to [bend right=15] (1);
\draw [suture] (2) to [bend right=90] (3);
\draw [suture] (4) to [bend right=90] (5);
\draw (-3,0) node {$= \quad a_{q,0}^\dagger$};
\foreach \point in {0,1,2,3,4,5,6,7}
\fill [vertex] (\point) circle (2pt);
\end{tikzpicture}
\begin{tikzpicture}[
scale=0.9,
suture/.style={thick, draw=red},
boundary/.style={ultra thick},
vertex/.style={draw=red, fill=red}]
\coordinate [label = above:{$0$}] (0) at (90:2);
\coordinate [label = above right:{$1$}] (1) at (54:2);
\coordinate [label = above right:{$2$}] (2) at (18:2);
\coordinate [label = right:{$3$}] (3) at (-18:2);
\coordinate [label = below right:{$4$}] (4) at (-54:2);
\coordinate [label = below right:{$5$}] (5) at (-90:2);
\coordinate [label = below:{$-4$}] (6) at (-126:2);
\coordinate [label = below left:{$-3$}] (7) at (-162:2);
\coordinate [label = below left:{$-2$}] (8) at (162:2);
\coordinate [label = left:{$-1$}] (9) at (126:2);
\filldraw[fill=black!10!white, draw=none] (0) to [bend right=90] (1) arc (54:90:2);
\filldraw[fill=black!10!white, draw=none] (9) to [bend right=30] (2) arc (18:-18:2) -- (8) arc (162:126:2);
\filldraw[fill=black!10!white, draw=none] (4) to [bend right=90] (5) arc (-90:-54:2);
\filldraw[fill=black!10!white, draw=none] (6) to [bend right=90] (7) arc (-162:-126:2);
\draw[suture] (9) to [bend right=30] (2);
\draw[suture] (0) to [bend right=90] (1);
\draw[suture] (3) -- (8);
\draw[suture] (4) to [bend right=90] (5);
\draw[suture] (6) to [bend right=90] (7);
\draw (-4,0) node {$\quad = \quad$};
\draw [boundary] (0,0) circle (2 cm);
\foreach \point in {0, 1, 2, 3, 4, 5, 6, 7, 8, 9}
\fill [vertex] (\point) circle (2pt);
\end{tikzpicture}
\]
Now a word $w$ in $p$ and $q$ corresponds to a chessboard, where each $p$ stands for a pawn and each $q$ stands for an anti-pawn. So we can say that the chessboard $w$ has chord diagram $\Gamma_w$.
\[
\begin{tikzpicture}[
scale=0.9,
suture/.style={thick, draw=red},
boundary/.style={ultra thick},
vertex/.style={draw=red, fill=red}]
\coordinate [label = above:{$0$}] (0) at (90:2);
\coordinate [label = above right:{$1$}] (1) at (54:2);
\coordinate [label = above right:{$2$}] (2) at (18:2);
\coordinate [label = right:{$3$}] (3) at (-18:2);
\coordinate [label = below right:{$4$}] (4) at (-54:2);
\coordinate [label = below right:{$5$}] (5) at (-90:2);
\coordinate [label = below:{$-4$}] (6) at (-126:2);
\coordinate [label = below left:{$-3$}] (7) at (-162:2);
\coordinate [label = below left:{$-2$}] (8) at (162:2);
\coordinate [label = left:{$-1$}] (9) at (126:2);
\filldraw[fill=black!10!white, draw=none] (0) to [bend right=90] (1) arc (54:90:2);
\filldraw[fill=black!10!white, draw=none] (9) to [bend right=30] (2) arc (18:-18:2) -- (8) arc (162:126:2);
\filldraw[fill=black!10!white, draw=none] (4) to [bend right=90] (5) arc (-90:-54:2);
\filldraw[fill=black!10!white, draw=none] (6) to [bend right=90] (7) arc (-162:-126:2);
\draw[suture] (9) to [bend right=30] (2);
\draw[suture] (0) to [bend right=90] (1);
\draw[suture] (3) -- (8);
\draw[suture] (4) to [bend right=90] (5);
\draw[suture] (6) to [bend right=90] (7);
\draw (-8,0) node {
$qpqq \quad
\leftrightarrow \quad
\begin{tabularx}{0.3\textwidth}{|X|X|X|X|}
\hline
\BlackPawnOnWhite & \WhitePawnOnWhite & \BlackPawnOnWhite & \BlackPawnOnWhite \\
\hline
\end{tabularx}
\quad
\leftrightarrow \quad$};
\draw [boundary] (0,0) circle (2 cm);
\foreach \point in {0, 1, 2, 3, 4, 5, 6, 7, 8, 9}
\fill [vertex] (\point) circle (2pt);
\end{tikzpicture}
\]
It turns out that the creation and annihilation operators $a_{p,i}, a_{p,i}^*, a_{q,i}, a_{q,i}^\dagger$ act on chessboards and chord diagrams coherently.
\begin{prop}
For any chessboard/word $w$,
\[
\Gamma_{a_{p,i}^* w} = a_{p,i}^* \Gamma_w.
\]
In other words, the following diagram commutes.
\[
\begin{tikzpicture}
\draw (0,0) node {$w$};
\draw (3,0) node {$\Gamma_w$};
\draw (0,-2) node {$a_{p,i}^* w$};
\draw (3,-2) node {$\Gamma_{a_{p,i}^* w}$};
\draw [->,decorate, decoration={snake,amplitude=.4mm,segment length=2mm,post length=1mm}] (0.5,0) -- (2.5,0)
node [above, align=center, midway] {Draw\\diagram};
\draw [->,decorate, decoration={snake,amplitude=.4mm,segment length=2mm,post length=1mm}] (0.5,-2) -- (2.5,-2)
node [above, align=center, midway] {Draw\\diagram};
\draw [->] (0,-0.4) -- (0,-1.6)
node [left, align=center, midway] {Create\\pawn};
\draw [->] (3,-0.4) -- (3,-1.6) node [right, align=center, midway] {Create\\chord};
\end{tikzpicture}
\]
\end{prop}
A similar result holds for the operators $a_{q,i}^\dagger$, and also for the annihilation operators $a_{p,i}$, $a_{q,i}$, provided we take any diagram with a closed loop to be zero. Annihilating a pawn in the wrong place gives an error ``universe not found''; annihilating a chord in the wrong place, so that the result is not a chord diagram, gives an error ``chord diagram not found''.
\subsection{Chord diagrams of chessboards as ski slopes}
In fact there's a quicker way to draw the diagram of a chessboard, which also illustrates why the above proposition is true. But we need to consider a different sport: skiing.
We imagine we have a ski slope, where as usual we ski from top to bottom. It is rectangular in shape, with a starting line at the top, a finishing line at the bottom, and a left and right side along the slope.
We consider a slalom run. There are obstacles, which we can imagine as poles in the ground. As you go down the slope, you have to round all the obstacles, from top to bottom; and after rounding each one you have to return to the centre of the course. Some (white) poles are placed on the left of the slope, and some (black) on the right. We'll write $p$ for a pole on the left, and $q$ for a pole on the right. For the word $qpqq$ the course looks as follows.
\[
\begin{tikzpicture}[
scale=0.7,
suture/.style={thick, draw=red},
boundary/.style={ultra thick},
vertex/.style={draw=red, fill=red},
leftpole/.style={draw=black, fill=white},
rightpole/.style={draw=black, fill=black}]
\draw [boundary] (-2,0) -- (6,0) -- (6,-9) -- (-2,-9) -- cycle;
\coordinate (0) at (2,0);
\coordinate (1) at (4,-1);
\coordinate (2) at (4,-2);
\coordinate (-1) at (0,-3);
\coordinate (-2) at (0,-4);
\coordinate (3) at (4,-5);
\coordinate (4) at (4,-6);
\coordinate (5) at (4,-7);
\coordinate (6) at (4,-8);
\coordinate (7) at (2,-9);
\coordinate (p1) at (0,-3.5);
\coordinate (q1) at (4,-1.5);
\coordinate (q2) at (4,-5.5);
\coordinate (q3) at (4,-7.5);
\draw [suture] (0) .. controls (2,-0.5) and (3.5,-1) .. (1) arc (90:-90:0.5)
.. controls (3.5,-2) and (0.5,-3) .. (-1)
arc (90:270:0.5)
.. controls (0.5,-4) and (3.5,-5) .. (3)
arc (90:-90:0.5)
.. controls (3.5,-6) and (2,-6.25) .. (2,-6.5) .. controls (2,-6.75) and (3.5,-7) .. (5)
arc (90:-90:0.5)
.. controls (3.5,-8) and (2,-8.5) .. (7);
\foreach \point in {p1}
\fill [leftpole] (\point) circle (5pt);
\foreach \point in {q1, q2, q3}
\fill [rightpole] (\point) circle (5pt);
\end{tikzpicture}
\]
Now, if we just consider that part of the course inside the obstacles, we get an interesting chord diagram. In fact, it is just the chord diagram $\Gamma_{qpqq}$.
\[
\begin{tikzpicture}[
scale=0.7,
suture/.style={thick, draw=red},
boundary/.style={ultra thick},
vertex/.style={draw=red, fill=red},
leftpole/.style={draw=black, fill=white},
rightpole/.style={draw=black, fill=black}]
\coordinate [label = above:{$0$}] (0) at (2,0);
\coordinate [label = right:{$1$}] (1) at (4,-1);
\coordinate [label = right:{$2$}] (2) at (4,-2);
\coordinate [label = left:{$-1$}] (-1) at (0,-3);
\coordinate [label = left:{$-2$}] (-2) at (0,-4);
\coordinate [label = right:{$3$}] (3) at (4,-5);
\coordinate [label = right:{$4$}] (4) at (4,-6);
\coordinate [label = right:{$5$}] (5) at (4,-7);
\coordinate [label = right:{$6$}] (6) at (4,-8);
\coordinate [label = below:{$7$}] (7) at (2,-9);
\coordinate (p1) at (0,-3.5);
\coordinate (q1) at (4,-1.5);
\coordinate (q2) at (4,-5.5);
\coordinate (q3) at (4,-7.5);
\filldraw[fill=black!10!white, draw=none] (0) .. controls (2,-0.5) and (3.5,-1) .. (1) -- (4,0) -- cycle;
\filldraw[fill=black!10!white, draw=none] (2) .. controls (3.5,-2) and (0.5,-3) .. (-1) -- (-2) .. controls (0.5,-4) and (3.5,-5) .. (3) -- cycle;
\filldraw[fill=black!10!white, draw=none] (4) .. controls (3.5,-6) and (2,-6.25) .. (2,-6.5) .. controls (2,-6.75) and (3.5,-7) .. (5);
\filldraw[fill=black!10!white, draw=none] (6) .. controls (3.5,-8) and (2,-8.5) .. (7) -- (4,-9) -- cycle;
\draw [suture] (0) .. controls (2,-0.5) and (3.5,-1) .. (1);
\draw [suture, dotted] (1) arc (90:-90:0.5);
\draw [suture] (2) .. controls (3.5,-2) and (0.5,-3) .. (-1);
\draw [suture, dotted] (-1) arc (90:270:0.5);
\draw [suture] (-2) .. controls (0.5,-4) and (3.5,-5) .. (3);
\draw [suture, dotted] (3) arc (90:-90:0.5);
\draw [suture] (4) .. controls (3.5,-6) and (2,-6.25) .. (2,-6.5) .. controls (2,-6.75) and (3.5,-7) .. (5);
\draw [suture, dotted] (5) arc (90:-90:0.5);
\draw [suture] (6) .. controls (3.5,-8) and (2,-8.5) .. (7);
\draw [boundary] (0,0) -- (4,0) -- (4,-9) -- (0,-9) -- cycle;
\draw [dotted] (0,0) -- (-2,0) -- (-2,-9) -- (0,-9);
\draw [dotted] (4,0) -- (6,0) -- (6,-9) -- (4,-9);
\foreach \point in {0, 1, 2, 3, 4, 5, 6, 7, -2, -1}
\fill [vertex] (\point) circle (2pt);
\foreach \point in {p1}
\fill [leftpole] (\point) circle (5pt);
\foreach \point in {q1, q2, q3}
\fill [rightpole] (\point) circle (5pt);
\end{tikzpicture}
\]
It's not difficult to see why this skiing algorithm gives the correct chord diagram for each word/chessboard. Having the chord diagram arranged this way, with all the chords coming from pawns / $p$'s on the left, and all the chords coming from anti-pawns / $q$'s on the right, makes it easier to see why the chord diagram operations correspond to operations on chessboards. For instance, $a_{p,i}^*$ adds an extra component to the ski run, entering from the left side, then sharply turning back to the left hand side of the slope. This is precisely what you get when you add an extra pole on the left.
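The skiing description translates directly into a short algorithm. The following sketch (our own; the name `chord_diagram` is hypothetical) builds the chord set of $\Gamma_w$ by following the slalom path from the start line to the finish: each $q$ exits and re-enters on the right (taking the next two positive labels), each $p$ exits and re-enters on the left (taking the next two negative labels), and each segment of the path lying inside the slope becomes a chord.

```python
def chord_diagram(w):
    """Chords of the diagram for a word w in the letters 'p', 'q',
    read off the ski run.

    Returns a set of frozensets {a, b}, one per segment of the red path
    inside the slope.  Labels follow the convention in the text: 0 at the
    start, positive labels down the right side and on to the finish,
    negative labels down the left side.
    """
    chords = []
    prev = 0              # label where the current in-slope segment began
    nr, nl = 1, -1        # next free right-hand / left-hand label
    for letter in w:
        if letter == 'q':            # pole on the right: exit, round it, re-enter
            exit_, enter = nr, nr + 1
            nr += 2
        else:                        # 'p': pole on the left
            exit_, enter = nl, nl - 1
            nl -= 2
        chords.append(frozenset({prev, exit_}))
        prev = enter
    chords.append(frozenset({prev, nr}))   # final segment to the finish line
    return set(chords)
```

For $w = qpqq$ this returns the chords $\{0,1\}$, $\{2,-1\}$, $\{-2,3\}$, $\{4,5\}$, $\{6,7\}$, matching the figure above, and the empty word gives the vacuum chord $\{0,1\}$.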
\subsection{Square decomposition}
We can also note that the pawns and anti-pawns correspond to a precise decomposition of the chord diagram into squares, as shown by the green lines.
\[
\begin{tikzpicture}[
scale=0.7,
suture/.style={thick, draw=red},
boundary/.style={ultra thick},
vertex/.style={draw=red, fill=red},
rightpole/.style={draw=black, fill=black},
leftpole/.style ={draw=black, fill=white},
decomposition/.style={thick, draw=green!50!black}]
\draw (-8,-4.5) node {
$\begin{tabularx}{0.3\textwidth}{|X|X|X|X|}
\hline
\scalebox{0.9}{\BlackPawnOnWhite}
& \scalebox{0.9}{\WhitePawnOnWhite} &
\scalebox{0.9}{\BlackPawnOnWhite}
&
\scalebox{0.9}{\BlackPawnOnWhite}
\\
\hline
\end{tabularx}$};
\coordinate [label = below left:{$0$}] (0) at (2,0);
\coordinate [label = right:{$1$}] (1) at (4,-1);
\coordinate [label = right:{$2$}] (2) at (4,-2);
\coordinate [label = left:{$-1$}] (-1) at (0,-3);
\coordinate [label = left:{$-2$}] (-2) at (0,-4);
\coordinate [label = right:{$3$}] (3) at (4,-5);
\coordinate [label = right:{$4$}] (4) at (4,-6);
\coordinate [label = right:{$5$}] (5) at (4,-7);
\coordinate [label = right:{$6$}] (6) at (4,-8);
\coordinate [label = above left:{$7$}] (7) at (2,-9);
\coordinate (p1) at (0,-3.5);
\coordinate (q1) at (4,-1.5);
\coordinate (q2) at (4,-5.5);
\coordinate (q3) at (4,-7.5);
\filldraw[fill=black!10!white, draw=none] (0) .. controls (2,-0.5) and (3.5,-1) .. (1) -- (4,0) -- cycle;
\filldraw[fill=black!10!white, draw=none] (2) .. controls (3.5,-2) and (0.5,-3) .. (-1) -- (-2) .. controls (0.5,-4) and (3.5,-5) .. (3) -- cycle;
\filldraw[fill=black!10!white, draw=none] (4) .. controls (3.5,-6) and (2,-6.25) .. (2,-6.5) .. controls (2,-6.75) and (3.5,-7) .. (5);
\filldraw[fill=black!10!white, draw=none] (6) .. controls (3.5,-8) and (2,-8.5) .. (7) -- (4,-9) -- cycle;
\draw [suture] (0) .. controls (2,-0.5) and (3.5,-1) .. (1);
\draw [suture] (2) .. controls (3.5,-2) and (0.5,-3) .. (-1);
\draw [suture] (-2) .. controls (0.5,-4) and (3.5,-5) .. (3);
\draw [suture] (4) .. controls (3.5,-6) and (2,-6.25) .. (2,-6.5) .. controls (2,-6.75) and (3.5,-7) .. (5);
\draw [suture] (6) .. controls (3.5,-8) and (2,-8.5) .. (7);
\draw [decomposition] (0,-2.5) -- (4,-2.5);
\draw [decomposition] (0,-4.5) -- (4,-4.5);
\draw [decomposition] (0,-6.5) -- (4,-6.5);
\draw [boundary] (0,0) -- (4,0) -- (4,-9) -- (0,-9) -- cycle;
\draw (-2,-4.5) node {$\leftrightarrow$};
\foreach \point in {0, 1, 2, 3, 4, 5, 6, 7, -2, -1}
\fill [vertex] (\point) circle (2pt);
\foreach \point in {q1, q2, q3}
\fill [rightpole] (\point) circle (5pt);
\foreach \point in {p1}
\fill [leftpole] (\point) circle (5pt);
\end{tikzpicture}
\]
One can essentially read the chord diagram as a chessboard in this way: reading the ski slope from top to bottom reads off the chessboard from left to right.
Moreover, we see that each of these squares in the chord diagram contains chords in a particular configuration, corresponding to a pawn or anti-pawn.
\[
\begin{tikzpicture}[
scale=2,
suture/.style={thick, draw=red},
]
\coordinate (1tl) at (0,1);
\coordinate (1tr) at (1,1);
\coordinate (1bl) at (0,0);
\coordinate (1br) at (1,0);
\draw (-1,0.5) node {$\WhitePawnOnWhite \quad \leftrightarrow$};
\filldraw[fill=black!10!white, draw=none] (0.5,1) -- (1tr) -- (1,0.5) to [bend right=45] (0.5,0) -- (1bl) -- (0,0.5) to [bend right=45] (0.5,1);
\draw (1bl) -- (1br) -- (1tr) -- (1tl) -- cycle;
\draw [suture] (0.5,0) to [bend left=45] (1,0.5);
\draw [suture] (0,0.5) to [bend right=45] (0.5,1);
\foreach \point in {1bl, 1br, 1tl, 1tr}
\fill [black] (\point) circle (1pt);
\end{tikzpicture}
\quad
\quad
\begin{tikzpicture}[
scale=2,
suture/.style={thick, draw=red},
]
\coordinate (2tl) at (3,1);
\coordinate (2tr) at (4,1);
\coordinate (2bl) at (3,0);
\coordinate (2br) at (4,0);
\draw (2,0.5) node {$\BlackPawnOnWhite \quad \leftrightarrow$};
\filldraw[fill=black!10!white, draw=none] (3.5,1) -- (2tr) -- (4,0.5) to [bend left=45] (3.5,1);
\filldraw[fill=black!10!white, draw=none] (3,0.5) to [bend left=45] (3.5,0) -- (2bl) -- cycle;
\draw (2bl) -- (2br) -- (2tr) -- (2tl) -- cycle;
\draw [suture] (3.5,1) to [bend right=45] (4,0.5);
\draw [suture] (3.5,0) to [bend right=45] (3,0.5);
\foreach \point in {2bl, 2br, 2tl, 2tr}
\fill [black] (\point) circle (1pt);
\end{tikzpicture}
\]
In this way, each pawn or anti-pawn provides one of two ways of drawing in chords into the chord diagram --- providing, in a sense, \emph{one bit of information} and suggesting relations to quantum information theory. Moreover, since we have creation and annihilation operators for pawns and anti-pawns, and they can also be regarded as bits of information, one is reminded of the ``it from bit'' idea of John Archibald Wheeler \cite{Wheeler90}. These ideas are more fully developed in \cite{Me12_itsy_bitsy}.
This decomposition of a chord diagram into pieces, each of which has curves in one of two specified configurations, is also reminiscent of statistical mechanics.
\subsection{Curves on cylinders}
\label{sec:curves_on_cylinders}
Consider the cylinder shown. Its boundary consists of discs on the top and bottom, and a vertical annulus. On the vertical annulus we have some vertical curves, drawn in red.
\begin{center}
\includegraphics[scale=0.5]{fig39-eps-converted-to.pdf}
\end{center}
We are going to draw chord diagrams $\Gamma_0$ and $\Gamma_1$ on the bottom and top discs, and then join up all the curves to obtain curves on the boundary of the cylinder, which of course is topologically a sphere.
However, when we do so, we arrange the curves along the corners as shown:
\begin{center}
\includegraphics[scale=0.4]{corner_1-eps-converted-to.pdf}
\end{center}
When we connect up the curves, we do it in the following fashion. We could imagine that we are turning and walking along the corner, as shown on the left; or rounding the corners and the curves, as shown on the right.
\begin{center}
\includegraphics[scale=0.4]{corner_2-eps-converted-to.pdf}
\quad
\includegraphics[scale=0.4]{corner_3-eps-converted-to.pdf}
\end{center}
Here we have an example, with a chord diagram at the top (namely $\Gamma_{qp}$, if we regard the basepoint as being at the back), and another chord diagram at the bottom (namely $\Gamma_{pq}$).
\begin{center}
\includegraphics[scale=0.5]{stack_1-eps-converted-to.pdf}
\end{center}
If we join up the curves as we specified, we obtain the following, which you can check is a single connected curve on the cylinder/sphere. (A fun exercise is to prove that if we draw the \emph{same} chord diagram on the top and bottom of the cylinder, aligned exactly, then joining up the curves on the cylinder in this fashion always gives a single connected curve.)
\begin{center}
\includegraphics[scale=0.5]{stack_2-eps-converted-to.pdf}
\end{center}
\subsection{Finger moves}
Now, suppose we perform an annihilation operator $a_{p,i}$ on the chord diagram on the bottom of the cylinder. The topological effect is the same as performing the creation operator $a_{p,i}^*$ on the top of the cylinder --- the two are related by a \emph{finger move} as shown.
\begin{center}
\def\svgwidth{120pt}
\input{adjoint_1a.pdf_tex}
\quad
\def\svgwidth{120pt}
\input{adjoint_2a.pdf_tex}
\quad
\def\svgwidth{120pt}
\input{adjoint_3a.pdf_tex}
\end{center}
Similarly, if we perform the creation operator $a_{q,i}^\dagger$ on the \emph{bottom} of the cylinder, the topological effect is the same as performing the annihilation operator $a_{q,i}$ on the \emph{top}.
\begin{center}
\def\svgwidth{120pt}
\input{adjoint_4a.pdf_tex}
\quad
\def\svgwidth{120pt}
\input{adjoint_5a.pdf_tex}
\quad
\def\svgwidth{120pt}
\input{adjoint_6a.pdf_tex}
\end{center}
So these creation and annihilation operators, inserting and closing off chords in a chord diagram, are related in a way that is quite similar to an adjoint.
\subsection{``Inner product'' on chord diagrams}
In fact, based on the above, we will define an ``inner product'' of two chord diagrams $\langle \Gamma_0 | \Gamma_1 \rangle$ via what happens when you insert those two diagrams into the cylinder and round the corners. The result must be a collection of curves on the sphere; it may be one connected curve, or it may be disconnected and consist of several connected curve components.
We set
\[
\left\langle \Gamma_0 | \Gamma_1 \right\rangle = \left\{ \begin{array}{ll}
1 & \text{if the result of rounding the curves on the cylinder} \\ & \quad \text{is a single connected curve on the sphere;} \\
0 & \text{if the result is disconnected.}
\end{array} \right.
\]
Note the asymmetry: we require that $\Gamma_0$ go on the bottom, and $\Gamma_1$ on the top.
The example drawn above in section \ref{sec:curves_on_cylinders} shows that
\[
\langle \Gamma_{pq} | \Gamma_{qp} \rangle = 1.
\]
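This pairing can be sketched combinatorially. The model below is our own (the name `inner` is hypothetical): represent each chord diagram as a perfect matching on the points $0, \ldots, 2n+1$, and assume, as a convention, that rounding the corners has the effect of shifting every endpoint of the top diagram one step around the circle. The curves on the cylinder then alternate between bottom chords and shifted top chords, and we simply check whether one such curve passes through every point.

```python
def inner(m0, m1):
    """<Gamma_0|Gamma_1>: 1 if stacking m0 (bottom) and m1 (top) on the
    cylinder yields a single connected curve, else 0.

    m0, m1: perfect matchings on {0, ..., 2n+1} as dicts, with m[x] == y
    whenever a chord joins x and y (both directions present).  Rounding
    the corners is modelled (an assumption) by shifting every endpoint of
    the top diagram by +1 modulo 2n+2.
    """
    pts = len(m0)                                    # 2n + 2 points
    top = {(x + 1) % pts: (y + 1) % pts for x, y in m1.items()}
    x, seen = 0, set()
    while True:
        x = top[m0[x]]         # follow a bottom chord, then a top chord
        seen.add(x)
        if x == 0:
            break
    # Each pass of the loop traverses 2 points, so the curve through the
    # point 0 covers all pts points iff the loop runs pts // 2 times.
    return 1 if len(seen) == pts // 2 else 0
```

With $\Gamma_{pq}$ as the matching $\{0{-}5, 1{-}4, 2{-}3\}$ and $\Gamma_{qp}$ as $\{0{-}1, 2{-}5, 3{-}4\}$ (labels taken mod $6$), this model reproduces $\langle \Gamma_{pq} | \Gamma_{qp} \rangle = 1$, gives $0$ with the arguments swapped (the asymmetry noted above), and returns $1$ when the same diagram is placed on both ends of the cylinder.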
The finger move we drew above shows that
\[
\langle a_{q,i}^\dagger \Gamma_0 | \Gamma_1 \rangle = \langle \Gamma_0 | a_{q,i} \Gamma_1 \rangle.
\]
Similar finger moves show that annihilation and creation operators on chord diagrams are adjoint --- just as on chessboards.
\begin{align*}
\langle a_{p,i} \Gamma_0 | \Gamma_1 \rangle &= \langle \Gamma_0 | a_{p,i}^* \Gamma_1 \rangle \\
\langle a_{q,i+1} \Gamma_0 | \Gamma_1 \rangle &= \langle \Gamma_0 | a_{q,i}^\dagger \Gamma_1 \rangle \\
\langle a_{p,i}^* \Gamma_0 | \Gamma_1 \rangle &= \langle \Gamma_0 | a_{p,i+1} \Gamma_1 \rangle.
\end{align*}
One can prove that this bilinear form is isomorphic to the one defined on chessboards.
\begin{thm}[\cite{Me09Paper, Me10_Sutured_TQFT}]
For any two chessboards/words $w_0, w_1$,
\[
\langle w_0 | w_1 \rangle = \langle \Gamma_{w_0} | \Gamma_{w_1} \rangle.
\]
\end{thm}
\[
\begin{tikzpicture}
\draw (0,0) node {$(w_0, w_1)$};
\draw (5,0) node {$(\Gamma_{w_0}, \Gamma_{w_1})$};
\draw (0,-3) node {$\langle w_0 | w_1 \rangle$};
\draw (5,-3) node {$\langle \Gamma_{w_0} | \Gamma_{w_1} \rangle$};
\draw [->,decorate, decoration={snake,amplitude=.4mm,segment length=2mm,post length=1mm}] (1,0) -- (4,0)
node [above, align=center, midway] {Draw\\diagrams};
\draw [<->]
(1,-3) -- (4,-3)
node [above, align=center, midway] {$=$};
\draw [->] (0,-0.4) -- (0,-2.6)
node [left, align=center, midway] {``Pawn dynamics''\\``inner product''\\bilinear form};
\draw [->] (5,-0.4) -- (5,-2.6) node [right, align=center, midway] {Insert into cylinder\\``inner product''\\bilinear form};
\end{tikzpicture}
\]
That is, the property of pawns on a chessboard being able to move from one setup to another is precisely the property of curves on the cylinder joining up to give a single connected curve.
Having seen this, it is perhaps more plausible why repeated adjoints of chessboard operators might be periodic.
\subsection{Bypass discs and $120^\circ$ rotations}
Suppose we have a chord diagram $\Gamma$ on a disc $D$, and we consider a sub-disc $B \subset D$ on which the chord diagram appears as shown.
\[
\begin{tikzpicture}[
scale=0.9,
suture/.style={thick, draw=red},
boundary/.style={ultra thick},
vertex/.style={draw=red, fill=red}]
\coordinate (0) at (90:1);
\coordinate (1) at (30:1);
\coordinate (2) at (-30:1);
\coordinate (3) at (-90:1);
\coordinate (4) at (-150:1);
\coordinate (5) at (150:1);
\filldraw[fill=black!10!white, draw=none] (0) arc (90:30:1) to [bend right=90] (2) arc (-30:-90:1) -- cycle;
\filldraw[fill=black!10!white, draw=none] (4) to [bend right=90] (5) arc (150:210:1);
\draw [boundary] (0,0) circle (1 cm);
\draw [suture] (1) to [bend right=90] (2);
\draw [suture] (0) -- (3);
\draw [suture] (4) to [bend right=90] (5);
\end{tikzpicture}
\]
Such a disc, on which the chord diagram restricts to $3$ parallel arcs, is called a \emph{bypass disc}. There are two natural ways to adjust this chord diagram, consistent with the colours, which amount to $120^\circ$ rotations. These operations are called \emph{bypass surgeries}.
\[
\begin{tikzpicture}[
scale=0.9,
suture/.style={thick, draw=red},
boundary/.style={ultra thick},
vertex/.style={draw=red, fill=red}]
\coordinate (0a) at ($ (90:1) + (-4,0) $);
\coordinate (1a) at ($ (30:1) + (-4,0) $);
\coordinate (2a) at ($ (-30:1) + (-4,0) $);
\coordinate (3a) at ($ (-90:1) + (-4,0) $);
\coordinate (4a) at ($ (-150:1) + (-4,0) $);
\coordinate (5a) at ($ (150:1) + (-4,0) $);
\filldraw[fill=black!10!white, draw=none] (4a) arc (210:150:1) to [bend right=90] (0a) arc (90:30:1) -- cycle;
\filldraw[fill=black!10!white, draw=none] (2a) to [bend right=90] (3a) arc (-90:-30:1);
\draw [boundary] (-4,0) circle (1 cm);
\draw [suture] (5a) to [bend right=90] (0a);
\draw [suture] (4a) -- (1a);
\draw [suture] (2a) to [bend right=90] (3a);
\draw (-4,-2) node {$\Gamma'$};
\coordinate (0) at (90:1);
\coordinate (1) at (30:1);
\coordinate (2) at (-30:1);
\coordinate (3) at (-90:1);
\coordinate (4) at (-150:1);
\coordinate (5) at (150:1);
\filldraw[fill=black!10!white, draw=none] (0) arc (90:30:1) to [bend right=90] (2) arc (-30:-90:1) -- cycle;
\filldraw[fill=black!10!white, draw=none] (4) to [bend right=90] (5) arc (150:210:1);
\draw [boundary] (0,0) circle (1 cm);
\draw [suture] (1) to [bend right=90] (2);
\draw [suture] (0) -- (3);
\draw [suture] (4) to [bend right=90] (5);
\draw (0,-2) node {$\Gamma$};
\coordinate (0b) at ($ (90:1) + (4,0) $);
\coordinate (1b) at ($ (30:1) + (4,0) $);
\coordinate (2b) at ($ (-30:1) + (4,0) $);
\coordinate (3b) at ($ (-90:1) + (4,0) $);
\coordinate (4b) at ($ (-150:1) + (4,0) $);
\coordinate (5b) at ($ (150:1) + (4,0) $);
\filldraw[fill=black!10!white, draw=none] (2b) arc (-30:-90:1) to [bend right=90] (4b) arc (210:150:1) -- cycle;
\filldraw[fill=black!10!white, draw=none] (0b) to [bend right=90] (1b) arc (30:90:1);
\draw [boundary] (4,0) circle (1 cm);
\draw [suture] (3b) to [bend right=90] (4b);
\draw [suture] (2b) -- (5b);
\draw [suture] (0b) to [bend right=90] (1b);
\draw (4,-2) node {$\Gamma''$};
\end{tikzpicture}
\]
With these surgeries on $B \subset D$, we have three chord diagrams on $D$, which we denote $\Gamma, \Gamma', \Gamma''$. We consider inserting $\Gamma, \Gamma', \Gamma''$ successively into one end of a cylinder, with some other chord diagram $\Gamma_1$ on the other end. We obtain three sets of curves on the cylinder.
Below is drawn a possible arrangement of curves on the cylinder. We draw what happens inside $B$ inside a circle, and what happens outside $B$ outside the circle --- this does not necessarily correspond to the top and bottom of the cylinder.
\[
\begin{tikzpicture}[
scale=0.9,
suture/.style={thick, draw=red},
boundary/.style={ultra thick},
vertex/.style={draw=red, fill=red}]
\coordinate (0a) at ($ (90:1) + (-4,0) $);
\coordinate (1a) at ($ (30:1) + (-4,0) $);
\coordinate (2a) at ($ (-30:1) + (-4,0) $);
\coordinate (3a) at ($ (-90:1) + (-4,0) $);
\coordinate (4a) at ($ (-150:1) + (-4,0) $);
\coordinate (5a) at ($ (150:1) + (-4,0) $);
\filldraw[fill=black!10!white, draw=none] (4a) arc (210:150:1) to [bend right=90] (0a) arc (90:30:1) -- cycle;
\filldraw[fill=black!10!white, draw=none] (2a) to [bend right=90] (3a) arc (-90:-30:1);
\draw [boundary] (-4,0) circle (1 cm);
\draw [suture] (5a) to [bend right=90] (0a);
\draw [suture] (4a) -- (1a);
\draw [suture] (2a) to [bend right=90] (3a);
\draw [suture] (0a) .. controls ($ (-4,0) + (90:2) $) and ($ (-4,0) + (1.8,1) $) .. ($ (-4,0) + (1.8,0) $) .. controls ($ (-4,0) + (1.8,-1) $) and ($ (-4,0) + (-90:2) $) .. (3a);
\draw [suture] (1a) .. controls ($ (1a) + (30:1) $) and ($ (2a) + (-30:1) $) .. (2a);
\draw [suture] (4a) .. controls ($ (4a) + (-150:1) $) and ($ (5a) + (150:1) $) .. (5a);
\draw (-4,-2) node {$\langle \Gamma' | \Gamma_1 \rangle$};
\coordinate (0) at (90:1);
\coordinate (1) at (30:1);
\coordinate (2) at (-30:1);
\coordinate (3) at (-90:1);
\coordinate (4) at (-150:1);
\coordinate (5) at (150:1);
\filldraw[fill=black!10!white, draw=none] (0) arc (90:30:1) to [bend right=90] (2) arc (-30:-90:1) -- cycle;
\filldraw[fill=black!10!white, draw=none] (4) to [bend right=90] (5) arc (150:210:1);
\draw [boundary] (0,0) circle (1 cm);
\draw [suture] (1) to [bend right=90] (2);
\draw [suture] (0) -- (3);
\draw [suture] (4) to [bend right=90] (5);
\draw [suture] (0) .. controls ($ (0,0) + (90:2) $) and ($ (0,0) + (1.8,1) $) .. ($ (0,0) + (1.8,0) $) .. controls ($ (0,0) + (1.8,-1) $) and ($ (0,0) + (-90:2) $) .. (3);
\draw [suture] (1) .. controls ($ (1) + (30:1) $) and ($ (2) + (-30:1) $) .. (2);
\draw [suture] (4) .. controls ($ (4) + (-150:1) $) and ($ (5) + (150:1) $) .. (5);
\draw (0,-2) node {$\langle \Gamma | \Gamma_1 \rangle$};
\coordinate (0b) at ($ (90:1) + (4,0) $);
\coordinate (1b) at ($ (30:1) + (4,0) $);
\coordinate (2b) at ($ (-30:1) + (4,0) $);
\coordinate (3b) at ($ (-90:1) + (4,0) $);
\coordinate (4b) at ($ (-150:1) + (4,0) $);
\coordinate (5b) at ($ (150:1) + (4,0) $);
\filldraw[fill=black!10!white, draw=none] (2b) arc (-30:-90:1) to [bend right=90] (4b) arc (210:150:1) -- cycle;
\filldraw[fill=black!10!white, draw=none] (0b) to [bend right=90] (1b) arc (30:90:1);
\draw [boundary] (4,0) circle (1 cm);
\draw [suture] (3b) to [bend right=90] (4b);
\draw [suture] (2b) -- (5b);
\draw [suture] (0b) to [bend right=90] (1b);
\draw [suture] (0b) .. controls ($ (4,0) + (90:2) $) and ($ (4,0) + (1.8,1) $) .. ($ (4,0) + (1.8,0) $) .. controls ($ (4,0) + (1.8,-1) $) and ($ (4,0) + (-90:2) $) .. (3b);
\draw [suture] (1b) .. controls ($ (1b) + (30:1) $) and ($ (2b) + (-30:1) $) .. (2b);
\draw [suture] (4b) .. controls ($ (4b) + (-150:1) $) and ($ (5b) + (150:1) $) .. (5b);
\draw (4,-2) node {$\langle \Gamma'' | \Gamma_1 \rangle$};
\end{tikzpicture}
\]
We find here that, of the three sets of curves on the cylinder, $2$ of them are connected. The key observation is that \emph{whatever chord diagrams we have for $\Gamma$ and $\Gamma_1$, out of the $3$ sets of curves obtained on the cylinder, the number which are connected is even.}
\begin{prop}
Let $\Gamma, \Gamma_1$ be chord diagrams on the disc $D$ with the same number of chords. Then, with $\Gamma', \Gamma''$ obtained from $\Gamma$ as above,
\[
\langle \Gamma | \Gamma_1 \rangle + \langle \Gamma' | \Gamma_1 \rangle + \langle \Gamma'' | \Gamma_1 \rangle = 0.
\]
In other words,
\[
\langle \quad \Gamma + \Gamma' + \Gamma'' \quad | \quad \Gamma_1 \quad \rangle = 0.
\]
\end{prop}
\begin{proof}
The arrangement is either as in the above diagram, giving $1+0+1 = 0$ mod $2$, or more degenerate, giving $0+0+0=0$. See below.
\[
\begin{tikzpicture}[
scale=0.9,
suture/.style={thick, draw=red},
boundary/.style={ultra thick},
vertex/.style={draw=red, fill=red}]
\coordinate (0a) at ($ (90:1) + (-4,0) $);
\coordinate (1a) at ($ (30:1) + (-4,0) $);
\coordinate (2a) at ($ (-30:1) + (-4,0) $);
\coordinate (3a) at ($ (-90:1) + (-4,0) $);
\coordinate (4a) at ($ (-150:1) + (-4,0) $);
\coordinate (5a) at ($ (150:1) + (-4,0) $);
\filldraw[fill=black!10!white, draw=none] (4a) arc (210:150:1) to [bend right=90] (0a) arc (90:30:1) -- cycle;
\filldraw[fill=black!10!white, draw=none] (2a) to [bend right=90] (3a) arc (-90:-30:1);
\draw [boundary] (-4,0) circle (1 cm);
\draw [suture] (5a) to [bend right=90] (0a);
\draw [suture] (4a) -- (1a);
\draw [suture] (2a) to [bend right=90] (3a);
\draw [suture] (0a) .. controls ($ (-4,0) + (90:2) $) and ($ (-4,0) + (1.8,1) $) .. ($ (-4,0) + (1.8,0) $) .. controls ($ (-4,0) + (1.8,-1) $) and ($ (-4,0) + (-90:2) $) .. (3a);
\draw [suture] (1a) .. controls ($ (1a) + (30:1) $) and ($ (2a) + (-30:1) $) .. (2a);
\draw [suture] (4a) .. controls ($ (4a) + (-150:1) $) and ($ (5a) + (150:1) $) .. (5a);
\draw (-4,-2) node {$1$};
\draw (-2,-2) node {$+$};
\coordinate (0) at (90:1);
\coordinate (1) at (30:1);
\coordinate (2) at (-30:1);
\coordinate (3) at (-90:1);
\coordinate (4) at (-150:1);
\coordinate (5) at (150:1);
\filldraw[fill=black!10!white, draw=none] (0) arc (90:30:1) to [bend right=90] (2) arc (-30:-90:1) -- cycle;
\filldraw[fill=black!10!white, draw=none] (4) to [bend right=90] (5) arc (150:210:1);
\draw [boundary] (0,0) circle (1 cm);
\draw [suture] (1) to [bend right=90] (2);
\draw [suture] (0) -- (3);
\draw [suture] (4) to [bend right=90] (5);
\draw [suture] (0) .. controls ($ (0,0) + (90:2) $) and ($ (0,0) + (1.8,1) $) .. ($ (0,0) + (1.8,0) $) .. controls ($ (0,0) + (1.8,-1) $) and ($ (0,0) + (-90:2) $) .. (3);
\draw [suture] (1) .. controls ($ (1) + (30:1) $) and ($ (2) + (-30:1) $) .. (2);
\draw [suture] (4) .. controls ($ (4) + (-150:1) $) and ($ (5) + (150:1) $) .. (5);
\draw (0,-2) node {$0$};
\draw (2,-2) node {$+$};
\coordinate (0b) at ($ (90:1) + (4,0) $);
\coordinate (1b) at ($ (30:1) + (4,0) $);
\coordinate (2b) at ($ (-30:1) + (4,0) $);
\coordinate (3b) at ($ (-90:1) + (4,0) $);
\coordinate (4b) at ($ (-150:1) + (4,0) $);
\coordinate (5b) at ($ (150:1) + (4,0) $);
\filldraw[fill=black!10!white, draw=none] (2b) arc (-30:-90:1) to [bend right=90] (4b) arc (210:150:1) -- cycle;
\filldraw[fill=black!10!white, draw=none] (0b) to [bend right=90] (1b) arc (30:90:1);
\draw [boundary] (4,0) circle (1 cm);
\draw [suture] (3b) to [bend right=90] (4b);
\draw [suture] (2b) -- (5b);
\draw [suture] (0b) to [bend right=90] (1b);
\draw [suture] (0b) .. controls ($ (4,0) + (90:2) $) and ($ (4,0) + (1.8,1) $) .. ($ (4,0) + (1.8,0) $) .. controls ($ (4,0) + (1.8,-1) $) and ($ (4,0) + (-90:2) $) .. (3b);
\draw [suture] (1b) .. controls ($ (1b) + (30:1) $) and ($ (2b) + (-30:1) $) .. (2b);
\draw [suture] (4b) .. controls ($ (4b) + (-150:1) $) and ($ (5b) + (150:1) $) .. (5b);
\draw (4,-2) node {$1$};
\draw (6,-2) node {$=$};
\draw (8,-2) node {$0$};
\end{tikzpicture}
\]
\[
\begin{tikzpicture}[
scale=0.9,
suture/.style={thick, draw=red},
boundary/.style={ultra thick},
vertex/.style={draw=red, fill=red}]
\coordinate (0a) at ($ (90:1) + (-4,0) $);
\coordinate (1a) at ($ (30:1) + (-4,0) $);
\coordinate (2a) at ($ (-30:1) + (-4,0) $);
\coordinate (3a) at ($ (-90:1) + (-4,0) $);
\coordinate (4a) at ($ (-150:1) + (-4,0) $);
\coordinate (5a) at ($ (150:1) + (-4,0) $);
\filldraw[fill=black!10!white, draw=none] (4a) arc (210:150:1) to [bend right=90] (0a) arc (90:30:1) -- cycle;
\filldraw[fill=black!10!white, draw=none] (2a) to [bend right=90] (3a) arc (-90:-30:1);
\draw [boundary] (-4,0) circle (1 cm);
\draw [suture] (5a) to [bend right=90] (0a);
\draw [suture] (4a) -- (1a);
\draw [suture] (2a) to [bend right=90] (3a);
\draw [suture] (0a) .. controls ($ (0a) + (90:1) $) and ($ (1a) + (30:1) $) .. (1a);
\draw [suture] (2a) .. controls ($ (2a) + (-30:1) $) and ($ (3a) + (-90:1) $) .. (3a);
\draw [suture] (4a) .. controls ($ (4a) + (-150:1) $) and ($ (5a) + (150:1) $) .. (5a);
\draw (-4,-2) node {$0$};
\draw (-2,-2) node {$+$};
\coordinate (0) at (90:1);
\coordinate (1) at (30:1);
\coordinate (2) at (-30:1);
\coordinate (3) at (-90:1);
\coordinate (4) at (-150:1);
\coordinate (5) at (150:1);
\filldraw[fill=black!10!white, draw=none] (0) arc (90:30:1) to [bend right=90] (2) arc (-30:-90:1) -- cycle;
\filldraw[fill=black!10!white, draw=none] (4) to [bend right=90] (5) arc (150:210:1);
\draw [boundary] (0,0) circle (1 cm);
\draw [suture] (1) to [bend right=90] (2);
\draw [suture] (0) -- (3);
\draw [suture] (4) to [bend right=90] (5);
\draw [suture] (0) .. controls ($ (0) + (90:1) $) and ($ (1) + (30:1) $) .. (1);
\draw [suture] (2) .. controls ($ (2) + (-30:1) $) and ($ (3) + (-90:1) $) .. (3);
\draw [suture] (4) .. controls ($ (4) + (-150:1) $) and ($ (5) + (150:1) $) .. (5);
\draw (0,-2) node {$0$};
\draw (2,-2) node {$+$};
\coordinate (0b) at ($ (90:1) + (4,0) $);
\coordinate (1b) at ($ (30:1) + (4,0) $);
\coordinate (2b) at ($ (-30:1) + (4,0) $);
\coordinate (3b) at ($ (-90:1) + (4,0) $);
\coordinate (4b) at ($ (-150:1) + (4,0) $);
\coordinate (5b) at ($ (150:1) + (4,0) $);
\filldraw[fill=black!10!white, draw=none] (2b) arc (-30:-90:1) to [bend right=90] (4b) arc (210:150:1) -- cycle;
\filldraw[fill=black!10!white, draw=none] (0b) to [bend right=90] (1b) arc (30:90:1);
\draw [boundary] (4,0) circle (1 cm);
\draw [suture] (3b) to [bend right=90] (4b);
\draw [suture] (2b) -- (5b);
\draw [suture] (0b) to [bend right=90] (1b);
\draw [suture] (0b) .. controls ($ (0b) + (90:1) $) and ($ (1b) + (30:1) $) .. (1b);
\draw [suture] (2b) .. controls ($ (2b) + (-30:1) $) and ($ (3b) + (-90:1) $) .. (3b);
\draw [suture] (4b) .. controls ($ (4b) + (-150:1) $) and ($ (5b) + (150:1) $) .. (5b);
\draw (4,-2) node {$0$};
\draw (6,-2) node {$=$};
\draw (8,-2) node {$0$};
\end{tikzpicture}
\]
\end{proof}
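The parity in the proposition can also be checked by direct computation. Below is a small sketch of ours (not from the text): a chord diagram is modelled as a perfect matching of the $2n$ boundary points, gluing two diagrams into a cylinder amounts to alternately following chords of each, and the ``inner product'' is $1$ exactly when a single connected curve results. The matchings encode the three bypass-related diagrams from the figures above, with points numbered $0$--$5$ as drawn.

```python
def count_curves(m1, m2):
    """Glue two chord diagrams (perfect matchings of the same boundary
    points) into a cylinder; each closed curve alternately follows a
    chord of m1 and a chord of m2. Returns the number of closed curves."""
    seen, curves = set(), 0
    for start in m1:
        if start in seen:
            continue
        curves += 1
        x, use_first = start, True
        while True:
            seen.add(x)
            x = m1[x] if use_first else m2[x]
            use_first = not use_first
            if x == start:
                break
    return curves

def matching(*chords):
    """Build a point -> partner dictionary from chord pairs."""
    m = {}
    for a, b in chords:
        m[a], m[b] = b, a
    return m

# The three bypass-related diagrams on 6 points, as in the figures.
G   = matching((0, 3), (1, 2), (4, 5))   # Gamma
Gp  = matching((5, 0), (4, 1), (2, 3))   # Gamma'
Gpp = matching((0, 1), (2, 5), (3, 4))   # Gamma''

G1 = matching((0, 3), (1, 2), (4, 5))    # a sample diagram Gamma_1

# <Gamma | Gamma_1> = 1 iff the glued curves are connected (one curve).
pairings = [1 if count_curves(g, G1) == 1 else 0 for g in (G, Gp, Gpp)]
print(pairings, sum(pairings) % 2)  # -> [0, 1, 1] 0
```

Trying other non-crossing matchings for `G1` always gives an even number of connected gluings, in line with the proposition.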
\subsection{A vector space of chord diagrams}
We have seen how some chord diagrams can be drawn from chessboards. But this is a very small class of chord diagrams; there are many more chord diagrams than chessboard / ski slope chord diagrams.
On the other hand, our creation and annihilation operators are defined on any chord diagram, not just chessboard ones. It's easy to draw in extra chords, or close off chords, on any chord diagram, not just those drawn from chessboards. The adjoint relations from finger moves also hold generally, and the bilinear form / ``inner product'' defined by insertion into a cylinder also works for any chord diagram.
We can relate general chord diagrams, to chessboard chord diagrams, with the following observations.
If our ``inner product'' is supposed to be nondegenerate, then based on the previous proposition, we should set
\[
\Gamma + \Gamma' + \Gamma'' = 0
\]
for any triple of chord diagrams $\Gamma, \Gamma', \Gamma''$ obtained by bypass surgeries.
This relation is called the \emph{bypass relation} and can be written schematically as
\[
\begin{tikzpicture}[
scale=0.9,
suture/.style={thick, draw=red},
boundary/.style={ultra thick},
vertex/.style={draw=red, fill=red}]
\coordinate (0a) at ($ (90:1) + (-4,0) $);
\coordinate (1a) at ($ (30:1) + (-4,0) $);
\coordinate (2a) at ($ (-30:1) + (-4,0) $);
\coordinate (3a) at ($ (-90:1) + (-4,0) $);
\coordinate (4a) at ($ (-150:1) + (-4,0) $);
\coordinate (5a) at ($ (150:1) + (-4,0) $);
\filldraw[fill=black!10!white, draw=none] (4a) arc (210:150:1) to [bend right=90] (0a) arc (90:30:1) -- cycle;
\filldraw[fill=black!10!white, draw=none] (2a) to [bend right=90] (3a) arc (-90:-30:1);
\draw [boundary] (-4,0) circle (1 cm);
\draw [suture] (5a) to [bend right=90] (0a);
\draw [suture] (4a) -- (1a);
\draw [suture] (2a) to [bend right=90] (3a);
\draw (-2,0) node {$+$};
\coordinate (0) at (90:1);
\coordinate (1) at (30:1);
\coordinate (2) at (-30:1);
\coordinate (3) at (-90:1);
\coordinate (4) at (-150:1);
\coordinate (5) at (150:1);
\filldraw[fill=black!10!white, draw=none] (0) arc (90:30:1) to [bend right=90] (2) arc (-30:-90:1) -- cycle;
\filldraw[fill=black!10!white, draw=none] (4) to [bend right=90] (5) arc (150:210:1);
\draw [boundary] (0,0) circle (1 cm);
\draw [suture] (1) to [bend right=90] (2);
\draw [suture] (0) -- (3);
\draw [suture] (4) to [bend right=90] (5);
\draw (2,0) node {$+$};
\coordinate (0b) at ($ (90:1) + (4,0) $);
\coordinate (1b) at ($ (30:1) + (4,0) $);
\coordinate (2b) at ($ (-30:1) + (4,0) $);
\coordinate (3b) at ($ (-90:1) + (4,0) $);
\coordinate (4b) at ($ (-150:1) + (4,0) $);
\coordinate (5b) at ($ (150:1) + (4,0) $);
\filldraw[fill=black!10!white, draw=none] (2b) arc (-30:-90:1) to [bend right=90] (4b) arc (210:150:1) -- cycle;
\filldraw[fill=black!10!white, draw=none] (0b) to [bend right=90] (1b) arc (30:90:1);
\draw [boundary] (4,0) circle (1 cm);
\draw [suture] (3b) to [bend right=90] (4b);
\draw [suture] (2b) -- (5b);
\draw [suture] (0b) to [bend right=90] (1b);
\draw (6,0) node {$=$};
\draw (8,0) node {$0$};
\end{tikzpicture}
\]
We therefore define a vector space $V_n$ as follows: $V_n$ is generated over $\mathbb{Z}_2$ by all chord diagrams with $n$ chords, subject to the bypass relation, i.e. the relation that any three chord diagrams related by bypass surgeries sum to zero.
\[
V_n = \frac{ \mathbb{Z}_2 \langle \text{Chord diagrams with $n$ chords} \rangle }{ \text{Bypass relation} }
\]
Before taking the quotient, the vector space is freely generated by chord diagrams, and its dimension is the number of chord diagrams with $n$ chords, which is $C_n$, the $n$th Catalan number.
After the quotient, something interesting happens: the dimension is reduced to $2^{n-1}$, and a basis is rather familiar.
\begin{thm}[\cite{Me09Paper}]
The $\mathbb{Z}_2$ vector space $V_n$ has dimension $2^{n-1}$ and the diagrams from chessboards of $n-1$ squares form a basis.
\end{thm}
Indeed, there are $2^{n-1}$ configurations of pawns on a chessboard with $n-1$ squares, and these give the chessboard chord diagrams with $n$ chords. (Each square, again, contributing one bit of information.) Alternatively, this is the $\mathbb{Z}_2$ vector space freely generated by words in $p$ and $q$.
\begin{cor}
\[
V_n \cong \left( p \mathbb{Z}_2 \oplus q \mathbb{Z}_2 \right)^{\otimes (n-1)}.
\]
\end{cor}
There is much more structure in this vector space, encoding various combinatorial and representation-theoretic properties of chord diagrams. For instance, $V_n$ has $2^{2^{n-1}}$ elements; the $C_n$ of these which correspond to chord diagrams are distributed in a combinatorially interesting way.
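To see how drastically the bypass relation collapses the count, here is a quick sketch of ours (not from the text) comparing the number of chord diagrams with $n$ chords --- enumerated directly as non-crossing perfect matchings, and also given by the closed form $C_n = \binom{2n}{n}/(n+1)$ --- against the dimension $2^{n-1}$ of $V_n$.

```python
from math import comb

def noncrossing_matchings(points):
    """Enumerate non-crossing perfect matchings (chord diagrams) of an
    even-length tuple of boundary points. The first point pairs with a
    point at odd distance, splitting the rest into two independent arcs."""
    if not points:
        yield []
        return
    a, rest = points[0], points[1:]
    for i in range(0, len(rest), 2):
        b = rest[i]
        for m1 in noncrossing_matchings(rest[:i]):
            for m2 in noncrossing_matchings(rest[i + 1:]):
                yield [(a, b)] + m1 + m2

def catalan(n):
    # C_n = binom(2n, n) / (n + 1)
    return comb(2 * n, n) // (n + 1)

for n in range(1, 7):
    count = sum(1 for _ in noncrossing_matchings(tuple(range(2 * n))))
    print(n, count, catalan(n), 2 ** (n - 1))
# e.g. n = 4: 14 chord diagrams, but dim V_4 = 2^3 = 8
```

The enumeration agrees with the Catalan numbers $1, 2, 5, 14, 42, 132, \ldots$, while $\dim V_n = 2^{n-1}$ grows far more slowly.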
To give an idea of how chord diagrams decompose into chessboard diagrams, we give a computation below.
\[
\begin{tikzpicture}[
scale=0.5,
suture/.style={thick, draw=red},
boundary/.style={ultra thick},
vertex/.style={draw=red, fill=red},
filling/.style={fill=black!10!white, draw=none},
a/.style={xshift=6 cm},
b/.style={xshift=12 cm},
c/.style={xshift=6cm, yshift=-6 cm},
d/.style={xshift=12 cm, yshift=-6 cm},
e/.style={xshift=18 cm, yshift=-6 cm},
f/.style={xshift=24 cm, yshift=-6 cm}]
\coordinate (0) at (2,0);
\coordinate (1) at (4,-1);
\coordinate (-1) at (0,-1);
\coordinate (2) at (4,-2);
\coordinate (-2) at (0,-2);
\coordinate (3) at (4,-3);
\coordinate (-3) at (0,-3);
\coordinate (4) at (4,-4);
\coordinate (-4) at (0,-4);
\coordinate (5) at (2,-5);
\filldraw[filling] (0) -- (4,0) -- (1) arc (90:270:0.5) -- (3) to [bend left=15] (0);
\filldraw[filling] (-1) .. controls (0.5,-1) and (3.5, -4) .. (4) -- (4,-5) -- (5) to [bend right=15] (-2) -- cycle;
\filldraw[filling] (-3) -- (-4) arc (-90:90:0.5);
\draw [suture] (0) to [bend right=15] (3);
\draw [suture] (1) arc (90:270:0.5);
\draw [suture] (-1) .. controls (0.5,-1) and (3.5,-4) .. (4);
\draw [suture] (5) to [bend right=15] (-2);
\draw [suture] (-3) arc (90:-90:0.5);
\draw (0.5,-0.5) -- (3.75,-0.5) -- (3.75, -1.5) -- (0.5,-1.5) -- cycle;
\draw [boundary] (0,0) -- (4,0) -- (4,-5) -- (0,-5) -- cycle;
\foreach \point in {0, 1, 2, 3, 4, 5, -1,-2,-3,-4}
\fill [vertex] (\point) circle (2pt);
\draw (5,-2.5) node {$=$};
\coordinate (0a) at (8,0);
\coordinate (1a) at (10,-1);
\coordinate (-1a) at (6,-1);
\coordinate (2a) at (10,-2);
\coordinate (-2a) at (6,-2);
\coordinate (3a) at (10,-3);
\coordinate (-3a) at (6,-3);
\coordinate (4a) at (10,-4);
\coordinate (-4a) at (6,-4);
\coordinate (5a) at (8,-5);
\filldraw [filling, a] (0a) -- (4,0) -- (1a) arc (90:270:1.5) -- (4,-5) -- (5a) to [bend right=15] (-2a) -- (-1a) to [bend right=15] (0a);
\filldraw [filling, a] (2a) arc (90:270:0.5) -- cycle;
\filldraw [filling, a] (-3a) arc (90:-90:0.5) -- cycle;
\draw [suture, a] (0a) to [bend left=15] (-1a);
\draw [suture, a] (1a) arc (90:270:1.5);
\draw [suture, a] (2a) arc (90:270:0.5);
\draw [suture, a] (-2a) to [bend left=15] (5a);
\draw [suture, a] (-3a) arc (90:-90:0.5);
\draw [a] (0.25,-2.5) -- (3,-2.5) -- (3,-3.5) -- (0.25,-3.5) -- cycle;
\draw [boundary, a] (0,0) -- (4,0) -- (4,-5) -- (0,-5) -- cycle;
\foreach \point in {0a, 1a, 2a, 3a, 4a, 5a, -1a,-2a,-3a,-4a}
\fill [vertex] (\point) circle (2pt);
\draw (11,-2.5) node {$+$};
\coordinate (0b) at (14,0);
\coordinate (1b) at (16,-1);
\coordinate (-1b) at (12,-1);
\coordinate (2b) at (16,-2);
\coordinate (-2b) at (12,-2);
\coordinate (3b) at (16,-3);
\coordinate (-3b) at (12,-3);
\coordinate (4b) at (16,-4);
\coordinate (-4b) at (12,-4);
\coordinate (5b) at (14,-5);
\filldraw [filling, b] (0b) -- (4,0) -- (1b) to [bend left=15] (0b);
\filldraw [filling, b] (-1b) .. controls (0.5,-1) and (3.5,-2) .. (2b) -- (3b) arc (90:270:0.5) -- (4,-5) -- (5b) to [bend right=15] (-2b) -- cycle;
\filldraw [filling, b] (-3b) arc (90:-90:0.5) -- cycle;
\draw [suture, b] (0b) to [bend right=15] (1b);
\draw [suture, b] (-1b) .. controls (0.5,-1) and (3.5,-2) .. (2b);
\draw [suture, b] (-2b) to [bend left=15] (5b);
\draw [suture, b] (3b) arc (90:270:0.5);
\draw [suture, b] (-3b) arc (90:-90:0.5);
\draw [b] (0.25,-3.25) -- (3.75,-3.25) -- (3.75,-3.75) -- (0.25,-3.75) -- cycle;
\draw [boundary, b] (0,0) -- (4,0) -- (4,-5) -- (0,-5) -- cycle;
\foreach \point in {0b, 1b, 2b, 3b, 4b, 5b, -1b,-2b,-3b,-4b}
\fill [vertex] (\point) circle (2pt);
\draw (5,-8.5) node {$=$};
\coordinate (0c) at (8,-6);
\coordinate (1c) at (10,-7);
\coordinate (-1c) at (6,-7);
\coordinate (2c) at (10,-8);
\coordinate (-2c) at (6,-8);
\coordinate (3c) at (10,-9);
\coordinate (-3c) at (6,-9);
\coordinate (4c) at (10,-10);
\coordinate (-4c) at (6,-10);
\coordinate (5c) at (8,-11);
\filldraw [filling, c] (0c) -- (4,0) -- (1c) .. controls (3.5,-1) and (0.5,-4) .. (-4c) -- (-3c) arc (-90:90:0.5) -- (-1c) to [bend right=15] (0c);
\filldraw [filling, c] (2c) arc (90:270:0.5) -- cycle;
\filldraw [filling, c] (4c) -- (4,-5) -- (5c) to [bend left=15] (4c);
\draw [suture, c] (0c) to [bend left=15] (-1c);
\draw [suture, c] (2c) arc (90:270:0.5);
\draw [suture, c] (-2c) arc (90:-90:0.5);
\draw [suture, c] (1c) .. controls (3.5,-1) and (0.5,-4) .. (-4c);
\draw [suture, c] (4c) to [bend right=15] (5c);
\draw [boundary, c] (0,0) -- (4,0) -- (4,-5) -- (0,-5) -- cycle;
\foreach \point in {0c, 1c, 2c, 3c, 4c, 5c, -1c,-2c,-3c,-4c}
\fill [vertex] (\point) circle (2pt);
\draw (11,-8.5) node {$+$};
\coordinate (0d) at (14,-6);
\coordinate (1d) at (16,-7);
\coordinate (-1d) at (12,-7);
\coordinate (2d) at (16,-8);
\coordinate (-2d) at (12,-8);
\coordinate (3d) at (16,-9);
\coordinate (-3d) at (12,-9);
\coordinate (4d) at (16,-10);
\coordinate (-4d) at (12,-10);
\coordinate (5d) at (14,-11);
\filldraw [filling, d] (0d) -- (4,0) -- (1d) .. controls (3.5,-1) and (0.5,-2) .. (-2d) -- (-1d) to [bend right=15] (0d);
\filldraw [filling, d] (2d) arc (90:270:0.5) -- cycle;
\filldraw [filling, d] (-3d) .. controls (0.5,-3) and (3.5,-4) .. (4d) -- (4,-5) -- (5d) to [bend right=15] (-4d) -- cycle;
\draw [suture, d] (0d) to [bend left=15] (-1d);
\draw [suture, d] (2d) arc (90:270:0.5);
\draw [suture, d] (1d) .. controls (3.5,-1) and (0.5,-2) .. (-2d);
\draw [suture, d] (-3d) .. controls (0.5,-3) and (3.5,-4) .. (4d);
\draw [suture, d] (-4d) to [bend left=15] (5d);
\draw [boundary, d] (0,0) -- (4,0) -- (4,-5) -- (0,-5) -- cycle;
\foreach \point in {0d, 1d, 2d, 3d, 4d, 5d, -1d,-2d,-3d,-4d}
\fill [vertex] (\point) circle (2pt);
\draw (17,-8.5) node {$+$};
\coordinate (0e) at (20,-6);
\coordinate (1e) at (22,-7);
\coordinate (-1e) at (18,-7);
\coordinate (2e) at (22,-8);
\coordinate (-2e) at (18,-8);
\coordinate (3e) at (22,-9);
\coordinate (-3e) at (18,-9);
\coordinate (4e) at (22,-10);
\coordinate (-4e) at (18,-10);
\coordinate (5e) at (20,-11);
\filldraw [filling, e] (0e) -- (4,0) -- (1e) to [bend left=15] (0e);
\filldraw [filling, e] (-1e) .. controls (0.5,-1) and (3.5,-2) .. (2e) -- (3e) .. controls (3.5,-3) and (0.5,-4) .. (-4e) -- (-3e) arc (-90:90:0.5) -- (-1e);
\filldraw [filling, e] (4e) to [bend right=15] (5e) -- (4,-5) -- cycle;
\draw [suture, e] (0e) to [bend right=15] (1e);
\draw [suture, e] (-1e) .. controls (0.5,-1) and (3.5,-2) .. (2e);
\draw [suture, e] (-2e) arc (90:-90:0.5);
\draw [suture, e] (3e) .. controls (3.5,-3) and (0.5,-4) .. (-4e);
\draw [suture, e] (4e) to [bend right=15] (5e);
\draw [boundary, e] (0,0) -- (4,0) -- (4,-5) -- (0,-5) -- cycle;
\foreach \point in {0e, 1e, 2e, 3e, 4e, 5e, -1e,-2e,-3e,-4e}
\fill [vertex] (\point) circle (2pt);
\draw (23,-8.5) node {$+$};
\coordinate (0f) at (26,-6);
\coordinate (1f) at (28,-7);
\coordinate (-1f) at (24,-7);
\coordinate (2f) at (28,-8);
\coordinate (-2f) at (24,-8);
\coordinate (3f) at (28,-9);
\coordinate (-3f) at (24,-9);
\coordinate (4f) at (28,-10);
\coordinate (-4f) at (24,-10);
\coordinate (5f) at (26,-11);
\filldraw [filling, f] (0f) -- (4,0) -- (1f) to [bend left=15] (0f);
\filldraw [filling, f] (-1f) .. controls (0.5,-1) and (3.5,-2) .. (2f) -- (3f) .. controls (3.5,-3) and (0.5,-2) .. (-2f) -- cycle;
\filldraw [filling, f] (-3f) .. controls (0.5,-3) and (3.5,-4) .. (4f) -- (4,-5) -- (5f) to [bend right=15] (-4f) -- cycle;
\draw [suture, f] (0f) to [bend right=15] (1f);
\draw [suture, f] (-1f) .. controls (0.5,-1) and (3.5,-2) .. (2f);
\draw [suture, f] (-2f) .. controls (0.5,-2) and (3.5,-3) .. (3f);
\draw [suture, f] (-3f) .. controls (0.5,-3) and (3.5,-4) .. (4f);
\draw [suture, f] (-4f) to [bend left=15] (5f);
\draw [boundary, f] (0,0) -- (4,0) -- (4,-5) -- (0,-5) -- cycle;
\foreach \point in {0f, 1f, 2f, 3f, 4f, 5f, -1f, -2f, -3f, -4f}
\fill [vertex] (\point) circle (2pt);
\draw (5, -13) node {$=$};
\draw (8,-13) node {$\Gamma_{ppqq}$};
\draw (11,-13) node {$+$};
\draw (14,-13) node {$\Gamma_{pqqp}$};
\draw (17,-13) node {$+$};
\draw (20,-13) node {$\Gamma_{qppq}$};
\draw (23,-13) node {$+$};
\draw (26,-13) node {$\Gamma_{qpqp}$};
\draw (5,-15) node {$\cong$};
\draw (8,-15) node {$ppqq$};
\draw (11,-15) node {$+$};
\draw (14,-15) node {$pqqp$};
\draw (17,-15) node {$+$};
\draw (20,-15) node {$qppq$};
\draw (23,-15) node {$+$};
\draw (26,-15) node {$qpqp$};
\end{tikzpicture}
\]
We should point out that this is not the only interesting basis of $V_n$. In \cite{Me12_itsy_bitsy} we defined a large class of bases, one from any \emph{quadrangulation} of a surface.
\section{Contact topology}
As it turns out, all these curves on surfaces and stuffing into cylinders and $120^\circ$ rotations describe 3-dimensional contact topology.
\subsection{Chord diagrams and contact structures}
First of all, \emph{a chord diagram $\Gamma$ on a disc $D$ describes a contact structure $\xi_\Gamma$ on $D \times I$}. This contact structure $\xi_\Gamma$ consists of planes which are, roughly (and inaccurately) speaking,
\begin{itemize}
\item
Tangent to $D$ around the boundary of $D$
\item
``Perpendicular'' to $D$ precisely along $\Gamma$.\footnote{This statement makes no sense: a contact manifold has no metric, and no notion of perpendicularity! Rather, we should say that the vector field in the $I$ direction lies in $\xi$.}
\end{itemize}
In the simple example below of a disc with two chords, as we proceed from one side to the other, crossing both chords, the contact planes rotate through a full $360^\circ$. A rough (and inaccurate) interpretation of the contact condition of non-integrability is that if we follow a curve $C$ tangent to the contact structure $\xi$, dotted in the diagram, then the contact planes always rotate in the same direction about $C$.
\begin{center}
\includegraphics[scale=1]{convex_disc_b-eps-converted-to.pdf}
\end{center}
If we colour one side of the contact planes white and the other side black (grey), then the black and white regions in a chord diagram record which side of the contact planes is visible from above.
\begin{center}
\includegraphics[scale=1]{convex_disc-eps-converted-to.pdf}
\end{center}
It turns out that, in a sense which can be made precise, there is, up to isotopy, only one way to draw contact planes consistent with the chords. The lines of intersection of the contact planes with the disc $D$ trace out a foliation called the \emph{characteristic foliation}, some curves of which are dotted in the diagrams. The characteristic foliation determines the germ of a contact structure near $D$. The characteristic foliation is transverse to the chords $\Gamma$ (known as a \emph{dividing set} in contact geometry) and in fact can be directed by a vector field which exponentially dilates an area form on each component of $D \backslash \Gamma$, exiting through $\Gamma$. All such foliations compatible with $\Gamma$ give isotopic germs of contact structures. This is all part of the theory of \emph{convex surfaces} developed by Giroux \cite{Gi91} in 1991.
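The non-integrability condition alluded to above can be stated precisely; the following is a standard fact of contact geometry, not specific to this text. A contact structure is the kernel of a $1$-form $\alpha$ with $\alpha \wedge d\alpha \neq 0$ everywhere. The standard tight contact structure on $\mathbb{R}^3$ illustrates this:
\[
\xi_{\mathrm{std}} = \ker \left( dz - y \, dx \right),
\qquad
\alpha \wedge d\alpha = (dz - y \, dx) \wedge (dx \wedge dy) = dx \wedge dy \wedge dz \neq 0.
\]
Moving in the $y$ direction, the planes $\ker \alpha$ rotate without ever becoming tangent to a surface, matching the picture of planes always rotating in the same direction about a curve.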
\subsection{Contact structures and overtwisted discs}
Given a 3-manifold $M$, it's an interesting and difficult question to find all the isotopy classes of contact structures on $M$.
Eliashberg in \cite{ElOT} showed that there are fundamentally two types of contact structures, called \emph{overtwisted} and \emph{tight}. An overtwisted contact structure is one that contains an object called an \emph{overtwisted disc}; a tight contact structure is one that does not.
An \emph{overtwisted disc} is a neighbourhood of a disc with a contact structure as shown. It corresponds to seeing a \emph{contractible closed curve} in a chord diagram / dividing set on a disc.
\begin{center}
\includegraphics[scale=1]{overtwisted_disc-eps-converted-to.pdf}
\end{center}
Eliashberg in 1989 \cite{ElOT} proved that the classification of overtwisted contact structures on a 3-manifold $M$ is equivalent to the homotopy classification of plane fields on $M$. In this case contact geometry reduces to homotopy theory (which is well understood) and offers nothing new. So contact topologists tend to regard an overtwisted disc as ``spoiling'' a contact structure or rendering it trivial. It is quite surprising that the presence of one particular disc in a contact 3-manifold has such global topological consequences.
The \emph{tight} contact structures on $M$, on the other hand, offer important information about the topology of $M$. Not every 3-manifold has a tight contact structure, as Etnyre and Honda proved in 1999 \cite{EH99}; and the number of tight contact structures on a $3$-manifold depends intricately on the topology of $M$ (see, e.g., \cite{Gi00, GiBundles, Hon00I, Hon00II}).
This makes it quite reasonable to regard a diagram on a disc containing a closed red curve as ``trivial'', and to count it as $0$, as in this figure seen previously:
\[
\begin{tikzpicture}[
scale=0.9,
suture/.style={thick, draw=red},
boundary/.style={ultra thick},
vertex/.style={draw=red, fill=red}]
\coordinate [label = above:{$0$}] (0b) at ($ (90:2) + (8,0) $);
\coordinate [label = above right:{$1$}] (1b) at ($ (54:2) + (8,0) $);
\coordinate [label = above right:{$2$}] (2b) at ($ (18:2) + (8,0) $);
\coordinate [label = right:{$3$}] (3b) at ($ (-18:2) + (8,0) $);
\coordinate [label = below right:{$4$}] (4b) at ($ (-54:2) + (8,0) $);
\coordinate [label = below right:{$5$}] (5b) at ($ (-90:2) + (8,0) $);
\coordinate [label = below:{$-4$}] (6b) at ($ (-126:2) + (8,0) $);
\coordinate [label = below left:{$-3$}] (7b) at ($ (-162:2) + (8,0) $);
\coordinate [label = below left:{$-2$}] (8b) at ($ (162:2) + (8,0) $);
\coordinate [label = left:{$-1$}] (9b) at ($ (126:2) + (8,0) $);
\filldraw[fill=black!10!white, draw=none] (2b) arc (18:-18:2) -- (8b) arc (162:126:2) to [bend right=90] (0b) arc (90:54:2) to [bend right=90] (2b);
\filldraw[fill=black!10!white, draw=none] (4b) to [bend right=90] (5b) arc (-90:-54:2);
\filldraw[fill=black!10!white, draw=none] (6b) to [bend right=90] (7b) arc (-162:-126:2);
\filldraw[fill=black!10!white, draw=red] ($ (36:1.8) + (8,0) $) circle (0.1);
\draw[suture] (0b) to [bend left=90] (9b);
\draw[suture] (1b) to [bend right=90] (2b);
\draw[suture] (3b) -- (8b);
\draw[suture] (4b) to [bend right=90] (5b);
\draw[suture] (6b) to [bend right=90] (7b);
\draw [boundary] (8,0) circle (2 cm);
\foreach \point in {0b, 1b, 2b, 3b, 4b, 5b, 6b, 7b, 8b, 9b}
\fill [vertex] (\point) circle (2pt);
\end{tikzpicture}
\]
\subsection{Contact structures on spheres and balls}
Contact structures in the neighbourhood of a sphere $S^2$ are again given by dividing sets on $S^2$, which we continue to draw in red. It turns out that there is essentially only one tight contact structure in the neighbourhood of an $S^2$, and that is given by a \emph{single} connected curve. Any dividing set with more than one curve gives an overtwisted contact structure.
\begin{center}
\includegraphics[scale=0.3]{sphere_tight-eps-converted-to.pdf}
\quad
\includegraphics[scale=0.3]{sphere_OT-eps-converted-to.pdf}
\end{center}
The sphere on the left has a tight contact neighbourhood. The sphere on the right has an overtwisted contact neighbourhood, and boundaries of two overtwisted discs are drawn in dashed black.
Now consider each sphere $S^2$ as the boundary of a ball $B^3$. Contact structures on balls were classified by Eliashberg in 1992 \cite{ElMartinet}. Putting aside the overtwisted structures, there is essentially only one \emph{tight} contact structure on the ball, up to isotopy. Given a dividing set on the boundary sphere which is connected, there is an essentially unique way to fill it in.
\subsection{Contact corners}
When considering a surface $S$ with boundary $\partial S$ in a contact 3-manifold $(M, \xi)$, we often require the contact planes to be tangent to the boundary $\partial S$. Above, all our diagrams of discs with contact planes have been drawn in this way.
When two surfaces meet along their boundary, forming a \emph{corner}, the dividing sets do not intersect along the corner --- with our rough interpretation of the dividing set as ``where contact planes are perpendicular to the surface'', this would mean that the contact planes are perpendicular to both surfaces meeting at the corner, as well as tangent to the corner, which is impossible.
Rather, the contact planes rotate around the corner curve, and the dividing sets \emph{interleave} as shown.
\begin{center}
\includegraphics[scale=0.6]{corner_contact_0-eps-converted-to.pdf}
\quad
\includegraphics[scale=0.6]{corner_contact_1-eps-converted-to.pdf}
\mathbf{e}nd{center}
It's not too difficult to believe, then, that if we \emph{round} the corners, then dividing curves behave as we proposed earlier.
\begin{center}
\includegraphics[scale=0.6]{corner_contact_2-eps-converted-to.pdf}
\end{center}
It follows then that the ``inner product'' on chord diagrams, involving insertion into a cylinder, has a simple contact geometry interpretation.
\begin{prop}
Let $\Gamma_0, \Gamma_1$ be chord diagrams. The following are equivalent:
\begin{enumerate}
\item
$\langle \Gamma_0 | \Gamma_1 \rangle = 1$.
\item
The solid cylinder with dividing set $\Gamma_0$ on the bottom and $\Gamma_1$ on the top has a tight contact structure.
\end{enumerate}
\qed
\end{prop}
The ``finger moves'' we saw earlier for curves on the cylinder now correspond simply to isotopy of contact structures near the sphere, and in the ball.
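To make the curve-counting picture concrete, here is a small Python sketch (ours, purely illustrative, not from the text). It encodes a chord diagram as a perfect matching on the $2n$ boundary points and glues a bottom and a top diagram into a sphere. The corner-rounding convention used --- an assumption on our part --- is that the side arc from bottom point $i$ runs to top point $i+1 \pmod{2n}$; reversing the shift swaps the roles of the two diagrams. The pairing is $1$ exactly when the glued dividing set is a single closed curve.

```python
# Illustrative model of the pairing <Gamma_0 | Gamma_1> for chord diagrams.
# A diagram on 2n boundary points is a list `m` with m[i] the chord partner
# of point i.  Gluing bottom and top discs into a sphere uses side arcs
# joining bottom point i to top point i+1 (mod 2n) -- a corner-rounding
# convention assumed here, not taken from the text.

def count_curves(bottom, top):
    """Number of closed dividing curves on the glued sphere."""
    m = len(bottom)
    seen = set()
    curves = 0
    for start in range(m):
        if start in seen:
            continue
        curves += 1
        i = start
        while True:
            seen.add(i)
            j = bottom[i]          # follow the bottom chord
            seen.add(j)
            t = (j + 1) % m        # side arc up to the top disc
            i = (top[t] - 1) % m   # top chord, then side arc back down
            if i == start:
                break
    return curves

def pairing(bottom, top):
    """1 if the glued sphere has a connected dividing set (tight), else 0."""
    return 1 if count_curves(bottom, top) == 1 else 0
```

For $n = 2$, the two diagrams $\{(0,1),(2,3)\}$ and $\{(0,3),(1,2)\}$ each pair to $1$ with themselves (the vertically invariant contact structure on the cylinder is tight) and to $0$ with each other.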
\subsection{Bypasses}
Suppose we start from a surface $S$ with a dividing set on it --- such as a disc with a chord diagram. This describes a contact structure near $S$; effectively, on $S \times I$. We could then try to build up a contact manifold by attaching more pieces on top of $S \times I$ and gluing them up.
It turns out that any contact 3-manifold can be built up in this way, using only fundamental building blocks called \emph{bypasses}. Drawn in terms of chord diagrams and cylinders, a bypass is actually something we saw earlier in computing $\langle \Gamma_{pq} \; | \; \Gamma_{qp} \rangle$. It is drawn below.
\begin{center}
\includegraphics[scale=0.5]{stack_2-eps-converted-to.pdf}
\end{center}
A bypass is nothing but a particular contact 3-ball. We consider the cylinder as bounding a 3-ball; from Eliashberg's theorem we know that the contact structure on the boundary of the cylinder extends uniquely throughout the ball to a tight contact structure.
Thus, the elementary building blocks of contact topology are, in a certain sense, precisely the elementary pawn moves
\[
\begin{array}{c}
pq \\
\begin{tabularx}{0.15\textwidth}{|X|X|}
\hline
\WhitePawnOnWhite & \BlackPawnOnWhite \\
\hline
\end{tabularx}
\end{array}
\quad \rightarrow \quad
\begin{array}{c}
qp \\
\begin{tabularx}{0.15\textwidth}{|X|X|}
\hline
\BlackPawnOnWhite & \WhitePawnOnWhite \\
\hline
\end{tabularx}
\end{array}
\]
However, beware! If you stack two bypasses directly on top of each other, you will find an overtwisted disc. We might say that the shortest step in contact topology is half way to oblivion.
\begin{center}
\includegraphics[scale=0.4]{stack_3-eps-converted-to.pdf}
\end{center}
To build a tight contact structure, you'll need to place bypasses in more sophisticated locations than just on top of each other.
We see that addition of a bypass on top of a disc performs a $120^\circ$ rotation in the chord diagram. This is precisely the operation described earlier, by which we defined the vector space $V_n$, setting triples of bypass-related chord diagrams to sum to zero.
In fact, our creation and annihilation operators, which added or closed off curves in a chord diagram, can also be seen as building onto an existing contact structure.
\subsection{The contact category}
Honda has introduced the concept of the \emph{contact category} \cite{HonCat}.
For a disc, the contact category $\mathcal{C}_n$ consists of objects and morphisms as follows.
\begin{enumerate}
\item
The \emph{objects} are chord diagrams with $n$ chords, i.e. contact structures near discs $D$.
\item
The \emph{morphisms} $\Gamma_0 \longrightarrow \Gamma_1$ are contact structures on the cylinder $D \times I$, with $\Gamma_0$ on the bottom and $\Gamma_1$ on the top.
\item
\emph{Composition of morphisms} is given by stacking cylinders on top of each other.
\end{enumerate}
Honda showed that this category has many of the properties of a \emph{triangulated category}. In particular, bypass triples behave very much like exact triangles. If $\Gamma, \Gamma', \Gamma''$ are successively obtained from each other by adding bypasses, then the composition of any two is overtwisted, hence $0$.
\[
\begin{tikzpicture}[
scale=0.9,
suture/.style={thick, draw=red},
boundary/.style={ultra thick},
vertex/.style={draw=red, fill=red}]
\coordinate (0a) at ($ (90:1) + (-150:3) $);
\coordinate (1a) at ($ (30:1) + (-150:3) $);
\coordinate (2a) at ($ (-30:1) + (-150:3) $);
\coordinate (3a) at ($ (-90:1) + (-150:3) $);
\coordinate (4a) at ($ (-150:1) + (-150:3) $);
\coordinate (5a) at ($ (150:1) + (-150:3) $);
\filldraw[fill=black!10!white, draw=none] (4a) arc (210:150:1) to [bend right=90] (0a) arc (90:30:1) -- cycle;
\filldraw[fill=black!10!white, draw=none] (2a) to [bend right=90] (3a) arc (-90:-30:1);
\draw [boundary] (-150:3) circle (1 cm);
\draw [suture] (5a) to [bend right=90] (0a);
\draw [suture] (4a) -- (1a);
\draw [suture] (2a) to [bend right=90] (3a);
\draw (-150:1.5) node {$\Gamma'$};
\coordinate (0) at ($ (90:1) + (90:3) $);
\coordinate (1) at ($ (30:1) + (90:3) $);
\coordinate (2) at ($ (-30:1) + (90:3) $);
\coordinate (3) at ($ (-90:1) + (90:3) $);
\coordinate (4) at ($ (-150:1) + (90:3) $);
\coordinate (5) at ($ (150:1) + (90:3) $);
\filldraw[fill=black!10!white, draw=none] (0) arc (90:30:1) to [bend right=90] (2) arc (-30:-90:1) -- cycle;
\filldraw[fill=black!10!white, draw=none] (4) to [bend right=90] (5) arc (150:210:1);
\draw [boundary] (90:3) circle (1 cm);
\draw [suture] (1) to [bend right=90] (2);
\draw [suture] (0) -- (3);
\draw [suture] (4) to [bend right=90] (5);
\draw (90:1.5) node {$\Gamma$};
\coordinate (0b) at ($ (90:1) + (-30:3) $);
\coordinate (1b) at ($ (30:1) + (-30:3) $);
\coordinate (2b) at ($ (-30:1) + (-30:3) $);
\coordinate (3b) at ($ (-90:1) + (-30:3) $);
\coordinate (4b) at ($ (-150:1) + (-30:3) $);
\coordinate (5b) at ($ (150:1) + (-30:3) $);
\filldraw[fill=black!10!white, draw=none] (2b) arc (-30:-90:1) to [bend right=90] (4b) arc (210:150:1) -- cycle;
\filldraw[fill=black!10!white, draw=none] (0b) to [bend right=90] (1b) arc (30:90:1);
\draw [boundary] (-30:3) circle (1 cm);
\draw [suture] (3b) to [bend right=90] (4b);
\draw [suture] (2b) -- (5b);
\draw [suture] (0b) to [bend right=90] (1b);
\draw (-30:1.5) node {$\Gamma''$};
\draw [->] (115:1.5) -- (-165:1.5);
\draw [->] (-135:1.5) -- (-45:1.5);
\draw [->] (-15:1.5) -- (75:1.5);
\end{tikzpicture}
\]
But we have seen here that chord diagrams themselves can be described by pawns and chessboards --- and the dynamics of pawns on chessboards is itself essentially a \emph{partial order}, which is a type of category. Our work in \cite{Me09Paper} defined a \emph{contact 2-category} out of this geometric structure.
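The pawn dynamics can be made concrete in a few lines. In this sketch (ours, for illustration; the encoding of configurations as words in $p$ and $q$ is our assumption), the elementary move rewrites an adjacent $pq$ to $qp$, and reachability under such moves is the partial order in question.

```python
# Illustrative sketch: pawn configurations as words in {p, q}; the elementary
# pawn move swaps an adjacent "pq" to "qp".  Reachability under these moves
# is a partial order on the words with a fixed number of p's and q's.

def moves(word):
    """Yield all words obtained from `word` by one elementary move pq -> qp."""
    for i in range(len(word) - 1):
        if word[i:i + 2] == "pq":
            yield word[:i] + "qp" + word[i + 2:]

def reachable(word):
    """All configurations reachable from `word` (including itself)."""
    seen = {word}
    frontier = [word]
    while frontier:
        w = frontier.pop()
        for v in moves(w):
            if v not in seen:
                seen.add(v)
                frontier.append(v)
    return seen
```

From $ppqq$ every arrangement of two $p$'s and two $q$'s is reachable, while $qqpp$ admits no move at all: it is minimal in the order.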
In fact, looking now at the vector space $V_n$ we defined, we have essentially
\[
V_n = \frac{\mathbb{Z}_2 \langle \text{Chord diagrams with $n$ chords} \rangle }{ \text{Bypass relation} } = \frac{\mathbb{Z}_2 \langle \text{Objects of $\mathcal{C}_n$} \rangle }{ \text{Exact triangles sum to zero} }
\]
and thus $V_n$ is the \emph{Grothendieck group} of $\mathcal{C}_n$.
\subsection{Combinatorics of contact geometry}
To summarise, we've now seen various ways in which the combinatorics of chords on discs and cylinders, quantum pawn dynamics, and contact geometry are connected:
\begin{enumerate}
\item
Any chord diagram describes a contact structure near a disc.
\item
The ``inner product'' $\langle \cdot | \cdot \rangle$ can be interpreted as pawn dynamics on a chessboard, insertion of chord diagrams into a cylinder, or the existence of tight contact structures in solid cylinders.
\item
Creation and annihilation operators can be defined as pawn/antipawn creation/annihilation, or the insertion/closure of curves in a chord diagram, or building a contact structure.
\item
The adjoint property of creation and annihilation can be interpreted via the combinatorics of chessboards, finger moves on cylinders, or isotopy of contact structures on the sphere.
\item
A diagram on a disc with a closed curve describes an overtwisted contact structure, hence is ``trivial'' in terms of contact geometry. Similarly for curves on a cylinder with more than one component.
\item
Bypass triples of chord diagrams correspond to: building blocks of contact 3-manifolds; triples which must sum to zero for the ``insertion into cylinder inner product'' $\langle \cdot | \cdot \rangle$ to be nondegenerate; and exact triangles in the contact category.
\item
Chessboards correspond to a special class of chord diagrams, which look like slalom ski slopes, forming a basis for $V_n$, the Grothendieck group of the contact category of the disc.
\end{enumerate}
Much of this structure is similar to \emph{topological quantum field theories} (TQFTs). In particular, we have assigned vector spaces $V_n$ to discs, which are 2-dimensional. To cylinders, which we can think of as $(2+1)$-dimensional, or as the ``time evolution'' of a disc, we have associated an element of $\mathbb{Z}_2$, or a morphism in the contact category. And we have the ``inner product'' which describes not so much a probability amplitude, but a \emph{possibility amplitude} for a tight contact structure to evolve from one disc to another. We can call this structure \emph{contact TQFT} and summarise much of what we have said as
\[
\text{Contact TQFT} \quad \cong \quad \text{QPD}.
\]
\section{Holomorphic invariants}
In fact, all of this structure actually arose from some of the holomorphic invariants mentioned earlier.
\subsection{Sutured Floer homology}
Sutured Floer homology is a variant of Heegaard Floer homology. A sutured 3-manifold $(M, \Gamma)$ is (roughly) a 3-manifold $M$ with boundary, and some curves $\Gamma$ on the boundary $\partial M$, which divide $\partial M$ into positive and negative regions. Sutured 3-manifolds were studied by Gabai in the 1980s in the context of foliation theory \cite{Gabai83}, but also describe contact 3-manifolds with boundary --- indeed, in the diagrams we have drawn with discs, red curves, and complementary regions coloured black and white, the red curves are precisely sutures. There are close relationships between foliation theory, sutured manifolds and contact topology \cite{HKM02, Eliashberg_Thurston}.
Sutured Floer homology, again speaking roughly and imprecisely, is defined as follows. A sutured 3-manifold\footnote{Strictly speaking, a \emph{balanced} sutured 3-manifold.} $(M, \Gamma)$ has a Heegaard decomposition, consisting of a Heegaard surface with boundary $\Sigma$, and two sets of curves $\alpha_1, \ldots, \alpha_k$ and $\beta_1, \ldots, \beta_k$ on $\Sigma$. The manifold can be recovered from the decomposition by gluing to $\Sigma \times [0,1]$ discs along $\alpha_i \times \{0\}$ and $\beta_i \times \{1\}$. The sutures lie along $\partial \Sigma \times [0,1]$, the negative part of the boundary is $\Sigma \times \{0\}$ surgered along the $\alpha_i \times \{0\}$, and the positive part of the boundary is $\Sigma \times \{1\}$ surgered along the $\beta_i \times \{1\}$. We then consider holomorphic curves
\[
u \; : \; S \longrightarrow \Sigma \times I \times \mathbb{R}
\]
where $S$ is a Riemann surface with boundary and with negative boundary punctures $p_1, \ldots, p_k$ and positive boundary punctures $q_1, \ldots, q_k$, satisfying conditions including the following\footnote{There are also some more technical conditions including conditions on the complex structure, non-constant projections of $u$, and finite energy. We just highlight the fact that the boundary conditions come from 3-manifold topology. See \cite{Lip} and \cite{Ju06}.}:
\begin{enumerate}
\item
$u$ sends $\partial S$ to
\[
\left( \left( \cup_i \alpha_i \right) \times \{1\} \times \mathbb{R} \right) \cup \left( \left( \cup_i \beta_i \right) \times \{0\} \times \mathbb{R} \right).
\]
\item
$u$ asymptotically sends each $p_i$ to $-\infty$ in the $\mathbb{R}$ coordinate, and each $q_i$ to $+\infty$.
\item
Each $u^{-1}(\alpha_i \times \{1\} \times \mathbb{R})$ and $u^{-1} (\beta_i \times \{0\} \times \mathbb{R})$ consists of precisely one segment of $\partial S \backslash \{p_1, \ldots, p_k, q_1, \ldots, q_k\}$.
\end{enumerate}
The point I want to make here is that, whatever the precise boundary conditions for holomorphic curves, they are defined in terms of the Heegaard decomposition. At $+\infty$, the curve $u(S)$ (or rather, its projection to $\Sigma$) runs through several intersections of $\alpha$ and $\beta$ curves. Indeed, at $+\infty$ we obtain a set of points $z_1, \ldots, z_k \in \Sigma$, such that
\[
z_1 \in \alpha_1 \cap \beta_{\sigma(1)}, \quad
z_2 \in \alpha_2 \cap \beta_{\sigma(2)}, \quad
\ldots, \quad
z_k \in \alpha_k \cap \beta_{\sigma(k)}
\]
for some permutation $\sigma$. We obtain a similar set of intersection points at $-\infty$.
We define a chain complex $\widehat{CF}$ generated over $\mathbb{Z}_2$ by sets of such intersection points,
\[
{\bf x} = \{z_1, \ldots, z_k\}.
\]
If we fix ${\bf x}$ and ${\bf y}$ to be two sets of intersection points, then we consider the moduli space $\mathcal{M}({\bf x}, {\bf y})$ with boundary conditions given by ${\bf x}$ at $+\infty$ and ${\bf y}$ at $-\infty$. The dimension of this moduli space can be given in terms of the topology of ${\bf x}$ and ${\bf y}$ (an \emph{index formula}). It follows, with some serious analysis, that we can define a differential to count index-1 curves
\[
\partial {\bf x} = \sum_{\dim \mathcal{M}({\bf x}, {\bf y}) = 1} \# \widehat{\mathcal{M}}({\bf x}, {\bf y}) \cdot {\bf y}
\]
and $\partial^2 = 0$. (Here $\widehat{\mathcal{M}}$ is the quotient of the moduli space $\mathcal{M}$ by the action of translation along $\mathbb{R}$; so $\widehat{\mathcal{M}}$ consists of finitely many points.) Then we may take the \emph{homology}; Ozsv\'{a}th--Szab\'{o} (for closed manifolds \cite{OS04Closed, OS04Prop}) and Juh\'{a}sz (for sutured manifolds \cite{Ju06}) showed that this homology is \emph{independent of the choice of Heegaard decomposition} (and independent of other technical choices made along the way). That is, the resulting homology is a sutured manifold invariant which we may call $SFH(M, \Gamma)$.
Sutured Floer homology also has the nice property that any contact structure $\xi$ on $(M, \Gamma)$ gives a \emph{contact element} $c(\xi)$ in $SFH(M, \Gamma)$ \cite{HKM09}.\footnote{This element is only defined up to sign; however if we take $\mathbb{Z}_2$ coefficients, no ambiguity arises.}
It turns out that our subject matter in this note has been sutured Floer homology. In particular, $SFH$ of solid tori $D^2 \times S^1$. If we let $F_n$ consist of $2n$ points on the boundary $\partial D^2$, so that $F_n \times S^1$ forms a set of sutures on $D^2 \times S^1$, then we have the following.
\begin{thm}[\cite{Me09Paper}]
\[
V_n \cong SFH(D^2 \times S^1, F_n \times S^1).
\]
Moreover, any chord diagram $\Gamma$ in $V_n$ corresponds to a tight contact structure $\xi_\Gamma$ on $(D^2 \times S^1, F_n \times S^1)$, unique up to isotopy, and the isomorphism takes $\Gamma \mapsto c(\xi_\Gamma)$.
\end{thm}
Therefore, all of the structure we have been discussing is contained in sutured Floer homology of one of the simplest sutured 3-manifolds, namely a solid torus.
\subsection{Embedded contact homology and string homology}
Another holomorphic invariant is \emph{embedded contact homology}, defined by Hutchings \cite{Hutchings02}. Given a 3-manifold $M$, we consider a contact structure $\xi$ and a contact form $\alpha$. From the 1-form $\alpha$ there is a natural vector field called the \emph{Reeb vector field} $R$, which is defined by
\[
d\alpha \left( R, \cdot \right) = 0, \quad \alpha(R) = 1.
\]
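For example (a standard computation, included here as an illustration rather than taken from the surrounding text), for the standard contact form $\alpha = dz - y\,dx$ on $\mathbb{R}^3$ we have $d\alpha = dx \wedge dy$, so
\[
d\alpha(\partial_z, \cdot) = 0, \qquad \alpha(\partial_z) = 1,
\]
and hence $R = \partial_z$: the Reeb orbits are the vertical lines.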
Embedded contact homology counts holomorphic curves in the \emph{symplectization} $M \times \mathbb{R}$ with symplectic form $d(e^t \alpha)$, where $t$ is the coordinate on $\mathbb{R}$. The most important condition prescribed on holomorphic curves is that the curves must approach \emph{Reeb orbits}, i.e. orbits of $R$, as $t \to \pm \infty$. Roughly, given certain collections of Reeb orbits
\[
{\bf \gamma^+} = \{ \gamma_1^+, \ldots, \gamma_n^+ \}
\quad \text{and} \quad
{\bf \gamma^-} = \{ \gamma_1^-, \ldots, \gamma_n^- \},
\]
(some of which might be repeated and some of which might be covered multiple times), we consider the moduli space $\mathcal{M}({\bf \gamma^+}, {\bf \gamma^-})$ of \emph{embedded} holomorphic curves which approach ${\bf \gamma^+}$ at $+\infty$ and ${\bf \gamma^-}$ at $-\infty$. Again there is an index formula giving the dimension of the moduli space in terms of the contact geometry of the $\gamma_i^\pm$ and again, after a lot of work, it is possible to define a differential
\[
\partial {\bf \gamma^+} = \sum_{\dim \mathcal{M}({\bf \gamma^+}, {\bf \gamma^-}) = 1} \# \widehat{\mathcal{M}}({\bf \gamma^+}, {\bf \gamma^-}) \cdot {\bf \gamma^-}
\]
where $\partial^2 = 0$. The homology of this complex is \emph{embedded contact homology}, $ECH$. It's possible to define $ECH$ also for sutured manifolds \cite{CGHH}.
It has recently been shown that embedded contact homology is isomorphic to Heegaard Floer homology (see e.g. \cite{CGH10}). Note that $ECH$, even though it is defined in terms of a contact form, turns out not to depend on the contact structure at all, but is a smooth manifold invariant.
This deep isomorphism between $ECH$ and $HF$ implies that we should be able to obtain all of the combinatorial and algebraic structure we have discussed above, from embedded contact homology. Based on work of Cieliebak--Latschev \cite{CL}, as discussed in \cite{Mathews_Schoenfeld12_string}, one is led to consider the following ideas.
Take a disc $D$ with $2n$ points $F$ marked on the boundary, as we have seen with chord diagrams. But now consider sets of curves on $D$ with boundary $F$, which are not necessarily chord diagrams --- the curves may intersect. We call these \emph{string diagrams}.
We define a $\mathbb{Z}_2$ vector space freely generated by string diagrams, up to homotopy relative to boundary. On this vector space, there is a \emph{differential} $\partial$ defined by \emph{resolving crossings}. That is, given a string diagram $s$, $\partial s$ is the sum of string diagrams, each obtained by resolving one of the crossings of $s$ as shown.
\begin{center}
\begin{tikzpicture}[scale=1, string/.style={thick, draw=red, -to}]
\draw [string] (-1,0) -- (1,0);
\draw [string] (0,-1) -- (0,1);
\draw [shorten >=1mm, -to, decorate, decoration={snake,amplitude=.4mm, segment length = 2mm, pre=moveto, pre length = 1mm, post length = 2mm}]
(1.5,0) -- (2.5,0);
\draw [string] (3,0) -- (3.7,0) to [bend right=45] (4,0.3) -- (4,1);
\draw [string] (4,-1) -- (4,-0.3) to [bend left=45] (4.3,0) -- (5,0);
\end{tikzpicture}
\end{center}
It turns out that $\partial^2 = 0$, and the homology is something with which we are by now familiar.
\begin{thm}[\cite{Mathews_Schoenfeld12_string}]
\[
HS \cong V_n \cong \frac{\mathbb{Z}_2 \langle \text{chord diagrams on $(D^2, F)$} \rangle }{ \text{Bypass relation} }
\]
\mathbf{e}nd{thm}
The ``reason'' for this relation --- which is far from a proof --- is the following picture.
\begin{center}
\begin{tikzpicture}[
scale=1.2,
string/.style={thick, draw=red, postaction={nomorepostaction, decorate, decoration={markings, mark=at position 0.5 with {\arrow{>}}}}}]
\draw (-6,0) circle (1 cm);
\draw (3,0) circle (1 cm);
\draw (0,0) circle (1 cm);
\draw (-3,0) circle (1 cm);
\draw [string] ($ (-6,0) + (-90:1) $) to [bend right=45] ($ (-6,0) + (90:1) $);
\draw [string] ($ (-6,0) + (30:1) $) to [bend right=45] ($ (-6,0) + (210:1) $);
\draw [string] ($ (-6,0) + (150:1) $) to [bend right=45] ($ (-6,0) + (-30:1) $);
\draw [string] (30:1) arc (120:240:0.57735);
\draw [string] (0,-1) -- (0,1);
\draw [string] (150:1) arc (60:-60:0.57735);
\draw [string] (3,0) ++ (150:1) arc (-120:0:0.57735);
\draw [string] (3,0) ++ (30:1) -- ($ (3,0) + (210:1) $);
\draw [string] (3,0) ++ (-90:1) arc (180:60:0.57735);
\draw [string] (-3,0) ++ (30:1) arc (-60:-180:0.57735);
\draw [string] ($ (-3,0) + (150:1) $) -- ($ (-3,0) + (-30:1) $);
\draw [string] (-3,-1) arc (0:120:0.57735);
\draw (-7.5,0) node {$\partial$};
\draw (-4.5,0) node {$=$};
\draw (-1.5,0) node {$+$};
\draw (1.5,0) node {$+$};
\end{tikzpicture}
\end{center}
This idea of taking vector spaces generated by topological classes of curves, and resolving their crossings, is closer to the sorts of objects studied in \emph{string topology} (e.g. \cite{CS}) and as such we can say that contact topology and sutured Floer homology are expressed as a \emph{string homology}. Further details and generalisations of this result are given in \cite{Mathews_Schoenfeld12_string}.
\addcontentsline{toc}{section}{References}
\small
\end{document}
\begin{document}
\title{The Scaled Relative Graph of a Linear Operator}
\author{Richard Pates}
\thanks{Department of Automatic Control, Lund University, Box 118, SE-221 00, Lund, Sweden. \textit{E-mail:} \href{mailto:[email protected]}{\texttt{[email protected]}}}
\thanks{The author is a member of the ELLIIT Strategic Research Area at Lund University. This work was supported by the ELLIIT Strategic Research Area. This project has received funding from VR 2016-04764, SSF RIT15-0091 and ERC grant agreement No 834142.}
\subjclass[2021]{Primary 47A11, 47A12; Secondary 51M15}
\date{June 10, 2011 and, in revised form, ****.}
\maketitle
\begin{abstract}
The \gls{srg} of an operator is a subset of the complex plane. It captures several salient features of an operator, such as contractiveness, and can be used to reveal the geometric nature of many of the inequality based arguments used in the convergence analyses of fixed point iterations. In this paper we show that the \gls{srg} of a linear operator can be determined from the numerical range of a closely related linear operator. Furthermore we demonstrate that the \gls{srg} of a linear operator has a range of spectral and convexity properties, and satisfies an analogue of Hildebrandt's theorem.
\end{abstract}
\glsresetall
\section{Introduction}
The \gls{srg} was introduced by Ryu, Hannah and Yin in \cite{RHY19} as a geometric tool for the modular analysis of operators. The \gls{srg} of an operator is a subset of the complex plane that captures a number of important features of the operator, such as whether or not it is contractive. The \glspl{srg} of simpler operators can be combined in an intuitive graphical manner to bound the \glspl{srg} of the operators resulting from their algebraic composition. These rules for combining operators can be used as geometric analogues of the inequalities typically used in, for example, the convergence proofs of fixed point iterations. This has been used to give a unified geometric treatment of the convergence rates of a wide range of algorithms, including gradient descent, Douglas--Rachford splitting and the method of alternating projections.
The promise of the \gls{srg} extends far beyond the analysis of algorithms from convex optimization. As already noted in \cite{CFS21}, the modular fashion in which the \gls{srg} can be manipulated makes it an ideal candidate for dynamical system analysis, and the authors additionally give preliminary results connecting the \gls{srg} to classical tools from control theory. In order to unlock this potential, a better understanding of how to determine the \gls{srg} of an operator is required. For example, even the question of how to determine the \gls{srg} when the operator is a square matrix with real entries has only been fully resolved in the case that the matrix is normal, or of dimension 2 \cite{HRY20}.
Our primary motivation is to better understand the geometry of the \gls{srg}, building our intuition from the finite dimensional linear case, where the operators in question are matrices. However, from a theoretical perspective, the results from the matrix case can be pushed through to the case of linear operators on Hilbert spaces with little to no changes. Since such operators are relevant in a wide range of applications, particularly in the study of differential equations, this is the setting we will consider. Our main result is to show that the \gls{srg} of a linear operator can be determined from the numerical range of a closely related linear operator. This allows much of the machinery that has been developed to understand the numerical range to be applied in the \gls{srg} setting. We use this to show that the \gls{srg}, like the numerical range, has a range of convexity and spectral properties, and satisfies an analogue of Hildebrandt's theorem \cite{Hil66}. Despite these similarities, the convexity properties of the \gls{srg} are rooted in hyperbolic geometry, and its spectral properties capture information about the approximate point spectrum rather than the spectrum.
\Cref{sec:2} introduces the relevant concepts from the theory of linear operators and hyperbolic geometry, and also reviews the definition of the \gls{srg} and known results on the \gls{srg} of a matrix. In \cref{sec:3} we relate the \gls{srg} to the numerical range. \Cref{sec:31} establishes the connections in the case of complex Hilbert spaces. In this subsection we also characterise the spectral and convexity properties of the \gls{srg}, derive the analogue of Hildebrandt's theorem, and show how to plot the boundary of the \gls{srg} of an operator defined either by a matrix or a linear differential equation. Finally in \cref{sec:32} we show how to determine the \gls{srg} of a linear operator on a real Hilbert space using the results from \cref{sec:31}.
\section{Notation and preliminaries}\label{sec:2}
\subsection{Basic notation}
Throughout \ensuremath{\mathbb{F}}{} will denote either the \hypertarget{realfield}{\notation{real field}}, \ensuremath{\mathbb{R}}{}, or the \hypertarget{complexfield}{\notation{complex field}}, \ensuremath{\mathbb{C}}{}. When speaking geometrically we will also refer to \ensuremath{\mathbb{C}}{} as the \hypertarget{complexplane}{\notation{complex plane}}. The \hypertarget{complexconjugate}{\notation{complex conjugate}} of $z\in\ensuremath{\mathbb{C}}$ will be denoted by $\bar{z}$. A set $s$ is said to be \hypertarget{convex}{\notation{convex}} if $ts_1+\funof{1-t}s_2\in{}s$ for all $0\leq{}t\leq{}1$ and $s_1,s_2\in{}s$, and the \hypertarget{closure}{\notation{closure}} of $s$ is denoted by $\cl{s}$. Furthermore, the \hypertarget{convexhull}{\notation{convex hull}} $\hull{s}$ is defined to be the smallest \hyperlink{convex}{convex} set containing $s$, and the \hypertarget{boundary}{\notation{boundary}} of a set $s\subseteq\ensuremath{\mathbb{C}}$ will be denoted by $\partial{}s$ ($\cl{s}\cap\cl{\cfunof{\ensuremath{\mathbb{C}}\setminus{}s}}$). We will overload notation as appropriate to apply to sets, for example $\overline{\cfunof{z_1,z_2}}$ will denote the set $\cfunof{\overline{z}_1,\overline{z}_2}$, and more generally $h\funof{s}=\cfunof{h\funof{z}:z\in{}s}$.
\subsection{Operators on Hilbert spaces}\label{sec:11}
$\ensuremath{\mathcal{H}}$ denotes a \hypertarget{Hilbertspace}{\notation{Hilbert space}} over the \hyperlink{realfield}{field} \ensuremath{\mathbb{F}}{}, equipped with an \hypertarget{innerproduct}{\notation{inner product}} $\langle\cdot{,}\cdot\rangle:\ensuremath{\mathcal{H}}\times{}\ensuremath{\mathcal{H}}\rightarrow{}\ensuremath{\mathbb{F}}{}$ which defines a \hypertarget{norm}{\notation{norm}} $\norm{\cdot{}}=\sqrt{\langle\cdot{,}\cdot\rangle}$. $T:\ensuremath{\mathcal{H}}\rightarrow{}\ensuremath{\mathcal{H}}$ will be called a \hypertarget{boundedlinearoperator}{\notation{linear operator}} if it is linear, and $\sup\cfunof{\norm{T\hvect{x}}:\hvect{x}\in\ensuremath{\mathcal{H}},\norm{\hvect{x}}=1}<\infty$. The \hypertarget{identityoperator}{\notation{identity operator}} will be denoted by $I$ ($I\hvect{x}=\hvect{x}$ for all $\hvect{x}\in\ensuremath{\mathcal{H}}$).
To illustrate our results we will primarily consider the cases that
\begin{enumerate}
\item $\ensuremath{\mathcal{H}}$ is $\ensuremath{\mathbb{R}}^n$, equipped with the \hyperlink{innerproduct}{inner product} $\inner{\hvect{y}}{\hvect{x}}=\hvect{x}^\mathsf{T}\hvect{y}$;
\item $\ensuremath{\mathcal{H}}$ is $\ensuremath{\mathbb{C}}^n$, equipped with the \hyperlink{innerproduct}{inner product} $\inner{\hvect{y}}{\hvect{x}}=\overline{\hvect{x}}^\mathsf{T}\hvect{y}$;
\end{enumerate}
in which case the \hyperlink{boundedlinearoperator}{linear operators} correspond to the square matrices with entries in \ensuremath{\mathbb{R}}{} and \ensuremath{\mathbb{C}}{} respectively. We will also consider the \hyperlink{Hilbertspace}{Hilbert space} of complex valued Lebesgue \hypertarget{squareintegrablefunction}{\notation{square integrable functions}} $\ensuremath{\mathcal{L}^2\left(\mathbb{R}\right)}$, with \hyperlink{innerproduct}{inner product}
\[
\inner{\hvect{y}}{\hvect{x}}=\int_{\ensuremath{\mathbb{R}}}\overline{\hvect{x}\ensuremath{\left(t\right)}{}}^\mathsf{T}\hvect{y}\ensuremath{\left(t\right)}\,dt,\;\hvect{x},\hvect{y}\in\ensuremath{\mathcal{L}^2\left(\mathbb{R}\right)},
\]
and the \hyperlink{Hilbertspace}{Hilbert space} of complex valued \hypertarget{squaresummable}{\notation{square summable sequences}} $\ensuremath{\ell^2\left(\mathbb{N}\right)}$, with \hyperlink{innerproduct}{inner product}
\[
\inner{\hvect{y}}{\hvect{x}}=\sum_{j\in\mathbb{N}}\overline{\hvect{x}}_j\hvect{y}_j,\;\hvect{x},\hvect{y}\in\ensuremath{\ell^2\left(\mathbb{N}\right)}.
\]
We define the \hypertarget{graph}{\notation{graph}} of a \hyperlink{boundedlinearoperator}{linear operator} $T$ as
\[
\graph{T}=\cfunof{\funof{\hvect{x},T\hvect{x}}:\hvect{x}\in\ensuremath{\mathcal{H}}},
\]
and denote the \hypertarget{adjoint}{\notation{adjoint}} of $T$ as $T^*$ ($\inner{T\hvect{x}}{\hvect{y}}=\inner{\hvect{x}}{T^*\hvect{y}}$ for all $\hvect{x},\hvect{y}\in\ensuremath{\mathcal{H}}$). $T$ is said to be \hypertarget{boundedlyinvertible}{\notation{invertible}} if there is a \hyperlink{boundedlinearoperator}{linear operator} $S$ such that $TS=ST=I$, and we denote this inverse as $T^{-1}$. The \hypertarget{extendedspectrum}{\notation{spectrum}} of $T$ is defined to be the subset of the \hyperlink{complexplane}{complex plane}
\[
\spectrum{T}=\cfunof{z:z\in\ensuremath{\mathbb{F}},\funof{T-zI}\text{ is not \hyperlink{boundedlyinvertible}{invertible}}}.
\]
We additionally say that $\lambda\in\spectrum{T}$ is in the \hypertarget{approximatepointspectrum}{\notation{approximate point spectrum}} ($\lambda\in\spectrumap{T}$) if there exists a sequence of unit vectors $\hvect{x}_n$ such that $\lim_{n\rightarrow{}\infty}\norm{\funof{T-\lambda{}I}\hvect{x}_n}=0$. In the matrix case $\spectrum{T}=\spectrumap{T}$, and $\lambda$ is an eigenvalue of $T$ if and only if $\lambda\in\spectrum{T}$.
\subsection{The scaled relative graph}\label{sec:21}
We define the \gls{srg} of a \hyperlink{boundedlinearoperator}{linear operator} $T$ to be the subset of the \hyperlink{complexplane}{complex plane}
\[
\srg{T}=\cfunof{\frac{\norm{\hvect{y}}}{\norm{\hvect{x}}}\exp\funof{\pm{}i\arccos\funof{\frac{\real{\inner{\hvect{y}}{\hvect{x}}}}{\norm{\hvect{y}}\norm{\hvect{x}}}}}:\hvect{x}\in\ensuremath{\mathcal{H}},\hvect{y}=T\hvect{x},\norm{\hvect{x}}=1}.
\]
For \hyperlink{boundedlinearoperator}{linear operators} this definition coincides with the more general definition of the \gls{srg} from \cite{RHY19}. It follows from the definition of the \gls{srg} that $T$ is contractive if and only if $\srg{T}$ is contained in the closed unit disk.
The \gls{srg} captures some of the geometric features of the input-output pairs of the operator. Recall that the angle $\theta$ between $\hvect{x}\in\ensuremath{\mathcal{H}}$ and $\hvect{y}\in\ensuremath{\mathcal{H}}$ is typically defined through
\begin{equation}\label{eq:polar}
\cos\theta=\frac{\real{\inner{\hvect{y}}{\hvect{x}}}}{\norm{\hvect{y}}\norm{\hvect{x}}}.
\end{equation}
The \gls{srg} is then the union of the `polar representations' of the input-output pairs $\funof{\hvect{x},\hvect{y}}\in\graph{T}$, in which the magnitude is given by the ratio between the \hyperlink{norm}{norms} of the output and input, and the argument by the angle between the input and output.
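The polar description above lends itself to a simple numerical experiment. The following sketch (an illustrative Python fragment, not part of the formal development; the helper name \texttt{srg\_samples} is ours) draws random unit vectors and records the corresponding \gls{srg} points of a matrix. Consistent with the definition, every sampled point has modulus $\norm{T\hvect{x}}$ for some unit $\hvect{x}$, hence lies between the smallest and largest singular values of $T$.

```python
import numpy as np

rng = np.random.default_rng(0)

def srg_samples(T, n_samples=2000):
    """Sample points of the scaled relative graph of a matrix T: for each
    random unit vector x, the pair (x, y = T x) contributes the points
    (||y||/||x||) * exp(+-i * arccos(Re<y,x> / (||y|| ||x||)))."""
    n = T.shape[0]
    pts = []
    for _ in range(n_samples):
        x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
        x /= np.linalg.norm(x)
        y = T @ x
        r = np.linalg.norm(y)
        if r == 0:
            continue
        # <y, x> = conj(x)^T y in the paper's convention; np.vdot conjugates
        # its first argument.
        c = np.clip(np.real(np.vdot(x, y)) / r, -1.0, 1.0)
        theta = np.arccos(c)
        pts.extend([r * np.exp(1j * theta), r * np.exp(-1j * theta)])
    return np.array(pts)
```

Sampling only gives an inner picture of the set; the boundary itself is better computed via the connection to the numerical range established later.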
\subsection{The Beltrami-Klein mapping}\label{sec:23}
\begin{figure}
\caption{The Beltrami-Klein mapping sends generalised circles centred on the real axis onto chords of the unit circle.\label{fig:bk}}
\end{figure}
The \hypertarget{beltramikleinmapping}{\notation{Beltrami-Klein mapping}} is a tool from two dimensional hyperbolic geometry. Its importance in the context of the \gls{srg} was first recognised in \cite{HRY20}, where it was used in the construction of the \gls{srg} for normal matrices with real entries. We will now introduce the relevant concepts and review these results. The \hyperlink{beltramikleinmapping}{Beltrami-Klein mapping} maps the \hyperlink{complexplane}{complex plane} into the closed unit disk through
\[
f\funof{z}=\frac{\funof{\overline{z}-i}\funof{z-i}}{1+\overline{z}z}.
\]
This mapping sends generalised circles centred on the real axis onto chords of the unit circle, as illustrated in \Cref{fig:bk}. We will also need to apply this function to \hyperlink{boundedlinearoperator}{linear operators}, in which case it will be understood that
\[
f\funof{T}=\funof{I+T^*T}^{-\frac{1}{2}}\funof{T^*-iI}\funof{T-iI}\funof{I+T^*T}^{-\frac{1}{2}}.
\]
Note that $f\funof{z}$ is not bijective, since $f\funof{z}=f\funof{\overline{z}}$. However, for any $z$ for which $\imag{z}\geq{}0$,
\[
z=\frac{\imag{f\funof{z}}-i\sqrt{1-\abs{f\funof{z}}^2}}{\real{f\funof{z}}-1}.
\]
This relation motivates the definition of
\[
g\funof{z}=\cfunof{\frac{\imag{z}\pm{}i\sqrt{1-\abs{z}^2}}{\real{z}-1}}.
\]
This map sends each point in the closed unit disk back to the corresponding \hyperlink{complexconjugate}{complex conjugate} pair ($g\funof{f\funof{z}}=\cfunof{z,\overline{z}}$). Since $\srg{T}=\overline{\srg{T}}$, this establishes that
\begin{equation}\label{eq:sim}
\srg{T}=g\funof{f\funof{\srg{T}}}.
\end{equation}
As we will see in the next section, $f\funof{\srg{T}}$ is in many ways simpler to understand than \srg{T}. Equation~\eqref{eq:sim} then shows that we can always convert a result on $f\funof{\srg{T}}$ back to a result on $\srg{T}$ using $g\funof{\cdot}$.
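As a concrete check of \cref{eq:sim}, the maps $f$ and $g$ are straightforward to implement. The sketch below (illustrative Python, not part of the paper) verifies that $f$ maps into the closed unit disk and that $g\funof{f\funof{z}}$ recovers the conjugate pair $\cfunof{z,\overline{z}}$.

```python
def f(z):
    """Beltrami-Klein mapping of the complex plane into the closed unit disk:
    f(z) = (conj(z) - i)(z - i) / (1 + |z|^2)."""
    z = complex(z)
    return (z.conjugate() - 1j) * (z - 1j) / (1 + abs(z) ** 2)

def g(z):
    """Send a point of the closed unit disk back to a complex-conjugate pair,
    so that g(f(z)) = {z, conj(z)}."""
    z = complex(z)
    root = max(1.0 - abs(z) ** 2, 0.0) ** 0.5
    return ((z.imag + 1j * root) / (z.real - 1),
            (z.imag - 1j * root) / (z.real - 1))
```

Note that $\real{f\funof{z}}<1$ always, so the division in $g$ is safe on the image of $f$.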
This pattern of obtaining a simplified analysis of $f\funof{\srg{T}}$ can also be seen in the main result of \cite{HRY20}. Introducing the notation $\hullbk{\cdot{}}=g\funof{\hull{f\funof{\cdot}}}$, there it was shown that if $T$ is a matrix with real entries (acting on a real \hyperlink{Hilbertspace}{Hilbert space}) and $T=T^\mathsf{T}$, then $f\funof{\srg{T}}=\hull{f\funof{\spectrum{T}}}$, or equivalently
\[
\srg{T}=\hullbk{\spectrum{T}}.
\]
Furthermore the above remains true for $TT^\mathsf{T}=T^\mathsf{T}T$ with the understanding that \spectrum{T} denotes the \hyperlink{extendedspectrum}{spectrum} of $T$ when viewed on the corresponding complex \hyperlink{Hilbertspace}{Hilbert space} (i.e. look at all the eigenvalues of $T$ in $\ensuremath{\mathbb{C}}$, not just those in the underlying \hyperlink{realfield}{field} \ensuremath{\mathbb{R}}{} of the operator).
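For a symmetric matrix the inclusion $f\funof{\srg{T}}\subseteq\hull{f\funof{\spectrum{T}}}$ can be seen pointwise: expanding a unit vector in an eigenbasis shows that $f$ of any \gls{srg} point is a convex combination of $f$ of the eigenvalues. The following numerical confirmation is an illustrative Python sketch; \texttt{in\_hull\_2d} is a discretised support-function test introduced only for the check.

```python
import numpy as np

rng = np.random.default_rng(1)

def f(z):
    return (np.conjugate(z) - 1j) * (z - 1j) / (1 + np.abs(z) ** 2)

def srg_point(T, x):
    """SRG point of a real matrix T at a real unit vector x (+theta branch)."""
    y = T @ x
    r = np.linalg.norm(y)
    c = np.clip(np.dot(x, y) / r, -1.0, 1.0)
    return r * np.exp(1j * np.arccos(c))

def in_hull_2d(p, verts, tol=1e-8, n_dirs=720):
    """Approximate membership of the complex point p in the convex hull of
    the complex points `verts`, via support functions over sampled
    directions."""
    for phi in np.linspace(0.0, 2 * np.pi, n_dirs, endpoint=False):
        d = np.exp(1j * phi)
        if np.real(np.conj(d) * p) > max(np.real(np.conj(d) * v)
                                         for v in verts) + tol:
            return False
    return True
```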
\begin{figure}
\caption{Straight lines between two points under the Poincar\'{e} half-plane model, given by arc segments of generalised circles centred on the real axis.\label{fig:pc}}
\end{figure}
\begin{remark}\label{rem:pc}
Given a set $s\subseteq\ensuremath{\mathbb{C}}$, $\hullbk{s}$ behaves like the \hyperlink{convexhull}{convex hull}, but with the notion of a straight line taken from hyperbolic geometry under the Poincar\'{e} half-plane model. First recall that $\hull{s}$ is equal to the set of all points that lie on a straight line between $z_1,z_2\in{}s$. In the Poincar\'{e} half-plane model (adapting things slightly for our needs), the straight line between two points $z_1,z_2$ consists of the two arc segments of the generalised circle centred on the real axis that passes through the points $\cfunof{z_1,z_2,\overline{z}_1,\overline{z}_2}$ that:
\begin{enumerate}
\item connect a pair of points in $\cfunof{z_1,z_2,\overline{z}_1,\overline{z}_2}$;
\item do not intersect the real axis.
\end{enumerate}
This is illustrated in \Cref{fig:pc}. The set $\hullbk{s}$ is then the set of all points that lie on a hyperbolic straight line under the Poincar\'{e} half-plane model between $z_1,z_2\in{}s$.
\end{remark}
\subsection{The numerical range}\label{sec:25}
The \hyperlink{numericalrange}{numerical range} is a classical object in the study of linear operators on complex \hyperlink{Hilbertspace}{Hilbert spaces}. For a \hyperlink{boundedlinearoperator}{linear operator} it is defined to be the subset of the \hyperlink{complexplane}{complex plane}
\[
\W{T}=\cfunof{\inner{T\hvect{x}}{\hvect{x}}:\hvect{x}\in\ensuremath{\mathcal{H}},\norm{\hvect{x}}=1}.
\]
A nice introduction to the \hyperlink{numericalrange}{numerical range}
can be found in \cite{Sha17,HJ91}. The following facts about the \hyperlink{numericalrange}{numerical range} of a \hyperlink{boundedlinearoperator}{linear operator} are standard:
\begin{enumerate}[i)]
\item $\W{T}=\hull{\W{T}}$ ($\W{T}$ is \hyperlink{convex}{convex});
\item if $TT^*=T^*T$, then $\Wc{T}=\hull{\spectrum{T}}$ (if $T$ is \hypertarget{normal}{\notation{normal}}, the \hyperlink{closure}{closure} of $\W{T}$ equals the \hyperlink{convexhull}{convex hull} of the \hyperlink{extendedspectrum}{spectrum} of $T$);
\item $\Wc{T}\supseteq{}\spectrum{T}$ (the \hyperlink{closure}{closure} of $\W{T}$ contains the \hyperlink{extendedspectrum}{spectrum} of $T$).
\end{enumerate}
More generally, the similarity invariance of the \hyperlink{extendedspectrum}{spectrum} implies that the \hyperlink{convexhull}{convex hull} of the \hyperlink{extendedspectrum}{spectrum} is also contained in $\Wc{STS^{-1}}$, and hence in the intersection of the sets $\Wc{STS^{-1}}$ for all choices of $S$. An elegant result of Hildebrandt \cite{Hil66} shows that this containment is tight, in the sense that
\begin{enumerate}[i)]\addtocounter{enumi}{3}
\item $\hull{\spectrum{T}}=\bigcap\cfunof{\Wc{STS^{-1}}:S,S^{-1}\text{ are \hyperlink{boundedlinearoperator}{linear operators}}}.$
\end{enumerate}
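The facts above are easy to probe numerically: the support value of $\W{T}$ in the direction $e^{i\phi}$ is the largest eigenvalue of the Hermitian part of $e^{-i\phi}T$, which yields a simple (discretised) membership test for $\Wc{T}$. The sketch below is an illustrative Python fragment; the helper names are ours.

```python
import numpy as np

def nr_support(T, phi):
    """Support value of W(T) in direction exp(i*phi): the largest eigenvalue
    of the Hermitian part of exp(-i*phi) * T."""
    A = np.exp(-1j * phi) * T
    return np.linalg.eigvalsh((A + A.conj().T) / 2)[-1]

def in_numerical_range(T, z, n_dirs=360, tol=1e-9):
    """Discretised membership test for the closure of W(T), which is convex
    by the Toeplitz-Hausdorff theorem (fact i))."""
    return all(
        np.real(np.exp(-1j * phi) * z) <= nr_support(T, phi) + tol
        for phi in np.linspace(0.0, 2 * np.pi, n_dirs, endpoint=False)
    )
```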
\section{Results}\label{sec:3}
\subsection{Connection to the numerical range}\label{sec:31}
In this subsection we connect the \gls{srg} to the \hyperlink{numericalrange}{numerical range}. The following theorem shows that the \gls{srg} of a \hyperlink{boundedlinearoperator}{linear operator} $T$ on a complex \hyperlink{Hilbertspace}{Hilbert space} can be obtained from the \hyperlink{numericalrange}{numerical range} of $f\funof{T}$. Furthermore $\srg{T}$ is endowed with \hyperlink{convex}{convexity} and \hyperlink{extendedspectrum}{spectral} properties along the lines of i)--iv) from \cref{sec:25}, with two main differences.
\begin{enumerate}
\item The notion of \hyperlink{convex}{convexity} is taken with respect to the Poincar\'{e} half-plane model, as explained in \Cref{rem:pc} (i.e. replace $\hull{\cdot}$ with $\hullbk{\cdot}$).
\item The \hyperlink{extendedspectrum}{spectral} properties pertain to the \hyperlink{approximatepointspectrum}{approximate point spectrum} instead of the \hyperlink{extendedspectrum}{spectrum} (i.e. replace $\spectrum{T}$ with $\spectrumap{T}$).
\end{enumerate}
This gives the \gls{srg} a similar geometrical flavour to the \hyperlink{numericalrange}{numerical range}, albeit with respect to a different geometry. The fact that the \gls{srg} lifts out features of the \hyperlink{approximatepointspectrum}{approximate point spectrum} rather than the \hyperlink{extendedspectrum}{spectrum} is curious, but of no consequence if $T$ is finite dimensional or \hyperlink{normal}{normal}, since in these cases $\spectrumap{T}=\spectrum{T}$. In general, as demonstrated by the set of equivalences in the theorem statement, analogues of iii)--iv) also hold for the \hyperlink{extendedspectrum}{spectrum} if and only if $\srgc{T}\supseteq\srgc{T^*}$.
\begin{theorem}\label{thm:1}
Given a \hyperlink{boundedlinearoperator}{linear operator} $T$ on a complex \hyperlink{Hilbertspace}{Hilbert space},
\begin{equation}\label{eq:thm1}
\srg{T}=g\funof{\W{f\funof{T}}}.
\end{equation}
In addition:
\begin{enumerate}[i)]
\item $\srg{T}=\hullbk{\srg{T}}$;
\item if $TT^*=T^*T$, then $\srgc{T}=\hullbk{\spectrumap{T}}$;
\item $\srgc{T}\supseteq\spectrumap{T}$;
\item $\bigcap\cfunof{\srgc{STS^{-1}}:S,S^{-1}\text{ are \hyperlink{boundedlinearoperator}{linear operators}}}=\hullbk{\spectrumap{T}}$.
\end{enumerate}
Furthermore the following are equivalent:
\begin{enumerate}[i)]\addtocounter{enumi}{4}
\item $\srgc{T}\supseteq\spectrum{T}$;
\item $\bigcap\cfunof{\srgc{STS^{-1}}:S,S^{-1}\text{ are \hyperlink{boundedlinearoperator}{linear operators}}}=\hullbk{\spectrum{T}}$;
\item $\srgc{T}\supseteq{}\srgc{T^*}$;
\item $\spectrumap{T}\supseteq{}\spectrum{T}\cap\ensuremath{\mathbb{R}}$.
\end{enumerate}
\end{theorem}
\hyperlink{proof1}{The proof} of this result is given at the end of the subsection after a series of examples.
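\Cref{eq:thm1} can be tested directly in the matrix case: every sampled \gls{srg} point $z$ of $T$ should satisfy $f\funof{z}\in\W{f\funof{T}}$. The following Python sketch is illustrative only; $\funof{I+T^*T}^{-\frac{1}{2}}$ is formed via a Hermitian eigendecomposition, and the membership check uses the support function of the numerical range.

```python
import numpy as np

def f_scalar(z):
    return (np.conjugate(z) - 1j) * (z - 1j) / (1 + np.abs(z) ** 2)

def f_matrix(T):
    """f(T) = (I + T*T)^(-1/2) (T* - iI)(T - iI) (I + T*T)^(-1/2)."""
    n = T.shape[0]
    I = np.eye(n)
    w, V = np.linalg.eigh(I + T.conj().T @ T)   # Hermitian, positive definite
    R = V @ np.diag(w ** -0.5) @ V.conj().T     # (I + T*T)^(-1/2)
    return R @ (T.conj().T - 1j * I) @ (T - 1j * I) @ R

def nr_support(A, phi):
    """Largest eigenvalue of the Hermitian part of exp(-i*phi) * A."""
    B = np.exp(-1j * phi) * A
    return np.linalg.eigvalsh((B + B.conj().T) / 2)[-1]
```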
\begin{example}[The \gls{srg} in the matrix case]
In this example we will illustrate \Cref{thm:1} when the operator $T$ is a matrix with entries in $\ensuremath{\mathbb{C}}$, and draw some additional conclusions that apply in this case.
\begin{enumerate}
\item The \gls{srg} is a compact set ($\srgc{T}=\srg{T}$). This follows directly from the compactness of the \hyperlink{numericalrange}{numerical range} in the finite dimensional case.
\item The \hyperlink{boundary}{boundary} of $\srg{T}$ is easily computed. This is because $f\funof{T}$ can be computed using standard algorithms, and inner and outer approximations of the \hyperlink{boundary}{boundary} of the \hyperlink{numericalrange}{numerical range} can be computed to arbitrary precision by solving a sequence of eigenvalue problems \cite{HJ91}\footnote{Software for computing the \hyperlink{boundary}{boundary} of the \hyperlink{numericalrange}{numerical range} in the matrix case can be obtained at \href{http://www.ma.man.ac.uk/~higham/mctoolbox}{http://www.ma.man.ac.uk/\textasciitilde{}higham/mctoolbox}.}. This is illustrated in \Cref{fig:ex1a} and \Cref{fig:ex2a}.
\item The \gls{srg} of $T$ is equal to the \gls{srg} of its \hyperlink{adjoint}{adjoint}. To see this, note that the \hyperlink{approximatepointspectrum}{approximate point spectra} of $T$ and $T^*$ are equal to their \hyperlink{extendedspectrum}{spectra} ($\spectrumap{T}=\spectrum{T}$ and $\spectrumap{T^*}=\spectrum{T^*}$). Therefore statement \textit{v)} in \Cref{thm:1} is true for both $T$ and $T^*$, implying that $\srg{T}=\srg{T^*}$.
\item $\srg{STS^{-1}}$ can be made arbitrarily close to $\hullbk{\spectrum{T}}$ using a single similarity transform. To see this, note that the Jordan decomposition of $T$ ensures that there exists an \hyperlink{boundedlyinvertible}{invertible} matrix $Q$ such that
\[
QTQ^{-1}=D+N,
\]
where $D$ is a diagonal matrix consisting of the eigenvalues of $T$, and $N$ is a strictly upper triangular matrix ($N_{jk}=0$ if $j\geq{}k$). Hence if $S_\gamma=\diag{\gamma,\gamma^2,\ldots}Q$, where $\diag{\gamma,\gamma^2,\ldots}$ denotes the diagonal matrix with entries $\gamma,\gamma^2,\ldots{}$, then
\[
S_\gamma{}TS_\gamma^{-1}=D+\diag{\gamma,\gamma^2,\ldots}N\diag{\gamma,\gamma^2,\ldots}^{-1}.
\]
Since
\[
\lim_{\gamma\rightarrow{}\infty}\norm{\diag{\gamma,\gamma^2,\ldots}N\diag{\gamma,\gamma^2,\ldots}^{-1}}=0
\]
and $\srg{D}=\hullbk{\spectrum{T}}$, it follows that by making $\gamma$ sufficiently large the difference between $\srg{S_\gamma{}TS_\gamma^{-1}}$ and $\hullbk{\spectrum{T}}$ can be made arbitrarily small.
\end{enumerate}
\end{example}
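The scaling argument in the final point of the example is easy to reproduce numerically. In the sketch below (an illustrative Python fragment; the helper name is ours), conjugating a strictly upper triangular $N$ by $\diag{\gamma,\gamma^2,\ldots}$ multiplies entry $\funof{j,k}$ by $\gamma^{j-k}$, so the norm decays as $\gamma$ grows.

```python
import numpy as np

def scaled_nilpotent_norm(N, gamma):
    """Spectral norm of diag(g, g^2, ...) @ N @ diag(g, g^2, ...)^(-1);
    entry (j, k) of N is scaled by gamma**(j - k), which tends to zero for
    strictly upper triangular N (nonzero entries only where j < k)."""
    n = N.shape[0]
    D = np.diag(float(gamma) ** np.arange(1, n + 1))
    return np.linalg.norm(D @ N @ np.linalg.inv(D), 2)
```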
\begin{figure}
\caption{Computation of the boundary of the \gls{srg} via the \hyperlink{numericalrange}{numerical range} of $f\funof{T}$: (a)\label{fig:ex1a} a matrix example; (b)\label{fig:ex1b} the multiplication operator with $h\ensuremath{\left(\omega\right)}=2/\funof{i\omega+1}^2$.\label{fig:ex1}}
\end{figure}
\begin{figure}
\caption{A second illustration of the computation of the boundary of the \gls{srg}: (a)\label{fig:ex2a} the matrix case; (b)\label{fig:ex2b} the differential equation case.\label{fig:ex2}}
\end{figure}
\begin{example}[The \gls{srg} in the differential equation case]\label{ex:2}
In this example we study the \gls{srg} of an operator defined by a differential equation. This example can be viewed as a generalisation of \cite[Theorem 1]{CFS21}. In the following\footnote{Throughout this example we will tacitly assume that $\det\funof{z^p+\ldots{}+\alpha_{p-1}z+\alpha_p}\neq{}0$ for all $z\in{}i\ensuremath{\mathbb{R}}$, and that $p\geq{}q$. We note however that these requirements can be removed by extending \Cref{thm:1} to cover densely defined closed operators (which need not be bounded). Indeed this is not too hard to do since $f\funof{T}$ is still a bounded \hyperlink{boundedlinearoperator}{linear operator} whenever $T$ is a densely defined closed operator. However describing these extensions requires considerably heavier notation, so we will not pursue this further here. A good introduction to densely defined closed operators can be found in \cite[Chapter X]{Con94}.} we will consider
\begin{equation}\label{eq:de}
\tfrac{d^p}{dt^p}\hvect{y}+\ldots{}+\alpha_{p-1}\tfrac{d}{dt}\hvect{y}+\alpha_p\hvect{y}=\beta_0\tfrac{d^q}{dt^q}\hvect{x}+\ldots{}+\beta_{q-1}\tfrac{d}{dt}\hvect{x}+\beta_q\hvect{x},
\end{equation}
where $\alpha_j,\beta_k\in\ensuremath{\mathbb{C}}$ and $\hvect{x},\hvect{y}\in\ensuremath{\mathcal{L}^2\left(\mathbb{R}\right)}$, though the approach we describe works just as well when these coefficients are square matrices and $\hvect{x},\hvect{y}$ are vectors of functions in $\ensuremath{\mathcal{L}^2\left(\mathbb{R}\right)}$. Note that in applications it might seem more natural to work on a real \hyperlink{Hilbertspace}{Hilbert space}, where $\alpha_j,\beta_k\in\ensuremath{\mathbb{R}}$, and $\hvect{x},\hvect{y}$ are real valued functions. In the next subsection it will be shown that from the perspective of the \gls{srg} this distinction is unimportant, and we may as well consider the case of complex \hyperlink{Hilbertspace}{Hilbert spaces}.
It is possible to associate a range of different operators $\hvect{x}\mapsto{}\hvect{y}$ with \cref{eq:de} depending on the time interval or the boundary conditions that are being studied. A perspective that has been particularly profitable both in theory and in practice has been to associate \cref{eq:de} with a \hyperlink{boundedlinearoperator}{linear operator} $T:\ensuremath{\mathcal{L}^2\left(\mathbb{R}\right)}\rightarrow{}\ensuremath{\mathcal{L}^2\left(\mathbb{R}\right)}$ defined through a multiplication operator $T_h:\ensuremath{\mathcal{L}^2\left(\mathbb{R}\right)}\rightarrow{}\ensuremath{\mathcal{L}^2\left(\mathbb{R}\right)}$ in the frequency domain. In this setting, denoting the Fourier transform
as $F:\ensuremath{\mathcal{L}^2\left(\mathbb{R}\right)}\rightarrow{}\ensuremath{\mathcal{L}^2\left(\mathbb{R}\right)}{}$, $T=F^*T_hF$, where
\[
T_h\hat{\hvect{x}}\ensuremath{\left(\omega\right)}=h\ensuremath{\left(\omega\right)}\hat{\hvect{x}}\ensuremath{\left(\omega\right)},
\]
and
\[
h\ensuremath{\left(\omega\right)}=\frac{\beta_0\funof{i\omega}^q+\ldots{}+\beta_{q-1}i\omega+\beta_q}{\funof{i\omega}^p+\ldots{}+\alpha_{p-1}i\omega+\alpha_p}.
\]
The function $h\ensuremath{\left(\omega\right)}$ is often referred to as a multiplier or transfer function. We will now show how to determine $\srgc{T}$. The first thing to note is that both the \gls{srg} and the \hyperlink{numericalrange}{numerical range} are unitarily invariant. That is, given any \hyperlink{boundedlinearoperator}{linear operator} $U$ such that $UU^*=U^*U=I$, $\srg{U^*TU}=\srg{T}$ and $\W{U^*TU}=\W{T}$. Therefore
\begin{equation}\label{eq:intabove}
\srg{T}=\srg{T_h}=g\funof{\W{S^*\funof{T_h^*-iI}\funof{T_h-iI}S}},
\end{equation}
where $S$ is any \hyperlink{boundedlyinvertible}{invertible} \hyperlink{boundedlinearoperator}{linear operator} such that $SS^*=\funof{I+T_h^*T_h}^{-1}$. The first equality follows from the properties of the Fourier transform, and to see the second, observe that
\[
S^{-1}\funof{I+T^*_hT_h}^{-\frac{1}{2}}
\]
is unitary, and compare \cref{eq:intabove} with the definition of $f\funof{\cdot}$ from \cref{sec:23}. A suitable $S$ can then be obtained by applying factorisation techniques for rational functions. More specifically, the process of spectral factorisation can be used to find a bounded rational function $s:\ensuremath{\mathbb{R}}\rightarrow{}\ensuremath{\mathbb{R}}$ such that for all $\omega\in{}\ensuremath{\mathbb{R}}$,
\[
\frac{1}{\overline{h\ensuremath{\left(\omega\right)}{}}h\ensuremath{\left(\omega\right)}+1}=s\ensuremath{\left(\omega\right)}\overline{s\ensuremath{\left(\omega\right)}{}}.
\]
Such a factorisation is always possible, and can be obtained directly from $h\ensuremath{\left(\omega\right)}$ using a normalised coprime factorisation \cite{Vid85}. For example, if $h\ensuremath{\left(\omega\right)}=2/\funof{i\omega{}+1}^2$ (as in \Cref{fig:ex1b}), then a suitable $s\ensuremath{\left(\omega\right)}$ is given by
\[
s\ensuremath{\left(\omega\right)}=\frac{\funof{i\omega+1}^2}{\funof{i\omega}^2+\sqrt{2+2\sqrt{5}}\,i\omega+\sqrt{5}}.
\]
The multiplication operator
\[
T_s\hat{\hvect{v}}\ensuremath{\left(\omega\right)}=s\ensuremath{\left(\omega\right)}\hat{\hvect{v}}\ensuremath{\left(\omega\right)}
\]
then satisfies $T_sT_s^*=\funof{I+T_h^*T_h}^{-1}$, and therefore
\[
\srgc{T}=g\funof{\cfunof{\W{\overline{s\ensuremath{\left(\omega\right)}}\funof{\overline{h\ensuremath{\left(\omega\right)}}-i}\funof{h\ensuremath{\left(\omega\right)}-i}s\ensuremath{\left(\omega\right)}}:\omega\in\ensuremath{\mathbb{R}}\cup\cfunof{\infty}}}.
\]
This is illustrated in \Cref{fig:ex1b} and \Cref{fig:ex2b}. The above process is easily generalised to the case that $\alpha_j,\beta_k$ are square matrices ($h\ensuremath{\left(\omega\right)}$ becomes a matrix of rational functions, and $s\ensuremath{\left(\omega\right)}$ can be obtained through the process of normalised right coprime factorisation). Note that in this setting $T$ is not guaranteed to be \hyperlink{normal}{normal}, and so unlike in the case of scalar coefficients $\srgc{T}$ is not necessarily equal to $\hullbk{\spectrum{T}}$.
\end{example}
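The spectral factorisation step in the example above admits a direct numerical check: for $h\ensuremath{\left(\omega\right)}=2/\funof{i\omega+1}^2$ and the stated $s\ensuremath{\left(\omega\right)}$, the identity $1/\funof{\abs{h\ensuremath{\left(\omega\right)}}^2+1}=\abs{s\ensuremath{\left(\omega\right)}}^2$ should hold pointwise on the real axis. A minimal Python sketch (illustrative only):

```python
import numpy as np

def h(w):
    """Transfer function from the example: h(w) = 2 / (i*w + 1)**2."""
    return 2.0 / (1j * w + 1.0) ** 2

def s(w):
    """Stated spectral factor; |s(w)|**2 should equal 1 / (|h(w)|**2 + 1)."""
    a = np.sqrt(2.0 + 2.0 * np.sqrt(5.0))
    return (1j * w + 1.0) ** 2 / ((1j * w) ** 2 + a * 1j * w + np.sqrt(5.0))
```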
\begin{example}[The \gls{srg} of the right shift operator]
We have now seen two examples of operators for which the statements \textit{v)--viii)} in \Cref{thm:1} were true, and the \gls{srg} gave information on both the \hyperlink{approximatepointspectrum}{approximate point spectrum} and the \hyperlink{extendedspectrum}{spectrum}. We will now study the \gls{srg} of an operator for which this is not the case. To this end, consider the right shift operator $T:\ensuremath{\ell^2\left(\mathbb{N}\right)}\rightarrow{}\ensuremath{\ell^2\left(\mathbb{N}\right)}$ given by
\[
T\funof{\hvect{x}_1,\hvect{x}_2,\ldots}=\funof{0,\hvect{x}_1,\hvect{x}_2,\ldots{}}.
\]
The \hyperlink{adjoint}{adjoint} of $T$ is the left shift operator $\funof{\hvect{x}_1,\hvect{x}_2,\ldots}\mapsto\funof{\hvect{x}_2,\hvect{x}_3,\ldots{}}$. It is possible to compute $\srg{T}$ and $\srg{T^*}$ directly. The steps for $T$ are particularly simple since $T^*T=I$, from which it follows that
\[
\begin{aligned}
f\funof{\srg{T}}&=\frac{1}{2i}\W{T+T^*}\\
&=\cfunof{z:z\in\ensuremath{\mathbb{C}},\real{z}=0,\abs{z}<1}.
\end{aligned}
\]
Applying the function $g\funof{\cdot}$ from \cref{sec:23} then shows that
\[
\srg{T}=\cfunof{z:z\in\ensuremath{\mathbb{C}},\abs{z}=1,\abs{\real{z}}\neq{}1}.
\]
A similar but slightly more involved calculation shows that
\[
\srg{T^*}=\cfunof{z:z\in\ensuremath{\mathbb{C}},\abs{z}\leq{}1,\abs{\real{z}}\neq{}1}.
\]
We therefore see that $\srgc{T^*}\supset\srgc{T}$. It then follows that the statements \textit{v)--viii)} in \Cref{thm:1} are false for $T$, but true for $T^*$. This means for example that $\srgc{T}\not\supseteq{}\hullbk{\spectrum{T}}$, but $\srgc{T^*}\supseteq{}\hullbk{\spectrum{T}}$. This is easily confirmed directly (in fact $\spectrumap{T}$ is the unit circle and $\spectrum{T}$ is the closed unit disk, meaning that $\srgc{T}=\spectrumap{T}$ and $\srgc{T^*}=\spectrum{T}$).
\end{example}
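The structure of $\spectrumap{T^*}$ in this example can be seen concretely: for any $\abs{\lambda}<1$, the truncations of $\funof{1,\lambda,\lambda^2,\ldots}$ are near-eigenvectors of the left shift, with residual of order $\abs{\lambda}^n$. The following Python sketch is illustrative; finite truncations stand in for the $\ensuremath{\ell^2\left(\mathbb{N}\right)}$ vectors.

```python
import numpy as np

def left_shift_residual(lam, n):
    """Residual ||(T* - lam*I) x|| for the truncated candidate eigenvector
    x = (1, lam, lam**2, ..., lam**(n-1)) of the left shift T*."""
    x = lam ** np.arange(n)
    x = x / np.linalg.norm(x)
    shifted = np.append(x[1:], 0.0)   # the left shift drops the first entry
    return np.linalg.norm(shifted - lam * x)
```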
We now give the proof of \Cref{thm:1}.
\begin{proof}
\hypertarget{proof1}{}We start by establishing \cref{eq:thm1}. First note that considering the polar representation of a complex number $z=r\exp\funof{i\theta}$ shows that
\[
f\funof{r\exp\funof{i\theta}}=\frac{r^2-1-2ir\cos{\theta}}{1+r^2}.
\]
In light of our discussion from \cref{sec:23} (cf. \cref{eq:polar}), we then see that for any $\funof{\hvect{x},\hvect{y}}\in\graph{T}$,
\begin{align}
\label{eq:thm111}\!\!\!\!\!\!\!\!\quad{}f\funof{\frac{\norm{\hvect{y}}}{\norm{\hvect{x}}}\exp\funof{\!\pm{}i\arccos\funof{\frac{\real{\inner{\hvect{y}}{\hvect{x}}}}{\norm{\hvect{y}}\norm{\hvect{x}}}}\!\!}\!\!}
&=
\frac{\frac{\norm{\hvect{y}}^2}{\norm{\hvect{x}}^2}-1-2i\frac{\real{\inner{\hvect{y}}{\hvect{x}}}}{\norm{\hvect{x}}^2}}{1+\frac{\norm{\hvect{y}}^2}{\norm{\hvect{x}}^2}},\\
&=\frac{\norm{\hvect{y}}^2-\norm{\hvect{x}}^2-i\funof{\inner{\hvect{y}}{\hvect{x}}+\inner{\hvect{x}}{\hvect{y}}}}{\norm{\hvect{x}}^2+\norm{\hvect{y}}^2},\nonumber{}\\
&=\frac{\inner{R\funof{\hvect{x},\hvect{y}}}{\funof{\hvect{x},\hvect{y}}}}{\norm{\hvect{x}}^2+\norm{\hvect{y}}^2}\nonumber{},
\end{align}
where $R\funof{\hvect{x},\hvect{y}}=\funof{-i\hvect{y}-\hvect{x},\hvect{y}-i\hvect{x}}$. Consider now the linear map
\[
U\hvect{v}=\funof{\funof{I+T^*T}^{-\frac{1}{2}}\hvect{v},T\funof{I+T^*T}^{-\frac{1}{2}}\hvect{v}}.
\]
It is easily checked that for all $\hvect{v}\in\ensuremath{\mathcal{H}}$, $\inner{U\hvect{v}}{U\hvect{v}}=\inner{\hvect{v}}{\hvect{v}}$, $\graph{T}=\cfunof{U\hvect{v}:\hvect{v}\in\ensuremath{\mathcal{H}}}$, and
\[
\inner{RU\hvect{v}}{U\hvect{v}}=\inner{f\funof{T}\hvect{v}}{\hvect{v}}.
\]
Therefore $f\funof{\srg{T}}=\W{f\funof{T}}$, which shows \cref{eq:thm1}. Point \textit{i)} is then immediate from the Toeplitz-Hausdorff theorem.
\begin{figure}
\caption{Illustration of \cref{eq:cohl}: (a)\label{fig:proofa} the annulus of points lying between circles centred on $\alpha\in\ensuremath{\mathbb{R}}$ with radii $\ri{\alpha,s}$ and $\ro{\alpha,s}$; (b)\label{fig:proofb} the image of these regions under the \hyperlink{beltramikleinmapping}{Beltrami-Klein mapping}.\label{fig:proof}}
\end{figure}
We will now show \textit{ii)--iv)}. First denote the shortest and longest distances from a point $\gamma\in\ensuremath{\mathbb{C}}$ to a set $s\subseteq{}\ensuremath{\mathbb{C}}$ as
\[
\ri{\gamma,s}=\inf\cfunof{\abs{z-\gamma}:z\in{}s}\;\text{and}\;\ro{\gamma,s}=\sup\cfunof{\abs{z-\gamma}:z\in{}s}
\]
respectively. We will start by showing that given any $s\subseteq{}\ensuremath{\mathbb{C}}$,
\begin{equation}\label{eq:cohl}
\cl{\hullbk{s}}=\bigcap_{\alpha\in\ensuremath{\mathbb{R}}}\cfunof{z:z\in\ensuremath{\mathbb{C}},\ri{\alpha,s}\leq{}\abs{z-\alpha}\leq\ro{\alpha,s}}.
\end{equation}
To see this, observe that for any value of $\alpha\in\ensuremath{\mathbb{R}}$, the inequalities in \cref{eq:cohl} characterise the points that lie outside a circle centred on $\alpha$ with radius $\ri{\alpha,s}$ and lie inside a circle centred on $\alpha$ with radius $\ro{\alpha,s}$. This is illustrated in \Cref{fig:proofa}, and the region in question corresponds to the orange annulus. \Cref{fig:proofb} shows the \hyperlink{beltramikleinmapping}{Beltrami-Klein mapping} of these regions. Since the \hyperlink{beltramikleinmapping}{Beltrami-Klein mapping} bijectively maps circles centred on the real axis to chords of the unit circle, and $f\funof{\hullbk{s}}$ is \hyperlink{convex}{convex}, this annulus contains $\cl{\hullbk{s}}$. Conversely every supporting hyperplane for the set $f\funof{\hullbk{s}}$ corresponds to a circle centred on some value of $\alpha\in\ensuremath{\mathbb{R}}$, and so the intersection of these regions gives $\cl{\hullbk{s}}$.
Next note that $\srg{T-\alpha{}I}=\srg{T}-\alpha$. It then follows from the definition of the \gls{srg} that
\begin{align}
\label{eq:out5}\ri{\alpha,\srg{T}}&=\innerm{T-\alpha{}I},\\
\label{eq:out6}\ro{\alpha,\srg{T}}&=\norm{T-\alpha{}I},
\end{align}
where in the first equation we have introduced the notation
\[
\innerm{A}=\inf\cfunof{\norm{A\hvect{x}}:\hvect{x}\in\ensuremath{\mathcal{H}},\norm{\hvect{x}}=1}.
\]
It is then easily shown that
\begin{align}
\label{eq:out3}\innerm{T-\alpha{}I}&\leq{}\ri{\alpha,\spectrumap{T}},\\
\label{eq:out4}\norm{T-\alpha{}I}&\geq{}\ro{\alpha,\spectrumap{T}}.
\end{align}
The second of these inequalities is most usually stated in terms of the spectral radius (i.e. replace $\spectrumap{\cdot}$ with $\spectrum{\cdot}$). However, as shown in \cite[Problem 63]{Hal12}, $\partial{}\spectrum{T}\subseteq\spectrumap{T}$ and so this substitution incurs no loss. We therefore see that
\[
\ri{\alpha,\srg{T}}\leq{}\ri{\alpha,\spectrumap{T}}\;\text{and}\;\ro{\alpha,\srg{T}}\geq{}\ro{\alpha,\spectrumap{T}}.
\]
When combined with \cref{eq:cohl} this shows that $\hullbk{\spectrumap{T}}\subseteq{}\srgc{T}$ (the \hyperlink{approximatepointspectrum}{approximate point spectrum} is always a closed set), which shows \textit{iii)}. This claim can be strengthened to an equality whenever \cref{eq:out3,eq:out4} are equalities for all $\alpha\in\ensuremath{\mathbb{R}}$. This is the case if $TT^*=T^*T$, which shows \textit{ii)}. To show \textit{iv)} we are required to show that if $\gamma\notin\hullbk{\spectrumap{T}}$, then there exists an \hyperlink{boundedlyinvertible}{invertible} \hyperlink{boundedlinearoperator}{linear operator} $S$ such that $\gamma\notin\srgc{STS^{-1}}$. In light of \cref{eq:out3,eq:out4,eq:out5,eq:out6} this is equivalent to showing that given any $\varepsilon>0$, there exists an \hyperlink{boundedlyinvertible}{invertible} \hyperlink{boundedlinearoperator}{linear operator} $S_1$ such that
\begin{equation}
\label{eq:out7}
\innerm{S_1\funof{T-\alpha{}I}S_1^{-1}}>\ri{\alpha,\spectrumap{T}}-\varepsilon
\end{equation}
and there exists an \hyperlink{boundedlyinvertible}{invertible} \hyperlink{boundedlinearoperator}{linear operator} $S_2$ such that
\begin{equation}
\label{eq:out8}
\norm{S_2\funof{T-\alpha{}I}S_2^{-1}}<\ro{\alpha,\spectrumap{T}}+\varepsilon.
\end{equation}
In fact \cref{eq:out8} is a well known consequence of Rota's theorem \cite{Rot60}, so we will only show \cref{eq:out7}. By \cite[Theorem 1]{MJ83},
\[
\lim_{n\rightarrow{}\infty}\innerm{\funof{T-\alpha{}I}^n}^{\frac{1}{n}}=\ri{\alpha,\spectrumap{T}}.
\]
Therefore there exists a natural number $n$ such that
\[
\innerm{\funof{T-\alpha{}I}^n}^{\frac{1}{n}}>\ri{\alpha,\spectrumap{T}}-\varepsilon.
\]
Now let
\[
A=\frac{1}{\ri{\alpha,\spectrumap{T}}-\varepsilon}\funof{T-\alpha{}I},
\]
and note that $\innerm{A^n}>1$. Defining $X=I+A^*A+\ldots{}+\funof{A^{n-1}}^*A^{n-1}$ we then see that for any non-zero $\hvect{x}\in\ensuremath{\mathcal{H}}$,
\[
\inner{\funof{A^*XA-X}\hvect{x}}{\hvect{x}}=\inner{\funof{\funof{A^{n}}^*A^{n}-I}\hvect{x}}{\hvect{x}}\geq{}\funof{\innerm{A^n}^2-1}\norm{\hvect{x}}^2>0.
\]
Furthermore since $\inner{X\hvect{x}}{\hvect{x}}\geq{}\norm{\hvect{x}}^2$, there exists an \hyperlink{boundedlyinvertible}{invertible} \hyperlink{boundedlinearoperator}{linear operator} $S_1$ such that $X=S_1^*S_1$. Putting $S_1\hvect{x}=\hvect{y}$ we now see that
\[
\frac{\inner{\funof{A^*XA-X}\hvect{x}}{\hvect{x}}}{\inner{S_1\hvect{x}}{S_1\hvect{x}}}=\frac{\inner{S_1AS_1^{-1}\hvect{y}}{S_1AS_1^{-1}\hvect{y}}}{\norm{\hvect{y}}^2}-1>0.
\]
Therefore $\innerm{S_1AS_1^{-1}}>1$, and so \cref{eq:out7} holds.
To complete the proof we focus on the equivalence of \textit{v)--viii)}.
\textit{vii)}$\,\Rightarrow{}$\textit{v)}: First note that $\spectrum{T}\subseteq\spectrumap{T}\cup\overline{\spectrumap{T^*}}$. Since by definition $\srg{T}=\overline{\srg{T}}$, this shows that $\spectrum{T}\subseteq{}\srgc{T}\cup\srgc{T^*}$, and so by the hypothesis of \textit{vii)} $\spectrum{T}\subseteq{}\srgc{T}$.
\textit{v)}$\,\Rightarrow{}$\textit{viii)}: We proceed by contraposition. Assume that $\alpha\in\spectrum{T}\cap\ensuremath{\mathbb{R}}$ is not in $\spectrumap{T}$, and so $\innerm{T-\alpha{}I}>0$. From the definition of the \gls{srg}, this implies that $0\notin\srgc{T-\alpha{}I}$. Hence $\alpha\notin\srgc{T}$, and so $\spectrum{T}\not\subseteq{}\srgc{T}$ as required.
\textit{viii)}$\,\Rightarrow{}$\textit{vii)}: First note that $\norm{T-\alpha{}I}=\norm{T^*-\alpha{}I}$, and if $\alpha\notin{}\spectrum{T}$, then
\[
\innerm{T-\alpha{}I}=1/\Vert\funof{T-\alpha{}I}^{-1}\Vert=1/\Vert\funof{T^*-\alpha{}I}^{-1}\Vert.
\]
Consider again \cref{eq:out5,eq:out6}. Observe in particular that given any $\alpha\in\ensuremath{\mathbb{R}}$, under the hypothesis of \textit{viii)} $\ri{\alpha,\srg{T}}\neq{}0$ only if $\alpha\notin\spectrum{T}$. We therefore see from \cref{eq:cohl} that $\gamma\notin\srgc{T}$ only if $\gamma\notin\srgc{T^*}$ as required.
\textit{viii)}$\,\Rightarrow{}$\textit{vi)}: Recall that $\partial\spectrum{T}\subseteq{}\spectrumap{T}$. Therefore under the hypothesis of \textit{viii)}, if $\alpha\in\ensuremath{\mathbb{R}}$, then $\ri{\alpha,\spectrumap{T}}=\ri{\alpha,\spectrum{T}}$, and so $\hullbk{\spectrum{T}}=\hullbk{\spectrumap{T}}$. \textit{vi)} now follows from \textit{iv)}.
\textit{vi)}$\,\Rightarrow{}$\textit{v)}: Immediate.
\end{proof}
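The quantities $\innerm{T-\alpha{}I}$ and $\norm{T-\alpha{}I}$ used throughout the proof are, for a \hyperlink{normal}{normal} matrix, exactly the shortest and longest distances from $\alpha$ to the spectrum (the case in which \cref{eq:out3,eq:out4} hold with equality). A quick Python confirmation (illustrative only; the helper name is ours):

```python
import numpy as np

def inner_norm(A):
    """m(A) = inf ||A x|| over unit vectors x, i.e. the smallest singular
    value of the matrix A."""
    return np.linalg.svd(A, compute_uv=False)[-1]
```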
\subsection{Real Hilbert spaces}\label{sec:32}
In the previous subsection we showed that for a \hyperlink{boundedlinearoperator}{linear operator} acting on a complex \hyperlink{Hilbertspace}{Hilbert space}, the concept of the \gls{srg} is closely related to the \hyperlink{numericalrange}{numerical range}. However, largely motivated by applications from convex optimisation, the \gls{srg} has primarily been studied in the context of \hyperlink{Hilbertspace}{Hilbert spaces} over $\ensuremath{\mathbb{R}}$. At first sight, it might seem like there are fundamental differences between the real and complex case. For example when viewed as an operator on a real \hyperlink{Hilbertspace}{Hilbert space} with \hyperlink{innerproduct}{inner product} $\inner{\hvect{y}}{\hvect{x}}=\hvect{x}^\mathsf{T}\hvect{y}$,
\begin{equation}\label{eq:2by2}
f\funof{\srg{\begin{bmatrix}
0&1\\0&0
\end{bmatrix}}}=\cfunof{z:\abs{z+\frac{1+i}{2}}+\abs{z+\frac{1-i}{2}}=\sqrt{2},z\in\ensuremath{\mathbb{C}}}.
\end{equation}
This is not a \hyperlink{convex}{convex} set (it is the \hyperlink{boundary}{boundary} of an ellipse), and therefore \Cref{thm:1} \textit{i)} fails. However when we view the same matrix as an operator on $\ensuremath{\mathbb{C}}^2$ with \hyperlink{innerproduct}{inner product} $\inner{\hvect{y}}{\hvect{x}}=\overline{\hvect{x}}^\mathsf{T}\hvect{y}$ we obtain
\[
f\funof{\srg{\begin{bmatrix}
0&1\\0&0
\end{bmatrix}}}=\cfunof{z:\abs{z+\frac{1+i}{2}}+\abs{z+\frac{1-i}{2}}\leq{}\sqrt{2},z\in\ensuremath{\mathbb{C}}}.
\]
That is, the \gls{srg} of the operator on the real \hyperlink{Hilbertspace}{Hilbert space} is equal to the \hyperlink{boundary}{boundary} of the \gls{srg} of its complexified counterpart, suggesting the two objects are in fact closely related. This is illustrated in \Cref{fig:5}.
A similar behaviour is seen when studying tuples of Hermitian forms (of which the \hyperlink{numericalrange}{numerical range} is a special case). More specifically, given two $n\times{}n$ symmetric matrices $A$ and $B$ with real entries, it was shown in \cite{Bri61} that
\[
\cfunof{\hvect{x}^\mathsf{T}A\hvect{x}+i\hvect{x}^\mathsf{T}B\hvect{x}:\hvect{x}^\mathsf{T}\hvect{x}=1,\hvect{x}\in\ensuremath{\mathbb{R}}^n}=\begin{cases}\partial\W{A+iB}&\text{if $n=2$;}\\
\W{A+iB}&\text{otherwise.}
\end{cases}
\]
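As a concrete check of the $n=2$ case (an illustration, not part of the original argument), take
\[
A=\begin{bmatrix}1&0\\0&-1\end{bmatrix},\qquad B=\begin{bmatrix}0&1\\1&0\end{bmatrix}.
\]
Writing $\hvect{x}=\funof{\cos\theta,\sin\theta}$ gives $\hvect{x}^\mathsf{T}A\hvect{x}+i\hvect{x}^\mathsf{T}B\hvect{x}=\cos{2\theta}+i\sin{2\theta}$, so the set on the left hand side is the unit circle. On the other hand $A+iB$ has zero trace and zero determinant, so both its eigenvalues equal zero, and by the elliptical range theorem for $2\times{}2$ matrices, $\W{A+iB}$ is the closed unit disc centred at the origin. The unit circle is indeed $\partial\W{A+iB}$.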
This result relates the joint \hyperlink{numericalrange}{numerical range} $\cfunof{\funof{\inner{A\hvect{x}}{\hvect{x}},\inner{B\hvect{x}}{\hvect{x}}}:\hvect{x}\in\ensuremath{\mathcal{H}},\norm{\hvect{x}}=1}$ of two operators on a finite dimensional real \hyperlink{Hilbertspace}{Hilbert space}, to the \hyperlink{numericalrange}{numerical range} of a related operator acting on a finite dimensional complex \hyperlink{Hilbertspace}{Hilbert space}. Moreover, it shows that the two are different only if the \hyperlink{Hilbertspace}{Hilbert space} has dimension 2, where instead the real case equals the \hyperlink{boundary}{boundary} of the complex case. The main result of this subsection is an adaptation of the above that shows that the \gls{srg} behaves in an analogous manner. Before stating the result, let us first formalise the notion of \hypertarget{complexification}{complexification} beyond the matrix case. The following, which can be found in \cite[Chapter I]{Con94}, gives the suitable notion of the \hyperlink{complexification}{complexification} of a \hyperlink{Hilbertspace}{Hilbert space}.
\begin{lemma}\label{lem:1}
Let $\ensuremath{\mathcal{H}}$ be a real \hyperlink{Hilbertspace}{Hilbert space}. Then there exists a complex \hyperlink{Hilbertspace}{Hilbert space} $\ensuremath{\mathcal{H}}_{\ensuremath{\mathbb{C}}}$ and a linear map $U:\ensuremath{\mathcal{H}}{}\rightarrow{}\ensuremath{\mathcal{H}}_{\ensuremath{\mathbb{C}}}$ such that:
\begin{enumerate}[i)]
\item $\inner{U\hvect{x}_1}{U\hvect{x}_2}=\inner{\hvect{x}_1}{\hvect{x}_2}$ for all $\hvect{x}_1,\hvect{x}_2\in\ensuremath{\mathcal{H}}$;
\item for any $\hvect{y}\in\ensuremath{\mathcal{H}}_{\ensuremath{\mathbb{C}}}$, there are unique $\hvect{x}_1,\hvect{x}_2\in\ensuremath{\mathcal{H}}$ such that $\hvect{y}=U\hvect{x}_1+iU\hvect{x}_2$.
\end{enumerate}
\end{lemma}
\begin{figure}
\caption{\label{fig:5}
The \gls{srg} of the matrix in \cref{eq:2by2}, viewed as an operator on a real \hyperlink{Hilbertspace}{Hilbert space} (the \hyperlink{boundary}{boundary} of an ellipse) and as an operator on $\ensuremath{\mathbb{C}}^2$ (the corresponding filled ellipse).}
\end{figure}
Given an operator $T$ on a real \hyperlink{Hilbertspace}{Hilbert space} $\ensuremath{\mathcal{H}}$, we define the \hyperlink{complexification}{complexification} of $T$ to be the operator $T_{\ensuremath{\mathbb{C}}}$ on $\ensuremath{\mathcal{H}}_{\ensuremath{\mathbb{C}}}$ which satisfies
\[
T_{\ensuremath{\mathbb{C}}}\funof{U\hvect{x}_1+iU\hvect{x}_2}=UT\hvect{x}_1+iUT\hvect{x}_2,\;\text{for all $\hvect{x}_1,\hvect{x}_2\in\ensuremath{\mathcal{H}}$}.
\]
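Concretely, in the matrix case these definitions reduce to the familiar ones: for $\ensuremath{\mathcal{H}}=\ensuremath{\mathbb{R}}^n$ we may take $\ensuremath{\mathcal{H}}_{\ensuremath{\mathbb{C}}}=\ensuremath{\mathbb{C}}^n$ with $U$ the natural inclusion, in which case a real matrix $T$ satisfies
\[
T_{\ensuremath{\mathbb{C}}}\funof{\hvect{x}_1+i\hvect{x}_2}=T\hvect{x}_1+iT\hvect{x}_2,\;\text{for all $\hvect{x}_1,\hvect{x}_2\in\ensuremath{\mathbb{R}}^n$},
\]
that is, $T_{\ensuremath{\mathbb{C}}}$ is the same matrix viewed as an operator on $\ensuremath{\mathbb{C}}^n$.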
It is easy enough to check that these abstractions behave exactly as expected in the matrix case (and also in going from operators on real valued \hyperlink{squareintegrablefunction}{square integrable functions} to $\ensuremath{\mathcal{L}^2\left(\mathbb{R}\right)}{}$). With this definition in place we are ready to state the main result of this subsection. The following theorem shows that in all dimensions except 2 (including the infinite dimensional case), $\srg{T}=\srg{T_\ensuremath{\mathbb{C}}}$. Furthermore in dimension 2, $\srg{T}$ is equal to the \hyperlink{boundary}{boundary} of $\srg{T_{\ensuremath{\mathbb{C}}}}$. This means that \Cref{fig:ex1,fig:ex2} also show the \glspl{srg} of the corresponding operators when viewed on a real \hyperlink{Hilbertspace}{Hilbert space}, and in all cases, $\srg{T}$ can be obtained from the \hyperlink{numericalrange}{numerical range} of an operator on a complex \hyperlink{Hilbertspace}{Hilbert space}, as described in the previous subsection.
\begin{theorem}\label{thm:2}
Let $T$ be a \hyperlink{boundedlinearoperator}{linear operator} on a real \hyperlink{Hilbertspace}{Hilbert space} $\ensuremath{\mathcal{H}}$. Then
\[
\srg{T}=\begin{cases}
\partial\srg{T_{\ensuremath{\mathbb{C}}}}&\text{if $\ensuremath{\mathcal{H}}$ has dimension 2;}\\
\srg{T_{\ensuremath{\mathbb{C}}}}&\text{otherwise.}
\end{cases}
\]
\end{theorem}
\begin{proof}
Let us first slightly rework the characterisation of $\srg{T}$ from \Cref{thm:1} to make it suitable for operators on real \hyperlink{Hilbertspace}{Hilbert spaces}. The issue is that as written, $f\funof{T}:\ensuremath{\mathcal{H}}\rightarrow{}\ensuremath{\mathcal{H}}_{\ensuremath{\mathbb{C}}}$, and so we cannot define its \hyperlink{numericalrange}{numerical range}. However the problem is only superficial, and starting from \cref{eq:thm111} it is easily shown that
\[
f\funof{\srg{T}}=\cfunof{\inner{A\hvect{x}}{\hvect{x}}+i\inner{B\hvect{x}}{\hvect{x}}:\hvect{x}\in\ensuremath{\mathcal{H}},\norm{\hvect{x}}=1},
\]
where
\[
\begin{aligned}
A&=\funof{I+T^*T}^{-\frac{1}{2}}\funof{T^*T-I}\funof{I+T^*T}^{-\frac{1}{2}}\;\text{and}\;\\
B&=-\funof{I+T^*T}^{-\frac{1}{2}}\funof{T+T^*}\funof{I+T^*T}^{-\frac{1}{2}}.
\end{aligned}
\]
Similarly
\[
f\funof{\srg{T_{\ensuremath{\mathbb{C}}}}}=\cfunof{\inner{A_{\ensuremath{\mathbb{C}}}\hvect{y}}{\hvect{y}}+i\inner{B_{\ensuremath{\mathbb{C}}}\hvect{y}}{\hvect{y}}:\hvect{y}\in\ensuremath{\mathcal{H}}_{\ensuremath{\mathbb{C}}},\norm{\hvect{y}}=1}.
\]
Direct calculation shows that for any $\hvect{y}=U\hvect{x}_1+iU\hvect{x}_2\in\ensuremath{\mathcal{H}}_{\ensuremath{\mathbb{C}}}$,
\[
\begin{aligned}
\inner{A_{\ensuremath{\mathbb{C}}}\hvect{y}}{\hvect{y}}&=\inner{UA\hvect{x}_1+iUA\hvect{x}_2}{U\hvect{x}_1+iU\hvect{x}_2},\\
&=\inner{UA\hvect{x}_1}{U\hvect{x}_1}+\inner{UA\hvect{x}_2}{U\hvect{x}_2}+i\funof{\inner{UA\hvect{x}_2}{U\hvect{x}_1}-\inner{UA\hvect{x}_1}{U\hvect{x}_2}},\\
&=\inner{A\hvect{x}_1}{\hvect{x}_1}+\inner{A\hvect{x}_2}{\hvect{x}_2}+i\funof{\inner{A\hvect{x}_2}{\hvect{x}_1}-\inner{A\hvect{x}_1}{\hvect{x}_2}}.
\end{aligned}
\]
Since $A=A^*$ and $\ensuremath{\mathcal{H}}$ is over $\ensuremath{\mathbb{R}}$, $\inner{A\hvect{x}_2}{\hvect{x}_1}=\inner{A\hvect{x}_1}{\hvect{x}_2}$, and so the imaginary part in the above equals zero. Using a similar argument for $\inner{B_{\ensuremath{\mathbb{C}}}\hvect{y}}{\hvect{y}}$ therefore shows that
\[
\begin{aligned}
\inner{A_{\ensuremath{\mathbb{C}}}\hvect{y}}{\hvect{y}}+i\inner{B_{\ensuremath{\mathbb{C}}}\hvect{y}}{\hvect{y}}&=\inner{A\hvect{x}_1}{\hvect{x}_1}+i\inner{B\hvect{x}_1}{\hvect{x}_1}+\inner{A\hvect{x}_2}{\hvect{x}_2}+i\inner{B\hvect{x}_2}{\hvect{x}_2}\\
&=p_1\norm{\hvect{x}_1}^2+p_2\norm{\hvect{x}_2}^2,
\end{aligned}
\]
where $p_1,p_2\in{}f\funof{\srg{T}}$. Noting that $\norm{\hvect{y}}^2=\norm{\hvect{x}_1}^2+\norm{\hvect{x}_2}^2$, this implies that $f\funof{\srg{T}}\subseteq{}f\funof{\srg{T_{\ensuremath{\mathbb{C}}}}}\subseteq\hull{f\funof{\srg{T}}}$. By \cite[Theorem 2]{Leg05}, the joint \hyperlink{numericalrange}{numerical range} of any two Hermitian forms on a real \hyperlink{Hilbertspace}{Hilbert space} is \hyperlink{convex}{convex} unless that \hyperlink{Hilbertspace}{Hilbert space} has dimension 2. Therefore $f\funof{\srg{T}}$ is \hyperlink{convex}{convex} unless $\ensuremath{\mathcal{H}}$ has dimension 2, which establishes the second case in the theorem statement. For the two dimensional case, as noted in \cite{Bri61}, the set
\[
\cfunof{\inner{A\hvect{x}}{\hvect{x}}+i\inner{B\hvect{x}}{\hvect{x}}:\norm{\hvect{x}}=1,\hvect{x}\in\ensuremath{\mathcal{H}}{}}
\]
can only be an ellipse, circle, line or point. Since each of these is the \hyperlink{boundary}{boundary} of a \hyperlink{convex}{convex} set, this then implies that $f\funof{\srg{T}}=f\funof{\partial{}\srg{T_{\ensuremath{\mathbb{C}}}}}$ as required.
\end{proof}
\section{Conclusions}
We have demonstrated that the \gls{srg} of a \hyperlink{boundedlinearoperator}{linear operator} acting on a complex \hyperlink{Hilbertspace}{Hilbert space} can be determined from the \hyperlink{numericalrange}{numerical range} of a closely related \hyperlink{boundedlinearoperator}{linear operator}. This was used to show that the \hyperlink{beltramikleinmapping}{Beltrami-Klein mapping} of the \gls{srg} is \hyperlink{convex}{convex}, and to derive an analogue of Hildebrandt's theorem for the \gls{srg}. It was further shown how to re-purpose algorithms developed for the \hyperlink{numericalrange}{numerical range} to plot the \hyperlink{boundary}{boundary} of the \gls{srg} in the matrix and linear differential equation case. Finally these results were extended to operators on real \hyperlink{Hilbertspace}{Hilbert spaces}, where it was shown that the \gls{srg} could be obtained using the results for complex \hyperlink{Hilbertspace}{Hilbert spaces} through the process of \hyperlink{complexification}{complexification}.
\end{document}
\begin{document}
\title[Heat kernel estimates]
{Heat kernel estimates for fourth order non-uniformly elliptic operators with non strongly convex symbols}
\author{Gerassimos Barbatis, Panagiotis Branikas}
\address{Gerassimos Barbatis \newline
Department of Mathematics,
National and Kapodistrian University of Athens, \newline
Panepistimioupolis, 15784 Athens, Greece}
\email{[email protected]}
\address{Panagiotis Branikas \newline
Department of Mathematics,
National and Kapodistrian University of Athens, \newline
Panepistimioupolis, 15784 Athens, Greece}
\email{[email protected]}
\subjclass[2010]{35K40, 47D06, 35K65, 35K67}
\keywords{heat kernel estimates; higher order operators; singular-degenerate coefficients}
\begin{abstract}
We obtain heat kernel estimates for a class of fourth order non-uniformly elliptic operators in two dimensions.
Contrary to existing results, the operators considered have symbols that are not strongly convex. This entails certain
difficulties as it is known that, as opposed to the strongly convex case, there is no absolute exponential constant. Our estimates involve sharp constants and Finsler-type distances that are induced by the operator
symbol. The main result is based on two general hypotheses, a weighted Sobolev inequalitry and an interpolation inequality, which are related to the singularity or degeneracy of the coefficients.
\end{abstract}
\maketitle
\numberwithin{equation}{section}
\newtheorem{theorem}{Theorem}[section]
\newtheorem{lemma}[theorem]{Lemma}
\newtheorem{proposition}[theorem]{Proposition}
\newtheorem{remark}[theorem]{Remark}
\allowdisplaybreaks
\section{Introduction}
Let $\Omega$ be a planar domain and let
\begin{equation}
Hu = \partial_{x_1}^2 \big( \alpha(x) \partial_{x_1}^2 u \big) +2\partial_{x_1x_2}^2 \big( \beta(x) \partial_{x_1x_2}^2 u \big)
+ \partial_{x_2}^2 \big( \gamma(x) \partial_{x_2}^2 u \big)
\label{oper h}
\end{equation}
be a fourth-order, self-adjoint, uniformly elliptic operator in divergence form on $\Omega$ with measurable coefficients
satisfying Dirichlet boundary conditions on $\partial\Omega$. It has been shown by Davies \cite{d} that $H$ has a continuous heat kernel
$G(x,x',t)$ which satisfies the Gaussian-type estimate
\begin{equation}
|G(x,x',t)|\leq c_1 t^{-\frac{1}{2}}\exp\Big(-c_2\frac{|x-x'|^{4/3}}{t^{1/3}}+c_3t\Big),
\label{eq:2}
\end{equation}
for some positive constants $c_1$, $c_2$, $c_3$ and all $t>0$ and $x,x'\in\Omega$. Indeed \cite{d} deals with the more general case of an operator
of order $2m$ acting on a domain in ${\mathbb{R}}^n$,
$n<2m$.
The study of fundamental solutions is central in the theory
of linear parabolic PDEs. For more results on
heat kernel estimates for higher-order operators we refer to
\cite{CLYZ,Cl,DDY,Du,DzHe,HWZD,RS-C1,RS-C2}.
See also \cite{QuR-B,Ze} for related results specific to fourth order
operators.
A sharp version of the Gaussian estimate (\ref{eq:2}) is obtained in \cite{b2001} where it was proved that
\begin{equation}
|G(x,x',t)|\leq c_{\epsilon} t^{-\frac{1}{2}}\exp\Big\{-\Big( \frac{3\sqrt[3]{2}}{16} -c\theta -\epsilon\Big) \frac{d_M(x,x')^{4/3}}{t^{1/3}}+
c_{\epsilon,M}t\Big\},
\label{eq:3}
\end{equation}
for arbitrary $\epsilon$ and $M$ positive. Here $\theta \geq 0$ is a constant that is related to the regularity of the coefficients and $d_M(x,x')$, $M>0$, is a family of Finsler-type distances on $\Omega$ which
is monotone increasing and converges as $M\to +\infty$ to a limit Finsler distance $d(x,x')$. The sharpness follows by comparing against the short time asymptotics obtained in
\cite{ep} for equations with constant coefficients and which involve precisely the constant $3\sqrt[3]{2}/16$;
we refer to \cite{bb2018} for a more detailed discussion
of the distance function $d(x,x')$.
An important assumption both for the Gaussian estimate (\ref{eq:3}) and for the
corresponding asymptotic estimate of \cite{ep}
is the {\em strong convexity} of the symbol
\begin{equation}
A(x,\xi) =\alpha(x) \xi_1^4 +2\beta(x) \xi_1^2 \xi_2^2 +\gamma(x)\xi_2^4 \; , \qquad x\in\Omega \, , \;\; \xi \in{\mathbb{R}}^2 \, ,
\label{symbol}
\end{equation}
of the operator $H$. The notion of strong convexity was introduced in \cite{ep} and it applies to operators
of order $2m$ acting on ${\mathbb{R}}^d$ which have constant coefficients.
In our context the strong convexity of the symbol (\ref{symbol}) amounts to
\begin{equation}
0\leq \beta(x) \leq 3 \sqrt{\alpha(x)\gamma(x)} \, , \qquad x\in \Omega \, .
\footnote{We note that (\ref{str conv}) is the assumption made for the heat kernel estimates of \cite{b2001};
the requirement for the short time asymptotics of the constant coefficient equation in \cite{ep} is $0 < \beta < 3 \sqrt{\alpha\gamma}$.}
\label{str conv}
\end{equation}
In the recent article \cite{bb2018} sharp Gaussian estimates were obtained for the heat kernel of the operator (\ref{oper h}) without the strong convexity assumption.
Short time asymptotics were also obtained from which follows in particular that there is no absolute sharp exponential constant but instead the best constant depends on the range of the function
\begin{equation}
Q(x)=\frac{\beta(x)}{\sqrt{\alpha(x)\gamma(x)}} \, , \qquad x\in\Omega \, .
\label{q(x)}
\end{equation}
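To fix ideas, here is a simple illustration: for the bilaplacian $H=\Delta^2$, i.e. $\alpha=\beta=\gamma=1$, we have $Q(x)\equiv 1$, which lies in the strongly convex range; on the other hand, taking $\alpha=\gamma=1$ and $\beta$ constant with $\beta>3$ or $-1<\beta<0$ gives an elliptic symbol which is not strongly convex, with $Q(x)\equiv\beta$ falling in the other regimes treated below. Indeed, in the latter case, writing $t=\xi_1^2$ with $|\xi|=1$,
\[
A(x,\xi)=t^2+2\beta t(1-t)+(1-t)^2\geq \frac{1+\beta}{2}>0,
\]
the minimum being attained at $t=1/2$.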
Our aim in the present article is to extend the estimates of \cite{bb2018} to the case where the operator $H$ is not uniformly elliptic and/or is not self-adjoint; in particular a sharp exponential constant
is obtained.
Concerning the singularity or degeneracy, we assume that $H$
is locally uniformly elliptic and that there is a positive weight function $w(x)$ that controls in a suitable sense the behaviour of the coefficients of the operator. Our main assumption consists of two general conditions (H1) and (H2) on $w(x)$, a weighted Sobolev inequality and a weighted interpolation inequality.
These conditions were introduced in \cite{b1998} in order to obtain (non-sharp) Gaussian estimates for non-uniformly elliptic self-adjoint operators. Besides conditions (H1) and (H2) we shall assume that the symbol $A(x,\xi)$ is close in an appropriate sense to a certain class of ``good'' symbols induced by $w(x)$. These symbols correspond to operators which
additionally are self-adjoint and their coefficients are locally Lipschitz, with the behaviour near $\partial\Omega$ (or at infinity) being controlled by the weight $w(x)$. The estimates obtained herein
complement analogous estimates in \cite{b2004} where non-uniformly elliptic operators with strongly convex symbol were considered. The sharpness of the exponential constant $\sigma_*$ in our Gaussian estimate follows from the asymptotic estimates of \cite{bb2018}.
The proof is based on Davies' exponential perturbation method. One has to consider three different regimes depending on the values taken
by the function $Q(x)$, namely $0\leq Q(x)\leq 3$ (the strongly convex regime), $Q(x)\leq 0$ and $Q(x)\geq 3$.
While the operator $H$ may be singular or degenerate, our assumptions guarantee that the function $Q(x)$ is bounded away from zero and infinity, which is crucial for the
implementation of the method.
\section{Heat kernel estimates}
\subsection{Setting and statement of main theorem}
Let $\Omega\subset{\mathbb{R}}^2$ be open and connected. We consider a differential operator $H$ on $L^2(\Omega)$ (complex-valued functions) given formally by
\begin{equation}
Hu(x)=\partial_{x_1}^2\big(\alpha(x)\partial_{x_1}^2u\big)+2\partial^2_{x_1x_2}(\beta(x)\partial^2_{x_1x_2}u)+\partial_{x_2}^2\big(\gamma(x)\partial_{x_2}^2u\big),
\label{eq}
\end{equation}
where $\alpha$, $\beta$ and $\gamma$ are complex-valued, locally bounded functions on $\Omega$. In case $\Omega\neq {\mathbb{R}}^2$ we impose Dirichlet boundary conditions on $\partial\Omega$.
The operator $H$ is defined by means of the quadratic form
\[
Q(u)=\int_{\Omega}\big\{\alpha(x)|u_{x_1x_1}|^2+2\beta(x)|u_{x_1x_2}|^2+\gamma(x)|u_{x_2x_2}|^2\big\}\,dx,
\]
defined initially on $C^{\infty}_c(\Omega)$. We assume that there exists a positive weight $w(x)$ with $w^{\pm 1}\in L^{\infty}_{loc}(\Omega)$ that controls the functions $\alpha(x),\beta(x),\gamma(x)$ in the following sense:
First, there holds
\begin{equation}
|\alpha(x)|\leq cw(x),\qquad|\beta(x)|\leq cw(x),\qquad|\gamma(x)|\leq cw(x), \qquad x\in\Omega,
\label{5}
\end{equation}
for some $c>0$ and second, the weighted G{\aa}rding inequality
\[
{\rm Re}\;\,Q(u)\geq c\int_{\Omega}w(x)|\nabla^2u|^2\,dx,\qquad u\in C^{\infty}_c(\Omega),
\]
is valid for some $c>0$ (here $\nabla^2u$ denotes the vector whose components are the second-order partial derivatives of $u$).
This implies \cite[Theorem 7.12]{a} an analogous inequality for the symbol $A(x,\xi)$ of $H$, namely
\[
{\rm Re}\, A(x,\xi) \geq c \, w(x) |\xi|^{4} \; , \qquad x\in\Omega\, , \; \xi\in{\mathbb{R}}^2.
\]
The quadratic form $Q$ is closable and the domain of the closure is a weighted Sobolev space which we denote by $H^{2}_{w,0}(\Omega)$. We retain the same
symbol, $Q$, for the closure of the above form and define
$H$ as the associated accretive operator on $L^2(\Omega)$, so that $\langle Hu,u\rangle=Q(u)$,
$u\in{\rm Dom}(H)$, and $Hu$ is given by (\ref{eq}) in the weak sense.
We make two assumptions on the weight $w(x)$, a weighted Sobolev inequality and a weighted interpolation inequality:
(H1) There exist $s\in [\frac{1}{2}, 1]$ and $c>0$ such that
\[
\|u\|_{\infty}\leq c[{\rm Re}\; Q(u)]^{\frac{s}{2}}\|u\|^{1-s}_{2},\qquad u\in C^{\infty}_c(\Omega).
\]
(H2) There exists a constant $c>0$ such that
\[
\int_{\Omega}w^{\frac{1}{2}}|\nabla u|^2\,dx\, \leq
\,\epsilon\,\int_{\Omega}w|\nabla^{2}u|^2\,dx+c\epsilon^{-1}\,\int_{\Omega}|u|^{2}\,dx,
\]
for all $0<\epsilon<1$ and all $u\in C^{\infty}_c(\Omega)$.
Both (H1) and (H2) are satisfied when $H$ is uniformly elliptic, in which case the best value for the exponent $s$ is $s=1/2$, showing that in the general case we cannot expect any value that is smaller than $1/2$; in particular, (H1) is valid with $s=1/2$ if $w(x)$ is bounded away from zero. We refer to \cite{b1998} for a more detailed discussion of these conditions, including examples where they are both valid.
We note that condition (H2) implies that for any $k$, $l$ with $0\leq k$, $l\leq 2$, $k+l<4$, there exists a constant $c>0$ such that
\begin{equation}
(1+\lambda^{4-k-l})\int_{\Omega}w^{\frac{k+l}{4}}|\nabla^{k}u|\,|\nabla^{l}u|\,dx\,
\leq \,\epsilon\,{\rm Re}\; Q(u)+c\epsilon^{-\frac{k+l}{4-k-l}}(1+\lambda^{4})\|u\|^{2}_{2},
\label{eq:8}
\end{equation}
for all $\epsilon\in (0,1)$, $\lambda>0$ and all $u\in C^{\infty}_c(\Omega)$. Indeed, for $\lambda=1$, (\ref{eq:8}) is a consequence of (H2) and the Cauchy-Schwarz inequality; the case $\lambda<1$ follows trivially from the case $\lambda=1$; finally, writing (\ref{eq:8}) for $\lambda=1$ and replacing $\epsilon$ by $\epsilon\lambda^{k+l-4}$ we obtain the result for $\lambda>1$.
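Spelling out the last step: for $\lambda>1$ the substitution $\epsilon\mapsto\epsilon\lambda^{k+l-4}$ is admissible since $k+l<4$ implies $\epsilon\lambda^{k+l-4}<\epsilon<1$, and it turns the case $\lambda=1$ of (\ref{eq:8}) into
\[
\int_{\Omega}w^{\frac{k+l}{4}}|\nabla^{k}u|\,|\nabla^{l}u|\,dx\,
\leq \,\epsilon\lambda^{k+l-4}\,{\rm Re}\; Q(u)+c\epsilon^{-\frac{k+l}{4-k-l}}\lambda^{k+l}\|u\|^{2}_{2};
\]
multiplying through by $\lambda^{4-k-l}$ and using $1+\lambda^{4-k-l}\leq 2\lambda^{4-k-l}$ and $\lambda^{4}\leq 1+\lambda^{4}$ then yields (\ref{eq:8}) with an adjusted constant $c$.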
We define the weighted Sobolev space
\[
W^{1,\infty}_{w}(\Omega)=\{u\in W^{1,\infty}_{{\rm loc}}(\Omega): \exists c\geq 0 : \; |u(x)|\leq c \, w(x) \; , \; |\nabla u(x)|\leq c\, w(x)^{\frac{3}{4}},\;x\in\Omega\}.
\]
{\bf Definition 1.} We say that the symbol $A(x,\xi)$ lies in $\mathcal{G}_w$ if the functions $\alpha(x)$, $\beta(x)$, $\gamma(x)$
are real-valued and belong in $W^{1,\infty}_{w}(\Omega)$.
We think of $\mathcal{G}_w$ as a class of ``good'' symbols. By assumption (\ref{5}) the last condition holds true if and only if
\[
|\nabla\alpha(x)|+|\nabla\beta(x)|+|\nabla\gamma(x)|\leq cw(x)^{\frac{3}{4}} \; , \quad x\in\Omega \, .
\]
To state our main result we need some more definitions. We first set
\[
{\mathcal E}_{w} =\big\{\phi\in C^{2}(\Omega)\cap L^{\infty}(\Omega):\; \phi \mbox{ real valued, } \exists c>0 : \,
|\nabla\phi| \leq c w^{-\frac{1}{4}} , \; |\nabla^2\phi| \leq c w^{-\frac{1}{2}} \big\}.
\]
In case where the symbol $A(x,\xi)$ belongs in $\mathcal{G}_w$ (so in particular it is real-valued) we additionally define for any $M>0$ the subclass
\[
{\mathcal E}_{A,M} =\big\{\phi\in {\mathcal E}_{w} \, :\; A(x,\nabla\phi(x))\leq 1,\,|\nabla^{2}\phi(x)|\leq M\,w(x)^{-\frac{1}{2}}, \; x\in\Omega\big\} \, ;
\]
our Gaussian estimates will be expressed in terms of the distance
\[
d_M(x,x')=\sup\big\{\phi(x')-\phi(x)\,:\;\;\phi\in{\mathcal E}_{A,M}\big\}
\]
for arbitrarily large (but finite) $M$; we note that as $M\to +\infty$ this converges to
\[
d(x,x') = \sup \{ \phi(x') -\phi(x) \; : \; \phi\in {\rm Lip}(\Omega) \, , \;\; A( y ,\nabla\phi(y))\leq 1 \, , \;\; {\rm a.e. }\;\; y\in\Omega\}.
\]
The domain $\Omega$ is essentially partitioned in three components depending on the values of the bounded function
$Q(x)$ (cf. (\ref{q(x)})).
In particular, assuming always that the symbol $A(x,\xi)$ belongs in $\mathcal{G}_w$, we define the locally Lipschitz functions
\[
k(x)=\tarr{8\frac{1-Q(x)}{(1+Q(x))^2},}{\mbox{ if } Q(x) \leq 0,}{8,}{\mbox{ if }0\leq Q(x)\leq3,}{Q(x)^2-1,}{\mbox{ if }Q(x)\geq 3,}
\]
and
\[
\sigma(x)=\frac{3}{4}\cdot\Big(\frac{1}{4k(x)}\Big)^{1/3}= \tarr{ \frac{3}{8\cdot 4^{1/3}} \frac{ (1+Q(x))^{2/3}}{ (1-Q(x))^{1/3}},}
{\mbox{ if }Q(x) \leq 0,}{\frac{3}{8\cdot 4^{1/3}},}{\mbox{ if }0\leq Q(x)\leq3,}{ \frac{3}{4^{4/3}} (Q(x)^2-1)^{-1/3} ,}{\mbox{ if }Q(x) \geq 3.}
\]
We also set
\[
k^*=\sup_{x\in\Omega}k(x) \quad \mbox{ and }\quad
\sigma_*= \inf_{x\in\Omega}\sigma(x) = \frac{3}{4}\cdot\Big(\frac{1}{4k^*}\Big)^{1/3}.
\]
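In particular, in the strongly convex regime, i.e. when $0\leq Q(x)\leq 3$ throughout $\Omega$, we have $k^*=8$ and
\[
\sigma_*=\frac{3}{4}\cdot\Big(\frac{1}{32}\Big)^{1/3}=\frac{3}{8\cdot 4^{1/3}}=\frac{3\sqrt[3]{2}}{16},
\]
recovering the absolute exponential constant of the uniformly elliptic strongly convex estimate (\ref{eq:3}).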
In the general case where the symbol does not belong in $\mathcal{G}_w$ we denote by $\theta$ the
following weighted distance of the symbol $A(x,\xi)$ from $\mathcal{G}_w$,
\[
\theta=\inf_{\tilde{A} \in \mathcal{G}_w} \sup_{\Omega} \, \max_{|\xi|=1} \frac{ \big| A(x,\xi) - \tilde{A}(x,\xi) \big|}{w(x)} \, .
\]
We shall think of $\theta$ as a small number.
We now state our main result; the constants $c_{\epsilon}$, $c_{\epsilon,M}$ may also depend on the operator $H$.
\begin{theorem}
\label{thm1}
Assume that (H1) and (H2) are satisfied. \newline
(a) Assume that the symbol $A(x,\xi)$ belongs in $\mathcal{G}_w$. Then for all $\epsilon\in(0,1)$ and all $M$ large there exist $c_\epsilon,c_{\epsilon,M}<\infty$ such that
\begin{equation}
|G(x,x',t)|\leq c_\epsilon t^{-s}\exp\Big\{-(\sigma_*-\epsilon)\frac{d_M(x,x')^{4/3}}{t^{1/3}}+c_{\epsilon,M}t\Big\},
\label{cov1}
\end{equation}
for all $x,x'\in\Omega$ and $t>0$. \newline
(b) If $A(x,\xi)$ does not belong to $\mathcal{G}_w$ then there exists $c>0$ such that for all $\epsilon\in(0,1)$
and all $M$ large there exist $c_\epsilon,c_{\epsilon,M}<\infty$ such that
\[
|G(x,x',t)|\leq c_\epsilon t^{-s}\exp\Big\{-(\sigma_*-c\theta-\epsilon)
\frac{d_M(x,x')^{4/3}}{t^{1/3}}+c_{\epsilon,M}t\Big\},
\]
for all $x,x'\in\Omega$ and $t>0$; here $\sigma_*$ and $d_M(x,x')$ are defined as above corresponding to a symbol $\tilde{A}(x,\xi)$ in $\mathcal{G}_w$
for which $|A(x,\xi) - \tilde{A}(x,\xi)| \leq 2\theta w(x)|\xi|^{4}$, $x\in\Omega$, $\xi\in{\mathbb{R}}^2$.
\end{theorem}
{\em Remarks.} (1) It follows from the asymptotic estimates obtained in \cite{bb2018} that the constant $\sigma_*$ is the best possible. \newline
(2) In case (b) one could define the exponential constant $\sigma_*$ and the distance $d_M(x,x')$ using the symbol $A(x,\xi)$ rather than
$\tilde{A}(x,\xi)$. The resulting estimate would be comparable to the one in the theorem; such differences are anyway absorbed in the term $c\theta$
in the exponential and we prefer to use $\tilde{A}(x,\xi)$ for the definition of these quantities since otherwise the proofs would be longer.
\subsection{Proof of Theorem \ref{thm1}}
As already mentioned, the proof makes use of Davies' perturbative argument \cite{d}.
It follows from hypothesis (H2) that for any $\psi\in{\mathcal E}_{w}$ the (multiplication) operator $e^{\psi}$ leaves the Sobolev space
$H^{2}_{w,0}(\Omega)$ invariant so we may define a sesquilinear form $Q_{\psi}$ on $H^{2}_{w,0}(\Omega)$ by
$Q_{\psi}(u)=Q(e^{\psi}u,e^{-\psi}u) $;
here
\[
Q(u,v)= \int_{\Omega}\big\{\alpha(x)u_{x_1x_1}\overline{v}_{x_1x_1}+2\beta(x)u_{x_1x_2}\overline{v}_{x_1x_2}
+\gamma(x)u_{x_2x_2}\overline{v}_{x_2x_2}\big\}\,dx
\]
is the sesquilinear form associated to $Q(\cdot)$, hence
\begin{eqnarray}
Q_{\psi}(u)&=&\int_{\Omega}\Big[\alpha(x)(e^{\psi}u)_{x_1x_1}
(e^{-\psi}\overline{u})_{x_1x_1}+2\beta(x)(e^{\psi}u)_{x_1x_2}(e^{-\psi}\overline{u})_{x_1x_2}\nonumber\\
&&\quad\quad+\gamma(x)(e^{\psi}u)_{x_2x_2}(e^{-\psi}\overline{u})_{x_2x_2} \Big] \,dx. \label{eenndd}
\end{eqnarray}
We shall need the following result, see \cite[Proposition 3.2]{b2004}:
\begin{lemma}
Assume that (H1) and (H2) are satisfied. Let $\psi\in{\mathcal E}_w$ be given and let $k \in{\mathbb{R}}$ be such that
\[
{\rm Re}\,Q_{\psi}(u)\geq - k \,\|u\|_2^2
\]
for all $u\in C_c^{\infty}(\Omega)$. Then for any $\delta\in(0,1)$ there exists a constant $c_\delta$ such that
\[
|G(x,x',t)|\leq c_{\delta} t^{-s}\exp\big\{\psi(x)-\psi(x')+(1+\delta) k t\big\},
\]
for all $x,x'\in\Omega$ and all $t>0$.
\label{lem:ebd}
\end{lemma}
We now take in (\ref{eenndd}) $\psi =\lambda\phi$ where $\lambda>0$ and $\phi\in{\mathcal E}_{A,M}$.
After expanding, the exponentials $e^{\lambda\phi}$ and $e^{-\lambda\phi}$ cancel and we obtain that $Q_{\lambda\phi}(u)$ is a linear combination of terms of the form
\begin{equation}
\lambda^s \int_{\Omega}b_{s,\gamma,\delta}(x)D^{\gamma}u\,D^{\delta}\overline{u}\,dx,
\label{eq:19}
\end{equation}
(multi-index notation) where $s+|\gamma+\delta|\leq 4$ and
each function $b_{s,\gamma,\delta}(x)$ is a product of one of the functions $\alpha(x)$, $\beta(x)$, $\gamma(x)$ and first or second order derivatives of $\phi(x)$ (see
also (\ref{789}) below). Recalling (\ref{5}) we see that for each such term we have
\begin{equation}
|b_{s,\gamma,\delta}(x) | \leq c w(x)^{\frac{|\gamma+\delta|}{4}} \; , \qquad x\in\Omega \, .
\label{est}
\end{equation}
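To illustrate the structure of these terms, consider the $\alpha$-part of $Q_{\lambda\phi}(u)$. A direct computation gives
\[
(e^{\lambda\phi}u)_{x_1x_1}=e^{\lambda\phi}\big[u_{x_1x_1}+2\lambda\phi_{x_1}u_{x_1}+(\lambda^2\phi_{x_1}^2+\lambda\phi_{x_1x_1})u\big],
\]
and similarly for $(e^{-\lambda\phi}\overline{u})_{x_1x_1}$ with $\lambda$ replaced by $-\lambda$; in the product the exponentials cancel and every resulting term is of the form (\ref{eq:19}). For instance the product of the zeroth order parts contributes
\[
\lambda^{4}\alpha(x)\phi_{x_1}^{4}|u|^{2}-\lambda^{2}\alpha(x)\phi_{x_1x_1}^{2}|u|^{2},
\]
the first summand of which appears in (\ref{789}) below, while the second, having $s+|\gamma+\delta|=2<4$, belongs to the class ${\mathcal L}$ introduced next.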
\noindent
{\bf Definition 2.} We denote by ${\mathcal L}$ the space of (finite) linear combinations of terms of the form (\ref{eq:19})
with $s+|\gamma+\delta|<4$ and $|b_{s,\gamma,\delta}(x)|\leq cw(x)^{\frac{|\gamma+\delta|}{4}}$.
We note that if a form $T(u)$ of the type (\ref{eq:19}) belongs in ${\mathcal L}$, then by (\ref{eq:8}) we have for any $\epsilon>0$,
\begin{align}
|T(u)| & \leq c\lambda^s \int_{\Omega} w(x)^{\frac{|\gamma+\delta|}{4}} |D^{\gamma}u| \, |D^{\delta}u| \, dx \nonumber \\
&\leq \epsilon\,{\rm Re}\; Q(u)+c\epsilon^{-\frac{|\gamma +\delta|}{4-|\gamma +\delta|}}(1+\lambda^{\frac{4s}{4-|\gamma +\delta|}}) \|u\|^{2}_{2} \nonumber \\
&\leq \epsilon\,{\rm Re}\; Q(u)+c\epsilon^{-3}(1+\lambda^3) \|u\|^{2}_{2} \; .
\label{form:l}
\end{align}
We next define the quadratic form
\begin{align}
\label{789}&\qquad Q_{1,\lambda\phi}(u)= \nonumber \\
&\int_{\Omega}\Big\{\lambda^4\big[\alpha(x)\phi_{x_1}^4+2\beta(x)\phi_{x_1}^2\phi_{x_2}^2+\gamma(x)\phi_{x_2}^4\big]|u|^2\nonumber \\
&\quad+\lambda^2\Big\{\alpha(x)\phi_{x_1}^2(u\overline{u}_{x_1x_1}+u_{x_1x_1}\overline{u} -4|u_{x_1}|^2)\nonumber \\
&\quad+2\beta(x)\big[\phi_{x_1}\phi_{x_2}(u\overline{u}_{x_1x_2}+u_{x_1x_2}\overline{u}-u_{x_1}\overline{u}_{x_2}-u_{x_2}\overline{u}_{x_1})-(\phi_{x_2}^2|u_{x_1}|^2
+\phi_{x_1}^2|u_{x_2}|^2)\big]\nonumber\\
&\quad+\gamma(x)\phi_{x_2}^2(u\overline{u}_{x_2x_2}+u_{x_2x_2}\overline{u}-4|u_{x_2}|^2)\Big\}\nonumber \\
&\quad +\alpha(x)|u_{x_1x_1}|^2+2\beta(x)|u_{x_1x_2}|^2+\gamma(x)|u_{x_2x_2}|^2\Big\}\,dx.
\end{align}
It may be seen that $Q_{1,\lambda\phi}(u)$ contains precisely those terms of the form (\ref{eq:19}) from the expansion of $Q_{\lambda\phi}(u)$ for which we have $s+|\gamma+\delta|=4$.
Hence, recalling also (\ref{est}), the difference $Q_{\lambda\phi}(\cdot)-Q_{1,\lambda\phi}(\cdot)$ belongs in ${\mathcal L}$.
We now define the polar symbol
\[
A(x,z,z')=\alpha(x)z_1^2z_1'^2+2\beta(x)z_1z_2z_1'z_2'+\gamma(x)z_2^2z_2'^2,\qquad x\in\Omega,\quad z,\,z'\in{\mathbb{C}}^2 \, .
\]
We note that for $z=z'=\xi\in{\mathbb{R}}^2$ this reduces to the symbol $A(x,\xi)$ of $H$.
For $x\in\Omega$ and $\xi,\xi',\eta\in{\mathbb{R}}^2$ we also set
\begin{equation}
S(x,\xi,\xi',\eta)={\rm Re}\,A(x,\xi+i\eta,\xi'+i\eta)+k(x)A(x,\eta).
\label{star}
\end{equation}
Given $\phi\in{\mathcal E}_w$ and $\labelmbda >0$ we define the quadratic form $S_{\lambda\phi}$ on $H^{2}_{w,0}(\Omega)$ by
\[
S_{\lambda\phi}(u)=\frac{1}{(2\pi)^{2}}\iiint_{\Omega\times{\mathbb{R}}^2\times{\mathbb{R}}^2}S(x,\xi,\xi',\lambda\nabla\phi)e^{i(\xi-\xi')\cdot x}\hat{u}(\xi)\overline{\hat{u}(\xi')}\,d\xi\,d\xi'\,dx.
\]
\begin{lemma}
\label{lem2}
Assume that the symbol $A(x,\xi)$ lies in $\mathcal{G}_w$.
Let $\phi\in{\mathcal E}_w$ and $\lambda>0$. There holds
\[
{\rm Re}\, Q_{1,\lambda\phi}(u)+\int_{\Omega} k(x)A(x,\lambda\nabla\phi)|u|^2dx=S_{\lambda\phi}(u),
\]
for all $u\in C^{\infty}_c(\Omega)$.
\end{lemma}
\noindent
{\em Proof.} This follows from (\ref{star}) by using the relation
\[
D^{\alpha}u(x) =(2\pi)^{-1}\int_{{\mathbb{R}}^2}(i\xi)^{\alpha}e^{ix\cdot\xi}\hat{u}(\xi)d\xi
\]
for the various terms that appear in $Q_{1,\lambda\phi}$; the fact that $\alpha(x)$, $\beta(x)$ and $\gamma(x)$ are real-valued is also used here. $\Box$
We now define for each $x\in\Omega$ a quadratic form $\Gamma(x , \cdot)$ in ${\mathbb{C}}^6$ by
\begin{eqnarray*}
&&\Gamma(x,p)=\\
&&\hspace{-0.5cm}\left\{\begin{array}{l}{(Q+1)|p_1|^2+(Q+1)|p_2|^2-Q|p_3|^2-2Q|p_4|^2-}\\[0.1cm]
{\hspace{3cm}-2Q|p_5|^2-\frac{Q(3-Q)^2}{(1+Q)^2}|p_6|^2,
\hspace{1.6cm}\mbox{if }-1<Q(x)<0,}\\[0.3cm]
\frac{3-Q}{3}|p_1|^2+\frac{3-Q}{3}|p_2|^2+\frac{Q}{3}|p_1+p_2|^2+\frac{4Q}{3}|p_3|^2,
\hspace{2cm}\mbox{if }0\leq Q(x)\leq3, \\[0.3cm]
{2(Q-3)|p_1|^2+|p_2|^2 +2(Q-1)|p_3|^2+2\frac{Q-3}{Q-1}(Q+1)(Q^2+3)|p_4|^2,}\\[0.1cm]
\hspace{9cm}\mbox{if }Q(x)>3,
\end{array}
\right.
\end{eqnarray*}
for any $p=(p_1,\ldots,p_6)\in{\mathbb{C}}^6$.
Clearly $\Gamma(x , \cdot)$ is positive semidefinite for each $x\in\Omega$. We denote by $\Gamma(x,\cdot,\cdot)$ the corresponding sesquilinear form in ${\mathbb{C}}^6$, that is $\Gamma(x, p ,q)$ is given by a formula similar to the one above with each $|p_k|^2$ being replaced by $p_k\overline{q_k}$
and with $|p_1+p_2|^2$ being replaced by $(p_1+p_2)\overline{(q_1+q_2)}$.
Next, for any $x\in\Omega$ and $\xi,\eta\in{\mathbb{R}}^2$ we define a vector $p_{x,\xi,\eta}\in{\mathbb{R}}^6$ by
\begin{eqnarray*}
&&p_{x,\xi,\eta}=\\
&& \left\{
\begin{equation}gin{array}{l}
{\!\!\!\!\Big(\alpha^{1/2}[\xi_1^2-\frac{3-Q}{1+Q}\eta_1^2],\gamma^{1/2}[\xi_2^2-\frac{3-Q}{1+Q}\eta_2^2],\,\alpha^{1/2}\xi_1^2-\gamma^{1/2}\xi_2^2,\,\alpha^{1/2}\xi_1\eta_1 \! +\!\gamma^{1/2}\xi_2\eta_2,}\\
{\qquad \alpha^{1/4}\gamma^{1/4}(\xi_1\eta_2+\xi_2\eta_1),\,\alpha^{1/2}\eta_1^2-\gamma^{1/2}\eta_2^2\Big),}
\hspace{1.3cm}{\mbox{ if } -1<Q(x)<0,} \\[0.2cm]
{\!\!\!\Big(\alpha^{1/2}[\xi_1^2-3\eta_1^2],\,\gamma^{1/2}[\xi_2^2-3\eta_2^2],\,\alpha^{1/4}\gamma^{1/4}[\xi_1\xi_2-3\eta_1\eta_2],\,0,\,0,\,0\Big),} \\
\hspace{8.3cm}{\mbox{ if } 0\leq Q(x)\leq3,} \\[0.2cm]
{\!\!\!\Big(\alpha^{1/2}\xi_1\eta_1-\gamma^{1/2}\xi_2\eta_2,\,\alpha^{1/2}(\xi_1^2-Q \eta_1^2)+\gamma^{1/2}(\xi_2^2-Q\eta_2^2),}\\
{\qquad\alpha^{1/4}\gamma^{1/4}[\xi_1\xi_2-\frac{Q+3}{Q-1}\eta_1\eta_2],\,\alpha^{1/4}\gamma^{1/4}\eta_1\eta_2,\,0,\,0\Big),}
\hspace{.45cm} {\mbox{ if }Q(x)>3.}
\end{array}
\right.
\end{equation}an
A crucial property of the form $\Gamma(x,\cdot)$ and the vectors $p_{x,\xi,\eta}$ is that
\begin{equation}
S(x,\xi,\xi,\eta)=\Gamma(x,p_{x,\xi,\eta},p_{x,\xi,\eta}),
\label{s:g}
\end{equation}
for all $x\in\Omega$ and $\xi,\eta\in{\mathbb{R}}^2$.
We finally define a quadratic form $\Gamma_{\lambda\phi}(\cdot)$ on $H^{2}_{w,0}(\Omega)$ by
\[
\Gamma_{\lambda\phi}(u)=\frac{1}{(2\pi)^{2}}\iiint_{\Omega\times{\mathbb{R}}^2\times{\mathbb{R}}^2}\Gamma(x, \, p_{x,\xi,\lambda\nabla\phi},p_{x,\xi',\lambda\nabla\phi})e^{i(\xi-\xi')\cdot x}\hat{u}(\xi)
\overline{\hat{u}(\xi')}\,d\xi\,d\xi'\,dx.
\]
We then have
\begin{lemma}
\label{lem3}
Assume that the symbol $A(x,\xi)$ lies in $\mathcal{G}_w$. Then the difference $S_{\lambda\phi}(\cdot)-\Gamma_{\lambda\phi}(\cdot)$ belongs to ${\mathcal L}$.
\end{lemma}
\noindent
{\em Proof.} We consider the difference
\[
S(x,\xi,\xi',\eta)-\Gamma(x,p_{x,\xi,\eta},p_{x,\xi',\eta}),
\]
of the two symbols and we group together terms that have the property that if we set $\xi'=\xi$ then they are similar as monomials of the variables
$\xi$ and $\eta$. Due to (\ref{s:g}) one can use integration by parts to conclude that the total contribution of each such
group belongs to ${\mathcal L}$. We shall illustrate this for two particular groups, the one consisting of terms which for $\xi=\xi'$ involve the monomial $\xi_1^2\eta_1^2$ and those which
for $\xi=\xi'$ involve $\xi_1^2\eta_2^2$. For the sake of brevity we shall consider directly the sum of the terms of both groups.
The terms of these two groups from $S(x,\xi,\xi',\eta)$ add up to
\[
-\alpha(x)\eta_1^2(\xi_1^2+\xi_1'^2+4\xi_1\xi_1')-2\beta(x)\eta_2^2\xi_1\xi_1'.
\]
The corresponding terms in $\Gamma(x,p_{x,\xi,\eta},p_{x,\xi',\eta})$ are
\[
\left\{
\begin{array}{l}
{\!\!\alpha(x)\eta_1^2\big[(Q(x)-3)(\xi_1^2+\xi_1'^2)-2Q(x)\xi_1\xi_1'\big]-2\beta(x)\eta_2^2\xi_1\xi_1',} \hspace{1.4cm} {\mbox{if }Q(x)\leq 0,} \\[0.2cm]
{\!\!-3\alpha(x)\eta_1^2(\xi_1^2+\xi_1'^2)-\beta(x)\eta_2^2(\xi_1^2+\xi_1'^2),}
\hspace{3.6cm}\mbox{if }0\leq Q(x)\leq3,\\[0.2cm]
{\!\!\alpha(x)\eta_1^2\big[-Q(x)(\xi_1^2+\xi_1'^2)+2(Q(x)-3)\xi_1\xi_1'\big]-\beta(x)\eta_2^2(\xi_1^2+\xi_1'^2),} \hspace{.3cm}\mbox{if }Q(x)\geq 3.
\end{array}
\right.
\]
Hence the difference of these terms in $S(x,\xi,\xi',\eta)-\Gamma(x,p_{x,\xi,\eta},p_{x,\xi',\eta})$ is
\[
\tarr{\alpha(x)\eta_1^2\big[2-Q(x)\big](\xi_1-\xi_1')^2,}{\mbox{ if }Q(x)\leq 0,}
{\big[2\alpha(x)\eta_1^2+\beta(x)\eta_2^2\big](\xi_1-\xi_1')^2,}{\mbox{ if }0\leq Q(x)\leq3,}
{\big[\alpha(x)(Q(x)-1)\eta_1^2+\beta(x)\eta_2^2\big](\xi_1-\xi_1')^2,}{\mbox{ if } Q(x)\geq 3.}
\]
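As a quick check we add here, the case $Q(x)\leq 0$ of this difference can be verified term by term: the $\beta$-terms $-2\beta\eta_2^2\xi_1\xi_1'$ appear identically in both expressions and cancel, while the $\alpha$-terms give

```latex
-\alpha\eta_1^2\big(\xi_1^2+\xi_1'^2+4\xi_1\xi_1'\big)
+\alpha\eta_1^2\big[(3-Q)(\xi_1^2+\xi_1'^2)+2Q\xi_1\xi_1'\big]
=\alpha\eta_1^2(2-Q)\big(\xi_1^2+\xi_1'^2-2\xi_1\xi_1'\big)
=\alpha\eta_1^2\big[2-Q\big](\xi_1-\xi_1')^2;
```

the other two cases are checked in the same way.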
This can also be written as $\big[\alpha(x)\eta_1^2 R(x)+\eta_2^2 P(x)\big](\xi_1-\xi_1')^2$ where
\[
R(x)=\tarr{2-Q(x),}{\mbox{ if }Q(x)\leq 0,}{2,}{\mbox{ if }0\leq Q(x)\leq3,}{Q(x)-1,}{\mbox{ if }Q(x)\geq 3,}
\quad \mbox{and} \quad
P(x)=\left\{\begin{array}{rcl} 0, & \mbox{if} & \beta(x) \leq 0, \\ \beta(x), & \mbox{if} & \beta(x)\geq 0 . \end{array}\right.
\]
Inserting this in the triple integral and recalling that $\eta=\lambda\nabla\phi$ we obtain that the contribution of the above terms in the difference $S_{\lambda\phi}(u)-\Gamma_{\lambda\phi}(u)$ is
\begin{eqnarray*}
&&(2\pi)^{-2}\iiint_{\Omega\times{\mathbb{R}}^2\times{\mathbb{R}}^2}\big[\alpha(x)R(x)\phi_{x_1}^2+P(x)\phi_{x_2}^2\big](\xi_1-\xi_1')^2\lambda^2 e^{i(\xi-\xi')\cdot x} \\
&&\hspace{4cm}\hat{u}(\xi)\overline{\hat{u}(\xi')}\,d\xi\,d\xi'\,dx\\
&=&\lambda^2\int_{\Omega}\big[\alpha(x)R(x)\phi_{x_1}^2+P(x)\phi_{x_2}^2\big](-u_{x_1x_1}\overline{u}-u\overline{u}_{x_1x_1}-2|u_{x_1}|^2)dx\\
&=&-\lambda^2\int_{\Omega}\big[\alpha(x)R(x)\phi_{x_1}^2+P(x)\phi_{x_2}^2\big]( u_{x_1}\overline{u}+u\overline{u}_{x_1})_{x_1}dx \\
&=&\lambda^2\int_{\Omega}\big[\alpha(x)R(x)\phi_{x_1}^2+P(x)\phi_{x_2}^2\big]_{x_1}( u_{x_1}\overline{u}+u\overline{u}_{x_1})dx ,
\end{eqnarray*}
where we have used the fact that
the function $\alpha(x)R(x)\phi_{x_1}^2+P(x)\phi_{x_2}^2$ is locally Lipschitz. To conclude that the last expression belongs in ${\mathcal L}$ we must prove that (\ref{est}) is valid, that is
$\big| [\alpha(x)R(x)\phi_{x_1}^2+P(x)\phi_{x_2}^2]_{x_1} \big| \leq c w(x)^{\frac{1}{4}}$.
We shall only consider the first of the two terms, the proof being similar for the second. Using the relations
$|Q(x)|\leq c$, $|\nabla Q(x)|\leq cw(x)^{-1/4}$ we obtain
\begin{eqnarray*}
\big| (\alpha(x)R(x)\phi_{x_1}^2)_{x_1} \big| &\leq & |\alpha_{x_1} R| \phi_{x_1}^2 + |\alpha R_{x_1} | \phi_{x_1}^2 + 2| \alpha R \phi_{x_1} \phi_{x_1x_1} | \\
&\leq & c w^{\frac{3}{4}} w^{-\frac{1}{2}} + cw w^{-\frac{1}{4}} w^{-\frac{1}{2}} + cw w^{-\frac{1}{4}} M w^{-\frac{1}{2}} \\
&=& c_M w^{\frac{1}{4}},
\end{eqnarray*}
as required. $\Box$
\begin{lemma}
\label{lem4}
Assume that the symbol $A(x,\xi)$ lies in $\mathcal{G}_w$ and let $M>0$ be given. Then for any $\phi\in{\mathcal E}_{A,M}$ and $\lambda>0$ we have
\[
{\rm Re}\,Q_{\lambda\phi}(u)\geq-k^*\lambda^4\,\|u\|_2^2+T(u),
\]
for some quadratic form $T\in{\mathcal L}$ and all $u\in C^{\infty}_c(\Omega)$.
\end{lemma}
\noindent
{\em Proof.} The assumption $\phi\in{\mathcal E}_{A,M}$ implies that $A(x,\nabla\phi(x))\leq1$, $x\in\Omega$. Recalling that the difference $Q_{\lambda\phi}(\cdot)-Q_{1,\lambda\phi}(\cdot)$ belongs in ${\mathcal L}$ and
using Lemmas \ref{lem2} and \ref{lem3}
we obtain
\begin{eqnarray*}
{\rm Re}\,Q_{\lambda\phi}(u)&=&-\int_\Omega k(x)A(x,\lambda\nabla\phi)\,|u|^2\,dx+\Gamma_{\lambda\phi}(u)+T(u)\\
&\geq&-k^*\lambda^4\int_\Omega|u|^2\,dx+\Gamma_{\lambda\phi}(u)+T(u),
\end{eqnarray*}
for some form $T\in{\mathcal L}$ and all $u\in C^{\infty}_c(\Omega)$. Moreover
\begin{align*}
\Gamma_{\lambda\phi}(u)\, =\; &\frac{1}{(2\pi)^{2}}\iiint_{\Omega\times{\mathbb{R}}^2\times{\mathbb{R}}^2}\Gamma(x, \, p_{x,\xi,\lambda\nabla\phi},p_{x,\xi',\lambda\nabla\phi})e^{i(\xi-\xi')\cdot x}\hat{u}(\xi)\overline{\hat{u}(\xi')}\,d\xi\,d\xi'\,dx\\
\, =\;&\frac{1}{(2\pi)^{2}}\int_{\Omega}\Gamma\Big(x , \,\int_{{\mathbb{R}}^2}e^{i\xi\cdot x}\hat{u}(\xi)p_{x,\xi,\lambda\nabla\phi}d\xi,\int_{{\mathbb{R}}^2}e^{i\xi'\cdot x}\hat{u}(\xi')p_{x,\xi',\lambda\nabla\phi}d\xi'\Big)\,dx\\
\, \geq \; &0,
\end{align*}
by the positive semi-definiteness of $\Gamma$; the result follows. $\Box$
\noindent
{\bf\em Proof of Theorem \ref{thm1}.}
{\em Part (a).} We claim that for any $\epsilon$ and $M$ positive there exists $c_{\epsilon,M}$ (which may also depend on the operator $H$) such that
\begin{equation}
{\rm Re}\,Q_{\lambda\phi}(u)\geq-\Big\{(k^*+\epsilon)\lambda^4+c_{\epsilon,M}(1+\lambda^3)\Big\}\|u\|_2^2,
\label{claim}
\end{equation}
for all $\lambda>0$ and $\phi\in{\mathcal E}_{A,M}$.
To prove this we first recall (cf. (\ref{form:l})) that any form $T\in{\mathcal L}$ satisfies
\[
|T(u)|\leq\epsilon Q(u)+c_{\epsilon,M}(1+\lambda^3)\,\|u\|_2^2,
\]
for all $\epsilon\in(0,1)$, $\lambda>0$ and $u\in C^{\infty}_c(\Omega)$. Hence, since $Q(u)$ is real, Lemma \ref{lem4} implies
\begin{equation}
\label{gui1}
{\rm Re}\,Q_{\lambda\phi}(u)\geq-\Big\{k^*\lambda^4+c_{\epsilon,M}(1+\lambda^3)\Big\}\|u\|_2^2-
\epsilon Q(u).
\end{equation}
Now, considering the expansion of $Q_{\lambda\phi}$ already discussed and recalling (\ref{eq:8}) we infer that there exists a constant $c_M$ such that for any $\phi\in{\mathcal E}_{A,M}$ and $\lambda>0$ there holds
\begin{equation}
\big| Q(u)-Q_{\lambda\phi}(u) \big| \leq\frac{1}{2}Q(u)+c_M(\lambda+\lambda^4)\|u\|_2^2 \, .
\label{fil1}
\end{equation}
Furthermore, we note that the dependence on $M$ in this estimate comes from those terms in the expansion
of $Q_{\lambda\phi}$ that contain at least one second-order derivative of $\phi$.
Since the coefficient of $\labelmbda^4$ in the expansion only involves first
derivatives of $\phi$, (\ref{fil1}) can be improved to
\[
\big|Q(u)-Q_{\lambda\phi}(u) \big|\leq\frac{1}{2}Q(u)+\big\{c_M(\lambda+\lambda^3)+c
\lambda^4\big\}\|u\|_2^2,
\]
which in turn implies
\begin{equation}
Q(u)\leq\, 2{\rm Re}\,Q_{\lambda\phi}(u)+\big\{c_M(\lambda+\lambda^3)+c\lambda^4\big\}
\|u\|_2^2.
\label{last}
\end{equation}
Let $u\in C^{\infty}_c(\Omega)$ be given. If ${\rm Re}\, Q_{\lambda\phi}(u)\geq0$ then (\ref{claim}) is obviously true. If not we then have from (\ref{gui1}) and (\ref{last})
\begin{eqnarray*}
{\rm Re}\,Q_{\lambda\phi}(u)&\geq&-\Big\{k^*\lambda^4+c_{\epsilon,M}(1+\lambda^3)\Big\}\|u\|_2^2-2\epsilon\, {\rm Re}\, Q_{\lambda\phi}(u) \\
&& -\epsilon\big\{c_M(\lambda+\lambda^3)+c\lambda^4\big\}\|u\|_2^2\\
&\geq&-\Big\{(k^*+c\epsilon)\lambda^4+c_{\epsilon,M}(1+\lambda^3)+\epsilon\big\{c_M(\lambda+\lambda^3)+c\lambda^4\big\}\Big\}\|u\|_2^2,
\end{eqnarray*}
and (\ref{claim}) again follows; hence the claim has been proved.
We complete the standard argument; Lemma \ref{lem:ebd} and (\ref{claim}) imply
\[
|G(x,x',t)|<c_\epsilon t^{-s}\exp\Big\{\lambda \big( \phi(x)-\phi(x') \big)+(1+\epsilon)\big\{(k^*+\epsilon)\lambda^4+c_{\epsilon,M}(1+\lambda^3)\big\}\,t\Big\},
\]
for all $\epsilon\in(0,1)$. Optimizing over $\phi\in{\mathcal E}_{A,M}$ yields
\[
|G(x,x',t)|<c_\epsilon t^{-s}\exp\Big\{-\lambda d_M(x,x')+(1+\epsilon) \big\{ (k^*+\epsilon)\lambda^4+c_{\epsilon,M}(1+\lambda^3) \big\}\,t\Big\}.
\]
Finally choosing $\lambda=[d_M(x,x')/(4k^*t)]^{1/3}$ we have
\[
-\lambda d_M(x,x')+k^*\lambda^4t =-\sigma_*\frac{d_M(x,x')^{4/3}}{t^{1/3}},
\]
and (\ref{cov1}) follows.
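Indeed, this value of $\lambda$ minimizes the exponent: writing $d=d_M(x,x')$ and $E(\lambda)=-\lambda d+k^*\lambda^4 t$, we have $E'(\lambda)=-d+4k^*\lambda^3 t=0$ at $\lambda=[d/(4k^*t)]^{1/3}$, and substituting back,

```latex
E(\lambda)=-\frac{d^{4/3}}{(4k^*t)^{1/3}}+\frac{d^{4/3}}{4\,(4k^*t)^{1/3}}
=-\frac{3}{4^{4/3}}\,\frac{1}{(k^*)^{1/3}}\,\frac{d^{4/3}}{t^{1/3}},
```

which identifies $\sigma_*=3\cdot4^{-4/3}(k^*)^{-1/3}$ (a check we add here, assuming $\sigma_*$ denotes the constant fixed earlier for (\ref{cov1})).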
{\em Part (b).} There exists a symbol $\tilde{A}(x,\xi)$ in $\mathcal{G}_w$ such that
\[
\max \big\{ | \alpha(x)-\tilde\alpha(x)| \, , \; |\beta(x)-\tilde\beta(x)|\, , \; |\gamma(x)-\tilde\gamma(x)| \big\} \leq 2\theta \, w(x) \; , \qquad x\in\Omega.
\]
Given $\phi\in{\mathcal E}_{\tilde{A},M}$ and $\lambda>0$ it follows from the proof of Part (a) that
\begin{equation}
{\rm Re}\,\tilde{Q}_{\lambda\phi}(u)\geq-\Big\{ k^* \lambda^4+c_{\epsilon,M}(1+\lambda^3)\Big\}\|u\|_2^2-\epsilon \, {\rm Re}\, Q(u),
\label{25}
\end{equation}
for all $u\in C^{\infty}_c(\Omega)$. Moreover it is easily seen that
\begin{equation}
\big| Q_{\lambda\phi}(u)-\tilde{Q}_{\lambda\phi}(u) \big| \leq c\theta \big\{ {\rm Re}\, Q(u)+\lambda^4\|u\|_2^2\big\}.
\label{26}
\end{equation}
The argument used for (\ref{last}) also applies to $H$ and we thus obtain
\begin{equation}
{\rm Re} \, Q(u)\leq\, 2{\rm Re}\,Q_{\lambda\phi}(u)+\big\{c_M(\lambda+\lambda^3)+c\lambda^4\big\} \|u\|_2^2.
\label{last1}
\end{equation}
Combining (\ref{25}), (\ref{26}) and (\ref{last1}) we conclude that
\[
{\rm Re}\,Q_{\lambda\phi}(u)\geq-\Big\{(k^*+c\theta+\epsilon )\lambda^4+c_{\epsilon,M}(1+\lambda^3)\Big\}\|u\|_2^2 , \qquad u\in C^{\infty}_c(\Omega),
\]
and the argument is completed as in Part (a); we omit further details. $\Box$
\end{document}
\begin{document}
\title{Improved Imaging by Invex Regularizers\\ with Global Optima Guarantees}
\begin{abstract}
Image reconstruction enhanced by regularizers, e.g., to enforce sparsity, low rank or smoothness priors on images, has many successful applications in vision tasks such as computational photography, biomedical and spectral imaging. It is well accepted that non-convex regularizers normally perform better than convex ones in terms of reconstruction quality, but their convergence analysis is established only to a critical point, rather than to the global optima. To mitigate the loss of guarantees for global optima, we propose to apply the concept of \textit{invexity} and provide the first list of provably invex regularizers for improving image reconstruction. Moreover, we establish convergence guarantees to global optima for various advanced image reconstruction techniques after being improved by such invex regularization. To the best of our knowledge, this is the first practical work applying invex regularization to improve imaging with global optima guarantees. To demonstrate the effectiveness of invex regularization, numerical experiments are conducted for various imaging tasks using benchmark datasets.
\end{abstract}
\section{Introduction}
Image reconstruction (restoration) enhanced by regularizers has a wide application in vision tasks such as computed tomography \cite{sidky2012convex,zhang2022opk_snca}, optical imaging \cite{meinhardt2017learning,afonso2010fast}, magnetic resonance imaging \cite{ehrhardt2016multicontrast,fessler2020optimization}, computational photography \cite{rostami2021power,heide2013high}, biomedical and spectral imaging \cite{wang2019hyperspectral,zhang2013parallel}. In general, an image reconstruction task can be formulated as the solution of the following optimization problem:
\begin{align}
\minimize_{\boldsymbol{x}\in \mathbb{R}^{n}} \hspace{0.5em}F(\boldsymbol{x}) = f(\boldsymbol{x}) + g(\boldsymbol{x}).
\label{eq:basicProblem}
\end{align}
Here $f(\boldsymbol{x})$ models a data fidelity term, which usually corresponds to an error loss for image reconstruction, and is assumed to be differentiable. The other function $g(\boldsymbol{x})$ acts as a regularizer which can be non-smooth. It imposes image priors such as sparsity, low rank or smoothness \cite{monga2017handbook}. The use of an appropriate regularizer plays an important role in obtaining robust reconstruction results.
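As a concrete illustration of Eq. \eqref{eq:basicProblem} (a minimal sketch we add here; the operator $\boldsymbol{H}$, the toy sparse signal and all parameter values below are arbitrary choices, not taken from the cited works), proximal gradient descent alternates a gradient step on the differentiable fidelity $f(\boldsymbol{x})=\frac{1}{2}\lVert\boldsymbol{H}\boldsymbol{x}-\boldsymbol{b}\rVert_2^2$ with the proximal operator of the regularizer $g$; for the convex choice $g=\tau\lVert\cdot\rVert_1$ the proximal step is elementwise soft-thresholding:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1: elementwise shrinkage toward zero."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient(H, b, tau, iters):
    """Minimize 0.5 * ||H x - b||_2^2 + tau * ||x||_1 (ISTA-style sketch)."""
    step = 1.0 / np.linalg.norm(H, 2) ** 2   # 1/L, with L the Lipschitz constant of grad f
    x = np.zeros(H.shape[1])
    for _ in range(iters):
        grad = H.T @ (H @ x - b)             # gradient of the fidelity term
        x = soft_threshold(x - step * grad, step * tau)
    return x

# Toy compressive-sensing instance: recover a sparse vector from m < n measurements.
rng = np.random.default_rng(0)
H = rng.standard_normal((40, 100)) / np.sqrt(40)
x_true = np.zeros(100)
x_true[[3, 17, 58]] = [1.0, -2.0, 1.5]
b = H @ x_true
x_hat = proximal_gradient(H, b, tau=0.01, iters=2000)
```

Replacing the soft-thresholding step with the proximal operator of a non-convex penalty is exactly where the regularizers discussed below enter, and where the question of global optimality arises.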
Convex regularization has been popular in the last decade \cite{monga2017handbook,sun2019computed,beck2009fast,liu2016projected,soldevila2016computational}, because it can result in guaranteed global optima. The most well-known examples include the $\ell_{1}$-norm and nuclear norm, which are the continuous and convex surrogates of the $\ell_{0}$-pseudonorm and rank, respectively \cite{fu2014low}. Although convex regularizers have demonstrated their success in signal/image processing, biomedical informatics and computer vision applications \cite{beck2009fast,shevade2003simple,wright2008robust,ye2012sparse}, they are suboptimal in many cases, as they promote sparsity and low rank only under very limited conditions (more measurements from the scene are needed \cite{candes2008enhancing,zhang2010analysis}). To address such limitations, non-convex regularizers have been proposed. For instance, several interpolations between the $\ell_{0}$-pseudonorm and the $\ell_{1}$-norm have been explored, including the $\ell_{p}$-quasinorms (where $0<p<1$) \cite{marjanovic2012l_q}, the Capped-$\ell_{1}$ penalty \cite{zhang2007surrogate}, the Log-Sum Penalty \cite{candes2008enhancing}, the Minimax Concave Penalty \cite{zhang2010nearly} and the Geman Penalty \cite{geman1995nonlinear}. However, these non-convex regularizers unfortunately come with the price of losing global optima guarantees.
Image reconstruction methods based on Eq. \eqref{eq:basicProblem} include model-based approaches that directly solve Eq. \eqref{eq:basicProblem} using well-established optimization techniques, e.g., proximal operators and gradient descent rules \cite{beck2017first,jin2021non,sarao2021analytical}, learning-based approaches that train an inference neural network \cite{zhang2021plug,goodfellow2016deep}, as well as hybrid approaches that draw links between iterative signal processing algorithms and the layer-wise neural network architectures \cite{pinilla2022unfolding,monga2021algorithm}. Many of these exploit non-convex assumptions over $f(\boldsymbol{x})$ and/or $g(\boldsymbol{x})$, for which we present a summary of some commonly used or successful ones in Table \ref{tab:literatureComposite}. The table includes algorithms like the iterative reweighted least squares (IRLS) \cite{mohan2012iterative,ochs2015iteratively}, where the regularizer is a composition between the one-dimensional $\ell_{p}$-quasinorm and the trace of a matrix. In \cite{attouch2013convergence,frankel2015splitting}, the objective function $F(\boldsymbol{x})$ is assumed to form a semi-algebraic or tame optimization problem solved by gradient descent algorithms. In \cite{gong2013general}, the regularizer $g(\boldsymbol{x})$ is assumed to be the subtraction of two convex functions, and the general iterative shrinkage and thresholding (GIST) algorithm is proposed to optimize $F(\boldsymbol{x})$. Lastly, \cite{ochs2014ipiano} assumes non-convex $f(\boldsymbol{x})$ but convex $g(\boldsymbol{x})$ and proposes the inertial proximal (iPiano) algorithm for optimization.
\begin{table}[t!]
\centering
\caption{\small Comparison between the assumptions made in this work for $f(\boldsymbol{x})$, and $g(\boldsymbol{x})$ to be optimized in Eq. \eqref{eq:basicProblem} and the most common/successful assumptions in the state-of-the-art.}
\footnotesize
\begin{tabular}{m{3.5cm} m{6cm} m{3cm} }
\hline
\hline
\textbf{Method name}
& \textbf{Assumption}
& \textbf{Global optimizer}
\\
\hline
IRLS \cite{mohan2012iterative,ochs2015iteratively} & special $f$ and $g$ & No \\
\hline
General descent \cite{attouch2013convergence,frankel2015splitting} & Kurdyka-Łojasiewicz & No \\
\hline
GIST \cite{gong2013general} & nonconvex $f$, $g = g_{1} -g_{2}$, $g_{1},g_{2}$ convex & No \\
\hline
iPiano \cite{ochs2014ipiano} & nonconvex $f$, convex $g$ & No \\
\hline
\textbf{Proposed} & convex $f$, invex $g$ & \textbf{Yes} \\
\hline
\hline
\end{tabular}
\label{tab:literatureComposite}
\end{table}
For algorithms with the convexity assumptions removed, e.g., those in Table \ref{tab:literatureComposite}, convergence can unfortunately only be established to a critical point. Ideally, we always prefer algorithms that can find the optimal solution of the target problem. One way to mitigate the loss of guarantees for global optima is to revisit the concept of \textit{invexity}, first introduced by Hanson \cite{hanson1981sufficiency} and by Craven and Glover \cite{craven1985invex} in the 1980s. What makes this class of functions special is that every point where the derivative of the function vanishes (a stationary point) is a global minimizer. Convexity is a special case of invexity. Since the 1990s, many mathematical implications of invex functions have been developed, but practical applications have been lacking \cite{zualinescu2014critical}. Examples of the few successful works implementing invexity theory include \cite{barik2021fair,syed2013invexity,chen2016generalized}. To the best of our knowledge, there is no existing work on the application of invex regularization for imaging.
In this paper, we focus on image reconstruction problems formulated in the form of Eq. \eqref{eq:basicProblem}, where the data fidelity term $f(\boldsymbol{x})$ is based on the $\ell_{2}$-norm and an invex regularizer $g(\boldsymbol{x})$ is used. Most research on invexity theory lacks clarity on how it can benefit practical applications, which does not encourage practitioners to exploit the invex property \cite{zualinescu2014critical}. We aim at filling this gap by providing, for the first time, concrete and useful invex optimization formulations for imaging applications.
Specifically, we make the following contributions:
\begin{itemize}
\item Provide the first list of regularizers with proved invexity that fits optimization problems for imaging applications.
\item Establish convergence guarantees to global optima for three types of advanced image reconstruction techniques enhanced by invex regularizers.
\item Empirically demonstrate the effectiveness of invex regularization for various imaging tasks.
\end{itemize}
\section{Preliminaries}
Throughout this paper, we use boldface lowercase and uppercase letters for vectors and matrices, respectively. The $i$-th entry of a vector $\boldsymbol{w}$ is $\boldsymbol{w}[i]$. For vectors, $\lVert \boldsymbol{w}\rVert_p$ is the $\ell_p$-norm. An open ball is defined as $B(\boldsymbol{x};r) = \left \lbrace \boldsymbol{y}\in \mathbb{R}^{n}: \lVert \boldsymbol{y}-\boldsymbol{x} \rVert_{2}<r \right\rbrace$. The operation $\text{conv}(\mathcal{A})$ represents the convex hull of the set $\mathcal{A}$, and the operation $\text{sign}(w)$ returns the sign of $w$. We use $\sigma_{i}(\boldsymbol{W})$ to denote the $i$-th singular value of $\boldsymbol{W}$, assumed in descending order.
We present several concepts needed for the development of this paper starting with the definition of a locally Lipschitz continuous function.
\begin{definition}[\textbf{Locally Lipschitz Continuity}]
A function $f:\mathbb{R}^{n}\rightarrow \mathbb{R}$ is locally Lipschitz continuous at a point $\boldsymbol{x}\in \mathbb{R}^{n}$ if there exist scalars $K>0$ and $\epsilon>0$ such that
\begin{align}
\lvert f(\boldsymbol{y}) - f(\boldsymbol{z}) \rvert \leq K\lVert \boldsymbol{y}-\boldsymbol{z} \rVert_{2},
\end{align}
for all $\boldsymbol{y},\boldsymbol{z}\in B(\boldsymbol{x},\epsilon)$.
\label{def:lipschitz}
\end{definition}
Since the ordinary directional derivative, the most important tool in optimization, does not necessarily exist for locally Lipschitz continuous functions, we need the concept of the subdifferential \cite{B2014}, which is computed in practice as follows.
\begin{theorem}[\textbf{Subdifferential}]{\cite[Theorem 3.9]{B2014}}
\label{theo:auxDerivative}
Let $f:\mathbb{R}^{n}\rightarrow \mathbb{R}$ be a locally Lipschitz continuous function at $\boldsymbol{x}\in \mathbb{R}^{n}$, and define $\Omega_{f} = \{\boldsymbol{x}\in \mathbb{R}^{n} \mid f \text{ is not differentiable at the point } \boldsymbol{x}\}$. Then the subdifferential of $f$ is given by
\begin{align}
\partial f(\boldsymbol{x}) = \text{ conv }&\left( \left \lbrace \boldsymbol{\zeta}\in \mathbb{R}^{n}| \text{ exists } (\boldsymbol{x}_{i})\in \mathbb{R}^{n}\setminus \Omega_{f} \text{ such that } \boldsymbol{x}_{i}\rightarrow \boldsymbol{x} \text{ and }\nabla f(\boldsymbol{x}_{i})\rightarrow \boldsymbol{\zeta} \right \rbrace\right).
\end{align}
\end{theorem}
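For example (an illustration we add here), for $f(x)=\lvert x \rvert$ we have $\Omega_f=\{0\}$; sequences approaching $0$ from the right and from the left give the gradient limits $+1$ and $-1$ respectively, so Theorem \ref{theo:auxDerivative} yields

```latex
\partial f(0)=\text{conv}\big(\{-1,+1\}\big)=[-1,1].
```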
The notion of subdifferential is given for locally Lipschitz continuous functions because for such functions it is always nonempty \cite[Theorem 3.3]{B2014}. Based on these notions, the concept of an invex function is presented as follows.
\begin{definition}[\textbf{Invexity}]
\label{def:invex}
Let $f:\mathbb{R}^{n}\rightarrow \mathbb{R}$ be locally Lipschitz; then $f$ is invex if there exists a function $\eta:\mathbb{R}^{n}\times \mathbb{R}^{n} \rightarrow \mathbb{R}^{n}$ such that
\begin{align}
f(\boldsymbol{x})-f(\boldsymbol{y}) \geq \boldsymbol{\zeta}^{T}\eta(\boldsymbol{x},\boldsymbol{y}),
\label{eq:basicInvex}
\end{align}
$\forall \boldsymbol{x},\boldsymbol{y} \in \mathbb{R}^{n}$, $\forall \boldsymbol{\zeta} \in \partial f(\boldsymbol{y})$.
\end{definition}
It is well known that a convex function simply satisfies this definition for $\eta(\boldsymbol{x},\boldsymbol{y}) = \boldsymbol{x}-\boldsymbol{y}$.
The following classical theorem \cite[Theorem 4.33]{mishra2008invexity} makes the connection between an invex function and its well-known optimality property, which supports the motivation for designing invex regularizers.
\begin{theorem}[\textbf{Invex Optimality}]{\cite[Theorem 4.33]{mishra2008invexity}}
Let $f:\mathbb{R}^{n}\rightarrow \mathbb{R}$ be locally Lipschitz. Then the following statements are equivalent.
\begin{enumerate}
\item $f$ is invex.
\item Every point $\boldsymbol{y}\in \mathbb{R}^{n}$ that satisfies $\boldsymbol{0}\in \partial f(\boldsymbol{y})$ is a global minimizer of $f$.
\item Definition \ref{def:invex} is satisfied for $\eta:\mathbb{R}^{n}\times \mathbb{R}^{n} \rightarrow \mathbb{R}^{n}$ given by
\begin{align}
\eta(\boldsymbol{x},\boldsymbol{y})= \left \lbrace \begin{array}{ll}
\boldsymbol{0} & f(\boldsymbol{x})\geq f(\boldsymbol{y}),\\
\frac{f(\boldsymbol{x})-f(\boldsymbol{y})}{\lVert \boldsymbol{\zeta}^{*}_{\boldsymbol{y}}\rVert_{2}^{2}}\boldsymbol{\zeta^{*}_{y}} & \text{ otherwise, }
\end{array}\right.
\end{align}
where $\boldsymbol{\zeta^{*}_{y}}$ is an element in $\partial f(\boldsymbol{y})$ of minimum norm.
\end{enumerate}
\label{theo:optimal_v0}
\end{theorem}
\section{Invex Functions}
\label{sec:InvexRegul}
We start this section by presenting five examples of invex functions that are useful for imaging applications. Four of these have been labelled as non-convex in existing works \cite{wu2019improved,wen2018survey}; this is the first time that they are formally proved to be invex. We prove their invexity by showing that they satisfy Statement 2 of Theorem \ref{theo:optimal_v0} (see the proof in Appendix \ref{app:invexProof} of the supplementary material).
\begin{lemma} [\textbf{Invex Functions}]
All of the following functions are invex:
\begin{align}
\label{fun1}
g(\boldsymbol{x}) =& \sum_{i=1}^{n}\left(\lvert\boldsymbol{x}[i] \rvert + \epsilon \right)^{p}, \textmd{for }p\in (0,1) \textmd{ and } \epsilon\geq \left(p(1-p)\right)^{\frac{1}{2-p}}, \\
\label{fun2}
g(\boldsymbol{x}) = &\sum_{i=1}^{n}\log(1 + \lvert \boldsymbol{x}[i] \rvert),\\
\label{fun3}
g(\boldsymbol{x}) = & \sum_{i=1}^{n}\frac{\lvert \boldsymbol{x}[i]\rvert}{2 + 2\lvert \boldsymbol{x}[i] \rvert},\\
\label{fun4}
g(\boldsymbol{x}) = &\sum_{i=1}^{n}\frac{\boldsymbol{x}^{2}[i]}{1 + \boldsymbol{x}^{2}[i]},\\
\label{fun5}
g(\boldsymbol{x})= &\sum_{i=1}^{n} \log(1+\lvert \boldsymbol{x}[i] \rvert) - \frac{\lvert \boldsymbol{x}[i] \rvert}{2 + 2\lvert \boldsymbol{x}[i] \rvert}.
\end{align}
\label{theo:invexProof}
\end{lemma}
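The formal proofs are given in the appendix; as an informal complement (our addition, with $p$ and the evaluation grid chosen arbitrarily), each penalty above acts elementwise through an even scalar function that is strictly increasing in $\lvert t\rvert$, so $\boldsymbol{x}=\boldsymbol{0}$ is the unique global minimizer of each $g$, a property easy to confirm numerically:

```python
import numpy as np

p = 0.5
eps = (p * (1 - p)) ** (1 / (2 - p))   # smallest epsilon allowed for the quasinorm penalty

# The five scalar penalties, in the order they are listed in the lemma.
penalties = {
    "quasinorm": lambda t: (np.abs(t) + eps) ** p,
    "log":       lambda t: np.log1p(np.abs(t)),
    "ratio":     lambda t: np.abs(t) / (2 + 2 * np.abs(t)),
    "geman":     lambda t: t**2 / (1 + t**2),
    "proposed":  lambda t: np.log1p(np.abs(t)) - np.abs(t) / (2 + 2 * np.abs(t)),
}

t = np.linspace(0.0, 10.0, 2001)
for name, g in penalties.items():
    v = g(t)
    assert np.all(np.diff(v) > 0), name   # strictly increasing on [0, infinity) ...
    assert np.allclose(g(-t), v), name    # ... and even, so t = 0 is the global minimum
```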
We provide further insights into these functions in Section \ref{app:discussionRegu} of the supplementary material. Table \ref{tab:list} summarizes their applications. Specifically, Eq. \eqref{fun1} is known as the quasinorm, and has attracted a lot of attention because it has led to theoretical improvements for matrix completion and compressive sensing \cite{marjanovic2012l_q,wu2013improved}. The analysis of quasinorms is valid with and without the constant $\epsilon$; we prefer to add $\epsilon$ in order to formally satisfy the Lipschitz continuity in Definition \ref{def:lipschitz}. Eqs. \eqref{fun2} and \eqref{fun3} enhance the convex $\ell_{1}$-norm regularizer, and they have significantly improved image denoising \cite{wu2019improved}. Eq. \eqref{fun4} has been used as the loss function to improve support vector classification \cite{zhuang2019surrogate}.
\begin{table}[ht]
\centering
\caption{List of invex functions studied in this work.}
\resizebox{0.8\textwidth}{!}{
\begin{tabular}{ m{2.5cm} m{2.5cm} m{5cm} }
\hline
\hline
\textbf{Reference}
& \textbf{Invex function}
& \textbf{Application}
\\
\hline
\cite{marjanovic2012l_q,mohan2012iterative,zhang2018sparse,lin2019efficient} & Eq. \eqref{fun1} & Matrix completion \\
\hline
\cite{candes2008enhancing,gong2013general,yao2016efficient,wen2018survey} & Eq. \eqref{fun2} & Enhancing compressive sensing \\
\hline
\cite{wu2019improved,hu2021low,wang2022accelerated} & Eq. \eqref{fun3} & Image denoising \\
\hline
\cite{zhuang2019surrogate} & Eq. \eqref{fun4} & Support vector classification \\
\hline
Proposed &
Eq. \eqref{fun5} & Compressive sensing \\
\hline
\end{tabular}}
\label{tab:list}
\end{table}
We propose the last function, Eq. \eqref{fun5}, as the subtraction of Eq. \eqref{fun3} from Eq. \eqref{fun2}. This design is motivated by the optimization framework in \cite{gong2013general}, where the regularization term is assumed to be the subtraction of two convex functions (see GIST in Table \ref{tab:literatureComposite}); that framework has been found to be highly successful in imaging applications (see the survey \cite{wen2018survey}). Until now, however, there has been no evidence that such a subtraction produces another convex function (if one exists) potentially useful in imaging applications. We therefore propose this example to show that at least this is possible in the invex case.
Additionally, we present another way of constructing an invex function in the following lemma. It establishes that an invex function $f:\mathbb{R}^{m}\rightarrow \mathbb{R}$ composed with an affine mapping $\boldsymbol{H}\boldsymbol{x}-\boldsymbol{b}$, for $\boldsymbol{H}\in \mathbb{R}^{m\times n}$, $\boldsymbol{x}\in \mathbb{R}^{n}$ and $\boldsymbol{b}\in \mathbb{R}^{m}$, is also invex if $\boldsymbol{H}$ has full row rank. This condition on $\boldsymbol{H}$ is a mild assumption: in Section \ref{sec:invexImag} we show imaging application examples that satisfy this criterion.
\begin{lemma}[\textbf{Affine Invex Construction}]
Let $f:\mathbb{R}^{m}\rightarrow \mathbb{R}$ be a continuously differentiable invex function, $\boldsymbol{H}\in \mathbb{R}^{m\times n}$ have full row rank, and $\boldsymbol{b}\in \mathbb{R}^{m}$ be a vector. Then the function $h(\boldsymbol{x}) = f(\boldsymbol{H}\boldsymbol{x}-\boldsymbol{b})$ is invex.
\label{theo:invexComposited}
\end{lemma}
Similar to Lemma \ref{theo:invexProof}, it is proved by showing that the composed function satisfies Statement 2 of Theorem \ref{theo:optimal_v0} (see Appendix \ref{app:invexComposited} of the supplementary material). Eq. \eqref{fun4} is an example of such an invex construction that satisfies the continuous-differentiability assumption of Lemma \ref{theo:invexComposited}, as is easily verified from its proof in Appendix \ref{app:invexComposited}. A practical implication of Lemma \ref{theo:invexComposited} for imaging applications arises when we want to solve a linear system of equations (e.g., \cite{zhuang2019surrogate}). We demonstrate an application of this kind of invex construction in Section \ref{sub:PnP} to improve a widely used image restoration framework.
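As an illustrative aside (a numerical sketch, not part of the formal development), the affine construction of the lemma can be checked with a simple non-convex invex test function, $f(\boldsymbol{y})=\sum_i \log(1+y_i^2)$, whose only stationary point is its global minimizer. Composing it with a full row-rank affine map preserves this property, so plain gradient descent on $h(\boldsymbol{x})=f(\boldsymbol{H}\boldsymbol{x}-\boldsymbol{b})$ must reach the global minimum value $f(\boldsymbol{0})=0$; the sizes and step size below are illustrative assumptions:

```python
import numpy as np

# A non-convex invex test function: f(y) = sum_i log(1 + y_i^2). Its only
# stationary point is y = 0, which is the global minimum, so it is invex by
# the stationary-point characterization.
f = lambda y: np.sum(np.log1p(y ** 2))
grad_f = lambda y: 2.0 * y / (1.0 + y ** 2)

rng = np.random.default_rng(0)
H = rng.standard_normal((3, 5))   # 3x5 Gaussian: full row rank almost surely
b = rng.standard_normal(3)

# h(x) = f(Hx - b): by the lemma, every stationary point of h is a global
# minimizer, so gradient descent must drive h down to f(0) = 0.
grad_h = lambda x: H.T @ grad_f(H @ x - b)
x = rng.standard_normal(5)
for _ in range(20000):
    x -= 0.02 * grad_h(x)   # small fixed step, an illustrative choice

print(f(H @ x - b))   # approaches 0, the global minimum value of h
```

Note how nothing beyond first-order information is used: the full row-rank condition alone rules out spurious stationary points of the composition.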
\section{Invex Imaging Examples, Algorithms and Convergence Analysis}
\label{sec:invexImag}
In this section, we demonstrate the use of invex regularizers to improve some advanced imaging methodologies. To benefit both practitioners and theory development, we present practical invex imaging algorithms and prove their convergence to global optima, a guarantee that was previously available only for convex functions.
\subsection{Image Denoising}
\label{sub:denoising}
Image denoising plays a critical role in modern signal processing systems, since images are inevitably contaminated by noise during acquisition, compression, and transmission, leading to distortion and loss of image information \cite{fan2019brief}. Plenty of denoising methods exist, originating from a wide range of disciplines such as probability theory, statistics, partial differential equations, linear and nonlinear filtering, and spectral and multiresolution analysis, as well as classical machine learning and deep learning \cite{mahmoudi2005fast,krull2019noise2void,fan2019brief}. All these methods rely on some explicit or implicit assumptions about the true (noise-free) signal in order to separate it properly from the random noise.
One of the most successful assumptions is that a signal can be well approximated by a linear combination of a few basis elements in a transform domain \cite{dabov2007image,elad2006image}. Under this assumption, a denoising method can be implemented as a two-step procedure: i) obtain the high-magnitude transform coefficients, which convey mostly the true-signal energy, and ii) discard the transform coefficients that are mainly due to noise. Typical choices for the first step are the wavelet and cosine transforms and principal component analysis (PCA) \cite{dabov2007image,elad2006image,cai2014data}. The second step can be seen as a filtering procedure, formally modelled as the proximal optimization problem~\cite{parikh2014proximal}
\begin{align}
\text{Prox}_{g}(\boldsymbol{u}) = \argmin_{\boldsymbol{x}\in \mathbb{R}^{n}} \left( g(\boldsymbol{x}) + \frac{1}{2}\lVert \boldsymbol{x} -\boldsymbol{u}\rVert_{2}^{2}\right),
\label{eq:prox1}
\end{align}
where $g(\boldsymbol{x})$ acts as a regularization term and $\boldsymbol{u}$ represents the noisy transform coefficients. In fact, the usefulness of Eq. \eqref{eq:prox1} is not limited to denoising; it extends to other imaging problems such as computed tomography \cite{jorgensen2021core}, optical imaging \cite{sun2019regularized}, and biomedical and spectral imaging \cite{sun2019online}. In general, however, global-optimality guarantees for Eq. \eqref{eq:prox1} are restricted to convex $g(\boldsymbol{x})$, e.g., the $\ell_{1}$-norm.
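For concreteness, when $g$ is the convex $\ell_{1}$-norm baseline, Eq. \eqref{eq:prox1} has the classical coordinate-wise soft-thresholding solution. A minimal sketch (the grid-search check is only an illustration that the closed form indeed minimizes the objective):

```python
import numpy as np

def prox_l1(u, lam):
    # Coordinate-wise soft-thresholding: the closed-form minimizer of
    # lam * ||x||_1 + (1/2) * ||x - u||_2^2.
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

u = np.array([1.5, -0.3, 0.0, -2.0])   # noisy transform coefficients
print(prox_l1(u, 0.5))                 # each entry shrunk toward zero by 0.5

# Sanity check on one coordinate: compare against a dense grid search of
# 0.5*|x| + 0.5*(x - 1.5)^2, whose minimizer should match prox_l1(1.5, 0.5).
grid = np.linspace(-3, 3, 60001)
obj = 0.5 * np.abs(grid) + 0.5 * (grid - 1.5) ** 2
print(grid[np.argmin(obj)])            # close to 1.0
```

The invex regularizers of Table \ref{tab:list} admit analogous coordinate-wise solutions, derived in the supplementary material.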
We improve this important proximal operator by incorporating invex regularizers. Specifically, using the invex functions $g(\boldsymbol{x})$ listed in Table \ref{tab:list}, global minimization is achieved in Eq. \eqref{eq:prox1}. The result is presented in the following theorem:
\begin{theorem}[\textbf{Invex Proximal}]
Consider the optimization problem in Eq. \eqref{eq:prox1} for all functions in Table \ref{tab:list}. Then the following holds:
\begin{enumerate}
\item The function $h(\boldsymbol{x}) = g(\boldsymbol{x}) + \frac{1}{2}\lVert \boldsymbol{x} -\boldsymbol{u}\rVert_{2}^{2}$ is convex (therefore invex).
\item The resolvent operator of the proximal, $(\mathbf{I} + \partial g)^{-1}$, can be treated as a singleton because it always maps to a global optimizer.
\end{enumerate}
\label{theo:proximalProof}
\end{theorem}
It is classically known that the sum of two invex functions is not necessarily invex \cite{mishra2008invexity}. Therefore, presenting examples like the above, where the sum of $f(\boldsymbol{x})$ and $g(\boldsymbol{x})$ is invex, is important to both the invexity and imaging communities. We present the proof of Theorem \ref{theo:proximalProof}, together with the solution to Eq. \eqref{eq:prox1} for each function in Table \ref{tab:list}, in Appendix \ref{app:proximalProof} of the supplementary material.
\subsection{Image Compressive Sensing}
\label{sub:CSAlg}
Image \textit{compressive sensing} has been extensively exploited in areas such as microscopy, holography, optical imaging and spectroscopy \cite{arce2013compressive,jerez2020fast,guerrero2020phase}. It is an inverse problem that aims at recovering an image $\boldsymbol{f}\in \mathbb{R}^{n}$ from its measurement data vector $\boldsymbol{b} = \boldsymbol{\Phi}\boldsymbol{f}$, where $\boldsymbol{\Phi} \in \mathbb{R}^{m\times n}$ is the image acquisition matrix ($m<n$). Since $m<n$, compressive sensing assumes that $\boldsymbol{f}$ has a $k$-sparse representation $\boldsymbol{x}\in \mathbb{R}^{n}$ ($k\ll n$ non-zero elements) in a basis $\boldsymbol{\Psi}\in \mathbb{R}^{n\times n}$, that is, $\boldsymbol{f}=\boldsymbol{\Psi}\boldsymbol{x}$, in order to ensure uniqueness under some conditions. Examples of this sparse basis $\boldsymbol{\Psi}$ in imaging are the wavelet (including Haar wavelet), cosine, and Fourier representations \cite{foucart13}. Hence, one can work with the abstract model $\boldsymbol{b}= \boldsymbol{\Phi}\boldsymbol{\Psi}\boldsymbol{x}=\boldsymbol{H}\boldsymbol{x}$, where $\boldsymbol{H}$ encapsulates the product of $\boldsymbol{\Phi}$ and $\boldsymbol{\Psi}$, with $\ell_{2}$-normalized columns \cite{arce2013compressive,candes2008introduction}. Under this setup, compressive sensing enables the recovery of $\boldsymbol{x}$ from far fewer samples than predicted by the Nyquist criterion \cite{candes2008introduction}. The task formulation is
\begin{align}
\minimize_{\boldsymbol{x}\in \mathbb{R}^{n}} \hspace{0.5em} & f(\boldsymbol{x}) + \lambda g(\boldsymbol{x}) = \frac{1}{2}\lVert \boldsymbol{H}\boldsymbol{x} - \boldsymbol{b} \rVert_{2}^{2} + \lambda g(\boldsymbol{x}),
\label{eq:problem4}
\end{align}
where $\lambda\in (0,1]$ is a typical choice in practice. When the regularizer $g(\boldsymbol{x})$ takes the convex form of $\ell_{1}$-norm, and when the sampling matrix $\boldsymbol{H}$ satisfies the \textit{restricted isometry property} (RIP) for any $k$-sparse vector $\boldsymbol{x}\in \mathbb{R}^{n}$, i.e., $(1-\delta_{2k})\lVert \boldsymbol{x} \rVert_{2}^{2} \leq \lVert \boldsymbol{H}\boldsymbol{x} \rVert_{2}^{2} \leq (1+\delta_{2k})\lVert \boldsymbol{x} \rVert_{2}^{2}$ for $\delta_{2k}<\frac{1}{3}$ \cite[Theorem 6.9]{foucart13}, it has been proved that $\boldsymbol{x}$ can be exactly recovered by solving Eq. \eqref{eq:problem4} \cite{candes2006robust}.
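The concentration behaviour behind the RIP is easy to observe numerically. The sketch below, with illustrative sizes $n=128$, $m=64$, $k=5$ (an assumption for this example, not an experimental setting), checks that a Gaussian matrix with $\mathcal{N}(0,1/m)$ entries approximately preserves the squared $\ell_2$-norm of random $k$-sparse vectors:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 128, 64, 5   # illustrative ambient dim, measurements, sparsity

# Gaussian matrix with N(0, 1/m) entries: for any fixed k-sparse x, the
# ratio ||Hx||_2^2 / ||x||_2^2 is distributed as chi^2_m / m, hence
# concentrated around 1, which is the behaviour the RIP formalizes.
H = rng.standard_normal((m, n)) / np.sqrt(m)

ratios = []
for _ in range(2000):
    x = np.zeros(n)
    x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
    ratios.append(np.sum((H @ x) ** 2) / np.sum(x ** 2))

print(min(ratios), max(ratios))   # both within a band around 1
```

Certifying the RIP constant $\delta_{2k}$ itself is combinatorial; this sketch only illustrates the norm-preservation phenomenon on random sparse vectors.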
We are interested in invex regularizers. It has been proved that, when $g(\boldsymbol{x})$ takes the particular invex form of Eq. \eqref{fun1}, $\boldsymbol{x}$ can be exactly recovered by solving Eq. \eqref{eq:problem4} \cite{wu2013improved}. Below, we generalize this result to all the invex functions listed in Table \ref{tab:list}; the generalized result is presented in Theorem \ref{theo:ourCS}.
\begin{theorem}[\textbf{Invex Image Compressive Sensing}]
Assume $\boldsymbol{H}\boldsymbol{x}=\boldsymbol{b}$, where $\boldsymbol{x}\in \mathbb{R}^{n}$ is $k$-sparse, the matrix $\boldsymbol{H}\in \mathbb{R}^{m\times n}$ ($m<n$) has $\ell_{2}$-normalized columns and satisfies the RIP condition for any $k$-sparse vector, and $\boldsymbol{b}\in \mathbb{R}^{m}$ is a noiseless measurement vector. If $g(\boldsymbol{x})$ in Eq. \eqref{eq:problem4} takes the form of the functions in Table \ref{tab:list}, then the following holds:
\begin{enumerate}
\item The objective function $\frac{1}{2}\lVert \boldsymbol{H}\boldsymbol{x} - \boldsymbol{b} \rVert_{2}^{2} + \lambda g(\boldsymbol{x})$ is invex.
\item $\boldsymbol{x}$ can be exactly recovered by solving Eq. \eqref{eq:problem4}, i.e., only global optimizers exist. When $g(\boldsymbol{x})$ takes the form of Eq. \eqref{fun4}, extra mild conditions on $\boldsymbol{x}$ are needed.
\end{enumerate}
\label{theo:ourCS}
\end{theorem}
We clarify that if $\boldsymbol{H}$ satisfies the mentioned RIP, then each sub-matrix of $k$ columns of $\boldsymbol{H}$, selected according to the indices of the nonzero elements of the $k$-sparse signal, is a full-rank matrix. This result is important to the invex community because it supports the validity of Lemma \ref{theo:invexComposited} for building invex functions with affine mappings. Additionally, we present another proven form of a sum of functions that results in an invex function, namely the sum of $g(\boldsymbol{x})$ and the $\ell_{2}$-norm composed with the affine mapping $\boldsymbol{H}\boldsymbol{x}-\boldsymbol{b}$. The complete proof is provided in Appendix \ref{app:ourCS} of the supplementary material.
Next, we present different algorithms to solve Eq. \eqref{eq:problem4} using the invex $g(\boldsymbol{x})$ from Table \ref{tab:list}. We select a few of the most important and successful image reconstruction techniques as starting points and develop their invex extensions. Taking advantage of the invex property, we prove convergence to global minimizers for each extended algorithm, which has been unexplored to date.
\begin{algorithm}[ht]
\caption{Accelerated Proximal Gradient}
\label{alg:invexProximal}
\begin{algorithmic}[1]
\State{\textbf{input}: Tolerance constant $\epsilon\in (0,1)$, initial point $\boldsymbol{x}^{(0)}$, and number of iterations $T$.}
\State{\textbf{initialize}: $\boldsymbol{x}^{(1)}=\boldsymbol{x}^{(0)}=\boldsymbol{z}^{(0)}, r_{1}=1,r_{0}=0, \alpha_{1},\alpha_{2}< \frac{1}{L}$, and $\lambda \in (0,1]$}
\For{$t=1$ to $T$}
\State{$\boldsymbol{y}^{(t)}= \boldsymbol{x}^{(t)} + \frac{r_{t-1}}{r_{t}}(\boldsymbol{z}^{(t)}-\boldsymbol{x}^{(t)}) + \frac{r_{t-1}-1}{r_{t}}(\boldsymbol{x}^{(t)}- \boldsymbol{x}^{(t-1)})$}
\State{$\boldsymbol{z}^{(t+1)}=\text{prox}_{\alpha_{2} \lambda g}(\boldsymbol{y}^{(t)} - \alpha_{2}\nabla f(\boldsymbol{y}^{(t)}))$}
\State{$\boldsymbol{v}^{(t+1)}=\text{prox}_{\alpha_{1} \lambda g}(\boldsymbol{x}^{(t)} - \alpha_{1}\nabla f(\boldsymbol{x}^{(t)}))$}
\State{$r_{t+1}=\frac{\sqrt{4(r_{t})^{2}+1}+1}{2}$}
\State{$\boldsymbol{x}^{(t+1)}=\left \lbrace\begin{array}{ll}
\boldsymbol{z}^{(t+1)}, & \text{ if }f(\boldsymbol{z}^{(t+1)} ) + \lambda g(\boldsymbol{z}^{(t+1)} )\leq f(\boldsymbol{v}^{(t+1)} ) + \lambda g(\boldsymbol{v}^{(t+1)} )\\
\boldsymbol{v}^{(t+1)}, & \text{ otherwise }
\end{array}\right.$}
\EndFor
\State{\textbf{return:} $\boldsymbol{x}^{(T)}$}
\end{algorithmic}
\end{algorithm}
\subsubsection{Accelerated Proximal Gradient Algorithm}
The accelerated proximal gradient (APG) method \cite{li2015accelerated} has been shown to be effective in solving Eq. \eqref{eq:problem4}, achieving better imaging quality in fewer iterations than its predecessors \cite{beck2009fast,frankel2015splitting,gong2013general,ochs2014ipiano,boct2016inertial}, and it has frequently been used in recent imaging works \cite{wang2022accelerated,mai2022energy,zhang2022continual,ge2022fast}. Its convergence to global optima is guaranteed only for convex losses \cite{li2015accelerated}; for non-convex cases, only convergence to a critical point has been established \cite{li2015accelerated}. Its pseudo-code for solving Eq. \eqref{eq:problem4} is provided in Algorithm \ref{alg:invexProximal}.
Taking advantage of the fact that the loss function $f(\boldsymbol{x}) + \lambda g(\boldsymbol{x})$ in Eq. \eqref{eq:problem4} is invex, and of the uniqueness result in Theorem \ref{theo:proximalProof}, we formally extend APG in the following lemma, stating that the sequence $\left\{\boldsymbol{x}^{(t+1)}\right\}$ generated by Algorithm \ref{alg:invexProximal} converges to a global minimizer of Eq. \eqref{eq:problem4}.
\begin{lemma}[\textbf{Invex APG}]
\label{lem:convergeAPG}
Under the setup of Theorem \ref{theo:ourCS} and using $L=\sigma_{1}\left(\boldsymbol{H}^{T}\boldsymbol{H}\right)$ (maximum singular value), the sequence $\left\{\boldsymbol{x}^{(t)}\right\}_{t=0}^{T-1}$ generated by Algorithm \ref{alg:invexProximal} converges to a global minimizer.
\end{lemma}
To prove Lemma \ref{lem:convergeAPG}, we apply Statement 2 of Theorem \ref{theo:optimal_v0} to Eq. \eqref{eq:problem4}, together with the uniqueness of the proximal operators for the functions in Table \ref{tab:list}. The proof is provided in Appendix \ref{app:lemAPG} of the supplementary material.
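A minimal runnable sketch of Algorithm \ref{alg:invexProximal} is given below, using the convex $\ell_{1}$ prox as a stand-in for the invex proximal operators (whose closed forms are in the supplementary material) on a small synthetic problem; all sizes, seeds, and the choice $\lambda=0.01$ are illustrative assumptions:

```python
import numpy as np

def prox_l1(u, t):
    # Soft-thresholding: the proximal operator of t * ||.||_1.
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

def apg(H, b, lam, T=500):
    # Monotone accelerated proximal gradient, mirroring the pseudo-code of
    # the algorithm: extrapolation y, two prox candidates z and v, and the
    # monotone choice between them.
    L = np.linalg.norm(H, 2) ** 2
    a1 = a2 = 0.9 / L                          # alpha_1, alpha_2 < 1/L
    F = lambda x: 0.5 * np.sum((H @ x - b) ** 2) + lam * np.sum(np.abs(x))
    grad = lambda x: H.T @ (H @ x - b)
    x_prev = x = z = np.zeros(H.shape[1])
    r_prev, r = 0.0, 1.0
    for _ in range(T):
        y = x + (r_prev / r) * (z - x) + ((r_prev - 1.0) / r) * (x - x_prev)
        z = prox_l1(y - a2 * grad(y), a2 * lam)
        v = prox_l1(x - a1 * grad(x), a1 * lam)
        r_prev, r = r, (np.sqrt(4.0 * r ** 2 + 1.0) + 1.0) / 2.0
        x_prev, x = x, (z if F(z) <= F(v) else v)
    return x

rng = np.random.default_rng(2)
H = rng.standard_normal((25, 50))
H /= np.linalg.norm(H, axis=0)                 # l2-normalized columns
x_true = np.zeros(50)
x_true[[3, 17, 29]] = [1.2, -0.8, 1.5]
b = H @ x_true                                 # noiseless measurements
x_hat = apg(H, b, lam=0.01)
print(np.max(np.abs(x_hat - x_true)))          # small reconstruction error
```

The monotone test in the last line of the loop is what distinguishes this APG variant from plain FISTA-style acceleration: the objective never increases along the accepted iterates.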
\subsubsection{Plug-and-play with Deep Denoiser Prior}
\label{sub:PnP}
Plug-and-play (PnP) is a powerful framework for regularizing imaging inverse problems \cite{sun2019online} and has gained popularity across a range of such applications \cite{zhang2021plug,sun2019online,wei2022tfpnp,kamilov2022plug,hu2022monotonically}. It replaces the proximal operator in an iterative algorithm with an image denoiser, which need not correspond to any regularization objective. This implies that the effectiveness of PnP goes beyond standard proximal algorithms such as primal-dual splitting \cite{ono2017primal,kamilov2017plug,zha2022simultaneous}. Convergence to a fixed point is guaranteed only when convex objective functions are employed \cite{kamilov2017plug}.
To apply the PnP framework, we modify Algorithm \ref{alg:invexProximal} by replacing the proximal operator (Line 6 in its pseudo-code) with a neural network based denoiser Noise2Void \cite{krull2019noise2void}, resulting in
\begin{align}
\boldsymbol{v}^{(t+1)}=\text{Noise2Void}\left(\boldsymbol{x}^{(t)} - \alpha_{1}\nabla f\left(\boldsymbol{x}^{(t)}\right)\right).
\label{eq:PnPVariant}
\end{align}
The complete pseudo-code is presented in Algorithm \ref{alg:invexPnP} of Appendix \ref{app:PnP} in the supplemental material. We remark that in Algorithm \ref{alg:invexPnP}, Line 5 of Algorithm \ref{alg:invexProximal} is retained to allow comparison between regularizers (invex and convex). More specifically, Line~5 computes the proximal step, while Line~6 relies on a neural network for the same purpose, Eq. \eqref{eq:PnPVariant}. This offers an avenue for simultaneously exploiting model-based and data-driven approaches. The output of Algorithm \ref{alg:invexPnP} is a close approximation to the solution of Eq. \eqref{eq:problem4} \cite{kamilov2017plug}. The benefit of using this denoiser is that it does not require clean target images for training. We present the following convergence result for this modified algorithm under the assumption that $f(\boldsymbol{x})$ in Eq. \eqref{eq:problem4} is invex; this generalizes \cite{kamilov2017plug}, which is restricted to convex functions.
\begin{lemma}[\textbf{Invex Plug-and-play}]
Assume $f(\boldsymbol{x})$ in Eq. \eqref{eq:problem4} is invex with Lipschitz continuous gradient, and let $d:\mathbb{R}^{n}\rightarrow \mathbb{R}^{n}$ be a denoiser. Under the setup of Theorem \ref{theo:ourCS} and some mild conditions on $d$, the sequence $\left\{\boldsymbol{x}^{(t)}\right\}_{t=0}^{T}$ generated by Algorithm \ref{alg:invexProximal} satisfies
\begin{align}
\frac{1}{T}\sum_{t=1}^{T}\left \| \boldsymbol{x}^{(t)} - d\left(\boldsymbol{x}^{(t)}-\alpha_{1}\nabla f\left(\boldsymbol{x}^{(t)}\right)\right) \right\|_{2}^{2} \leq \frac{2}{T}\left(\frac{1+\kappa}{1-\kappa}\right) \left\| \boldsymbol{x}^{(0)} - \boldsymbol{x}^{*} \right\|_{2}^{2},
\label{eq:fix}
\end{align}
for any $\boldsymbol{x}^{*}=d(\boldsymbol{x}^{*}- \alpha_{1}\nabla f(\boldsymbol{x}^{*}))$ (fixed point) and for some $\kappa\in (0,1)$.
\label{theo:PnP}
\end{lemma}
Eq. \eqref{eq:fix} guarantees that $\left\{\boldsymbol{x}^{(t)}\right\}_{t=0}^{T}$ is arbitrarily close to the set of fixed points of $d(\cdot)$, which is considered a close approximation to the solution of Eq. \eqref{eq:problem4} \cite{kamilov2017plug}. The proof is provided in Appendix \ref{app:PnP} of the supplementary material. Eq. \eqref{fun4} is an example satisfying the assumptions required in Lemma \ref{theo:PnP}.
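The PnP iteration of Eq. \eqref{eq:PnPVariant} can be sketched as follows, with a toy soft-thresholding denoiser standing in for Noise2Void (an assumption purely for illustration; any denoiser $d(\cdot)$ can be plugged in), while tracking the fixed-point residual that Eq. \eqref{eq:fix} bounds:

```python
import numpy as np

rng = np.random.default_rng(3)
H = rng.standard_normal((30, 60))
H /= np.linalg.norm(H, axis=0)
x_true = np.zeros(60)
x_true[[5, 22, 41]] = [1.0, -1.3, 0.9]
b = H @ x_true

grad_f = lambda x: H.T @ (H @ x - b)           # gradient of the data-fit term
alpha = 0.9 / np.linalg.norm(H, 2) ** 2        # step size alpha_1 < 1/L

# Toy sparsity-promoting denoiser in place of Noise2Void; the PnP iteration
# itself is agnostic to which denoiser d(.) is used.
d = lambda u: np.sign(u) * np.maximum(np.abs(u) - 1e-3, 0.0)

x = np.zeros(60)
residuals = []
for _ in range(2000):
    x_next = d(x - alpha * grad_f(x))          # PnP step: denoiser as "prox"
    residuals.append(np.sum((x_next - x) ** 2))
    x = x_next

# The averaged fixed-point residual (left-hand side of the bound) shrinks
# with T, i.e., the iterates approach a fixed point of d.
print(np.mean(residuals), np.max(np.abs(x - x_true)))
```

With this particular toy denoiser the iteration happens to coincide with a proximal gradient scheme, which is exactly the sense in which PnP generalizes proximal algorithms.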
\subsubsection{Unrolling}
\label{unrolling}
The \textit{unrolling} or \textit{unfolding} framework is another imaging strategy for solving Eq. \eqref{eq:problem4}. It offers a systematic connection between iterative algorithms used in signal processing and neural networks \cite{pinilla2022unfolding,monga2021algorithm,hu2020iterative}. Unrolled neural networks have become popular due to their potential for developing efficient and high-performing network architectures from reasonably sized training sets \cite{chowdhury2021unfolding,naimipour2020upr}. A folded version of the proximal gradient algorithm is presented in Algorithm \ref{alg:unrolling}. In particular, existing works \cite{chen2018theoretical,liu2019alista} have shown that the efficiency of Algorithm \ref{alg:unrolling} can be improved by simulating a recurrent neural network whose layers mimic the iteration in Line 4 of Algorithm \ref{alg:unrolling}. Specifically, each $\boldsymbol{x}^{(t+1)}$ involves one linear operation, which models a layer of the network, followed by a proximal operation that models the activation function. One thus forms a deep network by mapping each iteration to a network layer and stacking the layers together to learn $\boldsymbol{H}$, $\alpha_{t}$, and $\boldsymbol{x}^{(t)}$ for all $t$, which is equivalent to executing the iterations of Algorithm \ref{alg:unrolling} multiple times. These studies were conducted only for $g(\boldsymbol{x})$ in the form of the $\ell_{1}$-norm.
\begin{algorithm}[ht]
\caption{Folded Proximal Gradient Algorithm}
\label{alg:unrolling}
\begin{algorithmic}[1]
\State{\textbf{input}: initial point $\boldsymbol{x}^{(0)}$, number of iterations $T$}
\State{\textbf{initialize}: $\alpha_{t}< \frac{2}{L+2}$, and $\lambda \in (0,1]$}
\For{$t=0$ to $T$}
\State{$\boldsymbol{x}^{(t+1)}=\text{prox}_{\alpha_{t} \lambda g}(\boldsymbol{x}^{(t)} - \alpha_{t}\boldsymbol{H}^{T}(\boldsymbol{H}\boldsymbol{x}^{(t)}-\boldsymbol{b}))$}
\EndFor
\State{\textbf{return:} $\boldsymbol{x}^{(T)}$}
\end{algorithmic}
\end{algorithm}
Convergence guarantees to global optima for Algorithm \ref{alg:unrolling} have been established in \cite{beck2009fast}, but they are restricted to convex objective functions. Therefore, given the success and importance of unrolling, we extend the global-optimality guarantees of Algorithm \ref{alg:unrolling} to invex objectives, and present the result in the following lemma:
\begin{lemma}[\textbf{Invex Unrolling}]
\label{lem:convergeUnrolling}
Under the setup of Theorem \ref{theo:ourCS} and using $L=\sigma_{1}\left(\boldsymbol{H}^{T}\boldsymbol{H}\right)$ (maximum singular value) and $\alpha_{t}<\frac{2}{L+2}$, the sequence $\left\{\boldsymbol{x}^{(t)}\right\}_{t=0}^{T-1}$ generated by Algorithm \ref{alg:unrolling} converges to a global minimizer.
\end{lemma}
The key to proving Lemma \ref{lem:convergeUnrolling} is the uniqueness result for the proximal operators of the functions in Table \ref{tab:list}, as stated in Theorem \ref{theo:proximalProof}. The proof is presented in Appendix \ref{app:unrolling} of the supplementary material. These results confirm that the invex unrolled network of Algorithm \ref{alg:unrolling}, which uses the proximal operators of invex mappings as activation functions, can reach the optimal solution during training.
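The layer/iteration correspondence of Algorithm \ref{alg:unrolling} can be sketched as follows, with the $\ell_{1}$ prox playing the activation role as a stand-in for the invex proximal activations; the sizes are illustrative, and the per-layer steps respect $\alpha_{t}<\frac{2}{L+2}$ as in the lemma:

```python
import numpy as np

def layer(x, H, b, alpha, lam):
    # One unrolled layer: the linear step of Line 4 of the folded algorithm,
    # followed by the l1 prox acting as the activation function (a stand-in
    # here for the invex proximal activations).
    z = x - alpha * (H.T @ (H @ x - b))
    return np.sign(z) * np.maximum(np.abs(z) - alpha * lam, 0.0)

rng = np.random.default_rng(4)
H = rng.standard_normal((24, 48))
H /= np.linalg.norm(H, axis=0)
x_true = np.zeros(48)
x_true[[2, 11, 33]] = [1.4, -1.0, 0.7]
b = H @ x_true

L = np.linalg.norm(H, 2) ** 2
alphas = [1.8 / (L + 2.0)] * 1000   # per-layer step sizes, alpha_t < 2/(L+2)
x = np.zeros(48)
for a in alphas:                    # stacking layers = running the iteration
    x = layer(x, H, b, a, lam=0.01)

print(np.max(np.abs(x - x_true)))   # small: the unrolled network recovers x
```

In a trained unrolled network, $\boldsymbol{H}$ and the $\alpha_t$ in this loop would be learnable parameters rather than the fixed values used here.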
\section{Experiments and Results}
\label{others}
A number of datasets have been merged into one dataset for our training and evaluation purposes: DIV2K super-resolution~\cite{agustsson2017ntire}, McMaster~\cite{zhang2011color}, Kodak~\cite{kodak}, Berkeley Segmentation (BSDS 500) \cite{MartinFTM01}, Tampere Images (TID2013) \cite{ponomarenko2013color}, and Color BSD68 \cite{martin2001database}. We conduct various experiments to study the performance of the invex regularizers listed in Table \ref{tab:list} under non-ideal conditions. We compare them against state-of-the-art methods originally developed for convex regularizers ($\ell_{1}$-norm) that ensure global optima. When neural network training is involved, we take a total of 900 images, randomly divided into a training set of 800 images, a validation set of 55 images, and a test set of 45 images. For all experiments, the images are scaled to the range $[0,1]$. For the invex regularizer in Eq. \eqref{fun1}, we vary the value of $p$.
\subsection{Image Compressive Sensing Experiments}
We assess signal reconstruction in these experiments by averaging the peak signal-to-noise ratio (PSNR), in dB, over the testing image set. We consider additive white Gaussian noise in the measurement data vector with three different levels of SNR (signal-to-noise ratio): 20, 30, and $\infty$ (the noiseless case). For Algorithm \ref{alg:invexProximal} and its plug-and-play variant, the parameters $\lambda,\alpha_{1}$, and $\alpha_{2}$ were chosen, by cross validation, to be the best for each analyzed function, and the initial point $\boldsymbol{x}^{(0)}$ was the blurred image $\boldsymbol{b}$. The results are summarized in Table \ref{tab:globalResults}, where the best and least efficient among the invex functions are highlighted in boldface and underscore, respectively. Additional results for each experiment, using the structural similarity index measure to assess imaging quality, are reported in Appendix \ref{app:newResults} of the supplemental material.
\textbf{Experiment 1} studies the effect of the different invex regularizers, the Smoothly Clipped Absolute Deviation (SCAD) \cite{fan2001variable}, and the Minimax Concave Penalty (MCP) \cite{zhang2010nearly} under Algorithm \ref{alg:invexProximal}. A deconvolution problem is used to instantiate Eq. \eqref{eq:problem4}; deconvolution is an important problem in signal processing owing to imperfections in physical setups such as mismatch, calibration errors, and loss of contrast \cite{yeh2015experimental}. The state-of-the-art methods used for comparison, which employ convex regularization, are Total Variation Minimization by Augmented Lagrangian (TVAL3) \cite{li2013efficient} and the fast iterative shrinkage-thresholding algorithm (FISTA) \cite{beck2009fast}, which ensures global optima. Further, to compare with convolutional neural network methodologies, the non-iterative reconstruction method ReconNet \cite{kulkarni2016reconnet} is used. To model this problem, all images of the testing set are fixed to $256\times 256$ pixels. The images went through a Gaussian blur of size $9\times 9$ with standard deviation $4$, followed by additive zero-mean white Gaussian noise. The sensing matrix $\boldsymbol{H}$ is built as $\boldsymbol{H}=\boldsymbol{\Phi}\boldsymbol{\Psi}$ (for all methods except ReconNet), where $\boldsymbol{\Phi}$ represents the blur operator over the images and $\boldsymbol{\Psi}$ is the inverse of a three-stage Haar wavelet transform. This experiment is extremely ill-conditioned: the condition number of $\boldsymbol{H}^{T}\boldsymbol{H}$ is significantly higher than 1, which means that in practice the RIP condition is not guaranteed. To achieve a fair comparison, the number of iterations was fixed at $T=800$ for all functions. The deconvolution problem follows a compressive sensing setup because the Gaussian filter removes high-frequency information from the input image.
In the case of ReconNet, we follow the existing settings in \cite{kulkarni2016reconnet}. For its training, we extract patches of size $33\times 33$ from the noisy blurred training image set, and we train it using the Adam optimizer with a learning rate of $5\times 10^{-4}$ for 512 epochs with a batch size of $128$.
\textbf{Experiment 2} studies the invex regularizers under the plug-and-play modification of Algorithm \ref{alg:invexProximal} described in Section \ref{sub:PnP}\footnote{We used the Noise2Void implementation at \url{https://github.com/juglab/n2v}} \cite{krull2019noise2void}. The same deconvolution problem as in Experiment 1 is used. The interesting aspect of this scenario is that Algorithm \ref{alg:invexProximal} has a proximal step in Line 5 that allows comparison between regularizers (invex and convex) while using neural networks in Line 6 (see Algorithm 3 in Appendix \ref{app:PnP} of the Supplemental material). Noise2Void is trained by randomly extracting patches of size $64\times 64$ pixels from the training images, to which zero-mean white Gaussian noise was added for $\mathrm{SNR} = 20, 30$\,dB. Data augmentation is applied to the training dataset by rotating each image three times by $90°$ and also adding all mirrored versions. The learning rate is fixed at $0.0004$.
\begin{table}[ht]
\renewcommand{\arraystretch}{1.2}
\centering
\caption{Performance comparison, in terms of PSNR (dB), where the best and least efficient among the invex functions are highlighted in boldface and underscore, respectively.
}
\begin{subtable}{1\textwidth}
\resizebox{1\textwidth}{!}{ \renewcommand{\arraystretch}{1.3}
\begin{tabular}{P{0.5cm} P{1cm} P{1cm} P{1cm} P{1cm} P{1cm} P{1.2cm} |P{2.0cm}| P{2.5cm} |P{1.8cm} P{1.8cm} P{1.8cm}}
\hline
\multicolumn{7}{P{7.5cm}|}{(Experiment 1) Algorithm \ref{alg:invexProximal}, $p=0.5$ for Eq. \eqref{fun1}.} & FISTA \cite{beck2009fast} & ReconNet \cite{kulkarni2016reconnet} & TVAL3 \cite{li2013efficient} & SCAD \cite{fan2001variable} & MCP \cite{zhang2010nearly} \\
\hline
\multicolumn{2}{P{1.3cm}}{SNR} & Eq. \eqref{fun1} & Eq. \eqref{fun2} & Eq. \eqref{fun3} & Eq. \eqref{fun4} & Eq. \eqref{fun5} & $\ell_{1}$-norm & & & & \\
\hline
\multicolumn{2}{P{1.3cm}}{\centering $\infty$} & \textbf{33.40} & 31.25 & 31.93 & $\underline{30.00}$ & 32.65 & 29.97 & 27.01 & 28.77 & 30.55 & 31.30 \\
\multicolumn{2}{P{1.3cm}}{\centering $20$dB} & \textbf{24.60} & 22.83 & 23.39 & $\underline{22.00}$ & 23.98 & 21.80 & 19.99 & 20.49 & 22.60 & 23.01 \\
\multicolumn{2}{P{1.3cm}}{\centering $30$dB} & \textbf{27.61} & 26.56 & 26.90 & $\underline{26.00}$ & 27.25 & 24.91 & 22.01 & 23.99 & 26.10 & 26.77 \\
\hline
\end{tabular}
}
\end{subtable}
\begin{subtable}{0.48\textwidth}
\centering
\resizebox{1\textwidth}{!}{ \renewcommand{\arraystretch}{1.3}
\begin{tabular}{P{0.5cm} P{1cm} P{1cm} P{1cm} P{1cm} P{1.2cm} P{1.2cm}}
\hline
\multicolumn{7}{P{7.5cm}}{(Experiment 2) Algorithm \ref{alg:invexPnP}, $p=0.8$ for Eq. \eqref{fun1}.} \\
\hline
SNR & Eq. \eqref{fun1} & Eq. \eqref{fun2} & Eq. \eqref{fun3} & Eq. \eqref{fun4} & Eq. \eqref{fun5} & $\ell_{1}$-norm \\
\hline
\centering $\infty$ & \textbf{34.51} & 32.37 & 33.06 & $\underline{31.40}$ & 33.76 & 31.10 \\
\centering $20$dB & \textbf{25.55} & 23.92 & 24.44 & $\underline{23.00}$ & 24.98 & 22.95 \\
\centering $30$dB & \textbf{28.30} & 26.87 & 27.33 & $\underline{26.05}$ & 27.80 & 26.00 \\
\hline
\end{tabular}
}
\end{subtable}
\hfil
\begin{subtable}{0.48\textwidth}
\centering
\resizebox{1\textwidth}{!}{ \renewcommand{\arraystretch}{1.45}
\begin{tabular}{P{1.5cm} P{1cm} P{1cm} P{1.2cm} P{1.5cm} P{1.5cm}}
\hline
\multicolumn{4}{P{6cm}|}{(Denoising experiment) Algorithm \ref{alg:denoising}, $p=0.5$ for Eq. \eqref{fun1}} & BM3D \cite{dabov2007image} & Noise2Void \cite{krull2019noise2void} \\
\hline
Metric & Eq. \eqref{fun1} & Eq. \eqref{fun3} & Eq. \eqref{fun5} & $\ell_{1}$-norm & \\
\hline
SNR (dB) & \textbf{49.40} & $\underline{43.85}$ & 46.46 & 41.52 & 39.43 \\
SSIM & \textbf{0.886} & $\underline{0.872}$ & 0.876 & 0.869 & 0.853 \\
\hline
\end{tabular}
}
\end{subtable}
\begin{subtable}{0.48\textwidth}
\centering
\resizebox{1\textwidth}{!}{ \renewcommand{\arraystretch}{1.3}
\begin{tabular}{P{0.5cm} P{1cm} P{1cm} P{1cm} P{1cm} P{1.2cm} P{1.2cm} | P{1.2cm} | P{1.2cm}}
\hline
\multicolumn{9}{P{10.5cm}}{(Experiment 3) Algorithm \ref{alg:unrolling} - unfolded LISTA. $p=0.85$ for Eq. \eqref{fun1}} \\
\hline
SNR & $m/n$& Eq. \eqref{fun1} & Eq. \eqref{fun2} & Eq. \eqref{fun3} & Eq. \eqref{fun4} & Eq. \eqref{fun5} & $\ell_{1}$-norm \cite{chen2018theoretical} & ReconNet \cite{kulkarni2016reconnet} \\
\hline
\centering \multirow{3}{*}{$\infty$} & \centering 0.2
\hrule& \textbf{31.32} & 29.20 & 29.87 & $\underline{28.56}$ & 30.58 & 27.95 & 26.59 \\
& \centering 0.4
\hrule& \textbf{36.10} & 33.50 & 34.34 & $\underline{32.75}$ & 35.20 & 32.01 & 31.86 \\
& \centering 0.6 & \textbf{41.27} & 37.81 & 38.90& $\underline{36.09}$ & 40.05 & 35.82 & 34.42 \\
\hline
\centering \multirow{3}{*}{20dB} & \centering 0.2
\hrule& \textbf{26.00} & 24.45 & 24.94 & $\underline{23.97}$ & 25.01 & 23.52 & 22.00 \\
& \centering 0.4
\hrule& \textbf{32.67} & 30.64 & 31.32 & $\underline{30.02}$ & 32.29 & 29.43 & 28.24 \\
& \centering 0.6 & \textbf{34.38} & 33.00 & 33.28 & $\underline{32.94}$ & 33.64 & 32.60 & 30.20 \\
\hline
\centering \multirow{3}{*}{30dB} &\centering 0.2
\hrule& \textbf{27.65} & 26.20 & 26.66 & $\underline{25.75}$ & 27.15 & 25.32 & 23.64 \\
& \centering 0.4
\hrule& \textbf{34.33} & 31.89 & 32.66 & $\underline{31.02}$ & 33.47 & 30.46 & 29.88 \\
& \centering 0.6 & \textbf{37.03} & 34.84 & 35.54 & $\underline{34.17}$ & 36.27 & 33.53 & 31.71 \\
\hline
\end{tabular}
}
\end{subtable}
\hfil
\begin{subtable}{0.48\textwidth}
\centering
\resizebox{1\textwidth}{!}{ \renewcommand{\arraystretch}{1.12}
\begin{tabular}{P{0.5cm} P{1cm} P{1cm} P{1cm} P{1cm} P{1.2cm} | P{1.2cm} | P{1.2cm}}
\hline
\multicolumn{8}{P{10.5cm}}{(Experiment 3) Algorithm \ref{alg:unrolling} - unfolded ISTA-Net. $p=0.85$ for Eq. \eqref{fun1}} \\
\hline
SNR & $m/n$& Eq. \eqref{fun1} & Eq. \eqref{fun2} & Eq. \eqref{fun3} & Eq. \eqref{fun4} & Eq. \eqref{fun5} & $\ell_{1}$-norm \cite{zhang2018ista} \\
\hline
\centering \multirow{3}{*}{$\infty$} & \centering 0.2
\hrule& \textbf{32.50} & 30.15 & 30.89 & $\underline{29.04}$ & 31.67 & 28.77 \\
& \centering 0.4
\hrule& \textbf{38.33} & 35.72 & 36.55 & $\underline{34.92}$ & 37.41 & 34.17 \\
& \centering 0.6 & \textbf{43.61} & 40.07 & 41.18& $\underline{39.02}$ & 42.36 & 38.02 \\
\hline
\centering \multirow{3}{*}{20dB} & \centering 0.2
\hrule& \textbf{28.29} & 26.22 & 26.87 & $\underline{25.60}$ & 27.56 & 25.01 \\
& \centering 0.4
\hrule& \textbf{33.96} & 32.11 & 32.71 & $\underline{31.55}$ & 33.32 & 31.00 \\
& \centering 0.6 & \textbf{35.77} & 34.68 & 35.03 & $\underline{34.33}$ & 35.39 & 33.99 \\
\hline
\centering \multirow{3}{*}{30dB} &\centering 0.2
\hrule& \textbf{29.34} & 28.30 & 28.63 & $\underline{27.97}$ & 28.98 & 27.65 \\
& \centering 0.4
\hrule& \textbf{35.41} & 33.33 & 33.99 & $\underline{32.69}$ & 34.68 & 32.08 \\
& \centering 0.6 & \textbf{38.95} & 36.25 & 37.10 & $\underline{35.43}$ & 38.00 & 34.65 \\
\hline
\end{tabular}
}
\end{subtable}
\label{tab:globalResults}
\end{table}
\textbf{Experiment 3} compares the invex regularizers but under the unrolling framework as described in Section \ref{unrolling}. The gold standard convex regularizations to compare with are the learned iterative shrinkage and thresholding algorithm (LISTA) \cite{liu2019alista}, and the Interpretable optimization-inspired deep network (ISTA-Net)\cite{zhang2018ista}. Also, to comparing with convolutional neural networks methodologies, the non-iterative reconstruction methodology ReconNet \cite{kulkarni2016reconnet} is used. We follow the existing setting for LISTA in \cite{chen2018theoretical}\footnote{We used the implementation from \cite{chen2018theoretical} at \url{https://github.com/VITA-Group/LISTA-CPSS}}, and for ISTA-Net in \cite{liu2019alista}. For the training stage we extract $10000$ patches $\boldsymbol{b}\in \mathbb{R}^{16\times 16}$ at random positions of each image, with all means removed. We then learn a dictionary $\boldsymbol{D}\in \mathbb{R}^{256\times 512}$ from the extracted patches, using the same strategy as in \cite{chen2018theoretical}. Gaussian i.i.d sensing matrices $\boldsymbol{\Phi}\in \mathbb{R}^{m\times 256}$ are created from the standard Gaussian distribution, $\boldsymbol{\Phi}[i,j]\sim \mathcal{N}(0,1/m)$ and then normalize its columns to have the unit $\ell_{2}$-norm, where $m$ is selected such that $\frac{m}{256}=0.2,0.4,0.6$. The matrix $\boldsymbol{H}$ is built as $\boldsymbol{H}=\boldsymbol{\Phi}\boldsymbol{\Psi}$ with $T=16$ (number of layers). We follow the same two-step strategy in \cite{chen2018theoretical} to train a recurrent neural network. First, perform a layer-wise pre-training solving Eq. \eqref{eq:problem4} for each extracted patch $\boldsymbol{b}$ by fixing $\boldsymbol{H}=\boldsymbol{\Psi}$. Second, append a learnable fully-connected layer at the end of the network structure, initialized by $\boldsymbol{\Psi}$. Then, perform an end-to-end training solving Eq. 
\eqref{eq:problem4} where $\boldsymbol{H}$ in this case is learnt by updating the initial matrix $\boldsymbol{\Psi}$. For each testing image, we divide it into non-overlapping $16\times 16$ patches. When $g(\boldsymbol{x})$ is the $\ell_{1}$-norm, we recover the method of~\cite{chen2018theoretical}.
In the case of ISTA-Net and ReconNet, for their learning stage we extract patches of size $33\times 33$ from the training image set. Gaussian i.i.d. sensing matrices $\boldsymbol{\Phi}\in \mathbb{R}^{m\times 1089}$ are created with $\ell_{2}$-normalized columns as for LISTA, where $m$ is selected such that $\frac{m}{1089}=0.2,0.4,0.6$. The optimizer employed was the Adam algorithm with a learning rate of $1\times 10^{-4}$, run for 200 and 512 epochs for ISTA-Net and ReconNet respectively, with a batch size of $64$ for both networks. For ISTA-Net, $T=16$ (number of unrolled iterations). We recall that when $g(\boldsymbol{x})$ is the $\ell_{1}$-norm in ISTA-Net, we recover the method of~\cite{zhang2018ista}.
\subsection{ Image Denoising Experiment}
The images used for this experiment come from two datasets, which we merge ($80$ images in total), acquired through a neutron image formation process\footnote{Acquired with the ISIS Neutron and Muon Source system at Harwell Science and Innovation Campus.}. This type of image captures the neutron attenuation properties of the object, which helps analyze material structure. Performance is assessed by averaging, over all images, the experimental SNR in dB given by $SNR = 20\log\left (\frac{\lVert \boldsymbol{z} \rVert_{2}}{\lVert \hat{\boldsymbol{z}}-\boldsymbol{z} \rVert_{2}}\right)$, where $\boldsymbol{z}$ and $\hat{\boldsymbol{z}}$ stand for the noisy and the denoised image, respectively, and the structural similarity index measure (SSIM) computed between $\boldsymbol{z}$ and $\hat{\boldsymbol{z}}$. Taking advantage of results observed in previous experiments, we compare the top three regularizers in Eqs. \eqref{fun1}, \eqref{fun3}, and \eqref{fun5} with two state-of-the-art denoising techniques: block-matching and 3-D filtering (BM3D) \cite{dabov2007image} using the $\ell_{1}$-norm regularizer, and the deep learning technique Noise2Void (trained as in Experiment 2) \cite{krull2019noise2void}. We follow the two-step denoising procedure described in Section \ref{sub:denoising}. In the first step, the transform domain is built using PCA as in \cite{cai2014data}. To build this transform we extract $16\times 16$ patches from the noisy image, which are then used to adaptively construct a tight frame (nearly orthogonal matrix) tailored to the given noisy data\footnote{We used the implementation at \url{https://www.math.hkust.edu.hk/~jfcai/}.}. Results are summarized in Table \ref{tab:globalResults}. Examples of denoised images obtained by Eqs. \eqref{fun1}, \eqref{fun3}, \eqref{fun5}, BM3D, and Noise2Void are illustrated in Appendix \ref{app:denoising} of the supplementary material, along with the algorithm used with the invex regularizers to denoise these images.
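For reference, the SNR metric above can be computed with a few lines of NumPy; this is a sketch, and the function name is ours:

```python
import numpy as np

def snr_db(z, z_hat):
    """Experimental SNR in dB between an image z and its estimate z_hat,
    following SNR = 20*log10(||z||_2 / ||z_hat - z||_2)."""
    return 20.0 * np.log10(np.linalg.norm(z) / np.linalg.norm(z_hat - z))
```

In practice this is averaged over the $80$ test images, alongside the SSIM.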
\section{Discussion, Limitations and Conclusion}
The application of invex theory has stalled for decades due to the lack of practical examples, which has caused significantly reduced interest in invexity research. To address this issue, we present for the first time a list of invex regularizers for image reconstruction applications, and formulate the corresponding optimization problems. In particular, for image compressive sensing, we improve three advanced imaging techniques using the functions listed in Table \ref{tab:list} as invex regularizers. We present their solution algorithms and develop theoretical guarantees on their convergence to a global minimum. We also conducted various image compressive sensing and denoising experiments to demonstrate the effectiveness of invex regularizers under practical scenarios that are non-ideal, with noisy observed data and no guaranteed RIP condition. Significant benefits of using invex regularizers have been demonstrated from both theoretical and empirical perspectives. In fact, Table \ref{tab:globalResults} and the theoretical results in Section \ref{sec:invexImag} revive the potential of exploring invex theory in practical applications.
The numerical results presented in Table \ref{tab:globalResults} confirm the performance improvement obtained by using invex regularizers over the $\ell_{1}$-norm-based methods (e.g., FISTA, TVAL3) in previously unexplored scenarios. The best result is obtained with Eq. \eqref{fun1}, and Eq. \eqref{fun4} is the least efficient. The intuition behind the superiority of Eq. \eqref{fun1} comes from the possibility of adjusting the value of $p$ in a data-dependent manner \cite{wu2013improved}. This means that when the images are strictly sparse and the noise is relatively low, a small value of $p$ should be used. Conversely, when images are non-strictly sparse and/or the noise is relatively high, a larger value of $p$ tends to yield better performance (which seems to be the case for the selected image datasets). We believe that the remaining invex, SCAD, and MCP regularizers perform worse than Eq. \eqref{fun1} because they lack this flexibility to adjust to the sparsity of the data. In fact, Eq. \eqref{fun4} shows the poorest performance because, in the proof of Theorem \ref{theo:ourCS}, we theoretically show that Eq. \eqref{fun4} cannot sparsify all images. Therefore, this analysis leads to the conclusion that the invex function in Eq. \eqref{fun1} offers the best performance for the metrics considered and the imaging problems studied here.
Although we have presented theoretical results with global optima using invexity for some of the most important and successful image reconstruction techniques, we highlight several limitations of our analysis. Specifically, we focused on reconstructing the image of interest in an ideal scenario, that is, without the presence of noise (Theorem \ref{theo:ourCS}). Additionally, we have limited our numerical results to tasks such as denoising and deconvolution. Moreover, the convergence guarantee for the plug-and-play result only ensures a close estimate of the solution (Lemma \ref{theo:PnP}). Therefore, we see a number of future directions in which this research can be taken further. One is to explore avenues for improving the convergence guarantees to global optima for the plug-and-play framework. Another is to study the inclusion of noise in the analysis of imaging applications, which may enable downstream tasks such as invex robust image reconstruction. Finally, we believe that the application domains for invex functions can go well beyond denoising and deconvolution imaging problems, especially in deep learning research, where they could improve a number of downstream applications.
\section*{Broader Impact}
We believe that the presented mathematical and empirical analysis over the studied regularizers has the potential to unlock the benefits of invexity for further applications in signal and image processing. This may be an enabler to improve downstream tasks like deep learning for imaging, and to provide more robust image reconstruction algorithms.
\section*{Acknowledgments}
This work was partially supported by the Facilities Funding from Science and Technology Facilities Council (STFC) of UKRI, and Wave 1 of the UKRI Strategic Priorities Fund under the EPSRC grant EP/T001569/1, particularly the ``AI for Science" theme within that grant, by the Alan Turing Institute.
{
\small
}
\appendix
\section{Proof of Lemma \ref{theo:invexProof}}
\label{app:invexProof}
In this proof we seek to guarantee that the functions listed in Table \ref{tab:list} are invex. We point out that, since each regularizer in Table \ref{tab:list} is the sum of a scalar function applied to each entry of a vector, it is enough to analyze the scalar function to determine the invexity of the regularizer.
\paragraph{Eq. \eqref{fun1}.}
\begin{proof}
Take $r_{\epsilon}(w) = \left(\lvert w \rvert + \epsilon \right)^{p}, \forall w\in \mathbb{R}$, for $p\in (0,1)$ and $\epsilon\geq \left(p(1-p)\right)^{\frac{1}{2-p}}$. The constant $\epsilon$ is added to formally satisfy the Lipschitz continuity condition required for invexity according to Definition \ref{def:invex}. Observe that if $w>0$ then we have that $\partial r_{\epsilon}(w)=\left \lbrace \frac{p}{\left(\lvert w \rvert + \epsilon \right)^{1-p}}\right \rbrace$, which means that $0\not \in \partial r_{\epsilon}(w)$. Conversely, if $w<0$ then $\partial r_{\epsilon}(w)=\left \lbrace \frac{-p}{\left(\lvert w \rvert + \epsilon \right)^{1-p}}\right \rbrace$, leading to $0\not \in \partial r_{\epsilon}(w)$. Let us examine $w^{*}=0$. Note that
\begin{align}
\lim_{w\rightarrow 0^{+}} r_{\epsilon}^{\prime}(w) = \lim_{w\rightarrow 0^{+}} \frac{p}{\left(\lvert w \rvert + \epsilon \right)^{1-p}} = \frac{p}{\epsilon^{1-p}},
\label{eq:rightLimit}
\end{align}
and that
\begin{align}
\lim_{w\rightarrow 0^{-}} r_{\epsilon}^{\prime}(w) = \lim_{w\rightarrow 0^{-}} \frac{-p}{\left(\lvert w \rvert + \epsilon \right)^{1-p}} = \frac{-p}{\epsilon^{1-p}}.
\label{eq:leftLimit}
\end{align}
Additionally, since $r_{\epsilon}(w)$ is a Lipschitz continuous function, then appealing to Theorem \ref{theo:auxDerivative} we have that $\partial r_{\epsilon}(w^{*}=0) = \text{ conv }\left \lbrace \frac{-p}{\epsilon^{1-p}}, \frac{p}{\epsilon^{1-p}}\right \rbrace = \left \lbrack \frac{-p}{\epsilon^{1-p}}, \frac{p}{\epsilon^{1-p}} \right\rbrack$. This means that $0\in \partial r_{\epsilon}(0)$. Further, given the fact that $r_{\epsilon}(0)\leq r_{\epsilon}(w)$ for all $w\in \mathbb{R}$, then $w^{*} = 0$ is a global minimizer of $r_{\epsilon}$. Therefore, the function $r_{\epsilon}$ is invex.
\end{proof}
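As a quick numerical sanity check (separate from the proof above), one can verify on a grid that $r_{\epsilon}$ attains its minimum at $w=0$ and that its one-sided difference quotients there approach $\pm p/\epsilon^{1-p}$; the script below is a sketch with an arbitrary choice of $p$:

```python
import numpy as np

p = 0.5
eps = (p * (1.0 - p)) ** (1.0 / (2.0 - p))   # smallest epsilon allowed by the lemma

def r(w):
    # scalar regularizer r_eps(w) = (|w| + eps)^p
    return (np.abs(w) + eps) ** p

# dense grid around 0 to check that w = 0 is the global minimizer
grid = np.linspace(-5.0, 5.0, 100001)

# one-sided difference quotients at 0, which should approach +/- p / eps^(1-p)
h = 1e-6
target = p / eps ** (1.0 - p)
right = (r(h) - r(0.0)) / h
left = (r(-h) - r(0.0)) / (-h)
```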
\paragraph{Eq. \eqref{fun2}}
\begin{proof}
Take $r(w) = \log(1+\lvert w\rvert)$. Observe that if $w>0$ then we have that $\partial r(w)=\left \lbrace \frac{1}{1 + \lvert w \rvert}\right \rbrace$, which means that $0\not \in \partial r(w)$. Conversely, if $w<0$ then $\partial r(w)=\left \lbrace \frac{-1}{1 + \lvert w \rvert}\right \rbrace$, leading to $0\not \in \partial r(w)$. Let us examine $w^{*}=0$. Note that
\begin{align}
\lim_{w\rightarrow 0^{+}} r^{\prime}(w) = \lim_{w\rightarrow 0^{+}} \frac{1}{1 + \lvert w \rvert} = 1,
\label{eq:rightLimit1}
\end{align}
and that
\begin{align}
\lim_{w\rightarrow 0^{-}} r^{\prime}(w) = \lim_{w\rightarrow 0^{-}} \frac{-1}{1+ \lvert w \rvert} = -1.
\label{eq:leftLimit1}
\end{align}
Additionally, since $r(w)$ is a Lipschitz continuous function, then appealing to Theorem \ref{theo:auxDerivative} we have that $\partial r(w^{*}=0) = \text{ conv }\left \lbrace -1,1\right \rbrace = \left \lbrack -1,1 \right\rbrack$. This means that $0\in \partial r(0)$. Further, given the fact that $r(0)\leq r(w)$ for all $w\in \mathbb{R}$, then $w^{*} = 0$ is a global minimizer of $r(w)$. Therefore, the function $r(w)$ is invex.
\end{proof}
\paragraph{Eq. \eqref{fun3}}
\begin{proof}
Take $r(w) = \frac{\lvert w \rvert}{2+2\lvert w \rvert}$. Observe that if $w>0$ then we have that $\partial r(w)=\left \lbrace \frac{1}{2(1 + \lvert w \rvert)^{2}}\right \rbrace$, which means that $0\not \in \partial r(w)$. Conversely, if $w<0$ then $\partial r(w)=\left \lbrace \frac{-1}{2(1 + \lvert w \rvert)^{2}}\right \rbrace$, leading to $0\not \in \partial r(w)$. Let us examine $w^{*}=0$. Note that
\begin{align}
\lim_{w\rightarrow 0^{+}} r^{\prime}(w) = \lim_{w\rightarrow 0^{+}} \frac{1}{2(1 + \lvert w \rvert)^{2}} = \frac{1}{2},
\label{eq:rightLimit2}
\end{align}
and that
\begin{align}
\lim_{w\rightarrow 0^{-}} r^{\prime}(w) = \lim_{w\rightarrow 0^{-}} \frac{-1}{2(1 + \lvert w \rvert)^{2}} = -\frac{1}{2}.
\label{eq:leftLimit2}
\end{align}
Additionally, since $r(w)$ is a Lipschitz continuous function, then appealing to Theorem \ref{theo:auxDerivative} we have that $\partial r(w^{*}=0) = \text{ conv }\left \lbrace -\frac{1}{2},\frac{1}{2}\right \rbrace = \left \lbrack -\frac{1}{2},\frac{1}{2} \right\rbrack$. This means that $0\in \partial r(0)$. Further, given the fact that $r(0)\leq r(w)$ for all $w\in \mathbb{R}$, then $w^{*} = 0$ is a global minimizer of $r(w)$. Therefore, the function $r(w)$ is invex.
\end{proof}
\paragraph{Eq. \eqref{fun4}}
\begin{proof}
Consider $r(w) = \frac{w^{2}}{1+w^{2}}$. Observe that $\partial r(w) = \left\lbrace \frac{2w}{(1+w^{2})^{2}}\right \rbrace$, which means $r(w)$ is continuously differentiable. Then, it is clear that $w=0$ is the only point that satisfies $0\in \partial r(0)$. In addition, the value $r(w=0)$ is the global minimum of $r(w)$. Thus, since the only stationary point of $r(w)$ is a global minimizer, $r(w)$ is invex.
\end{proof}
\paragraph{Eq. \eqref{fun5}}
\begin{proof}
Take $r(w) = \log(1+\lvert w \rvert) - \frac{\lvert w \rvert}{2 + 2\lvert w \rvert}$. Observe that if $w>0$ then we have that $\partial r(w)=\left \lbrace \frac{1}{2(1+\lvert w \rvert)^{2}} + \frac{w}{(1+\lvert w \rvert)^{2}} \right \rbrace$, which means that $0\not \in \partial r(w)$. Conversely, if $w<0$ then $\partial r(w)=\left \lbrace \frac{-1}{2(1+\lvert w \rvert)^{2}} + \frac{w}{(1+\lvert w \rvert)^{2}}\right \rbrace$, leading to $0\not \in \partial r(w)$. Let us examine $w^{*}=0$. Note that
\begin{align}
\lim_{w\rightarrow 0^{+}} r^{\prime}(w) = \lim_{w\rightarrow 0^{+}} \frac{1}{2(1+\lvert w \rvert)^{2}} + \frac{w}{(1+\lvert w \rvert)^{2}} = \frac{1}{2},
\label{eq:rightLimit3}
\end{align}
and that
\begin{align}
\lim_{w\rightarrow 0^{-}} r^{\prime}(w) = \lim_{w\rightarrow 0^{-}} \frac{-1}{2(1+\lvert w \rvert)^{2}} + \frac{w}{(1+\lvert w \rvert)^{2}} = -\frac{1}{2}.
\label{eq:leftLimit4}
\end{align}
Additionally, since $r(w)$ is a Lipschitz continuous function, then appealing to Theorem \ref{theo:auxDerivative} we have that $\partial r(w^{*}=0) = \text{ conv }\left \lbrace -\frac{1}{2},\frac{1}{2}\right \rbrace = \left \lbrack -\frac{1}{2},\frac{1}{2} \right\rbrack$. This means that $0\in \partial r(0)$. Further, given the fact that $r(0)\leq r(w)$ for all $w\in \mathbb{R}$, then $w^{*} = 0$ is a global minimizer of $r(w)$. Therefore, the function $r(w)$ is invex.
\end{proof}
\subsection{Additional Discussion on Invex Regularizers}
\label{app:discussionRegu}
To address the sub-optimality limitations of convex regularizers, non-convex mappings have been proposed, for instance, the Smoothly Clipped Absolute Deviation (SCAD) \cite{fan2001variable} and the Minimax Concave Penalty (MCP) \cite{zhang2010nearly}. However, a recent survey in imaging \cite{wen2018survey}, which compared the performance of several regularizers including SCAD and MCP on a number of imaging tasks, concludes that Eq. \eqref{fun1} outperforms SCAD and MCP because the value of $p$ can be adjusted in a data-dependent manner. This means that when the images are strictly sparse and the noise is relatively low, a small value of $p$ should be used. Conversely, when images are non-strictly sparse and/or the noise is relatively high, a larger value of $p$ tends to yield better performance. Furthermore, in the context of invexity, we highlight that SCAD and MCP are non-invex regularizers because they reach a maximum value, which makes the first derivative zero at non-minimizer points, leading to non-invexity (see Theorem \ref{theo:invexProof}).
On the other hand, in the case of minimax-concave-type regularizers, we present a new function in our manuscript (Eq. \eqref{fun5}). From Eq. \eqref{fun5} it is clear that we are subtracting $g_{2}(\boldsymbol{x})=\sum_{i=1}^{n}\frac{\lvert \boldsymbol{x}[i] \rvert}{2 + 2\lvert \boldsymbol{x}[i] \rvert}$ from $g_{1}(\boldsymbol{x})=\sum_{i=1}^{n}\log(1+\lvert \boldsymbol{x}[i] \rvert)$ (both selected due to results in \cite{wu2019improved}). We propose to study the regularizer in Eq. \eqref{fun5}, that is $g_{1}-g_{2}$, for two reasons. First, because $g_{1}$, $g_{2}$, and $g_{1}-g_{2}$ are invex, as stated in Lemma 1, and all of them can achieve global optima for the scenarios studied in the paper. Second, to the best of our knowledge, there is no evidence that subtracting two convex penalties (the current proposal in the minimax-concave literature) produces another convex regularizer. Therefore, we present Eq. \eqref{fun5} to show that this is at least possible in the invex case, as stated in Section \ref{sec:InvexRegul}.
Finally, we point out that the performance of invex regularizers in Eqs. \eqref{fun2}, \eqref{fun3}, \eqref{fun4}, and \eqref{fun5} can be justified under the framework of re-weighted $\ell_{1}$-norm minimization (see \cite{candes2008enhancing}), which enhances the performance of just $\ell_{1}$-norm minimization.
\section{Proof of Lemma \ref{theo:invexComposited}}
\label{app:invexComposited}
\begin{proof}
To prove this theorem, we show that each $\boldsymbol{x}\in \mathbb{R}^{n}$ such that $\boldsymbol{0} \in \partial h(\boldsymbol{x})$, where $h(\boldsymbol{x}) = f\left(\boldsymbol{H}\boldsymbol{x} -\boldsymbol{v}\right)$, is a global minimizer. Observe that
\begin{align}
\partial h(\boldsymbol{x}) = \left\{ \nabla h(\boldsymbol{x}) \right\} = \left\{\boldsymbol{H}^{T}\nabla f(\boldsymbol{H}\boldsymbol{x}-\boldsymbol{v})\right\}.
\label{eq:gradient1}
\end{align}
Take $\boldsymbol{x}^{*}\in \mathbb{R}^{n}$ such that $\boldsymbol{0}\in \partial h(\boldsymbol{x}^{*})$. Then, since $\boldsymbol{H}$ is a full row-rank matrix (equivalently, $\boldsymbol{H}^{T}$ is a full column-rank matrix), from Eq. \eqref{eq:gradient1} we have
\begin{align}
\nabla h(\boldsymbol{x}^{*}) = \boldsymbol{H}^{T}\nabla f(\boldsymbol{H}\boldsymbol{x}^{*}-\boldsymbol{v}) = \boldsymbol{0}\leftrightarrow \nabla f(\boldsymbol{H}\boldsymbol{x}^{*}-\boldsymbol{v}) =\boldsymbol{0}.
\label{eq:problem2}
\end{align}
The above equation means that the stationary points of $h(\boldsymbol{x})$ are exactly the points $\boldsymbol{x}$ for which $\boldsymbol{H}\boldsymbol{x}-\boldsymbol{v}$ is a stationary point of $f$. Thus, since $f$ is invex, $\boldsymbol{H}\boldsymbol{x}^{*}-\boldsymbol{v}$ is a global minimizer of $f$, and hence $\boldsymbol{x}^{*}$ is a global minimizer of $h$, i.e., $h$ is invex.
\end{proof}
\section{Proof of Theorem \ref{theo:proximalProof}}
\label{app:proximalProof}
In this appendix we seek to guarantee that the proximal operators of the functions in Table \ref{tab:list} are invex. We point out that, since the objective defining the proximal of each regularizer in Table \ref{tab:list} is the sum of a scalar function applied to each entry of a vector, it is enough to analyze the scalar function to determine the invexity of the proximal.
\subsection{Invexity proofs of the proximal operators}
In the following we provide the proof for the first statement in Theorem \ref{theo:proximalProof}.
\paragraph{Eq. \eqref{fun1}}
\begin{proof}
Let $h(w)$ be a function defined, for $p\in (0,1)$, as
\begin{align}
h(w) = (\lvert w \rvert + \epsilon)^{p} + \frac{1}{2}(w-u)^{2},
\end{align}
for fixed $u\in \mathbb{R}$, and $\epsilon\geq \left(p(1-p)\right)^{\frac{1}{2-p}}$. Then, we seek to show that the second derivative of $h(w)$ with respect to $w$, for $w\not = 0$, is non-negative. Observe that,
\begin{align}
h^{\prime \prime}(w) = \frac{p(p-1)}{(\lvert w \rvert + \epsilon )^{2-p}} + 1.
\label{eq:deri1}
\end{align}
From Eq. \eqref{eq:deri1} we have that $(\lvert w \rvert + \epsilon )^{2-p}$ is a positive increasing function of $\lvert w \rvert$ since $2-p>1$, so $h^{\prime \prime}(w)$ attains its smallest value at $w=0$. Moreover, $\frac{p(p-1)}{(\lvert w \rvert + \epsilon )^{2-p}} \in [-1,0)$ for all $w\in \mathbb{R}$, because $p(p-1)<0$ and $\epsilon \geq \left(p(1-p)\right)^{\frac{1}{2-p}}$. Thus, $h^{\prime \prime}(w)$ is non-negative, leading to the convexity, and hence invexity, of $h(w)$.
\end{proof}
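Numerically (an illustrative check, not part of the proof), one can confirm that the bound is tight: with $\epsilon = (p(1-p))^{1/(2-p)}$ we get $h''(0) = 0$, and $h''(w)\geq 0$ elsewhere. A sketch with an arbitrary choice of $p$:

```python
import numpy as np

p = 0.3
eps = (p * (1.0 - p)) ** (1.0 / (2.0 - p))  # threshold value of epsilon from the proof

def h2(w):
    """Second derivative of (|w|+eps)^p + (w-u)^2/2 for w != 0 (and its limit at 0)."""
    return p * (p - 1.0) / (np.abs(w) + eps) ** (2.0 - p) + 1.0

# h2 is smallest at w = 0, where it vanishes exactly for the threshold eps
grid = np.linspace(-10.0, 10.0, 200001)
```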
\paragraph{Eq. \eqref{fun2}}
\begin{proof}
Take $h(w) = \log(1+\lvert w\rvert) + \frac{1}{2}(w-u)^{2}$ for fixed $u\in \mathbb{R}$. Observe that the second derivative of $h(w)$, for $w\not =0$ is given by
\begin{align}
h^{\prime \prime}(w) = \frac{-1}{(1 + \lvert w \rvert)^{2}} + 1.
\end{align}
Then, since $(1 + \lvert w \rvert)^{2}\geq 1$ for all $w$, this implies that $\frac{-1}{(1 + \lvert w \rvert)^{2}} \in [-1,0)$. Thus, $h^{\prime \prime}(w)$ is non-negative, leading to the invexity of $h(w)$.
\end{proof}
\paragraph{Eq. \eqref{fun3}}
\begin{proof}
Take $h(w) = \frac{\lvert w \rvert}{2 + 2\lvert w \rvert} + \frac{1}{2}(w-u)^{2}$ for fixed $u\in \mathbb{R}$. We will use the same argument as in previous cases. Then, for $w\not =0$ notice that the second derivative of $h(w)$ is given by
\begin{align}
h^{\prime \prime}(w) = \frac{-1}{(1+\lvert w \rvert)^{3}} + 1.
\end{align}
Then, from the above equation it is clear that $\frac{-1}{(1+\lvert w \rvert)^{3}}\in [-1,0)$ for all $w\in \mathbb{R}$. Thus, $h^{\prime \prime}(w)$ is non-negative, leading to the invexity of $h(w)$.
\end{proof}
\paragraph{Eq. \eqref{fun4}}
\begin{proof}
Take $h(w) = \frac{w^{2}}{1+w^{2}} + \frac{1}{2}(w-u)^{2}$, for fixed $u\in \mathbb{R}$. Then, notice that the second derivative of $h(w)$ is given by
\begin{align}
h^{\prime \prime}(w) = \frac{2-6w^{2}}{(1+ w^{2})^{3}} + 1.
\end{align}
Then, we show that $s(w)=\frac{2-6w^{2}}{(1+ w^{2})^{3}} \geq -1$, by determining its extreme values. Observe that
\begin{align}
s^{\prime}(w) = \frac{24w(w^{2}-1)}{(1+w^{2})^{3}} =0,
\end{align}
only when $w=0,1,-1$. It is clear that the maximum value of $s(w)$ is attained at $w=0$, i.e. $s(0)=2$, and its minimum value is attained at $w=\pm 1$, that is $s(1)=s(-1)=-\frac{1}{2}$. Thus, since $s(w)\geq -1$, $h(w)$ is invex.
\end{proof}
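These extreme values are easy to confirm numerically; the sketch below evaluates $s(w)$ at its stationary points and over a wide grid:

```python
import numpy as np

def s(w):
    """s(w) = (2 - 6 w^2) / (1 + w^2)^3, the non-constant part of h''(w)."""
    return (2.0 - 6.0 * w**2) / (1.0 + w**2) ** 3

# stationary points of s are w = 0 (maximum) and w = +/-1 (minima)
grid = np.linspace(-50.0, 50.0, 400001)
```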
\paragraph{Eq. \eqref{fun5}}
\begin{proof}
Take $h(w) = \log(1+\lvert w \rvert) - \frac{\lvert w \rvert}{2 + 2\lvert w \rvert} + \frac{1}{2}(w-u)^{2}$, for fixed $u\in \mathbb{R}$. Then, for $w\not =0$ notice that the second derivative of $h(w)$ is given by
\begin{align}
h^{\prime \prime}(w) = \frac{-\lvert w \rvert}{(1+\lvert w \rvert)^{3}} + 1.
\end{align}
Then, from the above equation it is clear that $\frac{-\lvert w \rvert}{(1+\lvert w \rvert)^{3}} \in [-\frac{4}{27},0]$ (its minimum is attained at $\lvert w \rvert = \frac{1}{2}$), which implies that $h^{\prime \prime}(w)$ is non-negative for any $w$. Thus, $h(w)$ is invex.
\end{proof}
\subsection{The resolvent of proximal operator only has global optimizers}
\begin{proof}
Now we prove the second part of Theorem \ref{theo:proximalProof}. From the previous analysis of each proximal operator, we have that $h(\boldsymbol{x})$ is a convex (therefore invex) function; then Theorem \ref{theo:optimal_v0} states that any global minimizer $\boldsymbol{y}$ of $h$ satisfies $\mathbf{0}\in \partial h(\boldsymbol{y})$. This condition implies that $\mathbf{0} \in \partial g(\boldsymbol{y}) + (\boldsymbol{y}-\boldsymbol{v})$, from which we obtain that $\boldsymbol{y} \in ( \partial g + \mathbf{I})^{-1}(\boldsymbol{v})$. Thus, we have that $\text{\textbf{prox}}_{g}(\boldsymbol{v})=(\partial g + \mathbf{I})^{-1}(\boldsymbol{v})$, from which the result holds.
\end{proof}
\begin{figure}
\caption{Visual comparison between the one-dimensional versions of the $\ell_{1}$-norm and the invex regularizers in Eqs. \eqref{fun1}, \eqref{fun2}, \eqref{fun3}, \eqref{fun4}, and \eqref{fun5}, together with the landscape of $h(\boldsymbol{x}) = g(\boldsymbol{x}) + \frac{1}{2}\lVert \boldsymbol{x}-\boldsymbol{u} \rVert_{2}^{2}$ for each invex regularizer.}
\label{fig:landscape}
\end{figure}
\subsection{Numerical Analysis of Proximal}
In this section we present additional numerical analysis on the proximal of invex regularizers listed in Table \ref{tab:list}. We start by providing a visual comparison between the one-dimensional version of $\ell_{1}$-norm and the invex regularizers in Eqs. \eqref{fun1}, \eqref{fun2}, \eqref{fun3}, \eqref{fun4}, and \eqref{fun5}. This comparison is reported in Fig. \ref{fig:landscape}. From this illustration it is easy to conclude why Eqs. \eqref{fun1}, \eqref{fun2}, \eqref{fun3}, \eqref{fun4}, and \eqref{fun5} are non-convex.
To complement the comparison between convex and invex regularizers, we present a graphical validation of the theoretical result in Theorem \ref{theo:proximalProof}. To that end, we illustrate also in Fig. \ref{fig:landscape} the landscape of function $h(\boldsymbol{x}) = g(\boldsymbol{x}) + \frac{1}{2}\lVert \boldsymbol{x}-\boldsymbol{u} \rVert_{2}^{2}$ where $\boldsymbol{x}$ is a vector of two dimensions $\boldsymbol{x}=[x_{1},x_{2}]^{T}$, $\boldsymbol{u}=[1,1]^{T}$, with $g(\boldsymbol{x})$ taking the form of all invex regularizers in Eqs. \eqref{fun1}, \eqref{fun2}, \eqref{fun3}, \eqref{fun4}, and \eqref{fun5}. From these results, it is clear that the level curves are concentric convex sets which confirms that $h(\boldsymbol{x})$ is convex (therefore invex), as stated in Theorem \ref{theo:proximalProof}.
Lastly, the running time to compute the proximal of the invex regularizers is also an important aspect to compare against their convex competitor, i.e., the $\ell_{1}$-norm, since it is desirable to improve imaging quality while keeping the same computational cost. Therefore, Table \ref{tab:timesProx} reports the running time to compute the proximal (on GPU) of Eqs. \eqref{fun1}, \eqref{fun2}, \eqref{fun3}, \eqref{fun4}, and \eqref{fun5} for an image of $2048\times 2048$ pixels. Observe that Table \ref{tab:timesProx} suggests that computing the proximal of the $\ell_{1}$-norm is faster than the proximal of the invex regularizers. However, the difference is on the order of milliseconds, making it negligible in practice.
\begin{table}[ht]
\centering
\caption{Time to compute the proximal of an image with $2048\times 2048$ pixels, for all invex and convex regularizers. The reported time is averaged over $256$ trials. For Eq. \eqref{fun1} we select $p=0.5$, and $\epsilon=(p(1-p))^{\frac{1}{2-p}}$.}
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{P{0.5cm} P{1cm} P{1cm} P{1cm} P{1cm} P{1cm} P{1.2cm} |P{2.0cm}}
\hline
\multicolumn{2}{P{1.3cm}}{} & Eq. \eqref{fun1} & Eq. \eqref{fun2} & Eq. \eqref{fun3} & Eq. \eqref{fun4} & Eq. \eqref{fun5} & $\ell_{1}$-norm \\
\hline
\multicolumn{2}{P{1.3cm}}{\centering Time} & $1.47ms$ & $0.63ms$ & $2.8ms$ & $4.7ms$ & $2.4ms$ & $0.66ms$ \\
\hline
\end{tabular}
\label{tab:timesProx}
\end{table}
\section{Solutions to the Proximal Operator in Eq. \eqref{eq:prox1}}
\label{app:solProxi}
In this section we present the proximal operators for the functions in Eqs. \eqref{fun1}-\eqref{fun5}, summarized in Table \ref{tab:proximals}. In the case of Eq. \eqref{fun1}, its proximal operator was calculated in \cite{marjanovic2012l_q}. We recall that the analysis for Eq. \eqref{fun1} is valid with and without the constant $\epsilon$; we prefer to add $\epsilon$ in order to formally satisfy Lipschitz continuity as in Definition \ref{def:lipschitz}. For the functions in Eqs. \eqref{fun2}-\eqref{fun5}, we show how to compute their proximal operators in the following.
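As an illustration, the proximal of Eq. \eqref{fun1} from \cite{marjanovic2012l_q} (first row of Table \ref{tab:proximals}) can be sketched as follows. Since the inner equation $\lambda p y^{p-1} + y = \lvert t \rvert$ has no closed form, this sketch (an implementation choice of ours, not prescribed by the reference) solves it by bisection on $[\beta, \lvert t \rvert]$, where the left-hand side is increasing:

```python
import numpy as np

def prox_lp_scalar(t, lam, p):
    """Scalar proximal of g(x) = lam*|x|^p, p in (0,1) (Marjanovic's characterization)."""
    beta = (2.0 * lam * (1.0 - p)) ** (1.0 / (2.0 - p))
    tau = beta + lam * p * beta ** (p - 1.0)
    a = abs(t)
    if a <= tau:                      # below the threshold tau the proximal is 0
        return 0.0                    # (at |t| = tau, 0 and sign(t)*beta both minimize)
    lo, hi = beta, a                  # bisection for lam*p*y^(p-1) + y = |t| on [beta, |t|]
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if lam * p * mid ** (p - 1.0) + mid < a:
            lo = mid
        else:
            hi = mid
    return np.sign(t) * 0.5 * (lo + hi)
```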
\paragraph{Proximal of Eq. \eqref{fun2}}
Consider $h(w)=\lambda \log(1+\lvert w \rvert) + \frac{1}{2}(w-u)^{2}$ for $\lambda\in (0,1]$, and fixed $u\in \mathbb{R}$. We note first that we only consider values of $w$ for which $\text{sign}(w)=\text{sign}(u)$; otherwise $h(w)=\lambda \log(1+\lvert w \rvert) + \frac{1}{2}w^{2}+\lvert u \rvert\lvert w \rvert + \frac{1}{2}u^{2}$, which is clearly minimized at $w=0$. Then, since with $\text{sign}(w)=\text{sign}(u)$ we have $(w-u)^{2} = (\lvert w \rvert-\lvert u \rvert)^{2}$, we replace $u$ with $\lvert u \rvert$ and take $w\geq 0$. As $h(w)$ is differentiable for $w>0$, re-arranging $h^{\prime}(w)=0$ gives
\begin{align}
\psi_{\lambda}(w)\mathrel{\ensurestackMath{\stackon[1pt]{=}{\scriptstyle\Delta}}}\frac{\lambda}{1+w} + w = \lvert u \rvert.
\label{eq:proxlog}
\end{align}
Observe that $\psi^{\prime}_{\lambda}(w)$ is always positive, which means that $\psi_{\lambda}(w)$ is monotonically increasing. Thus, the equation $\psi_{\lambda}(w)=\lvert u \rvert $ has a unique solution, i.e., at some point the equality holds. Hence, solving $\psi_{\lambda}(w)=\lvert u \rvert$ is equivalent to
\begin{align}
w^{2} + (1-\lvert u \rvert) w + \lambda - \lvert u \rvert = 0.
\label{eq:proxlog1}
\end{align}
It is easy to verify that the solution to Eq. \eqref{eq:proxlog1} that returns the minimum value of $h(w)$ is given by $w=\frac{\lvert u \rvert -1 + \sqrt{(\lvert u \rvert + 1)^2 - 4\lambda}}{2}$ when $(\lvert u \rvert+1)^2\geq 4\lambda$, and $0$ otherwise.
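The case analysis above can be sketched entrywise in NumPy (function name ours), matching the corresponding row of Table \ref{tab:proximals}:

```python
import numpy as np

def prox_log(u, lam):
    """Entrywise proximal of g(x) = lam*log(1+|x|), lam in (0,1], via the closed form above."""
    a = np.abs(u)
    disc = (a + 1.0) ** 2 - 4.0 * lam
    # root of w^2 + (1-|u|)w + lam - |u| = 0 that minimizes h, when it exists
    beta = (a - 1.0 + np.sqrt(np.maximum(disc, 0.0))) / 2.0
    beta = np.where((disc >= 0.0) & (beta >= 0.0), beta, 0.0)
    return np.sign(u) * beta
```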
\paragraph{Proximal of Eq. \eqref{fun3}}
Consider $h(w)=\lambda \frac{\lvert w \rvert}{2 + 2\lvert w \rvert} + \frac{1}{2}(w-u)^{2}$ for $\lambda\in (0,1]$, and fixed $u\in \mathbb{R}$. We note first that we only consider values of $w$ for which $\text{sign}(w)=\text{sign}(u)$; otherwise $h(w)=\lambda \frac{\lvert w \rvert}{2 + 2\lvert w \rvert} + \frac{1}{2}w^{2}+\lvert u \rvert\lvert w \rvert + \frac{1}{2}u^{2}$, which is clearly minimized at $w=0$. Then, since with $\text{sign}(w)=\text{sign}(u)$ we have $(w-u)^{2} = (\lvert w \rvert-\lvert u \rvert)^{2}$, we replace $u$ with $\lvert u \rvert$ and take $w\geq 0$. As $h(w)$ is differentiable for $w>0$, re-arranging $h^{\prime}(w)=0$ gives
\begin{align}
\psi_{\lambda}(w)\mathrel{\ensurestackMath{\stackon[1pt]{=}{\scriptstyle\Delta}}}\frac{\lambda}{2(1+w)^{2}} + w = \lvert u \rvert.
\label{eq:proxrat}
\end{align}
Observe that $\psi^{\prime}_{\lambda}(w)$ is always positive, which means that $\psi_{\lambda}(w)$ is monotonically increasing. Thus, the equation $\psi_{\lambda}(w)=\lvert u \rvert $ has a unique solution, i.e., at some point the equality holds. Hence, solving $\psi_{\lambda}(w)=\lvert u \rvert$ is equivalent to
\begin{align}
2w^{3} + (4-2\lvert u \rvert) w^{2}+(2-4\lvert u \rvert)w + \lambda - 2\lvert u \rvert = 0.
\label{eq:proxrat1}
\end{align}
Equation \eqref{eq:proxrat1} is easily solved using traditional python packages\footnote{Example of Python function to solve Eq. \eqref{eq:proxrat1} at \url{https://numpy.org/doc/stable/reference/generated/numpy.roots.html}. }.
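Following the footnote's suggestion, a sketch of this computation with \texttt{numpy.roots}, selecting the positive real root closest to $\lvert u \rvert$ as specified in Table \ref{tab:proximals} (function name ours):

```python
import numpy as np

def prox_ratio_scalar(u, lam):
    """Scalar proximal of g(x) = lam*|x|/(2+2|x|) via the cubic derived above."""
    a = abs(u)
    if a == 0.0:
        return 0.0
    # roots of 2w^3 + (4-2|u|)w^2 + (2-4|u|)w + lam - 2|u| = 0
    roots = np.roots([2.0, 4.0 - 2.0 * a, 2.0 - 4.0 * a, lam - 2.0 * a])
    real = roots[np.abs(roots.imag) < 1e-8].real
    pos = real[real > 0.0]
    if pos.size == 0:
        return 0.0
    beta = pos[np.argmin(np.abs(pos - a))]   # positive real root closest to |u|
    return np.sign(u) * beta
```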
\begin{table}[t]
\centering
\caption{Invex regularization functions from Table \ref{tab:list} and their corresponding proximity operator ($\lambda \in (0,1]$ is a thresholding parameter).}
\begin{tabular}{|p{0.5cm}| p{5.2cm}| p{6.7cm}|}
\hline
Ref & Invex function& Proximal operator \\
\hline
\cite{marjanovic2012l_q} & $g_{\lambda}(x)=\lambda \lvert x \rvert^{p}$, $p\in (0,1)$, $x\not =0$. & $\text{Prox}_{g_{\lambda}}(t)=\left\lbrace \begin{array}{ll}
0 & \lvert t \rvert < \tau \\
\{0,\text{sign}(t)\beta\} & \lvert t \rvert=\tau \\
\text{sign}(t)y & \lvert t \rvert>\tau
\end{array} \right.$
where $\beta = [2\lambda (1-p)]^{1/(2-p)}$, $\tau = \beta + \lambda p \beta^{p-1}$, $h(y)= \lambda p y^{p-1} + y - \lvert t \rvert = 0$, $y\in [\beta,\lvert t \rvert]$\\
\hline
- & $g_{\lambda}(x)= \lambda\log(1 + \lvert x \rvert)$ & $\text{Prox}_{g_{\lambda}}(t)=\left\lbrace \begin{array}{ll}
0 & (\lvert t \rvert+1)^2< 4\lambda \\
\text{sign}(t)\beta & \beta\geq 0 \\
0 & \text{ otherwise }
\end{array} \right.$
where $\beta = \frac{\lvert t \rvert -1 + \sqrt{(\lvert t \rvert + 1)^2 - 4\lambda}}{2}$.\\
\hline
- & $g_{\lambda}(x)=\lambda \frac{\lvert x \rvert}{2 + 2\lvert x \rvert}$ & $\text{Prox}_{g_{\lambda}}(t)=\left\lbrace \begin{array}{ll}
0 & \lvert t \rvert = 0 \\
\text{sign}(t)\beta & \text{ otherwise }
\end{array} \right.$
where $2\beta^{3} + (4-2\lvert t \rvert) \beta^{2}+(2-4\lvert t \rvert)\beta + \lambda - 2\lvert t \rvert = 0$, $\beta>0$, and closest to $\lvert t \rvert$.\\
\hline
- & $g_{\lambda}(x)=\lambda \frac{x^{2}}{1 + x^{2}}$ & $\text{Prox}_{g_{\lambda}}(t)=\left\lbrace \begin{array}{ll}
0 & \lvert t \rvert = 0 \\
\text{sign}(t)\beta & \text{ otherwise }
\end{array} \right.$
where $\beta^{5} - \lvert t \rvert \beta^{4} + 2\beta^{3} - 2\lvert t \rvert \beta^{2}+(1+2\lambda)\beta - \lvert t \rvert = 0$, $\beta>0$, and closest to $\lvert t \rvert$\\
\hline
- & $g_{\lambda}(x)=\lambda \left(\log(1 + \lvert x \rvert) - \frac{\lvert x \rvert}{2 + 2\lvert x \rvert}\right)$ & $\text{Prox}_{g_{\lambda}}(t)=\left\lbrace \begin{array}{ll}
0 & \lvert t \rvert = 0 \\
\text{sign}(t)\beta & \text{ otherwise }
\end{array} \right.$
where $2\beta^{3} + (4-2\lvert t \rvert)\beta^{2} + (2\lambda + 2 -4\lvert t \rvert)\beta + \lambda - 2\lvert t \rvert=0$, $\beta >0$, and closest to $\lvert t \rvert$.\\
\hline
\end{tabular}
\label{tab:proximals}
\end{table}
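As a concrete illustration of how the closed-form entries of Table \ref{tab:proximals} translate to code, the following minimal Python sketch implements the proximal of $g_{\lambda}(x)=\lambda\log(1+\lvert x\rvert)$ from the table (the function name is ours):

```python
import math

def prox_log(t, lam):
    # Proximal of g(x) = lam * log(1 + |x|), following the closed-form
    # expression in the table above.
    disc = (abs(t) + 1.0) ** 2 - 4.0 * lam
    if disc < 0.0:
        return 0.0
    beta = (abs(t) - 1.0 + math.sqrt(disc)) / 2.0
    return math.copysign(beta, t) if beta >= 0.0 else 0.0

# Sanity check: for w = |prox| > 0, stationarity lam/(1+w) + w - |t| = 0.
t, lam = 2.0, 0.5
w = abs(prox_log(t, lam))
assert abs(lam / (1.0 + w) + w - abs(t)) < 1e-9
```

The final assertion checks the first-order stationarity condition $\lambda/(1+w)+w-\lvert t\rvert=0$ at the returned point.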
\paragraph{Proximal of Eq. \eqref{fun4}}
Consider $h(w)=\lambda \frac{w^{2}}{1 + w^{2}} + \frac{1}{2}(w-u)^{2}$ for $\lambda\in (0,1]$ and fixed $u\in \mathbb{R}$. We first note that we need only consider $w$ with $\text{sign}(w)=\text{sign}(u)$: otherwise $h(w)=\lambda \frac{w^{2}}{1 + w^{2}} + \frac{1}{2}w^{2}+\lvert u \rvert\lvert w \rvert + \frac{1}{2}u^{2}$, which is clearly minimized at $w=0$. Since $\text{sign}(w)=\text{sign}(u)$ implies $(w-u)^{2} = (\lvert w \rvert-\lvert u \rvert)^{2}$, we may replace $u$ with $\lvert u \rvert$ and take $w\geq 0$. As $h(w)$ is differentiable for $w>0$, rearranging $h^{\prime}(w)=0$ gives
\begin{align}
\psi_{\lambda}(w)\mathrel{\ensurestackMath{\stackon[1pt]{=}{\scriptstyle\Delta}}}\frac{2\lambda w}{(1+w^{2})^{2}} + w = \lvert u \rvert.
\label{eq:proxrat2}
\end{align}
Observe that $\psi^{\prime}_{\lambda}(w)$ is positive for all $w\geq 0$ when $\lambda\in (0,1]$, so $\psi_{\lambda}(w)$ is strictly increasing and the equation $\psi_{\lambda}(w)=\lvert u \rvert$ has a unique solution. Multiplying through by $(1+w^{2})^{2}$, solving $\psi_{\lambda}(w)=\lvert u \rvert$ is equivalent to solving
\begin{align}
w^{5} - \lvert u \rvert w^{4} + 2w^{3} - 2\lvert u \rvert w^{2}+(1+2\lambda)w - \lvert u \rvert = 0.
\label{eq:proxrat3}
\end{align}
Equation \eqref{eq:proxrat3} is a polynomial equation that is easily solved numerically with standard Python packages.
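A minimal numerical sketch of this step, using \texttt{numpy.roots} as the polynomial solver (the function name is illustrative):

```python
import numpy as np

def prox_rational(u, lam):
    # Proximal of g(x) = lam * x^2 / (1 + x^2) for lam in (0, 1]:
    # solve the quintic of Eq. (proxrat3) and keep the real root,
    # which is unique because psi_lam is strictly increasing.
    if u == 0.0:
        return 0.0
    a = abs(u)
    coeffs = [1.0, -a, 2.0, -2.0 * a, 1.0 + 2.0 * lam, -a]
    roots = np.roots(coeffs)
    real = roots[np.abs(roots.imag) < 1e-8].real
    beta = float(real[np.argmin(np.abs(real - a))])  # root closest to |u|
    return float(np.sign(u) * max(beta, 0.0))
```

The returned value can be checked against the stationarity condition $\psi_{\lambda}(w)=\lvert u\rvert$ directly.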
\paragraph{Proximal of Eq. \eqref{fun5}}
Consider $h(w)=\lambda \left(\log(1 + \lvert w \rvert) - \frac{\lvert w \rvert}{2 + 2\lvert w \rvert}\right) + \frac{1}{2}(w-u)^{2}$ for $\lambda\in (0,1]$ and fixed $u\in \mathbb{R}$. We first note that we need only consider $w$ with $\text{sign}(w)=\text{sign}(u)$: otherwise $h(w)=\lambda \left(\log(1 + \lvert w \rvert) - \frac{\lvert w \rvert}{2 + 2\lvert w \rvert}\right) + \frac{1}{2}w^{2}+\lvert u \rvert\lvert w \rvert + \frac{1}{2}u^{2}$, which is clearly minimized at $w=0$. Since $\text{sign}(w)=\text{sign}(u)$ implies $(w-u)^{2} = (\lvert w \rvert-\lvert u \rvert)^{2}$, we may replace $u$ with $\lvert u \rvert$ and take $w\geq 0$. As $h(w)$ is differentiable for $w>0$, rearranging $h^{\prime}(w)=0$ gives
\begin{align}
\psi_{\lambda}(w)\mathrel{\ensurestackMath{\stackon[1pt]{=}{\scriptstyle\Delta}}}\lambda\frac{2w+1}{2(1+w)^{2}} + w = \lvert u \rvert.
\label{eq:proxlog2}
\end{align}
Observe that $\psi^{\prime}_{\lambda}(w)$ is positive for all $w\geq 0$ when $\lambda\in (0,1]$, so $\psi_{\lambda}(w)$ is strictly increasing and the equation $\psi_{\lambda}(w)=\lvert u \rvert$ has a unique solution. Multiplying through by $2(1+w)^{2}$, solving $\psi_{\lambda}(w)=\lvert u \rvert$ is equivalent to solving
\begin{align}
2w^{3} + (4-2\lvert u \rvert)w^{2} + (2\lambda + 2 -4\lvert u \rvert)w + \lambda - 2\lvert u \rvert=0.
\label{eq:proxlog3}
\end{align}
Equation \eqref{eq:proxlog3} is a polynomial equation that is easily solved numerically with standard Python packages.
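An analogous Python sketch for this cubic (the function name is illustrative); it also returns $0$ when $\psi_{\lambda}(0)=\lambda/2>\lvert u\rvert$, in which case no non-negative root exists and the minimum sits at the boundary $w=0$:

```python
import numpy as np

def prox_log_rational(u, lam):
    # Proximal of g(x) = lam*(log(1+|x|) - |x|/(2+2|x|)) for lam in (0,1]:
    # solve the cubic of Eq. (proxlog3) and keep the real non-negative root.
    if u == 0.0:
        return 0.0
    a = abs(u)
    coeffs = [2.0, 4.0 - 2.0 * a, 2.0 * lam + 2.0 - 4.0 * a, lam - 2.0 * a]
    roots = np.roots(coeffs)
    real = roots[np.abs(roots.imag) < 1e-8].real
    nonneg = real[real >= 0.0]
    if nonneg.size == 0:  # psi_lam(0) = lam/2 > |u|: minimum sits at w = 0
        return 0.0
    beta = float(nonneg[np.argmin(np.abs(nonneg - a))])
    return float(np.sign(u) * beta)
```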
\section{Proof of Theorem \ref{theo:ourCS}}
\label{app:ourCS}
We split the proof of Theorem \ref{theo:ourCS} into two parts: the first covers the functions in Eqs. \eqref{fun2}, \eqref{fun3}, and \eqref{fun5}; the second covers Eq. \eqref{fun4}, which requires additional mild conditions. Recall that Eq. \eqref{fun1} was skipped.
\subsection{Part one}
We particularize \cite[Theorem 1]{gribonval2007highly} to prove that whenever the $\ell_{1}$-norm solution of the optimization problem in Eq. \eqref{eq:problem4} is unique, then Eq. \eqref{eq:problem4} with any $g(\boldsymbol{x})$ satisfying the following definition has the same global optimum.
\begin{definition}{(\textit{Sparseness measure} \cite{gribonval2007highly})}
Let $g:\mathbb{R}^{n}\rightarrow \mathbb{R}$ be such that $g(\boldsymbol{w})=\sum_{i=1}^{n}r(\boldsymbol{w}[i])$, where $r:[0,\infty)\rightarrow [0,\infty)$ is increasing. If $r$ is not identically zero, satisfies $r(0)=0$, and $r(t)/t$ is non-increasing on $(0,\infty)$, then $g(\boldsymbol{x})$ is said to be a \textit{sparseness measure}.
\label{def:measure}
\end{definition}
Now we present the particularized version of \cite[Theorem 1]{gribonval2007highly} as follows.
\begin{lemma}
Assume $\boldsymbol{H}\boldsymbol{x}=\boldsymbol{b}$, where $\boldsymbol{x}\in \mathbb{R}^{n}$ is $k$-sparse, the matrix $\boldsymbol{H}\in \mathbb{R}^{m\times n}$ ($m<n$) has $\ell_{2}$-normalized columns and satisfies RIP for any $2k$-sparse vector with $\delta_{2k}<\frac{1}{3}$, and $\boldsymbol{b}\in \mathbb{R}^{m}$ is a noiseless measurement vector. If $g(\boldsymbol{x})$ in Eqs. \eqref{fun2}, \eqref{fun3}, \eqref{fun5} satisfies Definition \ref{def:measure}, then $\boldsymbol{x}$ is exactly recovered by solving Eq. \eqref{eq:problem4}, i.e., only global optimizers exist.
\end{lemma}
In the following we prove that the functions in Eqs. \eqref{fun2}, \eqref{fun3}, and \eqref{fun5} satisfy Definition \ref{def:measure}, proceeding by cases.
\begin{wrapfigure}[14]{r}{0.4\textwidth}
\begin{center}
\includegraphics[width=0.38\textwidth]{figures/auxUniquenessLog.png}
\end{center}
\caption{\small Plot of $g(w)/w$, with $g(w)$ as in Eq. \eqref{fun5} and $w>0$, verifying that $g(w)/w$ is non-increasing on $(0,\infty)$.}
\label{fig:auxLogProof}
\end{wrapfigure}
\begin{proof}
\textbf{Eq. \eqref{fun2}: }Take $g(w)= \log(1 + \lvert w \rvert)$ for any $w\in \mathbb{R}$. Clearly $g(0)=0$ and $g$ is not identically zero, so we only need to show that $g(w)/w$ is non-increasing on $(0,\infty)$. Define $r(w)=\frac{\log(1+w)}{w}$; its derivative is $r^{\prime}(w) = \frac{\frac{w}{1+w}-\log(1+w)}{w^{2}}$ for $w\in (0,\infty)$. Since $\frac{w}{1+w}-\log(1+w)< 0$, we have $r^{\prime}(w)< 0$, so $g(w)/w$ is non-increasing on $(0,\infty)$.
\textbf{Eq. \eqref{fun3}: }Take $g(w)= \frac{\lvert w\rvert}{2 + 2\lvert w\rvert}$ for any $w\in \mathbb{R}$. Clearly $g(0)=0$ and $g$ is not identically zero, so we only need to show that $g(w)/w$ is non-increasing on $(0,\infty)$. Define $r(w)=\frac{w}{2w+2w^{2}} = \frac{1}{2+2w}$, which is clearly decreasing; hence $g(w)/w$ is non-increasing on $(0,\infty)$.
\textbf{Eq. \eqref{fun5}: }Take $g(w)= \log(1+\lvert w \rvert) - \frac{\lvert w \rvert}{2 + 2\lvert w \rvert}$ for any $w\in \mathbb{R}$. Clearly $g(0)=0$ and $g$ is not identically zero, so we only need to show that $g(w)/w$ is non-increasing on $(0,\infty)$. For $w>0$ we have $g^{\prime\prime}(w)=-\frac{w}{(1+w)^{3}}<0$, so $g$ is concave on $(0,\infty)$ with $g(0)=0$, which implies that $g(w)/w$ is non-increasing there. For ease of exposition, Figure \ref{fig:auxLogProof} also plots $g(w)/w$, confirming this behavior.
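As a numerical complement to Figure \ref{fig:auxLogProof}, the monotonicity of $g(w)/w$ can be spot-checked on a logarithmic grid (a sanity check, not a proof):

```python
import math

def ratio(w):
    # g(w)/w for g(w) = log(1+w) - w/(2+2w), w > 0.
    return (math.log(1.0 + w) - w / (2.0 + 2.0 * w)) / w

# Logarithmic grid on (1e-4, 1e4); the sequence of ratios must be
# non-increasing if g(w)/w is non-increasing on (0, infinity).
ws = [10.0 ** (k / 10.0) for k in range(-40, 41)]
vals = [ratio(w) for w in ws]
assert all(a >= b - 1e-12 for a, b in zip(vals, vals[1:]))
```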
\end{proof}
\subsection{Part two}
For this second part we appeal to a generalization of \cite[Theorem 1]{gribonval2007highly} presented in \cite[Theorem 3.10]{woodworth2016compressed}. To exploit this generalized theorem we introduce the following definition.
\begin{definition}{(\textit{Admissible sparseness measure} \cite{woodworth2016compressed})}
A function $g:\mathbb{R}^{n}\rightarrow \mathbb{R}$ such that $g(\boldsymbol{w})=\sum_{i=1}^{n}r(\boldsymbol{w}[i])$ is said to be an admissible sparseness measure if
\begin{itemize}
\item $r(0)=0$, and $g$ is even on $\mathbb{R}$,
\item $r$ is continuous on $\mathbb{R}$, and strictly increasing and strictly concave on $\mathbb{R}$.
\end{itemize}
\label{def:admissible}
\end{definition}
Based on the above definition, we particularize \cite[Theorem 3.10]{woodworth2016compressed} in the lemma below to prove that the solution of the optimization problem in Eq. \eqref{eq:problem4} is unique when the function in Eq. \eqref{fun4} is used, under some mild conditions.
\begin{lemma}{(\cite[Theorem 3.10]{woodworth2016compressed})}
Assume $\boldsymbol{H}\boldsymbol{x}=\boldsymbol{b}$, where $\boldsymbol{x}\in \mathbb{R}^{n}$ is $k$-sparse, the matrix $\boldsymbol{H}\in \mathbb{R}^{m\times n}$ ($m<n$) has $\ell_{2}$-normalized columns and satisfies RIP with $\delta_{s}\in (0,1)$ for $s\geq 2k$, and $\boldsymbol{b}\in \mathbb{R}^{m}$ is a noiseless measurement vector. Define $\beta_{1},\beta_{2} >0$ to be the lower and upper bounds on the magnitudes of non-zero entries of feasible vectors of Eq. \eqref{eq:problem4} (their existence is guaranteed \cite{woodworth2016compressed}). If $kr(2\beta_{2}) < (s+k-1)r(\beta_{1})$, then $\boldsymbol{x}$ is exactly recovered by solving Eq. \eqref{eq:problem4}, i.e., only global optimizers exist.
\end{lemma}
In the following we prove that the function from Eq. \eqref{fun4} in Table \ref{tab:list} is able to exactly recover the signal $\boldsymbol{x}$ under some mild conditions.
\begin{proof}
\textbf{Eq. \eqref{fun4}: }Take $r(w)= \frac{w^{2}}{1 + w^{2}}$ for any $w\in \mathbb{R}$. It is straightforward to check that $r(0)=0$ and that $r$ is even, continuous, and strictly increasing on $[0,\infty)$. The second derivative of $r(w)$ is $r^{\prime \prime}(w)=\frac{2-6w^{2}}{(1+w^2)^3}$, so $r(w)$ is strictly concave when $w>\frac{1}{\sqrt{3}}$. Hence, to have a chance of exactly recovering the signal $\boldsymbol{x}$, we need to assume that the lower bound on the magnitudes of non-zero entries of feasible vectors satisfies $\beta_{1}>\frac{1}{\sqrt{3}}$. Without loss of generality we assume $\boldsymbol{x}$ is a normalized signal (in practical imaging applications $\boldsymbol{x}$ is always normalized). We then take $\beta_{1} = 0.6$ and $\beta_{2}=1.0$. In addition, assuming $\boldsymbol{H}$ satisfies RIP with $s\geq 4k+2$ and $\delta_{s}\in (0,1)$, it is numerically easy to verify that $kr(2\beta_{2}) < (s+k-1)r(\beta_{1})$.
\end{proof}
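The final numerical verification of $kr(2\beta_{2}) < (s+k-1)r(\beta_{1})$ can be sketched as follows; the sampled values of $k$ and $\beta_{1}$ are illustrative:

```python
def r(w):
    # Sparseness measure from Eq. (fun4): r(w) = w^2 / (1 + w^2).
    return w * w / (1.0 + w * w)

def recovery_condition(k, s, beta1, beta2):
    # Exact-recovery condition k * r(2*beta2) < (s + k - 1) * r(beta1).
    return k * r(2.0 * beta2) < (s + k - 1.0) * r(beta1)

# Spot-check with s = 4k + 2 and beta2 = 1.0 over a range of sparsity
# levels k and candidate lower bounds beta1 (illustrative values).
for k in range(1, 101):
    for beta1 in (0.5, 0.6, 0.7):
        assert recovery_condition(k, 4 * k + 2, beta1, 1.0)
```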
\section{Proof of Lemma \ref{lem:convergeAPG}}
\label{app:lemAPG}
Before proving Lemma \ref{lem:convergeAPG} we consider two definitions in the following which the loss function $F(\boldsymbol{x})=f(\boldsymbol{x}) + \lambda g(\boldsymbol{x})$ in Eq. \eqref{eq:problem4} satisfies. Recall that $\lambda\in (0,1]$.
\begin{definition}
A function $h:\mathbb{R}^{n}\rightarrow (-\infty,\infty]$ is said to be proper if $\text{dom }h \not = \emptyset$, where $\text{dom }h=\{\boldsymbol{x}\in \mathbb{R}^{n}: h(\boldsymbol{x})< \infty\}$.
\end{definition}
Since we assume the sensing matrix $\boldsymbol{H}$ satisfies RIP, the existence of a solution to Eq. \eqref{eq:problem4} is guaranteed, implying $\text{dom }F \not = \emptyset$; thus $F(\boldsymbol{x})$ in Eq. \eqref{eq:problem4} is proper.
\begin{definition}
A function $h:\mathbb{R}^{n}\rightarrow \mathbb{R}$ is coercive, if $h$ is bounded from below and $h(\boldsymbol{x})\rightarrow \infty$ when $\lVert \boldsymbol{x} \rVert_{2}\rightarrow \infty$.
\end{definition}
Considering that the invex functions listed in Table \ref{tab:list} and $f(\boldsymbol{x})=\lVert \boldsymbol{H}\boldsymbol{x} -\boldsymbol{v} \rVert_{2}^{2}$ (for fixed $\boldsymbol{v}$ and $\boldsymbol{H}$ satisfying RIP) are non-negative, the loss function satisfies $F(\boldsymbol{x})\geq 0$. The second part of the coercivity definition is trivially guaranteed since $\boldsymbol{H}$ satisfies RIP; otherwise we would be denying the existence of a global solution to Eq. \eqref{eq:problem4}, which is a contradiction.
Now we proceed to prove Lemma \ref{lem:convergeAPG}.
\begin{proof}
Line 6 in Algorithm \ref{alg:invexProximal} is given by
\begin{align}
\boldsymbol{v}^{(t+1)} = \argmin_{\boldsymbol{x}\in \mathbb{R}^{n}} \hspace{0.5em} \left\langle \nabla f(\boldsymbol{x}^{(t)}),\boldsymbol{x}-\boldsymbol{x}^{(t)} \right\rangle + \frac{1}{2\lambda \alpha_{1}} \lVert \boldsymbol{x}-\boldsymbol{x}^{(t)} \rVert_{2}^{2} + g(\boldsymbol{x}).
\label{eq:noninvex1}
\end{align}
Equality holds in the above equation because the proximal in Line 6 is the proximal of an invex function and therefore always maps to a global optimizer. Since $\boldsymbol{v}^{(t+1)}$ minimizes the objective in Eq. \eqref{eq:noninvex1}, comparing its objective value with that of $\boldsymbol{x}^{(t)}$ gives
\begin{align}
\left\langle \nabla f(\boldsymbol{x}^{(t)}),\boldsymbol{v}^{(t+1)}-\boldsymbol{x}^{(t)} \right\rangle + \frac{1}{2\lambda\alpha_{1}} \lVert \boldsymbol{v}^{(t+1)}-\boldsymbol{x}^{(t)} \rVert_{2}^{2} + g(\boldsymbol{v}^{(t+1)}) \leq g(\boldsymbol{x}^{(t)}).
\label{eq:noninvex2}
\end{align}
From the Lipschitz continuity of $\nabla f$ and Eq. \eqref{eq:noninvex2} we have
\begin{align}
F(\boldsymbol{v}^{(t+1)}) &\leq g(\boldsymbol{v}^{(t+1)}) + f(\boldsymbol{x}^{(t)}) + \left\langle \nabla f(\boldsymbol{x}^{(t)}),\boldsymbol{v}^{(t+1)}-\boldsymbol{x}^{(t)} \right\rangle + \frac{L}{2} \lVert \boldsymbol{v}^{(t+1)}-\boldsymbol{x}^{(t)} \rVert_{2}^{2} \nonumber\\
&\leq g(\boldsymbol{x}^{(t)}) - \left\langle \nabla f(\boldsymbol{x}^{(t)}),\boldsymbol{v}^{(t+1)}-\boldsymbol{x}^{(t)} \right\rangle - \frac{1}{2\lambda\alpha_{1}} \lVert \boldsymbol{v}^{(t+1)}-\boldsymbol{x}^{(t)} \rVert_{2}^{2} \nonumber\\
&+ f(\boldsymbol{x}^{(t)}) + \left\langle \nabla f(\boldsymbol{x}^{(t)}),\boldsymbol{v}^{(t+1)}-\boldsymbol{x}^{(t)} \right\rangle + \frac{L}{2} \lVert \boldsymbol{v}^{(t+1)}-\boldsymbol{x}^{(t)} \rVert_{2}^{2} \nonumber\\
&=F(\boldsymbol{x}^{(t)}) - \left(\frac{1}{2\lambda\alpha_{1}}-\frac{L}{2}\right)\lVert \boldsymbol{v}^{(t+1)}-\boldsymbol{x}^{(t)} \rVert_{2}^{2}.
\label{eq:noninvex3}
\end{align}
If $F(\boldsymbol{z}^{(t+1)})\leq F(\boldsymbol{v}^{(t+1)})$, then
\begin{align}
\boldsymbol{x}^{(t+1)} = \boldsymbol{z}^{(t+1)}, F(\boldsymbol{x}^{(t+1)})=F(\boldsymbol{z}^{(t+1)})\leq F(\boldsymbol{v}^{(t+1)}).
\label{eq:noninvex4}
\end{align}
If $F(\boldsymbol{z}^{(t+1)})> F(\boldsymbol{v}^{(t+1)})$, then
\begin{align}
\boldsymbol{x}^{(t+1)} = \boldsymbol{v}^{(t+1)}, F(\boldsymbol{x}^{(t+1)})=F(\boldsymbol{v}^{(t+1)}).
\label{eq:noninvex5}
\end{align}
From Eqs. \eqref{eq:noninvex3}, \eqref{eq:noninvex4} and \eqref{eq:noninvex5} we have
\begin{align}
F(\boldsymbol{x}^{(t+1)} )\leq F(\boldsymbol{v}^{(t+1)}) \leq F(\boldsymbol{x}^{(t)}).
\label{eq:nonIncreasing}
\end{align}
So
\begin{align}
F(\boldsymbol{x}^{(t+1)})\leq F(\boldsymbol{x}^{(1)}), F(\boldsymbol{v}^{(t+1)})\leq F(\boldsymbol{x}^{(1)}),
\label{eq:noninvex6}
\end{align}
for all $t$. Recall that the estimate $\boldsymbol{z}^{(t+1)}$ is unique because it is computed through the proximal of $g(\boldsymbol{x})$, which always maps to a global optimizer.
From Eq. \eqref{eq:nonIncreasing} we conclude that $F(\boldsymbol{x}^{(t)})$ is non-increasing, so for all $t>1$ we have $F(\boldsymbol{x}^{(t)})\leq F(\boldsymbol{x}^{(1)})$ and therefore $\boldsymbol{x}^{(t)}\in \{\boldsymbol{w}: F(\boldsymbol{w})\leq F(\boldsymbol{x}^{(1)})\}$ (a level set of $F$). Since $F(\boldsymbol{x})$ is coercive, all its level sets are bounded, so $\{\boldsymbol{x}^{(t)} \}$ and $\{\boldsymbol{v}^{(t)}\}$ are bounded and $\{\boldsymbol{x}^{(t)} \}$ has accumulation points. Let $\boldsymbol{x}^{*}$ be any accumulation point of $\{\boldsymbol{x}^{(t)}\}$, say with a subsequence satisfying $\boldsymbol{x}^{(t_{j}+1)}\rightarrow \boldsymbol{x}^{*}$ as $j\rightarrow \infty$, and set $F^{*} = \displaystyle \lim_{j\rightarrow \infty} \hspace{0.5em}F(\boldsymbol{x}^{(t_{j}+1)}) = F(\boldsymbol{x}^{*})$; the existence of this limit is guaranteed since $F$ is continuous. Then, from Eq. \eqref{eq:noninvex3} we have
\begin{align}
\left(\frac{1}{2\lambda\alpha_{1}}-\frac{L}{2}\right)\lVert \boldsymbol{v}^{(t+1)}-\boldsymbol{x}^{(t)} \rVert_{2}^{2} \leq F(\boldsymbol{x}^{(t)}) - F(\boldsymbol{v}^{(t+1)}) \leq F(\boldsymbol{x}^{(t)}) - F(\boldsymbol{x}^{(t+1)}).
\label{eq:noninvex7}
\end{align}
Summing over $t=1,2,\dots,\infty$, we have
\begin{align}
\left(\frac{1}{2\lambda\alpha_{1}}-\frac{L}{2}\right)\sum_{t=1}^{\infty}\lVert \boldsymbol{v}^{(t+1)}-\boldsymbol{x}^{(t)} \rVert_{2}^{2} \leq F(\boldsymbol{x}^{(1)}) - F^{*} < \infty.
\label{eq:noninvex8}
\end{align}
From $\alpha_{1}< \frac{1}{L}$ we have
\begin{align}
\lVert \boldsymbol{v}^{(t+1)}-\boldsymbol{x}^{(t)} \rVert_{2}^{2}\rightarrow 0, \text{ as } t\rightarrow \infty.
\label{eq:noninvex9}
\end{align}
From the optimality condition of Eq. \eqref{eq:noninvex1} we have
\begin{align}
\boldsymbol{0} &\in \nabla f(\boldsymbol{x}^{(t)}) + \frac{1}{\lambda\alpha_{1}}(\boldsymbol{v}^{(t+1)} - \boldsymbol{x}^{(t)}) + \partial g(\boldsymbol{v}^{(t+1)}) \nonumber\\
&=\nabla f(\boldsymbol{x}^{(t)}) + \nabla f(\boldsymbol{v}^{(t+1)}) - \nabla f(\boldsymbol{v}^{(t+1)}) + \frac{1}{\lambda\alpha_{1}}(\boldsymbol{v}^{(t+1)} - \boldsymbol{x}^{(t)}) + \partial g(\boldsymbol{v}^{(t+1)}).
\label{eq:noninvex10}
\end{align}
So we have
\begin{align}
-\nabla f(\boldsymbol{x}^{(t)}) + \nabla f(\boldsymbol{v}^{(t+1)}) - \frac{1}{\lambda\alpha_{1}}(\boldsymbol{v}^{(t+1)} - \boldsymbol{x}^{(t)}) \in \partial F(\boldsymbol{v}^{(t+1)}),
\label{eq:noninvex11}
\end{align}
and
\begin{align}
\left \lVert \nabla f(\boldsymbol{x}^{(t)})- \nabla f(\boldsymbol{v}^{(t+1)}) + \frac{1}{\lambda\alpha_{1}}(\boldsymbol{v}^{(t+1)} - \boldsymbol{x}^{(t)}) \right \rVert_{2} \leq \left(\frac{1}{\lambda\alpha_{1}} + L\right) \lVert \boldsymbol{v}^{(t+1)} - \boldsymbol{x}^{(t)} \rVert_{2} \rightarrow 0,
\label{eq:noninvex12}
\end{align}
as $t\rightarrow \infty$.
From Eq. \eqref{eq:noninvex9} we have $\boldsymbol{v}^{(t_{j}+1)}\rightarrow \boldsymbol{x}^{*}$ as $j\rightarrow \infty$. From Eq. \eqref{eq:noninvex1} we have
\begin{align}
&\left\langle \nabla f(\boldsymbol{x}^{(t_{j})}),\boldsymbol{v}^{(t_{j}+1)}-\boldsymbol{x}^{(t_{j})} \right\rangle + \frac{1}{2\lambda\alpha_{1}} \lVert \boldsymbol{v}^{(t_{j}+1)}-\boldsymbol{x}^{(t_{j})} \rVert_{2}^{2} + g(\boldsymbol{v}^{(t_{j}+1)}) \nonumber\\
&\leq \left\langle \nabla f(\boldsymbol{x}^{(t_{j})}),\boldsymbol{x}^{*}-\boldsymbol{x}^{(t_{j})} \right\rangle + \frac{1}{2\lambda\alpha_{1}} \lVert \boldsymbol{x}^{*}-\boldsymbol{x}^{(t_{j})} \rVert_{2}^{2} + g(\boldsymbol{x}^{*}).
\label{eq:noninvex13}
\end{align}
So
\begin{align}
\limsup_{j\rightarrow \infty} \hspace{0.5em}g(\boldsymbol{v}^{(t_{j}+1)}) \leq g(\boldsymbol{x}^{*}).
\label{eq:noninvex14}
\end{align}
From the continuity assumption on $g$ we have $\displaystyle \liminf_{j\rightarrow \infty} \hspace{0.5em}g(\boldsymbol{v}^{(t_{j}+1)}) \geq g(\boldsymbol{x}^{*})$, then we conclude
\begin{align}
\lim_{j\rightarrow \infty} \hspace{0.5em}g(\boldsymbol{v}^{(t_{j}+1)}) = g(\boldsymbol{x}^{*}).
\label{eq:noninvex15}
\end{align}
Because $f$ is continuously differentiable, together with Eq. \eqref{eq:noninvex15} we have $\displaystyle \lim_{j\rightarrow \infty} \hspace{0.5em}F(\boldsymbol{v}^{(t_{j}+1)}) = F(\boldsymbol{x}^{*})$. From $\boldsymbol{v}^{(t_{j}+1)}\rightarrow \boldsymbol{x}^{*}$ and Eq. \eqref{eq:noninvex11} we have $\boldsymbol{0} \in \partial F(\boldsymbol{x}^{*})$. Therefore, since $F(\boldsymbol{x})$ is invex by Theorem \ref{theo:ourCS}, the sequence $\{\boldsymbol{x}^{(t)}\}$ converges to a global minimizer of $F(\boldsymbol{x})$.
\end{proof}
\subsection{Numerical Validation of Lemma \ref{lem:convergeAPG}}
To numerically validate the proof of Lemma \ref{lem:convergeAPG} provided in the above section, we present Fig. \ref{fig:msevstimeiter}. This figure reports the numerical convergence of Algorithm \ref{alg:invexProximal} for all invex regularizers when recovering an image of size $256\times 256$ from blurred data, as explained in Experiment 1 for the noiseless case. Specifically, Fig. \ref{fig:msevstimeiter} (left) reports how the loss function $F(\boldsymbol{x})=\ell_{2}+ \lambda g(\boldsymbol{x})$, analyzed in the above proof, is minimized over $T=800$ iterations; this plot numerically validates the proof of Lemma \ref{lem:convergeAPG}. As a complement, Fig. \ref{fig:msevstimeiter} (right) presents the running time of Algorithm \ref{alg:invexProximal} for $T=800$ iterations for all invex regularizers and the $\ell_{1}$-norm. This second plot shows that Algorithm \ref{alg:invexProximal} with the $\ell_{1}$-norm regularizer requires 1.8 seconds less than its invex competitors to perform $T=800$ iterations. This negligible difference is expected, since Table \ref{tab:timesProx} showed that the running times of the proximal operators of all invex regularizers differ from that of the $\ell_{1}$-norm by only milliseconds.
\begin{figure}
\caption{Numerical convergence of Algorithm \ref{alg:invexProximal} (left) and its running time (right) over $T=800$ iterations.}
\label{fig:msevstimeiter}
\end{figure}
\section{Proof of Lemma \ref{theo:PnP}}
\label{app:PnP}
We proceed to prove this lemma by extending the mathematical analysis in \cite{kamilov2017plug} to invex functions. To that end, we recall some definitions and a classical result from monotone operator theory needed for the proof of this lemma as follows.
\begin{definition}{(\textit{Nonexpansiveness})}
An operator $F:\mathbb{R}^{n}\rightarrow \mathbb{R}^{n}$ is said to be nonexpansive if it is Lipschitz continuous as in Definition \ref{def:lipschitz} with $L=1$.
\label{def:nonexpansive}
\end{definition}
Based on the nonexpansiveness concept we give the following definition.
\begin{definition}
For a constant $\beta \in (0,1)$, we say an operator $G$ is $\beta$-averaged if there exists a nonexpansive operator $F$ such that $G = (1-\beta)\boldsymbol{I} + \beta F$.
\label{def:average}
\end{definition}
Now based on the concept of average operators we recall the following classical results.
\begin{lemma}{(\cite[Proposition 4.44]{bauschke2011convex})}
Let $G_{1}$ be $\beta_{1}$-averaged and $G_{2}$ be $\beta_{2}$-averaged. Then the composite operator $G\mathrel{\ensurestackMath{\stackon[1pt]{=}{\scriptstyle\Delta}}} G_{2}\circ G_{1}$ is $\beta$-averaged, where
\begin{align}
\beta \mathrel{\ensurestackMath{\stackon[1pt]{=}{\scriptstyle\Delta}}} \frac{\beta_{1} + \beta_{2} - 2\beta_{1}\beta_{2}}{1-\beta_{1}\beta_{2}}.
\end{align}
\label{lem:composedAverage}
\end{lemma}
\begin{lemma}
Let $F$ be a $\beta$-averaged operator with $\beta \in (0,1)$. Then
\begin{align}
\lVert F(\boldsymbol{x})-F(\boldsymbol{y}) \rVert_{2}^{2} \leq \lVert \boldsymbol{x} -\boldsymbol{y} \rVert_{2}^{2} - \left(\frac{1-\beta}{\beta}\right) \lVert \boldsymbol{x}-F(\boldsymbol{x}) - \boldsymbol{y} + F(\boldsymbol{y})\rVert_{2}^{2}.
\end{align}
\label{lem:equiavalentAverage}
\end{lemma}
Now we proceed to prove Lemma \ref{theo:PnP}.
\begin{proof}
Following the assumptions made in Lemma \ref{theo:PnP}, we start this proof by noticing that for a differentiable invex function $f$ a point $\boldsymbol{x}$ is a global minimizer of $f$ according to Theorem \ref{theo:optimal_v0} if
\begin{align}
\boldsymbol{0} = \nabla f(\boldsymbol{x}) \leftrightarrow \boldsymbol{x} = (\boldsymbol{I}-\alpha \nabla f )(\boldsymbol{x}),
\end{align}
for non-zero $\alpha$. In other words, $\boldsymbol{x}$ is a minimizer of $f$ if and only if it is a fixed point of the mapping $\boldsymbol{I}-\alpha \nabla f$. This property of invex functions is what allows us to extend the mathematical guarantees in \cite{kamilov2017plug}, given there only for convex functions. Now, since $f$ is assumed to have an $L$-Lipschitz continuous gradient, the operator $\boldsymbol{I}-\alpha \nabla f$ is Lipschitz with parameter $L_{G}=\max\{1,\lvert 1-\alpha L \rvert \}$ and is therefore nonexpansive for $\alpha \in (0,2/L]$. It is averaged for $\alpha \in (0,2/L)$ since
\begin{align}
\boldsymbol{I}-\alpha \nabla f = \left(1-\frac{\alpha L}{2}\right) \boldsymbol{I} + \frac{\alpha L}{2} \left(\boldsymbol{I}-\frac{2}{L} \nabla f\right),
\end{align}
so $\boldsymbol{I}-\alpha \nabla f$ is $(\alpha L/2)$-averaged, with $\alpha L/2 <1$.
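The nonexpansiveness of the gradient-step operator $\boldsymbol{I}-\alpha\nabla f$ can be spot-checked numerically for a random least-squares $f(\boldsymbol{x})=\frac{1}{2}\lVert \boldsymbol{H}\boldsymbol{x}-\boldsymbol{b}\rVert_{2}^{2}$ (an illustrative check with synthetic data, not part of the proof):

```python
import numpy as np

rng = np.random.default_rng(0)
H = rng.standard_normal((20, 50))
b = rng.standard_normal(20)
L = float(np.linalg.eigvalsh(H.T @ H).max())  # Lipschitz constant of grad f
alpha = 1.0 / L

def grad_step(x):
    # G(x) = x - alpha * grad f(x), with f(x) = 0.5 * ||Hx - b||^2.
    return x - alpha * (H.T @ (H @ x - b))

# Nonexpansiveness: ||G(x) - G(y)|| <= ||x - y|| for random pairs.
for _ in range(100):
    x, y = rng.standard_normal(50), rng.standard_normal(50)
    assert np.linalg.norm(grad_step(x) - grad_step(y)) <= np.linalg.norm(x - y) + 1e-10
```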
Assume the denoiser $d$ is $\kappa$-averaged and let $G_{\alpha} = \boldsymbol{I}-\alpha \nabla f$; as shown above, $G_{\alpha}$ is $(\alpha L/2)$-averaged for any $\alpha \in (0,2/L)$. From Lemma \ref{lem:composedAverage}, their composition $P=d\circ G_{\alpha}$ is
\begin{align}
\beta \mathrel{\ensurestackMath{\stackon[1pt]{=}{\scriptstyle\Delta}}} \frac{\kappa + \alpha L/2 - \kappa\alpha L}{1-\kappa\alpha L/2},
\end{align}
averaged. Consider a single iteration $\boldsymbol{v}^{+}=P(\boldsymbol{x})$; then for any fixed point $\boldsymbol{x}^{*}=P(\boldsymbol{x}^{*})$ we have
\begin{align}
\lVert \boldsymbol{v}^{+} - \boldsymbol{x}^{*} \rVert_{2}^{2} &= \lVert P(\boldsymbol{x}) - P(\boldsymbol{x}^{*}) \rVert_{2}^{2} \nonumber\\
&\leq \lVert \boldsymbol{x} - \boldsymbol{x}^{*} \rVert_{2}^{2} - \left(\frac{1-\beta}{\beta}\right) \lVert \boldsymbol{x}-P(\boldsymbol{x}) - \boldsymbol{x}^{*} + P(\boldsymbol{x}^{*}) \rVert_{2}^{2} \nonumber\\
&= \lVert \boldsymbol{x} - \boldsymbol{x}^{*} \rVert_{2}^{2} - \left(\frac{1-\beta}{\beta}\right) \lVert \boldsymbol{x}-P(\boldsymbol{x}) \rVert_{2}^{2},
\end{align}
where we used Lemma \ref{lem:equiavalentAverage}. Applying this bound at iteration $t+1$ of Line 6 in Algorithm \ref{alg:invexPnP} and rearranging terms, we obtain
\begin{align}
\lVert \boldsymbol{x}^{(t)} - P(\boldsymbol{x}^{(t)})\rVert_{2}^{2} \leq \left(\frac{\beta}{1-\beta}\right) \left[\lVert \boldsymbol{x}^{(t)} - \boldsymbol{x}^{*} \rVert_{2}^{2} - \lVert \boldsymbol{v}^{(t+1)} - \boldsymbol{x}^{*} \rVert_{2}^{2} \right].
\end{align}
If $F(\boldsymbol{z}^{(t+1)})\leq F(\boldsymbol{v}^{(t+1)})$, then
\begin{align}
\boldsymbol{x}^{(t+1)} = \boldsymbol{z}^{(t+1)}, F(\boldsymbol{x}^{(t+1)})=F(\boldsymbol{z}^{(t+1)})\leq F(\boldsymbol{v}^{(t+1)}).
\label{eq:noninvex41}
\end{align}
If $F(\boldsymbol{z}^{(t+1)})> F(\boldsymbol{v}^{(t+1)})$, then
\begin{align}
\boldsymbol{x}^{(t+1)} = \boldsymbol{v}^{(t+1)}, F(\boldsymbol{x}^{(t+1)})=F(\boldsymbol{v}^{(t+1)}).
\label{eq:noninvex51}
\end{align}
From Eqs. \eqref{eq:noninvex41} and \eqref{eq:noninvex51} we have
\begin{align}
F(\boldsymbol{x}^{(t+1)} )\leq F(\boldsymbol{v}^{(t+1)}) \leq F(\boldsymbol{x}^{(t)}).
\label{eq:nonIncreasing1}
\end{align}
From Eq. \eqref{eq:nonIncreasing1} we conclude that $F(\boldsymbol{x}^{(t)})$ is non-increasing, so for all $t>1$ we have $F(\boldsymbol{x}^{(t)})\leq F(\boldsymbol{x}^{(1)})$ and therefore $\boldsymbol{x}^{(t)}\in \{\boldsymbol{w}: F(\boldsymbol{w})\leq F(\boldsymbol{x}^{(1)})\}$ (a level set of $F$). Since $F(\boldsymbol{x})$ is coercive, all its level sets are bounded (as concluded in Appendix \ref{app:lemAPG}), so $\{\boldsymbol{x}^{(t)} \}$ and $\{\boldsymbol{v}^{(t)}\}$ are bounded and $\{\boldsymbol{x}^{(t)} \}$ has accumulation points. This guarantees the existence of $\boldsymbol{x}^{*}$: for a subsequence satisfying $\boldsymbol{x}^{(t_{j}+1)}\rightarrow \boldsymbol{x}^{*}$ as $j\rightarrow \infty$, we also have $\displaystyle \lim_{j\rightarrow \infty} \hspace{0.5em}F(\boldsymbol{x}^{(t_{j}+1)}) = F(\boldsymbol{x}^{*})=F^{*}$. Then, from Eqs. \eqref{eq:noninvex41}, \eqref{eq:noninvex51}, and the continuity of $F$, it is easy to see that $\lVert \boldsymbol{x}^{(t+1)} - \boldsymbol{x}^{*} \rVert_{2}^{2}\leq \lVert \boldsymbol{v}^{(t+1)} - \boldsymbol{x}^{*} \rVert_{2}^{2}$, which leads to
\begin{align}
\lVert \boldsymbol{x}^{(t)} - P(\boldsymbol{x}^{(t)})\rVert_{2}^{2} \leq \left(\frac{\beta}{1-\beta}\right) \left[\lVert \boldsymbol{x}^{(t)} - \boldsymbol{x}^{*} \rVert_{2}^{2} - \lVert \boldsymbol{x}^{(t+1)} - \boldsymbol{x}^{*} \rVert_{2}^{2} \right].
\end{align}
By averaging this inequality over $T$ iterations and dropping the last term $\lVert \boldsymbol{x}^{(t+1)} - \boldsymbol{x}^{*} \rVert_{2}^{2}$, we obtain
\begin{align}
\frac{1}{T}\sum_{t=1}^{T} \lVert \boldsymbol{x}^{(t)} - P(\boldsymbol{x}^{(t)}) \rVert_{2}^{2} \leq \frac{2}{T}\left(\frac{1+\kappa}{1-\kappa}\right) \lVert \boldsymbol{x}^{(0)} - \boldsymbol{x}^{*} \rVert_{2}^{2}.
\label{eq:finalPnP}
\end{align}
To obtain a bound depending only on $\kappa \in (0,1)$, note that for any $\alpha \in (0,1/L]$ we can write
\begin{align}
\frac{\beta}{1-\beta} = \frac{\kappa + \alpha L/2 - \kappa \alpha L}{(1-\kappa)(1-\alpha L/2)} \leq \frac{\kappa + \frac{1}{2}}{\frac{1-\kappa }{2}} \leq 2 \left(\frac{1+\kappa}{1-\kappa}\right).
\label{eq:finalPnP1}
\end{align}
Thus, the result follows from Eqs. \eqref{eq:finalPnP} and \eqref{eq:finalPnP1}.
\end{proof}
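The bound in Eq. \eqref{eq:finalPnP1} can also be verified numerically over a grid of $(\kappa,\alpha)$ values (an illustrative check with $L=1$):

```python
import numpy as np

def ratio_bound(kappa, alpha, L=1.0):
    # beta for the composition P = d o G_alpha, and the bound 2(1+k)/(1-k).
    a = alpha * L / 2.0
    beta = (kappa + a - 2.0 * kappa * a) / (1.0 - kappa * a)
    return beta / (1.0 - beta), 2.0 * (1.0 + kappa) / (1.0 - kappa)

# Check beta/(1-beta) <= 2(1+kappa)/(1-kappa) on a grid of valid values.
for kappa in np.linspace(0.05, 0.95, 19):
    for alpha in np.linspace(1e-3, 1.0, 50):  # alpha in (0, 1/L], L = 1
        lhs, rhs = ratio_bound(float(kappa), float(alpha))
        assert lhs <= rhs + 1e-12
```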
\subsection{Pseudo-code for plug-and-play invex imaging}
For the sake of completeness, we present Algorithm \ref{alg:invexPnP}, the pseudo-code of the plug-and-play version of APG for solving Eq. \eqref{eq:problem4}. The accelerated convergence of APG is provided by the two auxiliary variables $\boldsymbol{y}^{(t)}$ and $\boldsymbol{z}^{(t+1)}$ in Lines 4 and 5. Line 6 replaces the proximal operator of APG with the neural-network-based denoiser Noise2Void \cite{krull2019noise2void}, and Line 8 computes a monitoring constraint to satisfy the sufficient descent property.
\begin{algorithm}[H]
\caption{Plug-and-play Proximal Gradient Algorithm}
\label{alg:invexPnP}
\begin{algorithmic}[1]
\State{\textbf{input}: Tolerance constant $\epsilon\in (0,1)$, initial point $\boldsymbol{x}^{(0)}$, and number of iterations $T$}
\State{\textbf{initialize}: $\boldsymbol{x}^{(1)}=\boldsymbol{x}^{(0)}=\boldsymbol{z}^{(0)}, r_{1}=1,r_{0}=0, \alpha_{1},\alpha_{2}< \frac{1}{L}$, and $\lambda \in (0,1]$}
\For{$t=1$ to $T$}
\State{$\boldsymbol{y}^{(t)}= \boldsymbol{x}^{(t)} + \frac{r_{t-1}}{r_{t}}(\boldsymbol{z}^{(t)}-\boldsymbol{x}^{(t)}) + \frac{r_{t-1}-1}{r_{t}}(\boldsymbol{x}^{(t)}- \boldsymbol{x}^{(t-1)})$}
\State{$\boldsymbol{z}^{(t+1)}=\text{prox}_{\alpha_{2} \lambda g}(\boldsymbol{y}^{(t)} - \alpha_{2}\nabla f(\boldsymbol{y}^{(t)}))$}
\State{$\boldsymbol{v}^{(t+1)} = \text{Noise2Void}(\boldsymbol{x}^{(t)} - \alpha_{1}\nabla f(\boldsymbol{x}^{(t)}))$ \Comment{This calls the trained Noise2Void model}}
\State{$r_{t+1}=\frac{\sqrt{4(r_{t})^{2}+1}+1}{2}$}
\State{$\boldsymbol{x}^{(t+1)}=\left \lbrace\begin{array}{ll}
\boldsymbol{z}^{(t+1)}, & \text{ if }f(\boldsymbol{z}^{(t+1)} ) + \lambda g(\boldsymbol{z}^{(t+1)} )\leq f(\boldsymbol{v}^{(t+1)} ) + \lambda g(\boldsymbol{v}^{(t+1)} )\\
\boldsymbol{v}^{(t+1)}, & \text{ otherwise }
\end{array}\right.$}
\EndFor
\State{\textbf{return:} $\boldsymbol{x}^{(T)}$}
\end{algorithmic}
\end{algorithm}
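A compact Python sketch of Algorithm \ref{alg:invexPnP} follows; the \texttt{denoise} argument stands in for the trained Noise2Void model, and all names are illustrative:

```python
import numpy as np

def pnp_apg(grad_f, F, prox_g, denoise, x0, alpha1, alpha2, T=100):
    # Sketch of the plug-and-play APG loop above; `denoise` stands in for
    # the trained Noise2Void model (any callable denoiser works here).
    x_prev, x, z = x0.copy(), x0.copy(), x0.copy()
    r_prev, r = 0.0, 1.0
    for _ in range(T):
        y = x + (r_prev / r) * (z - x) + ((r_prev - 1.0) / r) * (x - x_prev)
        z = prox_g(y - alpha2 * grad_f(y))          # Line 5: proximal step
        v = denoise(x - alpha1 * grad_f(x))         # Line 6: denoiser step
        r_prev, r = r, (np.sqrt(4.0 * r ** 2 + 1.0) + 1.0) / 2.0
        x_prev = x
        x = z if F(z) <= F(v) else v                # Line 8: monitor
    return x
```

With the identity map as both proximal and denoiser and $f(\boldsymbol{x})=\frac{1}{2}\lVert\boldsymbol{x}-\boldsymbol{b}\rVert_{2}^{2}$, the iterates converge to $\boldsymbol{b}$, which makes the monitored update easy to test.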
\section{Proof of Lemma \ref{lem:convergeUnrolling}}
\label{app:unrolling}
\begin{proof}
To prove this lemma, we start by exploiting the convexity of $\lambda g(\boldsymbol{x}) + \frac{1}{2}\lVert \boldsymbol{x} -\boldsymbol{u}\rVert^{2}_{2}$ for fixed $\boldsymbol{u}\in \mathbb{R}^{n}$ and $\lambda\in (0,1]$, which holds according to Theorem \ref{theo:proximalProof}. Then, for all functions in Table \ref{tab:list}, we have
\begin{align}
\lambda g(\boldsymbol{x}) + \frac{1}{2}\lVert \boldsymbol{x} -\boldsymbol{u}\rVert^{2}_{2} - \lambda g(\boldsymbol{y}) - \frac{1}{2}\lVert \boldsymbol{y} -\boldsymbol{u}\rVert^{2}_{2} &\geq \left(\boldsymbol{\zeta}_{y} + \boldsymbol{y}-\boldsymbol{u}\right)^{T}(\boldsymbol{x}-\boldsymbol{y}) \nonumber\\
\lambda g(\boldsymbol{x}) - \lambda g(\boldsymbol{y}) &\geq \left(\boldsymbol{\zeta}_{y} + \boldsymbol{y}-\boldsymbol{u}\right)^{T}(\boldsymbol{x}-\boldsymbol{y}) + \frac{1}{2}\lVert \boldsymbol{y} -\boldsymbol{u}\rVert^{2}_{2} - \frac{1}{2}\lVert \boldsymbol{x} -\boldsymbol{u}\rVert^{2}_{2} \nonumber\\
\lambda g(\boldsymbol{x}) - \lambda g(\boldsymbol{y}) & \geq \left(\boldsymbol{\zeta}_{y} + \boldsymbol{y}-\boldsymbol{u}\right)^{T}(\boldsymbol{x}-\boldsymbol{y}) + (\boldsymbol{x}-\boldsymbol{u})^{T}(\boldsymbol{y}-\boldsymbol{x})
\label{eq:proofLem1}
\end{align}
for all $\boldsymbol{x},\boldsymbol{y}\in \mathbb{R}^{n}$ and $\boldsymbol{\zeta}_{y}\in \partial \lambda g(\boldsymbol{y})$, where the third inequality comes from the convexity of $\boldsymbol{x}\mapsto \frac{1}{2}\lVert \boldsymbol{x} -\boldsymbol{u}\rVert^{2}_{2}$. Then, from Eq. \eqref{eq:proofLem1} we conclude
\begin{align}
\lambda g(\boldsymbol{x}) - \lambda g(\boldsymbol{y}) & \geq \boldsymbol{\zeta}^{T}_{y} (\boldsymbol{x}-\boldsymbol{y}) - \lVert \boldsymbol{x}-\boldsymbol{y} \rVert_{2}^{2},
\label{eq:quasiConvex}
\end{align}
for all $\boldsymbol{x},\boldsymbol{y}\in \mathbb{R}^{n}$, and $\boldsymbol{\zeta}_{y}\in \partial \lambda g(\boldsymbol{y})$.
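Inequality \eqref{eq:quasiConvex} can be spot-checked in the scalar case for $g(x)=\log(1+\lvert x\rvert)$ (an illustrative numerical check on random points):

```python
import numpy as np

rng = np.random.default_rng(1)
lam = 0.7  # any lam in (0, 1]

def g(x):
    # lam * g(x) with g(x) = log(1 + |x|).
    return lam * np.log1p(abs(x))

def subgrad(y):
    # Subgradient of lam * log(1 + |y|) at y != 0.
    return lam * np.sign(y) / (1.0 + abs(y))

# Eq. (quasiConvex) in scalar form:
# g(x) - g(y) >= zeta_y * (x - y) - (x - y)^2.
for _ in range(1000):
    x, y = rng.uniform(-5.0, 5.0), rng.uniform(-5.0, 5.0)
    if y == 0.0:
        continue
    assert g(x) - g(y) >= subgrad(y) * (x - y) - (x - y) ** 2 - 1e-12
```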
The iterative procedure summarized in Algorithm \ref{alg:unrolling} can be written as
\begin{align}
\boldsymbol{x}^{(t+1)} = \argmin_{\boldsymbol{x}\in \mathbb{R}^{n}} \hspace{0.5em} \left\langle \nabla f(\boldsymbol{x}^{(t)}),\boldsymbol{x}-\boldsymbol{x}^{(t)} \right\rangle + \frac{1}{2\alpha_{t} \lambda} \lVert \boldsymbol{x}-\boldsymbol{x}^{(t)} \rVert_{2}^{2} + g(\boldsymbol{x})
\label{eq:proofLem2}
\end{align}
We write equality in the above equation because the proximal problem in Eq. \eqref{eq:prox1} is invex and therefore always maps to a global optimizer. From the Lipschitz continuity of $\nabla f$ we have
\begin{align}
f(\boldsymbol{x}^{(t+1)}) \leq f(\boldsymbol{x}^{(t)}) + \left\langle \nabla f(\boldsymbol{x}^{(t)}),\boldsymbol{x}^{(t+1)}-\boldsymbol{x}^{(t)} \right\rangle + \frac{L}{2} \lVert \boldsymbol{x}^{(t+1)}-\boldsymbol{x}^{(t)} \rVert_{2}^{2}.
\label{eq:proofLem3}
\end{align}
Since the optimality condition of Eq. \eqref{eq:proofLem2} gives $-\nabla f(\boldsymbol{x}^{(t)}) + \frac{1}{\alpha_{t}\lambda} \left( \boldsymbol{x}^{(t)} -\boldsymbol{x}^{(t+1)}\right) \in \partial g(\boldsymbol{x}^{(t+1)})$, Eq. \eqref{eq:quasiConvex} leads to
\begin{align}
\lambda g(\boldsymbol{x}^{(t)}) - \lambda g(\boldsymbol{x}^{(t+1)}) &\geq \left\langle -\nabla f(\boldsymbol{x}^{(t)}) + \frac{1}{\alpha_{t}\lambda} \left( \boldsymbol{x}^{(t)} -\boldsymbol{x}^{(t+1)}\right) , \boldsymbol{x}^{(t)}-\boldsymbol{x}^{(t+1)} \right\rangle - \lVert \boldsymbol{x}^{(t)}-\boldsymbol{x}^{(t+1)} \rVert_{2}^{2} \nonumber\\
&\geq \left\langle \nabla f(\boldsymbol{x}^{(t)}) , \boldsymbol{x}^{(t+1)} - \boldsymbol{x}^{(t)} \right\rangle + \left(\frac{1}{\alpha_{t}\lambda} - 1\right)\lVert \boldsymbol{x}^{(t)}-\boldsymbol{x}^{(t+1)} \rVert_{2}^{2}.
\label{eq:proofLem4}
\end{align}
Combining Eq. \eqref{eq:proofLem4} with Eq. \eqref{eq:proofLem3} yields
\begin{align}
\lambda g(\boldsymbol{x}^{(t)}) - \lambda g(\boldsymbol{x}^{(t+1)}) &\geq f(\boldsymbol{x}^{(t+1)}) - f(\boldsymbol{x}^{(t)}) - \frac{L}{2} \lVert \boldsymbol{x}^{(t+1)}-\boldsymbol{x}^{(t)} \rVert_{2}^{2} +\left(\frac{1}{\alpha_{t}\lambda} - 1\right)\lVert \boldsymbol{x}^{(t)}-\boldsymbol{x}^{(t+1)} \rVert_{2}^{2} \nonumber\\
f(\boldsymbol{x}^{(t)}) + \lambda g(\boldsymbol{x}^{(t)})&\geq f(\boldsymbol{x}^{(t+1)}) + \lambda g(\boldsymbol{x}^{(t+1)}) + \left( \frac{1}{\alpha_{t}\lambda} - 1 - \frac{L}{2} \right)\lVert \boldsymbol{x}^{(t)}-\boldsymbol{x}^{(t+1)} \rVert_{2}^{2}.
\label{eq:proofLem5}
\end{align}
Observe that by taking $\alpha_{t}<\frac{2}{L+2}$, Eq. \eqref{eq:proofLem5} gives $f(\boldsymbol{x}^{(t)}) + \lambda g(\boldsymbol{x}^{(t)})\geq f(\boldsymbol{x}^{(t+1)}) + \lambda g(\boldsymbol{x}^{(t+1)})$, which is a sufficient decrease condition. In addition, since the invex functions in Table \ref{tab:list} and $f(\boldsymbol{x})=\lVert \boldsymbol{H}\boldsymbol{x} -\boldsymbol{v} \rVert_{2}^{2}$ are nonnegative, the loss function in Eq. \eqref{eq:problem4} is bounded below. Thus, in particular, $f(\boldsymbol{x}^{(t)}) + \lambda g(\boldsymbol{x}^{(t)}) - (f(\boldsymbol{x}^{(t+1)}) + \lambda g(\boldsymbol{x}^{(t+1)})) \rightarrow 0$ as $t\rightarrow \infty$, which, combined with Eq. \eqref{eq:proofLem5}, implies that $\lVert \boldsymbol{x}^{(t)}-\boldsymbol{x}^{(t+1)} \rVert_{2}^{2}\rightarrow 0$ as $t\rightarrow \infty$. The latter convergence implies the existence of fixed points of the proximal iteration in Algorithm \ref{alg:unrolling} (equivalently, of Eq. \eqref{eq:proofLem2}), a sufficient condition to guarantee that the sequence $\{\boldsymbol{x}^{(t)}\}$ converges to a stationary point of $f(\boldsymbol{x}) + \lambda g(\boldsymbol{x})$. Since Theorem \ref{theo:ourCS} established that the loss function in Eq. \eqref{eq:problem4} is invex, $\{\boldsymbol{x}^{(t)}\}$ therefore converges to a global minimizer.
\end{proof}
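To illustrate the sufficient decrease guaranteed by this lemma, the following NumPy sketch runs the proximal iteration of Eq.~\eqref{eq:proofLem2} with $f(\boldsymbol{x})=\lVert \boldsymbol{H}\boldsymbol{x}-\boldsymbol{v}\rVert_2^2$ and, as a stand-in for the invex regularizers of Table~\ref{tab:list}, the $\ell_1$-norm (whose proximal is the soft-thresholding operator); the problem data and parameter values are illustrative, not taken from the paper.

```python
import numpy as np

def soft_threshold(z, tau):
    # Proximal operator of tau * ||.||_1 (stand-in for the proximals in the paper)
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

rng = np.random.default_rng(0)
m, n = 30, 50
H = rng.standard_normal((m, n))
v = H @ rng.standard_normal(n)

lam = 0.5                                    # lambda in (0, 1]
L = 2.0 * np.linalg.norm(H, 2) ** 2          # Lipschitz constant of grad f, f(x) = ||Hx - v||^2
alpha = 1.9 / (L + 2.0)                      # step size alpha_t < 2/(L + 2)

def loss(x):
    return np.sum((H @ x - v) ** 2) + lam * np.sum(np.abs(x))

x = np.zeros(n)
losses = [loss(x)]
for _ in range(200):
    grad = 2.0 * H.T @ (H @ x - v)                           # gradient of f
    x = soft_threshold(x - alpha * lam * grad, alpha * lam)  # proximal-gradient step
    losses.append(loss(x))

# Sufficient decrease: f + lambda*g is monotonically non-increasing
assert all(b <= a + 1e-9 for a, b in zip(losses, losses[1:]))
```

With $\alpha_t < 2/(L+2)$ the objective $f + \lambda g$ is non-increasing along the iterates, as predicted by Eq.~\eqref{eq:proofLem5}.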
\section{Image Compressive Sensing Experiments Evaluated with SSIM metric}
\label{app:newResults}
In this section we complement the results of Experiments 1, 2, and 3 of Section \ref{others}. We assess the imaging quality of these experiments using the structural similarity index measure (SSIM). The best and worst performing invex functions are highlighted in boldface and underlined, respectively.
\textbf{Experiment 1} studies the effect of different invex regularizers under Algorithm \ref{alg:invexProximal}. The numerical results of this study are summarized in Table \ref{tab:globalResults1.1}. We also present Figure \ref{fig:exp1}, which illustrates reconstructed images, for $SNR=30$dB, obtained by Eqs. \eqref{fun1}, \eqref{fun2}, \eqref{fun3}, \eqref{fun4}, and \eqref{fun5}, compared with the outputs from FISTA, TVAL3, and ReconNet. In addition, to numerically evaluate their performance we report the SSIM for each image.
\begin{table}[ht]
\centering
\caption{Comparison between convex and invex regularizers, in terms of SSIM, under Algorithm \ref{alg:invexProximal}, using $p=0.5$ for Eq.~\eqref{fun1}. }
\resizebox{1\textwidth}{!}{\renewcommand{\arraystretch}{1.3}
\begin{tabular}{P{1cm} | P{1.3cm} P{1.3cm} P{1.3cm} P{1.3cm} P{1.3cm} | P{2.5cm} P{2cm} P{2.2cm}}
\hline
& \multicolumn{5}{P{7.5cm}|}{(Experiment 1) Algorithm \ref{alg:invexProximal}, $p=0.5$ for Eq. \eqref{fun1}.} & FISTA \cite{beck2009fast} & TVAL3 \cite{li2013efficient} & ReconNet \cite{kulkarni2016reconnet}\\
\hline
SNR & Eq. \eqref{fun1} & Eq. \eqref{fun2} &Eq. \eqref{fun3} & Eq. \eqref{fun4} & Eq. \eqref{fun5} & $\ell_{1}$-norm & & \\
\hline
\centering $\infty$ & \textbf{0.9486} & 0.9370 & 0.9408 & $\underline{0.9332}$ & 0.9447 & 0.9257 & 0.9294 & 0.9220 \\
\centering $20$dB & \textbf{0.8675} & 0.8495 & 0.8554 & $\underline{0.8437}$ & 0.8614 & 0.8323 & 0.8380 & 0.8267\\
\centering $30$dB & \textbf{0.9055} & 0.8944 & 0.8981 & $\underline{0.8908}$ & 0.9018 & 0.8836 & 0.8872 & 0.8801\\
\hline
\end{tabular}
}
\label{tab:globalResults1.1}
\end{table}
\begin{figure}
\caption{Reconstructed images, for $SNR=30$dB, obtained by Algorithm \ref{alg:invexProximal}.}
\label{fig:exp1}
\end{figure}
\textbf{Experiment 2} studies the invex regularizers under the plug-and-play modification of Algorithm \ref{alg:invexProximal} described in Section \ref{sub:PnP} \cite{krull2019noise2void}. The same deconvolution problem as in Experiment 1 is used. The numerical results of this study are summarized in Table \ref{tab:globalResults2.1}. We also present Figure \ref{fig:exp2}, which illustrates reconstructed images obtained by Eqs. \eqref{fun1}, \eqref{fun2}, \eqref{fun3}, \eqref{fun4}, and \eqref{fun5}, compared with the outputs from the $\ell_{1}$-norm. In addition, to numerically evaluate their performance we report the SSIM for each image.
\begin{table}[ht]
\centering
\caption{Comparison between convex and invex regularizers, in terms of SSIM, under plug-and-play Algorithm \ref{alg:invexPnP}, using $p=0.8$ for Eq. \eqref{fun1}.}
\resizebox{1\textwidth}{!}{ \renewcommand{\arraystretch}{1.3}
\begin{tabular}{P{1cm} | P{2cm} P{2cm} P{2cm} P{2cm} P{2cm} | P{2cm}}
\hline
& \multicolumn{5}{P{7.5cm}|}{(Experiment 2) Algorithm \ref{alg:invexPnP}, $p=0.8$ for Eq. \eqref{fun1}.} & \\
\hline
SNR &Eq. \eqref{fun1} & Eq. \eqref{fun2} & Eq. \eqref{fun3} & Eq. \eqref{fun4} & Eq. \eqref{fun5} & $\ell_{1}$-norm \\
\hline
\centering $\infty$ & \textbf{0.9581} & 0.9409 & 0.9465 & $\underline{0.9352}$ & 0.9523 & 0.9297\\
\centering $20$dB & \textbf{0.8808} & 0.8680 & 0.8722 & $\underline{0.8638}$ & 0.8765 & 0.8597\\
\centering $30$dB & \textbf{0.9189} & 0.9043 & 0.9091 & $\underline{0.8995}$ & 0.9140 & 0.8948\\
\hline
\end{tabular}
}
\label{tab:globalResults2.1}
\end{table}
\begin{figure}
\caption{Reconstructed images, for $SNR=30$dB, obtained by Algorithm \ref{alg:invexPnP}.}
\label{fig:exp2}
\end{figure}
\textbf{Experiment 3} compares the invex regularizers under the unrolling framework described in Section \ref{unrolling}. The numerical results of this study are summarized in Table \ref{tab:globalResults3.1}. We also present Figure \ref{fig:exp4}, which illustrates reconstructed images obtained by Eqs. \eqref{fun1}, \eqref{fun2}, \eqref{fun3}, \eqref{fun4}, and \eqref{fun5} with ISTA-Net, compared with the outputs from the $\ell_{1}$-norm + ISTA-Net. In addition, to numerically evaluate their performance we report the SSIM for each image.
\begin{table}[ht]
\centering
\caption{ Performance comparison between convex and invex regularizers, in terms of SSIM, for the unrolling experiment, using $p=0.85$ for Eq. \eqref{fun1}.}
\resizebox{1\textwidth}{!}{ \renewcommand{\arraystretch}{1.3}
\begin{tabular}{P{1cm}| P{1.1cm} | P{1.5cm} P{1.5cm} P{1.5cm} P{1.5cm} P{2.0cm} |P{2cm} | P{2.2cm}|}
\hline
& & \multicolumn{5}{P{10.5cm}|}{(Experiment 3) Algorithm \ref{alg:unrolling} - unfolded LISTA. $p=0.85$ for Eq. \eqref{fun1}} & LISTA \cite{chen2018theoretical}& ReconNet \cite{kulkarni2016reconnet}\\
\hline
SNR & $m/n$& Eq. \eqref{fun1} & Eq. \eqref{fun2} & Eq. \eqref{fun3} & Eq. \eqref{fun4} & Eq. \eqref{fun5} & $\ell_{1}$-norm & \\
\hline
\centering \multirow{3}{*}{$\infty$} & \centering 0.2 & \textbf{0.9279} & 0.9132 & 0.9181 & $\underline{0.9084}$ & 0.9230 & 0.9037 & 0.8990\\
& \centering 0.4 & \textbf{0.9610} & 0.9423 & 0.9485 & $\underline{0.9363}$ & 0.9547 & 0.9303 & 0.9244\\
& \centering 0.6 & \textbf{0.9890} & 0.9620 & 0.9708 & $\underline{0.9533}$ & 0.9798 & 0.9448 & 0.9364\\
\cline{2-9}
\centering \multirow{3}{*}{20dB} & \centering 0.2 & \textbf{0.8690} & 0.8628 & 0.8649 & $\underline{0.8608}$ & 0.8669 & 0.8587 & 0.8567 \\
& \centering 0.4 & \textbf{0.9370} & 0.9205 & 0.9259 & $\underline{0.9151}$ & 0.9314 & 0.9098 & 0.9045\\
& \centering 0.6 & \textbf{0.9498} & 0.9411 & 0.9440 & $\underline{0.9382}$ & 0.9469 & 0.9353 & 0.9325\\
\cline{2-9}
\centering \multirow{3}{*}{30dB} &\centering 0.2 & \textbf{0.8876} & 0.8781 & 0.8812 & $\underline{0.8750}$ & 0.8844 & 0.8719 & 0.8688 \\
& \centering 0.4 & \textbf{0.9510} & 0.9318 & 0.9381 & $\underline{0.9255}$ & 0.9445 & 0.9194 & 0.9133 \\
& \centering 0.6 & \textbf{0.9619} & 0.9545 & 0.9569 & $\underline{0.9520}$ & 0.9594 & 0.9496 & 0.9472\\
\cline{1-9}
& & \multicolumn{5}{P{10.5cm}}{(Experiment 3) Algorithm \ref{alg:unrolling} - unfolded ISTA-Net. $p=0.85$ for Eq. \eqref{fun1}} & & \\
\cline{1-9}
SNR & $m/n$& Eq. \eqref{fun1} & Eq. \eqref{fun2} & Eq. \eqref{fun3} & \multicolumn{1}{P{1.3cm}}{Eq. \eqref{fun4}} & \multicolumn{1}{P{1.3cm}|}{Eq. \eqref{fun5}}& $\ell_{1}$-norm \cite{zhang2018ista} & \\
\cline{1-9}
\centering \multirow{3}{*}{$\infty$} & \centering 0.2 & \textbf{0.9350} & 0.9219 & 0.9262 & $\underline{0.9176}$ & 0.9306 & 0.9134 & \multirow{9}{*}{-}\\
& \centering 0.4 & \textbf{0.9733} & 0.9541 & 0.9604 & $\underline{0.9479}$ & 0.9668 & 0.9417 & \\
& \centering 0.6 & \textbf{0.9899} & 0.9697 & 0.9763 & $\underline{0.9632}$ & 0.9831 & 0.9567 & \\
\cline{2-8}
\centering \multirow{3}{*}{20dB} & \centering 0.2 & \textbf{0.8829} & 0.8745 & 0.8773 & $\underline{0.8717}$ & 0.8801 & 0.8690 & \\
& \centering 0.4 & \textbf{0.9501} & 0.9323 & 0.9382 & $\underline{0.9265}$ & 0.9441 & 0.9208 & \\
& \centering 0.6 & \textbf{0.9611} & 0.9520 & 0.9550 & $\underline{0.9490}$ & 0.9580 & 0.9460 & \\
\cline{2-8}
\centering \multirow{3}{*}{30dB} &\centering 0.2 & \textbf{0.8990} & 0.8836 & 0.8887 & $\underline{0.8786}$ & 0.8938 & 0.8736 & \\
& \centering 0.4 & \textbf{0.9641} & 0.9437 & 0.9504 & $\underline{0.9370}$ & 0.9572 & 0.9305 & \\
& \centering 0.6 & \textbf{0.9859} & 0.9695 & 0.9749 & $\underline{0.9641}$ & 0.9804 & 0.9588 & \\
\hline
\end{tabular}
}
\label{tab:globalResults3.1}
\end{table}
\begin{figure}
\caption{Reconstructed images, for $SNR=30$dB, obtained by ISTA-Net using Eqs. \eqref{fun1}, \eqref{fun2}, \eqref{fun3}, \eqref{fun4}, and \eqref{fun5}.}
\label{fig:exp4}
\end{figure}
\section{Image Denoising Illustration}
\label{app:denoising}
For the sake of completeness, we present in Algorithm \ref{alg:denoising} the denoising procedure employed in this paper (following \cite{cai2014data}) using the invex regularizers $g(\boldsymbol{x})$ in Eqs. \eqref{fun1}, \eqref{fun3}, and \eqref{fun5}. The parameters $\lambda_{1},\lambda_{2}, K$, and $S$ were chosen for each analyzed function by cross-validation.
\begin{figure}
\caption{Denoised image illustration for Eqs. \eqref{fun1}, \eqref{fun3}, and \eqref{fun5}.}
\label{fig:denoising}
\end{figure}
\begin{algorithm}[ht]
\caption{Denoising procedure using invex regularizers }
\label{alg:denoising}
\begin{algorithmic}[1]
\State{\textbf{input}: noisy image $\boldsymbol{x}$, $S$ the number of patches of size $16\times 16$, $K$ the number of iterations, and constants $\lambda_{1},\lambda_{2}\in (0,1]$.}
\State{\textbf{initialize}: $\boldsymbol{W}^{(0)}=\frac{1}{256}\boldsymbol{1}$ where $\boldsymbol{1}\in \mathbb{R}^{256\times 256}$ is the matrix of ones.}
\State{\textbf{Compute:} $\boldsymbol{P}\in \mathbb{R}^{256\times S}$ matrix containing random patches of size $16\times 16$ from $\boldsymbol{x}$}
\State{$\boldsymbol{A} = (\boldsymbol{I}_{256}-\boldsymbol{W}^{(0)}(\boldsymbol{W}^{(0)})^{T})\boldsymbol{P}$, where $\boldsymbol{I}_{256}\in \mathbb{R}^{256\times 256}$ is the identity matrix}
\For{$t=1$ to $K$}
\State{$\boldsymbol{W}^{(t)}=(\boldsymbol{W}^{(t-1)})^{T}\boldsymbol{P}$}
\State{$\hat{\boldsymbol{W}}^{(t)}[i,j]=\left \lbrace\begin{array}{ll}
\boldsymbol{W}^{(t)}[i,j] & \lvert \boldsymbol{W}^{(t)}[i,j] \rvert \geq \lambda_{1}\\
0 & \text{ otherwise }
\end{array}\right.$}
\State{run the SVD decomposition on $\boldsymbol{A}(\hat{\boldsymbol{W}}^{(t)})^{T}$ such that $\boldsymbol{A}(\hat{\boldsymbol{W}}^{(t)})^{T}=\boldsymbol{U}\boldsymbol{D}\boldsymbol{V}^{T}$.}
\State{$\boldsymbol{W}^{(t)} = \boldsymbol{U}\boldsymbol{V}^{T}$}
\EndFor
\State{$\hat{\boldsymbol{x}}= (\boldsymbol{W}^{(K)})^{T}\text{Prox}_{\lambda_{2} g}(\boldsymbol{W}^{(K)}\boldsymbol{x})$\Comment{Denoising step}}
\State{\textbf{return:} $\hat{\boldsymbol{x}}$\Comment{Denoised image}}
\end{algorithmic}
\end{algorithm}
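The following NumPy sketch of Algorithm~\ref{alg:denoising} operates directly on a matrix of vectorized patches (patch extraction and image reassembly are omitted for brevity), and uses soft thresholding as an illustrative stand-in for $\text{Prox}_{\lambda_{2} g}$; all parameter values are illustrative.

```python
import numpy as np

def hard_threshold(C, tau):
    # Keep analysis coefficients with magnitude at least tau, zero out the rest
    return np.where(np.abs(C) >= tau, C, 0.0)

def soft_threshold(z, tau):
    # Illustrative stand-in for Prox_{lambda_2 g} (an invex proximal would go here)
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def denoise_patches(P, K=10, lam1=0.1, lam2=0.05):
    """Sketch of the denoising algorithm above: P is a 256 x S matrix of
    vectorized noisy 16x16 patches. Learns an analysis operator W via
    threshold-then-SVD updates, then applies the proximal denoising step."""
    n = P.shape[0]                                   # n = 256
    W = np.full((n, n), 1.0 / n)                     # W^(0) = (1/256) * ones
    A = (np.eye(n) - W @ W.T) @ P
    for _ in range(K):
        C = hard_threshold(W.T @ P, lam1)            # thresholded coefficients
        U, _, Vt = np.linalg.svd(A @ C.T, full_matrices=False)
        W = U @ Vt                                   # orthogonal update W^(t) = U V^T
    return W.T @ soft_threshold(W @ P, lam2)         # (W^(K))^T Prox_{lam2 g}(W^(K) P)

rng = np.random.default_rng(1)
P = rng.standard_normal((256, 400))                  # 400 synthetic noisy patches
P_hat = denoise_patches(P)
```

In a full implementation, the denoised patches would be averaged back into the image at their original locations.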
Employing Algorithm \ref{alg:denoising}, in Figure \ref{fig:denoising} we present some denoised images obtained by Eqs. \eqref{fun1}, \eqref{fun3}, and \eqref{fun5}, which are compared with the outputs from BM3D and Noise2Void. Since we are analyzing all the regularizers under non-ideal scenarios due to noise, the results in Figure \ref{fig:denoising} highlight the benefit of invex regularizers, as the cleanest image is obtained by Eq. \eqref{fun1}. In addition, to numerically evaluate their performance we employ the structural similarity index measure (SSIM), reporting the SSIM map for each denoised image and its averaged value. Recall that SSIM lies in the range $[0,1]$, where 1 is the best achievable quality and 0 the worst; in the SSIM map, small SSIM values appear as dark pixels. Thus, we conclude that the best performance is achieved using the regularizer in Eq. \eqref{fun1}, since it has the brightest SSIM maps (with the highest SSIM values).
\end{document}
\begin{document}
\theoremstyle{plain}
\newtheorem{theorem}{Theorem}
\newtheorem{cor}[theorem]{Corollary}
\newtheorem{lemma}[theorem]{Lemma}
\newtheorem{proposition}[theorem]{Proposition}
\newtheorem{claim}[theorem]{Claim}
\newtheorem{quest}[theorem]{Question}
\newtheorem{conjecture}[theorem]{Conjecture}
\theoremstyle{definition}
\newtheorem{definition}{Definition}
\everymath{\displaystyle}
\title{Gaussian Happy Numbers}
\begin{abstract}
This paper extends the concept of a $B$-happy number, for $B \geq 2$, from the rational integers, $\mathbb{Z}$, to the Gaussian integers, $\mathbb{Z}[i]$.
We investigate the fixed points and cycles of the Gaussian
$B$-happy functions, determining them for small values of $B$ and providing a method for computing them for any $B \geq 2$. We discuss heights of Gaussian $B$-happy numbers, proving results concerning the smallest Gaussian $B$-happy numbers of certain heights. Finally, we prove conditions for the existence and non-existence of arbitrarily long arithmetic sequences of Gaussian $B$-happy numbers.
\end{abstract}
\section{Introduction}
Happy numbers~\cite{oeis} and, for bases other than 10, generalized happy numbers~\cite{genhappy}, are defined in terms of iterating the base $B$ happy function $S_B:\mathbb{Z}^+ \rightarrow \mathbb{Z}^+$, defined by
\begin{equation*}
S_B\left(\sum_{j=0}^n a_j B^j \right) =
\sum_{j=0}^n a_j^2,
\end{equation*}
where $B\geq 2$, $a_n \neq 0$, and, for each $j$, $0\leq a_j \leq B-1$.
The function has been generalized in other ways, allowing for other exponents~\cite{genhappy,fifth}, and allowing for the addition of an augmentation constant~\cite{augment, desert, oasis}. In this paper, we extend the concept of generalized happy numbers to $\mathbb{Z}[i]$, the set of Gaussian integers. Although we restrict our attention to the case with exponent two, we note that higher exponents may also lead to interesting results.
Let $B \geq 2$. For $a + b i \in \mathbb{Z}[i] - \{0\}$, we write
\begin{equation}\label{notation}
a + b i = \sum_{j = 0}^n (a_j + b_j i) B^j,
\end{equation}
where $a_n$ and $b_n$ are not both $0$ and, for each $j$,
$\sgn(a) a_j \geq 0$, $\sgn(b) b_j \geq 0$,
$|a_j| \leq B - 1$, and $|b_j| \leq B - 1$.
Note that these conditions mean that each nonzero $a_j$ has the same sign as $a$ and each nonzero $b_j$ has the same sign as $b$.
\begin{definition}
For an integer $B\geq 2$, the \emph{Gaussian $B$-happy function} $S_B : \mathbb{Z}[i] \to \mathbb{Z}[i]$ is defined by $S_B (0) = 0$ and for $a + b i \in \mathbb{Z}[i] - \{0\}$,
\[
S_B (a + b i) = \sum_{j = 0}^n \left(a_j + b_j i\right)^2 = \sum_{j = 0}^n \left(a_j^2 - b_j^2\right) + 2 \left(\sum_{j = 0}^n a_jb_j\right)i.
\]
A Gaussian integer $a + b i$ is a \emph{(Gaussian) $B$-happy number} if, for some $k \in \mathbb{Z}^+,$ $S_B^k (a + b i) = 1$.
\end{definition}
We note that the Gaussian $B$-happy function, when restricted to rational integers, agrees with the generalized $B$-happy function. Hence, no confusion should result from expanding the definition of the notation $S_B$ and of the term $B$-happy number in this way. For clarity, at times we use the term \emph{rational $B$-happy numbers} to differentiate them from the Gaussian $B$-happy numbers.
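The definition above can be implemented directly. The following Python sketch (illustrative, not part of the paper's computations) computes base-$B$ digits with the sign convention of~(\ref{notation}), evaluates $S_B$, and tests Gaussian $B$-happiness by iterating until reaching $1$ or revisiting a value:

```python
def digits(n, B):
    """Base-B digits of a rational integer n, least significant first;
    for negative n, each digit carries the sign of n."""
    sign, n, ds = (-1 if n < 0 else 1), abs(n), []
    while n:
        ds.append(sign * (n % B))
        n //= B
    return ds

def S(a, b, B=10):
    """The Gaussian B-happy function S_B(a + b*i), returned as a pair (re, im)."""
    da, db = digits(a, B), digits(b, B)
    m = max(len(da), len(db))
    da += [0] * (m - len(da))            # pad so digits pair up position-wise
    db += [0] * (m - len(db))
    return (sum(x * x - y * y for x, y in zip(da, db)),
            2 * sum(x * y for x, y in zip(da, db)))

def is_happy(a, b, B=10):
    """True iff a + b*i is a Gaussian B-happy number: iterate until 1 or a repeat."""
    seen = set()
    while (a, b) not in seen:
        seen.add((a, b))
        a, b = S(a, b, B)
        if (a, b) == (1, 0):
            return True
    return False
```

For example, `S(12, 12)` returns `(0, 10)`, i.e., $S_{10}(12+12i)=10i$, while `is_happy(4, 0)` is `False` since $4$ enters the rational cycle $4 \to 16 \to 37 \to 58 \to 89 \to 145 \to 42 \to 20 \to 4$.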
We begin with some basic properties of the Gaussian $B$-happy function. Each is proved by a straightforward calculation.
\begin{lemma}\label{basic}
The following hold for each $a + b i \in \mathbb{Z}[i]$.
\begin{enumerate}
\item \label{neg}
$S_B (a + b i) = S_B ( - ( a + b i))$.
\item \label{conj}
$S_B (\overline{a + b i}) = \overline{S_B (a + b i)}$.
\item \label{multbyi}
$S_B \left(i(a + b i)\right) = -S_B ( a + b i)$.
\item \label{real}
$S_B (a + b i) \in \mathbb{R}$ if and only if for every $j$, $a_j b_j = 0$.
\item \label{pimag}
$S_B (a + b i)$ is purely imaginary if and only if $a = \pm b$.
\item \label{2i}
$S_B (a + b i)\in \mathbb{Z}[2i]$, i.e., $\imag (S_B (a + b i))$ is even.
\end{enumerate}
\end{lemma}
The following is immediate from Lemma~\ref{basic}(\ref{neg}),(\ref{conj}), and (\ref{multbyi}).
\begin{lemma}\label{rep}
Fix $B\geq 2$. If $z\in \mathbb{Z}[i]$ is a Gaussian $B$-happy number, then so are $-z$, $\pm iz$, $\pm \overline{z}$, and $\pm i\overline{z}$.
\end{lemma}
Although $S_B$ is not an additive function, it has a useful additive property, which generalizes directly to the Gaussian case.
\begin{lemma}\label{pullapart}
Let $a$, $b$, $c$, $d$, $r \in \mathbb{Z}_{\geq 0}$. If $B^r > \max\{c,d\}$, then
\[S_B\left((a + bi)B^r + (c + di)\right) = S_B(a + bi) + S_B(c + di).\]
\end{lemma}
\begin{proof}
Since $B^r > \max \{c, d\},$ there exist rational integers $c_j$ and $d_j$ such that
\[c + d i = \sum_{j=0}^{r - 1} (c_j + d_j i)B^j,\] with $0 \leq c_j \leq B-1 $ and $0 \leq d_j \leq B-1$. Using the usual notation, as in~(\ref{notation}), for $a + bi$, we have
\[(a + bi) B^r = \sum_{j=0}^n (a_j + b_j i)B^{j+r} = \sum_{j=r}^{n+r} (a_{j-r} + b_{j-r} i)B^{j}.\] Thus
\begin{align*}
S_B ((a + bi) B^r + (c + d i))
&= S_B\left(\sum_{j=r}^{n+r} (a_{j-r} + b_{j-r} i)B^{j} + \sum_{j=0}^{r - 1} (c_j + d_j i)B^j\right) \\
&= \sum_{j = r}^{n+r} (a_{j-r} + b_{j-r} i)^2 + \sum_{j = 0}^{r-1} (c_j + d_j i)^2 \\
&= \sum_{j = 0}^{n} (a_{j} + b_{j} i)^2 + \sum_{j = 0}^{r-1} (c_j + d_j i)^2 \\
&= S_B (a + bi) + S_B (c + d i),
\end{align*}
as desired.
\end{proof}
The remainder of this paper is organized as follows.
In Section 2, we provide a method for computing the fixed points and cycles for the Gaussian
$B$-happy functions and apply it to $S_B$ for $2 \leq B \leq 10$. In Section 3, we consider heights of Gaussian $B$-happy numbers. Finally, in Section 4, we discuss the existence and non-existence of arbitrarily long arithmetic sequences of Gaussian $B$-happy numbers.
\section{Fixed Points and Cycles of $S_B$}
In this section, we examine the trajectories of the function $S_B$, identifying all fixed points and cycles of the functions, for $2\leq B \leq 10$. First, we prove that, for each $B \geq 2$, when a Gaussian integer is ``sufficiently large,'' the output of the Gaussian $B$-happy function has a smaller absolute value than the input. This allows for a computer search leading to Tables~\ref{fixedbegin} and~\ref{fixedend}. Note that by Lemma~\ref{basic}(\ref{conj}) nonreal fixed points and cycles come in conjugate pairs.
\begin{theorem}\label{threedigits}
Let $g \in \mathbb{Z}[i]$ satisfy
\[\max\left\{|\real(g)|,|\imag(g)|\right\} \geq \begin{cases}
B^3,&\text{if } B\geq 7;\\
B^4,&\text{if } 2\leq B \leq 6.
\end{cases}
\]
Then $|S_B (g)| < |g|$.
\end{theorem}
\begin{proof}
Fix $B \geq 2$.
Let $g = a + b i = \sum_{j = 0}^n (a_j + b_j i) B^j$, as in~(\ref{notation}), with $n \geq 3$ for all $B \geq 2$, and with the added condition that $n \geq 4$ if $B\leq 6$.
For each $0\leq j\leq n$, since $|a_j| \leq B - 1$ and $|b_j| \leq B - 1$, we have
$|a_j^2 - b_j^2| \leq (B - 1)^2$ and $|a_j b_j| \leq (B - 1)^2$.
Thus
\begin{align*}
|S_B(g)| & = \left|\sum_{j = 0}^n (a_j^2 - b_j^2) + 2 \left(\sum_{j = 0}^n a_jb_j\right)i\right|\\ & =
\sqrt{\left(\sum_{j = 0}^n (a_j^2 - b_j^2)\right)^2 + \left(2 \sum_{j = 0}^n a_jb_j\right)^2}\\ & \leq
\sqrt{((B - 1)^2(n+1))^2 + (2(B - 1)^2(n+1))^2} \\ & =
\sqrt{5}(n+1)(B - 1)^2.
\end{align*}
On the other hand, $|g| \geq \max\{|\real(g)|,|\imag(g)|\} \geq B^n$. So it suffices to prove that, regardless of the value of $n$, $B^n > \sqrt{5}(n+1)(B - 1)^2$.
The inequality is easy to verify for $2 \leq B \leq 6$ with $n = 4$, and for $B = 7$ or $8$ with $n = 3$. For $B \geq 9$ with $n = 3$, note that $B > 4 \sqrt{5}$, which implies that $B^n > \sqrt{5} (n + 1) (B - 1)^2$.
Proceeding now by induction on $n$, fix $B \geq 2$ and assume that $B^n > \sqrt{5} (n + 1) (B - 1)^2$. It follows that $B^{n+1} > B\sqrt{5}(n+1)(B - 1)^2 > \sqrt{5}(n+2)(B - 1)^2$, as desired.
Hence,
$|S_B(g)| \leq \sqrt{5}(n+1)(B - 1)^2 < B^n \leq |g|$.
\end{proof}
The following corollary is immediate.
\begin{cor}\label{allcyclesthm}
Let $B \geq 2$. Every cycle of $S_B$ contains a point $g = a + bi$ such that $|a| < B^n$ and $|b| < B^n$, where $n = 4$ if $2 \leq B \leq 6$ and $n=3$ if $B\geq 7$.
In particular, we have that every fixed point $g = a + bi$ of $S_B$ satisfies $|a| < B^n$ and $|b| < B^n$.
\end{cor}
It follows from Corollary~\ref{allcyclesthm} that a direct computer search can determine all fixed points and cycles of the function $S_B$, for any fixed $B \geq 2$. For $2 \leq B \leq 6$, we applied $S_B$ iteratively to each value $0\leq g < B^4$, recording the resulting fixed points and cycles in Table~\ref{fixedbegin}. For $7 \leq B \leq 10$, we applied $S_B$ iteratively to each value $0\leq g < B^3$, recording the resulting fixed points and cycles in Table~\ref{fixedend}. The programs were run using each of MATLAB and Mathematica, thus proving Theorem~\ref{fixed}.
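A minimal sketch of such a search follows (the window sizes below are illustrative; the rigorous bounds are those of Corollary~\ref{allcyclesthm}):

```python
def digits(n, B):
    # Base-B digits, least significant first, carrying the sign of n
    sign, n, ds = (-1 if n < 0 else 1), abs(n), []
    while n:
        ds.append(sign * (n % B))
        n //= B
    return ds

def S(a, b, B):
    # The Gaussian B-happy function, as a pair (re, im)
    da, db = digits(a, B), digits(b, B)
    m = max(len(da), len(db))
    da += [0] * (m - len(da))
    db += [0] * (m - len(db))
    return (sum(x * x - y * y for x, y in zip(da, db)),
            2 * sum(x * y for x, y in zip(da, db)))

def fixed_points(B, R):
    # All fixed points a + b*i of S_B with |a|, |b| < R
    return sorted((a, b) for a in range(-R + 1, R) for b in range(-R + 1, R)
                  if S(a, b, B) == (a, b))

def cycles(B, R):
    # All cycles of S_B reached from starting points a + b*i with |a|, |b| < R
    found = set()
    for a in range(-R + 1, R):
        for b in range(-R + 1, R):
            traj, seen, z = [], set(), (a, b)
            while z not in seen:
                seen.add(z)
                traj.append(z)
                z = S(*z, B)
            found.add(frozenset(traj[traj.index(z):]))   # the tail of the trajectory
    return found
```

For instance, `fixed_points(5, 100)` returns `[(0, 0), (1, 0), (13, 0), (18, 0)]`, i.e., $0$, $1$, $23_{(5)}$, and $33_{(5)}$, matching Table~\ref{fixedbegin}.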
\begin{theorem}\label{fixed}
For $2 \leq B\leq 10$, the fixed points and cycles of $S_B$ are as given in Tables~\ref{fixedbegin} and~\ref{fixedend}.
\end{theorem}
\begin{table}[tbh]
\begin{center}
\begin{tabular}{|c|l|}\hline
$B$ & Fixed Points and Cycles, expressed in base $B$\\
\hline \hline
2 & 0, 1 \\ \hline
3 & 0, 1, 12, 22, 2+11i, 2-11i,\\
& 2 $\rightarrow$ 11 $\rightarrow$ 2, \\
&$-1+2i\to -10-11i\to -1+2i$ (and its conjugate), \\
&$-11+22i\to -20-22i\to -11+22i$ (and its conjugate)\\
\hline
4 & 0, 1 \\ \hline
5 & 0, 1, 23, 33, \\
& 4 $\rightarrow$ 31 $\rightarrow$ 20 $\rightarrow$ 4, \\
&$3+11i\to 12+11i\to 3+11i$ (and its conjugate) \\
\hline
6 & 0, 1, \\
& 5 $\rightarrow$ 41 $\rightarrow$ 25
$\rightarrow$ 45
$\rightarrow$ 105 $\rightarrow$ 42$\rightarrow$ 32
$\rightarrow$ 21 $\rightarrow$ 5\\
\hline
\end{tabular}
\caption{Fixed points and cycles of $S_B$, $2 \leq B \leq 6$.}
\label{fixedbegin}
\end{center}
\end{table}
\begin{table}[bh]
\begin{center}
\begin{tabular}{|c|l|}\hline
$B$ & Fixed Points and Cycles, expressed in base $B$\\
\hline \hline
7 & 0, 1, 13, 34, 44, 63, 25+31i, 25-31i,\\
& 2 $\rightarrow$ 4 $\rightarrow$ 22 $\rightarrow$ 11 $\rightarrow$ 2,
16 $\rightarrow$ 52 $\rightarrow$ 41 $\rightarrow$ 23 $\rightarrow$ 16,\\
&$-15+116i\to -15-116i\to-15+116i$,\\
&$-31+44i\to -31-44i\to-31+44i$,\\
&$-11+51i\to -33-15i\to -11+51i$ (and its conjugate),\\
&$-21+26i\to-50-26i\to -21+26i$ (and its conjugate),\\
&$-1+13i\to -12 - 6 i\to -43 + 33 i\to 10 - 60 i\to$ \\
&\hphantom{mmm}$-50 - 15 i\to -1 + 13 i$ (and its conjugate),\\
&$4+22i\to 11+22i\to -6+11i\to 46-15i\to 35-125i\to $\\
&\hphantom{mmm}$4-116i\to -31-66i\to -116+66i\to -46-150i \to $ \\
&\hphantom{mmm}$ 35+55i\to -22+143i\to -24-40i\to 4+22i$ \\
&\hphantom{mmm}(and its conjugate),\\
&$14+35i\to -23 + 64i\to -54 - 66 i\to -43 + 213i\to 14 - 35i\to $\\
&\hphantom{mmm} $-23 - 64i\to -54 + 66 i\to -43 - 213 i\to 14+35i$,\\
&$-13+15i\to -22 - 44 i\to -33 + 44 i\to -20 - 66 i\to $\\
&\hphantom{mmm}$-125 + 33 i\to 15 - 60 i \to -13-15i\to -22 + 44 i\to $\\
&\hphantom{mmm}$-33 - 44 i\to -20 + 66 i\to -125 - 33 i\to 15 + 60 i \to -13+15i$\\
\hline
8 &$ 0, 1, 24, 64,15+32i, 15-32i, 45+20i, 45-20i$,\\
& 4 $\rightarrow$ 20 $\rightarrow$ 4,
15 $\rightarrow$ 32 $\rightarrow$ 15,
5 $\rightarrow$ 31 $\rightarrow$ 12 $\rightarrow$ 5,\\
&$-34+72i\to -34-72i\to -34+72i$, \\
&$-11+24i\to -22-14i\to -11+24i$ (and its conjugate) \\
&$-40+70i\to -41-70i\to 40+70i$ (and its conjugate), \\
&$4+6i\to -24+60i\to -20-30i\to -5+14i\to 10-50i\to $\\
&\hphantom{mmmm}$-30-12i\to 4+6i$ (and its conjugate) \\
\hline
9 & 0, 1, 45, 55, \\
& $75\to 82 \to 75 $,
58 $\rightarrow$ 108 $\rightarrow$ 72$\rightarrow$ 58, \\
&$-4+26i\to -26-53i\to 6+62i\to -4+26i $ (and its conjugate), \\
&$10+26i\to -43+4i\to 10-26i\to -43-4i\to 10+26i $ \\
\hline
10 & 0, 1, \\
& $4 \rightarrow 16 \rightarrow 37 \rightarrow 58 \rightarrow 89
\rightarrow 145
\rightarrow 42 \rightarrow 20$ $\rightarrow$ 4,\\
& $-52+90i \to -52-90i \to -52+90i$,\\
&$35+48i \to -46+104i \to 35-48i \to -46-104i\to 35+48i$, \\
&$-15+90i \to -55-18i \to -15+90i$ (and its conjugate) \\
\hline
\end{tabular}
\caption{Fixed points and cycles of $S_B$, $7 \leq B \leq 10$.}
\label{fixedend}
\end{center}
\end{table}
Looking at the odd bases in Tables~\ref{fixedbegin} and~\ref{fixedend}, notice that $S_3$ has fixed points $12_{(3)}$ and $22_{(3)}$, $S_5$ has fixed points $23_{(5)}$ and $33_{(5)}$, $S_7$ has $34_{(7)}$ and $44_{(7)}$, and $S_9$ has $45_{(9)}$ and $55_{(9)}$. We prove that this pattern holds for all odd bases.
\begin{theorem}\label{fixedeg}
For $B\geq 3$ odd, the numbers
$(B^2 + 1)/2$ and $(B + 1)^2/2$
are each fixed points of the function $S_B$.
\end{theorem}
\begin{proof}
Writing each of these in base $B$ notation, we have
\[\frac{B^2 + 1}{2} = \left(\frac{B-1}{2}\right)B+\frac{B+1}{2}
\mbox{\ and\ }
\frac{(B + 1)^2}{2} = \left(\frac{B+1}{2}\right)B+\frac{B+1}{2}.\]
\noindent
Direct calculation then yields
\begin{align*}
S_B\left(\frac{B^2+1}{2}\right)
&= \left(\frac{B-1}{2}\right)^2+\left(\frac{B+1}{2}\right)^2
= \frac{B^2+1}{2}
\end{align*}
and
\begin{align*}
S_B\left(\frac{(B + 1)^2}{2}\right) &= \left(\frac{B+1}{2}\right)^2+\left(\frac{B+1}{2}\right)^2
= \frac{(B + 1)^2}{2}.
\end{align*}
Thus, for odd $B$, $(B^2 + 1)/2$ and $(B + 1)^2/2$
are fixed points of $S_B$.
\end{proof}
\section{Heights of Gaussian Happy Numbers}
As defined in~\cite{heights}, the height of a $B$-happy number, $a$, is the smallest $k\in \mathbb{Z}_{\geq 0}$ such that $S_B^k(a) = 1$. The smallest (rational) happy numbers of heights up to at least 12 are known~\cite{onheights,heights}.
In this section, we first determine the smallest Gaussian $B$-happy numbers of heights 0, 1 and 2, showing that the results are independent of the value of $B$. We then find the smallest Gaussian $B$-happy numbers of height three for $2\leq B \leq 10$.
Finally, we describe how to find the smallest Gaussian (10-)happy numbers of various heights and compute them for heights less than seven.
Here ``smallest'' is taken to mean ``smallest absolute value.'' We note that this means that the smallest number of a given height is, generally, not unique. In fact, for $z\in \mathbb{Z}[i]$ of height above two, it follows from Lemma~\ref{basic} that the height and absolute value of $z$ are the same as those of $-z$, $\pm \overline{z}$, $\pm iz$, and $\pm i\overline{z}$. In results for these heights, we record representative numbers, noting that they stand for an entire equivalence class, as described in Lemma~\ref{rep}.
We first show that the values of the smallest Gaussian $B$-happy numbers of heights 0, 1, and 2 are independent of the value of $B$.
\begin{theorem}
Let $B \geq 2$. The smallest Gaussian $B$-happy numbers of heights 0 and 1 are 1 and -1, respectively. The smallest Gaussian $B$-happy numbers of height 2 are $i$ and $-i$.
\end{theorem}
\begin{proof}
Since 0 is not a $B$-happy number, the smallest $B$-happy numbers must be of absolute value 1 and, hence, in the set $\{1,-1,i,-i\}$. Direct computation gives $S_B(-1) = 1$ and $S_B(\pm i) = -1$, so $1$, $-1$, $i$, and $-i$ are Gaussian $B$-happy numbers of heights $0$, $1$, $2$, and $2$, respectively, and each is the smallest of its height.
\end{proof}
For height 3 and above, the base is significant.
As seen in Table~\ref{height3}, the smallest Gaussian $B$-happy numbers of height 3 are the same for bases 2 and 4, and those in base 8 and 10 are integer multiples of those for base 2 and 4. The smallest numbers for the other small bases do not appear to follow a pattern. The following theorem is verified by direct calculation.
\begin{theorem}
The smallest Gaussian $B$-happy numbers of height 3 for $2 \leq B \leq 10$, are the values, $z$, given in Table~\ref{height3}, along with $-z$, $\pm \overline{z}$, $\pm iz$, and $\pm i\overline{z}$.
\end{theorem}
\begin{table}[hbt]
\begin{center}
\begin{tabular}{|c|c|}
\hline
Base & Smallest Height 3 \\
\hline
2&$1+i$\\
3&$7+10i$\\
4&$1+i$\\
5&$4+15i$\\
6&$11+17i$\\
7&$20+27i$\\
8&$2+2i$\\
9&$2+9i$\\
10&$12+12i$\\
\hline
\end{tabular}
\caption{Representative Smallest Height 3 Gaussian $B$-Happy Numbers.}
\label{height3}
\end{center}
\end{table}
Focusing now on base 10, the smallest Gaussian happy numbers of heights up to a given fixed height can be found using a direct search by computer, or even by hand. Noting that in the (positive) rational integers, the smallest happy numbers of heights less than 6 are all less than or equal to 23, finding the heights of all Gaussian happy numbers of absolute value at most 23, and then identifying the smallest one of each height, necessarily identifies the smallest ones of heights less than 6. This search, in fact, identifies the smallest numbers of all heights less than 7, as presented in Table~\ref{heights}.
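The search just described can be sketched in Python (the happy function is re-implemented here for self-containment; the bound $|z| \leq 23$ follows the discussion above):

```python
def digits(n, B):
    sign, n, ds = (-1 if n < 0 else 1), abs(n), []
    while n:
        ds.append(sign * (n % B))
        n //= B
    return ds

def S(a, b, B=10):
    da, db = digits(a, B), digits(b, B)
    m = max(len(da), len(db))
    da += [0] * (m - len(da))
    db += [0] * (m - len(db))
    return (sum(x * x - y * y for x, y in zip(da, db)),
            2 * sum(x * y for x, y in zip(da, db)))

def height(a, b, B=10):
    # Least k with S_B^k(a + b*i) = 1, or None if a + b*i is not B-happy
    k, seen = 0, set()
    while (a, b) != (1, 0):
        if (a, b) in seen:
            return None
        seen.add((a, b))
        a, b = S(a, b, B)
        k += 1
    return k

# Smallest (in absolute value) Gaussian happy number of each height, over |z| <= 23
smallest = {}
for a in range(-23, 24):
    for b in range(-23, 24):
        if a * a + b * b > 23 * 23:
            continue
        h = height(a, b)
        if h is not None and (h not in smallest or a * a + b * b < smallest[h][1]):
            smallest[h] = ((a, b), a * a + b * b)   # (representative, |z|^2)
```

The squared moduli recorded in `smallest` match Table~\ref{heights}: for example, height 3 yields $|z|^2 = 288$ (the class of $12+12i$) and height 6 yields $|z|^2 = 386$ (the class of $5+19i$).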
\begin{theorem}
The smallest Gaussian happy numbers of heights 0 through 2 are given in Table~\ref{heights}. The smallest Gaussian happy numbers of heights 3 through 6 are the numbers $z$, given in Table~\ref{heights}, along with $-z$, $\pm \overline{z}$, $\pm iz$, and $\pm i\overline{z}$.
\end{theorem}
\begin{table}[hbt]
\begin{center}
\begin{tabular}{|r||c|c|c|c|c|c|c|}
\hline
Height&0&1&2&3&4&5&6\\
\hline
Happy&1&10&13&23&19&7&365\\
\hline
Gaussian Happy&1&-1&$\pm i$&$12+12i$&$ 4+4i$&$7$&$5+19i$
\\
\hline
\end{tabular}
\caption{Smallest Happy Numbers~\cite{heights} and Representative Gaussian Happy Numbers of Small Heights.}
\label{heights}
\end{center}
\end{table}
\section{Arithmetic Sequences}
We now consider arithmetic sequences of Gaussian $B$-happy numbers. Following convention, for $D \in \mathbb{Z}[i] - \{0\}$, a
{\em $D$-consecutive sequence} is an arithmetic sequence with constant difference $D$.
El-Sedy and Siksek~\cite{basetenseq} showed that there exist arbitrarily long finite 1-consecutive sequences of rational (base 10) happy numbers. Independently, Grundman and Teeple~\cite{consec} proved the more general result, given below. They also proved that the constant differences given in Theorem~\ref{GT} are the best possible.
\begin{theorem}[Grundman \& Teeple]
\label{GT}
If $B \geq 2$ and \[d = \gcd(2,B-1),\]
then there exist arbitrarily long finite
$d$-consecutive sequences of $B$-happy numbers.
\end{theorem}
In this section, we prove, for various values of $D$ depending on the parity of $B\geq 2$, that there exist arbitrarily long finite $D$-consecutive sequences of Gaussian $B$-happy numbers. We begin by showing that when the base $B$ is odd, such a $D$ must be a $\mathbb{Z}[i]$-multiple of $1 + i$. It follows that all Gaussian $B$-happy numbers are contained in a single coset of the ideal $(1 + i)\mathbb{Z}[i]$. This is the Gaussian analogue of the fact that for $B$ odd, all rational $B$-happy numbers are odd.
\begin{theorem}\label{coset}
Let $B \geq 3$ be odd. Each Gaussian $B$-happy number is an element of $1 + (1+i)\mathbb{Z}[i]$. In particular, if there is a $D$-consecutive sequence of (at least two) Gaussian $B$-happy numbers, then $D \in (1+i)\mathbb{Z}[i]$.
\end{theorem}
\begin{proof}
Assume that $B \geq 3$ is odd and note that, since $2 \in (1 + i)\mathbb{Z}[i]$, for any $a + bi \in \mathbb{Z}[i]$,
\begin{align*}
S_B(a + bi) &= S_B\left(\sum_j (a_j + b_ji)B^j \right) \equiv \sum_j (a_j^2 - b_j^2) \equiv \sum_j (a_j - b_j) \\
& \equiv \sum_j \left((a_j - b_j) + (1 + i)b_j\right) \equiv \sum_j (a_j + b_ji)B^j \\
& \equiv a + bi \pmod{(1 + i)\mathbb{Z}[i]}.
\end{align*}
Now, if $a + bi$ is a Gaussian $B$-happy number, then for some $k\in \mathbb{Z}^+$,
$S_B^k(a + bi) = 1$.
Thus, using an inductive argument, each Gaussian $B$-happy number is congruent to 1 modulo $(1 + i)\mathbb{Z}[i]$ and so is an element of $1 + (1+i)\mathbb{Z}[i]$.
\end{proof}
The converse of Theorem~\ref{coset} is certainly false: By Theorem~\ref{fixedeg}, for $B$ odd, $(B^2 + 1)/2$ is a fixed point of $S_B$ and, hence, is not a Gaussian $B$-happy number. Yet, for $B$ odd, $B^2 + 1 \equiv 2 \pmod{4\mathbb{Z}}$ implying that $(B^2 + 1)/2 \equiv 1 \pmod{2\mathbb{Z}}$. Hence, $(B^2 + 1)/2 \in 1 + 2\mathbb{Z} \subseteq 1 + (1+i)\mathbb{Z}[i]$.
As a corollary to Theorem~\ref{coset}, we see that if $B$ is odd, then each Gaussian $B$-happy number has real and imaginary parts of different parity.
\begin{cor}
Let $B \geq 3$ be odd. If $a + bi$ is a Gaussian $B$-happy number, then $a + b \equiv 1 \pmod{2\mathbb{Z}}$.
\end{cor}
\begin{proof}
Let $a + bi$ be a Gaussian $B$-happy number. By Theorem~\ref{coset}, $a + bi - 1 \in (1+i)\mathbb{Z}[i]$. Since $b - bi \in (1+i)\mathbb{Z}[i]$, this implies that $a + b - 1 \in (1+i)\mathbb{Z}[i]$,
and hence $a + b - 1 \in \mathbb{Z} \cap (1+i)\mathbb{Z}[i] = 2\mathbb{Z}$.
\end{proof}
Before proving a generalization of Theorem~\ref{GT} to Gaussian $B$-happy numbers, we
note the equivalence of the existence of $D$-consecutive sequences of Gaussian $B$-happy numbers for some related values of $D$. The proof follows easily from Lemma~\ref{rep}.
\begin{lemma}\label{otherds}
If there exists a $D$-consecutive sequence of Gaussian $B$-happy numbers for some $D\in \mathbb{Z}[i] - \{0\}$, then there exists a $D^\prime$-consecutive sequence of Gaussian $B$-happy numbers of the same length, for $D^\prime$ equal to each of $-D$, $\pm iD$, $\pm \overline D$, and $\pm i\overline D$.
\end{lemma}
We now generalize Theorem~\ref{GT}.
\begin{theorem} \label{easyseq}
Fix $B \geq 2$ and let $d = \gcd(2,B-1)$.
There exist arbitrarily long finite
$d$-consecutive sequences of Gaussian $B$-happy numbers, and
$d$ is the smallest element of $\mathbb{Z}^+$ for which this is true.
The same holds for $-d$-consecutive, $id$-consecutive, and $-id$-consecutive sequences.
\end{theorem}
\begin{proof}
The existence of the sequences is immediate from Theorem~\ref{GT}, since rational $B$-happy numbers are also Gaussian $B$-happy numbers. For $B$ even, $d = 1$, which is clearly the minimal value possible. For $B$ odd, Theorem~\ref{coset} eliminates the possibility of $d = 1$. Thus the result is best possible in each case. Lemma~\ref{otherds} proves the final sentence of the theorem.
\end{proof}
Theorem~\ref{1+i}, which holds for all $B \geq 2$, establishes that there exist arbitrarily long finite $(1+i)$-consecutive sequences of Gaussian $B$-happy numbers. For its proof, we need to define a function that serves as a one-sided inverse for $S_B$.
Fix $B \geq 2$. We define a function, $R_B: \mathbb{Z}^+ \rightarrow \mathbb{Z}^+$ by, for each $t \in \mathbb{Z}^+$,
\[R_B(t) = \sum_{j=1}^t B^j.\]
Notice that, for each $t\in \mathbb{Z}^+$,
\begin{equation*}\label{RB}
S_B(R_B(t)) = t.
\end{equation*}
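The identity $S_B(R_B(t))=t$ holds because the base-$B$ expansion of $R_B(t)$ consists of exactly $t$ digits equal to $1$. A quick illustrative check (the helper names are ours, not from the paper):

```python
# Illustrative check that R_B is a right inverse of S_B on Z^+:
# R_B(t) = B + B^2 + ... + B^t has exactly t base-B digits equal to 1,
# so its digit-square sum is t.

def R(t, B):
    return sum(B ** j for j in range(1, t + 1))

def S(n, B):
    s = 0
    while n:
        n, r = divmod(n, B)
        s += r * r
    return s

assert all(S(R(t, B), B) == t for B in (2, 3, 10) for t in range(1, 25))
```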
\begin{theorem}\label{1+i}
For $B \geq 2$ and $D = 1 + i$,
there exist arbitrarily long finite
$D$-consecutive sequences of Gaussian $B$-happy numbers.
\end{theorem}
\begin{proof}
Let $m\in \mathbb{Z}^+$ be arbitrary. We will show that there exists a $D$-consecutive sequence of $m$ Gaussian $B$-happy numbers.
Let $d = \gcd(2,B-1)$ and
\[M = \max\{S_B(2S_B(k))|1\leq k \leq m\}.\]
By Theorem~\ref{GT}, there exists
a sequence of $M$ $d$-consecutive rational $B$-happy numbers, say $a + dj$, for $0 \leq j < M$.
Set $r = 1 + \max\{k,2S_B(k)|1\leq k\leq m\}$. (Note that this means that $r$ is certainly large enough for the application of Lemma~\ref{pullapart} in the following calculation.)
Let $b = R_B(R_B(a+dM)B^r)$. Then for each $1\leq k \leq m$,
\begin{align*}
S_B^2(bB^r + k(1+i)) &= S_B(S_B(b) + S_B(k(1+i))) \\
& = S_B(S_B(R_B(R_B(a+dM)B^r)) + 2S_B(k)i) \\
& = S_B(R_B(a+dM)B^r + 2S_B(k)i) \\
&= S_B(R_B(a+dM)) + S_B(2S_B(k)i) \\
& = a + dM - S_B(2S_B(k)).
\end{align*}
By the definition of $M$, for each $k$, $1 \leq S_B(2S_B(k)) \leq M$ and, therefore, $a \leq a + dM - S_B(2S_B(k)) < a + dM$. So, if $d = 1$,
then $a + dM - S_B(2S_B(k))$ is in the sequence of $M$ $d$-consecutive rational $B$-happy numbers.
If $d = 2$, then $B$ is odd, and $S_B(2S_B(k))$ is even. Hence,
$a + dM - S_B(2S_B(k))$ is again in the sequence of $M$ $d$-consecutive rational $B$-happy numbers.
Thus, in either case,
for each $k$,
$a + dM - S_B(2S_B(k))$ is a $B$-happy number.
Therefore, for each $1\leq k \leq m$, $bB^r + k(1+i)$ is a Gaussian $B$-happy number, and so these numbers form a $D$-consecutive sequence of $m$ Gaussian $B$-happy numbers.
\end{proof}
Combining Lemma~\ref{otherds} with Theorem~\ref{1+i} yields the following corollary.
\begin{cor}
Let $B \geq 2$.
For $D = 1 - i$, $-1 + i$, and $-1 -i$,
there exist arbitrarily long finite
$D$-consecutive sequences of Gaussian $B$-happy numbers.
\end{cor}
\end{document}
\begin{document}
\makeRR
\section{Introduction}
Graph-based semi-supervised learning methods have the following three principles at their foundation.
The first principle is to use a few labelled points (points with known classification) together with
the unlabelled data to tune the classifier. In contrast with supervised machine learning,
semi-supervised learning creates a synergy between the training data and the classification data.
This drastically reduces the size of the training set and hence significantly reduces the cost
of experts' work. The second principal idea of the semi-supervised learning methods is to use
a (weighted) similarity graph. If two data points are connected by an edge, this indicates some
similarity of these points. Then, the weight of the edge, if present, reflects the degree of similarity.
The result of classification is given in the form of classification functions. Each class has its own
classification function defined over all data points. An element of a classification function gives
a degree of relevance to the class for each data point. Then, the third principal idea of the
semi-supervised learning methods is that the classification function should change smoothly
over the similarity graph. Intuitively, nodes of the similarity graph that are closer together
in some sense are more likely to belong to the same class. This idea of classification function
smoothness can naturally be expressed using graph Laplacian or its modification.
The work \cite{ZGL03} seems to be the first in which graph-based semi-supervised learning
was introduced. The authors of \cite{ZGL03} formulated the semi-supervised learning method as
a constrained optimization problem involving graph Laplacian. Then, in \cite{Zetal04,ZB07} the
authors proposed optimization formulations based on several variations of the graph
Laplacian. In \cite{AGMS12} a unifying optimization framework was proposed which gives as
particular cases the methods of \cite{ZB07} and \cite{Zetal04}. In addition, the general
framework in \cite{AGMS12} gives as a particular case an interesting PageRank based method,
which provides robust classification with respect to the choice of the labelled points \cite{Aetal08,AGS13}.
We would like to note that the local graph partitioning problem \cite{ACL06,C09} can be
related to graph-based semi-supervised learning. An interested reader can find more details
about various semi-supervised learning methods in the surveys and books
\cite{CSZ06,FoussFrancoisseSaerens11,Z05}.
In the present work we study in detail a semi-supervised learning method based on the Regularized
Laplacian. To the best of our knowledge, the idea of using Regularized Laplacian and its kernel
for measuring proximity in graphs and application to mathematical sociology goes back to the works
\cite{CS97,CheSha98}. In \cite{FoussFrancoisseSaerens11} the authors compared experimentally many graph-based
semi-supervised learning methods on several datasets and their conclusion was that the semi-supervised
learning method based on the Regularized Laplacian kernel demonstrates one of the best performances on
nearly all datasets. In \cite{CallutSaerensDupont08} the authors studied a semi-supervised learning
method based on the Normalized Laplacian graph kernel which also shows good performance.
Interestingly, as we show below, if we choose Markovian Laplacian as a weight matrix, several known
semi-supervised learning methods reduce to the Regularized Laplacian method. In this work we formulate
the Regularized Laplacian method as a convex quadratic optimization problem which helps to design easily
parallelizable numerical methods. In fact, the Regularized Laplacian method can be regarded as
a Lagrangian relaxation of the method proposed in \cite{ZGL03}. Of course, this is a more flexible formulation,
since by choosing an appropriate value for the Lagrange multiplier one can always retrieve the method
of \cite{ZGL03} as a particular case. We establish various properties of the Regularized Laplacian method.
In particular, we show that the kernel of the method can be interpreted in terms of discrete and continuous time
random walks and possesses several important properties of proximity measures. Both optimization and linear algebra
methods can be used for efficient computation of the classification functions. We discuss advantages and
disadvantages of various numerical approaches. We demonstrate on numerical examples that the Regularized Laplacian
method is competitive with the other state-of-the-art semi-supervised learning methods.
The paper is organized as follows: In the next section we formally define the Regularized Laplacian method.
In Section~3 we discuss several related graph-based semi-supervised methods and graph kernels.
In Section~4 we present insightful interpretations and properties of the Regularized Laplacian method.
We analyse important limiting cases in Section~5. Then, in Section~6 we discuss various numerical approaches
to compute the classification functions and show by numerical examples that the performance of the Regularized
Laplacian method is better than or comparable with that of the leading semi-supervised methods. Section~7 concludes the
paper with directions for future research.
\section{Notations and method formulation}
Suppose one needs to classify $N$ data points (nodes) into $K$ classes and assume $P$ data points are labelled.
That is, we know the class to which each labelled point belongs.
Denote by $V_k$ the set of labelled points in class $k=1,...,K$. Of course, $|V_1|+...+|V_K|=P$.
The graph-based semi-supervised learning approach uses a weighted graph $G=(V,A)$ connecting data
points, where $V$, $|V|=N$, denotes the set of nodes and $A$ denotes the weight (similarity) matrix.
In this work we assume that $A$ is symmetric and the underlying graph is connected.
Each element $a_{ij}$ represents the degree of similarity between data points $i$ and~$j$.
Denote by $D$ the diagonal matrix with its \((i,i)\)-element equal to the sum
of the \(i\)-th row of matrix \(A\): \(d_{i}=\sum_{j=1}^{N}a_{ij}\).
We denote by $L=D-A$ the Standard (Combinatorial) Laplacian associated with the graph~$G$.
Define an \(N\times K\) matrix \(Y\) as
\begin{equation}
Y_{ik}=
\begin{cases}
1, & \text{if $i \in V_k$, i.e., point $i$ is labelled as a class $k$ point,}\\
0, & \text{otherwise.}
\end{cases} \nonumber
\end{equation}
We refer to each column $Y_{\ast k}$ of matrix $Y$ as a \emph{labeling function}.
Also define an \(N\times K\) matrix \(F\) and call its columns $F_{\ast k}$
\emph{classification functions}. The general idea of the graph-based semi-supervised learning
is to find classification functions so that on the one hand they are close
to the corresponding labeling function and on the other hand they change
smoothly over the graph associated with the similarity matrix.
This general idea can be expressed by means of the following particular
optimization problem:
\begin{equation}
\label{OptProb}
\min_{F}\left\{ \sum_{k=1}^K (F_{\ast k}-Y_{\ast k})^T (F_{\ast k}-Y_{\ast k})
+ \beta \sum_{k=1}^K F_{\ast k}^T L F_{\ast k} \right\},
\end{equation}
where $\beta \in (0,\infty)$ is a regularization parameter.
The regularization parameter $\beta$ represents a trade-off between
the closeness of the classification function to the labeling
function and its smoothness.
Since the first term in (\ref{OptProb}) is strictly convex and, the Laplacian $L$ being
positive-semidefinite, the second term is convex, the optimization problem (\ref{OptProb}) has a unique solution
determined by the stationarity condition
$$
2 (F_{\ast k}-Y_{\ast k})^T + 2 \beta F_{\ast k}^T L = 0, \quad k=1,...,K,
$$
which gives
\begin{equation}
\label{eq:FRegLap}
F_{\ast k} = (I + \beta L)^{-1} Y_{\ast k}, \quad k=1,...,K.
\end{equation}
The matrix $Q_\beta = (I + \beta L)^{-1}$ is known as the {\it Regularized Laplacian kernel\/} of the graph \cite{KL02,SK03}
and can be related to the matrix forest theorems \cite{CS97,AgaChe01} and stochastic matrices \cite{AgaChe01}.
The classification functions $F_{\ast k}, k=1,...,K,$ can be obtained either by numerical
linear algebra methods (e.g., power iterations) applied to (\ref{eq:FRegLap}) or by numerical
optimization methods applied to (\ref{OptProb}). We elaborate on numerical methods in Section~6.
Once the classification functions are obtained, the points are classified according to the rule
$$
F_{ik} > F_{ik'}, \ \forall k' \neq k \quad \Rightarrow \quad \mbox{Point $i$ is classified into class $k$.}
$$
The ties can be broken in arbitrary fashion.
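As an illustration of the method just described, the following sketch (the graph, labels, and value of $\beta$ are invented for the example) performs the single linear solve (\ref{eq:FRegLap}) and applies the classification rule:

```python
# Minimal sketch of the Regularized Laplacian classifier; all data below
# is made up for illustration.
import numpy as np

def regularized_laplacian_classify(A, labels, K, beta):
    """A: symmetric N x N similarity matrix; labels: dict {node: class};
    returns the predicted class of every node."""
    N = A.shape[0]
    L = np.diag(A.sum(axis=1)) - A              # standard (combinatorial) Laplacian
    Y = np.zeros((N, K))
    for i, k in labels.items():
        Y[i, k] = 1.0
    F = np.linalg.solve(np.eye(N) + beta * L, Y)  # F = (I + beta*L)^{-1} Y
    return F.argmax(axis=1)                     # row-wise classification rule

# Two triangles joined by a weak bridge; one labelled point per triangle.
A = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]:
    A[i, j] = A[j, i] = 1.0
A[2, 3] = A[3, 2] = 0.1                         # weak bridge between the clusters
pred = regularized_laplacian_classify(A, {0: 0, 5: 1}, K=2, beta=1.0)
```

On this toy graph each triangle inherits the class of its labelled point: `pred` is `[0, 0, 0, 1, 1, 1]`.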
\section{Related approaches}
Let us discuss a number of related approaches. First, we discuss formal relations and
in the numerical examples section we compare the approaches on some benchmark examples.
\subsection{Relation to heat kernels}
The authors of \cite{CY99,CY00} first introduced and studied the properties of the heat kernel
based on the normalized Laplacian. Specifically, they introduced the kernel
\begin{equation}
\label{eq:KerNormLap}
{\cal H}(t) = \exp (-t{\cal L}),
\end{equation}
where
$$
{\cal L} = D^{-1/2} L D^{-1/2}
$$
is the normalized Laplacian. Let us refer to ${\cal H}(t)$ as the \emph{normalized heat kernel}.
Note that the normalized heat kernel can be obtained as a solution of the following
differential equation
$$
{\cal \dot{H}}(t)=-{\cal L}{\cal H}(t),
$$
with the initial condition ${\cal H}(0)=I$. Then, in \cite{C07} the PageRank heat kernel was introduced
\begin{equation}
\label{e_PRhk}
\Pi(t) = \exp(-t(I-P)),
\end{equation}
where
\begin{equation}
\label{e_Pst}
P=D^{-1}A,
\end{equation}
is the transition probability matrix of the \emph{standard random walk\/}
on the graph. In \cite{C09} the PageRank heat kernel was applied to local graph partitioning.
In \cite{KL02} the heat kernel based on the standard Laplacian
\begin{equation}
\label{HeatKer}
H(t) = \exp(-tL),
\end{equation}
with $L=D-A$, was proposed as a kernel in the support vector machine
learning method. Then, in \cite{ZGL03} the authors proposed a semi-supervised learning
method based on the solution of a heat diffusion equation with Dirichlet boundary conditions.
Equivalently, the method of \cite{ZGL03} can be viewed as the minimization of the second
term in (\ref{OptProb}) with the values of the classification functions $F_{\ast k}$ fixed on the labelled
points. Thus, the proposed approach (\ref{OptProb}) is more general as it can be viewed as a Lagrangian
relaxation of \cite{ZGL03}. The results of the method in \cite{ZGL03} can be retrieved with a particular
choice of the regularization parameter.
\subsection{Relation to the generalized semi-supervised learning\\ method}
In \cite{AGMS12} the authors proposed a generalized optimization framework for graph
based semi-supervised learning methods
\begin{equation}\label{eq:genopt}
\min_{F} \left\{\sum_{i=1}^N \sum_{j = 1}^N w_{ij}\| {d_{i}}^{\sigma-1} F_{i \ast} - {d_{j}}^{\sigma-1}F_{j \ast}\|^2
+ \mu\sum_{i=1}^N {d_{i}}^{2\sigma-1} \| F_{i \ast} - Y_{i \ast}\|^2 \right\},
\end{equation}
where $w_{ij}$ are the entries of a \emph{weight matrix} $W=(w_{ij})$ which is a function of $A$
(in particular, one can also take $W=A$).
In particular, with $\sigma=1$ we retrieve the transductive semi-supervised learning method \cite{ZB07},
with $\sigma=1/2$ we retrieve the semi-supervised learning with local and global consistency
\cite{Zetal04} and with $\sigma=0$ we retrieve the PageRank based method \cite{Aetal08}.
The classification functions of the generalized graph based semi-supervised learning are given
by
$$
F_{\ast k} = \frac{\mu}{2+\mu} \left(I - \frac{2}{2+\mu}D^{-\sigma} W D^{\sigma-1}\right)^{-1} Y_{\ast k},
\quad k=1,...,K.
$$
Now taking as the weight matrix $W = I - \tau L = I - \tau (D-A)$ (note that with this choice
of the weight matrix, the generalized degree matrix $D'=\operatorname{diag}(W\bm{1})$ becomes the identity matrix), the above equation transforms to
$$
F_{\ast k} = \left(I + \frac{2\tau}{\mu} L \right)^{-1} Y_{\ast k},
\quad k=1,...,K,
$$
which is (\ref{eq:FRegLap}) with $\beta=2\tau/\mu$. It is very interesting to observe that
with the proposed choice of the weight matrix all the semi-supervised learning methods defined
by various $\sigma$'s coincide.
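This coincidence is easy to verify numerically. The sketch below (random data, illustrative only) computes the generalized solution for several values of $\sigma$ with $W = I - \tau L$ and compares it with the Regularized Laplacian solution with $\beta = 2\tau/\mu$.

```python
# Illustrative check: with W = I - tau*L the generalized solution is
# independent of sigma and equals the Regularized Laplacian solution.
import numpy as np

rng = np.random.default_rng(0)
N, K, tau, mu = 6, 2, 0.05, 0.4
A = rng.random((N, N)); A = (A + A.T) / 2; np.fill_diagonal(A, 0)
L = np.diag(A.sum(axis=1)) - A
W = np.eye(N) - tau * L                  # weight matrix; its row sums are all 1
Y = np.zeros((N, K)); Y[0, 0] = Y[N - 1, 1] = 1.0

def F_general(sigma):
    d = W @ np.ones(N)                   # generalized degrees (identically 1 here)
    M = np.diag(d ** -sigma) @ W @ np.diag(d ** (sigma - 1))
    return mu / (2 + mu) * np.linalg.solve(np.eye(N) - 2 / (2 + mu) * M, Y)

F_rl = np.linalg.solve(np.eye(N) + (2 * tau / mu) * L, Y)
assert all(np.allclose(F_general(s), F_rl) for s in (0.0, 0.5, 1.0))
```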
\section{Properties and interpretations of the Regularized Laplacian method}
There is a number of interesting interpretations and characterizations which we can provide for
the classification functions (\ref{eq:FRegLap}). These interpretations and characterizations will give
different insights about the Regularized Laplacian kernel $Q_\beta$ and the classification functions (\ref{eq:FRegLap}).
\subsection{Discrete-time random walk interpretation}
The Regularized Laplacian kernel $Q_\beta=(I+\beta L)^{-1}$ can be interpreted as the overall transition matrix of a random walk on the similarity graph $G$ with a geometrically distributed number of steps. Namely, consider a Markov chain whose states are our data points and the probabilities of transitions between distinct states are proportional to the corresponding entries
of the similarity matrix~$A$:
\begin{equation}
\label{graphMarkov}
\hat{p}_{ij}=\tau a_{ij}, \quad i,j=1,\ldots, N,\;\;i\ne j,
\end{equation}
where $\tau > 0$ is a sufficiently small parameter.
Then the diagonal elements of the transition matrix $\hat{P}=(\hat{p}_{ij})$ are
\begin{equation}
\label{dia_trans}
\hat{p}_{ii}=1-\sum_{j\ne i}\tau a_{ij},\quad i=1,\ldots, N
\end{equation}
or, in the matrix form,
\begin{equation}
\label{e_transM}
\hat{P} = I - \tau L.
\end{equation}
The matrix $\hat{P}$ determines a random walk on $G$ which differs from the ``standard'' one defined by~\eqref{e_Pst} and related to the PageRank heat kernel~\eqref{e_PRhk}.
As distinct from \eqref{e_Pst}, the transition matrix~\eqref{e_transM} is symmetric for every undirected graph; in general, it has a nonzero diagonal.
It is interesting to observe that $\hat{P}$ coincides with the weight matrix $W$ used for transformation of Subsection 3.2.
Consider a sequence of independent Bernoulli trials indexed by $0,1,2,\ldots$ with a certain success probability~$q$.
Assume that the number of steps, $K$, in a random walk is equal to the trial number of the first success.
And let $X_k$ be the state of the Markov chain at step $k$.
Then, $K$ is distributed geometrically:
\[
\Pr\{K=k\}=q(1-q)^k,\quad k=0,1,2,\ldots,
\]
and the transition matrix of the overall random walk after a random number of steps~$K$,
$Z=(z_{ij})$, $z_{ij}=\Pr\{X_K=j\mid X_0=i\},\quad i,j=1,\ldots, N,$ is given by
$$
Z = q \sum_{k=0}^\infty (1-q)^k \hat{P}^k = q \sum_{k=0}^\infty (1-q)^k (I-\tau L)^k
$$
$$
= q \left(I-(1-q)(I-\tau L)\right)^{-1} = \left(I + \tau (q^{-1}-1) L \right)^{-1}.
$$
\noindent
Thus, $Z=Q_\beta=(I + \beta L)^{-1}$ with $\beta=\tau(q^{-1}-1).$
This means that the $i$-th component of the classification
function can be interpreted as the probability of finding
the discrete-time random walk with transition matrix (\ref{e_transM}) in node $i$ after
the geometrically distributed number of steps with parameter $q$,
given the random walk started with the distribution $Y_{\ast k}/(\1^T\/Y_{\ast k})$.
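The identity above can be checked numerically by truncating the geometric series. The following sketch (random data, illustrative only) compares the truncated sum with $(I+\beta L)^{-1}$ for $\beta = \tau(q^{-1}-1)$.

```python
# Illustrative numeric check of the geometric-stopping identity
# Z = q * sum_k (1-q)^k * P_hat^k = (I + beta*L)^{-1},  beta = tau*(1/q - 1).
import numpy as np

rng = np.random.default_rng(1)
N, tau, q = 5, 0.05, 0.3
A = rng.random((N, N)); A = (A + A.T) / 2; np.fill_diagonal(A, 0)
L = np.diag(A.sum(axis=1)) - A
P_hat = np.eye(N) - tau * L              # symmetric lazy transition matrix

Z, term = np.zeros((N, N)), q * np.eye(N)
for _ in range(500):                     # truncate the geometric series
    Z += term
    term = (1 - q) * term @ P_hat        # term_k = q*(1-q)^k * P_hat^k

Q = np.linalg.inv(np.eye(N) + tau * (1 / q - 1) * L)
assert np.allclose(Z, Q)
```

Note that, unlike the standard random walk, $\hat P$ here is symmetric, so $Z$ is symmetric as well.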
\subsection{Continuous-time random walk interpretation}
Consider the differential equation
\begin{equation}
\label{eq:diffH}
\dot{H}(t) = -L H(t),
\end{equation}
with the initial condition $H(0)=I$. Also consider the standard continuous-time
random walk that spends exponentially distributed time in node $k$ with the expected
duration $1/d_k$ and after the exponentially distributed time moves to a new node $l$ with
probability $a_{kl}/d_k$. Then, the entry $h_{ij}(t)$ of the solution $H(t)=\exp(-tL)$ of the differential
equation (\ref{eq:diffH}) can be interpreted as the probability of finding the standard
continuous-time random walk in node $j$ given that the random walk started from node $i$.
By taking the Laplace transform of (\ref{eq:diffH}) we obtain
\begin{equation}\label{e_Hs}
H(s) = (sI + L)^{-1} = s^{-1}(I + s^{-1} L)^{-1}.
\end{equation}
Thus, the classification function (\ref{eq:FRegLap}) can be interpreted as the Laplace
transform multiplied by $s$; equivalently, the $i$-th component of the classification
function can be interpreted as a quantity proportional to the probability of finding
the random walk in node $i$ after an exponentially distributed time with mean $\beta=1/s$,
given that the random walk started with the distribution $Y_{\ast k}/(\1^T\/Y_{\ast k})$.
\subsection{Proximity and distance properties}
As before, let $Q_\beta\!=\!(q_{ij}^\beta)_{N\!\times\! N}^{}$ be the Regularized Laplacian kernel
$(I + \beta L)^{-1}$ of \eqref{eq:FRegLap}.
$Q_\beta$ determines a positive \emph{$1$-proximity measure\/} \cite{CheSha98a} $s(i,j):=q^\beta_{ij},$ i.e., it satisfies \cite{CS97} the following conditions:\\
\indent $(1)$ for any $i\in V,$ $\sum_{k\in V}q^\beta_{ik}=1$ and\\
\indent $(2)$ for any $i,j,k\in V,$ $q^\beta_{ji}+q^\beta_{jk}-q^\beta_{ik}\le q^\beta_{jj}$ with a strict inequality whenever $i=k$ and $i\ne j$ (the \emph{triangle inequality for proximities}).
This implies \cite{CheSha98a} the following two important properties:
(a) $q^\beta_{ii}>q^\beta_{ij}$ for all $i,j\in V$ such that $i\ne j$ (\emph{egocentrism property});
(b) $\rho_{ij}^\beta:=\beta(q_{ii}^\beta+q_{jj}^\beta-q_{ij}^\beta-q_{ji}^\beta)$
is\footnote{Cf.\ the cosine law~\cite{Critchley88} and the inverse covariance mapping~\cite[Section\:5.2]{DezaLaurent97}.} a distance on~$V.$ Because of the forest interpretation of $Q_\beta$ (see Section~\ref{s_forest}), it is called the \emph{adjusted forest distance}. The distances $\rho_{ij}^\beta$ have a twofold connection with the \emph{resistance distance\/} $\tilde\rho_{ij}$ on~$G$
\cite{CheSha00}.
First, $\lim_{\beta\to\infty}\rho^\beta_{ij}=\tilde\rho_{ij},\;i,j\in V.$
Second, let $G^\beta$ be the weighted graph such that: $V(G^\beta)=V(G)\cup\{0\},$ the restriction of $G^\beta$ to $V(G)$ coincides with $G$, and $G^\beta$ additionally contains an edge $(i,0)$ of weight~$1/\beta$ for each node $i\in V(G)$. Then it follows that $\rho^\beta_{ij}(G)=\tilde\rho_{ij}(G^\beta),\;i,j\in V.$
In the electrical interpretation of $G$, the weight $1/\beta$ of the edges $(i,0)$ is treated as conductivity, i.e., the lines connecting each node to the ``hub'' $0$ have resistance~$\beta.$
An interested reader can find more properties of the proximity measures determined by $Q_\beta$ in~\cite{CS97}.
Furthermore, every $Q_\beta,$ $\beta>0$ determines a \emph{transitional measure\/} on $V,$ which means \cite{Che11AAM} that:
$q^\beta_{ij}\,q^\beta_{\!jk}\le q^\beta_{ik}\,q^\beta_{\!jj}$ for all $i, j,k\in V$ with $q^\beta_{ij}\,q^\beta_{\!jk}=q^\beta_{ik}\,q^\beta_{\!jj}$ if and only if every path in $G$ from $i$ to $k$ visits~$j.$
It follows that $d^\beta_{ij}:=-\ln\left(q^\beta_{ij}/\sqrt{q^\beta_{ii}q^\beta_{jj}}\right)$ provides a distance on~$V.$ This distance is \emph{cutpoint additive\/}, that is, $d^\beta_{ij}+d^\beta_{jk}=d^\beta_{ik}$ if and only if every path in $G$ from $i$ to $k$ visits~$j.$ In the asymptotics, $d^\beta_{ij}$ becomes proportional to the shortest path distance and the resistance distance as $\beta\to0$ and $\beta\to\infty,$ respectively.
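Conditions $(1)$ and $(2)$ and the egocentrism property are straightforward to spot-check numerically. The sketch below (a random weighted graph, illustrative only) does so for a single value of $\beta$.

```python
# Illustrative spot-check of the unit-row-sum condition, the triangle
# inequality for proximities, and egocentrism for Q_beta = (I + beta*L)^{-1}.
import numpy as np

rng = np.random.default_rng(2)
N, beta = 6, 0.7
A = rng.random((N, N)); A = (A + A.T) / 2; np.fill_diagonal(A, 0)
L = np.diag(A.sum(axis=1)) - A
Q = np.linalg.inv(np.eye(N) + beta * L)

assert np.allclose(Q.sum(axis=1), 1)                       # condition (1)
assert all(Q[i, i] > Q[i, j]                               # egocentrism
           for i in range(N) for j in range(N) if i != j)
assert all(Q[j, i] + Q[j, k] - Q[i, k] <= Q[j, j] + 1e-12  # condition (2)
           for i in range(N) for j in range(N) for k in range(N))
```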
\subsection{Matrix forest characterization}
\label{s_forest}
By the \emph{matrix forest theorem\/} \cite{CS97,AgaChe01}, each entry $q_{ij}^\beta$ of $Q_\beta$ is equal to the specific weight of the spanning rooted forests that \emph{connect node $i$ to node $j$\/} in the weighted graph $G$ whose combinatorial Laplacian is~$L.$
More specifically, $q_{ij}^\beta={\cal F}^\beta_{i\dashv j}/{\cal F}^\beta,$ where ${\cal F}^\beta$ is the total $\beta$-weight of all spanning rooted forests of $G,$ ${\cal F}^\beta_{i\dashv j}$ being the total $\beta$-weight of such of them that have node $i$ in a tree rooted at~$j.$ Here, the \emph{$\beta$-weight of a forest\/} stands for the product of its edge weights, each multiplied by~$\beta.$
Let us mention a closely related interpretation of the Regularized Laplacian kernel $Q_\beta$ in terms of information dissemination~\cite{Che08DAM}.
Suppose that an information unit (an idea) must be transmitted through~$G$.
A {\em plan\/} of information transmission is a spanning rooted forest ${\cal F}$ in $G$:
the information unit is initially injected into the roots of ${\cal F}$; after that it comes to the other nodes
along the edges of~${\cal F}$. Suppose that a plan is chosen at random: the probability of every choice is proportional to the $\beta$-weight of the corresponding forest. Then by the matrix forest theorem, the probability that the information unit arrives at $i$ \emph{from root~$j$} equals $q_{ij}^\beta={\cal F}^\beta_{i\dashv j}/{\cal F}^\beta$.
This interpretation is particularly helpful in the context of machine learning for social networks.
\subsection{Statistical characterization}
Consider the problem of attribute evaluation from paired comparisons.
Suppose that each data point (node) $i$ has a \emph{value parameter} $v_i,$ and a series of paired comparisons $r_{ij}$
between the points is performed.
Let the result of $i$ in a comparison with $j$ obey the Scheff\'e linear statistical model~\cite{Scheffe52}
\begin{equation}\label{e_PCm}
E(r_{ij})=v_i-v_j,
\end{equation}
where $E(\cdot)$ is the mathematical expectation. The matrix form of \eqref{e_PCm} applied to an experiment is
$$
E(\bm r)=X\bm v,
$$
where $\bm v=(v_1,\ldots, v_N)^T,$ and $\bm r$ is the vector of comparison results, $X$ being the \emph{incidence matrix\/} (\emph{design matrix}\/, in terms of statistics): if the $k$th element of $\bm r$ is a comparison result of $i$ confronted to $j,$ then, in accordance with \eqref{e_PCm}, $x_{ki}=1,$ $x_{kj}=-1,$ and $x_{kl}=0$ for $l\not\in\{i,\,j\}.$
Suppose that $X$ is known, $\bm r$ being a sample, and the problem is to estimate~$\bm v$ up to a shift \cite[Section\:4]{Che94}. Then
\begin{equation}\label{e_re}
\bm{\tilde v}(\lambda)=(\lambda I+X^TX)^{-1}X^T\bm r
\end{equation}
is the well-known \emph{ridge estimate} of $\bm v,$ where $\lambda>0$ is the \emph{ridge parameter}. Denoting $\beta=\lambda^{-1}$ and $X^TX=L$ (it is easily verified that $X^TX$ is a Laplacian matrix whose $(i,j)$-entry with $j\ne i$ is minus the number of comparisons between $i$ and~$j$) one has
\begin{equation}\label{e_res}
\bm{\tilde v}(\lambda)=(I+\beta L)^{-1}\beta X^T\bm r,
\end{equation}
i.e., the solution is provided by the same transformation based on the Regularized Laplacian kernel as in~\eqref{eq:FRegLap} (cf.\ also~\eqref{e_Hs}). Here, the weight matrix $A$ of $G$ contains the numbers
of comparisons between nodes; $\bm s=X^T\bm r$ is the vector of the sums of comparison results of the nodes: $s_i=\sum_{j}r_{ij}-\sum_{j}r_{ji},$ where $r_{ij}$ and $r_{ji}$ are taken from~$\bm r,$ which has one entry (either $r_{ij}$ or $r_{ji}$) for each comparison result.
Suppose now that the value parameter $v_i$ (belonging to an interval centered at zero) is a \emph{positive or negative\/} intensity of some property, and thus, $v_i$ can be treated as a signed membership of data point $i$ in the corresponding \emph{class.} The pairwise comparisons $\bm r$ are performed with respect to this property.
Then $\beta X^T\bm r=\beta\bm s$
is a kind of labeling function or a crude correlate of membership in the above class, whereas \eqref{e_res} provides a refined measure of membership which takes into account proximity. Along these lines, \eqref{e_res} can be considered as a procedure of semi-supervised learning.
A Bayesian version of the model \eqref{e_PCm} enables one to interpret and estimate the ridge parameter $\lambda=1/\beta.$ Namely, assume that:
\\(i) the parameters $v_1,\ldots, v_N$ chosen at random from the universal set are independent random variables with zero mean and variance $\sigma_1^2$ and
\\(ii) for any vector $\bm v$, the errors in \eqref{e_PCm} are independent and have zero mean, their unconditional variance being $\sigma_2^2.$
It can be shown \cite[Proposition~4.2]{Che94} that under these conditions, the best linear predictors for the parameters $\bm v$ are the ridge estimators \eqref{e_res} with $\beta=\sigma_1^2/\sigma_2^2.$
The \emph{best linear predictors} for $\bm v$ are the $\tilde v_i$'s that minimize $E(\tilde v_i-v_i)^2$ among all statistics of the form
$\tilde v_i=c_i+C_i^T\bm r$ satisfying $E(\tilde v_i-v_i)=0.$
The variances $\sigma_1^2$ and $\sigma_2^2$ can be estimated from the experiment. In fact, there are many approaches to choosing the ridge parameter, see, e.g., \cite{Dorugade14,MunizKibria09} and the references therein.
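The following small synthetic experiment (all data invented for illustration) builds the incidence matrix $X$, confirms that $X^T X$ is a Laplacian matrix, and applies the ridge estimate \eqref{e_res}, recovering the value parameters up to a shift.

```python
# Illustrative synthetic paired-comparison experiment: X^T X has zero row
# sums (a Laplacian), and the ridge estimate recovers v up to a shift.
import numpy as np

rng = np.random.default_rng(3)
N = 5
v = rng.normal(size=N); v -= v.mean()            # true values, centred
pairs = [(i, j) for i in range(N) for j in range(N) if i != j] * 40
X = np.zeros((len(pairs), N))
r = np.zeros(len(pairs))
for k, (i, j) in enumerate(pairs):
    X[k, i], X[k, j] = 1.0, -1.0                 # design row for "i versus j"
    r[k] = v[i] - v[j] + 0.1 * rng.normal()      # E(r_ij) = v_i - v_j

L = X.T @ X                                      # a Laplacian matrix
beta = 10.0
v_est = np.linalg.solve(np.eye(N) + beta * L, beta * (X.T @ r))
v_est -= v_est.mean()                            # fix the shift
assert np.allclose(L.sum(axis=1), 0)
```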
\section{Limiting cases}
Let us analyse the formula (\ref{eq:FRegLap}) in two limiting cases: $\beta \to 0$ and
$\beta \to \infty$. If $\beta \to 0$, we have
$$
F_{\ast k} = (I - \beta L) Y_{\ast k} + \mbox{o}(\beta).
$$
Thus, for very small values of $\beta$, the method reduces, to first order in $\beta$, to a nearest neighbour method
with the weight matrix $W = I - \beta L$. If there are many points situated more than
one hop away from any labelled point, the method cannot produce good classification
with very small values of $\beta$. This will be illustrated by the numerical experiments in Section~6.
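The first-order expansion above can be verified numerically; the sketch below uses an assumed toy graph (a path on four nodes, not one of the paper's datasets) and checks that the error of the linearization shrinks quadratically in $\beta$:

```python
import numpy as np

# Path graph on 4 nodes; L = D - A is the standard Laplacian.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A
I = np.eye(4)

# Compare the resolvent with its first-order expansion I - beta*L.
errs = []
for beta in (1e-1, 1e-2, 1e-3):
    err = np.linalg.norm(np.linalg.inv(I + beta * L) - (I - beta * L))
    errs.append(err)
    print(beta, err)                    # error shrinks like beta**2
```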
Now consider the other case $\beta \to \infty$. We shall employ the Blackwell series
expansion \cite{B62,P94} for the resolvent operator $(\lambda I+L)^{-1}$ with $\lambda=1/\beta$:
\begin{eqnarray}
(I + \beta L)^{-1}
&=&\lambda(\lambda I + L)^{-1}
\nonumber\\\label{Blackwell}
&=&\lambda\left(\frac{1}{\lambda}\frac{1}{N} \1\1^T + H - \lambda H^2 + ...\right),
\end{eqnarray}
where $H = (L+\frac{1}{N}\1\1^T)^{-1} - \frac{1}{N}\1\1^T$ is the generalized (group) inverse
of the Laplacian. Since the first term in (\ref{Blackwell}) gives the same value
for all classes if $\1^T Y_{\ast k} = \1^T Y_{\ast l}$, $k \neq l$ (which is typically
the case), the classification will depend on the entries of the matrix~$H$ and finally, of the matrix $(L+\frac{1}{N}\1\1^T)^{-1}$.
Note that the matrix $(L+\alpha \1\1^T)^{-1}$, with a sufficiently small positive $\alpha$, determines a proximity measure called \emph{accessibility via dense forests}. Its properties are listed in \cite[Proposition~10]{CheSha98}.
An interpretation of $H$ in terms of spanning forests can be found in \cite[Theorem~3]{CheSha98};
see also~\cite{KirklandNeumannShader97}.
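A small numerical check of this expansion is given below; the graph is an assumed toy example, not from the paper. It verifies that $H=(L+\frac{1}{N}\1\1^T)^{-1}-\frac{1}{N}\1\1^T$ is the group inverse of $L$ and that $\lambda(\lambda I+L)^{-1}\approx \frac{1}{N}\1\1^T+\lambda H$ for small $\lambda$:

```python
import numpy as np

# Toy connected graph: triangle 1-2-3 with a pendant node 4 attached to 3.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
N = A.shape[0]
L = np.diag(A.sum(axis=1)) - A
J = np.ones((N, N)) / N                 # (1/N)*11^T
H = np.linalg.inv(L + J) - J            # group inverse of the Laplacian

# Group-inverse properties: L H L = L, and H annihilates the all-ones vector.
assert np.allclose(L @ H @ L, L)
assert np.allclose(H.sum(axis=1), 0.0)

# First two terms of the Blackwell expansion of the resolvent.
lam = 1e-4
resolvent = lam * np.linalg.inv(lam * np.eye(N) + L)
gap = np.linalg.norm(resolvent - (J + lam * H))
print(gap)                              # O(lam**2), i.e. very small
```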
The accessibility via dense forests violates a natural \emph{monotonicity\/} condition, as distinct from $(I+\beta L)^{-1}$ with a finite~$\beta.$ Thus, a better performance of the regularized Laplacian proximity measure with finite values of $\beta$ can be expected.
For the sake of comparison, let us analyse the limiting behaviour of the heat kernels. For instance, let us consider the Standard
Laplacian heat kernel (\ref{HeatKer}), since it is also based on the Standard Laplacian. In fact, it is immediate to see that the Standard
Laplacian heat kernel has the same asymptotics as the Regularized Laplacian kernel. Namely, if $t \to 0$,
$$
H(t) = \exp (-tL) = I - t L +\mbox{o}(t).
$$
Similar expressions hold for the other heat kernels. Thus, for small values of $t$, the semi-supervised learning methods based on
heat kernels should behave as the nearest neighbour method.
Next consider the Standard Laplacian heat kernel when $t \to \infty$. Recall that the Laplacian $L=D-A$ is a positive semi-definite symmetric
matrix. Without loss of generality, we can arrange the eigenvalues of the Laplacian as $0=\lambda_1\le \lambda_2 \le ...$ and
the corresponding orthonormal eigenvectors as $u_1,...,u_N$. Note that $u_1 = \1/\sqrt{N}$. Thus, we can write
$$
H(t) = u_1 u_1^T + \sum_{i=2}^N \exp(-\lambda_i t) u_i u_i^T.
$$
We can see that for large values of $t$ the first term in the above expression is non-informative as in the case of the Regularized Laplacian
method and we need to look for the second order term. However, in contrast to the Regularized Laplacian kernel, the second order term
$\exp(-\lambda_2 t) u_2 u_2^T$ is a rank-one term and cannot in principle give correct classification in the case of more than two classes.
The second term of the Regularized Laplacian kernel $H$ is not a rank-one matrix and as mentioned above can be interpreted in terms
of proximity measures.
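The large-$t$ behaviour described above can be demonstrated numerically; the graph below is an assumed toy example. For large $t$ the heat kernel collapses onto the non-informative rank-one projector $u_1 u_1^T$, with the deviation decaying like $\exp(-\lambda_2 t)$:

```python
import numpy as np

# Toy connected graph: triangle 1-2-3 with a pendant node 4 attached to 3.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
N = A.shape[0]
L = np.diag(A.sum(axis=1)) - A
lam, U = np.linalg.eigh(L)              # lam[0] = 0, U[:, 0] = 1/sqrt(N)

def heat_kernel(t):
    # Spectral form: H(t) = U diag(exp(-lam*t)) U^T.
    return (U * np.exp(-lam * t)) @ U.T

t = 20.0
Ht = heat_kernel(t)
rank_one = np.outer(U[:, 0], U[:, 0])   # the non-informative limit u1*u1^T
gap = np.linalg.norm(Ht - rank_one)
print(gap)                              # decays like exp(-lam[1]*t)
```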
\section{Numerical methods and examples}
Let us first discuss various approaches for the numerical computation of the classification functions (\ref{eq:FRegLap}).
Broadly speaking, the approaches can be divided into linear algebra methods and optimization methods. One of the basic
linear algebra methods is the power iteration method. Similarly to the power iteration method described in \cite{AMT15},
we can write
$$
F_{\ast k} = (I + \beta D - \beta A)^{-1} Y_{\ast k},
$$
$$
F_{\ast k} = (I - \beta (I+\beta D)^{-1}A)^{-1} (I+\beta D)^{-1}Y_{\ast k},
$$
$$
F_{\ast k} = (I - \beta (I+\beta D)^{-1}DD^{-1}A)^{-1} (I+\beta D)^{-1}Y_{\ast k}.
$$
Now denoting $B:=\beta (I+\beta D)^{-1}D$ and $C:=(I+\beta D)^{-1}$, we can propose the
following power iteration method to compute the classification functions
\begin{equation}
\label{eq:poweriter}
F_{\ast k}^{(s+1)} = B D^{-1} A F_{\ast k}^{(s)} + C Y_{\ast k}, \quad s=0,1,... \ ,
\end{equation}
with $F_{\ast k}^{(0)} = Y_{\ast k}$. Since $B$ is a diagonal matrix with the diagonal entries less
than one, the matrix $B D^{-1} A$ is substochastic with the spectral radius less than one and the
power iterations (\ref{eq:poweriter}) are convergent. However, for large values of $\beta$ and $d_i$,
the matrix $B D^{-1} A$ can be very close to stochastic and hence the convergence rate of the power
iterations can be very slow. Therefore, unless the value of $\beta$ is small, we recommend using
other methods from numerical linear algebra for the solution of linear systems with
symmetric matrices (recall that $L$ is a symmetric positive semi-definite matrix in the case of undirected graphs).
In particular, we tried the Cholesky decomposition method and the conjugate gradient method. Both methods
appeared to be very efficient for the problems with tens of thousands of variables. Actually, the
conjugate gradient method can also be viewed as an optimization method for the respective convex
quadratic optimization problem such as (\ref{OptProb}) and (\ref{eq:genopt}). A very convenient property
of optimization formulations (\ref{OptProb}) and (\ref{eq:genopt}) is that the objective, and consequently,
the gradient, can be written in terms of a sum over the edges of the underlying graph. This allows a very
simple (and with some software packages even automatic) parallelization of the optimization methods based
on the gradient. For instance, we have used the parallel implementation of the gradient based methods
provided by the NVIDIA CUDA sparse matrix library (cuSPARSE) \cite{CUDA} and it showed excellent performance.
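The power iteration (\ref{eq:poweriter}) can be sketched as follows; the graph, $\beta$, and labels are assumed toy values, not the paper's datasets. Since $B$ and $C$ are diagonal, they are stored as vectors:

```python
import numpy as np

# Toy graph on 5 nodes: triangle 1-2-3, path 3-4-5.
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)
N = A.shape[0]
d = A.sum(axis=1)                       # degrees (diagonal of D)
beta = 0.5
Y = np.array([1.0, 0.0, 0.0, 0.0, 1.0])  # labelled-point indicator for one class

Bdiag = beta * d / (1.0 + beta * d)     # diagonal of B = beta*(I+beta*D)^{-1}*D
Cdiag = 1.0 / (1.0 + beta * d)          # diagonal of C = (I+beta*D)^{-1}
P = A / d[:, None]                      # D^{-1} A, row-stochastic

# Iterate F <- B D^{-1} A F + C Y; B D^{-1} A is substochastic, so this converges.
F = Y.copy()
for _ in range(200):
    F = Bdiag * (P @ F) + Cdiag * Y

# Compare with the direct solve F = (I + beta*L)^{-1} Y.
F_direct = np.linalg.solve(np.eye(N) + beta * (np.diag(d) - A), Y)
gap = np.max(np.abs(F - F_direct))
print(gap)                              # essentially zero
```

Each iteration costs one sparse matrix-vector product, which is why this scheme scales well when $\beta$ is small; for large $\beta$ the contraction factor approaches one, consistent with the slow convergence noted above.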
Let us now illustrate the Regularized Laplacian method and compare it with some other state of the art
semi-supervised learning methods on two datasets: Les Miserables and Wikipedia Mathematical Articles.
The first dataset represents the network of interactions between major characters in the novel Les Miserables.
If two characters participate in one or more scenes, there is a link between these two characters. We consider
the links to be unweighted and undirected. The network of the interactions of Les Miserables characters
has been compiled by Knuth \cite{Knuth1993}. There are 77 nodes and 508 edges in the graph. Using
the betweenness based algorithm of Newman and Girvan \cite{Newman2004} we obtain 6 clusters which can be identified
with the main characters: Valjean (17), Myriel (10), Gavroche (18), Cosette (10), Thenardier (12), Fantine (10),
where in brackets we give the number of nodes in the respective cluster. First, we generate randomly (100 times)
labeled points (two labeled points per class). In Figure~\ref{fig:lesmisrand} we plot average precision as a
function of parameter $\beta$. In \cite{AGMS12,AGS13} it was observed that the PageRank based semi-supervised
method (obtained by taking $\sigma=0$ in (\ref{eq:genopt})) is the only method among a large family of semi-supervised
methods which is robust to the choice of the labelled data \cite{Aetal08,AGMS12,AGS13}. Thus, we compare the Regularized Laplacian method
with the PageRank based method. As we can see from Figure~\ref{fig:lesmisrand}.(a), the performance of the Regularized
Laplacian method is comparable to that of the PageRank based method on Les Miserables dataset.
The horizontal line in Figure~\ref{fig:lesmisrand}.(a) corresponds to the PageRank based method with the best choice
of the regularization parameter or the restart probability in the context of PageRank.
Since the Regularized Laplacian method is based on graph Laplacian, we also compare it in Figure~\ref{fig:lesmisrand}.(b)
with the three heat kernel methods derived from variations of the graph Laplacian. Specifically, we consider the three time-domain kernels based on various Laplacians: Standard Heat kernel (\ref{HeatKer}), Normalized Heat kernel (\ref{eq:KerNormLap}), and PageRank Heat kernel (\ref{e_PRhk}).
For instance, in the case of the Standard Heat kernel the classification functions are given by $F_{\ast k} = H(t) Y_{\ast k}$.
It turns out that all the three time-domain heat kernels are very sensitive to the value of the chosen time, $t$.
Even though there are parameter settings that give similar performances of Heat kernel methods and the Regularized Laplacian method,
the Regularized Laplacian method has a large plateau for values of $\beta$ where the good performance of the method is assured.
Thus, the Regularized Laplacian method is more robust with respect to the parameter setting than the heat kernel methods.
\begin{figure}
\caption{Les Miserables Dataset. Labelled points are chosen randomly.}
\label{fig:lesmisrand}
\end{figure}
To see better the behaviour of the heat kernel methods for large values of $t$, we have chosen a larger interval for $t$
in Figure~\ref{fig:largerts}. The performance of the heat kernel methods degrades quite significantly for large values of $t$.
This is actually predicted by the asymptotics given in Section~5. Since we have more than two classes, the heat kernels with
rank-one second order asymptotics are not able to distinguish among the classes. All heat kernel methods as well as the
Regularized Laplacian method show a deterioration in performance for small values of $t$ and $\beta$. This was predicted
in Section~5, as all the methods start to behave like the nearest neighbour method. In particular, as follows from the
asymptotics of Section~5 and can be observed in the figures, the Standard Laplacian heat kernel method and the Regularized
Laplacian method show exactly the same performance when $t \to 0$ and $\beta \to 0$.
\begin{figure}
\caption{Les Miserables Dataset. Heat Kernel methods vs PR method, larger $t$.}
\label{fig:largerts}
\end{figure}
It was observed in \cite{AGS13} that taking labelled data points with large (weighted) degree is typically
beneficial for the semi-supervised learning methods. Thus, we now label randomly two points out of three points
with maximal degree for each class. The average precision is given in Figure~\ref{fig:lesmisgs}.(a). We also test
heat kernel based methods with the same labelled points, see Figure~\ref{fig:lesmisgs}.(b). One can see that if
we choose the labelled points with large degree, the Regularized Laplacian Method outperforms the PageRank based
method. Some heat kernel based methods with large degree labelled points also outperform the PageRank based method
but their performance is much less stable with respect to the value of parameter $t$.
\begin{figure}
\caption{Les Miserables Dataset. Labelled points are chosen with large degrees.}
\label{fig:lesmisgs}
\end{figure}
Next, we consider the second dataset consisting of Wikipedia mathematical articles. This dataset is derived from the English language
Wikipedia snapshot (dump) from January 30, 2010\footnote{\textbf{\texttt{http://download.wikimedia.org/enwiki/20100130}}}.
The similarity graph is constructed by a slight modification of the hyper-text graph. Each Wikipedia article typically contains links to other Wikipedia articles which are used to explain specific terms and concepts. Thus, Wikipedia forms a graph whose nodes represent articles and whose edges represent hyper-text inter-article links. The links to special pages (categories, portals, etc.) have been ignored. In the present experiment we did not use the information about the direction of links, so the similarity graph in our experiments is undirected. Then we have built a subgraph with mathematics related articles, a list of which was obtained from ``List of mathematics articles'' page from the same dump. In the present experiments we have chosen
the following three mathematical classes: ``Discrete mathematics'' (DM), ``Mathematical analysis'' (MA), ``Applied mathematics'' (AM). With the help of AMS MSC Classification\footnote{\textbf{\texttt{http://www.ams.org/mathscinet/msc/msc2010.html}}} and experts we have classified related Wikipedia mathematical articles into the three above mentioned classes. As a result, we obtained three imbalanced classes DM (106), MA (368) and AM (435). The subgraph induced by these three topics
is connected and contains 909 articles. Then, the similarity matrix $A$ is just the adjacency matrix of this subgraph.
First, we have chosen uniformly at random 100 times 5 labeled nodes for each class. The average precisions corresponding
to the Regularized Laplacian method and the PageRank based method are plotted in Figure~\ref{fig:wikimathrand}.(a).
We also provide the results for the three heat kernel based methods in Figure~\ref{fig:wikimathrand}.(b).
As one can see, the results of Wikipedia Mathematical articles dataset are consistent with the results of Les Miserables
dataset.
\begin{figure}
\caption{Wiki Math Dataset. Labelled points are chosen randomly.}
\label{fig:wikimathrand}
\end{figure}
Then, for each class out of 10 data points with largest degrees we choose 5 points and average the results.
The average precisions for the Regularized Laplacian method, PageRank based method and for the three heat kernel
based methods are plotted in Figure~\ref{fig:wikimathgs}. The results are again consistent with the corresponding
results for Les Miserables dataset.
We would like to mention that for the computations on the Wiki Math dataset, with many parameter settings and
extensive averaging, using the NVIDIA CUDA sparse matrix library (cuSPARSE) \cite{CUDA} was noticeably faster
than using numpy.linalg.solve, which calls the LAPACK routine {\tt \_gesv}.
Finally, we would like to recall from Subsection~4.5 that a good value of $\beta$ can be provided by the
ratio $\sigma_1^2/\sigma_2^2$, where $\sigma_1^2$ is the variance related to the data points and $\sigma_2^2$
is the variance related to the paired comparison between points. We can argue that $\sigma_1^2$ is naturally
large and the paired comparisons between points can be performed with much more certainty, and hence, $\sigma_2^2$
is small. This gives a statistical explanation why it is good to take relatively large values for the
parameter $\beta$ in the Regularized Laplacian method.
\begin{figure}
\caption{Wiki Math Dataset. Labelled points are chosen with large degree.}
\label{fig:wikimathgs}
\end{figure}
\section{Conclusions}
We have studied in detail the semi-supervised learning method based on the Regularized Laplacian.
The method admits both linear algebraic and optimization formulations. The optimization formulation
appears to be particularly well suited for parallel implementation. We have provided various
interpretations and proximity-distance properties of the Regularized Laplacian graph kernel.
We have also shown that the method is related to the Scheff\'e linear statistical model.
The method was tested and compared with the other state of the art semi-supervised learning methods
on two datasets. The results from the two datasets are consistent. In particular, we can conclude that
the Regularized Laplacian method is comparable in performance with the PageRank based method and outperforms
the related heat kernel based methods in terms of robustness.
Several interesting research directions remain open for investigation. It will be interesting to compare
the Regularized Laplacian method with the other semi-supervised methods on a very large dataset. We are
currently working in this direction. We observe that there is a large plateau of $\beta$ values for which
the Regularized Laplacian method performs very well. It will be very useful to characterize this plateau
analytically. Also, it will be interesting to understand analytically why the Regularized Laplacian method
performs better when the labelled points with large degree are chosen.
\end{document}
\begin{document}
\title{Simulations and Experiments on Polarisation Squeezing in Optical Fibre}
\author{Joel F. Corney$^1$, Joel Heersink$^2$, Ruifang Dong$^2$, Vincent Josse$^2$, Peter D. Drummond$^1$, Gerd Leuchs$^2$ and Ulrik L. Andersen$^{2,3}$}
\affiliation{$^1$ARC Centre of Excellence for Quantum-Atom Optics, School of Physical
Sciences, The University of Queensland, Brisbane, QLD 4072, Australia}
\email{[email protected]}
\affiliation{$^2$Institut f\"ur Optik, Information und Photonik, Max-Planck Forschungsgruppe,
Universit\"at Erlangen-N\"urnberg, G\"unther-Scharowsky-Strasse 1, 91058 Erlangen, Germany}
\affiliation{$^3$Department of Physics, Technical University of Denmark, Building 309, 2800 Kgs.\ Lyngby, Denmark}
\begin{abstract}
We investigate polarisation squeezing of ultrashort pulses in optical fibre, over a wide range of input energies and fibre lengths. Comparisons are made between experimental data and quantum dynamical simulations, to find good quantitative agreement. The numerical calculations, performed using both truncated Wigner and exact $+P$ phase-space methods, include nonlinear and stochastic Raman effects, through coupling to phonon variables. The simulations reveal that excess phase noise, such as from depolarising GAWBS, affects squeezing at low input energies, while Raman effects cause a marked deterioration of squeezing at higher energies and longer fibre lengths. The optimum fibre length for maximum squeezing is also calculated.\end{abstract}
\pacs{42.50.Lc,42.50.Dv,42.81.Dp,42.65.Dr}
\maketitle
\section{Introduction}
The search for efficient means of quantum squeezing, in which quantum fluctuations in one observable are reduced below the standard quantum limit, at the expense of increased fluctuations in the conjugate, has been at the heart of modern developments in quantum optics\cite{Drummond:2004}. Beyond the fundamental interest of highly nonclassical light, optical squeezing is of interest for quantum information applications. Possible uses include: generating entanglement for quantum communication\cite{braunstein03.book}, making measurements below the standard quantum limit\cite{giovannetti04.sci}, and for precise engineering of the quantum states of matter\cite{hald00.jomo}.
The use of optical fibre for quantum squeezing has considerable technological advantages, such as generating squeezing directly at the communications wavelength and use of existing transmission technology. There is, however, a significant disadvantage in the excess phase noise that arises from acoustic waves, molecular vibrations, and defects in the amorphous silica.
Here we present an in-depth numerical and experimental study of polarisation squeezing in a single-pass scheme that successfully reduces the impact of this excess phase noise. The numerical simulations represent a quantitative, experimentally testable solution of quantum many-body dynamics.
The first proposals for the generation of squeezed light using the $\chi^{(3)}$ nonlinearity date back to 1979, with schemes involving a nonlinear Kerr interferometer~\cite{ritze79.oc} or degenerate four-wave mixing~\cite{yuen79.ol}. The first experimental demonstration used four-wave mixing in atomic samples~\cite{slusher85.prl}. The Kerr effect in optical fibers was also proposed as a mechanism for squeezing light~\cite{levenson85.ol,levenson85.pra,kitagawa86.pra}. Squeezing using fibres was first successfully implemented using a continuous wave laser, and was observed by a phase shifting cavity~\cite{shelby86.prl}.
However, early experiments\cite{levenson85.ol,levenson85.pra,shelby86.prl} were severely limited by the phase noise intrinsic to optical fibre. Such noise occurs in the form of thermally excited refractive index fluctuations in the fiber~\cite{shelby85.prl, perlmutter90.prb}, and arises from Guided Acoustic Wave Brillouin Scattering (GAWBS) and $1/f$ noise.
A substantial theoretical breakthrough was the recognition that short pulses, ideally in the form of solitons, could lead to much higher peak powers, thus allowing the generation of nonclassical light with fiber lengths short enough that thermally induced phase noise was not an issue. Such short pulses required a true multi-mode theoretical approach\cite{Carter:1987p1841}, which led to the first predictions of pulsed squeezing, and to an understanding of the scaling laws involved \cite{Shelby1990a}.
These predictions were confirmed in a landmark experiment by Rosenbluh and Shelby~\cite{rosenbluh91.prl}, which used intense, sub-picosecond laser pulses to eliminate much of the phase noise, and a simpler interferometric setup~\cite{kitagawa86.pra} in a balanced configuration. All fiber squeezers since have exploited ultrashort pulses.
Observation schemes implemented with standard fibers include: i) phase-shifting cavities~\cite{shelby86.prl}, ii) spectral filtering~\cite{friberg96.prl, spaelter98.oe, koenig98.jomo, nishizawa00.jjap, takeoka01.ol}, iii) balanced interferometers~\cite{rosenbluh91.prl, bergman91.ol, bergman93.ol, bergman94.ol, yu01.ol}, iv) asymmetric interferometers~\cite{schmitt98.prl, krylov98.ol, fiorentino2001a, heersink03.pra, meissner04.job} and v) a two-pulse, single-pass method generating squeezed vacuum~\cite{margalit98.oe, nishizawa02.jjap} or polarization squeezing~\cite{heersink05.ol,Dong:2008p116}.
Squeezing the polarization variables of light is a promising alternative~\cite{korolkova02.pra} to the squeezing in the amplitude quadrature or the photon number, which the vast majority of fiber squeezing experiments until now have implemented. That the quantum polarization variables could also exhibit noise reduction was first suggested by Chirkin~\textit{et al.} in 1993~\cite{chirkin93.qe}. This proposal for polarization squeezing is similar to earlier remarks by Walls and Zoller concerning atomic spin squeezing~\cite{walls81.prl} due to the systems' mathematical similarities. The first experiment to exploit the quantum properties of polarization was performed by Grangier~\textit{et al.} in 1987 in a squeezed-light-enhanced polarization interferometer~\cite{grangier87.prl}. The first explicit demonstration was achieved by S{\o}rensen~\textit{et~al.} in the context of quantum memory~\cite{sorensen98.prl}. Such a promising application sparked intensified interest, resulting in a number of theoretical investigations, e.g.~\cite{alodjants98.apb, korolkova02.pra, luis02.pra}. These in turn spawned a plethora of experiments in a variety of different systems: optical parametric oscillators~\cite{bowen02a.prl, bowen02b.prl, schnabel03.pra}, optical fibers~\cite{heersink03.pra, heersink05.ol,Dong:2008p116} and cold atomic samples~\cite{josse03.prl}.
In this paper we present a detailed experimental and theoretical investigation of the single-pass method for creating polarization squeezing, building upon our previous work~\cite{heersink05.ol, Corney:2006p023606}. This efficient and novel squeezing source has a number of advantages compared with previous experiments producing bright squeezing. For example, this setup is capable of producing squeezing over a wide range of powers, in contrast to asymmetric Sagnac loop schemes. There is thus a certain similarity to experiments using a Mach Zehnder interferometer as a flexible asymmetric Sagnac loop~\cite{fiorentino2001a}. The interference of a strong squeezed and a weak `coherent' beam in asymmetric loops however gives rise to a degradation in squeezing due to the dissimilarity of the pulses as well as losses from the asymmetric beam splitter.
In the single-pass scheme, the destructive effect arising from interfering dissimilar pulses (in power, temporal and spectral shape) is avoided by interfering two strong Kerr-squeezed pulses that co-propagate on orthogonal polarization axes. For equal power they have been found to be virtually identical within measurement uncertainties in, e.g. spectrum, autocorrelation and squeezing. This scheme presents the potential to measure greater squeezing and provides a greater robustness against input power fluctuations. Formally this interference of equally squeezed pulses is reminiscent of earlier experiments producing vacuum squeezing, for example~\cite{rosenbluh91.prl, bergman91.ol, nishizawa02.jjap}. The advantage here is that no extra local oscillator is needed in the measurement of polarization squeezing.
These novel experiments allow a careful experimental test of the multi-mode theory of optical squeezing. Here we make use of the comprehensive model developed by Carter and Drummond\cite{Carter:1991p3757} that includes the electronic $\chi^{(3)}$ nonlinear responses of the material and nonresonant coupling to phonons in the silica. The phonons provide a non-Markovian reservoir that generates additional, delayed nonlinearity, as well as spontaneous and thermal noise. The coupling is based on the experimentally determined Raman gain $\alpha^{R}(\omega)$~\cite{Stolen1989a}.
The simulation of pulse propagation entails the solution of time-domain dynamical evolution in a quantum field theory with large numbers of interacting particles. We achieve this here primarily with a truncated Wigner technique\cite{Drummond:1993p279}, which provides an accurate simulation of the quantum dynamics for short propagation times and large photon number. The quantum effects enter via initial vacuum noise, which makes the technique ideally suited to squeezing calculations. We compare simulation and experiments to find excellent agreement over a wide range of pulse energies and fibre lengths. From the simulations, we can identify the particular noise sources that are the limiting factors at high and low input energy.
We begin in Sec.~\ref{Squeezing} with an introduction to polarisation squeezing by means of the Kerr effect, from a single-mode picture, before presenting the detailed model of pulse propagation in fibres in Sec.~\ref{Pulse}. Sections \ref{Simulations} and \ref{Outputs} describe the numerical simulation methods used, while the experimental set-up is described in Sec.~\ref{Experiment}. Sec.~\ref{Results} discusses the results of both the experiment and simulations. The appendices contain further details of the theoretical description and numerical simulation.
\section{Squeezing}
\label{Squeezing}
\subsection{Kerr squeezing}
The generation of squeezed optical beams requires a nonlinear interaction to transform the statistics of the input, which is typically a coherent state. The first observation of quantum noise squeezing used four wave mixing~\cite{slusher85.prl} arising from the third order electric susceptibility $\chi^{(3)}$. Although material dispersion can place limits on the interaction length, this limitation can be circumvented by use of degenerate frequencies\cite{shelby86.pra}, as in the optical Kerr effect. Here the interaction has the effect of introducing an intensity dependence to the medium's refractive index, Eq.~(\ref{eq_n2}), which in turn induces an intensity-dependent phase shift in incident pulses. This effect dominates the nonlinearity in fibers made of fused silica, a material with an inversion symmetric molecular ordering. In a pure Kerr material the refractive index is an instantaneous function of the optical intensity and the refractive index $n$ is then given to second order by~\cite{sizmann99.pio}:
\begin{equation}
n = n_0 +n_2 I \quad{} \text{with}
\quad{} n_2 = \frac{3}{4}\ \frac{{\rm Re}\left(\chi^{(3)}_{xxxx}\right)}{n_{0}^2 \epsilon_{0} c},
\label{eq_n2}
\end{equation}
where the optical intensity is given by \mbox{$I = \frac{1}{2} n_0 \epsilon_0 c |E|^2$} and $\chi^{(3)}_{xxxx}$ is the third order susceptibility coefficient for the degenerate mode $x$. The instantaneity of fused silica's nonlinearity is true only to a first approximation. In reality, it is only the electronic contribution, which typically comprises 85\% of the total nonlinearity~\cite{smolorz99.ol}, that is instantaneous on the scale of the 130~fs pulses used here. The time dependence of the remainder cannot be neglected and arises primarily from Raman scattering~\cite{stolen89.josab}. Nonetheless, the simplification of an instantaneous response can be useful in gaining physical insight into the Kerr squeezing mechanism.
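An order-of-magnitude estimate of the resulting nonlinear phase shift, $\phi_{NL} = (2\pi/\lambda)\, n_2 I L$, is sketched below. All parameter values are illustrative assumptions (a commonly quoted $n_2$ for fused silica and plausible pulse and fibre parameters), not the values of the present experiment:

```python
import math

# Illustrative Kerr phase-shift estimate: n = n0 + n2*I gives a phase
# phi_nl = (2*pi/lam) * n2 * I * L_f accumulated over the fibre length.
n2 = 2.6e-20        # m^2/W, typical literature value for fused silica
lam = 1.5e-6        # m, telecom-band wavelength (assumed)
P_peak = 100.0      # W, assumed peak power of an ultrashort pulse
A_eff = 30e-12      # m^2, assumed effective mode area (30 um^2)
L_f = 10.0          # m, assumed fibre length

I_peak = P_peak / A_eff                          # optical intensity, W/m^2
phi_nl = (2 * math.pi / lam) * n2 * I_peak * L_f
print(f"nonlinear phase shift ~ {phi_nl:.2f} rad")
```

Phase shifts of order a radian are what distort the initially circular phase-space distribution into the squeezed ellipse discussed next.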
Figure~\ref{fig_kerrsq}(a) illustrates the effect of an instantaneous nonlinear refraction. Sending an ensemble of identical coherent states into a perfect Kerr medium causes a distortion of the initially symmetric phase-space distribution. One can explain this distortion by considering the input to consist of a superposition of photon number states, which the Kerr effect rotates relative to one another in phase space. The initially symmetric phase-space distribution characteristic of coherent states is thereby distorted into an ellipse or `squeezed' circle. Generally the squeezed state will be crescent shaped; however, for the experimental conditions of high intensities and small nonlinearities our states never become significantly curved.
The resultant quantum state is quadrature squeezed, where the squeezed quadrature $\hat{X}(\theta_{sq})$ is rotated by $\theta_{sq}$ relative to the amplitude quadrature or radial direction. The state's phase-space uncertainty distribution is altered such that the statistics in the amplitude quadrature remain constant in keeping with energy conservation. Thus the squeezed or noise-reduced optical quadrature cannot be detected directly in amplitude or intensity measurements. A detection scheme sensitive to the angle of the squeezed ellipse $\theta_{sq}$ is required.
\begin{figure}
\caption{\it (a) Representation in phase space of the evolution of a coherent beam (bottom right) under effect of the Kerr nonlinearity, which generates a quadrature (or Kerr) squeezed state (upper left). The arrow indicates the direction of state evolution with propagation. (b) Polarization squeezing generated by overlapping two orthogonally (i.e. $x$- and $y$-) polarized quadrature-squeezed states.}
\label{fig_kerrsq}
\end{figure}
\subsection{Single-mode picture of polarisation squeezing}
The characterization of quantum polarization states relies on the measurement of the quantum Stokes operators (see Ref.~\cite{korolkova02.pra} and references therein). These Hermitian operators are defined analogously to their classical counterparts as~\cite{jauch55.book}:
\begin{eqnarray}
\hat{S}_0&=&\hat{a}^{\dag}_{x}\hat{a}_{x}+\hat{a}^{\dag}_{y}\hat{a}_{y}, \quad
\hat{S}_1=\hat{a}^{\dag}_{x}\hat{a}_{x}-\hat{a}^{\dag}_{y}\hat{a}_{y}, \nonumber \\
\hat{S}_2&=&\hat{a}^{\dag}_{x}\hat{a}_{y}+\hat{a}^{\dag}_{y}\hat{a}_{x}, \quad
\hat{S}_3=i(\hat{a}^{\dag}_{y}\hat{a}_{x}-\hat{a}^{\dag}_{x}\hat{a}_{y}),
\label{eq_qstokes}
\end{eqnarray}
where $\hat{a}_x$ and $\hat{a}_y$ are two orthogonally polarized modes (with temporal, position and mode dependence implicit). These operators obey the SU(2) Lie algebra and thus, within a factor of $\frac{\hbar}{2}$, coincide with the angular momentum operators. The commutators of these operators, following from the noncommutation of the photon operators, are given by:
\begin{equation}
\left[\hat{S}_0,\hat{S}_i \right] = 0 \quad \text{and} \quad \left[\hat{S}_i,\hat{S}_j\right] =2i\epsilon_{ijk}\hat{S}_k,
\label{eq_qstoke_commutation}
\end{equation}
where $i,j,k = 1,2,3$ and $\epsilon$ is the antisymmetric symbol. These commutation relations lead to Heisenberg inequalities and therefore to the presence of intrinsic quantum uncertainties, analogous to those of the quadrature variables. However, the fundamental noise limit depends on the mean polarization state:
\begin{equation}
\Delta^2\hat{S}_i\Delta^2\hat{S}_j\geq \epsilon_{ijk} \left|\langle\hat{S}_k\rangle\right|^2,
\label{eq_stokeuncertainty}
\end{equation}
where the variance of $\hat{S}_i$ is given by \mbox{$\Delta^2\hat{S}_i=\langle\hat{S}_i^2\rangle-\langle\hat{S}_i\rangle^2$}. This quantum picture of the polarization state of light cannot be represented as a point on the Poincar\'e sphere, but rather as a distribution in the space spanned by the Poincar\'e parameters, analogous to the phase-space representation of quantum optical states. This is visualized in Fig.~\ref{fig_poincaresphere}, which shows the variances, i.e. full-width at half-maximum of the marginal distributions, of a coherent and a polarization squeezed state.
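The operator definitions (\ref{eq_qstokes}) and the commutators (\ref{eq_qstoke_commutation}) can be checked directly on a truncated two-mode Fock space; the truncation dimension below is an arbitrary illustrative choice:

```python
import numpy as np

# Truncated annihilation operator on a d-dimensional Fock space.
d = 6
a = np.diag(np.sqrt(np.arange(1, d)), k=1)
Id = np.eye(d)
ax, ay = np.kron(a, Id), np.kron(Id, a)       # modes x and y

# Stokes operators, Eq. (eq_qstokes).
S0 = ax.conj().T @ ax + ay.conj().T @ ay
S1 = ax.conj().T @ ax - ay.conj().T @ ay
S2 = ax.conj().T @ ay + ay.conj().T @ ax
S3 = 1j * (ay.conj().T @ ax - ax.conj().T @ ay)

# Check [S0, S1] = 0 and [S1, S2] = 2i*S3; both hold exactly here because
# all four operators conserve the total photon number.
comm = lambda X, Y: X @ Y - Y @ X
print(np.allclose(comm(S0, S1), 0))           # True
print(np.allclose(comm(S1, S2), 2j * S3))     # True
```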
\begin{figure}
\caption{\it Representation of the variances of a polarization squeezed (upper left) and a coherent state (lower right) on the Poincar\'e sphere.}
\label{fig_poincaresphere}
\end{figure}
Despite the fact that the Stokes uncertainty relations are state dependent, it is always possible to find pairs of maximally conjugate operators. This is equivalent to defining a Stokes basis in which only one parameter has a nonzero expectation value, which is always possible because polarization transformations are unitary. Consider a polarization state described by \mbox{$\langle\hat{S}_i\rangle=\langle\hat{S}_j\rangle=0$} and \mbox{$\langle\hat{S}_k\rangle=\langle\hat{S}_0\rangle\neq0$} where $i,j,k$ represent orthogonal Stokes operators. The only nontrivial Heisenberg inequality then reads:
\begin{equation}
\Delta^2 \hat{S}_i\, \Delta^2 \hat{S}_j \geq |\langle\hat{S}_k\rangle|^2=|\langle\hat{S}_0\rangle|^2,
\end{equation}
which mirrors the quadrature uncertainty relation, and polarization squeezing can then be similarly defined:
\begin{equation}
\Delta^2 \hat{S}_i < |\langle\hat{S}_k\rangle| < \Delta^2 \hat{S}_j.
\label{eq_polsq}
\end{equation}
The definition of the conjugate operators $\hat{S}_i,\hat{S}_j$ is not unique and there exists an infinite set of conjugate operators $\hat{S}_\perp(\theta),\hat{S}_\perp(\theta+\frac{\pi}{2})$ that are perpendicular to the state's classical excitation $\hat{S}_k$, for which $\langle\hat{S}_\perp(\theta)\rangle =0$ for all $\theta$. All these operator pairs exist in the $\hat{S}_i-\hat{S}_j$ `dark plane,' i.e. the plane of zero mean intensity. A general dark plane operator is described by:
\begin{equation}
\hat{S}_{\perp}(\theta) \,=\, \cos(\theta)\hat{S}_i + \sin(\theta)\hat{S}_j,
\label{rotation}
\end{equation}
where $\theta$ is an angle in this plane defined relative to $\hat{S}_i$. Polarization squeezing is then generally given by:
\begin{equation}
\Delta^2 \hat{S}_\perp(\theta_{sq}) < |\langle\hat{S}_0\rangle| < \Delta^2 \hat{S}_\perp(\theta_{sq}+\frac{\pi}{2}),
\label{eq_stokes_uncertainty}
\end{equation}
where $\hat{S}_\perp(\theta_{sq})$ is the maximally squeezed parameter and $\hat{S}_\perp(\theta_{sq}+\frac{\pi}{2})$ the antisqueezed parameter.
Consider, for example, the specific case of a $+\hat{S}_3$ or $\sigma_+$-polarized beam as in the experiments presented here. Let this beam be composed of the two independent modes $\hat{a}_x,\hat{a}_y$ with a relative $\frac{\pi}{2}$ phase shift between their mean values. This is depicted in Fig.~\ref{fig_kerrsq}(b) and described by $\langle\hat{a}_y\rangle=i\langle\hat{a}_x\rangle=i\alpha/\sqrt{2}$ and $\alpha \in\mathbb{R}$. The beam is then circularly polarized with $\hat{a}_{\sigma_+}$ as the mean field and $\hat{a}_{\sigma_-}$ is the orthogonal vacuum mode:
\begin{eqnarray}
\hat{a}_{\sigma_+} &=& \frac{-1}{\sqrt{2}} \left( \hat{a}_x - i\hat{a}_y \right) \quad \text{with} \quad \langle\hat{a}_{\sigma_+} \rangle = - \alpha, \nonumber \\
\hat{a}_{\sigma_-} &=& \frac{1}{\sqrt{2}} \left( \hat{a}_x + i\hat{a}_y \right) \quad \text{with} \quad \langle\hat{a}_{\sigma_-} \rangle = 0.
\end{eqnarray}
The Stokes operators in the plane spanned by $\hat{S}_1-\hat{S}_2$ correspond to the quadrature operators of the dark $\hat{a}_{\sigma_-}$-polarization mode. Assuming $|\langle\delta\hat{a}\rangle|\ll|\alpha|$ and considering only the noise terms, we find:
\begin{eqnarray}
\delta \hat{S}_\perp(\theta) &=& \alpha \left(\delta\hat{a}_{\sigma_-}e^{-i\theta} + \delta\hat{a}^\dagger_{\sigma_-}e^{i\theta} \right) \nonumber \\
&=& \alpha\delta\hat{X}_{\sigma_-}(\theta) \nonumber \\
&=& \frac{\alpha}{\sqrt{2}}\left(\delta\hat{X}_x(\theta) + \delta\hat{X}_y(\theta-\frac{\pi}{2})\right),
\label{eq_polsq_darkmode}
\end{eqnarray}
where the Stokes operator definitions of Eq.~(\ref{eq_qstokes}) have been used in a linearized form. The sum signal, a measure of the total intensity, is given by:
\begin{equation}
\delta \hat{S}_0 = \alpha\left(\delta\hat{a}_{\sigma_+} + \delta\hat{a}^\dagger_{\sigma_+} \right)= \alpha\delta\hat{X}_{\sigma_+},
\label{eq_polsq_brightmode}
\end{equation}
and thus exhibits no dependence on the dark mode. This physical interpretation shows that polarization squeezing is equivalent to vacuum squeezing in the orthogonal polarization mode:
\begin{equation}
\Delta^{2} \hat{S}_\perp(\theta) \;<\; |\alpha|^2 \quad{} \Leftrightarrow \quad{} \Delta^2 \hat{X}_{\sigma_-}(\theta) \;<\; 1.
\label{eq_sq_equivalence}
\end{equation}
Whilst a particular case is considered here, a straightforward generalization to all other polarization bases is readily made as polarization transformations are unitary rotations in SU(2) space.
In dark-plane Stokes measurements, the beam's intensity is divided equally between two photodetectors. Such measurements are then identical to balanced homodyne detection: the classical excitation is a local oscillator for the orthogonally polarized dark mode. The phase between these modes is varied by rotating the Stokes measurement through the dark plane, allowing full characterization of the noise properties of the dark, $y$-polarized mode. This is a unique feature of polarization measurements and has been used to great advantage in many experiments, for example~\cite{grangier87.prl, smithey93.prl, sorensen98.prl, julsgaard04.nat, bowen03.job, josse04.job, heersink05.ol, Heersink:2006p253601}. This has also allowed the first characterization of a bright Kerr squeezed state as well as the reconstruction of the polarization variable Q function using polarization measurements~\cite{Marquardt:2007p220401}.
To show how an $\hat S_3$-polarised state is squeezed by the Kerr effect, we consider the essential Kerr Hamiltonian:
\begin{equation}
\hat{H} = (\hat a^\dagger_x\hat a_x)^2 + (\hat a^\dagger_y\hat a_y)^2,
\end{equation}
which in terms of the Stokes operators, can be expressed as
\begin{equation}
\hat{H} = \frac{1}{2} \left\{ \hat S_0^2 + \hat S_1^2 \right \}.
\end{equation}
The first term is a constant of the motion, since $\hat S_0$ gives the total number of photons, and has no effect on the dynamics. The second term is a nonlinear precession around the $S_1$ axis: the rate of precession is proportional to $S_1$, which is a manifestation of the intensity-dependent refractive index of the Kerr effect. The nonlinear precession will distort an initially symmetric distribution centred in the $S_1$-$S_2$ plane (e.g.\ the $S_3$ circularly polarised state located at a pole of the sphere) into an ellipse. As for ordinary quadrature squeezing, the nonlinear precession preserves the width in the $S_1$ direction, and so the squeezing is not directly observable by a number-difference measurement.
The advantage of the squeezed $S_3$ state, as opposed to squeezing of a linearly polarised $S_2$ state, is that a simple rotation around the $S_3$ axis allows the squeezed (or antisqueezed) axis of the ellipse to be aligned to the $S_1$ axis and thus to be detected with a number-difference measurement. Such a rotation is easily implemented experimentally with a polarisation rotator.
\section{Pulse propagation}
\label{Pulse}
\subsection{Multimode description}
We have so far described the polarisation squeezing as a single-mode Kerr effect. However, this is accurate only for CW radiation, corresponding to a single momentum component. Ultrashort pulses, on the other hand, correspond to a superposition of many plane waves and thus require a multimode description. Such a description is crucial for an accurate treatment of dispersive and Raman effects. For a continuum of momentum modes, we can express the superposition as
\begin{equation}
\widehat{\Psi}_{\sigma}(t,z)=\frac{1}{\sqrt{2\pi}}\int dk\,\widehat{a}_{\sigma}(t,k)e^{i(k-k_{0})z+i\omega_{0}t},
\end{equation}
where instead of annihilation and creation operators for each polarisation mode, we now have field operators $\widehat{\Psi}^\dagger_{\sigma}(t,z)$, $\widehat{\Psi}_{\sigma}(t,z)$ for the envelopes of each of the polarisation modes $\sigma=(x,y)$. The commutation relations of the fields are
\begin{equation}
\left[\widehat{\Psi}_{\sigma}(t,z),\widehat{\Psi}_{\sigma'}^{\dagger}(t,z')\right]=\delta(z-z')\delta_{\sigma\sigma'},
\end{equation}
and with this normalisation, the total number of $\sigma$-photons in the fibre is thus $\widehat{N}_\sigma (t)=\int_0^L dz \widehat{\Psi}^\dagger_{\sigma}(t,z)\widehat{\Psi}_{\sigma}(t,z)$.
The general quantum model for a fibre with a single transverse mode is derived in \cite{Drummond:2001p139}. The relevant aspects for the current system include the dispersive pulse propagation, the electronic polarisation response that gives the instantaneous $\chi^{(3)}$, and the nonresonant coupling to phonons in the silica.
\subsection{Electromagnetic Hamiltonian}
In terms of the field operators for the slowly varying envelope defined above, the normally ordered Hamiltonian for an electromagnetic pulse in a polarisation-preserving fibre under the rotating-wave approximation is:
\begin{eqnarray}
\widehat H_{\rm EM} &=&\hbar \sum_\sigma\int\int dz\; dz' \omega_\sigma(z-z') \widehat{\Psi}^\dagger_{\sigma}(t,z)\widehat{\Psi}_{\sigma}(t,z') \nonumber \\
&& - \hbar \chi_E \sum_\sigma \int dz \widehat{\Psi}^{\dagger2}_{\sigma}(t,z)\widehat{\Psi}^2_{\sigma}(t,z),
\end{eqnarray}
where $ \omega_\sigma (z)$ is the Fourier transform of the dispersion relation:
\begin{eqnarray}
\omega_\sigma(z) \equiv \frac{1}{2\pi} \int dk \omega_\sigma(k) e^{i(k-k_{0})z}, \end{eqnarray}
and $ \chi_E $ is the strength of the third-order polarisation response. The birefringence of the polarisation response means that there are differences between the dispersion relations $\omega_x$ and $\omega_y$. The $\chi^{(3)}$ term is assumed to be independent of polarisation, and cross-Kerr effects are neglected, as the different group velocities of the pulses mean that the length of time that the pulses overlap in the fibre is negligible. The fibre is assumed to be homogeneous, with both $\omega_\sigma(k)$ and $\chi_E$ independent of the distance $z$ down the fibre.
To simplify the description of the dispersive part, we Taylor expand $\omega_\sigma(k)$ around $k=k_0$ up to second order, which introduces the group velocity $v_\sigma\equiv d\omega_\sigma/dk |_{k=k_0}$ and the dispersion parameter $\omega'' \equiv d^2\omega_\sigma/dk^2 |_{k=k_0}$. Subtracting off the free evolution at the carrier frequency $\omega_0 = \omega_x(k_0)$, one obtains the simplified Hamiltonian:
\begin{eqnarray}
\widehat H_{\rm EM}' & = & \hbar \sum_\sigma\int dz \left\{ \frac{iv_\sigma}{2} (\nabla \widehat{\Psi}^\dagger_{\sigma}\widehat{\Psi}_{\sigma} - \widehat{\Psi}^\dagger_{\sigma}\nabla\widehat{\Psi}_{\sigma}) \right. \nonumber\\
&&+ \left. \frac{\omega''}{2}\nabla \widehat{\Psi}^\dagger_{\sigma}\nabla\widehat{\Psi}_{\sigma}
- \chi_E \widehat{\Psi}^{\dagger2}_{\sigma}\widehat{\Psi}^2_{\sigma}\right\}.
\end{eqnarray}
Here we have not included the difference in phase velocity between the two polarisations, which just leads to a constant relative phase shift.
For the methods that we use in this paper, it is convenient to treat the quantum dynamics in the Heisenberg picture, with time-evolving field operators. The equation of motion of the field annihilation operator that arises from the electromagnetic Hamiltonian is
\begin{eqnarray}
\frac{d}{dt} \widehat{\Psi}_{\sigma} &=& \frac{-i}{\hbar}\left[\widehat{\Psi}_{\sigma},\widehat H_{\rm EM}' \right] \nonumber\\
& = & \left \{ - v_\sigma\nabla + \frac{i\omega''}{2}\nabla^2 + i\chi_E \widehat{\Psi}_{\sigma}\widehat{\Psi}^\dagger_{\sigma}\right\}\widehat{\Psi}_{\sigma}.
\end{eqnarray}
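In the mean-field limit this equation of motion reduces to the classical nonlinear Schr\"odinger equation. A minimal symmetrised split-step Fourier sketch in normalised soliton units (an illustration with assumed grid parameters, not the phase-space code used in this work) confirms that the fundamental soliton propagates with an invariant envelope:

```python
import numpy as np

# normalised NLSE:  d(phi)/d(zeta) = (i/2) d^2(phi)/d(tau)^2 + i |phi|^2 phi
# (mean-field limit: no quantum noise, no Raman term)
M, T = 256, 20.0
tau = (np.arange(M) - M//2) * (T/M)           # co-moving time grid
k = 2*np.pi*np.fft.fftfreq(M, d=T/M)          # conjugate frequencies
phi = 1/np.cosh(tau)                          # fundamental soliton

dz, steps = 0.01, 500                         # propagate to zeta = 5
half = np.exp(-0.25j*k**2*dz)                 # half a dispersion step
for _ in range(steps):
    phi = np.fft.ifft(half*np.fft.fft(phi))   # dispersion, half step
    phi *= np.exp(1j*np.abs(phi)**2*dz)       # Kerr nonlinearity, full step
    phi = np.fft.ifft(half*np.fft.fft(phi))   # dispersion, half step

# the soliton envelope is invariant up to a phase rotation
assert np.max(np.abs(np.abs(phi) - 1/np.cosh(tau))) < 1e-2
```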
\subsection{Raman Hamiltonian}
As well as the interaction with electrons that produces the polarisation response, the radiation field also interacts with phonons in the silica. The photons can excite both localised oscillations of the atoms around their equilibrium positions (the Raman effect) and guided acoustic waves along the waveguide (GAWBS, guided acoustic wave Brillouin scattering). The latter can be treated as a low-frequency component of the Raman spectrum, and produces random fluctuations in the refractive index. However, the effect of this is largely removed in this experiment through common-mode rejection, and any residual phase noise can be accounted for by simple scaling laws (see section \ref{sec:phasenoise}).
The Raman interactions produce both excess phase noise and an additional nonlinearity. The atomic oscillation is modelled as a set of harmonic oscillators at each point in the fibre, and is coupled to the radiation field by a simple dispersive interaction:
\begin{eqnarray}
\widehat H_{\rm R} &=&\hbar \sum_{\sigma,k} R_k \int dz \widehat{\Psi}^\dagger_{\sigma}(z)\widehat{\Psi}_{\sigma}(z) \left\{\widehat b_{\sigma k}(z) + \widehat b^\dagger_{\sigma k}(z) \right\} \nonumber \\
&& + \hbar\sum_{\sigma,k} \omega_k \int dz\, \widehat b^\dagger_{\sigma k}(z) \widehat b_{\sigma k}(z),
\end{eqnarray}
where the phonon operators have the commutation relations
\begin{eqnarray}
\left[\hat{b}_{\sigma k}(z,t),\hat{b}_{\sigma' k'}^{\dagger}(z',t)\right] & = & \delta(z-z')\delta_{k,k'}\delta_{\sigma,\sigma'}.
\end{eqnarray}
The spectral profile of this interaction $R(\omega)$ is well known from experimental measurements\cite{stolen89.josab} and is sampled here by oscillators of equal spectral spacing $\Delta\omega = \omega_{k+1}-\omega_k$, such that $\lim_{\Delta\omega \rightarrow 0} R_k/\sqrt{\Delta\omega} = R(\omega_k)$. The finite spectral width of the Raman profile means that the Raman contribution to the nonlinearity is not instantaneous on the time-scale of the optical pulse, leading to such effects as the soliton frequency shift\cite{mitschke86.ol,gordon86.ol}.
With the Raman and electromagnetic Hamiltonians combined, one can derive complete Heisenberg operator equations of motion for the optical field operator and the phonon operators\cite{Drummond:2001p139}: \begin{eqnarray}
\left(\frac{\partial}{\partial t}+v\frac{\partial}{\partial z}\right)\hat{\Psi}(z,t) & = & \lc-i\sum_{k}R_{k}\left\{ \hat{b}_{k}+\hat{b}_{k}^{\dagger}\right\} \right.\nonumber\\
&&\left. +\frac{i\omega''}{2}\frac{\partial^{2}}{\partial z^{2}}
+i\chi^{E}\hat{\Psi}^{\dagger}\hat{\Psi}\rc\hat{\Psi}\nonumber\\
\frac{\partial}{\partial t}\hat{b}_{k}(z,t) & = & -i\omega_k\hat{b}_{k}-iR_{k}\hat{\Psi}^{\dagger}\hat{\Psi},\end{eqnarray}
where we have suppressed the polarisation index, since the equations for each polarisation are independent.
The initial state of the phonons is thermal, with
\begin{equation}
\ex{\hat{b}_{k'}^{\dagger}(z',0)\hat{b}_{k}(z,0)}=n_{\textrm{{th}}}(\omega_k)\delta_{k,k'}\delta(z-z'),
\end{equation}
where $n_{\textrm{th}}(\omega) = 1/\left[ \exp(\hbar\omega/k_B T)-1\right]$ is the Bose-Einstein distribution.
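As a rough numerical illustration (the temperature and frequencies below are assumed for illustration, not taken from the text): at room temperature the low-frequency, GAWBS-like modes are strongly occupied, while the main Raman band near 13~THz is only weakly occupied:

```python
import numpy as np

hbar, kB = 1.054571817e-34, 1.380649e-23      # SI values

def n_th(omega, T):
    """Bose-Einstein occupation at angular frequency omega and temperature T."""
    return 1.0/np.expm1(hbar*omega/(kB*T))

nu = np.array([0.3e12, 1.0e12, 13.0e12])      # illustrative phonon frequencies (Hz)
occ = n_th(2*np.pi*nu, 300.0)                 # room temperature
# low-frequency acoustic modes carry many thermal quanta (classical-like noise),
# while the ~13 THz Raman peak is mostly in its ground state
```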
\section{Simulation Methods}
\label{Simulations}
\subsection{Phase-space methods}
Phase-space methods are a means of simulating the dynamics of multimode many-body quantum systems. They are based on (quasi)probabilistic representations of the density matrix that are defined by means of coherent states. Because they are based on coherent states, they are ideally suited to simulating quantum optical experiments, which in so many cases begin with the coherent output of a laser. The two representations that give rise to practical numerical methods are the $+P$ \cite {Glauber:1963p2766,Sudarshan:1963p277,Chaturvedi:1977p187,Drummond:1980p2353} and Wigner
\cite{Wigner:1932p749} distributions. In both methods, the resultant description has the same structure as the mean-field, or classical, description, which is a form of nonlinear Schr\"odinger equation in the case of optical fibres. However there are also additional quantum noise terms, which may appear in the initial conditions or in the dynamical equations.
The $+P$ method provides an \emph{exact} probabilistic description in which stochastic averages correspond to normally ordered correlations. Because of this normal ordering, it is suited to intensity correlation measurements. Quantum effects enter by stochastic terms that have the form of spontaneous scattering. The $+P$ method has been applied to a variety of quantum-optical applications, including
superfluorescence\cite{Haake:1979p1740,Drummond:1982p3446}, parametric amplifiers\cite{Gardiner:2004} and optical fibres\cite{Carter:1987p1841,Carter:1991p3757}. More recently, it has been applied to a variety of Bose-Einstein condensate (BEC) simulations\cite{Drummond:1999p2661,Drummond:2004p040405,Poulsen:2001p013616,Hope:2001p3220,Poulsen:2001p023604,Savage:2006p033620}.
The Wigner method, on the other hand, is an approximate method that is valid for large photon number $\overline{n}$ and short fibre length $L$. Here it is symmetrically ordered correlations that correspond to stochastic averages. Because of this symmetric ordering, the quantum effects enter via vacuum noise in the initial conditions\cite{Carter:1995p3274}, making it a simple and efficient method for squeezing calculations\cite{Drummond:1993p279}. It is also enjoying increasing utility in BEC simulations\cite{Steel:1998p4824}.
\subsection{Wigner equations}
The Wigner representation maps the operator equations of motion onto (almost) equivalent
stochastic phase-space equations.
The mapping is not exact because the `nonlinear' term leads to higher-order
(higher than second) derivatives in the equation for the Wigner function,
which must be neglected in order that the mapping to stochastic equations
can be completed. These neglected terms are the ones, for instance, which would allow the Wigner function to become negative.
The resultant equations are, up to a constant phase rotation,
\begin{eqnarray}
\frac{\partial}{\partial t}\Psi(z,t) & = & \lc-i\sum_{k}R_{k}\left\{ b_{k}+b_{k}^{*}\right\} - v\frac{\partial}{\partial z}\right.\nonumber\\
&&\left.+\frac{i\omega''}{2}\frac{\partial^{2}}{\partial z^{2}}+i\chi^{E}|\Psi|^{2}\rc\Psi\nonumber\\
\frac{\partial}{\partial t}b_{k}(z,t) & = & -i\omega_k b_{k}-iR_{k}\left(|\Psi|^{2}-\frac{1}{2\Delta z}\right),\nonumber\\\end{eqnarray}
where we have assumed that the fields will be discretized over a lattice with segment size $\Delta z$.
The initial conditions are \begin{eqnarray}
b_{k}(z,0) & = & \Gamma_{k}^{b}(z)\nonumber\\
\Psi(z,0) & = & \ex{\hat{\Psi}(z,0)}+\Gamma_{\Psi}(z),\end{eqnarray}
where the stochastic terms have correlations \begin{eqnarray}
\ex{{\Gamma_{k'}^{b}}^{*}(z')\Gamma_{k}^{b}(z)} & = & \left\{ n_{\textrm{{th}}}(\omega_k)+\frac{1}{2}\right\} \delta_{k,k'}\delta(z-z')\nonumber\\
\ex{\Gamma_{\Psi}^{*}(z')\Gamma_{\Psi}(z)} & = & \frac{\delta(z-z')}{2}.\end{eqnarray}
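On a numerical lattice of spacing $\Delta z$, this delta-correlated vacuum noise becomes a complex Gaussian of variance $1/(2\Delta z)$ per site (half a photon per mode). A minimal sampling sketch with illustrative lattice parameters:

```python
import numpy as np

rng = np.random.default_rng(1)
M, dz = 4096, 0.05                            # lattice size and spacing (illustrative)

# <Gamma*(z') Gamma(z)> = delta(z - z')/2  ->  variance 1/(2 dz) per lattice site,
# split equally between the real and imaginary quadratures
noise = (rng.normal(size=M) + 1j*rng.normal(size=M))*np.sqrt(1.0/(4*dz))

var = np.mean(np.abs(noise)**2)               # estimates 1/(2 dz)
```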
\subsection{$+P$ equations}
Phase-space equations that correspond exactly to the operator equations can be defined over a doubled phase space using the $+P$ representation. Quantum effects enter here through multiplicative noise terms in the equations, which generally lead to a larger sampling error than the Wigner method for squeezing calculations. While the Wigner method was used for nearly all of the simulations presented here, the $+P$ method provides important benchmark results, and was used to check the validity of the Wigner calculations in key cases.
The resultant $+P$ equations are
\begin{eqnarray}
\frac{\partial}{\partial t}\Psi(z,t) & = & \lc-i\sum_{k}R_{k}\left\{ b_{k}+b_{k}^{+}\right\} \Delta\omega - v\frac{\partial}{\partial z} \right.\nonumber\\
&&\left.+\frac{i\omega''}{2}\frac{\partial^{2}}{\partial z^{2}}+i\chi^{E}\Psi^+\Psi +\sqrt{i}\Gamma^{E} +i\Gamma^{R} \rc\Psi \nonumber\\
\frac{\partial}{\partial t}b_{k}(z,t) & = & -i\omega_k b_{k}-iR_{k}\Psi^+\Psi +\Gamma^{R}_k,
\label{pp1}
\end{eqnarray}
with equations for $\Psi^+$ and $b_k^+$ that have a conjugate form but with some independent noise terms. The initial conditions are
\begin{eqnarray}
b_{k}(z,0) & = & \Gamma_{k}^{b}(z)\nonumber\\
\Psi(z,0) & = & \ex{\hat{\Psi}(z,0)},
\label{pp2}
\end{eqnarray}
where the stochastic terms have correlations \begin{eqnarray}
\ex{{\Gamma_{k'}^{b}}^{+}(z')\Gamma_{k}^{b}(z)} & = & n_{\textrm{{th}}}(\omega_k) \delta_{k,k'} \delta(z-z')\nonumber\\
\ex{\Gamma^{E}(z',t')\Gamma^{E}(z,t)} & = & \chi^{E}\delta(z-z')\delta(t-t')\nonumber\\
&=&\ex{\Gamma^{E+}(z',t')\Gamma^{E+}(z,t)},\nonumber\\
\ex{\Gamma^{R}(z',t')\Gamma^{R}_k(z,t)} & = & R_k\delta(z-z')\delta(t-t')\nonumber\\
&=&\ex{\Gamma^{R+}(z',t')\Gamma^{R+}_k(z,t)},
\label{pp3}
\end{eqnarray}
with all other correlations zero.
In writing down explicit equations for the phonon variables, we have followed the approach of Carter\cite{Carter:1995p3274}. In this approach there is some freedom in how the Raman noise is distributed between the photon and phonon variables, a fact which could be exploited to optimise the performance of the simulations. The alternative approach, as in \cite{Drummond:2001p139}, analytically integrates the phonon variables out, to give nonlocal equations for the photon fields.
\subsection{Scaling}
To simplify the numerical calculation, we transform to a propagating
frame of reference with dimensionless variables: $\tau=(t-z/v)/t_{0}$,
$\zeta=z/z_{0}$ and $\Omega=\omega t_{0}$, where $z_{0}=t_{0}^{2}/k''$.
The fields are also rescaled: $\phi=\Psi\sqrt{vt_{0}/\overline{n}}$
and $\beta_{k}=r_{k}b_{k}\exp(i\Omega\tau)\sqrt{z_{0}/t_{0}\overline{n}}$,
where $r_{k}=R_{k}\sqrt{\overline{n}z_{0}/t_{0}v^{2}}$ is the rescaled
Raman coupling, which is related to the Raman gain $\alpha^{R}(\Omega)$
via $r_{k}=\sqrt{\alpha^{R}(k\Delta\Omega)/2\pi}$.
The quantity $\bar{n}=v^{2}t_{0}/\chi z_{0}$ gives the typical number
of photons in a soliton of width $t_{0}$. The effective nonlinearity
that gives rise to solitons has both electronic and Raman contributions:
$\chi=\chi_{E}+\chi_{R}$, where the Raman contribution is estimated
to be a fraction $f=\chi_{R}/\chi\simeq0.15$ of the total.
For $v^{2}t_{0}^{2}\ll z_{0}^{2}$, the rescaled Wigner equations are:
\begin{eqnarray}
\frac{\partial}{\partial\zeta}\phi(\zeta,\tau) & = & \lc-i\sum_{k}\left\{ \beta_{k}\exp(-i\Omega\tau)+\beta_{k}^{*}\exp(i\Omega\tau)\right\} \Delta\Omega \right.\nonumber\\
&&\left.+\frac{i}{2}\frac{\partial^{2}}{\partial\tau^{2}}+i(1-f)|\phi|^{2}\rc\phi\nonumber\\
\frac{\partial}{\partial\tau}\beta_{k}(\zeta,\tau) & = & -ir_{k}^{2}\left(|\phi|^{2}-\frac{vt_{0}}{2\overline{n}z_{0}\Delta\zeta}\right)\exp(i\Omega\tau),\end{eqnarray}
with initial conditions \begin{eqnarray}
\beta_{k}(\zeta,\tau=-\infty) & = & \Gamma_{k}^{\beta}(\zeta)\nonumber\\
\phi(\zeta=0,\tau) & = & \sqrt{\frac{vt_{0}}{\overline{n}}}\ex{\hat{\Psi}(0,t_{0}\tau)}+\Gamma_{\phi}(\tau),\end{eqnarray}
where the stochastic terms have correlations \begin{eqnarray}
\ex{{\Gamma_{k'}^{\beta}}^{*}(\zeta')\Gamma_{k}^{\beta}(\zeta)} & = & \frac{r_{k}^{2}}{\overline{n}}\left\{n_{k}+\frac{1}{2}\right\} \frac{\delta_{k,k'}}{\Delta\Omega}\delta(\zeta-\zeta')\nonumber\\
\ex{\Gamma_{\phi}^{*}(\tau')\Gamma_{\phi}(\tau)} & \simeq & \frac{\delta(\tau-\tau')}{2\overline{n}},\label{eq:scaled_wigner}
\end{eqnarray}
where $n_{k}=n_{\textrm{{th}}}(k\Delta\Omega/t_{0})$.
For numerical convenience, the field is split into two parts: a coherent field that obeys the classical equations of motion, and a difference field that contains the stochastic evolution. The equations of motion of each part are given in appendix \ref{split_fields}.
The rescaled $+P$ equations follow similarly from Eqs (\ref{pp1})-(\ref{pp3}), and are given in appendix \ref{rescaled_pp}. Because of the much larger sampling error that arises in the $+P$ calculations, we make use of the fact that the Wigner method calculates the linearised evolution exactly, and use the $+P$ method only to calculate the difference between the linearised and full evolution. If $\phi_{WL}$ is a Wigner solution to the linearised equations, and $\phi_{PL}$ and $\phi_{P}$ are $+P$ solutions to the linearised and full equations, respectively, calculated with identical noise sources, then the final solution is $\phi = \phi_{P}-\phi_{PL} + \phi_{WL}$. Because the difference between the full and linearised solutions is small, $\phi_P$ and $\phi_{PL}$ have very similar fluctuations in a given run; taking the difference removes most of the large $+P$ fluctuations, and adds in only the small Wigner fluctuations.
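The variance reduction achieved by this differencing can be illustrated with a toy model (purely schematic, not the fibre equations): the same noise is pushed through a weakly nonlinear map and its linearisation, and the sampled difference is added to the exactly known linearised statistics:

```python
import numpy as np

rng = np.random.default_rng(2)
eps, n = 0.01, 50000
x = rng.normal(size=n)                        # common noise realisations

full = x + eps*x**2                           # toy "full" nonlinear evolution
lin = x                                       # same noise through the linearised map

exact_lin_mean = 0.0                          # linearised statistics known exactly
est_diff = exact_lin_mean + np.mean(full - lin)
est_naive = np.mean(full)

# both estimate eps*E[x^2] = eps, but the differenced estimator's sampling
# error is much smaller because the O(1) fluctuations cancel between runs
```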
\section{Outputs and Moments}
\label{Outputs}
We find that good precision (a few percent of the squeezing in decibels) is obtained when averages are calculated using 1000 realisations of the Wigner equations. For further precision, 10,000 trajectories can be used, in which case the sampling error is too small to be resolved on the graphs. With the $+P$ method, on the other hand, we find that at least 10,000 trajectories are needed in some cases to produce useful results, even when the differencing method is used.
The observable moments in the polarisation squeezing measurements are integrated intensity measurements and their variances, which are neither simply normally ordered nor symmetrically ordered. Thus the results of the phase-space simulations must be adjusted for reordering, as we describe below.
In the theoretical description of the system, there are two optical fields, corresponding to the two polarisation modes of the fibre: $\widehat{\Psi}_{x}(t,z)$ and $\widehat{\Psi}_{y}(t,z)$. To describe the polarisation squeezing, we define integrated Stokes operators, which are a generalisation of Eq.~(\ref{eq_qstokes}):
\begin{eqnarray}
\widehat{S}_{0}\equiv\widehat{N}_{xx}(T)+\widehat{N}_{yy}(T), && \widehat{S}_{1}\equiv\widehat{N}_{xx}(T)-\widehat{N}_{yy}(T),\nonumber \\
\widehat{S}_{2}\equiv\widehat{N}_{xy}(T)+\widehat{N}_{yx}(T), && \widehat{S}_{3}\equiv i\widehat{N}_{yx}(T)-i\widehat{N}_{xy}(T),
\end{eqnarray}
where $T$ is the propagation time down the length of fibre and $\widehat{N}_{\sigma\sigma'}(t)=\int dz\widehat{\Psi}_{\sigma}^{\dagger}(t,z)\widehat{\Psi}_{\sigma'}(t,z)$. After the polarisation rotator, the fields are transformed to
\begin{eqnarray}
\widehat{\Psi}_{x}'(t,z) &=& \cos(\theta/2)\widehat{\Psi}_{x}(t,z) - i \sin(\theta/2)\widehat{\Psi}_{y}(t,z)\nonumber\\
\widehat{\Psi}_{y}'(t,z) &=& \sin(\theta/2)\widehat{\Psi}_{x}(t,z) + i \cos(\theta/2)\widehat{\Psi}_{y}(t,z),
\end{eqnarray}
which leaves $\widehat{S}_{0}$ unchanged but which transforms $\widehat{S}_{1}$ to
\begin{equation}
\widehat{S}_{\theta}=\cos(\theta)\widehat{S}_{1}+\sin(\theta)\widehat{S}_{2}.
\end{equation}
To calculate the squeezing in $\widehat{S}_{\theta}$, we need to calculate the mean $\ex{\widehat{S}_{\theta}}$ and the mean square $\ex{\widehat{S}_{\theta}^2}$.
\subsection{$+P$ Moments}
For the $+P$ method, stochastic averages of the phase-space variables give normally ordered moments. Thus the mean $\ex{\widehat{S}_{\theta}}$ can be calculated directly, as it is already normally ordered. The mean square, however, requires a reordering:
\begin{eqnarray}
\ex{\widehat{S}_{\theta}^2}&=&\left< :\left( \cos(\theta)\widehat{S}_{1}+\sin(\theta)\widehat{S}_{2}\right)^2 : \right>+ \ex{\widehat{S}_{0}},
\end{eqnarray}
as shown in appendix \ref{Normal}.
For convenience, we define corresponding stochastic polarisation parameters ${s}_j$, ${s}_\theta$ in terms of the normalised +P fields: $n_{\sigma\sigma'}(\zeta)\equiv \int d\tau{\phi}_{\sigma}^{+}(\tau,\zeta){\phi}_{\sigma'}(\tau,\zeta)$. The measured variance can then be written:
\begin{eqnarray}
{\rm var}(\widehat{S}_{\theta})&=& \overline{n}^2 \left( \ex{{s}_{\theta}^2}_{+P} - \ex{{s}_{\theta}}_{+P}^2+ \frac{1}{\overline n} \ex{{s}_{0}}_{+P} \right ),
\end{eqnarray}
where $\ex{\dots}_{+P}$ denotes a stochastic average with respect to an ensemble of $+P$ trajectories. The correction term here corresponds to the shot-noise level of a coherent state (for which
$\ex{{s}_{\theta}^2}_{+P} = \ex{{s}_{\theta}}_{+P}^2$): ${\rm var}(\widehat{S}_{\theta})_{\rm coh} = \ex{\widehat S_0} = \overline n \ex{{s}_{0}}_{+P} $. Thus the amount of squeezing in decibels is given by:
\begin{equation}
{\rm Squeezing (dB)} = 10 \log \frac{\overline n \ex{{s}_{\theta}^2}_{+P} - \overline n \ex{{s}_{\theta}}_{+P}^2+ \ex{{s}_{0}}_{+P}}{\ex{{s}_{0}}_{+P}}.
\end{equation}
\subsection{Wigner Moments}
Stochastic averages in the Wigner method correspond to symmetrically ordered products, thus making a reordering necessary for both the mean and variance of the integrated intensity measurements. First we note the symmetric form of $\widehat{N}_{\sigma\sigma'}$:
\begin{eqnarray}
\left. \widehat N_{\sigma\sigma'} \right|_{\rm sym} &=& \frac{1}{2}\int dz\left\{\widehat{\Psi}_{\sigma}^{\dagger}(z)\widehat{\Psi}_{\sigma'}(z) + \widehat{\Psi}_{\sigma'}(z)\widehat{\Psi}_{\sigma}^{\dagger}(z) \right\} \nonumber \\
&=& \widehat N_{\sigma\sigma'} + \frac{1}{2}\delta_{\sigma\sigma'}M,
\end{eqnarray}
where $M$ is the number of Fourier modes used to decompose the pulse shape. Because $\widehat S_2$ and $\widehat S_3$ contain only cross-polarisation coherences, there is no correction from reordering. In $\widehat S_1$, the corrections from the horizontally and vertically polarised terms cancel out. Thus it is only the total intensity that requires a correction, and this corresponds to the vacuum-energy contribution:
\begin{eqnarray}
\left. \widehat S_0 \right|_{\rm sym} &=& \widehat S_0 + M.
\end{eqnarray}
The variance of the Stokes operators contains terms with products of four operators, each of which corresponds to 24 possible orderings. As appendix \ref{Symmetric} shows, most of the corrections cancel out, leaving:
\begin{eqnarray}
\left .\widehat{S}_{\theta}^2\right|_{\rm sym} &=& \widehat{S}_{\theta}^2 + \frac{1}{2}M.
\end{eqnarray}
Similarly to above, we can define an analogous stochastic polarisation parameter ${s}_\theta$ in terms of the normalised Wigner fields: $n_{\sigma\sigma'}(\zeta)\equiv \int d\tau{\phi}_{\sigma}^{*}(\tau,\zeta){\phi}_{\sigma'}(\tau,\zeta)$. The measured variance can then be written:
\begin{eqnarray}
{\rm var}(\widehat{S}_{\theta})&=& \overline{n}^2 \left( \ex{{s}_{\theta}^2}_{W} - \ex{{s}_{\theta}}_{W}^2- \frac{1}{2\overline n^2} M \right ),
\end{eqnarray}
where $\ex{\dots}_{W}$ denotes a stochastic average with respect to an ensemble of Wigner trajectories. The shot-noise reference level is given by ${\rm var}(\widehat{S}_{\theta})_{\rm coh} = \ex{\widehat S_0} = \overline n \ex{{s}_{0}}_{W} - M$. Thus the amount of squeezing in decibels is
\begin{equation}
{\rm Squeezing (dB)} = 10 \log \frac{\overline n \ex{{s}_{\theta}^2}_{W} - \overline n \ex{{s}_{\theta}}_{W}^2 - \frac{1}{2\overline n} M }{\ex{{s}_{0}}_{W} - \frac{1}{\overline n} M}.
\end{equation}
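The bookkeeping in this expression can be packaged as a small routine; the synthetic samples below are drawn at exactly the coherent-state level (with assumed, illustrative values of $\overline{n}$, $M$ and the trajectory count), so the estimate should reproduce 0~dB to within sampling error:

```python
import numpy as np

def squeezing_dB(s_theta, s0, nbar, M):
    """Wigner-method squeezing relative to shot noise, with the
    symmetric-ordering corrections of the formula above."""
    num = nbar*np.mean(s_theta**2) - nbar*np.mean(s_theta)**2 - 0.5*M/nbar
    den = np.mean(s0) - M/nbar
    return 10*np.log10(num/den)

rng = np.random.default_rng(3)
nbar, M, ntraj = 1.0e8, 256, 200000           # illustrative values
# coherent state: var(S_theta) = nbar<S0> - M,  i.e.  nbar^2 var(s_theta) = nbar + M/2
sigma = np.sqrt((nbar + 0.5*M)/nbar**2)
s_theta = rng.normal(0.0, sigma, ntraj)
s0 = np.full(ntraj, 1.0 + M/nbar)             # mean s0 includes the vacuum term

dB = squeezing_dB(s_theta, s0, nbar, M)       # approximately 0 dB
```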
\section{Experiment}
\label{Experiment}
The laser system used in these experiments is a home-made solid-state laser with Cr$^{4+}$:YAG as the active medium~\cite{spaelter97.apb}. This system emits pulses with temporal widths of $\tau_0=$130-150~fs at a central wavelength $\lambda_0$ between 1495 and 1500~nm. These ultrashort pulses exhibit a bandwidth-limited hyperbolic-secant temporal amplitude envelope and are thus assumed to be solitons. The laser repetition rate is 163~MHz and the average output power lies between 60 and 90~mW, corresponding to pulse energies of 370-550~pJ.
\begin{figure}
\caption{\it Schematic of the single-pass method for the efficient production of polarization squeezed states. The Stokes measurement after the fiber scans the dark $\hat{S}_1$-$\hat{S}_2$ plane.}
\label{fig_polsqsetup}
\end{figure}
In the present configuration, pictured in Fig.~\ref{fig_polsqsetup}, laser pulses are coupled into only one end of the glass fiber. This produces quadrature squeezing, which, unlike amplitude squeezing, is not directly detectable (see Fig.~\ref{fig_kerrsq}(a)). However, overlapping two such independently and simultaneously squeezed pulses after the fiber allows access to this quadrature squeezing by measurement of the Stokes parameters (Fig.~\ref{fig_kerrsq}(b)). This requires compensation of the fiber birefringence, which we choose to carry out before the fiber to avoid unnecessary losses to the squeezed beams. The optical fiber used was the FS-PM-7811 fiber from 3M, chosen for its high birefringence, i.e. good polarization maintenance, as well as its relatively small mode-field diameter, i.e. high effective nonlinearity and thus low soliton energy. The most relevant fiber parameters are listed in Table~\ref{tab_fiber}.
\begin{table}[h]
\centering
\begin{tabular}{|l|c|c|c|c|}
\hline
\quad Parameter & Symbol & Fibre I & Fibre II & Units \\
\hline
\hline
Mode field diameter & {\it d} & \( 5.42\) & \( 5.69 \) & \( \mu \)m \\
\hline
Nonlinear refractive & n\( _{2} \) & 2.9 &2.9 & m \(^{2}\)/W \\
index (\( \times 10^{-20}\) ) & & & \\
\hline
Effective nonlinearity & \( \gamma \) & 5.3 & 4.8 & 1/(m\(\cdot\)W) \\
(\( \times 10^{-3}\) ) & & & \\
\hline
Soliton energy & \( E_{Sol} \) & 56 & 60 & pJ \\
\hline
Dispersion & \( k'' (=\beta_{2}) \) & -10.5 & -11.1 & fs\( ^{2} \)/mm \\
\hline
Attenuation @ 1550~nm & \( \alpha \) & 1.82 & 2.03 & dB/km \\
\hline
Beat length & \( L_b \) & 1.67& 1.67 & mm \\
\hline
Polarization crosstalk & \(\Delta P\) & \( < -23\) & \( < -23 \) & dB \\
per 100m & & & \\
\hline
\end{tabular}\\
\caption{\it Values for the material parameters for the 3M FS-PM-7811 fiber. Fibres I and II refer to two different production runs. All values (when not otherwise stated) are for $\lambda_0= 1500$~nm and $\tau_0=130$~fs.}
\label{tab_fiber}
\end{table}
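As a plausibility check on the soliton energies in Table~\ref{tab_fiber} (our own sketch, not from the text), one can use the textbook fundamental-soliton relation $E_{\rm sol} = 2|\beta_2|/(\gamma t_0)$, taking the sech width parameter as $t_0 \approx \tau_{\rm FWHM}/1.763$; both of these formulas are assumptions, not stated in the paper:

```python
def soliton_energy_pj(beta2_fs2_per_mm, gamma_per_w_m, fwhm_fs):
    """Fundamental-soliton energy E = 2|beta2| / (gamma * t0), in pJ."""
    t0 = fwhm_fs / 1.763 * 1e-15           # sech width parameter, s
    beta2 = abs(beta2_fs2_per_mm) * 1e-27  # fs^2/mm -> s^2/m
    return 2.0 * beta2 / (gamma_per_w_m * t0) * 1e12

# Fibre I parameters from the table, 130 fs pulses
print(round(soliton_energy_pj(-10.5, 5.3e-3, 130.0)))  # ~54 pJ vs. 56 pJ listed
```

The estimate lands within a few pJ of the tabulated 56~pJ, consistent with the stated parameters.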
For experimental ease, the polarization of the beam after the fiber was set to be circular, e.g. $\sigma_+$. The orthogonal Stokes parameters in the dark $\hat{S}_1$-$\hat{S}_2$ plane, given by Eq.~\ref{rotation}, are measured by rotating a half-wave plate before a polarising beam splitter, as in Fig.~\ref{fig_polsqsetup}. Equations (\ref{eq_polsq_darkmode}-\ref{eq_polsq_brightmode}) provide an interpretation of the classical excitation in $\hat{a}_{\sigma_+}$ as a perfectly matched local oscillator for the orthogonally polarized dark mode $\hat{a}_{\sigma_-}$. The phase between $\hat{a}_{\sigma_+}$ and $\hat{a}_{\sigma_-}$ varies with the rotation of the half-wave plate angle, $\Theta$, to give the phase-space angle $\theta=4\Theta$. This noise level was compared with the respective Heisenberg limit. The sum photon current, $\hat{S}_0$, gives the amplitude noise of the input beam, which for a Kerr-squeezed state equals the shot noise. This reference level was verified by observation of the balanced homodyne detection of a coherent state as well as the sum of the balanced homodyne detection of the $x$- and $y$-polarized modes.
The polarizing beam splitter outputs were detected by two balanced photodetectors based on pin photodiodes. The detectors had a DC output ($<$1~kHz) to monitor the optical power as well as an AC output (5-40~MHz). This frequency window was chosen to avoid low frequency technical noise and the high frequency laser repetition rate. The sum and difference of the detectors' AC photocurrents, representing the noise of different Stokes variables, were fed into a spectrum analyzer (Hewlett-Packard 8595E) to measure the spectral power density at 17.5~MHz with a resolution bandwidth of 300~kHz and a video bandwidth of 30~Hz.
\section{Results - experiment and simulation}
\label{Results}
\subsection{Characterising the single-pass method}
The single-pass squeezing method allows the measurement of greater squeezing as well as the direct and full characterization of the bright Kerr-squeezed beams~\cite{heersink05.ol, Marquardt:2007p220401}. Both of these traits are visible in Fig.~\ref{fig_polsq_rotation}, which shows the measured AC noise as a function of the rotation of a half-wave plate (by the angle $\Theta$) in the dark Stokes plane. A progression between very large noise and squeezing is observed, as expected from the rotation of a fiber squeezed state. Plotted on the x-axis is the projection angle $\theta$, i.e. the angle by which the state has been rotated in phase space, inferred from the wave plate angle ($\theta=4\Theta$). Here pulses of 83.7~pJ were transmitted through 13.3~m of optical fiber and the electronic signals were corrected for the $-86.1\pm0.1$~dBm dark noise.
\begin{figure}
\caption{\it Noise power against phase-space rotation angle for the rotation of the measurement half-wave plate for a pulse energy of 83.7~pJ using 13.3~m of fiber I. Inset: Schematic of the projection principle for angle $\theta$.}
\label{fig_polsq_rotation}
\end{figure}
For $\theta=0$, an $\hat{S}_1$ measurement gives a noise value equal to the shot noise. This corresponds to the amplitude quadrature of the Kerr-squeezed states emerging from the fiber. Rotation of the state by $\theta_{sq}$ makes the state's squeezing observable by projection onto the minor axis of the noise ellipse. Further rotation brings a rapid increase in noise as the excess phase noise, composed of the antisqueezing and the classical thermal noise arising from GAWBS, becomes visible. The maximum noise is observed for $\theta=\theta_{sq}+\frac{\pi}{2}$. Under the assumption of statistically identical but uncorrelated Kerr-squeezed states, this measurement is equivalent to the characterization of the individual squeezed states using standard local oscillator and homodyne detection methods. However here no stabilization is needed after production of the polarization squeezed state. This is advantageous for experiments with long acquisition times, i.e. state tomography, and has indeed allowed the reconstruction of the Wigner function of the dark Stokes plane or Kerr-squeezed states~\cite{Marquardt:2007p220401}.
\begin{figure}
\caption{\it Linear noise reduction against optical transmission for the polarization squeezing generated by pulses of an energy of 81~pJ in 3.9~m of fiber I.}
\label{fig_polsq_attntest}
\end{figure}
It is crucial to ensure that the measured squeezing did not arise from detector saturation or any other spurious effect. This was verified by observing the noise of a variably attenuated squeezed beam: for true squeezing, the relative noise (on a linear scale) plotted against the transmission should fall on a straight line. A representative plot for an 81~pJ pulse in a 3.9~m fiber exhibiting $-3.9\pm0.3$~dB of squeezing is shown in Fig.~\ref{fig_polsq_attntest}; the linear result is indicative of genuine squeezing.
\begin{figure}
\caption{\it Plot showing a stable squeezing of $-4.0\pm0.3$~dB over 100 minutes. A 30~m optical fiber with a pulse energy of 80~pJ was used.}
\label{fig_polsq_longsq}
\end{figure}
The single-pass polarization squeezer exhibits a good temporal stability, highlighted by the results in Fig.~\ref{fig_polsq_longsq}. Here the sum (shot noise) and difference (polarization squeezing) channels have been plotted. An average of {-4.0~dB} of squeezing corrected for $-85.8\pm0.1$~dBm of dark noise was measured over 100~minutes. The squeezer used 30~m of optical fiber into which two orthogonally polarized pulses of 40~pJ each were coupled. The most sensitive factor in this setup is the locking of the birefringence compensator. Further important parameters are the coupling of light into the fiber and the laser power stability. All of these parameters are easily held stable by exploiting commercially available components.
\subsection{Squeezing Results}
The squeezing angle $\theta_{sq}$ as well as the squeezed and anti-squeezed quadratures were experimentally investigated as a function of pulse energy from 3.5~pJ to 178.8~pJ, as plotted in (a), (b) and (c), respectively, of Fig.~\ref{fig:results_13.2m} (triangles). The x-axis shows the total pulse energy, i.e. the sum of the two orthogonally polarized pulses comprising the polarization squeezed pulse. We observe a maximum squeezing of $-6.8\pm0.3$~dB at an energy of 98.6~pJ. The corresponding antisqueezing of this state is $29.6\pm0.3$~dB and the squeezing angle is $1.71^{o}$. As the optical energy increases beyond 98.6~pJ, the squeezing is reduced, eventually reaching the shot noise limit (SNL), while the growth of the antisqueezing slows towards a plateau.
The loss of the set-up was found to be 13\%: 5\% from the fibre end, 4.6\% from optical elements and from the fibre attenuation (2.03 dB/km at 1550 nm), 2\% from incomplete interference between the two polarisation modes ($\sim$99\% visibility was measured), and 2\% from the photodiodes. Thus we infer a maximum polarisation squeezing of $-10.4\pm0.8$~dB. The improvement in squeezing over previous implementations\cite{heersink05.ol, Corney:2006p023606} of the single-pass scheme is largely due to the low loss achieved here.
The theoretical simulations for the squeezing, antisqueezing and squeezing angle at different input energies are also given in Fig.~\ref{fig:results_13.2m} by solid and dashed lines. As described in further detail below, the effect of excess phase noise, such as GAWBS~\cite{Corney:2006p023606}, is included by a single-parameter fit between the simulation and experimental data for squeezing angle (shown by solid line in Fig.~\ref{fig:results_13.2m}(a)). The theoretical results for squeezing and antisqueezing then show very good agreement with the experimental data, and are consistent with the measured linear losses of 13\%. From the simulations, the effect of the GAWBS is seen to be a reduction in squeezing for lower energy pulses; above about 100~pJ, it has virtually no effect on the squeezing. Although some deviations appear at very high input energy, the simulations also show the same deterioration of squeezing for higher pulse energies as is seen in the experimental results; this effect does not occur in the simulations if Raman terms are neglected, as we discuss below.
\begin{figure}
\caption{{\it Measurement results and theoretical simulations for 13.2~m 3M FS-PM-7811 fiber (run II) as a function of pulse energy: (a) the squeezing angle, (b) the squeezing and (c) antisqueezing noise. Solid and dashed lines show the simulation results with and without additional phase noise, with linear losses taken to be 13\%. The shading indicates simulation uncertainty. The simulation result without third-order dispersion is given by the dots in (b). Diamonds represent the experimental results, with experimental uncertainty indicated by the error bars in the squeezing. Both the simulation and the experimental errors were too small to be plotted distinctly for the squeezing angle and antisqueezing. The measured noises are corrected for $-85.1\pm0.1$~dBm electronic noise.}}
\label{fig:results_13.2m}
\end{figure}
\subsection{Phase-noise and GAWBS}
\label{sec:phasenoise}
Excess phase noise, caused for example by depolarising GAWBS in the fibre, is determined for each fibre length by a single-parameter fit of the experimental and simulation squeezing angles. We model this by independent random fluctuations in the refractive index at each point along the fibre length. The cumulative effect on each pulse at a given propagation length is a random phase shift whose variance is proportional to the time-width of the pulse:
\begin{equation}
\phi(\tau,\zeta) = \phi_0(\tau,\zeta) e^{i\eta},
\end{equation}
where $\ex{\eta^2} \propto t_0$.
Such phase fluctuations do not affect the number difference measurement $\widehat S_1$, but they do lead to fluctuations in $\widehat S_2$:
\begin{equation}
\widehat S_2 - \left< \widehat S_2 \right> \simeq 2\eta \overline n \int |\phi_0(\tau,\zeta)|^2d\tau \propto \eta E
\end{equation}
where $\eta$ is now taken to describe the relative (depolarising) phase shifts.
Thus the variance relative to shot noise of $\widehat S_2$ caused by phase fluctuations scales linearly with the energy of the pulse:
\begin{eqnarray}
\sigma \equiv \frac{ {\rm var} (\widehat S_2 )}{\ex{\widehat{S}_0}} &\propto& \frac{\ex{\eta^2} E^2}{E} \nonumber \\
&=& c E
\end{eqnarray}
where the constant of proportionality $c$ is to be determined by the fit. Here we have assumed that the pulse width is a constant, independent of input energy. This assumption is not entirely accurate, because unless the energy is the soliton energy for that pulse width, the pulse will reshape to form a soliton, thereby altering the pulse width. However, for short fibre lengths, this effect should be small, and so we neglect it in our calculations.
The effect of the phase noise will be to stretch the squeezing ellipse in the $S_2$ direction, according to the formula:
\begin{equation}\frac{\rm var (\widehat S_\theta)}{\ex{\widehat S_0}} = a\cos^2(\theta-\theta_K) + b \sin^2 (\theta-\theta_K) + cE\sin^2(\theta), \label{ellipse}
\end{equation}
where $\theta_K$ is the predicted angle from the Kerr-only squeezing, $a$ is the relative Kerr squeezing and $b$ is the relative Kerr anti-squeezing. These parameters are calculated by the simulation, and are a function of the input energy $E$. The value of $c$ is determined by fitting the predicted angle of maximum squeezing as a function of $E$ against the observed values. The predicted angle is obtained from the extrema of the expression in Eq.~(\ref{ellipse}) and the fit is performed with a nonlinear least squares method. Once the value of $c$ is determined, new values of squeezing and antisqueezing are calculated from Eq.~(\ref{ellipse}).
As Fig.~\ref{angles} illustrates, the excess phase noise has a substantial effect on both the squeezing angle and the amount of squeezing only at low levels of squeezing. For highly squeezed light, the Kerr-squeezed ellipse is more closely aligned to the phase quadrature, and thus the phase noise merely has the effect of increasing the antisqueezing. This view is confirmed by the results in Fig.~\ref{fig:results_13.2m}, where the difference between the curves with and without the phase-noise fitting is discernible only at lower input energies.
\begin{figure}
\caption{\it Illustration of the effect of excess phase noise on different squeezing ellipses. Dashed line gives the Kerr-squeezed ellipse, and solid line gives the ellipse with added phase noise. The effect on the squeezing and the angle is less for the ellipse with larger Kerr squeezing (lower ellipse).}
\label{angles}
\end{figure}
\subsection{Third-order Dispersion}
The comparison between theory and experiment confirms the deterioration of squeezing at large pulse energy caused by Raman effects in the fibre. However there is still some residual discrepancy between theory and experiments, which could be caused by various higher-order effects not included in the theoretical model. We here explore the effect of third-order dispersion, and find that it accounts for some of the unexplained difference at high energies.
Third-order dispersion\cite{Yu:1995p2340} arises from the rate of change of curvature of the dispersion. It becomes more important for shorter pulses or when operating near the zero-dispersion wavelength\cite{Agrawal:2007}. In the propagation equation, it appears as an extra term in the scaled equations:
\begin{equation}
\frac{\partial}{\partial \zeta} \phi(\zeta,\tau) = - \frac{B_3}{6} \frac{\partial^3}{\partial\tau^3}\phi(\zeta,\tau),
\end{equation}
where $B_3 = k''' z_0/t_0^3$ is a dimensionless third-order dispersion parameter. For the fibre used in the experiment, the third-order dispersion at $\lambda = 1499$~nm is $k''' = 8.38\times 10^{-41} $s$^3$/m, giving $B_3 = 0.097$. The effect of third-order dispersion on the pulse spectrum for various energies is shown in Fig. \ref{TOD}, where significant differences appear only above the soliton energy.
\begin{figure}
\caption{\it Simulation pulse-spectrum at pulse energies (a) $1.5E_s$, (b) $E_s$, (c) $0.5E_s$ at rescaled distances of $\zeta = 0$ (dot-dashed), and $\zeta = 25$ with (solid) and without (dashed) third-order dispersion}
\label{TOD}
\end{figure}
Third-order dispersion does not have an observable effect on the squeezing angles or the amount of antisqueezing, but its effect can be seen on the squeezing, as shown in Fig.~\ref{fig:results_13.2m}(b) by the difference between the solid and dot-dashed lines. Below the soliton energy, the third-order dispersion has no observable effect. It diminishes the amount of squeezing at around the soliton energy, and at higher energies it changes the rate at which squeezing deteriorates as a function of energy. Far above the soliton energy, there remains some difference between simulation and experiment, which indicates that other higher-order processes may be playing a role at these energies. Because, in any case, the effect of third-order dispersion is rather small, we do not include it in the other simulation results shown in this paper.
\subsection{Raman noise effects}
The Raman interaction has a significant effect on the pulse shape and spectrum for the more intense pulses at these subpicosecond pulse widths. For a soliton pulse, the effect of the Raman interactions is to induce a frequency shift in the soliton and hence a delay in its arrival time\cite{mitschke86.ol,gordon86.ol}. For pulses above the soliton energy, the Raman interaction affects the way the pulse reshapes. With a purely electronic (instantaneous) nonlinearity, the pulse reshapes into a narrower soliton, at the same time shedding radiation that forms a low pedestal underneath the soliton. In the frequency domain, this results in a modulation of the pulse spectrum. As Fig.~\ref{Ramanpulse} shows, with the Raman terms included, the reformed soliton separates from the pedestal, which distorts the spectrum into an asymmetric shape.
\begin{figure}
\caption{ Simulation pulse shape (a) and spectrum (b) at pulse
energies $1.5E_s$ and normalised propagation length $\zeta = 25$, with (solid) and without
(dashed) Raman effects.
}
\label{Ramanpulse}
\end{figure}
For a pure Kerr effect, the squeezing is proportional to the intensity of the light, which in our case corresponds to the input energy of the pulse. However the experimental and simulation results clearly show that while the squeezing increases with input energy over a range of energies, there is a point beyond which the squeezing deteriorates. This deterioration is largely due to Raman effects, as Fig.~\ref{raman_comparison} reveals, which compares the simulations with and without Raman effects. In the latter case the nonlinearity is taken to be of the same magnitude as the former but is instantaneous. Without Raman effects, the squeezing does not suffer the same catastrophic reduction at high energies, but it does appear to saturate at around the soliton energy (2$\times$54~pJ), demonstrating that pulse-reshaping effects are also in play.
\begin{figure}
\caption{Squeezing degradation at high intensity: (a) squeezing and (b) antisqueezing measurements for $L=13.35~\textrm{m}$.}
\label{raman_comparison}
\end{figure}
For $L=13.35 \textrm{m}$, the optimum energy is around 80\% of the soliton energy.
\subsection{Comparison with exact $+P$ results}
Nearly all of the simulation results presented in this paper were calculated with the truncated Wigner phase-space method, because results can be obtained quickly and with low sampling error. However, the Wigner technique only provides an approximation to the true quantum dynamics. While the approximation is usually a good one for intense optical pulses, some deviations from the exact result could in principle occur for long simulation times, or when highly squeezed states are being produced. To test the Wigner method, we compared selected points with $+P$ calculations, and found agreement within the statistical uncertainty. One example is shown in Fig.~\ref{raman_comparison}, where the $+P$ results are shown as the squares. As the error bar indicates, the sampling error for the $+P$ method is much larger than that of the Wigner method for the more intense pulses, even though 10 times as many trajectories were used for the $+P$ calculation. Even for the same number of trajectories, a $+P$ calculation is more computationally exacting. This combination of greater computational cost per trajectory and the larger number of trajectories required for a meaningful $+P$ result is why the Wigner technique has been our method of choice for squeezing calculations. The $+P$ method comes into its own when more exotic quantum states or fewer photons are involved, i.e. when the Wigner technique is not expected to be reliable. It is also possible that appropriate diffusion\cite{Plimak2001} or drift\cite{Deuar:2006p6847} gauges may improve the performance of the $+P$ calculations.
\subsection{Comparison for different fibre lengths}
The squeezed and antisqueezed quadratures as well as the squeezing angle $\theta_{sq}$ of such polarization states were investigated as a function of pulse energy for different lengths of 3M FS-PM-7811 fiber, as shown in Figs.~\ref{fig_polsq_3.9_13.4m}-\ref{fig_polsq_50_166m}. The figures are organized into pairs of lengths: Fig.~\ref{fig_polsq_3.9_13.4m} shows 3.9 and 13.3~m, Fig.~\ref{fig_polsq_20_30m} shows 20 and 30~m and Fig.~\ref{fig_polsq_50_166m} shows 50 and 166~m. For each length the squeezing angle ($\pm0.3^\circ$), squeezing ($\pm0.3$~dB) and antisqueezing ($\pm0.3$~dB) form a column. Due to the technical limitations of the photodetectors it was not possible to measure above 125~pJ or 20~mW in this particular experimental run. The losses in this particular set-up were also larger than in that which gave the results in Fig.~\ref{fig:results_13.2m}.
Even though the simulations and experiment agree very well for the angle and the antisqueezing, some small discrepancies appear in the squeezing at longer fibre lengths. This could be caused by variation of the material parameters along the fibre length, or inaccuracies in the Raman model, which would become more prominent for longer fibres.
Ideal Kerr squeezing should increase with propagation distance. However the experimental data and simulations show that, above 13.4~m, the squeezing at a given input energy is largely insensitive to the length of fibre. The exception here is that the deterioration of squeezing due to Raman effects starts to occur at slightly lower energies. Thus, the maximum squeezing available at a given fibre length actually decreases for longer lengths. Meanwhile the antisqueezing increases with propagation distance, as expected.
\begin{figure}
\caption{\it Experiments (corrected for dark noise) and simulations (with and without fitted phase noise) of the polarization squeezing, antisqueezing and squeezing angle for 3.9 (a, c, e) and 13.3~m (b, d, f) of fiber I}
\label{fig_polsq_3.9_13.4m}
\end{figure}
\begin{figure}
\caption{\it Experiments (corrected for dark noise) and simulations (with and without fitted phase noise) of the polarization squeezing, antisqueezing and squeezing angle for 20 (a, c, e) and 30~m (b, d, f) of fiber II. The lighter data points in (c) are from a corrected experimental run.}
\label{fig_polsq_20_30m}
\end{figure}
\begin{figure}
\caption{\it Experiments (corrected for dark noise) and simulations (with and without fitted phase noise) of the polarization squeezing, antisqueezing and squeezing angle for 50 (a, c, e) and 166~m (b, d, f) of fiber II. The amount of dispersion over 166~m makes the simulations impractical for this case.}
\label{fig_polsq_50_166m}
\end{figure}
\subsection{Optimal squeezing as a function of power/length}
Some insight can be gained from further simulations of squeezing as a function of fibre length, for various input energies, as shown in Fig.~\ref{versuslength}. This figure reveals that for a given input energy there is an optimum length for the best squeezing, reflecting the length-dependence of the Raman-induced deterioration revealed in the previous plots. The best squeezing overall is obtained for a pulse at the soliton energy (54~pJ in each pulse), which indicates that the reduced optimal squeezing at other energies is due to pulse-reshaping effects. Thus for the $t_0=130$~fs used here, the optimum fibre length would be $L\simeq 7$~m (although the improvement over 13~m would only be a fraction of a dB). Alternatively, for a fixed fibre length, one could optimise the maximum squeezing by changing the pulse-width to yield a soliton at that point.
Furthermore, as Fig.~\ref{versuslength} plots the simulation results without adjustment for linear loss, it shows that inferred squeezing of over $-12$~dB is possible.
\begin{figure}
\caption{Simulated squeezing as a function of fibre length, for various total input energies: $E = $74.4pJ (triangles), $E = $83.7pJ (diamonds), $E = $93pJ (squares), $E = $108pJ (circles) and $E = $119pJ (crosses). Linear loss and phase-noise have not been included. (Fibre I)}
\label{versuslength}
\end{figure}
\section{Conclusion \& Outlook}
An excellent $-6.8\pm0.3$~dB of polarization squeezing, the greatest measured in fibres to date, has been demonstrated with the novel single-pass setup\cite{Dong:2008p116}. From this value it is possible to infer that $-10.4\pm0.8$~dB of squeezing was generated in the fiber. To further improve the measured noise reduction, losses after the fiber must be minimized by, for example, employing more efficient photodiodes in a minimal detection setup using highest quality optics. We speculate that net losses of as little as 5\% should be possible, thereby allowing the measurement of squeezing in excess of -8~dB.
By exclusion of the Raman and/or the GAWBS effects in the simulations, it is clear that the former is a limiting factor for high pulse energies, whereas the latter is detrimental at low energies. Investigation of a range of fibre lengths revealed that greater squeezing is not achieved by going beyond 13.2~m. Indeed, simulations indicate that slightly greater squeezing may be achievable at a shorter fibre length of around 7~m.
Further improvement may be possible through the use of photonic crystal fibers (PCF), which are novel fibers manufactured with specially designed light-guiding air-silica structures along their length~\cite{russell03.sci}. These have already been used in several squeezing experiments~\cite{fiorentino02.ol,lorenz01.apb, hirosawa05.prl,Milanovic:2007p559}, and with fewer low-frequency acoustic vibrations, are also expected to improve squeezing results by minimizing destructive GAWBS noise~\cite{Elser:2006p133901}. Such an advance would bring fiber-produced squeezed states closer to minimum uncertainty states, a desirable feature for quantum information applications.
\acknowledgements
The work is funded by the Australian Research Council under the Centres of Excellence scheme, and by the Deutsche Forschungsgemeinschaft. We gratefully acknowledge useful comments from Murray Olsen and Ben Buchler.
\appendix
\section{Numerical implementation}
\subsection{Absorbing potentials}
The split-step Fourier method used to integrate the equations imposes periodic boundary conditions. Thus any radiation shed by the pulse that reaches the edge of the time window during the simulations will re-enter the other side, eventually interfering with the original pulse and thereby giving possibly spurious results. To prevent this, we include an inhomogeneous loss that absorbs any shed radiation before it reaches the boundary. We choose a (negative) gain profile of $g(\tau) = \sin^{20}(\pi \tau/2\tau_0)$, where $2\tau_0$ is the width of the simulation window. The contribution to the Wigner equations is then:
\begin{equation}
\left . \frac{d}{d\zeta} \phi(\tau,\zeta) \right |_{\rm loss} = -g(\tau) \phi(\tau,\zeta) + \Gamma_L(\tau,\zeta),
\end{equation}
where the loss noise has the correlations
\begin{equation}
\left < \Gamma_L(\tau,\zeta) \Gamma_L(\tau',\zeta')\right > = \frac{g(\tau)\delta(\zeta-\zeta')\delta(\tau-\tau')}{2\overline n}.
\end{equation}
The $+P$ equations are similar, except that there are no stochastic terms from the loss.
Naturally, because this loss is not a physical effect, the time window should be wide enough so that the loss does not affect the pulse itself.
\subsection{Split Wigner equations}
\label{split_fields}
To increase the output precision in the numerical Wigner calculation, we split the fields into
means plus deviations: $\phi=\overline{\phi}+\delta\phi$, $\beta_{k}=\overline{\beta}_{k}+\delta\beta_{k}$
and evolve the two parts separately. The simulated equations are thus:
\begin{eqnarray}
\frac{d}{d\tau}\overline{\beta}_{k}& = & -ir_{k}^{2}|\overline{\phi}|^{2}e^{i\Omega_{k}\tau}\nonumber\\
\frac{d}{d\tau}\Delta\beta_{k} & = & -ir_{k}^{2}\left\{ \overline{\phi}\Delta\phi^{*}+\Delta\phi\overline{\phi}^{*}+\Delta\phi\Delta\phi^{*}-\frac{vt_{0}}{2\overline{n}z_{0}\Delta\zeta}\right\} e^{i\Omega_{k}\tau}\nonumber\\
\frac{d}{d\zeta}\overline{\phi}& = & \frac{i}{2}\frac{\partial^{2}}{\partial\tau^{2}}\overline{\phi}+i|\overline{\phi}|^{2}\overline{\phi}-iI\overline{\phi}\nonumber\\
\frac{d}{d\zeta}\Delta\phi & = & \frac{i}{2}\frac{\partial^{2}}{\partial\tau^{2}}\Delta\phi+i\left(\overline{\phi}^{*}+\Delta\phi^{*}\right)\left(2\Delta\phi\overline{\phi}+\Delta\phi^{2}\right) \nonumber \\
& & +i \Delta\phi^{*}\overline{\phi}^{2} - i\left(I\Delta\phi+\Delta I\overline{\phi}+\Delta I\Delta\phi\right),\nonumber \\
\end{eqnarray}
where \begin{eqnarray}
I(\zeta,\tau) & = & \sum_{k}2\Re\left\{ \overline{\beta}_{k}e^{-i\Omega_{k}\tau}\right\} \Delta\Omega\nonumber\\
\Delta I(\zeta,\tau) & = & \sum_{k}2\Re\left\{ \Delta\beta_{k}e^{-i\Omega_{k}\tau}\right\} \Delta\Omega\end{eqnarray}
and with initial conditions \begin{eqnarray}
\overline{\phi}(\zeta=0,\tau) & = & \sqrt{N}\textrm{sech}(\tau)\nonumber\\
\Delta\phi(\zeta=0,\tau) & = & \Gamma_{\phi}(\tau)\nonumber\\
\overline{\beta}_{k}(\zeta,\tau=-\infty) & = & 0\nonumber\\
\Delta\beta_{k}(\zeta,\tau=-\infty) & = & \Gamma_{k}(\zeta).\end{eqnarray}
The soliton number is defined as $N = E/E_s$, where $E_s = 2\hbar\omega \overline n$ is the energy of a fundamental sech soliton of width $t_0$.
\subsection{Rescaled $+P$ equations}
\label{rescaled_pp}
The rescaled $+P$ equations, corresponding to the Wigner equations of Eq.~(\ref{eq:scaled_wigner}) are
\begin{eqnarray}
\frac{\partial}{\partial\zeta}\phi & = & \lc-i\sum_{k}\left\{ \beta_{k}e^{-i\Omega_{k}\tau}+\beta_{k}^{+}e^{i\Omega_{k}\tau} \right\} \Delta\Omega +\frac{i}{2}\frac{\partial^{2}}{\partial\tau^{2}}\right.\nonumber\\
&&\left. +i(1-f)\phi^+\phi +\sqrt{i}\Gamma^{E}(\zeta,\tau) +i\Gamma^{R}(\zeta,\tau) \rc\phi\nonumber\\
\frac{\partial}{\partial\tau}\beta_{k} & = & r_{k}^{2}\left(-i\phi^+\phi +\Gamma^{R}_k(\zeta,\tau) \right)e^{i\Omega_{k}\tau},\end{eqnarray}
with equations of conjugate form for $\phi^+$ and $\beta_k^+$. The initial conditions are \begin{eqnarray}
\beta_{k}(\zeta,\tau=-\infty) & = & \Gamma_{k}^{\beta}(\zeta)\nonumber\\
\phi(\zeta=0,\tau) & = & \sqrt{\frac{vt_{0}}{\overline{n}}}\ex{\hat{\Psi}(0,t_{0}\tau)},\end{eqnarray}
where the stochastic terms have correlations
\begin{eqnarray}
\ex{{\Gamma_{k'}^{\beta}}^{+}(\zeta')\Gamma_{k}^{\beta}(\zeta)} & = & \frac{r_{k}^{2}n_{k}\delta_{k,k'}}{\overline{n}\Delta\Omega}\delta(\zeta-\zeta')\nonumber\\
\ex{\Gamma^{E}(\zeta',\tau')\Gamma^{E}(\zeta,\tau)} & = & \frac{1-f}{\overline n}\delta(\zeta-\zeta')\delta(\tau-\tau')\nonumber\\
&=&\ex{\Gamma^{E+}(\zeta',\tau')\Gamma^{E+}(\zeta,\tau)},\nonumber\\
\ex{\Gamma^{R}(\zeta',\tau')\Gamma^{R}_k(\zeta,\tau)} & = & \frac{1}{\overline n}\delta(\zeta-\zeta')\delta(\tau-\tau')\nonumber\\
&=&\ex{\Gamma^{R+}(\zeta',\tau')\Gamma^{R+}_k(\zeta,\tau)}.\end{eqnarray}
Preliminary investigation of other (physically equivalent) ways to numerically implement the Raman noise did not find any improvement over the simple choice given here.
\section{Output Moments}
As discussed in Sec.~\ref{Outputs}, the $+P$ and Wigner simulation methods give correlations that are normally and symmetrically ordered, respectively. To compare with experimental measurements of the Stokes parameter variances, some reordering is necessary. The transformation (\ref{rotation}) that relates $\widehat S_1$ to $\widehat S_\theta$ preserves the commutation relations. Thus to reorder the variance of a Stokes parameter at a general angle in the dark plane, $\widehat{S}_{\theta}^2 $, we only need to consider the corrections that arise from reordering $\widehat{S}_{1}^2$.
\subsection{Normal Ordering}
\label{Normal}
First we expand the mean-square of $\widehat S_1$ as
\begin{eqnarray}
:\widehat{S}_{1}^2: &=&:\widehat{N}_{xx}\widehat{N}_{xx}:-2\widehat{N}_{xx}\widehat{N}_{yy}+:\widehat{N}_{yy}\widehat{N}_{yy}:\;.
\end{eqnarray}
Thus we need only consider the reordering of terms of the form:
\begin{eqnarray}
\widehat{N}_{\sigma\sigma}\widehat{N}_{\sigma\sigma} &=& \int dz \int dz' \widehat{\Psi}_{\sigma}^{\dagger}(t,z)\widehat{\Psi}_{\sigma}(t,z)\widehat{\Psi}_{\sigma}^{\dagger}(t,z')\widehat{\Psi}_{\sigma}(t,z')\nonumber\\
&=& \int dz \int dz' \widehat{\Psi}_{\sigma}^{\dagger}(t,z)\widehat{\Psi}_{\sigma}^{\dagger}(t,z')\widehat{\Psi}_{\sigma}(t,z)\widehat{\Psi}_{\sigma}(t,z') \nonumber\\
&&+ \int dz \widehat{\Psi}_{\sigma}^{\dagger}(t,z)\widehat{\Psi}_{\sigma}(t,z)
\nonumber\\
&=&:\widehat{N}_{\sigma\sigma}\widehat{N}_{\sigma\sigma}: + \widehat{N}_{\sigma\sigma},
\end{eqnarray}
which gives
\begin{eqnarray}
\widehat{S}_{1}^2 &=&:\widehat{S}_{1}^2: + \widehat{S}_{0}\;.
\end{eqnarray}
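The single-mode version of this reordering identity, $\widehat N^2 = {:}\widehat N^2{:} + \widehat N$, can be checked numerically with truncated Fock-space matrices (an illustrative sketch, not part of the simulations):

```python
import math

# Single-mode check of the reordering identity N^2 = :N^2: + N, i.e.
# a†a a†a = a†a†aa + a†a, using truncated Fock-space matrices.
D = 10  # truncation dimension

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(D)) for j in range(D)]
            for i in range(D)]

a = [[math.sqrt(j) if j == i + 1 else 0.0 for j in range(D)] for i in range(D)]
ad = [[a[j][i] for j in range(D)] for i in range(D)]  # a† is the transpose

N = matmul(ad, a)                          # number operator, diag(0,...,D-1)
lhs = matmul(N, N)                         # a†a a†a
nn = matmul(matmul(ad, ad), matmul(a, a))  # normally ordered :N^2: = a†a†aa
ok = all(abs(lhs[i][j] - (nn[i][j] + N[i][j])) < 1e-9
         for i in range(D - 2) for j in range(D - 2))  # away from truncation edge
print(ok)
```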
\subsection{Symmetric ordering}
\label{Symmetric}
To symmetrically order the relevant products of 4 field operators, we must start from the sum of all $4!=24$ possible orderings. The essential result that we require is:
\begin{eqnarray}
\left .\widehat{N}_{\sigma\sigma}\widehat{N}_{\sigma'\sigma'}\right|_{\rm sym} &=& \left(\widehat{N}_{\sigma\sigma}+\frac{1}{2}M\right)\left( \widehat{N}_{\sigma'\sigma'}+\frac{1}{2}M\right) + \frac{1}{4}M\delta_{\sigma,\sigma'},\nonumber\\
\end{eqnarray}
where $M$ is the number of modes. The mean square of $\widehat{S}_1$ is then:
\begin{eqnarray}
\left .\widehat{S}_{1}^2\right|_{\rm sym} &=&\left . \left( \widehat{N}_{xx}\widehat{N}_{xx}-2\widehat{N}_{xx}\widehat{N}_{yy}+\widehat{N}_{yy}\widehat{N}_{yy}\right)\right|_{\rm sym}\;,\nonumber\\
&=& \widehat{S}_{1}^2 + \frac{1}{2}M.
\end{eqnarray}
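For a single mode ($M=1$, $\sigma=\sigma'$) the essential result can be verified directly by averaging $\hat a^\dagger\hat a\,\hat a^\dagger\hat a$ over all $4!$ factor orderings in a truncated Fock space (an illustrative sketch, not part of the simulations):

```python
import math
from functools import reduce
from itertools import permutations

# Single-mode (M = 1) check of the symmetric-ordering result: averaging
# a†a a†a over all 4! factor orderings gives (N + 1/2)^2 + 1/4.
D = 12  # truncation dimension

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(D)) for j in range(D)]
            for i in range(D)]

a = [[math.sqrt(j) if j == i + 1 else 0.0 for j in range(D)] for i in range(D)]
ad = [[a[j][i] for j in range(D)] for i in range(D)]

prods = [reduce(matmul, p) for p in permutations([ad, a, ad, a])]
sym = [[sum(P[i][j] for P in prods) / 24.0 for j in range(D)] for i in range(D)]

# N = a†a is diagonal with entries n = 0, 1, ..., D-1; away from the
# truncation edge the symmetrized product must equal (n + 1/2)^2 + 1/4.
ok = all(abs(sym[n][n] - ((n + 0.5) ** 2 + 0.25)) < 1e-9 for n in range(D - 2))
print(ok)
```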
\end{document}
|
\begin{document}
\title{Primitivity of group rings of non-elementary torsion-free hyperbolic groups}
\author{Brent B. Solie}
\address{Department of Mathematics, Embry-Riddle Aeronautical University, 3700 Willow Creek Road, Prescott, AZ 86301}
\ead{[email protected]}
\date{\today}
\begin{keyword}primitive group rings \sep hyperbolic groups
\MSC[2010] 16S34 \sep 20C07 \sep 20F67 \end{keyword}
\begin{abstract}
We use a recent result of Alexander and Nishinaka to show that if $G$ is a non-elementary torsion-free hyperbolic group and $R$ is a countable domain, then the group ring $RG$ is primitive. This implies that the group ring $KG$ of any non-elementary torsion-free hyperbolic group $G$ over a field $K$ is primitive.
\end{abstract}
\maketitle
\section{Introduction}
In \cite{Alexander2017}, Alexander and Nishinaka use the following Property (*) of a group $G$ to establish the primitivity of a broad class of group rings.
(Recall that a ring $R$ is \emph{(right) primitive} if it contains a faithful irreducible (right) $R$-module.)\\
\begin{minipage}{0.9\linewidth}
(*) For each subset $M$ of $G$ consisting of a finite number of elements not equal to 1, and for any positive integer $m \geq 2$, there exist distinct $a,b,c \in G$ so that if $(x_1^{-1}g_1 x_1) (x_2^{-1}g_2 x_2) \cdots (x_m^{-1}g_m x_m) = 1$, where $g_i \in M$ and $x_i \in \{a, b, c\}$ for all $i=1, \dots, m$, then $x_i = x_{i+1}$ for some $i$.
\end{minipage}
In particular, Alexander and Nishinaka give the following broad criterion for primitivity:
\begin{thm}[{\cite[Theorem 1.1]{Alexander2017}}]
\label{Alexander-Nishinaka}
Let $G$ be a group which has a non-Abelian free subgroup whose cardinality is the same as that of $G$, and suppose that $G$ satisfies Property (*). Then, if $R$ is a domain with $|R| \leq |G|$, the group ring $RG$ of $G$ over $R$ is primitive. In particular, the group algebra $KG$ is primitive for any field $K$.
\end{thm}
Our goal at present is to show that the hypotheses of Theorem \ref{Alexander-Nishinaka} hold if $G$ is a non-elementary torsion-free hyperbolic group, thus implying the primitivity of a fairly broad class of group rings.
\section{Hyperbolic groups}
Let $G$ be a group with finite generating set $X$.
Recall that the \emph{Cayley graph} $\Gamma_X(G)$ of $G$ with respect to $X$ is an $X$-digraph with vertex set $G$ and an $x$-labelled edge directed from $g$ to $gx$ for all $g \in G$ and $x \in X$.
We may equip $\Gamma_X(G)$ with the word metric by assigning each edge a length of one; as a result, $\Gamma_X(G)$ becomes a geodesic metric space.
We say that $\Gamma_X(G)$ satisfies the \emph{$\delta$-thin triangles condition} for a real constant $\delta >0$ if every side of a geodesic triangle is contained in the $\delta$-neighborhood of the union of the two remaining sides.
If a group $G$ admits a finite generating set $X$ and real constant $\delta > 0$ for which $\Gamma_X(G)$ satisfies the $\delta$-thin triangles condition, we say that $G$ is \emph{$\delta$-hyperbolic}, or simply \emph{hyperbolic}.
Hyperbolicity does not depend on choice of finite generating set, although the constant of hyperbolicity may vary.
It is easy to see that finite groups and free groups are hyperbolic.
Some more interesting examples of hyperbolic groups include one-relator groups with torsion, certain small cancellation groups, and fundamental groups of closed surfaces with negative Euler characteristic.
On the other hand, free Abelian groups of rank at least two and Baumslag-Solitar groups are standard examples of non-hyperbolic groups.
The intrinsic geometry of hyperbolic groups induces a wide range of algebraic, geometric, and algorithmic properties.
Hyperbolic groups are necessarily finitely presented, have solvable word problem, satisfy a linear isoperimetric inequality, and satisfy the Tits alternative.
Furthermore, hyperbolicity is a common phenomenon among groups: standard statistical models in group theory show that a randomly chosen finitely presented group is overwhelmingly likely to be infinite, torsion-free, and hyperbolic \cite{Olshanskii1992}.
Hyperbolic groups and their generalizations are the subject of a tremendous amount of modern literature in group theory, and so we refer the interested reader to \cite{Alonso1991, Ghys1990, Gromov1987} for a more thorough introduction.
We say that a hyperbolic group is \emph{elementary} if it is finite or virtually infinite cyclic; in either case, primitivity of a group ring over such a group has already been characterized.
Fundamental to these characterizations is the \emph{FC center} of a group $G$, the set of elements $\Delta(G) = \{g \in G \mid [G:C_G(g)] < \infty\}$.
Similarly, the \emph{FC+ center} of $G$ is the subset $\Delta^+(G)$ of torsion elements of $\Delta(G)$.
It is well-known that if $KG$ is primitive, then $\Delta^+(G) = 1$ \cite[Lemma 9.1.1]{Passman1977}.
Consequently, if $G$ is finite, then $KG$ is not primitive for any field $K$ unless $G=1$.
Alternately, if $G$ is virtually infinite cyclic, it is necessarily polycyclic-by-finite and therefore covered by the classification collectively due to Domanov \cite{Domanov1978}, Farkas-Passman \cite{Farkas1978}, and Roseblade \cite{Roseblade1978}: for any field $K$ and polycyclic-by-finite $G$, the group ring $KG$ is primitive if and only if $\Delta(G)=1$ and $K$ is not absolute.
We now restrict our attention to non-elementary torsion-free hyperbolic groups.
We begin with a theorem of Gromov.
\begin{thm}[{\cite[Theorem 5.3.E]{Gromov1987}}]
\label{Gromov's Theorem}
There exists a constant $E=E(k,\delta)>0$ such that for every $k$ elements $g_1, \dots, g_k$ of infinite order in a $\delta$-hyperbolic group $G$, the subgroup $\langle g_1^{e_1}, \dots, g_k^{e_k} \rangle$ is free whenever $e_i \geq E$ for $i=1, \dots, k$.
\end{thm}
We obtain the following immediate corollary.
\begin{cor}
\label{Exists a noncommuting element}
Let $G$ be a non-elementary torsion-free hyperbolic group.
Let $M = \{g_1, \dots, g_k\}$ be a finite subset of nonidentity elements of $G$.
Then there exists an element $u \in G$ which is not a proper power and which commutes with no element of $M$.
\end{cor}
\begin{proof}
Let $C(g) = C_G(g)$ denote the centralizer of $g$ in $G$.
It is well-known that centralizers of nontrivial elements of a torsion-free hyperbolic group are necessarily infinite cyclic \cite{Ghys1990}.
Consequently, if $g \in G$ is nontrivial and not a proper power, then $C(g) = C(g^n) = \langle g \rangle$ for all $n \neq 0$ and $C(g)$ is maximal among the cyclic subgroups of $G$.
Thus we may replace $g_i^n \in M$ with $g_i$ without changing the set of elements which commute with no element of $M$, and we may assume that no element of $M$ is a proper power.
The same argument shows that we may always produce $u$ which is not a proper power by passing to a root if necessary.
Suppose $k=1$.
As $G$ is non-elementary, it must properly contain the maximal cyclic subgroup $C(g_1)$, and so there must exist an element $u$ of $G$ failing to commute with $g_1$.
Now suppose $k \geq 2$.
Let $E=E(k,\delta)$ be the constant from Theorem \ref{Gromov's Theorem}; we may assume $E$ to be a positive integer.
Since $G$ is torsion-free, $g_i^E$ is nontrivial and of infinite order for all $g_i \in M$.
Therefore $H=\langle g_1^E, \dots, g_k^E \rangle$ is a non-Abelian free subgroup of $G$ and so contains an element $u \in H \subseteq G$ which fails to commute with any $g_i^E$.
Since $C(g_i^E)=C(g_i)$ in $G$, we have that $u$ cannot commute with any $g_i$.
\end{proof}
In order to show that a non-elementary torsion-free hyperbolic group satisfies Property (*), we resort to a version of the big powers property of torsion-free hyperbolic groups.
The form we use here follows immediately from the stronger version for a class of relatively hyperbolic groups proved in \cite{Kharlampovich2009}.
\begin{thm}[The big powers property \cite{Kharlampovich2009}]
Let $G$ be a torsion-free hyperbolic group.
Let $u \in G$ be nontrivial and not a proper power.
Let $g_1, \dots, g_k$ be elements of $G$ which do not commute with $u$.
Then there exists $N > 0$ such that if $|n_i| \geq N$ for $i=0, \dots, k$ then
\begin{align*}
u^{n_0} g_1 u^{n_1} g_2 \cdots u^{n_{k-1}} g_k u^{n_k} \neq 1.
\end{align*}
\end{thm}
The big powers property provides a way of generating a large set of nontrivial elements of a group and so is of use in studying residual properties of groups, universal equivalence, and algebraic geometry over groups.
The property first appears in B. Baumslag's study of fully residually free groups \cite{Baumslag1967}.
Ol'shanski\u\i\ later generalizes the property to torsion-free hyperbolic groups \cite{Olshanskii1993}, and Kharlampovich and Myasnikov further generalize it to non-Abelian torsion-free relatively hyperbolic groups with free Abelian parabolic subgroups \cite{Kharlampovich2009}.
\section{Main Result}
\begin{prop}
\label{Property (*) for NETFHG}
If $G$ is a non-elementary torsion-free hyperbolic group, then $G$ satisfies Property (*).
\end{prop}
\begin{proof}
Let $M$ be a finite subset of $G$ not containing the identity.
By Corollary \ref{Exists a noncommuting element}, there exists an element $u \in G$ which generates its own centralizer and commutes with no $g \in M$.
Fix a positive integer $m \geq 2$ and consider a finite sequence $g_1, \dots, g_m$ of elements from $M$.
Since $u$ commutes with none of the $g_i$ and $u$ generates its own centralizer, by the big powers property, there exists $N(g_1, \dots, g_m) > 0$ such that
\begin{align*}
u^{n_0} g_1 u^{n_1} g_2 \cdots g_{m-1} u^{n_{m-1}} g_m u^{n_m} \neq 1
\end{align*}
whenever $|n_i| \geq N$ for all $i=0, \dots, m$.
Since $M$ is a finite set, there are finitely many $m$-tuples $(g_1, \dots, g_m)$ drawn from $M$.
Therefore, let $N > \max \left\{N(g_1, \dots, g_m) \mid g_1, \dots, g_m \in M \right\}$.
We now define $a=u^N, b=u^{2N}$, and $c=u^{3N}$.
Since $G$ is torsion-free, these elements are necessarily distinct.
Consider a product
\begin{align*}
w = (x_1^{-1}g_1 x_1) (x_2^{-1}g_2 x_2) \cdots (x_m^{-1}g_m x_m)
\end{align*}
where $x_1, \dots, x_m \in \{ u^N, u^{2N}, u^{3N}\}$.
We then have
\begin{align*}
w = u^{n_0} g_1 u^{n_1} g_2 \cdots g_{m-1} u^{n_{m-1}} g_m u^{n_m},
\end{align*}
where $u^{n_0} = x_1^{-1}, u^{n_m} = x_m, u^{n_i} =x_i x_{i+1}^{-1}$ and $n_i \in \{0, \pm N, \pm 2N\}$ for $i = 1, \dots, m-1$.
Note that by choice of $x_1$ and $x_m$, we have $n_0 \neq 0$ and $n_m \neq 0$.
By the big powers property and choice of $N$, if $n_i \neq 0$ for all $i=0, \dots, m$, then $w \neq 1$.
Therefore, if $w=1$, then some $n_i = 0$.
Since we cannot have $n_0=0$ or $n_m=0$, we have $n_i=0$ for some $i \in \{1, \dots, m-1\}$, in which case we must have $1 = u^{n_i} = x_i x_{i+1}^{-1} $, and so $x_i = x_{i+1}$.
\end{proof}
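The exponent bookkeeping at the end of the proof can be checked mechanically. The following sketch (with arbitrary illustrative values of $m$ and $N$; it plays no role in the argument) enumerates all choices of conjugators and confirms that an inner exponent vanishes exactly when consecutive conjugators coincide:

```python
from itertools import product

# Illustrative check of the exponent bookkeeping: with x_i = u^{e_i N},
# e_i in {1, 2, 3}, the word w collapses to u^{n_0} g_1 u^{n_1} ... g_m u^{n_m}
# where n_0 = -e_1 N, n_m = e_m N and n_i = (e_i - e_{i+1}) N for 0 < i < m.
m, N = 4, 7  # arbitrary small values
checked = 0
for es in product((1, 2, 3), repeat=m):
    n = ([-es[0] * N]
         + [(es[i] - es[i + 1]) * N for i in range(m - 1)]
         + [es[-1] * N])
    assert n[0] != 0 and n[-1] != 0                    # outer exponents never vanish
    assert all(v in (0, N, -N, 2 * N, -2 * N) for v in n[1:-1])
    # an inner exponent vanishes exactly when consecutive conjugators coincide
    assert all((v == 0) == (es[i] == es[i + 1]) for i, v in enumerate(n[1:-1]))
    checked += 1
print(checked, "cases verified")
```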
Since a non-elementary hyperbolic group is finitely generated by definition, it is necessarily countably infinite.
Moreover, by \cite[Remark 3.6]{Alexander2017}, a countably infinite group satisfying Property (*) necessarily contains a free subgroup of rank two, which is also countably infinite.
(We also note that a group $G$ satisfying Property (*) also automatically satisfies $\Delta(G)=1$.)
A non-elementary torsion-free hyperbolic group thus satisfies Property (*) and has a free subgroup of the same cardinality, and so we obtain the following main result as a corollary to Theorem \ref{Alexander-Nishinaka}.
\begin{thm}
\label{the result}
If $G$ is a non-elementary torsion-free hyperbolic group, then for any countable domain $R$, the group ring $RG$ of $G$ over $R$ is primitive. In particular, the group ring $KG$ is primitive for any field $K$.
\end{thm}
We may slightly relax the torsion-free condition on $G$ if $\Delta(G)=1$, as demonstrated by the following corollary.
\begin{cor}
If $G$ is a non-elementary virtually torsion-free hyperbolic group with $\Delta(G)=1$, then for any countable domain $R$, the group ring $RG$ of $G$ over $R$ is primitive. In particular, the group ring $KG$ is primitive for any field $K$.
\end{cor}
\begin{proof}
Let $K$ be any field and let $H$ be a torsion-free subgroup of finite index in $G$.
As a finite index subgroup, $H$ is quasi-isometric to $G$, and thus $H$ is necessarily also non-elementary and hyperbolic \cite{Gromov1987}.
By Theorem \ref{the result}, we have that $KH$ is primitive.
From \cite[Theorem 4.2.10]{Passman1977}, $KG$ is prime if and only if $\Delta(G)$ is torsion-free Abelian.
Thus if $\Delta(G)=1$, we have that $KG$ is necessarily prime.
Finally, if $KG$ is prime, then $KG$ is primitive if and only if $KH$ is primitive by a result of Rosenberg \cite[Theorem 3]{Rosenberg1971} (cf. \cite[Theorem 9.1.11]{Passman1977}.)
\end{proof}
It is worth noting the long-standing open conjecture that every hyperbolic group is virtually torsion-free.
An affirmation of this conjecture, together with the above corollary, would imply that $KG$ is primitive for any field $K$ and any non-elementary hyperbolic group $G$.
\section*{Acknowledgements}
The author would like to thank Prof. Tsunekazu Nishinaka for introducing the author to the subject of primitive group rings and for many illuminating conversations, Prof. Hisaya Tsutsui for his support and guidance, and the referee for his or her very helpful feedback.
\section*{References}
\end{document}
|
\begin{document}
\title{On Karamata's proof of the Landau-Ingham Tauberian theorem}
\abstract{This is an exposition, in 12 pages including all prerequisites and a generalization, of
Karamata's little known elementary proof of the Landau-Ingham Tauberian theorem, a result in
real analysis from which the Prime Number Theorem follows in a few lines.}
\section{Introduction}
The aim of this paper is to give a self-contained, accessible and `elementary' proof
of the following theorem, which we call the Landau-Ingham Tauberian theorem:
\begin{theorem} \label{theor-LI} Let $f:[1,\infty)\rightarrow\7R$ be non-negative and non-decreasing and
assume that
\begin{equation} F(x):=\sum_{n\le x}f\left(\frac{x}{n}\right) \quad \mathrm{satisfies} \ \quad\
F(x)=Ax\log x+Bx+C\frac{x}{\log x}+o\left(\frac{x}{\log x}\right). \label{eq-Kar}\end{equation}
Then $f(x)=Ax+o(x)$, equivalently $f(x)\sim Ax$.
\end{theorem}
The interest of this theorem derives from the fact that, while ostensibly it is a result firmly
located in classical real analysis, the prime number theorem (PNT) $\pi(x)\sim\frac{x}{\log x}$
can be deduced from it by a few lines of Chebychev-style reasoning. (Cf.\ the Appendix.)
Versions of Theorem \ref{theor-LI} were proven by Landau \cite[\S 160]{landau} as early as 1909,
Ingham \cite[Theorem 1]{ingham}, Gordon \cite{gordon} and Ellison \cite[Theorem 3.1]{ellison}, but
none of these proofs was from scratch. Landau used as input the identity
$\sum_n\frac{\mu(n)\log n}{n}=-1$. But the latter easily implies $M(x)=\sum_{n\le x}\mu(n)=o(x)$
which (as also shown by Landau) is equivalent to the PNT. Actually,
$\sum_n\frac{\mu(n)\log n}{n}=-1$ is `stronger' than the PNT in the sense that it cannot be deduced
from the latter (other than by elementarily reproving the PNT with a sufficiently strong remainder
estimate). In this sense, Gordon's version of Theorem \ref{theor-LI} is an improvement, in that he
uses as input exactly the PNT (in the form $\psi(x)\sim x$) and thereby shows that Theorem
\ref{theor-LI} is not `stronger' than the PNT. Ellison's version assumes $M(x)=o(x)$
(and an $O(x^\beta)$ remainder with $\beta<1$ in (\ref{eq-Kar})). It is thus clear that none of
these approaches provides a proof of the PNT. Ingham's proof, on the other hand, departs from the
information that $\zeta(1+it)\ne 0$ (which can be deduced from the PNT, but also be proven {\it ab
initio}). Thus his proof is not `elementary', but arguably it is one of the nicer and more
conceptual deductions of the PNT from $\zeta(1+it)\ne 0$ -- though certainly not the simplest (which
is \cite{zagier}) given that the proof requires Wiener's $L^1$-Tauberian theorem.
Our proof of Theorem \ref{theor-LI} will essentially follow the elementary Selberg-style proof
given by Karamata
\footnote{Editors' note: Jovan Karamata, born near Belgrade in 1902 and died in Geneva in 1967, was a professor in Geneva from 1951 and the director of L'Enseignement Math\'ematique from 1954 to 1967. See M.\ Tomi\'c, \emph{Jovan Karamata (1902-1967)}, Enseignement Math.\ \ (2) {\bf 15}, 1--20 (1969).}
\cite{karamata} under the assumption that $f$ is the summatory function of an
arithmetic function, i.e.\ constant between successive integers. We will remove this
assumption. For the proof of the PNT, this generality is not needed, but from an analysis
perspective it seems desirable, and it brings us fairly close to Ingham's version of the theorem,
which differed only in having $o(x)$ instead of $C\frac{x}{\log x}+o(\frac{x}{\log x})$ in the hypothesis.
Unfortunately, Karamata's paper \cite{karamata} seems to be essentially forgotten: There are so few
references to it that we can discuss them all. It is mentioned in \cite{EI} by Erd\"os and Ingham
and in the book \cite{ellison} of Ellison and Mend\`es-France. (Considering that the latter authors
know Karamata's work, one may find it surprising that for their elementary proof of the PNT they
chose the somewhat roundabout route of giving a Selberg-style proof of $M(x)=o(x)$, using this to
prove a weak version of Theorem \ref{theor-LI}, from which then $\psi(x)\sim x$ is deduced.) Even
the two books \cite{BGT,korevaar} on Tauberian theory only briefly mention Karamata's
\cite{karamata} (or just the survey paper \cite{karamata2}) but then discuss in detail only Ingham's
proof. Finally, \cite{karamata,karamata2} are cited in the recent historical article \cite{nik}, but
its emphasis is on other matters. We close by noting that Karamata is not even mentioned in the only
other paper pursuing an elementary proof of a Landau-Ingham theorem, namely Balog's \cite{balog},
where a version of Theorem \ref{theor-LI} with a (fairly weak) error term in the conclusion is proven.
Our reason for advertising Karamata's approach is that, in our view, it is the conceptually
cleanest and simplest of the Selberg-Erd\"os style proofs of the PNT, cf.\ \cite{selberg,erdos} and
followers, e.g.\ \cite{postn, nev, kalecki, levinson, schwarz, pollack}. For $f=\psi$ and
$f(x)=M(x)+\lfloor x\rfloor$, Theorem \ref{theor-LI} readily implies $\psi(x)=x+o(x)$ and
$M(x)=o(x)$. Making these substitutions in advance, the proof simplifies only marginally, but it
becomes less transparent (in particular for $f=\psi$) due to an abundance of non-linear
expressions. By contrast, Theorem \ref{theor-LI} is linear w.r.t.\ $f$ and $F$. To be sure, also the
proof given below has a non-linear core, cf.\ (\ref{eq-selb}) and Proposition \ref{prop-selb}, but
by putting the latter into evidence, the logic of the proof becomes clearer. One is actually led to
believe that the non-linear component of the proof is inevitable, as is also suggested by Theorem 2
in Erd\"os' \cite{erdos2}, to wit
\[ a_k\ge 0\ \forall k\ge 1\ \wedge\ \sum_{k=1}^N ka_k+\sum_{k+l\le N}a_ka_l=N^2+O(1)\ \ \Rightarrow\ \
\sum_{k=1}^N a_k=N+O(1),\]
from which the PNT can be deduced with little effort. (Cf.\ \cite{HT} for more in this direction.)
Another respect in which \cite{karamata} is superior to most of the later papers, including
V.\ Nevanlinna's \cite{nev} (whose approach is adopted by several books \cite{schwarz,pollack}),
concerns the Tauberian deduction of the final result from a Selberg-style integral inequality. In
\cite{karamata}, this is achieved by a theorem attributed to Erd\"os (Theorem \ref{theor-2} below)
with clearly identified, obviously minimal hypotheses and an elegant proof. This advantage over
other approaches like \cite{nev}, which tend to use further information about the discontinuities
of the function under consideration, is essential for our generalization to arbitrary non-decreasing
functions. However, we will have to adapt the proof (not least in order to work around an obscure issue).
In our exposition we make a point of avoiding the explicit summations over (pairs of) primes littering
many elementary proofs, almost obtaining a proof of the PNT free of primes! This is achieved by
defining the M\"obius and von Mangoldt functions $\mu$ and $\Lambda$ in terms of the functional
identities they satisfy and using their explicit computation only to show that they are bounded and
non-negative, respectively. Some of the proofs are formulated in terms of parametric Stieltjes
integrals, typically of the form $\int f(x/t)dg(t)$ and integration by parts. We also do this in
situations where $f$ and $g$ may both be discontinuous. Since our functions will always have
bounded variation, thus at most countably many discontinuities, this can be justified by observing
that the resulting identities hold for all $x$ outside a countable set. Alternatively, we can
replace $f(x)$ at every point of discontinuity by $(f(x+0)+f(x-0))/2$ without changing the
asymptotics. For such functions, integration by parts always holds in the theory of
Lebesgue-Stieltjes integration, cf.\ \cite{hewitt,HS}.
The proof of Theorem \ref{theor-LI} exhibited below is, including all preliminaries, just 12 pages
long, and the author hopes that this helps to dispel the prejudice that the elementary proofs of
the PNT are (conceptually and/or technically) difficult. Indeed he thinks that this is the most satisfactory of the
elementary (and in fact of all) proofs of the PNT in that, besides not invoking complex analysis or
Riemann's $\zeta$-function, it minimizes number theoretic reasoning to a very well circumscribed
minimum. One may certainly dispute that this is desirable, but we will argue elsewhere that it is.
The author is of course aware of the fact that the more direct elementary proofs of the PNT give
better control of the remainder term. (Cf.\ the review \cite{diamond} and the very recent paper
\cite{koukou}, which provides ``a new and largely elementary proof of the best result known on the
counting function of primes in arithmetic progressions''.) It is not clear whether this is necessarily so. \\
\noindent{\it Acknowledgments.} The author would like to thank the referees for constructive
comments that led to several improvements, in particular a better proof of Corollary \ref{coro-E2}.
\section{First steps and strategy}
\begin{prop} \label{prop-estim}
Let $f:[1,\infty)\rightarrow\7R$ be non-negative and non-decreasing and assume that
$F(x)=\sum_{n\le x}f(x/n)$ satisfies $F(x)=Ax\log x+Bx+o(x)$. Then
\begin{itemize}
\item[(i)] $f(x)=O(x)$.
\item[(ii)] $\displaystyle \int_{1-0}^x \frac{df(t)}{t}=A\log x+O(1)$.
\item[(iii)] $\displaystyle\int_1^x\frac{f(t)-At}{t^2}dt=O(1)$.
\end{itemize}
\end{prop}
{\noindent\it Proof. } (i) Following Ingham \cite{ingham}, we define $f$ to be $0$ on $[0,1)$ and compute
\begin{eqnarray*} f(x)-f\left(\frac{x}{2}\right)+f\left(\frac{x}{3}\right)-\cdots &=& F(x)-2F\left(\frac{x}{2}\right)\\
&=& Ax\log x+Bx-2\left(A\frac{x}{2}\log\frac{x}{2}+B\frac{x}{2}\right)+o(x)=Ax\log 2+o(x).
\end{eqnarray*}
With positivity and monotonicity of $f$, this gives $f(x)-f(x/2)\le Kx$ for some $K>0$. Adding
these inequalities for $x, \frac{x}{2}, \frac{x}{4},\ldots$, we find $f(x)\le 2Kx$. Together with
$f\ge 0$, this gives (i).
(ii) We compute
\begin{eqnarray*} F(x) &=& \sum_{n\le x} f\left(\frac{x}{n}\right)
=\int_{1-0}^x f\left(\frac{x}{t}\right) d\lfloor t\rfloor \\
&=& \left[\lfloor t\rfloor f\left(\frac{x}{t}\right)\right]_{t=1-0}^{t=x}
-\int_{1-0}^x\lfloor t\rfloor\,df\left(\frac{x}{t}\right)\\
&=& \lfloor x\rfloor f(1)
-\int_{1-0}^x t\,df\left(\frac{x}{t}\right) +\int_{1-0}^x (t-\lfloor t\rfloor)\,df\left(\frac{x}{t}\right) \\
&=& O(x) + \int_{1-0}^x \frac{x}{u}df(u) +\int_{1-0}^x (t-\lfloor t\rfloor)\,df\left(\frac{x}{t}\right).
\end{eqnarray*}
In view of $0\le t-\lfloor t\rfloor<1$ and the weak monotonicity of $f$, the last integral is
bounded by $|\int_1^x df(x/t)|=f(x)-f(1)$, which is $O(x)$ by (i). Using the hypothesis about $F$, we
have
\[ Ax\log x+Bx+o(x)=O(x)+x\int_{1-0}^x\frac{df(t)}{t}+O(x),\]
and division by $x$ proves the claim.
(iii) Integrating by parts, we have
\begin{eqnarray*} \int_1^x\frac{f(t)-At}{t^2}dt &=&
-\frac{f(x)}{x}+\int_{1-0}^x \frac{df(t)}{t}-\int_1^x\frac{A}{t}dt \\
&=& O(1)+(A\log x+O(1))-A\log x=O(1),\end{eqnarray*}
where we used (i) and (ii).
$\blacksquare$\\
\begin{remark} 1. The proposition can be proven under the weaker assumption $F(x)=Ax\log x+O(x)$, but we don't
bother since later we will need the stronger hypothesis anyway.
2. Theorem \ref{theor-LI}, which we ultimately want to prove, implies a strong form of
(iii): $\int_1^\infty\frac{f(t)-At}{t^2}dt=B-\gamma A$, cf.\ \cite{ingham}. Conversely, existence of
the improper integral already implies $f(x)\sim Ax$, cf.\ \cite{zagier}.
3. Putting $f=\psi$ and using (\ref{eq-ch}), the above proofs of (i) and (ii) reduce to those of
Chebychev and Mertens, respectively.
{$\Box$}
\end{remark}
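To illustrate part (ii) numerically in the case $f=\psi$ (where it reduces to Mertens' estimate $\sum_{n\le x}\Lambda(n)/n=\log x+O(1)$), one can compute the von Mangoldt function by trial division; this sketch is purely illustrative and plays no role in the proofs:

```python
import math

# Numerical illustration of (ii) for f = psi: the estimate becomes
# sum_{n<=x} Lambda(n)/n = log x + O(1), i.e. Mertens' bound, with Lambda
# the von Mangoldt function (computed here by trial division).
def mangoldt(n):
    for p in range(2, math.isqrt(n) + 1):
        if n % p == 0:
            while n % p == 0:
                n //= p
            return math.log(p) if n == 1 else 0.0  # log p iff n is a power of p
    return math.log(n) if n > 1 else 0.0           # n itself prime (or n = 1)

diffs = []
for x in (10**2, 10**3, 10**4):
    s = sum(mangoldt(n) / n for n in range(1, x + 1))
    diffs.append(s - math.log(x))
print(diffs)  # bounded; the differences tend to about -1.33
```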
The following two theorems will be proven in Sections \ref{s-th1} and \ref{s-th2}, respectively:
\begin{theorem} \label{theor-1}
Let $f,F$ be as in Theorem \ref{theor-LI}. Then $g(x)=f(x)-Ax$ satisfies
\begin{equation} \frac{|g(x)|}{x} \le \frac{1}{\log x}\int_1^x \frac{|g(t)|}{t^2}\,dt +o(1)\ \ \
\mathrm{as}\ \ x\rightarrow\infty. \label{eq-ineq}\end{equation}
(Here $f(x)\le g(x)+o(1)$ means that $f(x)\le g(x)+h(x)\ \forall x$, where $h(x)\rightarrow 0$ as $x\rightarrow\infty$.)
\end{theorem}
\begin{theorem} \label{theor-2}
For $g:[1,\infty)\rightarrow\7R$, assume that there are $M, M'\ge 0$ such that
\begin{equation} x\mapsto g(x)+Mx\ \ \mbox{is non-decreasing}, \label{et1}\end{equation}
\begin{equation} \left| \int_1^x \frac{g(t)}{t^2} dt\right| \le M' \quad\forall x\ge 1. \label{et2}\end{equation}
Then
\begin{equation} S:=\limsup_{x\rightarrow\infty}\frac{|g(x)|}{x}<\infty, \label{et3}\end{equation}
and when $S>0$ we have
\begin{equation} \limsup_{x\rightarrow\infty} \frac{1}{\log x}\int_1^x\frac{|g(t)|}{t^2}dt<S. \label{et4}\end{equation}
\end{theorem}
\begin{remark} \label{rem-th2} 1. Note that (\ref{et1}) implies that $g$ is Riemann integrable over finite
intervals.
2. In our application, (\ref{et3}) already follows from Proposition
\ref{prop-estim} so that we do not need the corresponding part of the proof of Theorem
\ref{theor-2}. It will be proven nevertheless in order to give Theorem \ref{theor-2} an independent
existence.
{$\Box$}
\end{remark}
\noindent{\it Proof of Theorem \ref{theor-LI} assuming Theorems \ref{theor-1} and \ref{theor-2}.}
Since $f$ is nondecreasing, it is clear that $g(x)=f(x)-Ax$ satisfies (\ref{et1}) with $M=A$, and
(\ref{et2}) is implied by Proposition \ref{prop-estim}(iii). Now
$S=\limsup|g(x)|/x$ is finite, by either Proposition \ref{prop-estim}(i) or the first conclusion of
Theorem \ref{theor-2}. Furthermore, $S>0$ would imply (\ref{et4}). But combining this with the
result (\ref{eq-ineq}) of Theorem \ref{theor-1}, we would have the absurdity
\[ S=\limsup_{x\rightarrow\infty}\frac{|g(x)|}{x}\le \limsup_{x\rightarrow\infty}\frac{1}{\log x}\int_1^x \frac{|g(t)|}{t^2}\,dt<S.\]
Thus $S=0$ holds, which is equivalent to $\frac{g(x)}{x}=\frac{f(x)-Ax}{x}\rightarrow 0$,
as was to be proven.
$\blacksquare$\\
The next two sections are dedicated to the proofs of Theorems \ref{theor-1} and \ref{theor-2}. The
statements of both results are free of number theory, and this is also the case for the proof of the
second. The proof of Theorem \ref{theor-1}, however, uses a very modest amount of number theory, but
nothing beyond M\"obius inversion and the divisibility theory of $\7N$ up to the fundamental theorem
of arithmetic.
\section{Proof of Theorem \ref{theor-1}}\label{s-th1}
\subsection{Arithmetic}\label{ss-mobius}
The aim of this subsection is to collect the basic arithmetic results that will be needed. We note
that this is very little.
We begin by noting that $(\7N,\cdot,1)$ is an abelian monoid. Given $n,m\in\7N$, we call $m$ a
divisor of $n$ if there is an $r\in\7N$ such that $mr=n$, in which case we write $m|n$. In view of
the additive structure of the semiring $\7N$, it is clear that the monoid $\7N$ has cancellation
($ab=ac\Rightarrow b=c$), so the quotient $r$ above is unique, and that the set of divisors
of any $n$ is finite.
Calling a function $f:\7N\rightarrow\7R$ an arithmetic function, the facts just stated allow us to define:
\begin{defin} If $f,g:\7N\rightarrow\7R$ are arithmetic functions, their Dirichlet convolution $f\star g$
denotes the function
\[ (f\star g)(n)=\sum_{d|n}f(d)g\left(\frac{n}{d}\right)=\sum_{a,b\atop ab=n} f(a)g(b).\]
\end{defin}
It is easy to see that Dirichlet convolution is commutative and associative. It has a unit given by the
function $\delta$ defined by $\delta(1)=1$ and $\delta(n)=0$ if $n\ne 1$.
By $\11$ we denote the constant function $\11(n)=1$. Clearly, $(f\star\11)(n)=\sum_{d|n}f(d)$.
\begin{lemma} There is a unique arithmetic function $\mu$, called the M\"obius function, such that
$\mu\star\11=\delta$.
\end{lemma}
{\noindent\it Proof. } $\mu$ must satisfy $\sum_{d|n}\mu(d)=\delta(n)$. Taking $n=1$ we see that $\mu(1)=1$. For
$n>1$ we have $\sum_{d|n}\mu(d)=0$, which is equivalent to
\[ \mu(n)=-\sum_{d|n \atop d<n}\mu(d). \]
This uniquely determines $\mu(n)\in\7Z$ inductively in terms of $\mu(m)$ with $m<n$.
$\blacksquare$\\
\begin{prop} \label{prop-mu1} \begin{itemize}
\item[(i)] $\mu$ is multiplicative, i.e.\ $\mu(nm)=\mu(n)\mu(m)$ whenever $(n,m)=1$.
\item[(ii)] If $p$ is a prime then $\mu(p)=-1$, and $\mu(p^k)=0$ if $k\ge 2$.
\item[(iii)] $\mu(n)=O(1)$, i.e.\ $\mu$ is bounded.
\end{itemize}
\end{prop}
{\noindent\it Proof. } (i) Since $\mu(1)=1$, $\mu(nm)=\mu(n)\mu(m)$ clearly holds if $n=1$ or $m=1$. Assume, by way of
induction, that $\mu(uv)=\mu(u)\mu(v)$ holds whenever $(u,v)=1$ and $uv<nm$, and let $n\ne 1\ne m$
be relatively prime. Then every divisor of $nm$ is of the form $st$ with $s|n, t|m$, so that
\begin{eqnarray*} 0 &=& \sum_{d|nm} \mu(d)=\mu(nm)+\sum_{s|n, t|m\atop st<nm}\mu(st)
=\mu(nm)+\sum_{s|n, t|m\atop st<nm}\mu(s)\mu(t) \\
&=& \mu(nm)+\sum_{s|n}\mu(s)\sum_{t|m}\mu(t)-\mu(n)\mu(m)=\mu(nm)-\mu(n)\mu(m),
\end{eqnarray*}
which is the inductive step. (ii) For $k\ge 1$, we have $\mu(p^k)=-\sum_{i=0}^{k-1}\mu(p^i)$,
inductively implying $\mu(p)=-1$ and $\mu(p^k)=0$ if $k\ge 2$. Thus $\mu(p^k)\in\{0,-1\}$, which
together with multiplicativity (i) gives $\mu(n)\in\{-1,0,1\}$ for all $n$, thus (iii).
$\blacksquare$\\
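For concreteness (an added illustration), the recursion $\mu(n)=-\sum_{d|n,\,d<n}\mu(d)$, or equivalently (i) and (ii), yields the first values:
```latex
\begin{array}{c|cccccccccc}
n & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10\\ \hline
\mu(n) & 1 & -1 & -1 & 0 & -1 & 1 & -1 & 0 & 0 & 1
\end{array}
```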
\begin{prop} \label{prop-Lambda1}
\begin{itemize}
\item[(i)] The arithmetic function $\Lambda:=\log\star\mu$ is the unique solution of
$\Lambda\star\11=\log$.
\item[(ii)] $\Lambda(n)=-\sum_{d|n}\mu(d)\log d$. In particular, $\Lambda(1)=0$.
\item[(iii)] $\Lambda(n)=\log p$ if $n=p^k$ where $p$ is prime and $k\ge 1$, and $\Lambda(n)=0$ otherwise.
\item[(iv)] $\Lambda(n)\ge 0$.
\end{itemize}
\end{prop}
{\noindent\it Proof. } (i) Existence: $\log\star\mu\star\11=\log\star\delta=\log$. Uniqueness: If
$\Lambda_1\star\11=\log=\Lambda_2\star\11$ then
$\Lambda_1=\Lambda_1\star\delta=\Lambda_1\star\11\star\mu=\Lambda_2\star\11\star\mu=\Lambda_2\star\delta=\Lambda_2$.
(ii) $\Lambda(n) =\sum_{d|n}\mu(d)\log\frac{n}{d}=\sum_{d|n}\mu(d)(\log n-\log d)
=\log n\sum_{d|n}\mu(d)-\sum_{d|n}\mu(d)\log d$. Now use
$\sum_{d|n}\mu(d)=\delta(n)$. $\Lambda(1)=0$ is obvious.
(iii) Using (ii), we have $\Lambda(p^k)=-\sum_{l=0}^k \mu(p^l) l\log p$, which together with
Proposition \ref{prop-mu1}(ii) implies
$\Lambda(p^k)=\log p\ \forall k\ge 1$. If $n,m>1$ and $(n,m)=1$ then by the multiplicativity of $\mu$,
\begin{eqnarray*} \Lambda(nm) &=&-\sum_{s|n}\sum_{t|m}\mu(st)\log(st) =-\sum_{s|n}\sum_{t|m}\mu(s)\mu(t)(\log s+\log t)\\
&=&-\sum_{s|n}\mu(s)\log s\sum_{t|m}\mu(t)-\sum_{t|m}\mu(t)\log t\sum_{s|n}\mu(s)=0. \end{eqnarray*}
(iv) Obvious consequence of (iii).
$\blacksquare$\\
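By (iii), the first values of $\Lambda$ are (an added illustration):
```latex
\begin{array}{c|cccccccccc}
n & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10\\ \hline
\Lambda(n) & 0 & \log 2 & \log 3 & \log 2 & \log 5 & 0 & \log 7 & \log 2 & \log 3 & 0
\end{array}
```
For instance, $\sum_{d|8}\Lambda(d)=3\log 2=\log 8$, consistent with $\Lambda\star\11=\log$.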
\begin{remark} The only properties of $\mu$ and $\Lambda$ needed for the proof of Theorem
\ref{theor-1} are the defining ones ($\mu\star\11=\delta,\ \Lambda\star\11=\log$), the trivial
consequence (ii) in Proposition \ref{prop-Lambda1}, and the boundedness of $\mu$ and the non-negativity
of $\Lambda$.
In particular, the explicit computations of $\mu(n)$ and $\Lambda(n)$ in terms of the prime
factorization of $n$ were only needed to prove the latter two properties.
(Of course, the said properties of the functions $\mu$ and $\Lambda$ would be obvious if one
defined them by the explicit formulae proven above, but this would be ad hoc and ugly, and one
would still need to use the fundamental theorem of arithmetic for proving that $\mu\star\11=\delta$
and $\Lambda\star\11=\log$.)
Note that prime numbers will play no r\^ole whatsoever before we turn to the actual proof of the
prime number theorem in the Appendix,
where the computation of $\Lambda(n)$ will be used
again.
{$\Box$}
\end{remark}
\subsection{The (weighted) M\"obius transform}
\begin{defin} Given a function $f:[1,\infty)\rightarrow\7R$, its `M\"obius transform' is defined by
\[ F(x)=\sum_{n\le x}f\left(\frac{x}{n}\right).\]
\end{defin}
\begin{lemma} \label{lem-mobius}The M\"obius transform $f\mapsto F$ is invertible, the inverse M\"obius
transform being given by
\[ f(x)=\sum_{n\le x} \mu(n)F\left(\frac{x}{n}\right).\]
\end{lemma}
{\noindent\it Proof. } We compute
\begin{eqnarray*} \sum_{n\le x}\mu(n) F\left(\frac{x}{n}\right) &=&
\sum_{n\le x}\mu(n) \sum_{m\le x/n}f\left(\frac{x}{nm}\right)
=\sum_{nm\le x}\mu(n)f\left(\frac{x}{nm}\right) \\
&=& \sum_{r\le x} f\left(\frac{x}{r}\right) \sum_{s|r}\mu(s)
=\sum_{r\le x} f\left(\frac{x}{r}\right) \delta(r)=f(x),
\end{eqnarray*}
where we used the defining property $\sum_{d|n}\mu(d)=\delta(n)$ of $\mu$.
$\blacksquare$\\
\begin{remark} Since the point of Theorem \ref{theor-LI} is to deduce information about $f$ from information
concerning its M\"obius transform $F$, it is tempting to appeal to Lemma \ref{lem-mobius}
directly. However, in order for this to succeed, we would need control over
$M(x)=\sum_{n\le x}\mu(n)$, at least as good as $M(x)=o(x)$. But then one is back in Ellison's
approach mentioned in the introduction. The essential idea of the Selberg-Erd\"os approach to the
PNT, not entirely transparent in the early papers but clarified soon after \cite{TI}, is to consider
weighted M\"obius inversion formulae as follows.
{$\Box$}
\end{remark}
\begin{lemma} \label{lem-TI1} Let $f:[1,\infty)\rightarrow\7R$ be arbitrary and $F(x)=\sum_{n\le x}f(x/n)$. Then
\begin{equation} f(x)\log x+\sum_{n\le x}\Lambda(n)f\left(\frac{x}{n}\right)
=\sum_{n\le x}\mu(n)\log\frac{x}{n}\,F\left(\frac{x}{n}\right). \label{eq-TI}\end{equation}
\end{lemma}
{\noindent\it Proof. } We compute
\[ \sum_{n\le x}\mu(n)\log\frac{x}{n}F\left(\frac{x}{n}\right)
=\log x \sum_{n\le x}\mu(n)F\left(\frac{x}{n}\right)-\sum_{n\le x}\mu(n)\log n\,F\left(\frac{x}{n}\right).\]
By Lemma \ref{lem-mobius}, the first term equals $f(x)\log x$, whereas for the second we have
\begin{eqnarray*} \sum_{n\le x}\mu(n)\log n\,F\left(\frac{x}{n}\right) &=&
\sum_{n\le x}\mu(n)\log n\,\sum_{m\le x/n} f\left(\frac{x}{nm}\right)
=\sum_{nm\le x}\mu(n)\log n\, f\left(\frac{x}{nm}\right) \\
&=& \sum_{s\le x}\left( \sum_{n|s}\mu(n)\log n\right) \,f\left(\frac{x}{s}\right)
=-\sum_{s\le x} \Lambda(s) \,f\left(\frac{x}{s}\right),
\end{eqnarray*}
the last equality being Proposition \ref{prop-Lambda1}(ii). Putting everything together,
we obtain (\ref{eq-TI}).
$\blacksquare$\\
\begin{remark} 1. Eq.\ (\ref{eq-TI}) is known as the `Tatuzawa-Iseki formula', cf.\ \cite[(8)]{TI} (and
\cite[p.\ 24]{karamata}).
2. Without the factor $\log(x/n)$ on the right hand side, (\ref{eq-TI}) reduces to M\"obius
inversion. Thus (\ref{eq-TI}) is a sort of weighted M\"obius inversion formula. The presence
of the sum involving $f(x/n)$ is very much wanted, since it will allow us to obtain the
integral inequality (\ref{eq-ineq}) involving all $f(t), t\in[1,x]$. In order to do so, we must get
rid of the explicit appearance of the function $\Lambda(n)$, which is very irregular and
about which we know little. This requires some preparations.
{$\Box$}
\end{remark}
\begin{lemma} \label{lem-TI2} For any arithmetic function $f:\7N\rightarrow\7R$ we have
\[ f(n)\log n+\sum_{d|n}\Lambda(d)f\left(\frac{n}{d}\right)
=\sum_{d|n}\mu(d)\,\log\frac{n}{d}\,\sum_{m|(n/d)}f(m). \]
In particular, we have Selberg's identity:
\begin{equation} \Lambda(n)\log n+\sum_{d|n}\Lambda(d)\Lambda\left(\frac{n}{d}\right)
=\sum_{d|n}\mu(d)\log^2\frac{n}{d}. \label{eq-selb}\end{equation}
\end{lemma}
{\noindent\it Proof. } If $f$ is an arithmetic function, i.e.\ defined only on $\7N$, we extend it to $\7R$ as being
$0$ on $\7R\backslash\7N$. With this extension,
\[ F(n)=\sum_{m\le n}f\left(\frac{n}{m}\right) =\sum_{m|n}f\left(\frac{n}{m}\right) =\sum_{m|n}f(m),\]
so that (\ref{eq-TI}) becomes the claimed identity. Taking $f(n)=\Lambda(n)$ and using
$\sum_{d|n}\Lambda(d)=\log n$, Selberg's formula follows.
$\blacksquare$\\
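As a sanity check (an added example), take $n=12$ in Selberg's identity (\ref{eq-selb}). The left hand side is $\Lambda(12)\log 12+\sum_{d|12}\Lambda(d)\Lambda(12/d)=0+2\log 2\log 3$, the non-zero contributions coming from $d=3$ and $d=4$. The right hand side is
```latex
\sum_{d|12}\mu(d)\log^2\frac{12}{d}=\log^2 12-\log^2 6-\log^2 4+\log^2 2=2\log 2\log 3,
```
as one verifies by expanding with $\log 12=2\log 2+\log 3$; both sides agree.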
\subsection{Preliminary estimates}
\begin{lemma} The following elementary estimates hold as $x\rightarrow\infty$:
\begin{eqnarray}
\sum_{n\le x} \frac{1}{n} &=& \log x+\gamma+O\left(\frac{1}{x}\right), \label{s1}\\
\sum_{n\le x} \frac{\log n}{n} &=& \frac{\log^2x}{2}+c+O\left(\frac{1+\log x}{x}\right), \label{s5}\\
\sum_{n\le x} \log n &=& x\log x-x+O(\log x), \label{s2}\\
\sum_{n\le x} \log\frac{x}{n} &=& x+O(\log x), \label{s2b}\\
\sum_{n\le x} \log^2 n &=& x(\log^2x-2\log x+1)+ O(\log^2x),\label{s3}\\
\sum_{n\le x} \log^2\frac{x}{n} &=& x+O(\log^2x). \label{s3b}
\end{eqnarray}
Here, $\gamma$ is Euler's constant and $c>0$.
\end{lemma}
{\noindent\it Proof. } (\ref{s1}): We have
\[ \sum_{n=1}^N\frac{1}{n}-\int_1^N\frac{dt}{t}=\int_{1-0}^N \frac{d(\lfloor t\rfloor -t)}{t}
=\left[\frac{\lfloor t\rfloor -t}{t}\right]_1^N+\int_1^N \frac{t-\lfloor t\rfloor}{t^2}dt. \]
Since $0\le t-\lfloor t\rfloor<1$, the integral on the r.h.s.\ converges as $N\rightarrow\infty$ to some
number $\gamma$ (Euler's constant) strictly between 0 and $1=\int_1^\infty dt/t^2$. Thus
\[ \sum_{n=1}^N\frac{1}{n}=\int_1^N\frac{dt}{t}+\gamma-\int_N^\infty \frac{t-\lfloor t\rfloor}{t^2}dt
=\log N+\gamma+O\left(\frac{1}{N}\right). \]
(\ref{s5}): Similarly to the proof of (\ref{s1}), we have
\[ \sum_{n=1}^N\frac{\log n}{n}-\int_1^N\frac{\log t}{t}dt=\int_{1-0}^N \frac{\log t}{t}\,d(\lfloor t\rfloor -t)
=\left[\frac{(\lfloor t\rfloor -t)\log t}{t}\right]_1^N+\int_1^N \frac{(t-\lfloor t\rfloor)\log t}{t^2}dt. \]
The final integral converges to some $c>0$ as $N\rightarrow\infty$ since $(\log t)/t^2=O(t^{-2+\varepsilon})$. Using
\[ \int_1^x \frac{\log t}{t}dt= \frac{\log^2 x}{2},\quad\quad
\int_N^\infty\frac{\log t}{t^2}dt=-\left[\frac{\log t}{t}\right]_N^\infty+\int_N^\infty\frac{dt}{t^2} =\frac{1+\log N}{N} \]
we have
\[ \sum_{n=1}^N\frac{\log n}{n}=\int_1^N \frac{\log t}{t}dt+c-\int_N^\infty \frac{(t-\lfloor t\rfloor)\log t}{t^2}dt
=\frac{\log^2 N}{2} + c + O\left(\frac{1+\log N}{N}\right). \]
(\ref{s2}): By monotonicity, we have
\[ \int_1^x \log t\,dt \le \sum_{n\le x} \log n \le \int_1^{x+1} \log t\,dt. \]
Combining this with $\int_1^x\log t\,dt=x\log x-x+1$, (\ref{s2}) follows.
(\ref{s2b}): Using (\ref{s2}), we have
\[ \sum_{n\le x} \log\frac{x}{n}= \lfloor x\rfloor\log x-\sum_{n\le x} \log n
= (x+O(1)) \log x-(x\log x-x+O(\log x))=x+O(\log x). \]
(\ref{s3}): By monotonicity,
\[ \int_1^x \log^2t\,dt \le \sum_{n\le x} \log^2 n \le \int_1^{x+1} \log^2t\,dt. \]
Now,
\[ \int_1^x\log^2t\,dt=\int_0^{\log x} e^u u^2du=[e^u(u^2-2u+1)]_0^{\log x}=x(\log^2x-2\log x+1)-1.\]
Combining these two facts, (\ref{s3}) follows.
(\ref{s3b}): Using (\ref{s2}) and (\ref{s3}), we compute
\begin{flalign*}
\quad\sum_{n\le x} \log^2\frac{x}{n} &= \sum_{n\le x} (\log x-\log n)^2 & \\
\quad &= \lfloor x\rfloor\log^2x-2\log x(x\log x-x+O(\log x))+x(\log^2x-2\log x+1)+ O(\log^2x) & \\
\quad &= x+O(\log^2x) & \blacksquare
\quad\end{flalign*}
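As a quick numerical illustration of (\ref{s1}) (added here; constants rounded):
```latex
\sum_{n\le 10}\frac{1}{n}=2.9290\ldots,\qquad \log 10+\gamma\approx 2.3026+0.5772=2.8798,
```
so the error at $x=10$ is about $0.049$, comfortably within the $O(1/x)$ bound.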
\begin{prop} \label{prop-mu2}
The following estimates involving the M\"obius function hold as $x\rightarrow\infty$:
\begin{eqnarray}
\sum_{n\le x} \frac{\mu(n)}{n} &=& O(1), \label{mu1}\\
\sum_{n\le x} \frac{\mu(n)}{n}\log\frac{x}{n} &=& O(1), \label{mu2}\\
\sum_{n\le x} \frac{\mu(n)}{n}\log^2\frac{x}{n} &=& 2\log x+ O(1). \label{mu3}
\end{eqnarray}
\end{prop}
{\noindent\it Proof. } (\ref{mu1}): If $f(x)=1$ then $F(x)=\lfloor x\rfloor$. M\"obius inversion (Lemma
\ref{lem-mobius}) gives
\begin{equation} 1=\sum_{n\le x}\mu(n)\left\lfloor\frac{x}{n}\right\rfloor=\sum_{n\le x}\mu(n)\left(\frac{x}{n}+O(1)\right)
=x\sum_{n\le x}\frac{\mu(n)}{n}+\sum_{n\le x}O(1),\label{eq-mu}\end{equation}
where we used $\mu(n)=O(1)$ (Proposition \ref{prop-mu1}(iii)). In view of $\sum_{n\le x}O(1)=O(x)$, we have
$\sum_{n\le x}\mu(n)/n=O(x)/x=O(1)$.
(\ref{mu2}): If $f(x)=x$ then $F(x)=\sum_{n\le x}x/n=x\log x+\gamma x+O(1)$ by (\ref{s1}). By
M\"obius inversion,
\[ x=\sum_{n\le x}\mu(n)\left( \frac{x}{n}\log \frac{x}{n}+\gamma \frac{x}{n}+O(1)\right)
=x \sum_{n\le x}\frac{\mu(n)}{n}\log \frac{x}{n}+xO(1)+O(x), \]
where we used (\ref{mu1}) and Proposition \ref{prop-mu1}(iii). From this we easily read off (\ref{mu2}).
(\ref{mu3}): If $f(x)=x\log x$ then
\begin{eqnarray*} F(x) &=& \sum_{n\le x} \frac{x}{n}\log \frac{x}{n}=\sum_{n\le x} \frac{x}{n}(\log x-\log n)
=x\log x\sum_{n\le x}\frac{1}{n}-x\sum_{n\le x}\frac{\log n}{n} \\
&=& x\log x\left(\log x+\gamma+O\left(\frac{1}{x}\right)\right)
-x\left(\frac{\log^2x}{2}+c+O\left(\frac{1+\log x}{x}\right)\right) \\
&=& \frac{1}{2}x\log^2x+\gamma x\log x-cx+O(1+\log x),
\end{eqnarray*}
by (\ref{s1}) and (\ref{s5}). Now M\"obius inversion gives
\begin{eqnarray*} x\log x &=& \sum_{n\le x}\mu(n)\left(\frac{x}{2n}\log^2\frac{x}{n}
+\gamma\frac{x}{n}\log \frac{x}{n}-c\frac{x}{n}+O\left(1+\log \frac{x}{n}\right)\right)\\
&=& \frac{x}{2}\sum_{n\le x}\frac{\mu(n)}{n}\log^2\frac{x}{n} +xO(1)+xO(1)+O(x),
\end{eqnarray*}
where we used (\ref{mu1}), (\ref{mu2}) and (\ref{s2b}), and division by $x/2$ gives (\ref{mu3}).
$\blacksquare$\\
\subsection{Conclusion}
\begin{prop} [Selberg, Erd\"os-Karamata \cite{EK}] \label{prop-selb} Defining
\begin{equation} K(1)=0,\quad\quad K(n)=\frac{1}{\log n}\sum_{d|n}\Lambda(d)\Lambda\left(\frac{n}{d}\right)
\ \ \mathrm{if}\ \ n\ge 2,\label{eq-K}\end{equation}
we have $K(n)\ge 0$ and
\begin{equation} \sum_{n\le x} (\Lambda(n)+K(n))=2x+O\left(\frac{x}{\log x}\right). \label{eq-K2}\end{equation}
\end{prop}
{\noindent\it Proof. } The first claim is obvious in view of Proposition \ref{prop-Lambda1}(iv). We estimate
\begin{eqnarray*} U(x) &:= &\sum_{n\le x}\sum_{d|n}\mu(d)\log^2\frac{n}{d}
\ =\ \sum_{n\le x}\mu(n) \sum_{m\le x/n}\log^2 m \\
&=& \sum_{n\le x}\mu(n) \left( \frac{x}{n}(\log^2\frac{x}{n}-2\log\frac{x}{n}+1)+ O(\log^2\frac{x}{n})\right)\\
&=& x(2\log x+O(1))-2x O(1)+ xO(1)+ O(x)=2x\log x+O(x).
\end{eqnarray*}
Here we used (\ref{s3}), (\ref{mu3}), (\ref{mu2}), (\ref{mu1}), the fact $\mu(n)=O(1)$, and (\ref{s3b}).
Combining Selberg's identity (\ref{eq-selb}) with the definition of $K$, we have
\begin{eqnarray*} \sum_{n\le x} (\Lambda(n)+K(n)) &=& \sum_{2\le n\le x}\frac{1}{\log n} \sum_{d|n} \mu(d)\log^2\frac{n}{d}\\
&=& \int_{2-0}^x\frac{dU(t)}{\log t}\ =\ \left[\frac{U(t)}{\log t}\right]_2^x +\int_2^x \frac{U(t)}{t\log^2t}dt \\
&=& 2x+O\left(\frac{x}{\log x}\right)+ \int_2^x\frac{dt}{\log t}+O\left(\int_2^x\frac{dt}{\log^2t}\right).
\end{eqnarray*}
In view of the estimate
\[ \int_2^x\frac{dt}{\log t}=\int_2^{\sqrt{x}}\frac{dt}{\log t}+\int_{\sqrt{x}}^x\frac{dt}{\log t}
\le \frac{\sqrt{x}}{\log 2}+\frac{x}{\log\sqrt{x}}=O\left(\frac{x}{\log x}\right), \]
we are done.
$\blacksquare$\\
\begin{remark} In view of (\ref{eq-selb}), the above estimate $U(x)=2x\log x+O(x)$ is equivalent to
\[ \sum_{n\le x} \Lambda(n)\log n+\sum_{ab\le x}\Lambda(a)\Lambda(b)=2x\log x+O(x),\]
which is used in most Selberg-style proofs. (It would lead to (\ref{eq-balog}) with $k=2$.)
{$\Box$}
\end{remark}
\begin{prop} \label{prop-G} If $g:[1,\infty)\rightarrow\7R$ is such that
\begin{equation} G(x)=\sum_{n\le x}g\left(\frac{x}{n}\right)
=Bx+C\frac{x}{\log x}+o\left(\frac{x}{\log x}\right) \label{eq-Kar2}\end{equation}
then
\begin{equation} g(x)\log x+\sum_{n\le x}\Lambda(n)\,g\left(\frac{x}{n}\right)= o(x\log x). \label{e2}\end{equation}
\end{prop}
{\noindent\it Proof. } In view of Lemma \ref{lem-TI1}, all we have to do is estimate
\[ \sum_{n\le x}\mu(n)\log\frac{x}{n}\left( B\frac{x}{n}+C\frac{x}{n\log \frac{x}{n}}
+o\left(\frac{x}{n\log \frac{x}{n}}\right) \right)=S_1+S_2+S_3. \]
The three terms are
\begin{eqnarray*} S_1 &=& Bx \sum_{n\le x}\frac{\mu(n)}{n}\log\frac{x}{n}=x O(1)=O(x), \\
S_2 &=& Cx \sum_{n\le x}\frac{\mu(n)}{n}=xO(1)=O(x), \\
S_3 &=& \sum_{n\le x}\mu(n) o\left(\frac{x}{n}\right) =\sum_{n\le x}o\left(\frac{x}{n}\right)
=o\left(x\sum_{n\le x}\frac{1}{n}\right)=o(x\log x),
\end{eqnarray*}
where we used (\ref{mu2}), (\ref{mu1}), and $\mu(n)=O(1)$, respectively.
$\blacksquare$\\
\noindent{\it Proof of Theorem \ref{theor-1}.}
In view of $g(x)=f(x)-Ax$ and Proposition \ref{prop-estim} (i), (ii), we immediately have
\begin{equation} g(x)=O(x), \quad\quad\quad \int_1^x\frac{dg(u)}{u}=O(1). \label{eq-g}\end{equation}
Furthermore, since $f$ satisfies (\ref{eq-Kar}), and (\ref{s1}) gives
$\sum_{n\le x} Ax/n=Ax\log x+A\gamma x+O(1)$, the M\"obius transform $G$ of $g(x)=f(x)-Ax$ satisfies
(\ref{eq-Kar2}) (with a different $B$), so that Proposition \ref{prop-G} applies and (\ref{e2}) holds.
Writing
$N(x)=\sum_{n\le x}(\Lambda(n)+K(n))$, by Proposition \ref{prop-selb} we have $N(x)=2x+\omega(x)$
with $\omega(x)=o(x)$. Now,
\begin{eqnarray} \sum_{n\le x}(\Lambda(n)+K(n))g\left(\frac{x}{n}\right)
&=& \int_{1-0}^xg\left(\frac{x}{t}\right)dN(t) \nonumber\\
&=& \left[ N(t)g\left(\frac{x}{t}\right)\right]_{1-0}^x-\int_1^x N(t)dg\left(\frac{x}{t}\right)\nonumber\\
&=& (N(x)g(1)-N(1-0)g(x)) +\int_1^x N\left(\frac{x}{u}\right) dg(u)\nonumber\\
&=& O(x) +\int_1^x\left(\frac{2x}{u}+o\left(\frac{2x}{u}\right)\right)dg(u) \nonumber\\
&=& O(x) +2x\int_1^x \frac{dg(u)}{u}+o\left(x\int_1^x \frac{dg(u)}{u}\right)\nonumber\\
&=& O(x)+O(x)+o(x)=O(x), \label{eq-x1}
\end{eqnarray}
where we used (\ref{eq-g}).
On the other hand,
\begin{eqnarray} \lefteqn{ \sum_{n\le x} (\Lambda(n)+K(n))\,\left|g\left(\frac{x}{n}\right)\right| =
\int_{1-0}^x \left|g\left(\frac{x}{t}\right)\right| \, dN(t) } \nonumber\\
&=& 2\int_1^x \left|g\left(\frac{x}{t}\right)\right|\,dt+\int_{1-0}^x \left|g\left(\frac{x}{t}\right)\right|\,d\omega(t) \nonumber\\
&=& 2x\int_1^x \frac{|g(t)|}{t^2}dt -\int_{1-0}^x \omega(t)\, d\!\left|g\left(\frac{x}{t}\right)\right| +\left[g\left(\frac{x}{t}\right)\omega(t)\right]_{t=1-0}^{t=x}\nonumber\\
&=& 2x\int_1^x \frac{|g(t)|}{t^2}dt +\int_1^{x+0} \omega\left(\frac{x}{t}\right)\, d|g(t)| +g(1)\omega(x)-g(x+0)\omega(1-0).\label{eq-x2}
\end{eqnarray}
In view of $g(x)=O(x)$ and $\omega(x)=o(x)$, the sum of the last two terms is $O(x)$. Furthermore,
\begin{eqnarray*} \int_1^{x+0} \omega\left(\frac{x}{t}\right)\,d|g(t)| &=& o\left( x\int_1^{x+0}\frac{|d|g(t)||}{t}\right)
\le o\left( x\int_1^{x+0}\frac{|dg(t)|}{t}\right) \\
&\le& o\left( x\int_1^{x+0}\frac{df+Adt}{t}\right)=o(x\log x), \end{eqnarray*}
where we used $g(x)=f(x)-Ax$ and $df=|df|$ (since $f$ is non-decreasing) to obtain
$|dg|=|df-Adt|\le |df|+Adt=df+Adt$ and Proposition \ref{prop-estim}(ii). Plugging this into
(\ref{eq-x2}), we have
\begin{equation} \sum_{n\le x} (\Lambda(n)+K(n))\,\left|g\left(\frac{x}{n}\right)\right| =
2x\int_1^x \frac{|g(t)|}{t^2}dt+ o(x\log x). \label{eq-x3}\end{equation}
After these preparations, we can conclude quickly: Subtracting (\ref{eq-x1}) from (\ref{e2}) we obtain
\[ g(x)\log x= \sum_{n\le x} K(n) g\left(\frac{x}{n}\right) + o(x\log x).\]
Taking absolute values of this and of (\ref{e2}) while observing that $\Lambda$ and $K$ are
non-negative, we have the inequalities
\[ |g(x)|\log x\le \sum_{n\le x} \Lambda(n) \left|g\left(\frac{x}{n}\right)\right| + o(x\log x), \quad\quad
|g(x)|\log x\le \sum_{n\le x} K(n) \left|g\left(\frac{x}{n}\right)\right| + o(x\log x).\]
Adding these inequalities and comparing with (\ref{eq-x3}) we have
\[ 2|g(x)|\log x \le \sum_{n\le x}(\Lambda(n)+K(n)) \left|g\left(\frac{x}{n}\right)\right| + o(x\log x)
= 2x\int_1^x \frac{|g(t)|}{t^2}dt+ o(x\log x), \]
so that (\ref{eq-ineq}), and with it Theorem \ref{theor-1}, is obtained dividing by $2x\log x$.
$\blacksquare$\\
\begin{remark} 1. We did not use the full strength of Proposition \ref{prop-selb}, but only an
$o(x)$ remainder.
2. Inequality (\ref{eq-ineq}) is the special case $k=1$ of the more general integral inequality
\begin{equation} \frac{|g(x)|}{x}\log^kx\le k\int_1^x\frac{|g(t)|\log^{k-1}t}{t^2}dt+O(\log^{k-c}x)
\quad\forall k\in\7N \label{eq-balog}\end{equation}
proven in \cite{balog}, assuming an $O\left(\frac{x}{\log^2x}\right)$ remainder in (\ref{eq-Kar}) instead of
$C\frac{x}{\log x}+o\left(\frac{x}{\log x}\right)$.
{$\Box$}
\end{remark}
\section{Proof of Theorem \ref{theor-2}}\label{s-th2}
The proof will be based on the following proposition, to be proven later:
\begin{prop} \label{prop-E} If $s:[0,\infty)\rightarrow\7R$ satisfies
\begin{equation} e^{t'}s(t')-e^t s(t)\ge -M(e^{t'}-e^t) \quad \forall t'\ge t\ge 0,\label{eq-s1}\end{equation}
\begin{equation} \left|\int_0^x s(t)dt\right|\le M'\quad\forall x\ge 0, \label{eq-s2}\end{equation}
and $S=\limsup|s(x)|>0$ then there exist numbers $0<S_1<S$ and $e,h>0$ such that
\begin{equation} \mu(E_{x,h,S_1}) \ge e\quad \forall x\ge 0,\quad \mathrm{where}\quad
E_{x,h,S_1}=\{ t\in [ x,x+h]\ | \ |s(t)|\le S_1 \},\label{eq-E}\end{equation}
and $\mu$ denotes the Lebesgue measure.
\end{prop}
\noindent{\it Proof of Theorem \ref{theor-2} assuming Proposition \ref{prop-E}.}
It is convenient to replace $g:[1,\infty)\rightarrow\7R$ by $s:[0,\infty)\rightarrow\7R,\ s(t)= e^{-t} g(e^t)$.
Now $s$ is locally integrable, and the assumptions (\ref{et1}) and (\ref{et2}) become
(\ref{eq-s1}) and (\ref{eq-s2}), respectively,
whereas the conclusions (\ref{et3}) and (\ref{et4}) assume the form
\begin{equation} S=\limsup_{t\rightarrow\infty}|s(t)|<\infty, \label{eq-s3}\end{equation}
\begin{equation} S>0 \quad\Rightarrow\quad \limsup_{x\rightarrow\infty} \frac{1}{x}\int_0^x |s(t)|dt < S. \label{eq-s4}\end{equation}
The proof of (\ref{eq-s3}) is easy: Dividing (\ref{eq-s1}) by $e^{t'}$ and integrating over
$t'\in[t,t+h]$, where $h>0$, one obtains
\[ \int_t^{t+h} s(t')dt' - s(t)(1-e^{-h})\ge -Mh+M(1-e^{-h}), \]
and using $|\int_a^b s(t)dt|\le |\int_0^as(t)dt|+|\int_0^bs(t)dt|\le 2M'$ by (\ref{eq-s2}), we have
the upper bound
\[ s(t)\le \frac{2M'+M(e^{-h}-1+h)}{1-e^{-h}} . \]
Similarly, dividing (\ref{eq-s1}) by $e^t$ and integrating over $t\in[t'-h,t']$, one obtains the
lower bound
\[ -\frac{2M'+M(e^h-1-h)}{e^h-1} \le s(t),\]
thus (\ref{eq-s3}) holds.
Assuming $S>0$, let $S_1,h,e$ be as provided by Proposition \ref{prop-E}. For each $\widehat{S}>S$
there is $x_0$ such that $x\ge x_0\Rightarrow |s(x)|\le\widehat{S}$. Given $x\ge x_0$ and putting
$N=\left\lfloor\frac{x-x_0}{h}\right\rfloor$, we have
\begin{eqnarray*} \int_0^x |s(t)|dt &=& \int_0^{x_0}|s(t)|dt+ \sum_{n=1}^N \int_{x_0+(n-1)h}^{x_0+nh} |s(t)|dt
+\int_{x_0+Nh}^x |s(t)|dt \\
&\le & O(1)+ N [ \widehat{S}(h-e)+S_1e] + O(1) \\
&=& \left(\frac{x-x_0}{h}+O(1)\right) h \left[\left(1-\frac{e}{h}\right)\widehat{S}+\frac{e}{h}S_1\right] +O(1)\\
&=& x \left[\left(1-\frac{e}{h}\right)\widehat{S}+\frac{e}{h}S_1\right] +O(1).
\end{eqnarray*}
Thus
\[ \limsup_{x\rightarrow\infty}\frac{1}{x}\int_0^x |s(t)|dt\le \left(1-\frac{e}{h}\right)\widehat{S}+\frac{e}{h}S_1. \]
Since $S_1<S$ and since $\widehat{S}>S$ can be chosen arbitrarily close to $S$, (\ref{eq-s4}) holds
and thus Theorem \ref{theor-2}.
$\blacksquare$\\
In order to make plain how the assumptions (\ref{eq-s1}) and (\ref{eq-s2}) enter the proof of
Proposition \ref{prop-E}, we prove two intermediate results that each use only one of the
assumptions. For the first we need a ``geometrically obvious'' lemma of isoperimetric character:
\begin{lemma} Let $t_1<t_2,\ C_1>C_2>0$ and $k:[t_1,t_2]\rightarrow\7R$ non-decreasing with
$k(t_1)\ge C_1e^{t_1}$ and $k(t_2)\le C_2e^{t_2}$. Then
\[ \mu\left(\{ t\in[t_1,t_2] \ | \ C_2e^t\le k(t)\le C_1 e^t\}\right) \ge \log\frac{C_1}{C_2}.\]
\end{lemma}
{\noindent\it Proof. } As a non-decreasing function, $k$ has left and right limits $k(t\pm 0)$ everywhere and
$k(t-0)\le k(t)\le k(t+0)$. The assumptions imply
$t_1\in A:=\{ t\in[t_1,t_2] \ | \ k(t)\ge C_1e^t\}$, thus we can define $T_1=\sup(A)$.
Quite obviously we have $t>T_1\Rightarrow k(t)<C_1e^t$, which together with the non-decreasing property of
$k$ and the continuity of the exponential function implies $k(T_1+0)\le C_1e^{T_1}$ (provided
$T_1<t_2$). We have $T_1\in A$ if and only if $k(T_1)\ge C_1e^{T_1}$. If $T_1\not\in A$ then
$T_1>t_1$, and every interval $(T_1-\varepsilon,T_1)$ (with $0<\varepsilon<T_1-t_1$) contains points $t$ such that
$k(t)\ge C_1e^t$. This implies $k(T_1-0)\ge C_1e^{T_1}$. Now assume $T_1=t_2$. If $T_1\in A$ then
$C_1e^{T_1}\le k(T_1)\le C_2e^{T_1}$. If $T_1\not\in A$ then
$C_1e^{T_1}\le k(T_1-0)\le k(T_1)\le C_2e^{t_2}$. In both cases we arrive at a contradiction since $C_2<C_1$.
Thus $T_1<t_2$.
If $T_1\in A$ (in particular if $T_1=t_1$) then
$C_1e^{T_1}\le k(T_1)\le k(T_1+0)\le C_1e^{T_1}$. Thus $k$ is continuous from the right at $T_1$ and
$k(T_1)=C_1e^{T_1}$. If $T_1\not\in A$ then $T_1>t_1$ and
$C_1e^{T_1}\le k(T_1-0)\le k(T_1+0)\le C_1e^{T_1}$. This implies $k(T_1)=C_1e^{T_1}$, thus the
contradiction $T_1\in A$. Thus we always have $T_1\in A$, thus $k(T_1)=C_1e^{T_1}$.
Now let $B=\{ t\in[T_1,t_2]\ | \ k(t)\le C_2e^t\}$. We have $t_2\in B$, thus $T_2=\inf(B)$ is
defined and $T_2\ge T_1$. Arguing similarly as before we have $t<T_2\Rightarrow k(t)>C_2e^t$, implying
$k(T_2-0)\ge C_2e^{T_2}$. And if $T_2<t_2$ and $T_2\not\in B$ then $k(T_2+0)\le C_2e^{T_2}$.
If $T_2\in B$ (in particular if $T_2=t_2$) then $C_2e^{T_2}\le k(T_2-0)\le k(T_2)\le C_2e^{T_2}$,
implying $k(T_2-0)=k(T_2)=C_2e^{T_2}$ so that $k$ is continuous from the left at $T_2$. If
$T_2\not\in B$ then $T_2<t_2$ and $C_2e^{T_2}\le k(T_2-0)\le k(T_2+0)\le C_2e^{T_2}$, implying
$k(T_2)=C_2e^{T_2}$ and thus a contradiction. Thus we always have $T_2\in B$, thus $k(T_2)=C_2e^{T_2}$.
By the above results, we have $C_2e^t\le k(t)\le C_1 e^t\ \forall t\in[T_1,T_2]$ and thus
\begin{equation} \mu\left(\{ t\in[t_1,t_2] \ | \ C_2e^t\le k(t)\le C_1 e^t\}\right) \ge T_2-T_1. \label{mu}\end{equation}
Using once more that $k$ is non-decreasing, we have
\[ C_1e^{T_1}= k(T_1)\le k(T_2)=C_2e^{T_2}, \]
implying $T_2-T_1\ge \log\frac{C_1}{C_2}$, and combining this with (\ref{mu}) proves the claim.
$\blacksquare$\\
\begin{coro} \label{coro-E2} Assume that $s:[0,\infty)\rightarrow\7R$ satisfies (\ref{eq-s1}) and
$s(t_1)\ge S_1\ge S_2\ge s(t_2)$, where $S_2+M>0$. Then
\[ \mu(\{t\in [t_1,t_2]\ | \ s(t)\in [S_2,S_1] \}) \ge\log\frac{S_1+M}{S_2+M}. \]
\end{coro}
{\noindent\it Proof. } We note that (\ref{eq-s1}) is equivalent to the statement that the function
$k:t\mapsto e^t(s(t)+M)$ is non-decreasing. The assumption $s(t_1)\ge S_1\ge S_2\ge s(t_2)$ implies
$k(t_1)\ge (S_1+M)e^{t_1}$ and $k(t_2)\le(S_2+M)e^{t_2}$. Now the claim follows directly by an
application of the preceding lemma.
$\blacksquare$\\
\begin{lemma} \label{lem-E1} Let $s:[0,\infty)\rightarrow\7R$ be integrable over bounded intervals, satisfying
(\ref{eq-s2}). Let $e>0$ and $0<S_2<S_1$ be arbitrary, and assume
\begin{equation} h\ge 2\left( e+\frac{M'}{S_1}+\frac{M'}{S_2}\right).\label{eq-h}\end{equation}
Then every interval $[x,x+h]$ satisfies at least one of the following conditions:
\begin{itemize}
\item[(i)] $\displaystyle\mu(E_{x,h,S_1}) \ge e$, where $E_{x,h,S_1}$ is as in (\ref{eq-E}),
\item[(ii)] there exist $t_1,t_2$ such that $x\le t_1<t_2\le x+h$ and $s(t_1)\ge S_1$ and
$s(t_2)\le S_2$.
\end{itemize}
\end{lemma}
{\noindent\it Proof. } It is enough to show that falsity of (i) implies (ii). Define
\[ T=\sup\{ t\in[x,x+h]\ | \ s(t)\le S_2\}, \]
with the understanding that $T=x$ if $s(t)>S_2$ for all $t\in[x,x+h]$.
Then $s(t)>S_2\ \forall t\in(T,x+h]$, which implies
\[ (x+h-T)S_2\le \int_{T}^{x+h} s(t)dt\le 2M' \]
and therefore
\begin{equation} x+h-T\ \le \frac{2M'}{S_2}. \label{eq-t2}\end{equation}
We observe that (\ref{eq-t2}) with $T=x$ would contradict (\ref{eq-h}). Thus
$x<T\le x+h$, so we can indeed find a $t_2\in[x,x+h]$ with $s(t_2)\le S_2$. Since we do not assume
continuity of $s$, we cannot claim that we may take $t_2=T$, but by definition a $t_2$ can be found in
$(T-\varepsilon,T]$ for every $\varepsilon>0$.
Now we claim that there is a point $t_1\in[x,t_2]$ such that $s(t_1)\ge S_1$. Otherwise, we would have
$s(t)<S_1$ for all $t\in[x,t_2]$. By definition, $|s(t)|\le S_1$ for $t\in E_{x,h,S_1}$, thus
$|s|>S_1$ on the complement of $E_{x,h,S_1}$. Combined with $s(t)<S_1$ for $t\in[x,t_2]$, this
means $s(t)<-S_1$ whenever $t\in[x,t_2]\backslash E_{x,h,S_1}$. Thus
\begin{eqnarray*} \int_x^{t_2} s(t)dt &\le& S_1\mu([x,t_2]\cap E_{x,h,S_1})-S_1 \mu([x,t_2]\backslash E_{x,h,S_1}) \\
&=& -S_1(t_2-x) + 2S_1\mu([x,t_2]\cap E_{x,h,S_1}) \\
&=& S_1(x-t_2+2\mu([x,t_2]\cap E_{x,h,S_1}))
\end{eqnarray*}
In view of (\ref{eq-t2}) and $t_2>T-\varepsilon$ (with $\varepsilon>0$ arbitrary), we have
$x-t_2<x-T+\varepsilon\le 2M'/S_2-h+\varepsilon$, thus we continue the preceding inequality as
\begin{eqnarray*} \cdots & < & S_1\left(\frac{2M'}{S_2}-h+\varepsilon + 2\mu([x,t_2]\cap E_{x,h,S_1})\right)
\end{eqnarray*}
By our assumption that (i) is false, we have $\mu([x,t_2]\cap E_{x,h,S_1})\le\mu(E_{x,h,S_1})<e$.
Thus choosing $\varepsilon$ such that $0<\varepsilon<2(e-\mu([x,t_2]\cap E_{x,h,S_1}))$, we have
\begin{eqnarray*} \cdots &<& S_1\left(\frac{2M'}{S_2}-h+2e\right). \end{eqnarray*}
Combining this with (\ref{eq-h}), we finally obtain $\int_x^{t_2} s(t)dt<-2M'$, which contradicts
the assumption (\ref{eq-s2}). Thus there is a point $t_1\in[x,t_2]$ such that $s(t_1)\ge S_1$. In
view of $s(t_1)\ge S_1>S_2\ge s(t_2)$, we have $t_1\ne t_2$, thus $t_1<t_2$.
$\blacksquare$\\
\noindent{\it Proof of Proposition \ref{prop-E}.} Assuming that $S=\limsup |s(x)|>0$, choose
$S_1, S_2$ such that $0<S_2<S_1<S$. Then $e:=\log\frac{S_1+M}{S_2+M}>0$. Let $h$ satisfy
(\ref{eq-h}). Assume that there is an $x\ge 0$ such that $\mu(E_{x,h,S_1})<e$. Then Lemma
\ref{lem-E1} implies the existence of $t_1,t_2$ such that $x\le t_1<t_2\le x+h$ and
$s(t_1)\ge S_1, \ s(t_2)\le S_2$. But then Corollary \ref{coro-E2} gives
$\mu([t_1,t_2]\cap s^{-1}([S_2,S_1]))\ge \log\frac{S_1+M}{S_2+M}$. Since
$[t_1,t_2]\cap s^{-1}([S_2,S_1])\subset E_{x,h,S_1}$, we have
$\mu(E_{x,h,S_1})\ge\log\frac{S_1+M}{S_2+M}=e$, which is a contradiction.
$\blacksquare$\\
\begin{remark} The author did not succeed in making full sense of the proof in \cite{karamata} corresponding
to that of Corollary \ref{coro-E2}. It seems that there is a logical mistake in the reasoning,
which is why we resorted to the above more topological approach.
{$\Box$}
\end{remark}
\appendix
\section{The Prime Number Theorem}\label{sec-pnt}
\begin{prop} \label{prop-psi} Defining $\psi(x):=\sum_{n\le x}\Lambda(n)$, we have $\psi(x)\sim x$.
\end{prop}
{\noindent\it Proof. } Since $\Lambda(n)\ge 0$, we have that $\psi$ is non-negative and non-decreasing. Furthermore,
\begin{equation} \sum_{n\le x}\psi\left(\frac{x}{n}\right)=\sum_{n\le x}\sum_{m\le x/n}\Lambda(m)
=\sum_{r\le x}\sum_{s|r}\Lambda(s)=\sum_{r\le x}\log r =x\log x-x+O(\log x) \label{eq-ch}\end{equation}
by Proposition \ref{prop-Lambda1}(i) and (\ref{s2}). Now Theorem \ref{theor-LI}
implies $\psi(x)=x+o(x)$, or $\psi(x)\sim x$.
$\blacksquare$\\
Note that we still used only (i) of Proposition \ref{prop-Lambda1}, but now we will need (iii):
\begin{theorem} Let $\pi(x)$ be the number of primes $\le x$ and $p_n$ the $n$-th prime. Then
\begin{eqnarray*} \pi(x)&\sim&\frac{x}{\log x}, \\
p_n &\sim &n\log n. \end{eqnarray*}
\end{theorem}
{\noindent\it Proof. } Using Proposition \ref{prop-Lambda1}(iii), we compute
\[ \psi(x)=\sum_{n\le x}\Lambda(n)=\sum_{p^k\le x} \log p
=\sum_{p\le x} \log p \left\lfloor\frac{\log x}{\log p}\right\rfloor \le \pi(x)\log x. \]
If $1<y<x$ then
\[ \pi(x)-\pi(y)=\sum_{y<p\le x}1\le\sum_{y<p\le x}\frac{\log p}{\log y}\le \frac{\psi(x)}{\log y}. \]
Thus $\pi(x)\le y+\psi(x)/\log y$. Taking $y=x/\log^2x$ this gives
\[ \frac{\psi(x)}{x}\le\frac{\pi(x)\log x}{x}
\le\frac{\psi(x)}{x}\,\frac{\log x}{\log(x/\log^2x)}+\frac{1}{\log x},\]
thus $\psi(x)\sim\pi(x)\log x$. Together with Proposition \ref{prop-psi}, this gives $\pi(x)\sim x/\log x$.
Taking logarithms of $\pi(x)\sim x/\log x$, we have $\log\pi(x)\sim\log x-\log\log x\sim\log x$
and thus $\pi(x)\log\pi(x)\sim x$. Taking $x=p_n$ and using $\pi(p_n)=n$ gives $n\log n\sim p_n$.
$\blacksquare$\\
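For orientation (a numerical aside added here; the prime count is the classical tabulated value):
```latex
\pi(10^6)=78498,\qquad \frac{10^6}{\log 10^6}\approx 72382,\qquad
\frac{\pi(10^6)\log 10^6}{10^6}\approx 1.084,
```
illustrating that the convergence of $\pi(x)\log x/x$ to $1$ is quite slow.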
\begin{remark} Karamata's proof of the Landau-Ingham theorem obviously is modeled on Selberg's original
elementary proof \cite{selberg} of the prime number theorem. However, Selberg worked with $f=\psi$
from the beginning. Most later proofs follow Selberg's approach, but there are some that work with
$M$ instead of $\psi$. Cf.\ the papers \cite{postn,kalecki} and the textbooks \cite{GL, ellison}.
As mentioned in the introduction, the result for $M$ also follows easily from Theorem \ref{theor-LI}:
{$\Box$}
\end{remark}
\begin{prop} Defining $M(x)=\sum_{n\le x}\mu(n)$, we have $M(x)=o(x)$.
\end{prop}
{\noindent\it Proof. } We define $f(x)=M(x)+\lfloor x\rfloor$, which is non-negative and non-decreasing. Now
\begin{eqnarray*} F(x) &=& \sum_{n\le x} \left( M\left(\frac{x}{n}\right) +\left\lfloor\frac{x}{n}\right\rfloor\right)
=\sum_{n\le x}\sum_{m\le x/n}(\mu(m)+1) \\
&=& \sum_{m\le x}(\mu(m)+1)\left\lfloor\frac{x}{m}\right\rfloor
=1+\sum_{m\le x}\left\lfloor\frac{x}{m}\right\rfloor,
\end{eqnarray*}
where the last identity is just the first in (\ref{eq-mu}). The remaining sum is known from
Dirichlet's divisor problem and can be computed in elementary fashion,
\begin{equation} \sum_{m\le x}\left\lfloor\frac{x}{m}\right\rfloor= x\log x+(2\gamma-1)x+O(\sqrt{x}), \label{eq-dir}\end{equation}
cf.\ e.g.\ \cite{TMF}. Thus $F(x)=x\log x+(2\gamma-1)x+O(\sqrt{x})$, and Theorem \ref{theor-LI}
implies $f(x)=x+o(x)$, thus $M(x)=o(x)$.
$\blacksquare$\\
\begin{remark} Note that we had to define $f(x)=M(x)+\lfloor x\rfloor$ and use (\ref{eq-dir}) since
$f(x)=M(x)+x$ is non-negative, but not non-decreasing. One can generalize Theorem \ref{theor-LI}
somewhat so that it applies to functions like $f(x)=M(x)+x$ weakly violating monotonicity.
But the additional effort would exceed that for the easy proof of (\ref{eq-dir}).
{$\Box$}
\end{remark}
\begin{thebibliography}{99}
\bibitem{balog} A. Balog: An elementary Tauberian theorem and the prime number theorem. Acta
Math. Acad. Sci. Hung. {\bf 37}, 285--299 (1981).
\bibitem{BGT} N. H. Bingham, C. M. Goldie, J. L. Teugels: {\it Regular variation}. Cambridge
University Press, 1987.
\bibitem{diamond} H. G. Diamond: Elementary methods in the study of the distribution of prime
numbers. Bull. Amer. Math. Soc. {\bf 7}, 553--589 (1982).
\bibitem{ellison} W. \& F. Ellison: {\it Prime numbers}. Wiley, Hermann, 1985. (Original French
version: W. Ellison, M. Mend\`es France: {\it Les nombres premiers}. Hermann, 1975.)
\bibitem{erdos} P. Erd\"os: On a new method in elementary number theory which leads to an elementary proof of the
prime-number theorem. Proc. Nat. Acad. Sci. U.S.A. {\bf 35}, 374--384 (1949).
\bibitem{erdos2} P. Erd\"os: On a Tauberian theorem connected with the new proof of the prime number
theorem. J. Indian Math. Soc. (N.S.) {\bf 13}, 131--144 (1949).
\bibitem{EI} P. Erd\"os, A. E. Ingham: Arithmetical Tauberian theorems. Acta Arith. {\bf 9}, 341--356 (1964).
\bibitem{EK} P. Erd\"os, J. Karamata: Sur la majorabilit\'e $C$ des suites des nombres
r\'eels. Publ. Inst. Math. Acad. Serbe {\bf 10}, 37--52 (1956).
\bibitem{GL} A. O. Gelfond, Yu. V. Linnik: {\it Elementary methods in the analytic theory of
numbers}. Pergamon Press, 1966.
\bibitem{gordon} B. Gordon: On a Tauberian theorem of Landau. Proc. Amer. Math. Soc. {\bf 9},
693--696 (1958).
\bibitem{hewitt} E. Hewitt: Integration by parts for Stieltjes integrals. Amer. Math. Monthly
{\bf 67}, 419--423 (1960).
\bibitem{HS} E. Hewitt, K. Stromberg: {\it Real and abstract analysis}. Springer, 1965.
\bibitem{HT} A. Hildebrand, G. Tenenbaum: On some Tauberian theorems related to the prime number
theorem. Compos. Math. {\bf 90}, 315--349 (1994).
\bibitem{ingham} A. E. Ingham: Some Tauberian theorems connected with the prime number
theorem.
J. London Math. Soc. {\bf 20}, 171--180 (1945).
\bibitem{kalecki} M. Kalecki: A simple elementary proof of $M(x)=\sum_{n\le x}\mu(n)=o(x)$. Acta
Arith. {\bf 13}, 1--7 (1967).
\bibitem{karamata} J. Karamata: Sur les inversions asymptotiques de certains produits de
convolution. Bull. Acad. Serbe Sci. (N.S.) Cl. Sci. Math.-Nat. Sci. Math. {\bf 3}, 11--32 (1957).
\bibitem{karamata2} J. Karamata: Sur les proc\'ed\'es de sommation intervenant dans la th\'eorie des
nombres. pp. 12--31. {\it Colloque sur la th\'eorie des suites tenu \`a Bruxelles du 18 au 20 d\'ecembre
1957}. Gauthier-Villars, 1958.
\bibitem{korevaar} J. Korevaar: {\it Tauberian theory. A century of developments}. Springer, 2004.
\bibitem{koukou} D. Koukoulopoulos: Pretentious multiplicative functions and the prime number
theorem for arithmetic progressions. Compos. Math. {\bf 149}, 1129--1149 (2013).
\bibitem{landau} E. Landau: {\it Handbuch der Lehre von der Verteilung der Primzahlen}. Teubner,
1909.
\bibitem{levinson} N. Levinson: A motivated account of an elementary proof of the prime-number
theorem. Amer. Math. Monthly {\bf 76}, 225--245 (1969).
\bibitem{nev} V. Nevanlinna: {\it \"Uber die elementaren Beweise der Primzahls\"atze und deren
\"aquivalente Fassungen}. Univ. of Helsinki Thesis, 1964.
\bibitem{nik} A. Nikoli\'c: The story of majorizability as Karamata's condition of convergence for
Abel summable series. Hist. Math. {\bf 36}, 405--419 (2009).
\bibitem{pollack} P. Pollack: {\it Not always buried deep. A Second Course in Elementary Number
Theory}. Amer. Math. Soc., 2009.
\bibitem{postn} A. G. Postnikov, N. P. Romanov: A simplification of Selberg's elementary proof of the
asymptotic law of distribution of primes. (In Russian.) Uspek. Mat. Nauk. 10:4 (66) 75--87 (1955).
\bibitem{schwarz} W. Schwarz: {\it Einf\"uhrung in Methoden und Ergebnisse der
Primzahltheorie}. Bibliographisches Institut, 1969.
\bibitem{selberg} A. Selberg: An elementary proof of the prime-number theorem. Ann. Math. {\bf 50}, 305--313 (1949).
\bibitem{TI} T. Tatuzawa, K. Iseki: On Selberg's elementary proof of the prime number
theorem. Proc. Japan Acad. {\bf 27}, 340--342 (1951).
\bibitem{TMF} G. Tenenbaum, M. Mend\`es France: {\it The prime numbers and their distribution}. Amer. Math. Soc.,
2000. (Translation of {\it Les nombres premiers}, Presses Universitaires de France, 1997.)
\bibitem{zagier} D. Zagier: Newman's short proof of the prime number theorem.
Amer. Math. Monthly {\bf 104}, 705--708 (1997).
\end{thebibliography}
\end{document}
\begin{document}
\title[Differential operators with prescribed spectrum]{Construction of self-adjoint differential operators with prescribed spectral properties}
\author[J. Behrndt]{Jussi Behrndt$^1$}
\email{[email protected]}
\author[A. Khrabustovskyi]{Andrii Khrabustovskyi$^{1,2}$}
\email{[email protected]}
\address{$^1$ Institute of Applied Mathematics, Graz University of Technology, Steyrergasse
30, 8010 Graz, Austria}
\address{$^2$ Department of Physics, Faculty of Science, University of Hradec Kr\'alov\'e, Rokitansk\'eho 62, 500 03 Hradec Kr\'alov\'e, Czech Republic}
\keywords{Differential operator, Schr\"{o}dinger operator, essential spectrum, discrete spectrum, Neumann Laplacian, singular potential, boundary condition}
\maketitle
\begin{center}
\textit{Dedicated to the memory of our friend and colleague Hagen Neidhardt}
\end{center}
\begin{abstract}
In this expository article some spectral properties of self-adjoint differential operators
are investigated. The main objective is to illustrate and (partly) review how one can construct domains or potentials such that the
essential or discrete spectrum of
a Schr\"{o}dinger operator of a certain type (e.g. the Neumann Laplacian) coincides with a predefined subset of the real line.
Another aim is to emphasize that the spectrum of a differential operator on a bounded domain or bounded interval is not necessarily discrete, that is,
eigenvalues of infinite multiplicity, continuous spectrum, and eigenvalues embedded in the continuous spectrum may be present. This {\it unusual}
spectral effect is, very roughly speaking,
caused by (at least) one of the following three reasons: The bounded domain has a rough boundary, the potential is singular, or the boundary condition
is nonstandard. In three separate explicit constructions we demonstrate how each of these
possibilities leads to a Schr\"{o}dinger operator with prescribed essential spectrum.
\end{abstract}
\section{Introduction}
This paper is concerned with spectral theory of self-adjoint differential operators in Hilbert spaces. Before we explain in more detail
the topics and results we briefly familiarize the reader with the notions of discrete spectrum and essential spectrum, which play a key role here.
Let $A$ be a (typically unbounded)
self-adjoint operator in an infinite dimensional complex Hilbert space $\mathcal{H}$, see also the beginning of Section~\ref{sec3}
for more details on the {\it adjoint} of unbounded operators and the notion {\it self-adjoint}. The {\it spectrum} $\sigma(A)$ of $A$ is a closed subset of the real line
(which is unbounded if and only if $A$ is unbounded) that consists of all those points $\lambda$ such that $A-\lambda$ does not admit
a bounded inverse. In the case that $A-\lambda$ is not injective, $\lambda$ is called an {\it eigenvalue} of $A$ and belongs to the {\it point spectrum};
in the case that $(A-\lambda)^{-1}$ exists as an unbounded operator the point $\lambda$ belongs to the {\it continuous spectrum}. An eigenvalue is
{\it discrete} if it is an isolated point in $\sigma(A)$ and the {\it eigenspace} $\ker(A-\lambda)$ is finite dimensional. This subset of the spectrum of $A$ is denoted
by $\sigma_\mathrm{disc}(A)$; the complement of the discrete spectrum in $\sigma(A)$ is called the {\it essential spectrum} of $A$ and the notation $\sigma_\mathrm{ess}(A)$
is used for this set. It is clear that
$$
\sigma(A)=\sigma_\mathrm{disc}(A)\,\dot\cup\,\sigma_\mathrm{ess}(A)
$$
and that $\sigma_\mathrm{ess}(A)$ consists of all those spectral points which are in the continuous spectrum, all
eigenvalues embedded in the continuous spectrum
and all isolated eigenvalues of infinite multiplicity. For the intuition it may be helpful to keep in mind that essential spectrum can only appear in an infinite
dimensional Hilbert space, whereas the spectrum of any matrix is necessarily discrete; hence the discrete spectrum is always present (and is the only type of spectrum)
for self-adjoint operators in finite dimensional Hilbert spaces. We refer the reader to the monographs \cite{AG93,BS87,D95,K66,RS72,RS78,Sch12} for more details on the spectrum of
self-adjoint operators.
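The following elementary example may help to illustrate these notions. Consider in $\ell^2(\mathbb{N})$ the diagonal operator $A$ defined by $Ae_k=\mu_k e_k$ with respect to an orthonormal basis $(e_k)_{k\in\mathbb{N}}$. For $\mu_k=1/k$ every point $1/k$ is an isolated eigenvalue with a one-dimensional eigenspace and hence belongs to $\sigma_\mathrm{disc}(A)$, while the accumulation point $0$ is not an eigenvalue but lies in the continuous spectrum, so that $\sigma_\mathrm{ess}(A)=\{0\}$. If instead $\mu_k=1$ for even $k$ and $\mu_k=1/k$ for odd $k$, then $1$ is an eigenvalue of infinite multiplicity and $\sigma_\mathrm{ess}(A)=\{0,1\}$.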
The main objective of this expository paper is to illustrate and (partly) review how one can explicitly construct rough domains,
singular potentials, or nonstandard boundary conditions
such that the essential spectrum of
a Schr\"{o}dinger operator coincides with a predefined subset of the real line.
The closely connected problem to construct Schr\"{o}dinger operators
with predefined discrete spectrum is also briefly discussed. Very roughly speaking,
the results in Section~\ref{sec1} are contained in the well-known papers \cite{A78,CdV87,HSS91,HKP97}, whereas the main results
Theorem~\ref{th-BK+} and Theorem~\ref{essit}
in the later sections seem to be new.
More precisely, in Section~\ref{sec1} we treat Laplace operators subject to Neumann boundary conditions (Neumann Laplacians) on bounded domains.
It is often believed that self-adjoint Laplace-type operators on bounded domains always have purely discrete spectrum
(or, equivalently, a compact resolvent). This is indeed true
for Laplace operators subject to Dirichlet boundary conditions (Dirichlet Laplacian), but, in general, not true for Neumann Laplacians.
In fact, the discreteness of the spectrum of the Neumann Laplacian is equivalent to the compactness of the embedding $\mathsf{H}^1(\Omega)\hookrightarrow \mathsf{L}^2(\Omega)$,
and for this a necessary and sufficient criterion was obtained by C.J.~Amick \cite{A78}; cf. Theorem~\ref{th-A}.
The standard example of a bounded domain for which essential spectrum of the Neumann Laplacian appears is a so-called
``rooms-and-passages'' domain: a chain of bounded domains (``rooms'') connected through narrow rectangles (``passages''), see Figure~\ref{fig1}.
Rooms-and-passages domains are widely used in spectral theory and the theory of Sobolev spaces in order to demonstrate various
peculiar effects (see, e.g., \cite{EH87,A78,Fr79}). {\color{black}Some spectral properties of such domains were investigated in \cite{BEW13}. We also refer to the comprehensive monograph of V.G.~Mazya \cite{Ma11} (see also earlier contributions \cite{Ma79,Ma80,Ma85,MP97}), where rooms-and-passages together with many other tricky domains were treated. }
In the celebrated paper \cite{HSS91} R.~Hempel, L.~Seco, and B.~Simon constructed a rooms-and-passages domain such that the spectrum of the
Neumann Laplacian coincides with a prescribed closed set $\mathfrak{S}\subset [0,\infty)$ with $0\in \mathfrak{S}$.
We review and prove their result in Theorem~\ref{th-HSS}; here also the continuous dependence of
the eigenvalues of Neumann Laplacians on varying domains discussed in Appendix~\ref{appa} plays an important role.
We also briefly recall another type of bounded domains -- so-called ``comb-like'' domains --
which allow one to control the essential spectrum in the case $0\notin \mathfrak{S}$.
Rooms-and-passages domains can also be used in a convenient way to control the discrete spectrum within compact intervals.
We demonstrate this in Theorem~\ref{thCdV+}, where we establish a slightly weaker version of the following
celebrated result by Y.~Colin de Verdi\`{e}re \cite{CdV87}: for arbitrary numbers
$0=\lambda_1<\lambda_2<\mathrm{d}ots<\lambda_m$ there exists a bounded domain $\Omega\subset\mathbb{R}^n$ such that the spectrum of the
Neumann Laplacian on $\Omega$ is purely discrete and its first $m$ eigenvalues coincide with the above numbers.
One of the main ingredients in our proof is a multidimensional version of the intermediate value theorem
by R.~Hempel, T.~Kriecherbauer, and P.~Plankensteiner in \cite{HKP97}.
In fact, our Theorem~\ref{thCdV+} is also contained in a more general result established in \cite{HKP97}, where a domain was constructed in such a way
that the essential spectrum and a part of the discrete spectrum of the Neumann Laplacian coincides with prescribed sets.
In Section~\ref{sec2} we show that similar tools and techniques can be used for a class of singular Schr\"odinger operators describing
the motion of quantum particles in potentials being supported at a discrete set.
These operators are known as \textit{solvable models} of quantum mechanics \cite{AGHH05}.
Namely, we will treat differential operators defined by the formal expression
\begin{gather*}
-{\mathrm{d}^2\over \mathrm{d} z^2}+\sum\limits_{k\in\mathbb{N}}\beta_k\langle\cdot\,,\,\delta_{z_k}'\rangle\delta_{z_k}',
\end{gather*}
where $\delta_{z_k}'$ is the distributional derivative of the delta-function supported at $z_k$, $\langle\phi,\delta_{z_k}'\rangle$ denotes its action
on the test function $\phi$ and $\beta_k\in \mathbb{R}\cup\{\infty\}$.
Such operators are called Schr\"odinger operators with $\delta'$-interactions (or \textit{point dipole interactions}) and were studied (also in the multidimensional
setting) in numerous
papers; here we only refer the reader to \cite{GHKM80,GH87,BEL14,BGLL15,BLL13,ER16,JL16,AN06,AEL94,BK15,BN13a,BN13b,BSW95,CK19,EKMT14,Ex95a,Ex95b,
Ex96,EJ13,EJ14,EK15,EK18,EL18,KM10a,KM10b,KM14,LR15,
MPS16,M96,ZSY17} and the references therein.
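For orientation we recall the informal description of these operators in terms of interface conditions: away from the points $z_k$ the operator acts as $-u''$, and for $\beta_k\in\mathbb{R}$ the functions $u$ in the domain satisfy the $\delta'$-interface conditions
\[u'(z_k+)=u'(z_k-),\qquad u(z_k+)-u(z_k-)=\beta_k\,u'(z_k),\qquad k\in\mathbb{N},\]
where $u'(z_k)$ denotes the common value of the one-sided derivatives; the case $\beta_k=\infty$ is interpreted as the decoupling Neumann conditions $u'(z_k+)=u'(z_k-)=0$. We refer to \cite{AGHH05} for a careful discussion.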
We will show in Theorem~\ref{th-BK} (see also Theorem~\ref{th-BK+}) that the points $z_k$ and coefficients $\beta_k$ can be chosen in such a way that the essential spectrum of
the above operator coincides with a predefined closed set. In our proof we make use of well-known convergence results for quadratic forms, which we briefly recall in
Appendix~\ref{appb}. Some of our arguments are also based on and related to results in the recent paper
\cite{KM14} by A.~Kostenko and M.M.~Malamud.
Finally, in Section~\ref{sec3} we consider a slightly more abstract problem which can also be viewed as a generalization of some of the above
problems: for a given densely defined symmetric operator $S$ with infinite defect numbers, that is, $S$ admits a self-adjoint extension
$A$ and $\mathrm{dom}(A)/\mathrm{dom}(S)$ is infinite dimensional,
and under the assumption that there exists a self-adjoint extension with discrete spectrum (or, equivalently, compact resolvent),
we construct a self-adjoint extension of $S$ with prescribed essential spectrum (possibly unbounded from below and above). Here the prescribed essential spectrum is generated via a perturbation argument
and a self-adjoint operator $\Xi$ that acts in an infinite dimensional {\it boundary space} and plays the role of a parameter in a boundary condition.
Our result is also related to the series of papers \cite{ABMN05,ABN98,B04,BMN06,BN95,BN96,BNW93} by S.~Albeverio, J.~Brasche, M.M.~Malamud, H.~Neidhardt, and J.~Weidmann
in which the existence of self-adjoint extensions
with prescribed point spectrum, absolutely continuous spectrum, and singular continuous spectrum in spectral gaps of a fixed underlying symmetric operator
was discussed.
\section{Essential and discrete spectra of Neumann Laplacians\label{sec1}}
The main objective of this section is to highlight some spectral properties of the Neumann Laplacian on bounded domains.
In the following let $\Omega$ be a bounded domain in $\mathbb{R}^n$ and assume that $n\ge 2$. As usual the Hilbert space of (equivalence classes of)
square integrable complex functions on $\Omega$ is denoted by $\mathsf{L}^2(\Omega)$, and $\mathsf{H}^1(\Omega)$ denotes the first order Sobolev space consisting
of functions in $\mathsf{L}^2(\Omega)$ that admit weak derivatives (of first order) in $\mathsf{L}^2(\Omega)$. An efficient method to introduce the Neumann Laplacian
in a mathematically rigorous way is to consider the sesquilinear form $\mathfrak{a}_\Omega$ defined by
\begin{equation}\label{formn}
\mathfrak{a}_\Omega[u,v]=\int_\Omega\nabla u\cdot\overline{\nabla v}\,\mathrm{d} x,\qquad \mathrm{dom}(\mathfrak{a}_\Omega)=\mathsf{H}^1(\Omega).
\end{equation}
It is clear that this form is densely defined in $\mathsf{L}^2(\Omega)$, nonnegative, and one can show that the form is closed, i.e.
the form domain $\mathsf{H}^1(\Omega)$ equipped with the scalar product $\mathfrak{a}_\Omega[\cdot,\cdot]+(\cdot,\cdot)_{\mathsf{L}^2(\Omega)}$ is complete.
The well-known first representation
theorem (see, e.g. \cite[Chapter 6, Theorem 2.1]{K66}) associates a unique nonnegative
self-adjoint operator $A_\Omega$ in $\mathsf{L}^2(\Omega)$ to the form $\mathfrak{a}_\Omega$ such that the domain inclusion
$\mathrm{dom}(A_\Omega)\subset\mathrm{dom}(\mathfrak{a}_\Omega)$ and the equality
\begin{equation}\label{opn}
(A_\Omega u, v)_{\mathsf{L}^2(\Omega)}=\mathfrak{a}_\Omega[u,v],\quad u\in \mathrm{dom}(A_\Omega),\,\,v\in\mathrm{dom}(\mathfrak{a}_\Omega),
\end{equation}
hold. The operator $A_\Omega$ is called \textit{the Neumann Laplacian} on $\Omega$. One can show that
\begin{itemize}
\item $A_\Omega u = -\Delta u$, where $-\Delta u$ is understood as a distribution.
\item $\mathrm{dom}(A_\Omega)\subset \mathsf{H}^2_{\rm loc}(\Omega)\cap \mathsf{H}^1(\Omega)$.
\item If $\partial\Omega$ is $C^2$-smooth then
$$\mathrm{dom}(A_\Omega)=\left\{u\in \mathsf{H}^2 (\Omega):\ \displaystyle{\partial_n u}\!\restriction_{\partial\Omega}=0\right\},$$
where $\partial_n$ denotes the normal derivative on $\partial\Omega$.
\end{itemize}
Typically the boundary condition $\displaystyle{\partial_n u}\!\restriction_{\partial\Omega}=0$ is referred to as {\it Neumann boundary condition},
which also justifies the terminology Neumann Laplacian. However, note that some regularity for the boundary of the domain has to
be required in order to be able to deal with a normal derivative. For completeness, we note that
the assumption of a $C^2$-boundary above is not optimal (but almost) for $\mathsf{H}^2$-regularity
of the domain of the Neumann Laplacian.
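To fix ideas we recall the model situation of a rectangle $\Omega=(0,a)\times(0,b)$: here separation of variables shows that the spectrum of $A_\Omega$ is purely discrete and consists of the eigenvalues
\[\lambda_{j,l}=\left(\frac{\pi j}{a}\right)^2+\left(\frac{\pi l}{b}\right)^2,\qquad j,l\in\mathbb{N}\cup\{0\},\]
with eigenfunctions $\cos(\pi jx/a)\cos(\pi ly/b)$. In particular, on a square room of side $d$ the lowest nonzero Neumann eigenvalue is $(\pi/d)^2$; this elementary observation is useful to keep in mind for the rooms-and-passages constructions below.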
The rest of this section deals with some spectral properties of Neumann Laplacians. First of all we demonstrate in a preliminary
example that
the Neumann Laplacian may have essential spectrum; since the domain $\Omega$ is bounded this may be a bit surprising at first sight.
In this context we then recall a well-known result due to R.~Hempel, L.~Seco, and B.~Simon from \cite{HSS91} on how to explicitly
construct a bounded rooms-and-passages-type domain (with nonsmooth boundary) such that the essential spectrum of the Neumann Laplacian coincides with a prescribed
closed set. Another related topic is to construct Neumann Laplacians on appropriate domains such that finitely many
discrete eigenvalues coincide with a given set of points. Here we recall a famous result due to Y. Colin de Verdi\`{e}re from \cite{CdV87},
and supplement this theorem with a similar result which is proved with a simple rooms-and-passages-type strategy.
Actually, Theorem~\ref{thCdV+} is also a special variant of a more general result
by R.~Hempel, T.~Kriecherbauer, and P.~Plankensteiner in \cite{HKP97}.
\subsection{Neumann Laplacians may have nonempty essential spectrum\label{subsec11}}
Let $A_\Omega$ be the self-adjoint Neumann Laplacian in $\mathsf{L}^2(\Omega)$ and denote by $\sigma_\mathrm{ess}(A_\Omega)$ the essential spectrum
of $A_\Omega$.
It is well-known that $\sigma_\mathrm{ess}(A_\Omega)=\varnothing$
(which is equivalent to the compactness of the resolvent of $A_\Omega$ in $\mathsf{L}^2(\Omega)$)
if and only if
\begin{gather}\label{compact}
\text{the embedding }i_\Omega:\mathsf{H}^1(\Omega)\hookrightarrow \mathsf{L}^2(\Omega)\text{ is compact};
\end{gather}
cf. \cite[Satz 21.3]{T72}.
If the boundary $\partial\Omega$ is sufficiently
regular (for example Lipschitz) then \eqref{compact} holds; this result is known as Rellich's embedding theorem.
However, in general the embedding $i_\Omega$ need not be compact. In fact,
C.J.~Amick established in \cite[Theorem~3]{A78} a necessary and sufficient criterion for the compactness of the embedding operator $i_\Omega$, which we recall
in the next theorem.
\begin{theorem}[Amick, 1978]\label{th-A}
The embedding $i_\Omega$ in \eqref{compact} is compact if and only if
\begin{equation}\label{gamma}
\Gamma_\Omega:=\lim\limits_{\varepsilon\to 0}\sup_{u\in \mathsf{H}^1(\Omega)}{\|u\|^2_{\mathsf{L}^2(\Omega^\varepsilon)}\over \|u\|^2_{\mathsf{H}^1(\Omega)}}=0,
\end{equation}
where $\Omega^\varepsilon=\{x\in\Omega:\ \mathrm{dist}(x,\partial\Omega)<\varepsilon\}$.
\end{theorem}
\begin{remark}\color{black}
Necessary and sufficient conditions for $\sigma_\mathrm{ess}(A_\Omega)=\varnothing$ have also been obtained in \cite{Ma68}. These conditions are formulated in terms of capacities.
\end{remark}
In \cite{A78} an example of a bounded domain $\Omega\subset\mathbb{R}^2$ consisting of countably
many rooms $R_k$ and passages $P_k$ with $\Gamma_\Omega>0$ was constructed (see Figure~\ref{fig1}).
\begin{figure}[h]
\begin{picture}(350,78)
\includegraphics[width=120mm]{rp}
\put(-145,72){\vector(1,0){23}}
\put(-145,72){\vector(-1,0){16}}
\put(-145,78){$_{\hat d_k}$}
\put(-190,72){\vector(-1,0){29}}
\put(-190,72){\vector(1,0){31}}
\put(-195,76){$_{d_k}$}
\put(-135,38){${P_k}$}
\put(-150,40){\vector(0,1){8}}
\put(-150,40){\vector(0,-1){7}}
\put(-156,36){\begin{rotate}{90}$_{\beta_k}$\end{rotate}}
\put(-195,38){$R_k$}
\put(-205,40){\vector(0,1){30}}
\put(-205,40){\vector(0,-1){29}}
\put(-210,38){\begin{rotate}{90}$_{d_k}$\end{rotate}}
\end{picture}
\caption{Rooms-and-passages domain $\Omega$ \label{fig1}}
\end{figure}
For the convenience of the reader we wish to recall this construction in the following.
Note that we impose slightly different assumptions on the rooms and passages compared to \cite{A78}.
Consider some sequences
$(d_k)_{k\in\mathbb{N}}$ and
$(\hat d_k)_{k\in\mathbb{N}}$ of positive numbers such that
\begin{gather}
\label{sum}
\sum\limits_{k\in\mathbb{N}}d_k<\infty
\end{gather}
and assume that there is a constant $C_1>0$ with the property
\begin{gather}
\label{ddd}
\hat d_{k}\leq C_1\min\left\{d_k;\,d_{k+1}\right\}.
\end{gather}
Note that \eqref{sum} and \eqref{ddd} imply
\begin{equation}\label{sum+}
\sum\limits_{k\in\mathbb{N}}\hat d_k<\infty
\end{equation}
and
\begin{equation}\label{ddTo0}
\lim_{k\to\infty}d_k=0,\quad \lim_{k\to\infty}\hat d_k=0.
\end{equation}
One can choose, for example,
$d_k=(2k-1)^{-2},\ \hat d_k=(2k)^{-2},\ k\in\mathbb{N}$;
then conditions \eqref{sum} and \eqref{ddd} hold with $C_1\in \big[{9\over 4},\infty\big)$.
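Let us briefly verify \eqref{ddd} for this choice: one has $\min\{d_k;\,d_{k+1}\}=d_{k+1}=(2k+1)^{-2}$ and
\[\frac{\hat d_k}{d_{k+1}}=\left(\frac{2k+1}{2k}\right)^2\le\left(\frac{3}{2}\right)^2=\frac{9}{4},\qquad k\in\mathbb{N},\]
so that $\hat d_k\le C_1 d_{k+1}= C_1\min\{d_k;\,d_{k+1}\}$ for every $C_1\ge{9\over 4}$; condition \eqref{sum} is obvious.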
Finally, let
$(\beta_k)_{k\in\mathbb{N}}$ be a sequence of positive numbers
such that for all $k\in\mathbb{N}$
\begin{gather}\label{beta}
\beta_k \leq C_2 (\hat d_k)^\alpha
\end{gather}
with some $\alpha\ge 3$ and $C_2>0$ such that
\begin{gather}
\label{C2}
C_2\leq \frac{1}{C_1}\cdot\left(\max_{k\in\mathbb{N}}\hat d_k\right)^{1-\alpha}.
\end{gather}
In the next step define the sequence $(x_k)_{k\in\mathbb{N}}$ by
\begin{equation}\label{xk}
x_k:=\sum\limits_{j=1}^k (d_j + \hat d_j) - \hat d_k,
\end{equation}
and define the rooms $R_k$ and passages $P_k$ by
\begin{equation}\label{RP1}
R_k: = (x_k-d_k,x_k)\times \left(-{d_k\over 2},{d_k\over 2}\right)
\end{equation}
and
\begin{equation}\label{RP2}
P_k: = \bigl[x_k,x_{k}+\hat d_k\bigr]\times \left(-{\beta_k\over 2},{\beta_k\over 2}\right),
\end{equation}
respectively. Finally, the union of $R_k$ and $P_k$ leads to the desired rooms-and-passages domain
\begin{equation}\label{RP3}
\Omega:=\bigcup\limits_{k\in\mathbb{N}}\left(R_k\cup P_k\right).
\end{equation}
From \eqref{sum}, \eqref{sum+}, and \eqref{beta} it is clear that $\Omega$ is bounded.
Using \eqref{ddd}, \eqref{beta}, \eqref{C2} and taking into account that $\alpha\ge 3$ we obtain the estimate
\begin{gather}\label{betaestk}
\beta_k \leq C_2 (\hat d_k)^\alpha \leq C_2(\hat d_k)^{\alpha-1}C_1 \min\left\{d_k,d_{k+1}\right\} \leq \min\left\{d_k,d_{k+1}\right\}.
\end{gather}
Hence the thickness of the passage $P_k$ is not larger than the sides
of the adjacent rooms $R_{k}$ and $R_{k+1}$, which also shows that $\Omega$ is indeed an open set.
It will now be illustrated that for this particular domain $\Omega$ the quantity $\Gamma_\Omega$ in \eqref{gamma} is positive, so that
the embedding in \eqref{compact} is not compact. In particular, the essential spectrum of the Neumann Laplacian in $\mathsf{L}^2(\Omega)$ is not empty.
For this purpose consider the piecewise linear functions $u_k$, $k=2,3,\mathrm{d}ots$, defined by
\begin{gather*}
u_k(\textbf{x})=
\begin{dcases}
\displaystyle{1\over d_k},&\textbf{x}=(x,y)\in R_k,\\
\displaystyle{x_k+\hat d_k-x\over d_k\hat d_k},&\textbf{x}=(x,y)\in P_k,\\
\displaystyle{x_{k-1}-x\over d_k(x_{k-1}-x_k+d_k)},&\textbf{x}=(x,y)\in P_{k-1},\\
0,&\text{otherwise}.
\end{dcases}
\end{gather*}
Note that $x_{k-1}-x_k+d_k=-\hat d_{k-1}$ by \eqref{xk}. It is easy to see that
the function $u_k$ belongs to $\mathsf{H}^1(\Omega)$. Next we evaluate its $\mathsf{L}^2$-norm. One computes
\begin{multline}\label{uk0}
\|u_k\|^2_{\mathsf{L}^2(\Omega)}=
\|u_k\|^2_{\mathsf{L}^2(R_k)}+\|u_k\|^2_{\mathsf{L}^2(P_{k-1})}+\|u_k\|^2_{\mathsf{L}^2(P_{k})}
\\=
1+ {1\over 3(d_k)^2}\left(\beta_{k-1}\hat d_{k-1}+\beta_{k}\hat d_k\right)
\leq
1+{C_2 \over 3(d_k)^2}\left( (\hat d_{k-1})^{\alpha+1}+(\hat d_{k})^{\alpha+1} \right).
\end{multline}
We also have (cf.~\eqref{ddd})
\begin{equation}
\label{uk00}
\hat d_{k-1} \leq C_1 d_k\quad\text{and}\quad
\hat d_{k } \leq C_1 d_k.
\end{equation}
Using \eqref{uk00} and taking into account that
$\lim_{k\to\infty} d_k = 0$ and $\alpha\geq 3$, we obtain from \eqref{uk0}:
\begin{equation}\label{uk}
\|u_k\|^2_{\mathsf{L}^2(\Omega)}=1+o(1)\text{ as }k\to\infty.
\end{equation}
Now we estimate the $\mathsf{L}^2$-norm of $\nabla u_k$. Using \eqref{beta} and \eqref{uk00} we get
\begin{equation}\label{estixx}
\begin{split}
\|\nabla u_k\|^2_{\mathsf{L}^2(\Omega)}&=
\|\nabla u_k\|^2_{\mathsf{L}^2(P_{k-1})}+\|\nabla u_k\|^2_{\mathsf{L}^2(P_{k})}
=
{1\over (d_k)^2}\left({\beta_{k-1}\over \hat d_{k-1}}+{\beta_k\over \hat d_k}\right)
\\
&\leq
{C_2 \over (d_k)^2}\left( (\hat d_{k-1})^{\alpha-1}+(\hat d_{k})^{\alpha-1} \right)\leq
2C_1^{\alpha-1} C_2(d_{k})^{\alpha-3}.
\end{split}
\end{equation}
Moreover, it is clear that for any $\varepsilon>0$ there exists $k(\varepsilon)\in\mathbb{N}$ such that
$\mathrm{supp}(u_k)\subset\Omega^\varepsilon$ for all $k\ge k(\varepsilon)$. This, \eqref{uk}, \eqref{estixx}, and
\eqref{ddTo0}
yield $\Gamma_\Omega>0$ (recall that $\alpha\geq 3$).
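More explicitly, since $\mathrm{supp}(u_k)\subset\Omega^\varepsilon$ for all sufficiently large $k$, one has for every $\varepsilon>0$
\[\sup_{u\in\mathsf{H}^1(\Omega)}\frac{\|u\|^2_{\mathsf{L}^2(\Omega^\varepsilon)}}{\|u\|^2_{\mathsf{H}^1(\Omega)}}\ge\limsup_{k\to\infty}\frac{\|u_k\|^2_{\mathsf{L}^2(\Omega)}}{\|u_k\|^2_{\mathsf{L}^2(\Omega)}+\|\nabla u_k\|^2_{\mathsf{L}^2(\Omega)}}\ge\frac{1}{1+\sup_k\|\nabla u_k\|^2_{\mathsf{L}^2(\Omega)}}>0,\]
where the gradient norms are bounded uniformly in $k$ by \eqref{estixx}, since $\alpha\ge 3$ and the sequence $(d_k)$ is bounded; as this lower bound does not depend on $\varepsilon$, it follows that $\Gamma_\Omega>0$.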
As an immediate consequence we conclude the following corollary.
\begin{corollary}\label{cor1}
Let $\Omega$ be the bounded rooms-and-passages domain in \eqref{RP3} and let $A_\Omega$ be the self-adjoint Neumann Laplacian in $\mathsf{L}^2(\Omega)$.
Then
$$\sigma_\mathrm{ess}(A_\Omega)\not=\varnothing. $$
\end{corollary}
The natural question that arises in the context of Corollary~\ref{cor1} is what form the essential spectrum of the Neumann Laplacian $A_\Omega$ may have.
This topic is discussed in the next subsection.
\subsection{Neumann Laplacian with prescribed essential spectrum\label{subsec12}}
In the celebrated paper \cite{HSS91} R.~Hempel, L.~Seco, and B.~Simon
have shown, using rooms-and-passages-type domains of similar form as above, that the
essential spectrum of the Neumann Laplacian can be rather arbitrary.
Below we briefly describe their construction.
We fix sequences $(d_k)_{k\in\mathbb{N}}$ and $(\hat d_k)_{k\in\mathbb{N}}$ of positive numbers
satisfying
\begin{gather}
\label{dd}
\sum\limits_{k\in\mathbb{N}}(d_k+\hat d_k)<\infty,
\end{gather}
and of course one then has \eqref{ddTo0}.
Let the domain $\Omega\subset\mathbb{R}^2$ consist of countably many rooms $R_k$ and passages $P_k$. Furthermore,
in each room we insert an additional ``wall'' $W_k^{\alpha_k}$ (see Figure~\ref{fig2}), and the resulting modified room is then denoted by
$R_k^{\alpha_k}=R_k\setminus W_k^{\alpha_k}$, so that
\begin{equation}\label{omega23}
\Omega=\bigcup\limits_{k\in\mathbb{N}}\bigl( R_k^{\alpha_k}\cup P_k\bigr)=\bigcup\limits_{k\in\mathbb{N}}\bigl((R_k\setminus W_k^{\alpha_k})\cup P_k\bigr).
\end{equation}
Here $R_k$ and $P_k$ are defined by \eqref{RP1} and \eqref{RP2}
with $\beta_k$ satisfying
\begin{gather}\label{b}
0<\beta_k\leq \min\left\{d_k;\,d_{k+1}\right\};
\end{gather}
cf. \eqref{betaestk}.
The walls
$W_k^{\alpha_k}$ are given by
$$W_k^{\alpha_k}:=\left\{\mathbf{x}=(x,y)\in\mathbb{R}^2:\ x=x_k-{d_k\over 2},\ |y|\in \bigg[{\alpha_k\over 2},{d_k\over 2}\bigg]\right\},$$
where it is assumed that the sequence $(\alpha_k)_{k\in\mathbb{N}}$ satisfies
\begin{gather}\label{a}
0<\alpha_k\leq d_k.
\end{gather}
\begin{figure}[h]
\begin{picture}(290,70)
\includegraphics[width=100mm]{rpw.eps}
\put(-162,35){$W_k^{\alpha_k}$}
\put(-147,30){\vector(2,-1){11}}
\put(-147,46){\vector(2, 1){11}}
\put(-132,30){\vector(0,1){6}}
\put(-132,48){\vector(0,-1){6}}
\put(-129,35){\begin{rotate}{90}$_{\alpha_k}$\end{rotate}}
\end{picture}
\caption{Rooms-and-passages domain with additional walls\label{fig2}}
\end{figure}
\begin{theorem}[Hempel-Seco-Simon, 1991]\label{th-HSS}
Let $\mathfrak{S}\subset [0,\infty)$ be an arbitrary closed set such that $0\in \mathfrak{S}$. Then there exist sequences $(d_k)_{k\in\mathbb{N}}$, $(\hat d_k)_{k\in\mathbb{N}}$, $(\alpha_k)_{k\in\mathbb{N}}$,
$(\beta_k)_{k\in\mathbb{N}}$ satisfying \eqref{dd}, \eqref{b}, and \eqref{a} such
that
\begin{gather}
\label{essS}
\sigma_\mathrm{ess}(A_\Omega)=\mathfrak{S}.
\end{gather}
\end{theorem}
\begin{proof}
This sketch of the proof from \cite{HSS91} consists of three steps. In the first step, in which we skip the details of a perturbation argument, it is shown that
the original problem to ensure \eqref{essS} for the Neumann Laplacian $A_\Omega$ on the domain $\Omega$
can be reduced to showing the same property for a ``decoupled'' Neumann Laplacian $A_{\rm dec}$. The spectrum of this operator can be described explicitly,
which is done in the second step. Finally, in a third step the parameters are adjusted in such a way that \eqref{essS} holds.\\
\noindent
{\it Step 1.} Let $(d_k)_{k\in\mathbb{N}}$, $(\hat d_k)_{k\in\mathbb{N}}$, $(\alpha_k)_{k\in\mathbb{N}}$, and
$(\beta_k)_{k\in\mathbb{N}}$ be some sequences that satisfy \eqref{dd}, \eqref{b}, and \eqref{a}. Let $\Omega$ be the corresponding
domain in \eqref{omega23} and let $A_\Omega$ be the Neumann Laplacian on $\Omega$ defined via the quadratic form as in \eqref{formn}--\eqref{opn}.
In the following we denote by $A_{R_k^{\mathfrak{a}lpha_k}}$
the Neumann Laplacian
on the domain $R_k^{\mathfrak{a}lphapha_k}=R_k\mathfrak{set}minus W_k^{\mathfrak{a}lpha_k}$, also defined via the quadratic form
\betagin{equation}\label{form5}
\mathfrak{a}_{R_k^{\mathfrak{a}lphapha_k}}[u,v]=\int_{R_k^{\mathfrak{a}lpha_k}}\nabla u\cdot\overline{\nabla v}\,\mathrm{d} x,\quad
\mathrm{dom}\bigl(\mathfrak{a}_{R_k^{\mathfrak{a}lpha_k}}\bigr)=\mathsf{H}^1(R_k^{\mathfrak{a}lpha_k}),
^\varepsilonnd{equation}
in the same way as in ^\varepsilonqref{formn}--^\varepsilonqref{opn}. Informally speaking, the functions in the domain of this operator satisfy Neumann
boundary conditions on the boundary of the room $R_k$ and, in addition,
Neumann boundary conditions on both sides of the additional wall $W_k^{\mathfrak{a}lpha_k}$. Furthermore, we will
make use self-adjoint Laplacians
on the interiors $\mathring P_k$ of the passages $P_k$ with mixed Dirichlet and Neumann boundary conditions.
More precisely, $A_{\mathring P_k}^{\text{\tiny{DN}}}$
denotes the self-adjoint Laplacian defined on a subspace of $\mathsf{H}^1(\mathring P_k)$, where it is assumed that the functions in the domain satisfy
Neumann boundary conditions on $\{\textbf{x}=(x,y)\in\partial \mathring P_k:\ y=\pm \beta_k/2\}$
and Dirichlet boundary conditions on the remaining part $\{\textbf{x}=(x,y)\in\partial \mathring P_k:\ x=x_k\vee x=x_{k}+\hat d_k\}$ of the boundary.
Now consider the ``decoupled'' operator
\begin{equation}\label{orto}
A_{\rm dec}=
\bigoplus_{k\in\mathbb{N}}
\bigl(
A_{R_k^{\alpha_k}}
\oplus
A_{\mathring P_k}^{\text{\tiny{DN}}}\bigr)
\end{equation}
as an orthogonal sum of the self-adjoint operators $A_{R_k^{\alpha_k}}$ and $A_{\mathring P_k}^{\text{\tiny{DN}}}$ in the space
$$\mathsf{L}^2(\Omega)=\bigoplus_{k\in\mathbb{N}} \bigl(\mathsf{L}^2(R_k^{\alpha_k})\oplus\mathsf{L}^2(\mathring P_k)\bigr).$$
Then one can show that the resolvent difference
$(A_{\rm dec}+\mathrm{Id})^{-1} - (A_\Omega+\mathrm{Id})^{-1}$
of the Neumann Laplacian $A_\Omega$ and the decoupled operator $A_{\rm dec}$
is a compact operator
provided $\beta_k\to 0$ sufficiently fast as $k\to \infty$.
We fix such a sequence $(\beta_k)_{k\in\mathbb{N}}$; then from
Weyl's theorem (see, e.g., \cite[Theorem XIII.14]{RS78}) one concludes
$$\sigma_{\rm ess}(A_\Omega)=\sigma_{\rm ess}(A_{\rm dec})$$ and hence it remains to show that
$$\sigma_{\rm ess}(A_{\rm dec})=\mathfrak{S}.$$
\noindent
{\it Step 2.}
First we shall explain how the eigenvalues of the Neumann Laplacian on $R_k^{\alpha_k}$ depend on the size of the wall $W_k^{\alpha_k}$
inside $R_k$. In this step of the proof the value $\alpha_k=0$ is also allowed (in this case the room $R_k^{\alpha_k}$ decouples: it becomes
a union of two disjoint rectangles).
We denote the eigenvalues of $A_{R_k^{\alpha_k}}$ (counted with multiplicities) and ordered
as a nondecreasing sequence by $(\lambda_{j}(R_k^{\alpha_k}))_{j\in\mathbb{N}}$.
It is not difficult to check that the corresponding forms $\mathfrak{a}_{R_k^{\alpha_k}}$ in \eqref{form5} are monotone in the parameter $\alpha_k$, that is,
for $0\leq\alpha_k\leq\widetilde \alpha_k\leq d_k$ one has
\begin{gather*}
\mathrm{dom}(\mathfrak{a}_{R_k^{\alpha_k}})\supset\mathrm{dom}(\mathfrak{a}_{R_k^{\widetilde \alpha_k}}),\\
\mathfrak{a}_{R_k^{\alpha_k}}[u,u]= \mathfrak{a}_{R_k^{\widetilde \alpha_k}}[u,u]\text{ for all }u\in\mathrm{dom}(\mathfrak{a}_{R_k^{\widetilde \alpha_k}}),
\end{gather*}
which means $\mathfrak{a}_{R_k^{\alpha_k}}\leq \mathfrak{a}_{R_k^{\widetilde \alpha_k}}$ in the sense of ordering of forms.
Then it follows from the min-max principle (see, e.g., \cite[Section~4.5]{D95}) that for each $j\in{\mathbb{N}}$ the function
\begin{gather}\label{alpfa-f}
[0,d_k]\ni\alpha_k\mapsto\lambda_j(R_k^{\alpha_k})
\end{gather}
is nondecreasing and by Theorem~\ref{th-contin}
this function is also continuous. In the present situation it is clear that
\begin{equation}\label{lambda1R}
\lambda_1(R_k^{\alpha_k})=0,
\end{equation}
and
\begin{equation}
\label{lambda2R}
\lambda_2(R_k^{\alpha_k})=
\begin{cases}
0,&\alpha_k=0,\\
\left({\pi / d_k}\right)^2,&\alpha_k=d_k,
\end{cases}
\end{equation}
and due to the monotonicity of the function \eqref{alpfa-f} one also has
\begin{gather}\label{lambda3R}
\lambda_3(R_k^{\alpha_k})\geq \lambda_3(R_k^0)=\left({\pi / d_k}\right)^2.
\end{gather}
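The boundary values in \eqref{lambda2R} and \eqref{lambda3R} can be verified directly, since in both extreme cases the spectrum is that of rectangles; here we use that, as in the explicit construction later in the text, $R_k$ is a square of side length $d_k$ with the wall $W_k^{\alpha_k}$ on a vertical midline.
% Neumann eigenvalues of a rectangle (0,a) x (0,b):
%   (pi m / a)^2 + (pi n / b)^2,   m, n = 0, 1, 2, ...
% Case alpha_k = d_k (no wall): R_k^{d_k} is the full square of side d_k,
% so lambda_1 = 0 and lambda_2 = (pi/d_k)^2.
% Case alpha_k = 0 (full wall): R_k^0 is a disjoint union of two
% (d_k/2) x d_k rectangles, each contributing (2 pi m/d_k)^2 + (pi n/d_k)^2,
% so lambda_1 = lambda_2 = 0 and lambda_3 = (pi/d_k)^2.
\begin{gather*}
\sigma\bigl(A_{(0,a)\times(0,b)}\bigr)
=\Bigl\{\Bigl(\frac{\pi m}{a}\Bigr)^2+\Bigl(\frac{\pi n}{b}\Bigr)^2:\ m,n\in\mathbb{N}_0\Bigr\},\\
\lambda_2(R_k^{d_k})=\Bigl(\frac{\pi}{d_k}\Bigr)^2,\qquad
\lambda_2(R_k^{0})=0,\qquad
\lambda_3(R_k^{0})=\Bigl(\frac{\pi}{d_k}\Bigr)^2.
\end{gather*}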
Furthermore, if $(\mu_j(\mathring P_k))_{j\in\mathbb{N}}$ denote the eigenvalues (counted with multiplicities) of $A_{\mathring P_k}^{\text{\tiny{DN}}}$ ordered
as a nondecreasing sequence then one verifies that the first eigenvalue $\mu_1(\mathring P_k)$ is given by
\begin{gather}\label{lambda1mu1b}
\mu_1(\mathring P_k)= ({\pi / \hat d_k} )^2
\end{gather}
for all $k\in{\mathbb{N}}$.
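Indeed, \eqref{lambda1mu1b} follows by separation of variables on the rectangle $\mathring P_k=(x_k,x_k+\hat d_k)\times(-\beta_k/2,\beta_k/2)$:
% Dirichlet conditions at the ends x = x_k and x = x_k + \hat d_k force sine
% modes in x (m >= 1), while the Neumann conditions at y = \pm beta_k/2
% allow cosine modes in y (n >= 0); hence
\begin{gather*}
\sigma\bigl(A_{\mathring P_k}^{\text{\tiny{DN}}}\bigr)
=\Bigl\{\Bigl(\frac{\pi m}{\hat d_k}\Bigr)^2+\Bigl(\frac{\pi n}{\beta_k}\Bigr)^2:\ m\in\mathbb{N},\ n\in\mathbb{N}_0\Bigr\},
\qquad
\mu_1(\mathring P_k)=\Bigl(\frac{\pi}{\hat d_k}\Bigr)^2.
\end{gather*}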
From the orthogonal sum structure in \eqref{orto} it is clear that
$$
\sigma_{\rm ess}(A_{\rm dec})=
\mathrm{acc}\big((\lambda_j(R_k^{\alpha_k}))_{j,k\in\mathbb{N}}\big)\cup\mathrm{acc}\big((\mu_j(\mathring P_k))_{j,k\in\mathbb{N}}\big),
$$
where the symbol $\mathrm{acc}$ denotes the set of accumulation points of a sequence.
Observe that the eigenvalues $(\mu_j(\mathring P_k))_{j,k\in\mathbb{N}}$ do not have any finite accumulation point (the smallest eigenvalue satisfies
\eqref{lambda1mu1b} and $\lim_{k\to\infty }\hat d_k= 0$ by assumption) and hence we obtain
$$
\sigma_{\rm ess}(A_{\rm dec})=
\mathrm{acc}\big((\lambda_j(R_k^{\alpha_k}))_{j,k\in\mathbb{N}}\big).
$$
\noindent
{\it Step 3.} Now we complete the proof by adjusting the parameters in the above construction.
Since by assumption $\mathfrak{S}$ is a closed subset of $[0,\infty)$ one can always find a sequence $(s_k)_{k\in \mathbb{N}}$
such that
\begin{gather}
\label{s-assumpt}
s_k> 0\text{ and }\mathrm{acc}((s_k)_{k\in \mathbb{N}})=
\begin{cases}
\mathfrak{S}\setminus\{0\},&0\text{ is an isolated point of }\mathfrak{S},\\
\mathfrak{S},&\text{otherwise}.
\end{cases}
\end{gather}
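To illustrate \eqref{s-assumpt} with a concrete choice:
% Example: S = {0} u [1,2]. Here 0 is an isolated point of S, so we need
% acc((s_k)) = [1,2]. Any enumeration (s_k) of the rationals in [1,2]
% works: every point of [1,2] is an accumulation point of the sequence,
% and no point outside [1,2] is.
\begin{gather*}
\mathfrak{S}=\{0\}\cup[1,2],\qquad
(s_k)_{k\in\mathbb{N}}\ \text{an enumeration of}\ \mathbb{Q}\cap[1,2],\\
\mathrm{acc}\bigl((s_k)_{k\in\mathbb{N}}\bigr)=[1,2]=\mathfrak{S}\setminus\{0\}.
\end{gather*}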
Next, for each $k\in\mathbb{N}$ we fix a number $d_k>0$ such that
\begin{gather}\label{sd}
s_k < (\pi/ d_k)^2.
\end{gather}
In addition, we assume that the numbers $d_k$ are chosen small enough so that
$$\sum\limits_{k\in\mathbb{N}}d_k<\infty.$$
We also fix a sequence of positive numbers $(\hat d_k)_{k\in\mathbb{N} }$ such that
$$\sum\limits_{k\in\mathbb{N}}\hat d_k<\infty.$$
Using
the continuity of the function \eqref{alpfa-f}
and taking into account \eqref{lambda2R} and \eqref{sd} it is clear that
there exists $\alpha_k\in (0,d_k)$ such that the second eigenvalue $\lambda_2(R_k^{\alpha_k})$ of the Neumann Laplacian $A_{R_k^{\alpha_k}}$
satisfies
\begin{gather}\label{lambda2}
\lambda_2(R_k^{\alpha_k})=s_k
\end{gather}
for all $k\in{\mathbb{N}}$.
Furthermore, by construction we also have $\lambda_1(R_k^{\alpha_k})=0$ and $\lambda_j(R_k^{\alpha_k})\geq (\pi/d_k)^2$ for $j\geq 3$
(cf. \eqref{lambda3R}), and hence we conclude together with
$\lim_{k\to\infty }d_k= 0$, \eqref{lambda1R}, \eqref{s-assumpt}, \eqref{lambda2},
and $0\in \mathfrak{S}$
that
$$\mathrm{acc}\big((\lambda_j(R_k^{\alpha_k}))_{j,k\in\mathbb{N}}\big)=
\mathrm{acc}\big((\lambda_j(R_k^{\alpha_k}))_{j\leq 2,\,k\in\mathbb{N}}\big)=
\{0\}\cup
\mathrm{acc}((s_k)_{k\in\mathbb{N}}) = \mathfrak{S}.$$
This implies $\sigma_{\rm ess} (A_{\rm dec})=\mathfrak{S}$ and completes the proof.
\end{proof}
Besides \eqref{essS} it is also shown in \cite{HSS91} that
the absolutely continuous spectrum $\sigma_{\rm ac}(A_\Omega)$ of $A_\Omega$ is empty.
The argument is as follows: It is verified that the difference
$(A_\Omega+\mathrm{Id})^{-2}-(A_{\rm dec}+\mathrm{Id})^{-2}$
is a trace class operator, and consequently the absolutely continuous spectra of
$A_\Omega$ and $A_{\rm dec}$ coincide; cf. \cite[page~30,\,Corollary~3]{RS79}.
Since $\sigma(A_{\rm dec})$ is pure point one concludes $\sigma_{\rm ac}(A_\Omega)=\sigma_{\rm ac}(A_{\rm dec})=\emptyset$.
Note that the absolutely continuous spectrum of the Neumann Laplacian on a bounded domain is not always empty.
For example, in \cite{Si92} B.~Simon constructed a bounded set $\Omega$ having the form of a ``jelly roll''
such that $\sigma_{\rm ac}(A_\Omega)=[0,\infty)$.
In \cite{HSS91} R.~Hempel, L.~Seco, and B.~Simon
also constructed a domain for which \eqref{essS} holds
without the restriction $0\in \mathfrak{S}$.
For this purpose so-called comb-like domains are used, see Figure~\ref{fig3}.
To construct the comb one attaches a sequence of ``teeth'' $(T_k^{\alpha_k})_{k\in\mathbb{N}}$ to a fixed rectangle $Q$;
the tooth $T_k^{\alpha_k}$ is obtained from a rectangle $T_k$ by removing an
internal wall $W_k^{\alpha_k}$. The teeth have bounded lengths, shrinking widths, and are stacked together without gaps.
The analysis is similar to the rooms-and-passages case. One can prove that the Neumann Laplacian $A_\Omega$ on such a comb-like domain $\Omega$
is a compact perturbation of the decoupled operator
$$
A_{\rm dec}=
\left(\bigoplus_{k\in\mathbb{N}}
A_{T_k^{\alpha_k}}^{\text{\tiny{DN}}}\right) \oplus A_Q,
$$
where
$A_{Q}$ is the Neumann Laplacian on $Q$ and $A_{T_k^{\alpha_k}}^{\text{\tiny{DN}}}$ is the Laplace operator on the tooth $T_k^{\alpha_k}$ subject to
Neumann boundary conditions on $\partial T_k^{\alpha_k}\setminus\partial Q$
and Dirichlet boundary conditions on $\partial T_k^{\alpha_k} \cap\partial Q$.
The walls $W_k^{\alpha_k}$ are adjusted in such a way that
the lowest eigenvalue
of $A_{T_k^{\alpha_k}}^{\text{\tiny{DN}}}$ coincides with a predefined number $s_k$, while the next eigenvalues tend to $\infty$ as $k\to\infty$ and do not contribute to the essential spectrum.
\begin{figure}[h]
\begin{picture}(130,165)
\includegraphics[height=60mm]{combs.eps}
\put(-60,50){$Q$}
\put(-96,100){$T_k$}
\put(-82,156){$W_k^{\alpha_k}$}
\put(-79,155){\vector(-1,-2){8}}
\end{picture}
\caption{Comb-like domain \label{fig3}}
\end{figure}
The important and somewhat surprising element of the rooms-and-passages and comb-like domain constructions
is the form of the decoupled operators with mixed Dirichlet and Neumann boundary conditions.
The fact that one chooses Dirichlet conditions on the common part of the
boundaries of the passages $P_k$ and modified rooms $R_k^{\alpha_k}$, and similarly on the common part of the
boundaries of the modified teeth $T_k^{\alpha_k}$ and the rectangle $Q$
is due to the following well-known effect (see, e.g., \cite{An87,Arr95,J89}):
the spectrum of the Neumann Laplacian on
$Q\cup T^\varepsilon$, where $Q$ is a fixed domain and $T^\varepsilon$ is an attached
``handle'' of fixed length $L$ and width $\varepsilon$, converges
to the spectrum of the direct sum of the Neumann Laplacian
on $Q$ and the one-dimensional Dirichlet Laplacian on $(0,L)$ as $\varepsilon\to 0$.
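In particular, in the limit the handle contributes the Dirichlet eigenvalues of the interval $(0,L)$:
% One-dimensional Dirichlet Laplacian on (0,L): eigenfunctions sin(pi k x/L),
% eigenvalues (pi k/L)^2, k >= 1. As eps -> 0 the spectrum of the Neumann
% Laplacian on Q u T^eps therefore approaches the set
\begin{gather*}
\sigma\bigl(A_Q\bigr)\cup\Bigl\{\Bigl(\frac{\pi k}{L}\Bigr)^2:\ k\in\mathbb{N}\Bigr\}.
\end{gather*}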
{\color{black}
Finally, we note that A.A.~Kiselev and B.S.~Pavlov \cite{KP94} obtained Theorem~\ref{th-HSS} for
(a kind of) Neumann Laplacian on a bounded set consisting of an array of two-dimensional domains connected
by intervals.}
\subsection{Neumann Laplacian with prescribed discrete spectrum\label{subsec13}}
In this section we are interested in the discrete spectrum of Neumann Laplacians. First we recall
a result by Y.~Colin de Verdi\`{e}re from \cite{CdV87}.
\begin{theorem}[Colin de Verdi\`{e}re, 1987]\label{thCdV}
Let $n\in\mathbb{N}\setminus\{1\}$ and assume that
\begin{gather*}
0=\lambda_1<\lambda_2<\dots<\lambda_m,\quad m\in\mathbb{N},
\end{gather*}
are fixed numbers.
Then there exists a bounded domain $\Omega\subset\mathbb{R}^n$ such that the spectrum of the
Neumann Laplacian
$A_\Omega$ on $\Omega$ is purely discrete and
\begin{gather*}
\lambda_k(\Omega)=\lambda_k,\quad k=1,\dots,m,
\end{gather*}
where $\left(\lambda_k(\Omega)\right)_{k\in\mathbb{N}}$ denotes the sequence of the eigenvalues of
$A_\Omega$ numbered in increasing order with multiplicities taken into account.
\end{theorem}
In fact, the main result in \cite{CdV87} concerns Riemannian manifolds: for an arbitrary
compact connected manifold one can construct a Riemannian metric on this manifold in such a way that
the first $m$ eigenvalues of the corresponding Laplace-Beltrami operator coincide with $m$ predefined numbers.
The idea of the proof of this theorem in \cite{CdV87} is to first
construct a suitable differential operator $A_\Gamma$ on a metric graph $\Gamma$ such that the first $m$ eigenvalues of $A_\Gamma$ coincide
with the predefined numbers $\lambda_k$, $k=1,\dots,m$.
Then the graph $\Gamma$ is ``blown up'' to a thin tubular domain $\Omega$ in such a way that the first $m$ eigenvalues of the Neumann Laplacian
$A_\Omega$ on $\Omega$ are asymptotically close to the first $m$ eigenvalues of $A_\Gamma$ provided the cross-section of $\Omega$ tends to zero.
For dimensions $n\ge 3$ the above theorem is extended in \cite{CdV87} by allowing nonsimple eigenvalues. More precisely,
one can construct a domain in $\mathbb{R}^n$, $n\ge 3$, such that the first $m$ eigenvalues of the Neumann Laplacian on this domain coincide with the predefined numbers
$$0=\lambda_1<\lambda_2\le\lambda_3\le \dots\le \lambda_m.$$
A similar theorem for the Dirichlet Laplacian (at least for $n=2$) cannot be expected. In fact,
by a well-known result of L.E.~Payne, G.~P\'olya, and H.F.~Weinberger from \cite{PPW56}
the ratio between the $k$-th and the $(k-1)$-th Dirichlet eigenvalue (for compact domains in $\mathbb{R}^2$) is bounded from
above (by a domain-independent constant) and hence the eigenvalues cannot be placed arbitrarily on $(0,\infty)$.
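For the first ratio this bound is explicit; the following sketch records the classical planar estimate from \cite{PPW56} (the constant $3$ proved there was later sharpened by Ashbaugh and Benguria):
% Payne-Polya-Weinberger (1956), planar case: for any bounded domain
% Omega in R^2 the first two Dirichlet eigenvalues satisfy
\begin{gather*}
\frac{\lambda_2(\Omega)}{\lambda_1(\Omega)}\le 3.
\end{gather*}
% Consequently a prescribed pair with lambda_2 > 3 lambda_1 (e.g.
% lambda_1 = 1, lambda_2 = 5) cannot be realized as the lowest two
% Dirichlet eigenvalues of any bounded planar domain.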
Y.~Colin de Verdi\`{e}re's result from above was later
improved by R.~Hempel, T.~Kriecherbauer, and P.~Plankensteiner in \cite{HKP97}, where a bounded domain $\Omega$ was constructed
such that the essential spectrum and a finite part of the discrete spectrum of
$A_\Omega$
coincide with predefined sets. In their construction comb-like domains were used; see Figure~\ref{fig3}.
The next theorem may be viewed as a variant of Theorem~\ref{thCdV}, although it is actually a slightly weaker version. In fact, we present this result here
since it can be proved
with a similar rooms-and-passages domain strategy as Theorem~\ref{th-HSS}. An important ingredient is a multidimensional version
of the intermediate value theorem from \cite{HKP97}; cf. Lemma~\ref{lemma-hempel} below.
\begin{theorem} \label{thCdV+}
Let $n\in\mathbb{N}\setminus\{1\}$ and assume that
\begin{gather*}
0<\nu_1<\nu_2<\dots<\nu_m,\quad m\in\mathbb{N},
\end{gather*}
are fixed numbers.
Then there exists a bounded domain $\Omega\subset\mathbb{R}^n$ such that the spectrum of the
Neumann Laplacian
$A_\Omega$ on $\Omega$ is purely discrete and
\begin{gather*}
\lambda_{m+k}(\Omega)=\nu_k,\quad k=1,\dots,m,
\end{gather*}
where $\left(\lambda_l(\Omega)\right)_{l\in\mathbb{N}}$ denotes the sequence of the eigenvalues of
$A_\Omega$ numbered in increasing order with multiplicities taken into account.
\end{theorem}
\begin{proof}
We consider the case $n=2$; the construction and the arguments in dimensions $n\geq 3$ are similar and left to the reader.
The proof consists of several steps and
in principle the strategy is similar to the one in the proof of Theorem~\ref{th-HSS}.
More precisely, we fix an arbitrary open and bounded interval $\mathcal{I}\subset (0,\infty)$ containing all the points $\nu_k$.
First we consider the decoupled domain consisting of $m$ pairwise disjoint rooms
$R_k^{\alpha_k}=R_k\setminus W_k^{\alpha_k}$ stacked in a row (as before $R_k$ is a square and $W_k^{\alpha_k}$ is an internal wall in it). The first eigenvalue of the
Neumann Laplacian on $R_k^{\alpha_k}$ is zero.
Moreover, under a suitable choice of $d_k$ and $\alpha_k$
the second eigenvalue coincides with $\nu_k$ and the third eigenvalue
is
larger than $\sup\mathcal{I}$. Consequently the first $m$ eigenvalues of the Neumann Laplacian
on $\cup_{k=1}^m R_k^{\alpha_k}$ are zero, the next $m$ eigenvalues are $\nu_1,\dots,\nu_m$, and all further eigenvalues are contained in $(\sup\mathcal{I},\infty)$.
Afterwards we connect the rooms by small windows of length $\varepsilon$ (see Figure~\ref{fig4}); the resulting domain is denoted by $\Omega$. If $\varepsilon$ is small
the spectrum changes only slightly. Namely,
\begin{itemize}
\item the eigenvalues $\lambda_1(\Omega),\dots,\lambda_m(\Omega)$ remain in $[0,\inf\mathcal{I})$,
\item the eigenvalues $\lambda_{m+1}(\Omega),\dots,\lambda_{2m}(\Omega)$ remain in small neighborhoods
of $\nu_1,\dots,\nu_m$, respectively; moreover, they still belong to $\mathcal{I}$,
\item the rest of the spectrum remains in $(\sup\mathcal{I},\infty)$.
\end{itemize}
It remains to establish the coincidence of
$\lambda_{k+m}(\Omega)$ and $\nu_k$ for $k=1,\dots,m$ (so far they are only close when $\varepsilon>0$ is sufficiently small),
which is done by using a multidimensional version of the intermediate value theorem from \cite{HKP97}. Roughly speaking, the shift of the eigenvalues
that appears after inserting the small windows can be compensated by varying
the constants $\alpha_1,\dots,\alpha_m$ appropriately.
In the following we implement this strategy.
\noindent
{\it Step 1.} Fix some positive numbers $d_1,\dots,d_m$ and let $\alpha_1,\dots,\alpha_m$, and $\varepsilon$ be nonnegative numbers satisfying
\begin{gather}\label{epsi}
\alpha_k\in [0,d_k]\quad\text{and}\quad
\varepsilon\in [0,\min_{k=1,\dots,m} d_k].
\end{gather}
In the following we shall use the notation $\alpha:=\{\alpha_1,\dots,\alpha_m\}$.
We introduce the domain $\Omega^{\alpha,\varepsilon}$ consisting of $m$ modified rooms $R_k^{\alpha_k}$ stacked in a row and connected through small windows
$P_k^\varepsilon$ of length $\varepsilon$. Each room $R_k^{\alpha_k}$ is obtained by removing from a square with side length $d_k$
two additional walls of length $(d_k-\alpha_k)/2$; see Figure~\ref{fig4}.
More precisely, let $x_k:=\sum_{j=1}^{k}d_j$, define the rooms by
\begin{gather*}
R_k^{\alpha_k}=\left(( x_k- d_k,x_k)\times\left(-{d_k\over 2},{d_k\over 2}\right)\right) \setminus W_k^{\alpha_k},
\end{gather*}
where the walls are
\begin{gather*}
W_k^{\alpha_k}=\left\{(x,y)\in\mathbb{R}^2:\ x=x_k-{d_k\over 2},\ |y|\in \bigg[{\alpha_k\over 2},{d_k\over 2}\bigg]\right\}
\end{gather*}
and the windows are $P_k^\varepsilon=\left\{x_k\right\}\times \left(-{\varepsilon\over 2},{\varepsilon\over 2}\right)$. Then the domain $\Omega^{\alpha,\varepsilon}$ that we will use is defined as
$$\Omega^{\alpha,\varepsilon}=\left(\bigcup\limits_{k=1}^m R^{\alpha_k}_k \right)\cup\left(\bigcup\limits_{k=1}^{m-1} P^\varepsilon_k\right).$$
\begin{figure}[h]
\begin{picture}(100,80)
\includegraphics[width=50mm]{m-rooms+.eps}
\put(-130,30){$R_1^{\alpha_1}$}
\put(-100,36){\vector(0,-1){8}}
\put(-100,35){\vector(0,1){14}}
\put(-98,35){$\alpha_1$}
\put(-63,46){\vector(0,-1){5}}
\put(-63,32){\vector(0,1){5}}
\put(-61,36){$\varepsilon$}
\end{picture}
\caption{\label{fig4} Domain $\Omega^{\alpha,\varepsilon}$ for $m=3$}
\end{figure}
The Neumann Laplacian on $\Omega^{\alpha,\varepsilon}$ will be denoted by $A_{\Omega^{\alpha,\varepsilon}}$ and it is important to observe that for $\varepsilon=0$
this self-adjoint operator in $\mathsf{L}^2(\Omega^{\alpha,\varepsilon})$ decouples into a finite orthogonal sum of Neumann Laplacians $A_{R_k^{\alpha_k}}$ on the
rooms $R_k^{\alpha_k}$, that is,
one has
\begin{equation}\label{orto+}
A_{\Omega^{\alpha,0}}=\bigoplus_{k=1}^m A_{R_k^{\alpha_k}}
\end{equation}
with respect to the corresponding space decomposition
$$
\mathsf{L}^2(\Omega^{\alpha,0})=\bigoplus_{k=1}^m \mathsf{L}^2(R_k^{\alpha_k}).
$$
Note that $\mathsf{L}^2(\Omega^{\alpha,0})=\mathsf{L}^2(\Omega^{\alpha,\varepsilon})$ and hence $A_{\Omega^{\alpha,\varepsilon}}$ and $A_{\Omega^{\alpha,0}}$ act in the same space for all $\varepsilon$ in \eqref{epsi}.
It is clear from \eqref{orto+} that the spectrum of the decoupled operator $A_{\Omega^{\alpha,0}}$ is the union of the
spectra of the Neumann Laplacians $A_{R_k^{\alpha_k}}$, $k=1,\dots,m$.
We recall (see Step~2 of the proof of Theorem~\ref{th-HSS}) that the functions
\begin{equation}\label{incre}
[0,d_k]\ni\alpha_k\mapsto \lambda_j(R_k^{\alpha_k}),\quad j\in\mathbb{N},
\end{equation}
are continuous and nondecreasing. Moreover,
one has
\begin{equation}\label{lambda12+}
\lambda_1(R_k^{\alpha_k})=0,\quad\lambda_2(R_k^{\alpha_k})=
\begin{dcases}
0,&\alpha_k=0,\\
(\pi / d_k )^2,&\alpha_k=d_k,
\end{dcases}
\end{equation}
and
\begin{equation}\label{lambda3+}
\lambda_3(R_k^{\alpha_k})\ge \lambda_3(R_k^{0})=
(\pi/d_k)^2.
\end{equation}
\noindent
{\it Step 2.}
Now we approach the main part of the proof, where the parameters will be properly adjusted.
Let $0<\nu_1<\dots<\nu_m$ be as in the assumptions of the theorem and
fix an open interval $\mathcal{I}$ such that
\begin{gather}\label{Inu}
0<\inf \mathcal I<\nu_1\quad\text{and}\quad \nu_m<\sup\mathcal I.
\end{gather}
Assume that the numbers $d_1,\dots,d_m$
satisfy
\begin{gather}\label{la<d}
\sup\mathcal I<\min_k\left(\pi/d_k\right)^2.
\end{gather}
Furthermore, let us choose a constant $\gamma>0$ such that the intervals
$$ [\nu_k-\gamma,\nu_k+\gamma],\quad k=1,\dots,m,$$
are pairwise disjoint and
\begin{gather}
\label{inI}
\bigcup_{k=1}^m [\nu_k-\gamma,\nu_k+\gamma]\subset\mathcal{I}
\end{gather}
holds.
Next, we introduce the sets
\begin{equation}\label{interval}
L_k=\left\{\alpha_k\in[0,d_k]:\ \lambda_2(R_k^{\alpha_k})\in [\nu_k-\gamma,\nu_k+\gamma]\right\}.
\end{equation}
Using the continuity and monotonicity of the function in \eqref{incre} and
taking into account \eqref{lambda12+} and \eqref{Inu}--\eqref{inI}
we conclude that each $L_k$ is a nonempty compact interval.
We set
$$\alpha_k^-=\min L_k,\quad \alpha_k^+=\max L_k,\quad\text{and}\quad \mathcal{D}=\prod_{k=1}^m[\alpha_k^-,\alpha_k^+].$$
It is clear from \eqref{lambda12+}, \eqref{Inu}, and \eqref{inI} that $\alpha_k^\pm>0$.
From now on we assume that
\begin{gather*}
\alpha=\{\alpha_1,\dots,\alpha_m\}\in \mathcal{D}.
\end{gather*}
As usual we denote
the eigenvalues of the decoupled operator $A_{\Omega^{\alpha,0}}$ in \eqref{orto+} by $(\lambda_j(\Omega^{\alpha,0}))_{j\in\mathbb{N}}$, counted with multiplicities and ordered as a nondecreasing sequence.
It follows from $\lambda_1(R_k^{\alpha_k})=0$ for $k=1,\dots,m$, that
\begin{equation*}
\lambda_k(\Omega^{\alpha,0})=0\quad\text{for}\quad k=1,\dots,m.
\end{equation*}
Furthermore, \eqref{lambda3+}, \eqref{la<d}, \eqref{inI}, and \eqref{interval} imply that the $(m+k)$-th eigenvalue of the orthogonal sum $A_{\Omega^{\alpha,0}}$ coincides with the second eigenvalue
$\lambda_{2}(R_k^{\alpha_k})$ of the Neumann Laplacian $A_{R_k^{\alpha_k}}$ for $k=1,\dots,m$:
\begin{gather}\label{lambda-all0}
\lambda_{m+k}(\Omega^{\alpha,0})=
\lambda_{2}(R_k^{\alpha_k})\in \overline{B_\gamma(\nu_k)}\text{ for } k=1,\dots,m.
\end{gather}
Moreover, it is clear from \eqref{lambda3+} and \eqref{la<d} that
\begin{gather*}
\lambda_{k}(\Omega^{\alpha,0})>\sup\mathcal{I} \quad\text{for}\quad k= 2m+1,\,2m+2,\dots.
\end{gather*}
We introduce the functions $f_k^0:\mathcal{D}\to \mathbb{R}$ by
\begin{gather}\label{f0}
f_k^0(\alpha_1,\alpha_2,\dots,\alpha_m)=\lambda_{k+m}(\Omega^{\alpha,0}),\quad k=1,\dots,m.
\end{gather}
It is important to note that due to \eqref{lambda-all0}
the value $\lambda_{k+m}(\Omega^{\alpha,0})$ of the function $f_k^0$ depends only on the $k$-th variable $\alpha_k$. Using this
and taking into account that the mapping \eqref{incre} is nondecreasing we get
\begin{multline}\label{Hempel0}
f_k^0(\alpha_1^+,\dots,\alpha_{k-1}^+,\alpha_k^-,\alpha_{k+1}^+,\dots,\alpha_m^+)\\
=\nu_k-\gamma
<\nu_k<\nu_k+\gamma
\\
=f_k^0(\alpha_1^-,\dots,\alpha_{k-1}^-,\alpha_k^+,\alpha_{k+1}^-,\dots,\alpha_m^-).
\end{multline}
\noindent
{\it Step 3.}
Let $(\lambda_j(\Omega^{\alpha,\varepsilon}))_{j\in\mathbb{N}}$ be the eigenvalues
of the Neumann Laplacian $A_{\Omega^{\alpha,\varepsilon}}$ on $\Omega^{\alpha,\varepsilon}$ counted with multiplicities and ordered
as a nondecreasing sequence.
For $\varepsilon\ge 0$ we introduce the functions $f_k^\varepsilon:\mathcal{D}\to \mathbb{R}$ by
\begin{gather}\label{f}
f_k^\varepsilon(\alpha_1,\alpha_2,\dots,\alpha_m)=\lambda_{k+m}(\Omega^{\alpha,\varepsilon}),\quad k=1,\dots,m.
\end{gather}
Of course, for $\varepsilon=0$ these functions coincide with the functions in \eqref{f0}.
Observe that, in contrast to $f_k^0$, for $\varepsilon>0$ the values $\lambda_{k+m}(\Omega^{\alpha,\varepsilon})$ of $f_k^\varepsilon$ in general do not depend only on the $k$-th variable.
It is important to note that Theorem~\ref{th-contin} and Remark~\ref{rem-contin} show that the functions
\begin{gather*}
\varepsilon\mapsto \lambda_j(\Omega^{\alpha,\varepsilon}),\quad j\in\mathbb{N},
\end{gather*}
are continuous for each fixed $\alpha$. Hence it follows together with \eqref{Hempel0} that
\begin{equation}\label{Hempel}
\begin{split}
f_k^\varepsilon(\alpha_1^+,\dots,\alpha_{k-1}^+,\alpha_k^-,\alpha_{k+1}^+,\dots,\alpha_m^+)&<\nu_k\\
&<f_k^\varepsilon(\alpha_1^-,\dots,\alpha_{k-1}^-,\alpha_k^+,\alpha_{k+1}^-,\dots,\alpha_m^-)
\end{split}
\end{equation}
for $\varepsilon>0$ sufficiently small.
From now on we fix $\varepsilon>0$ for which \eqref{Hempel} holds.
To proceed further we need the following multidimensional version
of the intermediate value theorem, which was established in \cite[Lemma~3.5]{HKP97}.
\begin{lemma}[Hempel-Kriecherbauer-Plankensteiner, 1997]\label{lemma-hempel}
Let $a_k < b_k$, $k=1,\dots,m$, and $\mathcal{D}:=\prod_{k=1}^m[a_k, b_k]$. Assume
that $f:\mathcal{D}\to\mathbb{R}^m$ is continuous and that each component function $f_k$ is
nondecreasing in each of its arguments. Let us suppose that
$F_k^-<F_k^+$, $k=1,\dots,m$, where
\begin{gather*}
F_k^-=f_k(b_1,b_2,\dots,b_{k-1},a_k,b_{k+1},\dots,b_m),\\
F_k^+=f_k(a_1,a_2,\dots,a_{k-1},b_k,a_{k+1},\dots,a_m).
\end{gather*}
Then for any $F\in\prod_{k=1}^m[F_k^-,F_k^+]$
there exists $x\in\mathcal{D}$ such that $f(x)=F$.
\end{lemma}
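In the simplest case $m=1$ the lemma reduces to the classical intermediate value theorem for a continuous nondecreasing function:
% m = 1: D = [a_1, b_1], F_1^- = f_1(a_1), F_1^+ = f_1(b_1), and the
% statement says that every F in [f_1(a_1), f_1(b_1)] is attained by
% the continuous function f_1 on [a_1, b_1].
\begin{gather*}
F\in[f_1(a_1),f_1(b_1)]\ \Longrightarrow\ \exists\, x\in[a_1,b_1]:\ f_1(x)=F.
\end{gather*}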
We apply this lemma to the function
$f^\varepsilon=(f^\varepsilon_1,\dots,f^\varepsilon_m)$ defined by \eqref{f}.
By Theorem~\ref{th-contin} and Remark~\ref{rem-contin} $f^\varepsilon:\mathcal{D}\to\mathbb{R}^m$ is continuous. Moreover, by the
min-max principle each component $f^\varepsilon_k$ of $f^\varepsilon$ is nondecreasing in
each of its arguments.
Using this and \eqref{Hempel} we conclude that
$f^\varepsilon$ satisfies all assumptions in Lemma~\ref{lemma-hempel} and
hence there exists $\alpha\in\mathcal{D}$ such that
\begin{gather*}
f_k^\varepsilon(\alpha)=\nu_k,\quad k=1,\dots,m.
\end{gather*}
With this choice of $\alpha=\{\alpha_1,\dots,\alpha_m\}$ and $d_1,\dots,d_m$ fixed as in the beginning of Step 2 (see \eqref{la<d}) it follows that
$\lambda_{m+k}(\Omega^{\alpha,\varepsilon})=\nu_k$ for $k=1,\dots,m$. This completes the proof of
Theorem~\ref{thCdV+}.
\end{proof}
\section{Singular Schr\"odinger operators with $\mathrm{d}elta'$-interactions\label{sec2}}
In this section we show that the methods used to control the spectrum of the Neumann Laplacian in the previous section
can also be applied to singular Schr\"odinger operators describing the motion of quantum particles in
potentials being supported at a discrete (finite or infinite) set of
points. These operators are often referred to as \textit{solvable models} in
quantum mechanics, since their mathematical and physical
quantities (e.g., their spectrum) can be determined explicitly. We refer to the monograph \cite{AGHH05} for an introduction to this topic.
We also note that in the mathematical literature such operators are often called \textit{Schr\"odinger operators with point interactions}.
The classical example of
a Schr\"odinger operator with \textit{$\mathrm{d}elta$-interactions} is
the following formal expression
\betagin{gather*}
\mathrm{d}isplaystyle-{\mathrm{d}^2 \over \mathrm{d} z^2} + \sum\limits_{k\in \mathbb{N}}\mathfrak{a}lpha_k\mathrm{d}elta_{z_k},
^\varepsilonnd{gather*}
where $\mathrm{d}elta_{z_k}$ are Dirac delta-functions supported at the points $z_k\in\mathbb{R}$ and $\mathfrak{a}lpha_k\in\mathbb{R}\cup\{\infty\}$.
In the present paper we treat the closely related model of a Schr\"odinger operator with $\delta'$-interactions (or \textit{point dipole interactions})
defined by the formal expression
\begin{gather}\label{delta'}
-{\mathrm{d}^2\over \mathrm{d} z^2}+\sum\limits_{k\in\mathbb{N}}\beta_k\langle\cdot\,,\,\delta_{z_k}'\rangle\delta_{z_k}',
\end{gather}
where $\delta_{z_k}'$ is the distributional derivative of the delta-function supported at $z_k\in\mathbb{R}$, $\langle\phi,\delta_{z_k}'\rangle$ denotes its action
on the test function $\phi$, and $\beta_k\in \mathbb{R}\cup\{\infty\}$.
The above formal expression can be realized
as a self-adjoint operator in $\mathsf{L}^2$ with the action $-{\mathrm{d}^2\over \mathrm{d} z^2}$ and
domain consisting of local $\mathsf{H}^2$-functions $u$ that satisfy
\begin{gather*}
u'(z_k -0) = u'(z_k + 0),\quad u(z_k +0)-u(z_k -0)=\beta_k u'(z_k\pm 0)
\end{gather*}
(the case $\beta_k=\infty$ stands for a decoupling with Neumann conditions at $z_k\pm 0$).
The existence of this model was pointed out by
A.~Grossmann, R.~H{\o}egh-Krohn, and M.~Mebkhout in \cite{GHKM80}; the first rigorous mathematical treatment of $\delta'$-interactions is due to
F.~Gesztesy and H.~Holden in \cite{GH87}.
Among the numerous subsequent contributions we emphasize the more recent papers \cite{KM10b,KM14} by A.~Kostenko and M.M.~Malamud, in which the
more elaborate case $|z_k-z_{k-1}|\to 0$ as $|k|\to \infty$ was also treated.
In these papers self-adjointness, lower semiboundedness, and spectral properties of the underlying operators were studied in detail.
Our goal and strategy are similar to \cite{HSS91} in the context of the Neumann Laplacian: we wish to construct an
operator of the form \eqref{delta'} with predefined essential spectrum; cf.~Theorem~\ref{th-HSS}.
At this point we present the main result of this section on a formal level
without giving a precise definition of the underlying operator.
This will be done during its proof; cf.~Theorem~\ref{th-BK+} for a more precise formulation of Theorem~\ref{th-BK}.
\begin{theorem} \label{th-BK}
Let $\mathfrak{S}\subset [0,\infty)$ be an arbitrary closed set such that $0\in \mathfrak{S}$. Then there exist a bounded interval $(a,b)\subset\mathbb{R}$, a sequence of points
$(z_k)_{k\in\mathbb{N}}$ in $(a,b)$, and a
sequence of positive numbers $(\beta_k)_{k\in\mathbb{N}}$ such that the operator $A_\beta$ in $\mathsf{L}^2(a,b)$ defined by the formal expression \eqref{delta'} satisfies
\begin{gather*}
\sigma_{\mathrm{ess}}(A_\beta)=\mathfrak{S}.
\end{gather*}
\end{theorem}
\begin{proof}
For the construction of the self-adjoint
Schr\"odinger operator with $\delta'$-interactions we use a similar idea as in the construction of the
rooms-and-passages domains in the previous section.
Here we split the sequence of points $(z_k)_{k\in\mathbb{N}}$ in \eqref{delta'} into two interlacing
subsequences $(x_k)_{k\in\mathbb{N}}$ and $(y_k)_{k\in\mathbb{N}}$, where $y_k$ is the midpoint of $(x_{k-1},x_k)$,
and instead of $\beta_k$ we denote the interaction strengths at the points $x_k$ by $p_k$ and at the points $y_k$ by $q_k$.
Instead of $A_\beta$ we shall write $A_{p,q}$ for the corresponding Schr\"{o}dinger operator with $\delta'$-interactions; see Step~3 of the proof.
The intervals $(x_{k-1},x_k)$ will play the role of the rooms, the interactions at the points $x_k$ will play the role of
the passages, and the interactions at the points $y_k$ will play the role of the additional walls inside the rooms.
As in the proof of Theorem~\ref{th-HSS} we fix a sequence $(s_k)_{k\in \mathbb{N}}$
such that
\begin{gather}\label{s-assumpt2}
s_k> 0\text{ and }\mathfrak{acc}((s_k)_{k\in \mathbb{N}})=
\begin{cases}
\mathfrak{S}\setminus\{0\},&0\text{ is an isolated point of }\mathfrak{S},\\
\mathfrak{S},&\text{otherwise},
\end{cases}
\end{gather}
and for each $k\in\mathbb{N}$ we choose numbers $d_k>0$ that satisfy
\begin{gather}\label{dk}
s_k < (\pi/ d_k)^2.
\end{gather}
Moreover, we can assume that $d_k$ are chosen sufficiently small, so that
\begin{gather}\label{dk+}
\sum\limits_{k\in \mathbb{N}} d_k <\infty
\end{gather}
and hence
\begin{gather}\label{dk-infty}
\lim_{k\to\infty} d_k=0
\end{gather}
holds.
Finally, we set
\begin{gather*}
x_0=0,\quad x_k=x_{k-1}+d_k,\quad
y_k={x_{k-1}+x_k\over 2},\quad \mathcal{I}_k=(x_{k-1},x_k),\,\, k\in\mathbb{N},
\end{gather*}
and we consider the interval $(a,b)$, where
\begin{gather*}
a=x_0=0\quad\text{and}\quad b=\sum\limits_{k\in \mathbb{N}} d_k.
\end{gather*}
The proof consists of four steps. In the first step we discuss the spectral properties of the Schr\"odinger operator $\mathbf{A}_{q_k,\mathcal{I}_k}$
on the interval $\mathcal{I}_k$ with a $\delta'$-interaction of strength $q_k>0$ at $y_k$
and Neumann boundary conditions at the endpoints of $\mathcal{I}_k$.
In the second step we consider the direct sum of these operators:
$$A_{\infty,q}=\bigoplus_{k\in\mathbb{N}}\mathbf{A}_{q_k,\mathcal{I}_k}.$$
Note that the Neumann conditions at $x_k\pm 0$ can be regarded as a $\delta'$-interaction of \textit{infinite} strength.
Thus $A_{\infty,q}$ corresponds to the Schr\"odinger operator on $(a,b)$
with $\delta'$-interactions of strengths $q_k$ at the points $y_k$ and $\delta'$-interactions of strength $\infty$ at the points $x_k$.
The interaction strengths $q_k$ will be adjusted in such a way that
the essential spectrum of $A_{\infty,q}$ coincides with $\mathfrak{S}$.
In fact, $\sigma_{\mathrm{ess}}(A_{\infty,q})$ is the union of the point $0$ and all
accumulation points of a sequence formed by the \textit{second} eigenvalues
of $\mathbf{A}_{q_k,\mathcal{I}_k}$.
In the third step we perturb the decoupled operator $A_{\infty,q}$ by linking the intervals $\mathcal{I}_{k+1}$ and $\mathcal{I}_k$
through a $\delta'$-interaction of a sufficiently large strength $p_k>0$ for all $k\in\mathbb{N}$;
the corresponding operator is denoted by $A_{p,q}$. We will prove in the last step that
the essential spectra of $A_{p,q}$ and $A_{\infty,q}$ coincide if the interaction strengths
$p_k$ tend to $\infty$ for $k\to\infty$ sufficiently fast.
\noindent\textit{Step~1.}
Let $q_k\in (0,\infty]$ and let $\mathbf{a}_{q_k,\mathcal{I}_k}$ be the sesquilinear form in $\mathsf{L}^2(\mathcal{I}_k)$
defined by
\begin{equation*}
\begin{split}
\mathbf{a}_{q_k,\mathcal{I}_k}[\mathbf{u},\mathbf{v}]&=\int_{\mathcal{I}_k}\mathbf{u}'\cdot \overline{\mathbf{v}'}\,\mathrm{d} x \\
&\qquad + \frac{1}{q_k}
\left(\mathbf{u}(y_k+0)-\mathbf{u}(y_k-0)\right)\overline{\left(\mathbf{v}(y_k+0)-\mathbf{v}(y_k-0)\right)},\\
\mathrm{dom}(\mathbf{a}_{q_k,\mathcal{I}_k})&=\mathsf{H}^1(\mathcal{I}_k\setminus\{y_k\});
\end{split}
\end{equation*}
for $q_k=\infty$ we use the convention $\infty^{-1}=0$.
The form $\mathbf{a}_{q_k,\mathcal{I}_k}$ is densely defined, nonnegative, and closed in $\mathsf{L}^2(\mathcal{I}_k)$. Hence
by the first representation theorem there is a unique nonnegative
self-adjoint operator $\mathbf{A}_{q_k,\mathcal{I}_k}$ in $\mathsf{L}^2(\mathcal{I}_k)$ such that
$\mathrm{dom}(\mathbf{A}_{q_k,\mathcal{I}_k})\subset\mathrm{dom}(\mathbf{a}_{q_k,\mathcal{I}_k})$ and
\begin{equation*}
(\mathbf{A}_{q_k,\mathcal{I}_k} \mathbf{u}, \mathbf{v})_{\mathsf{L}^2(\mathcal{I}_k)}=\mathbf{a}_{q_k,\mathcal{I}_k}[\mathbf{u},\mathbf{v}],\quad \mathbf{u}\in \mathrm{dom}(\mathbf{A}_{q_k,\mathcal{I}_k}),\,\,\mathbf{v}\in\mathrm{dom}(\mathbf{a}_{q_k,\mathcal{I}_k}).
\end{equation*}
Integration by parts shows that
$\mathrm{dom}(\mathbf{A}_{q_k,\mathcal{I}_k})$ consists of all those functions $\mathbf{u}\in \mathsf{H}^2(\mathcal{I}_k\setminus\{y_k\})$ that satisfy
the $\delta'$-jump condition
\begin{equation*}
\mathbf{u}'(y_k -0) = \mathbf{u}'(y_k + 0)=\frac{1}{q_k}\left( \mathbf{u}(y_k +0)-\mathbf{u}(y_k -0)\right)
\end{equation*}
at the point $y_k$ and Neumann boundary conditions
\begin{equation*}
\mathbf{u}'(x_{k-1}) = \mathbf{u}'(x_k)=0
\end{equation*}
at the endpoints of $\mathcal{I}_k$. Furthermore,
the action of $\mathbf{A}_{q_k,\mathcal{I}_k}$ is given by
$$
(\mathbf{A}_{q_k,\mathcal{I}_k}\mathbf{u} )\!\restriction_{(x_{k-1},y_k)}= -\left( \mathbf{u}\!\restriction_{(x_{k-1},y_k)}\right)'',\quad
(\mathbf{A}_{q_k,\mathcal{I}_k}\mathbf{u} )\!\restriction_{(y_k,x_{k})}= -\left( \mathbf{u}\!\restriction_{(y_k,x_k)}\right)''.
$$
The spectrum of the self-adjoint operator $\mathbf{A}_{q_k,\mathcal{I}_k}$ is purely discrete. We use the notation
$\left\{\lambda_j(\mathbf{A}_{q_k,\mathcal{I}_k})\right\}_{j\in\mathbb{N}}$ for the corresponding eigenvalues counted with multiplicities and ordered
as a nondecreasing sequence. Some properties of these eigenvalues are collected in the next lemma. Here we will also make use of the Neumann Laplacian
on $\mathcal{I}_k$, defined as usual via the form
\begin{equation}\label{neum22}
\mathbf{a}_{0,\mathcal{I}_k}[\mathbf{u},\mathbf{v}]=(\mathbf{u}',\mathbf{v}')_{\mathsf{L}^2(\mathcal{I}_k)},\quad \mathrm{dom}(\mathbf{a}_{0,\mathcal{I}_k})=\mathsf{H}^1(\mathcal{I}_k),
\end{equation}
and we shall denote this operator by $\mathbf{A}_{0,\mathcal{I}_k}$. To avoid possible confusion we emphasize that the form domain $\mathsf{H}^1(\mathcal{I}_k)$
of the Neumann Laplacian $\mathbf{A}_{0,\mathcal{I}_k}$ is smaller than the form domain $\mathsf{H}^1(\mathcal{I}_k\setminus\{y_k\})$ of the
operators $\mathbf{A}_{q_k,\mathcal{I}_k}$ with $q_k\in (0,\infty]$. Furthermore, we mention already here that the self-adjoint operator $\mathbf{A}_{\infty,\mathcal{I}_k}$ is the direct sum of the
Neumann Laplacians on the intervals $(x_{k-1},y_k)$ and $(y_k,x_k)$.
\begin{lemma}\label{lemma:EVprop}
For each $j\in\mathbb{N}$
\begin{equation}\label{cont-monot}
\begin{array}{l}
\text{the function }(0,\infty]\ni q_k\mapsto \lambda_j(\mathbf{A}_{ q_k,\mathcal{I}_k})\text{ is}\\
\text{monotonically decreasing and continuous,}
\end{array}
\end{equation}
and one has
\begin{equation}\label{0}
\lim\limits_{q_k\to +0}\lambda_j(\mathbf{A}_{ q_k,\mathcal{I}_k})=
\lambda_j(\mathbf{A}_{0,\mathcal{I}_k}).
\end{equation}
\end{lemma}
\begin{proof}[Proof of Lemma~\ref{lemma:EVprop}]
The monotonicity of the function \eqref{cont-monot}
follows from the min-max principle and the monotonicity of the function
$q_k\mapsto \mathbf{a}_{q_k,\mathcal{I}_k}[\mathbf{u},\mathbf{u}]$
for each fixed $\mathbf{u}\in\mathsf{H}^1(\mathcal{I}_k\setminus\{y_k\})$.
To prove the continuity of the function \eqref{cont-monot} consider $q_k,\widehat q_k\in (0,\infty]$,
$\mathbf{f},\mathbf{g}\in\mathsf{L}^2(\mathcal{I}_k)$, and set $\mathbf{u}=(\mathbf{A}_{q_k,\mathcal{I}_k}+\mathrm{Id})^{-1}\mathbf{f}$ and
$\mathbf{v}=(\mathbf{A}_{\widehat q_k,\mathcal{I}_k}+\mathrm{Id})^{-1}\mathbf{g}$. Then we have
\begin{equation}\label{res:diff}
\begin{split}
&\bigl((\mathbf{A}_{q_k,\mathcal{I}_k}+\mathrm{Id})^{-1}\mathbf{f}-(\mathbf{A}_{\widehat q_k,\mathcal{I}_k}+\mathrm{Id})^{-1}\mathbf{f},\mathbf{g}\bigr)_{\mathsf{L}^2(\mathcal{I}_k)}\\
&\quad=\bigl(\mathbf{u},(\mathbf{A}_{\widehat q_k,\mathcal{I}_k}+\mathrm{Id})\mathbf{v}\bigr)_{\mathsf{L}^2(\mathcal{I}_k)}-\bigl((\mathbf{A}_{q_k,\mathcal{I}_k}+\mathrm{Id})\mathbf{u}, \mathbf{v}\bigr)_{\mathsf{L}^2(\mathcal{I}_k)}\\
&\quad=\mathbf{a}_{\widehat q_k,\mathcal{I}_k}[\mathbf{u},\mathbf{v}]-\mathbf{a}_{q_k,\mathcal{I}_k}[\mathbf{u},\mathbf{v}]\\
&\quad=\left(\frac{1}{\widehat q_k}-\frac{1}{q_k}\right)
\left(\mathbf{u}(y_k+0)-\mathbf{u}(y_k-0)\right)\overline{\left(\mathbf{v}(y_k+0)-\mathbf{v}(y_k-0)\right)}.
\end{split}
\end{equation}
With $\mathcal{I}_k^+=(y_k,x_k)$ and $\mathcal{I}_k^-=(x_{k-1},y_k)$ one has
the standard trace estimate (see, e.g., \cite[Lemma 1.3.8]{BKbook})
\begin{gather*}
|\mathbf{u}(y_k\pm 0)|^2\leq
\frac{d_k}{2}\|\mathbf{u}'\|^2_{\mathsf{L}^2(\mathcal{I}^\pm_k)}+
\frac{4}{d_k}\|\mathbf{u}\|^2_{\mathsf{L}^2(\mathcal{I}^\pm_k)},\quad \mathbf{u}\in\mathsf{H}^1(\mathcal{I}^\pm_k),
\end{gather*}
and with $C_k=\max\{d_k,8 d_k^{-1}\}$ we estimate
\begin{equation}\label{okpr}
\begin{split}
\bigl\vert\mathbf{u}(y_k+0)-\mathbf{u}(y_k-0) \bigr\vert^2 &\leq 2 \vert\mathbf{u}(y_k+0)\vert ^2+ 2 \vert\mathbf{u}(y_k-0)\vert ^2\\
&\leq d_k\|\mathbf{u}'\|^2_{\mathsf{L}^2(\mathcal{I}_k\setminus\{y_k\})}+
8d^{-1}_k\|\mathbf{u}\|^2_{\mathsf{L}^2(\mathcal{I}_k)}\\
&\leq C_k \bigl(\mathbf{a}_{q_k,\mathcal{I}_k}[\mathbf{u},\mathbf{u}]+\|\mathbf{u}\|^2_{\mathsf{L}^2(\mathcal{I}_k)}\bigr)\\
&= C_k \bigl((\mathbf{A}_{q_k,\mathcal{I}_k}+\mathrm{Id})\mathbf{u}, \mathbf{u}\bigr)_{\mathsf{L}^2(\mathcal{I}_k)}\\
&= C_k \bigl(\mathbf{f},(\mathbf{A}_{q_k,\mathcal{I}_k}+\mathrm{Id})^{-1}\mathbf{f}\bigr)_{\mathsf{L}^2(\mathcal{I}_k)}\\
& \leq C_k \|\mathbf{f}\|_{\mathsf{L}^2(\mathcal{I}_k)} \|(\mathbf{A}_{q_k,\mathcal{I}_k}+\mathrm{Id})^{-1}\mathbf{f}\|_{\mathsf{L}^2(\mathcal{I}_k)}\\
& \leq C_k \|\mathbf{f}\|^2_{\mathsf{L}^2(\mathcal{I}_k)},
\end{split}
\end{equation}
where we have used $q_k>0$ in the third estimate.
In the same way we get $\vert\mathbf{v}(y_k+0)-\mathbf{v}(y_k-0)\vert^2\leq C_k \|\mathbf{g}\|^2_{\mathsf{L}^2(\mathcal{I}_k)}$. Hence \eqref{res:diff}
leads to the estimate
\begin{equation*}
\begin{split}
\bigl|\bigl((\mathbf{A}_{q_k,\mathcal{I}_k}+\mathrm{Id})^{-1}\mathbf{f}&-(\mathbf{A}_{\widehat q_k,\mathcal{I}_k}+\mathrm{Id})^{-1}\mathbf{f},\mathbf{g}\bigr)_{\mathsf{L}^2(\mathcal{I}_k)}\bigr| \\
&\qquad\leq C_k
\left|\frac{1}{\widehat q_k}-\frac{1}{q_k}\right| \|\mathbf{f}\|_{\mathsf{L}^2(\mathcal{I}_k)} \|\mathbf{g}\|_{\mathsf{L}^2(\mathcal{I}_k)},
\end{split}
\end{equation*}
and from this we conclude
\begin{gather}\label{nrc}
\|(\mathbf{A}_{q_k,\mathcal{I}_k}+\mathrm{Id})^{-1} -(\mathbf{A}_{\widehat q_k,\mathcal{I}_k}+\mathrm{Id})^{-1}\|\to 0
\text{ as }\widehat q_k\to q_k.
\end{gather}
It is well-known (see, e.g., \cite[Corollary~A.15]{P06}) that the norm-resolvent convergence in \eqref{nrc} implies the convergence of the eigenvalues, namely
for each $j\in\mathbb{N}$ we obtain
$$\lambda_j(\mathbf{A}_{\widehat q_k,\mathcal{I}_k})\to \lambda_j(\mathbf{A}_{ q_k,\mathcal{I}_k})
\quad\text{as}\quad\widehat q_k\to q_k,$$
and hence the function in \eqref{cont-monot} is continuous.
It remains to prove \eqref{0}. For this we will use Theorem~\ref{th-s+} from Appendix~\ref{appb}.
Note first that
the set
$$\left\{\mathbf{u}\in \mathsf{H}^1(\mathcal{I}_k\setminus\{y_k\})=\mathrm{dom}(\mathbf{a}_{ q_k,\mathcal{I}_k}):\ \sup_{q_k>0}\mathbf{a}_{ q_k,\mathcal{I}_k}[\mathbf{u},\mathbf{u}]<\infty\right\}$$
coincides with the form domain $\mathrm{dom}(\mathbf{a}_{0,\mathcal{I}_k})=\mathsf{H}^1(\mathcal{I}_k)$ of the Neumann Laplacian in \eqref{neum22}. Moreover,
for each $\mathbf{u},\mathbf{v}$ from this set one has $$\lim_{q_k\to 0}\mathbf{a}_{q_k,\mathcal{I}_k}[\mathbf{u},\mathbf{v}]=\mathbf{a}_{0,\mathcal{I}_k}[\mathbf{u},\mathbf{v}].$$
Since the spectra of the operators $\mathbf{A}_{q_k,\mathcal{I}_k}$ and $\mathbf{A}_{0,\mathcal{I}_k}$ are purely discrete,
Theorem~\ref{th-s+} shows \eqref{0}.
\end{proof}
Now we return to the spectral properties of the self-adjoint operators $\mathbf{A}_{ q_k,\mathcal{I}_k}$. In particular, the eigenvalues
of the Neumann Laplacian $\mathbf{A}_{0,\mathcal{I}_k}$ on $\mathcal{I}_k$ and the direct sum of the Neumann Laplacians $\mathbf{A}_{\infty,\mathcal{I}_k}$
on $(x_{k-1},y_k)$ and $(y_k,x_k)$ can be easily calculated. For our purposes it suffices to note that
\begin{gather}\label{1}
\lambda_1(\mathbf{A}_{0,\mathcal{I}_k})=\lambda_1(\mathbf{A}_{\infty,\mathcal{I}_k})=0,\\\label{2}
\lambda_2(\mathbf{A}_{0,\mathcal{I}_k})=\left({\pi/ d_k}\right)^2,\quad \lambda_2(\mathbf{A}_{\infty,\mathcal{I}_k})=0,\\
\label{3}
\lambda_3(\mathbf{A}_{\infty,\mathcal{I}_k})=\left({2\pi/ d_k}\right)^2.
\end{gather}
It follows from \eqref{cont-monot}, \eqref{1}, and \eqref{3} that
for any $q_k\in(0,\infty]$ we have
\begin{gather}\label{13+}
\lambda_1(\mathbf{A}_{q_k,\mathcal{I}_k})=0,\quad
\lambda_3(\mathbf{A}_{q_k,\mathcal{I}_k})\geq\left({2\pi/ d_k}\right)^2.
\end{gather}
Also, using \eqref{cont-monot}, \eqref{0}, \eqref{2},
and taking into account that $0<s_k < (\pi/ d_k)^2$ from \eqref{dk}, we conclude that there exists $q_k> 0$ such that
\begin{gather}
\label{2+}
\lambda_2(\mathbf{A}_{q_k,\mathcal{I}_k})=s_k,\quad k\in\mathbb{N}.
\end{gather}
From now on we fix $q_k>0$ for which ^\varepsilonqref{2+} holds.
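The value of $q_k$ can also be made explicit. After shifting $\mathcal{I}_k$ to $(0,d)$ with $d=d_k$, the ansatz $\mathbf{u}(x)=A\cos(\sqrt{\lambda}\,x)$ on $(0,d/2)$ and $\mathbf{u}(x)=-A\cos(\sqrt{\lambda}\,(d-x))$ on $(d/2,d)$ satisfies the Neumann conditions at the endpoints, and the jump condition at the midpoint reduces to the secular equation $\sqrt{\lambda}\,\tan(\sqrt{\lambda}\,d/2)=2/q$ for the eigenvalues in $(0,(\pi/d)^2)$. The Python sketch below (our numerical illustration based on this secular equation; the function names are ours and not part of the proof) inverts the relation for a target $s$ and checks the monotonicity from Lemma~\ref{lemma:EVprop}:

```python
import math

def q_for_target(s, d):
    """Strength q with lambda_2(A_{q,I}) = s on an interval of length d
    (delta'-interaction at the midpoint); requires 0 < s < (pi/d)**2."""
    assert 0 < s < (math.pi / d) ** 2
    r = math.sqrt(s)
    return 2.0 / (r * math.tan(r * d / 2))  # secular equation solved for q

def lambda2(q, d, eps=1e-12):
    """Invert the secular equation for lambda_2 by bisection on (0, (pi/d)**2)."""
    f = lambda lam: math.sqrt(lam) * math.tan(math.sqrt(lam) * d / 2) - 2.0 / q
    lo, hi = eps, (math.pi / d) ** 2 - eps
    for _ in range(200):                    # f is increasing on this interval
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return (lo + hi) / 2

d = 0.5
for s in [1.0, 10.0, 30.0]:                 # targets below (pi/d)**2 ~ 39.5
    assert abs(lambda2(q_for_target(s, d), d) - s) < 1e-6
assert lambda2(2.0, d) < lambda2(1.0, d)    # lambda_2 decreases in q
```

Consistently with \eqref{0} and \eqref{2}, the secular equation gives $\lambda_2\to(\pi/d)^2$ as $q\to+0$ and $\lambda_2\to 0$ as $q\to\infty$.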
\noindent\textit{Step~2.}
Now we consider the direct sum
\begin{equation}\label{ai8}
A_{\infty,q}=\bigoplus_{k\in\mathbb{N}} \mathbf{A}_{q_k,\mathcal{I}_k}
\end{equation}
of the nonnegative self-adjoint operators $\mathbf{A}_{q_k,\mathcal{I}_k}$ in the space
$$
\mathsf{L}^2(a,b)=\bigoplus_{k=1}^\infty \mathsf{L}^2(\mathcal{I}_k).
$$
In a more explicit form $A_{\infty,q}$ is given by
\begin{equation*}
\begin{split}
(A_{\infty,q}u)\!\restriction_{\mathcal{I}_k}&=\mathbf{A}_{q_k,\mathcal{I}_k}\mathbf{u}_k,\\
\mathrm{dom}(A_{\infty,q})&=
\biggl\{
u\in\mathsf{L}^2(a,b):\ \mathbf{u}_k\in\mathrm{dom}(\mathbf{A}_{q_k,\mathcal{I}_k}),\\
&\qquad\qquad\qquad\qquad
\sum\limits_{k\in\mathbb{N}}\|\mathbf{A}_{q_k,\mathcal{I}_k}\mathbf{u}_k\|^2_{\mathsf{L}^2(\mathcal{I}_k)}<\infty
\biggr\},
\end{split}
\end{equation*}
where $\mathbf{u}_k:=u\!\restriction_{\mathcal{I}_k}$ stands for
the restriction of the function $u$ onto the interval $\mathcal{I}_k$. Note that the corresponding sesquilinear form $\mathfrak{a}_{\infty,q}$ associated
with $A_{\infty,q}$ is
\begin{equation*}
\begin{split}
\mathfrak{a}_{\infty,q}[u,v]&=\sum\limits_{k\in\mathbb{N}}\mathbf{a}_{q_k,\mathcal{I}_k}[\mathbf{u}_k,\mathbf{v}_k],\\
\mathrm{dom}(\mathfrak{a}_{\infty,q})&=
\left\{
u\in\mathsf{L}^2(a,b):\ \mathbf{u}_k\in\mathrm{dom}(\mathbf{a}_{q_k,\mathcal{I}_k}),\
\sum\limits_{k\in\mathbb{N}}\mathbf{a}_{q_k,\mathcal{I}_k}[\mathbf{u}_k,\mathbf{u}_k] <\infty
\right\}.
\end{split}
\end{equation*}
It is clear that the operator $A_{\infty,q}$ in \eqref{ai8} is self-adjoint and nonnegative in $\mathsf{L}^2(a,b)$.
Furthermore, it is not difficult to check that
\begin{gather*}
\sigma_{\mathrm{ess}}(A_{\infty,q})
=
\mathfrak{acc}\big((\lambda_j(\mathbf{A}_{q_k,\mathcal{I}_k}))_{j,k\in\mathbb{N}}\big)
\end{gather*}
holds. Taking into account
that $0\in \mathfrak{S}$ and using \eqref{s-assumpt2}, \eqref{dk-infty}, \eqref{13+}, \eqref{2+}, we arrive at
\begin{gather*}
\sigma_{\mathrm{ess}}(A_{\infty,q})
=
\{0\}\cup\mathfrak{acc}\big((s_k)_{k\in\mathbb{N}}\big)
= \mathfrak{S}.
\end{gather*}
\noindent
\textit{Step~3.}
In this step we perturb the decoupled operator $A_{\infty,q}$ by linking the intervals $\mathcal{I}_{k+1}$ and $\mathcal{I}_k$
through a $\delta'$-interaction of sufficiently large strength $p_k>0$ for all $k\in\mathbb{N}$.
The corresponding self-adjoint operator will be denoted by $A_{p,q}$.
More precisely, for $p_k>0$, $k\in\mathbb{N}$, we consider the sesquilinear form
$\mathfrak{a}_{p,q}$
\begin{equation*}
\begin{split}
\mathfrak{a}_{p,q}[u,v]&=\sum\limits_{k\in\mathbb{N}}\mathbf{a}_{q_k,\mathcal{I}_k}[\mathbf{u}_k,\mathbf{v}_k]\\
&\quad +\sum\limits_{k\in\mathbb{N}} \frac{1}{p_k}
\left(u(x_k+0)-u(x_k-0)\right)
\overline{\left(v(x_k+0)-v(x_k-0)\right)},\\
\mathrm{dom}(\mathfrak{a}_{p,q})&=\left\{u\in \mathsf{L}^2(a,b):\ \mathbf{u}_k\in\mathrm{dom}(\mathbf{a}_{q_k,\mathcal{I}_k}),\, \mathfrak{a}_{p,q}[u,u]<\infty\right\},
\end{split}
\end{equation*}
in $\mathsf{L}^2(a,b)$. This form is nonnegative and densely defined in $\mathsf{L}^2(a,b)$.
Moreover, the form is closed by \cite[Lemma~2.6]{KM14} and the corresponding nonnegative self-adjoint operator $A_{p,q}$
is given by
\begin{equation*}
\begin{split}
(A_{p,q}u)\!\restriction_{(a,b)\setminus\mathcal{Z}}&=-(u\!\restriction_{(a,b)\setminus\mathcal{Z}})'',\\
\mathrm{dom}(A_{p,q})&=\bigg\{u\in\mathsf{H}^2((a,b)\setminus\mathcal{Z}):\ u'(a)=0,\\
&\qquad u'(y_k +0) = u'(y_k - 0)=\frac{1}{q_k}\left( u(y_k +0)-u(y_k -0)\right),\\
&\qquad u'(x_k +0) = u'(x_k - 0)=\frac{1}{p_k}\left( u(x_k +0)-u(x_k -0)\right)\bigg\},
\end{split}
\end{equation*}
where $\mathcal{Z}=\left\{x_k:k\in\mathbb{N}\right\}\cup\left\{y_k: k\in\mathbb{N}\right\}$; cf.~\cite[Lemma~2.6 and Proposition~2.1]{KM14}.
Now consider
$$
\rho_k:=\max\left\{{1\over p_k d_k},\ {1\over p_kd_{k+1}}\right\},\quad k\in\mathbb{N},
$$
and assume that
\begin{gather}\label{rho}
\rho_k\to 0\text{ as }k\to\infty.
\end{gather}
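A simple admissible choice satisfying \eqref{rho} (an illustrative assumption of ours, not prescribed by the construction) is $p_k=k/\min\{d_k,d_{k+1}\}$, for which $\rho_k=1/k\to 0$. A quick numerical check in Python:

```python
def rho(p, d):
    """rho_k = max(1/(p_k d_k), 1/(p_k d_{k+1})) as defined above
    (0-based lists: p[k] couples the rooms of widths d[k] and d[k+1])."""
    return [max(1 / (pk * dk), 1 / (pk * dk1))
            for pk, dk, dk1 in zip(p, d, d[1:])]

d = [2.0 ** -(k + 1) for k in range(50)]                 # summable widths
p = [(k + 1) / min(d[k], d[k + 1]) for k in range(49)]   # p_k = k / min(d_k, d_{k+1})
r = rho(p, d)
assert all(abs(rk - 1 / (k + 1)) < 1e-9 for k, rk in enumerate(r))  # rho_k = 1/k
assert r[-1] < r[0]                                      # rho_k decreases to 0
```

Any other choice with $p_k\min\{d_k,d_{k+1}\}\to\infty$ works just as well.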
\noindent
\textit{Step~4.}
In this step we verify
\begin{equation}\label{kmt}
\sigma_{\mathrm{ess}}(A_{p,q})=\sigma_{\mathrm{ess}}(A_{\infty,q})
\end{equation}
by showing that
the difference of resolvents
$$T_{p,q}:=(A_{p,q} + \mathrm{Id})^{-1}-(A_{\infty,q} + \mathrm{Id})^{-1}$$
is a compact operator.
Then \eqref{kmt} is an immediate consequence of
the Weyl theorem, see, e.g., \cite[Theorem XIII.14]{RS78}. We remark that in a similar situation a related perturbation result
and the invariance of the essential spectrum were shown in \cite[Theorem~1.3]{KM14}.
In the following let $\kappa_n=\sup_{k\in [n,\infty)\,\cap\,\mathbb{N}}\rho_k$. Then
it follows from \eqref{rho} that
\begin{gather}
\label{kappa}
\kappa_n<\infty\text{ for each }n\in\mathbb{N}\text{\quad and\quad }
\kappa_n\to 0\text{ as }n\to\infty.
\end{gather}
In a first step we claim that
\begin{gather}\label{domdom+}
\mathrm{dom}(\mathfrak{a}_{\infty,q})=\mathrm{dom}(\mathfrak{a}_{p,q}).
\end{gather}
In fact, the inclusion $\mathrm{dom}(\mathfrak{a}_{p,q})\subset\mathrm{dom}(\mathfrak{a}_{\infty,q})$
follows directly from the definition of the above form domains.
To prove the reverse inclusion
we have to show that
\begin{gather}\label{sum:infty}
\sum\limits_{k\in\mathbb{N}}\frac{1}{p_k}
\left|u(x_k+0)-u(x_k-0)\right|^2<\infty
\end{gather}
for
$u\in\mathrm{dom}(\mathfrak{a}_{\infty,q})$.
Using the standard trace estimates (see, e.g., \cite[Lemma~1.3.8]{BKbook})
\begin{equation*}
\begin{split}
|u(x_k+0)|^2&\leq d_{k+1}\|u'\|^2_{\mathsf{L}^2(\mathcal{I}_{k+1})}+ \frac{2}{d_{k+1}}\|u\|^2_{\mathsf{L}^2(\mathcal{I}_{k+1})},\\
|u(x_k-0)|^2&\leq d_{k}\|u'\|^2_{\mathsf{L}^2(\mathcal{I}_k)}+ \frac{2}{d_k}\|u\|^2_{\mathsf{L}^2(\mathcal{I}_k)},
\end{split}
\end{equation*}
and taking into account that $\sup_{k\in\mathbb{N}}d_k<b-a$ and $q_k>0$ we obtain
\begin{equation}\label{form:encl}
\begin{split}
&\sum\limits_{k\in\mathbb{N}}\frac{1}{p_k}
\left|u(x_k+0)-u(x_k-0)\right|^2 \\
&\quad \leq 2\sum\limits_{k\in\mathbb{N}}\frac{1}{p_k}|u(x_k+0)|^2+2\sum\limits_{k\in\mathbb{N}}\frac{1}{p_k}|u(x_k-0)|^2\\
&\quad
\leq 2\sum\limits_{k\in\mathbb{N}}\frac{1}{p_k}\left(
d_{k+1}\|u'\|^2_{\mathsf{L}^2(\mathcal{I}_{k+1})} + \frac{2}{d_{k+1}}\|u\|^2_{\mathsf{L}^2(\mathcal{I}_{k+1})} \right)\\
&\qquad\qquad + 2\sum\limits_{k\in\mathbb{N}}\frac{1}{p_k}\left(
d_k\|u'\|^2_{\mathsf{L}^2(\mathcal{I}_k)} + \frac{2}{d_k}\|u\|^2_{\mathsf{L}^2(\mathcal{I}_k)} \right)\\
&\quad\leq 2\kappa_1 \sum\limits_{k\in\mathbb{N}} d_{k+1}^2\|u'\|^2_{\mathsf{L}^2(\mathcal{I}_{k+1})} + 4\kappa_1 \sum\limits_{k\in\mathbb{N}} \|u\|^2_{\mathsf{L}^2(\mathcal{I}_{k+1})} \\
&\qquad\qquad +2\kappa_1 \sum\limits_{k\in\mathbb{N}} d_{k}^2\|u'\|^2_{\mathsf{L}^2(\mathcal{I}_{k})} + 4\kappa_1 \sum\limits_{k\in\mathbb{N}} \|u\|^2_{\mathsf{L}^2(\mathcal{I}_{k})} \\
&\quad \le
4\kappa_1 (b-a)^2 \|u'\|^2_{\mathsf{L}^2(a,b)}+8\kappa_1 \|u\|^2_{\mathsf{L}^2(a,b)}\\
&\quad\leq
4\kappa_1 (b-a)^2\mathfrak{a}_{\infty,q}[u,u]+8\kappa_1 \|u\|^2_{\mathsf{L}^2(a,b)},
\end{split}
\end{equation}
and thus \eqref{sum:infty} holds. We have shown \eqref{domdom+}.
Now let $f,g\in\mathsf{L}^2(a,b)$ be arbitrary and consider the functions
\begin{equation*}
\begin{split}
u&=(A_{p,q}+\mathrm{Id})^{-1}f\in \mathrm{dom}(A_{p,q})\subset\mathrm{dom}(\mathfrak{a}_{p,q}),\\
v&=(A_{\infty,q}+\mathrm{Id})^{-1}g\in \mathrm{dom}(A_{\infty,q})\subset\mathrm{dom}(\mathfrak{a}_{\infty,q}).
\end{split}
\end{equation*}
Using \eqref{domdom+}
and the fact that $(A_{\infty,q}+\mathrm{Id})^{-1}$ is a self-adjoint operator we get
\begin{equation}\label{res-dif}
\begin{split}
\left(T_{p,q} f,g\right)_{\mathsf{L}^2(a,b)}&=
\left((A_{p,q}+\mathrm{Id})^{-1}f - (A_{\infty,q}+\mathrm{Id})^{-1} f,g\right)_{\mathsf{L}^2(a,b)}\\
&=
\left(u,(A_{\infty,q}+\mathrm{Id})v\right)_{\mathsf{L}^2(a,b)}-\left((A_{p,q}+\mathrm{Id})u, v\right)_{\mathsf{L}^2(a,b)}\\
&=\mathfrak{a}_{\infty,q}[u,v] - \mathfrak{a}_{p,q}[u,v]\\
&=-\sum\limits_{k\in\mathbb{N}}\frac{1}{p_k} (u(x_k+0)-u(x_k-0))\overline{(v(x_k+0)-v(x_k-0))}.
\end{split}
\end{equation}
Next we introduce the operators $\Gamma_{p},\Gamma_{\infty}:\mathsf{L}^2(a,b)\to l^2(\mathbb{N})$ defined by
\begin{equation*}
\begin{split}
(\Gamma_{p} f)_k&:={((A_{p,q}+\mathrm{Id})^{-1}f)(x_k+0)-((A_{p,q}+\mathrm{Id})^{-1}f)(x_k-0)\over \sqrt{p_k}},\\
(\Gamma_{\infty} g)_k&:={((A_{\infty,q}+\mathrm{Id})^{-1}g)(x_k+0)-((A_{\infty,q}+\mathrm{Id})^{-1}g)(x_k-0)\over \sqrt{p_k}},
\end{split}
\end{equation*}
on their natural domains
\begin{equation*}
\begin{split}
\mathrm{dom}(\Gamma_{p})&=\left\{f\in\mathsf{L}^2(a,b):\ \Gamma_{p} f\in l^2(\mathbb{N})\right\},\\
\mathrm{dom}(\Gamma_{\infty})&=\left\{g\in\mathsf{L}^2(a,b):\ \Gamma_{\infty} g\in l^2(\mathbb{N})\right\}.
\end{split}
\end{equation*}
Note that $\mathrm{dom}(\Gamma_{p})$ coincides with the whole space $\mathsf{L}^2(a,b)$; this follows immediately from the inclusion $\ran\bigl((A_{p,q}+\mathrm{Id})^{-1}\bigr)\subset \mathrm{dom}(\mathfrak{a}_{p,q})$.
Let us prove that the operator
$\Gamma_p$ is compact.
For this purpose we introduce the finite rank operators
$$
\Gamma_p^n:\mathsf{L}^2(a,b)\to l^2(\mathbb{N}),\qquad (\Gamma_p^n f)_k = \begin{cases} (\Gamma_p f)_k, & k\le n,\\
0, & k> n. \end{cases}
$$
Let $f\in\mathsf{L}^2(a,b)=\mathrm{dom}(\Gamma_p)$ and $u=(A_{p,q}+\mathrm{Id})^{-1}f$.
Using the same arguments as in the proofs of \eqref{form:encl} and \eqref{okpr} we
obtain
\begin{equation}\label{f-est}
\begin{split}
\|\Gamma_p^n f - \Gamma_p f\|^2_{l^2(\mathbb{N})}&=
\sum\limits_{k:\,k>n}\frac{1}{p_k}|u(x_k+0)-u(x_k-0)|^2\\
&\leq 4\kappa_{n+1} (b-a)^2 \left(\mathfrak{a}_{p,q}[u,u] + \|u\|^2_{\mathsf{L}^2(a,b)}\right)+8\kappa_{n+1} \|u\|^2_{\mathsf{L}^2(a,b)} \\
&=4\kappa_{n+1} (b-a)^2(f,u)_{\mathsf{L}^2(a,b)} +8\kappa_{n+1} \|u\|^2_{\mathsf{L}^2(a,b)}\\
&\leq (4\kappa_{n+1} (b-a)^2 +8\kappa_{n+1}) \|f\|^2_{\mathsf{L}^2(a,b)}
\end{split}
\end{equation}
and hence it follows from \eqref{kappa} that $\|\Gamma_p^n - \Gamma_p\|\to 0$ as $n\to\infty$. Since the $\Gamma_p^n$ are finite rank operators
we conclude that the operator $\Gamma_p$ is compact.
Furthermore, it is easy to see that $\Gamma_\infty$ is a bounded operator defined on $\mathsf{L}^2(a,b)$.
Indeed, for $g\in\mathsf{L}^2(a,b)$ and $v=(A_{\infty,q}+\mathrm{Id})^{-1}g$ one verifies in the same way as in
\eqref{form:encl} and \eqref{f-est} that
\begin{equation*}
\begin{split}
\|\Gamma_\infty g\|^2_{l^2(\mathbb{N})}&=
\sum\limits_{k\in\mathbb{N}}\frac{1}{p_k}|v(x_k+0)-v(x_k-0)|^2\\
&\le
(4\kappa_1 (b-a)^2 +8\kappa_1) \|g\|^2_{\mathsf{L}^2(a,b)}.
\end{split}
\end{equation*}
Now \eqref{res-dif} can be rewritten in the form
$$(T_{p,q}f,g)_{\mathsf{L}^2(a,b)}=-(\Gamma_p f,\Gamma_\infty g)_{l^2(\mathbb{N})},\qquad f,g\in\mathsf{L}^2(a,b),$$
and hence we have
$T_{p,q} =-(\Gamma_\infty)^*\Gamma_p.$ Since $\Gamma_p$ is compact and
$\Gamma_\infty$ is bounded (thus $(\Gamma_\infty)^*$ is also bounded) we conclude that
$T_{p,q}$ is compact.
\end{proof}
For the convenience of the reader we now formulate Theorem~\ref{th-BK} in a more precise form.
\begin{theorem}\label{th-BK+}
Let $\mathfrak{S}\subset [0,\infty)$ be an arbitrary closed set such that $0\in \mathfrak{S}$ and choose
the sequences $(s_k)_{k\in \mathbb{N}}$ and $(d_k)_{k\in \mathbb{N}}$ as in \eqref{s-assumpt2}--\eqref{dk+}.
Let
$(q_k)_{k\in \mathbb{N}}$ and $(p_k)_{k\in \mathbb{N}}$ be sequences such that \eqref{2+} and \eqref{rho} hold. Then the self-adjoint
Schr\"odinger operator $A_{p,q}$ with $\delta'$-interactions of strengths $(q_k)_{k\in \mathbb{N}}$ and $(p_k)_{k\in \mathbb{N}}$ at the points $(y_k)_{k\in \mathbb{N}}$ and $(x_k)_{k\in \mathbb{N}}$,
respectively, satisfies
$$\sigma_{\mathrm{ess}}(A_{p,q})=\mathfrak{S}.$$
\end{theorem}
At the end of this section we note that there exist many other methods for the construction of Schr\"odinger operators
with predefined spectral properties. For example, F.~Gesztesy, W.~Karwowski, and Z.~Zhao constructed in \cite{GKZ92a,GKZ92b} a smooth
potential $V$ (which is a limit of suitably chosen
$N$-soliton solutions of the Korteweg--de Vries equation as $N\to\infty$) such that the Schr\"odinger operator $H = -{\mathrm{d}^2\over \mathrm{d} x^2} + V$ has purely
absolutely continuous spectrum $\mathbb{R}_+$ and a prescribed sequence of points in $\mathbb{R}_-$ is contained in the set
of eigenvalues of $H$.
\section{Essential spectra of self-adjoint extensions of symmetric operators\label{sec3}}
The aim of this slightly more abstract section is to discuss some possible spectral
properties of self-adjoint extensions of a given symmetric operator in a separable Hilbert space. In a similar context the existence of self-adjoint extensions
with prescribed point spectrum, absolutely continuous spectrum, and singular continuous spectrum in spectral gaps of a fixed underlying symmetric operator
was discussed in \cite{ABMN05,ABN98,B04,BMN06,BN95,BN96,BNW93}; see also \cite{M79} for a related result on prescribed eigenvalue asymptotics or, e.g., the earlier contributions \cite{AI71,G69,G70,I71}. Our main observation here is the fact that for a symmetric operator with infinite defect numbers
one can construct self-adjoint extensions with prescribed essential spectrum; cf. Theorem~\ref{essit} below.
In the following let ${\mathcal H}$ be a separable (infinite dimensional) complex Hilbert space with scalar product $(\cdot,\cdot)$. Recall that a linear operator $S$ in ${\mathcal H}$
is said to be {\it symmetric} if
\begin{equation*}
(Sf,g)=(f,Sg),\qquad f,g\in\mathrm{dom} (S).
\end{equation*}
We point out that a symmetric operator is in general not self-adjoint. More precisely, if the domain $\mathrm{dom} (S)$ of $S$ is dense in ${\mathcal H}$ then
the adjoint $S^*$ of the operator $S$ is given by
\begin{equation*}
\begin{split}
S^*h&=k,\\
\mathrm{dom} (S^*)&=\bigl\{h\in{\mathcal H}:\exists\,k\in{\mathcal H}\text{ such that } (Sf,h)=(f,k)\text{ for all }f\in\mathrm{dom} (S)\bigr\},
\end{split}
\end{equation*}
and the fact that $S$ is symmetric is equivalent to the inclusion $S\subset S^*$ in the sense that $\mathrm{dom} (S)\subset\mathrm{dom} (S^*)$ and $S^*f=Sf$ for all
$f\in\mathrm{dom} (S)$. However, this is obviously a weaker property than the more natural physical property of {\it self-adjointness}, that is, $S=S^*$.
A symmetric operator is not necessarily closed (although it is always closable), and the spectrum of a symmetric operator which is not self-adjoint
typically covers the whole complex plane (or at least the upper or lower complex half-plane). We also point out that the closure $\overline S$
of a symmetric operator $S$
is not necessarily self-adjoint; if it is, then $S$ is called {\it essentially self-adjoint}, that is, $\overline S=S^*$. However,
we shall not deal with essentially self-adjoint operators here.
We emphasize that from a spectral theoretic point of view
a symmetric operator (or an essentially self-adjoint operator) which is not self-adjoint is not suitable as an observable in the description of a
physical quantum system.
It is an important issue to understand in which situations a symmetric operator admits self-adjoint extensions and how these self-adjoint extensions
can be described.
These questions were already discussed in the classical contribution \cite{N30} by J. von Neumann.
For completeness we recall that a self-adjoint operator $A$ in ${\mathcal H}$ is an extension of a densely defined symmetric operator $S$
if $S\subset A$; since $A$ is self-adjoint this is equivalent to $A\subset S^*$.
We start by recalling the so-called first von
Neumann formula in the next theorem.
\begin{theorem}
Let $S$ be a densely defined closed symmetric operator in ${\mathcal H}$. Then the domain of the adjoint operator $S^*$ admits the direct sum decomposition
\begin{equation}\label{neu1}
\mathrm{dom} (S^*)=\mathrm{dom} (S)\,\dot{+}\,\ker(S^*-i)\,\dot{+}\,\ker(S^*+i).
\end{equation}
\end{theorem}
Note that $S^*f_i=if_i$ for all $f_i\in\ker(S^*-i)$ and similarly $S^*f_{-i}=-if_{-i}$ for all $f_{-i}\in\ker(S^*+i)$. The spaces $\ker(S^*-i)$ and
$\ker(S^*+i)$ are usually called {\it defect spaces} of $S$ and their dimensions are the {\it deficiency indices} of $S$.
It will turn out that the deficiency indices and isometric operators in between the defect spaces are particularly
important in the theory of self-adjoint extensions.
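The defect spaces can be computed explicitly in simple situations; the following standard sketch (stated here as an illustration, without proof) shows that the two deficiency indices may indeed differ.

```latex
\begin{example}
Consider $S=-i\frac{d}{dx}$ with $\mathrm{dom}(S)=H^1_0(0,\infty)$ in $L^2(0,\infty)$.
The solutions of $S^*f=\pm if$ are multiples of $f_\pm(x)=e^{\mp x}$, and only
$f_+(x)=e^{-x}$ belongs to $L^2(0,\infty)$. Hence
\begin{equation*}
\dim\bigl(\ker(S^*-i)\bigr)=1\quad\text{and}\quad\dim\bigl(\ker(S^*+i)\bigr)=0.
\end{equation*}
\end{example}
```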
One can show that the dimension of $\ker(S^*-\lambda_+)$ does not depend on $\lambda_+\in{\mathbb{C}}^+$
and the dimension of $\ker(S^*-\lambda_-)$ does not depend on $\lambda_-\in{\mathbb{C}}^-$. However, for fixed $\lambda_+\in{\mathbb{C}}^+$ and $\lambda_-\in{\mathbb{C}}^-$
and hence, in particular, for $\lambda_+=i$ and
$\lambda_-=-i$,
the dimensions of $\ker(S^*-\lambda_+)$ and $\ker(S^*-\lambda_-)$ may be different. According to the second von Neumann formula both dimensions coincide
if and only if $S$ admits self-adjoint extensions in ${\mathcal H}$.
\begin{theorem}\label{n2}
Let $S$ be a densely defined closed symmetric operator in ${\mathcal H}$. Then there exist self-adjoint extensions $A$ of $S$ in ${\mathcal H}$ if and only if
\begin{equation*}
\dim\bigl(\ker(S^*-i)\bigr)=\dim\bigl(\ker(S^*+i)\bigr).
\end{equation*}
If, in this case, $U:\ker(S^*-i)\rightarrow \ker(S^*+i)$ is a unitary operator and $\mathrm{dom}(S^*)$ is decomposed as in \eqref{neu1}, then the operator $A$ defined by
\begin{equation}\label{aa}
\begin{split}
A(f_S+f_i+f_{-i})&=Sf_S +if_i -if_{-i},\\
\mathrm{dom} (A)&=\bigl\{f=f_S+f_i+f_{-i}\in\mathrm{dom} (S^*):f_{-i}=Uf_i \bigr\},
\end{split}
\end{equation}
is a self-adjoint extension of $S$ and, vice versa, for any self-adjoint extension $A$ of $S$ there exists a unitary operator
$U:\ker(S^*-i)\rightarrow \ker(S^*+i)$ such that \eqref{aa} holds.
\end{theorem}
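As an illustration of Theorem~\ref{n2}, the following classical sketch describes the self-adjoint extensions of the momentum operator on an interval; the computation is standard and the precise phase $\alpha(\theta)$ is left unspecified.

```latex
\begin{example}
Let $S=-i\frac{d}{dx}$ with $\mathrm{dom}(S)=H^1_0(0,1)$ in $L^2(0,1)$. Then
$\ker(S^*-i)$ and $\ker(S^*+i)$ are spanned by $e^{-x}$ and $e^{x}$, respectively,
so the deficiency indices are $(1,1)$ and the unitary operators
$U:\ker(S^*-i)\rightarrow\ker(S^*+i)$ form a family parameterized by
$\theta\in[0,2\pi)$. The corresponding self-adjoint extensions $A_\theta$ in
\eqref{aa} act as $-i\frac{d}{dx}$ on
\begin{equation*}
\mathrm{dom}(A_\theta)=\bigl\{f\in H^1(0,1): f(1)=e^{i\alpha(\theta)}f(0)\bigr\}
\end{equation*}
with a suitable phase $\alpha(\theta)$; each $A_\theta$ has purely discrete spectrum
$\{\alpha(\theta)+2\pi k: k\in\mathbb{Z}\}$.
\end{example}
```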
Intuitively it is clear that for a densely defined symmetric operator with equal infinite deficiency indices there is a lot of flexibility
in the choice of the unitary operators between the (infinite dimensional) defect subspaces. This flexibility also allows one to construct self-adjoint extensions
with various different spectral properties.
Now we wish to consider the following particular situation. Let again $S$ be a densely defined closed symmetric operator in ${\mathcal H}$ with equal infinite
deficiency indices and assume that there exists a self-adjoint extension of $S$ such that the resolvent is a compact operator. In this situation
we shall construct another self-adjoint extension $A$ of $S$ with prescribed
essential spectrum in the next theorem.
\begin{theorem}\label{essit}
Let $S$ be a densely defined closed symmetric operator in ${\mathcal H}$ with equal infinite deficiency indices and assume that
there exists a self-adjoint extension of $S$ with compact resolvent.
Let ${\mathcal G}$ be a separable infinite dimensional Hilbert space and let $\Xi$ be a self-adjoint operator in ${\mathcal G}$ with ${\mathbb{R}}\cap\rho(\Xi)\not=\emptyset$.
Then there exists a self-adjoint extension
$A$ of $S$ in ${\mathcal H}$ such that
\begin{equation}\label{essi}
\sigma_{\rm ess}(A)=\sigma_{\rm ess}(\Xi).
\end{equation}
\end{theorem}
\begin{proof}
Let $A_0$ be a self-adjoint extension of $S$ in ${\mathcal H}$ such that the resolvent $(A_0-\lambda)^{-1}$ is a compact operator for some, and hence for all, $\lambda\in\rho(A_0)$.
Let us fix some point $\mu\in{\mathbb{R}}\cap\rho(A_0)\cap\rho(\Xi)$. Note that this is possible since we have assumed ${\mathbb{R}}\cap\rho(\Xi)\not=\emptyset$ and
the spectrum of $A_0$ is a discrete subset of the real line due to the compactness assumption.
In the present situation the spaces $\ker(S^*-\lambda_+)$ and $\ker(S^*-\lambda_-)$ for $\lambda_\pm\in{\mathbb{C}}^\pm$
are both infinite dimensional and one can show that here also the space $\ker(S^*-\mu)$ is infinite dimensional; this follows, e.g., from the direct
sum decomposition
\begin{equation}\label{deco}
\mathrm{dom} (S^*)=\mathrm{dom} (A_0)\,\dot{+}\,\ker(S^*-\mu)
\end{equation}
and the fact that $\mathrm{dom}(S^*)$ is an infinite dimensional extension of $\mathrm{dom}(A_0)$. Moreover, it is no restriction to assume that the Hilbert space ${\mathcal G}$ in the assumptions
of the theorem coincides with $\ker(S^*-\mu)$, since any two separable infinite dimensional Hilbert spaces can be identified via a unitary operator.
Now observe the orthogonal sum decomposition
\begin{equation*}
{\mathcal H}=\ker(S^*-\mu)\oplus\mathrm{ran}(S-\mu)
\end{equation*}
and with respect to this decomposition we consider the bounded everywhere defined operator
\begin{equation}\label{rmu}
R_\mu:=(A_0-\mu)^{-1}+\left[\begin{matrix} (\Xi-\mu)^{-1} & 0 \\ 0 & 0 \end{matrix}\right].
\end{equation}
We claim that $R_\mu^{-1}$ is a well-defined operator. In fact, if $R_\mu h=0$ for some $h\in{\mathcal H}$ then \eqref{rmu} implies
\begin{equation*}
(A_0-\mu)^{-1}h=-\left[\begin{matrix} (\Xi-\mu)^{-1} & 0 \\ 0 & 0 \end{matrix}\right]h,
\end{equation*}
and since the left-hand side belongs to $\mathrm{dom} (A_0)$ and the right-hand side belongs to $\ker(S^*-\mu)$, the direct sum decomposition
\eqref{deco} shows that $(A_0-\mu)^{-1}h=0$, and hence $h=0$. This confirms that $R_\mu^{-1}$, and hence also
$$
A:=R_\mu^{-1}+\mu
$$
is a well-defined operator. It is clear from $R_\mu=(A-\mu)^{-1}$ that $\mu\in\rho(A)$ and $A$ is self-adjoint in ${\mathcal H}$ since the same is obviously true for $R_\mu$ in \eqref{rmu}.
In order to determine the essential spectrum of $A$ recall the Weyl theorem (see, e.g., \cite[Theorem XIII.14]{RS78}) which states
that compact perturbations in resolvent
sense do not change the essential spectrum. In the present situation we have that
$$
(A-\mu)^{-1}-\left[\begin{matrix} (\Xi-\mu)^{-1} & 0 \\ 0 & 0 \end{matrix}\right]=R_\mu-\left[\begin{matrix} (\Xi-\mu)^{-1} & 0 \\ 0 & 0 \end{matrix}\right]=(A_0-\mu)^{-1}
$$
is a compact operator and hence the essential spectrum $\sigma_{\rm ess}((A-\mu)^{-1})$ coincides with the
essential spectrum of the diagonal block matrix operator, that is,
$$\sigma_{\rm ess}\left(\left[\begin{matrix} (\Xi-\mu)^{-1} & 0 \\ 0 & 0 \end{matrix}\right]\right)=\sigma_{\rm ess}\bigl((\Xi-\mu)^{-1}\bigr)\cup\{0\}.$$
This implies \eqref{essi}.
\end{proof}
From the construction of the operator $A$ in the proof of Theorem~\ref{essit} the following representation can be concluded:
\begin{equation}\label{as}
\begin{split}
A(f_0+f_\mu)&=A_0f_0 + \mu f_\mu,\\
\mathrm{dom} (A)&=\bigl\{f_0+f_\mu\in\mathrm{dom} (A_0)\,\dot{+}\,\ker(S^*-\mu):\\
&\qquad\qquad\qquad\qquad\qquad (\Xi-\mu)f_\mu=P_\mu(A_0-\mu)f_0\bigr\};
\end{split}
\end{equation}
here $P_\mu$ denotes the orthogonal projection in ${\mathcal H}$ onto $\ker(S^*-\mu)$.
In fact, since \eqref{rmu} is the resolvent $(A-\mu)^{-1}$ of $A$ it follows that the elements $f\in\mathrm{dom} (A)$ have the form
$f=R_\mu h$, $h\in{\mathcal H}$. Due to the direct sum decomposition \eqref{deco} we have $R_\mu h=f=f_0+f_\mu$ with some $f_0\in\mathrm{dom} (A_0)$ and some $f_\mu\in\ker(S^*-\mu)$,
and when comparing with \eqref{rmu} it follows that $f_0=(A_0-\mu)^{-1}h$ and $f_\mu=(\Xi-\mu)^{-1}P_\mu h$. Hence it is clear that $f=R_\mu h\in\mathrm{dom} (A)$
satisfies the condition
\begin{equation}\label{bcd}
(\Xi-\mu)f_\mu=P_\mu(A_0-\mu)f_0
\end{equation}
in \eqref{as}. On the other hand, if $f=f_0+f_\mu\in\mathrm{dom} (A_0)\,\dot{+}\,\ker(S^*-\mu)$ is such that \eqref{bcd} holds then one can verify in a similar way that
there exists $h\in{\mathcal H}$ such that $f=R_\mu h$, and hence $f\in\mathrm{dom} (A)$. Summing up, we have shown the representation \eqref{as}.
Finally we note that the explicit form \eqref{as} of $A$ comes via a restriction of the adjoint operator $S^*$ and the decomposition \eqref{deco}; the domain of $A$
is described by an abstract boundary condition depending on the choice of the operator $\Xi$. This abstract result can of course be
formulated in various explicit situations, e.g., for infinitely many $\delta'$-interactions as in Section~\ref{sec2} or for the Laplacian
on a bounded domain as in Section~\ref{sec1}, where the boundary condition in \eqref{as} can be specified further.
Furthermore, the self-adjoint extensions $A$ and $A_0$ can be described in the formalism of von Neumann's second formula in Theorem~\ref{n2}.
If one fixes a unitary operator $U_0:\ker(S^*-i)\rightarrow \ker(S^*+i)$ for the representation of $A_0$ in \eqref{aa} then
the unitary operator $U:\ker(S^*-i)\rightarrow \ker(S^*+i)$ corresponding to the self-adjoint extension $A$ can be expressed in terms of $U_0$ and the parameter
$\Xi$. The technical details are left to the reader.
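The freedom in the choice of the parameter $\Xi$ in Theorem~\ref{essit} can be made concrete by a diagonal operator; the following sketch realizes an arbitrary prescribed essential spectrum.

```latex
\begin{example}
Let $\Sigma\subset\mathbb{R}$ be a closed set with $\Sigma\neq\mathbb{R}$, and let
$(\xi_j)_{j\in\mathbb{N}}$ be a sequence which is dense in $\Sigma$ and attains each
of its values infinitely often. In ${\mathcal G}=\ell^2(\mathbb{N})$ consider the
self-adjoint multiplication operator
\begin{equation*}
(\Xi x)_j=\xi_j x_j,\qquad
\mathrm{dom}(\Xi)=\bigl\{x\in\ell^2(\mathbb{N}):(\xi_j x_j)_{j\in\mathbb{N}}\in\ell^2(\mathbb{N})\bigr\}.
\end{equation*}
Then $\sigma_{\rm ess}(\Xi)=\overline{\{\xi_j:j\in\mathbb{N}\}}=\Sigma$, and
${\mathbb{R}}\cap\rho(\Xi)\neq\emptyset$ since $\Sigma\neq\mathbb{R}$. Theorem~\ref{essit}
thus yields a self-adjoint extension $A$ of $S$ with $\sigma_{\rm ess}(A)=\Sigma$.
\end{example}
```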
\numberwithin{theorem}{section}
\appendix
\section{Continuous dependence of the eigenvalues on varying domains}\label{appa}
In this appendix we establish an auxiliary result on the continuous dependence of the eigenvalues
of the Neumann Laplacian on varying domains, which is useful and convenient for the proofs of Theorem~\ref{th-HSS} and Theorem \ref{thCdV+}.
For our purposes it is sufficient to consider the following geometric setting:
Let $\Omega\subset\mathbb{R}^2$ be a bounded Lipschitz domain
and assume that also the subdomains
$$\Omega_\pm=\Omega\cap\left\{(x,y)\in\mathbb{R}^2: \pm x> 0\right\}$$
are (bounded, nontrivial) Lipschitz domains. Furthermore, we assume that the set
$$\Gamma =\Omega\cap\left\{(x,y)\in\mathbb{R}^2: x=0\right\}=\partial\Omega_-\cap\partial\Omega_+$$ is a (compact) interval with the endpoints
$(0,A)$ and $(0,B)$ in $\mathbb{R}^2$, where $A<B$.
For $a,b\in [A,B]$ fixed such that $a\le b$ we introduce the domain $\Omega_{a,b}$ by
\begin{equation}\label{omab}
\Omega_{a,b}=\Omega_-\cup\Omega_+ \cup \Gamma_{a,b},
\text{ where }\Gamma_{a,b}=\{0\}\times (a,b)
\end{equation}
(see Figure~\ref{fig5}, left). Note that
$\Omega_{A,B}=\Omega$ and $\Omega_{a,a}=\Omega_-\cup\Omega_+=\Omega\setminus\Gamma$.
We denote by $Y\subset [A,B]\times [A,B]$ the set of all admissible pairs $\{a,b\}$, that is,
$$Y=\bigl\{\{a,b\} : A\leq a\leq b\leq B\bigr\}.$$
\begin{figure}[h]
\begin{tikzpicture}
\draw [line width=0.2mm, black] plot [smooth cycle] coordinates {(3,0) (2,1.5) (0,2) (-1.5,1.5) (-2,0) (0,-1) (2,-1)};
\draw [line width=0.2mm, black] (0,2) -- (0,1) ;
\draw [line width=0.2mm, black] (0,0) -- (0,-1);
\node[circle,fill=black,inner sep=0pt,minimum size=0.7mm ] (a) at (0,-1) {};
\node[circle,fill=black,inner sep=0pt,minimum size=0.7mm ] (a) at (0,0) {};
\node[circle,fill=black,inner sep=0pt,minimum size=0.7mm ] (a) at (0,1) {};
\node[circle,fill=black,inner sep=0pt,minimum size=0.7mm ] (a) at (0,2) {};
\node[text width=10mm] at (0.55,0) {$(0,a)$};
\node[text width=10mm] at (0.55,1) {$(0,b)$};
\node[text width=10mm] at (0.55,-0.87) {$(0,A)$};
\node[text width=10mm] at (0.55,1.68) {$(0,B)$};
\end{tikzpicture}\qquad\qquad
\begin{tikzpicture}
\draw [line width=0.2mm, black] plot [smooth cycle] coordinates {(3,0) (2,1.5) (0,2) (-1.5,1.5) (-2,0) (0,-1) (2,-1)};
\draw [line width=0.2mm, black] (0,2) -- (0,1) ;
\draw [line width=0.2mm, black] (0,0) -- (0,-1);
\draw [line width=0.2mm, black] (-1.5,1.5) -- (-1.5,0.5) ;
\draw [line width=0.2mm, black] (-1.5,0.2) -- (-1.5,-0.4);
\draw [line width=0.2mm, black] (2,1.5) -- (2,0.5) ;
\draw [line width=0.2mm, black] (2,-0.1) -- (2,-1);
\end{tikzpicture}
\caption{Domain $\Omega_{a,b}$ with one wall (left) and $m=3$ walls (right)}\label{fig5}
\end{figure}
Since the domain $\Omega_{a,b}$ in \eqref{omab} has the \textit{cone property} (see, e.g., \cite[Chapter~IV,~4.3]{A75}) it follows from
Rellich's theorem \cite[Theorem~6.2]{A75} that the embedding $\mathsf{H}^1(\Omega_{a,b})\hookrightarrow \mathsf{L}^2(\Omega_{a,b})$ is compact. Therefore,
the spectrum of the Neumann Laplacian $A_{\Omega_{a,b}}$ on $\Omega_{a,b}$ is purely discrete.
We denote by $\left(\lambda_k(\Omega_{a,b})\right)_{k\in\mathbb{N}}$ the sequence of eigenvalues of
$A_{\Omega_{a,b}}$ numbered in nondecreasing order with multiplicities taken into account.
\begin{theorem}\label{th-contin}
For each $k\in\mathbb{N}$ the function
$\{a,b\}\mapsto \lambda_k(\Omega_{a,b})$
is continuous on $Y$.
\end{theorem}
\begin{remark}\label{rem-contin}
Theorem~\ref{th-contin} remains valid for more general domains $\Omega_{a,b}$ obtained
from $\Omega$ by adding $m>1$ walls in the same way -- see Figure~\ref{fig5} (right, here $m=3$).
In this case
$a=\{a_1,\dots,a_m\}$, $b=\{b_1,\dots,b_m\}$ with
\begin{gather}\label{aabb}
A_j\leq a_j\leq b_j\leq B_j,\quad j=1,\dots,m,
\end{gather}
and $\{a,b\}\mapsto \lambda_k(\Omega_{a,b})$ is continuous on
$\{\{a,b\}\in\mathbb{R}^{2m}: \eqref{aabb}\text{ holds}\}$.
\end{remark}
For the proof of Theorem~\ref{th-contin} we shall first recall a particular case of a more general abstract result established in \cite{IOS89},
which is formulated and proved for operators in \textit{varying} Hilbert spaces.
\begin{theorem}[Iosif'yan-Oleinik-Shamaev, 1989]\label{thIOS}
Let $B_n$, $n\in\mathbb{N}$, and $B$ be nonnegative compact operators in a Hilbert space $\mathcal{H}$.
We denote by $(\mu_k(B_n))_{k\in\mathbb{N}}$ and $(\mu_k(B))_{k\in\mathbb{N}}$ the sequences of the eigenvalues of
$B_n$ and $B$, respectively, numbered in nonincreasing order with multiplicities taken into account.
Assume that the following conditions hold:
\begin{itemize}
\item[{\rm (i)}] $\sup_{n}\|B_n\|<\infty$;
\item[{\rm (ii)}] $\forall f\in\mathcal{H}$: $B_n f\to Bf$ as $n\to \infty$;
\item[{\rm (iii)}] for any bounded sequence $(f_n)_{n\in\mathbb{N}}$ in $\mathcal{H}$ there exist $u\in\mathcal{H}$
and a subsequence $(n_k)_{k\in\mathbb{N}}$ such that
$B_{n_k}f_{n_k}\to u$ in $\mathcal{H}$ as $k\to\infty$.
\end{itemize}
Then for each $k\in\mathbb{N}$
\begin{gather}\label{mumu}
\mu_k(B_n)\to \mu_k(B)\text{ as }n\to\infty.
\end{gather}
\end{theorem}
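The convergence \eqref{mumu} can be illustrated by a small numerical sketch with diagonal operators; the diagonal sequences and the truncation level below are illustrative assumptions and play no role in the proof.

```python
# Toy illustration of Theorem thIOS: for nonnegative compact diagonal
# operators, eigenvalues (in nonincreasing order) are the sorted
# diagonal entries, and strong convergence of the operators yields
# convergence of each eigenvalue.

def mu(diag, k):
    """k-th eigenvalue (1-based) of a nonnegative diagonal operator,
    numbered in nonincreasing order with multiplicities."""
    return sorted(diag, reverse=True)[k - 1]

N = 2000  # truncation level standing in for the infinite matrix

# Limit operator B = diag(1/j) and approximants B_n = diag(1/j + 1/(j*n)),
# which converge to B; here mu_k(B_n) = (1/k)*(1 + 1/n) -> 1/k = mu_k(B).
B = [1.0 / j for j in range(1, N + 1)]

def B_n(n):
    return [1.0 / j + 1.0 / (j * n) for j in range(1, N + 1)]

for k in [1, 2, 5]:
    gaps = [abs(mu(B_n(n), k) - mu(B, k)) for n in (10, 100, 1000)]
    assert gaps[0] > gaps[1] > gaps[2]  # each eigenvalue gap shrinks as n grows
```
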
\begin{proof}[Proof of Theorem~\ref{th-contin}]
Fix some $\{a,b\}\in Y$ and consider an arbitrary sequence $\{a_n,b_n\}\in Y$, $n\in\mathbb{N}$, such that
$\lim_{n\to\infty}a_n=a$ and $\lim_{n\to\infty}b_n=b$.
We have to show that for each $k\in\mathbb{N}$
\begin{gather}
\label{lambdalambda}
\lambda_k(\Omega_{a_n,b_n})\to \lambda_k(\Omega_{a,b})\text{ as }n\to\infty.
\end{gather}
The strategy is to apply Theorem~\ref{thIOS} to the resolvents of the Neumann Laplacians
$A_{\Omega_{a_n,b_n}}$ and $A_{\Omega_{a,b}}$.
More precisely,
we consider the operators
\begin{gather*}
B_n=(A_{\Omega_{a_n,b_n}}+\mathrm{Id})^{-1}\quad\text{and}\quad B=(A_{\Omega_{a,b}}+\mathrm{Id})^{-1},
\end{gather*}
which are bounded operators acting in $\mathcal{H}=\mathsf{L}^2(\Omega)=\mathsf{L}^2(\Omega_{a,b})$.
We show below that these operators satisfy the conditions (i)-(iii) in Theorem~\ref{thIOS}.
Then it follows that \eqref{mumu} holds for each $k\in\mathbb{N}$ and from
\begin{gather*}
\mu_k(B_n)=(\lambda_k(\Omega_{a_n,b_n})+1)^{-1}\quad\text{and}\quad
\mu_k(B)=(\lambda_k(\Omega_{a,b})+1)^{-1}
\end{gather*}
we conclude \eqref{lambdalambda}.

(i) This condition holds since $$\|B_n\|=\frac{1}{\mathrm{dist}(-1,\,\sigma(A_{\Omega_{a_n,b_n}}))}=1,$$
where we used that $0\in\sigma(A_{\Omega_{a_n,b_n}})\subset[0,\infty)$.
(ii) In order to check condition (ii) in Theorem~\ref{thIOS} let $f\in \mathsf{L}^2(\Omega)$ and set
$u_n=B_n f$, $n\in\mathbb{N}$. For $\phi\in \mathsf{H}^1(\Omega_{a_n,b_n})$
it follows from the definition of the Neumann Laplacian $A_{\Omega_{a_n,b_n}}$ that $u_n$ satisfies
\begin{equation}\label{weak_n}
(\nabla u_n,\nabla\phi)_{\mathsf{L}^2(\Omega_{a_n,b_n})}+
(u_n,\phi)_{\mathsf{L}^2(\Omega_{a_n,b_n})}= \bigl((A_{\Omega_{a_n,b_n}}+\mathrm{Id})u_n,\phi\bigr)_{\mathsf{L}^2(\Omega_{a_n,b_n})}
=(f,\phi)_{\mathsf{L}^2(\Omega_{a_n,b_n})}.
\end{equation}
In particular, using \eqref{weak_n} for $\phi=u_n$ we get
\begin{equation*}
\begin{split}
\|u_n\|^2_{\mathsf{H}^1(\Omega_{a_n,b_n})}&=
\|\nabla u_n\|^2_{\mathsf{L}^2(\Omega_{a_n,b_n})}+\|u_n\|^2_{\mathsf{L}^2(\Omega_{a_n,b_n})}=(f,u_n)_{\mathsf{L}^2(\Omega_{a_n,b_n})}\\&\leq \|f \|_{\mathsf{L}^2(\Omega_{a_n,b_n})}\|u_n\|_{\mathsf{L}^2 (\Omega_{a_n,b_n})}
\leq\|f \|_{\mathsf{L}^2(\Omega_{a_n,b_n})}\|u_n\|_{\mathsf{H}^1(\Omega_{a_n,b_n})},
\end{split}
\end{equation*}
and therefore
\begin{gather}\label{apriori}
\|u_n\|_{\mathsf{H}^1(\Omega_{a_n,b_n})}\leq \|f \|_{\mathsf{L}^2(\Omega_{a_n,b_n})}.
\end{gather}
We set $u_n^\pm = u_n\!\!\restriction_{\Omega_\pm}$. Below we shall use the same $\pm$-superscript notation for restrictions of other functions
onto $\Omega_\pm$. It follows from \eqref{apriori}
that $(u_n^\pm)_{n\in\mathbb{N}}$ is a bounded sequence in
$\mathsf{H}^1(\Omega_\pm)$ and hence there exist $u^\pm\in\mathsf{H}^1(\Omega_\pm)$
and a subsequence $n_k\to\infty$ such that
\begin{gather}
\label{H1}
u_{n_k}^\pm\rightharpoonup u^\pm\text{ in }\mathsf{H}^1(\Omega_\pm)
\end{gather}
(as usual the notation $\rightharpoonup$ is used for the \textit{weak} convergence).
With the help of Rellich's theorem we conclude from \eqref{H1} that
\begin{gather}
\label{H1-}
u_{n_k}^\pm\to u^\pm\text{ in }\mathsf{H}^{1-\kappa}(\Omega_\pm),\quad \kappa\in (0,1].
\end{gather}
Finally, well-known mapping properties of the trace operator on $\mathsf{H}^1(\Omega_\pm)$ (see, e.g., \cite[Theorem~3.37]{McL00}) together with \eqref{H1-} lead to
\begin{gather}
\label{traces}
\gamma_{\Gamma}^\pm u_{n_k}^\pm \to \gamma_{\Gamma}^\pm u^\pm\text{ in }\mathsf{L}^2(\Gamma)
\end{gather}
as $n_k\to\infty$,
where $\gamma^\pm_{\Gamma}u^\pm$ stands for the restriction of the trace of the function $u^\pm\in\mathsf{H}^1(\Omega_\pm)$ onto $\Gamma$.
Next we introduce the set of functions
\begin{multline*}
\widehat{\mathsf{H}}^1(\Omega_{a,b})=\bigl\{u\in\mathsf{H}^1(\Omega_{a,b}):\ \exists\,\delta=\delta(u)>0\text{ such that}\\
u=0\text{ in } \delta\text{-neighborhoods of }(0,a)\text{ and }(0,b)\bigr\}.
\end{multline*}
It is known that $\widehat{\mathsf{H}}^1(\Omega_{a,b})$ is dense in $\mathsf{H}^1(\Omega_{a,b})$
(this is due to the fact that the capacity of the set $\{(0,a),\,(0,b)\}$ is zero; we refer to \cite{RT75} for more details).
Now let $\phi\in \widehat{\mathsf{H}}^1(\Omega_{a,b})$. It is clear that for $n_k$ sufficiently large we also have
$\phi\in \mathsf{H}^1(\Omega_{a_{n_k},b_{n_k}})$ and hence
\eqref{weak_n} is valid. The identity \eqref{weak_n} written componentwise reads as
\begin{equation*}
\begin{split}
(\nabla u_{n_k}^- ,\nabla\phi^-)_{\mathsf{L}^2(\Omega_-)}+
(\nabla u_{n_k}^+ ,\nabla&\phi^+)_{\mathsf{L}^2(\Omega_+)}+
(u_{n_k}^-,\phi^-)_{\mathsf{L}^2(\Omega_-)}+
(u_{n_k}^+,\phi^+)_{\mathsf{L}^2(\Omega_+)}\\
&=
(f^-,\phi^-)_{\mathsf{L}^2(\Omega_-)}+
(f^+,\phi^+)_{\mathsf{L}^2(\Omega_+)},
\end{split}
\end{equation*}
and passing to the limit (we have weak convergence in $\mathsf{H}^1(\Omega_\pm)$ by \eqref{H1}) as $n_k\to\infty$
we get
\begin{equation}\label{weak}
\begin{split}
(\nabla u^- ,\nabla\phi^-)_{\mathsf{L}^2(\Omega_-)}+
(\nabla u^+ ,\nabla&\phi^+)_{\mathsf{L}^2(\Omega_+)}+
(u^-,\phi^-)_{\mathsf{L}^2(\Omega_-)}+
(u^+,\phi^+)_{\mathsf{L}^2(\Omega_+)}\\
&=
(f^-,\phi^-)_{\mathsf{L}^2(\Omega_-)}+
(f^+,\phi^+)_{\mathsf{L}^2(\Omega_+)}.
\end{split}
\end{equation}
Let us denote
$$
u(x)=
\begin{cases}
u^-(x),& x\in\Omega_-,
\\
u^+(x),& x\in\Omega_+.
\end{cases}
$$
Obviously $u\in\mathsf{L}^2(\Omega)$. Using \eqref{H1-} with $\kappa=1$ we obtain
\begin{gather}\label{ii}
u_{n_k}\to u\text{ in }\mathsf{L}^2(\Omega).
\end{gather}
Since $u_{n_k}\in\mathsf{H}^1(\Omega_{a_{n_k},b_{n_k}})$ it is clear that
$$\gamma^-_{\Gamma_{a_{n_k},b_{n_k}}} u^-_{n_k}=\gamma_{\Gamma_{a_{n_k},b_{n_k}}}^+ u_{n_k}^+,$$
where $\gamma^\pm_{\Gamma_{a_{n_k},b_{n_k}}}$ is the restriction of the trace onto $\Gamma_{a_{n_k},b_{n_k}}=\{0\}\times (a_{n_k},b_{n_k})$.
Therefore, \eqref{traces} implies that $\gamma^-_{\Gamma_{a',b'}} u^-=\gamma^+_{\Gamma_{a',b'}} u^+$ for \textit{any} interval $(a',b')\subset (a,b)$ and, consequently,
$$\gamma^-_{\Gamma_{a ,b }} u^-=\gamma^+_{\Gamma_{a ,b }} u^+.$$
As $u^\pm\in \mathsf{H}^1(\Omega_\pm)$
this implies
$u\in \mathsf{H}^1(\Omega_{a,b})$ and \eqref{weak} can be written in the form
\begin{gather}
\label{weak+}
(\nabla u ,\nabla\phi)_{\mathsf{L}^2(\Omega_{a,b})}+
(u,\phi)_{\mathsf{L}^2(\Omega_{a,b})}=
(f,\phi)_{\mathsf{L}^2(\Omega_{a,b})}.
\end{gather}
Since $\widehat{\mathsf{H}}^1(\Omega_{a,b})$ is dense in $\mathsf{H}^1(\Omega_{a,b})$ this equality holds
for any $\phi\in \mathsf{H}^1(\Omega_{a,b})$. It is easy to see that
\eqref{weak+} is equivalent to $u=Bf$. This also shows that the limit function $u$ is independent of the subsequence $n_k$ and hence we conclude that
\eqref{ii} holds for any subsequence $n_k$. Thus,
$$B_nf = u_n\to u = Bf\text{ in }\mathsf{L}^2(\Omega)$$
as $n\to\infty$. We have verified condition (ii) in Theorem~\ref{thIOS}.
(iii) To check this condition let $(f_n)_{n\in\mathbb{N}}$ be a bounded sequence in $\mathsf{L}^2(\Omega)$.
The same arguments as in the proof of (ii) (cf.~\eqref{apriori}) show that the sequence
$(B_n f_n)_{n\in\mathbb{N}}$ is bounded in $\mathsf{H}^1(\Omega\setminus\Gamma)$, and hence contains a weakly convergent subsequence in $\mathsf{H}^1(\Omega\setminus\Gamma)$.
Since the embedding
$$\mathsf{H}^1(\Omega\setminus\Gamma)\hookrightarrow \mathsf{L}^2(\Omega\setminus\Gamma)=\mathsf{L}^2(\Omega)$$
is compact (again we use Rellich's embedding theorem) we conclude that there is a strongly convergent subsequence in $\mathsf{L}^2(\Omega)$, that is,
condition (iii) in Theorem~\ref{thIOS} is satisfied.
\end{proof}
\begin{remark}
Besides the continuity of the function $\{a,b\}\mapsto \lambda_k(\Omega_{a,b})$
one can also conclude that it decreases (\textit{resp.}, increases) monotonically with respect to $a$ (\textit{resp.}, with respect to $b$). This follows easily from the min-max principle (see, e.g., \cite[Section~4.5]{D95}).
Note that, in general, when one perturbs a fixed domain $\Omega$ by removing a subset $S_a$ ($a\in\mathbb{R}$ is a parameter),
the monotonicity of the eigenvalues of the Neumann Laplacian in $\Omega_a:=\Omega\setminus S_a$ does not follow from the
monotonicity of the underlying domains with respect to $a$; i.e., even if $\Omega_a\subset \Omega_{\widetilde a}$,
it does not mean that $\lambda(\Omega_a)\geq \lambda(\Omega_{\widetilde a})$ (see \cite[Section~2.3]{MNP85} for more details).
This is in contrast to the Dirichlet Laplacian, where the monotonicity is always present -- see, e.g., \cite{RT75,GZ94,Oz81,MNP85}
for the properties of Dirichlet eigenvalues in such perturbed domains. However, in our configuration monotonicity nevertheless holds
for the Neumann eigenvalues.
This is due to the special structure of the removed set, which has the form of two walls of zero thickness.
\end{remark}
\section{Convergence results for monotone sequences of quadratic forms}\label{appb}
We recall a well-known convergence result for a sequence of
monotonically increasing quadratic forms from \cite{Si78} which is used in the proof of Theorem~\ref{th-BK}.
Consider a family $\{\mathfrak{a}_{q}\}_{q>0}$ of densely defined closed nonnegative sesquilinear forms in a Hilbert space $\mathcal{H}$. For simplicity we assume that
the domain of $\mathfrak{a}_q $ is the same for all $q$, and we use the notation $\mathrm{dom}(\mathfrak{a}_q)=\mathcal{H}^1$.
Let $\mathcal{A}_q$ be the nonnegative self-adjoint operator associated with the form $\mathfrak{a}_q$ via the first representation theorem.
Now assume, in addition, that the family $\{\mathfrak{a}_{q}\}_{q>0}$ of forms increases monotonically as $q$ decreases, i.e.,
for any $0<q<\widetilde q<\infty$ one has
\begin{gather}
\label{monot-forms}
\mathfrak{a}_{q}[u,u]\geq \mathfrak{a}_{\widetilde q}[u,u],\quad u\in \mathcal{H}^1.
\end{gather}
We define the limit form $\mathfrak{a}_0$ as follows:
$$\mathrm{dom}(\mathfrak{a}_0)=\left\{u\in\mathcal{H}^1:\ \sup\limits_{q>0}\mathfrak{a}_{q}[u,u]<\infty\right\},\ \mathfrak{a}_0[u,v]=\lim\limits_{q\to 0}\mathfrak{a}_{q}[u,v].$$
One verifies that $\mathfrak{a}_0$ is a well-defined nonnegative symmetric sesquilinear form (which is not necessarily densely defined)
and, in fact, by \cite{Si78} the limit form $\mathfrak{a}_0$ is closed.
Let us now assume that $\mathrm{dom}(\mathfrak{a}_0)$ is dense in $\mathcal{H}$, so that one can associate a
nonnegative self-adjoint operator $\mathcal{A}_0$ with $\mathfrak{a}_0$ via the first representation theorem.\footnote{We wish to mention here that in the situation where
the limit form $\mathfrak{a}_0$ is not densely defined one associates a self-adjoint relation (multivalued operator) via the corresponding generalized first representation theorem
for nondensely defined closed nonnegative forms; see \cite{BHSW10} for more details and related convergence results.}
According to \cite{Si78} one then has convergence of the corresponding nonnegative self-adjoint operators in the strong resolvent sense (see also \cite[Theorem 4.2]{BHSW10}):
\begin{theorem}[Simon, 1978]
For each $f\in\mathcal{H}$ one has
\begin{gather}\label{strong}
\|(\mathcal{A}_q+\mathrm{Id})^{-1}f- (\mathcal{A}_0+\mathrm{Id})^{-1}f\|\to 0\text{ as }q\to 0.
\end{gather}
\end{theorem}
Now let us assume, in addition, that the spectra of the self-adjoint operators $\mathcal{A}_q$ and $\mathcal{A}_0$ are purely discrete.
We write $(\lambda_k(\mathcal{A}_q))_{k\in\mathbb{N}}$ and $(\lambda_k(\mathcal{A}_0))_{k\in\mathbb{N}}$
for the eigenvalues of these operators counted with multiplicities and ordered
as nondecreasing sequences. In this case one can conclude the following spectral convergence:
\begin{theorem}\label{th-s+}
For each $k\in\mathbb{N}$ one has
\begin{gather}\label{ev-conv}
\lambda_k(\mathcal{A}_q)\to
\lambda_k(\mathcal{A}_0)\text{ as }q\to 0.
\end{gather}
\end{theorem}
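A one-dimensional sketch illustrates Theorem~\ref{th-s+}; the particular Robin-type form below is chosen for illustration only and is not related to a specific result in the text.

```latex
\begin{example}
In $\mathcal{H}=\mathsf{L}^2(0,1)$ consider the forms
\begin{equation*}
\mathfrak{a}_q[u,u]=\int_0^1 |u'(x)|^2\,dx+\frac{1}{q}\,|u(0)|^2,\qquad
\mathrm{dom}(\mathfrak{a}_q)=\mathsf{H}^1(0,1),\quad q>0,
\end{equation*}
which increase monotonically as $q$ decreases, cf.\ \eqref{monot-forms}. The limit
form is $\mathfrak{a}_0[u,u]=\int_0^1|u'(x)|^2\,dx$ on
$\mathrm{dom}(\mathfrak{a}_0)=\{u\in\mathsf{H}^1(0,1):u(0)=0\}$, so that $\mathcal{A}_0$
is the Laplacian with a Dirichlet condition at $0$ and a Neumann condition at $1$,
with eigenvalues $\lambda_k(\mathcal{A}_0)=\pi^2\bigl(k-\tfrac12\bigr)^2$. By
Theorem~\ref{th-s+} the eigenvalues $\lambda_k(\mathcal{A}_q)$ of the Robin
realizations converge to these values as $q\to 0$.
\end{example}
```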
\begin{proof}
The discreteness of the spectra of $\mathcal{A}_q$ and $\mathcal{A}_0$ is equivalent to the compactness of the resolvents
$(\mathcal{A}_q+\mathrm{Id})^{-1}$ and $(\mathcal{A}_0+\mathrm{Id})^{-1}$. Moreover,
\eqref{monot-forms} implies (cf.~\cite[Proposition~1.1]{Si78})
$$(\mathcal{A}_q+\mathrm{Id})^{-1}\leq (\mathcal{A}_{\widetilde q}+\mathrm{Id})^{-1}$$
provided $0<q<\widetilde q<\infty$. Then by \cite[Theorem VIII-3.5]{K66} the strong convergence in
\eqref{strong} becomes even convergence in the operator norm, that is,
\begin{gather}\label{norm:res:conv}
\|(\mathcal{A}_q+\mathrm{Id})^{-1}- (\mathcal{A}_0+\mathrm{Id})^{-1}\|\to 0\text{ as }q\to 0.
\end{gather}
It is well-known (see, e.g., \cite[Corollary~A.15]{P06}) that the norm resolvent convergence
\eqref{norm:res:conv} implies the convergence
of the eigenvalues, i.e., \eqref{ev-conv} holds.
\end{proof}
\begin{thebibliography}{99}
\bibitem{A75}
R.A.~Adams, Sobolev Spaces, Academic Press, New York-London, 1975.
\bibitem{AG93}
N.I.~Akhiezer and I.M.~Glazman, Theory of Linear Operators in Hilbert Space, Dover Publications, New York, 1993.
\bibitem{AGHH05}
S.~Albeverio, F.~Gesztesy, R.~H{\o}egh-Krohn, and H.~Holden,
Solvable Models in Quantum Mechanics, 2nd edition. With an appendix by Pavel Exner, AMS Chelsea Publishing, Providence, RI, 2005.
\bibitem{ABMN05}
S. Albeverio, J. Brasche, M.M.~Malamud, and H. Neidhardt,
Inverse spectral theory for symmetric operators with several gaps: scalar-type Weyl functions, J. Funct. Anal. 228 (2005), 144--188.
\bibitem{ABN98}
S. Albeverio, J. Brasche, and H. Neidhardt,
On inverse spectral theory for self-adjoint extensions: mixed types of spectra, J. Funct. Anal. 154 (1998), 130--173.
\bibitem{AN06}
S.~Albeverio and L.~Nizhnik,
A Schr\"odinger operator with a $\delta'$-interaction on a
Cantor set and Krein–Feller operators,
Math. Nachr. 279 (2006), 467--476.
\bibitem{AI71}
\v{S}.A.~Alimov and V.A. Ilʹin,
Conditions for the convergence of spectral decompositions that correspond to self-adjoint extensions of elliptic operators. II. Self-adjoint extension of the Laplace operator with an arbitrary spectrum,
Differencialʹnye Uravnenija 7 (1971), 851--882.
\bibitem{A78}
C.J.~Amick,
Some remarks on Rellich’s theorem and the Poincar\'e inequality,
J. Lond. Math. Soc., II. Ser. 18 (1978), 81--93.
\bibitem{An87}
C.~Ann\'{e},
Spectre du Laplacien et \'{e}crasement d'anses,
Ann. Sci. \'{E}cole Norm. Sup. 20 (1987), 271--280.
\bibitem{Arr95}
J.M.~Arrieta,
Rates of eigenvalues on a dumbbell domain. Simple eigenvalue case,
Trans. Am. Math. Soc. 347 (1995), 3503--3531.
\bibitem{AEL94}
J.E.~Avron, P.~Exner, and Y.~Last,
Periodic Schr\"odinger operators with large gaps and Wannier-Stark ladders,
Phys. Rev. Lett. 72 (1994), 896--899.
\bibitem{BK15}
D.~Barseghyan and A.~Khrabustovskyi,
Gaps in the spectrum of a periodic quantum graph with periodically distributed $\delta'$-type interactions,
J. Phys. A: Math. Theor. 48 (2015), 255201.
\bibitem{BEW13}
B.M.~Brown, W.D.~Evans, and I.G.~Wood,
Some spectral properties of Rooms and Passages domains and their skeletons, Spectral analysis, differential equations and mathematical physics: a festschrift in honor of Fritz Gesztesy's 60th birthday, 69--85, Proc. Sympos. Pure Math., 87, Amer. Math. Soc., Providence, RI, 2013.
\bibitem{BGLL15}
J. Behrndt, G. Grubb, M. Langer, and V. Lotoreichik,
Spectral asymptotics for resolvent differences of elliptic operators with $\delta$ and $\delta'$-interactions on hypersurfaces,
J. Spectr. Theory 5 (2015), 697--729.
\bibitem{BEL14}
J. Behrndt, P. Exner, and V.~Lotoreichik,
Schr\"{o}dinger operators with $\delta$ and $\delta'$-interactions on Lipschitz surfaces and chromatic numbers of associated partitions,
Rev. Math. Phys. 26 (2014), 1450015.
\bibitem{BHSW10}
J.~Behrndt, S.~Hassi, H.~de~Snoo, and R.~Wietsma,
Monotone convergence theorems for semibounded operators and forms with applications,
Proc. Roy. Soc. Edinburgh Sect. A 140 (2010), 927--951.
\bibitem{BLL13}
J. Behrndt, M. Langer, and V.~Lotoreichik,
Schr\"{o}dinger operators with $\delta$ and $\delta'$-potentials supported on hypersurfaces,
Ann. Henri Poincar\'e 14 (2013), 385--423.
\bibitem{BS87}
M.S.~Birman and M.Z.~Solomjak,
Spectral Theory of Self-adjoint Operators in Hilbert space,
Reidel, Dordrecht, 1987.
\bibitem{BKbook}
G.~Berkolaiko and P.~Kuchment,
Introduction to Quantum Graphs, Mathematical Surveys and Monographs 186, AMS Providence, RI, 2013.
\bibitem{B04}
J. Brasche,
Spectral theory for self-adjoint extensions. Spectral theory of Schr\"{o}dinger operators, pp. 51--96,
Contemp. Math. 340, Aportaciones Mat., Amer. Math. Soc., Providence, RI, 2004.
\bibitem{BMN06}
J. Brasche, M.M. Malamud, and H. Neidhardt,
Selfadjoint extensions with several gaps: finite deficiency indices, Oper. Theory Adv. Appl. 162 (2006), 85--101.
\bibitem{BN95}
J. Brasche and H.~Neidhardt,
On the absolutely continuous spectrum of self-adjoint extensions, J. Funct. Anal. 131 (1995), 364--385.
\bibitem{BN96}
J. Brasche and H.~Neidhardt,
On the singular continuous spectrum of self-adjoint extensions,
Math. Z. 222 (1996), 533--542.
\bibitem{BNW93}
J. Brasche, H.~Neidhardt, and J. Weidmann,
On the point spectrum of selfadjoint extensions,
Math. Z. 214 (1993), 343--355.
\bibitem{BN13a}
J.~Brasche and L.P.~Nizhnik,
One-dimensional Schr\"odinger operators with general point interactions,
Methods Funct. Anal. Topology 19 (2013), 4--15.
\bibitem{BN13b}
J.~Brasche and L.P.~Nizhnik,
One-dimensional Schr\"odinger operator with $\mathrm{d}elta'$-interactions on a set of Lebesgue measure zero,
Oper. Matrices 7 (2013), 887--904.
\bibitem{BSW95}
D.~Buschmann, G.~Stolz, and J.~Weidmann,
Onedimensional Schr\"odinger operators with local point interactions,
J. Reine Angew. Math. 467 (1995), 169--186.
\bibitem{CK19}
G.~Cardone and A.~Khrabustovskyi,
$\mathrm{d}elta'$-interaction as a limit of a thin Neumann waveguide with transversal window,
J. Math. Anal. Appl. 473 (2019), 1320--1342.
\bibitem{CdV87}
Y.~Colin de Verdi\`{e}re,
Construction de Laplaciens dont une partie finie du spectre est donn\'{e}e,
Ann. Sci. \'{E}cole Norm. Sup. 20 (1987), 599--615.
\bibitem{D95}
E.B.~Davies,
Spectral Theory and Differential Operators,
Cambridge University Press, Cambridge, 1995.
\bibitem{EKMT14}
J.~Eckhardt, A.~Kostenko, M.M.~Malamud, and G.~Teschl,
One-dimensional Schr\"odinger operators with $\mathrm{d}elta'$-interactions on Cantor-type sets,
J. Differential Equations 257 (2014), 415--449.
\bibitem{EH87}
W.D.~Evans and D.J~Harris,
Sobolev embeddings for generalized ridged domains,
Proc. London Math. Soc. (3) 54 (1987), 141--175.
\bibitem{Ex95a}
P.~Exner,
Lattice Kronig-Penney models, Phys. Rev. Lett. 74 (1995), 3503--3506.
\bibitem{Ex95b}
P.~Exner,
The absence of the absolutely continuous spectrum for $\mathrm{d}elta'$ Wannier-
Stark ladders,
J. Math. Phys. 36 (1995), 4561--4570.
\bibitem{Ex96}
P.~Exner,
Contact interactions on graph superlattices,
J. Phys. A 29 (1996), 87--102.
\bibitem{EJ13}
P.~Exner and M.~Jex,
Spectral asymptotics of a strong $\mathrm{d}elta'$ interaction on a planar loop,
J. Phys. A 46 (2013), 345201.
\bibitem{EJ14}
P.~Exner and M.~Jex,
Spectral asymptotics of a strong $\mathrm{d}elta'$ interaction supported by a surface,
Phys. Lett. A 378 (2014), 2091--2095.
\bibitem{EK15}
P.~Exner and A.~Khrabustovskyi,
On the spectrum of narrow Neumann waveguide with periodically distributed traps,
J. Phys. A: Math. Theor. 48 (2015), 315301.
\bibitem{EK18}
P.~Exner and A.~Khrabustovskyi,
Gap control by singular Schr\"odinger operators in a periodically structured metamaterial,
J. Math. Phys. Anal. Geom. 14 (2018), 270--285.
\bibitem{EL18}
P.~Exner and J.~Lipovsk\'y,
Smilansky-Solomyak model with a $\mathrm{d}elta'$-interaction,
Phys. Lett. A 382 (2018), 1207--1213.
\bibitem{ER16} P. Exner and J. Rohleder,
Generalized interactions supported on hypersurfaces,
J. Math. Phys. 57 (2016), 041507.
\bibitem{Fr79}
L.E.~Fraenkel,
On regularity of the boundary in the theory of Sobolev spaces,
Proc. London Math. Soc. (3) 39 (1979), 385--427.
\bibitem{G69}
M.M.~Gehtman,
On the question of the spectrum of selfadjoint extensions of a symmetric semibounded operator,
Dokl. Akad. Nauk SSSR 186 (1969), 1250--1252.
\bibitem{G70}
M.M.~Gehtman,
An investigation of the spectrum of certain nonclassical selfadjoint extensions of Laplace's operator,
Funkcional. Anal. i Prilo\v{z}en 4 (1970), 72.
\bibitem{GH87}
F.~Gesztesy and H.~Holden,
A new class of solvable models in quantum mechanics describing point interactions on the line,
J. Phys. A: Math. Gen. 20 (1987), 5157--5177.
\bibitem{GKZ92a}
F.~Gesztesy, W.~Karwowski, Z.~Zhao,
New types of soliton solutions, Bull. Amer. Math. Soc. (N.S.) 27 (1992), 266--272.
\bibitem{GKZ92b}
F.~Gesztesy, W.~Karwowski, Z.~Zhao,
Limits of soliton solutions,
Duke Math. J. 68 (1992), 101--150.
\bibitem{GZ94}
F.~Gesztesy and F.~Zhao,
Domain perturbations, Brownian motion, capacities, and ground states of Dirichlet Schr\"odinger operators, Math. Z. 215 (1994), 143--150.
\bibitem{GHKM80}
A.~Grossmann, R.~H{\o}egh-Krohn, and M.~Mebkhout,
A class of explicitly soluble, local, many-center Hamiltonians for one-particle quantum mechanics in two and three dimensions. I,
J. Math. Phys. 21 (1980), 2376--2385.
\bibitem{HKP97}
R.~Hempel, T.~Kriecherbauer, and P.~Plankensteiner,
Discrete and Cantor spectrum for Neumann Laplacians of combs,
Math. Nachr. 188 (1997), 141--168.
\bibitem{HSS91}
R.~Hempel, L.A.~Seco, and B.~Simon,
The essential spectrum of Neumann Laplacians on some bounded singular domains,
J. Funct. Anal. 102 (1991), 448--483.
\bibitem{I71}
V.A.~Ilʹin,
Conditions for the convergence of spectral decompositions that correspond to self-adjoint extensions of elliptic operators. I. Self-adjoint extension of the Laplace operator with a point spectrum,
Differencialʹnye Uravnenija 7 (1971), 670--710.
\bibitem{IOS89}
G.A.~Iosif'yan, O.A.~Oleinik, and A.S.~Shamaev,
On the limit behavior of the spectrum of a sequence of operators defined in different Hilbert spaces,
Russian Math. Surveys 44 (1989), 195--196.
\bibitem{JL16}
M.~Jex and V. Lotoreichik,
On absence of bound states for weakly attractive $\mathrm{d}elta'$-interactions supported on
non-closed curves in $\mathbb{R}^2$,
J. Math. Phys. 57 (2016), 022101.
\bibitem{J89}
S.~Jimbo,
The singularly perturbed domain and the characterization for the eigenfunctions with Neumann boundary condition,
J. Differ. Equations 77 (1989), 322--350.
\bibitem{K66}
T.~Kato,
Perturbation Theory for Linear Operators,
Berlin-Heidelberg-New York, Springer, 1966.
\bibitem{KP94}
A.A.~Kiselev, B.S.~Pavlov,
The essential spectrum of the Laplace operator of the Neumann problem in a model domain of complex structure,
Theoret. and Math. Phys. 99 (1994), 383--395.
\bibitem{KM10a}
A.~Kostenko and M.M.~Malamud,
Schr\"odinger operators with $\mathrm{d}elta'$-interactions and the
Krein-Stieltjes string, Dokl. Math. 81 (2010), 342--347.
\bibitem{KM10b}
A.~Kostenko and M.M.~Malamud,
1-D Schr\"odinger operators with local point interactions on a discrete set,
J. Differential Equations 249 (2010), 253--304.
\bibitem{KM14}
A.~Kostenko and M.M.~Malamud,
Spectral theory of semibounded Schr\"odinger operators with $\mathrm{d}elta'$-interactions,
Ann. Henri Poincar\'e 15 (2014), 501--541.
\bibitem{LR15}
V.~Lotoreichik and J.~Rohleder,
An eigenvalue inequality for Schr\"odinger operators with
$\mathrm{d}elta$ and $\mathrm{d}elta'$-interactions supported on hypersurfaces,
Oper. Theor. Adv. Appl. 247 (2015), 173--184.
\bibitem{MPS16}
A.~Mantile, A.~Posilicano, and M.~Sini,
Self-adjoint elliptic operators with boundary conditions on not closed hypersurfaces,
J. Differ. Equations 261 (2016), 1--55.
\bibitem{Ma68}
V.~Mazya, On Neumann’s problem for domains with irregular boundaries,
Siberian Math. J. 9 (1968), 990--1012.
\bibitem{Ma79}
W.~Mazja,
Einbettungssätze f\"ur Sobolewsche R\"aume. Teil 1, BSB B. G. Teubner Verlagsgesellschaft, Leipzig, 1979.
\bibitem{Ma80}
W.~Mazja,
Einbettungssätze f\"ur Sobolewsche R\"aume. Teil 2, BSB B. G. Teubner Verlagsgesellschaft, Leipzig, 1980.
\bibitem{Ma85}
V.G.~Mazya,
Sobolev spaces (Russian), Leningrad. Univ., Leningrad, 1985.
\bibitem{MP97}
V.G.~Mazya, S.V.~Poborchi,
Differentiable functions on bad domains, World Scientific Publishing Co., Inc., River Edge, NJ, 1997.
\bibitem{Ma11}
V.~Mazya, Sobolev spaces with applications to elliptic partial differential equations. Second, revised and augmented edition, Springer, Heidelberg, 2011.
\bibitem{MNP85}
V.G.~Mazya, S.A.~Nazarov, and B.A.~Plamenevskij,
Asymptotic expansions of the eigenvalues of boundary value problems for the Laplace operator in domains with small holes,
Math. USSR, Izv. 24(1985), 321--345.
\bibitem{McL00}
W.~McLean,
Strongly Elliptic Systems and Boundary Integral Equations,
Cambridge University Press, Cambridge, 2000.
\bibitem{M79}
V.A.~Mikhailets,
Selfadjoint extensions of operators with a prescribed spectrum asymptotics,
Dokl. Akad. Nauk Ukrain. SSR Ser. A 5 (1979), 338--341, 398.
\bibitem{M96}
V.A.~Mikhailets,
Schr\"odinger operator with point $\mathrm{d}elta'$-interactions,
Dokl. Math. 348 (1996), 727--730.
\bibitem{N30}
J. von Neumann,
Allgemeine Eigenwerttheorie hermitescher Funktionaloperatoren,
Math. Annalen 102 (1930), 49--131.
\bibitem{Oz81}
S.~Ozawa, Singular variation of domains and eigenvalues of the Laplacian, Duke Math. J. 48 (1981), 767--778.
\bibitem{PPW56}
L.E.~Payne, G.~P\'olya, and H.F.~Weinberger,
On the ratio of consecutive eigenvalues,
J. Math. and Phys. 35 (1956), 289--298.
\bibitem{P06}
O.~Post,
Spectral convergence of quasi-one-dimensional spaces,
Ann. Henri Poincar\'e 7 (2006), 933--973.
\bibitem{RT75}
J.~Rauch and M.~Taylor,
Potential and scattering theory on wildly perturbed domains,
J. Funct. Anal. 18 (1975), 27--59.
\bibitem{RS72}
M.~Reed and B.~Simon,
Methods of Modern Mathematical Physics. I. Functional Analysis, Academic Press, New York--London, 1972.
\bibitem{RS78}
M.~Reed and B.~Simon,
Methods of Modern Mathematical Physics. IV. Analysis of Operators,
Academic Press, New York--London, 1978.
\bibitem{RS79}
M.~Reed and B.~Simon,
Methods of Modern Mathematical Physics. III. Analysis of Operators,
Academic Press, New York--London, 1979.
\bibitem{Sch12}
K.~Schm\"udgen, Unbounded Self-adjoint Operators on Hilbert space,
Springer, Dordrecht, 2012.
\bibitem{Si78}
B.~Simon,
A canonical decomposition for quadratic forms with applications to monotone convergence theorems,
J. Funct. Anal. 28 (1978), 377--385.
\bibitem{Si92}
B.~Simon,
The Neumann Laplacian of a jelly roll,
Proc. Amer. Math. Soc. 114 (1992), 783--785.
\bibitem{T72}
H. Triebel,
H\"{o}here Analysis,
VEB Deutscher Verlag der Wissenschaften, Berlin, 1972.
\bibitem{ZSY17}
J.~Zhao, G.~Shi, and J.~Yan, Discreteness of spectrum for Schr\"odinger operators with $\mathrm{d}elta'$-type conditions on infinite regular trees,
Proc. Roy. Soc. Edinburgh Sect. A 147 (2017), 1091--1117.
\end{thebibliography}
\end{document}
\begin{document}
\newtheorem{Theorem}{Theorem}
\newtheorem{Lemma}[Theorem]{Lemma}
\newtheorem{Proposition}[Theorem]{Proposition}
\newtheorem{Corollary}[Theorem]{Corollary}
\newtheorem{Definition}[Theorem]{Definition}
\newtheorem{Notation}[Theorem]{Notation}
\newtheorem{Example}[Theorem]{Example}
\newtheorem{Remark}[Theorem]{Remark}
\newtheorem{Remarks}[Theorem]{Remarks}
\newcommand\N{\mathbb{N}}
\newcommand\Z{\mathbb{Z}}
\newcommand\R{\mathbb{R}}
\title{A New Algorithm for Approximating the Least Concave Majorant}
\author{Martin Franc\r{u}, Prague, Ron Kerman, St. Catharines, Gord Sinnamon, London}
\abstract
The least concave majorant, $\hat F$, of a continuous function $F$ on a closed interval, $I$,
is defined by
\[
\hat F (x) = \inf \left\{ G(x): G \geq F, G \mbox{ concave}\right\},\; x \in I.
\]
We present here an algorithm, in the spirit of the Jarvis March, to approximate the least concave majorant of a differentiable piecewise polynomial function of degree at most three on $I$. Given any function $F \in \mathcal{C}^4(I)$, it can be well-approximated on $I$ by a clamped cubic spline $S$. We show that $\hat S$ is then a good approximation to $\hat F$.
We give two examples, one to illustrate, the other to apply our algorithm.
\endabstract
\keywords
least concave majorant, level function, spline approximation
\endkeywords
\subjclass
26A51, 52A41, 46N10
\endsubjclass
\thanks
The first-named author was supported by the grant SVV-2016-260335 and by the grant P201/13/14743S of the Grant Agency of the Czech Republic. NSERC support is gratefully acknowledged.
\endthanks
\section{Introduction}
Suppose $F$ is a continuous function on the interval $I = [a,b]$. Denote by $\hat F$ the least concave majorant of $F$, namely,
\[
\hat F (x) = \inf \left\{ G(x) : G \geq F, G \mbox{ concave }\right\},
\]
which can be shown to be given by
\[
\hat F (x) = \sup \left\{ \frac{\beta - x} {\beta - \alpha} F (\alpha ) + \frac{ x - \alpha }{\beta - \alpha} F (\beta) : a \leq \alpha \leq x \leq \beta \leq b \right\} , \; x \in I.
\]
This concave function has applications in such diverse areas as Mathematical Economics, Statistics, and Abstract Interpolation Theory.
See, for example, \cite{Deb1976}, \cite{Car2002}, \cite{Pee1970}, \cite{BruKru1991}, \cite{MasSin2006} and \cite{KerMilSin2007}.
We observe that $\hat F$ is continuous on $I$ and it is differentiable there when $F$ is.
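Although this paper works with exact piecewise cubics, the defining supremum also suggests a naive numerical baseline for comparison: sample $F$ on a grid and interpolate the upper concave hull of the sampled points. The sketch below is not part of the algorithm of this paper; the function name, and the use of numpy, are our own.

```python
import numpy as np

def least_concave_majorant_grid(x, y):
    """Approximate the least concave majorant of F on a grid.

    x : increasing sample points in [a, b];  y : F evaluated at x.
    The least concave majorant of the sampled points is the piecewise
    linear interpolant of their upper concave hull, built here by a
    single monotone-chain sweep.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    hull = []  # indices of the vertices of the upper hull
    for i in range(len(x)):
        while len(hull) >= 2:
            i1, i2 = hull[-2], hull[-1]
            # If (x[i2], y[i2]) lies on or below the chord from i1 to i,
            # it cannot be a vertex of a concave majorant: discard it.
            cross = (x[i2] - x[i1]) * (y[i] - y[i1]) \
                  - (y[i2] - y[i1]) * (x[i] - x[i1])
            if cross >= 0:
                hull.pop()
            else:
                break
        hull.append(i)
    return np.interp(x, x[hull], y[hull])
```

This grid surrogate converges only as fast as the mesh allows, which is precisely what the exact algorithm of this paper avoids.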
Our aim in this paper is to give a new algorithm to approximate $\hat F$, together with an estimate of the error entailed. If $F$ is a continuous or, stronger yet, a differentiable piecewise polynomial of degree at most three, then so is $\hat F$. If not, then $F$ may be approximated by a clamped cubic spline and the least concave majorant of the approximating function is seen to be a good approximation to $\hat F$. To estimate the error in Theorem~\ref{Theorem53} below we use a known result for the approximation error involving such cubic splines from \cite{HalMey1976}, together with a new result on $(\hat F)'$, which in \cite[p.70]{Lor1953} and \cite{Halperin1953} is denoted by $(F')^{\circ}$ and is referred to as the level function of $F'$ in the unweighted case.
See the aforementioned Theorem \ref{Theorem53}.
\par
The simple structure of $\hat F$ will be the basis of our algorithm. Since $F$ and $\hat F$ are continuous, the zero set, $Z_F$, of $\hat F - F$ is closed; of course, $\hat F = F$ on $Z_F$. The connected components of $Z_F^c$ are intervals open in the relative topology of $I$ on which $\hat F$ is a strict linear majorant of $F$; indeed, if, for definiteness, the component interval with endpoints $\alpha$ and $\beta$ is a subset of the interior of $I$, then
\begin{equation}\label{1}
\hat F (\alpha) =F(\alpha), \; \hat F (\beta) = F (\beta),
\end{equation}
\begin{equation}\label{2}
F(x) < \hat F(x) = F(\alpha) + (x - \alpha ) \frac{F(\beta) - F (\alpha)}{\beta - \alpha}, \, \alpha < x < \beta,
\end{equation}
and, if $F$ is differentiable on $I$,
\begin{equation}\label{3}
(\hat F)' (\alpha) = F'(\alpha) = \frac{F(\beta) - F (\alpha)}{\beta - \alpha} = F'(\beta) = (\hat F)'(\beta).
\end{equation}
Our task is thus to find the component intervals of $Z_F^c$. This will be done using a refinement of the Jarvis March algorithm; see \cite{Jar1973}. To begin, we determine the set of points, $D$, at which $F$ attains its maximum value, $M$, and then take $C=[c_1,c_2]$ to be the smallest closed interval containing $D$. Of course, in many cases $D$ consists of one point and $c_1 = c_2$.
\par
It turns out that $\hat F$ increases to $M$ on $[a,c_1]$, is identically equal to $M$ on $C$, then decreases on $[c_2, b]$.
\par
To describe in general terms how the algorithm works we focus on $[a,c_1]$, $a < c_1$, and take $F$ to be a differentiable function which is piecewise cubic. As such, there is a partition, $P$, of $[a,c_1]$ on each subinterval of which $F$ is a cubic polynomial. By refining the partition, if necessary, to include critical points and points of inflection of $F$, we may assume that this polynomial is either strictly concave, linear or strictly convex and is either increasing or decreasing on its subinterval. It is the subintervals where the associated cubic polynomial is increasing and strictly concave that are of interest. It is important to point out that for a piecewise cubic function, $Z_F^c$ has only finitely many components.
\par
Now, $\hat F$ on a component of $Z_F^c$ may be thought of as a kind of linear bridge over a convex part of $F$. With this in mind, we call an interval, say $J = (\alpha, \beta)$, a bridge interval if, on it, $F$ satisfies
\begin{equation}\label{bridge2}
F(x) < F(\alpha) + (x - \alpha ) \frac{F(\beta) - F (\alpha)}{\beta - \alpha}, \, \alpha < x < \beta,
\end{equation}
and
\begin{equation}\label{bridge3}
F'(\alpha) = \frac{F(\beta) - F (\alpha)}{\beta - \alpha} = F'(\beta).
\end{equation}
We include endpoints of $I$ as possible endpoints of bridge intervals. In such case, the corresponding part of (\ref{bridge3}) is omitted.
An illustrative example of bridge intervals and the least concave majorant of a function can be found in Figure~\ref{figure6}. The reader may find it helpful to consult Example 1 in Section~\ref{section7}, where the algorithm is applied to a particular spline, while reading the formal description of the algorithm.
\par
Proceeding systematically from $c_1$ to $a$ (the procedure from $c_2$ to $b$ is similar) our algorithm determines, in a finite number of steps, a finite number of pairwise disjoint bridge intervals with endpoints in the intervals of increasing strict concavity referred to in the above paragraph.
The desired components are among these bridge intervals.
The technical details of all this are elaborated in Section \ref{section2}. Results stated in that section are proved in the next one and the algorithm itself is justified in the one following that.
Remarks on the implementation of the procedure are made in Section \ref{section5}. Section \ref{section6} has estimates of the error incurred when approximating an absolutely continuous function by a clamped cubic spline, while in the final section two examples are given.
\section{The algorithm}\label{section2}
In this section we describe our algorithm in more detail. This will require us to first state some lemmas whose proof will be given in the next section.
\par
Suppose that $F$ is a continuous function on some interval $I = [a,b]$ and let $\hat F, Z_F^c, M, D$ and $C= [c_1, c_2]$ be as in the introduction.
\begin{Lemma}\label{SeptemberLemma1}
If $F$ is a continuous function on $I$, then the least concave majorant, $\hat F$, of $F$ on $I = [a,b]$ is continuous on $I$, with $\hat F(a) = F(a)$ and $\hat F(b) = F(b)$. Moreover, on each component interval, $J$, of $Z_F^c$, with endpoints $\alpha$ and $\beta$, $\hat F$ is the linear function, $l$, interpolating $F$ at the points $\alpha$ and $\beta$.
\end{Lemma}
\begin{Lemma}\label{SeptemberLemma2}
Suppose $F$ is differentiable on $(a,b)$ and $(\alpha,\beta)$ is a component of $Z_F^c$. Then $\hat F$ is differentiable on $(a,b)$, $(\hat F)'(x)=F'(x)$ for $x\in (a,b)\cap Z_F$, and $(\hat F)'(x)=\frac{F(\beta)-F(\alpha)}{\beta-\alpha}$ for $x\in [\alpha,\beta]$. In particular,
$
F'(x)=(\hat F)'(x)=\frac{F(\beta)-F(\alpha)}{\beta-\alpha}
$
if $x=\alpha\in (a,b)$ or $x=\beta\in (a,b)$. Moreover, if $F'$ is continuous on $(a,b)$, then so is $(\hat F)'$.
\end{Lemma}
\begin{Lemma}\label{OldLemma1}
Let $F$ be a continuous function on $I$. Then $\hat F \equiv M$ on $C$. Moreover, $\hat F$ is strictly increasing on $(a,c_1)$ and strictly decreasing on $(c_2, b)$.
\end{Lemma}
\begin{Lemma}\label{observation}
Let $F$ be a continuous function, suppose $C=[c_1,c_2]$ is as in the introduction, and suppose $x,y,z \in (a,b)$ are such that $F$ is strictly convex on $(x,z)$ and $y \in (x,z)$. Then $F(y) \neq \hat F (y)$.
Suppose $F$ is differentiable as well. If $y \in (a, c_1)$ and $F'(y) \leq 0 $ then $F(y) \neq \hat F(y)$. Analogously, if $y \in (c_2,b)$ and $F'(y) \geq 0$ then $F(y) \neq \hat F(y)$.
\end{Lemma}
\begin{Lemma}\label{OldLemma2}
Let $F$ be a continuous function. If $J = (\alpha, \beta)$ is a component interval of $Z_F^c$ then either $J \subset (a,c_1)$, $J \subset (c_1, c_2)$ or $J \subset (c_2, b)$.
\end{Lemma}
Suppose that $F$ is piecewise cubic and differentiable on $I$, and suppose $J \subset [a,c_1]$.
Denote by $P$ the closed intervals determined by the partition of $[a,c_1]$ inherited from the piecewise cubic structure of $F$, together with any critical points and points of inflection of $F$ in $[a,c_1]$.
\par
\begin{Lemma}\label{OldLemma3}
Suppose that $F$ is piecewise cubic and differentiable on $I$. Let $J = (\alpha, \beta) \subset [a,c_1)$ be a component interval of $Z_F^c$. Then, either $\alpha = a$ or there is an interval $K = [k_1, k_2]$ in $P$ containing $\alpha$ on which $F$ is strictly concave and increasing. Similarly, either $\beta = c_1$ or there is an interval $L = [l_1, l_2]$ in $P$ containing $\beta$ on which $F$ is strictly concave and increasing. Moreover, $K \neq L$.
\end{Lemma}
\par
Leaving aside the case $c_1 = c_2 = b$, let $\mathcal{P}$ be the collection of intervals in $P$ on which $F$ is strictly concave and increasing. Our goal is to select the components of $Z_F^c$ from among the bridge intervals of the form $[a,b_1)$ or $(a_1,b_1)$, $a_1 > a$, such that $a_1$ and $b_1$ lie in distinct intervals of $\mathcal{P}$ with disjoint interiors.
\par
Given a pair of intervals in $\mathcal{P}$ that could have the endpoints of a bridge interval in them, one determines those endpoints, if they exist, by the study of a certain sextic polynomial equation. The details of the most complicated case are described in the following lemma.
\begin{Lemma}\label{OldLemma4}
Let $L = [l_1, l_2]$ and $R= [r_1, r_2]$ belong to $\mathcal{P}$ with $l_2 \leq r_1$. Suppose
\[
F(x) = \begin{cases}
P_L (x) &= A x^3 + B x^2 + C x + D \mbox{ on } L\\
P_R (x) &= W x^3 + X x^2 + Y x + Z \mbox{ on } R,\\
\end{cases}
\]
with $AW \neq 0$. Assume
\[
J = P_L' (L) \cap P_R' (R) \neq \emptyset.
\]
Then, if there is a bridge interval $I_1 = (a_1, b_1)$ with $a_1 \in L$ and $b_1 \in R$, this bridge interval is such that
\begin{equation}
a_1 = (P_L')^{-1}(y_0) \mbox{ and } b_1 = (P_R')^{-1}(y_0),
\end{equation}
where $y_0$ is a point in $J$ satisfying the sextic equation
\[
(\mu^2_1 - \mu^2_2 \gamma - \mu^2_3 \delta )^2 - 4 \mu^2_2 \mu^2_3 \gamma \delta = 0,
\]
in which
\[
\gamma = 3 A y + B^2 - 3 AC ,\; \delta = 3 W y + X^2 - 3 W Y, \; \mu_2 = \frac{- 2 \gamma} {27 A^2 }, \; \mu_3 = \frac{2 \delta} {27 W^2 }
\]
and
\[
\mu_1 = \frac{1}{3} \left( \frac{X}{W} - \frac{B}{A} \right)y + \left( Z + \frac{2 X^3}{27 W^2} - \frac{Y X}{3 W} \right) - \left( D + \frac{2 B^3}{27A^2} - \frac{BC}{3A} \right).
\]
\end{Lemma}
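Numerically, the sextic can be assembled and solved with ordinary polynomial arithmetic: $\gamma$, $\delta$, $\mu_1$, $\mu_2$, $\mu_3$ are polynomials in the slope variable $y$, and the sextic arises from squaring the tangency relation $\mu_1 = \pm\mu_2\sqrt{\gamma} \pm \mu_3\sqrt{\delta}$ twice. A hedged sketch (function name and the use of numpy are our own, not the authors'):

```python
import numpy as np
from numpy.polynomial import Polynomial

def bridge_slope_candidates(A, B, C, D, W, X, Y, Z):
    """Candidate common slopes y0 for a bridge interval joining the
    cubic pieces P_L = A x^3 + B x^2 + C x + D and
    P_R = W x^3 + X x^2 + Y x + Z (with A*W != 0), returned as the
    real roots of the sextic of the lemma.  Each candidate must still
    be checked against the bridge conditions and the slope ranges
    P_L'(L) and P_R'(R); the sextic is only a necessary condition.
    """
    y = Polynomial([0.0, 1.0])                  # the slope variable
    gamma = 3*A*y + (B**2 - 3*A*C)
    delta = 3*W*y + (X**2 - 3*W*Y)
    mu2 = -2*gamma / (27*A**2)
    mu3 = 2*delta / (27*W**2)
    mu1 = (1/3)*(X/W - B/A)*y \
        + (Z + 2*X**3/(27*W**2) - Y*X/(3*W)) \
        - (D + 2*B**3/(27*A**2) - B*C/(3*A))
    sextic = (mu1**2 - mu2**2*gamma - mu3**2*delta)**2 \
        - 4*mu2**2*mu3**2*gamma*delta
    return sorted(r.real for r in sextic.roots() if abs(r.imag) < 1e-6)
```

As a sanity check, for two translated copies of the same concave bump, say $P_L(x) = 3x - x^3$ and $P_R(x) = P_L(x-4)$, one can verify that the double tangent has slope $y_0 = 0$ and touches at $a_1 = 1$, $b_1 = 5$; the sextic indeed has a root at $y_0 = 0$, along with spurious roots corresponding to other sign choices of the square roots.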
The verification that a given interval $J = (\alpha,\beta)\subset (a,c_1)$ satisfies condition (\ref{bridge2}) can be achieved using the following criterion:
Assume that $\alpha \in L = [l_1, l_2] \in \mathcal{P}$, $\beta \in R = [r_1, r_2] \in \mathcal{P}$, $l_2 < r_1$, and that $l$ is a linear function interpolating $F$ on $J$.
Then $J$ satisfies (\ref{bridge2}) if, for every $K = [k_1,k_2]$ in $P$, with $K \subset [l_2 ,r_1]$,
\[
l(k_1) - F(k_1) > 0 \mbox{ and } l(k_2) - F(k_2)>0,
\]
and, in addition, if $K \in \mathcal{P}$, then
\[
l(\varrho) - F(\varrho) >0
\]
for any root, $\varrho$, in $K$ of the quadratic
\[
F'(x) = \frac{F(\beta) - F(\alpha)}{\beta - \alpha}.
\]
Obvious modifications of the above must also hold for $[\alpha, l_2]$ and $[r_1, \beta]$. This criterion can be proved using elementary calculus.
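The criterion requires only polynomial evaluations and the real roots of one quadratic per piece. As a hedged illustration (not the authors' code; the function name and the representation of $F$ as a list of numpy `Polynomial` pieces are our own):

```python
from numpy.polynomial import Polynomial

def chord_strictly_above(pieces, alpha, beta):
    """Test condition (bridge2) for the chord l joining
    (alpha, F(alpha)) and (beta, F(beta)).

    `pieces` is a list of triples (lo, hi, p), with p a Polynomial
    giving F on [lo, hi] and consecutive pieces agreeing at shared
    knots.  Following the criterion, l - F is checked at the knots
    interior to (alpha, beta) and at the real roots of the quadratic
    F'(x) = slope inside each piece.
    """
    def F(x):
        return next(p(x) for lo, hi, p in pieces if lo <= x <= hi)

    slope = (F(beta) - F(alpha)) / (beta - alpha)

    def l(x):
        return F(alpha) + slope * (x - alpha)

    eps = 1e-9  # keep the tangency points alpha, beta out of the test
    for lo, hi, p in pieces:
        for knot in (lo, hi):                # partition endpoints
            if alpha < knot < beta and l(knot) - p(knot) <= 0:
                return False
        q = p.deriv() - slope                # stationary points of l - F
        if q.degree() >= 1:
            for r in q.roots():
                t = r.real
                if abs(r.imag) < 1e-12 and \
                        max(lo, alpha) + eps < t < min(hi, beta) - eps:
                    if l(t) - p(t) <= 0:
                        return False
    return True
```

Checking the quadratic roots in every piece, rather than only in the pieces of $\mathcal{P}$, tests a superset of points, so the criterion remains valid.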
We are now able to describe an iterative procedure that selects the component intervals of $Z_F^c$ from a class of bridge intervals. We will focus our description on the case of finding all component intervals contained in $(a,c_1)$, as the case in which the component intervals are contained in $(c_2, b)$ is analogous, while the component intervals in $(c_1, c_2)$ are determined trivially by Lemma~\ref{OldLemma1}.
If $a = c_1$, then there is no such component interval. In the following, we exclude, at first, the case $c_1 = c_2 = b$, so that $c_1 < b$.
Set $\mathcal{P}_0 = \mathcal{P}$.
We claim that $\mathcal{P}_0$ cannot be empty. As a consequence of Lemma~\ref{OldLemma2} we have $\hat F (c_1) = F (c_1)$, since $c_1$ cannot lie in the interior of any component interval. The point $c_1$ is a local maximum of $F$. The choice of $P$ ensures that there is
an interval $(x,c_1)$ on which $F$ is increasing and concave, hence $\mathcal{P}_0$ must contain at least one interval.
Assume $\mathcal{P}_0$ has exactly one interval. The fact that $c_1$ is a local maximum of $F$ ensures that this interval is of the form $[x,c_1)$. Suppose first that $x = a$. Then $F= \hat F$ on $[a,c_1]$, since the function
\[
m(t) = \begin{cases}
F(t), \; t \in [a,c_1], \\
M, \; t \in (c_1, b],
\end{cases}
\]
is a concave majorant of $F$. (It is a concave function extended linearly with slope that of the tangent line at the endpoint.)
Suppose now that $x \neq a$. We have $F \neq \hat F$ on $(a,x)$: if there were $y \in (a,x)$ such that $F(y) = \hat F(y)$, then $F$ would have to be increasing and strictly concave on some neighbourhood of $y$, by Lemma~\ref{observation} and Lemma~\ref{OldLemma3}. This contradicts the assumption that $[x,c_1]$ is the only interval in $\mathcal{P}_0$. Since $F \neq \hat F$ on $(a,x)$, there must be a component interval containing $(a,x)$. On the other hand, Lemma~\ref{OldLemma1} implies that $F(c_1) = \hat F(c_1)$, hence this component interval must be a subset of $(a,c_1)$.
The desired component interval is of the form $(a, \beta)$, $\beta \in [x, c_1)$. If we choose $\beta$ to be the unique solution to the equation
\[
F'(\beta) = \frac{F(\beta) - F(a)}{\beta -a},
\]
then the interval $(a,\beta)$ will be the component interval, since it is the only interval which satisfies the necessary conditions (\ref{3}).
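The equation above for $\beta$ can be solved by any bracketing method once a sign change of $g(t) = F'(t)(t-a) - \bigl(F(t)-F(a)\bigr)$ is located. A minimal bisection sketch (names hypothetical; $F$ and $F'$ passed as callables):

```python
def right_endpoint_from_a(F, dF, a, lo, hi, tol=1e-12):
    """Solve F'(beta) = (F(beta) - F(a)) / (beta - a) for beta in
    [lo, hi] by bisection, assuming g(t) = dF(t)*(t - a) - (F(t) - F(a))
    changes sign on the bracket [lo, hi]."""
    def g(t):
        return dF(t) * (t - a) - (F(t) - F(a))

    glo = g(lo)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if glo * g(mid) <= 0:
            hi = mid                    # sign change lies in [lo, mid]
        else:
            lo, glo = mid, g(mid)       # sign change lies in [mid, hi]
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)
```

For instance, with $F(t) = 3t - t^3$ and $a = -1.5$, bracketing on $[0,1]$ recovers $\beta = 3/4$, where the chord from $(a, F(a))$ is tangent to the graph.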
\par
Suppose next that $\mathcal{P}_0$ has at least two intervals and take $R = [r_1 ,r_2]$ to be that interval in $\mathcal{P}_0$ closest to $c_1$.
\par
We seek first a component interval of the form $(a,r)$, $r \in R$, as if $R$ were the only interval in $\mathcal{P}_0$. If no such interval exists, let $L = [l_1, l_2]$ be the interval in $\mathcal{P}_0$ closest to $a$, then use Lemma \ref{OldLemma4} to test for a bridge interval $W = (w_1, w_2)$ with $w_1 \in L$ and $w_2 \in R$.
It is important to point out that Lemma~\ref{OldLemma4} only places a restriction on bridge intervals, it does not guarantee them. Once the sextic is solved, condition (\ref{2}) must still be verified for the proposed bridge interval. This means iterating through each partition subinterval contained in the proposed bridge interval and solving a maximum problem to verify that $F$ lies underneath the proposed linear $\hat F$.
In a true Jarvis March, points, rather than intervals, are ordered according to the angle of a tangent line. In the case of intervals associated to piecewise {\it cubic} functions such an ordering is computationally expensive.
Should there be no such $W$ carry out the same test on the interval in $\mathcal{P}_0$ closest to the right of $L$, if one exists.
\par
If, in moving systematically to the right in this way, we find no $W$, we discard $R$ from $\mathcal{P}_0$ to get $\mathcal{P}_1$ and repeat the above procedure.
\par
If, on the contrary, we find such a $W$, it will be a component interval. Say $w_1 \in N=[n_1, n_2]$, $N \in \mathcal{P}_0$.
\par
We next form $\mathcal{P}_1$ by discarding from $\mathcal{P}_0$ all intervals to the right of point $w_1$, for example $R$, and, in addition, replace $N$ by the interval $[n_1,w_1]$ (if $n_1 < w_1$, otherwise just discard $N$). We then carry out the above-described procedure with $\mathcal{P}_1$, if $\mathcal{P}_1 \neq \emptyset$.
\par
Continuing in this way we see that $\mathcal{P}_{n+1}$ has at least one less interval than $\mathcal{P}_n$, so the algorithm terminates after a finite number of steps.
\par
Finally, in the case $c_1 = c_2 = b$ there may be a component interval of $Z_F^c$ of the form $(r,b)$, $r \in [a,b)$. This may be found in a similar way as those of the form $(a,r)$.
\begin{Remark}
We now comment briefly on how one can modify our algorithm to deal with piecewise cubic functions that are only continuous. In this case the notion of a bridge interval has to be changed, since the function $F$ might not be differentiable at an endpoint of a component interval of $Z_F^c$, and hence that endpoint need not belong to an interval of strict concavity.
Accordingly, we say that $(\alpha, \beta)$ is a bridge interval if conditions (\ref{bridge2}) and (\ref{bridge3}) hold and, in addition,
\[
F'(\alpha - ) \geq \frac{ F (\beta) - F (\alpha)}{\beta - \alpha} \geq F'(\alpha+) \mbox{ and } F'(\beta -) \geq \frac{ F (\beta) - F (\alpha)}{\beta - \alpha} \geq F'(\beta +).
\]
\par
Again, Lemma \ref{OldLemma3} must be modified to compensate for the fact that $F$ need not be differentiable. To do this we allow for {\it three} possibilities, namely, $\alpha = a$, $\alpha$ is contained in an interval of strict concavity of $F$, {\it or} $\alpha$ is one of the points at which $F'(\alpha - ) > F'(\alpha +)$; a similar change must be made at $\beta$. These changes necessitate our including all points of discontinuity of $F'$ as degenerate intervals in $\mathcal{P}$.
\par
The iterations of our algorithm proceed much as in the differentiable case, with the difference that when some point, say $x$, is selected from $\mathcal{P}_i$ we must check if $(\alpha ,x)$ (or $(\beta, x)$ ) is a bridge interval in the new sense. This can be done in a manner similar to the one we described for determining if $(\alpha,\beta)$ is a bridge interval in the old sense.
\end{Remark}
\section{Proofs of Lemmas 1--7}\label{section3}
\begin{proof}[Proof of Lemma \ref{SeptemberLemma1}]
Since $\hat F$ is concave it is continuous on the interior of $I$. The continuity of $F$ at $a$ ensures that, for all $\varepsilon >0$, there exists a slope $m$ such that the graph of $F$ lies under the line
\[
l_a(x) = F(a) + m (x- a ) + \varepsilon.
\]
But then $l_a$ would be a concave majorant of $F$, so
\[
F(x) \leq \hat F(x) \leq l_a(x), \; x \in I.
\]
As $\varepsilon >0$ is arbitrary, $\hat F$ is continuous at $a$, with $\hat F(a) = F(a)$.
A similar argument shows $\hat F$ is continuous at $b$, with $\hat F(b) = F(b)$.
Let $J$ and $l$ be as in the statement of Lemma~\ref{SeptemberLemma1} and suppose $y$ is a point at which $F - l$ achieves its maximum value on $I$. Since $F$ lies below the line $l + F(y) - l(y)$, so does $\hat F$. In particular, $\hat F (y) \leq F(y)$, so $\hat F(y) = F(y)$ and hence $y \notin J^\circ$. But, $\hat F(\alpha) = F(\alpha)$ and $\hat F(\beta) = F(\beta)$, so, by concavity, $\hat F$ lies above $l$ on $J$ and below $l$ off $J^\circ$. Thus,
\[
F(y) - l(y) \leq \hat F(y) - l(y) \leq 0,
\]
whence
\[
F \leq l + F(y) - l(y) \leq l.
\]
This means $\hat F$ lies below $l$ on $J$. It follows that $\hat F = l$ on $J$.
\end{proof}
\begin{proof}[Proof of Lemma \ref{SeptemberLemma2}]
If $x \in (a,b)\cap Z_F$ then $\hat F (x) = F(x)$. Since $\hat F$ is a concave majorant of $F$, for any $w$ and $y$ satisfying $a <w < x < y < b$, we have
\[
\frac{F(y) - F(x) }{y-x} \leq \frac{\hat F(y) - \hat F(x)}{y-x} \leq \frac{\hat F(x) - \hat F(w) }{x-w} \leq \frac{F(x) - F(w)}{x-w} =\frac { F(w) - F(x) }{w-x}.
\]
Since $F$ is differentiable at $x$, the squeeze theorem shows that $(\hat F)'(x)$ exists and equals $F'(x)$.
Lemma~\ref{SeptemberLemma1} shows that, on $(\alpha,\beta)$, $\hat F$ is a line with slope $\frac{F(\beta)-F(\alpha)}{\beta-\alpha}$. So it is differentiable on $(\alpha, \beta)$ and has one-sided derivatives at the points $\alpha$ and $\beta$. If $\alpha$ or $\beta$ is in $(a,b)\cap Z_F$ the derivative of $\hat F$ exists there and, of course, coincides with its one-sided derivative. If $\alpha=a$ or $\beta=b$, the endpoints of the domain of $\hat F$, then $(\hat F)'$ is necessarily just a one-sided derivative. We conclude that $(\hat F)'=\frac{F(\beta)-F(\alpha)}{\beta-\alpha}$ on the closed interval $[\alpha, \beta]$.
Evidently, $(\hat F)'$ is continuous at each $x\in Z_F^c$. Suppose $F'$ is continuous at $x\in(a,b)\cap Z_F$. If $a < w < x <y < b$ then any component of $Z_F^c$ that intersects $(w,y)$ has at least one endpoint in $(w,y)$. It follows that $(\hat F)'(w,y) \subset F'(w,y)$. Since $F'$ is continuous at $x$, so is $(\hat F)'$.
\end{proof}
\begin{proof}[Proof of Lemma \ref{OldLemma1}]
To verify the first statement, one need only observe that between two points in $D$ (at which $F = M$) $\hat F = M$.
The second statement follows from a simple contradiction argument: Assume that there are $x_1, x_2 \in (a, c_1)$, $x_1 < x_2$, such that $\hat F (x_1) \geq \hat F(x_2)$. Then $\hat F (x_1) < \hat F (c_1)$ implies that
\[
\hat F(x_2) < \hat F(x_1) \frac{c_1 - x_2}{c_1 - x_1} + \hat F (c_1) \left( 1 - \frac{ c_1 - x_2}{c_1 - x_1}\right).
\]
But this contradicts the concavity of $\hat F$. Consequently, we have $\hat F (x_1) < \hat F (x_2)$. An analogous argument shows that $\hat F$ is strictly decreasing on $(c_2, b)$.
\end{proof}
\begin{proof}[Proof of Lemma~\ref{observation}]
The second part follows from Lemma~\ref{OldLemma1}, as $\hat F$ is strictly increasing on $(a,c_1)$ and strictly decreasing on $(c_2,b)$. This leads to a contradiction, since if $F(y) = \hat F (y)$ then $F'(y) = (\hat F)' (y)$: by Lemma~\ref{SeptemberLemma2} if $y$ is an isolated point of $Z_F$,
and trivially otherwise.
To prove the first part: suppose for contradiction that $\hat F (y) = F (y)$. Then
\[
\hat F(y) = F (y) \leq F(x) \frac{z-y}{z-x} + F(z) \frac{y-x}{z-x} \leq \hat F (x) \frac{z-y}{z-x} + \hat F(z) \frac{y-x}{z-x},
\]
which is in contradiction with the strict concavity of $\hat F$.
\end{proof}
\begin{proof}[Proof of Lemma \ref{OldLemma2}]
For $x$ in the bridge interval $J = (\alpha, \beta)$, condition (\ref{bridge2}) yields
\begin{eqnarray*}
F(x) &\leq& F (\alpha)\frac{\beta - x }{\beta - \alpha} + F (\beta ) \frac{x - \alpha }{\beta - \alpha}\\
&\leq& M,
\end{eqnarray*}
with equality only if $F (\alpha) = F(\beta) = M$. Thus, $J$ intersects $C$ only if both endpoints are contained in $C$. The conclusion follows.
\end{proof}
\begin{proof}[Proof of Lemma \ref{OldLemma3}]
When $a =\alpha$ or $b = c_1 = \beta$ there is nothing to prove. Assume first, then, that $\alpha > a$ and choose $K = [k_1, k_2] \in P$ such that $\alpha\in[k_1,k_2)$. For any $x\in J\cap(\alpha,k_2)$, Lemmas 2 and 3 combine to give,
\[
F(x)<\hat F(x)=F(\alpha)+(x-\alpha)F'(\alpha).
\]
Since $F$ lies below its tangent line, it is neither linear nor strictly convex on $[k_1,k_2]$.
Thus, $F$ must be strictly concave on $K$. Lemma~\ref{OldLemma1} implies that $\hat F$ is strictly increasing on $(a,c_1)$, hence $(\hat F)'(\alpha) > 0$. Lemma~\ref{SeptemberLemma2} yields that $(\hat F)'$ exists and $(\hat F)'(\alpha) = F' (\alpha)$. The choice of $P$ ensures that $F$ is monotone on $K$. Hence $F$ is increasing on $K$.
\par
A similar argument yields $F$ strictly concave and increasing on $L = [l_1,l_2]$ when $\beta < c_1$.
\end{proof}
\begin{proof}[Proof of Lemma \ref{OldLemma4}]
Since $F'$ is decreasing on $L$ and $R$, $J = [c,d]$, with $c = \max \left[F'(l_2), F'(r_2) \right]$ and $d = \min \left[F'(l_1), F'(r_1) \right]$.
\par
Now,
\[
F'(x) =
\begin{cases}
P_L' (x) = 3 A x^2 + 2 B x + C \mbox{ on } L,\\
P_R' (x) = 3 W x^2 + 2 X x + Y \mbox{ on } R,\\
\end{cases}
\]
with $F'$ decreasing on both intervals. So, the unique roots $a(y) \in L$ and $b(y) \in R$ of
\[
P_L' (a(y)) = y \mbox{ and } P_R' (b(y)) = y, \, y \in J,
\]
can be obtained from the formulas
\[
a(y) = - \frac{1}{3A} [B \pm \sqrt{3Ay + B^2 - 3AC}]
\]
and
\[
b(y) = -\frac{1}{3W} [X \pm \sqrt{3Wy + X^2 - 3WY}],
\]
the signs being chosen so that $a(y) \in L$ and $b(y) \in R$.
We now seek $y \in J$ so that
\[
\frac{F(b(y)) - F(a(y))}{b(y) - a (y) } = y
\]
or
\begin{equation}\label{(5)}
F(b(y)) - y b(y) - (F(a(y)) - y a (y) ) = 0.
\end{equation}
\begin{figure}
\caption{For each $y \in (P_L)'(L) \cap (P_R)'(R)$ there exists exactly one $a(y) \in L$ and $b(y) \in R$ such that $P_L' (a(y)) = P_R'(b(y)) = y$.}
\label{figureSexticWrong}
\end{figure}
\begin{figure}
\caption{There is a $y_0 \in (P_L)'(L) \cap (P_R)'(R)$ for which the corresponding $a(y_0)$ and $b(y_0)$ referred to in the caption of Figure~\ref{figureSexticWrong} satisfy equation (\ref{(5)}).}
\label{figureSexticRight}
\end{figure}
Figures~\ref{figureSexticWrong} and~\ref{figureSexticRight} illustrate the geometric meaning of equation (\ref{(5)}).
Letting
\[
\gamma(y) = 3Ay + B^2 - 3AC \mbox{ and } \delta(y) = 3Wy + X^2 - 3WY,
\]
equation (\ref{(5)}) is equivalent to
\begin{equation}\label{6}
\mu_1 + \mu_2 \sqrt{\gamma} + \mu_3 \sqrt{\delta} = 0,
\end{equation}
with $\mu_1, \mu_2, \mu_3$ linear functions of $y$, namely,
\[
\mu_2 (y) = - \frac{2 \gamma(y)}{27 A^2 }, \; \mu_3 (y) = \frac{2\delta(y)} {27 W^2}
\]
and
\[
\mu_1 (y) = \frac{1}{3} \left(\frac{X}{W} - \frac{B}{A} \right) y + \left( Z + \frac{2X^3}{27 W^2} - \frac{YX}{3W} \right) - \left( D + \frac{2 B^3}{27 A^2} - \frac{CB}{3A} \right).
\]
We claim the solution of (\ref{(5)}) is a root of the sextic polynomial equation
\begin{equation}\label{7}
(\mu_1^2 - \mu_2^2 \gamma - \mu_3^2 \delta )^2 - 4 \mu_2^2 \mu_3^2 \gamma \delta = 0.
\end{equation}
Indeed, isolating $\mu_1$ in (\ref{6}), then squaring both sides gives
\begin{equation}\label{8}
\mu_1^2 = \mu_2^2 \gamma + \mu_3^2 \delta + 2 \mu_2 \mu_3 \sqrt{\gamma} \sqrt{\delta}.
\end{equation}
Isolating the term in (\ref{8}) with the square roots and squaring both sides yields (\ref{7}).
\end{proof}
The following remark is given to make the appearance of the sextic equation seem more natural.
\begin{Remark}
Suppose, for definiteness, the $a(y)$ and $b(y)$ referred to in the proof of Lemma~\ref{OldLemma4} are given by
\[
a(y) = \frac{-B}{3A} + \frac{1}{3A} \sqrt{3Ay + B^2 - 3AC} \mbox{ and } b(y) = \frac{-X}{3W} + \frac{1}{3W} \sqrt{3Wy + X^2 - 3WY}.
\]
Then, equation (\ref{(5)}) can be written
\begin{multline*}
P_R \left( \frac{-X}{3W} + \frac{1}{3W} \sqrt{3Wy + X^2 - 3WY} \right) - P_L \left( \frac{-B}{3A} + \frac{1}{3A} \sqrt{3Ay + B^2 - 3AC} \right) \\
= y \left( \frac{-X}{3W} + \frac{B}{3A} + \frac{1}{3W} \sqrt{3Wy + X^2 - 3WY} - \frac{1}{3A} \sqrt{3Ay + B^2 - 3AC} \right).
\end{multline*}
In our original proof of Lemma~\ref{OldLemma4} we rearranged the terms in this version of (\ref{(5)}), then squared both sides. We repeated this procedure a few times to get rid of the square roots and so arrive at the sextic equation (\ref{7}).
\end{Remark}
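To see the construction in action, the following sketch works through a hypothetical pair of cubics (our own choice, not taken from the paper): it evaluates the closed-form roots $a(y)$ and $b(y)$ and locates, by bisection on $J$, the slope $y$ solving equation (\ref{(5)}).

```python
import math

# Hypothetical example cubics: F = P_L on L = [0.2, 0.8] and F = P_R on
# R = [2.2, 2.8]; F' is decreasing on both intervals, so J = [0.36, 0.96].
A, B, C, D = -1/3, 0.0, 1.0, 0.0         # P_L(x) = -x^3/3 + x
W, X, Y, Z = -1/3, 2.0, -3.0, 2/3 + 1.2  # P_R(x) = -(x-2)^3/3 + (x-2) + 1.2

def P_L(x): return A*x**3 + B*x**2 + C*x + D
def P_R(x): return W*x**3 + X*x**2 + Y*x + Z

# Closed-form roots of P_L'(a) = y and P_R'(b) = y, signs chosen so that
# a(y) lies in L and b(y) lies in R.
def a(y): return -(B + math.sqrt(3*A*y + B*B - 3*A*C)) / (3*A)
def b(y): return -(X + math.sqrt(3*W*y + X*X - 3*W*Y)) / (3*W)

def chord_gap(y):
    # Left-hand side of equation (5): F(b(y)) - y b(y) - (F(a(y)) - y a(y))
    return P_R(b(y)) - y*b(y) - (P_L(a(y)) - y*a(y))

# Bisection on J; chord_gap changes sign across J in this example.
lo, hi = 0.36, 0.96
for _ in range(60):
    mid = 0.5*(lo + hi)
    if chord_gap(lo)*chord_gap(mid) <= 0:
        hi = mid
    else:
        lo = mid
y0 = 0.5*(lo + hi)

# The chord from (a(y0), F(a(y0))) to (b(y0), F(b(y0))) has slope y0.
slope = (P_R(b(y0)) - P_L(a(y0))) / (b(y0) - a(y0))
print(y0, slope - y0)
```

In this symmetric example the gap is linear in $y$ and the bisection converges to the exact root $y_0 = 0.6$, with $a(y_0) = \sqrt{0.4}$ and $b(y_0) = 2 + \sqrt{0.4}$.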
\section{Justification of the algorithm}\label{section4}
The purpose of this section is to prove
\begin{Theorem}\label{OldTheorem5}
Let $F$ be a differentiable piecewise cubic function. Then the bridge intervals coming out of the algorithm are precisely the component intervals of $Z_F^c$.
\end{Theorem}
For simplicity, we consider only the components in $[a,c_1)$. We begin with the preparatory
\begin{Lemma}\label{OldLemma6}
Suppose that $F$ is absolutely continuous on $I_0 = [a,b]$. Let $I = (a_1, b_1 )$ be a bridge interval with right hand endpoint in an interval $R$ on which $F$ is strictly concave and increasing. If $J = (a_2, b_2)$ is another bridge interval such that $I \cap J \neq \emptyset$, $b_2 \in R$ and $b_1 < b_2$, then $a_2 < a_1$.
\end{Lemma}
\begin{proof}
Let
\[
l_I (x) = F(a_1) + (x-a_1) \frac{F(b_1) - F(a_1)}{b_1 - a_1}
\]
and, similarly,
\[
l_J (x) = F(a_2) + (x-a_2) \frac{F(b_2) - F(a_2)}{b_2 - a_2}.
\]
Assume, if possible, $a_1 < a_2$. Then, $a_2 < b_1$, otherwise $I\cap J = \emptyset$.
So,
\begin{equation}\label{lemma6contradiction}
l_J(a_2) = F(a_2) < l_I(a_2),
\end{equation}
since $I$ is a bridge interval. The latter also implies
\[
F(b_2) = l_I (a_2) + (b_1 - a_2) F'(b_1) + \int_{b_1}^{b_2} F'(t) \, d t;
\]
further, $J$ being a bridge interval, we have
\[
F(b_2) = l_J (a_2) + (b_2 - a_2)F'(b_2).
\]
Therefore,
\[
0 = l_J (a_2) - l_I (a_2) + (b_2 - a_2)F'(b_2) - (b_1 - a_2) F'(b_1) - \int_{b_1}^{b_2} F'(t) \, d t.
\]
The strict concavity of $F$ on $R$ ensures that $F'(t) > F'(b_2)$ for $t \in R$, $t < b_2$. Thus
\begin{eqnarray*}
l_I (a_2) - l_J (a_2) &=& (b_2 - a_2) F'(b_2) - (b_1 - a_2) F'(b_1 )- \int_{b_1}^{b_2} F'(t) \, d t \\
&<& (b_2 - a_2) F'(b_2) - (b_1 - a_2) F'(b_2 )- \int_{b_1}^{b_2} F'(t) \, d t \\
&=& (b_2 - b_1) F'(b_2) - \int_{b_1}^{b_2} F'(t) \, d t < 0.
\end{eqnarray*}
Consequently,
\[
l_I(a_2) - l_J (a_2) < 0,
\]
thereby contradicting (\ref{lemma6contradiction}).
\end{proof}
\begin{proof}[Proof of Theorem \ref{OldTheorem5}.]
As a consequence of Lemma~\ref{OldLemma2}, the component intervals split into three groups: those contained in $[a,c_1]$, those contained in $[c_1,c_2]$, and those contained in $[c_2,b]$. We begin by observing that the component intervals of $Z_F^c$ in $[a,c_1]$ are the maximal bridge intervals there.
\par
To show that every bridge interval coming out of the algorithm is a component interval of $Z_F^c$, fix an iteration of the procedure, say the $n$-th. Let $R=[r_1, r_2]$ be the
interval in $\mathcal{P}_n$ closest to $c_1$. According to Lemma~\ref{OldLemma6}, if there are bridge intervals with right-hand endpoint in $R$, the one closest to $c_1$ will be the bridge interval chosen by the algorithm and, moreover, will be a maximal bridge interval.
\par
We next prove \emph{all} component intervals of $Z_F^c$ (in $[a,c_1)$) come out of the algorithm. Assume, if possible, $M = (m_1, m_2)$ is a component not obtained by the algorithm. Let $S = [s_1, s_2]$ be that member of $\mathcal{P}$ such that $m_2 \in S$.
\par
Now, either $S$ was chosen as an $R$ in some iteration or it was not. If it was chosen and $M$ is not the bridge interval with right-hand endpoint in $S$ closest to $c_1$, then another bridge interval, $N = (n_1, n_2)$, is; in particular, $M$ and $N$ satisfy the hypotheses of Lemma \ref{OldLemma6}, with $m_2 < n_2$. We conclude $M \subset N$, which contradicts the maximality of $M$.
\par
Finally, suppose $S$ was not chosen. Then, there is a last iteration, say the $n$-th, such that
$S \in \mathcal{P}_n$. Let $T\in \mathcal{P}_n$ be the interval in $\mathcal{P}_n$ closest to $c_2$.
If $T$ does not contain the right-hand endpoint of a bridge interval, then $S$ will be chosen in the next iteration, which cannot be. So, let $N = (n_1, n_2)$ be a bridge interval, indeed a component interval of $Z_F^c$, having $n_2 \in T$. Now, $n_1$ cannot be to the right of $S$ as that would entail $S \in \mathcal{P}_{n+1}$. Again, $n_1$ cannot lie to the left of $S$ nor can we have $n_1 < m_2$, since either would contradict the maximality of $M$. The only possibility left is $n_1 \in S$, $n_1 \geq m_2$.
\par
Should we have $n_1 > s_1$, $M$ would arise from $[s_1, n_1]$ in the next iteration. This leaves the case $s_1 = m_2 = n_1$. All intervals in $\mathcal{P}_n$ contained in $[n_1, c_1] = [m_2, c_1]$ will be discarded at the end of the $n$-th step. But, according to Lemma \ref{OldLemma3}, there exists an interval in $\mathcal{P}_{n+1}$ with $m_2$ as its right-hand endpoint, which interval will be the one in $\mathcal{P}_{n+1}$ closest to $c_1$. As $m_2$ belongs to that interval, $M$ would come out of the $(n+1)$-th step of the algorithm, contrary to our assumption.
\end{proof}
\section{Implementation of the algorithm}\label{section5}
\par
In this section we discuss ways to make the algorithm more efficient. Suppose, then, that $F$ is a differentiable piecewise polynomial and that we are searching for component intervals contained in $[a,c_1]$. In a given iteration we have chosen the interval $R=[r_1, r_2]$ furthest to the right in the current version of $\mathcal{P}$ and we are about to seek endpoints of a bridge interval in it and in an appropriate interval $L$ to its left. It turns out we needn't do this for all $L$.
We have developed a few simple criteria to determine those $L$ which cannot contain the left endpoint of a bridge interval with right endpoint in $R$.
One natural test is to require of $L$ that $F'(L) \cap F'(R) \neq \emptyset$.
Lemma~\ref{Lemma7} below implies that there must be an intervening interval in $P$ between $L$ and $R$ on which $F$ is convex (or linear). We split the intervals in $\mathcal{P}$ into groups such that intervals in the same group are not separated by any intervening convex or linear interval. Then, bridge intervals cannot have endpoints in intervals from the same group. Consequently, $L$ is a viable candidate only if it belongs to a group other than that of $R$.
Moreover, for $L$ to be a viable candidate it must lie to the left of the set of points at which $F$ equals its maximum value on $[a,r_1]$. This is a consequence of Lemma~\ref{Lemma8}, as it ensures that otherwise no bridge interval has endpoints in $L$ and $R$.
Of course, there are more such criteria. We now state and prove the two Lemmas referred to above.
\begin{Lemma}\label{Lemma7}
Let $F$ be a differentiable piecewise polynomial function. Every bridge interval has to contain an interval from $P$ on which $F$ is not strictly concave.
\end{Lemma}
\begin{proof}
Suppose, for contradiction, that there is a bridge interval $B = (b_1, b_2)$ such that $F$ is strictly concave on $(b_1, b_2)$. Condition (\ref{bridge3}) then yields that $F'(b_1) = F'(b_2)$. At the same time, strict concavity of $F$ yields that $F'$ is strictly decreasing on $B$, which leads to a contradiction.
\end{proof}
\begin{Lemma}\label{Lemma8}
Assume $F$ is a cubic spline and
suppose $R=[r_1, r_2] \subset [a, c_1]$ is an interval on which $F$ is strictly concave and increasing, with $m_2 \in R$ such that $\hat F (m_2) = F(m_2)$. Given $s < r_1$ satisfying $F(s) = \max \left\{ F(x) : x \in [a,r_1] \right\}$ and an $m_1 < r_1$ for which $M = (m_1, m_2)$ is a component interval of $Z_F^c$, one has $m_1 \in [a,s]$.
\end{Lemma}
\begin{proof}
Assume, if possible, $m_1 \in (s,r_1]$. Then
$\hat F (s) \geq F(s) \geq F(m_1) = \hat F (m_1)$, by hypothesis, and $\hat F (m_2) > \hat F(m_1)$, since $\hat F$ is increasing on $[a, c_1]$ according to Lemma~\ref{OldLemma1}. Hence
\begin{eqnarray*}
\hat F(m_1)& <& \frac{m_2 - m_1} {m_2 -s } \hat F (s) + \frac{m_1 - s}{m_2 - s } \hat F (m_2) \\
&=& \hat F (s) + (m_1 -s ) \frac{\hat F(m_2) - \hat F (s) }{m_2 - s },
\end{eqnarray*}
which contradicts the concavity of $\hat F$.
\end{proof}
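The two screening tests above can be sketched in code. The interval data below are made up for illustration and are not taken from the paper: each partition interval carries a concavity label and the range of $F'$ on it, and a candidate $L$ survives only if its derivative range meets that of $R$ and some intervening interval is convex or linear.

```python
# Sketch of the candidate filters of this section (illustrative data):
# (name, kind, F' range as (min, max)), listed left to right; the last
# interval plays the role of R.
intervals = [
    ("I1", "concave", (0.0, 2.0)),
    ("J1", "convex",  (-1.0, 0.5)),
    ("I2", "concave", (3.0, 4.0)),
    ("I3", "concave", (0.5, 1.5)),
]
R = intervals[-1]

def overlaps(p, q):
    # two closed intervals intersect iff max of left ends <= min of right ends
    return max(p[0], q[0]) <= min(p[1], q[1])

viable = []
for idx, (name, kind, rng) in enumerate(intervals[:-1]):
    if kind != "concave":
        continue                      # L must be strictly concave (and increasing)
    if not overlaps(rng, R[2]):
        continue                      # test (i): F'(L) must meet F'(R)
    between = intervals[idx + 1:-1]
    if not any(k != "concave" for _, k, _ in between):
        continue                      # test (ii): need an intervening convex/linear interval
    viable.append(name)
print(viable)
```

Here $I_2$ is eliminated by the derivative-range test, while $I_1$ survives both tests, so only $I_1$ need be searched.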
\section{Error Estimates}\label{section6}
Given an absolutely continuous function $G$ on a closed interval $I$ of finite length, we choose $F$ to be the clamped cubic spline interpolating $G$ at the points of a partition $\varrho$ of $I$. This permits us to take advantage of the following special case of optimal error bounds for cubic spline interpolation obtained by Charles A. Hall and W. Weston Meyer in \cite{HalMey1976}.
\begin{Proposition}\label{Proposition51}
Suppose $G \in \mathcal{C}^4 (I)$ and let $\varrho := [x_0, \dots ,x_{n+1}]$ be a partition of $I$. Denote by $F$ the clamped cubic spline interpolating $G$ at the nodes of $\varrho$. Then,
\[
\left| G' (x) - F' (x)\right| \leq \frac{1}{24} \left\| G^{(4)} \right\|_\infty \left\| \varrho \right\|^3, \; x \in I,
\]
where $\| . \|_\infty$ denotes the usual supremum norm and
\[
\| \varrho \| := \sup \{ |x_k - x_{k-1} | : k = 1, \dots ,n+1\}.
\]
\end{Proposition}
To estimate the error involved in approximating the least concave majorant, we first consider the sensitivity of the level function to changes in the original function. We recall that the level function, $f^\circ$, of $f$ is given by $f^\circ = (\hat F)'$, where $F' = f$.
\begin{Theorem}\label{theorem52}
Suppose $F$ and $G$ are absolutely continuous functions defined on a finite interval $I$. Set $f = F'$ and $g = G'$, and denote by $f^\circ$ and $g^\circ$ the level functions of $f$ and $g$, respectively. Then $\hat F$ and $\hat G$ are also absolutely continuous on $I$, and
\[
\left\|f^\circ - g^\circ \right\|_\infty = \left\| (\hat F)' - (\hat G)' \right\|_\infty \leq \left\| f - g \right\|_\infty.
\]
Here $\hat F$ and $\hat G$ denote the least concave majorants of $F$ and $G$, respectively, and $f^\circ = (\hat F)'$, $g^\circ = (\hat G)'$.
\end{Theorem}
\begin{proof}
Set
\[
Z_F = \left\{ x\in I : F(x)=\hat F(x)\right\},\;
Z_G =\left\{x\in I : G(x)=\hat G(x)\right\}
\]
and observe that $f^\circ = f$ almost everywhere on $Z_F$ and $g^\circ = g$ almost everywhere on $Z_G$. By Lemma \ref{SeptemberLemma1}, $\hat F$ is continuous and is of constant slope on each component of the complement of $Z_F$. It follows that $\hat F$ is absolutely continuous on $I$. Since $\hat G$ is continuous and is of constant slope on each component of the complement of $Z_G$, $\hat G$ is absolutely continuous on $I$ as well.
We consider several cases to establish that $\left| f^\circ (x)-g^\circ (x)\right| \leq \left\| f - g\right\|_\infty$ for almost every $x \in I$.
\begin{itemize}
\item Case 1: $x\in Z_F$ and $x\in Z_G$. For almost every such $x$,
\[
\left| f^\circ (x) - g^\circ (x)\right| = \left| f(x) - g(x)\right| \leq \left\| f - g\right\|_\infty.
\]
\item Case 2: $x\in Z_G$ but $x\notin Z_F$. Then $x$ is in the interior of some component interval $[a,b]$ of $Z_F^c$. By Lemma \ref{SeptemberLemma1}, $\hat F(a) = F (a)$ and $\hat F(b) = F (b)$.
Since $\hat F$ has constant slope on $[a, b]$,
\[
\int^x_a f =F(x)-F(a) \leq \hat F (x)-\hat F(a) = (x-a)f^\circ (x)
\]
and
\[
\int^b_x f =F(b)-F(x) \geq \hat F(b)- \hat F(x)=(b-x)f^\circ (x).
\]
Also, since $\hat G(x) = G(x)$ and $g^\circ$ is non-increasing,
\[
\int_a^x g=G(x)-G(a) \geq \hat G (x) - \hat G (a) = \int_a^x g^\circ \geq (x-a) g^\circ (x)
\]
and
\[
\int_x^b g = G(b) - G(x) \leq \hat G(b) - \hat G (x) = \int_x^b g^\circ \leq (b-x) g^\circ (x).
\]
Combining these four inequalities, we obtain,
\begin{eqnarray*}
- \left\| f - g\right\|_\infty &\leq& \frac{1}{ x - a} \int_a^x (f - g) \leq f^\circ (x) - g^\circ (x) \\
&\leq& \frac{1}{b - x} \int_x^b (f -g) \leq \left\| f - g\right\|_\infty .
\end{eqnarray*}
Thus, $\left| f^\circ (x) - g^\circ (x)\right| \leq \left\| f - g\right\|_\infty$.
\item Case 3 : $x\in Z_F$ but $x\notin Z_G$. Just reverse the roles of $F$ and $G$ in Case 2.
\item Case 4: $x \notin Z_F$ and $x \notin Z_G$. Suppose without loss of generality that $g^\circ (x) \leq
f^\circ (x)$. Let $a$ be the left-hand endpoint of the component interval of $Z_G^c$ containing $x$, and let $b$ be the right-hand endpoint of the component interval of $Z_F^c$ containing $x$. By Lemma~\ref{SeptemberLemma1}, $\hat G(a) = G(a)$ and $\hat F(b) = F(b)$. Since $g^\circ $ is constant on $(a, x)$ and non-increasing on $(x, b)$ we have
\[
(b-a)g^\circ (x)\geq \int_a^b g^\circ = \hat G(b)- \hat G(a) \geq G(b) -G(a)= \int_a^b g.
\]
Since $f^\circ$ is non-increasing on $(a,x)$ and constant on $(x,b)$, we have
\[
(b-a)f^\circ (x)\leq \int_a^b f^\circ = \hat F(b) - \hat F(a) \leq F(b) - F(a)= \int_a^b f.
\]
Combining these, we have
\[
f^\circ (x) - g^\circ (x) \leq \frac{1}{b-a} \int_a^b (f-g) \leq \left\| f - g \right\|_\infty.
\]
\end{itemize}
This completes the proof.
\end{proof}
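The theorem can be checked numerically. For piecewise-linear $F$ and $G$ on a common grid, the least concave majorant is the piecewise-linear interpolant of the upper convex hull of the graph points, so the hull slopes are exactly $f^\circ$ and $g^\circ$; the sketch below (the grid functions are illustrative choices of ours, not taken from the paper) verifies that the sup-distance of the hull slopes is at most the sup-distance of the slopes themselves.

```python
import math

def upper_hull(xs, ys):
    """Indices of the vertices of the upper convex hull (= least concave majorant)."""
    hull = []
    for i in range(len(xs)):
        while len(hull) >= 2:
            o, a = hull[-2], hull[-1]
            # pop a if o -> a -> i is a non-clockwise (non-concave) turn
            cross = (xs[a] - xs[o])*(ys[i] - ys[o]) - (ys[a] - ys[o])*(xs[i] - xs[o])
            if cross >= 0:
                hull.pop()
            else:
                break
        hull.append(i)
    return hull

def majorant_slopes(xs, ys):
    """Slope of the least concave majorant on each grid interval."""
    h = upper_hull(xs, ys)
    slopes = [0.0]*(len(xs) - 1)
    for j, k in zip(h, h[1:]):
        s = (ys[k] - ys[j])/(xs[k] - xs[j])
        for i in range(j, k):
            slopes[i] = s
    return slopes

n = 400
xs = [3*i/n for i in range(n + 1)]
F = [math.sin(3*x) + x for x in xs]
G = [math.sin(3*x) + x + 0.05*math.sin(20*x) for x in xs]

dF = [(F[i+1] - F[i])/(xs[i+1] - xs[i]) for i in range(n)]
dG = [(G[i+1] - G[i])/(xs[i+1] - xs[i]) for i in range(n)]
sF = majorant_slopes(xs, F)
sG = majorant_slopes(xs, G)

lhs = max(abs(a - b) for a, b in zip(sF, sG))   # ||f° - g°||_∞
rhs = max(abs(a - b) for a, b in zip(dF, dG))   # ||f - g||_∞
print(lhs <= rhs + 1e-12)
```

The hull slopes are also non-increasing, reflecting the concavity of the majorant.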
The last result can be combined with Proposition~\ref{Proposition51} to give the desired error
estimates.
\begin{Theorem}\label{Theorem53}
Let $\varrho$ be a partition of the interval $[a,b]$ and suppose $G \in \mathcal{C}^4([a,b])$. Let $F$ be the clamped cubic spline interpolating $G$ on $\varrho$. Then
\[
\left\| f^\circ - g^\circ \right\|_\infty \leq \left\| f - g\right\|_\infty \leq \frac{1}{24} \left\| G^{(4)}\right\|_\infty \left\| \varrho \right\|^3
\]
and for each $x \in [a, b]$,
\[
\left| \hat F (x) - \hat G(x)\right| \leq \frac{\min \{ x-a,b-x \}}{24} \left\| G^{(4)}\right\|_\infty \left\| \varrho \right\|^3.
\]
Here $\hat F$ and $\hat G$ denote the least concave majorants of $F$ and $G$, respectively, and $f = F'$, $g = G'$; $f^\circ = (\hat F)'$, and $g^\circ = (\hat G)'$.
\end{Theorem}
\begin{proof}
The first inequality is just Theorem~\ref{theorem52} together with the result from \cite{HalMey1976}. For the second, observe that by Lemma~\ref{SeptemberLemma1}, $\hat F(a) = F(a)$ and $\hat G(a) = G(a)$, and since $a$ is in the partition $\varrho$, $G(a) = F(a)$. Thus, $\hat F(a) = G(a)$. Since both $\hat F$ and $\hat G$ are concave and hence absolutely continuous,
\[
\left| \hat F (x) - \hat G (x) \right| = \left| \int_a^x (f^\circ - g^\circ) \right| \leq \int_a^x \left\| f^\circ - g^\circ \right\|_\infty \leq \frac{x - a}{24} \left\| G^{(4)} \right\|_\infty \left\| \varrho \right\|^3.
\]
A similar argument, using integration on $[x, b]$, shows that
\[
\left| \hat F(x) - \hat G(x)\right| \leq \frac{ b - x}{24} \left\| G^{(4)}\right\|_\infty \left\| \varrho \right\|^3
\]
and completes the proof.
\end{proof}
\section{Examples}\label{section7}
We present here two examples involving our algorithm.
\subsection*{Example 1.}
With our first example we illustrate the flow of the algorithm. Let $s$ be the continuously differentiable, piecewise cubic function defined on $[0,10]$ by
\[
s(x) = s_n(x) \mbox{ on } [n-1,n],\; n = 1,2, \dots, 10,
\]
\]
where
\begin{eqnarray*}
s_1 (x) = -1.1 x^3 + 1.1 x^2 + x + 1, && s_2 (x) = 1.3 x^3 - 5.3 x^2 + 6.6 x - 0.6, \\
s_3 (x) = -0.9 x^3 + 1.1 x^2 + x + 1, && s_4 (x) = -1.5 x^3 +16 x^2 -56 x +67, \\
s_5 (x) = 3, && s_6 (x) = 0.5 x^3 - 8.75 x^2 + 50 x - 90.75, \\
s_7 (x) = 2 + (x - 6.5)^2, && s_8 (x) = 1.5 x^3 - 33.25 x^2 + 246 x - 605, \\
s_9 (x) = x^3 - 25.5 x^2 + 216 x - 605, && s_{10} (x) = 0.6 x^3 - 16.6 x^2 + 153 x - 467.3.
\end{eqnarray*}
The graph of $s$ is given in Figure~\ref{figure1} below.
\begin{figure}
\caption{Graph of $s$ with marked points where prescribed polynomials change.}
\label{figure1}
\end{figure}
To begin, $s$ attains its maximum value of $3$ on $D = [4,5]\cup \left\{ 8 \right\}$. So, $\hat{s} (x) = 3$ on $C = [4,8]$.
Since $s < 3$ on $(5,8)$, it is a component interval. We next seek the component intervals in $[0,4]$.
By adding to the partition those points in $[0,4]$ for which $s'$ or $s''$ changes sign we get a refined partition where, on each subinterval, $s$ is monotone and either strictly convex or strictly concave. The first derivative of $s$ changes sign at $0.97687$, $1.75204$, $2.8701$ and $3.\bar{1}$. The second derivative changes sign at $0.\bar{3}$, $1.35897$, $2.\bar{2}$ and $3.\bar{5}$. We are interested in subintervals of $[0,4]$ where $s$ is strictly concave and increasing. These are $I_1 = [0.\bar{3}, 0.97687]$, $I_2 = [2.\bar{2}, 2.87011]$ and $I_3 = [3.\bar{5}, 4]$.
Thus, $\mathcal{P}_0 = \left\{ I_1 , I_2, I_3 \right\}$. Clearly, $I_3$ is the interval in $\mathcal{P}_0$ furthest to the right.
There are no bridge intervals with left endpoint $0$ and right endpoint in $I_3$.
Indeed, there \emph{are} two candidate intervals of form $[a,r]$, $r \in I_3$, such that
\begin{equation}\label{condLeft}
s'(r) = \frac{s(r) - s(0)}{r} = \frac{s(r) - 1}{r},
\end{equation}
but, for neither candidate does one have (3), that is,
\[
s(x) < x \left[ \frac{s(r) - 1 }{r}\right], \; x \in (0,r).
\]
This can be seen in Figure~\ref{SimpleLeftEndPoint}.
\begin{figure}
\caption{The two intervals with left-hand endpoint $0$ which satisfy the first condition (\ref{condLeft}).}
\label{SimpleLeftEndPoint}
\end{figure}
Again, there are two intervals with right endpoint in $I_3$ and left endpoint in $I_1$ for which (1) and (2) hold. These are
\[
I_{1,1} = (0.89359, 3.90772) \mbox{ and } I_{1,2} = (0.92390, 3.16878).
\]
However, only on $I_{1,1}$ is (3) satisfied. The situation is depicted in Figure~\ref{figure3}.
\begin{figure}
\caption{This figure pictures the bridge interval joining intervals $I_1$ and $I_3$ and the other candidate. }
\label{figure3}
\end{figure}
Since no interval with left endpoint in $I_2$ can have a smaller left endpoint than that of $I_{1,1}$, the interval $I_{1,1}$ is the desired component interval.
This completes the first iteration of our algorithm.
To form $\mathcal{P}_1$ for the second iteration we, of course, discard $I_3$. We also discard $I_2$, since it is contained in $I_{1,1}$. This leaves in $\mathcal{P}_1$ only the interval $I_{1}' = (0.\bar{3}, 0.89359) = I_1 \setminus I_{1,1}$.
There is one bridge interval with right endpoint in $I_{1}'$ and left endpoint $0$. It is $(0,0.5)$, therefore $(0,0.5)$ is a component interval. We have thus found all component intervals in $[0,4]$.
We now seek component intervals contained in $[8,10]$. To begin we must add to the partition points $8, 9, 10$ the critical point $8.5$ and the inflection points $9.\bar{2}$ and $9.\bar{4}$. It is then found that the intervals on which $s$ is strictly concave and increasing are $J_1 = [8,8.5]$ and $J_2 = [9,9.\bar{2}]$.
The interval $(8.05353,10)$ is a bridge interval with left endpoint in $J_1$ and right endpoint $10$; it is the unique component interval in $[8,10]$. See Figure~\ref{SimpleRightEndPoint}.
\begin{figure}
\caption{This figure shows the component interval $(8.05353,10)$. }
\label{SimpleRightEndPoint}
\end{figure}
The graph of $\hat s $ appears in Figure~\ref{figure6}.
\begin{figure}
\caption{The least concave majorant of $s$ is linear interpolation of $s$ from end-points of a component interval and agrees with $s$ elsewhere.}
\label{figure6}
\end{figure}
\subsection*{ Example 2.}
Consider the trimodal density function discussed in \cite{HarKerPicTsy1998}, namely,
\[
f(x) = 0.5 \phi(x-3) + 3 \phi( 10(x - 3.8)) + 2 \phi(10(x-4.2)),
\]
in which
\[
\phi(x) = \frac{1}{\sqrt{2\pi}} e^{-\frac{x^2}{2} }.
\]
We wish to approximate the least concave majorant of $F(x) = \int_0^x f(y) \,dy$ on $[0,6]$. Now, $\left\|F^{(4)} \right\|_\infty \leq 700$, so to ensure that the clamped cubic spline $S_F$ approximating $F$ on $[0,6]$ satisfies $| f^\circ (x) - (S'_F)^\circ (x) | \leq .001$ on $[0,6]$, we solve the equation $\frac{700}{24}\left\| \varrho \right\|^3 = .001$ to obtain $\left\| \varrho \right\| = .03249$. Dividing $[0,6]$ into $185 > \frac{6}{.03249}$ equal subintervals, we apply the algorithm to identify the component intervals of $Z_{S_F}^c$. The approximation $\int_0^x (S'_F)^\circ$ to $\hat F (x)$ is accurate to within $.003$.
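As a quick check of the mesh arithmetic (a computation of ours, taking the subinterval count as $\lceil 6/\|\varrho\|\rceil$):

```python
import math

# Solve (700/24) * h^3 = 0.001 for the mesh size h, then count how many
# equal subintervals of [0, 6] keep the mesh below h.
h = (0.001 * 24 / 700) ** (1 / 3)
n = math.ceil(6 / h)
print(h, n)
```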
Figure \ref{figure62} shows the graph of $F$ and the approximation $\hat S_F$ to its least concave majorant.
\begin{figure}
\caption{The function $F$, the integral of the trimodal density, with its least concave majorant $\hat F$. The bridge intervals are $(0, 2.42575)$ and $(2.48781,3.23693)$.}
\label{figure62}
\end{figure}
{\small
{\em Authors' addresses}:
{\em Martin Franc\accent23u},
Faculty of Mathematics and Physics, Charles~University,
Prague, Czech Republic,
e-mail:~\texttt{[email protected]};
{\em Ron Kerman},
Department of Mathematics and Statistics, Brock~University,
St. Catharines, Canada,
e-mail:~\texttt{[email protected]};
{\em Gord Sinnamon},
Department of Mathematics, University of~Western Ontario,
London, Canada,
e-mail:~\texttt{sinnamon@\allowbreak uwo.ca};
}
\end{document}
\begin{document}
\title[On the spectrum of stochastic perturbations of the shift]
{On the spectrum of stochastic perturbations of the shift and Julia sets}
\author{E. H. El Abdalaoui}
\address { Department of Mathematics, University
of Rouen, LMRS, UMR 60 85, Avenue de l'Universit\'e, BP.12, 76801
Saint Etienne du Rouvray - France}
\email{[email protected] }
\author {A. Messaoudi }
\address{Departamento de Matem\'atica, IBILCE-UNESP, Rua Cristov\~o Colombo,
2265, CEP 15054-0000, S\~ao
Jos\'e de Rio Preto-SP, Brasil}
\email{[email protected]}
\footnote{Research partially
supported by French-Brasilian Cooperation (French CNRS and Brasilian CNPq)
and Capes-Cofecub Project 661/10.
The second author was supported by Brasilian CNPq grant 305043/2006-4}
\maketitle
{\renewcommand\abstractname{Abstract}
\date{12 September 2011}
\begin{abstract}
We extend the Killeen-Taylor study in \cite{KT} by investigating in different Banach spaces
($\ell^\alpha(\mathbb{N}), c_0(\mathbb{N}),c_c(\mathbb{N})$)
the point, continuous and residual spectra of stochastic perturbations of the shift operator associated to
the stochastic adding machine in base $2$ and in Fibonacci base. For the base $2$, the spectra are connected to the Julia set of a quadratic map.
In the Fibonacci case, the spectra involve the Julia set of an endomorphism of $\mathbb{C}^2$.
\hspace{-0.7cm}{\em AMS Subject Classifications} (2000): 37A30, 37F50, 47A10, 47A35.\\
{\em Key words and phrases:} Markov operator, Markov process, transition operator,
stochastic perturbations of the shift, stochastic adding machine, Julia sets, residual spectrum, continuous spectrum.\\
\end{abstract}
\thispagestyle{empty}
\section{\bf Introduction}
In this paper, we study in detail the spectrum of some stochastic perturbations of the shift operator
introduced by Killeen and Taylor in \cite{KT}.
We focus our study on large Banach spaces for which we complete the Killeen-Taylor study. We investigate also
the case of Fibonacci base, but in this case, we are not able to compute the residual and continuous spectra
exactly.\\
We recall that
in \cite {KT},
Killeen and Taylor defined the stochastic adding machine as a stochastic perturbation of the shift
in the following way: let $N$ be a nonnegative integer written in base $2$ as
$N= \sum_{i=0}^{k(N)} \varepsilon_{i}(N)2^{i}$ where
$\varepsilon_{i}(N)=0$ or $1$ for all $i.$ It is known that there
exists an algorithm that computes the digits of $N+1$. This
algorithm can be described by introducing an auxiliary
binary ``carry'' variable $c_i(N)$ for each digit $\varepsilon_{i}(N)$
in the following manner:
Put $c_{-1}(N+1)=1$ and
$$\varepsilon_{i}(N+1)= \varepsilon_{i}(N)+ c_{i-1} (N+1) {\rm {\quad mod \quad}} (2)$$
$$c_i (N+1) = \left[{\frac{\varepsilon_{i}(N)+ c_{i-1}(N+1)}{2}}\right]$$
where $i \geq 0$ and $[z]$ denotes the integer part of $z \in
\mathbb{R}_{+}.$
Let $\{e_{i}(n): i \geq 0, n \in \mathbb{N}\}$ be an independent,
identically distributed family of random variables which take the value $0$ with probability
$1-p$ and the value $1$ with probability $p$. Let $N$ be an integer.
Given a sequence $(r_{i}(N))_{i \geq 0} $ of $0$ and $1$ such that
$r_{i}(N)=1$ for finitely many indices $i$, we consider the
sequences $(r_{i}(N+1))_{i \geq 0}$ and $(c'_{i}(N+1))_{i \geq -1}$
defined by
$c'_{-1}(N+1)=1$ and for all $i \geq 0$
$$r_{i}(N+1)= r_{i}(N)+ e_{i}(N)c'_{i-1}(N+1)
{\rm {\quad mod \quad}} (2)$$
$$ c'_i (N+1) = \left[\frac{r_{i}(N)+
e_{i}(N)c'_{i-1}(N+1)}{2}\right].$$
With this, a number $\sum_{i=0}^{+\infty}
r_{i}(N)2^{i}$ transitions to a number \linebreak$\sum_{i=0}^{+\infty}
r_{i}(N+1)2^{i}$.
In particular, an integer $N$ having
a binary representation of the form $\varepsilon_{n}\ldots
\varepsilon_{k+1}0 \underbrace{11\ldots 11}_{k}$ transitions to
$\varepsilon_{n}\ldots \varepsilon_{k+1}1 \underbrace{00\ldots
00}_{k}$ with probability $p^{k+1}$ and a number having binary
representation of the form $\varepsilon_{n}\ldots \varepsilon_{k}
\underbrace{11\ldots 11}_{k}$ transitions to $\varepsilon_{n}\ldots
\varepsilon_{k} \underbrace{00\ldots 00}_{k}$ with probability
$p^{k}(1-p).$
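The recursion above is easy to simulate; the sketch below is our own minimal implementation (the function name is ours), acting on a little-endian list of binary digits.

```python
import random

def stochastic_increment(bits, p, rng=random):
    """One stochastic +1 of the Killeen-Taylor machine on little-endian digits."""
    bits = bits + [0]          # room for a final carry
    carry = 1                  # c'_{-1}(N+1) = 1
    out = []
    for r in bits:
        # each digit reads the carry only with probability p
        e = 1 if rng.random() < p else 0
        s = r + e * carry
        out.append(s % 2)      # r_i(N+1)
        carry = s // 2         # c'_i(N+1)
    return out
```

With $p=1$ every carry propagates and the map reduces to the ordinary binary successor, consistent with the deterministic adding machine; with $p=0$ the digits are left unchanged.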
Equivalently, we obtain a Markov process $\psi(N)$ with state space $\mathbb{N}$ by $\psi(N)= \sum_{i=0}^{+\infty}r_i(N)2^{i}$. The
corresponding transition operator is denoted by $S_p$ and given in Figure 2.
For $p=1$ the transition operator equals the shift operator (cf. Figure 2), hence the stochastic adding machine can be seen as a stochastic perturbation of the shift operator. It is also a
model of Weber's law in the context of counter and pacemaker errors. This law is
used in biology and psychophysiology \cite{KT2}.
In \cite{KT}, P.R. Killeen and J. Taylor studied the spectrum of the
transition operator $S_p$ (of $\psi(N)$) on $\ell^{\infty}$. They
proved that the spectrum $\sigma(S_p)$ is equal to the filled Julia
set of the quadratic map $f: \mathbb{C} \to \mathbb{C} $ defined
by $f(z)= (z- (1-p))^2 / p^2$, i.e.,
$\sigma (S_p)= \{z \in \mathbb{C},\; (f^{n}(z))_{n \geq 0} \mbox { is bounded } \}$
where $f^n$ is the $n$-th iteration of $f$.
In \cite{Messaoudi-Smania}, Messaoudi and Smania defined the stochastic adding machine
in the Fibonacci base. The corresponding transition operator is given in Figure 6.
Their procedure can be extended to a large class of adding machines and proceeds in the following manner. Consider the
Fibonacci sequence $(F_n)_{n \geq 0}$ given by the relation
$$F_0= 1, F_1=2,\;
F_n= F_{n-1}+ F_{n-2} \;\; \forall n \geq 2.$$
Using the greedy algorithm, we can write every
nonnegative integer $N$ in a unique way as $\displaystyle N= \sum_{i=0}^{k(N)}
\varepsilon_{i}(N)F_i$ where $\varepsilon _{i}(N)= 0$ or $1$ and
$\varepsilon _{i}(N)\varepsilon _{i+1}(N) \ne 11, \;$ for all $i \in
\left\{0,\ldots,k(N)-1\right\}$ (see \cite{Z}).
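For concreteness, the greedy expansion can be computed as follows (a sketch; the function name is ours):

```python
def zeckendorf(n):
    """Indices i with epsilon_i(n) = 1 in the greedy Fibonacci expansion of n,
    using F_0 = 1, F_1 = 2, F_k = F_{k-1} + F_{k-2}.  The greedy choice
    guarantees that no two selected indices are consecutive."""
    fib = [1, 2]
    while fib[-1] <= n:
        fib.append(fib[-1] + fib[-2])
    indices = set()
    for i in range(len(fib) - 1, -1, -1):
        if fib[i] <= n:
            indices.add(i)
            n -= fib[i]
    return indices
```

For instance $10 = 8 + 2 = F_4 + F_1$, so $\varepsilon_4(10) = \varepsilon_1(10) = 1$ and all other digits vanish.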
It is known that the addition of $1$ in the Fibonacci base (adding machine) is
recognized by a finite state automaton (transductor).
In \cite{Messaoudi-Smania}, the authors defined the stochastic adding machine by introducing
a ``probabilistic transductor''.
They also computed the point spectrum of the
transition operator acting in $\ell^{\infty}$ associated to the stochastic adding machine with
respect to the base $(F_n)_{n \geq 0}$. In particular, they showed that the point
spectrum $\sigma_{pt}(S_p)$ in $\ell^{\infty}$
is connected to the filled Julia set $J (g)$ of the function $g:
\mathbb{C}^2 \to \mathbb{C}^2 $ defined by
$$g(x,y)= \left(\frac{1}{p^{2}}(x- 1+p)(y- 1+p), x\right).$$
Precisely, they proved that
$$\sigma_{pt}(S_p)=
\mathcal{K}_p= \{ \lambda \in
\mathbb {C} \; \vert \; (q_n (\lambda))_{n \geq 1} \mbox { is bounded }\},$$
where $q_{F_{0}}(z)= z,\; q_{F_{1}} (z)=z^2,\; q_{F_{k}} (z)= \displaystyle \frac{1}{p}q_{F_{k-1}} (z)q_{F_{k-2}} (z)-\displaystyle\frac{1-p}{p},$ for all $k \geq 2$
and for all nonnegative integers $n$, we have $q_n (z)= q_{F_{k_{1}}} \ldots q_{F_{k_{m}}} $ where $ F_{k_{1}}+ \cdots+ F_{k_{m}}$ is the Fibonacci representation of $n$.
In particular, $\sigma_{pt}{(S_{p})}$ is contained in the set
\begin{eqnarray*}
\mathcal {E}_p &= &
\{
\lambda \in \mathbb{C} \; \vert \; (q_{F_{n}} (\lambda))_{n \geq 1} \mbox { is bounded } \}\\
& = &
\{
\lambda \in \mathbb{C} \; \vert \; ( \lambda_1, \lambda) \in
J (g)\}
\end{eqnarray*}
where $\lambda_1= 1-p
+\frac{ (1- \lambda- p)^2}{p}.$\\
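These recursions are easy to evaluate numerically. One immediate consistency check: since $q_{F_0}(1)=q_{F_1}(1)=1$ and $\frac1p \cdot 1 \cdot 1 - \frac{1-p}{p} = 1$, every $q_{F_k}$, and hence every product $q_n$, takes the value $1$ at $\lambda = 1$. A sketch (the function names are ours):

```python
def q_F(k, z, p):
    """q_{F_k}(z) via q_{F_0}(z) = z, q_{F_1}(z) = z**2 and the recursion
    q_{F_k} = q_{F_{k-1}} q_{F_{k-2}} / p - (1 - p) / p."""
    a, b = z, z * z                # q_{F_0}, q_{F_1}
    if k == 0:
        return a
    for _ in range(k - 1):
        a, b = b, b * a / p - (1 - p) / p
    return b

def q_n(n, z, p):
    """q_n(z) as the product of q_{F_k}(z) over the greedy (Zeckendorf)
    Fibonacci digits of n, with F_0 = 1, F_1 = 2."""
    fib = [1, 2]
    while fib[-1] <= n:
        fib.append(fib[-1] + fib[-2])
    prod = 1.0
    for i in range(len(fib) - 1, -1, -1):
        if fib[i] <= n:
            prod *= q_F(i, z, p)
            n -= fib[i]
    return prod
```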
\begin{center}
\includegraphics[scale=0.6]{base2.eps}\\
{\footnotesize {Fig.1. Transition graph of stochastic adding machine in base 2}}
\end{center}
Here we investigate the spectrum of the stochastic adding machines in base $2$ and in the Fibonacci
base in different Banach spaces. In particular,
we compute exactly the point, continuous and residual spectra of the stochastic adding machine
in base $2$ for the Banach spaces $c_0$, $c$, $\ell^{\alpha},\; \alpha \geq 1$.
For the Fibonacci base, we improve the result in \cite{Messaoudi-Smania} by
proving that the spectrum of $S_p$ acting on $\ell^{\infty}$ contains ${\mathcal{E}}_p$. The same result will be proven
for the Banach spaces $c_0$, $c$ and $\ell^{\alpha}, \alpha \geq 1$.
The paper is organized as follows. In section 2, we give some basic facts on spectral theory.
In section 3, we state our main results (Theorems 1, 2 and 3). Section 4 contains
the proof in the case of the base 2 and finally, in
section 5, we present the proof in the case of the Fibonacci base.
\section{\bf Basic facts from the spectral theory of operators (see
for instance \cite{Halmos}, \cite{Rudin},\cite{Schechter}, \cite{yoshida})}
Let $E$ be a complex Banach space and $T$ a bounded operator on it.
The spectrum of $T$, denoted by $\sigma(T)$, is the subset of complex numbers
$\lambda $ for which $T-\lambda Id_E$ is not an isomorphism ($Id_E$ is the
identity map).
As usual we point out that if $\lambda$ is in $\sigma(T)$ then one of the
following assertions holds:
\begin{enumerate}
\item $T-\lambda Id_E$ is not injective. In this case we say that $\lambda$ is in the point spectrum, denoted by
$\sigma_{pt}(T)$.
\item $T-\lambda Id_E$ is injective, not onto, and has dense range. We say
that $\lambda$ is in the continuous spectrum, denoted by $\sigma_c(T)$.
\item $T-\lambda Id_E$ is injective and does not have dense range. We
say that $\lambda$ is in the residual spectrum of $T$, denoted by $\sigma_r(T)$.
\end{enumerate}
It follows that $\sigma(T)$ is the disjoint union
\[
\sigma(T) = \sigma_{pt} (T) \cup \sigma_c (T) \cup \sigma_r (T).
\]
It is well known, and an easy consequence of Liouville's theorem, that
the spectrum of any bounded operator is a nonempty compact subset of $\mathbb{C}$. There is a connection
between the spectrum of $T$ and the spectrum of
the dual operator $T'$ acting on the dual space $E'$ by
$T'~~:~~\phi \mapsto \phi \circ T.$ In particular, we have
\begin{prop}[Phillips Theorem]\label{Phillips} Let $E$ be a Banach space and $T$
a bounded operator on it, then $
\sigma(T)=\sigma(T').$
\end{prop}
\noindent We also have a classical relation between the point and residual spectra of $T$ and the point spectrum of $T'$.
\begin{prop}\label{residual}
For a bounded operator $T$ we have
$$\sigma_r(T)\subset \sigma_{pt}(T')\subset \sigma_r(T)\cup \sigma_{pt}(T).$$
In particular,
if $\sigma_{pt}(T)$ is an empty set
then $$\sigma_r(T)=\sigma_{pt}(T').$$
\end{prop}
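A classical example (not from the present paper) illustrating how this proposition is used:

```latex
\noindent \emph{Example.} Let $S$ be the unilateral right shift on $\ell^{2}$,
$S(x_0,x_1,x_2,\dots)=(0,x_0,x_1,\dots)$. Then $S-\lambda Id$ is injective for
every $\lambda$, so $\sigma_{pt}(S)=\emptyset$; the dual operator $S'$ is the
left shift, whose eigenvectors $(1,\lambda,\lambda^{2},\dots)$ belong to
$\ell^{2}$ exactly when $\vert \lambda \vert <1$, so
$\sigma_{pt}(S')=\mathbb{D}(0,1)$, the open unit disc. The proposition then
gives $\sigma_{r}(S)=\sigma_{pt}(S')=\mathbb{D}(0,1)$.
```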
\section{\bf Main results.}
Our main results are stated in the following three theorems.
{\bf Theorem 1}.
The spectrum of the operator $S_p$ acting on $c_0,\; c$ and $\ell^{\alpha}, \; \alpha \geq 1$
is equal to the filled Julia set $J(f)$ of the quadratic map $f(z)= (z- (1-p))^2 / p^2$.
Precisely,
in $c_0$ (resp. $\ell^{\alpha}, \; \alpha > 1$), the continuous spectrum of $S_p$
is equal to $J(f)$ and the point and residual spectra are empty.
In $c$, the point spectrum is equal to $\{1\}$, the residual spectrum is empty, and the continuous spectrum equals $J(f) \backslash \{1\}$.
{\bf Theorem 2}.
In $\ell^1 $, the point spectrum of $S_p$ is empty.
The residual spectrum of $S_p$ is nonempty and contains a dense countable subset of the
Julia set $\partial J(f)$, i.e. $\bigcup_{n=0}^{+\infty} f^{-n}\{1\} \subset \sigma_{r} (S_p)$. The continuous spectrum is equal to the relative complement of the
residual spectrum with respect to the filled Julia set $J(f)$.
{\bf Theorem 3}.
The spectra of $S_p$ acting respectively in $\ell^{\infty},\; c_{0},\; c$ and $\ell^{\alpha},\; \alpha \geq 1$, associated to the stochastic Fibonacci adding machine contain the set $\mathcal{E}_{p}= \{ \lambda \in \mathbb{C} \; \vert \; ( \lambda_1, \lambda) \in
J (g)\}$ where $J (g)$ is the filled Julia set of the function $g$ and $\lambda_1=1-p
+\frac{ (1- \lambda- p)^2}{p}.$
{\bf Conjecture}: We conjecture that in the case of $\ell^1 $, the residual spectrum of the transition operator associated to the stochastic adding machine in base $2$ is $\sigma_{r} (S_p)= \bigcup_{n=0}^{+\infty} f^{-n}\{1\} $.
For the Fibonacci stochastic adding machine, we conjecture that the spectra of $S_p$ in the Banach spaces cited in Theorem 3 are equal to the set $\mathcal{E}_{p}$.
\begin{rem}
The methods used for the proof of our results can be adapted to a large class of stochastic adding machines given by transductors.
\end{rem}
\begin{rem}
We point out that from Killeen and Taylor's method one may deduce, in the case of $\ell^{\infty}$,
that the residual and continuous spectra are empty.
In contrast, here we compute directly the residual and continuous spectra in $\ell^{\alpha}, c_0$ and $c$.
\end{rem}
\begin{center}
\hspace{-9.5 mm}\includegraphics[scale=0.7]{opbase2.eps}
{\footnotesize {Fig.2. Transition operator of stochastic adding machine in base 2}}
\end{center}
\section{\bf Proof of our main results for stochastic adding machine in base 2.}
We are interested in the spectrum of $S_p$ on three Banach spaces connected by
duality. The space $c_0$ is the space of complex sequences which
converge to zero, in other words, the continuous functions on $\mathbb{N}$ vanishing at
infinity. The dual space of $c_0$ is by Riesz Theorem the space of
bounded Borel measures on $\mathbb{N}$ with total variation norm. This space can be
identified with $\ell^1$, the space of summable row vectors. Finally, the
dual space of $\ell^1$ is $\ell^{\infty}$ the space of
bounded complex sequences.
We are also interested in the spectrum of $S_p$
as operator on the
space $\ell^{\alpha}$ with $\alpha > 1$ and also in the space $c$ of complex convergent sequences.
\begin{prop}\label{defia}
The operator $S_p$ (acting on the right) is well defined on the space $X$ where $X \in \{c_0,\; c,\; l^{\alpha},\;
\alpha \geq 1\}$, moreover $||S_p|| \leq 1$.
\end{prop}
Since the operator $S_p$ is bi-stochastic, the proof of this proposition is a straightforward consequence
of the following more general lemma.
\begin{lemm}\label{BanachSteinhaussEasy} Let $A=(a_{i,j})_{i,j \in \mathbb{N}}$ be an infinite matrix with nonnegative
coefficients. Assume that there exists a positive constant $M$ such that
\begin{enumerate}
\item $\displaystyle \sup_{i \in \mathbb{N}}\left(\sum_{j=0}^{\infty}a_{i,j}\right) \leq M,$
\item $\displaystyle \sup_{j \in \mathbb{N}}\left(\sum_{i=0}^{\infty}a_{i,j}\right) \leq M.$
\end{enumerate}
Then $A$ defines a bounded operator on the spaces $c_0,\; c,\; l^{\infty}$ and $\ell^{\alpha}(\mathbb{N})$
with $\alpha \geq 1$. In addition the norm of $A$ is less than $M$.
\end{lemm}
\begin{proof}
\noindent By assumption $(1)$, it is easy to see that $A$ is well defined on $\ell^{\infty}(\mathbb{N})$ and its
norm is at most $M$.
\noindent Now, let $v= (v_n)_{n \geq 0},\; v \ne 0$, be such that $\displaystyle \lim_{n \longrightarrow +\infty}v_n =l \in \mathbb{C}$. Then for any $\varepsilon>0$ there exists a positive integer $j_0$ such that for any
$j \geq j_0$, we have $\displaystyle |v_j-l| \leq \frac{\varepsilon} {2 M}$. Let $\displaystyle d= \sum_{j=0}^{+\infty}a_{n,j}$; then from assumption $(1)$, we have that for any $n \in \mathbb{N}$,
\begin{eqnarray}\label{epsilonsur2}
\left| (Av)_n - d\cdot l\right|=\left|\sum_{j=0}^{+\infty}a_{n,j} (v_j -l)\right|
\leq \displaystyle \sum_{j=0}^{j_0-1}a_{n,j}|v_j -l|+ \frac{\varepsilon}2 .
\end{eqnarray}
\noindent But by assumption $(2)$, for any $j \in \{0,\ldots,j_0-1\}$, we have
$\displaystyle \sum_{n=0}^{+\infty}a_{n,j} < \infty$. Then there exists
$n_0 \in \mathbb{N}$ such that for any $n \geq n_0$ and for any $j \in \{0,\ldots,j_0-1\}$, we have
\begin{eqnarray}\label{2epsilonsur2}
|{a}_{n,j}| \leq \frac{\varepsilon}{2 j_0 (\delta+1)}, \mbox { where } \delta= \sup \{ \vert v_j -l \vert, \; j \in \mathbb{N}\}.
\end{eqnarray}
Combining (\ref{epsilonsur2}) with (\ref{2epsilonsur2}), we get
$$ \left| (Av)_n - d\cdot l \right| \leq \varepsilon,\; \forall n \geq n_0.$$
Hence
$AX \subset X$ if $X= c_0$ or $c$.
\vspace {1em}
\noindent Now take $\alpha > 1$ and $v \in
l^{\alpha}$. For any integer $i \in \mathbb{N}$, we have
\begin{eqnarray*}
\left|(Av)_i\right|^\alpha \leq {\left(\sum_{j=0}^{+\infty}{a}_{i,j}|v_j|\right)}^\alpha.
\end{eqnarray*}
\noindent Let $\alpha'$ be the conjugate exponent of $\alpha$, i.e., $\displaystyle \frac1{\alpha}+\frac1{\alpha'}=1.$ Then,
by H\"older's inequality we get
\begin{eqnarray*}
{\left(\sum_{j=0}^{+\infty}{a}_{i,j}|v_j|\right)}^\alpha \leq {\left(\sum_{j=0}^{+\infty}{{a}_{i,j}}\right)}^{\frac{\alpha}{\alpha'}}
{\left(\sum_{j=0}^{+\infty}{{a}_{i,j}}|v_j|^{\alpha}\right)}.
\end{eqnarray*}
Hence
\begin{eqnarray}
\label{tt}
{\left(\sum_{j=0}^{+\infty}{a}_{i,j}|v_j|\right)}^\alpha \leq {\left(\sup_{l\in \mathbb{N}}\sum_{j=0}^{+\infty}{{a}_{l,j}}\right)}^{\frac{\alpha}{\alpha'}}{\left(\sum_{j=0}^{+\infty}{{a}_{i,j}}|v_j|^{\alpha}\right)}.
\end{eqnarray}
\noindent Thus
\begin{eqnarray*}
||Av||^{\alpha}_{\alpha}
& \leq& M^{\frac{\alpha}{\alpha'}}
\sum_{i=0}^{\infty} \left( \sum_{j=0}^{+\infty}{{a}_{i,j}}|v_j|^{\alpha} \right)\\
&=& M^{\frac{\alpha}{\alpha'}} \sum_{j=0}^{\infty}\left( \sum_{i=0}^{\infty}
{a}_{i,j} \right)\vert v_j \vert ^{\alpha}\\
&\leq& M^{\frac{\alpha}{\alpha'}} \sup_{j \in \mathbb{N}}\left(
\sum_{i=0}^{\infty}{a}_{i,j}\right) ||v||^{\alpha}_{\alpha}\\
&\leq& M^{1+\frac{\alpha}{\alpha'}}||v||^{\alpha}_{\alpha}.
\end{eqnarray*}
\noindent{}Then
\begin{eqnarray*}
\vert \vert A v \vert \vert_{\alpha} \leq M \vert \vert v \vert \vert_{\alpha}.
\end{eqnarray*}
Hence $A$ is a continuous operator and $\vert \vert A \vert \vert \leq M$.
The case $\alpha=1$ is an easy exercise and is left to the reader.
\end{proof}
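The lemma above is a form of the classical Schur test. As a quick numerical sanity check (not part of the proof; the matrix is an arbitrary example of ours), the empirical $\ell^{2}$ operator norm indeed stays below the common row/column-sum bound $M$:

```python
import numpy as np

def schur_bound(A):
    """Common bound M on the row sums and column sums of a nonnegative matrix."""
    return max(A.sum(axis=1).max(), A.sum(axis=0).max())

def empirical_l2_norm(A, trials=200, seed=0):
    """Empirical lower bound on the induced l^2 norm of A, obtained by
    sampling random test vectors."""
    rng = np.random.default_rng(seed)
    best = 0.0
    for _ in range(trials):
        v = rng.standard_normal(A.shape[1])
        best = max(best, np.linalg.norm(A @ v) / np.linalg.norm(v))
    return best
```

By the lemma (with $\alpha = 2$), `empirical_l2_norm(A)` can never exceed `schur_bound(A)` for a nonnegative matrix `A`.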
From Proposition \ref{defia}, we deduce that
$S_p$ is a Markov operator and its spectrum is contained in the unit disc of complex numbers.
Consider the map $ f: z \in \mathbb{C} \longmapsto
\left(\frac{z-(1-p)}{p}\right)^2$ and denote by $J(f)$ the associated filled Julia set defined by:\\
$$
J(f)=\left \{ z \in \mathbb{C},\; \vert f^{(n)}(z) \vert
\not \longrightarrow \infty\right\}.
$$
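Membership in $J(f)$ can be tested numerically by iterating $f$ and watching for escape (a heuristic sketch; the escape radius and iteration budget are our own choices):

```python
def in_filled_julia(z, p, max_iter=200, escape_radius=1e6):
    """Heuristic membership test for the filled Julia set of
    f(w) = ((w - (1 - p)) / p)**2: iterate f and report whether the orbit
    stays below the escape radius."""
    w = complex(z)
    for _ in range(max_iter):
        w = ((w - (1 - p)) / p) ** 2
        if abs(w) > escape_radius:
            return False
    return True
```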
\noindent Killeen and Taylor investigated the spectrum of $S_p$
acting on $\ell^{\infty}$. They proved that the point spectrum of $S_p$ is equal to the filled Julia set of $f$.
In addition, they showed that the spectrum is invariant under the action of $f$. As a consequence,
one may deduce that the continuous and residual spectra in this case are empty.
Here we will compute exactly the residual part and the continuous part of
the spectrum of $S_p$ acting on the spaces $c_0,\; c$ and
$\ell^{\alpha}, \; \alpha \geq 1$.
\begin{thm}\label{spp}
The spectrum of the operator $S_p$ acting on $X$ where $X \in \{c_0,\; c,\; l^{\alpha},\;
\alpha \geq 1\}$
is equal to the filled Julia set $J(f)$ of $f$.
Precisely,
in $c_0$ (resp. $\ell^{\alpha}, \; \alpha > 1$), the continuous spectrum of $S_p$
is equal to $J(f)$ and the point and residual spectra are empty.
In $c$, the point spectrum is the singleton $\{1\}$,
the residual spectrum is empty and the continuous spectrum is $J(f) \backslash \{1\}$.
\end{thm}
For the proof of Theorem \ref{spp} we shall need the following proposition.
\begin{prop}
\label{inclu}
The spectrum of $S_p$ in $X$, where $X \in \{c_0,\; c,\; l^{\alpha},\; 1 \leq \alpha \leq +\infty\}$, is contained in the filled Julia set of $f$.
\end{prop}
The main idea of the proof of Proposition \ref{inclu} can be found in the Killeen-Taylor proof. The key argument is
that $\widetilde{S_p}^{2}$ is similar to the operator $ES_p \oplus OS_p$, where
$\widetilde{S_p}=\displaystyle \frac {S_p- (1-p)Id}{p}$ and $E, O$ denote the even and odd operators acting on $X$ by
$$E (h_0,h_1,\ldots)= (h_0,0,h_1,0, h_2,\ldots),$$ \noindent{}and
$$O (h_0,h_1,\ldots)= (0, h_0,0,h_1,0, h_2,\ldots),$$ for any $h= (h_0,h_1,\ldots)$ in $X$. Precisely,
for all $v= (v_{i})_{i \geq 0} \in X $,
we have
$$\widetilde{S_p}^{2}(v)= E S_p (v_0, v_2,\ldots v_{2n},\ldots)+ O S_p (v_1, v_3,\ldots v_{2n+1},\ldots).$$
As a consequence, we deduce from the spectral mapping theorem \cite{Schechter} that the
spectrum of $S_p$ is invariant under $f$.
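This similarity can be checked numerically on a finite truncation of the matrix (our own construction, built from the transition rules; boundary rows of the truncation are excluded from the comparison):

```python
import numpy as np

def sp_matrix(K, p):
    """K x K truncation of the base-2 stochastic adding machine matrix S_p,
    built from the carry rules (transitions leaving the truncation dropped)."""
    S = np.zeros((K, K))
    for n in range(K):
        s = 0
        while (n >> s) & 1:        # trailing ones of n
            s += 1
        S[n, n] = 1 - p
        for m in range(1, s + 1):
            S[n, n - 2**m + 1] = p**m * (1 - p)
        if n + 1 < K:
            S[n, n + 1] = p**(s + 1)
    return S

p, K = 0.7, 64
S = sp_matrix(K, p)
T = (S - (1 - p) * np.eye(K)) / p      # the operator tilde{S_p}

rng = np.random.default_rng(0)
v = rng.standard_normal(K)

lhs = T @ (T @ v)                            # tilde{S_p}^2 v
rhs = np.zeros(K)
rhs[0::2] = sp_matrix(K // 2, p) @ v[0::2]   # E S_p on the even coordinates
rhs[1::2] = sp_matrix(K // 2, p) @ v[1::2]   # O S_p on the odd coordinates
```

The two sides agree on every coordinate far enough from the truncation boundary.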
Let us start the proof of Theorem \ref{spp} by proving the following result.
\begin{prop}
\label{specte}
The point spectrum of $S_p$ acting on $X$ where $X \in \{ c_0,\; l^{\alpha},\; \alpha \geq 1\}$ is empty, and the point spectrum of $S_p$ on $c$ is equal to $\{1\}$.
\end{prop}
For the proof, we need the following lemma from \cite{KT}.
\begin{lemm}\label{Ktlemma}\cite{KT}.
Let $n$ be a nonnegative integer and $X_n= \{m \in \mathbb{N}:\; (S_p)_{n,m} \ne 0\}$, then the following properties are valid.
\begin{enumerate}
\item For all nonnegative integers $n$, we have $n \in X_{n}$ and $(S_{p})_{n,n}= 1-p$.
\item If $n=\varepsilon _{k}\ldots
\varepsilon _{1}0,\; k \geq 2,$ is an even integer then $ X_{n}= \{n, n+1\}$ and $(S_{p})_{n,n+1}= p$.
\item If $n=\varepsilon _{k}\ldots
\varepsilon _{t}0\underbrace{1 \ldots 1}_{s}$ is an odd integer with $s \geq 1$
and $k \geq t \geq s+1 $, then $X_{n}= \{n, n+1, n- 2^m+1,\; 1 \leq m \leq s\}$ and $n$ transitions to $n+1=
\varepsilon _{k}\ldots \varepsilon _{t}1\underbrace{0 \ldots
00}_{s} $ with probability $(S_p)_{n,n+1}=p^{s+1}$,
and $n$ transitions to $n-2^m+1= \varepsilon
_{k}\ldots
\varepsilon _{t}0\underbrace{1 \ldots 1}_{s-m}\underbrace{0
\ldots 0}_{m}$,
$ 1 \leq m \leq s$
with probability
$(S_{p})_{n,n-2^m+ 1}= p^{m}(1-p)$.
\end{enumerate}
\end{lemm}
{\bf Proof of Proposition \ref{specte}.}
Let $\lambda$ be an eigenvalue of $S_{p}$ associated to an eigenvector
$v= (v_i)_{i \geq 0}$ in $X$, where $X \in \{ c_0, c, \; \ell^{\alpha},\; \alpha \geq 1\}$. By Lemma \ref{Ktlemma}, we see that the operator $S_p$ satisfies $(S_{p})_{i,i+k}=0$ for all $i,k
\in \mathbb{N}$ with $k \geq 2$. Therefore, for all integers $k \geq 1,$
we have
\begin{eqnarray}
\label{rrr} \sum_{i=0}^{k}(S_{p})_{k-1,i} v_i = \lambda v_{k-1}.
\end{eqnarray}
Then, one can prove by induction on $k$ that for all
integers $k \geq 1$, there exists a complex number $q_{k}=q_{k} (p,
\lambda)$ such that
\begin{eqnarray} \label{for3}
v_k= q_k v_0.
\end{eqnarray}
By Lemma
\ref{Ktlemma} and the fact that $((S_p- \lambda Id)v)_{2^n}=0$ for all nonnegative integers $n$, we get
\begin{eqnarray}
\label{for2}~~~~~ p^{n+1} v_{2^n}+ (1-p- \lambda)
v_{2^n-1} + \sum_{i=1}^{n}p^{i}(1-p) v_{2^n- 2^i}=0 ,\; \forall n \geq 0.
\end{eqnarray}
\noindent{}Hence $$v_{2^{n}}= \frac{1}{p} A -\left(\frac{1}{p}-1\right)v_0,$$
where $A=\displaystyle -\frac{1}{p^n} \left( (1-p- \lambda)
v_{2^{n-1}+ (2^{n-1}-1)} + \sum_{i=1}^{n-1}p^{i}(1-p) v_{2^{n-1}+ (2^{n-1}- 2^i)}\right).$
On the other hand, by the self similarity structure of the transition matrix $S_{p}$, one can prove that if $i$ and $j$ are two integers such that for some positive integer $n$ we have $2^{n-1} \leq i,j < 2^n$, then
the transition probability from $i$ to $j$ is equal to the transition probability from $i-2^{n-1}$ to $j -2^{n-1}$.
Using this last fact and (\ref{for2}), it follows that
$$ v_{2^n}= \frac{1}{p} q_{2^{n-1}}v_{2^{n-1}}-\left(\frac{1}{p}-1\right)v_0.$$
\noindent This gives
\begin{eqnarray}
\label{for5} q_{2^n}= \frac{1}{p} q_{2^{n-1}}^2-\left(\frac{1}{p}-1\right),
\end{eqnarray}
\noindent where
$$ q_{2^0}=q_{1}= -\frac{1-p- \lambda}{p}.$$
{\bf Case 1: $ v \in c_0$ or $\ell^{\alpha},\; \alpha \geq 1$.}
Since $v_k = q_k v_0$ and $v \ne 0$, we must have $v_0 \ne 0$, and hence $\lim_{n \to \infty} q_{2^n}= 0$. Thus, letting $n \to \infty$ in (\ref{for5}), we get $p=1$, which is absurd;
therefore the point spectrum is empty.
{\bf Case 2: $ v \in c$.}
Assume that $\lim q_n= l \in \mathbb{C}$; then by (\ref{for5}), we deduce that $l=1$ or $l= p-1$.
On the other hand, for any $n \in \mathbb{N}$, there exist $k$ nonnegative integers $n_1 < n_2 <\ldots <n_k$ such that $n= 2^{n_1}+ \cdots+ 2^{n_k}$.
We can prove (see \cite{KT}) that
\begin{eqnarray}\label{prod_q}
\label{produit}
q_n= q_{2^{n_1}}\ldots q_{2^{n_k}}.
\end{eqnarray}
Then $\lim q_{ 2^{n-2}+ 2^{n}}= l^2= l$, thus $l=p-1$ is excluded. Since $S_p$ is stochastic, we conclude that $l=1$ and $\sigma_{pt, c}(S_p)=\{1\}$.
$\Box$
\begin{rem} By the same arguments as above, Killeen and Taylor in \cite {KT} proved that
the point spectrum of $S_p$ acting on
$\ell^{\infty}$ is equal to the filled Julia set of the quadratic map $f$. In fact,
it is easy to see from the arguments above that $\sigma_{pt, l^{\infty}} (S_{p})= \{\lambda \in \mathbb{C},\; q_{n}(\lambda) \mbox{ bounded } \}$.
Indeed, (\ref{for5}) implies that if $(q_{2^{n}})_{n \geq 0}$ is bounded, then for all $n \geq 0,\; \vert q_{2^{n}} \vert \leq 1.$
This clearly forces $\sigma_{pt, l^{\infty}} (S_p)= \{\lambda \in \mathbb{C},\; q_{2^{n}}(\lambda) \mbox { bounded } \}$
by \eqref{prod_q}. Now,
since
$$q_{2^{n}}= h \circ f^{n} \circ h^{-1} (q_{1})= h \circ f^{n}(\lambda), \quad \forall n \in \mathbb{N},$$
where
$ h(x)= \displaystyle \frac{x}{p}- \frac{1-p}{p}$, we conclude that $\sigma_{pt, l^{\infty}}(S_p)= J(f)$.
It follows from Proposition \ref{inclu} that $\sigma _{ l^{\infty}}(S_p)= J(f)$ and the residual and continuous spectra are
empty.
\end{rem}
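Numerically, the dichotomy is easy to observe from the recursion for $q_{2^n}$: for $\lambda = 1$ one gets $q_{2^n} = 1$ for every $n$, while for $\lambda$ outside $J(f)$ the sequence escapes. A sketch (the function name is ours):

```python
def q_powers_of_two(lam, p, n_max):
    """The values q_{2^0}, ..., q_{2^{n_max}} computed from
    q_1 = -(1 - p - lam) / p and q_{2^n} = q_{2^{n-1}}**2 / p - (1/p - 1)."""
    q = -(1 - p - lam) / p
    out = [q]
    for _ in range(n_max):
        q = q * q / p - (1 / p - 1)
        out.append(q)
    return out
```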
\begin{prop}\label{Spr} The residual spectrum of $S_p$ acting on $X \in \{ c_0$, $c$, $\ell^{\alpha},\; \alpha > 1\}$
is empty.
\end{prop}
\begin{proof}
Let $\lambda $ be an element of the residual spectrum of $S_p$ acting on $c_0$ (resp. $c$). Then, by Proposition \ref{residual}, we deduce that there exists a nonzero sequence
$u= (u_k)_{k \geq 0} \in \ell^1(\mathbb{N})$ such that $u (S_p- \lambda Id)=0.$
{\bf Claim}. $\displaystyle u_k = \frac{1}{q_k} u_0,\; \forall k \in \mathbb{N}.$\\
\noindent{}We have
\begin{eqnarray*}
\forall k \in 2\mathbb{N} ,\;
(u (S_p- \lambda Id))_{k+1}=
p u_{k }+ (1-p- \lambda) u_{k+1}=0.
\end{eqnarray*}
\noindent Hence
\begin{eqnarray}\label{parr}
\forall k \in 2\mathbb{N} ,\;
u_{k}= q_1 u_{k+1}.
\end{eqnarray}
\noindent If $k$ is odd, then $k=2^n -1+ t$ where $t=0$ or $ t= \sum_{j=2}^{s}2^{n_{j}}$ with $1 \leq n <n_2 <n_3 <\ldots <n_s$.
Since $(u (S_p- \lambda Id))_{k+1}=0,$ we have
\begin{eqnarray}
\label{tttl}
p^{n+1}u_{k}+(1-p-\lambda)u_{k+1}+\sum_{i=1}^{n}p^i(1-p)u_{k+2^i}=0.
\end{eqnarray}
Observe that the relation (\ref{tttl}) between $u_k$ and $u_{k+2^{n}}$
is similar to the relation (\ref{for2}) between $v_{2^n}$ and $v_0$. Hence, by induction on $n$, we obtain
\begin{eqnarray}
\label{sd}
u_k= q_{2^{n}} u_{k +2^{n}}.
\end{eqnarray}
Indeed, if $n=1$ then by (\ref{tttl}) and (\ref{parr}), we get
$$p^2 u_k+ \left(q_1(1-p- \lambda)+ p(1-p)\right) u_{k+2}=0.$$
Therefore
$$u_k = \left(\frac{q_1^2}{p}- \frac{1-p}{p}\right)u_{k+2}= q_2 u_{k+2}.$$
Then (\ref{sd}) is proved for $n=1$.
Now, assume that (\ref{sd}) holds for the numbers $1,2,\ldots,m-1$.
\noindent{}Take $n=m$ and $1 \leq i <m$; then $k+2^i= 1+2+ \cdots + 2^{m-1}+ 2^i+ t= 2^i -1+ t'$ where $t'= 2^m+ t$.
Applying the induction hypothesis, we get
$$ u_{k+2^i}= q_{2^i} u_{k+2^{i+1}}= q_{{2^i}}q_{{2^{i+1}}}\ldots q_{{2^{m-1}}}u_{k+2^m}.$$
On the other hand, since $2^{i}+ \cdots + 2^{m-1}= 2^m -2^ i$, we have
\begin{eqnarray}
\label{gf}
u_{k+2^i}= q_{2^m- 2^i}u_{k+2^m}.
\end{eqnarray}
Considering (\ref{tttl}) with $n=m$ and (\ref{gf}) yields
$$u_k= -
\frac{1}{p^{m+1}} \left(\left(1-p-\lambda\right)q_{2^m- 1}+\sum_{i=1}^{m}p^i(1-p)q_{2^m- 2^i}\right) u_{k+2^m}.$$
Combining (\ref{for3}) and (\ref{for2}), we obtain (\ref{sd}) for $n=m$.
Then (\ref{sd}) holds for all integers $n \geq 1$.
In particular, we have $u_{2^{n-1}-1}= q_{2^{n-1}} u_{2^{n}-1}$, for all integers $n \geq 1$.
Thus
\begin{eqnarray}
\label{nnn}
u_{2^{n}-1}= \frac{1} {q_{2^{0}}q_{2} \ldots q_{2^{n-1}}} u_{0}= \frac{1} {q_{2^{n}-1}} u_{0},\; \forall n \geq 1.
\end{eqnarray}
On the other hand, for all integers $n \geq 1$, by (\ref {parr}) we have
$u_{2^n}= q_{2^0} u_{{2^n}+2^0}$, and
from (\ref{sd}), we see that
\begin{eqnarray}
\label{abc}
u_{2^n}= q_{2^0} q_{2^1} u_{{2^2}-1+2^n}=\ldots = q_{2^0} q_{2^1}\ldots q_{2^{n-1}} u_{2^{n+1}-1}.
\end{eqnarray}
Consequently from (\ref{nnn}) and (\ref{abc}), we obtain
\begin{eqnarray}
\label{xxx}
u_{2^{n}}= \frac{1} {q_{2^{n}}} u_{0},\; \forall n \geq 1.
\end{eqnarray}
\noindent Now fix an integer $k \in \mathbb{N}$ and assume that $k= \displaystyle \sum_{i=1}^{s}2^{n_{i}}$
where $0 \leq n_1 <n_2 <\ldots <n_s$.
We will prove by induction on $s$ that the following statement holds
\begin {eqnarray}
\label{scs}
u_k= \frac{1} {q_{2^{n_{1}}}q_{2^{n_{2}}}\ldots q_{2^{n_{s}}}} u_{0}=\frac{1} {q_{k}} u_{0}.
\end{eqnarray}
Indeed, it follows from
(\ref{xxx}), that (\ref{scs}) is true for $s=1$.
Now assume that (\ref{scs}) is true for all integers $1 \leq i <s$.
{\bf Case 1.} $k$ is odd.
In this case
$\displaystyle k= \sum_{i=1}^{s}2^{n_{i}}= 2^{n}-1+ l = \sum_{j=0}^{n-1}2^{j}+ l$ where $l=0$ if $n=s+1$ and $l= \displaystyle \sum_{i=n}^{s}2^{n_{i}} $ if $n \leq s$.
If $ n \geq 2$, we use (\ref{sd}) to get
$u_{k-2^{n-1}}= q_{2^{n-1}}u_k$ and by induction hypothesis, we have
$$
u_{k} = \frac{1}{q_{2^{n-1}}q_{k-2^{n-1}}}u_0=\frac{1}{q_{k}}
u_0 .$$
If $n=1$, we consider (\ref{parr}) to write
$u_k=\displaystyle \frac{1}{q_1} u_{k-1}$.
Thus, we deduce, by induction hypothesis, that
$$u_k= \frac{1}{q_1 q_{k-1}} u_{0}= \frac{1}{q_{k}}
u_0.$$
{\bf Case 2.} $k$ is even.
In this case $n_1>0$, and
by (\ref{parr}), we deduce that $$u_k= q_{2^{0}} u_{k+2^0}= q_{2^{0}} u_{k+2^1 -1}.$$
Applying (\ref{sd}), it follows that
\begin{eqnarray*}
u_k= q_{2^{0}}q_{2^{1}} u_{k+2^2-1}=\ldots &=& q_{2^{0}}q_{2^{1}}\ldots q_{2^{n_1 -1}} u_{k+2^{n_1}- 1 }\\
&=&
q_{2^{0}}q_{2^{1}}\ldots q_{2^{n_1 -1}} u_{(k-2^{n_1})+2^{n_1 +1}- 1 }.
\end{eqnarray*}
Hence
\begin{eqnarray*}
u_k &= & q_{2^{0}}\ldots q_{2^{n_1 -1}} q_{2^{n_1 +1}}u_{(k-2^{n_1})+2^{n_1 +2}- 1 }\\
& =&
q_{2^{0}}\ldots q_{2^{n_1 -1}} q_{2^{n_1 +1}} \ldots q_{2^{n_2 -1}} u_{(k-2^{n_1}- 2^{n_2})+2^{n_2 +1}- 1 }.
\end{eqnarray*}
Thus
\begin{eqnarray}
\label{rts}
u_k= \displaystyle \frac{\displaystyle \prod_{i=0}^{n_s} q_{2^{i}}}{\displaystyle \prod_{i=1}^{s} q_{2^{n_i}}} u_{2^{n_s +1}- 1}.
\end{eqnarray}
By (\ref{rts}) and (\ref{nnn}) we get $u_k= \displaystyle \frac{1}{\displaystyle \prod_{i=1}^{s} q_{2^{n_i}}} u_{0}= \frac{1}{q_{k}}
u_0.$
Therefore we have proved that for all nonnegative integers $k$,
\begin{eqnarray}
\label{dua}
u_k= \frac{1}{q_{k}}
u_0.
\end{eqnarray}
\noindent{}We conclude that $u$ is in $\ell^1(\mathbb{N})$ if and only if
$\displaystyle \sum_{k=1}^{+\infty}\left|\frac1{q_{k}(\lambda)}\right|<\infty.$
\noindent But this gives that the
residual spectrum of $S_p$ acting on $c_0$ or $ c$ satisfies
\begin{eqnarray}
\label{sss}
\sigma_{r,c_0}(S_p) \subset \left\{\lambda \in
\overline{\mathbb{D}(0,1)}: \; \sum_{k=1}^{+\infty}\left|\frac1{q_{k}(\lambda)}\right|<\infty\right\}.
\end{eqnarray}
\noindent We claim that $\displaystyle \sum_{k=1}^{+\infty}\left|\frac1{q_{k}(\lambda)}\right|<\infty$ implies
$ \vert q_{2^n-1}\vert \geq 1,$ for all integers $n \geq 1$.
Indeed, by D'Alembert's Theorem, we have
\begin{eqnarray}
\label{dalem}
\limsup \frac{\vert q_n \vert}{\vert q_{n+1} \vert} \leq 1.
\end{eqnarray}
\noindent Now assume that $n$ is even. Then $n= 2^{k_0}+\cdots + 2^{k_m}$ where $1 \leq k_0 <k_1 < \ldots <k_m$
(representation in base $2$). In this case $n+1= 2^{0}+ 2^{k_0}+\cdots + 2^{k_m}$. Using (\ref{produit}), we obtain
$\displaystyle \frac{\vert q_n \vert}{\vert q_{n+1} \vert} = \frac{1}{\vert q_{1} \vert}$
and by (\ref{dalem}), we get
\begin{eqnarray}
\label{aac}
\vert q_{1} \vert \geq 1 .
\end{eqnarray}
Since for all integers $n \geq 1$ we have
$q_{2^n}= \displaystyle \frac{1}{p} \displaystyle q_{2^{n-1}}^2-\displaystyle \left(\frac{1}{p}-1\right)$, it follows from the triangle inequality
that $\vert q_{2^n}\vert \geq 1$ for all integers $n \geq 1$.
Let $i$ be a positive integer. Since $\displaystyle 2^{i}-1= \sum_{j=0}^{i-1}2^{j}$, we obtain by (\ref {produit}) that
$q_{2^{i}-1}=q_{2^{i-1}}q_{2^{i-2}}\cdots q_1$. Hence
$$\displaystyle \vert q_{2^i-1} \vert \geq 1,\;\mbox { for any integer } i \geq 1.$$
On the other hand, consider the first coordinate of the vector $u (S_p- \lambda Id)=0.$ Then we have
$$(1-p- \lambda)u_0+ \sum_{i=1}^{+\infty} p^{i}(1-p) u_{2^i-1}=0.$$
Dividing both sides of the last equality by $p u_0$, we obtain
\begin{eqnarray}
\label{qs1}
q_1= \sum_{i=1}^{+\infty} p^{i-1}(1-p) / q_{2^i-1}.
\end{eqnarray}
We claim that there exists an integer $i_0 \in \mathbb{N}$ such that $\vert q_{2^{i_0}-1} \vert >1$. Indeed, if not, the series
$\displaystyle \sum_{i \in \mathbb{N}} \frac{1}{\vert q_{2^i-1} \vert}$ would diverge.
\noindent{}Thus
$\vert q_1 \vert \leq \displaystyle \sum_{i \neq i_0} p^{i-1}(1-p) + \frac{p^{i_0-1}(1-p)}{\vert q_{2^{i_0}-1} \vert} < \sum_{i=1}^{+\infty} p^{i-1}(1-p)=1,$
which contradicts (\ref{aac}). We conclude that the residual spectrum of $S_p$ acting on $c_0$ (resp. $c$ ) is empty.\\
\noindent{}The same proof yields that the
residual spectrum of $S_p$ acting on $\ell^{\alpha},\; \alpha > 1,$ is empty and the proof
of the proposition is complete.
\end{proof}
\begin{rem}
By (\ref {sss}), it follows that $ \lambda \in \sigma_{r, X}$, where $X=c_0$ or $c$ or $\ell^{\alpha},\; \alpha >1$,
implies $\lim \vert q_n (\lambda) \vert = + \infty$.
But this contradicts Proposition \ref{inclu}, which forces $\sigma_{r,c_0}(S_p)=\sigma_{r,l^{\alpha}}(S_p)= \emptyset$.
\end{rem}
\begin{prop}\label{Spc} The following equalities are satisfied:
$$\sigma_{c, c }(S_p)= J(f)\backslash \{1\},\; \sigma_{c, c_0 }(S_p)= \sigma_{c, l^{\alpha} }(S_p)= J(f) \mbox { for all } \alpha >1.$$
\end{prop}
\begin{proof} Assume that $X \in \{c_0,\; c\}$. Then, by Phillips Theorem, we see that the spectrum of $S_p$ in $X$ is equal to the spectrum of $S_p$ in $\ell^{\infty}$
and from Propositions \ref{specte} and \ref{Spr}, we obtain the result.
\noindent Now, assume $X= l^{\alpha},\; \alpha > 1$.
According to Propositions \ref{inclu}, \ref{specte} and \ref{Spr}, it is enough to prove that $J(f) \subset \sigma (S_p) $.
Consider $\lambda \in J(f)$. We will prove that $\lambda$ belongs to the approximate point spectrum of $S_p$.
For all integers $k \geq 2,$ put $w^{(k)}= (1,q_1 (\lambda), \ldots, q_k (\lambda), 0 \ldots 0, \ldots)^{t} \in l^{\alpha} $
where $(q_k (\lambda))_{k \geq 1}= (q_k)_{k \geq 1}$ is the sequence defined in (\ref{for3}) of the proof of Theorem \ref {spp} and
let $u^{(k)}=\displaystyle \frac{w^{(k)}} {\vert \vert w^{(k)} \vert \vert_{\alpha}}$. Then we have the following claim.
{\bf Claim:} $\displaystyle \lim_{n\rightarrow+\infty} \vert \vert (S_p- \lambda Id) u^{(2^n)}\vert \vert_{\alpha}=0.$\\
\noindent Indeed, we have
$$\forall i \in \{0,\ldots,k-1\}, ~~\left ((S_p- \lambda Id) u^{(k)}\right)_i=0.$$
Thus
\begin{eqnarray*}
\sum_{i=0}^{+\infty} \left \vert {((S_p- \lambda Id) u^{(k)})}_i \right \vert ^{\alpha}
= \displaystyle
\frac {\displaystyle \sum_{i=k}^{+\infty}\displaystyle \left
\vert \sum_{j=0}^{k}( S_p - \lambda Id)_{i,j} w^{(k)}_{j} \right \vert^{\alpha} }{ \vert \vert w^{(k)}\vert \vert_{\alpha}^{\alpha}}.
\end{eqnarray*}
Putting $a_{i,j}= \vert (S_p -\lambda Id)_{i,j}\vert$ for all $i,j$ and using (\ref{tt}), we get
\begin{eqnarray*}
\left \vert \sum_{j=0}^{k}( S_p - \lambda Id)_{i,j} w^{(k)}_{j} \right \vert ^{\alpha} \leq C \sum_{j=0}^{k} \vert (S_p - \lambda Id)_{i,j}\vert \vert w^{(k)}_{j} \vert^{\alpha}
\end{eqnarray*}
where
$C= \displaystyle \sup_{i \in \mathbb{N}} \left(\sum_{j=0}^{\infty} \vert (S_p- \lambda Id)_{i,j} \vert
\right)^{\frac {\alpha} {\alpha'}}$ and $\alpha'$
is the conjugate exponent of $\alpha .$
Observe that $C$ is a finite constant because $S_p$ is a stochastic matrix and $\lambda$ belongs to $J (f)$, which is a bounded set.
Thus
\begin{eqnarray*}
\left \vert \left \vert {(S_p- \lambda Id) u^{(k)}} \right \vert \right \vert^{{\alpha}}_{\alpha}
& \leq &
C \sum_{i=k}^{+\infty}\frac{\left( \sum_{j=0}^{k} \vert w^{(k)}_j \vert^{\alpha} \vert (S_p- \lambda Id)_{ij}\vert \right)}
{\vert \vert w^{(k)}\vert \vert_{\alpha}^{\alpha}}\\
& = &
\frac{C}{\vert \vert w^{(k)}\vert \vert_{\alpha}^{\alpha}} \sum_{j=0}^{k} \vert w^{(k)}_j \vert^{\alpha}
\sum_{i=k}^{+\infty} \vert (S_p- \lambda Id)_{ij}\vert.
\end{eqnarray*}
\noindent Now, for $k=2^n$, we will compute the terms $$ A_{kj}=\displaystyle \sum_{i=k}^{+\infty} \vert (S_p- \lambda Id)_{ij} \vert,\; 0 \leq j \leq k.$$
Assume that $0 \leq j <k =2^n.$ Then $\left(S_p- \lambda Id\right)_{ij}=( S_p)_{ij}$ for all $i \geq k$.
{\bf Case 1}: $j$ is odd. Then by Lemma \ref{Ktlemma}, $(S_p)_{ij} \ne 0$ if and only if $i=j-1$ or $i=j$.
Hence $(S_p)_{ij}=0$ for all $i \geq k$. Thus
\begin{eqnarray}
\label{t0}
A_{kj}=0.
\end{eqnarray}
{\bf Case 2}: $j=0$ . Then by Lemma \ref{Ktlemma}, we have
\begin{eqnarray}
\label{tts}
A_{kj}= \sum_{i=2^n}^{+\infty} (S_p)_{i0}= \displaystyle \sum_{i=n+1}^{+\infty}p^{i}(1-p)= p^{n+1}.
\end{eqnarray}
{\bf Case 3}: $j $ is even and $j >0$. Then $j= \varepsilon _{n-1}\ldots
\varepsilon _{s}\underbrace{0 \ldots 0}_{s}= \displaystyle \sum _{i=s}^{n-1} \varepsilon_i 2^i$ with $s \geq 1$
and $\varepsilon _{s}=1$. But by Lemma \ref{Ktlemma}, $(S_p)_{ij} \ne 0$ if and only if $i=2^m -1+j$ where $0 \leq m \leq s$. Hence $i <2^n= k.$
Therefore, in this case
\begin{eqnarray}
\label{tr}
A_{kj}=0.
\end{eqnarray}
Now assume $j=k=2^n$. In this case, we have
$A_{kj}=\vert 1-p-\lambda\vert+ \displaystyle \sum_{i=2^n+1}^{+\infty} (S_p)_{i, 2^n}.$
On the other hand, by Lemma \ref{Ktlemma}, we deduce that
$(S_p)_{i, 2^n} \ne 0$ if and only if $i= 2^n+ 2^m-1$ where $0 \leq m \leq n$
and $(S_p)_{2^n+ 2^m-1, 2^n} = p^m (1-p).$
Therefore
\begin{eqnarray}
\label{tt2}
A_{kj}= \sum_{i=2^n}^{+\infty} \vert (S_p- \lambda Id)_{i,2^n}\vert= \vert 1-p-\lambda \vert+ \sum_{m=0}^{n} p^m (1-p).
\end{eqnarray}
By (\ref{t0}),(\ref{tts}),(\ref{tr}) and (\ref{tt2}), we have for $k=2^n$ and $0 \leq j \leq k$,
\begin{eqnarray*}
A_{kj} \ne 0 \Longleftrightarrow j=0 \mbox { or } j=k=2^n.
\end{eqnarray*}
Consequently
\begin{eqnarray*}
\left \vert \left \vert {(S_p- \lambda Id) u^{(2^n)}} \right \vert \right \vert^{\alpha}_{\alpha}
& \leq & C \cdot \frac{ \vert w^{(k)}_{0} \vert ^{\alpha} A_{k0}+ \vert w^{(k)}_{k} \vert ^{\alpha} A_{kk}} {\vert \vert w^{(k)}\vert \vert_{\alpha}^{\alpha}} \\%
& = & C \cdot \frac { p^{n+1} + \vert q_{2^n} \vert^{\alpha} \left( \vert 1-p- \lambda \vert +\displaystyle \sum_{m=0}^{n} p^m (1-p)\right) }
{\vert \vert w^{(2^n)}\vert \vert_{\alpha}^{\alpha}}.
\end{eqnarray*}
\noindent We claim that $\vert \vert w^{(2^n)} \vert \vert_{\alpha} $ goes to infinity as $n$ goes to infinity. Indeed, if not, since
the sequence $\vert \vert w^{(2^n)} \vert \vert_{\alpha} $ is increasing, it must converge. Put
$w=(q_i)_{i \geq 0}$ with $q_0=1$. It follows that the sequence $(w^{(2^n)})_{n \geq 0}$ converges to $w$ in
$\ell^{\alpha}$, which means that there exists a nonzero vector $w \in \ell^{\alpha}$ such that
$(S_p-\lambda Id)w=0$. This contradicts Proposition \ref{specte}.
Now, since $\lambda$ belongs to the filled Julia set, which is a bounded set, and $(q_n)_{n \geq 0}$ is a bounded sequence, it follows
that $\vert \vert {(S_p- \lambda Id) u^{(2^{n})}}\vert \vert_{\alpha} $ converges to $0$, and the claim is proved.
We conclude that $\lambda$ belongs to the approximate point spectrum of $S_p$, and the proof of Proposition \ref{Spc} is complete.
\end{proof}
This ends the proof of Theorem \ref{spp}.
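For orientation, the entries of $S_p$ used in the estimates above (from Lemma \ref{Ktlemma}, not restated in this section) encode a simple carry rule, assuming the Killeen--Taylor description of the fallible binary machine: while adding $1$ to a state $N$ with $k$ trailing ones, each elementary carry succeeds independently with probability $p$, so $N$ moves to $N-(2^m-1)$ with probability $p^m(1-p)$ for $0 \leq m \leq k$ (in particular stays at $N$ with probability $1-p$), and reaches $N+1$ with probability $p^{k+1}$. A short Python sketch of this rule:

```python
from fractions import Fraction

def transitions(N, p):
    """Exact transition distribution of the fallible binary adding machine
    from state N, assuming the Killeen-Taylor carry rule: each elementary
    carry while adding 1 succeeds independently with probability p."""
    k = 0                       # number of trailing ones of N
    while (N >> k) & 1:
        k += 1
    dist = {}
    for m in range(k + 1):      # m successful carries, then a failure
        out = N - (2**m - 1)
        dist[out] = dist.get(out, 0) + p**m * (1 - p)
    dist[N + 1] = dist.get(N + 1, 0) + p**(k + 1)   # full success
    return dist

p = Fraction(7, 10)
d = transitions(3, p)           # N = 3 = 11 in base 2, two trailing ones
assert d[3] == 1 - p and d[2] == p*(1 - p)
assert d[0] == p**2*(1 - p) and d[4] == p**3
assert sum(d.values()) == 1     # each row of S_p is a probability vector
```

In particular, state $0$ receives mass only from the states $2^m-1$, each with weight $p^m(1-p)$, which is exactly the computation of $A_{k0}$ carried out above.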
\section*{\bf Spectrum of $S_p$ acting on the right on $\ell^1 $.}
Here, we will study the spectrum of $S_p$ acting (on the right) in $\ell^1 $.
We deduce from Proposition \ref {inclu} that the spectrum of $S_p$ on $\ell^1 $ is contained in the filled Julia set $J(f)$.
On the other hand, using the same proof as in Proposition \ref{Spc}, we obtain that $J(f)$
is contained in the approximate point spectrum of $S_p$. This yields that the spectrum of $S_p$ acting on $\ell^1$ is
equal to $J(f)$.
\begin{thm}
\label{l1}
In $\ell^1 $, the residual spectrum contains a
dense and countable subset of the Julia set $\partial J(f)$. The continuous spectrum is not empty and
is equal to the relative complement of the
residual spectrum with respect to the filled Julia set $J(f)$.
\end{thm}
\begin{proof}
The proof of Proposition \ref{Spr} shows that the residual spectrum of $S_p$ in $\ell^1 $ is equal to the point spectrum of $S_p$ (acting on the right) in ${\ell^{1}}' =\ell^{\infty} $.
By (\ref{dua}) and (\ref{qs1}),
we see that
$$\sigma_{r}(S_p) =
\left\{\lambda \in \mathbb{C},\; (q_n(\lambda)) {\rm {~and~ }} (1/ q_n (\lambda)) {\rm {~are~bounded~and~ }}
q_1= \sum_{i=1}^{+\infty} \frac{p^{i-1}(1-p)}{q_{2^i-1}}
\right \}
$$
$$
= J (f) \cap \left \{\lambda \in \mathbb{C},\; ( 1/ q_n (\lambda)) \mbox { is bounded and }
q_1= \sum_{i=1}^{+\infty} \frac{p^{i-1}(1-p)}{q_{2^i-1}}\right \}.
$$
On the other hand we have
\begin{eqnarray}
\label{fs}
q_{2^n}^{2}= f(q_{2^{n-1}}^2)=\ldots= f^{n}(q_1^{2})= f^{n+1}(\lambda),\; \forall n \geq 0.
\end{eqnarray}
Let $n \in \mathbb{N}$ and $ E_n= \{\lambda \in \mathbb{C},\; q_{2^{n}} (\lambda)=1\}$.
{\bf Claim 1}:
$\displaystyle \bigcup_{n=0}^{+\infty} E_n= \bigcup_{n=0}^{+\infty} f^{-n}\{1\}.$
Indeed, let $\lambda \in \mathbb{C}$ be such that $f^{n}(\lambda)=1$ for some integer $n \geq 1$.
Then, by (\ref{fs}), we have $q_{2^{n-1}}=1$ or $q_{2^{n-1}}=-1$.
From (\ref{for5}), we see that $q_{2^{n-1}}=-1$ implies $q_{2^{n}}=1.$
Hence $f^{-{n}}\{1\} \subset E_{n-1} \cup E_{n}.$ Since $1 \in E_n$ for all integers $n \geq0$, we conclude that,
$\bigcup_{n=0}^{+\infty} f^{-n}\{1\} \subset \bigcup_{n=0}^{+\infty} E_n .$ The other inclusion follows from (\ref{fs}).
{\bf Claim 2}:
$\displaystyle \bigcup_{n=0}^{+\infty} E_n \subset \sigma_r (S_p)$.
Indeed,
assume that $n \in \mathbb{N}$ and $\lambda \in E_n$. Then
by (\ref{for5}), we get that
\begin{eqnarray}
\label{chch}
q_{2^{k}}= 1, \; \forall k \geq n.
\end{eqnarray}
But from (\ref{chch}) and (\ref{produit}), we have that $(q_k(\lambda))_{k \geq 0} \mbox { and } (1/ q_k (\lambda))_{k \geq 0} \mbox { are bounded }.$
Moreover, we have
\begin{eqnarray*}
q_1= \sum_{i=1}^{+\infty} \frac{p^{i-1}(1-p)}{q_{2^i-1}}
&\Longleftrightarrow& q_2= \sum_{i=2}^{+\infty} \frac{p^{i-2}(1-p) q_1}{q_{2^i-1}}\\
&\Longleftrightarrow& q_{2^{k}}= \sum_{i=k+1}^{+\infty} p^{i-k-1}(1-p) \frac{ q_{2^{0}} \ldots q_{2^{k-1}}} { q_{2^i-1}}, \;~~ \forall k \geq 0 \\
&\Longleftrightarrow& q_{2^k}= \sum_{i=k+1}^{+\infty} \frac{ p^{i-k-1}(1-p)} { q_{2^{k} } q_{2^{k+1}}\ldots q_{2^{i-1}}}, \; \forall k \geq 0.
\end{eqnarray*}
Thus
$$q_1= \sum_{i=1}^{+\infty} \frac{p^{i-1}(1-p)} {q_{2^i-1}}
\Longleftrightarrow 1= \sum_{i=0}^{+\infty}p^{i}(1-p).$$
Hence $\lambda \in \sigma_r (S_p)$, and Claim 2 is proved.
Since $1$ is a repelling fixed point of $f$, it follows that $\bigcup_{n=0}^{+\infty} f^{-n}\{1\}$ is a dense subset of
the Julia set $\partial J(f)$. Combining this fact with Claims 1 and 2, we conclude that the residual spectrum contains a
dense and countable subset of the Julia set $\partial J(f)$.
On the other hand, $(p-1) ^2 \in J(f)$ since $f((p-1)^2)= (p-1)^2$, but $(p-1) ^2 \not \in \sigma_r(S_p)$ because for any positive integer $n, \; q_{2^n}((p-1) ^2)= p-1,$
which implies that $\lim q_n= 0$ and hence $(1 / q_n)$ is not bounded. Thus $(p-1)^2 \in \sigma_c(S_p)$.
This finishes the proof of the theorem.
\end{proof}
\begin{conj}
We conjecture that the residual spectrum in $\ell^1 $ equals the set $\bigcup_{n=0}^{+\infty} f^{-n}\{1\}.$
\end{conj}
\section*{\bf Spectrum of $S_p$ acting on the left. }
Phillips Theorem combined with Proposition \ref{residual}, Theorems \ref{spp} and \ref{l1},
leads to the following result.
\begin{thm}
The spectrum of $S_p$ (acting on the left) in the spaces $c_0, c, \ell^{\alpha}$, where $1 \leq \alpha \leq +\infty$, equals the filled Julia set $J(f)$.
Precisely:
In $c_0$ and $\ell^{\alpha}$, where $1 \leq \alpha < +\infty$, the spectrum of $S_p$ equals the continuous spectrum of $S_p$.
In $c$, the point spectrum of $S_p$ equals $\{1\}$ and the continuous spectrum equals $J(f) \backslash \{1\}$.
In $\ell^{\infty}$, the point spectrum equals the residual spectrum of $S_p$ in $\ell^1$.
\end{thm}
\begin{center}
\includegraphics[scale=0.5]{p07.eps}\\
{\footnotesize { \hspace{5.5 em} Fig.3. Filled Julia set: $p=0.7$}}
\end{center}
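Figure 3 can be reproduced numerically by an escape-time test. Assuming, as the relation $q_{2^n}^{2}=f^{n+1}(\lambda)$ in (\ref{fs}) suggests, that $f(z)=\left((z-(1-p))/p\right)^{2}$, a point $\lambda$ lies in the filled Julia set exactly when its forward orbit under $f$ stays bounded; the escape radius and iteration count below are ad hoc numerical choices, not part of the text.

```python
def in_filled_julia(lam, p, max_iter=200, escape=10.0):
    """Numerical test of whether lam lies in the filled Julia set of
    f(z) = ((z - (1-p)) / p)**2, i.e. whether the orbit of lam under f
    stays bounded.  Escape radius and iteration count are heuristic."""
    z = lam
    for _ in range(max_iter):
        if abs(z) > escape:
            return False
        z = ((z - (1 - p)) / p) ** 2
    return True

p = 0.7                                   # the value used in Fig.3
assert in_filled_julia(1.0, p)            # 1 is a fixed point of f
assert in_filled_julia((p - 1) ** 2, p)   # (p-1)^2 is also a fixed point
assert not in_filled_julia(5.0, p)        # far from J(f), the orbit escapes
```

Scanning a grid of complex $\lambda$ with this test produces a picture of the set shown in Fig.3.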
\section{ Fibonacci Stochastic adding machine (see \cite{Messaoudi-Smania})}
Let us consider the
Fibonacci sequence $(F_n)_{n \geq 0}$ given by $F_0=1,\; F_1=2$ and the relation
$$F_n= F_{n-1}+ F_{n-2}, \;\; \forall n \geq 2.$$
Using the greedy algorithm, we can write (see \cite{Z}) every
nonnegative integer $N$ in a unique way as $ N=\displaystyle \sum_{i=0}^{k(N)}
\varepsilon_{i}(N)F_i$ where $\varepsilon _{i}(N)= 0$ or $1$ and
$\varepsilon _{i}(N)\varepsilon _{i+1}(N) = 0, \; \forall 0 \leq
i \leq k(N)-1$.
It is known that the addition of $1$ in base $(F_n)_{n \geq 0}$
(called Fibonacci adding machine) is given by a finite state automaton transductor on $A^{*} \times A^{*}$ where $A=\{0,1\}$
(see Fig.4). This transductor is formed by two states (an
initial state $I$ and a terminal state $T$). The initial state is
connected to itself by $2$ arrows. One of them is labeled by
$(10,00)$ and the other by $(101,000)$. There are also $2$ arrows
going from the initial state to the terminal one. One of these
arrows is labeled by $(00,01)$ and the other by $ (001, 010)$. The
terminal state is connected to itself by $2$ arrows. One of them
is labeled by $(0,0)$ and the other by $(1,1)$.
Assume that $N=
\varepsilon_{n}\ldots \varepsilon_0$. To find the digits of $N+1$,
we will consider
the finite path $c=
(p_{k+1}, a_{k}/ b_{k}, p_{k}) \ldots (p_2, a_1/ b_1, p_1)(p_1, a_0
/ b_0, p_0)$ where $ p_i \in \{I, T\},\; p_0= I,\; p_{k+1}= T,\;
a_i, b_i \in A^{*}$ where $A=\{0,1\}$ and the words $a_k \ldots a_0$ and $b_k \ldots
b_0$ have no two consecutive $1$'s. Moreover, $\ldots 0\ldots 0 a_k
\ldots a_0= \ldots 0 \ldots 0 \varepsilon_{n}\ldots \varepsilon_0$.
Hence $N+1=\varepsilon'_{n}\ldots \varepsilon'_0$, where
$$\ldots
0\ldots 0 b_k \ldots b_0= \ldots 0\ldots 0 \varepsilon'_{n}\ldots
\varepsilon'_0.$$
Example: If $N= 10= 10010$ then
$$N \mbox { corresponds
to the path } (T, 1/1, T) \; (T, 00/01, I) \; (I, 10/00,I) .$$
Hence $N+1= 10100= 11.$
\begin{center}
\includegraphics[width=15cm,height=6cm,keepaspectratio=true]{fig01tran.eps}
{ \footnotesize {Fig.4. Transductor of Fibonacci adding machine
}}
\end{center}
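The greedy (Zeckendorf) expansion and the worked example above ($10=10010$ and $10+1=11=10100$) can be checked directly, without the transductor; the digit width $n=20$ in this sketch is an arbitrary choice.

```python
def fibs(n):
    """Fibonacci numbers F_0=1, F_1=2, F_k=F_{k-1}+F_{k-2}, up to F_{n-1}."""
    F = [1, 2]
    while len(F) < n:
        F.append(F[-1] + F[-2])
    return F

def zeckendorf(N, n=20):
    """Greedy digits eps_{n-1}...eps_0 of N in the base (F_k); by
    Zeckendorf's theorem no two consecutive digits equal 1."""
    digits = []
    for f in reversed(fibs(n)):   # most significant digit first
        if f <= N:
            digits.append(1)
            N -= f
        else:
            digits.append(0)
    return digits

def as_string(digits):
    return ''.join(map(str, digits)).lstrip('0') or '0'

assert as_string(zeckendorf(10)) == '10010'   # 10 = F_4 + F_1 = 8 + 2
assert as_string(zeckendorf(11)) == '10100'   # 11 = F_4 + F_2 = 8 + 3
# so adding 1 in base (F_k) turns 10010 into 10100, as the transductor computes
```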
In \cite{Messaoudi-Smania}, the authors define the stochastic adding machine in the following way:
Consider the ``probabilistic''
transductor $\mathcal{T}_{p}$ (see Fig.5), where $0 < p < 1,$ defined in
the following manner.
The states of $\mathcal{T}_{p}$ are $I$ and $T$. The labels are of the
form $(0/0, 1), (1/1, 1),$\\
$ (a/b, p)$ or $(a/a, 1-p)$ where $a/b$ is a label in $\mathcal{T}$.
The labeled edges in $\mathcal{T}_{p}$ are of the form $ (T,
(x/x, 1), T)$ where $x \in \{0,1\}$ or of the form $ (r, (a/b, p),
q)$ or $(T, (a/a, 1-p), q)$ where $ (r, a/b , q)$ is a labeled edge
in $\mathcal{T}$, with $q = I$.
The stochastic process $\psi(N)$ is defined by $\psi(N)=
\sum_{i=0}^{+\infty}r_{i} (N) F_{i}$ where $(r_i(N))_{i \geq 0}$ is an
infinite sequence of $0$'s and $1$'s without two consecutive $1$'s and
with finitely many nonzero terms.
The sequence $(r_i(N))_{i \geq 0}$ is defined in the following way:
Put $r_i(0)=0$ for all $i$, and assume that we have defined
$(r_i(N-1))_{i \geq 0},\; N \geq 1$. In the transductor $\mathcal{T}_{p}$, consider a path
$$\ldots (T, (0 /0, 1), T)\ldots (T, (0 /0, 1), T)(p_{n+1}, (a_n / b_n, t_n), p_{n}) \ldots
(p_1, (a_0 / b_0, t_0), p_0)$$
\noindent{}where $p_0 = I$ and
$p_{n+1}= T,$ such that the words $\ldots r_1(N-1) r_0(N-1)$ and $\ldots 00a_n \ldots
a_0$ are equal.
We define the sequence $(r_i(N))_{i \geq 0}$ as the infinite
sequence whose terms are $0$ or $1$ such that
$\ldots r_1(N) r_0(N)= \ldots 00b_n \ldots b_0
.$
We remark that $\psi(N-1)$ transitions to $\psi(N)$ with probability
$p_{\psi(N-1)\psi(N)}= t_n t_{n-1} \ldots t_0.$
Example 1: If $N= 10= 10010$, then, in the transductor of
Fibonacci adding machine, $N$ corresponds to the path $ (T, 1/1, T)
\; (T, 00/01, I) \; (I, 10/00,I) .$
In the stochastic Fibonacci adding machine, we have the following
paths (see Fig.5):
\begin {enumerate}
\item
$(T, (1/1,1), T) \; (T, (0/0,1), T)(T, (0/0,1), T)\; (T, (10/10,
1-p),I).$ In this case $N= 10$ transitions to $10010= 10$ with
probability $1-p$.
\item
$ (T, (1/1,1), T)\; (T, (00/00,1-p), I)\; (I,
(10/00, p),I)$. In this case $N= 10$ transitions to $10000= 8$ with
probability $p(1-p)$.
\item
$ (T, (1/1,1), T)\; (T, (00/01,p), I)\; (I, (10/00, p),I)$. In this
case $N= 10$ transitions to $10100=11$ with probability $p^2$.
\end{enumerate}
\begin{center}
\includegraphics{fig02tran.eps}
{\footnotesize {Fig.5. Transductor of Fibonacci fallible adding
machine }}
\end{center}
By using the transductor $\mathcal{T}_p$, we can prove the following
result (see \cite {Messaoudi-Smania}).
\begin{prop}
\label{proba}
Let $N$ be a nonnegative integer. Then the following results are
satisfied.
\begin{enumerate}
\item
$N$ transitions to $N$ with probability $1-p$.
\item
If $N=\varepsilon _{k}\ldots
\varepsilon _{2}00,\; k \geq 2,$ then $N$ transitions to $N+1=
\varepsilon _{k}\ldots \varepsilon _{2}01 $ with probability $p$.
\item
If $N=\varepsilon _{k}\ldots
\varepsilon _{t}00\underbrace{1010\ldots 1010}_{2s}$ with $s \geq 1$
and $k \geq t \geq 2s+2 $, then $N$ transitions to $N+1=
\varepsilon _{k}\ldots \varepsilon _{t}01\underbrace{0 \ldots
00}_{2s} $ with probability $p^{s+1}$,
and $N$ transitions to $N- \sum_{i=1}^{m}F_{2i-1}= N-F_{2m}+1= \varepsilon _{k}\ldots
\varepsilon _{t}00\underbrace{10 \ldots 10}_{2s-2m}\underbrace{0
\ldots 00}_{2m}$,\\
$ 1 \leq m \leq s$,
with probability
$p^{m}(1-p).$
\item
If $N=\varepsilon _{k}\ldots
\varepsilon _{t}0\underbrace{0101\ldots 0101}_{2s},\; s \geq 2$ and $k \geq t \geq 2s+1 $,
then $N$ transitions to $N+1= \varepsilon _{k}\ldots
\varepsilon _{t}0\underbrace{1000\ldots 000}_{2s}$ with probability
$p^{s}$,
and $N$ transitions to $N- \sum_{i=0}^{m}F_{2i}= N-F_{2m+1}+1= \varepsilon _{k}\ldots
\varepsilon _{t}00\underbrace{10 \ldots 10}_{2s-2m}\underbrace{0
\ldots 00}_{2m-1}, \; 2 \leq m \leq s$ with probability
$p^{m-1}(1-p).$
\item
If $N=\varepsilon _{k}\ldots \varepsilon _{3}001,\; k \geq 3,$ then
$N$ transitions to $N+1= \varepsilon _{k}\ldots \varepsilon _{3}010
$ with probability $p$.
\end{enumerate}
\end{prop}
By Proposition \ref{proba}, we can construct the transition graph and the transition operator $S_p$ associated to it (displayed in Fig.6).
$$\hspace{-5 mm }\tiny {
\left(\begin{array}{cccccccccccccccccc}
1-p& p&0&0&0&0 &0&0 &0&0 &0&0 &0\ldots \\
0& 1-p & p&0&0 &0&0 &0&0 &0&0&0 &0\ldots \\
p(1-p) & 0 & 1-p&p^2&0&0&0 &0&0 &0&0 &0 &0 \ldots\\
0& 0& 0& 1-p&p&0&0 &0&0 &0&0 &0 &0 \ldots\\
p(1-p)& 0& 0& 0& 1-p&p^2&0&0 &0&0 &0&0 &0 \ldots\\
0& 0& 0& 0& 0 & 1-p&p&0&0 &0&0 &0 &0 \ldots\\
0& 0& 0& 0& 0 &0& 1-p&p&0&0 &0&0 &0 \ldots\\
p^2(1-p)& 0& 0& 0&0& p(1-p)&0& 1-p&p^3&0&0 &0 &0 \ldots\\
0& 0& 0& 0& 0 & 0& 0& 0 & 1-p&p&0&0 &0 \ldots\\
0& 0& 0& 0& 0 &0& 0& 0& 0& 1-p&p&0 &0 \ldots\\
0& 0& 0& 0&0& p(1-p)&0&0& p(1-p)&0& 1-p&p^2 &0 \ldots\\
\vdots &\vdots &\vdots &\vdots &\vdots &\vdots &\vdots &\vdots
&\vdots &\vdots &\vdots &\vdots &\vdots
\end{array}
\right)}
$$$$
{\rm {\footnotesize {Fig.6.~~Transition~operator~of~the~stochastic~adding~machine~in~Fibonacci~base.}}}
$$
\normalsize
\begin{rem}
In \cite{Messaoudi-Smania}, the authors prove that
the point spectrum of $S_p $ in
$\ell^{\infty}$ is equal to the set $ \mathcal{K}_p= \{ \lambda \in
\mathbb {C}, \; (q_n (\lambda))_{n \geq 1} \mbox { is bounded }\}$,
where $q_{F_{0}}(z)= z,\; q_{F_{1}} (z)=z^2,\; q_{F_{k}} (z)= \displaystyle \frac{1}{p}q_{F_{k-1}} (z)q_{F_{k-2}} (z)-\displaystyle\frac{1-p}{p},$ for all $k \geq 2$,
and for all nonnegative integers $n$, we have $q_n (z)= q_{F_{k_{1}}} \ldots q_{F_{k_{m}}} $ where $ F_{k_{1}}+ \cdots+ F_{k_{m}}$ is the Fibonacci representation of $n$.
In particular, $\sigma_{pt}(S_{p})$ is contained in the set
\begin{eqnarray*}
\mathcal {E}_p &= &
\{
\lambda \in \mathbb{C} \; \vert \; (q_{F_{n}} (\lambda))_{n \geq 1} \mbox { is bounded } \}\\
& = &
\{
\lambda \in \mathbb{C} \; \vert \; ( \lambda_1, \lambda) \in
J (g)\}
\end{eqnarray*}
where $J(g)$ is the filled Julia set of the function $g:
\mathbb{C}^2 \mapsto \mathbb{C}^2 $ defined by:
$g(x,y)=(\frac{1}{p^{2}}(x- 1+p)(y- 1+p), x)$ and $\lambda_1=1-p
+\frac{ (1- \lambda- p)^2}{p}.$
They also investigated the topological properties of $\mathcal {E}_p$.
\end{rem}
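The recursion for $q_{F_{n}}$ just stated makes $\mathcal{E}_p$ easy to explore numerically. The sketch below uses exact rational arithmetic; the escape bound and iteration count are ad hoc choices, and $p=5/8=0.625$ is the value of Fig.7.

```python
from fractions import Fraction

def q_F_sequence(lam, p, n_terms=10):
    """First terms of (q_{F_n}(lam)) via q_{F_0}=lam, q_{F_1}=lam^2 and
    q_{F_k} = (1/p) q_{F_{k-1}} q_{F_{k-2}} - (1-p)/p  (recursion above)."""
    q = [lam, lam * lam]
    for _ in range(n_terms - 2):
        q.append(q[-1] * q[-2] / p - (1 - p) / p)
    return q

def in_E_p(lam, p, n_terms=40, bound=10**6):
    """Heuristic membership test for E_p = {lam : (q_{F_n}(lam)) bounded}:
    declare escape once |q_{F_n}| exceeds an ad hoc bound."""
    q0, q1 = lam, lam * lam
    for _ in range(n_terms):
        if abs(q1) > bound:
            return False
        q0, q1 = q1, q1 * q0 / p - (1 - p) / p
    return True

p = Fraction(5, 8)                                         # p = 0.625
assert all(x == 1 for x in q_F_sequence(Fraction(1), p))   # q_{F_n}(1) = 1
assert in_E_p(Fraction(1), p)
assert not in_E_p(Fraction(3), p)    # (q_{F_n}(3)) blows up quickly
```

Scanning complex $\lambda$ with this test (in floating point) gives pictures of $\mathcal{E}_p$ of the kind shown in Figs.\ 7 and 8.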
\begin{prop}
\label{ptfib}
The operator $S_p$ is well defined in the Banach spaces $c_{0},c$
and $\ell^{\alpha},\; \alpha \geq 1$.
The point spectra of $S_p$, acting in the spaces $c_{0}$
and $\ell^{\alpha}$ associated to the stochastic Fibonacci adding machine,
are empty. In $c$, the point spectrum equals $\{1\}$.
\end{prop}
\begin{proof}
By Proposition \ref{proba}, we can prove that
the sum of coefficients of every column of $S_p$ is bounded by a fixed constant $M >0$.
Indeed, let $n \in \mathbb{N}$ and $s_n= \sum_{i=0}^{+\infty} p_{i,n}$ be the sum of coefficients of the $n$-th column.
If $n= \varepsilon _{k}\ldots \varepsilon _{2}01 $ or $n= \varepsilon _{k}\ldots \varepsilon _{3}010 $ (Fibonacci representation), then by 1), 2) and 5) of Proposition \ref{proba}, we have $s_n=1$.
If $n= \varepsilon _{k}\ldots \varepsilon _{t}01\underbrace{0 \ldots
00}_{s},\; s \geq 2 $, then for all integers $i \in \mathbb{N},\; p_{i,n} >0$ implies that $i= n$ or $i=n-1$ or $i= \varepsilon _{k}\ldots \varepsilon _{t}01 \underbrace {0 \ldots 0}_{s-2m} \underbrace{01 \ldots
01}_{2m},\; s \geq 2m$ or $i= \varepsilon _{k}\ldots \varepsilon _{t}01 \underbrace {0 \ldots 0}_{s-2m} \underbrace{10 \ldots
10}_{2m},\; s \geq 2m$.
Hence $ s_n \leq 1-p + p^{2}+ 2 \sum_{m=1}^{\infty} p^{m}(1-p) \leq 1+ 2 p.$
If $n=0$, then $s_n \leq 1+p.$
On the other hand, since $S_p$ is a stochastic matrix, by Proposition \ref{defia}, $S_p$ is well defined in the spaces $c_{0}$ and $c$
(resp. in $\ell^{\alpha},\; \alpha \geq 1$).
Now, let $\lambda$ be an eigenvalue of $S_p$ in $ X$, where $X \in \{c_{0}, c, \ell^{\alpha},\; \alpha \geq 1\}$, associated to the eigenvector
$v= (v_i)_{i \geq 0} \in X$. Since the
transition probability from any nonnegative integer $i$ to any
integer $i+k,\; k \geq 2$ is $p_{i,i+k}= 0$ (see Proposition
\ref{proba}), the operator $S_p$ satisfies $(S_{p})_{i,i+k}=0$ for all $i,k
\in \mathbb{N}$ with $k \geq 2$. Thus for every integer $k \geq 1,$
we have
\begin{eqnarray}
\label{rrr} \sum_{i=0}^{k}p_{k-1,i} v_i = \lambda v_{k-1}.
\end{eqnarray}
Then, we can prove by induction on $k$ that for any
integer $k \geq 1$, there exists a complex number $c_{k}=c_k(p,
\lambda)$ such that
\begin{eqnarray} \label{for4}
v_k= c_k v_0.
\end{eqnarray}
Using the fact that the matrix $S_p$ is self-similar, we can prove that $c_k= q_k$ for all integers $k \in \mathbb{N}$ (see Theorem 1, page 303, \cite{Messaoudi-Smania}).
Since $$ q_{F_{n}} (z)= \displaystyle \frac{1}{p}q_{F_{n-1}} (z)q_{F_{n-2}} (z)-\displaystyle\frac{1-p}{p},\; \forall n \in \mathbb{N},$$
the sequence $(q_{F_{n}})$ cannot converge to $0$ as $n$ goes to infinity, and we obtain that
the point spectrum of $S_p$ acting in $c_{0}$
(resp. in $\ell^{\alpha},\; \alpha \geq 1$) is
empty. Using the same idea as in Proposition \ref{specte}, we see that $\sigma_{pt, c}= \{1\}$.
\end{proof}
\begin{rem}
By Phillips' Theorem and duality, it follows that the spectra of $S_p$ acting in $X$, where $X \in \{ c_{0},\; c,\; \ell^{1},\;
\ell^{\infty}\}$, associated to the stochastic Fibonacci adding machine are equal.
\end{rem}
\begin{thm}
\label{spfib}
The spectra of $S_p$ acting in $X$, where $X \in \{\ell^{\infty}, c_{0}, c, \ell^{\alpha}, \alpha \geq 1\}$, contain the set $\mathcal{E}_{p}= \{\lambda \in \mathbb{C},\;
(q_{F_n}(\lambda) )_{n \geq 0} \mbox { is bounded } \}$.
\end{thm}
\begin{proof}
The proof is similar to the proof of Proposition \ref{Spc} and will be done in the case $\ell^{\alpha},\; \alpha > 1$.
Let $\lambda \in \mathcal{E}_p $, and let us prove that $\lambda$ belongs to the approximate point spectrum of $S_p$ in $\ell^{\alpha},\; \alpha > 1$.
\noindent For every integer $k \geq 2,$ consider $w^{(k)}= (1,q_1 (\lambda), \ldots, q_k (\lambda), 0 \ldots 0, \ldots)^{t} \in \ell^{\alpha} $
where $(q_k (\lambda))_{k \geq 1}= (q_k)_{k \geq 1}$ is the sequence defined in the proof of Proposition \ref {ptfib}.
Let $u^{(k)}=\displaystyle \frac{w^{(k)}} {\vert \vert w^{(k)} \vert \vert_{\alpha}}$; then we have the following claim.
{\bf Claim:} $\displaystyle \lim_{n\rightarrow+\infty} \vert \vert (S_p- \lambda Id) u^{(F_n)}\vert \vert_{\alpha}=0.$\\
By using the same proof as in Proposition \ref{Spc}, we have
\begin{eqnarray*}
\left \vert \left \vert {(S_p- \lambda Id) u^{(F_n)}} \right \vert \right \vert^{{\alpha}}_{\alpha}
\leq
\frac{D}{\vert \vert w^{(F_n)}\vert \vert_{\alpha}^{\alpha}} \sum_{j=0}^{F_n} \vert w^{(F_n)}_j \vert^{\alpha} B_{F_n,j}
\end{eqnarray*}
where $ D$ is a positive constant and $ B_{F_n, j}= \sum_{i=F_n}^{+\infty} \vert (S_p- \lambda Id)_{ij}\vert$.
We can prove, in the same manner as in Proposition \ref{Spc}, that for $0 \leq j \leq F_n$,
\begin{eqnarray*}
B_{F_n, j} \ne 0 \Longleftrightarrow j=0 \mbox { or } j=F_n.
\end{eqnarray*}
Indeed,
if $j \in \{1, \ldots, F_n -1\}$, then since $ i \geq F_n$, we have $(S_p- \lambda Id)_{ij}= p_{i,j}.$
If the Fibonacci representation of $j$ is $j= \varepsilon_k \ldots \varepsilon_{2} 01$ or $j= \varepsilon_k \ldots \varepsilon_{t} 1 0 \ldots0$, it is
easy to see by Proposition \ref{proba} that $ p_{i,j} \ne 0$ implies $i < F_n$.
On the other hand, if $j=0$ then
$B_{F_n, j}= \sum_{l=F_n}^{+\infty} p _{l,0} $.
Since $p _{l,0} \ne 0$ if and only if $l= F_i -1$ and $p _{F_i -1,0} = p^{\lceil i/2 \rceil} (1-p)$, we have
$B_{F_n, j} \leq 2 \displaystyle \sum_{i=m}^{+\infty}p^{i}(1-p)= 2 p^{m}$ where $m= \lceil (n+1)/2 \rceil.$
Now assume $j=F_n$. In this case, we have
$B_{F_n,j}=\vert 1-p-\lambda\vert+ \displaystyle \sum_{i=F_n+1}^{+\infty} p_{i, F_n}.$
On the other hand, by Proposition \ref{proba}, we deduce that
$p_{i, F_n} \ne 0$ if and only if $i= F_n+ F_m-1$ where $0 \leq m \leq n$
and $p_{F_n+ F_m-1, F_n} = p^{\lceil m/2 \rceil} (1-p).$
Therefore
\begin{eqnarray}
\label{t2}
B_{F_n, F_n}= \vert 1-p-\lambda \vert+ \sum_{m=0}^{n} p^{ \lceil m/2 \rceil} (1-p) \leq \vert 1-p-\lambda \vert+ 2.
\end{eqnarray}
Hence
\begin{eqnarray*}
\left \vert \left \vert {(S_p- \lambda Id) u^{(F_n)}} \right \vert \right \vert^{\alpha}_{\alpha}
\leq D \frac {2 p^{m} + \vert q_{F_n} \vert^{\alpha} \left( \vert 1-p- \lambda \vert +2 \right ) }
{\vert \vert w^{(F_n)}\vert \vert_{\alpha}^{\alpha}}.
\end{eqnarray*}
\noindent Since $\vert \vert w^{(F_n)} \vert \vert_{\alpha} $ goes to infinity as $n$ goes to infinity and $(q_{F_{n}})_{n \geq 0}$ is bounded, it follows
that $\vert \vert {(S_p- \lambda Id) u^{(F_n)}}\vert \vert_{\alpha} $ converges to $0$.
Therefore $\lambda$ belongs to the approximate point spectrum of $S_p$. Thus the spectrum of $S_p$ acting
on $\ell^{\alpha},\; \alpha >1$, contains $\mathcal{E}_{p}$.
This finishes the proof for $\ell^{\alpha},\; \alpha >1$. The case $\ell^1$ can be handled in the same way,
the details being left to the reader.
\end{proof}
{\bf Open questions.}
We are not yet able to compute the residual and continuous spectrum of $S_p$
acting in the Banach spaces $\ell^{\infty},\; c_{0},\; c$
or in $\ell^{\alpha},\; \alpha \geq 1$.
We conjecture that $\sigma(S_{p})= \mathcal{E}_p$. Moreover, in the case of $\ell^{\infty}$
we conjecture that the residual spectrum is empty and
the continuous spectrum is the set $\mathcal{E}_p \setminus \mathcal{K}_{p}.$
The difficulty here is that the matrix $S_p$ is not bi-stochastic.
One may also look for a characterization of all real numbers $0 < p <1$ for which $\mathcal{E}_p \neq \mathcal{K}_{p}.$
\begin{thank}
The first author would like to express heartfelt thanks to Albert Fisher, Eduardo Garibaldi,
Paulo Ruffino and Margherita Disertori for stimulating discussions on the subject.
It is a pleasure for him to acknowledge the warm hospitality of the
UNESP University in S\~ao Jos\'e do Rio Preto, Campinas University and USP in S\~ao Paulo (Brazil), where most of this
work was done.
The second author would like to thank the University of Rouen, where part of this
work was carried out.
\end{thank}
\begin{center}
\includegraphics[width=6cm,height=6cm,keepaspectratio]{1-6.eps}
{\footnotesize \hspace {-13em} { Fig.7. $p=0.625$}}
\end{center}
\begin{center}
\includegraphics[width=6cm,height=6cm,keepaspectratio]{1-61.eps}
{\footnotesize \hspace {-13em} { Fig.8. $p=0.621$}}
\end{center}
\end{document}
\begin{document}
\begin{abstract}
In this note we point out that the definition of the universal enveloping dialgebra for a Leibniz algebra is consistent with the interpretation of a Leibniz algebra as a generalization not of a Lie algebra, but of the adjoint representation of a Lie algebra. From this point of view, the formal integration problem of Leibniz algebras is, essentially, trivial.
\end{abstract}
\title{A comment on the integration of Leibniz algebras}
\maketitle
A Leibniz algebra is a vector space with a bilinear bracket satisfying the Leibniz identity
$$[[x,y],z]=[[x,z],y]+[x,[y,z]].$$
A Leibniz algebra whose bracket is antisymmetric is a Lie algebra.
J.-L.\ Loday, who defined Leibniz algebras, suggested that they arise as tangent structures to some hypothetical objects he called ``coquecigrues'', in the same way as the Lie algebra structure arises
on the tangent space to a Lie group at the unit element.
The hunt for coquecigrues has resulted in various trophies such as \cite{KW}. However, the unsatisfying feature of the integration method in \cite{KW} is that in the case when the Leibniz algebra is a Lie algebra it does not produce a Lie group.
The purpose of this note is to point out that this feature is consistent with the definition of the universal enveloping dialgebra of a Leibniz algebra. Namely, we show that the usual procedure of formal Lie integration via distribution algebras, as described in \cite{Serre}, can be applied in the context of Leibniz algebras. For Leibniz algebras with an antisymmetric bracket, that is, for Lie algebras, this integration method produces not the corresponding Lie group but, rather, the group together with its adjoint representation.
It will be more convenient for us to replace Leibniz algebras by a wider class of objects, namely, Lie algebras in the Loday-Pirashvili category of linear maps (2-vector spaces). We shall see that such a Lie algebra integrates to a triple that consists of a Lie group $G$, its right representation $\rho$ and a morphism of $\rho$ to the right adjoint representation $\mathrm{Ad}^{-1}_G$.
In a way, the output of this integration procedure is trivial. A Leibniz algebra can be thought of as a right ${\mathfrak{g}}$-module over the right adjoint representation of ${\mathfrak{g}}$ for some Lie algebra ${\mathfrak{g}}$ and this obviously integrates to a right representation of the corresponding Lie group $G$ with a morphism to the right adjoint representation of $G$. The main result of this note says that this kind of integration is the natural consequence of considering dialgebras as the category of universal enveloping objects for Leibniz algebras.
It is worth pointing out that we do not claim here that Leibniz algebras cannot be integrated so as to produce Lie groups in the special case of Lie algebras. Indeed, a {\em local} integration procedure of this type was described by S.\ Covez in \cite{C}, who developed the idea of M.\ Kinyon \cite{K}. It is shown in \cite{C} that any Leibniz algebra can be locally integrated to a local augmented Lie rack; this integration method applied to a Lie algebra produces a local Lie group. The algebraic structure on the distributions supported at the unit of a local augmented Lie racks should produce another version of the universal enveloping algebra for Leibniz algebras.
All vector spaces and algebras will be assumed to be defined over a field of characteristic zero.
\section{Lie algebras in the Loday-Pirashvili category}
\subsection{The Loday-Pirashvili category}
The Loday-Pirashvili category ${\mathcal{LM}}$ (or the category of 2-vector spaces), defined in \cite{LP}, has as objects linear maps $(V\to W)$ between vector spaces. A morphism from $(V\to W)$ to $(V'\to W')$
is a commutative diagram
$$
\begin{array}{ccc}
V&\to&V'\\
\downarrow&&\downarrow\\
W&\to&W'.
\end{array}
$$
There is a tensor product defined as
$$(V\xrightarrow{\delta} W)\otimes(V'\xrightarrow{\delta'} W')=((V\otimes W' + W\otimes V')\xrightarrow{\delta\otimes \text{id}+\text{id}\otimes \delta'} W\otimes W'),$$
where, for the sake of readability, we write $+$ for the direct sum. If objects of ${\mathcal{LM}}$ are thought of as chain complexes concentrated in degrees 1 and 0, this tensor product is just the product of chain complexes with the degree 2 part thrown away. We shall refer to the vector spaces $V$ and $W$ as the degree 1 and degree 0 parts of $(V\to W)$ respectively.
The interchange automorphism $\tau$ defined on $(V\to W)^{\otimes 2}$ acts by
$$\tau( x\otimes y' + y\otimes x')= x'\otimes y + y'\otimes x$$
in degree 1 and by $\tau(x\otimes x')=x'\otimes x$ in degree 0. The interchange automorphism gives an action of the symmetric group $\Sigma_n$ on the $n$th tensor power of an object in ${\mathcal{LM}}$. This allows one to define symmetric and exterior powers: the symmetric power is universal for the morphisms from the tensor power that are invariant under this action, and the exterior power for the morphisms that change by the sign representation of $\Sigma_n$.
In particular, $$S^k(V\to W)=(S^{k-1}W\otimes V \to S^k W),$$ and
$$\Lambda^k(V\to W)=(\Lambda^{k-1}W\otimes V \to \Lambda^k W),$$
see \cite{LP} for details. It is easy to see that, similarly to the isomorphism $S(W+W')=S(W)\otimes S(W')$ there is an isomorphism
$$S\left((V+V')\xrightarrow{\delta+\delta'}(W+W')\right)=S(V\xrightarrow{\delta} W)\otimes S(V'\xrightarrow{\delta'} W')$$
for any pair of objects $(V\xrightarrow{\delta} W)$ and $(V'\xrightarrow{\delta'} W')$ in ${\mathcal{LM}}$.
\subsection{Lie algebras in ${\mathcal{LM}}$ and their universal enveloping algebras}
A Lie algebra in ${\mathcal{LM}}$ is a linear map $\mu:(V\to W)^{\otimes 2}\to (V\to W)$ which is antisymmetric (that is, which factors through $\Lambda^2(V\to W)$) and satisfies the Jacobi identity
$$\mu(1\otimes\mu)-\mu(\mu\otimes 1)+\mu(\mu\otimes 1)(1\otimes\tau)=0.$$
In general, a Lie algebra in ${\mathcal{LM}}$ is an object $(M\xrightarrow{\delta}{\mathfrak{g}})$ with ${\mathfrak{g}}$ a Lie algebra, $M$ a right ${\mathfrak{g}}$-module and $\delta$ a ${\mathfrak{g}}$-equivariant map. A Leibniz algebra ${\mathfrak{g}}$ gives rise to a Lie algebra $({\mathfrak{g}}\to{\mathfrak{g}}_{Lie})$ in ${\mathcal{LM}}$. Conversely, given a Lie algebra $(M\xrightarrow{\delta}{\mathfrak{g}})$ in ${\mathcal{LM}}$ one can define the Leibniz algebra bracket on $M$ by
$$[x,y]=[x,\delta y],$$
where the bracket on the right-hand side denotes the right action of ${\mathfrak{g}}$ on $M$, see \cite{LP}.
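As a concrete illustration (hypothetical, not taken from the text): take for ${\mathfrak{g}}$ the two-dimensional non-abelian Lie algebra with basis $e,h$ and $[e,h]=e$, let $M={\mathfrak{g}}$ with the adjoint action, and $\delta=\mathrm{id}$. The resulting bracket $[x,y]=[x,\delta y]$ is then just the Lie bracket, and a brute-force check confirms the Leibniz identity on sample vectors.

```python
# Basis e = index 0, h = index 1 of the 2-dim non-abelian Lie algebra:
# [e,h] = e, [h,e] = -e.  bracket[(i,j)] is the coordinate vector of
# [basis_i, basis_j].
bracket = {
    (0, 0): [0, 0], (0, 1): [1, 0],
    (1, 0): [-1, 0], (1, 1): [0, 0],
}

def lb(x, y):
    """Bilinear extension of the bracket to coordinate vectors."""
    out = [0, 0]
    for i in range(2):
        for j in range(2):
            c = x[i] * y[j]
            out[0] += c * bracket[(i, j)][0]
            out[1] += c * bracket[(i, j)][1]
    return out

# Check the Leibniz identity [[x,y],z] = [[x,z],y] + [x,[y,z]]
# on a few sample vectors (any Lie bracket satisfies it, by Jacobi).
samples = [[1, 0], [0, 1], [2, -3], [1, 1]]
for x in samples:
    for y in samples:
        for z in samples:
            lhs = lb(lb(x, y), z)
            rhs = [a + b for a, b in zip(lb(lb(x, z), y), lb(x, lb(y, z)))]
            assert lhs == rhs
```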
Similarly, one can speak of associative algebras, coalgebras in ${\mathcal{LM}}$ and so on. The universal enveloping algebra of a Lie algebra $(M\to{\mathfrak{g}})$ in ${\mathcal{LM}}$ is the $U({\mathfrak{g}})$-bimodule $U({\mathfrak{g}})\otimes M$, such that for all $g\in {\mathfrak{g}}$, $h\in U({\mathfrak{g}})$ and $m\in M$
$$g\cdot h\otimes m= gh\otimes m$$
and
$$h\otimes m \cdot g= hg\otimes m+ h\otimes [m,g];$$
here $[m,g]$ denotes the right action of $g$ on $M$.
The degree 1 part of an algebra $(B\xrightarrow{\delta} A)$ in ${\mathcal{LM}}$ carries the structure of a dialgebra (see \cite{L2}) given by
$$ x \vdash y = \delta x\cdot y \quad \text{and} \quad x \dashv y = x \cdot \delta y $$
for $x,y\in B$. The universal enveloping dialgebra of a Leibniz algebra ${\mathfrak{g}}$ is the dialgebra structure on $U({\mathfrak{g}}_{Lie})\otimes {\mathfrak{g}}$ coming from the universal enveloping algebra of the Lie algebra $({\mathfrak{g}}\to{\mathfrak{g}}_{Lie})$ in ${\mathcal{LM}}$, see \cite{L2, G}.
There are two kinds of trivial examples of Lie algebras in ${\mathcal{LM}}$. Any Lie algebra ${\mathfrak{g}}$ gives rise to the Lie algebra $(0\to {\mathfrak{g}})$ in ${\mathcal{LM}}$, and any vector space $V$ gives rise to the Lie algebra $(V\to 0)$. The corresponding universal enveloping algebras are $(0\to U({\mathfrak{g}}))$ and $(V\to0)$, respectively.
\subsection{The identity map of a Lie algebra and the bimodule of 1-currents}
An important and interesting class of examples of Lie algebras in ${\mathcal{LM}}$ consists of the identity maps $({\mathfrak{g}}\to{\mathfrak{g}})$, where ${\mathfrak{g}}$ is a Lie algebra. These are precisely the Lie algebras considered as Leibniz algebras. In this case, the universal enveloping algebra can be interpreted in terms of the bimodule of 1-currents on a local analytic Lie group $G$ whose algebra is ${\mathfrak{g}}$.
Recall (\cite{Serre}) that the algebra $D_0(G)$ of distributions on $G$ supported at the unit element is naturally isomorphic to $U({\mathfrak{g}})$. As a vector space it is isomorphic to the symmetric algebra $S({\mathfrak{g}})$ by the Poincar\'e-Birkhoff-Witt Theorem. Indeed, each distribution in $D_0(G)$ can be thought of as a differential operator applied to the Dirac delta function.
Denote by $D_1(G)$ the space of 1-currents (that is, linear functionals on 1-forms) on $G$
supported at the unit. By definition, each 1-current in $D_1(G)$ can be written as a sum\footnote{We shall use the Einstein summation convention.} $\alpha_i\partial_i$ where $\alpha_i$ are distributions supported at the unit and $\partial_i$ is the coordinate in ${\mathfrak{g}}$ dual to $dx_i$. Applied to a 1-form $f_j dx_j$ the current $\alpha_i\partial_i$ gives the sum $\alpha_i(f_i)$.
As a vector space, $D_1(G)$ is isomorphic to $S({\mathfrak{g}})\otimes{\mathfrak{g}}$.
There is a linear map $$\delta: D_1(G)\to D_0(G),$$
given by $$(\delta\alpha) (f)=\alpha (df)$$
for $\alpha\in D_1(G)$ and $f$ a
function on $G$.
The product $\mu:G\times G\to G$ and the diagonal $\Delta: G\to G\times G$ induce maps of the spaces of
differential forms $$\mu^*: \Omega^1(G)\to\Omega^1(G\times G)=\Omega^1(G)\,\widehat{\otimes}\, \Omega^0(G) + \Omega^0(G)\,\widehat{\otimes}\, \Omega^1(G)$$
and
$$\Delta^*:\Omega^1(G\times G)=\Omega^1(G)\,\widehat{\otimes}\, \Omega^0(G) + \Omega^0(G)\,\widehat{\otimes}\, \Omega^1(G)\to \Omega^1(G).$$ Dually, there are maps
$$\mu':D_1(G)\otimes D_0(G) + D_0(G)\otimes D_1(G)\to D_1(G)$$
and
$$\Delta': D_1(G)\to D_0(G)\otimes D_1(G) + D_1(G)\otimes D_0(G),$$
which give the map $\delta: D_1(G)\to D_0(G)$ the structure of a bialgebra in ${\mathcal{LM}}$.
The primitive elements in degree 1 are the currents of the form $\alpha_i\partial_i$ where each $\alpha_i$ is a constant and $\partial_i$ is dual to $dx_i$; in degree 0 the primitives are of the form $\alpha_i\partial_i$ where each $\alpha_i$ is a constant and $\partial_i$ is the derivative of the Dirac delta along the $i$th coordinate axis, evaluated at the origin. The map $\delta$ identifies both primitive subspaces. By the Milnor-Moore Theorem of \cite{LP}, we have
\begin{prop}
The map $(D_1(G)\xrightarrow{\delta} D_0(G))$ is naturally isomorphic to the universal enveloping algebra of $({\mathfrak{g}}\xrightarrow{\mathrm{id}}{\mathfrak{g}})$.
\end{prop}
In particular, the universal enveloping dialgebra of a Lie algebra has a fundamentally different geometric meaning from its universal enveloping algebra: it consists of 1-currents and not of distributions.
\section{Integration}
\subsection{Formal integration}
Recall that a formal group on a vector space $V$ is a linear map
$$F: S(V+V)\to V,$$
which is associative and unital. The fact that $F$ is unital means
$$F|_{1\otimes S(V)}=1\otimes \pi_V\quad \text{and}\quad F|_{S(V) \otimes 1}= \pi_V \otimes 1,$$ where $\pi_V:S(V)\to V$ is the projection onto the degree 1 subspace. Associativity means that the extension of $F$ to a coalgebra morphism $$F':S(V+V)=S(V)\otimes S(V)\to S(V)$$
is an associative product. (Any linear map $\theta: S(V)\to V'$ can be extended to a unique coalgebra morphism $\theta':S(V)\to S(V')$. The extension is given explicitly by the formula
$$\theta'(\mu)=\sum_{n=0}^{\infty}\frac{1}{n!}\,\theta(\mu_{(1)})\ldots \theta(\mu_{(n)})=\epsilon(\mu)1+\theta(\mu)+\ldots,$$
where $\epsilon$ is the counit and Sweedler's notation is used.)
The map $F$ is interpreted as follows in terms of an $n$-tuple $(f_k(x_i,y_j))$ of power series which represent the formal group ($n=\dim V$).
Choose a basis in $V$ and let $x_i$ and $y_j$ be the coordinates in the first and the second copies of $V$ respectively, and $F_k$ the components of the map $F$. Then $F_k$ sends a monomial in $x_i$ and $y_j$ to its coefficient in $f_k$.
Given a Lie algebra ${\mathfrak{g}}$, a formal group which integrates ${\mathfrak{g}}$ can be obtained via the Campbell-Baker-Hausdorff formula. Alternatively, consider the map
$$U({\mathfrak{g}})\otimes U({\mathfrak{g}})\to U({\mathfrak{g}})\xrightarrow{\text{Prim}} {\mathfrak{g}},$$
where the first arrow is the product in the universal enveloping algebra and the second arrow is the projection onto the primitive subspace. Identifying $U({\mathfrak{g}})$ with $S({\mathfrak{g}})$ via the Poincar\'e-Birkhoff-Witt Theorem, we obtain a formal group which integrates ${\mathfrak{g}}$.
This approach to formal integration can be applied in tensor categories other than vector spaces; in particular, in the Loday-Pirashvili category.
Define a formal group in the Loday-Pirashvili category to be an object $(V\xrightarrow{\delta} W)$ of ${\mathcal{LM}}$ together with a linear map
$$S\left((V+V)\xrightarrow{\delta+\delta}(W+W)\right)\xrightarrow{G} \left(V\xrightarrow{\delta} W\right),$$
whose extension to a coalgebra morphism in ${\mathcal{LM}}$
$$S(V\xrightarrow{\delta}W)\otimes S(V\xrightarrow{\delta}W) \to S(V\xrightarrow{\delta}W) $$
is an algebra in ${\mathcal{LM}}$. (Here we use the fact that, as in the case of usual coalgebras, any linear morphism $\theta: S(V\to W)\to (V'\to W')$ can be extended to a unique coalgebra morphism. In degree 1 this extension is given by
$$\theta'_1(\mu\otimes v)=
\sum_{n=0}^{\infty}\frac{1}{n!}\,\theta_0(\mu_{(1)})\ldots \theta_0(\mu_{(n)})\otimes \theta_1\left(\mu_{(n+1)}\otimes v\right),$$
where $\theta_i$ is the morphism between the degree $i$ components.)
With this definition of a formal group the integration problem is, essentially, trivial.
Given a Lie algebra $(M\to{\mathfrak{g}})$ in ${\mathcal{LM}}$, compose the product in $U(M\to{\mathfrak{g}})$ with the projection to the primitive subspace. Identifying $U(M\to{\mathfrak{g}})$ with $S(M\to{\mathfrak{g}})$ we get a diagram
$$
\begin{array}{ccc}
S({\mathfrak{g}})\otimes M\otimes S({\mathfrak{g}}) + S({\mathfrak{g}})\otimes S({\mathfrak{g}})\otimes M& \xrightarrow{G^1+G^2} &M\\
\downarrow&&\downarrow\\
S({\mathfrak{g}})\otimes S({\mathfrak{g}})& \xrightarrow{F} &{\mathfrak{g}}
\end{array}
$$
which ``integrates'' the Lie algebra $(M\to{\mathfrak{g}})$ in ${\mathcal{LM}}$.
Conversely, given a formal group, its extension to a coalgebra morphism is a bialgebra whose primitive subspace is a Lie algebra in ${\mathcal{LM}}$. We have
\begin{prop}
The functor that assigns to a Lie algebra $(M\to{\mathfrak{g}})$ in ${\mathcal{LM}}$ the primitive part of the product in $U(M\to{\mathfrak{g}})$ is an equivalence of the categories of Lie algebras in ${\mathcal{LM}}$ and of formal groups in ${\mathcal{LM}}$.
\end{prop}
Clearly, a Lie algebra $(0\to{\mathfrak{g}})$ integrates to a usual formal group and a vector space $(V\to 0)$ integrates to itself. For a general Lie algebra $(M\to{\mathfrak{g}})$ in ${\mathcal{LM}}$, consider first the primitive part of the left action of $U({\mathfrak{g}})$ on $U({\mathfrak{g}})\otimes M$:
$$\alpha\otimes (\beta\otimes v) \to \alpha\beta\otimes v \xrightarrow{\text{Prim}} \epsilon(\alpha\beta) v,$$
where $\alpha,\beta$ are in $U({\mathfrak{g}})$, $v\in M$ and $\epsilon$ is the counit in $U({\mathfrak{g}})$. This means that if this map is thought of as an $m$-tuple, where $m=\dim M$, of formal power series $g^1(x,y,v)$ with $x,y\in{\mathfrak{g}}$ and $v\in M$, it has the very simple form
$$g^1(x,y,v)=v,$$
which neither depends on the ${\mathfrak{g}}$-module structure of $M$, nor on the Lie algebra structure of ${\mathfrak{g}}$.
As for the primitive part of the right action of $U({\mathfrak{g}})$ on $U({\mathfrak{g}})\otimes M$ we have
$$(\alpha\otimes v)\otimes 1 \to \alpha\otimes v \xrightarrow{\text{Prim}} \epsilon(\alpha) v,$$
and for $b\in {\mathfrak{g}}$
$$(\alpha\otimes v)\otimes b\to\alpha b\otimes v + \alpha\otimes [v,b]\xrightarrow{\text{Prim}} \epsilon(\alpha) [v,b].$$
In particular, we see that the primitive part of the right action does not depend on the non-zero degree terms in $\alpha$. As an $m$-tuple of formal power series, this action can be written as a function $g^2(x,v,y)=g^2(v,y)$ which is linear in $v$.
There are two conditions on the function $g^2(v,y)$. Let $f(x,y)$ be the $n$-tuple ($n=\dim {\mathfrak{g}}$) of formal power series representing the formal group $F$ on ${\mathfrak{g}}$. Then the first condition is associativity:
$$g^{2}(g^{2}(v,y),z)=g^{2}(v,f(y,z)),$$
and the second is the relation to the function $f$: $\delta g^2(v,y)$ should coincide with the part of $f(\delta v, y)$ that is linear in $\delta v$; here $\delta$ is the map $M\to{\mathfrak{g}}$.
For example, for the Lie algebra $({\mathfrak{g}}\xrightarrow{\text{id}}{\mathfrak{g}})$ the function $g^{2}(v,y)$ is simply the part of $f(v,y)$ that is linear in $v$.
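To make this concrete (a low-order computation we add for illustration): if $f$ is given by the Campbell-Baker-Hausdorff series
$$f(x,y)=x+y+\tfrac12[x,y]+\tfrac1{12}[x,[x,y]]+\tfrac1{12}[y,[y,x]]+\ldots,$$
then keeping only the part linear in the first argument gives
$$g^{2}(v,y)=v+\tfrac12[v,y]+\tfrac1{12}[[v,y],y]+\ldots,$$
since $[y,[y,v]]=[[v,y],y]$, while the term $\tfrac1{12}[x,[x,y]]$, being quadratic in $x$, does not contribute.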
\subsection{A global interpretation} The formal group in ${\mathcal{LM}}$ that integrates the identity map $({\mathfrak{g}}\to{\mathfrak{g}})$ can be thought of as an infinitesimal version of the product on the tangent bundle of a Lie group $G$:
$$TG\times TG\to TG.$$
This product is, in fact, a pair of actions (right and left) of $G$ on $TG$. If the left action is used to trivialize $TG$, the right action becomes the right adjoint action $\mathrm{Ad}^{-1}$ of $G$ on ${\mathfrak{g}}$.
The global version of an arbitrary formal group in ${\mathcal{LM}}$ is a vector bundle $\xi$ over $G$ with the fibre $M$, together with an ``anchor map'' $p:\xi\to TG$ which commutes with the projections, and a pair of actions, right and left, of $G$ on $\xi$ which are carried by $p$ to the actions of $G$ on $TG$. The bundle $\xi$ can be trivialized by means of the left action and we see that a ``Lie group in ${\mathcal{LM}}$'' is simply a commuting triangle
\begin{equation}\label{LPG}
\begin{array}{ccc}
G&\xrightarrow{\mathrm{\rho}}& GL(M)\\
\Vert& & \downarrow\\
G&\xrightarrow{\mathrm{Ad^{-1}}}&GL({\mathfrak{g}})
\end{array}
\end{equation}
where $G$ is a Lie group whose Lie algebra is ${\mathfrak{g}}$, $\rho$ is a right representation of $G$ on $M$ and the downwards arrow is induced by a map $M\to{\mathfrak{g}}$. The right ${\mathfrak{g}}$-module structure on $M$ comes from its structure of a right $\mathfrak{gl}(M)$-module.
Note that any finite-dimensional formal group in ${\mathcal{LM}}$ comes from such a Lie group in ${\mathcal{LM}}$.
\subsection{The universal enveloping algebra as the bimodule of 1-currents}
Finally, let us point out that the interpretation of the universal enveloping algebra as the bialgebra of distributions on a Lie group holds for the Lie groups in the Loday-Pirashvili category as defined by (\ref{LPG}).
Indeed, let us think of a Lie group in ${\mathcal{LM}}$ as a ``generalized tangent bundle'' $\xi$ over $G$ with the fibre $M$, that is, a bundle with a two-sided action of $G$ and an anchor map to the tangent bundle. The space of sections $\Gamma(\xi^*)$ of the dual bundle $\xi^*$ can be thought of as the space of generalized differential 1-forms on $G$. Similarly, the space $D_1(\xi)$ of linear functionals on $\Gamma(\xi^*)$ that are of the form $\sum_i a_i\otimes v_i$, with $a_i\in D_0(G)$ and $v_i\in M$, can be considered as the space of generalized 1-currents supported at the unit of $G$. The anchor map sends the usual 1-forms to the generalized 1-forms and generalized 1-currents to usual 1-currents. Composing this map with $\delta: D_1(G)\to D_0(G)$ we get a map $D_1(\xi)\to D_0(G)$.
The actions of $G$ on $\xi$ give rise to a map
$$\Gamma(\xi^*)\to \Omega^0(G)\, \widehat{\otimes}\, \Gamma(\xi^*)+ \Gamma(\xi^*)\, \widehat{\otimes}\, \Omega^0(G). $$
By duality, $D_1(\xi)\to D_0(G)$ acquires the structure of an algebra in ${\mathcal{LM}}$. Similarly, the diagonal map $G\to G\times G$ gives rise to a map
$$\Omega^0(G)\, \widehat{\otimes}\, \Gamma(\xi^*)+ \Gamma(\xi^*)\, \widehat{\otimes}\, \Omega^0(G)\to \Gamma(\xi^*)$$
and this gives a coalgebra structure on $D_1(\xi)\to D_0(G)$.
The primitive part of $D_1(\xi)\to D_0(G)$ is, clearly, $(M\to {\mathfrak{g}})$, and it only remains to observe that the actions of $D_0(G)$ on $D_1(\xi)$ give rise to the same Lie algebra structure on $(M\to {\mathfrak{g}})$ as that coming from the right $\mathfrak{gl}(M)$ action on $M$.
If the left action of $G$ is used to trivialize $\xi$, we can speak of generalized 1-forms with constant coefficients; these are invariant under the left action and can be identified with elements of $M^*$. The right action of $g\in G$ sends such a form $m\in M^*$ to the form $$v\to m(\mathrm{Ad}_{\rho(g)}^{-1}(v)).$$
Now, let $w\in D_0(G)$ and $v\in D_1(\xi)$ be primitive. In order to verify that $vw-wv$ coincides with the right action $[v,D\rho(w)]$ of $w$ on $v$ via $D\rho: {\mathfrak{g}}\to\mathfrak{gl}(M)$ it is sufficient to check it on forms with constant coefficients. And, indeed, we have
\begin{multline*}
(vw-wv)(m)=(vw)(m)=(v\otimes w) (m(\mathrm{Ad}_{\rho(g)}^{-1}))\\ =\frac{\partial}{\partial w} m\left(\mathrm{Ad}_{\rho(g)}^{-1}(v)\right)=m\left(\frac{\partial}{\partial w} \mathrm{Ad}_{\rho(g)}^{-1}(v)\right)=m([v,D\rho(w)])=[v,D\rho(w)](m).
\end{multline*}
\begin{rem}
The solution to the integration problem given here is modelled very closely on the usual Lie theory and provides an exact analogy to the triple
\begin{equation*}
\mbox{Lie algebras}\simeq \mbox{Irreducible cocommutative Hopf
algebras} \simeq\mbox{Formal groups},
\end{equation*}
which would be expected from any reasonable extension of Lie theory to ${\mathcal{LM}}$. The interpretation of the universal enveloping algebras in terms of 1-currents explains in a natural way the existence of two coproducts in the enveloping dialgebras of Leibniz algebras.
Nevertheless, we should point out that the motivation behind the coquecigrue hunt is not just the quest for a good analogy, but a desire to find a homology theory for groups that would parallel the Leibniz homology for Lie algebras. The integration procedure coming from dialgebras appears to be too simple-minded to be of help in this.
\end{rem}
\begin{rem}
The same formal integration method works in other situations. In \cite{MPi} it was employed for the formal Lie theory of non-associative products. It can also be used to integrate formally Lie algebras in the category of chain complexes; the integration procedure in ${\mathcal{LM}}$ can be considered as a truncation of Lie algebra integration in that category.
\end{rem}
\end{document}
\begin{document}
\title{\textbf{Quantum money with nearly optimal error tolerance}}
\author{Ryan Amiri}
\affiliation{SUPA, Institute of Photonics and Quantum Sciences,
Heriot-Watt University, Edinburgh, EH14 4AS, UK}
\author{Juan Miguel Arrazola}
\affiliation{Centre for Quantum Technologies, National University of Singapore, 3 Science Drive 2, Singapore 117543}
\date{\today}
\begin{abstract}
We present a family of quantum money schemes with classical verification which display a number of benefits over previous proposals. Our schemes are based on hidden matching quantum retrieval games and they tolerate noise up to $23\%$, which we conjecture reaches $25\%$ asymptotically as the dimension of the underlying hidden matching states is increased. Furthermore, we prove that $25\%$ is the maximum tolerable noise for a wide class of quantum money schemes with classical verification, meaning our schemes are almost optimally noise tolerant. We use methods in semi-definite programming to prove security in a substantially different manner to previous proposals, leading to two main advantages: first, coin verification involves only a constant number of states (with respect to coin size), thereby allowing for smaller coins; second, the re-usability of coins within our scheme grows linearly with the size of the coin, which is known to be optimal. Lastly, we suggest methods by which the coins in our protocol could be implemented using weak coherent states and verified using existing experimental techniques, even in the presence of detector inefficiencies.
\end{abstract}
\maketitle
\section{Introduction}
\normalem
Quantum cryptography has traditionally been associated exclusively with quantum key distribution \cite{BB84}, but it encompasses a much larger class of tasks and protocols \cite{Broadbent2016}. Notable examples are quantum signature schemes \cite{AWKA2016,AWA2015,amiri2015unconditionally}, two-party quantum cryptography \cite{lunghi2013exp,erven2014exp,ng2012experimental}, delegated quantum computation \cite{broadbent2009universal,barz2012demonstration}, covert quantum communication and steganography \cite{sanguinetti2016perfectly,bash2015quantum,arrazola2016covert,bradler2016absolutely}, quantum random number generation \cite{ma2016qrng,sanguinetti2014quantum,lunghi2015self}, quantum fingerprinting \cite{QuantumFingerprinting,arrazolaqfp,xu2015experimental,GX16}, and quantum money \cite{W1983,Gav2012,GK2015}. Historically, many of these protocols have been extremely challenging to implement with available technologies, but we are currently approaching a point where both theoretical and experimental developments have made it possible for the first experimental demonstrations to emerge. We are thus entering an exciting stage where practical quantum cryptography has begun to expand rapidly beyond the realms of quantum key distribution.
Quantum money, which was first suggested by Wiesner in 1970 \cite{W1983} as a means to create money that is physically impossible to counterfeit, is one of the first examples of quantum cryptography. The basic aim of any quantum money scheme is to enable a trusted authority, the bank, to provide untrusted users with finitely re-usable, verifiable coins that cannot be forged. Verifiability ensures that honest users can prove the money they hold is genuine, while unforgeability restricts the ability of an adversary to dishonestly fabricate additional coins. Potential drawbacks of Wiesner's original scheme were that verification required quantum communication between the holder and the bank, and moreover security of the scheme had not been proved rigorously. Indeed, it was shown in Refs. \cite{A09,L2010} that many variants of the scheme were vulnerable to so-called ``adaptive attacks" -- attacks in which the adversary is allowed a number of auxiliary interactions with the bank before trying to forge a coin.
In 2012, Gavinsky \cite{Gav2012} addressed both issues and presented a fully secure quantum money scheme in which coins are verified using three rounds of \emph{classical} communication between the holder of the coin and the bank. The scheme was based on hidden matching quantum retrieval games (QRGs), first introduced in Ref. \cite{YJK2004}. Nevertheless, the scheme could not be considered \emph{practical}, as the security analysis did not include the effects of noise. This issue was addressed by Pastawski et al. \cite{PYJ2011}, who proposed a noise-tolerant quantum money scheme with classical verification that remains secure as long as the noise is less than $\frac{1}{2}-\frac{1}{\sqrt{8}}\approx 14.6\%$. The scheme requires only two rounds of communication for verification and is secure even against adaptive attacks. Following this, Ref. \cite{GK2015} presented a simpler protocol, again based on hidden matching QRGs, in which the verification procedure contained only a single round of communication, and could tolerate up to $12.5\%$ noise.
Beyond the secret-key quantum money schemes discussed above, there has also been significant interest in public-key quantum money schemes, first proposed in \cite{A09}, offering computational security against quantum adversaries.
Since then, Farhi et al. \cite{FGH10} introduced the concepts of quantum state restoration and single-copy tomography to further rule out a large class of seemingly promising schemes. Following this result, Farhi et al. \cite{FGH12} suggested a scheme based on knot theory and conjectured that it is secure against computationally bounded adversaries. However, whether a secure public-key quantum money scheme exists without the use of oracles is an open question and, so far, the majority of schemes that were proposed have subsequently been broken \cite{LAF10}.
In this work, we focus on secret-key quantum money schemes with classical verification and propose a new scheme based on hidden matching QRGs. Utilising semi-definite programming, we provide a full security proof of our scheme, and show that by increasing the dimension of the underlying states, we can increase the error tolerance to as much as $23.03\%$ for states of dimension $n=14$, while also proving that the maximum noise tolerance in that case is $23.3\%$. Thus, the error tolerance of our protocols is nearly optimal. We conjecture that for large dimension, the error tolerance of our protocols approaches $25\%$ asymptotically, and we further prove that $25\%$ is the maximum possible error tolerance for a wide range of quantum money protocols, including all those based on hidden matching QRGs. Increasing the error tolerance has a twofold benefit: as well as allowing the protocol to be performed in regions of higher noise than was previously possible, it also increases protocol efficiency since we show that security relies on the size of the gap between the expected error rate and the maximum tolerable error rate of the scheme, thereby allowing smaller coins. Finally, we discuss how our schemes can be implemented in practice using a coherent state encoding, while also showing that they remain secure even in the presence of limited detection efficiency.
\subsection{Definitions and Previous Results}
In this section we state various definitions that are needed to introduce our quantum money schemes. We consider the case of quantum money ``mini-schemes" in which the bank creates only a single quantum coin and the adversary attempts to use this coin to forge another copy. It has been shown in Ref. \cite{AC2012} that by adding a classical serial number to each coin, a secure full quantum money scheme can be created directly from the secure mini-scheme, and so the two are essentially equivalent.
\begin{defn}\label{Def:QMoney}
A quantum money mini-scheme with classical verification consists of an algorithm, \emph{Bank}, which creates a quantum coin $\$$ and a verification protocol \emph{Ver}, which is a classical protocol run between a holder $H$ of $\$$ and the bank $B$, designed to verify the authenticity of the coin. The final output of this protocol is a bit $b \in\{0, 1\}$ sent by the bank, which corresponds to whether the coin is valid or not. Denote by $\text{\emph{Ver}}^B_H(\$)$ this final bit. The scheme must satisfy two properties to be secure:
\begin{itemize}
\item Correctness: The scheme is $\epsilon$-correct if for every honest holder, we have $$\text{\emph{Pr}}[\text{\emph{Ver}}^B_H(\$) = 1] \geq 1 - \epsilon.$$
\item Unforgeability: Coins in the scheme are $\epsilon$-unforgeable if for any quantum adversary who has interacted a finite and bounded number of times with the bank and holds a valid coin $\$$, the probability that she can produce two coins $\$_1$ and $\$_2$ that are verified by an honest user satisfies $$ \text{\emph{Pr}}\left[\text{\emph{Ver}}^B_H(\$_1) = 1 \wedge \text{\emph{Ver}}^B_H(\$_2) = 1 \right]\leq \epsilon,$$ where $H$ is any honest holder.
\end{itemize}
\end{defn}
The first property guarantees that all honest participants can prove the coins they own are valid, while the second property guarantees that a dishonest adversary cannot forge the coins. The definition covers adaptive attacks by allowing the adversary to interact with the bank (via the verification procedure) a finite number of times before attempting to forge the coin.
The schemes presented in this paper are based on quantum retrieval games (QRGs), which we have mentioned but not formally introduced. A QRG is a protocol performed between two parties, Alice and Bob, and can be seen as a generalisation of state discrimination. Alice holds an $n$-bit string $x$, selected at random according to a probability distribution $p(x)$, which she encodes into a quantum state $\rho_x$. She sends the state to Bob, whose goal is to provide a correct answer to a given question about $x$. Mathematically, a question is modelled as a relation: if $X$ is the set of possible values $x$ can take, and if $A$ is the set of possible answers, the relation $\sigma$ is a subset of $X \times A$. If $(x,a) \in \sigma$, this means that, given $x$, the answer $a$ is a correct answer to the ``question" $\sigma$. Formally, a quantum retrieval game is defined as follows.
\begin{defn}
Let $X$ and $A$ be the sets of inputs and answers respectively. Let $\sigma \subset X\times A$ be a relation and $\{p(x), \rho_x\}$ an ensemble of states and their a priori probabilities. Then the tuple $G = (X, A, \{p(x), \rho_x\}, \sigma)$ is called a quantum retrieval game. If Bob may choose to find an answer to one of a finite number of distinct relations $\sigma_1, ..., \sigma_k$, then we write the game as $G = (X, A, \{p(x), \rho_x\}, \sigma_1, ..., \sigma_k)$.
\end{defn}
A particularly useful class of QRGs are the \emph{hidden matching} QRGs \cite{JM,GK2015,Gav2012}, in which the relations are defined by matchings. A matching $M$ on the set $[n] := \{1,2,...,n\}$, where $n$ is an even number, is a partitioning of the set into $n/2$ disjoint pairs of numbers\footnote{More precisely, this is actually the definition of a \emph{perfect} matching.}. A matching can be visualised as a graph with $n$ nodes, where edges define the elements in the matching, as illustrated in Fig. \ref{fig:matching}. In general, there are $1\times 3\times \ldots \times (n-1)=(n-1)!!$ distinct matchings of any set containing $n$ elements. For our purposes, we focus on sets of matchings where no two matchings in the set contain a common element. We call such sets \textit{pairwise disjoint}. The maximum number of pairwise disjoint matchings is $n-1$, since if we consider the element $1 \in [n]$, it must be paired in each matching with a distinct integer less than or equal to $n$.
\begin{defn}
A maximal pairwise disjoint set of matchings, $\mathcal{R}$, is a set of pairwise disjoint matchings on $[n]$ such that $|\mathcal{R}| = n-1$.
\end{defn}
A matching on the set $[n]$ can be equivalently represented as a graph with $n$ nodes, with each element $(i,j)$ of the matching identified with an edge in the graph. Maximal pairwise disjoint sets of matchings for $n=4,6,$ and 8 are illustrated in Fig. \ref{fig:matching}.
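One concrete way to build such a set for any even $n$ is the classical round-robin construction; the sketch below is illustrative (function and variable names are ours, not part of the scheme):

```python
def round_robin_matchings(n):
    """Return a list of n-1 pairwise disjoint perfect matchings on {1,...,n}.

    Uses the classical round-robin tournament schedule: node n stays fixed
    while nodes 1..n-1 are rotated around a circle and folded into pairs.
    """
    assert n % 2 == 0, "n must be even"
    matchings = []
    for r in range(n - 1):
        # Pair the fixed node n with node r+1, then fold the rotated circle.
        matching = [(r + 1, n)]
        for k in range(1, n // 2):
            i = (r + k) % (n - 1) + 1
            j = (r - k) % (n - 1) + 1
            matching.append((min(i, j), max(i, j)))
        matchings.append(matching)
    return matchings

# Every edge {i, j} occurs in exactly one of the n-1 matchings, so together
# they partition all n(n-1)/2 edges and are in particular pairwise disjoint.
```

The $n-1$ rounds exhaust all $\binom{n}{2}$ edges, which also shows that $n-1$ is indeed the maximal number of pairwise disjoint matchings.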
\begin{figure}
\caption{Maximal pairwise disjoint set of matchings for (a) $n=4$, (b) $n=6$ and (c) $n=8$. Colour is used to represent each matching within the maximal pairwise disjoint set.}
\label{fig:matching}
\end{figure}
In hidden matching QRGs the set of possible inputs is the set of all $n$-bit strings, each chosen with equal probability, where $n$ is an even number. Alice encodes her input into the $n$-dimensional pure state
\begin{equation} \label{eq:HM}
|\phi_x \rangle = \frac{1}{\sqrt{n}} \sum^n_{i=1} (-1)^{x_i}|i\rangle
\end{equation}
where $x_i$ is the $i$-th bit of the string $x$. The relations in this game are defined by the matchings: given a matching, the correct answers are the ones which correctly identify the parity of the bits connected by an edge in the matching. For example, if $(1,2)$ is an element of the matching, the measurement should output $x_1\oplus x_2$. Formally, given a perfect matching $M_1$, the set of answers is given by $$A = \big\{ (i,j,b) : i,j \in \{1,...,n\}, b\in \{0,1\} \big\}$$ and the corresponding relation is $$\sigma_1 = \{ (x, i, j, b) : x_i\oplus x_j = b \text{ and } (i,j) \in M_1 \}.$$
Bob is able to find a correct answer to any matching of his choice with certainty simply by measuring in the basis
\begin{equation} \label{eq:basis}
\mathcal{B} = \{ \frac{1}{\sqrt{2}} ( |i\rangle \pm |j\rangle)\}, \:\:\:\: \text{ with } (i,j)\in M.
\end{equation}
This is because the outcome $\frac{1}{\sqrt{2}} ( |i\rangle + |j\rangle)$ can only occur if $x_i\oplus x_j = 0$, and similarly $\frac{1}{\sqrt{2}} ( |i\rangle - |j\rangle)$ can only occur if $x_i\oplus x_j = 1$.
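This deterministic parity retrieval is easy to check numerically. The following sketch (names and structure are ours, for illustration only) computes the outcome probabilities of the basis measurement on a hidden matching state and confirms that any outcome with nonzero probability reveals the correct parity:

```python
import math

def outcome_probabilities(x, matching):
    """Probabilities of measuring |phi_x> in the basis built from `matching`.

    x is a list of bits; matching is a list of pairs (i, j), 1-based.
    Returns a dict mapping (i, j, sign) -> probability, where sign = +1
    labels (|i>+|j>)/sqrt(2) and sign = -1 labels (|i>-|j>)/sqrt(2).
    """
    n = len(x)
    amp = [(-1) ** xi / math.sqrt(n) for xi in x]   # amplitudes of |phi_x>
    probs = {}
    for (i, j) in matching:
        a, b = amp[i - 1], amp[j - 1]
        probs[(i, j, +1)] = (a + b) ** 2 / 2        # |<+ state|phi_x>|^2
        probs[(i, j, -1)] = (a - b) ** 2 / 2        # |<- state|phi_x>|^2
    return probs

# For any x, an outcome (i, j, sign) has nonzero probability only when the
# sign encodes x_i XOR x_j, so an honest measurement never returns a wrong
# answer; it merely does not control which pair (i, j) of the matching occurs.
```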
Previous quantum money schemes based on hidden matching QRGs have used only two matchings for verification. In the following section, we generalise these schemes to the case of an arbitrary number of matchings and show that this allows us to significantly increase the noise tolerance of the resulting schemes.
\section{Quantum money scheme} \label{sec:scheme}
Here we present a quantum money scheme which is secure even in the presence of up to $23\%$ noise. As in Ref. \cite{GK2015}, the verification protocol requires only one round of classical communication.
In this scheme, the bank randomly chooses a number of $n$-bit classical strings and encodes each of them into the hidden matching states, given by Eq. \eqref{eq:HM}. Essentially, the coin is a collection of these independent quantum states, and each of the quantum states can be thought of as an instance of a QRG. We assume that there is a maximal pairwise disjoint set of matchings on $[n]$, known to all participants, which we call $\mathcal{R}$. This set specifies the $n-1$ possible relations defined within each QRG, and each state in the coin represents a QRG. To verify a coin, the holder will pick a small selection of the states from the coin and randomly choose a relation for each. The holder will perform the appropriate measurement (defined by Eq. \eqref{eq:basis}) to get an answer for each QRG under each chosen relation. The holder then sends these answers to the bank which returns whether more than a specified fraction of the answers are correct or not. If they are, the coin is accepted as valid; otherwise, it is rejected. The scheme is formally defined below and illustrated in Figs. \ref{fig:bank} and \ref{fig:ver}.
\begin{algorithm}[H]
\floatname{algorithm}{Bank Algorithm}
\renewcommand{\thealgorithm}{}
\caption{}
\begin{algorithmic}[1]
\STATE The bank independently and randomly chooses $q$ $n$-bit strings which we will call $x^{1}, ..., x^{q}$.
\STATE For $i\in [q]$, the bank creates $\phi_{x^i} := |\phi_{x^i}\rangle\langle \phi_{x^i}|$, where $$|\phi_{x^i}\rangle := \frac{1}{\sqrt{n}} \sum^n_{j=1} (-1)^{x^i_j}|j\rangle.$$ For each $i$ we define the QRG $G_i = (X_i, A_i, \{\phi_{x^i} \}_{x^i}, \sigma_1, ..., \sigma_{n-1})$, where $\mathcal{R} = \{\sigma_1, ..., \sigma_{n-1}\}$ is a maximal pairwise disjoint set of matchings known to all participants in the scheme.
\STATE The bank creates the classical binary register, $r$, and initialises it to $0^q$.
\STATE The bank creates the counter variable $s$ and initialises it to $0$.
\STATE The pair $ (\$,r) = (\bigotimes_{i=1}^q \phi_{x^i}, r) $ is the coin for the mini-scheme. The bank keeps the counter $s$ in order to keep track of the number of verification attempts.
\end{algorithmic}
\end{algorithm}
\begin{algorithm}[H]
\floatname{algorithm}{Ver Algorithm}
\renewcommand{\thealgorithm}{}
\caption{}
\begin{algorithmic}[1]
\STATE The holder of the coin randomly chooses a subset of indices, $L\subset [q]$ such that $r_i =0$ for each $i \in L$. The indices $i\in L$ specify the selection of games $G_i$ which will be used as tests in the verification procedure. For each $i\in L$, the holder sets the corresponding bit of $r$ to be $1$ so that this game cannot be used in future verifications.
\STATE For each $i\in L$, the holder picks a relation $\sigma^\prime_i$ at random from $\mathcal{R}$ and applies the appropriate measurement to obtain outcome $d_i$.
\STATE The holder sends all triplets $(i, \sigma^\prime_i, d_i)$ to the bank.
\STATE The bank checks that $s<T$, where $T$ is the pre-defined maximum number of allowed verifications for the coin. If $s = T$, the bank declares the coin as invalid.
\STATE For each $i$, the bank checks whether the answer is correct by comparing $(i, \sigma^\prime_i, d_i)$ to the secret $x^i$ values. The bank accepts the coin as valid if and only if more than $l(c-\delta)$ of the answers are correct, where $c$ is a correctness parameter of the protocol, $l = |L|$, and $\delta$ is a small positive constant.
\STATE The bank updates $s$ to $s+1$.
\end{algorithmic}
\end{algorithm}
We say that an instance of the verification algorithm has been passed/failed if the final output by the bank is ``valid"/``invalid" respectively. A coin can be verified at most $T$ times; once the Hamming weight of $r$ exceeds $Tl$, the coin is returned to the bank to be refreshed. We choose $T$ to be small but linear in $q$. Any such choice would be acceptable but, for the sake of definiteness, in what follows we set $T := q/(1000l)$. We note that having $T$ scale linearly with $q$ is optimal for any quantum money scheme \cite{Gav2012} and that this is an improvement over previous protocols (for example those in Refs. \cite{GK2015, Gav2012}).
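An honest run of the mini-scheme can be sketched in a few lines. The sketch below is a minimal simulation, assuming $n=4$: it hardcodes a maximal pairwise disjoint set of matchings on $[4]$, has the bank draw secret strings, and has a noiseless holder measure each hidden matching state in a randomly chosen matching basis. In this ideal setting every declared bit is correct, since the incorrect outcomes occur with probability zero.

```python
import numpy as np

# Minimal honest-run sketch of the mini-scheme for n = 4 (illustrative
# parameters; the real scheme uses many states and a sampling test).
rng = np.random.default_rng(7)
n = 4
# A maximal pairwise disjoint set of matchings on [4]: n - 1 = 3 matchings.
MATCHINGS = [[(0, 1), (2, 3)], [(0, 2), (1, 3)], [(0, 3), (1, 2)]]

def make_state(x):
    """Hidden matching state |phi_x> = (1/sqrt(n)) sum_j (-1)^{x_j} |j>."""
    return np.array([(-1) ** int(b) for b in x]) / np.sqrt(len(x))

def measure(phi, matching, rng):
    """Projective measurement in the basis {|i> +/- |j> : (i,j) in matching}.
    Returns (i, j, b), with b the claimed value of x_i XOR x_j."""
    outcomes, probs = [], []
    for (i, j) in matching:
        for sign, b in ((+1, 0), (-1, 1)):
            amp = (phi[i] + sign * phi[j]) / np.sqrt(2)
            outcomes.append((i, j, b))
            probs.append(amp ** 2)
    k = rng.choice(len(outcomes), p=np.array(probs) / sum(probs))
    return outcomes[k]

# Bank chooses secret strings; the holder tests each with a random matching.
q = 50
correct = 0
for _ in range(q):
    x = rng.integers(0, 2, size=n)
    phi = make_state(x)
    i, j, b = measure(phi, MATCHINGS[rng.integers(3)], rng)
    correct += int(b == (x[i] ^ x[j]))

print(correct, "/", q)   # noiseless case: every answer is correct
```

In the noiseless case the amplitude of every incorrect outcome vanishes exactly, so the holder's answers always match the bank's record.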
The parameter $c$ represents the probability that an honest verifier obtains a correct outcome for a QRG in an honest run of the protocol. In the ideal setting $c=1$, since an honest participant in possession of a correct state will always be able to get a correct answer to a relation. Of course, in practice system imperfections inevitably lead to errors so that even when all participants are honest, it is not certain that the holder's measurement will return a correct answer. Thus, in the presence of errors, we must have $c<1$, and the smallest value of $c$ for which we can retain security determines the noise tolerance of the protocol.
\begin{figure}
\caption{Schematic illustration of the Bank algorithm for $n=8$. The bank selects $q$ $8$-bit strings and initialises the $q$-bit register $r$ to the zero string. The bank creates the corresponding hidden matching states and sends these, together with $r$, to the holder of the coin.}
\label{fig:bank}
\end{figure}
\begin{figure}
\caption{Schematic showing the verification algorithm. The verifier selects a sample of the states in the coin, measures each in a randomly chosen matching basis, and sends the resulting answers to the bank, which checks them against the secret strings.}
\label{fig:ver}
\end{figure}
We note that this scheme requires the bank to maintain a small classical database to record the number of times the verification protocol has been run -- i.e. the bank's database is ``non-static", and must be updated after each run of verification. Although this requirement demands more from the bank than completely static database models, we believe the requirement is both minimal and realistic, and allows significant simplifications to the security analysis. Nevertheless, in some cases it may be desirable for the bank to have a completely static database -- for example in applications in which the bank consists of many small, decentralised branches wary of attacks spanning multiple bank locations. In this case, by adding an additional round of classical communication in the verification protocol, our scheme can be transformed into a fully static database scheme which retains the same level of noise tolerance. Security can be proved by directly applying the arguments in Ref. \cite{Gav2012} to show that the additional verification attempts do not (significantly) help the adversary\footnote{We are able to apply the arguments in Ref. \cite{Gav2012} because, although our scheme uses more than two matchings, when taken pairwise any two matchings within our scheme are independent.}.
\subsection{Security} \label{sec:security}
In this section we prove that the scheme defined above is secure according to Definition \ref{Def:QMoney}.
\subsubsection{Correctness}
Correctness of the scheme follows simply from the Hoeffding bound \cite{Hoeffding}. In the honest case, if the holder of a coin has probability $c$ of getting a correct answer for each of the $l$ QRGs selected in the verification protocol, then his probability of getting fewer than $(c-\delta)l$ correct answers overall is bounded by
\begin{equation}
\mathbb{P}(\text{Honest Fail}) \leq e^{-2l\delta^2}.
\end{equation}
Based on the security analysis in the following section, we choose $\delta$ to be half of the gap between the error rate an honest participant expects and the minimum error rate the adversary can achieve. I.e. we set $\delta := (e_{\text{min}}-\beta)/2$, where $e_{\text{min}}$ is the minimum error rate achievable by the adversary (derived below in Eq. \eqref{eq:perror}), and $\beta := 1-c$ is the error rate expected in an honest run of the protocol.
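The Hoeffding bound above can be compared against a Monte Carlo estimate of the honest failure probability. The values of $l$, $c$ and $\delta$ below are illustrative only, not parameters fixed by the scheme.

```python
import math
import numpy as np

# Monte Carlo sanity check of the Hoeffding bound for an honest failure.
# l, c and delta here are illustrative values, not fixed by the scheme.
rng = np.random.default_rng(0)
l, c, delta = 200, 0.95, 0.05

bound = math.exp(-2 * l * delta ** 2)              # Hoeffding bound, ~e^{-1}

trials = 20000
n_correct = rng.binomial(l, c, size=trials)        # honest answers per trial
empirical = np.mean(n_correct < (c - delta) * l)   # fraction failing the test

print(empirical, "<=", bound)
```

As expected, the observed failure rate sits far below the (loose) Hoeffding bound for these parameter values.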
\subsubsection{Unforgeability}
We assume the adversary is in possession of a valid coin and first address a simple forging strategy available to the adversary based on manipulating the $r$ register attached to the coin. The adversary is allowed to set at most $q/1000$ of the $r$ register entries to $1$. She creates $(\$_1,r_1)$ and $(\$_2,r_2)$ to send to the two honest verifiers, Ver$_1$ and Ver$_2$ respectively. If she sets $r_1(i)=1$ and $r_2(i)=0$, she can be certain that Ver$_1$ will not select the $i$'th state to test, and so can forward the perfect state to Ver$_2$. In this way, $q/1000$ of the states in the coins sent to each verifier will be perfect, and will not cause errors. The remaining positions must have $r$ register values of $0$ for both verifiers. Similarly, the adversary is able to use the auxiliary verification attempts to her advantage. We make the worst-case assumption that the adversary gets full knowledge of every state used in an auxiliary verification attempt. Since there are at most $T$ attempts allowed, each of which involves $l$ states, the adversary knows the identity of at most $q/1000$ of the states. Since the states are prepared independently, this knowledge does not provide any information on the remaining states.
\begin{figure}
\caption{Representation of the states within the quantum coins sent to the verifiers. The first block on the far left represents all states for which the adversary set $r=1$ for Ver$_1$, and $r=0$ for Ver$_2$. The adversary knows that Ver$_1$ cannot select these states for testing, and so is able to forward on the perfect states to Ver$_2$. The second block of states represents the same, but with the roles of the verifiers reversed. The Aux. Ver states in the diagram are the ones that we assume are known to the adversary via auxiliary verifications. The remaining states in white are the ones we consider below -- those states for which the $r$ register is zero for both verifiers, and which have not been used in auxiliary verifications. }
\label{fig:forge}
\end{figure}
The combined effect of the above two strategies is that the adversary is able to exactly replicate $q/500$ of the states in the coin, as shown in Fig. \ref{fig:forge}. To prove coins are unforgeable, we consider the remaining $997q/1000$ states for which the $r$ register is zero for both verifiers, and for which the adversary has no auxiliary information. In reference to Fig. \ref{fig:forge}, we refer to these states as the white states, and start by considering a single such state, $\phi_{x^i}:= \ket{\phi_{x^i}}\bra{\phi_{x^i}}$, contained in the coin. For simplicity, we drop the superscript on the $n$-bit strings $x^i$ in all that follows.
The idea behind the proof is to relate the probability that the forger can use a single white state to create two states that pass the verification test of the two honest verifiers, to the average fidelity of these two states with the original state $\ket{\phi_x}$. The maximisation of this average fidelity corresponds to the optimal attack, which can be cast as a semi-definite program. By focusing on the dual program, we can upper bound the value of the semi-definite program and therefore bound the forging probability of the adversary. Lastly, we show that coherent attacks on multiple states cannot help the adversary to forge.
Since the adversary has a valid coin, she holds the unknown state
\begin{equation} \label{eq:start}
|\phi_x\rangle = \frac{1}{\sqrt{n}} \sum^n_{i=1} (-1)^{x_i}|i\rangle.
\end{equation}
From this state, the adversary wishes to create two states, $\eta_x$ and $\tau_x$, which, when measured by the honest verifiers, will give the correct answer to a randomly chosen relation in $\mathcal{R}$. Consider the normalised state sent to $\textrm{Ver}_1$,
\begin{equation}
\eta_x = \sum^n_{i,j = 1} a_{ij}|i\rangle\langle j|.
\end{equation}
Suppose the verifier chooses to measure using the matching $M_\alpha = \{(i_1, j_1),...,(i_{n/2},j_{n/2})\}$, where $\alpha \in\{1,2,\ldots,n-1\}$. To find a correct answer to the relation $\sigma_\alpha$ defined by this matching, an honest verifier will apply the measurement with projectors in the set $\{\ket{+_{i_kj_k}}\bra{+_{i_kj_k}}, \ket{-_{i_kj_k}}\bra{-_{i_kj_k}} \: : \: k=1, ..., n/2\}$, where $\ket{\pm_{i_kj_k}} := \frac{1}{\sqrt{2}}(\ket{i_k} \pm \ket{j_k})$. An incorrect result is obtained whenever the verifier finds an incorrect value for $x_{i_k}\oplus x_{j_k}$, which happens whenever the measurement outcome is one of the form
\begin{equation}
\frac{1}{\sqrt{2}} \left( |i_k\rangle - (-1)^{x_{i_k}\oplus x_{j_k}} |j_k\rangle \right).
\end{equation}
This happens with probability
\begin{equation}
p^{\alpha, x}_{\text{Ver}_1} = \frac{1}{2} \left( 1 - \sum^{n/2}_{k=1}(-1)^{x_{i_k}\oplus x_{j_k}}\left(a_{i_kj_k} + a_{j_ki_k}\right) \right).
\end{equation}
Thus, the probability of an incorrect answer to $\sigma_\alpha$ is given by a subset of the off-diagonal elements of the density matrix $\eta_x$. The off-diagonal elements occurring are exactly those with indices paired by the matching $M_\alpha$. Since the set of relations form a maximal pairwise disjoint set, the off-diagonal matrix elements appearing in the error probability for different relations will all be distinct. Therefore, averaging over all possible relations that could be chosen by the verifier allows us to significantly simplify the adversary's error probability, which becomes
\begin{equation} \label{eq:aver}
p^x_{\text{Ver}_1} = \frac{1}{n-1}\sum_{\alpha=1}^{n-1}p^{\alpha, x}_{\text{Ver}_1} = \frac{1}{2(n-1)} \left( n - \sum^n_{i, j=1} (-1)^{x_{i}\oplus x_{j}}a_{ij} \right) = \frac{n}{2(n-1)}(1-F_x),
\end{equation}
where we have defined
\begin{equation}
F_x := \langle \phi_x | \eta_x | \phi_x \rangle = \frac{1}{n} \sum_{i,j} (-1)^{x_{i}\oplus x_{j}}a_{ij}.
\end{equation}
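The averaging identity above holds for any unit-trace $\eta_x$, because the maximal pairwise disjoint set of matchings covers every off-diagonal element exactly once. This can be checked numerically; the sketch below assumes $n=4$, an arbitrary random density matrix, and a random secret string.

```python
import numpy as np

# Check the identity p_avg = n/(2(n-1)) * (1 - F_x) for n = 4, with an
# arbitrary density matrix eta and a random secret string x.
rng = np.random.default_rng(1)
n = 4
MATCHINGS = [[(0, 1), (2, 3)], [(0, 2), (1, 3)], [(0, 3), (1, 2)]]

A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
eta = A @ A.conj().T
eta /= np.trace(eta)                      # random density matrix

x = rng.integers(0, 2, size=n)
phi = np.array([(-1) ** int(b) for b in x]) / np.sqrt(n)
F_x = np.real(phi @ eta @ phi)            # fidelity <phi_x| eta |phi_x>

def p_error(matching):
    """Error probability when eta is measured in the basis of one matching."""
    s = sum((-1) ** int(x[i] ^ x[j]) * 2 * np.real(eta[i, j])
            for i, j in matching)
    return 0.5 * (1 - s)

p_avg = np.mean([p_error(m) for m in MATCHINGS])
rhs = n / (2 * (n - 1)) * (1 - F_x)
print(p_avg, rhs)                         # the two expressions agree
```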
Since the adversary does not know the secret string $x$, rather than holding the state in Eq. \eqref{eq:start}, she instead holds a mixture over the possible $x$ values. We define $F := \frac{1}{2^{n}}\sum_xF_x$ and take an average over $x$ values to get
\begin{equation} \label{eq:aver2}
p_{\text{Ver}_1} = \frac{1}{2^n} \sum_x p^x_{\text{Ver}_1} = \frac{1}{2^n} \sum_x \frac{n}{2(n-1)} \left( 1 - F_x \right) = \frac{n}{2(n-1)} \left( 1 - F \right).
\end{equation}
Essentially then, to successfully forge a coin, the adversary is trying to create two states, $\eta_x$ and $\tau_x$, which both have a high fidelity with the original state $|\phi_x\rangle$. Let us define $G_x = \langle \phi_x| \tau_x | \phi_x \rangle$, and $G := \frac{1}{2^{n}}\sum_xG_x$. For the purpose of forging, the adversary needs \emph{both} $\textrm{Ver}_1$ and $\textrm{Ver}_2$ to accept the coin she sends, which requires her to make both error probabilities as small as possible. From the above result, we can relate this to maximising the average fidelity of the states $\eta_x$ and $\tau_x$ with the original state. This problem can be cast as a semi-definite program as follows.
Let $\Psi : L(\mathcal{X}) \rightarrow L(\mathcal{Y} \otimes \mathcal{Z} )$ be a physical channel taking states in Hilbert space $\mathcal{X}$ to states in the Hilbert space $\mathcal{Y}\otimes \mathcal{Z}$, where both $\mathcal{Y}$ and $\mathcal{Z}$ are isomorphic to $\mathcal{X}$. We want to find the channel that maximises
\begin{equation} \label{eq:sdp}
\overline{F}=\frac{1}{2^n} \sum^{2^n}_{x=1} \frac{\langle \phi_x |\eta_x|\phi_x\rangle + \langle \phi_x |\tau_x|\phi_x\rangle}{2},
\end{equation}
where $\eta_x = \text{Tr}_{\mathcal{Z}}\left[ \Psi(|\phi_x\rangle\langle \phi_x |)\right]$ and $\tau_x = \text{Tr}_{\mathcal{Y}}\left[ \Psi(|\phi_x\rangle\langle \phi_x |)\right]$. In other words, $\eta_x$ is the reduced state of the channel output representing the state held by $\textrm{Ver}_1$, and $\tau_x$ is the reduced state of the channel output representing the state held by $\textrm{Ver}_2$. This maximisation is subject to $\Psi$ being a completely positive trace preserving linear map. To express this maximisation in the standard form of a semi-definite program, we express the channel as an operator using the Choi representation. We fix the preferred basis to be $\{ |i\rangle \}_{i=1,...,n}$, the basis used to define the hidden matching states in the ensemble. Given this choice, the Choi operator corresponding to the channel $\Psi$ is an operator $J(\Psi)$ in $L(\mathcal{X}\otimes \mathcal{Y} \otimes \mathcal{Z})$, given by
\begin{equation}
J(\Psi) = \sum^n_{i,j = 1} |i\rangle\langle j|_{\mathcal{X}} \otimes \Psi(|i\rangle\langle j|)_{\mathcal{Y}\mathcal{Z}}.
\end{equation}
Using the facts that $\langle \phi_x | i\rangle = \langle i | \phi_x\rangle $ for all states in the ensemble, and that $\Psi$ is a linear map, it can be shown that
\begin{equation}
\text{Tr}_{\mathcal{X}\mathcal{Y}\mathcal{Z}}\Bigg[ \Big( \phi^{\mathcal{X}}_x \otimes \phi^{\mathcal{Y}}_x \otimes \ensuremath{\mathbbm 1}^{\mathcal{Z}}\Big) J(\Psi)\Bigg] = \langle\phi_x |\eta_x|\phi_x\rangle_{\mathcal{Y}},
\end{equation}
and similarly that
\begin{equation}
\text{Tr}_{\mathcal{X}\mathcal{Y}\mathcal{Z}}\Bigg[ \Big( \phi^{\mathcal{X}}_x \otimes \ensuremath{\mathbbm 1}^{\mathcal{Y}} \otimes \phi^{\mathcal{Z}}_x \Big) J(\Psi)\Bigg] = \langle\phi_x |\tau_x|\phi_x\rangle_{\mathcal{Z}},
\end{equation}
where here, for ease of notation, we have used the superscript to denote the relevant Hilbert space. With this we can rewrite the problem in Eq. \eqref{eq:sdp} as the problem of finding the operator $J(\Psi)$ which maximises
\begin{equation}
\frac{1}{2^{n+1}}\sum^{2^n}_{x=1} \text{Tr}_{\mathcal{X}\mathcal{Y}\mathcal{Z}}\Bigg[ \Big( (\phi^{\mathcal{X}}_x \otimes \phi^{\mathcal{Y}}_x \otimes \ensuremath{\mathbbm 1}^{\mathcal{Z}}) + (\phi^{\mathcal{X}}_x \otimes \ensuremath{\mathbbm 1}^{\mathcal{Y}} \otimes \phi^{\mathcal{Z}}_x ) \Big) J(\Psi) \Bigg].
\end{equation}
The conditions that the channel must be completely positive and trace preserving lead to the conditions that $J(\Psi)$ must be positive semidefinite and $\text{Tr}_{\mathcal{Y}\mathcal{Z}}(J(\Psi)) = \ensuremath{\mathbbm 1}_{\mathcal{X}}$. Written in standard form, the semidefinite program corresponding to the maximum average fidelity is given by
\begin{equation}
\begin{aligned}
& \text{Maximise:}
& & \langle Q(n), X \rangle \\
& \text{subject to:}
& & \text{Tr}_{\mathcal{Y}\mathcal{Z}}(X) = \ensuremath{\mathbbm 1}_{\mathcal{X}} \\
&&& X \geq 0,
\end{aligned}
\end{equation}
where
\begin{equation}
Q(n) = \frac{1}{2^{n+1}} \sum^{2^n}_{x=1} \Big( (\phi^{\mathcal{X}}_x \otimes \phi^{\mathcal{Y}}_x \otimes \ensuremath{\mathbbm 1}^{\mathcal{Z}}) + (\phi^{\mathcal{X}}_x \otimes \ensuremath{\mathbbm 1}^{\mathcal{Y}} \otimes \phi^{\mathcal{Z}}_x ) \Big).
\end{equation}
The dual problem is simply
\begin{equation}
\begin{aligned}
& \text{Minimise:}
& & \text{Tr}(Y) \\
& \text{subject to:}
& & \ensuremath{\mathbbm 1}_{\mathcal{Y}\mathcal{Z}}\otimes Y \geq Q(n) \\
&&& Y \in \text{Herm}(\mathcal{X}),
\end{aligned}
\end{equation}
since $\langle \ensuremath{\mathbbm 1}_{\mathcal{X}}, Y \rangle = \text{Tr}(Y)$ and the adjoint of the partial trace is the extension by the identity. The dual problem approaches the optimal value from above, so any feasible point (i.e. any operator $Y$ that satisfies the constraints of the dual problem) gives us an upper bound on the maximum average fidelity. A feasible point can easily be found in terms of the matrix $Q(n)$ as
\begin{equation}
Y=||Q(n)||_{\infty}\ensuremath{\mathbbm 1}_{\mathcal{X}}
\end{equation}
so that we arrive at the following upper bound on the average fidelity:
\begin{equation}
\overline{F}\leq n||Q(n)||_{\infty}.
\end{equation}
Thus, for quantum money protocols using states of dimension $n$ and a maximal disjoint set of matchings, we can upper bound the average fidelity achievable by the adversary, and hence lower bound her error probability, in terms of the operator norm of $Q(n)$. Computing this norm for different values of $n$ leads to the bound
\begin{equation}
\overline{F}\leq \frac{1}{2}+\frac{1}{n},
\end{equation}
which we have verified numerically for $n\leq 14$ and we conjecture holds for any $n$. From now on, we simply assume that $n\leq 14$. The analysis above enables us to restrict the achievable error probabilities for the two verifiers on a single game as
\begin{equation}
\begin{rcases}
p_{\text{Ver}_1} = \frac{n}{2(n-1)} \left( 1 - F \right) \\
p_{\text{Ver}_2} = \frac{n}{2(n-1)} \left( 1 - G \right)
\end{rcases}
\quad
\text{ subject to: } \frac{1}{2} (F+G) \leq \frac{1}{2}+\frac{1}{n},
\end{equation}
which leads to
\begin{equation} \label{eq:bound}
p_{\text{Ver}_1} + p_{\text{Ver}_2} \geq \frac{1}{2} - \frac{1}{2(n-1)}.
\end{equation}
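The norm computation underlying this bound can be reproduced numerically for small $n$ by constructing $Q(n)$ explicitly, an $n^3$-dimensional operator summed over all $2^n$ secret strings. The sketch below (with a helper `Q_norm` introduced here for illustration) checks $n=2$ and $n=4$; the larger values of $n$ reported in the text require the same computation at higher dimension.

```python
import itertools
import numpy as np

# Build Q(n) explicitly and compute the dual bound n * ||Q(n)||_inf on the
# average fidelity F-bar, for small n.
def Q_norm(n):
    I = np.eye(n)
    Q = np.zeros((n ** 3, n ** 3))
    for x in itertools.product([0, 1], repeat=n):
        phi = np.array([(-1) ** b for b in x]) / np.sqrt(n)
        P = np.outer(phi, phi)                    # projector |phi_x><phi_x|
        Q += np.kron(P, np.kron(P, I)) + np.kron(P, np.kron(I, P))
    Q /= 2 ** (n + 1)
    return np.linalg.eigvalsh(Q)[-1]              # operator norm, since Q >= 0

bounds = {n: n * Q_norm(n) for n in (2, 4)}
print(bounds)                                     # each value <= 1/2 + 1/n
```

Each computed value respects the conjectured bound $1/2 + 1/n$, and (by weak duality) cannot fall below the fidelity $(n+3)/(2(n+1))$ achievable by symmetric cloning.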
Until now, we have considered only a single white state out of the $l$ games used in the verification protocol. Let us now consider $l$ such games, and let $p^{(i)}_{\text{Ver}_j}$ be the error probability for honest verifier $j$ on the $i$'th run of the verification protocol. We claim that when we have $l$ independent white states (in the sense that each $x^i$ is chosen independently), it is still the case that
\begin{equation}
p^{(i)}_{\text{Ver}_1} + p^{(i)}_{\text{Ver}_2} \geq \frac{1}{2} - \frac{1}{2(n-1)}
\end{equation}
for all $i$, regardless of the outcomes of previous measurements made by the verifiers. Though intuitively reasonable, this claim is far from trivial, but can be proved using a teleportation argument due to Croke and Kent \cite{CrokeKent} (See Appendix A) so that, essentially, we can imagine the adversary acts independently on each game in the verification protocol. Therefore, on each and every white state, at least one verifier must have an error probability of at least
\begin{equation}
\frac{1}{2}(p^{(i)}_{\text{Ver}_1} + p^{(i)}_{\text{Ver}_2}) \geq \frac{1}{4} - \frac{1}{4(n-1)}.
\end{equation}
Overall, if we include the effects of $r$ register manipulation and auxiliary verifications, at least one verifier, say Ver$_1$, must have an average error probability over all $l$ games of at least
\begin{equation} \label{eq:perror}
e_{\text{min}} = \frac{997}{999}\left(\frac{1}{4} - \frac{1}{4(n-1)}\right) \approx \frac{1}{4} - \frac{1}{4(n-1)}.
\end{equation}
Using Hoeffding's inequality, the probability of both verifiers accepting the coin can be bounded as
\begin{equation}
\mathbb{P}(\text{Both $\textrm{Ver}_1$ and $\textrm{Ver}_2$ generate outcome ``Valid"}) \leq \mathbb{P}(\text{$\textrm{Ver}_1$ generates outcome ``Valid"}) \leq e^{-2l\delta^2},
\end{equation}
where $\delta = (e_{\text{min}} - \beta)/2$, as above. As long as $\beta < e_{\text{min}}$, the Hoeffding bound can be used to show that it becomes exponentially unlikely for both verifiers to pass the verification protocol. By increasing the maximum noise tolerance of the protocol we increase the size of $\delta$, thereby allowing smaller sample sizes in the verification protocol, which increases the re-usability of coins. If we choose $n=4$, our scheme would be able to tolerate $16.6\%$ noise, and for $n=14$ it can tolerate up to $23\%$ noise. This concludes the proof of security against forging.
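The quoted tolerances follow directly from the expression for $e_{\text{min}}$ above; a short check (with a helper `e_min` introduced here for illustration):

```python
from fractions import Fraction

# Recompute the quoted noise tolerances from e_min, including the 997/999
# factor arising from r-register manipulation and auxiliary verifications.
def e_min(n):
    return Fraction(997, 999) * (Fraction(1, 4) - Fraction(1, 4 * (n - 1)))

print(float(e_min(4)), float(e_min(14)))   # ~0.1663 and ~0.2303
```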
In the next section, we prove an upper bound on the error tolerance achievable for a general class of classical verification quantum money schemes, and show this bound limits to $25\%$ as the dimension of the underlying states is increased. This implies that our protocols are nearly optimal in terms of error tolerance. When proving this result, we assume only that the coin is a collection of quantum states each identified with a secret classical string, and that to verify the coin the holder must declare a number of single bit values which can be checked against the classical record.
\section{Maximum achievable noise tolerance} \label{sec:max}
Suppose we have a scheme in which the coin consists of many independently chosen $n$-dimensional pure quantum states, $\phi_x = |\phi_x\rangle\langle \phi_x |$, with $x\in X$ and where $x$ is a classical bit string chosen according to some probability distribution. To verify each state, the holder performs some POVM, $\mathcal{M}_x = \{M^{\text{cor}}_x, M^{\text{inc}}_x \}$, to ascertain one bit of information about each of the states used in the verification protocol. The bit values resulting from the measurement outcomes are checked against a classical record to verify whether the coin is genuine or not.
\begin{lemma}
For any quantum money scheme of the above type, the maximum tolerable noise, $e_{\text{max}}$, must be less than
\begin{equation}
e_{\text{max}} \leq \frac{1}{2} - \frac{1}{4}\frac{n+2}{n+1}.
\end{equation}
\end{lemma}
\textit{Proof.} We prove this by explicitly illustrating a strategy available to the adversary. The adversary holds the unknown state $\phi_x$, which lives in Hilbert space $\mathcal{H}$. She extends the state to $\phi_x \otimes \Phi$, where $\Phi = \frac{1}{n} \ensuremath{\mathbbm 1}_n$, and symmetrises the system. Specifically, she performs the mapping
\begin{equation}
\phi_x \otimes \Phi \rightarrow S_2 (\phi_x \otimes \Phi ) S_2,
\end{equation}
where $S_2$ is the projector onto $\mathcal{H}^2_+$, the symmetric subspace of $\mathcal{H}^{\otimes 2}$, and where the state on the right hand side is not normalised. The resulting normalised state of each clone is \cite{KW99}
\begin{equation}
\eta_x = v \phi_x + (1-v) \Phi,
\end{equation}
where $v := \frac{1}{2}\:\frac{n+2}{n+1}$. By the correctness requirement of quantum money schemes, an honest measurement on the correct state should always give a correct answer so that the coin is declared valid, i.e.
\begin{equation} \label{eq:correctness}
\text{Tr}(M^{\text{cor}}_x \phi_x) = 1.
\end{equation}
We further assume that, without access to the state $\phi_x$, the adversary has no information on $x$ and can do no better than to guess randomly. This means her probability of declaring a correct bit value is $1/2$, i.e.\footnote{Note that this assumption holds for all hidden matching quantum money schemes considered, and for any scheme in which the verification protocol involves declaring many single bit values which are later checked. Nevertheless, there may be protocols in which the verification protocol involves checking many $m$-bit outcomes, in which case the more reasonable assumption would be
\begin{equation*}
\text{Tr}(M^{\text{cor}}_x \Phi) = 1/2^m.
\end{equation*}
To our knowledge such a scheme does not exist, but if higher error tolerance is desired our proof suggests looking into such schemes.}
\begin{equation} \label{eq:guess}
\text{Tr}(M^{\text{cor}}_x \Phi) = 1/2.
\end{equation}
Both honest verifiers hold the state $\eta_x$. Using Eqs. \eqref{eq:correctness} and \eqref{eq:guess}, the probability that an honest verifier gets a correct measurement outcome is
\begin{equation}
\text{Tr}(M^{\text{cor}}_x\eta_x) = v \text{Tr}(M^{\text{cor}}_x \phi_x) + (1-v) \text{Tr}(M^{\text{cor}}_x \Phi) = v + \frac{(1-v)}{2}.
\end{equation}
Expressing $v$ in terms of the dimension of the system shows that this strategy (which is always available to the adversary) leads to the honest verifiers finding an error rate of
\begin{equation}
e_{\text{max}} = \frac{1}{2} - \frac{1}{4}\frac{n+2}{n+1},
\end{equation}
and so for any such scheme to be secure an honest participant must expect an error rate less than $e_{\text{max}}$ in an honest run of the protocol.
Our analysis shows that for any scheme with $n = 4$ the tolerable noise is at most $20\%$, which complements our results in Section \ref{sec:security} where we described a protocol with $n=4$ which tolerated noise up to $16.6\%$. For $n=14$, the bound in this section shows that any such scheme has a noise tolerance of at most $23.3\%$. For $n=14$, our protocol can achieve an error tolerance of $23.03\%$, and so it is nearly optimal. As we increase the dimension of the quantum states used for the coins, the upper bound on the tolerable noise approaches $25\%$ which coincides with our conjecture for the tolerable noise in our protocols above.
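The symmetrisation attack above can be verified numerically: projecting $\phi_x \otimes \Phi$ onto the symmetric subspace and tracing out one subsystem should reproduce the clone $v \phi_x + (1-v)\Phi$ with $v = \frac{1}{2}\frac{n+2}{n+1}$, and hence the error rate $e_{\text{max}}$. The sketch below assumes $n=4$ and an arbitrary pure input state.

```python
import numpy as np

# Check the symmetrisation attack for n = 4: project phi_x (x) 1/n onto the
# symmetric subspace, trace out one subsystem, and compare the clone with
# v*phi_x + (1-v)*1/n, where v = (n+2)/(2(n+1)).
rng = np.random.default_rng(2)
n = 4
psi = rng.normal(size=n) + 1j * rng.normal(size=n)
psi /= np.linalg.norm(psi)                  # arbitrary pure state phi_x
phi = np.outer(psi, psi.conj())

SWAP = np.zeros((n * n, n * n))
for i in range(n):
    for j in range(n):
        SWAP[i * n + j, j * n + i] = 1
S2 = (np.eye(n * n) + SWAP) / 2             # projector onto symmetric subspace

rho = S2 @ np.kron(phi, np.eye(n) / n) @ S2
rho /= np.trace(rho)                        # normalised symmetrised state

# Partial trace over the second subsystem gives the state of one clone.
clone = np.trace(rho.reshape(n, n, n, n), axis1=1, axis2=3)

v = (n + 2) / (2 * (n + 1))
target = v * phi + (1 - v) * np.eye(n) / n
e_max = 0.5 - v / 2                         # resulting honest error rate
print(np.allclose(clone, target), e_max)    # clone matches; e_max = 0.2
```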
\begin{figure}
\caption{Plot showing the theoretical bound on protocol noise tolerance (dotted line) and the noise tolerance achieved by the protocols in Section \ref{sec:scheme} as the dimension $n$ of the underlying states is varied.}
\end{figure}
\section{Experimental Implementation}
The protocol presented in Section \ref{sec:scheme} gives rise to three main technical challenges when one considers experimental implementations, namely: the security analysis provided does not account for losses; the bank requires a source of complex, high-dimensional states; and the protocol requires that the coin holders have the ability to store states in quantum memory. In this section we address the first two issues so that a proof-of-principle implementation of the verification algorithm of the quantum money schemes could be performed with current technology.
\subsection{Detector Losses} \label{sec:detloss}
Here we tackle the first of the issues, and consider an implementation in which the verifiers use imperfect detectors with efficiency $\eta$. We assume that all detector losses are random and cannot be manipulated by the adversary. In this paper we do not consider channel loss, as we assume that coin transfers occur over short distances, meaning channel losses are less relevant. Nevertheless, many of the methods presented here would remain valid in the presence of small channel loss with only minor modifications necessary. To incorporate detector loss, it is necessary to modify the verification protocol, previously stated in Section \ref{sec:scheme}, so that it becomes:
\begin{algorithm}[H]
\floatname{algorithm}{Ver Algorithm}
\renewcommand{\thealgorithm}{}
\caption{}
\begin{algorithmic}[1]
\STATE The holder randomly chooses a subset of indices, $L\subset [q]$, with $l=|L|$, such that $r_i =0$ for each $i\in L$. The indices $i\in L$ specify the selection of games $G_i$ which will be used as tests for the verification procedure. For each $i\in L$, the holder then sets the corresponding bit of $r$ to be $1$ so that this game cannot be used in future verifications.
\STATE For each $i\in L$, the holder picks a relation $\sigma^\prime_i$ at random from $\mathcal{R}$ and applies the appropriate measurement to get answer $d_i$. If there is no measurement outcome we say the measurement was unsuccessful and set $d_i = \emptyset$. We define the number of successful measurement outcomes to be $l^\prime$.
\STATE If $l^\prime < l_{min} := (\eta - \epsilon)l$, where $\epsilon > 0$ is a small security parameter, the verifier aborts the protocol.
\STATE The holder sends all triplets $(i, \sigma^\prime_i, d_i)$ to the bank.
\STATE The bank checks that $s<T$, where $T$ is the pre-defined maximum number of allowed verifications for the coin. If $s = T$, the bank declares the coin as invalid.
\STATE For each $i$, the bank checks whether the answer is correct by comparing $(i, \sigma^\prime_i, d_i)$ to the secret $x^i$ values. The bank ignores those outcomes for which $d_i = \emptyset$, and accepts the coin as valid only if more than $l^\prime(c-\delta)$ of the answers are correct, where $c = 1-\beta$ is a measure of the channel correctness and $\delta$ is a small positive constant.
\STATE The bank updates $s$ to $s+1$.
\end{algorithmic}
\end{algorithm}
\subsubsection{Correctness}
Correctness of the scheme follows from Hoeffding's inequality. When all participants are honest, it is exponentially unlikely for $l^\prime$ to be less than $l_{min}$, so the protocol will not abort, except with a negligible probability. If the protocol does not abort, the verifier has at least $l_{min}$ successful measurement outcomes, each with an independent probability $c$ of being correct. Overall, the probability of the verification failing is bounded by
\begin{equation}
\mathbb{P}(\text{Ver fails}) \leq \exp \left[-2l_{min}\delta^2\right] + \exp[-2l\epsilon^2],
\end{equation}
where now $\delta = (e^\prime_{\text{min}}-\beta)/2$, with $e^\prime_{\text{min}}$ derived in Eq. \eqref{eq:emin} below as the minimum average error rate achievable by the adversary.
\subsubsection{Unforgeability}
Since the protocol now includes detector losses, the adversary may not have to send states to each verifier for each game in the verification protocol, and she could attempt to hide losses arising from her strategy in the losses arising from detector inefficiency. As a consequence, the set of strategies available to the adversary is increased, and we must make sure our arguments in Section \ref{sec:security} still apply.
Let $U_1$ and $U_2$ be $q$-bit strings representing whether or not the adversary sent a state to Ver$_1$ and Ver$_2$ respectively, for each of the $q$ games created by the bank. An entry of $1$ means the adversary sent a state to the verifier, while an entry of $0$ means the adversary did not send a state to the verifier. We want to show that, in order for the protocol not to abort, $W(U_i) \geq \gamma q$, where $\gamma := 1 - \frac{3\epsilon}{\eta}$ and $W$ is the Hamming weight. Suppose $W(U_i) = \gamma q$. Then, in Step 1 of the verification protocol, Ver$_i$ takes a sample, $V_i$, consisting of $l$ of the entries of $U_i$. Hoeffding's inequality gives
\begin{equation}
P\Big(W(V_i) \leq (\gamma+\frac{\epsilon}{\eta}) l\Big) \geq 1 - \exp [-2\frac{\epsilon^2}{\eta^2}l].
\end{equation}
If $W(V_i) \leq (\gamma+\frac{\epsilon}{\eta}) l$, then the probability of at least $l_{min}$ successful measurement outcomes is bounded by
\begin{equation}
P\Big(\text{At least $l_{min}$ successful measurement outcomes } | \:\:W(V_i) \leq (\gamma+\frac{\epsilon}{\eta}) l \Big) \leq \exp[-2l\epsilon^2] .
\end{equation}
The probability of the protocol proceeding past Step 3 of verification is therefore
\begin{equation} \label{eq:noabort}
P(\text{No Abort} | W(U_i) = \gamma q ) \leq \exp [-2\frac{\epsilon^2}{\eta^2}l]+ \exp[-2\epsilon^2l].
\end{equation}
In what follows we assume $W(U_i) \geq \gamma q$, since otherwise the above shows that the verifiers will abort with near certainty. This means the adversary is able to use any strategy that leads to channel losses of at most $\frac{3\epsilon}{\eta}$ for each verifier, as these can be hidden within the normal fluctuations of detector loss. Suppose there is a strategy which gives at least $(1-\frac{3\epsilon}{\eta}) q$ states to each verifier, and which leads to an average error probability (on only the states tested) of $e^\prime_{\text{min}}$ for at least one of the verifiers. Then, there is a strategy which gives $q$ states to each verifier, and leads to an average error probability for at least one of the verifiers of $(1-\frac{3\epsilon}{\eta})e^\prime_{\text{min}} + \frac{3\epsilon}{2\eta}$ (the adversary simply sends the maximally mixed state to each verifier in place of the $\frac{3\epsilon}{\eta}$ losses). Since this strategy falls under the scope of the analysis in Section \ref{sec:security}, we know that the resulting error rate must be at least $e_{\text{min}}$, which means
\begin{equation} \label{eq:emin}
e^\prime_{\text{min}} \geq \frac{e_{\text{min}} - \frac{3\epsilon}{2\eta}}{1-\frac{3\epsilon}{\eta}}.
\end{equation}
The parameter $\epsilon$ can be chosen to be arbitrarily small by increasing the sample size $l$. As such, the protocol is able to handle arbitrarily large detector losses, and leads to noise tolerance that can be kept arbitrarily close to the noise tolerance derived for the case of perfect detectors.
Each verifier tests at least $l_{min}$ states, and at least one verifier expects an error rate of $e^\prime_{\text{min}}$. The probability of this verifier passing the test is bounded as
\begin{equation} \label{eq:40}
P(\text{Observed error rate smaller than $e^\prime_{\text{min}} - \delta$}) \leq \exp [-2l_{min}\delta^2].
\end{equation}
Combining Eqs. \eqref{eq:noabort} and \eqref{eq:40}, the probability that the adversary is able to forge a coin is bounded by
\begin{equation}
P(\text{Forgery}) \leq \exp [-2\frac{\epsilon^2}{\eta^2}l] + \exp [-2l\epsilon^2] + \exp [-2l_{min}\delta^2] .
\end{equation}
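As an illustration of how the three exponential terms combine, the bound can be evaluated numerically. The parameter values below are assumptions chosen purely for illustration (only $l=18{,}000$ states per verification and the $10^{-6}$ security level appear in the conclusion of this paper; $\epsilon$, $\eta$, $\delta$, and $l_{min}$ here are hypothetical):

```python
import math

def forgery_bound(l, l_min, eps, eta, delta):
    """Upper bound on P(Forgery): the sum of the three Hoeffding-type
    exponential terms from the displayed inequality."""
    return (math.exp(-2 * (eps / eta) ** 2 * l)
            + math.exp(-2 * eps ** 2 * l)
            + math.exp(-2 * l_min * delta ** 2))

# Illustrative parameters (eps, eta, delta, l_min are assumed values,
# not taken from the paper); l = 18000 states per verification.
p = forgery_bound(l=18000, l_min=9000, eps=0.02, eta=0.5, delta=0.03)
```

With these assumed values the bound already falls below the $10^{-6}$ security level, showing how quickly the exponential terms decay in $l$.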
\subsection{Coherent State Implementation}
In this section we tackle the second issue arising when considering experimental realisations of the scheme -- the bank must create hidden matching states of the form in Eq. \eqref{eq:HM}, which are high-dimensional states of high complexity. The implementation of hidden matching quantum retrieval games has been studied extensively in Ref. \cite{JM}, where the coherent state mapping defined in Ref. \cite{JM_Mapping} was used to approximate each hidden matching state by a sequence of $n$ coherent states of the form
\begin{align}
\ket{\alpha,x}&=e^{-\frac{|\alpha|^2}{2}}\sum_{k=0}^{\infty}\frac{\alpha^k}{k!}(a_x^{\dagger})^k\ket{0}\nonumber\\
&=\bigotimes_{i=1}^n\ket{(-1)^{x_i} \frac{\alpha}{\sqrt{n}}},
\end{align}
where
\begin{equation}
a_x^{\dagger}=\frac{1}{\sqrt{n}}\sum_{i=1}^{n}(-1)^{x_i}b_i^{\dagger}
\end{equation}
and $\{b_1^{\dagger},b_2^{\dagger},\ldots,b_n^{\dagger}\}$ are the creation operators of the $n$ modes. We call each sequence of coherent states a block, so that a single block is used to approximate a hidden matching state. As outlined in Ref. \cite{JM}, Bob's measurement can then be performed using linear optics circuits and single photon detectors.
In the absence of a phase reference, the phase of each block is randomised, which implies that each block is equivalent to a classical mixture of number states \cite{BLM2000}. More specifically, writing $\alpha=e^{i\theta}|\alpha|$, we have
\begin{align}
\int_{0}^{2\pi}\frac{d\theta}{2\pi}\ketbra{\alpha,x}{\alpha,x}=e^{-|\alpha|^2}\sum_{k=0}^{\infty}\frac{|\alpha|^{2k}}{k!}\ketbra{k}{k}_x,
\end{align}
where $\ketbra{k}{k}_x$ is a state of $k$ photons in the mode $a_x^{\dagger}$. Thus, the probability of obtaining a particular number of photons depends only on $\alpha$, which is a free parameter within the coherent state mapping. We consider the following three cases:
\subsubsection{Zero photons in the block}
In this case the state emitted is simply the vacuum state. If the adversary chooses to forward a state on to the verifiers, she can do no better than to induce a $50\%$ error rate, and it is simple to show that it is never beneficial for her to do so. This scenario can therefore be considered a ``source" loss, as opposed to a channel or detector loss. Crucially, since these losses are not controllable by the adversary, they can be treated in the same manner as detector losses in Section \ref{sec:detloss} simply by including the source loss into the detector loss parameter, $\eta$. The probability of zero photons being emitted is $p_0 = e^{-|\alpha|^2}$.
\subsubsection{One photon in the block}
In this case, the state emitted is equivalent to the ideal hidden matching state in Eq. \eqref{eq:HM} since
\begin{align}
\ket{1}_x&=a_x^{\dagger}\ket{0}\nonumber\\
&=\frac{1}{\sqrt{n}}\sum_{i=1}^{n}(-1)^{x_i}b_i^{\dagger}\ket{0}\nonumber\\
&=\frac{1}{\sqrt{n}}\sum_{i=1}^{n} (-1)^{x_i}\ket{i},
\end{align}
where $\ket{i}$ is a single photon state in the mode $b_i$. Therefore, whenever the bank's source emits a single photon, the analysis in Section \ref{sec:security} applies. The probability of one photon being emitted is $p_1 = |\alpha|^2 e^{-|\alpha|^2}$.
\subsubsection{More than one photon in the block}
In this case we assume the worst case scenario: whenever the source emits more than one photon to represent a hidden matching state, the adversary can perfectly forge that state. The resulting error rate for the adversary is $e^\prime_{\text{min}}(\frac{p_1}{p_1+p_{2+}})$, where $p_{2+} = 1-p_0-p_1$. For small $|\alpha|$, $p_{2+}\approx \frac{|\alpha|^4}{2}$, while $p_1\approx |\alpha|^2$, so that $p_{2+}\ll p_1$ and the adversary's error probability is almost unchanged by using coherent states.
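The three cases above follow the Poisson photon-number statistics of a phase-randomised coherent state. A short numerical sketch (the value of $|\alpha|$ below is an illustrative choice, not one prescribed by the paper):

```python
import math

def block_photon_stats(alpha_abs):
    """Poisson photon-number statistics of a phase-randomised
    coherent-state block with amplitude |alpha|."""
    mu = alpha_abs ** 2            # mean photon number |alpha|^2
    p0 = math.exp(-mu)             # vacuum: a "source" loss
    p1 = mu * math.exp(-mu)        # single photon: ideal hidden matching state
    p2plus = 1.0 - p0 - p1         # multi-photon: assumed perfectly forgeable
    return p0, p1, p2plus

p0, p1, p2plus = block_photon_stats(0.2)   # |alpha| = 0.2 is an assumed choice
# For small |alpha|, p2+ ~ |alpha|^4 / 2 is much smaller than p1 ~ |alpha|^2,
# so multi-photon emissions barely affect the adversary's error probability.
```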
\section{Conclusion}
We presented a family of unconditionally secure classical verification quantum money schemes which are tolerant to noise up to $23\%$, and which we conjecture tolerate noise up to $25\%$. We further proved that $25\%$ is the maximum noise tolerance achievable for a wide class of quantum money schemes, including all classical verification secret-key schemes previously proposed. The security of our schemes depends on the difference between maximum tolerable noise and expected noise, so the increased noise tolerance improves the efficiency of our scheme, allowing for smaller, more re-usable coins. The techniques we use to prove security differ considerably from those of previous papers, and the re-usability of our coins is optimal \cite{Gav2012} in that it scales linearly with the number of qubits in the coin. This is a significant improvement when compared to Ref. \cite{GK2015}, in which the re-usability scales as $q^{1/3}$, and Ref. \cite{Gav2012}, in which re-usability scales as $q^{1/4}$, where $q$ is the total number of qubits in the coin. With realistic assumptions on experimental equipment, we expect that, using $n=8$, a coin containing $10^{9}$ qubits would use $l=18,000$ states for each verification, and would be re-usable $T = 100$ times for a security level of $10^{-6}$. Lastly, we suggested methods of adapting our techniques to facilitate experimental implementations of the scheme, showing that the schemes can be implemented using weak coherent states even in the presence of limited detector efficiency.
\acknowledgements{The authors would like to thank I. Kerenidis, E. Andersson, and A. Ignjatovic for helpful discussions. R. A. gratefully acknowledges EPSRC studentship funding under grant number EP/I007002/1. J.M.A. recognizes funding from the Singapore Ministry of Education (partly through the Academic Research Fund Tier 3 MOE2012-T3-1-009) and the National Research Foundation of Singapore, Prime Minister’s Office, under the Research Centres of Excellence programme.}
\section{Appendix A}
\subsection{Overview of Argument}
In the main paper, we claim that the adversary cannot use coherent attacks on multiple states in order to beat the bound given in Eq. \eqref{eq:bound}, even when conditioned on the states chosen by the bank, and on the outcomes of previous measurement results found by the verifiers. In this section we formally prove our claim using a teleportation argument similar to the one introduced by Croke and Kent in Ref. \cite{CrokeKent}, so that each game can essentially be viewed as independent of all others.
In order to apply the teleportation argument, we must first introduce a modified individual setting, in which the adversary is allowed an additional ability. We show that this modification does not help the adversary to cheat. We then show that any coherent strategy can be transformed into a modified individual strategy. Therefore, any coherent strategy cannot beat the bounds proved for the unmodified individual case, as claimed.
\subsection{Modified Individual Attacks} \label{sec:indiv}
In the individual setting, the verifiers each receive a single hidden matching state and apply the verification protocol to test its authenticity. As specified by the protocol, the verifiers randomly choose to measure the state they receive using one of the matching measurements. We include this random choice of matching into the mathematical description of the measurement, and group the outcomes to be either ``correct" or ``incorrect". It can be shown that if the bank creates $\phi_x = \ket{\phi_x}\bra{\phi_x}$, the verifiers' measurement is described by the POVM
\begin{equation}
\Gamma_x = \{ \Gamma^{cor, x}, \Gamma^{inc, x} \} = \frac{n}{2(n-1)} \left\{ \frac{n-2}{n} \mathbb{I} + \phi_x, \: \mathbb{I} - \phi_x \right\}.
\end{equation}
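It is immediate to verify that the two outcomes above form a valid POVM, since $\Gamma^{cor,x}+\Gamma^{inc,x}=\frac{n}{2(n-1)}\big(\frac{n-2}{n}+1\big)\mathbb{I}=\mathbb{I}$ and both elements are positive semidefinite. A short numerical sketch (the $n=4$ bit string is an illustrative choice):

```python
import numpy as np

def hidden_matching_state(x):
    """|phi_x> = (1/sqrt(n)) sum_i (-1)^{x_i} |i>, for a bit string x."""
    n = len(x)
    return np.array([(-1) ** xi for xi in x]) / np.sqrt(n)

def verifier_povm(x):
    """POVM {Gamma^cor, Gamma^inc} associated with phi_x, as in the text."""
    n = len(x)
    phi = hidden_matching_state(x)
    P = np.outer(phi, phi)                      # projector onto |phi_x>
    pref = n / (2 * (n - 1))
    G_cor = pref * ((n - 2) / n * np.eye(n) + P)
    G_inc = pref * (np.eye(n) - P)
    return G_cor, G_inc

G_cor, G_inc = verifier_povm([0, 1, 1, 0])      # illustrative n = 4 string
# Both elements are PSD and they sum to the identity.
```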
Suppose now the adversary has the additional power of being able to force the verifiers to apply a correction unitary (which will be the teleportation corrections) to their measurement outcomes before they are sent to the bank. The adversary must specify the correction operation before sending the states to the verifiers, and, crucially, the correction operation is such that it is simply a permutation of the set of hidden matching states. For example, suppose the teleportation operation takes input $\ket{\phi_x}$ and outputs $\ket{\phi_{x^\prime}}$, with correction operator $C$. In this case, before sending the states, the adversary will tell the verifiers that they must apply correction $C$ to their measurement outcomes. In effect then, the verifiers will measure
\begin{equation} \label{eq:povm}
\Gamma_{x^\prime} = \{ \Gamma^{cor, x^\prime}, \Gamma^{inc, x^\prime} \} = \frac{n}{2(n-1)} \left\{ \frac{n-2}{n} \mathbb{I} + \phi_{x^\prime}, \: \mathbb{I} - \phi_{x^\prime} \right\},
\end{equation}
since the correction maps $\Gamma^{inc, x^\prime}$ to $\Gamma^{inc, x}$. On average, given $\phi_x$, it is not possible for the adversary to create two states, $\eta_x$ and $\tau_x$, such that $\text{Tr}[ \Gamma^{inc, x^\prime} (\eta_x + \tau_x) ] < p$. If it were possible, it would imply that the adversary can clone $\phi_{x^\prime}$ better than is allowed by quantum mechanics (and our arguments in the main paper): if the adversary were given $\phi_{x^\prime}$, she could easily transform it to $\phi_x$ by applying $C$, and then perform the strategy to obtain two copies with a fidelity higher than the bound proved in the main paper. Therefore the additional power given to the adversary does not allow her to decrease the value of $p_{\text{Ver}_1} + p_{\text{Ver}_2}$.
\subsection{Coherent Strategy}
We now consider the case of $N$ games created by the bank. The bank creates
\begin{equation}
\frac{1}{2^{Nn}} \sum_{x_1,x_2} \ket{x_1}\bra{x_1}_{X_1} \otimes \ket{x_2}\bra{x_2}_{X_2}\otimes \ket{\phi_{x_1}}\bra{\phi_{x_1}}_A \otimes \ket{\phi_{x_2}}\bra{\phi_{x_2}}_B.
\end{equation}
The $X_1$ and $A$ registers contain the first $N-1$ secret strings selected by the bank and the corresponding hidden matching states, respectively. The $X_2$ and $B$ registers contain the $N$'th secret string selected by the bank and its corresponding hidden matching state. Only the $A$ and $B$ registers are accessible to the adversary. We assume for a contradiction that there exists a strategy available to the adversary such that, conditional on the value in the $X_1$ register, and conditional on the verifiers obtaining specific outcomes in previous measurements, the value of $p_{\text{Ver$_1$}} + p_{\text{Ver$_2$}}$ in the $N$'th game is decreased below the bound in Eq. \eqref{eq:bound}.
We describe this strategy as follows -- upon receiving the states from the bank, the adversary applies the unitary operation $S_{ABC}$ so that the state becomes
\begin{equation}
\begin{split}
& \frac{1}{2^{Nn}} \sum_{x_1,x_2} \ket{x_1}\bra{x_1}_{X_1} \otimes \ket{x_2}\bra{x_2}_{X_2}\otimes S_{ABC}\Big(\ket{\phi_{x_1}}\bra{\phi_{x_1}}_A \otimes \ket{\phi_{x_2}}\bra{\phi_{x_2}}_B \otimes \ket{0}\bra{0}_C \Big) S^\dagger_{ABC} \\
&= \frac{1}{2^{Nn}} \sum_{x_1,x_2} \ket{x_1}\bra{x_1}_{X_1} \otimes \ket{x_2}\bra{x_2}_{X_2}\otimes \ket{\Psi^{x_1x_2}}\bra{\Psi^{x_1x_2}}_{AA^\prime B B^\prime C^\prime}.
\end{split}
\end{equation}
The $A, A^\prime$ registers are the spaces that contain the states that will be sent to Ver$_1$ and Ver$_2$ (resp.) for the first $N-1$ games. The $B, B^\prime$ registers are the spaces that contain the states that will be sent to Ver$_1$ and Ver$_2$ (resp.) for the $N$'th game. The $C$ registers are auxiliary registers held by the adversary. We assume that the bank measures the $X_1$ register and obtains an outcome $x_1$ that satisfies the conditions in the assumption. The state held by the adversary is then
\begin{equation}
\frac{1}{2^{n}} \sum_{x_2} \ket{\Psi^{x_1x_2}} \bra{\Psi^{x_1x_2}}.
\end{equation}
The adversary gives the $A,A^\prime, B, B^\prime$ parts of the state to the verifiers. The honest verifiers first make measurements on systems $A, A^\prime$, and a possible post-measurement state is
\begin{equation} \label{eq:4}
\frac{1}{2^{n}} \sum_{x_2} a_{x_1x_2} \Pi_{AA^\prime}\ket{\Psi^{x_1x_2}} \bra{\Psi^{x_1x_2}} \Pi^\dagger_{AA^\prime}.
\end{equation}
We assume that $\Pi_{AA^\prime}$ is a measurement outcome satisfying the conditions of the assumption, so that the error probabilities on the $N$'th game are decreased. Here $a_{x_1x_2}$ is the normalisation term, $a_{x_1x_2} = 1/\text{Tr} \big[ \Pi_{AA^\prime}\ket{\Psi^{x_1x_2}}\bra{\Psi^{x_1x_2}}\Pi^\dagger_{AA^\prime}\big]$.
The verifiers now each measure $\Gamma_{x_2}$, as defined in Eq. \eqref{eq:povm}, on their $B$ system. The assumption tells us that
\begin{equation} \label{eq:contr}
\frac{1}{2^{n}} \sum_{x_2} \Bigg[ a_{x_1x_2} \text{Tr}\Big[ \Gamma^{inc, x_2}_{B} \: \Pi_{AA^\prime}\ket{\Psi^{x_1x_2}} \bra{\Psi^{x_1x_2}} \Pi^\dagger_{AA^\prime} \Big] + a_{x_1x_2} \text{Tr}\Big[ \Gamma^{inc, x_2}_{B^\prime} \: \Pi_{AA^\prime}\ket{\Psi^{x_1x_2}} \bra{\Psi^{x_1x_2}} \Pi^\dagger_{AA^\prime} \Big] \Bigg] < p.
\end{equation}
We now aim to prove that this leads to a contradiction.
\subsection{Teleportation strategy}
Supposing the above strategy exists, we explore what this enables the adversary to do in the individual case in the hopes of finding a contradiction. We suppose the bank creates
\begin{equation}
\frac{1}{2^{n}} \sum_{x_2} \ket{x_2}\bra{x_2}_{X_2} \otimes \ket{\phi_{x_2}}\bra{\phi_{x_2}}_B
\end{equation}
and sends the $B$ part to the adversary. The adversary can simulate the above strategy locally, by creating $\ket{x_1}$, $\ket{\phi_{x_1}}$ and the maximally entangled state on $n$ dimensions, $\ket{\Phi}$. After relabelling the registers, the adversary holds the state
\begin{equation}
\frac{1}{2^{n}} \sum_{x_2} \ket{x_1}\bra{x_1}_{X_1} \otimes \ket{x_2}\bra{x_2}_{X_2}\otimes \ket{\phi_{x_1}}\bra{\phi_{x_1}}_A \otimes \ket{\phi_{x_2}}\bra{\phi_{x_2}}_D \otimes \ket{0}\bra{0}_C \otimes \ket{\Phi}\bra{\Phi}_{BE}.
\end{equation}
To simulate the strategy in the previous section, the adversary applies $S$ to the $A$, $B$ and $C$ registers, followed by a measurement on the resulting $A, A^\prime$ registers. Conditional on measurement outcome $\Pi_{AA^\prime}$, she then applies a generalised Bell measurement on the $D$ and $E$ registers in order to teleport the unknown state $\ket{\phi_{x_2}}$ into the $B$ register which was acted on by $S$ (modulo a teleportation correction). If the appropriate measurement outcome is not found, the adversary does not perform the Bell measurement and instead starts again. The resulting state is
\begin{equation} \label{eq:11}
\frac{1}{2^{n}} \sum_{x_2} a_{x_1x^\prime_2} \Pi_{AA^\prime} \ket{\Psi^{x_1x^\prime_2}} \bra{\Psi^{x_1x^\prime_2}} \Pi^\dagger_{AA^\prime}.
\end{equation}
Notice the state contains $x^\prime_2$ since the Bell measurement does not faithfully teleport the state, and a correction is required which we have not performed. If the dimension of the hidden matching states is a power of two, the correction operators are simply tensor products of the Pauli operators \cite{Rig05}. Crucially, all corrections define a bijective mapping between $x^\prime_2$ and $x_2$, so that as $x_2$ cycles over all possible values so does $x^\prime_2$, and the probabilities are not affected (all corrections are equally likely, which must be the case so that information is not communicated faster than light).
The state in Eq. \eqref{eq:11} is the same as the state in Eq. \eqref{eq:4}, but the measurements applied by the verifiers are correlated with the $X_2$ register held by the bank. Therefore, the verifiers' failure probabilities are not the same when measuring the two states. Measurements on the state in Eq. \eqref{eq:4} lead to a failure probability of
\begin{equation} \label{eq:12}
\frac{1}{2^{n}} \sum_{x_2} \Bigg[ a_{x_1x_2} \text{Tr}\Big[ \Gamma^{inc, x_2}_{B} \: \Pi_{AA^\prime}\ket{\Psi^{x_1x_2}} \bra{\Psi^{x_1x_2}} \Pi^\dagger_{AA^\prime} \Big] + a_{x_1x_2} \text{Tr}\Big[ \Gamma^{inc, x_2}_{B^\prime} \: \Pi_{AA^\prime}\ket{\Psi^{x_1x_2}} \bra{\Psi^{x_1x_2}} \Pi^\dagger_{AA^\prime} \Big] \Bigg],
\end{equation}
while measurements on the state in Eq. \eqref{eq:11} lead to a failure probability of
\begin{equation} \label{eq:13}
\frac{1}{2^{n}} \sum_{x_2} \Bigg[ a_{x_1x^\prime_2} \text{Tr}\Big[ \Gamma^{inc, x_2}_{B} \: \Pi_{AA^\prime}\ket{\Psi^{x_1x^{\prime}_2}} \bra{\Psi^{x_1x^{\prime}_2}} \Pi^\dagger_{AA^\prime} \Big] + a_{x_1x^\prime_2} \text{Tr}\Big[ \Gamma^{inc, x_2}_{B^\prime} \: \Pi_{AA^\prime}\ket{\Psi^{x_1x^{\prime}_2}} \bra{\Psi^{x_1x^{\prime}_2}} \Pi^\dagger_{AA^\prime} \Big] \Bigg].
\end{equation}
The only difference is the appearance of $x^\prime_2$ in the second expression. Nevertheless, the two can be made equal if the verifiers are forced to apply the teleportation correction unitary to their measurement outcomes. In effect, this correction relabels the measurement outcomes so that $\Gamma^{inc, x_2} \rightarrow \Gamma^{inc, x^\prime_2}$. Following this correction, the two expressions \eqref{eq:12} and \eqref{eq:13} are equal. This shows that the assumption in Eq. \eqref{eq:contr} leads to a contradiction, since it shows an individual attack in the modified scenario can achieve the same error probability as a coherent attack, and the error probabilities achievable in the modified individual scenario are the same as for the unmodified individual scenario.
\end{document}
|
\begin{document}
\section{Introduction}
The motional state of atomic or mechanical degrees of freedom can be manipulated via the interaction with the electromagnetic field confined in a cavity. Such a possibility is best illustrated by cavity cooling, which has been successfully applied to single atoms \cite{AtomCooling}, ions \cite{IonCooling}, and micro- and nano-mechanical resonators~\cite{Chan2011,Teufel2011,Kippen2012}. Recent breakthroughs in the dissipative preparation of mechanical squeezed states~\cite{MechSqueezing1,MechSqueezing2,MechSqueezing3,ResEngIons3}, where a cavity-assisted scheme is designed to {\em cool} the target system directly into a squeezed state of motion, can be thought of as a powerful development of this paradigm~\cite{ResEng1,Kronwald,Wang,Woolley,JieLi}.
However, for many applications, ranging from fundamental tests of quantum mechanics to quantum information processing, the stabilization of highly pure states
with non-Gaussian features is needed instead.
In cavity optomechanics, the quadratic optomechanical coupling has been exploited for the dissipative preparation of Schr\"odinger cat states~\cite{OptoCat1,OptoCat2}, but
the existence of multiple steady states requires the impractical initialization of the system in a state of definite parity.
Recently we have shown that a tunable optomechanical coupling which has both a linear and quadratic component enables the stabilization of pure non-Gaussian states without requiring any initialization~\cite{Me1,Me2}.
For specific values of the amplitude of the laser drives, new families of nonclassical states can be stabilized, which correspond to (squeezed and displaced) superpositions of a finite number of Fock states. Here we focus on a specific instance, namely a displaced finite superposition that approximates---in principle with arbitrary fidelity---any number state in the harmonic ladder, modulo a displacement.
\section{Results}
We consider an optomechanical system where the frequency of a cavity mode
parametrically couples to the displacement and squared displacement of a mechanical resonator.
The Hamiltonian is given by (we set $\hbar=1$ throughout)
\begin{equation}
\hat H=\omega_c \hat a^\dag \hat a+\omega_m \hat b^\dag \hat b - g_0^{(1)}\hat a^\dag \hat a (\hat b+\hat b^\dag)- g_0^{(2)} \hat a^\dag \hat a (\hat b+\hat b^\dag)^2 +\hat H_{\mathrm{drive}}\label{HInt}\,,
\end{equation}
where $\hat a$ ($\hat b$) is the annihilation operator of the cavity (mechanical) mode of frequency $\omega_c$ ($\omega_m$), and $g_0^{(1)}$ and $g_0^{(2)}$ quantify the linear and quadratic single-photon couplings, respectively. Such a linear-and-quadratic coupling can be realized in
membrane-in-the-middle setups~\cite{NonLin,NonLin1, NonLin2}, cold atoms~\cite{NonLin3}, microdisk resonators~\cite{NonLin4} and photonic crystal cavities~\cite{NonLin5,NonLin6}. The cavity has a decay rate $\kappa$ and is driven with three tones
\begin{equation}
\hat H_{\mathrm{drive}}=\hat a^\dag\bigl(\varepsilon_- e^{-i\omega_-t}+\varepsilon_0 e^{-i\omega_0t}+\varepsilon_+ e^{-i\omega_+t} \bigr) +\mathrm{H.c.}\, ,
\end{equation}
applied on the cavity resonance ($\omega_0=\omega_c$), and on the lower and upper mechanical sideband $(\omega_{\pm}=\omega_c\pm\omega_m)$.
After standard linearization (we dub $\hat d$ the fluctuation operator of the cavity field), moving into a
frame rotating with the free cavity and mechanical Hamiltonian, and focusing on the good cavity limit ($\kappa\ll \omega_m$) we get
\begin{equation}
\hat H_{{\rm RWA}}=-\hat d^{\dag}(G_- \hat b +G_+\hat b^{\dag}+G_0\{\hat b,\hat b^{\dag} \}) +\mathrm{H.c.}\, ,
\end{equation}
where we set $G_{\pm}=g_0^{(1)}\alpha_{\pm}$, $G_{0}=g_0^{(2)}\alpha_{0}$, and $\alpha_{\pm,0}$ are the steady values of the cavity amplitude at each frequency component; we will assume these couplings to be real and positive without loss of generality.
After a transient time the cavity field is found in the vacuum, while the mechanical resonator is left in a pure state $\ket{\varphi}$ that satisfies the condition
\begin{equation}\label{Eqf}
(G_- \hat b +G_+\hat b^{\dag}+G_0\{\hat b,\hat b^{\dag} \}) \ket{\varphi}=0\, .
\end{equation}
Note that when the nonlinear term is absent, namely $\nobreak{G_0\equiv0}$, we recover dissipative squeezing with a squeezing degree $ r=\tanh^{-1} (G_+/G_-)$~\cite{Kronwald}.
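This $G_0\equiv0$ limit can be checked numerically: the squeezed vacuum with $r=\tanh^{-1}(G_+/G_-)$ is annihilated by $G_-\hat b+G_+\hat b^{\dagger}$. A minimal sketch in a truncated Fock space (the truncation dimension and coupling values are illustrative choices, not taken from the paper):

```python
import numpy as np
from scipy.linalg import expm

N = 60                                    # Fock-space truncation (assumed)
b = np.diag(np.sqrt(np.arange(1, N)), 1)  # annihilation operator
bd = b.T                                  # creation operator

G_minus, G_plus = 1.0, 0.5                # illustrative sideband couplings
r = np.arctanh(G_plus / G_minus)          # squeezing degree r = artanh(G+/G-)

# Squeeze operator S(r) = exp[(r/2)(b^2 - b†^2)]; squeezed vacuum S(r)|0>
S = expm(0.5 * r * (b @ b - bd @ bd))
vac = np.zeros(N)
vac[0] = 1.0
psi = S @ vac

# With G_0 = 0 the steady state is annihilated by G_- b + G_+ b†
residual = np.linalg.norm((G_minus * b + G_plus * bd) @ psi)
```

The residual vanishes up to truncation error, confirming the dark-state condition in this limit.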
\par
In order to characterize the steady state $\ket{\varphi}$,
let us first assume that the amplitudes at the two mechanical sidebands are equal, i.e., $G_{\pm}=G$. In this case it is enough to notice that for the following values of the resonant coupling
\begin{equation}\label{Value}
G_0=\frac{G}{\sqrt{2(2n+1)}}\, ,
\end{equation}
the condition expressed in Eq.~\eqref{Eqf} becomes
\begin{equation}
\hat b^{\dag}\hat b \,\hat D\left(\sqrt{n+\tfrac12}\right) \ket{\varphi}= n \hat D\left(\sqrt{n+\tfrac12}\right) \ket{\varphi}\, ,
\end{equation}
where $\hat D$ is the displacement operator and $n\in \mathbb{N}$ is a non-negative integer (to stress this dependence we set $\ket{\varphi_n}\equiv \ket{\varphi}$ from now on).
This is in turn equivalent to
\begin{equation}\label{InfiniteSq}
\ket{\varphi_n}= \hat D \left(-\sqrt{n+\tfrac12}\right) \ket{n}
\end{equation}
and proves that the steady state is indeed a displaced Fock state. In particular, by tuning the amplitude of the resonant drive in
Eq.~\eqref{Value} {\it any} state in the Fock state ladder can be stabilized.
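Eq.~\eqref{InfiniteSq} can be verified directly in a truncated Fock space: with $G_0$ given by Eq.~\eqref{Value}, the displaced Fock state is annihilated by the operator in Eq.~\eqref{Eqf}. A minimal numerical sketch (the truncation dimension and the target $n$ are illustrative choices):

```python
import numpy as np
from scipy.linalg import expm

N = 80                                    # Fock-space truncation (assumed)
b = np.diag(np.sqrt(np.arange(1, N)), 1)  # annihilation operator
bd = b.T                                  # creation operator

n = 3                                     # target Fock state (illustrative)
G = 1.0                                   # G_+ = G_- = G
G0 = G / np.sqrt(2 * (2 * n + 1))         # resonant coupling, Eq. (Value)

# Displacement D(mu) = exp(mu b† - mu* b); steady state D(-sqrt(n+1/2))|n>
mu = -np.sqrt(n + 0.5)
D = expm(mu * bd - np.conj(mu) * b)
fock_n = np.zeros(N)
fock_n[n] = 1.0
phi = D @ fock_n

# phi should satisfy (G b + G b† + G0 {b, b†}) phi = 0
op = G * (b + bd) + G0 * (b @ bd + bd @ b)
residual = np.linalg.norm(op @ phi)
```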
\par
\begin{figure*}
\caption{Wigner function $W(q,p)$ of the steady state $\ket{\varphi_n}$ for $n=5$ and different values of the squeezing parameter $\zeta$.}
\label{f:Plot1}
\end{figure*}
The class of steady states obtained in Eq.~\eqref{InfiniteSq} turns out to be unstable~\cite{Me2}. However, it can be seen as the limit $G_+\rightarrow G_-$ of the more general case $G_+\neq G_-$ with
\begin{equation}\label{Value2}
G_0=\sqrt{\frac{G_+G_-}{2(2n+1)}}\, \, ,
\end{equation}
which is guaranteed to be stable as long as $G_+<G_-$.
In order to find the new steady state, we can project Eq.~\eqref{Eqf}
onto the position eigenstate $\ket{q}$ and obtain a differential equation for the associated
wave function $\varphi_n(q)$. The solution of this equation reads
\begin{equation}
\varphi_n(q)\propto e^{-q\sqrt{\zeta(1+2n)}} e^{-\frac{q^2}{2}}H_n\left(q+\tfrac{(1+\zeta)\sqrt{\zeta(1+2n)}}{2\zeta}\right) \, ,
\end{equation}
where we set
$\zeta=\tanh r\in[0,1)$.
Note that the integer order of the Hermite polynomial is determined by the resonant coupling in Eq.~\eqref{Value2}.
By completing the square in the exponent we get
\begin{equation}
\varphi_n(q)\propto e^{-\frac12(q-\xi_n)^2}H_n\left(q-\xi_n+\tfrac{(1-\zeta)\sqrt{\zeta(1+2n)}}{2\zeta}\right) \, ,
\end{equation}
where $\xi_n=-\sqrt{\zeta(1+2n)}$. Note that for $\zeta\rightarrow1$ we correctly recover the wave function of a displaced quantum harmonic oscillator. We now exploit the following property of the Hermite polynomials, $H_n(x+y)=\sum_{k=0}^n \binom{n}{k}H_k(x)(2y)^{n-k}$, which leads us to
\begin{equation}
\varphi_n(q)\propto \sum_{k=0}^n \binom{n}{k}c_n^{-k} e^{-\frac12(q-\xi_n)^2}H_k\left(q-\xi_n\right) \, ,
\end{equation}
with $c_n=-\frac{(1-\zeta)}{4\zeta}\xi_n$. From the last line we can finally read the explicit expression of the state
\begin{equation}
\ket{\varphi_n}=\mathcal{N}_n \hat D\bigl(\xi_n/\sqrt2\bigr) \sum_{k=0}^n \binom{n}{k}c_n^{\,-k} \ket{k}\, ,
\end{equation}
where the normalization factor is given by $\mathcal{N}_n=\left[_2F_1\left(-n,-n;1;c_n^{-2}\right)\right]^{-1/2}$.
The steady state is now given by the action of a $n$-dependent displacement on a superposition of a finite number ($n+1$) of elements.
It is easily checked that in the limit $\zeta\rightarrow1$ the superposition collapses to the single element of Eq.~\eqref{InfiniteSq}.
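The hypergeometric closed form for $\mathcal{N}_n$ follows from the finite-sum identity $\sum_{k=0}^n \binom{n}{k}^2 z^k = {}_2F_1(-n,-n;1;z)$ applied to the squared norm of the superposition. A short numerical check (the values of $n$ and $\zeta$ are illustrative):

```python
import numpy as np
from math import comb
from scipy.special import hyp2f1

def norm_check(n, zeta):
    """Compare the direct squared norm of sum_k C(n,k) c^{-k} |k>
    with the 2F1 closed form quoted in the text."""
    xi = -np.sqrt(zeta * (1 + 2 * n))          # displacement parameter xi_n
    c = -(1 - zeta) / (4 * zeta) * xi          # coefficient c_n
    direct = sum(comb(n, k) ** 2 * c ** (-2 * k) for k in range(n + 1))
    closed = hyp2f1(-n, -n, 1, c ** (-2))      # terminating series
    return direct, closed

direct, closed = norm_check(n=5, zeta=0.6)
```

The two expressions agree, so $\mathcal{N}_n=[{}_2F_1(-n,-n;1;c_n^{-2})]^{-1/2}$ indeed normalises the state.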
On the other hand, for any non-zero value of the squeezing parameter the state $\ket{\varphi_n}$ displays negativity in the
Wigner distribution, and the larger the amount of squeezing, the closer the resemblance to a Fock state.
This feature is clear from Fig.~\ref{f:Plot1}, where we show the Wigner distribution for a given $n$ ($n=5$) and different values of the squeezing parameter
$\zeta$. We clearly see that the distribution, which for lower values of $\zeta$ is skewed toward one side, progressively straightens to approach that of a Fock state. We can thus think of $\ket{\varphi_n}$ as a state that approximates any given displaced Fock state, to an extent that improves with the amount of squeezing available.
Mechanical dissipation---not considered here---sets a limit on the precision of such approximation.
Yet, one can show that it is still possible to approximate with near-unit fidelity any Fock state~\cite{Me2}.
\par
Coming back to Eq.~\eqref{Eqf}, we notice that $\ket{\varphi_n}$ is the state {\it uniquely} annihilated by the nonlinear operator
\begin{equation}\label{FockMode}
\hat f=\mathcal{G}\hat \beta+\sqrt{\frac{\cosh r\sinh r}{2(2n+1)}} \{\hat b^{\dag} ,\hat b \} \, ,
\end{equation}
where $\hat \beta=\cosh r \,\hat b+\sinh r \,\hat b^{\dagger}$ is a Bogoliubov mode and $\mathcal{G}=\sqrt{G_-^2-G_+^2}$. The nonlinear contribution added to the Bogoliubov transformation makes the nature of $\hat f$ non-bosonic.
\section{Discussion}
We presented an exactly solvable model that augments dissipative squeezing by means of a quadratic nonlinearity. The model can be
implemented in an optomechanical cavity, and the states stabilized by our protocol approximate displaced multi-phonon Fock states of any desired number.
\acknowledgments{
M.~B.~is supported by the European Union's Horizon 2020 research and innovation programme under grant
agreement No 732894 (FET Proactive HOT). O.~H.~acknowledges support from the
SFI-DfE Investigator programme (grant 15/IA/2864), the EU Horizon2020
Collaborative Project TEQ (grant agreement No 766900) and from the EPSRC
project EP/P00282X/1.}
\conflictsofinterest{The authors declare no conflict of interest. The funding sponsor had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.}
\reftitle{References}
\end{document}
|
\begin{document}
\begin{abstract}
We study finite element approximations of the nonhomogeneous Dirichlet problem for the fractional Laplacian. Our approach is based on weak imposition of the Dirichlet condition and incorporating a nonlocal analogous of the normal derivative as a Lagrange multiplier in the formulation of the problem. In order to obtain convergence orders for our scheme, regularity estimates are developed, both for the solution and its nonlocal derivative. The method we propose requires that, as meshes are refined, the discrete problems be solved in a family of domains of growing diameter.
\end{abstract}
\maketitle
\section{Introduction and preliminaries} \label{sec:intro}
Anomalous diffusion refers to phenomena arising whenever the associated underlying stochastic process is not given by Brownian motion.
One striking example of a nonlocal operator is the fractional Laplacian of order $s$ ($0<s<1$), which we will denote by $(-\Delta)^s$.
If the domain under consideration is the whole space ${\mathbb{R}^n}$, then $(-\Delta)^s$ is a pseudodifferential operator with symbol $|\xi|^{2s}$. Indeed, for a function $u$ in the Schwartz class $\mathcal{S}$, let
\begin{equation}
(-\Delta)^s u = \mathcal{F}^{-1} \left( |\xi|^{2s} \mathcal{F} u \right) ,
\label{eq:fourier}
\end{equation}
where $\mathcal{F}$ denotes the Fourier transform.
The fractional Laplacian can equivalently be defined by means of the identity \cite{Hitchhikers}
\begin{equation}
(-\Delta)^s u (x) = C(n,s) \mbox{ P.V.} \int_{\mathbb{R}^n} \frac{u(x)-u(y)}{|x-y|^{n+2s}} \, dy,
\label{eq:fraccionarioyo}
\end{equation}
where the normalization constant
\begin{equation} \label{eq:cns}
C(n,s) = \frac{2^{2s} s \Gamma(s+\frac{n}{2})}{\pi^{n/2} \Gamma(1-s)}
\end{equation}
is taken in order to be consistent with definition \eqref{eq:fourier}.
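The constant \eqref{eq:cns} is straightforward to evaluate; as a sanity check, in one dimension with $s=1/2$ it reduces to $C(1,\tfrac12)=\frac{2\cdot\frac12\,\Gamma(1)}{\sqrt{\pi}\,\Gamma(\frac12)}=\frac{1}{\pi}$:

```python
from math import gamma, pi

def C(n, s):
    """Normalisation constant of the fractional Laplacian, Eq. (cns)."""
    return (2 ** (2 * s) * s * gamma(s + n / 2)) / (pi ** (n / 2) * gamma(1 - s))

# In one dimension with s = 1/2, the constant equals 1/pi.
value = C(1, 0.5)
```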
In the theory of stochastic processes, this operator appears as the infinitesimal generator of a stable Lévy process \cite{Bertoin}. Indeed, it
is possible to obtain a fractional heat equation as a limit of a random walk with long jumps \cite{Valdinoci}.
There are two different approaches to the definition of the fractional Laplacian on an open bounded set $\Omega$.
On the one hand, to analyze powers of the Laplacian in a spectral sense: given a function $u$, to consider its spectral decomposition in terms of the eigenfunctions of the Laplacian with homogeneous Dirichlet boundary condition, and to take the operator that acts by raising to the power $s$ the corresponding eigenvalues. Namely, if
$\{\psi_k, \lambda_k \}_{k \in \mathbb{N}} \subset H^1_0 (\Omega) \times \mathbb{R}_+ $ denotes the set of normalized eigenfunctions and eigenvalues, then this operator is defined as
\[
(-\Delta)_S^s \, u (x) = \sum_{k=1}^\infty \lambda_k^s ( u , \psi_k )_{L^2(\Omega)} \psi_k(x),
\qquad x\in\Omega.
\]
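The spectral definition has a direct matrix analogue that may help fix ideas: replacing the continuous Laplacian by a discrete one, the fractional power simply acts on the eigenvalues. The sketch below uses a finite-difference Dirichlet Laplacian in one dimension and is only an illustration of the spectral variant; the operator studied in this paper is the integral one.

```python
import numpy as np

def spectral_fractional_power(A, s):
    """Fractional power A^s of a symmetric positive definite matrix A,
    mimicking the spectral definition of (-Delta)_S^s at the discrete level."""
    lam, Q = np.linalg.eigh(A)            # eigenpairs of the discrete Laplacian
    return Q @ np.diag(lam ** s) @ Q.T

# Discrete 1D Dirichlet Laplacian on a uniform grid (illustrative sizes)
m = 20
h = 1.0 / (m + 1)
A = (np.diag(2 * np.ones(m)) - np.diag(np.ones(m - 1), 1)
     - np.diag(np.ones(m - 1), -1)) / h ** 2

As = spectral_fractional_power(A, 0.5)
# s = 1 recovers A itself, and the half power composes: A^{1/2} A^{1/2} = A.
```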
On the other hand, there is the possibility to keep the motivation coming from the stochastic process leading to the definition of $(-\Delta)^s$ in ${\mathbb{R}^n}$.
This option leads to two different types of operators: one in which the stochastic process is restricted to $\Omega$ and one in which particles are allowed to jump anywhere in the space.
The first of these two is the infinitesimal generator of a censored stable Lévy process \cite{Bogdan}, we refer to it as \emph{regional} fractional Laplacian and it is given by
\begin{equation}
(-\Delta)^s_{\Omega} u(x) = C(n,s,\Omega) \mbox{ P.V.} \int_\Omega \frac{u(x)-u(y)}{|x-y|^{n+2s}} \, dy,
\quad x\in\Omega.
\label{eq:regional}
\end{equation}
The second of the two operators motivated by L\'evy processes leads to considering the integral formulation \eqref{eq:fraccionarioyo}. Observe that, unlike the aforementioned fractional Laplacians, the definition of this operator does not depend on the domain $\Omega$. In this work we deal with this operator, which we denote by $(-\Delta)^s$ and simply call it the fractional Laplacian.
The possibility of having arbitrarily long jumps in the random walk explains why, when considering a fractional Laplace equation on
a bounded domain $\Omega$, boundary conditions should be prescribed on $\Omega^c = {\mathbb{R}^n} \setminus \overline \Omega$.
For an account of numerical methods for the fractional Laplacians mentioned above, we refer the reader to the recent survey \cite{survey}.
Specific to the numerical treatment of \eqref{eq:fraccionarioyo}, we mention algorithms based on finite elements \cite{ABB, AB, AinsworthGlusa_adaptive, AinsworthGlusa_efficient, DEliaGunzburger},
finite differences \cite{HuangOberman}, Dunford-Taylor representation formulas \cite{BLP3}, Nystr\"om \cite{ABBM} and Monte Carlo \cite{Kyprianou} methods.
\new{Given $s \in (0,1)$}, in this work we study finite element approximations to problem
\begin{equation} \label{eq:dirichletnh}
\left\lbrace
\begin{array}{rl}
(-\Delta)^s u = f & \mbox{ in }\Omega, \\
u = g & \mbox{ in }\Omega^c , \\
\end{array}
\right.
\end{equation}
where the functions $f$ and $g$ are data belonging to suitable spaces. Analysis of the homogeneous counterpart of \eqref{eq:dirichletnh} was carried out in \cite{AB},
where a numerical method was developed, theoretical error bounds were established and numerical results in agreement with the theoretical predictions were obtained.
Solvability of a class of nonhomogeneous Dirichlet problems for nonlocal operators (involving kernels that are not necessarily symmetric or continuous)
was studied in \cite{Felsinger2015}.
An important result for dealing with \eqref{eq:dirichletnh} is the following integration
by parts formula for the fractional Laplacian \cite{dipierro2014nonlocal}: for $u, v$ smooth enough,
it holds
\begin{equation}\label{eq:parts} \begin{split}
\frac{C(n,s)}{2} \iint_Q & \frac{(u(x)-u(y)) (v(x)-v(y))}{|x-y|^{n+2s}} \, dx \, dy \, \\
& = \int_\Omega v(x) (-\Delta)^su(x) \, dx + \int_{\Omega^c} v(x) \, \mathcal{N}_s u(x) \, dx ,
\end{split} \end{equation}
where $\mathcal{N}_s u$ is the \emph{nonlocal normal derivative} of $u$, given by
\begin{equation*}
\mathcal{N}_s u = C(n,s) \, \int_\Omega \frac{u(x)-u(y)}{|x-y|^{n+2s}} \, dy, \ x \in \Omega^c,
\end{equation*}
and $Q = (\Omega \times {\mathbb{R}^n}) \cup ({\mathbb{R}^n} \times \Omega)$.
\new{Along this paper we always work with a \emph{fixed} value of $s$. Nonetheless, it is instructive to mention that $\mathcal{N}_s u$ recovers in the limit $s\to 1$ the notion of the classical normal derivative (cf. Remark \ref{rem:singular}).}
The aim of this work is to build finite element approximations both for the solution $u$ of \eqref{eq:dirichletnh} and for its nonlocal derivative $\mathcal{N}_s u$. In this regard, we briefly discuss a standard direct approach in which the Dirichlet condition $g$ is strongly imposed. As it turns out, although this simple method converges optimally for the variable $u$, it does not provide a computable approximation of $\mathcal{N}_s u$.
In order to overcome this limitation, a mixed formulation of the problem (in which $\mathcal{N}_s u$ plays the role of a Lagrange multiplier)
is introduced and numerically approximated. This approach, which is the main object of this paper,
delivers numerical approximations for both $u$ and $\mathcal{N}_s u$, and optimal orders of convergence are proved for them.
\new{In this way, our method inaugurates the variational setting
for the treatment of non-homogeneous essential boundary conditions for fractional operators. This is a promising scenario in which one might consider more general problems, including coupled systems involving fractional and integer-order operators.}
Throughout this paper, $C$ denotes a positive constant which may be different in various places.
\subsection{Sobolev spaces}
Given an open set $\Omega \subset {\mathbb{R}^n}$ and $s \in(0,1)$, the fractional Sobolev space $H^s(\Omega)$ is defined by
\[
H^s(\Omega) = \left\{ v \in L^2(\Omega) \colon |v|_{H^s(\Omega)} < \infty \right\},
\]
where $|\cdot|_{H^s(\Omega)}$ is the Aronszajn-Slobodeckij seminorm
\[
|v|_{H^s(\Omega)}^2 = \iint_{\Omega^2} \frac{|v(x)-v(y)|^2}{|x-y|^{n+2s}} \, dx \, dy.
\]
Naturally, $H^s(\Omega)$ is a Hilbert space furnished with the norm $\|\cdot\|_{H^s(\Omega)}^2 = \|\cdot\|_{L^2(\Omega)}^2 + |\cdot|_{H^s(\Omega)}^2 .$
We denote by $\langle \cdot, \cdot \rangle_{H^s(\Omega)}$ the bilinear form
$$
\langle u , v \rangle_{H^s(\Omega)} = \iint_{\Omega^2} \frac{(u(x)-u(y))(v(x)-v(y))}{|x-y|^{n+2s}} \, dx \, dy, \quad u, v \in H^s(\Omega).
$$
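For concreteness, the Aronszajn-Slobodeckij seminorm can be approximated by a naive two-dimensional midpoint rule. The sketch below (with the singular diagonal cells simply discarded) is only meant to make the definition tangible, not to be an accurate quadrature; note that for $v(x) = x$ on $\Omega = (0,1)$ and $s = 1/2$ the integrand equals $1$ off the diagonal, so $|v|_{H^{1/2}(0,1)}^2 = 1$.

```python
import numpy as np

def gagliardo_seminorm_sq(v, s, a=0.0, b=1.0, N=400):
    """Crude midpoint estimate of |v|_{H^s(a,b)}^2 in dimension n = 1.

    Diagonal cells, where the integrand is singular, are skipped:
    this is an illustrative sketch, not a careful quadrature.
    """
    h = (b - a) / N
    x = a + (np.arange(N) + 0.5) * h
    X, Y = np.meshgrid(x, x)
    D = np.abs(X - Y)
    integrand = np.zeros_like(D)
    off = D > 0
    integrand[off] = (v(X[off]) - v(Y[off])) ** 2 / D[off] ** (1 + 2 * s)
    return np.sum(integrand) * h * h

# For v(x) = x and s = 1/2 the integrand is identically 1 off the
# diagonal, so the squared seminorm equals 1.
est = gagliardo_seminorm_sq(lambda t: t, s=0.5)
print(est)  # close to 1
```

Skipping the diagonal loses a band of width $h$, which explains the $O(h)$ defect of the estimate.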
Sobolev spaces of order greater than one are defined as follows. If $s>1$ is not an integer, the decomposition $s = m + \sigma$, where $m \in \mathbb{N}$ and $\sigma \in (0,1)$, allows us to define $H^s(\Omega)$ by setting
\[
H^s(\Omega) = \left\{ v \in H^m(\Omega) \colon |D^\alpha v|_{H^{\sigma}(\Omega)} < \infty \text{ for all } \alpha \text{ s.t. } |\alpha| = m \right\}.
\]
A space of interest in our analysis consists of the set
$$\widetilde{H}^s(\Omega) = \{ v \in H^s({\mathbb{R}^n}) \colon \text{supp } v \subset \overline{\Omega} \},$$
endowed with the norm
\[
\| v \|_{\widetilde H^s(\Omega)} = \| \tilde v \|_{H^s({\mathbb{R}^n})},
\]
where $\tilde v$ is the extension of $v$ by zero outside $\Omega$. For simplicity
of notation, whenever we refer to a function in $\widetilde{H}^s(\Omega)$, we assume that it is extended by zero onto $\Omega^c$.
Let $s > 0$. By using $L^2(\Omega)$ as a pivot space, we have that the duality pairing between $H^s(\Omega)$ and its dual $\widetilde{H}^{-s}(\Omega) = (H^s(\Omega))'$ coincides with the $L^2(\Omega)$ inner product. Moreover, we denote the dual of $\widetilde{H}^{s}(\Omega)$ by $H^{-s}(\Omega)$.
\begin{remark}[Duality pairs]
In order to keep the notation as clear as possible, along the following sections
we write $\int_\Omega \mu v$ for $\mu\in H'$ and $v\in H$. However, if the duality
needs to be stressed we use $\langle \mu, v\rangle$ instead.
\label{rem:dualitypair}
\end{remark}
We state some important theoretical results regarding the space $\widetilde{H}^{s}(\Omega)$ (see, e.g., \cite[Proposition 2.4]{AB}).
\begin{proposition}[Poincar\'e inequality] Given a domain $\Omega$ and $s>0$, there exists a constant $C$ such that, for all $v \in \widetilde H^s(\Omega)$,
\begin{equation}
\| v \|_{L^2(\Omega)} \le C | v |_{H^s({\mathbb{R}^n})}.
\label{eq:poincare}
\end{equation}
\end{proposition}
\begin{remark} \label{rem:poincare}
Analogously to integer order Sobolev spaces, an immediate consequence of the Poincar\'e inequality is that the $H^s$-seminorm is equivalent to the full $H^s$-norm over $\widetilde H ^s(\Omega)$. Observe that, given $v \in \widetilde{H}^s (\Omega)$, its $H^s$-seminorm is given by
\[
|v|_{H^s({\mathbb{R}^n})}^2 = |v|_{H^s(\Omega)}^2 + 2 \int_\Omega |v(x)|^2 \int_{\Omega^c} \frac{1}{|x-y|^{n+2s}} dy \, dx .
\]
\end{remark}
\begin{definition} \label{def:w}
Given a (not necessarily bounded) set $\Omega$ with Lipschitz continuous boundary and $s \in (0,1)$, we denote by $\omegas : \Omega \to (0,\infty)$ the function given by
\begin{equation} \label{eq:def_w}
\omegas (x) = \int_{\Omega^c} \frac{1}{|x-y|^{n+2s}} dy.
\end{equation}
Denoting $\delta(x) = d(x,\partial \Omega)$, the following bounds hold
\[
0 < \frac{C}{\delta(x)^{2s}} \le \omegas(x) \le \frac{\sigma_{n-1}}{2s \, \delta(x)^{2s}} \quad \forall x\in \Omega,
\]
where $\sigma_{n-1}$ is the measure of the $(n-1)$-dimensional sphere and $C>0$ depends on $\Omega$. For the lower bound above we refer to \cite[formula (1.3.2.12)]{Grisvard}, whereas the upper bound is easily deduced by integration in polar coordinates.
\end{definition}
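In dimension $n = 1$ with $\Omega = (-1,1)$, the integral defining $\omegas$ can be computed in closed form, which makes the two-sided bound above easy to check numerically: here $\sigma_0 = 2$ (the measure of the $0$-dimensional sphere) and the lower bound holds with the explicit constant $1/(2s)$. The snippet below is only an illustrative verification of these elementary facts.

```python
import numpy as np

def w_s(x, s):
    # Closed form of the weight for Omega = (-1,1), n = 1: integrating
    # |x-y|^{-1-2s} over (1, inf) and (-inf, -1) gives
    # (1/(2s)) * ((1-x)^{-2s} + (1+x)^{-2s}).
    return ((1 - x) ** (-2 * s) + (1 + x) ** (-2 * s)) / (2 * s)

s = 0.75
x = np.linspace(-0.99, 0.99, 199)
delta = 1.0 - np.abs(x)                     # distance to the boundary
lower = delta ** (-2 * s) / (2 * s)         # explicit constant C = 1/(2s)
upper = 2.0 / (2 * s * delta ** (2 * s))    # sigma_0 = 2 in dimension one
ok = np.all(lower <= w_s(x, s)) and np.all(w_s(x, s) <= upper)
print(bool(ok))  # True
```

Both bounds follow from $\min(1-x, 1+x) = \delta(x)$, so each of the two terms is at most $\delta(x)^{-2s}$ and at least one of them equals it.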
\begin{proposition}[Hardy inequalities, see {\cite{Dyda, Grisvard}}] \label{prop:hardy}
Let $\Omega$ be a bounded Lipschitz domain, then there exists $c=c(\Omega,n,s)>0$ such that
\begin{equation} \label{eq:hardy}
\begin{split}
\int_\Omega \frac{|v(x)|^2}{\delta(x)^{2s}} \, dx \, & \leq c \| v \|_{H^s(\Omega)}^2 \ \forall \, v \in H^s(\Omega) \quad \text{if } 0<s < 1/2, \\
\int_\Omega \frac{|v(x)|^2}{\delta(x)^{2s}} \, dx \, & \leq c |v|_{H^s(\Omega)}^2 \ \forall \, v \in \widetilde H ^s(\Omega) \quad \text{if } 1/2 < s < 1.
\end{split}
\end{equation}
\end{proposition}
\begin{corollary} \label{cor:norma} If $0<s < 1/2$, then there exists a constant $c=c(\Omega,n,s)>0$ such that
\begin{equation*}
\| v \|_{H^s({\mathbb{R}^n})} \leq {c} \| v \|_{H^s(\Omega)} \quad \forall v \in \widetilde{H}^s(\Omega).
\end{equation*}
On the other hand, if $1/2<s<1$ there exists a constant ${c}={c}(\Omega,n,s)>0$ such that
\begin{equation*}
\| v \|_{H^s({\mathbb{R}^n})} \leq {c} | v |_{H^s({\Omega})} \quad \forall v \in \widetilde{H}^s(\Omega) .
\end{equation*}
\end{corollary}
\begin{remark} \label{remark:un_medio}
When $s=1/2$, since Hardy's inequality fails, it is not possible to bound the $H^{1/2}({\mathbb{R}^n})$-seminorm in terms of the $H^{1/2}(\Omega)$-norm for functions supported in $\overline \Omega$. However, for the purposes we pursue in this work, it suffices to notice that the estimate
\[
\|v \|_{H^{1/2}({\mathbb{R}^n})} \le C |v|_{H^{1/2+\varepsilon}(\Omega)}
\]
holds for all $v \in \widetilde {H}^{1/2+\varepsilon}(\Omega)$,
where $\varepsilon > 0$ is fixed.
\end{remark}
An important tool for our work is the extension operator given by the following lemma (see \cite[Theorem 5.4]{Hitchhikers} and \cite{zhou2015fractional}).
\begin{lemma}\label{extension}
Given $\sigma \ge 0$ and $\Omega$ a (not necessarily bounded) Lipschitz domain, there exists a continuous extension operator $E: H^\sigma(\Omega) \to H^\sigma({\mathbb{R}^n}).$ Namely, there is a constant $C(n,\sigma,\Omega)$ such that, for all $u \in H^\sigma(\Omega)$,
\[
\| Eu \|_{H^\sigma({\mathbb{R}^n})} \le C \| u \|_{H^\sigma(\Omega)}.
\]
\end{lemma}
\begin{remark}
\label{rem:ext_complemento}
In the next sections we need Lemma \ref{extension} for $\Omega^c$, although we prefer to state it in the more natural fashion, that is, in terms of $\Omega$ itself.
\end{remark}
\subsection{Fractional Laplacian and regularity of the Dirichlet homogenous problem}
The operator $(-\Delta)^s$ may be defined either by \eqref{eq:fourier} or by \eqref{eq:fraccionarioyo}. The latter is useful to cope with problems involving the operator in a variational framework, and therefore to perform finite element analysis of such problems. On the other hand, definition \eqref{eq:fourier} allows one to study the operator from the viewpoint of pseudodifferential calculus. The equivalence between these two definitions can be found, for example, in \cite{Hitchhikers}.
Using the definition \eqref{eq:fourier}, it is easy to prove the following.
\begin{proposition} \label{prop:order}
For any $s \in {\mathbb {R}}$, the operator $(-\Delta)^s$ is of order $2s$, that is, $(-\Delta)^s : H^\ell ({\mathbb{R}^n}) \to H^{\ell-2s} ({\mathbb{R}^n})$ is continuous for any $\ell \in {\mathbb {R}}$.
\end{proposition}
From the previous proposition, it might be expected that, given a bounded smooth domain $\Omega$, if $u \in \widetilde H^s(\Omega)$ satisfies $(-\Delta)^s u = f$ for some $f \in H^\ell (\Omega)$, then $u \in H^{\ell + 2s}(\Omega)$. However, this is not the case. Regularity of solutions of problems involving the fractional Laplacian over bounded domains is a delicate issue.
Indeed, consider for instance the homogeneous problem
\begin{equation} \label{eq:homogeneous}
\left\lbrace
\begin{array}{rl}
(-\Delta)^s u = f & \text{ in } \Omega, \\
u = 0 & \text{ in } \Omega^c.
\end{array}
\right.
\end{equation}
In \cite{Grubb}, regularity results for \eqref{eq:homogeneous} are stated in terms of
H\"ormander $\mu-$spaces. These mix the features of supported and restricted Sobolev spaces by means of combining certain pseudodifferential operators with zero-extensions and restriction operators. We refer to that work for a definition and further details.
In terms of standard Sobolev spaces, the results therein may be stated as follows (see also \cite{VishikEskin}).
\begin{proposition}\label{prop:regHr}
Let $f\in H^r(\Omega)$ for $r\geq -s$ and let $u\in \widetilde{H}^s(\Omega)$ be the solution of the Dirichlet problem
\eqref{eq:homogeneous}. Then, the following regularity estimate holds
$$
|u|_{H^{s+\alpha}({\mathbb{R}^n})} \leq C(n, \alpha) \|f\|_{H^r(\Omega)}.
$$
Here, $\alpha = s+r$ if $s+r < \frac12$ and $\alpha =
\frac12 - \varepsilon$ if $s+r \ge \frac12$, with $\varepsilon > 0$ arbitrarily small.
\end{proposition}
\begin{remark}
We emphasize that assuming further Sobolev regularity for the right hand side function $f$ does not imply that the solution $u$ will be any smoother than what is given by the previous proposition.
\end{remark}
\section{Statement of the problem}
Throughout the remaining sections of this work, we denote by $V$ the space $V = H^s({\mathbb{R}^n})$,
furnished with its usual norm. The domain $\Omega$ is assumed to be bounded and smooth, and therefore
(due to the latter condition) it is an extension domain for functions
in $H^s(\Omega)$ (and of course for functions in $H^s(\Omega^c)$). This fact is used in some parts of the presentation
without further comment.
Multiplying the first equation in \eqref{eq:dirichletnh} by a suitable test function $v$ and applying \eqref{eq:parts}, we obtain
\begin{equation}
\begin{split}
\frac{C(n,s)}{2} \iint_Q \frac{(u(x)-u(y))(v(x)-v(y))}{|x-y|^{n+2s}} \, dx \, dy & - \int_{\Omega^c} v(x) \, \mathcal{N}_s u(x) \, dx\\
& = \int_\Omega f(x) v(x) \, dx .
\end{split}
\label{eq:debil}
\end{equation}
In order to write a weak formulation for our problem, we assume $f\in \widetilde{H}^{-s}(\Omega)$ and $g \in H^s(\Omega^c)$,
and introduce the bilinear and linear forms $a\colon V\times V \to {\mathbb {R}}$, $F\colon V \to {\mathbb {R}}$,
\[
\begin{aligned}
a(u,v) & = \frac{C(n,s)}{2} \iint_Q \frac{(u(x)-u(y))(v(x)-v(y))}{|x-y|^{n+2s}} \, dx \, dy , \\
F(u) & = \int_{\Omega} f(x) u(x) \, dx ,
\end{aligned} \]
which are needed in the sequel.
\begin{remark} \label{remark:form_a}
The form $a$ satisfies the identity
$$a(u,v) = \frac{C(n,s)}{2} \left( \langle u,v \rangle_{H^s({\mathbb{R}^n})} - \langle u,v \rangle_{H^s(\Omega^c)} \right) \ \forall u, v \in V.$$
This, in turn, implies the continuity of $a$ in $V$, that is
$$|a(u,v)| \le C(n,s) |u|_{H^s({\mathbb{R}^n})} |v|_{H^s({\mathbb{R}^n})},$$
and the fact that over the set $\widetilde{H}^s(\Omega)$, $a(v,v)$ coincides with $ \frac{C(n,s)}{2}| v |_{H^s({\mathbb{R}^n})}^2.$
\end{remark}
\subsection{Direct formulation} \label{ss:direct}
Our first approach is based on the strong imposition of the Dirichlet condition.
From \eqref{eq:debil} we obtain at once the weak formulation: find
$u\in V_g$ such that
\begin{equation}
\label{eq:cont_direct}
a(u,v)=F(v) \quad \forall v\in \widetilde H^s(\Omega),
\end{equation}
where $V_g=\{w\in V \colon w=g \mbox{ in } \Omega^c\}$.
The treatment of this formulation is standard. Since $\Omega^c$ is an extension domain, we
may find $g^E\in V_g$, $g^E:=E(g)$, such that $\|g^E\|_{V}\le C \|g\|_{H^s(\Omega^c)}$,
with $C$ depending on $\Omega$. Using that $a(u,v)$ is continuous and coercive in
$\widetilde H^s(\Omega)$ (see Remark \ref{remark:form_a}), existence and uniqueness of a solution
$u_0\in \widetilde H^s(\Omega)$ of the problem
\[
a(u_0,v)=F(v)-a(g^E,v) \quad \forall v\in \widetilde H^s(\Omega),
\]
is guaranteed, thanks to the continuity of the right hand side. Setting $u:=u_0+g^E$, we deduce the following.
\begin{proposition} \label{prop:well_posedness_direct}
Problem \eqref{eq:cont_direct} admits a unique solution $u \in V_g$, and there exists $C>0$ such that the bound
\[
\| u \|_V \le C \left( \| f \|_{\widetilde{H}^{-s}(\Omega)} + \| g \|_{H^s(\Omega^c)} \right)
\]
is satisfied.
\end{proposition}
\subsection{Mixed formulation}
The idea behind this formulation dates back to
Babu\v ska's seminal paper \cite{Babuska}.
We define the space $\Lambda = (H^s(\Omega^c))' = \widetilde{H}^{-s}(\Omega^c)$, furnished with its usual norm,
and introduce the bilinear and linear forms $b\colon V\times \Lambda \to {\mathbb {R}}$, $G\colon \Lambda \to {\mathbb {R}}$,
\begin{equation*}
b(u, \mu) = \int_{\Omega^c} u(x) \, \mu(x) \, dx ,
\end{equation*}
and
\begin{align*}
\ G(\lambda) = \int_{\Omega^c} g(x) \, \lambda(x) \, dx,
\end{align*}
which are obviously continuous.
The mixed formulation of \eqref{eq:dirichletnh} reads: find $(u,\lambda) \in V\times\Lambda$ such that
\begin{equation}\label{eq:cont}
\begin{split}
a(u,v) - b(v,\lambda) & = F(v) \quad \forall v \in V , \\
b(u,\mu) & = G(\mu) \quad \forall \mu \in \Lambda .
\end{split}
\end{equation}
\begin{remark} \label{rem:lambda}
As can be seen from the above considerations, the Lagrange multiplier $\lambda$, which is associated with the constraint $u = g $ in $\Omega^c$,
coincides with the nonlocal derivative $\mathcal{N}_s u$ on that set. In order to simplify the notation, in the following we refer to it as $\lambda$.
\end{remark}
Notice that the kernel of the bilinear form $b$ agrees with $\omegaidetilde{H}^s(\Omega)$,
that is,
\begin{equation} \label{eq:kernel}
K = \{ v \in V \colon b(v,\mu) = 0 \ \forall \mu \in \Lambda \} = \widetilde{H}^s(\Omega).
\end{equation}
Recalling Remarks \ref{rem:poincare} and \ref{remark:form_a}, it follows that
\begin{equation}
\| v \|_V^2 \le C | v |_{H^s({\mathbb{R}^n})}^2 =
C a(v,v) \quad \forall v \in K.
\label{eq:ellipticity}
\end{equation}
We are now in a position to prove the inf-sup condition for the form $b$.
\begin{lemma}
For all $\mu \in \Lambda$, it holds that
\begin{equation}\label{eq:infsup}
\sup_{u \in V} \frac{b(u,\mu)}{\| u \|_V} \ge \frac1C \| \mu \|_{\Lambda},
\end{equation}
where $C> 0$ is the constant from Lemma \ref{extension}.
\end{lemma}
\begin{proof}
Let $\mu \in \Lambda$. Recalling that $\Lambda = (H^s(\Omega^c))'$ and taking into account the extension operator given by Lemma \ref{extension}, we have
\[
\| \mu \|_\Lambda = \sup_{v\in H^s(\Omega^c)} \frac{b(v, \mu)}{\| v \|_{H^s(\Omega^c)}} \le
C \sup_{v\in H^s(\Omega^c)} \frac{b(Ev, \mu)}{\| Ev \|_{V}} \le
C \sup_{u \in V} \frac{b(u, \mu)}{\| u \|_{V}} .
\]
\end{proof}
Due to the ellipticity of $a$ on the kernel of $b$ \eqref{eq:ellipticity} and the inf-sup condition \eqref{eq:infsup}, we deduce the well-posedness of the continuous problem by means of the Babu\v{s}ka-Brezzi theory \cite{BoffiBrezziFortin}.
\begin{proposition} \label{prop:well_posedness}
Problem \eqref{eq:cont} admits a unique solution $(u,\lambda) \in V\times\Lambda$, and there exists $C>0$ such that the bound
\[
\| u \|_V + \| \lambda \|_\Lambda \le C \left( \| f \|_{\widetilde{H}^{-s}(\Omega)} + \| g \|_{H^s(\Omega^c)} \right)
\]
is satisfied.
\end{proposition}
\begin{remark}
\label{rem:igualesu}
Considering test functions $v\in K$, the first equation of \eqref{eq:cont} implies that $u$ solves \eqref{eq:cont_direct}, while the second equation of \eqref{eq:cont} enforces the condition $u\in V_g$.
\end{remark}
\section{Regularity of solutions} \label{sec:regularity}
Since the maximum gain of regularity for solutions of the homogeneous problem is ``almost'' half a derivative, from this point on we assume
$f \in H^{1/2-s}(\Omega)$. Moreover, we require the Dirichlet condition $g$ to belong to $H^{s+1/2}(\Omega^c)$.
As described in \S\ref{ss:direct}, we consider an extension $g^E \in {H^{s+1/2}({\mathbb{R}^n})}$
and consider the homogeneous problem
\eqref{eq:homogeneous} with right hand side function equal to $f - (-\Delta)^s g^E$:
\begin{equation*}
\left\lbrace
\begin{array}{rll}
(-\Delta)^s u_0 & = f - (-\Delta)^s g^E & \text{in } \Omega, \\
{u_0} & = 0 & \text{in } \Omega^c.
\end{array}
\right.
\end{equation*}
Due to Proposition \ref{prop:order}, it follows that $(-\Delta)^s g^E \in H^{{1/2-s}}({\mathbb{R}^n})$, with
\[
\| (-\Delta)^s g^E \|_{ H^{{1/2-s}}({\mathbb{R}^n})} \le C \| g^E \|_{ H^{{s+1/2}}({\mathbb{R}^n})} \le C \| g \|_{ H^{{s+1/2}}(\Omega^c)},
\]
so that the right hand side function $f - (-\Delta)^s g^E$ belongs to $H^{{1/2-s}}(\Omega)$. Applying Proposition \ref{prop:regHr} (see also \cite{Grubb, VishikEskin}), we obtain that the solution satisfies $u_0\in \widetilde H^{s+1/2-\varepsilon}(\Omega)$ for every $\varepsilon>0$, with
\[
\| u_0 \|_{H^{s+1/2-\varepsilon}({\mathbb{R}^n})} \le C{(\varepsilon)} \left( \| f \|_{H^{{1/2-s}}(\Omega)} + \| (-\Delta)^s g^E \|_{H^{{1/2-s}}(\Omega)} \right).
\]
Moreover, as the solution of \eqref{eq:dirichletnh} is given by $u = u_0 + g^E$, we deduce that $u \in H^{s+1/2-\varepsilon}({\mathbb{R}^n})$, and
\begin{equation}
\| u \|_{H^{s+1/2-\varepsilon}({\mathbb{R}^n})} \le C{(\varepsilon)} \left( \| f \|_{H^{{1/2-s}}(\Omega)} + \| g \|_{H^{{s+1/2}}(\Omega^c)} \right).
\label{eq:reg_u}
\end{equation}
We have proved the regularity of solutions of \eqref{eq:dirichletnh}.
\begin{theorem} \label{teo:regularidadDirect} Let $f \in H^{{1/2-s}}(\Omega)$ and let $g \in H^{{s+1/2}}(\Omega^c)$.
Let $u \in H^s({\mathbb{R}^n})$ be the solution of \eqref{eq:dirichletnh}. Then, for all $\varepsilon>0,$ $u \in H^{s+1/2-\varepsilon}({\mathbb{R}^n})$ and
there exists $C{=C(\varepsilon)}>0$ such that
\begin{equation*}
\| u \|_{H^{s+1/2-\varepsilon}({\mathbb{R}^n})} \le \\
C \left( \| f \|_{H^{{1/2-s}}(\Omega)} + \| g \|_{H^{{s+1/2}}(\Omega^c)} \right).
\end{equation*}
\end{theorem}
Regularity of the nonlocal normal derivative of the solution is deduced under an additional compatibility hypothesis on the Dirichlet condition. Namely, we assume that $(-\Delta)^s_{\Omega^c} g \in H^{{1/2-s}}(\Omega^c)$, where $(-\Delta)^s_{\Omega^c}$ denotes the regional fractional Laplacian operator \eqref{eq:regional} in $\Omega^c$.
\begin{theorem} \label{teo:regularidad} Assume the hypotheses of Theorem \ref{teo:regularidadDirect}, and in addition let $g$ be such that $(-\Delta)^s_{\Omega^c} g \in H^{{1/2-s}}(\Omega^c)$. Then, for all $\varepsilon>0,$ $u \in H^{s+1/2-\varepsilon}({\mathbb{R}^n})$, and its nonlocal normal derivative $\lambda \in H^{-s+1/2-\varepsilon}(\Omega^c)$. Moreover, there exists $C{=C(\varepsilon)}>0$ such that
\begin{equation*} \begin{split}
\| u \|_{H^{s+1/2-\varepsilon}({\mathbb{R}^n})} & + \| \lambda \|_{H^{-s+1/2-\varepsilon}(\Omega^c)} \le
C \, \Sigma_{f,g},
\end{split} \end{equation*}
where
\begin{equation} \label{eq:def_sigma}
\Sigma_{f,g} = \| f \|_{H^{{1/2-s}}(\Omega)} + \| g \|_{H^{{s+1/2}}(\Omega^c)} + \| (-\Delta)_{\Omega^c}^s g \|_{H^{{1/2-s}}(\Omega^c)}.
\end{equation}
\end{theorem}
\begin{proof}
We only need to prove that $\lambda \in {H}^{-s+1/2-\varepsilon}(\Omega^c)$.
Let $v \in \widetilde{H}^{s-1/2+\varepsilon}(\Omega^c)$. Since $\lambda = (-\Delta)^s u - (-\Delta)_{\Omega^c}^s g$ in $\Omega^c$, we write
\[
\left| \int_{\Omega^c} \lambda v \right| \le \left( \| (-\Delta)^s u \|_{H^{-s+1/2-\varepsilon}(\Omega^c)} + \| (-\Delta)_{\Omega^c}^s g \|_{H^{-s+1/2-\varepsilon}(\Omega^c)} \right) \| v \|_{\widetilde H^{s-1/2+\varepsilon}(\Omega^c)} .
\]
Using Proposition \ref{prop:order}, we deduce
$$
\left| \int_{\Omega^c} \lambda v \right| \le C \left( \| u \|_{H^{s+1/2-\varepsilon}({\mathbb{R}^n})} + \| (-\Delta)_{\Omega^c}^s g \|_{H^{-s+1/2-\varepsilon}(\Omega^c)} \right) \| v \|_{\widetilde H^{s-1/2+\varepsilon}(\Omega^c)}
$$
and taking supremum in $v$ we conclude that $\lambda \in H^{-s+1/2-\varepsilon}(\Omega^c)$,
with
\begin{equation*} \begin{split}
\| \lambda & \|_{H^{-s+1/2-\varepsilon}(\Omega^c)} \le
C \, \Sigma_{f,g},
\end{split} \end{equation*}
where we have used \eqref{eq:reg_u} in the last inequality and the notation \eqref{eq:def_sigma}.
\end{proof}
\begin{remark}
In view of Proposition \ref{prop:order}, it might seem that for every $\ell \in {\mathbb {R}}$ and $g \in H^\ell(\Omega^c)$ it holds that $(-\Delta)_{\Omega^c}^s g \in H^{\ell-2s}(\Omega^c)$, which in turn would imply that the hypothesis $(-\Delta)^s_{\Omega^c} g \in H^{{1/2-s}}(\Omega^c)$ is superfluous. However, we have been able neither to prove nor to disprove this claim. As an illustration of the type of additional hypotheses used to ensure this behavior of the regional fractional Laplacian, we refer the reader to \cite[Lemma 5.6]{warma2015}.
\end{remark}
Naturally, the homogeneous case $g\equiv0$ satisfies the assumptions of Theorem \ref{teo:regularidad}.
\begin{corollary} \label{cor:regularidad}
Let $\Omega$ be a smooth domain and $f \in H^{{1/2-s}}(\Omega)$. Let $u \in \widetilde H^s(\Omega)$ be the solution of \eqref{eq:homogeneous}
and $\lambda$ be its nonlocal normal derivative. Then, for all $\varepsilon>0,$ it holds that $\lambda \in H^{-s+1/2-\varepsilon}(\Omega^c)$ and
\begin{equation*}
\| \lambda \|_{H^{-s+1/2-\varepsilon}(\Omega^c)} \le C(n,s,\Omega,\varepsilon) \| f \|_{H^{{1/2-s}}(\Omega)} .
\end{equation*}
\end{corollary}
\begin{remark} \label{rem:singular} We illustrate the sharpness of the regularity estimate for the nonlocal derivative from Theorem \ref{teo:regularidad} (or from Corollary \ref{cor:regularidad}) with the following simple example. Let $\Omega = (-1,1)$ and consider the problem
\[
\left \lbrace
\begin{array}{rl}
(-\Delta)^s u = 1 & \text{ in } (-1,1), \\
u = 0 & \text{ in } {\mathbb {R}}\setminus(-1,1),
\end{array}
\right.
\]
whose solution is given by $u(x) = c(s) (1 - x^2)_+^s$ for some constant $c(s) > 0$ (see, for example, \cite{Getoor}).
We focus on the behavior of $\mathcal{N}_s u$ near the boundary of $\Omega$; for instance, let $x \in (1,2)$. Basic manipulations allow one to derive the bound
\[
\left| \mathcal{N}_s u (x) \right| > \frac{C(s)}{(x-1)^s}.
\]
Next, given $\alpha \in (0,1)$, observe that $(x-1)^\alpha \in H^\ell(1,2)$ if and only if $\ell < \alpha + 1/2.$ Thus, by duality, we conclude that $\mathcal N_su \notin H^{{1/2-s}}(1,2)$.
The reduced regularity of the nonlocal normal derivative near the boundary is not an exception but rather what should be expected in general. Indeed, following \cite{ABBM}, let $f:(-1,1)\to{\mathbb {R}}$ be a function such that its coefficients $f_j$ (in the expansion with respect to the basis of Gegenbauer polynomials $\left\{ C^{(s+1/2)}_j \right\}$) satisfy either
\[\sum_{j=0}^\infty \frac{f_j \, j !}{\Gamma(2s+j+1)} \, C_j^{(s+1/2)}(-1) \neq 0 \,
\text{ or } \sum_{j=0}^\infty \frac{f_j \, j !}{\Gamma(2s+j+1)} \, C_j^{(s+1/2)}(1) \neq 0.
\]
Then, the solution to \eqref{eq:homogeneous} is given by $u(x) = (1-x^2)_+^s \phi(x)$, where $\phi$ is a smooth function that does not vanish as $|x| \to 1$ (cf. \cite[Theorem 3.14]{ABBM}). Therefore, the same argument as above applies: the nonlocal derivative of the solution of the homogeneous Dirichlet problem belongs to $H^{-s+1/2-\varepsilon}({\mathbb {R}} \setminus (-1,1))$, and the $\varepsilon > 0$ cannot be removed.
We remark that in the limit $s\to 1$ the nonlocal normal derivatives concentrate mass towards the boundary of the domain, so that \cite{dipierro2014nonlocal}
\[
\lim_{s\to 1} \int_{\Omega^c} \mathcal{N}_s u \, v = \int_{\partial\Omega} \frac{\partial u}{\partial n} \, v \quad \forall u,v \in C^2_0({\mathbb{R}^n}).
\]
This identity also illustrates the singular behavior of $\mathcal{N}_s u$ near the boundary of $\Omega$.
\end{remark}
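The boundary blow-up $|\mathcal{N}_s u(x)| \gtrsim (x-1)^{-s}$ in the example above can also be observed numerically. Since $u$ vanishes outside $(-1,1)$, one has $\mathcal{N}_s u(1+t) = -C(1,s)\, c(s) \int_{-1}^1 (1-y^2)^s\, (1+t-y)^{-1-2s}\, dy$; dropping the constants, the sketch below estimates the growth exponent of this integral as $t \to 0^+$ by a log-log fit (the midpoint rule and the sample points $t$ are ad hoc choices).

```python
import numpy as np

def nonlocal_derivative_mag(t, s, N=200_000):
    """|N_s u(1+t)| up to multiplicative constants, for u(y) = (1-y^2)_+^s.

    Since u vanishes outside (-1,1), N_s u(x) is -C times the integral of
    u(y)|x-y|^{-1-2s}; a plain midpoint rule is used (a sketch only).
    """
    h = 2.0 / N
    y = -1.0 + (np.arange(N) + 0.5) * h
    return np.sum((1 - y ** 2) ** s / (1 + t - y) ** (1 + 2 * s)) * h

s = 0.5
ts = np.array([1e-3, 2e-3, 4e-3, 8e-3])
vals = np.array([nonlocal_derivative_mag(t, s) for t in ts])
slope = np.polyfit(np.log(ts), np.log(vals), 1)[0]
print(slope)  # close to -s = -0.5
```

The fitted exponent approaches $-s$ as $t \to 0^+$, consistent with the fact that $\mathcal N_s u \notin H^{1/2-s}$ near $\partial\Omega$.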
\section{Finite Element approximations} \label{sec:fe_approximations}
In this section we begin the study of finite element approximations to problem \eqref{eq:cont}. Here we assume the Dirichlet datum $g$ to have bounded support.
This assumption simplifies the error analysis of the numerical method we propose in this work, but it is not necessary. In the next section, estimates for data not satisfying such a hypothesis are deduced.
\subsection{Finite element spaces}
Given ${H>1}$ big enough, we denote by $\Omega_H$ a domain containing $\Omega$ and such that
\begin{equation}
c H \le \min_{x\in \partial \Omega, \, y \in \partial \Omega_H} d(x,y) \le \max_{x\in \partial \Omega, \, y \in \partial \Omega_H} d(x,y) \le C H,
\label{eq:def_OmegaH}
\end{equation}
where $c, C$ are constants independent of $H$.
We consider conforming simplicial meshes on $\Omega$ and $\Omega_H \setminus \Omega$, in such a way that the resulting partition of $\Omega_H$ remains admissible. Moreover, to simplify our analysis, we assume the family of meshes to be globally quasi-uniform.
\begin{remark}
The parameter $H$ depends on the mesh size $h$ in such a way that, as $h$ goes to $0$, $H$ tends to infinity.
The purpose of $\Omega_H$ is twofold: first, to provide a domain on which to implement the finite element approximations;
second, to allow the behavior of solutions to be controlled in the complement of $\Omega_H$.
Assuming $g$ to have bounded support implies that, for $h$ small enough, the domain $\Omega_H$ contains the support of the Dirichlet datum $g$.
Moreover, since there is no reason to expect $\lambda$ to be compactly supported, taking $H$ to depend suitably on $h$ ensures that
the decay of the nonlocal derivative in $\Omega^c_H$ is of the same order as the approximation error of $u$ and $\lambda$ within $\Omega_H$.
\end{remark}
We consider nodal basis functions
\[
\varphi_1, \ldots, \varphi_{N_{int}}, \varphi_{N_{int}+1}, \ldots , \varphi_{N_{int}+N_{ext}} ,
\]
where the first $N_{int}$ nodes belong to the interior of $\Omega$ and the last $N_{ext}$ to ${\Omega_H \setminus \Omega}$.
The discrete spaces we consider consist of continuous, piecewise linear functions:
\begin{align*}
& V_h = \text{span } \{\varphi_1, \ldots, \varphi_{N_{int}+N_{ext}} \}, \\
& K_h = \text{span } \{\varphi_1, \ldots, \varphi_{N_{int}} \}, \\
& \Lambda_h = \text{span } \{ \varphi_{N_{int}+1}, \ldots, \varphi_{N_{int}+N_{ext}} \}.
\end{align*}
The spaces $V_h$ and $\Lambda_h$ are endowed with the $\| \cdot \|_V$ and $\| \cdot \|_\Lambda$ norms, respectively.
We set the discrete functions to vanish on $\partial \Omega_H$, so that $V_h \subset \widetilde{H}^{3/2-\varepsilon}(\Omega_H)$ for all $\varepsilon > 0$.
\subsection{The mixed formulation with a Lagrangian multiplier}
The discrete problem reads: find $(u_h, \lambda_h) \in V_h \times \Lambda_h$ such that
\begin{equation}\label{eq:discrete}
\begin{split}
a(u_h, v_h) - b(v_h, \lambda_h) = F(v_h) \ & \forall v_h \in V_h, \\
b(u_h, \mu_h) = G(\mu_h) \ & \forall \mu_h \in \Lambda_h .
\end{split}
\end{equation}
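In matrix form, \eqref{eq:discrete} is a saddle-point system: with $A_{ij} = a(\varphi_j, \varphi_i)$, $B_{kj} = b(\varphi_j, \varphi_{N_{int}+k})$ and load vectors $F$, $G$, one solves for the coefficient vectors $(U, \Lambda)$. The snippet below only illustrates this algebraic structure with placeholder matrices (assembling the true entries requires the singular-kernel quadratures discussed in \cite{AB}); a symmetric positive definite $A$ and a full-rank $B$ stand in for the actual bilinear forms.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 8, 3                      # toy dimensions of V_h and Lambda_h

# Placeholder stiffness matrix: symmetric positive definite, standing in
# for A_ij = a(phi_j, phi_i); real entries need singular quadratures.
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)
B = rng.standard_normal((m, n))  # stands in for B_kj = b(phi_j, phi_{n+k})
F = rng.standard_normal(n)
G = rng.standard_normal(m)

# Saddle-point system:  A U - B^T Lambda = F,   B U = G.
K = np.block([[A, -B.T], [B, np.zeros((m, m))]])
sol = np.linalg.solve(K, np.concatenate([F, G]))
U, Lam = sol[:n], sol[n:]
print(np.allclose(A @ U - B.T @ Lam, F), np.allclose(B @ U, G))  # True True
```

The block system is solvable because $A$ is positive definite on the discrete kernel and $B$ has full rank, mirroring the coercivity and inf-sup conditions verified below.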
Notice that the space $K_h$ coincides with the discrete kernel $\{ v_h \in V_h \colon b(v_h,\mu_h) = 0 \ \forall \mu_h \in \Lambda_h \}$,
and consists of piecewise linear functions over the triangulation of $\Omega$ that vanish on $\partial \Omega$.
To verify the well-posedness of the discrete problem \eqref{eq:discrete}, we need to show that the bilinear form $a$ is coercive on $K_h$ and that the discrete inf-sup condition for the bilinear form $b$ holds.
\begin{lemma} There exists a constant $C > 0$, independent of $h$ {and $H$}, such that for all $v_h \in K_h$,
\begin{equation} \label{eq:disc_coercivity}
a(v_h, v_h) \ge C \| v_h \|_{V}^2 .
\end{equation}
\end{lemma}
\begin{proof}
Observe that $K_h$ is a subspace of the continuous kernel $K$ given by \eqref{eq:kernel}.
The lemma follows by the coercivity of $a$ on $K$.
\end{proof}
In order to prove the discrete inf-sup condition, we utilize a projection onto the discrete space. Since $V_h \subset \widetilde H^{3/2-\varepsilon}(\Omega_H)$ for all $\varepsilon > 0$, it is possible to define the $L^2$-projection of functions in the dual space of ${\widetilde H^{3/2-\varepsilon}}(\Omega_H).$
Namely, we consider $P_h : H^{-\sigma}(\Omega_H) \to V_h$ for $0\le \sigma\le1$, the operator characterized by
\[
\int_{\Omega_H} (w - P_h w) \, v_h = 0 \quad \forall v_h \in V_h .
\]
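In one dimension this characterization reduces to a tridiagonal mass-matrix solve. The sketch below (uniform mesh on $(0,1)$, with the boundary conditions on $\partial\Omega_H$ ignored for simplicity) assembles the $P_1$ mass matrix and right-hand side, and checks the defining property that $P_h$ reproduces functions already in the discrete space.

```python
import numpy as np

def l2_projection_p1(w, N=16):
    """L2-projection onto continuous P1 functions on a uniform mesh of (0,1).

    Solves the mass-matrix system M c = b, with b_i = int w phi_i computed
    by Simpson's rule on each element (exact here since w*phi is quadratic).
    Boundary conditions are ignored in this sketch.
    """
    h = 1.0 / N
    x = np.linspace(0.0, 1.0, N + 1)
    # Assemble the tridiagonal P1 mass matrix from element contributions.
    M = np.zeros((N + 1, N + 1))
    for e in range(N):
        M[e:e+2, e:e+2] += (h / 6.0) * np.array([[2.0, 1.0], [1.0, 2.0]])
    # Right-hand side b_i = int w(x) phi_i(x) dx, per-element Simpson rule;
    # the hat function values at (left, midpoint, right) are (1, 1/2, 0)
    # for the left node and (0, 1/2, 1) for the right node.
    b = np.zeros(N + 1)
    for e in range(N):
        xl, xr = x[e], x[e + 1]
        xm = 0.5 * (xl + xr)
        b[e]     += (h / 6.0) * (w(xl) + 4.0 * w(xm) * 0.5)
        b[e + 1] += (h / 6.0) * (4.0 * w(xm) * 0.5 + w(xr))
    return x, np.linalg.solve(M, b)

x, c = l2_projection_p1(lambda t: t)
print(np.max(np.abs(c - x)))  # w(x) = x is already P1, so P_h w = w
```

Since $w(x) = x$ lies in the $P_1$ space and all integrals are computed exactly, the nodal values of $P_h w$ agree with $w$ up to roundoff, confirming the orthogonality characterization.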
The following property will be useful in the sequel.
\begin{lemma} \label{projection}
Let $0<\sigma<1$, and assume the family of meshes to be quasi-uniform. Then, there exists a constant $C$, independent of $h$ {and $H$}, such that
\[
\| P_h w \|_{ H^\sigma(\Omega_H)} \le C \| w \|_{H^\sigma(\Omega_H)}
\]
for all $w \in H^\sigma(\Omega_H)$.
\end{lemma}
\begin{proof}
The proof follows by interpolation. On the one hand, the $L^2$-stability estimate
\[
\| P_h w \|_{L^2(\Omega_H)} \le \| w \|_{L^2(\Omega_H)}
\]
is obvious.
On the other hand, the $H^1$ bound
\begin{equation}
\label{eq:estH1}
\| P_h w \|_{H^1(\Omega_H)} \le C \| w \|_{H^1(\Omega_H)}
\end{equation}
is a consequence of a global inverse inequality (see, for example, \cite{Auricchio}).
\new{Because $P_h$ commutes with dilations, a scaling argument allows us to show that $C$ can be taken independent of $H$. Indeed, we can assume --after a translation, if needed-- that $\Omega_H$ is a ball $B_R$ of radius $R\ge 1$ centered at the origin.
Denote by $\hat{P}_h$ the $L^2$-projections over meshes in $B_1$. Then, for every $\hat w\in H^1(B_1)$ and every quasi-uniform mesh it holds that
\[
\| \hat{P}_h \hat w \|_{H^1(B_1)} \le C_1 \| \hat w \|_{H^1(B_1)} ,
\]
where $C_1$ is a \emph{fixed} constant.
Next, define $T \colon B_1\to B_R$ by $T(\hat x)=R\hat x$, and, for each $ w\in H^1(B_R)$, the function $ w \circ T=\hat w\in H^1(B_1)$.
Every quasi-uniform mesh ${\mathcal {T}}$ on $B_R$ with mesh size $h$ is in correspondence with a quasi-uniform mesh on $B_1$ with mesh size $\frac{h}{R}$ through the obvious identification ${\mathcal {T}} = T (\hat {\mathcal {T}})$. For these meshes we have the identity $\widehat{P_h w}=\hat{P}_{\frac{h}{R}} \hat{w}$ and hence, changing variables,
\[
\| \nabla P_h w\|_{L^2(B_R)}= R^{\frac{n}{2}-1}
\| \nabla \hat{P}_{\frac{h}{R}} \hat w\|_{L^2(B_1)}\le
C_1 R^{\frac{n}{2}-1} \left( \| \nabla \hat w\|_{L^2(B_1)} + \|\hat w\|_{L^2(B_1)} \right).
\]
Therefore,
\[
\| \nabla P_h w\|_{L^2(B_R)}\le C_1
\left( \| \nabla w\|_{L^2(B_R)} +\frac{1}{R} \| w\|_{L^2(B_R)} \right),
\]
and then
\[
\| \nabla P_h w\|_{L^2(B_R)}\le 2C_1 \| w \|_{H^1(B_R)}.
\]
Since bounds for $ \| P_h w\|_{L^2(B_R)}$ are immediate, \eqref{eq:estH1} follows.}
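For clarity, we spell out the interpolation step: $H^\sigma(\Omega_H)$ coincides, with equivalent norms, with the real interpolation space $[L^2(\Omega_H), H^1(\Omega_H)]_{\sigma,2}$, where (we assume, arguing by scaling as above) the equivalence constants can be taken independent of $H$. The exact interpolation theorem for linear operators then yields
\[
\| P_h \|_{\mathcal{L}(H^\sigma(\Omega_H))} \le \| P_h \|_{\mathcal{L}(L^2(\Omega_H))}^{1-\sigma} \, \| P_h \|_{\mathcal{L}(H^1(\Omega_H))}^{\sigma} \le C^{\sigma} ,
\]
which is the asserted bound.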
\end{proof}
\begin{remark}
The global quasi-uniformity hypothesis could actually be weakened and replaced by the assumptions from \cite{BramblePasciakSteinbach, Carstensen, CrouzeixThomee}. In these works, meshes are only required to be locally quasi-uniform, but some extra control on the change in measure between neighboring elements is needed as well.
\end{remark}
Stability estimates in negative-order norms are obtained by duality.
\begin{lemma}
Let $0 \le \sigma \le 1$, and assume the family of meshes to be quasi-uniform. Then, there is a constant $C$, independent of $h$ and $H$, such that
\begin{equation*}
\| P_h w \|_{\widetilde H^{-\sigma}(\Omega_H)} \le C \| w \|_{\widetilde H^{-\sigma}(\Omega_H)}
\end{equation*}
for all $w \in \widetilde H^{-\sigma}(\Omega_H)$.
\end{lemma}
\begin{proof}
Consider $v \in H^\sigma(\Omega_H)$. We have
\[
\int_{\Omega_H} P_hw \, v \, = \new{\int_{\Omega_H} P_h w \, P_h v} \,=\int_{\Omega_H} w \, P_hv \, \le \| w \|_{\widetilde H^{-\sigma}(\Omega_H)} \| P_h v \|_{ H^{\sigma}(\Omega_H)} .
\]
The proof follows by the $ H^\sigma$-stability of $P_h$.
\end{proof}
\begin{remark}
For simplicity, the previous lemma was stated for functions defined in $\Omega_H$, but clearly it is also valid over $\Omega_H \setminus \Omega$:
\begin{equation} \label{eq:estab_neg}
\| P_h w \|_{\widetilde H^{-\sigma}(\Omega_H\setminus\Omega)} \le C \| w \|_{\widetilde H^{-\sigma}(\Omega_H\setminus\Omega)} \quad \forall w \in \widetilde H^{-\sigma}(\Omega_H\setminus\Omega).
\end{equation}
\new{For the sake of completeness, since the scaling argument does not carry over straightforwardly, we sketch a proof of the stability estimate
$$\| P_h w \|_{ H^\sigma(\Omega_H\setminus\Omega)} \le C \| w \|_{H^\sigma(\Omega_H\setminus\Omega)}.$$ As in the proof of Lemma \ref{projection}, it suffices to show
\begin{equation}
\label{eq:estH1diff}
\| P_h w \|_{H^1(\Omega_H\setminus \Omega)} \le C \| w \|_{H^1(\Omega_H\setminus \Omega)}
\end{equation}
with a fixed $C$, and then conclude by interpolation with the $L^2$ estimate.}
\new{ Consider a smooth truncation function $0\le \psi \le 1$, such that $\psi=1$ in $\Omega_1:=\{x\in {\mathbb {R}}^n \colon d(x,\Omega)<1\}$.
Assume the support of $\psi$ is contained in a fixed open ball $B_r$ with radius $r$.
Thus, for $H$ large enough (namely, for $h$ small enough), $B_r\subset \Omega_H$.
Given $w\in H^1(\Omega_H\setminus \Omega)$, we write $w=w\psi + w(1-\psi)$ and therefore
we just need to bound
\[
\| P_h (w \psi) \|_{H^1(\Omega_H\setminus \Omega)} \ \mbox{ and } \ \| P_h [w(1- \psi)] \|_{H^1(\Omega_H\setminus \Omega)}.
\] }
\new{ Because $r$ is fixed, if $h$ is small enough the former norm coincides with the norm over $B_r\setminus \Omega$, since
$B_r$ is open and contains the support of $\psi$. Moreover, since $\psi$ is smooth, we bound
\[
\begin{aligned}
\| P_h (w \psi) \|_{H^1(\Omega_H\setminus \Omega)} & = \| P_h (w \psi) \|_{H^1(B_r\setminus \Omega)} \le C(r,\Omega) \| w \psi \|_{H^1(B_r\setminus \Omega)} \\
& \le C(r, \Omega, \psi) \| w \|_{H^1(\Omega_H\setminus \Omega)}.
\end{aligned}
\]
On the other hand, considering a zero-extension within $\Omega$ and using \eqref{eq:estH1} and the smoothness of $\psi$ we deduce
\[
\begin{aligned}
\| P_h [w(1- \psi)] \|_{H^1(\Omega_H\setminus \Omega)} & = \| P_h [w(1- \psi)] \|_{H^1(\Omega_H)} \le C\| w(1- \psi) \|_{H^1(\Omega_H)} \\
& \le C \| w(1- \psi) \|_{H^1(\Omega_H\setminus \Omega)} \le C\| w \|_{H^1(\Omega_H\setminus \Omega)},
\end{aligned}
\]
with a final constant $C$ depending only on $r$ and $\psi$.
From these estimates, \eqref{eq:estH1diff} follows immediately, and in consequence, we obtain the bound \eqref{eq:estab_neg}. }
\end{remark}
\begin{proposition}
Let $s \ne \frac12$. Then, there exists a constant $C$, independent of $h$ {and $H$}, such that the following discrete inf-sup condition holds:
\begin{equation}
\sup_{v_h \in V_h} \frac{b(v_h, \mu_h)}{\| v_h \|_{V}} \ge
C \| \mu_h \|_\Lambda \quad \forall \mu_h \in \Lambda_h.
\label{eq:infsup_disc}
\end{equation}
\end{proposition}
\begin{proof}
First, let $E$ be the extension operator given by Lemma \ref{extension}
{(replacing $\Omega$ with $\Omega^c$ there)} and let $P_h$ be the $L^2$-projection considered
in this section. For simplicity of notation, for $v \in H^s(\Omega^c)$ we write $P_h(Ev) = P_h \left((Ev) \big|_{\Omega_H} \right).$ Taking into account
the fact that $P_h(Ev) \in \widetilde{H}^s(\Omega_H)$ and the continuity of these operators, it is clear that
\[
\| P_h (Ev) \|_V = \| P_h (Ev) \|_{\widetilde H^s(\Omega_H)} \le C \| v \|_{H^s(\Omega^c)} \quad \forall v \in H^s(\Omega^c),
\]
which in turn allows us to use $v \mapsto P_h(Ev)$ as a Fortin operator.
Indeed, let $\mu_h \in \Lambda_h$, $v \in H^s(\Omega^c)$ and write
\[ \begin{aligned}
\sup_{v_h \in V_h} \frac{b(v_h, \mu_h)}{\| v_h \|_{V}} & \ge
\frac{b(P_h(Ev), \mu_h)}{ \| P_h(Ev) \|_V} \ge C \frac{b(v, \mu_h)}{ \| v \|_{H^s(\Omega^c)}}.
\end{aligned} \]
Using the fact that $v$ is arbitrary together with \eqref{eq:infsup}, we deduce \eqref{eq:infsup_disc}.
\end{proof}
\begin{remark} \label{rem:s_not_1/2}
{The previous proposition is the basis for the stability of the mixed numerical method we propose in this paper. The proof works only for $s\neq \frac12$, and thus from this point on we assume that to be the case.
However, we remark that the experimental orders of convergence we have obtained for $s = \frac12$ agree with those predicted by the theory in the limit $s\to \frac12$, supporting the fact that this restriction is a mere limitation of our proof.}
\end{remark}
By the standard theory of finite element approximations of saddle point problems \cite{BoffiBrezziFortin}, we deduce the following estimate.
\begin{proposition} \label{prop:cea}
Let $(u,\lambda) \in V\times \Lambda$ and $(u_h,\lambda_h) \in V_h \times \Lambda_h$ be the respective solutions of problems \eqref{eq:cont} and \eqref{eq:discrete}. Then there exists a constant $C$, independent of $h$ {and $H$}, such that
\begin{equation} \label{eq:cea}
\| u - u_h \|_V + \| \lambda - \lambda_h \|_\Lambda \le
C \left( \inf_{v_h \in V_h} \| u - v_h \|_V + \inf_{\mu_h \in \Lambda_h} \| \lambda - \mu_h \|_\Lambda \right) .
\end{equation}
\end{proposition}
In order to obtain convergence order estimates for the finite element approximations under consideration, it remains to estimate the infima on the right hand side of \eqref{eq:cea}.
Within $\Omega_H$, this is achieved by means of a quasi-interpolation operator \cite{Clement, ScottZhang}.
We denote such an operator by $\Pi_h$; depending on whether discrete functions are required to have zero trace or not, $\Pi_h$ could be either the Cl\'ement or the Scott-Zhang operator. For these operators, it holds that (see, for example, \cite{Ciarlet})
\begin{equation}
\label{eq:sz_estimate}
\| v -\Pi_h v \|_{H^{t}(\Omega)} \le C h^{\sigma - t} \| v \|_{H^{\sigma}(\Omega)}
\quad \forall v \in H^{\sigma}(\Omega), \ 0 \le t \le \sigma\le 2.
\end{equation}
{Since this estimate is applied later to $\Omega_H\setminus\Omega$, it is important to stress that the
constant can be taken independent of the diameter of $\Omega$. This is indeed the case due to the fact
that \eqref{eq:sz_estimate} is obtained by summing \emph{local} estimates on stars (see e.g., \cite{AB, Ciarlet}). }
\begin{lemma} Given $v \in L^2(\Omega_H \setminus \Omega)$ and $0\le \sigma \le 1$, the following estimate holds:
\begin{equation}
\| v - P_h v \|_{\widetilde H^{-\sigma}(\Omega_H \setminus \Omega)} \le C h^{\sigma} \| v \|_{L^2(\Omega_H \setminus \Omega)}.
\label{eq:aprox_neg}
\end{equation}
{The constant $C$ is independent of $h$ and $H$.}
\end{lemma}
\begin{proof}
Let $v \in L^2({\Omega_H}\setminus \Omega)$. Given $\varphi \in H^\sigma(\Omega_H\setminus\Omega)$, considering the quasi-interpolation operator $\Pi_h$ and taking into account that
$(v-P_h v) \perp V_h$,
\begin{align*}
\frac{\int_{\Omega_H\setminus \Omega} (v - P_h v) \varphi}{ \| \varphi \|_{ H^\sigma(\Omega_H\setminus\Omega)}} & =
\frac{\int_{\Omega_H\setminus \Omega} (v - P_h v) (\varphi - \Pi_h \varphi)}{ \| \varphi \|_{H^\sigma(\Omega_H\setminus\Omega)}} \le \\
& \le \|v - P_h v \|_{L^2(\Omega_H \setminus \Omega)} \frac{\| \varphi - \Pi_h \varphi \|_{L^2(\Omega_H \setminus \Omega)}}{ \| \varphi \|_{H^\sigma(\Omega_H\setminus\Omega)}} .
\end{align*}
Estimate \eqref{eq:sz_estimate} with $t=0$ gives $\| \varphi - \Pi_h \varphi \|_{L^2(\Omega_H \setminus \Omega)} \le C h^{\sigma} \| \varphi \|_{H^{\sigma}(\Omega_H\setminus\Omega)}$; combining this with the trivial estimate $\|v - P_h v \|_{L^2(\Omega_H \setminus \Omega)} \le \|v \|_{L^2(\Omega_H \setminus \Omega)}$ and taking the supremum over $\varphi$, we conclude the proof.
\end{proof}
For the following, we need to define restrictions in negative-order spaces.
Let $\sigma \in (0,1)$ and choose a fixed cutoff
function $\eta \in C^\infty(\Omega^c)$ such that
\begin{equation}
\label{eq:cutoff}
0\le \eta\le 1, \quad \text{supp}(\eta)\subset \overline{\Omega}_H \setminus \Omega, \quad \eta(x)=1 \quad \mbox{in} \quad \Omega_{H-1} \setminus \Omega .
\end{equation}
Define the operator {$T_\eta:\; H^\sigma(\Omega_H\setminus \Omega) \to H^\sigma(\Omega^c)$} that multiplies by $\eta$
{any extension to $\Omega^c$ of functions in $H^\sigma(\Omega_H\setminus \Omega)$},
that is, $T_\eta(\psi):=\eta \psi$. We have {$\|T_\eta(\psi)\|_{H^\sigma(\Omega^c)}\le C \|\psi\|_{H^\sigma(\Omega_H\setminus \Omega)}$}, with a constant that does not depend on $H$ (use interpolation from the obvious cases $\sigma=0$ and $\sigma=1$).
Then, $T_\eta$ can be extended to negative-order spaces, $T_\eta :\widetilde H^{-\sigma}(\Omega^c)\to \widetilde H^{-\sigma}(\Omega_H \setminus \Omega)$.
Consider an element $\mu\in \widetilde H^{-\sigma}(\Omega^c)$, and define $T_\eta$ by means of
\[
\langle T_\eta(\mu), \psi \rangle=\langle \mu , \eta \psi \rangle.
\]
The continuity $\|T_\eta(\mu)\|_{\widetilde H^{-\sigma}(\Omega_H\setminus \Omega)}\le C \|\mu\|_{\widetilde H^{-\sigma}(\Omega^c)}$
follows easily from the continuity in positive-order spaces. Notice that similar considerations hold for $T_{1-\eta} : \widetilde H^{-\sigma}(\Omega^c)\to \widetilde H^{-\sigma}(\Omega_{H-1}^c)$. A localization estimate for negative-order norms using these maps reads as follows.
\begin{lemma} \label{lemma:triangular}
The following estimate holds for all $\mu \in \widetilde H^{-\sigma}(\Omega^c)$:
\[
\| \mu \|_{\widetilde H^{-\sigma}(\Omega^c)} \le \| T_\eta (\mu) \|_{\widetilde H^{-\sigma}(\Omega_H\setminus\Omega)} + \| T_{1-\eta} (\mu) \|_{\widetilde H^{-\sigma}(\Omega_{H-1}^c)}.
\]
\end{lemma}
\begin{proof}
We first notice that, for every $\psi \in H^{\sigma}(\Omega^c)$, it holds that
\[
\psi = T_\eta \left( \psi \big|_{\Omega_H\setminus\Omega} \right) + T_{1-\eta} \left( \psi \big|_{\Omega_{H-1}^c} \right),
\]
and that
\[ \begin{aligned}
\| T_\eta \left( \psi \big|_{\Omega_H\setminus\Omega} \right)\|_{H^{\sigma}(\Omega^c)} & \le \| \psi \big|_{\Omega_H\setminus\Omega} \|_{H^{\sigma}(\Omega_H\setminus\Omega)}, \\
\| T_{1-\eta} \left( \psi \big|_{\Omega_{H-1}^c} \right)\|_{H^{\sigma}(\Omega^c)} & \le \| \psi \big|_{\Omega_{H-1}^c} \|_{H^{\sigma}(\Omega_{H-1}^c)} .
\end{aligned} \]
So, given $\mu \in \widetilde H^{-\sigma}(\Omega^c)$, it follows that
\begin{equation} \label{eq:linearity_mu}
\frac{\langle \mu, \psi \rangle}{\| \psi \|_{H^{\sigma}(\Omega^c)}} \le
\frac{\langle T_\eta (\mu), \psi \big|_{\Omega_H\setminus\Omega} \rangle}{\| \psi\big|_{\Omega_H\setminus\Omega} \|_{H^{\sigma}(\Omega_H\setminus\Omega)}}
+ \frac{\langle T_{1-\eta} (\mu), \psi\big|_{\Omega_{H-1}^c} \rangle}{\| \psi\big|_{\Omega_{H-1}^c} \|_{H^{\sigma}(\Omega_{H-1}^c)}}
\end{equation}
for all $\psi \in H^{\sigma}(\Omega^c)$. The proof follows by taking suprema on both sides of the inequality above.
\end{proof}
\begin{remark} \label{remark:triangular}
From \eqref{eq:linearity_mu}, it is apparent that, if $\mu \in \widetilde H^{-\sigma}(\Omega^c)$ and $\nu \in \widetilde H^{-\sigma}(\Omega_H \setminus \Omega)$, then
\[
\| \mu - \nu \|_{\widetilde H^{-\sigma}(\Omega^c)} \le \| T_\eta (\mu) - \nu \|_{\widetilde H^{-\sigma}(\Omega_H\setminus\Omega)} + \| T_{1-\eta} (\mu) \|_{\widetilde H^{-\sigma}(\Omega_{H-1}^c)}.
\]
\end{remark}
In order to simplify notation, in the sequel we just write $\eta\mu $ and $(1-\eta)\mu$ for $T_\eta(\mu)$ and $T_{1-\eta}(\mu)$, respectively.
Next, we estimate the approximation errors within the meshed domain.
\begin{proposition} \label{prop:interpolation}
The following estimates hold:
\begin{align}
\inf_{v_h \in V_h} \| u - v_h \|_{H^s(\Omega_H)} & \le
C \,{h^{1/2-\varepsilon}} \Sigma_{f,g},
\label{eq:interpolation_H_u} \\
\inf_{\mu_h \in \Lambda_h} \| \eta \lambda - \mu_h \|_{\widetilde H^{-s}(\Omega_H\setminus\Omega)} & \le
C \,{h^{1/2-\varepsilon}} \Sigma_{f,g}, \label{eq:interpolation_H_nsu}
\end{align}
where $\Sigma_{f,g}$ is given by \eqref{eq:def_sigma} and $\eta$ is the cutoff function in \eqref{eq:cutoff}.
\end{proposition}
\begin{proof}
Estimate \eqref{eq:interpolation_H_u} is easily attained by taking into account that $u$ vanishes on $\Omega_H^c$ (because we are assuming that the support of $g$ is bounded), and applying the regularity estimate \eqref{eq:reg_u} jointly with approximation estimates for quasi-interpolation operators.
In order to prove \eqref{eq:interpolation_H_nsu}, we first assume $s< 1/2$,
so that $\eta \lambda \in L^2(\Omega_H \setminus \Omega)$ {by Theorem~\ref{teo:regularidad}}. Set $\mu_h = P_h (\eta \lambda)$; then, applying \eqref{eq:aprox_neg}, approximation properties of $P_h$ and the continuity of $T_\eta: H^{-s+1/2-\varepsilon}(\Omega^c) \to H^{-s+1/2-\varepsilon}(\Omega_H\setminus \Omega)$, we obtain \eqref{eq:interpolation_H_nsu} immediately.
Meanwhile, if $s > 1/2$, considering $\sigma = s$ in \eqref{eq:estab_neg} and \eqref{eq:aprox_neg}, we obtain:
\begin{align*}
\| w - P_h w \|_{\widetilde H^{-s}(\Omega_H \setminus \Omega)} & \le C \| w \|_{\widetilde H^{-s}(\Omega_H \setminus \Omega)} \\
\| w - P_h w \|_{\widetilde H^{-s}(\Omega_H \setminus \Omega)} & \le C h^{s} \| w \|_{L^2(\Omega_H \setminus \Omega)} .
\end{align*}
Interpolating these two estimates, recalling the regularity of $\lambda$ given by Theorem \ref{teo:regularidad}
and the continuity of $T_\eta$, we deduce that
\begin{align*}
\| \eta \lambda - P_h (\eta \lambda) \|_{\widetilde H^{-s}(\Omega_H \setminus \Omega)} &\le C h^{1/2 - \varepsilon}\| \lambda \|_{H^{-s+1/2-\varepsilon}({\Omega^c})} \le C h^{1/2 - \varepsilon} \, \Sigma_{f,g}.
\end{align*}
\end{proof}
As the norms in both $V$ and $\Lambda$ involve integration over unbounded domains and the discrete functions vanish outside $\Omega_H$, in order to estimate the infima in \eqref{eq:cea} we need to rely on estimates that depend not on the discrete approximation but on the behavior of $u$ and $\lambda$. For the term corresponding to the norm of $u$, Corollary \ref{cor:norma} suffices (as long as $\text{supp}(g) \subset \Omega_H$), whereas for the nonlocal derivative contribution it is necessary to formulate an explicit decay estimate.
\begin{proposition} \label{prop:dec_nsu} Let $\Omega_H$ be such that $\text{supp}(g) \subset \Omega_H$.
Then, there exists a constant $C$, independent of $f$, $g$ {and $H$}, such that the estimate
\[ \begin{aligned}
\| (1-\eta) \lambda \|_{\widetilde H^{-s}(\Omega_{H-1}^c)} & \le \| \lambda \|_{L^2(\Omega_{H-1}^c)} \\
& \le C H^{-(n/2 + 2s)} \left( \| f \|_{H^{-s+1/2}(\Omega)} + \| g \|_{H^{s+1/2}(\Omega^c)} \right)
\end{aligned} \]
holds, where $\eta$ is the cutoff function from \eqref{eq:cutoff}.
\end{proposition}
\begin{proof}
It is evident that
\[
\| (1-\eta)\lambda \|_{\widetilde H^{-s}(\Omega_{H-1}^c)} \le \| (1-\eta)\lambda \|_{L^2(\Omega_{H-1}^c)} \le \| \lambda \|_{L^2(\Omega_{H-1}^c)}.
\]
Given $x \in \Omega_{H-1}^c$, it holds that
\[
|\lambda(x)| \le C(n,s) \left[ \int_\Omega \frac{|u(y)|}{|x-y|^{n+2s}} \, dy
+ |g(x)| \int_\Omega \frac{1}{|x-y|^{n+2s}} \, dy \right],
\]
and therefore
\begin{equation} \label{eq:estimacion_lambda} \begin{aligned}
\| \lambda \|_{L^2(\Omega_{H-1}^c)}^2 \le C \bigg[ & \int_{\Omega_{H-1}^c} \left(\int_\Omega \frac{|u(y)|}{|x-y|^{n+2s}} \, dy \right)^2 dx
\\
& + \int_{\Omega_{H-1}^c} |g(x)|^2 \left( \int_\Omega \frac{1}{|x-y|^{n+2s}} \, dy \right)^2 dx \bigg] .
\end{aligned}
\end{equation}
We estimate the two integrals in the right hand side above separately.
As for the first one, consider the auxiliary function $\omega: \Omega \to \mathbb{R}$,
\[
\omega(y) = \left( \int_{\Omega_{H-1}^c} \frac{1}{|x-y|^{2(n+2s)}} \, dx \right)^{1/2};
\]
integrating in polar coordinates and noticing that $(H-1)^{-(n/2+2s)} \le C H^{-(n/2+2s)}$, we deduce
\[
| \omega (y) | \le C H^{-(n/2+2s)} \quad \forall y \in \Omega ,
\]
and so, $ \| \omega \|_{L^2(\Omega)} \le C H^{-(n/2+2s)}$.
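For completeness, the polar-coordinate computation can be sketched as follows. Since $\Omega$ is bounded, for $H$ large enough we may assume $|x-y| \ge c(H-1)$ for all $x \in \Omega_{H-1}^c$ and $y \in \Omega$, with $c>0$ independent of $H$; hence
\[
\omega(y)^2 \le c_n \int_{c(H-1)}^{\infty} r^{\,n-1-2(n+2s)} \, dr = \frac{c_n}{n+4s} \, \big( c(H-1) \big)^{-(n+4s)} \le C H^{-(n+4s)} ,
\]
and the pointwise bound for $\omega$ follows by taking square roots.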
As a consequence,
applying Minkowski's integral inequality, the Cauchy-Schwarz inequality and the previous estimate for $ \| \omega \|_{L^2(\Omega)}$, we obtain
\[ \begin{aligned}
\int_{\Omega_{H-1}^c} \left(\int_\Omega \frac{|u(y)|}{|x-y|^{n+2s}} \, dy \right)^2 dx & \le C \left( \int_\Omega |u(y)| \, |\omega(y)| \, dy \right)^2 \\
& \le C H^{-2(n/2+2s)} \| u \|_{L^2(\Omega)}^2.
\end{aligned} \]
Finally, the $L^2$-norm of $u$ is controlled in terms of the data (see, for example, \eqref{eq:reg_u}).
As for the second term in the right hand side in \eqref{eq:estimacion_lambda}, it suffices to notice that for $x \in \Omega_{H-1}^c$, it holds
\[
\int_\Omega \frac{1}{|x-y|^{n+2s}} \, dy \le C H^{-(n+2s)}.
\]
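Indeed, for such $x$ we have $|x-y| \ge c(H-1)$ uniformly in $y \in \Omega$ (for $H$ large enough, since $\Omega$ is bounded), whence
\[
\int_\Omega \frac{dy}{|x-y|^{n+2s}} \le |\Omega| \, \big( c(H-1) \big)^{-(n+2s)} \le C H^{-(n+2s)} .
\]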
This implies that
\[
\int_{\Omega_{H-1}^c} |g(x)|^2 \left( \int_\Omega \frac{1}{|x-y|^{n+2s}} \, dy \right)^2 dx \le
C H^{-2(n+2s)} \| g \|_{L^2(\Omega_{H-1}^c)}^2 ,
\]
which concludes the proof.
\end{proof}
\begin{remark} \label{rem:orden_H}
As the finite element approximation $u_h$ to $u$ in $\Omega_H$ has an $H^s$-error of order $h^{1/2-\varepsilon},$ we need the previous estimate for the nonlocal derivative to be at least of the same order. Thus, we require $H^{-(n/2+2s)} \le C h^{1/2}$, that is, $H\ge C h^{-1/(n+4s)}.$
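Spelling out the algebra behind this equivalence: raising $H \ge C h^{-1/(n+4s)}$ to the power $-(n/2+2s)$ (which reverses the inequality) gives
\[
H^{-(n/2+2s)} \le C^{-(n/2+2s)} \, h^{\frac{n/2+2s}{n+4s}} = C' h^{1/2},
\]
since $\frac{n/2+2s}{n+4s} = \frac{1}{2}$.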
\end{remark}
Collecting the estimates we have developed so far, we are ready to prove the following.
\begin{theorem} \label{teo:convergencia_bounded}
Let $\Omega$ be a bounded, smooth domain, $f\in H^{{1/2-s}}(\Omega)$ and $g \in H^{{s+1/2}}(\Omega^c)$. Moreover, assume that $g$ has bounded support and consider $\Omega_H$ according to \eqref{eq:def_OmegaH}, with $H \gtrsim h^{-1/(n+4s)}.$ For the finite element approximations considered in this work and $h$ small enough, the following a priori estimates hold:
\begin{align}
& \| u - u_h \|_{V} \le C h^{1/2 - \varepsilon} \Sigma_{f,g}, \label{eq:aprox_u} \\
& \| \lambda - \lambda_h \|_{\Lambda} \le C h^{1/2-\varepsilon} \Sigma_{f,g}, \label{eq:aprox_nsu}
\end{align}
for a constant $C$ depending on $\varepsilon$ but independent of $h$, $H$, $f$ and $g$, and $\Sigma_{f,g}$ defined by \eqref{eq:def_sigma}.
\end{theorem}
\begin{proof}
In order to obtain the above two inequalities, it is enough to estimate the infima in \eqref{eq:cea}.
Since $g$ is boundedly supported and $H \to \infty$ as $h \to 0$, if $h$ is small enough then $\mbox{supp}(g) \subset \Omega_H$. So, $ u - v_h \in \widetilde{H}^{s}(\Omega_H)$ for all $v_h \in V_h$ and thus we may apply Corollary \ref{cor:norma} (or Remark \ref{remark:un_medio} if $s=1/2$)
together with \eqref{eq:interpolation_H_u}:
\begin{align*}
\inf_{v_h \in V_h} \| u - v_h \|_V & \le C \inf_{v_h \in V_h} \| u - v_h \|_{H^s(\Omega_H)} \le
C h^{1/2 - \varepsilon } \, \Sigma_{f,g}.
\end{align*}
The infimum involving the nonlocal derivative is estimated as follows. Consider the cutoff function $\eta$ from \eqref{eq:cutoff}. Since $\mu_h$ vanishes in $\Omega_H^c$, using Remark \ref{remark:triangular}, we have
\[
\inf_{\mu_h \in \Lambda_h} \| \lambda - \mu_h \|_{\Lambda} \le \inf_{\mu_h \in \Lambda_h} \|\eta \lambda - \mu_h \|_{\widetilde H^{-s}(\Omega_H \setminus \Omega)} + \| (1-\eta)\lambda \|_{\widetilde H^{-s}(\Omega_{H-1}^c)} .
\]
The first term on the right hand side is bounded by means of estimate \eqref{eq:interpolation_H_nsu}, whereas for the second one we apply Proposition \ref{prop:dec_nsu} and notice that the choice of $H$ implies that $H^{-(n/2+2s)} \le C h^{1/2}.$
It follows that
\[
\inf_{\mu_h \in \Lambda_h} \| \lambda - \mu_h \|_{\Lambda} \le C h^{1/2 - \varepsilon} \Sigma_{f,g},
\]
and the proof is completed.
\end{proof}
\subsection{The Direct Method}
As already mentioned in the introduction, in this work we mainly focus on the
mixed formulation. Nonetheless, here we provide some details regarding the
direct discrete formulation.
We consider the discrete problem: find $u_h\in V_{h,g_h}$ such that
\begin{equation}
\label{eq:discreteDirect}
a(u_h, v_h) = F(v_h) \quad \forall v_h \in K_h,
\end{equation}
where $V_{h,g_h}$ is the subset of $V_h$ of functions that agree with $g_h$
in $\Omega_H\setminus\Omega$.
The function $g_h$ is chosen as an approximation of $g$; for instance, we may consider $g_h=\Pi_h(g)$. As a consequence, it holds that $\|g-g_h\|_{H^s(\Omega^c)}\le Ch^{1/2-\varepsilon} \|g\|_{H^{s+1/2}(\Omega^c)}$. Let $u$ and $u^{(h)}$ be the solutions of the continuous problem with right hand side $f$ and Dirichlet conditions $g$ and $g_h$, respectively. Using Proposition \ref{prop:well_posedness_direct}, we deduce that
\[
\|u- u^{(h)}\|_V\le C h^{1/2-\varepsilon}\|g\|_{H^{s+1/2}(\Omega^c)}.
\]
Therefore, in order to bound $\|u- u_h\|_V$ it is enough to bound
$\| u^{(h)}-u_h\|_V$. Now, if $\text{supp}(g)\subset \Omega_H$, then $u^{(h)}-u_h\in K=\widetilde H^s(\Omega)$ and
due to the continuity and coercivity of $a$ in $K$ we deduce the best approximation property,
$$\|u^{(h)}-u_h\|_V\le C \inf_{v_h\in V_{h,g_h}}\|u^{(h)}-v_h\|_V.$$
Taking $v_h=\Pi_h(u)$ and using the triangle inequality we are led to bound
$\|u^{(h)}-u\|_V$ and $\| u-\Pi_h(u)\|_V$. A further use of interpolation estimates allows us
to conclude the following.
\begin{theorem} \label{teo:convergencia_bounded_direct}
Let $\Omega$ be a bounded, smooth domain, $f\in H^{-s+1/2}(\Omega)$ and $g \in H^{s+1/2}(\Omega^c)$, and assume that $\text{supp}(g) \subset {\Omega_H}$.
For the finite element approximations considered in this subsection, it holds that
\[
\| u - u_h \|_{V} \le C h^{1/2 - \varepsilon} \left( \| f \|_{H^{-s+1/2}(\Omega)} + \| g \|_{H^{s+1/2}(\Omega^c)} \right),
\]
for a constant $C$ depending on $\varepsilon$ but independent of $h$, $H$, $f$ and $g$.
\end{theorem}
\section{Volume constraint truncation error}\label{sec:bdry}
The finite element approximations performed in the previous section refer to a problem in which the Dirichlet condition $g$ has bounded support. Here, we develop error estimates without this restriction on the volume constraints.
However, as it is not possible to mesh the whole support of $g$, we take into account the Dirichlet condition only in the set $\Omega_H$ considered in the previous section. We compare $u$, the solution to \eqref{eq:dirichletnh}, to $\tilde u$, the solution to
\begin{equation} \label{eq:dirichlet_tilde}
\left\lbrace
\begin{array}{rl}
(-\Delta)^s \tilde u = f & \mbox{ in }\Omega, \\
\tilde u = \tilde g & \mbox{ in }\Omega^c , \\
\end{array}
\right.
\end{equation}
where $\tilde{g} = \eta g$, and $\eta$ is the cutoff function \eqref{eq:cutoff}.
This allows us to apply the finite element estimates developed in Section \ref{sec:fe_approximations} to problem \eqref{eq:dirichlet_tilde}, because $\text{supp}(\tilde g) \subset \overline{\Omega_H}$.
The objective of this section is to show that choosing $H$ in the same fashion as there, namely $H \ge C h^{-1/(n+4s)}$, leads to the same order of error between the continuous truncated problem and the original one.
Since the problems under consideration are linear, without loss of generality we may assume that $g\ge 0$ (otherwise split $g = g_+ - g_-$ and work with the two problems separately).
\begin{proposition} \label{est_truncado}
The following estimate holds:
\begin{equation}
| u - \tilde u |_{H^s(\Omega)} \le C H^{-(n/2+2s)} \| g \|_{L^2(\Omega_{H-1}^c)} ,
\label{eq:est_truncado}
\end{equation}
for a constant $C$ independent of $H$ and $g$.
\end{proposition}
\begin{proof}
Denote by $\varphi = u - \tilde{u}$ the difference between the solutions to equations \eqref{eq:dirichletnh} and \eqref{eq:dirichlet_tilde}.
{Then,
\[ \left\lbrace
\begin{array}{rll}
(-\Delta)^s \varphi & = 0 & \mbox{ in }\Omega, \\
\varphi & = g - \tilde g \ge 0 & \mbox{ in }\Omega^c . \\
\end{array}
\right.\]
}
We emphasize that $\varphi$ is nonnegative (because of the comparison principle), $s$-harmonic in $\Omega$ and vanishes in $\Omega_{H-1} \setminus \Omega$.
Moreover, let us consider $\tilde \varphi = \varphi \chi_\Omega$. As $\varphi \in H^{s+1/2-\varepsilon}({\mathbb{R}^n})$ vanishes in $\Omega_{H-1} \setminus \Omega$, it is clear that $\tilde \varphi \in \widetilde H^{s+1/2-\varepsilon}(\Omega)$, and applying the integration by parts formula \eqref{eq:parts}:
\[
a(\varphi, \tilde\varphi) = \int_\Omega \tilde{\varphi} (-\Delta)^s \varphi = 0.
\]
The nonlocal derivative term in the last equation is null because $\tilde \varphi$ vanishes in $\Omega^c$. Splitting the integrand appearing in the form $a$ and recalling the definition of $\omegas$ \eqref{eq:def_w}, we obtain
\begin{equation} \label{eq:clave} \begin{split}
|\varphi|_{H^s(\Omega)}^2 & = - 2 \int_\Omega \varphi^2(x) \, \omegas(x) \, dx
+ 2 \int_\Omega \varphi(x) \left( \int_{\Omega_{H-1}^c} \frac{g(y)-\tilde g (y) }{|x-y|^{n+2s}} \, dy \right) dx \\
& \le 2 \int_\Omega \varphi(x) \left( \int_{\Omega_{H-1}^c} \frac{g(y)-\tilde g (y) }{|x-y|^{n+2s}} \, dy \right) dx .
\end{split}
\end{equation}
Applying the Cauchy-Schwarz inequality in the integral over $\Omega_{H-1}^c$ and taking into account that $g - \tilde g \le g$ and that $(H-1)^{-(n/2+2s)} \le C H^{-(n/2+2s)}$, it follows immediately that
\begin{equation} \label{eq:est_hs}
|\varphi|_{H^s(\Omega)}^2 \le C(n,s) H^{-(n/2+2s)} \| \varphi \|_{L^1(\Omega)} \| g \|_{L^2(\Omega_{H-1}^c)} .
\end{equation}
We need to bound $\|\varphi\|_{L^1(\Omega)}$ adequately.
Let $\psi \in H^s({\mathbb{R}^n})$ be a function that equals $1$ over $\Omega$. Multiplying $(-\Delta)^s\varphi$ by $\psi$, integrating on $\Omega$ and applying \eqref{eq:parts}, since $\varphi$ is $s$-harmonic in $\Omega$, we obtain
\[
0 = a(\varphi, \psi) - \int_{\Omega^c} \mathcal{N}_s\varphi(y) \, \psi(y) \, dy,
\]
or equivalently,
\begin{equation*}
\begin{split}
0 & = C(n,s) \int_\Omega \int_{\Omega^c} \frac{(\varphi(x) - \varphi(y))(1-\psi(y))}{|x-y|^{n+2s}} \, dy \, dx \\
& - C(n,s) \int_{\Omega^c} \left(\int_\Omega \frac{\varphi(y)- \varphi(x)}{|x-y|^{n+2s}} \, dx \right) \psi(y) \, dy.
\end{split}
\end{equation*}
This implies that
\[
\int_\Omega \int_{\Omega^c} \frac{\varphi(x) - \varphi(y)}{|x-y|^{n+2s}} \, dy \, dx = 0 .
\]
Recalling that $\varphi$ is zero in $\Omega_{H-1} \setminus \Omega$ and that $g - \tilde g \le g$, from the previous identity it follows that
\[
\int_\Omega \varphi(x) \, \omegas(x) \, dx = \int_\Omega \int_{\Omega_{H-1}^c} \frac{g(y) - \tilde{g}(y)}{|x-y|^{n+2s}} dy dx \le C H^{-(n/2+2s)} \| g \|_{L^2(\Omega_{H-1}^c)} .
\]
Recall that the function $\omegas$ is uniformly bounded below in $\Omega$ and that $\varphi\ge 0$. We deduce
\begin{equation} \label{eq:cota_L1}
\| \varphi \|_{L^1(\Omega)} \le C H^{-(n/2+2s)} \| g \|_{L^2(\Omega_{H-1}^c)} ,
\end{equation}
and combining this bound with \eqref{eq:est_hs} yields \eqref{eq:est_truncado}.
\end{proof}
As a byproduct of the proof of the previous proposition, we obtain the following
\begin{lemma} \label{est_l2} There is a constant $C$, independent of $H$ and $g$, such that the bound
\begin{equation}\label{eq:est_l2}
\| u - \tilde u \|_{L^2(\Omega)} \le C H^{-(n/2+2s)} \| g \|_{L^2(\Omega_{H-1}^c)}
\end{equation}
holds.
\end{lemma}
\begin{proof}
As before, we write $\varphi = u - \tilde u$.
From the first line in \eqref{eq:clave},
\begin{align*}
2 \int_\Omega \varphi^2(x) \, \omegas(x) \, dx
& \le 2 \int_\Omega \varphi(x) \left( \int_{\Omega_{H-1}^c} \frac{g(y)-\tilde g (y) }{|x-y|^{n+2s}} \, dy \right) dx \\
& \le C H^{-(n/2+2s)} \| \varphi \|_{L^1(\Omega)} \| g \|_{L^2(\Omega_{H-1}^c)}.
\end{align*}
Combining this estimate with \eqref{eq:cota_L1}, we deduce
\[
\int_\Omega \varphi^2(x) \omegas(x) \, dx \le C H^{-(n+4s)} \| g \|_{L^2(\Omega_{H-1}^c)}^2 ,
\]
where the function $\omegas$ is given by Definition \ref{def:w}.
The lower uniform boundedness of $\omegas$ implies \eqref{eq:est_l2} immediately.
\end{proof}
Given $\tilde u$, the solution to \eqref{eq:dirichlet_tilde}, let us denote by $\tilde \lambda = \mathcal{N}_s \tilde u$ its nonlocal normal derivative.
\begin{proposition}\label{prop:est_lambda}
There is a constant $C$, independent of $H$ and $g$, such that
\begin{equation} \label{eq:est_lambda}
\| \lambda - \tilde \lambda \|_{\Lambda} \le C H^{-(n/2+2s)} \| g \|_{L^2(\Omega_{H-1}^c)} .
\end{equation}
\end{proposition}
\begin{proof}
Let $\phi \in H^s(\Omega^c)$. According to Lemma \ref{extension}, we consider an extension $E\phi \in H^s({\mathbb{R}^n})$ such that $\| E\phi \|_{H^s({\mathbb{R}^n})} \le C \| \phi \|_{H^s(\Omega^c)}.$
By linearity, it is clear that $\lambda - \tilde \lambda = \mathcal{N}_s \varphi$, where $\varphi = u - \tilde u$. Applying the integration by parts formula \eqref{eq:parts} and recalling that $\varphi$ is $s$-harmonic in $\Omega$,
\[
\int_{\Omega^c} (\lambda -\tilde \lambda) \phi = \frac{C(n,s)}{2} \iint_Q \frac{(\varphi(x)-\varphi(y)) (E\phi(x)-E\phi(y))}{|x-y|^{n+2s}} \, dx \, dy.
\]
Since $\varphi$ vanishes in $\Omega_{H-1} \setminus \Omega$, it is simple to bound
\begin{equation*}\begin{split}
\int_{\Omega^c} & (\lambda -\tilde \lambda) \phi \le \\
& C \left( \left| \langle \varphi, E\phi \rangle_{H^s(\Omega)} \right| + \left| \int_\Omega \int_{\Omega_{H-1}^c} \frac{(\varphi(x)-\varphi(y)) (E\phi(x)-\phi(y))}{|x-y|^{n+2s}} \, dx \, dy \right| \right).
\end{split}\end{equation*}
The first term on the right hand side above is bounded by $C |\varphi|_{H^s(\Omega)} \|\phi\|_{H^s(\Omega^c)}$, and Proposition \ref{est_truncado} provides the bound $|\varphi|_{H^s(\Omega)} \le C H^{-(n/2+2s)} \| g \|_{L^2(\Omega_{H-1}^c)}$.
For the second term, splitting the integrand it is simple to obtain the estimates:
\begin{align*}
& \left| \int_\Omega \varphi(x) E\phi(x) \left( \int_{\Omega_{H-1}^c} \frac{1}{|x-y|^{n+2s}} \, dy \right) dx \right| \le C \| \varphi \|_{L^2(\Omega)} \| E\phi \|_{L^2(\Omega)}, \\
& \left| \int_\Omega \varphi(x) \left( \int_{\Omega_{H-1}^c} \frac{\phi(y)}{|x-y|^{n+2s}} \, dy \right) dx \right| \le C H^{-(n/2+2s)} \| \varphi \|_{L^1(\Omega)} \| \phi \|_{L^2(\Omega_{H-1}^c)}, \\
& \left| \int_\Omega E\phi(x)\left(\int_{\Omega_{H-1}^c} \frac{\varphi(y)}{|x-y|^{n+2s}} \, dy \right) dx \right| \le C H^{-(n/2+2s)} \| E \phi \|_{L^1(\Omega)} \| \varphi \|_{L^2(\Omega_{H-1}^c)}, \\
& \left| \int_\Omega \left( \int_{\Omega_{H-1}^c} \frac{\varphi(y) \phi(y)}{|x-y|^{n+2s}} \, dy \right) dx \right| \le C H^{-(n+2s)} \| \varphi \|_{L^2(\Omega_H^c)} \| \phi \|_{L^2(\Omega_{H-1}^c)}.
\end{align*}
The terms on the right hand sides of the inequalities above are estimated applying Lemma \ref{est_l2} and Proposition \ref{est_truncado}, as well as recalling the continuity of the extension operator and of the inclusion $L^2(\Omega) \subset L^1(\Omega)$. We obtain
\[
\frac{\int_{\Omega^c} (\lambda -\tilde \lambda) \phi } { \| \phi \|_{H^s(\Omega^c)} } \le C H^{-(n/2+2s)} \| g \|_{L^2(\Omega_{H-1}^c)} \quad \forall \phi \in H^s(\Omega^c).
\]
Taking supremum in $\phi$, estimate \eqref{eq:est_lambda} follows.
\end{proof}
Combining the estimates obtained in this section, we immediately prove the following result.
\begin{theorem}\label{teo:principal}
Let $(u,\lambda)$ be the solution of problem \eqref{eq:cont}, and consider $\tilde g$ as in the beginning of this section. Moreover, let $(u_h, \lambda_h)$ be the finite element approximations of the truncated problem \eqref{eq:dirichlet_tilde}, defined on $\Omega_H$, where $H$ behaves as $h^{-1/(n+4s)}$. Then,
\begin{equation}
\| u - u_h \|_{H^s(\Omega_{H-1})} \le C h^{1/2-\varepsilon}
\, \Sigma_{f,g} \label{eq:aprox_u1}
\end{equation}
and
\begin{equation}
\| \lambda - \lambda_h \|_{\Lambda} \le C h^{1/2-\varepsilon}
\, \Sigma_{f,g}, \label{eq:aprox_lambda}
\end{equation}
for a constant $C$ depending on $\varepsilon$ but independent of $h$, $H$, $f$ and $g$, and $\Sigma_{f,g}$ defined by \eqref{eq:def_sigma}.
\end{theorem}
\begin{proof}
Applying the triangle inequality, we write
\[
\| u - u_h \|_{H^s(\Omega_{H-1})} \le \| u - \tilde{u} \|_{H^s(\Omega_{H-1})} + \| \tilde{u} - u_h \|_{H^s(\Omega_{H-1})} .
\]
The second term above is bounded by $\| \tilde{u} - u_h \|_V$, which is controlled by \eqref{eq:aprox_u}. As for the first one, recall that $u = \tilde u$ in $\Omega_{H-1} \setminus \Omega$, so that
\begin{equation*}\begin{split}
\| u - & \tilde{u} \|_{H^s(\Omega_{H-1})}^2 = \\
& \| u - \tilde{u} \|_{H^s(\Omega)}^2 + 2 \int_\Omega |u(x) - \tilde{u}(x)|^2 \left( \int_{\Omega_{H-1}\setminus \Omega} \frac{1}{|x-y|^{n+2s}} \, dy \right) dx.
\end{split}\end{equation*}
The integral above is bounded by means of Hardy-type inequalities from Proposition \ref{prop:hardy} (or by Remark \ref{remark:un_medio} if $s=1/2$),
because $(u-\tilde u)\chi_\Omega$ belongs to $\widetilde H^s(\Omega)$. So, resorting to Proposition \ref{est_truncado} and Lemma \ref{est_l2},
\[
\| u - \tilde{u} \|_{H^s(\Omega_{H-1})} \le C \| u - \tilde{u} \|_{H^s(\Omega)} \le C H^{-(n/2+2s)} \| g \|_{L^2(\Omega_{H-1}^c)} ,
\]
by \eqref{eq:est_truncado} and \eqref{eq:est_l2}; taking into account the behavior of $H$, this bound is of order $h^{1/2}$.
Estimate \eqref{eq:aprox_lambda} is an immediate consequence of the triangle inequality, the dependence of $H$ on $h$ and equations \eqref{eq:est_lambda} and \eqref{eq:aprox_nsu}. Indeed,
\begin{align*}
\| \lambda - \lambda_h \|_{\Lambda} & \le \| \lambda - \tilde \lambda \|_{\Lambda} + \| \tilde \lambda - \lambda_h \|_{\Lambda} \le
C h^{1/2-\varepsilon} \, \Sigma_{f,g} .
\end{align*}
\end{proof}
\begin{remark}
We point out that \eqref{eq:aprox_u1} estimates the error in the $H^s(\Omega_{H-1})$-norm. Since it is only possible to mesh a bounded domain, in general there is no hope of obtaining convergence estimates for $\| u - u_h \|_V$, unless some extra hypothesis on the decay of the volume constraint is included.
\end{remark}
\section{Numerical experiments}
We display the results of the computational experiments performed for the mixed formulation of \eqref{eq:dirichletnh}. The scheme utilized for these two-dimensional examples is based on the code introduced in \cite{ABB}, where details about the computation of the matrix having entries $a(\varphi_i, \varphi_j)$ can be found.
The examples we provide give evidence of the convergence of the scheme towards the solution $u$ for Dirichlet data with both bounded and unbounded support. We point out that our convergence estimates (Theorems \ref{teo:convergencia_bounded} and \ref{teo:principal}) are expressed in terms of fractional-order norms, and thus their exact computation is, in general, out of reach; in those cases we compute orders of convergence in $L^2$-norms.
Also, as stated in Remark \ref{rem:s_not_1/2}, although the case $s = \frac12$ was excluded from our analysis, the numerical evidence we present here indicates that the same estimates hold for $s = \frac12$ as for $s \ne \frac12$.
Our first example is closely related to Remark \ref{rem:singular}. Indeed, the solution considered there gives a function with constant fractional Laplacian and supported in the $n$-dimensional unit ball. In this example, however, we shrink the domain so that we produce a nonhomogeneous volume constraint with bounded support. Namely, for $\Omega = B(0,1/2) \subset {\mathbb {R}}^2$ we study
\begin{equation} \label{ex:bounded_support}
\left\lbrace \begin{array}{rl l}
(-\Delta)^s u & = 2 & \text{ in } \Omega,\\
u & = \frac{1}{2^{2s} \Gamma(1+s)^2} (1 - |\cdot|^2)_+^s & \text{ in } \Omega^c.
\end{array} \right.
\end{equation}
By linearity, the exact solution of this problem can be expressed as the sum of the solutions to problems
\begin{equation} \label{eq:auxiliary_1}
\left\lbrace \begin{array}{rl l}
(-\Delta)^s u_1 & = 1 & \text{ in } \Omega,\\
u_1 & = \frac{1}{2^{2s} \Gamma(1+s)^2} (1 - |\cdot|^2)_+^s & \text{ in } \Omega^c,
\end{array} \right.
\end{equation}
and
\begin{equation} \label{eq:auxiliary_2}
\left\lbrace \begin{array}{rl l}
(-\Delta)^s u_2 & = 1 & \text{ in } \Omega,\\
u_2 & = 0 & \text{ in } \Omega^c.
\end{array} \right.
\end{equation}
The first problem above has a smooth solution within $\overline\Omega$, whereas the latter has the minimal regularity guaranteed by Proposition \ref{prop:regHr}.
Explicitly, by Remark \ref{rem:singular}, the exact solution is given by
\[
u(x) = u_1(x) + u_2(x) = \frac{1}{2^{2s} \Gamma(1+s)^2} \left[ \left(1 - |x|^2\right)_+^s + \left(\frac14 - |x|^2\right)_+^s \right].
\]
Moreover, finite element solutions to \eqref{ex:bounded_support} can also be represented as the sum of the corresponding solutions to \eqref{eq:auxiliary_1} and \eqref{eq:auxiliary_2}. In practice, we consider the two problems separately and add up their discrete solutions.
The error in the $H^s(\Omega)$-norm is estimated as follows. In first place, we write
\[
\| u - u_h \|_{H^s(\Omega)} \le \| u_1 - u_{1,h} \|_{H^s(\Omega)} + \| u_2 - u_{2,h}\|_{H^s(\Omega)}.
\]
As for the first term in the right hand side above, since $u_1$ is smooth in $\Omega$, we may bound it by interpolation,
\[
\|u_1 - u_{1,h} \|_{H^s(\Omega)} \le \| u_1 - u_{1,h} \|_{L^2(\Omega)}^{1-s} \| u_1 - u_{1,h} \|_{H^1(\Omega)}^s.
\]
The second term can be computed by using the same trick as in \cite[Lemma 5.1]{AB} because it corresponds to a problem with homogeneous Dirichlet conditions,
\[
| u_2 - u_{2,h} |_{H^s(\Omega)} \le | u_2 - u_{2,h} |_{H^s({\mathbb{R}^n})} = \left(\int_\Omega \left( u_2 (x) - u_{2,h} (x) \right) dx \right)^{1/2}.
\]
We carried out computations for $s \in \{0.1, \ldots , 0.9\}$ on meshes with size $h \in \{0.045, 0.037, 0.03, 0.025\}$. The auxiliary domains considered were $\Omega_H = B(0,H+1/2)$ with $H = C h^{-1/(2+4s)}$, where $C = C(s)$ was chosen so that $H$ would equal $1$ if $h$ was set to $0.15$. Therefore, the support of the volume constraint was contained in every auxiliary domain $\Omega_H$.
Our results are summarized in Tables \ref{tab:bounded_support_s05} and \ref{tab:bounded_support}. In spite of only having upper bounds for the errors, the experimental order of convergence
E.O.C. is in good agreement with the theory. In Table \ref{tab:bounded_support_s05} it is also noticeable that the error is driven by the contribution of the nonsmooth component $u_2$, that is two orders of magnitude larger than the error of the smooth component. The observed order of convergence of the latter is in good agreement with the fact that $u_1 \in H^2(\Omega)$.
\begin{table}[ht] \centering
\begin{tabular}{| c | c | c | c |} \hline
$h$ & $ \|u_1 - u_{1,h} \|_{H^s(\Omega)} $ & $ \|u_2 - u_{2,h} \|_{H^s(\Omega)} $ & $\| u - u_h\|_{H^s(\Omega)}$ \\ \hline
$0.045$ & $7.593\times10^{-4}$ & $6.423\times10^{-2}$ & $6.499\times10^{-2}$ \\
$0.037$ & $4.629\times10^{-4}$ & $5.742\times10^{-2}$ & $5.789\times10^{-2}$ \\
$0.030$ & $3.187\times10^{-4}$ & $5.196\times10^{-2}$ & $5.228\times10^{-2}$ \\
$0.025$ & $3.168\times10^{-4}$ & $4.799\times10^{-2}$ & $4.831\times10^{-2}$ \\
\hline \hline
E.O.C. & $1.53$ & $0.49$ & $0.50$ \\
\hline
\end{tabular}
\caption{Upper bounds for the errors in Example \ref{ex:bounded_support} with $s = 0.5$.
}
\label{tab:bounded_support_s05}
\end{table}
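The experimental orders of convergence reported in the tables can be recomputed from the tabulated data by a least-squares fit of $\log(\mathrm{err})$ against $\log h$. A short Python sketch (ours) reproduces the E.O.C. $\approx 0.49$ for the nonsmooth component $u_2$:

```python
import math

# Mesh sizes and H^s-error bounds for the nonsmooth component u_2 (table values).
h   = [0.045, 0.037, 0.030, 0.025]
err = [6.423e-2, 5.742e-2, 5.196e-2, 4.799e-2]

def eoc(h, err):
    """Least-squares slope of log(err) versus log(h)."""
    lx = [math.log(v) for v in h]
    ly = [math.log(v) for v in err]
    mx = sum(lx) / len(lx)
    my = sum(ly) / len(ly)
    num = sum((a - mx) * (b - my) for a, b in zip(lx, ly))
    den = sum((a - mx) ** 2 for a in lx)
    return num / den

rate = eoc(h, err)  # close to the theoretical order 1/2
```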
\begin{table}[ht] \centering
\begin{tabular}{| c | | c | c | c | c | c | c | c | c | c | c |} \hline
$s$ & $0.1$ & $0.2$ & $0.3$ & $0.4$ & $0.5$ & $0.6$ & $0.7$ & $0.8$ & $0.9$ \\ \hline
E.O.C. & $0.48$ & $0.48$ & $0.49$ & $0.49$ & $0.50$ & $0.53$ & $0.56$ & $0.59$ & $0.62$ \\
\hline
\end{tabular}
\caption{Experimental orders of convergence in $H^s(\Omega)$ for Example \ref{ex:bounded_support}, for $s \in \{0.1, \ldots , 0.9\}$.
}
\label{tab:bounded_support}
\end{table}
We next display two examples where the Dirichlet condition has unbounded support, posed in the two-dimensional unit ball. The Poisson kernel for this domain is known \cite[Chapter 1]{Landkof}, and thus it is simple to obtain an explicit expression for the solutions of problems such as the two we analyze next. More precisely, let $\Omega = B(0,r) \subset {\mathbb{R}^n}$ for some $r>0$ and let $g:\Omega^c \to {\mathbb {R}}$. Then, a solution to
\begin{equation} \label{eq:balayage}
\left\lbrace \begin{array}{rl}
(-\Delta)^s u = 0 & \text{in } \Omega, \\
u = g & \text{in } \Omega^c,
\end{array}
\right.
\end{equation}
is given by
\begin{equation} \label{eq:sol_balayage}
u(x) = \int_{\Omega^c} g(y) \, P(x,y) \, dy,
\end{equation}
where
\[
P(x,y) = \frac{\Gamma(n/2) \sin (\pi s)}{\pi^{n/2 + 1}} \left(\frac{r^2 - |x|^2}{|y|^2 - r^2}\right)^s \frac1{|x-y|^n}, \ x \in \Omega, \ y \in \Omega^c .
\]
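As a sanity check on this representation (our own, for $n=2$ and $r=1$): for constant data $g \equiv 1$ the solution must be identically one, so the kernel must have unit mass over $\Omega^c$. Reducing the integral at $x=0$ to polar coordinates and substituting $u = 1 - 1/\rho^2$ turns it into the Beta integral $B(1-s,s) = \pi/\sin(\pi s)$, which the following sketch verifies numerically:

```python
import math

def kernel_mass(s, n=200_000):
    """Midpoint-rule value of int_{Omega^c} P(0, y) dy for the unit disc:
    after the reduction described above, total = sin(pi s)/pi * B(1-s, s),
    with B(1-s, s) = int_0^1 u^{-s} (1-u)^{s-1} du."""
    h = 1.0 / n
    acc = 0.0
    for k in range(n):
        u = (k + 0.5) * h
        acc += u ** (-s) * (1.0 - u) ** (s - 1.0)
    return math.sin(math.pi * s) / math.pi * acc * h

mass = kernel_mass(0.5)  # should be close to 1
```

The midpoint rule handles the integrable endpoint singularities with an error well below one percent at this resolution.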
We compute numerical solutions to \eqref{eq:balayage} in the two-dimensional unit ball with two different
functions
\[
g(x) = \exp(-|x|^2) \quad \text{and} \quad g(x) = \frac1{|x|^4} .
\]
In the experiments performed, we set $\Omega_H = B(0,H+1)$ with $H = C h^{-1/(2+4s)}$ and $C = C(s)$ such that $H = 1$ for $h=0.1$. We considered discretizations for
$s \in \{0.1, \ldots , 0.9\}$ on meshes with size $h \in \{0.1, 0.082, 0.067, 0.055, 0.045\}$.
Table \ref{tab:balayage} shows the computed orders of convergence in $L^2(\Omega)$ for these two problems, and Figure \ref{fig:fitting_pol} displays the computed $L^2$-errors for some values of $s$ and $g(x) = \frac1{|x|^4}$. In this example solutions are not smooth up to the boundary of $\Omega$. Thus, the observed convergence with orders approximately $s+1/2$ is expected.
\begin{table}[ht]\centering
\begin{tabular}{| c | c | c |} \hline
$s$ & $g(x) = \exp(-|x|^2)$ & $g(x) = \frac1{|x|^4}$ \\ \hline
$0.1$ & $0.64$ & $0.55$ \\
$0.2$ & $0.78$ & $0.64$ \\
$0.3$ & $0.86$ & $0.74$ \\
$0.4$ & $0.90$ & $0.89$ \\
$0.5$ & $0.97$ & $1.03$ \\
$0.6$ & $1.15$ & $1.14$ \\
$0.7$ & $1.27$ & $1.16$ \\
$0.8$ & $1.32$ & $1.26$ \\
$0.9$ & $1.37$ & $1.40$ \\
\hline
\end{tabular}
\caption{Experimental orders of convergence in $L^2(\Omega)$ for \eqref{eq:balayage} with Dirichlet data with unbounded support.}
\label{tab:balayage}
\end{table}
\begin{figure}
\caption{Computed $L^2$-errors for $s = 0.1$ (green), $s=0.5$ (red) and $s=0.9$ (blue) for problem \eqref{eq:balayage}.}
\label{fig:fitting_pol}
\end{figure}
Moreover, since we cannot mesh the support of the volume constraint, as $h$ decreases the actual region where we measure the error is expanded. According to Remark \ref{rem:orden_H}, in these experiments we have considered $H = C h^{-1/(2+4s)}$. Nevertheless, the computational cost of solving \eqref{eq:discrete} for $H$ large is extremely high. In practice, we have worked with small values of the constant $C$ that relates $H$ with $h$, especially for $s$ small.
Finally, since it is expected that increasing the truncation parameter $H$ leads to a better approximation, we analyze the dependence on $H$ in the previous example with $g(x) = \frac1{|x|^4}$. We compare convergence rates both in $L^2(\Omega_H)$ and $L^2({\mathbb{R}^n})$. Because
\[ \| g \|_{L^2(B(0,R)^c)} = \sqrt{\frac{\pi}{3}} \, R^{-3} , \]
the decay of the error in $\Omega_H^c$ is algebraic in $h$,
\[
\| g \|_{L^2(\Omega_H^c)} \le C h^{\frac{3}{2+4s}}.
\]
Thus, if we utilize a sequence of domains $\{ \Omega_H \}$ with $H$ not large enough, the tail of the $L^2$-norm of the volume constraint has a large impact on the $L^2({\mathbb{R}^n})$-error.
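The closed form for the tail norm can be double-checked numerically (a small sketch of ours; the substitution $r = R/u$ maps the unbounded radial integral onto $(0,1)$):

```python
import math

def tail_norm_sq(R, n=100_000):
    """|| |x|^{-4} ||_{L^2(B(0,R)^c)}^2 in two dimensions:
    int_{|x|>R} |x|^{-8} dx = 2*pi*int_R^inf r^{-7} dr, which under r = R/u
    becomes 2*pi*R^{-6}*int_0^1 u^5 du, evaluated by the midpoint rule."""
    h = 1.0 / n
    acc = sum(((k + 0.5) * h) ** 5 for k in range(n))
    return 2.0 * math.pi * R ** (-6) * acc * h

R = 2.0
val = math.sqrt(tail_norm_sq(R))
exact = math.sqrt(math.pi / 3.0) * R ** (-3)  # the stated closed form
```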
In Figure \ref{fig:erres} we compare the effect of increasing the constant in the identity $H = C h^{-1/(2+4s)}.$ Errors are observed to diminish considerably, and there is a slight improvement in the orders of convergence as well. Notice also that the errors in $L^2({\mathbb{R}^n})$ are one order of magnitude larger than errors in $L^2(\Omega_H)$.
\begin{figure}
\caption{Left panel: convergence in $L^2(\Omega_H)$ (circles) and in $L^2({\mathbb{R}
\label{fig:erres}
\end{figure}
\end{document}
\begin{document}
\draft
\title{Wavefunction Collapse and Random Walk}
\author{Brian Collett and Philip Pearle}
\address{Department of Physics, Hamilton College, Clinton, NY 13323}
\date{\today}
\maketitle
\begin{abstract}
{Wavefunction collapse models modify Schr\"odinger's equation so that it describes
the rapid evolution of a superposition of macroscopically distinguishable states to one of them.
This provides a phenomenological basis for a physical resolution to the so-called ``measurement problem."
Such models have experimentally testable differences from standard quantum theory. The most well developed
such model at present is the Continuous Spontaneous Localization (CSL) model in which
a fluctuating classical field interacts with particles to cause collapse. One ``side
effect" of this interaction is that the field imparts energy to the particles: experimental
evidence on this has led to restrictions on the parameters of the model, suggesting that
the coupling of the classical field to the particles must be mass--proportional. Another ``side
effect" is that the field imparts momentum to particles, causing a small blob of matter to
undergo random walk. Here we explore this in order to supply predictions which could be experimentally tested.
We examine the translational diffusion of a sphere and a disc,
and the rotational diffusion of a disc, according to CSL.
For example, we find that the rms distance an isolated $10^{-5}$cm radius sphere
diffuses is $\approx$ (its diameter, 5 cm) in (20 sec, a day), and that
a disc of radius $2\cdot 10^{-5}$cm and thickness $.5\cdot 10^{-5}$cm diffuses
through $2\pi$rad in about 70sec (this assumes the ``standard" CSL
parameter values). The comparable rms diffusions of standard quantum theory
are smaller than these by a factor $10^{-3\pm 1}$. It is shown that the CSL diffusion in air at STP is
much reduced and, indeed, is swamped by the ordinary Brownian motion. It is
also shown that the sphere's diffusion in a thermal radiation bath at room temperature is comparable
to the CSL diffusion, but is utterly negligible at liquid He temperature. Thus, in order to
observe CSL diffusion, the pressure and temperature must be low. At the low reported pressure
of $<5\cdot10^{-17}$Torr, achieved at $4.2^{\circ}$K, the mean time between
air molecule collisions with the (sphere, disc) is $\approx$(80, 45)min. This is ample time for
observation of the putative CSL diffusion with the standard parameters and, it is pointed out,
with any parameters in the range over which the theory may be considered viable. This encourages
consideration of how such an experiment may actually be performed, and the paper closes
with some thoughts on this subject.}
\end{abstract}
\pacs{03.65 Bz}
\section{Introduction}\label{Section I}
Schr\"odinger was troubled by the collapse postulate associated with Bohr's
``Copenhagen" version of quantum theory. This requires a superposition of
macroscopically distinguishable states (an ill-defined concept), upon observation
(another ill-defined concept), to be suddenly replaced by one of those states. In his
famous ``cat paradox" paper \cite{Schrodinger} Schr\"odinger wrote this ``is the most
interesting part of the entire theory," saying that it prevented one from ascribing
reality to the wavefunction ``because from the realism point of view observation is a
natural process like any other and cannot {\it per se} bring about an
interruption of the orderly flow of events." Dynamical
wavefunction collapse models resolve Schr\"odinger's difficulty, allowing one
to ascribe reality to the wavefunction (somewhat ironically) by altering
Schr\"odinger's own equation, so that the collapse takes place in orderly
and well--defined fashion.
The Continuous Spontaneous Localization (CSL) model \cite{PearleCSL,GPR},
based upon previous models by Ghirardi, Rimini and Weber (GRW) \cite{GRW} and one of the authors
\cite{Pearle} is the most well-developed collapse model at present \cite{others,PearleNaples}. In it, to
Schr\"odinger's equation is added a term which contains a randomly fluctuating classical field
$w(\bf x ,t)$ that interacts with particles, bringing about collapse.
Although collapse is the desired and main effect, there are also ``side effects."
Because collapse narrows wavefunctions, particles gain energy from the field in this process
\cite{GR,Squires,Ballentine}. Experimental tests\cite{PearleSquires,Collett,Ring}
have resulted in restrictions on the
range of permissible parameters for the model, suggesting that the coupling
between the field and particles (which determines the particle's collapse rate)
is proportional to particle mass: thus, for a material object undergoing collapse, its nucleons are
much more responsible for this behavior than are its electrons. In this paper we discuss another side effect:
the random impulses particles get from the field result in random walk of objects. Indeed, in one
of the earliest attempts at a dynamical collapse model, Karolyhazy \cite{Karolyhazy} discussed such behavior.
Here we discuss it in the context of the CSL model, in order to see if the effect is measurable.
We first consider a sphere undergoing translational random walk.
Section II summarizes the needed formalism associated with the usual Brownian motion in both
air at temperature T and in a radiation bath at temperature T. Section III summarizes the results
(of calculations given in the appendices, as are most of the detailed calculations in this paper)
associated with CSL--induced random walk of the sphere.
Section IV puts numerical values into these equations in three realms of air--sphere interaction:
viscous, molecular and impact, in order of decreasing air molecule number density. It becomes clear that,
in order to observe CSL diffusion, the air density must be low enough
so that the mean time between air--sphere impacts is large compared to the
time over which diffusion may be observed.
In Sections V and VI we turn to discuss
respectively translational and rotational diffusion of a disc.
Rotational diffusion of a disc will be the subject of our experimental proposal,
because a small translation distance (e.g., the disc radius) becomes, when it is a distance of rotation,
equivalent to a large fraction of $2\pi$rad and therefore
more readily discernible. We consider a disc rather than a sphere
because a perfect homogeneous sphere undergoes no CSL rotational diffusion
since its rotated quantum states are identical. (An actual sphere's rotated states are slightly different so it
does undergo a very small amount of collapse and rotational diffusion.)
We find, for example, that a disc of radius $2\cdot 10^{-5}$cm and thickness $.5\cdot 10^{-5}$cm diffuses
through $2\pi$rad in about 70sec: this assumes the ``standard" values of the two
parameters, proposed by GRW\cite{GRW} for their model and taken over into CSL,
which may be characterized as the time $\lambda^{-1}=10^{16}$sec it takes an isolated
nucleon in a superposition of two localized states separated
by a distance greater than $a=10^{-5}$cm to collapse to one of those states. A
pressure of $<5\cdot 10^{-17}$Torr is attainable\cite{Gabrielse} and, at this pressure, we
find the mean collision time
between air molecules and the disc is about 45 minutes.
These results are so encouraging that we consider, in section VII, the full range
of ($\lambda$, $a$) parameter values over which the theory may be considered viable (as well as
the parameter proposal of Penrose\cite{others} based upon gravity). This indicates that
experiments to observe diffusion of small objects could provide a definitive test of
CSL and other collapse models. Therefore, in section VIII, we make a preliminary experimental proposal
whose details we hope to examine in a future paper.
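Before proceeding, a rough back-of-the-envelope check (ours, not taken from the appendices) of the quoted collision time: in the molecular realm the flux of molecules through the sphere's surface is $n\overline u/4$ per unit area, so the impact rate on a sphere of radius $R$ is $n\overline u \pi R^{2}$. With the quoted pressure and temperature this gives a mean time between impacts of roughly 80 minutes:

```python
import math

# Back-of-the-envelope estimate; mean molecular mass of air (29 amu) is assumed.
k_B  = 1.380649e-23      # J/K
amu  = 1.66054e-27       # kg
m_g  = 29 * amu          # mean molecular mass of air (assumed)
T    = 4.2               # K
P    = 5e-17 * 133.322   # Torr -> Pa
R    = 1e-7              # m  (the 10^-5 cm sphere)

n    = P / (k_B * T)                              # molecular number density
ubar = math.sqrt(8 * k_B * T / (math.pi * m_g))   # mean molecular speed
rate = n * ubar * math.pi * R ** 2                # flux n*ubar/4 times area 4*pi*R^2
t_coll_min = 1.0 / rate / 60.0                    # mean time between impacts, minutes
```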
\section{Brownian Motion Review}\label{Section II}
\subsection {Diffusion}\label{II A}
It is useful to review the usual Brownian motion formalism \cite{Mazo}. The Fokker-Planck
equation for the probability density $\rho ({\bf x}, {\bf v}, t)$ for the position and
velocity of the center of mass (CM) of a
randomly walking sphere (radius $R$, mass $M$,
density $D$) in a thermal bath at temperature $T$ is
\begin{equation}\label{2.1} {\partial\rho\over\partial t}=
\sum_{j=1}^{3} \bigg\{ -v^{j}{\partial\rho\over\partial x^{j}}+{1\over\tau}{\partial v^{j}\rho\over\partial v^{j}}
+{\beta\over\tau^{2}}{\partial^{2}\rho\over\partial {v^{j}}^{2}} \bigg\}\end{equation}
\noindent where (as will be seen) $\tau$ characterizes the time to reach
thermal equilibrium and $(2\beta t)^{1/2}$
is the equilibrium rms diffusion distance in time $t$. Eq. (2.1) can be used to calculate averages:
${\bar f}(t)\equiv\int\int d{\bf x}d{\bf v}\rho ({\bf x}, {\bf v}, t)f({\bf x}, {\bf v})$.
By multiplying Eq. (2.1) by $x^{j}$ and integrating by parts, and likewise for $v^{j}$, we obtain
$d\overline {x^{j}}/dt=\overline {v^{j}}$, $d\overline {v^{j}}/dt=-\overline {v^{j}}/\tau$, so
$\overline {v^{j}}=v^{j}(0)\exp -t/\tau$ and
\[\overline {x^{j}}=v^{j}(0)\tau \big[1-e^{-t/\tau}\big]\]
\noindent (assuming $x^{j}(0)=0$). Likewise, $d\overline {{{x^{j}}^{2}}}/dt=2\overline {x^{j}v^{j}}$,
$d\overline {x^{j}v^{j}}/dt=\overline {{{v^{j}}^{2}}}- \overline {x^{j}v^{j}}/\tau$,
$d\overline {{{v^{j}}^{2}}}/dt=-2\overline {{{v^{j}}^{2}}}/\tau+2\beta/\tau^{2}$, so we obtain
\begin{mathletters}
\label{all2.2}
\begin{equation}
\overline {{v^{j}}^{2}}-{\overline {v^{j}}}^{2}={\beta\over\tau}\big[1-e^{-2t/\tau}\big]
\end{equation}
\begin{equation}
(\Delta x)^{2}\equiv\overline {{{x^{j}}^{2}}}-{\overline {x^{j}}}^{2}
=2\beta\tau\bigg[t/\tau-\big(1-e^{-t/\tau}\big)-{1\over 2}{\big(1-e^{-t/\tau}\big)}^{2}\bigg]
\end{equation}
\begin{equation}
(\Delta x)^{2}\ \mathop{\longrightarrow}\limits_{t<<\tau}\ {2\beta t^{3}\over3\tau^{2}}\bigg[1-{3t\over 4\tau}+...\bigg],
\qquad\qquad
(\Delta x)^{2}\ \mathop{\longrightarrow}\limits_{t>>\tau}\ 2\beta t.
\end{equation}
\end{mathletters}
\noindent We particularly call attention to the $\sim t^{3}$ diffusion for $t<<\tau$.
One may readily express the variables $\beta$ and $\tau$ in terms of physical quantities. From
$Md\overline {v^{j}}/dt=-M\overline {v^{j}}/\tau$ we see that the damping force is
$-(M/\tau){\bf v}\equiv-\xi{\bf v}$. According to the equipartition theorem,
the equilibrium value of $\overline {{{v^{j}}^{2}}}$ is $kT/M$ so,
by Eq. (2.2a), $kT/M=\beta/\tau$. Thus
\begin{equation}\label{2.3}
\beta=kT/\xi, \thinspace\thinspace\thinspace\thinspace\thinspace\thinspace\tau=M/\xi.
\end{equation}
\subsection {Viscosity Factor}\label{II B}
We now consider various expressions for $\xi$.
In the case of a sphere in a fluid, as is well known, according to Stokes, the drag coefficient is
\begin{equation}\label{2.4}
\xi=6\pi\eta R, \qquad (l_m <<R),
\end{equation}
\noindent where $\eta$ is the viscosity of the fluid and
$l_m$ is the molecular mean free path.
If the fluid is a gas, as the gas density decreases there are three realms for air-sphere interaction.
At high enough density so that $l_{m}<<R$, is the viscous realm: here $\xi$ is given by Stokes law (2.4). At lower density,
where $l_{m}>>R$, is the molecular realm. Here a colliding molecule may be
considered to have a thermal velocity distribution since its last collision before hitting the
sphere occurs so far away from the sphere that it is unaffected by the sphere's velocity (i.e.,
viscosity and hydrodynamic considerations are irrelevant). However, in this case, many molecular collisions
occur over the shortest resolvable time
interval so the Brownian motion assumptions still apply, and $\xi$ is given by Eq. (2.5) below.
At very low density, over time intervals
where individual molecular collisions with the sphere
can be resolved, is the impact realm where the Brownian motion assumptions no longer apply.
In the molecular realm, Stokes law is no longer accurate. Experimental
investigations into the correction to Stokes law,
begun by Millikan\cite{Millikan}
and continued to this day, are fitted by
\[\xi=\frac{6\pi\eta R}{1+(l_{m}/R)[\alpha+\beta\exp-(\gamma R/l_{m})]}
\]
\noindent where e.g., recent measurements\cite{Aerosol} on polystyrene spheres give
$\alpha\approx1$, $\beta\approx.6$ and $\gamma\approx1$. In the limit $l_{m}>>R$ it is readily
shown\cite{Cunningham,Epstein}, assuming specular reflection of air molecules (other assumptions moderately
alter the numerical coefficients), that $\alpha=3/2$, $\beta=0$. In this case, using $\eta=(1/3)nm_{g}{\overline u}l_{m}$
($n$ is the gas molecular number density, $m_{g}$ is the mass of a gas molecule and ${\overline u}$ is its mean velocity)
in the above equation, the dependence upon $l_{m}$ disappears as one expects, resulting in
\begin{equation}\label{2.5}
\xi=(4\pi/3)nm_{g}{\overline u}R^{2}=(8/3)nR^{2}(2\pi m_{g}kT)^{1/2}, \qquad (l_m >>R)
\end{equation}
\noindent where we have used ${\overline u}=(8kT/\pi m_{g})^{1/2}$.
In the case where the sphere moves in thermal
radiation it is shown in Appendix D that a result of Einstein and Hopf\cite{EinsteinandHopf,Einstein}
may be adapted to obtain, for a dielectric sphere of large dielectric constant (but also true up to a
numerical constant for other shaped objects, where $R^{3}$ is replaced by the object's volume),
\begin{equation}\label{2.6}
\xi={4(2\pi)^{7}\over 135}\hbar R^{6}\bigg({kT\over\hbar c}\bigg)^{8}.
\end{equation}
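To illustrate the strong temperature dependence of Eq. (2.6) (a numerical sketch of ours): the drag, and hence the radiation-induced diffusion, scales as $T^{8}$, so cooling from room temperature to liquid He temperature suppresses $\xi$ by about fifteen orders of magnitude.

```python
import math

hbar = 1.054571817e-34   # J s
c    = 2.99792458e8      # m/s
k_B  = 1.380649e-23      # J/K

def xi_rad(R, T):
    """Einstein-Hopf drag coefficient of Eq. (2.6) for a sphere of radius R at temperature T."""
    return 4 * (2 * math.pi) ** 7 / 135 * hbar * R ** 6 * (k_B * T / (hbar * c)) ** 8

R = 1e-7  # m (the 10^-5 cm sphere)
suppression = xi_rad(R, 4.2) / xi_rad(R, 300.0)  # equals (4.2/300)^8, roughly 1e-15
```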
When Brownian motion assumptions apply, regardless of the physical source of $\xi$,
the rms diffusion distance $\Delta x$'s long time and short time behaviors differ.
Einstein's well known result\cite{Einstein1904} for $t>>\tau$ and the result for
$t<<\tau$ follow from Eqs. (2.2c), (2.3):
\begin{mathletters}
\label{alll2.7}
\begin{equation}
\Delta x\ \mathop{\longrightarrow}\limits_{t>>\tau}\ \bigg[{2kTt\over\xi}\bigg]^{1/2}
\end{equation}
\begin{equation}
\Delta x\ \mathop{\longrightarrow}\limits_{t<<\tau}\ \bigg[{2kT\xi t^{3}\over3M^{2}}\bigg]^{1/2}.
\end{equation}
\end{mathletters}
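Eqs. (2.2b) and (2.7) can be checked directly (our sketch, in dimensionless units $\beta=\tau=1$): the exact variance approaches $2\beta t^{3}/3\tau^{2}$ for $t<<\tau$ and $2\beta t$ for $t>>\tau$.

```python
import math

def dx2(t, beta, tau):
    """Exact position variance of Eq. (2.2b)."""
    e = 1.0 - math.exp(-t / tau)
    return 2.0 * beta * tau * (t / tau - e - 0.5 * e * e)

beta, tau = 1.0, 1.0
short = dx2(1e-3, beta, tau)   # ~ 2*beta*t^3/(3*tau^2), first correction (1 - 3t/(4*tau))
long_ = dx2(1e3, beta, tau)    # ~ 2*beta*t
```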
\section{CSL Random Walk of a Sphere}\label{Section III}
\subsection {Diffusion of Center of Mass}\label{III A}
In the case of CSL, we consider an ensemble of sphere CM wavefunctions $\langle{\bf q}|\psi,t\rangle_{w}$,
each evolving under a particular sample field $w(\bf x ,t)$. They
are described by the density matrix $\rho (t)$ whose evolution equation (see Appendix A) satisfies
\begin{eqnarray}\label{3.1}
&&{\partial \langle{\bf q}|\rho(t)|{\bf q}'\rangle \over \partial t}=
-i \langle{\bf q}|\bigg[ {{\bf P}^{2}\over 2M}, \rho(t)\bigg]|{\bf q}'\rangle\nonumber\\
&&\qquad\qquad\qquad\qquad\qquad -{\lambda N^{2}\over V^{2}}\int\int_{V}
d{\bf z}d{\bf z}'\bigg[\Phi({\bf z}-{\bf z}')-\Phi({\bf z}-{\bf z}'+{\bf q}-{\bf q}')\bigg]
\langle{\bf q}|\rho(t)|{\bf q}'\rangle
\end{eqnarray}
\noindent under the approximation that the mass in the sphere is uniformly spread throughout it.
In Eq. (3.1), ${\bf P}$ is the CM momentum operator,
$V$ is the volume of the sphere and $\Phi({\bf z})\equiv \exp - {\bf z}^{2}/4a^{2}$.
The electrons have been neglected because of their smaller mass and lower collapse rate, and
the proton and neutron masses are taken to be equal for simplicity, so Eq. (3.1)
depends just upon the nucleon number $N$.
Eq. (3.1) may be used to calculate ensemble averages of expectation values of operators:
$\overline{\langle F\rangle}(t)\equiv Tr[F\rho(t)]$. The trace of the CSL
term in Eq. (3.1) multiplied by the CM position operators
$Q^{j}$ (or any function of them), by $P^{j}$ or by $Q^{j}P^{j}+P^{j}Q^{j}$ vanishes. Thus we obtain
$d\overline {\langle Q^{j}\rangle}/dt=\overline {\langle P^{j}\rangle}/M$,
$d\overline {\langle P^{j}\rangle}/dt=0$ and so $\overline{\langle Q^{j}\rangle}(t)=0$
(assuming $\langle Q^{j}\rangle (0)=0$) and $\overline {\langle P^{j}\rangle}(t)=0$ (assuming $\langle P^{j}\rangle (0)=0$).
However, the trace of the CSL term in Eq. (3.1) multiplied by
${P^{j}}^{2}$ does not vanish. Collapse increases
energy because it narrows wavefunctions and the references given in section 1 show that (neglecting the
collapse behavior associated with the electrons) the rate of energy increase is given by
\begin{equation}\label{3.2}
{d\over dt}\overline{\langle H\rangle}={3\lambda\hbar^{2} N^{2}\over 4M a^{2}},
\end{equation}
\noindent As is shown in Appendix A, for the CM part of the energy it follows from Eq. (3.1) that
\begin{equation}\label{3.3}
{d\over dt}{\overline {{{\langle {P^{j}}^{2}\rangle}}}\over 2M}={\lambda\hbar^{2} N^{2}f(R/a)\over 4M a^{2}}.
\end{equation}
\noindent The factor $f$ essentially characterizes the collapse rate
when the sphere is displaced by the distance $a$ (see the discussion after Eq. (A10) in Appendix A).
$f(R/a)$, given in analytic form in Eq. (A9b), is a monotonically decreasing function of its argument, with $f(0)=1$,
$f(1)=.62$ and $f(R/a)\rightarrow 6(a/R)^{4}$ for $R>>a$. Summing Eq. (3.3) over the
three values of $j$ and comparison with Eq. (3.2) shows that, for small $R/a$, the
excitation of the CM accounts for almost all of the sphere's energy increase but, as $R/a$ increases,
internal nuclear excitation (too small to observe at present) accounts for more of it.
Therefore, using Eq. (3.1),
since $d\overline {{{\langle {Q^{j}}^{2}\rangle}}}/dt=\overline {\langle Q^{j}P^{j}+P^{j}Q^{j}\rangle}/M$,
$d \overline {\langle Q^{j}P^{j}+P^{j}Q^{j}\rangle}/dt=2\overline {{{\langle {P^{j}}^{2}\rangle}}}/M$ and
$d\overline {{{\langle {P^{j}}^{2}\rangle}}}/dt$ is given by Eq. (3.3), we find
\begin{equation}\label{3.4}
\overline {{{\langle {Q^{j}}^{2}\rangle}}} =
\langle \bigg( Q^{j}+\frac{P^{j}t}{M}\bigg)^{2}\rangle (0)+{\lambda\hbar^{2}f(R/a)t^{3}\over 6 m^{2}a^{2}}
\end{equation}
\noindent where $m$ is the mass of a nucleon.
We note the $\sim t^{3}$ diffusion associated with a random force without damping.
This occurs essentially because the average square velocity is
increasing so the distance of each ``step" in the random walk increases with time.
In Eq. (3.4) we have utilized $M=Nm$ to emphasize that, for $R<<a$, the
diffusion is ``universal," i.e., independent of the material and size (or, it turns out, shape) of the
piece of matter and, in general, that the dependence on $N$ is only indirect, through the sphere's radius $R$.
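As a quick consistency check (an illustrative Python sketch, not part of the original derivation; the numerical values below are arbitrary placeholders, not physical constants), integrating the constant momentum-diffusion rate of Eq. (3.3) twice does reproduce the $t^{3}$ coefficient of Eq. (3.4):

```python
# Arbitrary placeholder values; only the algebraic identity is being tested.
lam, hbar, N, m, a, f = 2.0, 3.0, 5.0, 7.0, 11.0, 0.5
M = N * m
# Eq. (3.3): d<P^2>/dt = 2M * (lambda hbar^2 N^2 f / (4 M a^2)) = const = K
K = lam * hbar**2 * N**2 * f / (2 * a**2)
# Then <P^2> = K t, <QP+PQ> = K t^2 / M, d<Q^2>/dt = <QP+PQ>/M,
# so the t^3 term of <Q^2> is K t^3 / (3 M^2).
coeff_from_integration = K / (3 * M**2)
# Eq. (3.4) claims the coefficient is lambda hbar^2 f / (6 m^2 a^2), using M = N m.
coeff_eq_3_4 = lam * hbar**2 * f / (6 * m**2 * a**2)
print(abs(coeff_from_integration - coeff_eq_3_4) < 1e-12)
```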
\subsection {Wavepacket Width}\label{III B}
Eq. (3.4) is the result needed to describe CSL random walk. However, it is necessary to show that the $\sim t^{3}$
term is not due to an increase in the width of the CM wavepackets in the ensemble but truly due to the
diffusion of the centers of the packets. That is, the ensemble mean square wavepacket width is
$\overline{s^{2}}\equiv\overline{{{\langle [Q^{j}-\langle Q^{j}\rangle]^{2}\rangle}}}$;
what we want is the ensemble mean of the squared
displacement of the centers of the wavepackets, $\overline{\langle Q^{j}\rangle^{2}}$;
and what we have from Eq. (3.4) is $\overline {{\langle {Q^{j}}^{2}\rangle}}=\overline{s^{2}}
+ \overline{\langle Q^{j}\rangle^{2}}$.
Under the combined influence of the collapse (which tends to narrow wavefunctions) and the
normal Schr\"odinger evolution of a free object (which tends to expand wavefunctions),
$\overline{s^{2}}$ tends to an equilibrium size in a characteristic time $\tau_{s}$.
This has been discussed in the context of the GRW
model\cite{GRW,BGG,BG} and for a particle in a simple continuous collapse model by Diosi\cite{Diosi4}. We discuss
it for the CSL model in Appendix B. It requires a separate treatment
because $\overline{\langle Q^{j}\rangle^{2}}$ (which is
{\it not} ${\overline{\langle Q^{j}\rangle}}^{2}=0$), and hence $\overline{s^{2}}$, cannot be found
from the density matrix, since they involve an ensemble average over a quantity that is quartic in the statevector.
According to Appendix B, the asymptotic CM wavepacket width is the same for every wavepacket
(i.e., no ensemble average need be involved):
\begin{equation}\label{3.5}
s^{2}(t)\thinspace\thinspace\thinspace\thinspace\thinspace\thinspace_{\overrightarrow{t>>\tau_{s}}}
\thinspace\thinspace\thinspace\thinspace\thinspace\thinspace
s^{2}_{\infty}=\bigg[{a^{2}\hbar\over 2\lambda m N^{3} f(R/a)}\bigg]^{1/2}.
\end{equation}
\noindent This expression for $s_{\infty}$ can be understood as follows. If a wavepacket has width $s$,
due to its Schr\"odinger evolution it expands a distance $\sim (\hbar/Ms)\Delta t$ in time $\Delta t$.
Due to the collapse evolution it contracts a distance $\sim$ (collapse rate)$\Delta t\cdot$(fractional decrease)$s$.
As discussed in Appendix A (after Eq. (A10)) the collapse rate is $\lambda N^{2}f$. The fractional decrease is
that which would occur if a gaussian of width $s<<a$ is multiplied by a gaussian of width $a$, namely
$(s/a)^{2}$. Thus it contracts a distance $\lambda N^{2}f(s/a)^{2}s\Delta t$. Equating the
Schr\"odinger expansion to the collapse contraction and solving for $s^{2}$ gives the result (3.5), up to a numerical factor.
The characteristic time to reach this width is
\begin{equation}\label{3.6}
\tau_{s}={Nms^{2}_{\infty}\over\hbar}.
\end{equation}
\noindent This can be understood as the time it takes a packet of width $s_{\infty}$ to expand by
the distance $s_{\infty}$: $(\hbar/Ms_{\infty})\tau_{s}=s_{\infty}$.
Appendix B also shows that the CSL diffusive behavior soon becomes the dominant contribution to
$\overline{\langle {Q^{j}}^{2}\rangle}$ once equilibrium has been reached:
with initial equilibrium conditions, Eq. (3.4) becomes
\begin{equation}\label{3.7}
\overline{\langle {Q^{j}}^{2}\rangle}=s_{\infty}^{2}+s_{\infty}^{2}\bigg[{t\over \tau_{s}}+
{t^{2}\over 2\tau_{s}^{2}}+{t^{3}\over 12\tau_{s}^{3}}\bigg]
\end{equation}
\noindent Eqs. (3.5), (3.6) are derived in Appendix B under the assumption that $s^{2}(t)<<a^{2}$.
From Eq. (3.5) this implies $N>3\cdot 10^{7}$ nucleons which, for ordinary matter densities
(1gm/cc $< D <$ 20gm/cc)
means that Eqs. (3.5), (3.6) hold for $R>2\cdot 10^{-6}$cm.
\section{Translational Diffusion Of A Sphere: Numerical Values}\label{Section IV}
We shall now put numbers into these equations so as to consider the conditions
necessary to observe CSL-induced diffusion of a sphere. In the following we shall use
the GRW parameter values, $\lambda^{-1}=10^{16}$sec and $a=10^{-5}$cm, until section VII
when we consider the full range of allowable parameter values for the theory.
\subsection{CSL Diffusion Alone}\label{IV A}
According to Eq. (3.4), under the CSL mechanism acting alone, the rms distance along an axis the sphere diffuses is
\begin{equation}\label{4.1}
\Delta Q^{j}={\hbar\over m a}\bigg[{\lambda ft^{3}\over6}\bigg]^{1/2}
=6.5f^{1/2}{(t\thinspace\thinspace{\rm days})}^{3/2}\thinspace\thinspace{\rm cm}
\end{equation}
\noindent where $f=1$ for $R<<a$. Since Eq. (4.1) does not depend directly upon $N$,
it is independent of the density $D$. It does
depend upon $R$ through $f$, decreasing rapidly as $R$ increases:
\[ \Delta Q^{j}\thinspace\thinspace\thinspace_{\overrightarrow {R>>a}}
16(a/R)^{2}{(t\thinspace\thinspace{\rm days})}^{3/2}\thinspace\thinspace{\rm cm}.\]
\noindent Table 1 lists $\Delta Q$ for various values of $R$ and $t$.
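The quoted numerical coefficients can be checked directly (an illustrative Python sketch, not part of the original paper; CGS units, GRW values $\lambda=10^{-16}$sec$^{-1}$, $a=10^{-5}$cm, rounded constants $\hbar$ and the nucleon mass $m$):

```python
import math

hbar, m = 1.055e-27, 1.67e-24        # erg s; nucleon mass, g
lam, a = 1e-16, 1e-5                 # GRW values: 1/s, cm
day = 86400.0

# Eq. (4.1) with f = 1 (R << a): Delta Q = (hbar/(m a)) sqrt(lam t^3 / 6)
c_small = (hbar / (m * a)) * math.sqrt(lam * day**3 / 6)
# R >> a: f -> 6 (a/R)^4, so the coefficient picks up a factor sqrt(6) (a/R)^2
c_large = c_small * math.sqrt(6)
print(c_small, c_large)              # ~6.5 and ~16 cm per (t days)^{3/2}
```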
Although diffusion is all we shall be concerned with in the remainder of this paper,
we give here some values for $s_{\infty}$ and
$\tau_{s}$. From Eqs. (3.5), (3.6) it follows that
\[ s_{\infty}\thinspace_{\overrightarrow{R<<a}}
3.8\cdot 10^{-7}D^{-3/4}(a/R)^{9/4}\thinspace{\rm cm}\thinspace,\quad
s_{\infty}\thinspace_{\overrightarrow{R>>a}}
2.4\cdot 10^{-7}D^{-3/4}(a/R)^{5/4}\thinspace{\rm cm}\]
\[\tau_{s}\thinspace_{\overrightarrow{R<<a}}.58 D^{-1/2}(a/R)^{3/2}\thinspace{\rm sec},\qquad
\tau_{s}\thinspace_{\overrightarrow{R>>a}}
.23 D^{-1/2}(R/a)^{1/2}\thinspace{\rm sec}\]
\noindent ($D$ is in gm/cc). Table 2 lists values
of $s_{\infty}$ and $\tau_{s}$ for various values of $R$,
for a sphere of density $D=1$gm/cc. (We note that $\tau_{s}$ increases as $R$ moves away from $a$
in either direction since as $R$ decreases the collapse rate decreases and as $R$ increases the Schr\"odinger
spreading rate decreases).
For example, an $R=10^{-5}$cm sphere's center of mass wavefunction reaches equilibrium size
$s_{\infty}\approx 4\cdot 10^{-7}$cm in $\tau_{s}\approx .6$sec, and diffuses $\Delta Q\approx$ (60microns, 5cm)
in (1000sec, 1day).
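The numbers in this example follow from Eqs. (3.5), (3.6) and (4.1) (an illustrative Python sketch, not part of the original paper; CGS units, GRW parameters, rounded constants, and $f(1)\approx .62$ from Appendix A):

```python
import math

hbar, m = 1.055e-27, 1.67e-24            # erg s; nucleon mass, g
lam, a = 1e-16, 1e-5                     # GRW values: 1/s, cm
R, D = 1e-5, 1.0                         # sphere radius (cm) and density (g/cc)
f = 0.62                                 # f(R/a) at R = a

N = D * (4.0 / 3.0) * math.pi * R**3 / m            # number of nucleons
s_inf = (a**2 * hbar / (2 * lam * m * N**3 * f))**0.25   # Eq. (3.5)
tau_s = N * m * s_inf**2 / hbar                          # Eq. (3.6)
dQ = lambda t: (hbar / (m * a)) * math.sqrt(lam * f * t**3 / 6)  # Eq. (4.1)
print(s_inf, tau_s, dQ(1000.0), dQ(86400.0))
```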
It is worth examining the diffusion to be expected were the CSL hypothesis to
be false ($\lambda=0$) and the Copenhagen concept of collapse
somehow occurring ``upon observation" to be employed. In this case,
Eq. (3.4) with $\lambda=0$ gives $\Delta Q_{QM}=\langle P/M\rangle(0)t$.
If the sphere is observed at $t=0$, localized to $\approx 2R$, we may take $\langle P\rangle(0)\approx\hbar/4R$.
If the sphere is then ``in the dark" (unobserved) until time $t$, we obtain from
$\Delta Q_{QM}\approx t\hbar/[D(4/3)\pi R^{3}4R]$, for $R=10^{-5}$cm and $D=1$gm/cc, that
$\Delta Q_{QM}\approx (6\cdot10^{-6}$cm, $5\cdot10^{-4}$cm) in (1000sec, 1day). These numbers are
smaller than their CSL counterparts by the respective factors $(10^{-3}, 10^{-4})$.
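These Copenhagen-style estimates can be reproduced numerically (an illustrative Python sketch, not part of the original paper; CGS units, rounded constants):

```python
import math

hbar, m = 1.055e-27, 1.67e-24        # erg s; nucleon mass, g
R, D = 1e-5, 1.0                     # sphere radius (cm), density (g/cc)
M = D * (4.0 / 3.0) * math.pi * R**3 # sphere mass, g
v = hbar / (M * 4 * R)               # drift speed <P>(0)/M with <P>(0) ~ hbar/(4R)
print(v * 1000.0, v * 86400.0)       # drift in 1000 sec and 1 day, cm
```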
\subsection {Diffusion in Air}\label{IV B}
The CSL diffusion distances in vacuum given in Table 1 are much reduced
by the $-\xi {\bf v}$ damping due to collisions with air molecules.
[Another effect of
these collisions is that, as the air molecules
collide with the sphere they become entangled with its states, increasing the effective collapse rate.
This effect is complicated,
depending upon the differences in air-molecule density, between the sphere's competing quantum states,
in $a^{3}$-sized volumes surrounding the sphere. Because the air molecule density is much less than the
sphere density and because the quantum states of the sphere which compete in the collapse ``game" are
so spatially close, I shall ignore this effect.]
The Fokker-Planck equation for the combined CSL and Brownian diffusion in air, which replaces Eq. (2.1), is
\begin{equation}\label{4.2} {\partial\rho\over\partial t}=
\sum_{j=1}^{3} \bigg\{ -v^{j}{\partial\rho\over\partial x^{j}}+{\xi\over M}{\partial v^{j}\rho\over\partial v^{j}}
+\bigg[{kT\xi\over M^{2}}+
{\lambda \hbar ^{2}f\over 4 m^{2}a^{2}}\bigg]{\partial^{2}\rho\over\partial {v^{j}}^{2}} \bigg\}
\end{equation}
\noindent The long time and short time diffusion expressions which replace Eqs. (2.7a,b) are
\begin{mathletters}
\begin{equation}\label{4.3a}
(\Delta x)^{2}\thinspace\thinspace\thinspace_{\overrightarrow{t>>\tau}} \bigg[{2kT\over \xi}+
\bigg({M\over\xi}\bigg)^{2}{\lambda \hbar ^{2}f\over 2 m^{2}a^{2}}\bigg]t.
\end{equation}
\begin{equation}\label{4.3b}
(\Delta x)^{2}\thinspace\thinspace\thinspace_{\overrightarrow{t<<\tau}} \bigg[{2kT\xi\over 3M^{2}}+
{\lambda \hbar ^{2}f\over 6 m^{2}a^{2}}\bigg]t^{3}.
\end{equation}
\end{mathletters}
\subsubsection {Viscous Realm}\label{IV B1}
First consider the viscous realm. At room temperature $T_{0}$
and atmospheric pressure $p_{0}$, the mean free path of
air (N$_{2}$ or O$_{2}$) is $l_{m}\approx .6\cdot 10^{-5}$cm. For $\xi$ we use
Stokes' law (2.4) or the corrected equation following it
(needed for $R=10^{-5}$cm since then $l_{m}\approx R$, which amounts to a
40\% decrease in $\xi$ if $\alpha=3/2$ and $\beta=0$): with $\eta\approx 2\cdot 10^{-4}$gm/cm-sec we have
\[ \tau=M/\xi\approx (2\cdot 10^{-6}, \thinspace\thinspace 10^{4}){\rm sec\thinspace\thinspace for\thinspace\thinspace}
R=(10^{-5}, \thinspace\thinspace 1){\rm cm}\]
\noindent Then we may apply Eq. (4.3a) for $t>\tau$
to obtain
\begin{mathletters}
\begin{equation}\label{4.4a}
\Delta x _{BR}=\bigg[{2kT\over \xi}t\bigg]^{1/2}\approx(.6,\thinspace\thinspace 1.4\cdot 10^{-3})(t {\rm days})^{1/2}
{\rm cm\thinspace\thinspace for\thinspace\thinspace}
R=(10^{-5}, \thinspace\thinspace 1){\rm cm}
\end{equation}
\begin{equation}\label{4.4b}
\Delta x_{CSL}=\bigg({M\over\xi}\bigg)\bigg({\lambda \hbar^{2}f\over 2m^{2}a^{2}}t\bigg)^{1/2}
\approx 3\cdot 10^{-11}D(t {\rm days})^{1/2}
{\rm cm\thinspace\thinspace for\thinspace\thinspace}
R\geq 10^{-5}{\rm cm}
\end{equation}
\end{mathletters}
\noindent (note that Eq. (4.4b) is independent of $R$). For $t<\tau$, where Eq. (4.3b) applies,
the ratio $\Delta x_{CSL}/\Delta x _{BR}$ is the same as that given in Eqs. (4.4).
Clearly the CSL diffusion
is swamped by the Brownian diffusion in the viscous realm (especially since it
is the squares of Eqs. (4.4) that add in Eq. (4.3a)).
\subsubsection {Molecular Realm}\label{IV B2}
We next turn to the molecular realm where $l_{m}>>R$, which can
be achieved by lowering the air density through lowering the air pressure $p$ (which we shall give
in units of picoTorr: 1pT=$10^{-12}$Torr). We focus upon
$R=10^{-5}$cm spheres since, if $R>>a$, $\Delta x_{CSL}$ decreases $\sim R^{-2}$ and so is less easily observed (and also
$\Delta x _{BR}\sim R^{-2}$ so no relative advantage is gained by increasing $R$). Moreover,
spheres of this size have been typical of observations of Brownian motion in air \cite{Millikan,Andrade}.
From Eq. (2.5) we find the time to reach thermal equilibrium is
\[\tau=M/\xi\approx 2\cdot 10^{9}(T/T_{0})^{1/2}(p\thinspace {\rm pT})^{-1}{\rm sec}\]
\noindent Since $\tau$ is so long for picoTorr pressure or less, we apply Eq. (4.3b) for $t<<\tau$:
\begin{mathletters}
\begin{equation}\label{4.5a}
\Delta x_{BR}=\bigg[{2kT\xi\over 3M^{2}}t^{3}\bigg]^{1/2}\approx 2\cdot 10^{-4}(p\thinspace
{\rm pT})^{1/2}(T/T_{0})^{1/4}D^{-1}t^{3/2}{\rm cm}
\end{equation}
\begin{equation}\label{4.5b}
\Delta x_{CSL}=\bigg[{\lambda \hbar ^{2}f\over 6 m^{2}a^{2}}t^{3}\bigg]^{1/2}
\approx 2\cdot 10^{-7}t^{3/2}{\rm cm}
\end{equation}
\end{mathletters}
\noindent ($t$ is in sec).
It follows from Eqs. (4.5a,b), even at liquid He temperature $T=4.2^{\circ}$K
and for a dense sphere $D=10$gm/cc, that $\Delta x_{BR}\approx\Delta x_{CSL}$
requires the very low pressure of $p\approx 10^{-3}$pT. But, at this low pressure, the Brownian
assumption of many molecule-sphere collisions occurring in the shortest observable time interval
no longer applies. Therefore, observation of CSL diffusion requires
the air density to be low enough to be in the impact realm.
\subsubsection {Impact Realm}\label{IV B3}
Consider the mean time between molecule-sphere collisions. The mean number
of collisions/sec-area of air molecules is $(n\overline{u} /4)$ so the mean time between
molecule-sphere collisions is $\tau_{c}=[(n\overline{u} /4)(4\pi R^{2})]^{-1}$.
Moreover, the change in speed of the sphere due to one collision with an air molecule is
$\Delta v\approx \overline{u}(m_{g}/M)$. With $\overline{u}\approx 4.5\cdot 10^{4}(T/T_{0})^{1/2}$cm/sec
and $n\approx 2.5\cdot 10^{19}(p/p_{0})(T_{0}/T)$cm$^{-3}$ we obtain, for an $R=10^{-5}$cm sphere,
\begin{equation}\label{4.6}
\tau_{c}\approx 2(T/T_{0})^{1/2}(p\thinspace {\rm pT})^{-1}{\rm sec}, \quad
\Delta v\approx 5\cdot 10^{-5}(T/T_{0})^{1/2}{\rm cm/sec}.
\end{equation}
\noindent
[Incidentally, we can understand the molecular realm's Brownian motion in terms of the
impact realm motion if we consider the impact realm but for $t>>\tau_{c}$,
so that many collisions have occurred in time $t$ and so
Brownian motion considerations apply.
Then $\Delta x_{BR}$ in Eq. (4.5a) may be written in terms of the quantities in Eq. (4.6),
using $\xi$ taken from Eq. (2.5):
\[\Delta x_{BR}\sim[(kT/M^{2})nR^{2}(m_{g}kT)^{1/2}t^{3}]^{1/2}\sim \Delta v t[t/\tau_{c}]^{1/2}.\]
\noindent This says that the Brownian diffusion distance in time $t$ is the distance the sphere goes with the speed
it gets from a single collision multiplied by the square root of the number of collisions
(the expected fluctuation in the number of collisions)].
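The numbers quoted in Eq. (4.6) for $\tau_{c}$ follow from the stated flux and density (an illustrative Python sketch, not part of the original paper; $p_{0}=760$Torr, $R=10^{-5}$cm):

```python
import math

T0, p0 = 300.0, 760.0                    # room temperature (K), atmosphere (Torr)
R = 1e-5                                 # sphere radius, cm

def tau_c(p_pT, T):
    """Mean time between molecule-sphere collisions, sec (cf. Eq. (4.6))."""
    n = 2.5e19 * (p_pT * 1e-12 / p0) * (T0 / T)   # molecules per cc
    u = 4.5e4 * math.sqrt(T / T0)                 # mean molecular speed, cm/s
    return 1.0 / ((n * u / 4) * 4 * math.pi * R**2)

print(tau_c(1.0, T0))                    # ~2 sec at 1 pT, room temperature
```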
Eq. (4.5b) (the same as Eq. (4.1)) gives the CSL diffusion distance in the impact realm.
In order to observe CSL diffusion over the largest distance, one wants $\tau_{c}$ to be as long as possible and so,
by Eq. (4.6), one wants the lowest possible pressure.
An experiment conducted at pressure $<5\cdot 10^{-17}$Torr at $4.2^{\circ}$K has been reported\cite{Gabrielse}.
In these conditions, the mean collision time is $\tau_{c}\approx 80$min. In this time, according to Eq. (4.5b),
$\Delta x_{CSL}\approx .7$mm. This should be readily observable, so much so that it encourages one to
contemplate an experiment to test CSL over a wide range of parameter values (Section VIII).
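Both figures for the cited experimental conditions follow from Eqs. (4.6) and (4.5b) (an illustrative Python sketch, not part of the original paper):

```python
import math

T0, T = 300.0, 4.2                       # room temperature and liquid He, K
p_pT = 5e-17 / 1e-12                     # 5e-17 Torr expressed in picoTorr
tau_c = 2.0 * math.sqrt(T / T0) / p_pT   # Eq. (4.6): mean collision time, sec
dx_csl = 2e-7 * tau_c**1.5               # Eq. (4.5b): CSL diffusion in tau_c, cm
print(tau_c / 60.0, dx_csl)              # ~80 min, ~0.07 cm (.7 mm)
```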
\subsection {Diffusion in a Thermal Radiation Bath}\label{IV C}
For completeness, we note that, even when one eliminates collisions of the sphere with air molecules
over some sufficiently long time interval, there is still thermal radiation to supply
damping and random impacts and thus induce Brownian motion. However, as we shall soon see,
this is very small at liquid Helium temperature.
The viscosity coefficient in the case of thermal radiation, obtained in Appendix D
and cited in Eq. (2.6), has the numerical value
\begin{equation}\label{4.7}
\xi_{RAD}\approx 4\cdot 10^{-29}(R/10^{-5})^{6}(T/T_{0})^{8} {\rm \thinspace gm/sec}
\end{equation}
\noindent ($R$ is in cm).
It follows from this that the time (2.3) to reach thermal equilibrium is
\begin{equation}\label{4.8}
\tau_{RAD}=M/\xi_{RAD}\approx 10^{14}D(R/10^{-5})^{-3}(T/ T_{0})^{-8}{\rm sec}.
\end{equation}
\noindent At room temperature or less, $\tau_{RAD}$ is so long that only the case of $t<<\tau_{RAD}$ is of interest.
Then, using Eqs. (2.7b) and (2.6),
\begin{equation}\label{4.9}
\Delta x_{RAD}=\bigg[{2kT\xi_{RAD}t^{3}\over3M^{2}}\bigg]^{1/2}
\approx 8D^{-1}(T/ T_{0})^{9/2}(t/10^{5})^{3/2}{\rm cm}.
\end{equation}
\noindent (note that (4.9) is independent of $R$). According to Eq. (4.9), at room temperature a
$D=1$gm/cc sphere of any radius will diffuse
$\approx 7$cm/day due to thermal radiation alone. This is essentially equal in magnitude to the CSL
diffusion for an $R=10^{-5}$cm sphere (which is
unaffected by such a small damping coefficient). However, at $T=4.2^{\circ}$K we have
$\Delta x_{RAD}\approx 4\cdot 10^{-8}$cm in a day which is utterly negligible.
Therefore, at liquid He temperature, which is needed to obtain the low
pressure of the impact realm, we need not consider the random walk due to thermal radiation.
\section {Translational Diffusion Of A Disc}\label{V}
Because rotation through $2\pi$rad may be more easily detected than a comparable translation,
our experimental proposal (section VIII) is based upon observing rotational diffusion. However, since a uniform sphere
displays no CSL rotational diffusion, we consider a less symmetrical object, a disc.
In this section, for completeness, we discuss translational diffusion of a disc. In section VI we shall discuss
rotational diffusion of a disc.
\subsection {Brownian Diffusion}\label{VA}
Consider a disc of radius $L$ and thickness $b$.
For Brownian motion, the time dependence of the rms diffusion is given in section II in terms of
$\xi$ (e.g., Eqs. (2.7)). In the viscous realm, for an oblate spheroid $(x^{2}+y^{2})/L^{2}+z^{2}/(b/2)^{2}=1$
(close enough to a disc), Lamb\cite{Lamb} shows that, for $b<<L$,
\begin {mathletters}\label{5.1}
\begin {equation}
\xi\approx 16\eta L \qquad ({\rm motion\thinspace\thinspace perpendicular\thinspace\thinspace to \thinspace\thinspace face})
\end {equation}
\begin {equation}
\xi\approx (32/3)\eta L \qquad ({\rm motion\thinspace\thinspace along\thinspace\thinspace edge}).
\end {equation}
\end{mathletters}
\noindent This is not qualitatively different from Stokes' law (2.4) (with $R\approx L$).
In the molecular realm where $l_m>>(L,b)$, one may readily calculate, as in
references \cite{Cunningham,Epstein}, assuming specular reflection of the molecules in the disc rest frame,
\begin {mathletters}\label{5.2}
\begin {equation}
\xi=4n L^{2}(2\pi m_{g}kT)^{1/2}\qquad
({\rm motion\thinspace\thinspace perpendicular\thinspace\thinspace to \thinspace\thinspace face})
\end {equation}
\begin {equation}
\xi=2n Lb(2\pi m_{g}kT)^{1/2}\qquad ({\rm motion\thinspace\thinspace along\thinspace\thinspace edge}).
\end {equation}
\end{mathletters}
\noindent Eq. (5.2a) is not qualitatively different from Eq. (2.5) for a sphere (with $L\approx R$).
As for Eq. (5.2b), decreasing $b$ to reduce $\xi$ for edgewise motion does not reduce
$\Delta x_{BR}/\Delta x_{CSL}$ since, from Eqs. (4.3), $[\Delta x_{BR}/\Delta x_{CSL}]^{2}\sim\xi/M^{2}\sim b^{-1}$.
\subsection {CSL Diffusion}\label{VB}
For CSL, the time dependence of the rms diffusion is given in Appendix A, Eq. (A10)
(and copied in Eq. (3.4)). $f=1$
for a disc with all dimensions $<<a$, so in this case the disc's diffusion is no different from that of a sphere
with $R<<a$, Eq. (4.1). There is a difference for $(b/2a)^{2}<<1$ and $(L/2a)^{2}>>1$:
\begin {mathletters}\label{5.3}
\begin {equation}
f\rightarrow (2a/L)^{2}\qquad ({\rm motion\thinspace\thinspace perpendicular\thinspace\thinspace to \thinspace\thinspace face})
\end {equation}
\begin {equation}
f\rightarrow (4/\sqrt{\pi})(a/L)^{3}\qquad ({\rm motion\thinspace\thinspace along\thinspace\thinspace edge}).
\end {equation}
\end{mathletters}
Thus the (thin) disc diffusion decreases less with increasing size than does the sphere's diffusion,
for which $f\rightarrow 6(a/R)^{4}$. Hence, if a larger object is needed for greater visibility,
a larger radius disc gives greater diffusion
than does a sphere of the same radius. However, the conclusion reached in section IV for a sphere holds as well
for a disc: the impact realm is required to effectively remove Brownian motion in order to see CSL translational
diffusion of a disc.
\section{Rotational Diffusion of a Disc}\label{VI}
Rotational Brownian motion was (naturally) first considered by Einstein\cite{Einstein Rot}.
The Fokker-Planck equation for an object rotating about a fixed axis through angle $\theta$
with angular velocity $\omega$ is identical in form to Eq. (2.1) with the replacements $v^{j}\rightarrow \omega$,
$x^{j}\rightarrow \theta$. For rotation, the viscous torque on a sphere is $-\xi_{ROT}\omega$ where
\cite{Einstein Rot,Lamb2}
\begin{equation}\label{6.1}
\xi_{ROT}=8\pi\eta R^{3},\qquad (l_m <<R),
\end{equation}
\noindent which replaces Stokes' law, Eq. (2.4). Eq. (2.7a) is replaced by
\begin{equation}\label{6.2}
\Delta\theta_{BR}\quad _{ \overrightarrow{t>>\tau_{ROT}}}\quad \bigg[{2kTt\over\xi_{ROT}}\bigg]^{1/2}
\end{equation}
\noindent where $\tau_{ROT}=I/\xi_{ROT}$ and $I=(2/5)MR^{2}$ is the sphere's moment of inertia. We see
from Eqs. (6.1), (6.2) compared with Eqs. (2.4), (2.7a)
that $\Delta\theta_{BR}\approx\Delta x_{BR}/R$.
But the case of a sphere is of no use to us. In the approximation we make,
where the nuclear mass is uniformly spread out over the sphere,
there is no difference between two rotated quantum states of the sphere so, according to CSL,
there is no collapse and therefore no random rotational motion
(without this approximation there {\it is} collapse
and random rotation but it is very slow). However,
CSL random rotation does occur for a nonspherical object.
\subsection{Brownian Rotational Diffusion}\label{VIA}
Here we consider
rotational diffusion of a disc ``on edge" (i.e., oriented with the face of the disc in a vertical plane),
of radius $L$ and thickness $b<<L$,
in the molecular and impact realms ($l_{m}>>L$). For a sphere in these realms,
if the molecules make elastic collisions with the sphere, they do not transfer
momentum parallel to the sphere face and so do not cause any torque ($\xi_{ROT}=0$). However, for the disc,
a straightforward calculation (as in \cite{Cunningham,Epstein}) yields the torque $=-\xi_{ROT}\omega$
about an axis passing through the edge and center, where
\begin{equation}\label{6.3}
\xi_{ROT}=(4/\pi)nL^{4}(2\pi m_{g}kT)^{1/2}
\end{equation}
\noindent and Eq. (2.7b) is replaced by
\begin{equation}\label{6.4}
\Delta \theta_{BR}\quad _{ \overrightarrow{t<<\tau_{ROT}}}\quad \bigg[{2kT\xi_{ROT} t^{3}\over3I^{2}}\bigg]^{1/2}
\approx 80\frac{(p\thinspace{\rm pT})^{1/2}(T/T_{0})^{1/4}t^{3/2}}{(D{\rm gm/cc})
(b{\rm d\mu})
(L{\rm d\mu})^{2}}{\rm rads}
\end{equation}
\noindent ($I\approx ML^{2}/4$). Here we have employed the rather weird unit $1{\rm d}\mu=10^{-5}$cm
because the dimensions of the disc we are considering are such that the factors $(b{\rm d\mu})$,
$(L{\rm d\mu})$ are of the order of unity. We only give Eq. (6.4), valid for $t<<\tau_{ROT}$, because
the time to reach thermal equilibrium is so long at picoTorr pressures or less:
$\tau_{ROT}=I/\xi_{ROT}\approx 5\cdot 10^{7}
D(b{\rm \thinspace\thinspace in\thinspace\thinspace d\mu})(p{\rm pT})^{-1}(T/T_{0})^{1/2}$sec.
According to Eqs. (6.4) and (6.5) (below), Brownian motion dominates CSL diffusion even at 1pT pressure, for
discs with dimensions of the order of $a=1d\mu$. Therefore one must go to lower pressure, to the impact realm, to see
CSL rotational diffusion.
\subsection{CSL Rotational Diffusion}\label{VIB}
For CSL rotational diffusion, it is shown in Appendix C that
\begin{equation}\label{6.5}
\Delta \theta_{CSL}\approx \frac{\hbar}{ma^{2}}\bigg(\frac{\lambda t^{3}f_{ROT}}{12}\bigg)^{1/2}
\approx .018f^{1/2}_{ROT}t^{3/2}{\rm rad}
\end{equation}
\noindent where Fig. 1 contains a graph of $f_{ROT}(\alpha, \beta)$ vs. $\alpha\equiv (L/2a)$
for various values of $\beta \equiv (b/2a)$. For example,
$f_{ROT}\approx 1/3$ for $b\approx .5a$ and $L\approx 2a$.
For this example,
according to Eq. (6.5), $\Delta \theta_{CSL}$
reaches $2\pi$rad in about 70sec. If $\lambda$ were $10^{-4}$ times smaller, i.e.,
$\lambda\approx 10^{-20}$sec$^{-1}$ (and still $a=10^{-5}$cm), which is at the edge of where one may consider the theory
to be viable (section VII, Eq. (7.3)), this time is about 25 min.
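Both times follow from inverting Eq. (6.5) (an illustrative Python sketch, not part of the original paper; $f_{ROT}\approx 1/3$ for the quoted disc, and the diffusion time scales as $\lambda^{-1/3}$):

```python
import math

coeff = 0.018                            # Eq. (6.5) prefactor, rad/sec^{3/2}
f_rot = 1.0 / 3.0                        # b ~ .5a, L ~ 2a (from Fig. of f_ROT)

# Solve 2*pi = coeff * sqrt(f_rot) * t^{3/2} for t
t_2pi = (2 * math.pi / (coeff * math.sqrt(f_rot)))**(2.0 / 3.0)
# With lambda reduced by 10^4, Delta theta ~ lambda^{1/2} t^{3/2}, so t scales by (10^4)^{1/3}
t_2pi_small_lam = t_2pi * (1e4)**(1.0 / 3.0)
print(t_2pi, t_2pi_small_lam / 60.0)     # ~70 sec and ~25 min
```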
It is worth examining the rotational diffusion to be expected from
standard quantum theory (as was done for translational diffusion at the end of section IVA). From Eq. (C6)
with $\lambda=0$ we have
\[\Delta \theta_{QM}=\langle{\cal L}\rangle (0)t/I\]
\noindent where $\langle{\cal L}\rangle (0)$ is the expectation value of
the angular momentum operator in the initial state. If the disc is observed at $t=0$, localized to $\Delta \theta\approx\pi /4$,
then we may take $\langle{\cal L}\rangle (0)\approx \hbar/[2(\pi/4)]=2\hbar/ \pi$. If the disc is ``in the dark"
(unobserved) until time t, using $I=(Db\pi L^{2})(L^{2}/4)$, we obtain
\begin{equation}\label{6.6}
\Delta \theta_{QM}\approx \frac{8\hbar t}{\pi^{2}DbL^{4}}.
\end{equation}
\noindent With the choices $D=1$gm/cc, $b=.5\cdot10^{-5}$cm, $L=2\cdot10^{-5}$cm, we get
$\Delta \theta_{QM}\approx 10^{-3}t$rad. Thus, $\Delta \theta_{QM}\approx$(.1, 1, 86)rad in
$t=$(100sec, 1000sec, 1day). These numbers are smaller than their CSL counterparts
by the respective factors (100, 300, 3000). However, were $\lambda$ sufficiently small,
this diffusion could be observed in the experiment we propose (section VIII).
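The quoted rates and ratios follow from Eqs. (6.5) and (6.6) (an illustrative Python sketch, not part of the original paper; CGS units, rounded $\hbar$):

```python
import math

hbar = 1.055e-27                         # erg s
D, b, L = 1.0, 0.5e-5, 2e-5              # g/cc, cm, cm

rate = 8 * hbar / (math.pi**2 * D * b * L**4)        # Eq. (6.6): rad/sec
csl = lambda t: 0.018 * math.sqrt(1.0 / 3.0) * t**1.5  # Eq. (6.5), f_ROT ~ 1/3
for t in (100.0, 1000.0, 86400.0):
    print(rate * t, csl(t) / (rate * t))             # QM angle; CSL/QM ratio
```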
\subsection{Gas-Disc Collisions}\label{VIC}
The times given in the example of the previous section for diffusion
through $2\pi$rad ($\approx70$sec for $\lambda^{-1}=10^{16}$sec,
$\approx$25min for $\lambda^{-1}=10^{20}$sec) should be compared with the
mean time between collisions of air molecules with the disc.
Assume Nitrogen molecular gas at temperature $4.2^{\circ}$K and pressure $5\cdot10^{-17}$Torr.
The mean molecular speed is ${\overline u}=[8kT/\pi m_{g}]^{1/2}\approx 5.6\cdot10^{3}$cm/sec.
The molecular density is $\rho=p/kT\approx115$particles/cc. The molecular flux is
$J=\rho{\overline u}/4\approx 1.5\cdot10^{5}$particles/cm$^{2}$-sec.
The mean time between collisions (we consider collisions with the 2 faces of the disc
but neglect collisions with the edge) is thus $\tau_{c}=1/(2J\pi L^{2})\approx45$min.
We conclude this subsection with an estimate of the effect of a collision. A Nitrogen molecule with speed $\overline u$
impacting perpendicular to the disc
face at distance $L$ from the rotation axis (``worst possible case") conveys to
the disc an angular velocity
\begin{equation}\label{6.7}
\omega=\frac{m_{g}{\overline u}L}{I}\approx \frac{33}{D(b{\rm \thinspace\thinspace}d\mu)
(L{\rm \thinspace\thinspace}d\mu)^{3}}{\rm rad/sec}
\end{equation}
\noindent For the example we have been considering, this is $\omega\approx$ 8rad/sec. Such a sudden
jump in the angular velocity should be readily observable and distinguishable from the
expected CSL behavior.
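The kinematics of this subsection can be reproduced directly from $\omega=m_{g}{\overline u}L/I$ and the stated gas conditions (an illustrative Python sketch, not part of the original paper; CGS units, rounded constants, 1 Torr $\approx 1333$ dyn/cm$^{2}$):

```python
import math

k, m_g = 1.38e-16, 4.65e-23              # Boltzmann constant; N2 molecule mass, g
T, p_torr = 4.2, 5e-17                   # liquid He temperature; pressure, Torr
p = p_torr * 1333.0                      # pressure in dyn/cm^2
L, b, D = 2e-5, 0.5e-5, 1.0              # disc radius, thickness (cm); density

u = math.sqrt(8 * k * T / (math.pi * m_g))   # mean molecular speed, cm/s
n = p / (k * T)                              # number density, per cc
tau_c = 1.0 / (2 * (n * u / 4) * math.pi * L**2)  # collisions on both faces
I = (math.pi / 4) * D * b * L**4             # disc moment of inertia, M L^2 / 4
omega = m_g * u * L / I                      # Eq. (6.7), worst-case kick
print(n, tau_c / 60.0, omega)                # ~115/cc, ~45 min, ~8 rad/sec
```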
\section {Parameter Values}\label{Section VII}
In the previous sections of this paper, for clarity's sake, in
numerical calculations we have used the
values of the CSL parameters $(\lambda^{-1},a)$ suggested by GRW, namely $(10^{16}$sec$,10^{-5}$cm).
However, these values have no theoretical underpinning and were simply chosen to give reasonable
results: other values are possible, though not all.
Therefore we examine already existing experimental and theoretical constraints on these parameters.
Any new experiment must be considered as placing further constraints.
While one may hope that experiment reveals an ``anomalous" random walk
confirming the existence of a CSL-type collapse process and disclosing the values of the parameters, one
should consider the possibility that this does not happen. We consider
the additional constraints negative results could provide.
In particular, we consider what would be needed to eliminate CSL as a viable resolution of the ``measurement problem".
We also discuss two other topics. One is the random walk associated with a suggestion by Penrose of
a connection between gravity and collapse. This can essentially be interpreted as giving the results of
this paper with a particular value for $\lambda$. We also show the range of parameter values
consistent with a speculation that the fluctuating field $w$ has a thermal basis, based upon
cosmological considerations and an analogy between
standard random walk and CSL random walk.
In what follows, $\sim$ means up to a numerical factor not too far from 1.
An experiment which looks for photons emitted by the atoms in an underground shielded slug of Germanium
places a limit on the number of bound electrons or
nucleons ``spontaneously" excited in Ge atoms\cite{Collett,Ring}. Spontaneous excitation of bound states is
expected from the CSL collapse mechanism,
which narrows electron and nucleon wavefunctions thereby giving these particles increased energy (presumably the energy
for this comes from the fluctuating collapse-causing field $w$\cite{Pearleenergy}). For example, a 1s
electron ejected from a Ge atom will result in an 11.1 keV shower of photons (equal to its binding energy)
from the atom's remaining electrons as they cascade downward, together with radiation equal to
the kinetic energy of the ousted electron, which it rapidly loses in collisions with other atoms.
The present experimental upper limit on the rate of photon pulses appearing in 1 keV bins above 11 keV
is $\approx$ .05 pulses/(keV kg day) \cite{AvignoneRing}.
The theoretical excitation rate is conveniently expanded in a power series in (size of bound state/$a)^{2}$. The
first term in this series turns out to vanish identically if the collapse coupling constant
is mass proportional \cite{PearleSquires}. We have assumed this in the present paper
(e.g., see Eq. (A1) et seq.) because, for atomic spontaneous excitation, the numerical coefficient of this first term
is large enough to make the experiment sensitive to the relative coupling constant size of electrons and nucleons, and the
results make mass-proportionality likely. The experiment is less sensitive to the second term in
the series but the data on excitation rate of nucleons
still provides a constraint (because of the now-assumed small electron coupling
constant, the electron excitation rate data does not provide as strong a constraint). The theory gives
probability/sec$\sim\lambda$(nucleon diameter/$a)^{4}$. Since, in Ge, there
are $8.3\cdot 10^{24}$atoms/kg and $A\approx 72$, using nuclear radius $\approx 1.4\cdot 10^{-13}A^{1/3}$cm and
$8.6\cdot 10^{4}$sec/day, we get
\[ \lambda^{-1}a^{4}>2\cdot 10^{-15}. \]
However, the strongest present constraint, based upon the same experimental data, was provided by Fu\cite{QFu}.
He calculated the rate of radiation by a free electron (mass $m_{e}$)
due to being shaken by the collapse mechanism. He obtained
for the number of photons of energy E radiated by an electron per second per energy the expression
\[ R(E)=\frac{\lambda (m_{e}/m)^{2}e^{2}\hbar}{4\pi^{2}a^{2}m^{2}c^{3}E}=
8.1\cdot10^{-38}\frac{(\lambda/a^{2})}{(\lambda/a^{2})_{GRW}}
\bigg(\frac{1}{E {\rm keV}}\bigg){\rm counts/(sec\thinspace\thinspace keV)}\rightarrow
2.1\cdot10^{-8}{\rm counts/(keV\thinspace\thinspace kg\thinspace\thinspace day)}.\]
\noindent The last term on the right-hand side of the above equation
gives the rate of radiation from the 4 valence electrons (essentially free) from each atom in
a slug of Ge (using $8.29\cdot10^{24}$ atoms/kg for Ge) at $\approx$ 11 keV with the GRW parameter values.
This and the experimental upper limit quoted above leads to the experimental constraint:
\begin{equation}\label{7.1}
\lambda^{-1}a^{2}>.4
\end{equation}
\noindent This constraint (labelled line 1) is graphed in Fig. 1: the allowed region is to the right of the line.
Diffusion experiments should do much better than (7.1) in constraining the parameter values. For example, consider
a rotational diffusion experiment such as we sketch in the next section. One expects to be able
to detect a $\Delta\Theta\approx \pi/2$ diffusion in 45 minutes. If such a diffusion were not
detected, Eq. (6.5) (where $\Delta\Theta_{CSL}$ goes as $\lambda^{1/2}/a^{2}$) gives
\begin{equation}\label{7.2}
\lambda^{-1}a^{4}>10^{2}
\end{equation}
\noindent (region to the right of the line labelled 2 in Fig. 1). This amounts to being able to detect $\Delta\Theta_{CSL}$
a factor $10^{-3}$ times smaller than that expected using the GRW parameter values.
There is also what may be called a ``theoretical constraint"\cite{Collett,Ring}, although it is fairly rough. The
purpose of a collapse model is to account for the world as we see it. The model may be considered to fail if it allows
an observable object to remain in a superposition of two well-separated locations ``too long". How long is
``too long"? We might take that to be human perception time $\sim .1$sec.
For a first example,
consider an object which is just visible, a sphere of diameter $4\cdot 10^{-5}$cm, in a superposition
involving a displacement $>>a$, with $a>4\cdot 10^{-5}$cm. The collapse time is
$\sim\lambda^{-1}/N^{2}$, where $N$ is the number of particles in the sphere. If the sphere's density is
$D\approx 1$gm/cc, then $N\approx 2\cdot 10^{10}$ and the condition $\lambda^{-1}/N^{2}<.1$sec implies
\begin{equation}\label{7.3}
\lambda^{-1}<4\cdot 10^{19}
\end{equation}
\noindent (region below the line labelled 3 in Fig.1).
For a second example, again consider the above sphere but with a superposition involving a
displacement $<a$. In this case the collapse time is
$\sim (4\lambda^{-1}a^{2})/[N\cdot$displacement$]^{2}$. Using the smallest possible discernible displacement,
$4\cdot 10^{-5}$cm, the condition that the collapse time is $<.1$sec implies
\begin{equation}\label{7.4}
\lambda^{-1}a^{2}<1.6\cdot 10^{10}
\end{equation}
\noindent (region to the left of the line labelled 4 in Fig.1).
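Both "theoretical constraint" bounds follow from simple arithmetic; an illustrative sketch, assuming a nucleon mass of $1.67\cdot10^{-24}$g and the stated sphere size and density:

```python
import math

# Sketch: reproduce the estimates (7.3) and (7.4) for a just-visible
# sphere of diameter 4e-5 cm and density ~1 g/cc.
radius = 2e-5                    # cm
density = 1.0                    # g/cc
m_nucleon = 1.67e-24             # g (assumed)
perception_time = 0.1            # sec

N = (4.0 / 3.0) * math.pi * radius**3 * density / m_nucleon  # ~2e10 nucleons

# Displacement >> a: collapse time ~ 1/(lambda N^2) < 0.1 sec  =>  Eq. (7.3)
bound_73 = perception_time * N**2          # upper bound on lambda^{-1}

# Displacement d < a: collapse time ~ 4 lambda^{-1} a^2 / (N d)^2 < 0.1 sec  =>  Eq. (7.4)
d = 4e-5                                   # smallest discernible displacement, cm
bound_74 = perception_time * (N * d)**2 / 4.0   # upper bound on lambda^{-1} a^2

print(N, bound_73, bound_74)   # ~2e10, ~4e19, ~1.6e10
```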
These ``theoretical constraints" are rough but we may take them seriously enough to observe,
from Fig. 1, that constraints
(7.2) and (7.4) still permit a narrow wedge-shaped range of allowed parameters.
However, suppose one were able to perform
an experimental test of translational diffusion of a sphere with a precision for $\Delta Q$
that is $10^{-3}$ times smaller than $\Delta Q_{CSL}$ with the GRW parameters, and find a null result.
According to Eq. (4.1) the resulting
constraint is $\lambda^{1/2}/a<10^{-3}(10^{-16}$sec$^{-1})^{1/2}/(10^{-5}$cm$)$ or
\begin{equation}\label{7.5}
\lambda^{-1}a^{2}>10^{12}
\end{equation}
It appears that the conflict between (7.5) and (7.4) would make CSL nonviable.
We close this section with two additional considerations.
First, in Appendix E we argue that a proposal by Penrose\cite{others},
and other suggestions involving a gravitational basis for collapse\cite{Diosi,GGR,PearleSquires2},
arrive at an effective value for $\lambda$: $\lambda_{G}\approx Gm^{2}/a\hbar\approx 2\cdot 10^{-23}$sec$^{-1}$
when the object undergoing collapse is of size $\approx a$. With such a small value of $\lambda$,
the ``theoretical constraint" inequality (7.3) is violated: for the superposed
sphere states considered in obtaining (7.3), the collapse time is $\approx 10$sec, much longer than human perception time.
However, proponents of $\lambda=\lambda_{G}$ could argue for a weaker ``theoretical constraint"\cite{ABGG}.
That is, when a human observer looks at the sphere, the detection process in the brain amounts to entangling the
two spatially distinct sphere states with two spatially distinct states of brain particles. This
extra entanglement, while only roughly estimable, appears to bring about collapse in less than human perception time.
Detection of diffusion with such a small value of $\lambda$ could be possible.
For example, we may compare the expected rotational diffusion (6.6) of standard quantum theory
with the expected CSL diffusion (6.5) with $\lambda=\lambda_{G}$ and $a=10^{-5}$cm: $\Delta\Theta_{QM}\approx 10^{-3}t$rad
and $\Delta\Theta_{G}\approx 10^{-5}t^{3/2}$rad. These give, for times (45min, 3hour),
$\Delta\Theta_{QM}\approx (2.7,$ 10.8)rad and $\Delta\Theta_{G}\approx (1.4,$ 11.2)rad.
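The quoted angles follow directly from these two scaling laws; a minimal sketch, assuming $\Delta\Theta_{QM}\approx 10^{-3}t$ rad and $\Delta\Theta_{G}\approx 10^{-5}t^{3/2}$ rad with $t$ in seconds:

```python
# Sketch: check the quoted diffusion angles at t = 45 min and t = 3 hours,
# using the rates assumed above (t in seconds, angles in radians).
def theta_qm(t):
    return 1e-3 * t

def theta_g(t):
    return 1e-5 * t**1.5

t45, t180 = 45 * 60.0, 3 * 3600.0
print(theta_qm(t45), theta_qm(t180))   # ~2.7, ~10.8 rad
print(theta_g(t45), theta_g(t180))     # ~1.4, ~11.2 rad
```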
Last, we call the reader's attention to Appendix F, where we
consider that the collapse-inducing fluctuations of $w$ may come from a thermal bath of some unspecified medium in
thermal equilibrium with the $2.7^{\circ}$K cosmic radiation. The
$\sim t^{3/2}$ time dependence of the CSL diffusion for a nucleon is identified with
the standard Brownian motion at this temperature over an interval much less than the time it takes to reach thermal equilibrium.
The latter time is taken to be $\gamma\cdot$(the age of the universe), with $\gamma>1$. We
then obtain the equality (F.2), $\lambda^{-1}a^{2}\approx 10^{3}\gamma$, which is consistent with
a wide range of parameter values, e.g., for $a=10^{-5}$cm this implies $\lambda^{-1}>10^{13}$sec.
\section {Experimental Considerations}\label{VII}
In order to observe quantum mechanical rotational diffusion, either that
arising from standard quantum theory or from CSL, it is necessary to isolate a small
object from all outside torques for a period of minutes to hours while measuring
its rotational position. We believe that it is now possible to perform such an
experiment by combining techniques from nanomachining and from atom/particle trapping.
Below, we shall consider the problems of creating suitable discs, suspending and isolating
them, removing their residual thermal energy, and monitoring their angular position
as a function of time.
Our first consideration is the production of suitable disc samples.
Larger discs (2$\mu$m diameter) have already been fabricated from silicon dioxide using standard
IC fabrication methods \cite{Chen} and recent work at the Cornell Nanofabrication Facility shows
that it is now possible to create structures with lateral dimensions below 100nm and thicknesses less than 40nm
\cite{Tannenbaum}. So it appears possible to make discs of suitable dimensions using current methods.
As explained below, we suggest using discs of highly conducting metals such as copper or gold.
We next consider suspending and isolating a disc for the duration of the experiment.
We suggest utilizing a charged, conducting disc in a Paul trap \cite{Paul}
in a very high vacuum. The Paul trap uses an alternating quadrupole
electric field to suspend a charged particle. While the method was initially developed
to confine atoms, it was quickly adapted to suspend larger objects.
Wuerker et al.\cite{Wuerker} injected small conducting microparticles using an electrostatic
method that also charged the particles in the injection process. Once
they had fed a cloud of particles into the trap they were able to select a single
particle to retain in the trap by manipulating the trap's operating fields.
More recently, Arnold and co-workers\cite{Arnold1} have used Paul traps and modified Paul traps to
confine single microparticles (a few $\mu$m in size) for optical experiments. One of their modified Paul traps
has been used to confine single microparticles to within the Brownian limit set by the
atmospheric gas in their traps. They add extra
static electric fields to counter the effects of both gravity and
imperfections in the quadrupole shape of the main field\cite{Arnold2}. This leaves a perfectly force-free
spot in the trap where the particle will sit, subject only to collisions with the gas.
Pressures of less than $5\cdot 10^{-17}$ Torr have been reported \cite{Gabrielse}
in traps cooled with liquid He to 4$^{\circ}$K. As we have remarked in subsection VIC,
for a disc of radius $2\cdot 10^{-5}$cm and thickness $.5\cdot 10^{-5}$cm, at these conditions
the average interval between gas-disc
collisions is 45 minutes. Moreover, the Poisson statistical nature of the collisions makes it
likely to find intervals between collisions up to 90 minutes. This is quite long enough to observe
even the rotational diffusion predicted by standard quantum mechanics.
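For exponentially distributed waiting times with a 45-minute mean, the chance of a 90-minute collision-free stretch is $e^{-2}\approx 14\%$, so such long gaps occur regularly over repeated runs. A minimal sketch of this estimate:

```python
import math

# Sketch: probability of a collision-free interval of length T when gas-disc
# collisions are Poisson with mean interval tau: P = exp(-T/tau).
tau = 45.0    # mean interval between gas-disc collisions, minutes
T = 90.0      # desired observation window, minutes

p_gap = math.exp(-T / tau)
print(p_gap)  # ~0.135: roughly one run in seven yields a 90-minute gap
```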
Although we are still engaged in studying the detailed dynamics of a charged conducting disc in the Paul
trap, it appears already that the positional trap also acts as an orientational trap
and will suspend the disc ``vertically oriented" (with its flat surface parallel
to the vertical symmetry axis of the field).
There is no torque on a centrally positioned and vertically oriented disc
causing it to rotate about the symmetry axis in the direction which we shall refer to as
the azimuthal direction. Thus the trap appears to suspend a charged disc in exactly the best
orientation to observe rotational diffusion in the azimuthal direction. What is currently under study is whether a
displacement from center and/or tipping off vertical of the disc causes an azimuthal torque and, if so, whether that
should be minimized or not (i.e., if the extra motion is due to translational or rotational
CSL diffusion, this might cause an increase of observable CSL-induced diffusive rotation).
When first injected into the trap, the discs will possess considerable translational and
rotational kinetic energy. Arnold et al. were able to remove this energy by the viscous
interactions with the gas in the cell. In a high vacuum experiment there is no gas to take
up this kinetic energy and a disc will continue to orbit the trap. However, if we
add a transverse magnetic field then the eddy currents set up in the disc will convert the
kinetic energy to thermal energy in the disc. The magnetic forces induced are proportional
to the velocity of the disc and so provide a true viscous force. A simple dimensional analysis
suggests that quite moderate fields, no more than a few kilogauss, will damp out the
mechanical energy in a few seconds, thus bringing the disc to rest at the null point of
the trap and vertically oriented.
Preliminary calculations show that light (e.g., from a laser) shone in along the symmetry
axis of the trap will scatter from the disc in a pattern that exhibits azimuthal anisotropy:
more light is scattered perpendicular to the faces of the disc than perpendicular to the edges.
Thus the orientation of the disc about a vertical axis can be monitored by collecting the
scattered light. It appears that the geometry of the Paul trap makes it particularly easy to
collect light scattered from a particle at the center of the trap.
If the electrodes are highly polished, then the geometry is such that photons scattered away
from the symmetry axis will be funneled by the electrodes to emerge through the two gaps where the
cap electrodes and the ring electrode do not meet. Moreover, the scattered photons will retain
their azimuthal orientation so that light collected at the gaps will retain the azimuthal
intensity distribution and so provide information about the orientation of the disc.
We suggest collecting the scattered photons with 8 photomultipliers operating as photon
counters spaced around each gap so that the disc orientation can be measured to within $45^{\circ}$.
The light which illuminates and scatters from the disc does so symmetrically and therefore
exerts no average azimuthal torque on it. However, because the photons scatter
randomly from the disc, they exert a random torque on it and so cause it
to undergo diffusive rotation. Fortunately, this effect scales with the
light intensity and thus can be minimized by using sufficiently weak
illumination. Moreover, this illuminational diffusion can itself be measured in
exactly the same way as any other rotational diffusion. Thus its effects
can be eliminated by studying the behavior of the disc as the light
level is reduced. The precise limit on the maximum amount of light that can be scattered without
materially affecting the precision of the experiment depends on the value of $\lambda$ that one
wishes to measure. For example, according to our present rough calculations,
for the standard value of $\lambda = 10^{-16}$sec$^{-1}$, the disc
can scatter about 200 photons per second before the illuminational diffusion exceeds 5\% of the
CSL diffusion. A simple calculation shows that, if the disc were at rest,
one would need to count photons for about 10 seconds
to localize it to within 45$^{\circ}$. Since CSL diffusion with that $\lambda$ should lead to
one revolution every 70 seconds the time resolution is quite adequate.
In order to observe diffusion for a lower value of $\lambda$,
the maximum light level must be reduced accordingly.
However, the lower value of $\lambda$ will lead to a slower rate of CSL diffusion and so allow
integration of light over a longer period. This makes up for the lower maximum light level and means
that the lower limit on $\lambda$ that can be measured is set by the vacuum and not by the
illumination. A 45-90 minute interval between gas-disc collisions allows values of
$\lambda$ as small as $\approx 10^{-23}$sec$^{-1}$ to be probed. This is low enough to allow the experiment to definitively test CSL and,
if the CSL diffusion does not appear, to see the diffusion expected
from standard quantum mechanics.
\acknowledgments
We especially appreciate the help of Gordon Jones and we would also like to thank Frank Avignone,
Peter Milloni, James Ring and Ann Silversmith for their contributions to this work.
One of us (P.P.) would like to thank Harvey Brown and the Philosophy of Physics group at Oxford for
the stimulating environment where the idea for this work originated.
\appendix
\section {Translational Diffusion In CSL}
In CSL, the density matrix evolution of the wavefunction of a blob of matter containing $N$ particles, in the position
representation $|{\bf x}_{1},...,{\bf x}_{N}\rangle\equiv |x\rangle$, is given by\cite{PearleCSL,GPR}
\begin{equation}\label{A1}
{\partial\over \partial t}\langle x|\rho (t)|x'\rangle =-i\langle x|[H,\rho (t)]|x'\rangle
-{\lambda\over 2}\sum_{i=1}^{N}\sum_{j=1}^{N}{m_{i}m_{j}\over m^{2}}
[\Phi({\bf x}_{i}-{\bf x}_{j})+\Phi({\bf x}'_{i}-{\bf x}'_{j})-2\Phi({\bf x}_{i}-{\bf x}'_{j})]
\langle x|\rho (t)|x'\rangle
\end{equation}
\noindent where
\begin{equation}\label{A2}
\Phi({\bf z})\equiv e^{-{\bf z}^{2}/ 4 a^{2}},
\end{equation}
\noindent $\lambda$ is the collapse rate for a proton and we have assumed mass-proportionality
of the collapse coupling (see section VII), ${\bf x}_{j}$ is the position coordinate of the $j$th particle,
$m_{j}$ is its mass, $m$ is the mass of the proton and $H$ is the
usual Hamiltonian. In what follows we shall neglect the contribution of the electrons
because of the smallness of the electron mass, and for simplicity take the mass of the neutron equal to $m$.
Then, $N$ is the number of nucleons.
We wish to consider only the behavior of the center of mass (CM) of the blob. Accordingly,
we trace Eq. (A1) over the relative coordinates ${\bf R}_{i}\equiv {\bf X}_{i}-{\bf Q}$ (eigenvalues ${\bf r}_{i}$)
where ${\bf Q}\equiv \sum_{i}m_{i}{\bf X}_{i}/\sum_{i}m_{i} = N^{-1}\sum_{i}{\bf X}_{i}$
is the CM position operator (eigenvalues ${\bf q}$).
$\Phi({\bf x}_{i}-{\bf x}_{j})= \Phi({\bf r}_{i}-{\bf r}_{j})$ and $\Phi({\bf x}'_{i}-{\bf x}'_{j})=\Phi({\bf r}'_{i}-{\bf r}'_{j})$
are independent of ${\bf q}$ but
$\Phi({\bf x}_{i}-{\bf x}'_{j})=\Phi({\bf r}_{i}-{\bf r}'_{j}+{\bf q}-{\bf q}')$.
We shall also assume that the density matrix is the direct product of the internal and CM density matrices
(this neglects their entanglement due to the collapse-induced excitation of the internal nuclear states).
The trace of Eq. (A1) over relative coordinates yields
\begin{eqnarray}\label{A3}
&&{\partial\over \partial t}\langle{\bf q}|\rho _{cm} (t)|{\bf q}'\rangle =
-i\langle{\bf q}|\bigg[ {P^{2}\over 2M},\rho _{cm} (t)\bigg] |{\bf q}'\rangle\nonumber\\
&&\quad -\lambda\int dr \langle r|\rho _{int} (t)|r\rangle\sum_{i=1}^{N}\sum_{j=1}^{N}
[\Phi({\bf r}_{i}-{\bf r}_{j})-\Phi({\bf r}_{i}-{\bf r}_{j}+{\bf q}-{\bf q}')]
\langle{\bf q}|\rho _{cm} (t)|{\bf q}'\rangle.
\end{eqnarray}
\noindent Since the nucleons are well-localized, we may write e.g.,
$\int dr \langle r|\rho _{int} (t)|r\rangle\Phi({\bf r}_{i}-{\bf r}_{j})\approx\Phi({\bf z}_{i}-{\bf z}_{j})$ where
${\bf z}_{i}$ is the mean position of the $i$th nucleon. Moreover, since the
nuclei are closely spaced compared to $a=10^{-5}$cm we may, to a good approximation, take them to be continuously distributed and replace
the double sum in Eq. (A3) by a double integral, obtaining
\begin{eqnarray}\label{A4}
&&{\partial\over \partial t}\langle{\bf q}|\rho _{cm} (t)|{\bf q}'\rangle=
-i\langle{\bf q}|\bigg[ {P^{2}\over 2M},\rho _{cm} (t)\bigg] |{\bf q}'\rangle\nonumber\\
&&\qquad -\lambda \bigg( {N\over V}\bigg) ^{2}\int\int_{V} d{\bf z}d{\bf z}'
[\Phi({\bf z}-{\bf z}')-\Phi({\bf z}-{\bf z}'+{\bf q}-{\bf q}')]
\langle{\bf q}|\rho _{cm} (t)|{\bf q}'\rangle
\end{eqnarray}
To see roughly how the collapse part of Eq. (A4) works, suppose that
$\rho (0)= (1/2)[|\psi_{1}\rangle+|\psi_{2}\rangle][\langle\psi_{1}|+\langle\psi_{2}|]$ and that
$|\psi_{1}\rangle$, $|\psi_{2}\rangle$ describe two well-separated $(>>a)$ states of a blob so
$\Phi({\bf z}-{\bf z}'+{\bf q}-{\bf q}')\approx 0$. Therefore, neglecting the Hamiltonian term, Eq. (A4)
says that the off-diagonal density elements exponentially decay:
\[\langle{\bf q}|\rho _{cm} (t)|{\bf q}'\rangle=(1/2)\langle{\bf q}|\psi_{1}\rangle\langle\psi_{2}|{\bf q}'\rangle
e^{-\lambda NN't}.\]
\noindent In this equation, if the dimensions of the blob are $<<a$
then $\Phi({\bf z}-{\bf z}')\approx 1$ and $V^{-1}\int d{\bf z}=1$
so $N'\approx N$. If
the dimensions of the blob are $>>a$, $N'\approx$ the number of nucleons in a volume $a^{3}$.
This collapse rate $\lambda NN'$ is diminished if the blob states overlap
($\Phi({\bf z}-{\bf z}'+{\bf q}-{\bf q}')\neq 0$).
\subsection{Translational Diffusion Of A Sphere}
We shall apply Eq. (A4) to an ensemble of spheres. Each sphere's
CM wavefunction (subject to its own sample field $w({\bf x},t)$) reaches an equilibrium size (see Appendix B),
subject as it is to the Schr\"odinger evolution expansion and
the collapse interaction contraction, with the center of a new contraction generally located
off-center from the previous wavefunction center, thereby giving rise to the random walk.
We shall use Eq. (A4) to calculate
\begin{equation}\label{A5}
\overline {{{\langle {Q^{j}}^{2}\rangle}}}(t)\equiv\int DwP(w)
{_{w}\langle\psi,t|{Q^{j}}^{2}|\psi,t\rangle _{w}\over _{w}\langle\psi,t|\psi,t\rangle _{w}}
\equiv Tr [\rho (t) {Q^{j}}^{2}].
\end{equation}
\noindent In Eq. (A5), $|\psi,t\rangle _{w}$ is the statevector of the sphere at time $t$
which evolves under a specific collapse-causing random field $w({\bf x}, t)$, the density matrix is
$\rho (t)=|\psi,t\rangle _{w}\thinspace _{w}\langle\psi,t|$ and, according to CSL,
$DwP(w)=Dw_{w}\langle\psi,t|\psi,t\rangle _{w}$ is the probability that the field $w({\bf x}, t)$ appears in nature,
where $Dw\sim\prod_{{\bf x}, t}dw({\bf x}, t)$ (space-time may be regarded as divided into little cells, in each of which
$w({\bf x}, t)$ can take on any real value). In Appendix B we shall calculate
\begin{equation}\label{A6}
\overline {{\langle Q^{j}\rangle}^{2}}(t)\equiv\int DwP(w)
\bigg[{_{w}\langle\psi,t|Q^{j}|\psi,t\rangle _{w}\over _{w}\langle\psi,t|\psi,t\rangle _{w}} \bigg]^{2}
\end{equation}
\noindent which cannot be expressed as a trace with respect to the density matrix. As shown in Appendix B, the
spheres we consider are large enough so that the mean
square packet width $\overline{s^{2}}\equiv\overline{{{\langle [Q^{j}-\langle Q^{j}\rangle]^{2}\rangle}}}$
rapidly reaches an equilibrium constant size $<<\overline {{\langle Q^{j}\rangle}^{2}}(t)$.
Therefore the increase with time $\sim t^{3}$ of $\overline {{\langle Q^{j}\rangle}^{2}}(t)$
found here (see Eq. (A10)) is solely due to the diffusion of the spheres.
To find $ \overline {{{\langle {Q^{j}}^{2}\rangle}}}(t)$ we take successive traces of Eq. (A4):
\begin{mathletters}
\label{A7}
\begin{equation}
{d\over dt} \overline {{{\langle {Q^{j}}^{2}\rangle}}} ={1\over M}\overline{{\langle P^{j} Q^{j}+Q^{j} P^{j}\rangle}}
\end{equation}
\begin{equation}
{d\over dt}{1\over M}\overline{{\langle P^{j} Q^{j}+Q^{j} P^{j}\rangle}} =
{2\over M^{2}}\overline{{\langle {P^{j}}^{2}\rangle}}
\end{equation}
\begin{equation}
{d\over dt}{2\over M^{2}}\overline{{\langle {P^{j}}^{2}\rangle}}={\lambda N^{2}\hbar^{2}f(R/a)\over M^{2}a^{2}}
\end{equation}
\end{mathletters}
\noindent where
\begin{equation}\label{A8}
f(R/a)\equiv{1\over V^{2}}\int\int_{V} d{\bf z}d{\bf z}' \Phi({\bf z}-{\bf z}')
\bigg[1-{(z^{j}-z'^{j})^{2}\over 2a^{2}}\bigg]
\end{equation}
Integration of $f$ may be facilitated using Gauss's law to convert the volume integrals to surface integrals:
\begin{mathletters}\label{A9}
\begin{equation}
f(R/a)={2a^{2}\over V^{2}}\int\int_{V} d{\bf z}d{\bf z}'
{\bf \nabla}\cdot {\hat {\bf e}}_{j}{\bf \nabla}'\cdot {\hat {\bf e}}'_{j}\Phi({\bf z}-{\bf z}')
=2a^{2}\frac{1}{V^{2}}\int\int_{A}
d{\bf A}\cdot{\hat {\bf e}}_{j}d{\bf A}'\cdot{\hat {\bf e}}'_{j}\Phi({\bf z}-{\bf z}')
\end{equation}
\begin{equation}
=6\bigg( {a\over R} \bigg)^{4}\bigg[ 1-{2a^{2}\over R^{2}}+\bigg(1+{2a^{2}\over R^{2}}\bigg) e^{-R^{2}/a^{2}}\bigg]
\end{equation}
\begin{equation}
f\thinspace_{\overrightarrow {R<<a}}1, \qquad f(1)=.62, \qquad f\thinspace_{\overrightarrow {R>>a}}6\bigg({a\over R}\bigg)^{4}.
\end{equation}
\end{mathletters}
\noindent $f$ is a monotonically decreasing function of its argument.
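The closed form (A9b) and its limits can be checked numerically; a sketch, writing $x\equiv R/a$:

```python
import math

# Sketch: the sphere form factor f of Eq. (A9b), as a function of x = R/a.
def f(x):
    return 6.0 / x**4 * (1.0 - 2.0 / x**2
                         + (1.0 + 2.0 / x**2) * math.exp(-x**2))

print(f(0.1))                      # ~1 in the R << a limit
print(f(1.0))                      # ~0.62
print(f(10.0) / (6.0 / 10.0**4))   # ~1: f approaches 6(a/R)^4 for R >> a
print(f(0.5) > f(1.0) > f(2.0) > f(5.0))  # monotonically decreasing
```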
It follows from Eqs. (A7) that
\begin{equation}\label{A10}
\overline {{{\langle {Q^{j}}^{2}\rangle}}} =
\langle \bigg( Q^{j}+\frac{P^{j}t}{M}\bigg)^{2}\rangle (0)+{\lambda\hbar^{2}f(R/a)t^{3}\over 6 m^{2}a^{2}}
\end{equation}
\noindent which is the result quoted in Eq. (3.4). The diffusion term in Eq. (A10) can be understood as follows.
$d^{3}\overline {{{\langle {Q^{j}}^{2}\rangle}}}(t)/dt^{3}$ is proportional to the square
of the collapse-induced velocity $(\hbar/ Ma)^{2}$ multiplied by the effective collapse rate. For
$R<<a$, the collapse rate is $\sim\lambda N^{2}$, giving rise to Eq. (A10) with $f=1$. For
$R>>a$, as we have previously shown\cite{GPR}, the collapse rate
is $\sim\lambda\cdot$(number of particles in a volume $a^{3})\cdot($number of uncovered particles).
That is, we imagine the sphere in a superposition of two states displaced from each other by a certain distance so
the two images of the sphere overlap: the ``uncovered" particles are those in the region of no overlap. In this
case we suppose the displacement distance is $a$. Then, the (number of uncovered particles)$\approx (N/V)aA$,
where $A$ is the surface area of the sphere. Thus we find the expression
\[\sim (\hbar/ Ma)^{2} \lambda (Na^{3}/V)(NAa/V)\sim\lambda(\hbar/ ma)^{2}(a/R)^{4}\sim\lambda(\hbar/ ma)^{2}f.\]
\subsection{Translational Diffusion Of A Disc}
In the case of a disc undergoing translational diffusion,
$f$ depends upon its orientation. If the disc is of radius $L$ and thickness $b$,
for motion perpendicular to the disc face it follows from Eq. (A9a) (which is applicable
to an arbitrarily shaped object) that
\begin{equation}\label{A11}
f=4\bigg(\frac{2a}{L}\bigg)^{4}\bigg(\frac{2a}{b}\bigg)^{2}\bigg[1-e^{-b^{2}/4a^{2}}\bigg]
\int_{0}^{L/2a}xdx\int_{0}^{L/2a}x'dx'e^{-(x^{2}+x'^{2})}I_{0}(2xx')
\end {equation}
\noindent For example, for $(b/2a)^{2}<<1$, $f\approx 1$ for $(L/2a)^{2}<<1$ and
$f\rightarrow (2a/L)^{2}$ for $(L/2a)^{2}>>1$.
For motion parallel to the disc edge,
\begin{equation}\label{A12}
f=\bigg(\frac{2a}{L}\bigg)^{2}e^{-L^{2}/2a^{2}}I_{1}(L^{2}/2a^{2})\bigg(\frac{2a}{b}\bigg)^{2}
\bigg[\frac{b}{2a}\int_{-b/2a}^{b/2a}dxe^{-x^{2}}-1+e^{-(b/2a)^{2}}\bigg]
\end {equation}
\noindent where $f\rightarrow(4/\sqrt{\pi})(a/L)^{3}$ for $(b/2a)^{2}<<1$ and $(L/2a)^{2}>>1$.
Eqs. (A11, A12) can also be understood as proportional to the effective collapse rate.
We use an alternative expression for the rate, equivalent to that given in the previous paragraph,
appropriate for an object with a dimension ($b$ in this case) less than $a$. It is
rate $\sim$ (number of particles in a cell)$^{2}\cdot$(number of uncovered cells), where a cell
is a cube of dimension $a$ on each side. Here each cell has occupied
volume $ba^{2}$ so the number of particles/cell $=(Nba^{2}/\pi L^{2}b)\sim (a/L)^{2}$.
For displacement $a>>b$ perpendicular to the face, all the cells---$\pi L^{2}/a^{2}$ of them---are uncovered,
so we obtain the collapse rate $\sim (a/L)^{4}(L/a)^{2}=(a/L)^{2}$. For motion parallel to the face,
displacement $a$ uncovers the cells lying on the circumference of the disc, $\approx 2\pi L/a$ of them, giving
the collapse rate $\sim (a/L)^{4}(L/a)=(a/L)^{3}$.
\section {Wavepacket Width Of Center Of Mass in CSL}
The CSL evolution equation for the normalized statevector in Stratonovitch form (so manipulations
can be performed using the usual rules of calculus) is
\begin{mathletters}
\begin{equation}\label{B1a}
{d\over dt}|\psi,t\rangle_w=\bigg\{ -iH+
\bigg[\int d{\bf x} G({\bf x})w({\bf x},t)-
\lambda \int d{\bf x} \big[ G^{2}({\bf x})-
\thinspace _{w}\langle \psi,t|G^{2}({\bf x})|\psi,t\rangle_w\big]\bigg]\bigg\}|\psi,t\rangle_w
\end{equation}
\begin{equation}\label{B1b}
G({\bf x})\equiv{1\over (\pi a^{2})^{3/4}}\sum_{j=1}^{N} \bigg[ e^{-({\bf X}_{j}-{\bf x})^{2}/2a^{2}}\thinspace
-\thinspace_{w}\langle \psi,t|e^{-({\bf X}_{j}-{\bf x})^{2}/2a^{2}}|\psi,t\rangle_w\bigg]
\end{equation}
\end{mathletters}
\noindent where ${\bf X}_{j}$ is the position operator for the $j$th nucleon,
$w({\bf x},t)=dB({\bf x},t)/dt$ is standard white noise and $B({\bf x},t)$ is standard Brownian motion
($\overline {w({\bf x},t)} =0$, $\overline {w({\bf x},t)w({\bf x}',t')}=
\lambda \delta ({\bf x}-{\bf x}')\delta (t-t'))$. We extract the equation for the CM wavefunction
just as in Appendix A whose notation is
used here (Eq. (A1) for the density matrix can readily be derived from Eq. (B1)).
Again, we assume that the statevector can be written as a direct product of the
internal statevector $|\psi_{int},t\rangle$ and the CM statevector $|\phi,t\rangle_w$. We suppose that
$|\psi_{int},t\rangle$ obeys the usual Schr\"odinger equation (thereby neglecting the CSL excitation of atoms
and nuclei) with Hamiltonian $H_{int}$, so the complete Hamiltonian is $H={\bf P}^{2}/2M+H_{int}$,
where ${\bf P}$ is the CM momentum operator. Using this in Eq. (B1)
with ${\bf X}_{j}={\bf R}_{j}+{\bf Q}$, multiplying by $\int dr\langle\psi_{int},t|r\rangle\langle r|$,
employing the localized nature of nucleons so, e.g.,
$\int dr|\langle\psi_{int},t|r\rangle|^{2}F({\bf r}_{j})\approx F({\bf z}_{j})$ where ${\bf z}_{j}$
is the mean position of the $j$th nucleon, and then approximating
$\sum_{j} F({\bf z}_{j})\approx (N/V)\int_{V}d{\bf z}F({\bf z})$ results in
\begin{mathletters}
\begin{equation}\label{B2a}
{d\over dt}\langle {\bf q}|\phi,t\rangle_w=\bigg\{ -i{\bigtriangledown ^{2}\over 2M}+
\bigg[ \int d{\bf x} g({\bf x}-{\bf q})w({\bf x},t)-
\lambda \int d{\bf x} \
\big[ g^{2}({\bf x}-{\bf q})-
\thinspace _{w}\langle \psi,t|g^{2}({\bf x}-{\bf q})|\psi,t\rangle_w\big]\bigg]\bigg\}|\phi,t\rangle_w
\end{equation}
\begin{equation}\label{B2b}
g({\bf x}-{\bf q})\equiv {1\over (\pi a^{2})^{3/4}}\bigg( {N\over V} \bigg) \int_{V} d {\bf z}
\bigg[ e^{-({\bf z}+{\bf q}-{\bf x})^{2}/2a^{2}}\thinspace
-\thinspace_{w}\langle \phi,t|e^{-({\bf z}+{\bf q}-{\bf x})^{2}/2a^{2}}|\phi,t\rangle_w\bigg]
\end{equation}
\end{mathletters}
\noindent (Eq. (A4) for the CM density matrix can readily be derived from Eq. (B2)).
\subsection{A Sphere's Equilibrium CM Wavepacket Width And The Time To Reach It}
Eq. (B2), applied to a sphere of radius $R$, is the starting point for our calculation.
We shall consider only cases where, for
each $|\phi,t\rangle_w$, the squared wavepacket width $s^{2}(t)\equiv
\thinspace_{w}\langle \phi,t|{Q^{j}}^{2}|\phi,t\rangle_w-{\thinspace_{w}\langle \phi,t|Q^{j}|\phi,t\rangle_w}^{2}
=\thinspace_{w}\langle \phi,t|[{Q^{j}-\langle Q^{j}\rangle}]^{2}|\phi,t\rangle_w$ is much less than
$a^{2}$ (note that $s$ has no subscript $j$ because we assume its initial spherical symmetry
which is maintained thereafter). At the end of section III it is shown
that $s<<a$ implies $R>10^{-6}$cm.
We expand the exponents in Eq. (B2b) in
powers of $[Q^{j}-\langle Q^{j}\rangle]/a$, retaining only the leading term:
\begin{equation}\label{B3}
g({\bf x}-{\bf q})\approx {1\over (\pi a^{2})^{3/4}}\bigg( {N\over V} \bigg) \int_{V} d {\bf z}
e^{-({\bf z}+\langle{\bf Q}\rangle -{\bf x})^{2}/2a^{2}}
({\bf z}+\langle{\bf Q}\rangle -{\bf x})\cdot({\bf q}-\langle{\bf Q}\rangle)/a^{2}.
\end{equation}
The solution of Eq. (B2a), for a short time $\Delta t$, can be written (with use of Eq. (B3)) as
\begin{eqnarray}\label{B4}
&& \langle {\bf q}|\phi,t+\Delta t\rangle_{w}
=\exp\bigg[ i\Delta t {\bigtriangledown ^{2}\over 2M}
-{1\over a^{2}(\pi a^{2})^{3/4}}({\bf q}-\langle{\bf Q}\rangle)\cdot
\bigg( {N\over V} \bigg)\int_{V} d {\bf z}\int d {\bf x}({\bf z}-{\bf x})e^{-({\bf z}-
{\bf x})^{2}/2a^{2}}dB({\bf x},t) \nonumber\\
&&\qquad\qquad\qquad\qquad\qquad\qquad-\lambda \Delta t {N^{2}\over 2a^{2}}f(R/a)
\big[({\bf q}-\langle{\bf Q}\rangle)^{2}-
\thinspace _{w}\langle \psi,t|({\bf q}-\langle{\bf Q}\rangle)^{2}|\psi,t\rangle_w\big]\bigg]\langle
{\bf q}|\phi,t\rangle_{w}
\end{eqnarray}
\noindent (note the replacement of ${\bf x}-\langle{\bf Q}\rangle$ by ${\bf x}$ as dummy integration variable and the
concomitant use of translation invariance of $w({\bf x},t)$), where $f(R/a)$ is given by Eq. (A8). Eq. (B4)
shows that a gaussian wavefunction at time $t$ is taken into a gaussian wavefunction at time $t+\Delta t$.
Although we could deal with a more general class of wavefunction, the results are the same and the argument is simpler
if we restrict ourselves to the complex gaussian wavefunction
\begin{equation}\label{B5}
\langle {\bf q}|\phi,t\rangle_{w}=A e^{-({\bf q}-{\bf b})^{2}/4\sigma^{2}},
\qquad A\equiv (2\pi \sigma^{2}{\sigma ^ {*}}^{2}/\sigma_{R}^{2})^{-3/4}e^{-b_{I}^{2}/4\sigma_{R}^{2}}.
\end{equation}
\noindent In Eq. (B5), ${\bf b}={\bf b}_{R}+i{\bf b}_{I}$, $\sigma^{2}=\sigma_{R}^{2}+i\sigma_{I}^{2}$
are complex functions of time. Using this wavefunction one can calculate various
expectation values involving the CM position and momentum. It follows from Eq. (B5) that
\begin{eqnarray}\label{B6}
&&\qquad\qquad\qquad\qquad\qquad\qquad
|\langle {\bf q}|\phi,t\rangle_{w}|^{2}=(2\pi s^{2})^{-3/4} e^{-({\bf q}-\langle{\bf Q}\rangle)^{2}/2 s^{2}},\nonumber\\
&&\quad \langle{\bf Q}\rangle ={\bf b}_{R}+{\bf b}_{I}\sigma_{I}^{2}/\sigma_{R}^{2},
\quad \langle{\bf P}\rangle ={\bf b}_{I}2\sigma_{R}^{2},
\quad s^{2}\equiv\langle (Q^{j}-\langle Q^{j}\rangle)^{2}\rangle=\sigma_{R}^{2}+\sigma_{I}^{4}/\sigma_{R}^{2},
\quad\langle (P^{j}-\langle P^{j}\rangle)^{2}\rangle=1/4 \sigma_{R}^{2}.
\end{eqnarray}
\noindent We note that
\[ e^{i\Delta t \bigtriangledown ^{2}/2M}\langle {\bf q}|\phi,t\rangle_{w}=
A e^{-({\bf q}-{\bf b})^{2}/4[\sigma^{2}+i(\Delta t/2M)]}
\approx e^{({\bf q}-{\bf b})^{2}i(\Delta t/8M\sigma^{4})}\langle {\bf q}|\phi,t\rangle_{w}. \]
\noindent Putting this into Eq. (B4) and equating the coefficients of ${\bf q}^{2}$ and ${\bf q}$ results in
\begin{mathletters}
\begin{equation}\label{B7a}
{1\over 4}{d\over dt}{1\over\sigma^{2}}=-{i\over 8M\sigma^{4}}+ {\lambda N^{2}\over2a^{2}}f
\end{equation}
\begin{equation}\label{B7b}
{1\over 2}{d\over dt}{{\bf b}\over\sigma^{2}}=-{i{\bf b}\over 4M\sigma^{4}}+
{\lambda N^{2}\over a^{2}}\langle {\bf Q}\rangle f-
{1\over a^{2}(\pi a^{2})^{3/4}}
{N\over V}\int_{V} d {\bf z}\int d {\bf x}({\bf z}-{\bf x})e^{-({\bf z}-{\bf x})^{2}/2a^{2}}w({\bf x},t)
\end{equation}
\end{mathletters}
First, consider Eq. (B7a):
\begin{equation}\label{B8}
{d\over dt}\sigma^{2}={i\over 2M}- {2\lambda N^{2}\over a^{2}}f\sigma^{4}
\end{equation}
\noindent It has no stochastic part and may be immediately solved.
As $t\rightarrow \infty$, where $d\sigma^{2}(t)/dt=0$,
according to Eq. (B8),
\[ \sigma^{2}(\infty)=(a/2N)(\hbar/2M\lambda f)^{1/2}(1+i) \]
\noindent so, by Eq. (B6), the asymptotic squared wavepacket width is
\begin{equation}\label{B9}
s^{2}(\infty)\equiv s_{\infty}^{2} =(a/N)(\hbar/2M\lambda f)^{1/2}.
\end{equation}
\noindent This result can be obtained by a simple physical argument given in section III (following Eq. (3.5)).
Introducing $\sigma^{2}(\infty)$ into Eq. (B8), together with
\begin{equation}\label{B10}
\tau_{s}\equiv Ms_{\infty}^{2}/\hbar
\end{equation}
\noindent converts Eq. (B8) to
\begin{equation}\label{B11}
{d\over d(t/\tau_{s})}\bigg({\sigma\over s_{\infty}}\bigg) ^{2}={i\over 2}- \bigg({\sigma\over s_{\infty}}\bigg) ^{4}
\end{equation}
\noindent with solution
\begin{equation}\label{B12}
\sigma^{2}(t)=s_{\infty}^{2}{(1+i)\over 2}
\Bigg[{{\sigma^{2}(0)\over s_{\infty}^{2}}\big[ e^{t(1+i)/\tau_s}+1\big]+{(1+i)\over 2}\big[ e^{t(1+i)/\tau_s}-1\big]
\over {\sigma^{2}(0)\over s_{\infty}^{2}}\big[ e^{t(1+i)/\tau_s}-1\big]+{(1+i)\over 2}\big[ e^{t(1+i)/\tau_s}+1\big]}\Bigg]
\end{equation}
\noindent which shows the approach to equilibrium.
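As a consistency check, Eq. (B12) can be compared against a direct numerical integration of the width equation; a sketch in units of $s_{\infty}^{2}$ and $\tau_{s}$, with the sign of the free-spreading term chosen so that the equilibrium value is $(1+i)/2$, as required by Eqs. (B9) and (B12):

```python
import cmath

# Sketch: integrate du/ds = i/2 - u^2 (u = sigma^2/s_inf^2, s = t/tau_s)
# with RK4 and compare with the closed form of Eq. (B12).
def rhs(u):
    return 0.5j - u * u

def closed_form(u0, s):
    ueq = (1.0 + 1.0j) / 2.0                 # equilibrium width
    E = cmath.exp(s * (1.0 + 1.0j))
    return ueq * (u0 * (E + 1.0) + ueq * (E - 1.0)) / \
                 (u0 * (E - 1.0) + ueq * (E + 1.0))

u0, s_end, n = 2.0 + 0.0j, 5.0, 5000
h = s_end / n
u = u0
for _ in range(n):                           # classical RK4 step
    k1 = rhs(u)
    k2 = rhs(u + 0.5 * h * k1)
    k3 = rhs(u + 0.5 * h * k2)
    k4 = rhs(u + h * k3)
    u += (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

print(abs(u - closed_form(u0, s_end)))   # tiny: (B12) solves the width equation
print(abs(u - (0.5 + 0.5j)))             # small: approach to equilibrium
```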
Thus we have achieved the main purpose of this appendix, to obtain the cm wavepacket equilibrium width (B9)
and the characteristic time to reach that width (B10). We emphasize that these results apply to {\it every}
cm wavepacket, since they are independent of the particular realization of the fluctuating field $w$
encountered by a sphere.
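The approach to equilibrium can also be sketched numerically. The following minimal check (our own illustration, not from the text) integrates the dimensionless width equation for $x=(\sigma/s_{\infty})^{2}$ with the sign convention whose stable fixed point is the asymptotic value $s_{\infty}^{2}(1+i)/2$ of Eq. (B12):

```python
# Integrate dx/d(tau) = i/2 - x**2, where x = (sigma/s_inf)**2 and
# tau = t/tau_s, and confirm relaxation to the stable fixed point
# (1+i)/2, the asymptotic value appearing in Eq. (B12).

def rhs(x):
    return 0.5j - x * x

def rk4_relax(x0, tau_max=20.0, dt=1e-3):
    """Fourth-order Runge-Kutta integration of the width equation."""
    x, t = x0, 0.0
    while t < tau_max:
        k1 = rhs(x)
        k2 = rhs(x + 0.5 * dt * k1)
        k3 = rhs(x + 0.5 * dt * k2)
        k4 = rhs(x + dt * k3)
        x += (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        t += dt
    return x

x_inf = rk4_relax(x0=1.0 + 0.0j)   # start from a real squared width
print(x_inf)                       # approaches (0.5 + 0.5j)
```

The relaxation occurs on the scale $\tau_{s}$, as Eq. (B12) indicates.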
\subsection{CM Translational Diffusion Revisited}
However, we do have Eq. (B7b) which does depend upon $w$ and
which gives us, via Eq. (B6), each individual cm wavefunction's mean position $\langle{\bf Q}\rangle$
(and mean momentum $\langle{\bf P}\rangle$), enabling us to understand in detail the ensemble average
$\overline {{{\langle {Q^{j}}^{2}\rangle}}}$ in Eq. (A10).
We first note that the stochastic term in Eq. (B7b) has no dependence on the dynamical variables
$\sigma^{2}$ and ${\bf b}$. Since the ensemble average of the product of this term's $i$th and $j$th components is
$\delta_{ij}\lambda N^{2}f\delta (t-t')/2a^{2}$ we may write the term as
$(\lambda N^{2}f/2a^{2})^{1/2}{\bf w}(t)$ where the $w_{j}(t)$'s are independent white noise,
$\overline{w_{i}(t)w_{j}(t')}=\delta_{ij}\delta (t-t')$. We may then write Eq. (B7b) as (with $\sigma_{R}^{2}\equiv{\rm Re}\thinspace\sigma^{2}$)
\begin{equation}\label{B13}
{d\over dt}{\bf b}=-{i\sigma^{4}\over \sigma_{R}^{2}}{{\bf b}_{I}\over s_{\infty}^{2}\tau_{s}}
+{\sigma^{2}{\bf w}(t)\over s_{\infty}\tau_{s}^{1/2}}.
\end{equation}
Suppose we follow an individual wavefunction for a time $\gg\tau_{s}$ until (say, at time $t=0$) it achieves
its equilibrium width $s_{\infty}$ with $\sigma^{2}=s_{\infty}^{2}(1+i)/2$, so Eq. (B13) simplifies to
\begin{equation}\label{B14}
d{\bf b}={{\bf b}_{I}\over \tau_{s}}dt
+{(1+i)\over 2}{s_{\infty}\over\tau_{s}^{1/2}}d{\bf B}(t)
\end{equation}
\noindent where ${\bf B}(t)$ is Brownian motion (${\bf w}(t)=d{\bf B}(t)/dt$). The solution of Eq. (B14) is
\begin{equation}\label{B15}
b^{j}_{R}(t)={s_{\infty}\over 2\tau_{s}^{3/2}}\int_{0}^{t}dt'B^{j}(t')+{s_{\infty}\over 2\tau_{s}^{1/2}}B^{j}(t), \quad
b^{j}_{I}(t)={s_{\infty}\over 2\tau_{s}^{1/2}}B^{j}(t)
\end{equation}
\noindent (we have assumed $b^{j}_{R}(0)=b^{j}_{I}(0)=0$).
It follows from Eqs. (B15) and (B6) that
\begin{equation}\label{B16}
\langle{\bf Q}\rangle=\frac{s_{\infty}}{2\tau_{s}^{3/2}}\int_{0}^{t}dt'{\bf B}(t')+{s_{\infty}\over \tau_{s}^{1/2}}{\bf B}(t),
\qquad \langle{\bf P}\rangle=\frac{1}{2s_{\infty}\tau_{s}^{1/2}}{\bf B}(t)
\end{equation}
\noindent which explicitly shows the diffusive nature of $\langle{\bf Q}\rangle$ and $\langle{\bf P}\rangle$.
We can now find
$\overline{\langle {Q^{j}}^{2}\rangle}=s_{\infty}^{2}+\overline{\langle Q^{j}\rangle^{2}}$ and compare with Eq. (A10).
Recalling that $\overline{{B^{j}}^{2}(t)}=t$ and $\overline{B^{j}(t)B^{j}(t')}=$min$(t,t')$, we obtain
\begin{equation}\label{B17}
\overline{\langle {Q^{j}}^{2}\rangle}=s_{\infty}^{2}+s_{\infty}^{2}\bigg[{t\over \tau_{s}}+
{t^{2}\over 2\tau_{s}^{2}}+{t^{3}\over 12\tau_{s}^{3}}\bigg]
\end{equation}
\noindent which is identical to Eq. (A10) for a wavefunction which has equilibrium width $s_{\infty}$ at $t=0$.
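The polynomial growth in Eq. (B17) can be checked directly from Eq. (B16) by Monte-Carlo simulation of the Brownian path. The sketch below (our own, in units $s_{\infty}=\tau_{s}=1$) compares the ensemble average of $\langle Q^{j}\rangle^{2}$ with the bracket of Eq. (B17); only few-percent agreement is expected at this sample size:

```python
import random

# Monte-Carlo check of Eq. (B17) in units s_inf = tau_s = 1:
# Eq. (B16) gives <Q> = (1/2) * int_0^t dt' B(t') + B(t), so the
# ensemble average of <Q>^2 should be t + t^2/2 + t^3/12.
random.seed(0)

def mean_Q_squared(t=1.0, n_steps=200, n_paths=20000):
    dt = t / n_steps
    total = 0.0
    for _ in range(n_paths):
        B, intB = 0.0, 0.0
        for _ in range(n_steps):
            B += random.gauss(0.0, dt ** 0.5)   # Brownian increment
            intB += B * dt                      # running int_0^t B dt'
        Q = 0.5 * intB + B                      # Eq. (B16) with s_inf = tau_s = 1
        total += Q * Q
    return total / n_paths

t = 1.0
predicted = t + t ** 2 / 2 + t ** 3 / 12       # bracket of Eq. (B17)
print(mean_Q_squared(t), predicted)            # agree to a few percent
```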
\section{Rotational Diffusion in CSL}
Starting with Eq. (A1) for the evolution of the density matrix in CSL, we follow the lines of
argument in Appendix A. We assume here that the cm of a blob of matter is fixed but that it is
free to rotate about a fixed axis through an angle represented by the operator $\Theta$ with angular
momentum operator ${\cal L}$. We also assume that the density matrix is the direct product of
the internal density matrix and the orientation density matrix $\rho_{ang}$. We obtain, analogous to Eq. (A4),
\begin{eqnarray}\label{C1}
&&{\partial\over \partial t}\langle\theta|\rho _{ang} (t)|\theta'\rangle =
-i\langle\theta|\bigg[ {{\cal L}^{2}\over 2I},\rho_{ang}(t)\bigg] |\theta'\rangle\nonumber\\
&&\qquad -\lambda \bigg( {N\over V}\bigg)^{2}\int\int_{V} d{\bf z}d{\bf z}'
[\Phi({\bf z}(0)-{\bf z}'(0))-\Phi({\bf z}(\theta)-{\bf z}'(\theta'))]
\langle\theta|\rho_{ang}(t)|\theta'\rangle
\end{eqnarray}
\noindent where, denoting the rotation axis by $z_{3}$,
\begin{equation}\label{C2}
\Phi({\bf z}(\theta)-{\bf z}'(\theta'))=\exp -{1\over 4a^{2}}[{\bf z}^{2}+{\bf z}'^{2}
-2(z_{1}z'_{1}+z_{2}z'_{2})\cos (\theta-\theta ')-2(z_{1}z'_{2}-z_{2}z'_{1})\sin (\theta-\theta ')-2z_{3}z'_{3}].
\end{equation}
To find $\overline{\langle\Theta^{2}\rangle}(t)$, in analogy to Eqs. (A7), we take successive
traces of Eq. (C1):
\begin{mathletters}
\label{all C3}
\begin{equation}
{d\over dt} \overline {{{\langle {\Theta}^{2}\rangle}}} ={1\over I}\overline{{\langle {\cal L} \Theta +\Theta {\cal L}\rangle}}
\end{equation}
\begin{equation}
{d\over dt}{1\over I}\overline{{\langle {\cal L} \Theta + \Theta {\cal L}\rangle}} =
{2\over I^{2}}\overline{{\langle {{\cal L}}^{2}\rangle}}
\end{equation}
\begin{equation}
{d\over dt}{2\over I^{2}}\overline{{\langle {{\cal L}}^{2}\rangle}}=
{\lambda\over 2} \bigg[{\hbar\over ma^{2}}\bigg]^{2}f_{ROT}
\end{equation}
\end{mathletters}
\noindent where ${\bf z}_{\bot}\equiv(z_{1},z_{2})$ and $f_{ROT}$ is the dimensionless geometrical factor
\begin{equation}\label{C4}
f_{ROT}=2\bigg[{\frac{Ma}{IV}}\bigg]^{2}\int\int_{V} d{\bf z}d{\bf z}'
\big[{\bf z}_{\bot}\cdot{\bf z}'_{\bot}-{1\over 2a^{2}}({\bf z}_{\bot}\times{\bf z}'_{\bot})^{2}\big]\Phi({\bf z}-{\bf z}').
\end{equation}
To see that $f_{ROT}$ (and hence the diffusion rate in Eqs. (C3)) vanishes if the blob is rotationally symmetric about
the $z_{3}$-axis (i.e., a sphere, or a disc with the $z_{3}$-axis perpendicular to its face), we write (C4) as
\begin{equation}\label{C5}
f_{ROT}=-4\bigg[{\frac{Ma^{2}}{IV}}\bigg]^{2}
\int_{V}d{\bf z}({\bf z}_{\bot}\times\nabla_{{\bf z}_{\bot}})^{2}\int_{V} d{\bf z}'\Phi({\bf z}-{\bf z}').
\end{equation}
\noindent Rotational symmetry implies that the integral over ${\bf z}'$ is just a function of ${\bf z}^{2}$
and then the integral vanishes since $({\bf z}_{\bot}\times\nabla_{{\bf z}_{\bot}}){\bf z}^{2}=0$.
It follows from Eqs. (C3) that the mean square angular diffusion has the time dependence
\begin{equation}\label{C6}
\overline {{{\langle {\Theta}^{2}\rangle}}}=\langle\bigg( \Theta+\frac{{\cal L}t}{I}\bigg) ^{2}\rangle(0)+
\lambda \frac{t^{3}}{12}\bigg[\frac{\hbar}{ma^{2}}\bigg]^{2}f_{ROT}.
\end{equation}
We wish to apply Eq. (C6) to a disc of radius $L$ and thickness $b$ ($I=(ML^{2}/4)[1+(b^{2}/3L^{2})]$),
with the rotation axis parallel to the face of the disc, so $f_{ROT}$ must be calculated for this case.
The double volume integral in Eq. (C5) can be converted to a double integral over the surface of the disc
by using the divergence theorem:
\begin{equation}\label{C7}
f_{ROT}(\alpha, \beta)=\bigg[\frac{2}{[1+(\beta^{2}/3\alpha^{2})]\beta\alpha^{4}}\bigg]^{2}
\int_{A}d{\bf A}\cdot ({\bf r}\times {\bf k})\int_{A}d{\bf A}'\cdot ({\bf r}'\times {\bf k})e^{-({\bf r}-{\bf r}')^{2}}
\end{equation}
\noindent where ${\bf k}$ is the unit vector along the axis of rotation and $\alpha\equiv L/2a$, $\beta\equiv b/2a$.
Calling the contribution of the two disc faces $f_{1}$, the two disc edges $f_{2}$, and
the edge-face contribution $f_{3}$, we obtain
\begin{mathletters}
\label{all C8}
\begin{equation}
f_{ROT}(\alpha, \beta)=\bigg[\frac{4}{[1+(\beta^{2}/3\alpha^{2})]\beta\alpha^{4}}\bigg]^{2}[f_{1}+f_{2}+f_{3}]
\end{equation}
\begin{equation}
f_{1}=[1-e^{-\beta^{2}}]\int_{0}^{\alpha}r^{2}dr\int_{0}^{\alpha}r'^{2}dr'I_{1}(2rr')e^{-(r^{2}+r'^{2})}
\end{equation}
\begin{equation}
f_{2}=(1/2)\alpha^{2}e^{-2\alpha^{2}}I_{1}(2\alpha^{2})\int_{-\beta/2}^{\beta/2}ydy\int_{-\beta/2}^{\beta/2}y'dy'e^{-(y-y')^{2}}
\end{equation}
\begin{equation}
f_{3}=-2\alpha e^{-\alpha^{2}}\int_{0}^{\alpha}r^{2}dre^{-r^{2}}I_{1}(2\alpha r)\int_{-\beta/2}^{\beta/2}ydye^{-(\beta/2-y)^{2}}
\end{equation}
\end{mathletters}
\noindent where $I_{1}$ is the modified Bessel function of the first kind.
A graph of $f_{ROT}(\alpha, \beta)$ vs $\alpha$ parametrized by
various values of $\beta$ is given in FIG. 1.
We note that, for a thin disc ($\beta\ll\alpha$ and $\beta\ll 1$), $f_{1}$ is the leading term in Eq. (C8a), which becomes
\begin{equation}\label{C9}
f_{ROT}(\alpha)\approx \bigg(\frac{2}{\alpha}\bigg)^{4}
\int_{0}^{\alpha}r^{2}dr\int_{0}^{\alpha}r'^{2}dr'I_{1}(2rr')e^{-(r^{2}+r'^{2})}.
\end{equation}
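Eq. (C9) is straightforward to evaluate numerically. As a sketch (our own illustration; note that for $\alpha\ll 1$, $I_{1}(2rr')\approx rr'$ and the exponential $\approx 1$, so $f_{ROT}(\alpha)\approx\alpha^{4}$), using the power series for the modified Bessel function $I_{1}$:

```python
import math

# Midpoint-rule evaluation of the thin-disc formula, Eq. (C9).
# For alpha << 1, I_1(2 r r') ~ r r' and the exponential ~ 1,
# so f_ROT(alpha) -> alpha^4.
_I1_COEF = [1.0 / (math.factorial(k) * math.factorial(k + 1)) for k in range(30)]

def I1(x):
    """Modified Bessel function I_1 via its power series."""
    y = x / 2.0
    s, p = 0.0, y
    for c in _I1_COEF:
        s += c * p
        p *= y * y
    return s

def f_rot(alpha, n=100):
    h = alpha / n
    total = 0.0
    for i in range(n):
        r = (i + 0.5) * h
        for j in range(n):
            rp = (j + 0.5) * h
            total += (r * r) * (rp * rp) * I1(2 * r * rp) * math.exp(-(r * r + rp * rp))
    return (2.0 / alpha) ** 4 * total * h * h

print(f_rot(0.1), 0.1 ** 4)   # close to the alpha^4 limit
```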
\section{Thermal Radiation Viscosity Factor For A Dielectric Sphere}
In order to compare the Brownian diffusion of an object in
a thermal radiation bath with CSL diffusion, it is only necessary to
find the viscosity factor $\xi$ for this situation and put it into the Brownian motion equations of section II.
An object moving with respect to thermal radiation with speed $v$
feels a drag force $-\xi v$ because it receives
more momentum from the photons it approaches than from those from which it recedes. We have not
been able to find $\xi$ for a dielectric sphere in the literature (the closest has been the force on an oscillator\cite{Hoye}) so
we give it here. Actually, after this Appendix was written, we decided that the experiment we propose would concern
a conducting disc rather than a dielectric sphere! The result for a conducting sphere is not quite the same as that for
a dielectric sphere with dielectric constant equal to infinity: although that is the appropriate limit for electric field
behavior, a conducting sphere's magnetic behavior is also
important in considering the scattering cross-section of electromagnetic radiation (necessary for this calculation).
And, of course, a sphere is not a disc. However, the result obtained for $\xi$ will be representative, i.e.,
the same up to a numerical factor not too far from 1, when the dielectric constant goes to infinity and the radius of the
sphere is replaced by the radius of the disc.
\subsection{Viscosity Factor For a Mirror}
For expositional ease and purposes of comparison we shall first obtain Einstein's result for a mirror
moving perpendicular to its face\cite{EinsteinandHopf,Einstein}, as seen from the laboratory frame
in which the radiation is thermal. We shall use properties of photons (which Einstein had not yet obtained
as he was in the process of establishing these, so he used classical electromagnetism).
As is well known, at temperature $T$ the mean number of photons in a mode of frequency $\nu_{0}$ (the subscript 0 refers to the laboratory frame)
is $[\exp\beta h \nu_{0}-1]^{-1}$ (where $\beta\equiv (kT)^{-1}$). Since the number of photon modes/vol
of frequency $\nu_{0}$ in the range $d\nu_{0}$ moving in a direction ($\theta_{0}$, $\phi_{0}$)
within solid angle $d\Omega_{0}$ is $2\nu_{0}^{2}d\nu_{0}d\Omega_{0}/c^{3}$ (the factor 2 is for the two polarizations),
the mean photon number/vol-freq-solid angle is
\begin{equation}\label{D1}
n(\nu_{0})=2(\nu_{0}^{2}/c^{3})[\exp\beta h \nu_{0}-1]^{-1}
\end{equation}
First we find the momentum transferred to the mirror by a colliding photon.
Einstein considered a mirror which, in its rest frame, is perfectly reflecting only for frequencies in the range
$(\nu, \nu+d\nu)$ (no subscript refers to the rest frame of the mirror) and perfectly transmitting otherwise.
Let the mirror (of area $A$) move
in the $z$-direction with speed $v$ away from a photon of momentum $p_{0}=h\nu_{0}/c$ whose
direction of motion makes an angle $\theta_{0}$ with respect to the $z$-axis.
The photon will be reflected only if it has frequency $\nu$ in the rest frame of the mirror.
From the energy and momentum transformations of special relativity (all calculations are to order $v/c$),
\begin{equation}\label{D2}
\nu=\nu_{0}[1-(v/c)\cos\theta_{0}], \qquad \nu\cos\theta =\nu_{0}[\cos\theta_{0}-(v/c)]
\end{equation}
\noindent where $\theta$ is the angle the photon makes with the normal to the mirror in the mirror's rest frame.
In this frame the photon's incident and outgoing (normal) momenta are respectively
$(h\nu /c)\cos\theta$ and $-(h\nu /c)\cos\theta$ so the momentum imparted (normal) to the mirror is
$\Delta P=2(h\nu /c)\cos\theta$. The difference of momenta of a nonrelativistic object is a Galilean invariant.
Therefore, $\Delta P=\Delta P_{0}$ which, by (D2) may be written as
\begin{equation}\label{D3}
\Delta P_{0}=2(h\nu_{0}/c)[\cos\theta_{0}-(v/c)].
\end{equation}
Next we find the number of these photons colliding with the mirror in time $dt$.
In the mirror rest frame this is ${\bf J}\cdot {\bf A}dt$ where ${\bf J}$ is the particle number flux and ${\bf A}=A{\bf {\hat z}}$
with $A$ the area of the mirror. This is
the same number which collides with the mirror in the laboratory frame in time $dt$.
The four-current transformation equation gives
\begin{equation}\label{D4}
{\bf J}\cdot {\bf A}=({\bf J}_{0}-\rho_{0}{\bf v})\cdot {\bf A}=n(\nu_{0})d\nu_{0}d\Omega_{0}(c\cos\theta_{0}-v)A.
\end{equation}
Thus, by Eqs. (D3, D4),
the momentum transferred to the mirror in the laboratory frame in time $dt$, expressed in
laboratory frame coordinates, is
\begin{equation}\label{D5}
-vdtd\xi\equiv{\bf J}\cdot {\bf A}dt\Delta P_{0}=
n(\nu_{0})d\nu_{0}d\Omega_{0}c[\cos\theta_{0}-(v/c)]Adt2(h\nu_{0}/c)[\cos\theta_{0}-(v/c)].
\end{equation}
It remains to integrate Eq. (D5) over all $\Omega_{0}$ but, first, we must express $\nu_{0}$ in terms of $\nu$ and
$\theta_{0}$. From the inverse of Eq. (D2)
we have $\nu_{0}=\nu[1+(v/c)\cos\theta_{0}]$ so we obtain
\[
d\nu_{0}=d\nu[1+(v/c)\cos\theta_{0}], \qquad \nu_{0}n(\nu_{0})=
\nu\big[1+\frac{v}{c}\cos\theta_{0}\big]\thinspace n\bigg(\nu\big[1+\frac{v}{c}\cos\theta_{0}\big]\bigg)=\nu n(\nu)+(\nu n(\nu))'\nu(v/c)\cos\theta_{0}+O\big((v/c)^{2}\big).
\]
\noindent Then, we must remember that the above analysis is predicated upon the mirror receding
from these photons (so the range of $\theta_{0}$ is (0, $\pi /2$)). The momentum
imparted by the photons on the other side of the mirror is given by the negative of the right hand side of
Eq. (D5) with the replacement
$v\rightarrow-v$. Thus, the contribution from all photons to the force is
\begin{eqnarray}\label{D6}
-vd\xi &&=d\nu 2hA\int_{0}^{\pi/2}d\Omega_{0}\bigg\{
\nu n(\nu)[\cos^{2}\theta_{0}-(v/c)(2\cos\theta_{0}-\cos^{3}\theta_{0})]+
(\nu n(\nu))'\nu(v/c)\cos^{3}\theta_{0}\bigg\}\nonumber\\
&&\qquad-(v\rightarrow-v)\nonumber\\
&&=-vd\nu2\pi(h/c)A[3\nu n(\nu)-\nu(\nu n(\nu))'].
\end{eqnarray}
This is Einstein's result. Putting Eq. (D1) for $n(\nu)$ ($\nu=\nu_{0}$ to zeroth order in $v/c$)
into Eq. (D6) yields
\begin{equation}\label{D7}
d\xi=4\pi \bigg(\frac{\nu}{c}\bigg)^{3}\bigg(\frac{h\nu}{kT}\bigg)\bigg(\frac{h}{c}\bigg)
\frac{e^{\beta h\nu}}{[e^{\beta h\nu}-1]^{2}}Ad\nu.
\end{equation}
\noindent Of course, $\nu$ may be integrated over to obtain the viscosity factor for a mirror which
is a perfect reflector at all frequencies:
\begin{equation}\label{D8}
\xi=4\pi h\bigg(\frac{kT}{hc}\bigg)^{4}A\int_{0}^{\infty}dz\frac{z^{4}e^{z}}{(e^{z}-1)^{2}}=
\frac{2\pi^{2}}{15}\hbar\bigg(\frac{kT}{\hbar c}\bigg)^{4}A.
\end{equation}
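The integral in Eq. (D8) is the standard Bose integral: writing $e^{z}/(e^{z}-1)^{2}=-d[1/(e^{z}-1)]/dz$ and integrating by parts gives $\int_{0}^{\infty}dz\thinspace z^{4}e^{z}/(e^{z}-1)^{2}=4!\thinspace\zeta(4)=4\pi^{4}/15$, which is what converts the prefactor into $(2\pi^{2}/15)\hbar(kT/\hbar c)^{4}A$. A quick numerical confirmation (our own sketch):

```python
import math

def bose_integral(n, zmax=80.0, steps=200000):
    """Midpoint evaluation of int_0^inf z^n e^z / (e^z - 1)^2 dz.
    Integration by parts gives the exact value n! * zeta(n)."""
    h = zmax / steps
    total = 0.0
    for i in range(steps):
        z = (i + 0.5) * h
        e = math.exp(-z)                 # e^z/(e^z-1)^2 = e^-z/(1-e^-z)^2
        total += z ** n * e / (1.0 - e) ** 2
    return total * h

print(bose_integral(4), 4 * math.pi ** 4 / 15)   # both ~ 25.976
```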
\subsection{Viscosity Factor For A Dielectric Sphere}
Our discussion for a dielectric sphere (dielectric constant $\epsilon$, radius $R$,
moving in the $z$-direction with speed $v$) exactly parallels that for the mirror.
First we find the momentum transferred to the sphere by a colliding photon. In the rest frame of the sphere,
the scattered radiation has a dipole pattern so the radiation scattered in two opposite directions carries no net momentum. Thus,
for radiation of frequency $\nu$, insofar as momentum transfer is concerned, the sphere acts
like an absorber (of area equal to the total scattering cross-section $\sigma(\nu)$). Thus, the momentum {\it effectively} imparted (i.e., on average)
in the $z$-direction by an incident colliding photon
is $\Delta P=(h\nu/c)\cos\theta$. As in our previous discussion, since $\Delta P=\Delta P_{0}$,
the effective momentum imparted by a single photon in the laboratory frame is 1/2 of the value given in Eq. (D3).
Next we find the number of these photons colliding with the sphere in time $dt$. In the
sphere rest frame this is $J\sigma (\nu )dt$, where $J$ is the number flux: this is the same
number that collides with the sphere in the laboratory frame in time $dt$. To express this number in terms of laboratory frame
variables, we note that $J/c$ is the zeroth component of the current 4-vector and
$J\cos\theta$ is the component along the $z$-axis. Therefore the Lorentz transformation of the
zeroth component of the current 4-vector is $J/c=[J_{0}/c-(v/c^{2})J_{0}\cos\theta_{0}]$ or,
substituting for $J_{0}$,
\begin{equation}\label{D9}
J\sigma(\nu)dt=n(\nu_{0})d\nu_{0}d\Omega_{0}c[1-(v/c)\cos\theta_{0}]\sigma(\nu)dt.
\end{equation}
\noindent Note that Eq. (D9) differs from the parallel mirror equation (D4) in that radiation of
frequency $\nu$ incident from any direction sees the same cross-section of the sphere, while
this is not the case with the mirror.
Therefore, the momentum transferred in the $z$-direction in the laboratory frame in time $dt$ by these photons
is, by (D9) and half of (D3),
\begin{equation}\label{D10}
-vdtd\xi=J\sigma(\nu)dt\Delta P_{0}
=n(\nu_{0})d\nu_{0}d\Omega_{0}c[1-(v/c)\cos\theta_{0}]\sigma(\nu)dt(h\nu_{0}/c)[\cos\theta_{0}-(v/c)].
\end{equation}
\noindent As before, we express $d\nu_{0}$ and $n(\nu_{0})\nu_{0}$ in terms of $\nu$ and $\theta_{0}$ ($\sigma$
is already in terms of $\nu$) and integrate over all $\Omega _{0}$:
\begin{eqnarray}\label{D11}
-vd\xi &&=d\nu h\sigma(\nu)\int_{0}^{\pi}d\Omega_{0}\bigg\{
\nu n(\nu)[\cos\theta_{0}-(v/c)]+
(\nu n(\nu))'\nu(v/c)\cos^{2}\theta_{0}\bigg\}\nonumber\\
&&=-vd\nu(4\pi/3)(h/c)\sigma(\nu)[3\nu n(\nu)-\nu(\nu n(\nu))'].
\end{eqnarray}
\noindent This is 2/3 of the comparable expression (D6) for the mirror, with the
area $A$ replaced by the cross-section $\sigma(\nu)$.
The classically calculated cross-section (i.e., the total scattered energy/sec divided by the incident energy/sec-area)
for an electromagnetic wave of wavelength $>>R$ is\cite{Jackson}
\begin{equation}\label{D12}
\sigma(\nu)=\bigg(\frac{8\pi}{3}\bigg)\bigg(\frac{2\pi\nu}{c}\bigg)^{4}R^{6}\bigg[\frac{\epsilon -1}{\epsilon+2}\bigg]^{2}
\rightarrow \bigg(\frac{8\pi}{3}\bigg)\bigg(\frac{2\pi\nu}{c}\bigg)^{4}R^{6}
\end{equation}
\noindent where, for simplicity, we shall only use the limit of large $\epsilon$.
In Eq. (D12), $\sigma$ has been averaged over
incident polarizations and summed over scattered polarizations.
Putting (D1) for $n(\nu)$ and (D12) for $\sigma(\nu)$ into (D11) yields
\begin{equation}\label{D13}
d\xi=\bigg(\frac{8\pi}{3}\bigg)\bigg(\frac{\nu}{c}\bigg)^{3}\bigg(\frac{h\nu}{kT}\bigg)h
\frac{e^{\beta h\nu}}{[e^{\beta h\nu}-1]^{2}}d\nu\sigma(\nu)=
(2\pi)^{4}\bigg(\frac{8\pi}{3}\bigg)^{2}\bigg(\frac{\nu}{c}\bigg)^{7}
\bigg(\frac{h\nu}{kT}\bigg)\bigg(\frac{h}{c}\bigg)R^{6}
\frac{e^{\beta h\nu}}{[e^{\beta h\nu}-1]^{2}}d\nu.
\end{equation}
\noindent We remark that, if $d\nu\sigma(\nu)$ in the first equation of (D13)
is replaced by $\int_{0}^{\infty}d\nu\sigma(\nu)=\pi e^{2}/mc$, the sum rule for an
individual oscillator\cite{Jackson2} of mass $m$ and resonant frequency $\nu$, we obtain the
value of $\xi$ for a single oscillator given in reference\cite{Hoye}.
Upon integrating Eq. (D13) over $\nu$ we obtain the viscosity coefficient
\begin{equation}\label{D14}
\xi=\bigg(\frac{8}{9\pi}\bigg)\bigg(\frac{kT}{\hbar c}\bigg)^{8}\hbar R^{6}
\int_{0}^{\infty}dz z^{8}\frac{e^{z}}{[e^{z}-1]^{2}}=\frac{4(2\pi)^{7}}{135}\bigg(\frac{kT}{\hbar c}\bigg)^{8}\hbar R^{6}
\end{equation}
\noindent since the integral $=(2\pi)^{8}/60\approx 8!$. This result is used in Sections IIB and IVC.
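Both numerical claims can be confirmed directly: the integral equals $8!\thinspace\zeta(8)$, which is exactly $(2\pi)^{8}/60$, and $(8/9\pi)\cdot(2\pi)^{8}/60=4(2\pi)^{7}/135$ exactly. A sketch (our own):

```python
import math

# Check the two numerical claims following Eq. (D14):
# int_0^inf z^8 e^z/(e^z-1)^2 dz = 8! * zeta(8) = (2 pi)^8/60 (~ 8!),
# and the prefactor identity (8/(9 pi)) * (2 pi)^8/60 = 4 (2 pi)^7/135.
h, total = 80.0 / 400000, 0.0
for i in range(400000):
    z = (i + 0.5) * h
    e = math.exp(-z)                 # rewrite to avoid overflow at large z
    total += z ** 8 * e / (1.0 - e) ** 2 * h

exact = (2 * math.pi) ** 8 / 60
print(total, exact, math.factorial(8))   # ~ 40484 vs 8! = 40320
```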
\section {A Gravitational Proposal}\label{Appendix E}
Diosi\cite{Diosi} suggested a gravitationally based CSL-type collapse model with
the collapse rate $\sim G$. However, it
effectively had $a\approx$ the proton size and therefore too large a proton excitation rate, a flaw
corrected by Ghirardi, Grassi and Rimini\cite{GGR} who added the standard $a$ to the model.
Penrose\cite{others}, perhaps unwilling to commit to a nonfundamental parameter $a$ (however, see
Pearle and Squires\cite{PearleSquires2} for a ``derivation" of $a$ in terms of fundamental constants in the context of a
gravitationally based model) has a more modest proposal. His suggestion is that,
when quantum theory describes an object as being in a state of two superposed locations,
collapse of the state to one of those locations will take place in a time equal to $\hbar$ divided by the
gravitational energy required to move two real copies of the object from a completely
overlapping configuration to these two locations. For example, consider a sphere of mass $M$ and
radius $R$. Since the gravitational energy of two such spheres displaced by a small distance $D<<R$ is
\[ U(D)=\frac{GM^{2}}{R}\bigg[-\frac{6}{5}+\frac{1}{2}\bigg(\frac{D}{R}\bigg)^{2}\bigg],
\]
\noindent then the time it takes a quantum state of a sphere in a superposition of
two states separated by the distance $D$ to collapse to one or the other state is
\begin{equation}\label{E.1}
\tau_{c}=2\hbar R^{3}/GM^{2}D^{2}
\end{equation}
This is a minimalist proposal, not a complete dynamical theory.
For example, it is silent on how to treat the collapse of the state of a sphere in a continuous
superposition of locations (i.e., the usual wavefunction description of the CM of a sphere).
Nonetheless, we shall have the temerity to make what we regard as a reasonable extrapolation to that situation,
in order to estimate the random walk entailed by this proposal.
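As a rough numerical illustration of Eq. (E.1) (the sphere parameters here are our own assumptions, chosen for definiteness: water density, $R=10^{-5}$ cm, displacement $D=R$):

```python
import math

# Eq. (E.1): tau_c = 2 hbar R^3 / (G M^2 D^2), for an assumed
# water-density sphere of radius R = 1e-5 cm displaced by D = R.
hbar = 1.055e-34                      # J s
G = 6.674e-11                         # m^3 kg^-1 s^-2
rho = 1000.0                          # kg/m^3 (assumed density)
R = 1e-7                              # m (= 1e-5 cm)
M = (4.0 / 3.0) * math.pi * rho * R ** 3
D = R
tau_c = 2 * hbar * R ** 3 / (G * M ** 2 * D ** 2)
print(M, tau_c)   # M ~ 4.2e-18 kg, tau_c ~ 1.8e4 s (about 5 hours)
```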
\subsection{Equilibrium CM Wavepacket Size For A Sphere }
First, consider the qualitative argument given after Eq. (3.5), for the equilibrium size of a CM wavefunction,
applied to the sphere. A CM wavepacket of width $D$ expands a distance $\sim(\hbar/MD)\Delta t$ in time
$\Delta t$ due to the Schr\"odinger evolution. Now, assume that the collapse is linear, in the sense that,
in time $\Delta t$, if
the wavepacket width is $D$, collapse acting alone makes it
contract to $D[1-(\Delta t/\tau_{c})]$, where $\tau_{c}$ is
given by Eq. (E.1). If $D=s$ is the equilibrium width of the wavepacket, then the Schr\"odinger expansion is
compensated by the collapse contraction, yielding $s\Delta t/\tau_{c}\sim (\hbar/Ms)\Delta t$ or
\begin{equation}\label{E.2}
s^{4}\sim \frac{\hbar^{2}R^{3}}{GM^{3}}
\end{equation}
\noindent Eq. (E.2) may be compared to
the CSL result (3.5):
\[ s^{4}\sim \frac{\hbar a^{2}m^{2}}{\lambda M^{3}f(R/a)}.
\]
We may therefore regard this proposal's equilibrium CM wavepacket size as giving the CSL size if
\begin{equation}\label{E.3}
\lambda f(R/a)\sim \frac{Gm^{2}}{a\hbar}\bigg(\frac{a}{R}\bigg)^{3}
\end{equation}
\noindent In particular, if $R\sim a$ (and so $f\approx 1$),
\begin{equation}\label{E.4}
\lambda \sim \frac{Gm^{2}}{a\hbar}\approx 10^{-23}{\rm sec}^{-1}
\end{equation}
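The estimate in Eq. (E.4) follows directly from standard constants, with $m$ the nucleon mass and $a=10^{-5}$ cm:

```python
# Eq. (E.4): lambda ~ G m^2 / (a hbar), with m the nucleon mass
# and a = 1e-5 cm the standard CSL length.
G = 6.674e-11       # m^3 kg^-1 s^-2
m = 1.673e-27       # kg (nucleon mass)
a = 1e-7            # m (= 1e-5 cm)
hbar = 1.055e-34    # J s
lam = G * m ** 2 / (a * hbar)
print(lam)          # ~ 1.8e-23 s^-1
```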
\subsection{Translational Diffusion Of A Sphere}
We may obtain the same result, that this proposal gives the CSL behavior for objects of size
$\approx a$ with $\lambda$ having numerical value (E.4), from other considerations such as random walk of the sphere.
In this case, the Schr\"odinger equation tells us that
$d^{3}\overline {{{\langle {Q^{j}}^{2}\rangle}}}/dt^{3}=(2/M^{2})d\overline {{{\langle {P^{j}}^{2}\rangle}}}/dt$ (Eqs. (A7)).
The collapse, acting on a wavefunction of width $D$, narrows the wavefunction and, in so doing, increases the energy.
As we have remarked in the previous subsection,
\[ dD/dt=-D/\tau_{c}=-GM^{2}D^{3}/2\hbar R^{3}.
\]
\noindent From the uncertainty principle, $\overline{{{\langle {P^{j}}^{2}\rangle}}}\approx (\hbar/D)^{2}$, so
\[ d\overline{{{\langle {P^{j}}^{2}\rangle}}}/dt\sim -(\hbar^{2}/D^{3})dD/dt\sim GM^{2}\hbar/R^{3}
\]
\noindent (notice that the result is independent of $D$, as in CSL) and so
\begin{equation}\label{E.5}
d^{3}\overline {{{\langle {Q^{j}}^{2}\rangle}}}/dt^{3}\sim G\hbar/R^{3}
\end{equation}
\noindent (notice that the result is independent of $M$, as in CSL). Comparison of Eq. (E.5)
with the CSL result (3.4):
\begin{equation}\label{E.6}
d^{3}\overline{{{\langle {Q^{j}}^{2}\rangle}}}/dt^{3}={\lambda\hbar^{2}f(R/a)\over m^{2}a^{2}}
\end{equation}
\noindent yields the same ``effective" $\lambda$ given in (E.3).
\subsection{Rotational Diffusion Of A Disc}
Angular random walk of a disc proceeds along the same lines. For a thin disc of mass $M$ and radius $L$,
the gravitational energy required to rotate one such disc through a small angle $\theta$ with respect to a second initially
completely overlapping disc is $\sim (GM^{2}/L)\theta^{2}$ so
\begin{equation}\label{E.7}
\tau_{c}\sim \hbar L/GM^{2}\theta ^{2}.
\end{equation}
\noindent Here we utilize, from Eqs. (C3),
$d^{3}\overline {{{\langle {\theta}^{2}\rangle}}}/dt^{3}=(2/I^{2})d\overline {{{\langle {\cal L}^{2}\rangle}}}/dt$.
According to this gravitational proposal, $d\theta/dt=-\theta/\tau_{c}$. From the uncertainty principle,
$\overline {{{\langle {\cal L}^{2}\rangle}}}\sim (\hbar/\theta)^{2}$ so
\[d\overline {{{\langle {\cal L}^{2}\rangle}}}/dt\sim -\hbar^{2} /\theta^{3}d\theta/dt\sim \hbar^{2}/\theta^{2}\tau_{c}
\]
\noindent and so
\begin{equation}\label{E.8}
d^{3}\overline {{{\langle {\theta}^{2}\rangle}}}/dt^{3}\sim \hbar^{2}/I^{2}\theta^{2}\tau_{c}\sim G\hbar/L^{5}.
\end{equation}
\noindent Eq. (E.8) may be compared with the CSL result (6.5)
\begin{equation}\label{E.9}
d^{3}\overline {{{\langle {\theta}^{2}\rangle}}}/dt^{3}\sim \lambda (\hbar/ma^{2})^{2}f_{ROT}(L/2a)
\end{equation}
\noindent which yields the ``effective" $\lambda$
\begin{equation}\label{E.10}
\lambda f_{ROT}(L/2a)\sim \frac{Gm^{2}}{a\hbar}\bigg(\frac{a}{L}\bigg)^{5}
\end{equation}
\noindent In our proposed experiment, for which $L\sim a$ (and so $f_{ROT}(L/2a)\approx 1$),
the ``effective" $\lambda$ is again given by (E.4).
The results obtained here are effectively the same as would be obtained with the modified Diosi model.
\section {Thermal Source Of The Fluctuations?}\label{Appendix F}
It is fun to speculate that the collapse-inducing fluctuations of $w$ may come from a thermal
bath, as do so many other fluctuations in physics.
Since a thermal bath defines a preferred reference frame (i.e., the frame in which the bath
medium has zero average momentum density), this would preclude a truly special relativistically
invariant collapse model. But, anyway, the universe is not truly special relativistically invariant,
possessing as it does the preferred comoving reference frame. Moreover, this reference frame is endowed with
the 2.7$^{\circ}$K thermal radiation bath. So, one might entertain the idea that the
fluctuations of $w$ arise from some unspecified medium in thermal equilibrium with the 2.7$^{\circ}$K radiation.
For an object in random walk, we note that the $\sim t^{3}$ time dependence of
$\overline {{{\langle {Q^{j}}^{2}\rangle}}}$ given by Eq. (3.4) for CSL is also the
time dependence of $(\Delta x)^{2}$ given by Eq. (2.7c) for ordinary Brownian motion when
$t$ is very much smaller than $\tau=\xi/M$ (which characterizes the time scale of the
approach to thermal equilibrium). Since objects show no sign of reaching thermal equilibrium today,
we may assume that $\tau$ is larger than the age of the universe, and write
$\tau\equiv\gamma\cdot 50 \lambda_{CSL}^{-1}$ with $\gamma>1$
(since $\lambda_{CSL}^{-1}=10^{16}$sec$\approx3\cdot 10^{8}$yr, $50\lambda_{CSL}^{-1}$ is comparable to the age of the universe).
Continuing in the same lighthearted vein, we propose that, for a fundamental object, the nucleon,
the two sources of the $\sim t^{3}$ behavior, thermal and CSL,
may be identified, and we equate Eqs. (2.7a) and (3.4), obtaining
\begin{equation}\label{F.1}
\frac{kT}{m\tau}\approx \frac{\lambda_{CSL}\hbar^{2}}{m^{2}a_{CSL}^{2}} \thinspace\thinspace
\thinspace\thinspace\thinspace\thinspace{\rm or}\thinspace\thinspace
\thinspace\thinspace\thinspace\thinspace
kT\approx 50\gamma \frac{\hbar^{2}}{ma_{CSL}^{2}}
\end{equation}
\noindent (we have set $M=m$ and $f(R/a)=1$).
When $T=2.7^{\circ}$K, $kT\approx 2.3\cdot 10^{-4}$eV, while
the energy $\hbar^{2}/ma_{CSL}^{2}\approx 4\cdot 10^{-9}$eV. Thus, Eq. (F.1)
implies $\gamma\approx 10^{3}$, which is consistent with $\gamma>1$.
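These estimates are simple to reproduce numerically (standard constants; $m$ the nucleon mass, $a_{CSL}=10^{-5}$ cm; note that dimensional consistency requires the energy $\hbar^{2}/ma_{CSL}^{2}$, with a single power of $m$):

```python
# Reproduce the thermal-comparison numbers: kT at T = 2.7 K,
# the energy hbar^2/(m a^2), and gamma = kT / (50 hbar^2/(m a^2)).
k = 1.381e-23       # J/K
T = 2.7             # K
hbar = 1.055e-34    # J s
m = 1.673e-27       # kg (nucleon mass)
a = 1e-7            # m (= 1e-5 cm)
eV = 1.602e-19      # J per eV

kT = k * T
E = hbar ** 2 / (m * a ** 2)
gamma = kT / (50 * E)
print(kT / eV, E / eV, gamma)   # ~ 2.3e-4 eV, ~ 4.2e-9 eV, ~ 1.1e3
```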
Of course, in the speculation above there is no need to choose $(\lambda^{-1}, a)$ to have their
CSL numerical values. The appropriate generalization of Eq. (F.1) is
\begin{equation}\label{F.2}
\frac{\lambda^{-1}}{\lambda_{CSL}^{-1}}\bigg(\frac{a}{a_{CSL}}\bigg)^{2}\approx \frac{\gamma}{10^{3}} \thinspace\thinspace
\thinspace\thinspace\thinspace\thinspace{\rm or}\thinspace\thinspace
\thinspace\thinspace\thinspace\thinspace
\lambda^{-1}a^{2}\approx10^{3}\gamma\thinspace\thinspace{\rm sec\thinspace cm^{2}}.
\end{equation}
There is quite a range of $\lambda$ and $a$ consistent with (F.2) and present constraints (see FIG. 2),
especially in view of the flexibility in choosing $\gamma$.
\begin{references}
\bibitem{Schrodinger}E. Schr\"odinger, Die Naturwissenschaften 23, 807 (1935).
\bibitem{PearleCSL} P. Pearle, Phys. Rev. A {\bf 39}, 2277 (1989).
\bibitem{GPR} G. C. Ghirardi, P. Pearle and A.
Rimini, Phys. Rev. A {\bf 42}, 78 (1990).
\bibitem{GRW} G. C. Ghirardi, A. Rimini and T. Weber, Phys. Rev. D {\bf 34}, 470 (1986);
Phys. Rev. D {\bf 36}, 3287 (1987); Found. Phys. {\bf 18}, 1, (1988).
\bibitem{Pearle} P. Pearle, Phys. Rev. D {\bf 13}, 857 (1976);
Int'l. Journ. Theor. Phys. {\bf 48}, 489 (1979);
Found. Phys. {\bf 12}, 249 (1982);
Phys. Rev. D {\bf 29}, 235 (1984);
in {\it The Wave-Particle Dualism}, edited by S. Diner et al. (Reidel, Dordrecht 1984);
Journ. Stat. Phys. {\bf 41}, 719 (1985);
in {\it Quantum Concepts in Space and
Time}, edited by R. Penrose and C. J.
Isham (Clarendon, Oxford, 1986); Phys. Rev. D {\bf 33}, 2240 (1986);
in {\it New Techniques in Quantum Measurement Theory},
edited by D. M. Greenberger (N.Y. Acad. of Sci., N.Y., 1986), p.539.
\bibitem{others} For some other collapse models, see I. C. Percival, Proc. Roy. Soc. A {\bf 451}, 503 (1995) and
{\it Quantum State Diffusion}, (Cambridge Univ. Press, Cambridge, 1998);
L. P. Hughston, Proc. Roy. Soc. A {\bf 452}, 953 (1995); R. Penrose, Gen. Rel. and Grav. {\bf 28}, 581 (1996);
S. L. Adler and L. P. Horwitz, Journ. Math. Phys. {\bf 41}, 2485 (2000); D. Fivel, Phys. Rev. A {\bf 56}, 146 (1997).
\bibitem{PearleNaples} For a recent review of CSL, see P. Pearle in
{\it Open Systems and Measurement in Relativistic Quantum theory},
edited by A. Miller (Plenum, New York 1990), p.167.
\bibitem{GR} G. C. Ghirardi and A. Rimini in {\it Sixty-Two Years of Uncertainty},
edited by H. P. Breuer and F. Petruccione (Springer, Heidelberg 1999), p.195.
\bibitem{Squires} E. J. Squires, Phys. Lett. A {\bf 158}, 431 (1991).
\bibitem{Ballentine} L. E. Ballentine, Phys. Rev. A {\bf 43}, 9 (1991).
\bibitem{PearleSquires} P. Pearle and E. Squires, Phys. Rev. Lett. {\bf 73}, 1 (1994).
\bibitem{Collett} B. Collett, P. Pearle, F. Avignone and S. Nussinov,
Found. Phys. {\bf 25}, 1399 (1995).
\bibitem{Ring} P. Pearle, James Ring, J. I. Collar and F. T. Avignone III,
Found. Phys. {\bf 29}, 465 (1999).
\bibitem{Karolyhazy} F. Karolyhazy, Nuovo Cimento {\bf42A}, 1506 (1966);
F. Karolyhazy, A Frenkel and B. Lukacs in
{\it Physics as Natural Philosophy}, edited by A.
Shimony and H. Feshbach (M.I.T. Press, Cambridge 1982), p. 204;
in {\it Quantum Concepts in Space and Time}, edited by R. Penrose and
C. J. Isham (Clarendon, Oxford 1986), p. 109; A. Frenkel, Found. Phys. {\bf 20}, 159 (1990).
\bibitem{Gabrielse} G. Gabrielse et al., Phys. Rev. Lett. {\bf 65}, 1317 (1990).
\bibitem{Mazo} For a nice treatment see R. M. Mazo in {\it Stochastic Processes in Nonequilibrium Systems,
Lecture Notes in Physics {\bf 84}}, edited by L. Garrido, P. Seglar and P. J. Shepard (Springer-Verlag, Berlin 1978), p. 53.
\bibitem{Millikan} R. A. Millikan, Phys. Rev. {\bf 32}, 349 (1911); Phys. Rev. {\bf 22}, 1 (1923).
\bibitem{Aerosol} M. D. Allen and O. G. Raabe, Aerosol Sci. and Tech. {\bf 4}, 269 (1985).
\bibitem{Cunningham} E. Cunningham, Proc. Roy. Soc. {\bf 83}, 357 (1910).
\bibitem{Epstein} P. S. Epstein, Phys. Rev. {\bf 23}, 710 (1924).
\bibitem{EinsteinandHopf} A. Einstein and L. Hopf, Ann. der Phys. {\bf 33}, 1105 (1910).
\bibitem{Einstein} A. Einstein, Phys. Zeit. {\bf 10}, 185 (1909).
\bibitem{Einstein1904} A. Einstein, Ann. der Phys. {\bf 17}, 549 (1905).
\bibitem{BGG} F. Benatti, G. C. Ghirardi and R. Grassi, Found. Phys. {\bf 35}, 5 (1995).
\bibitem{BG} A. Bassi and G. C. Ghirardi, Brit. J. Phil. Sci. {\bf 50}, 719 (1999).
\bibitem{Diosi4} L. Diosi, Phys. Lett. {\bf A132}, 233 (1988).
\bibitem{Andrade} E. N. daC. Andrade and R. C. Parker, Proc. Roy. Soc. {\bf 159}, 507 (1937).
\bibitem{Lamb} H. Lamb, {\it Hydrodynamics} (Dover, N.Y. 1945), p. 605.
\bibitem{Einstein Rot} A. Einstein, Ann. der Phys. {\bf 19}, 371 (1906).
\bibitem{Lamb2} H. Lamb, Op. Cit. p. 589.
\bibitem{Pearleenergy} P. Pearle, Found. Phys. {\bf 30}, 1145 (2000).
\bibitem{AvignoneRing} We are indebted to Frank Avignone for supplying recent data and
to Jim Ring for analyzing it.
\bibitem{QFu} Q. Fu. Phys. Rev. A56, 1806 (1997).
\bibitem{Diosi}L. Diosi, Phys. Rev. A{\bf 40}, 1165 (1989).
\bibitem{GGR}G. C. Ghirardi, R. Grassi and A. Rimini, Phys. Rev. A {\bf 42}, 1057 (1990).
\bibitem{PearleSquires2} P. Pearle and E. Squires, Found. Phys. {\bf 26}, 291 (1996).
\bibitem{ABGG} F. Aicardi, A. Borsellino, G. C. Ghirardi and R. Grassi, Found. Phys. Lett {\bf 4}, 109 (1991).
\bibitem{Chen} Chen et. al., J. Aerosol. Sci. {\bf 24},181 (1993).
\bibitem{Tannenbaum} Tannenbaum et. al. J. Vac. Sci. submitted. Cornell Project 789-99.
\bibitem{Paul} Paul Rev. Mod. Phys. {\bf 60} ,531 (1990).
\bibitem{Wuerker} Wuerker at. al. J. Applied. Phys. {\bf 30}, 342 (1958).
\bibitem{Arnold1} S. Arnold, J.H. Li, S. Holler, A. Korn and A.F. Izmailov, J. Appl. Phys. {\bf 78}, 3566 (1995).
\bibitem{Arnold2} S. Arnold, L. M. Foley and A. Korn, J. Appl. Phys. {\bf 74}, 4291 (1993).
\bibitem{Hoye} J. S. Hoye and I Brevik, Physica A {\bf 196}, 241 (1993). We would like to thank
Peter Milloni for calling our attention to this paper.
\bibitem{Jackson} J. D. Jackson, {\it Classical Electrodynamics} (Wiley, N.Y. 1975), p. 414.
\bibitem{Jackson2} J. D. Jackson, Ibid p. 805.
\end{references}
\begin{figure}
\caption{A graph of $f_{ROT}$.}
\end{figure}
\begin{figure}
\caption{A graph of $\log_{10}$.}
\end{figure}
\begin{table}
\caption{CSL diffusion in vacuum: rms distance $\Delta Q$ (in cm) for various
sphere radii $R$ and times $t$.}
\begin{tabular}{cccc}
&\multicolumn{3}{c}{$t$ in sec}\\
$R$ in cm&$10$&$10^{3}$&$10^{5}$\\
\tableline
$10^{-6}$&$8\cdot 10^{-6}$&$8\cdot 10^{-3}$&8\\
$10^{-5}$&$6\cdot 10^{-6}$&$6\cdot 10^{-3}$&6\\
$10^{-4}$&$2\cdot 10^{-7}$&$2\cdot 10^{-4}$&$2\cdot 10^{-1}$\\
$10^{-2}$&$6\cdot 10^{-11}$&$2\cdot 10^{-8}$&$2\cdot 10^{-5}$\\
$1$&$6\cdot 10^{-15}$&$2\cdot 10^{-12}$&$2\cdot 10^{-9}$\\
\end{tabular}
\end{table}
\begin{table}
\caption{CSL rms equilibrium center of mass
wavefunction size $s_{\infty}$ and characteristic time $\tau_{s}$
to reach that size in vacuum for various radii $R$ of a sphere of density 1gm/cc.}
\begin{tabular}{ccc}
$R$ in cm&$s_{\infty}$ in cm&$\tau_{s}$ in sec\\
\tableline
$10^{-6}$&$7\cdot 10^{-5}$&20\\
$10^{-5}$&$4\cdot 10^{-7}$&.6\\
$10^{-4}$&$1\cdot 10^{-8}$&.6\\
$10^{-2}$&$4\cdot 10^{-11}$&6\\
$1$&$1\cdot 10^{-13}$&60\\
\end{tabular}
\end{table}
\end{document}
\begin{document}
\title{Structural stability for the splash singularities\\
of the water waves problem}
\begin{abstract}In this paper we show a structural stability result for water waves. The main motivation for this result is that we would like to exhibit a water wave
whose interface starts as a graph and ends in a splash. Numerical simulations lead to an approximate solution with the desired behaviour. The stability result then guarantees that near the approximate solution to water waves there is an exact solution.\end{abstract}
\section{Introduction}
The water waves problem models the motion of an incompressible fluid with constant density $\rho$ in a domain $\Omega(t)$ with a free boundary $\partial \Omega(t)$,
which satisfies the Euler equation with the presence of gravity and whose flow is potential. The system, in $\mathbb{R}^2$, can be written, after some computations, as an equation for the free boundary, \begin{equation}\label{Parametriza}
\partial\Omega(t)=\{z(\alpha,t)=(z_1(\alpha,t),z_2(\alpha,t)):\alpha\in\mathbb{R}\},
\end{equation} and an equation for the amplitude of the vorticity, $\omega(\alpha,t)$, in the following way
\begin{equation}\label{em}
z_t(\alpha,t)=BR(z,\omega)(\alpha,t)+c(\alpha,t)z_{\alpha}(\alpha,t),
\end{equation}
\begin{align}
\begin{split}\label{cEuler}
\omega_t(\alpha,t)&=-2BR_t(z,\omega)(\alpha,t)\cdot
z_{\alpha}(\alpha,t)-\Big(\frac{\omega^2}{4|\partial_{\alpha} z|^2}\Big)_{\alpha}(\alpha,t) +(c\omega)_{\alpha}(\alpha,t)\\
&\quad+2c(\alpha,t) BR_{\alpha}(z,\omega)(\alpha,t)\cdot z_{\alpha}(\alpha,t)-2
(z_2)_\alpha(\alpha,t),
\end{split}
\end{align}
where $BR(z,\omega)$ is the classical Birkhoff-Rott integral
\begin{equation}\label{BR}
BR(z,\omega)(\alpha,t)=\frac{1}{2\pi}PV\int_{\mathbb{R}}\frac{(z(\alpha,t)-z(\beta,t))^{\bot}}{|z(\alpha,t)-z(\beta,t)|^2}\omega(\beta,t)d\beta.
\end{equation}
The function $c(\alpha,t)$ is arbitrary since the boundary is convected by the normal component of the velocity of the fluid. Also, we notice that, in order to get an explicit equation for $\partial_t\omega$, we need to invert the operator $$I + T = I + 2 \langle BR(z,\cdot), z_{\alpha} \rangle$$
and we have taken the acceleration due to gravity and the density $\rho$ equal to one.
Once one has solved this system for $(z,\omegaega)$ the velocity of the fluid and the pressure in the domain $\Omega(t)$ can be recovered by using Biot-Savart and Bernoulli laws. For details see \cite{Castro-Cordoba-Fefferman-Gancedo-GomezSerrano:finite-time-singularities-free-boundary-euler}.
In the last two decades these equations have been intensively studied. For an extensive survey about analytical results on water waves see the monograph \cite{Lannes:water-waves-book}.
In this paper we are concerned with the problem of the existence of water waves which start as a graph and become a splash curve in finite time. Roughly speaking, a splash curve is a smooth curve that collapses with itself at a single point, such as the curve of Fig. \ref{PictureSplash}. A rigorous definition can be found in \cite{Castro-Cordoba-Fefferman-Gancedo-GomezSerrano:finite-time-singularities-free-boundary-euler}, where the existence of splash singularities has been shown. Coutand and Shkoller \cite{Coutand-Shkoller:finite-time-splash} have proven the existence of splash singularities in the presence of vorticity. Fefferman, Ionescu and Lie \cite{Fefferman-Ionescu-Lie:absence-splash-singularities} have proven the non-existence of splash singularities for internal waves, i.e. for an interface between two incompressible fluids.
\begin{figure}[h!]\centering
\includegraphics[scale=0.4]{splashvacio.png}
\caption{Splash singularity. A smooth interface that collapses in a point.}
\label{PictureSplash}
\end{figure}
We are interested in the following statement:
\begin{conjecture}
There exist initial data $z_0(\alpha), \omega_0(\alpha)$ of solutions of the water wave equations such that at time $0$ the curve $z_0(\alpha)$ can be parameterized as a graph, the interface then turns over at a finite time $T_1 > 0$, and finally produces a splash at a finite time $T_2 > T_1$.
\end{conjecture}
We should remark that this conjecture is a combination of the scenarios in theorems \cite[Theorem I.1] {Castro-Cordoba-Fefferman-Gancedo-GomezSerrano:finite-time-singularities-free-boundary-euler} and \cite[Theorem 7.1]{Castro-Cordoba-Fefferman-Gancedo-LopezFernandez:rayleigh-taylor-breakdown} and is supported by numerical evidence that we can see in Fig. \ref{splash}. This numerical simulation was carried out using the method of Beale, Hou and Lowengrub \cite{Beale-Hou-Lowengrub:convergence-boundary-integral}.
\begin{figure}[h!]\centering
\includegraphics[scale=0.5]{ZoomNontildaSolid0Dashed4Dotted7Polished.eps}
\caption{Evolution from a graph to a splash.}
\label{splash}
\end{figure}
The proof of this conjecture could follow along these lines. First of all, we will move backwards in time, 0 being the time of the splash, $T_2 - T_1$ the time of the turning and $T_2$ the time in which the solution can be parameterized as a graph. Also we write the water waves equation in a new domain given by the projection of $\Omega(t)$ by the conformal map
$$
P(w)=\Big(\tan\Big(\frac{w}{2}\Big)\Big)^{1/2},\quad w\in\mathbb{C},
$$
The purpose of this map is to keep apart the self-intersecting point, by taking the branch of the square root above passing through this crucial point. The equation in this new domain can be written as follows:
\begin{align}\label{zeq}
\tilde{z}_t(\alpha,t) & = Q^2(\alpha,t)BR(\tilde{z},\tilde{\omega})(\alpha,t) + \tilde{c}(\alpha,t)\tilde{z}_{\alpha}(\alpha,t),\end{align}
\begin{align}\label{eqomega}
\tilde{\omega}_{t}(\alpha,t) =& -2 BR_t(\tilde{z},\tilde{\omega})(\alpha,t) \cdot \tilde{z}_{\alpha}(\alpha,t) - (Q^{2})_{\alpha}(\alpha,t)|BR(\tilde{z},\tilde{\omega})|^{2} (\alpha,t) - \Big(\frac{Q^2(\alpha,t)\tilde{\omega}(\alpha,t)^2}{4|\tilde{z}_{\alpha}(\alpha,t)|^{2}}\Big)_{\alpha}\nonumber \\
& + 2\tilde{c}(\alpha,t) BR_\alpha(\tilde{z},\tilde{\omega}) \cdot \tilde{z}_{\alpha}(\alpha,t) + \left(\tilde{c}(\alpha,t)\tilde{\omega}(\alpha,t)\right)_{\alpha}
- 2 \left(P^{-1}_2(\tilde{z}(\alpha,t))\right)_{\alpha} \nonumber \end{align}
where
$$\tilde{z}(\alpha,t)=P(z(\alpha,t)),\quad Q^2(\alpha,t) = \left|\frac{dP}{dw}(P^{-1}(\tilde{z}(\alpha,t)))\right|^{2} \text{ and } \alpha \in \mathbb{T}.$$
(From now on we will omit the superscript tilde in the notation).
We start by computing a numerical approximation of a solution to the water waves equation \eqref{zeq} that starts as a splash, turns over and finally is a graph. Such a candidate is depicted in Fig. \ref{splash}. With this approximation we can construct explicit functions $(x,\gamma)$ that solve the system
\begin{equation}
\left\{
\begin{array}{rl}
x_t =& Q^2(x)BR(x,\gamma) + b x_{\alpha} + f\\
\gamma_t = &-2BR_{t}(x,\gamma) \cdot x_{\alpha} - (Q^2(x))_{\alpha}|BR(x,\gamma)|^{2}-\left(\frac{Q^2(x)\gamma^2}{4|x_{\alpha}|^{2}}\right)_{\alpha}\\
&+ 2bBR_{\alpha}(x,\gamma) \cdot x_{\alpha} + (b\gamma)_{\alpha} -2(P^{-1}_{2}(x))_{\alpha} + g
\end{array}
\right.
\end{equation}
where $f$ and $g$ are errors that we hope are small. By using the computer we are able to give rigorous bounds for these errors. The question we want to answer is whether there exists an exact solution $(z,\omega)$ of the water waves equation close to these functions $(x,\gamma)$. That means we need to prove the following theorem:
\begin{theorem}
\label{stabilitytheorem}
Let
$$ D(\alpha,t) \equiv z(\alpha,t) - x(\alpha,t), \quad d(\alpha,t) \equiv \omega(\alpha,t) - \gamma(\alpha,t), \quad \mathcal{D}(\alpha,t) \equiv \varphi(\alpha,t) - \psi(\alpha,t)$$
where $(x,\gamma,\psi)$ are the solutions of
\begin{equation}
\label{CharlieFlat}
\left\{
\begin{array}{rl}
x_t & = Q^2(x)BR(x,\gamma) + b x_{\alpha} + f\\
b & = \underbrace{\frac{\alpha + \pi}{2\pi}\int_{-\pi}^{\pi}(Q^2
BR(x,\gamma))_{\alpha}\frac{x_\alpha}{|x_{\alpha}|^{2}} d\alpha - \int_{-\pi}^{\alpha}(Q^2 BR(x,\gamma))_{\beta}\frac{x_{\beta}}{|x_{\beta}|^{2}}d\beta}_{b_s} \\
& + \underbrace{\frac{\alpha + \pi}{2\pi}\int_{-\pi}^{\pi}f_{\alpha}\frac{x_\alpha}{|x_{\alpha}|^{2}} d\alpha - \int_{-\pi}^{\alpha}f_{\beta}\frac{x_{\beta}}{|x_{\beta}|^{2}}d\beta}_{b_e}\\
\gamma_t & + 2BR_{t}(x,\gamma) \cdot x_{\alpha} = - (Q^2(x))_{\alpha}|BR(x,\gamma)|^{2} + 2bBR_{\alpha}(x,\gamma) \cdot x_{\alpha} + (b\gamma)_{\alpha} \\
& \qquad \qquad \qquad \quad \; \, -\left(\frac{Q^2(x)\gamma^2}{4|x_{\alpha}|^{2}}\right)_{\alpha} - 2(P^{-1}_{2}(x))_{\alpha} + g \\
\psi(\alpha,t) & = \frac{Q^2_x(\alpha,t)\gamma(\alpha,t)}{2|x_{\alpha}(\alpha,t)|} - b_s(\alpha,t)|x_{\alpha}(\alpha,t)|,
\end{array}
\right.
\end{equation}
where $(z,\omega)$ are the solutions of \eqref{CharlieFlat} with $f \equiv g \equiv 0$, $\varphi$ is the function
$$\varphi=\frac{Q^2_z(\alpha,t)\omega(\alpha,t)}{2|z_\alpha(\alpha,t)|}-b(\alpha,t)|z_\alpha(\alpha,t)|,$$
and $E$ is the following norm for the difference
$$E(t) \equiv \left(\|D\|^{2}_{H^{3}} + \int_{-\pi}^{\pi}\frac{Q^2\sigma_{z}}{|z_{\alpha}|^{2}}|\partial^{4}_{\alpha}D|^{2} + \|d\|^{2}_{H^{2}} + \|\mathcal{D}\|^{2}_{H^{3+\frac{1}{2}}}\right).$$
Then we have that
$$\left|\frac{d}{dt}E(t)\right|\leq \mathcal{C}(t)(E(t)+E^{k}(t))+c\delta(t)$$
where $$\mathcal{C}(t)= \mathcal{C}(\mathcal{E}(t),\|x\|_{H^{5+\frac12}}(t),\|\gamma\|_{H^{3+\frac12}}(t),
\|\zeta\|_{H^{4+\frac12}}(t),\|F(x)\|_{L^\infty}(t))$$ and
$$\delta(t)=(\|f\|_{H^{5+\frac12}}(t)+\|g\|_{H^{3+\frac12}}(t))^k+(\|f\|_{H^{5+\frac12}}(t)+\|g\|_{H^{3+\frac12}}(t))^2, \text{ $k$ big enough}$$
depends on the norms of $f$ and $g$, and $\mathcal{E}(t)$ is given by
\begin{align*}
\mathcal{E}(t)=&\|z\|^2_{H^3}(t)+\int_{\mathbb{T}}\frac{Q^2\sigma_z}{|z_\alpha|^2}|\partial_{\alpha}^4 z|^2d\alpha+\|F(z)\|^2_{L^\infty}(t)\\
&+\|\omega\|^2_{H^{2}}(t)+
\|\varphi\|^2_{H^{3+\frac12}}(t)+\frac{|z_\alpha|^2}{m(Q^2\sigma_z)(t)}+\sum_{l=0}^4\frac{1}{m(q^l)(t)}
\end{align*}
where the $L^\infty$ norm of the function
$$
F(z)\equiv \frac{|\beta|}{|z(\alpha,t)-z(\alpha-\beta,t)|},\quad \alpha,\beta\in\mathbb{T}
$$
measures the arc-chord condition,
\begin{align}
\begin{split}\label{R-T}
\sigma_{z} \equiv& \left(BR_{t}(z,\omega) + \frac{\varphi}{|z_{\alpha}|}BR_{\alpha}(z,\omega)\right) \cdot z_{\alpha}^{\perp} + \frac{\omega}{2|z_{\alpha}|^{2}}\left(z_{\alpha t} + \frac{\varphi}{|z_{\alpha}|}z_{\alpha \alpha}\right) \cdot z_{\alpha}^{\perp} \\
& + Q\left|BR(z,\omega) + \frac{\omega}{2|z_{\alpha}|^{2}}z_{\alpha}\right|^{2}(\nabla Q)(z) \cdot z_{\alpha}^{\perp}
+ (\nabla P_{2}^{-1})(z) \cdot z_{\alpha}^{\perp}
\end{split}
\end{align}
is the Rayleigh-Taylor function,
$$
m(Q^2\sigma_z)(t)\equiv\min_{\alpha\in\mathbb{T}}Q^2(\alpha,t)\sigma_z(\alpha,t),
$$
and finally
$$
m(q^l)(t)\equiv\min_{\alpha\in\mathbb{T}}|z(\alpha,t)-q^l|
$$
for $l=0,...,4$, with
\begin{equation}\label{points}
q^0=\left(0,0\right),\quad
q^1=\left(\frac{1}{\sqrt{2}},\frac{1}{\sqrt{2}}\right),\quad
q^2=\left(\frac{-1}{\sqrt{2}},\frac{1}{\sqrt{2}}\right),\quad
q^3=\left(\frac{-1}{\sqrt{2}}, \frac{-1}{\sqrt{2}}\right),\quad
q^4=\left(\frac{1}{\sqrt{2}}, \frac{-1}{\sqrt{2}}\right),
\end{equation}
which are the singular points of the transformation $P$.
\end{theorem}
\begin{remark}
We can absorb the terms in $\mathcal{E}(t)$ by $E(t)$ raised to an appropriate power and terms in $(x,\gamma)$ by performing the splitting $\|z\| \leq \|z-x\| + \|x\|$ (or the analogous one for a different variable) for any norm or any quantity that appears in $\mathcal{E}(t)$.
\end{remark}
Theorem \ref{stabilitytheorem} was announced in \cite{Castro-Cordoba-Fefferman-Gancedo-GomezSerrano:splash-water-waves}.
If we knew $\mathcal{C}(t), f(t), g(t), k$ or bounds on them, a priori, then we could provide bounds on $E(t)$ at any time $T$. We point out here that $E(t)$ controls the norm $\|\partial_{\alpha} z^{1}(\alpha) - \partial_{\alpha} x^{1}(\alpha)\|_{L^{\infty}}$. Let $T_g$ be a time in which the approximate solution is a graph, i.e. $\partial_{\alpha} x^{1}(\alpha,T_g) > 0 \quad \forall \alpha$. Now, if $E(T_g) < \min_{\alpha}\partial_{\alpha} x^{1}(\alpha,T_g)$ then
\begin{align*}
\partial_{\alpha} z^{1}(\alpha,T_g) > -\|\partial_{\alpha} z^{1}(\alpha) - \partial_{\alpha} x^{1}(\alpha)\|_{L^{\infty}} + \partial_{\alpha} x^{1}(\alpha,T_g) > 0,
\end{align*}
and this shows that $z$ is a graph. In other words, the solution of the water waves equations lies in a ball centered at $(x,\gamma,\zeta)$ with the topology given by $E$. All of the elements of this ball are graphs, and therefore the solution is necessarily a graph. Thus, the problem is reduced to studying and finding bounds for $\mathcal{C}(t), f(t), g(t), k$.
The recent developments of computer architecture have boosted the use of computers in mathematics, giving birth to a full set of new results only achievable by this enormous power. However, this has the drawback that floating-point operations cannot be performed exactly, resulting in numerical errors. In order to overcome this difficulty and be able to prove rigorous results, we use so-called \emph{interval arithmetic}, in which, instead of working with arbitrary real numbers, we perform computations over intervals which have representable numbers as endpoints. On these objects, an arithmetic is defined in such a way that we are guaranteed that for every $x \in X, y \in Y$
\begin{align*}
x \star y \in X \star Y,
\end{align*}
for any operation $\star$. For example,
\begin{align*}
[\underline{x},\overline{x}] + [\underline{y},\overline{y}] & = [\underline{x} + \underline{y}, \overline{x} + \overline{y}] \\
[\underline{x},\overline{x}] \times [\underline{y},\overline{y}] & = [\min\{\underline{x}\underline{y},\underline{x}\overline{y},\overline{x}\underline{y},\overline{x}\overline{y}\},\max\{\underline{x}\underline{y},\underline{x}\overline{y},\overline{x}\underline{y},\overline{x}\overline{y}\}].
\end{align*}
We can also define the interval version of a function $f(X)$ as an interval $I$ that satisfies that for every $x \in X$ we have $f(x) \in I$.
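To make the enclosure property concrete, the following is a minimal Python sketch of interval addition and multiplication. It is an illustration only: exact rationals stand in for the outward-rounded floating-point (or multiprecision) endpoints that an actual verified implementation would use, and the class and variable names are ours, not the paper's.

```python
from fractions import Fraction

class Interval:
    """A closed interval [lo, hi]; exact rational endpoints stand in for
    the outward-rounded endpoints of a verified interval library."""
    def __init__(self, lo, hi=None):
        self.lo = Fraction(lo)
        self.hi = Fraction(hi if hi is not None else lo)

    def __add__(self, other):
        # [x, X] + [y, Y] = [x + y, X + Y]
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # [x, X] * [y, Y] = hull of the four endpoint products
        p = (self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi)
        return Interval(min(p), max(p))

    def __contains__(self, v):
        return self.lo <= Fraction(v) <= self.hi

X, Y = Interval(-1, 2), Interval(3, 5)
# the defining property: x in X and y in Y imply x * y in X * Y
assert all(x * y in X * Y for x in (-1, 0, 2) for y in (3, 4, 5))
```

The same pattern extends to subtraction, division (away from zero) and elementary functions, which is all that is needed to evaluate the integrands appearing below with guaranteed enclosures.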
The article is organized as follows: in sections 2 and 3 we give some details about how to control the errors $f$, $g$ and the constants that arise in Theorem \ref{stabilitytheorem} by using the computer. Finally, in section 4 we give a complete proof of Theorem \ref{stabilitytheorem}.
\section{Bounds for $f(t)$ and $g(t)$}
\subsection{Representation of the functions and Interpolation}
The first thing one has to decide is how to represent the data and how to pass from the cloud of points in space-time obtained by non-rigorous simulation to a function defined everywhere in $[-\pi,\pi] \times [0,T]$. We need to interpolate in some way.
In our case, we chose to represent the functions $x$ and $\gamma$ by piecewise polynomials (splines) of high degree (10) in space, and low degree (3) in time. To do so, we first interpolate in space for every node in the time mesh. The interpolation is made via B-splines. Since the interpolation is reduced to solving a linear (interval) system $Ac = y$, where $A$ is constant in time and space and $y$ depends on the values of the function at time $t$ since the mesh in space is constant, we precondition by multiplying by the non-rigorous inverse of the midpoints of the entries of $A$. We remark that the system is interval-based because we need to produce a curve that is a splash (i.e. there have to be two points $\alpha_1, \alpha_2$ such that we can guarantee $x_0(\alpha_1) = x_0(\alpha_2)$). Finally, the system is solved using a rigorous Gauss-Seidel iterative method. We also remark that the need for interval-based calculations is only strictly necessary at time $t = 0$ since it is the only point at which we have to guarantee some equality. By working with multiprecision (1024 bits) we can get widths in the coefficients of the order of $10^{-300}$. In order to perform interpolation in time, we fix the values of the function and its time derivative at the mesh points. This gives us many systems of 4 equations (the values of the function and its derivative at both endpoints) and 4 unknowns (the 4 coefficients of the degree 3 polynomial), but with an explicit formula for each of them. With this method, our spline will be $C^{1}$ in time but it might not be $C^{2}$.
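The preconditioning step above can be sketched in Python as follows, with plain floating point standing in for intervals and a generic well-conditioned matrix standing in for the B-spline collocation matrix; the actual computation is interval-based and multiprecision, so this is only a model of the pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 40
# stand-in for the B-spline collocation matrix A (well conditioned)
A = np.eye(n) + rng.standard_normal((n, n)) / (4 * np.sqrt(n))
y = rng.standard_normal(n)

R = np.linalg.inv(A)        # non-rigorous approximate inverse (of the midpoints of A)
M, b = R @ A, R @ y         # preconditioned system M c = b, with M close to the identity

c = np.zeros(n)
for _ in range(10):         # Gauss-Seidel sweeps; the real code runs these over intervals
    for i in range(n):
        c[i] = (b[i] - M[i, :i] @ c[:i] - M[i, i + 1:] @ c[i + 1:]) / M[i, i]

assert np.linalg.norm(A @ c - y) < 1e-10
```

Because $M \approx I$, the Gauss-Seidel iteration contracts very quickly; in the interval version this is what keeps the widths of the computed coefficients small.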
\subsection{Rigorous bounds for Singular integrals}
In this section we will discuss the computational details of the rigorous calculation of some singular integrals. In particular we will focus on the Hilbert transform, but the methods apply to any integral kernel whose main singularity is homogeneous of degree -1. Parts of the computation (the $N$ part) are slightly related to the Taylor models with relative remainder presented in M. Jolde\c{s}' thesis \cite{Joldes:rigorous-polynomial-approximations}.
Let us suppose that we have a function $f$ given explicitly by a spline (piecewise polynomial) which is $C^{k-1}$ everywhere and $C^{k}$ except at finitely many points (the points in which the different pieces of the spline are glued together). We need to calculate rigorously the Hilbert Transform of $f$, that is
\begin{align*}
Hf(x) = \frac{PV}{\pi} \int_{\mathbb{T}} \frac{f(x)-f(y)}{2\tan\left(\frac{x-y}{2}\right)}dy,
\end{align*}
and we want to approximate it by a piecewise polynomial function with less regularity, plus an error that can be bounded in $H^{q}, 0\leq q \leq c < k$ and in $L^{\infty}$. Let us assume that the knots of the spline are $\alpha_i$, $i = 0, \ldots, N-1$ and that we fix $x \in [\alpha_i, \alpha_{i+1}]$ where the indices are taken modulo $N$ and the distance between the indices is taken over $\mathbb{Z}_{N}$. We can split our integral into
\begin{align*}
Hf(x) & = \frac{PV}{\pi} \int_{\mathbb{T}} \frac{f(x)-f(y)}{2\tan\left(\frac{x-y}{2}\right)}dy
= \frac{PV}{\pi} \sum_{j} \int_{\alpha_{j}}^{\alpha_{j+1}} \frac{f(x)-f(y)}{2\tan\left(\frac{x-y}{2}\right)}dy \\
& = \frac{PV}{\pi} \sum_{|j-i|>K} \int_{\alpha_{j}}^{\alpha_{j+1}} \frac{f(x)-f(y)}{2\tan\left(\frac{x-y}{2}\right)}dy
+ \frac{PV}{\pi} \sum_{|j-i|\leq K} \int_{\alpha_{j}}^{\alpha_{j+1}} \frac{f(x)-f(y)}{2\tan\left(\frac{x-y}{2}\right)}dy \\
& \equiv Hf^{F}(x) + Hf^{N}(x).
\end{align*}
Now, if we want to express $Hf^{F}(x)$ as a polynomial, it is easy since the integrand does not have a singularity. Hence
\begin{align*}
Hf^{F}(x) & = \frac{PV}{\pi} \sum_{|j-i|>K} \int_{\alpha_{j}}^{\alpha_{j+1}} \frac{f(x)-f(y)}{2\tan\left(\frac{x-y}{2}\right)}dy
= \frac{PV}{\pi} \sum_{|j-i|>K} \int_{\alpha_{j}}^{\alpha_{j+1}} F^{j}(x,y) dy \\
& = \sum_{|j-i|>K} \int_{\alpha_{j}}^{\alpha_{j+1}} \sum_{n,m} c_{nm} (x-x^{*}(i))^{m}(y-y^{*}(j))^{n} + E(x,y) dy \equiv P(x) + E(x),
\end{align*}
where $E$ accounts for the error and is a polynomial with interval coefficients. Typically, we will use as the points for the Taylor expansions $x^{*}(i) = \alpha_i$ since we will compare the resulting polynomial with another one of the form $\sum_{j} b_j (x-x_{i})^{j}$ and we will also choose $y^{*}(j) = \frac{\alpha_{j} + \alpha_{j+1}}{2}$. This choice is useful for two reasons: first, we will only have to integrate half of the terms since the rest will integrate to zero; and second, the error estimates will be better for this choice of $y^{*}(j)$ in the sense that the coefficients will be smaller. All the computations will be carried out using automatic differentiation. We should remark that we can get estimates for the error $E$ in any of the above mentioned norms without having to recompute it since the relation
\begin{align*}
\partial_{x}^{q} Hf^{F}(x) - \partial_{x}^{q}P(x) = \partial_{x}^{q} E(x)
\end{align*}
holds for every $q < k$.
Now, we move on to the term $Hf^{N}(x)$. In this case, we perform a Taylor expansion in both the denominator
\begin{align*}
2\tan\left(\frac{x-y}{2}\right) = (x-y)+c(x-y)^{3}, \quad c = \text{ small (interval) constant}
\end{align*}
and the numerator
\begin{align*}
f(x) = f(y) + (x-y)f'(y) + \frac{1}{2}(x-y)^2f''(y) + \ldots + \frac{1}{(k-1)!}(x-y)^{k-1}f^{(k-1)}(\eta),
\end{align*}
where $\eta$ is an intermediate point between $x$ and $y$, which we can enclose in the convex hull of $[\alpha_i, \alpha_{i+1}]$ and $[\alpha_{j}, \alpha_{j+1}]$, where the convex hull is understood in the torus. Since typically $K$ will be very small (compared to $N$) there is no ambiguity in the definition. Finally, we can factor out $(x-y)$ and divide both the numerator and the denominator by it. Since we know $f(y)$ explicitly, we can perform the explicit integration and get a piecewise polynomial as a result.
\subsection{Estimates of the norm of the operator $I + T$}
In this subsection we will outline how to compute the norm of the operator $I + T = I + 2 \langle BR(z,\cdot), z_{\alpha} \rangle$. Since the operator $T$ behaves like a Hilbert transform plus smoothing terms, we will describe how to calculate rigorously, with the help of a computer, an estimate for the norm of its inverse. The procedure is more general and can be applied to a bigger family of kernels.
Let $\mathbb{T} = \mathbb{R}/2\pi \mathbb{Z}$, and let $A(x), B(x)$ be real-valued functions on $\mathbb{T}$. Also, let $E(x,y)$ be a real-valued function on $\mathbb{T} \times \mathbb{T}$. We assume $A, B$ and $E$ are given by explicit formulas, such as piecewise trigonometric polynomials or splines, and that $E(x,y)$ is a trigonometric polynomial on each rectangle $I \times J$ of some partition of $\mathbb{T} \times \mathbb{T}$. We suppose $A,B,E$ are smooth enough.
Let $H$ be the Hilbert transform acting on functions on $\mathbb{T}$, i.e.
\begin{align*}
Hf(x) = \frac{PV}{2\pi}\int_{\mathbb{T}} \cot\left(\frac{y}{2}\right)f(x-y)dy.
\end{align*}
Assume that $A$ and $B$ have no common zeros on $\mathbb{T}$.
Let
\begin{align*}
Sf(x) = A(x)f(x) + B(x)Hf(x) + \int_{\mathbb{T}}E(x,y)f(y)dy, \quad f \in L^{2}(\mathbb{T}).
\end{align*}
Thus, $S$ is a singular integral operator.
We hope that $S^{-1}$ exists and has a not-so-big norm on $L^{2}$, but we don't know this yet.
Our goal here is to find approximate solutions $F$ of the equation $SF = f$ for suitable given $f \in L^{2}(\mathbb{T})$, and to check that $\|SF - f\|_{L^{2}(\mathbb{T})} < \delta$ for suitable $\delta$. Our computation of $F$ will be based on heuristic ideas, but the computation of an upper bound for $\|SF - f\|_{L^{2}(\mathbb{T})}$ will be rigorous. In our case, $A(x) = 1, B(x) = 1$.
To carry this out, let $H_0 \subset H_1 \subset L^{2}(\mathbb{T})$ be finite-dimensional subspaces, e.g. with $H_i$ consisting of the span of wavelets (from a wavelet basis) having lengthscale $\geq 2^{-N_i}$. Here $N_1 \geq N_0 + 3$ (say). Let $\pi_{i}$ be the orthogonal projection from $L^{2}(\mathbb{T})$ to $H_i$, and let us solve the equation
\begin{align}
\label{charlieinversionstar}
\pi_1 S \pi_1 F = \pi_0 f.
\end{align}
If $f$ is given explicitly in a wavelet basis, then \eqref{charlieinversionstar} is a linear algebra problem, since $\pi_1 S \pi_1$ is of finite rank, and its matrix (in terms of some given basis for $H_1$) can be computed explicitly.
\begin{itemize}
\item If $\pi_0 f \not \in \text{Range}(\pi_1 S \pi_1)$, then our heuristic procedure fails.
\item If $\pi_0 f \in \text{Range}(\pi_1 S \pi_1)$, then we find $F \in H_1$ such that $\pi_1 S \pi_1 F = \pi_0 f$, i.e. $\pi_1 SF = \pi_0 f$.
\end{itemize}
We then have
\begin{align*}
\|SF - f\|_{L^{2}(\mathbb{T})} \leq \| (I - \pi_1) SF\|_{L^{2}(\mathbb{T})} + \|(I - \pi_0) f\|_{L^{2}(\mathbb{T})},
\end{align*}
and both norms on the right-hand side may be estimated explicitly.
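This Galerkin procedure and the residual estimate can be illustrated by the following Python sketch, with Fourier modes in place of wavelets and a diagonal-plus-rank-one stand-in for $S$ (the symbol $1 - i\,\mathrm{sgn}(n)$ models $A + BH$ with $A = B = 1$ under one common sign convention for $H$); everything here is illustrative, not the paper's actual operator.

```python
import numpy as np

M1, M0 = 16, 8                                    # H1: modes |n| <= M1, H0: modes |n| <= M0
n = np.arange(-M1, M1 + 1)
S = np.diag(1.0 - 1j * np.sign(n)).astype(complex)  # stand-in symbol of A + B H, A = B = 1
# smoothing part E(x,y) = 0.1 cos(x) cos(y), written in Fourier coefficients
e = np.where(np.abs(n) == 1, 0.5, 0.0).astype(complex)
S += 0.1 * 2 * np.pi * np.outer(e, e)

f = 1.0 / (1.0 + n.astype(float) ** 2)            # coefficients of a given real f, decaying
pi0 = (np.abs(n) <= M0).astype(float)             # pi1 is the identity on the modes kept here

F = np.linalg.solve(S, pi0 * f)                   # solve  pi1 S pi1 F = pi0 f
resid = np.linalg.norm(S @ F - f)                 # ||SF - f|| in the coefficient l2 norm
bound = np.linalg.norm((1 - pi0) * f)             # ||(I-pi0)f||; the (I-pi1)SF term is 0 here
assert resid <= bound + 1e-12
```

In this truncated model the term $\|(I-\pi_1)SF\|$ vanishes by construction, so the residual is exactly the tail of $f$ outside $H_0$; with wavelets and the true $S$ both terms are nonzero but still explicitly computable.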
Now, our goal is to make a heuristic computation of an operator of the form
\begin{align*}
\tilde{S}f(x) = \tilde{A}(x)f(x) + \tilde{B}(x)Hf(x) + \int_{\mathbb{T}}\tilde{E}(x,y)f(y)dy
\end{align*}
such that $S\tilde{S} - I$ has small norm on $L^{2}(\mathbb{T})$.
Here, we will make a heuristic computation of $\tilde{S}$; later we will give a rigorous upper bound for the norm of $S\tilde{S} - I$ on $L^{2}(\mathbb{T})$. By a heuristic computation of $\tilde{S}$ we mean a heuristic computation of $\tilde{A}, \tilde{B}$ and $\tilde{E}$.
We first find $\tilde{A}$ and $\tilde{B}$ by setting
\begin{align*}
(A + iB)(\tilde{A} + i \tilde{B}) = 1 \Rightarrow
\left\{
\begin{array}{rcl}
A \tilde{A} - B\tilde{B} & = 1& \\
A \tilde{B} + B\tilde{A} & = 0& \\
\end{array}
\right.
\end{align*}
Then, this means that
\begin{align*}
S\tilde{S} = (A \tilde{A} - B\tilde{B}) + (A \tilde{B} + B\tilde{A})H + \text{ Smoothing terms}
= I + \text{ Smoothing terms}.
\end{align*}
So, from now on, we suppose that $\tilde{A}$ and $\tilde{B}$ are known. For the operator $I + T$, this means $\tilde{A} = 1/2, \tilde{B} = -1/2$. We want to compute $\tilde{E}$. Now, let $\{\phi_{\nu}\}$ be some orthonormal basis for $L^{2}(\mathbb{T})$, for example a wavelet basis. By the previous methods, we can try to find functions $\psi_{\nu} \in L^{2}(\mathbb{T})$ such that $S\psi_{\nu} - \phi_{\nu}$ has small norm. We carry this out for $\nu = 1, \ldots, N$ for a large $N$. We now try to make $\tilde{E}$ satisfy
\begin{align}
\tilde{A}(x)\phi_{\nu}(x) + \tilde{B}(x)H\phi_{\nu}(x) + \int_{\mathbb{T}}\tilde{E}(x,y)\phi_{\nu}(y)dy = \psi_{\nu}(x) \text{ for } \nu = 1, \ldots, N.
\end{align}
Thus, we want
\begin{align}
\label{charlieadmiration}
\int_{\mathbb{T}}\tilde{E}(x,y) \phi_{\nu}(y) dy = \left(\psi_{\nu}(x) - \tilde{A}(x) \phi_{\nu}(x) - \tilde{B}(x)H\phi_{\nu}(x)\right) \equiv \psi_{\nu}^{\#}(x), \quad \nu = 1, \ldots, N.
\end{align}
Note that $\psi_{\nu}^{\#}$ can be computed explicitly.
Since the $\phi_{\nu}$ (all $\nu$) form an orthonormal basis for $L^{2}(\mathbb{T})$, it is natural to define
\begin{align*}
\tilde{E}(x,y) = \sum_{\nu=1}^{N} \psi_{\nu}^{\#}(x) \phi_{\nu}(y).
\end{align*}
This can be computed explicitly, and it satisfies \eqref{charlieadmiration}. Thus, we can compute
\begin{align}
\label{charlie2admiration}
S\tilde{S} & = (A+BH+E)(\tilde{A} + \tilde{B}H + \tilde{E})\nonumber \\
& = A\tilde{A} + A \tilde{B}H + A \tilde{E} + BH \tilde{A} + BH\tilde{B}H + BH \tilde{E} + E\tilde{A} + E\tilde{B}H + E\tilde{E} \nonumber \\
& = A\tilde{A} + A \tilde{B}H + A \tilde{E} + B\tilde{A}H + B[H,\tilde{A}] - B\tilde{B} + B[H,\tilde{B}]H \nonumber\\
& + BH \tilde{E} + E\tilde{A} + E\tilde{B}H + E\tilde{E} \nonumber \\
& = (A\tilde{A} - B\tilde{B}) + (A \tilde{B} + B\tilde{A})H + \{A \tilde{E} + B[H,\tilde{A}] + B[H,\tilde{B}]H \nonumber\\
& + BH \tilde{E} + E\tilde{A} + E\tilde{B}H + E\tilde{E}\}.
\end{align}
We claim that all terms enclosed in curly brackets are integral operators of the form
\begin{align*}
S^{\#}f(x) = \int_{\mathbb{T}}E^{\#}(x,y) f(y) dy,
\end{align*}
for an $E^{\#}$ that we can calculate. Let us go term by term
\begin{itemize}
\item $A \tilde{E}$ has the form $S^{\#}$, with $E^{\#}(x,y) = A(x)\tilde{E}(x,y)$.
\item $B[H,\tilde{A}]$ has the form $S^{\#}$, with $E^{\#}(x,y) = \frac{1}{2\pi}B(x)\cot\left(\frac{x-y}{2}\right)(\tilde{A}(x)-\tilde{A}(y))$.
Note that if $\tilde{A}$ is a piecewise trigonometric polynomial and $C^{k}$, then $E^{\#}$ can easily be computed modulo a small error in $C^{k-1}$.
\item $B[H,\tilde{B}]H$ has the form $S^{\#}$, with
\begin{align*}
E^{\#}(x,y) & = \frac{1}{4\pi^{2}}B(x)PV\int\cot\left(\frac{x-z}{2}\right)(\tilde{B}(x)-\tilde{B}(z))\cot\left(\frac{z-y}{2}\right)dz \\
& = \frac{1}{4\pi^{2}}B(x)PV\int\left\{\cot\left(\frac{x-z}{2}\right)(\tilde{B}(x)-\tilde{B}(z))-2\tilde{B}'(x)\right\}\cot\left(\frac{z-y}{2}\right)dz.
\end{align*}
\item $BH\tilde{E}$ has the form $S^{\#}$, with
\begin{align*}
E^{\#}(x,y) & = \frac{1}{2\pi}B(x)PV\int\cot\left(\frac{x-z}{2}\right)\tilde{E}(z,y)dz \\
& = \frac{1}{2\pi}B(x)PV\int\cot\left(\frac{x-z}{2}\right)\left(\tilde{E}(z,y)-\tilde{E}(x,y)\right)dz.
\end{align*}
\item $E\tilde{A}$ has the form $S^{\#}$, with $E^{\#}(x,y) = E(x,y)\tilde{A}(y)$.
\item $E\tilde{B}H$ has the form $S^{\#}$, with
\begin{align*}
E^{\#}(x,y) & = \frac{1}{2\pi}PV\int E(x,z)\tilde{B}(z)\cot\left(\frac{z-y}{2}\right)dz \\
& = \frac{1}{2\pi}PV\int \left\{E(x,z)\tilde{B}(z)-E(x,y)\tilde{B}(y)\right\}\cot\left(\frac{z-y}{2}\right)dz.
\end{align*}
\item $E\tilde{E}$ has the form $S^{\#}$, with $E^{\#}(x,y) = \int E(x,z)\tilde{E}(z,y)dz$.
\end{itemize}
This proves the claim.
Letting $\mathcal{E}^{\#}f(x) = \int_{\mathbb{T}}E^{\#}(x,y)f(y)dy$ be the operator in curly brackets in \eqref{charlie2admiration}, we see that
\begin{align*}
S\tilde{S} = (A\tilde{A} - B\tilde{B}) + (A\tilde{B} + B\tilde{A})H + \mathcal{E}^{\#},
\end{align*}
and that the function $E^{\#}(x,y)$ can be computed modulo a small error in $C^{0}(\mathbb{T} \times \mathbb{T})$. Therefore, we obtain an upper bound for the norm of $S\tilde{S} - I$, namely
\betagin{align*}
\max| A\tilde{A} - B\tilde{B} - 1| + \max |A\tilde{B} + B\tilde{A}|
+ \max\left\{\max_{x}\int|E^{\#}(x,y)|dy,\max_{y}\int|E^{\#}(x,y)|dx\right\}.
\etand{align*}
Defining $S_{err} := S\tilde{S} - I$, we obtain an explicit upper bound $\partialrtial_{\eta}lta$ for the norm of $S_{err}$ on $L^{2}(\mathbb{T})$. We hope that $\partialrtial_{\eta}lta < 1$. If not, then we fail.
Suppose $\partialrtial_{\eta}lta < 1$. Then
\betagin{align*}
S\tilde{S} = I + S_{err} \mathbb{R}ightarrow S\tilde{S}(I+S_{err})^{-1} = I,
\etand{align*}
so we obtain a right inverse for $S$, namely $\tilde{S}(I+S_{err})^{-1}$, which has norm at most
\betagin{align}
\Lambdabel{charlie2star}
\|\tilde{S}\|(1-\partialrtial_{\eta}lta)^{-1},
\etand{align}
where $\|\tilde{S}\|$ denotes the norm of $\tilde{S}$ as an operator on $L^{2}(\mathbb{T})$. Recall
\betagin{align*}
\tilde{S}f(x) = \tilde{A}(x)f(x) + \tilde{B}(x)Hf(x) + \int_{\mathbb{T}}\tilde{E}(x,y)f(y)dy.
\etand{align*}
Therefore,
\betagin{align*}
\|\tilde{S}\| \leq \max|\tilde{A}(x)| + \max|\tilde{B}(x)| + \max\left\{\max_{x}\int|\tilde{E}(x,y)|dy,\max_{y}\int|\tilde{E}(x,y)|dx\right\}.
\etand{align*}
Plugging that bound into \etaqref{charlie2star}, we obtain an explicit upper bound for the norm on $L^{2}$ of a right inverse for $S$. Similarly (by looking at $\tilde{S}S$ instead of $S\tilde{S}$), we obtain an upper bound for the norm on $L^{2}$ of a left inverse for $S$.
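The bookkeeping above can be sketched in a few lines; here is an illustrative Python fragment (the numerical inputs are hypothetical placeholders, not bounds from the actual computation, which would come from rigorous interval arithmetic):

```python
# Sketch of the invertibility certificate described above.  All inputs are
# assumed to be rigorous upper bounds; the values below are hypothetical.
def inverse_norm_bound(err_mult, err_hilbert, err_kernel,
                       max_A_tilde, max_B_tilde, kernel_tilde):
    """Bound for the L^2 norm of a right inverse of S, or None if delta >= 1.

    err_mult    : bound for max|A*At - B*Bt - 1|
    err_hilbert : bound for max|A*Bt + B*At|
    err_kernel  : bound for the row/column L^1 norms of E^#
    The remaining arguments bound max|At|, max|Bt| and the kernel norm of Et.
    """
    delta = err_mult + err_hilbert + err_kernel      # bound for ||S*St - I||
    if delta >= 1.0:
        return None                                  # the certificate fails
    norm_S_tilde = max_A_tilde + max_B_tilde + kernel_tilde
    return norm_S_tilde / (1.0 - delta)              # ||St|| (1 - delta)^{-1}

bound = inverse_norm_bound(0.05, 0.03, 0.02, 1.1, 0.4, 0.1)
```

The Neumann series behind $(I+S_{err})^{-1}$ is what makes the factor $(1-\delta)^{-1}$ appear.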
\begin{remark}
To estimate e.g. $\max_{x}\int_{\mathbb{T}}|E^{\#}(x,y)|dy$ it may be enough just to use the trivial estimate
\begin{align*}
\max_{x}\int_{\mathbb{T}}|E^{\#}(x,y)|dy \leq 2\pi \max_{x,y}|E^{\#}(x,y)|.
\end{align*}
\end{remark}
\begin{remark}[Time dependent solutions]
For $t \in [t_0,t_1]$ (a small time interval), let
\begin{align*}
S_t f(x) = A(x,t)f(x) + B(x,t)Hf(x) + \int_{\mathbb{T}}E(x,y,t)f(y)dy,
\end{align*}
where (for each $t$) $A(\cdot,t)$, $B(\cdot,t)$, $E(\cdot,\cdot,t)$ are as assumed above.
If $A,B,E$ depend in a reasonable way on $t$, then one easily shows that
\begin{align*}
\| S_t - S_{t_0}\| < \eta \text{ for all } t \in [t_0,t_1].
\end{align*}
We can make $\eta$ small by taking $t_1$ close enough to $t_0$. Suppose we prove that $\|S_{t_0}^{-1}\| \leq C_0$ by the previous methods. Then, of course, we obtain an upper bound for $\| S_{t}^{-1}\|$ valid for all $t \in [t_0,t_1]$.
\end{remark}
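The perturbation argument implicit in this remark is the standard Neumann-series bound: if $\|S_{t_0}^{-1}\| \leq C_0$ and $\|S_t - S_{t_0}\| \leq \eta$ with $C_0\eta < 1$, then $\|S_t^{-1}\| \leq C_0/(1 - C_0\eta)$. A minimal sketch of this arithmetic (purely illustrative):

```python
# Neumann perturbation bound: S_t = S_{t0} (I + S_{t0}^{-1} (S_t - S_{t0})),
# so if C0 * eta < 1 the inverse of S_t exists with the bound below.
def perturbed_inverse_bound(C0, eta):
    if C0 * eta >= 1.0:
        return None          # the interval [t0, t1] must be shortened
    return C0 / (1.0 - C0 * eta)
```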
\section{Bounds for $\mathcal{C}(t)$ and $k$}
\subsection{Writing the differential inequality as a differential system of equations}
The calculation of a bound for $\mathcal{C}(t)$ requires more effort than the previous one, since one needs to compute the terms one by one and add all their contributions to $\mathcal{C}(t)$. For example, in order to calculate the evolution of the norm $\|D\|_{H^{k}}(t)$, a systematic approach is to take $k$ derivatives ($k$ ranging from 0 to 4) in the equation for the evolution of $z$ (\ref{CharlieFlat} with $f = g = 0$), take another $k$ derivatives in the equation for $x$ (\ref{CharlieFlat} with arbitrary $f,g$), and subtract them. From now on, let us focus on the term $Q(z)^{2}BR(z,\omega) - Q(x)^2BR(x,\gamma)$ and its derivatives. One notices that in order to write a term in the variables $(z,\omega,\varphi)$ composed of $a$ factors minus its counterpart in the variables $(x,\gamma,\psi)$ in a suitable way (i.e. as a sum of terms that only have factors $x,\gamma,\psi,D,d,\mathcal{D}$), the number of terms is $2^{a}-1$. The way of writing it is the classical one of adding and subtracting the same term, with the purpose of creating differences of terms and eliminating all occurrences of the variables $(z,\omega,\varphi)$. An example for the Birkhoff-Rott operator (with $Q = 1$) is given next. We should remark that the computation and bounding of the Birkhoff-Rott operator is the most expensive one, the rest of the terms being easier.
\begin{align*}
&BR(z,\omega) - BR(x,\gamma) = \frac{1}{2\pi}\int \frac{(x(\alpha)-x(\beta))^\perp}{|x(\alpha)-x(\beta)|^2}\left(\omega(\beta)-\gamma(\beta)\right)d\beta \\
& + \frac{1}{2\pi}\int \frac{(z(\alpha)-z(\beta))^\perp-(x(\alpha)-x(\beta))^\perp}{|x(\alpha)-x(\beta)|^2}\gamma(\beta)d\beta \\
& + \frac{1}{2\pi}\int \frac{(z(\alpha)-z(\beta))^\perp-(x(\alpha)-x(\beta))^\perp}{|x(\alpha)-x(\beta)|^2}\left(\omega(\beta)-\gamma(\beta)\right)d\beta \\
& + \frac{1}{2\pi}\int \left(\frac{1}{|z(\alpha)-z(\beta)|^2} - \frac{1}{|x(\alpha)-x(\beta)|^2}\right) (x(\alpha)-x(\beta))^\perp\gamma(\beta)d\beta \\
& + \frac{1}{2\pi}\int \left(\frac{1}{|z(\alpha)-z(\beta)|^2} - \frac{1}{|x(\alpha)-x(\beta)|^2}\right) (x(\alpha)-x(\beta))^\perp(\omega(\beta) - \gamma(\beta))d\beta \\
& + \frac{1}{2\pi}\int \left(\frac{1}{|z(\alpha)-z(\beta)|^2} - \frac{1}{|x(\alpha)-x(\beta)|^2}\right) (z(\alpha)-z(\beta) - (x(\alpha)-x(\beta)))^\perp\gamma(\beta)d\beta \\
& + \frac{1}{2\pi}\int \left(\frac{1}{|z(\alpha)-z(\beta)|^2} - \frac{1}{|x(\alpha)-x(\beta)|^2}\right) (z(\alpha)-z(\beta) - (x(\alpha)-x(\beta)))^\perp(\omega(\beta) - \gamma(\beta))d\beta
\end{align*}
After having seen this, it is clear that a tool that can perform symbolic calculations (differentiation and basic arithmetic at least) and the correct grouping of the factors is required, since a human cannot carry out this task reliably. We developed a tool in 900 lines of C++ code that does all this and outputs the collection of terms in TeX. We show an excerpt of the terms concerning the fourth derivative of $BR(z,\omega) - BR(x,\gamma)$. The total number of terms in that case is 2841.
\begin{align*}
& 2\pi \left(\partial_{\alpha}^{4} BR(x,\gamma) - \partial_{\alpha}^{4} BR(z,\omega) \right) = \\
&
+\int (\partial_{\alpha}^{4} x(\alpha)-\partial_{\alpha}^{4} x(\alpha-\beta))^{\perp}d(\alpha-\beta)\frac{1}{\left|x(\alpha)-x(\alpha-\beta)\right|^{2}}d\beta\\
&
+\int (\partial_{\alpha}^{4} x(\alpha)-\partial_{\alpha}^{4} x(\alpha-\beta))^{\perp}d(\alpha-\beta) \left(\frac{1}{\left|x(\alpha)-x(\alpha-\beta)\right|^{2}}-\frac{1}{\left|z(\alpha)-z(\alpha-\beta)\right|^{2}}\right)d\beta\\
&
+\int (\partial_{\alpha}^{4} x(\alpha)-\partial_{\alpha}^{4} x(\alpha-\beta))^{\perp}\gamma(\alpha-\beta)\left(\frac{1}{\left|x(\alpha)-x(\alpha-\beta)\right|^{2}} -\frac{1}{\left|z(\alpha)-z(\alpha-\beta)\right|^{2}}\right)d\beta\\
&
+4\int (\partial_{\alpha}^{3} x(\alpha)-\partial_{\alpha}^{3} x(\alpha-\beta))^{\perp}\partial_{\alpha} d(\alpha-\beta)\frac{1}{\left|x(\alpha)-x(\alpha-\beta)\right|^{2}}d\beta\\
&
+4\int (\partial_{\alpha}^{3} x(\alpha)-\partial_{\alpha}^{3} x(\alpha-\beta))^{\perp}\partial_{\alpha} d(\alpha-\beta)\left(\frac{1}{\left|x(\alpha)-x(\alpha-\beta)\right|^{2}} -\frac{1}{\left|z(\alpha)-z(\alpha-\beta)\right|^{2}}\right)d\beta\\
&
-8\int (\partial_{\alpha}^{3} x(\alpha)-\partial_{\alpha}^{3} x(\alpha-\beta))^{\perp}d(\alpha-\beta)\left(\frac{1}{\left|x(\alpha)-x(\alpha-\beta)\right|^{2}}\right)^{2} \\
& \times (\partial_{\alpha} x(\alpha)-\partial_{\alpha} x(\alpha-\beta))\cdot(D(\alpha)-D(\alpha-\beta))d\beta\\
&
-8\int (\partial_{\alpha}^{3} x(\alpha)-\partial_{\alpha}^{3} x(\alpha-\beta))^{\perp}d(\alpha-\beta)\left(\frac{1}{\left|x(\alpha)-x(\alpha-\beta)\right|^{2}}\right)^{2} \\
& \times (\partial_{\alpha} x(\alpha)-\partial_{\alpha} x(\alpha-\beta))\cdot(x(\alpha)-x(\alpha-\beta))d\beta\\
&
-8\int (\partial_{\alpha}^{3} x(\alpha)-\partial_{\alpha}^{3} x(\alpha-\beta))^{\perp}d(\alpha-\beta)\left(\frac{1}{\left|x(\alpha)-x(\alpha-\beta)\right|^{2}}\right)^{2} \\
& \times (\partial_{\alpha} D(\alpha)-\partial_{\alpha} D(\alpha-\beta))\cdot(D(\alpha)-D(\alpha-\beta))d\beta\\
&
-8\int (\partial_{\alpha}^{3} x(\alpha)-\partial_{\alpha}^{3} x(\alpha-\beta))^{\perp}d(\alpha-\beta)\left(\frac{1}{\left|x(\alpha)-x(\alpha-\beta)\right|^{2}}\right)^{2} \\
& \times (x(\alpha)-x(\alpha-\beta))\cdot(\partial_{\alpha} D(\alpha)-\partial_{\alpha} D(\alpha-\beta))d\beta\\
&
-8\int (\partial_{\alpha}^{3} x(\alpha)-\partial_{\alpha}^{3} x(\alpha-\beta))^{\perp}d(\alpha-\beta)\left(\left(\frac{1}{\left|x(\alpha)-x(\alpha-\beta)\right|^{2}}\right)^{2}
- \left(\frac{1}{\left|z(\alpha)-z(\alpha-\beta)\right|^{2}}\right)^{2}\right) \\
& \times (\partial_{\alpha} D(\alpha)-\partial_{\alpha} D(\alpha-\beta))\cdot(D(\alpha)-D(\alpha-\beta)) d\beta \\
& + 2831 \text{ more terms...}
\end{align*}
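The bookkeeping that produces the $2^a - 1$ terms can be sketched mechanically. The following toy Python fragment (not the actual 900-line C++ tool) labels each term of $\prod_i(G_i+D_i)-\prod_i G_i$ by the nonempty subset of factors carrying a difference $D_i = F_i - G_i$:

```python
# Each term of prod(G_i + D_i) - prod(G_i) corresponds to a nonempty subset
# of indices whose factor is a difference D_i; there are 2^a - 1 such subsets.
from itertools import combinations

def expand_difference(a):
    terms = []
    for k in range(1, a + 1):
        for subset in combinations(range(a), k):
            term = ["D%d" % i if i in subset else "G%d" % i for i in range(a)]
            terms.append("*".join(term))
    return terms

# For a = 3 factors this yields terms such as "D0*G1*G2", ..., "D0*D1*D2".
```

The real tool additionally performs the symbolic differentiation and emits the resulting terms in TeX.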
However, the number of terms in the estimates can be reduced significantly by writing the equation in complex form instead of vector form. Thus, we can write the evolution of $z$ in the following way:
\begin{align*}
\partial_t z^{*}(\alpha,t) = \frac{1}{2\pi} PV\int_{\mathbb{T}} \frac{1}{z(\alpha,t) - z(\beta,t)}\omega(\beta,t)d\beta + c(\alpha,t)\partial_{\alpha} z^{*}(\alpha,t).
\end{align*}
In this formulation, the fourth derivative accounts for only 140 terms. We present the first 10 below.
\begin{align*}
& 2\pi \left(\partial_{\alpha}^{4} BR(x,\gamma) - \partial_{\alpha}^{4} BR(z,\omega) \right) \\
& =
-72\int (\partial_{\alpha}^{2} x(\alpha)-\partial_{\alpha}^{2} x(\alpha-\beta))(\partial_{\alpha} x(\alpha)-\partial_{\alpha} x(\alpha-\beta)) \left(\frac{1}{x(\alpha)-x(\alpha-\beta)}\right)^{4} \\
& \times (\partial_{\alpha} D(\alpha)-\partial_{\alpha} D(\alpha-\beta))d(\alpha-\beta)d\beta\\
&
-72\int (\partial_{\alpha}^{2} x(\alpha)-\partial_{\alpha}^{2} x(\alpha-\beta))(\partial_{\alpha} x(\alpha)-\partial_{\alpha} x(\alpha-\beta)) \left(\frac{1}{x(\alpha)-x(\alpha-\beta)}\right)^{4} \\
& \times (\partial_{\alpha} D(\alpha)-\partial_{\alpha} D(\alpha-\beta))\gamma(\alpha-\beta)d\beta\\
&
-72\int (\partial_{\alpha} x(\alpha)-\partial_{\alpha} x(\alpha-\beta))(\partial_{\alpha}^{2} D(\alpha)-\partial_{\alpha}^{2} D(\alpha-\beta)) \left(\frac{1}{x(\alpha)-x(\alpha-\beta)}\right)^{4} \\
& \times (\partial_{\alpha} D(\alpha)-\partial_{\alpha} D(\alpha-\beta))d(\alpha-\beta)d\beta\\
&
-72\int (\partial_{\alpha} x(\alpha)-\partial_{\alpha} x(\alpha-\beta))(\partial_{\alpha}^{2} D(\alpha)-\partial_{\alpha}^{2} D(\alpha-\beta)) \left(\frac{1}{x(\alpha)-x(\alpha-\beta)}\right)^{4} \\
& \times (\partial_{\alpha} D(\alpha)-\partial_{\alpha} D(\alpha-\beta))\gamma(\alpha-\beta)d\beta\\
&
-36\int (\partial_{\alpha}^{2} D(\alpha)-\partial_{\alpha}^{2} D(\alpha-\beta))\left(\partial_{\alpha} D(\alpha)-\partial_{\alpha} D(\alpha-\beta)\right)^{2} \left(\frac{1}{x(\alpha)-x(\alpha-\beta)}\right)^{4}d(\alpha-\beta)d\beta\\
&
-36\int (\partial_{\alpha}^{2} D(\alpha)-\partial_{\alpha}^{2} D(\alpha-\beta))\left(\partial_{\alpha} D(\alpha)-\partial_{\alpha} D(\alpha-\beta)\right)^{2} \left(\frac{1}{x(\alpha)-x(\alpha-\beta)}\right)^{4}\gamma(\alpha-\beta)d\beta\\
&
+8\int \left(\frac{1}{x(\alpha)-x(\alpha-\beta)}\right)^{3}(\partial_{\alpha}^{3} D(\alpha)-\partial_{\alpha}^{3} D(\alpha-\beta))(\partial_{\alpha} D(\alpha)-\partial_{\alpha} D(\alpha-\beta))d(\alpha-\beta)d\beta\\
&
+8\int \left(\frac{1}{x(\alpha)-x(\alpha-\beta)}\right)^{3}(\partial_{\alpha}^{3} D(\alpha)-\partial_{\alpha}^{3} D(\alpha-\beta))(\partial_{\alpha} D(\alpha)-\partial_{\alpha} D(\alpha-\beta))\gamma(\alpha-\beta)d\beta\\
&
+24\int \left(\frac{1}{x(\alpha)-x(\alpha-\beta)}\right)^{3}(\partial_{\alpha}^{2} D(\alpha)-\partial_{\alpha}^{2} D(\alpha-\beta))(\partial_{\alpha} D(\alpha)-\partial_{\alpha} D(\alpha-\beta))\partial_{\alpha} d(\alpha-\beta)d\beta\\
&
+24\int \left(\frac{1}{x(\alpha)-x(\alpha-\beta)}\right)^{3}(\partial_{\alpha}^{2} D(\alpha)-\partial_{\alpha}^{2} D(\alpha-\beta))(\partial_{\alpha} D(\alpha)-\partial_{\alpha} D(\alpha-\beta))\partial_{\alpha} \gamma(\alpha-\beta)d\beta\\
& + 130 \text{ more terms...}
\end{align*}
The final observation is that if we consider $\mathcal{E}(t)$ as a scalar, we might not get suitable estimates. In order to get better estimates, we will modify the energy into a ``vectorized'' version $\mathcal{E}_{v}(t)$, which we will also denote by $\mathcal{E}(t)$ by abuse of notation. This new vectorized energy is as follows:
\begin{align*}
\mathcal{E}(t) =
\left(
\begin{array}{c}
\|D\|_{L^{2}} \\
\|D\|_{\dot{H}^{1}} \\
\|D\|_{\dot{H}^{2}} \\
\|D\|_{\dot{H}^{3}} \\
\|d\|_{L^{2}} \\
\|d\|_{\dot{H}^{1}} \\
\|d\|_{\dot{H}^{2}} \\
\vdots
\end{array}
\right),
\end{align*}
where the homogeneous spaces $\dot{H}^{k}$ have their norm defined by $\|f\|_{\dot{H}^{k}} = \|\partial_{\alpha}^{k} f\|_{L^{2}}$. With this vectorized system, we avoid both bounding any given norm by the full energy and the constant factors arising from interpolation between two Sobolev spaces. Thus, our constant $\mathcal{C}(t)$ will roughly be of a size comparable to the largest eigenvalue of the linearized system.
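The entries of this vector can be computed spectrally; the following Python sketch is purely illustrative (in the proof itself the corresponding quantities would be evaluated with rigorous interval arithmetic):

```python
import numpy as np

def homogeneous_sobolev_norms(f, kmax):
    """Entries (||f||_{L^2}, ||f||_{Hdot^1}, ..., ||f||_{Hdot^kmax}) of the
    vectorized energy for a 2*pi-periodic function sampled on a uniform grid,
    using ||f||_{Hdot^k} = ||d^k f/d alpha^k||_{L^2} and Parseval's identity."""
    n = len(f)
    fhat = np.fft.fft(f) / n                 # Fourier coefficients c_m
    modes = np.fft.fftfreq(n, d=1.0 / n)     # integer wavenumbers m
    norms = []
    for k in range(kmax + 1):
        # |m|^k |c_m| are the coefficients of the k-th derivative
        norms.append(np.sqrt(2 * np.pi * np.sum(np.abs(modes) ** (2 * k)
                                                * np.abs(fhat) ** 2)))
    return np.array(norms)
```

For example, for $f(\alpha)=\sin\alpha$ every entry equals $\sqrt{\pi}$, since each derivative of $\sin$ has the same $L^2$ norm.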
\subsection{Estimates for the linear terms with $Q = 1$}
Since we expect $\mathcal{E}(t)$ to be small, the terms that contribute most to the evolution of $\mathcal{E}(t)$ are the linear ones. We now report on non-rigorous experiments on the linear terms, performed to obtain an approximate bound for the behavior of the full system (i.e. an approximation to the largest eigenvalue of the linearized system). We remark that multiplying the estimates by a constant, even a small factor such as 2, has a big impact on the system and can render the estimates useless, because the bounds we obtain are exponential in the product of that constant and the time elapsed between the splash and the graph. Therefore, we should be very careful, and fine estimates have to be developed.
First of all, we will work with $Q = 1$ and later move on to the case $Q \neq 1$. We will adopt the following convention to denote the different kernels (integral operators) that appear:
\begin{align*}
\Theta^{a_1, a_2, a_3, a_4}_{b_1, b_2}(\alpha,\beta) & = \frac{1}{(x(\alpha)-x(\beta))^{b_1}}(\partial_{\alpha} x(\alpha)-\partial_{\alpha} x(\beta))^{a_1}(\partial_{\alpha}^{2}x(\alpha)-\partial_{\alpha}^{2}x(\beta))^{a_2} \\
& \times (\partial_{\alpha}^{3} x(\alpha)-\partial_{\alpha}^{3} x(\beta))^{a_3}(\partial_{\alpha}^{4} x(\alpha)-\partial_{\alpha}^{4} x(\beta))^{a_4}\partial_{\alpha}^{b_2}\gamma(\beta), \\
\Theta^{a_1, a_2, a_3, a_4}_{b_1, -1}(\alpha,\beta) & = \frac{1}{(x(\alpha)-x(\beta))^{b_1}}(\partial_{\alpha} x(\alpha)-\partial_{\alpha} x(\beta))^{a_1}(\partial_{\alpha}^{2}x(\alpha)-\partial_{\alpha}^{2}x(\beta))^{a_2} \\
& \times (\partial_{\alpha}^{3} x(\alpha)-\partial_{\alpha}^{3} x(\beta))^{a_3}(\partial_{\alpha}^{4} x(\alpha)-\partial_{\alpha}^{4} x(\beta))^{a_4}.
\end{align*}
The operators for which $b_2 \neq -1$ will act on $D$ or its derivatives, whereas the operators for which $b_2 = -1$ will act on $d$ or its derivatives. We now describe how to split the kernels in such a way that they can be computed. For the case $b_2 \neq -1$ we illustrate this by splitting $\Theta^{0,0,0,0}_{2,0}$, but the technique can be applied to any kernel.
\begin{align}
\frac{1}{2\pi}\int \Theta^{0,0,0,0}_{2,0}(\alpha,\beta)&(D(\alpha) - D(\beta))d\beta = \underbrace{\frac{1}{2\pi}D(\alpha)\int K(\alpha,\beta)\gamma(\beta)d\beta}_{T_1}
- \underbrace{\frac{1}{2\pi}\int K(\alpha,\beta)\gamma(\beta)D(\beta)d\beta}_{T_2} \nonumber \\
& + \underbrace{\frac{1}{2\pi}c_1(\alpha)\int \frac{D(\alpha) - D(\beta)}{4\sin^{2}\left(\frac{\alpha - \beta}{2}\right)}\gamma(\beta)d\beta}_{T_3}
+ \underbrace{\frac{1}{2\pi}c_2(\alpha)\int \frac{D(\alpha) - D(\beta)}{2\tan\left(\frac{\alpha - \beta}{2}\right)}\gamma(\beta)d\beta}_{T_4},
\label{estimacionessplitting}
\end{align}
where
\begin{align*}
K(\alpha,\beta) & = \frac{1}{(x(\alpha)-x(\beta))^{2}} - \frac{c_1(\alpha)}{4\sin^{2}\left(\frac{\alpha - \beta}{2}\right)} - \frac{c_2(\alpha)}{2\tan\left(\frac{\alpha - \beta}{2}\right)},\\
c_1(\alpha) & = \frac{1}{x_\alpha^{2}(\alpha)},\\
c_2(\alpha) & = \frac{x_{\alpha \alpha}(\alpha)}{x_{\alpha}^{3}(\alpha)}.
\end{align*}
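One can sanity-check numerically that subtracting the two Taylor terms really removes the singularity of $K$ on the diagonal. The following Python sketch uses an illustrative (hypothetical) parametrization $x(\alpha) = \alpha + 0.1\sin\alpha$, not a curve from the actual construction:

```python
import math

def K(alpha, beta, x, dx, ddx):
    """Desingularized kernel K(alpha, beta) from the splitting above,
    with c1 = 1/x_a^2 and c2 = x_aa / x_a^3."""
    c1 = 1.0 / dx(alpha) ** 2
    c2 = ddx(alpha) / dx(alpha) ** 3
    h = alpha - beta
    return (1.0 / (x(alpha) - x(beta)) ** 2
            - c1 / (4.0 * math.sin(h / 2.0) ** 2)
            - c2 / (2.0 * math.tan(h / 2.0)))

# hypothetical curve, used only for this check
x   = lambda a: a + 0.1 * math.sin(a)
dx  = lambda a: 1.0 + 0.1 * math.cos(a)
ddx = lambda a: -0.1 * math.sin(a)

# near the diagonal the raw kernel blows up like h^{-2}, but K stays bounded
vals = [K(1.0, 1.0 - h, x, dx, ddx) for h in (1e-1, 1e-2, 1e-3)]
```

The remaining smooth part $K$ is what is evaluated on a grid when the estimates are computed.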
We can think of $c_1(\alpha)$ and $c_2(\alpha)$ as the first Taylor coefficients of the kernel $\Theta^{0,0,0,0}_{2,0}(\alpha,\beta)$ around $\beta = \alpha$. We can bound the terms in
\eqref{estimacionessplitting} in the following way:
\begin{align*}
T_4(\alpha) & = c_2(\alpha)[H(D\gamma)(\alpha) - DH(\gamma)(\alpha)],\\
T_3(\alpha) & = c_1(\alpha)[\Lambda(D\gamma)(\alpha) - D\Lambda(\gamma)(\alpha)].
\end{align*}
We then have the estimates
\begin{align*}
\|T_4\|_{L^{2}} & \leq \|c_2\|_{L^{\infty}}(\|D\|_{L^{2}}\|\gamma\|_{L^{\infty}} + \|D\|_{L^{2}}\|H\gamma\|_{L^{\infty}}), \\
\|T_3\|_{L^{2}} & \leq \|c_1\|_{L^{\infty}}(\|D\|_{L^{2}}\|\gamma_{\alpha}\|_{L^{\infty}} + \|D_{\alpha}\|_{L^{2}}\|\gamma\|_{L^{\infty}} + \|D\|_{L^{2}}\|\Lambda(\gamma)\|_{L^{\infty}}).
\end{align*}
We now move on to $T_1$. We will estimate it in the following way:
\begin{align*}
\int T_1 \overline{D(\alpha)} d\alpha = \frac{1}{2\pi}\int |D(\alpha)|^{2}\int K(\alpha,\beta) \gamma(\beta)d\beta d\alpha
\leq \frac{1}{2\pi}\|D\|_{L^{2}}^{2}\left\|\int K(\cdot,\beta) \gamma(\beta)d\beta\right\|_{L^{\infty}}.
\end{align*}
To estimate the kernel $T_2$ we will use the generalized Young's inequality \cite{Folland:introduction-pdes}:
\begin{align*}
\|T_2(D)\|_{L^{2}}^{2} = \frac{1}{4\pi^{2}}\int \int \int K(\alpha,\beta) \gamma(\beta) D(\beta) \overline{K(\alpha,\sigma)} \overline{\gamma(\sigma)} \overline{D(\sigma)} d\beta d\sigma d\alpha.
\end{align*}
Defining
\begin{align*}
\tilde{K}(\beta,\sigma) = \int K(\alpha,\beta) \gamma(\beta) \overline{K(\alpha,\sigma)} \overline{\gamma(\sigma)} d\alpha,
\end{align*}
we have that
\begin{align*}
\|T_2(D)\|_{L^{2}}^{2} & = \frac{1}{4\pi^{2}}\int \int \tilde{K}(\beta,\sigma)D(\beta)\overline{D(\sigma)} d\beta d\sigma \\
& = \frac{1}{4\pi^{2}}\int D(\beta) \left(\int \tilde{K}(\beta,\sigma)\overline{D(\sigma)} d\sigma\right)d\beta \\
& \leq \frac{1}{4\pi^{2}}\|D\|_{L^{2}} \left\|\int \tilde{K}(\cdot,\sigma)\overline{D(\sigma)}d\sigma\right\|_{L^{2}} \\
& \leq \frac{1}{4\pi^{2}}C\|D\|_{L^{2}}^{2}, \quad C = \max\left\{\max_{\beta}\int |\tilde{K}(\beta,\sigma)|d\sigma,\max_{\sigma} \int |\tilde{K}(\beta,\sigma)|d\beta\right\}.
\end{align*}
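In practice the constant $C$ (a Schur-type bound: the larger of the row and column $L^1$ norms of the kernel) is computed on a discretization of $\tilde{K}$. A minimal Python sketch, with an illustrative uniform grid:

```python
import numpy as np

def schur_constant(Ktilde, dx):
    """The constant C above for a kernel sampled on a uniform grid with
    spacing dx: the larger of the maximal row and column L^1 norms."""
    row = np.max(np.sum(np.abs(Ktilde), axis=1)) * dx
    col = np.max(np.sum(np.abs(Ktilde), axis=0)) * dx
    return max(row, col)
```

For a rigorous bound one would replace the sampled sums by interval enclosures of the integrals.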
We finally show how to estimate the kernels with $b_2 = -1$. We will do this by showing how to estimate $\Theta^{0,0,0,0}_{1,-1}$, but the technique can be applied to any kernel.
\begin{align*}
\frac{1}{2\pi}\int \Theta^{0,0,0,0}_{1,-1}(\alpha,\beta)\,d(\beta)d\beta & = \underbrace{\frac{1}{2\pi}\int K(\alpha,\beta)d(\beta)d\beta}_{T_1} \\
& + \underbrace{\frac{1}{2\pi}c_1(\alpha)\int \frac{1}{2\tan\left(\frac{\alpha - \beta}{2}\right)}d(\beta)d\beta}_{T_2},
\end{align*}
where
\begin{align*}
K(\alpha,\beta) & = \frac{1}{x(\alpha)-x(\beta)} - \frac{c_1(\alpha)}{2\tan\left(\frac{\alpha - \beta}{2}\right)},\\
c_1(\alpha) & = \frac{1}{x_\alpha(\alpha)}.
\end{align*}
We can easily estimate these two terms by applying to $T_1$ the same estimates (Young's inequality) as for $T_2$ in the previous case, and by noting that $T_2$ is $\frac{1}{2}c_1(\alpha)H(d)$.
\subsection{Estimates for the linear terms with $Q \neq 1$}
To perform the real estimates, where $Q \neq 1$, we will use the estimates from the previous sections. We will explain how to pass from the former to the latter, illustrating this by computing the linear terms of the Birkhoff-Rott operator.
First of all, the total number of terms will increase by a factor of 2, since we will have
\begin{align*}
Q^{2}(z)BR(z,\omega) - Q^{2}(x)BR(x,\gamma) & = \underbrace{(Q^{2}(z)-Q^{2}(x))(BR(z,\omega) - BR(x,\gamma))}_{\text{nonlinear}} \\
& + \underbrace{Q^{2}(x)(BR(z,\omega) - BR(x,\gamma))}_{\text{calculated before}} \\
& + \underbrace{(Q^{2}(z)-Q^{2}(x))BR(x,\gamma)}_{\text{new terms}}.
\end{align*}
In order to calculate the old terms with $Q \neq 1$, the only thing we have to do is to incorporate a factor of $\partial_{\alpha}^{k}Q^{2}(x)(\alpha)$ in the estimates. The new terms can easily be calculated using that, up to linear order,
\begin{align*}
Q^{2}(z)-Q^{2}(x) = \frac{1}{8}\left\langle \frac{1+x^4}{x}, \overline{3x^2 - \frac{1}{x^2}}\right\rangle D + O(D^2).
\end{align*}
\section{Proof of Theorem \ref{stabilitytheorem}}
\label{sectionstability}
In this section, we will prove the stability result, Theorem \ref{stabilitytheorem}.
The equations are:
\begin{align*}
\text{SPLASH}&\left\{
\begin{array}{cl}
z_t & = Q_{z}^{2}BR + cz_{\alpha} \\
c & = \displaystyle \frac{\alpha + \pi}{2\pi}\int_{-\pi}^{\pi}(Q^2 BR)_{\alpha}\frac{z_\alpha}{|z_{\alpha}|^{2}} d\alpha - \int_{-\pi}^{\alpha}(Q^2 BR)_{\beta}\frac{z_{\beta}}{|z_{\beta}|^{2}}d\beta \\
\omega_t & + 2BR_{t} \cdot z_{\alpha} = - (Q^2)_{\alpha}|BR|^{2} + 2cBR_{\alpha} \cdot z_{\alpha} + (c\omega)_{\alpha}\\
& \displaystyle - \left(\frac{Q^2\omega^2}{4|z_{\alpha}|^{2}}\right)_{\alpha}
- 2(P^{-1}_{2}(z))_{\alpha}
\end{array}
\right.\\
\text{APPROX}&\left\{
\begin{array}{cl}
x_t & = Q^2(x)BR(x,\gamma) + bx_{\alpha} + f\\
b & = \underbrace{\frac{\alpha + \pi}{2\pi}\int_{-\pi}^{\pi}(Q^2 BR)_{\alpha}\frac{x_\alpha}{|x_{\alpha}|^{2}} d\alpha - \int_{-\pi}^{\alpha}(Q^2 BR)_{\beta}\frac{x_{\beta}}{|x_{\beta}|^{2}}d\beta}_{b_s} \\
&+ \underbrace{\frac{\alpha + \pi}{2\pi}\int_{-\pi}^{\pi}f_{\alpha}\frac{x_\alpha}{|x_{\alpha}|^{2}} d\alpha - \int_{-\pi}^{\alpha}f_{\beta}\frac{x_{\beta}}{|x_{\beta}|^{2}}d\beta}_{b_e}\\
\gamma_t & + 2BR_{t}(x,\gamma) \cdot x_{\alpha} = - (Q^2(x))_{\alpha}|BR(x,\gamma)|^{2} + 2bBR_{\alpha}(x,\gamma) \cdot x_{\alpha} + (b\gamma)_{\alpha} \\
& \qquad \qquad \qquad \displaystyle - \left(\frac{Q^2(x)\gamma^2}{4|x_{\alpha}|^{2}}\right)_{\alpha} - 2(P^{-1}_{2}(x))_{\alpha} + g
\end{array}
\right.
\end{align*}
where
\begin{equation*}
BR(z,\varpi)(\alpha) = \frac{1}{2\pi}PV\int_{-\pi}^{\pi}\frac{(z(\alpha) - z(\alpha-\beta))^{\perp}}{|z(\alpha) - z(\alpha-\beta)|^{2}}\varpi(\alpha-\beta)d\beta;
\end{equation*}
$f$ will be the error for $z$ and $g$ will be the error for $\omega$.
\subsection{Computing the differences $z-x$ and $\omega - \gamma$}
We now define
$$ D \equiv z - x, \quad d \equiv \omega - \gamma, \quad \mathcal{D} \equiv \varphi - \psi,$$
the energy
$$ E(t) \equiv \frac{1}{2}\left(\|D\|^{2}_{L^{2}} + \int_{-\pi}^{\pi}\frac{Q_{z}^{2}}{|z_{\alpha}|^{2}}\sigma_{z}|\partial^{4}_{\alpha}D|^{2}d\alpha + \|d\|^{2}_{H^{2}} + \|\mathcal{D}\|^{2}_{H^{3+\frac{1}{2}}}\right),$$
and the Rayleigh-Taylor function
\begin{align*}\sigma_{z} &\equiv \left(BR_{t} + \frac{\varphi}{|z_{\alpha}|}BR_{\alpha}\right) \cdot z_{\alpha}^{\perp} + \frac{\omega}{2|z_{\alpha}|^{2}}\left(z_{\alpha t} + \frac{\varphi}{|z_{\alpha}|}z_{\alpha \alpha}\right) \cdot z_{\alpha}^{\perp} \\
&+ Q\left|BR + \frac{\omega}{2|z_{\alpha}|^{2}}z_{\alpha}\right|^{2}\nabla Q \cdot z_{\alpha}^{\perp}
- (\nabla P_{2}^{-1})(z) \cdot z_{\alpha}^{\perp}.\end{align*}
Note that $\sigma_{z} > 0$ (the Rayleigh-Taylor condition). We shall show that
$$\left|\frac{d}{dt}E(t)\right|\leq \mathcal{C}(t)(E(t)+E^{k}(t))+c\delta(t),$$
where $$\mathcal{C}(t)= \mathcal{C}(\|x\|_{H^{5+\frac12}}(t),\|\gamma\|_{H^{3+\frac12}}(t),
\|\psi\|_{H^{4+\frac12}}(t),\|F(x)\|_{L^\infty}(t))$$ and $$\delta(t)=(\|f\|_{H^{5+\frac12}}(t)+\|g\|_{H^{3+\frac12}}(t))^k+(\|f\|_{H^{5+\frac12}}(t)+\|g\|_{H^{3+\frac12}}(t))^2, \text{ with $k$ big enough;}$$
in particular, $\delta(t)$ depends only on the norms of $f$ and $g$.
\begin{remark}
From now on, we will denote $E(t) + E(t)^k$ by $P(E(t))$.
\end{remark}
The proof of the bound $\frac{1}{2}\frac{d}{dt}\|D\|_{L^{2}}^{2} \leq CP(E(t)) + \delta(t)$ is left to the reader. We compute
$$ \frac{1}{2}\frac{d}{dt}\int_{-\pi}^{\pi}\frac{Q_{z}^{2}}{|z_{\alpha}|^{2}}\sigma_{z}|\partial^{4}_{\alpha}D|^{2}
= \frac{1}{2}\int_{-\pi}^{\pi}\frac{(Q_{z}^{2}\sigma_{z})_{t}}{|z_{\alpha}|^{2}}|\partial^{4}_{\alpha}D|^{2}
+ \int_{-\pi}^{\pi}\frac{Q_{z}^{2}}{|z_{\alpha}|^{2}}\sigma_{z}\partial_{\alpha}^{4}D\, \partial_{\alpha}^{4}D_t.$$
The first integral is easy to bound by $CP(E(t))$; we proceed as in the local existence result, Theorem I.7 in \cite{Castro-Cordoba-Fefferman-Gancedo-GomezSerrano:finite-time-singularities-free-boundary-euler}. We split
$$ I = \int_{-\pi}^{\pi}\frac{Q_{z}^{2}}{|z_{\alpha}|^{2}}\sigma_{z}\partial_{\alpha}^{4}D\, \partial_{\alpha}^{4}D_t = I_1 + I_2 + I_3,$$
where
\begin{align*}
I_1 & = \int_{-\pi}^{\pi}\frac{Q_{z}^{2}}{|z_{\alpha}|^{2}}\sigma_{z}\partial_{\alpha}^{4}D\, \partial_{\alpha}^{4}(Q_{z}^2 BR(z,\omega) - Q_{x}^{2}BR(x,\gamma))d\alpha,\\
I_2 & = \int_{-\pi}^{\pi}\frac{Q_{z}^{2}}{|z_{\alpha}|^{2}}\sigma_{z}\partial_{\alpha}^{4}D\, \partial_{\alpha}^{4}(cz_{\alpha} - bx_{\alpha})d\alpha,\\
I_3 & = \int_{-\pi}^{\pi}\frac{Q_{z}^{2}}{|z_{\alpha}|^{2}}\sigma_{z}\partial_{\alpha}^{4}D\, \partial_{\alpha}^{4}f\, d\alpha.
\end{align*}
We have:
$$ I_3 \leq \frac{1}{2}\int_{-\pi}^{\pi}\frac{Q_{z}^{2}}{|z_{\alpha}|^{2}}\sigma_{z}|\partial_{\alpha}^{4}D|^2d\alpha
+ \frac{1}{2}\int_{-\pi}^{\pi}\frac{Q_{z}^{2}}{|z_{\alpha}|^{2}}\sigma_{z}|\partial_{\alpha}^{4}f|^2d\alpha
\leq CP(E(t)) + \frac{\|Q_{z}^{2}\sigma_{z}\|_{L^{\infty}}}{2}\delta(t).$$
Thus, we are done with $I_3$. We now split
\begin{align*}
I_1 & = \text{l.o.t.} + I_{1,1} + I_{1,2} + I_{1,3} + I_{1,4},\\
I_{1,1} & = \int_{-\pi}^{\pi}\frac{Q_{z}^{2}}{|z_{\alpha}|^{2}}\sigma_{z}\partial_{\alpha}^{4}D (\partial_{\alpha}^{4}(Q_{z}^2) BR(z,\omega) - \partial_{\alpha}^{4}(Q_{x}^{2})BR(x,\gamma))d\alpha,\\
I_{1,2} & = \int_{-\pi}^{\pi}\frac{Q_{z}^{2}}{|z_{\alpha}|^{2}}\sigma_{z}\partial_{\alpha}^{4}D \left(Q_{z}^2\frac{1}{2\pi}\int_{-\pi}^{\pi}\frac{(\partial_{\alpha}^{4}z(\alpha) - \partial_{\alpha}^{4}z(\alpha - \beta))^{\perp}}{|z(\alpha) - z(\alpha - \beta)|^{2}}\omega(\alpha-\beta)d\beta\right. \\
& \left.- Q_{x}^2\frac{1}{2\pi}\int_{-\pi}^{\pi}\frac{(\partial_{\alpha}^{4}x(\alpha) - \partial_{\alpha}^{4}x(\alpha - \beta))^{\perp}}{|x(\alpha) - x(\alpha - \beta)|^{2}}\gamma(\alpha-\beta)d\beta\right)d\alpha,\\
I_{1,3} & = \int_{-\pi}^{\pi}\frac{Q_{z}^{2}}{|z_{\alpha}|^{2}}\sigma_{z}\partial_{\alpha}^{4}D\\ &\times\left(Q_{z}^2\frac{-1}{\pi}\int_{-\pi}^{\pi}\frac{(z(\alpha) - z(\alpha - \beta))^{\perp}}{|z(\alpha) - z(\alpha - \beta)|^{4}}(z(\alpha) - z(\alpha - \beta)) \cdot (\partial_{\alpha}^{4}z(\alpha) - \partial_{\alpha}^{4}z(\alpha - \beta))\omega(\alpha-\beta)d\beta\right. \\
& +\left.Q_{x}^2\frac{1}{\pi}\int_{-\pi}^{\pi}\frac{(x(\alpha) - x(\alpha - \beta))^{\perp}}{|x(\alpha) - x(\alpha - \beta)|^{4}}(x(\alpha) - x(\alpha - \beta)) \cdot (\partial_{\alpha}^{4}x(\alpha) - \partial_{\alpha}^{4}x(\alpha - \beta))\gamma(\alpha-\beta)d\beta\right)d\alpha, \\
I_{1,4} & = \int_{-\pi}^{\pi}\frac{Q_{z}^{2}}{|z_{\alpha}|^{2}}\sigma_{z}\partial_{\alpha}^{4}D \left(Q_{z}^2 BR(z,\partial_{\alpha}^{4}\omega) - Q_{x}^{2}BR(x,\partial_{\alpha}^{4}\gamma)\right)d\alpha,
\end{align*}
where l.o.t. stands for lower order terms, which are easier to deal with. We have
$$ I_{1,1} = \text{l.o.t.} + I_{1,1,1}, \text{ where }$$
\begin{align*}
I_{1,1,1} & = 2\int_{-\pi}^{\pi}\frac{Q_{z}^{2}}{|z_{\alpha}|^{2}}\sigma_{z}\partial_{\alpha}^{4}D (\nabla Q(z) \cdot \partial_{\alpha}^{4} z\, BR(z,\omega) - \nabla Q(x) \cdot \partial_{\alpha}^{4} x\, BR(x,\gamma))d\alpha \\
& = 2\int_{-\pi}^{\pi}\frac{Q_{z}^{2}}{|z_{\alpha}|^{2}}\sigma_{z}\partial_{\alpha}^{4}D\, \nabla Q(z) \cdot \partial_{\alpha}^{4} D\, BR(z,\omega)d\alpha \\
& + 2\int_{-\pi}^{\pi}\frac{Q_{z}^{2}}{|z_{\alpha}|^{2}}\sigma_{z}\partial_{\alpha}^{4}D (\nabla Q(z) \cdot \partial_{\alpha}^{4} x\, BR(z,\omega) - \nabla Q(x) \cdot \partial_{\alpha}^{4} x\, BR(x,\gamma))d\alpha \\
& \leq 2\int_{-\pi}^{\pi}\frac{Q_{z}^{2}}{|z_{\alpha}|^{2}}\sigma_{z}|\partial_{\alpha}^{4}D|^2 d\alpha \underbrace{\|\nabla Q(z) BR(z,\omega)\|_{L^\infty}}_{\text{bounded as for local existence}} \\
& + \int_{-\pi}^{\pi}\frac{Q_{z}^{2}}{|z_{\alpha}|^{2}}\sigma_{z}|\partial_{\alpha}^{4}D|^2
+ \underbrace{\int_{-\pi}^{\pi}\frac{Q_{z}^{2}}{|z_{\alpha}|^{2}}\sigma_{z}|\nabla Q(z) \cdot \partial_{\alpha}^{4} x\, BR(z,\omega) - \nabla Q(x) \cdot \partial_{\alpha}^{4} x\, BR(x,\gamma)|^2 d\alpha}_{\text{l.o.t. in $D$ and $d$}} \\
& \leq CP(E(t)),
\end{align*}
which means $I_{1,1}$ is done.
From now on we will denote
$$ \Delta_{\beta}z(\alpha) = z(\alpha) - z(\alpha - \beta).$$
\begin{align*}
I_{1,2} & = I_{1,2,1} + I_{1,2,2} + I_{1,2,3} + I_{1,2,4}, \text{ where}\\
I_{1,2,1} & = \int_{-\pi}^{\pi}\frac{Q_{z}^{2}}{|z_{\alpha}|^{2}}\sigma_{z}\partial_{\alpha}^{4}D\, Q_{z}^2\frac{1}{2\pi}\int_{-\pi}^{\pi}\frac{\Delta_{\beta} \partial_{\alpha}^{4} D^{\perp}(\alpha)}{|\Delta_{\beta} z(\alpha)|^2}\omega(\alpha-\beta)d\beta d\alpha, \\
I_{1,2,2} & = \int_{-\pi}^{\pi}\frac{Q_{z}^{2}}{|z_{\alpha}|^{2}}\sigma_{z}\partial_{\alpha}^{4}D\, Q_{z}^2\frac{1}{2\pi}\int_{-\pi}^{\pi}\Delta_{\beta}\partial_{\alpha}^{4}x^{\perp}(\alpha) \left(\frac{1}{|\Delta_{\beta}z(\alpha)|^{2}} - \frac{1}{|\Delta_{\beta}x(\alpha)|^{2}}\right)\omega(\alpha-\beta)d\beta d\alpha, \\
I_{1,2,3} & = \int_{-\pi}^{\pi}\frac{Q_{z}^{2}}{|z_{\alpha}|^{2}}\sigma_{z}\partial_{\alpha}^{4}D\, Q_{z}^2\frac{1}{2\pi}\int_{-\pi}^{\pi}\frac{\Delta_{\beta}\partial_{\alpha}^{4}x^{\perp}(\alpha)}{|\Delta_{\beta}x(\alpha)|^{2}}d(\alpha-\beta)d\beta d\alpha, \\
I_{1,2,4} & = \int_{-\pi}^{\pi}\frac{Q_{z}^{2}}{|z_{\alpha}|^{2}}\sigma_{z}\partial_{\alpha}^{4}D (Q_{z}^2 - Q_{x}^{2})\frac{1}{2\pi}\int_{-\pi}^{\pi}\frac{\Delta_{\beta}\partial_{\alpha}^{4}x^{\perp}(\alpha)}{|\Delta_{\beta}x(\alpha)|^{2}}\gamma(\alpha-\beta)d\beta d\alpha.
\end{align*}
\begin{align*}
I_{1,2,1} & = \int_{-\pi}^{\pi}\frac{Q_{z}^{4}}{|z_{\alpha}|^{2}}\sigma_{z}\partial_{\alpha}^{4}D(\alpha)\frac{1}{2\pi}\int_{-\pi}^{\pi}\frac{\Delta_{\alpha-\beta} \partial_{\alpha}^{4} D^{\perp}(\alpha)}{|\Delta_{\alpha-\beta} z(\alpha)|^2}\omega(\beta)d\beta d\alpha \\
& = \frac{1}{|z_{\alpha}|^{2}}\frac{1}{2\pi}\int_{-\pi}^{\pi}\int_{-\pi}^{\pi}\partial_{\alpha}^{4}D\frac{\Delta_{\alpha-\beta} \partial_{\alpha}^{4} D^{\perp}(\alpha)}{|\Delta_{\alpha-\beta} z(\alpha)|^2}\left(\frac{Q_{z}^{4}(\alpha) \sigma_{z}(\alpha) \omega(\beta) - Q_{z}^{4}(\beta) \sigma_{z}(\beta) \omega(\alpha)}{2}\right. \\
&\left. + \underbrace{\frac{Q_{z}^{4}(\alpha) \sigma_{z}(\alpha) \omega(\beta) + Q_{z}^{4}(\beta) \sigma_{z}(\beta) \omega(\alpha)}{2}}_{\text{\tiny this is zero as in local existence $(\partial_{\alpha}^{4}D \cdot \partial_{\alpha}^{4}D^{\perp} = 0)$}}\right)d\alpha d\beta \\
\Rightarrow I_{1,2,1} & = \frac{1}{2\pi}\int_{-\pi}^{\pi}\frac{\partial_{\alpha}^{4}D}{|z_{\alpha}|^{2}}\int_{-\pi}^{\pi}\underbrace{\frac{\Delta_{\alpha-\beta} \partial_{\alpha}^{4} D^{\perp}(\alpha)}{|\Delta_{\alpha-\beta} z(\alpha)|^2}}_{\substack{\text{\tiny Hilbert transform}\\ \text{\tiny applied to $\partial_{\alpha}^{4}D^{\perp}(\alpha)$}}}\left(\frac{Q_{z}^{4}(\alpha) \sigma_{z}(\alpha) \omega(\beta) - Q_{z}^{4}(\beta) \sigma_{z}(\beta) \omega(\alpha)}{2} \right)d\beta d\alpha \\
\Rightarrow I_{1,2,1} & \leq CP(E(t)).
\end{align*}
For $I_{1,2,2}$ we use a trick to get fewer derivatives in $x$.
\begin{align*}
I_{1,2,2} & = I_{1,2,2}^{1} + I_{1,2,2}^{2} + I_{1,2,2}^{3} \\
I_{1,2,2}^{3} & = \frac{1}{2}\int_{-\pi}^{\pi}\frac{Q_{z}^{4}}{|z_{\alpha}|^{2}}\sigma_{z}\partial_{\alpha}^{4}D
\omega(\alpha)\left(\frac{1}{|z_{\alpha}|^{2}} - \frac{1}{|x_{\alpha}|^{2}}\right)
\overbrace{\frac{1}{\pi}\int_{-\pi}^{\pi}\frac{\Delta_{\beta}\partial_{\alpha}^{4}x^{\perp}(\alpha)}{\beta^{2}}d\beta}^{\Lambda \partial_{\alpha}^{4} x^{\perp}} d\alpha \\
I_{1,2,2}^{2} & = \frac{1}{2\pi}\int_{-\pi}^{\pi}\frac{Q_{z}^{4}}{|z_{\alpha}|^{2}}\sigma_{z}\partial_{\alpha}^{4}D \int_{-\pi}^{\pi}\Delta_{\beta}\partial_{\alpha}^{4}x^{\perp}(\alpha) \left(\frac{1}{|\Delta_{\beta}z(\alpha)|^{2}}
- \frac{1}{|z_{\alpha}(\alpha)|^{2}\beta^{2}} + \overbrace{\frac{z_{\alpha} \cdot z_{\alpha \alpha}}{|z_{\alpha}|^{4} \beta}}^{=0}\right. \\
& \left.-\left(\frac{1}{|\Delta_{\beta}x(\alpha)|^{2}}
- \frac{1}{|x_{\alpha}(\alpha)|^{2}\beta^{2}} + \overbrace{\frac{x_{\alpha} \cdot x_{\alpha \alpha}}{|x_{\alpha}|^{4} \beta}}^{=0}\right)
\right)\omega(\alpha)d\beta d\alpha\\
I_{1,2,2}^{1} & = \frac{1}{2\pi}\int_{-\pi}^{\pi}\frac{Q_{z}^{4}}{|z_{\alpha}|^{2}}\sigma_{z}\partial_{\alpha}^{4}D \int_{-\pi}^{\pi}\Delta_{\beta}\partial_{\alpha}^{4}x^{\perp}(\alpha) \left(\frac{1}{|\Delta_{\beta}z(\alpha)|^{2}}
- \frac{1}{|\Delta_{\beta}x(\alpha)|^{2}}\right)\left(\omega(\alpha - \beta) - \omega(\alpha)\right)d\beta d\alpha
\end{align*}
We use that $\displaystyle \left|\frac{1}{|z_{\alpha}|^{2}} - \frac{1}{|x_{\alpha}|^{2}}\right| \leq \frac{|x_{\alpha}| + |z_{\alpha}|}{|z_{\alpha}|^{2}|x_{\alpha}|^{2}}|D_{\alpha}|$ to find that
\begin{align*} I_{1,2,2}^{3} & \leq \frac{1}{4}\int_{-\pi}^{\pi}\frac{Q_{z}^{2}}{|z_{\alpha}|^{2}}\sigma_{z}|\partial_{\alpha}^{4}D|^2 d\alpha
+ \|Q_{z}\|_{L^{\infty}}^{6} \|\sigma_{z}\|_{L^{\infty}} \|\omega\|_{L^{\infty}}^{2}\left(\frac{|x_{\alpha}| + |z_{\alpha}|}{|z_{\alpha}|^{2}|x_{\alpha}|^{2}}\right)^{2}\overbrace{\|D_{\alpha}\|_{L^{\infty}}^{2}}^{ \substack{\text{\tiny Sobolev}\\ \text{\tiny inequalities}}}\overbrace{\|\Lambda \partial_{\alpha}^{4} x\|_{L^{2}}^{2}}^{\text{Control of $\|x\|_{H^{5}}$}}\\
& \leq CP(E(t))
\end{align*}
We can use that
$$ \left|\frac{1}{|\Delta_{\beta}z(\alpha)|^{2}}
- \frac{1}{|z_{\alpha}(\alpha)|^{2}\beta^{2}} + \frac{z_{\alpha} \cdot z_{\alpha \alpha}}{|z_{\alpha}|^{4} \beta}\right| \leq \|z\|_{C^{2}}^{k} \frac{1}{|\beta|^{1/2}}\|z\|_{C^{2+\frac{1}{2}}}\|F(z)\|_{L^{\infty}}^{k}
$$
and that
\begin{align*}
& \left|\frac{1}{|\Delta_{\beta}z(\alpha)|^{2}}
- \frac{1}{|z_{\alpha}(\alpha)|^{2}\beta^{2}} + \frac{z_{\alpha} \cdot z_{\alpha \alpha}}{|z_{\alpha}|^{4} \beta}
- \left(\frac{1}{|\Delta_{\beta}x(\alpha)|^{2}}
- \frac{1}{|x_{\alpha}(\alpha)|^{2}\beta^{2}} + \frac{x_{\alpha} \cdot x_{\alpha \alpha}}{|x_{\alpha}|^{4} \beta}\right)\right| \\
& \leq \|z\|_{C^{2}}^{k} \|x\|_{C^{2}}^{k}\frac{1}{|\beta|^{1/2}}\|D\|_{C^{2+\frac{1}{2}}}\|F(z)\|_{L^{\infty}}^{k}\|F(x)\|_{L^{\infty}}^{k}
\end{align*}
to find
\begin{align*}
I_{1,2,2}^{2} &\leq \frac{1}{8\pi^{2}}\int_{-\pi}^{\pi}\frac{Q_{z}^{2}}{|z_{\alpha}|^{2}}\sigma_{z}|\partial_{\alpha}^{4}D|^2 d\alpha
\\&+ C\|Q_{z}\|_{L^{\infty}}^{6} \|\sigma_{z}\|_{L^{\infty}}\|z\|_{C^{2}}^{k} \|x\|_{C^{2}}^{k}\|D\|_{C^{2+\frac{1}{2}}}\|\partial_{\alpha}^{4} x\|_{L^{2}}^{2}\|F(z)\|_{L^{\infty}}^{k}\|F(x)\|_{L^{\infty}}^{k}
\end{align*}
We have used that
$$ \left(\int_{-\pi}^{\pi}d\alpha\left(\int_{-\pi}^{\pi}\frac{\partial_{\alpha}^{4}x(\alpha - \beta)}{|\beta|^{1/2}}d\beta\right)^{2}\right)^{1/2} \leq C\|\partial_{\alpha}^{4}x\|_{L^{2}}.$$
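The last displayed inequality is simply Minkowski's integral inequality (equivalently, Young's inequality $L^{1}\ast L^{2}\to L^{2}$) for the kernel $|\beta|^{-1/2}$, which is integrable on $(-\pi,\pi)$; we record the one-line proof:
\begin{align*}
\left\|\int_{-\pi}^{\pi}\frac{\partial_{\alpha}^{4}x(\cdot - \beta)}{|\beta|^{1/2}}\,d\beta\right\|_{L^{2}} \leq \int_{-\pi}^{\pi}\frac{\|\partial_{\alpha}^{4}x\|_{L^{2}}}{|\beta|^{1/2}}\,d\beta = 4\sqrt{\pi}\,\|\partial_{\alpha}^{4}x\|_{L^{2}}.
\end{align*}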
We further split $I_{1,2,2}^{1} = I_{1,2,2}^{1,1} + I_{1,2,2}^{1,2}$:
\begin{align*}
I_{1,2,2}^{1,1} & = \frac{1}{2\pi}\int_{-\pi}^{\pi}\frac{Q_{z}^{4}}{|z_{\alpha}|^{2}}\sigma_{z}\partial_{\alpha}^{4}D \int_{-\pi}^{\pi}\Delta_{\beta}\partial_{\alpha}^{4}x^{\perp}(\alpha) \left(\frac{1}{|\Delta_{\beta}z(\alpha)|^{2}}
- \frac{1}{|\Delta_{\beta}x(\alpha)|^{2}}\right)\\
&\times \left(\omega(\alpha - \beta) - \omega(\alpha) + \omega_{\alpha}(\alpha) \beta\right)d\beta d\alpha\\
I_{1,2,2}^{1,2} & = -\frac{1}{2\pi}\int_{-\pi}^{\pi}\frac{Q_{z}^{4}}{|z_{\alpha}|^{2}}\sigma_{z}\partial_{\alpha}^{4}D\, \omega_{\alpha}(\alpha) \int_{-\pi}^{\pi}\Delta_{\beta}\partial_{\alpha}^{4}x^{\perp}(\alpha)\left(\frac{\beta}{|\Delta_{\beta}z(\alpha)|^{2}}
- \frac{\beta}{|\Delta_{\beta}x(\alpha)|^{2}}\right)d\beta d\alpha\\
\end{align*}
Inside the $\beta$ integral in $I_{1,2,2}^{1,1}$ there is no principal value, so the appropriate estimate follows:
$$ I_{1,2,2}^{1,1} \leq CP(E(t))$$
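Indeed, the second-order Taylor remainder of $\omega$ compensates the quadratic singularity of the kernels: for $\omega \in C^{2}$,
$$ \left|\omega(\alpha - \beta) - \omega(\alpha) + \omega_{\alpha}(\alpha)\beta\right| \leq \frac{\beta^{2}}{2}\,\|\partial_{\alpha}^{2}\omega\|_{L^{\infty}},$$
while each of the two kernel factors is of size $|\beta|^{-2}$ near $\beta = 0$, so the $\beta$ integrand in $I_{1,2,2}^{1,1}$ is absolutely integrable.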
For $I_{1,2,2}^{1,2}$ we proceed as for $I_{1,2,2}^{2}$: we decompose by adding and subtracting $\displaystyle \frac{1}{|z_{\alpha}|^{2}\beta} - \frac{1}{|x_{\alpha}|^{2}\beta}$. Thus, we are done with $I_{1,2,2}$. We decompose $I_{1,2,3} = I_{1,2,3}^{1} + I_{1,2,3}^{2} + I_{1,2,3}^{3}$.
\begin{align*}
I_{1,2,3}^{1} & = \int_{-\pi}^{\pi}\frac{Q_{z}^{2}}{|z_{\alpha}|^{2}}\sigma_{z}\partial_{\alpha}^{4}D Q_{z}^2\frac{1}{2\pi}\int_{-\pi}^{\pi}\Delta_{\beta}\partial_{\alpha}^{4}x^{\perp}(\alpha)\\
&\times\left(\frac{1}{|\Delta_{\beta}x(\alpha)|^{2}}-\frac{1}{|x_\alpha|^{2}\beta^{2}}+\frac{x_{\alpha} \cdot x_{\alpha \alpha}}{|x_{\alpha}|^{4}\beta}\right)d(\alpha-\beta)d\beta d\alpha \\
I_{1,2,3}^{2} & = - \int_{-\pi}^{\pi}\frac{Q_{z}^{2}}{|z_{\alpha}|^{2}}\sigma_{z}\partial_{\alpha}^{4}D Q_{z}^2\frac{\partial_{\alpha}^{4} x^{\perp}(\alpha)}{|x_{\alpha}|^{2}}\frac{1}{2\pi}\int_{-\pi}^{\pi}\frac{\Delta_{\beta}d(\alpha)}{\beta^{2}}d\beta d\alpha \\
I_{1,2,3}^{3} & = \int_{-\pi}^{\pi}\frac{Q_{z}^{2}}{|z_{\alpha}|^{2}}\sigma_{z}\partial_{\alpha}^{4}D Q_{z}^2\frac{1}{|x_{\alpha}|^{2}}\frac{1}{2\pi}\int_{-\pi}^{\pi}\frac{\Delta_{\beta}(d\,\partial_{\alpha}^{4}x^{\perp})(\alpha)}{\beta^{2}}d\beta d\alpha
\end{align*}
It is easy to obtain:
\begin{align*}
I_{1,2,3}^{1} & \leq \frac{1}{4\pi}\int_{-\pi}^{\pi}\frac{Q_{z}^{2}}{|z_{\alpha}|^{2}}\sigma_{z}|\partial_{\alpha}^{4}D|^{2}d\alpha
+ C\|Q_{z}\|_{L^{\infty}}^{6} \|\sigma_{z}\|_{L^{\infty}}\|d\|_{L^{\infty}} \|x\|_{C^{2}}^{k}\|F(x)\|_{L^{\infty}}^{k}\|x\|_{C^{2,\delta}}\|\partial_{\alpha}^{4} x\|_{L^{2}}^{2}\\& \leq CP(E(t)) \\
I_{1,2,3}^{2}& \leq CP(E(t)) \text{ analogously, since } \|\Lambda d\|_{L^{\infty}} \leq C\|d\|_{H^{2}}\\
I_{1,2,3}^{3}& \leq CP(E(t)) \text{ using } \|\Lambda (d\, \partial_{\alpha}^{4} x^{\perp})\|_{L^{2}} \leq C\|d\|_{H^{2}}\|x\|_{H^{5}}.
\end{align*}
We are done with $I_{1,2,3}$. To deal with $I_{1,2,4}$ we use that
$$ Q_{z}^{2} - Q_{x}^{2} = 2Q((1-t)z + tx)\nabla Q((1-t)z + tx) \cdot D(\alpha) \text{ for some } t \in (0,1).$$
Then it is easy to find
$$ I_{1,2,4} \leq CP(E(t)),$$
and we are done with $I_{1,2}$. We decompose $I_{1,3}$ as
\begin{align*}
I_{1,3} & = I_{1,3,1} + I_{1,3,2} +I_{1,3,3} +I_{1,3,4} +I_{1,3,5} +I_{1,3,6} \\
I_{1,3,1} & = \int_{-\pi}^{\pi}\frac{Q_{z}^{2}}{|z_{\alpha}|^{2}}\sigma_{z}\partial_{\alpha}^{4}D Q_{z}^2\frac{-1}{\pi}\int_{-\pi}^{\pi}\frac{\Delta_{\beta}z^{\perp}(\alpha)}{|\Delta_{\beta}z(\alpha)|^4}\Delta_{\beta} z(\alpha) \cdot \Delta_{\beta} \partial_{\alpha}^{4}D(\alpha) \omega(\alpha-\beta) d\beta d\alpha\\
I_{1,3,2} & = \int_{-\pi}^{\pi}\frac{Q_{z}^{2}}{|z_{\alpha}|^{2}}\sigma_{z}\partial_{\alpha}^{4}D Q_{z}^2\frac{-1}{\pi}\int_{-\pi}^{\pi}\frac{\Delta_{\beta}z^{\perp}(\alpha)}{|\Delta_{\beta}z(\alpha)|^4}\Delta_{\beta} z(\alpha) \cdot \Delta_{\beta} \partial_{\alpha}^{4}x(\alpha) d(\alpha-\beta) d\beta d\alpha\\
I_{1,3,3} & = \int_{-\pi}^{\pi}\frac{Q_{z}^{2}}{|z_{\alpha}|^{2}}\sigma_{z}\partial_{\alpha}^{4}D Q_{z}^2\frac{-1}{\pi}\int_{-\pi}^{\pi}\frac{\Delta_{\beta}z^{\perp}(\alpha)}{|\Delta_{\beta}z(\alpha)|^4}\Delta_{\beta} D \cdot \Delta_{\beta} \partial_{\alpha}^{4}x(\alpha) \gamma(\alpha-\beta) d\beta d\alpha\\
I_{1,3,4} & = \int_{-\pi}^{\pi}\frac{Q_{z}^{2}}{|z_{\alpha}|^{2}}\sigma_{z}\partial_{\alpha}^{4}D Q_{z}^2\frac{-1}{\pi}\int_{-\pi}^{\pi}\frac{\Delta_{\beta}D^{\perp}(\alpha)}{|\Delta_{\beta}z(\alpha)|^4}\Delta_{\beta} x(\alpha) \cdot \Delta_{\beta} \partial_{\alpha}^{4}x(\alpha) \gamma(\alpha-\beta) d\beta d\alpha\\
I_{1,3,5} & = \int_{-\pi}^{\pi}\frac{Q_{z}^{2}}{|z_{\alpha}|^{2}}\sigma_{z}\partial_{\alpha}^{4}D Q_{z}^2\frac{-1}{\pi}\int_{-\pi}^{\pi}
\Delta_{\beta} x^{\perp}(\alpha)\Delta_{\beta} x(\alpha) \cdot \Delta_{\beta} \partial_{\alpha}^{4}x(\alpha) \gamma(\alpha-\beta)\\
&\times \left(
\frac{1}{|\Delta_{\beta}z(\alpha)|^4} - \frac{1}{|\Delta_{\beta}x(\alpha)|^4}\right)d\beta d\alpha\\
I_{1,3,6} & = \int_{-\pi}^{\pi}\frac{Q_{z}^{2}}{|z_{\alpha}|^{2}}\sigma_{z}\partial_{\alpha}^{4}D (Q_{z}^2-Q_{x}^{2})\frac{-1}{\pi}\int_{-\pi}^{\pi}
\frac{\Delta_{\beta}x^{\perp}(\alpha)}{|\Delta_{\beta}x(\alpha)|^4}
\Delta_{\beta} x(\alpha) \cdot \Delta_{\beta} \partial_{\alpha}^{4}x(\alpha) \gamma(\alpha-\beta)d\beta d\alpha\\
\end{align*}
The terms $I_{1,3,j}$, $j = 2,3,4,5,6$, are easier to deal with (they can be handled as before). Therefore we focus on $I_{1,3,1}$.
\begin{align*}
I_{1,3,1} & = I_{1,3,1}^{1} + I_{1,3,1}^{2} + I_{1,3,1}^{3} \\
I_{1,3,1}^{1} & = \int_{-\pi}^{\pi}\frac{Q_{z}^{2}}{|z_{\alpha}|^{2}}\sigma_{z}\partial_{\alpha}^{4}D Q_{z}^2\frac{-1}{\pi}\int_{-\pi}^{\pi}\left(\frac{\Delta_{\beta}z^{\perp}(\alpha)}{|\Delta_{\beta}z(\alpha)|^4}\Delta_{\beta} z(\alpha) \omega(\alpha-\beta)
\right.\\
&\left.- \frac{\partial_{\alpha} z^{\perp}(\alpha)}{|\partial_{\alpha} z(\alpha)|^4}\partial_{\alpha} z(\alpha - \beta)\omega(\alpha)\frac{1}{\beta^{2}}
\right) \cdot \Delta_{\beta} \partial_{\alpha}^{4}D(\alpha) d\beta d\alpha\\
I_{1,3,1}^{2} & = \int_{-\pi}^{\pi}\frac{Q_{z}^{2}}{|z_{\alpha}|^{2}}\sigma_{z}\partial_{\alpha}^{4}D Q_{z}^2\frac{-1}{\pi}
\frac{\partial_{\alpha} z^{\perp}(\alpha)}{|\partial_{\alpha} z(\alpha)|^4}\omega(\alpha) \partial_{\alpha}^{4}D(\alpha) \cdot
\int_{-\pi}^{\pi} \frac{\partial_{\alpha} z(\alpha - \beta) - \partial_{\alpha} z(\alpha)}{\beta^{2}}d\beta d\alpha\\
I_{1,3,1}^{3} & = \int_{-\pi}^{\pi}\frac{Q_{z}^{2}}{|z_{\alpha}|^{2}}\sigma_{z}\partial_{\alpha}^{4}D Q_{z}^2\frac{-1}{\pi}
\frac{\partial_{\alpha} z^{\perp}(\alpha)}{|\partial_{\alpha} z(\alpha)|^4}\omega(\alpha)
\int_{-\pi}^{\pi} \frac{\Delta_{\beta}(\partial_{\alpha} z \cdot \partial_{\alpha}^{4} D)(\alpha)}{\beta^{2}}d\beta d\alpha\\
\end{align*}
In $I_{1,3,1}^{1}$ we find a commutator, which can be handled as before. It is also easy to estimate $I_{1,3,1}^{2}$.
To deal with $I_{1,3,1}^{3}$ we recall that
\begin{align*}
\partial_{\alpha} z(\alpha) \cdot \partial_{\alpha}^{4}D(\alpha) & = \partial_{\alpha} z(\alpha) \cdot \partial_{\alpha}^{4} z(\alpha) - \partial_{\alpha} x(\alpha) \cdot \partial_{\alpha}^{4} x(\alpha) - \partial_{\alpha} D(\alpha) \cdot \partial_{\alpha}^{4} x(\alpha)\\
& = -3\partial_{\alpha}^{2}z(\alpha) \cdot \partial_{\alpha}^{3} z(\alpha) + 3 \partial_{\alpha}^{2} x(\alpha) \cdot \partial_{\alpha}^{3} x(\alpha) - \partial_{\alpha} D(\alpha) \cdot \partial_{\alpha}^{4} x(\alpha)
\end{align*}
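The second equality uses the identity $z_{\alpha}\cdot\partial_{\alpha}^{4} z = -3\,\partial_{\alpha}^{2}z\cdot\partial_{\alpha}^{3}z$ (and its analogue for $x$), which follows by differentiating $|z_{\alpha}|^{2}$ three times under the parametrization used here, in which $|z_{\alpha}|^{2}$ depends only on $t$ (as was already used when $|z_{\alpha}|^{-2}$ was pulled out of the $\alpha$ integral):
\begin{align*}
0 = \partial_{\alpha}^{3}\left(z_{\alpha}\cdot z_{\alpha}\right) = 2\left(z_{\alpha}\cdot\partial_{\alpha}^{4} z + 3\,\partial_{\alpha}^{2}z\cdot\partial_{\alpha}^{3}z\right).
\end{align*}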
That allows us to decompose further
\begin{align*}
\partial_{\alpha} z(\alpha) \cdot \partial_{\alpha}^{4} D(\alpha) & = -3\partial_{\alpha}^{2}z(\alpha) \cdot \partial_{\alpha}^{3} D(\alpha) - 3 \partial_{\alpha}^{2} D(\alpha) \cdot \partial_{\alpha}^{3} x(\alpha) - \partial_{\alpha} D(\alpha) \cdot \partial_{\alpha}^{4} x(\alpha)
\end{align*}
which yields
\begin{align*}
I_{1,3,1}^{3} & = I_{1,3,1}^{3,1} + I_{1,3,1}^{3,2} + I_{1,3,1}^{3,3} \\
I_{1,3,1}^{3,1} & = \frac{3}{\pi}\int_{-\pi}^{\pi}\frac{Q_{z}^{4}}{|z_{\alpha}|^{2}}\sigma_{z}\partial_{\alpha}^{4}D
\cdot
\frac{\partial_{\alpha} z^{\perp}(\alpha)}{|\partial_{\alpha} z(\alpha)|^4}\omega(\alpha)
\int_{-\pi}^{\pi} \frac{\Delta_{\beta}(\partial_{\alpha}^2 z \cdot \partial_{\alpha}^{3} D)(\alpha)}{\beta^{2}}d\beta d\alpha\\
I_{1,3,1}^{3,2} & = \frac{3}{\pi}\int_{-\pi}^{\pi}\frac{Q_{z}^{4}}{|z_{\alpha}|^{2}}\sigma_{z}\partial_{\alpha}^{4}D
\cdot
\frac{\partial_{\alpha} z^{\perp}(\alpha)}{|\partial_{\alpha} z(\alpha)|^4}\omega(\alpha)
\int_{-\pi}^{\pi} \frac{\Delta_{\beta}(\partial_{\alpha}^2 D \cdot \partial_{\alpha}^{3} x)(\alpha)}{\beta^{2}}d\beta d\alpha\\
I_{1,3,1}^{3,3} & = \frac{1}{\pi}\int_{-\pi}^{\pi}\frac{Q_{z}^{4}}{|z_{\alpha}|^{2}}\sigma_{z}\partial_{\alpha}^{4}D
\cdot
\frac{\partial_{\alpha} z^{\perp}(\alpha)}{|\partial_{\alpha} z(\alpha)|^4}\omega(\alpha)
\int_{-\pi}^{\pi} \frac{\Delta_{\beta}(\partial_{\alpha} D \cdot \partial_{\alpha}^{4} x)(\alpha)}{\beta^{2}}d\beta d\alpha\\
\end{align*}
We use that
\begin{align*}
\left\|\int_{-\pi}^{\pi} \frac{\Delta_{\beta}(\partial_{\alpha}^2 z \cdot \partial_{\alpha}^{3} D)(\alpha)}{\beta^{2}}d\beta\right\|_{L^{2}}^{2} \leq C\left\|\partial_{\alpha} (\partial_{\alpha}^{2} z \cdot \partial_{\alpha}^{3} D)\right\|_{L^{2}}^{2} \leq CP(E(t))
\end{align*}
to control $I_{1,3,1}^{3,1}$. $I_{1,3,1}^{3,2}$ follows similarly. We control $I_{1,3,1}^{3,3}$ using that
\begin{align*}
\left\|\int_{-\pi}^{\pi} \frac{\Delta_{\beta}(\partial_{\alpha} D \cdot \partial_{\alpha}^{4} x)(\alpha)}{\beta^{2}}d\beta\right\|_{L^{2}}^{2} &\leq
C\left\|\partial_{\alpha} (\partial_{\alpha} D \cdot \partial_{\alpha}^{4} x)\right\|_{L^{2}}^{2} \\& \leq C\left(\|\partial_{\alpha} D\|_{L^{\infty}}^{2}\|\partial_{\alpha}^{5} x\|_{L^{2}}^{2} + \|\partial_{\alpha}^{2} D\|_{L^{\infty}}^{2}\|\partial_{\alpha}^{4} x\|_{L^{2}}^{2}\right)
\leq CP(E(t))
\end{align*}
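Both $L^{2}$ bounds rest on standard facts about $\Lambda$ on the torus, which we record as a sketch: with $\Delta_{\beta}f(\alpha) = f(\alpha) - f(\alpha - \beta)$,
\begin{align*}
\Lambda f(\alpha) = \frac{1}{4\pi}\int_{-\pi}^{\pi}\frac{\Delta_{\beta}f(\alpha)}{\sin^{2}(\beta/2)}\,d\beta, \qquad \widehat{\Lambda f}(k) = |k|\,\hat{f}(k),
\end{align*}
so Plancherel gives $\|\Lambda f\|_{L^{2}} = \|\partial_{\alpha} f\|_{L^{2}}$, while $\beta^{-2} - \left(4\sin^{2}(\beta/2)\right)^{-1}$ is bounded on $(-\pi,\pi)$, so replacing one kernel by the other produces an error controlled by $C\|f\|_{L^{2}}$.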
This allows us to finish the estimates for $I_{1,3,1}^{3,3}$ and $I_{1,3,1}^{3}$. We are done with $I_{1,3,1}$ and $I_{1,3}$. We now decompose $I_{1,4}$.
\begin{align*}
I_{1,4} &= I_{1,4,1} + I_{1,4,2} + I_{1,4,3} + I_{1,4,4} \\
I_{1,4,1} &= \int_{-\pi}^{\pi}\frac{Q_{z}^{2}}{|z_{\alpha}|^{2}}\sigma_{z}\partial_{\alpha}^{4}D \cdot Q_{z}^{2} BR(z,\partial_{\alpha}^{4} d)\,d\alpha\\
I_{1,4,2} &= \int_{-\pi}^{\pi}\frac{Q_{z}^{2}}{|z_{\alpha}|^{2}}\sigma_{z}\partial_{\alpha}^{4}D \cdot Q_{z}^{2} \frac{1}{2\pi}\int_{-\pi}^{\pi}\frac{\Delta_{\beta}D^{\perp}(\alpha)}{|\Delta_{\beta}z(\alpha)|^{2}}\partial_{\alpha}^{4}\gamma(\alpha - \beta) d\beta d\alpha\\
I_{1,4,3} &= \int_{-\pi}^{\pi}\frac{Q_{z}^{2}}{|z_{\alpha}|^{2}}\sigma_{z}\partial_{\alpha}^{4}D \cdot Q_{z}^{2} \frac{1}{2\pi}\int_{-\pi}^{\pi}\Delta_{\beta}x^{\perp}(\alpha) \partial_{\alpha}^{4}\gamma(\alpha - \beta) \left(\frac{1}{|\Delta_{\beta}z(\alpha)|^{2}}
- \frac{1}{|\Delta_{\beta}x(\alpha)|^{2}}\right) d\beta d\alpha\\
I_{1,4,4} &= \int_{-\pi}^{\pi}\frac{Q_{z}^{2}}{|z_{\alpha}|^{2}}\sigma_{z}\partial_{\alpha}^{4}D \cdot (Q_{z}^{2} -Q_{x}^{2})BR(x,\partial_{\alpha}^{4}\gamma)d\alpha
\end{align*}
We control $I_{1,4,2}$, $I_{1,4,3}$ and $I_{1,4,4}$ as before. We further split
\begin{align*}
I_{1,4,1} &= I_{1,4,1}^{1} + I_{1,4,1}^{2} + I_{1,4,1}^{3} + I_{1,4,1}^{4} \\
I_{1,4,1}^{1} & = \int_{-\pi}^{\pi}\frac{Q_{z}^{2}}{|z_{\alpha}|^{2}}\sigma_{z}\partial_{\alpha}^{4}D \cdot Q_{z}^{2}\left(BR(z,\partial_{\alpha}^{4} d) - \frac{1}{2}\frac{\partial_{\alpha} z^{\perp}(\alpha)}{|\partial_{\alpha} z(\alpha)|^{2}}H(\partial_{\alpha}^{4} d)\right)d\alpha\\
I_{1,4,1}^{2} & = \int_{-\pi}^{\pi}\frac{Q_{z}^{2}}{|z_{\alpha}|^{2}}\sigma_{z}\partial_{\alpha}^{4}D \cdot \frac{\partial_{\alpha} z^{\perp}(\alpha)}{|z_{\alpha}|}\left(\frac{Q_z^2}{2|z_{\alpha}|}H(\partial_{\alpha}^{4} d) - H\left(\frac{Q_{z}^{2}}{2|z_{\alpha}|}\partial_{\alpha}^{4} d\right)\right)d\alpha \\
I_{1,4,1}^{3} & = -\int_{-\pi}^{\pi}\frac{Q_{z}^{2}}{|z_{\alpha}|^{2}}\sigma_{z}\partial_{\alpha}^{4}D \cdot \frac{\partial_{\alpha} z^{\perp}(\alpha)}{|z_{\alpha}|}H\left(\left(\frac{Q_{z}^{2}}{2|z_{\alpha}|} - \frac{Q_{x}^2}{2|x_{\alpha}|}\right)\partial_{\alpha}^{4}\gamma\right)d\alpha \\
I_{1,4,1}^{4} & = \int_{-\pi}^{\pi}\frac{Q_{z}^{2}}{|z_{\alpha}|^{2}}\sigma_{z}\partial_{\alpha}^{4}D \cdot \frac{\partial_{\alpha} z^{\perp}(\alpha)}{|z_{\alpha}|}H\left(\frac{Q_{z}^{2}\partial_{\alpha}^{4}\omega}{2|z_{\alpha}|} - \frac{Q_{x}^2\partial_{\alpha}^{4}\gamma}{2|x_{\alpha}|}\right)d\alpha \\
\end{align*}
There are commutators in $I_{1,4,1}^{1}$ and $I_{1,4,1}^{2}$, so they are easy to estimate. To get the estimate for $I_{1,4,1}^{3}$ we bound
\begin{align*}
\left\|H\left(\left(\frac{Q_{z}^{2}}{2|z_{\alpha}|} - \frac{Q_{x}^{2}}{2|x_{\alpha}|}\right)\partial_{\alpha}^{4}\gamma\right)\right\|_{L^{2}}^{2} \leq \underbrace{\left\|\frac{Q_{z}^{2}}{2|z_{\alpha}|} - \frac{Q_{x}^{2}}{2|x_{\alpha}|}\right\|_{L^{\infty}}^{2}}_{\text{at the level of $D(\alpha)$}}\left\|\partial_{\alpha}^{4} \gamma\right\|_{L^{2}}^{2} \leq CE^2(t)
\end{align*}
We now recall the following formulas:
\begin{align*}
\varphi & = \frac{Q_{z}^{2}\omega}{2|z_{\alpha}|} - c|z_{\alpha}| \\
\psi & = \frac{Q_{x}^{2}\gamma}{2|x_{\alpha}|} - b_{s}|x_{\alpha}| \\
\end{align*}
These yield
\begin{align*}
I_{1,4,1}^{4} = S + I_{1,4,1}^{4,1} + I_{1,4,1}^{4,2} + \text{ l.o.t.},
\end{align*}
where
\begin{align*}
S &= \int_{-\pi}^{\pi}Q_{z}^{2}\sigma_{z} \partial_{\alpha}^{4}D \cdot \frac{\partial_{\alpha} z^{\perp}(\alpha)}{|z_{\alpha}|^3}H\left(\partial_{\alpha}^{4} (\varphi - \psi)\right)(\alpha) d\alpha\\
I_{1,4,1}^{4,1} &= - \int_{-\pi}^{\pi}\frac{Q_{z}^{2}}{|z_{\alpha}|^{2}}\sigma_{z}\partial_{\alpha}^{4}D \cdot \frac{\partial_{\alpha} z^{\perp}(\alpha)}{|z_{\alpha}|}H\left(\frac{Q_{z}\nabla Q(z) \cdot \partial_{\alpha}^{4} z}{|z_{\alpha}|}\omega - \frac{Q_{x} \nabla Q(x) \cdot \partial_{\alpha}^{4}x}{|x_{\alpha}|}\gamma\right)d\alpha \\
I_{1,4,1}^{4,2} &= \int_{-\pi}^{\pi}\frac{Q_{z}^{2}}{|z_{\alpha}|^{2}}\sigma_{z}\partial_{\alpha}^{4}D \cdot \frac{\partial_{\alpha} z^{\perp}(\alpha)}{|z_{\alpha}|}H\left(\partial_{\alpha}^{4}\left(c|z_{\alpha}| - b_{s}|x_{\alpha}|\right)\right)d\alpha \\
\end{align*}
$S$ is going to appear later with a negative sign and therefore cancel out. $I_{1,4,1}^{4,1}$ can be bounded as before since it is of low order.
We show how to deal with $I_{1,4,1}^{4,2}$. We compute
$$\partial_{\alpha}^{4}(c|z_{\alpha}|) = - \partial_{\alpha}^{3}\left((Q_{z}^{2} BR)_{\alpha} \cdot \frac{z_{\alpha}}{|z_{\alpha}|}\right); \quad \partial_{\alpha}^{4}(b_s|x_{\alpha}|) = -\partial_{\alpha}^{3}\left((Q^{2}_{x}BR)_{\alpha} \cdot \frac{x_{\alpha}}{|x_{\alpha}|}\right)$$
Then, in $\partial_{\alpha}^{4}(c|z_{\alpha}|) - \partial_{\alpha}^{4}(b_s|x_{\alpha}|)$ we consider the most singular terms
\begin{align*}
\partial_{\alpha}^{4}(c|z_{\alpha}|) - \partial_{\alpha}^{4}(b_s|x_{\alpha}|) &= J_1 + J_2 + J_3 + J_4 + J_5 + \text{ l.o.t.} \\
J_1 & = -2Q_{z} \nabla Q(z) \cdot \partial_{\alpha}^{4} z\, BR(z,\omega) \cdot \frac{z_{\alpha}}{|z_{\alpha}|}
+ 2Q_{x} \nabla Q(x) \cdot \partial_{\alpha}^{4} x\, BR(x,\gamma) \cdot \frac{x_{\alpha}}{|x_{\alpha}|} \\
J_2 & = - (Q^{2}_{z}BR)_{\alpha} \cdot \frac{\partial_{\alpha}^{4} z}{|z_{\alpha}|} + (Q_{x}^{2} BR)_{\alpha} \cdot \frac{\partial_{\alpha}^{4} x}{|x_{\alpha}|} \\
J_3 & = Q^{2}_{z} \frac{1}{\pi}\int_{-\pi}^{\pi}\frac{\Delta_{\beta}z^{\perp}(\alpha)}{|\Delta_{\beta} z(\alpha)|^{4}} \cdot \frac{z_{\alpha}(\alpha)}{|z_{\alpha}|}\, \Delta_{\beta} z(\alpha) \cdot \Delta_{\beta} \partial_{\alpha}^{4} z(\alpha) \omega(\alpha - \beta) d\beta \\
& - Q^{2}_{x} \frac{1}{\pi}\int_{-\pi}^{\pi}\frac{\Delta_{\beta}x^{\perp}(\alpha)}{|\Delta_{\beta} x(\alpha)|^{4}} \cdot \frac{x_{\alpha}(\alpha)}{|x_{\alpha}|}\, \Delta_{\beta} x(\alpha) \cdot \Delta_{\beta} \partial_{\alpha}^{4} x(\alpha) \gamma(\alpha - \beta) d\beta \\
J_4 & = -Q_{z}^{2} BR(z,\partial_{\alpha}^{4}\omega) \cdot \frac{z_{\alpha}}{|z_{\alpha}|} + Q_{x}^{2} BR(x,\partial_{\alpha}^{4}\gamma) \cdot \frac{x_{\alpha}}{|x_{\alpha}|}
\end{align*}
$J_5$ will be given later. In $J_1$ and $J_2$ we find fourth-order derivatives of $z$ and $x$, so they are fine. In $J_3$ we find inside the integrals
\begin{align}
\label{pacostar1}
\Delta_{\beta} z^{\perp}(\alpha) \cdot z_{\alpha}(\alpha) & = (z(\alpha) - z(\alpha - \beta) - \beta z_{\alpha}(\alpha))^{\perp} \cdot z_{\alpha}(\alpha) \\
\label{pacostar2}
\Delta_{\beta} x^{\perp}(\alpha) \cdot x_{\alpha}(\alpha) & = (x(\alpha) - x(\alpha - \beta) - \beta x_{\alpha}(\alpha))^{\perp} \cdot x_{\alpha}(\alpha)
\end{align}
This implies that we find ``Hilbert'' transforms applied to four derivatives of $x$ and $z$. We are done with $J_3$.
In $J_4$ we again find the identities \eqref{pacostar1} and \eqref{pacostar2} inside the integrals, so it is easy to check that we have kernels whose main singularity is homogeneous of degree $0$ applied to $\partial_{\alpha}^{4} \omega$ and $\partial_{\alpha}^{4} \gamma$. This implies that we have a Hilbert transform applied to $\partial_{\alpha}^{3} \omega$ and $\partial_{\alpha}^{3}\gamma$, so we are done with $J_4$. The most dangerous term is $J_5$, which is given by
\begin{align*}
J_5 & = - Q_{z}^{2} \frac{1}{2\pi}\int_{-\pi}^{\pi} \frac{\Delta_{\beta}\partial_{\alpha}^{4} z^{\perp}(\alpha)}{|\Delta_{\beta}z(\alpha)|^{2}}\cdot \frac{z_{\alpha}(\alpha)}{|z_{\alpha}|}\omega(\alpha - \beta) d\beta
+ Q_{x}^{2} \frac{1}{2\pi}\int_{-\pi}^{\pi} \frac{\Delta_{\beta}\partial_{\alpha}^{4} x^{\perp}(\alpha)}{|\Delta_{\beta}x(\alpha)|^{2}}\cdot \frac{x_{\alpha}(\alpha)}{|x_{\alpha}|}\gamma(\alpha - \beta) d\beta
\end{align*}
We split further
\begin{align*}
J_5 & = J_{5,1} + J_{5,2} \\
J_{5,1} & = - Q_{z}^{2} \frac{1}{2\pi}\frac{z_{\alpha}(\alpha)}{|z_{\alpha}|}\cdot\int_{-\pi}^{\pi} \left(
\frac{\omega(\alpha - \beta)}{|\Delta_{\beta}z(\alpha)|^{2}} - \frac{\omega(\alpha)}{4|z_{\alpha}|^{2}\sin^{2}\left(\beta/2\right)}
\right)\Delta_{\beta}\partial_{\alpha}^{4} z^{\perp}(\alpha)d\beta\\
& + Q_{x}^{2} \frac{1}{2\pi}\frac{x_{\alpha}(\alpha)}{|x_{\alpha}|}\cdot \int_{-\pi}^{\pi} \left(
\frac{\gamma(\alpha - \beta)}{|\Delta_{\beta}x(\alpha)|^{2}} - \frac{\gamma(\alpha)}{4|x_{\alpha}|^{2}\sin^{2}\left(\beta/2\right)}
\right)\Delta_{\beta}\partial_{\alpha}^{4} x^{\perp}(\alpha)d\beta \\
J_{5,2} & = - Q_{z}^{2}\frac{1}{2} \frac{z_{\alpha}(\alpha)}{|z_{\alpha}|^{3}} \omega(\alpha)\cdot \Lambda(\partial_{\alpha}^{4}z^{\perp})(\alpha)
+ Q_{x}^{2}\frac{1}{2} \frac{x_{\alpha}(\alpha)}{|x_{\alpha}|^{3}} \gamma(\alpha)\cdot \Lambda(\partial_{\alpha}^{4}x^{\perp})(\alpha)
\end{align*}
In $J_{5,1}$ we find a Hilbert transform applied to $\partial_{\alpha}^{4} z^{\perp}$ and $\partial_{\alpha}^{4} x^{\perp}$, so it is fine. We split further:
\begin{align*}
J_{5,2} & = J_{5,2,1} + J_{5,2,2} + J_{5,2,3} \\
J_{5,2,1} & = \left(Q_{x}^{2}\frac{1}{2} \frac{x_{\alpha}(\alpha)}{|x_{\alpha}|^{3}} \gamma(\alpha) - Q_{z}^{2}\frac{1}{2} \frac{z_{\alpha}(\alpha)}{|z_{\alpha}|^{3}} \omega(\alpha)\right)\cdot \Lambda(\partial_{\alpha}^{4}x^{\perp})(\alpha)\\
J_{5,2,2} & = \Lambda\left(Q_{z}^{2}\frac{1}{2} \frac{z_{\alpha}(\alpha)}{|z_{\alpha}|^{3}} \omega(\alpha)\cdot\partial_{\alpha}^{4} D^{\perp}\right)-
Q_{z}^{2}\frac{1}{2} \frac{z_{\alpha}(\alpha)}{|z_{\alpha}|^{3}} \omega(\alpha)\cdot\Lambda(\partial_{\alpha}^{4} D^{\perp})\\
J_{5,2,3} & = - \Lambda\left(Q_{z}^{2}\frac{1}{2} \frac{z_{\alpha}(\alpha)}{|z_{\alpha}|^{3}} \omega(\alpha)\cdot\partial_{\alpha}^{4} D^{\perp}\right)
\end{align*}
$J_{5,2,1}$ can be estimated as before (there are more derivatives, five in total, but they fall on $x$). In $J_{5,2,2}$ we find a commutator. Finally:
\begin{align*}
I_{1,4,1}^{4,2} \leq CP(E(t)) - \int_{-\pi}^{\pi}Q_{z}^{2}\sigma_{z} \partial_{\alpha}^{4}D \cdot \frac{z_{\alpha}^{\perp}(\alpha)}{|z_{\alpha}|}H\left(\Lambda\left(Q_{z}^{2}\frac{1}{2}\frac{z_{\alpha}}{|z_{\alpha}|^3}\omega \cdot\partial_{\alpha}^{4} D^{\perp}\right)\right)d\alpha
\end{align*}
We use that $H\Lambda = - \partial_{\alpha}$ and $z_{\alpha} \cdot \partial_{\alpha}^{4} D^{\perp} = - z_{\alpha}^{\perp} \cdot \partial_{\alpha}^4 D$ to obtain:
\begin{align*}
I_{1,4,1}^{4,2} & \leq CP(E(t)) - \frac{1}{2}\int_{-\pi}^{\pi}Q_{z}^{2}\sigma_{z} \partial_{\alpha}^{4}D \cdot \frac{z_{\alpha}^{\perp}(\alpha)}{|z_{\alpha}|}\partial_{\alpha}\left(\frac{Q_{z}^{2}\omega}{|z_{\alpha}|^2}\partial_{\alpha}^{4} D\cdot \frac{z_{\alpha}^{\perp}}{|z_{\alpha}|}\right)d\alpha \\
& \leq CP(E(t)) - \underbrace{\frac{1}{2}\int_{-\pi}^{\pi}Q_{z}^{2}\sigma_{z} \partial_{\alpha}^{4}D \cdot \frac{z_{\alpha}^{\perp}(\alpha)}{|z_{\alpha}|}\,
\partial_{\alpha}^{4}D \cdot \frac{z_{\alpha}^{\perp}(\alpha)}{|z_{\alpha}|}\,\partial_{\alpha}\left(\frac{Q_{z}^{2} \omega}{|z_{\alpha}|}\right)d\alpha}_{\text{Easy to estimate by $CP(E(t))$}}\\
& - \frac{1}{2}\int_{-\pi}^{\pi}Q_{z}^{2}\sigma_{z} \frac{Q_{z}^{2}\omega}{|z_{\alpha}|^2}\underbrace{\partial_{\alpha}^{4}D \cdot \frac{z_{\alpha}^{\perp}(\alpha)}{|z_{\alpha}|}\,
\partial_{\alpha}\left(\partial_{\alpha}^{4} D\cdot \frac{z_{\alpha}^{\perp}}{|z_{\alpha}|}\right)}_{\text{Integration by parts}}d\alpha
\end{align*}
Then we are done with $I_{1,4,1}^{4,2}$, $I_{1,4,1}^{4}$, $I_{1,4,1}$, $I_{1,4}$ and $I_1$.
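The identity $H\Lambda = -\partial_{\alpha}$ used above is immediate from the Fourier symbols of the three operators (on mean-zero functions):
\begin{align*}
\widehat{Hf}(k) = -i\,\mathrm{sgn}(k)\,\hat{f}(k), \quad \widehat{\Lambda f}(k) = |k|\,\hat{f}(k)
\;\Rightarrow\; \widehat{H\Lambda f}(k) = -i\,\mathrm{sgn}(k)\,|k|\,\hat{f}(k) = -ik\,\hat{f}(k) = -\widehat{\partial_{\alpha}f}(k).
\end{align*}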
To finish with $I$ it remains to control $I_2$. We split it as:
\betagin{align*}
I_2 & = I_{2,1} + I_{2,2} + \thetaxt{ l.o.t} \\
I_{2,1} & = \int_{-\pi}^{\pi}\frac{Q_{z}^{2}}{|z_{\alpha}|^{2}}\sigmagma_{z}\partialrtial_{\alpha}^{4}D (c \partialrtial_{\alpha}^{5}z - b \partialrtial_{\alpha}^{5} x)d\alpha\\
I_{2,2} & = \int_{-\pi}^{\pi}\frac{Q_{z}^{2}}{|z_{\alpha}|^{2}}\sigmagma_{z}\partialrtial_{\alpha}^{4}D (\partialrtial_{\alpha}^{4} c z_{\alpha} - \partialrtial_{\alpha}^{4} b x_{\alpha})d\alpha\\
\etand{align*}
The low order terms are easier to deal with. We further split $I_{2,1}$.
\betagin{align*}
I_{2,1} &= I_{2,1,1} + I_{2,1,2} + I_{2,1,3} \\
I_{2,1,1} &= \int_{-\pi}^{\pi}\frac{Q_{z}^{2}}{|z_{\alpha}|^{2}}\sigmagma_{z}c \underbrace{\partialrtial_{\alpha}^{4}D \partialrtial_{\alpha}^{5}D}_{\thetaxt{Integration by parts}} d\alpha\\
I_{2,1,2} &= \int_{-\pi}^{\pi}\frac{Q_{z}^{2}}{|z_{\alpha}|^{2}}\sigmagma_{z}\partialrtial_{\alpha}^{4}D (c - b_s)\underbrace{\partialrtial_{\alpha}^{5}x}_{\thetaxt{5 derivatives, but in $x$}} d\alpha\\
I_{2,1,3} &= \underbrace{\int_{-\pi}^{\pi}\frac{Q_{z}^{2}}{|z_{\alpha}|^{2}}\sigmagma_{z}\partialrtial_{\alpha}^{4}D b_e\partialrtial_{\alpha}^{5}x}_{\thetaxt{Error term}} d\alpha\\
\etand{align*}
We find $I_{2,1} \leq CP(E(t)) + c\partialrtial_{\eta}lta(t)$. We decompose $I_{2,2}$.
\betagin{align*}
I_{2,2} &= I_{2,2,1} + I_{2,2,2} + I_{2,2,3} \\
I_{2,2,1} & = \int_{-\pi}^{\pi}\frac{Q_{z}^{2}}{|z_{\alpha}|^{2}}\sigmagma_{z}\partialrtial_{\alpha}^{4}D (\partialrtial_{\alpha}^{4} c - \partialrtial_{\alpha}^{4} b_s)\cdot z_{\alpha}d\alpha\\
I_{2,2,2} & = \int_{-\pi}^{\pi}\frac{Q_{z}^{2}}{|z_{\alpha}|^{2}}\sigmagma_{z}\partialrtial_{\alpha}^{4}D \underbrace{\partialrtial_{\alpha}^{4} b_s}_{\thetaxt{5 derivatives in $x$}} \partialrtial_{\alpha} Dd\alpha\\
I_{2,2,3} & = \underbrace{-\int_{-\pi}^{\pi}\frac{Q_{z}^{2}}{|z_{\alpha}|^{2}}\sigmagma_{z}\partialrtial_{\alpha}^{4}D \partialrtial_{\alpha}^{4}b_e x_{\alpha}d\alpha}_{\thetaxt{Error term}}\\
\etand{align*}
We deal with $I_{2,2,1}$ more carefully. We use that
\begin{align*}
\partial_{\alpha}^{4} D \cdot z_{\alpha} & = \partial_{\alpha}^{4} z \cdot z_{\alpha} - \partial_{\alpha}^{4} x \cdot x_{\alpha} - \partial_{\alpha}^{4} x \cdot D_{\alpha} \\
& = - 3 \partial_{\alpha}^{3} z \cdot \partial_{\alpha}^{2} z + 3 \partial_{\alpha}^{3} x \cdot \partial_{\alpha}^{2} x - \partial_{\alpha}^{4} x \cdot D_{\alpha} \\
& = - 3 \partial_{\alpha}^{3} D \cdot \partial_{\alpha}^{2} z - 3 \partial_{\alpha}^{3} x \cdot \partial_{\alpha}^{2} D - \partial_{\alpha}^{4} x \cdot D_{\alpha}
\end{align*}
to obtain
\begin{align*}
I_{2,2,1} & = I_{2,2,1}^{1} + I_{2,2,1}^{2} + I_{2,2,1}^{3} \\
I_{2,2,1}^{1} & = -3\int_{-\pi}^{\pi}\frac{Q_{z}^{2}}{|z_{\alpha}|^{2}}\sigma_{z}\partial_{\alpha}^{3}D \cdot \partial_{\alpha}^{2} z \partial_{\alpha}^{4}(c - b_s) d\alpha\\
I_{2,2,1}^{2} & = -3\int_{-\pi}^{\pi}\frac{Q_{z}^{2}}{|z_{\alpha}|^{2}}\sigma_{z}\partial_{\alpha}^{3}x \cdot \partial_{\alpha}^{2} D \partial_{\alpha}^{4}(c - b_s) d\alpha\\
I_{2,2,1}^{3} & = -\int_{-\pi}^{\pi}\frac{Q_{z}^{2}}{|z_{\alpha}|^{2}}\sigma_{z}\partial_{\alpha}^{4}x \cdot \partial_{\alpha} D \partial_{\alpha}^{4}(c - b_s) d\alpha\\
\end{align*}
We can integrate by parts in all of the above terms to get low order terms. We are finally done with $I$.
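As a side remark, the sign-flip used above rests on $|z_\alpha|$ (and $|x_\alpha|$) depending only on $t$: differentiating $z_\alpha \cdot z_\alpha = \text{const}$ three times in $\alpha$ gives $\partial_\alpha^4 z \cdot z_\alpha = -3\,\partial_\alpha^3 z \cdot \partial_\alpha^2 z$. A minimal numerical sanity check, using a hypothetical unit-speed curve with tangent angle $\theta(\alpha)=\alpha+\alpha^2$ (a stand-in, not the actual interface):

```python
import math

def z_derivs(a):
    """Derivatives z', z'', z''', z'''' of a curve whose unit tangent is
    (cos theta, sin theta) with theta(a) = a + a^2, so |z'| = 1 for all a."""
    th1, th2, th3 = 1 + 2 * a, 2.0, 0.0          # theta', theta'', theta'''
    c, s = math.cos(a + a * a), math.sin(a + a * a)
    z1 = (c, s)
    z2 = (-th1 * s, th1 * c)
    z3 = (-th2 * s - th1**2 * c, th2 * c - th1**2 * s)
    z4 = (-th3 * s - 3 * th1 * th2 * c + th1**3 * s,
          th3 * c - 3 * th1 * th2 * s - th1**3 * c)
    return z1, z2, z3, z4

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

# d^4 z . z_alpha = -3 d^3 z . d^2 z whenever |z_alpha| is alpha-independent
for a in (-0.7, 0.0, 0.3, 1.1):
    z1, z2, z3, z4 = z_derivs(a)
    assert abs(dot(z4, z1) + 3 * dot(z3, z2)) < 1e-12
```

Both sides equal $-3\theta'\theta''$ for this curve, so the check is exact up to rounding.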
\subsection{Computing the difference $\varphi-\psi$}
From the local existence proof we find the equation for $\varphi_t$:
\begin{equation}
\left\{
\begin{array}{cl}
\displaystyle \varphi_t & = \displaystyle -\varphi B_{z}(t) - \frac{Q_z^2}{2|z_{\alpha}|}\partial_{\alpha}\left(\frac{\varphi^{2}}{Q_{z}^{2}}\right) - Q_{z}^{2}\left(BR_t \cdot \frac{z_{\alpha}}{|z_{\alpha}|}+\frac{(P^{-1}_{2}(z))_{\alpha}}{|z_{\alpha}|}\right) \\
& \displaystyle + Q_{z}Q_{t}^{z}\frac{\omega}{|z_{\alpha}|} - 2cBR \cdot \frac{z_{\alpha}}{|z_{\alpha}|} Q_{z}Q_{\alpha}^{z} - c^2|z_{\alpha}|\frac{Q_{\alpha}^{z}}{Q_{z}}
- \frac{Q_{z}^{3}}{|z_{\alpha}|}|BR|^{2}Q_{\alpha}^{z} - (c|z_{\alpha}|)_{t}\\
\displaystyle B_z(t) & = \displaystyle \frac{1}{2\pi}\int_{-\pi}^{\pi}(Q_{z}^{2}BR)_{\alpha} \cdot \frac{z_{\alpha}}{|z_{\alpha}|}d\alpha
\end{array}
\right.
\end{equation}
We will show how to find the equation for $\psi_t$. We start from
$$ \psi = \frac{Q_{x}^{2}\gamma}{2|x_{\alpha}|} - b_s|x_{\alpha}|$$
and therefore
$$ \frac{\psi^{2}}{Q_{x}^{2}} = \frac{Q_{x}^{2}\gamma^{2}}{4|x_{\alpha}|^{2}} + \frac{b_s^{2}|x_{\alpha}|^{2}}{Q_{x}^{2}} - \gamma b_s,$$
that yields
$$ -\partial_{\alpha}\left(\frac{\psi^{2}}{Q_{x}^{2}}\right) = -\partial_{\alpha}\left(\frac{Q_{x}^{2}\gamma^{2}}{4|x_{\alpha}|^{2}}\right) - \partial_{\alpha}\left(\frac{b_s^{2}|x_{\alpha}|^{2}}{Q_{x}^{2}}\right) + \partial_{\alpha}\left(\gamma b_s\right)$$
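As a sanity check of the algebra behind this decomposition, the expansion of $\psi^2/Q_x^2$ from the definition of $\psi$ can be verified pointwise with arbitrary scalar stand-ins for $Q_x$, $\gamma$, $b_s$ and $|x_\alpha|$; the names below are placeholders:

```python
# Scalar stand-ins: Q ~ Q_x, g ~ gamma, b ~ b_s, xa ~ |x_alpha| (all at a
# fixed point); psi = Q^2 g / (2 xa) - b xa as in the text.
def expansion_residual(Q, g, b, xa):
    psi = Q**2 * g / (2 * xa) - b * xa
    lhs = psi**2 / Q**2
    rhs = Q**2 * g**2 / (4 * xa**2) + b**2 * xa**2 / Q**2 - g * b
    return lhs - rhs

for vals in [(1.3, 0.7, -0.4, 2.0), (0.8, -1.1, 0.5, 1.5)]:
    assert abs(expansion_residual(*vals)) < 1e-12
```

The residual vanishes identically, confirming the three-term split that is differentiated above.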
The equation for $\gamma_t$ reads:
\begin{align*}
\gamma_t = & -2BR_t \cdot x_{\alpha} - (Q_{x}^{2})_{\alpha}|BR|^{2} + 2b_sBR_{\alpha} \cdot x_{\alpha} \\
& -\partial_{\alpha}\left(\frac{\psi^{2}}{Q_{x}^{2}}\right) + \left(\frac{b_s^{2}|x_{\alpha}|^{2}}{Q_{x}^{2}}\right)_{\alpha} - 2(P^{-1}_{2}(z))_{\alpha}
+ 2b_e BR_{\alpha} \cdot x_{\alpha} + (b_e \gamma)_{\alpha} + g
\end{align*}
Then
\begin{align*}
\psi_t = & Q_x(Q_x)_{t} \frac{\gamma}{|x_{\alpha}|} - \frac{Q_x^2 \gamma}{2|x_{\alpha}|^{3}}x_{\alpha} \cdot x_{\alpha t}
+ \frac{Q_x^2 \gamma_{t}}{2|x_{\alpha}|} - (b_s|x_{\alpha}|)_{t}\\
= & Q_x(Q_x)_{t} \frac{\gamma}{|x_{\alpha}|} - \frac{Q_x^2 \gamma}{2|x_{\alpha}|}B_x(t) - \frac{Q_x^2 \gamma}{2|x_{\alpha}|}\frac{1}{2\pi}\int_{-\pi}^{\pi}f_{\alpha}\cdot \frac{x_{\alpha}}{|x_{\alpha}|^2}d\alpha \\
& + \frac{Q_x^2}{2|x_{\alpha}|}\left(-2BR_t \cdot x_{\alpha} - (Q_{x}^{2})_{\alpha}|BR|^{2} + 2b_sBR_{\alpha} \cdot x_{\alpha}
-\partial_{\alpha}\left(\frac{\psi^{2}}{Q_{x}^{2}}\right) + \left(\frac{b_s^{2}|x_{\alpha}|^{2}}{Q_{x}^{2}}\right)_{\alpha}\right.\\
& \left.- 2(P^{-1}_{2}(z))_{\alpha}
+ 2b_e BR_{\alpha} \cdot x_{\alpha} + (b_e \gamma)_{\alpha} + g\right)
- (b_s|x_{\alpha}|)_{t}
\end{align*}
We should remark that we have used that
$$ x_{\alpha} \cdot x_{\alpha t} = \frac{1}{2\pi}\int_{-\pi}^{\pi}(Q_{x}^{2} BR)_{\alpha} \cdot x_{\alpha} d\alpha + \frac{1}{2\pi}\int_{-\pi}^{\pi}f_{\alpha} \cdot x_{\alpha} d\alpha$$
and
$$ B_x(t) = \frac{1}{2\pi}\int_{-\pi}^{\pi}(Q_x^{2} BR)_{\alpha} \cdot \frac{x_{\alpha}}{|x_{\alpha}|^{2}}d\alpha$$
Computing, we find that
\begin{align*}
\psi_t = & Q_x(Q_x)_{t} \frac{\gamma}{|x_{\alpha}|} - \underbrace{\frac{Q_x^2 \gamma}{2|x_{\alpha}|}B_x(t)}_{(1)} - Q_x^2 BR_t \cdot \frac{x_{\alpha}}{|x_{\alpha}|} -
\frac{Q_x^3}{|x_{\alpha}|}|BR|^{2}Q_{\alpha}^{x} + Q_{x}^{2}b_sBR_{\alpha} \cdot \frac{x_{\alpha}}{|x_{\alpha}|} \\
& - \frac{Q_{x}^{2}}{2|x_{\alpha}|}\partial_{\alpha}\left(\frac{\psi^{2}}{Q_{x}^{2}}\right) + \underbrace{\frac{Q_{x}^{2}}{2}\left(\frac{b_s^{2}|x_{\alpha}|^{2}}{Q_{x}^{2}}\right)_{\alpha}}_{(1)}
- \frac{Q_{x}^{2}}{|x_{\alpha}|}(P^{-1}_{2}(z))_{\alpha} - (b_s|x_{\alpha}|)_{t} + \mathcal{E}^{1}
\end{align*}
where
$$ \mathcal{E}^{1} = \frac{Q_{x}^{2}}{|x_{\alpha}|}BR_{\alpha}\cdot x_{\alpha} b_e + \frac{Q_{x}^{2}}{2|x_{\alpha}|}(b_e \gamma)_{\alpha} + \frac{Q_{x}^{2}}{|x_{\alpha}|} g
- \frac{Q_x^2 \gamma}{2|x_{\alpha}|}\frac{1}{2\pi}\int_{-\pi}^{\pi}f_{\alpha}\cdot \frac{x_{\alpha}}{|x_{\alpha}|^2}d\alpha$$
are error terms. We consider
\begin{align*}
(1) = &-\frac{Q_x^2 \gamma}{2|x_{\alpha}|}B_x(t) +\frac{Q_{x}^{2}}{2}\left(\frac{b_s^{2}|x_{\alpha}|^{2}}{Q_{x}^{2}}\right)_{\alpha} \\
= & -\frac{Q_x^2 \gamma}{2|x_{\alpha}|}B_x(t) +b_s|x_{\alpha}|(b_s)_{\alpha} - \frac{Q_{\alpha}^{x}}{Q_{x}}b_s^{2}|x_{\alpha}| \\
= & -\frac{Q_x^2 \gamma}{2|x_{\alpha}|}B_x(t) +b_s|x_{\alpha}|B_x(t)-b_s(Q_{x}^{2}BR)_{\alpha} \cdot \frac{x_{\alpha}}{|x_{\alpha}|} - \frac{Q_{\alpha}^{x}}{Q_{x}}b_s^{2}|x_{\alpha}| \\
= & -B_x(t)\psi-b_s(Q_{x}^{2}BR)_{\alpha} \cdot \frac{x_{\alpha}}{|x_{\alpha}|} - \frac{Q_{\alpha}^{x}}{Q_{x}}b_s^{2}|x_{\alpha}|
\end{align*}
It yields
\begin{align*}
\psi_t = & Q_x(Q_x)_{t} \frac{\gamma}{|x_{\alpha}|} - B_x(t) \psi \underbrace{- b_s(Q_{x}^{2}BR)_{\alpha} \cdot \frac{x_{\alpha}}{|x_{\alpha}|}}_{(2)} - \frac{Q_{\alpha}^{x}}{Q_{x}}b_s^{2}|x_{\alpha}| \\
& - Q_x^2 BR_t \cdot \frac{x_{\alpha}}{|x_{\alpha}|} -
\frac{Q_x^3}{|x_{\alpha}|}|BR|^{2}Q_{\alpha}^{x} + \underbrace{Q_{x}^{2}b_sBR_{\alpha} \cdot \frac{x_{\alpha}}{|x_{\alpha}|}}_{(2)} \\
& - \frac{Q_{x}^{2}}{2|x_{\alpha}|}\partial_{\alpha}\left(\frac{\psi^{2}}{Q_{x}^{2}}\right)
- \frac{Q_{x}^{2}}{|x_{\alpha}|}(P^{-1}_{2}(z))_{\alpha} - (b_s|x_{\alpha}|)_{t} + \mathcal{E}^{1}
\end{align*}
It is easy to check that
\begin{align*}
(2) = & - b_s(Q_{x}^{2}BR)_{\alpha} \cdot \frac{x_{\alpha}}{|x_{\alpha}|} + Q_{x}^{2}b_sBR_{\alpha} \cdot \frac{x_{\alpha}}{|x_{\alpha}|}
= -2b_s BR \cdot \frac{x_{\alpha}}{|x_{\alpha}|} Q_x (Q_x)_{\alpha},
\end{align*}
then
\begin{align*}
\psi_t = & -B_x(t) \psi - \frac{Q_{x}^{2}}{2|x_{\alpha}|}\partial_{\alpha}\left(\frac{\psi^{2}}{Q_{x}^{2}}\right) - Q_x^2\left(BR_t \cdot \frac{x_{\alpha}}{|x_{\alpha}|}
+ \frac{(P^{-1}_{2}(z))_{\alpha}}{|x_{\alpha}|} \right) \\
& + Q_x(Q_x)_{t} \frac{\gamma}{|x_{\alpha}|}-2b_s BR \cdot \frac{x_{\alpha}}{|x_{\alpha}|} Q_x (Q_x)_{\alpha} - \frac{Q_{\alpha}^{x}}{Q_{x}}b_s^{2}|x_{\alpha}|
-\frac{Q_x^3}{|x_{\alpha}|}|BR|^{2}Q_{\alpha}^{x} \\
& - (b_s|x_{\alpha}|)_{t} + \mathcal{E}^{1}
\end{align*}
With this formula it is easy to find that
$$ \frac{1}{2}\frac{d}{dt}\int |\mathcal{D}|^{2}dx \leq CP(E(t)) + c\delta(t)$$
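The cancellation behind (2) above is a plain product rule: since $(Q_x^2BR)_\alpha = 2Q_x(Q_x)_\alpha BR + Q_x^2 BR_\alpha$, the two underbraced terms differ exactly by $-2b_s\,BR\cdot \frac{x_\alpha}{|x_\alpha|}\,Q_x(Q_x)_\alpha$. A minimal numerical check with hypothetical profiles $Q(\alpha)=1+0.3\alpha^2$ and $BR(\alpha)=(\sin\alpha,\alpha^2)$ (placeholders, not the actual Birkhoff-Rott integral):

```python
import math

def residual(a, b, u):
    """Check -b*(Q^2 BR)_a . u + Q^2 b BR_a . u == -2 b (BR . u) Q Q_a,
    with u standing in for the unit tangent x_a/|x_a|."""
    Q, Qa = 1 + 0.3 * a**2, 0.6 * a                  # Q and its derivative
    BR, BRa = (math.sin(a), a**2), (math.cos(a), 2 * a)
    dot = lambda v, w: v[0] * w[0] + v[1] * w[1]
    d_Q2BR = tuple(2 * Q * Qa * BR[i] + Q**2 * BRa[i] for i in range(2))
    lhs = -b * dot(d_Q2BR, u) + Q**2 * b * dot(BRa, u)
    rhs = -2 * b * dot(BR, u) * Q * Qa
    return lhs - rhs

assert abs(residual(0.4, 1.7, (0.6, -0.8))) < 1e-12
```

The $BR_\alpha$ contributions cancel exactly, which is why only a first-order term survives in (2).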
In order to deal with
$$ II = \int_{-\pi}^{\pi}\Lambda \partial_{\alpha}^{3} \mathcal{D} \partial_{\alpha}^{3}\mathcal{D}_{t} d\alpha$$
we take a derivative in $\alpha$ in the equations for $\varphi$ and $\psi$ to reorganize the most dangerous terms. If we find a term of low order, we will denote it by NICE. Since the equations for $\varphi_t$ and $\psi_t$ are analogous except for the $\mathcal{E}^{1}$ term, the NICE terms are going to be easier to estimate in terms of $CP(E(t)) + c\delta(t)$.
\begin{align*}
\psi_{\alpha t} = & -B_x(t) \psi_{\alpha} - \partial_{\alpha}\left(\frac{Q_{x}^{2}}{2|x_{\alpha}|}\partial_{\alpha}\left(\frac{\psi^{2}}{Q_{x}^{2}}\right)\right) - \left(Q_x^2\left(\underbrace{BR_t \cdot \frac{x_{\alpha}}{|x_{\alpha}|}}_{(3)}
+ \frac{(P^{-1}_{2}(z))_{\alpha}}{|x_{\alpha}|} \right)\right)_{\alpha} \\
& + \left(Q_x(Q_x)_{t} \frac{\gamma}{|x_{\alpha}|}\right)_{\alpha} - \left(2b_s BR \cdot \frac{x_{\alpha}}{|x_{\alpha}|} Q_x (Q_x)_{\alpha}\right)_{\alpha} - \left(\frac{Q_{\alpha}^{x}}{Q_{x}}b_s^{2}|x_{\alpha}|\right)_{\alpha}
-\left(\frac{Q_x^3}{|x_{\alpha}|}|BR|^{2}Q_{\alpha}^{x}\right)_{\alpha} \\
& \underbrace{- (b_s|x_{\alpha}|)_{\alpha t}}_{(3)} + \mathcal{E}^{1}_{\alpha}
\end{align*}
Expanding (3):
\begin{align*}
(3) = & -\left(Q_{x}^{2}BR_t \cdot \frac{x_{\alpha}}{|x_{\alpha}|}\right)_{\alpha} - (b_s|x_{\alpha}|)_{\alpha t} \\
= & -\left(Q_{x}^{2}BR_t\right)_{\alpha} \cdot \frac{x_{\alpha}}{|x_{\alpha}|}-Q_{x}^{2}BR_t \cdot \left(\frac{x_{\alpha}}{|x_{\alpha}|}\right)_{\alpha}
- \left(|x_{\alpha}|B_{x}(t) - (Q_{x}^{2} BR)_{\alpha} \cdot \frac{x_{\alpha}}{|x_{\alpha}|}\right)_{t} \\
= & - \left(|x_{\alpha}|B_{x}(t)\right)_{t} + (Q_{x}^{2} BR)_{\alpha} \cdot \left(\frac{x_{\alpha}}{|x_{\alpha}|}\right)_{t}
+ 2(Q_{x}(Q_{x})_{t} BR)_{\alpha} \cdot\frac{x_{\alpha}}{|x_{\alpha}|} - Q_{x}^{2}BR_{t} \cdot \left(\frac{x_{\alpha}}{|x_{\alpha}|}\right)_{\alpha}
\end{align*}
We use that
$$ \left(\frac{x_{\alpha}}{|x_{\alpha}|}\right)_{\alpha} = \frac{x_{\alpha \alpha} \cdot x_{\alpha}^{\perp}}{|x_{\alpha}|^{2}} \frac{x_{\alpha}^{\perp}}{|x_{\alpha}|};
\quad \left(\frac{x_{\alpha}}{|x_{\alpha}|}\right)_{t} = \frac{x_{\alpha t} \cdot x_{\alpha}^{\perp}}{|x_{\alpha}|^{2}} \frac{x_{\alpha}^{\perp}}{|x_{\alpha}|}$$
to find
\begin{align*}
\psi_{\alpha t} = & \underbrace{-B_x(t) \psi_{\alpha}}_{(4)} - \underbrace{\frac{\partial_{\alpha}^{2}(\psi^{2})}{2|x_{\alpha}|}}_{(5)}
+ \underbrace{\partial_{\alpha}\left(\frac{(Q_{x})_{\alpha}}{|x_{\alpha}|Q_{x}}\psi^{2}\right)}_{(6)}
- Q_{x}^{2} BR_t \cdot x_{\alpha}^{\perp} \frac{x_{\alpha \alpha} \cdot x_{\alpha}^{\perp}}{|x_{\alpha}|^{3}} -
(|x_{\alpha}|B_x(t))_{t} \\
& + \underbrace{(Q_{x}^{2} BR)_{\alpha} \cdot x_{\alpha}^{\perp} \frac{x_{\alpha t} \cdot x_{\alpha}^{\perp}}{|x_{\alpha}|^{3}}}_{(13)}
+ \underbrace{2(Q_x (Q_x)_{t} BR)_{\alpha} \cdot \frac{x_{\alpha}}{|x_{\alpha}|}}_{(7)}
- \underbrace{\left(Q_{x}^{2}\frac{(P^{-1}_{2}(z))_{\alpha}}{|x_{\alpha}|}\right)_{\alpha}}_{(8)}\\
&+ \underbrace{\left(Q_x(Q_x)_{t} \frac{\gamma}{|x_{\alpha}|}\right)_{\alpha}}_{(9)}
- \underbrace{\left(2b_s BR \cdot \frac{x_{\alpha}}{|x_{\alpha}|}Q_x (Q_x)_{\alpha}\right)_{\alpha}}_{(10)}
- \underbrace{\left(\frac{(Q_x)_{\alpha}}{Q_x}b_s^{2}|x_{\alpha}|\right)_{\alpha}}_{(11)}\\
&- \underbrace{\left(\frac{Q_x^{3}}{|x_{\alpha}|}|BR|^{2}(Q_x)_{\alpha}\right)_{\alpha}}_{(12)} + \mathcal{E}^{1}_{\alpha}
\end{align*}
The term $(|x_{\alpha}|B_x(t))_{t}$ depends only on $t$ so it is not going to appear in computing II.
$$ (4) = -B_x(t) \psi_{\alpha} \text{ is NICE (at the level of } \psi_{\alpha})$$
$$ (5) = -\frac{\partial_{\alpha}^{2}(\psi^{2})}{2|x_{\alpha}|} \text{ is a transparent term which is NICE (even if we have to deal with } \Lambda^{1/2})$$
$$ (6) = \partial_{\alpha}\left(\frac{(Q_{x})_{\alpha}}{|x_{\alpha}|Q_{x}}\psi^{2}\right) = -\frac{(Q_x)^{2}_{\alpha}\psi^{2}}{|x_{\alpha}|(Q_x)^{2}}
+ \frac{2(Q_x)_{\alpha}\psi \psi_{\alpha}}{|x_{\alpha}|Q_x} + \frac{\psi^{2}}{Q_x}\left(\frac{(Q_x)_{\alpha}}{|x_{\alpha}|}\right)_{\alpha}$$
The first term is at the level of $\partialrtial_{\alpha} x$ so it is NICE. The second term is at the level of $\partialrtial_{\alpha} x$ or $\psi_{\alpha}$ so it is NICE. We write the last one as
$$\frac{\psi^{2}}{Q_x}\left(\frac{(Q_x)_{\alpha}}{|x_{\alpha}|}\right)_{\alpha}
= \frac{\psi^{2}}{Q_x}x_{\alpha} \cdot \left(\nabla^{2}Q(x)\cdot \frac{x_{\alpha}}{|x_{\alpha}|}\right)
+ \frac{\psi^{2}}{Q_x}\nabla Q(x) \cdot x_{\alpha}^{\perp} \frac{x_{\alpha \alpha} \cdot x_{\alpha}^{\perp}}{|x_{\alpha}|^{3}}$$
The first term is at the level of $x_{\alpha}$ or $\psi$ so it is NICE. For the second term we have used that
$$ \left(\frac{x_{\alpha}}{|x_{\alpha}|}\right)_{\alpha} = \frac{x_{\alpha \alpha} \cdot x_{\alpha}^{\perp}}{|x_{\alpha}|^{2}} \frac{x_{\alpha}^{\perp}}{|x_{\alpha}|}$$
Finally:
$$ (6) = \text{NICE } + \frac{\psi^{2}}{Q_x}\nabla Q(x) \cdot x_{\alpha}^{\perp} \frac{x_{\alpha \alpha} \cdot x_{\alpha}^{\perp}}{|x_{\alpha}|^{3}}$$
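The formula for $(x_\alpha/|x_\alpha|)_\alpha$ used above (the derivative of the unit tangent points in the normal direction with coefficient $x_{\alpha\alpha}\cdot x_\alpha^\perp/|x_\alpha|^2$) can be checked against a finite-difference derivative on a hypothetical curve; this is only a numerical sanity check, valid for either perpendicular convention since $x_\alpha^\perp$ appears twice:

```python
import math

def check(a):
    # Hypothetical curve with x_a = (1 + 0.2 cos a, -0.3 sin a) and its
    # analytic derivative x_aa = (-0.2 sin a, -0.3 cos a).
    xa  = (1 + 0.2 * math.cos(a), -0.3 * math.sin(a))
    xaa = (-0.2 * math.sin(a), -0.3 * math.cos(a))
    mod = math.hypot(*xa)
    perp = (-xa[1], xa[0])                       # x_a rotated by +pi/2
    coeff = (xaa[0] * perp[0] + xaa[1] * perp[1]) / mod**2
    rhs = (coeff * perp[0] / mod, coeff * perp[1] / mod)
    # Central finite difference of the unit tangent T(a) = x_a/|x_a|.
    h = 1e-6
    def T(s):
        v = (1 + 0.2 * math.cos(s), -0.3 * math.sin(s))
        m = math.hypot(*v)
        return (v[0] / m, v[1] / m)
    Tp, Tm = T(a + h), T(a - h)
    lhs = ((Tp[0] - Tm[0]) / (2 * h), (Tp[1] - Tm[1]) / (2 * h))
    return max(abs(lhs[0] - rhs[0]), abs(lhs[1] - rhs[1]))

assert check(0.7) < 1e-6
```

The agreement up to the finite-difference error confirms that only the normal component of $x_{\alpha\alpha}$ enters.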
\begin{align*} (7) = 2(Q_x (Q_x)_{t} BR)_{\alpha} \cdot \frac{x_{\alpha}}{|x_{\alpha}|}
&= 2(Q_x)_{\alpha} (Q_x)_{t} BR \cdot \frac{x_{\alpha}}{|x_{\alpha}|}
+ 2Q_x \left(\frac{(Q_x)_{t}}{|x_{\alpha}|}\right)_{\alpha} BR \cdot x_{\alpha}\\
&+ 2Q_x(Q_x)_{t} BR_{\alpha} \cdot \frac{x_{\alpha}}{|x_{\alpha}|}\end{align*}
The first term is at the level of $x_{\alpha}, x_{t}, BR \sim x_{\alpha}$ so it is NICE. We use that
$$ \frac{(Q_x)_{t\alpha}}{|x_{\alpha}|} = \frac{(Q_x)_{\alpha t}}{|x_{\alpha}|}
= \frac{(\nabla Q(x) \cdot x_{\alpha})_{t}}{|x_{\alpha}|}
= \left(\nabla Q(x) \cdot \frac{x_{\alpha}}{|x_{\alpha}|}\right)_{t} - \nabla Q(x) \cdot x_{\alpha} \left(\frac{1}{|x_{\alpha}|}\right)_{t}$$
Using that
$$ \frac{x_{\alpha} \cdot x_{\alpha t}}{|x_{\alpha}|^{2}} = B_{x}(t) + \frac{1}{2\pi}\int_{-\pi}^{\pi}f_{\alpha} \cdot \frac{x_{\alpha}}{|x_{\alpha}|^2}d\alpha$$
and
$$ \left(\frac{x_{\alpha}}{|x_{\alpha}|}\right)_{t} = \frac{x_{\alpha t} \cdot x_{\alpha}^{\perp}}{|x_{\alpha}|^{2}} \frac{x_{\alpha}^{\perp}}{|x_{\alpha}|}$$
we find that
\begin{align}
\label{star}
\frac{(Q_x)_{t\alpha}}{|x_{\alpha}|}
&= x_t \cdot \left(\nabla^2 Q(x) \cdot \frac{x_{\alpha}}{|x_{\alpha}|}\right) + \nabla Q(x) \cdot x_{\alpha}^{\perp} \frac{x_{\alpha t} \cdot x_{\alpha}^{\perp}}{|x_{\alpha}|^{3}}\nonumber\\
&+ \nabla Q(x) \cdot\frac{x_{\alpha}}{|x_{\alpha}|} B_x(t) + \nabla Q(x) \cdot\frac{x_{\alpha}}{|x_{\alpha}|}\frac{1}{2\pi}\int_{-\pi}^{\pi}f_{\alpha} \cdot \frac{x_{\alpha}}{|x_{\alpha}|^2}d\alpha
\end{align}
That yields
\begin{align*}
(7) & = 2(Q_x (Q_x)_{t} BR)_{\alpha} \cdot \frac{x_{\alpha}}{|x_{\alpha}|}
= \text{NICE } + \underbrace{2Q_x BR \cdot x_{\alpha} x_t \cdot \left(\nabla^2 Q(x) \cdot \frac{x_{\alpha}}{|x_{\alpha}|}\right)}_{\text{NICE (at the level of }x_{\alpha}, x_t, BR)} \\
& + 2Q_x BR \cdot x_{\alpha} \nabla Q(x) \cdot x_{\alpha}^{\perp} \frac{x_{\alpha t} \cdot x_{\alpha}^{\perp}}{|x_{\alpha}|^{3}}
+ \underbrace{2Q_x BR \cdot x_{\alpha} \nabla Q(x) \cdot\frac{x_{\alpha}}{|x_{\alpha}|} B_x(t)}_{\text{NICE (at the level of }x_{\alpha}, x_t, BR)} \\
& + \underbrace{2Q_x BR \cdot x_{\alpha} \nabla Q(x) \cdot\frac{x_{\alpha}}{|x_{\alpha}|}\frac{1}{2\pi}\int_{-\pi}^{\pi}f_{\alpha} \cdot \frac{x_{\alpha}}{|x_{\alpha}|^2}d\alpha}_{\text{part of error terms}}
+ 2Q_x(Q_x)_{t} BR_{\alpha} \cdot \frac{x_{\alpha}}{|x_{\alpha}|}
\end{align*}
Finally:
\begin{align*}
(7) & = 2(Q_x (Q_x)_{t} BR)_{\alpha} \cdot \frac{x_{\alpha}}{|x_{\alpha}|}
= \text{NICE } + 2Q_x BR \cdot x_{\alpha} \nabla Q(x) \cdot x_{\alpha}^{\perp} \frac{x_{\alpha t} \cdot x_{\alpha}^{\perp}}{|x_{\alpha}|^{3}}
+ 2Q_x(Q_x)_{t} BR_{\alpha} \cdot \frac{x_{\alpha}}{|x_{\alpha}|}
\end{align*}
\begin{align*}
(8) & = -\left(Q_{x}^{2}\frac{(P^{-1}_{2}(z))_{\alpha}}{|x_{\alpha}|}\right)_{\alpha}
= -\left(Q_{x}^{2}\nabla P^{-1}_{2}(x)\cdot \frac{x_{\alpha}}{|x_{\alpha}|}\right)_{\alpha}
= \underbrace{-2 Q_{x} \nabla Q(x) \cdot x_{\alpha} \nabla P^{-1}_{2}(x) \cdot \frac{x_{\alpha}}{|x_{\alpha}|}}_{\text{NICE (at the level of }x_{\alpha})} \\
& \underbrace{- Q_{x}^{2} x_{\alpha} \cdot \left(\nabla^{2} P^{-1}_{2}(x) \cdot \frac{x_{\alpha}}{|x_{\alpha}|}\right)}_{\text{NICE (at the level of }x_{\alpha})}
- Q_{x}^{2} \nabla P^{-1}_{2}(x) \cdot x_{\alpha}^{\perp} \frac{x_{\alpha \alpha} \cdot x_{\alpha}^{\perp}}{|x_{\alpha}|^{3}}
\end{align*}
which means
\begin{align*}
(8) & = -\left(Q_{x}^{2}\frac{(P^{-1}_{2}(z))_{\alpha}}{|x_{\alpha}|}\right)_{\alpha}
= \text{NICE } - Q_{x}^{2} \nabla P^{-1}_{2}(x) \cdot x_{\alpha}^{\perp} \frac{x_{\alpha \alpha} \cdot x_{\alpha}^{\perp}}{|x_{\alpha}|^{3}}
\end{align*}
\begin{align*}
(9) & = \left(Q_x(Q_x)_{t} \frac{\gamma}{|x_{\alpha}|}\right)_{\alpha}
= \underbrace{(Q_x)_{\alpha} (Q_x)_{t}\frac{\gamma}{|x_{\alpha}|}}_{\text{NICE (at the level of }x_{\alpha}, x_t)} + Q_x \frac{(Q_x)_{\alpha t}}{|x_{\alpha}|}\gamma
+ Q_x(Q_x)_{t} \left(\frac{\gamma}{|x_{\alpha}|}\right)_{\alpha}
\end{align*}
We use \eqref{star} to deal with $\frac{(Q_x)_{\alpha t}}{|x_{\alpha}|}$. We find that
\begin{align*}
(9) & = \left(Q_x(Q_x)_{t} \frac{\gamma}{|x_{\alpha}|}\right)_{\alpha}
= \text{NICE } + Q_x \gamma \nabla Q(x) \cdot x_{\alpha}^{\perp} \frac{x_{\alpha t} \cdot x_{\alpha}^{\perp}}{|x_{\alpha}|^{3}}
+ Q_x(Q_x)_{t} \left(\frac{\gamma}{|x_{\alpha}|}\right)_{\alpha}
\end{align*}
\begin{align*}
(10) & = - \left(2b_s BR \cdot \frac{x_{\alpha}}{|x_{\alpha}|}Q_x (Q_x)_{\alpha}\right)_{\alpha}
= \underbrace{- 2b_s BR \cdot \frac{x_{\alpha}}{|x_{\alpha}|}(Q_x)^{2}_{\alpha}}_{\text{NICE as before}}
- \left(2b_s BR \cdot \frac{x_{\alpha}}{|x_{\alpha}|}\right)_{\alpha}Q_x (Q_x)_{\alpha} \\
& - 2b_s BR \cdot x_{\alpha}Q_x \nabla Q(x) \cdot x_{\alpha}^{\perp} \frac{x_{\alpha \alpha} \cdot x_{\alpha}^{\perp}}{|x_{\alpha}|^{3}}
\underbrace{- 2b_s BR \cdot \frac{x_{\alpha}}{|x_{\alpha}|}Q_x x_{\alpha} \cdot (\nabla^{2} Q(x) \cdot x_{\alpha})}_{\text{NICE as before}}
\end{align*}
Therefore
\begin{align*}
(10) = - \left(2b_s BR \cdot \frac{x_{\alpha}}{|x_{\alpha}|}Q_x (Q_x)_{\alpha}\right)_{\alpha}
& = \text{NICE }
- \left(2b_s BR \cdot \frac{x_{\alpha}}{|x_{\alpha}|}\right)_{\alpha}Q_x (Q_x)_{\alpha} \\
& - 2b_s BR \cdot x_{\alpha}Q_x \nabla Q(x) \cdot x_{\alpha}^{\perp} \frac{x_{\alpha \alpha} \cdot x_{\alpha}^{\perp}}{|x_{\alpha}|^{3}}
\end{align*}
\begin{align*}
(11) = - \left(\frac{(Q_x)_{\alpha}}{Q_x}b_s^{2}|x_{\alpha}|\right)_{\alpha}
& = - \left(b_s^{2}|x_{\alpha}|\right)_{\alpha}\frac{(Q_x)_{\alpha}}{Q_x}
- \frac{b_s^{2}|x_{\alpha}|^2}{Q_x}\nabla Q(x) \cdot x_{\alpha}^{\perp} \frac{x_{\alpha \alpha} \cdot x_{\alpha}^{\perp}}{|x_{\alpha}|^{3}} \\
& - \frac{x_{\alpha} \cdot (\nabla^{2} Q(x) \cdot x_{\alpha})}{Q_x}b_s^2|x_{\alpha}| + \frac{(Q_x)_{\alpha}^2}{(Q_x)^2}b_s^2|x_{\alpha}|
\end{align*}
The fact that the last two terms are NICE allows us to find that
\begin{align*}
(11) = - \left(\frac{(Q_x)_{\alpha}}{Q_x}b_s^{2}|x_{\alpha}|\right)_{\alpha}
& = \text{NICE } - \left(b_s^{2}|x_{\alpha}|\right)_{\alpha}\frac{(Q_x)_{\alpha}}{Q_x}
- \frac{b_s^{2}|x_{\alpha}|^2}{Q_x}\nabla Q(x) \cdot x_{\alpha}^{\perp} \frac{x_{\alpha \alpha} \cdot x_{\alpha}^{\perp}}{|x_{\alpha}|^{3}}
\end{align*}
Finally:
\begin{align*}
(12) = - \left(\frac{Q_x^{3}}{|x_{\alpha}|}|BR|^{2}(Q_x)_{\alpha}\right)_{\alpha}
& = \underbrace{- 3(Q_x)^2(Q_x)^2_{\alpha}|BR|^{2}}_{\text{NICE}} - \frac{Q_x^3}{|x_{\alpha}|}(|BR|^2)_{\alpha}(Q_x)_{\alpha} \\
& \underbrace{- \frac{Q_x^3}{|x_{\alpha}|}|BR|^2x_{\alpha} \cdot(\nabla^{2}Q(x) \cdot x_{\alpha})}_{\text{NICE}} - Q_{x}^{3}|BR|^{2}\nabla Q(x) \cdot x_{\alpha}^{\perp} \frac{x_{\alpha \alpha} \cdot x_{\alpha}^{\perp}}{|x_{\alpha}|^{3}}
\end{align*}
which implies that
\begin{align*}
(12) = - \left(\frac{Q_x^{3}}{|x_{\alpha}|}|BR|^{2}(Q_x)_{\alpha}\right)_{\alpha}
& = \text{NICE } - \frac{Q_x^3}{|x_{\alpha}|}(|BR|^2)_{\alpha}(Q_x)_{\alpha} - Q_{x}^{3}|BR|^{2}\nabla Q(x) \cdot x_{\alpha}^{\perp} \frac{x_{\alpha \alpha} \cdot x_{\alpha}^{\perp}}{|x_{\alpha}|^{3}}
\end{align*}
We gather all the formulas from (4) to (12), absorbing the error terms into $\tilde{\mathcal{E}}^{1}_{\alpha}$ whenever we encounter them.
It yields:
\begin{align*}
\psi_{\alpha t} & = \text{NICE } + \underbrace{\frac{\psi^{2}}{Q_x} \nabla Q(x) \cdot x_{\alpha}^{\perp} \frac{x_{\alpha \alpha} \cdot x_{\alpha}^{\perp}}{|x_{\alpha}|^{3}}}_{(16)} \underbrace{- Q_{x}^{2} BR_t \cdot x_{\alpha}^{\perp} \frac{x_{\alpha \alpha} \cdot x_{\alpha}^{\perp}}{|x_{\alpha}|^{3}}}_{(15)}
\underbrace{- Q_{x}^{2} \nabla P_{2}^{-1}(x) \cdot x_{\alpha}^{\perp} \frac{x_{\alpha \alpha} \cdot x_{\alpha}^{\perp}}{|x_{\alpha}|^{3}}}_{(15)} \\
& + \underbrace{Q_{x} \gamma \nabla Q(x) \cdot x_{\alpha}^{\perp} \frac{x_{\alpha t} \cdot x_{\alpha}^{\perp}}{|x_{\alpha}|^{3}}}_{(18)}
+ \underbrace{Q_x(Q_x)_{t} \left(\frac{\gamma}{|x_{\alpha}|}\right)_{\alpha}}_{(14)}
+ \underbrace{2Q_x BR \cdot x_{\alpha} \nabla Q(x) \cdot x_{\alpha}^{\perp} \frac{x_{\alpha t} \cdot x_{\alpha}^{\perp}}{|x_{\alpha}|^{3}}}_{(18)} \\
& + \underbrace{Q_x(Q_x)_{t} 2 BR_{\alpha} \cdot \frac{x_{\alpha}}{|x_{\alpha}|}}_{(14)}
\underbrace{- \left(2 b_s BR \cdot \frac{x_{\alpha}}{|x_{\alpha}|}\right)_{\alpha}Q_x(Q_x)_{\alpha}}_{(17)}
\underbrace{-2b_s BR \cdot x_{\alpha}Q_x \nabla Q(x) \cdot x_{\alpha}^{\perp} \frac{x_{\alpha \alpha} \cdot x_{\alpha}^{\perp}}{|x_{\alpha}|^{3}}}_{(16)}\\
& \underbrace{-(b_s^{2}|x_{\alpha}|)_{\alpha}\frac{(Q_x)_{\alpha}}{Q_x}}_{(17)}
\underbrace{-\frac{b_s^2|x_{\alpha}|^{2}}{Q_x}\nabla Q(x) \cdot x_{\alpha}^{\perp} \frac{x_{\alpha \alpha} \cdot x_{\alpha}^{\perp}}{|x_{\alpha}|^{3}}}_{(16)}
\underbrace{-\frac{Q_x^3}{|x_{\alpha}|}(|BR|^{2})_{\alpha}(Q_x)_{\alpha}}_{(17)}
\\
& \underbrace{-Q_x^3|BR|^{2}\nabla Q(x) \cdot x_{\alpha}^{\perp} \frac{x_{\alpha \alpha} \cdot x_{\alpha}^{\perp}}{|x_{\alpha}|^{3}}}_{(16)}+ (Q_x^2 BR)_{\alpha} \cdot x_{\alpha}^{\perp} \frac{x_{\alpha t} \cdot x_{\alpha}^{\perp}}{|x_{\alpha}|^{3}} + \tilde{\mathcal{E}}_{\alpha}^{1}
\end{align*}
We compute
\begin{align*}
(14) & = Q_x(Q_x)_{t} \left(\frac{\gamma}{|x_{\alpha}|}\right)_{\alpha}
+ Q_x(Q_x)_{t} 2 BR_{\alpha} \cdot \frac{x_{\alpha}}{|x_{\alpha}|}\\
&= 2 \frac{(Q_x)_{t}}{Q_x}(Q_x)^2\left(\frac{\gamma}{2|x_{\alpha}|}\right)_{\alpha}
+ 2\frac{(Q_x)_{t}}{Q_x}(Q_x)^2 BR_{\alpha} \cdot \frac{x_{\alpha}}{|x_{\alpha}|} \\
& = 2\frac{(Q_x)_{t}}{Q_x}\psi_{\alpha} - 2\frac{(Q_x)_{t}}{Q_x}(Q_x^{2})_{\alpha}\frac{\gamma}{2|x_{\alpha}|}
- 2\frac{(Q_x)_{t}}{Q_x}(Q_x^{2})_{\alpha}BR \cdot \frac{x_{\alpha}}{|x_{\alpha}|}
- 2\frac{(Q_x)_{t}}{Q_x}(|x_{\alpha}|B_x(t))
\end{align*}
The last formula allows us to conclude that (14)=NICE. We reorganize using (15), (16), (17) and (18).
\begin{align*}
\psi_{\alpha t} & = \text{NICE } - Q_x^2(BR_t \cdot x_{\alpha}^{\perp}
+ \nabla P_{2}^{-1}(x) \cdot x_{\alpha}^{\perp})\frac{x_{\alpha \alpha} \cdot x_{\alpha}^{\perp}}{|x_{\alpha}|^{3}} \\
& - Q_x^3\left(|BR|^{2} + \frac{b_s^2|x_{\alpha}|^{2}}{Q_x^{4}}+2b_s \frac{BR \cdot x_{\alpha}}{Q_x^{2}} - \frac{\psi^{2}}{Q_x^4}\right)\nabla Q(x) \cdot x_{\alpha}^{\perp} \frac{x_{\alpha \alpha} \cdot x_{\alpha}^{\perp}}{|x_{\alpha}|^{3}} \\
& + (Q_x^{2} BR)_{\alpha} \cdot x_{\alpha}^{\perp} \frac{x_{\alpha t} \cdot x_{\alpha}^{\perp}}{|x_{\alpha}|^{3}}
+ (Q_x \gamma + 2Q_x BR \cdot x_{\alpha})\nabla Q(x) \cdot x_{\alpha}^{\perp} \frac{x_{\alpha t} \cdot x_{\alpha}^{\perp}}{|x_{\alpha}|^{3}} \\
& - \left(\frac{Q_x^{3}(|BR|^{2})_{\alpha}}{|x_{\alpha}|} + \frac{(b_s^{2}|x_{\alpha}|)_{\alpha}}{Q_x}
+ \left(2b_s BR \cdot \frac{x_{\alpha}}{|x_{\alpha}|}\right)_{\alpha}Q_x\right)(Q_x)_{\alpha} + \tilde{\mathcal{E}}_{\alpha}^{1}
\end{align*}
We add and subtract terms in order to find the R-T condition. We remember here that
\begin{align}
\sigma_z & = \left(BR_t + \frac{\varphi}{|z_{\alpha}|}BR_{\alpha}\right) \cdot z_{\alpha}^{\perp} + \frac{\omega}{2|z_{\alpha}|^{2}}\left(z_{\alpha t} + \frac{\varphi}{|z_{\alpha}|}z_{\alpha \alpha}\right)\cdot z_{\alpha}^{\perp}\nonumber\\
& + Q_z\left|BR + \frac{\omega}{2|z_{\alpha}|^{2}}z_{\alpha}\right|^{2}\nabla Q(z) \cdot z_{\alpha}^{\perp} + \nabla P_{2}^{-1}(z) \cdot z_{\alpha}^{\perp}\nonumber\\
\sigma_x & = \left(BR_t + \frac{\psi}{|x_{\alpha}|}BR_{\alpha}\right) \cdot x_{\alpha}^{\perp} + \frac{\gamma}{2|x_{\alpha}|^{2}}\left(x_{\alpha t} + \frac{\psi}{|x_{\alpha}|}x_{\alpha \alpha}\right)\cdot x_{\alpha}^{\perp}\nonumber\\
& \label{X} + Q_x\left|BR + \frac{\gamma}{2|x_{\alpha}|^{2}}x_{\alpha}\right|^{2}\nabla Q(x) \cdot x_{\alpha}^{\perp} + \nabla P_{2}^{-1}(x) \cdot x_{\alpha}^{\perp}
\end{align}
In $\sigma_{x}$ there are error terms but they are not dangerous. Then, we find
\begin{align*}
\psi_{\alpha t} & = \text{NICE } \\
&- Q_x^{2}\left(\left(BR_t + \frac{\psi}{|x_{\alpha}|}BR_{\alpha}\right) \cdot x_{\alpha}^{\perp}
+ \frac{\gamma}{2|x_{\alpha}|^{2}}\left(x_{\alpha t} + \frac{\psi}{|x_{\alpha}|}x_{\alpha \alpha}\right)\cdot x_{\alpha}^{\perp}
+ \nabla P_{2}^{-1}(x) \cdot x_{\alpha}^{\perp}\right)\frac{x_{\alpha \alpha} \cdot x_{\alpha}^{\perp}}{|x_{\alpha}|^{3}} \\
& \underbrace{+ (Q_x^{2} BR)_{\alpha} \cdot x_{\alpha}^{\perp} \frac{x_{\alpha t} \cdot x_{\alpha}^{\perp}}{|x_{\alpha}|^{3}}
+ Q_x^{2}\left(\frac{\psi}{|x_{\alpha}|}BR_{\alpha} \cdot x_{\alpha}^{\perp} + \frac{\gamma}{2|x_{\alpha}|^{2}}\left(x_{\alpha t} + \frac{\psi}{|x_{\alpha}|}x_{\alpha \alpha}\right) \cdot x_{\alpha}^{\perp}\right)\frac{x_{\alpha \alpha} \cdot x_{\alpha}^{\perp}}{|x_{\alpha}|^{3}}}_{(19)}\\
& - Q_x^3\left(|BR|^{2} + \frac{b_s^2|x_{\alpha}|^{2}}{Q_x^{4}}+2b_s \frac{BR \cdot x_{\alpha}}{Q_x^{2}} - \frac{\psi^{2}}{Q_x^4}\right)\nabla Q(x) \cdot x_{\alpha}^{\perp} \frac{x_{\alpha \alpha} \cdot x_{\alpha}^{\perp}}{|x_{\alpha}|^{3}} \\
& + (Q_x \gamma + 2Q_x BR \cdot x_{\alpha})\nabla Q(x) \cdot x_{\alpha}^{\perp} \frac{x_{\alpha t} \cdot x_{\alpha}^{\perp}}{|x_{\alpha}|^{3}} \\
& - \left(\frac{Q_x^{3}(|BR|^{2})_{\alpha}}{|x_{\alpha}|} + \frac{(b_s^{2}|x_{\alpha}|)_{\alpha}}{Q_x}
+ \left(2b_s BR \cdot \frac{x_{\alpha}}{|x_{\alpha}|}\right)_{\alpha}Q_x\right)(Q_x)_{\alpha} + \tilde{\mathcal{E}}_{\alpha}^{1}
\end{align*}
Line (19) can be written as
\begin{align*}
(19) & = (Q_x^{2} BR)_{\alpha} \cdot x_{\alpha}^{\perp} \frac{x_{\alpha t} \cdot x_{\alpha}^{\perp}}{|x_{\alpha}|^{3}}
+ Q_x^{2}BR_{\alpha} \cdot x_{\alpha}^{\perp}\frac{\psi}{|x_{\alpha}|}\frac{x_{\alpha \alpha} \cdot x_{\alpha}^{\perp}}{|x_{\alpha}|^{3}}\\
&+ \frac{Q_x^{2}\gamma}{2|x_{\alpha}|^{2}}\left(x_{\alpha t} + \frac{\psi}{|x_{\alpha}|}x_{\alpha \alpha}\right)
\cdot x_{\alpha}^{\perp}\frac{x_{\alpha \alpha} \cdot x_{\alpha}^{\perp}}{|x_{\alpha}|^{3}} \\
& = (Q_x^{2} BR)_{\alpha} \cdot x_{\alpha}^{\perp} \frac{x_{\alpha t} \cdot x_{\alpha}^{\perp}}{|x_{\alpha}|^{3}}
+ (Q_x^{2}BR)_{\alpha} \cdot x_{\alpha}^{\perp}\frac{\psi}{|x_{\alpha}|}\frac{x_{\alpha \alpha} \cdot x_{\alpha}^{\perp}}{|x_{\alpha}|^{3}}\\
&+ \frac{Q_x^{2}\gamma}{2|x_{\alpha}|^{2}}\left(x_{\alpha t}\cdot x_{\alpha}^{\perp} + \frac{\psi}{|x_{\alpha}|}x_{\alpha \alpha}\cdot x_{\alpha}^{\perp}\right)
\frac{x_{\alpha \alpha} \cdot x_{\alpha}^{\perp}}{|x_{\alpha}|^{3}} - 2Q_x(Q_x)_{\alpha} BR \cdot x_{\alpha}^{\perp} \frac{\psi}{|x_{\alpha}|}\frac{x_{\alpha \alpha} \cdot x_{\alpha}^{\perp}}{|x_{\alpha}|^{3}} \\
& = (Q_x^{2} BR)_{\alpha} \cdot x_{\alpha}^{\perp}\frac{1}{|x_{\alpha}|^{3}}\left( x_{\alpha t} \cdot x_{\alpha}^{\perp}
+ \frac{\psi}{|x_{\alpha}|}x_{\alpha \alpha} \cdot x_{\alpha}^{\perp}\right)\\
&+ \frac{Q_x^{2}\gamma}{2|x_{\alpha}|^{2}}\frac{1}{|x_{\alpha}|^{3}}\left(x_{\alpha t}\cdot x_{\alpha}^{\perp} + \frac{\psi}{|x_{\alpha}|}x_{\alpha \alpha}\cdot x_{\alpha}^{\perp}\right)
x_{\alpha \alpha} \cdot x_{\alpha}^{\perp} - 2Q_x(Q_x)_{\alpha} BR \cdot x_{\alpha}^{\perp} \frac{\psi}{|x_{\alpha}|}\frac{x_{\alpha \alpha} \cdot x_{\alpha}^{\perp}}{|x_{\alpha}|^{3}} \\
& = \frac{1}{|x_{\alpha}|^{3}}\left( x_{\alpha t} \cdot x_{\alpha}^{\perp}
+ \frac{\psi}{|x_{\alpha}|}x_{\alpha \alpha} \cdot x_{\alpha}^{\perp}\right)\left((Q_x^{2} BR)_{\alpha} \cdot x_{\alpha}^{\perp}
+ \frac{Q_x^{2}\gamma}{2|x_{\alpha}|^{2}}x_{\alpha \alpha} \cdot x_{\alpha}^{\perp}\right)\\
&- 2Q_x(Q_x)_{\alpha} BR \cdot x_{\alpha}^{\perp} \frac{\psi}{|x_{\alpha}|}\frac{x_{\alpha \alpha} \cdot x_{\alpha}^{\perp}}{|x_{\alpha}|^{3}}
\end{align*}
We expand $x_{\alpha t}$ to find
\begin{align*}
(19) & = \frac{1}{|x_{\alpha}|^{3}}\left((Q_x^{2} BR)_{\alpha} \cdot x_{\alpha}^{\perp}
+ \frac{Q_x^{2}\gamma}{2|x_{\alpha}|^{2}}x_{\alpha \alpha} \cdot x_{\alpha}^{\perp}\right)^2\\
&+ \underbrace{\frac{x_{\alpha \alpha} \cdot x_{\alpha}^{\perp}}{|x_{\alpha}|^{3}}\left((Q_x^{2} BR)_{\alpha} \cdot x_{\alpha}^{\perp}
+ \frac{Q_x^{2}\gamma}{2|x_{\alpha}|^{2}}x_{\alpha \alpha} \cdot x_{\alpha}^{\perp}\right)b_e}_{\text{error term: we incorporate it as }\tilde{\mathcal{E}}_{\alpha}^{2}} - 2Q_x(Q_x)_{\alpha} BR \cdot x_{\alpha}^{\perp} \frac{\psi}{|x_{\alpha}|}\frac{x_{\alpha \alpha} \cdot x_{\alpha}^{\perp}}{|x_{\alpha}|^{3}}
\end{align*}
We denote
\begin{equation}
\label{spiral}
G_{x}(\alpha) = (Q_x^{2} BR)_{\alpha} \cdot x_{\alpha}^{\perp}
+ \frac{Q_x^{2}\gamma}{2|x_{\alpha}|^{2}}x_{\alpha \alpha} \cdot x_{\alpha}^{\perp}
\end{equation}
We claim that
$$ G_x(\alpha) = \text{NICE } + |x_{\alpha}|H(\partial_{\alpha} \psi)$$
which gives
$$ (G_x(\alpha))^2 = \text{NICE}$$
Then
\begin{align*}
(19) = \text{NICE } - 2Q_x(Q_x)_{\alpha} BR \cdot x_{\alpha}^{\perp} \frac{\psi}{|x_{\alpha}|}\frac{x_{\alpha \alpha} \cdot x_{\alpha}^{\perp}}{|x_{\alpha}|^{3}} + \tilde{\mathcal{E}}_{\alpha}^{2}
\end{align*}
We write
\begin{align*}
G_x(\alpha) & = \underbrace{2Q_x(Q_x)_{\alpha} BR \cdot x_{\alpha}^{\perp}}_{\text{NICE, at the level of }x_{\alpha}}
+ \underbrace{Q_x^{2}\frac{1}{2\pi}\int \frac{(x_{\alpha}(\alpha) - x_{\alpha}(\alpha-\beta)) \cdot x_{\alpha}(\alpha)}{|x(\alpha)-x(\alpha-\beta)|^{2}}\gamma(\alpha-\beta)d\beta}_{\text{NICE, we use that }|x_{\alpha}|^{2} = A_x(t)} \\
& \underbrace{- Q_x^{2}\frac{1}{\pi}\int \frac{(x_{\alpha}(\alpha) - x_{\alpha}(\alpha-\beta)) \cdot x_{\alpha}(\alpha)}{|x(\alpha)-x(\alpha-\beta)|^{4}}(x(\alpha) - x(\alpha - \beta)) \cdot (x_{\alpha}(\alpha) - x_{\alpha}(\alpha - \beta))\gamma(\alpha-\beta)d\beta}_{\text{NICE, we use that }|x_{\alpha}|^{2} \text{ only depends on time}} \\
& + \underbrace{Q_x^2 BR(x,\gamma_{\alpha}) \cdot x_{\alpha}^{\perp}}_{\text{Hilbert transform applied to }\gamma_{\alpha}} + \frac{Q_x^{2}\gamma}{2|x_{\alpha}|^{2}}x_{\alpha \alpha} \cdot x_{\alpha}^{\perp}
\end{align*}
Therefore
\begin{align*}
G_x(\alpha) & = \text{NICE } + |x_{\alpha}|Q_x^{2}H\left(\left(\frac{\gamma}{2|x_{\alpha}|}\right)_{\alpha}\right) + \frac{Q_x^{2}\gamma}{2|x_{\alpha}|^{2}}x_{\alpha \alpha} \cdot x_{\alpha}^{\perp} \\
& = \text{NICE } + |x_{\alpha}|H\left(\left(\frac{Q_x^{2}\gamma}{2|x_{\alpha}|}\right)_{\alpha}\right) + \frac{Q_x^{2}\gamma}{2|x_{\alpha}|^{2}}x_{\alpha \alpha} \cdot x_{\alpha}^{\perp} \\
& = \text{NICE } + |x_{\alpha}|H(\partial_{\alpha} \psi) + H\left((b_s |x_{\alpha}|^{2})_{\alpha}\right)+ \frac{Q_x^{2}\gamma}{2|x_{\alpha}|^{2}}x_{\alpha \alpha} \cdot x_{\alpha}^{\perp} \\
& = \text{NICE } + |x_{\alpha}|H(\psi_{\alpha}) - H\left((Q_{x}^{2}BR)_{\alpha} \cdot x_{\alpha}\right)+ \frac{Q_x^{2}\gamma}{2|x_{\alpha}|^{2}}x_{\alpha \alpha} \cdot x_{\alpha}^{\perp}
\end{align*}
\begin{align*}
&(Q_{x}^{2}BR)_{\alpha} \cdot x_{\alpha} = \underbrace{2Q_x(Q_x)_{\alpha} BR \cdot x_{\alpha}}_{\text{NICE}}
+ Q_x^{2}\frac{1}{2\pi}\int \frac{(x_{\alpha}(\alpha) - x_{\alpha}(\alpha-\beta))^{\perp} \cdot x_{\alpha}(\alpha)}{|x(\alpha)-x(\alpha-\beta)|^{2}}\gamma(\alpha-\beta)d\beta \\
& = \underbrace{- Q_x^{2}\frac{1}{\pi}\int \frac{(x(\alpha) - x(\alpha-\beta))^{\perp} \cdot x_{\alpha}(\alpha)}{|x(\alpha)-x(\alpha-\beta)|^{4}}(x(\alpha) - x(\alpha - \beta)) \cdot (x_{\alpha}(\alpha) - x_{\alpha}(\alpha - \beta))\gamma(\alpha-\beta)d\beta}_{\text{NICE, extra cancellation in }(x(\alpha) - x(\alpha - \beta))^{\perp} \cdot x_{\alpha}(\alpha)} \\
& + \underbrace{Q_x^{2}\frac{1}{2\pi}\int \frac{(x(\alpha) - x(\alpha-\beta))^{\perp} \cdot x_{\alpha}(\alpha)}{|x(\alpha)-x(\alpha-\beta)|^{2}}\gamma(\alpha-\beta)d\beta}_{\text{NICE, extra cancellation in }(x(\alpha) - x(\alpha - \beta))^{\perp} \cdot x_{\alpha}(\alpha)}
\end{align*}
This means that
\begin{align*}
(Q_{x}^{2}BR)_{\alpha} \cdot x_{\alpha} & = \text{NICE } + \frac{1}{2}H\left(Q_x^2\frac{\partial_{\alpha}^{2}x^{\perp} \cdot x_{\alpha}}{|x_{\alpha}|^{2}}\gamma\right)
\end{align*}
Taking Hilbert transforms:
\betagin{align*}
-H\left((Q_{x}^{2}BR)_{\alpha} \cdot x_{\alpha}\right) & = \thetaxt{NICE } - \frac{1}{2}H^2\left(Q_x^2\frac{\partialrtial_{\alpha}^{2}x^{\perp} \cdot x_{\alpha}}{|x_{\alpha}|^{2}}\gammaamma\right)
= \thetaxt{NICE } + \frac{1}{2}Q_x^2\frac{\partialrtial_{\alpha}^{2}x^{\perp} \cdot x_{\alpha}}{|x_{\alpha}|^{2}}\gammaamma
\etand{align*}
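The sign change in the last equality is the standard identity for the Hilbert transform on the circle: for a $2\pi$-periodic function $f$,
\begin{align*}
H^{2}f = -f + \frac{1}{2\pi}\int_{-\pi}^{\pi} f \, d\alpha,
\end{align*}
and the mean contributes only a harmless term that is absorbed into NICE.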
Using that $\partialrtial_{\alpha}^{2} x^{\perp} \cdot x_{\alpha} = -\partialrtial_{\alpha}^{2} x \cdot x_{\alpha}^{\perp}$ we are done. Thus (19) yields
\begin{align*}
\psi_{\alpha t} & = \text{NICE } - Q_x^{2}\left(\left(BR_t + \frac{\psi}{|x_{\alpha}|}BR_{\alpha}\right) \cdot x_{\alpha}^{\perp}\right.\\
&+\left. \frac{\gamma}{2|x_{\alpha}|^{2}}\left(x_{\alpha t} + \frac{\psi}{|x_{\alpha}|}x_{\alpha \alpha}\right)\cdot x_{\alpha}^{\perp}
+ \nabla P_{2}^{-1}(x) \cdot x_{\alpha}^{\perp}\right)\frac{x_{\alpha \alpha} \cdot x_{\alpha}^{\perp}}{|x_{\alpha}|^{3}} \\
& - Q_{x}^3\left(|BR|^{2} + \frac{b_s^2|x_{\alpha}|^{2}}{Q_x^{4}}+2b_s \frac{BR \cdot x_{\alpha}}{Q_x^{2}} - \frac{\psi^{2}}{Q_x^4}\right)\nabla Q(x) \cdot x_{\alpha}^{\perp} \frac{x_{\alpha \alpha} \cdot x_{\alpha}^{\perp}}{|x_{\alpha}|^{3}} \\
& + (Q_x \gamma + 2Q_x BR \cdot x_{\alpha})\nabla Q(x) \cdot x_{\alpha}^{\perp} \frac{x_{\alpha t} \cdot x_{\alpha}^{\perp}}{|x_{\alpha}|^{3}} \\
& \underbrace{- \left(\frac{Q_x^{3}(|BR|^{2})_{\alpha}}{|x_{\alpha}|} + \frac{(b_s^{2}|x_{\alpha}|)_{\alpha}}{Q_x}
+ \left(2b_s BR \cdot \frac{x_{\alpha}}{|x_{\alpha}|}\right)_{\alpha}Q_x\right)(Q_x)_{\alpha}}_{(20)} \\
& \underbrace{- 2Q_x(Q_x)_{\alpha} BR \cdot x_{\alpha}^{\perp} \frac{\psi}{|x_{\alpha}|} \frac{x_{\alpha \alpha} \cdot x_{\alpha}^{\perp}}{|x_{\alpha}|^{3}}}_{(21)}+ \mathcal{E}_{\alpha}^{2}, \quad \text{ where } \quad \mathcal{E}_{\alpha}^{2} = \tilde{\mathcal{E}}_{\alpha}^{1} + \tilde{\mathcal{E}}_{\alpha}^{2}
\end{align*}
For (20) we write
\begin{align*}
|x_t|^{2} & = Q_x^{4}|BR|^{2} + b_s^{2}|x_{\alpha}|^{2} + 2 Q_x^{2} b_s BR \cdot x_{\alpha} \\
& + \underbrace{b_e^{2}|x_{\alpha}|^{2} + |f|^{2} + 2Q_x^{2} BR \cdot x_{\alpha} b_e + 2b_s b_e |x_{\alpha}|^{2} + 2Q_x^{2} BR \cdot f
+ 2b_{s} x_{\alpha} \cdot f + 2b_e x_{\alpha} \cdot f}_{\text{ error terms } \tilde{\mathcal{E}}_{\alpha}^{3}} \\
\Rightarrow \frac{|x_t|^{2}}{Q_x|x_{\alpha}|} & = \frac{Q_x^{3}|BR|^{2}}{|x_{\alpha}|} + \frac{b_s^{2}|x_{\alpha}|}{Q_x} + 2 Q_x b_s BR \cdot \frac{x_{\alpha}}{|x_{\alpha}|} + \frac{\tilde{\mathcal{E}}_{\alpha}^{3}}{Q_x|x_{\alpha}|}
\end{align*}
Now
\begin{align*}
(20) = \text{NICE } - \frac{(|x_t|^{2})_{\alpha}}{Q_x|x_{\alpha}|}(Q_x)_{\alpha} + \frac{\tilde{\mathcal{E}}_{\alpha}^{3}}{Q_x|x_{\alpha}|}(Q_x)_{\alpha}
\end{align*}
which means
\begin{align*}
(20)+(21) & = \text{NICE } - \frac{(|x_t|^{2})_{\alpha}}{Q_x|x_{\alpha}|}(Q_x)_{\alpha}
- 2Q_x(Q_x)_{\alpha} BR \cdot x_{\alpha}^{\perp} \frac{\psi}{|x_{\alpha}|} \frac{x_{\alpha \alpha} \cdot x_{\alpha}^{\perp}}{|x_{\alpha}|^{3}} + \frac{\tilde{\mathcal{E}}_{\alpha}^{3}}{Q_x|x_{\alpha}|}(Q_x)_{\alpha}
\end{align*}
We write
\begin{align*}
x_{\alpha t} & = \underbrace{(x_{\alpha t} \cdot x_{\alpha}) \frac{x_{\alpha}}{|x_{\alpha}|^{2}}}_{\text{only depends on }t}
+ (x_{\alpha t} \cdot x_{\alpha}^{\perp}) \frac{x_{\alpha}^{\perp}}{|x_{\alpha}|^{2}} \\
& = \left(B_x(t) + \frac{1}{2\pi}\int_{-\pi}^{\pi}f_{\beta} \cdot \frac{x_{\beta}}{|x_{\beta}|^{2}}d\beta\right)x_{\alpha} + \left((Q_{x}^{2} BR)_{\alpha} \cdot x_{\alpha}^{\perp} + b x_{\alpha \alpha} \cdot x_{\alpha}^{\perp} + f_{\alpha} \cdot x_{\alpha}^{\perp}\right)\frac{x_{\alpha}^{\perp}}{|x_{\alpha}|^{2}} \\
& = \left(B_x(t) + \frac{1}{2\pi}\int_{-\pi}^{\pi}f_{\beta} \cdot \frac{x_{\beta}}{|x_{\beta}|^{2}}d\beta\right)x_{\alpha} + \left((Q_{x}^{2} BR)_{\alpha} \cdot x_{\alpha}^{\perp} + b_s x_{\alpha \alpha} \cdot x_{\alpha}^{\perp}\right)\frac{x_{\alpha}^{\perp}}{|x_{\alpha}|^{2}}\\
&+ \left(b_e x_{\alpha \alpha} \cdot x_{\alpha}^{\perp} + f_{\alpha} \cdot x_{\alpha}^{\perp}\right)\frac{x_{\alpha}^{\perp}}{|x_{\alpha}|^{2}} \\
& = \left(B_x(t) + \frac{1}{2\pi}\int_{-\pi}^{\pi}f_{\beta} \cdot \frac{x_{\beta}}{|x_{\beta}|^{2}}d\beta\right)x_{\alpha} +
\underbrace{\left((Q_{x}^{2} BR)_{\alpha} \cdot x_{\alpha}^{\perp} + \frac{Q_x^{2}\gamma}{2|x_{\alpha}|^{2}} x_{\alpha \alpha} \cdot x_{\alpha}^{\perp}\right)}_{G_x(\alpha) \text{ as in }\eqref{spiral}}\frac{x_{\alpha}^{\perp}}{|x_{\alpha}|^{2}}\\
& - \frac{\psi}{|x_{\alpha}|}x_{\alpha \alpha} \cdot x_{\alpha}^{\perp} \frac{x_{\alpha}^{\perp}}{|x_{\alpha}|^{2}} + \left(b_e x_{\alpha \alpha} \cdot x_{\alpha}^{\perp} + f_{\alpha} \cdot x_{\alpha}^{\perp}\right)\frac{x_{\alpha}^{\perp}}{|x_{\alpha}|^{2}}
\end{align*}
Writing $x_t = Q_x^{2}BR + b_s x_{\alpha} + b_e x_{\alpha} + f$ we compute
\begin{align*}
x_{\alpha t} \cdot x_{t} & = \underbrace{Q_{x}^{2} BR \cdot x_{\alpha}}_{\text{NICE}}\left(B_x(t) + \underbrace{\frac{1}{2\pi}\int_{-\pi}^{\pi}f_{\beta} \cdot \frac{x_{\beta}}{|x_{\beta}|^{2}}d\beta}_{\text{error}}\right) + \underbrace{G_x(\alpha) Q_{x}^{2} BR \cdot \frac{x_{\alpha}^{\perp}}{|x_{\alpha}|^{2}}}_{\text{NICE because }G_x \text{ is nice}} \\
& - \frac{\psi}{|x_{\alpha}|}x_{\alpha \alpha} \cdot x_{\alpha}^{\perp} \, Q_x^{2} BR \cdot \frac{x_{\alpha}^{\perp}}{|x_{\alpha}|^{2}}
+ Q_x^{2} BR \cdot \frac{x_{\alpha}^{\perp}}{|x_{\alpha}|^{2}}\left(b_e x_{\alpha \alpha} \cdot x_{\alpha}^{\perp} + \underbrace{f_{\alpha} \cdot x_{\alpha}^{\perp}}_{\text{error}}\right) \\
& + \underbrace{b_s\left(B_x(t) + \frac{1}{2\pi}\int_{-\pi}^{\pi}f_{\beta} \cdot \frac{x_{\beta}}{|x_{\beta}|^{2}}d\beta\right)|x_{\alpha}|^{2}}_{\text{NICE}}
+ b_e\left(\underbrace{B_x(t)}_{\text{error}} + \frac{1}{2\pi}\int_{-\pi}^{\pi}f_{\beta} \cdot \frac{x_{\beta}}{|x_{\beta}|^{2}}d\beta\right)|x_{\alpha}|^{2} + \hat{\mathcal{E}}
\end{align*}
where $\hat{\mathcal{E}}$ is an error term. To simplify we write
$$ x_{\alpha t} \cdot x_{t} = \text{NICE }- \frac{\psi}{|x_{\alpha}|}x_{\alpha \alpha} \cdot x_{\alpha}^{\perp} \, Q_x^{2} BR \cdot \frac{x_{\alpha}^{\perp}}{|x_{\alpha}|^{2}} + \text{ errors}$$
Setting the above formula in the expression of (20)+(21) allows us to find
$$ (20) + (21) = \text{NICE }+ \text{ errors }$$
This yields
\begin{align*}
\psi_{\alpha t} & = \text{NICE } - Q_x^{2}\left(\left(BR_t + \frac{\psi}{|x_{\alpha}|}BR_{\alpha}\right)
+ \frac{\gamma}{2|x_{\alpha}|^{2}}\left(x_{\alpha t} + \frac{\psi}{|x_{\alpha}|}x_{\alpha \alpha}\right)
+ \nabla P_{2}^{-1}(x)\right)\cdot x_{\alpha}^{\perp}\frac{x_{\alpha \alpha} \cdot x_{\alpha}^{\perp}}{|x_{\alpha}|^{3}} \\
& - Q_{x}^3\left(|BR|^{2} + \frac{b_s^2|x_{\alpha}|^{2}}{Q_x^{4}}+2b_s \frac{BR \cdot x_{\alpha}}{Q_x^{2}} - \frac{\psi^{2}}{Q_x^4}\right)\nabla Q(x) \cdot x_{\alpha}^{\perp} \frac{x_{\alpha \alpha} \cdot x_{\alpha}^{\perp}}{|x_{\alpha}|^{3}} \\
& + (Q_x \gamma + 2Q_x BR \cdot x_{\alpha})\nabla Q(x) \cdot x_{\alpha}^{\perp} \frac{x_{\alpha t} \cdot x_{\alpha}^{\perp}}{|x_{\alpha}|^{3}} + \mathcal{E}_{\alpha}^{3}
\end{align*}
where $\mathcal{E}_{\alpha}^{3}$ is a new error term. We now complete the formula for $\sigma_x$ in \eqref{X} to find
\begin{align*}
\psi_{\alpha t} & = \text{NICE } - Q_x^{2}\sigma_{x} \frac{x_{\alpha \alpha} \cdot x_{\alpha}^{\perp}}{|x_{\alpha}|^{3}} \\
& + \underbrace{Q_x^{3} \left|BR + \frac{\gamma}{2|x_{\alpha}|^{2}}x_{\alpha}\right|^{2}
\nabla Q(x) \cdot x_{\alpha}^{\perp} \frac{x_{\alpha \alpha} \cdot x_{\alpha}^{\perp}}{|x_{\alpha}|^{3}}}_{(22)} \\
& + \underbrace{Q_{x}^3\left(-|BR|^{2} - \frac{b_s^2|x_{\alpha}|^{2}}{Q_x^{4}}-2b_s \frac{BR \cdot x_{\alpha}}{Q_x^{2}} + \frac{\psi^{2}}{Q_x^4}\right)\nabla Q(x) \cdot x_{\alpha}^{\perp} \frac{x_{\alpha \alpha} \cdot x_{\alpha}^{\perp}}{|x_{\alpha}|^{3}}}_{(23)} \\
& + \underbrace{(Q_x \gamma + 2Q_x BR \cdot x_{\alpha})\nabla Q(x) \cdot x_{\alpha}^{\perp} \frac{x_{\alpha t} \cdot x_{\alpha}^{\perp}}{|x_{\alpha}|^{3}}}_{(24)} + \mathcal{E}_{\alpha}^{3}
\end{align*}
Expanding
$$ \frac{\psi^{2}}{Q_x^{4}} = \frac{\gamma^{2}}{4|x_{\alpha}|^{2}} + \frac{b_s^{2}|x_{\alpha}|^{2}}{Q_x^{4}} - \frac{\gamma b_s}{Q_x^{2}}$$
we find
\begin{align*}
(22) + (23) = Q_{x}^3\left(\frac{\gamma^{2}}{2|x_{\alpha}|^{2}} + BR \cdot x_{\alpha} \frac{\gamma}{|x_{\alpha}|^{2}}-2b_s \frac{BR \cdot x_{\alpha}}{Q_x^{2}}
- \frac{\gamma b_s}{Q_x^{2}}\right)\nabla Q(x) \cdot x_{\alpha}^{\perp} \frac{x_{\alpha \alpha} \cdot x_{\alpha}^{\perp}}{|x_{\alpha}|^{3}}
\end{align*}
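The expansion of $\psi^{2}/Q_x^{4}$ used above can be checked directly from the relation $\psi = \frac{Q_x^{2}\gamma}{2|x_{\alpha}|} - b_s|x_{\alpha}|$: squaring and dividing by $Q_x^{4}$,
\begin{align*}
\frac{\psi^{2}}{Q_x^{4}} = \frac{1}{Q_x^{4}}\left(\frac{Q_x^{2}\gamma}{2|x_{\alpha}|}\right)^{2} - \frac{2}{Q_x^{4}}\cdot\frac{Q_x^{2}\gamma}{2|x_{\alpha}|}\, b_s|x_{\alpha}| + \frac{b_s^{2}|x_{\alpha}|^{2}}{Q_x^{4}}
= \frac{\gamma^{2}}{4|x_{\alpha}|^{2}} + \frac{b_s^{2}|x_{\alpha}|^{2}}{Q_x^{4}} - \frac{\gamma b_s}{Q_x^{2}}.
\end{align*}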
Writing
\begin{align*}
x_{\alpha t} \cdot x_{\alpha}^{\perp} = (Q_x^{2} BR)_{\alpha} \cdot x_{\alpha}^{\perp} + b_{s} x_{\alpha \alpha} \cdot x_{\alpha}^{\perp} + \text{ errors}
\end{align*}
we obtain that
\begin{align*}
(24)& = \left(Q_x \gamma + 2Q_x BR \cdot x_{\alpha}\right)\nabla Q(x) \cdot x_{\alpha}^{\perp} \frac{(Q_x^{2} BR)_{\alpha} \cdot x_{\alpha}^{\perp}}{|x_{\alpha}|^{3}} \\
& + \left(Q_x \gamma + 2Q_x BR \cdot x_{\alpha}\right)\nabla Q(x) \cdot x_{\alpha}^{\perp} b_s\frac{x_{\alpha \alpha} \cdot x_{\alpha}^{\perp}}{|x_{\alpha}|^{3}}
+ \text{ errors}
\end{align*}
Thus
\begin{align*}
(22)+(23)&+(24) = Q_{x}^3\left(\frac{\gamma^{2}}{2|x_{\alpha}|^{2}} + BR \cdot x_{\alpha} \frac{\gamma}{|x_{\alpha}|^{2}}
\right)\nabla Q(x) \cdot x_{\alpha}^{\perp} \frac{x_{\alpha \alpha} \cdot x_{\alpha}^{\perp}}{|x_{\alpha}|^{3}} \\
& + \left(Q_x \gamma + 2Q_x BR \cdot x_{\alpha}\right)\nabla Q(x) \cdot x_{\alpha}^{\perp} \frac{(Q_x^{2} BR)_{\alpha} \cdot x_{\alpha}^{\perp}}{|x_{\alpha}|^{3}}
+ \text{ errors} \\
& = Q_x \nabla Q(x) \cdot x_{\alpha}^{\perp} \left(\gamma + 2BR \cdot x_{\alpha}\right)\left(\frac{Q_x^{2}\gamma}{2|x_{\alpha}|^{2}}\frac{x_{\alpha \alpha} \cdot x_{\alpha}^{\perp}}{|x_{\alpha}|^{3}} + \frac{(Q_x^{2}BR)_{\alpha} \cdot x_{\alpha}^{\perp}}{|x_{\alpha}|^{3}}\right) + \text{ errors} \\
& = Q_x \nabla Q(x) \cdot x_{\alpha}^{\perp} \left(\gamma + 2BR \cdot x_{\alpha}\right)\frac{1}{|x_{\alpha}|^{3}}D_{x}(\alpha) + \text{ errors}\\
& = \text{NICE } + \text{ errors}
\end{align*}
Finally, we obtain
$$ \psi_{\alpha t} = \text{NICE}(x,\gamma,\psi) - Q_{x}^{2}\sigma_{x} \frac{x_{\alpha \alpha} \cdot x_{\alpha}^{\perp}}{|x_{\alpha}|^{3}}
+ \mathcal{E}_{\alpha}^{4}$$
For $\varphi_{\alpha t}$ we find
$$ \varphi_{\alpha t} = \text{NICE}(z,\omega,\varphi) - Q_{z}^{2}\sigma_{z} \frac{z_{\alpha \alpha} \cdot z_{\alpha}^{\perp}}{|z_{\alpha}|^{3}},$$
since we can apply the same methods as before to the equations with $f = g = 0$, which are satisfied by $(z,\omega,\varphi)$. Then:
\begin{align*}
II & = \int_{-\pi}^{\pi} \Lambda \partial_{\alpha}^{3} \mathcal{D} \cdot \partial_{\alpha}^{3} \mathcal{D}_{t} \, d\alpha =
\int_{-\pi}^{\pi} \Lambda \partial_{\alpha}^{3} \mathcal{D}\left(\text{NICE}(z,\omega,\varphi) - \text{NICE}(x,\gamma,\psi)\right)d\alpha \\
& - \int_{-\pi}^{\pi}\Lambda \partial_{\alpha}^{3} \mathcal{D}\left(\partial_{\alpha}^{2}\left(Q_{z}^{2} \sigma_{z} \frac{z_{\alpha \alpha} \cdot z_{\alpha}^{\perp}}{|z_{\alpha}|^{3}} - Q_x^{2} \sigma_{x} \frac{x_{\alpha \alpha} \cdot x_{\alpha}^{\perp}}{|x_{\alpha}|^{3}}\right)\right)d\alpha
- \int_{-\pi}^{\pi} \Lambda \partial_{\alpha}^{3} \mathcal{D} \, \mathcal{E}_{\alpha}^{4} \, d\alpha \\
& \equiv II_{1} + II_{2} + II_{3}
\end{align*}
$$ II_1 \leq CP(E(t)) \quad \text{ because we are dealing with the NICE term}$$
$$ II_{3} \leq CP(E(t)) + c\delta(t) \quad \text{ because of the errors}$$
It remains to estimate $II_{2}$. We consider the most singular terms
\begin{align*}
II_2 & = II_{2,1} + II_{2,2} + II_{2,3} + \text{ l.o.t} \\
II_{2,1} & = - \int_{-\pi}^{\pi}\Lambda (\partial_{\alpha}^{3} \mathcal{D})\left((Q_{z}^{2})_{\alpha \alpha} \sigma_{z} \frac{z_{\alpha \alpha} \cdot z_{\alpha}^{\perp}}{|z_{\alpha}|^{3}} - (Q_x^{2})_{\alpha \alpha} \sigma_{x} \frac{x_{\alpha \alpha} \cdot x_{\alpha}^{\perp}}{|x_{\alpha}|^{3}}\right)d\alpha \\
II_{2,2} & = - \int_{-\pi}^{\pi}\Lambda (\partial_{\alpha}^{3} \mathcal{D})\left(Q_{z}^{2} \sigma_{z} \frac{\partial_{\alpha}^{4}z \cdot z_{\alpha}^{\perp}}{|z_{\alpha}|^{3}} - Q_x^{2} \sigma_{x} \frac{\partial_{\alpha}^{4}x \cdot x_{\alpha}^{\perp}}{|x_{\alpha}|^{3}}\right)d\alpha \\
II_{2,3} & = - \int_{-\pi}^{\pi}\Lambda (\partial_{\alpha}^{3} \mathcal{D})\left(Q_{z}^{2} \partial_{\alpha}^{2} \sigma_{z} \frac{z_{\alpha \alpha} \cdot z_{\alpha}^{\perp}}{|z_{\alpha}|^{3}} - Q_x^{2} \partial_{\alpha}^{2} \sigma_{x} \frac{x_{\alpha \alpha} \cdot x_{\alpha}^{\perp}}{|x_{\alpha}|^{3}}\right)d\alpha
\end{align*}
\begin{align*}
II_{2,1} & = - \int_{-\pi}^{\pi}H (\partial_{\alpha}^{3} \mathcal{D})\left((Q_{z}^{2})_{\alpha \alpha} \sigma_{z} \frac{z_{\alpha \alpha} \cdot z_{\alpha}^{\perp}}{|z_{\alpha}|^{3}} - (Q_x^{2})_{\alpha \alpha} \sigma_{x} \frac{x_{\alpha \alpha} \cdot x_{\alpha}^{\perp}}{|x_{\alpha}|^{3}}\right)_{\alpha}d\alpha\\
&\leq CP(E(t)) + c\delta(t) \quad \text{ as before}
\end{align*}
For $II_{2,2}$ we decompose further
\begin{align*}
II_{2,2} & = - S + \widetilde{II}_{2,2}, \quad \text{ where} \\
\widetilde{II}_{2,2} & = - \int_{-\pi}^{\pi}\Lambda (\partial_{\alpha}^{3} \mathcal{D})\left(Q_{z}^{2} \sigma_{z} \frac{\partial_{\alpha}^{4}x \cdot z_{\alpha}^{\perp}}{|z_{\alpha}|^{3}} - Q_x^{2} \sigma_{x} \frac{\partial_{\alpha}^{4}x \cdot x_{\alpha}^{\perp}}{|x_{\alpha}|^{3}}\right)d\alpha \\
S &= \int_{-\pi}^{\pi}Q_{z}^{2}\sigma_{z} \, \partial_{\alpha}^{4}\mathcal{D} \cdot \frac{\partial_{\alpha} z^{\perp}(\alpha)}{|z_{\alpha}|^3}H(\partial_{\alpha}^{4} \mathcal{D})(\alpha) \, d\alpha
\end{align*}
We find that
\begin{align*}
\widetilde{II}_{2,2} \leq CP(E(t)) + c\delta(t)
\end{align*}
and $-S$ cancels out with $S$. We are done with $II_{2,2}$. We write
\begin{align*}
II_{2,3} & = \int_{-\pi}^{\pi}H (\partial_{\alpha}^{3} \mathcal{D})\left(Q_{z}^{2} \partial_{\alpha}^{3} \sigma_{z} \frac{z_{\alpha \alpha} \cdot z_{\alpha}^{\perp}}{|z_{\alpha}|^{3}} - Q_x^{2} \partial_{\alpha}^{3} \sigma_{x} \frac{x_{\alpha \alpha} \cdot x_{\alpha}^{\perp}}{|x_{\alpha}|^{3}}\right)d\alpha + \text{ l.o.t}
\end{align*}
We claim that
\begin{align}
\label{casitapaco}
Q_x^2 \partial_{\alpha}^3 \sigma_{x} = |x_{\alpha}|H(\partial_{\alpha}^{3} \psi_{t}) - b_s |x_{\alpha}| H(\partial_{\alpha}^{4} \psi) + \text{ errors } + \text{NICE}(x,\gamma,\psi)
\end{align}
In the local existence we get
\begin{align*}
Q_z^2 \partial_{\alpha}^3 \sigma_{z} = |z_{\alpha}|H(\partial_{\alpha}^{3} \varphi_{t}) - c |z_{\alpha}| H(\partial_{\alpha}^{4} \varphi) + \text{NICE}(z,\omega,\varphi)
\end{align*}
This implies
\begin{align*}
II_{2,3} & = II_{2,3,1} + II_{2,3,2} + II_{2,3,3} + II_{2,3,4} \\
II_{2,3,1} & = \int_{-\pi}^{\pi}H (\partial_{\alpha}^{3} \mathcal{D})\left(|z_{\alpha}|H(\partial_{\alpha}^{3} \varphi_t) \frac{z_{\alpha \alpha} \cdot z_{\alpha}^{\perp}}{|z_{\alpha}|^{3}} - |x_{\alpha}| H(\partial_{\alpha}^{3} \psi_t)\frac{x_{\alpha \alpha} \cdot x_{\alpha}^{\perp}}{|x_{\alpha}|^{3}}\right)d\alpha \\
II_{2,3,2} & = -\int_{-\pi}^{\pi}H (\partial_{\alpha}^{3} \mathcal{D})\left(c|z_{\alpha}|H(\partial_{\alpha}^{4} \varphi) \frac{z_{\alpha \alpha} \cdot z_{\alpha}^{\perp}}{|z_{\alpha}|^{3}} - b_s|x_{\alpha}| H(\partial_{\alpha}^{4} \psi)\frac{x_{\alpha \alpha} \cdot x_{\alpha}^{\perp}}{|x_{\alpha}|^{3}}\right)d\alpha \\
II_{2,3,3} & = \int_{-\pi}^{\pi}H (\partial_{\alpha}^{3} \mathcal{D})\left(\text{NICE}(z,\omega,\varphi) \frac{z_{\alpha \alpha} \cdot z_{\alpha}^{\perp}}{|z_{\alpha}|^{3}} - \text{NICE}(x,\gamma,\psi)\frac{x_{\alpha \alpha} \cdot x_{\alpha}^{\perp}}{|x_{\alpha}|^{3}}\right)d\alpha \\
II_{2,3,4} & = -\int_{-\pi}^{\pi}H (\partial_{\alpha}^{3} \mathcal{D})\frac{x_{\alpha \alpha} \cdot x_{\alpha}^{\perp}}{|x_{\alpha}|^{3}}d\alpha + \text{ errors}
\end{align*}
It is easy to find
$$ II_{2,3,4} \leq CP(E(t)) + c\delta(t), \quad \text{ error terms}$$
$$ II_{2,3,3} \leq CP(E(t)), \quad \text{ l.o.t}$$
In $II_{2,3,2}$ we split further:
\begin{align*}
II_{2,3,2} & = II_{2,3,2}^{1} + II_{2,3,2}^{2} \\
II_{2,3,2}^{1} & = -\int_{-\pi}^{\pi}H (\partial_{\alpha}^{3} \mathcal{D})|z_{\alpha}|H(\partial_{\alpha}^{4} \mathcal{D}) \frac{z_{\alpha \alpha} \cdot z_{\alpha}^{\perp}}{|z_{\alpha}|^{3}}d\alpha \\
II_{2,3,2}^{2} & = -\int_{-\pi}^{\pi}H (\partial_{\alpha}^{3} \mathcal{D})H(\partial_{\alpha}^{4} \psi)\left(c|z_{\alpha}| \frac{z_{\alpha \alpha} \cdot z_{\alpha}^{\perp}}{|z_{\alpha}|^{3}} - b_s|x_{\alpha}| \frac{x_{\alpha \alpha} \cdot x_{\alpha}^{\perp}}{|x_{\alpha}|^{3}}\right)d\alpha
\end{align*}
Then
$$ II_{2,3,2}^{1} = \frac{1}{2}\int_{-\pi}^{\pi}|H (\partial_{\alpha}^{3} \mathcal{D})|^{2}\left(|z_{\alpha}| \frac{z_{\alpha \alpha} \cdot z_{\alpha}^{\perp}}{|z_{\alpha}|^{3}}\right)_{\alpha}d\alpha \leq CP(E(t))$$
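The equality for $II_{2,3,2}^{1}$ is an integration by parts: since $H$ commutes with $\partial_{\alpha}$, one has $H(\partial_{\alpha}^{4}\mathcal{D}) = \partial_{\alpha}H(\partial_{\alpha}^{3}\mathcal{D})$, and for periodic functions
\begin{align*}
-\int_{-\pi}^{\pi} G \, \partial_{\alpha}G \, a \, d\alpha = \frac{1}{2}\int_{-\pi}^{\pi} G^{2} \, a_{\alpha} \, d\alpha,
\end{align*}
applied with $G = H(\partial_{\alpha}^{3}\mathcal{D})$ and $a = |z_{\alpha}|\frac{z_{\alpha \alpha} \cdot z_{\alpha}^{\perp}}{|z_{\alpha}|^{3}}$.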
For $II_{2,3,2}^{2}$:
$$ II_{2,3,2}^{2} = -\int_{-\pi}^{\pi}\Lambda^{1/2}(\partial_{\alpha}^{3}\psi)\Lambda^{1/2}\left(H (\partial_{\alpha}^{3} \mathcal{D})\left(c|z_{\alpha}| \frac{z_{\alpha \alpha} \cdot z_{\alpha}^{\perp}}{|z_{\alpha}|^{3}} - b_s|x_{\alpha}| \frac{x_{\alpha \alpha} \cdot x_{\alpha}^{\perp}}{|x_{\alpha}|^{3}}\right)\right)d\alpha \leq CP(E(t))$$
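The rewriting of $II_{2,3,2}^{2}$ uses that $\Lambda = H\partial_{\alpha}$ is self-adjoint, so that a half power can be moved onto each factor:
\begin{align*}
\int_{-\pi}^{\pi} f \, \Lambda g \, d\alpha = \int_{-\pi}^{\pi} \Lambda^{1/2}f \, \Lambda^{1/2}g \, d\alpha,
\end{align*}
applied here with $g = \partial_{\alpha}^{3}\psi$ after writing $H(\partial_{\alpha}^{4}\psi) = \Lambda(\partial_{\alpha}^{3}\psi)$.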
It remains
\begin{align*}
II_{2,3,1} & = II_{2,3,1}^{1} + II_{2,3,1}^{2} \\
II_{2,3,1}^{1} & = \int_{-\pi}^{\pi}H (\partial_{\alpha}^{3} \mathcal{D})H(\partial_{\alpha}^{3} \mathcal{D}_t) \frac{z_{\alpha \alpha} \cdot z_{\alpha}^{\perp}}{|z_{\alpha}|^{2}} d\alpha \\
II_{2,3,1}^{2} & = \int_{-\pi}^{\pi}H (\partial_{\alpha}^{3} \mathcal{D}) \underbrace{H(\partial_{\alpha}^{3}\psi_t)}_{\text{approx. sol.}}\left(\frac{z_{\alpha \alpha} \cdot z_{\alpha}^{\perp}}{|z_{\alpha}|^{2}} - \frac{x_{\alpha \alpha} \cdot x_{\alpha}^{\perp}}{|x_{\alpha}|^{2}}\right)d\alpha
\end{align*}
Then
$$ II_{2,3,1}^{2} \leq CP(E(t)) + c\delta(t)$$
At this point we remember that we had to deal with
$$ II = \int_{-\pi}^{\pi} \Lambda (\partial_{\alpha}^{3} \mathcal{D})\partial_{\alpha}^{3} \mathcal{D}_{t}d\alpha$$
so in $II_{2,3,1}^{1}$ we find one derivative less (or half a derivative less) and this shows that we can bound
$$ II_{2,3,1}^{1} \leq CP(E(t)) + c\delta(t)$$
by brute force. It remains to show claim \eqref{casitapaco}. We remember
\begin{align*}
Q_{x}^{2}\sigma_{x} & = Q_x^{2}\left(BR_t + \frac{\psi}{|x_{\alpha}|}BR_{\alpha}\right) \cdot x_{\alpha}^{\perp}
+ \frac{Q_x^2\gamma}{2|x_{\alpha}|^{2}}\left(x_{\alpha t} + \frac{\psi}{|x_{\alpha}|}x_{\alpha \alpha}\right) \cdot x_{\alpha}^{\perp} \\
& + \underbrace{Q_x^{3}\left|BR + \frac{\gamma}{2|x_{\alpha}|^{2}}x_{\alpha}\right|^{2}\nabla Q(x) \cdot x_{\alpha}^{\perp}}_{\text{this term is in }H^3 \text{ so it is NICE}} + \underbrace{Q_x^{2} \nabla P_{2}^{-1}(x) \cdot x_{\alpha}^{\perp}}_{\text{this term is also in }H^3}
\end{align*}
We write
\begin{align*}
\frac{Q_x^2\gamma}{2|x_{\alpha}|^{2}}&\left(x_{\alpha t} + \frac{\psi}{|x_{\alpha}|}x_{\alpha \alpha}\right) \cdot x_{\alpha}^{\perp}\\
& = \frac{Q_x^2\gamma}{2|x_{\alpha}|^{2}}\left((Q_x^{2} BR)_{\alpha} \cdot x_{\alpha}^{\perp} + b_s x_{\alpha \alpha} \cdot x_{\alpha}^{\perp}
+ b_e x_{\alpha \alpha} \cdot x_{\alpha}^{\perp} + f_{\alpha} \cdot x_{\alpha}^{\perp} + \frac{\psi}{|x_{\alpha}|}x_{\alpha \alpha} \cdot x_{\alpha}^{\perp}\right) \\
& = \frac{Q_x^2\gamma}{2|x_{\alpha}|^{2}}\left((Q_x^{2} BR)_{\alpha} \cdot x_{\alpha}^{\perp} + \left(b_s + \frac{\psi}{|x_{\alpha}|}\right) x_{\alpha \alpha} \cdot x_{\alpha}^{\perp}\right)\\
& + \frac{Q_x^2\gamma}{2|x_{\alpha}|^{2}}\left(b_e x_{\alpha \alpha} \cdot x_{\alpha}^{\perp} + f_{\alpha} \cdot x_{\alpha}^{\perp}\right) \\
& = \frac{Q_x^2\gamma}{2|x_{\alpha}|^{2}}\left((Q_x^{2} BR)_{\alpha} \cdot x_{\alpha}^{\perp} + \frac{Q_x^{2}\gamma}{2|x_{\alpha}|^{2}} x_{\alpha \alpha} \cdot x_{\alpha}^{\perp}\right) + \text{ errors} \\
& = \frac{Q_x^2\gamma}{2|x_{\alpha}|^{2}}G_x(\alpha) + \text{ errors} = \text{NICE } + \text{ errors}
\end{align*}
Finally, the most singular terms in $Q_x^{2} \sigma_{x}$ are
$$ L = Q_x^{2} BR_t \cdot x_{\alpha}^{\perp} + \frac{Q_x^{2}\psi}{|x_{\alpha}|}BR_{\alpha} \cdot x_{\alpha}^{\perp}$$
We take 3 derivatives and consider the most dangerous characters:
\begin{align*}
L & = M_1 + M_2 + M_3 + \text{ l.o.t} \\
M_1 & = Q_x^{2} BR(x,\partial_{\alpha}^{3} \gamma_t) \cdot x_{\alpha}^{\perp} + \frac{Q_x^{2}\psi}{|x_{\alpha}|}BR(x,\partial_{\alpha}^{4}\gamma) \cdot x_{\alpha}^{\perp} \\
M_2 & = Q_x^{2} \frac{1}{2\pi}\int_{-\pi}^{\pi}\frac{(\partial_{\alpha}^{3} x_{t}(\alpha) - \partial_{\alpha}^{3}x_{t}(\alpha-\beta))\cdot x_{\alpha}(\alpha)}{|x(\alpha) - x(\alpha-\beta)|^{2}}
\gamma(\alpha-\beta)d\beta \\
& + \frac{Q_x^{2}\psi}{|x_{\alpha}|} \frac{1}{2\pi}\int_{-\pi}^{\pi}\frac{(\partial_{\alpha}^{4} x(\alpha) - \partial_{\alpha}^{4}x(\alpha-\beta))\cdot x_{\alpha}(\alpha)}{|x(\alpha) - x(\alpha-\beta)|^{2}}
\gamma(\alpha-\beta)d\beta \\
M_3 & = -\frac{Q_x^2}{\pi}\int_{-\pi}^{\pi}\frac{\Delta_{\beta} x(\alpha) \cdot x_{\alpha}(\alpha)}{|\Delta_{\beta}x(\alpha)|^{4}}\Delta_{\beta} x(\alpha) \cdot
\Delta_{\beta} \partial_{\alpha}^{3}x_t(\alpha) \gamma(\alpha-\beta)d\beta \\
& -\frac{\psi Q_x^2}{|x_{\alpha}|\pi}\int_{-\pi}^{\pi}\frac{\Delta_{\beta} x(\alpha) \cdot x_{\alpha}(\alpha)}{|\Delta_{\beta}x(\alpha)|^{4}}\Delta_{\beta} x(\alpha) \cdot
\Delta_{\beta} \partial_{\alpha}^{4}x(\alpha) \gamma(\alpha-\beta)d\beta
\end{align*}
In $M_2$ we find
$$ M_2 = \frac{Q_x^2\gamma}{2|x_{\alpha}|^{2}}\Lambda(\partial_{\alpha}^{3} x_t \cdot x_{\alpha})
+ \frac{Q_x^{2}\psi\gamma}{|x_{\alpha}|^{3}}\Lambda(\partial_{\alpha}^{4}x \cdot x_{\alpha}) + \text{ l.o.t}$$
For the second term we use the usual trick
$$ \partial_{\alpha}^{4} x \cdot x_{\alpha} = -3\partial_{\alpha}^{3} x \cdot x_{\alpha \alpha}$$
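This identity follows from the arc-length condition: since $|x_{\alpha}|^{2} = A(t)$ does not depend on $\alpha$, successive differentiation in $\alpha$ gives
\begin{align*}
x_{\alpha} \cdot x_{\alpha \alpha} = 0 \;\Rightarrow\; |x_{\alpha \alpha}|^{2} + x_{\alpha} \cdot \partial_{\alpha}^{3} x = 0 \;\Rightarrow\; 3\, x_{\alpha \alpha} \cdot \partial_{\alpha}^{3} x + x_{\alpha} \cdot \partial_{\alpha}^{4} x = 0.
\end{align*}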
For the first term we remember that
\begin{align*}
& |x_{\alpha}|^{2} = A(t) \Rightarrow x_{\alpha} \cdot x_{\alpha t} = \frac{1}{2}A'(t) \Rightarrow (x_{\alpha} \cdot x_{\alpha t})_{\alpha} = 0 \\
& \Rightarrow x_{\alpha \alpha} \cdot x_{\alpha t} + x_{\alpha} \cdot x_{\alpha \alpha t} = 0 \Rightarrow
x_{\alpha \alpha \alpha} \cdot x_{\alpha t} + 2x_{\alpha \alpha} \cdot x_{\alpha \alpha t} + x_{\alpha} \cdot x_{\alpha \alpha \alpha t} = 0\\
& \Rightarrow x_{\alpha} \cdot x_{\alpha \alpha \alpha t} = - 2 x_{\alpha \alpha} \cdot x_{\alpha \alpha t} - x_{\alpha \alpha \alpha} \cdot x_{\alpha t}
\end{align*}
This allows us to control $M_2$. For $M_3$ we find
$$ M_3 = -\frac{Q_x^2\gamma}{|x_{\alpha}|^{2}}\Lambda(x_{\alpha} \cdot \partial_{\alpha}^{3} x_t)
- \frac{Q_x^{2}\psi\gamma}{|x_{\alpha}|^{3}}\Lambda(x_{\alpha} \cdot \partial_{\alpha}^{4}x) + \text{ l.o.t}$$
so it can be estimated in the same way as $M_2$. There remains $M_1$.
$$ M_1 = Q_x^{2} BR(x,\partial_{\alpha}^{3} \gamma_t) \cdot x_{\alpha}^{\perp} + \frac{Q_x^{2}\psi}{|x_{\alpha}|}BR(x,\partial_{\alpha}^{4}\gamma) \cdot x_{\alpha}^{\perp}$$
Using that $\Delta_{\beta} x^{\perp}(\alpha) \cdot x_{\alpha}^{\perp}(\alpha) = \Delta_{\beta} x(\alpha) \cdot x_{\alpha}(\alpha)$ we find
\begin{align}
\label{ouroboros}
M_1 = \frac{Q_x^{2}}{2}H(\partial_{\alpha}^{3}\gamma_{t}) + \frac{Q_x^{2} \psi}{2|x_{\alpha}|}H(\partial_{\alpha}^{4}\gamma) + \text{ l.o.t}
\end{align}
We compute
\begin{align}
\frac{Q_x^{2}}{2}H(\partial_{\alpha}^{3}\gamma_{t}) & = H\left(\partial_{\alpha}^{3}\left(\frac{Q_x^{2}\gamma}{2}\right)_{t}\right) + \text{ NICE} \nonumber \\
& = H(\partial_{\alpha}^{3}(|x_{\alpha}|\psi)_{t}) + H(\partial_{\alpha}^{3}(b_s|x_{\alpha}|^{2})_{t}) + \text{ NICE} \nonumber \\
& = |x_{\alpha}|H(\partial_{\alpha}^{3}\psi_{t}) + H(\partial_{\alpha}^{2}\partial_t(-(Q_x^{2} BR)_{\alpha} \cdot x_{\alpha})) + \text{ NICE} \label{anchor}
\end{align}
We compute the most singular term in
\begin{align*}
\partial_{\alpha}^{2}\partial_t(-(Q_x^{2} BR)_{\alpha} \cdot x_{\alpha}) & = -\frac{Q_x^{2}}{2\pi}\int_{-\pi}^{\pi}\frac{(\partial_{\alpha}^{3}x_t(\alpha) - \partial_{\alpha}^{3}x_t(\alpha-\beta))^{\perp} \cdot x_{\alpha}(\alpha)}{|\Delta_{\beta} x(\alpha)|^{2}}\gamma(\alpha-\beta)d\beta \\
& + \underbrace{\frac{Q_x^{2}}{\pi}\int_{-\pi}^{\pi}\frac{(\Delta_{\beta} x(\alpha))^{\perp} \cdot x_{\alpha}}{|\Delta_{\beta}x(\alpha)|^{4}}\Delta_{\beta} x(\alpha) \cdot \Delta_{\beta} \partial_{\alpha}^{3} x_{t}(\alpha) \gamma(\alpha-\beta)d\beta}_{\text{extra cancellation}} \\
& - \underbrace{\frac{Q_x^{2}}{2\pi}\int_{-\pi}^{\pi}\frac{(\Delta_{\beta} x(\alpha))^{\perp} \cdot x_{\alpha}}{|\Delta_{\beta}x(\alpha)|^{2}}\partial_{\alpha}^{3} \gamma_{t}(\alpha-\beta)d\beta}_{\text{extra cancellation}} + \text{ l.o.t. } + \text{ NICE}
\end{align*}
This shows that
\begin{align*}
\partial_{\alpha}^{2}\partial_t(-(Q_x^{2} BR)_{\alpha} \cdot x_{\alpha}) = -\frac{Q_x^{2}\gamma}{2|x_{\alpha}|^{2}}\Lambda(\partial_{\alpha}^{3} x_{t}^{\perp} \cdot x_{\alpha}) + \text{ l.o.t. } + \text{ NICE}
\end{align*}
That gives
\begin{align*}
\partial_{\alpha}^{2}\partial_t(-(Q_x^{2} BR)_{\alpha} \cdot x_{\alpha}) = -\Lambda\left(\frac{Q_x^{2}\gamma}{2|x_{\alpha}|^{2}}\partial_{\alpha}^{3} x_{t}^{\perp} \cdot x_{\alpha}\right) + \text{ l.o.t. } + \text{ NICE}
\end{align*}
which implies
\begin{align*}
H(\partial_{\alpha}^{2}\partial_t(-(Q_x^{2} BR)_{\alpha} \cdot x_{\alpha})) &= \partial_{\alpha}\left(\frac{Q_x^{2}\gamma}{2|x_{\alpha}|^{2}}\partial_{\alpha}^{3} x_{t}^{\perp} \cdot x_{\alpha}\right) + \text{ l.o.t. } + \text{ NICE} \\
&= -\frac{Q_x^{2}\gamma}{2|x_{\alpha}|^{2}}\partial_{\alpha}\left(\partial_{\alpha}^{3} x_{t} \cdot x_{\alpha}^{\perp}\right) + \text{ NICE}
\end{align*}
Plugging the above formula in \eqref{anchor} we find that
\begin{align*}
\frac{Q_x^{2}}{2}H(\partial_{\alpha}^{3}\gamma_{t}) & = |x_{\alpha}|H(\partial_{\alpha}^{3}\psi_{t}) - \frac{Q_x^{2}\gamma}{2|x_{\alpha}|^{2}}\partial_{\alpha}\left(\partial_{\alpha}^{3} x_{t} \cdot x_{\alpha}^{\perp}\right) + \text{ NICE} \\
& = |x_{\alpha}|H(\partial_{\alpha}^{3}\psi_{t}) - \frac{Q_x^{2}\gamma}{2|x_{\alpha}|^{2}}\partial_{\alpha}\left(\partial_{\alpha}^{3}(Q_x^{2} BR) \cdot x_{\alpha}^{\perp}\right)
- \frac{Q_x^{2}\gamma}{2|x_{\alpha}|^{2}}\partial_{\alpha}\left(b_s \partial_{\alpha}^{4} x \cdot x_{\alpha}^{\perp}\right) + \text{ l.o.t}\\
& + \text{ NICE} + \text{ errors}
\end{align*}
As before, in $\partial_{\alpha}(\partial_{\alpha}^{3}(Q_x^{2}BR) \cdot x_{\alpha}^{\perp})$ the most dangerous term is given by $\frac{Q_{x}^{2}}{2}H(\partial_{\alpha}^{4}\gamma)$ once the tangential terms are accounted for, which implies
\begin{align*}
\partial_{\alpha}(\partial_{\alpha}^{3}(Q_x^{2}BR) \cdot x_{\alpha}^{\perp}) & = \frac{Q_x^{2}}{2}H(\partial_{\alpha}^{4}\gamma) + \text{ NICE}
\end{align*}
and therefore
\begin{align*}
\frac{Q_x^{2}}{2}H(\partial_{\alpha}^{3}\gamma_{t}) = |x_{\alpha}|H(\partial_{\alpha}^{3}\psi_{t}) - \frac{Q_x^{2}\gamma}{2|x_{\alpha}|^{2}}\frac{Q_{x}^{2}}{2}H(\partial_{\alpha}^{4}\gamma)
- \frac{Q_x^{2}\gamma}{2|x_{\alpha}|^{2}}b_s\partial_{\alpha}\left(\partial_{\alpha}^{4} x \cdot x_{\alpha}^{\perp}\right) + \text{ NICE} + \text{ errors}
\end{align*}
We use \eqref{ouroboros} to find
\begin{align*}
\frac{Q_x^{2}}{2}&H(\partial_{\alpha}^{3}\gamma_{t}) + \frac{Q_x^{2}\psi}{2|x_{\alpha}|}H(\partial_{\alpha}^{4}\gamma)
\\
& = |x_{\alpha}|H(\partial_{\alpha}^{3}\psi_{t}) - \frac{Q_x^{2}}{2}b_sH(\partial_{\alpha}^{4}\gamma)
- \frac{Q_x^{2}\gamma}{2|x_{\alpha}|^{2}}b_s\partial_{\alpha}\left(\partial_{\alpha}^{4} x \cdot x_{\alpha}^{\perp}\right) + \text{ NICE} + \text{ errors} \\
& = |x_{\alpha}|H(\partial_{\alpha}^{3}\psi_{t}) - b_s|x_{\alpha}|H\left(\partial_{\alpha}^{4}\left(\frac{Q_x^{2}\gamma}{2|x_{\alpha}|}\right)\right)
- \frac{Q_x^{2}\gamma}{2|x_{\alpha}|^{2}}b_s\partial_{\alpha}\left(\partial_{\alpha}^{4} x \cdot x_{\alpha}^{\perp}\right) + \text{ NICE} + \text{ errors} \\
& = |x_{\alpha}|H(\partial_{\alpha}^{3}\psi_{t}) - b_s|x_{\alpha}|H\left(\partial_{\alpha}^{4}\psi\right) - b_s|x_{\alpha}|H(\partial_{\alpha}^{4}(b_s|x_{\alpha}|))
- \frac{Q_x^{2}\gamma}{2|x_{\alpha}|^{2}}b_s\partial_{\alpha}\left(\partial_{\alpha}^{4} x \cdot x_{\alpha}^{\perp}\right) \\
&+ \text{ NICE} + \text{ errors}
\end{align*}
We will show that
$$ - b_s|x_{\alpha}|H(\partial_{\alpha}^{4}(b_s|x_{\alpha}|))
- \frac{Q_x^{2}\gamma}{2|x_{\alpha}|^{2}}b_s\partial_{\alpha}\left(\partial_{\alpha}^{4} x \cdot x_{\alpha}^{\perp}\right)$$
is NICE and then we are done.
\begin{align*}
- b_s|x_{\alpha}|H(\partial_{\alpha}^{4}(b_s|x_{\alpha}|))&- \frac{Q_x^{2}\gamma}{2|x_{\alpha}|^{2}}b_s\partial_{\alpha}\left(\partial_{\alpha}^{4} x \cdot x_{\alpha}^{\perp}\right)
= - b_sH(\partial_{\alpha}^{4}(b_s|x_{\alpha}|^{2}))- \frac{Q_x^{2}\gamma}{2|x_{\alpha}|^{2}}b_s\partial_{\alpha}\left(\partial_{\alpha}^{4} x \cdot x_{\alpha}^{\perp}\right) \\
& = b_sH(\partial_{\alpha}^{3}((Q_x^{2} BR)_{\alpha} \cdot x_{\alpha}))- \frac{Q_x^{2}\gamma}{2|x_{\alpha}|^{2}}b_s\partial_{\alpha}\left(\partial_{\alpha}^{4} x \cdot x_{\alpha}^{\perp}\right)
\end{align*}
We repeat the calculation for dealing with the most dangerous terms in
\begin{align*}
\partial_{\alpha}^{3}((Q_x^{2} BR)_{\alpha} \cdot x_{\alpha}) = \Lambda\left(\partial_{\alpha}^{4} x^{\perp} \cdot x_{\alpha} \frac{\gamma Q_{x}^{2}}{2|x_{\alpha}|^{2}}\right) + \text{ l.o.t}
\end{align*}
In the l.o.t we use that $\Delta_{\beta} x^{\perp}(\alpha) \cdot x_{\alpha}(\alpha)$ gives an extra cancellation. We find that
\begin{align*}
b_sH(\partial_{\alpha}^{3}&((Q_x^{2} BR)_{\alpha} \cdot x_{\alpha})) - \frac{Q_x^{2}\gamma}{2|x_{\alpha}|^{2}}b_s\partial_{\alpha}\left(\partial_{\alpha}^{4} x \cdot x_{\alpha}^{\perp}\right) \\
& = b_sH(\Lambda\left(\partial_{\alpha}^{4} x^{\perp} \cdot x_{\alpha} \frac{\gamma Q_{x}^{2}}{2|x_{\alpha}|^{2}}\right))- \frac{Q_x^{2}\gamma}{2|x_{\alpha}|^{2}}b_s\partial_{\alpha}\left(\partial_{\alpha}^{4} x \cdot x_{\alpha}^{\perp}\right) + \text{ NICE} \\
& = -b_s\partial_{\alpha}\left(\partial_{\alpha}^{4} x^{\perp} \cdot x_{\alpha} \frac{\gamma Q_{x}^{2}}{2|x_{\alpha}|^{2}}\right)- \frac{Q_x^{2}\gamma}{2|x_{\alpha}|^{2}}b_s\partial_{\alpha}\left(\partial_{\alpha}^{4} x \cdot x_{\alpha}^{\perp}\right) + \text{ l.o.t} + \text{ NICE}
\end{align*}
Using that $\partial_{\alpha}^{4} x^{\perp} \cdot x_{\alpha} = - \partial_{\alpha}^{4} x \cdot x_{\alpha}^{\perp} $ we are done.
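The identity used in this last step is the pointwise algebraic fact that, for vectors $a, b \in {\mathbb R}^{2}$ with $a^{\perp}=(-a_{2},a_{1})$,
\begin{align*}
a^{\perp} \cdot b = -a_{2}b_{1} + a_{1}b_{2} = -(a \cdot b^{\perp}),
\end{align*}
applied with $a = \partial_{\alpha}^{4} x$ and $b = x_{\alpha}$.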
\betagin{tabular}{ll}
\textbf{Angel Castro} & \\
{\small Departamento de Matem\'aticas} &\\
{\small Universidad Aut\'onoma de Madrid } & \\
{\small Instituto de Ciencias Matem\'aticas-CSIC} &\\
{\small Campus de Cantoblanco} & \\
{\small Email: angel\[email protected]} & \\
& \\
\textbf{Diego C\'ordoba} & \textbf{Charles Fefferman}\\
{\small Instituto de Ciencias Matem\'aticas} & {\small Department of Mathematics}\\
{\small Consejo Superior de Investigaciones Cient\'ificas} & {\small Princeton University}\\
{\small C/ Nicol\'{a}s Cabrera, 13-15} & {\small 1102 Fine Hall, Washington Rd, }\\
{\small Campus Cantoblanco UAM, 28049 Madrid} & {\small Princeton, NJ 08544, USA}\\
{\small Email: [email protected]} & {\small Email: [email protected]}\\
& \\
\textbf{Francisco Gancedo} & \textbf{Javier G\'omez-Serrano}\\
{\small Departamento de An\'alisis Matem\'atico} & {\small Department of Mathematics}\\
{\small Universidad de Sevilla} & {\small Princeton University}\\
{\small C/ Tarfia, s/n } & {\small 1102 Fine Hall, Washington Rd,} \\
{\small Campus Reina Mercedes, 41012 Sevilla} & {\small Princeton, NJ 08544, USA} \\
{\small Email: [email protected]} & {\small Email: [email protected]}\\
\etand{tabular}
\etand{document}
\begin{document}
\title{{On Axially Symmetric Solutions of Fully Nonlinear Elliptic Equations} }
\author{{Nikolai Nadirashvili\thanks{LATP, CMI, 39, rue F. Joliot-Curie, 13453
Marseille FRANCE, [email protected]},\hskip .4 cm Serge
Vl\u adu\c t\thanks{IML, Luminy, case 907, 13288 Marseille Cedex
FRANCE, [email protected]} }}
\date{}
\maketitle
\def\nl{\hfil\break}
\def\al{\alpha} \def\be{\beta} \def\ga{\gamma} \def\Ga{\Gamma}
\def\om{\omega} \def\Om{\Omega} \def\ka{\kappa} \def\lm{\lambda} \def\Lm{\Lambda}
\def\dl{\delta} \def\Dl{\Delta} \def\vph{\varphi} \def\vep{\varepsilon} \def\th{\theta}
\def\Th{\Theta} \def\vth{\vartheta} \def\sg{\sigma} \def\Sg{\Sigma}
\def\endproof{$\blacksquare$} \def\wendproof{$\square$}
\def\holim{\mathop{\rm holim}} \def\span{{\rm span}} \def\mod{{\rm mod}}
\def\rank{{\rm rank}} \def\bsl{{\backslash}}
\def\il{\int\limits} \def\pt{{\partial}} \def\lra{{\longrightarrow}}
\section{Introduction}
In this paper we study a class of fully nonlinear second-order
elliptic equations of the form
$$F(D^2u)=0\leqno(1)$$
defined in a domain of ${\bf R}^n$. Here $D^2u$ denotes the
Hessian of the function $u$. We assume that
$F$ is a Lipschitz function defined on
the space $S^2({\bf R}^n)$ of ${n\times n}$ symmetric
matrices. Recall that (1) is called
uniformly elliptic if there exists a constant $C=C(F)\ge 1$
(called an {\it ellipticity constant\/}) such that
$$C^{-1}||N||\le F(M+N)-F(M) \le C||N||\; \leqno(2)$$ for any
non-negative definite symmetric matrix $N$; if $F\in C^1(D)$ then
this condition is equivalent to
$${1\over C'}|\xi|^2\le
F_{u_{ij}}\xi_i\xi_j\le C' |\xi |^2\;,\forall\xi\in {\bf
R}^n\;.\leqno(2')$$
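For instance, for $F\in C^1$ one direction of this equivalence can be seen by taking $N=t\,\xi\otimes\xi$ with $t>0$ in (2), dividing by $t$ and letting $t\to 0^+$ (note that $||\xi\otimes\xi||=|\xi|^2$):
$$C^{-1}|\xi|^2\le \lim_{t\to 0^+}{F(M+t\,\xi\otimes\xi)-F(M)\over t}=F_{u_{ij}}(M)\,\xi_i\xi_j\le C|\xi|^2\;.$$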
Here, $u_{ij}$ denotes the partial derivative
$\pt^2 u/\pt x_i\pt x_j$. A function $u$ is called a {\it
classical\/} solution of (1) if $u\in C^2(\Om)$ and $u$ satisfies
(1). Actually, any classical solution of (1) is a smooth
($C^{\alpha +3}$) solution, provided that $F$ is a smooth
$(C^\alpha )$ function of its arguments.
For a matrix $S \in S^2({\bf R}^n)$ we denote by $\lambda(S)=\{
\lambda_i : \lambda_1\leq\dots\leq\lambda_n\}
\in {\bf R}^n$ the (ordered) set of eigenvalues of the matrix $S$. Equation
(1) is called a Hessian equation ([T1], [T2]; cf. [CNS]) if the
function $F(S)$ depends only
on the eigenvalues $\lambda(S)$ of the matrix $S$, i.e., if
$$F(S)=f(\lambda(S)),$$
for some function $f$ on ${\bf R}^n$ invariant under permutations of
the coordinates.
In other words, equation (1) is called Hessian if it is invariant under
the action of the group
$O(n)$ on $S^2({\bf R}^n)$:
$$\forall O\in O(n),\; F({^t O}\cdot S\cdot O)=F(S) \;. $$
Consider the Dirichlet problem
$$\cases{F(D^2u)=0 &in $\Om$\cr
u=\vph &on $\pt\Om\;,$\cr}\leqno(3)$$ where $\Omega \subset {\bf
R}^n$ is a bounded domain with smooth boundary $\partial \Omega$
and $\vph$ is a continuous function on $\pt\Om$.
The main goal of this paper is to show that the axially symmetric
solutions of the Dirichlet problem are classical for Hessian
elliptic equations. Recall that without the symmetry assumption this can be
false in higher dimensions [NV1, NV2].
Let $\Omega \subset {\bf R}^3$ be a smooth bounded axially symmetric domain.
We consider the Dirichlet problem $(3)$ in $\Omega. $
{\bf Theorem 1. }{ \it Let $F\in C^1$ be a uniformly elliptic operator. Let $\vph \in C^{1,\epsilon }(\partial \Omega )$ be an axially symmetric function, $0<\epsilon <\epsilon_o$, where $\epsilon_o>0$ depends
on the ellipticity constant of $F$.
Then the Dirichlet problem $(3)$ has a unique classical
solution $u\in C^2(\Omega )\cap C^{1,\epsilon }(\bar \Omega )$. }
{\bf Remark. } The same results hold for the solutions of the $n$-dimensional axially symmetric problems (i.e., for the solutions of the form $u(x)=u(x_1, x_2^2+...+x_n^2)$ ).
Axially symmetric problems are essentially two-dimensional:
outside the axis of symmetry one can rewrite the equations as two-dimensional
fully nonlinear equations with lower-order terms. On the axis of symmetry, however,
the equations become singular, which limits the application of the powerful
methods known in dimension 2.
\section{ Proof of Theorem 1}
Let $\Omega \subset {\bf R}^n$. Let
$$Lw= \sum a_{ij}(x) {\partial^2 w \over \partial x_i \partial x_j }, \leqno (2.1)$$
be a linear uniformly elliptic operator defined in a domain $\Omega \subset {\bf R}^n $,
$$C^{-1}|\xi |^2 \leq \sum a_{ij}\xi_i\xi_j \leq C|\xi |^2.$$
We will need the following propositions, see [GT], [K].
{\bf Proposition 1. } {\it Let $G\subset {\bf R}^n $ be a bounded domain with a smooth boundary.
Let $u\in C^2 (\bar G) $ be a solution of the equation
$$Lu = 0 \quad in \quad G,$$
$u_{|\partial G} =\phi $. Then
$$||Du||_{C^{\alpha } (\partial G )} \leq C||\vph ||_{C^{1,\alpha }(\partial G )},$$
where positive constants $\alpha $ and $C$ depend on $G$ and the ellipticity constant
of the operator $L$. }
{\bf Proposition 2. }{\it Assume that $F\in C^1$, $F(0)=0$, $\partial \Omega \in C^2$ and the uniform ellipticity condition
$(2')$ holds.
Let $u \in C^2(\bar \Omega )$ be a solution of the Dirichlet problem $(3).$
Then
$$||Du||_{C^{\alpha } (\partial \Omega )} \leq C||\vph ||_{C^{1,\alpha }(\partial \Omega )},$$
where positive constants $\alpha $ and $C$ depend on $\Omega $ and on the ellipticity constants
of $F$.}
The following two propositions are essentially two-dimensional, see [BJS], [GT].
{\bf Proposition 3. } {\it Let $u\in C^2(D_1)$, where $D_r\subset {\bf R}^2$ denotes the disk $|x|<r$,
and let $u$ be a solution in $D_1$ of the equation
$$Lu=0,$$
where $L$ is the elliptic operator $(2.1).$ Then
$$ {\rm osc}_{D_1}\ u_{x_1} \geq (1+ \xi )\,{\rm osc}_{D_{1/2}}\ u_{x_1},$$
where $\xi>0$ is a constant depending only on the ellipticity constant of operator $L$. }
{\bf Proposition 4. }{\it Let $u\in C^2(D_1)$ be a solution of a fully nonlinear
elliptic equation
$$H(D^2u, Du, x)=0$$
in $D_1$, and $H(0,0,x)=0$. Let $| u|<M$. Then
$$||u||_{C^{2,\alpha }(D_{1/2})}<CM,$$
where $\alpha ,C>0$ are constants depending on the ellipticity constant of $H$ and $C^1$-norm of the function $H$. }
As a corollary of Proposition 3 we have
{\bf Lemma 1. }{\it Let $u\in C^2(D_1)$ be a solution of the equation
$$Lu=0,$$
in $D_1$ and $l$ an affine linear function in $D_1$. Let $|l-u|<M$. Then for any $\epsilon >0$ there are $\alpha ,r>0$ depending only on $\epsilon$ and the ellipticity constant of $L$ such that
$$|| u-l||_{C^{1,\alpha }(D_r)} < \epsilon M.$$ }
Applying Lemma 1 to the derivative of the solutions of fully nonlinear elliptic equation we get
{\bf Lemma 2.} {\it Let $u\in C^2(D_1)$ be a solution of the fully nonlinear equation
$$F(D^2u)=0,$$
in $D_1$ and $F(0)=0$. Let $q$ be a quadratic polynomial in $D_1$ such that $|q- u|<M$. Then for any $\epsilon >0$ there are $\alpha ,\rho>0$ depending only on $\epsilon$ and the ellipticity constant of $F$ such that
$$|| u-q||_{C^{2,\alpha }(D_\rho)} < \epsilon M.$$ }
In proving Theorem 1 we may assume without loss of generality that $F(0)=0$.
Let $x_1,x_2,x_3$ be an orthonormal coordinate system in ${\bf R}^3$ and $x_1$ be an
axis of symmetry of the domain $\Omega $. Denote
$$\omega = \{x\in \Omega ,\ x_3=0\}.$$
Let $u$ be a classical axially symmetric solution of the Dirichlet problem (3). Denote
$$||u||_{C(\Omega )} =A.$$
Since $u_{x_3}$ is a solution of a linear uniformly
elliptic equation $Lu_{x_3}=0$
and $u_{x_3}=0$ on $\omega $, then by Proposition 1
$$||u_{x_3x_3}||_{C^{\alpha }(\omega' )} \leq C||\vph ||_{C^{1,\alpha }(\partial \Omega )},\leqno (2.1)$$
where $\omega' \subset \subset \omega $, the positive constants $\alpha $ and $C$ depend on $\Omega , \omega'$ and the ellipticity constant
of the operator $F$.
We define two-dimensional Hessian elliptic operators $f_a$, $a\in {\bf R}$,
$$f_a(\lambda_1, \lambda_2)= f(\lambda_1, \lambda_2, a ).$$
Let $y\in \Omega $ be a point on the axis $x_1$.
Denote
$$h={\rm dist}\, (y, \partial \Omega ).$$
Define for $ 0<r<h$ the function $u_r$ on the unit disk
$D_1\subset {\bf R}^2$ by
$$u_r(x)=u_r(x_1,x_2)=(u(y+rx)-u(y))/r^2.$$
Set $a=u_{x_2x_2}(y)$.
Let $v_r$ be a solution of the Dirichlet problem
$$\cases{f_a(\lambda (D^2v_r))=0 &in $D_1 $\cr
v_r=u_r &on $\pt D_1\;,$\cr}\leqno(2.2)$$
The classical solution of the two-dimensional Dirichlet problem (2.2) is known to
exist, see, e.g., [GT].
Since our equation $F(D^2u)= 0$ is homogeneous
we can assume without loss that the inequalities
$$1< |\nabla F | < C$$
hold for a positive constant $C.$
From (2.1) and the last inequalities it follows easily that the functions
\noindent $u_r - C_or^{\alpha }(1-|x|^2) $ and
$u_r + C_or^{\alpha }(1-|x|^2) $ are, for a sufficiently large constant $C_o$, sub- and supersolutions of the Dirichlet problem (2.2).
Hence
$$|u_r-v_r|\leq C_or^{\alpha }.$$
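Indeed, since the sub- and supersolutions above coincide with $v_r$ on $\partial D_1$, the comparison principle gives
$$u_r - C_or^{\alpha }(1-|x|^2)\le v_r\le u_r + C_or^{\alpha }(1-|x|^2)\quad {\rm in}\ D_1,$$
and $0\le 1-|x|^2\le 1$ in $D_1$.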
Denote $w_r=u_r-v_r$.
Let $\rho $ be the constant of Lemma 2 for the elliptic operator $f$ and $\epsilon =1/2$.
Define a sequence of functions $u_n$ in $D_1$, $n=1,2,...$, by
$$u_n=u_{h\rho^n}.$$
Correspondingly we define $v_n=v_{h\rho^n}$, $w_n =u_n-v_n$.
From Lemma 2 we get the following recurrence inequalities:
there are quadratic polynomials
$q_n$, $n=1,2,...$, such that $f(q_n)=0$ and
$$||v_{n+1} -q_{n+1}||_{C(D_1)} \leq {1\over 2}||v_n -q_n||_{C(D_1)} + C_o\rho^{\alpha n}.$$
Since $|u_1|<A/h^2$, we get
$$||v_n -q_n||_{C(D_1)}<2AC_o\rho^{\alpha n}/h^2,$$
$$||u_n -q_n||_{C(D_1)}<2AC_o\rho^{\alpha n}/h^2,$$
$$||w_n||_{C(D_1)}<C_o\rho^{\alpha n},$$
for all $n=1,2,...$.
Hence, since the functions $u_n$ are obtained as dilations of $u$, it follows that
$$||q_{n+1} -q_n||_{C(D_1)}<2AC_o\rho^{\alpha n-2}/h^2.\leqno (2.3)$$
Therefore
$$||u_n||<AC_1/h^2\leqno (2.4)$$
for a constant $C_1>0$ depending only on the ellipticity constant of $F$, $n=1,2,...$.
Denote
$$E=\{ z=x+y: |x|<h/2, x_2/x_1>1/4 \},$$
$$G= \{x\in D_1 : x_2> 1/4, dist (x, \partial D_1 )> 1/4 \} ,$$
$$G_n=\{ x: x/h\rho^n \in G\},$$
$n=1,2,...$.
Set $g_n=u_n-q_n$. Then from (2.3), (2.4) and Proposition 4 we have
$$||g_n||_{C^{2,\alpha }(G)}<AC_2/h^2,$$
where $C_2>0$ depends only on the ellipticity constant of $F$. Since
$$||g_n||_{C(D_1)}<2AC_o\rho^{\alpha n}/h^2$$
then by interpolation between the last two inequalities we get
$$||g_n||_{C^{1,\alpha /2 }(G)}<AC_3\rho^{\alpha n/2}/h^2,$$
where $C_3>0$ depends on the ellipticity constant of the equation. Thus
$$||u||_{C^{1,\alpha /2 }(G_n)}<AC_3/h^2,$$
for all $n=1,2,...$. Together with (2.3) the last inequality gives
$$||u||_{C^{1,\alpha /2 }(E)}<AC_4/h^2,\leqno (2.5)$$
where $C_4>0$ depends only on the ellipticity constant of the equation.
By (2.1) on the axis $x_1$ the second derivatives $u_{x_2x_2}=u_{x_3x_3}$
satisfy the H\"{o}lder estimates. Since on the axis the mixed derivatives $u_{x_ix_j}=0$ for $i\neq j$,
we conclude from the equation that the second derivative $u_{x_1x_1} $
satisfies the H\"{o}lder estimates as well. These estimates together with (2.5)
give the following inequality
$$ ||u||_{C^{1,\alpha /2 }(\omega')}<AC_5, $$
where $C_5>0$ depends on the ellipticity constant of the equation and the
distance of $\omega'$ to the boundary $\partial \omega $.
Combining the last inequality with Proposition 2 we get the following a priori
estimate for the axially symmetric solutions of fully nonlinear uniformly elliptic equations:
{\bf Lemma 3. }
{\it Let $u\in C^2 (\Omega) $ be an axially symmetric solution of $(3)$ and let $\Omega' $
be a compact subdomain of $\Omega $. Then the following inequalities hold:
$$||u||_{C^{1,\alpha } ( \Omega )} \leq C||\vph ||_{C^{1,\alpha }(\partial \Omega )},$$
$$||u||_{C^{2,\alpha } ( \Omega' )} \leq C'||\vph ||_{C^{1,\alpha }(\partial \Omega )},$$
where positive constants $C,C'$ and $\alpha $ depend on $\Omega $ and on the ellipticity constant of $F$, $C'$ depending also on the distance of $\Omega'$ to the boundary $\partial \Omega $.}
The a priori estimate of Lemma 3 and the standard method of continuation
by parameter, see, e.g., [GT], give the classical solvability of the Dirichlet
problem (3) for a uniformly elliptic equation.
\centerline{REFERENCES}
\noindent [CC] L. Caffarelli, X. Cabre, {\it Fully Nonlinear Elliptic
Equations}, Amer. Math. Soc., Providence, R.I., 1995.
\noindent [BJS] L. Bers, F. John, M. Schechter, {\it Partial Differential Equations},
Interscience Publishers, New York-London-Sydney, 1964.
\noindent [CIL] M.G. Crandall, H. Ishii, P.-L. Lions, {\it User's
guide to viscosity solutions of second order partial differential
equations,} Bull. Amer. Math. Soc. (N.S.), 27(1) (1992), 1--67.
\noindent [CNS] L. Caffarelli, L. Nirenberg, J. Spruck, {\it The Dirichlet
problem for nonlinear second order elliptic equations III. Functions
of the eigenvalues of the Hessian, } Acta Math.
155 (1985), no. 3-4, 261--301.
\noindent [GT] D. Gilbarg, N. Trudinger, {\it Elliptic Partial
Differential Equations of Second Order, 2nd ed.}, Springer-Verlag,
Berlin-Heidelberg-New York-Tokyo, 1983.
\noindent [K] N.V. Krylov, {\it Nonlinear Elliptic and Parabolic
Equations of Second Order}, Reidel, 1987.
\noindent [NV1] N. Nadirashvili, S. Vl\u adu\c t, {\it On Hessian
fully nonlinear elliptic equations}, arXiv:0805.2694 [math.AP],
submitted.
\noindent [NV2] N. Nadirashvili, S. Vl\u adu\c t,
{\it Nonclassical Solutions of Fully Nonlinear Elliptic
Equations II: Hessian Equations and Octonions }, arXiv:0912.312
[math.AP], submitted.
\noindent [T1] N. Trudinger, {\it Weak solutions of Hessian
equations,} Comm. Partial Differential Equations 22 (1997), no.
7-8, 1251--1261.
\noindent [T2] N. Trudinger, {\it On the Dirichlet problem for
Hessian equations,} Acta Math.
175 (1995), no. 2, 151--164.
\end{document}
\begin{document}
\begin{abstract}
Finding $k$-cores in graphs is a valuable and effective strategy for
extracting dense regions of otherwise sparse graphs. We focus on the
important problem of maintaining cores on rapidly changing dynamic
graphs, where batches of edge changes need to be processed quickly.
Prior batch core algorithms have only addressed half the problem of
maintaining cores, the problem of maintaining a core decomposition. This finds
vertices that are dense, but not regions; it misses connectivity. To
address this, we bring an efficient index from community search into
the core domain, the Shell Tree Index. We develop a novel dynamic
batch algorithm to maintain it that improves efficiency over
processing edge-by-edge. We implement our algorithm and
experimentally show that with it core queries can be returned
on rapidly changing graphs quickly enough for interactive applications.
For batches of 1 million edges, on many graphs we run over $100\times$ faster than
processing edge-by-edge while remaining cheaper than re-computing from scratch.
\end{abstract}
\title{Batch Dynamic Algorithm to Find $k$-Cores and Hierarchies}
\section{Introduction}
An important problem in graph analysis is finding locally dense regions in
globally sparse graphs. In this work we consider the problem of finding
$k$-cores~\cite{seidman1983network,matula1983smallest}, which are maximal
connected subgraphs with minimum degree at least $k$. This problem has
seen significant attention given its efficiency~\cite{matula1983smallest}
and usefulness across many domains~\cite{kumar2000web,alvarez2005k,
hagmann2008mapping,van2011rich,garcia2017ranking,filho2018hierarchical,
kong2019k}.
Many practically important graphs from web data, social networks, and
related fields are both large and continuously changing.
The problem of
maintaining core \emph{decompositions} on graphs has been well
studied~\cite{li2013efficient,sariyuce2013streaming,zhang2017fast,
zhang2019unboundedness}. Existing approaches run in linear time in the
size of the graph, which is theoretically
optimal~\cite{zhang2019unboundedness}, and on many real-world graphs they
maintain decompositions within milliseconds after edge changes. So, is
the problem solved?
Unfortunately, these approaches only address half of the problem of
returning a $k$-core~\cite{sariyuce2016fast}. $k$-cores are originally
defined as \emph{connected} subgraphs~\cite{seidman1983network}. All of
the application examples referenced above rely on or use connectivity.
A core decomposition, on the other hand, provides \emph{coreness} values
for every vertex: that is, the largest value such that a vertex is in
a $k$-core, but not in a $(k+1)$-core. Prior approaches have either
ignored connectivity (which provides limited, but some insight
e.g.,~\cite{kitsak2010identification}) or left the final step of finding
components as a separate process. The main tool to address computing connectivity on cores, or a \emph{core hierarchy},
has been independently proposed several
times~\cite{barbieri2015efficient,sariyuce2016fast,fang2017effective,
fang2020effective} in different contexts, and concurrently developed in~\cite{lin2021hierarchical}. We introduce this index in the
most basic setting, designed for $k$-cores on simple undirected graphs, and
we call it the Shell Tree Index (\textsf{ST-Index}). This index supports queries to
extract the cores a vertex is in along with the full core
hierarchy of a graph.
\begin{figure}
\caption{The core hierarchy of the LiveJournal graph~\cite{yang2015defining}.}
\label{fig:lj}
\end{figure}
\paragraph{Example Problem}
Consider the problem of managing a social network. First, given a user,
we wish to recommend friends to them that are well connected \emph{in
their part} of the graph: this is a vertex and coreness query. Second, we
want to detect structural changes, for example sybil
attacks~\cite{douceur2002sybil} from new fake accounts: this is
a hierarchy query. Figure~\ref{fig:lj} shows the core hierarchy of the
LiveJournal graph~\cite{yang2015defining} and how far apart different
dense regions are. For both query examples, we want results in tens of
milliseconds to either prepare a webpage or mitigate an emerging attack.
In this example scenario, a state-of-the-art core decomposition system is
put in place, which provides coreness updates quickly after graph changes.
The two goals above require information about \emph{specific cores}. If
certain vertices achieve higher coreness values, this does not inform
whether a new region is created. Furthermore, unless there is only one
dense region, it will not enable useful recommendations. Instead, we need
systems and algorithms that can quickly and effectively return \emph{cores
themselves} along with their full hierarchies.
\paragraph{Approach}
The \textsf{ST-Index} builds on the laminar nature of cores.
For $k' < k$, every $k$-core is contained within some $k'$-core, naturally forming a tree.
Each node in this tree corresponds to the \emph{shell} of the core, that is,
the vertices of the core which are not in any higher core.
Coupled with a reverse map, a core can
be efficiently returned by traversing the subtree staying below the
desired $k$ value.
The core hierarchy is the tree.
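As an illustration (a minimal sketch of our own, not the paper's implementation; all names are ours), a core query against such a tree with a vertex-to-node reverse map might look like:

```python
def core_query(u, k, node_of, level, parent, children, shell):
    """Return the k-core containing u, or an empty set if u's coreness < k.

    node_of: vertex -> shell-tree node; level[n]: coreness of node n's shell;
    parent/children: tree structure; shell[n]: vertices in node n's shell.
    """
    node = node_of[u]
    if level[node] < k:
        return set()                      # u is in no k-core
    # Climb to the highest ancestor whose level is still >= k ...
    while parent[node] is not None and level[parent[node]] >= k:
        node = parent[node]
    # ... then collect its whole subtree: work proportional to the answer size.
    core, stack = set(), [node]
    while stack:
        n = stack.pop()
        core |= shell[n]
        stack.extend(children[n])
    return core
```

The climb and the subtree collection together realize the "traversing the subtree staying below the desired $k$ value" behavior described above.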
We build the tree by first identifying regions of the graph where the
cores are the same, known as subcores, and then forming a directed acyclic
graph (DAG) with each subcore as a node.
Starting from the highest $k$ values, we process nodes in the DAG upwards, merging and moving them to form a tree.
The only known prior maintenance approach, operating for attributed graphs and used as part of solving community search, is given
in~\cite{fang2017effective}.
We first port this maintenance
approach to the case of $k$-cores on standard graphs and use that as our edge-by-edge baseline.
Given an edge
change, it maintains the \textsf{ST-Index} by either merging or splitting nodes on paths to the root.
Concurrent with this work, \cite{lin2021hierarchical} builds on~\cite{fang2017effective}'s approach by batching operations on the tree.
In real-world graphs there is significant variance in the rate of change.
As such, batch dynamic algorithms that can reduce the
total work when operating on batches are
desired~\cite{luo2020batch,dhulipala2020parallel}. We provide a
batch dynamic algorithm to maintain cores themselves, starting from
core decompositions.
We do this by maintaining the subcore DAG used during construction.
After a batch of changes, we revisit each node in the DAG that was modified and re-compute any subcore changes.
Any DAG changes are then pushed into the tree, temporarily turning the tree back into a DAG.
We then traverse from the sink upwards, correcting the tree.
\paragraph{Contributions}
In addition to bringing the \textsf{ST-Index} from the community search domain into the $k$-core domain, we prove efficiency properties of the \textsf{ST-Index}.
Our main contributions are:
\begin{enumerate}
\item A subcore DAG based \textsf{ST-Index} construction algorithm
\item A batch dynamic algorithm to maintain \textsf{ST-Index} that
reduces the work of edge-by-edge updates
\item An experimental evaluation on real-world graphs showing that,
with both our edge-by-edge and batch algorithms, \textsf{ST-Index} is suitable for interactive use
\end{enumerate}
The remainder of this paper is structured as follows. In
\S~\ref{sec:related} we describe the related work. In
\S~\ref{sec:preliminaries} we formally describe our model and problem. In
\S~\ref{sec:sti} we present \textsf{ST-Index}. In \S~\ref{sec:computing-sti} we
provide our algorithm to compute \textsf{ST-Index} from scratch. In
\S~\ref{sec:maint-sti} we explain how to maintain \textsf{ST-Index} for dynamic graphs
and introduce our batch algorithm. In \S~\ref{sec:experiments} we
experimentally evaluate our implementations, and in
\S~\ref{sec:conclusion} we conclude.
\section{Related Work} \label{sec:related}
$k$-cores were introduced independently
in~\cite{seidman1983network,matula1983smallest}.
\cite{matula1983smallest} additionally provided a peeling algorithm that uses
bucketing to run in $O(n + m)$. The main
strategy for computing $k$-cores has remained roughly the same since then:
iteratively peeling the graph, or excluding vertices with too low of
degrees, until all degrees are $k$.
For maintenance, \cite{li2013efficient} and \cite{sariyuce2013streaming}
independently proposed \textsf{Traversal}, which limits consideration of vertices around an edge
change if they provably cannot update values. \cite{sariyuce2013streaming}
defines the
notion of subcores and purecores, variants of which are used in all known
maintenance algorithms to limit considered subgraphs. \cite{zhang2017fast}
proposed \textsf{Order}, which is the current state-of-the-art and maintains
a \emph{peeling order}, instead of coreness values directly, using an order-statistic treap
and a heap.
Parallel approaches have relied on identifying
a set of vertices that can be independently peeled~\cite{jin2018core,
aksu2014distributed,hua2019faster,aridhi2016distributed}.
\cite{bai2020efficient,zhang2019unboundedness} provide batch algorithms
that reduce work as multiple edges are processed simultaneously.
All of the above focus on computing the \emph{coreness values} for vertices. In
fact, the lack of focus on connectivity has, in some cases, resulted in
later work redefining cores to not include connectivity
(e.g.,~\cite{malliaros2020core}) which limits their usefulness.
Numerous other targets, similar to cores, have been
proposed~\cite{malliaros2020core}. \cite{eidsaa2013s,zhou2020core}
develop weighted extensions to cores, \cite{linghu2020global} uses core
concepts to
reinforce connections within networks, \cite{galimberti2020core} proposes
notions of cores for multilayer networks, and \cite{zhang2020exploring}
ensures vertices in core-like regions are also relatively
cohesive given their neighbors.
In cases where the cores are used for downstream
algorithms, returning the actual (connected) vertices is identified as
crucial and algorithms are built to support such
queries~\cite{liu2019efficient}.
Community search~\cite{sozio2010community,cui2014local} is a more general
problem for returning a connected set of vertices in a community based on
a seed set. The community is commonly defined with a \emph{minimum
degree} measure~\cite{fang2020survey}. In this case, if the query consists
of a single vertex, community search can return exactly a core. For this
reason, we pull from the field of community search to develop \textsf{ST-Index}.
\cite{barbieri2015efficient} proposed the first known shell tree index.
It does not support efficient queries, as it creates additional vertices
for each coreness level that must be addressed. \cite{sariyuce2016fast}
identifies the same problem that we address---cores require
connectivity---and proposes a shell tree-like index with a static construction in the more general nuclei framework, but leaves out
maintenance. \cite{fang2017effective} operates on attributed graphs and extends
\cite{sariyuce2016fast}'s approach and \cite{barbieri2015efficient}'s
index with incremental and decremental algorithms, but without batch
algorithms. We port this approach to the problem of cores and use this as our baseline.
Concurrent with this work, \cite{lin2021hierarchical} provides a batch algorithm that is based on \cite{fang2017effective} and batches changes to the tree directly, without the use of a DAG.
\section{Preliminaries} \label{sec:preliminaries}
A graph $G=(V,E)$ is a set of vertices $V$ and set of edges $E$. An edge
$e \in E$ represents the connection between two distinct vertices $u,v \in
V$, $e=\{u,v\}$.
We denote $\abs{V}$ by $n$ and $\abs{E}$ by $m$.
We use $\Gamma(v)$ to represent the neighboring edges of $v \in V$.
The degree of $v \in V$ is $\abs{\Gamma(v)}$.
For directed graphs, $\Gamma^\mathrm{in}$ represents
edges ending at the given vertex and $\Gamma^\mathrm{out}$ represents
edges leaving a vertex. If the graph is ambiguous, we use $\Gamma_G$ for
graph $G$. The neighborhood of a vertex set $S \subseteq V$, $\Gamma(S)$,
is the subgraph induced by $S$ together with all neighbors of vertices in $S$.
\paragraph{Dynamic Graph Model}
We consider graphs
that are
changing over time, known as dynamic graphs. An \emph{edge change} is
a tuple $\tup{c, v, e}$ consisting of a direction $c$, a vertex $v \in
V$, and an edge $e \in E$. A dynamic graph is then an infinite turnstile
stream of edge changes $\mathcal{S}$, where time is the position in the
stream. At any point in time an undirected graph
$G^t$
can be formed by applying all edge changes until $t$, starting from an
empty graph.
In this model, the timestamp of edges received is not preserved and not used by the algorithm.
An algorithm that does take into consideration timestamps is called a \emph{temporal} algorithm, and can either be dynamic or static.
\begin{definition}\label{def:alg}
Let $\mathcal{A}$ be a graph algorithm with output $\mathcal{A}(G)$.
Then $\mathcal{A}_\Delta$ is a dynamic graph algorithm if, for some
times $t$ and $t'$, with $t < t'$,
\begin{equation*}
\mathcal{A}_\Delta\left(G^{t}, \mathcal{A}(G^{t}),
\mathcal{A}_\Delta^t,
\mathcal{S}[t, t']\right)
= \tup{\mathcal{A}(G^{t'}),
\mathcal{A}_\Delta^{t'}},
\end{equation*}
where $\mathcal{A}_\Delta^t$ contains algorithm state at $t$ and
$\mathcal{S}[t,t']$ are the edge changes in $\mathcal{S}$ from $t+1$ to
$t'$.
\end{definition}
We call an \emph{incremental algorithm} a dynamic graph algorithm which
can only handle edge insertions and
a \emph{decremental algorithm} one which can only handle edge deletions.
A \emph{batch dynamic} algorithm can handle $t' > t+1$.
Our batch algorithm, described in
Section~\ref{sec:maint-sti}, maintains additional state bounded by the size of
the graph.
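As a concrete toy illustration of the model (with names of our own choosing), a graph $G^t$ can be materialized from a stream prefix as follows:

```python
def materialize(stream, t):
    """Apply the first t edge changes <c, v, e> of a turnstile stream to an
    initially empty undirected graph; return its adjacency sets."""
    adj = {}
    for c, v, e in stream[:t]:
        u, w = e
        adj.setdefault(u, set())
        adj.setdefault(w, set())
        if c == '+':                 # insertion
            adj[u].add(w)
            adj[w].add(u)
        else:                        # '-': deletion
            adj[u].discard(w)
            adj[w].discard(u)
    return adj
```

Note that, as the model states, edge timestamps are not retained: only the net effect of the changes up to $t$ is visible.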
\paragraph{Cores}
We provide a brief background on $k$-cores.
\begin{definition}\label{def:core}
Let $G$ be a graph
and $k \in \mathbb{N}$. A $k$-core
in $G$ is a set of vertices $V'$ which induce a subgraph $K=(V', E')$
such that: (1) $V'$ is maximal in $G$; (2) $K$ is connected; and (3)
the minimum degree is at least $k$, $\min_{v \in V'} \abs{\Gamma_K(v)}
\geq k$.
\end{definition}
Figure~\ref{fig:example-kcore} shows an example graph and its
cores.
There are two \emph{separate} $k=3$ cores, one with vertices $1$ through $4$ and the other with vertices $7$ through $10$.
If all vertices with degree less than $3$ are iteratively removed, the remaining graph consists of those two separate connected components.
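The iterative-removal procedure just described can be sketched as follows (a minimal illustration under our own naming, not the paper's implementation):

```python
from collections import deque

def k_core_vertices(adj, k):
    """Iteratively delete vertices of degree < k; return the survivors."""
    deg = {v: len(ns) for v, ns in adj.items()}
    queue = deque(v for v, d in deg.items() if d < k)
    removed = set()
    while queue:
        v = queue.popleft()
        if v in removed:
            continue
        removed.add(v)
        for u in adj[v]:
            if u not in removed:
                deg[u] -= 1
                if deg[u] < k:
                    queue.append(u)
    return set(adj) - removed

def k_cores(adj, k):
    """Split the surviving subgraph into connected components: the k-cores."""
    alive = k_core_vertices(adj, k)
    seen, cores = set(), []
    for s in alive:
        if s in seen:
            continue
        comp, stack = set(), [s]
        while stack:
            v = stack.pop()
            if v in comp:
                continue
            comp.add(v)
            stack.extend(u for u in adj[v] if u in alive)
        seen |= comp
        cores.append(comp)
    return cores
```

The final connected-components pass is exactly the step that Definition~\ref{def:core} requires and that a bare core decomposition omits.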
\begin{figure}
\caption{An example graph and its cores.}
\label{fig:example-kcore}
\end{figure}
\begin{definition}\label{def:coreness}
Let $G=(V,E)$ be a graph and $v \in V$.
The coreness of $v$, denoted $\kappa[v]$, is the value $k$ such that
$v$ is in a $k$-core
but not in a $(k+1)$-core.
\end{definition}
\begin{definition}\label{def:degeneracy}
Let $G=(V,E)$ be a graph. The $k$-core number of $G$, denoted
$\rho_G$ and shortened to $\rho$, is given by $\rho = \max_{v \in V}
\kappa[v]$.
\end{definition}
\paragraph{Problem Statement}
We consider the problem of efficiently supporting core and coreness
queries on a dynamic graph stream. Let $k \in \mathbb{N}$ and $u \in V$.
\begin{itemize}
\item The coreness query $\ensuremath{\mathcal{K}}(u)$ returns $\kappa[u]$.
\item The core query $\ensuremath{\mathcal{C}}(u, k)$ returns the vertices of the $k$-core
subgraph that contains $u$.
\item The hierarchy query $\ensuremath{\mathcal{H}}$ returns the hierarchical structure of
the cores as a tree, with the root as the $0$-core.
\end{itemize}
Prior work in the context of cores has focused only on supporting $\ensuremath{\mathcal{K}}$
queries on dynamic graphs.
Unfortunately, this prevents many of the applications of $k$-cores which
rely on \emph{extracting dense regions} of a graph.
\section{Shell Tree Index} \label{sec:sti}
In this section we present the Shell Tree Index, \textsf{ST-Index}, which is able to
efficiently return cores for different vertices: its query runtime is proportional to the size of the result and its space is linear in the number of vertices. This index
has been independently developed several
times~\cite{barbieri2015efficient,sariyuce2016fast,fang2017effective,
fang2020effective,lin2021hierarchical} in different contexts.
We present the index here for completeness.
We will address how to construct the index in
Section~\ref{sec:computing-sti} and how to maintain it in
Section~\ref{sec:maint-sti}.
$\ensuremath{\mathcal{K}}(u)$ queries, or \emph{coreness} queries, can be
efficiently returned using an array of size $n$. We therefore focus on
$\ensuremath{\mathcal{C}}$ and $\ensuremath{\mathcal{H}}$ queries.
\begin{figure}
\caption{\label{fig:sti} An example shell tree.}
\end{figure}
\begin{lemma}[\cite{sariyuce2013streaming}]\label{lem:laminar}
Cores form a laminar family, that is, every pair of cores is either disjoint or one is contained in the other.
\end{lemma}
\begin{proof}
We want to show that for every two cores $K_1$ and $K_2$, $K_1 \cap K_2$ is exactly one of $\emptyset$, $K_1$, or $K_2$.
Let $K_1$ and $K_2$ be two cores of $G$, with corresponding $k$ values
$k_1$ and $k_2$.
Suppose $K_1 \cap K_2 \neq \emptyset$, implying
$K_1$ is connected to $K_2$. Note that $k_1 \neq k_2$, otherwise $K_1
\cup K_2$ is a $k_1$-core, invalidating maximality. Without loss of generality, let $k_1 < k_2$.
Suppose $\exists v \in K_2$ such that $v \not\in K_1$. $v$ must be
connected in $K_2$, and so there exists a path from $K_1$ to $v$ with
minimum degree at least $k_2$. Let $K_1'$ be a subgraph that includes
$K_1$ and the path to $v$. Then, $K_1'$ is a $k_1$-core and larger
than $K_1$, invalidating maximality.
\end{proof}
\begin{definition}\label{def:shell}
Let $G=(V,E)$ be a graph and $K \subseteq V$ a $k$-core in $G$
for some $k \in \mathbb{N}$. Then $S$ is a $k$-shell if $S = \{v \in K:
\kappa[v] = k\}$.
\end{definition}
Note that a shell may be \emph{disconnected}, even though it is a subset of
a \emph{connected} core. This means that the traditional approach of using
coreness values to compute the shell does not work. We address shell
computation later in Section~\ref{sec:computing-sti}, using
\emph{subcores}.
A \emph{shell tree} $T$ is at the heart of the \textsf{ST-Index}.
We call the vertices of $T$ \emph{tree nodes}, to distinguish from the vertices in $G$.
Each node has two additional pieces of data associated with it: a $k$ value and a set of vertices (in $G$).
$T$ is built as follows.
A root node is made with $k=0$ and a vertex set of isolated vertices (those with $\abs{\Gamma_G(v)} = 0$).
Next, a node is made in $T$ for every $k$-shell:
its $k$ attribute is set to the $k$ of the shell and its vertex list is set to the vertices in the $k$-shell.
An edge is created in $T$ by linking $k$-shells, following Lemma~\ref{lem:laminar}.
An example shell tree is shown in Figure~\ref{fig:sti}.
The \textsf{ST-Index} consists of $T$ and a map $M$, mapping $v\in V$ to the appropriate node in $T$.
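For concreteness, the tree $T$ and map $M$ might be laid out as follows (a minimal Python sketch; the class and field names are ours, not prescribed by the paper):

```python
class ShellNode:
    """One shell-tree node: the shell's k value, its vertices in G,
    and parent/child links following the laminar containment."""
    def __init__(self, k, vertices=()):
        self.k = k
        self.vertices = set(vertices)
        self.parent = None
        self.children = []

    def add_child(self, child):
        child.parent = self
        self.children.append(child)

class STIndex:
    """The ST-Index: a shell tree plus a map M from each graph
    vertex to the tree node holding it."""
    def __init__(self, root):
        self.root = root          # the k = 0 node
        self.M = {}
        self._index(root)

    def _index(self, node):
        for v in node.vertices:
            self.M[v] = node
        for c in node.children:
            self._index(c)

    def coreness(self, u):        # the K(u) query: one map lookup
        return self.M[u].k
```

With this layout a coreness query is a single dictionary lookup followed by reading the node's $k$ attribute.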
\begin{lemma}\label{lem:st-tree}
The shell tree is a directed, rooted tree.
\end{lemma}
\begin{proof}
Suppose a tree node $u$, corresponding to core $K_u$ has two in-edges.
By Definition~\ref{def:shell}, each parent corresponds to a unique
$k$-shell. Consider the two corresponding cores, $K_1$ and $K_2$.
They both include $K_u$, yet are distinct, and so they have
non-trivial overlap contradicting Lemma~\ref{lem:laminar}.
The root is defined with $k=0$.
\end{proof}
\begin{lemma}\label{lem:compress}
In the shell tree, the out-degree of a non-root tree node with no
associated vertices is at most 1.
\end{lemma}
\begin{proof}
Let the tree node with no corresponding vertices be at level $k > 0$ with out-degree at least 2.
Then, there are two distinct \emph{cores} at $k+1$ (not necessarily shells), and
one core at $k$. The two cores at $k+1$ must be disconnected by construction.
However, because the tree node has no corresponding vertices, we know that every vertex in the $k$-core is also in a $(k+1)$-core.
Furthermore, the $k$-core is connected.
Hence, it is not possible for the two cores at $k+1$ to be disconnected.
\end{proof}
\begin{lemma}\label{lem:st-size}
Let $G=(V,E)$ be a graph with $n = \abs{V}$.
The number of nodes in the shell tree is at most $n+1$ and edges is at most $n$.
\end{lemma}
\begin{proof}
By Lemma~\ref{lem:compress}, each node in the tree (besides the root)
must have at least one vertex. As there are at most $n$ vertices, the
size of the tree is at most $n+1$. By Lemma~\ref{lem:st-tree}, we
know it is a tree, and so with at most $n+1$ nodes it has at most $n$
edges.
\end{proof}
\paragraph{Queries on \textsf{ST-Index}}
The three queries, $\ensuremath{\mathcal{K}}(u)$, $\ensuremath{\mathcal{C}}(u, k)$, and $\ensuremath{\mathcal{H}}$ are returned as follows.
\begin{itemize}
\item $\ensuremath{\mathcal{K}}(u)$ follows the map $M[u]$ to the shell tree node $n$, and
then returns the $k$ value for $n$.
\item $\ensuremath{\mathcal{C}}(u,k)$ runs a tree traversal that stays above the level $k$
\item $\ensuremath{\mathcal{H}}$ returns the tree nodes and attributes directly.
\end{itemize}
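The $\ensuremath{\mathcal{C}}(u,k)$ traversal can be sketched as follows (a minimal Python sketch using a dict-based node representation of our own; it climbs to the shallowest ancestor still at level $\geq k$, then enumerates that subtree):

```python
def mk(k, vertices=()):
    """A lightweight shell-tree node."""
    return {'k': k, 'vertices': set(vertices), 'parent': None, 'children': []}

def link(parent, child):
    child['parent'] = parent
    parent['children'].append(child)

def core_query(M, u, k):
    """C(u, k): find the subtree at level >= k containing u and
    return all vertices in it. Work is proportional to the size
    of the returned core."""
    node = M[u]
    if node['k'] < k:
        return set()              # u is in no k-core
    # climb while the parent is still at level k or deeper
    while node['parent'] is not None and node['parent']['k'] >= k:
        node = node['parent']
    out, stack = set(), [node]
    while stack:                  # enumerate the whole subtree
        n = stack.pop()
        out |= n['vertices']
        stack.extend(n['children'])
    return out
```

Each tree node is visited once and every enumerated vertex is part of the answer, matching the $O(\abs{\ensuremath{\mathcal{C}}(u,k)})$ bound.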
\paragraph{Efficiency of \textsf{ST-Index}}
We next address the efficiency of queries on \textsf{ST-Index}.
\begin{theorem}
$\ensuremath{\mathcal{C}}(u,k)$ queries on \textsf{ST-Index} run in $O(\abs{\ensuremath{\mathcal{C}}(u,k)})$ and correctly return the $k$-core.
\end{theorem}
\begin{proof}
First, we show correctness. Let $C^*$ be the core for $\ensuremath{\mathcal{C}}(u,k)$,
that is $C^*$ is a $k$-core and $u \in C^*$. The traversal will cover
all vertices in the subtree containing $u$ at level $k$ and higher. By
Lemma~\ref{lem:laminar} we know all denser cores are fully contained
in the desired $k$-core. By Lemma~\ref{lem:compress}, we know that any
split will occur in an explicit tree node with vertices in the
resulting shell. So, this split will be captured by the tree
traversal. As such, all vertices in the tree nodes traversed with values $k$ or more
exactly form the $k$-core.
Let down represent higher $k$ values in the tree.
Next, we show efficiency. Every downward link in the subtree needs to
be fully explored, and there are no nodes with overlapping vertices in
the tree. Once a downward traversal occurs, there is no need to check
parents. When traversing upwards, all children except the previous one
will be explored downwards. In each case every node is visited
exactly once and all of its associated vertices are enumerated once and are part of the returned core.
As \textsf{ST-Index} is a tree, whether to traverse to the parent can be decided based on whether the parent's value is lower than $k$.
This will result in one additional operation.
As such, the runtime is $O(\abs{\ensuremath{\mathcal{C}}(u,k)})$ and
efficient.
\end{proof}
\begin{theorem}
The \textsf{ST-Index} takes $O(n)$ space.
\end{theorem}
\begin{proof}
The \textsf{ST-Index} consists of a map of size $n$ between vertices and tree nodes,
along with the shell tree itself. By Lemma~\ref{lem:st-size}, the tree
has at most $n+1$ nodes and $n$ tree edges. Each tree node may have
vertices, but there are no redundant vertices. So, the size is
$O(n+n+1+n+n) = O(n)$.
\end{proof}
The shell tree itself contains the hierarchy of
cores and shells, and so returning \textsf{ST-Index} efficiently resolves $\ensuremath{\mathcal{H}}$
queries.
\begin{figure}
\caption{\label{fig:cf} An example graph with its cores and subcores.}
\end{figure}
\section{Computing the \textsf{ST-Index}} \label{sec:computing-sti}
Computing (and maintaining) the \textsf{ST-Index} hinges on building (and maintaining)
the shell tree. We propose a \emph{subcore directed acyclic graph}, that
provides the link between core decompositions and the shell tree. In this
section we describe how to compute the \textsf{ST-Index} from scratch using the
subcore DAG.
This problem is broken into three parts:
computing coreness values, subcore DAG, and the shell tree.
\paragraph{Computing Coreness Values}
Computing coreness values has been well studied on graphs~\cite{matula1983smallest,dhulipala2017julienne}. The most direct
approach, known as peeling, starts by keeping an array of vertex degrees. It then moves up
through coreness values, removing vertices with insufficient degree and
recording when they are removed. This is efficient, running in
$O(n+m)$, when using buckets~\cite{matula1983smallest}.
We refer the reader to \cite{malliaros2020core} for a survey.
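The peeling procedure can be sketched with a bucket queue (a simplified linear-time variant in Python; variable names are ours):

```python
def coreness(adj):
    """Peel vertices in order of increasing remaining degree,
    recording the level at removal time as each vertex's coreness."""
    deg = {v: len(ns) for v, ns in adj.items()}
    maxdeg = max(deg.values(), default=0)
    buckets = [set() for _ in range(maxdeg + 1)]
    for v, d in deg.items():
        buckets[d].add(v)
    kappa, k = {}, 0
    for _ in range(len(adj)):
        while not buckets[k]:         # level k exhausted; move up
            k += 1
        v = buckets[k].pop()
        kappa[v] = k
        for w in adj[v]:              # demote unpeeled neighbors
            if w not in kappa and deg[w] > k:
                buckets[deg[w]].remove(w)
                deg[w] -= 1
                buckets[deg[w]].add(w)
    return kappa
```

Since a neighbor's degree is only lowered when it is above the current level $k$, no bucket below $k$ ever refills, so $k$ advances monotonically and the total work is $O(n+m)$.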
\paragraph{Computing the Subcore DAG}
Next, we introduce the \emph{subcore directed acyclic graph (DAG)}, which is used to bridge
between coreness values and cores.
\begin{definition}
Let $G$ be a graph. A \emph{subcore} is a subgraph $C$ such
that (1) $C$ is maximal (2) $\forall v \in C$, $\kappa[v] = k$ for
some $k \in \mathbb{N}$ and (3) $C$ is connected.
\end{definition}
Subcores were introduced in~\cite{sariyuce2013streaming} to limit the
region that may have coreness values change on graph changes.
Figure~\ref{fig:cf} shows an example graph with cores and subcores.
\begin{figure}
\caption{\label{fig:cfd} The subcore DAG of the graph in Figure~\ref{fig:cf} and its shell tree.}
\end{figure}
\begin{obs}\label{obs:cf-disjoint}
Subcores are disjoint, by maximality of cores and property (2), and so
the number of subcores is bounded by $n$.
\end{obs}
After breaking cores up into subcores, the glue to link them back
together is saved as a \emph{subcore DAG}.
The subcore DAG is built with a directed edge from every
lower $k$ subcore to a strictly higher $k$ subcore that it is \emph{directly connected} to.
The subcore DAG from Figure~\ref{fig:cf} and its shell tree is shown in
Figure~\ref{fig:cfd}.
\begin{lemma}\label{lem:cfd-size}
The size of the subcore DAG is bounded by the size of $G$.
\end{lemma}
\begin{proof}
Each vertex in the subcore DAG corresponds to a connected
subgraph in the graph, and every edge in the DAG is a directed edge
that results from contracting all vertices in each subcore.
Contraction only removes edges and vertices, and no new edges or vertices are added.
\end{proof}
\begin{obs}
The subcore DAG is not a tree. Consider a $3$-clique and
a $4$-clique, connected via an edge, and both connected to another
vertex. This forms a directed triangle in the DAG.
\end{obs}
\begin{algorithm}[tb]
\KwIn{graph $G=(V,E)$, $\kappa$}
$C \gets \emptyset$; $D \gets \emptyset$ \tcp{DAG vertices and edges}
$L \gets [v : v \in V]$ \tcp{Labels}
\kc{Compute the subcores}
\For{$v \in V$}{
\lIf{$L[v] \neq v$}{continue}
$C \gets C \cup \{v\}$ \;
\kc{Perform a BFS that stays within $\kappa$ levels from $v$}
$Q \gets \mathrm{Queue}()$; $Q\mathrm{.push}(v)$ \label{alg:cfd:bfs-s} \;
\While{$Q \neq \emptyset$}{
$n \gets Q\mathrm{.pop}()$ \;
\For{$w \in \Gamma(n): L[w] \neq v \land \kappa[w] = \kappa[v]$}{
$Q\mathrm{.push}(w)$\;
$L[w] = v$ \label{alg:cfd:bfs-e}\;
}
}
}
\kc{Produce the DAG edges}
\For{$v \in V$ \label{alg:cfd:p-s}}{
\For{$n \in \Gamma(v)$ where $L[v] \neq L[n]$}{
$D \gets D \cup \{\tup{L[v], L[n]}\}$ \label{alg:cfd:p-e} \;
}
}
\Return{DAG=$(C,D)$}
\caption{\label{alg:cfd}
Building the subcore DAG.
}
\end{algorithm}
The process of building the subcore DAG is shown in
Algorithm~\ref{alg:cfd}.
This algorithm performs a breadth-first search (BFS) from each not-yet-labeled vertex.
The search is constrained to stay within a single $\kappa$ level, and DAG edges
are emitted for graph edges that cross $\kappa$ levels.
Efficient connected components algorithms,
e.g.,~\cite{shun2014simple}, could be used instead.
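For reference, Algorithm~\ref{alg:cfd} can be sketched in Python as follows (a minimal sketch assuming $\kappa$ is supplied as a dict, e.g., from peeling; representation details are ours):

```python
from collections import deque

def subcore_dag(adj, kappa):
    """Label each vertex with a subcore representative via a BFS
    restricted to one kappa level, then emit a DAG edge for every
    graph edge crossing from a lower to a higher kappa level."""
    label = {}
    for v in adj:
        if v in label:
            continue
        label[v] = v                      # v represents its subcore
        q = deque([v])
        while q:                          # BFS within kappa[v]'s level
            n = q.popleft()
            for w in adj[n]:
                if w not in label and kappa[w] == kappa[v]:
                    label[w] = v
                    q.append(w)
    nodes = set(label.values())
    edges = set()
    for v in adj:                         # produce the DAG edges
        for w in adj[v]:
            if kappa[v] < kappa[w]:
                edges.add((label[v], label[w]))
    return nodes, edges
```

Every vertex and edge is touched a constant number of times, matching the $O(n+m)$ bound.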
\begin{lemma}
Algorithm~\ref{alg:cfd} runs in $O(n+m)$.
\end{lemma}
\begin{proof}
From lines~\ref{alg:cfd:bfs-s}--\ref{alg:cfd:bfs-e}, inside the
internal BFS, each vertex will be visited once. Inside, each
edge will be visited once. Finally, the entire BFS will only start from
unvisited vertices.
For lines~\ref{alg:cfd:p-s}--\ref{alg:cfd:p-e}, each vertex and edge
will again be visited, resulting in $O(n+m)$ work.
\end{proof}
\subsection{Building the Shell Tree}
Given a subcore DAG and $\kappa$ values, we can compute the shell tree.
Our algorithm starts with the DAG and modifies it as it moves from the
sinks upwards (towards lower $k$ values), using a max-heap.
Each processed vertex: 1) identifies neighbors that are
at its $\kappa$ level, and merges itself with them;
2) sets a single node that is an in-neighbor with
the closest $\kappa$ value as the tree parent; and 3) moves all other in-edges to the
identified parent, ensuring it becomes a tree. The details are presented in
Algorithm~\ref{alg:st}.
\begin{algorithm}[tb]
\KwIn{DAG=$(C,D)$, $\kappa$}
$T=(N,E) \gets$ DAG \;
$S \gets \emptyset$ \;
$H \gets \mathrm{Heap}()$ \tcp{Empty Heap}
\For{sink $s \in N$}{
$H\mathrm{.push}(\kappa[s], s)$\;
}
\While{$H \neq \emptyset$}{
$v \gets H\mathrm{.pop}()$ \;
\lIf{$v \in S$}{{\bf continue}}
$S \gets S \cup \{v\}$ \;
\kc{Merge with neighbors at same level}
\While{$\exists n \in \Gamma(v) : \kappa[n] = \kappa[v]$}{
$\mathsf{Merge}(v, n)$ \;
$S \gets S \cup \{n\}$ \;
}
\kc{Move all remaining and new in neighbors}
$t \gets \argmax_{n \in \Gamma^{\mathrm{in}}(v)} \kappa[n]$ \;
\For{$n \in \Gamma^{\mathrm{in}}(v)$}{
\lIf{$n \neq t$}{$\mathsf{MoveEdge}(\tup{n,v} \to \tup{n,t})$}
$H\mathrm{.push}(\kappa[n], n)$ \;
}
}
\Return{$T$}
\caption{\label{alg:st}
Constructing the shell tree.
}
\end{algorithm}
\begin{lemma}
Algorithm~\ref{alg:st} correctly builds the shell tree.
\end{lemma}
\begin{proof}
We argue that after running Algorithm~\ref{alg:st}, each node will exactly contain the shell.
First, a node needs to contain all connected subcore DAG nodes at the given $\kappa$ value.
Second, it cannot have additional nodes merged with it.
We argue correctness via induction on $\kappa$.
At the highest $\kappa$ level, by the DAG properties, we know the tree nodes connected to the sink are shells and valid.
Now, consider a tree node with $\kappa$ and assume nodes at $\kappa' > \kappa$ are valid.
The node is formed by merging DAG nodes at the same level, which are all connected.
Any connectivity that is not at level $\kappa$ will be preserved by moving edges to the node's parent.
By Lemma~\ref{lem:laminar}, we know that any DAG
neighbors that it is connected to will also be connected to the
parent, and so the new tree node is valid.
\end{proof}
\begin{lemma}
Algorithm~\ref{alg:st} runs in $O(\rho(n+m)\log n)$.
\end{lemma}
\begin{proof}
The heap processes each vertex once, and each vertex can
potentially have all edges attached, resulting in $O(n+m)$ per
iteration.
However, edges may be carried upwards, and in the worst case
all edges except one are carried upwards resulting in a factor of
$\rho$. The log factor comes from the heap use.
\end{proof}
\section{Maintaining the \textsf{ST-Index}} \label{sec:maint-sti}
In this section, we show how to maintain the \textsf{ST-Index} on a graph stream. The
objective is to develop a batch dynamic algorithm $\mathcal{A}_\Delta$
that will output the shell tree \textsf{ST-Index}, while having a small internal state
$\mathcal{A}_\Delta^\mathrm{s}$ and a quick runtime with low variability.
\subsection{Maintaining Coreness}
We refer the reader to
\cite{zhang2017fast,sariyuce2013streaming,li2013efficient,Gabert21-ParSocial} for algorithms
to maintain $\kappa$.
These approaches (and similarly \textsf{ST-Index}) extend to trusses~\cite{cohen2008trusses} and other nuclei~\cite{sariyuce2015finding} by use of a hypergraph~\cite{Gabert21-WSDM}.
For our experiments we implemented and use
\textsf{Order}~\cite{zhang2017fast}, the state-of-the-art decomposition
maintenance algorithm.
For notational convenience, consider a time $t$.
Let $G^-$ denote $G^{(t)}$ and $G^+$ denote $G^{(t+\Delta)}$.
Let $\kappa^{-}$ denote the $\kappa$ values in $G^-$ and $\kappa^{+}$
denote $\kappa$ values in $G^+$.
We take advantage of the following crucial property of
coreness values on graphs: the subcore theorem.
\begin{theorem}[\cite{sariyuce2013streaming}]\label{thm:subcore}
Let $\{u,v\}$ be an edge change.
Suppose $\kappa_{G^-}[u] \leq \kappa_{G^-}[v]$.
Then, only vertices in the subcore containing $u$ may have $\kappa$
values change in $G^+$, and they may only change by 1 (increase by $1$ for
insertion, decrease by $1$ for deletion.)
\end{theorem}
\subsection{Single Edge Maintenance Algorithm} \label{sec:se}
\begin{algorithm}
\KwIn{graph $G=(V,E)$, $e=\{u,v\}$, $\kappa^-$, $\kappa^+$, \textsf{ST-Index} $=(M, T)$}
\lIf{$\kappa^-[u] > \kappa^-[v]$}{swap $u$, $v$}
$K \gets M[u]$ \tcp*{find the tree node for $u$}
$S \gets \{ w \in V : \kappa^-[w] \neq \kappa^+[w]\}$ \;
\If{$M[u]\mathrm{.vertices} = S$}{
\kc{The entire shell moves as one subcore}
\For{$c \in K\mathrm{.children}$}{
\lIf{$c\mathrm{.k} = k+1$}{$\mathsf{Merge}(K, c)$}
}
$K\mathrm{.k} \gets k+1$ \;
\Return{$T$}
}
\kc{We need to merge or create a new sink}
$K\mathrm{.vertices} \gets K\mathrm{.vertices} \setminus S$ \;
$X \gets \tup{K, k+1, S}$ \tcp*{new tree node with parent $K$, level $k+1$, vertices $S$}
\For{$w \in S$}{
\For{$n \in \Gamma_{G^-}(w) \setminus S$}{
\lIf{$\kappa^+[n] \geq k+1$}{
$\mathsf{MergeOrConnect}(X, M[n])$
}
}
}
\kc{Merge the path with $v$}
$c \gets M[v]$, $l \gets \mathrm{SINK}$ \;
\While{$\kappa[c] \geq \kappa^+[u]$}{
$l \gets c$; $c \gets c\mathrm{.parent}$ \;
}
$\mathsf{MergePaths}(X, c)$ \;
\Return{$T$}
\caption{\label{alg:maint-inc}
\textsf{SingleEdge} (incremental case).
}
\end{algorithm}
\begin{algorithm}
\KwIn{\textsf{ST-Index} $= (M,T)$, $U$, $V$}
\lIf{$U = V$}{\Return{}}
\lIf{$\kappa[U] > \kappa[V]$}{swap $U$, $V$}
$c \gets V$; $l \gets \mathrm{SINK}$ \;
\While{$\kappa[c] \geq \kappa[U]$}{
$l \gets c$; $c \gets c\mathrm{.parent}$ \;
}
\If{$\kappa[U] = \kappa[c]$}{$\mathsf{Merge}(U, c)$\;
\Return{$\mathsf{MergePaths}(c, U\mathrm{.parent})$}}
\Else{$\mathsf{MakeChild}(U, c)$\;
\Return{$\mathsf{MergePaths}(c, U)$}}
\caption{\label{alg:merge-paths}
$\mathsf{MergePaths}$, which merges two paths starting from tree nodes
$U$ and $V$ until the root.
}
\end{algorithm}
\begin{figure}
\caption{\label{fig:inc} A visual depiction of \textsf{SingleEdge} handling an edge insertion.}
\end{figure}
The main idea for maintaining the \textsf{ST-Index} edge-by-edge is to first break
apart any core
or shell that was increased and then repair the tree by merging together
the paths from the endpoints. For deletions,
a map is made that determines where, after a core is split, it could
return to in the tree.
Then, the path from the core to the root
is traversed and any potential split is determined.
Our algorithm shares many similarities to the community search algorithm of~\cite{fang2017effective}.
Our algorithm addresses cores instead of the more general community search problem on attributed graphs.
Specifically, it does not need to support queries involving subsets of vertices.
We refer to this approach as \textsf{SingleEdge}.
We describe insertions in detail---deletions are similar but split nodes~\cite{fang2017effective}.
Let $K$ be the tree node that has a \emph{lower $\kappa$ value} given an edge insertion.
We first check if all of $K$'s vertices leave.
If so, we move $K$ down and merge its children with connected subcores.
Next, we iterate through the moved
vertices and identify if they are connected to a shell tree node at
level $k+1$.
If so, we merge those shell tree nodes together. If not, we
create a new tree node for the moved vertices.
Then, we walk up the tree from both endpoints
and, starting at level $k+1$, begin merging all visited vertices.
The algorithm is presented in Algorithm~\ref{alg:maint-inc}, with merge paths presented in Algorithm~\ref{alg:merge-paths}.
A visual depiction is given in Figure~\ref{fig:inc}.
\begin{lemma}
The runtime for Algorithm~\ref{alg:maint-inc} is $O(\abs{\Gamma(S)}
+ \rho n)$, where $S$ is the subcore that increases $\kappa$.
\end{lemma}
\begin{proof}
In the first part, the modified subcore and all of its immediate
neighbors are accessed, resulting in $O(\abs{\Gamma(S)})$ work. After that,
in the worst case, the height of the tree will be accessed to find the
closest neighbor to merge in, resulting in $O(\rho n)$ work.
\end{proof}
\begin{figure}
\caption{\label{fig:multins} An example graph before and after a batch of insertions.}
\end{figure}
\begin{figure*}
\caption{\label{fig:multiins-baseline} Work performed by \textsf{SingleEdge} and \textsf{Batch} on the example in Figure~\ref{fig:multins}.}
\end{figure*}
\subsection{Batch Maintenance}
We now present our batch maintenance algorithm.
First, we present the opportunity for reducing work by providing an example.
In Figure~\ref{fig:multins}, we show the graph before and after the batch.
The idea is to \emph{keep the subcore DAG in memory} and use it to update the shell tree.
This can naturally be combined with \textsf{SingleEdge} to provide a hybrid approach, moving between the two based on a batch size.
We maintain an additional pointer between every node in the tree and every node in the subcore DAG.
There are two main parts to maintaining the shell tree in the subcore batch algorithm.
First, we maintain the subcore DAG by iterating over changed vertices and recomputing any subcore changes, creating and merging subcores (locally) as appropriate.
Second, we need to maintain the \textsf{ST-Index} given the DAG changes.
To do this we begin by making all of the DAG changes propagate forward to the tree.
Any deleted DAG node results in deleting the reference from the subcore tree, any newly empty tree nodes are deleted, and any new DAG nodes and their connections are added to the tree.
The resulting structure is no longer a tree.
We then run the heap-based Algorithm~\ref{alg:st} to finish turning the modified structure back into a tree.
During this process we maintain the reverse vertex maps.
Unlike \textsf{SingleEdge}, our batch approach naturally covers deletions identically to insertions and both insertions and deletions can be mixed inside of batches.
This is due to handling both endpoints of an edge change, instead of only the endpoint with a lower $\kappa$ value at some point in time.
The approach is shown in Algorithm~\ref{alg:batch}.
Following the example in Figure~\ref{fig:multins}, we show the saved work between \textsf{SingleEdge} and \textsf{Batch} in Figure~\ref{fig:multiins-baseline} (next page).
\begin{algorithm}
\KwIn{\textsf{ST-Index} $= (M,T)$, DAG $D$, batch $B$}
$C \gets \{ v : v \in e \in B \}$;
$K \gets \emptyset$ \;
$I \gets \emptyset$ \tcp*{Visited set}
\For{$v \in C$}{
\lIf{$v \in I$}{continue}
$I \gets I \cup \{v\}$\;
$Q \gets \mathrm{Queue}$; $Q\mathrm{.push}(v)$ \tcp*{Change queue}
\While{$Q \neq \emptyset$}{
$q \gets Q\mathrm{.pop}()$\;
$n_d,n_T \gets L[q]$ \tcp*{DAG/tree node of $q$}
$K \gets K \cup \{ n_d \}$\;
$n_d' \gets $ new DAG node \;
assign $q$ to $n_d'$ in $D$ and $M, T$ \;
$S \gets \mathrm{Queue}$; $S\mathrm{.push}(q)$ \tcp*{Subcore queue}
\While{$S \neq \emptyset$}{
$n \gets S\mathrm{.pop}()$ \;
\tcc{Check if $n$ is in the subcore}
\If{$\kappa^+[n] \neq \kappa^+[q]$ }{
\tcc{If $n$ changed, process it separately}
\If{$n \not\in I$ and $\kappa^-[n] \neq \kappa^+[n]$}{
$I \gets I \cup \{n\}$ \;
$Q\mathrm{.push}(n)$ \;
}
continue \;
}
\If{$n \not\in I$}{
$I \gets I \cup \{n\}$ \;
$Q\mathrm{.push}(n)$ \;
}
assign $n$ to $n_d'$ in $D$ and $M,T$ \;
}
}
}
remove newly isolated nodes in $D$ \;
copy DAG edges from DAG nodes in $K$ to $T$ \;
remove newly empty tree nodes in $T$ \;
run Algorithm~\ref{alg:st} \;
\caption{\label{alg:batch}
The \textsf{Batch} algorithm.
}
\end{algorithm}
Our runtime is the cost of Algorithm~\ref{alg:st} plus the cost of a BFS over each modified subcore.
Correctness follows from Algorithm~\ref{alg:st} as we maintain the built data structures and operations.
In the worst case this can be the runtime of Algorithm~\ref{alg:st}.
However, note that the BFS on subcores is limited to modified subcores.
As such, empirically we run faster than re-computing from scratch, as shown in the following Section~\ref{sec:experiments}.
\section{Empirical Analysis}\label{sec:experiments}
In this section we perform an experimental evaluation of our approach to
demonstrate that it is able to provide core queries on rapidly changing
real-world graphs.
\paragraph{Environment}
We implemented our algorithm in C++ and compiled with GCC 10.2.0 at
\texttt{O3}.
We ran on Intel Xeon E5-2683 v4 CPUs at 2.1 GHz with 256 GB of RAM and CentOS 7.
To perform coreness maintenance, we implemented \textsf{Order}~\cite{zhang2017fast}.
Any coreness maintenance approach can be used in its place.
We include all memory allocation costs in our runtimes.
We use a hash map of vectors to store the graph, and store both in- and
out-edges.
We ran five trials for each experiment and show the results from all
trials.
\paragraph{Baseline}
As our baseline, we implemented the non-batch maintenance approach
from~\cite{fang2017effective}, which we ported to the case of computing cores on graphs (see Section~\ref{sec:se}). We refer to this as \textsf{SingleEdge}.
When operating on a batch, \textsf{SingleEdge} runs independently for each
edge change. Insertions and deletions can therefore easily be mixed.
We only show results with insertions as they are the harder case~\cite{fang2017effective} and there are few known benchmark datasets with frequent deletions.
\paragraph{Datasets}
The graphs that we evaluate with are benchmark graphs that are
representative of real-world graphs from a variety of domains and with
different properties. We downloaded them from SNAP~\cite{snapnets}
(excluding Ar-2005, downloaded from~\cite{BoVWFI}). The graphs we use are given in
Table~\ref{tab:datasets}. We cleaned the data by removing self loops and
duplicate edges, and treated graphs as undirected.
We randomized the edge order, simulating a graph stream, and performed our experiments by first removing random edges and next inserting them.
\begin{table}
\caption{\label{tab:datasets}
Graphs used with $n$, $m$ in millions.
}
\centering
\begin{tabular}{lrrr} \toprule
Name & $n$,\ \ \ $m$ & DAG $n$, \ \ $m$ & $\abs{T}$ \\ \midrule
Ar-2005~\cite{BoVWFI,BRSLLP} & 22, 640 & 12, \ 47 & 28 K\\
Orkut~\cite{yang2015defining} & 3, 117 & 1, \ 22 & 254 \ \ \ \\
LiveJ~\cite{yang2015defining} & 4, \ 35 & 2, \ 12 & 2 K \\
Pokec~\cite{takac2012data} & 2, \ 22 & 1, \ \ 5 & 54 \ \ \ \\
Patents~\cite{leskovec2005graphs} & 4, \ 17 & 2, \ \ 4 & 4 K \\
BerkStan~\cite{leskovec2009community} & 0.7, \ \ 7 & 0.2, 0.8 & 2 K \\
Google~\cite{leskovec2009community} & 1, \ \ 4 & 0.4, 1.2 & 5 K \\
YouTube~\cite{yang2015defining} & 1, \ \ 3 & 1, 2.5 & 140 \ \ \ \\
\bottomrule
\end{tabular}
\end{table}
\begin{figure}
\caption{\label{fig:index} Index construction time for \textsf{Batch}.}
\end{figure}
\begin{figure}
\caption{\label{fig:qC} Query times for $\ensuremath{\mathcal{C}}$ on \textsf{ST-Index}.}
\end{figure}
\begin{figure}
\caption{\label{fig:qH} Query times for $\ensuremath{\mathcal{H}}$ on \textsf{ST-Index}.}
\end{figure}
\begin{figure*}
\caption{\label{fig:varybatch} Maintenance runtime across varying batch sizes.}
\end{figure*}
\paragraph{Experiments}
Our main experimental goal is to evaluate the real-world feasibility of
our approach on modern graphs and systems with highly variable and large batch
sizes.
First, we show the index construction time for \textsf{Batch}.
The results are shown in Figure~\ref{fig:index}.
In all cases building the tree is more expensive than building the DAG.
The overall runtime reinforces the need for dynamic algorithms as for
large graphs, such as Orkut, the DAG construction takes around 90 seconds
and the tree construction takes around 330 seconds.
Next, we want to show that \textsf{ST-Index} is a useful index for cores.
We report the query times for \ensuremath{\mathcal{C}} in Figure~\ref{fig:qC} and \ensuremath{\mathcal{H}} in Figure~\ref{fig:qH} on \textsf{ST-Index}.
For \ensuremath{\mathcal{C}}, we performed queries from 1000 randomly sampled vertices with uniformly random $k$-values such that the vertex is in a $k$-core.
For all graphs, all cores are returned in under one second with many in the tens of milliseconds.
Given that our query is efficient, the runtime largely consists of copying memory.
The denser the core the faster the return tends to be, as there are fewer vertices to copy out.
In many cases, the runtimes are fast enough to be used for interactive applications, e.g., in web page content.
For \ensuremath{\mathcal{H}}, we report the time to build and return the full hierarchy, including each node at each level.
This is under 10 seconds for all graphs, showing that full hierarchies can be used for interactive time applications.
Finally, we maintained cores for 100 batches of different batch sizes for each graph.
The results are shown in Figure~\ref{fig:varybatch}.
In all cases, when batch sizes are large \textsf{Batch} remains below both \textsf{FromScratch} and
\textsf{SingleEdge}.
For a batch dynamic algorithm, we are looking for the region below re-computing from scratch and below single-edge algorithms.
In some graphs, such as Pokec and Patents, it is not a large region, however in all graphs it exists and provides significant improvements.
Future work involves combining the DAG construction and maintenance with the direct tree maintenance to achieve an effective hybrid approach, following the lowest of all of the curves.
Note that these are log-log plots, and so even for Patents our batch approach is $2 \times$ faster than re-computing from scratch at batch sizes of one million.
\section{Conclusion} \label{sec:conclusion}
We focus on the important but overlooked problem of returning
\emph{cores}, as opposed to \emph{coreness} values.
We consider both core
queries, which return a $k$-core, and hierarchy queries, which return the full core hierarchy.
Our approach applies beyond $k$-cores to other arbitrary nuclei, such as trusses.
We develop algorithms around a tree-based index, the \textsf{ST-Index}, that
is efficient and takes linear space in the number of graph vertices.
We provide an algorithm to construct the \textsf{ST-Index} using a new approach based on a subcore DAG.
We design and implement a batch maintenance algorithm for \textsf{ST-Index} that uses the same subcore DAG and can handle variable and high batch sizes.
We show that our approach is able to run faster than edge-by-edge approaches on rapidly changing graphs and can return cores and hierarchies fast enough for interactive use.
\begin{acks}
This work was funded in part by the NSF under Grant CCF-1919021 and in part by the Laboratory Directed Research and Development program at Sandia National Laboratories.
Sandia National Laboratories is a multimission laboratory managed and operated by National Technology \& Engineering Solutions of Sandia, LLC, a wholly owned subsidiary of Honeywell International Inc., for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA0003525.
\end{acks}
\balance
\end{document}
\begin{document}
\title{Covert Quantum Internet}
\author{Kamil Br\'adler}
\email{[email protected]}
\affiliation{Department of Mathematics and Statistics, University of Ottawa, Ottawa, Canada}
\author{George Siopsis}
\email{[email protected]}
\affiliation{Department of Physics and Astronomy, The University of Tennessee, Knoxville, TN 37996-1200, U.S.A.}
\author{Alex Wozniakowski}
\email{[email protected]}
\affiliation{Department of Physics, Harvard University, Cambridge, MA 02138, U.S.A.}
\date{\today}
\begin{abstract}
We apply covert quantum communication based on entanglement generated from the Minkowski vacuum to the setting of quantum computation and quantum networks. Our approach hides the generation and distribution of entanglement in quantum networks by taking advantage of relativistic quantum effects. We devise a suite of covert quantum teleportation protocols that utilize the shared entanglement, local operations, and covert classical communication to transfer or process quantum information in stealth. As an application of our covert suite, we construct two prominent examples of measurement-based quantum computation, namely the teleportation-based quantum computer and the one-way quantum computer. In the latter case we explore the covert generation of graph states, and subsequently outline a protocol for the covert implementation of universal blind quantum computation.
\end{abstract}
\maketitle
\onecolumngrid
\section{Introduction}
The Internet is ubiquitous in daily life, linking a multitude of devices; but the security of the Internet is a public concern. For the exchange of sensitive information a different type of network, called the Darknet, provides anonymous connections that are strongly resistant to eavesdropping and traffic analysis \cite{Syverson et al}. Recently, the structure and resilience of the Internet and the Darknet were analyzed, and the latter was shown to be a more robust network under various types of failures \cite{Domenico and Arenas}. Covert protocols for classical communication \cite{lee2015achieving, sobers2016covert, mukherjee2016covert} and computation \cite{von Ahn et al, Chandran et al, Jarecki} supply networks with a concealing medium or object, enabling data to be transferred or processed without detection. Quantum information science provides a new perspective on the networking of devices, as well as the possible types of algorithms and protocols. In the setting of the quantum internet the network nodes generate, process, and store quantum information locally, and entanglement is distributed across the entire network \cite{Cirac et al 1, H. Jeff Kimble, Rod Van Meter}. As part of the paradigm of local operations and classical communication (LOCC), quantum teleportation protocols utilize the shared entanglement to faithfully transfer quantum data from site to site or implement quantum logic gates for distributed quantum computation \cite{Cirac et al 2, Pirandola et al, Pirandola and Braunstein, Van Meter and Devitt}.
In certain situations, such as a secret government mission, quantum networks might need their communications or computations to be performed without detection. Here, we solve this problem by extending recent work on covert quantum communication \cite{Absolutely Covert} to the setting of quantum computation and quantum networks. We devise a suite of covert quantum teleportation protocols taking advantage of relativistic quantum effects. We show, for instance, that quantum teleportation protocols can be performed in stealth by utilizing the entanglement present in Minkowski vacuum and covert classical communication to hide the non-local parts in quantum teleportation.
In our setup we hide the generation and sharing of entanglement between network nodes, without a concealing medium or object, through the relativistic modes of the Minkowski vacuum, and the covert entanglement persists over long distances. We introduce a Minkowski vacuum-assisted amplification scheme in Section \ref{Sect:CovertCommunication}, followed by standard entanglement distillation with covert classical communication to recover covert Bell states. The entanglement-swapping protocol with covert classical communication distributes entanglement across the network, which provides a resource for our suite of covert quantum teleportation protocols. This enables quantum data to be transferred or processed in the quantum network without detection. Thereby, the quantum network's operations remain hidden from any adversary outside of the network. We call this the \textit{Covert Quantum Internet}.
We apply our suite of covert quantum teleportation protocols to construct two prominent examples of measurement-based quantum computation \cite{Jozsa, Briegel et al}:\ the teleportation-based quantum computer \cite{Gottesman and Chuang, Zhou et al, Eisert et al, Nielsen, Leung, Childs et al} and the one-way quantum computer $\mbox{QC}_{\mathcal{C}}$ \cite{Raussendorf and Briegel 1, Raussendorf and Briegel 2, Raussendorf et al}. Teleportation-based quantum computers use the idea of gate teleportation to carry out computations. We give covert versions of gate teleportation protocols, such as the one-bit teleportation primitive \cite{Zhou et al} and the multipartite compressed teleportation (MCT) protocol \cite{MCT}, in Section \ref{Sect:CovertTeleportation}. We discuss universal quantum computation with covert gate teleportation in Section \ref{Sect:CovertTeleportationComputer}. A one-way quantum computer $\mbox{QC}_{\mathcal{C}}$ carries out computations solely by performing single-qubit measurements on a fixed, many-body resource state, and the measurement bases determine the gate or algorithm that is implemented. We show how to covertly generate graph states, such as the topological $3$D cluster state \cite{Raussendorf and Harrington, Raussendorf Harrington Goyal} or the recently discovered Union Jack state with nontrivial $2$D symmetry protected topological order (SPTO) \cite{Miller and Miyake}, in Section \ref{Sect:OneWay}. The single-qubit measurements on the covert resource state are performed locally during implementation of a quantum algorithm, and the feed-forward of measurement outcomes is shared through covert classical communication. In addition we outline a covert implementation of the Broadbent, Fitzsimons, and Kashefi (BFK) protocol for universal blind quantum computation \cite{BFK} in Section \ref{Sect:OneWay}. Finally, we conclude in Section \ref{Sect:Conclusion}.
\section{Covert communication}
\label{Sect:CovertCommunication}
Long before the development of cryptography and encryption, one of humankind's best techniques for secretly delivering a message was to hide the fact that a message was delivered at all. Methods for covert quantum communication with a concealing medium, i.e., thermal noise, were considered in \cite{Bash et al, Arrazola and Scarani}, but the conclusion was that stealth capabilities vanish when the medium is absent. In \cite{Absolutely Covert} the ultimate limits on covert quantum communication were presented. In this section we describe the covert generation and sharing of entanglement between Alice and Bob, or network nodes in a quantum network, without a concealing medium or object, using plain two-state inertial detectors \cite{bibRez}. The goal is to generate entanglement from the vacuum. We describe a process which results in entangled detectors. However, the amount of entanglement is very small. We amplify it by repeating the process, thus accumulating a finite amount of entanglement from the vacuum. The fidelity of the resulting state shared by the detectors possessed by Alice and Bob is then increased close to perfection by standard distillation techniques. The previous work of some of the authors,~\cite{Absolutely Covert}, differs from the current analysis in the type of detectors used. In both cases the detectors are inertial, but in~\cite{Absolutely Covert} we used a pair of two-level atoms with a time-dependent energy gap, which poses a considerable challenge for current quantum technology. Here, the detectors are standard two-level atoms.
Alice and Bob are in possession of two-state detectors.
The detectors couple to vacuum modes which we model by a real massless scalar field $\phi$.
They are separated by a distance $L$. For definiteness, let $\mathbf{r} = \mathbf{r}_A \equiv (0,0,0)$ for Alice, and $\mathbf{r} = \mathbf{r}_B \equiv (0,0,L)$ for Bob. If Alice's (Bob's) detector is turned on at time $t$ ($t'$), then the proper time between the two detectors while in operation is
\begin{equation} \Delta s^2 = (t'-t)^2 - L^2. \end{equation}
We expect maximal correlations between the two detectors along the null line connecting the two events, i.e., $\Delta s = 0$, or $t'-t= L$. Thus, if Alice's detector is turned on at $t = 0$, then optimally, Bob's detector will be turned on at time $t' = L$.
The two-point correlator for the massless scalar field is given by
\begin{equation} \Delta (t, \mathbf{r}; t' , \mathbf{r}') = - \frac{1}{4\pi^2}\, \frac{1}{(t-t' - i\varepsilon)^2 - (\mathbf{r}-\mathbf{r}')^2} \; \; .\end{equation}
For best results at finite $L$, we choose detector window functions centered around points at which $\Delta$ diverges (i.e., Alice and Bob lie along a null line). Thus, for Alice and Bob, we choose, respectively,
\begin{equation}\label{eq3} w_A(t) = \lambda \mathbf{w}(t) \ , \ \ w_B(t) = \lambda \mathbf{w}(t-L) \ , \ \ \mathbf{w}(t) = e^{-\frac{t^2}{\sigma^2}}~, \end{equation}
where $\lambda >0$ is a coupling constant whose value depends on the details of the detector setup. In general, it is expected to be small. $\sigma$ is the width of the time window during which the detector is turned on.
Assuming localized detectors, the Hamiltonian is
\begin{equation} H = H_A + H_B \ , \ \ H_{k} = w_{k} (t) \left( e^{i\delta t} \sigma_k^+ + e^{-i\delta t} \sigma_k^- \right) \phi (t, \mathbf{r}_{k}) \ \ \ (k=A,B) \end{equation}
where $\hbar\delta$ is the energy gap of the two states of a detector, and $\sigma_A^\pm$ ($\sigma_B^\pm$) are spin ladder operators for Alice's (Bob's) detector.
The massless scalar field can be expanded in terms of creation and annihilation operators as
\begin{equation} \phi (t,\mathbf{r}) = \int \frac{d^3k}{(2\pi)^3 2|\mathbf{k}|} \left( a(\mathbf{k}) e^{-i(|\mathbf{k}| t - \mathbf{k}\cdot \mathbf{r})} + a^\dagger (\mathbf{k}) e^{i(|\mathbf{k}| t - \mathbf{k}\cdot \mathbf{r})}\right)
\end{equation}
where $a(\mathbf{k})$ annihilates the vacuum ($a(\mathbf{k}) |0\rangle = 0$). The Hilbert space of the system is the tensor product of excitations of the vacuum state $|0\rangle$, and the two-dimensional spaces of Alice and Bob spanned by $\{ |0\rangle_k , |1\rangle_k \}$, $k=A,B$.
Assuming an initial state $|in\rangle = |0\rangle\otimes |0\rangle_A \otimes |0\rangle_B$, the evolution of the system is governed by the Hamiltonian $H$. After tracing out the field degrees of freedom, it is easy to see that the final state is of the form
\begin{equation}\label{eq5} \varrho_{AB} = \left[ \begin{array}{cccc}
a_1 & 0 & 0 & c_1 \\ 0 & a_2 & c_2 & 0 \\ 0 & c_2^\ast & b_2 & 0 \\ c_1^\ast & 0 & 0 & b_1
\end{array}\right] \end{equation}
Thus the evolution of the system of the two detectors after they are switched on and off with the switching profile \eqref{eq3} is given by the quantum channel $\mathcal{N} : |in\rangle \mapsto \varrho_{AB}$.
The final state is in general entangled. However, the amount of entanglement is minute. To enhance the entanglement, after the detectors have been switched off and therefore decoupled from the scalar field, we bring the state of $\phi$ back to the vacuum, and repeat the process. This can be repeated $N$ times, where $N$ is large enough for appreciable entanglement generation. At the $n$th step, we apply the channel
\begin{equation} \mathcal{N}_n : \varrho_{AB}^{(n)} \mapsto \varrho_{AB}^{(n+1)} \end{equation}
At each step, the state is of the same form as \eqref{eq5},
\begin{equation} \varrho_{AB}^{(n)} = \left[ \begin{array}{cccc}
a_1^{(n)} & 0 & 0 & c_1^{(n)} \\ 0 & a_2^{(n)} & c_2^{(n)} & 0 \\ 0 & c_2^{(n)\ast} & b_2^{(n)} & 0 \\ c_1^{(n)\ast} & 0 & 0 & b_1^{(n)}
\end{array}\right] \end{equation}
with $\varrho_{AB}^{(0)} = |00\rangle_{AB}\langle 00|$, and $\varrho_{AB}^{(1)} = \varrho_{AB}$.
Using perturbation theory, we obtain
\begin{equation} \label{eq:iter1storder}
\varrho_{AB} = \left[ \begin{array}{cccc}
1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0
\end{array}\right] + \lambda^2 \left[ \begin{array}{cccc}
-2 \mathcal{J}_1 & 0 & 0 & - \mathcal{J}^*_2 \\ 0 & \mathcal{J}_1 & \mathcal{J}_3 & 0 \\ 0 & \mathcal{J}_3^\ast & \mathcal{J}_1 & 0 \\ -\mathcal{J}_2 & 0 & 0 & 0
\end{array}\right] + \mathcal{O} (\lambda^4)
\end{equation}
Explicitly, at first order in $\lambda^2$ we have
\begin{eqnarray} \mathcal{J}_1 &=& - \frac{1}{4\pi^2} \int_{-\infty}^{\infty} dt\, \mathbf{w}(t)
\int_{-\infty}^{\infty} dt'\, \mathbf{w}(t') \frac{e^{i \delta (t -t')}}{(t'-t-i\epsilon)^2 } \, , \nonumber\\
\mathcal{J}_{2} &=& - \frac{1}{4\pi^2} \int_{-\infty}^{\infty} dt\, \mathbf{w}(t)
\int_{-\infty}^{\infty} dt'\, \mathbf{w} (t' -L)
\frac{e^{i \delta (t +t')}}{(t'-t-i\epsilon)^2 -L^2} \, ,\nonumber\\
\mathcal{J}_{3} &=& - \frac{1}{4\pi^2} \int_{-\infty}^{\infty} dt\, \mathbf{w}(t)
\int_{-\infty}^{\infty} dt'\, \mathbf{w} (t' -L)
\frac{e^{i \delta (t -t')}}{(t'-t-i\epsilon)^2 -L^2} .\end{eqnarray}
These expressions contain poles. Using the Fourier transform of the detector profile,
\begin{equation} \widetilde{\mathbf{w}} (\omega) = \int_{-\infty}^\infty dt\, e^{-i\omega t}\mathbf{w}(t) = \sigma\sqrt{\pi}\, e^{-\sigma^2 \omega^2/4}\end{equation}
and contour integration, we arrive at expressions which are free of singularities and amenable to numerical integration\footnote{The expressions can be evaluated analytically as well.}:
\begin{eqnarray} \mathcal{J}_1 &=& \frac{1}{4\pi^2} \int_0^\infty d\omega\, \omega\, \widetilde{\mathbf{w}}^2 (\omega +\delta), \nonumber\\
\mathcal{J}_{2} &=& \frac{1}{4\pi^2 L} \int_0^\infty d\omega\, \widetilde{\mathbf{w}}(\omega -\delta)\widetilde{\mathbf{w}}(\omega+\delta)\, e^{-iL(\omega-\delta)} \sin \omega L,\nonumber\\
\mathcal{J}_{3} &=& \frac{1}{4\pi^2 L} \int_0^\infty d\omega\, \widetilde{\mathbf{w}}^2(\omega+\delta)\, e^{-iL(\omega-\delta)} \sin \omega L. \end{eqnarray}
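As a sanity check, the Gaussian transform $\widetilde{\mathbf{w}}$ and the first of these integrals can be evaluated numerically. The sketch below uses illustrative dimensionless values of $\sigma$ and $\delta$, not the experimental parameters quoted with the figures:

```python
import numpy as np

# Illustrative parameters (dimensionless units), not the paper's values.
sigma, delta = 2.0, 0.5

def w_tilde(omega):
    # Closed-form Fourier transform of w(t) = exp(-t^2/sigma^2).
    return sigma * np.sqrt(np.pi) * np.exp(-sigma**2 * omega**2 / 4)

# Check the closed form against a direct numerical Fourier integral.
t = np.linspace(-50.0, 50.0, 200001)
w = np.exp(-t**2 / sigma**2)
for omega in (0.0, 0.7, 1.3):
    f = np.exp(-1j * omega * t) * w
    numeric = np.sum((f[1:] + f[:-1]) / 2) * (t[1] - t[0])   # trapezoid rule
    assert abs(numeric - w_tilde(omega)) < 1e-8

# J1 from the singularity-free representation: manifestly real and positive.
omega_grid = np.linspace(0.0, 20.0, 200001)
integrand = omega_grid * w_tilde(omega_grid + delta)**2
J1 = (np.sum((integrand[1:] + integrand[:-1]) / 2)
      * (omega_grid[1] - omega_grid[0]) / (4 * np.pi**2))
assert J1 > 0
```

Because the integrands are smooth Gaussians, a simple trapezoid rule on a wide grid already reproduces the closed form to high accuracy.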
After $n$ applications of the quantum channel, we bring the system of the two detectors to the state
\begin{equation}\label{eq:firstorder} \varrho_{AB}^{(n)} = \left[ \begin{array}{cccc}
1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0
\end{array}\right] + n \lambda^2 \left[ \begin{array}{cccc}
-2 \mathcal{J}_1 & 0 & 0 & - \mathcal{J}^*_2 \\ 0 & \mathcal{J}_1 & \mathcal{J}_3 & 0 \\ 0 & \mathcal{J}_3^\ast & \mathcal{J}_1 & 0 \\ -\mathcal{J}_2 & 0 & 0 & 0
\end{array}\right] + \mathcal{O} (\lambda^4) \end{equation}
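The linear accumulation of the entangling $|00\rangle\langle 11|$ coherence in this first-order state can be sketched directly. The $\mathcal{J}$ values and $\lambda^2$ below are made up for illustration, not computed from the integrals above:

```python
import numpy as np

# First-order iterated state rho_AB^(n): identity-like part plus n*lambda^2
# times the correction matrix. J1, J2, J3 and lam2 are illustrative values.
lam2 = 0.01
J1, J2, J3 = 0.02, 0.015 + 0.004j, 0.018 + 0.002j

def rho_n(n):
    rho = np.zeros((4, 4), dtype=complex)
    rho[0, 0] = 1.0
    corr = np.array([[-2 * J1, 0, 0, -np.conj(J2)],
                     [0, J1, J3, 0],
                     [0, np.conj(J3), J1, 0],
                     [-J2, 0, 0, 0]], dtype=complex)
    return rho + n * lam2 * corr

rho = rho_n(100)
assert np.allclose(rho, rho.conj().T)          # Hermitian X-form is preserved
assert abs(np.trace(rho).real - 1.0) < 1e-12   # correction matrix is traceless
# The coherence responsible for entanglement grows linearly with n:
assert abs(rho_n(200)[3, 0]) > abs(rho_n(100)[3, 0])
```

The traceless, Hermitian correction matrix guarantees a valid density matrix to this order, while the off-diagonal block grows as $n\lambda^2$, mirroring the amplification argument in the text.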
Thus, after $N$ steps, the effective coupling constant becomes $N\lambda^2$, and entanglement is amplified. To quantify the results, we
use the singlet fraction of $\varrho$ defined as~\cite{horodeckisReview}
\begin{equation}\label{eq7} \mathcal{F} (\varrho) = \max_\Phi \langle \Phi |\varrho |\Phi \rangle \ , \ \ |\Phi \rangle = U_A\otimes U_B \frac{1}{\sqrt{2}}\left( |00\rangle_{AB} + |11\rangle_{AB} \right).
\end{equation}
\begin{figure}[h]
\centering
\includegraphics[width=14.5cm]{fig_delta.pdf}
\caption{\label{fig:1} The singlet fraction \eqref{eq7} \emph{vs.}\ the distance $L$ of the two detectors for different values of their energy gap. The number of iterations is $N=500$, $\lambda^2=0.01$, and $\sigma=10~\mu\mathrm{s}$.}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=14.5cm]{fig_iters.pdf}
\caption{\label{fig:2} The singlet fraction \eqref{eq7} \emph{vs.}\ the distance $L$ of the two detectors at different stages of entanglement amplification for $\lambda^2=0.01$, $\delta=100~\mathrm{kHz}$, and $\sigma=10~\mu\mathrm{s}$.}
\end{figure}
We have numerically optimized the singlet fraction using perturbation theory to second order in $\lambda^2$. We have shown above explicit analytic expressions at first order in $\lambda^2$. We have also obtained second-order analytical expressions, but they are not included here for brevity; their derivation is based on the expansion of two two-level detectors to an arbitrary perturbative order in~\cite{bradlerExpansion}. Using these second-order analytical expressions, we have calculated the singlet fraction \eqref{eq7} numerically. Its behavior is shown in Figs.~\ref{fig:1} and \ref{fig:2}. For a wide range of parameters of the system, we obtain $\mathcal{F} > 1/2$, showing entanglement extraction from the vacuum. More precisely, there are known two-qubit entangled states with $\mathcal{F}<1/2$, but in our case the entanglement of formation (EOF)~\cite{horodeckisReview} is zero whenever $\mathcal{F}\leq1/2$, and positive otherwise. The entanglement collected iteratively through the Minkowski vacuum-assisted amplification scheme is shown in Fig.~\ref{fig:2}. As the number of iterations increases, the original minute amount of entanglement reaches quite substantial values: measured as the singlet fraction, it goes well above $1/2$, and a corresponding increase is documented for the EOF as well. Recall that the EOF is a true entanglement measure for two qubits. As in the case of Fig.~\ref{fig:1}, the actual calculation is done to second order in $\lambda^2$. After the amplification process is completed, we perform standard distillation on many copies of the iterated state in order to arrive at a nearly perfect maximally entangled state. Thus, we produce a covert Bell state shared between Alice's and Bob's detectors.
\section{Covert Quantum Teleportation}
\label{Sect:CovertTeleportation}
The quantum teleportation protocol was introduced by Bennett et al.\ \cite{Bennett et al}. Recently, the protocol was found to have a $3$D topological structure using the quon language \cite{Quon}; and Pirandola and Braunstein cite teleportation as the ``most promising mechanism for a future quantum internet'' \cite{Pirandola and Braunstein}. In the protocol, a sender, Alice, disassembles an unknown quantum state at her location, and a receiver, Bob, reconstructs the quantum state identically at his location. In order for the reconstruction to succeed, Alice and Bob prearrange to share the Bell state, which is utilized as a resource for the protocol. In addition, the parties share some purely classical information.
The idea of covertly implementing the quantum teleportation protocol involves hiding the non-local parts, i.e., distribution of the entangled resource state and classical communication of measurement outcomes. In Section \ref{Sect:CovertCommunication} we presented the covert generation and distribution of the Bell state shared by Alice and Bob. Covert classical communication hides the transfer of the measurement outcomes from Alice to Bob. Thus, we establish a protocol for the hidden transfer of quantum data between Alice and Bob, or network nodes.
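The underlying teleportation circuit that these covert resources implement can be simulated in a few lines. The following state-vector sketch is illustrative (not the authors' implementation); it postselects one measurement branch and applies the corresponding Pauli correction:

```python
import numpy as np

# Minimal simulation of teleportation: a Bell pair plus two classical bits
# transfer an arbitrary qubit from Alice to Bob.
I = np.eye(2); X = np.array([[0, 1], [1, 0]]); Z = np.diag([1, -1])
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def kron(*ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

def cnot(control, target, n=3):
    """CNOT on n qubits (qubit 0 = leftmost tensor factor)."""
    P0, P1 = np.diag([1, 0]), np.diag([0, 1])
    ops0 = [P0 if q == control else I for q in range(n)]
    ops1 = [P1 if q == control else (X if q == target else I) for q in range(n)]
    return kron(*ops0) + kron(*ops1)

rng = np.random.default_rng(7)
a, b = rng.normal(size=2) + 1j * rng.normal(size=2)
norm = np.sqrt(abs(a)**2 + abs(b)**2); a, b = a / norm, b / norm
psi = np.array([a, b])                       # unknown qubit held by Alice

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)   # shared |Phi+> (Alice, Bob)
state = np.kron(psi, bell)                   # qubit order: [input, Alice, Bob]

# Alice: CNOT(input -> her Bell half), Hadamard on input, measure qubits 0,1.
state = kron(H, I, I) @ cnot(0, 1) @ state
m0, m1 = 1, 0                                # postselect one outcome branch
proj = kron(np.diag([1 - m0, m0]), np.diag([1 - m1, m1]), I)
branch = proj @ state
branch = branch / np.linalg.norm(branch)

# Bob applies the correction Z^{m0} (then X^{m1}) told to him classically.
correction = np.linalg.matrix_power(X, m1) @ np.linalg.matrix_power(Z, m0)
bob = kron(I, I, correction) @ branch
bob_qubit = bob.reshape(4, 2)[int(f"{m0}{m1}", 2)]  # Bob's qubit in this branch
assert np.allclose(bob_qubit, psi)
```

In the covert setting described above, only the two pieces marked as shared resources change character: the Bell pair is generated from the vacuum, and the two measurement bits travel over a covert classical channel.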
Gottesman and Chuang considered a variant of the teleportation protocol in which Bob's reconstructed quantum state differed from Alice's original quantum state \cite{Gottesman and Chuang}. In this elegant version of teleportation the fault-tolerant construction of certain quantum gates was developed. Later, Zhou et al.\ extended the teleportation method of gate construction by introducing the one-bit teleportation primitive, which enabled a class of gates in the Clifford hierarchy to be recursively constructed \cite{Zhou et al}. As an example, this includes the controlled rotations that appear in Shor's factoring algorithm \cite{Shor}. Zhou et al.\ and Eisert et al.\ first considered the minimal resources required for the implementation of certain remote quantum gates \cite{Zhou et al, Eisert et al}. A generalization of these methods, called the multipartite compressed teleportation (MCT) protocol, enables the efficient sharing of multipartite, non-local quantum gates in which the protocol does not reduce to compositions of bipartite teleportation \cite{MCT}. Furthermore, the MCT protocol allows a quantum network to share a controlled gate with multiple targets.
The scheme for the covert teleportation of quantum states is adaptable to the variants of the protocol that teleport quantum gates. For instance, the covert teleportation of a controlled-NOT gate utilizes one covert Bell state, local operations, and covert classical communication of measurement outcomes. The covert recursive construction with one-bit teleportation \cite{Zhou et al} utilizes instances of the covert controlled-NOT gate, local operations, and covert classical communication of the ancillary state preparation and measurement outcomes. In the MCT protocol the resource state is either the Greenberger-Horne-Zeilinger state $|\mbox{GHZ} \rangle$, or $|\mbox{Max} \rangle$ as first introduced in \cite{Max}, depending upon the particular non-local quantum gate \cite{MCT}. Both multipartite resource states can be constructed from simpler Bell states \cite{Bose et al, Max}, thus covert Bell states can be distilled into covert $|\mbox{GHZ} \rangle$ or $|\mbox{Max} \rangle$ with local operations and covert classical communication.
\section{Covert Measurement-Based Quantum Computation}
\subsection{Teleportation-Based Quantum Computation}
\label{Sect:CovertTeleportationComputer}
The teleportation-based approach to quantum computation uses the idea of gate teleportation to effect quantum computation \cite{Gottesman and Chuang, Zhou et al, Eisert et al, Nielsen, Leung, Childs et al, MCT}. Given the ability to perform single-qubit gates, the teleportation of either a controlled-NOT gate or a controlled-Z gate is universal for quantum computation. Hence, a covert implementation of a universal teleportation-based quantum computer is achieved with covert Bell states, local operations that include all single-qubit gates, and covert classical communication of measurement outcomes. Alternative universal quantum gate sets can be constructed through the covert MCT protocol or the covert recursive construction with the one-bit teleportation primitive.
\subsection{One-Way Quantum Computation}
\label{Sect:OneWay}
Another prominent example of measurement-based quantum computation is the one-way quantum computer $\mbox{QC}_{\mathcal{C}}$, which was introduced by Raussendorf and Briegel \cite{Raussendorf and Briegel 1, Raussendorf and Briegel 2}. The idea of $\mbox{QC}_{\mathcal{C}}$ is to prepare a fixed, many-body resource state in which quantum computations are carried out solely through single-qubit measurements on the resource state and classical feed-forward of measurement outcomes \cite{Jozsa, Briegel et al}. Cluster states, a sub-class of graph states, are the archetypal resource state, whereby topological $3$D cluster states are utilized in fault-tolerant versions of $\mbox{QC}_{\mathcal{C}}$ \cite{Raussendorf and Harrington, Raussendorf Harrington Goyal}. Universal resource states for $\mbox{QC}_{\mathcal{C}}$ have been widely studied \cite{Briegel et al, Raussendorf and Harrington, Raussendorf Harrington Goyal, BFK, Van den Nest et al}, and recently the universal Union Jack state with nontrivial $2$D symmetry protected topological order (SPTO) was found \cite{Miller and Miyake}.
A one-way quantum computation proceeds with a classical input that specifies the data and program. A graph state is generated by preparing each vertex qubit in a graph in the fiducial starting state $| + \rangle \equiv \frac{1}{\sqrt{2}} (|0 \rangle + | 1 \rangle)$, and applying a controlled-Z gate to every pair of qubits connected by an edge in the graph. Since controlled-Z gates mutually commute, the production of a graph state is independent of the order of operations. Next, a sequence of adaptive single-qubit measurements is implemented on certain qubits in the graph, whereby the measurement bases depend upon the specified program as well as the previous measurement outcomes. A classical computer determines which measurement directions are chosen during every step of the computation.
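The graph-state preparation just described can be sketched numerically; the three-qubit path graph below is an illustrative example:

```python
import numpy as np
from itertools import permutations

# Graph-state preparation: each vertex starts in |+>, and a controlled-Z is
# applied across every edge of the graph.
def cz(i, j, n):
    """Diagonal CZ between qubits i and j on n qubits (qubit 0 = MSB)."""
    dim = 2 ** n
    diag = np.ones(dim)
    for idx in range(dim):
        bits = [(idx >> (n - 1 - q)) & 1 for q in range(n)]
        if bits[i] and bits[j]:
            diag[idx] = -1.0
    return np.diag(diag)

n = 3
edges = [(0, 1), (1, 2)]                  # path graph 0-1-2
plus = np.ones(2 ** n) / np.sqrt(2 ** n)  # fiducial state |+>^{(x) n}

states = []
for order in permutations(edges):
    state = plus.copy()
    for (i, j) in order:
        state = cz(i, j, n) @ state
    states.append(state)

# CZ gates mutually commute, so every edge ordering yields the same state.
for s in states[1:]:
    assert np.allclose(s, states[0])
```

The resulting amplitudes are all $\pm 1/\sqrt{8}$, with a sign flip exactly on the basis states where an edge has both endpoints in $|1\rangle$, independently of the gate ordering, as claimed above.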
To covertly implement $\mbox{QC}_{\mathcal{C}}$ the classical input that specifies the data and program needs to be shared by covert classical communication. The graph state is generated by applying the covert controlled-Z gate to each pair of fiducial qubits connected by an edge in the graph. The single-qubit measurements are applied locally. The classical computer that determines the measurement directions is offline, and covert protocols for communication \cite{lee2015achieving, sobers2016covert, mukherjee2016covert} and computation \cite{von Ahn et al, Chandran et al, Jarecki} are utilized.
Topological $3$D cluster states provide a pathway towards large-scale distributed quantum computation \cite{Van Meter and Devitt} by combining the universality of $2$D cluster states with the topological error-correcting capabilities of the toric code \cite{Kitaev}. Quantum computation is performed on a $3$D cluster state via a temporal sequence of single-qubit measurements, which leaves a non-trivial cluster topology that embeds a fault-tolerant quantum circuit \cite{Raussendorf and Harrington, Raussendorf Harrington Goyal}. To generate the computational resource state, fiducial qubits are located at the center of faces and edges of an elementary cell; see Figure 2, page $5$ of \cite{Raussendorf Harrington Goyal}. Controlled-Z gates are applied from each face qubit to each neighboring edge qubit. The elementary cell is tiled in $3$D to form a topological cluster state. Hence, the covert generation of the topological $3$D cluster state follows from the discussion above.
The universal $2$D cluster state was shown to have trivial $2$D symmetry protected topological order (SPTO) and nontrivial $1$D SPTO, whereas the universal Union Jack state was found to possess nontrivial $2$D SPTO \cite{Miller and Miyake}. The Union Jack state is generated by preparing fiducial qubits, and applying a controlled-controlled-Z gate to every triangular cell in the graph. The resulting universal resource state has the advantageous property of being Pauli universal, meaning that single-qubit measurements in the Pauli bases on the Union Jack state can implement arbitrary quantum computations. The feature of Pauli universality is forbidden for the $2$D cluster state, as implied by the Gottesman-Knill theorem, since the state is generated by an element from the Clifford group \cite{Gottesman Knill}. In other words, single-qubit measurements in the Pauli bases on the $2$D cluster state are efficiently simulated on a classical computer. The covert generation of the Union Jack state is a simple modification of the aforementioned scheme with controlled-Z gates for graph states. Namely, covert teleportation applies a controlled-controlled-Z gate to fiducial qubits in triangular cells.
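A single triangular cell of the Union Jack construction can be sketched as follows (illustrative; the full resource state tiles many such cells):

```python
import numpy as np

# The Union Jack construction replaces the CZ of ordinary graph states with a
# controlled-controlled-Z (CCZ) on each triangular cell. CCZ is diagonal and
# flips the sign of |111> only.
n = 3
ccz = np.diag([1.0] * 7 + [-1.0])          # diagonal on the 3-qubit basis
plus3 = np.ones(2 ** n) / np.sqrt(2 ** n)  # fiducial state |+++>

cell = ccz @ plus3                         # one triangular cell of the state

assert np.allclose(ccz @ ccz, np.eye(8))   # CCZ is its own inverse
assert np.allclose(np.abs(cell), 1 / np.sqrt(8))
assert cell[7] < 0 and cell[0] > 0         # sign flipped only on |111>
```

Unlike CZ, the CCZ gate is not a Clifford operation, which is consistent with the point above that the Union Jack state escapes the Gottesman-Knill simulation argument.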
BFK introduced a universal blind quantum computation protocol \cite{BFK}, which exploits measurement-based quantum computation to allow a client, Alice, with limited quantum computing power to delegate a computation to a quantum server, Bob. The premise of the BFK protocol is that Alice's inputs, computations, and outputs are unknown to Bob \cite{BFK, Chien et al}. In what follows we outline a covert implementation of the basic steps in the BFK protocol, by hiding the communication rounds between Alice and Bob.
\begin{algorithm}[H]
\floatname{algorithm}{Protocol}
\renewcommand{\thealgorithm}{}
\caption{Covert BFK universal blind quantum computation}
\begin{enumerate}
\item The preparation stage \newline
For each column $x=1,\dots,n$ \newline
For each row $y=1,\dots,m$ \newline
\begin{enumerate}
\item Alice prepares qubits $| \psi_{x,y} \rangle$ such that
$$|\psi_{x,y} \rangle \in \Big\{ \frac{1}{\sqrt{2}} \big( |0 \rangle + e^{i \theta_{x,y}} |1 \rangle \big) \; | \; \theta_{x,y} = 0, \pi / 4, \dots, 7 \pi / 4 \Big\},$$
and transfers each qubit to Bob via covert teleportation.
\item Bob generates an entangled brickwork state $G_{n \times m}$ \cite{BFK} from all the qubits received, applying a controlled-Z gate to qubits joined by an edge.
\end{enumerate}
\item Interaction and measurement stage \newline
For each column $x=1,\dots,n$ \newline
For each row $y=1,\dots,m$ \newline
\begin{enumerate}
\item Alice computes $\phi'_{x,y}$ based on the real measurement angle $\phi_{x,y}$ and the previous measurement results.
\item Alice chooses $r_{x,y} \in \{0,1\}$ and computes the angle $$\delta_{x,y} = \phi'_{x,y} + \theta_{x,y} + \pi r_{x,y}.$$
\item Alice sends $\delta_{x,y}$ to Bob via covert classical communication.
\item Bob measures $|\psi_{x,y} \rangle$ in the basis $ \{ |0 \rangle + e^{i \delta_{x,y}} | 1 \rangle \} ,$ then sends the measurement outcome to Alice via covert classical communication.
\item If $r_{x,y} = 1$ above, then Alice flips the measurement outcome bit; otherwise she does nothing.
\end{enumerate}
\end{enumerate}
\end{algorithm}
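The blinding arithmetic in steps 2(a)--(c) of the protocol can be sketched with angles stored as integer multiples of $\pi/4$, so that arithmetic mod $2\pi$ is exact; the specific angle value is illustrative:

```python
# Angles are integers k meaning k*pi/4; adding pi corresponds to +4 (mod 8).
def blind_angle(phi_prime, theta, r):
    """delta_{x,y} = phi'_{x,y} + theta_{x,y} + pi * r_{x,y}  (mod 2*pi)."""
    return (phi_prime + theta + 4 * r) % 8

phi_prime = 3  # Alice's adapted angle, here 3*pi/4 (illustrative)
deltas = {blind_angle(phi_prime, theta, r)
          for theta in range(8) for r in (0, 1)}

# Over the uniformly random pads (theta, r), Bob's instructed angle delta is
# uniform over all eight allowed values, so it leaks nothing about phi'.
assert deltas == set(range(8))
```

This uniformity of $\delta_{x,y}$ over the pad $(\theta_{x,y}, r_{x,y})$ is the source of blindness in the BFK protocol; the covert layer additionally hides that the exchange is happening at all.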
\section{Conclusion}
\label{Sect:Conclusion}
A quantum internet is a manifest platform for quantum cryptography, sensor networks, and large-scale networked quantum computing. We developed a quantum internet that hides its operations by exploiting features of relativistic quantum information. In particular, we used properties of the Minkowski vacuum to hide the generation and distribution of entanglement in quantum networks. This entanglement provides a resource for covert quantum teleportation, enabling the faithful transmission of quantum data from site to site without detection, as well as the construction of universal quantum computers that carry out quantum logic operations in stealth. We anticipate that further protocols can be constructed covertly. For instance, graph states can be generated covertly, which could be used in the study of preexisting protocols and algorithms that require graph states. We are working on the experimental details of our setup towards a proof-of-principle demonstration in the near future.
\acknowledgements
This material is based upon work supported by the U.S.\ Air Force Office of Scientific Research under award number FA9550-17-1-0083. The authors also acknowledge support from the U.S.\ Office of Naval Research (ONR) under award number N00014-15-1-2646.
\begin{thebibliography}{1}
\bibitem{Syverson et al}
Paul Syverson, David Goldschlag, and Michael Reed, Anonymous connections and onion routing, \textit{IEEE Symposium on Security and Privacy} (1997).
{\color{blue}{\doi{10.1109/SECPRI.1997.601314}}}
\bibitem{Domenico and Arenas}
Manlio De Domenico and Alex Arenas, Modeling structure and resilience of the dark network, \textit{Phys. Rev. E} {\bf{95}}, 022313 (2017).
{\color{blue}{\doi{10.1103/PhysRevE.95.022313}}}
\bibitem{lee2015achieving}
Seonwoo Lee, Robert Baxley, Mary Ann Weitnauer, and Brett Walkenhorst, Achieving undetectable communication, \textit{IEEE Journal of Selected Topics in Signal Processing} vol.\ 9 no.\ 7, 1195--1205 (2015).
{\color{blue}{\doi{10.1109/JSTSP.2015.2421477}}}
\bibitem{sobers2016covert}
Tamara Sobers, Boulat Bash, Saikat Guha, Don Towsley, and Dennis Goeckel, Covert communication in the presence of an uninformed jammer, (2016).
{\color{blue}{\url{https://arxiv.org/abs/1605.00127}}}
\bibitem{mukherjee2016covert}
Pritam Mukherjee and Sennur Ulukus, Covert bits through queues, \textit{IEEE Conference on Communications and Network Security}, (2016).
{\color{blue}{\doi{10.1109/CNS.2016.7860561}}}
\begin{equation}taibitem{von Ahn et al}
Luis von Ahn, Nicholas Hopper, and John Langford, Covert two-party computation, In STOC '05 \tauextit{Proceedings of the 37th annual
ACM symposium on Theory of computing}, 513--522, New York, NY, USA, 2005, ACM Press.
\begin{equation}taibitem{Chandran et al}
Nishanth Chandran, Vipul Goyal, Rafail Ostrovsky, and Amit Sahai, Covert multi-party computation, \tauextit{IEEE Symposium on Foundations of Computer Science}, (2007).
{\color{blue}{\deltaoi{10.1109/FOCS.2007.61}}}
\begin{equation}taibitem{Jarecki}
Stanislaw Jarecki, Practical covert authentication, \tauextit{Public-Key Cryptography-PKC 2014}, Lecture notes in Computer Science, vol.\ 8383, (2014).
{\color{blue}{\deltaoi{10.1007/978-3-642-54631-0_35}}}
\bibitem{Cirac et al 1}
Ignacio Cirac, Peter Zoller, H.\ Jeff Kimble, and Hideo Mabuchi, Quantum state transfer and entanglement distribution among distant nodes in a quantum network, \textit{Phys. Rev. Lett.} \textbf{78}, 3221 (1997).
{\color{blue}{\doi{10.1103/PhysRevLett.78.3221}}}
\bibitem{H. Jeff Kimble}
H.\ Jeff Kimble, The quantum internet, \textit{Nature} \textbf{453}, 1023--1030 (2008).
{\color{blue}{\doi{10.1038/nature07127}}}
\bibitem{Rod Van Meter}
Rod Van Meter, Quantum networking, John Wiley \& Sons, 2014.
{\color{blue}{\doi{10.1002/9781118648919}}}
\bibitem{Cirac et al 2}
Ignacio Cirac, Artur Ekert, Susana Huelga, and Chiara Macchiavello, Distributed quantum computation over noisy channels, \textit{Phys. Rev. A} \textbf{59}, 4249 (1999).
{\color{blue}{\doi{10.1103/PhysRevA.59.4249}}}
\bibitem{Pirandola et al}
Stefano Pirandola, Jens Eisert, Christian Weedbrook, Akira Furusawa, and Samuel Braunstein, Advances in quantum teleportation, \textit{Nature Photonics} \textbf{9}, 641--652 (2015).
{\color{blue}{\doi{10.1038/nphoton.2015.154}}}
\bibitem{Pirandola and Braunstein}
Stefano Pirandola and Samuel Braunstein, Physics:\ unite to build a quantum internet, \textit{Nature} \textbf{532}, 169--171 (2016).
{\color{blue}{\doi{10.1038/532169a}}}
\bibitem{Van Meter and Devitt}
Rodney Van Meter and Simon Devitt, The path to scalable distributed quantum computing, \textit{Computer} vol.\ 49, 31--42 (2016).
{\color{blue}{\doi{10.1109/MC.2016.291}}}
\bibitem{Absolutely Covert}
Kamil Br\'{a}dler, Timjan Kalajdzievski, George Siopsis, and Christian Weedbrook, Absolutely covert quantum communication, (2016).
{\color{blue}{\url{https://arxiv.org/abs/1607.05916}}}
\bibitem{Jozsa}
Richard Jozsa, An introduction to measurement based quantum computation, (2005).
{\color{blue}{\url{https://arxiv.org/abs/quant-ph/0508124}}}
\bibitem{Briegel et al}
Hans Briegel, Daniel Browne, Wolfgang D{\"u}r, Robert Raussendorf, and Maarten Van den Nest, Measurement-based quantum computation, \textit{Nature Physics}, 19--26 (2009).
{\color{blue}{\doi{10.1038/nphys1157}}}
\bibitem{Gottesman and Chuang}
Daniel Gottesman and Isaac Chuang, Demonstrating the viability of universal quantum computation using teleportation and single-qubit operations, \textit{Nature} \textbf{402}, 390--393 (1999).
{\color{blue}{\doi{10.1038/46503}}}
\bibitem{Zhou et al}
Xinlan Zhou, Debbie Leung, and Isaac Chuang, Methodology for quantum logic gate construction, \textit{Phys. Rev. A} \textbf{62}, 052316 (2000).
{\color{blue}{\doi{10.1103/PhysRevA.62.052316}}}
\bibitem{Eisert et al}
Jens Eisert, Kurt Jacobs, Polykarpos Papadopoulos, and Martin Plenio, Optimal local implementation of nonlocal quantum gates, \textit{Phys. Rev. A} \textbf{62}, 052317 (2000).
{\color{blue}{\doi{10.1103/PhysRevA.62.052317}}}
\bibitem{Nielsen}
Michael Nielsen, Quantum computation by measurement and quantum memory, \textit{Physics Letters A} vol.\ 308, issues 2--3, 96--100 (2003).
{\color{blue}{\doi{10.1016/S0375-9601(02)01803-0}}}
\bibitem{Leung}
Debbie Leung, Quantum computation by measurements, \textit{IJQI} vol.\ 2, no.\ 1, 33--43 (2004).
{\color{blue}{\doi{10.1142/S0219749904000055}}}
\bibitem{Childs et al}
Andrew Childs, Debbie Leung, and Michael Nielsen, Unified derivations of measurement-based schemes for quantum computation, \textit{Phys. Rev. A} \textbf{71}, 032318 (2005).
{\color{blue}{\doi{10.1103/PhysRevA.71.032318}}}
\bibitem{Raussendorf and Briegel 1}
Robert Raussendorf and Hans Briegel, A one-way quantum computer, \textit{Phys. Rev. Lett.} \textbf{86}, 5188 (2001).
{\color{blue}{\doi{10.1103/PhysRevLett.86.5188}}}
\bibitem{Raussendorf and Briegel 2}
Robert Raussendorf and Hans Briegel, Computational model underlying the one-way quantum computer, \textit{Quantum Information \& Computation} \textbf{2}, 443--486 (2002).
\bibitem{Raussendorf et al}
Robert Raussendorf, Daniel Browne, and Hans Briegel, Measurement-based quantum computation on cluster states, \textit{Phys. Rev. A} \textbf{68}, 022312 (2003).
{\color{blue}{\doi{10.1103/PhysRevA.68.022312}}}
\bibitem{MCT}
Arthur Jaffe, Zhengwei Liu, and Alex Wozniakowski, Constructive simulation and topological design of protocols, \textit{New J.\ Phys.} (2017).
{\color{blue}{\doi{10.1088/1367-2630/aa5b57}}}
\bibitem{Raussendorf and Harrington}
Robert Raussendorf and Jim Harrington, Fault-tolerant quantum computation with high threshold in two dimensions, \textit{Phys. Rev. Lett.} \textbf{98}, 190504 (2007).
{\color{blue}{\doi{10.1103/PhysRevLett.98.190504}}}
\bibitem{Raussendorf Harrington Goyal}
Robert Raussendorf, Jim Harrington, and Kovid Goyal, Topological fault-tolerance in cluster state quantum computation, \textit{New J.\ Phys.} \textbf{9} (2007).
{\color{blue}{\doi{10.1088/1367-2630/9/6/199}}}
\bibitem{Miller and Miyake}
Jacob Miller and Akimasa Miyake, Hierarchy of universal entanglement in 2D measurement-based quantum computation, \textit{npj Quantum Information} \textbf{2}, 16036 (2016).
{\color{blue}{\doi{10.1038/npjqi.2016.36}}}
\bibitem{BFK}
Anne Broadbent, Joseph Fitzsimons, and Elham Kashefi, Universal blind quantum computation, \textit{Proceedings of the 50th Annual IEEE Symposium on Foundations of Computer Science}, 517--526 (2009).
{\color{blue}{\doi{10.1109/FOCS.2009.36}}}
\bibitem{Bash et al}
Boulat Bash, Andrei Gheorghe, Monika Patel, Jonathan Habif, Dennis Goeckel, Don Towsley, and Saikat Guha, Quantum-secure covert communication on bosonic channels, \textit{Nature Communications} \textbf{6}, 8626 (2015).
{\color{blue}{\doi{10.1038/ncomms9626}}}
\bibitem{Arrazola and Scarani}
Juan Arrazola and Valerio Scarani, Covert quantum communication, \textit{Phys. Rev. Lett.} \textbf{117}, 250503 (2016).
{\color{blue}{\doi{10.1103/PhysRevLett.117.250503}}}
\bibitem{bibRez}
Benni Reznik, Entanglement from the vacuum, \textit{Foundations of Physics} \textbf{33}, 167 (2003).
{\color{blue}{\doi{10.1023/A:1022875910744}}}
\bibitem{horodeckisReview}
Ryszard Horodecki, Pawe\l{} Horodecki, Micha\l{} Horodecki, and Karol Horodecki, Quantum entanglement, \textit{Reviews of Modern Physics} vol.\ 81, 865 (2009).
{\color{blue}{\doi{10.1103/RevModPhys.81.865}}}
\bibitem{bradlerExpansion}
Kamil Br\'adler, A novel approach to perturbative calculations for a large class of interacting boson theories, (2017).
{\color{blue}{\url{https://arxiv.org/abs/1703.02153}}}
\bibitem{Bennett et al}
Charles Bennett, Gilles Brassard, Claude Cr\'epeau, Richard Jozsa, Asher Peres, and William Wootters, Teleporting an unknown quantum state via dual classical and Einstein-Podolsky-Rosen channels, \textit{Phys. Rev. Lett.} \textbf{70}, 1895 (1993).
{\color{blue}{\doi{10.1103/PhysRevLett.70.1895}}}
\bibitem{Quon}
Zhengwei Liu, Alex Wozniakowski, and Arthur Jaffe, Quon 3D language for quantum information, \textit{Proceedings of the U.S.\ National Academy of Sciences} vol.\ 114 no.\ 10, 2497--2502 (2017).
{\color{blue}{\doi{10.1073/pnas.1621345114}}}
\bibitem{Shor}
Peter Shor, Polynomial-time algorithms for prime factorization and discrete logarithms on a quantum computer, \textit{SIAM J. Comput.} vol.\ 26, issue 5, 1484--1509 (1997).
{\color{blue}{\doi{10.1137/S0097539795293172}}}
\bibitem{Max}
Arthur Jaffe, Zhengwei Liu, and Alex Wozniakowski, Holographic software for quantum networks, (2016).
{\color{blue}{\url{https://arxiv.org/abs/1605.00127}}}
\bibitem{Bose et al}
Sougato Bose, Vlatko Vedral, and Peter Knight, A multiparticle generalization of entanglement swapping, \textit{Phys. Rev. A} \textbf{57}, 822 (1998).
{\color{blue}{\doi{10.1103/PhysRevA.57.822}}}
\bibitem{Van den Nest et al}
Maarten Van den Nest, Akimasa Miyake, Wolfgang D\"ur, and Hans Briegel, Universal resources for measurement-based quantum computation, \textit{Phys. Rev. Lett.} \textbf{97}, 150504 (2006).
{\color{blue}{\doi{10.1103/PhysRevLett.97.150504}}}
\bibitem{Kitaev}
Alexei Kitaev, Fault-tolerant quantum computation by anyons, \textit{Annals of Physics} vol.\ 303, issue 1, 2--30 (2003).
{\color{blue}{\doi{10.1016/S0003-4916(02)00018-0}}}
\bibitem{Gottesman Knill}
Daniel Gottesman, The Heisenberg representation of quantum computers, {\color{blue}{\url{https://arxiv.org/abs/quant-ph/9807006}}}, Group 22: Proceedings of the XXII International Colloquium on Group Theoretical Methods in Physics, 32--43 (1998).
\bibitem{Chien et al}
Chia-Hung Chien, Rodney Van Meter, and Sy-Yen Kuo, Fault-tolerant operations for universal blind quantum computation, \textit{ACM J.\ Emerg.\ Technol.\ Comput.\ Syst.} Article, 22 pages (2013).
{\color{blue}{\doi{10.1145/0000001.0000001}}}
\end{thebibliography}
\end{document}
\begin{document}
\title{On the J-flow in Sasakian manifolds
}
\begin{abstract}
We study the space of Sasaki metrics on a compact manifold $M$ by introducing an odd-dimensional analogue of the $J$-flow.
This leads to the notion of {\em critical metric} in the Sasakian context. In analogy with the K\"ahler case, on a polarised Sasakian manifold there exists at most one {\em normalised} critical metric. The flow is a tool for testing the existence of such a metric.
We show that some results proved by Chen in \cite{chen} can be generalised to the Sasakian case. In particular, the {\em Sasaki $J$-flow} is a gradient flow which always has a long-time solution minimising the distance on the space of Sasakian potentials of a polarised Sasakian manifold. The flow minimises an energy functional whose definition depends on the choice of a background transverse K\"ahler form $\chi$.
When $\chi$ has nonnegative transverse holomorphic bisectional curvature, the flow converges to a critical Sasakian structure.
\end{abstract}
\section{Introduction}
Sasakian manifolds are the odd-dimensional counterpart of K\"ahler manifolds and are defined as odd-dimensional Riemannian manifolds $(M,g)$ whose Riemannian cone $(M\times\R^+,t^2g+dt^2)$ admits a K\"ahler structure. These manifolds are important for both geometric and physical reasons.
In geometry they can be used to produce new examples of complete K\"ahler manifolds, manifolds with special holonomy and Einstein metrics. Moreover, Sasakian manifolds play a role in the study of orbifolds, since many K\"ahler orbifolds can be desingularised by using Sasakian spaces. In theoretical physics these manifolds play a central role in the AdS/CFT correspondence (see e.g. \cite{CLPP,GM,GM1,MS,superstring1,MSY,superstring2}). We refer to \cite{bgbook,sparks} for the general theory and recent advances in the study of these manifolds.
Given a Sasakian manifold, the choice of a K\"ahler structure on the Riemannian cone determines
a unitary Killing vector field $\xi$ of the metric $g$ and an endomorphism $\Phi$ of the tangent bundle to $M$ such that
$$
\Phi^2=-{\rm Id}+\eta\otimes \xi\,,\quad g(\Phi\cdot,\Phi\cdot)=g(\cdot,\cdot)-\eta\otimes \eta\,, \quad g=\frac12 d\eta\circ ({\rm Id}\otimes \Phi)+\eta\otimes \eta\,,
$$
$\eta$ being the $1$-form dual to $\xi$ via $g$. It turns out that $\eta$ is a contact form and that $\Phi$ induces a CR-structure $(\mathcal D,J)$ on $M$. Moreover, $\Phi(X)={\rm D}_X\xi$ for every vector field $X$ on $M$, where ${\rm D}$ is the Levi-Civita connection of $g$.
The quadruple $(\xi,\Phi,\eta,g)$ is usually called a {\em Sasakian structure} and the pair $(\xi,J)$ can be seen as a polarization of $M$.
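A standard example, recalled here only for illustration: the round sphere $S^{2n+1}\subset \C^{n+1}$, whose Riemannian cone is $\C^{n+1}\setminus\{0\}$ with the flat K\"ahler structure, carries the Sasakian structure with
$$
\xi=\sum_{i=1}^{n+1}\left(x^i\partial_{y^i}-y^i\partial_{x^i}\right)\,,\qquad \eta=\left.\sum_{i=1}^{n+1}\left(x^i\,dy^i-y^i\,dx^i\right)\right|_{S^{2n+1}}\,,
$$
where $z^i=x^i+iy^i$ are the standard coordinates on $\C^{n+1}$, $g$ is the round metric and $\Phi$ is induced by the complex structure of $\C^{n+1}$.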
The research of this paper is mainly motivated by \cite{bgs,guanzhang,guanzhangA,he}, where the study of Riemannian and symplectic aspects of the space of Sasakian potentials
$\mathcal{H}$ on a polarised Sasakian manifold is approached. Our approach consists in using an analogue of the $J$-flow in the context of Sasakian geometry, obtaining some results similar to those proved in the K\"ahler case by Chen in \cite{chen}. The $J$-flow is a gradient geometric flow of K\"ahler structures introduced and first studied by Donaldson in \cite{donaldsonNC} from the point of view of moment maps and by Chen in \cite{chen} in relation to the Mabuchi energy. It is defined as the gradient flow of a functional $J_{\chi}$ defined on the space of {\em normalized} K\"ahler potentials whose definition depends on a fixed background K\"ahler structure $\chi$.
Chen proved in \cite{chen} that the flow always has a unique long-time solution which, in the special case when $\chi$ has nonnegative holomorphic bisectional curvature, converges to a critical K\"ahler metric. Further results about the flow are obtained in \cite{LSS,songweinkoveJ,wein1,wein2}.
As far as we know, the interest in geometric flows on foliated manifolds comes from \cite{LMR}, where a foliated version of the Ricci flow is introduced.
Subsequently, Smoczyk, Wang and Zhang proved in \cite{smoczyk} that the transverse Ricci flow preserves the Sasakian condition and studied its long-time behaviour, generalising the work of Cao in \cite{cao} to the Sasakian case. Some deep geometric and analytic aspects of the Sasaki Ricci flow were further investigated in \cite{collins0,collins1,collins2,collins3}.
In analogy with the K\"ahler case, the {\em Sasaki $J$-flow} introduced in this paper (see Section \ref{wellposed} for the precise definition) is the gradient flow of a functional $J_{\chi}\colon \mathcal{H}\to \R$ whose definition depends on the choice of a transverse K\"ahler structure $\chi$. Sasakian metrics arising from critical points of the restriction of $J_{\chi}$ to the space of {\em normalized} Sasakian potentials $\mathcal{H}_0$ are natural candidates to be {\em canonical} Sasakian metrics.
The main result of the paper is the following
\begin{theorem}\label{main} Let $(M,\xi,\Phi,\eta,g)$ be a $(2n+1)$-dimensional Sasakian manifold and let $\chi$ be a transverse K\"ahler form on $M$. Then the functional $J_{\chi}\colon \mathcal H_0\to \R$
has at most one critical point and the Sasaki $J$-flow has a long-time solution $f$ for every initial datum $f_0$. Furthermore, the length of any smooth curve in $\mathcal{H}_0$ and the distance between any two points decrease under the flow, and when the transverse holomorphic bisectional curvature of $\chi$ is nonnegative, $f$ converges to a critical point of $J_{\chi}$ in $\mathcal H_0$.
\end{theorem}
The last sentence in the statement of Theorem \ref{main} implies that if the transverse K\"ahler structure $\chi$ has nonnegative transverse holomorphic bisectional curvature, then $J_{\chi}$ has a critical point in $\mathcal H_0$. We remark that Sasakian manifolds having nonnegative transverse holomorphic bisectional curvature are classified in \cite{HeSun}, but in the definition of the Sasaki $J$-flow, $\chi$ is just a transverse K\"ahler structure not necessarily induced by a Sasaki metric.
From the local point of view, a solution to the {\em Sasaki $J$-flow} can be seen as a collection of
solutions to the K\"ahler $J$-flow on open sets in $\C^n$. This fact allows us to use all the local estimates for the K\"ahler $J$-flow provided in \cite{chen}. What needs to be modified from the K\"ahler case is the proof of the existence of a short-time solution to the flow (since the flow is parabolic only along transverse directions) and the
global estimates. The short-time existence is obtained in Section \ref{wellposed} by using a trick introduced in \cite{smoczyk}, while the global estimates are obtained by using a transverse version of the maximum principle for transversally elliptic operators (see Section \ref{maxprin}).
\noindent {\em{Acknowledgements}}. The authors would like to thank Valentino Tosatti for useful comments and remarks.
\section{Preliminaries}\label{preliminaries}
In this section we recall some basic facts about Sasakian geometry and fix the notation adopted in the rest of the paper.
Let $(M,\xi,\Phi,\eta,g)$ be a $(2n+1)$-dimensional Sasaki manifold. Then the Reeb vector field $\xi$ specifies a Riemannian foliation on $M$, usually denoted by $\mathcal F_{\xi}$, and the tangent bundle to $M$ splits as $TM=\mathcal D\oplus L_{\xi}$, where $L_{\xi}$ is the line bundle generated by $\xi$ and $\mathcal D$ has as fibre over a point $x$ the vector space $\ker \eta_x$. The metric $g$ splits accordingly as $g=g^T+\eta^2$, where the degenerate tensor $g^T$ is called the {\em transverse metric} of the Sasakian structure. In the following we denote by $\nabla^T$ the {\em transverse Levi-Civita} connection defined on the bundle $\mathcal D$ in terms of the Levi-Civita connection ${\rm D}$ of $g$ as
\begin{equation}\label{nablaT}
\nabla^T_XY=
\begin{cases}
{\rm D}_XY\quad\mbox{ if } X\in \Gamma(\mathcal D)\\
[\xi,Y]^{\mathcal D} \quad\mbox{ if } X=\xi\,,
\end{cases}
\end{equation}
where the superscript $\mathcal D$ denotes the orthogonal projection onto $\mathcal D$. This connection induces the transverse curvature
\begin{equation}\label{RT}
R^T(X,Y)Z=\nabla_X^T\nabla^T_YZ-\nabla_Y^T\nabla^T_XZ-\nabla^T_{[X,Y]}Z,
\end{equation}
and the transverse Ricci curvature ${\rm Ric}^T$ obtained as the trace of the map $X\mapsto R^T(X,\cdot)\cdot$ on $\mathcal D$ with respect to $g^T$. We further recall that a real $p$-form $\alpha$ on $(M,\xi,\Phi,\eta,g)$ is called {\em basic} if
$$
\iota_{\xi}\alpha=0\,,\quad \iota_{\xi}d\alpha=0,
$$
where $\iota_{\xi}$ denotes the contraction along $\xi$. The set of basic $p$-forms is usually denoted by $\Omega_{B}^p(M)$ and $\Omega_B^0(M)=C^{\infty}_B(M)$. Since the exterior differential operator takes basic forms into basic forms, its restriction $d_B$ to $\Omega_B(M)=\oplus \Omega_B^p(M)$ defines a cohomological complex. Moreover, $\Phi$ induces a {\em transverse complex structure} $J$ on $(M,\xi)$ and a splitting of the space of complex basic forms into forms of type $(p,q)$ in the usual way. Furthermore, the complex extension of $d_B$ to $\Omega_B(M,\C)$ splits as $d_B=\partial_B+\bar\partial_B$ and $\bar\partial_B^2=0$
(see e.g. \cite{bg} for details). A basic $(1,1)$-form $\chi$ on $(M,\xi,\Phi,\eta,g)$ is said to be {\em positive} if
\begin{equation}\label{positive}
\chi(Z,\bar Z)>0,
\end{equation}
for every non-zero section $Z$ of $\Gamma(\mathcal D^{1,0})$. If further $\chi$ is closed, we refer to $\chi$ as a {\em transverse K\"ahler form}. Note that condition \eqref{positive} depends only on the transverse complex structure $J$ and on $\xi$, since $\chi$ is basic.
Every such $\chi$ induces the global metric
$$
g_{\chi}(\cdot,\cdot)=\chi(\cdot,\Phi\cdot)+\eta(\cdot)\,\eta(\cdot),
$$
on $M$. The metric $g_{\chi}$ induces a transverse Levi-Civita connection $\nabla^{\chi}$ and a transverse curvature $R^{\chi}$ as in \eqref{nablaT} and \eqref{RT} (here it is important that $\chi$ is basic in order to define $\nabla^{\chi}$).
\subsection{Adapted coordinates}\label{foliatedcoordinates}
Let $(M,\xi,\Phi,\eta,g)$ be a Sasakian manifold. We can always find local coordinates
$\{z^1,\dots,z^{n},z\}$ taking values in $\C^n\times \R$ such that
\begin{equation}\label{coord}
\xi=\partial_z\,,\quad
\Phi(d{z^j})=i\,d{z^j},\quad \Phi(d{\begin{equation}ar z^j})=-i\,d{\begin{equation}ar z^j}\,.
\end{equation}
A function $h$ is basic if and only if it does not depend on the variable $z$, and we usually denote by
$h_{,i_1\dots i_r\bar j_1\dots \bar j_l}$ the space derivatives of $h$ along $\partial_{z^{i_1}},\dots ,\partial_{z^{i_r}},
\partial_{\bar z^{j_1}},\dots,\partial_{\bar z^{j_l}}$. We denote by $A_{i_1\dots i_r\bar j_1\dots \bar j_l}$ (without \lq\lq ,\rq\rq) the components of the basic tensor $A$. Furthermore, when a function $f$ depends also on a time variable $t$, we use the notation
$\dot f$ to denote its time derivative. In the case when $f$ depends on two time variables $(t,s)$, we write $\partial_t f$ and $\partial_s f$ to distinguish the two derivatives.
For instance, the metric $g$ and the transverse symplectic form $d\end{equation}ta$ locally write as
$$
g=g_{i\bar j}dz^id\bar z^{j}+\eta^2\,,\quad d\eta=2ig_{i\bar j}dz^i\wedge d\bar z^{j},
$$
where the $g_{i\bar j}$ are all basic functions. In particular the transverse metric $g^T$ writes as
$g^T=g_{i\bar j}dz^id\bar z^{j}$ and a Sasakian structure can be regarded as a collection of K\"ahler structures each one defined on an open set of $\C^n$. Observe that conditions \eqref{coord} depend only on $(\xi,J)$ and therefore they hold for every Sasakian structure compatible with $(\xi,J)$.
This fact is crucial in the proof of Theorem \ref{main}.
In this paper we sometimes make use of {\em special foliated coordinates} with respect to a transverse K\"ahler form $\chi$. Indeed, once a transverse K\"ahler form $\chi$ on the Sasakian manifold $(M,\xi,\Phi,\eta,g)$ is fixed, we can always find foliated coordinates $\{z^1,\dots,z^{n},z\}$ around any fixed point $x$ such that if $\chi=\chi_{i\bar j}\,dz^i\wedge d\bar z^j$, then
$$
\chi_{i\bar j}=\delta_{ij}\,,\quad \partial_{z^r}\chi_{i\bar j}=0\,,\mbox{ at }x\,.
$$
Moreover, we can further require that the transverse metric $g^T$ takes a diagonal expression at $x$.
\subsection{The space of the Sasakian potentials and the definition of $J$-flow} Following \cite{bgs,guanzhang,guanzhangA,he}, given a Sasakian manifold $(M,\xi,\Phi,\eta,g)$, we consider
$$
{\mathcal{H}}=\{ h \in C_B^{\infty}(M,\R)\,\,:\,\, \eta_h=\eta+ d^c h\ \mbox{is a contact form}\}\,,
$$
where $d^ch$ is the $1$-form on $M$ defined by $(d^ch)(X)=-\frac12 dh(\Phi(X))$. Every $h\in {\mathcal{H}}$ induces the Sasakian structure
$(\xi,\Phi_h,\eta_h,g_h)$ where
$$\begin{aligned}
\Phi_h=\Phi-(\xi\otimes (\eta_h-\eta))\circ \Phi\,,\quad
g_h=\frac12\,d\eta_h\circ({\rm Id}\otimes \Phi_h)+\eta_h\otimes\eta_h\,.
\end{aligned}
$$
Notice that
$$
\eta_h\wedge (d\eta_h)^n=\eta \wedge (d\eta_h)^n\,.
$$
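This identity follows from a short pointwise computation, which we record for convenience: writing $\eta_h=\eta+d^ch$, one has
$$
\eta_h\wedge (d\eta_h)^n-\eta\wedge (d\eta_h)^n=d^ch\wedge (d\eta_h)^n=0\,,
$$
since $\iota_{\xi}d^ch=-\frac12\,dh(\Phi(\xi))=0$ and $\iota_{\xi}d\eta_h=0$, so the $(2n+1)$-form $d^ch\wedge (d\eta_h)^n$ is annihilated by the nowhere-vanishing vector field $\xi$ and hence vanishes.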
All the Sasakian structures induced by the functions in $\mathcal H$ have the same Reeb vector field and the same transverse complex structure. It is rather natural to restrict our attention to the space $\mathcal{H}_0$ of {\em normalized} Sasakian potentials. $\mathcal H_0$ is defined as the zero set of the functional
$I\colon \mathcal H\to \R$ defined through its first variation by
$$
\frac{\partial}{\partial t}I(f)=\frac{1}{2^nn!}\int_M \dot{f}\,\eta\wedge d\eta_{f}^n\,,\quad I(0)=0,
$$
$$
where $f$ is a smooth curve in $\mathcal H$
(see \cite[formula (14)]{guanzhang} for an explicit formulation of $I$).
The pair $(\xi,J)$ can be seen as a {\em polarisation} of the Sasakian manifold (see \cite{bgs}). Notice that $\mathcal{H}$ is open in $C^{\infty}_B(M,\R)$ and has the natural Riemannian metric
\begin{equation}\label{metric}
(\varphi,\psi)_h:=\frac{1}{2^nn!}\int_M\, \varphi \psi\,\eta \wedge (d\eta_h)^n\,.
\end{equation}
The covariant derivative of \eqref{metric} along a smooth curve $f=f(t)$ in $C^{\infty}_B(M,\R)$ takes the following expression
$$
D_t\psi=\dot\psi-\frac14\,\langle d_B\psi,d_B\dot f\rangle_f,
$$
where $\psi$ is an arbitrary smooth curve in $C^{\infty}_B(M,\R)$ and $\langle \cdot ,\cdot\rangle _f$ is the pointwise scalar product induced by $g_f$ on basic forms (see \cite{guanzhang,he}). Note that $D_t$ can be alternatively written as
$$
D_t\psi=\dot\psi-\frac12{\rm Re} \langle \partial_B\psi,\partial_B\dot f\rangle_f
$$
which has the following local expression
$$
D_t\psi=\dot\psi-\frac14 g_f^{\bar jk}(\psi_{,k}\dot f_{,\bar j}+\psi_{,\bar j}\dot f_{,k})\,.
$$
Moreover, a curve $f=f(t)$ in $\mathcal H$ is a geodesic if and only if it solves
\begin{equation}\label{geodesic}
\ddot f-\frac14 |d_B \dot f|^2_f=0\,.
\end{equation}
Furthermore, W. He proved in \cite{he} that $\mathcal H$ is an infinite dimensional symmetric space whose curvature can be written as
$$
R_h(\psi_1,\psi_2)\psi_3=-\frac1{16}\{\{\psi_1,\psi_2\}_h,\psi_3\}_h,
$$
where $\{\,,\,\}_h$ is the Poisson bracket on $C^{\infty}_B(M,\R)$ induced by the contact form $\eta_h$.
As in the K\"ahler case, it is still an open problem to establish when two points in $\mathcal H$ can be connected by a geodesic path. Fortunately, Guan and Zhang proved in \cite{guanzhangA} that this can be always done in a weak sense. More precisely, the role of $\mathcal H$
is replaced with its completion $\begin{equation}ar{\mathcal{H}}$ with respect to the $C_{w}^2$-norm (see \cite{guanzhangA} for details) and the geodesic equation \end{equation}qref{geodesic} with
\begin{equation}\label{wgeodesic}
\left(\ddot f-\frac14 |d_B \dot f|^2_f\right)\,\eta\wedge d\eta_f^n=\epsilon\,\eta\wedge d\eta^n\,.
\end{equation}
Then, by definition a $C^{1,1}$-geodesic is a curve in $\bar {\mathcal H}$ obtained as a weak limit of solutions to \eqref{wgeodesic}, and from \cite{guanzhangA} it follows that for every
two points in $\mathcal H$ there exists a $C^{1,1}$-geodesic connecting them.
Now we can introduce the Sasakian version of the $J$-flow. The definition depends on the choice of a transverse K\"ahler form $\chi$. Note that
$$
\eta_h\wedge \chi^n=\eta\wedge \chi^n\neq 0,
$$
for every $h\in\mathcal{H}$, since $\chi$ and $d^c_Bh$ are both basic forms.
\begin{prop}\label{wp}
Let $f_0,f_1\in \mathcal{H}$ and $f\colon [0,1]\to\mathcal{H}
$ be a smooth path satisfying $f(0)=f_0$, $f(1)=f_1$. Then
$$
A_{\chi}(f):=\int_0^1\int_M \dot f\,\chi\wedge \eta\wedge (d\eta_f)^{n-1}\,dt,
$$
$$
depends only on $f_0$ and $f_1$.
\end{prop}
\begin{proof}
Following the approach of Mabuchi in \cite{mabuchiF},
let $\psi(s,t):=sf(t)$ and let $\Psi$ be the $2$-form on the square $Q=[0,1]\times [0,1]$ defined as
$$
\Psi(s,t):=\left(\int_M \partial_t\psi\, \eta\wedge\chi\wedge (d\eta_\psi)^{n-1}\right)\,dt+
\left(\int_M \partial_s\psi\, \eta\wedge\chi\wedge (d\eta_\psi)^{n-1}\right)\,ds\,.
$$
We show that $\Psi$ is closed as $2$-form on $Q$:
\begin{equation}
\begin{split}
d\Psi(s,t)=&\,\frac{d}{ds}\left( \int_M \partial_t\psi\, \eta\wedge\chi\wedge (d\eta_\psi)^{n-1}\right)\,dt\wedge ds-\frac{d}{dt}\left(\int_M \partial_s\psi\, \eta\wedge\chi\wedge (d\eta_\psi)^{n-1}\right)\,dt\wedge ds\\
=&\,(n-1)\,s\,i\left( \int_M \dot f\, \eta\wedge\chi\wedge(d\eta_\psi)^{n-2}\wedge \partial_B \bar\partial_B f+\int_M f\, \eta\wedge\chi\wedge (d\eta_\psi)^{n-2}\wedge\bar \partial_B \partial_B \dot f\right)dt\wedge ds\\
=&\,(n-1)\,s\,i\left[ \int_M d\left(\dot f\, \eta\wedge\chi\wedge(d\eta_\psi)^{n-2}\wedge \bar\partial_B f\right)-\int_M\partial_B\dot f\wedge \eta\wedge\chi\wedge(d\eta_\psi)^{n-2}\wedge \bar\partial_B f+\right.\\
&\left.+\int_M d\left( f\, \eta\wedge\chi\wedge (d\eta_\psi)^{n-2}\wedge \partial_B \dot f\right)-\int_M \bar \partial_B f\wedge \eta\wedge\chi\wedge (d\eta_\psi)^{n-2}\wedge \partial_B \dot f\right]dt\wedge ds\\
=&\,(n-1)\,s\,i\left[ \int_M\partial_B\dot f\wedge \bar\partial_B f \wedge\eta\wedge\chi\wedge(d\eta_\psi)^{n-2} -\int_M \partial_B \dot f\wedge \bar \partial_B f\wedge \eta\wedge\chi\wedge (d\eta_\psi)^{n-2} \right]dt\wedge ds\\
=&\,0.
\end{split}\nonumber
\end{equation}
Therefore the Gauss-Green Theorem implies that
$$
\int_{\partial Q}\Psi=0,
$$
and the claim follows.
\end{equation}nd{proof}
In view of the last proposition, we can write $A_{\chi}(f_0,f_1)$ instead of $A_{\chi}(f)$.
\begin{equation}egin{definition}
The {\end{equation}m Sasaki $J$-functional} is the map $J_{\chi}\colon \mathcal{H}\to \R$ defined as
$$
J_{\chi}(h)=\varphirac{1}{2^{n-1}(n-1)!} A_{\chi}(0,h)\,.
$$
\end{equation}nd{definition}
Alternatively we can define $J_{\chi}$ through its first variation by
\begin{equation}\label{ptJ}
\partial_t J_{\chi}(f)=\int_M \frac{1}{2^{n-1}(n-1)!}\dot f\,\chi\wedge \eta\wedge (d\eta_f)^{n-1}\,,\quad J_{\chi}(0)=0,
\end{equation}
and then apply Proposition \ref{wp} to show that the definition is well-posed.
Note that
$$
\partial_t J_{\chi}(f)=\frac12 (\dot f\chi,d\eta)_f,
$$
and therefore
$$
\partial_t J_{\chi}(f)=\frac{1}{2^nn!} \int_M \dot f\,\sigma_f\,\eta\wedge d\eta_{f}^n,
$$
where
for $h\in \mathcal {H}$
$$
\sigma_h=g^{\bar ba}_h\chi_{a\bar b}\,,
$$
the components and derivatives are computed with respect to transverse holomorphic coordinates, and the upper indices in $g_h$ denote the components of the inverse matrix.
If we restrict $J_{\chi}$ to $\mathcal{H}_0,$ then $h\in \mathcal H_0$ is a critical point of $J_{\chi}\colon \mathcal{H}_0\to \R$
if and only if
$$
\int_M k\,\eta\wedge \chi\wedge d\eta_h^{n-1}=0,
$$
for every $k$ in the tangent space to $\mathcal H_0$ at $h$, i.e. if and only if $2n \,\eta\wedge \chi\wedge d\eta_h^{n-1}=c\, \eta\wedge d\eta_h^{n}$, where
\begin{equation}\label{c}
c=\frac{2n\int_M \chi \wedge \eta\wedge d\eta^{n-1}}{\int_M \eta\wedge d\eta^n}\,.
\end{equation}
Given $h\in\mathcal{H}_0$, we can rewrite the condition of being a critical point of $J_{\chi}$ as
\begin{equation}\label{criticaleq}
\sigma_h=c\,.
\end{equation}
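As a consistency check on the normalisation, suppose that the background form is $\chi=\frac12\,d\eta$, i.e. the transverse K\"ahler form of the background Sasakian structure. Then at $h=0$ one has $\sigma_0=g^{\bar b a}g_{a\bar b}=n$, while \eqref{c} gives
$$
c=\frac{2n\int_M \frac12\,d\eta \wedge \eta\wedge d\eta^{n-1}}{\int_M \eta\wedge d\eta^n}=n\,,
$$
so that $h=0$ satisfies \eqref{criticaleq}.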
Therefore, if $f_0\in \mathcal H_0$ is fixed, the evolution equation
\begin{equation}\label{jflow}
\dot f=c-\sigma_f\,,\quad f(0)=f_0,
\end{equation}
can be seen as the gradient flow of $J_{\chi}\colon \mathcal{H}_0\to \R$.
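In particular, $J_{\chi}$ is non-increasing along \eqref{jflow}: since $c\int_M\eta\wedge d\eta_f^n=\int_M\sigma_f\,\eta\wedge d\eta_f^n$ (see Section \ref{wellposed}), the Cauchy-Schwarz inequality yields
$$
\partial_t J_{\chi}(f)=\frac{1}{2^nn!}\int_M (c-\sigma_f)\,\sigma_f\,\eta\wedge d\eta_f^n=\frac{1}{2^nn!}\left(c\int_M\sigma_f\,\eta\wedge d\eta_f^n-\int_M\sigma_f^2\,\eta\wedge d\eta_f^n\right)\leq 0\,.
$$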
\begin{definition}
A Sasakian structure $(\xi,\Phi_h,\eta_h,g_h)$ is called {\em critical} if $h$ satisfies \eqref{criticaleq}. We will refer to \eqref{jflow}
as to the {\em Sasaki $J$-flow}.
\end{definition}
\section{Technical Results and Critical Sasaki Metrics}
Let $(M,\xi,\Phi,\eta,g)$ be a $(2n+1)$-dimensional compact Sasakian manifold and let
$f=f(t)$ be a smooth curve in the space of normalized Sasakian potentials $\mathcal H_0$.
Then
\begin{equation}\label{vol}
\frac{\partial}{\partial t}\, \eta \wedge (d\eta_f)^n= \Delta_f\dot f\, \eta \wedge (d\eta_f)^n,
\end{equation}
where for $h\in \mathcal H_0$, $\Delta_h$ denotes the basic Laplacian
$$
\Delta_h\psi=-\partial_B^*\partial_B\psi=\,g_h^{\bar jr}\psi_{,r\bar j}\,,\quad \mbox{ for }\psi\in C^{\infty}_B(M, \R)\,.
$$
A direct computation yields
\begin{equation}\label{sigma_t}
\dot \sigma_f=-g_f^{\bar p m}\, \dot f_{,m\bar l}\,g_f^{\bar l q}\,\chi_{q\bar p}=-\langle i\partial_B\bar\partial_B \dot f,\chi\rangle_f,
\end{equation}
where, given $\alpha$ and $\beta$ in $\Omega_B^{(p,q)}(M,\C)$, we set
$$
\langle \alpha,\beta\rangle_h=\alpha_{i_1\dots i_p\bar j_1\dots\bar j_q}\cdot \bar \beta_{r_1\dots r_p\bar s_1\dots\bar s_q}\,
g^{\bar r_1i_1}_{h}\cdots g^{\bar r_pi_p}_{h}\cdot g_h^{\bar j_1 s_1}\cdots g_{h}^{\bar j_q s_q},
$$
and
$$
(\alpha,\beta)_h=\frac{1}{2^nn!}\int_M \langle \alpha,\beta\rangle_h\,\eta\wedge d\eta_{h}^n\,.
$$
In particular, if $\alpha=\alpha_i\,dz^i$ and $\beta=\beta_j\,dz^j$ are transverse forms of type $(1,0)$,
by writing $\chi=i\chi_{a\bar b}\,dz^a\wedge d\bar z^b$, we have
$$
\langle\chi,\alpha\wedge \bar \beta\rangle_h=i\chi_{a\bar b}\bar \alpha_{r}\beta_j g^{\bar r a}_hg^{\bar b j}_h\,.
$$
The following technical lemma will be useful in the sequel.
\begin{lemma}\label{int1}
Let $u\in C_B^\infty (M,\mathds{R})$ and $f$ be a smooth path in $C^{\infty}_B(M,\R)$. Then
\begin{itemize}
\item[($i$)] $(\Delta_f\dot f,u\sigma)_f=-(\partial_B \dot f,\sigma\partial_B u)_f-(u\partial_B\dot f,\partial_B \sigma)_f;$
\item[($ii$)] $(\bar \partial_B\partial_B \dot f,u\chi)_f=-i(u\,\partial_B \dot f,\partial_B\sigma)_f-(\chi, \partial_B u \wedge\bar\partial_B \dot f)_f;$
\item[($iii$)] $(\dot f,\dot \sigma)_f=\frac12 (\partial_B (\dot f)^2,\partial_B\sigma)_f-i(\chi, \partial_B \dot f \wedge\bar\partial_B \dot f)_f,$
\end{itemize}
where $\sigma=g_f^{\bar kr}\chi_{r\bar k}$.
\end{lemma}
\begin{proof}$\textrm{}$
\begin{itemize}
\item[($i$)]
$
(\Delta_f\dot f,u\sigma)_f=-(\partial_B^*\partial_B\dot f,u \sigma)_f=-(\partial_B \dot f,\sigma\partial_B u)_f-(u\partial_B\dot f,\partial_B\sigma)_f.
$
\item[($ii$)] Integrating by parts we have:
\begin{equation}
\begin{split}
2^nn!\,i(\bar \partial_B\partial_B \dot f,u\chi)_f=&-\int_M ug_f^{\bar c j}g_f^{\bar b a}\dot f_{,j\bar b}\,\chi_{a\bar c}\,\eta\wedge(d\eta_f)^n\\
=&\int_M u_{,\bar b}g_f^{\bar c j}g_f^{\bar b a}\chi_{a\bar c}\,\dot f_{,j}\,\eta\wedge(d\eta_f)^n+\int_M ug_f^{\bar c j}g_f^{\bar b a}\chi_{a{\bar b},\bar c}\,\dot f_{,j}\,\eta\wedge(d\eta_f)^n\\
=&\int_M u_{,\bar b}g_f^{\bar c j}g_f^{\bar b a}\chi_{a\bar c}\,\dot f_{,j}\,\eta\wedge(d\eta_f)^n+\int_M ug_f^{\bar c j}\sigma_{,\bar c}\,\dot f_{,j}\,\eta\wedge(d\eta_f)^n\\
=&2^nn!\,(u\partial_B \dot f,\partial_B\sigma)_f-2^nn!\,i(\chi, \partial_B u\wedge\bar \partial_B \dot f)_f.
\end{split}\nonumber
\end{equation}
\item[($iii$)] By using \eqref{sigma_t} and $(ii)$, we have
$$
\begin{aligned}
(\dot f,\dot \sigma)_f=&-i(\partial_B\bar \partial_B\dot f,\dot f\chi)_f=(\dot f\,\partial_B \dot f,\partial_B\sigma)_f-i(\chi, \partial_B \dot f \wedge\bar\partial_B \dot f)_f\\
=&\frac12 (\partial_B (\dot f)^2,\partial_B\sigma)_f-i(\chi, \partial_B \dot f \wedge\bar\partial_B \dot f)_f,
\end{aligned}
$$
\end{itemize}
as required.
\end{proof}
The following proposition concerns the uniqueness of {\em critical Sasaki metrics} in $\mathcal H_0$ and is analogous to the K\"ahler case.
\begin{prop}\label{stimevarie}
$J_{\chi}\colon \mathcal{H}_0\to \R$ has at most one critical point.
\end{prop}
\begin{proof}
Let $f$ be a curve in the space $\bar{\mathcal H}$ obtained as the completion of $\mathcal H$ with respect to the $C^2_w$-norm. Then, taking into account the definition of $J_{\chi}$,
Lemma \ref{int1} and equations \eqref{vol}, \eqref{sigma_t}, we have
\begin{equation}
\begin{split}
\partial_t^2\,J_{\chi}(f)=&(\ddot f,\sigma_f)_f+\frac12(\Delta_f\dot f,\dot f\sigma_f)_f+i(\dot f\bar\partial_B\partial_B\dot f,\chi)_f\\
=&\frac{1}{2^nn!}\int_M\left(\ddot f-\frac12|\partial_B \dot f|_g^2 \right)\sigma_f\,\eta\wedge (d\eta_f)^{n}-i(\chi,\partial_B \dot f\wedge \bar\partial_B\dot f)_f\,.
\end{split}\nonumber
\end{equation}
Therefore if $f$ solves the modified geodesic equation \eqref{wgeodesic}, then
$$
\partial_t^2\,J_{\chi}(f)\geq -i(\chi,\partial_B \dot f\wedge \bar\partial_B\dot f)_f\geq 0\,.
$$
Let us now assume that there are two critical points $f_0$ and $f_1$ of $J_{\chi}$ in $\mathcal H_0$ and denote by $\bar{\mathcal H}_0$
the completion of $\mathcal H_0$ with respect to the $C^2_w$-norm.
Then, in view of \cite{guanzhangA}, there exists a $C^{1,1}$-geodesic $f$ in $\bar{\mathcal H}_0$ such that $f(0)=f_0$ and $f(1)=f_1$. Let $h(t)=J_{\chi}(f(t))$. Then, since $f_0$ and $f_1$ are critical points of $J_{\chi}$, we have $\dot h(0)=\dot h(1)=0$. Since $\ddot h\geq 0$, it has to be $\ddot h\equiv 0$, which implies $\partial_B \dot f=0$ and that $\dot f(t)$ is constant for every $t\in [0,1]$. Finally, since $f$ is a curve in $\bar{\mathcal H}_0$, then $I(f)=0$ and therefore $\dot f=0$, which implies $f_0=f_1$, as required.
\end{proof}
On a compact $3$-dimensional Sasaki manifold, the existence of a critical metric is always guaranteed. Indeed,
if $(M,\xi,\Phi,\eta,g)$ is a compact $3$-dimensional Sasaki manifold with a fixed background transverse K\"ahler form $\chi$, then we can write:
$$
\chi=\frac14\langle \chi ,d\eta\rangle\,d\eta\,,\quad d\eta_h= \left(1-\frac12 \Delta_Bh\right)\, d\eta,
$$
where the scalar product and the basic Laplacian are computed with respect to the metric induced by $\eta$. Hence, $\eta_h=\eta+d^ch$ induces a critical metric if and only if $h$ solves:
$$\Delta_Bh=2-\frac{1}{c}\langle \chi ,d\eta\rangle\,,\quad c=\frac{2\int_M\eta\wedge\chi}{\int_M\eta\wedge d\eta}\,,$$
which always has a solution since:
$$
\int_M \left(2-\frac{1}{c}\langle \chi ,d\eta\rangle\right)\eta\wedge d\eta=0\,.
$$
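Explicitly, the relation $\chi=\frac14\langle\chi,d\eta\rangle\,d\eta$ gives $\langle\chi,d\eta\rangle\,\eta\wedge d\eta=4\,\eta\wedge\chi$, so that
$$
\int_M \left(2-\frac{1}{c}\langle \chi ,d\eta\rangle\right)\eta\wedge d\eta=2\int_M\eta\wedge d\eta-\frac{4}{c}\int_M\eta\wedge \chi=0\,,
$$
by the very definition of $c$.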
In higher dimensions there is a cohomological obstruction to the existence of a critical metric, similar to the one in the K\"ahler case.
Recall that if $(M,\omega)$ is a compact K\"ahler $2n$-dimensional manifold (with $n>1$) with a fixed background K\"ahler metric $\chi$, then the existence of a $J_{\chi}$-critical normalised K\"ahler potential on $(M,\omega)$
implies that $[c\omega-\chi]$ is a K\"ahler class in $H^2(M,\R)$ (see \cite{donaldsonMM}).
In \cite{chenM} Chen proved that such a condition is sufficient for the existence of a critical metric on complex surfaces, while in the recent paper \cite{LSS}, Lejmi and Sz\'ekelyhidi provide an example where it is satisfied, but the $J$-flow does not converge. In \cite{songweinkoveJ}, Song and Weinkove find a necessary and sufficient condition for the convergence of the flow in terms of a $(n-1,n-1)$-form. Some further results have been obtained in \cite{wein1,wein2}.
The Sasakian context is quite similar. Indeed, given a Sasakian manifold $(M,\xi,\Phi,\eta,g)$ with a fixed background transverse K\"ahler form $\chi$, if $h\in \mathcal H_0$ is a critical normalised Sasakian potential, then
$\frac c2d\eta_h-\chi$ is a transverse K\"ahler form, and we expect that the results in \cite{songweinkoveJ,wein1,wein2} could be generalised to the Sasakian case.
The following proposition is about the existence of a critical Sasaki metric in dimension $5$:
\begin{prop}
Let $(M,\xi,\Phi,\eta,g)$ be a compact $5$-dimensional Sasaki manifold. Assume that there exists a map $h\in \mathcal{H}_0$ such that $\frac c2\,(d\eta+dd^ch)-\chi$ is a transverse K\"ahler form. Then, there exists a critical Sasaki metric on $M$.
\end{prop}
\begin{proof}
Up to rescaling $\eta$, we may assume $c=1$. A function $h\in \mathcal{H}_0$ is critical if and only if
$$
2\,\eta\wedge \chi\wedge\left(\frac12d \eta+dd^c h\right)= \eta\wedge \left(\frac12d\eta+dd^ch\right)^2\,.
$$
Let $\Omega=\frac12 d\eta-\chi$. Then our hypothesis implies that $\Omega$ is a transverse K\"ahler form and moreover
by substituting we get
$$
(\Omega+dd^c h)^2=\chi^2\,.
$$
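Indeed, setting $A=\frac12\,d\eta+dd^ch$, the criticality condition reads $\eta\wedge A^2=2\,\eta\wedge\chi\wedge A$, and expanding the square (after wedging with $\eta$) gives
$$
\eta\wedge(\Omega+dd^ch)^2=\eta\wedge(A-\chi)^2=\eta\wedge A^2-2\,\eta\wedge\chi\wedge A+\eta\wedge\chi^2=\eta\wedge\chi^2\,.
$$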
Finally, the Calabi-Yau theorem in K\"ahler foliations \cite{aziz} implies the statement.
\end{proof}
\section{Well-posedness of the Sasaki $J$-flow} \label{wellposed}
\begin{theorem}
The Sasaki $J$-flow is well-posed, i.e., for every initial datum $f_0$, system \eqref{jflow} has a unique maximal solution $f$ defined on $[0,\epsilon_{\max})$, for some positive $\epsilon_{\max}$.
\end{theorem}
\begin{proof}
Since $\mathcal{H}$ is not open in $C^{\infty}(M,\R)$, to apply the standard parabolic theory we have to use a trick adopted by Smoczyk, Wang and Zhang for showing the short-time existence of the Sasaki-Ricci flow in \cite{smoczyk}.
Since the functional $F\colon \mathcal{H}\to \R$ defined as
$$
F(f)=\xi^2(f)+\sigma_f,
$$
is elliptic, the standard parabolic theory implies that the geometric flow
\begin{equation}\label{modifiedflow}
\dot f=c-\xi^2(f)-\sigma_f,\quad
f(0)=f_0,
\end{equation}
has a unique maximal solution $f\in C^{\infty}(M\times[0,\epsilon_{\rm max}),\R)$, for some $\epsilon_{\rm max}>0$. Of course, if $f(\cdot,t)$ is a solution to \eqref{modifiedflow} which is basic for every $t$ and $I(f)=0$, then $f$ solves \eqref{jflow}. We first show that when $f_0$ is basic, the solution $f$ to \eqref{modifiedflow} remains basic for every $t\in[0,\epsilon_{\rm max})$. We have
$$
\partial_t \xi(f)=\xi(\dot f)=\xi(-\xi^2(f)-g^{\bar k r}\chi_{r\bar k})\,.
$$
Moreover, since the components of $\chi$ are basic, we have
$$
\xi(g^{\bar k r}\chi_{r\bar k})=-g^{\bar k l}_f(\xi(f_{,l\bar m}))g^{\bar m r}_f\chi_{r\bar k}=
-g^{\bar k l}_f\xi (f)_{,l\bar m}\,g^{\bar m r}_f\chi_{r\bar k}=-\langle dd^c_B\xi (f),\chi \rangle_f\,,
$$
i.e.
\begin{equation}\label{xi}
\partial_t \xi(f)=-\xi^3(f)+\langle dd^c_B\xi (f),\chi \rangle_f\,.
\end{equation}
Equation \eqref{xi} is parabolic in $\xi(f)$ and, since the solution to a parabolic problem is unique, if $\xi(f_0)=0$, then $\xi(f(t))=0$ for every $t\in [0,\epsilon_{\rm max})$, as required.
Finally, we show that if $f_0$ is normalised, then $I(f)=0$ for every $t\in [0,\epsilon_{\rm max})$. We have
$$
\partial_tI(f)=\frac{1}{2^nn!}\int_M \dot f\,\eta\wedge d\eta_{f}^n=\frac{1}{2^nn!}\int_M (c-\sigma_f)\,\eta\wedge d\eta_{f}^n,
$$
and since $c\int_M\,\eta\wedge d\eta_{f}^n=\int_M\sigma_f\,\eta\wedge d\eta_{f}^n$ we have $\partial_tI(f)=0$. Therefore, since $I(f_0)=0$, $I(f)=0$ for every $t\in[0,\epsilon_{\max})$ and the claim follows.
\end{proof}
\begin{rem}{\em
Alternatively, the short-time existence of the Sasaki $J$-flow can be obtained by invoking the short-time existence of any second order {\em transversally parabolic equation} on compact manifolds foliated by Riemannian foliations. A proof of the latter result can be found in \cite{BHV}. }
\end{rem}
In analogy to the K\"ahler case, let ${\rm En} \colon \mathcal H_0\to \R$ be the energy functional
$$
{\rm En}(h)=\frac{1}{2^nn!}\int_M\sigma_h^2\,\eta\wedge (d\eta_h)^{n}=(\sigma_h,\sigma_h)_{h}.
$$
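Note that, by the Cauchy-Schwarz inequality and the identity $c\int_M\eta\wedge (d\eta_h)^n=\int_M\sigma_h\,\eta\wedge (d\eta_h)^n$ (see Section \ref{wellposed}),
$$
{\rm En}(h)\geq \frac{c^2}{2^nn!}\int_M\eta\wedge (d\eta_h)^n\,,
$$
with equality if and only if $\sigma_h\equiv c$, in accordance with the characterisation of critical points below.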
\begin{prop}\label{stimevarie}
The following items hold:
\begin{itemize}
\item[1.] ${\rm En}$ has the same critical points as $J_\chi$ and is strictly decreasing along the Sasaki $J$-flow;
\item[2.] any critical point of ${\rm En}$ is a local minimizer;
\item[3.] the length of any curve in $\mathcal H_0$ and the distance between any two points in $\mathcal H_0$ decrease under the $J$-flow.
\end{itemize}
\end{prop}
\begin{proof}
$1.$ Let $f\colon[0,1]\to \mathcal{H}_0$ be a smooth curve. Then, by using \eqref{sigma_t} and Lemma \ref{int1}, the first variation of ${\rm En}$ reads:
\begin{equation*}
\begin{split}
\partial_t {\rm En}(f)=&\frac{1}{2^nn!}\partial_t\int_M\sigma_f^2\,\eta\wedge (d\eta_f)^{n}
=2(\sigma_f,\dot \sigma_f)_f+(\sigma_f^2,\Delta_f\dot f)_f\\
=&2(\sigma_f\partial_B \dot f,\partial_B\sigma_f)_f-2i(\chi,\partial_B\dot f\wedge\bar \partial_B \sigma_f)_f-2(\partial_B\sigma_f,\sigma_f\partial_B\dot f)_f\\
=&-2i(\chi,\partial_B\dot f\wedge\bar \partial_B \sigma_f)_f.
\end{split}
\end{equation*}
Along the Sasaki $J$-flow one has $\dot f=c-\sigma_f$, thus:
$$
\partial_t {\rm En}(f)=2i(\chi,\partial_B\sigma_f\wedge\bar \partial_B \sigma_f)_f\leq 0,
$$
and ${\rm En}$ is strictly decreasing along the $J$-flow. Moreover, if $h\in \mathcal H_0$ is a critical point of ${\rm En}$, then
$\partial_B \sigma_h=0$, so that $\sigma_h$ is constant and hence equal to $c$, i.e. $h$ is a critical point of $J_{\chi}$.
$2.$
We now compute the second variation of ${\rm En}$. Let $f\colon (-\delta,\delta) \times (-\delta,\delta) \to \mathcal H_0$ be a smooth map in the variables $(t,s)$.
Assume that $f(0,0)=h$ is a critical point of ${\rm En}$ and let $u=\partial_{t}f_{|(0,0)}$, $v=\partial_s f_{|(0,0)}$.
Then we have
$$
\partial_s\partial_t {\rm En}(f)=\frac{1}{2^{n-1}n!}\,i\,\partial_s\left(\int_M\langle\chi,\partial_B\partial_tf\wedge\bar \partial_B \sigma_f\rangle_f\,\eta\wedge(d\eta_f)^n\right)\,,
$$
and
$$
\partial_s\partial_t {\rm En}(f)_{|(0,0)}=\frac{1}{2^{n-1}n!}\int_M\langle\chi,\partial_Bu\wedge\bar \partial_B \partial_s\sigma_{f |(0,0)}\rangle_h\,\eta\wedge(d\eta_h)^n=2(\chi,\partial_Bu\wedge\bar \partial_B \partial_s\sigma_{f |(0,0)})_h,
$$
since $\sigma_h$ is constant. Now
$$
2(\chi,\partial_Bu\wedge\bar \partial_B \partial_s\sigma_{f |(0,0)})_h=2 (\chi,\partial_s\sigma_{f |(0,0)}\, \partial_B\bar\partial_Bu)_h,
$$
and formula \eqref{sigma_t} implies
\begin{equation}\label{secondvar}
\begin{split}
\partial_s\partial_t {\rm En}(f)_{|(0,0)}=\frac{1}{2^{n-1}n!}\int_M\langle i\partial_B\bar\partial_B u,\chi\rangle_h\langle i\partial_B\bar\partial_B v,\chi\rangle_h\,\eta\wedge(d\eta_h)^n,
\end{split}\nonumber
\end{equation}
which implies that $\partial_s\partial_t {\rm En}(f)_{|(0,0)}$ is positive definite as a symmetric form.
$3.$ Given a smooth curve $u\colon[0,1]\to \mathcal H_0$ in $\mathcal H_0$ and $h\in \mathcal H_0$, we denote by
$$
\mathcal{L}(h,u) =\frac{1}{2^{n}n!}\int_0^1\int_M \dot u^2 \ \eta\wedge(d\eta_h)^n\wedge ds=(\dot u,\dot u)_{h},
$$
the square of the length of $u$ with respect to the Sasaki metric induced by $h$.
Let $f\colon [0,\epsilon)\times [0,1]\to \mathcal H_0$ and assume that $t\mapsto f(t,s)$ is a solution to the $J$-flow for every $s\in [0,1]$. Then, by using Lemma \ref{int1}, we have
\begin{equation}
\begin{split}
2^{n}n!\,\partial_t\mathcal{L}(f,f)=&\partial_t\left[\int_0^1\int_M (\partial_s f)^2\, \eta\wedge(d\eta_f)^n\wedge ds\right]\\
=&\int_0^1\int_M \left[2\partial_s\partial_tf\,\partial_s f+ (\partial_sf)^2\Delta_f(\partial_tf)\right] \eta\wedge(d\eta_f)^n\wedge ds\\
=&\int_0^1\int_M \left[-2\partial_s\sigma_f\,\partial_s f- (\partial_sf)^2\Delta_f(\sigma_f)\right] \eta\wedge(d\eta_f)^n\wedge ds\\
=&-\int_0^1\left[2(\partial_s\sigma_f,\partial_sf)_f+((\partial_sf)^2,\Delta_f \sigma_f)_f \right]ds\\
=&-\int_0^1\left[2(\partial_s\sigma_f,\partial_sf)_f+(\partial_B(\partial_sf)^2,\partial_B \sigma_f)_f \right]ds\\
=&2i\int_0^1(\chi,\partial_B \partial_sf\wedge \bar \partial_B \partial_sf)_f\,ds\leq 0,
\end{split}\nonumber
\end{equation}
and equality holds if and only if $\partial_sf(t,s)$ is constant in $s$.
\end{proof}
\section{A maximum principle for basic maps and tensors}\label{maxprin}
In this section we introduce a maximum principle for transversally elliptic operators on Sasakian manifolds.
The principle will be applied in the next section to obtain the $C^2$-estimates for the solutions to \eqref{jflow}.
Let $(M,\xi,\Phi,\eta)$ be a Sasakian manifold. By a {\em smooth family of basic linear partial differential operators}
$\{E\}_{t\in [0,\epsilon)}$ we mean a smooth family of operators $E(\cdot,t)\colon C_B^{\infty}(M,\R)\to C^{\infty}_B(M,\R)$ which can be locally written as
$$
E(h(y),t)=\sum_{1\leq |k|\leq m} a_k(y,t)
\frac{\partial^{|k|}}{\partial y^{k_1}\dots\partial y^{k_r}}h(y)
$$
for every $h\in C_B^{\infty}(M,\R)$, where $\{y^1,\dots,y^{2n},z\}$ are real coordinates on $M$ such that $\xi=\partial_z$.
The maps $a_k$ are assumed to be smooth and basic in the space coordinates (see \cite{aziz} for a detailed description of these operators on compact manifolds foliated by Riemannian foliations). Observe that $E$ can be regarded as a functional $E\colon C_B^{\infty}(M\times [0,\epsilon),\R)\to C_B^{\infty}(M\times [0,\epsilon),\R)$ in a natural way. We further make the strong assumption that $E$ satisfies
\begin{equation}\label{E}
E(h(x,t),t)\leq 0,
\end{equation}
whenever the complex Hessian $dd^ch$ of $h$ is nonpositive at the point $(x,t)\in M\times [0,\epsilon)$.
\begin{prop}[Maximum principle for basic maps]\label{maximum}
Assume that $h\in C^{\infty}(M\times [0,\epsilon),\mathds{R})$ satisfies
$$
\partial_th(x,t)-E(h(x,t),t)\leq 0.
$$
Then
$$
\sup_{(x,t)\in M\times [0,\epsilon)} h(x,t)\leq \sup_{x\in M} h(x,0).
$$
\end{prop}
\begin{proof}
Fix $\epsilon_0\in (0,\epsilon)$ and let $h_\lambda\colon M\times [0,\epsilon_0]\to \R$ be the map
$h_\lambda(x,t)=h(x,t)-\lambda t$. Assume that $h_\lambda$ achieves its global maximum at $(x_0,t_0)$ and assume by contradiction that
$t_0>0$. Then $\partial_th_\lambda(x_0,t_0)\geq 0$ and $dd^ch_\lambda(x_0,t_0)$ is nonpositive. Therefore condition \eqref{E} implies $E(h_\lambda(x_0,t_0),t_0)\leq 0$ and consequently
$$
\partial_th_\lambda(x_0,t_0)-E(h_\lambda(x_0,t_0),t_0)\geq 0.
$$
Since $\partial_th_\lambda=\partial_th-\lambda$ and $E(h_\lambda(x,t),t)=E(h(x,t),t)$, we have
$$
0\leq \partial_th(x_0,t_0)-E(h(x_0,t_0),t_0)-\lambda \leq -\lambda,
$$
which is a contradiction. Therefore $h_\lambda$ achieves its global maximum at a point $(x_0,0)$ and
$$
\sup_{M\times [0,\epsilon_0]} h\leq \sup_{M\times [0,\epsilon_0]} h_\lambda+\lambda\epsilon_0\leq \sup_{x\in M} h(x,0)+\lambda\epsilon_0.
$$
Since the above inequality holds for every $\epsilon_0\in (0,\epsilon)$ and $\lambda>0$, the claim follows.
\end{proof}
A similar result can be stated for tensors:
\begin{prop}[Maximum principle for basic tensors]\label{maxtensors}
Let $\kappa$ be a smooth curve of basic $(1,1)$-forms on $M$ for $t\in [0,\epsilon)$. Assume that $\kappa(\cdot,0)$ is nonpositive and that
$$
\partial_t\kappa_{i\bar j}(x,t)-E(\kappa_{i\bar j}(x,t),t)=N_{i\bar j}(x,t),
$$
where $N$ is a nonpositive basic form and the components are taken with respect to foliated coordinates. Then $\kappa$ is nonpositive for every $t\in [0,\epsilon)$.
\end{prop}
\begin{proof}
The proof is very similar to the case of functions. We show that for every positive $\lambda$, $\kappa_{\lambda}=\kappa-t\lambda d\eta$ is nonpositive. Assume by contradiction that this is not true. Then there exist a $\lambda$, a first point $(x_0,t_0)\in M\times [0,\epsilon)$ and a $g$-unitary $(1,0)$-vector $Z\in \mathcal{D}_{x_0}^{1,0}$ such that $\kappa_{\lambda}(Z,\bar Z)=0$. We extend $Z$ to a basic and unitary vector field in a small enough neighborhood $U$ of $x_0$ and consider the map $f_{\lambda}\colon U\times [0,t_0]\to \R$ given by
$f_{\lambda}=\kappa_{\lambda}(Z,\bar Z)$. Then $f_\lambda$ has a maximum at $(x_0,t_0)$ and so
$$
\partial_tf_{\lambda}\geq 0\,,\quad E(f_{\lambda}(x_0,t_0),t_0)\leq 0,
$$
at $(x_0,t_0)$. Now since
$$
E(f_{\lambda}(x,t),t)=E(f_0(x,t),t),
$$
we have
$$
0\leq\partial_tf_\lambda=E(f_\lambda,\cdot)+N(Z,\bar Z)-\frac\lambda2\leq N(Z,\bar Z)-\frac\lambda2,
$$
at $(x_0,t_0)$, which implies
$$
N(Z,\bar Z)\geq \frac\lambda2 ,
$$
at $(x_0,t_0)$, contradicting the nonpositivity of $N$.
\end{proof}
In the following we will apply the two propositions when $E$ is the operator $\tilde \Delta_f$, depending on a smooth curve $f$ in $\mathcal{H}$, defined by:
$$
\tilde \Delta_f(h,t)=g_f^{\bar k p}g_f^{\bar q j}\chi_{j\bar k}\, h_{,p\bar q}\,.
$$
\section{Second order estimates}\label{estimates}
The following two lemmas provide the a priori estimates we need to prove the main theorem.
\begin{lemma}\label{soe}
Let $f\colon M\times [0,\epsilon)\to \R$ be a solution to \eqref{jflow}, with $\epsilon<\infty$.
Then
$$
\sigma_f\leq \max_{x\in M}\sigma_f(x,0)
$$
and there exists a uniform constant $C$, depending only on $f_0$, such that
$$
\gamma_f(x,t)\leq \sup_{x\in M} \gamma_f(x,0)\,{\rm e}^{C\epsilon},
$$
where $\gamma_f=\chi^{\bar j k}(g_f)_{k\bar j}$.
\end{lemma}
\begin{proof}
The upper bound on $\sigma_f$ easily follows from the definition of $J_{\chi}$ and Proposition \ref{maximum}.
Indeed, differentiating \eqref{jflow} in $t$ we have
$\ddot f=-\partial_t\sigma_f=g_f^{\bar a b}g_f^{\bar k j} \chi_{b\bar k}\dot f_{,j\bar a}=-\tilde \Delta_f\sigma_f,$ i.e.,
$$
\partial_t\sigma_f=\tilde \Delta_f\sigma_f,
$$
and Proposition \ref{maximum} implies the first inequality.
As for the upper bound on $\gamma_f$, we have
$$
\partial_t\gamma_f=\chi^{\bar j k}\partial_t\left[(g_f)_{k\bar j}\right]=\chi^{\bar j k}\dot f_{,k\bar j}.
$$
Since $f$ solves \eqref{jflow}, we have
$$
\dot f_{,a}=g_f^{\bar k p}(g_f)_{p\bar q,a}g_f^{\bar q j}\chi_{j\bar k}-g_f^{\bar k j}\chi_{j\bar k,a}
$$
and
\begin{equation}\label{dotfab}
\begin{split}
\dot f_{,a\bar b}=&-2g_f^{\bar k s}(g_f)_{\bar rs,\bar b}g_f^{\bar r p}(g_f)_{p\bar q,a}g_f^{\bar q j}\chi_{j\bar k}+\tilde \Delta_f[(g_f)_{a\bar b}]\\
&+g_f^{\bar k p}(g_f)_{p\bar q,a}g_f^{\bar q j}\chi_{j\bar k,\bar b}+g_f^{\bar k s}(g_f)_{\bar rs,\bar b}g_f^{\bar r j}\chi_{j\bar k,a}-g_f^{\bar k j}\chi_{j\bar k,a\bar b}.
\end{split}
\end{equation}
Let $R^T=R^T(\chi)$ be the transverse curvature of $\chi$ and ${\rm Ric}^T(\chi)$ its transverse Ricci tensor (see Section \ref{preliminaries}). The components of
$R^T$ with respect to foliated coordinates read as $R_{j\bar k a\bar b}^T=-\chi_{j\bar k, a\bar b}+\chi^{\bar p q}\chi_{j\bar p, a}\chi_{q\bar k, \bar b}$.
Fix a point $(x_0,t_0)\in M\times[0,\epsilon)$ and {\em special foliated coordinates} for $\chi$ around it (see Subsection \ref{foliatedcoordinates}). We may further assume without loss of generality that $(g_f)_{j\bar k}=\lambda_j\delta_{j k}$ at $(x_0,t_0)$. Then
\begin{equation}\label{dotfab1}
\begin{split}
\dot f_{,a\bar b}=\,&\sum_{k,r=1}^n \frac{-2}{\lambda_k^2\lambda_r}(g_f)_{\bar rk,\bar b}(g_f)_{r\bar k,a}
+\tilde \Delta_f[(g_f)_{a\bar b}]
-\sum_{k=1}^n
\frac{1}{\lambda_k}\chi_{k\bar k,a\bar b}\quad\mbox{ at } (x_0,t_0)
\end{split}
\end{equation}
and
\begin{equation}
\begin{split}
\partial_t\gamma_f=\sum_{a=1}^n\dot f_{,a\bar a}=
\sum_{a=1}^n\left[\sum_{k,r=1}^n \frac{-2}{\lambda_k^2\lambda_r}|(g_f)_{k\bar r,\bar a} |^2
+\tilde \Delta_f[(g_f)_{a\bar a}]
-\sum_{k=1}^n \frac{1}{\lambda_k}\chi_{k\bar k,a\bar a}\right] \quad\mbox{ at } (x_0,t_0)\,,
\end{split}\nonumber
\end{equation}
i.e.
\begin{equation*}
\partial_t\gamma_f=
\sum_{a=1}^n\left(\sum_{k,r=1}^n \frac{-2}{\lambda_k^2\lambda_r}|(g_f)_{k\bar r,\bar a} |^2
+\tilde \Delta_f[(g_f)_{a\bar a}]\right)
-\sum_{k=1}^n\frac{1}{\lambda_k}{\rm Ric}^T_{k\bar k} \quad\mbox{ at } (x_0,t_0)\,.
\end{equation*}
Now a direct computation yields
$$
\tilde \Delta_f\gamma_f=\sum_{a=1}^n \tilde \Delta_f[(g_f)_{a\bar a}]-\sum_{a,k=1}^n\frac{\lambda_a}{\lambda_k^2} R^T_{a\bar ak\bar k} \quad\mbox{ at } (x_0,t_0)
$$
and therefore
$$
\partial_t\gamma_f-\tilde \Delta_f\gamma_f=\sum_{a,k=1}^n\left(\sum_{r=1}^n \frac{-2}{\lambda_k^2\lambda_r}|(g_f)_{k\bar r,\bar a} |^2
+\frac{\lambda_a}{\lambda_k^2} R^T_{a\bar ak\bar k}\right)
-\sum_{k=1}^n \frac{1}{\lambda_k}{\rm Ric}^T_{k\bar k} \quad\mbox{ at } (x_0,t_0)\,.
$$
Observe that
$$
\sum_{k=1}^n\frac1{\lambda_k}=\sigma_f(x_0,t_0)\leq C_1,\qquad \sum_{k=1}^n\lambda_k=\gamma_f(x_0,t_0),
$$
where $C_1=\max_{x\in M}\sigma_{f}(x,0)$.
Thus for all $k=1,\dots, n$ we have
$$
\frac1{\lambda_k}\leq C_1, \qquad \lambda_k\leq \gamma_f(x_0,t_0)\,.
$$
Since $M$ is compact, there exists a constant $C_2$ such that ${\rm Ric}^T-C_2\chi$ is nonnegative and therefore at $(x_0,t_0)$ we have
$$
\Big|\sum_{k=1}^n\frac{1}{\lambda_k}{\rm Ric}^T_{k\bar k}\Big|\leq nC_1C_2\,,
\quad \Big|\sum_{a,k=1}^n\frac{\lambda_a}{\lambda_k^2} R^T_{a\bar ak\bar k}\Big|\leq C_1^2 \Big|\sum_{a=1}^n\lambda_a {\rm Ric}^T_{a\bar a}\Big|
\leq nC_1^2C_2\gamma_f\,.
$$
Thus there exists a constant $C$ such that
$$
\partial_t\gamma_f-\tilde \Delta_f\gamma_f\leq C\gamma_f+C.
$$
Let $F:={\rm e}^{-Ct}\gamma_f -Ct$. Then
$$
\partial_t F-\tilde \Delta_f F=e^{-Ct}\left(-C\gamma_f+\partial_t\gamma_f-\tilde \Delta_f \gamma_f\right)-C,
$$
and by Proposition \ref{maximum} we have
$$
\sup_{(x,t)\in M\times [0,\epsilon)}F\leq \sup_{x\in M}F(x,0)=\sup_{x\in M} \gamma_f(x,0),
$$
which implies
$$
\sup_{(x,t)\in M\times [0,\epsilon)}\gamma_f\leq \left(\sup_{x\in M} \gamma_f(x,0)+C\epsilon\right)e^{C\epsilon}
$$
as required.
\end{proof}
In order to get a uniform lower bound for $d\eta_f$ we need to add a hypothesis on the bisectional curvature of $\chi$ (see Theorem \ref{bisectional} below). Observe that the existence of a uniform lower bound without further assumptions would imply the existence of a critical metric in $\mathcal{H}_0$ for each choice of $\eta$ and $\chi$, in contrast with the necessary condition $\frac c2d\eta_f-\chi>0$.
\begin{theorem}\label{bisectional}
Assume that the transverse bisectional curvature of $\chi$ is nonnegative
and let $f\colon M\times [0,\epsilon)\to \R$ be a solution to \eqref{jflow}. Then there exists a constant $C$ depending only on the initial datum $f_0$ such that $C\chi-d\eta_f$ is a transverse K\"ahler form
for every $t\in [0,\end{equation}psilon)$.
\end{theorem}
\begin{proof}
Let $\kappa=\frac12 d\eta_f-C\chi$,
where $C$ is a constant chosen big enough to have $\kappa$ nonpositive at $t=0$.
Then $\kappa$ is a time-dependent basic $(1,1)$-form,
and we apply Proposition \ref{maxtensors} to show that it remains nonpositive for every $t\in [0,\epsilon)$.
Once a system of foliated coordinates $\{z^k,z\}$ is fixed, we have $\partial_t \kappa_{a\bar b}=\dot f_{,a\bar b}$ and formula \eqref{dotfab} implies
\begin{equation}
\begin{split}
\partial_t \kappa_{a\bar b}=&-2g_f^{\bar k s}(g_f)_{\bar rs,\bar b}g_f^{\bar r p}(g_f)_{p\bar q,a}g_f^{\bar q j}\chi_{j\bar k}+g_f^{\bar k p}(g_f)_{p\bar q,a\bar b}g_f^{\bar q j}\chi_{j\bar k}\\
&+g_f^{\bar k p}(g_f)_{p\bar q,a}g_f^{\bar q j}\chi_{j\bar k,\bar b}+g_f^{\bar k s}(g_f)_{\bar rs,\bar b}g_f^{\bar r j}\chi_{j\bar k,a}-g_f^{\bar k j}\chi_{j\bar k,a\bar b}\\
=&-2g_f^{\bar k s}(g_f)_{\bar rs,\bar b}g_f^{\bar r p}(g_f)_{p\bar q,a}g_f^{\bar q j}\chi_{j\bar k}+\tilde\Delta\left[(g_f)_{a\bar b}\right]\\
&+g_f^{\bar k p}(g_f)_{p\bar q,a}g_f^{\bar q j}\chi_{j\bar k,\bar b}+g_f^{\bar k s}(g_f)_{\bar rs,\bar b}g_f^{\bar r j}\chi_{j\bar k,a}-g_f^{\bar k j}\chi_{j\bar k,a\bar b},
\end{split}\nonumber
\end{equation}
i.e.
\begin{equation}\label{segno1}
\begin{aligned}
\partial_t \kappa_{a\bar b}-\tilde\Delta\left[(g_f)_{a\bar b}\right]=\,&-2g_f^{\bar k s}(g_f)_{\bar rs,\bar b}g_f^{\bar r p}(g_f)_{p\bar q,a}g_f^{\bar q j}\chi_{j\bar k}\\
&+g_f^{\bar k p}(g_f)_{p\bar q,a}g_f^{\bar q j}\chi_{j\bar k,\bar b}+g_f^{\bar k s}(g_f)_{\bar rs,\bar b}g_f^{\bar r j}\chi_{j\bar k,a}-g_f^{\bar k j}\chi_{j\bar k,a\bar b}.
\end{aligned}
\end{equation}
We apply Proposition \ref{maxtensors} using as $N$ the basic form defined by the right-hand side of formula \eqref{segno1}. To this end, we have to show that $N$ is nonpositive. This can be done as follows: fix a point $(x,t)\in M\times [0,\epsilon)$ and an arbitrary unitary vector field $Z\in \mathcal D_{x}^{1,0}$. Then we can find foliated coordinates $(z,z^{k})$ around $x$ which are special for $\chi$ and such that $Z=\partial_{z^1}|_x$ and $g_f$ takes a diagonal expression with eigenvalues $\lambda_k$ at $(x,t)$.
Then we have
$$
N(Z,\bar Z)=-2\sum_{k,r=1}^n\frac{1}{\lambda_k^2\lambda_r}|(g_f)_{k\bar r,\bar 1}|^2-\sum_{k=1}^n\frac{1}{\lambda_k^2}R^T(\chi)_{k\bar k1\bar 1}
$$
at $(x,t)$ and the claim follows.
\end{proof}
\section{Proof of the main theorem}
The proof of Theorem $\ref{main}$ is based on the second order estimates provided in Section \ref{estimates} and on the following result in K\"ahler geometry.
\begin{theorem}\label{kahler}
Let $B$ be an open ball about $0$ in $\C^n$ and let $\omega, \chi$ be two K\"ahler forms on $B$. Let $f\colon B\times [0,\epsilon)\to \R$ be a solution to the K\"ahler $J$-flow
$$
\dot f=c-g_f^{\begin{equation}ar k r}\chi_{r\begin{equation}ar k},
$$
where $g_f$ is the metric associated to $\omega_f=\omega+dd^cf$.
Assume that $\omega_f$ is uniformly bounded in $B\times[0,\end{equation}psilon)$. Then $f$ is $C^{\infty}$-bounded in a small ball about $0$.
\end{theorem}
As explained in \cite{chen}, the theorem can be proved by using the well-known interior estimates of Evans and Krylov (see \cite{gill} for a proof of the estimates in the complex case).
\begin{proof}[Proof of Theorem $\ref{main}$]
The proof of the long time existence consists in showing that every solution $f$ to \eqref{jflow} has a $C^{\infty}$-bound. Let
$f\colon M\times [0,\epsilon_{\max})\to \mathds{R}$ be the solution to \eqref{jflow} with initial condition $f_0\in \mathcal H_0$ and assume by contradiction that $\epsilon_{\max}< \infty$. Lemma \ref{soe} implies that the second derivatives of $f$ are uniformly bounded in $M$. Since $f$ can be regarded as a collection of solutions to the K\"ahler $J$-flow on small open balls in $\C^n$, Theorem \ref{kahler} implies that $f$ is $C^\infty$-uniformly bounded in $M$. Therefore $f$ converges in $C^{\infty}$-norm to a smooth function $\tilde f$ as $t$ tends to $\epsilon_{\max}^{-}$. Since $\partial_t f$ is basic for every $t\in[0,\epsilon_{\max})$, $\tilde f$ is basic and, by the well-posedness of the Sasaki $J$-flow, the solution $f$ can be extended past $\epsilon_{\max}$, contradicting its maximality.
The convergence in the case when $\chi$ has nonnegative transverse holomorphic bisectional curvature is obtained exactly as in the K\"ahler case. Let $f\colon M\times [0,\infty)\to \R$ be a solution to the Sasaki $J$-flow. Since $\chi$ has nonnegative transverse holomorphic bisectional curvature, Theorem \ref{bisectional} implies that $f$ has a uniform $C^{\infty}$-bound, and the Ascoli--Arzel\`a theorem implies that every sequence $t_j\to \infty$ admits a subsequence along which $f_{t_j}$ converges in $C^\infty$-norm to a function $f_{\infty}$. Therefore, $f$ converges to a critical map $f_{\infty}\in \mathcal{H}_0$.
\end{proof}
\begin{thebibliography}{99}
\bibitem{BHV} L. Bedulli, W. He, L. Vezzoni: Geometric flows on foliated manifolds, {\em in preparation}.
\bibitem{bg} C. P. Boyer, K. Galicki: On Sasakian--Einstein geometry, {\em Internat. J. Math.} {\bf 11} (2000), 873--909.
\bibitem{bgbook} C. P. Boyer, K. Galicki: {\em Sasaki geometry}, Oxford Mathematical Monographs, Oxford University Press, Oxford, 2008. xii+613 pp.
\bibitem{bgs} C. P. Boyer, K. Galicki, S. R. Simanca: Canonical Sasakian metrics, {\em Comm. Math. Phys.} {\bf 279} (2008), no. 3, 705--733.
\bibitem{cao} H.-D. Cao: Deformation of K\"ahler metrics to K\"ahler-Einstein metrics on compact K\"ahler manifolds, {\em Invent. Math.} {\bf 81} (1985), no. 2, 359--372.
\bibitem{chenM} X. X. Chen: On the lower bound of the Mabuchi energy and its application, {\em Int. Math. Res. Notices} {\bf 12} (2000).
\bibitem{chen} X. X. Chen: A new parabolic flow in K\"ahler manifolds, {\em Comm. Anal. Geom.} {\bf 12} (2004), no. 4, 837--852.
\bibitem{collins0} T. C. Collins: The transverse entropy functional and the Sasaki-Ricci flow, {\em Trans. Amer. Math. Soc.} {\bf 365} (2013), no. 3, 1277--1303.
\bibitem{collins1} T. C. Collins: Uniform Sobolev inequality along the Sasaki-Ricci flow, {\em J. Geom. Anal.} {\bf 24} (2014), no. 3, 1323--1336.
\bibitem{collins2} T. C. Collins: Stability and convergence of the Sasaki-Ricci flow, to appear in {\em J. Reine Angew. Math.}
\bibitem{collins3} T. C. Collins, A. Jacob: On the convergence of the Sasaki-Ricci flow, {\tt arXiv:1110.3765v1}, to appear in {\em Contemp. Math.}
\bibitem{CLPP} M. Cvetic, H. L\"u, D. N. Page, C. N. Pope: New Einstein--Sasaki spaces in five and higher dimensions, {\em Phys. Rev. Lett.} {\bf 95} (2005), p. 4.
\bibitem{donaldsonNC} S. K. Donaldson: Symmetric spaces, K\"ahler geometry and Hamiltonian dynamics, in {\em Northern California Symplectic Geometry Seminar}, Amer. Math. Soc. Transl. Ser. 2, 196, Amer. Math. Soc., Providence, 1999, 13--33.
\bibitem{donaldsonMM} S. K. Donaldson: Moment maps and diffeomorphisms, {\em Asian J. Math.} {\bf 3} (1999), no. 1, 1--15.
\bibitem{aziz} A. El Kacimi-Alaoui: Op\'erateurs transversalement elliptiques sur un feuilletage riemannien et applications [Transversely elliptic operators on a Riemannian foliation, and applications], {\em Compositio Math.} {\bf 73} (1990), no. 1, 57--106.
\bibitem{GM} J. P. Gauntlett, D. Martelli, J. Sparks, D. Waldram: Sasaki-Einstein metrics on $S^2\times S^3$, {\em Adv. Theor. Math. Phys.} {\bf 8} (2004), 711--734.
\bibitem{GM1} J. P. Gauntlett, D. Martelli, J. Sparks, D. Waldram: A new infinite class of Sasaki-Einstein manifolds, {\em Adv. Theor. Math. Phys.} {\bf 8} (2004), 987--1000.
\bibitem{gilbarg} D. Gilbarg, N. S. Trudinger: {\em Elliptic partial differential equations of second order}, Springer-Verlag, 1983.
\bibitem{gill} M. Gill: Convergence of the parabolic complex Monge-Amp\`ere equation on compact Hermitian manifolds, {\em Comm. Anal. Geom.} {\bf 19} (2011), no. 2, 277--304.
\bibitem{GKN} M. Godli\'nski, W. Kopczy\'nski, P. Nurowski: Locally Sasakian manifolds, {\em Class. Quantum Grav.} {\bf 17} (2000), 105--115.
\bibitem{guanzhang} P. Guan, X. Zhang: A geodesic equation in the space of Sasakian metrics, in {\em Geometry and Analysis 1}, 303--318, Adv. Lect. Math. 17, Int. Press, Somerville, MA, 2011.
\bibitem{guanzhangA} P. Guan, X. Zhang: Regularity of the geodesic equation in the space of Sasakian metrics, {\em Adv. Math.} {\bf 230} (2012), no. 1, 321--371.
\bibitem{he} W. He: On the transverse scalar curvature of a compact Sasaki manifold, {\tt arXiv:1105.4000}, to appear in {\em Complex Manifolds}.
\bibitem{HeSun} W. He, S. Sun: The generalized Frankel conjecture in Sasaki geometry, {\tt arXiv:1209.4026}.
\bibitem{LSS} M. Lejmi, G. Sz\'ekelyhidi: The J-flow and stability, {\tt arXiv:1309.2821}.
\bibitem{lieberman} G. Lieberman: {\em Second Order Parabolic Differential Equations}, World Scientific, Singapore, 1996.
\bibitem{LMR} M. Lovri\'c, M. Min-Oo, E. A. Ruh: Deforming transverse Riemannian metrics of foliations, {\em Asian J. Math.} {\bf 4} (2000), no. 2, 303--314.
\bibitem{mabuchiF} T. Mabuchi: K-energy maps integrating Futaki invariants, {\em T\^ohoku Math. J.} {\bf 38} (1986), 575--593.
\bibitem{mabuchi} T. Mabuchi: Some symplectic geometry on compact K\"ahler manifolds. I, {\em Osaka J. Math.} {\bf 24} (1987), no. 2, 227--252.
\bibitem{MS} D. Martelli, J. Sparks: Toric Sasaki-Einstein metrics on $S^2\times S^3$, {\em Phys. Lett. B} {\bf 621} (2005), 208--212.
\bibitem{superstring1} D. Martelli, J. Sparks: Toric geometry, Sasaki-Einstein manifolds and a new infinite class of AdS/CFT duals, {\em Comm. Math. Phys.} {\bf 262} (2006), 51--89.
\bibitem{MSY} D. Martelli, J. Sparks, S.-T. Yau: The geometric dual of $a$-maximisation for toric Sasaki-Einstein manifolds, {\em Comm. Math. Phys.} {\bf 268} (2006), no. 1, 39--65.
\bibitem{superstring2} D. Martelli, J. Sparks, S.-T. Yau: Sasaki-Einstein manifolds and volume minimisation, {\em Comm. Math. Phys.} {\bf 280} (2008), no. 3, 611--673.
\bibitem{sasaki} S. Sasaki: On differentiable manifolds with certain structures which are closely related to almost contact structure, {\em T\^ohoku Math. J.} {\bf 2} (1960), 459--476.
\bibitem{siu} Y. T. Siu: {\em Lectures on Hermitian-Einstein metrics for stable bundles and K\"ahler-Einstein metrics}, DMV Seminar, 8, Birkh\"auser Verlag, Basel, 1987. 171 pp.
\bibitem{smoczyk} K. Smoczyk, G. Wang, Y. Zhang: The Sasaki-Ricci flow, {\em Internat. J. Math.} {\bf 21} (2010), no. 7, 951--969.
\bibitem{songweinkove} J. Song, B. Weinkove: {\em Introduction to the K\"ahler-Ricci flow}, eds. S. Boucksom, P. Eyssidieux, V. Guedj, Lecture Notes in Math. 2086, Springer, 2013.
\bibitem{songweinkoveJ} J. Song, B. Weinkove: On the convergence and singularities of the $J$-flow with applications to the Mabuchi energy, preprint, 2014.
\bibitem{sparks} J. Sparks: Sasaki-Einstein manifolds, {\em Surveys Diff. Geom.} {\bf 16} (2011), 265--324.
\bibitem{st} J. Streets, G. Tian: A parabolic flow of pluriclosed metrics, {\em Int. Math. Res. Not.} IMRN 2010, no. 16, 3101--3133.
\bibitem{tosatti} V. Tosatti, B. Weinkove: Estimates for the complex Monge-Amp\`ere equation on Hermitian and balanced manifolds, {\em Asian J. Math.} {\bf 14} (2010), no. 1, 19--40.
\bibitem{wein1} B. Weinkove: Convergence of the J-flow on K\"ahler surfaces, {\em Comm. Anal. Geom.} {\bf 12} (2004), no. 4, 949--965.
\bibitem{wein2} B. Weinkove: On the J-flow in higher dimensions and the lower boundedness of the Mabuchi energy, {\em J. Differential Geom.} {\bf 73} (2006), no. 2, 351--358.
\end{thebibliography}
\end{document}
\begin{document}
\hypersetup{linkcolor=blue}
\date{\today}
\author{Ray Bai\footremember{USCStat}{Department of Statistics, University of South Carolina, Columbia, SC 29208.}\thanks{Co-first author. Email: \href{mailto:[email protected]}{\tt [email protected]} }, Gemma E. Moran\footremember{Wharton}{Data Science Institute, Columbia University, New York, NY 10027.}\thanks{Co-first author. Email: \href{mailto:[email protected]}{\tt [email protected]} }, Joseph L. Antonelli\footremember{UFStat}{Department of Statistics, University of Florida, Gainesville, FL 32611.}\thanks{Co-first author. Email: \href{mailto:[email protected]}{\tt [email protected]} }, \\
Yong Chen\footremember{DBEI}{Department of Biostatistics, Epidemiology, and Informatics, University of Pennsylvania, Philadelphia, PA 19104.}, Mary R. Boland\footrecall{DBEI}}
\title{Spike-and-Slab Group Lassos for Grouped Regression and Sparse Generalized Additive Models
}
\maketitle
\begin{abstract}
We introduce the spike-and-slab group lasso (SSGL) for Bayesian estimation and variable selection in linear regression with grouped variables. We further extend the SSGL to sparse generalized additive models (GAMs), thereby introducing the first nonparametric variant of the spike-and-slab lasso methodology. Our model simultaneously performs group selection and estimation, while our fully Bayes treatment of the mixture proportion allows for model complexity control and automatic self-adaptivity to different levels of sparsity. We develop theory to uniquely characterize the global posterior mode under the SSGL and introduce a highly efficient block coordinate ascent algorithm for maximum a posteriori (MAP) estimation. We further employ de-biasing methods to provide uncertainty quantification of our estimates. Thus, implementation of our model avoids the computational intensiveness of Markov chain Monte Carlo (MCMC) in high dimensions. We derive posterior concentration rates for both grouped linear regression and sparse GAMs when the number of covariates grows at nearly exponential rate with sample size. Finally, we illustrate our methodology through extensive simulations and data analysis.
\end{abstract}
\section{Introduction} \label{intro}
\subsection{Regression with Grouped Variables} \label{groupedregression}
Group structure arises in many statistical applications. For example, in multifactor analysis of variance, multi-level categorical predictors are each represented by a group of dummy variables. In genomics, genes within the same pathway may form a group at the pathway or gene set level and act in tandem to regulate a biological system. In each of these scenarios, the response $\bm{Y}_{n \times 1}$ can be modeled as a linear regression problem with $G$ groups:
\begin{equation} \label{groupmodel}
\bm{Y} = \displaystyle \sum_{g=1}^{G} \bm{X}_g \bm{\beta}_g + \bm{\varepsilon},
\end{equation}
where $\bm{\varepsilon} \sim \mathcal{N}_n ( \bm{0}, \sigma^2 \bm{I}_n)$, $\bm{\beta}_g$ is a coefficient vector of length $m_g$, and $\bm{X}_g$ is an $n \times m_g$ covariate matrix corresponding to group $g = 1, \ldots, G$. Even in the absence of grouping information about the covariates, the model (\ref{groupmodel}) subsumes a wide class of important nonparametric regression models called \textit{generalized additive models} (GAMs). In GAMs, continuous covariates may be represented by groups of basis functions which have a nonlinear relationship with the response. We defer further discussion of GAMs to Section \ref{NPSSLIntro}.
It is often of practical interest to select groups of variables that are most significantly associated with the response. To facilitate this group-level selection, \citet{YuanLin2006} introduced the group lasso, which solves the optimization problem,
\begin{equation} \label{grouplassoobj}
\displaystyle \argmin_{\bm{\beta}} \frac{1}{2 } \lVert \bm{Y} - \displaystyle \sum_{g=1}^{G} \bm{X}_g \bm{\beta}_g \rVert_2^2 + \lambda \displaystyle \sum_{g=1}^{G} \sqrt{m_g} \lVert \bm{\beta}_g \rVert_2,
\end{equation}
where $|| \cdot ||_2$ is the $\ell_2$ norm. In the frequentist literature, many variants of model (\ref{grouplassoobj}) have been introduced, which use some combination of $\ell_1$ and $\ell_2$ penalties on the coefficients of interest (e.g., \cite{JacobObozinskiVert2009, LiNanZhu2015, SimonFriedmanHastieTibshirani2013}).
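For intuition about the ``all in, all out'' behavior of (\ref{grouplassoobj}), note that under the simplifying assumption that each $\bm{X}_g$ has orthonormal columns, each blockwise minimizer of (\ref{grouplassoobj}) is a groupwise soft-thresholding of $\bm{X}_g^T \bm{r}_g$, where $\bm{r}_g$ is the partial residual excluding group $g$. The following sketch is purely illustrative (the orthonormal-group assumption and all names are ours, not a reference implementation of any cited method):

```python
import numpy as np

def group_lasso_bcd(X_groups, y, lam, n_iter=200):
    """Block coordinate descent for the group lasso objective, assuming each
    X_g has orthonormal columns (X_g^T X_g = I). Illustrative sketch only."""
    betas = [np.zeros(X.shape[1]) for X in X_groups]
    for _ in range(n_iter):
        for g, Xg in enumerate(X_groups):
            # partial residual with group g left out
            r = y - sum(X @ b for h, (X, b) in enumerate(zip(X_groups, betas))
                        if h != g)
            z = Xg.T @ r
            w = lam * np.sqrt(Xg.shape[1])   # lambda * sqrt(m_g)
            nz = np.linalg.norm(z)
            # groupwise soft-thresholding: the whole block is kept or zeroed
            betas[g] = (max(0.0, 1.0 - w / nz) * z) if nz > 0 else np.zeros_like(z)
    return betas
```

With a large $\lambda$ entire groups are zeroed out at once, while with a small $\lambda$ active groups are only mildly shrunk, which is precisely the group-level selection behavior discussed above.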
In the Bayesian framework, selection of relevant groups under model (\ref{groupmodel}) is often done by placing spike-and-slab priors on each of the groups $\bm{\beta}_g$ (e.g., \cite{XuGhosh2015, LiquetMengersenPettittSutton2017, YangNarisetty2019, NingGhosal2018}). These priors typically take the form,
\begin{align} \label{pointmassspikeandslab}
\begin{array}{l}
\pi(\bm{\beta} \vert \bm{\gamma}) = \displaystyle \prod_{g=1}^{G} [ (1-\gamma_g) \delta_0 (\bm{\beta}_g) + \gamma_g \pi(\bm{\beta}_g) ],\\
\pi(\bm{\gamma} | \theta) = \displaystyle \prod_{g=1}^{G} \theta^{\gamma_g} (1-\theta)^{1-\gamma_g}, \\
\theta \sim \pi(\theta),
\end{array}
\end{align}
where $\bm{\gamma}$ is a binary vector that indexes the $2^G$ possible models, $\theta \in (0, 1)$ is the mixing proportion, $\delta_{0}$ is a point mass at $\bm{0}_{m_g} \in \mathbb{R}^{m_g}$ (the ``spike''), and $\pi(\bm{\beta}_g)$ is an appropriate ``slab'' density (typically a multivariate normal distribution or a scale-mixture multivariate normal density). With a well-chosen prior on $\theta$, this model will favor parsimonious models in very high dimensions, thus avoiding the curse of dimensionality.
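As a concrete illustration of (\ref{pointmassspikeandslab}), one draw from the prior can be simulated by first sampling the inclusion indicators $\gamma_g$ and then drawing the included groups from the slab. The sketch below assumes an isotropic normal slab, one common concrete choice mentioned above; the function and its defaults are ours:

```python
import numpy as np

def sample_spike_slab(m_groups, theta, slab_sd=1.0, rng=None):
    """One draw of (beta_1, ..., beta_G) from the point-mass prior, using an
    isotropic N(0, slab_sd^2 I) slab (an assumed concrete choice)."""
    rng = np.random.default_rng() if rng is None else rng
    draws = []
    for m_g in m_groups:
        gamma_g = rng.random() < theta                # gamma_g ~ Bernoulli(theta)
        if gamma_g:
            draws.append(slab_sd * rng.standard_normal(m_g))  # slab draw
        else:
            draws.append(np.zeros(m_g))               # spike: point mass at 0
    return draws
```

Averaging over many draws, the fraction of nonzero groups concentrates around $\theta$, which is why a well-chosen prior on $\theta$ controls model size.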
\subsection{The Spike-and-Slab Lasso} \label{spikeandslablasso}
For Bayesian variable selection, point mass spike-and-slab priors (\ref{pointmassspikeandslab}) are interpretable, but they are computationally intractable in high dimensions, due in large part to the combinatorial complexity of updating the discrete indicators $\bm{\gamma}$. As an alternative, fully continuous variants of spike-and-slab models have been developed. For continuous spike-and-slab models, the point mass spike $\delta_{0}$ is replaced by a continuous density heavily concentrated around $\bm{0}_{m_g}$. This not only mimics the point mass but it \textit{also} facilitates more efficient computation, as we describe later.
In the context of sparse normal means estimation and univariate linear regression, \citet{Rockova2018} and \citet{RockovaGeorge2018} introduced the univariate spike-and-slab lasso (SSL). The SSL places a mixture prior of two Laplace densities on the individual coordinates $\beta_j$, i.e.
\begin{equation} \label{SSlasso}
\pi( \bm{\beta} | \theta) = \displaystyle \prod_{j=1}^{p} [( 1 - \theta) \psi (\beta_j | \lambda_0 ) + \theta \psi (\beta_j | \lambda_1 )],
\end{equation}
where $\theta \in (0,1)$ is the mixing proportion and $\psi(\cdot | \lambda )$ denotes a univariate Laplace density indexed by hyperparameter $\lambda$, i.e. $\psi(\beta | \lambda ) = \frac{\lambda}{2} e^{-\lambda | \beta | }$. Typically, we set $\lambda_0 \gg \lambda_1$ so that the spike is heavily concentrated about zero. Unlike \eqref{pointmassspikeandslab}, the SSL model \eqref{SSlasso} does not place any mass on exactly sparse vectors. Nevertheless, the global posterior mode under the SSL prior may be exactly sparse. Meanwhile, the slab stabilizes posterior estimates of the larger coefficients so they are not downward biased. Thus, the SSL posterior mode can be used to perform variable selection and estimation simultaneously.
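To see numerically how the mixture (\ref{SSlasso}) separates small from large coefficients, one can evaluate the conditional probability that a coordinate was generated by the spike component, which is available in closed form from the two Laplace densities. The sketch below (our notation) computes the log prior and this spike weight in a numerically stable way:

```python
import numpy as np

def log_laplace(b, lam):
    """Log of the Laplace density psi(b | lam) = (lam/2) exp(-lam |b|)."""
    return np.log(lam / 2.0) - lam * np.abs(b)

def ssl_log_prior(beta, theta, lam0, lam1):
    """Log of the SSL mixture prior, summed over coordinates."""
    return np.logaddexp(np.log1p(-theta) + log_laplace(beta, lam0),
                        np.log(theta) + log_laplace(beta, lam1)).sum()

def spike_weight(beta_j, theta, lam0, lam1):
    """Conditional probability that beta_j was drawn from the spike."""
    a = np.log1p(-theta) + log_laplace(beta_j, lam0)  # spike term
    b = np.log(theta) + log_laplace(beta_j, lam1)     # slab term
    return float(np.exp(a - np.logaddexp(a, b)))
```

With $\lambda_0 \gg \lambda_1$, coordinates near zero are attributed to the spike with probability close to one, while large coordinates are attributed to the slab, which is the mechanism behind the selective shrinkage described above.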
The spike-and-slab lasso methodology has now been adopted for a wide number of statistical problems. Apart from univariate linear regression, it has been used for factor analysis \cite{RockovaGeorge2016, MoranRockovaGeorge2019}, multivariate regression \cite{DeshpandeRockovaGeorge2018}, covariance/precision matrix estimation \cite{DeshpandeRockovaGeorge2018, GanNarisettyLiang2018, LiMcCormickClark2019}, causal inference \cite{AntonelliParmigianiDominici2019}, generalized linear models (GLMs) \cite{TangShenZhangYi2017GLM, TangShenLiZhangWenQianZhuangShiYi2018}, and Cox proportional hazards models \cite{TangShenZhangYi2017Cox}.
While the SSL (\ref{SSlasso}) induces sparsity on individual coefficients (through the posterior mode), it does not account for group structure of covariates. For inference with structured data in GLMs, \citet{TangShenLiZhangWenQianZhuangShiYi2018} utilized the univariate spike-and-slab lasso prior \eqref{SSlasso} for grouped data where each group had a group-specific sparsity-inducing parameter, $\theta_g$, instead of a single $\theta$ for all coefficients. However, this univariate SSL prior does not feature the ``all in, all out'' selection property of the original group lasso of \citet{YuanLin2006} or the \emph{grouped} and \emph{multivariate} SSL prior, which we develop in this work.
In this paper, we introduce the \textit{spike-and-slab group lasso} (SSGL) for Bayesian grouped regression and variable selection. Under the SSGL prior, the global posterior mode is exactly sparse, thereby allowing the mode to automatically threshold out insignificant groups of coefficients. To widen the use of spike-and-slab lasso methodology for situations where the linear model is too inflexible, we extend the SSGL to sparse generalized additive models by introducing the \textit{nonparametric spike-and-slab lasso} (NPSSL). To our knowledge, our work is the first to apply the spike-and-slab lasso methodology outside of a parametric setting. Our contributions can be summarized as follows:
\begin{enumerate}
\item We propose a new group spike-and-slab prior for estimation and variable selection in both parametric and nonparametric settings. Unlike frequentist methods which rely on separable penalties, our model has a \textit{non}-separable and self-adaptive penalty which allows us to automatically adapt to ensemble information about sparsity.
\item We introduce a highly efficient block coordinate ascent algorithm for global posterior mode estimation. This allows us to rapidly identify significant groups of coefficients, while thresholding out insignificant ones.
\item We show that de-biasing techniques that have been used for the original lasso \citep{Tibshirani1996} can be extended to our SSGL model to provide valid inference on the estimated regression coefficients.
\item For both grouped regression and sparse additive models, we derive near-optimal posterior contraction rates for both the regression coefficients $\bm{\beta}$ \textit{and} the unknown variance $\sigma^2$ under the SSGL prior.
\end{enumerate}
The rest of the paper is structured as follows. In
Section \ref{SSGroupLassoIntro}, we introduce the spike-and-slab group lasso (SSGL). In Section \ref{optimizationSSGL}, we characterize the global posterior mode and introduce efficient algorithms for fast maximum \emph{a posteriori} (MAP) estimation and variable selection. In Section \ref{sec:inference}, we utilize ideas from the de-biased lasso to perform inference on the SSGL model. In Section \ref{NPSSLIntro}, we extend the SSGL to nonparametric settings by proposing the nonparametric spike-and-slab lasso (NPSSL). In Section \ref{asymptotictheory}, we present asymptotic theory for the SSGL and the NPSSL. Finally, in Sections \ref{Simulations} and \ref{dataanalysis}, we provide extensive simulation studies and use our models to analyze real data sets.
\subsection{Notation}
We use the following notations. For two nonnegative sequences $\{ a_n \}$ and $\{ b_n \}$, we write $a_n \asymp b_n$ to denote $0 < \lim \inf_{n \rightarrow \infty} a_n/b_n \leq \lim \sup_{n \rightarrow \infty} a_n/b_n < \infty$. If $\lim_{n \rightarrow \infty} a_n/b_n = 0$, we write $a_n = o(b_n)$ or $a_n \prec b_n$. We use $a_n \lesssim b_n$ or $a_n = O(b_n)$ to denote that for sufficiently large $n$, there exists a constant $C >0$ independent of $n$ such that $a_n \leq Cb_n$. For a vector $\bm{v} \in \mathbb{R}^p$, we let $\lVert \bm{v} \rVert_1 := \sum_{i=1}^p |v_i|$, $\lVert \bm{v} \rVert_2 := \sqrt{ \sum_{i=1}^p v_i^2}$, and $\lVert \bm{v} \rVert_{\infty} := \max_{1 \leq i \leq p} | v_i |$ denote the $\ell_1$, $\ell_2$, and $\ell_{\infty}$ norms respectively. For a symmetric matrix $\bm{A}$, we let $\lambda_{\min} (\bm{A})$ and $\lambda_{\max} (\bm{A})$ denote its minimum and maximum eigenvalues.
\section{The Spike-and-Slab Group Lasso} \label{SSGroupLassoIntro}
Let $\bm{\beta}_g$ denote a real-valued vector of length $m_g$. We define the \textit{group lasso density} as
\begin{equation} \label{grouplassoprior}
\bm{\Psi} ( \bm{\beta}_g | \lambda ) = C_g \lambda^{m_g} \exp \left( - \lambda \lVert \bm{\beta}_g \rVert_2 \right),
\end{equation}
where $C_g = 2^{-m_g} \pi^{-(m_g-1)/2} \left[ \mathcal{G}amma \left( (m_g+1)/2 \right) \right]^{-1}$. This prior has been previously considered by \cite{KyungGillGhoshCasella2010, XuGhosh2015} for Bayesian inference in the grouped regression model (\ref{groupmodel}). \citet{KyungGillGhoshCasella2010} considered a single prior (\ref{grouplassoprior}) on each of the $\bm{\beta}_g$'s, while \citet{XuGhosh2015} employed (\ref{grouplassoprior}) as the slab in the point-mass mixture (\ref{pointmassspikeandslab}). These authors implemented their models using MCMC.
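As a sanity check on the constant $C_g$, the density (\ref{grouplassoprior}) integrates to one over $\mathbb{R}^{m_g}$, which can be verified numerically in spherical coordinates. The following sketch (names ours) is illustrative only:

```python
import math
import numpy as np

def log_group_lasso_density(beta_g, lam):
    """Log of the group lasso density, including the constant C_g."""
    m = len(beta_g)
    log_Cg = (-m * math.log(2.0) - 0.5 * (m - 1) * math.log(math.pi)
              - math.lgamma((m + 1) / 2.0))
    return log_Cg + m * math.log(lam) - lam * float(np.linalg.norm(beta_g))

def total_mass(m, lam, r_max=80.0, n=200_000):
    """Integrate the density over R^m in spherical coordinates:
    C_g lam^m * S_{m-1} * int_0^inf r^{m-1} exp(-lam r) dr,
    where S_{m-1} = 2 pi^{m/2} / Gamma(m/2) is the unit-sphere area."""
    log_Cg = (-m * math.log(2.0) - 0.5 * (m - 1) * math.log(math.pi)
              - math.lgamma((m + 1) / 2.0))
    surface = 2.0 * math.pi ** (m / 2.0) / math.gamma(m / 2.0)
    r = np.linspace(1e-12, r_max, n)
    f = r ** (m - 1) * np.exp(-lam * r)
    integral = float(np.sum((f[:-1] + f[1:]) * np.diff(r)) / 2.0)  # trapezoid rule
    return math.exp(log_Cg) * lam ** m * surface * integral
```

For $m_g = 1$ the density reduces to the univariate Laplace density $\frac{\lambda}{2}e^{-\lambda|\beta|}$ used in the SSL of Section \ref{spikeandslablasso}.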
In this manuscript, we introduce a \textit{continuous} spike-and-slab prior with the group lasso density (\ref{grouplassoprior}) for both the spike \textit{and} the slab. The continuous nature of our prior is critical in facilitating efficient coordinate ascent algorithms for MAP estimation that allow us to bypass the use of MCMC. Letting $\bm{\beta} = ( \bm{\beta}_1^T, \ldots, \bm{\beta}_G^T)^T$ under model (\ref{groupmodel}), the \textit{spike-and-slab group lasso} (SSGL) is defined as:
\begin{equation} \label{ssgrouplasso}
\pi (\bm{\beta} | \theta ) = \displaystyle \prod_{g=1}^{G} \left[ (1- \theta) \bm{\Psi} ( \bm{\beta}_g | \lambda_0 ) + \theta \bm{\Psi} ( \bm{\beta}_g | \lambda_1 ) \right],
\end{equation}
where $\bm{\Psi}( \cdot | \lambda)$ denotes the group lasso density (\ref{grouplassoprior}) indexed by hyperparameter $\lambda$, and $\theta \in (0, 1)$ is a mixing proportion. $\lambda_0$ corresponds to the spike which shrinks the entire vector $\bm{\beta}_g$ towards $\bm{0}_{m_g}$, while $\lambda_1$ corresponds to the slab. For shorthand notation, we denote $\bm{\Psi} ( \bm{\beta}_g | \lambda_0 )$ as $\bm{\Psi}_0 (\bm{\beta}_g)$ and $\bm{\Psi} ( \bm{\beta}_g | \lambda_1 )$ as $\bm{\Psi}_1 (\bm{\beta}_g)$ going forward.
Under the grouped regression model (\ref{groupmodel}), we place the SSGL prior (\ref{ssgrouplasso}) on $\bm{\beta}$. In accordance with the recommendations of \cite{MoranRockovaGeorge2018}, we do not scale our prior by the unknown $\sigma$. Instead, we place an independent Jeffreys prior on $\sigma^2$, i.e.
\begin{equation} \label{jeffreys}
\pi(\sigma^2) \propto \sigma^{-2}.
\end{equation}
The mixing proportion $\theta$ in (\ref{ssgrouplasso}) can either be fixed deterministically or endowed with a prior $\theta \sim \pi(\theta)$. We will discuss this in detail in Section \ref{optimizationSSGL}.
\section{Characterization and Computation of the Global Posterior Mode}\label{optimizationSSGL}
Throughout this section, we let $p$ denote the total number of covariates, i.e. $p = \sum_{g=1}^{G} m_g$. Our goal is to find the maximum \emph{a posteriori} estimates of the regression coefficients $\bm{\beta} \in \mathbb{R}^p$. This optimization problem is equivalent to a penalized likelihood method in which the logarithm of the prior \eqref{ssgrouplasso} may be reinterpreted as a penalty on the regression coefficients. Similarly to \citet{RockovaGeorge2018}, we will leverage this connection between the Bayesian and frequentist paradigms and introduce the SSGL penalty. This strategy combines the adaptivity of the Bayesian approach with the computational efficiency of existing algorithms in the frequentist literature.
A key component of the SSGL model is $\theta$, the prior expected proportion of groups with large coefficients. Ultimately, we will pursue a fully Bayes approach and place a prior on $\theta$, allowing the SSGL to adapt to the underlying sparsity of the data and perform an automatic multiplicity adjustment \cite{SB10}. For ease of exposition, however, we will first consider the case where $\theta$ is fixed, echoing the development of \citet{RockovaGeorge2018}. In this situation, the regression coefficients $\bm{\beta}_g$ are conditionally independent \emph{a priori}, resulting in a separable SSGL penalty. Later we will consider the fully Bayes approach, which will yield the \emph{non-separable} SSGL penalty.
\begin{definition}
Given $\theta \in (0, 1)$, the separable SSGL penalty is defined as
\begin{align} \label{penSbetag}
pen_S(\bm{\beta}|\theta) &= \log\left[\frac{\pi(\bm{\beta}|\theta)}{\pi(\mathbf{0}_p|\theta)}\right] = -\lambda_1\sum_{g =1}^G \lVert \bm{\beta}_g\rVert_2 + \sum_{g =1}^G \log\left[\frac{p^*_{\theta}(\mathbf{0}_{m_g})}{p^*_{\theta}(\bm{\beta}_g)}\right] \numbereqn
\end{align}
where
\begin{align}
p_{\theta}^*(\bm{\beta}_g) = \frac{\theta \bm{\Psi}_1(\bm{\beta}_g)}{\theta\bm{\Psi}_1(\bm{\beta}_g) + (1-\theta)\bm{\Psi}_0(\bm{\beta}_g)}.
\end{align}
\end{definition}
The separable SSGL penalty is almost the logarithm of the original prior \eqref{ssgrouplasso}; the only modification is an additive constant to ensure that $pen_S(\mathbf{0}_p|\theta) = 0$. The connection between the SSGL and penalized likelihood methods is made clearer when considering the derivative of the separable SSGL penalty, given in the following lemma.
\begin{lemma} \label{derivativeseparableSSGL}
The derivative of the separable SSGL penalty satisfies
\begin{align}
\frac{\partial pen_S(\bm{\beta}|\theta)}{\partial \lVert \bm{\beta}_g\rVert_2} = -\lambda_{\theta}^*(\bm{\beta}_g)
\end{align}
where
\begin{align}
\lambda^*_{\theta}(\bm{\beta}_g) = \lambda_1p_{\theta}^*(\bm{\beta}_g) + \lambda_0[1-p_{\theta}^*(\bm{\beta}_g)].
\end{align}
\end{lemma}
Similarly to the SSL, the adaptive regularization parameter $\lambda^*_{\theta}(\bm{\beta}_g)$ is a weighted average of the two regularization parameters, $\lambda_1$ and $\lambda_0$. The weight $p^*_{\theta}(\bm{\beta}_g)$ is the conditional probability that $\bm{\beta}_g$ was drawn from the slab distribution rather than the spike. Hence, the SSGL applies a different amount of shrinkage to each group, unlike the group lasso, which applies the same amount of shrinkage to every group.
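As an illustration, the adaptive weight and regularization parameter can be evaluated directly from the two group lasso densities. The following Python sketch is ours (the function names and hyperparameter values are illustrative); the normalizing constants $C_g$ depend only on the group size and cancel in the ratio:

```python
import numpy as np

def p_star(beta_g, theta, lam0, lam1):
    """Conditional probability that beta_g came from the slab.
    The group lasso density is Psi(b | lam) = C_g * lam^m * exp(-lam * ||b||_2);
    C_g depends only on the group size m, so it cancels in the ratio."""
    m = len(beta_g)
    norm = np.linalg.norm(beta_g)
    # work on the log scale to avoid underflow when lam0 is large
    log_slab = np.log(theta) + m * np.log(lam1) - lam1 * norm
    log_spike = np.log(1.0 - theta) + m * np.log(lam0) - lam0 * norm
    return 1.0 / (1.0 + np.exp(log_spike - log_slab))

def lam_star(beta_g, theta, lam0, lam1):
    """Adaptive regularization: a p*-weighted average of lam1 and lam0."""
    p = p_star(beta_g, theta, lam0, lam1)
    return lam1 * p + lam0 * (1.0 - p)

# a group far from zero is shrunk with the small slab parameter lam1,
# while a group near zero is shrunk aggressively with lam0
big, small = np.array([2.0, -1.5]), np.array([0.01, 0.02])
print(lam_star(big, 0.5, 20.0, 1.0))    # close to lam1 = 1
print(lam_star(small, 0.5, 20.0, 1.0))  # close to lam0 = 20
```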
\subsection{The Global Posterior Mode} \label{globalmodetheorems}
Similarly to the group lasso \citep{YuanLin2006}, the separable nature of the penalty \eqref{penSbetag} lends itself naturally to a block coordinate ascent algorithm which cycles through the groups. In this section, we first outline the group updates resulting from the Karush-Kuhn-Tucker (KKT) conditions. The KKT conditions provide necessary conditions for the global posterior mode. We then derive a more refined condition for the global mode to aid in optimization for multimodal posteriors.
Following \citet{HBM12}, we assume that within each group, covariates are orthonormal, i.e. $\bm{X}_g^T\bm{X}_g = n\bm{I}_{m_g}$ for $g= 1,\dots, G$. If this assumption does not hold, then the $\bm{X}_g$ matrices can be orthonormalized before fitting the model. As noted by \citet{BrehenyHuang2015}, orthonormalization can be done without loss of generality since the resulting solution can be transformed back to the original scale.
\begin{proposition}
The necessary conditions for $\widehat{\bm{\beta}} = (\widehat{\bm{\beta}}_1^T, \dots, \widehat{\bm{\beta}}_G^T)^T$ to be a global mode are:
\begin{align}
\bm{X}^T_g(\bm{Y} - \bm{X}\widehat{\bm{\beta}}) = \sigma^2\lambda_{\theta}^*(\widehat{\bm{\beta}}_g)\frac{\widehat{\bm{\beta}}_g}{\lVert \widehat{\bm{\beta}}_g\rVert_2} \quad &\text{for} \quad \widehat{\bm{\beta}}_g \neq \mathbf{0}_{m_g},\\
\lVert \bm{X}_g^T(\bm{Y} - \bm{X}\widehat{\bm{\beta}}) \rVert_2 \leq \sigma^2\lambda_{\theta}^*(\widehat{\bm{\beta}}_g) \quad &\text{for}\quad \widehat{\bm{\beta}}_g = \mathbf{0}_{m_g}.
\end{align}
Equivalently,
\begin{align}
\widehat{\bm{\beta}}_g = \frac{1}{n}\left(1-\frac{\sigma^2\lambda_{\theta}^*(\widehat{\bm{\beta}}_g)}{\lVert \bm{z}_g\rVert_2}\right)_+\bm{z}_g \label{soft_thresh}
\end{align}
where $\bm{z}_g = \bm{X}_g^T\left[\bm{Y} - \sum_{l\neq g}\bm{X}_l\widehat{\bm{\beta}}_l\right].$
\end{proposition}
\begin{proof}
Follows immediately from Lemma \ref{derivativeseparableSSGL} and subdifferential calculus.
\end{proof}
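The update \eqref{soft_thresh} is a group-wise soft-thresholding operation. A minimal sketch in Python, assuming the orthonormality condition $\bm{X}_g^T\bm{X}_g = n\bm{I}_{m_g}$ and treating $\lambda^*_{\theta}(\widehat{\bm{\beta}}_g)$ as a precomputed scalar (the function name is ours):

```python
import numpy as np

def ssgl_group_update(z_g, lam_star_g, sigma2, n):
    """KKT block update: beta_g <- (1/n) (1 - sigma2*lam_star_g/||z_g||)_+ z_g,
    where z_g = X_g^T (Y - sum_{l != g} X_l beta_l) is the partial residual."""
    norm_z = np.linalg.norm(z_g)
    if norm_z == 0.0:
        return np.zeros_like(z_g)
    shrink = max(0.0, 1.0 - sigma2 * lam_star_g / norm_z)
    return shrink * z_g / n

z = np.array([3.0, 4.0])                   # ||z|| = 5
print(ssgl_group_update(z, 2.0, 1.0, 1))   # shrunk toward zero: [1.8, 2.4]
print(ssgl_group_update(z, 10.0, 1.0, 1))  # thresholded exactly to [0., 0.]
```

Note that the entire group is either shrunk as a whole or set to zero as a whole, which is precisely the group-selection behavior the prior encodes.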
The above characterization of the global mode is necessary, but not sufficient. A more refined characterization may be obtained by considering the group-wise optimization problem, noting that the global mode also maximizes the objective with respect to the $g$th group when all other groups are held fixed.
\begin{proposition} \label{globalmodeseparable}
The global mode $\widehat{\bm{\beta}}_g = \mathbf{0}_{m_g}$ if and only if $\lVert \bm{z}_g \rVert_2 \leq \Delta$, where
\begin{align}
\Delta = \inf_{\bm{\beta}_g} \left\{ \frac{n\lVert \bm{\beta}_g \rVert_2}{2} - \frac{\sigma^2pen_S(\bm{\beta}|\theta)}{\lVert \bm{\beta}_g \rVert_2} \right\}.
\end{align}
\end{proposition}
The proof for Proposition \ref{globalmodeseparable} can be found in Appendix \ref{App:D2}. Unfortunately, the threshold $\Delta$ is difficult to compute, so we instead approximate it. A simple upper bound follows from the soft-threshold solution \eqref{soft_thresh}: $\Delta \leq \sigma^2\lambda^*_{\theta}(\bm{\beta}_g)$. However, when $\lambda_0$ is large, this bound may be improved. Similarly to \citet{RockovaGeorge2018}, we provide improved bounds on the threshold in Theorem \ref{delta_bounds}. This result requires the function $h: \mathbb{R}^{m_g} \to \mathbb{R}$, defined as:
\begin{align*}
h(\bm{\beta}_g) = [\lambda_{\theta}^*(\bm{\beta}_g) - \lambda_1]^2 + \frac{2n}{\sigma^2}\log p_{\theta}^*(\bm{\beta}_g).
\end{align*}
\begin{theorem}\label{delta_bounds}
When $(\lambda_0 - \lambda_1) > 2\sqrt{n}/\sigma$ and $h(\mathbf{0}_{m_g})> 0$, the threshold $\Delta$ is bounded by:
\begin{align}
\Delta^L < \Delta < \Delta^U
\end{align}
where
\begin{align}
\Delta^L &= \sqrt{2n \sigma^2\log[1/p_{\theta}^*(\mathbf{0}_{m_g})] - \sigma^4d} + \sigma^2\lambda_1, \\
\Delta^U &= \sqrt{2n \sigma^2\log[1/p_{\theta}^*(\mathbf{0}_{m_g})] } + \sigma^2\lambda_1,\label{delta_u}
\end{align}
and
\begin{align}
0 < d < \frac{2n}{\sigma^2} - \left(\frac{n}{\sigma^2(\lambda_0-\lambda_1)} - \frac{\sqrt{2n}}{\sigma}\right)^2
\end{align}
\end{theorem}
When $\lambda_0$ is large, $d\to 0$ and the lower bound on the threshold approaches the upper bound, yielding the approximation $\Delta \approx \Delta^U$. We will ultimately use this approximation in our block coordinate ascent algorithm.
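Since $p_{\theta}^*(\mathbf{0}_{m_g})$ has a closed form (at the origin, the exponential terms of both the spike and the slab densities equal one), $\Delta^U$ is cheap to evaluate. A hypothetical helper, sketched in Python under our own naming:

```python
import numpy as np

def delta_U(theta, lam0, lam1, m_g, sigma2, n):
    """Upper bound Delta^U = sqrt(2 n sigma^2 log(1/p*_theta(0))) + sigma^2 * lam1,
    used as an approximation to the selection threshold when lam0 is large."""
    # log p*_theta(0): at the zero vector the exponential terms equal 1,
    # so only theta and the lam^{m_g} factors remain
    log_p0 = (np.log(theta) + m_g * np.log(lam1)) - np.logaddexp(
        np.log(theta) + m_g * np.log(lam1),
        np.log(1.0 - theta) + m_g * np.log(lam0),
    )
    return np.sqrt(-2.0 * n * sigma2 * log_p0) + sigma2 * lam1

# the threshold grows with lam0: a sharper spike demands stronger evidence
# (a larger ||z_g||) before a group enters the model
print(delta_U(0.5, 20.0, 1.0, m_g=2, sigma2=1.0, n=100))
print(delta_U(0.5, 40.0, 1.0, m_g=2, sigma2=1.0, n=100))
```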
\subsection{The Non-Separable SSGL penalty}\label{NS_SSGL}
As discussed earlier, a key reason for adopting a Bayesian strategy is that it allows the model to borrow information across groups and self-adapt to the true underlying sparsity in the data. This is achieved by placing a prior on $\theta$, the proportion of groups with non-zero coefficients. We now outline this fully Bayes strategy and the resulting \emph{non-separable} SSGL penalty. With the inclusion of the prior $\theta \sim \pi(\theta)$, the marginal prior for the regression coefficients has the following form:
\begin{align}
\pi(\bm{\beta}) &= \int_0^1 \prod_{g=1}^G[\theta \bm{\Psi}_1(\bm{\beta}_g) + (1-\theta) \bm{\Psi}_0(\bm{\beta}_g)] d\pi(\theta) \\
&= \left( \prod_{g=1}^{G} C_g \lambda_1^{m_g} \right) e^{-\lambda_1\sum_{g=1}^G\lVert\bm{\beta}_g\rVert_2} \int_0^1 \frac{\theta^G}{\prod_{g=1}^G p_{\theta}^*(\bm{\beta}_g)} d\pi(\theta). \label{marginal_ns}
\end{align}
The non-separable SSGL penalty is then defined similarly to the separable penalty, where again we have centered the penalty to ensure $pen_{NS}(\mathbf{0}_p) = 0$.
\begin{definition}
The non-separable SSGL (NS-SSGL) penalty with $\theta \sim \pi(\theta)$ is defined as
\begin{align} \label{penNSbeta}
pen_{NS}(\bm{\beta}) &= \log \left[ \frac{\pi(\bm{\beta})}{\pi(\mathbf{0}_p)}\right] = -\lambda_1\sum_{g=1}^G \lVert \bm{\beta}_g\rVert_2 + \log\left[ \frac{\int_0^1 \theta^G/\prod_{g=1}^G p_{\theta}^*(\bm{\beta}_g) d\pi(\theta)}{\int_0^1 \theta^G/\prod_{g=1}^G p_{\theta}^*(\mathbf{0}_{m_g}) d\pi(\theta)}\right]. \numbereqn
\end{align}
\end{definition}
Although the penalty \eqref{penNSbeta} appears intractable, intuition is again obtained by considering its derivative. Following the same line of argument as \citet{RockovaGeorge2018}, the derivative of \eqref{penNSbeta} is given in the following lemma.
\begin{lemma}
\begin{align}
\frac{\partial pen_{NS}(\bm{\beta})}{\partial \lVert \bm{\beta}_g \rVert_2} = -\lambda^*(\bm{\beta}_g; \bm{\beta}_{\backslash g}),
\end{align}
where
\begin{align}
\lambda^*(\bm{\beta}_g; \bm{\beta}_{\backslash g}) = p^*(\bm{\beta}_g;\bm{\beta}_{\backslash g})\lambda_1 + [1-p^*(\bm{\beta}_g;\bm{\beta}_{\backslash g})]\lambda_0
\end{align}
and
\begin{align}
p^*(\bm{\beta}_g;\bm{\beta}_{\backslash g}) \equiv p^*_{\theta_g}(\bm{\beta}_g), \quad \text{with} \quad \theta_g = \mathbb{E}[\theta|\bm{\beta}_{\backslash g}].
\end{align}
\end{lemma}
That is, the marginal prior from \eqref{marginal_ns} is rendered tractable by considering each group of regression coefficients separately, conditional on the remaining coefficients. Such a conditional strategy is motivated by the group-wise updates for the separable penalty considered in the previous section. Thus, our optimization strategy for the non-separable penalty will be very similar to the separable case, except instead of a fixed value for $\theta$, we will impute the mean of $\theta$ conditioned on the remaining regression coefficients.
We now consider the form of the conditional mean, $\mathbb{E}[\theta|\widehat{\bm{\beta}}_{\backslash g}]$. As noted by \citet{RockovaGeorge2018}, when the number of groups is large, this conditional mean can be replaced by $\mathbb{E}[\theta|\widehat{\bm{\beta}}]$; we will proceed with the same approximation. For the prior on $\theta$, we will use the standard beta prior $\theta\sim \mathcal{B}(a, b)$. With the choices $a = 1$ and $b = G$ for these hyperparameters, this prior results in an automatic multiplicity adjustment for the regression coefficients \cite{SB10}.
We now examine the conditional distribution $\pi(\theta|\widehat{\bm{\beta}})$. Suppose that the number of groups with non-zero coefficients is $\widehat{q}$, and assume without loss of generality that the first $\widehat{q}$ groups have non-zero coefficients. Then,
\begin{align}
\pi(\theta|\widehat{\bm{\beta}}) \propto \theta^{a-1}(1-\theta)^{b-1}(1-\theta z)^{G-\widehat{q}}\prod_{g=1}^{\widehat{q}}(1-\theta x_g), \label{theta_posterior}
\end{align}
with $z = 1-\frac{\lambda_1}{\lambda_0}$ and $x_g = (1-\frac{\lambda_1}{\lambda_0}e^{\lVert \widehat{\bm{\beta}}_g\rVert_2(\lambda_0-\lambda_1)})$. Similarly to \citet{RockovaGeorge2018}, this distribution is a generalization of the Gauss hypergeometric distribution. Consequently, the expectation may be written as
\begin{align}
\mathbb{E}[\theta |\widehat{\bm{\beta}}] = \frac{\int_0^1 \theta^a (1-\theta)^{b-1}(1-\theta z)^{G-\widehat{q} }\prod_{g=1}^{\widehat{q}} (1-\theta x_g)d\theta}{ \int_0^1 \theta^{a-1} (1-\theta)^{b-1}(1-\theta z)^{G-\widehat{q} }\prod_{g=1}^{\widehat{q}} (1-\theta x_g) d\theta}. \label{theta_expectation}
\end{align}
While the above expression \eqref{theta_expectation} appears laborious to compute, it admits a much simpler form when $\lambda_0$ is very large. Using a slight modification to the arguments of \cite{RG16_abel}, we obtain this simpler form in Lemma \ref{theta_mean_lemma}.
\begin{lemma} \label{condexpectationlemma}
Assume $\pi(\theta|\widehat{\bm{\beta}})$ is distributed according to \eqref{theta_posterior}. Let $\widehat{q}$ be the number of groups with non-zero coefficients. Then as $\lambda_0 \to \infty$,
\begin{align}
\mathbb{E}[\theta|\widehat{\bm{\beta}}] = \frac{a + \widehat{q} }{a + b + G}.\label{theta_mean}
\end{align}
\label{theta_mean_lemma}
\end{lemma}
The proof for Lemma \ref{condexpectationlemma} is in Appendix \ref{App:D2}. We note that the expression \eqref{theta_mean} is essentially the usual posterior mean of $\theta$ under a beta prior. Intuitively, as $\lambda_0$ diverges, the weights $p_{\theta}^*(\bm{\beta}_g)$ concentrate at zero and one, yielding the familiar form for $\mathbb{E}[\theta|\widehat{\bm{\beta}}]$. With this in hand, we are now in a position to outline the block coordinate ascent algorithm for the non-separable SSGL.
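In code, the limiting update \eqref{theta_mean} amounts to counting the nonzero groups; a one-function sketch (the name `update_theta` is ours, not from the paper's algorithm listing):

```python
def update_theta(beta_groups, a, b, G):
    """Limiting conditional mean E[theta | beta] = (a + q_hat)/(a + b + G),
    where q_hat counts the groups with nonzero coefficients."""
    q_hat = sum(1 for bg in beta_groups if any(x != 0.0 for x in bg))
    return (a + q_hat) / (a + b + G)

# two of three groups are active; with a = 1, b = G = 3 this gives 3/7
print(update_theta([[0.0, 0.0], [1.0, 0.0], [0.5, 2.0]], a=1, b=3, G=3))
```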
\subsection{Optimization} \label{optimizationalgorithm}
The KKT conditions for the non-separable SSGL penalty yield the following necessary condition for the global mode:
\begin{align}
\widehat{\bm{\beta}}_g \leftarrow \frac{1}{n} \left(1-\frac{\sigma^2\lambda_{\widehat{\theta}}^*(\widehat{\bm{\beta}}_g)}{\lVert \bm{z}_g\rVert_2}\right)_+\bm{z}_g, \label{soft_thresh_ns}
\end{align}
where $\bm{z}_g = \bm{X}_g^T\left[\bm{Y} - \sum_{l\neq g}\bm{X}_l\widehat{\bm{\beta}}_l\right]$ and $\widehat{\theta}$ is the mean \eqref{theta_mean}, conditioned on the previous value of $\bm{\beta}$. As before, \eqref{soft_thresh_ns} is sufficient for a local mode, but not the global mode. When $p \gg n$ and $\lambda_0$ is large, the posterior will be highly multimodal. As in the separable case, we require a refined thresholding scheme that will eliminate some of these suboptimal local modes from consideration. In approximating the group-wise conditional mean $\mathbb{E}[\theta|\widehat{\bm{\beta}}_{\backslash g}] $ with $\mathbb{E}[\theta|\widehat{\bm{\beta}}]$, we do not require group-specific thresholds. Instead, we can use the threshold given in Proposition \ref{globalmodeseparable} and Theorem \ref{delta_bounds} where $\theta$ is replaced with the current update \eqref{theta_mean}. In particular, we shall use the upper bound $\Delta^U$ in our block coordinate ascent algorithm.
Similarly to \citet{RockovaGeorge2018}, we combine the refined threshold $\Delta^U$ with the soft thresholding operation \eqref{soft_thresh_ns} to yield the following update for $\widehat{\bm{\beta}}_g$ at iteration $k$:
\begin{align}
{\bm{\beta}}_{g}^{(k)} \leftarrow \frac{1}{n}\left(1-\frac{\sigma^{2(k)} \lambda^*({\bm{\beta}}_{g}^{(k - 1)}; {\theta}^{(k)} )}{\lVert \bm{z}_g\rVert_2}\right)_+\bm{z}_g \ \mathbb{I}(\lVert \bm{z}_g\rVert_2 > \Delta^U)
\end{align}
where $\theta^{(k)}= \mathbb{E}[\theta|\bm{\beta}^{(k-1)}]$. Technically, $\theta$ should be updated after each group $\bm{\beta}_g$ is updated. In practice, however, there will be little change after one group is updated and so we will update both $\theta$ and $\Delta^U$ after every $M$ iterations with a default value of $M = 10$.
With the Jeffreys prior $\pi(\sigma^2) \propto \sigma^{-2}$, the error variance $\sigma^2$ also has a closed form update:
\begin{align}
\sigma^{2(k)} \leftarrow \frac{\lVert \bm{Y} - \bm{X}\bm{\beta}^{(k-1)}\rVert_2^2}{n + 2}.
\end{align}
The complete optimization algorithm is given in Algorithm \ref{algorithm} of Appendix \ref{completealgorithm}. The computational complexity of this algorithm is $\mathcal{O}(np)$ per iteration, where $p = \sum_{g=1}^{G} m_g$. It takes $\mathcal{O}(n m_g)$ operations to compute the partial residual $\bm{z}_g$ for the $g$th group, for a total cost of $\mathcal{O} (n \sum_{g=1}^{G} m_g) = \mathcal{O}(np)$. Similarly, it takes $\mathcal{O}(np)$ cost to compute the sum of squared residuals $\lVert \bm{Y} - \bm{X} \widehat{\bm{\beta}} \rVert_2^2$ to update the variance parameter $\sigma^2$. The computational complexity of our algorithm matches that of the usual gradient descent algorithms for lasso and group lasso \citep{FriedmanHastieTibshirani2010}.
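Putting the pieces together, the block coordinate ascent can be sketched as follows. This is an illustrative condensation under the orthonormality assumption $\bm{X}_g^T\bm{X}_g = n\bm{I}_{m_g}$, not the reference implementation: the stopping rule is simplified to a fixed iteration count, $\Delta^U$ is recomputed at each group update rather than every $M$ iterations, and the function name `ssgl_fit` is ours.

```python
import numpy as np

def ssgl_fit(X_groups, Y, lam0, lam1, a, b, n_iter=200, M=10):
    """Condensed sketch of the SSGL block coordinate ascent.
    X_groups: list of n x m_g arrays with X_g^T X_g = n I (orthonormalized).
    theta is refreshed every M iterations via (a + q_hat)/(a + b + G)."""
    n, G = len(Y), len(X_groups)
    betas = [np.zeros(X.shape[1]) for X in X_groups]
    sigma2 = float(np.var(Y))
    theta = a / (a + b)

    def p_star(norm_b, m):
        # slab probability on the log scale (C_g cancels in the ratio)
        log_slab = np.log(theta) + m * np.log(lam1) - lam1 * norm_b
        log_spike = np.log(1.0 - theta) + m * np.log(lam0) - lam0 * norm_b
        return 1.0 / (1.0 + np.exp(log_spike - log_slab))

    resid = Y - sum(X @ bg for X, bg in zip(X_groups, betas))
    for it in range(n_iter):
        if it % M == 0:  # refresh theta every M iterations
            q_hat = sum(bool(np.any(bg != 0)) for bg in betas)
            theta = (a + q_hat) / (a + b + G)
        for g, X in enumerate(X_groups):
            m = X.shape[1]
            z = X.T @ (resid + X @ betas[g])  # partial residual for group g
            delta_U = np.sqrt(-2.0 * n * sigma2 * np.log(p_star(0.0, m))) \
                + sigma2 * lam1
            norm_z = np.linalg.norm(z)
            if norm_z <= delta_U:             # refined selection threshold
                new = np.zeros(m)
            else:                             # group soft-thresholding
                p = p_star(np.linalg.norm(betas[g]), m)
                lam = lam1 * p + lam0 * (1.0 - p)
                new = max(0.0, 1.0 - sigma2 * lam / norm_z) * z / n
            resid += X @ (betas[g] - new)
            betas[g] = new
        sigma2 = float(resid @ resid) / (n + 2)  # Jeffreys-prior update
    return betas, sigma2
```

On a toy problem with one truly active group, the sketch zeroes out the null groups and recovers the active one while simultaneously estimating $\sigma^2$.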
As a non-convex method, the SSGL is not guaranteed to find the global posterior mode, only a local mode. However, the refined thresholding scheme (Theorem \ref{delta_bounds}) and a warm start initialization strategy (described in detail in Appendix \ref{AddlComputationalDetails}) enable the SSGL to eliminate a number of sub-optimal local modes from consideration, in a similar manner to \citet{RockovaGeorge2018}. To briefly summarize the initialization strategy, we tune $\lambda_0$ over an increasing sequence of values, and we further scale $\lambda_0$ by $\sqrt{m_g}$ for each group $g$ to ensure that the amount of penalization is on the same scale for groups of potentially different sizes \citep{HBM12}. Meanwhile, we keep $\lambda_1$ fixed at a small value so that selected groups undergo minimal shrinkage. See Appendix \ref{AddlComputationalDetails} for a detailed discussion of choosing $(\lambda_0, \lambda_1)$.
\section{Approaches to Inference} \label{sec:inference}
While the above procedure allows us to find the posterior mode of $\boldsymbol{\beta}$, providing a measure of uncertainty around our estimate is a challenging task. One possible solution is to run MCMC where the algorithm is initialized at the posterior mode. By starting the MCMC chain at the mode, the algorithm should converge faster. However, this is still not ideal, as it can be computationally burdensome in high dimensions. Instead, we will adopt ideas from a recent line of research \citep{van2014asymptotically, javanmard2018debiasing} based on de-biasing estimates from high-dimensional regression. These ideas were derived in the context of lasso regression, and we will explore the extent to which they work for the SSGL penalty. Define $\widehat{\boldsymbol{\Sigma}} = \boldsymbol{X}^T \boldsymbol{X}/n$ and let $\widehat{\boldsymbol{\Theta}}$ be an approximate inverse of $\widehat{\boldsymbol{\Sigma}}$. We define
\begin{equation}
\widehat{\boldsymbol{\beta}}_d = \widehat{\boldsymbol{\beta}} + \widehat{\boldsymbol{\Theta}} \boldsymbol{X}^T (\boldsymbol{Y} - \boldsymbol{X} \widehat{\boldsymbol{\beta}})/n,
\end{equation}
where $\widehat{\bm{\beta}}$ is the MAP estimator of $\bm{\beta}$ under the SSGL model. By \cite{van2014asymptotically}, this quantity $\widehat{\bm{\beta}}_d$ has the following asymptotic distribution:
\begin{equation} \label{asymptoticdist}
\sqrt{n}(\widehat{\boldsymbol{\beta}}_d - \boldsymbol{\beta}) \sim \mathcal{N}(\boldsymbol{0}, \sigma^2 \widehat{\boldsymbol{\Theta}} \widehat{\boldsymbol{\Sigma}} \widehat{\boldsymbol{\Theta}}^T).
\end{equation}
For our inference procedure, we replace the population variance $\sigma^2$ in (\ref{asymptoticdist}) with the modal estimate $\widehat{\sigma}^2$ from the SSGL model. To estimate $\widehat{\boldsymbol{\Theta}}$, we utilize the nodewise regression approach developed in \cite{meinshausen2006high, van2014asymptotically}. We describe this estimation procedure for $\widehat{\boldsymbol{\Theta}}$ in Appendix \ref{ThetaEstimateDebiasing}.
Let $\widehat{\beta}_{dj}$ denote the $j$th coordinate of $\widehat{\bm{\beta}}_d$. We have from (\ref{asymptoticdist}) that the $100(1-\alpha) \%$ asymptotic pointwise confidence intervals for $\beta_{j}, j = 1, \ldots, p$, are
\begin{align} \label{confidenceintervals}
[ \widehat{\beta}_{dj} - c(\alpha, n, \widehat{\sigma}^2), \widehat{\beta}_{dj} + c(\alpha, n, \widehat{\sigma}^2) ],
\end{align}
where $c(\alpha, n, \widehat{\sigma}^2) := \Phi^{-1} (1-\alpha/2) \sqrt{ \widehat{\sigma}^2 ( \widehat{\boldsymbol{\Theta}} \widehat{\boldsymbol{\Sigma}} \widehat{\boldsymbol{\Theta}}^T )_{jj} / n}$ and $\Phi(\cdot)$ denotes the cdf of $\mathcal{N}(0,1)$. It should be noted that our posterior mode estimates should have less bias than existing estimates such as the group lasso. Therefore, the goal of the de-biasing procedure is less about de-biasing the posterior mode estimates, and more about providing an estimator with an asymptotic normal distribution from which we can perform inference.
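Given the modal estimates and an approximate inverse $\widehat{\bm{\Theta}}$, the de-biased estimator and the intervals \eqref{confidenceintervals} reduce to a few matrix operations. A sketch, with $\widehat{\bm{\Theta}}$ passed in directly rather than estimated by nodewise regression, and with a function name of our own choosing:

```python
import numpy as np
from statistics import NormalDist

def debiased_intervals(X, Y, beta_hat, Theta_hat, sigma2_hat, alpha=0.05):
    """De-biased estimator beta_d = beta_hat + Theta_hat X^T (Y - X beta_hat)/n
    and the resulting 100(1 - alpha)% pointwise confidence intervals."""
    n = X.shape[0]
    Sigma_hat = X.T @ X / n
    beta_d = beta_hat + Theta_hat @ X.T @ (Y - X @ beta_hat) / n
    # diagonal of Theta_hat Sigma_hat Theta_hat^T: asymptotic variances
    avar = np.diag(Theta_hat @ Sigma_hat @ Theta_hat.T)
    c = NormalDist().inv_cdf(1.0 - alpha / 2) * np.sqrt(sigma2_hat * avar / n)
    return beta_d, beta_d - c, beta_d + c
```

As a sanity check, when $\widehat{\bm{\Theta}} = \widehat{\bm{\Sigma}}^{-1}$ exactly (feasible only when $p < n$), the de-biased estimator coincides with ordinary least squares regardless of the initial $\widehat{\bm{\beta}}$.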
To assess the ability of this procedure to obtain accurate confidence intervals (\ref{confidenceintervals}) with $\alpha=0.05$, we run a small simulation study with $n=100$, $G=100$ or $n=300$, $G=300$, and each of the $G$ groups having $m=2$ covariates. We generate the covariates from a multivariate normal distribution with mean $\boldsymbol{0}$ and an AR(1) covariance structure with correlation $\rho$. The two covariates in each group are the linear and squared terms of the original covariate. We set the first seven elements of $\boldsymbol{\beta}$ equal to $(0, 0.5, 0.25, 0.1, 0, 0, 0.7)$ and the remaining elements equal to zero. Lastly, we try $\rho = 0$ and $\rho = 0.7$. Table \ref{tab:debiasing} shows the coverage probabilities across 1000 simulations for all scenarios considered. We see that important covariates, i.e. covariates with a nonzero corresponding $\beta_j$, have coverage near 0.85 when $n=100$ under either correlation structure, though this increases to nearly the nominal rate when $n=300$. The remaining (null) covariates achieve the nominal level regardless of the sample size or correlation present.
\begin{table}[t]
\centering
\begin{tabular}{|l|rrr|}
\hline
& $\rho$ & Important covariates & Null covariates\\
\hline
$n=100, G=100$ & 0.0 & 0.83 & 0.93 \\
& 0.7 & 0.85 & 0.94 \\
\hline
$n=300, G = 300$ & 0.0 & 0.93 & 0.95 \\
& 0.7 & 0.92 & 0.95 \\
\hline
\end{tabular}
\caption{Coverage probabilities for de-biasing simulation. }
\label{tab:debiasing}
\end{table}
\section{Nonparametric Spike-and-Slab Lasso} \label{NPSSLIntro}
We now introduce the nonparametric spike-and-slab lasso (NPSSL). The NPSSL allows for flexible modeling of a response surface with minimal assumptions regarding its functional form. We consider two cases for the NPSSL: (i) a main effects only model, and (ii) a model with both main and interaction effects.
\subsection{Main Effects} \label{NPSSLMainEffects}
We first consider the main effects NPSSL model. Here, we assume that the response surface may be decomposed into the sum of univariate functions of each of the $p$ covariates. That is, we have the following model:
\begin{align}
y_i = \sum_{j=1}^p f_j(X_{ij}) + \varepsilon_i, \quad \varepsilon_i \sim \mathcal{N}(0, \sigma^2). \label{NPSSL_main_effects}
\end{align}
Following \citet{RavikumarLaffertyLiuWasserman2009}, we assume that each $f_j$, $j = 1,\dots, p$, may be approximated by a linear combination of basis functions $\mathcal{B}_j = \{g_{j1}, \dots, g_{jd}\}$, i.e.,
\begin{align}
f_j(X_{ij}) \approx \sum_{k = 1}^{d} g_{jk}(X_{ij}) \beta_{jk}
\end{align}
where $\bm{\beta}_j = (\beta_{j1}, \dots, \beta_{jd})^T$ are the unknown weights. Let $\widetilde{\bm{X}}_j$ denote the $n\times d$ matrix with the $(i, k)$th entry $\widetilde{\bm{X}}_j(i, k) = g_{jk}(X_{ij})$. Then, \eqref{NPSSL_main_effects} may be represented in matrix form as
\begin{align}
\bm{Y} - \bm{\delta} = \sum_{j=1}^p\widetilde{\bm{X}}_j\bm{\beta}_j + \bm{\varepsilon}, \quad \bm{\varepsilon}\sim \mathcal{N}_n(\mathbf{0}, \sigma^2 \bm{I}_n), \label{matrix_main_effects}
\end{align}
where $\bm{\delta}$ is a vector of the lower-order truncation bias. Note that we assume the response $\bm{Y}$ has been centered and so we do not include a grand mean $\bm{\mu}$ in (\ref{matrix_main_effects}). Thus, we do not require the main effects to integrate to zero as in \cite{WeiReichHoppinGhosal2018}. We do, however, require the matrices $\widetilde{\bm{X}}_j, j = 1, \ldots, p$, to be orthonormal, as discussed in Section \ref{optimizationSSGL}. Note that the entire design matrix does not need to be orthonormal; only the group-specific matrices do. We can enforce this in practice by either using orthonormal basis functions or by orthonormalizing the $\widetilde{\bm{X}}_j$ matrices before fitting the model.
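In practice, constructing the group design matrices amounts to evaluating a basis on each covariate and orthonormalizing the result. A sketch using a hypothetical polynomial basis in place of the spline bases one might prefer (both function names are ours):

```python
import numpy as np

def basis_matrix(x, d):
    """Hypothetical polynomial basis g_jk(x) = x^k, k = 1..d; any basis
    works here since orthonormalization is applied afterward."""
    return np.column_stack([x ** k for k in range(1, d + 1)])

def orthonormalize(Xg):
    """Rescale so that Xg^T Xg = n I, via centering and a thin QR factorization."""
    n = Xg.shape[0]
    Q, _ = np.linalg.qr(Xg - Xg.mean(axis=0))  # Q has orthonormal columns
    return Q * np.sqrt(n)                      # now (Q sqrt(n))^T (Q sqrt(n)) = n I
```

The resulting matrices $\widetilde{\bm{X}}_j$ then satisfy the group-wise orthonormality condition required by the block coordinate ascent algorithm.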
We assume that $\bm{Y}$ depends on only a small number of the $p$ covariates so that many of the $f_j$'s have a negligible contribution to \eqref{NPSSL_main_effects}. This is equivalent to assuming that most of the weight vectors $\bm{\beta}_j$ have all zero elements. If the $j$th covariate is determined to be predictive of $\bm{Y}$, then $f_j$ has a non-negligible contribution to \eqref{NPSSL_main_effects}. In this case, we want to include the \textit{entire} basis function approximation to $f_j$ in the model.
The above situation is a natural fit for the SSGL. We have $p$ groups where each group is either included as a whole or not included in the model. The design matrices for each group are exactly the matrices of basis functions, $\widetilde{\bm{X}}_j, j=1, \ldots, p$. We will utilize the non-separable SSGL penalty developed in Section \ref{NS_SSGL} to enforce this group-sparsity behavior in the model \eqref{matrix_main_effects}. More specifically, we seek to maximize the objective function with respect to $\bm{\beta} = (\bm{\beta}_1^T,\dots, \bm{\beta}_p^T)^T \in \mathbb{R}^{pd}$ and $\sigma^2$:
\begin{align}
L(\bm{\beta}, \sigma^2) = -\frac{1}{2\sigma^2}\lVert \bm{Y} - \sum_{j=1}^p \widetilde{\bm{X}}_j\bm{\beta}_j\rVert_2^2 - (n+2)\log\sigma+ pen_{NS}(\bm{\beta}).\label{NPSSL_main_effects_obj}
\end{align}
To find the estimators of $\bm{\beta}$ and $\sigma^2$, we use Algorithm \ref{algorithm} in Appendix \ref{completealgorithm}. Similar additive models have been proposed by a number of authors including \citet{RavikumarLaffertyLiuWasserman2009} and \citet{WeiReichHoppinGhosal2018}. However, our proposed NPSSL method has a number of advantages. First, we allow the noise variance $\sigma^2$ to be unknown, unlike \citet{RavikumarLaffertyLiuWasserman2009}. Accurate estimates of $\sigma^2$ are important to avoid overfitting the noise beyond the signal. Secondly, we use a block-descent algorithm to quickly target the modes of the posterior, whereas \citet{WeiReichHoppinGhosal2018} utilize MCMC. Finally, our SSGL algorithm automatically thresholds negligible groups to zero, negating the need for a post-processing thresholding step.
\subsection{Main and Interaction Effects} \label{NPSSLMainInteractionEffects}
The main effects model \eqref{NPSSL_main_effects} allows for each covariate to have a nonlinear contribution to the model, but assumes a linear relationship \emph{between} the covariates. In some applications, this assumption may be too restrictive. For example, in the environmental exposures data which we analyze in Section \ref{NHANES}, we may expect high levels of two toxins to have an even more adverse effect on a person's health than high levels of either of the two toxins. Such an effect may be modeled by including interaction effects between the covariates.
Here, we extend the NPSSL to include interaction effects. We consider only second-order interactions between the covariates, but our model can easily be extended to include even higher-order interactions. We assume that the interaction effects may be decomposed into the sum of bivariate functions of each pair of covariates, yielding the model:
\begin{align}
y_i = \sum_{j=1}^p f_j(X_{ij}) + \sum_{k=1}^{p-1}\sum_{l=k+1}^p f_{kl}(X_{ik}, X_{il}) + \varepsilon_i, \quad \varepsilon_i \sim \mathcal{N}(0, \sigma^2).\label{NPSSL_interactions}
\end{align}
For the interaction terms, we follow \citet{WeiReichHoppinGhosal2018} and approximate $f_{kl}$ using the outer product of the basis functions of the interacting covariates:
\begin{align}
f_{kl}(X_{ik}, X_{il}) \approx \sum_{s = 1}^{d^*}\sum_{r=1}^{d^*} g_{ks}(X_{ik})g_{lr}(X_{il}) \beta_{klsr}
\end{align}
where $\bm{\beta}_{kl} = (\beta_{kl11}, \dots, \beta_{kl1d^*}, \beta_{kl21}, \dots, \beta_{kld^*d^*})^T \in \mathbb{R}^{d^{*2}}$ is the vector of unknown weights. We let $\widetilde{\bm{X}}_{kl}$ denote the $n\times d^{*2}$ matrix with rows $$\widetilde{\bm{X}}_{kl}(i, \cdot) = \text{vec}(\bm{g}_k(X_{ik})\bm{g}_l(X_{il})^T),$$ where $\bm{g}_k(X_{ik}) = (g_{k1}(X_{ik}), \dots, g_{kd^*}(X_{ik}))^T$. Then, \eqref{NPSSL_interactions} may be represented in matrix form as
\begin{align}
\bm{Y} - \bm{\delta} = \sum_{j=1}^p \widetilde{\bm{X}}_j\bm{\beta}_j + \sum_{k=1}^{p-1}\sum_{l=k+1}^p \widetilde{\bm{X}}_{kl}\bm{\beta}_{kl} + \bm{\varepsilon}, \quad \bm{\varepsilon}\sim \mathcal{N}_n(\mathbf{0}, \sigma^2\bm{I}_n),\label{matrix_NPSSL_interactions}
\end{align}
where $\bm{\delta}$ is a vector of the lower-order truncation bias. We again assume $\bm{Y}$ has been centered and so do not include a grand mean in \eqref{matrix_NPSSL_interactions}. We do not constrain $f_{kl}$ to integrate to zero as in \citet{WeiReichHoppinGhosal2018}. However, we do ensure that the main effects are not in the linear span of the interaction functions. That is, we require the ``main effect'' matrices $\widetilde{\bm{X}}_l$ and $\widetilde{\bm{X}}_k$ to be orthogonal to the ``interaction'' matrix $\widetilde{\bm{X}}_{kl}$. This condition is needed to maintain identifiability for both the main and interaction effects in the model. In practice, we enforce this condition by setting the interaction design matrix to be the residuals of the regression of $\widetilde{\bm{X}}_k \circ \widetilde{\bm{X}}_l$ on $\widetilde{\bm{X}}_k$ and $\widetilde{\bm{X}}_l$.
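The residual construction described above can be carried out with one least-squares fit per pair of covariates; a sketch (the function name `interaction_matrix` is ours):

```python
import numpy as np

def interaction_matrix(Xk, Xl):
    """Build the interaction design from all pairwise products of the basis
    columns of Xk and Xl, then replace it by the residuals of its regression
    on [Xk, Xl], so that the result is orthogonal to both main-effect blocks."""
    n = Xk.shape[0]
    # row i holds vec(g_k(X_ik) g_l(X_il)^T): every product of a column of Xk
    # with a column of Xl, evaluated at observation i
    Xkl = np.einsum('is,ir->isr', Xk, Xl).reshape(n, -1)
    M = np.column_stack([Xk, Xl])
    coef, *_ = np.linalg.lstsq(M, Xkl, rcond=None)
    return Xkl - M @ coef  # residuals: orthogonal to the main effects
```

By the normal equations of least squares, the returned matrix is orthogonal to the columns of $\widetilde{\bm{X}}_k$ and $\widetilde{\bm{X}}_l$, which preserves the identifiability of the main effects.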
Note that the current representation does not enforce strong hierarchy. That is, interaction terms can be included even if their corresponding main effects are removed from the model. However, the NPSSL model can be easily modified to accommodate strong hierarchy. If hierarchy is desired, the ``interaction'' matrices can be augmented to contain both main and interaction effects, as in \citet{lim2015learning}, i.e. the ``interaction'' matrices in \eqref{matrix_NPSSL_interactions} would be $\widetilde{\bm{X}}_{kl}^{\textrm{aug}} = [ \widetilde{\bm{X}}_k, \widetilde{\bm{X}}_l, \widetilde{\bm{X}}_{kl}]$, instead of simply $\widetilde{\bm{X}}_{kl}$. This augmented model is overparameterized since the main effects still have their own separate design matrices as well (to ensure that main effects can still be selected even if $\bm{\beta}_{kl}^{\textrm{aug}} = \bm{0}$). However, this ensures that interaction effects are only selected if the corresponding main effects are also in the model.
In the interaction model, we either include $\bm{\beta}_{kl}$ in the model \eqref{matrix_NPSSL_interactions} if there is a non-negligible interaction between the $k$th and $l$th covariates, or we estimate $\widehat{\bm{\beta}}_{kl} = \mathbf{0}_{d^{*2}}$ if such an interaction is negligible. With the non-separable SSGL penalty, the objective function is:
\begin{align*} \label{obj_NPSSL_interaction}
L(\bm{\beta}, \sigma^2) &= -\frac{1}{2\sigma^2}\lVert \bm{Y} - \sum_{j=1}^p \widetilde{\bm{X}}_j\bm{\beta}_j - \sum_{k=1}^{p-1} \sum_{l=k+1}^p \widetilde{\bm{X}}_{kl}\bm{\beta}_{kl}\rVert_2^2 + pen_{NS}(\bm{\beta}) \\
& \quad -(n+2)\log\sigma, \numbereqn
\end{align*}
where $\bm{\beta} = (\bm{\beta}_1^T, \dots, \bm{\beta}_p^T, \bm{\beta}_{12}^T, \dots, \bm{\beta}_{(p-1)p}^T)^T \in \mathbb{R}^{pd + p(p-1)d^{*2}/2}.$ We can again use Algorithm \ref{algorithm} in Appendix \ref{completealgorithm} to find the modal estimates of $\bm{\beta}$ and $\sigma^2$.
\section{Asymptotic Theory for the SSGL and NPSSL} \label{asymptotictheory}
In this section, we derive asymptotic properties for the separable SSGL and NPSSL models. We first note some differences between our theory and the theory in \citet{RockovaGeorge2018}. First, we prove \textit{joint} consistency in estimation of both the unknown $\bm{\beta}$ \textit{and} the unknown $\sigma^2$, whereas \cite{RockovaGeorge2018} proved their result only for $\bm{\beta}$, assuming known variance $\sigma^2 = 1$. Secondly, \citet{RockovaGeorge2018} established convergence rates for the global posterior mode and the full posterior separately, whereas we establish a contraction rate $\epsilon_n$ for the full posterior only. Our rate $\epsilon_n$ satisfies $\epsilon_n \rightarrow 0$ as $n \rightarrow \infty$ (i.e. the full posterior collapses to the true $(\bm{\beta}, \sigma^2)$ almost surely as $n \rightarrow \infty$), and hence, it automatically follows that the posterior mode is a consistent estimator of $(\bm{\beta}, \sigma^2)$. Finally, we also derive a posterior contraction rate for nonparametric additive regression, not just linear regression. All proofs for the theorems in this section can be found in Appendix \ref{App:D3}.
\subsection{Grouped Linear Regression}
We work under the frequentist assumption that there is a true model,
\begin{equation} \label{truemodel}
\bm{Y} = \displaystyle \sum_{g=1}^{G} \bm{X}_g \bm{\beta}_{0g} + \bm{\varepsilon}, \hspace{.5cm} \bm{\varepsilon} \sim \mathcal{N}_n ( \mathbf{0}, \sigma_0^2 \bm{I}_n ),
\end{equation}
where $\bm{\beta}_0 = ( \bm{\beta}_{01}^T, \ldots, \bm{\beta}_{0G}^T )^T$ and $\sigma_0^2 \in (0, \infty)$. Denote $\bm{X} = [ \bm{X}_1, \ldots, \bm{X}_G ]$ and $\bm{\beta} = ( \bm{\beta}_1^T, \ldots, \bm{\beta}_G^T)^T$. Suppose we endow $(\bm{\beta}, \sigma^2)$ under model (\ref{truemodel}) with the following prior:
\begin{equation} \label{hiermodel}
\begin{array}{rl}
\pi (\bm{\beta} | \theta ) \sim & \displaystyle \prod_{g=1}^{G} \left[ (1- \theta) \bm{\Psi} ( \bm{\beta}_g | \lambda_0 ) + \theta \bm{\Psi} ( \bm{\beta}_g | \lambda_1 ) \right], \\
\theta \sim & \mathcal{B}(a, b), \\
\sigma^2 \sim & \mathcal{IG} (c_0, d_0),
\end{array}
\end{equation}
where $c_0 > 0$ and $d_0 > 0$ are fixed constants and the hyperparameters $(a,b)$ in the prior on $\theta$ are to be chosen later.
\begin{remark}
In our implementation of the SSGL model, we endowed $\sigma^2$ with an improper prior, $\pi(\sigma^2) \propto \sigma^{-2}$. This can be viewed as a limiting case of the $\mathcal{IG}(c_0,d_0)$ prior with $c_0 \rightarrow 0, d_0 \rightarrow 0$. This improper prior is fine for implementation since it leads to a proper posterior, but for our theoretical investigation, we require the priors on $(\bm{\beta}, \sigma^2)$ to be proper.
\end{remark}
\subsubsection{Posterior Contraction Rates}
Let $m_{\max} = \max_{1 \leq g \leq G} m_g$ and let $p = \sum_{g=1}^{G} m_g$. Let $S_0$ be the set containing the indices of the true nonzero groups, where $S_0 \subseteq \{1, \ldots, G \}$ with cardinality $s_0 = \lvert S_0 \rvert$. We make the following assumptions:
\begin{enumerate}[label=(A\arabic*)]
\item Assume that $G \gg n $, $\log(G) = o(n)$, and $m_{\max} = O( \log G / \log n)$. \label{A1}
\item The true number of nonzero groups satisfies $s_0 = o(n / \log G)$. \label{A2}
\item There exists a constant $k>0$ so that $\lambda_{\max} ( \bm{X}^T \bm{X} ) \leq k n^{\alpha}$, for some $\alpha \in [1, \infty)$. \label{A3}
\item Let $\xi \subset \{1, \ldots, G \}$, and let $\bm{X}_{\xi}$ denote the submatrix of $\bm{X}$ that contains the submatrices with groups indexed by $\xi$. There exist constants $\nu_1 > 0$, $\nu_2 > 0$, and an integer $\bar{p}$ satisfying $s_0 = o(\bar{p})$ and $\bar{p} = o( s_0 \log n )$, so that $n \nu_1 \leq \lambda_{\min} ( \bm{X}_{\xi}^T \bm{X}_{\xi} ) \leq \lambda_{\max} ( \bm{X}_{\xi}^T \bm{X}_{\xi}) \leq n \nu_2$ for any model of size $\lvert \xi \rvert \leq \bar{p}$. \label{A4}
\item $\lVert \bm{\beta}_0 \rVert_{\infty} = O( \log G ).$ \label{A5}
\end{enumerate}
Assumption \ref{A1} allows the number of groups $G$ and total number of covariates $p$ to grow at nearly exponential rate with sample size $n$. The size of each individual group may also grow as $n$ grows, but should grow at a slower rate than $n / \log n$. Assumption \ref{A2} specifies the growth rate for the true model size $s_0$. Assumption \ref{A3} bounds the eigenvalues of $\bm{X}^T \bm{X}$ from above and is less stringent than requiring all the eigenvalues of the Gram matrix ($\bm{X}^T \bm{X} / n$) to be bounded away from infinity. Assumption \ref{A4} ensures that $\bm{X}^T \bm{X}$ is locally invertible over sparse sets. In general, conditions \ref{A3}-\ref{A4} are difficult to verify, but they can be shown to hold with high probability for certain classes of matrices where the rows of $\bm{X}$ are independent and sub-Gaussian \cite{MendelsonPajor2006, RaskuttiWainwrightYu2010}. Finally, Assumption \ref{A5} places a restriction on the growth rate of the maximum signal size for the true $\bm{\beta}_0$.
We now state our main theorem on the posterior contraction rates for the SSGL prior (\ref{hiermodel}) under model (\ref{truemodel}). Let $\mathbb{P}_0$ denote the probability measure underlying the truth (\ref{truemodel}) and $\Pi( \cdot | \bm{Y})$ denote the posterior distribution under the prior (\ref{hiermodel}) for $(\bm{\beta}, \sigma^2)$.
\begin{theorem}[posterior contraction rates] \label{posteriorcontractiongroupedregression}
Let $\epsilon_n = \sqrt{ s_0 \log G / n }$, and suppose that Assumptions \ref{A1}-\ref{A5} hold. Under model (\ref{truemodel}), suppose that we endow $(\bm{\beta}, \sigma^2)$ with the prior (\ref{hiermodel}). For the hyperparameters in the $\mathcal{B}(a,b)$ prior on $\theta$, we choose $a=1, b=G^{c}$, $c > 2$. Further, we set $\lambda_0 = (1-\theta)/\theta$ and $\lambda_1 \asymp 1/n$ in the SSGL prior. Then
\begin{equation} \label{l2contraction}
\Pi \left( \bm{\beta}: \lVert \bm{\beta} - \bm{\beta}_0 \rVert_2 \geq M_1 \sigma_0 \epsilon_n \vert \bm{Y} \right) \rightarrow 0 \textrm{ a.s. } \mathbb{P}_0 \textrm{ as } n, G \rightarrow \infty,
\end{equation}
\begin{equation} \label{predictioncontraction}
\Pi \left( \bm{\beta}: \lVert \bm{X} \bm{\beta} - \bm{X} \bm{\beta}_0 \rVert_2 \geq M_2 \sigma_0 \sqrt{n} \epsilon_n \vert \bm{Y} \right) \rightarrow 0 \textrm{ a.s. } \mathbb{P}_0 \textrm{ as } n, G \rightarrow \infty,
\end{equation}
\begin{equation} \label{varianceconsistency}
\Pi \left( \sigma^2: \lvert \sigma^2 - \sigma_0^2 \rvert \geq 4 \sigma_0^2 \epsilon_n \vert \bm{Y} \right) \rightarrow 0 \textrm{ a.s. } \mathbb{P}_0 \textrm{ as } n, G \rightarrow \infty,
\end{equation}
for some $M_1 > 0, M_2 > 0$.
\end{theorem}
\begin{remark}
In the case where $G = p$ and $m_1 = \ldots = m_G = 1$, the $\ell_2$ and prediction error rates in (\ref{l2contraction})-(\ref{predictioncontraction}) reduce to the familiar optimal rates of $\sqrt{s_0 \log p / n}$ and $\sqrt{s_0 \log p}$ respectively.
\end{remark}
\begin{remark}
Eq. (\ref{varianceconsistency}) demonstrates that our model also consistently estimates the unknown variance $\sigma^2$, therefore providing further theoretical justification for placing an independent prior on $\sigma^2$, as advocated by \citet{MoranRockovaGeorge2018}.
\end{remark}
\subsubsection{Dimensionality Recovery}
Although the posterior mode is exactly sparse, the SSGL prior is absolutely continuous so it assigns zero mass to exactly sparse vectors. To approximate the model size under the SSGL model, we use the following generalized notion of sparsity \citep{BhattacharyaPatiPillaiDunson2015}. For $\omega_g > 0$, we define the generalized inclusion indicator and generalized dimensionality, respectively, as
\begin{equation} \label{generalizeddimensionality}
\gamma_{\omega_g} (\bm{\beta}_g) = I( \lVert \bm{\beta}_g \rVert_2 > \omega_g) \textrm{ and } \lvert \bm{\gamma} (\bm{\beta}) \rvert = \displaystyle \sum_{g=1}^{G} \gamma_{\omega_g} (\bm{\beta}_g ).
\end{equation}
In contrast to \cite{BhattacharyaPatiPillaiDunson2015, RockovaGeorge2018}, we allow the threshold $\omega_g$ to be different for each group, owing to the fact that the group sizes $m_g$ may not necessarily all be the same. However, the $\omega_g$'s, $g= 1, \ldots, G$, should still tend towards zero as $n$ increases, so that $| \bm{\gamma} (\bm{\beta}) |$ provides a good approximation to $\# \{g: \bm{\beta}_g \neq \mathbf{0}_{m_g} \}$.
Consider as the threshold,
\begin{equation} \label{omegathreshold}
\omega_g \equiv \omega_g(\lambda_0, \lambda_1, \theta) = \frac{1}{\lambda_0 - \lambda_1} \log \left[ \frac{1-\theta}{\theta} \frac{\lambda_0^{m_g}}{\lambda_1^{m_g}} \right].
\end{equation}
Note that for large $\lambda_0$, this threshold rapidly approaches zero. Analogous to \cite{Rockova2018, RockovaGeorge2018}, any vectors $\bm{\beta}_g$ that satisfy $\lVert \bm{\beta}_g \rVert_2 = \omega_g$ correspond to the intersection points between the two group lasso densities in the separable SSGL prior (\ref{ssgrouplasso}), or when the second derivative $\partial^2 pen_S(\bm{\beta}|\theta) / \partial \lVert \bm{\beta}_g\rVert_2^2 = 0.5$. The value $\omega_g$ represents the turning point where the slab has dominated the spike, and thus, the sharper the spike (when $\lambda_0$ is large), the smaller the threshold.
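To make the thresholding rule concrete, here is a small Python/NumPy sketch that evaluates the threshold in (\ref{omegathreshold}) and the generalized dimensionality in (\ref{generalizeddimensionality}); the function names and toy coefficient groups are illustrative:

```python
import numpy as np

def omega(lam0, lam1, theta, m_g):
    # Threshold from Eq. (omegathreshold): the intersection point of the
    # spike and slab group lasso densities for a group of size m_g.
    return (1.0 / (lam0 - lam1)) * np.log(
        (1.0 - theta) / theta * (lam0 / lam1) ** m_g)

def generalized_dim(beta_groups, lam0, lam1, theta):
    # Count the groups whose l2 norm exceeds the group-specific threshold.
    return sum(
        int(np.linalg.norm(b) > omega(lam0, lam1, theta, len(b)))
        for b in beta_groups)

groups = [np.array([0.001, 0.0]), np.array([1.5, -2.0, 0.3])]
# With a sharp spike (large lam0), only the second group is counted.
print(generalized_dim(groups, lam0=50.0, lam1=1.0, theta=0.5))  # -> 1
```

Note how increasing \texttt{lam0} drives the threshold toward zero, so the generalized dimension approaches the exact count of nonzero groups.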
Using the notion of generalized dimensionality (\ref{generalizeddimensionality}) with (\ref{omegathreshold}) as the threshold, we have the following theorem.
\begin{theorem}[dimensionality] \label{dimensionalitygroupedregression}
Suppose that the same conditions as those in Theorem \ref{posteriorcontractiongroupedregression} hold. Then under \eqref{truemodel}, for sufficiently large $M_3 > 0$,
\begin{equation} \label{posteriorcompressibility}
\displaystyle \sup_{\bm{\beta}_0} \mathbb{E}_{\bm{\beta}_0} \Pi \left( \bm{\beta}: \lvert \bm{\gamma} (\bm{\beta} ) \rvert > M_3 s_0 \vert \bm{Y} \right) \rightarrow 0 \textrm{ as } n, G \rightarrow \infty.
\end{equation}
\end{theorem}
Theorem \ref{dimensionalitygroupedregression} shows that the expected posterior probability that the generalized dimension is a constant multiple larger than the true model size $s_0$ is asymptotically vanishing. In other words, the SSGL posterior concentrates on sparse sets.
\subsection{Sparse Generalized Additive Models (GAMs)}
Assume there is a true model,
\begin{equation} \label{truemodelGAM}
y_i = \displaystyle \sum_{j=1}^{p} f_{0j}(X_{ij}) + \varepsilon_i, \hspace{.5cm} \varepsilon_i \sim \mathcal{N} (0, \sigma_0^2),
\end{equation}
where $\sigma_0^2 \in (0, \infty)$. Throughout this section, we assume that all the covariates $\bm{X}_i = (X_{i1}, \ldots, X_{ip})^T$ have been standardized to lie in $[0,1]^p$ and that $f_{0j} \in \mathcal{C}^{\kappa}[0,1], j=1, \ldots, p$. That is, the true functions are all at least $\kappa$-times continuously differentiable over $[0,1]$, for some $\kappa \in \mathbb{N}$. Suppose that each $f_{0j}$ can be approximated by a linear combination of basis functions, $\{g_{j1}, \ldots, g_{jd} \}$. In matrix notation, (\ref{truemodelGAM}) can then be written as
\begin{equation} \label{truemodelGAMmatrix}
\bm{Y} = \displaystyle \sum_{j=1}^{p} \widetilde{\bm{X}}_j \bm{\beta}_{0j} + \bm{\delta} + \bm{\varepsilon}, \hspace{.5cm} \bm{\varepsilon} \sim \mathcal{N}_n ( \bm{0}, \sigma_0^2 \bm{I}_n),
\end{equation}
where $\widetilde{\bm{X}}_j$ denotes an $n \times d$ matrix where the $(i,k)$th entry is $\widetilde{\bm{X}}_j(i,k) = g_{jk} (X_{ij})$, the $\bm{\beta}_{0j}$'s are $d \times 1$ vectors of basis coefficients, and $\bm{\delta}$ denotes an $n \times 1$ vector of lower-order bias.
Denote $\widetilde{\bm{X}} = [ \widetilde{\bm{X}}_1, \ldots, \widetilde{\bm{X}}_p]$ and $\bm{\beta} = ( \bm{\beta}_1^T, \ldots, \bm{\beta}_p^T )^T$. Under (\ref{truemodelGAM}), suppose that we endow $(\bm{\beta}, \sigma^2)$ in (\ref{truemodelGAMmatrix}) with the prior (\ref{hiermodel}). We have the following assumptions:
\begin{enumerate}[label=(B\arabic*)]
\item Assume that $p \gg n$, $\log p = o(n)$, and $d \asymp n^{1 / (2 \kappa + 1)}$. \label{B1}
\item The number of true nonzero functions satisfies
\begin{align*}
s_0 = o( \max \{ n / \log p, n^{2 \kappa / (2 \kappa + 1)} \} ).
\end{align*} \label{B2}
\item There exists a constant $k_1 > 0$ so that for all $n$, $\lambda_{\max} ( \widetilde{\bm{X}}^T \widetilde{\bm{X}} ) \leq k_1 n$. \label{B3}
\item Let $\xi \subset \{1, \ldots, p \}$, and let $\widetilde{\bm{X}}_{\xi}$ denote the submatrix of $\widetilde{\bm{X}}$ that contains the submatrices indexed by $\xi$. There exists a constant $\nu_1 > 0$ and an integer $\bar{p}$ satisfying $s_0 = o(\bar{p})$ and $ \bar{p} = o( s_0 \log n )$, so that $\lambda_{\min} ( \widetilde{\bm{X}}_{\xi}^T \widetilde{\bm{X}}_{\xi} ) \geq n \nu_1$ for any model of size $\lvert \xi \rvert \leq \bar{p}$. \label{B4}
\item $\lVert \bm{\beta}_0 \rVert_{\infty} = O( \log p ).$ \label{B5}
\item The bias $\bm{\delta}$ satisfies $\lVert \bm{\delta} \rVert_{2} \lesssim \sqrt{s_0 n} d^{- \kappa}$. \label{B6}
\end{enumerate}
\noindent Assumptions \ref{B1}-\ref{B5} are analogous to assumptions \ref{A1}-\ref{A5}. Assumptions \ref{B3}-\ref{B4} are difficult to verify but can be shown to hold if appropriate basis functions for the $g_{jk}$'s are used, e.g. cubic B-splines \cite{YooGhosal2016, WeiReichHoppinGhosal2018}. Finally, Assumption \ref{B6} bounds the approximation error incurred by truncating the basis expansions to be of size $d$. This assumption is satisfied, for example, by B-spline basis expansions \cite{ZhouShenWolfe1998, WeiReichHoppinGhosal2018}.
Let $\widetilde{\mathbb{P}}_0$ denote the probability measure underlying the truth (\ref{truemodelGAM}) and $\Pi ( \cdot | \bm{Y})$ denote the posterior distribution under NPSSL model with the prior (\ref{hiermodel}) for $(\bm{\beta}, \sigma^2)$ in (\ref{truemodelGAMmatrix}). Further, let $f (\bm{X}_i) = \sum_{j=1}^{p} f_j ( X_{ij})$ and $f_0 (\bm{X}_i) = \sum_{j=1}^{p} f_{0j} (X_{ij})$, and define the empirical norm $\lVert \cdot \rVert_n$ as
\begin{align*}
\lVert f - f_0 \rVert_n^2 = \frac{1}{n} \sum_{i=1}^{n} \left[ f (\bm{X}_i) - f_0 (\bm{X}_i) \right]^2.
\end{align*}
Let $\mathcal{F}$ denote the infinite-dimensional set of all possible additive functions $f = \sum_{j=1}^{p} f_j$, where each $f_j$ can be represented by a $d$-dimensional basis expansion. In \citet{RaskuttiWainwrightYu2012}, it was shown that the minimax estimation rate for $f_0 = \sum_{j=1}^{p} f_{0j}$ under squared $\ell_2$ error loss is $\epsilon_n^2 \asymp s_0 \log p / n + s_0 n^{-2 \kappa / (2 \kappa + 1)}$. The next theorem establishes that the NPSSL model achieves this minimax posterior contraction rate.
\begin{theorem}[posterior contraction rates] \label{contractionGAMs}
Let $\epsilon_n^2 = s_0 \log p / n + s_0 n^{-2 \kappa / (2 \kappa + 1)}$. Suppose that Assumptions \ref{B1}-\ref{B6} hold. Under model (\ref{truemodelGAMmatrix}), suppose that we endow $(\bm{\beta}, \sigma^2)$ with the prior (\ref{hiermodel}) (replacing $G$ with $p$). For the hyperparameters in the $\mathcal{B}(a,b)$ prior on $\theta$, we choose $a=1, b=p^{c}$, $c > 2$. Further, we set $\lambda_0 = (1-\theta)/\theta$ and $\lambda_1 \asymp 1/n$ in the SSGL prior. Then
\begin{equation} \label{empiricalcontractionGAM}
\Pi \left( f \in \mathcal{F}: \lVert f - f_0 \rVert_n \geq \widetilde{M}_1 \epsilon_n | \bm{Y} \right) \rightarrow 0 \textrm{ a.s. } \widetilde{\mathbb{P}}_0 \textrm{ as } n, p \rightarrow \infty,
\end{equation}
\begin{equation} \label{GAMvarianceconsistency}
\Pi \left( \sigma^2: \lvert \sigma^2 - \sigma_0^2 \rvert \geq 4 \sigma_0^2 \epsilon_n \vert \bm{Y} \right) \rightarrow 0 \textrm{ a.s. } \widetilde{\mathbb{P}}_0 \textrm{ as } n, p \rightarrow \infty,
\end{equation}
for some $\widetilde{M}_1 > 0$.
\end{theorem}
Let the generalized dimensionality $ | \bm{\gamma} (\bm{\beta}) |$ be defined as before in (\ref{generalizeddimensionality}) (replacing $G$ with $p$), with $\omega_g$ from (\ref{omegathreshold}) as the threshold (replacing $m_g$ with $d$). The next theorem shows that under the NPSSL, the expected posterior probability that the generalized dimension size is a constant multiple larger than the true model size $s_0$ asymptotically vanishes.
\begin{theorem}[dimensionality] \label{dimensionalityGAM}
Suppose that the same conditions as those in Theorem \ref{contractionGAMs} hold. Then under \eqref{truemodelGAMmatrix}, for sufficiently large $\widetilde{M}_2 > 0$,
\begin{equation} \label{posteriorcompressibilityGAM}
\displaystyle \sup_{\bm{\beta}_0} \widetilde{\mathbb{E}}_{\bm{\beta}_0} \Pi \left( \bm{\beta}: \lvert \bm{\gamma} (\bm{\beta} ) \rvert > \widetilde{M}_2 s_0 \vert \bm{Y} \right) \rightarrow 0 \textrm{ as } n, p \rightarrow \infty.
\end{equation}
\end{theorem}
\section{Simulation Studies} \label{Simulations}
In this section, we will evaluate our method in a number of settings. For the SSGL approach, we fix $\lambda_1 = 1$ and use cross-validation to choose from $\lambda_0 \in \{1, 2, \ldots, 100 \}$. For the prior $\theta \sim \mathcal{B}(a,b)$, we set $a=1, b=G$ so that $\theta$ is small with high probability. We will compare our SSGL approach with the following methods:
\begin{enumerate}
\item GroupLasso: the group lasso \citep{YuanLin2006}
\item BSGS: Bayesian sparse group selection \citep{chen2016bayesian}
\item SoftBart: soft Bayesian additive regression tree (BART) \citep{linero2018bayesian}
\item RandomForest: random forests \citep{breiman2001random}
\item SuperLearner: super learner \citep{van2007super}
\item GroupSpike: point-mass spike-and-slab priors \eqref{pointmassspikeandslab} placed on groups of coefficients\footnote{Code to implement GroupSpike is included in the Supplementary data. Due to the discontinuous prior, GroupSpike is not amenable to a MAP finding algorithm and has to be implemented using MCMC.}
\end{enumerate}
In our simulations, we will look at the mean squared error (MSE) for estimating $f(\boldsymbol{X}_{\textrm{new}})$ averaged over a new sample of data $\boldsymbol{X}_{\textrm{new}}$. We will also evaluate the variable selection properties of the different methods using precision and recall, where $\text{precision} = \text{TP}/(\text{TP}+\text{FP})$, $\text{recall} = \text{TP}/(\text{TP}+\text{FN})$, and TP, FP, and FN denote the number of true positives, false positives, and false negatives respectively. Note that we will not show precision or recall for the SuperLearner, which averages over different models and different variable selection procedures and therefore does not have one set of variables that are deemed significant.
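The selection metrics above can be computed directly from boolean selection indicators; a minimal Python sketch (the arrays shown are hypothetical toy data):

```python
import numpy as np

def selection_metrics(selected, truth):
    """Precision and recall for variable selection, given boolean arrays
    of selected groups and truly nonzero groups."""
    selected, truth = np.asarray(selected), np.asarray(truth)
    tp = np.sum(selected & truth)    # true positives
    fp = np.sum(selected & ~truth)   # false positives
    fn = np.sum(~selected & truth)   # false negatives
    precision = tp / (tp + fp) if tp + fp > 0 else 0.0
    recall = tp / (tp + fn) if tp + fn > 0 else 0.0
    return precision, recall

truth    = np.array([True, True, False, False, False])
selected = np.array([True, False, True, False, False])
prec, rec = selection_metrics(selected, truth)  # (0.5, 0.5)
```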
\subsection{Sparse Semiparametric Regression} \label{simsparse}
Here, we will evaluate the use of our proposed SSGL procedure in sparse semiparametric regression with $p$ continuous covariates. Namely, we implement the NPSSL main effects model described in Section \ref{NPSSLMainEffects}. In Appendix \ref{App:B}, we include more simulation studies of the SSGL approach under both sparse and dense settings, as well as a simulation study showing that we are accurately estimating the residual variance $\sigma^2$.
We let $n=100, p=300$. We generate independent covariates from a standard uniform distribution, and we let the true regression surface take the following form:
\begin{align*}
\mathbb{E} (Y \vert \boldsymbol{X}) = 5 \sin(\pi X_1) + 2.5 (X_3^2 - 0.5) + e^{X_4} + 3 X_5,
\end{align*}
with variance $\sigma^2 = 1$.
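For reproducibility, the data-generating process above can be sketched in a few lines of Python/NumPy (the seed is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 300
X = rng.uniform(size=(n, p))             # independent Uniform(0, 1) covariates
f = (5.0 * np.sin(np.pi * X[:, 0])       # f_1(X_1)
     + 2.5 * (X[:, 2] ** 2 - 0.5)        # f_3(X_3)
     + np.exp(X[:, 3])                   # f_4(X_4)
     + 3.0 * X[:, 4])                    # f_5(X_5)
Y = f + rng.normal(scale=1.0, size=n)    # additive noise, sigma^2 = 1
```

Only 4 of the 300 covariates enter the true regression surface, so the setting is sparse.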
\begin{figure}
\caption{Simulation results for semiparametric regression. The top left panel presents the out-of-sample mean squared error, the top right panel shows the recall score to evaluate variable selection, the bottom left panel shows the precision score, and the bottom right panel shows the estimates from each simulation of $f_1(X_1)$ for SSGL. The MSE for BSGS is not displayed as it lies outside of the plot area.}
\label{fig:simSparse}
\end{figure}
To implement the SSGL approach, we estimate the mean response as
\begin{align*}
\mathbb{E} ( \bm{Y} \vert \boldsymbol{X}) = \widetilde{\boldsymbol{X}}_1 \boldsymbol{\beta}_1 + \dots + \widetilde{\boldsymbol{X}}_p \boldsymbol{\beta}_p,
\end{align*}
where $\widetilde{\boldsymbol{X}}_j$ is a design matrix of basis functions used to capture the possibly nonlinear effect of $X_j$ on $Y$. For the basis functions in $\widetilde{\boldsymbol{X}}_j, j = 1, \ldots, p$, we use natural splines with degrees of freedom $d$ chosen from $d \in \{2, 3, 4\}$ using cross-validation. Thus, we are estimating a total of between 600 and 1200 unknown basis coefficients.
We run 1000 simulations and average all of the metrics considered over each simulated data set. Figure \ref{fig:simSparse} shows the results from this simulation study. The GroupSpike approach has the best performance in terms of MSE, followed closely by SSGL, with the next best approach being SoftBart. In terms of recall, the SSGL and GroupLasso approaches perform the best, indicating the highest power in detecting the significant groups. This comes with a loss of precision as the GroupSpike and SoftBart approaches have the best precision among all methods.
Although the GroupSpike method performed best in this scenario, the SSGL method was much faster. As we show in Appendix \ref{App:B5}, when $p=4000$, fitting the SSGL model with a sufficiently large $\lambda_0$ takes around three seconds. This is almost 50 times faster than running just 100 MCMC iterations of the GroupSpike method, to say nothing of the total time required for the GroupSpike chains to converge. Our experiments demonstrate that the SSGL model gives performance comparable to the ``theoretically ideal'' point-mass spike-and-slab in a fraction of the computational time.
\subsection{Interaction Detection}
We now explore the ability of the SSGL approach to identify important interaction terms in a nonparametric regression model. To this end, we implement the NPSSL model with interactions from Section \ref{NPSSLMainInteractionEffects}. We generate 25 independent covariates from a standard uniform distribution with a sample size of 300. Data is generated from the model:
\begin{align*}
\mathbb{E} (Y \vert \boldsymbol{X}) = 2.5\sin(\pi X_1 X_2) + 2\cos(\pi (X_3 + X_5)) + 2(X_6 - 0.5) + 2.5X_7,
\end{align*}
with variance $\sigma^2=1$. While this may not seem like a high-dimensional problem, we consider all two-way interactions, of which there are 300. The important two-way interactions are between $X_1$ and $X_2$ and between $X_3$ and $X_5$. We evaluate the performance of each method and examine the ability of SSGL to identify the important interactions while excluding all of the remaining ones. Figure \ref{fig:simInt} shows the results for this simulation setting. The SSGL, GroupLasso, GroupSpike, and SoftBart approaches all perform well in terms of out-of-sample mean squared error, with GroupSpike slightly outperforming the competitors. The SSGL also does a very good job of identifying the two important interactions: the $(X_1, X_2)$ interaction is included in 97\% of simulations, while the $(X_3, X_5)$ interaction is included 100\% of the time. All other interactions are included in only a small fraction of simulated data sets.
\begin{figure}
\caption{Simulation results from the interaction setting. The left panel shows out-of-sample MSE for each approach, while the right panel shows the probability of a two-way interaction being included into the SSGL model for all pairs of covariates.}
\label{fig:simInt}
\end{figure}
\section{Real Data Analysis} \label{dataanalysis}
Here, we will illustrate the SSGL procedure in two distinct settings: 1) evaluating the SSGL's performance on a data set where $n=120$ and $p=15,000$, and 2) identifying important (nonlinear) main effects and interactions of environmental exposures. In Appendix \ref{App:C}, we evaluate the predictive performance of our approach on benchmark data sets where $p < n$, comparing it to several other state-of-the-art methods. Our results show that in both the $p \gg n$ and $p<n$ settings, the SSGL maintains good predictive accuracy.
\subsection{Bardet-Biedl Syndrome Gene Expression Study} \label{bb_subsection}
We now analyze a microarray data set consisting of gene expression measurements from the eye tissue of 120 laboratory rats\footnote{Data accessed from the Gene Expression Omnibus \url{ www.ncbi.nlm.nih.gov/geo} (accession no. GSE5680).}. The data was originally studied by \citet{Scheetz06} to investigate mammalian eye disease, and later analyzed by \citet{BrehenyHuang2015} to demonstrate the performance of their group variable selection algorithm. In this data, the goal is to identify genes which are associated with the gene TRIM32. TRIM32 has previously been shown to cause Bardet-Biedl syndrome \citep{chiang06}, a disease affecting multiple organs including the retina.
The original data consists of 31,099 probe sets. Following \citet{BrehenyHuang2015}, we included only the 5,000 probe sets with the largest variances in expression (on the log scale). For these probe sets, we considered a three-term natural cubic spline basis expansion, resulting in a grouped regression problem with $n = 120$ and $p = 15,000$. We implemented SSGL with regularization parameter values $\lambda_1=1$ and $\lambda_0$ ranging on an equally spaced grid from 1 to 500. We compared SSGL with the group lasso \citep{YuanLin2006}, implemented using the R package \texttt{gglasso} \citep{Yang2015}.
As shown in Table \ref{BB_gene_table}, SSGL selected far fewer groups than the group lasso: SSGL selected 12 probe sets, while the group lasso selected 83. Moreover, SSGL achieved a smaller 10-fold cross-validation error than the group lasso, albeit within the range of random variability (Table \ref{BB_gene_table}). These results demonstrate that the SSGL achieves strong predictive accuracy while \textit{also} being the most parsimonious. The groups selected by both SSGL and the group lasso are displayed in Table \ref{bb_gene_table} of Appendix \ref{App:C}. Interestingly, only four of the 12 probe sets selected by SSGL were also selected by the group lasso.
\begin{table}[t!]
\centering
\begin{tabular}{lcc}
\hline
& SSGL & Group Lasso \\
\hline
\# groups selected & 12 & 83\\
10-fold CV error & 0.012 (0.003) & 0.017 (0.008) \\
\hline
\end{tabular}
\caption{Results for SSGL and Group Lasso on the Bardet-Biedl syndrome gene expression data set. In parentheses, we report the standard errors for the CV prediction error. }\label{BB_gene_table}
\end{table}
We next conducted gene ontology enrichment analysis on the group of genes found by each of the methods using the R package \texttt{clusterProfiler} \citep{YWY12}. This software determines whether subsets of genes known to act in a biological process are overrepresented in a group of genes, relative to chance. If such a subset is significant, the group of genes is said to be ``enriched'' for that biological process. With a false discovery rate of 0.01, SSGL had five enriched terms, while the group lasso had none. The terms for which SSGL was enriched included RNA binding, a biological process with which the response gene TRIM32 is associated.\footnote{https://www.genecards.org/cgi-bin/carddisp.pl?gene=TRIM32 (accessed 03/01/20)} These findings show the ability of SSGL to find biologically meaningful signal in the data. Additional details for our gene ontology enrichment analysis can be found in Appendix \ref{App:C}.
\subsection{Environmental Exposures in the NHANES Data}\label{NHANES}
Here, we analyze data from the 2001-2002 cycle of the National Health and Nutrition Examination Survey (NHANES), which was previously analyzed by \citet{antonelli2017estimating}. We aim to identify which organic pollutants are associated with changes in leukocyte telomere length (LTL). Telomeres are segments of DNA that help to protect chromosomes, and LTL is commonly used as a proxy for overall telomere length. LTL has previously been shown to be associated with adverse health effects \citep{haycock2014leucocyte}, and recent studies within the NHANES data have found that organic pollutants can be associated with telomere length \citep{mitro2015cross}.
\begin{figure}
\caption{Exposure response curves for each of the four exposures with significant main effects identified by the model. }
\label{fig:MainEffectNHANES}
\end{figure}
We use the SSGL approach to evaluate whether any of 18 organic pollutants are associated with LTL and whether there are any significant interactions among the pollutants that are also associated with LTL. In addition to the 18 exposures, there are 18 additional demographic variables which we adjust for in our model. We model the effects of the 18 exposures on LTL using spline basis functions with two degrees of freedom. For the interaction terms, this leads to four terms for each pair of exposures, and we orthogonalize these terms with respect to the main effects. In total, this leads to a data set with $n=1003$ and $p=666$.
Our model selects four significant main effects and six significant interaction terms. In particular, PCB 3, PCB 11, Furan 1, and Furan 4 are identified as the important main effects. Figure \ref{fig:MainEffectNHANES} plots the exposure response curves for these exposures. Each of these four exposures has a positive association with LTL, which agrees with \cite{mitro2015cross}, who found positive relationships between persistent organic pollutants and telomere length. Further, our model identifies more main effects and more interactions than previous analyses of these data, e.g. \cite{antonelli2017estimating}, which could lead to more targeted future research on how these pollutants affect telomere length. Additional discussion and analysis of the NHANES data set can be found in Appendix \ref{App:C}.
\section{Discussion} \label{discussion}
We have introduced the spike-and-slab group lasso (SSGL) model for variable selection and linear regression with grouped variables. We also extended the SSGL model to generalized additive models with the nonparametric spike-and-slab lasso (NPSSL). The NPSSL can efficiently identify both nonlinear main effects \textit{and} higher-order nonlinear interaction terms. Moreover, our prior performs an automatic multiplicity adjustment and self-adapts to the true sparsity pattern of the data through a \textit{non}-separable penalty. For computation, we introduced highly efficient coordinate ascent algorithms for MAP estimation and employed de-biasing methods for uncertainty quantification. An \textsf{R} package implementing the SSGL model can be found at \url{https://github.com/jantonelli111/SSGL}.
Although our model performs group selection, it does so in an ``all-in-all-out'' manner, similar to the original group lasso \citep{YuanLin2006}. Future work will be to extend our model to perform both group selection and within-group selection of individual coordinates. We are currently working to extend the SSGL to perform bilevel selection.
We are also working to extend the nonparametric spike-and-slab lasso so it can adapt to even more flexible regression surfaces than the generalized additive model. Under the NPSSL model, we used cross-validation to tune a single value for the degrees of freedom. In reality, different functions can have vastly differing degrees of smoothness, and it will be desirable to model anisotropic regression surfaces while avoiding the computational burden of tuning the individual degrees of freedom over a $p$-dimensional grid.
\section*{Acknowledgments}
Dr. Ray Bai, Dr. Gemma Moran, and Dr. Joseph Antonelli contributed equally and wrote this manuscript together, with input and suggestions from all other listed co-authors. The bulk of this work was done when the first listed author was a postdoc at the Perelman School of Medicine, University of Pennsylvania, under the mentorship of the last two authors. The authors are grateful to three anonymous reviewers, the Associate Editor, and the Editor whose thoughtful comments and suggestions helped to improve this manuscript. The authors would also like to thank Ruoyang Zhang, Peter B{\"u}hlmann, and Edward George for helpful discussions.
\section*{Funding}
Dr. Ray Bai and Dr. Mary Boland were funded in part by generous funding from the Perelman School of Medicine, University of Pennsylvania. Dr. Ray Bai and Dr. Yong Chen were funded by NIH grants 1R01AI130460 and 1R01LM012607.
\begin{appendix}
\section{Additional Computational Details} \label{App:A}
\subsection{SSGL Block-Coordinate Ascent Algorithm} \label{completealgorithm}
\begin{algorithm}[H]
\scriptsize \begin{flushleft}
Input: grid of increasing $\lambda_0$ values $I = \{\lambda_0^{1},\dots, \lambda_0^{L}\}$, update frequency $M$\\[4pt]
Initialize: $\bm{\beta}^* = \mathbf{0}_p$, $\theta^* = 0.5$, $\sigma^{*2}$ as described in Section \ref{AddlComputationalDetails}, $\Delta^*$ according to \eqref{delta_u} in the main manuscript \\[4pt]
For $l = 1, \dots, L$:
\begin{enumerate}
\item Set iteration counter $k_l = 0$
\item Initialize: $\widehat{\bm{\beta}}^{(k_l)} = \bm{\beta}^*$, $\theta^{(k_l)} = \theta^*$, $\sigma^{(k_l)2} = \sigma^{*2}$, $\Delta^U = \Delta^*$
\item While \textsf{diff} $> \varepsilon$
\begin{enumerate}
\item Increment $k_l $
\item For $ g= 1, \dots,G$:
\begin{enumerate}
\item Update
\begin{equation*}
{\bm{\beta}}_{g}^{(k_l)} \leftarrow \frac{1}{n} \left(1-\frac{\sigma^{(k_l)2} \lambda^*({\bm{\beta}}_{g}^{(k_l - 1)} ; \theta^{(k_l)} )}{\lVert \bm{z}_g\rVert_2}\right)_+\bm{z}_g \ \mathbb{I}(\lVert \bm{z}_g\rVert_2 > \Delta^U)
\end{equation*}
\item Update
\begin{equation*}
\widehat{Z}_g =
\begin{cases}
1 &\text{if } {\bm{\beta}}_{g}^{(k_l)} \neq \mathbf{0}_{m_g} \\
0 &\text{otherwise}
\end{cases}
\end{equation*}
\item If $g \equiv 0 \mod M$:
\begin{enumerate}
\item Update
$${\theta}^{(k_l)} \leftarrow \frac{a + \sum_{g=1}^G \widehat{Z}_g}{ a + b +G}$$
\item If $k_{l-1}< 100$:
\begin{equation*}
\text{Update } {\sigma}^{(k_l)2} \leftarrow \frac{\lVert \bm{Y} - \bm{X}{\bm{\beta}}^{(k_l)}\rVert_2^2}{n+2}
\end{equation*}
\item Update
\begin{equation*}
\Delta^U \leftarrow
\begin{cases}
\sqrt{2n\sigma^{(k_l)2}\log[1/p^*(\mathbf{0}_{m_g};\theta^{(k_l)})]} +\sigma^{(k_l)2}\lambda_1 &\text{if } h(\mathbf{0}_{m_g};{\theta}^{(k_l)} ) >0\\
{\sigma}^{(k_l)2}\lambda^*(\mathbf{0}_{m_g};{\theta}^{(k_l)}) &\text{otherwise}
\end{cases}
\end{equation*}
\end{enumerate}
\item \textsf{diff} $= \lVert {\bm{\beta}}^{(k_l)} - \bm{\beta}^{(k_l - 1)}\rVert_2$
\end{enumerate}
\end{enumerate}
\item Assign $\bm{\beta}^* = \bm{\beta}^{(k_l)}$, $\theta^* = \theta^{(k_l)}$, $\sigma^{*2}= \sigma^{(k_l)2}$, $\Delta^* = \Delta^U$
\end{enumerate}
\end{flushleft}
\caption{Spike-and-Slab Group Lasso} \label{algorithm}
\end{algorithm}
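The group update in step 3(b)i of Algorithm \ref{algorithm} is a group soft-thresholding operator combined with the hard selection threshold $\Delta^U$. A minimal Python sketch follows; the $1/n$ scaling and the construction of $\bm{z}_g$ are omitted, and \texttt{lam\_star} stands in for the $\sigma^2\lambda^*(\bm{\beta}_g;\theta)$ term:

```python
import numpy as np

def group_soft_threshold(z_g, lam_star, delta):
    """One SSGL-style group update: jointly shrink or zero out group g.

    z_g      : working residual vector for group g
    lam_star : effective penalty weight (stands in for
               sigma^2 * lambda_star(beta_g; theta) in step 3(b)i)
    delta    : hard selection threshold (Delta^U in Algorithm 1)
    """
    norm = np.linalg.norm(z_g)
    if norm <= delta:                        # group fails the threshold test
        return np.zeros_like(z_g)
    shrink = max(0.0, 1.0 - lam_star / norm)
    return shrink * z_g                      # whole group shrunk together
```

Note that the entire group is shrunk by a common factor, which is what produces the ``all-in-all-out'' selection behavior.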
\subsection{Tuning Hyperparameters, Initializing Values, and Updating the Variance in Algorithm 1} \label{AddlComputationalDetails}
We keep the slab hyperparameter $\lambda_1$ fixed at a small value. We have found that our results are not very sensitive to the choice of $\lambda_1$. This parameter controls the variance of the slab component of the prior, and the variance must simply be large enough to avoid overshrinkage of important covariates. For the default implementation, we recommend fixing $\lambda_1 = 1$. This applies minimal shrinkage to the significant groups of coefficients and affords these groups the ability to escape the pull of the spike.
Meanwhile, we choose the spike parameter $\lambda_0$ from an increasing ladder of values. We recommend selecting $\lambda_0 \in \{ 1,2,...,100 \}$, which represents a range from hardly any penalization to very strong penalization. Below, we describe precisely how to tune $\lambda_0$. To account for potentially different group sizes, we use the same $\lambda_0$ for all groups but multiply $\lambda_0$ by $\sqrt{m_g}$ for each $g$th group, $g=1, \ldots, G$. As discussed in \cite{HBM12}, further scaling of the penalty by group size is necessary in order to ensure that the same degree of penalization is applied to potentially different sized groups. Otherwise, larger groups may be erroneously selected simply because they are larger (and thus have larger $\ell_2$ norm), not because they contain significant entries.
When the spike parameter $\lambda_0$ is very large, the continuous spike density approximates the point-mass spike. Consequently, we face the computational challenge of navigating a highly multimodal posterior. To ameliorate this problem for the spike-and-slab lasso, \citet{RockovaGeorge2018} recommend a ``dynamic posterior exploration'' strategy in which the slab parameter $\lambda_1$ is held fixed at a small value and $\lambda_0$ is gradually increased along a grid of values. Using the solution from a previous $\lambda_0$ as a ``warm start'' allows the procedure to more easily find optimal modes. In particular, when $(\lambda_1 - \lambda_0)^2 \leq 4$, the posterior is convex.
\citet{MoranRockovaGeorge2018} modify this strategy for the unknown $\sigma^2$ case. This is because the posterior is always non-convex when $\sigma^2$ is unknown. Namely, when $p\gg n$ and $\lambda_0 \approx \lambda_1$, the model can become saturated, causing the residual variance to go to zero. To avoid this suboptimal mode at $\sigma^2 = 0$, \citet{MoranRockovaGeorge2018} recommend fixing $\sigma^2$ until the $\lambda_0$ value at which the algorithm starts to converge in less than 100 iterations. Then, $\bm{\beta}$ and $\sigma^2$ are simultaneously updated for the next largest $\lambda_0$ in the sequence. The intuition behind this strategy is we first find a solution to the convex problem (in which $\sigma^2$ is fixed) and then use this solution as a warm start for the non-convex problem (in which $\sigma^2$ can vary).
We pursue a similar ``dynamic posterior exploration'' strategy with the modification for the unknown variance case for the SSGL in Algorithm \ref{algorithm} of Section \ref{completealgorithm}. A key aspect of this algorithm is how to choose the maximum value of $\lambda_0$. \citet{RockovaGeorge2018} recommend this maximum to be the $\lambda_0$ value at which the estimated coefficients stabilize. An alternative approach is to choose the maximum $\lambda_0$ using cross-validation, a strategy which is made computationally feasible by the speed of our block coordinate ascent algorithm. In our experience, the dynamic posterior exploration strategy favors more parsimonious models than cross-validation. In the simulation studies in Section \ref{Simulations}, we utilize cross-validation to choose $\lambda_0$, as there, our primary goal is predictive accuracy rather than parsimony.
Following \cite{MoranRockovaGeorge2018}, we initialize $\bm{\beta}^{*} = \bm{0}_p$ and $\theta^{*} = 0.5$. We also initialize $\sigma^{*2}$ to be the mode of a scaled inverse chi-squared distribution with degrees of freedom $\nu=3$ and scale parameter chosen such that the sample variance of $\bm{Y}$ corresponds to the 90th quantile of the prior. We have found this initialization to be quite effective in practice at ensuring that Algorithm \ref{algorithm} converges in less than 100 iterations for sufficiently large $\lambda_0$.
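The dynamic posterior exploration strategy described above can be sketched as the following warm-start ladder; \texttt{fit\_ssgl} is a hypothetical single-$\lambda_0$ solver (not part of our software interface), and $\sigma^2$ updates are switched on once the solver first converges in fewer than 100 iterations:

```python
import numpy as np

def dynamic_posterior_exploration(X, y, fit_ssgl, lambda0_grid):
    """Warm-start ladder over an increasing grid of spike parameters.

    `fit_ssgl(X, y, lam0, ...)` is a hypothetical solver returning
    (beta, theta, sigma2, n_iters). sigma^2 is held fixed until the
    solver first converges in fewer than 100 iterations.
    """
    p = X.shape[1]
    beta, theta = np.zeros(p), 0.5           # initializations from the text
    sigma2, update_sigma = np.var(y), False
    for lam0 in lambda0_grid:                # increasing ladder of lambda_0
        beta, theta, sigma2, iters = fit_ssgl(
            X, y, lam0, beta_init=beta, theta_init=theta,
            sigma2_init=sigma2, update_sigma=update_sigma)
        if iters < 100:
            update_sigma = True              # switch on sigma^2 updates
    return beta, theta, sigma2
```

Each solution serves as the warm start for the next, larger $\lambda_0$, which is what allows the procedure to track a good mode through the non-convex landscape.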
\subsection{Additional Details for the Inference Procedure} \label{ThetaEstimateDebiasing}
Here, we describe the nodewise regression procedure for estimating $\widehat{\boldsymbol{\Theta}}$ in Section \ref{sec:inference}. This approach for estimating the inverse of the covariance matrix $\widehat{\boldsymbol{\Sigma}} = \bm{X}^T \bm{X} / n$ was originally proposed and studied theoretically in \cite{meinshausen2006high} and \cite{van2014asymptotically}.
For each $j=1,\dots,p$, let $\boldsymbol{X}_j$ denote the $j$th column of $\boldsymbol{X}$ and $\boldsymbol{X}_{-j}$ denote the submatrix of $\boldsymbol{X}$ with the $j$th column removed. Define $\widehat{\boldsymbol{\gamma}}_j$ as
\begin{align*}
\widehat{\boldsymbol{\gamma}}_j = \displaystyle \argmin_{\gamma}(|| \boldsymbol{X}_{j} - \boldsymbol{X}_{-j} \boldsymbol{\gamma}||^2_2 / n + 2 \lambda_j ||\boldsymbol{\gamma}||_1).
\end{align*}
\noindent Now we can define the components of $\widehat{\boldsymbol{\gamma}}_j$ as $\widehat{\boldsymbol{\gamma}}_{j,k}$ for $k = 1, \dots, p$ and $k \neq j$, and create the following matrix:
$$
\widehat{\bm{C}} =
\begin{pmatrix}
1&-\widehat{\boldsymbol{\gamma}}_{1,2}&\dots&-\widehat{\boldsymbol{\gamma}}_{1,p}\\
-\widehat{\boldsymbol{\gamma}}_{2,1}&1&\dots&-\widehat{\boldsymbol{\gamma}}_{2,p}\\
\vdots&\vdots&\ddots&\vdots\\
-\widehat{\boldsymbol{\gamma}}_{p,1}&-\widehat{\boldsymbol{\gamma}}_{p,2}&\dots&1
\end{pmatrix}.
$$
\noindent Lastly, let $\widehat{\bm{T}}^2 = \text{diag}(\widehat{\tau}_1^2, \widehat{\tau}_2^2, \dots, \widehat{\tau}_p^2)$, where
$$\widehat{\tau}_j^2 = || \boldsymbol{X}_{j} - \boldsymbol{X}_{-j} \widehat{\boldsymbol{\gamma}}_j||^2_2 / n + \lambda_j ||\widehat{\boldsymbol{\gamma}}_j||_1.$$
\noindent We can proceed with $\widehat{\boldsymbol{\Theta}} = \widehat{\bm{T}}^{-2} \widehat{\bm{C}}$. This choice is used because it puts an upper bound on $|| \widehat{\boldsymbol{\Sigma}} \widehat{\boldsymbol{\Theta}}_j^T - \bm{e}_j||_{\infty}$. Other regression models such as the original spike-and-slab lasso \citep{RockovaGeorge2018} could be used instead of the lasso \citep{Tibshirani1996} regressions for each covariate. However, we will proceed with this choice, as it has already been studied theoretically and shown to have the required properties to be able to perform inference for $\boldsymbol{\beta}$.
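The nodewise construction above can be sketched in a few lines of numpy. This is a simplified illustration, not our implementation: a basic coordinate-descent lasso stands in for whatever solver one prefers, and a single `lam` is used for every $\lambda_j$:

```python
import numpy as np

def lasso_cd(A, b, lam, n_iter=200):
    """Coordinate descent for (1/2n)||b - Aw||_2^2 + lam * ||w||_1."""
    n, d = A.shape
    w = np.zeros(d)
    col_sq = (A ** 2).sum(axis=0) / n
    r = b - A @ w
    for _ in range(n_iter):
        for k in range(d):
            r += A[:, k] * w[k]                      # remove coordinate k
            rho = A[:, k] @ r / n
            w[k] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[k]
            r -= A[:, k] * w[k]                      # add it back
    return w

def nodewise_theta(X, lam):
    """Relaxed inverse of Sigma_hat = X'X / n via nodewise regressions."""
    n, p = X.shape
    C = np.eye(p)
    tau2 = np.zeros(p)
    for j in range(p):
        idx = [k for k in range(p) if k != j]
        gamma = lasso_cd(X[:, idx], X[:, j], lam)
        C[j, idx] = -gamma                           # row j of C-hat
        resid = X[:, j] - X[:, idx] @ gamma
        tau2[j] = resid @ resid / n + lam * np.abs(gamma).sum()
    return C / tau2[:, None]                         # Theta_hat = T^{-2} C
```

For near-orthogonal designs, the resulting $\widehat{\boldsymbol{\Theta}}$ is close to the inverse of $\widehat{\boldsymbol{\Sigma}}$, which is the property the debiasing step relies on.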
\section{Additional Simulation Results} \label{App:B}
Here, we present additional results which include different sample sizes than those seen in the manuscript, assessment of the SSGL procedure under dense settings, estimates of $\sigma^2$, timing comparisons, and additional figures.
\subsection{Increased Sample Size for Sparse Simulation} \label{App:B1}
Here, we present the same sparse simulation setup as that seen in Section \ref{simsparse}, though we will increase $n$ from 100 to 300. Figure \ref{fig:AppendixSimSparse} shows the results and we see that they are very similar to those from the manuscript, except that the mean squared error (MSE) for the SSGL approach is now nearly as low as the MSE for the GroupSpike approach, and the precision score has improved substantially.
\begin{figure}
\caption{Simulation results from the sparse setting with $n=300$. The left panel presents the out-of-sample mean squared error, the middle panel shows the precision score, and the right panel shows the recall score. The MSE for BSGS is not displayed as it lies outside of the plot area.}
\label{fig:AppendixSimSparse}
\end{figure}
\begin{figure}
\caption{Simulation results from the less sparse setting with $n=100$ and $n=300$. The left column shows out-of-sample MSE, the middle column shows the precision score, and the right column shows the recall score.}
\label{fig:simDense}
\end{figure}
\subsection{Dense Model} \label{App:B2}
Here, we generate independent covariates from a standard normal distribution, and we let the true regression surface take the following form
\begin{align*}
\mathbb{E} (Y \vert \boldsymbol{X}) = \sum_{j=1}^{20} 0.2 X_j + 0.2 X_j^2,
\end{align*}
with variance $\sigma^2=1$. In this model, there are no strong predictors of the outcome, but rather a large number of predictors which have small impacts on the outcome. Here, we display results for both $n=100$ and $p=300$, as well as $n=300$ and $p = 300$, as the qualitative results change across the different sample sizes. Our simulation results can be seen in Figure \ref{fig:simDense}. When the sample size is 100, the SSGL procedure performs the best in terms of both MSE and recall score, while all approaches do poorly with the precision score. When the sample size increases to 300, the SSGL approach still performs quite well in terms of MSE and recall, though the GroupLasso and GroupSpike approaches are slightly better in terms of MSE. The SSGL approach still maintains a low precision score while the GroupSpike approach has a very high precision once the sample size is large enough.
\subsection{Estimation of $\sigma^2$} \label{App:B3}
To evaluate our ability to estimate $\sigma^2$ and confirm our theoretical results that the posterior of $\sigma^2$ contracts around the true parameter, we ran a simulation study using the following data generating model:
\begin{align*}
\mathbb{E} (Y \vert \boldsymbol{X}) = 0.5X_1 + 0.3X_2 + 0.6X_{10}^2 - 0.2X_{20},
\end{align*}
with $\sigma^2 = 1$. We vary $n \in \{50, 100, 500, 1000, 2000\}$ and we set $G = n$ to confirm that the estimates are centering around the truth as both the sample size and the covariate dimension grow. We use groups of size two that contain both the linear and quadratic term for each covariate. Note that in this setting, the total number of regression coefficients actually \textit{exceeds} the sample size since each group has two terms, leading to a total of $p=2G$ coefficients in the model.
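The data generating step for this study can be sketched as follows (the particular column layout of the grouped design matrix is an illustrative assumption, not a detail from our implementation):

```python
import numpy as np

def simulate_sigma_study(n, rng):
    """One draw from the sigma^2 simulation design: G = n groups, each
    holding the linear and quadratic terms of one covariate (p = 2G > n)."""
    G = n                                    # covariate dimension grows with n
    X = rng.standard_normal((n, G))
    # true surface: 0.5*X_1 + 0.3*X_2 + 0.6*X_10^2 - 0.2*X_20 (1-indexed)
    mean = 0.5 * X[:, 0] + 0.3 * X[:, 1] + 0.6 * X[:, 9] ** 2 - 0.2 * X[:, 19]
    y = mean + rng.standard_normal(n)        # true sigma^2 = 1
    Xg = np.empty((n, 2 * G))
    Xg[:, 0::2], Xg[:, 1::2] = X, X ** 2     # group g = columns (2g, 2g+1)
    return Xg, y
```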
\begin{figure}
\caption{Boxplots of the estimates of $\sigma^2$ from the SSGL model for a range of sample sizes. Note that $n=G$ in each scenario.}
\label{fig:SigmaTest}
\end{figure}
Figure \ref{fig:SigmaTest} shows box plots of the estimates for $\sigma^2$ across all simulations for each sample size and covariate dimension. We see that for small sample sizes there are some estimates well above 1 or far smaller than 1. This is because either some important variables are excluded (so the sum of squared residuals gets inflated), or too many variables are included and the model is overfitted (leading to small $\widehat{\sigma}^2$). These problems disappear as the sample size grows to 500 or larger, where we observe that the estimates are closely centering around the true $\sigma^2 = 1$. Figure \ref{fig:SigmaTest} confirms our theoretical results in Theorem \ref{posteriorcontractiongroupedregression} and Theorem \ref{contractionGAMs}, which state that as $n, G \rightarrow \infty$, the posterior $\pi(\sigma^2|\bm{Y})$ contracts around the true $\sigma^2$.
\subsection{Large Number of Groups} \label{App:B4}
\begin{figure}
\caption{Simulation results from the many groups setting with $G=2000$. The left panel presents the out-of-sample mean squared error, the middle panel shows the precision score, and the right panel shows the recall score.}
\label{fig:AppendixSimBigG}
\end{figure}
We now generate data with $n=200$ and $G=2000$, where each group contains three predictors. We generate data from the following model:
\begin{align*}
\mathbb{E} ( \bm{Y} \vert \boldsymbol{X}) = \sum_{g=1}^G \boldsymbol{X}_g \boldsymbol{\beta}_g,
\end{align*}
where we set $\boldsymbol{\beta}_g = \boldsymbol{0}$ for $g=1, \dots, 1996$. For the final four groups, we draw individual coefficient values from independent normal distributions with mean 0 and standard deviation 0.4. These coefficients are redrawn for each data set in the simulation study, and therefore, the results are averaging over many possible combinations of magnitudes for the true nonzero coefficients. We see that the best performing approach in this scenario is the GroupSpike approach, followed by the SSGL approach. The SSGL approach outperforms group lasso in terms of MSE and precision, while group lasso has a slightly higher recall score.
\subsection{Computation Time} \label{CPUExperiment} \label{App:B5}
In this study, we evaluate the computational speed of the SSGL procedure in comparison with the fully Bayesian GroupSpike approach that places point-mass spike-and-slab priors on groups of coefficients. We fix $n=300$ and vary the number of groups $G \in \{ 100, 200, \dots, 2000 \}$, with two elements per group. For the SSGL approach, we keep track of the computation time for estimating the model for $\lambda_0 = 20$. For large values of $\lambda_0$, it typically takes 100 or fewer iterations for the SSGL method to converge. For the GroupSpike approach, we keep track of the computation time required to run 100 MCMC iterations. Both SSGL and GroupSpike models were run on an Intel E5-2698v3 processor.
In any given data set, the computation time will be higher than the numbers presented here because the SSGL procedure typically requires fitting the model for multiple values of $\lambda_0$, while the GroupSpike approach will likely take far more than 100 MCMC iterations to converge, especially in higher dimensions. Nonetheless, this should provide a comparison of the relative computation speeds for each approach.
The average CPU time in seconds can be found in Figure \ref{fig:CPU}. We see that the SSGL approach is much faster as it is able to estimate all the model parameters for a chosen $\lambda_0$ in just a couple of seconds, even for $G=2000$ (or $p=4000$). When $p=2000$, the SSGL returned a final solution in roughly three seconds on average, whereas GroupSpike required over two minutes to run 100 iterations (and would most likely require many more iterations to converge). This is to be expected as the GroupSpike approach relies on MCMC. Figure \ref{fig:CPU} shows the large computational gains that can be achieved using our MAP finding algorithm.
\begin{figure}
\caption{CPU time for the SSGL and GroupSpike approaches averaged across 1000 replications for fixed $n=300$ and different numbers of groups $G$.}
\label{fig:CPU}
\end{figure}
\section{Additional Results and Discussion for Real Data Examples} \label{App:C}
In this section, we perform additional data analysis of the SSGL method on benchmark datasets where $p<n$ to demonstrate that the SSGL model also works well in low-dimensional settings. We also provide additional analyses and discussion of the two real data examples analyzed in Section \ref{dataanalysis}.
\subsection{Testing Predictive Performance of the SSGL on Datasets Where $p < n$} \label{predictiveperformance}
We first look at three data sets which have been analyzed in a number of manuscripts, most recently in \cite{linero2018bayesian}. The tecator data set is available in the \texttt{caret} package in \textsf{R} \citep{kuhn2008building} and has three different outcomes $\bm{Y}$ to analyze. Specifically, this data set looks at using 100 infrared absorbance spectra to predict three different features of chopped meat with a sample size of 215. The Blood-Brain data is also available in the \texttt{caret} package and aims to predict levels of a particular compound in the brain given 134 molecular descriptors with a sample size of 208. Lastly, the Wipp data set contains 300 samples with 30 features from a computer model used to estimate two-phase fluid flow \citep{storlie2011surface}. For each of these data sets, we hold out 20 of the subjects in the data as a validation sample and see how well the model predicts the outcome in the held-out data. We repeat this 1000 times and compute the root mean squared error (RMSE) for prediction.
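The repeated hold-out evaluation described above can be sketched generically; \texttt{fit\_predict} is a hypothetical wrapper around any one of the compared models (SSGL, group lasso, random forests, and so on):

```python
import numpy as np

def holdout_rmse(X, y, fit_predict, n_rep=1000, n_test=20, seed=0):
    """Average out-of-sample RMSE over repeated random hold-out splits.

    `fit_predict(X_train, y_train, X_test)` is a hypothetical wrapper
    returning predictions for the held-out rows."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    rmses = []
    for _ in range(n_rep):
        test = rng.choice(n, size=n_test, replace=False)
        train = np.setdiff1d(np.arange(n), test)
        pred = fit_predict(X[train], y[train], X[test])
        rmses.append(np.sqrt(np.mean((y[test] - pred) ** 2)))
    return float(np.mean(rmses))
```

In Table \ref{tab:pred}, each method's averaged RMSE is then divided by the per-dataset minimum, so the best performer within a data set is standardized to 1.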
\begin{table}[t!]
\resizebox{.98\textwidth}{!}{
\centering
\begin{tabular}{lrrrrrrr}
\hline
Data & SSGL & GroupLasso & RandomForest & SoftBart & SuperLearner & BSGS & GroupSpike \\
\hline
Tecator 1 & 1.41 & 1.57 & 2.75 & 1.93 & 1.00 & 5.16 & 1.67 \\
Tecator 2 & 1.25 & 1.58 & 2.91 & 1.97 & 1.00 & 6.77 & 1.41 \\
Tecator 3 & 1.14 & 1.38 & 1.94 & 1.81 & 1.10 & 3.31 & 1.00 \\
BloodBrain & 1.10 & 1.04 & 1.00 & 1.01 & 1.00 & 1.24 & 1.13 \\
Wipp & 1.44 & 1.30 & 1.46 & 1.00 & 1.17 & 4.68 & 1.30 \\
\hline
\end{tabular}
}
\caption{Standardized out-of-sample root mean squared prediction error averaged across 1000 replications for the data sets in Section \ref{predictiveperformance}. An RMSE of 1 indicates the best performance within a data set.}
\label{tab:pred}
\end{table}
Table \ref{tab:pred} shows the results for each of the methods considered in the simulation study. The results are standardized so that for each data set, the RMSE is divided by the minimum RMSE for that data set. This means that the model with an RMSE of 1 had the best predictive performance, and all others should be greater than 1, with the magnitude indicating how poor the performance was. We see that the top performer across the data sets was SuperLearner, which is not surprising given that SuperLearner is quite flexible and averages over many different prediction models. Our simulation studies showed that SuperLearner may not work as well when $p > n$. However, the data sets considered here all have $p < n$, which could explain its improved performance here. Among the other approaches, SSGL performs quite well as it has RMSEs relatively close to 1 for all the data sets considered.
\subsection{Additional Details for Bardet-Biedl Analysis}
Here we present additional results for the Bardet-Biedl Syndrome gene expression analysis conducted in Section \ref{bb_subsection}. Table \ref{bb_gene_table} displays the 12 probes found by SSGL. Table \ref{gene_ontology_results} displays the terms for which SSGL was enriched in a gene ontology enrichment analysis.
\begin{table}[ht]
\centering
\begin{tabular}{llrr}
\hline
Probe ID & Gene Symbol & SSGL Norm & Group Lasso Norm \\
\hline
1374131\_at & & 0.034 & \\
1383749\_at & Phospho1 & 0.067 & 0.088 \\
1393735\_at & & 0.033 & 0.002 \\
1379029\_at & Zfp62 & 0.074 & \\
1383110\_at & Klhl24 & 0.246 & \\
1384470\_at & Maneal & 0.087 & 0.005 \\
1395284\_at & & 0.014 & \\
1383673\_at & Nap1l2 & 0.045 & \\
1379971\_at & Zc3h6 & 0.162 & \\
1384860\_at & Zfp84 & 0.008 & \\
1376747\_at & & 0.489 & 0.002 \\
1379094\_at & & 0.220 & \\
\hline
\end{tabular}
\caption{Probes found by SSGL on the Bardet-Biedl syndrome gene expression data set. The probes which were also found by the Group Lasso have nonzero group norm values.}
\label{bb_gene_table}
\end{table}
\begin{table}[ht]
\centering
\begin{tabular}{ | m{1em} | m{11cm} | }
\hline
& SSGL: enriched terms in gene ontology enrichment analysis \\
\hline
1 & alpha-mannosidase activity \\
2 & RNA polymerase II intronic transcription regulatory region sequence-specific DNA binding \\
3 & mannosidase activity \\
4 & intronic transcription regulatory region sequence-specific DNA binding \\
5 & intronic transcription regulatory region DNA binding \\
\hline
\end{tabular}
\caption{Table displays the terms for which SSGL was found to be enriched in a gene ontology enrichment analysis, ordered by statistical significance.}\label{gene_ontology_results}
\end{table}
\subsection{Additional Details for Analysis of NHANES Data}
Here we present additional results from the NHANES data analysis in Section \ref{NHANES}, where the aim is to identify environmental exposures that are associated with leukocyte telomere length. In the NHANES data, we have measurements from 18 persistent organic pollutants. Persistent organic pollutants are toxic chemicals that have potential to adversely affect health. They are known to remain in the environment for long periods of time and can travel through wind, water, or even the food chain. Our data set consists of 11 polychlorinated biphenyls (PCBs), three Dioxins, and four Furans. We want to understand the impact that these can have on telomere length, and to understand if any of these pollutants interact in their effect on humans.
The data also contains additional covariates that we will adjust for such as age, a squared term for age, gender, BMI, education status, race, lymphocyte count, monocyte count, cotinine level, basophil count, eosinophil count, and neutrophil count. To better understand the data set, we have shown the correlation matrix between all organic pollutants and covariates in Figure \ref{fig:CorrNHANES}. We can see that the environmental exposures are all fairly positively correlated with each other. In particular, the PCBs are highly correlated among themselves. The correlation across chemical types, such as the correlation between PCBs and Dioxins or Furans are lower, though still positively correlated. The correlation between the covariates that we place into our model and the exposures is generally extremely low, and the correlation among the individual covariates is also low, with the exception of a few blood cell types as seen in the upper right of Figure \ref{fig:CorrNHANES}.
\begin{figure}
\caption{Correlation matrix among the 18 exposures and 18 demographic variables used in the final analysis for the NHANES study.}
\label{fig:CorrNHANES}
\end{figure}
As discussed in Section \ref{NHANES}, when we fit the SSGL model to this data set, we identified four main effects (plotted in Figure \ref{fig:MainEffectNHANES}). Our model also identified six interactions as having nonzero parameter estimates. The identified interactions are PCB 10 - PCB 7, Dioxin 1 - PCB 11, Dioxin 2 - PCB 2, Dioxin 2 - Dioxin 1, Furan 1 - PCB 10, and Furan 4 - Furan 3. We see that there are interactions both within a certain type of pollutant (Dioxin and Dioxin, etc.) and across pollutant types (Furan and PCB).
Lastly, looking at Figure \ref{fig:MainEffectNHANES}, we can see that the exposure response curves for the four identified main effects are relatively linear, particularly for PCB 11 and Furan 1. With this in mind, we also ran our SSGL model with one degree of freedom splines for each main effect. Note that this does not require a model that handles grouped covariate structures as the main effects and interactions in this case are both single parameters. Cross-validated error from the model with one degree of freedom is nearly identical to the model with two degrees of freedom, though the linear model selects far more terms. The linear model selects six main effect terms and 20 interaction terms. As the two models provide similar predictive performance but the model with two degrees of freedom is far more parsimonious, we elect to focus on the model with two degrees of freedom.
\section{Proofs of Main Results} \label{App:D}
\subsection{Preliminary Lemmas} \label{App:D1}
Before proving the main results in the paper, we first prove the following lemmas.
\begin{lemma} \label{auxlemma1}
Suppose that $\bm{\beta}_g \in \mathbb{R}^{m_g}$ follows a group lasso density indexed by $\lambda$, i.e., $\bm{\beta}_g \sim \bm{\Psi} ( \bm{\beta}_g \vert \lambda )$. Then
\begin{equation*}
\mathbb{E}( \lVert \bm{\beta}_g \rVert_2^2 ) = \frac{m_g (m_g+1)}{\lambda^2}.
\end{equation*}
\end{lemma}
\begin{proof}
The group lasso density, $\bm{\Psi}(\bm{\beta}_g \vert \lambda )$, is the marginal density of a scale mixture,
\begin{equation*}
\bm{\beta}_g \sim \mathcal{N}_{m_g} ( \mathbf{0}, \tau \bm{I}_{m_g} ), \hspace{.3cm} \tau \sim \mathcal{G} \left( \frac{m_g+1}{2}, \frac{\lambda^2}{2} \right).
\end{equation*}
Therefore, using iterated expectations, we have
\begin{align*}
\mathbb{E}( \lVert \bm{\beta}_g \rVert_2^2 ) & = \mathbb{E}\left[ \mathbb{E}( \lVert \bm{\beta}_g \rVert_2^2 \hspace{.1cm} \vert \hspace{.1cm} \tau ) \right] \\
& = m_g \mathbb{E}( \tau ) \\
& = \frac{m_g (m_g+1) }{\lambda^2}.
\end{align*}
\end{proof}
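Lemma \ref{auxlemma1} can be spot-checked numerically via the scale-mixture representation used in the proof (note that numpy's gamma sampler is parameterized by scale $= 2/\lambda^2$, the reciprocal of the rate $\lambda^2/2$):

```python
import numpy as np

# Monte Carlo spot-check of Lemma 1: if beta_g | tau ~ N(0, tau * I_m)
# with tau ~ Gamma(shape=(m+1)/2, rate=lambda^2/2), then
# E||beta_g||_2^2 = m(m+1)/lambda^2.
rng = np.random.default_rng(0)
m, lam, N = 3, 2.0, 200_000
tau = rng.gamma(shape=(m + 1) / 2, scale=2.0 / lam ** 2, size=N)
beta = rng.standard_normal((N, m)) * np.sqrt(tau)[:, None]
estimate = (beta ** 2).sum(axis=1).mean()
# theoretical value here: 3 * 4 / 2^2 = 3
```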
\begin{lemma} \label{auxlemma2}
Suppose $\sigma^2 > 0, \sigma_0^2 > 0$. Then for any $\epsilon_n \in (0, 1)$ such that $\epsilon_n \rightarrow 0$ as $n \rightarrow \infty$, we have for sufficiently large $n$,
\begin{align*}
\left\{ \lvert \sigma^2 - \sigma_0^2 \rvert \geq 4 \sigma_0^2 \epsilon_n \right\} \subseteq \left\{ \frac{\sigma^2}{\sigma_0^2} > \frac{1 + \epsilon_n}{1 - \epsilon_n} \textrm{ or } \frac{\sigma^2}{\sigma_0^2} < \frac{1 - \epsilon_n}{1 + \epsilon_n} \right\}.
\end{align*}
\end{lemma}
\begin{proof}
For large $n$, $\epsilon_n < 1/2$, so $2 \epsilon_n / (1-\epsilon_n) < 4 \epsilon_n$ and $- 2 \epsilon_n / (1 + \epsilon_n) > -4 \epsilon_n$, and thus,
\begin{align*}
& \lvert \sigma^2 - \sigma_0^2 \rvert \geq 4 \sigma_0^2 \epsilon_n \Rightarrow ( \sigma^2 - \sigma_0^2)/ \sigma_0^2 \geq 4 \epsilon_n \textrm{ or } (\sigma^2 - \sigma_0^2)/\sigma_0^2 \leq -4 \epsilon_n \\
& \qquad \qquad \Rightarrow \frac{\sigma^2}{\sigma_0^2} - 1 > \frac{2 \epsilon_n}{1 - \epsilon_n} \textrm{ or } \frac{\sigma^2}{\sigma_0^2} - 1 < - \frac{2 \epsilon_n}{1 + \epsilon_n} \\
& \qquad \qquad \Rightarrow \frac{\sigma^2}{\sigma_0^2} > \frac{1 + \epsilon_n}{1 - \epsilon_n} \textrm{ or } \frac{\sigma^2}{\sigma_0^2} < \frac{1 - \epsilon_n}{1 + \epsilon_n},
\end{align*}
and hence,
\begin{align*}
\lvert \sigma^2 - \sigma_0^2 \rvert \geq 4 \sigma_0^2 \epsilon_n \hspace{.2cm} \Rightarrow \hspace{.2cm} \frac{\sigma^2}{\sigma_0^2} > \frac{1 + \epsilon_n}{1 - \epsilon_n} \textrm{ or } \frac{\sigma^2}{\sigma_0^2} < \frac{1 - \epsilon_n}{1 + \epsilon_n}.
\end{align*}
\end{proof}
\begin{lemma} \label{auxlemma3}
Suppose that a vector $\bm{z} \in \mathbb{R}^m$ can be decomposed into subvectors, $\bm{z} = [ \bm{z}_1', \ldots, \bm{z}_d' ]'$, where $\sum_{i=1}^{d} \lvert \bm{z}_i \rvert = m$ and $\lvert \bm{z}_i \rvert$ denotes the length of $\bm{z}_i$. Then $\lVert \bm{z} \rVert_2 \leq \sum_{i=1}^{d} \lVert \bm{z}_i \rVert_2$.
\end{lemma}
\begin{proof}
We have
\begin{align*}
\lVert \bm{z} \rVert_2 & = \sqrt{ z_{11}^2 + \ldots + z_{1 \lvert z_1 \rvert}^2 + \ldots + z_{d1}^2 + \ldots + z_{d \lvert z_d \rvert}^2 } \\
& \leq \sqrt{ z_{11}^2 + \ldots + z_{1 \lvert z_1 \rvert}^2 } + \ldots + \sqrt{ z_{d1}^2 + \ldots + z_{d \lvert z_d \rvert}^2 } \\
& = \lVert \bm{z}_1 \rVert_2 + \ldots + \lVert \bm{z}_d \rVert_2.
\end{align*}
\end{proof}
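This is the familiar blockwise subadditivity of the Euclidean norm; a quick numerical illustration:

```python
import numpy as np

# Numerical illustration of Lemma 3: the l2 norm of a concatenated vector
# is bounded by the sum of the l2 norms of its sub-blocks.
rng = np.random.default_rng(0)
blocks = [rng.standard_normal(m) for m in (2, 5, 3)]
z = np.concatenate(blocks)
lhs = np.linalg.norm(z)
rhs = sum(np.linalg.norm(b) for b in blocks)
```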
\subsection{Proofs for Section 3} \label{App:D2}
\begin{proof}[Proof of Proposition \ref{globalmodeseparable}]
This result follows from an adaptation of the arguments of \citet{ZZ12}. The group-specific optimization problem is:
\begin{align}
\widehat{\bm{\beta}}_g = \argmax_{\bm{\beta}_g}\left\{ -\frac{1}{2}\lVert \bm{z}_g-\bm{\beta}_g\rVert_2^2+ \sigma^2 pen_S(\bm{\beta}_g|\theta) \right\}. \label{groupwise}
\end{align}
We first note that the optimization problem \eqref{groupwise} is equivalent to maximizing the objective
\begin{align}
L(\bm{\beta}_g) &= -\frac{1}{2}\lVert \bm{z}_g - \bm{\beta}_g\rVert_2^2+ \sigma^2 pen_S(\bm{\beta}_g|\theta) + \frac{1}{2} \lVert \bm{z}_g\rVert_2^2 \\
&= \lVert \bm{\beta}_g\rVert_2\left[\frac{\bm{\beta}_g^T\bm{z}_g}{\lVert \bm{\beta}_g\rVert_2} - \left(\frac{\lVert \bm{\beta}_g\rVert_2}{2} - \frac{\sigma^2 pen_S(\bm{\beta}_g|\theta)}{\lVert \bm{\beta}_g\rVert_2}\right) \right] \\
&= \lVert \bm{\beta}_g\rVert_2\left[\lVert\bm{z}_g\rVert_2\cos\varphi - \left(\frac{\lVert \bm{\beta}_g\rVert_2}{2} - \frac{\sigma^2 pen_S(\bm{\beta}_g|\theta)}{\lVert \bm{\beta}_g \rVert_2}\right) \right] \label{delta_proof}
\end{align}
where $\varphi$ is the angle between $\bm{z}_g$ and $\bm{\beta}_g$. Then, when $\lVert \bm{z}_g\rVert_2 < \Delta$, the second factorized term of \eqref{delta_proof} is always less than zero, and so $\widehat{\bm{\beta}}_g = \mathbf{0}_{m_g}$ must be the global maximizer of $L$. On the other hand, when the global maximizer $\widehat{\bm{\beta}}_g = \mathbf{0}_{m_g}$, then the second factorized term must always be less than zero, otherwise $\widehat{\bm{\beta}}_g = \mathbf{0}_{m_g}$ would no longer be the global maximizer and so $\lVert \bm{z}_g\rVert_2 < \Delta$.
\end{proof}
\begin{proof}[Proof of Lemma \ref{theta_mean_lemma}]
We have
\begin{align}
\mathbb{E}[\theta |\widehat{\bm{\beta}}] = \frac{\int_0^1 \theta^a (1-\theta)^{b-1}(1-\theta z)^{G-\widehat{q} }\prod_{g=1}^{\widehat{q}} (1-\theta x_g) d\theta}{ \int_0^1 \theta^{a-1} (1-\theta)^{b-1}(1-\theta z)^{G-\widehat{q} }\prod_{g=1}^{\widehat{q}} (1-\theta x_g)d\theta }. \label{thetaexpectation}
\end{align}
When $\lambda_0 \to \infty$, we have $z \to 1$ and $x_g \to -\infty$ for all $g = 1,\dots, \widehat{q}$. Hence,
\begin{align}
\lim_{\lambda_0\to \infty} \mathbb{E}[\theta|\widehat{\bm{\beta}}] &= \lim_{x_g\to -\infty}\frac{\int_0^1 \theta^a (1-\theta)^{b + G-\widehat{q}-1}\prod_{g=1}^{\widehat{q}} (1-\theta x_g)\, d\theta}{ \int_0^1 \theta^{a-1} (1-\theta)^{b + G-\widehat{q}-1}\prod_{g=1}^{\widehat{q}} (1-\theta x_g)\, d\theta } \\
&=\frac{ \int_0^1 \theta^{a + \widehat{q}}(1-\theta)^{b + G - \widehat{q}-1}d\theta }{\int_0^1 \theta^{a + \widehat{q} - 1}(1-\theta)^{b + G - \widehat{q} - 1}d\theta} \\
&= \frac{a + \widehat{q}}{a + b+ G }.
\end{align}
\end{proof}
\subsection{Proofs for Section 6} \label{App:D3}
In this section, we use proof techniques from \cite{NingGhosal2018, SongLiang2017, WeiReichHoppinGhosal2018} rather than the ones in \cite{RockovaGeorge2018}. However, none of these other papers considers \textit{both} continuous spike-and-slab priors for groups of regression coefficients \textit{and} an independent prior on the unknown variance.
\begin{proof}[Proof of Theorem \ref{posteriorcontractiongroupedregression}]
Our proof is based on first principles of verifying Kullback-Leibler (KL) and testing conditions (see e.g., \cite{GhosalGhoshVanDerVaart2000}). We first prove (\ref{l2contraction}) and (\ref{varianceconsistency}).
\noindent \textbf{Part I: Kullback-Leibler conditions.} Let $f \sim \mathcal{N}_n ( \bm{X} \bm{\beta}, \sigma^2 \bm{I}_n) $ and $f_0 \sim \mathcal{N}_n (\bm{X} \bm{\beta}_0, \sigma_0^2 \bm{I}_n )$, and let $\Pi(\cdot)$ denote the prior (\ref{hiermodel}). We first show that for our choice of $\epsilon_n = \sqrt{s_0 \log G / n}$,
\begin{equation} \label{KullbackLeiblercond}
\Pi \left( K (f_0, f) \leq n \epsilon_n^2, V(f_0, f) \leq n \epsilon_n^2 \right) \geq \exp (-C_1 n \epsilon_n^2),
\end{equation}
for some constant $C_1 > 0$, where $K(\cdot, \cdot)$ denotes the KL divergence and $V(\cdot, \cdot)$ denotes the KL variation. The KL divergence between $f_0$ and $f$ is
\begin{equation} \label{KLdiv}
K(f_0, f) = \frac{1}{2} \left[ n \left( \frac{\sigma_0^2}{\sigma^2} \right) - n - n \log \left( \frac{\sigma_0^2}{\sigma^2} \right) + \frac{ \lVert \bm{X} ( \bm{\beta} - \bm{\beta}_0 ) \rVert_2^2}{\sigma^2} \right],
\end{equation}
and the KL variation between $f_0$ and $f$ is
\begin{equation} \label{KLvar}
V(f_0, f) = \frac{1}{2} \left[ n \left( \frac{\sigma_0^2}{\sigma^2} \right)^2 - 2n \left( \frac{\sigma_0^2}{\sigma^2} \right) + n \right] + \frac{\sigma_0^2}{(\sigma^2)^{2}} \lVert \bm{X} ( \bm{\beta} - \bm{\beta}_0 ) \rVert_2^2.
\end{equation}
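Both displays are the standard closed forms for Gaussian densities. As a sanity check outside the proof, the KL divergence formula with $n = 1$ can be compared against a Monte Carlo estimate of $\mathbb{E}_{f_0}[\log f_0 - \log f]$; a Python sketch with arbitrary test means and variances:

```python
import math
import random

def kl_gaussian(n, sigma0_sq, sigma_sq, shift_sq):
    # Closed form of K(f0, f) above, with shift_sq = ||X(beta - beta_0)||_2^2
    r = sigma0_sq / sigma_sq
    return 0.5 * (n * r - n - n * math.log(r) + shift_sq / sigma_sq)

# Monte Carlo estimate of E_{f0}[log f0(Y) - log f(Y)] for n = 1
random.seed(0)
mu0, s0_sq, mu, s_sq = 0.0, 1.0, 1.0, 2.0
N = 200_000
est = 0.0
for _ in range(N):
    y = random.gauss(mu0, math.sqrt(s0_sq))
    log_f0 = -0.5 * math.log(2 * math.pi * s0_sq) - (y - mu0) ** 2 / (2 * s0_sq)
    log_f = -0.5 * math.log(2 * math.pi * s_sq) - (y - mu) ** 2 / (2 * s_sq)
    est += (log_f0 - log_f) / N

exact = kl_gaussian(1, s0_sq, s_sq, (mu - mu0) ** 2)
assert abs(est - exact) < 0.02  # Monte Carlo error is O(1/sqrt(N))
```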
Define the two events $\mathcal{A}_1$ and $\mathcal{A}_2$ as follows:
\begin{equation} \label{eventA1}
\begin{aligned}
\mathcal{A}_1 = \left\{ \sigma^2: n \left( \frac{\sigma_0^2}{\sigma^2} \right) - n - n \log \left( \frac{\sigma_0^2}{\sigma^2} \right) \leq n \epsilon_n^2, \right. \\
\left. n \left( \frac{\sigma_0^2}{\sigma^2} \right)^2 - 2n \left( \frac{\sigma_0^2}{\sigma^2} \right) + n \leq n \epsilon_n^2 \right\}
\end{aligned}
\end{equation}
and
\begin{equation} \label{eventA2}
\begin{aligned}
\mathcal{A}_2 = \left\{ (\bm{\beta}, \sigma^2): \frac{ \lVert \bm{X} ( \bm{\beta} - \bm{\beta}_0 ) \rVert_2^2}{\sigma^2} \leq n \epsilon_n^2, \right. \\
\left. \frac{\sigma_0^2}{(\sigma^2)^{2}} \lVert \bm{X} ( \bm{\beta} - \bm{\beta}_0 ) \rVert_2^2 \leq n \epsilon_n^2/2 \right\}.
\end{aligned}
\end{equation}
Following from (\ref{KullbackLeiblercond})-(\ref{eventA2}), we have $\Pi ( K(f_0, f) \leq n \epsilon_n^2, V(f_0, f) \leq n \epsilon_n^2 ) \geq \Pi ( \mathcal{A}_2 \vert \mathcal{A}_1 ) \Pi ( \mathcal{A}_1)$. We derive lower bounds for $\Pi(\mathcal{A}_1)$ and $\Pi (\mathcal{A}_2 \vert \mathcal{A}_1)$ separately. Noting that we may rewrite $\mathcal{A}_1$ as
\begin{align*}
\mathcal{A}_1 = \left\{ \sigma^2: \frac{\sigma_0^2}{\sigma^2} - 1 - \log \left( \frac{\sigma_0^2}{\sigma^2} \right) \leq \epsilon_n^2, \hspace{.3cm} \left( \frac{\sigma_0^2}{\sigma^2} - 1 \right)^2 \leq \epsilon_n^2 \right\},
\end{align*}
and expanding $\log(\sigma_0^2 / \sigma^2)$ in powers of $1-\sigma_0^2/\sigma^2$ to get $\sigma_0^2 / \sigma^2-1-\log(\sigma_0^2/\sigma^2) \sim (1-\sigma_0^2/\sigma^2)^2/2$, it is clear that $\mathcal{A}_1 \supset \mathcal{A}_1^{\star}$, where $\mathcal{A}_1^{\star} = \{ \sigma^2: \sigma_0^2 / ( \epsilon_n + 1) \leq \sigma^2 \leq \sigma_0^2 \}$. Thus, since $\sigma^2 \sim \mathcal{IG}(c_0, d_0)$, we have for sufficiently large $n$,
\begin{align*} \label{A1lowerbound}
\Pi(\mathcal{A}_1) \geq \Pi(\mathcal{A}_1^{\star}) & \asymp \displaystyle \int_{\sigma_0^2/( \epsilon_n + 1)}^{\sigma_0^2} (\sigma^2)^{-c_0-1} e^{-d_0/ \sigma^2} d \sigma^2 \\
& \geq \frac{\sigma_0^2 \epsilon_n}{\epsilon_n + 1} (\sigma_0^2)^{-c_0-1} e^{-d_0 (\epsilon_n + 1) / \sigma_0^2} , \numbereqn
\end{align*}
where we bounded the integrand from below by its infimum over the interval of integration and multiplied by the length of that interval. Thus, from (\ref{A1lowerbound}), we have
\begin{equation} \label{neglogA1upper}
- \log \Pi (\mathcal{A}_1) \lesssim \epsilon_n + 1 + \log(1/\epsilon_n) \lesssim n \epsilon_n^2,
\end{equation}
since $\log(1/\epsilon_n) \lesssim \log n \lesssim \log G \leq n \epsilon_n^2$ and $n\epsilon_n^2 \rightarrow \infty$. Next, we consider $\Pi (\mathcal{A}_2 \vert \mathcal{A}_1)$. We have
\begin{align*}
\frac{\sigma_0^2}{(\sigma^2)^2} \lVert \bm{X} ( \bm{\beta} - \bm{\beta}_0) \rVert_2^2 & = \bigg| \bigg| \frac{ \bm{X} ( \bm{\beta} - \bm{\beta}_0 )}{\sigma} \bigg| \bigg|_2^2 \left( \frac{\sigma_0^2}{\sigma^2} - 1 \right) + \bigg| \bigg| \frac{ \bm{X} ( \bm{\beta} - \bm{\beta}_0 )}{\sigma} \bigg| \bigg|_2^2,
\end{align*}
and conditional on $\mathcal{A}_1$, we have that the previous display is bounded above by
\begin{align*}
\bigg| \bigg| \frac{ \bm{X} ( \bm{\beta} - \bm{\beta}_0 )}{\sigma} \bigg| \bigg|_2^2 \left( \epsilon_n + 1 \right) < \frac{2}{\sigma^2} \lVert \bm{X} ( \bm{\beta} - \bm{\beta}_0 ) \rVert_2^2 ,
\end{align*}
for large $n$ (since $\epsilon_n < 1$ when $n$ is large). Since $\mathcal{A}_1 \supset \mathcal{A}_1^{\star}$, where $\mathcal{A}_1^{\star}$ was defined earlier, the left-hand sides of both inequalities in \eqref{eventA2} can be bounded above by a constant multiple of $\lVert \bm{X} (\bm{\beta}-\bm{\beta}_0) \rVert_2^2$, conditional on $\mathcal{A}_1$. Therefore, for some constant $b_1 > 0$, $\Pi( \mathcal{A}_2 \vert \mathcal{A}_1)$ is bounded below by
\begin{align*} \label{A2givenA1LowerBound1}
& \Pi (\mathcal{A}_2 \vert \mathcal{A}_1 ) \geq \Pi \left( \lVert \bm{X} (\bm{\beta} - \bm{\beta}_0 ) \rVert_2^2 \leq \frac{b_1^2 n \epsilon_n^2}{2} \right) \\
& \geq \Pi \left( \lVert \bm{\beta} - \bm{\beta}_0 \rVert_2^2 \leq \frac{b_1^2 \epsilon_n^2}{2 k n^{\alpha-1} } \right) \\
& \geq \int_{0}^{1} \left \{ \Pi_{S_0} \left( \lVert \bm{\beta}_{S_0} - \bm{\beta}_{0 S_0} \rVert_2^2 \leq \frac{ b_1^2 \epsilon_n^2}{ 4 k n^{\alpha} } \bigg| \theta \right) \right\} \left\{ \Pi_{S_0^c} \left( \lVert \bm{\beta}_{S_0^c} \rVert_2^2 \leq \frac{ b_1^2 \epsilon_n^2}{ 4 k n^{\alpha} } \bigg| \theta \right) \right\} d \pi(\theta), \numbereqn
\end{align*}
where we used Assumption \ref{A3} in the second inequality, and in the third inequality, we used the fact that conditional on $\theta$, the SSGL prior is separable, so $\pi(\bm{\beta} | \theta )= \pi_{S_0}(\bm{\beta} | \theta ) \pi_{S_0^c}(\bm{\beta} | \theta )$. We proceed to lower-bound each bracketed integrand term in (\ref{A2givenA1LowerBound1}) separately. Changing the variable $\bm{\beta} - \bm{\beta}_0 \rightarrow \bm{b}$ and using the fact that $\pi_{S_0} (\bm{\beta} | \theta ) > \theta^{s_0} \prod_{g \in S_0} \left[ C_g \lambda_1^{m_g} e^{-\lambda_1 \lVert \bm{\beta}_g \rVert_2} \right]$ and $\lVert \bm{z} \rVert_2 \leq \lVert \bm{z} \rVert_1$ for any vector $\bm{z}$, we have as a lower bound for the first term in (\ref{A2givenA1LowerBound1}),
\begin{align*} \label{A2givenA1LowerBound2}
& \theta^{s_0} e^{-\lambda_1 \lVert \bm{\beta}_{0 S_0} \rVert_2 } \prod_{g \in S_0} C_g \left\{ \displaystyle \int_{ \lVert \bm{b}_{g} \rVert_1 \leq \frac{b_1 \epsilon_n }{ 2 s_0 \sqrt{k n^{\alpha} } } } \lambda_1^{m_g} e^{-\lambda_1 \lVert \bm{b}_{g} \rVert_1 } d \bm{b}_g \right\}. \numbereqn
\end{align*}
Each of the integral terms in (\ref{A2givenA1LowerBound2}) is the probability of the first $m_g$ events of a Poisson process happening before time $b_1 \epsilon_n / 2 s_0 \sqrt{k n^{\alpha}}$. Using similar arguments as those in the proof of Lemma 5.1 of \cite{NingGhosal2018}, we obtain as a lower bound for the product of integrals in (\ref{A2givenA1LowerBound2}),
\begin{align*} \label{A2givenA1LowerBound3}
& \displaystyle \prod_{g \in S_0} C_g \left\{ \displaystyle \int_{ \lVert \bm{b}_{g} \rVert_1 \leq \frac{b_1 \epsilon_n }{ 2 s_0 \sqrt{ k n^{\alpha} } } } \lambda_1^{m_g} e^{-\lambda_1 \lVert \bm{b}_{g} \rVert_1 } d \bm{b}_g \right\} \\
& \qquad \geq \displaystyle \prod_{g \in S_0} C_g e^{-\lambda_1 b_1 \epsilon_n / 2 s_0 \sqrt{ k n^{\alpha}}} \frac{1}{m_g !} \left( \frac{ \lambda_1 b_1 \epsilon_n}{ s_0 \sqrt{ k n^{\alpha}}} \right)^{m_g} \\
& \qquad = e^{- \lambda_1 b_1 \epsilon_n / 2 \sqrt{ k n^{\alpha}}} \displaystyle \prod_{g \in S_0} \frac{C_g}{m_g !} \left( \frac{ \lambda_1 b_1 \epsilon_n}{ s_0 \sqrt{ k n^{\alpha} }} \right)^{m_g}. \numbereqn
\end{align*}
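The Poisson-process fact invoked here reduces to the elementary bound $\Pr(N(t) \geq m) = \sum_{j \geq m} e^{-\lambda t}(\lambda t)^j / j! \geq e^{-\lambda t}(\lambda t)^m / m!$, i.e., keeping only the first term of the tail sum. A quick numerical check, separate from the proof and with arbitrary parameter values:

```python
import math

def poisson_at_least(lam_t, m):
    # P(Poisson(lam_t) >= m), computed exactly via the complementary sum
    return 1.0 - sum(math.exp(-lam_t) * lam_t ** j / math.factorial(j) for j in range(m))

def first_term_bound(lam_t, m):
    # The single-term lower bound e^{-lam t} (lam t)^m / m!
    return math.exp(-lam_t) * lam_t ** m / math.factorial(m)

# P(at least m events of a rate-lam Poisson process by time t) >= first tail term
for lam_t in (0.5, 1.0, 3.0, 10.0):
    for m in range(0, 8):
        assert poisson_at_least(lam_t, m) >= first_term_bound(lam_t, m) - 1e-15
```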
Combining (\ref{A2givenA1LowerBound2})-(\ref{A2givenA1LowerBound3}), we have the following lower bound for the first bracketed term in (\ref{A2givenA1LowerBound1}):
\begin{align} \label{A2givenA1LowerBound5}
\theta^{s_0} e^{-\lambda_1 \lVert \bm{\beta}_{0 S_0} \rVert_2 } e^{- \lambda_1 b_1 \epsilon_n / 2 \sqrt{ k n^{\alpha}}} \displaystyle \prod_{g \in S_0} \frac{C_g}{m_g !} \left( \frac{ \lambda_1 b_1 \epsilon_n}{ s_0 \sqrt{ k n^{\alpha}}} \right)^{m_g}.
\end{align}
Now, noting that $\pi_{S_0^c} ( \bm{\beta} | \theta ) > (1-\theta)^{G-s_0} \prod_{g \in S_0^c} \left[ C_g \lambda_0^{m_g} e^{-\lambda_0 \lVert \bm{\beta}_g \rVert_2} \right]$, we further bound the second bracketed term in (\ref{A2givenA1LowerBound1}) from below. Let $\Check{\pi} ( \cdot )$ denote the density, $\Check{\pi} (\bm{\beta}_g) = C_g \lambda_0^{m_g} e^{-\lambda_0 \lVert \bm{\beta}_g \rVert_2}$. We have
\begin{align*} \label{A2givenA1LowerBound6}
&\Pi_{S_0^c} \left( \lVert \bm{\beta}_{S_0^c} \rVert_2^2 \leq \frac{ b_1^2 \epsilon_n^2}{ 4 k n^{\alpha} } \bigg| \theta \right) > (1 - \theta)^{G-s_0} \displaystyle \prod_{g \in S_0^c} \Check{\pi} \left( \lVert \bm{\beta}_g \rVert_2^2 \leq \frac{ b_1^2 \epsilon_n^2}{4 k n^{\alpha} (G-s_0)} \right) \\
& \qquad \qquad \geq (1 - \theta)^{G-s_0} \displaystyle \prod_{g \in S_0^c} \left[ 1 - \frac{4 k n^{\alpha} (G- s_0) \mathbb{E}_{\Check{\Pi}} \left( \lVert \bm{\beta}_g \rVert_2^2 \right) }{ b_1^2 \epsilon_n^2} \right] \\
& \qquad \qquad = (1-\theta)^{G-s_0} \displaystyle \prod_{g \in S_0^c} \left[ 1 - \frac{4 k n^{\alpha} (G- s_0) m_g (m_g+1) }{ \lambda_0^2 b_1^2 \epsilon_n^2 } \right] \\
& \qquad \qquad \geq (1-\theta)^{G-s_0} \left[ 1 - \frac{4 k n^{\alpha} G m_{\max} (m_{\max} + 1)}{ \lambda_0^2 b_1^2 \epsilon_n^2 } \right]^{G-s_0}, \numbereqn
\end{align*}
where we used an application of the Markov inequality and Lemma \ref{auxlemma1} in the second line. Combining \eqref{A2givenA1LowerBound5}-\eqref{A2givenA1LowerBound6} gives the following lower bound for \eqref{A2givenA1LowerBound1}:
\begin{align*} \label{A2givenA1LowerBound7}
\Pi ( \mathcal{A}_2 | \mathcal{A}_1 ) & \geq \left\{ e^{-\lambda_1 \lVert \bm{\beta}_{0 S_0} \rVert_2} e^{-\lambda_1 b_1 \epsilon_n / 2 \sqrt{k n^{\alpha}} } \prod_{g \in S_0} \frac{C_g}{m_g!} \left( \frac{\lambda_1 b_1 \epsilon_n}{s_0 \sqrt{k n^{\alpha}}} \right)^{m_g} \right\} \\
& \times \left\{ \int_{0}^{1} \theta^{s_0} (1-\theta)^{G-s_0} \left[ 1 - \frac{4 k n^{\alpha} G m_{\max} (m_{\max}+1)}{\lambda_0^2 b_1^2 \epsilon_n^2} \right]^{G-s_0} d \pi(\theta) \right\}. \numbereqn
\end{align*}
Let us consider the second bracketed term in \eqref{A2givenA1LowerBound7} first. By assumption, $\lambda_0 = (1-\theta) / \theta$. Further, $\lambda_0^2 = (1-\theta)^2 / \theta^2$ is monotonically decreasing in $\theta$ for $\theta \in (0, 1)$. Hence, for constant $c > 2$ in the $\mathcal{B} ( 1, G^c)$ prior on $\theta$, a lower bound for the second bracketed term in \eqref{A2givenA1LowerBound7} is
\begin{align*} \label{A2givenA1LowerBound8}
& \int_{1/(2G^c+1)}^{1/(G^c+1)} \theta^{s_0} (1-\theta)^{G-s_0} \left[ 1 - \frac{4 k n^{\alpha} G m_{\max} (m_{\max} +1 )}{\lambda_0^2 b_1^2 \epsilon_n^2} \right]^{G-s_0} d \pi(\theta) \\
& \geq (2G^c + 1)^{-s_0} \left[ 1 - \frac{ 4 k n^{\alpha} G m_{\max} (m_{\max} + 1)}{G^{2c} b_1^2 \epsilon_n^2} \right]^{G-s_0} \int_{1/(2G^c+1)}^{1/(G^c+1)} (1-\theta)^{G-s_0} d \pi (\theta) \\
& \gtrsim (2 G^c+1)^{-s_0} \left[ 1 - \frac{1}{G-s_0} \right]^{G-s_0} \int_{1/(2G^c+1)}^{1/(G^c+1)} (1-\theta)^{G- s_0} d \pi(\theta) \\
& \gtrsim (2G^c + 1)^{-s_0} G^{-c} \int_{1/(2G^c+1)}^{1/(G^c+1)} (1-\theta)^{G^c+G-s_0-1} d \theta \\
& = (2 G^c + 1)^{-s_0} G^{-c} (G^c + G - s_0)^{-1} \\
& \qquad \qquad \times\left[ \left( 1 - \frac{1}{2G^c +1} \right)^{G^c + G - s_0} - \left( 1 - \frac{1}{G^c + 1} \right)^{G^c + G - s_0} \right] \\
& \gtrsim (2G^{c} + 1)^{-s_0} G^{-c} (G^c + G - s_0)^{-1}, \numbereqn
\end{align*}
where in the third line, we used our assumptions about the growth rates for $m_{\max}$, $G$, and $s_0$ in Assumptions \ref{A1}-\ref{A2} and the fact that $c > 2$. In the fourth line, we used the fact that $(1- 1/x)^x \rightarrow e^{-1}$ as $x \rightarrow \infty$ and $\theta \sim \mathcal{B}(1, G^{c})$. In the sixth line, we used the fact that the bracketed term in the fifth line can be bounded below by $e^{-1} - e^{-2}$ for sufficiently large $n$.
Combining \eqref{A2givenA1LowerBound7}-\eqref{A2givenA1LowerBound8}, we obtain for sufficiently large $n$,
\begin{align*} \label{neglogpiA2givenA1}
- \log \Pi ( \mathcal{A}_2 \vert \mathcal{A}_1 ) \lesssim & \hspace{.2cm} \lambda_1 \lVert \bm{\beta}_{0 S_0} \rVert_2 + \frac{ \lambda_1 b_1 \epsilon_n }{2 \sqrt{ k n^{\alpha}}} + \displaystyle \sum_{g \in S_0} \log (m_g !) - \displaystyle \sum_{g \in S_0} \log C_g \\
& + \displaystyle \sum_{g \in S_0} m_g \log \left( \frac{s_0 \sqrt{ k n^{\alpha}}}{\lambda_1 b_1 \epsilon_n} \right) + s_0 \log(2G^{c} + 1) + c \log G \\
& + \log (G^c + G - s_0 ). \numbereqn
\end{align*}
We examine each of the terms in (\ref{neglogpiA2givenA1}) separately. By Assumptions \ref{A1} and \ref{A5} and the fact that $\lambda_1 \asymp 1/n$, we have
\begin{align*}
\lambda_1 \lVert \bm{\beta}_{0 S_0} \rVert_2 \leq \lambda_1 \sqrt{s_0 m_{\max}} \lVert \bm{\beta}_{0 S_0} \rVert_{\infty} \lesssim s_0 \log G \lesssim n \epsilon_n^2,
\end{align*}
and
\begin{align*}
\frac{\lambda_1 b_1 \epsilon_n}{2 \sqrt{ k n^{\alpha}}} \lesssim \epsilon_n \lesssim n \epsilon_n^2.
\end{align*}
Next, using the fact that $x! \leq x^x$ for $x \in \mathbb{N}$ together with Assumption \ref{A1}, we have
\begin{align*}
\displaystyle \sum_{g \in S_0} \log (m_g !) \leq s_0 \log(m_{\max} !) \leq s_0 m_{\max} \log (m_{\max}) \leq s_0 m_{\max} \log n \lesssim n \epsilon_n^2.
\end{align*}
Using the fact that the normalizing constant is $C_g = 2^{-m_g} \pi^{-(m_g - 1)/2} [ \Gamma ((m_g+1)/2) ]^{-1}$, we also have
\begin{align*}
\displaystyle \sum_{g \in S_0} - \log C_g = & \displaystyle \sum_{g \in S_0} \left\{ m_g \log 2 + \left( \frac{m_g - 1}{2} \right) \log \pi + \log \left[ \Gamma \left( \frac{m_g+1}{2} \right) \right] \right\} \\
& \leq s_0 m_{\max}( \log 2 + \log \pi ) + \displaystyle \sum_{g \in S_0} \log ( m_g ! ) \\
& \lesssim s_0 m_{\max}( \log 2 + \log \pi ) + s_0 m_{\max} \log n \\
&\lesssim s_0 \log G \\
& \lesssim n \epsilon_n^2,
\end{align*}
where we used the fact that $\Gamma ( (m_g+1)/2 ) \leq \Gamma(m_g + 1) = m_g !$. Next, since $\lambda_1 \asymp 1/n$ and using Assumption \ref{A1} that $m_{\max} = O(\log G / \log n)$, we have
\begin{align*}
\displaystyle \sum_{g \in S_0} m_g \log \left( \frac{s_0 \sqrt{ k n^{\alpha} }}{\lambda_1 b_1 \epsilon_n} \right) & \lesssim \displaystyle s_0 m _{\max} \log \left( \frac{s_0 n^{ \alpha /2+1} \sqrt{k}}{ b_1 \epsilon_n^2} \right) \\
& = s_0 m_{\max} \log \left( \frac{n^{\alpha/2+2} \sqrt{k}}{b_1 \log G} \right) \\
& \lesssim s_0 m_{\max} \log n \\
& \lesssim s_0 \log G \\
& \lesssim n \epsilon_n^2.
\end{align*}
Finally, it is clear by the definition of $n \epsilon_n^2$ and the fact that $c > 2$ is a constant that
\begin{align*}
s_0 \log (2G^c + 1) + c \log G + \log (G^c + G - s_0 ) \asymp s_0 \log G = n \epsilon_n^2.
\end{align*}

Combining all of the above, together with (\ref{neglogpiA2givenA1}), we have
\begin{equation} \label{neglogpiA2givenA1pt2}
- \log \Pi ( \mathcal{A}_2 \vert \mathcal{A}_1 ) \lesssim n \epsilon_n^2.
\end{equation}
By (\ref{neglogA1upper}) and (\ref{neglogpiA2givenA1pt2}), we may choose a large constant $C_1 > 0$, so that
\begin{equation*}
\Pi( \mathcal{A}_2 \vert \mathcal{A}_1 ) \Pi (\mathcal{A}_1) \gtrsim \exp(- C_1 n \epsilon_n^2 / 2) \exp(- C_1 n \epsilon_n^2 / 2) = \exp(-C_1 n \epsilon_n^2),
\end{equation*}
so the Kullback-Leibler condition (\ref{KullbackLeiblercond}) holds.
\noindent \textbf{Part II: Testing conditions.} To complete the proof, we show the existence of a sieve $\mathcal{F}_n$ such that
\begin{equation} \label{testingcond1}
\Pi( \mathcal{F}_n^c) \leq \exp(-C_2 n \epsilon_n^2),
\end{equation}
for some constant $C_2 > C_1+2$, where $C_1$ is the constant from (\ref{KullbackLeiblercond}), and a sequence of test functions $\phi_n \in [0,1]$ such that
\begin{equation} \label{testingcond2}
\mathbb{E}_{f_0} \phi_n \leq e^{-C_4 n \epsilon_n^2},
\end{equation}
and
\begin{equation} \label{testingcond3}
\displaystyle \sup_{ \begin{array}{rl} f \in \mathcal{F}_n: & \lVert \bm{\beta} - \bm{\beta}_0 \rVert_2 \geq (3+\sqrt{\nu_1}) \sigma_0 \epsilon_n, \\ & \textrm{ or } \lvert \sigma^2 - \sigma_0^2 \rvert \geq 4 \sigma_0^2 \epsilon_n \end{array} } \mathbb{E}_f (1 - \phi_n) \leq e^{-C_4 n \epsilon_n^2},
\end{equation}
for some $C_4 > 0$, where $\nu_1$ is from Assumption \ref{A4}. Recall that $\omega_g \equiv \omega_g(\lambda_0, \lambda_1, \theta) = \frac{1}{\lambda_0 - \lambda_1} \log \left[ \frac{1-\theta}{\theta} \frac{\lambda_0^{m_g}}{\lambda_1^{m_g}} \right]$. Choose $C_3 \geq C_1+2+\log 3$, and consider the sieve,
\begin{equation} \label{sievedef}
\mathcal{F}_n = \left\{ f: \lvert \bm{\gamma} (\bm{\beta}) \rvert \leq C_3 s_0, 0 < \sigma^2 \leq G^{C_3 s_0 / c_0} \right\},
\end{equation}
where $c_0$ is from the $\mathcal{IG}(c_0, d_0)$ prior on $\sigma^2$ and $\lvert \bm{\gamma} (\bm{\beta}) \rvert$ denotes the generalized dimensionality (\ref{generalizeddimensionality}).
We first verify (\ref{testingcond1}). We have
\begin{equation} \label{sievecomplement1}
\Pi (\mathcal{F}_n^c) \leq \Pi \left( \lvert \bm{\gamma} (\bm{\beta}) \rvert > C_3 s_0 \right) + \Pi \left( \sigma^2 > G^{C_3 s_0 /c_0} \right).
\end{equation}
We focus on bounding each of the terms in (\ref{sievecomplement1}) separately. First, let $\theta_0 = C_3 s_0 \log G / G^c$, where $c > 2$ is the constant in the $\mathcal{B}(1, G^c)$ prior on $\theta$. As in the proof of Theorem 6.3 in \citet{RockovaGeorge2018}, we have $\pi(\bm{\beta}_g | \theta) < 2 \theta C_g \lambda_1^{m_g} e^{- \lambda_1 \lVert \bm{\beta}_g \rVert_2}$ for all $\lVert \bm{\beta}_g \rVert_2 > \omega_g$. We have for any $\theta \leq \theta_0$ that
\begin{align*} \label{sievecomplement2}
\Pi ( | \bm{\gamma} ( \bm{\beta} ) | > C_3 s_0 | \theta ) & \leq \displaystyle \sum_{S: |S| > C_3 s_0 } 2^{|S|} \theta_0^{|S|} \displaystyle \int_{ \lVert \bm{\beta}_g \rVert_2 > \omega_g; g\in S} C_g \lambda_1^{m_g} e^{-\lambda_1 \lVert \bm{\beta}_g \rVert_2} d \bm{\beta}_S \\
& \qquad \times \displaystyle \int_{ \lVert \bm{\beta}_g \rVert_2 \leq \omega_g; g \in S^c} \Pi_{S^c} ( \bm{\beta} ) d \bm{\beta}_{S^c} \\
& \lesssim \displaystyle \sum_{S: |S| > C_3 s_0 } \theta_0^{|S|}, \numbereqn
\end{align*}
where we used the assumption that $\lambda_1 \asymp 1/n$, the definition of $\omega_g$, and the fact that $\theta \leq \theta_0$ to bound the first integral term from above by $ \prod_{g \in S} (1/n)^{m_g} \leq n^{-|S|}$, and we bounded the second integral term above by one. We then have
\begin{align*} \label{sievecomplement2-pt2}
& \Pi ( | \bm{\gamma} (\bm{\beta} ) | > C_3 s_0 ) = \int_{0}^{1} \Pi ( | \bm{\gamma} (\bm{\beta} ) | > C_3 s_0 | \theta) d \pi (\theta) \\
& \qquad \leq \int_{0}^{\theta_0} \Pi ( | \bm{\gamma} ( \bm{\beta} ) | > C_3 s_0 | \theta) d \pi (\theta) + \Pi ( \theta > \theta_0 ). \numbereqn
\end{align*}
Note that since $s_0 = o(n / \log G)$ by Assumption \ref{A1}, $G \gg n$, and $c > 2$, we have $\theta_0 \leq C_3 n / G^c < 1 / G^2$ for sufficiently large $n$. Following from \eqref{sievecomplement2}, we thus have that for sufficiently large $n$,
\begin{align*} \label{sievecomplement2-pt3}
& \int_{0}^{\theta_0} \Pi ( | \bm{\gamma} ( \bm{\beta} ) | > C_3 s_0 | \theta) d \pi (\theta) \leq \sum_{S: |S| > C_3 s_0} \theta_0^{|S|} \\
& \qquad \leq \sum_{k = \lfloor C_3 s_0 \rfloor + 1}^{G} { G \choose k} \left( \frac{1}{G^2} \right)^{k} \\
& \qquad \leq \sum_{k= \lfloor C_3 s_0 \rfloor + 1}^{G} \left( \frac{e}{k G} \right)^{k} \\
& \qquad < \displaystyle \sum_{k = \lfloor{C_3 s_0} \rfloor+1}^{G} \left( \frac{e}{G (\lfloor{C_3 s_0} \rfloor +1)} \right)^{k} \\
& \qquad = \frac {\left( \frac{e}{G(\lfloor{C_3 s_0} \rfloor+1)} \right)^{\lfloor{C_3 s_0} \rfloor+1} - \left( \frac{e}{ G (\lfloor{C_3 s_0} \rfloor +1)} \right)^{G+1} }{ 1 - \frac{e}{G(\lfloor{C_3 s_0} \rfloor+1)} } \\
& \qquad \lesssim G^{- ( \lfloor C_3 s_0 \rfloor+1 )} \\
& \qquad \lesssim \exp \left( -C_3 n \epsilon_n^2 \right), \numbereqn
\end{align*}
where we used the inequality ${ G \choose k } \leq (e G / k)^k$ in the third line of the display.
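Both ingredients of this display, the binomial bound $\binom{G}{k} \leq (eG/k)^k$ and the closed form for the truncated geometric series, are easy to verify numerically; a small Python sketch with arbitrary test values, offered as an illustration only:

```python
import math

# Binomial coefficient bound: C(G, k) <= (e G / k)^k
G = 50
for k in range(1, G + 1):
    assert math.comb(G, k) <= (math.e * G / k) ** k

# Truncated geometric series: sum_{k=K+1}^{G} r^k = (r^{K+1} - r^{G+1}) / (1 - r)
r, K = 0.01, 5
lhs = sum(r ** k for k in range(K + 1, G + 1))
rhs = (r ** (K + 1) - r ** (G + 1)) / (1 - r)
assert math.isclose(lhs, rhs, rel_tol=1e-12)
```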
Next, since $\theta \sim \mathcal{B}(1, G^{c})$, we have
\begin{align*} \label{sievecomplement2-pt4}
\Pi ( \theta > \theta_0 ) & = (1 - \theta_0)^{G^c} \\
& = \left( 1 - \frac{ C_3 s_0 \log G}{G^c} \right)^{G^c} \\
& \leq e^{- C_3 s_0 \log G} \\
& = e^{-C_3 n \epsilon_n^2}. \numbereqn
\end{align*}
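The inequality in the third line of this display is the elementary bound $(1-x)^N \leq e^{-xN}$ applied with $x = \theta_0$ and $N = G^c$. A quick numerical illustration, with arbitrary values of $(G, c, C_3, s_0)$ chosen only for this check:

```python
import math

# Check (1 - theta0)^{G^c} <= exp(-C3 * s0 * log G), via (1 - x)^N <= e^{-xN}
G, c, C3, s0 = 200, 3, 2.0, 4
theta0 = C3 * s0 * math.log(G) / G ** c
lhs = (1.0 - theta0) ** (G ** c)
rhs = math.exp(-C3 * s0 * math.log(G))  # equals G^{-C3 s0}
assert lhs <= rhs
```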
Combining \eqref{sievecomplement2-pt2}-\eqref{sievecomplement2-pt4}, we have that
\begin{align} \label{sievecomplement3}
\Pi ( | \bm{\gamma} (\bm{\beta} ) | > C_3 s_0 ) \leq 2 e^{-C_3 n \epsilon_n^2}.
\end{align}
Finally, we have as a bound for the second term in \eqref{sievecomplement1},
\begin{align*} \label{sievecomplement4}
\Pi \left( \sigma^2 > G^{C_3 s_0/c_0} \right) & = \displaystyle \int_{G^{C_3 s_0 / c_0}}^{\infty} \frac{d_0^{c_0}}{\Gamma(c_0)} (\sigma^2)^{-c_0-1} e^{-d_0 / \sigma^2} d \sigma^2 \\
& \lesssim \int_{G^{C_3 s_0/c_0}}^{\infty} (\sigma^2)^{-c_0-1} d \sigma^2 \\
& \asymp G^{-C_3 s_0} \\
& \lesssim \exp(-C_3 n \epsilon_n^2). \numbereqn
\end{align*}
Combining (\ref{sievecomplement1})-(\ref{sievecomplement4}), we have
\begin{align*}
\Pi (\mathcal{F}_n^c) \leq 3 \exp \left( - C_3 n \epsilon_n^2 \right) = \exp \left( -C_3 n \epsilon_n^2 + \log 3 \right),
\end{align*}
and so given our choice of $C_3$, $\Pi(\mathcal{F}_n^c)$ is asymptotically bounded from above by $\exp(-C_2 n \epsilon_n^2)$ for some $C_2 > C_1+2$. This proves (\ref{testingcond1}).
We now proceed to prove (\ref{testingcond2}). Our proof is based on the technique used in \citet{SongLiang2017} with suitable modifications. For $\xi \subset \{1, \ldots, G \}$, let $\bm{X}_{\xi}$ denote the submatrix of $\bm{X}$ consisting of the groups of columns indexed by $\xi$, where $\lvert \xi \rvert \leq \bar{p}$ and $\bar{p}$ is from Assumption \ref{A4}. Let $\widehat{\bm{\beta}}_{\xi} = ( \bm{X}_{\xi}^T \bm{X}_{\xi})^{-1} \bm{X}_{\xi}^T \bm{Y}$ and $\bm{\beta}_{0 \xi}$ denote the subvector of $\bm{\beta}_0$ with groups indexed by $\xi$. Let $m_{\xi} = \sum_{g \in \xi} m_g$, and let $\widehat{\sigma}_{\xi}^2 = \lVert \bm{Y} - \bm{X}_\xi \widehat{\bm{\beta}}_{\xi} \rVert_2^2 / (n - m_{\xi} )$. Note that $\widehat{\bm{\beta}}_{\xi}$ and $\widehat{\sigma}_{\xi}^2$ both exist and are unique because of Assumptions \ref{A1}, \ref{A2}, and \ref{A4} (which combined, gives us that $m_{\xi} = o(n)$).
Let $\widetilde{p}$ be an integer satisfying $\widetilde{p} \asymp s_0$ and $\widetilde{p} \leq \bar{p} - s_0$, where $\bar{p}$ is from Assumption \ref{A4}, and the specific choice of $\widetilde{p}$ will be given below. Recall that $S_0$ is the set of true nonzero groups with cardinality $s_0 = \lvert S_0 \rvert$. Similar to \cite{SongLiang2017}, we consider the test function $\phi_n = \max \{ \phi_n', \tilde{\phi}_n \}$, where
\begin{equation} \label{testfunction}
\begin{array}{ll}
\phi_n' = \displaystyle \max_{\xi \supset S_0, \lvert \xi \rvert \leq \widetilde{p}+s_0} 1 \left\{ \lvert \widehat{\sigma}_{\xi}^2 - \sigma_0^2 \rvert \geq \sigma_0^2 \epsilon_n \right\}, & \textrm{ and } \\
\tilde{\phi}_n = \displaystyle \max_{\xi \supset S_0, \lvert \xi \rvert \leq \widetilde{p}+s_0} 1 \left\{ \lVert \widehat{\bm{\beta}}_{\xi} - \bm{\beta}_{0 \xi} \rVert_2 \geq \sigma_0 \epsilon_n \right\}. &
\end{array}
\end{equation}
Because of Assumption \ref{A4}, we have $\widetilde{p} \prec n$ and $\widetilde{p} \prec n \epsilon_n^2$. Additionally, since $\epsilon_n = o(1)$, we can use almost identical arguments as those used to establish (A.5)-(A.6) in the proof of Theorem A.1 of \cite{SongLiang2017} to show that for any $\xi$ satisfying $\xi \supset S_0, | \xi | \leq \widetilde{p}$,
\begin{equation*}
\mathbb{E}_{( \bm{\beta}_0, \sigma_0^2 )} 1 \left\{ \lvert \widehat{\sigma}_{\xi}^2 - \sigma_0^2 \rvert \geq \sigma_0^2 \epsilon_n \right\} \leq \exp(- c_4' n \epsilon_n^2),
\end{equation*}
for some constant $c_4' > 0$, and for any $\xi$ satisfying $\xi \supset S_0, | \xi | \leq \widetilde{p}$,
\begin{equation*}
\mathbb{E}_{( \bm{\beta}_0, \sigma_0^2 )} 1 \left\{ \lVert \widehat{\bm{\beta}}_{\xi} - \bm{\beta}_{0 \xi} \rVert_2 \geq \sigma_0 \epsilon_n \right\} \leq \exp(-\tilde{c}_4 n \epsilon_n^2),
\end{equation*}
for some $\tilde{c}_4 > 0$. Using the proof of Theorem A.1 in \cite{SongLiang2017}, we may then choose $\widetilde{p} = \lfloor \min \{ c_4', \tilde{c}_4 \} n \epsilon_n^2 / (2 \log G) \rfloor$, and then
\begin{equation} \label{upperboundfirsttesting}
\mathbb{E}_{f_0} \phi_n \leq \exp ( - \check{c}_4 n \epsilon_n^2 ),
\end{equation}
for some $\check{c}_4 > 0$. Next, define the set,
\begin{align*}
\begin{array}{ll}
\mathcal{C} & = \left\{ \lVert \bm{\beta} - \bm{\beta}_0 \rVert_2 \geq (3+\sqrt{\nu_1}) \sigma_0 \epsilon_n \textrm{ or } \sigma^2 / \sigma_0^2 > (1+\epsilon_n)/(1-\epsilon_n) \right. \\
& \qquad \left.\textrm{ or } \sigma^2 / \sigma_0^2 < (1-\epsilon_n)/(1+\epsilon_n ) \right\}.
\end{array}
\end{align*}
By Lemma \ref{auxlemma2}, we have
\begin{equation} \label{upperboundsecondtesting1}
\displaystyle \sup_{ \begin{array}{rl} f \in \mathcal{F}_n: & \lVert \bm{\beta} - \bm{\beta}_0 \rVert_2 \geq (3+\sqrt{\nu_1}) \sigma_0 \epsilon_n , \\ & \textrm{ or } \lvert \sigma^2 - \sigma_0^2 \rvert \geq 4 \sigma_0^2 \epsilon_n \end{array}} \mathbb{E}_f (1-\phi_n) \leq \displaystyle \sup_{ f \in \mathcal{F}_n: (\bm{\beta}, \sigma^2) \in \mathcal{C}} \mathbb{E}_f (1 - \phi_n). \numbereqn
\end{equation}
Similar to \cite{SongLiang2017}, we consider $\mathcal{C} \subset \widehat{\mathcal{C}} \cup \widetilde{\mathcal{C}}$, where
\begin{align*}
& \widehat{\mathcal{C}} = \{ \sigma^2/\sigma_0^2 > (1+\epsilon_n)/(1-\epsilon_n) \textrm{ or } \sigma^2 / \sigma_0^2 < (1-\epsilon_n)/(1+\epsilon_n) \}, \\
& \tilde{\mathcal{C}} = \{ \lVert \bm{\beta} - \bm{\beta}_0 \rVert_2 \geq (3 + \sqrt{\nu_1}) \sigma_0 \epsilon_n \textrm{ and } \sigma^2 = \sigma_0^2 \},
\end{align*}
and so an upper bound for (\ref{upperboundsecondtesting1}) is
\begin{align*} \label{upperboundsecondtesting2}
& \displaystyle \sup_{f\in \mathcal{F}_n: (\bm{\beta}, \sigma^2) \in \mathcal{C}} \mathbb{E}_f (1-\phi_n) = \displaystyle \sup_{f \in \mathcal{F}_n: (\bm{\beta}, \sigma^2) \in \mathcal{C}} \mathbb{E}_f \min \{ 1-\phi_n', 1-\tilde{\phi}_n \} \\
& \qquad \leq \max \left\{ \displaystyle \sup_{f \in \mathcal{F}_n: (\bm{\beta}, \sigma^2) \in \hat{\mathcal{C}}} \mathbb{E}_f (1-\phi_n'), \displaystyle \sup_{f \in \mathcal{F}_n: (\bm{\beta}, \sigma^2) \in \tilde{\mathcal{C}}} \mathbb{E}_f (1-\tilde{\phi}_n) \right\}. \numbereqn
\end{align*}
Let $\tilde{\xi} = \{ g: \lVert \bm{\beta}_g \rVert_2 > \omega_g \} \cup S_0$, $m_{\tilde{\xi}} = \sum_{g \in \tilde{\xi}} m_g$, and $\tilde{\xi}^c = \{1, \ldots, G \} \setminus \tilde{\xi}$. For any $f \in \mathcal{F}_n$ such that $(\bm{\beta}, \sigma^2) \in \hat{\mathcal{C}} \cup \tilde{\mathcal{C}}$, we must then have that $\lvert \tilde{\xi} \rvert \leq C_3 s_0 + s_0 \leq \bar{p}$, by Assumption \ref{A4}. By \eqref{sievecomplement2-pt4}, the prior puts exponentially vanishing probability on values of $\theta > \theta_0$, where $\theta_0 = C_3 s_0 \log G / G^c < 1/(G^2+1)$ for large $G$. Since $\lambda_0 = (1-\theta )/ \theta$ is monotonically decreasing in $\theta$, we have that with probability greater than $1 - e^{-C_3 n \epsilon_n^2}$, $\lambda_0 \geq G^2$. Combining this fact with Assumption \ref{A3} and using $\mathcal{F}_n$ in \eqref{sievedef}, we have that for any $f \in \mathcal{F}_n$, $(\bm{\beta}, \sigma^2) \in \hat{\mathcal{C}} \cup \tilde{\mathcal{C}}$ and sufficiently large $n$,
\begin{align*} \label{upperboundsecondtesting3}
\lVert \bm{X}_{\tilde{\xi}^c} \bm{\beta}_{\tilde{\xi}^c} \rVert_2
& \leq \sqrt{k n^{\alpha}} \lVert \bm{\beta}_{\tilde{\xi}^c} \rVert_2 \\
& \leq \sqrt{k n^{\alpha} } \left[ (G - \lvert \tilde{\xi} \rvert ) \displaystyle \max_{g \in \tilde{\xi}^c} \omega_g \right] \\
& \leq \sqrt{ k n^{\alpha} } \left\{ \frac{G}{\lambda_0-\lambda_1} \log \left[ \frac{1-\theta}{\theta} \left( \frac{\lambda_0}{\lambda_1} \right)^{m_{\max}} \right] \right\} \\
& \lesssim \min \{ \sqrt{k}, 1 \} \times \sqrt{\nu_1}\sqrt{n} \sigma_0 \epsilon_n, \numbereqn
\end{align*}
where $\nu_1$ is from Assumption \ref{A4}. In the above display, we used Lemma \ref{auxlemma3} in the second inequality, while the last inequality follows from our assumptions on $(\theta, \lambda_0, \lambda_1)$ and $m_{\max}$: one can show that the bracketed term in the third line is asymptotically bounded above by $D \sqrt{\nu_1 } \sqrt{n^{1 -\alpha}} \sigma_0 \epsilon_n$ for large $n$ and any constant $D > 0$. Thus, using nearly identical arguments as those used to prove Part I of Theorem A.1 in \cite{SongLiang2017}, we have
\begin{align*} \label{upperboundsecondtesting4}
& \displaystyle \sup_{f \in \mathcal{F}_n: (\bm{\beta}, \sigma^2) \in \hat{\mathcal{C}}} \mathbb{E}_f (1-\phi_n') \\
& \qquad\leq \sup_{f \in \mathcal{F}_n: (\bm{\beta}, \sigma^2) \in \hat{\mathcal{C}}} \Pr \left( \lvert \chi^2_{n-m_{\tilde{\xi}}} (\zeta) - (n - m_{\tilde{\xi}} ) \rvert \geq (n- m_{\tilde{\xi}} ) \epsilon_n \right) \\
& \qquad \leq \exp( - \hat{c}_4 n \epsilon_n^2), \numbereqn
\end{align*}
where the noncentrality parameter $\zeta$ satisfies $\zeta \leq n \epsilon_n^2 \nu_1 \sigma_0^2 / 16 \sigma^2 $, and the last inequality follows from the fact that the noncentral $\chi^2$ distribution is subexponential and Bernstein's inequality (see Lemmas A.1 and A.2 in \cite{SongLiang2017}).
Using the arguments in Part I of the proof of Theorem A.1 in \cite{SongLiang2017}, we also have that for large $n$,
\begin{align*}\label{upperboundsecondtesting5}
& \displaystyle \sup_{f \in \mathcal{F}_n: (\bm{\beta}, \sigma^2) \in \tilde{\mathcal{C}}} \mathbb{E}_f (1-\tilde{\phi}_n) \\
& \qquad \leq \displaystyle \sup_{f \in \mathcal{F}_n: (\bm{\beta}, \sigma^2) \in \tilde{\mathcal{C}}} \Pr \left( \lVert ( \bm{X}_{\tilde{\xi}}^T \bm{X}_{\tilde{\xi}})^{-1} \bm{X}_{\tilde{\xi}}^T \bm{\varepsilon} \rVert_2 \geq \left[ \lVert \bm{\beta}_{\tilde{\xi}} - \bm{\beta}_{0 \tilde{\xi}} \rVert_2 - \sigma_0 \epsilon_n - \right. \right. \\
& \left. \left. \qquad \qquad \qquad \qquad \qquad \lVert ( \bm{X}_{\tilde{\xi}}^T \bm{X}_{\tilde{\xi}} )^{-1} \bm{X}_{\tilde{\xi}}^T \bm{X}_{\tilde{\xi}^c} \bm{\beta}_{\tilde{\xi}^c} \rVert_2 \right] / \sigma \right) \\
& \qquad \leq \displaystyle \sup_{f \in \mathcal{F}_n: (\bm{\beta}, \sigma^2) \in \tilde{\mathcal{C}}} \Pr \left( \lVert ( \bm{X}_{\tilde{\xi}}^T \bm{X}_{\tilde{\xi}})^{-1} \bm{X}_{\tilde{\xi}}^T \bm{\varepsilon} \rVert_2 \geq \epsilon_n \right) \\
& \qquad \leq \sup_{f \in \mathcal{F}_n: (\bm{\beta}, \sigma^2) \in \tilde{\mathcal{C}}} \Pr ( \chi_{\lvert \widetilde{\xi} \rvert}^2 \geq n \nu_1 \epsilon_n^2 ) \\
& \qquad \leq \exp ( - \tilde{c}_4 n \epsilon_n^2), \numbereqn
\end{align*}
where the second inequality in the above display holds since $\lVert \bm{\beta}_{\tilde{\xi}} - \bm{\beta}_{0 \tilde{\xi}} \rVert_2 \geq \lVert \bm{\beta} - \bm{\beta}_0 \rVert_2 - \lVert \bm{\beta}_{ \tilde{\xi}^c} \rVert_2 $, and since (\ref{upperboundsecondtesting3}) can be further bounded from above by $\sqrt{k n^{\alpha}} \sqrt{\nu_1} \sigma_0 \epsilon_n$ and thus $\lVert \bm{\beta}_{\tilde{\xi}^c} \rVert_2 \leq \sqrt{ \nu_1} \sigma_0 \epsilon_n$. Therefore, we have for $f \in \mathcal{F}_n$, $(\bm{\beta}, \sigma^2) \in \tilde{\mathcal{C}}$,
\begin{align*}
\lVert \bm{\beta}_{\tilde{\xi}} - \bm{\beta}_{0 \tilde{\xi}} \rVert_2 \geq (3 + \sqrt{\nu_1}) \sigma_0 \epsilon_n - \sqrt{\nu_1} \sigma_0 \epsilon_n = 3 \sigma_0 \epsilon_n,
\end{align*}
while by Assumption \ref{A4} and (\ref{upperboundsecondtesting3}), we also have
\begin{align*}
& \lVert ( \bm{X}_{\tilde{\xi}}^T \bm{X}_{\tilde{\xi}})^{-1} \bm{X}_{\tilde{\xi}}^T \bm{X}_{\tilde{\xi}^c} \bm{\beta}_{\tilde{\xi}^c} \rVert_2 \leq \sqrt{ \lambda_{\max} \left( (\bm{X}_{\tilde{\xi}}^T \bm{X}_{\tilde{\xi}})^{-1} \right) } \lVert \bm{X}_{\tilde{\xi}^c} \bm{\beta}_{\tilde{\xi}^c} \rVert_2 \\
& \qquad \leq \left( \sqrt{1/n \nu_1} \right) \left( \sqrt{n \nu_1} \sigma_0 \epsilon_n \right) = \sigma_0 \epsilon_n,
\end{align*}
where we also used the fact that on the set $\tilde{\mathcal{C}}$, $\sigma = \sigma_0$. The last three inequalities in (\ref{upperboundsecondtesting5}) follow from Assumption \ref{A4}, the fact that $\lvert \widetilde{\xi} \rvert \leq \bar{p} \prec n \epsilon_n^2$, and the fact that for all $m>0$, $\Pr(\chi^2_m \geq x) \leq \exp(-x/4)$ whenever $x \geq 8m$. Altogether, combining (\ref{upperboundsecondtesting1})-(\ref{upperboundsecondtesting5}), we have that
\begin{equation} \label{upperboundsecondtesting6}
\displaystyle \sup_{ \begin{array}{rl} f \in \mathcal{F}_n: & \lVert \bm{\beta} - \bm{\beta}_0 \rVert_2 \geq (3+\sqrt{\nu_1}) \sigma_0 \epsilon_n, \\ & \textrm{ or } \lvert \sigma^2 - \sigma_0^2 \rvert \geq 4 \sigma_0^2 \epsilon_n \end{array} } \mathbb{E}_f (1 - \phi_n) \leq \exp \left( - \min \{ \hat{c}_4, \tilde{c}_4 \} n \epsilon_n^2 \right),
\end{equation}
where $\hat{c}_4 > 0$ and $\tilde{c}_4 > 0$ are the constants from (\ref{upperboundsecondtesting4}) and (\ref{upperboundsecondtesting5}).
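As a computational aside (not part of the proof), the chi-square tail bound invoked above, $\Pr(\chi^2_m \geq x) \leq e^{-x/4}$ whenever $x \geq 8m$, can be sanity-checked numerically. The sketch below is our own illustration; it uses the exact Erlang/Poisson identity for the upper tail of a chi-square distribution with an even number of degrees of freedom (even $m$ is used only because it admits a closed-form tail).

```python
import math

def chisq_tail_even_df(m, x):
    """Exact upper tail P(chi2_m >= x) for even degrees of freedom m,
    via P(chi2_{2k} >= x) = exp(-x/2) * sum_{i=0}^{k-1} (x/2)^i / i!."""
    assert m % 2 == 0 and m > 0
    k = m // 2
    h = x / 2.0
    return math.exp(-h) * sum(h ** i / math.factorial(i) for i in range(k))

# Check P(chi2_m >= x) <= exp(-x/4) on a grid with x >= 8m.
for m in (2, 4, 10, 20):
    for mult in (8, 10, 20):
        x = float(mult * m)
        assert chisq_tail_even_df(m, x) <= math.exp(-x / 4), (m, x)
```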
Now set $C_4 = \min \{ \hat{c}_4, \tilde{c}_4, \check{c}_4 \}$, where $\check{c}_4$ is the constant from (\ref{upperboundfirsttesting}). By (\ref{upperboundfirsttesting}) and (\ref{upperboundsecondtesting6}), this choice of $C_4$ satisfies both testing conditions (\ref{testingcond2}) and (\ref{testingcond3}).
Since we have verified (\ref{KullbackLeiblercond}) and (\ref{testingcond1})-(\ref{testingcond3}) for $\epsilon_n = \sqrt{s_0 \log G / n}$, we have
\begin{align*}
\Pi \left( \bm{\beta} : \lVert \bm{\beta} - \bm{\beta}_0 \rVert_2 \geq (3+\sqrt{\nu_1}) \sigma_0 \epsilon_n \bigg| \bm{Y} \right) \rightarrow 0 \textrm{ a.s. } \mathbb{P}_0 \textrm{ as } n, G \rightarrow \infty,
\end{align*}
and
\begin{align*}
\Pi \left( \sigma^2: \lvert \sigma^2 - \sigma_0^2 \rvert \geq 4 \sigma_0^2 \epsilon_n \vert \bm{Y} \right) \rightarrow 0 \textrm{ a.s. } \mathbb{P}_0 \textrm{ as } n, G \rightarrow \infty,
\end{align*}
i.e. we have proven (\ref{l2contraction}) and (\ref{varianceconsistency}).
\noindent \textbf{Part III. Posterior contraction under prediction error loss.}
The proof is very similar to the proof of (\ref{l2contraction}). The only difference is the testing conditions. We use the same sieve $\mathcal{F}_n$ as that in (\ref{sievedef}) so that (\ref{testingcond1}) holds, but now, we need to show the existence of a different sequence of test functions $\tau_n \in [0,1]$ such that
\begin{equation} \label{testingcond2prediction}
\mathbb{E}_{f_0} \tau_n \leq e^{-C_4 n \epsilon_n^2},
\end{equation}
and
\begin{equation} \label{testingcond3prediction}
\displaystyle \sup_{ \begin{array}{rl} f \in \mathcal{F}_n: & \lVert \bm{X} \bm{\beta} - \bm{X} \bm{\beta}_0 \rVert_2 \geq M_2 \sigma_0 \sqrt{n} \epsilon_n, \\ & \textrm{ or } \lvert \sigma^2 - \sigma_0^2 \rvert \geq 4 \sigma_0^2 \epsilon_n \end{array} } \mathbb{E}_f (1 - \tau_n) \leq e^{-C_4 n \epsilon_n^2}.
\end{equation}
Let $\widetilde{p}$ be the same integer from (\ref{testfunction}) and consider the test function $\tau_n = \max \{ \tau_n', \tilde{\tau}_n \}$, where
\begin{equation} \label{testfunctionprediction}
\begin{array}{ll}
\tau_n' = \displaystyle \max_{\xi \supset S_0, \lvert \xi \rvert \leq \widetilde{p}+s_0} 1 \left\{ \lvert \widehat{\sigma}_{\xi}^2 - \sigma_0^2 \rvert \geq \sigma_0^2 \epsilon_n \right\}, & \textrm{ and } \\
\tilde{\tau}_n = \displaystyle \max_{\xi \supset S_0, \lvert \xi \rvert \leq \widetilde{p}+s_0} 1 \left\{ \lVert \bm{X}_{\xi} \widehat{\bm{\beta}}_{\xi} - \bm{X}_{\xi} \bm{\beta}_{0 \xi} \rVert_2 \geq \sigma_0 \sqrt{n} \epsilon_n \right\}. &
\end{array}
\end{equation}
Using Assumption \ref{A4} that for any $\xi \subset \{1, \ldots, G \}$ such that $\lvert \xi \rvert \leq \bar{p}$, $\lambda_{\max} ( \bm{X}_{\xi}^T \bm{X}_{\xi}) \leq n \nu_2$ for some $\nu_2 > 0$, we have that
\begin{equation*}
\lVert \bm{X}_{\xi} \widehat{\bm{\beta}}_{\xi} - \bm{X}_{\xi} \bm{\beta}_{0 \xi} \rVert_2 \leq \sqrt{n \nu_2 } \lVert \widehat{\bm{\beta}}_{\xi} - \bm{\beta}_{0 \xi} \rVert_2,
\end{equation*}
and so
\begin{equation*}
\Pr \left( \lVert \bm{X}_{\xi} \widehat{\bm{\beta}}_{\xi} - \bm{X}_{\xi} \bm{\beta}_{0 \xi} \rVert_2 \geq \sigma_0 \sqrt{n} \epsilon_n \right) \leq \Pr \left( \lVert \widehat{\bm{\beta}}_{\xi} - \bm{\beta}_{0 \xi} \rVert_2 \geq \nu_2^{-1/2} \sigma_0 \epsilon_n \right).
\end{equation*}
Therefore, using similar steps as those in Part II of the proof, we can show that our chosen sequence of tests $\tau_n$ satisfies (\ref{testingcond2prediction}) and (\ref{testingcond3prediction}). We thus arrive at
\begin{align*}
\Pi \left( \bm{\beta} : \lVert \bm{X} \bm{\beta} - \bm{X} \bm{\beta}_0 \rVert_2 \geq M_2 \sigma_0 \sqrt{n} \epsilon_n \bigg| \bm{Y} \right) \rightarrow 0 \textrm{ a.s. } \mathbb{P}_0 \textrm{ as } n, G \rightarrow \infty,
\end{align*}
i.e. we have proven (\ref{predictioncontraction}).
\end{proof}
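The workhorse step in Part III is the generic spectral bound $\lVert \bm{X}_{\xi} \bm{v} \rVert_2 \leq \sqrt{\lambda_{\max}(\bm{X}_{\xi}^T \bm{X}_{\xi})} \, \lVert \bm{v} \rVert_2$. The following minimal numerical check is our own illustration with an arbitrary toy two-column design matrix (not from the paper); for a symmetric $2 \times 2$ Gram matrix the largest eigenvalue has a closed form.

```python
import math

def matvec(X, v):
    """Multiply an n x 2 matrix (list of rows) by a length-2 vector."""
    return [sum(Xij * vj for Xij, vj in zip(row, v)) for row in X]

def norm2(v):
    return math.sqrt(sum(vi * vi for vi in v))

# Toy n x 2 design; form the 2 x 2 Gram matrix X^T X entrywise.
X = [[1.0, 0.5], [0.0, 2.0], [1.0, 1.0], [-1.0, 0.3]]
a = sum(r[0] * r[0] for r in X)
b = sum(r[0] * r[1] for r in X)
c = sum(r[1] * r[1] for r in X)
# Largest eigenvalue of [[a, b], [b, c]] in closed form.
lam_max = (a + c) / 2 + math.sqrt(((a - c) / 2) ** 2 + b * b)

# ||X v||_2 <= sqrt(lam_max) ||v||_2 for every v.
for v in ([1.0, 0.0], [0.3, -0.7], [2.0, 5.0]):
    assert norm2(matvec(X, v)) <= math.sqrt(lam_max) * norm2(v) + 1e-12
```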
\begin{proof}[Proof of Theorem \ref{dimensionalitygroupedregression}]
According to Part I of the proof of Theorem \ref{posteriorcontractiongroupedregression}, we have that for $\epsilon_n = \sqrt{s_0 \log G / n}$,
\begin{align*}
\Pi \left( K(f_0, f) \leq n \epsilon_n^2, V(f_0, f) \leq n \epsilon_n^2 \right) \geq \exp \left( -C n \epsilon_n^2 \right)
\end{align*}
for some $C>0$. Thus, by Lemma 8.10 of \cite{GhosalVanDerVaart2017}, there exist positive constants $C_1$ and $C_2$ such that the event,
\begin{equation} \label{eventEn}
E_n = \left\{ \displaystyle \int \displaystyle \int \frac{f (\bm{Y}) }{f_0 (\bm{Y})} d \Pi (\bm{\beta}) d \Pi(\sigma^2) \geq e^{-C_1 n \epsilon_n^2} \right\},
\end{equation}
satisfies
\begin{equation} \label{probEncomplement}
\mathbb{P}_0 ( E_n^c) \leq e^{-(1+C_2) n \epsilon_n^2}.
\end{equation}
Define the set $\mathcal{T} = \{ \bm{\beta} : \lvert \bm{\gamma} (\bm{\beta}) \rvert \leq C_3 s_0 \}$, where we choose $C_3 > 1+C_2$. We must show that $\mathbb{E}_0 \Pi ( \mathcal{T}^c \vert \bm{Y} ) \rightarrow 0$ as $n \rightarrow \infty.$ The posterior probability $\Pi ( \mathcal{T}^c \vert \bm{Y})$ is given by
\begin{equation} \label{posteriorprobTcomplement}
\Pi ( \mathcal{T}^c \vert \bm{Y} ) = \frac{ \displaystyle \int \int_{\mathcal{T}^c} \frac{f (\bm{Y})}{f_0 (\bm{Y})} d \Pi (\bm{\beta}) d \Pi ( \sigma^2)}{ \displaystyle \int \int \frac{f (\bm{Y}) }{f_0 (\bm{Y})} d \Pi (\bm{\beta}) d \Pi (\sigma^2)}.
\end{equation}
By (\ref{probEncomplement}), the denominator of (\ref{posteriorprobTcomplement}) is bounded below by $e^{-(1+C_2) n \epsilon_n^2}$. For the numerator of (\ref{posteriorprobTcomplement}), we have as an upper bound,
\begin{equation} \label{numeratorupperbound}
\mathbb{E}_0 \left( \displaystyle \int \int_{\mathcal{T}^c} \frac{f (\bm{Y})}{f_0 (\bm{Y})} d \Pi (\bm{\beta}) \, d \Pi (\sigma^2) \right) \leq \displaystyle \int_{\mathcal{T}^c} d \Pi (\bm{\beta}) = \Pi \left( \lvert \bm{\gamma} ( \bm{\beta} ) \rvert > C_3 s_0 \right).
\end{equation}
Using the same arguments as (\ref{sievecomplement2})-(\ref{sievecomplement3}) in the proof of Theorem \ref{posteriorcontractiongroupedregression}, we can show that
\begin{equation} \label{numeratorupperbound2}
\Pi \left( \lvert \bm{\gamma} ( \bm{\beta} ) \rvert > C_3 s_0 \right) \prec e^{-C_3 n \epsilon_n^2}.
\end{equation}
Combining (\ref{eventEn})-(\ref{numeratorupperbound2}), we have that
\begin{align*}
\mathbb{E}_0 \Pi \left( \mathcal{T}^c \vert \bm{Y} \right) & \leq \mathbb{E}_0 \Pi ( \mathcal{T}^c \vert \bm{Y} ) 1_{E_n} + \mathbb{P}_0 (E_n^c) \\
& < \exp \left( (1+C_2) n \epsilon_n^2 - C_3 n \epsilon_n^2 \right) + o(1) \\
& \rightarrow 0 \textrm{ as } n, G \rightarrow \infty,
\end{align*}
since $C_3 > 1+C_2$. This proves (\ref{posteriorcompressibility}).
\end{proof}
\begin{proof}[Proof of Theorem \ref{contractionGAMs}]
Let $f_{0j}(\bm{X}_j)$ be an $n \times 1$ vector with $i$th entry equal to $f_{0j}(X_{ij})$. Note that proving posterior contraction with respect to the empirical norm \eqref{empiricalcontractionGAM} is equivalent to proving that
\begin{equation} \label{predictioncontractionGAM}
\Pi \left( \bm{\beta}: \lVert \widetilde{\bm{X}} \bm{\beta} - \sum_{j=1}^{p} f_{0j} (\bm{X}_j) \rVert_2 \geq \widetilde{M}_1 \sqrt{n} \epsilon_n \bigg| \bm{Y} \right) \rightarrow 0 \textrm{ a.s. } \widetilde{\mathbb{P}}_0 \textrm{ as } n, p \rightarrow \infty,
\end{equation}
so to prove the theorem, it suffices to prove \eqref{predictioncontractionGAM}. Let $f$ denote the density of $\mathcal{N}_n( \widetilde{\bm{X}} \bm{\beta}, \sigma^2 \bm{I}_n)$ and $f_0$ the density of $\mathcal{N}_n ( \widetilde{\bm{X}} \bm{\beta}_{0} + \bm{\delta}, \sigma_0^2 \bm{I}_n )$, and let $\Pi(\cdot)$ denote the prior (\ref{hiermodel}). Similar to the proof of Theorem \ref{posteriorcontractiongroupedregression}, we show that for our choice of $\epsilon_n^2 = s_0 \log p /n + s_0 n^{-2 \kappa / (2 \kappa + 1)}$ and some constant $C_1 > 0$,
\begin{equation} \label{KullbackLeiblercondGAMs}
\Pi \left( K (f_0, f) \leq n \epsilon_n^2, V(f_0, f) \leq n \epsilon_n^2 \right) \geq \exp (-C_1 n \epsilon_n^2),
\end{equation}
and the existence of a sieve $\mathcal{F}_n$ such that
\begin{equation} \label{testingGAMcond1}
\Pi ( \mathcal{F}_n^c) \leq \exp(-C_2 n \epsilon_n^2),
\end{equation}
for positive constant $C_2 > C_1+2$, and a sequence of test functions $\phi_n \in [0,1]$ such that
\begin{equation} \label{testingGAMcond2}
\mathbb{E}_{f_0} \phi_n \leq e^{-C_4 n \epsilon_n^2},
\end{equation}
and
\begin{equation} \label{testingGAMcond3}
\displaystyle \sup_{ \begin{array}{rl} f \in \mathcal{F}_n: & \lVert \widetilde{\bm{X}} \bm{\beta} - \sum_{j=1}^{p} f_{0j} ( \bm{X}_{j} ) \rVert_2 \geq \tilde{c}_0 \sigma_0 \sqrt{n} \epsilon_n, \\ & \textrm{ or } \lvert \sigma^2 - \sigma_0^2 \rvert \geq 4 \sigma_0^2 \epsilon_n \end{array} } \mathbb{E}_f (1 - \phi_n) \leq e^{-C_4 n \epsilon_n^2},
\end{equation}
for some $C_4 > 0$ and $\tilde{c}_0 > 0$.
We first verify (\ref{KullbackLeiblercondGAMs}). The KL divergence between $f_0$ and $f$ is
\begin{equation} \label{KLdivGAM}
K(f_0, f) = \frac{1}{2} \left[ n \left( \frac{\sigma_0^2}{\sigma^2} \right) - n - n \log \left( \frac{\sigma_0^2}{\sigma^2} \right) + \frac{ \lVert \widetilde{\bm{X}} ( \bm{\beta} - \bm{\beta}_0 ) - \bm{\delta} \rVert_2^2}{\sigma^2} \right],
\end{equation}
and the KL variation between $f_0$ and $f$ is
\begin{equation} \label{KLvarGAM}
V(f_0, f) = \frac{1}{2} \left[ n \left( \frac{\sigma_0^2}{\sigma^2} \right)^2 - 2n \left( \frac{\sigma_0^2}{\sigma^2} \right) + n \right] + \frac{\sigma_0^2}{(\sigma^2)^{2}} \lVert \widetilde{\bm{X}} ( \bm{\beta} - \bm{\beta}_0 ) - \bm{\delta} \rVert_2^2.
\end{equation}
Define the two events $\widetilde{\mathcal{A}}_1$ and $\widetilde{\mathcal{A}}_2$ as follows:
\begin{equation} \label{eventA1GAM}
\begin{split}
\widetilde{\mathcal{A}}_1 = \bigg\{ \sigma^2: \ & n \left( \frac{\sigma_0^2}{\sigma^2} \right) - n - n \log \left( \frac{\sigma_0^2}{\sigma^2} \right) \leq n \epsilon_n^2, \\
& n \left( \frac{\sigma_0^2}{\sigma^2} \right)^2 - 2n \left( \frac{\sigma_0^2}{\sigma^2} \right) + n \leq n \epsilon_n^2 \bigg\}
\end{split}
\end{equation}
and
\begin{equation} \label{eventA2GAM}
\begin{split}
\widetilde{\mathcal{A}}_2 = \bigg\{ (\bm{\beta}, \sigma^2): \ & \frac{ \lVert \widetilde{\bm{X}} ( \bm{\beta} - \bm{\beta}_0 ) - \bm{\delta} \rVert_2^2}{\sigma^2} \leq n \epsilon_n^2, \\
& \frac{\sigma_0^2}{(\sigma^2)^{2}} \lVert \widetilde{\bm{X}} ( \bm{\beta} - \bm{\beta}_0 ) - \bm{\delta} \rVert_2^2 \leq n \epsilon_n^2/2 \bigg\}.
\end{split}
\end{equation}
Following from (\ref{KLdivGAM})-(\ref{eventA2GAM}), we have $\Pi ( K(f_0, f) \leq n \epsilon_n^2, V(f_0, f) \leq n \epsilon_n^2) = \Pi (\widetilde{\mathcal{A}}_2 \vert \widetilde{\mathcal{A}}_1 ) \Pi (\widetilde{\mathcal{A}}_1)$. Using the steps we used to prove (\ref{neglogA1upper}) in part I of the proof of Theorem \ref{posteriorcontractiongroupedregression}, we have
\begin{equation} \label{A1upperGAM}
\Pi ( \widetilde{\mathcal{A}}_1 ) \gtrsim \exp (- C_1 n \epsilon_n^2 / 2),
\end{equation}
for some sufficiently large $C_1 > 0$. Following similar reasoning as in the proof of Theorem \ref{posteriorcontractiongroupedregression}, we also have for some $b_2 > 0$,
\begin{equation} \label{lowerboundA2givenA1GAM}
\Pi \left( \widetilde{\mathcal{A}}_2 \vert \widetilde{\mathcal{A}}_1 \right) \geq \Pi \left( \lVert \widetilde{\bm{X}} (\bm{\beta} - \bm{\beta}_{0}) - \bm{\delta} \rVert_2^2 \leq \frac{b_2^2 n \epsilon_n^2}{2} \right).
\end{equation}
Using Assumptions \ref{B3} and \ref{B6}, we then have
\begin{align*}
\lVert \widetilde{\bm{X}} ( \bm{\beta} - \bm{\beta}_{0}) - \bm{\delta} \rVert_2^2 & \leq \left( \lVert \widetilde{\bm{X}} ( \bm{\beta} - \bm{\beta}_0) \rVert_2 + \lVert \bm{\delta} \rVert_2 \right)^2 \\
& \leq 2 \lVert \widetilde{\bm{X}} ( \bm{\beta} - \bm{\beta}_0 ) \rVert_2^2 + 2 \lVert \bm{\delta} \rVert_2^2 \\
& \lesssim 2 \left( n k_1 \lVert \bm{\beta} - \bm{\beta}_0 \rVert_2^2 + \frac{ k_1 b_2^2 n s_0 d^{-2 \kappa}}{4} \right) \\
& \asymp 2n \left( \lVert \bm{\beta} - \bm{\beta}_0 \rVert_2^2 + \frac{ b_2^2 s_0 d^{-2 \kappa}}{4} \right),
\end{align*}
and so (\ref{lowerboundA2givenA1GAM}) can be asymptotically lower bounded by
\begin{align*} \label{lowerboundA2givenA1GAM2}
& \Pi \left( \lVert \bm{\beta} - \bm{\beta}_0 \rVert_2^2 + \frac{b_2^2 s_0 d^{-2 \kappa}}{4} \leq \frac{b_2^2 \epsilon_n^2}{4 } \right) \\
& = \Pi \left( \lVert \bm{\beta} - \bm{\beta}_0 \rVert_2^2 \leq \frac{b_2^2}{4} \left( \epsilon_n^2 - s_0 n^{- 2 \kappa / (2 \kappa + 1)} \right) \right),
\end{align*}
where we used Assumption \ref{B1} that $d \asymp n^{1 / (2 \kappa + 1)}$. Using very similar arguments as those used to prove (\ref{neglogpiA2givenA1pt2}), this term can also be lower bounded by $\exp (- C_1 n \epsilon_n^2 /2 )$. Altogether, we have
\begin{equation} \label{lowerboundA2givenA1GAM3}
\Pi( \widetilde{\mathcal{A}}_2 \vert \widetilde{\mathcal{A}}_1 ) \gtrsim \exp ( - C_1 n \epsilon_n^2 / 2).
\end{equation}
Combining (\ref{A1upperGAM}) and (\ref{lowerboundA2givenA1GAM3}), we have that (\ref{KullbackLeiblercondGAMs}) holds. To verify (\ref{testingGAMcond1}), we choose $C_3 \geq C_1 + 2 + \log 3$ and use the same sieve $\mathcal{F}_n$ as the one we employed in the proof of Theorem \ref{posteriorcontractiongroupedregression} (eq. (\ref{sievedef})), and then (\ref{testingGAMcond1}) holds for our choice of $\mathcal{F}_n$.
Finally, we follow the recipe of \citet{WeiReichHoppinGhosal2018} and \citet{SongLiang2017} to construct our test function $\phi_n$ which will satisfy both (\ref{testingGAMcond2}) and (\ref{testingGAMcond3}). For $\xi \subset \{1, \ldots, p \}$, let $\widetilde{\bm{X}}_{\xi}$ denote the submatrix of $\widetilde{\bm{X}}$ consisting of the column blocks indexed by $\xi$, where $\lvert \xi \rvert \leq \bar{p}$ and $\bar{p}$ is from Assumption \ref{B4}. Let $\widehat{\bm{\beta}}_{\xi} = ( \widetilde{\bm{X}}_{\xi}^T \widetilde{\bm{X}}_{\xi})^{-1} \widetilde{\bm{X}}_{\xi}^T \bm{Y}$ and let $\bm{\beta}_{0 \xi}$ denote the subvector of $\bm{\beta}_0$ with basis coefficients appearing in $\xi$. Then the total number of elements in $\widehat{\bm{\beta}}_{\xi}$ is $d \lvert \xi \rvert $. Finally, let $\widehat{\sigma}_{\xi}^2 = \bm{Y}^T ( \bm{I}_n - \bm{H}_{\xi} ) \bm{Y} / (n - d \lvert \xi \rvert )$, where $\bm{H}_{\xi} = \widetilde{\bm{X}}_{\xi} ( \widetilde{\bm{X}}_{\xi}^T \widetilde{\bm{X}}_{\xi} )^{-1} \widetilde{\bm{X}}_{\xi}^T$ is the hat matrix for the subgroup $\xi$.
Let $\widetilde{p}$ be an integer satisfying $\widetilde{p} \asymp s_0$ and $\widetilde{p} \leq \bar{p} - s_0$, where $\bar{p}$ is from Assumption \ref{B4} and the specific choice for $\widetilde{p}$ will be given later. Recall that $S_0$ is the set of true nonzero groups with cardinality $s_0 = \lvert S_0 \rvert$. Similar to \cite{WeiReichHoppinGhosal2018}, we consider the test function, $\phi_n = \max \{ \phi_n', \tilde{\phi}_n \}$, where
\begin{equation} \label{testfunctionGAM}
\begin{array}{ll}
\phi_n' = \displaystyle \max_{\xi \supset S_0, \lvert \xi \rvert \leq \widetilde{p}+s_0} 1 \left\{ \lvert \widehat{\sigma}_{\xi}^2 - \sigma_0^2 \rvert \geq c_0 ' \sigma_0^2 \epsilon_n \right\}, & \textrm{ and } \\
\tilde{\phi}_n = \displaystyle \max_{\xi \supset S_0, \lvert \xi \rvert \leq \widetilde{p}+s_0} 1 \left\{ \bigg| \bigg| \widetilde{\bm{X}} \widehat{\bm{\beta}}_{\xi} - \displaystyle \sum_{j \in \xi} f_{0j} ( \bm{X}_{j} ) \bigg| \bigg|_2 \geq \tilde{c}_0 \sigma_0 \sqrt{n} \epsilon_n \right\}, &
\end{array}
\end{equation}
for some positive constants $c_0 '$ and $\tilde{c}_0$. Using Assumptions \ref{B1} and \ref{B4}, we have that for any $\xi$ in our test $\phi_n$, $d \lvert \xi \rvert \leq d (\widetilde{p}+s_0) \leq d \bar{p} \prec n \epsilon_n^2$. Using essentially the same arguments as those in the proof for Theorem 4.1 in \cite{WeiReichHoppinGhosal2018}, we have that for any $\xi$ which satisfies $\xi \supset S_0$ so that $\lvert \xi \rvert \leq \widetilde{p} + s_0$,
\begin{equation} \label{upperboundfirsttestingGAM1}
\mathbb{E}_{(\bm{\beta}_0, \sigma_0^2)} 1 \left\{ \lvert \widehat{\sigma}_{\xi}^2 - \sigma_0^2 \rvert \geq c_0 ' \sigma_0^2 \epsilon_n \right\} \leq \exp(-c_4' n \epsilon_n^2),
\end{equation}
for some $c_4 ' > 0$. By Assumption \ref{B3}, we also have
\begin{align*}
\bigg| \bigg| \widetilde{\bm{X}} \widehat{\bm{\beta}} - \displaystyle \sum_{j=1}^{p} f_{0j} ( \bm{X}_{j} ) \bigg| \bigg|_2 & = \lVert \widetilde{\bm{X}} (\widehat{\bm{\beta}} - \bm{\beta}_0 ) - \bm{\delta} \rVert_2 \\
& \leq \sqrt{n k_1} \lVert \widehat{\bm{\beta}} - \bm{\beta}_0 \rVert_2 + \lVert \bm{\delta} \rVert_2,
\end{align*}
and using the fact that $ \lVert \bm{\delta} \rVert_2 \lesssim \sqrt{n s_0} d^{-\kappa} \lesssim \tilde{c}_0 \sigma_0 \sqrt{n} \epsilon_n / 2$ (by Assumptions \ref{B1} and \ref{B6}), we have that for any $\xi$ such that $\xi \supset S_0, \lvert \xi \rvert \leq \widetilde{p} + s_0$,
\begin{align*}
& \mathbb{E}_{(\bm{\beta}_0, \sigma_0^2)} 1 \left\{ \bigg| \bigg| \widetilde{\bm{X}} \widehat{\bm{\beta}} - \displaystyle \sum_{j=1}^{p} f_{0j} ( \bm{X}_j ) \bigg| \bigg|_2 \geq \tilde{c}_0 \sigma_0 \sqrt{n} \epsilon_n \right\} \\
& \qquad \leq \mathbb{E}_{(\bm{\beta}_0, \sigma_0^2)} 1 \left\{ \lVert \widehat{\bm{\beta}} - \bm{\beta}_0 \rVert_2 \geq \tilde{c}_0 \sigma_0 \epsilon_n / (2 \sqrt{k_1}) \right\} \\
& \qquad \leq \exp ( -\tilde{c}_4 n \epsilon_n^2),
\end{align*}
for some $\tilde{c}_4 > 0$, where we used the proof of Theorem A.1 in \cite{SongLiang2017} to arrive at the final inequality. Again, as in the proof of Theorem A.1 of \cite{SongLiang2017}, we choose $\widetilde{p} = \lfloor \min \{ c_4', \tilde{c}_4 \} n \epsilon_n^2 / (2 \log p) \rfloor$, and then
\begin{equation} \label{upperboundfirsttestingGAM2}
\mathbb{E}_{f_0} \phi_n \leq \exp( - \check{c}_4 n \epsilon_n^2),
\end{equation}
for some $\check{c}_4 > 0$. Next, we define the set,
\begin{align*}
\mathcal{C} = \big\{ (\bm{\beta}, \sigma^2) : \ & \lVert \widetilde{\bm{X}} \bm{\beta} - \sum_{j=1}^{p} f_{0j} ( \bm{X}_j ) \rVert_2 \geq \tilde{c}_0 \sigma_0 \sqrt{n} \epsilon_n \textrm{ or } \sigma^2 / \sigma_0^2 > (1+\epsilon_n)/(1-\epsilon_n) \\
& \textrm{ or } \sigma^2 / \sigma_0^2 < (1-\epsilon_n)/(1+\epsilon_n ) \big\}.
\end{align*}
By Lemma \ref{auxlemma2}, we have
\begin{align*} \label{upperboundsecondtesting1GAM}
& \displaystyle \sup_{ \begin{array}{rl} f \in \mathcal{F}_n: & \lVert \widetilde{\bm{X}} \bm{\beta} - \sum_{j=1}^{p} f_{0j} ( \bm{X}_j ) \rVert_2 \geq \tilde{c}_0 \sigma_0 \sqrt{n} \epsilon_n , \\ & \textrm{ or } \lvert \sigma^2 - \sigma_0^2 \rvert \geq 4 \sigma_0^2 \epsilon_n \end{array}} \mathbb{E}_f (1-\phi_n) \\
&\qquad \qquad \leq \displaystyle \sup_{ f \in \mathcal{F}_n: (\bm{\beta}, \sigma^2) \in \mathcal{C}} \mathbb{E}_f (1 - \phi_n). \numbereqn
\end{align*}
Similar to \cite{SongLiang2017}, we consider $\mathcal{C} \subset \widehat{\mathcal{C}} \cup \widetilde{\mathcal{C}}$, where
\begin{align*}
& \widehat{\mathcal{C}} = \{ \sigma^2/\sigma_0^2 > (1+\epsilon_n)/(1-\epsilon_n) \textrm{ or } \sigma^2 / \sigma_0^2 < (1-\epsilon_n)/(1+\epsilon_n) \}, \\
& \widetilde{\mathcal{C}} = \{ \lVert \widetilde{\bm{X}} \bm{\beta} - \sum_{j=1}^{p} f_{0j} ( \bm{X}_j ) \rVert_2 \geq \tilde{c}_0 \sigma_0 \sqrt{n} \epsilon_n \textrm{ and } \sigma^2 = \sigma_0^2 \},
\end{align*}
and so an upper bound for (\ref{upperboundsecondtesting1GAM}) is
\begin{align*} \label{upperboundsecondtesting2GAM}
& \displaystyle \sup_{f\in \mathcal{F}_n: (\bm{\beta}, \sigma^2) \in \mathcal{C}} \mathbb{E}_f (1-\phi_n) = \displaystyle \sup_{f \in \mathcal{F}_n: (\bm{\beta}, \sigma^2) \in \mathcal{C}} \mathbb{E}_f \min \{ 1-\phi_n', 1-\tilde{\phi}_n \} \\
& \qquad \leq \max \left\{ \displaystyle \sup_{f \in \mathcal{F}_n: (\bm{\beta}, \sigma^2) \in \hat{\mathcal{C}}} \mathbb{E}_f (1-\phi_n'), \displaystyle \sup_{f \in \mathcal{F}_n: (\bm{\beta}, \sigma^2) \in \tilde{\mathcal{C}}} \mathbb{E}_f (1-\tilde{\phi}_n) \right\}. \numbereqn
\end{align*}
Using very similar arguments as those used to prove (\ref{upperboundsecondtesting6}) in Theorem \ref{posteriorcontractiongroupedregression} and using Assumptions \ref{B1} and \ref{B6}, so that the bias $\lVert \bm{\delta} \rVert_2^2 \lesssim n s_0 d^{-2 \kappa} \lesssim n \epsilon_n^2 $, we can show that (\ref{upperboundsecondtesting2GAM}) can be further bounded from above as
\begin{align*} \label{upperboundsecondtesting3GAM}
& \displaystyle \sup_{ \begin{array}{rl} f \in \mathcal{F}_n: & \lVert \widetilde{\bm{X}} \bm{\beta} - \sum_{j=1}^{p} f_{0j} ( \bm{X}_j ) \rVert_2 \geq \tilde{c}_0 \sigma_0 \sqrt{n} \epsilon_n , \\ & \textrm{ or } \lvert \sigma^2 - \sigma_0^2 \rvert \geq 4 \sigma_0^2 \epsilon_n \end{array}} \mathbb{E}_f (1-\phi_n) \\
& \qquad \qquad \leq \exp \left( - \min \{ \hat{c}_4, \tilde{c}_4 \} n \epsilon_n^2 \right), \numbereqn
\end{align*}
where $\hat{c}_4 > 0$ and $\tilde{c}_4 > 0$ are the constants from (\ref{upperboundfirsttestingGAM1}) and (\ref{upperboundfirsttestingGAM2}).
Choose $C_4 = \min \{ \check{c}_4, \hat{c}_4, \tilde{c}_4 \}$, and we have from (\ref{upperboundfirsttestingGAM2}) and (\ref{upperboundsecondtesting3GAM}) that (\ref{testingGAMcond2}) and (\ref{testingGAMcond3}) both hold.
Since we have verified (\ref{KullbackLeiblercondGAMs}) and (\ref{testingGAMcond1})-(\ref{testingGAMcond3}) for our choice of $\epsilon_n^2 = s_0 \log p / n + s_0 n^{-2 \kappa / (2 \kappa + 1)}$, it follows that
\begin{align*}
\Pi \left( \bm{\beta} : \bigg| \bigg| \widetilde{\bm{X}} \bm{\beta} - \displaystyle \sum_{j=1}^{p} f_{0j} ( \bm{X}_j ) \bigg| \bigg|_2 \geq \tilde{c}_0 \sigma_0 \sqrt{n} \epsilon_n \vert \bm{Y} \right) \rightarrow 0 \textrm{ a.s. } \widetilde{\mathbb{P}}_0 \textrm{ as } n, p \rightarrow \infty,
\end{align*}
and
\begin{align*}
\Pi \left( \sigma^2: \lvert \sigma^2 - \sigma_0^2 \rvert \geq 4 \sigma_0^2 \epsilon_n \vert \bm{Y} \right) \rightarrow 0 \textrm{ a.s. } \widetilde{\mathbb{P}}_0 \textrm{ as } n, p \rightarrow \infty,
\end{align*}
i.e. we have proven (\ref{predictioncontractionGAM}), or equivalently, (\ref{empiricalcontractionGAM}) and (\ref{GAMvarianceconsistency}).
\end{proof}
\begin{proof}[Proof of Theorem \ref{dimensionalityGAM}]
The proof is very similar to the proof of Theorem \ref{dimensionalitygroupedregression} and is thus omitted.
\end{proof}
\end{appendix}
\end{document}
\begin{document}
\title{Chromatic Completion Number}
\author{$^*$E.G. Mphako-Banda and $^{\dag}$J. Kok }
\address{ $^*$School of Mathematical Sciences, University of the Witwatersrand, Johannesburg, South Africa.\\
$^{\dag}$Centre for Studies in Discrete Mathematics, Vidya Academy of Science \& Technology, Thrissur, India}
\email{$^*$[email protected] and $^{\dag}$[email protected]}
\date{}
\keywords{chromatic completion number, chromatic completion graph, chromatic completion edge, bad edge, sum-term partition, $\ell$-completion sum-product}
\subjclass[2010]{05C15, 05C38, 05C75, 05C85}
\begin{abstract}
We use a well known concept of proper vertex colouring of a graph to introduce the construction of a chromatic completion graph and its related parameter, the chromatic completion number of a graph. We then give the chromatic completion number of certain classes of cycle derivative graphs and helm graphs. Finally, we discuss further problems for research related to this concept.
\end{abstract}
\maketitle
\section{Introduction}
For general notation and concepts in graphs see \cite{bondy,harary,banda}. Unless stated otherwise, all graphs will be finite, simple, connected graphs with at least one edge. The set of vertices and the set of edges of a graph $G$ are denoted by $V(G)$ and $E(G),$ respectively. The number of vertices is called the order of $G,$ say $n,$ and the number of edges of $G$ is denoted by $\varepsilon(G).$ If $G$ has order $n \geq 1$ and has no edges ($\varepsilon(G)=0$) then $G$ is called a null graph. The degree of a vertex $v \in V(G)$ is denoted $d_G(v)$ or, when the context is clear, simply $d(v)$. The minimum and maximum degrees $\delta(G)$ and $\Delta(G),$ respectively, have the conventional meaning. When the context is clear we shall abbreviate these to $\delta$ and $\Delta,$ respectively.
For a set of distinct colours $\mathcal{C}= \{c_1,c_2,c_3,\dots,c_\ell\},$ a vertex colouring of a graph $G$ is an assignment $\varphi:V(G) \mapsto \mathcal{C}.$ A vertex colouring is said to be a \textit{proper vertex colouring} of a graph $G$ if no two distinct adjacent vertices have the same colour. The cardinality of a minimum set of distinct colours in a proper vertex colouring of $G$ is called the \textit{chromatic number} of $G$ and is denoted $\chi(G).$ We call such a colouring a $\chi$-colouring or a \textit{chromatic colouring} of $G.$ A chromatic colouring of $G$ is denoted by $\varphi_\chi(G)$. Generally, a graph $G$ of order $n$ is $k$-colourable for any $k \geq \chi(G).$ Unless mentioned otherwise, a set of colours will mean a set of distinct colours.
Generally, the set $c(V(G)) \subseteq \mathcal{C}.$ For each $c_i \in \mathcal{C},$ the set $\{v \in V(G): c(v)=c_i\}$ is called a colour class of the colouring of $G.$ If $\mathcal{C}$ is the chromatic set it can be agreed that $c(G)$ means the set $c(V(G))$ hence, $c(G) = \mathcal{C}$ and $|c(G)| = |\mathcal{C}|.$ For a set of vertices $X\subseteq V(G),$ the subgraph induced by $X$ is denoted by $\langle X\rangle.$ The colouring of $\langle X\rangle$ permitted by $\varphi:V(G) \mapsto \mathcal{C}$ is denoted by $c(\langle X\rangle).$ The number of times a colour $c_i$ is allocated to vertices of a graph $G$ is denoted by $\theta_G(c_i)$ or, if the context is clear, simply $\theta(c_i).$
Index labelling the elements of a graph, such as the vertices, say $v_1,v_2,v_3,\dots,v_n$ (or $v_i,$ where $i = 1,2,3,\dots,n$), is called minimum parameter indexing. Similarly, a \textit{minimum parameter colouring} of a graph $G$ is a proper colouring of $G$ which consists of the colours $c_i;\ 1\le i\le \ell.$
In this paper, Section~\ref{s2} introduces a new parameter called the \emph{chromatic completion number} of a graph $G.$ Subsection~\ref{sub2.1} presents results on the chromatic completion number for a few known classes of cycle derivative graphs. Subsection~\ref{sub2.2} presents results on the chromatic completion number of helm graphs. Finally, in Section~\ref{s3}, a few suggestions for future research on this problem are discussed.
\section{Chromatic completion number of cycle derivative graphs}
\label{s2}
In an improper colouring, an edge $uv$ for which $c(u)=c(v)$ is called a \emph{bad edge}. See \cite{banda} for an introduction to $k$-defect colouring and corresponding polynomials. For a colour set $\mathcal{C}$ with $|\mathcal{C}| \geq \chi (G),$ a graph $G$ can always be coloured properly, that is, such that no bad edge results. Also, for a set of colours $\mathcal{C}$ with $|\mathcal{C}| = \chi (G) \geq 2,$ a graph $G$ of order $n$ with corresponding chromatic polynomial $\mathcal{P}_G(\lambda)$ can always be coloured properly in $\mathcal{P}_G(\lambda)$ distinct ways. The \emph{chromatic completion number} of a graph $G,$ denoted by $\zeta(G),$ is the maximum number of edges, over all chromatic colourings, that can be added to $G$ without creating a bad edge. The resultant graph $G_\zeta$ is called a \emph{chromatic completion graph} of $G,$ and the additional edges are called \emph{chromatic completion edges}. It is trivially true that $G\subseteq G_\zeta.$ Clearly, for a complete graph $K_n,$ $\zeta(K_n)=0.$ In fact, for any complete $\ell$-partite graph $H=K_{n_1,n_2,n_3,\dots,n_\ell},$ $\zeta(H)=0.$ Hereafter, no graph considered will be a complete $\ell$-partite graph. For graphs $G$ and $H$ of order $n$ with $\varepsilon(G)\geq \varepsilon(H),$ no relation between $\zeta(G)$ and $\zeta(H)$ could be found. The first result is straightforward.
\begin{theorem}
\label{thm2.1}
A graph $G$ of order $n$ is not complete if and only if $G_\zeta$ is not complete.
\end{theorem}
\begin{proof}
Let $G$ be of order $n,$ then $G_\zeta$ is of order $n.$ If $G_\zeta \ncong K_n$ then $G\ncong K_n,$ since $G\subseteq G_\zeta.$
Conversely, if $G$ is not complete then $\chi(G)< n$ hence, for any chromatic colouring of $G,$ at least one pair of distinct vertices say $u$ and $v$ exists such that $c(u)=c(v).$ Therefore, edge $uv\notin E(G_\zeta)$ implying $G_\zeta$ is not complete.
\end{proof}
Theorem~\ref{thm2.1} can be stated differently, i.e., $G$ is complete if and only if $G_\zeta$ is complete.
The next lemma does not necessarily correspond to a chromatic completion graph. It represents a \emph{pseudo completion graph} corresponding to a chromatic colouring, $\varphi:V(G)\mapsto \mathcal{C}.$
\begin{lemma}
\label{lem2.2}
For a chromatic colouring $\varphi:V(G)\mapsto \mathcal{C},$ a pseudo completion graph $H(\varphi)= K_{n_1,n_2,n_3,\dots,n_\chi}$ exists such that, $$\varepsilon(H(\varphi))-\varepsilon(G) =\sum\limits_{i=1}^{\chi-1}\sum\limits_{j=i+1}^{\chi}\theta_G(c_i)\theta_G(c_j)-\varepsilon(G) \leq \zeta(G).$$
\end{lemma}
\begin{proof}
For any chromatic colouring $\varphi:V(G)\mapsto \mathcal{C},$ the graph, $H(\varphi) = K_{\theta_G(c_1),\theta_G(c_2),\dots,\theta_G(c_\chi)}$ is a corresponding pseudo completion graph. Therefore the result as stated.
\end{proof}
Now we are ready for a main result in the form of a corollary, which is a direct consequence of Lemma~\ref{lem2.2}.
\begin{corollary}
\label{col2.3}
Let $G$ be a graph. Then
\begin{eqnarray*}
\zeta(G) & = & \max\{\varepsilon(H(\varphi)) -\varepsilon(G)\} \text{ over all chromatic colourings } \varphi:V(G)\mapsto \mathcal{C}.
\end{eqnarray*}
\end{corollary}
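Corollary~\ref{col2.3} suggests a brute-force computation of $\zeta(G)$ for small graphs: enumerate all chromatic colourings and maximise $\varepsilon(H(\varphi)) - \varepsilon(G) = \sum_{i<j}\theta(c_i)\theta(c_j) - \varepsilon(G).$ The sketch below is our own illustration (the graph encoding and function names are not from the paper).

```python
from itertools import product

def zeta_brute(n, edges, chi):
    """zeta(G) via Corollary 2.3: maximise eps(H(phi)) - eps(G), where
    eps(H(phi)) = sum_{i<j} theta(c_i) * theta(c_j), over chromatic colourings phi."""
    best = -1
    for col in product(range(chi), repeat=n):
        if any(col[u] == col[v] for u, v in edges):
            continue  # improper colouring: skip
        theta = [col.count(c) for c in range(chi)]
        pairs = sum(theta[i] * theta[j]
                    for i in range(chi) for j in range(i + 1, chi))
        best = max(best, pairs - len(edges))
    return best

# The cycle C_5 has chi = 3; every chromatic colouring has class sizes (1,2,2),
# so zeta(C_5) = (1*2 + 1*2 + 2*2) - 5 = 3.
c5_edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
print(zeta_brute(5, c5_edges, 3))  # prints 3
```

A second quick check: for the path $P_4$ ($\chi = 2$) the only chromatic colouring pattern has class sizes $(2,2),$ giving $\zeta(P_4) = 4 - 3 = 1.$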
\begin{theorem}
\label{thm2.4}
Let $G$ be a graph. Then $\zeta(G)\leq \varepsilon(\overline{G})$, and equality holds if and only if $G$ is complete.
\end{theorem}
\begin{proof}
Since a chromatic completion edge $e\notin E(G),$ it follows that $e\in E(\overline{G})$ hence, $\zeta(G)\leq \varepsilon(\overline{G}).$ If equality holds, then every edge of $\overline{G}$ is a chromatic completion edge, so $G_\zeta = K_n$ and, by Theorem~\ref{thm2.1}, $G$ is complete. Conversely, if $G$ is complete then $\zeta(G) = 0 = \varepsilon(\overline{G}).$
\end{proof}
An immediate consequence of Theorem~\ref{thm2.4}, read with the definition of chromatic completion, is that equality holds for a graph $G$ if and only if, for all pairs of distinct vertices $u$, $v$ for which the edge $uv \notin E(G),$ we have $c(u)\neq c(v)$.
For a positive integer $n \geq 2$ and $2\leq \ell \leq n$ let integers, \\$1\leq a_1,a_2,a_3,\dots,a_{\ell-r}, a'_1,a'_2,a'_3,\dots,a'_r \leq n-1$ be such that\\ $n=\sum\limits_{i=1}^{\ell-r}a_i + \sum\limits_{j=1}^{r}a'_j.$ Then $(a_1,a_2,a_3,\dots,a_{\ell-r}, a'_1,a'_2,a'_3,\dots,a'_r)$ is called an $\ell$-partition of $n$ and $\sum\limits_{i=1}^{\ell-r-1}\sum\limits_{k=i+1}^{\ell-r}a_ia_k + \sum\limits_{i=1}^{\ell-r}\sum\limits_{j=1}^{r}a_ia'_j + \sum\limits_{j=1}^{r-1}\sum\limits_{k=j+1}^{r}a'_ja'_k$ is called the \emph{sum of permutated term products} of the $\ell$-partition of $n.$
To illustrate the concepts, consider $n=2.$ Since $(1,1)$ is the only $2$-partition of 2, it follows that $1\times 1=1$ is the only sum of permutated term products (a single product for $n=2$). For $n=5$ and by the commutative law there are two distinct possible $3$-partitions, namely $(1,1,3)$ and $(1,2,2).$ Hence, the two distinct sums of permutated term products are equal to 7 and 8. For $n=8$ and by the commutative law there are five distinct possible $3$-partitions, namely $(1,1,6),$ $(1,2,5),$ $(1,3,4),$ $(2,3,3)$ and $(2,2,4),$ with corresponding sums of permutated term products equal to 13, 17, 19, 21 and 20, respectively.
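The arithmetic in these examples is easy to check mechanically. The helper below is our own illustration (not part of the paper); it computes the sum of pairwise term products of a partition.

```python
from itertools import combinations

def sum_of_term_products(partition):
    """Sum of products a_i * a_k over all unordered pairs of parts."""
    return sum(a * b for a, b in combinations(partition, 2))

# The 3-partitions of n = 8 and their sums of permutated term products.
for part in [(1, 1, 6), (1, 2, 5), (1, 3, 4), (2, 3, 3), (2, 2, 4)]:
    print(part, sum_of_term_products(part))  # 13, 17, 19, 21, 20
```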
\begin{definition}
\label{def2.1}
{\rm For two positive integers $2\leq \ell \leq n,$ write $n = \ell\lfloor\frac{n}{\ell}\rfloor + r,$ where the integer remainder $r$ satisfies $\ell> r\geq 0$. Hence, $n= \underbrace{\lfloor\frac{n}{\ell}\rfloor+\lfloor\frac{n}{\ell}\rfloor+\cdots+\lfloor\frac{n}{\ell}\rfloor}_{(\ell-r)-terms} +\underbrace{\lceil\frac{n}{\ell}\rceil +\lceil\frac{n}{\ell}\rceil+\cdots+\lceil\frac{n}{\ell}\rceil}_{(r\geq 0)-terms}.$ This specific $\ell$-partition, $(\underbrace{\lfloor\frac{n}{\ell}\rfloor,\lfloor\frac{n}{\ell}\rfloor,\dots,\lfloor\frac{n}{\ell}\rfloor}_{(\ell-r)-terms},\underbrace{\lceil\frac{n}{\ell}\rceil,\lceil\frac{n}{\ell}\rceil,\dots,\lceil\frac{n}{\ell}\rceil}_{(r\geq 0)-terms})$ is called a \emph{completion $\ell$-partition} of $n.$}
\end{definition}
The next theorem is a number theoretic result which finds application in the study of chromatic completion of graphs. To ease the formulation of the next result, let $t_i=\lfloor\frac{n}{\ell}\rfloor,$ $i=1,2,3,\dots,(\ell-r)$ and $t'_j=\lceil\frac{n}{\ell}\rceil,$ $j=1,2,3,\dots,r.$ Call $ \mathcal{L}=\sum\limits_{i=1}^{\ell-r-1}\sum\limits_{k=i+1}^{\ell-r}t_it_k + \sum\limits_{i=1}^{\ell-r}\sum\limits_{j=1}^{r}t_it'_j + \sum\limits_{j=1}^{r-1}\sum\limits_{k=j+1}^{r}t'_jt'_k$ the \emph{$\ell$-completion sum-product} of $n.$
\begin{theorem}[Lucky's Theorem]\footnote{Dedicated to late Lucky Mahlalela who was a disabled, freelance traffic pointsman in the City of Tshwane. Sadly he was brutally murdered.}
For a positive integer $n \geq 2$ and $2\leq p \leq n$ let integers $1\leq a_1,a_2,a_3,\dots,a_{p-r}, a'_1,a'_2,a'_3,\dots,a'_r \leq n-1$ be such that $n=\sum\limits_{i=1}^{p-r}a_i + \sum\limits_{j=1}^{r}a'_j.$ Then the $p$-completion sum-product satisfies $\mathcal{L} = \max\Bigl\{\sum\limits_{i=1}^{p-r-1}\sum\limits_{k=i+1}^{p-r}a_ia_k + \sum\limits_{i=1}^{p-r}\sum\limits_{j=1}^{r}a_ia'_j + \sum\limits_{j=1}^{r-1}\sum\limits_{k=j+1}^{r}a'_ja'_k\Bigr\},$ the maximum being taken over all possible $p$-partitions $n=\sum\limits_{i=1}^{p-r}a_i + \sum\limits_{j=1}^{r}a'_j.$
\label{thm2.5}
\end{theorem}
\begin{proof}
Let $n,p \in \Bbb N$, $2 \leq p \leq n.$ Since addition and multiplication are commutative, we may assume that $1 \leq a_1\leq a_2 \leq a_3 \leq\cdots \leq a_{p-r}\leq a'_1\leq a'_2\leq a'_3\leq \cdots \leq a'_r \leq n-1$ and that $n=\sum\limits_{i=1}^{p-r}a_i + \sum\limits_{j=1}^{r}a'_j.$
For $p=2,$ consider $a_1=x$ and $a'_1 =n-x.$ Then $a_1\times a'_1=x(n-x),$ which attains its maximum $\frac{n}{2}\times \frac{n}{2}$ at $x= \frac{n}{2}.$ Restricting to integer values, the maximum is attained at the ordered pair $(\lfloor \frac{n}{2}\rfloor, \lceil \frac{n}{2}\rceil)$ (which reads $(\frac{n}{2},\frac{n}{2})$ for $n$ even). Hence, the maximum sum of permutated term products is attained at the completion 2-partition of $n,$ so the result holds for $p=2.$ Assume it holds for $p = q \in \Bbb N.$ That is, for integers $1 \leq a_1,a_2,a_3,\dots,a_{q-r}, a'_1,a'_2,a'_3,\dots,a'_r \leq n-1$ such that:
$n=\sum\limits_{i=1}^{q-r}a_i + \sum\limits_{j=1}^{r}a'_j,$ the $q$-completion sum-product satisfies \\ $\mathcal{L} = \max\{\sum\limits_{i=1}^{q-r-1}\sum\limits_{k=i+1}^{q-r}a_ia_k + \sum\limits_{i=1}^{q-r}\sum\limits_{j=1}^{r}a_ia'_j + \sum\limits_{j=1}^{r-1}\sum\limits_{k=j+1}^{r}a'_ja'_k\}$ over all possible $n=\sum\limits_{i=1}^{q-r}a_i + \sum\limits_{j=1}^{r}a'_j.$ Put differently, the aforesaid means that the \emph{sum of permutated term products} is a maximum over that particular $q$-partition. Hence, $(a_{i_{(1 \leq i \leq (q-r))}}, a'_{j_{(1 \leq j \leq r)}})$ corresponds to the completion $q$-partition of $n$ such that,
\begin{eqnarray*}
n&=& \underbrace{\lfloor\frac{n}{q}\rfloor+\lfloor\frac{n}{q}\rfloor+\cdots+\lfloor\frac{n}{q}\rfloor}_{(q-r)-terms}
+ \underbrace{\lceil\frac{n}{q}\rceil +\lceil\frac{n}{q}\rceil+\cdots+\lceil\frac{n}{q}\rceil}_{(r\geq 0)-terms}.
\end{eqnarray*}
Now consider $p=q+1.$
Case 1: If $r>0$ for $\frac{n}{q},$ determine a $(q+1)^{th}$ sum-term by reducing a sufficient number of the $\lceil \frac{n}{q}\rceil$ sum-terms by $1$ each to obtain terms of the form $\lfloor \frac{n}{q+1}\rfloor$ or $\lceil \frac{n}{q+1}\rceil.$ The aforesaid is always possible. Each pair of terms in the $(q+1)$-partition corresponds to a $2$-completion sum-product of $\lfloor \frac{n}{q+1}\rfloor,$ $\lfloor \frac{n}{q+1}\rfloor$ or $\lfloor \frac{n}{q+1}\rfloor,$ $\lceil \frac{n}{q+1}\rceil,$ or $\lceil \frac{n}{q+1}\rceil,$ $\lceil \frac{n}{q+1}\rceil,$ so it follows that the maximum sum of permutated term products has been obtained between all pairs (follows from the case $p = 2$). It follows that the sum of the maximums yields a maximum over the sum of pairwise products hence, a maximum sum of permutated term products has been obtained. Furthermore, the $(q+1)$-partition obtained corresponds to the terms required for a $(q+1)$-completion sum-product of $n.$ Therefore, the result holds for $p = q+1$ thus it holds for any $2\leq p \leq n$ for which $\frac{n}{q}$ has $r>0.$
Case 2: Through similar reasoning the result holds for $r = 0.$
Through immediate induction it follows that the result holds for all $n \in \Bbb N,$ $n \geq 2.$ That concludes the proof.
\end{proof}
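For small parameters, Lucky's Theorem can be confirmed by exhaustive search. The following self-contained sketch (illustrative helper names; it re-declares the enumeration helpers) compares the completion $\ell$-partition of Definition~\ref{def2.1} against all $\ell$-partitions:

```python
from itertools import combinations

def ell_partitions(n, ell, smallest=1):
    """Yield every non-decreasing ell-term partition of n into positive parts."""
    if ell == 1:
        if n >= smallest:
            yield (n,)
        return
    for a in range(smallest, n // ell + 1):
        for rest in ell_partitions(n - a, ell - 1, a):
            yield (a,) + rest

def sum_of_term_products(parts):
    return sum(a * b for a, b in combinations(parts, 2))

def completion_partition(n, ell):
    """(ell - r) copies of floor(n/ell) and r copies of ceil(n/ell), r = n mod ell."""
    q, r = divmod(n, ell)
    return (q,) * (ell - r) + (q + 1,) * r

def lucky_holds(n, ell):
    best = max(sum_of_term_products(p) for p in ell_partitions(n, ell))
    return sum_of_term_products(completion_partition(n, ell)) == best
```

For instance, `all(lucky_holds(n, ell) for n in range(2, 15) for ell in range(2, n + 1))` evaluates to `True`. This reflects the identity that the sum of pairwise products equals $\frac12(n^2-\sum_i a_i^2)$, so the maximum corresponds to the most balanced partition.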
Theorem~\ref{thm2.5} leads to a lemma in which each term in a sum-term partition corresponds to a distinct colour class. Hence, if the colours are $c_i,$ $1\leq i \leq \ell$ then, $\theta(c_i) = \lfloor \frac{n}{\ell}\rfloor$ or $\lceil \frac{n}{\ell}\rceil.$
\begin{lemma}
\label{lem2.6}
If a subset of $m$ vertices say, $X \subseteq V(G)$ can be chromatically coloured by $t$ distinct colours and if the graph structure permits such, then allocate colours as follows:
\begin{enumerate}[(a)]
\item For $t$ vertex subsets, each of cardinality $s= \lfloor \frac{m}{t}\rfloor,$ allocate a distinct colour; followed by:
\item Colouring each of the $r = m-ts\geq 0$ remaining uncoloured vertices with a distinct colour (at most one additional vertex per colour class).
\end{enumerate}
This chromatic colouring permits the maximum number of chromatic completion edges between the vertices in $X$ amongst all possible chromatic colourings of $X$.
\end{lemma}
Lemma~\ref{lem2.6} can be applied to a set of vertices which induce a connected graph by assigning a proper colouring. Lemma~\ref{lem2.6} also has an interesting implication. This is stated as a corollary.
\begin{corollary}
\label{col2.7}
Let $G$ be a graph. Then
\begin{enumerate}[(i)]
\item a chromatic completion graph $G_\zeta$ is not unique.
\item a set of chromatic completion edges of maximum cardinality is not unique.
\end{enumerate}
\end{corollary}
Another interesting implication of Lemma~\ref{lem2.6} is that for any $n\in \Bbb N$ the complete $\ell$-partite graph of order $n$ given by $$K_{(\underbrace{\lfloor\frac{n}{\ell}\rfloor,\lfloor\frac{n}{\ell}\rfloor,\cdots,\lfloor\frac{n}{\ell}\rfloor}_{(\ell-r)-terms}, \underbrace{\lceil\frac{n}{\ell}\rceil,\lceil\frac{n}{\ell}\rceil,\cdots,\lceil\frac{n}{\ell}\rceil}_{(r\geq 0)-terms})}, \ n = \ell\lfloor\tfrac{n}{\ell}\rfloor + r,\ 0\leq r<\ell,$$ has the maximum number of edges amongst all complete $\ell$-partite graphs of order $n.$ Furthermore, it is a direct consequence of the proof of Theorem~\ref{thm2.5} that for those graphs which permit the colour allocation prescribed by Lemma~\ref{lem2.6}, the maximum number of chromatic completion edges between the vertices in $X$ amongst all possible chromatic colourings of $X$ is unique hence, well-defined.
It is important to note that not all graphs permit the colour allocation prescribed by Lemma~\ref{lem2.6}. For such graphs an \emph{optimal near-completion $\ell$-partition} is always possible. The optimal near-completion $\ell$-partition follows from the fact that for $n+1,$ even, we have that $1\times n<2\times (n-1)< 3\times (n-2)<\cdots < (\frac{n+1}{2})^2.$ Similarly for $n+1,$ odd, we have that, $1\times n<2\times (n-1)< 3\times (n-2)<\cdots < \lfloor \frac{n+1}{2}\rfloor \times \lceil \frac{n+1}{2}\rceil.$ This then yields the unique chromatic completion number. See discussion following Proposition~\ref{prop2.8}.
\subsection{Chromatic completion number of certain graphs}
\label{sub2.1}
The result for acyclic graphs and even cyclic graphs (graphs containing only even cycles) $G$ of order $n$ is straightforward, i.e.\ $\zeta(G) = \theta(c_1)\theta(c_2)-\varepsilon(G).$ For example, for an even cycle graph $C_n$ it follows that $\zeta(C_n)=\frac{n}{2}\times \frac{n}{2}-n=\frac{n(n-4)}{4}.$ This section will henceforth, unless stated otherwise, consider graphs which contain at least one odd cycle, thus graphs for which $\chi(G)\geq 3.$
Let the vertices of a cycle graph $C_n$ be labeled $v_i$, $1\leq i \leq n.$
A sunlet graph $Sl_n,$ $n\geq 3$ is obtained from a cycle graph $C_n$ by attaching a pendant vertex $u_i$ to each cycle vertex $v_i,$ $1\leq i \leq n.$ A graph $W_{1,n} = C_n+K_1,$ $n\geq 3$ is called a wheel graph. The edges and vertices of $C_n$ are respectively called rim edges and rim vertices. The vertex corresponding to $K_1$ is called the central vertex say, $v_0$ and the edges incident with the central vertex are called spokes.
Since a complete graph $K_n$ is obtained from a cycle graph $C_n$ by adding all possible chords, a complete graph is a cycle derivative graph as well. Recall that a sun graph $S_n,$ $n\geq 2$ is obtained from the complete graph $K_n$ by adding vertices $u_i$ and the edges $u_iv_i,$ $u_iv_{i+1},$ $1\leq i\leq n$ and where modular arithmetic at edge $v_nv_1$ has known meaning. Note that $S_2\cong K_3 \cong C_3$ and is therefore treated as $C_3.$
\begin{proposition}
\label{prop2.8}
\begin{enumerate}[(i)]
\item Let $C_n$ be an odd cycle graph and $n\geq 3.$ Then
\begin{eqnarray*}
\zeta(C_n) &=&
\begin{cases}
n(\frac{n}{3}-1), &\text {if $n\equiv 0 \pmod 3,$}\\
(n-2)\frac{n-5}{3} + \lceil \frac{n-2}{2}\rceil +1, &\text {if $n\equiv 2 \pmod 3,$}\\
(n-1)\frac{n-4}{3} + \lceil \frac{2}{3}(n-5)\rceil +1, & \text {if $n\equiv 1 \pmod 3.$}
\end{cases}
\end{eqnarray*}
\item Let $Sl_n$ be a sunlet graph and $n\geq 3.$ Then $\zeta(Sl_n)= 3\zeta(C_n)+n.$
\item Let $W_{1,n}$ be a wheel graph and $n\geq 3.$ Then
\begin{eqnarray*}
\zeta(W_{1,n}) &=&
\begin{cases}
\frac{n(n-4)}{4}, &\text {if $n$ is even,}\\
\zeta(C_n), & \text {if $n$ is odd.}
\end{cases}
\end{eqnarray*}
\item Let $S_n$ be a sun graph and $n\geq 3.$ Then $\zeta(S_n)= \frac{n(3n-7)}{2}.$
\end{enumerate}
\end{proposition}
\begin{proof}
\begin{enumerate}[(i)]
\item Let $\Bbb{N}^{odd} = \{n\in\Bbb N: n~\text{odd},~n\geq 3\}.$ Let $\Bbb{N}_1 =\{n_i \in \Bbb{N}^{odd}: n_i\equiv 0\pmod 3\},$ $\Bbb{N}_2 =\{n_j \in \Bbb{N}^{odd}: n_j\equiv 1\pmod 3\},$ $\Bbb{N}_3 =\{n_k \in \Bbb{N}^{odd}: n_k\equiv 2\pmod 3\}.$ Clearly, $\Bbb{N}^{odd} = \Bbb{N}_1 \cup \Bbb{N}_2 \cup \Bbb{N}_3.$
Part 1: Let $n =3t,$ $t=1,3,5,7,\dots.$ Hence, $n\equiv 0\pmod 3$ and $\chi(C_n)=3.$ For the colour set $\mathcal{C} = \{c_1,c_2,c_3\},$ without loss of generality and by symmetry consideration, the extremal number of vertices coloured $c_3$ is either $\theta_{C_n}(c_3) =1$ or $\theta_{C_n}(c_3) = \frac{n}{3}.$
Case 1 ($\theta_{C_n}(c_3)=1$): without loss of generality, let $c(v_n)=c_3.$ Note that $C_n-v_n \cong P_{n-1}$ and $n-1$ is even. From Theorem~\ref{thm2.1} it follows that $\varepsilon(H(\varphi)) -\varepsilon(C_n) =(n-1) +\frac{(n-1)^2}{4} -(n-2)-2=\frac{n^2-2n-3}{4}.$
Case 2 ($\theta_{C_n}(c_1)=\theta_{C_n}(c_2)=\theta_{C_n}(c_3)=\frac{n}{3}$): now $\varepsilon(H(\varphi)) -\varepsilon(C_n) = n(\frac{n}{3}-1).$
Since, for $n\geq 3$ it follows that, $n^2-6n+9\geq 0 \Rightarrow 4n^2-12n\geq 3n^2-6n-9 \Rightarrow \frac{n^2-3n}{3} \geq \frac{n^2-2n-3}{4}$ we have, $\zeta(C_n) \geq n(\frac{n}{3}-1).$ Through similar reasoning and immediate induction for $1 \leq \theta_{C_n}(c_3) < \frac{n}{3}$ it is concluded that, $\zeta(C_n) = n(\frac{n}{3}-1) =n(t-1).$
Part 2: Consider $C_n,$ $n =3t,$ $t=1,3,5,7,\dots$ as in (i) Part 1 with the extremal repetitive colouring, $c(v_1)=c_1,$ $c(v_2)=c_2,$ $c(v_3)=c_3,\cdots,$ $c(v_n)=c_3.$ Now add vertices $v_{n+1}$ and $v_{n+2}$ to obtain $C_{n+2},$ and note that the edge $v_nv_1$ is now a chord which represents a count of $+1.$ The additional vertices can only be coloured by the ordered pairs, $(c(v_{n+1}),c(v_{n+2})) = (c_1,c_2)$ or $(c_1,c_3)$ or $(c_2,c_3).$ The number of chromatic completion edges that can be added with an end vertex $v_{n+1}$ or $v_{n+2}$ is exactly $\lceil \frac{n}{2}\rceil.$ Hence, from (i) Part 1, $\zeta(C_{n+2}) = n(\frac{n}{3}-1) + \lceil \frac{n}{2}\rceil +1.$ Finally, standardising to the conventional notation gives the result for $n\equiv 2\pmod 3,$ i.e.\ $\zeta(C_n) = (n-2)(\frac{n-2}{3}-1) + \lceil \frac{n-2}{2}\rceil +1.$
Part 3: Let $n =3s +1,$ $s=2,4,6,8,\dots.$ Hence, $n\equiv 1\pmod 3$ and $\chi(C_n)=3.$ Similar to (i) Part 1, colour vertices $v_i,$ $1\leq i \leq n-1$ with the extremal repetitive colouring, $c(v_1)=c_1,$ $c(v_2)=c_2,$ $c(v_3)=c_3,\cdots ,$ $c(v_{n-1})=c_3.$ For the cycle graph $C_{n-1}$ it follows from (i) Part 1 that the chromatic completion number is $\zeta(C_{n-1})= (n-1)(\frac{n-1}{3}-1) =(n-1)\frac{n-4}{3}.$ In $C_n$ the edge $v_{n-1}v_1$ is a chord and corresponds to a count of $+1.$ The vertex $v_n$ can only be coloured $c_2.$ The number of chromatic completion edges from vertex $v_n$ is exactly $\lceil \frac{2}{3}(n-5)\rceil.$ Therefore, $\zeta(C_n) = (n-1)\frac{n-4}{3} + \lceil \frac{2}{3}(n-5)\rceil +1.$
\item Colour the cycle subgraph as in (i). Colour the pendant vertices through a clockwise rotation of the cycle colouring by one vertex index, that is, $c(u_i)= c(v_{i+1}),$ where modular arithmetic for $v_n,$ $v_1$ has known meaning. Since the pendant vertices form an independent set carrying the same colour distribution as the cycle, the number of chromatic completion edges permitted amongst the pendant vertices \emph{per se} is $\zeta(C_n)+n$: the $\zeta(C_n)$ cross-colour non-edges of the cycle colouring together with the $n$ positions of the cycle edges, which are non-edges amongst the pendants. The cycle graph itself permits $\zeta(C_n)$ chromatic completion edges. Finally, the number of chromatic completion edges permitted between the pendant vertices and the cycle vertices amounts to $\zeta(C_n)$ as well. Hence, $\zeta(Sl_n)=3\zeta(C_n)+n.$
\item Part 1: Because the central vertex is adjacent to all other vertices the chromatic completion edges can only come from the even rim cycle $C_n$. The result follows from Theorem~\ref{thm2.1}.
Part 2: As in (i)Part 1, it follows that only the odd rim cycle can contribute to chromatic completion edges. Hence, the result follows from (i).
\item For a complete graph $K_n,$ $n\geq 3,$ each $v_i$ can uniquely be coloured $c_i,$ $1\leq i\leq n.$ From Lemma~\ref{lem2.6} it follows that each vertex $u_i$ can be uniquely coloured some $c_j,$ $c_j\neq c(v_i),$ $c_j\neq c(v_{i+1}),$ $1\leq i \leq n$ and where modular arithmetic at edge $v_nv_1$ has known meaning. Because the set $\{u_i:1\leq i \leq n\}$ is an independent set and each $u_i$ is uniquely coloured, the chromatic completion permits a complete graph on the $u_i$'s. This gives $\frac{1}{2}n(n-1)$ chromatic completion edges. Furthermore, each $u_i$ may be linked to a further $n-3$ vertices of $K_n.$ Hence, the total number of chromatic completion edges is $\zeta(S_n) =\frac{1}{2}n(n-1) + n(n-3) = \frac{n(3n-7)}{2},$ $n\geq 3.$
\end{enumerate}
\end{proof}
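The small cases of Proposition~\ref{prop2.8} can be verified directly from the definition of $\zeta$ as the maximum number of addable edges between differently coloured, non-adjacent vertices. The sketch below (illustrative names; a naive $O(\chi^n)$ search, so only small orders are feasible) builds the edge lists and brute-forces $\zeta$:

```python
from itertools import combinations, product

def zeta_bruteforce(order, edges, chi):
    """Max number of non-adjacent, differently coloured vertex pairs,
    taken over all proper chi-colourings of the graph."""
    edge_set = {frozenset(e) for e in edges}
    non_edges = [p for p in combinations(range(order), 2)
                 if frozenset(p) not in edge_set]
    best = 0
    for c in product(range(chi), repeat=order):
        if any(c[u] == c[v] for u, v in edges):
            continue  # not a proper colouring
        best = max(best, sum(1 for u, v in non_edges if c[u] != c[v]))
    return best

def cycle_edges(n):
    return [(i, (i + 1) % n) for i in range(n)]

def wheel_edges(n):  # W_{1,n}: vertex n is the central vertex
    return cycle_edges(n) + [(i, n) for i in range(n)]

def sun_edges(n):    # S_n: K_n on 0..n-1 plus u_i = n+i joined to v_i, v_{i+1}
    kn = list(combinations(range(n), 2))
    return kn + [(i, n + i) for i in range(n)] + [((i + 1) % n, n + i) for i in range(n)]
```

For example, `zeta_bruteforce(5, cycle_edges(5), 3)` returns 3 and `zeta_bruteforce(9, cycle_edges(9), 3)` returns 18, matching (i).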
\begin{note}[Optimal near-completion $\ell$-partition]
{\rm Consider the graph $K_1+C_{21}$ with $V(K_1)=\{v\}.$ Proposition~\ref{prop2.8}(i), together with the fact that $N(v)=V(C_{21}),$ prohibits the allocation prescribed by Lemma~\ref{lem2.6}. The optimal near-completion $\ell$-partition allows for, say, $\theta(c_1)=\theta(c_2)=\theta(c_3)=7$ and $\theta(c_4)=1$ with $c(v)=c_4.$ Clearly, for a graph $G$ of order $n\geq 2$ with $\chi(G)\geq 2,$ for all nested graphs of structure $$\underbrace{K_1+(K_1+(K_1+\cdots +(K_1+ G)))}_{k-times}$$ only an optimal near-completion $\ell$-partition can be found.}
\end{note}
\subsection{Chromatic completion number of helm graphs}
\label{sub2.2}
A helm graph $H_{1,n},$ $n\geq 3,$ is obtained from the wheel graph $W_{1,n}$ by adding a pendant vertex $u_i$ to each rim vertex $v_i$, $1\leq i \leq n.$ Helm graphs derived from wheel graphs $W_{1,n}$ for even $n$ will be discussed first. Clearly $n\geq 4.$ Let $\Bbb{N}^{even}=\{n\in\Bbb N: n~\text{even},~n\geq 4\}.$ Let $\Bbb{N}_1=\{n_i\in \Bbb{N}^{even}:n_i=4+6i,~ i=0,1,2,\dots\},$ $\Bbb{N}_2=\{n_j \in \Bbb{N}^{even}: n_j=6+6j,~ j=0,1,2,\dots\}$ and $\Bbb{N}_3=\{n_k \in \Bbb{N}^{even}:n_k=8+6k,~ k=0,1,2,\dots\}.$ Clearly, $\Bbb{N}^{even} = \Bbb{N}_1 \cup \Bbb{N}_2 \cup \Bbb{N}_3.$
\begin{proposition}
\label{prop2.9}
Let $H_{1,n_i}$ be a helm graph, $n_i$ even and $n_i\geq 4.$ Then
\begin{eqnarray*}
\zeta(H_{1,n_i}) &=&
\begin{cases}
\frac{(4n_i-1)(n_i-1)}{3}, &\text {if $n_i \in \Bbb{N}_1,$}\\
\frac{n_i(4n_i-5)}{3}, &\text {if $n_i \in \Bbb{N}_2\cup \Bbb{N}_3.$}
\end{cases}
\end{eqnarray*}
\end{proposition}
\begin{proof}
Part 1: For $n_i \in \Bbb{N}_1$ the colouring $\theta(c_1)=\theta(c_2)=\theta(c_3)=\frac{2n_i+1}{3}$ is always possible. Also, $\varepsilon(H_{1,n_i}) = 3n_i.$ Thus, from Lucky's theorem, Theorem~\ref{thm2.5} read with Corollary~\ref{col2.3} and Lemma~\ref{lem2.6} it follows that, $\zeta(H_{1,n_i}) = 3(\frac{2n_i+1}{3})^2 - 3n_i = \frac{(4n_i-1)(n_i-1)}{3}.$
Part 2: For $n_i \in \Bbb{N}_2$ the colouring $\theta(c_1)=\theta(c_2)= \lfloor \frac{2n_i+1}{3}\rfloor = \frac{2n_i}{3}$ and $\theta(c_3)=\lceil \frac{2n_i+1}{3}\rceil =\frac{2n_i+3}{3}$ is always possible. Also, $\varepsilon(H_{1,n_i}) = 3n_i.$ By similar reasoning as in Part 1, $\zeta(H_{1,n_i}) = (\frac{2n_i}{3})^2 + 2\cdot\frac{2n_i}{3}\cdot\frac{2n_i+3}{3} - 3n_i = \frac{n_i(4n_i-5)}{3}.$
Part 3: For $n_i \in \Bbb{N}_3$ the colouring $\theta(c_1)=\lfloor \frac{2n_i+1}{3}\rfloor = \frac{2n_i -1}{3}$ and $\theta(c_2)=\theta(c_3)=\lceil \frac{2n_i+1}{3}\rceil =\frac{2(n_i+1)}{3}$ is always possible. Also, $\varepsilon(H_{1,n_i}) = 3n_i.$ By similar reasoning as in Part 1, $\zeta(H_{1,n_i}) = 2\cdot\frac{2n_i-1}{3}\cdot\frac{2(n_i+1)}{3} + \bigl(\frac{2(n_i+1)}{3}\bigr)^2 - 3n_i = \frac{n_i(4n_i-5)}{3}.$
\end{proof}
The diagrams in Figure~\ref{emb1} serve as illustration of the reasoning used in the proof of Proposition~\ref{prop2.9}.
\begin{figure}\label{emb1}
\end{figure}
The next results are for helm graphs $H_{1,n},$ for odd $n.$ Recall that $\Bbb{N}^{odd}=\{n\in\Bbb N: n~\text{odd},~n\geq 3\}.$ Let $\Bbb{N}'_1=\{n_i\in \Bbb{N}^{odd}:n_i=3+6i,~ i=0,1,2,\dots\},$ $\Bbb{N}'_2=\{n_j \in \Bbb{N}^{odd}: n_j= 5+6j,~ j=0,1,2,\dots\}$ and $\Bbb{N}'_3=\{n_k \in \Bbb{N}^{odd}:n_k= 7+6k,~ k=0,1,2,\dots\}.$ Clearly, $\Bbb{N}^{odd} = \Bbb{N}'_1 \cup \Bbb{N}'_2 \cup \Bbb{N}'_3.$
\begin{proposition}
\label{prop2.10}
Let $H_{1,n_i}$ be a helm graph, $n_i$ odd and $n_i\geq 3.$ Then
\begin{eqnarray*}
\zeta(H_{1,n_i}) &=&
\begin{cases}
9, &\text {if $n_i =3$},\\
\frac{3n_i(n_i-1)}{2}, &\text {if $n_i \in \Bbb{N}'_1\backslash \{3\}$ or $n_i \in \Bbb{N}'_2$ or $n_i \in \Bbb{N}'_3.$}
\end{cases}
\end{eqnarray*}
\end{proposition}
\begin{proof}
Part 1: It is easy to verify that $\zeta(H_{1,3})=9.$
Part 2(a): For $n_i \in \Bbb{N}'_1\backslash \{3\}$ the colouring $\theta(c_1)=\theta(c_2)=\theta(c_3)=\lceil \frac{2n_i+1}{4}\rceil = \frac{2(n_i+1)}{4}$ and $\theta(c_4)=\lfloor \frac{2n_i+1}{4}\rfloor =\frac{2(n_i-1)}{4}$ is always possible. Also, $\varepsilon(H_{1,n_i}) = 3n_i.$ Thus, from Lucky's theorem read with Corollary~\ref{col2.3} and Lemma~\ref{lem2.6} it follows that, $\zeta(H_{1,n_i}) = 3\bigl(\frac{2(n_i+1)}{4}\bigr)^2 + 3\cdot\frac{2(n_i+1)}{4}\cdot\frac{2(n_i-1)}{4} - 3n_i = \frac{3n_i(n_i-1)}{2}.$
Part 2(b): For $n_i \in \Bbb{N}'_2$ the colouring $\theta(c_1)=\theta(c_2)=\theta(c_3)=\lceil \frac{2n_i+1}{4}\rceil = \frac{2(n_i+1)}{4}$ and $\theta(c_4)=\lfloor \frac{2n_i+1}{4}\rfloor =\frac{2(n_i-1)}{4}$ is always possible. Also, $\varepsilon(H_{1,n_i}) = 3n_i.$ This result then follows from Part 2(a) noting, $n_i \in \Bbb{N}'_2.$
Part 2(c): For $n_i \in \Bbb{N}'_3$ the colouring $\theta(c_1)=\theta(c_2)=\theta(c_3)=\lceil \frac{2n_i+1}{4}\rceil = \frac{2(n_i+1)}{4}$ and $\theta(c_4)=\lfloor \frac{2n_i+1}{4}\rfloor =\frac{2(n_i-1)}{4}$ is always possible. Also, $\varepsilon(H_{1,n_i}) = 3n_i.$ This result then follows from Part 2(a) noting, $n_i \in \Bbb{N}'_3.$
\end{proof}
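The smallest helm cases can likewise be checked exhaustively (illustrative helper names; the search is exponential in the order, so only the smallest $n$ are practical):

```python
from itertools import combinations, product

def helm_edges(n):
    """H_{1,n}: central vertex 0, rim vertices 1..n, pendant n+i attached to rim i."""
    rim = [(i, i % n + 1) for i in range(1, n + 1)]
    spokes = [(0, i) for i in range(1, n + 1)]
    pendants = [(i, n + i) for i in range(1, n + 1)]
    return rim + spokes + pendants

def zeta_bruteforce(order, edges, chi):
    """Max number of non-adjacent, differently coloured vertex pairs,
    over all proper chi-colourings."""
    edge_set = {frozenset(e) for e in edges}
    non_edges = [p for p in combinations(range(order), 2)
                 if frozenset(p) not in edge_set]
    best = 0
    for c in product(range(chi), repeat=order):
        if any(c[u] == c[v] for u, v in edges):
            continue
        best = max(best, sum(1 for u, v in non_edges if c[u] != c[v]))
    return best
```

Here `zeta_bruteforce(7, helm_edges(3), 4)` returns 9, in agreement with Proposition~\ref{prop2.10}, and `zeta_bruteforce(9, helm_edges(4), 3)` returns 15 for the smallest even case $n_i=4$.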
The diagrams in Figure~\ref{emb2} serve as illustration of the reasoning used in the proof of Proposition~\ref{prop2.10}.
\begin{figure}\label{emb2}
\end{figure}
\section{Conclusion}
\label{s3}
In several of the proofs the technique of graph decomposition permitted by Lemma~\ref{lem2.2} and vertex partitioning permitted by Lemma~\ref{lem2.6} were incorporated. These salient techniques of proof are worthy of further research.
Essentially chromatic completion of a given graph $G$ yields a new graph $G'$ such that both $G,$ $G'$ are of the same order, $\chi(G)=\chi(G'),$ $G\ncong G'$ and $\varepsilon(G')$ is a maximum. For both a chromatic polynomial exists. It is of interest to find a relation between these chromatic polynomials if such relation exists.
Determining the chromatic completion number of a wide range of small graphs is worthy research. Research in respect of all known graph operations remains open. The behavior of chromatic completion for other derivative proper colourings such as Johan colouring (also called $\mathcal{J}$-colouring), co-colouring, Grundy colouring, harmonious colouring, complete colouring, exact colouring, star colouring and others offers a wide scope for further research. Relations between the corresponding derivative chromatic completion numbers, if such exist, are open problems to be investigated. It is suggested that complexity analysis of these new parameters are worthy of further research.
The problem of characterising graphs which permit the colour allocation prescribed by Lemma~\ref{lem2.6} is a challenging open problem. Certainly all graphs $G$ of order $n\geq 4$ with $\chi(G)\geq 2,$ which has a spanning subgraph $H$ which is a star graph, prohibit the prescription of Lemma~\ref{lem2.6}.
\end{document}
\begin{document}
\title[Almost sure scattering]{Almost sure scattering for the energy-critical NLS with radial data below $H^1(\mathbb{R}^4)$}
\author[R. Killip]{Rowan Killip}
\address{Department of Mathematics, UCLA, Los Angeles, USA}
\email{[email protected]}
\author[J. Murphy]{Jason Murphy}
\address{Department of Mathematics and Statistics, Missouri University of Science and Technology, Rolla, USA}
\email{[email protected]}
\author[M. Visan]{Monica Visan}
\address{Department of Mathematics, UCLA, Los Angeles, USA}
\email{[email protected]}
\begin{abstract}
We prove almost sure global existence and scattering for the energy-critical nonlinear Schr\"odinger equation with randomized spherically symmetric initial data in $H^s(\mathbb{R}^4)$ with $\frac56<s<1$. We were inspired to consider this problem by the recent work of Dodson--L\"uhrmann--Mendelson \cite{DLM}, which treated the analogous problem for the energy-critical wave equation.
\end{abstract}
\maketitle
\section{Introduction}\label{S:intro}
We consider the initial-value problem for the defocusing cubic nonlinear Schr\"odinger equation (NLS) in four space dimensions:
\begin{equation}\label{nls}
(i\partial_t + \Delta) u = |u|^2 u.
\end{equation}
This equation is {energy-critical} in four dimensions: the rescaling that preserves the class of solutions to \eqref{nls}, namely,
\[
u(t,x)\mapsto \lambda u(\lambda^2 t,\lambda x),
\]
also leaves invariant the conserved {energy}, defined by
\begin{equation}\label{def:E}
E[u(t)] = \int_{\mathbb{R}^4}\tfrac12|\nabla u(t,x)|^2 + \tfrac14|u(t,x)|^4\,dx.
\end{equation}
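To see this invariance concretely, write $u_\lambda(t,x)=\lambda u(\lambda^2 t,\lambda x)$; substituting $y=\lambda x$, so that $dy=\lambda^4\,dx$, we find
\begin{align*}
E[u_\lambda(t)] &= \int_{\mathbb{R}^4}\tfrac12\lambda^4|(\nabla u)(\lambda^2 t,\lambda x)|^2 + \tfrac14\lambda^4|u(\lambda^2 t,\lambda x)|^4\,dx\\
&= \int_{\mathbb{R}^4}\tfrac12|(\nabla u)(\lambda^2 t,y)|^2 + \tfrac14|u(\lambda^2 t,y)|^4\,dy = E[u(\lambda^2 t)].
\end{align*}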
Equation \eqref{nls} is known to be globally well-posed in the energy space. More precisely, we have the following:
\begin{theorem}[Well-posedness in the energy space; \cite{RV, Visan4D}]\label{T:EC}\leavevmode\
Let $u_0\in \dot H^1(\mathbb{R}^4)$. Then there exists a unique global solution $u\in C_t\dot H^1_x(\mathbb{R}\times\mathbb{R}^4)$ to \eqref{nls} with $u(0)=u_0$. Moreover, the solution satisfies
$$
\|u\|_{L_t^4 L_x^8(\mathbb{R}\times\mathbb{R}^4)} \leq L(E(u_0)).
$$
Consequently, there exist scattering states $u_\pm\in \dot H^1(\mathbb{R}^4)$ such that
$$
\|u(t)-e^{it\Delta}u_\pm\|_{\dot H^1_x}\to 0\qtq{as} t\to \pm\infty.
$$
\end{theorem}
On the other hand, Christ--Colliander--Tao \cite{CCT} showed that the data-to-solution map for \eqref{nls} is discontinuous at the origin in the $H^s(\mathbb{R}^4)$ topology whenever $s<1$. In this paper, we prove that suitably randomized spherically symmetric initial data in $H^{s}(\mathbb{R}^4)$ with $\frac56<s<1$ lead to global scattering solutions almost surely.
\begin{definition}[Randomization]\label{def:random} Let $\varphi$ be a bump function supported in the unit ball such that
\[
\sum_{k\in\mathbb{Z}^4} \varphi_k(x)= 1\qtq{for all $x\in \mathbb{R}^4$, where} \varphi_k(x) := \varphi(x-k).
\]
Fix $s\in \mathbb{R}$ and $f\in H^s(\mathbb{R}^4)$. For $k\in\mathbb{Z}^4$, we define
$$
f_k:= [\hat f \varphi_k]^{\vee} = f* \check{\varphi}_k.
$$
Let $\{X_k\}_{k\in\mathbb{Z}^4}$ be independent, mean zero, real or complex Gaussian random variables of uniformly bounded variance. We will write the underlying probability space as $(\Omega,\Sigma,\mathbb{P})$. We define the randomization of $f$ via
\[
f^\omega(x) = \sum_{k\in\mathbb{Z}^4} X_k f_k(x).
\]
\end{definition}
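The algebra behind Remarks~\ref{Remarks} below can be illustrated numerically in a one-dimensional toy model, using sharp frequency blocks in place of the smooth bumps $\varphi_k$ (all names here are illustrative, not part of the text):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 256
x = np.arange(N)
f = np.exp(-((x - N / 2) / 12.0) ** 2)   # sample data: a Gaussian bump
fhat = np.fft.fft(f)
freqs = np.fft.fftfreq(N, d=1.0 / N)     # integer frequencies on the torus

# Sharp "unit-cube" decomposition: group frequencies into blocks of width B
# and attach one Gaussian weight X_k to each block.
B = 8
blocks = np.floor(freqs / B).astype(int)
gauss = {k: rng.standard_normal() for k in np.unique(blocks)}
X = np.array([gauss[k] for k in blocks])

def randomize(ghat):
    """f -> f^omega on the Fourier side: multiply each block by its X_k."""
    return X * ghat

# A bounded Fourier multiplier, e.g. m(xi) = <xi>^{-2}
m = 1.0 / (1.0 + freqs ** 2)
```

Since the randomization and the multiplier both act diagonally in frequency, `m * randomize(fhat)` coincides with `randomize(m * fhat)`, which is the toy analogue of Remark (i); linearity of the randomization can be checked the same way, and the norm equivalence of Remark (ii) follows from Parseval together with $\mathbb{E}X_k^2\sim 1$.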
For concreteness, in this paper we work with the Gaussian randomization introduced above. Use of Khinchine's inequality would allow one to treat more general randomizations, such as those satisfying
$$
\mathbb{E}(e^{\gamma X_k})\leq e^{c\gamma^2}
$$
uniformly for $\gamma\in \mathbb{R}$, $k\in \mathbb{Z}^4$, for some $c>0$.
\begin{remarks}\label{Remarks}
(i) Note that $m(i\nabla)f^\omega=[m(i\nabla)f]^\omega$ for any Fourier multiplier operator $m$. We also have $(f+g)^\omega = f^\omega + g^\omega$. \\[2mm]
(ii) For any $s\in \mathbb{R}$ we have $\mathbb{E}(\|f^\omega\|_{H^s}^2)\sim \|f\|_{H^s}^2$.\\[2mm]
(iii) Even if the function $f$ is radial, the randomization $f^\omega$ is not.
\end{remarks}
Our main result is the following:
\begin{theorem}[Almost sure scattering]\label{T} Fix $\frac56<s<1$ and a spherically symmetric function $f\in H^s(\mathbb{R}^4)$. For almost every $\omega$, there exists a unique global solution $u$ to \eqref{nls} with $u(0)=f^\omega$. Furthermore, $u$ scatters in the following sense: there exist unique $u_\pm\in H^1(\mathbb{R}^4)$ such that
\[
\lim_{t\to\pm\infty} \|u(t) - e^{it\Delta}[f^\omega + u_\pm]\|_{H^1(\mathbb{R}^4)} = 0.
\]
\end{theorem}
Uniqueness in Theorem~\ref{T} holds in the following sense: Writing $u(t) = e^{it\Delta}f^\omega + v(t)$, there exists a unique global solution $v\in C_tH^1_x(\mathbb{R}\times\mathbb{R}^4)\cap L_t^4L_x^8 (\mathbb{R}\times\mathbb{R}^4)$ to
\begin{align*}
(i\partial_t+\Delta) v = |v+e^{it\Delta}f^\omega|^2(v+e^{it\Delta}f^\omega) \qtq{with} v(0)=0.
\end{align*}
A substantial body of work on dispersive equations with randomized initial data has built up over the last two decades. Correspondingly, we must curtail our presentation here and primarily discuss works concerned with the energy-critical wave and Schr\"odinger problems on Euclidean space. See also \cite{NS} for a proof of almost sure well-posedness for the energy-critical NLS on $\mathbb{T}^3$.
Almost sure global well-posedness for supercritical data, randomized as in Definition~\ref{def:random}, was proved by Pocovnicu \cite{Pocovnicu} and Oh--Pocovnicu \cite{Oh-Pocovnicu} for the energy-critical wave equation and by Benyi--Oh--Pocovnicu \cite{Benyi-Oh-Pocovnicu} and Brereton \cite{Brereton} for the energy-critical Schr\"odinger equation. These works also establish scattering with positive probability for small randomized data. We should note that the results in \cite{Benyi-Oh-Pocovnicu, Brereton} are conditional on energy-critical bounds satisfied by the function $v$ introduced above. In \cite{DLM}, Dodson--L\"uhrmann--Mendelson proved almost sure scattering for the four-dimensional energy-critical wave equation with (large) supercritical radial data, randomized as in Definition~\ref{def:random}. In this paper we establish the analogous result for the energy-critical Schr\"odinger equation, Theorem~\ref{T}; in particular, our global well-posedness result is not conditional on bounds satisfied by $v$.
Many prior works considered energy-critical and -subcritical problems on Euclidean space, mostly with different randomizations; see, for example, \cite{BTT, Deng, Suzzoni1, Suzzoni2, LM1, LM2, Murphy, Poiret1, Poiret2, PRT, Thomann}. We wish to draw particular attention to \cite[Theorem~1.3]{PRT}, which establishes scattering for the energy-critical Schr\"odinger equation with positive probability for a particular ensemble of random initial data which is merely $L^2_x$.
The proof of Theorem~\ref{T} relies on the further development of the methods introduced in the papers described above, particularly \cite{DLM, Oh-Pocovnicu, Pocovnicu}. The first step is to regard the equation satisfied by $v$ as a perturbation of the energy-critical problem. Specifically, we write
\begin{align}\label{v eqn}
(i\partial_t+\Delta) v = |v|^2v + \bigl[|v+e^{it\Delta}f^\omega|^2(v+e^{it\Delta}f^\omega)-|v|^2v\bigr].
\end{align}
The fact that the stability theory for the energy-critical NLS is the right tool to study energy-critical equations with perturbations was first observed by X. Zhang in \cite{Zhang} and elaborated on in \cite{Matador}. The utility of this approach in the energy-critical random-data setting was first observed by O. Pocovnicu in \cite{Pocovnicu}.
Relying on Theorem~\ref{T:EC}, we develop a stability theory (along pre-existing lines) tailored to equation \eqref{v eqn}. This allows us to show that there exists a unique global solution $v$ to \eqref{v eqn} that scatters in $H^1_x$, provided we can verify two conditions: (1) $v$ satisfies uniform energy bounds on its lifespan and (2) the error $|v+e^{it\Delta}f^\omega|^2(v+e^{it\Delta}f^\omega)-|v|^2v$ is controlled in suitable scaling-critical spaces. As we will see in Section~\ref{S:wp}, the second condition above is satisfied as long as the forcing term $e^{it\Delta} f^\omega$ obeys certain spacetime bounds. Thus, building on the stability result we develop for \eqref{v eqn} (see Lemma~\ref{L:stab2}), we show in Proposition~\ref{P:v-scatter} that the proof of Theorem~\ref{T} reduces to demonstrating uniform energy bounds for $v$ on its lifespan and certain spacetime bounds for the free evolution of the randomized data.
In Section~\ref{S:E}, we show that if the forcing term $e^{it\Delta} f^\omega$ obeys some further spacetime bounds (see \eqref{X} and \eqref{3:30}), then $v$ is uniformly bounded in $H^1_x$ on its lifespan. To achieve this, we run a double bootstrap argument relying on an estimate on the energy increment of $v$ (see Lemma~\ref{P:energy}) and a Morawetz-type inequality (see Lemma~\ref{P:Morawetz}). Instead of the standard Lin--Strauss Morawetz weight $a(x)=|x|$, we prove an estimate based on the weight $a(x)=\langle x\rangle$. The additional convexity of this weight gains us much-needed time integrability for $\nabla v$, albeit in weighted spaces.
In Section~\ref{S:notation}, we prove that for spherically symmetric $f\in H^{s}(\mathbb{R}^4)$ with $s>\frac56$, the random free evolution $e^{it\Delta} f^\omega$ almost surely obeys the spacetime bounds needed to run all the arguments described above (see Proposition~\ref{P:STE2} and Proposition~\ref{P:STE}). The key ingredients here are weighted radial Strichartz estimates (see Proposition~\ref{P:RS}) and the local smoothing estimate (see Lemma~\ref{L:LS}), combined with the moment bounds in Lemma~\ref{L:SF}.
\section{Notation and useful lemmas}\label{S:notation}
We write $A\lesssim B$ to indicate that $A\leq CB$ for some constant $C>0$. Dependence of implicit constants on various parameters will be indicated with subscripts. For example $A\lesssim_\varphi B$ means that $A\leq CB$ for some $C=C(\varphi)$. Implicit constants will always be permitted to depend on the parameters in the randomization. We write $A\sim B$ if $A\lesssim B$ and $B\lesssim A$. We write $A\ll B$ if $A\leq cB$ for some small $c>0$.
We write $L_x^r$, $H_x^s$, and $W_x^{s,r}$ for the usual Lebesgue and Sobolev spaces. We also use mixed space-time norms, e.g. $L_t^q L_x^r$ and $L_t^q W_x^{s,r}$. We write $H^s_{\rad}$ to denote the space of spherically symmetric functions in $H_x^s$.
We use the standard Littlewood--Paley projection operators $P_N$ with the understanding that $P_1$ denotes the operator $P_{\leq 1}$. Summation in $N$ will always be taken over $N\in 2^{\mathbb{N}}=\{1,2,4, \ldots\}$. The Littlewood--Paley operators obey the following well-known Bernstein estimates:
\begin{lemma}[Bernstein estimates] For $1\leq r\leq q\leq\infty$ and $s\geq 0$ we have
\begin{align*}
\|\vert\nabla\vert^s P_Nu\|_{L_x^r(\mathbb{R}^d)}&\lesssim N^s \|P_N u\|_{L_x^r(\mathbb{R}^d)}\\
\|P_Nu\|_{L_x^q(\mathbb{R}^d)}&\lesssim N^{\frac dr-\frac dq} \|P_Nu\|_{L_x^r(\mathbb{R}^d)}.
\end{align*}
\end{lemma}
Next, we record two simple weighted estimates.
\begin{lemma}\label{L:WY} For $1\leq r\leq m\leq\infty$, $\beta>0$, and $\phi\in \mathcal S(\mathbb{R}^d)$,
\[
\|\langle x\rangle^\beta[|\phi|\ast |u|]\|_{L_x^m(\mathbb{R}^d)} \lesssim \|\langle x\rangle^\beta u\|_{L_x^r(\mathbb{R}^d)}.
\]
\end{lemma}
\begin{proof} Using the rapid decay of $\phi$, the triangle inequality, H\"older's inequality, and Minkowski's integral inequality, we estimate for any $A>0$,
\begin{align*}
\|\langle x\rangle^\beta[|\phi|\ast|u|]\|_{L_x^m} & \lesssim \biggl\| \int_{|x-y|\leq 1}\langle y\rangle^\beta |u(y)|\,dy\biggr\|_{L_x^m} +
\sum_{R\geq1} \biggl\|\int_{|x-y|\sim R}\frac{\langle x\rangle^\beta |u(y)|}{\langle x-y\rangle^{A}} dy\biggr\|_{L_x^m} \\
& \lesssim \|\langle y\rangle^\beta\chi_{|x-y|\leq 1}u\|_{L_x^m L_y^r} + \sum_{R\geq 1} R^{-A+\beta+\frac{d}{r'}}\|\langle y\rangle^\beta \chi_{|x-y|\sim R}u\|_{L_x^m L_y^r} \\
&\lesssim \|\langle y\rangle^\beta\chi_{|x-y|\leq 1}u\|_{L_y^r L_x^m} + \sum_{R\geq 1} R^{-A+\beta+\frac{d}{r'}}\|\langle y\rangle^\beta \chi_{|x-y|\sim R}u\|_{L_y^r L_x^m} \\
& \lesssim \|\langle y\rangle^\beta u\|_{L_y^r} + \sum_{R\geq 1} R^{-A+\beta+\frac{d}{r'}+\frac{d}{m}}\|\langle y\rangle^\beta u\|_{L_y^r}.
\end{align*}
For $A$ large enough, we can sum over $R\in 2^{\mathbb{N}}$ to complete the proof.
\end{proof}
\begin{lemma}\label{L:commutator} For $0\leq\beta\leq 1$ and $d<m<\infty$,
\[
\|\langle x\rangle^\beta u\|_{L_x^\infty(\mathbb{R}^d)}\lesssim \sum_{N\geq 1} N^{\frac{d}{m}}\|\langle x\rangle^\beta P_N u\|_{L_x^m(\mathbb{R}^d)}.
\]
\end{lemma}
\begin{proof} To begin, we apply Bernstein to estimate
\begin{align*}
\|\langle x\rangle^\beta u\|_{L_x^\infty} &\lesssim \sum_{N\geq 1}\|P_N[\langle x\rangle^{\beta}u]\|_{L_x^\infty} \lesssim \sum_{N\geq 1} N^{\frac{d}{m}}\|P_N[\langle x\rangle^\beta u]\|_{L_x^m} \\
& \quad\lesssim \sum_{N\geq 1} N^{\frac{d}{m}}\|\langle x\rangle^\beta P_N u\|_{L_x^m} + \sum_{N\geq 1}N^{\frac{d}{m}}\|[\langle x\rangle^\beta,P_N]u\|_{L_x^m}.
\end{align*}
Writing $\phi(\cdot/N)$ for the multiplier of $P_N$, a direct computation gives
\[
[a,P_N](x,y) = N^d \check\phi(N(x-y))[a(x)-a(y)]
\]
for any function $a$. Thus, by Schur's test,
\[
\|[a,P_N]\|_{L^m_x\to L^m_x} \lesssim N^{-1}\|\partial a\|_{L^\infty_x}.
\]
Applying this with $a(x)=\langle x\rangle^\beta$ for $0\leq\beta\leq 1$, we find
\[
\sum_{N\geq 1}N^{\frac{d}{m}}\|[\langle x\rangle^\beta,P_N]u\|_{L_x^m} \lesssim \|u\|_{L_x^m}\lesssim \sum_{N\geq 1} N^{\frac{d}{m}}\|\langle x\rangle^\beta P_N u\|_{L_x^m},
\]
where we used $d<m$ to derive the first inequality above. \end{proof}
\subsection{The linear Schr\"odinger equation}
The standard dispersive estimate for the linear propagator $e^{it\Delta}$ in four space dimensions follows from the kernel estimate $|e^{it\Delta}(x,y)| \lesssim |t|^{-2}.$ This bound, together with the unitarity of the linear propagator in $L^2_x$, implies the full range of Strichartz estimates:
\begin{proposition}[Strichartz estimates]\label{P:strichartz} Let $2\leq q_1,q_2\leq\infty$ and $r_j=\frac{2q_j}{q_j-1}$. Let $I$ be a time interval with $t_0\in\bar I$. Then
\begin{align*}
\| e^{it\Delta}f\|_{L_t^{q_1}L_x^{r_1}(\mathbb{R}\times\mathbb{R}^4)}&\lesssim \|f\|_{L_x^2(\mathbb{R}^4)}, \\
\biggl\| \int_{t_0}^t e^{i(t-s)\Delta}F(s)\,ds\biggr\|_{L_t^{q_1}L_x^{r_1}(I\times\mathbb{R}^4)} &\lesssim \|F\|_{L_t^{q_2'}L_x^{r_2'}(I\times\mathbb{R}^4)}.
\end{align*}
\end{proposition}
For radial functions, one has additional estimates. Letting $P_{\text{rad}}$ denote the projection onto radial functions, one has the following kernel estimate from \cite{KVZ}:
\begin{equation}\label{rde}
|e^{it\Delta}P_{\text{rad}}(x,y)| \lesssim |t|^{-\frac12}|x|^{-\frac32}|y|^{-\frac32}.
\end{equation}
Combined with the standard dispersive estimate, this leads to
\[
|e^{it\Delta}P_{\text{rad}}(x,y)| \lesssim |t|^{-\frac2q}|x|^{-\frac{2(q-1)}{q}}|y|^{-\frac{2(q-1)}{q}}\qtq{for all} 1\leq q\leq 4.
\]
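To spell out the interpolation, take $\theta=\frac{4(q-1)}{3q}$, which lies in $[0,1]$ precisely when $1\leq q\leq 4$. Then
\[
|e^{it\Delta}P_{\text{rad}}(x,y)| \lesssim \bigl(|t|^{-2}\bigr)^{1-\theta}\bigl(|t|^{-\frac12}|x|^{-\frac32}|y|^{-\frac32}\bigr)^{\theta} = |t|^{-2+\frac{3\theta}{2}}\bigl(|x|\,|y|\bigr)^{-\frac{3\theta}{2}},
\]
and one checks that $-2+\frac{3\theta}{2}=-\frac2q$ and $\frac{3\theta}{2}=\frac{2(q-1)}{q}$.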
Combining this with the standard $TT^*$ argument leads to the following weighted radial Strichartz estimates:
\begin{proposition}[Weighted radial Strichartz]\label{P:RS} For $f\in L^2_{\rad}(\mathbb{R}^4)$ and $2<q\leq 4$,
\[
\| |x|^{\frac{2(q-1)}{q}} e^{it\Delta} f\|_{L_t^q L_x^\infty(\mathbb{R}\times\mathbb{R}^4)} \lesssim \|f\|_{L_x^2(\mathbb{R}^4)}.
\]
\end{proposition}
Interpolating the estimates of Propositions~\ref{P:strichartz} and \ref{P:RS} yields the following:
\begin{corollary}\label{C:RS} For $f\in L^2_{\rad}(\mathbb{R}^4)$, $2<q\leq 4$, and $0\leq \beta\leq \frac{2(q-1)}{q}$,
\[
\||x|^\beta e^{it\Delta} f\|_{L_t^q L_x^{\frac{4q}{2(q-1)-\beta q}}(\mathbb{R}\times\mathbb{R}^4)}\lesssim \|f\|_{L_x^2(\mathbb{R}^4)}.
\]
\end{corollary}
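To make the interpolation explicit: for $0\leq\beta\leq\frac{2(q-1)}{q}$, set $\theta=\frac{\beta q}{2(q-1)}\in[0,1]$ and interpolate the admissible Strichartz bound in $L_t^qL_x^{\frac{2q}{q-1}}$ with the weighted estimate of Proposition~\ref{P:RS}. The weight exponent becomes $\theta\cdot\frac{2(q-1)}{q}=\beta$, while the spatial exponent $r$ satisfies
\[
\tfrac1r = (1-\theta)\cdot\tfrac{q-1}{2q} = \tfrac{2(q-1)-\beta q}{4q},
\]
which is exactly the exponent appearing in Corollary~\ref{C:RS}.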
We will rely on local smoothing estimates (cf. \cite{ConsSaut,Sjolin87,Vega88}) to absorb some of the derivatives landing on the randomized linear evolution.
\begin{lemma}[Local smoothing]\label{L:LS} For any $\varepsilon>0$,
\[
\|\langle x\rangle^{-\frac12-\varepsilon} e^{it\Delta} f\|_{L_{t,x}^2(\mathbb{R}\times\mathbb{R}^d)} \lesssim \|f\|_{\dot H_x^{-\frac12}(\mathbb{R}^d)}.
\]
\end{lemma}
\subsection{Almost sure bounds}
In this subsection we develop a collection of almost sure estimates on the randomized free evolution. We start by estimating the moments of the randomized free evolution.
\begin{lemma}[Moment bounds]\label{L:SF} Let $f^\omega$ be the randomization of $f$ as in Definition~\ref{def:random}. For $m\geq 1$,
\[
\mathbb{E}\bigl(|f^\omega|^{2m}\bigr)\lesssim_m \bigl(|\check\varphi|\ast|f|^2\bigr)^m.
\]
\end{lemma}
\begin{proof} As $\sum_k X_k f_k(x)$ is Gaussian, its moments can be computed exactly. Specifically, we have
\begin{align*}
\mathbb{E}\bigl(\bigl|\sum_k X_k f_k(x)\bigr|^{2m}\bigr) & \sim_m \biggl(\sum_k |f_k(x)|^2\biggr)^m.
\end{align*}
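Indeed, if the $X_k$ are, say, independent standard complex Gaussians, then $Z=\sum_k X_kf_k(x)$ is a mean-zero complex Gaussian with variance $\sigma^2=\sum_k|f_k(x)|^2$; as $|Z|^2/\sigma^2$ is then exponentially distributed, we get
\[
\mathbb{E}\bigl(|Z|^{2m}\bigr)=m!\,\sigma^{2m}.
\]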
Next, using the Poisson summation formula and Cauchy--Schwarz, we estimate
\begin{align*}
\sum_{k\in\mathbb{Z}^4} |f_k(x)|^2& \sim \sum_{k\in\mathbb{Z}^4} \iint \check{\varphi}_k(y)\overline{\check{\varphi}_k}(z) f(x-y)\overline{f}(x-z)\,dy\,dz \\
& \sim \iint \sum_{k\in\mathbb{Z}^4} e^{ik\cdot(y-z)}\check\varphi(y)\overline{\check{\varphi}}(z)f(x-y)\overline{f}(x-z)\,dy\,dz \\
& \sim \int \sum_{\ell\in2\pi\mathbb{Z}^4} \check\varphi(y)\overline{\check{\varphi}}(y-\ell) f(x-y)\overline{f}(x-y+\ell)\,dy \\
& \lesssim \int \sum_{\ell\in2\pi\mathbb{Z}^4}\bigl[|\check\varphi(y)\check{\varphi}(y-\ell)|+|\check{\varphi}(y)\check{\varphi}(y+\ell)|\bigr]|f(x-y)|^2\,dy \\
& \lesssim (|\check{\varphi}|\ast |f|^2)(x).
\end{align*}
This completes the proof. \end{proof}
Combining Lemmas~\ref{L:WY} and \ref{L:SF}, we derive almost sure bounds on weighted norms of the randomized free evolution.
\begin{lemma}\label{L:weighted} For $1\leq q,r\leq m<\infty$ and $\beta\geq 0$,
\begin{equation}\label{E:weighted1}
\mathbb{E}\biggl[\| \langle x\rangle^\beta e^{it\Delta}f^\omega\|_{L_t^qL_x^m}^{q}\biggr] \lesssim
\|\langle x\rangle^\beta e^{it\Delta} f\|_{L_t^q L_x^r}^{q}.
\end{equation}
In particular, for $p>2$, $1\leq r_1,r_2,\frac{2p}{p-2}\leq m<\infty$, and $\beta\geq 0$,
\begin{equation}\label{E:weighted2}
\mathbb{E}\biggl[\| \langle x\rangle^\beta e^{it\Delta} f^\omega\|_{L_t^{\frac{2p}{p-2}}L_x^m}^{\frac{2p}{p-2}}\biggr] \lesssim
\|e^{it\Delta} f\|_{L_t^{\frac{2p}{p-2}} L_x^{r_1}(\mathbb{R}\times B)}^{\frac{2p}{p-2}}+\||x|^\beta e^{it\Delta} f\|_{L_t^{\frac{2p}{p-2}} L_x^{r_2}(\mathbb{R}\times B^c)}^{\frac{2p}{p-2}},
\end{equation}
where $B$ denotes the unit ball and $B^c$ its complement. Unless otherwise indicated, all space-time norms are over $\mathbb{R}\times\mathbb{R}^4$.
\end{lemma}
\begin{proof} Using H\"older's inequality and the assumption $m\geq q$,
\[
\text{LHS}\eqref{E:weighted1} = \int \mathbb{E}\bigl(\|\langle x\rangle^\beta e^{it\Delta} f^\omega \|_{L_x^m}^{q}\bigr)\,dt
\lesssim \int\bigl\{ \mathbb{E}\bigl(\|\langle x\rangle^\beta e^{it\Delta} f^\omega\|_{L_x^m}^m\bigr)\bigr\}^{\frac{q}{m}}\,dt.
\]
Next, by Lemmas~\ref{L:SF} and \ref{L:WY},
\begin{align*}
\int \mathbb{E}\bigl( \langle x\rangle^{\beta m}|e^{it\Delta} f^\omega |^m\bigr) \,dx\lesssim \int \langle x\rangle^{\beta m}[|\check\varphi|\ast|e^{it\Delta} f|^2]^{\frac{m}{2}}(x)\,dx \lesssim \|\langle x\rangle^\beta e^{it\Delta} f\|_{L_x^r}^m.
\end{align*}
This proves \eqref{E:weighted1}. To derive \eqref{E:weighted2}, we write
\[
e^{it\Delta}f^\omega = [\chi e^{it\Delta} f]^\omega + [(1-\chi)e^{it\Delta}f]^\omega,
\]
where $\chi$ is the characteristic function of the unit ball, and apply the argument above to each summand.
\end{proof}
\begin{proposition}\label{P:STE2} Let $f\in L^2(\mathbb{R}^4)$. For $2\leq p\leq\infty$,
\[
\|e^{it\Delta} f^\omega\|_{L_t^\infty L_x^2}+ \|e^{it\Delta} f^\omega\|_{L_t^3 L_x^6} + \|e^{it\Delta}f^\omega\|_{L_{t,x}^4} + \|e^{it\Delta} f^\omega\|_{L_t^{\frac{4p}{p+2}}L_x^4}< \infty
\]
almost surely, where all space-time norms are over $\mathbb{R}\times\mathbb{R}^4$. If $f\in H^s(\mathbb{R}^4)$ for some $s>\frac12$, then we also have
\[
\|e^{it\Delta} f^\omega\|_{L_t^\infty L_x^4(\mathbb{R}\times\mathbb{R}^4)} < \infty
\]
almost surely.
\end{proposition}
\begin{proof} Almost sure finiteness of the $L_t^\infty L_x^2$ norm of the randomized free evolution follows from the unitarity of the linear propagator on $L^2_x$ and Remark~\ref{Remarks}(ii).
Using Lemma~\ref{L:weighted} (with $\beta=0$) and the Strichartz estimates, we find
\begin{align}
\mathbb{E}\bigl(\|e^{it\Delta}f^\omega\|_{L_t^3 L_x^6}^3\bigr) &\lesssim \|e^{it\Delta} f\|_{L_{t,x}^3}^3\lesssim \|f\|_{L_x^2}^3, \notag\\
\mathbb{E}\bigl(\|e^{it\Delta} f^\omega\|_{L_{t,x}^4}^4\bigr)&\lesssim \|e^{it\Delta} f\|_{L_t^4 L_x^{\frac83}}^4\lesssim \|f\|_{L_x^2}^4,\label{4,4}\\
\mathbb{E}\bigl(\|e^{it\Delta}f^{\omega}\|_{L_t^{\frac{4p}{p+2}}L_x^{4}}^{\frac{4p}{p+2}}\bigr)&\lesssim \|e^{it\Delta} f\|_{L_t^{\frac{4p}{p+2}}L_x^{\frac{8p}{3p-2}}}^{\frac{4p}{p+2}}\lesssim \|f\|_{L_x^2}^{\frac{4p}{p+2}},\notag
\end{align}
where we used $p\geq 2$ for the last estimate. Thus, these norms are finite almost surely.
Finally, we consider the $L_t^\infty L_x^4$ norm. We begin with a general estimate:
\begin{equation}\label{E:gen}
\|F\|_{L_t^\infty L_x^4(\mathbb{R}\times\mathbb{R}^4)}^4 \lesssim \delta^{-1}\|F\|_{L_{t,x}^4(\mathbb{R}\times\mathbb{R}^4)}^4 + \delta^3\|\partial_t F\|_{L_{t,x}^4(\mathbb{R}\times\mathbb{R}^4)}^4\qtq{for any}\delta>0.
\end{equation}
To prove this, first fix a bounded interval $I\subset\mathbb{R}$. By the fundamental theorem of calculus,
\[
\|F\|_{L_t^\infty L_x^4(I\times\mathbb{R}^4)} \leq \|F(t_0)\|_{L_x^4(\mathbb{R}^4)} + \|\partial_t F\|_{L_t^1 L_x^4(I\times\mathbb{R}^4)}
\]
uniformly in $t_0\in I$. Averaging over $t_0\in I$ and applying H\"older's inequality,
\begin{align*}
\|F\|_{L_t^\infty L_x^4(I\times\mathbb{R}^4)} &\leq |I|^{-1} \|F\|_{L_t^1 L_x^4(I\times\mathbb{R}^4)} + \|\partial_t F\|_{L_t^1 L_x^4(I\times\mathbb{R}^4)} \\
& \leq |I|^{-\frac14}\|F\|_{L_{t,x}^4(I\times\mathbb{R}^4)} + |I|^{\frac34}\|\partial_t F\|_{L_{t,x}^4(I\times\mathbb{R}^4)}.
\end{align*}
To pass to \eqref{E:gen}, we partition $\mathbb{R}$ into intervals of length $\delta$ and sum the fourth power of the inequality above over the partition.
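Concretely, writing $\mathbb{R}=\bigcup_j I_j$ with $|I_j|=\delta$ and applying the bound above on each $I_j$,
\[
\|F\|_{L_t^\infty L_x^4}^4 = \sup_j \|F\|_{L_t^\infty L_x^4(I_j\times\mathbb{R}^4)}^4 \lesssim \sum_j\Bigl[\delta^{-1}\|F\|_{L_{t,x}^4(I_j\times\mathbb{R}^4)}^4 + \delta^3\|\partial_t F\|_{L_{t,x}^4(I_j\times\mathbb{R}^4)}^4\Bigr],
\]
which sums to the right-hand side of \eqref{E:gen}.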
Now we apply \eqref{E:gen} to $F=e^{it\Delta} P_{N} f^\omega$. Using also Bernstein and \eqref{4,4}, we find
\begin{align*}
\mathbb{E}\bigl( \| e^{it\Delta} P_{N} f^\omega \|_{L_t^\infty L_x^4}^4\bigr) & \lesssim (\delta^{-1} + \delta^3 N^8)\mathbb{E}\bigl( \|e^{it\Delta} P_{N} f^\omega\|_{L_{t,x}^4}^4\bigr) \\
& \lesssim (\delta^{-1}+ \delta^3 N^8)\|P_Nf\|_{L^2_x}^4 \\
& \lesssim (\delta^{-1}N^{-4s}+ \delta^3 N^{8-4s})\|f\|_{H_x^s}^4.
\end{align*}
Optimizing in $\delta$ (the two terms balance at $\delta=N^{-2}$), we get
\[
\mathbb{E}\bigl(\|e^{it\Delta}P_Nf^\omega\|_{L_t^\infty L_x^4}^4\bigr) \lesssim N^{2-4s}\|f\|_{H_x^s}^4.
\]
Thus
\[
\|e^{it\Delta} f^\omega\|_{L_\omega^4 L_t^\infty L_x^4} \lesssim \sum_{N\geq 1} N^{\frac12-s}\|f\|_{H_x^s} \lesssim \|f\|_{H_x^s}
\]
whenever $s>\frac12$, which implies almost sure finiteness of the $L_t^\infty L_x^4$ norm.
\end{proof}
\begin{proposition}\label{P:STE} Fix $s>\frac 56$ and $f\in H^s_{\rad}(\mathbb{R}^4)$. For $p$ sufficiently large,
\[
\|\langle x\rangle^{\frac3p+\frac12} e^{it\Delta} f^\omega\|_{L_t^{\frac{2p}{p-2}}L_x^\infty} + \|\langle x\rangle^{\frac3p+\frac12}\nabla e^{it\Delta} f^\omega\|_{L_t^{\frac{2p}{p-2}}L_x^\infty} + \|\langle x\rangle^{\frac{1}{p}}\nabla e^{it\Delta} f^\omega\|_{L_t^{\frac{2p}{p-2}}L_x^4} <\infty
\]
almost surely, where all space-time norms are over $\mathbb{R}\times\mathbb{R}^4$.
\end{proposition}
We break the proof of Proposition~\ref{P:STE} into three lemmas, whose proofs all rely on applications of Lemma~\ref{L:weighted}, but with different exponents.
\begin{lemma}\label{L:1st-bd} Let $4\leq p<\infty$ and $0\leq\beta\leq 1$. Then for any $s>0$ and $f\in H^s_{\rad}(\mathbb{R}^4)$,
\[
\mathbb{E}\biggl[\| \langle x\rangle^\beta e^{it\Delta} f^\omega\|_{L_t^{\frac{2p}{p-2}}L_x^\infty(\mathbb{R}\times\mathbb{R}^4)}^{\frac{2p}{p-2}}\biggr]\lesssim \|f\|_{H^s_x(\mathbb{R}^4)}^{\frac{2p}{p-2}}.
\]
\end{lemma}
\begin{proof} By Strichartz and Corollary~\ref{C:RS},
\[
\|e^{it\Delta}f\|_{L_t^{\frac{2p}{p-2}}L_x^{\frac{4p}{p+2}}} + \||x|^\beta e^{it\Delta} f\|_{L_t^{\frac{2p}{p-2}}L_x^{\frac{4p}{p+2-p\beta}}}\lesssim \|f\|_{L_x^2},
\]
provided $p\geq 4$ (to ensure that $\frac{2p}{p-2}\in(2,4]$) and $0\leq\beta\leq 1+\frac2p$. Thus, for
\[
\max\{\tfrac{2p}{p-2},\tfrac{4p}{p+2-\beta p}\}\leq m <\infty,
\]
an application of Lemma~\ref{L:weighted} with $r_1 = \frac{4p}{p+2}$ and $r_2=\frac{4p}{p+2-p\beta}$ yields
\begin{equation}\label{E:mL2}
\mathbb{E}\biggl[\|\langle x\rangle^\beta e^{it\Delta}f^\omega\|_{L_t^{\frac{2p}{p-2}}L_x^m(\mathbb{R}\times\mathbb{R}^4)}^{\frac{2p}{p-2}}\biggr] \lesssim \|f\|_{L_x^2}^{\frac{2p}{p-2}}.
\end{equation}
Now given $s>0$, we may choose $m$ large enough so that we also have $s>\frac4m$. Using \eqref{E:mL2} together with Lemma~\ref{L:commutator}, for $\beta \leq 1$ we find
\begin{align*}
\|\langle x\rangle^\beta e^{it\Delta} f^\omega\|_{L_{t,\omega}^{\frac{2p}{p-2}}L_x^\infty} & \lesssim \sum_{N\geq 1} N^{\frac4m}\|\langle x\rangle^\beta e^{it\Delta} P_N f^\omega\|_{L_{t,\omega}^{\frac{2p}{p-2}}L_x^m} \\
& \lesssim \sum_{N\geq 1} N^{\frac4m} \|P_N f\|_{L_x^2} \lesssim \|f\|_{H^s_x}.
\end{align*}
This completes the proof.\end{proof}
\begin{lemma}\label{L:2nd-bd} Fix $s>\frac56$ and $f\in H^s_{\rad}(\mathbb{R}^4)$. For $p$ sufficiently large,
\[
\mathbb{E}\biggl[\| \langle x\rangle^{\frac12+\frac3p} e^{it\Delta}\nabla f^\omega\|_{L_t^{\frac{2p}{p-2}}L_x^\infty(\mathbb{R}\times\mathbb{R}^4)}^{\frac{2p}{p-2}}\biggr] \lesssim \|f\|_{H^s_x(\mathbb{R}^4)}^{\frac{2p}{p-2}}.
\]
\end{lemma}
\begin{proof} Let $\varepsilon>0$ be a small parameter to be chosen below. An application of Lemma~\ref{L:weighted} with $r_1=2$ and $r_2=\frac{2}{1-\theta}$, where
\[
\theta=\tfrac{2}{3+2\varepsilon}(1+\tfrac1p +\varepsilon)
\]
yields
\begin{align*}
\mathbb{E}\biggl[\| \langle x\rangle^{\frac12+\frac3p} &e^{it\Delta}\nabla P_N f^\omega\|_{L_t^{\frac{2p}{p-2}}L_x^m}^{\frac{2p}{p-2}}\biggr] \\
&\lesssim\|e^{it\Delta} \nabla P_N f\|_{L_t^{\frac{2p}{p-2}}L_x^2(\mathbb{R}\times B)}^{\frac{2p}{p-2}} + \||x|^{\frac12+\frac3p}e^{it\Delta}\nabla P_N f\|_{L_t^{\frac{2p}{p-2}}L_x^{r_2}(\mathbb{R}\times B^c)}^{\frac{2p}{p-2}},
\end{align*}
provided $m\geq \max\{2, \frac2{1-\theta}, \frac {2p}{p-2}\}$.
By H\"older's inequality, Lemma~\ref{L:LS}, and Bernstein's inequality:
\begin{equation}\label{E:LS1}
\begin{aligned}
\|e^{it\Delta} \nabla P_N f\|_{L_t^{\frac{2p}{p-2}}L_x^2(\mathbb{R}\times B)} & \lesssim \|e^{it\Delta}\nabla P_Nf\|_{L_{t,x}^2(\mathbb{R}\times B)}^{1-\frac2p} \|e^{it\Delta} \nabla P_Nf\|_{L_t^\infty L_x^2}^{\frac2p} \\
& \lesssim N^{\frac12+\frac1p}\|P_N f\|_{L_x^2}.
\end{aligned}
\end{equation}
On the other hand, setting
\[
q=\tfrac{2(p+1+p\varepsilon)}{(p-2)(1+\varepsilon)},
\]
we may apply H\"older's inequality, Proposition~\ref{P:RS}, and Lemma~\ref{L:LS} (provided we choose $0<\varepsilon\ll 1$ and $p$ large) to get
\begin{equation}\label{E:2nd-bd}
\begin{aligned}
\| |x|^{\frac12+\frac3p}e^{it\Delta}\nabla &P_N f\|_{L_t^{\frac{2p}{p-2}}L_x^{r_2}(\mathbb{R}\times B^c)} \\
& \lesssim \||x|^{\frac{2(q-1)}{q}}e^{it\Delta} \nabla P_N f\|_{L_t^q L_x^\infty}^\theta
\| \langle x\rangle^{-\frac12-\varepsilon}e^{it\Delta} \nabla P_N f\|_{L_{t,x}^2}^{1-\theta} \\
& \lesssim N^{\frac{5+4\varepsilon}{6+4\varepsilon}+\frac{1}{p(3+2\varepsilon)}}\|P_N f\|_{L_x^2}.
\end{aligned}
\end{equation}
Collecting \eqref{E:LS1} and \eqref{E:2nd-bd}, we find
\begin{equation}\label{E:mH1}
\mathbb{E}\biggl[\| \langle x\rangle^{\frac12+\frac3p} e^{it\Delta}\nabla P_N f^\omega\|_{L_t^{\frac{2p}{p-2}}L_x^m}^{\frac{2p}{p-2}}\biggr] \lesssim \bigl(N^{\frac{5+4\varepsilon}{6+4\varepsilon}+\frac{1}{p(3+2\varepsilon)}}\|P_N f\|_{L_x^2}\bigr)^{\frac{2p}{p-2}}
\end{equation}
for $\varepsilon$ small and $p, m$ large.
We now proceed as in the proof of Lemma~\ref{L:1st-bd}, using \eqref{E:mH1} together with Lemma~\ref{L:commutator} to estimate
\begin{align*}
\| \langle x\rangle^{\frac12+\frac3p} e^{it\Delta}\nabla f^\omega\|_{L_{t,\omega}^{\frac{2p}{p-2}}L_x^\infty} & \lesssim \sum_{N\geq 1} N^{\frac4m+\frac{5+4\varepsilon}{6+4\varepsilon}+\frac{1}{p(3+2\varepsilon)}}\|P_N f\|_{L_x^2} \lesssim \|f\|_{H^s_x},
\end{align*}
provided
\[
s>\tfrac4m + \tfrac{5+4\varepsilon}{6+4\varepsilon}+\tfrac{1}{p(3+2\varepsilon)}.
\]
Note that for any $s>\frac56$, we may choose $\varepsilon$ sufficiently small and $p,m$ sufficiently large to guarantee that this condition holds.
\end{proof}
\begin{lemma}\label{L:3rd-bd} Fix $s>\frac23$ and $f\in H^s_{\rad}(\mathbb{R}^4)$. For $p$ sufficiently large,
\[
\mathbb{E}\biggl[\|\langle x\rangle^{\frac1p}e^{it\Delta}\nabla f^\omega\|_{L_t^{\frac{2p}{p-2}}L_x^4(\mathbb{R}\times\mathbb{R}^4)}^{\frac{2p}{p-2}}\biggr] \lesssim \|f\|_{H_x^s(\mathbb{R}^4)}^{\frac{2p}{p-2}}.
\]
\end{lemma}
\begin{proof} Let $\varepsilon>0$ be a small parameter to be chosen below. We apply \eqref{E:weighted2} with $r_1=2$ and $r_2=\frac{2}{1-\theta}$, where
\[
\theta = \tfrac{1}{3+2\varepsilon}(1-\tfrac2p+2\varepsilon).
\]
We estimate the contribution of $B$ using \eqref{E:LS1}. The contribution of $B^c$ will be estimated as in \eqref{E:2nd-bd}, but with a different choice of exponents. To be precise, we will now take
\[
q=\tfrac{2[p(1+2\varepsilon)-2]}{p(1+2\varepsilon)-8-4\varepsilon},
\]
which belongs to the range $(2,4]$ for $\varepsilon$ small and $p$ large. Note that applying \eqref{E:weighted2} also requires $r_2\leq 4$, which is likewise satisfied for $\varepsilon$ small and $p$ large. In this case, the contribution of $B^c$ can be estimated by
\[
\|e^{it\Delta}\nabla P_N f\|_{L_t^{\frac{2p}{p-2}}L_x^{r_2}(\mathbb{R}\times B^c)} \lesssim N^{\frac{2+2\varepsilon}{3+2\varepsilon}-\frac{1}{p(3+2\varepsilon)}}\|P_Nf\|_{L_x^2}.
\]
Choosing $\varepsilon$ sufficiently small and $p$ sufficiently large, we can therefore estimate
\[
\|\langle x\rangle^{\frac1p}e^{it\Delta} \nabla f^\omega\|_{L_{t,\omega}^{\frac{2p}{p-2}}L_x^4} \lesssim \|f\|_{H^s_x}
\]
for any $s>\frac23$.
\end{proof}
Collecting the results of Lemmas~\ref{L:1st-bd}, \ref{L:2nd-bd}, and \ref{L:3rd-bd}, we obtain Proposition~\ref{P:STE}.
\section{Well-posedness and scattering for the forced equation}\label{S:wp}
In this section we prove well-posedness and a conditional scattering result for the forced NLS
\begin{equation}\label{fnls}
\begin{cases}
(i\partial_t + \Delta) v = |v+F|^2(v+F), \\
v(t_0)=v_0.
\end{cases}
\end{equation}
We will consider forcing terms $F$ satisfying $(i\partial_t+\Delta)F=0$ and the following bounds:
\begin{equation}\label{lwp-F}
F\in L_t^{\frac{4p}{p+2}}L_x^4(\mathbb{R}\times\mathbb{R}^4)\cap L_t^{\frac{2p}{p-2}}W_x^{1,\infty}(\mathbb{R}\times\mathbb{R}^4)
\end{equation}
for some large, but finite $p$. Note that by Propositions~\ref{P:STE2} and \ref{P:STE}, $F^\omega=e^{it\Delta}f^\omega$ satisfies \eqref{lwp-F} almost surely for $f\in H^s_{\rad}(\mathbb{R}^4)$ with $s>\frac56$ and $p$ sufficiently large.
\begin{proposition}[Local well-posedness]\label{P:lwp} Let $t_0\in\mathbb{R}$, $v_0\in H^1(\mathbb{R}^4)$, and $F$ be a solution to $(i\partial_t + \Delta) F = 0$ satisfying \eqref{lwp-F}. Suppose $\|v_0\|_{H^1_x}\leq E$. There exists $\eta_0=\eta_0(E)>0$ so that if $I\ni t_0$ is an open interval such that
\begin{equation}\label{lwp-condition}
\|e^{i(t-t_0)\Delta}v_0\|_{L_t^4 L_x^8(I\times\mathbb{R}^4)} + \|F\|_{L_t^{\frac{4p}{p+2}}L_x^4\cap L_t^{\frac{2p}{p-2}}W_x^{1,\infty}(I\times\mathbb{R}^4)} \leq\eta\leq\eta_0,
\end{equation}
then there exists a unique solution $v\in C_tH^1_x\cap L_t^4L_x^8(I\times\mathbb{R}^4)$ to \eqref{fnls} on $I$. In particular, for any $v_0\in H^1(\mathbb{R}^4)$ there exists a unique local-in-time solution $v$ to \eqref{fnls}, which extends to a maximal lifespan $I_{\text{max}}$.
Moreover, if $F(t_0)\in L_x^2$, we have the following blowup/scattering criterion:
\begin{itemize}
\item[(i)] If $\sup I_{\text{max}}<\infty$, then $\|v\|_{L_t^4 L_x^8((t_0,\sup I_{\text{max}})\times\mathbb{R}^4)}=\infty$.
\item[(ii)] If $\sup I_{\text{max}}=\infty$ and $\|v\|_{L_t^4 L_x^8((t_0,\infty)\times\mathbb{R}^4)}<\infty$, then $v$ scatters forward in time.
\end{itemize}
The analogous statements hold backward in time.
\end{proposition}
\begin{proof} Without loss of generality assume $t_0=0$. Define
\[
[\Phi v](t) = e^{it\Delta}v_0 - i\int_0^t e^{i(t-s)\Delta}\bigl[|v(s)+F(s)|^2(v(s)+F(s))\bigr]\,ds.
\]
Let $\eta>0$ be a parameter to be chosen below and let $I\ni 0$ be a time interval as in \eqref{lwp-condition}. Note that for any $v_0\in H^1_x$, such an interval exists by Sobolev embedding, Strichartz estimates, and the monotone convergence theorem.
In the following, we take space-time norms over $I\times\mathbb{R}^4$. Define
\[
X=\{v:I\times\mathbb{R}^4\to \mathbb{C}:\,\|v\|_{L_t^\infty H_x^1} \leq 2CE,\quad \|v\|_{L_t^4 L_x^8}\leq 2C\eta\}.
\]
Here $C$ is a constant that accounts for implicit constants appearing in Strichartz estimates, Sobolev embedding, etc. We equip $X$ with the $L_t^\infty L_x^2$ metric.
We write
\[
|v+F|^2(v+F)=|v|^2 v + |F|^2 F + 2|v|^2 F + v^2\bar F + 2|F|^2 v + F^2 \bar v.
\]
To estimate the nonlinearity, we note that for $p>6$ the pair $(\frac{4p}{3p-6},\frac{8p}{5p+6})$ is dual admissible, while for $p>2$ the pair $(\frac{p+6}{p+2},\frac{2(p+6)}{p+10})$ is also dual admissible. Using the product rule and H\"older's inequality, we estimate
\begin{align*}
\| \langle\nabla\rangle(|v|^2 v)\|_{L_t^2 L_x^{\frac43}}&\lesssim \|v\|_{L_t^4 L_x^8}^2 \|\langle\nabla\rangle v\|_{L_t^\infty L_x^2}, \\
\| \langle\nabla\rangle(|F|^2 F)\|_{L_t^1 L_x^2} &\lesssim \| F\|_{L_t^{\frac{4p}{p+2}}L_x^4}^2 \|F\|_{L_t^{\frac{2p}{p-2}}W_x^{1,\infty}}, \\
\| \langle\nabla\rangle(Fv^2)\|_{L_t^{\frac{4p}{3p-6}}L_x^{\frac{8p}{5p+6}}}&\lesssim \|F\|_{L_t^{\frac{2p}{p-2}}W_x^{1,\infty}}\|\langle\nabla\rangle v\|_{L_t^\infty L_x^2}\|v\|_{L_t^\infty L_x^2}^{\frac2p}\|v\|_{L_t^4 L_x^8}^{1-\frac2p}, \\
\| \langle\nabla\rangle(F^2 v)\|_{L_t^{\frac{p+6}{p+2}}L_x^{\frac{2(p+6)}{p+10}}}& \lesssim \|F\|_{L_t^{\frac{2p}{p-2}}W_x^{1,\infty}}^{\frac{2p+4}{p+6}}\|F\|_{L_t^{\frac{4p}{p+2}}L_x^4}^{\frac{8}{p+6}}\|\langle\nabla\rangle v\|_{L_t^\infty L_x^2}.
\end{align*}
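For instance, dual admissibility of the first pair may be verified directly: the dual of $(\frac{4p}{3p-6},\frac{8p}{5p+6})$ is $(q,r)=(\frac{4p}{p+6},\frac{8p}{3p-6})$, and
\[
\tfrac2q+\tfrac4r=\tfrac{p+6}{2p}+\tfrac{3p-6}{2p}=2,
\]
with $q>2$ precisely when $p>6$; the second pair is handled in the same way.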
Thus, an application of Strichartz shows that for $v\in X$,
\begin{align*}
\|\Phi v\|_{L_t^\infty H_x^1}\lesssim E + \eta^2 E + \eta^3+ \eta^{2-\frac2p} E^{1+\frac2p} \leq 2CE
\end{align*}
for $\eta\leq\eta_0(E)$ small. Similarly, using $\dot H^{1,\frac83}(\mathbb{R}^4)\hookrightarrow L^8(\mathbb{R}^4)$, we have
\[
\|\Phi v\|_{L_t^4 L_x^8} \lesssim \eta + \eta^2 E + \eta^3+ \eta^{2-\frac2p} E^{1+\frac2p} \leq 2C\eta
\]
for $\eta\leq\eta_0(E)$ small. Thus $\Phi:X\to X$.
Next, note that
\[
\bigl||v+F|^2(v+F)-|w+F|^2(w+F)\bigr| \lesssim |v-w|\bigl(|v|^2+|w|^2+|F|^2\bigr).
\]
Estimating essentially as above, we find
\[
\|\Phi v - \Phi w \|_{L_t^\infty L_x^2} \lesssim \eta^{2}\|v-w\|_{L_t^\infty L_x^2}
\]
for any $v,w\in X$. Thus $\Phi$ is a contraction for $\eta\leq\eta_0(E)$ small and we deduce the existence of a solution on $I$, which may then be extended to its maximal lifespan $I_{\text{max}}$.
Note that since $F$ solves the linear Schr\"odinger equation, $u:= F+v$ solves \eqref{nls} on $I_{\text{max}}\times\mathbb{R}^4$ with $u(0)=v_0+F(0)$. Thus if $F(0)\in L_x^2$, then by the conservation of mass for \eqref{nls} and the triangle inequality we get
\begin{align}\label{mass v}
\|v\|_{L_t^\infty L_x^2(I_{\text{max}}\times\mathbb{R}^4)}\lesssim \|v_0\|_{L_x^2} + \|F(0)\|_{L_x^2}.
\end{align}
Next suppose toward a contradiction that $\sup I_{\text{max}} <\infty$ but
\begin{equation}\label{bs-contra1}
\|v\|_{L_t^4 L_x^8((0,\sup I_{\text{max}})\times\mathbb{R}^4)}<\infty.
\end{equation}
Fix $\varepsilon>0$ to be chosen below. Using \eqref{bs-contra1} and \eqref{lwp-F}, we may decompose $(0,\sup I_{\text{max}})$ into finitely many intervals $I_j$ so that
\[
\|v\|_{L_t^4 L_x^8(I_j\times\mathbb{R}^4)} + \|F\|_{L_t^{\frac{4p}{p+2}}L_x^4\cap L_t^{\frac{2p}{p-2}}W_x^{1,\infty}(I_j\times\mathbb{R}^4)}<\varepsilon
\]
for each $j$. Using the nonlinear estimates above, we find
\[
\bigl|\|v\|_{L_t^\infty H_x^1(I_1\times\mathbb{R}^4)}-\|v_0\|_{H^1_x} \bigr|\lesssim \varepsilon^3 + (\varepsilon^2 + \varepsilon^{2-\frac2p}\|v\|_{L_t^\infty L_x^2(I_1\times\mathbb{R}^4)}^{\frac2p})\|v\|_{L_t^\infty H_x^1(I_1\times\mathbb{R}^4)}.
\]
Thus, recalling \eqref{mass v} and choosing $\varepsilon$ sufficiently small compared to $\|v_0\|_{H^1_x}$ and $\|F(0)\|_{L_x^2}$, we deduce
\[
\|v\|_{L_t^\infty H_x^1(I_1\times\mathbb{R}^4)} \leq 2\|v_0\|_{H^1_x}.
\]
We can repeat this argument on $I_2$ (with the same choice of $\varepsilon$) to deduce a bound of $4\|v_0\|_{H^1}$. By induction,
\[
\|v\|_{L_t^\infty H_x^1((0,\sup I_{\text{max}})\times\mathbb{R}^4)} \lesssim 2^{C(\varepsilon)}\|v_0\|_{H^1_x}.
\]
Using this bound and \eqref{bs-contra1}, the nonlinear estimates then imply
\[
\biggl\|\int_{t_0}^t e^{i(t-s)\Delta}\bigl[|v(s)+F(s)|^2(v(s)+F(s))\bigr]\,ds\biggr\|_{L_t^4 L_x^8((t_0,\sup I_{\text{max}})\times\mathbb{R}^4)}\lesssim 1
\]
uniformly in $t_0\in(0,\sup I_{\text{max}})$. Thus, by the Duhamel formula, the triangle inequality, and monotone convergence,
\[
\lim_{t_0\to \sup I_{\text{max}}}\|e^{i(t-t_0)\Delta}v(t_0)\|_{L_t^4 L_x^8((t_0,\sup I_{\text{max}})\times\mathbb{R}^4)}=0.
\]
In particular, there exists $\delta>0$ so that
\[
\|e^{i(t-t_0)\Delta}v(t_0)\|_{L_t^4 L_x^8((t_0-\delta,\sup I_{\text{max}}+\delta)\times\mathbb{R}^4)}<\tfrac12\eta_0,
\]
where $\eta_0=\eta_0(\|v\|_{L_t^\infty H_x^1((0,\sup I_{\text{max}})\times\mathbb{R}^4)})$ is the same as in the statement of local well-posedness. However, this implies that the solution $v$ extends beyond $\sup I_{\text{max}}$, a contradiction.
Finally, suppose that $\sup I_{\text{max}}=\infty$ and $v\in L_t^4 L_x^8((0,\infty)\times\mathbb{R}^4)$. Repeating the arguments just given, we can deduce that $v\in L_t^\infty H_x^1((0,\infty)\times\mathbb{R}^4)$. An application of Strichartz combined with the observation that
\[
\|v\|_{L_t^4 L_x^8((s,t)\times\mathbb{R}^4)} + \|F\|_{L_t^{\frac{2p}{p-2}}W_x^{1,\infty}((s,t)\times\mathbb{R}^4)}\to 0\qtq{as}s,t\to\infty,
\]
yields that $e^{-it\Delta}v(t)$ is Cauchy in $H_x^1$ as $t\to\infty$.
\end{proof}
Our next goal is a conditional scattering result for \eqref{fnls}; see Proposition~\ref{P:v-scatter}. As described in the introduction, this relies on a stability theory for \eqref{fnls}, which we elaborate next.
\begin{lemma}[Short-time stability]\label{L:stab} Let $I\ni t_0$ be a time interval and let $v_0\in H^1(\mathbb{R}^4)$ with $\|v_0\|_{H_x^1} \leq E$. Suppose $v:I\times\mathbb{R}^4\to \mathbb{C}$ is a solution to \eqref{fnls} where $v(t_0)=v_0$ and $F$ is a solution to $(i\partial_t+\Delta)F=0$ satisfying \eqref{lwp-F}. Suppose $u_0\in H^1(\mathbb{R}^4)$ satisfies
\[
\|v_0-u_0\|_{H^1_x}\leq \varepsilon
\]
for some $0<\varepsilon<\varepsilon_0$. Let $u$ be the solution to \eqref{nls} with $u(t_0)=u_0$ and suppose
\[
\|u\|_{L_t^4 L_x^8(I\times\mathbb{R}^4)}\leq\delta.
\]
Finally, suppose
\[
\|F\|_{L_t^{\frac{2p}{p-2}}W_x^{1,\infty}(I\times\mathbb{R}^4)} + \|F\|_{L_t^{\frac{4p}{p+2}}L_x^4(I\times\mathbb{R}^4)} \leq\varepsilon.
\]
Then for $\varepsilon_0, \delta$ sufficiently small depending on $E$,
\[
\|\langle\nabla\rangle(v-u)\|_{L_t^\infty L_x^2 \cap L_t^4 L_x^{\frac83}(I\times\mathbb{R}^4)} \leq C(E)\varepsilon.
\]
\end{lemma}
\begin{proof} Without loss of generality, assume $t_0=0=\inf I$. In the following, all space-time norms are taken over $I\times\mathbb{R}^4$. Define $w=v-u$ and set $S=L_t^\infty L_x^2 \cap L_t^4 L_x^{\frac83}$.
Standard continuity arguments combined with an application of the Strichartz inequality yield
\[
\|\langle\nabla\rangle u\|_S \lesssim E,
\]
for $\varepsilon_0,\delta$ sufficiently small depending on $E$. Thus, using the equation for $w$, the nonlinear estimates from the local theory, and the hypotheses of the lemma, we get
\begin{align*}
\|\langle\nabla\rangle w\|_S & \lesssim \|v_0\!-\!u_0\|_{H^1_x} + \varepsilon^3 + \varepsilon^2\|\langle\nabla\rangle v\|_S + \varepsilon\|\langle\nabla\rangle v\|_S^2 + \|\langle\nabla\rangle\bigl(|v|^2 v - |u|^2 u\bigr)\|_{L_t^2 L_x^{\frac43}} \\
& \lesssim C(E)\varepsilon + \varepsilon^2\|\langle\nabla\rangle w\|_S + \varepsilon\|\langle\nabla\rangle w\|_S^2 + \|\langle\nabla\rangle\bigl(|v|^2 v\!\!-\!\!|u|^2 u\bigr)\|_{L_t^2 L_x^{\frac43}}.
\end{align*}
Using $L_t^4 \dot H_x^{1,\frac83}\hookrightarrow L_t^4 L_x^8$,
\begin{align*}
\|\langle\nabla\rangle\bigl(|v|^2 v - |u|^2 u)\|_{L_t^2 L_x^{\frac43}} & \lesssim \|\langle\nabla\rangle w\|_{S}\bigl(\|\langle\nabla\rangle w\|_S^2 + \|u\|_{L_t^4 L_x^8}\|\langle\nabla\rangle u\|_{S}\bigr) \\
& \lesssim \delta E \|\langle\nabla\rangle w\|_{S} + \|\langle\nabla\rangle w\|_S^3.
\end{align*}
Combining the estimates above and choosing $\delta,\varepsilon_0$ sufficiently small depending on $E$, a standard continuity argument yields the result.
\end{proof}
\begin{lemma}[Long-time stability]\label{L:stab2} Let $I\ni t_0$ be a time interval and let $v_0\in H^1(\mathbb{R}^4)$ with $\|v_0\|_{H^1_x}\leq E$. Let $u$ be the solution to \eqref{nls} with $u(t_0)=v_0$. Suppose that
\[
\|u\|_{L_t^4 L_x^8(I\times\mathbb{R}^4)} \leq L.
\]
Then there exists $\varepsilon_1=\varepsilon_1(E,L)>0$ so that if $F$ is a solution to $(i\partial_t+\Delta)F=0$ satisfying
\begin{equation}\label{stab-F}
\|F\|_{L_t^{\frac{2p}{p-2}}W_x^{1,\infty}(I\times\mathbb{R}^4)} + \|F\|_{L_t^{\frac{4p}{p+2}}L_x^4(I\times\mathbb{R}^4)} \leq\varepsilon
\end{equation}
for some $0<\varepsilon\leq\varepsilon_1$, then there exists a unique solution $v$ to \eqref{fnls} with $v(t_0)=v_0$ on $I\times\mathbb{R}^4$. Moreover,
\[
\|v\|_{L_t^4 L_x^8(I\times\mathbb{R}^4)}\leq C(E,L)<\infty.
\]
\end{lemma}
\begin{proof} Assume without loss of generality that $t_0=0=\inf I$. By the local theory, it suffices to establish the $L_t^4 L_x^8$ bound for $v$ as an \emph{a priori} estimate. Note that by conservation of mass and energy, we may assume $\|u\|_{L_t^\infty H_x^1} \leq C_0(E)$.
Choose $\delta = \delta(2C_0(E))$ as in Lemma~\ref{L:stab} and divide $I$ into $J=J(E,L)$ subintervals $I_j=[t_j,t_{j+1}]$ so that
\[
\|u\|_{L_t^4 L_x^8(I_j\times\mathbb{R}^4)} \leq \delta
\]
for each $j$.
We claim that if we choose $\varepsilon_1=\varepsilon_1(E,L)$ sufficiently small and assume \eqref{stab-F}, then there exists $C_j\geq 1$ so that
\begin{equation}\label{stab-induction}
\|v(t_j)-u(t_j)\|_{H_x^1}\leq C_j\varepsilon\leq\varepsilon_0\qtq{and}\|v(t_j)\|_{H_x^1}\leq 2C_0(E)\quad\text{for each $j$,}
\end{equation}
where $\varepsilon_0=\varepsilon_0(2C_0(E))$ is as in Lemma~\ref{L:stab}.
Note that \eqref{stab-induction} holds trivially for $j=0$. Now suppose it holds for each $0\leq k\leq j-1$; we will prove it holds at $j$. Using the Duhamel formula and the inductive hypothesis, and estimating as in Lemma~\ref{L:stab}, we get
\begin{align*}
\|v(t_j)-u(t_j)\|_{H_x^1} \lesssim \||v+F|^2(v+F)-|u|^2 u\|_{N([0,t_j])} \leq C(E)\sum_{k=0}^{j-1} C_k\varepsilon.
\end{align*}
Thus we may define $C_j$ inductively with $C_0=1$ and $C_j=C(E)\sum_{k=0}^{j-1}C_k$. Choosing $\varepsilon_1=\varepsilon_1(E,L)$ sufficiently small, we can also ensure that
\[
\sup_{0\leq j\leq J} C_j\varepsilon \leq\varepsilon_0(2C_0(E)).
\]
Then by the triangle inequality,
\[
\|v(t_j)\|_{H_x^1(\mathbb{R}^4)} \leq C_0(E)+C_j\varepsilon \leq 2C_0(E),
\]
for $\varepsilon\leq\varepsilon_1(E,L)$ small enough. This completes the induction and settles \eqref{stab-induction}.
We may therefore apply Lemma~\ref{L:stab} on each $I_j$, yielding $L_t^4 L_x^8$ bounds for $v$. Summing up these bounds completes the proof. \end{proof}
With Lemma~\ref{L:stab2} in hand, we are now in a position to prove the following:
\begin{proposition}[$H^1_x$ bounds imply scattering]\label{P:v-scatter} Let $v_0\in H^1(\mathbb{R}^4)$ and let $F$ be a solution to $(i\partial_t+\Delta) F = 0$ satisfying \eqref{lwp-F}. Let $v:I_{\text{max}}\times\mathbb{R}^4\to\mathbb{C}$ be the maximal-lifespan solution to \eqref{fnls} with $v(0)=v_0$. Suppose
\begin{equation}\label{bounded-energy}
\sup_{t\in(0,\sup I_{\text{max}})}\|v(t)\|_{H_x^1}\leq E<\infty.
\end{equation}
Then $\sup I_{\text{max}}=\infty$ and $v$ scatters as $t\to\infty$. The analogous statements hold backward in time.
\end{proposition}
\begin{proof} By Proposition~\ref{P:lwp}, it suffices to show that
\[
\|v\|_{L_t^4 L_x^8((0,\sup I_{\text{max}})\times\mathbb{R}^4)} \leq C(E).
\]
To prove this, we will rely on Theorem~\ref{T:EC}, which guarantees that for any initial data satisfying $\|u(t_0)\|_{H_x^1}\leq E$, there exists a unique global solution $u$ to \eqref{nls} obeying
\[
\|u\|_{L_t^4 L_x^8(\mathbb{R}\times\mathbb{R}^4)}\leq L(E).
\]
Now let $\varepsilon_1=\varepsilon_1(E,L(E))$ be as in Lemma~\ref{L:stab2} and divide $(0,\sup I_{\text{max}})$ into finitely many intervals $\{I_j\}_{j=0}^J$ so that
\[
\|F\|_{L_t^{\frac{2p}{p-2}}W_x^{1,\infty}(I_j\times\mathbb{R}^4)} + \|F\|_{L_t^{\frac{4p}{p+2}}L_x^4(I_j\times\mathbb{R}^4)} \leq\varepsilon_1
\]
for each $j$. Note that $J=J(E)$, so that it suffices to show
\[
\|v\|_{L_t^4 L_x^8(I_j\times\mathbb{R}^4)}\leq C(E)\qtq{for each}j.
\]
To this end, write $I_j=[t_j,t_{j+1}]$. Let $u$ be the solution to \eqref{nls} with initial data $v(t_j)$. Then since $\|v(t_j)\|_{H_x^1}\leq E$, we have
\[
\|u\|_{L_t^4 L_x^8(I_j\times\mathbb{R}^4)}\leq\|u\|_{L_t^4 L_x^8(\mathbb{R}\times\mathbb{R}^4)}\leq L(E).
\]
Thus we are in a position to apply the stability result Lemma~\ref{L:stab2}, yielding
\[
\|v\|_{L_t^4 L_x^8(I_j\times\mathbb{R}^4)}\leq C(E,L(E)) = C(E),
\]
as needed. \end{proof}
\section{Energy bounds for the forced equation}\label{S:E}
In this section, we prove that suitable space-time bounds on the forcing term $F$ guarantee that the solution $v$ to the forced equation \eqref{fnls} obeys uniform energy bounds, and so one may invoke Proposition~\ref{P:v-scatter} to conclude that $v$ scatters in $H^1_x$. The particular norm we rely on is
\begin{align}
\|F\|_{X(I)}& := \|\langle x\rangle^{\frac3p+\frac12} F\|_{L_t^{\frac{2p}{p-2}}L_x^\infty} + \|\langle x\rangle^{\frac3p+\frac12}\nabla F\|_{L_t^{\frac{2p}{p-2}}L_x^\infty} + \|\langle x\rangle^{\frac{1}{p}}\nabla F\|_{L_t^{\frac{2p}{p-2}}L_x^4}\notag\\
& \quad+ \|F\|_{L_t^3 L_x^6} +\|F\|_{L_t^{\frac{4p}{p+2}}L_x^4} + \|F\|_{L_{t,x}^4},\label{X}
\end{align}
where all space-time norms are over $I\times\mathbb{R}^4$ and $p$ is large, but finite. Note that for $F^\omega=e^{it\Delta}f^\omega$, Propositions~\ref{P:STE2} and \ref{P:STE} guarantee that $\|F^\omega\|_{X(\mathbb{R})}<\infty$ almost surely, whenever $f\in H^s_{\rad}(\mathbb{R}^4)$ for $s>\frac56$ and $p$ is taken sufficiently large.
Our main result in this section is the following:
\begin{proposition}[Energy bounds]\label{P:energy-bds} Suppose that $F$ is a solution to $(i\partial_t+\Delta) F = 0$ satisfying
\begin{align}\label{3:30}
\|F\|_{X(\mathbb{R})}+\|F\|_{L_t^\infty L_x^2(\mathbb{R}\times\mathbb{R}^4)}+\|F\|_{L_t^\infty L_x^4(\mathbb{R}\times\mathbb{R}^4)} < \infty.
\end{align}
Let $v_0\in H^1(\mathbb{R}^4)$ and let $v:I_{\text{max}}\times\mathbb{R}^4\to\mathbb{C}$ be the maximal-lifespan solution to \eqref{fnls} with $v(0)=v_0$. Then $\sup_{t\in I_{\text{max}}}\|v(t)\|_{H_x^1} <\infty$.
\end{proposition}
By time-reversal symmetry, it suffices to prove uniform energy bounds for $v$ on $[0, \sup I_{\text{max}})$. For $0<T\in I_{\text{max}}$, we define
\begin{equation}\label{bold-E}
\mathbb{E}E(T) = \sup_{t\in[0,T]}E[v(t)],
\end{equation}
where the energy $E[\cdot]$ is as in \eqref{def:E}. We seek bounds on $\mathbb{E}E(T)$ that are uniform in $T$. We will prove this using a double bootstrap argument involving both a Morawetz inequality for $v$ and control of the energy increment for $v$.
\begin{lemma}[Morawetz estimate]\label{P:Morawetz} Suppose $v:[0,T]\times\mathbb{R}^4\to\mathbb{C}$ is a solution to \eqref{fnls} satisfying the uniform mass bound
\begin{equation}\label{mass-bd}
\|v\|_{L_t^\infty L_x^2([0,T]\times\mathbb{R}^4)}\lesssim 1.
\end{equation}
Writing
\begin{equation}\label{def:A}
\begin{aligned}
A(T)&:=\| \langle x\rangle^{-\frac14}v\|_{L_{t,x}^4([0,T]\times\mathbb{R}^4)}^4 + \|\langle x\rangle^{-\frac32}v\|_{L_{t,x}^2([0,T]\times\mathbb{R}^4)}^2\\ & \quad +\|\langle x\rangle^{-\frac32}\nabla v\|_{L_{t,x}^2([0,T]\times\mathbb{R}^4)}^2,
\end{aligned}
\end{equation}
we have
\begin{equation}\label{E:Morawetz2}
\begin{aligned}
A(T)& \lesssim \mathbb{E}E(T)^{\frac12}+\mathbb{E}E(T)^{\frac12}\|F\|_{X([0,T])}^3+\mathbb{E}E(T)\|F\|_{X([0,T])}^{\frac{2p}{p-2}},
\end{aligned}
\end{equation}
where $\mathbb{E}E(\cdot)$ is as in \eqref{bold-E}.
\end{lemma}
\begin{proof}
We write \eqref{fnls} in the following form:
\[
(i\partial_t+\Delta) v = |v|^2 v + \mathcal{N},\qtq{where} \mathcal{N} := |v+F|^2(v+F)-|v|^2 v.
\]
Given a weight $a=a(x)$, we define the standard Morawetz action
\[
m(t) =2\Im\int a_k(x) v_k(t,x) \bar v(t,x)\,dx,
\]
where subscripts denote derivatives and repeated indices are summed. A direct computation using the equation and integration by parts leads to the Morawetz identity
\begin{align*}
\dot m(t) = \int -\Delta\Delta a\,|v|^2 + 4\Re\{a_{jk} \bar v_j v_k\} + \Delta a\,|v|^4 + 4a_k\Re\{\overline{\mathcal{N}}\, v_k\} + 2\Delta a\,\Re \{\bar v\, \mathcal{N}\}\,dx.
\end{align*}
For the weight $a(x)=\langle x\rangle$, one has
\[
\nabla a = \tfrac{x}{\langle x\rangle}, \quad a_{jk} = \tfrac{\delta_{jk}}{\langle x\rangle}-\tfrac{x_jx_k}{\langle x\rangle^3},\quad \Delta a = \tfrac{3}{\langle x\rangle} + \tfrac{1}{\langle x\rangle^3}, \quad -\Delta\Delta a = \tfrac{3}{\langle x\rangle^3}+\tfrac{6}{\langle x\rangle^5}+\tfrac{15}{\langle x\rangle^7}.
\]
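For example, the expression for $\Delta a$ follows from a direct computation in $\mathbb{R}^4$ using $|x|^2=\langle x\rangle^2-1$:
\[
\Delta a = \partial_k\Bigl(\frac{x_k}{\langle x\rangle}\Bigr) = \frac{4}{\langle x\rangle} - \frac{|x|^2}{\langle x\rangle^3} = \frac{3}{\langle x\rangle}+\frac{1}{\langle x\rangle^3};
\]
iterating the same computation yields the expression for $-\Delta\Delta a$.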
Using Cauchy-Schwarz and \eqref{mass-bd}, we see that
\begin{equation}\label{mor-bd}
\|m\|_{L_t^\infty([0,T])}\lesssim \mathbb{E}E(T)^{\frac12}.
\end{equation}
Noting that
\[
\Re\{a_{jk} \bar v_j v_k\} \geq \langle x\rangle^{-3}|\nabla v|^2,
\]
we apply the Morawetz identity and the fundamental theorem of calculus to obtain
\begin{equation}\label{E:Morawetz1}
\begin{aligned}
&A(T) \lesssim \mathbb{E}E(T)^{\frac12} + \|\mathcal{N}\nabla v\|_{L_{t,x}^1([0,T]\times\mathbb{R}^4)}+ \|\langle x\rangle^{-1}\mathcal{N} v\|_{L_{t,x}^1([0,T]\times\mathbb{R}^4)}.
\end{aligned}
\end{equation}
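In the above, the coercivity of the Hessian term used the Cauchy-Schwarz inequality $|x\cdot\nabla v|^2\leq|x|^2|\nabla v|^2$ together with $\langle x\rangle^2-|x|^2=1$:
\[
\Re\{a_{jk} \bar v_j v_k\} = \frac{|\nabla v|^2}{\langle x\rangle}-\frac{|x\cdot\nabla v|^2}{\langle x\rangle^3} \geq \Bigl(\frac{1}{\langle x\rangle}-\frac{|x|^2}{\langle x\rangle^3}\Bigr)|\nabla v|^2 = \frac{|\nabla v|^2}{\langle x\rangle^3}.
\]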
To estimate the last two terms, we first note that by H\"older's inequality,
\begin{equation}\label{interpolate-mor}
\|\langle x\rangle^{-\frac3p}\nabla v\|_{L_t^p L_x^2} \lesssim \|\nabla v\|_{L_t^\infty L_x^2}^{1-\frac2p}\|\langle x\rangle^{-\frac32}\nabla v\|_{L_{t,x}^2}^{\frac2p}
\lesssim \mathbb{E}E(T)^{\frac12-\frac1p}A(T)^{\frac1p}
\end{equation}
for any $2\leq p\leq\infty$. Using also that
\[
\mathcal{N} = |v+F|^2(v+F)-|v|^2 v = \mathcal{O}\bigl( Fv^2+ F^3\bigr),
\]
together with H\"older's and Hardy's inequalities, we estimate
\begin{align*}
\|&\mathcal{N}\nabla v\|_{L_{t,x}^1}+ \|\langle x\rangle^{-1}\mathcal{N} v\|_{L_{t,x}^1} \\ & \lesssim \|\langle x\rangle^{-\frac3p}\nabla v\|_{L_t^p L_x^2} \|\langle x\rangle^{-\frac14}v\|_{L_{t,x}^4}^2 \|\langle x\rangle^{\frac3p+\frac12}F\|_{L_t^{\frac{2p}{p-2}}L_x^\infty} \\
& \quad + \|\langle x\rangle^{-\frac14}v\|_{L_{t,x}^4}^2\|\langle x\rangle^{-\frac32}v\|_{L_{t,x}^2}^{\frac13}\|v\|_{L_t^\infty L_x^4}^{\frac23}\|F\|_{L_t^3 L_x^6} + \|\nabla v\|_{L_t^\infty L_x^2} \|F\|_{L_t^3 L_x^6}^3 \\
& \lesssim \mathbb{E}E(T)^{\frac12-\frac1p}A(T)^{\frac12+\frac1p}\|F\|_{X([0,T])}+\mathbb{E}E(T)^{\frac16}A(T)^{\frac23}\|F\|_{X([0,T])} + \mathbb{E}E(T)^{\frac12}\|F\|_{X([0,T])}^3.
\end{align*}
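To see how the terms involving $A(T)$ are absorbed, note that since $\tfrac12+\tfrac1p<1$, Young's inequality gives, for any small $\epsilon>0$,
\[
\mathbb{E}E(T)^{\frac12-\frac1p}A(T)^{\frac12+\frac1p}\|F\|_{X([0,T])} \leq \epsilon A(T) + C_\epsilon\,\mathbb{E}E(T)\,\|F\|_{X([0,T])}^{\frac{2p}{p-2}},
\]
and similarly $\mathbb{E}E(T)^{\frac16}A(T)^{\frac23}\|F\|_{X([0,T])}\leq \epsilon A(T)+C_\epsilon\,\mathbb{E}E(T)^{\frac12}\|F\|_{X([0,T])}^{3}$.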
Continuing from \eqref{E:Morawetz1} and using Young's inequality to absorb $A(T)$ into the left-hand side, we deduce \eqref{E:Morawetz2}.\end{proof}
\begin{lemma}[Energy increment]\label{P:energy}
Suppose $v:[0,T]\times\mathbb{R}^4\to\mathbb{C}$ is a solution to \eqref{fnls}. Then
\begin{equation}\label{E:E}
\begin{aligned}
\mathbb{E}E(T) & \lesssim E[v(0)] + \|F\|_{L_t^\infty L_x^4}^4 + A(T)\|F\|_{X([0,T])}^{\frac{2p}{p+2}} +A(T)^{\frac{p+4}{2(p+2)}}\|F\|_{X([0,T])}^{\frac{4p}{p+2}} \\
& \quad +A(T)^{\frac{2}{p+2}}\|F\|_{X([0,T])}^{\frac{6p}{p+2}} + A(T)^{\frac{4}{p+4}}\|F\|_{X([0,T])}^{\frac{4p}{p+4}} \\
& \quad + A(T)^{\frac{8}{3p+8}}\bigl[ \|F\|_{X([0,T])}^2\|F\|_{L_t^\infty L_x^4([0,T]\times\mathbb{R}^4)}\bigr]^{\frac{4p}{3p+8}},
\end{aligned}
\end{equation}
where $\mathbb{E}E(\cdot)$ is as in \eqref{bold-E} and $A(\cdot)$ is as in \eqref{def:A}.
\end{lemma}
\begin{proof} Set $G(z)=|z|^2 z$. A direct computation using \eqref{fnls} yields
\begin{align*}
\partial_t E[v(t)] &=-\Re\int [\partial_t \bar v]\, [G(v+F)-G(v)]\,dx \\
& =-\tfrac14\partial_t\int \bigl[|v+F|^4 - |v|^4-|F|^4\bigr]\,dx + \Re\int [\overline{G(v+F)}-\overline{G(F)}]\,\partial_t F\,dx,
\end{align*}
where in the last line we used the identity $\partial_t |z|^4 = 4\Re\{G(z)\,\partial_t \bar z\}$. Recalling that $F$ solves $(i\partial_t+\Delta) F=0$, we continue from above and integrate by parts to get
\begin{align*}
\partial_t E[v(t)] = -\tfrac14\partial_t\! \int\bigl[|v+F|^4\! -\! |v|^4\!-\!|F|^4\bigr]\,dx + \Im\int \nabla [\overline{G(v+F)}-\overline{G(F)}]\cdot\nabla F\,dx.
\end{align*}
In particular, by the fundamental theorem of calculus,
\begin{align*}
\mathbb{E}E(T) & \leq E[v(0)]+\bigl\|\, |v+F|^4 - |v|^4 - |F|^4\bigr\|_{L_t^\infty L_x^1([0,T]\times\mathbb{R}^4)}\\
&\quad +\bigl\| \nabla F\cdot \nabla\bigl[|v+F|^2(v+F)-|F|^2 F\bigr]\bigr\|_{L_{t,x}^1([0,T]\times\mathbb{R}^4)}.
\end{align*}
We first estimate the boundary terms:
\begin{align*}
\bigl\|\, |v+F|^4 - |v|^4 - |F|^4\bigr\|_{L_t^\infty L_x^1}&\lesssim \|v\|_{L_t^\infty L_x^4}\|F\|_{L_t^\infty L_x^4}^3 + \|v\|_{L_t^\infty L_x^4}^3 \|F\|_{L_t^\infty L_x^4} \\
& \lesssim \mathbb{E}E(T)^{\frac14}\|F\|_{L_t^\infty L_x^4}^3 + \mathbb{E}E(T)^{\frac34}\|F\|_{L_t^\infty L_x^4}.
\end{align*}
Distributing the derivative in the remaining term, we are led to estimate five terms. Using H\"older's inequality and \eqref{interpolate-mor}, we obtain
\begin{align*}
\|v^2\nabla F\cdot\nabla v\|_{L_{t,x}^1} & \lesssim \|\langle x\rangle^{-\frac3p}\nabla v\|_{L_t^p L_x^2} \|\langle x\rangle^{-\frac14}v\|_{L_{t,x}^4}^2 \|\langle x\rangle^{\frac3p+\frac12}\nabla F\|_{L_t^{\frac{2p}{p-2}}L_x^\infty} \\
& \lesssim A(T)^{\frac12+\frac1p}\mathbb{E}E(T)^{\frac12-\frac1p}\|F\|_{X([0,T])}, \\
\| vF\nabla F\cdot\nabla v \|_{L_{t,x}^1} & \lesssim \|\langle x\rangle^{-\frac3p}\nabla v\|_{L_t^p L_x^2} \|\langle x\rangle^{-\frac14}v\|_{L_{t,x}^4} \|\langle x\rangle^{\frac14+\frac3p}\nabla F\|_{L_t^{\frac{2p}{p-2}}L_x^\infty} \|F\|_{L_{t,x}^4} \\
& \lesssim A(T)^{\frac14+\frac1p}\mathbb{E}E(T)^{\frac12-\frac1p}\|F\|_{X([0,T])}^2,\\
\|F^2\nabla F \cdot\nabla v \|_{L_{t,x}^1}&\lesssim \|\langle x\rangle^{-\frac3p}\nabla v\|_{L_t^p L_x^2} \|F\|_{L_{t,x}^4}^2 \|\langle x\rangle^{\frac3p}\nabla F\|_{L_t^{\frac{2p}{p-2}}L_x^\infty} \\
& \lesssim A(T)^{\frac1p}\mathbb{E}E(T)^{\frac12-\frac1p}\|F\|_{X([0,T])}^3,\\
\|v^2|\nabla F|^2 \|_{L_{t,x}^1} & \lesssim \| \langle x\rangle^{-\frac14}v\|_{L_{t,x}^4}^{\frac8p}\|v\|_{L_t^\infty L_x^4}^{2-\frac8p} \|\langle x\rangle^{\frac1p}\nabla F\|_{L_t^{\frac{2p}{p-2}}L_x^4}^2 \\
& \lesssim A(T)^{\frac2p}\mathbb{E}E(T)^{\frac12-\frac2p}\|F\|_{X([0,T])}^2,\\
\|vF|\nabla F|^2 \|_{L_{t,x}^1} & \lesssim \|\langle x\rangle^{-\frac14}v\|_{L_{t,x}^4}^{\frac8p}\|v\|_{L_t^\infty L_x^4}^{1-\frac8p} \|F\|_{L_t^\infty L_x^4} \|\langle x\rangle^{\frac1p}\nabla F\|_{L_t^{\frac{2p}{p-2}}L_x^4}^2 \\
& \lesssim A(T)^{\frac2p} \mathbb{E}E(T)^{\frac14-\frac2p} \|F\|_{X([0,T])}^2\|F\|_{L_t^\infty L_x^4}.
\end{align*}
Collecting the estimates above and applying Young's inequality to absorb $\mathbb{E}E(T)$ into the left-hand side, we arrive at \eqref{E:E}.\end{proof}
We are now ready to present the proof of Proposition~\ref{P:energy-bds}.
\begin{proof}[Proof of Proposition~\ref{P:energy-bds}] First, as $F\in L_t^\infty L_x^2$, conservation of mass for \eqref{nls} implies that $v$ satisfies the mass bound \eqref{mass-bd}.
As remarked before, by time-reversal symmetry, it suffices to prove uniform energy bounds for $v$ on $[0, \sup I_{\text{max}})$. To this end, let $0<\eta\ll1$ be a small parameter and subdivide $[0,\sup I_{\text{max}})$ into finitely many intervals $\{I_j\}_{j=0}^J$ so that
\begin{equation}\label{Fsmallj}
\|F\|_{X(I_j)} \leq \eta\qtq{for each}j.
\end{equation}
Inserting \eqref{Fsmallj} into \eqref{E:Morawetz2} and \eqref{E:E}, we find
\begin{align*}
A(T)&\lesssim \mathbb{E}E(T)^{\frac12}+\eta^{\frac{2p}{p-2}}\mathbb{E}E(T),\\
\mathbb{E}E(T)&\lesssim E[v(0)] + \|F\|_{L_t^\infty L_x^4}^4+ \eta^{\frac{2p}{p+2}}\bigl[ A(T) +A(T)^{\frac2{p+2}}\bigr] \\ & \quad + \bigl[\|F\|_{L_t^\infty L_x^4}\eta^2\bigr]^{\frac{4p}{3p+8}} A(T)^{\frac8{3p+8}},
\end{align*}
for all $0<T\in I_0$. Recalling \eqref{3:30} and choosing $\eta$ sufficiently small, a continuity argument shows that $\mathbb{E}E$ can increase by at most a fixed constant on the interval $I_0$. Repeating this argument on each $I_j$, we conclude that there exists a uniform energy bound on the forward maximal-lifespan of $v$.
\end{proof}
We now have all the pieces we need to complete the proof of our main result.
\begin{proof}[Proof of Theorem~\ref{T}] Let $s>\frac56$ and let $f\in H^s_{\rad}(\mathbb{R}^4)$. Propositions~\ref{P:STE2} and~\ref{P:STE} guarantee that
\begin{equation}\label{F-good}
F^\omega:=e^{it\Delta} f^\omega \in X(\mathbb{R})\cap L_t^\infty L_x^2(\mathbb{R}\times\mathbb{R}^4)\cap L_t^\infty L_x^4(\mathbb{R}\times\mathbb{R}^4)
\end{equation}
almost surely, where $X(\mathbb{R})$ is as in \eqref{X}.
Now fix $\omega$ such that \eqref{F-good} holds. Writing $u=F^\omega +v$, we see that $u$ solving \eqref{nls} with $u(0)=f^\omega$ is equivalent to $v$ solving \eqref{fnls} with $v(0)=0$. By Proposition~\ref{P:energy-bds}, $v$ is uniformly bounded in $H^1_x$ throughout its maximal lifespan. Thus, by Proposition~\ref{P:v-scatter}, $v$ is a global solution to \eqref{fnls} and scatters in $H^1_x$, which gives Theorem~\ref{T}.\end{proof}
\end{document}
\begin{document}
\title{Very well-covered graphs with the Erd\H{o}s-Ko-Rado property}
\begin{abstract}
A family of independent $r$-sets of a graph $G$ is an $r$-star if every set in the family contains some fixed vertex $v$. A graph is $r$-EKR if the maximum size of an intersecting family of independent $r$-sets is the size of an $r$-star. Holroyd and Talbot conjecture that a graph is $r$-EKR as long as $1\leq r\leq\frac{\mu(G)}{2}$, where $\mu(G)$ is the minimum size of a maximal independent set. It is suspected that the smallest counterexample to this conjecture is a well-covered graph. Here we consider the class of very well-covered graphs $G^*$ obtained by appending a single pendant edge to each vertex of $G$. We prove that the pendant complete graph $K_n^*$ is $r$-EKR when $n \geq 2r$ and strictly so when $n>2r$. Pendant path graphs $P_n^*$ are also explored and the vertex whose $r$-star is of maximum size is determined.
\end{abstract}
\section{Introduction}
Let $G$ be a finite simple graph with vertex and edge sets $V(G)$ and $E(G)$, respectively. Let $\mathcal{I}^{(r)}(G)$ denote the family of all independent $r$-sets of $G$. We say that $\mathcal{A}\subseteq\mathcal{I}^{(r)}(G)$ is {\bf intersecting} if the intersection of each pair of sets in $\mathcal{A}$ is nonempty. One such intersecting family is an {\bf $r$-star}, consisting of independent $r$-sets containing some fixed vertex $v\in V(G)$. The vertex $v$ is called the {\bf centre} of the $r$-star. A graph is {\bf $r$-EKR} if there exists an $r$-star whose size is maximum amongst all intersecting families of independent $r$-sets. If all extremal intersecting families are $r$-stars, the graph is said to be {\bf strictly $r$-EKR}. This naming stems from the classical Erd\H{o}s-Ko-Rado theorem, framed in the language of graph theory as follows:
\begin{theorem}[\emph{Erd\H{o}s-Ko-Rado} \cite{ClassicEKR}]
If $E_n$ is the empty graph of order $n$, then $E_n$ is $r$-EKR for $n \geq 2r$ and strictly so when $n>2r$.
\end{theorem}
This Erd\H{o}s-Ko-Rado property for graphs was formalized by Holroyd, Talbot, and Spencer in \cite{EKRComp}. There has been significant interest and progress in exploring the Erd\H{o}s-Ko-Rado property for many classes of graphs (e.g., \cite{DisUnions,SpecTrees,Chordal,LadderGraph}). In \cite{EKRProp}, Holroyd and Talbot conjecture a connection between the Erd\H{o}s-Ko-Rado property of a graph $G$ and the parameter $\mu(G)$ denoting the minimum size of a maximal independent set.
\begin{conjecture}[\emph{Holroyd and Talbot} \cite{EKRProp}]
\label{HT}
A graph $G$ will be $r$-EKR if $1 \leq r \leq \frac{\mu(G)}{2}$, and is strictly so if $2<r<\frac{\mu(G)}{2}.$
\end{conjecture}
In \cite{borg1,borg2}, Borg proved this conjecture in a much more general form for any graph $G$ with $\mu(G)\geq(r-1)r^2+r$. This result is a major step forward in proving Conjecture \ref{HT}; however, there is still a need to tighten the bound on $\mu(G)$. While introducing this conjecture, Holroyd and Talbot suggest that if a counterexample exists, it is likely that $\mu(G)$ is as large as possible. This occurs exactly when a graph is {\bf well-covered}, that is, every maximal independent set is also of maximum size. This observation motivates the study of the Erd\H{o}s-Ko-Rado property of well-covered graphs.
In \cite{well_covered}, Finbow, Hartnell, and Nowakowski characterize well-covered graphs with girth at least 6. A vertex $p\in V(G)$ whose neighborhood contains a single vertex $x$ is considered to be a {\bf pendant vertex} and $px\in E(G)$ is a {\bf pendant edge}.
\begin{theorem}[\emph{Finbow, Hartnell, and Nowakowski} \cite{well_covered}] Let $G$ be a connected graph of girth at least~$6$, which is neither isomorphic to $C_7$ nor $K_1$. Then $G$ is well-covered if and only if its pendant edges form a perfect matching.
\end{theorem}
A graph whose pendant edges form a perfect matching can be constructed by appending a single pendant edge to each vertex of some base graph $G$. We denote this well-covered graph as $G^*$ and refer to it as a pendant graph. In addition to being well-covered, a pendant graph is also {\bf very well-covered} as there are no isolated vertices and $\mu(G)=\frac{|V(G)|}{2}$.
Here we aim to motivate the study of pendant graphs in relation to the Erd\H{o}s-Ko-Rado property. In \cite{dis_cliques}, Bollob\'as and Leader prove the first and only prior EKR-type result regarding pendant graphs. They show that the disjoint union of at least $r$ cliques $K_t$ with $t\geq 2$ is $r$-EKR. The special case of size-$2$ cliques gives that when $n\geq r$, $E_n^*$ is $r$-EKR and strictly so unless $n=r$. A recent result by Estrugo and Pastine proves that every pendant graph has an $r$-star of maximum size whose centre is a pendant vertex. The result requires the existence of an {\bf escape path}, defined to be a path $v_1v_2\cdots v_n$ such that $\textrm{deg}(v_n)=1$ and $\textrm{deg}(v_i)=2$ for $2\leq i\leq n-2$.
\begin{theorem}[\emph{Estrugo and Pastine} \cite{escape_path}]\label{escape} Let $G$ be a graph, $v\in V(G)$, and $r\geq 1$. If there is an escape path from $v$ to a pendant vertex $p$, then $|\mathcal{I}^{(r)}_{v}(G)|\leq|\mathcal{I}^{(r)}_p(G)|$.
\end{theorem}
In the pendant graph $G^*$, the edge between a vertex $x$ in the base graph $G$ and its corresponding pendant vertex $p_x$ is an escape path from $x$ to $p_x$. This implies that for every vertex $v\in V(G^*)$, $|\mathcal{I}^{(r)}_v(G^*)|\leq|\mathcal{I}^{(r)}_p(G^*)|$ for some pendant vertex $p$.
In this paper, we prove the following results for the pendant graphs $K_n^*$ and $P_n^*$.
\begin{theorem}\label{pkEKR}
The pendant complete graph $K_n^*$ is $r$-EKR for $n \geq 2r$, and strictly so for $n > 2r$.
\end{theorem}
\begin{theorem}
\label{thetheorem}
Let $1\leq r\leq n$. The $r$-stars whose centres are the pendant vertices appended to the second vertex from either end of the base path $P_n$ are $r$-stars of maximum size in $P_n^*$.
\end{theorem}
Theorem \ref{pkEKR} aligns with Conjecture \ref{HT}, and we believe a similar EKR-type result holds for $P_n^*$, although we establish that $P_n^*$ is not $n$-EKR. The proof of Theorem \ref{thetheorem} is of particular interest, as we establish a recurrence relation that reduces to the Fibonacci sequence when $r=n$.
\section{Pendant Complete Graphs}
We begin by establishing notation used throughout. Given a graph $G$ with vertices $\{x_1, \ldots, x_n\}$, the pendant graph of $G$, denoted $G^*$, is defined by the following:
\[ V(G^*) = \{x_1, \ldots, x_n\} \sqcup \{p_1, \ldots, p_n\},\] \[E(G^*) = E(G) \sqcup \{x_1p_1, \ldots, x_np_n\}. \]
When referring to the base graph $G$ in regards to $G^*$, it is more precisely defined as $G:=G^*[\{x_1,\ldots,x_n\}]$.
Recall that $\mathcal{I}^{(r)}(G^*)$ is the set of all independent $r$-sets of $G^*$. Further, for $v\in V(G^*)$ define $\mathcal{I}^{(r)}_v(G^*)$ to be the $r$-star comprised of all independent $r$-sets of $G^*$ containing $v$.
\begin{lemma}
For $n,r\geq 1$,
\begin{align}
\left| \mathcal{I}^{(r)}(K^*_n) \right| =
\begin{cases}(r+1) \binom{n}{r} & \text{if }r\leq n\\
0&\text{otherwise}.
\end{cases}
\end{align}
\end{lemma}
\begin{proof}
Suppose $\mathcal{I}^{(r)}(K_n^*)$ is nonempty. Note that an independent set of $K_n^*$ can contain at most one vertex of $K_n$. Since there are $n$ pendant vertices, this implies $r\leq n+1$. Sets of size $n+1$ containing at most one vertex of $K_n$ contain a pendant-base vertex pair. Hence no set of size $n+1$ is independent, so we may assume $r\leq n$.
Partition the independent sets of size $r\leq n$ into those containing only pendant vertices and those containing exactly one vertex from $K_n$. The set of $n$ pendant vertices forms an independent set, thus there are $\binom{n}{r}$ independent $r$-sets containing only pendant vertices. Each independent set containing one vertex in $K_n$ can be realized as a set of only pendant vertices with one of the $r$ pendants $p_i$ mapped to its corresponding base vertex $x_i$. This gives $r\binom{n}{r}$ independent sets containing exactly one vertex from $K_n$, proving the claim.
\end{proof}
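As a quick sanity check, take $n=r=2$: the formula predicts $(2+1)\binom{2}{2}=3$ independent $2$-sets in $K_2^*$, and indeed
\[
\mathcal{I}^{(2)}(K_2^*) = \bigl\{\{p_1,p_2\},\ \{x_1,p_2\},\ \{p_1,x_2\}\bigr\}.
\]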
The following lemma uses a compression-like operation similar to that of \cite{EKRComp}. For $v\in V(G^*)$, we define $G^*\downarrow v:=G^*\backslash N[v]$.
\begin{lemma} \label{cliqstars} Let $p$ be a pendant vertex of $K_n^*$ and suppose $r\leq n$. Then
\begin{align}
\left| \mathcal{I}^{(r)}_p(K_n^*) \right| = \left| \mathcal{I}^{(r-1)}(K^*_{n-1}) \right| = r \binom{n-1}{r-1}.
\end{align}
\end{lemma}
\begin{proof}
Consider the bijection
\[f:\mathcal{I}_p^{(r)}(K_n^*)\to\mathcal{I}^{(r-1)}(K_{n}^*\downarrow p)\]
defined as $f(I)=I\backslash\{p\}$. Furthermore, $K_n^*\downarrow p\cong K_{n-1}^*$ as $p$ is a pendant vertex.
\end{proof}
Now, given any intersecting family $\mathcal{A}$ of independent $r$-sets of $K^*_n$, we construct an injection to an intersecting family that admits a favorable partition.
\begin{lemma}\label{pendint}
Let $\mathcal{A}\subseteq\mathcal{I}^{(r)}(K^*_n)$ be an intersecting family. Then, there exists an intersecting family $\mathcal{B}\subseteq\mathcal{I}^{(r)}(K_n^*)$ satisfying
\begin{compactenum}[\hspace{0.25cm} 1. ]
\item $|\mathcal{A}|=|\mathcal{B}|$, and
\item if $B_1,B_2\in \mathcal{B}$, then $B_1\cap B_2\not\subseteq V(K_n)$.
\end{compactenum}
\end{lemma}
\begin{proof}
Let $\mathcal{A}_n:=\mathcal{A}$ and for each $1\leq i\leq n$ recursively define $\mathcal{A}_{i-1}:=\varphi_i(\mathcal{A}_i)$ where
\begin{align*}
\varphi_i(A) = \begin{cases}
A \cup \{p_i\} \setminus \{x_i\} &\text{ if } \exists \text{ some } C \in \mathcal{A}_i \text{ such that } A \cap C = \{x_i\}, \\
A &\text{ otherwise.}
\end{cases}
\end{align*}
\begin{claim}
$\mathcal{A}_i\subseteq\mathcal{I}^{(r)}(K_n^*)$ for all $0\leq i \leq n.$
\end{claim}
\begin{claimproof}
\normalfont
It suffices to show that if $A\in \mathcal{A}_i$ is independent, then $\varphi_i(A)\in\mathcal{A}_{i-1}$ has size $r$ and is also independent. This is clear if $\varphi_i(A)=A.$
Otherwise $\varphi_i(A)\backslash A=\{p_i\}$. Since $N(p_i)=\{x_i\}$ and $x_i\not\in \varphi_i(A)$ then $\varphi_i(A)$ is independent. By definition of $\varphi_i$, $\varphi_i(A)\neq A$ implies $x_i\in A$. Furthermore $\{p_{i},x_{i}\}\nsubseteq A$ as $A$ is independent, thus $\varphi_{i}(A)$ has size $r$ and $\mathcal{A}_{i-1}\subseteq \mathcal{I}^{(r)}(K_n^*)$.
\end{claimproof}
\begin{claim}
$\mathcal{A}_i$ is an intersecting family for all $0\leq i\leq n$.
\end{claim}
\begin{claimproof}
\normalfont
As in the previous claim, we show that if $\mathcal{A}_i$ is an intersecting family, then $\mathcal{A}_{i-1}=\varphi_i(\mathcal{A}_i)$ is intersecting. Consider $A,C\in \mathcal{A}_i$. If $\varphi_i(A)=A$ and $\varphi_i(C)=C$, then $\varphi_i(A)\cap\varphi_i(C)\neq\emptyset$ since $\mathcal{A}_i$ is intersecting. Thus we may suppose $\varphi_i(A)=A\cup\{p_i\}\backslash\{x_i\}$. The intersecting property of $\mathcal{A}_i$ implies that either $A\cap C=\{x_i\}$ or there is some $v\in A\cap C$ with $v\neq x_i$. In the former case, $\varphi_i(C)=C\cup\{p_i\}\backslash\{x_i\}$, hence $\varphi_i(A)\cap\varphi_i(C)=\{p_i\}$. Otherwise, since $v\neq x_i$, we have $v\in \varphi_i(A)\cap \varphi_i(C)$.
\end{claimproof}
\begin{claim}
$\varphi_i$ is injective for all $0\leq i\leq n$.
\end{claim}
\begin{claimproof}
\normalfont
Consider distinct sets $A,C \in \mathcal{A}_i$ and suppose for the sake of contradiction that $\varphi_i(A)=\varphi_i(C)$. Without loss of generality, assume $\varphi_i(A)=A\cup\{p_i\}\backslash\{x_i\}$ and $\varphi_i(C)=C$. Since $\varphi_i(A)\neq A$, there exists some $D\in \mathcal{A}_i$ such that $A\cap D=\{x_i\}$. Furthermore, $\varphi_i(A)=\varphi_i(C)=C$ implies $C=A\cup\{p_i\}\backslash\{x_i\}$. Since $\mathcal{A}_i$ is intersecting and $D\cap(A\backslash\{x_i\})=\emptyset$, we must have $D\cap C=\{p_i\}$, giving $\{x_i,p_i\}\subseteq D$. This contradicts the independence of $D$, therefore $\varphi_i$ is injective.
\end{claimproof}
Consider $\mathcal{B}:=\mathcal{A}_0$. Claims 1 and 2 give that $\mathcal{B}$ is an intersecting subfamily of $\mathcal{I}^{(r)}(K_n^*)$. Furthermore, notice that $\mathcal{B}=\varphi(\mathcal{A})$ where
\[\varphi=\varphi_1 \circ \varphi_2 \circ \dots \circ \varphi_n.\]
Since each $\varphi_i$ is injective by Claim 3, $\varphi$ is injective. Thus $|\mathcal{A}|=|\mathcal{B}|$.
Lastly, consider $B_1,B_2\in \mathcal{B}$; we show that $B_1\cap B_2\neq\{x_i\}$ for every $1\leq i\leq n$. Suppose $x_i\in B_1\cap B_2$ for some $i$. Then $B_1\backslash\{p_j\}\cup\{x_j\}$ and $B_2\backslash\{p_j\}\cup\{x_j\}$ are not independent for any $j\neq i$, so $(\varphi_1\circ\cdots\circ\varphi_{i-1})^{-1}(B_1)=B_1$ and $(\varphi_1\circ\cdots\circ\varphi_{i-1})^{-1}(B_2)=B_2$. We may therefore write $B_1=\varphi_i(C_1)$ and $B_2=\varphi_i(C_2)$ for some $C_1,C_2\in\mathcal{A}_i$. Since the swap performed by $\varphi_i$ removes $x_i$, and $x_i\in B_1\cap B_2$, we must have $\varphi_i(C_1)=C_1$ and $\varphi_i(C_2)=C_2$, so $B_1=C_1$ and $B_2=C_2$ lie in $\mathcal{A}_i$. But if $B_1\cap B_2=\{x_i\}$, the definition of $\varphi_i$ would force $\varphi_i(B_1)\neq B_1$, a contradiction. Therefore $B_1\cap B_2\neq\{x_i\}$, concluding the proof.
\end{proof}
\begin{proof}[Proof of Theorem \ref{pkEKR}]
Let $\mathcal{A}\subseteq\mathcal{I}^{(r)}(K_n^*)$ be an intersecting family and let $\mathcal{B}$ be the intersecting family obtained by applying Lemma \ref{pendint} to $\mathcal{A}$. Consider the partition $\mathcal{B} = \mathcal{P} \sqcup \mathcal{X}$ where
\[ \mathcal{P} := \{B \in \mathcal{B}: x_i \notin B \text{ for all } i\}\]
\[ \mathcal{X} := \{B \in \mathcal{B}: x_i \in B \text{ for some } i \}\subseteq\mathcal{A}.\]
Note that $\mathcal{P}\subseteq\mathcal{I}^{(r)}(K_n^*[\{p_1,\ldots,p_n\}])$ is an intersecting family. Since $K_n^*[\{p_1,\ldots,p_n\}]\cong E_n$, the Erd\H{o}s-Ko-Rado theorem gives $|\mathcal{P}|\leq\binom{n-1}{r-1}$ when $n\geq 2r$.
Recall that an independent set of $K_n^*$ contains at most one vertex of $K_n$; hence each set in the following family derived from $\mathcal{X}$ has size $r-1$ and contains only pendants:
\[ \mathcal{R} := \{ X \setminus \{x_i\} : X \in \mathcal{X},\,x_i\in X \}. \]
Note that each set of $\mathcal{X}$ contains $x_i$ for some $i$, thus
\[\mathcal{X}\subseteq\{R\cup\{x_j\}\,:\,R\in\mathcal{R}, p_j\not\in R\}.\]
Since each $R\in\mathcal{R}$ has $n-(r-1)$ choices of $j$ satisfying $p_j\not\in R$ then
\[|\mathcal{X}|\leq (n-r+1)|\mathcal{R}|.\]
By condition (2) of $\mathcal{B}$ in Lemma \ref{pendint}, $\mathcal{R}\subseteq\mathcal{I}^{(r-1)}(K_n^*[\{p_1,\ldots,p_n\}])$ is intersecting. By the Erd\H{o}s-Ko-Rado theorem, $\mathcal{R}$ has size at most $\binom{n-1}{r-2}$ when $n \geq 2(r-1)$. This inequality is strict when $n>2(r-1)$.
It follows that when $n \geq 2r$,
\begin{align*}|\mathcal{A}| &= |\mathcal{B}|\\
&= |\mathcal{P}| + |\mathcal{X}|\\
&\leq |\mathcal{P}| + (n-r+1)|\mathcal{R}|\\
&\leq \binom{n-1}{r-1} + (n-r+1)\binom{n-1}{r-2}\\
&= r\binom{n-1}{r-1}.
\end{align*}
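The final equality here is the absorption identity
\[
(n-r+1)\binom{n-1}{r-2} = (n-r+1)\cdot\frac{(n-1)!}{(r-2)!\,(n-r+1)!} = \frac{(n-1)!}{(r-2)!\,(n-r)!} = (r-1)\binom{n-1}{r-1},
\]
so that $\binom{n-1}{r-1}+(r-1)\binom{n-1}{r-1}=r\binom{n-1}{r-1}$.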
Therefore $|\mathcal{A}|\leq r\binom{n-1}{r-1}$ and Lemma \ref{cliqstars} implies that $K_n^*$ is $r$-EKR when $n\geq 2r$.
Furthermore, the Erd\H{o}s-Ko-Rado Theorem implies this inequality is strict when $n>2r$ unless all of the following statements hold:
\begin{compactenum}[\hspace{0.25cm} 1. ]
\item $\mathcal{P}$ is the set of all $r$-sets of $\{p_1,\ldots,p_n\}$ containing some fixed vertex $p_i$.
\item $\mathcal{X}=\{R\cup\{x_k\}\,:\,R\in\mathcal{R},p_k\not\in R\}$.
\item $\mathcal{R}$ is the set of all $(r-1)$-sets of $\{p_1,\ldots,p_n\}$ containing some fixed vertex $p_j$.
\end{compactenum}
To prove $K_n^*$ is strictly $r$-EKR when $n>2r$, it suffices to show that these conditions imply $p_i=p_j$. Suppose $p_i\neq p_j$. Since $n>2r,$ there exist $2r-2$ distinct pendant vertices
\[\{p_{i_1},\ldots,p_{i_{r-1}},p_{j_1},\ldots,p_{j_{r-1}}\}\subseteq\{p_1,\ldots,p_n\}\backslash\{p_i,p_j\}.\]
Consider the following sets
\[P=\{p_i,p_{i_1},\ldots,p_{i_{r-1}}\}\textrm{ and } X=\{p_j,p_{j_1},\ldots,p_{j_{r-2}},x_{j_{r-1}}\}.\]
From (1), we have $P\in\mathcal{P}$ and by (2) and (3), $X\in\mathcal{X}\subseteq\mathcal{A}$. Since $\mathcal{A}$ is intersecting, $X\in\mathcal{A}$, and $P\cap X=\emptyset$, we must have $P\not\in \mathcal{A}$. By Lemma~\ref{pendint}, this implies either $P\cup\{x_i\}\backslash\{p_i\}\in\mathcal{A}$ or $P\cup\{x_{i_k}\}\backslash\{p_{i_k}\}\in\mathcal{A}$ for some $1\leq k\leq r-1$. But $x_{j_{r-1}}\not\in\{x_i,x_{i_1},\ldots,x_{i_{r-1}}\}$, thus both cases contradict the intersecting property of $\mathcal{A}$. Therefore $p_i=p_j$, hence $K_n^*$ is strictly $r$-EKR when $n>2r$.
\end{proof}
\section{Pendant Path Graphs}\label{sec:results}
Define $P_n$ to be the path graph on $n$ vertices whose edges are of the form $x_ix_{i+1}$ with $1\leq i\leq n-1$. As is standard in many EKR-type results, we establish a recurrence relation for $r$-stars of $P_n^*$. However, the recurrence only holds for $r$-stars whose centre is a pendant vertex.
\begin{lemma}
\label{bigrecur}
The recurrence relation
\begin{align}\label{bigrecrelation}|\mathcal{I}_{p_i}^{(r)}(P_n^*)| = |\mathcal{I}_{p_i}^{(r)}(P_{n-1}^*)| + |\mathcal{I}_{p_i}^{(r-1)}(P_{n-1}^*)| + |\mathcal{I}_{p_i}^{(r-1)}(P_{n-2}^*)| + |\mathcal{I}_{p_i}^{(r-2)}(P_{n-2}^*)|\end{align}
holds for all $n,r\geq 1$ and $i \leq n-2$.
\end{lemma}
\begin{proof}[Proof of Lemma \ref{bigrecur}]
Let
\begin{align*}
\mathcal{P} &:= \{ S\backslash\{p_n\} : S\in \mathcal{I}_{p_i}^{(r)}(P_n^*),\,p_n \in S \}, \\
\mathcal{X} &:= \{ S\backslash\{x_n\} : S\in \mathcal{I}_{p_i}^{(r)}(P_n^*), \,x_n \in S, p_{n-1}\not\in S\},\\
\mathcal{B} &:= \{ S\backslash\{x_n,p_{n-1}\} : S\in \mathcal{I}_{p_i}^{(r)}(P_n^*), \,x_n,p_{n-1} \in S\}, \text{ and}\\
\mathcal{E} &:= \{ S \in \mathcal{I}_{p_i}^{(r)}(P_n^*) : x_n, p_n \notin S\}.
\end{align*}
Note that $i\leq n-2$ implies $p_i\in V(P_{n-1}^*)$ and $p_i\in V(P_{n-2}^*)$. Moreover, an independent set $S$ of $P_n^*$ satisfies $|S\cap\{x_j,p_j\}|\leq 1$ for all $j$. This gives $\mathcal{P}=\mathcal{I}^{(r-1)}_{p_i}(P_{n-1}^*)$, $\mathcal{B}=\mathcal{I}^{(r-2)}_{p_i}(P_{n-2}^*)$, and $\mathcal{E}=\mathcal{I}_{p_i}^{(r)}(P_{n-1}^*)$. Since $x_n\in S$ implies $x_{n-1}\not\in S$, we have $\mathcal{X}=\mathcal{I}_{p_i}^{(r-1)}(P_{n-2}^*)$. Therefore
\begin{align*}
|\mathcal{I}_{p_i}^{(r)}(P_n^*)| &= |\mathcal{E}| + |\mathcal{P}| + |\mathcal{X}| + |\mathcal{B}| \\&=|\mathcal{I}_{p_i}^{(r)}(P_{n-1}^*)| + |\mathcal{I}_{p_i}^{(r-1)}(P_{n-1}^*)| + |\mathcal{I}_{p_i}^{(r-1)}(P_{n-2}^*)| + |\mathcal{I}_{p_i}^{(r-2)}(P_{n-2}^*)|,
\end{align*}
as claimed.
\end{proof}
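As a quick sanity check of the recurrence, take $n=3$, $r=2$, and $i=1$: the independent $2$-sets of $P_3^*$ containing $p_1$ are $\{p_1,p_2\}$, $\{p_1,p_3\}$, $\{p_1,x_2\}$, and $\{p_1,x_3\}$, so $|\mathcal{I}_{p_1}^{(2)}(P_3^*)|=4$, while the right-hand side of \eqref{bigrecrelation} gives
\[|\mathcal{I}_{p_1}^{(2)}(P_{2}^*)| + |\mathcal{I}_{p_1}^{(1)}(P_{2}^*)| + |\mathcal{I}_{p_1}^{(1)}(P_{1}^*)| + |\mathcal{I}_{p_1}^{(0)}(P_{1}^*)| = 2+1+1+0 = 4.\]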
We can extend this recurrence relation to $p_{n-1}$ and $p_n$ by graph symmetry.
\begin{remark}
\label{symmetry}
When $n \geq 4$ and $1 \leq r \leq n$, the following recurrences hold by symmetry:
\[ |\mathcal{I}_{p_{n}}^{(r)}(P_n^*)| = |\mathcal{I}_{p_1}^{(r)}(P_{n-1}^*)| + |\mathcal{I}_{p_1}^{(r-1)}(P_{n-1}^*)| + |\mathcal{I}_{p_1}^{(r-1)}(P_{n-2}^*)| + |\mathcal{I}_{p_1}^{(r-2)}(P_{n-2}^*)|\]
and
\[ |\mathcal{I}_{p_{n-1}}^{(r)}(P_n^*)| = |\mathcal{I}_{p_2}^{(r)}(P_{n-1}^*)| + |\mathcal{I}_{p_2}^{(r-1)}(P_{n-1}^*)| + |\mathcal{I}_{p_2}^{(r-1)}(P_{n-2}^*)| + |\mathcal{I}_{p_2}^{(r-2)}(P_{n-2}^*)|.\]
\end{remark}
By Theorem \ref{escape}, for any pendant graph $G^*$ there exists an $r$-star of maximum size whose centre is a pendant vertex. We include a proof of the special case $G^*=P_n^*$ here for completeness.
\begin{lemma}
\label{injections}
For $1 \leq i \leq n$ and $1 \leq r \leq n$,
\[|\mathcal{I}_{x_i}^{(r)}(P_n^*)| \leq |\mathcal{I}_{p_i}^{(r)}(P_n^*)|.\]
\end{lemma}
\begin{proof}
Fix $1\leq i\leq n$ and consider the family
\[\mathcal{P}=\{S\backslash\{x_i\}\cup\{p_i\}\,:\,S\in\mathcal{I}_{x_i}^{(r)}(P_n^*)\}.\]
For $S\in\mathcal{I}_{x_i}^{(r)}(P_n^*)$, $x_i\in S$ implies $p_i\not\in S$. Furthermore, $N(p_i)=\{x_i\}$, so $S\backslash\{x_i\}\cup\{p_i\}$ is independent. The map $S\mapsto S\backslash\{x_i\}\cup\{p_i\}$ is injective, hence $|\mathcal{I}_{x_i}^{(r)}(P_n^*)|=|\mathcal{P}|$ and $\mathcal{P}\subseteq\mathcal{I}_{p_i}^{(r)}(P_n^*)$, which implies the result.
\end{proof}
Next, we determine $|\mathcal{I}^{(n)}(P_n^*)|$, the total number of independent $n$-sets of $P_n^*$. Prior to stating this result, we recall the Fibonacci numbers. These numbers are usually defined with the seed values $0$ and $1$; for the purposes of this paper, however, we define the Fibonacci numbers with a shift in indexing.
\begin{definition}
The Fibonacci sequence is defined by
\[F(n) = F(n-1) + F(n-2) \]
with $F(0)=1$ and $F(1)=2$.
\end{definition}
With this in hand, we present the following result.
\begin{lemma}
\label{fibo}
If $n \geq 0$, then $|\mathcal{I}^{(n)}(P_n^*)| = F(n).$
\end{lemma}
\begin{proof}
A base vertex $x_i$ and its pendant $p_i$ cannot both lie in an independent set. Since $P_n^*$ consists of exactly $n$ base-pendant pairs, any independent $n$-set $S$ must contain exactly one of $x_i$ or $p_i$ for each $i$. In particular, if $p_n\not\in S$ then $x_n\in S$, which forces $x_{n-1}\not\in S$ (as $x_nx_{n-1}\in E(P_n^*)$) and hence $p_{n-1}\in S$. Thus $p_n\not\in S$ if and only if $x_n,p_{n-1}\in S$, so we may split $\mathcal{I}^{(n)}(P_n^*)$ into the two subfamilies
\begin{align*}
\mathcal{A}&=\{S\backslash\{p_n\}\,:\,S\in\mathcal{I}^{(n)}(P_n^*),\,p_n\in S\} \text{ and }\\
\mathcal{B}&=\{S\backslash\{x_n,p_{n-1}\}\,:\,S\in\mathcal{I}^{(n)}(P_n^*),\,p_n\not\in S\},
\end{align*}
which satisfy $|\mathcal{I}^{(n)}(P_n^*)|=|\mathcal{A}|+|\mathcal{B}|$.
Note that $A\in\mathcal{I}^{(n-1)}(P_{n-1}^*)$ if and only if $A\cup\{p_n\}\in\mathcal{I}^{(n)}(P_{n}^*)$, since $N(p_n)\cap V(P_{n-1}^*)=\emptyset$. Similarly, $B\in \mathcal{I}^{(n-2)}(P_{n-2}^*)$ if and only if $B\cup\{p_{n-1},x_n\}\in\mathcal{I}^{(n)}(P_{n}^*)$, since $N(\{p_{n-1},x_n\})\cap V(P_{n-2}^*)=\emptyset$. Therefore $\mathcal{A}=\mathcal{I}^{(n-1)}(P_{n-1}^*)$ and $\mathcal{B}=\mathcal{I}^{(n-2)}(P_{n-2}^*)$, which yields the recurrence
\[|\mathcal{I}^{(n)}(P_n^*)| = |\mathcal{I}^{(n-1)}(P_{n-1}^*)| + |\mathcal{I}^{(n-2)}(P_{n-2}^*)|.\]
For the base cases, $|\mathcal{I}^{(0)}(P_0^*)|=1$ since the empty set is the only independent set of cardinality $0$, and $|\mathcal{I}^{(1)}(P_1^*)| = 2$ since $P_1^*$ has two vertices and hence two singleton independent sets. These values agree with $F(0)=1$ and $F(1)=2$, so $|\mathcal{I}^{(n)}(P_n^*)|=F(n)$ for all $n\geq 0$.
\end{proof}
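With the shifted indexing above, the first few values are $F(0)=1$, $F(1)=2$, $F(2)=3$, $F(3)=5$, and $F(4)=8$. As a small check,
\[|\mathcal{I}^{(2)}(P_2^*)| = 3,\]
the three sets being $\{p_1,p_2\}$, $\{p_1,x_2\}$, and $\{x_1,p_2\}$, in agreement with $F(2)=3$.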
\begin{lemma}
\label{countInd}
If $1 \leq k \leq n$, then
\begin{align*}
|\mathcal{I}^{(n)}_{p_k}(P_n^*)| = F(k-1)F(n-k).
\end{align*}
\end{lemma}
\begin{proof}
Fix $1\leq k\leq n$ and consider the following two subgraphs of $P_n^*:$
\begin{align*}
P_{<k}^*&:=P_n^*[\{x_1,p_1,\ldots,x_{k-1},p_{k-1}\}]\\
P_{>k}^*&:=P_n^*[\{x_{k+1},p_{k+1},\ldots,x_n,p_n\}].
\end{align*}
Recall that an independent $n$-set of $P_n^*$ contains exactly one of $x_i$ or $p_i$ for all $i$. For $S\in\mathcal{I}^{(r)}_{p_k}(P_n^*)$, this implies $|S\cap P_{<k}^*|=k-1$ and $|S\cap P_{>k}^*|=n-k$.
Furthermore, the inclusion of $p_k$ in $S$ implies that $x_k\not\in S$. Thus since the subgraphs $P_{<k}^*$ and $P_{>k}^*$ are only adjacent to $x_k$ in the larger graph $P_n^*$, $S$ can be realized as $S=A\cup B\cup\{p_k\}$ where $A\in\mathcal{I}^{(k-1)}(P_{<k}^*)$ and $B\in\mathcal{I}^{(n-k)}(P_{>k}^*)$. Therefore
\begin{align}
\label{fiboeq}
|\mathcal{I}^{(n)}_{p_k}(P_n^*)| = |\mathcal{I}^{(k-1)}(P_{<k}^*)||\mathcal{I}^{(n-k)}(P_{>k}^*)|.
\end{align}
Finally, notice that $P_{<k}^*$ is isomorphic to $P^*_{k-1}$ and $P_{>k}^*$ is isomorphic to $P^*_{n-k}$. Hence applying Lemma \ref{fibo} to Equation \eqref{fiboeq} gives
\[
|\mathcal{I}^{(n)}_{p_k}(P_n^*)| = F(k-1)F(n-k).
\qedhere
\]
\end{proof}
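For instance, when $n=4$ the lemma gives
\[|\mathcal{I}^{(4)}_{p_k}(P_4^*)| = F(k-1)F(4-k) = 5,\ 6,\ 6,\ 5 \quad\text{for } k=1,2,3,4,\]
so the maximum is attained at $k=2$ and $k=n-1=3$, anticipating the next lemma.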
Now, we identify which value(s) of $k$ maximize the product $F(k-1)F(n-k)$.
This allows us to identify stars of maximum size when $r=n$.
\begin{lemma}\label{FiboMaximum}
Fix $n\geq 1$ and $1\leq k \leq n$. Define
\begin{align*}
f(k) \coloneqq F(k-1)F(n-k).
\end{align*}
The function $f(k)$ is maximized when $k=2$ or $k=n-1$.
\end{lemma}
\begin{proof}
First, we leverage the symmetry of the product,
\begin{align*}
f(k) &= F(k-1)F(n-k), \\
&= F(n-(n-k+1))F((n-k+1)-1), \\
&= f(n-k+1).
\end{align*}
In particular, taking $k=2$ gives $f(2)=f(n-1)$. Therefore, it suffices to show that $k=2$ maximizes $F(k-1)F(n-k)$ for $k \leq \lceil n/2 \rceil$. It is well known that the Fibonacci numbers admit the closed form
\begin{align*}
F(n-2) &= \frac{\varphi^{n} - (-\varphi)^{-n}}{\sqrt{5}},
\end{align*}
where $\varphi = \frac{1 + \sqrt{5}}{2}$ is the golden ratio \cite{Binet}. The product can then be written as
\begin{align*}
F(k-1)F(n-k) &= \frac{\left(\varphi^{k+1} - (-\varphi)^{-k-1}\right)\left(\varphi^{n-k+2} - (-\varphi)^{-n+k-2}\right)}{5}
\end{align*}
which equates to
\begin{align*}
5F(k-1)F(n-k) &= \varphi^{n+3} +(-1)^{n-k+1} \varphi^{-n+2k-1} +(-1)^{k} \varphi^{n-2k+1} + (-1)^{n+1} \varphi^{-n-3}.
\end{align*}
Since the terms $\varphi^{n+3}$ and $ (-1)^{n+1} \varphi^{-n-3}$ are independent of $k$, in order to maximize the product of $F(k-1)$ and $F(n - k)$ it suffices to maximize
\begin{align*}
\alpha(k) \coloneqq (-1)^{n-k+1} \varphi^{-n+2k-1} +(-1)^{k} \varphi^{n-2k+1}.
\end{align*}
If $n$ is even, then
\begin{align*}
\alpha(k) = (-1)^{-k+1} \varphi^{-(n-2k+1)} + (-1)^{k}\varphi^{n-2k+1}.
\end{align*}
\begin{claim}
If $k$ is odd, $\alpha(k)<0$ and if $k$ is even then $\alpha(k)>0$.
\end{claim}
\begin{claimproof}
We further refine our assumptions based on the parity of $k$:
\begin{align}
\label{alpha}
\alpha(k) = \begin{cases}
-\varphi^{-(n-2k+1)} + \varphi^{n-2k+1} \quad &k \text{ is even}, \\
\varphi^{-(n-2k+1)} - \varphi^{n-2k+1} \quad &k \text{ is odd.}
\end{cases}
\end{align}
Since $1\leq k\leq \lceil n/2\rceil$, we have $1\leq n-2k+1\leq n-1$. Therefore,
\begin{align}
\label{fiboinv}
\varphi^{-(n-2k+1)} < 1
\end{align}
and
\begin{align}
\label{fiboinv2}
\varphi^{n-2k+1} > 1.
\end{align}
Applying Equations \eqref{fiboinv} and \eqref{fiboinv2} to Equation \eqref{alpha} for odd $k$ implies
\begin{align*}
\alpha(k) &< 1 - \varphi^{n-2k+1} < 0.
\end{align*}
Similarly, we get a lower bound for even $k$,
\begin{align*}
\alpha(k) &> -1 + \varphi^{n-2k+1} > 0.
\end{align*}
So we conclude $\alpha(k)$ is positive for even $k$ and negative for odd $k$.
\end{claimproof}
In order to maximize $\alpha(k)$ it is sufficient to consider even $k$. The derivative of $\alpha(k)$ for even $k$ is
\begin{align*}
\frac{d \alpha}{dk} = -2\ln(\varphi) \varphi^{-(n-2k+1)} - 2\ln(\varphi) \varphi^{n-2k+1}.
\end{align*}
Noting that $\ln(\varphi) > 0$ yields
\begin{align*}
\frac{d \alpha}{dk} &< 0.
\end{align*}
Therefore, $\alpha(k)$ is a strictly decreasing function, hence it is maximized by the minimum value of $k$, which is $k=2$.
Finally, we consider the case where $n$ is odd. As in the even case, we again refine $\alpha(k)$ based on the parity of $k$:
\begin{align*}
\alpha(k) = \begin{cases}
\varphi^{-(n-2k+1)} + \varphi^{n-2k+1} \quad & k \text{ is even}, \\
-\varphi^{-(n-2k+1)} - \varphi^{n-2k+1} \quad & k \text{ is odd}.
\end{cases}
\end{align*}
It is clear that $\alpha(k)>0$ when $k$ is even, while $\alpha(k)<0$ when $k$ is odd. Hence, we consider only even $k$ and compute the derivative:
\begin{align*}
\frac{d \alpha}{dk} = 2\ln(\varphi) \varphi^{-(n-2k+1)} - 2\ln(\varphi) \varphi^{n-2k+1}.
\end{align*}
Applying Equations \eqref{fiboinv}, \eqref{fiboinv2} and that $\ln(\varphi)>0$, we obtain
\begin{align*}
\frac{d \alpha}{dk} &< 2\ln(\varphi) - 2\ln(\varphi) = 0.
\end{align*}
Since $\alpha(k)$ is strictly decreasing, it is maximized at the minimum value of $k$. Thus $\alpha(k)$, and hence $f(k)$, is maximized when $k=2$ and, by symmetry, when $k=n-1$.
\end{proof}
Applying Lemmas \ref{countInd} and \ref{FiboMaximum} gives us the following corollary.
\begin{cor}
\label{cor:nmax}
$|\mathcal{I}^{(n)}_{p_k}(P_n^*)|$ is maximized when $k=2$ or $k=n-1$.
\end{cor}
We are now ready to prove our second main result: that $\mathcal{I}_{p_2}^{(r)}(P_n^*)$ and $\mathcal{I}_{p_{n-1}}^{(r)}(P_n^*)$ are each $r$-stars of maximum size for all $n$ and $r\leq n$.
\begin{proof}[Proof of Theorem \ref{thetheorem}]
By Lemma \ref{injections} it suffices to show $|\mathcal{I}^{(r)}_{p_i}(P_n^*)|\leq|\mathcal{I}^{(r)}_{p_2}(P_n^*)|$ for all $i$. We proceed by induction on $r$ and begin by establishing two base cases.
When $r=1$, it is clear that $|\mathcal{I}_{p_i}^{(1)}(P_n^*)|= 1$ for all $i$. For $r=2$, all independent $2$-sets containing $p_i$ are of the form $\{p_i,p_j\}$ or $\{p_i,x_j\}$ for $j\neq i$, so $|\mathcal{I}^{(2)}_{p_i}(P_n^*)| = 2n-2$ for all $i$. Thus the desired inequality holds for $r=1$ and $r=2$.
Fix $i$ and $3\leq k\leq n$, and assume that for all $r<k$,
$|\mathcal{I}_{p_i}^{(r)}(P_n^*)|\leq |\mathcal{I}_{p_2}^{(r)}(P_n^*)|$. We further induct on the number of vertices. By Corollary \ref{cor:nmax}, the base case $|\mathcal{I}_{p_i}^{(k)}(P_k^*)| \leq |\mathcal{I}_{p_2}^{(k)}(P_k^*)|$ holds. Now fix $l>k$ and suppose that for all $r$ and $j$ with $2\leq r< k \leq j <l$, the inequality $|\mathcal{I}_{p_i}^{(r)}(P_{j}^*)| \leq |\mathcal{I}_{p_2}^{(r)}(P_{j}^*)|$ holds.
Taking $r=k-1$ and $j=l-1$ we have $|\mathcal{I}_{p_i}^{(k-1)}(P_{l-1}^*)| \leq |\mathcal{I}_{p_2}^{(k-1)}(P_{l-1}^*)|$, hence
\begin{align*}|\mathcal{I}_{p_i}^{(k-1)}(P_{l-1}^*)| + |\mathcal{I}_{p_i}^{(k)}(P_{l-1}^*)| \\\leq |\mathcal{I}_{p_2}^{(k-1)}(P_{l-1}^*)| + |\mathcal{I}_{p_2}^{(k)}(P_{l-1}^*)|.\end{align*}
Similarly, $ |\mathcal{I}_{p_i}^{(k-1)}(P_{l-2}^*)| \leq |\mathcal{I}_{p_2}^{(k-1)}(P_{l-2}^*)|$ which gives
\[|\mathcal{I}_{p_i}^{(k-1)}(P_{l-2}^*)| + |\mathcal{I}_{p_i}^{(k-1)}(P_{l-1}^*)| + |\mathcal{I}_{p_i}^{(k)}(P_{l-1}^*)| \leq |\mathcal{I}_{p_2}^{(k-1)}(P_{l-2}^*)| + |\mathcal{I}_{p_2}^{(k-1)}(P_{l-1}^*)| + |\mathcal{I}_{p_2}^{(k)}(P_{l-1}^*)|.\]
Applying the inductive hypothesis once more, we have $|\mathcal{I}_{p_i}^{(k-2)}(P_{l-2}^*)| \leq |\mathcal{I}_{p_2}^{(k-2)}(P_{l-2}^*)|$, hence
\begin{align*}|\mathcal{I}_{p_i}^{(k-2)}(P_{l-2}^*)| + |\mathcal{I}_{p_i}^{(k-1)}(P_{l-2}^*)| + |\mathcal{I}_{p_i}^{(k-1)}(P_{l-1}^*)| + |\mathcal{I}_{p_i}^{(k)}(P_{l-1}^*)| \\\leq |\mathcal{I}_{p_2}^{(k-2)}(P_{l-2}^*)| + |\mathcal{I}_{p_2}^{(k-1)}(P_{l-2}^*)| + |\mathcal{I}_{p_2}^{(k-1)}(P_{l-1}^*)| + |\mathcal{I}_{p_2}^{(k)}(P_{l-1}^*)|.\end{align*}
Since $k \geq 3$ and $l > k$, Equation \eqref{bigrecrelation} (combined with Remark \ref{symmetry} when $i\in\{l-1,l\}$) yields
\[|\mathcal{I}_{p_i}^{(k)}(P_{l}^*)| \leq |\mathcal{I}_{p_2}^{(k)}(P_l^*)|.\]
\end{proof}
Now that we have identified the maximum $r$-stars of a pendant path graph, we investigate for which values of $r$ the graph $P_n^*$ is $r$-EKR. In doing so, we find that $P^*_n$ is not $n$-EKR when $n\geq 4$.
\begin{lemma}
\label{notNEKR}
For all $n\geq 4$, $P^*_n$ is not $n$-EKR.
\end{lemma}
\begin{proof}
Recall that an independent $n$-set $S \in \mathcal{I}^{(n)}(P_n^*)$ contains exactly one of either $p_i$ or $x_i$ for all $i$.
Let $A \in \mathcal{I}^{(n)}(P_n^*)$ be the independent set that contains $x_i$ if $i$ is odd, and $p_i$ if $i$ is even. Then the complement of $A$, denoted $A^{C}$, is the independent set containing $x_i$ if $i$ is even, and $p_i$ if $i$ is odd. Any intersecting family of independent $n$-sets cannot contain both $A$ and $A^C$, so the largest such family has size at most $|\mathcal{I}^{(n)}(P_n^*)|-1$. We show that this bound is tight.
Define the family of independent $n$-sets $\mathcal{B} = \mathcal{I}^{(n)}(P_n^*) \setminus \{A^{C}\}$. We claim $\mathcal{B}$ is intersecting.
Suppose $S,T\in\mathcal{B}$ are disjoint. Since each of $S$ and $T$ contains exactly one vertex from each base-pendant pair, disjointness forces $T$ to take the opposite choice in every pair, that is, $T=S^C$. But then the indices of the base vertices of $S$ and of $T$ partition $\{1,\ldots,n\}$ into two sets, each independent in the underlying path, and since a connected bipartite graph has a unique bipartition, $\{S,T\}=\{A,A^C\}$. As $A^C\not\in\mathcal{B}$, the family $\mathcal{B}$ is intersecting. By Lemma \ref{fibo}, the largest intersecting family of independent $n$-sets therefore has size $F(n)-1$.
Finally, by Lemma \ref{countInd} we have $|\mathcal{I}^{(n)}_{p_2}(P_n^*)| = F(1)F(n-2) = 2F(n-2)$.
Since $F(n)-1 > 2F(n-2)$ when $n\geq 4$, we conclude that the pendant path graph $P^*_n$ is not $n$-EKR.
\end{proof}
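To illustrate with the smallest case, take $n=4$: the intersecting family $\mathcal{B}$ has size
\[F(4)-1 = 8-1 = 7,\]
whereas a largest $4$-star has size $|\mathcal{I}^{(4)}_{p_2}(P_4^*)| = 2F(2) = 6$, so already here the star bound fails.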
\section{Future work}\label{sec:futurework}
It is important to note that although the pendant path graph is similar in many ways to the ladder graph, the ladder graph was proven to be $n$-EKR in \cite{LadderGraph}, while we have seen here that the pendant path graph is not. Nevertheless, we still believe that Holroyd and Talbot's conjecture holds for pendant path graphs, and we state it below.
\begin{conjecture}
The pendant path graph $P^*_n$ is $r$-EKR for $n \geq 2r$.
\end{conjecture}
Although the standard compression method seems to fail for the pendant path graph, another technique may be suitable such as the cycle method of \cite{cycle_method}. Furthermore, beyond the pendant path graph there is certainly much left to investigate in regards to the Erd\H{o}s-Ko-Rado property of pendant graphs.
\end{document}
\begin{document}
\title
{Nonlinear Orbital Stability for Planar Vortex Patches}
\author{Daomin Cao, Guodong Wang, Jie Wan}
\address{Institute of Applied Mathematics, Chinese Academy of Sciences, Beijing 100190, and University of Chinese Academy of Sciences, Beijing 100049, P.R. China}
\email{[email protected]}
\address{Institute of Applied Mathematics, Chinese Academy of Sciences, Beijing 100190, and University of Chinese Academy of Sciences, Beijing 100049, P.R. China}
\email{[email protected]}
\address{Institute of Applied Mathematics, Chinese Academy of Sciences, Beijing 100190, and University of Chinese Academy of Sciences, Beijing 100049, P.R. China}
\email{[email protected]}
\begin{abstract}
In this paper, we prove nonlinear orbital stability for steady vortex patches that maximize the kinetic energy among isovortical rearrangements in a planar bounded domain. As a result, nonlinear stability for an isolated vortex patch is proved. The proof is based on conservation of energy and vorticity, which is an analogue of the classical Liapunov function method.
\end{abstract}
\maketitle
\section{Introduction}
This paper proves that the set of vortex patches obtained as maximizers of the kinetic energy on an isovortical surface (a set of functions with the same distribution function) is orbitally stable for the incompressible Euler equations in a planar bounded domain. Here orbital stability means that if the flow is close to a maximizer at the initial time, then it remains close to the set of maximizers for all later times. As a consequence of orbital stability, we obtain stability for isolated maximizers. The key point of the proof is that for an ideal fluid the vorticity evolves on an isovortical surface while the kinetic energy is conserved.
In \cite{T}, steady vortex patches were constructed by maximizing the kinetic energy subject to certain constraints on the vorticity. Burton \cite{B2,B4} considered more general cases: he constructed various steady vortex flows by maximizing the kinetic energy on a rearrangement class, which includes the vortex patch solutions of \cite{T} as a special case. An interesting open problem is the stability of these vortex patches. For a single concentrated vortex patch, stability was proved in \cite{CW}, where local uniqueness played an essential role.
For vortex patches that are not sufficiently concentrated, however, uniqueness is still an open problem, and the method in \cite{CW} no longer applies. In this paper, we instead prove orbital stability for the set of maximizers. The results are stated precisely in Section 2.
For certain domains, there may be no isolated maximizer. For example, in an annular domain both the functional and the constraint are invariant under rotations, so the set of maximizers is also rotation-invariant. This is why we consider orbital stability here. However, if there is an isolated maximizer, we can prove its stability; see Theorem \ref{101} below.
The study of stability for steady Euler flows has a long history; here we mention some of the most relevant results. In \cite{K}, Kelvin proved linear stability for circular vortex patches in $\mathbb R^2$. Later, Love \cite{Lo} proved linear stability for the rotating Kirchhoff elliptical vortex patch. In \cite{A,A2}, Arnold first considered nonlinear stability for smooth steady Euler flows. Moreover, he introduced the idea that a steady planar Euler flow can be seen as a critical point of the energy on a constraint surface, and that stability can be obtained from a suitable non-degeneracy condition at this critical point. In 1985, by establishing a relative variational principle for the energy, Wan and Pulvirenti \cite{WP} proved nonlinear stability for circular vortex patches in an open disk. For general bounded domains, Burton \cite{B3} proved nonlinear stability for steady vortex flows that are strict local maximizers of the energy on a rearrangement class. A similar idea was used to prove nonlinear orbital stability for vortex pairs in the whole plane in \cite{B5}. This paper is mostly inspired by \cite{B3} and \cite{B5}.
The main difficulty in proving orbital stability is to obtain compactness for a particular weakly convergent sequence. In \cite{B5}, compactness was proved by a concentration-compactness argument. Here, for vortex patches in a bounded domain, the proof is relatively simple. In fact, we can prove that any maximizing sequence is compact in the $L^p$ norm. The key point is that the weak limit of any maximizing sequence must be a vortex patch, which excludes oscillation and ensures compactness.
Our result also gives a short proof of the stability theorem proved in \cite{B3} for vortex patches and includes the result in \cite{WP} as a special case.
\section{Main Results}
In this section, we state the main result. To begin with, we recall some known facts about the 2-D Euler equations.
Throughout this paper we assume that $D$ is a bounded domain (not necessarily simply connected) with smooth boundary, and $G$ is the Green function of $-\Delta$ in $D$ with zero boundary condition.
We consider the motion of an ideal fluid in $D$. The governing equations are the following incompressible Euler system
\begin{equation}\label{2}
\begin{cases}
\nabla\cdot\mathbf{v}=0 &\text{in $D$},
\\ \partial_t\mathbf{v}+(\mathbf{v}\cdot\nabla)\mathbf{v}=-\nabla P &\text{in $D$},\\
\mathbf{v}(x,0)=\mathbf{v}_0(x) &\text{in $D$},
\\ \mathbf{v}\cdot \vec{n}=0 &\text{on $\partial D$},
\end{cases}
\end{equation}
where $\mathbf{v}$ is the velocity field, $P$ is the pressure, $\mathbf{v}_0(x)$ is the initial velocity, and $\vec{n}$ is the outward unit normal of $\partial D$. Here we impose the impermeability boundary condition $\mathbf{v}\cdot \vec{n}=0$.
We define the vorticity function $\omega = \partial_1 v_2-\partial_2v_1$. Using the identity $\frac{1}{2}\nabla|\mathbf{v}|^2=(\mathbf{v}\cdot\nabla)\mathbf{v}+J\mathbf{v}\omega$,
the second equation of $\eqref{2}$ becomes
\begin{equation}\label{3}
\partial_t\mathbf{v}+\nabla(\frac{1}{2}|\mathbf{v}|^2+P)-J\mathbf{v}\omega=0,
\end{equation}
where $J(v_1,v_2)=(v_2,-v_1)$ denotes clockwise rotation through $\frac{\pi}{2}$. Taking the curl in $\eqref{3}$ gives
\begin{equation}\label{499}
\partial_t\omega+\mathbf{v}\cdot\nabla\omega=0.
\end{equation}
By the divergence-free condition $\nabla\cdot\mathbf{v}=0$ and the boundary condition $\mathbf{v}\cdot \vec{n}=0$, $\mathbf{v}$ can be written as
\begin{equation}
\mathbf{v}=J\nabla\psi
\end{equation}
for some function $\psi$ called the stream function (see \cite{MPu}, Chapter 1, Theorem 2.2). Obviously $\psi$ satisfies
\begin{equation}\label{788}
\begin{cases}
-\Delta \psi=\omega\text{ \quad \,\quad in $D$,}\\
\psi= \text{constant} \text{\quad on $\partial D$.}
\end{cases}
\end{equation}
Note that for multiply connected domains, $\psi$ is uniquely determined by \eqref{788} only if the circulations on the boundary components are prescribed. In this paper, we assume that the stream function vanishes on $\partial D$, i.e.,
\begin{equation}
\psi(x)=\int_DG(x,y)\omega(y)dy.
\end{equation}
Using the notation
$\partial(\psi,\omega)\triangleq\partial_1\psi\partial_2\omega-\partial_2\psi\partial_1\omega$, $\eqref{499}$ can be written as
\begin{equation}\label{599}
\partial_t\omega+\partial(\omega,\psi)=0.
\end{equation}
Integrating by parts gives the following weak form of $\eqref{599}$:
\begin{equation}\label{997}
\int_D\omega(x,0)\xi(x,0)dx+\int_0^{+\infty}\int_D\omega(\partial_t\xi+\partial(\xi,\psi))dxdt=0
\end{equation}
for all $\xi\in C_0^{\infty}(D\times[0,+\infty))$.
According to Yudovich \cite{Y}, for any initial vorticity $\omega(x,0)\in L^{\infty}(D)$ there is a unique solution to $\eqref{997}$ and
$\omega(x,t)\in L^{\infty}(D\times(0,+\infty))\cap C([0,+\infty);L^p(D)), \forall \,\,p\in [1,+\infty)$. Moreover, $\omega(x,t)\in R_{\omega_0}$ for all $t\geq 0$. Here $R_{\omega}$ denotes the rearrangement class of a given function $\omega$, that is,
\begin{equation}
R_\omega\triangleq\{ v \,:\, |\{v>a\}|=|\{\omega>a\}|\,\,\forall a\in \mathbb R^1\},
\end{equation}
where $|A|$ denotes the area of a set $A\subset\mathbb R^2$. For convenience, we also write $\omega(x,t)$ as $\omega_t(x)$.
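For example, if $\omega=\lambda I_A$ is a vortex patch, then $R_\omega$ consists exactly of the patches $\lambda I_B$ with $|B|=|A|$, since any $v\in R_\omega$ must satisfy
\[|\{v>a\}|=|A| \text{ for } 0\leq a<\lambda, \qquad |\{v>a\}|=0 \text{ for } a\geq\lambda,\]
which forces $v=\lambda I_B$ a.e.\ for some measurable $B$ with $|B|=|A|$.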
If $\omega$ is a solution of \eqref{997} that is independent of $t$, it is called steady. In this paper, we consider steady vortex patch solutions of the form $\omega=\lambda I_A$, where $I_{A}$ denotes the characteristic function of a measurable set $A$, i.e., $I_A(x)=1$ in $A$ and $I_A=0$ elsewhere. It is easy to see that $\omega$ is steady if and only if
\begin{equation}\label{888}
\int_D\omega\partial(\xi,\psi) dx=0,\text{ for all $\xi\in C^\infty_0(D).$}
\end{equation}
There are many ways to construct steady vortex patches. Here we consider the construction in \cite{T}. Define
\[K\triangleq\{\omega\in L^{\infty}(D)\,:\,0\leq \omega\leq \lambda,\,\int_{D}\omega(x)dx=1\},\]
where $\lambda$ is any given positive constant. For any $\omega\in K$, the kinetic energy of $\omega$ is defined by
\begin{equation}
E(\omega)\triangleq\frac{1}{2}\int_D \int_D G(x,y)\omega(x)\omega(y)dxdy.
\end{equation}
For planar vortex flow the kinetic energy is conserved, i.e., if $\omega_t\in L^\infty(D)$ is a solution to \eqref{997} with initial vorticity $\omega_0$, then $E(\omega_t)=E(\omega_0)$ for all $t\geq0$, see Theorem 14 in \cite{B3} for a detailed proof. In this paper, we use $E$ as the Liapunov function for the Euler dynamical system to obtain stability.
\begin{theorem}[Turkington \cite{T}]\label{51}
$E$ attains its maximum on $K$ and any maximizer is a steady vortex patch.
\end{theorem}
For the reader's convenience and for completeness, we give the proof of Theorem \ref{51} in Section 3.
In the sequel, we denote by $M$ the set of all maximizers, and define
\begin{equation}
R\triangleq\{\omega \,:\, \omega=\lambda I_A,\ \lambda|A|=1\}.
\end{equation}
By Theorem \ref{51}, any maximizer is a vortex patch, so $M\subset R$.
\begin{remark}
It is easy to verify that $\sup_{\omega\in K}E(\omega)=\sup_{\omega\in R}E(\omega)$, which means that $M$ is in fact the set of maximizers of $E$ on the rearrangement class $R$. Variational problems on rearrangement classes have been considered by Burton; see \cite{B2}, \cite{B4} for example.
\end{remark}
Our purpose in this paper is to prove the orbital stability of $M$.
\begin{theorem}\label{666}
$M$ is orbitally stable. More precisely, for any $\varepsilon>0$ there exists $\delta>0$ such that for any $\omega_0\in K$ with $dist(\omega_0,M)<\delta$, we have $dist(\omega_t,M)<\varepsilon$ for all $t\geq0$. Here the distance is taken in the $L^p$ norm for any fixed $1\leq p<+\infty$, and $\omega_t$ is the solution to \eqref{997} with initial vorticity $\omega_0$.
\end{theorem}
\begin{remark}
In \cite{CW,MPu,Ta}, the perturbed vorticity $\omega_0$ is restricted to the isovortical surface $R$; here we extend the perturbation set to $K$.
\end{remark}
For certain domains, the maximizers of $E$ may not be isolated in $L^p(D)$. For example, when $D$ is an annulus, i.e., $D=B_R(x_0)\setminus \overline{B_r(x_0)}$ for some $x_0\in \mathbb R^2$ and $0<r<R<+\infty$, the rotation of any maximizer about $x_0$ is still a maximizer. But once there is an isolated maximizer, we can prove its stability.
\begin{theorem}\label{101}
Assume that $\omega_\lambda$ is an isolated maximizer of $E$ on $K$, i.e., there exists $\delta_0>0$ such that for any $\omega\in K$ with $0<dist(\omega,\omega_\lambda)<\delta_0$, we have $E(\omega)<E(\omega_\lambda)$. Then $\omega_\lambda$ is stable. More precisely, for any $\varepsilon>0$ there exists $\delta>0$ such that for any $\omega_0\in K$ with $dist(\omega_\lambda,\omega_0)<\delta$, we have $dist(\omega_\lambda,\omega_t)<\varepsilon$ for all $t\geq0$. Here the distance is taken in the $L^p$ norm for any fixed $1\leq p<+\infty$, and $\omega_t$ is the solution to \eqref{997} with initial vorticity $\omega_0$.
\end{theorem}
\begin{remark}
In \cite{B3}, a more general stability theorem for isolated maximizers was proved; here we give a different and shorter proof in the case of vortex patches.
\end{remark}
\begin{remark}
In some cases the maximizer is unique and thus isolated. For example, when $D$ is a convex domain, there is a unique maximizer provided $\lambda$ is large enough, see \cite{CGPY}; and when $D$ is an open disc, for each $\lambda$ there is a unique maximizer, namely the circular vortex patch concentric with $D$ with radius $\frac{1}{\sqrt{\lambda\pi}}$, see \cite{BM}, Theorem 3.1.
\end{remark}
\section{Proofs}
In this section we prove the main results.
\begin{proof}[Proof of Theorem \ref{51}]
Step 1: $E$ attains its maximum. Notice that $G(x,y)\in L^{1}(D\times D)$, thus for any $\omega\in K$,
\[E(\omega)=\frac{1}{2}\int_D\int_DG(x,y)\omega(x)\omega(y)dxdy\leq \frac{1}{2}\lambda^2\int_D\int_D|G(x,y)|dxdy\leq C\lambda^2,
\]
where $C$ is a positive number depending only on $D$, which means that $E$ is bounded from above on $K$. Let $\{\omega^n\}\subset K$ be a maximizing sequence. Since $K$ is bounded in $L^\infty(D)$, it is sequentially compact in the weak-star topology of $L^\infty(D)$. Without loss of generality we assume that $\omega^n\rightarrow \omega^*$ weakly star in $L^\infty(D)$ for some $\omega^*\in L^\infty(D)$ as $n\rightarrow +\infty$.
We claim that $\omega^*\in K$. In fact, $\omega^n\rightarrow \omega^*$ weakly star in $L^\infty(D)$ means
\[\lim_{n\rightarrow +\infty}\int_D\omega^n\phi=\int_D\omega^*\phi\]
for any $\phi\in L^1(D)$. Choosing $\phi\equiv1$, we have
\[\lim_{n\rightarrow +\infty}\int_D\omega^n=\int_D\omega^*=1.\]
Now we prove $0\leq \omega^*\leq\lambda$ by contradiction. Suppose that $|\{\omega^*>\lambda\}|>0$; then there exists $\varepsilon_0>0$ such that $|\{\omega^*\geq\lambda+\varepsilon_0\}|>0$. Denote $A=\{\omega^*\geq\lambda+\varepsilon_0\}$; then for $\phi=I_A$ we have
\[0=\lim_{n\rightarrow +\infty}\int_D(\omega^*-\omega^n)\phi=\lim_{n\rightarrow +\infty}\int_{A}(\omega^*-\omega^n).\]
On the other hand,
\[\lim_{n\rightarrow +\infty}\int_{A}(\omega^*-\omega^n)\geq\varepsilon_0|A|>0,\]
which is a contradiction. So we have $\omega^*\leq \lambda$. Similarly we can prove $\omega^*\geq 0$.
Finally, since $G(x,y)\in L^1(D\times D)$, we have $\lim_{n\rightarrow +\infty} E(\omega^n)=E(\omega^*)$, so $\omega^*$ is a maximizer of $E$.
Step 2: Any maximizer satisfies \eqref{888}. For any $\xi\in C^{\infty}_0(D)$, we define a family of transformations $\Phi_t(x)$ from $D$ to $D$ by the following equations,
\begin{equation}\label{400}
\begin{cases}\frac{d\Phi_t(x)}{dt}=J\nabla\xi(\Phi_t(x)),\,\,\,t\in\mathbb R^1, \\
\Phi_0(x)=x.
\end{cases}
\end{equation}
Since $J\nabla\xi$ is a smooth vector field with compact support, $\eqref{400}$ is solvable for all $t\in \mathbb R^1$. It is easy to verify that $J\nabla\xi$ is divergence-free, so by the Liouville theorem (see \cite{MPu}, Appendix 1.1) $\Phi_t$ is area-preserving. Let $\omega^*$ be any maximizer and define a family of test functions
\begin{equation}
\omega^t(x)\triangleq\omega^*(\Phi_t(x)).
\end{equation}
Obviously $\omega^t\in K$, so $\frac{dE(\omega^t)}{dt}|_{t=0}=0$.
Expanding $E(\omega^t)$ at $t=0$ gives
\[\begin{split}
E(\omega^t)=&\frac{1}{2}\int_D\int_DG(x,y)\omega^*(\Phi_t(x))\omega^*(\Phi_t(y))dxdy\\
=&\frac{1}{2}\int_D\int_DG(\Phi_{-t}(x),\Phi_{-t}(y))\omega^*(x)\omega^*(y)dxdy\\
=&E(\omega^*)+t\int_D\omega^*\partial(\psi^*,\xi)+o(t),
\end{split}\]
as $t\rightarrow 0$, where $\psi^*$ is the stream function. So we have
\[\int_D\omega^*\partial(\psi^*,\xi)=0.\]
Step 3: Any maximizer is a vortex patch. Let $\omega^*$ be any maximizer and define a family of test functions $\omega^s(x)=\omega^*+s[z_0(x)-z_1(x)]$, $s>0$, where $z_0,z_1$ satisfies
\begin{equation}
\begin{cases}
z_0,z_1\in L^\infty(D),
\\ \int_Dz_0=\int_D z_1,
\\ z_0,z_1\geq 0,
\\ z_0=0 \text{ in } D\setminus\{\omega^*\leq\lambda-\delta\},
\\ z_1=0 \text{ in } D\setminus\{\omega^*\geq\delta\}.
\end{cases}
\end{equation}
Here $\delta$ is any positive number. Note that for fixed $z_0,z_1$ and $\delta$, $\omega^s\in K$ provided $s$ is sufficiently small. So we have
\[0\geq\frac{dE(\omega^s)}{ds}|_{s=0^+}=\int_Dz_0\psi^*-\int_Dz_1\psi^*,\]
which gives
\[\sup_{\{\omega^*<\lambda\}}\psi^*\leq\inf_{\{\omega^*>0\}}\psi^*.\]
Since $D$ is connected and $\overline{\{\omega^*<\lambda\}}\cup\overline{\{\omega^*>0\}}=D$, we have $\overline{\{\omega^*<\lambda\}}\cap\overline{\{\omega^*>0\}}\neq\varnothing$; then by continuity of $\psi^*$,
\[\sup_{\{\omega^*<\lambda\}}\psi^*=\inf_{\{\omega^*>0\}}\psi^*.\]
Now define \[\mu\triangleq\sup_{\{\omega^*<\lambda\}}\psi^*=\inf_{\{\omega^*>0\}}\psi^*;\] we have
\begin{equation}
\begin{cases}
\omega^*=0 \quad\text{a.e. in } \{\psi^*<\mu\},
\\ \omega^*=\lambda \quad\text{a.e. in } \{\psi^*>\mu\}.
\end{cases}
\end{equation}
On $\{\psi^*=\mu\}$, we have $\nabla\psi^*=0$ a.e., which gives $\omega^*=-\triangle \psi^*=0$. That is,
\begin{equation}
\begin{cases}
\omega^*=0 \quad\text{a.e. in } \{\psi^*\leq\mu\},
\\ \omega^*=\lambda \quad\text{a.e. in } \{\psi^*>\mu\},
\end{cases}
\end{equation}
so $\omega^*$ is a vortex patch.
\end{proof}
Now we turn to the proof of Theorem \ref{666}. The key point is compactness. Generally speaking, for a weakly convergent sequence in $K$, strong convergence may fail because of oscillation; but here, for a maximizing sequence, we can prove that the weak limit is a vortex patch, which will be used to exclude oscillation and obtain compactness.
In the sequel, $p\in[1,+\infty)$ is fixed, and $|f|_p$ denotes the $L^p$ norm of a function $f$.
\begin{proof}[Proof of Theorem \ref{666}]
We argue by contradiction. Suppose that there exist $\varepsilon_0>0$, $\{\omega^n_0\}\subset K$, and $\{t_n\}\subset \mathbb{R}^+$ such that
\begin{equation}\label{10}
dist(\omega^n_0,M)\rightarrow 0,
\end{equation}
and
\begin{equation}\label{11}
dist(\omega^n_{t_n},M)\geq\varepsilon_0,
\end{equation}
for any $n$, where $\omega^n_{t_n}$ is the solution to \eqref{997} at time $t_n$ with initial vorticity $\omega^n_0$. By vorticity conservation (see \cite{MPu}, Chapter 1), $\omega^n_{t_n}$ has the same distribution function as $\omega^n_0$ (i.e., $\omega^n_{t_n}\in R_{\omega^n_0}$), so $\omega^n_{t_n}\in K$.
From \eqref{10}, we can choose $\{v^n\}\subset M$ such that
\begin{equation}
|\omega^n_0-v^n|_p\rightarrow 0.
\end{equation}
We claim that $\{\omega^n_0\}$ is an energy maximizing sequence for $E$ on $K$. In fact,
\begin{equation}\label{19}
\begin{split}
E(\omega^n_0)-E(v^n)=&\frac{1}{2}\int_D\int_DG(x,y)\left[\omega^n_0(x)\omega^n_0(y)-v^n(x)v^n(y)\right]\\
=&\frac{1}{2}\int_D\int_DG(x,y)\left[\omega^n_0(x)\omega^n_0(y)-v^n(x)\omega^n_0(y)+v^n(x)\omega^n_0(y)-v^n(x)v^n(y)\right]\\
=&\frac{1}{2}\int_D\int_DG(x,y)\omega^n_0(y)\left[\omega^n_0(x)-v^n(x)\right]+\frac{1}{2}\int_D\int_DG(x,y)v^n(x)\left[\omega^n_0(y)-v^n(y)\right]\\
=&\frac{1}{2}\int_D\xi^n(x)\left[\omega^n_0(x)-v^n(x)\right]+\frac{1}{2}\int_D\zeta^n(y)\left[\omega^n_0(y)-v^n(y)\right]
\end{split}
\end{equation}
where $\xi^n(x)=\int_DG(x,y)\omega^n_0(y)$ and $\zeta^n(y)=\int_DG(x,y)v^n(x)$. Since $\{\omega^n_0\}$ and $\{v^n\}$ are both bounded in $L^{\infty}(D)$, by $L^p$ estimates $\{\xi^n\}$ and $\{\zeta^n\}$ are bounded in $W^{2,r}(D)$ for any $r\in[1,+\infty)$, and thus bounded in $L^{\infty}(D)$. Combining this with \eqref{19}, we have \begin{equation}E(\omega^n_0)=E(v^n)+o(1)\end{equation} as $n\rightarrow +\infty$, which means that $\{\omega^n_0\}$ is an energy maximizing sequence.
By energy conservation we have
\begin{equation}
E(\omega^n_0)=E(\omega^n_{t_n}),
\end{equation}
so $\{\omega^n_{t_n}\}$ is also an energy maximizing sequence.
For convenience, write $u_n\triangleq \omega^n_{t_n}$. Now fix $q$ with $1\leq p<q<+\infty$. Since $u_n\in K$, the sequence $\{u_n\}$ is bounded in $L^q(D)$. Without loss of generality, we assume that $u_n\rightarrow u$ weakly in $L^q(D)$.
\textit{Claim:} $u\in K$ and $u$ is an energy maximizer of $E$ on $K$.
\textit{Proof of the Claim:} Firstly, $u_n\rightarrow u$ weakly in $L^q(D)$ implies
\[\lim_{n\rightarrow +\infty}\int_Du_n\phi=\int_Du\phi\]
for any $\phi\in L^{q^*}(D)$, where $q^*=\frac{q}{q-1}$. By choosing $\phi\equiv 1$ we have
\[1=\lim_{n\rightarrow +\infty}\int_Du_n=\int_Du.\]
Now we prove $u\leq \lambda$ by contradiction. Suppose that $|\{u>\lambda\}|>0$; then there exists $\varepsilon_1>0$ such that $|\{u>\lambda+\varepsilon_1\}|>0$. Denote $A=\{u>\lambda+\varepsilon_1\}$; then for $\phi=I_{A}$ weak convergence implies
\[\lim_{n\rightarrow +\infty}\int_D(u-u_n)\phi=0,\]
but on the other hand
\[\lim_{n\rightarrow +\infty}\int_D(u-u_n)\phi=\lim_{n\rightarrow +\infty}\int_{A}(u-u_n)\geq|A|\varepsilon_1>0,\]
which is a contradiction. A similar argument gives $u\geq 0$. Finally, since $G\in L^1(D\times D)$, we have $\lim_{n\rightarrow+\infty}E(u_n)=E(u)$, which means $u$ is an energy maximizer on $K$. Thus the claim is proved.
From the claim, $u\in M$; thus by \eqref{11}
\begin{equation}\label{20}
|u-u_n|_p\geq\varepsilon_0
\end{equation}
for any $n$.
According to Theorem \ref{51}, any maximizer of $E$ on $K$ must be a vortex patch, so
\begin{equation}
\int_D |u|^q=\lambda^{q-1}.
\end{equation}
Now we show that $\lim_{n\rightarrow+\infty}\int_D|u_n|^q= \int_D|u|^q.$ On one hand, by weak lower semi-continuity of the $L^q$ norm,
\begin{equation}
\lambda^{q-1}=\int_D|u|^q\leq\varliminf_{n\rightarrow+\infty}\int_D|u_n|^q=\varliminf_{n\rightarrow+\infty}\int_D|\omega^n_0|^q,
\end{equation}
on the other hand, $\omega^n_0\in K$ gives
\begin{equation}
\int_D|\omega^n_0|^q=\int_D|\omega^n_0|^{q-1}\omega^n_0\leq\lambda^{q-1}\int_D\omega^n_0=\lambda^{q-1},
\end{equation}
so
\begin{equation}\label{61}
\lim_{n\rightarrow+\infty}\int_D|u_n|^q= \int_D|u|^q.
\end{equation}
That is, $u_n\rightarrow u$ weakly in $L^q(D)$ and $\int_D |u_n|^q\rightarrow\int_D |u|^q$; then immediately $u_n\rightarrow u$ in $L^q(D)$ by uniform convexity of the $L^q$ norm (recall that $q$ is chosen such that $1\leq p<q<+\infty$). By the H\"{o}lder inequality we have $u_n\rightarrow u$ in $L^p(D)$, which contradicts \eqref{20}.
\end{proof}
\begin{proof}[Proof of Theorem \ref{101}]
Denote $N=M\setminus\{\omega_\lambda\}$; then $dist(\omega_\lambda,N)\geq\delta_0.$ By orbital stability, for any $\varepsilon$ with $0<\varepsilon<\frac{\delta_0}{4}$, there exists $\delta>0$, $\delta<\frac{\delta_0}{2}$, such that for any $\omega_0\in K$ with $dist(\omega_0,\omega_\lambda)<\delta$, we have $dist(\omega_t, M)<\varepsilon$ for all $t\geq0$. Hence
\begin{equation}\label{81}
\min\{dist(\omega_t, \omega_\lambda),dist(\omega_t, N)\}\leq \varepsilon
\end{equation}
for all $t\geq0$. We claim that
\begin{equation}\label{85}
dist(\omega_t, N)>\varepsilon
\end{equation}
for all $t\geq0$. In fact, suppose that there is $t_1\geq0$ such that $dist(\omega_{t_1}, N)\leq\varepsilon $, then
\begin{equation}
\varepsilon\geq dist(\omega_{t_1}, N)\geq dist(\omega_\lambda,N)-dist(\omega_\lambda,\omega_{t_1})\geq \delta_0-dist(\omega_\lambda,\omega_{t_1}),
\end{equation}
since $\varepsilon<\frac{\delta_0}{4}$, we have
\begin{equation}
dist(\omega_{t_1},\omega_\lambda)>\frac{3}{4}\delta_0.
\end{equation}
That is, $dist(\omega_0,\omega_\lambda)<\delta<\frac{\delta_0}{2}$ and $dist(\omega_{t_1},\omega_\lambda)>\frac{3}{4}\delta_0$; by continuity (recall that $\omega_t\in C([0,+\infty);L^p(D))$ for all $p\in[1,+\infty)$) there exists $t_2$ such that
\begin{equation}\label{82}
dist(\omega_{t_2},\omega_\lambda)=\frac{\delta_0}{2}>\varepsilon,
\end{equation}
thus
\begin{equation}\lambdabel{83}
dist(\omega_{t_2},N)\geq dist(\omega_\lambda,N)-dist(\omega_{t_2},\omega_\lambda)\geq \frac{\delta_0}{2}>\varepsilon.
\end{equation}
Combining \eqref{81}, \eqref{82} and \eqref{83}, we get a contradiction. Now \eqref{81} and \eqref{85} give
\begin{equation}
dist(\omega_t, \omega_\lambda)\leq \varepsilon
\end{equation}
for all $t\geq0$ provided $dist(\omega_0,\omega_\lambda)<\delta$, which is the desired result.
\end{proof}
\end{document}
\begin{document}
\title{Equation of motion for process matrix: Hamiltonian identification and dynamical control of open quantum systems}
\author{M. Mohseni}
\affiliation{Research Laboratory of Electronics, Massachusetts
Institute of Technology, 77 Massachusetts Ave., Cambridge, MA 02139,
USA}
\author{A. T. Rezakhani}
\affiliation{Department of Chemistry and Center for Quantum Information Science and Technology, University of Southern
California, Los Angeles, CA 90089, USA}
\begin{abstract}
We develop a general approach for monitoring and controlling
evolution of open quantum systems. In contrast to the master
equations describing time evolution of density operators, here, we
formulate a dynamical equation for the evolution of the process matrix
acting on a system. This equation is applicable to non-Markovian
and/or strong coupling regimes. We propose two distinct applications
for this dynamical equation. We first demonstrate identification of
quantum Hamiltonians generating dynamics of closed or open systems
via performing process tomography. In particular, we argue how one
can efficiently estimate certain classes of sparse Hamiltonians by
performing partial tomography schemes. In addition, we introduce a
novel optimal control theoretic setting for manipulating quantum
dynamics of Hamiltonian systems, specifically for the task of
decoherence suppression.
\end{abstract}
\pacs{03.65.Wj, 03.67.-a, 02.30.Yy} \maketitle
\textit{Introduction.}---Characterization and control of quantum systems are among the most
fundamental primitives in quantum physics and chemistry
\cite{nielsen-book,qcontrol}. In particular, it is of paramount
importance to identify and manipulate Hamiltonian systems which have
unknown interactions with their embedding environment \cite{kosut}.
In the past decade, several methods have been developed for
estimation of quantum dynamical processes within the context of
quantum computation and quantum control
\cite{nielsen-book,aapt,dcqd1,dcqd2,Emerson,bendersky-paz}. These
techniques are known as ``quantum process tomography" (QPT), and
originally were developed to estimate the parameters of a
``superoperator" or ``process matrix", which contains all
information about the dynamics. This is usually achieved through an
inversion of experimental data obtained from a complete set of state
tomographies. QPT schemes are generally inefficient, since for a
complete process estimation the number of required experimental
configurations and the amount of classical information processing
grow exponentially with the size of the system. Recently,
alternative schemes for partial and efficient estimation of quantum
maps have been developed
\cite{dcqd1,dcqd2,Emerson,bendersky-paz,dcqd3,HI} including
efficient data processing for selective diagonal \cite{Emerson} and
off-diagonal parameters of a process matrix \cite{bendersky-paz}.
However, it is not clear how the estimated elements of the
process matrix could help us actually characterize the set of
parameters for Hamiltonians generating such dynamics. These
parameters of interest generally include the free Hamiltonian of the
system and the coupling strengths of the system-bath
Hamiltonian. More importantly, it is not fully understood how the
relevant information obtained from quantum process estimation
experiments can be utilized for other applications such as optimal
control of a quantum device.
In this work, we develop a novel theoretical framework for studying
general dynamics of open quantum systems. In contrast to the usual
approach of utilizing master equations for density operator of a
quantum system, we introduce an equation of motion for the evolution
of a process matrix acting on states of a system. This equation does
not rest on Markovian or perturbative assumptions, and hence it provides
broad approach for analysis of quantum processes. We argue that the
application of partial quantum estimation schemes
\cite{dcqd1,dcqd2,Emerson,bendersky-paz,dcqd3,HI} enables efficient
estimation of sparse Hamiltonians.
Furthermore, the dynamical equation for process matrices leads to
alternative ways for controlling generic quantum Hamiltonian
systems. In other words, one can utilize this equation to drive the
dynamics of a closed (open) quantum system to any desired target
quantum operation. In particular, we apply quantum control
theory to find the optimal fields to decouple a system from its
environment, hence, ``controlling decoherence''.
\textit{Dynamical equation for open quantum systems.}---In quantum theory, the evolution of a system---assuming separable
initial state of the system and environment---can be described by a
(completely-positive) quantum map $\mathcal{E}_{t}(\varrho
)=\sum_{i}A_{i}(t)\varrho A_{i}^{\dagger }(t)$, where $\varrho $ is
the initial state of the system \cite{open-book}. An alternative,
more useful expression is obtained by expanding
$A_{i}(t)=\sum_{m}a_{im}(t)\sigma _{m}$ in $\{\sigma
_{k};k=0,1,\ldots ,d^{2}-1\}$ (a fixed operator basis for the
$d$-dimensional Hilbert space of the system) which leads to
$\mathcal{E}_{t}(\varrho )=\sum_{mn=0}^{d^{2}-1}\chi
_{mn}(t)\sigma _{m}\varrho \sigma _{n}^{\dagger }$. The positive-Hermitian process matrix $\bm{\chi}
(t)=\bigl[\sum_ia_{im}(t)\bar{a}_{in}(t)\bigr]$ represents $\mathcal{E}_{t}$ in the $\{\sigma _{k}\}$
basis, where the bar denotes complex conjugation. The process matrix elements
$\chi _{mn}(t)$, at any specific time $t$, can be experimentally
measured by any QPT scheme \cite{experiment}.
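As a concrete illustration of this construction (ours, not from the text), one can assemble $\bm{\chi}$ for single-qubit dephasing with Kraus operators $\sqrt{1-p}\,I$ and $\sqrt{p}\,\sigma_z$ in the Pauli basis $\{\sigma_0,\ldots,\sigma_3\}=\{I,\sigma_x,\sigma_y,\sigma_z\}$, and check numerically that $\sum_{mn}\chi_{mn}\sigma_m\varrho\sigma_n^{\dagger}$ reproduces the Kraus form (the value of $p$ and the test state are illustrative):

```python
import numpy as np

# Pauli basis {sigma_0, ..., sigma_3} = {I, X, Y, Z}
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
basis = [I2, X, Y, Z]

def expand(A):
    """Coefficients a_m of A = sum_m a_m sigma_m (Tr[sigma_m sigma_n] = 2 delta_mn)."""
    return np.array([np.trace(s.conj().T @ A) / 2 for s in basis])

p = 0.3                                   # dephasing probability (illustrative)
kraus = [np.sqrt(1 - p) * I2, np.sqrt(p) * Z]

# chi_mn = sum_i a_im conj(a_in)
a = np.array([expand(A) for A in kraus])  # a[i, m]
chi = a.T @ a.conj()

# Check: the chi-form of the map reproduces the Kraus form on a test state
rho = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)   # |+><+|
E_kraus = sum(A @ rho @ A.conj().T for A in kraus)
E_chi = sum(chi[m, n] * basis[m] @ rho @ basis[n].conj().T
            for m in range(4) for n in range(4))
assert np.allclose(E_kraus, E_chi)        # chi here is diag(1-p, 0, 0, p)
```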
In an open quantum system, the time-dependent Hamiltonian of the
total system-bath ($SB$) has the general form $H(t)=H_{S}(t)+H_{B}(t)+H_{SB}(t)$, where $S$ ($B$) stands for the system (bath or the surrounding environment). We denote
the evolution operator which is generated from this Hamiltonian, from
time $0$ to $t$, by $U(t)$. The
Hamiltonian $H_{SB}(t)$ can be written as $H_{SB}(t)=\sum_{k}\lambda _{k}(t)\sigma _{k}\otimes
B_{k}$, where $\lambda _{k}(t)$s are the coupling strengths of the
system-bath interaction, and $\{B_{k}\}$ are some bath operators.
Now we describe the dynamics in the interaction picture by
introducing the time evolution operators
$U_{0}(t)$, $U_S(t)$, and $U_B(t)$, generated by $H_{0}=H_{S}\otimes
I_{B}+I_{S}\otimes H_{B}$, $H_S$, and $H_B$, respectively.
The system-bath Hamiltonian in the interaction picture becomes:
$H_{I}(t)=U_{0}^{\dagger }(t)H_{SB}(t)U_{0}(t)$. By
introducing $\widetilde{\sigma }_{k}(t)=U_S^{\dag}(t)\sigma
_{k}U_S(t)\equiv \sum_{l}s_{kl}(t)\sigma _{l}$ and
$\widetilde{B}_{k}(t)=U_B^{\dag}(t)B_{k}U_B(t)$
as the rotating operators under the evolution of the
free Hamiltonian of the system and bath, we can rewrite
$H_{I}(t)=\sum_{k}\lambda _{k}(t)\widetilde{\sigma }_{k}(t)\otimes \widetilde{
B}_{k}(t)$. The Schr\"{o}dinger equation in the interaction picture can
be expressed as:
\begin{eqnarray}
i\text{d}A_{i}^{I}(t)/\text{d}t=\sum_{k}H_{ik}^{\prime
}(t)A_{k}^{I}(t),
\end{eqnarray}
where $U_{I}(t)=U_{0}^{\dagger }(t)U(t)$, $A_{i}^{I}(t)=~_{B}\langle
b_{i}|U_{I}(t)|b_{0}\rangle _{B}\equiv \sum_{m}a_{im}^{I}(t)\sigma
_{m}$ are the Kraus operators in the interaction picture,
$H'_{ij}(t)=\sum_{pq}\lambda _{p}s_{pq}~_{B} \langle
b_{i}|\widetilde{B}_{p}|b_{j}\rangle _{B}\sigma _{q}$, and
$\{|b_{i}\rangle \}$ is a basis for the bath Hilbert space. The
interaction picture $\bm{\chi}$ matrix is defined as $\chi
_{mn}^{I}(t)=\sum_{i}a_{im}^{I}(t)\bar{a}_{in}^{I}(t)$, which is
related to the elements of the measured process matrix through
$\chi^I_{mn}(t) =
\sum_{m'n'}\chi_{m'n'}(t)\text{Tr}[\sigma_mU^{\dag}_S(t)\sigma_{m'}]
\text{Tr}[\sigma_n U_S^T(t)\sigma_{n'}].$ Thus, the time evolution
of the $a_{im}^{I}$ coefficients reads:
\begin{eqnarray}
&i\text{d}a_{im}^{I}/\text{d}t=\sum_{klpq}a_{kl}^{I}\lambda _{p}s_{pq}\alpha
_{~~m}^{qp}~_{B}\langle b_{i}|\widetilde{B}_{p}|b_{k}\rangle _{B},
\label{aI-dot}
\end{eqnarray}
where $\alpha _{~~m}^{kl}=\text{Tr}[\sigma _{k}\sigma_{l}\sigma_m^{\dag}]$.
From this equation, one can obtain the time evolution of $\bm{\chi}^I$ as follows:
\begin{eqnarray}
&i \text{d}\bm{\chi}^{I}/ \text{d}t=\widetilde{\bm{H}}\bm{K}-\bm{K}^{\dag }\widetilde{\bm{H}}
^{\dag }, \label{open-EQ1}
\end{eqnarray}
where
\begin{eqnarray}
&[\widetilde{\bm{H}}]_{n(imj)}=&\textstyle{\sum_{pq}}\lambda
_{p}s_{pq}\alpha
_{~~n}^{qp}~_{B}\langle b_{j}|\widetilde{B}_{m}|b_{i}\rangle _{B}, \label{p-H}\\
&[\bm{K}]_{(imj)n}=&a_{im}^{I}\bar{a}_{jn}^{I}, \label{K}
\end{eqnarray}
in which $(imj)$ is treated as a single index. The order of
the pseudo-Hamiltonian $\widetilde{\bm{H}}$ is $d^{2}\times d^{6}$, but the number of independent parameters is $\leqslant d^2$, which is the
maximum number of nonzero $\lambda _{p}$s. By using a generalized
commutator notation $\left[ A,B\right] ^{\star }\equiv AB-B^{\dag}A^{\dag }$, Eq.~(\ref{open-EQ1}) can be represented in the following form:
\begin{eqnarray}
&i\mathrm{d}\bm{\chi}^{I}/\mathrm{d}t=[\widetilde{\bm{H}},\bm{K}]^{\star }.
\label{open-EQ}
\end{eqnarray}
This is the (super-) dynamical equation for open quantum systems, i.e.,
an equation for the time variation of quantum dynamics itself, in
which no state of the system appears, in contrast to the existing master equations \cite{open-book}.
The knowledge of the $\bm{K}$ matrix is generally required for application of Eq.~(\ref{open-EQ}).
The $\bm{\chi}^{I}$ matrix can be diagonalized by the
unitary operator $\bm{V} $: $\bm{\chi}^{I}=\bm{V}\bm{D}\bm{V}^{\dag
}$, where $\bm{D}=\text{diag}(D_i)$. Then, the Kraus operators in the interaction picture are
$A_{i}^{I}(t)=\sqrt{D_{i}}\sum_{m}V_{mi}\sigma _{m}$ \cite{nielsen-book}. Hence, we obtain $a_{im}^{I}=\sqrt{D_{i}}V_{mi}$ and $K_{imjn}=\sqrt{D_{i}D_{j}}V_{mi}
\overline{V}_{nj}$. Diagonalization of a sparse $\bm{\chi}^{I}$ matrix,
hence construction of the $\bm{K}$ matrix, can be performed
efficiently. The unknown parameters of Eq.~(\ref{open-EQ}) are
elements of $\widetilde{\bm{H}}$ matrix which contain the
information about the system-bath coupling strengths $\lambda _{k}$.
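The diagonalization route just described can be sketched numerically. In this illustrative snippet (our own, with made-up values), the dephasing-channel $\bm{\chi}$ stands in for a measured $\bm{\chi}^{I}$; we extract the Kraus operators $A_i=\sqrt{D_i}\sum_m V_{mi}\sigma_m$, verify completeness, and assemble $K_{(im)(jn)}=\sqrt{D_iD_j}\,V_{mi}\overline{V}_{nj}$:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
basis = [I2, X, Y, Z]

p = 0.2
chi = np.diag([1 - p, 0, 0, p]).astype(complex)  # stand-in for a measured chi^I

D, V = np.linalg.eigh(chi)                       # chi = V diag(D) V^dagger
# Kraus operators A_i = sqrt(D_i) * sum_m V[m, i] sigma_m (keep D_i > 0 only)
kraus = [np.sqrt(D[i]) * sum(V[m, i] * basis[m] for m in range(4))
         for i in range(4) if D[i] > 1e-12]

# Completeness sum_i A_i^dag A_i = I holds for a trace-preserving map
S = sum(A.conj().T @ A for A in kraus)
assert np.allclose(S, np.eye(2))

# a_im = sqrt(D_i) V[m, i]  and  K[(i,m),(j,n)] = a_im * conj(a_jn)
a = np.sqrt(np.clip(D, 0, None))[:, None] * V.T  # a[i, m]
K = np.einsum('im,jn->imjn', a, a.conj())
assert np.allclose(a.T @ a.conj(), chi)          # chi is recovered from a
```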
For unitary evolutions, following a similar approach, the dynamical equation for the process matrix is obtained as:
\begin{eqnarray}
&i\mathrm{d} \bm{\chi}/\mathrm{d}t=[\widetilde{\bm{H}},\bm{\chi}]^{\star },
\label{closed-EQ}
\end{eqnarray}
where $\bm{\widetilde{H}}=[\widetilde{h}_{mn}]$, $\widetilde{h}_{mn}(t)\equiv
\sum_{k}\alpha _{~~m}^{kn}h_{k}(t)$, and $h_l(t)$ are defined
through $H(t)=\sum_{l}h_{l}(t)\sigma_{l}$. It should be noted that Hermiticity of
$H$ implies only $d^2$ real independent parameters in $\widetilde{\bm{H}}$, which can
be estimated via QPT schemes.
\textit{Hamiltonian identification.}---Consider a large ensemble of the identically prepared systems in the
state $\varrho $, half of which are measured after duration $t$, and
the rest are measured at $t+\Delta t$, where $\Delta t$ is small
relative to $t$. Thus, by performing any type of QPT strategy one can
obtain the matrix elements $\chi_{mn}(t)$ and $\chi _{mn}(t+\Delta
t)$, hence their derivatives $\dot{\chi}_{mn}(t)\approx\left( \chi
_{mn}(t+\Delta t)-\chi _{mn}(t)\right) /\Delta t$ with accuracy
$\Delta t$. Consequently, using Eq.~(\ref{closed-EQ})
(Eq.~(\ref{open-EQ})) one can in principle identify the free
(system-bath) Hamiltonian for closed (open) quantum systems.
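As a sanity check of this two-snapshot derivative estimate (an illustration of ours, not from the text): for a single qubit with $H=(\omega/2)\sigma_z$ the process matrix is known in closed form, so the finite-difference estimate can be compared against the analytic derivative; the analytic $\bm{\chi}(t)$ stands in for the measured QPT data, and all numerical values are illustrative:

```python
import numpy as np

def chi_unitary(t, omega=1.0):
    """Process matrix of U(t) = exp(-i*omega*t*sigma_z/2) in the Pauli basis."""
    a = np.zeros(4, dtype=complex)
    a[0] = np.cos(omega * t / 2)         # coefficient of I
    a[3] = -1j * np.sin(omega * t / 2)   # coefficient of sigma_z
    return np.outer(a, a.conj())         # chi_mn = a_m conj(a_n)

t, dt = 0.7, 1e-3
# In an experiment, chi(t) and chi(t + dt) would come from two QPT runs;
# here the analytic expressions stand in for the measured data.
chi_dot = (chi_unitary(t + dt) - chi_unitary(t)) / dt

# Analytic check for the (3,3) element: d/dt sin^2(t/2) = sin(t)/2
assert abs(chi_dot[3, 3].real - np.sin(t) / 2) < 1e-3
```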
For unitary evolutions, a simple relation between the elements of
the $\bm{\chi}$ matrix and the system Hamiltonian parameters is
obtained, up to the second order in $t$ and a global phase
$\text{Tr}[H]$, as:
$\chi
_{00}(t)\approx 1-\frac{1}{2}t^2\sum_{ij}\bigl(\alpha^{ij}_{~~0}h_ih_j+ \bar{\alpha}
^{ij}_{~~0}\bar{h}_i\bar{h}_j\bigr)$, $\chi
_{m0}(t)\approx -ih_mt-\frac{1}{2}t^2\sum_{ij}\alpha^{ij}_{~~m}h_ih_j$,
and
\begin{eqnarray}
\chi _{mn}(t)\approx\ h_m\bar{h}_nt^2, \label{ShortTimeChi}
\end{eqnarray}
where $m,n\neq0$. From Eq.~(\ref{ShortTimeChi}), for a given short time $
t$, we have $h_{n}=e^{i\varphi _{n}}\sqrt{\chi _{nn}}/t$, from which the relative errors satisfy $\text{Re}
[\delta h_{n}/h_{n}]=\delta \chi _{nn}/2\chi _{nn}$. According to
the Chernoff bound \cite{dcqd3,chernoff}, to estimate $\chi _{nn}$s with
accuracy $\Delta\geqslant|\chi _{nn}-\overline{\chi }_{nn}|=\delta
\chi _{nn} $ --- where $\overline{\chi}_{nn}$ is the average of the
results of $M$ repeated measurements --- with success probability
greater than $1-\epsilon $, one needs $M=\mathcal{O}(|\log
\epsilon/2|/\Delta^2)$. Information of the phases $\varphi _{n}$, up
to a global phase, can be estimated by measuring $\chi_{mn}$s for
$m\neq n$.
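A minimal numerical illustration of the short-time estimate (our own sketch; the Hamiltonian $H=h\,\sigma_x$ and all values are assumptions): here $\chi_{11}(t)=\sin^2(ht)$, so $\sqrt{\chi_{11}}/t$ recovers $h$ up to $\mathcal{O}(h^3t^2)$ corrections, and the Chernoff-type sample count can be evaluated for given $\epsilon$ and $\Delta$ (scaling only, constants omitted):

```python
import numpy as np

h_true, t = 0.3, 0.05        # illustrative coupling strength and short probe time
# For H = h*sigma_x: U(t) = cos(h t) I - i sin(h t) sigma_x,
# so chi_11(t) = sin^2(h t) ~ (h t)^2 at short times.
chi_11 = np.sin(h_true * t) ** 2

h_est = np.sqrt(chi_11) / t              # h_n = sqrt(chi_nn)/t, up to a phase
assert abs(h_est - h_true) < 1e-3        # error is O(h^3 t^2)

# Chernoff-bound sample count M = O(|log(eps/2)| / Delta^2), constants omitted
eps, Delta = 0.05, 0.01                  # failure probability, target accuracy
M = int(np.ceil(abs(np.log(eps / 2)) / Delta ** 2))
print(h_est, M)
```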
Using the above construction, next we discuss efficient Hamiltonian
identification schemes via performing certain short-time scale QPTs.
A precursor to this type of short-time expansion in order to
efficiently obtain process matrix parameters can be found in
Ref.~\cite{levi}; however, its underlying model, assumptions,
and identification method are more restrictive and generally
not directly comparable with ours.
In generic $N$-body physical systems (e.g., $N$ qubits),
interactions are $L$-local where $L$ is typically $2$. That is,
$H=\sum_{k}H_{k}$, where each $H_{k}$ includes only interactions of
$L$ subsystems, with overall $\mathcal{O}(N^{L})$ independent
parameters. This implies that in the $\{\sigma _{k}\}$ basis $H$ has
a sparse-matrix representation. Hence, the number of free parameters
of the corresponding unitary or process matrix, unlike their
exponential size, will be polylog($d$) (i.e., a polynomial of $N$).
Here, we mainly concentrate on controllable $L$-local Hamiltonians,
which are of particular interest in the context of quantum
information processing in order to generate a desired quantum operation. An
important example of this class is the Heisenberg exchange
Hamiltonian in a network of spins with nearest neighbor
interactions. This 2-local sparse Hamiltonian (in the Pauli basis)
also generates a sparse process matrix \cite{HI} and is
computationally universal over a subspace of fixed angular momentum
\cite{heisenberg}.
Let us consider a sparse Hamiltonian,
$H(t)=\sum_{m}h_{m}(t)\sigma_{m}$, with polylog$(d)$ nonzero $h_m$s,
where $\{\sigma_m\}$ is the nice error basis \cite{dcqd2}. In the
short-time limit, according to Eq.~(\ref{ShortTimeChi}), only
polylog$(d)$ of the $h_{m}$s are nonzero, so the number of nonzero
elements in the $\bigl[t^{2}h_{m}\bar{h}_{n}\bigr]_{m,n\neq 0}$ block is of the same
order [up to $\mathcal{O}(t^{3})$ corrections]. \textit{A priori} knowledge of the general form of a given
sparse Hamiltonian tells us where these nonzero elements are located.
Therefore, if we can efficiently determine all nonzero elements
of this block, we would have polylog$(d)$ quadratic equations from
which we can estimate $h_{m}$s. In other words, by only
\textit{polynomial} experimental settings we would be able to
extract relevant information about the Hamiltonian from a suitable QPT experiment \cite{experiment}. In general,
there are three distinct process estimation techniques, including
Standard Quantum Process Tomography (SQPT) \cite{nielsen-book},
Direct Characterization of Quantum Dynamics (DCQD) \cite{dcqd1}, and Selective Efficient Quantum Process Tomography (SEQPT) \cite{bendersky-paz}. The scaleup of physical resources varies among
these process estimation strategies \cite{dcqd3}. SQPT is
inefficient by construction, since we still need to measure an
exponentially large number of observables in order to reconstruct
the process matrix through a set of state tomographies. SEQPT can efficiently estimate sparse quantum
Hamiltonians via selectively estimating a polynomial number of
$\chi_{mn}$s associated to the Hamiltonian, within the context of
short-time analysis. Using the DCQD scheme, in the short-time limit, one
can also efficiently estimate all the parameters of certain sparse
Hamiltonians, specifically all the diagonal elements $\chi_{nn}$; a detailed analysis is beyond the scope of this work and will be presented in another publication \cite{xx}. Note that, in contrast to SQPT, both DCQD and SEQPT
assume access to noise-free ancilla channels. However, recently a
generalization of the DCQD scheme to certain cases of calibrated faulty
preparation, measurement, and auxiliary systems has been developed \cite{AIP}.
We emphasize that, within the context of short-time analysis, the
efficient estimation is only applicable to the Hamiltonians for
which the location of nonzero elements in a given basis is known
from general physical or engineering considerations, such as in the
exchange Hamiltonian in solid-state quantum information processing
\cite{heisenberg}. The exchange Hamiltonian describes the underlying
interactions for various systems, such as spin-coupled quantum dots
\cite{Loss:98}, donor-atom nuclear/electron spins
\cite{Kane:98Vrijen:00}, semiconductor quantum dots \cite{Petta05},
and superconducting flux qubits \cite{niskanen}. The anisotropic
exchange Hamiltonian exists in quantum Hall systems \cite{Hall},
quantum dots/atoms in cavities \cite{Imamoglu:99}, exciton-coupled
quantum dots \cite{exciton}, electrons in liquid-Helium
\cite{Platzman:99}, and neutral atoms in optical lattices
~\cite{opticallattices}.
\textit{Applications to quantum dynamical control.}---One immediate application of any equation of motion --- i.e., a dynamical equation --- for a quantum
or classical system is to manipulate its state or dynamics toward a
desired target. The ability to control quantum dynamics by certain
external control fields is essential in many applications including
physical realizations of quantum information devices. Due to environmental noise and device imperfections, it is generally difficult
to maintain quantum coherence during dynamical evolution of quantum systems. Reducing or controlling decoherence, therefore, is an
important objective in a control theoretic investigation of quantum
systems.
Optimal control theory (OCT) \cite{oct} has been developed for
finding control fields to guide a quantum system, subject to natural
or engineering constraints, as close as possible to a particular
target. For closed quantum systems, OCT has been proposed for
controlling states \cite{oct} and unitary dynamics
\cite{palao-kosloff}. In OCT, a quantum system is driven from an
initial state or unitary operation to a final configuration, via
applying external fields. This is achieved, for example, by
modifying a free Hamiltonian $H_0$ as $H(t)=H_0-\mu\pi(t)$, where
$\mu$ is a system operator (e.g., atomic or molecular dipole moment)
and $\pi(t)$ is a shaped external field (e.g., laser pulse)
\cite{palao-kosloff}. The optimization is based on maximizing a
yield functional $\widetilde{\mathcal{Y}}$, e.g., fidelity of the
final and target configurations, by a variational procedure
($\delta\widetilde{\mathcal{Y}}/\delta\pi=0$) subject to a set of
constraints.
Having an equation of motion suggests how one can control the dynamics of
a system toward a target configuration. Thus, a new method for
controlling dynamics of \textit{open} quantum systems can be
developed by our equation of motion (Eq.~(\ref{open-EQ})),
specifically applicable to optimal decoherence control. For isolated
systems we have $\lambda_k=0$ (hence $\widetilde{\bm{H}}=0$), from
which one can obtain $\chi^I_{mn}=\delta_{m0}\delta_{n0}\equiv
[\bm{E}_{00}]_{mn}$. However, due to decoherence or other
environmental effects, there might be some residual interaction
$\widetilde{\bm{H}}_0$ between the system and environment. Our
objective, here, is to apply a control field $\pi(t)$ to modify the
pseudo-Hamiltonian, Eq.~(\ref{p-H}), in order to suppress the
decohering interaction. Since $\widetilde{\bm{H}}$ is linear in
$\lambda$s, applying a control coupling field would affect
$\widetilde{\bm{H}}$ linearly. Thus, if we introduce an external
controllable field $\pi(t)$, the pseudo-Hamiltonian
$\widetilde{\bm{H}}_0$ becomes
$\widetilde{\bm{H}}(t)=\widetilde{\bm{H}}_0 -\bm{\mu}\pi(t)$, where
$\bm{\mu}$ is a system operator. The control strategy is to find the
optimal $\pi(t)$ such that the constrained fidelity
\begin{eqnarray}
&\widetilde{\mathcal{Y}}=\text{Re}\bigl[\mathcal{Y}- \int_0^T \mathrm{d}t~ \text{Tr}\{( \mathrm{d}\bm{\chi}^I/ \mathrm{d}t+
i[\widetilde{\bm{H}}(t),\bm{K}(t)]^{\star}) \bm{\Lambda}(t)\}\bigr]\nonumber\\
&~-\eta\int_0^T \mathrm{d}t~|\pi(t)|^2/f(t),
\label{Y-tilde}
\end{eqnarray}
becomes maximal, where $\mathcal{Y}= \text{Re} \bigl[\text{Tr}[{\bm{\chi}^I}^{\dag}(T)\bm{E}_{00}]\bigr]$
and $\bm{\Lambda}(t)$ ($\eta$) is an operator (scalar) Lagrange multiplier. The last term in
Eq.~(\ref{Y-tilde}) describes an ``energy" constraint \cite{palao-kosloff}, in which
$f(t)$ is a shape function for switching the control field on and
off. In order to find the optimal field, we vary $\pi$, $\bm{\Lambda}$, and $a^I_{im}$,
and set $\delta\widetilde{\mathcal{Y}}=0$. By variation of the operator Lagrange multiplier $\bm{\Lambda}$,
we obtain the original dynamical equation [Eq.~(\ref{open-EQ})],
and variation of $\pi$ yields
\begin{eqnarray}
&\pi(t)=-\dfrac{f(t)}{2\eta}\,\text{Im}\,\text{Tr}\!\left(\left[\bm{\mu},\bm{K}(t)\right]^{\star}\bm{\Lambda}(t)\right).
\label{pulse-eq}
\end{eqnarray}
This equation implies that the knowledge of $\bm{K}(t)$ and $\bm{\Lambda}(t)$ is
necessary to specify the optimal control field. The superoperator $\bm{K}(t)$ can be
constructed by process estimation techniques. To obtain the dynamics of $\bm{\Lambda}(t)$ we vary
$a^I_{im}$, which in turn leads to variations of $\bm{\chi}^I$ and $\bm{K}$. Thus, the Lagrange
operator satisfies
\begin{eqnarray}
&-i\left[\bm{K}\dfrac{d\bm{\Lambda}}{dt}\right]_{imim}=\nonumber\\
&\sum_{njl}\Lambda_{ln}\widetilde{H}_{nimj}K_{imjn} -
\Lambda_{nm}\widetilde{H}_{mjli}\overline{K}_{jlim}.
\label{Lambda-dot}
\end{eqnarray}
Equations~(\ref{pulse-eq}) and (\ref{Lambda-dot}), in principle, can be
solved iteratively by the Krotov method \cite{palao-kosloff,krotov} to find the optimal
control field $\pi$ for decoherence suppression. That is, one can
effectively preserve coherence in dynamics of an open quantum system by applying external pulses
to decouple it from the environment. This could provide an alternative method for
an effective dynamical decoupling \cite{dyn-dec} in the language of
process matrix evolution. One can devise a learning decoherence control strategy by
estimating $\bm{K}(t)$, via certain QPT schemes on subensembles of identical systems, after each application
of the optimal control field in a given time $t$. The information learned from the
estimation is used through Eqs.~(\ref{Y-tilde})--(\ref{Lambda-dot}) for a second round to
find a new optimal $\pi$. This procedure can be repeated to enhance
the decoherence suppression task.
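The iterative structure of this learning loop can be sketched numerically. The toy below is emphatically NOT the Krotov/process-matrix scheme of Eqs.~(\ref{Y-tilde})--(\ref{Lambda-dot}); it replaces them with a simple single-qubit state-transfer yield and finite-difference gradients, purely to illustrate the repeat-evaluate-update pattern of iterative pulse optimization (all operators, parameter values, and the accept-if-better rule are our assumptions):

```python
import numpy as np

Z = np.diag([1.0, -1.0]).astype(complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)

def step(H, dt):
    """exp(-i H dt) for a small Hermitian H, via eigendecomposition."""
    w, v = np.linalg.eigh(H)
    return v @ np.diag(np.exp(-1j * w * dt)) @ v.conj().T

def fidelity(pulse, dt=0.1):
    """|<1|U(T)|0>|^2 for H(t) = sigma_z - pulse(t)*sigma_x, piecewise constant."""
    U = np.eye(2, dtype=complex)
    for p in pulse:
        U = step(Z - p * X, dt) @ U
    return abs(U[1, 0]) ** 2

rng = np.random.default_rng(0)
pulse = 0.3 * rng.standard_normal(20)    # initial guess, 20 pulse segments
F0, lr, eps = fidelity(pulse), 0.5, 1e-5

for _ in range(40):                      # accept-if-better gradient ascent
    grad = np.array([(fidelity(pulse + eps * e) - fidelity(pulse - eps * e))
                     / (2 * eps) for e in np.eye(len(pulse))])
    trial = pulse + lr * grad
    if fidelity(trial) >= fidelity(pulse):
        pulse = trial
    else:
        lr *= 0.5                        # shrink the step on a rejected update

assert fidelity(pulse) >= F0             # the loop never decreases the yield
```

In the scheme of the text, the finite-difference gradient would be replaced by the analytic update of Eq.~(\ref{pulse-eq}), with $\bm{K}(t)$ re-estimated by QPT between iterations.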
\textit{Conclusion and outlook.}---We have developed an alternative framework for monitoring and
controlling dynamics of open quantum systems, and have derived a
dynamical equation for the time variation of process matrices. This
nonperturbative approach can be applied to non-Markovian systems and
systems or devices strongly interacting with their embedding
environment. In addition, we have shown how the information gathered
via partial process tomography schemes can be used to efficiently
identify unknown parameters of
certain classes of local Hamiltonians in short-time scales.
Furthermore, we have proposed an
optimal quantum control approach for the dynamics of open quantum
systems. Specifically, we have suggested how this mechanism can be
used for a generic decoherence suppression.
The approach presented here can be used for exploring new ways for
dynamical open-loop/learning control of Hamiltonian systems
\cite{JM}. One can utilize continuous weak measurements \cite{weak}
for process tomography to develop a real-time dynamical
closed-loop control for a quantum system. Our
Hamiltonian identification scheme could be utilized for efficient
verification of certain correlated errors for quantum computers and
quantum communication networks \cite{nielsen-book}. Using our dynamical approach,
one could explore the existence of certain symmetries in system-bath couplings
which would lead to noiseless subspaces and subsystems. The dynamical equation
developed here can also be applied to studying energy transfer in multichromophoric
complexes in the non-Markovian and/or strong-interaction regimes \cite{mohseni-fmo}.
\textit{Acknowledgments.}---We thank A. Aspuru-Guzik,
G. M. D'Ariano, N. Khaneja, P. G. Kwiat, D. A. Lidar, S. Lloyd, and B. C. Sanders
for helpful discussions. This work was supported by Faculty of Arts and Sciences of Harvard University, the USC Center for Quantum
Information Science and Technology, NSERC, \textit{i}CORE, MITACS, and PIMS.
\end{document}
\begin{document}
\title{Geometric phase for an adiabatically evolving
open quantum system} \author{Ingo Kamleitner}
\affiliation{Australian Centre for Quantum Computer Technology,
Macquarie University, Sydney, New South Wales 2109, Australia}
\author{James~D.~Cresser} \affiliation{Department of Physics,
Macquarie University, Sydney, New South Wales 2109, Australia.}
\author{Barry~C.~Sanders} \affiliation{Australian Centre for
Quantum Computer Technology, Macquarie University, Sydney, New
South Wales 2109, Australia} \affiliation{Institute for Quantum
Information Science, University of Calgary, Alberta T2N 1N4,
Canada} \date{\today}
\begin{abstract}
We derive an elegant solution for a two-level system evolving
adiabatically under the influence of a driving field with a
time-dependent phase, which includes open system effects such
as dephasing and spontaneous emission. This solution, which is
obtained by working in the representation corresponding to the
eigenstates of the time-dependent Hermitian Hamiltonian,
enables the dynamic and geometric phases of the evolving
density matrix to be separated and relatively easily calculated.
\end{abstract}
\pacs{}
\maketitle
\section{Introduction}
The discovery by Berry \cite{Berry, Simon} that a (non-degenerate)
state of a quantum system can acquire a phase of purely geometric
origin when the Hamiltonian of the system undergoes a cyclic, adiabatic
change has led to an explosion of interest in this and related
geometric phases in quantum mechanics, both from a theoretical
perspective, and from the point of view of possible applications, the
latter including applications to optics (where the geometric phase was
first discovered~\cite{Pan56}), NMR and molecular physics, and to
quantum computing \cite{JonesVAG,EkertEHIJOV}. Since Berry's work, and
the demonstration that Berry's phase can be understood as a holonomy
associated with the parallel transport of the quantum state
\cite{Simon}, there have been numerous proposals for generalizations.
The first of these was due to Wilczek and Zee \cite{WilczekZee} who, by
considering a Hamiltonian with degenerate eigenstates, established
the existence of an intimate connection between Berry's phase and
non-Abelian gauge theories. The restriction to changes occurring
adiabatically was relaxed in the work of Aharonov and Anandan
\cite{Aha87} while Anandan \cite{Anandan} generalized the
geometric phase to the non-adiabatic non-Abelian case. The restriction
of cyclicity was removed by Samuel and Bhandari \cite{SamuelBhandari}
and by Pati \cite{Pati}. All of this work is concerned with geometric
phases of pure states of closed systems and is now standard, though
\cite{SamuelBhandari} indicated extensions that take account of quantum
measurements and consequent non-unitary evolution. A nice
overview, theoretical as well as experimental, is given in
\cite{Ben-Aryeh}.
More recently attention has turned to studying geometric phases for
mixed states, though there is not yet a standard description for
geometric phases associated with mixed states.
As realistic systems always interact with their environment, and as an
open system is almost always to be found in a mixed state, open systems
are a natural source of problems involving the geometric phases of
mixed states. Garrison and Wright \cite{GarissonWright} were the first
to touch on this issue in a phenomenological way, by describing open
system evolution in terms of a non-Hermitian Hamiltonian. This was, in
fact, a pure state analysis, so it did not, strictly speaking, directly
address the problem of geometric phases for a mixed state, but this
work raised issues which could potentially have a bearing on the
analysis of the mixed state problem. In fact, they did point out that
a proper treatment of an open system would require making use of the
density operator approach. Nevertheless, they arrived at an
interesting result, a \emph{complex} geometric phase for dissipative
evolution. This is a result that has been recently put into doubt by a
master equation treatment by Fonseca Romero et al and by Aguiar Pinto and
Thomaz \cite{Fonseca,Pin03}.
The first complete open-systems analyses of geometric phase for a mixed
state, from two different perspectives, are to be found in the papers of
Ellinas et al \cite{Ell89} and Gamliel and Freed \cite{Gam89}. The
former worked with the standard master equation for the density
operator of a multilevel atom subject to radiative damping and driven
by a laser field with a time-dependent phase. What is of interest in
their approach is that it entailed introducing eigenmatrices of the
Liouville superoperator of the master equation for the damped system.
The system Hamiltonian was allowed to vary adiabatically, with the
result that a non-degenerate eigenmatrix acquires a geometric phase as
well as a dynamic phase. In \cite{Gam89}, the effect of the
environment was modelled as an external classical stochastic influence
which, when averaged, gives rise to the relaxation terms of the
master equation for the system. In both cases the effects of any
geometric phase were then shown to be present in measurable quantities
such as the inversion of a two state system.
Since then, research has been increasing rapidly into the problem of
defining a geometric phase for mixed quantum states for both unitary
and non-unitary evolution, motivated to a very large extent by the need
to understand the effects of decoherence in quantum computational
processes that exploit geometric phases as a means of constructing
intrinsically fault-tolerant quantum logic gates. This issue has been
addressed from two points of view, the first holistic in nature wherein
the aim is to identify a geometric phase to be associated with the
mixed state itself, and the second, essentially the approach of
\cite{Ell89}, which works with the pure state geometric phases of an
appropriate set of parallel transported basis states, which then gives
rise to geometric phase factors in the off-diagonal elements of the
density operator. No geometric phase is explicitly associated with the
mixed state itself; instead observable quantities that will exhibit the
effects of the geometric phases of the underlying basis states are
determined.
The former, holistic approach was first introduced in a formal way by
Uhlmann \cite{Uhlmann}, and in a different way, based on
phase-sensitive measurements via interferometry, by Sj\"{o}qvist et al
\cite{SPEAEOV} for unitary evolution of a mixed state, and later for
non-unitary evolution \cite{ESBOP,deFaria}. The phase defined in this
way is not the same in all respects as that proposed by Uhlmann
\cite{Tidstrom}.
The latter kind of approach has been used only for open systems, and
involves working with the (Markovian) master equation of the open
system. The approaches used involve either solving the master
equation of the system \cite{Ell89,Gam89,Fonseca,Pin03}, or
employing a quantum trajectory analysis \cite{Nazir,Carollo,Mar04} to
unravel the dynamics into pure state trajectories, and calculating the
geometric phases associated with individual pure state trajectories.
Noise of a classical origin, such as stochastic fluctuations of the
parameters of the Hamiltonian, has also been studied by Chiara and
Palma \cite{ChiaraPalma}. In essence, the common feature is not so
much to propose a new definition of geometric phase for a mixed state
as to show how the underlying existence of a geometric phase will
nevertheless show up in the observed behaviour of an open quantum
system. It is this perspective that is adopted in the work to be
presented in this paper.
Here we introduce an elegant approach for solving the central master
equation which is based on introducing a unitary transformation due to
Kato \cite{Kato} described in the classic text by Messiah
\cite{Messiah}.
The new picture is defined via a time-dependent unitary transformation
$A^\dagger(t)$, usually referred to as a rotating axis transformation,
which is such that the transformed system Hamiltonian has time
independent eigenspaces. This method is extended by showing that
under the conditions of adiabatic evolution, all the information on
geometric phase for a closed loop is contained within $A(t)$, and is
regained by transforming back to the original picture. The merits of
this approach are its simplicity, since one needs only to calculate the
geometric phases for the eigenstates of the Hamiltonian $H(t)$, and
the fact that the dynamic and geometric phases are separated in a clear
way. In fact, perfect separation of geometric and dynamical
contributions is obtained provided the Hamiltonian evolution is
adiabatic and the coupling to the environment is weak. This approach
bears some similarity to that used by Fonseca Romero et al \cite{Fonseca}
who make use of several unitary transformations to separate the
geometric phase from the dynamic phase. However, the rationale for their
transformations, and the origin of the geometric phase, is somewhat elusive
in their analysis. In contrast to \cite{Fonseca}, with the transformation
introduced here, the parallel transport condition is essential and
explains the appearance of the geometric phase.
Within the approach used here, it is possible to show explicitly how to
achieve, under certain circumstances, operational cancellation of the
dynamic phase, thereby making the geometric phase accessible in
experiments. An example of where this is possible is given in
Section~\ref{Examples}.
This paper is organized as follows. In Section~\ref{MathMethods} we
present the main ideas. In Section~\ref{Examples} we look at several
examples. In Section~\ref{conclusions} we summarize our results while
in Section~\ref{newdirections} we indicate possible new directions,
including generalisations to non-cyclic evolution and non-Abelian
holonomies. An analog to the adiabatic theorem is proved for a general
Lindblad equation \cite{Lin76} in the Appendix.
\section{The Rotating Axis Transformation for Nondegenerate Multilevel
Systems
\label{MathMethods}}
\subsection{Geometric Phases for a Closed System}
For the present we consider the case of a closed system so as to
introduce the basic method employed here. Suppose we have a system
with Hamiltonian $H(t)$, a function of time due to the dependence of
$H$ on parameters whose values can be changed in time. This
Hamiltonian will have instantaneous eigenvectors $\N[(t)]$ with
eigenvalues $E_{n}(t)$:
\begin{equation}
H(t) \N[(t)] = E_n(t) \N[(t)].
\end{equation}
For simplicity we assume $E_i(t) \neq E_j(t)$ for $i\neq j$. This
restriction will be removed in Section~\ref{newdirections} to obtain non-Abelian
holonomies. For an adiabatically slow change in the system
parameters,
these eigenvectors will also change in such a way as to satisfy the
parallel transport condition \cite{Simon}:
\begin{equation}
\NN[(t)] | \der\N[(t)]=0.
\label{parallel}
\end{equation}
At this point we introduce a unitary operator $A(t)$ via the equation
\begin{equation}
A(t)\N[(0)] = \N[(t)].
\label{defnofA}
\end{equation}
This completely defines $A(t)$. Note that because of the path
dependence of the parallel transported eigenstates \N[(t)], the
operator $A(t)$ has a non-integrable nature, and, as we see later, will
contain the information on the geometric phase.
This unitary operator can now be used to remove the time dependence of
the eigenstates of the Hamiltonian. Thus, if we define
\begin{equation}
H^{A} \equiv A^{\dagger}HA
\end{equation}
we note that the eigenvectors of $H^{A}$ are now just $\N[(0)]$ and
hence are time independent. The transformed Schr\"{o}dinger equation
is then
\begin{equation}
H^{A}|\psi^{A}\rangle=i\hbar\left(A^{\dagger}\dot{A}|\psi^{A}\rangle
+\frac{d}{dt}|\psi^{A}\rangle\right)
\end{equation}
where $|\psi^{A}\rangle=A^{\dagger}|\psi\rangle$. If \N[(t)] is
parallel transported, then, to the lowest-order
adiabatic approximation, one can neglect the terms containing
$A^\dagger (t) \dot{A}(t)$ so that the transformed
Schr\"{o}dinger equation becomes~\cite{Messiah}
\begin{equation}
H^{A}|\psi^{A}\rangle=i\hbar\frac{d}{dt}|\psi^{A}\rangle.
\label{purestatenogeom}
\end{equation}
The solution of \eqref{purestatenogeom} contains no geometric
contribution --- it gives the dynamic contribution to the phase of
any adiabatically evolving state. So we have extracted the dynamics
from the geometric influence of the time-varying Hamiltonian. The
geometric contribution is entirely contained within the operator $A$.
To obtain this information we have to transform back to the original
picture in terms of the states $|\psi\rangle$:
\begin{equation}
|\psi\rangle=A|\psi^{A}\rangle.
\end{equation}
If the Hamiltonian undergoes a closed loop in time $T$, i.e.
$H(T)=H(0)$, then the parallel transported eigenstates \N[(T)] return
to the initial eigenstates \N[(0)] up to the geometric phase. Hence we
have $A(T)= \text{diag} (e^{i\varphi_1}, \ldots ,e^{i\varphi_N})$ where
$\varphi_n$ is the geometric phase associated with the eigenstate
\N[(T)]. This result can be readily generalized if the system is in a
mixed state $\rho$. Introducing the notation
\begin{equation}
\rho^A(t) = A^\dagger(t)\rho(t)A(t)
\end{equation}
we obtain
\begin{align}
\rho(T) =& A(T)\rho^A(T)A^\dagger(T) \nonumber \\
=& \text{diag}
(e^{i\varphi_1},\ldots,e^{i\varphi_N})\rho^A(T)\nonumber\\
&\times \text{diag}(e^{-i\varphi_1}, \ldots ,e^{-i\varphi_N}).
\label{rhoholonomy}
\end{align}
This holonomic transformation multiplies the off-diagonal elements of
the density operator $\rho^A_{ij}$ by a phase
$e^{i(\varphi_i-\varphi_j)}$, which is the difference of the geometric
phases of the eigenstates of the Hamiltonian $H(T)$.
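This action on the off-diagonal elements can be checked numerically. The following sketch (not part of the paper; the three-level geometric phases are assumed illustrative values) conjugates a random density matrix by the diagonal holonomy $A(T)$ and verifies that each off-diagonal element acquires the difference of the geometric phases while the populations are untouched:

```python
import numpy as np

# Hypothetical geometric phases for a three-level illustration
phis = np.array([0.3, 1.1, -0.7])
A_T = np.diag(np.exp(1j * phis))        # A(T) = diag(e^{i phi_1}, ..., e^{i phi_N})

rng = np.random.default_rng(0)
M = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
rho_A = M @ M.conj().T
rho_A /= np.trace(rho_A).real           # a valid density matrix rho^A(T)

rho_T = A_T @ rho_A @ A_T.conj().T      # holonomic transformation of rho^A(T)

# Element ij picks up exp(i(phi_i - phi_j)); diagonal elements are unchanged
for i in range(3):
    for j in range(3):
        assert np.isclose(rho_T[i, j],
                          rho_A[i, j] * np.exp(1j * (phis[i] - phis[j])))
```

The check also confirms that the trace, and hence normalization, is preserved by the transformation.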
\subsection{Open System and Master Equation}
Systems that are coupled to a reservoir (or environment) can usually be
described by a reduced density operator that evolves according to a
master equation, which, in many cases, can be written in the Lindblad
form \cite{Lin76}
\begin{equation}
\dot{\rho} (t) = \ih [H(t),\rho (t)] +\frac{1}{2}
\sum_{\alpha=1}^k \mathcal{L}_{\Gamma_\alpha} [\rho(t)]
\label{meqn1}
\end{equation}
for
\begin{equation}
\mathcal{L}_\Gamma[\rho] \equiv 2 \Gamma \rho \Gamma^\dagger
-\Gamma^\dagger \Gamma \rho - \rho\Gamma^\dagger \Gamma~,
\end{equation}
and where $\rho(t)$ is the density operator for the system of interest,
$\dot{\rho}(t)$ is its derivative with respect to time, $H(t)$ is the
system Hamiltonian and the dissipation operators $\Gamma_\alpha$
arise due to the presence of the reservoir. To obtain a geometric phase
we once again consider an adiabatically changing Hamiltonian $H(t)$,
with $H(T)=H(0)$, as in the preceding section. Upon introducing the
operator $A(t)$ defined in \eqref{defnofA} we can transform the master
equation with the unitary operator $A^\dagger(t)$, which leads us to
the new master equation
\begin{align}
\dot{\rho}^A (t) = &\ih [H^A(t),\rho^A (t)]
+\rho^A(t)A^\dagger (t) \dot{A}(t)\nonumber \\ & -A^\dagger (t)
\dot{A}(t)\rho^A(t) +\half\sum_{\alpha =1}^k
\mathcal{L}_{\Gamma_\alpha^A} [\rho^A(t)] . \label{meqn2}
\end{align}
Note that $\Gamma_\alpha^A\equiv A^\dagger\Gamma_\alpha A$ is in
general time-dependent even when $\Gamma_\alpha$ is not. As shown in
the Appendix, the terms containing $A^\dagger (t) \dot{A}(t)$ can be
neglected, as in the case of a unitary evolution but with the
additional requirement that the damping is weak. This establishes the
parallel transport condition \eqref{parallel} for weakly damped systems
in an adiabatic evolution. We then get
\begin{equation}
\dot{\rho}^A (t) = \ih [H^A(t),\rho^A (t)] +
\half\sum_{\alpha =1}^k \mathcal{L}_{\Gamma_\alpha^A}
[\rho^A(t)] .
\label{meqn3}
\end{equation}
The solution of \eqref{meqn3} contains no geometric
contributions. To regain $\rho(T)$ we have to transform back to the
original picture as in the unitary case, \eqref{rhoholonomy}.
\begin{align}
\rho(T) =& A(T)\rho^A(T)A^\dagger(T) \nonumber \\
=& \text{diag}
(e^{i\varphi_1}, \ldots ,e^{i\varphi_N})\rho^A(T)\nonumber \\
& \times \text{diag}(e^{-i\varphi_1}, \ldots ,e^{-i\varphi_N})
\end{align}
with $\varphi_n$ being the geometric phase for the state \N .
\section{Examples for Two-Level Systems\label{Examples}}
\subsection{Optical Resonance with Spontaneous Emission}
We consider a two level atom in a classical resonant laser field. In
the rotating-wave approximation the Hamiltonian for this system is
\begin{equation}
H=\hbar \begin{pmatrix} \frac{\Delta}{2} & \Omega
e^{-i\phi} \\
\Omega e^{i\phi} & -\frac{\Delta}{2} \end{pmatrix}
\end{equation}
The detuning $\Delta$, the coupling strength $\Omega$ and the phase
$\phi$ are properties of the laser. To induce a geometric phase we
change the phase $\phi(t)$ of the laser field slowly in comparison to
$E/\hbar = (\Omega^2 + \frac{1}{4}\Delta^2)^\frac{1}{2}$, which is the
absolute value of the eigenenergies of the Hamiltonian divided by
$\hbar$. The eigenvalue equation is
\begin{align}
H(t)\PLUS[(t)]
&= E\PLUS[(t)] \\
H(t)\MINUS[(t)] &= -E\MINUS[(t)]
\end{align}
with
\begin{align}
\PLUS[(t)]&=e^{-i\phi(t)\sin^2{\THETA}}\cos{\THETA}\EX +
e^{i\phi(t)\cos^2{\THETA}}\sin{\THETA}\GR \\
\MINUS[(t)]&=-e^{-i\phi(t)\cos^2{\THETA}}\sin{\THETA}\EX +
e^{i\phi(t)\sin^2{\THETA}}\cos{\THETA}\GR
\end{align}
and
\begin{align}
\sin{\THETA}&=\sqrt{\frac{E-\half\hbar\Delta}{2E}}
\label{theta1}\\
\cos{\THETA}&=\sqrt{\frac{E+\half\hbar\Delta}{2E}} \label{theta2}
\end{align}
\EX and \GR denote the excited state and the ground state of the two
level atom, respectively. Note that \PLUS and \MINUS satisfy the
parallel transport condition as required in Section~\ref{MathMethods}.
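Both the eigenvalue equations and the parallel transport condition can be verified numerically. The sketch below (with $\hbar = 1$; the values of $\Delta$, $\Omega$ and $\phi$ are assumptions chosen for illustration) builds $H$ and the states \PLUS and \MINUS in the $(\EX,\GR)$ basis:

```python
import numpy as np

# Illustrative parameters (hbar = 1); the numerical values are assumptions
Delta, Omega, phi = 0.7, 1.3, 0.4
E = np.sqrt(Omega**2 + 0.25 * Delta**2)       # |eigenenergy| / hbar

H = np.array([[Delta / 2, Omega * np.exp(-1j * phi)],
              [Omega * np.exp(1j * phi), -Delta / 2]])

s = np.sqrt((E - Delta / 2) / (2 * E))         # sin(Theta)
c = np.sqrt((E + Delta / 2) / (2 * E))         # cos(Theta)

def plus_of(ph):   # |+(t)> in the (|e>, |g>) basis, as a function of phi(t)
    return np.array([np.exp(-1j * ph * s**2) * c, np.exp(1j * ph * c**2) * s])

def minus_of(ph):  # |-(t)>
    return np.array([-np.exp(-1j * ph * c**2) * s, np.exp(1j * ph * s**2) * c])

plus, minus = plus_of(phi), minus_of(phi)
assert np.allclose(H @ plus, E * plus)         # H|+> = +E|+>
assert np.allclose(H @ minus, -E * minus)      # H|-> = -E|->

# Parallel transport: <+|d/dphi|+> = 0, checked by central differences
dphi = 1e-6
deriv = (plus_of(phi + dphi) - plus_of(phi - dphi)) / (2 * dphi)
assert abs(np.vdot(plus, deriv)) < 1e-6
```

The same finite-difference check applied to \MINUS confirms its parallel transport as well.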
Furthermore we want to include spontaneous emission as a source of
dissipation. In the weak coupling limit the master equation is known
to be
\begin{equation}
\dot{\rho} (t) = \ih [H(t),\rho (t)]
+\frac{1}{2}\mathcal{L}_\Gamma [\rho(t)]
\label{emission1}
\end{equation}
for
\begin{equation}
\Gamma = \sqrt{\lambda}\:\sigma_- =
\sqrt{\lambda}\begin{pmatrix}0&0\\1&0\end{pmatrix}.
\end{equation}
Here $\lambda$ denotes the spontaneous emission rate. The task here is
to solve \eqref{emission1} in the adiabatic and weak damping limit. As
in Section \ref{MathMethods} we define the operator $A(t)$ by
$A(t)\PLUSM[(0)]=\PLUSM[(t)]$. After the transformation of
\eqref{emission1} with $A^\dagger(t)$, the Hamiltonian is not diagonal.
Hence we carry out another transformation with an operator $B^\dagger$
that is defined by
\begin{align}
B^\dagger |+(0)\rangle =& |e\rangle\notag\\
B^\dagger |-(0)\rangle =& |g\rangle.
\end{align}
As $B^\dagger$ is time-independent, we obtain no term $B^\dagger\dot{B}$ in the
master equation. We can carry out both transformations together with
the operator $C^\dagger(t)=B^\dagger A^\dagger(t)$, which turns out to
be
\begin{equation}
C(t)=
\begin{pmatrix}
e^{-i\phi(t)\sin^2{\THETA}}\cos{\THETA} &
-e^{-i\phi(t)\cos^2{\THETA}}\sin{\THETA} \\
e^{i\phi(t)\cos^2{\THETA}}\sin{\THETA} &
e^{i\phi(t)\sin^2{\THETA}}\cos{\THETA}
\end{pmatrix}.
\end{equation}
The master equation for $\rho^C(t)=C^\dagger(t)\rho(t)C(t)$ is, from
\eqref{meqn3} and \eqref{emission1},
\begin{equation}
\dot{\rho}^C (t) = \ih[H^C,\rho^C
(t)]+\frac{1}{2}\mathcal{L}_{\Gamma^C (t)} [\rho^C(t)]
\label{emission2}
\end{equation}
for
\begin{equation}
H^C = C^\dagger(t)H(t)C(t) =
\begin{pmatrix}
E&0\\0&-E
\end{pmatrix}
\end{equation}
and
\begin{align}
\Gamma^C(t) = & C^\dagger(t)\Gamma C(t) \notag\\
= &
\sqrt{\lambda}
\begin{pmatrix}
\cos{\THETA}\sin{\THETA} &
\sin^2{\THETA}e^{-i\phi(t)\cos\theta} \\
\cos^2{\THETA}e^{i\phi(t)\cos\theta}
& -\cos{\THETA}\sin{\THETA}
\end{pmatrix}.
\end{align}
We show in the Appendix that only the absolute values of the
off-diagonal elements of the
dissipation operator $\Gamma^C(t)$ contribute to the solution of
\eqref{emission2}. Using only the absolute values of these elements
does not simplify the calculation, but multiplying them by a
time-independent factor $e^{i\beta}$ and averaging over all
$0 < \beta < 2\pi$ does.
Doing so, we first rewrite \eqref{emission2}
\begin{align}
\dot{\rho}^C (t) =& \ih [H^C,\rho^C (t)]
+ \frac{1}{4\pi}\int_0^{2\pi} \mathcal{L}_{\Gamma^C}
[\rho^C(t)]
\,\text{d}\beta . \label{emission2a}
\end{align}
In \eqref{emission2a} we substitute the Lindblad operator $\Gamma^C$
with the new Lindblad operators $\Gamma^C_\beta$ with
\begin{equation}
\Gamma^C_\beta = \sqrt{\lambda}
\begin{pmatrix}
\cos{\THETA}\sin{\THETA} & \sin^2{\THETA}e^{-i\beta} \\
\cos^2{\THETA}e^{i\beta} & -\cos{\THETA}\sin{\THETA}
\end{pmatrix},\qquad 0\leq \beta < 2\pi
\end{equation}
as explained previously. Finally we get a new master equation, which
is equivalent to \eqref{emission2} in the adiabatic and weak damping
limit:
\begin{align}
\dot{\rho}^C (t) =& \ih [H^C,\rho^C (t)]
+ \frac{1}{4\pi}\int_0^{2\pi} \mathcal{L}_{\Gamma^C_\beta (t)}
[\rho^C(t)]
\,\text{d}\beta \label{emission3}
\end{align}
Equation~\eqref{emission3} may seem more complicated than
\eqref{emission2}, but if we evaluate the integrand in
\eqref{emission3} we see that many contributions cancel upon
integration. If we write
\begin{equation}
\rho^C(t)=\begin{pmatrix}a&b\\b^*&1-a\end{pmatrix}
\end{equation}
then \eqref{emission3} reduces to the two independent equations
\begin{align}
\dot{a}(t) &=
\lambda\Bigg[\sin^4{\THETA}
-a\left(\sin^4{\THETA}+\cos^4{\THETA}\right)\Bigg]\\
\dot{b}(t) &= -\frac{2i}{\hbar}Eb - \lambda
b\left(\sin^2{\THETA}\cos^2{\THETA}+\frac{1}{2} \right)
\end{align}
with the solutions
\begin{align}
a(t) =& \left(
a(0)-\frac{\sin^4{\THETA}}{\sin^4{\THETA}+\cos^4{\THETA}}\right)
e^{-\lambda t \left( \sin^4{\THETA}+\cos^4{\THETA} \right)} ,
\nonumber \\ &
+\frac{\sin^4{\THETA}}{\sin^4{\THETA}+\cos^4{\THETA}} \\
b(t) =& b(0) e^{-2i \frac{Et}{\hbar}} e^{-\lambda t
\left(\sin^2{\THETA}\cos^2{\THETA}+\half \right)}.
\end{align}
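As a consistency check (not part of the paper), one can verify by central differences that these closed forms solve the two decoupled equations; the parameter values below, with $\hbar = 1$, are assumptions chosen for illustration:

```python
import numpy as np

# Assumed sample values (hbar = 1): decay rate, energy, sin^2(Theta)
lam, E, s2 = 0.05, 1.2, 0.3
c2 = 1.0 - s2
a0, b0 = 0.8, 0.1 + 0.2j

def a(t):
    ainf = s2**2 / (s2**2 + c2**2)
    return (a0 - ainf) * np.exp(-lam * t * (s2**2 + c2**2)) + ainf

def b(t):
    return b0 * np.exp(-2j * E * t) * np.exp(-lam * t * (s2 * c2 + 0.5))

t, dt = 0.7, 1e-6
da = (a(t + dt) - a(t - dt)) / (2 * dt)   # central-difference derivatives
db = (b(t + dt) - b(t - dt)) / (2 * dt)

# Right-hand sides of the two decoupled equations for a and b
rhs_a = lam * (s2**2 - a(t) * (s2**2 + c2**2))
rhs_b = -2j * E * b(t) - lam * b(t) * (s2 * c2 + 0.5)

assert np.isclose(da, rhs_a, atol=1e-5)
assert np.isclose(db, rhs_b, atol=1e-5)
```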
As the last step we need to evaluate
$\rho(t)=C(t)\rho^C(t)C^\dagger(t)$. As the inversion provides an
operational quantity for inferring the geometric phase by measuring the
relative proportion of ground vs excited states, and because the terms
become rather long, we only write the inversion $w(t)$,
which is
\begin{align}
w(t)=&\rho_{11}-\rho_{22}= (2a(t)-1)\cos\theta
\nonumber \\ &
-2\sin\theta\text{Re}\left(
b(t)e^{i\phi(t)\cos\theta}\right) \label{inversion}
\end{align}
To compare this result with that found by Ellinas et al \cite{Ell89}, we
set $\rho(0)=\half + p\:\sigma_3$ with $|p|\leq\half$ and substitute
for $\sin{\THETA}$ and $\cos{\THETA}$ from \eqref{theta1} and
\eqref{theta2}, respectively. Furthermore we define
\begin{equation}
K=\frac{2\Omega^2 +\Delta^2}{4\Omega^2 +\Delta^2}\text{ and }
G=\frac{6\Omega^2+\Delta^2}{8\Omega^2+2\Delta^2}.
\end{equation}
If we furthermore consider the inversion at a time $T$ at the end of
the cyclic evolution we finally get for the inversion:
\begin{widetext}
\begin{align}
w(T) = 2p \Bigg( \left( \frac{\Delta\hbar}{2E}\right)^2
\left(\frac{2Kp+1}{2Kp}e^{-K\lambda T}-\frac{1}{2Kp} \right)
+ \cos{\left(\frac{2ET}{\hbar}
-2\pi\frac{\hbar\Delta}{2E}\right) } e^{-G\lambda T}
\left( \frac{\Omega\hbar}{E}\right)^2 \Bigg)\label{Ell89Result}
\end{align}
which is the same as that derived in \cite{Ell89}. The dynamic and
geometric phases are found in the cosine term in this expression: the
difference of the dynamic phases of the eigenstates of the
Hamiltonian is given by $2ET/\hbar$, and the difference of the
geometric phases of these eigenstates (for $\phi(T)=2\pi$) is given by
$2\pi \hbar \Delta/2E$. This term is diminished by a damping factor
$\exp(-G\lambda T)$, which can influence the observability of the
geometric phase effect on the inversion. The issues of time scales to
observe the effect of the geometric phase have been discussed in
\cite{Ell89}. However, for the present, we wish to point out that the
result above has been derived here by use of a simple transformation
into a rotating frame. This is to be contrasted with the much more
complicated approach of \cite{Ell89}, based on calculating the
eigenmatrices of the Liouvillian.
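The route from the transformed solution to the closed-form inversion can itself be verified numerically. The sketch below (with $\hbar = 1$; all parameter values are assumptions) builds $a(0)=\frac{1}{2}+p\cos\theta$ and $b(0)=-p\sin\theta$ from $\rho(0)=\half + p\:\sigma_3$ with $\phi(0)=0$, propagates them with the solutions above, and checks that the inversion formula reproduces the closed form exactly:

```python
import numpy as np

# Assumed parameters (hbar = 1); p parametrizes rho(0) = 1/2 + p*sigma_3
Delta, Omega, lam, p, T = 0.9, 1.4, 0.02, 0.3, 5.0
E = np.sqrt(Omega**2 + 0.25 * Delta**2)
cos_th, sin_th = Delta / (2 * E), Omega / E      # cos(theta), sin(theta)
s2, c2 = (1 - cos_th) / 2, (1 + cos_th) / 2      # sin^2(Theta), cos^2(Theta)

K = (2 * Omega**2 + Delta**2) / (4 * Omega**2 + Delta**2)
G = (6 * Omega**2 + Delta**2) / (8 * Omega**2 + 2 * Delta**2)
assert np.isclose(s2**2 + c2**2, K) and np.isclose(s2 * c2 + 0.5, G)

# Initial conditions in the rotated picture, rho^C(0) = C^dag(0) rho(0) C(0)
a0 = 0.5 + p * cos_th
b0 = -p * sin_th

ainf = s2**2 / (s2**2 + c2**2)
aT = (a0 - ainf) * np.exp(-lam * T * (s2**2 + c2**2)) + ainf
bT = b0 * np.exp(-2j * E * T) * np.exp(-lam * T * (s2 * c2 + 0.5))

# Inversion w(T) from the transformed solution, with phi(T) = 2*pi
wT = (2 * aT - 1) * cos_th - 2 * sin_th * (bT * np.exp(2j * np.pi * cos_th)).real

# Closed-form result for the inversion at time T
wT_closed = 2 * p * (
    cos_th**2 * ((2 * K * p + 1) / (2 * K * p) * np.exp(-K * lam * T)
                 - 1 / (2 * K * p))
    + np.cos(2 * E * T - 2 * np.pi * cos_th) * np.exp(-G * lam * T) * sin_th**2)

assert np.isclose(wT, wT_closed)
```

The first assertion also confirms the identities $\sin^4\THETA+\cos^4\THETA = K$ and $\sin^2\THETA\cos^2\THETA+\frac{1}{2} = G$ used implicitly in the text.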
\subsection{Optical Resonance with Dephasing}
As in the previous subsection we treat a two level atom driven by a
resonant electromagnetic field. This time we assume the damping is
due to dephasing that occurs as a consequence of phase changing
collisions, which changes the relative phase between the excited state
and the ground state of the atom (in contrast to strong collisions
that change populations of eigenstates). Since the phase change can
vary for each collision we have to consider a one dimensional
manifold of dissipation operators
\begin{equation}
\Gamma_\alpha = \sqrt{\lambda(\alpha)}
\begin{pmatrix}
1&0\\0&e^{i\alpha}
\end{pmatrix}
,\qquad -\pi < \alpha <\pi ,
\end{equation}
where $\lambda(\alpha)$ is the dephasing rate density. Hence we get
the master equation
\begin{align}
\dot{\rho} (t) &= \ih [H(t),\rho (t)] + \int_{-\pi}^{\pi} \left(
\Gamma_\alpha \rho(t) \Gamma^\dagger_\alpha -
\lambda(\alpha)\rho(t) \right) \,\text{d} \alpha
\label{dephasing1}
\end{align}
with the Hamiltonian $H(t)$ from the previous subsection with a slowly
changing phase $\phi(t)$ again. Thus, we get the same parallel
transported eigenstates of the Hamiltonian and we can start by carrying
out the same transformation as in the previous subsection. In the
adiabatic and weak damping limit we obtain the transformed master
equation
\begin{align}
\dot{\rho}^C (t) =& \ih [H^C,\rho^C (t)]
+ \int_{-\pi}^{\pi} \left(
\Gamma^C_\alpha(t) \rho^C(t) \Gamma^{C\dagger}_\alpha(t) -
\lambda(\alpha)\rho^C(t) \right) \,\text{d} \alpha \label{dephasing2}
\end{align}
for
\begin{equation}
H^C = C^\dagger(t)H(t)C(t) =
\begin{pmatrix}
E&0\\0&-E
\end{pmatrix}
\end{equation}
and
\begin{align}
\Gamma^C_\alpha(t) &= C^\dagger(t)\Gamma_\alpha C(t)
=
\sqrt{\lambda(\alpha)}
\begin{pmatrix}
\cos^2{\THETA}+e^{i\alpha}\sin^2{\THETA} & i
\sin\theta\sin \frac{\alpha}{2} e^{i
(-\phi(t)\cos\theta+\frac{\alpha}{2})} \\ i
\sin\theta\sin \frac{\alpha}{2} e^{i
(\phi(t)\cos\theta+\frac{\alpha}{2})} & \sin^2{\THETA} +
e^{i\alpha}\cos^2\THETA
\end{pmatrix}.
\end{align}
Now everything is much the same as in the previous subsection.
Finally we find for the components $a$ and $b$
of the density operator $\rho^C(t)$ the decoupled differential
equations
\begin{align}
\dot{a} &=-4fa\cos^2\THETA\sin^2\THETA +
2f\cos^2\THETA\sin^2\THETA \\
\dot{b} &=i\left(\frac{-2E}{\hbar} +
g\left(\sin^4\THETA - \cos^4\THETA \right)\right)b -
fb\left(\sin^4\THETA + \cos^4\THETA \right).
\end{align}
\end{widetext}
where
\begin{equation}
f=\int_{-\pi}^{+\pi} \lambda(\alpha)(1-\cos \alpha) \,\text{d}\alpha
\label{def:f}
\end{equation}
and
\begin{equation}
g=\int_{-\pi}^{+\pi}
\lambda(\alpha)\sin\alpha \,\text{d}\alpha
\label{def:g}
\end{equation}
are properties of the model describing the damping collisions. The
solutions of these equations are
\begin{align}
a(t) &= \left( a(0) - \half \right)
e^{-4ft\cos^2\THETA\sin^2\THETA } +\half \label{a}, \\
b(t) &= b(0) e^{i \left( \frac{-2E}{\hbar}+g\left(\sin^4 \THETA
-\cos^4\THETA \right)\right)t} e^{-\left(\sin^4 \THETA
+\cos^4\THETA \right) ft}. \label{b}
\end{align}
To calculate the inversion~$w(t)$ we can take \eqref{inversion} and
substitute $a(t)$ and $b(t)$ with \eqref{a} and \eqref{b},
respectively. Using \eqref{theta1} and \eqref{theta2} as well as the
previous definition of $K$ we finally find at time $t=T$:
\begin{align}
w(T) =& 2p\Bigg( \left( \frac{\hbar\Delta}{2E}\right)^2
e^{-\left(\frac{\Omega\hbar}{E}\right)^2fT} + \half
\left(\frac{\Omega\hbar}{E}\right)^2 e^{-KfT}\nonumber\\
&\times\cos \left( \left( g\frac{\hbar\Delta}{2E} -
\frac{2E}{\hbar} \right)T + 2\pi\frac{\hbar\Delta}{2E}
\right)\Bigg). \label{dephasing3}
\end{align}
This result is similar to that found in the case of spontaneous
emission in \eqref{Ell89Result}. There appears in the cosine term in
\eqref{dephasing3} the difference of the dynamic phases of the
eigenstates of the Hamiltonian, as given by $2ET/\hbar$, and the
difference of the geometric phases of these eigenstates (for
$\phi(T)=2\pi$), given by $2\pi \hbar \Delta/2E$. This term is also
diminished by a damping factor $\exp(-KfT)$ which influences the
observability of the geometric phase effect on the inversion. Both
this term and an additional contribution of a shift in the Rabi
frequency by $g\frac{\hbar\Delta}{2E}$ arise through the presence of
dephasing, though the latter will only appear if the dephasing rate
density $\lambda(\alpha)$ is not symmetric.
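These solutions can be checked against the decoupled equations with $f$ and $g$ computed by numerical quadrature. The sketch below uses an assumed dephasing rate density (asymmetric, so that $g \neq 0$) and assumed values for $E$ and $\sin^2\THETA$, with $\hbar = 1$:

```python
import numpy as np

# Assumed dephasing rate density lambda(alpha), asymmetric so that g != 0
alphas = np.linspace(-np.pi, np.pi, 20001)[:-1]   # periodic grid
d = 2 * np.pi / 20000
lam_vals = 0.02 * (1.0 + 0.5 * np.sin(alphas) + 0.3 * np.cos(alphas))

f = np.sum(lam_vals * (1 - np.cos(alphas))) * d   # definition of f
g = np.sum(lam_vals * np.sin(alphas)) * d         # definition of g

E, s2 = 1.1, 0.35          # hbar = 1; s2 = sin^2(Theta); assumed values
c2 = 1.0 - s2
a0, b0 = 0.75, 0.2 - 0.1j

def a(t):                  # closed-form solution for a(t)
    return (a0 - 0.5) * np.exp(-4 * f * t * s2 * c2) + 0.5

def b(t):                  # closed-form solution for b(t)
    return (b0 * np.exp(1j * (-2 * E + g * (s2**2 - c2**2)) * t)
            * np.exp(-(s2**2 + c2**2) * f * t))

t, dt = 1.3, 1e-6
da = (a(t + dt) - a(t - dt)) / (2 * dt)           # central differences
db = (b(t + dt) - b(t - dt)) / (2 * dt)

assert np.isclose(da, -4 * f * a(t) * s2 * c2 + 2 * f * s2 * c2, atol=1e-6)
assert np.isclose(db, (1j * (-2 * E + g * (s2**2 - c2**2))
                       - f * (s2**2 + c2**2)) * b(t), atol=1e-6)
```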
\subsection{Spin in Magnetic Field with Dephasing}
As another example we consider the simple model of a spin-$\half$
particle in a magnetic field with constant field strength, which
demonstrates how to remove the dynamic phase in a standard model. To
induce a geometric phase we change the direction of the magnetic field
slowly in comparison to $E/\hbar$. As a source of decoherence we
consider dephasing which is defined by
\begin{align}
\Gamma_\alpha(t) \EX[(t)] &= \sqrt{\lambda(\alpha)}\EX[(t)] \\
\Gamma_\alpha(t) \GR[(t)] &=
\sqrt{\lambda(\alpha)}e^{i\alpha}\GR[(t)].
\end{align}
\GR[(t)] and \EX[(t)] are the parallel transported eigenstates of the
Hamiltonian with spin parallel and antiparallel to the magnetic field,
respectively. Further $\lambda(\alpha)$ is the dephasing rate
density. Note that dephasing does not change the energy of the
spin-system. It is important to distinguish between this model and
the two level atom in the previous subsection. Here the Lindblad
operators are defined in the basis of the time-dependent eigenstates
of the Hamiltonian whereas before the Lindblad operators have been
defined in the basis of the excited state and the ground state of the two
level atom, which are independent of the properties of the applied
laser field and hence are not the eigenstates of the Hamiltonian. Such
dephasing operators could be realized by random fluctuations of the
field strength of the applied magnetic field. Since the Hamiltonian
changes in time, the dephasing operators have to be time-dependent,
too. As in Section \ref{MathMethods} we define the operator
\begin{align}
A(t) |e(0)\rangle =&|e(t)\rangle\notag\\
A(t)|g(0)\rangle =&|g(t)\rangle
\end{align}
and find, from \eqref{meqn3}, for $\rho^A = A^\dagger(t) \rho(t) A(t)$
\begin{align}
\dot{\rho}^A(t) =& \ih \left[
\begin{pmatrix}
E&0\\ 0&-E
\end{pmatrix},\rho^A(t) \right]\notag\\
&+ \frac{1}{2}\int_{-\pi}^{\pi} \left[ 2
\begin{pmatrix}
1&0\\0&e^{i\alpha}
\end{pmatrix}
\rho^A(t)
\begin{pmatrix}
1&0\\0&e^{-i\alpha}
\end{pmatrix}
- 2\rho^A(t)
\right]\nonumber \\ &\times\tilde{\lambda}(\alpha) \,\text{d}\alpha \notag \\
=& \begin{pmatrix}
0 & \frac{-2i E}{\hbar} \rho^A_{12}(t) \\
\frac{2i E}{\hbar} \rho^A_{21}(t) & 0
\end{pmatrix}
\nonumber \\ &
+ \int_{-\pi}^{\pi}
\begin{pmatrix}
0 & (e^{-i\alpha}-1) \rho^A_{12}(t) \\
(e^{i\alpha}-1)\rho^A_{21}(t) & 0
\end{pmatrix} \nonumber \\ & \times \tilde{\lambda}(\alpha)
\,\text{d}\alpha. \label{spin2}
\end{align}
Here $\rho^A_{ij}(t)$ denotes the components of $\rho^A(t)$. The
solution of \eqref{spin2} is
\begin{align}
\rho^A_{11}(t) &=
\rho^A_{11}(0) \\
\rho^A_{12}(t) &= \rho^A_{12}(0) e^{-i\left( \frac{2
E}{\hbar} +g \right)t} e^{ -ft }
\end{align}
with $f$ and $g$ defined in Eqs.~(\ref{def:f}) and (\ref{def:g}),
respectively. As the last step we need to calculate $\rho(t) =
A(t)\rho^A(t)A^\dagger(t)$. If the evolution of the Hamiltonian is
cyclic we have $A(t)=\text{diag}(e^{i\varphi},e^{-i\varphi})$ where
$\varphi$ is the geometric phase for \EX. The geometric phase is half
of the solid angle enclosed by the path that \EX[(t)] traces out on the
Bloch sphere \cite{Berry}. This is equivalent to half of the solid
angle enclosed by the path determined by the direction of the magnetic
field. Hence we finally get for the components of the density
operator after the Hamiltonian undergoes a closed loop
\begin{align}
\rho_{11}(T) &= \rho_{11}(0) \nonumber \\
\rho_{12}(T) &= \rho_{12}(0) e^{-i\left( \frac{2
ET}{\hbar} +gT - 2\varphi
\right)} e^{ -fT } \label{rho_12}
\end{align}
In the latter equation we can see a phase change due to the energy
difference of the system, an additional phase change due to the
dephasing and the geometric phase. Furthermore we see how the absolute
value of the off-diagonal element of the density operator decreases
exponentially in time because of the dephasing.
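This closed-form solution can be checked numerically. The Python sketch below uses a hypothetical rate density $\tilde{\lambda}(\alpha)$ and assumes $f=\int(1-\cos\alpha)\,\tilde{\lambda}(\alpha)\,\text{d}\alpha$ and $g=\int\sin\alpha\,\tilde{\lambda}(\alpha)\,\text{d}\alpha$, since Eqs.~(\ref{def:f}) and (\ref{def:g}) lie outside this section; it reads the complex decay rate directly off Eq.~\eqref{spin2} and compares the two expressions for $\rho^A_{12}(t)$.

```python
import numpy as np

hbar, E, lam0 = 1.0, 1.3, 0.05

# hypothetical 2*pi-periodic dephasing rate density lambda~(alpha)
alpha = np.linspace(-np.pi, np.pi, 4000, endpoint=False)
dalpha = alpha[1] - alpha[0]
lam = lam0 * (1.0 + 0.5 * np.sin(alpha)) / (2 * np.pi)

# assumed definitions of f and g (Eqs. (def:f), (def:g) are outside this excerpt)
f = np.sum((1 - np.cos(alpha)) * lam) * dalpha
g = np.sum(np.sin(alpha) * lam) * dalpha

# complex decay rate read off Eq. (spin2):
#   d rho12/dt = [-2iE/hbar + integral of (e^{-i alpha}-1) lambda~(alpha)] rho12
c = -2j * E / hbar + np.sum((np.exp(-1j * alpha) - 1) * lam) * dalpha

t, rho12_0 = 3.7, 0.5 + 0.0j
rho12_ode = rho12_0 * np.exp(c * t)                                   # direct solution
rho12_claim = rho12_0 * np.exp(-1j * (2 * E / hbar + g) * t - f * t)  # quoted solution
assert abs(rho12_ode - rho12_claim) < 1e-10
```

The agreement rests on the elementary identity $\int(e^{-i\alpha}-1)\tilde{\lambda}\,\text{d}\alpha = -f - ig$.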
Our task now is to remove the dynamic phase. We do a
$\sigma_x$-transformation in our system and then in the time
interval $[T,2T]$ we drive the direction of the magnetic field
around the same loop as before but backwards:
$\vec{B}(T+t)=\vec{B}(T-t)$. The components of the density operator
$\rho'(T)$ after the $\sigma_x$-transformation are
\begin{align}
\rho'_{11}(T) &= \rho_{22}(T) = 1-\rho_{11}(0) \nonumber\\
\rho'_{12}(T) &= \rho_{21}(T) = \rho_{12}^*(0)
e^{i\left(
\frac{2 ET}{\hbar} +Tg - 2\varphi \right)}
e^{-Tf}\label{forgot}
\end{align}
When we drive the magnetic field backwards, the parallel
transported eigenstates are $\EX[(T+t)]=\EX[(T-t)]$ and
$\GR[(T+t)]=\GR[(T-t)]$.
Now we define the operator
\begin{align}
A'(T+t)|e(T)\rangle &= |e(T+t)\rangle = |e(T-t)\rangle\notag\\
A'(T+t)|g(T)\rangle &= |g(T+t)\rangle = |g(T-t)\rangle
\label{second}
\end{align}
which parallel transports the eigenstates of the Hamiltonian
$H(T+t)$. Again we transform the density operator
$\rho'^A(T+t) = A'^\dagger(T+t) \rho'(T+t) A'(T+t)$ and find for the
components of $\rho'^A(2T)$ \eqref{rho_12}
\begin{align}
\rho'^A_{11}(2T) &=\rho'_{11}(T) = 1-\rho_{11}(0) \nonumber\\
\rho'^A_{12}(2T) &= \rho'_{12}(T) e^{-i\left( \frac{2
E}{\hbar} +g\right)T} e^{ -Tf} = \rho_{12}(0)^* e^{-2i
\varphi} e^{ -2Tf}
\end{align}
where in the last step \eqref{forgot} is used. Now we need to find
$\rho'(2T) = A'(2T)\rho'^A(2T)A'^\dagger(2T)$. From \eqref{second},
and
because now we drive the loop backwards and hence get the same
geometric phase up to a sign, it follows that
$A'(2T)=A^\dagger(T)=\text{diag}(e^{-i\varphi},e^{+i\varphi})$.
After another $\sigma_x$-transformation we finally get the density
operator
\begin{equation}
\rho(2T)= \begin{pmatrix} \rho_{11}(0) & \rho_{12}(0) e^{4i
\varphi} e^{ -2Tf } \\ \rho_{12}(0)^* e^{-4i \varphi} e^{
-2Tf } & 1-\rho_{11}(0) \end{pmatrix}.
\end{equation}
Hence we see that not only the dynamic phase of the Hamiltonian but
also the phase shift caused by dephasing is removed. What remains is
twice the difference of the geometric phases of the ground state and
the excited state. This geometric effect appears in the off-diagonal
components of the density operator and is damped out exponentially in
time through dephasing.
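The whole echo protocol can likewise be verified with a few matrix operations. The sketch below (with arbitrary illustrative parameter values of our own choosing) propagates an initial density matrix through the forward loop, the two $\sigma_x$ flips and the reversed loop, and confirms the final form of $\rho(2T)$ above.

```python
import numpy as np

# illustrative parameters (not taken from the paper)
E, hbar, f, g, phi, T = 1.1, 1.0, 0.07, 0.03, 0.6, 2.0
sx = np.array([[0, 1], [1, 0]], dtype=complex)

def propagate(rho, tau):
    """Dephasing evolution in the rotating frame, Eq. (spin2):
    populations frozen, coherence decays with rate f and extra phase g."""
    out = rho.copy()
    out[0, 1] *= np.exp((-1j * (2 * E / hbar + g) - f) * tau)
    out[1, 0] = np.conj(out[0, 1])
    return out

rho0 = np.array([[0.7, 0.2 + 0.1j], [0.2 - 0.1j, 0.3]])

# forward loop: rotating-frame evolution + cyclic holonomy A(T) = diag(e^{i phi}, e^{-i phi})
A = np.diag([np.exp(1j * phi), np.exp(-1j * phi)])
rho_T = A @ propagate(rho0, T) @ A.conj().T

# echo: sigma_x flip, reversed loop with holonomy A'(2T) = A(T)^dagger, sigma_x flip back
Ap = A.conj().T
rho_2T = sx @ (Ap @ propagate(sx @ rho_T @ sx, T) @ Ap.conj().T) @ sx

expected12 = rho0[0, 1] * np.exp(4j * phi) * np.exp(-2 * f * T)
assert abs(rho_2T[0, 0] - rho0[0, 0]) < 1e-12   # populations restored
assert abs(rho_2T[0, 1] - expected12) < 1e-12   # dynamic phase gone, 4*phi survives
```

As claimed, the dynamic phase $2ET/\hbar$ and the dephasing phase $gT$ cancel between the two halves of the protocol, while the geometric contribution doubles.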
\section{Generalizations\label{newdirections} }
\subsection{Non-Cyclic Evolution}
To consider a non-cyclic evolution we first outline Pati's analysis
\cite{Pati}. If $H(T)=H(0)$, Pati compared the phase of the parallel
transported eigenstate of the Hamiltonian at time $t=T$, \N[(T)] with
the phase of the eigenstate at time $t=0$, \N[(0)]. If the Hamiltonian
does not undergo a closed loop, i.e. $H(T)\neq H(0)$, then \N[(T)] does
not equal \N[(0)] up to a geometric phase. Comparing the phases of states
that differ by more than a phase is not straightforward. Pati
introduced a reference section $\widetilde{\N[(t)]}$ which is supported
by eigenstates of the Hamiltonian $H(t)$. The phase of
$\widetilde{\N[(t)]}$ is fixed by the requirement to make
$\widetilde{\N[(t)]}$ in phase with \N[(0)] as defined by means of the
work of \cite{Pan56}, i.e. $\NN[(0)] \widetilde{\N[(t)]} =0$. Then
$\widetilde{\N[(T)]}$ and \N[(T)] differ only by a phase and this phase
is defined to be the generalization of the geometric phase to non-cyclic
evolution.
We use this idea and generalize it to open systems. First one has to
calculate the density operator $\rho^A(T)$ in the rotating axis
representation as in Section~\ref{MathMethods}. Instead of transforming back to
the original picture we transform to the picture given by the
reference section introduced in \cite{Pati}. We define the operator
$\tilde{A}(T)$ by
\begin{equation}
\widetilde{\N[(T)]} = \tilde{A}(T)\N[(0)]
\end{equation}
The density operator in this new picture is
\begin{align}
\rho^{\tilde{A}}(T) =& \tilde{A}^\dagger(T) A(T) \rho^A(T)
A^\dagger(T) \tilde{A}(T) \nonumber \\ =& \text{diag}
(e^{i\varphi_1}\! , \ldots ,e^{i\varphi_N})
\rho^A(T)\text{diag}(e^{\! -i\varphi_1}\! , \ldots ,e^{\!
-i\varphi_N})
\end{align}
and $\tilde{A}^\dagger(T) A(T) = \text{diag} (e^{i\varphi_1}, \ldots
,e^{i\varphi_N}) $ is the generalized holonomy transformation with
respect to the reference section $\widetilde{\N[(T)]}$.
\subsection{Non-Abelian Holonomies}
Again we consider the master equation \eqref{meqn1} with an
adiabatically changing Hamiltonian. For simplicity we restrict to a
cyclic Hamiltonian $H(T)=H(0)$.
Until now we assumed the eigenenergies of the Hamiltonian to be
non-degenerate. However, if the eigenvalues are degenerate we expect
to get non-Abelian holonomies \cite{WilczekZee, Anandan} as the
generalization of the geometric phase. In this case we have the
eigenvalue equation
\begin{equation}
H(t) \N[_m(t)] = E_n(t) \N[_m(t)]
\end{equation}
in which $n=1,\cdots ,N$ ; $m=1,\cdots ,M_n$ and $M_n$ is the degree of
degeneracy of the subspace of the Hamiltonian with energy $E_n$. The
$M_n$ are required to be constant in time, i.e.~we do not allow any
level crossings of the Hamiltonian $H(t)$. The \N[_m(t)] are now
assumed to satisfy the modified parallel transport condition
\cite{Anandan}
\begin{equation}
\NN[_m(t)]|\der \N[_{m'}(t)] =0 \qquad \forall\: m,m'=1,
\cdots ,M_n.
\label{new}
\end{equation}
Now we can define the operator $A(t)$ by
\begin{equation}
A(t) \N[_m(0)]=\N[_m(t)]
\end{equation}
and with its help we transform the master equation in the rotating
axis representation to remove the time dependence of the eigenspaces
of the Hamiltonian analogous to Section~\ref{MathMethods}. Doing this
we get the new master equation \eqref{meqn2} for
$\rho^A(t)=A^\dagger(t)\rho(t)A(t)$ as before. Again it can be shown
that the terms $\rho^A(t)A^\dagger(t)\dot{A}(t)$ and
$A^\dagger(t)\dot{A}(t)\rho^A(t)$ can be neglected if the \N[_m(t)]
satisfy the modified parallel transport condition \eqref{new} in the
adiabatic and weak damping limit. This justifies the condition
\eqref{new}. Since the proof for this is much the same as for
non-degenerate Hamiltonians as done in the Appendix, we do not carry
out the proof in this case. Now we have to solve the resulting master
equation, which represents the dynamics. Finally we have to transform
back to the original picture:
\begin{equation}
\rho(T)=A(T)\rho^A(T)A^\dagger(T).
\end{equation}
We obtain the non-Abelian holonomy $A(T)\in U(M_1)\otimes \cdots
\otimes U(M_N)$ similar to the way we obtained the geometric phase for
non-degenerate systems.
\section{Conclusions and Future Directions\label{conclusions}}
We have introduced the rotating axis transformation, in which
parallel transport of the eigenstates of the Hamiltonian plays an
important role, to study the geometric phase for an adiabatically
evolving multilevel system. This transformation was shown to be
particularly useful in simplifying the calculation of open-system
evolution, described by a master equation of the
Lindblad form, as it allows an easy separation of dynamic and geometric
phases. These advantages were illustrated by applying it to optical
resonance with spontaneous emission, where we obtain known results but
more easily. The method was then used to quickly and easily study the
effects of the geometric phase in a number of new problems. In one
application we show explicitly how to remove the dynamic phase.
Although, in our applications, we concentrated on Abelian holonomies
for nondegenerate systems, the generalization to non-Abelian holonomies
for degenerate Hamiltonians and to non-cyclic evolution is
straightforward.
\acknowledgments
BCS acknowledges financial support from an Australian DEST IAP grant to
support participation in the European Fifth Framework project QUPRODIS
and from Alberta's informatics Circle of Research Excellence (iCORE) as
well as valuable discussions with S.~Ghose and K.-P.~Marzlin.
\appendix
\section{Neglecting $A^\dagger (t) \dot{A}(t)$ in the adiabatic
approximation}
Here we prove that the terms containing $A^\dagger (t) \dot{A}(t)$ in
\eqref{meqn2} can be neglected in the adiabatic approximation. The
proof will be analogous to the proof of the adiabatic theorem given in
\cite{Messiah}. For simplicity we assume $H(t)$ in \eqref{meqn2} to be
diagonal, which can always be achieved by a suitable time-independent
transformation and hence is no restriction. We start by transforming
\eqref{meqn2} into the interaction picture. We define
\begin{align}
\rho^H(t) &= e^{-i\int_0^t H^A(t') \,\text{d} t'} \rho^A(t)
e^{i\int_0^t H^A(t') \,\text{d} t'} \nonumber\\
\Gamma_\alpha^H(t) &= e^{-i\int_0^t H^A(t') \,\text{d} t'}
\Gamma_\alpha^A(t) e^{i\int_0^t H^A(t') \,\text{d} t'} \nonumber\\
(A^\dagger \dot{A})^H(t) &= e^{-i\int_0^t H^A(t')
\,\text{d} t'} A^\dagger (t) \dot{A}(t) e^{i\int_0^t H^A(t')
\,\text{d} t'} \nonumber
\end{align}
and get by use of \eqref{meqn2} the master equation in the
interaction
picture
\begin{equation}
\dot{\rho}^H = \rho^H(A^\dagger \dot{A})^H -
(A^\dagger \dot{A})^H \rho^H
+ \half\sum_{\alpha=1}^k \mathcal{L}_{\Gamma_\alpha^H}[\rho^H]
\label{A1}
\end{equation}
The formal solution of this equation is
\begin{align}
\rho^H(t) =& \rho^H(0)
+ \int_0^t \Bigg( \rho^H(A^\dagger \dot{A})^H -
(A^\dagger \dot{A})^H \rho^H\Bigg. \notag\\
&\Bigg.+ \half\sum_{\alpha=1}^k \mathcal{L}_{\Gamma^H_\alpha}
[\rho^H] \Bigg) \!(s) \,\text{d}
s. \label{A2}
\end{align}
Within the integral, there are some contributions of the form of a
product of a slowly varying function and a fast oscillating function.
These contributions are known to become small when the frequency of
the oscillating function increases in comparison with the time
derivative of the slowly varying function. To see this we make use of
the result, following \cite{Messiah},
\begin{align}
\int_0^t f(s)e^{i\omega
s}\,\text{d} s & = \frac{1}{i\omega} \left( [f(s)e^{i\omega s}]^t_0
- \int_0^t f'(s)e^{i\omega s} \,\text{d} s \right) \nonumber \\ &
\overset{\frac{\omega}{f'}\rightarrow
\infty}{\longrightarrow}0
\end{align}
To make use of this we write \eqref{A2} in components
\begin{align}
\rho^H_{ij}(t) = \rho^H_{ij}(0) &+ \int_0^t \Bigg( \rho^H_{ik}
\,(A^\dagger \dot{A})^H_{kj} - (A^\dagger
\dot{A})^H_{ik}\, \rho^H_{kj}\Bigg. \notag\\
&\Bigg.+ \half\sum_{\alpha=1}^k
2\Gamma^H_{\alpha ik}\,\rho^H_{kl}\,\Gamma^{H\dagger}_{\alpha
lj} - \Gamma^{H\dagger}_{\alpha ik}\,\Gamma^H_{\alpha
kl}\,\rho^H_{lj}
\nonumber \\ &- \rho^H_{ik}\,\Gamma^{H\dagger}_{\alpha
kl}\,\Gamma^H_{\alpha lj} \Bigg) \!(s) \,\text{d} s. \label{A3}
\end{align}
Here and later, summations are implied over all indices except $i$
and $j$. The components of $\dot{A}$ and $\Gamma$ are assumed
to be small in comparison with $\omega_{ij}= E_i-E_j$ (adiabaticity
and
weak damping, respectively) and hence we see in \eqref{A3} that all
components of $\dot{\rho}^H$ are small in comparison with
$\omega_{ij}$. The components of $(A^\dagger \dot{A})$ are
slowly varying and hence the off-diagonal components, $(A^\dagger
\dot{A})^H_{ij}$, $i\neq j$ are oscillating with frequency
$\omega_{ij}$ and can be neglected. Because of the parallel transport
condition the diagonal elements $(A^\dagger \dot{A})^H_{nn}$
vanish, as we can see:
\begin{align}
(A^\dagger \dot{A})^H_{nn} =& (A^\dagger \dot{A})_{nn}
\nonumber \\
=& \NN[(0)] | A^\dagger \dot{A} \N[(0)] = \NN[(t)] |\der
\N[(t)] = 0 \nonumber
\end{align}
The last equality is true because we assumed the \N[(t)] to be
parallel
transported. So we have proved that we can neglect $A^\dagger
\dot{A}$ in \eqref{meqn2}.
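The suppression of such oscillatory integrals is easy to illustrate numerically. The sketch below uses a hypothetical slowly varying envelope $f(s)=1/(1+s)$ (our own choice) and shows that $\left|\int_0^t f(s)e^{i\omega s}\,\text{d}s\right|$ shrinks roughly like $1/\omega$ as $\omega$ grows.

```python
import numpy as np

def osc_integral(omega, t=10.0, n=200000):
    """Trapezoid-rule evaluation of int_0^t f(s) e^{i omega s} ds
    for the slowly varying envelope f(s) = 1/(1+s)."""
    s = np.linspace(0.0, t, n)
    vals = np.exp(1j * omega * s) / (1.0 + s)
    return np.sum((vals[:-1] + vals[1:]) / 2) * (s[1] - s[0])

small_omega = abs(osc_integral(omega=1.0))    # O(1) result
large_omega = abs(osc_integral(omega=200.0))  # suppressed roughly as 1/omega
assert large_omega < small_omega / 20
```

This is exactly the mechanism by which the off-diagonal components $(A^\dagger\dot{A})^H_{ij}$, oscillating at $\omega_{ij}$, drop out in the adiabatic limit.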
Furthermore we can rewrite \eqref{A3} as
\begin{align}
\rho^H_{ij}(t) = &\rho^H_{ij}(0) + \int_0^t \Big(
\half\sum_{\alpha=1}^k 2\Gamma^H_{\alpha
ik}\,\rho^H_{kl}\,\Gamma^{H *}_{\alpha jl}
\nonumber \\ &
- \Gamma^{H *}_{\alpha
ki}\,\Gamma^H_{\alpha kl}\,\rho^H_{lj} -
\rho^H_{ik}\,\Gamma^{H
*}_{\alpha lk}\,\Gamma^H_{\alpha lj} \Big) \!(s) \,\text{d} s.
\end{align}
The star denotes complex conjugation. The functions
$\Gamma^H_{ik}\Gamma^{H*}_{jl}$ are oscillating with frequency
$\omega_{ik}-\omega_{jl}$. Hence we can achieve a significant
simplification if the differences of all eigenfrequencies,
$\omega_{ik}-\omega_{jl}$, are non-vanishing (i.e., large in comparison
with $\dot{H}$ and $\Gamma_\alpha$), which is always the case if we
consider a two-level system. Then the condition
$\omega_{ik}-\omega_{jl}=0$ implies $i=k \,, j=l$ or $i=j \,, k=l$ and
hence only corresponding parts will contribute to the integral:
\begin{align}
\rho^H_{ij}(t) =& \rho^H_{ij}(0) + \int_0^t \!\Bigg(
\!\half\sum_{\alpha=1}^k 2 \delta_{ij} \Gamma^H_{\alpha
ik}\,\rho^H_{kk}\,\Gamma^{H *}_{\alpha ik}
\nonumber \\ & - 2 \delta_{ij}
\Gamma^H_{\alpha ii}\,\rho^H_{ii}\,\Gamma^{H *}_{\alpha
ii} + 2
\Gamma^H_{\alpha ii}\,\rho^H_{ij}\,\Gamma^{H *}_{\alpha jj}
\nonumber \\ &-
\Gamma^{H *}_{\alpha ki}\,\Gamma^H_{\alpha ki}\,\rho^H_{ij} -
\rho^H_{ij}\,\Gamma^{H *}_{\alpha lj}\,\Gamma^H_{\alpha lj}
\!\!\Bigg) \!(s) \,\text{d} s.
\end{align}
Here we see that only the absolute value of the off-diagonal elements
of $\Gamma^H_\alpha$ and hence $\Gamma^A_\alpha$ contribute in
\eqref{meqn2}.
We can further note that if we set $i=j$, we find that the diagonal
elements of the density operator are coupled only to diagonal
elements, whereas for $i\ne j$, we find that $\rho_{ij}^{H}$ is
coupled only to itself.
\end{document}
\begin{document}
\twocolumn[
\icmltitle{
How Robust is Your Fairness? \\
Evaluating and Sustaining Fairness under Unseen Distribution Shifts
}
\icmlsetsymbol{equal}{*}
\begin{icmlauthorlist}
\icmlauthor{Haotao Wang}{1}
\icmlauthor{Junyuan Hong}{2}
\icmlauthor{Jiayu Zhou}{2}
\icmlauthor{Zhangyang Wang}{1}
\end{icmlauthorlist}
\icmlaffiliation{1}{University of Texas at Austin}
\icmlaffiliation{2}{Michigan State University}
\icmlcorrespondingauthor{Haotao Wang}{[email protected]}
\icmlkeywords{Machine Learning, ICML}
\vskip 0.3in
]
\printAffiliationsAndNotice{}
\begin{abstract}
Increasing concerns have been raised on deep learning fairness in recent years. Existing fairness-aware machine learning methods mainly focus on the fairness of in-distribution data.
However, in real-world applications, it is common to have distribution shift between the training and test data.
In this paper, we first show that the fairness achieved by existing methods can be easily broken by slight distribution shifts.
To solve this problem, we propose a novel fairness learning method termed CUrvature MAtching (CUMA), which can achieve robust fairness generalizable to unseen domains with unknown distributional shifts.
Specifically, CUMA enforces the model to have similar generalization ability on the majority and minority groups, by matching the loss curvature distributions of the two groups.
We evaluate our method on three popular fairness datasets.
Compared with existing methods, CUMA achieves superior fairness under unseen distribution shifts, without sacrificing either the overall accuracy or the in-distribution fairness.
\end{abstract}
\section{Introduction}
\label{sec:intro}
\begin{figure*}
\caption{
Illustrating the achieved fairness of normal training, traditional fair training, and our proposed robust fair training algorithms. Horizontal and vertical axes represent the input $x$ and the corresponding loss value ${\mathcal{L}}(x)$, respectively.}
\label{fig:teaser}
\end{figure*}
With the wide deployment of deep learning in modern business applications concerning individual lives and privacy, there naturally emerge concerns on machine learning fairness \citep{housebig,executive2016big,smuha2019eu}.
Although research efforts on various fairness-aware learning algorithms have been carried out \citep{edwards2015censoring, hardt2016equality,du2020fairness},
most of them focus only on equalizing model performance across different groups on \textit{in-distribution} data.
Unfortunately, in real-world applications, one commonly encounters data with unforeseeable distribution shifts from the training distribution.
It has been shown that deep learning models have drastically degraded performance \citep{hendrycks2019robustness,hendrycks2020many, hendrycks2021nae, taori2020measuring} and show unpredictable behaviors \citep{qiu2019adversarial,yan2021cifs} under unseen distribution shifts.
Intuitively speaking, previous fairness learning algorithms aim to optimize the model to a local minimum where data from majority and minority groups have similar average loss values (and thus similar in-distribution performance).
However, those algorithms do not take into consideration the stability or ``robustness'' of their found fairness-aware minima. Take object detection in self-driving cars as an example: a detector might have been calibrated over high-quality clear images to be ``fair'' across pedestrian skin colors; however, such fairness may break down when applied to data collected in adverse visual conditions, such as inclement weather, poor lighting, or other digital artifacts.
Our experiments also find that previous state-of-the-art fairness algorithms would be jeopardized if distributional shifts are present in test data.
The above findings beg the following question:
\begin{center}
\textit{
Using the currently available training set, how to achieve robust fairness that is generalizable to unseen domains with unpredictable distribution shifts?
}
\end{center}
To solve this problem, we decompose it into the following two-step objectives and achieve them one by one:
\begin{enumerate}[leftmargin=*]
\item The minority and majority groups should have similar prediction \textbf{accuracy} on \textit{in-distribution} data. This is usually attained by traditional in-distribution fairness goals.
\item The minority and majority groups should have similar \textbf{robustness} under \textit{unseen distributional shifts}. In this context, ``robustness'' refers to the model performance gap between in-distribution and unseen out-of-distribution data: the larger the gap, the weaker the robustness.
\end{enumerate}
The first objective is well studied and can be achieved by existing fairness learning methods such as adversarial training \cite{edwards2015censoring, hardt2016equality,du2020fairness}.
In this paper, we focus our efforts on addressing the second objective, which has been much less studied. We present empirical evidence that the fairness achieved by existing in-distribution oriented methods can be easily compromised by even slight distribution shifts.
Next, to mitigate this fragility, we note that model robustness against distributional shift is often found to be highly correlated with the loss curvature \citep{bartlett2017spectrally,weng2018evaluating}.
Our experiments further observe that the local loss curvature of the minority group is often much larger than that of the majority group, which explains their discrepancy in robustness.
Motivated by the above, we propose a new fairness learning algorithm, termed \textbf{Curvature Matching (CUMA)}, to robustify the fairness.
Specifically, CUMA enforces the model to have similar robustness on the majority and minority groups, by matching the loss curvature distributions of the two groups.
As a plug-and-play module, CUMA can be flexibly combined with existing in-distribution fairness learning methods, such as adversarial training, to fulfil our overall goal of robust fairness.
We illustrate the core idea of CUMA and its difference compared with traditional in-distribution fairness methods in Figure~\ref{fig:teaser}.
We evaluate our method on three popular fairness datasets: Adult, C\&C, and CelebA.
Experimental results show that CUMA achieves significantly more robust fairness against unseen distribution shifts, without sacrificing either overall accuracy or the in-distribution fairness, compared to traditional fairness learning methods.
\section{Preliminaries}
\subsection{Machine Learning Fairness} \label{sec:pre}
\paragraph{Problem Setting and Metrics}
Machine learning fairness can be generally categorized into individual fairness and group fairness \citep{du2020fairness}. Individual fairness requires similar inputs to have similar predictions \citep{dwork2012fairness}. Compared with individual fairness, group fairness is a more popular setting and thus the focus of our paper.
Given input data $X\in{\mathbb{R}}^n$ with sensitive attributes $A\in\{0,1\}$ and their corresponding ground truth labels $Y\in\{0,1\}$, group fairness requires a learned binary classifier $f(\cdot;\theta):{\mathbb{R}}^n \rightarrow \{0,1\}$ parameterized by $\theta$ to give equally accurate predictions (denoted as $\hat{Y} \coloneqq f(X)$) on the two groups with $A=0$ and $A=1$.
Multiple fairness criteria have been defined in this context.
Demographic parity (DP) \citep{edwards2015censoring} requires identical ratio of positive predictions between two groups: $P(\hat{Y}=1|A=0) = P(\hat{Y}=1|A=1)$.
Equalized Odds (EO) \citep{hardt2016equality} requires identical false positive rates (FPRs) and false negative rates (FNRs) between the two groups: $P(\hat{Y} \neq Y|A=0, Y=y) = P(\hat{Y} \neq Y|A=1, Y=y), \forall y \in \{0,1\}$.
Based on these fairness criteria, quantified metrics are defined to measure fairness.
For example, the EO distance \citep{madras2018learning} is defined as follows:
\begin{align}
\Delta_{EO} := \sum_{y\in\{0,1\}}|&P(\hat{Y} \neq Y|A=0, Y=y) - \nonumber \\
&P(\hat{Y} \neq Y|A=1, Y=y)|
\label{eq:eo}
\end{align}
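For concreteness, $\Delta_{EO}$ can be computed from predictions as in the following sketch (the function name and toy data are our own; the paper reports the metric in percent, while this returns a fraction).

```python
import numpy as np

def eo_distance(y_true, y_pred, group):
    """Equalized-odds distance of Eq. (eq:eo): sum over y in {0,1} of the
    absolute gap in error rates P(Y_hat != y | A=a, Y=y) between groups."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    total = 0.0
    for y in (0, 1):
        rates = []
        for a in (0, 1):
            mask = (group == a) & (y_true == y)
            rates.append(np.mean(y_pred[mask] != y))  # error rate in this cell
        total += abs(rates[0] - rates[1])
    return total

# toy example: group 0 errs on half its positives, group 1 is perfect,
# so the FNR gap is 0.5 and the FPR gap is 0
y_true = [1, 1, 0, 0, 1, 1, 0, 0]
y_pred = [0, 1, 0, 0, 1, 1, 0, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]
assert abs(eo_distance(y_true, y_pred, group) - 0.5) < 1e-12
```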
\paragraph{Bias Mitigation Methods}
Many methods have been proposed to mitigate model bias. Data pre-processing methods such as re-weighting \citep{kamiran2012data} and data-transformation \citep{calmon2017optimized} have been used to reduce discrimination before model training. In contrast, \cite{hardt2016equality} and \cite{zhao2017men} propose post-processing methods to calibrate model predictions towards a desired fair distribution after model training.
Instead of pre- or post-processing, researchers have explored to enhance fairness during training.
For example, \cite{madras2018learning} uses an adversarial training technique and shows the learned fair representations can transfer to unseen target tasks.
The key technique, adversarial training \citep{edwards2015censoring}, was designed for feature disentanglement on hidden representations such that sensitive \citep{edwards2015censoring} or domain-specific information \citep{ganin2016domain} will be removed while keeping other useful information for the target task.
The hidden representations are typically the output of intermediate layers of neural networks \citep{ganin2016domain,edwards2015censoring,madras2018learning}.
Instead, methods like adversarial debiasing \citep{zhang2018mitigating} and its simplified version \citep{wadsworth2018achieving} directly apply the adversary on the output layer of the classifier, which also promotes model fairness.
Observing the unfairness due to ignoring the worst learning risk of specific samples, \citet{hashimoto2018fairness} proposes to use distributionally robust optimization which provably bounds the worst-case risk over groups.
\citet{creager2019flexibly} proposes a flexible fair representation learning framework based on VAE \citep{kingma2013auto}, that can be easily adapted for different sensitive attribute settings during run-time.
\citet{sarhan2020fairness} uses orthogonality constraints as a proxy for independence to disentangles the utility and sensitive representations.
\citet{martinez2020minimax} formulates group fairness with multiple sensitive attributes as a multi-objective learning problem and proposes a simple optimization algorithm to find the Pareto optimality.
Another line of research focuses on learning unbiased representations from biased ones \citep{bahng2020learning, nam2020learning}.
\citet{bahng2020learning} proposes a novel framework to learn unbiased representations by explicitly enforcing them to be different from a set of pre-defined biased representations.
\citet{nam2020learning} observes that data bias can be either benign or malicious, and removing malicious bias alone can achieve fairness.
\citet{li2019repair} jointly learns a data re-sampling weight distribution that penalizes easy samples and network parameters.
\citet{li2019fair} scales losses by a higher-order power to re-emphasize minority samples (or nodes) in distributed learning.
\citet{agarwal2018reductions} formulates a fairness-constrained optimization to train a randomized classifier which is provably accurate and fair.
\citet{quadrianto2019discovering} casts the sensitive information removal problem as a data-to-data translation problem with unknown target domain.
\paragraph{Applications in Computer Vision}
While many fairness metrics and debiasing algorithms are designed for general learning problems, as mentioned above, there is also a line of research and applications focusing on fairness-encouraged computer vision tasks.
For instance,
\citet{buolamwini2018gender} shows current commercial gender-recognition systems have substantial accuracy disparities among groups with different genders and skin colors.
\citet{wilson2019predictive} observe that state-of-the-art segmentation models achieve better performance on pedestrians with lighter skin colors.
In \citep{shankar2017no,de2019does}, it is found that the common geographical bias in public image databases can lead to strong performance disparities among images from locales with different income levels.
\citet{nagpal2019deep} reveal that the focus region of face-classification models depends on people's ages or races, which may explain the source of age- and race-biases of classifiers.
On the awareness of the unfairness, many efforts have been devoted to mitigate such biases in computer vision tasks.
\citet{wang2019balanced} shows the effectiveness of adversarial debiasing technique \citep{zhang2018mitigating} in fair image classification and activity recognition tasks.
Beyond the supervised learning, FairFaceGAN \citep{hwang2020fairfacegan} is proposed to prevent undesired sensitive feature translation during image editing.
Similar ideas have also been successfully applied to visual question answering \citep{park2020fair}.
\paragraph{Fairness under Distributional Shift}
Recently, several papers have investigated the fairness learning problem under distributional shift \cite{mandal2020ensuring,zhang2021farf,rezaei2021robust,singh2021fairness,dailabel}. Although these works are related to ours, there are significant differences in the problem settings.
\citet{zhang2021farf} studied the problem of enforcing fairness in online learning, where the training distribution constantly shifts.
The authors proposed to adapt the model to be fair on the current \textit{known} data distribution. In contrast, our work aims to generalize fairness learned on current distribution to \textit{unknown} and \textit{unseen} target distributions. In our setting, the algorithm can not access any training data from the unknown target distributions.
\citet{rezaei2021robust} studied preserving fairness under covariate shift. However, their method requires unlabeled data from the target distribution. In other words, they assume the target distribution to be \textit{known}. In contrast, our method is more general and works on \textit{unknown} target distributions.
\citet{singh2021fairness} also studied preserving fairness under covariate shift. However, their method is based on model adaptation and requires the existence of a joint causal graph to represent the data distribution for all domains. Our method does not impose such a requirement and works on any unseen target distribution.
\citet{dailabel} studies fairness under label distributional shift, while we focus on covariate shift.
\subsection{Model Robustness and Smoothness}
Model generalization ability and robustness have been shown to be highly correlated with model smoothness \citep{moosavi2019robustness,weng2018evaluating}.
\citet{weng2018evaluating} and \citet{guo2018sparse} use local Lipschitz constant to estimate model robustness against small perturbations on inputs within a hyper-ball. \citet{moosavi2019robustness} proposes to improve model robustness by adding a curvature constraint to encourage model smoothness.
\citet{miyato2018virtual} approximates model local smoothness by the spectral norm of Hessian matrix, and improves model robustness against adversarial attacks by regularizing model smoothness.
\section{The Challenge of Robust Fairness}
\label{sec:challenge}
In this section, we show that the current state-of-the-art in-distribution fairness learning methods suffer a significant performance drop under unseen distribution shifts.
Specifically, we train the model using normal training (denoted as ``Normal'' in Table~\ref{tab:teaser}), AdvDebias \citep{zhang2018mitigating} and LAFTR \cite{madras2018learning} on the Adult \cite{kohavi1996scaling} dataset (i.e., US Census data before 1996).
We evaluate $\Delta_{EO}$ on the original Adult test set and on the 2015 subset of the Folktables dataset \cite{ding2021retiring} (i.e., US Census data in 2015), respectively, in order to check whether the fairness achieved on in-distribution data is preserved under the temporal distribution shift.
The results are shown in Table~\ref{tab:teaser}.
As we can see, LAFTR and AdvDebias successfully improve the in-distribution fairness compared with normal training.
However, both methods suffer a significant performance drop in terms of $\Delta_{EO}$ under the temporal distribution shift.
Moreover, under the distribution shift, the $\Delta_{EO}$ achieved by LAFTR and AdvDebias is almost the same as that of normal training. In other words, the models trained by LAFTR and AdvDebias are almost as unfair as a normally trained model under this naturally occurring distribution shift.
\begin{table}[ht]
\begin{center}
\caption{
Existing in-distribution fairness learning methods suffer a significant performance drop under distribution shifts.
All methods are trained on Adult dataset (i.e., US Census data before 1996) with ``Sex'' as the sensitive attribute.
The best and second-best metrics are shown in bold and underlined, respectively. Mean and standard deviation over three random runs are shown for our method.}
\vspace{-0.5em}
\resizebox{\linewidth}{!}{
\begin{tabular}{|c|c|c|}
\hline
\multirow{3}{*}{Method} & In-distribution fairness & \makecell{Robust fairness under\\unseen distribution shift} \\
\cline{2-3}
& \makecell{$\Delta_{EO}$ ($\downarrow$) on US Census data\\before 1996\\(i.e., the in-distribution test set)} & \makecell{$\Delta_{EO}$ ($\downarrow$) on US Census data\\in 2015} \\
\hline\hline
Normal & 15.45 & 14.65 \\
LAFTR & 11.96 & 14.80 \\
AdvDebias & \underline{5.92} & \underline{12.35} \\
\hline
Ours & \textbf{4.77}$\pm0.34$ & \textbf{8.20}$\pm$1.26 \\
\hline
\end{tabular}
}
\label{tab:teaser}
\vspace{-1em}
\end{center}
\end{table}
\section{Curvature Matching: Towards Robust Fairness under Unseen Distributional Shifts}
In this section, we present our proposed solution for the robust fairness challenge described in Section~\ref{sec:challenge}.
\subsection{Loss Curvature as the Measure for Robustness}
Before introducing our robust fairness learning method, we first need to define a measure of model robustness under unseen distributional shifts.
Consider a binary classifier $f(\cdot;\theta)$ trained on two groups of data $X_1$ and $X_2$. Our goal is to define a metric to measure the gap of model robustness between the two groups.
Previous research \citep{guo2018sparse,weng2018evaluating} has shown both theoretically and empirically that deep model robustness scales with its model smoothness.
Motivated by the above, we use the spectral norm of the Hessian matrix to approximate local smoothness as a measure of model robustness. Specifically, given an input $x$, the Hessian matrix $H(x)$ is defined as the second-order gradient of $\mathcal{L}(x)$ with respect to the model weights $\theta$: $H(x) = \nabla^2_{\theta} \mathcal{L}(x)$.
The approximated local curvature ${\mathcal{C}}(x)$ at point $x$ is thus defined as:
\begin{equation} \label{eq:curvature}
{\mathcal{C}}(x) = \sigma(H(x)),
\end{equation}
where $\sigma(H)$ is the spectral norm (SN) of $H$: $\sigma(H) = \sup_{v:\|v\|_2=1} \|Hv\|_2$.
Intuitively, ${\mathcal{C}}(x)$ measures the maximal directional curvature or change rate of the loss function at $x$. Thus, smaller ${\mathcal{C}}(x)$ indicates better local smoothness around $x$.
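To make the definition concrete, the spectral norm $\sigma(H)$ can be computed by power iteration. The following minimal numpy sketch uses a toy $2\times 2$ matrix standing in for $H(x)$; it is an illustration of the definition, not the authors' implementation:

```python
import numpy as np

def spectral_norm(H, iters=100, seed=0):
    """Estimate sigma(H) = sup_{||v||=1} ||Hv|| by power iteration on H^T H."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(H.shape[1])
    v /= np.linalg.norm(v)
    for _ in range(iters):
        v = H.T @ (H @ v)        # power step on H^T H (works for non-symmetric H too)
        v /= np.linalg.norm(v)
    return np.linalg.norm(H @ v)

H = np.array([[2.0, 1.0], [1.0, 3.0]])   # toy stand-in for the Hessian H(x)
sigma = spectral_norm(H)
```

For an actual network, $H(x)$ is never materialized; only Hessian-vector products are needed, which is what the practical approximation below exploits.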
\paragraph{Practical Curvature Approximation}
It is inefficient to directly optimize the loss curvature through Eq.~(\ref{eq:curvature}), since it involves high-order gradients.\footnote{The Hessian matrix itself involves second-order gradients, and backpropagation through Eq.~(\ref{eq:curvature}) requires an even higher-order gradient on top of the Hessian matrix SN.}
To solve this problem, we use a one-shot power iteration method (PIM) for practical approximation of ${\mathcal{C}}(x)$ during training.
First we rewrite ${\mathcal{C}}(x)$ with the following form: ${\mathcal{C}}(x) = \sigma(H(x)) = \|H(x)v\|$,
where $v$ is the dominant eigenvector with the maximal eigenvalue, which can be calculated by power iteration method.
In practice, we estimate the dominant eigenvector $v$ by the gradient direction: $\tilde{v} \coloneqq \frac{\text{sign}(g)}{\|\text{sign}(g)\|} \approx v$, where $g=\nabla_{\theta} {\mathcal{L}}(x)$.
This is because previous works have observed a large similarity between the dominant eigenvector and the gradient direction \citep{miyato2018virtual,moosavi2019robustness}. We further approximate the Hessian-vector product by finite differences of gradients: $H(x)v \approx \frac{\nabla_\theta {\mathcal{L}}(x+hv) - \nabla_\theta {\mathcal{L}}(x)}{h}$, where $h$ is a small constant. As a result, the final approximation of the loss curvature is
\begin{equation}
\label{eq:curvature_aprx}
\begin{split}
\tilde{{\mathcal{C}}}(x) \coloneqq \frac{\|\nabla_\theta {\mathcal{L}}(x+h\tilde{v}) - \nabla_\theta {\mathcal{L}}(x)\|}{|h|} \approx {\mathcal{C}}(x).
\end{split}
\end{equation}
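As an illustration of this one-shot approximation, the numpy sketch below applies it to a toy quadratic loss $\mathcal{L}(\theta)=\frac{1}{2}\theta^{\top} A\theta$, whose Hessian is the constant matrix $A$ (the matrix $A$, the point $\theta$, and the data-free setup are illustrative choices, not values from the paper):

```python
import numpy as np

# Toy quadratic loss L(theta) = 0.5 * theta^T A theta, so its Hessian is A
# and its gradient is A @ theta.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
grad = lambda theta: A @ theta
theta = np.array([1.0, -2.0])
h = 1.0                                        # the paper's default constant

g = grad(theta)
v = np.sign(g) / np.linalg.norm(np.sign(g))    # sign-gradient eigenvector estimate
curv_approx = np.linalg.norm(grad(theta + h * v) - grad(theta)) / abs(h)

sigma = np.linalg.norm(A, 2)                   # exact spectral norm, for comparison
```

Since $\mathrm{sign}(g)$ is generally not the exact dominant eigenvector, $\tilde{{\mathcal{C}}}$ is a lower estimate of $\sigma(H)$; it becomes tight when the gradient aligns with the dominant eigendirection.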
\begin{table*}[ht]
\begin{center}
\caption{Results on Adult dataset with ``Sex'' as the sensitive attribute. The best and second-best metrics are shown in bold and underlined, respectively. Mean and standard deviation over three random runs are shown for CUMA.}
\vspace{-0.5em}
\resizebox{\linewidth}{!}{
\begin{tabular}{|c|c|c|c|c|}
\hline
\multirow{3}{*}{Method} & \multicolumn{2}{c|}{\makecell{The original Adult test set\\(US Census data before 1996)}} & \multicolumn{1}{c|}{\makecell{Folktables 2014 subset\\(US Census data in 2014)}} & \multicolumn{1}{c|}{\makecell{Folktables 2015 subset\\(US Census data in 2015)}} \\
\cline{2-5}
& \multirow{2}{*}{Accuracy ($\uparrow$)} & $\Delta_{EO}$ ($\downarrow$) & $\Delta_{EO}$ ($\downarrow$) & $\Delta_{EO}$ ($\downarrow$) \\
\cline{3-5}
& & \multicolumn{1}{c|}{\textit{In-distribution fairness}} & \multicolumn{2}{c|}{\textit{Robust fairness under distribution shifts}} \\
\hline\hline
Normal & \textbf{86.11} & 15.45 & 14.28 & 14.65 \\
AdvDebias & {85.17} & \underline{5.92} & \underline{8.25} & \underline{10.16} \\
LAFTR & \underline{85.97} & 11.96 & 13.95 & 14.80 \\
CUMA & 85.30$\pm0.73$ & \textbf{4.77}$\pm0.34$ & \textbf{6.33}$\pm$0.94 & \textbf{8.20}$\pm$1.26 \\
\hline
\end{tabular}
}
\label{tab:adult}
\end{center}
\end{table*}
\subsection{Curvature Matching}
\label{sec:cuma}
Equipped with the practical curvature approximation, now we can match the curvature distribution of the two groups by minimizing their maximum-mean-discrepancy (MMD) \cite{gretton2012kernel} distance.
Suppose $\tilde{{\mathcal{C}}}(X_1) \sim {\mathcal{Q}}_1$ and $\tilde{{\mathcal{C}}}(X_2) \sim {\mathcal{Q}}_2$; we define the curvature matching loss function as:
\begin{equation}
\label{eq:Lcm}
\begin{split}
\mathcal{L}_{cm} = \MMD^2({\mathcal{Q}}_1, {\mathcal{Q}}_2).
\end{split}
\end{equation}
The MMD distance, which is widely used to measure the distance between two high-dimensional distributions in deep learning \citep{li15gmmn,li2017mmd,binkowski2018demystifying}, is defined as
\begin{align}
\MMD^2({\mathcal{P}},{\mathcal{Q}}) = &\mathbb{E}_{\mathcal{P}}[k(X,X)] - \nonumber \\
&2\mathbb{E}_{{\mathcal{P}},{\mathcal{Q}}}[k(X,Y)] + \mathbb{E}_{\mathcal{Q}}[k(Y,Y)]
\end{align}
where $X \sim {\mathcal{P}}$, $Y \sim {\mathcal{Q}}$ and $k(\cdot,\cdot)$ is the kernel function.
In practice, we use finite samples from ${\mathcal{P}}$ and ${\mathcal{Q}}$ to statistically estimate their MMD distance:
\begin{align}
&\MMD^2({\mathcal{P}}, {\mathcal{Q}}) = \frac{1}{M^2} \sum_{i=1}^{M}\sum_{i'=1}^{M} k(x_i, x_{i'}) \nonumber \\
&-\frac{2}{MN} \sum_{i=1}^{M}\sum_{j=1}^{N} k(x_i, y_j) +\frac{1}{N^2} \sum_{j=1}^{N}\sum_{j'=1}^{N} k(y_j, y_{j'})
\end{align}
where $\{x_i\sim {\mathcal{P}}\}_{i=1}^M$, $\{y_j \sim {\mathcal{Q}} \}_{j=1}^N$, and we use the mixed RBF kernel function $k(x,y)=\sum_{\sigma \in {\mathbb{S}}}e^{-\frac{\|x-y\|^2}{2\sigma^2}}$ with hyperparameter ${\mathbb{S}}=\{1,2,4,8,16\}$.
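A direct numpy implementation of this finite-sample estimator with the mixed RBF kernel is sketched below (the sample sizes and data are illustrative; only the kernel and the estimator come from the text):

```python
import numpy as np

def mixed_rbf(x, y, sigmas=(1, 2, 4, 8, 16)):
    """Mixed RBF kernel matrix: k(x,y) = sum_s exp(-||x - y||^2 / (2 s^2))."""
    d2 = np.sum((x[:, None, :] - y[None, :, :]) ** 2, axis=-1)
    return sum(np.exp(-d2 / (2.0 * s ** 2)) for s in sigmas)

def mmd2(X, Y):
    """Biased finite-sample estimate of MMD^2 between samples X and Y."""
    return (mixed_rbf(X, X).mean()
            - 2.0 * mixed_rbf(X, Y).mean()
            + mixed_rbf(Y, Y).mean())

rng = np.random.default_rng(0)
X = rng.standard_normal((64, 3))        # illustrative samples from Q_1
Y = rng.standard_normal((64, 3)) + 5.0  # samples from a shifted Q_2
```

The estimate vanishes when both sample sets coincide and grows with the discrepancy between the two distributions.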
As a side note, MMD has previously been used in fairness learning: \citet{quadrianto2017recycling} define a more general fairness metric using the MMD distance, and show DP and EO to be special cases of their unified metric. Their work, however, still focuses on in-distribution fairness.
In contrast, our CUMA minimizes the MMD distance between the curvature distributions of the two groups to achieve robust fairness.
Returning to our method: after defining $\mathcal{L}_{cm}$, we add it to the traditional adversarially fair training \citep{ganin2016domain, madras2018learning} loss function as a regularizer, in order to attain both in-distribution fairness and robust fairness.
As illustrated in Figure~\ref{fig:framework}, our model follows the same ``two-head'' structure as traditional adversarial learning frameworks \citep{ganin2016domain, madras2018learning}, where $h_t$ is the utility head for the target task, $h_a$ is the adversarial head to predict sensitive attributes, and $f_s$ is the shared backbone.\footnote{Thus the binary classifier $f(\cdot;\theta)=h_t(f_s(\cdot; \theta_s); \theta_t)$, with $\theta=\theta_t \cup \theta_s$.}
Suppose for each sample $x_i$, the sensitive attribute is $a_i$ and the corresponding target label is $y_i$, then our overall optimization problem can be written as:
\begin{equation}
\label{eq:overall_loss}
\begin{split}
\min_{\theta_s,\theta_t}\max_{\theta_a}\mathcal{L} = \min_{\theta_s,\theta_t}\max_{\theta_a}(\mathcal{L}_{clf} - \alpha \mathcal{L}_{adv} + \gamma \mathcal{L}_{cm})
\end{split}
\end{equation}
where
\begin{align}
&\mathcal{L}_{clf}=\frac{1}{N}\sum_{i=1}^{N} \ell(h_t(f_s(x_i; \theta_s); \theta_t),y_i), \label{eq:Lclf} \\ &\mathcal{L}_{adv}=\frac{1}{N}\sum_{i=1}^{N} \ell(h_a(f_s(x_i;\theta_s);\theta_a),a_i), \label{eq:Ladv}
\end{align}
$\ell(\cdot,\cdot)$ is the cross-entropy loss function, $\alpha$ and $\gamma$ are trade-off hyperparameters, and $N$ is the number of training samples.
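The assembly of the overall objective can be sketched as follows; the toy predictions and the placeholder value for $\mathcal{L}_{cm}$ are hypothetical, and in training $\theta_a$ ascends this objective while $\theta_s, \theta_t$ descend it:

```python
import numpy as np

def cross_entropy(p, y):
    """Mean binary cross-entropy for predicted positive-class probabilities p."""
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

# Hypothetical predictions from the utility head h_t and the adversarial head h_a.
p_task = np.array([0.9, 0.2, 0.8])   # h_t outputs for three samples
y      = np.array([1, 0, 1])         # target labels
p_attr = np.array([0.6, 0.5, 0.4])   # h_a outputs
a      = np.array([1, 0, 0])         # sensitive attributes

alpha, gamma = 1.0, 1.0              # the paper's default trade-off weights
l_clf = cross_entropy(p_task, y)
l_adv = cross_entropy(p_attr, a)
l_cm  = 0.05                         # placeholder for the curvature matching term
total = l_clf - alpha * l_adv + gamma * l_cm
```

The minus sign on $\mathcal{L}_{adv}$ implements the min-max structure: the shared backbone is pushed to make the sensitive attribute hard to predict, while the adversarial head tries its best to predict it.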
\begin{figure}
\caption{
The overall framework of CUMA. $x$ is the input sample. $h_t$ is the utility head for the target task. $h_a$ is the adversarial head to predict sensitive attributes. $f_s$ is the shared backbone.
$\tilde{{\mathcal{C}}}(\cdot)$ denotes the approximated loss curvature used to compute the curvature matching loss $\mathcal{L}_{cm}$.}
\label{fig:framework}
\end{figure}
\section{Experiments}
\begin{table*}[ht]
\begin{center}
\caption{Results on CelebA dataset with ``Chubby'' as the sensitive attribute. The best and second-best metrics are shown in bold and underlined, respectively.}
\vspace{-0.5em}
\resizebox{\linewidth}{!}{
\begin{tabular}{|c|c|c|c|c|}
\hline
\multirow{3}{*}{Method} & \multicolumn{2}{c|}{\makecell{In-distribution test set\\(Young and high visual quality face images)}} & \multicolumn{1}{c|}{\makecell{Old face images\\with high visual quality}} & \multicolumn{1}{c|}{\makecell{Young face images\\under severe JPEG compression}} \\
\cline{2-5}
& \multirow{2}{*}{Accuracy ($\uparrow$)} & $\Delta_{EO}$ ($\downarrow$) & $\Delta_{EO}$ ($\downarrow$) & $\Delta_{EO}$ ($\downarrow$) \\
\cline{3-5}
& & \multicolumn{1}{c|}{\textit{In-distribution fairness}} & \multicolumn{2}{c|}{\textit{Robust fairness under distribution shifts}} \\
\hline\hline
Normal & \textbf{85.76} & 37.42 & 40.52 & 43.01 \\
AdvDebias & \underline{80.31} & 33.25 & 36.54 & \underline{35.86} \\
LAFTR & 79.56 & \textbf{31.02} & \underline{35.88} & 37.46 \\
CUMA & 80.26 & \underline{31.52} & \textbf{33.26} & \textbf{33.12} \\
\hline
\end{tabular}
}
\label{tab:celeba_chubby}
\end{center}
\end{table*}
\begin{table*}[ht]
\begin{center}
\caption{Results on CelebA dataset with ``Eyeglasses'' as the sensitive attribute. The best and second-best metrics are shown in bold and underlined, respectively.}
\vspace{-0.5em}
\resizebox{\linewidth}{!}{
\begin{tabular}{|c|c|c|c|c|}
\hline
\multirow{3}{*}{Method} & \multicolumn{2}{c|}{\makecell{In-distribution test set\\(Young and high visual quality face images)}} & \multicolumn{1}{c|}{\makecell{Old face images\\with high visual quality}} & \multicolumn{1}{c|}{\makecell{Young face images\\under severe JPEG compression}} \\
\cline{2-5}
& \multirow{2}{*}{Accuracy ($\uparrow$)} & $\Delta_{EO}$ ($\downarrow$) & $\Delta_{EO}$ ($\downarrow$) & $\Delta_{EO}$ ($\downarrow$) \\
\cline{3-5}
& & \multicolumn{1}{c|}{\textit{In-distribution fairness}} & \multicolumn{2}{c|}{\textit{Robust fairness under distribution shifts}} \\
\hline\hline
Normal & \textbf{85.76} & 43.56 & 42.03 & 40.15 \\
AdvDebias & 78.56 & 33.25 & 37.41 & \underline{35.22} \\
LAFTR & \underline{82.30} & \underline{32.06} & \underline{35.61} & 35.74 \\
CUMA & 81.06 & \textbf{31.15} & \textbf{33.06} & \textbf{32.52} \\
\hline
\end{tabular}
}
\label{tab:celeba_eyeglasses}
\end{center}
\end{table*}
\subsection{Experimental Setup} \label{sec:settings}
\paragraph{Datasets and pre-processing}
Experiments are conducted on three datasets widely used to evaluate machine learning fairness:
Adult \citep{kohavi1996scaling}, CelebA \citep{liu2015faceattributes}, and Communities and Crime (C\&C) \citep{redmond2002data}.\footnote{Traditional image classification datasets (e.g., ImageNet) are not directly applicable since they lack fairness attribute labels.}
\underline{Adult} dataset has 48,842 samples with basic personal information such as education and occupation, where 30,000 are used for training and the rest for evaluation. The target task is to predict the person's annual income, and we use ``gender'' (male or female) as the sensitive attribute. The features in Adult dataset are of either continuous (e.g., age) or categorical (e.g. sex) values. We use one-hot encoding on the categorical features and then concatenate them with the continuous ones. We use data whitening on the concatenated features.
\underline{CelebA} has over 200,000 images of celebrity faces, with 40 attribute annotations. The target task is to predict gender (male or female) and the sensitive attributes to protect are ``chubby'' and ``eyeglasses''. We randomly select $10,000$ as training samples and $1,000$ as testing samples. All images are center-cropped and resized to $64\times64$, and pixel values are scaled to $[0,1]$.
\underline{C\&C} dataset has 1,994 samples with neighborhood population statistics, where 1,500 are used for training and the rest for evaluation. The target task is to predict violent crime per capita, and we use ``RacePctBlack'' (percentage of black population in the neighborhood) and ``FemalePctDiv'' (divorce ratio of females in the neighborhood) as sensitive attributes. All features in the C\&C dataset are continuous values in $[0,1]$. To fit the fairness problem setting, we binarize the target and sensitive attributes using the top-$30\%$ largest value as the threshold (as a result, $\text{P}[A=0]=30\%$ and $\text{P}[Y=0]=30\%$).
We also apply data whitening on C\&C.
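The top-$30\%$ binarization described above can be sketched as follows; the tie handling and the convention that the top-$30\%$ maps to label $0$ are our assumptions, chosen to match $\text{P}[A=0]=30\%$:

```python
import numpy as np

def binarize_top30(values):
    """Binarize continuous values: top-30% largest -> 0, the rest -> 1.

    The direction (top-30% mapped to 0) and tie handling are assumptions,
    chosen so that P[A=0] is approximately 30%.
    """
    threshold = np.quantile(values, 0.7)      # 70th percentile cuts off the top 30%
    return (values < threshold).astype(int)

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, 10_000)             # continuous C&C-style feature in [0,1]
a = binarize_top30(x)
```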
\vspace{-1em}
\paragraph{Models}
For C\&C and Adult datasets, we use two-layer MLPs for $f_s$, $h_t$ and $h_a$.
Specifically, suppose the input feature dimension is $d$; then the dimensions of the hidden layers in $f_s$ and $h_t$ are $d \rightarrow 100 \rightarrow 64$ and $64 \rightarrow 32 \rightarrow 2$, respectively. $h_a$ has an identical model structure to $h_t$.
For all three sub-networks, the ReLU activation function and a dropout layer with a $0.25$ dropout ratio are applied between the two fully connected layers.
For CelebA dataset, we use ResNet18 as backbone, where the first three stages are used as $f_s$ and the last stage (together with the fully connected classification layer) is used as $h_t$. The auxiliary adversarial head $h_a$ has the same structure as $h_t$.
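A minimal numpy sketch of the MLP dimensions used for $f_s$ and $h_t$ (inference-time forward pass only; dropout is omitted, the weights are random, and the input dimension $d$ is an illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 120                                   # illustrative input feature dimension

def mlp(dims):
    """Random weights for a two-layer MLP with the given layer dimensions."""
    return [(rng.standard_normal((i, o)) * 0.01, np.zeros(o))
            for i, o in zip(dims[:-1], dims[1:])]

def forward(layers, x):
    for k, (W, b) in enumerate(layers):
        x = x @ W + b
        if k < len(layers) - 1:
            x = np.maximum(x, 0.0)        # ReLU between the two fully connected layers
    return x

f_s = mlp([d, 100, 64])                   # shared backbone: d -> 100 -> 64
h_t = mlp([64, 32, 2])                    # utility head:    64 -> 32 -> 2
x = rng.standard_normal((8, d))           # a batch of 8 samples
logits = forward(h_t, forward(f_s, x))
```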
\paragraph{Baseline methods}
We compare CUMA with the following state-of-the-art in-distribution fairness algorithms.
Adversarial debiasing (AdvDebias) \citep{zhang2018mitigating} is one of the most popular fair training algorithms based on adversarial training \citep{ganin2016domain}.
\citet{madras2018learning} propose a similar framework termed Learned Adversarially Fair and Transferable Representations (LAFTR), replacing the cross-entropy loss used in \citep{zhang2018mitigating} with a group-normalized $\ell_1$ loss, which is shown to work better on highly imbalanced datasets.
We also include normal (fairness-ignorant) training as a baseline.
\paragraph{Evaluation metric}
We report the \underline{overall accuracy} on all test samples in the original test sets.
To measure \underline{in-distribution fairness}, we use $\Delta_{EO}$ on the original test sets.
To measure \underline{robust fairness} under distribution shifts, we use $\Delta_{EO}$ on test sets with distribution shifts.
See the following paragraph for the details in constructing distribution shifts.
\paragraph{Distribution shifts}
The Adult dataset contains US Census data collected before 1996. We use the 2014 and 2015 subsets of the Folktables dataset \citep{ding2021retiring}, which contain US Census data collected in 2014 and 2015 respectively, as the test sets with distribution shifts.
This simulates the real-world temporal distributional shifts.
On the CelebA dataset, we train the model on $10,000$ ``young'' face images. We use another $1,000$ ``young'' face images as the in-distribution test set and $1,000$ ``not young'' face images as the test set with distribution shift.
This simulates the real-world scenario where the model is trained and used on people from different age groups.
We also construct another test set with a different type of distribution shift, by applying strong JPEG compression on the original $1,000$ ``young'' test images, following \cite{hendrycks2019robustness}.
This simulates the scenario where the model is trained on good-quality images while the test images have poor visual quality.
For the C\&C dataset, we construct two artificial distribution shifts by adding random Gaussian and uniform noise, respectively, to the test data.
Specifically, the categorical features in the C\&C dataset are first one-hot encoded and then whitened into float-valued vectors, to which the noise is added.
Both types of noise have mean $\mu=0$ and standard deviation $\sigma=0.03$.
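The two noise models can be sketched as follows; the half-width of the uniform noise is our assumption, chosen so that its standard deviation matches $\sigma=0.03$ (the text specifies only the mean and the standard deviation):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma, n = 0.03, 100_000

gauss = rng.normal(0.0, sigma, n)          # Gaussian noise: mean 0, std 0.03
half_width = sigma * np.sqrt(3.0)          # assumed bound so Uniform(-b, b) has std 0.03
unif = rng.uniform(-half_width, half_width, n)
```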
\paragraph{Implementation details}
Unless otherwise specified, we set the loss trade-off parameter $\alpha$ to 1 in all experiments by default.
We use Adam optimizer \citep{kingma2014adam} with initial learning rate $10^{-3}$ and weight decay $10^{-5}$. The learning rate is gradually decreased to 0 by cosine annealing learning rate scheduler \citep{loshchilov2016sgdr}.
On both Adult and C\&C datasets, we train for $50$ epochs from scratch for all methods.
On the CelebA dataset, we first normally train a model for 100 epochs, and then finetune it for 20 epochs using CUMA.
For a fair comparison, we train for 120 epochs on CelebA for all baseline methods.
The constant $h$ in Eq.~(\ref{eq:curvature_aprx}) is set to $1$ by default.
\begin{table*}[ht]
\begin{center}
\caption{Results on C\&C dataset with ``RacePctBlack'' as the sensitive attribute. The best and second-best metrics are shown in bold and underlined, respectively. Mean and standard deviation over three random runs are shown for CUMA.}
\vspace{-0.5em}
\resizebox{\linewidth}{!}{
\begin{tabular}{|c|c|c|c|c|}
\hline
\multirow{3}{*}{Method} & \multicolumn{2}{c|}{\makecell{The original C\&C test set}} & \multicolumn{1}{c|}{\makecell{With Gaussian Noise}} & \multicolumn{1}{c|}{\makecell{With Uniform Noise}} \\
\cline{2-5}
& \multirow{2}{*}{Accuracy ($\uparrow$)} & $\Delta_{EO}$ ($\downarrow$) & $\Delta_{EO}$ ($\downarrow$) & $\Delta_{EO}$ ($\downarrow$) \\
\cline{3-5}
& & \multicolumn{1}{c|}{\textit{In-distribution fairness}} & \multicolumn{2}{c|}{\textit{Robust fairness under distribution shifts}} \\
\hline\hline
Normal & \textbf{89.05} & 63.22 & 60.13 & 64.21 \\
AdvDebias & {84.79} & 39.84 & 39.84 & 36.81 \\
LAFTR & \underline{85.80} & \underline{28.83} & \underline{29.04} & \underline{32.20} \\
CUMA & 85.20$\pm1.70$ & \textbf{28.17}$\pm1.70$ & \textbf{28.69}$\pm1.92$ & \textbf{27.11}$\pm0.82$ \\
\hline
\end{tabular}
}
\label{tab:ccrace}
\vspace{-1em}
\end{center}
\end{table*}
\begin{table*}[ht]
\begin{center}
\caption{Results on C\&C dataset with ``FemalePctDiv'' as the sensitive attribute. The best and second-best metrics are shown in bold and underlined, respectively. Mean and standard deviation over three random runs are shown for CUMA.}
\vspace{-0.5em}
\resizebox{\linewidth}{!}{
\begin{tabular}{|c|c|c|c|c|}
\hline
\multirow{3}{*}{Method} & \multicolumn{2}{c|}{\makecell{The original C\&C test set}} & \multicolumn{1}{c|}{\makecell{With Gaussian Noise}} & \multicolumn{1}{c|}{\makecell{With Uniform Noise}} \\
\cline{2-5}
& \multirow{2}{*}{Accuracy ($\uparrow$)} & $\Delta_{EO}$ ($\downarrow$) & $\Delta_{EO}$ ($\downarrow$) & $\Delta_{EO}$ ($\downarrow$) \\
\cline{3-5}
& & \multicolumn{1}{c|}{\textit{In-distribution fairness}} & \multicolumn{2}{c|}{\textit{Robust fairness under distribution shifts}} \\
\hline\hline
Normal & \textbf{89.05} & 54.74 & 56.41 & 54.60 \\
AdvDebias & \underline{83.57} & {38.73} & 38.73 & 37.15\\
LAFTR & 83.16 & \underline{27.83} & \underline{29.30} & \underline{30.11}\\
CUMA & 83.39$\pm1.01$ & \textbf{27.57}$\pm0.74$ & \textbf{27.70}$\pm1.04$ & \textbf{28.35}$\pm1.73$ \\
\hline
\end{tabular}
}
\label{tab:ccdiv}
\vspace{-1em}
\end{center}
\end{table*}
\subsection{Main Results}
\label{sec:main_results}
Experimental results on three datasets with different sensitive attributes are shown in Tables~\ref{tab:adult}--\ref{tab:ccdiv}, where we compare CUMA with the baseline methods on the metrics discussed in Section~\ref{sec:settings}.
``Normal'' means standard training without any fairness regularization.
All numbers are shown as percentages.
Several intriguing findings can be drawn from the results.
\underline{First}, we see that previous state-of-the-art fairness learning algorithms are jeopardized when distribution shifts are present in the test data.
For example, on the Adult dataset (Table~\ref{tab:adult}), LAFTR achieves $\Delta_{EO} = 11.96\%$ on the in-distribution test set, while that number increases to $13.95\%$ on the 2014 test set and $14.80\%$ on the 2015 test set, which is almost as unfair as the normally trained model.
Similarly, on the CelebA dataset with ``Chubby'' as the sensitive attribute (Table~\ref{tab:celeba_chubby}), LAFTR achieves $\Delta_{EO} = 31.02\%$ on the original CelebA test set, while that number increases to $35.88\%$ and $37.46\%$ under distribution shifts in user age and image quality, respectively.
\underline{Second}, we see that CUMA achieves the best robust fairness under distribution shifts in all evaluated settings, while maintaining similar in-distribution fairness and overall accuracy.
For example, on the Adult dataset (Table~\ref{tab:adult}), CUMA achieves $1.92\%$ and $1.96\%$ lower $\Delta_{EO}$ than the second-best performer (AdvDebias) on the 2014 and 2015 Census data, respectively.
On the CelebA dataset (Table~\ref{tab:celeba_chubby}) with ``Chubby'' as the sensitive attribute, CUMA achieves $2.62\%$ and $2.74\%$ lower $\Delta_{EO}$ than the second-best performers under distribution shifts in user age and image quality, respectively.
Moreover, still in Table~\ref{tab:celeba_chubby}, while CUMA and LAFTR achieve almost identical in-distribution fairness (their $\Delta_{EO}$ values on the original test set differ by $0.5\%$), CUMA preserves fairness under distribution shifts (with only around a $1.6\%$ increase in $\Delta_{EO}$), whereas the fairness achieved by LAFTR is significantly worse, especially under the image-quality distribution shift, where its $\Delta_{EO}$ increases by $6.44\%$.
\subsection{Ablation Study} \label{sec:abla}
In this section, we check the sensitivity of CUMA with respect to its hyper-parameters: the loss trade-off parameters $\alpha$ and $\gamma$ in Eq.~(\ref{eq:overall_loss}) and the constant $h$ in Eq.~(\ref{eq:curvature_aprx}).
Results are shown in Table~\ref{tab:abla_cuma}.
When fixing $\alpha=1$, the best trade-off between overall accuracy and robust fairness is achieved at around $\gamma=1$, which we use as the default $\gamma$.
Varying the value of $h$ hardly affects the performance of CUMA.
\begin{table}[ht]
\centering
\vspace{-1em}
\caption{Ablation study results on the loss trade-off parameters $\alpha$ and $\gamma$, and the constant $h$, in the CUMA algorithm. Results are reported on the C\&C dataset with ``RacePctBlack'' as the sensitive attribute.}
\resizebox{1.0\linewidth}{!}
{
\begin{tabular}{c|ccc|ccc|cc}
\hline
& \multicolumn{3}{c|}{$\alpha$} & \multicolumn{3}{c|}{$\gamma$} & \multicolumn{2}{c}{$h$} \\
& 0.1 & 1 & 10 & 0.1 & 1 & 10 & 0.1 & 1 \\
\hline
Accuracy & 86.94 & 85.40 & 83.75 & 85.19 & 85.40 & 84.79 & 85.32 & 85.40 \\
\hline
\makecell{$\Delta_{EO}$\\with Gaussian noise} & 66.51 & 28.74 & 33.16 & 38.85 & 28.74 & 27.95 & 30.56 & 28.74 \\
\hline
\end{tabular}
}
\label{tab:abla_cuma}
\end{table}
\subsection{Trade-off Curves between Fairness and Accuracy}
\label{sec:appx-tradeoff-curves}
For CUMA and the baseline methods, we can obtain different trade-offs between fairness and accuracy by setting the loss function weights (e.g., $\alpha$ and $\gamma$) to different values. For example, the larger $\alpha$ is, the better the fairness and the worse the accuracy.
Such trade-off curves between fairness and accuracy for the different methods are shown in Figure~\ref{fig:curves}.
The closer a curve is to the top-left corner (i.e., larger accuracy and smaller $\Delta_{EO}$), the better the achieved Pareto frontier.
As we can see, our method achieves the best Pareto frontiers for both in-distribution fairness (left panel) and robust fairness under distribution shifts (middle and right panels).
\begin{figure}
\caption{Trade-off curves between fairness and accuracy of different methods. Results are reported on C\&C dataset with ``RacePctBlack'' as the sensitive attribute.}
\label{fig:curves}
\end{figure}
\section{Conclusion}
In this paper, we first identify the challenge of robust fairness: existing state-of-the-art in-distribution fairness learning methods suffer significant performance drops under unseen distribution shifts.
To solve this problem, we propose a novel robust fairness learning algorithm, termed Curvature Matching (CUMA), to simultaneously achieve both traditional in-distribution fairness and robust fairness.
Experiments show that CUMA achieves more robust fairness under unseen distribution shifts, without sacrificing either overall accuracy or in-distribution fairness compared with traditional in-distribution fairness learning methods.
\end{document}
\begin{document}
\title{Prym varieties and moduli of polarized Nikulin surfaces}
\author[G. Farkas]{Gavril Farkas}
\address{Humboldt-Universit\"at zu Berlin, Institut f\"ur Mathematik, Unter den Linden 6
\newline\texttt{}
\indent 10099 Berlin, Germany} \email{{\tt [email protected]}}
\thanks{}
\author[A. Verra]{Alessandro Verra}
\address{Universit\`a Roma Tre, Dipartimento di Matematica, Largo San Leonardo Murialdo
\newline \indent 1-00146 Roma, Italy}
\email{{\tt
[email protected]}}
\begin{abstract} We present a structure theorem for the moduli space $\mathcal{R}_7$ of Prym curves of genus $7$ as a projective bundle over the moduli space of $7$-nodal rational curves. The existence of this parametrization implies the unirationality of $\mathcal{R}_7$ and that of the moduli space of Nikulin surfaces of genus $7$, as well as the rationality of the moduli space of Nikulin surfaces of genus $7$ with a distinguished line. Using the results in genus $7$, we then establish that $\mathcal{R}_8$ is uniruled.
\end{abstract}
\maketitle
\vskip 6pt
\section{Introduction}
A polarized Nikulin surface of genus $g$ is a smooth polarized $K3$ surface $(S, \mathfrak{c})$, where $\mathfrak{c}\in \mbox{Pic}(S)$ with $\mathfrak{c}^2=2g-2$, equipped with a double cover
$f:\widetilde{S}\rightarrow S$ branched along disjoint rational curves $N_1, \ldots, N_8\subset S$, such that $\mathfrak{c}\cdot N_i=0$ for $i=1, \ldots, 8$. Denoting by $e\in \mbox{Pic}(S)$ the class defined by the equality $e^{\otimes 2}=\mathcal{O}_S(\sum_{i=1}^8 N_i)$, one forms the \emph{Nikulin lattice}
$$\mathfrak{N}:=\Bigl\langle \mathcal{O}_S(N_1), \ldots, \mathcal{O}_S(N_8), e\Bigr\rangle$$
and obtains a primitive embedding $j:\Lambda_g:=\mathbb Z\cdot [\mathfrak{c}]\oplus \mathfrak{N}\hookrightarrow \mbox{Pic}(S)$. Nikulin surfaces of genus $g$ form an irreducible $11$-dimensional moduli space $\mathcal{F}_g^{\mathfrak{N}}$ which has been studied from a lattice-theoretic point of view in \cite{Do1} and \cite{vGS}.
The connection between $\mathcal{F}_g^{\mathfrak{N}}$ and the moduli space $\mathcal{R}_g$ of pairs $[C, \eta]$, where $C$ is a curve of genus $g$ and $\eta\in \mbox{Pic}^0(C)[2]$ is a non-trivial $2$-torsion point, has been pointed out in \cite{FV} and used to describe $\mathcal{R}_g$ in small genus. Over $\mathcal{F}_g^{\mathfrak{N}}$ one considers the open set in a tautological ${\textbf P}^g$-bundle
$$\mathcal{P}_g^{\mathfrak{N}}:=\Bigl\{\bigl[S, j:\Lambda_g\hookrightarrow \mbox{Pic}(S), C\bigr]: C\in |\mathfrak{c}| \mbox{ is a smooth curve of genus } g\Bigr\},$$
which is endowed with the two projection maps
$$\xymatrix{
& \mathcal{P}_g^{\mathfrak{N}} \ar[dl]_{p_g} \ar[dr]^{\chi_g} & \\
\mathcal{F}_g^{\mathfrak{N}} & & \mathcal{R}_{g} \\
}$$
defined by $p_g([S, j, C]):=[S,j]$ and $\chi_g([S, j, C]):=[C, e_C:=e\otimes \mathcal{O}_C]$ respectively.
\vskip 4pt
Observe that $\mbox{dim}(\mathcal{P}_7^{\mathfrak{N}})=\mbox{dim}(\mathcal{R}_7)=18$. The map $\chi_7:\mathcal{P}_7^{\mathfrak{N}}\dashrightarrow \mathcal{R}_7$ is a birational isomorphism; precisely, $\mathcal{R}_7$ is birational to a Zariski locally trivial ${\textbf P}^7$-bundle over $\mathcal{F}_7^{\mathfrak{N}}$. This is reminiscent of Mukai's well-known result \cite{Mu}: The moduli space $\mathcal{M}_{11}$ of curves of genus $11$ is birational to a projective bundle over the moduli space $\mathcal{F}_{11}$ of polarized $K3$ surfaces of genus $11$. Note that $\mathcal{M}_{11}$ and $\mathcal{R}_7$ are the only known examples of moduli spaces of curves admitting a non-trivial fibre bundle structure over a moduli space of polarized $K3$ surfaces. Here we describe the structure of $\mathcal{F}_7^{\mathfrak{N}}$:
\begin{theorem}\label{unir7}
The Nikulin moduli space $\mathcal{F}_7^{\mathfrak{N}}$ is unirational. The Prym moduli space $\mathcal{R}_7$ is birationally isomorphic to a ${\textbf P}^7$-bundle over $\mathcal{F}_7^{\mathfrak{N}}$. It follows that $\mathcal{R}_7$ is unirational as well.
\end{theorem}
It is well-known that $\mathcal{R}_g$ is unirational for $g\leq 6$, see \cite{Do}, \cite{ILS}, \cite{V}, and even rational for $g\leq 4$, see \cite{Do2}, \cite{Cat}. On the other hand, the Deligne-Mumford moduli space $\overline{\mathcal{R}}_g$ of stable Prym curves of genus $g$ is a variety of general type for $g\geq 14$, whereas $\mbox{kod}(\overline{\mathcal{R}}_{12})\geq 0$, see \cite{FL} for the cases $g\neq 15$ and \cite{Br} for the case $g=15$. Nothing seems to be known about the Kodaira dimension of $\overline{\mathcal{R}}_g$, for $g=9, 10, 11$.
\vskip 3pt
We now discuss the structure of $\mathcal{F}_7^{\mathfrak{N}}$. For each positive $g$, we denote by $$\mathfrak{Rat}_g:=\overline{\mathcal{M}}_{0,2g}/\mathbb Z_2^{\oplus g}\rtimes \mathfrak{S}_g$$ the moduli space of $g$-nodal stable rational curves. The action of the group $\mathbb Z_2^{\oplus g}$ is given by permuting the marked points labeled by $\{1, 2\}, \ldots, \{2g-1,2g\}$ respectively, while the symmetric group $\mathfrak{S}_g$ acts by permuting the $2$-cycles $(1,2), \ldots, (2g-1,2g)$ respectively. The variety $\mathfrak{Rat}_g$, viewed as a subvariety of $\overline{\mathcal{M}}_g$, has been studied by Castelnuovo \cite{Cas} at the end of the 19th century in the course of his famous attempt to prove the Brill-Noether Theorem, as well as much more recently, for instance in \cite{GKM}\footnote{Unfortunately, in \cite{GKM} the notation $\overline{\mathcal{R}}_g$ (reserved for the Prym moduli space) is proposed for what we denote in this paper by $\mathfrak{Rat}_g$.}, in the context of determining the ample cone of $\overline{\mathcal{M}}_g$.
Using the identification $\mbox{Sym}^2({\textbf P}^1)\cong {\textbf P}^2$, we obtain a birational isomorphism
$$\mathfrak{Rat}_g\cong \mbox{Hilb}^g({\textbf P}^2) {/\!/} PGL(2),$$
where $PGL(2)\subset PGL(3)$ is regarded as the group of projective automorphisms of ${\textbf P}^2$ preserving the image of a fixed smooth conic in ${\textbf P}^2$.
\vskip 5pt
Let us fix once and for all a smooth rational quintic curve $R\subset {\textbf P}^5$. For general points $x_1, y_1, \ldots, x_{7}, y_7\in R$, we note that $\bigl[R, (x_1+y_1)+ \cdots +(x_{7}+y_{7})\bigr]\in \mathfrak{Rat}_7$. We denote by
$$N_1:=\langle x_1, y_1\rangle, \ldots, N_7:=\langle x_{7}, y_{7}\rangle \in G(2, 6),$$
the corresponding bisecant lines to $R$ and observe that $C:=R\cup N_1\cup \ldots \cup N_7$ is a nodal curve of genus $7$ and degree $12$ in ${\textbf P}^5$.
By writing down the Mayer-Vietoris sequence
for $C$, we find the following identifications:
$$H^0(C, \mathcal{O}_C(1))\cong H^0(\mathcal{O}_R(1)) \ \mbox{ and } \ H^0(C, \mathcal{O}_C(2))\cong H^0(\mathcal{O}_R(2))\oplus \Bigl(\oplus_{i=1}^7 H^0(\mathcal{O}_{N_i})\Bigr).$$
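As a dimension check for the second identification: $\mathcal{O}_C(2)$ has degree $24$ on the genus $7$ curve $C$, hence (assuming $\mathcal{O}_C(2)$ is non-special, as the identification implies) $h^0(C, \mathcal{O}_C(2))=24-7+1=18$, which matches
$$h^0(\mathcal{O}_R(2))+\sum_{i=1}^7 h^0(\mathcal{O}_{N_i})=11+7=18.$$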
It can easily be checked that the base locus
$$S:=\mathrm{Bs}\ \bigl|\mathcal{I}_{C/{\textbf P}^5}(2)\bigr|$$ is a
smooth $K3$ surface which is a complete intersection of three quadrics in ${\textbf P}^5$. Obviously, $S$ is equipped with the seven lines $N_1, \ldots, N_7$. In fact, $S$ carries an eighth line as well! If $H\in |\mathcal{O}_S(1)|$ is a hyperplane section, after setting
$$N_8:=2R+N_1+\cdots+N_7-2H\in \mbox{Div}(S),$$
we compute that $N_8^2=-2, N_8\cdot H=1$ and $N_8\cdot N_i=0$, for $i=1, \ldots, 7$. Therefore $N_8$ is equivalent to an effective divisor on $S$, which is embedded in ${\textbf P}^5$ as a line by the linear system $|\mathcal{O}_S(1)|$. Furthermore,
$$N_1+\cdots+N_8=2(R+N_1+\cdots+N_7-H)\in \mathrm{Pic}(S),$$
hence by denoting $e:=R+N_1+\cdots+N_7-H$, we obtain an embedding $\mathfrak{N}\hookrightarrow \mbox{Pic}(S)$. Moreover $C\cdot N_i=0$ for $i=1, \ldots, 8$ and we may view $\Lambda_7\hookrightarrow \mbox{Pic}(S)$. In this way $S$ becomes a Nikulin surface of genus $7$.
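As a consistency check, the intersection numbers claimed for $N_8$ follow directly from $R^2=-2$ (adjunction for the rational curve $R$), $H^2=\deg S=8$, $R\cdot H=\deg R=5$, $R\cdot N_i=2$, $N_i\cdot H=1$ and $N_i\cdot N_j=-2\delta_{ij}$, for $1\leq i, j\leq 7$:
$$N_8^2=4R^2+\sum_{i=1}^7 N_i^2+4H^2+4\sum_{i=1}^7 R\cdot N_i-8\, R\cdot H-4\sum_{i=1}^7 N_i\cdot H=-8-14+32+56-40-28=-2,$$
$$N_8\cdot H=2\, R\cdot H+\sum_{i=1}^7 N_i\cdot H-2H^2=10+7-16=1, \ \ \ N_8\cdot N_i=2\, R\cdot N_i+N_i^2-2\, N_i\cdot H=4-2-2=0.$$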
\vskip 5pt
We introduce the moduli space $\widehat{\mathcal{F}}_g^{\mathfrak{N}}$ of \emph{decorated} Nikulin surfaces consisting of polarized Nikulin surfaces $\bigl[S, j:\Lambda_g\hookrightarrow \mbox{Pic}(S)\bigr]$
of genus $g$, together with a distinguished line $N_8\subset S$ viewed as a component of the branch divisor of the double covering $f:\widetilde{S}\rightarrow S$. There is an obvious forgetful map $\widehat{\mathcal{F}}_g^{\mathfrak{N}}\rightarrow \mathcal{F}_g^{\mathfrak{N}}$ of degree $8$. Having specified $N_8\subset S$, we can also specify the divisor $N_1+\cdots+N_7\subset S$ such that $e^{\otimes 2}=\mathcal{O}_S(N_1+\cdots+N_7+N_8)$. We summarize what has been discussed so far and refer to Section 2 for further details:
\begin{theorem}\label{m14}
The rational map $\varphi:\mathfrak{Rat}_7\dashrightarrow \widehat{\mathcal{F}}_7^{\mathfrak{N}}$ given by
$$\varphi\Bigl(\bigl[R, (x_1+y_1)+ \cdots +(x_7+y_7)\bigr]\Bigr):=\Bigl[S, \mathcal{O}_S (R+N_1+\cdots+N_7), N_8\Bigr]$$
is a birational isomorphism.
\end{theorem}
A construction of the inverse map $\varphi^{-1}$ using the geometry of Prym canonical curves of genus $7$ is presented in Section 2.
The moduli space $\mathfrak{Rat}_g$ is related to the configuration space
$$U_g^2:=\mbox{Hilb}^g({\textbf P}^2){/\!/} PGL(3)$$ of $g$ unordered points in the plane. Using the isomorphism $PGL(3)/PGL(2)\cong {\textbf P}^5$, we observe in Section 2 that there exists a
(locally trivial) ${\textbf P}^5$-bundle structure $\mathfrak{Rat}_g\dashrightarrow U^2_g$. In particular $\mathfrak{Rat}_g$ is rational whenever $U^2_g$ is. Since the rationality of $U^2_7$ has been established by Katsylo \cite{Ka} (see also \cite{Bo}), we are led to the following result:
\begin{theorem}\label{rat8}
The moduli space $\widehat{\mathcal{F}}_7^{\mathfrak N}$ of decorated Nikulin surfaces of genus $7$ is rational.
\end{theorem}
Putting together Theorems \ref{m14} and \ref{rat8}, we conclude that there exists a dominant rational map ${\textbf P}^{18}\dashrightarrow \mathcal{R}_7$ of degree $8$. We are not aware of any dominant map from a rational variety to $\mathcal{R}_7$ of degree smaller than $8$. It would be very interesting to know whether $\mathcal{R}_7$ itself is a rational variety. We recall that although $\mathcal{M}_g$ is known to be rational for $g\leq 6$ (see \cite{Bo} and the references therein), the rationality of $\mathcal{M}_7$ is an open problem.
\vskip 3pt
We sum up the construction described above in the following commutative diagram:
$$
\xymatrix{
\overline{\mathcal{M}}_{0,14} \ar[r]^{(2^7\cdot 7!):1} \ar@{-->}[d]_{} & \mathfrak{Rat}_{7} \ar@{-->}[d]_{\cong} & \\
\mathcal{F}_7^{\mathfrak{N}} & \widehat{\mathcal{F}}_7^{\mathfrak{N}} \ar[l]^{8:1} \ar[r]^{{\textbf P}^5} & U^2_7\\}
$$
The concrete geometry of $\mathcal{R}_7$ by means of polarized Nikulin surfaces has direct consequences concerning the Kodaira dimension of $\overline{\mathcal{R}}_8$. The projective bundle structure of $\mathcal{R}_7$ over $\mathcal{F}_7^{\mathfrak{N}}$ can be lifted to a boundary divisor of $\overline{\mathcal{R}}_8$. Denoting by $\pi:\overline{\mathcal{R}}_g\rightarrow \overline{\mathcal{M}}_g$ the map forgetting the Prym structure, one has the formula
$$
\pi^*(\delta_0)=\delta_0^{'}+\delta_0^{''}+2\delta_{0}^{\mathrm{ram}}\in CH^1(\overline{\mathcal{R}}_g),
$$
where $\delta_0^{'}:=[\Delta_0^{'}], \, \delta_0^{''}:=[\Delta_0^{''}]$, and $\delta_0^{\mathrm{ram}}:=[\Delta_0^{\mathrm{ram}}]$ are boundary divisor classes on $\overline{\mathcal{R}}_g$ whose meaning will be recalled in Section 3. Note that up to a $\mathbb Z_2$-factor, a general point of $\Delta_0^{'}$ corresponds to a $2$-pointed Prym curve of genus $7$, for which we apply our Theorem \ref{unir7}. We establish the following result:
\begin{theorem}\label{r8}
The moduli space $\overline{\mathcal{R}}_8$ is uniruled.
\end{theorem}
Using the parametrization of $\mathcal{R}_7$ via Nikulin surfaces, we construct a sweeping curve $\Gamma$ of the boundary divisor $\Delta_0^{'}$ of $\overline{\mathcal{R}}_8$ such that $\Gamma \cdot \delta_0^{'}>0$ and $\Gamma \cdot K_{\overline{\mathcal{R}}_8}<0$. This implies that the canonical class $K_{\overline{\mathcal{R}}_8}$ cannot be pseudoeffective, hence via \cite{BDPP}, the moduli space $\overline{\mathcal{R}}_8$ is uniruled. This way of showing uniruledness of a moduli space, though quite effective, does not lead to an \emph{explicit} uniruled parametrization of $\mathcal{R}_8$. In Section 3, we sketch an alternative, more geometric way of showing that $\mathcal{R}_8$ is uniruled, by embedding a general Prym curve of genus $8$ in a certain canonical surface. A rational curve through a general point of $\overline{\mathcal{R}}_8$ is then induced by a pencil on this surface.
\section{Polarized Nikulin surfaces}
We briefly recall some basics on Nikulin surfaces, while referring to \cite{vGS}, \cite{GS} and \cite{Mo} for details. A \emph{symplectic involution} $\iota$ on a smooth $K3$ surface $Y$ has $8$ fixed points and we denote by $\bar{Y}:=Y/\langle \iota \rangle$ the quotient. The surface $\bar{Y}$ has $8$ nodes. Letting $\sigma:\widetilde{S}\rightarrow Y$ be the blow-up of the fixed points, the involution $\iota$ lifts to an involution $\tilde{\iota}:\widetilde{S}\rightarrow \widetilde{S}$ fixing the eight $(-1)$-curves $E_1, \ldots, E_8\subset \widetilde{S}$. Denoting by $f:\widetilde{S}\rightarrow S$ the quotient map by the involution $\widetilde{\iota}$, we obtain a smooth $K3$ surface $S$, together with a primitive embedding of the Nikulin lattice $\mathfrak{N}\cong E_8(-2)\hookrightarrow \mbox{Pic}(S)$, where $N_i=f(E_i)$ for $i=1, \ldots, 8$. In particular, the sum of rational curves $N:=N_1+\cdots+N_8$ is an even divisor on $S$, that is, there exists a class $e\in \mbox{Pic}(S)$ such that $e^{\otimes 2}=\mathcal{O}_S(N_1+\cdots+N_8)$. The cover $f:\widetilde{S}\rightarrow S$ is branched precisely along the curves $N_1, \ldots, N_8$. The following diagram summarizes the notation introduced so far and will be used throughout the paper:
\begin{equation}\label{diagram}
\begin{CD}
{\widetilde{S}} @>{\sigma}>> {Y} \\
@V{f}VV @V{}VV \\
{S} @>{}>> {\bar{Y}} \\
\end{CD}
\end{equation}
Nikulin \cite{Ni} p.262 showed that the possible configurations of even sets of disjoint $(-2)$-curves on a $K3$ surface $S$ are only those consisting of either $8$ curves (in which case $S$ is a Nikulin surface as defined in this paper), or of $16$ curves, in which case $S$ is a Kummer surface. From this point of view, Nikulin surfaces appear naturally as the \emph{Prym analogues} of $K3$ surfaces.
\begin{definition} A \emph{polarized Nikulin surface} of genus $g$ consists of a smooth $K3$ surface $S$ and a primitive embedding $j$ of the lattice $\Lambda_g=\mathbb Z\cdot \mathfrak{c}\oplus \mathfrak N\hookrightarrow \mbox{Pic}(S)$, such that $\mathfrak{c}^2=2g-2$ and the class $j(\mathfrak{c})$ is nef.
\end{definition}
Polarized Nikulin surfaces of genus $g$ form an irreducible $11$-dimensional moduli space $\mathcal{F}_g^{\mathfrak{N}}$, see for instance \cite{Do1}. Structure theorems for $\mathcal{F}_g^{\mathfrak{N}}$ for genus $g\leq 6$ have been established in \cite{FV}. For instance the following result is proven in \emph{loc.cit.} for Nikulin surfaces of genus $g=6$. Let $V=\mathbb C^5$ and fix a smooth quadric $Q\subset {\textbf P}(V)$. Then one has a birational isomorphism, which, in particular, shows that $\mathcal{F}_6^{\mathfrak{N}}$ is unirational:
$$\mathcal{F}_6^{\mathfrak{N}}\stackrel{\cong}\dashrightarrow G\Bigl(7, \bigwedge^2 V \Bigr)^{\mathrm{ss}}{/\!/} \mathrm{Aut}(Q).$$
On the other hand, fundamental facts about $\mathcal{F}_g^{\mathfrak{N}}$ are still not known. For instance, it is not clear whether $\mathcal{F}_g^{\mathfrak{N}}$ is a variety of general type for large $g$. Nikulin surfaces have been recently used decisively in \cite{FK} to prove the Prym-Green Conjecture on syzygies of general Prym-canonical curves of even genus.
\vskip 3pt
For a polarized Nikulin surface $(S, j)$ of genus $g$ as above, we set $C:=j(\mathfrak{c})$ and then $H\equiv C-e\in \mbox{Pic}(S)$. It is shown in \cite{GS} that for any Nikulin surface $S$ having minimal Picard lattice $\mbox{Pic}(S)=\Lambda_g$, the linear system $\mathcal{O}_S(H)$ is very ample for $g\geq 6$. We compute that $H^2=2g-6$ and denote by $\phi_H:S\rightarrow {\textbf P}^{g-2}$ the corresponding embedding. Since $N_i\cdot H=1$ for $i=1, \ldots,8$, it follows that the images $\phi_H(N_i)\subset {\textbf P}^{g-2}$ are lines. The existence of two closely linked distinguished polarizations $\mathcal{O}_S(C)$ and $\mathcal{O}_S(H)$ of genus $g$ and $g-2$ respectively on any Nikulin surface is one of the main sources for the rich geometry of the moduli space $\mathcal{F}_g^{\mathfrak{N}}$ for $g\leq 6$, see \cite{FV} and \cite{vGS}.
\vskip 4pt
Suppose that $\bigl[S, j:\Lambda_7\hookrightarrow \mbox{Pic}(S)\bigr]$ is a polarized Nikulin surface of genus $7$. In this case $$\phi_H:S\hookrightarrow {\textbf P}^5$$ embeds $S$ as a surface of degree $8$ which is a complete intersection of three quadrics. For each smooth curve $C\in |\mathcal{O}_S(j(\mathfrak{c}))|$, we have that $[C, \eta:=e_C]\in \mathcal{R}_7$. Since $\mathcal{O}_C(1)=K_C\otimes \eta$, it follows that the restriction $\phi_{H|C}:C\hookrightarrow {\textbf P}^5$ is a Prym-canonically embedded curve of genus $7$. This assignment gives rise to the map $\chi_7:\mathcal{P}_7^{\mathfrak{N}}\rightarrow \mathcal{R}_7$.
\vskip 3pt
Conversely, to a general Prym curve $[C, \eta]\in \mathcal{R}_7$ we associate a unique Nikulin surface of genus $7$ as follows. We consider the Prym-canonical embedding $\phi_{K_C\otimes \eta}:C \hookrightarrow {\textbf P}^5$ and observe that $S:=\mathrm{Bs}\ \bigl|\mathcal{I}_{C/{\textbf P}^5}(2)\bigr|$ is a complete intersection of three quadrics, that is, if smooth, a $K3$ surface of degree $8$. In fact, $S$ is smooth for a general choice of $[C, \eta]\in \mathcal{R}_7$, see \cite{FV} Proposition 2.3. We then set $N\equiv 2(C-H)\in \mbox{Pic}(S)$ and note that $N^2=-16$ and $N\cdot H=8$. Using the cohomology exact sequence
$$0\longrightarrow H^0(S, \mathcal{O}_S(N-C))\longrightarrow H^0(S, \mathcal{O}_S(N)) \longrightarrow H^0(C, \mathcal{O}_C(N)) \longrightarrow 0,$$
since $\mathcal{O}_C(N)$ is trivial, we conclude that the divisor $N$ is effective on $S$. It is shown in \emph{loc.cit.} that for a general $[C, \eta]\in \mathcal{R}_7$, we have a splitting
$N=N_1+\cdots+N_8$ into a sum of $8$ disjoint lines with $C\cdot N_i=0$ for $i=1, \ldots, 8$. This turns $S$ into a Nikulin surface and explains the birational isomorphism
$$\chi_7^{-1}:\mathcal{R}_7\stackrel{\cong}\dashrightarrow \mathcal{P}_7^{\mathfrak N}$$
referred to in the Introduction.
\vskip 3pt
Suppose now that $\bigl[S, \mathcal{O}_S(C), N_8\bigr]\in \widehat{\mathcal{F}}_7^{\mathfrak{N}}$, that is, we single out a $(-2)$-curve in the Nikulin lattice. Writing $e^{\otimes 2}=\mathcal{O}_S(N_1+\cdots+N_8)$, the choice of $N_8$ also determines the sum of the seven remaining lines $N_1+\cdots+N_7$, where $H\cdot N_i=1$, for $i=1, \ldots, 8$. We compute
$$(C-N_1-\cdots-N_7)^2=-2 \ \ \mbox{ and } \ \ (C-N_1-\cdots-N_7)\cdot H=5,$$ in particular, there exists an effective divisor $R$ on $S$, with $R\equiv C-N_1-\cdots-N_7$. Note also that $R\cdot N_i=2$, for $i=1, \ldots, 7$, that is, $R\subset {\textbf P}^5$ comes endowed with seven bisecant lines.
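Indeed, both intersection numbers are immediate from $C^2=12$, $C\cdot N_i=0$, $N_i\cdot N_j=-2\delta_{ij}$, $N_i\cdot H=1$ and $C\cdot H=C\cdot(C-e)=C^2=12$:
$$(C-N_1-\cdots-N_7)^2=C^2+\sum_{i=1}^7 N_i^2=12-14=-2 \ \ \mbox{ and } \ \ (C-N_1-\cdots-N_7)\cdot H=12-7=5.$$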
\begin{proposition}\label{vanish} For a decorated Nikulin surface $\bigl[S, \mathcal{O}_S(C), N_8\bigr]\in \widehat{\mathcal{F}}_7^{\mathfrak{N}}$ satisfying $\mathrm{Pic}(S)=\Lambda_7$, we have that
$H^1(S, \mathcal{O}_S(C-N_1-\cdots-N_7))=0$. In particular, $$R\in |\mathcal{O}_S(C-N_1-\cdots-N_7)|$$ is a smooth rational quintic curve on $S$.
\end{proposition}
\begin{proof}
Assume by contradiction that the curve $R\subset S$ is reducible. In that case, there exists a smooth irreducible $(-2)$-curve $Y\subset S$, such that $Y\cdot R<0$ and
$H^0(S, \mathcal{O}_S(R-Y))\neq 0$. Assuming $\mbox{Pic}(S)$ is generated by $C$, $N_1, \ldots, N_8$ and the class $e=(N_1+\cdots+N_8)/2$, there exist integers $a, b, c_1, \ldots, c_8\in \mathbb Z$, such that
$$Y\equiv a\cdot C+\Bigl(c_1+\frac{b}{2}\Bigr)\cdot N_1+\cdots+\Bigl(c_8+\frac{b}{2}\Bigr)\cdot N_8.$$
Setting $b_i:=c_i+\frac{b}{2}$, the numerical hypotheses on $Y$ can be rewritten in the following form:
\begin{equation}\label{ineq}
b_1^2+\cdots+b_8^2=6a^2+1 \ \mbox{ and } 6a+b_1+\cdots+b_8\leq -1.
\end{equation}
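For instance, the first equality in (\ref{ineq}) is simply the condition $Y^2=-2$ expanded in the basis above, using $C^2=12$, $C\cdot N_i=0$ and $N_i\cdot N_j=-2\delta_{ij}$:
$$-2=Y^2=12a^2-2\bigl(b_1^2+\cdots+b_8^2\bigr).$$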
Since $Y$ is effective, we find that $a\geq 0$ (use that $C\subset S$ is nef). Applying the same considerations to the effective divisor $R-Y$,
we obtain that $a\in \{0,1\}$.
\vskip 3pt
If $a=0$, then $Y\equiv b_1 N_1+\cdots+b_8 N_8\geq 0$, hence $b_i\geq 0$ for $i=1, \ldots, 8$, which contradicts the inequality
$b_1+\cdots+b_8\leq -1$, so this case does not appear.
\vskip 2pt
If $a=1$, then $R-Y\equiv -(1+b_1) N_1-\cdots -(1+b_7) N_7-b_8 N_8\geq 0$, therefore
$b_8\leq 0$ and $b_i\leq -1$ for $i=1, \ldots,7$. From (\ref{ineq}), we obtain that $b_8=0$ and $b_1=\cdots=b_7=-1$. Thus $Y\equiv R$, which is a contradiction, for $Y$ was assumed to be a proper irreducible component of $R$.
\end{proof}
Retaining the notation above, we obtain a map $\psi:\widehat{\mathcal{F}}_7^{\mathfrak{N}}\dashrightarrow \mathfrak{Rat}_7$, defined by
$$\psi\Bigl([S, \mathcal{O}_S(C), N_8]\Bigr):=[R, \ N_1\cdot R+\cdots+N_7\cdot R],$$
where the cycle $N_i\cdot R\in \mbox{Sym}^2(R)$ is regarded as an effective divisor of degree $2$ on $R$. The map $\psi$ is regular over the dense open subset of $\widehat{\mathcal{F}}_7^{\mathfrak{N}}$ consisting of Nikulin surfaces having the minimal Picard lattice $\Lambda_7$. We are going to show that $\psi$ is a birational isomorphism by explicitly constructing its inverse. This will be the map $\varphi$ described in the Introduction in Theorem \ref{m14}.
\vskip 5pt
We fix a smooth rational quintic curve $R\subset {\textbf P}^5$ and recall the canonical identification
\begin{equation}\label{ident}
\bigl|\mathcal{I}_{R/{\textbf P}^5}(2)\bigr|=\bigl|\mathcal{O}_{\mathrm{Sym}^2(R)}(3)\bigr|
\end{equation}
between the linear system of quadrics containing $R\subset {\textbf P}^5$ and that of plane cubics.
Here we use the isomorphism
$\mbox{Sym}^2(R)\stackrel{\cong}\longrightarrow {\textbf P}^2,$ under which to a quadric $Q\in H^0({\textbf P}^5, \mathcal{I}_{R/{\textbf P}^5}(2))$ one assigns the symmetric correspondence
$$\Sigma_Q:=\{x+y\in \mbox{Sym}^2(R): \langle x,y\rangle \subset Q\},$$
which is a cubic curve in $\mbox{Sym}^2(R)$.
Let $N_1, \ldots, N_7$ be general bisecant lines to $R$ and consider the nodal curve of genus $7$
$$C:=R\cup N_1\cup \ldots \cup N_7\subset {\textbf P}^5.$$
\begin{proposition}
For a general choice of the bisecants $N_1, \ldots, N_7$ of the curve $R\subset {\textbf P}^5$, the base locus
$$S:=\mathrm{Bs}\ \bigl|\mathcal{I}_{C/{\textbf P}^5}(2)\bigr|$$
is a smooth $K3$ surface of degree $8$.
\end{proposition}
\begin{proof}
The bisecant line $N_i$ is determined by the degree $2$ divisor $N_i\cdot R\in \mbox{Sym}^2(R)$. Under the identification (\ref{ident}), the quadrics containing the line $N_i$ are identified with the cubics in $\bigl|\mathcal{O}_{\mathrm{Sym}^2(R)}(3)\bigr|$ that pass through the point $N_i\cdot R$. It follows that the linear system $\bigl |\mathcal{I}_{C/{\textbf P}^5}(2)\bigr|$ corresponds to the linear system of cubics in $\mbox{Sym}^2(R)$ passing through $7$ general points. Since the secants $N_i$ (and hence the points $N_i\cdot R\in \mbox{Sym}^2(R)$) have been chosen to be general, we obtain that $\mbox{dim } |\mathcal{I}_{C/{\textbf P}^5}(2)|=2$.
We have proved in Proposition \ref{vanish} that for a general Nikulin surface $[S, \mathcal{O}_S(C)]\in \mathcal{F}_7^{\mathfrak{N}}$ we have
$$H^1(S, \mathcal{O}_S(C-N_1-\cdots-\hat{N}_i-\cdots-N_8))=0,$$ and the corresponding curves $R_i\in \bigl|\mathcal{O}_S(C-N_1-\cdots-\hat{N}_i-\cdots-N_8)\bigr|$ are smooth rational quintics for $i=1, \ldots, 8$. In particular, the morphism $\psi:\widehat{\mathcal{F}}_7^{\mathfrak{N}}\dashrightarrow \mathfrak{Rat}_7$ is defined on all components of $\widehat{\mathcal{F}}_7^{\mathfrak{N}}$ and the image of each component is an element of $\mathfrak{Rat}_7$ (a priori, one does not know that $\widehat{\mathcal{F}}_7^{\mathfrak{N}}$ is irreducible; this will follow from our proof). For such a point in $\mbox{Im}(\psi)$, it follows that the base locus $\mathrm{Bs}\ \bigl|\mathcal{I}_{C/{\textbf P}^5}(2)\bigr|$ is a smooth surface, in fact a general Nikulin surface of genus $7$. Hence $[S,\mathcal{O}_S(C), N_i]\in \mbox{Im}(\varphi)$ for $i=1, \ldots, 8$. Since $\mathfrak{Rat}_7$ is an irreducible variety, the conclusion follows.
\end{proof}
\noindent \emph{Proof of Theorem \ref{m14}}. As explained in the Introduction, the map $\varphi:\mathfrak{Rat}_7\dashrightarrow \widehat{\mathcal{F}}_7^{\mathfrak{N}}$ is well-defined and clearly the inverse of $\psi$. In particular, it follows that $\widehat{\mathcal{F}}_7^{\mathfrak{N}}$ is also irreducible (and in fact unirational).
$\Box$
\section{Configuration spaces of points in the plane}
Throughout this section we use the identification $\mbox{Sym}^2({\textbf P}^1)\cong {\textbf P}^2$ induced by the map $\rho:{\textbf P}^1\times {\textbf P}^1\rightarrow {\textbf P}^2$ obtained by taking the projection of the Segre embedding of ${\textbf P}^1\times {\textbf P}^1$ to the space of symmetric tensors, that is, $\rho\bigl([a_0,a_1],[b_0,b_1]\bigr)=[a_0 b_0, a_1b_1, a_0b_1+a_1b_0]$. We identify the diagonal $\Delta\subset {\textbf P}^1\times {\textbf P}^1$ with its image $\rho(\Delta)$ in ${\textbf P}^2$. We view $PGL(2)$ as the subgroup of automorphisms of ${\textbf P}^2$ that preserve the conic $\Delta$. Furthermore, the choice of $\Delta$ induces a canonical identification
$$PGL(3)/PGL(2)=|\mathcal{O}_{{\textbf P}^2}(2)|={\textbf P}^5.$$
For $g\geq 5$, we consider the projection
$$\beta:\mathfrak{Rat}_g:=\mbox{Hilb}^g({\textbf P}^2){/\!/} SL(2)\rightarrow \mbox{Hilb}^g({\textbf P}^2){/\!/} SL(3)=:U_g^2.$$
\begin{definition}
If $X$ is a del Pezzo surface of degree $2$, a \emph{contraction} of $X$ is a morphism $f:X\rightarrow {\textbf P}^2$ realizing $X$ as the blow-up of ${\textbf P}^2$ at $7$ points in general position.
\end{definition}
Specifying a pair $(X,f)$ as above amounts to giving a \emph{plane model} of the del Pezzo surface, that is, a pair $(X,L)$, where $X$ is a del Pezzo surface with $K_X^2=2$ and $L\in \mbox{Pic}(X)$ is such that $L^2=1$ and $K_X\cdot L=-3$. Therefore $U_7^2$ is the GIT moduli space of pairs $(X,f)$ (or equivalently of pairs $(X,L)$) as above.
\begin{proposition}\label{descent}
The morphism $\beta: \mathrm{Hilb}^g({\textbf P}^2){/\!/} SL(2)\rightarrow U_g^2$ is a locally trivial ${\textbf P}^5$-fibration.
\end{proposition}
\begin{proof}
Having fixed the conic $\Delta\subset {\textbf P}^2$, we have an identification ${\textbf P}^2\cong \mbox{Sym}^2(\Delta)\cong ({\textbf P}^2)^{\vee}$, that is, we view points in $\mbox{Sym}^2(\Delta)$ as lines in ${\textbf P}^2$. A general point $D\in \mbox{Hilb}^g({\textbf P}^2)$ corresponds to a union $D=\ell_1+\cdots + \ell_g$ of $g$ lines in ${\textbf P}^2$, such that $\mbox{Aut}(\{\ell_1, \ldots, \ell_g\})=1$.
We consider the rank $6$ vector bundle $\mathcal{E}$ over $\mbox{Hilb}^g({\textbf P}^2)$ with fibre
$$\mathcal{E}(\ell_1+\cdots+\ell_g):=H^0\bigl(\mathcal{O}_{\ell_1+\cdots+\ell_g}(2)\bigr).$$
Clearly $\mathcal{E}$ descends to a vector bundle $E$ over the quotient $U^2_g$. We then observe that one has a canonical identification ${\textbf P}(E)\cong \mbox{Hilb}^g({\textbf P}^2){/\!/} SL(2)$, or more geometrically, $\mathfrak{Rat}_g$ is the moduli space of pairs consisting of an unordered configuration of $g$ lines and a conic in ${\textbf P}^2$. The birational isomorphism
${\textbf P}(E)\rightarrow \mbox{Hilb}^g({\textbf P}^2){/\!/} SL(2) $ is given by the assignment
$$\Bigl(\ell_1+\cdots+\ell_g, Q\Bigr) \mbox{ mod } SL(3)\mapsto \sigma(\ell_1)+\cdots+\sigma(\ell_g) \mbox{ mod } SL(2),$$
where $\sigma \in SL(3)$ is an automorphism such that $\sigma(Q)=\Delta$.
\end{proof}
\vskip 3pt
\noindent \emph{Proof of Theorem \ref{rat8}}. We have established that the moduli space $\widehat{\mathcal{F}}_7^{\mathfrak{N}}$ is birationally isomorphic to the projectivization of a ${\textbf P}^5$-bundle
over $U_7^2$. Since $U_7^2$ is rational, cf. \cite{Bo} Theorem 2.2.4.2, we conclude.
$\Box$
\begin{remark} In view of Theorem \ref{rat8}, it is natural to ask whether there exists a rational \emph{modular} degree $8$ cover $\widehat{\mathcal{R}}_7\rightarrow \mathcal{R}_7$ which is a locally trivial ${\textbf P}^7$-bundle over the rational variety $\widehat{\mathcal{F}}_7^{\mathfrak{N}}$, such that the following diagram is commutative:
$$
\xymatrix{
\widehat{\mathcal{R}}_{7} \ar[r]^{?} \ar[d]_{8:1} & \widehat{\mathcal{F}}_{7}^{\mathfrak{N}} \ar[d]_{8:1} \ar[r]^{\cong} & \mathfrak{Rat}_7 \\
\mathcal{R}_7 \ar[r]^{{\textbf P}^7} & \mathcal{F}_7^{\mathfrak{N}} \\}
$$
One candidate for the cover $\widehat{\mathcal{R}}_7$ is the universal singular locus of the Prym-theta divisor,
$$\widehat{\mathcal{R}}_7:=\Bigl\{[C, \eta, L]: [C, \eta]\in \mathcal{R}_7 \mbox{ and } L\in \mathrm{Sing}(\Xi)/\pm\Bigr\},$$
where $\mbox{Sing}(\Xi)=\{L\in \mbox{Pic}^{2g-2}(\widetilde{C}):\mathrm{Nm}_f(L)=K_C, h^0(\widetilde{C},L)\geq 4, h^0(\widetilde{C},L)\equiv 0 \mbox{ mod } 2\}.$
It is shown in \cite{De} that for a general point $[C, \eta]\in \mathcal{R}_7$, the locus $\mbox{Sing}(\Xi)$ is reduced and consists of $16$ points, so indeed
$\mbox{deg}(\widehat{\mathcal{R}}_7/\mathcal{R}_7)=8$. So far we have been unable to construct the required map $\widehat{\mathcal{R}}_7\rightarrow \widehat{\mathcal{F}}_7^{\mathfrak{N}}$
and we leave this as an open question.
\end{remark}
\section{The uniruledness of $\overline{\mathcal{R}}_8$}
We now explain how our structure results on $\mathcal{F}_7^{\mathfrak{N}}$ and $\mathcal{R}_7$ lead to an easy proof of the uniruledness of $\overline{\mathcal{R}}_8$. We begin by reviewing a few facts about the compactification $\overline{\mathcal{R}}_g$ of $\mathcal{R}_g$ by means of stable Prym curves, see \cite{FL} for details. The geometric points of the coarse moduli space $\overline{\mathcal{R}}_g$ are triples $(X, \eta, \beta)$, where $X$ is a quasi-stable curve of genus $g$, $\eta\in \mbox{Pic}(X)$ is a line bundle of total degree $0$ such that $\eta_{E}=\mathcal{O}_E(1)$ for each smooth rational component $E\subset X$ with $|E\cap \overline{X-E}|=2$ (such a component is said to be \emph{exceptional}), and $\beta:\eta^{\otimes 2}\rightarrow \mathcal{O}_X$ is a sheaf homomorphism whose restriction to any non-exceptional component is an isomorphism. If $\pi:\overline{\mathcal{R}}_g\rightarrow \overline{\mathcal{M}}_g$ is the map dropping the Prym structure, one has the formula \cite{FL}
\begin{equation}\label{pullbackrg}
\pi^*(\delta_0)=\delta_0^{'}+\delta_0^{''}+2\delta_{0}^{\mathrm{ram}}\in CH^1(\overline{\mathcal{R}}_g),
\end{equation}
where $\delta_0^{'}:=[\Delta_0^{'}], \, \delta_0^{''}:=[\Delta_0^{''}]$, and $\delta_0^{\mathrm{ram}}:=[\Delta_0^{\mathrm{ram}}]$ are irreducible boundary divisor classes on $\overline{\mathcal{R}}_g$, which we describe by specifying their respective general points.
\vskip 3pt
We choose a general point $[C_{xy}]\in \Delta_0\subset \overline{\mathcal{M}}_g$ corresponding to a smooth $2$-pointed curve $(C, x, y)$ of genus $g-1$ and consider the normalization map $\nu:C\rightarrow C_{xy}$, where $\nu(x)=\nu(y)$. A general point of $\Delta_0^{'}$ (respectively of $\Delta_0^{''}$) corresponds to a pair $[C_{xy}, \eta]$, where $\eta\in \mbox{Pic}^0(C_{xy})[2]$ and $\nu^*(\eta)\in \mbox{Pic}^0(C)$ is non-trivial
(respectively, $\nu^*(\eta)=\mathcal{O}_C$). A general point of $\Delta_{0}^{\mathrm{ram}}$ is a Prym curve of the form $(X, \eta)$, where $X:=C\cup_{\{x, y\}} {\textbf P}^1$ is a quasi-stable curve with $p_a(X)=g$ and $\eta\in \mbox{Pic}^0(X)$ is a line bundle such that $\eta_{{\textbf P}^1}=\mathcal{O}_{{\textbf P}^1}(1)$ and $\eta_C^{\otimes 2}=\mathcal{O}_C(-x-y)$. In this case, the choice of the homomorphism $\beta$ is uniquely determined by $X$ and $\eta$. Therefore, we drop $\beta$ from the notation of such a Prym curve.
There are similar decompositions of the pull-backs $\pi^*([\Delta_j])$ of the other boundary divisors $\Delta_j\subset \overline{\mathcal{M}}_g$ for $1\leq j\leq \lfloor \frac{g}{2}\rfloor$, see again \cite{FL} Section 1 for details.
\vskip 4pt
Via Nikulin surfaces we construct a sweeping curve for the divisor $\Delta^{'}_0\subset \overline{\mathcal{R}}_8$. Let us start with a general element of $\Delta_0^{'}$ corresponding to a smooth $2$-pointed curve $[C,x,y]\in \mathcal{M}_{7,2}$ and a $2$-torsion point $\eta\in \mbox{Pic}^0(C_{xy})[2]$ and set $\eta_C:=\nu^*(\eta)\in \mbox{Pic}^0(C)[2]$. Using \cite{FV} Theorem 0.2, there exists a Nikulin surface $f:\widetilde{S}\rightarrow S$ branched along $8$ rational curves $N_1, \ldots, N_8\subset S$ and an embedding $C\subset S$, such that $C\cdot N_i=0$ for $i=1, \ldots, 8$ and $\eta_C=e_C$, where $e\in \mbox{Pic}(S)$ is the even class with $e^{\otimes 2}=\mathcal{O}_S(N_1+\cdots+N_8)$. We can also assume that $\mbox{Pic}(S)=\Lambda_7$. By moving $C$ in its linear system on $S$, we may assume that $x,y\notin N_1\cup \ldots \cup N_8$, and we set $\{x_1, x_2\}=f^{-1}(x)$ and $\{y_1,y_2\}=f^{-1}(y)$.
\vskip 5pt
We pick a Lefschetz pencil $\Lambda:=\{C_t\}_{t\in {\textbf P}^1}$ consisting of curves on $S$ passing through the points $x$ and $y$. Since the locus $\bigl\{D\in |\mathcal{O}_S(C)|: D\supset N_i\bigr\}$ is a hyperplane in $|\mathcal{O}_S(C)|$, it follows that there are precisely eight distinct values $t_1, \ldots, t_8\in {\textbf P}^1$ such that
$$C_{t_i}=:C_i=N_i+D_i,$$
where $D_i$ is a smooth curve of genus $6$ which contains $x$ and $y$ and intersects $N_i$ transversally at two points. For each $t\in {\textbf P}^1-\{t_1, \ldots, t_8\}$, we may assume that $C_t$ is a smooth curve and denoting $[\bar{C}_t:=C_t/x\sim y]\in \overline{\mathcal{M}}_8$, we have an exact sequence
$$0\longrightarrow \mathbb Z_2 \longrightarrow \mbox{Pic}^0(\bar{C}_t)[2]\longrightarrow \mbox{Pic}^0(C_t)[2]\longrightarrow 0.$$
In particular, there exist two distinct line bundles $\eta_t^{'}, \eta_t^{''}\in \mbox{Pic}^0(\bar{C}_t)$ such that
$$\nu_t^*(\eta_t^{'})=\nu_t^*(\eta_t^{''})=e_{C_t}.$$
Using the Nikulin surfaces, we can consistently distinguish $\eta_t^{'}$ from $\eta_t^{''}$. Precisely, $\eta_t^{'}$ corresponds to the admissible cover
$$f^{-1}(C_t)/x_1\sim y_1, x_2\sim y_2 \stackrel{2:1}\longrightarrow \bar{C}_t$$
whereas $\eta_t^{''}$ corresponds to the admissible cover
$$f^{-1}(C_t)/x_1\sim y_2, x_2\sim y_1 \stackrel{2:1}\longrightarrow \bar{C}_t.$$
\vskip 5pt
First we construct the pencil $R:=\{\bar{C}_t\}_{t\in {\textbf P}^1}\hookrightarrow \overline{\mathcal{M}}_8$. Formally, we have a fibration $u:\mbox{Bl}_{2g-2}(S)\rightarrow {\textbf P}^1$ induced by the pencil $\Lambda$ by blowing-up $S$ at its $2g-2$ base points (two of which are $x$ and $y$), which comes endowed with sections $E_x$ and $E_y$ given by the corresponding exceptional divisors.
The pencil $R$ is obtained from $u$ by identifying the sections $E_x$ and $E_y$ inside the surface $\mbox{Bl}_{2g-2}(S)$.
\begin{lemma}\label{int11}
The pencil $R\subset \overline{\mathcal{M}}_8$ has the following numerical characters:
$$R\cdot \lambda=g+1=8, \ \ R\cdot \delta_0=6g+16=58, \ \mbox{ and } \ R\cdot \delta_j=0 \ \ \mbox{ for } j=1, \ldots, 4.$$
\end{lemma}
\begin{proof}
We observe that $(R\cdot \lambda)_{\overline{\mathcal{M}}_8}=(\Lambda \cdot \lambda)_{\overline{\mathcal{M}}_7}=g+1=8$ and $(R\cdot \delta_j)_{\overline{\mathcal{M}}_8}=(\Lambda \cdot \delta_j)_{\overline{\mathcal{M}}_7}=0$ for $j\geq 1$. Finally, in order to determine the degree of the normal bundle of $\Delta_0$ along $R$, we write:
$$(R\cdot \delta_0)_{\overline{\mathcal{M}}_8}=(\Lambda \cdot \delta_0)_{\overline{\mathcal{M}}_7}+E_x^2+E_y^2=6g+18-2=58,$$
where we have used the well-known fact that a Lefschetz pencil of curves of genus $g$ on a $K3$ surface possesses $6g+18$ singular fibres (counted with their multiplicities) and that $E_x^2=E_y^2=-1$.
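This fibre count can itself be recovered by a standard Euler characteristic computation: if $u:X\rightarrow {\textbf P}^1$ is a fibration whose general fibre $F$ is a smooth curve of genus $g$ and whose $\delta$ singular fibres are irreducible one-nodal curves, then
$$e(X)=e({\textbf P}^1)\cdot e(F)+\delta=2(2-2g)+\delta.$$
In our case $X=\mbox{Bl}_{2g-2}(S)$, hence $e(X)=24+(2g-2)$, and solving for $\delta$ gives $\delta=6g+18$.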
\end{proof}
\vskip 4pt
Next, note that the family of Prym curves
$\Bigl\{\bigl[\bar{C}_t, \eta_t \bigr]: \nu_t^*(\eta_t)=e_{C_t}\Bigr\}_{t\in {\textbf P}^1}\hookrightarrow \overline{\mathcal{R}}_8$ splits into two irreducible components meeting in eight points. We consider
one of the irreducible components, say
$$\Gamma:=\Bigl\{[\bar{C}_t,\eta_t^{'}]\Bigr\}_{t\in {\textbf P}^1}\hookrightarrow \overline{\mathcal{R}}_8,$$
where the notation for $\eta_t^{'}$ has been explained above.
\begin{lemma}\label{int12}
The curve $\Gamma\subset \overline{\mathcal{R}}_8$ constructed above has the following numerical features:
$$\Gamma \cdot \lambda=8, \ \ \Gamma\cdot \delta_0^{'}=42, \ \ \Gamma\cdot \delta_0^{''}=0 \mbox{ and } \Gamma\cdot \delta_0^{\mathrm{ram}}=8.$$
Furthermore, $\Gamma$ is disjoint from all boundary components contained in $\pi^*(\Delta_j)$ for $j=1, \ldots, 4$.
\end{lemma}
\begin{proof} First we observe that $\Gamma$ intersects the divisor $\Delta_0^{\mathrm{ram}}$ transversally at the points corresponding to the values $t_1, \ldots, t_8\in {\textbf P}^1$, where the curve $C_{i}$ acquires the $(-2)$-curve $N_i$ as a component. Indeed, for each of these points $e^{\otimes (-2)}_{D_i}=\mathcal{O}_{D_i}(-N_i)$ and $e^{\vee}_{N_i}=\mathcal{O}_{N_i}(1)$, therefore $[C_i, e_{C_i}]\in \Delta_0^{\mathrm{ram}}$. Furthermore, using Lemma \ref{int11} we write $(\Gamma\cdot \lambda)_{\overline{\mathcal{R}}_8}=\pi_*(\Gamma)\cdot \lambda=8$ and
$$\Gamma\cdot (\delta_0^{'}+\delta_0^{''}+2\delta_0^{\mathrm{ram}})=\Gamma \cdot \pi^*(\delta_0)=R\cdot \delta_0=58.$$
Moreover, for $t\in {\textbf P}^1-\{t_1, \ldots, t_8\}$, the curve $f^{-1}(C_t)$ cannot split into two components, since otherwise $\mbox{Pic}(S)\varsupsetneq \Lambda_7$. Therefore $\Gamma\cdot \delta_0^{''}=0$ and hence $\Gamma\cdot \delta_0^{'}=42$.
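Explicitly, combining these vanishing statements with (\ref{pullbackrg}):
$$\Gamma\cdot \delta_0^{'}=\Gamma\cdot \pi^*(\delta_0)-\Gamma\cdot \delta_0^{''}-2\,\Gamma\cdot \delta_0^{\mathrm{ram}}=58-0-2\cdot 8=42.$$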
\end{proof}
\vskip 4pt
\noindent \emph{Proof of Theorem \ref{r8}.} The curve $\Gamma\subset \overline{\mathcal{R}}_8$ constructed above is a sweeping curve for the irreducible boundary divisor $\Delta_0^{'}$; in particular, it intersects non-negatively every irreducible effective divisor $D$ on $\overline{\mathcal{R}}_8$ different from $\Delta_0^{'}$. Since $\Gamma \cdot \delta_0^{'}>0$, it follows that $\Gamma$ intersects non-negatively \emph{every} pseudoeffective divisor on $\overline{\mathcal{R}}_8$. Using the formula for the canonical divisor \cite{FL}
$$K_{\overline{\mathcal{R}}_8}=13\lambda-2(\delta_0^{'}+ \delta_0^{''})-3\delta_0^{\mathrm{ram}}-\cdots \in CH^1(\overline{\mathcal{R}}_8),$$ applying Lemma \ref{int12} we obtain that $\Gamma \cdot K_{\overline{\mathcal{R}}_8}=-4<0$, thus $K_{\overline{\mathcal{R}}_8}\notin \mbox{Eff}(\overline{\mathcal{R}}_8)$. Using \cite{BDPP}, we conclude that $\overline{\mathcal{R}}_8$ is uniruled; in particular, its Kodaira dimension is negative.
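For the reader's convenience, this intersection number follows directly from the numerical features of $\Gamma$ recorded above, since $\Gamma$ is disjoint from all the remaining boundary divisors appearing in the formula for $K_{\overline{\mathcal{R}}_8}$:
$$\Gamma\cdot K_{\overline{\mathcal{R}}_8}=13\cdot 8-2\,(42+0)-3\cdot 8=104-84-24=-4.$$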
$\Box$
\vskip 5pt
\subsection{The uniruledness of the universal singular locus of the theta divisor over $\overline{\mathcal{R}}_8$.}
\vskip 4pt
In what follows we sketch a second proof of Theorem \ref{r8}, skipping some details; the construction provides a \emph{concrete} way of producing a rational curve through a general point of $\overline{\mathcal{R}}_8$. We fix a general element $[C, \eta]\in \mathcal{R}_8$ and denote by $f:\widetilde{C}\rightarrow C$ the corresponding unramified double cover and by $\iota:\widetilde{C}\rightarrow \widetilde{C}$ the involution exchanging the sheets of $f$. Following \cite{W}, we consider the singular locus of the Prym theta divisor, that is, the locus
$$V^3(C,\eta) =\mbox{Sing}(\Xi):= \bigl\{L\in \mathrm{Pic}^{14}(\widetilde{C}): \mathrm{Nm}_f(L) = K_C, \ h^0(\widetilde{C}, L)\geq 4 \hbox{ and } h^0(\widetilde{C}, L)\equiv 0 \ \mathrm{mod}\ 2\bigr\}.$$
It follows from \cite{W} that $V^3(C, \eta)$ is a smooth curve. We pick a line bundle $L\in V^3(C, \eta)$ with $h^0(\widetilde{C}, L)=4$ and a general point $\tilde{x}\in \widetilde{C}$, and consider the $\iota$-invariant part of the Petri map, that is,
$$\mu_0^+\bigl(L(-\tilde{x})\bigr): \mathrm{Sym}^2 H^0(\widetilde{C}, L(-\tilde{x})) \rightarrow H^0(C, K_C(-x)),$$
$$\ \ \mbox{ } s\otimes t+t\otimes s\mapsto s\cdot \iota^*(t)+t\cdot \iota^*(s),$$
where $x:=f(\tilde{x})\in C$.
We set ${\textbf P}^2:={\textbf P}\bigl(H^0(L(-\tilde{x}))^{\vee}\bigr)$, and, similarly to \cite[Section 2.2]{FV}, we consider the map $q:{\textbf P}^2\times {\textbf P}^2\rightarrow {\textbf P}^5$ obtained from the Segre embedding ${\textbf P}^2\times {\textbf P}^2\hookrightarrow {\textbf P}^8$ by projecting onto the space of symmetric tensors.
Let $\Sigma:=\mbox{Im}(q)\subset {\textbf P}^5$ be the determinantal cubic surface; its singular locus is the Veronese surface $V_4$. For a general choice of $[C, \eta]\in \mathcal{R}_8$, $L\in V^3(C,\eta)$, and $\tilde{x}\in \widetilde{C}$, the map
$\mu_0^+(L(-\tilde{x}))$ is injective; let $W\subset H^0(C, K_C(-x))$ be its $6$-dimensional image. Comparing dimensions, we observe that the kernel of the multiplication map
$$\mbox{Sym}^2(W)\longrightarrow H^0(C, K_C^{\otimes 2}(-2x))$$
is at least $2$-dimensional. In particular, there exist distinct quadrics $Q_1, Q_2\subset {\textbf P}^5$ such that
$$C\subset S:=Q_1\cap Q_2\cap \Sigma\subset {\textbf P}^5.$$
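The dimension count behind the existence of $Q_1$ and $Q_2$ runs as follows: $\mbox{dim } \mathrm{Sym}^2(W)=\binom{7}{2}=21$, while $\deg K_C^{\otimes 2}(-2x)=4g-4-2=26>2g-2$ for $g=8$, so by Riemann-Roch
$$h^0(C, K_C^{\otimes 2}(-2x))=26-g+1=19,$$
and the kernel of the multiplication map is at least $(21-19)$-dimensional.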
Since $\mbox{Sing}(\Sigma)=V_4$, the surface $S$ is singular at the $16$ points of intersection $Q_1\cap Q_2\cap V_4$, or equivalently, $\mbox{Sing}(S)\supseteq Q_1\cap Q_2\cap V_4$.
Assume now that we can find $(C, \eta, L, \tilde{x})$ as above such that $S$ has no singularities other than the $16$ points already exhibited, that is,
$$\mbox{Sing}(S)=Q_1\cap Q_2\cap V_4.$$
We obtain that $S$ is a $16$-nodal canonical surface, that is, $K_S=\mathcal{O}_S(1)$.
\vskip 4pt
Using the exact sequence $0\rightarrow H^0(S,\mathcal{O}_S)\rightarrow H^0(S, \mathcal{O}_S(C))\rightarrow H^0(\mathcal{O}_C(C))\rightarrow 0$, since $\mathcal{O}_C(C)=\mathcal{O}_C(x)$, we find that
$\mbox{dim } |\mathcal{O}_S(C)|=1$, that is, $C$ moves on $S$. Moreover the pencil $|\mathcal{O}_S(C)|$ has $x\in S$ as a base point.
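Explicitly, $h^0(S, \mathcal{O}_S)=1$ and, since $g(C)=8\geq 1$, also $h^0(C, \mathcal{O}_C(x))=1$; the exact sequence therefore gives
$$h^0(S, \mathcal{O}_S(C))=h^0(S, \mathcal{O}_S)+h^0(C, \mathcal{O}_C(x))=2.$$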
\vskip 3pt
We consider the surface $\widetilde{S}:=q^{-1}(S)\subset {\textbf P}^2\times {\textbf P}^2$. For each curve $C_t\in |\mathcal{O}_S(C)|$, we denote by $\widetilde{C}_t:=q^{-1}(C_t)\subset \widetilde{S}$ the corresponding double cover. Furthermore, we define a line bundle $L_t\in \mbox{Pic}^{14}(\widetilde{C}_t)$, by setting $\mathcal{O}_{\widetilde{C}_t}(1,0)=L_t(-\tilde{x})$ (in which case, $\mathcal{O}_{\widetilde{C}_t}(0,1)=\iota^*(L_t(-\tilde{x}))$).
\vskip 4pt
The construction we just explained induces a uniruled parametrization of the universal singular locus of the Prym theta divisor in genus $8$ (which dominates $\mathcal{R}_8$). Our result is conditional on a (very plausible) transversality assumption:
\begin{theorem}\label{r83}
Assume there exists $[C, \eta, L, x]$ as above, such that $S=Q_1\cap Q_2\cap \Sigma\subset {\textbf P}^5$ is a $16$-nodal canonical surface. Then the moduli space
$$\mathcal{R}_8^3:=\Bigl\{[C,\eta,L]: [C, \eta]\in \mathcal{R}_8, \ L\in V^3(C,\eta)\Bigr\}$$ is uniruled.
\end{theorem}
\begin{proof}
The assignment ${\textbf P}^1\ni t\mapsto [\widetilde{C}_t/C_t, L_t]\in \mathcal{R}_8^3$ described above provides a rational curve passing through a general point of $\mathcal{R}_8^3$.
\end{proof}
\end{document}
\begin{document}
\title[From Sobolev Inequality to Doubling]{From Sobolev Inequality to Doubling}
\author[Korobenko]{Lyudmila Korobenko}
\address{University of Calgary\\
Calgary, Alberta\\
[email protected]}
\author[Maldonado]{Diego Maldonado}
\address{Kansas State University\\
Manhattan, Kansas\\
[email protected]}
\author[Rios]{Cristian Rios}
\address{University of Calgary\\
Calgary, Alberta\\
[email protected]}
\thanks{Second author supported by the US National Science Foundation under grant DMS 1361754. Third author supported by the Natural Sciences and
Engineering Research Council of Canada.}
\subjclass[2010]{35J70, 35J60, 35B65, 46E35, 31E05, 30L99}
\keywords{Sobolev inequality, Moser iteration, subunit metric spaces, doubling condition}
\date{}
\dedicatory{}
\commby{Jeremy Tyson}
\begin{abstract} In various analytical contexts, it is proved that a weak Sobolev inequality implies a doubling property for the underlying measure.
\end{abstract}
\maketitle
\section{Introduction and main result}\label{sec:intro}
Let $d_E$ denote the usual Euclidean distance in $\mathbb{R}^n$, that is, $d_E(x,y):=|x-y|$ for every $x, y \in \mathbb{R}^n$. Given $y \in \mathbb{R}^n$ and $R > 0$ let $B(y,R):=\{x \in \mathbb{R}^n : d_E(y,x) < R\}$ denote the Euclidean ball centered at $y$ with radius $R$.
\begin{definition}
Let $\mu$ be a Borel measure on $(\mathbb{R}^n, d_E)$ such that $0 < \mu(B) < \infty$ for every Euclidean ball $B \subset \mathbb{R}^n$. Given $1\leq p < \infty$ and $1< \sigma <\infty$, we say that the triple $(\mathbb{R}^n, d_E, \mu)$ admits a \emph{weak $(p\sigma, p)$-Sobolev inequality} with a (finite) constant $C_S > 0$ if for every Euclidean ball $B:=B(y,R) \subset \mathbb{R}^n$ and every function $\varphi \in C_c^{0,1}(B)$ (Lipschitz and compactly supported on $B$) it holds true that
\begin{align}\label{weakEuclideanSob}
\left(\frac{1}{\mu(B)} \int_B |\varphi|^{p \sigma} d\mu \right)^{\frac{1}{p \sigma}} & \leq C_S R \left(\frac{1}{\mu(B)} \int_B |\nabla \varphi|^p d\mu \right)^{\frac{1}{p}}\\\nonumber
& + C_S \left(\frac{1}{\mu(B)} \int_B |\varphi|^p d\mu\right)^{\frac{1}{p}}.
\end{align}
\end{definition}
Our main result is
\begin{theorem}\label{thm:Sob=>doubling} Suppose that, for some $1\leq p < \infty$ and $1< \sigma <\infty$, the triple $(\mathbb{R}^n, d_E, \mu)$ admits a weak $(p\sigma, p)$-Sobolev inequality with a constant $C_S >0$. Then, the measure $\mu$ is doubling on $(\mathbb{R}^n, d_E)$. More precisely, there exists a constant $C_D \geq 1$, depending only on $p$, $\sigma$, and $C_S$, such that
\begin{equation}\label{mudoubling}
\mu(B(y,2R)) \leq C_D \, \mu(B(y,R)) \quad \forall y \in \mathbb{R}^n, R > 0.
\end{equation}
\end{theorem}
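As a quick sanity check (ours, not part of the paper), for Lebesgue measure $d\mu=dx$ — for which the classical Sobolev inequality furnishes an instance of \eqref{weakEuclideanSob} — the conclusion \eqref{mudoubling} holds with the sharp constant $C_D=2^n$, as the closed-form volume of Euclidean balls shows:

```python
import math

def ball_volume(n, R):
    """Lebesgue measure of a Euclidean ball of radius R in R^n."""
    return math.pi ** (n / 2) / math.gamma(n / 2 + 1) * R ** n

# For Lebesgue measure the doubling constant is exactly 2^n,
# independently of the center and the radius.
for n in (1, 2, 3, 5):
    for R in (0.1, 1.0, 7.3):
        ratio = ball_volume(n, 2 * R) / ball_volume(n, R)
        assert abs(ratio - 2 ** n) < 1e-9
```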
\begin{remark} In \cite{FKS82}, Fabes, Kenig and Serapioni identified four conditions on an absolutely continuous measure $d\mu = w \,dx$ on $(\mathbb{R}^n, d_E)$ as essential in proving Harnack's inequality for solutions to certain degenerate elliptic PDEs (whose degeneracy is ruled by $w$) by means of the implementation of Moser's iterative scheme and John-Nirenberg-type inequalities. For $1 \leq p < \infty$, these conditions, which we next list for the reader's convenience, define $w$ as a $p$-\emph{admissible weight} (see \cite[Section 13]{Haj2} and \cite[p.7]{HKM}):
\begin{enumerate}[(I)]
\item the doubling property: there exists $C > 0$ such that $\mu(2B) \leq C \mu(B)$ for every Euclidean ball $B \subset \mathbb{R}^n$;\label{list-doubling}
\item the uniqueness condition for the gradient: if $D \subset \mathbb{R}^n$ is an open set and $\{\varphi_j\}_j \subset C^\infty(D)$ satisfy $\int_D |\varphi_j|^p d\mu \rightarrow 0$ and $\int_D |\nabla \varphi_j - v|^p d\mu \rightarrow 0$ as $j \rightarrow \infty$ for some $v \in L^p(D, \mu)$, then $v \equiv 0$;\label{list-uniq}
\item the $p$-Sobolev inequality: there exist $\sigma > 1$ and $C > 0$ such that for every ball $B:=B(y,R)$ and $\varphi \in C^\infty_c(B)$ it holds \label{list-sobolev}
\begin{equation}\label{p-Sobolev}
\left(\frac{1}{\mu(B)} \int_B |\varphi|^{\sigma p} \, d\mu \right)^{\frac{1}{\sigma p}} \leq C R \left(\frac{1}{\mu(B)} \int_B |\nabla \varphi|^p \, d\mu \right)^{\frac{1}{p}};
\end{equation}
\item the $p$-Poincar\'e inequality: there exists $C > 0$ such that for every ball $B:=B(y,R)$ and $\varphi \in C^\infty(B)$ it holds \label{list-poincare}
\[
\int_B |\varphi - \varphi_B|^p \, d\mu \leq C R^p \int_B |\nabla \varphi|^p \, d\mu,
\]
where $\varphi_B$ stands for the average of $\varphi$ over $B$ with respect to $d\mu$.
\end{enumerate}
The interplay between the conditions above has received considerable attention. Independently, Saloff-Coste \cite{SC1} and Grigor'yan \cite{Gr} proved that \eqref{list-doubling} and \eqref{list-poincare} imply \eqref{list-sobolev} (an implication later systematized by several authors, see \cite[p.79]{Haj2}). Then, Heinonen and Koskela \cite[Theorem 5.2]{HK95} proved that, as had been announced by S. Semmes, conditions \eqref{list-doubling} and \eqref{list-poincare} imply \eqref{list-uniq}.
Our Theorem \ref{thm:Sob=>doubling} then contributes to the further understanding of such interplay by establishing that a weaker version of \eqref{list-sobolev} (as given by \eqref{weakEuclideanSob}) implies \eqref{list-doubling}, thus placing the Sobolev inequality \eqref{p-Sobolev} back into the core of the regularity theory aspects of partial differential equations.
\end{remark}
\subsection{Metric spaces} The theory of Sobolev-type inequalities in metric spaces, including imbedding theorems and several definitions of Sobolev spaces, has seen great developments in the past two decades; see for instance \cite{BB11, Fr1, Fr2, Haj, Haj2, Shan} and references therein. A driving motivation for the study of such Sobolev-type inequalities arises in the study of regularity of solutions to certain classes of degenerate elliptic and parabolic PDEs; see for instance \cite[Chapters 7-14]{BB11}, \cite{BM, DJS13, KS, MM} and references therein. In what follows, we briefly review the theory of Sobolev spaces in metric spaces so as to formulate a corresponding version of Theorem \ref{thm:Sob=>doubling}. The reader is referred to \cite[Chapters 1-5]{BB11}, \cite[Sections 1-5]{BM}, \cite[Sections 1-3]{KS} for further details.
Let $(X,d)$ be a metric space. For $y \in X$ and $R> 0$ the $d$-ball centered at $y$ with radius $R$ is defined as $B_d(y,R):=\{x \in X: d(x,y) < R\}$.
Given a function $u: X \rightarrow [-\infty, \infty]$, a non-negative Borel function $g: X\rightarrow [0,\infty]$ is called an \emph{upper gradient} of $u$ if for all curves (i.e. non-constant rectifiable continuous mappings) $\gamma : [0, l_\gamma] \rightarrow X$ it holds
$$
|u(\gamma(0)) - u(\gamma (l_\gamma))| \leq \int_\gamma g \, ds.
$$
In particular, if for some $L \geq 1$, $u: X \rightarrow \mathbb{R}$ is an \emph{$L$-Lipschitz function}, i.e., $|u(x)-u(y)| \leq L \, d(x,y)$ for every $x,y\in X$, then the function $lip(u)$ defined for $x \in X$ as
\begin{equation}\label{deflipu}
lip(u)(x):= \liminf_{r \to 0^+} \sup\limits_{y \in B_d(x,r)} \frac{|u(x)-u(y)|}{r}
\end{equation}
is an upper-gradient for $u$ (see \cite[Proposition 1.14]{BB11}). Notice also that $lip(u)(x) \leq L$ for every $x \in X$.
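As an illustration (our sketch, not part of the text), the pointwise quantity \eqref{deflipu} can be approximated numerically; \texttt{lip\_approx} below is a hypothetical helper that samples a one-dimensional ball $B_d(x,r)$ at a small fixed $r$ instead of taking the $\liminf$:

```python
import math

def lip_approx(u, x, r=1e-6, samples=200):
    """Numerical proxy for lip(u)(x) = liminf_{r->0} sup_{|y-x|<=r} |u(x)-u(y)|/r."""
    ys = [x - r + 2 * r * k / samples for k in range(samples + 1)]
    return max(abs(u(x) - u(y)) for y in ys) / r

# u = sin is 1-Lipschitz and lip(sin)(x) = |cos x|.
for x in (0.0, 0.7, 2.0):
    val = lip_approx(math.sin, x)
    assert abs(val - abs(math.cos(x))) < 1e-4   # matches |cos x| up to O(r)
    assert val <= 1.0 + 1e-9                    # never exceeds the Lipschitz constant
```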
Let $\mu$ be a Borel measure on $(X,d)$ such that $0 < \mu(B) < \infty$ for every $d$-ball $B \subset X$. For $1 \leq p < \infty$ and $u \in L^p(X, \mu)$ set
$$
\norm{u}{N^{1,p}}^p:= \int_X |u|^p d\mu + \inf\limits_{g} \int_X g^p d\mu,
$$
where the infimum is taken over all the upper-gradients $g$ of $u$. Given a $d$-ball $B \subset X$, its Newtonian space with zero boundary values is defined as
$$
N_0^{1,p}(B):=\{f|_B : \norm{f}{N^{1,p}} < \infty \text{ and } f\equiv 0 \text{ on } X \setminus B\}.
$$
\begin{definition}
Let $(X, d, \mu)$ be as above. Given $1\leq p < \infty$ and $1< \sigma <\infty$, we say that the triple $(X, d, \mu)$ admits a \emph{weak $(p\sigma, p)$-Sobolev inequality} with a (finite) constant $C_S > 0$ if for every $d$-ball $B:=B_d(y,R) \subset X$ and every function $\varphi \in N_0^{1,p}(B)$ it holds true that
\begin{align}\label{weakMetricSob}
\left(\frac{1}{\mu(B)} \int_B |\varphi|^{p \sigma} d\mu \right)^{\frac{1}{p \sigma}} & \leq C_S R \left(\frac{1}{\mu(B)} \int_B g^p d\mu \right)^{\frac{1}{p}}\\\nonumber
& + C_S \left(\frac{1}{\mu(B)} \int_B |\varphi|^p d\mu\right)^{\frac{1}{p}},
\end{align}
for all upper-gradients $g$ of $\varphi$.
\end{definition}
Then we have the following metric-space formulation of Theorem \ref{thm:Sob=>doubling}:
\begin{theorem}\label{thm:Sob=>doublingMetric} Suppose that, for some $1\leq p < \infty$ and $1< \sigma <\infty$, the triple $(X, d, \mu)$ admits a weak $(p\sigma, p)$-Sobolev inequality with a constant $C_S >0$. Then, the measure $\mu$ is doubling on $(X, d)$. More precisely, there exists a constant $C_D \geq 1$, depending only on $p$, $\sigma$, and $C_S$, such that
\begin{equation*}
\mu(B_d(y,2R)) \leq C_D \, \mu(B_d(y,R)) \quad \forall y \in X, R >0.
\end{equation*}
\end{theorem}
Clearly, Theorem \ref{thm:Sob=>doubling} then follows as a corollary of Theorem \ref{thm:Sob=>doublingMetric}.
\section{Proof of Theorem \ref{thm:Sob=>doublingMetric}}\label{sec:proofThm}
Given a $d$-ball $B:=B_d(y,R)$ set $B^*:=B_d(y,2R)$ and define a family of $d$-Lipschitz functions $\{\psi_j\}_{j \in \mathbb{N}} \subset N_0^{1,p}(B) \subset N_0^{1,p}(B^*)$ as follows: for $j \in \mathbb{N}$ set $r_j:= (2^{-j-1} + 2^{-1}) R$ and
\begin{equation}\label{defpsijMetric}
\psi_j(x):= \left(\frac{r_j - d(x,y)}{r_j - r_{j+1}} \right)^+ \wedge 1.
\end{equation}
Also for $j \in \mathbb{N}$, define the $d$-balls $B_j$ as
$$
\frac{1}{2}B \subset B_j:= \{x \in X: d(x,y) \leq r_j\} \subset B \subset B^*=2B.
$$
Our first step will be to apply the weak-Sobolev inequality \eqref{weakMetricSob} to $\psi_j$ on $B^*$ by choosing the upper-gradient $g_j:=lip(\psi_j)$ as defined in \eqref{deflipu}. In particular, it follows that
\begin{equation}\label{condpsij}
g_j(x) \leq \frac{2^{j+2}}{R} \chi_{B_j}(x) \quad \text{and} \quad 0 \leq \psi_j(x) \leq 1 \quad \forall x \in X.
\end{equation}
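Both bounds in \eqref{condpsij} come from the explicit form of $\psi_j$: the ramp in \eqref{defpsijMetric} has slope $1/(r_j-r_{j+1})=2^{j+2}/R$, equals $1$ for $d(x,y)\leq r_{j+1}$, and vanishes for $d(x,y)\geq r_j$. A minimal numerical check (with a hypothetical helper \texttt{psi} taking the scalar distance $d(x,y)$ as input):

```python
def psi(j, dist, R):
    """psi_j as a function of dist = d(x, y), cf. (defpsijMetric)."""
    r = lambda k: (2.0 ** (-k - 1) + 0.5) * R
    t = (r(j) - dist) / (r(j) - r(j + 1))
    return min(max(t, 0.0), 1.0)

R = 1.0
for j in range(1, 6):
    rj, rj1 = (2.0 ** (-j - 1) + 0.5) * R, (2.0 ** (-j - 2) + 0.5) * R
    assert psi(j, rj1, R) == 1.0   # psi_j = 1 on B_{j+1}
    assert psi(j, rj, R) == 0.0    # psi_j vanishes outside B_j
    # slope of the linear ramp: 1/(r_j - r_{j+1}) = 2^{j+2}/R
    assert abs(1.0 / (rj - rj1) - 2.0 ** (j + 2) / R) < 1e-9
```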
Then, by the weak-Sobolev inequality \eqref{weakMetricSob} applied to each $\psi_j$ on $B^*=2B$ (and using the fact that each $\psi_j$ is supported in $B_j$, so that $B^*$ can be replaced by $B_j$ in the integrals), we obtain
\begin{align}\label{Sobappliedtopsij}
\left(\frac{1}{\mu(B^*)} \int_{B_j} |\psi_j|^{p \sigma} d\mu \right)^{\frac{1}{p \sigma}} & \leq 2 C_S R \left(\frac{1}{\mu(B^*)} \int_{B_j} g_j^p d\mu \right)^{\frac{1}{p}}\\\nonumber
& + C_S \left(\frac{1}{\mu(B^*)} \int_{B_j} |\psi_j|^p d\mu\right)^{\frac{1}{p}}.
\end{align}
By the estimates for $g_j$ and $\psi_j$ in \eqref{condpsij}, we obtain
\begin{equation}\label{boundGradPsij}
\left(\frac{1}{\mu(B^*)} \int_{B_j} g_j^p d\mu \right)^{\frac{1}{p}} \leq \frac{2^{j+2}}{R} \left(\frac{\mu(B_j)}{\mu(B^*)} \right)^{\frac{1}{p}}
\end{equation}
and
\begin{equation}\label{boundPsijp}
\left(\frac{1}{\mu(B^*)} \int_{B_j} |\psi_j|^p d\mu\right)^{\frac{1}{p}} \leq \left(\frac{\mu(B_j)}{\mu(B^*)} \right)^{\frac{1}{p}}.
\end{equation}
On the other hand, since $\psi_j \equiv 1$ on $B_{j+1} \subset B_j$, it follows that
\begin{equation}\label{boundEjplus1}
\left(\frac{\mu(B_{j+1})}{\mu(B^*)} \right)^{\frac{1}{p \sigma}} \leq \left(\frac{1}{\mu(B^*)} \int_{B_j} |\psi_j|^{p \sigma} d\mu \right)^{\frac{1}{p \sigma}}.
\end{equation}
Therefore, by combining \eqref{boundEjplus1}, \eqref{boundPsijp}, and \eqref{boundGradPsij} with \eqref{Sobappliedtopsij}, we obtain
\begin{align*}
\left(\frac{\mu(B_{j+1})}{\mu(B^*)} \right)^{\frac{1}{p \sigma}} & \leq 2 C_S 2^{j+2} \left(\frac{\mu(B_j)}{\mu(B^*)} \right)^{\frac{1}{p}} + C_S \left(\frac{\mu(B_j)}{\mu(B^*)} \right)^{\frac{1}{p}} \leq C_S 2^{j+4} \left(\frac{\mu(B_j)}{\mu(B^*)} \right)^{\frac{1}{p}} .
\end{align*}
Raising to the power $p/\sigma^{j-1}$ then yields
\begin{equation}\label{preiterationPj}
\left(\frac{\mu(B_{j+1})}{\mu(B^*)} \right)^{\frac{1}{\sigma^j}} \leq C_S^{\frac{p}{\sigma^{j-1}}} 2^{\frac{p(j+4)}{\sigma^{j-1}}} \left(\frac{\mu(B_j)}{\mu(B^*)} \right)^{\frac{1}{\sigma^{j-1}}}.
\end{equation}
At this point, for $j \in \mathbb{N}$, define $P_j:= \mu(B_j)^{\frac{1}{\sigma^{j-1}}}$, so that \eqref{preiterationPj} can be recast as
\begin{equation}\label{preiterationPj2}
P_{j+1} \leq C_S^{\frac{\sigma p}{\sigma^{j}}} 2^{\frac{\sigma p (j+4)}{\sigma^{j}}} \mu(B^*)^{\frac{1-\sigma}{\sigma^j}} P_j.
\end{equation}
Notice that, from the construction of $B_j$, we have $B_d(y,R/2) \subset B_j \subset B$ for every $j \in \mathbb{N}$, so that $0 < \mu(B_d(y,R/2)) \leq \mu(B_j) \leq \mu(B) < \infty$ for every $j \in \mathbb{N}$. Then
$$
\mu(B_d(y,R/2))^{\frac{1}{\sigma^{j-1}}} \leq P_j \leq \mu(B)^{\frac{1}{\sigma^{j-1}}} \quad \forall j \in \mathbb{N},
$$
which implies $\lim\limits_{j\to\infty}P_j =1$. Now, by iterations of \eqref{preiterationPj2}, we get
\begin{equation}\label{postiterationsPsij}
1=\lim\limits_{j\to\infty}P_j \leq P_1 \prod\limits_{j=1}^\infty [C_S^{\sigma p} 2^{\sigma p (j+4)} \mu(B^*)^{1-\sigma}]^{\frac{1}{\sigma^j}},
\end{equation}
with
$$
\prod\limits_{j=1}^\infty [C_S^{\sigma p}]^{\frac{1}{\sigma^j}} = \exp \left[\sigma p \left( \sum\limits_{j=1}^\infty \frac{1}{\sigma^j} \right) \log C_S \right] = C_S^{\frac{p \sigma}{\sigma -1}},
$$
$$
\prod\limits_{j=1}^\infty [2^{p \sigma (j+4)}]^{\frac{1}{\sigma^j}} = \exp \left[\sigma p \left( \sum\limits_{j=1}^\infty \frac{j+4}{\sigma^j} \right) \log 2 \right] =: K_1(\sigma,p)<\infty,
$$
and
\begin{align*}
\prod\limits_{j=1}^\infty [\mu(B^*)^{1-\sigma}]^{\frac{1}{\sigma^j}} & = \exp \left[ \left( \sum\limits_{j=1}^\infty \frac{(1-\sigma)}{\sigma^j} \right) \log \mu(B^*) \right]\\
& = \exp \left[ \sum\limits_{j=1}^\infty \left( \frac{1}{\sigma^j}- \frac{1}{\sigma^{j-1}} \right) \log \mu(B^*) \right] = \frac{1}{\mu(B^*)}.
\end{align*}
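For sample parameter values, the three closed-form evaluations above can be confirmed against truncated partial products (a numerical sketch; the parameter values are arbitrary):

```python
import math

sigma, p, C_S, muBstar = 2.0, 2.0, 3.0, 5.0
J = 60  # enough terms for the geometric tails to be negligible

prod1 = math.prod((C_S ** (sigma * p)) ** (1 / sigma ** j) for j in range(1, J))
prod2 = math.prod((2.0 ** (sigma * p * (j + 4))) ** (1 / sigma ** j) for j in range(1, J))
prod3 = math.prod((muBstar ** (1 - sigma)) ** (1 / sigma ** j) for j in range(1, J))

# Closed forms used in the proof:
assert abs(prod1 - C_S ** (p * sigma / (sigma - 1))) < 1e-8
# sum_{j>=1} (j+4)/sigma^j = sigma/(sigma-1)^2 + 4/(sigma-1)
K1 = math.exp(sigma * p * (sigma / (sigma - 1) ** 2 + 4 / (sigma - 1)) * math.log(2))
assert abs(prod2 / K1 - 1) < 1e-8
assert abs(prod3 - 1 / muBstar) < 1e-8
```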
Consequently, \eqref{postiterationsPsij} yields
$$
1 \leq C_S^{\frac{p \sigma}{\sigma -1}} K_1(\sigma, p) \frac{P_1}{\mu(B^*)},
$$
which, together with the fact that $P_1 = \mu(B_1) \leq \mu(B)$, implies
$$
\mu(B^*) \leq C_S^{\frac{p \sigma}{\sigma -1}} K_1(\sigma, p) \mu(B)
$$
and \eqref{mudoubling} follows with $C_D:= C_S^{\frac{p \sigma}{\sigma -1}} K_1(\sigma, p)$.
\qed
\section{Further extensions of Theorem \ref{thm:Sob=>doubling}}
From the proof of Theorem \ref{thm:Sob=>doublingMetric}, the central role of the functions $\{\psi_j\}$ becomes quite apparent. In this section we reformulate Theorem \ref{thm:Sob=>doubling} in other contexts where corresponding functions $\{\psi_j\}$ can be constructed.
\subsection{Dirichlet forms} Since the 1990s, deep connections between Sobolev and Poincar\'e inequalities, doubling properties for measures, and elliptic and parabolic Harnack inequalities have been discovered and further developed in the ample context of strongly local, regular Dirichlet forms (including analysis on complete Riemannian manifolds, Alexandrov spaces, self-similar sets, graphs, etc.); see, for instance, \cite{BGK, BM1, BM2, BM3, Gr, Ki, KSW, KST, Ku, Kuw, KMS, SC1, SC2, St1, St2, St3, St4} and references therein. In this subsection we recast Theorem \ref{thm:Sob=>doubling} in the language of Dirichlet forms. A brief review of some basic notions is in order.
Let $(X,\tau)$ be a Hausdorff, locally compact, separable topological space and let $\mu$ be a Radon measure on $(X,\tau)$ such that $\mu(U) >0$ for every nonempty open subset $U \subset X$.
Let $\mathcal{F}$ be a dense subspace of $L^2(X,\mu):=\{u:X \rightarrow \mathbb{R} : \int_X u^2 d\mu < \infty \}$ and let $\mathcal{E} : \mathcal{F} \times \mathcal{F} \rightarrow [0,\infty)$ be a bilinear, non-negative definite (that is, $\mathcal{E}(u,u) \geq 0 \: \forall u \in \mathcal{F}$), and symmetric functional. For every $u \in \mathcal{F}$ set $\mathcal{E}(u):=\mathcal{E}(u,u)$. Assume that $(\mathcal{E}, \mathcal{F})$ is \emph{closed}, that is, $\mathcal{F}$ equipped with the norm $\norm{u}{\mathcal{F}}:=(\norm{u}{L^2(X,\mu)} + \mathcal{E}(u))^{1/2}$ becomes a Hilbert space and that $(\mathcal{E}, \mathcal{F})$ is \emph{Markovian}; that is, for every $u \in \mathcal{F}$, it follows that $u_1:= (0 \vee u ) \wedge 1 \in \mathcal{F}$ and $\mathcal{E}(u_1) \leq \mathcal{E}(u)$. When all the above conditions are met, $(\mathcal{F}, \mathcal{E})$ is called a \emph{Dirichlet form} on $L^2(X,\mu)$. We refer the reader to \cite[Chapters 1-3]{FOT} for further details and properties of Dirichlet forms.
We intend to state a Sobolev-type inequality along the lines of \eqref{weakEuclideanSob} involving a Dirichlet form $(\mathcal{F}, \mathcal{E})$; hence, the next step will be about imposing conditions on $(\mathcal{F}, \mathcal{E})$ as to equip $X$ (which thus far is just a topological, and not necessarily metric, space) with a convenient distance.
Let $C_c(X)$ denote the class of real-valued, continuous functions on $X$ with compact support equipped with the uniform topology. Following the notation in \cite[Section 1.1]{FOT}, a Dirichlet form is \emph{regular} if $\mathcal{F} \cap C_c(X)$ is dense in both $(\mathcal{F}, \norm{\cdot}{\mathcal{F}})$ and $(C_c(X), \norm{\cdot}{L^\infty(X)})$ and it is \emph{strongly local} if $\mathcal{E}(u,v)=0$ for all $u,v \in \mathcal{F}$ with $u \equiv 1$ on a neighborhood of $\text{supp}(v)$. A strongly local regular Dirichlet form $(\mathcal{F}, \mathcal{E})$ admits the integral representation
\begin{equation}
\mathcal{E}(u,v) = \int_X d\Gamma(u,v) \quad \forall u,v \in \mathcal{F},
\end{equation}
where $\Gamma$ (called the \emph{energy measure} of $(\mathcal{F}, \mathcal{E})$) is a bilinear, non-negative definite, symmetric form with values in the signed Radon measures of $X$ (see \cite[Section 3.2]{FOT}). Moreover, the energy measure has a local character, meaning that given $u, v \in \mathcal{F}$ and an open set $\Omega \subset X$, the restriction of $\Gamma(u,v)$ to $\Omega$ depends only on the restrictions of $u$ and $v$ to $\Omega$. We write $u \in \mathcal{F}_{loc}(\Omega)$ if $u \in L^2_{loc}(\Omega,\mu)$ and for every compact subset $K\subset \Omega$ there exists $w \in \mathcal{F}$ such that $u=w$ $\mu$-a.e. on $K$ (see \cite[p.130]{FOT}). Then, the local character of $\Gamma$ allows one to define it unambiguously on $\mathcal{F}_{loc}(\Omega) \times \mathcal{F}_{loc}(\Omega)$. The energy measure $\Gamma$ induces a pseudo-metric $\rho$ (called the \emph{intrinsic metric} on $X$) defined for $x,y \in X$ as
\begin{equation}
\rho(x,y):=\sup \{u(x) - u(y) : u \in \mathcal{F}_{loc}(X) \cap C(X) \text{ and } d\Gamma(u,u)\leq d\mu \text { on } X\},
\end{equation}
where the condition $d\Gamma(u,u) \leq d\mu$ on $X$ means that the measure $\Gamma(u,u)$ is absolutely continuous with respect to $\mu$ and the Radon-Nikodym derivative $d\Gamma(u,u)/d\mu \leq 1$ on $X$. In general $\rho$ could be degenerate in the sense that $\rho(x,y) = \infty$ or $\rho(x,y)=0$ can happen for some $x \ne y$. One way to avoid this degeneracy is to introduce the following:
\noindent \textbf{Assumption (A):} All $\rho$-balls $B_\rho(y,R):=\{x \in X : \rho(x,y) < R\}$ are relatively compact in $(X,\tau)$.\\
\noindent \textbf{Assumption (A'):} The topology induced by $\rho$ is equivalent to $\tau$ (the original topology in $X$).
Then, under Assumption (A'), if $X$ is connected, it follows that $0 < \rho(x,y) < \infty$ for all $x, y \in X$ with $x \ne y$, thus turning $\rho$ into a metric on $X$ (see \cite[Section 4.2]{St1}). Assumptions (A) and (A') will also be used to guarantee \eqref{Gammarhoyr} below.
\begin{definition}
Let $(X, \tau)$ and $\mu$ be as above and assume that $X$ is connected and Assumptions (A) and (A') hold true. Let $(\mathcal{F}, \mathcal{E})$ be a regular, strongly local Dirichlet form on $L^2(X,\mu)$.
Following \cite[p.38]{BM3}, let $\Omega \subset X$ be open, and for $u \in \mathcal{F}_{loc}(\Omega)$ write $\alpha(u,u)=d\Gamma(u,u)/d\mu$, the Radon-Nikodym derivative. Given $1 \leq p < \infty$, the Dirichlet-Sobolev space $D_p[\mathcal{E}, \Omega]$ is defined as
$$
D_p[\mathcal{E}, \Omega]:=\{u \in \mathcal{F}_{loc}(\Omega): \alpha(u,u)\in L^1_{loc}(\Omega, \mu), \ \int_\Omega \alpha(u,u)^{\frac{p}{2}} d\mu + \int_\Omega |u|^p d\mu < \infty \}.
$$
\end{definition}
Given $1\leq p < \infty$ and $1< \sigma <\infty$, we say that the Dirichlet form $(\mathcal{F}, \mathcal{E})$ admits a \emph{weak $(p\sigma, p)$-Sobolev inequality} with a (finite) constant $C_S > 0$ if for every $\rho$-ball $B:=B_\rho(y,R) \subset X$ and every function $\varphi \in D_p[\mathcal{E}, B]$ with $\text{supp}(\varphi) \subset B$ it holds true that
\begin{align}\label{weakDirichletSob}
\left(\frac{1}{\mu(B)} \int_B |\varphi|^{p \sigma} d\mu \right)^{\frac{1}{p \sigma}} & \leq C_S R \left(\frac{1}{\mu(B)} \int_B \alpha(\varphi,\varphi)^{\frac{p}{2}} d\mu \right)^{\frac{1}{p}}\\\nonumber
& + C_S \left(\frac{1}{\mu(B)} \int_B |\varphi|^p d\mu\right)^{\frac{1}{p}}.
\end{align}
Then we have the following Dirichlet-form version of Theorem \ref{thm:Sob=>doubling}:
\begin{theorem}\label{thm:Sob=>doublingDirichlet} Let $(X, \tau)$ and $\mu$ be as above and assume that $X$ is connected and that Assumptions (A) and (A') hold true. Let $(\mathcal{F}, \mathcal{E})$ be a regular, strongly local Dirichlet form on $L^2(X,\mu)$. Suppose that, for some $1\leq p < \infty$ and $1< \sigma <\infty$, the Dirichlet form $(\mathcal{F}, \mathcal{E})$ admits a weak $(p\sigma, p)$-Sobolev inequality with a constant $C_S >0$. Then, the measure $\mu$ is doubling on $(X, \rho)$. More precisely, there exists a constant $C_D \geq 1$, depending only on $p$, $\sigma$, and $C_S$, such that
\begin{equation*}
\mu(B_\rho(y,2R)) \leq C_D \, \mu(B_\rho(y,R)) \quad \forall y \in X, R > 0.
\end{equation*}
\end{theorem}
\begin{proof} Given a $\rho$-ball $B:=B_\rho(y,R)$, set $B^*:=B_\rho(y,2R)$. By \cite[Lemma 1' on p.191]{St1}, assumptions (A) and (A') imply that for every $y \in X$ and every $r > 0$, the function $\rho_{y,r}: x \mapsto (r- \rho(x,y))^+$ satisfies $\rho_{y,r} \in \mathcal{F} \cap C_c(X)$ and
\begin{equation}\label{Gammarhoyr}
d\Gamma(\rho_{y,r}, \rho_{y,r}) \leq d\mu.
\end{equation}
Then, for each $j \in \mathbb{N}$ set $r_j:= (2^{-j-1} + 2^{-1}) R$ and, just as in the case for metric spaces in \eqref{defpsijMetric}, define
\[
\psi_j(x):= \left(\frac{r_j - \rho(x,y)}{r_j - r_{j+1}} \right)^+ \wedge 1.
\]
For each $j \in \mathbb{N}$ define $s_j:=1/ (r_j - r_{j+1}) = 2^{j+2}/R$ and the set $D_j:= \{x \in X: s_j (r_j - \rho(x,y)) > 1\}$ so that, by the definition of $\psi_j$, the so-called truncation property (see \cite[p.190]{St1}), the fact that $d\Gamma(1,1) =0$ (due to the locality of $\Gamma$, see \cite[p.189]{St1}), and the estimate \eqref{Gammarhoyr}, we have
\begin{align*}
d\Gamma(\psi_j, \psi_j) & = d\Gamma(s_j \rho_{y, r_j} \wedge 1,s_j \rho_{y, r_j} \wedge 1) = 1_{D_j} d\Gamma(1,1) + 1_{X \setminus D_j} d\Gamma(s_j \rho_{y, r_j}, s_j \rho_{y, r_j}) \\
& = 1_{X \setminus D_j} d\Gamma(s_j \rho_{y, r_j}, s_j \rho_{y, r_j}) = 1_{X \setminus D_j} s_j^2 d\Gamma(\rho_{y, r_j}, \rho_{y, r_j}) \leq s_j^2 d\mu.
\end{align*}
Hence,
\begin{equation}\label{gradientboundDirichlet}
d\Gamma(\psi_j, \psi_j) \leq \left(\frac{2^{j+2}}{R}\right)^2 d\mu\quad \forall j \in \mathbb{N}.
\end{equation}
As usual, define $B_j$ by
$$
\frac{1}{2}B \subset B_j:= \{x \in X: \rho(x,y) \leq r_j\} \subset B \subset B^*=2B,
$$
and the proof of Theorem \ref{thm:Sob=>doublingDirichlet} follows along the same lines as the one for Theorem \ref{thm:Sob=>doublingMetric}. Notice that, alternatively, one could use the construction from \cite[Section 3]{BM2} to produce $\{\psi_j\}_{j \in \mathbb{N}} \subset \mathcal{F}_{loc}(X) \cap C(X)$ with $\psi_j \equiv 1$ on $B_{j+1}$, $\psi_j \equiv 0$ on $X\setminus B_j$, $0 \leq \psi_j \leq 1$, and
$$
d\Gamma(\psi_j, \psi_j) \leq 10\left(\frac{2^{j+2}}{R}\right)^2 d\mu \quad \forall j \in \mathbb{N}.
$$
\end{proof}
\subsection{A subelliptic version} In $(\mathbb{R}^n,d_E)$ as well as in the metric-space and Dirichlet-form contexts above, the various sequences $\{\psi_j\}_{j \in \mathbb{N}}$ had ``bounded gradients'' in the sense of \eqref{condpsij} and \eqref{gradientboundDirichlet}. These uniform bounds were used to obtain the corresponding inequalities of the type \eqref{boundGradPsij}. Next we will show how in certain subelliptic contexts the uniform bounds on the gradients can be weakened to suitable integral bounds. This will allow us to relate our Theorem \ref{thm:Sob=>doubling} to the notion of accumulating sequence of Lipschitz cut-off functions (see Remark \ref{ASOLF}).
We largely follow the terminology from \cite[Section 1]{SW1}. Consider an open subset $\Omega\subset\mathbb{R}^n$ (in the Euclidean topology) and let
$$
Q : \Omega \rightarrow \{\text{non-negative definite } n \times n \text{ matrices}\}
$$
be a locally bounded function on $\Omega$. For a Lipschitz function $u : \Omega \rightarrow \mathbb{R}$ (throughout this subsection, Lipschitz means Lipschitz with respect to the Euclidean distance), define its $Q$-gradient Lebesgue-a.e. in $\Omega$ as
$$
[\nabla u]_Q:= ( \nabla u^T Q \, \nabla u)^{\frac{1}{2}}.
$$
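As a concrete illustration (our example; the Grushin-type matrix below is not taken from the paper), the $Q$-gradient carries no energy in the directions where $Q$ degenerates:

```python
import math

def Q_gradient(grad_u, Q):
    """[grad u]_Q = (grad_u^T Q grad_u)^{1/2} for a gradient vector and an n x n matrix Q."""
    n = len(grad_u)
    quad = sum(grad_u[i] * Q[i][j] * grad_u[j] for i in range(n) for j in range(n))
    return math.sqrt(quad)

# Grushin-type degeneracy on R^2: Q(x) = diag(1, x1^2), u(x) = x1 + x2.
def Q_at(x):
    return [[1.0, 0.0], [0.0, x[0] ** 2]]

grad_u = [1.0, 1.0]                      # gradient of u(x) = x1 + x2
val = Q_gradient(grad_u, Q_at([0.5, 2.0]))
assert abs(val - math.sqrt(1.25)) < 1e-12
# On the degeneracy line {x1 = 0} the second direction contributes nothing:
assert Q_gradient(grad_u, Q_at([0.0, 2.0])) == 1.0
```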
\begin{definition}
Let $d : \Omega \times \Omega \rightarrow [0, \infty)$ be a metric on $\Omega$ that generates a topology equivalent to the Euclidean topology in $\Omega$.
Given $s >1$ we say that the structure $(\Omega, d, Q)$ admits \emph{accumulating sequences of Lipschitz cut-off functions} with exponent $s$ (see \cite[p.9]{SW1}) if there exist constants $0 < \nu < 1$, $K > 0$, and $N > 1$ such that for every $d$-ball $B:=B_d(y,R)$, with $R < \mathrm{dist}(y, \partial \Omega) /6$, there exists a sequence $\{\psi_j\}_{j \in \mathbb{N}}$ of functions defined on $\Omega$ such that
\begin{align}\label{suppPsi1inB}
\mathrm{supp}(\psi_1) &\subset B,\\ \label{BnuRinPsij=1}
B_d(y,\nu R) &\subset \{x:\psi_j(x)=1\} \: \forall j \in \mathbb{N},\\ \label{suppPsij1}
\mathrm{supp} (\psi_{j+1}) &\subset \{x:\psi_j(x)=1\}\: \forall j \in \mathbb{N},\\ \label{psijLips01}
\psi_j \text{ is Lipschitz and } 0 & \le \psi_j\le 1\: \forall j \in \mathbb{N},\\
\left(\frac{1}{|B|}\int_{B}[\nabla\psi_j]_Q^s \, dx \right)^{\frac{1}{s}} &\leq \frac{K N^j}{R}\: \forall j \in \mathbb{N},\label{s-averageGradQ}
\end{align}
where $|B|$ stands for the Lebesgue measure of $B$.
\end{definition}
For $1 \leq p < \infty$, let $\mathcal{W}^{1,p}_Q(\Omega, dx)$ denote the closure of the Lipschitz functions on $\Omega$ under the norm
$$
\norm{u}{\mathcal{W}^{1,p}_Q(\Omega, dx)}:= \norm{u}{L^p(\Omega, dx)} + \norm{[\nabla u]_Q}{L^p(\Omega,dx)}.
$$
Now for $1 \leq p < \infty$ and $1 < \sigma < \infty$ we say that $(\Omega, d, Q)$ admits a \emph{weak $(p\sigma, p)$-Sobolev inequality} with a (finite) constant $C_S > 0$ if for every $d$-ball $B:=B_d(y,R) \subset \Omega$, with $0 < R < \text{dist}(y, \partial \Omega)/6$, and every function $\varphi \in \mathcal{W}^{1,p}_Q(\Omega, dx)$ with $\text{supp}(\varphi) \subset B$ it holds true that
\begin{align}\label{weakSubellipticSob}
\left(\frac{1}{|B|} \int_B |\varphi|^{p \sigma} \, dx \right)^{\frac{1}{p \sigma}} & \leq C_S R \left(\frac{1}{|B|} \int_B [\nabla \varphi]_Q^p \, dx \right)^{\frac{1}{p}}\\\nonumber
& + C_S \left(\frac{1}{|B|} \int_B |\varphi|^p \, dx \right)^{\frac{1}{p}}.
\end{align}
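As a point of reference (the classical elliptic case), when $Q \equiv \mathrm{Id}$ and $d$ is the Euclidean distance on $\Omega \subset \mathbb{R}^n$, the Gagliardo--Nirenberg--Sobolev inequality
$$
\norm{\varphi}{L^{p^*}(\mathbb{R}^n)} \leq C(n,p) \, \norm{\nabla \varphi}{L^p(\mathbb{R}^n)}, \qquad p^* = \frac{np}{n-p}, \quad 1 \leq p < n,
$$
gives \eqref{weakSubellipticSob} with $\sigma = n/(n-p)$: normalizing both sides by $|B|$ produces the factor $|B|^{\frac{1}{p}-\frac{1}{p^*}} = |B|^{\frac{1}{n}} \approx c_n R$, and in this case the zeroth-order term on the right-hand side is not even needed.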
As usual, given $r > 1$, denote by $r'$ its dual H\"older exponent, defined by $r'+r=rr'$. Then we have the following formulation of Theorem \ref{thm:Sob=>doubling}:
\begin{theorem}\label{thm:Sob=>doublingSubelliptic} Suppose that, for some $1\leq p < \infty$, $1< \sigma <\infty$, and $s > p \sigma'$, the structure $(\Omega, d, Q)$ admits accumulating sequences of Lipschitz cut-off functions with exponent $s$ as well as a weak $(p\sigma, p)$-Sobolev inequality with a constant $C_S >0$. Then, Lebesgue measure $dx$ is doubling on $(\Omega, d)$. More precisely, there exists a constant $C_D \geq 1$, depending only on $p$, $\sigma$, $s$, $K$, $N$, and $C_S$, such that
\begin{equation*}
|B_d(y,2R)| \leq C_D \, |B_d(y,R)| \quad \forall y \in \Omega, \forall \: 0 < R < \text{dist}(y, \partial \Omega)/6.
\end{equation*}
\end{theorem}
\begin{proof} Given a $d$-ball $B:=B_d(y,R)$, with $0 < R < \text{dist}(y, \partial \Omega)/6$, just as in the proof of Theorem \ref{thm:Sob=>doublingMetric}, apply the weak-Sobolev inequality \eqref{weakSubellipticSob} to the accumulating sequence of Lipschitz cut-off functions $\{\psi_j\}$ on the $d$-ball $B^*:=B_d(y,2R)$ and, for $j \in \mathbb{N}$, set $B_j:=\mathrm{supp}(\psi_j)$, to obtain
\begin{align*}
\left(\frac{1}{|B^*|} \int_{B_j} |\psi_j|^{p \sigma} \, dx \right)^{\frac{1}{p \sigma}} & \leq 2 C_S R \left(\frac{1}{|B^*|} \int_{B_j} [\nabla \psi_j]_Q^p \, dx \right)^{\frac{1}{p}}\\\nonumber
& + C_S \left(\frac{1}{|B^*|} \int_{B_j} |\psi_j|^p \, dx\right)^{\frac{1}{p}}.
\end{align*}
The main step in the proof is to find a substitute for \eqref{boundGradPsij}. Set $q:=s/p > 1$ so that $q':=s/(s-p) >1$. By applying H\"older's inequality with $q$ and $q'$ and using \eqref{s-averageGradQ}, we get
\begin{align*}
\left(\frac{1}{|B^*|} \int_{B_j} [\nabla \psi_j]_Q^p dx \right)^{\frac{1}{p}} &\leq \left(\frac{|B_j|}{|B^*|} \right)^{\frac{1}{p} - \frac{1}{s}} \left(\frac{1}{|B^*|} \int_{B_j} [\nabla \psi_j]_Q^s dx \right)^{\frac{1}{s}} \\
& \leq \frac{KN^j}{2R} \left(\frac{|B_j|}{|B^*|} \right)^{\frac{1}{p} - \frac{1}{s}}.
\end{align*}
By properties \eqref{suppPsij1} and \eqref{psijLips01}, and from the inequalities above, it follows that
\begin{align*}
\left(\frac{|B_{j+1}|}{|B^*|} \right)^{\frac{1}{p \sigma}}& \leq \left(\frac{1}{|B^*|} \int_{B_j} |\psi_j|^{p \sigma} dx \right)^{\frac{1}{p \sigma}} \leq C_S KN^j \left(\frac{|B_j|}{|B^*|} \right)^{\frac{1}{p} - \frac{1}{s}} + C_S \left(\frac{|B_j|}{|B^*|} \right)^{\frac{1}{p}}\\
& \leq C_S (KN^j + 1) \left(\frac{|B_j|}{|B^*|} \right)^{\frac{1}{p} - \frac{1}{s}} \leq C_S (K+1)N^{j} \left(\frac{|B_j|}{|B^*|} \right)^{\frac{1}{p} - \frac{1}{s}},
\end{align*}
where the second-to-last step uses $|B_j| \leq |B^*|$ and $\frac{1}{p} \geq \frac{1}{p}-\frac{1}{s}$.
Now, set $\beta:= \sigma/q' = \sigma (1-p/s)$ and notice that the hypothesis $s > p \sigma'$ means $\beta > 1$: indeed, $\sigma(1-p/s)>1$ if and only if $p/s < (\sigma-1)/\sigma$, that is, $s > p\sigma/(\sigma-1) = p\sigma'$. Raising the inequality above to the power $ps/(s-p) >0$ yields
\begin{equation}\label{Ej1beta}
\left(\frac{|B_{j+1}|}{|B^*|} \right)^{\frac{1}{\beta}} \leq \left(C_S(K+1)\right)^{\frac{p \sigma}{\beta}} N^\frac{j p \sigma}{\beta} \left(\frac{|B_j|}{|B^*|} \right).
\end{equation}
Set $P_j:= |B_j|^{\frac{1}{\beta^{j-1}}}$ so that raising \eqref{Ej1beta} to the power $1/\beta^{j-1}$ implies
$$
P_{j+1} \leq \left(C_S(K+1)\right)^{\frac{p \sigma}{\beta^j}} N^\frac{j p \sigma}{\beta^j} |B^*|^{\frac{1-\beta}{\beta^j}} P_j.
$$
From conditions \eqref{suppPsi1inB} and \eqref{BnuRinPsij=1} we get $\lim\limits_{j \to \infty} P_j =1$ and $P_1 \leq |B|$, and the proof can now be completed just as in Section \ref{sec:proofThm}. \end{proof}
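For the reader's convenience, here is a sketch of how the iteration closes (the scheme is that of Section \ref{sec:proofThm}; we write $A$ for the $j$-independent constant prefactor appearing in the recursion for $P_{j+1}$). Iterating from $j=1$ to $j=m$ gives
$$
P_{m+1} \leq A^{\sum_{j=1}^{m} \beta^{-j}} \, N^{p\sigma \sum_{j=1}^{m} j \beta^{-j}} \, |B^*|^{(1-\beta)\sum_{j=1}^{m} \beta^{-j}} \, P_1.
$$
Since $\beta > 1$, both series converge: $\sum_{j \geq 1} \beta^{-j} = (\beta-1)^{-1}$ and $\sum_{j \geq 1} j \beta^{-j} = \beta(\beta-1)^{-2}$. Letting $m \to \infty$ and using $\lim_{j} P_j = 1$ and $P_1 \leq |B|$ yields
$$
1 \leq A^{\frac{1}{\beta-1}} \, N^{\frac{p\sigma\beta}{(\beta-1)^2}} \, |B^*|^{-1} \, |B|,
$$
since $(1-\beta)/(\beta-1) = -1$; that is, $|B_d(y,2R)| = |B^*| \leq C_D \, |B_d(y,R)|$ with $C_D := A^{\frac{1}{\beta-1}} N^{\frac{p\sigma\beta}{(\beta-1)^2}}$.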
\begin{remark}\label{ASOLF} Notice that the family $\{\psi_j\}_{j \in \mathbb{N}}$ in \eqref{defpsijMetric} (with respect to the Euclidean distance) will satisfy \eqref{suppPsi1inB} through \eqref{psijLips01} (with $\nu=1/2$) as well as \eqref{s-averageGradQ} with $\norm{Q}{L^\infty(B)}$ instead of $K$. The fact that the bound $K$ must be uniform in $B$ leads to the study of the interaction between the Euclidean distance and the distance $d$ whose balls define the inequality \eqref{s-averageGradQ} and the Sobolev inequality \eqref{weakSubellipticSob}. Sufficient conditions on the interaction between Euclidean balls and $d$-balls for the structure $(\Omega, d, Q)$ to admit accumulating sequences of Lipschitz cut-off functions have been found in \cite[Proposition 68]{SW1} and \cite[Lemma 8]{KR}. Hypotheses on the validity of Sobolev inequalities, the existence of accumulating sequences of Lipschitz cut-off functions, and the doubling property of Lebesgue measure on $d$-balls are typical in the related literature (see, for instance, \cite{MRW, SW1, SW2}). Theorem \ref{thm:Sob=>doublingSubelliptic} now renders the doubling hypothesis redundant.
\end{remark}
\end{document}
\begin{document}
\input 2017planar-has-defs.tex
\input 2019J-ops-defs.tex
\input\jobname.dnt
\def\pu{}
\def\directlua#1{}
\pu
\def\overline{\overline}
\title{Planar Heyting Algebras for Children 2: Local Operators, J-Operators, and Slashings}
\author{Eduardo Ochs}
\maketitle
\begin{abstract}
Choose a topos $\calE$. There are several different ``notions of
sheafness'' on $\calE$. How do we visualize them?
Let's refer to the classifier object of $\calE$ as $Ω$, and to its
Heyting Algebra of truth-values, $\Sub(1_\calE)$, as $H$; we will
sometimes call $H$ the ``logic'' of the topos. There is a well-known
way of representing notions of sheafness as morphisms $j:Ω→Ω$, but
these `$j$'s yield big diagrams when we draw them explicitly; here we
will see a way to represent these `$j$'s as maps $J:H→H$ in a way that
is much more manageable.
In the previous paper of this series --- called \cite{OchsPH1} from
here on --- we showed how certain toy models of Heyting Algebras,
called ``ZHAs'', can be used to develop visual intuition for how
Heyting Algebras and Intuitionistic Propositional Logic work; here we
will extend that to sheaves. The full idea is this: {\sl notions of
sheafness} correspond to {\sl local operators} and vice-versa; {\sl
local operators} correspond to {\sl J-operators} and vice-versa; if
our Heyting Algebra $H$ is a ZHA then {\sl J-operators} correspond to
{\sl slashings} on $H$, and vice-versa; {\sl slashings} on $H$
correspond to {\sl ``sets of question marks''} and vice-versa, and
each set of question marks induces a notion of {\sl erasing and
reconstructing}, which induces a sheaf. Also, every ZHA $H$
corresponds to an {(acyclic) 2-column graph}, and vice-versa, and for
any 2-column graph $(P,A)$ the logic of the topos $\Set^{(P,A)}$ is
exactly the ZHA $H$ associated to $(P,A)$.
The introduction of \cite{OchsPH1} discusses two different senses in
which a mathematical text can be ``for children''. The first sense
involves some precise metamathematical tools for transferring knowledge
back and forth between a general case ``for adults'' and a toy model
``for children''; the second sense is simply that the text's
presentation has few prerequisites and never becomes too abstract.
Here we will use the second sense: everything here, except for the
last section, should be accessible to students who have taken a course
on Discrete Mathematics and read \cite{OchsPH1}. This means that
categories, toposes, sheaves and the maps $j:Ω→Ω$ only appear in the
last section, and before that we deal only with the J-operators
$J:H→H$, how they correspond to slashings and sets of question marks,
and how they form an algebra.
\end{abstract}
\input 2019J-ops-slashings.tex
\input 2019J-ops-logic.tex
\input 2019J-ops-midway.tex
\input 2019J-ops-cubes.tex
\input 2019J-ops-valuations.tex
\input 2019J-ops-algebra.tex
\input 2019J-ops-categories.tex
\input 2019J-ops-classifier.tex
\input 2019J-ops-kan.tex
\pu
\printbibliography
\end{document}